Fractionating column
https://en.wikipedia.org/wiki/Fractionating%20column

A fractionating column or fractional column is equipment used in the distillation of liquid mixtures to separate the mixture into its component parts, or fractions, based on their differences in volatility. Fractionating columns are used in small-scale laboratory distillations as well as large-scale industrial distillations.
Laboratory fractionating columns
A laboratory fractionating column is a piece of glassware used to separate vaporized mixtures of liquid compounds with close volatility. Most commonly used is either a Vigreux column or a straight column packed with glass beads or metal pieces such as Raschig rings. Fractionating columns help to separate the mixture by allowing the mixed vapors to cool, condense, and vaporize again in accordance with Raoult's law. With each condensation-vaporization cycle, the vapors are enriched in a certain component. A larger surface area allows more cycles, improving separation. This is the rationale for a Vigreux column or a packed fractionating column. Spinning band distillation achieves the same outcome by using a rotating band within the column to force the rising vapors and descending condensate into close contact, achieving equilibrium more quickly.
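As an illustration of the enrichment described above, the following minimal Python sketch applies Raoult's law to an ideal binary mixture; the saturation pressures are made-up example values, not data for any real pair of liquids.

    # Raoult's law sketch for an ideal binary mixture (illustrative values only).
    def vapor_mole_fraction(x_a: float, p_sat_a: float, p_sat_b: float) -> float:
        """Mole fraction of component A in the vapor above an ideal liquid mixture."""
        partial_a = x_a * p_sat_a            # Raoult's law: p_i = x_i * P_sat_i
        partial_b = (1.0 - x_a) * p_sat_b
        return partial_a / (partial_a + partial_b)

    # Each condensation-vaporization cycle enriches the vapor in the more
    # volatile component (here A, the one with the higher saturation pressure).
    x = 0.50
    for cycle in range(3):
        x = vapor_mole_fraction(x, p_sat_a=100.0, p_sat_b=40.0)
        print(f"cycle {cycle + 1}: vapor fraction of A = {x:.3f}")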
In a typical fractional distillation, a liquid mixture is heated in the distilling flask, and the resulting vapor rises up the fractionating column (see Figure 1). The vapor condenses on glass spurs (known as theoretical trays or theoretical plates) inside the column, and returns to the distilling flask, refluxing the rising distillate vapor. The hottest tray is at the bottom of the column and the coolest tray is at the top. At steady-state conditions, the vapor and liquid on each tray reach an equilibrium. Only the most volatile of the vapors stays in gas form all the way to the top, where it may then proceed through a condenser, which cools the vapor until it condenses into a liquid distillate. The separation may be enhanced by the addition of more trays (to a practical limitation of heat, flow, etc.).
Industrial fractionating columns
Fractional distillation is one of the unit operations of chemical engineering. Fractionating columns are widely used in the chemical process industries, where large quantities of liquids have to be distilled. Such industries include petroleum processing, petrochemical production, natural gas processing, coal tar processing, brewing, liquefied air separation, and hydrocarbon solvents production. Fractional distillation finds its widest application in petroleum refineries. In such refineries, the crude oil feedstock is a complex, multicomponent mixture that must be separated. Yields of pure chemical compounds are generally not expected; rather, yields of groups of compounds within a relatively small range of boiling points, also called fractions, are expected. This process is the origin of the name fractional distillation or fractionation.
Distillation is one of the most common and energy-intensive separation processes. Effectiveness of separation is dependent upon the height and diameter of the column, the ratio of the column's height to diameter, and the material that comprises the distillation column itself. In a typical chemical plant, it accounts for about 40% of the total energy consumption. Industrial distillation is typically performed in large, vertical cylindrical columns (as shown in Figure 2) known as "distillation towers" or "distillation columns" with diameters ranging from about 65 centimeters to 6 meters and heights ranging from about 6 meters to 60 meters or more.
Industrial distillation towers are usually operated at a continuous steady state. Unless disturbed by changes in feed, heat, ambient temperature, or condensing, the amount of feed being added normally equals the amount of product being removed.
The amount of heat entering the column from the reboiler and with the feed must equal the amount of heat removed by the overhead condenser and with the products. The heat entering a distillation column is a crucial operating parameter; adding excess or insufficient heat to the column can lead to foaming, weeping, entrainment, or flooding.
Figure 3 depicts an industrial fractionating column separating a feed stream into one distillate fraction and one bottoms fraction. However, many industrial fractionating columns have outlets at intervals up the column so that multiple products having different boiling ranges may be withdrawn from a column distilling a multi-component feed stream. The "lightest" products with the lowest boiling points exit from the top of the columns and the "heaviest" products with the highest boiling points exit from the bottom.
Industrial fractionating columns use external reflux to achieve better separation of products. Reflux refers to the portion of the condensed overhead liquid product that returns to the upper part of the fractionating column as shown in Figure 3.
Inside the column, the downflowing reflux liquid provides cooling and condensation of upflowing vapors thereby increasing the efficacy of the distillation tower. The more reflux and/or more trays provided, the better is the tower's separation of lower boiling materials from higher boiling materials.
The design and operation of a fractionating column depend on the composition of the feed as well as the composition of the desired products. Given a simple, binary-component feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used. For a multi-component feed, simulation models are used for design, operation, and construction.
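As a sketch of how such an analytical shortcut is applied, the following Python snippet evaluates the Fenske equation for the minimum number of equilibrium stages at total reflux. The compositions and relative volatility are assumed example figures, and conventions differ on whether the reboiler is counted as one of the stages.

    import math

    def fenske_min_stages(x_dist: float, x_bot: float, alpha: float) -> float:
        """Minimum equilibrium stages at total reflux (Fenske equation).
        x_dist, x_bot: light-key mole fractions in distillate and bottoms;
        alpha: average relative volatility of the two key components."""
        ratio = (x_dist / (1 - x_dist)) * ((1 - x_bot) / x_bot)
        return math.log(ratio) / math.log(alpha)

    # Assumed example: 95% light key overhead, 5% in the bottoms, alpha = 2.5.
    print(f"{fenske_min_stages(0.95, 0.05, 2.5):.1f} theoretical stages minimum")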
Bubble-cap "trays" or "plates" are one of the types of physical devices, which are used to provide good contact between the upflowing vapor and the downflowing liquid inside an industrial fractionating column. Such trays are shown in Figures 4 and 5.
The efficiency of a tray or plate is typically lower than that of a theoretical 100% efficient equilibrium stage. Hence, a fractionating column almost always needs more actual, physical plates than the required number of theoretical vapor–liquid equilibrium stages.
In industrial uses, sometimes a packing material is used in the column instead of trays, especially when low pressure drops across the column are required, as when operating under vacuum. This packing material can either be random dumped packing such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing, and the vapors pass across this wetted surface, where mass transfer takes place. Differently shaped packings have different surface areas and void space between packings. Both of these factors affect packing performance.
Automotive engineering
https://en.wikipedia.org/wiki/Automotive%20engineering

Automotive engineering, along with aerospace engineering and naval architecture, is a branch of vehicle engineering, incorporating elements of mechanical, electrical, electronic, software, and safety engineering as applied to the design, manufacture and operation of motorcycles, automobiles, and trucks and their respective engineering subsystems. It also includes modification of vehicles. The manufacturing domain, which deals with the creation and assembly of the parts of automobiles, is also part of the field. The automotive engineering field is research-intensive and involves direct application of mathematical models and formulas. The study of automotive engineering covers the design, development, fabrication, and testing of vehicles or vehicle components from the concept stage to the production stage. Production, development, and manufacturing are the three major functions in this field.
Disciplines
Automobile engineering
Automobile engineering is a branch of engineering concerned with the manufacturing, design, mechanical systems, and operation of automobiles. It is an introduction to vehicle engineering, dealing with motorcycles, cars, buses, trucks, etc., and includes elements of mechanical, electronic, software, and safety engineering.
Some of the engineering attributes and disciplines that are of importance to the automotive engineer include:
Safety engineering: Safety engineering is the assessment of various crash scenarios and their impact on the vehicle occupants. These are tested against very stringent governmental regulations. Some of these requirements include: seat belt and air bag functionality testing, front and side-impact testing, and tests of rollover resistance. Assessments are done with various methods and tools, including computer crash simulation (typically finite element analysis), crash-test dummy, and partial system sled and full vehicle crashes.
Fuel economy/emissions: Fuel economy is the measured fuel efficiency of the vehicle in miles per gallon or kilometers per liter (see the conversion sketch after this list). Emissions testing covers the measurement of vehicle emissions, including hydrocarbons, nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), and evaporative emissions.
NVH engineering (noise, vibration, and harshness): NVH involves customer feedback (both tactile [felt] and audible [heard]) concerning a vehicle. While sound can be interpreted as a rattle, squeal, or hoot, a tactile response can be seat vibration or a buzz in the steering wheel. This feedback is generated by components either rubbing, vibrating, or rotating. NVH response can be classified in various ways: powertrain NVH, road noise, wind noise, component noise, and squeak and rattle. Note that there are both good and bad NVH qualities. The NVH engineer works to either eliminate bad NVH or change the "bad NVH" to good (e.g., exhaust tones).
Vehicle electronics: Automotive electronics is an increasingly important aspect of automotive engineering. Modern vehicles employ dozens of electronic systems. These systems are responsible for operational controls such as the throttle, brake and steering controls; as well as many comfort-and-convenience systems such as the HVAC, infotainment, and lighting systems. It would not be possible for automobiles to meet modern safety and fuel-economy requirements without electronic controls.
Performance: Performance is a measurable and testable value of a vehicle's ability to perform in various conditions. Performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate (e.g. standing-start 1/4 mile elapsed time, 0–60 mph, etc.), its top speed, how quickly and in how short a distance a car can come to a complete stop from a set speed (e.g. 70–0 mph), how much g-force a car can generate without losing grip, recorded lap times, cornering speed, brake fade, etc. (see the arithmetic sketch after this list). Performance can also reflect the amount of control in inclement weather (snow, ice, rain).
Shift quality: Shift quality is the driver's perception of the vehicle's response to an automatic transmission shift event. This is influenced by the powertrain (internal combustion engine, transmission) and the vehicle (driveline, suspension, engine and powertrain mounts, etc.). Shift feel is both a tactile (felt) and audible (heard) response of the vehicle. Shift quality is experienced as various events: transmission shifts are felt as an upshift at acceleration (1–2), or a downshift maneuver in passing (4–2). Shift engagements of the vehicle are also evaluated, as in Park to Reverse, etc.
Durability / corrosion engineering: Durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. Tests include mileage accumulation, severe driving conditions, and corrosive salt baths.
Drivability: Drivability is the vehicle's response to general driving conditions. Cold starts and stalls, RPM dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle.
Cost: The cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up-front tooling and fixed costs associated with developing the vehicle. There are also costs associated with warranty reductions and marketing.
Program timing: To some extent programs are timed with respect to the market, and also to the production-schedules of assembly plants. Any new part in the design must support the development and manufacturing schedule of the model.
Design for manufacturability (DFM): DFM refers to designing vehicular components in such a way that they are not only feasible to manufacture, but also such that they are cost-efficient to produce while resulting in acceptable quality that meets design specifications and engineering tolerances. This requires coördination between the design engineers and the assembly/manufacturing teams.
Quality management: Quality control is an important factor within the production process, as high quality is needed to meet customer requirements and to avoid expensive recall campaigns. The complexity of components involved in the production process requires a combination of different tools and techniques for quality control. Therefore, the International Automotive Task Force (IATF), a group of the world's leading manufacturers and trade organizations, developed the standard ISO/TS 16949. This standard defines the design, development, production, and (when relevant) installation and service requirements. Furthermore, it combines the principles of ISO 9001 with aspects of various regional and national automotive standards such as AVSQ (Italy), EAQF (France), VDA6 (Germany) and QS-9000 (USA). In order to further minimize risks related to product failures and liability claims for automotive electric and electronic systems, the quality discipline of functional safety according to ISO 26262 is applied.
Since the 1950s, the comprehensive business approach total quality management (TQM) has operated to continuously improve the production process of automotive products and components. Some of the companies who have implemented TQM include Ford Motor Company, Motorola and Toyota Motor Company.
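The conversion and arithmetic sketches referenced in the list above follow here as minimal Python illustrations; every input figure is an invented example rather than test data.

    LITERS_PER_US_GALLON = 3.785411784
    KM_PER_MILE = 1.609344
    G = 9.80665  # standard gravity, m/s^2

    def mpg_to_l_per_100km(mpg: float) -> float:
        """Convert miles per US gallon to liters per 100 km."""
        km_per_liter = mpg * KM_PER_MILE / LITERS_PER_US_GALLON
        return 100.0 / km_per_liter

    def avg_accel_0_60(seconds: float) -> float:
        """Average acceleration of a 0-60 mph run, expressed in g."""
        v = 60 * KM_PER_MILE * 1000 / 3600   # 60 mph in meters per second
        return (v / seconds) / G

    def lateral_g(speed_ms: float, radius_m: float) -> float:
        """Steady-state cornering acceleration v^2 / r, expressed in g."""
        return speed_ms ** 2 / radius_m / G

    print(f"30 mpg = {mpg_to_l_per_100km(30):.1f} L/100 km")
    print(f"0-60 mph in 6.0 s = {avg_accel_0_60(6.0):.2f} g average")
    print(f"25 m/s on a 60 m radius = {lateral_g(25, 60):.2f} g lateral")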
Job functions
Development engineer
A development engineer has the responsibility for coordinating delivery of the engineering attributes of a complete automobile (bus, car, truck, van, SUV, motorcycle etc.) as dictated by the automobile manufacturer, governmental regulations, and the customer who buys the product.
Much like the systems engineer, the development engineer is concerned with the interactions of all systems in the complete automobile. While there are multiple components and systems in an automobile that have to function as designed, they must also work in harmony with the complete automobile. As an example, the brake system's main function is to provide braking functionality to the automobile. Along with this, it must also provide an acceptable level of pedal feel (spongy, stiff), brake system "noise" (squeal, shudder, etc.), and interaction with the ABS (anti-lock braking system).
Another aspect of the development engineer's job is a trade-off process required to deliver all of the automobile attributes at a certain acceptable level. An example of this is the trade-off between engine performance and fuel economy. While some customers are looking for maximum power from their engine, the automobile is still required to deliver an acceptable level of fuel economy. From the engine's perspective, these are opposing requirements. Engine performance calls for maximum displacement (bigger, more power), while fuel economy calls for a smaller-displacement engine (e.g., 1.4 L vs. 5.4 L). The engine size, however, is not the only contributing factor to fuel economy and automobile performance; other factors also come into play.
Other attributes that involve trade-offs include: automobile weight, aerodynamic drag, transmission gearing, emission control devices, handling/roadholding, ride quality, and tires.
The development engineer is also responsible for organizing automobile level testing, validation, and certification. Components and systems are designed and tested individually by the Product Engineer. The final evaluation is to be conducted at the automobile level to evaluate system to system interactions. As an example, the audio system (radio) needs to be evaluated at the automobile level. Interaction with other electronic components can cause interference. Heat dissipation of the system and ergonomic placement of the controls need to be evaluated. Sound quality in all seating positions needs to be provided at acceptable levels.
Manufacturing engineer
Manufacturing engineers are responsible for ensuring proper production of the automotive components or complete vehicles. While the development engineers are responsible for the function of the vehicle, manufacturing engineers are responsible for the safe and effective production of the vehicle. This group of engineers consists of process engineers, logistics coordinators, tooling engineers, robotics engineers, and assembly planners.
In the automotive industry, manufacturers are playing a larger role in the development stages of automotive components to ensure that the products are easy to manufacture. Design for manufacturability in the automotive world is crucial to ensuring that whatever design is developed in the research and development stage is practical to produce. Once the design is established, the manufacturing engineers take over. They design the machinery and tooling necessary to build the automotive components or vehicle and establish the methods of how to mass-produce the product. It is the manufacturing engineer's job to increase the efficiency of the automotive plant and to implement lean manufacturing and continuous-improvement techniques such as Six Sigma and kaizen.
Other automotive engineering roles
Other automotive engineers include those listed below:
Aerodynamics engineers will often give guidance to the styling studio so that the shapes they design are aerodynamic, as well as attractive.
Body engineers will also let the studio know if it is feasible to make the panels for their designs.
Change control engineers make sure that all of the design and manufacturing changes that occur are organized, managed and implemented.
NVH engineers perform sound and vibration testing to prevent loud cabin noises, detectable vibrations, and/or improve the sound quality while the vehicle is on the road.
The modern automotive product engineering process
Studies indicate that a substantial part of the modern vehicle's value comes from intelligent systems, and that these represent most of the current automotive innovation. To facilitate this, the modern automotive engineering process has to handle an increased use of mechatronics. Configuration and performance optimization, system integration, control, component, subsystem and system-level validation of the intelligent systems must become an intrinsic part of the standard vehicle engineering process, just as this is the case for the structural, vibro-acoustic and kinematic design. This requires a vehicle development process that is typically highly simulation-driven.
The V-approach
One way to deal effectively with the inherent multi-physics and controls development involved when including intelligent systems is to adopt the V-Model approach to systems development, which has been widely used in the automotive industry for twenty years or more. In this V-approach, system-level requirements are propagated down the V via subsystems to component design, and the system performance is validated at increasing integration levels. Engineering of mechatronic systems requires the application of two interconnected "V-cycles": one focusing on the multi-physics system engineering (such as the mechanical and electrical components of an electrically powered steering system, including sensors and actuators), and the other focusing on the controls engineering: the control logic, the software, and the realization of the control hardware and embedded software.
Metasyntax
https://en.wikipedia.org/wiki/Metasyntax

A metasyntax is a syntax used to define the syntax of a programming language or formal language. It describes the allowable structure and composition of phrases and sentences of a metalanguage, which is used to describe either a natural language or a computer programming language. Some of the widely used formal metalanguages for computer languages are Backus–Naur form (BNF), extended Backus–Naur form (EBNF), Wirth syntax notation (WSN), and augmented Backus–Naur form (ABNF).
Metalanguages have their own metasyntax, each composed of terminal symbols, nonterminal symbols, and metasymbols. A terminal symbol, such as a word or a token, is a stand-alone structure in the language being defined. A nonterminal symbol represents a syntactic category, which defines one or more valid phrase or sentence structures composed from a set of elements. Metasymbols provide syntactic information for denotational purposes in a given metasyntax. Terminals, nonterminals, and metasymbols do not apply across all metalanguages.
Typically, the metalanguage for token-level languages (formally called "regular languages") does not have nonterminals, because nesting is not an issue in these regular languages. English, as a metalanguage for describing certain languages, does not contain metasymbols, since all explanations can be given using English expressions. Only certain formal metalanguages used for describing recursive languages (formally called context-free languages) have terminals, nonterminals, and metasymbols in their metasyntax.
Elements of metasyntax
Terminals: a stand-alone syntactic structure. Terminals may be denoted by double-quoting the name of the terminal.
e.g. "if", "then", "+", "1"
Nonterminals: a symbolic representation defining a set of allowable syntactic structures that is composed of a subset of elements. Nonterminals may be denoted by angle-bracketing the name of the nonterminal.
e.g. <digit>, <expression>, <statement>
Metasymbol: a symbolic representation denoting syntactic information.
e.g. ::=, |, < >, [ ], { }, ( )
Methods of phrase combination
Juxtaposition: e.g. <digit> <digit>
Alternation: e.g. <digit> | <letter>
Repetition: e.g. { <digit> }
Optional phrase: e.g. [ <sign> ]
Grouping: e.g. ( <digit> | <letter> )
Specific metasyntax conventions
The standard convention
'Backus–Naur form' denotes nonterminal symbols by angle-bracketing the name of the syntactic category, while it denotes terminal symbols by double-quoting the terminal words. Terminals can never appear on the left-hand side of the metasymbol ::= in a derivation rule. The body of the definition on the right-hand side may be composed of several alternative forms, with each alternative syntactic construct separated by the metasymbol |. Each of these alternative constructs may be either terminal or nonterminal.
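For example, a pair of BNF derivation rules for unsigned integers might read:

    <integer> ::= <digit> | <digit> <integer>
    <digit>   ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"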
'Extended Backus–Naur form' uses all facilities in BNF and introduces two more metasymbols for additional features. One of these two new features is applied to denote an optional phrase in a statement by square bracketing the optional phrase. The second feature is applied to denote a phrase that is to be repeated zero or more times by curly bracketing the phrase.
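In one common EBNF flavor, the same integers, extended with an optional minus sign, use the new square and curly brackets:

    <integer> ::= [ "-" ] <digit> { <digit> }
    <digit>   ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"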
'Wirth syntax notation' uses all facilities in EBNF except that the nonterminals are not necessarily angle-bracketed but always have their definitions on the right-hand side of the metasymbol = in their production rules. It also does not require every nonterminal to be explicitly defined: certain nonterminals, such as those standing for an ASCII character and optional white space, are implicitly defined.
'Augmented Backus–Naur form' denotes nonterminal symbols by a one-word name beginning with a letter as the name of the syntactic category; angle brackets are not required. Terminal symbols are either denoted by double-quoted words or denoted by the following numeric structure: a %, followed by b, d, or x (for binary, decimal, or hexadecimal), followed by a numeric value or a concatenation of numeric values separated by the metasymbol . (dot). The metasymbol - is placed between two numeric values to denote a value range. As in BNF, the terminals of ABNF never occur on the left-hand side of the metasymbol = in a derivation rule. The metasymbol / denotes alternation. White space is used to separate elements in the body of the definition. The metasyntax for repetition in ABNF has several forms. A * preceding an element denotes the element to be repeated zero or more times. A numeric value n, followed by *, followed by a numeric value m, preceding an element denotes the element to be repeated at least n and at most m times. A single numeric value preceding an element denotes the element to be repeated exactly that many times. Comments may be expressed after the metasymbol ;. As in EBNF, square-bracketing a phrase denotes the phrase to be optional.
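In ABNF (as standardized in RFC 5234), an equivalent integer rule can use a numeric value range and variable repetition:

    integer = ["-"] 1*DIGIT   ; an optional minus sign, then one or more digits
    DIGIT   = %x30-39         ; the characters "0" through "9"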
Variations
The metasyntax conventions of these formal metalanguages are not yet formalized. Many metasyntactic variations or extensions exist in the reference manuals of various computer programming languages. One variation on the standard convention for denoting nonterminals and terminals is to remove metasymbols such as angle brackets and quotation marks and to apply font styles to the intended words. In Ada, for example, syntactic categories are denoted by applying a lower-case sans-serif font to the intended words or symbols, and all terminal words or symbols consist of characters within a fixed range of code positions; the definition of the character set is taken from the International Standard ISO/IEC 10646:2003. In C and Java, syntactic categories are denoted using an italic font while terminal symbols are denoted by a gothic font. The metasyntax of J does not apply metasymbols to describe J's syntax at all; rather, all syntactic explanations are given in a metalanguage very similar to English, called Dictionary, which is documented specifically for J.
Advantage of the extensions
The purpose of the new extensions is to provide a simpler and less ambiguous metasyntax. In terms of simplicity, BNF's metanotation does little to make the metasyntax easy to read, as its opening and closing metasymbols appear in abundance. In terms of ambiguity, BNF's metanotation generates unnecessary complexity when quotation marks, apostrophes, less-than signs, or greater-than signs serve as terminal symbols, which they often do. The extended metasyntaxes use properties such as case, font, and code position of characters to reduce this unnecessary complexity. Moreover, some metalanguages use specially fonted separator categories to incorporate metasyntactic features for layout conventions, which are not formally supported by BNF.
Tincture
https://en.wikipedia.org/wiki/Tincture

A tincture is typically an extract of plant or animal material dissolved in ethanol (ethyl alcohol). Solvent concentrations of 25–60% are common, but may run as high as 90%. In chemistry, a tincture is a solution that has ethanol as its solvent. In herbal medicine, alcoholic tinctures are made with various ethanol concentrations, which should be at least 20% alcohol for preservation purposes.
Other solvents for producing tinctures include vinegar, glycerol (also called glycerine), diethyl ether and propylene glycol, not all of which can be used for internal consumption. Ethanol has the advantage of being an excellent solvent for both acidic and basic (alkaline) constituents. A tincture using glycerine is called a glycerite. Glycerine is generally a poorer solvent than ethanol. Vinegar, being acidic, is a better solvent for obtaining alkaloids but a poorer solvent for acidic components. For individuals who choose not to ingest alcohol, non-alcoholic extracts offer an alternative for preparations meant to be taken internally.
Low volatility substances such as iodine and mercurochrome can also be turned into tinctures.
Characteristics
Tinctures are often made of a combination of ethyl alcohol and water as solvents, each dissolving constituents the other is unable to, or weaker at. Varying their proportions can also produce different levels of constituents in the final extraction. As an antimicrobial, alcohol also acts as a preservative.
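As a rough illustration of adjusting those proportions, the following Python sketch applies the standard dilution relation C1*V1 = C2*V2. It ignores the slight volume contraction of mixing ethanol and water, and every figure in it is an invented example.

    def solvent_volumes(stock_pct: float, target_pct: float, final_ml: float):
        """Return (ml of stock spirit, ml of water) for a target dilution,
        using C1*V1 = C2*V2; approximate, since ethanol-water mixing
        contracts slightly in volume."""
        stock_ml = final_ml * target_pct / stock_pct
        return stock_ml, final_ml - stock_ml

    spirit, water = solvent_volumes(stock_pct=95, target_pct=40, final_ml=500)
    print(f"{spirit:.0f} ml of 95% spirit + {water:.0f} ml water = 500 ml at about 40%")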
A downside of using alcohol as a solvent is that ethanol has a tendency to denature some organic compounds, reducing or destroying their effectiveness. This tendency can also have undesirable effects when extracting botanical constituents, such as polysaccharides. Certain other constituents, common among them proteins, can become irreversibly denatured, or "pickled" by the alcohol. Alcohol can also have damaging effects on some aromatic compounds.
Ether and propylene glycol based tinctures are not suitable for internal consumption, although they are used in preparations for external use, such as personal care creams and ointments.
Examples
Some examples that were formerly common in medicine include:
Tincture of benzoin
Tincture of cannabis
Tincture of cantharides
Tincture of castoreum
Tincture of ferric citrochloride, a chelate of citric acid and iron(III) chloride
Tincture of green soap, which classically contains lavender oil
Tincture of guaiac gum
Tincture of iodine
Tincture of opium (laudanum)
Camphorated tincture of opium (paregoric)
Tincture of pennyroyal
Warburg's tincture ("Tinctura Antiperiodica" or "Antiperiodic Tincture", a 19th-century antipyretic)
Examples of spirits include:
Spirit of ammonia (spirits of hartshorn)
Spirit of camphor
Spirit of ether, a solution of diethyl ether in alcohol
"Spirit of Mindererus", ammonium acetate in alcohol
"Spirit of nitre" is not a spirit in this sense, but an old name for nitric acid (but "sweet spirit of nitre" was ethyl nitrite)
Similarly "spirit(s) of salt" actually meant hydrochloric acid. The concentrated, fuming, 35% acid is still sold under this name in the UK, for use as a drain-cleaning fluid.
"Spirit of vinegar" is an antiquated term for glacial acetic acid
"Spirit of vitriol" is an antiquated term for sulfuric acid
"Spirit of wine" or "spirits of wine" is an old term for alcohol (especially food grade alcohol derived from the distillation of wine)
"Spirit of wood" referred to methanol, often derived from the destructive distillation of wood
Sawmill
https://en.wikipedia.org/wiki/Sawmill

A sawmill (saw mill, saw-mill) or lumber mill is a facility where logs are cut into lumber. Modern sawmills use a motorized saw to cut logs lengthwise to make long pieces, and crosswise to length depending on standard or custom sizes (dimensional lumber). The "portable" sawmill is simple to operate. The log lies flat on a steel bed, and the motorized saw cuts the log horizontally along the length of the bed, by the operator manually pushing the saw. The most basic kind of sawmill consists of a chainsaw and a customized jig ("Alaskan sawmill"), with similar horizontal operation.
Before the invention of the sawmill, boards were made in various manual ways, either rived (split) and planed, hewn, or more often hand sawn by two men with a whipsaw, one above and another in a saw pit below. The earliest known mechanical mill is the Hierapolis sawmill, a Roman water-powered stone mill at Hierapolis, Asia Minor dating back to the 3rd century AD. Other water-powered mills followed and by the 11th century they were widespread in Spain and North Africa, the Middle East and Central Asia, and in the next few centuries, spread across Europe. The circular motion of the wheel was converted to a reciprocating motion at the saw blade. Generally, only the saw was powered, and the logs had to be loaded and moved by hand. An early improvement was the development of a movable carriage, also water powered, to move the log steadily through the saw blade.
By the time of the Industrial Revolution in the 18th century, the circular saw blade had been invented, and with the development of steam power in the 19th century, a much greater degree of mechanisation was possible. Scrap lumber from the mill provided a source of fuel for firing the boiler. The arrival of railroads meant that logs could be transported to mills rather than mills being built beside navigable waterways. By 1900, the largest sawmill in the world was operated by the Atlantic Coast Lumber Company in Georgetown, South Carolina, using logs floated down the Pee Dee River from the Appalachian Mountains. In the 20th century the introduction of electricity and high technology furthered this process, and now most sawmills are massive and expensive facilities in which most aspects of the work are computerized. Besides the sawn timber, use is made of all the by-products including sawdust, bark, woodchips, and wood pellets, creating a diverse offering of forest products.
Sawmill process
A sawmill's basic operation is much like those of hundreds of years ago: a log enters on one end and dimensional lumber exits on the other end.
After trees are selected for harvest, the next step in logging is felling the trees, and bucking them to length.
Branches are cut off the trunk. This is known as limbing.
Logs are taken by logging truck, rail or a log drive to the sawmill.
Logs are scaled either on the way to the mill or upon arrival at the mill.
Debarking removes bark from the logs.
Decking is the process for sorting the logs by species, size and end use (lumber, plywood, chips).
A sawyer uses a head saw (also called head rig or primary saw) to break the log into cants (unfinished logs to be further processed) and flitches (unfinished planks).
Depending upon the species and quality of the log, the cants will either be further broken down by a resaw or a gang edger into multiple flitches and/or boards.
Edging will take the flitch and trim off all irregular edges leaving four-sided lumber.
Trimming squares the ends at typical lumber lengths.
Drying removes naturally occurring moisture from the lumber. This can be done in kilns or by air-drying.
Planing smooths the surface of the lumber leaving a uniform width and thickness.
Shipping transports the finished lumber to market.
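As a toy sketch of the sequence above, the following Python snippet chains the stages as a linear pipeline; the stage names mirror the list, and everything else is invented for illustration.

    # Each stage is a function from a piece description to a new description.
    def make_stage(name):
        return lambda item: f"{item} -> {name}"

    PIPELINE = [make_stage(s) for s in (
        "debarking", "decking", "head saw", "edging",
        "trimming", "drying", "planing", "shipping",
    )]

    piece = "log"
    for stage in PIPELINE:
        piece = stage(piece)
    print(piece)   # traces the log's path through the mill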
History
Pre–Industrial Revolution
The Hierapolis sawmill, a water-powered stone sawmill at Hierapolis, Asia Minor (modern-day Turkey, then part of the Roman Empire), dating to the second half of the 3rd century, is the earliest known sawmill. It also incorporates a crank and connecting rod mechanism.
Water-powered stone sawmills working with cranks and connecting rods, but without gear trains, are archaeologically attested for the 6th century at the Byzantine cities of Gerasa (in modern Jordan) and Ephesus (in Asia Minor).
The earliest literary reference to a working sawmill comes from a Roman poet, Ausonius, who wrote a topographical poem about the river Moselle in Germany in the late 4th century AD. At one point in the poem, he describes the shrieking sound of a watermill cutting marble. Marble sawmills also seem to be indicated by the Christian saint Gregory of Nyssa from Anatolia around 370–390 AD, demonstrating a diversified use of water-power in many parts of the Roman Empire.
Sawmills later became widespread in medieval Europe, as one was sketched by Villard de Honnecourt in c. 1225–1235. They are claimed to have been introduced to Madeira following its discovery in c. 1420 and spread widely in Europe in the 16th century.
Prior to the invention of the sawmill, boards were rived (split) and planed, or more often sawn by two men with a whipsaw, using saddleblocks to hold the log, and a saw pit for the pitman who worked below. Sawing was slow, and required strong and hearty men. The topsawyer had to be the stronger of the two because the saw was pulled in turn by each man, and the lower had the advantage of gravity. The topsawyer also had to guide the saw so that the board was of even thickness. This was often done by following a chalk line.
Early sawmills simply adapted the whipsaw to mechanical power, generally driven by a water wheel to speed up the process. The circular motion of the wheel was changed to back-and-forth motion of the saw blade by a connecting rod known as a pitman arm (thus introducing a term used in many mechanical applications).
Generally, only the saw was powered, and the logs had to be loaded and moved by hand. An early improvement was the development of a movable carriage, also water powered, to move the log steadily through the saw blade.
A type of sawmill without a crank, known from Germany, is called a "knock and drop" or simply "drop" mill. In these drop sawmills, the frame carrying the saw blade is knocked upwards by cams as the shaft turns. These cams are let into the shaft on which the waterwheel sits. When the frame carrying the saw blade is in the topmost position, it drops by its own weight, making a loud knocking noise, and in so doing it cuts the trunk.
A small mill such as this would be the center of many rural communities in wood-exporting regions such as the Baltic countries and Canada. The output of such mills would be quite low, perhaps only 500 boards per day. They would also generally only operate during the winter, the peak logging season.
In the United States, the sawmill was introduced soon after the colonisation of Virginia by recruiting skilled men from Hamburg.
Later the metal parts were obtained from the Netherlands, where the technology was far ahead of that in England, where the sawmill remained largely unknown until the late 18th century. The arrival of a sawmill was a large and stimulative step in the growth of a frontier community.
The Dutch windmill owner Cornelis Corneliszoon van Uitgeest invented the wind-powered sawmill in 1594, which made the conversion of log timber into planks 30 times faster than before. His wind-powered sawmill used a crankshaft to convert the windmill's circular motion into a back-and-forth motion powering the saw, and he was granted a patent for the technique.
Industrial Revolution
Early mills had been taken to the forest, where a temporary shelter was built, and the logs were skidded to the nearby mill by horse or ox teams, often when there was some snow to provide lubrication. As mills grew larger, they were usually established in more permanent facilities on a river, and the logs were floated down to them by log drivers. Sawmills built on navigable rivers, lakes, or estuaries were called cargo mills because of the availability of ships transporting cargoes of logs to the sawmill and cargoes of lumber from the sawmill.
The next improvement was the use of circular saw blades, perhaps invented in England in the late 18th century, but perhaps in 17th-century Netherlands. Soon thereafter, millers used gangsaws, which added additional blades so that a log would be reduced to boards in one quick step. Circular saw blades were extremely expensive and highly subject to damage by overheating or dirty logs. A new kind of technician arose, the sawfiler. Sawfilers were highly skilled in metalworking. Their main job was to set and sharpen teeth. The craft also involved learning how to hammer a saw, whereby a saw is deformed with a hammer and anvil to counteract the forces of heat and cutting. Modern circular saw blades have replaceable teeth, but still need to be hammered.
The introduction of steam power in the 19th century created many new possibilities for mills. Availability of railroad transportation for logs and lumber encouraged building of rail mills away from navigable water. Steam powered sawmills could be far more mechanized. Scrap lumber from the mill provided a ready fuel source for firing the boiler. Efficiency was increased, but the capital cost of a new mill increased dramatically as well.
In addition, the use of steam or gasoline-powered traction engines also allowed the entire sawmill to be mobile.
By 1900, the largest sawmill in the world was operated by the Atlantic Coast Lumber Company in Georgetown, South Carolina, using logs floated down the Pee Dee River from as far as the edge of the Appalachian Mountains in North Carolina.
A restoration project for Sturgeon's Mill in Northern California is underway, restoring one of the last steam-powered lumber mills still using its original equipment.
Current trends
In the twentieth century the introduction of electricity and high technology furthered this process, and now most sawmills are massive and expensive facilities in which most aspects of the work are computerized. The cost of a new high-capacity facility can reach CAN$120,000,000, and a modern operation will produce a large volume of lumber annually.
Small gasoline-powered sawmills run by local entrepreneurs served many communities in the early twentieth century, and specialty markets still today.
A trend is the small portable sawmill for personal or even professional use. Many different models have emerged with different designs and functions. They are especially suitable for producing limited volumes of boards, or specialty milling such as oversized timber. Portable sawmills have gained popularity for the convenience of bringing the sawmill to the logs and milling lumber in remote locations. Some remote communities that have experienced natural disasters have used portable sawmills to rebuild their communities out of the fallen trees.
Technology has changed sawmill operations significantly in recent years, emphasizing increasing profits through waste minimization and increased energy efficiency as well as improving operator safety. The once-ubiquitous rusty, steel conical sawdust burners have for the most part vanished, as the sawdust and other mill waste is now processed into particleboard and related products, or used to heat wood-drying kilns. Co-generation facilities will produce power for the operation and may also feed superfluous energy onto the grid. While the bark may be ground for landscaping barkdust, it may also be burned for heat. Sawdust may make particle board or be pressed into wood pellets for pellet stoves. The larger pieces of wood that will not make lumber are chipped into wood chips and provide a source of supply for paper mills. Wood by-products of the mills will also make oriented strand board (OSB) paneling for building construction, a cheaper and in some use cases more robust alternative to plywood for paneling. Some automatic mills can process 800 small logs into bark chips, wood chips, sawdust and sorted, stacked, and bound planks, in an hour.
Fonio
https://en.wikipedia.org/wiki/Fonio

Fonio, also sometimes called findi or acha, is the term for two cultivated grasses in the genus Digitaria that are important crops in parts of West Africa. The nutritious food with a favorable taste is a vital food source in many rural areas, especially in the mountains of Fouta Djalon, Guinea, but it is also cultivated in Mali, Burkina Faso, Ivory Coast, Nigeria, and Senegal. The global fonio market was estimated at 721,400 tonnes in 2020. Guinea annually produces the most fonio in the world, accounting for over 75% of the world's production in 2019. The name fonio (borrowed into English from French) is from Wolof foño. In West Africa, the species black fonio (Digitaria iburua) and white fonio (Digitaria exilis) are cultivated; the latter is the economically more important crop.
Fonio is a glumaceous monocot belonging to the grass family Poaceae and the genus Digitaria. While hundreds of these crabgrass species exist, only a few of them are produced for their grains. It is a small annual herbaceous plant with an inflorescence containing two or three racemes. The racemes have spikelets grouped in twos, threes, or fours, with a sterile and a fertile flower producing the fonio grain. Fonio has a short growing season and is well adjusted to harsh environments. The size of its root system, which can extend down to more than one meter in depth, is advantageous in periods of drought and helps with its adaptation to poor soils. Once considered a humble and often overlooked grain commonly known as the "cereal of the poor," fonio is now gaining attention in urban West Africa. Its unique cooking properties and nutritional benefits are sparking renewed interest in this once underrated staple.
Types
White fonio
White fonio, Digitaria exilis, also called "hungry rice" by Europeans, is the most common of a diverse group of wild and domesticated Digitaria species that are harvested in the savannas of West Africa. Fonio has the smallest seeds of all species of millet. It has potential to improve nutrition, boost food security, foster rural development, and support sustainable use of the land.
Nutritious, gluten-free, and high in dietary fiber, fonio is one of the world's fastest-growing cereals, reaching maturity in as little as six to eight weeks. The grains are used to make porridge, couscous, bread, and beer.
Black fonio
Black fonio, D. iburua, also known as iburu, is a similar crop grown in several countries of West Africa, particularly Nigeria, Togo, and Benin. Like white fonio, it is nutritious, fast-growing, and has the benefit of maturing before other grains, allowing for harvest during the "hungry season." However, it contains considerably more protein compared to D. exilis.
Black fonio is mostly cultivated in rural communities and is rarely sold commercially, even in West African cities.
Cultivation and processing
Climate and attributes
Fonio is cultivated across West Africa as a staple crop. Guinea was the biggest producer of fonio in 2021, both by output and by cultivated area, followed by Nigeria and Mali.
Fonio grows in dry climates without irrigation, and is unlikely to be a successful crop in humid regions. It is planted in light (sandy to stony) soils, and will grow in poor soil. Fonio is cultivated at sea level in Gambia, Sierra Leone and Guinea-Bissau, but it is otherwise mostly cultivated at higher altitudes. The growth cycle ranges from 70–130 days, depending on variety. It is mostly grown in areas with relatively low average annual rainfall.
Fonio plants are medium in height; D. iburua generally grows taller than D. exilis. The ploidy level for the species ranges from diploid (2n) and tetraploid (4n) to hexaploid (6n). Like many other grasses, fonio uses C4 carbon fixation, which makes it drought-tolerant.
Ploughing and sowing
Ploughing is done by the men, by hand, by animal traction, or with tractors. Sowing is generally done by hand by the women, with timing depending on the onset of the rainy season. The fonio plant grows quickly; some landraces reach maturity in 8 weeks. It is, however, a weak weed competitor at the beginning of its growth, so weeding is important in the first development stages.
Harvest
Fonio is labor-intensive to harvest and process. In some regions, the mature fonio plants are uprooted, but the most common method is to cut the straws with knives and sickles, which often leads to wounds on the hands. Women then gather the sheaves into cylindrical stacks or horizontal beams to store the sheaves and allow them to dry before the threshing without overheating. The threshing is then done by trampling on the plants or by beating the plants with rigid rods or more flexible sticks.
The fonio plants are prone to lodging in the soil, which makes potential mechanization of the harvest processes difficult.
Dehusking
After the threshing, the fonio grains are still in their husk and the small grains make husk removal difficult and time-consuming. Traditional methods include pounding it in a mortar with sand, and then separating the grains and sand, or "popping" it over a flame and then pounding it, which yields a toasted-color grain (a technique used among the Akposso). The invention of a simple fonio husking machine offers an easier mechanical way to dehusk.
Gender role
Gender roles play a big part in the cultivation of fonio; tasks are distributed differently between men and women. Women do the weeding, the threshing by trampling, and the cleaning, as well as the drying and processing, while men do the harvest and the threshing by beating. Women's role in fonio production is predominant: half of the cultivation tasks are done exclusively by women, against 14% for men. The tasks assigned to women require patience and meticulousness, while those assigned to men call for strength.
Effect of processing methods on nutrient value
Before consumption, fonio grains must be processed using mechanical (dehusking, milling) or thermal (precooking, parboiling, roasting) methods. Depending on the processing method, the nutrient value may be affected.
Regarding the macronutrients, the carbohydrate content remains higher when the grains are precooked rather than roasted. The protein content is much lower after milling because the bran that gets removed contains a lot of protein. The highest protein content is achieved when parboiling. The lipid content is increased when roasted and decreased when milled or precooked.
Regarding micronutrients, the iron and zinc content remains the highest when parboiled while milling leads to a loss due to the removal of the bran. Phytate, an anti-nutritional factor that inhibits the absorption of minerals like iron and zinc, is reduced by washing and cooking but is still high enough to inhibit adequate mineral absorption.
Generally, parboiled fonio shows the best nutritional composition when compared to the other processing methods. However, parboiling fonio does not lead to as efficient redistribution of nutrients as is the case with parboiled rice. Additionally, the process of parboiling changes the color of the fonio grains which is disliked by some consumers.
Commercialization outside of Africa
Fonio has been relatively unknown outside the African continent until recently, when companies in Europe and the United States began to import the grain from West Africa, often citing its ecological and nutritional benefits in their marketing.
United States
In the United States, Yolélé Foods, led by Senegalese-American chef Pierre Thiam, started importing and selling fonio in 2017. Thiam hopes to introduce Americans to the grain while simultaneously supporting sustainable and traditional agriculture in Burkina Faso, Ghana, Mali and Senegal. What is considered to be a peasant's food in West Africa is now sold in luxury grocery stores in the United States.
However, Thiam positions his project as part of a larger movement to elevate the economic power of African farmers, who for centuries have been suppressed by Western hegemony in the global food system.
European Union
In December 2018, the European Commission approved commercialization of fonio as a novel food in the European Union, after submission by the Italian company Obà Food to manufacture and market new food products. These products include fonio pasta, revealing a desire to change fonio to be more recognizable to the European palate.
Since this initial approval, fonio has gradually become more popular and more accessible in Europe. By 2021, the EU was importing 422 metric tonnes (465.2 tons) of fonio, a significant increase from the 172 metric tonnes (189.6 tons) imported in 2016.
Spirit level
https://en.wikipedia.org/wiki/Spirit%20level

A spirit level, bubble level, or simply a level, is an instrument designed to indicate whether a surface is horizontal (level) or vertical (plumb).
Two basic designs exist: tubular (or linear) and bull's eye (or circular).
Different types of spirit levels may be used by carpenters, stonemasons, bricklayers, other building trades workers, surveyors, millwrights and other metalworkers, and in some photographic or videographic work.
History
The history of the spirit level was discussed briefly in an 1887 article appearing in Scientific American. Melchisédech Thévenot, a French scientist, invented the instrument some time before February 2, 1661. This date can be established from Thévenot's correspondence with the scientist Christiaan Huygens. Within a year of this date the inventor circulated details of his invention to others, including Robert Hooke in London and Vincenzo Viviani in Florence. It is occasionally argued that these "bubble levels" did not come into widespread use until the beginning of the eighteenth century, the earliest surviving examples being from that time, but Adrien Auzout had recommended that the Académie Royale des Sciences take "levels of the Thevenot type" on its expedition to Madagascar in 1666. It is very likely that these levels were in use in France and elsewhere long before the turn of the century.
The Fell All-Way precision level, one of the first successful American-made bull's eye levels for machine tool use, was invented by William B. Fell of Rockford, Illinois in 1939. The device was unique in that it could be placed on a machine bed and show tilt on the x–y axes simultaneously, eliminating the need to rotate the level 90 degrees. The level was so accurate that it was restricted from export during World War II. The device set a new standard of resolution, 0.0005 inches per foot (five ten-thousandths of an inch per foot, or five arc seconds of tilt). Production of the level stopped around 1970, was restarted in the 1980s by Thomas Butler Technology, also of Rockford, Illinois, but finally ended in the mid-1990s. However, there are still hundreds of the devices in existence.
Design and construction
Early tubular spirit levels had very slightly curved glass vials with a constant inner diameter at each viewing point. These vials are filled, incompletely, with a liquid, usually a colored spirit, leaving a bubble in the tube. They have a slight upward curve, so that the bubble naturally rests in the center, the highest point. At slight inclinations the bubble travels away from the marked center position. Where a spirit level must also be usable upside-down or on its side, the curved constant-diameter tube is replaced by an uncurved barrel-shaped tube with a slightly larger diameter in its middle.
Alcohols such as ethanol are often used rather than water. Alcohols have low viscosity and surface tension, which allows the bubble to travel the tube quickly and settle accurately with minimal interference from the glass surface. Alcohols also have a much wider liquid temperature range, and will not break the vial as water could due to ice expansion. A colorant such as fluorescein, typically yellow or green, may be added to increase the visibility of the bubble.
A variant of the linear spirit level is the bull's eye level: a circular, flat-bottomed device with the liquid under a slightly convex glass face with a circle at the center. It serves to level a surface across a plane, while the tubular level only does so in the direction of the tube.
Calibration
To check the accuracy of a carpenter's type level, a perfectly horizontal surface is not needed. The level is placed on a flat and roughly level surface and the reading on the bubble tube is noted. This reading indicates to what extent the surface is parallel to the horizontal plane, according to the level, which at this stage is of unknown accuracy. The spirit level is then rotated through 180 degrees in the horizontal plane, and another reading is noted. If the level is accurate, it will indicate the same orientation with respect to the horizontal plane. A difference implies that the level is inaccurate.
Adjustment of the spirit level is performed by successively rotating the level and moving the bubble tube within its housing to take up roughly half of the discrepancy, until the magnitude of the reading remains constant when the level is flipped.
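The arithmetic behind this reversal test is simple: if e is the instrument's error and s the true slope of the surface, the first reading is r1 = s + e, and after the 180-degree turn the surface slope reverses sign relative to the vial, giving r2 = -s + e. A minimal Python sketch, with invented readings in vial divisions:

    def reversal_test(reading_1: float, reading_2: float):
        """Separate surface slope from instrument error:
        reading_1 = slope + error, reading_2 = -slope + error."""
        error = (reading_1 + reading_2) / 2   # what adjustment must remove
        slope = (reading_1 - reading_2) / 2   # true tilt of the surface
        return slope, error

    slope, error = reversal_test(1.5, 0.5)
    print(f"surface slope: {slope} divisions, instrument error: {error} divisions")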
A similar procedure is applied to more sophisticated instruments such as a surveyor's optical level or a theodolite and is a matter of course each time the instrument is set up. In this latter case, the plane of rotation of the instrument is levelled, along with the spirit level. This is done in two horizontal perpendicular directions.
Sensitivity
Sensitivity is an important specification for a spirit level, as the device's accuracy depends on its sensitivity. The sensitivity of a level is given as the change of angle or gradient required to move the bubble by unit distance. If the bubble housing has graduated divisions, then the sensitivity is the angle or gradient change that moves the bubble by one of these divisions. On a surveyor's level, the bubble moves one division when the vial is tilted by about 0.005 degree. On a precision machinist level, one division typically corresponds to a gradient of five ten-thousandths of an inch per foot, referred to by machinists as "5 tenths per foot"; this terminology is unique to machinists, a "tenth" being one tenth of one thousandth of an inch.
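A minimal Python sketch converting the angular figure quoted above into a gradient; the one-metre run is arbitrary:

    import math

    def degrees_to_mm_per_m(angle_deg: float) -> float:
        """Rise, in mm over a 1 m run, for a given tilt angle in degrees."""
        return math.tan(math.radians(angle_deg)) * 1000.0

    # Surveyor's-level sensitivity from the text: one division per 0.005 degree.
    print(f"{degrees_to_mm_per_m(0.005):.3f} mm rise per metre per division")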
Types
There are different types of spirit levels for different uses:
Carpenter's level (either wood, aluminium or composite materials)
Mason's level
Torpedo level
Post level
Line level
Engineer's precision level
Electronic level
Inclinometer
Slip or Skid Indicator
Bull's eye level
A spirit level is usually found on the head of combination squares.
Carpenter's level
A traditional carpenter's spirit level looks like a short plank of wood and often has a wide body to ensure stability and good contact with the surface being measured. In the middle of the level is a small window in which the bubble vial is mounted. Two notches (or rings) designate where the bubble should sit if the surface is level. Often an indicator for a 45-degree inclination is included.
Line level
A line level is a level designed to hang on a builder's string line. The body of the level incorporates small hooks that allow it to attach to and hang from the string line. The body is lightweight, so as not to weigh down the string, and small, since the string line in effect becomes the body: when the level is hung in the center of the string, each 'leg' of the string line extends the level's plane.
Engineer's precision levels
An engineer's precision level permits leveling items to greater accuracy than a plain spirit level. They are used to level the foundations or beds of machines, so that the machine can produce workpieces to the accuracy built into it.
Surveyor's leveling instrument
Combining a spirit level with an optical telescope results in a tilting level or dumpy level. These leveling instruments are used in surveying to measure height differences over larger distances. A surveyor's leveling instrument has a spirit level mounted on a telescope (of perhaps 30 power) with cross-hairs, itself mounted on a tripod. The observer reads height values off two graduated vertical rods, one 'behind' and one 'in front', to obtain the height difference between the ground points on which the rods rest. Starting from a point with a known elevation and proceeding cross country from point to point, height differences can be measured cumulatively over long distances and elevations can be calculated. Precise leveling is expected to give the difference in elevation between two points about a kilometre apart correct to within a few millimetres.
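The bookkeeping behind this cumulative measurement is a running sum of backsight minus foresight readings. A minimal sketch (illustrative; the function and variable names are assumptions, not any surveying-software API):

```python
def run_levels(start_elevation: float, shots: list[tuple[float, float]]) -> float:
    """Carry an elevation through a series of instrument setups.
    Each shot is a (backsight, foresight) pair of rod readings in metres."""
    elevation = start_elevation
    for backsight, foresight in shots:
        elevation += backsight - foresight  # positive when the ground rises
    return elevation

# From a benchmark at 100.000 m through three setups:
print(run_levels(100.000, [(1.234, 0.987), (1.100, 1.452), (0.900, 0.875)]))
# 100.000 + 0.247 - 0.352 + 0.025 = 99.920
```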
Alternatives
Alternatives include:
Reed level
Laser line level
Water level
Today, level tools are available on most smartphones, using the device's accelerometer. These mobile apps come in a variety of designs and feature sets. Newer web standards also allow websites to read the orientation of a device.
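Such apps derive tilt from the direction of the gravity vector reported by a three-axis accelerometer at rest. A minimal sketch of the underlying trigonometry (illustrative; not the API of any particular platform):

```python
import math

def tilt_angles(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Compute pitch and roll in degrees from the gravity vector (ax, ay, az)
    measured by a stationary three-axis accelerometer."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

# A device lying flat reads gravity entirely on its z axis: no tilt.
print(tilt_angles(0.0, 0.0, 9.81))  # (0.0, 0.0)
```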
Digital levels are increasingly replacing conventional spirit levels, particularly in civil engineering applications such as traditional building construction and steel structure erection, for on-site angle alignment and leveling tasks. Industry practitioners often refer to these leveling tools as a "construction level", "heavy duty level", "inclinometer", or "protractor". Modern electronic levels can display precise numeric angles across a full 360° with 0.1° to 0.05° accuracy, can be read from a distance with clarity, and are affordably priced due to mass adoption. They provide features that traditional levels cannot match; typically, these features enable steel beam frames under construction to be precisely aligned and leveled to the required orientation, which is vital to the stability, strength and rigidity of steel structures on site. Digital levels embedded with angular MEMS technology effectively improve the productivity and quality of many modern civil structures. Some recent models offer IP65 waterproofing and impact resistance for harsh working environments.
| Technology | Surveying tools | null |
352453 | https://en.wikipedia.org/wiki/Bucket | Bucket | A bucket is typically a watertight, vertical cylinder or truncated cone (or, less often, square in cross-section), with an open top and a flat bottom, attached to a semicircular carrying handle called the bail.
A bucket is usually an open-top container. In contrast, a pail can have a top or lid and is a shipping container. In non-technical usage, the two terms are often used interchangeably.
Types and uses
A number of bucket types exist, used for a variety of purposes. Though most of these purposes are functional, some buckets, including those constructed from precious metals, are used ceremonially. Common types of bucket and their purposes include:
Water buckets used to carry water
Household and garden buckets used for carrying liquids and granular products
Elaborate ceremonial or ritual buckets constructed of bronze, ivory or other materials, found in several ancient or medieval cultures, sometimes known by the Latin word for bucket, situla
Large scoops or buckets attached to loaders and telehandlers for landscaping and agricultural purposes
Canvas buckets made of woven fabric, developed as a fire-resistant alternative to leather
Crusher buckets attached to excavators used for crushing and recycling material in the construction industry
Buckets shaped like castles often used as children's toys to shape and carry sand on a beach or in a sandpit
Buckets in special shapes such as cast iron buckets or smelting buckets to hold liquid metal at high temperatures
Though not always bucket shaped, lunch boxes are sometimes known as lunch pails or a lunch bucket. Buckets can be repurposed as seats, tool caddies, hydroponic gardens, chamber pots, "street" drums, or livestock feeders, amongst other uses. Buckets are also repurposed for the use of long term food storage by survivalists.
Shipping containers
When referring to a shipping container, the term "pail" is a technical term for a bucket-shaped package with a sealed top or lid, used as a transport container for chemicals and industrial products.
English language phrases and idioms
The bucket has been used in many phrases and idioms in the English language, some of which are regional or specific to the use of English in different English-speaking countries.
Kick the bucket: an informal term referring to someone's death
Drop the bucket on: to implicate a person in something (from Australian slang)
A drop in the bucket: a small, inadequate amount in relation to how much is requested or asked, taken from the biblical Book of Isaiah, chapter 40, verse 15
Bucket list: a list of activities an individual wishes to undertake before death
Unit of measurement
As an obsolete unit of measurement, at least one source documents a 'bucket' as being equivalent to 4 imperial gallons (about 18 litres).
| Technology | Containers | null |
352905 | https://en.wikipedia.org/wiki/R-process | R-process | In nuclear astrophysics, the rapid neutron-capture process, also known as the r-process, is a set of nuclear reactions that is responsible for the creation of approximately half of the atomic nuclei heavier than iron, the "heavy elements", with the other half produced by the p-process and s-process. The r-process usually synthesizes the most neutron-rich stable isotopes of each heavy element. The r-process can typically synthesize the heaviest four isotopes of every heavy element; of these, the heavier two are called r-only nuclei because they are created exclusively via the r-process. Abundance peaks for the r-process occur near mass numbers A = 80 (elements Se, Br, and Kr), A = 130 (elements Te, I, and Xe) and A = 195 (elements Os, Ir, and Pt).
The r-process entails a succession of rapid neutron captures (hence the name) by one or more heavy seed nuclei, typically beginning with nuclei in the abundance peak centered on 56Fe. The captures must be rapid in the sense that the nuclei must not have time to undergo radioactive decay (typically via β− decay) before another neutron arrives to be captured. This sequence can continue up to the limit of stability of the increasingly neutron-rich nuclei (the neutron drip line) to physically retain neutrons as governed by the short range nuclear force. The r-process therefore must occur in locations where there exists a high density of free neutrons.
Early studies theorized that 10²⁴ free neutrons per cm³ would be required, for temperatures of about 1 GK, in order to match the waiting points, at which no more neutrons can be captured, with the mass numbers of the abundance peaks for r-process nuclei. This amounts to more than a gram of free neutrons in every cubic centimeter, an astonishing density requiring extreme locations. Traditionally this suggested the material ejected from the reexpanded core of a core-collapse supernova, as part of supernova nucleosynthesis, or decompression of neutron star matter thrown off by a binary neutron star merger in a kilonova. The relative contribution of each of these sources to the astrophysical abundance of r-process elements is a matter of ongoing research.
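That mass figure follows directly from the neutron rest mass; a one-line check (illustrative):

```python
NEUTRON_MASS_G = 1.6749e-24      # neutron rest mass in grams
density = 1e24 * NEUTRON_MASS_G  # 10**24 free neutrons per cubic centimetre
print(f"{density:.2f} g/cm^3")   # -> 1.67 g/cm^3
```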
A limited r-process-like series of neutron captures occurs to a minor extent in thermonuclear weapon explosions. These led to the discovery of the elements einsteinium (element 99) and fermium (element 100) in nuclear weapon fallout.
The r-process contrasts with the s-process, the other predominant mechanism for the production of heavy elements, which is nucleosynthesis by means of slow captures of neutrons. In general, isotopes involved in the s-process have half-lives long enough to enable their study in laboratory experiments, but this is not typically true for isotopes involved in the r-process. The s-process primarily occurs within ordinary stars, particularly AGB stars, where the neutron flux is sufficient to cause neutron captures to recur every 10–100 years, much too slow for the r-process, which requires 100 captures per second. The s-process is secondary, meaning that it requires pre-existing heavy isotopes as seed nuclei to be converted into other heavy nuclei by a slow sequence of captures of free neutrons. The r-process scenarios create their own seed nuclei, so they might proceed in massive stars that contain no heavy seed nuclei. Taken together, the r- and s-processes account for almost the entire abundance of chemical elements heavier than iron. The historical challenge has been to locate physical settings appropriate to their time scales.
History
Following pioneering research into the Big Bang and the formation of helium in stars, an unknown process responsible for producing heavier elements found on Earth from hydrogen and helium was suspected to exist. One early attempt at explanation came from Subrahmanyan Chandrasekhar and Louis R. Henrich, who postulated that elements were produced at temperatures between 6×10⁹ and 8×10⁹ K. Their theory accounted for elements up to chlorine, though there was no explanation for elements of atomic weight heavier than 40 amu at non-negligible abundances.
This became the foundation of a study by Fred Hoyle, who hypothesized that conditions in the core of collapsing stars would enable nucleosynthesis of the remainder of the elements via rapid capture of densely packed free neutrons. However, there remained unanswered questions about equilibrium in stars that was required to balance beta-decays and precisely account for abundances of elements that would be formed in such conditions.
The need for a physical setting providing rapid neutron capture, which was known to almost certainly have a role in element formation, was also seen in a table of abundances of isotopes of heavy elements by Hans Suess and Harold Urey in 1956. Their abundance table revealed larger than average abundances of natural isotopes containing magic numbers of neutrons as well as abundance peaks about 10 amu lighter than stable nuclei containing magic numbers of neutrons which were also in abundance, suggesting that radioactive neutron-rich nuclei having the magic neutron numbers but roughly ten fewer protons were formed. These observations also implied that rapid neutron capture occurred faster than beta decay, and the resulting abundance peaks were caused by so-called waiting points at magic numbers. This process, rapid neutron capture by neutron-rich isotopes, became known as the r-process, whereas the s-process was named for its characteristic slow neutron capture. A table apportioning the heavy isotopes phenomenologically between s-process and r-process isotopes was published in 1957 in the B2FH review paper, which named the r-process and outlined the physics that guides it. Alastair G. W. Cameron also published a smaller study about the r-process in the same year.
The stationary r-process as described by the B2FH paper was first demonstrated in a time-dependent calculation at Caltech by Phillip A. Seeger, William A. Fowler and Donald D. Clayton, who found that no single temporal snapshot matched the solar r-process abundances, but that, when superposed, the snapshots did achieve a successful characterization of the r-process abundance distribution. Shorter-time distributions emphasize abundances at atomic weights below about 140, whereas longer-time distributions emphasize those above. Subsequent treatments of the r-process reinforced those temporal features. Seeger et al. were also able to construct a more quantitative apportionment between s-process and r-process of the abundance table of heavy isotopes, thereby establishing a more reliable abundance curve for the r-process isotopes than B2FH had been able to define. Today, the r-process abundances are determined using their technique of subtracting the more reliable s-process isotopic abundances from the total isotopic abundances and attributing the remainder to r-process nucleosynthesis. That r-process abundance curve (vs. atomic weight) has provided for many decades the target for theoretical computations of abundances synthesized by the physical r-process.
The creation of free neutrons by electron capture during the rapid collapse to high density of a supernova core, along with quick assembly of some neutron-rich seed nuclei, makes the r-process a primary nucleosynthesis process: one that can occur even in a star initially of pure H and He. This is in contrast to the B2FH designation of the process as a secondary one building on preexisting iron. Primary stellar nucleosynthesis begins earlier in the galaxy than does secondary nucleosynthesis. Alternatively, the high density of neutrons within neutron stars would be available for rapid assembly into r-process nuclei if a collision were to eject portions of a neutron star, which then rapidly expands, freed from confinement. That sequence could also begin earlier in galactic time than would s-process nucleosynthesis; so each scenario fits the observed early growth of r-process abundances in the galaxy. Each of these scenarios is the subject of active theoretical research.
Observational evidence of the early r-process enrichment of interstellar gas and of subsequent newly formed stars, as applied to the abundance evolution of the galaxy of stars, was first laid out by James W. Truran in 1981. He and subsequent astronomers showed that the pattern of heavy-element abundances in the earliest metal-poor stars matched that of the shape of the solar r-process curve, as if the s-process component were missing. This was consistent with the hypothesis that the s-process had not yet begun to enrich interstellar gas when these young stars missing the s-process abundances were born from that gas, for it requires about 100 million years of galactic history for the s-process to get started whereas the r-process can begin after two million years. These s-process–poor, r-process–rich stellar compositions must have been born earlier than any s-process, showing that the r-process emerges from quickly evolving massive stars that become supernovae and leave neutron-star remnants that can merge with another neutron star. The primary nature of the early r-process thereby derives from observed abundance spectra in old stars that had been born early, when the galactic metallicity was still small, but that nonetheless contain their complement of r-process nuclei.
Either interpretation, though generally supported by supernova experts, has yet to achieve a totally satisfactory calculation of r-process abundances because the overall problem is numerically formidable. However, existing results are supportive; in 2017, new data about the r-process was discovered when the LIGO and Virgo gravitational-wave observatories discovered a merger of two neutron stars ejecting r-process matter. See Astrophysical sites below.
Noteworthy is that the r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
Nuclear physics
There are three natural candidate sites for r-process nucleosynthesis where the required conditions are thought to exist: low-mass supernovae, Type II supernovae, and neutron star mergers.
Immediately after the severe compression of electrons in a Type II supernova, beta-minus decay is blocked. This is because the high electron density fills all available free electron states up to a Fermi energy which is greater than the energy of nuclear beta decay. However, nuclear capture of those free electrons still occurs, and causes increasing neutronization of matter. This results in an extremely high density of free neutrons which cannot decay, on the order of 10²⁴ neutrons per cm³, and high temperatures. As this material re-expands and cools, neutron capture by still-existing heavy nuclei occurs much faster than beta-minus decay. As a consequence, the r-process runs up along the neutron drip line and highly unstable neutron-rich nuclei are created.
Three processes which affect the climbing of the neutron drip line are a notable decrease in the neutron-capture cross section in nuclei with closed neutron shells, the inhibiting process of photodisintegration, and the degree of nuclear stability in the heavy-isotope region. Neutron captures in r-process nucleosynthesis lead to the formation of neutron-rich, weakly bound nuclei with neutron separation energies as low as 2 MeV. At this stage, closed neutron shells at N = 50, 82, and 126 are reached, and neutron capture is temporarily paused. These so-called waiting points are characterized by increased binding energy relative to heavier isotopes, leading to low neutron capture cross sections and a buildup of semi-magic nuclei that are more stable toward beta decay. In addition, nuclei beyond the shell closures are susceptible to quicker beta decay owing to their proximity to the drip line; for these nuclei, beta decay occurs before further neutron capture. Waiting point nuclei are then allowed to beta decay toward stability before further neutron capture can occur, resulting in a slowdown or freeze-out of the reaction.
Decreasing nuclear stability terminates the r-process when its heaviest nuclei become unstable to spontaneous fission, when the total number of nucleons approaches 270. The fission barrier may be low enough before 270 such that neutron capture might induce fission instead of continuing up the neutron drip line. After the neutron flux decreases, these highly unstable radioactive nuclei undergo a rapid succession of beta decays until they reach more stable, neutron-rich nuclei. While the s-process creates an abundance of stable nuclei having closed neutron shells, the r-process, in neutron-rich predecessor nuclei, creates an abundance of radioactive nuclei about 10 amu below the s-process peaks. These abundance peaks correspond to stable isobars produced from successive beta decays of waiting point nuclei having N = 50, 82, and 126—which are about 10 protons removed from the line of beta stability.
The r-process also occurs in thermonuclear weapons, and was responsible for the initial discovery of neutron-rich almost stable isotopes of actinides like plutonium-244 and the new elements einsteinium and fermium (atomic numbers 99 and 100) in the 1950s. It has been suggested that multiple nuclear explosions would make it possible to reach the island of stability, as the affected nuclides (starting with uranium-238 as seed nuclei) would not have time to beta decay all the way to the quickly spontaneously fissioning nuclides at the line of beta stability before absorbing more neutrons in the next explosion, thus providing a chance to reach neutron-rich superheavy nuclides like copernicium-291 and -293 which may have half-lives of centuries or millennia.
Astrophysical sites
The most probable candidate site for the r-process has long been suggested to be core-collapse supernovae (spectral types Ib, Ic and II), which may provide the necessary physical conditions for the r-process. However, the very low abundance of r-process nuclei in the interstellar gas limits the amount each can have ejected. It requires either that only a small fraction of supernovae eject r-process nuclei to the interstellar medium, or that each supernova ejects only a very small amount of r-process material. The ejected material must be relatively neutron-rich, a condition which has been difficult to achieve in models, so that astrophysicists remain uneasy about their adequacy for successful r-process yields.
In 2017, new astronomical data about the r-process was found in data from the merger of two neutron stars. Using the gravitational wave data captured in GW170817 to identify the location of the merger, several teams observed and studied optical data of the merger, finding spectroscopic evidence of r-process material thrown off by the merging neutron stars. The bulk of this material seems to consist of two types: hot blue masses of highly radioactive r-process matter of lower-mass-range heavy nuclei (such as strontium) and cooler red masses of higher-mass-number r-process nuclei rich in actinides (such as uranium, thorium, and californium). When released from the huge internal pressure of the neutron star, these ejecta expand and form seed heavy nuclei that rapidly capture free neutrons, and radiate detected optical light for about a week. Such duration of luminosity would not be possible without heating by internal radioactive decay, which is provided by r-process nuclei near their waiting points. Two distinct mass regions for the r-process yields have been known since the first time-dependent calculations of the r-process. Because of these spectroscopic features it has been argued that such nucleosynthesis in the Milky Way has been primarily ejecta from neutron-star mergers rather than from supernovae.
These results offer a new possibility for clarifying six decades of uncertainty over the site of origin of r-process nuclei. Confirming their relevance to the r-process is that it is radiogenic power from the radioactive decay of r-process nuclei that maintains the visibility of these spun-off fragments; otherwise they would dim quickly. Such alternative sites were first seriously proposed in 1974 as decompressing neutron star matter, which it was suggested would be ejected from neutron stars merging with black holes in compact binaries. In 1989 (and 1999) this scenario was extended to binary neutron star mergers (a binary star system of two neutron stars that collide). After preliminary identification of these sites, the scenario was confirmed in GW170817. Current astrophysical models suggest that a single neutron star merger event may have generated between 3 and 13 Earth masses of gold.
| Physical sciences | Stellar astronomy | Astronomy |
352908 | https://en.wikipedia.org/wiki/S-process | S-process | The slow neutron-capture process, or s-process, is a series of reactions in nuclear astrophysics that occur in stars, particularly asymptotic giant branch stars. The s-process is responsible for the creation (nucleosynthesis) of approximately half the atomic nuclei heavier than iron.
In the s-process, a seed nucleus undergoes neutron capture to form an isotope with one higher atomic mass. If the new isotope is stable, a series of increases in mass can occur, but if it is unstable, then beta decay will occur, producing an element of the next higher atomic number. The process is slow (hence the name) in the sense that there is sufficient time for this radioactive decay to occur before another neutron is captured. A series of these reactions produces stable isotopes by moving along the valley of beta-decay stable isobars in the table of nuclides.
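This capture-or-decay rule lends itself to a simple simulation. The sketch below is a deliberately tiny toy model with a hand-coded, partial stability table; real s-process network calculations use full nuclear data and branching ratios, and all names here are illustrative:

```python
# Toy s-process walk from an iron seed: capture a neutron when the current
# isotope is stable; otherwise let it beta-decay (Z -> Z + 1, A unchanged)
# until a stable isobar is reached, then resume captures.
STABLE = {  # element symbol -> stable mass numbers (partial, Fe through Zn)
    "Fe": {54, 56, 57, 58}, "Co": {59}, "Ni": {58, 60, 61, 62, 64},
    "Cu": {63, 65}, "Zn": {64, 66, 67, 68, 70},
}
NEXT_ELEMENT = {"Fe": "Co", "Co": "Ni", "Ni": "Cu", "Cu": "Zn"}

def s_process_walk(element: str, mass: int, captures: int) -> list[str]:
    path = [f"{element}-{mass}"]
    for _ in range(captures):
        mass += 1                            # slow neutron capture: A -> A + 1
        while mass not in STABLE[element]:
            element = NEXT_ELEMENT[element]  # beta-minus decay: Z -> Z + 1
        path.append(f"{element}-{mass}")
    return path

print(" -> ".join(s_process_walk("Fe", 56, 8)))
# Fe-56 -> Fe-57 -> Fe-58 -> Co-59 -> Ni-60 -> Ni-61 -> Ni-62 -> Cu-63 -> Zn-64
```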
A range of elements and isotopes can be produced by the s-process, because of the intervention of alpha decay steps along the reaction chain. The relative abundances of elements and isotopes produced depends on the source of the neutrons and how their flux changes over time. Each branch of the s-process reaction chain eventually terminates at a cycle involving lead, bismuth, and polonium.
The s-process contrasts with the r-process, in which successive neutron captures are rapid: they happen more quickly than the beta decay can occur. The r-process dominates in environments with higher fluxes of free neutrons; it produces heavier elements and more neutron-rich isotopes than the s-process. Together the two processes account for most of the relative abundance of chemical elements heavier than iron.
History
The s-process was seen to be needed from the relative abundances of isotopes of heavy elements and from a newly published table of abundances by Hans Suess and Harold Urey in 1956. Among other things, these data showed abundance peaks for strontium, barium, and lead, which, according to quantum mechanics and the nuclear shell model, are particularly stable nuclei, much as the noble gases are chemically inert. This implied that some abundant nuclei must be created by slow neutron capture, and it was only a matter of determining how other nuclei could be accounted for by such a process. A table apportioning the heavy isotopes between s-process and r-process was published in the famous B2FH review paper in 1957. There it was also argued that the s-process occurs in red giant stars. In a particularly illustrative case, the element technetium, whose longest half-life is 4.2 million years, had been discovered in S-, M-, and N-type stars in 1952 by Paul W. Merrill. Since these stars were thought to be billions of years old, the presence of technetium in their outer atmospheres was taken as evidence of its recent creation there, probably unconnected with the nuclear fusion in the deep interior of the star that provides its power.
A calculable model for creating the heavy isotopes from iron seed nuclei in a time-dependent manner was not provided until 1961. That work showed that the large overabundances of barium observed by astronomers in certain red-giant stars could be created from iron seed nuclei if the total neutron flux (number of neutrons per unit area) was appropriate. It also showed that no single value for neutron flux could account for the observed s-process abundances, and that a wide range is required: the number of iron seed nuclei exposed to a given flux must decrease as the flux becomes stronger. This work also showed that the curve of the product of neutron-capture cross section times abundance is not a smoothly falling curve, as B2FH had sketched, but rather has a ledge-precipice structure. A series of papers in the 1970s by Donald D. Clayton, utilizing an exponentially declining neutron flux as a function of the number of iron seeds exposed, became the standard model of the s-process and remained so until the details of AGB-star nucleosynthesis became sufficiently advanced that they became a standard model for s-process element formation based on stellar structure models. Important series of measurements of neutron-capture cross sections were reported from Oak Ridge National Lab in 1965 and by Karlsruhe Nuclear Physics Center in 1982 and subsequently; these placed the s-process on the firm quantitative basis that it enjoys today.
The s-process in stars
The s-process is believed to occur mostly in asymptotic giant branch stars, seeded by iron nuclei left by a supernova during a previous generation of stars. In contrast to the r-process, which is believed to occur over time scales of seconds in explosive environments, the s-process is believed to occur over time scales of thousands of years, with decades passing between neutron captures. The extent to which the s-process moves the elements up the chart of isotopes to higher mass numbers is essentially determined by the degree to which the star in question is able to produce neutrons. The quantitative yield is also proportional to the amount of iron in the star's initial abundance distribution. Iron is the "starting material" (or seed) for this sequence of neutron captures and beta-minus decays that synthesizes new elements.
The main neutron source reactions are:
13C + 4He → 16O + n
22Ne + 4He → 25Mg + n
One distinguishes the main and the weak s-process component. The main component produces heavy elements beyond Sr and Y, and up to Pb in the lowest metallicity stars. The production sites of the main component are low-mass asymptotic giant branch stars. The main component relies on the 13C neutron source above. The weak component of the s-process, on the other hand, synthesizes s-process isotopes of elements from iron group seed nuclei to 58Fe on up to Sr and Y, and takes place at the end of helium- and carbon-burning in massive stars. It employs primarily the 22Ne neutron source. These stars will become supernovae at their demise and spew those s-process isotopes into interstellar gas.
The s-process is sometimes approximated over a small mass region using the so-called "local approximation", by which the ratio of abundances is inversely proportional to the ratio of neutron-capture cross-sections for nearby isotopes on the s-process path. This approximation is – as the name indicates – only valid locally, meaning for isotopes of nearby mass numbers, but it is invalid at magic numbers where the ledge-precipice structure dominates.
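Under the local approximation, the product σ(A)·N(A) is roughly constant along the path, so the abundance of one isotope can be estimated from a neighbour's. A minimal sketch (illustrative; the numbers are placeholders, not measured cross sections):

```python
def neighbour_abundance(n_known: float, sigma_known: float, sigma_target: float) -> float:
    """Local approximation: sigma * N is constant, so N scales as 1/sigma."""
    return n_known * sigma_known / sigma_target

# A neighbouring isotope with twice the neutron-capture cross section is
# expected to be half as abundant (valid only away from magic numbers).
print(neighbour_abundance(1.0, 300.0, 600.0))  # 0.5
```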
Because of the relatively low neutron fluxes expected to occur during the s-process (on the order of 10⁵ to 10¹¹ neutrons per cm² per second), this process does not have the ability to produce any of the heavy radioactive isotopes such as thorium or uranium. The cycle that terminates the s-process is:
209Bi captures a neutron, producing 210Bi, which decays to 210Po by β− decay. 210Po in turn decays to 206Pb by α decay:
209Bi + n → 210Bi + γ
210Bi → 210Po + e− + ν̄e
210Po → 206Pb + 4He
206Pb then captures three neutrons, producing 209Pb, which decays to 209Bi by β− decay, restarting the cycle:
206Pb + 3n → 209Pb
209Pb → 209Bi + e− + ν̄e
The net result of this cycle therefore is that 4 neutrons are converted into one alpha particle, two electrons, two anti-electron neutrinos and gamma radiation:
4n → 4He + 2e− + 2ν̄e + γ
The process thus terminates in bismuth, the heaviest "stable" element, and polonium, the first non-primordial element after bismuth. Bismuth is actually slightly radioactive, but with a half-life so long—a billion times the present age of the universe—that it is effectively stable over the lifetime of any existing star. Polonium-210, however, decays with a half-life of 138 days to stable lead-206.
The s-process measured in stardust
Stardust is one component of cosmic dust: individual solid grains that condensed during mass loss from various long-dead stars. Stardust existed throughout interstellar gas before the birth of the Solar System and was trapped in meteorites when they assembled from interstellar matter contained in the planetary accretion disk in the early Solar System. Today the grains are found in meteorites, where they have been preserved; meteoriticists habitually refer to them as presolar grains. The s-process-enriched grains are mostly silicon carbide (SiC). The origin of these grains is demonstrated by laboratory measurements of extremely unusual isotopic abundance ratios within the grain. The first experimental detection of s-process xenon isotopes was made in 1978, confirming earlier predictions that s-process isotopes would be enriched, nearly pure, in stardust from red giant stars. These discoveries launched new insight into astrophysics and into the origin of meteorites in the Solar System. Silicon carbide grains condense in the atmospheres of AGB stars and thus trap isotopic abundance ratios as they existed in that star. Because AGB stars are the main site of the s-process in the galaxy, the heavy elements in the SiC grains contain almost pure s-process isotopes in elements heavier than iron. This fact has been demonstrated repeatedly by sputtering-ion mass spectrometer studies of these stardust presolar grains. Several surprising results have shown that within them the ratio of s-process to r-process abundances is somewhat different from what was previously assumed. It has also been shown with trapped isotopes of krypton and xenon that the s-process abundances in AGB-star atmospheres changed with time or from star to star, presumably with the strength of the neutron flux in that star or perhaps its temperature. This is a frontier of s-process studies in the 2000s.
| Physical sciences | Stellar astronomy | Astronomy |
352960 | https://en.wikipedia.org/wiki/Hysterectomy | Hysterectomy | Hysterectomy is the surgical removal of the uterus and cervix. Supracervical hysterectomy refers to removal of the uterus while the cervix is spared. These procedures may also involve removal of the ovaries (oophorectomy), fallopian tubes (salpingectomy), and other surrounding structures. The terms "partial" and "total" hysterectomy are lay terms that incorrectly describe the addition or omission of oophorectomy at the time of hysterectomy. These procedures are usually performed by a gynecologist. Removal of the uterus renders the patient unable to bear children (as does removal of the ovaries and fallopian tubes) and has surgical risks as well as long-term effects, so the surgery is normally recommended only when other treatment options are not available or have failed. It is the second most commonly performed gynecological surgical procedure, after cesarean section, in the United States, where nearly 68 percent are performed for benign conditions such as endometriosis, irregular bleeding, and uterine fibroids. It is expected that the frequency of hysterectomies for non-malignant indications will continue to fall given the development of alternative treatment options.
Medical uses
Hysterectomy is a major surgical procedure that has risks and benefits. It affects the hormonal balance and overall health of patients. Because of this, hysterectomy is normally recommended as a last resort after pharmaceutical or other surgical options have been exhausted to remedy certain intractable and severe uterine/reproductive system conditions. There may be other reasons for a hysterectomy to be requested. Such conditions and/or indications include, but are not limited to:
Endometriosis: growth of the uterine lining outside the uterine cavity. This inappropriate tissue growth can lead to pain and bleeding.
Adenomyosis: a form of endometriosis, where the uterine lining has grown into and sometimes through the uterine wall musculature. This can thicken the uterine walls and also contribute to pain and bleeding.
Heavy menstrual bleeding: irregular or excessive menstrual bleeding lasting longer than a week. It can disturb regular quality of life and may be indicative of a more serious condition.
Uterine fibroids: benign growths on the uterus wall. These muscular noncancerous tumors can grow in single form or in clusters and can cause extreme pain and bleeding.
Uterine prolapse: when the uterus sags down due to weakened or stretched pelvic floor muscles potentially causing the uterus to protrude out of the vagina in more severe cases.
Reproductive system cancer prevention: especially if there is a strong family history of reproductive system cancers (especially breast cancer in conjunction with BRCA1 or BRCA2 mutation), or as part of recovery from such cancers.
Gynecologic cancer: depending on the type of hysterectomy, can aid in treatment of cancer or precancer of the endometrium, cervix, or uterus. Protecting against or treating cancer of the ovaries requires an oophorectomy.
Transgender (trans) male affirmation: helps relieve gender dysphoria, prevents future gynecologic problems, and supports transition, including obtaining new legal gender documentation.
Severe developmental disabilities: this treatment is controversial at best. In the United States, specific cases of sterilization due to developmental disabilities have been found by state-level Supreme Courts to violate the patient's constitutional and common-law rights.
Postpartum: to remove either a severe case of placenta praevia (a placenta that has either formed over or inside the birth canal) or placenta percreta (a placenta that has grown into and through the wall of the uterus to attach itself to other organs), as well as a last resort in case of excessive obstetrical haemorrhage.
Chronic pelvic pain: an etiology for the pain should be sought first, although some cases have no identifiable cause.
PMS, menstrual pain, and other psychological and physical conditions caused by the menstrual period that cause suffering and diminish quality of life.
Risks and adverse effects
In 1995, the short-term mortality (within 40 days of surgery) was reported at 0.38 cases per 1000 when performed for benign causes. Risk factors for surgical complications were the presence of fibroids, younger age (a more vascular pelvis with higher bleeding risk, and a larger uterus), dysfunctional uterine bleeding, and parity.
The mortality rate is several times higher when performed in patients who are pregnant, have cancer or other complications.
The long-term effect of hysterectomy on all-cause mortality is relatively small. Women under the age of 45 years have a significantly increased long-term mortality that is believed to be caused by the hormonal side effects of hysterectomy and prophylactic oophorectomy. This effect is not limited to pre-menopausal women; even women who have already entered menopause were shown to have experienced a decrease in long-term survivability post-oophorectomy.
Approximately 35% of women after hysterectomy undergo another related surgery within 2 years.
Ureteral injury is not uncommon and occurs in 0.2 per 1,000 cases of vaginal hysterectomy and 1.3 per 1,000 cases of abdominal hysterectomy. The injury usually occurs in the distal ureter close to the infundibulopelvic ligament or as a ureter crosses below the uterine artery, often from blind clamping and ligature placement to control hemorrhage.
Recovery
Hospital stay is 3 to 5 days or more for the abdominal procedure and between 1 and 2 days (but possibly longer) for vaginal or laparoscopically assisted vaginal procedures. After the procedure, the American College of Obstetricians and Gynecologists recommends not inserting anything into the vagina for the first 6 weeks (including inserting tampons or having sex).
Unintended oophorectomy and premature ovarian failure
Removal of one or both ovaries is performed in a substantial number of hysterectomies that were intended to be ovary sparing.
The average onset age of menopause after hysterectomy with ovarian conservation is 3.7 years earlier than average. This has been suggested to be due to the disruption of blood supply to the ovaries after a hysterectomy or to missing endocrine feedback of the uterus. The function of the remaining ovaries is significantly affected in about 40% of patients, some of whom even require hormone replacement therapy. Surprisingly, a similar and only slightly weaker effect has been observed for endometrial ablation, which is often considered an alternative to hysterectomy.
A substantial number of women develop benign ovarian cysts after a hysterectomy.
Effects on sexual life and pelvic pain
After hysterectomy for benign indications the majority of patients report improvement in sexual life and pelvic pain. A smaller share of patients report worsening of sexual life and other problems. The picture is significantly different for hysterectomy performed for malignant reasons; the procedure is often more radical with substantial side effects. A proportion of patients who undergo a hysterectomy for chronic pelvic pain continue to have pelvic pain after a hysterectomy and develop dyspareunia (painful sexual intercourse).
Premature menopause and its effects
Estrogen levels fall sharply when the ovaries are removed, removing the protective effects of estrogen on the cardiovascular and skeletal systems. This condition is often referred to as "surgical menopause", although it is substantially different from a naturally occurring menopausal state; the former is a sudden hormonal shock to the body that causes rapid onset of menopausal symptoms such as hot flashes, while the latter is a gradually occurring decrease of hormonal levels over a period of years with uterus intact and ovaries able to produce hormones even after the cessation of menstrual periods.
One study showed that risk of subsequent cardiovascular disease is substantially increased for women who had hysterectomy at age 50 or younger. No association was found for women undergoing the procedure after age 50. The risk is higher when ovaries are removed but still noticeable even when ovaries are preserved.
Several other studies have found that osteoporosis (decrease in bone density) and increased risk of bone fractures are associated with hysterectomies. This has been attributed to the modulatory effect of estrogen on calcium metabolism and the drop in serum estrogen levels after menopause can cause excessive loss of calcium leading to bone wasting.
Hysterectomies have also been linked with higher rates of heart disease and weakened bones. Those who have undergone a hysterectomy with both ovaries removed typically have reduced testosterone levels as compared to those left intact. Reduced levels of testosterone in women are predictive of height loss, which may occur as a result of reduced bone density, while increased testosterone levels in women are associated with a greater sense of sexual desire.
Oophorectomy before the age of 45 is associated with a fivefold increase in mortality from neurologic and mental disorders.
Urinary incontinence and vaginal prolapse
Urinary incontinence and vaginal prolapse are well known adverse effects that develop with high frequency a very long time after the surgery. Typically, those complications develop 10–20 years after the surgery. For this reason exact numbers are not known, and risk factors are poorly understood. It is also unknown if the choice of surgical technique has any effect. It has been assessed that the risk for urinary incontinence is approximately doubled within 20 years after hysterectomy. One long-term study found a 2.4 fold increased risk for surgery to correct urinary stress incontinence following hysterectomy.
The risk for vaginal prolapse depends on factors such as number of vaginal deliveries, the difficulty of those deliveries, and the type of labor. Overall incidence is approximately doubled after hysterectomy.
Adhesion formation and bowel obstruction
The formation of postoperative adhesions is a particular risk after hysterectomy because of the extent of dissection involved as well as the fact the hysterectomy wound is in the most gravity-dependent part of the pelvis into which a loop of bowel may easily fall. In one review, incidence of small bowel obstruction due to intestinal adhesion was found to be 15.6% in non-laparoscopic total abdominal hysterectomies vs. 0.0% in laparoscopic hysterectomies.
Wound infection
Wound infection occurs in approximately 3% of cases of abdominal hysterectomy. The risk is increased by obesity, diabetes, immunodeficiency disorder, use of systemic corticosteroids, smoking, wound hematoma, and preexisting infection such as chorioamnionitis and pelvic inflammatory disease. Such wound infections mainly take the form of either incisional abscess or wound cellulitis. Typically, both confer erythema, but only an incisional abscess confers purulent drainage. The recommended treatment of an incisional abscess after hysterectomy is by incision and drainage, and then coverage by a thin layer of gauze followed by sterile dressing. The dressing should be changed and the wound irrigated with normal saline at least twice each day. In addition, it is recommended to administer an antibiotic active against staphylococci and streptococci, preferably vancomycin when there is a risk of MRSA. The wound can be allowed to close by secondary intention. Alternatively, if the infection is cleared and healthy granulation tissue is evident at the base of the wound, the edges of the incision may be reapproximated, such as by using butterfly stitches, staples or sutures. Sexual intercourse remains possible after hysterectomy. Reconstructive surgery remains an option for women who have experienced benign and malignant conditions.
Other rare problems
Hysterectomy may cause an increased risk of the relatively rare renal cell carcinoma. The increased risk is particularly pronounced for young women; the risk was lower after vaginally performed hysterectomies. Hormonal effects or injury of the ureter were considered as possible explanations. In some cases the renal cell carcinoma may be a manifestation of an undiagnosed hereditary leiomyomatosis and renal cell cancer syndrome.
Removal of the uterus without removing the ovaries can produce a situation that on rare occasions can result in ectopic pregnancy due to an undetected fertilization that had yet to descend into the uterus before surgery. Two cases have been identified and profiled in an issue of the Blackwell Journal of Obstetrics and Gynecology; over 20 other cases have been discussed in additional medical literature. On very rare occasions, sexual intercourse after hysterectomy may cause a transvaginal evisceration of the small bowel. The vaginal cuff is the uppermost region of the vagina that has been sutured closed. A rare complication, it can dehisce and allow the evisceration of the small bowel into the vagina.
Alternatives
Depending on the indication there are alternatives to hysterectomy:
Heavy bleeding
Levonorgestrel intrauterine devices are highly effective at controlling dysfunctional uterine bleeding (DUB) or menorrhagia and should be considered before any surgery.
Menorrhagia (heavy or abnormal menstrual bleeding) may also be treated with the less invasive endometrial ablation which is an outpatient procedure in which the lining of the uterus is destroyed with heat, mechanically or by radio frequency ablation. Endometrial ablation greatly reduces or eliminates monthly bleeding in ninety percent of patients with DUB. It is not effective for patients with very thick uterine lining or uterine fibroids.
Uterine fibroids
Levonorgestrel intrauterine devices are highly effective in limiting menstrual blood flow and improving other symptoms. Side effects are typically very moderate because the levonorgestrel (a progestin) is released in low concentration locally. There is now substantial evidence that levonorgestrel IUDs provide good symptomatic relief for women with fibroids.
Uterine fibroids may be removed and the uterus reconstructed in a procedure called "myomectomy". A myomectomy may be performed through an open incision, laparoscopically, or through the vagina (hysteroscopy).
Uterine artery embolization (UAE) is a minimally invasive procedure for treatment of uterine fibroids. Under local anesthesia a catheter is introduced into the femoral artery at the groin and advanced under radiographic control into the uterine artery. A mass of microspheres or polyvinyl alcohol (PVA) material (an embolus) is injected into the uterine arteries in order to block the flow of blood through those vessels. The restriction in blood supply usually results in significant reduction of fibroids and improvement of heavy bleeding tendency. The 2012 Cochrane review comparing hysterectomy and UAE did not find any major advantage for either procedure. While UAE is associated with shorter hospital stay and a more rapid return to normal daily activities, it was also associated with a higher risk for minor complications later on. There were no differences between UAE and hysterectomy with regards to major complications.
Uterine fibroids can be removed with a non-invasive procedure called Magnetic Resonance guided Focused Ultrasound (MRgFUS).
Uterine prolapse
Prolapse may also be corrected surgically without removal of the uterus. There are several strategies that can be utilized to help strengthen pelvic floor muscles and prevent the worsening of prolapse. These include, but are not limited to, use of "kegel exercises", vaginal pessary, constipation relief, weight management, and care when lifting heavy objects.
Types
Hysterectomy, in the literal sense of the word, means merely removal of the uterus. However other organs such as ovaries, fallopian tubes, and the cervix are very frequently removed as part of the surgery.
Radical hysterectomy: complete removal of the uterus, cervix, upper vagina, and parametrium. Indicated for cancer. Lymph nodes, ovaries, and fallopian tubes are also usually removed in this situation.
Total hysterectomy: complete removal of the uterus and cervix, with or without oophorectomy.
Subtotal hysterectomy: removal of the uterus, leaving the cervix in situ.
Subtotal (supracervical) hysterectomy was originally proposed with the expectation that it might improve sexual functioning after hysterectomy; it had been postulated that removing the cervix causes excessive neurologic and anatomic disruption, leading to vaginal shortening, vaginal vault prolapse, and vaginal cuff granulations. These theoretical advantages were not confirmed in practice, but other advantages over total hysterectomy emerged. The principal disadvantage is that the risk of cervical cancer is not eliminated, and women may continue cyclical bleeding (although substantially less than before the surgery).
These issues were addressed in a systematic review of total versus supracervical hysterectomy for benign gynecological conditions, which reported the following findings:
There was no difference in the rates of incontinence, constipation, measures of sexual function, or alleviation of pre-surgery symptoms.
Length of surgery and amount of blood lost during surgery were significantly reduced during supracervical hysterectomy compared to total hysterectomy, but there was no difference in post-operative transfusion rates.
Febrile morbidity was less likely and ongoing cyclic vaginal bleeding one year after surgery was more likely after supracervical hysterectomy.
There was no difference in the rates of other complications, recovery from surgery, or readmission rates.
In the short-term, randomized trials have shown that cervical preservation or removal does not affect the rate of subsequent pelvic organ prolapse.
Supracervical hysterectomy does not eliminate the possibility of having cervical cancer since the cervix itself is left intact and may be contraindicated in women with increased risk of this cancer; regular pap smears to check for cervical dysplasia or cancer are still needed.
Technique
Hysterectomy can be performed in different ways. The oldest known technique is vaginal hysterectomy. The first planned hysterectomy was performed by Konrad Johann Martin Langenbeck, surgeon general of the Hanoverian army, although there are records of vaginal hysterectomy for prolapse going back as far as 50 BC.
The first recorded abdominal hysterectomy was performed by Ephraim McDowell in 1809, on a kitchen table, for a mother of five with a large ovarian mass.
In modern medicine, laparoscopically assisted vaginal techniques (with additional instruments passing through ports in small abdominal incisions, near or in the navel) and total laparoscopic techniques have been developed.
Abdominal hysterectomy
Most hysterectomies in the United States are done via laparotomy (abdominal incision, not to be confused with laparoscopy). A transverse (Pfannenstiel) incision is made through the abdominal wall, usually above the pubic bone, as close to the upper hair line of the individual's lower pelvis as possible, similar to the incision made for a caesarean section. This technique allows physicians the greatest access to the reproductive structures and is normally done for removal of the entire reproductive complex. The recovery time for an open hysterectomy is 4–6 weeks and sometimes longer due to the need to cut through the abdominal wall. Historically, the biggest problem with this technique was infections, but infection rates are well-controlled and not a major concern in modern medical practice. An open hysterectomy provides the most effective way to explore the abdominal cavity and perform complicated surgeries. Before the refinement of the vaginal and laparoscopic vaginal techniques, it was also the only possibility to achieve subtotal hysterectomy; meanwhile, the vaginal route is the preferable technique in most circumstances.
Vaginal hysterectomy
Vaginal hysterectomy is performed entirely through the vaginal canal and has clear advantages over abdominal surgery such as fewer complications, shorter hospital stays and shorter healing time. Abdominal hysterectomy, the most common method, is used in cases such as after caesarean delivery, when the indication is cancer, when complications are expected, or surgical exploration is required.
Laparoscopic-assisted vaginal hysterectomy
With the development of laparoscopic techniques in the 1970s and 1980s, the "laparoscopic-assisted vaginal hysterectomy" (LAVH) has gained great popularity among gynecologists because compared with the abdominal procedure it is less invasive and the post-operative recovery is much faster. It also allows better exploration and slightly more complicated surgeries than the vaginal procedure. LAVH begins with laparoscopy and is completed such that the final removal of the uterus (with or without removing the ovaries) is via the vaginal canal. Thus, LAVH is also a total hysterectomy; the cervix is removed with the uterus. If the cervix is removed along with the uterus, the upper portion of the vagina is sutured together and called the vaginal cuff.
Laparoscopic-assisted supracervical hysterectomy
The "laparoscopic-assisted supracervical hysterectomy" (LASH) was later developed to remove the uterus without removing the cervix using a morcellator which cuts the uterus into small pieces that can be removed from the abdominal cavity via the laparoscopic ports.
Total laparoscopic hysterectomy
Total laparoscopic hysterectomy (TLH) was developed in the early 1990s by Prabhat K. Ahluwalia in Upstate New York. TLH is performed solely through laparoscopes in the abdomen, starting at the top of the uterus, typically with a uterine manipulator. The entire uterus is disconnected from its attachments using long thin instruments through the "ports", and all tissue to be removed is then passed through the small abdominal incisions.
Other techniques
Supracervical (subtotal) laparoscopic hysterectomy (LSH) is performed similar to the total laparoscopic surgery but the uterus is amputated between the cervix and fundus.
Dual-port laparoscopy is a form of laparoscopic surgery using two 5 mm midline incisions: the uterus is detached through the two ports and removed through the vagina.
"Robotic hysterectomy" is a variant of laparoscopic surgery using special remotely controlled instruments that allow the surgeon finer control as well as three-dimensional magnified vision.
Comparison of techniques
Patient characteristics such as the reason for needing a hysterectomy, uterine size, descent of the uterus, presence of diseased tissues surrounding the uterus, previous surgery in the pelvic region, obesity, history of pregnancy, the possibility of endometriosis, or the need for an oophorectomy, will influence a surgeon's surgical approach when performing a hysterectomy.
Vaginal hysterectomy is recommended over other variants where possible for women with benign diseases. Vaginal hysterectomy was shown to be superior to LAVH and to some types of laparoscopic surgery, causing fewer short- and long-term complications, having a more favorable effect on sexual experience, and offering shorter recovery times at lower cost.
Laparoscopic surgery offers certain advantages when vaginal surgery is not possible but also has the disadvantage of significantly longer time required for the surgery.
In a 2004 study conducted in the UK comparing abdominal (laparotomic) and laparoscopic techniques, laparoscopic surgery was found to entail longer operating times and a higher rate of major complications, while offering much quicker healing. In another study, conducted in 2014, laparoscopy was found to be "a safe alternative to laparotomy" in patients receiving total hysterectomy for endometrial cancer. Researchers concluded the procedure "offers markedly improved perioperative outcomes with a lower reoperation rate and fewer postoperative complications when the standard of care shifts from open surgery to laparoscopy in a university hospital".
The abdominal technique is very often applied in difficult circumstances or when complications are expected. Given those circumstances, its complication rate and operating time compare very favorably with the other techniques; however, the time required for healing is much longer.
Hysterectomy by abdominal laparotomy is correlated with much higher incidence of intestinal adhesions than other techniques.
Time required for completion of surgery in the eVAL trial is reported as follows:
abdominal 55.2 minutes average, range 19–155
vaginal 46.6 minutes average, range 14–168
laparoscopic (all variants) 82.5 minutes average, range 10–325 (combined data from both trial arms)
Morcellation has been widely used especially in laparoscopic techniques and sometimes for the vaginal technique, but now appears to be associated with a considerable risk of spreading benign or malignant tumors. In April 2014, the FDA issued a memo alerting medical practitioners to the risks of power morcellation.
Robotic assisted surgery is presently used in several countries for hysterectomies. Additional research is required to determine the benefits and risks involved, compared to conventional laparoscopic surgery.
A 2014 Cochrane review found that robotic assisted surgery may have a similar complication rate when compared to conventional laparoscopic surgery. There is also evidence that, although the surgery may take longer, robotic assisted surgery may result in shorter hospital stays. More research is necessary to determine whether robotic assisted hysterectomies benefit people with cancer.
Previously reported marginal advantages of robotic assisted surgery could not be confirmed; only differences in hospital stay and cost remain statistically significant. In addition, concerns have been raised over widespread misleading marketing claims.
Incidence
Canada
In Canada, the number of hysterectomies between 2008 and 2009 was almost 47,000. The national rate for the same period was 338 per 100,000 population, down from 484 per 100,000 in 1997. The reasons for hysterectomy differed depending on whether the woman lived in an urban or rural location: urban women opted for hysterectomy mostly because of uterine fibroids, while rural women had hysterectomies mostly for menstrual disorders.
United States
Hysterectomy is the second most common major surgery among women in the United States (the first is cesarean section). In the 1980s and 1990s, this statistic was the source of concern among some consumer rights groups and puzzlement among the medical community, and brought about informed choice advocacy groups like Hysterectomy Educational Resources and Services (HERS) Foundation, founded by Nora W. Coffey in 1982.
According to the National Center for Health Statistics, of the 617,000 hysterectomies performed in 2004, 73% also involved the surgical removal of the ovaries. An estimated 22 million women in the United States have undergone this procedure. Nearly 68 percent were performed for benign conditions such as endometriosis, irregular bleeding, and uterine fibroids. That such rates are highest in the industrialized world has fueled controversy over whether many hysterectomies are performed for unwarranted reasons. More recent data suggest that the number of hysterectomies performed has declined in every state in the United States. From 2010 to 2013, there were 12 percent fewer hysterectomies performed, and the types of hysterectomies were more minimally invasive in nature, reflected by a 17 percent increase in laparoscopic procedures.
United Kingdom
In the UK, 1 in 5 women is likely to have a hysterectomy by the age of 60, and ovaries are removed in about 20% of hysterectomies.
Germany
The number of hysterectomies in Germany has been constant for many years. In 2006, 149,456 hysterectomies were performed; of these, 126,743 (84.8%) benefitted the patient without incident. Women between the ages of 40 and 49 accounted for 50 percent of hysterectomies, and those between the ages of 50 and 59 accounted for 20 percent. In 2007, the number of hysterectomies decreased to 138,164. In recent years, laparoscopic and laparoscopically assisted hysterectomy techniques have come to the fore.
Denmark
In Denmark, the number of hysterectomies from the 1980s to the 1990s decreased by 38 percent. In 1988, there were 173 such surgeries per 100,000 women, and by 1998 this number had been reduced to 107. The proportion of abdominal supracervical hysterectomies in the same time period grew from 7.5 to 41 percent. A total of 67,096 women underwent hysterectomy during these years.
| Biology and health sciences | Surgery | Health |
353021 | https://en.wikipedia.org/wiki/Homeomorphism%20%28graph%20theory%29 | Homeomorphism (graph theory) | In graph theory, two graphs G1 and G2 are homeomorphic if there is a graph isomorphism from some subdivision of G1 to some subdivision of G2. If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in diagrams), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if their diagrams are homeomorphic in the topological sense.
Subdivision and smoothing
In general, a subdivision of a graph G (sometimes known as an expansion) is a graph resulting from the subdivision of edges in G. The subdivision of some edge e with endpoints {u,v} yields a graph containing one new vertex w, and with an edge set replacing e by two new edges, {u,w} and {w,v}. For directed edges, this operation preserves their direction.
For example, the edge e, with endpoints {u,v}:
can be subdivided into two edges, e1 and e2, meeting at a new vertex w of degree 2 (indegree 1 and outdegree 1 in the directed case):
Determining whether, for graphs G and H, H is homeomorphic to a subgraph of G is an NP-complete problem.
Reversion
The reverse operation, smoothing out or smoothing a vertex w with regard to the pair of edges (e1, e2) incident on w, removes both edges containing w and replaces (e1, e2) with a new edge that connects the other endpoints of the pair. Note that only degree-2 (i.e., 2-valent) vertices can be smoothed. The limit of this operation is the graph with no remaining degree-2 vertices.
For example, the simple connected graph with two edges, e1 {u,w} and e2 {w,v}:
has a vertex (namely w) that can be smoothed away, resulting in:
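The subdivision and smoothing operations are simple to state in code. The following minimal Python sketch assumes an undirected simple graph stored as a set of frozenset edges; the representation and the names subdivide_edge and smooth_vertex are illustrative only, not from any library:

    def subdivide_edge(edges, u, v, w):
        # Replace edge {u,v} with {u,w} and {w,v}; w must be a new vertex.
        e = frozenset((u, v))
        assert e in edges, "edge must exist"
        return (edges - {e}) | {frozenset((u, w)), frozenset((w, v))}

    def smooth_vertex(edges, w):
        # Remove a degree-2 vertex w and reconnect its two neighbors.
        incident = [e for e in edges if w in e]
        assert len(incident) == 2, "only degree-2 vertices can be smoothed"
        (u,) = incident[0] - {w}
        (v,) = incident[1] - {w}
        return (edges - set(incident)) | {frozenset((u, v))}

    # Subdividing an edge and then smoothing the new vertex restores the graph.
    g = {frozenset(("u", "v"))}
    g2 = subdivide_edge(g, "u", "v", "w")   # edges {u,w} and {w,v}
    assert smooth_vertex(g2, "w") == g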
Barycentric subdivisions
The barycentric subdivision subdivides each edge of the graph. This is a special subdivision, as it always results in a bipartite graph. This procedure can be repeated, so that the nth barycentric subdivision is the barycentric subdivision of the (n−1)st barycentric subdivision of the graph. The second such subdivision is always a simple graph.
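Reusing the edge-set representation from the sketch above, the iterated barycentric subdivision can be sketched as follows (the helper name barycentric and the tagged-tuple midpoint vertices are illustrative):

    def barycentric(edges):
        out = set()
        for e in edges:
            u, v = tuple(e)
            w = ("mid", e)            # a fresh vertex tied to this edge
            out |= {frozenset((u, w)), frozenset((w, v))}
        return out

    triangle = {frozenset(p) for p in (("a", "b"), ("b", "c"), ("c", "a"))}
    once = barycentric(triangle)      # 6 edges, bipartite (old vs. new vertices)
    twice = barycentric(once)         # the second subdivision is a simple graph
    assert len(once) == 6 and len(twice) == 12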
Embedding on a surface
It is evident that subdividing a graph preserves planarity. Kuratowski's theorem states that
a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (complete graph on five vertices) or K3,3 (complete bipartite graph on six vertices, three of which connect to each of the other three).
In fact, a graph homeomorphic to K5 or K3,3 is called a Kuratowski subgraph.
A generalization, following from the Robertson–Seymour theorem, asserts that for each integer g there is a finite obstruction set of graphs such that a graph H is embeddable on a surface of genus g if and only if H contains no homeomorphic copy of any graph in the obstruction set. For example, the obstruction set for genus 0 consists of the Kuratowski subgraphs K5 and K3,3.
Example
In the following example, graph G and graph H are homeomorphic.
If G′ is the graph created by subdivision of the outer edges of G and H′ is the graph created by subdivision of the inner edge of H, then G′ and H′ have a similar graph drawing:
Therefore, there exists an isomorphism between G′ and H′, meaning G and H are homeomorphic.
Mixed graphs
The following mixed graphs are homeomorphic; the directed edges are drawn with an intermediate arrowhead.
| Mathematics | Graph theory | null |
353042 | https://en.wikipedia.org/wiki/Graph%20minor | Graph minor | In graph theory, an undirected graph H is called a minor of the graph G if H can be formed from G by deleting edges and vertices and by contracting edges.
The theory of graph minors began with Wagner's theorem that a graph is planar if and only if its minors include neither the complete graph K5 nor the complete bipartite graph K3,3. The Robertson–Seymour theorem implies that an analogous forbidden minor characterization exists for every property of graphs that is preserved by deletions and edge contractions.
For every fixed graph H, it is possible to test whether H is a minor of an input graph G in polynomial time; together with the forbidden minor characterization this implies that every graph property preserved by deletions and contractions may be recognized in polynomial time.
Other results and conjectures involving graph minors include the graph structure theorem, according to which the graphs that do not have a fixed graph H as a minor may be formed by gluing together simpler pieces, and Hadwiger's conjecture relating the inability to color a graph to the existence of a large complete graph as a minor of it. Important variants of graph minors include the topological minors and immersion minors.
Definitions
An edge contraction is an operation that removes an edge from a graph while simultaneously merging the two vertices it used to connect. An undirected graph H is a minor of another undirected graph G if a graph isomorphic to H can be obtained from G by contracting some edges, deleting some edges, and deleting some isolated vertices. The order in which a sequence of such contractions and deletions is performed on G does not affect the resulting graph H.
Graph minors are often studied in the more general context of matroid minors. In this context, it is common to assume that all graphs are connected, with self-loops and multiple edges allowed (that is, they are multigraphs rather than simple graphs); the contraction of a loop and the deletion of a cut-edge are forbidden operations. This point of view has the advantage that edge deletions leave the rank of a graph unchanged, and edge contractions always reduce the rank by one.
In other contexts (such as with the study of pseudoforests) it makes more sense to allow the deletion of a cut-edge, and to allow disconnected graphs, but to forbid multigraphs. In this variation of graph minor theory, a graph is always simplified after any edge contraction to eliminate its self-loops and multiple edges.
A function f on graphs is referred to as "minor-monotone" if, whenever H is a minor of G, one has f(H) ≤ f(G).
Example
In the following example, graph H is a minor of graph G:
H.
G.
The following diagram illustrates this. First construct a subgraph of G by deleting the dashed edges (and the resulting isolated vertex), and then contract the gray edge (merging the two vertices it connects):
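These operations are small enough to sketch directly. Below is a minimal Python sketch, assuming an undirected simple graph stored as a set of frozenset edges (the representation and the function names delete_edge, delete_vertex, and contract_edge are illustrative, not from any library):

    def delete_edge(edges, u, v):
        return edges - {frozenset((u, v))}

    def delete_vertex(edges, v):
        return {e for e in edges if v not in e}

    def contract_edge(edges, u, v):
        # Merge v into u; the contracted edge disappears, and duplicate
        # edges collapse automatically because the graph is a set.
        assert frozenset((u, v)) in edges
        out = set()
        for e in edges - {frozenset((u, v))}:
            e = frozenset(u if x == v else x for x in e)
            if len(e) == 2:           # guard against loops (kept simple)
                out.add(e)
        return out

    # Contracting one edge of a 4-cycle yields a triangle, so K3 is a
    # minor of C4.
    c4 = {frozenset(p) for p in (("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"))}
    k3 = contract_edge(c4, "a", "b")
    assert len(k3) == 3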
Major results and conjectures
It is straightforward to verify that the graph minor relation forms a partial order on the isomorphism classes of finite undirected graphs: it is transitive (a minor of a minor of G is a minor of G itself), and two graphs G and H can only be minors of each other if they are isomorphic, because any nontrivial minor operation removes edges or vertices. A deep result by Neil Robertson and Paul Seymour states that this partial order is actually a well-quasi-ordering: if an infinite list G1, G2, ... of finite graphs is given, then there always exist two indices i < j such that Gi is a minor of Gj. Another equivalent way of stating this is that any set of graphs can have only a finite number of minimal elements under the minor ordering. This result proved a conjecture formerly known as Wagner's conjecture, after Klaus Wagner; Wagner had conjectured it long earlier, but only published it in 1970.
In the course of their proof, Seymour and Robertson also prove the graph structure theorem, in which they determine, for any fixed graph H, the rough structure of any graph that does not have H as a minor. The statement of the theorem is itself long and involved, but in short it establishes that such a graph must have the structure of a clique-sum of smaller graphs that are modified in small ways from graphs embedded on surfaces of bounded genus.
Thus, their theory establishes fundamental connections between graph minors and topological embeddings of graphs.
For any graph H, the simple H-minor-free graphs must be sparse, which means that the number of edges is less than some constant multiple of the number of vertices. More specifically, if H has h vertices, then a simple n-vertex H-minor-free graph can have at most O(nh√(log h)) edges, and some Kh-minor-free graphs have at least this many edges. Thus, if H has h vertices, then H-minor-free graphs have average degree O(h√(log h)) and furthermore degeneracy O(h√(log h)). Additionally, the H-minor-free graphs have a separator theorem similar to the planar separator theorem for planar graphs: for any fixed H, and any n-vertex H-minor-free graph G, it is possible to find a subset of O(√n) vertices whose removal splits G into two (possibly disconnected) subgraphs with at most 2n/3 vertices per subgraph. Even stronger, for any fixed H, H-minor-free graphs have treewidth O(√n).
The Hadwiger conjecture in graph theory proposes that if a graph G does not contain a minor isomorphic to the complete graph on k vertices, then G has a proper coloring with k − 1 colors. The case k = 5 is a restatement of the four color theorem. The Hadwiger conjecture has been proven for k ≤ 6, but is unknown in the general case. Bollobás, Catlin, and Erdős call it "one of the deepest unsolved problems in graph theory." Another result relating the four-color theorem to graph minors is the snark theorem announced by Robertson, Sanders, Seymour, and Thomas, a strengthening of the four-color theorem conjectured by W. T. Tutte and stating that any bridgeless 3-regular graph that requires four colors in an edge coloring must have the Petersen graph as a minor.
Minor-closed graph families
Many families of graphs have the property that every minor of a graph in F is also in F; such a class is said to be minor-closed. For instance, in any planar graph, or any embedding of a graph on a fixed topological surface, neither the removal of edges nor the contraction of edges can increase the genus of the embedding; therefore, planar graphs and the graphs embeddable on any fixed surface form minor-closed families.
If F is a minor-closed family, then (because of the well-quasi-ordering property of minors) among the graphs that do not belong to F there is a finite set X of minor-minimal graphs. These graphs are forbidden minors for F: a graph belongs to F if and only if it does not contain as a minor any graph in X. That is, every minor-closed family F can be characterized as the family of X-minor-free graphs for some finite set X of forbidden minors.
The best-known example of a characterization of this type is Wagner's theorem characterizing the planar graphs as the graphs having neither K5 nor K3,3 as minors.
In some cases, the properties of the graphs in a minor-closed family may be closely connected to the properties of their excluded minors. For example, a minor-closed graph family F has bounded pathwidth if and only if its forbidden minors include a forest; F has bounded tree-depth if and only if its forbidden minors include a disjoint union of path graphs; F has bounded treewidth if and only if its forbidden minors include a planar graph; and F has bounded local treewidth (a functional relationship between diameter and treewidth) if and only if its forbidden minors include an apex graph (a graph that can be made planar by the removal of a single vertex). If H can be drawn in the plane with only a single crossing (that is, it has crossing number one) then the H-minor-free graphs have a simplified structure theorem in which they are formed as clique-sums of planar graphs and graphs of bounded treewidth. For instance, both K5 and K3,3 have crossing number one, and as Wagner showed the K5-free graphs are exactly the 3-clique-sums of planar graphs and the eight-vertex Wagner graph, while the K3,3-free graphs are exactly the 2-clique-sums of planar graphs and K5.
Variations
Topological minors
A graph H is called a topological minor of a graph G if a subdivision of H is isomorphic to a subgraph of G. Every topological minor is also a minor. The converse, however, is not true in general (for instance the complete graph K5 in the Petersen graph is a minor but not a topological one), but it holds for graphs with maximum degree not greater than three.
The topological minor relation is not a well-quasi-ordering on the set of finite graphs, and hence the result of Robertson and Seymour does not apply to topological minors. However, it is straightforward to construct finite forbidden topological minor characterizations from finite forbidden minor characterizations by replacing every branch set with k outgoing edges by every tree on k leaves that has down degree at least two.
Induced minors
A graph H is called an induced minor of a graph G if it can be obtained from an induced subgraph of G by contracting edges. Otherwise, G is said to be H-induced minor-free.
Immersion minor
A graph operation called lifting is central to the concept of immersions. Lifting is an operation on adjacent edges. Given three vertices v, u, and w, where (v,u) and (u,w) are edges in the graph, the lifting of vuw, or equivalently of (v,u), (u,w), is the operation that deletes the two edges (v,u) and (u,w) and adds the edge (v,w). In the case where (v,w) was already present, v and w will now be connected by more than one edge, and hence this operation is intrinsically a multigraph operation.
In the case where a graph H can be obtained from a graph G by a sequence of lifting operations (on G) and then finding an isomorphic subgraph, we say that H is an immersion minor of G.
There is yet another way of defining immersion minors, which is equivalent to the lifting operation. We say that H is an immersion minor of G if there exists an injective mapping from vertices in H to vertices in G where the images of adjacent elements of H are connected in G by edge-disjoint paths.
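A minimal sketch of the lifting operation, assuming the multigraph is stored as a Counter of frozenset edges so parallel edges can be counted (the representation and the name lift are illustrative):

    from collections import Counter

    def lift(edges, v, u, w):
        # Delete edges (v,u) and (u,w); add (v,w).
        e1, e2 = frozenset((v, u)), frozenset((u, w))
        assert edges[e1] > 0 and edges[e2] > 0, "both edges must be present"
        out = edges.copy()
        out[e1] -= 1
        out[e2] -= 1
        out[frozenset((v, w))] += 1   # may create a parallel edge
        return +out                   # unary plus drops zero counts

    g = Counter({frozenset(("v", "u")): 1, frozenset(("u", "w")): 1})
    assert lift(g, "v", "u", "w") == Counter({frozenset(("v", "w")): 1})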
The immersion minor relation is a well-quasi-ordering on the set of finite graphs and hence the result of Robertson and Seymour applies to immersion minors. This furthermore means that every immersion minor-closed family is characterized by a finite family of forbidden immersion minors.
In graph drawing, immersion minors arise as the planarizations of non-planar graphs: from a drawing of a graph in the plane, with crossings, one can form an immersion minor by replacing each crossing point by a new vertex, and in the process also subdividing each crossed edge into a path. This allows drawing methods for planar graphs to be extended to non-planar graphs.
Shallow minors
A shallow minor of a graph G is a minor in which the edges of G that were contracted to form the minor form a collection of disjoint subgraphs with low diameter. Shallow minors interpolate between the theories of graph minors and subgraphs, in that shallow minors with high depth coincide with the usual type of graph minor, while the shallow minors with depth zero are exactly the subgraphs. They also allow the theory of graph minors to be extended to classes of graphs such as the 1-planar graphs that are not closed under taking minors.
Parity conditions
An alternative and equivalent definition of a graph minor is that H is a minor of G whenever the vertices of H can be represented by a collection of vertex-disjoint subtrees of G, such that if two vertices are adjacent in H, there exists an edge with its endpoints in the corresponding two trees in G.
An odd minor restricts this definition by adding parity conditions to these subtrees. If H is represented by a collection of subtrees of G as above, then H is an odd minor of G whenever it is possible to assign two colors to the vertices of G in such a way that each edge of G within a subtree is properly colored (its endpoints have different colors) and each edge of G that represents an adjacency between two subtrees is monochromatic (both its endpoints are the same color). Unlike for the usual kind of graph minors, graphs with forbidden odd minors are not necessarily sparse. The Hadwiger conjecture, that k-chromatic graphs necessarily contain k-vertex complete graphs as minors, has also been studied from the point of view of odd minors.
A different parity-based extension of the notion of graph minors is the concept of a bipartite minor, which produces a bipartite graph whenever the starting graph is bipartite. A graph H is a bipartite minor of another graph G whenever H can be obtained from G by deleting vertices, deleting edges, and collapsing pairs of vertices that are at distance two from each other along a peripheral cycle of the graph. A form of Wagner's theorem applies for bipartite minors: A bipartite graph G is a planar graph if and only if it does not have the utility graph K3,3 as a bipartite minor.
Algorithms
The problem of deciding whether a graph G contains H as a minor is NP-complete in general; for instance, if H is a cycle graph with the same number of vertices as G, then H is a minor of G if and only if G contains a Hamiltonian cycle. However, when G is part of the input but H is fixed, it can be solved in polynomial time. More specifically, the running time for testing whether H is a minor of G in this case is O(n3), where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H; since the original Graph Minors result, this algorithm has been improved to O(n2) time. Thus, by applying the polynomial time algorithm for testing whether a given graph contains any of the forbidden minors, it is theoretically possible to recognize the members of any minor-closed family in polynomial time. This result is not used in practice since the hidden constant is so huge (needing three layers of Knuth's up-arrow notation to express) as to rule out any application, making it a galactic algorithm. Furthermore, in order to apply this result constructively, it is necessary to know what the forbidden minors of the graph family are. In some cases, the forbidden minors are known, or can be computed.
In the case where H is a fixed planar graph, we can test in linear time in an input graph G whether H is a minor of G. In cases where H is not fixed, faster algorithms are known when G is planar.
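Because the general problem is NP-complete, only brute force is simple to state. The sketch below, which is illustrative and exponential (usable only for very small graphs), tests the branch-set characterization of minors given above under Parity conditions: it tries every assignment of G's vertices to branch sets and checks connectivity and adjacency:

    from itertools import product

    def connected(verts, edges):
        # Depth-first check that `verts` induces a connected, nonempty subgraph.
        verts = set(verts)
        if not verts:
            return False
        seen, stack = set(), [next(iter(verts))]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack.extend(y for e in edges if x in e
                           for y in e - {x} if y in verts)
        return seen == verts

    def is_minor(h_verts, h_edges, g_verts, g_edges):
        # Try every assignment of G-vertices to branch sets (or to none).
        g_verts = list(g_verts)
        for labels in product(list(h_verts) + [None], repeat=len(g_verts)):
            branch = {h: {g for g, lab in zip(g_verts, labels) if lab == h}
                      for h in h_verts}
            if not all(connected(b, g_edges) for b in branch.values()):
                continue
            if all(any(frozenset((x, y)) in g_edges
                       for x in branch[a] for y in branch[b])
                   for a, b in map(tuple, h_edges)):
                return True
        return False

    # K3 is a minor of the 4-cycle (contract any one cycle edge).
    c4 = {frozenset(p) for p in (("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"))}
    k3 = {frozenset(p) for p in ((1, 2), (2, 3), (3, 1))}
    assert is_minor({1, 2, 3}, k3, {"a", "b", "c", "d"}, c4)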
| Mathematics | Graph theory | null |
353091 | https://en.wikipedia.org/wiki/Malacology | Malacology | Malacology is the branch of invertebrate zoology that deals with the study of the Mollusca (molluscs or mollusks), the second-largest phylum of animals in terms of described species after the arthropods. Mollusks include snails and slugs, clams, and cephalopods, along with numerous other kinds, many of which have shells. Malacology derives from the Greek malakos, 'soft', and '-logy', 'study of'.
Fields within malacological research include taxonomy, ecology and evolution. Several subdivisions of malacology exist, including conchology, devoted to the study of mollusk shells, and teuthology, the study of cephalopods such as octopus, squid, and cuttlefish. Applied malacology studies medical, veterinary, and agricultural applications, for example the study of mollusks as vectors of schistosomiasis and other diseases.
Archaeology employs malacology to understand the evolution of the climate, the biota of the area, and the usage of the site.
Zoological methods are used in malacological research. Malacological field methods and laboratory methods (such as collecting, documenting and archiving, and molecular techniques) were summarized by Sturm et al. (2006).
History
Malacology evolved from the earlier discipline of conchology, which focused solely on the collection and classification of shells. The transformation into a comprehensive field of biological study occurred over several key historical milestones.
Early period pre-1795
Before the late 18th century, the study of mollusks was limited to conchology, emphasizing the aesthetic and taxonomic value of shells. During this time, the term "mollusks" referred only to shell-less species such as cephalopods and slugs. Organisms with shells were classified under "Testacea", reflecting a limited understanding of their broader biological characteristics.
The contributions of Cuvier
In 1795, French naturalist Georges Cuvier introduced a new classification system for invertebrates based on anatomical observations. He proposed that mollusks represented a distinct group of organisms unified by common morphological traits. This approach laid the groundwork for the transition from conchology to malacology, as it highlighted the importance of internal anatomy over external shell features.
Early 19th century
Following Cuvier’s work, the early 19th century saw an expansion of the field’s focus. Scientists began studying not only the external shells of mollusks but also their internal anatomy, physiological functions, and ecological roles. This marked a shift toward viewing mollusks as complete organisms, rather than merely as shell producers. The term "malacology" was officially introduced in 1825 by French zoologist and anatomist Henri-Marie Ducrotay de Blainville. Derived from the Greek word "malakos" (meaning "soft"), it reflected a broader interest in the biological and ecological characteristics of mollusks, including their soft body structures. This moment is considered the formal establishment of malacology as a distinct scientific discipline.
Late 19th century and beyond
By the late 19th century, malacology had expanded further to encompass evolutionary biology, taxonomy, and ecology. Researchers investigated the relationships between mollusks and other invertebrates, as well as their roles in various ecosystems. The discipline continued to integrate new methodologies and technologies, solidifying its place within zoology.
Malacologists
Those who study malacology are known as malacologists. Those who study primarily or exclusively the shells of mollusks are known as conchologists, while those who study mollusks of the class Cephalopoda are teuthologists.
Societies
Asociación Argentina de Malacología (Argentine Malacological Association)
American Malacological Society
Association of Polish Malacologists
Belgian Malacological Society – French speaking
Brazilian Malacological Society
Conchological Society of Great Britain and Ireland
Conchologists of America
Dutch Malacological Society
Estonian Malacological Society
European Quaternary Malacologists
Freshwater Mollusk Conservation Society
German Malacological Society
Hungarian Malacological Society
Italian Malacological Society
Malacological Society of Australasia
Malacological Society of London
Malacological Society of the Philippines, Inc.
Mexican Malacological Society
Spanish Malacological Society
Western Society of Malacologists
Journals
More than 150 journals within the field of malacology are published in more than 30 countries, producing an overwhelming number of scientific articles. They include:
American Journal of Conchology (1865–1872)
American Malacological Bulletin
Basteria
Bulletin of Russian Far East Malacological Society
Fish & Shellfish Immunology
Folia conchyliologica
Folia Malacologica
Heldia
Johnsonia
Journal de Conchyliologie – volumes 1850–1922 at Biodiversity Heritage Library; volumes 1850–1938 at Bibliothèque nationale de France
Journal of Conchology
Journal of Medical and Applied Malacology
Journal of Molluscan Studies
Malacologia
Malacologica Bohemoslovaca
Malacological Review – volume 1 (1968) – today, contents of volume 27 (1996) – volume 40 (2009)
Soosiana
Zeitschrift für Malakozoologie (1844–1853) → Malakozoologische Blätter (1854–1878)
Miscellanea Malacologica
Mollusca
Molluscan Research – impact factor: 0.606 (2007)
Mitteilungen der Deutschen Malakozoologischen Gesellschaft
Occasional Molluscan Papers (since 2008)
Occasional Papers on Mollusks (1945–1989), 5 volumes
Ruthenica
Strombus
Tentacle – The Newsletter of the Mollusc Specialist Group of the Species Survival Commission of the International Union for Conservation of Nature.
The Conchologist (1891–1894) → The Journal of Malacology (1894–1905)
The Festivus – a journal which started as a club newsletter in 1970, published by the San Diego Shell Club
The Nautilus – since 1886 published by Bailey-Matthews Shell Museum. First two volumes were published under name The Conchologists’ Exchange. Impact factor: 0.500 (2009)
The Veliger – impact factor: 0.606 (2003)
貝類学雑誌 Venus (Japanese Journal of Malacology)
Vita Malacologica – a Dutch journal published in English; one themed issue a year
Vita Marina (discontinued in May 2001)
Museums
Museums that have either exceptional malacological research collections (behind the scenes) and/or exceptional public exhibits of mollusks:
Academy of Natural Sciences of Philadelphia
American Museum of Natural History
Bailey-Matthews Shell Museum
Cau del Cargol Shell Museum
Maria Mitchell Association
Museum of Comparative Zoology at Harvard
National Museum of Natural History, France
Natural History Museum, London
Rinay
Royal Belgian Institute of Natural Sciences, Brussels: with a collection of more than 9 million shells (mainly from the collection of Philippe Dautzenberg)
Smithsonian Institution
| Biology and health sciences | Basics_2 | Biology |
353767 | https://en.wikipedia.org/wiki/Grand%20Coulee%20Dam | Grand Coulee Dam | Grand Coulee Dam is a concrete gravity dam on the Columbia River in the U.S. state of Washington, built to produce hydroelectric power and provide irrigation water. Constructed between 1933 and 1942, Grand Coulee originally had two powerhouses. The third powerhouse (the Nathaniel "Nat" Washington Power Plant), completed in 1974 to increase energy production, makes Grand Coulee the largest power station in the United States by nameplate capacity at 6,809 MW.
The proposal to build the dam was the focus of a bitter debate during the 1920s between two groups. One group wanted to irrigate the ancient Grand Coulee with a gravity canal, while the other pursued a high dam and pumping scheme. The dam supporters won in 1933, but, although they fully intended otherwise, the initial proposal by the Bureau of Reclamation was for a shorter "low dam" which would generate electricity without supporting irrigation. That year, the U.S. Bureau of Reclamation and a consortium of three companies called MWAK (Mason-Walsh-Atkinson Kier Company) began construction on a high dam, although they had received approval only for the low dam. After visiting the construction site in August 1934, President Franklin Delano Roosevelt endorsed the "high dam" design, which would be tall enough to provide electricity to pump water into the Columbia basin for irrigation. Congress approved the high dam in 1935, and it was completed in 1942; the first waters overtopped Grand Coulee's spillway that same year.
Power from the dam fueled the growing industries of the Northwest United States during World War II. Between 1967 and 1974, the third powerplant was constructed. The decision to construct the additional facility was influenced by growing energy demand, regulated river flows stipulated in the Columbia River Treaty with Canada, and competition with the Soviet Union. Through a series of upgrades and the installation of pump-generators, the dam now supplies four power stations with an installed capacity of 6,809 MW. As the centerpiece of the Columbia Basin Project, the dam's reservoir supplies water for the irrigation of hundreds of thousands of acres.
The reservoir is called Franklin Delano Roosevelt Lake, named after the president who endorsed the dam's construction. Creation of the reservoir forced the relocation of over 3,000 people, including Native Americans whose lands were partially flooded. The dam was constructed without fish passage. The next one downstream, Chief Joseph Dam, which was built decades later, also does not have fish passage. This means no salmon reach the Grand Coulee Dam or the Colville Indian Reservation. The third large dam downstream, Wells Dam, has an intricate system of fish ladders to accommodate yearly salmon spawning and migration.
Background
The Grand Coulee is an ancient river bed on the Columbia Plateau, carved during the Pleistocene Epoch by retreating glaciers and floods. Originally, geologists believed a glacier that diverted the Columbia River formed the Grand Coulee, but it was revealed in the mid-to-late 20th century that massive floods from Lake Missoula carved most of the gorge. The earliest known proposal to irrigate the Grand Coulee with the Columbia River dates to 1892, when the Coulee City News and The Spokesman-Review reported on a scheme by a man named Laughlin McLean to construct a dam across the Columbia River high enough that water would back up into the Grand Coulee. A dam that size would have had its reservoir encroach into Canada, in violation of treaties. Soon after the Bureau of Reclamation was founded, it investigated a scheme for pumping water from the Columbia River to irrigate parts of central Washington. An attempt to raise funds for irrigation failed in 1914, when Washington voters rejected a bond measure.
In 1917, William M. Clapp, a lawyer from Ephrata, Washington, proposed that the Columbia be dammed immediately below the Grand Coulee. He suggested a concrete dam could flood the plateau, just as nature had blocked it with ice centuries ago. Clapp was joined by James O'Sullivan, another lawyer, and by Rufus Woods, publisher of The Wenatchee World newspaper in the nearby agricultural center of Wenatchee. Together, they became known as the "Dam College". Woods began promoting the Grand Coulee Dam in his newspaper, often with articles written by O'Sullivan.
The dam idea gained popularity with the public in 1918. Backers of reclamation in Central Washington split into two camps. The "pumpers" favored a dam with pumps to elevate water from the river into the Grand Coulee from which canals and pipes could irrigate farmland. The "ditchers" favored diverting water from northeast Washington's Pend Oreille River via a gravity canal to irrigate farmland in Central and Eastern Washington. Many locals such as Woods, O'Sullivan and Clapp were pumpers, while many influential businessmen in Spokane associated with the Washington Water and Power Company (WWPC) were staunch ditchers. The pumpers argued that hydroelectricity from the dam could cover costs and claimed the ditchers sought to maintain a monopoly on electric power.
The ditchers took several steps to ensure support for their proposals. In 1921, WWPC secured a preliminary permit to build a dam at Kettle Falls, upstream from the Grand Coulee. If built, the Kettle Falls Dam would have lain in the path of the Grand Coulee Dam's reservoir, essentially blocking its construction. WWPC planted rumors in the newspapers stating that exploratory drilling at the Grand Coulee site had found no granite on which a dam's foundations could rest, only clay and fragmented rock; this was later disproved by Reclamation-ordered drilling. The ditchers hired General George W. Goethals, engineer of the Panama Canal, to prepare a report. Goethals visited the state and produced a report backing the ditchers. The Bureau of Reclamation was unimpressed by Goethals' report, believing it filled with errors.
In 1923, President Warren G. Harding visited Washington state and expressed support for irrigation work there, but he died a month later. His successor, Calvin Coolidge, had little interest in irrigation projects. The Bureau of Reclamation, desirous of a major project that would bolster its reputation, was focusing on the Boulder Canyon Project that resulted in the Hoover Dam. Reclamation was authorized to conduct a study in 1923, but the project's cost made federal officials reluctant. The Washington state proposals received little support from those further east, who feared the irrigation would result in more crops, depressing prices. With President Coolidge opposed to the project, bills to appropriate money for surveys of the Grand Coulee site failed.
In 1925, Congress authorized a U.S. Army Corps of Engineers study of the Columbia River. This study was provided for by the Rivers and Harbors Act, which called for studies on the navigation, power, flood control and irrigation potential of rivers. The Army Corps responded with the first of the "308 Reports", named after the 1925 House Document No. 308 (69th Congress, 1st Session). With the help of Washington's Senators, Wesley Jones and Clarence Dill, Congress ordered $600,000 in further studies to be carried out by the Army Corps and the Federal Power Commission on the Columbia River Basin and the Snake River. U.S. Army Major John Butler was responsible for the upper Columbia River and Snake River, and in 1932 his 1,000-page report was submitted to Congress. It recommended the Grand Coulee Dam and nine others on the river, including some in Canada. The report stated electricity sales from the Grand Coulee Dam could pay for construction costs. Reclamation, whose interest in the dam was revitalized by the report, endorsed it.
Although there was support for the Grand Coulee Dam, others argued there was little need for more electricity in the Northwest and that crops were in surplus. The Army Corps did not believe construction should be a federal project and saw low demand for electricity. Reclamation argued energy demand would rise by the time the dam was complete. The head of Reclamation, Elwood Mead, stated he wanted the dam built no matter the cost. President Franklin D. Roosevelt, who took office in March 1933, supported the dam because of its irrigation potential and the power it would provide, but he was uneasy with its price tag. For this reason, he supported a "low dam" instead of the "high dam". He provided federal funding, while Washington State provided $377,000. In 1933, Washington governor Clarence Martin set up the Columbia Basin Commission to oversee the dam project, and Reclamation was selected to oversee construction.
Construction
Low dam
On July 16, 1933, a crowd of 3,000 watched the driving of the first stake at the low dam site, and excavation soon began. Core drilling commenced that September while the Bureau of Reclamation accelerated its studies and designs for the dam. It would still help control floods and provide for irrigation and hydroelectricity, though at a reduced capacity. Most importantly, it would not raise its reservoir high enough to irrigate the plateau around the Grand Coulee. The dam's design provided for future raising and upgrading.
Before and during construction, workers and engineers encountered problems. Contracts for companies to construct the various parts of the dam were difficult to award, as few companies were sizable enough to fill them; this forced companies to consolidate. Native American graves had to be relocated and temporary fish ladders had to be constructed. Problems during construction included landslides and the need to protect newly poured concrete from freezing. Construction on the downstream Grand Coulee Bridge began early in the project, and more considerable earth-moving began in August. Excavation for the dam's foundation required the removal of 22 million cubic yards (17 million m³) of dirt and stone.
To reduce the amount of trucking required in the excavation, a long conveyor belt was built. To further secure the foundation, workers drilled holes into the granite and filled any fissures with grout, creating a grout curtain. At times, excavated areas collapsed from overburden. To secure these areas from further movement and continue excavation, pipes were inserted into the mass and chilled with cold liquid from a refrigeration plant. This froze the earth and secured it so construction could continue.
Final contract bidding for the dam opened in 1934 in Spokane, and four bids were submitted. One bid was from a lawyer with no financial backing; another, from actress Mae West, consisted of nothing more than a poem and a promise to divert the river. Of the two serious bids, the lowest was from a consortium of three companies: Silas Mason Co. of Louisville, Kentucky; Walsh Construction Co. of Davenport, Iowa and New York; and Atkinson-Kier Company of San Francisco and San Diego. The consortium was known as MWAK, and its bid was $29,339,301, almost 15% lower than that of the next bidder, Six Companies, Inc., which was building Hoover Dam at the time.
Cofferdams
Two large cofferdams were constructed for the dam, but they were parallel to the river rather than straddling its width, so drilling into the canyon walls was not required. By the end of 1935, about 1,200 workers had completed the west and east cofferdams. The massive west cofferdam was constructed above the bedrock. The cofferdams allowed workers to dry portions of the riverbed and begin constructing the dam while water continued to flow down the center of the riverbed.
Once the west foundation was complete, portions of the west cofferdam were dismantled, allowing water to flow through part of the dam's new foundation. MWAK then began constructing cofferdams above and below the channel between the east and west cofferdams. By December 1936, the entire Columbia River was diverted over the foundations constructed within the east and west cofferdams. The Wenatchee Daily World announced the river was diverted, and by early the next year people were arriving in large numbers to see the riverbed.
Design change
On August 4, 1934, President Franklin D. Roosevelt visited the construction site and was impressed by the project and its purpose. He spoke to workers and spectators, closing with this statement: "I leave here today with the feeling that this work is well undertaken; that we are going ahead with a useful project, and we are going to see it through for the benefit of our country." Soon afterward, Reclamation was allowed to proceed with the high dam plan but faced the problems of transitioning the design and negotiating an altered contract with MWAK. For an additional sum, MWAK and Six Companies, Inc. agreed to join together as Consolidated Builders Inc. and construct the high dam. Six Companies had just finished the Hoover Dam and was nearing completion of Parker Dam. The new design, chosen and approved by the Reclamation office in Denver, included several improvements, one of which was the irrigation pumping plant.
Roosevelt envisioned the dam would fit into his New Deal under the Public Works Administration; it would create jobs and farming opportunities and would pay for itself. In addition, as part of a larger public effort, Roosevelt wanted to keep electricity prices low by limiting private ownership of utility companies, which could charge high prices for energy. Many opposed a federal takeover of the project, including its most prominent supporters, but Washington State lacked the resources to fully realize the project. In 1935, with the help of Roosevelt and a Supreme Court decision allowing the acquisition of public land and Indian Reservations, Congress authorized funding for the upgraded high dam under the 1935 River and Harbors Act. The most significant legislative hurdle for the dam was over.
First concrete pour and completion
On December 6, 1935, Governor Clarence Martin presided over the ceremonial first concrete pour. During construction, bulk concrete was delivered on site by rail-cars, where it was further processed by eight large mixers before being placed in forms. Concrete was poured into columns by crane-lifted buckets, each holding eight tons of concrete. To cool the concrete and facilitate curing, an extensive network of piping was placed throughout the hardening mass, and cold water from the river was pumped through the pipes, substantially reducing the temperature within the forms. This cooling caused the dam to contract slightly in length; the resulting gaps were filled with grout.
Until the project began, the stretch of the Columbia River where the dam was to rise was as yet unbridged, making it difficult to move men and materials. The Grand Coulee Bridge, a permanent highway bridge, was opened after major delays caused by high water. Three additional, temporary bridges downstream moved vehicles and workers along with sand and gravel for cement mixing. MWAK completed the lower dam, and Consolidated Builders Inc. began constructing the high dam; the west power house was completed soon after, with about 5,500 workers on site at the peak. Between 1940 and 1941, the dam's eleven floodgates were installed on the spillway. In 1941, the dam's first generator went into operation. In 1942, the reservoir was full and the first water flowed over the dam's spillway. In 1943, work was officially complete. The last of the original 18 generators did not operate until 1949.
Reservoir clearing
In 1933, Reclamation began efforts to purchase land behind the dam for the future reservoir zone, which would stretch far upstream. The reservoir, known later as Lake Roosevelt, flooded a large area, and Reclamation acquired additional land around the future shoreline. Within the zone were eleven towns, two railroads, three state highways, about one hundred and fifty miles of country roads, four sawmills, fourteen bridges, four telegraph and telephone systems, and many power lines and cemeteries. All facilities had to be purchased or relocated, and 3,000 residents were relocated. The Anti-Speculation Act was passed in 1937, limiting the amount of land farmers could own to prevent inflated prices.
The government appraised the land and offered to purchase it from the affected residents. Many refused to accept the offers, and Reclamation filed condemnation suits. Members of the Colville Confederated and Spokane tribes who had settlements within the reservoir zone were also resettled. The Acquisition of Indian Lands for Grand Coulee Dam Act of June 20, 1940, allowed the Secretary of the Interior to acquire land on the Colville and Spokane Reservations. By 1942, all land had been purchased at market value, a cost that included the relocation of farms, bridges, highways and railroads. Relocation reimbursement was not offered to property owners, which was common until U.S. laws changed in 1958.
In late 1938, the Works Progress Administration began clearing the reservoir area of trees and other plants. The cut timber was floated downstream and sold to the highest bidder, Lincoln Lumber Company, which paid $2.25 per thousand board feet. The pace of clearing was accelerated when the work was declared a national defense project, and the last tree was felled in 1941 during a ceremony by Reclamation Supervising Engineer Frank A. Banks and State WPA Administrator Carl W. Smith. 2,626 people living in five main camps along the Columbia worked on the project, and by the time it was finished a large sum had been spent on labor.
Labor and supporting infrastructure
Workers building the dam received an average of 80¢ an hour; the payroll for the dam was among the largest in the nation. The workers were mainly pulled from Grant, Lincoln, Douglas, and Okanogan counties and women were allowed to work only in the dorms and the cookhouse. Around 8,000 people worked on the project, and Frank A. Banks served as the chief construction engineer. Bert A. Hall was the chief inspector who would accept the dam from the contractors. Orin G. Patch served as the chief of concrete. Construction conditions were dangerous and 77 workers died.
To prepare for construction, housing for workers was needed along with four bridges downstream of the dam site, one of which, the Grand Coulee Bridge, exists today. The Bureau of Reclamation provided housing and located their administrative building at Engineer's Town, which was directly downstream of the construction site on the west side of the river. Opposite Engineer's Town, MWAK constructed Mason City in 1934. Mason City contained a hospital, post office, electricity and other amenities along with a population of 3,000. Three-bedroom houses in the city were rented for $32 a month.
Of the two living areas, Engineer's Town was considered to have the better housing. Several other living areas formed around the construction site in an area known as Shack Town, which did not have reliable access to electricity and the same amenities as the other towns. Incorporated in 1935, the city of Grand Coulee supported workers as well and is just west of the dam on the plateau. MWAK eventually sold Mason City to Reclamation in 1937, before its contract was completed. In 1956, Reclamation combined Mason City and Engineer's Town to form the community of Coulee Dam, which was later incorporated as a city.
Irrigation pumps
With the onset of World War II, power generation was given priority over irrigation. In 1943, Congress authorized the Columbia Basin Project, and the Bureau of Reclamation began construction of irrigation facilities in 1948. Directly to the west of and above the Grand Coulee Dam, the North Dam was constructed. This dam, along with the Dry Falls Dam to the south, enclosed and created Banks Lake, which covered the northern portion of the Grand Coulee. Additional dams, such as the Pinto and O'Sullivan Dams, were constructed alongside siphons and canals, creating a vast irrigation supply network called the Columbia Basin Project. Irrigation began between 1951 and 1953 as six of the 12 pumps were installed and Banks Lake was filled.
Expansion
Third powerplant
After World War II, the growing demand for electricity sparked interest in constructing another power plant supported by the Grand Coulee Dam. One obstacle to an additional power plant was the great seasonality of the Columbia River's streamflow. Today the flow is closely managed and there is almost no seasonality, but historically about 75% of the river's annual flow occurred between April and September, and during low flow periods the river's discharge was only a small fraction of the maximum spring runoff. Only nine of the dam's eighteen generators could run year-round; the remaining nine operated for less than six months a year. In 1952, Congress authorized $125,000 for Reclamation to conduct a feasibility study on the Third Powerplant, which was completed in 1953 and recommended two locations. Nine identical 108 MW generators were recommended, but as matters stood they would be able to operate only in periods of high water.
Further regulation of the Columbia's flows was necessary to make the new power plant feasible. It would require water storage and regulation projects in Canada and a treaty to resolve the many economic and political issues involved. The Bureau of Reclamation and Army Corps of Engineers explored alternatives that would not depend on a treaty with Canada, such as raising the level of Flathead Lake or Pend Oreille Lake, but both proposals faced strong local opposition. The Columbia River Treaty, which had been discussed between the U.S. and Canada since 1944, was seen as the answer. Efforts to build the Third Powerplant were also influenced by competition with the Soviet Union, which had constructed power plants on the Volga River larger than Grand Coulee.
In 1964, the Columbia River Treaty was ratified; it included an agreement by Canada to construct the Duncan, Keenleyside, and Mica Dams upstream, while the U.S. would build the Libby Dam in Montana. Shortly afterward, Washington Senator Henry M. Jackson, who was influential in constructing the new power plant, announced Reclamation would present the project to Congress for appropriation and funding. To keep up with Soviet competition and increase the generating capacity, it was determined the generators could be upgraded to much larger designs. With the possibility of international companies bidding on the project, the Soviets, who had just installed a 500 MW hydroelectric generator on the Yenisei River, indicated their interest. To avoid the potential embarrassment of an international rival building a domestic power plant, the Department of the Interior declined international bidding. The Third Powerplant was approved, and President Lyndon Johnson signed its appropriation bill in 1966.
Between 1967 and 1974, the dam was expanded to add the Third Powerplant, with architectural design by Marcel Breuer. This involved demolishing the northeast side of the dam and building a new fore-bay section. Extensive excavation of dirt and rock was completed before the new section of dam was built; the addition made the original dam almost a mile long. Original designs for the powerhouse had twelve smaller units but were altered to incorporate six of the largest generators available. To supply them with water, six large-diameter penstocks were installed. Of the new turbines and generators, three 600 MW units were built by Westinghouse and three 700 MW units by General Electric. The first new generator was commissioned in 1975 and the final one in 1980. The three 700 MW units were later upgraded to 805 MW by Siemens.
Pump-generating plant
After power shortages in the Northwest during the 1960s, it was decided that the six remaining planned pumps would be pump-generators. When energy demand is high, the pump-generators can generate electricity with water from the Banks Lake feeder canal, which sits at a higher elevation adjacent to the dam. By 1973, the Pump-Generating Plant was completed and the first two generators (P/G-7 and P/G-8) were operational. In 1983, two more generators went online, and soon afterward the final two were operational. The six pump-generators added 314 MW to the dam's capacity. The Pump-Generating Plant was later officially renamed the John W. Keys III Pump-Generating Power Plant after John W. Keys III, the U.S. Bureau of Reclamation's commissioner from 2001 to 2006.
Overhauls
A major overhaul of the Third Powerplant, which contains generators numbered G19 through G24, is underway and will continue for many years. The projects to be completed before the generators themselves can be overhauled include replacing the underground 500 kV oil-filled cables for the G19, G20 and G21 generators with overhead transmission lines, installing new 236 MW transformers for G19 and G20, and several other projects.
Planning, design, procurement and site preparation for the 805 MW G22, G23 and G24 generator overhauls are scheduled to begin in 2011. The overhauls will start in 2013 with the G22 generator, then G23 starting in 2014, and finally G24 starting in 2016, with planned completions in 2014, 2016 and 2017, respectively. The generator overhauls for G19, G20 and G21 have not been scheduled as of 2010.
Operation and benefits
The dam's primary goal, irrigation, was postponed as the wartime need for electricity increased. The dam's powerhouse began production around the time World War II began, and its electricity was vital to the war effort. The dam powered aluminum smelters in Longview and Vancouver, Washington, Boeing factories in Seattle and Vancouver, and Portland's shipyards. In 1943, its electricity was also used for plutonium production in Richland, Washington, at the Hanford Site, which was part of the top-secret Manhattan Project. The demand for power at that project was so great that in 1943, two generators originally intended for the Shasta Dam in California were installed at Grand Coulee to hurry the generator installation schedule.
Irrigation
Water is pumped from Lake Roosevelt to a feeder canal through the Pump-Generating Plant's large pipes. From the feeder canal, the water is transferred to Banks Lake, which provides active storage for the project. The plant's twelve pumps can move enormous volumes of water into the lake. Currently, the Columbia Basin Project irrigates roughly half of the land originally envisioned for it. Over 60 different crops are grown within the project and distributed throughout the United States.
Power
Grand Coulee Dam supports four power houses containing 33 hydroelectric generators. The original Left and Right Powerhouses contain 18 main generators, and the Left has an additional three service generators, for a total installed capacity of 2,280 MW. The first generator was commissioned in 1941 and all 18 were operating by 1950. The Third Power Plant contains six main generators with a 4,215 MW installed capacity. Generators G-19, G-20 and G-21 in the Third Power Plant have a 600 MW installed capacity but can operate at a maximum capacity of 690 MW, which brings the overall maximum capacity of the dam's power facilities to 7,079 MW. The Pump-Generating Plant contains six pump-generators with an installed capacity of 314 MW; when pumping water into Banks Lake they consume 600 MW of electricity. Each generator is supplied with water by an individual penstock, the largest of which feed the Third Power Plant. The dam's power facilities originally had an installed capacity of 1,974 MW, but expansions and upgrades have increased this to 6,809 MW installed and 7,079 MW maximum. Grand Coulee Dam generates 21 TWh of electricity annually, which corresponds to about 2,397 MW of average power and a capacity factor of about 35%. In 2014, 20.24 TWh of electricity was generated.
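The average-power and capacity-factor figures follow directly from the annual generation; the arithmetic can be checked in a few lines (plain Python, using the figures quoted above and 8,760 hours in a non-leap year):

    annual_twh = 21.0                            # reported annual generation
    installed_mw = 6809                          # installed capacity, MW
    hours_per_year = 8760

    avg_mw = annual_twh * 1e6 / hours_per_year   # TWh -> MWh, spread over a year
    capacity_factor = avg_mw / installed_mw

    print(round(avg_mw))                         # ~2397 (MW, matches the text)
    print(round(capacity_factor * 100))          # ~35 (percent)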
Spillway
Grand Coulee Dam's spillway is long and is an overflow, drum-gate-controlled type with a maximum capacity. A record flood in May and flooded the lowlands below the dam, highlighting the dam's limited flood control capability at the time, as its spillway and turbines passed a record flow of . The flood damaged downstream riverbanks and deteriorated the face of the dam and its flip bucket at the base (toe) of the spillway. The flood spurred the Columbia River Treaty and its provisions for dams constructed upstream in Canada, which would regulate the Columbia's flow.
Cost benefits
The Bureau of Reclamation in 1932 estimated the cost of constructing Grand Coulee Dam (not including the Third Powerplant) to be $168 million; its actual cost was $163 million in 1943 ($ in dollars). Expenses to finish the power stations and repair design flaws with the dam throughout the 1940s and '50s added another $107 million, bringing the total cost to $270 million ($ in dollars), about 33% over estimates. The Third Powerplant was estimated to cost in 1967, but higher construction costs and labor disputes drove the project's final cost in 1973 to ($ in dollars), about 87% over estimates. Despite estimates being exceeded, the dam became an economic success, particularly with the Third Powerplant exhibiting a benefit-cost ratio of 2:1. Although Reclamation has only irrigated about half of the land predicted, the gross value of crop output (in constant dollars) had doubled from 1962 to 1992, largely due to different farming practices and crop choices. The Bureau expects the money earned from supplying power and irrigation water will pay off the cost of construction by 2044.
Environmental and social consequences
The dam had severe negative consequences for the local Native American tribes, whose traditional way of life revolved around salmon and the original shrub-steppe habitat of the area. Because it lacks a fish ladder, Grand Coulee Dam permanently blocks fish migration, removing over of natural spawning habitat. By largely eliminating anadromous fish above the Okanogan River, the Grand Coulee Dam also set the stage for the subsequent decision not to provide for fish passage at Chief Joseph Dam (built in 1953). Chinook, steelhead, sockeye, and coho salmon (as well as other important species, including lamprey) are now unable to spawn in the reaches of the upper Columbia Basin. The lack of fish passage to the upper reaches of the Columbia River wiped out the June hogs, so-called "supersalmon" known to regularly weigh over 80 pounds (36 kg). Today, the largest Chinook caught on the Columbia River are not even half that size. The loss of the spawning grounds upstream from the dam has prevented the Spokane and other tribes from holding sacred salmon ceremonies since 1940.
Grand Coulee Dam flooded over 21,000 acres (85 km2) of prime bottom land where Native Americans had been living and hunting for thousands of years, forcing the relocation of settlements and graveyards. The Office of Indian Affairs negotiated with the United States Bureau of Reclamation on behalf of tribes who were concerned about the flooding of their grave sites. The Acquisition of Indian Lands for Grand Coulee Dam Act of June 20, 1940 (54 Stat. 703), allowed the Secretary of the Interior to remove human remains to new Native American grave sites. The burial relocation project started in September 1939. Human remains were put into small containers and many artifacts were discovered, but the methods of collection destroyed archaeological evidence. Estimates of the number of graves relocated in 1939 include 915 reported by the Bureau of Reclamation and 1,388 reported by Howard T. Ball, who supervised the field work. Tribal leaders reported another 2,000 graves in 1940, but the Bureau of Reclamation would not continue grave relocation, and the sites were soon covered by water.
The town of Inchelium, Washington, home to around 250 Colville Indians, was submerged and later relocated. Kettle Falls, once a primary Native American fishing ground, was also inundated. The average catch of over 600,000 salmon per year was eliminated; in one study, the Army Corps of Engineers estimated the annual loss at over fish. In , the Confederated Tribes of the Colville Reservation hosted a three-day event called the "Ceremony of Tears", marking the end of fishing at Kettle Falls. Within a year after the ceremony, the falls were inundated. The town of Kettle Falls, Washington, was relocated.
The Columbia Basin Project has affected habitat ranges for species such as mule deer, pygmy rabbits and burrowing owls, resulting in decreased populations. However, it has created new habitats such as wetlands, and riparian corridors. The environmental impact of the dam effectively ended the traditional way of life of the native inhabitants. The government eventually compensated the Colville Indians in the 1990s with a lump settlement of approximately , plus annual payments of approximately . In 2019, a bill was passed to provide additional compensation to the Spokane Tribe. It provides roughly annually for the first decade, followed by roughly a year after that.
To compensate for the lack of a fish ladder, three fisheries have been created above the dam, releasing into the upper Columbia River. One half of the fish are reserved for the displaced tribes, and one quarter of the reservoir is reserved for tribal hunting and boating.
Tourism
Built in the late 1970s, the Visitor Center contains many historical photos, geological samples, turbine and dam models, and a theater. The building was designed by Marcel Breuer and resembles a generator rotor. Since , a laser light show has been projected onto the dam's wall on summer evenings. The show includes full-size images of battleships and the Statue of Liberty, as well as some environmental commentary. Tours of the Third Power Plant are available to the public and last about an hour. Visitors take a shuttle to view the generators and also travel across the main dam span (otherwise closed to the public), as the formerly used glass elevator is indefinitely out of service.
The headquarters of the Lake Roosevelt National Recreation Area is near the dam, and the lake provides opportunities for fishing, swimming, canoeing, and boating.
Woody Guthrie connection
Folk singer Woody Guthrie wrote some of his most famous songs while working in the area in the 1940s. In 1941, after a brief stay in Los Angeles, Guthrie and his family moved north to Oregon on the promise of a job. Gunther von Fritsch was directing a documentary for the Bonneville Power Administration about the construction of the Grand Coulee Dam on the Columbia River and needed a narrator. Alan Lomax had recommended Guthrie to narrate the film and sing songs onscreen. The original project was expected to take 12 months, but as the filmmakers became worried about casting a political figure like Guthrie, they minimized his role. The Department of the Interior hired him for one month to write songs about the Columbia River and the construction of the federal dams for the documentary's soundtrack. Guthrie toured the Columbia River and the Pacific Northwest, and said he "couldn't believe it, it's a paradise", which appeared to inspire him creatively. In one month, Guthrie wrote 26 songs, including three of his most famous: "Roll On, Columbia, Roll On", "Pastures of Plenty", and "Grand Coulee Dam". The surviving songs were released as Columbia River Songs. Guthrie was paid $266.66 (ca. $5,750 in 2024 dollars) for the month's work in 1941.
The film Columbia River was completed in 1949 and featured Guthrie's music. Guthrie had been commissioned in 1941 to provide songs for the project, but it had been postponed by WWII.
| Technology | Dams | null |
354042 | https://en.wikipedia.org/wiki/Malachite | Malachite | Malachite is a copper carbonate hydroxide mineral, with the formula Cu2CO3(OH)2. This opaque, green-banded mineral crystallizes in the monoclinic crystal system, and most often forms botryoidal, fibrous, or stalagmitic masses, in fractures and deep, underground spaces, where the water table and hydrothermal fluids provide the means for chemical precipitation. Individual crystals are rare, but occur as slender to acicular prisms. Pseudomorphs after more tabular or blocky azurite crystals also occur.
Etymology and history
The stone's name derives (via , , and Middle English melochites) from Greek Μολοχίτης λίθος molochites lithos, "mallow-green stone", from μολόχη molochē, variant of μαλάχη malāchē, "mallow". The mineral was given this name due to its resemblance to the leaves of the mallow plant. Copper (Cu2+) gives malachite its green color.
Malachite was mined from deposits near the Isthmus of Suez and the Sinai as early as 4000 BCE.
It was extensively mined at the Great Orme Mines in Britain 3,800 years ago, using stone and bone tools. Archaeological evidence indicates that mining activity ended , with up to 1,760 tonnes of copper being produced from the mined malachite.
Archaeological evidence indicates that the mineral has been mined and smelted to obtain copper at Timna Valley in Israel for more than 3,000 years. Since then, malachite has been used as both an ornamental stone and as a gemstone.
The use of azurite and malachite as copper ore indicators led indirectly to the name of the element nickel in the English language. Nickeline, a principal ore of nickel that is also known as niccolite, weathers at the surface into a green mineral (annabergite) that resembles malachite. This resemblance resulted in occasional attempts to smelt nickeline in the belief that it was copper ore, but such attempts always ended in failure due to high smelting temperatures needed to reduce nickel. In Germany this deceptive mineral came to be known as kupfernickel, literally "copper demon." The Swedish alchemist Baron Axel Fredrik Cronstedt (who had been trained by Georg Brandt, the discoverer of the nickel-like metal cobalt) realized that there was probably a new metal hiding within the kupfernickel ore, and in 1751 he succeeded in smelting kupfernickel to produce a previously unknown (except in certain meteorites) silvery white, iron-like metal. Logically, Cronstedt named his new metal after the nickel part of kupfernickel.
Occurrence
Malachite often results from the supergene weathering and oxidation of primary sulfidic copper ores, and is often found with azurite (Cu3(CO3)2(OH)2), goethite, and calcite. Except for its vibrant green color, the properties of malachite are similar to those of azurite and aggregates of the two minerals occur frequently. Malachite is more common than azurite and is typically associated with copper deposits around limestones, the source of the carbonate.
Large quantities of malachite have been mined in the Urals, Russia. Ural malachite is not being mined , but G. N. Vertushkova reports the possible discovery of new deposits of malachite in the Urals. It is found worldwide including in the Democratic Republic of the Congo; Gabon; Zambia; Tsumeb, Namibia; Mexico; Broken Hill, New South Wales; Burra, South Australia; Lyon, France; Timna Valley, Israel; and the Southwestern United States, most notably in Arizona.
Anthropogenic malachite was historically believed to be the primary component of the patina which forms on copper and copper alloy structures exposed to open-air weathering; however, atmospheric sources of sulfate and chloride (such as air pollution or sea winds) typically favour the formation of brochantite or atacamite. Malachite can also be produced synthetically, in which case it is referred to as basic copper carbonate or green verditer.
Structure
Malachite crystallizes in the monoclinic system. The structure consists of chains of alternating Cu2+ ions and OH− ions, with a net positive charge, woven between isolated triangular CO32− ions. Thus each copper ion is conjugated to two hydroxyl ions and two carbonate ions; each hydroxyl ion is conjugated with two copper ions; and each carbonate ion is conjugated with six copper ions.
Use
Malachite was used as a mineral pigment in green paints from antiquity until 1800. The pigment is moderately lightfast, sensitive to acids, and varies in color. This natural form of green pigment has been replaced by its synthetic form, verditer, among other synthetic greens.
Malachite is also used for decorative purposes, such as in wands and the Malachite Room in the Hermitage Museum, which features a huge malachite vase, and the Malachite Room in Castillo de Chapultepec in Mexico City. Another example is the Demidov Vase, part of the former Demidov family collection and now in the Metropolitan Museum of Art. "The Tazza", a large malachite vase and one of the largest pieces of malachite in North America, was a gift from Tsar Nicholas II and stands as the focal point in the centre of the room of Linda Hall Library. In the time of Tsar Nicholas I, decorative pieces with malachite were among the most popular diplomatic gifts. Malachite was used in China as far back as the Eastern Zhou period. The base of the FIFA World Cup Trophy has two layers of malachite.
Symbolism and superstitions
A 17th-century Spanish superstition held that having a child wear a lozenge of malachite would help them sleep, and keep evil spirits at bay. Marbodus recommended malachite as a talisman for young people because of its protective qualities and its ability to help with sleep. It has also historically been worn for protection from lightning and contagious diseases and for health, success, and constancy in the affections. During the Middle Ages it was customary to wear it engraved with a figure or symbol of the Sun to maintain health and to avert depression to which Capricorns were considered vulnerable.
In ancient Egypt the colour green (wadj) was associated with death and the power of resurrection as well as new life and fertility. Ancient Egyptians believed that the afterlife contained an eternal paradise, referred to as the "Field of Malachite", which resembled their lives but with no pain or suffering.
Ore uses
Simple methods of copper extraction from malachite involve thermodynamic processes such as smelting. Heating causes the carbonate to decompose, leaving copper oxide; an additional carbon source such as coal then converts the copper oxide into copper metal.
The basic word equation for this reaction is:
Copper carbonate + heat → carbon dioxide + copper oxide (color changes from green to black).
Copper oxide + carbon → carbon dioxide + copper (color change from black to copper colored).
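Using the formula for malachite given above, one balanced form of these two steps is:
Cu2CO3(OH)2 → 2 CuO + CO2 + H2O
2 CuO + C → 2 Cu + CO2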
Malachite is a low-grade copper ore; however, due to increased demand for metals, more economical processing methods such as hydrometallurgy (using aqueous solutions such as sulfuric acid) are being used, since malachite is readily soluble in dilute acids. Sulfuric acid is the most common leaching agent for copper oxide ores like malachite and eliminates the need for smelting.
The chemical equation for sulfuric acid leaching of copper from malachite is:
Cu2CO3(OH)2 + 2 H2SO4 → 2 CuSO4 + CO2 + 3 H2O
Health and environmental concerns
Mining malachite, whether for ornamental stone or for copper ore, involves open-pit or underground mining, depending on the grade of the ore deposits. Open-pit and underground mining practices can cause environmental degradation through habitat and biodiversity loss. If improperly managed, or if tailings ponds leak, acid mine drainage can contaminate water and food sources and harm human health. The health and environmental risks of both traditional metallurgy and newer hydrometallurgical methods are significant; however, water conservation and waste management practices for hydrometallurgical ore extraction, such as for malachite, are stricter and relatively more sustainable. New research is also being conducted on better alternatives to methods such as sulfuric acid leaching, which has high environmental impacts even under current hydrometallurgy regulation standards.
Gallery
| Physical sciences | Minerals | Earth science |
354300 | https://en.wikipedia.org/wiki/Craniate | Craniate | A craniate is a member of the Craniata (sometimes called the Craniota), a proposed clade of chordate animals with a skull of hard bone or cartilage. Living representatives are the Myxini (hagfishes), Hyperoartia (including lampreys), and the much more numerous Gnathostomata (jawed vertebrates). Formerly distinct from vertebrates by excluding hagfish, molecular and anatomical research in the 21st century has led to the reinclusion of hagfish as vertebrates, making living craniates synonymous with living vertebrates.
The clade was conceived largely on the basis of the Hyperoartia (lampreys and kin) being more closely related to the Gnathostomata (jawed vertebrates) than the Myxini (hagfishes). This, combined with an apparent lack of vertebral elements within the Myxini, suggested that the Myxini were descended from a more ancient lineage than the vertebrates, and that the skull developed before the vertebral column. The clade was thus composed of the Myxini and the vertebrates, and any extinct chordates with skulls.
However, recent studies using molecular phylogenetics have contradicted this view, with evidence that the Cyclostomata (Hyperoartia and Myxini) is monophyletic; this suggests that the Myxini are degenerate vertebrates, and therefore the vertebrates and craniates are cladistically equivalent, at least for the living representatives. The placement of the Myxini within the vertebrates has been further strengthened by recent anatomical analysis, with vestiges of a vertebral column being discovered in the Myxini.
Characteristics
In the simplest sense, craniates are chordates with well-defined heads, thus excluding members of the chordate subphyla Tunicata (tunicates) and Cephalochordata (lancelets), but including Myxini, which have cartilaginous crania and tooth-like structures composed of keratin. Craniata also includes all lampreys and armoured jawless fishes, armoured jawed fish, sharks, skates, and rays, and teleostomians: spiny sharks, bony fish, lissamphibians, temnospondyls and protoreptiles, sauropsids and mammals. The craniate head consists of a three-part brain, neural crest which gives rise to many cell lineages, and a cranium.
In addition to distinct crania (sing. cranium), craniates possess many derived characteristics, which have allowed for more complexity to follow. Molecular-genetic analysis of craniates reveals that, compared to less complex animals, they developed duplicate sets of many gene families that are involved in cell signaling, transcription, and morphogenesis (see homeobox).
In general, craniates are much more active than tunicates and lancelets and, as a result, have greater metabolic demands, as well as several anatomical adaptations. Aquatic craniates have gill slits, which are connected to muscles to pump water through the slits, engaging in both feeding and gas exchange (as opposed to lancelets, whose pharyngeal slits are used only for suspension feeding, chiefly by cilia-mucus rather than muscles). Muscles line the alimentary canal, moving food through the canal, allowing higher craniates such as mammals to develop more complex digestive systems for optimal food processing. Craniates have cardiovascular systems that include a heart with at least two chambers, red blood cells, oxygen transporting hemoglobin as well as myoglobin, livers and kidneys.
Systematics and taxonomy
Linnaeus (1758) classified hagfishes as Vermes, a class for non-arthropod invertebrates (in modern nomenclature).
Dumeril (1806) grouped hagfishes and lampreys in the taxon Cyclostomi, characterized by horny teeth borne on a tongue-like apparatus, a large notochord as adults, and pouch-shaped gills (Marsipobranchii). Cyclostomes were regarded as either degenerate cartilaginous fishes or primitive vertebrates. Cope (1889) coined the name Agnatha ("jawless") for a group that included the cyclostomes and a number of fossil groups in which jaws could not be observed. Vertebrates were subsequently divided into two major sister-groups: the Agnatha and the Gnathostomata (jawed vertebrates). Stensiö (1927) suggested that the two groups of living agnathans (i.e. the cyclostomes) arose independently from different groups of fossil agnathans.
Løvtrup (1977) argued that lampreys are more closely related to gnathostomes based on a number of uniquely derived characters, including:
Arcualia (serially arranged paired cartilages above the notochord)
Extrinsic eyeball muscles
Radial muscles in the fins
A closely set atrium and ventricle of the heart
Nervous regulation of the heart by the vagus nerve
A typhlosole (a spirally coiled valve of the intestinal wall)
True lymphocytes
A differentiated anterior lobe of the pituitary gland (adenohypophysis)
Three inner ear maculae (patches of acceleration sensitive 'hair cells' used in balance) organized into two or three vertical semicircular canals
Neuromast organs (composed of vibration sensitive hair cells) in the laterosensory canals
An electroreceptive lateral line (with voltage sensitive hair cells)
Electrosensory lateral line nerves
A cerebellum, i.e. the multi-layered roof of the hindbrain with unique structure (characteristic neural architecture including direct inputs from the lateral line and large output Purkinje cells) and function (integrating sensory perception and coordinating motor control)
In other words, the cyclostome characteristics (e.g. horny teeth on a "tongue", gill pouches) are either instances of convergent evolution for feeding and gill ventilation in animals with an eel-like body shape, or represent primitive craniate characteristics subsequently lost or modified in gnathostomes. On this basis Janvier (1978) proposed to use the names Vertebrata and Craniata as two distinct and nested taxa.
Validity
The validity of the taxon "Craniata" was recently examined by Delarbre et al. (2002) using mtDNA sequence data, concluding that Myxini is more closely related to Hyperoartia than to Gnathostomata; i.e., modern jawless fishes form a clade called Cyclostomata. The argument is that, if Cyclostomata is indeed monophyletic, Vertebrata would return to its old content (Gnathostomata + Cyclostomata) and the name Craniata, being superfluous, would become a junior synonym.
The new evidence removes support for the hypothesized evolutionary sequence by which, from among tunicate-like chordates, first the hard cranium arose, as exhibited by the hagfishes, then the backbone, as exhibited by the lampreys, and finally the hinged jaw that is now ubiquitous. In 2010, Philippe Janvier stated:
Classification
Below is a phylogenetic tree of the phylum Chordata. Lines show probable evolutionary relationships, including extinct taxa, which are denoted with a dagger, †. Some groups in this tree (lancelets and tunicates) are invertebrates. The positions (relationships) of the lancelet, tunicate, and craniate clades are as reported. Note that Placodermi is now thought to be paraphyletic.
| Biology and health sciences | General classifications_2 | Animals |
5657877 | https://en.wikipedia.org/wiki/Type%20I%20and%20type%20II%20errors | Type I and type II errors | In statistical hypothesis testing, a type I error, or a false positive, is the rejection of the null hypothesis when it is actually true. A type II error, or a false negative, is the failure to reject a null hypothesis that is actually false.
Type I error: an innocent person may be convicted. Type II error: a guilty person may not be convicted.
Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is an impossibility if the outcome is not determined by a known, observable causal process. The knowledge of type I errors and type II errors is widely used in medical science, biometrics and computer science.
Type I errors can be thought of as errors of commission (i.e., wrongly including a 'false case'). For instance, consider testing patients for a virus infection. If the patient is not infected with the virus but the test shows that they are, this is considered a type I error.
By contrast, type II errors are errors of omission (i.e., wrongly leaving out a 'true case'). In the example above, if the patient is infected by the virus but the test shows that they are not, that is a type II error.
Definition
Statistical background
In statistical test theory, the notion of a statistical error is an integral part of hypothesis testing. The test involves choosing between two competing propositions: the null hypothesis, denoted by H0, and the alternative hypothesis, denoted by H1. This is conceptually similar to the judgement in a court trial. The null hypothesis corresponds to the position of the defendant: just as he is presumed to be innocent until proven guilty, so is the null hypothesis presumed to be true until the data provide convincing evidence against it. The alternative hypothesis corresponds to the position against the defendant. The null hypothesis typically asserts the absence of a difference or the absence of an association; thus, the null hypothesis can never be that there is a difference or an association.
If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then an error has occurred. There are two situations in which the decision is wrong. The null hypothesis may be true, whereas we reject H0. On the other hand, the alternative hypothesis may be true, whereas we do not reject H0. Two types of error are distinguished: type I error and type II error.
Type I error
The first kind of error is the mistaken rejection of a null hypothesis as the result of a test procedure. This kind of error is called a type I error (false positive) and is sometimes called an error of the first kind. In terms of the courtroom example, a type I error corresponds to convicting an innocent defendant.
Type II error
The second kind of error is the mistaken failure to reject the null hypothesis as the result of a test procedure. This sort of error is called a type II error (false negative) and is also referred to as an error of the second kind. In terms of the courtroom example, a type II error corresponds to acquitting a criminal.
Crossover error rate
The crossover error rate (CER) is the point at which type I errors and type II errors are equal. A system with a lower CER value provides more accuracy than a system with a higher CER value.
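As an illustration, the crossover point can be found numerically. The following minimal Python sketch (standard library only) assumes two hypothetical normally distributed test scores, one for cases where the null hypothesis is true and one for where it is false; the distribution parameters are arbitrary illustrative values, not from any real system.

    from statistics import NormalDist

    # Hypothetical score distributions (illustrative parameters only):
    null_scores = NormalDist(mu=0.0, sigma=1.0)   # null hypothesis true
    alt_scores = NormalDist(mu=2.0, sigma=1.0)    # alternative true

    def type_i(t):   # null cases scoring above threshold t (false positives)
        return 1 - null_scores.cdf(t)

    def type_ii(t):  # alternative cases scoring below threshold t (false negatives)
        return alt_scores.cdf(t)

    # type_i falls and type_ii rises as t increases, so their difference
    # has a single root: the crossover error rate (CER). Bisect for it.
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if type_i(mid) > type_ii(mid):
            lo = mid
        else:
            hi = mid
    print(f"threshold ~ {lo:.3f}, CER ~ {type_i(lo):.4f}")  # ~1.0 and ~0.159 here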
False positive and false negative
In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus, a type I error is equivalent to a false positive, and a type II error is equivalent to a false negative.
Table of error types
Tabulated relations between truth/falseness of the null hypothesis and outcomes of the test:

                          H0 is true              H0 is false
  Reject H0               Type I error            Correct decision
                          (false positive)        (true positive)
  Fail to reject H0       Correct decision        Type II error
                          (true negative)         (false negative)
Error rate
A perfect test would have zero false positives and zero false negatives. However, statistical methods are probabilistic, and it cannot be known for certain whether statistical conclusions are correct. Whenever there is uncertainty, there is the possibility of making an error. Considering this, all statistical hypothesis tests have a probability of making type I and type II errors.
The type I error rate is the probability of rejecting the null hypothesis given that it is true. The test is designed to keep the type I error rate below a prespecified bound called the significance level, usually denoted by the Greek letter α (alpha) and is also called the alpha level. Usually, the significance level is set to 0.05 (5%), implying that it is acceptable to have a 5% probability of incorrectly rejecting the true null hypothesis.
The rate of the type II error is denoted by the Greek letter β (beta) and related to the power of a test, which equals 1−β.
These two types of error rates are traded off against each other: for any given sample set, the effort to reduce one type of error generally results in increasing the other type of error.
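A minimal Python sketch of this trade-off for a one-sided Z-test (standard library only; the means, σ, and sample size below are arbitrary illustrative values): tightening α raises β and lowers power, all else being equal.

    from math import sqrt
    from statistics import NormalDist

    def error_rates(alpha, mu0, mu1, sigma, n):
        """Alpha, beta and power for a one-sided Z-test of H0: mu = mu0
        against H1: mu = mu1 > mu0, with n observations of known sigma."""
        se = sigma / sqrt(n)                           # standard error of the mean
        crit = NormalDist(mu0, se).inv_cdf(1 - alpha)  # rejection threshold
        beta = NormalDist(mu1, se).cdf(crit)           # P(fail to reject | H1 true)
        return alpha, beta, 1 - beta

    for a in (0.10, 0.05, 0.01):
        alpha, beta, power = error_rates(a, mu0=0.0, mu1=1.0, sigma=2.0, n=16)
        print(f"alpha={alpha:.2f}  beta={beta:.3f}  power={power:.3f}")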
The quality of hypothesis test
The same idea can be expressed in terms of the rate of correct results, and can therefore be used to minimize error rates and improve the quality of a hypothesis test. To reduce the probability of committing a type I error, making the alpha value more stringent is both simple and efficient. To decrease the probability of committing a type II error, which is closely associated with a test's power, one can either increase the sample size or relax the alpha level, either of which increases the test's power. A test statistic is robust if the type I error rate is controlled.
Varying the threshold (cut-off) value can also make a test more specific or more sensitive, which in turn elevates test quality. For example, imagine a medical test in which an experimenter measures the concentration of a certain protein in a blood sample. The experimenter could adjust the threshold, and patients would be diagnosed as having the disease if their measured value exceeds it. Moving the threshold shifts the balance between false positives and false negatives.
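For instance, here is a minimal sketch of the protein-test example, assuming hypothetical normal distributions for healthy and diseased patients (the means and σ are invented purely for illustration); raising the threshold trades false positives for false negatives:

    from statistics import NormalDist

    healthy = NormalDist(mu=10.0, sigma=2.0)    # hypothetical protein levels
    diseased = NormalDist(mu=16.0, sigma=2.0)   # hypothetical protein levels

    for t in (11.0, 13.0, 15.0):                # candidate diagnostic thresholds
        false_pos = 1 - healthy.cdf(t)          # healthy flagged as diseased
        false_neg = diseased.cdf(t)             # diseased missed by the test
        print(f"threshold={t:4.1f}  false positive={false_pos:.3f}  "
              f"false negative={false_neg:.3f}")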
Example
Since in a real experiment it is impossible to avoid all type I and type II errors, it is important to consider the amount of risk one is willing to take to falsely reject H0 or accept H0. The solution to this question would be to report the p-value or significance level α of the statistic. For example, if the p-value of a test statistic result is estimated at 0.0596, then there is a probability of 5.96% that we falsely reject H0. Or, if the test is performed at level α, say 0.05, then we allow ourselves a 5% chance of falsely rejecting H0. A significance level α of 0.05 is relatively common, but there is no general rule that fits all scenarios.
Vehicle speed measuring
The speed limit of a freeway in the United States is 120 kilometers per hour (75 mph). A device is set to measure the speed of passing vehicles. Suppose that the device will conduct three measurements of the speed of a passing vehicle, recording them as a random sample X1, X2, X3. The traffic police will or will not fine drivers depending on the average speed. That is to say, the test statistic is
T = (X1 + X2 + X3)/3.
In addition, we suppose that the measurements X1, X2, X3 are modeled as a normal distribution N(μ, 2). Then T should follow N(μ, 2/√3), where the parameter μ represents the true speed of the passing vehicle. In this experiment, the null hypothesis H0 and the alternative hypothesis H1 should be
H0: μ=120 against H1: μ>120.
If we perform the test at level α = 0.05, then a critical value c should be calculated to solve
P(T ≥ c | μ = 120) = 0.05.
According to the change-of-units rule for the normal distribution, this is equivalent to P(Z ≥ (c − 120)/(2/√3)) = 0.05. Referring to a Z-table, we get (c − 120)/(2/√3) = 1.645, and therefore c ≈ 121.9.
Here, the critical region is T ≥ 121.9. That is to say, if the recorded average speed of a vehicle is greater than the critical value 121.9, the driver will be fined. However, 5% of drivers are still falsely fined, since the recorded average speed exceeds 121.9 while the true speed does not pass 120; this is a type I error.
The type II error corresponds to the case in which the true speed of a vehicle is over 120 kilometers per hour but the driver is not fined. For example, if the true speed of a vehicle is μ = 125, the probability that the driver is not fined can be calculated as
P(T < 121.9 | μ = 125) = P(Z < (121.9 − 125)/(2/√3)) ≈ Φ(−2.68) ≈ 0.0036,
which means that if the true speed of a vehicle is 125, the driver has a 0.36% probability of avoiding the fine when the test is performed at level α = 0.05, since the recorded average speed is lower than 121.9. If the true speed is closer to 121.9 than to 125, then the probability of avoiding the fine will be higher.
The tradeoff between type I and type II errors should also be considered: if the traffic police do not want to falsely fine innocent drivers, the level α can be set to a smaller value, such as 0.01. However, in that case, drivers whose true speed is over 120 kilometers per hour (for example, 125) would be more likely to avoid the fine.
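The numbers in this example can be reproduced with a few lines of Python (standard library only); the sketch also shows the effect of tightening α to 0.01:

    from math import sqrt
    from statistics import NormalDist

    sigma_T = 2 / sqrt(3)   # standard deviation of the mean of three N(mu, 2) readings
    z = NormalDist()        # standard normal distribution

    for alpha in (0.05, 0.01):
        c = 120 + z.inv_cdf(1 - alpha) * sigma_T    # critical value
        beta = NormalDist(125, sigma_T).cdf(c)      # P(no fine | true speed 125)
        print(f"alpha={alpha:.2f}  c={c:.1f}  beta={beta:.4f}")
    # alpha=0.05 gives c ~ 121.9 and beta ~ 0.0036 (0.36%), as computed above.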
Etymology
In 1928, Jerzy Neyman (1894–1981) and Egon Pearson (1895–1980), both eminent statisticians, discussed the problems associated with "deciding whether or not a particular sample may be judged as likely to have been randomly drawn from a certain population": and, as Florence Nightingale David remarked, "it is necessary to remember the adjective 'random' [in the term 'random sample'] should apply to the method of drawing the sample and not to the sample itself".
They identified "two sources of error", namely:
In 1930, they elaborated on these two sources of error, remarking that
In 1933, they observed that these "problems are rarely presented in such a form that we can discriminate with certainty between the true and false hypothesis". They also noted that, in deciding whether to fail to reject, or reject a particular hypothesis amongst a "set of alternative hypotheses", H1, H2..., it was easy to make an error,
In all of the papers co-written by Neyman and Pearson the expression H0 always signifies "the hypothesis to be tested".
In the same paper they call these two sources of error, errors of type I and errors of type II respectively.
Related terms
Null hypothesis
It is standard practice for statisticians to conduct tests in order to determine whether or not a "speculative hypothesis" concerning the observed phenomena of the world (or its inhabitants) can be supported. The results of such testing determine whether a particular set of results agrees reasonably (or does not agree) with the speculated hypothesis.
On the basis that it is always assumed, by statistical convention, that the speculated hypothesis is wrong, and the so-called "null hypothesis" that the observed phenomena simply occur by chance (and that, as a consequence, the speculated agent has no effect) – the test will determine whether this hypothesis is right or wrong. This is why the hypothesis under test is often called the null hypothesis (most likely, coined by Fisher (1935, p. 19)), because it is this hypothesis that is to be either nullified or not nullified by the test. When the null hypothesis is nullified, it is possible to conclude that data support the "alternative hypothesis" (which is the original speculated one).
The consistent application by statisticians of Neyman and Pearson's convention of representing "the hypothesis to be tested" (or "the hypothesis to be nullified") with the expression H0 has led to circumstances where many understand the term "the null hypothesis" as meaning "the nil hypothesis" – a statement that the results in question have arisen through chance. This is not necessarily the case – the key restriction, as per Fisher (1966), is that "the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the basis of the 'problem of distribution', of which the test of significance is the solution." As a consequence of this, in experimental science the null hypothesis is generally a statement that a particular treatment has no effect; in observational science, it is that there is no difference between the value of a particular measured variable, and that of an experimental prediction.
Statistical significance
If the probability of obtaining a result as extreme as the one obtained, supposing that the null hypothesis were true, is lower than a pre-specified cut-off probability (for example, 5%), then the result is said to be statistically significant and the null hypothesis is rejected.
British statistician Sir Ronald Aylmer Fisher (1890–1962) stressed that the null hypothesis
Application domains
Medicine
In the practice of medicine, the differences between the applications of screening and testing are considerable.
Medical screening
Screening involves relatively cheap tests that are given to large populations, none of whom manifest any clinical indication of disease (e.g., Pap smears).
Testing involves far more expensive, often invasive, procedures that are given only to those who manifest some clinical indication of disease, and are most often applied to confirm a suspected diagnosis.
For example, most states in the US require newborns to be screened for phenylketonuria and hypothyroidism, among other congenital disorders.
Hypothesis: "The newborns have phenylketonuria and hypothyroidism".
Null hypothesis (H0): "The newborns do not have phenylketonuria and hypothyroidism".
Type I error (false positive): The true fact is that the newborns do not have phenylketonuria and hypothyroidism but we consider they have the disorders according to the data.
Type II error (false negative): The true fact is that the newborns have phenylketonuria and hypothyroidism but we consider they do not have the disorders according to the data.
Although they display a high rate of false positives, the screening tests are considered valuable because they greatly increase the likelihood of detecting these disorders at a far earlier stage.
The simple blood tests used to screen possible blood donors for HIV and hepatitis have a significant rate of false positives; however, physicians use much more expensive and far more precise tests to determine whether a person is actually infected with either of these viruses.
Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. The US rate of false positive mammograms is up to 15%, the highest in the world. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram. False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90–95% of women who get a positive mammogram do not have the condition. The lowest rate in the world is in the Netherlands, at 1%. The lowest rates are generally in Northern Europe, where mammography films are read twice and a high threshold for additional testing is set (the high threshold decreases the power of the test).
The ideal population screening test would be cheap, easy to administer, and produce zero false negatives, if possible. Such tests usually produce more false positives, which can subsequently be sorted out by more sophisticated (and expensive) testing.
Medical testing
False negatives and false positives are significant issues in medical testing.
Hypothesis: "The patients have the specific disease".
Null hypothesis (H0): "The patients do not have the specific disease".
Type I error (false positive): The true fact is that the patients do not have a specific disease but the physician judges the patient is ill according to the test reports.
Type II error (false negative): The true fact is that the disease is actually present but the test reports provide a falsely reassuring message to patients and physicians that the disease is absent.
False positives can also produce serious and counter-intuitive problems when the condition being searched for is rare, as in screening. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the positives detected by that test will be false. The probability that an observed positive result is a false positive may be calculated using Bayes' theorem.
False negatives produce serious and counter-intuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the negatives detected by the test will be false.
This sometimes leads to inappropriate or inadequate treatment of both the patient and their disease. A common example is relying on cardiac stress tests to detect coronary atherosclerosis, even though cardiac stress tests are known to only detect limitations of coronary artery blood flow due to advanced stenosis.
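Both screening scenarios above are direct applications of Bayes' theorem. A minimal Python sketch using the rates mentioned (the sensitivity in the first case and the specificity in the second are assumed values, chosen only for illustration):

    # Rare condition: prevalence 1 in 1,000,000, false positive rate 1 in 10,000,
    # sensitivity assumed to be 1.0 for illustration.
    prev, fpr, sens = 1e-6, 1e-4, 1.0
    ppv = sens * prev / (sens * prev + fpr * (1 - prev))
    print(f"P(infected | positive) = {ppv:.4f}")   # ~0.0099: ~99% of positives are false

    # Common condition: prevalence 70%, false negative rate 10%,
    # specificity assumed to be 0.9 for illustration.
    prev, fnr, spec = 0.70, 0.10, 0.90
    p_diseased_given_neg = fnr * prev / (fnr * prev + spec * (1 - prev))
    print(f"P(diseased | negative) = {p_diseased_given_neg:.3f}")   # ~0.206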
Biometrics
Biometric matching, such as for fingerprint recognition, facial recognition or iris recognition, is susceptible to type I and type II errors.
Hypothesis: "The input does not identify someone in the searched list of people".
Null hypothesis: "The input does identify someone in the searched list of people".
Type I error (false reject rate): The true fact is that the person is someone in the searched list but the system concludes that the person is not according to the data.
Type II error (false match rate): The true fact is that the person is not someone in the searched list but the system concludes that the person is someone whom we are looking for according to the data.
The probability of type I errors is called the "false reject rate" (FRR) or false non-match rate (FNMR), while the probability of type II errors is called the "false accept rate" (FAR) or false match rate (FMR).
If the system is designed to rarely match suspects then the probability of type II errors can be called the "false alarm rate". On the other hand, if the system is used for validation (and acceptance is the norm) then the FAR is a measure of system security, while the FRR measures user inconvenience level.
Security screening
False positives are routinely found every day in airport security screening, which ultimately relies on visual inspection systems. The installed security alarms are intended to prevent weapons being brought onto aircraft; yet they are often set to such high sensitivity that they alarm many times a day for minor items, such as keys, belt buckles, loose change, mobile phones, and tacks in shoes.
Hypothesis: "The item is a weapon".
Null hypothesis: "The item is not a weapon".
Type I error (false positive): The true fact is that the item is not a weapon but the system still sounds an alarm.
Type II error (false negative): The true fact is that the item is a weapon but the system keeps silent at this time.
The ratio of false positives (identifying an innocent traveler as a terrorist) to true positives (detecting a would-be terrorist) is, therefore, very high; and because almost every alarm is a false positive, the positive predictive value of these screening tests is very low.
The relative cost of false results determines the likelihood that test creators allow these events to occur. As the cost of a false negative in this scenario is extremely high (not detecting a bomb being brought onto a plane could result in hundreds of deaths) whilst the cost of a false positive is relatively low (a reasonably simple further inspection) the most appropriate test is one with a low statistical specificity but high statistical sensitivity (one that allows a high rate of false positives in return for minimal false negatives).
Computers
The notions of false positives and false negatives have a wide currency in the realm of computers and computer applications, including computer security, spam filtering, malware, optical character recognition, and many others.
For example, in the case of spam filtering:
Hypothesis: "The message is spam".
Null hypothesis: "The message is not spam".
Type I error (false positive): Spam filtering or spam blocking techniques wrongly classify a legitimate email message as spam and, as a result, interfere with its delivery.
Type II error (false negative): Spam email is not detected as spam, but is classified as non-spam.
While most anti-spam tactics can block or filter a high percentage of unwanted emails, doing so without creating significant false-positive results is a much more demanding task. A low number of false negatives is an indicator of the efficiency of spam filtering.
| Mathematics | Statistics | null |
5662719 | https://en.wikipedia.org/wiki/Wood%20turtle | Wood turtle | The wood turtle (Glyptemys insculpta) is a species of turtle in the family Emydidae. The species is native to northeastern North America. The genus Glyptemys contains only one other species of turtle: the bog turtle (Glyptemys muhlenbergii). The wood turtle reaches a straight carapace length of , its defining characteristic being the pyramidal shape of the scutes on its upper shell. Morphologically, it is similar to the bog turtle, spotted turtle (Clemmys guttata), and Blanding's turtle (Emydoidea blandingii). The wood turtle exists in a broad geographic range extending from Nova Scotia in the north (and east) to Minnesota in the west and Virginia in the south. In the past, it was forced south by encroaching glaciers: skeletal remains have been found as far south as Georgia.
It spends a great deal of time in or near the water of wide rivers, preferring shallow, clear streams with compacted and sandy bottoms. The wood turtle can also be found in forests and grasslands, but will rarely be seen more than several hundred meters from flowing water. It is diurnal and is not overtly territorial. It spends the winter in hibernation and the hottest parts of the summer in estivation.
The wood turtle is omnivorous and is capable of eating on land or in water. On an average day, a wood turtle will move , a decidedly long distance for a turtle. Many other animals that live in its habitat pose a threat to it. Raccoons are over-abundant in many places and are a direct threat to all life stages of this species. Inadvertently, humans cause many deaths through habitat destruction, road traffic, farming accidents, and illegal collection. When unharmed, it can live for up to 40 years in the wild and 58 years in captivity.
The wood turtle belongs to the family Emydidae. The specific name, insculpta, refers to the rough, sculptured surface of the carapace. This turtle species inhabits aquatic and terrestrial areas of North America, primarily the northeastern United States and parts of Canada. Wood turtle populations are of high conservation concern due to human interference with natural habitats. Habitat destruction and fragmentation can negatively impact the ability of wood turtles to find suitable mates and build high-quality nests.
Taxonomy
Formerly in the genus Clemmys, the wood turtle is now a member of the genus Glyptemys, a classification that the wood turtle shares with only the bog turtle. It and the bog turtle have a similar genetic makeup, which is marginally different from that of the spotted turtle (Clemmys guttata), the only current member of the genus Clemmys. The wood turtle has undergone extensive scientific name changes by various scientists over the course of its history. Today, there are several prominent common names for the wood turtle, including sculptured tortoise, red-legged tortoise, and redleg.
Although no subspecies are recognized, there are morphological differences in wood turtles between areas. Individuals found in the west of its geographic range (areas like the Great Lakes and the Midwest United States) have a paler complexion on the inside of the legs and underside of the neck than ones found in the east (places including the Appalachian Mountains, New York, and Pennsylvania). Genetic analysis has also revealed that southern populations have less genetic diversity than the northern; however, both exhibit a fair amount of diversity considering the decline in numbers that have occurred during previous ice ages.
Description
The wood turtle usually grows to between in straight carapace length, but may reach a maximum of . It has a rough carapace that is tan, grayish brown, or brown in color, with a central ridge (called a keel) made up of a pyramidal pattern of ridges and grooves. Older turtles typically display an abraded or worn carapace. Fully grown, it weighs . The wood turtle's karyotype consists of 50 chromosomes.
The larger scutes display a pattern of black or yellow lines. The wood turtle's plastron (ventral shell) is yellowish in color and has dark patches. The posterior margin of the plastron terminates in a V-shaped notch. Although sometimes speckled with yellowish spots, the upper surface of the head is often a dark gray to solid black. The ventral surfaces of the neck, chin, and legs are orange to red with faint yellow stripes along the lower jaw of some individuals. Seasonal variation in color vibrancy is known to occur.
At maturity, males, which reach a maximum straight carapace length of , are larger than females, which have been recorded to reach . Males also have larger claws, a larger head, a concave plastron, a more dome-like carapace, and a longer tail than females. The plastron of females and juveniles is flat while in males it gains concavity with age. The posterior marginal scutes of females and juveniles (of either sex) radiate outward more than in mature males. The coloration on the neck, chin, and inner legs is more vibrant in males than in females, which display a pale yellowish color in those areas. Hatchlings range in size from in length (straight carapace measurement). The plastrons of hatchlings are dull gray to brown. Their tail usually equals the length of the carapace and their neck and legs lack the bright coloration found in adults. Hatchlings' carapaces also are as wide as they are long and lack the pyramidal pattern found in older turtles.
The eastern box turtle (Terrapene c. carolina) and Blanding's turtle (Emydoidea blandingii) are similar in appearance to the wood turtle, and all three live in overlapping habitats. However, unlike the wood turtle, both Blanding's turtle and the eastern box turtle have hinged plastrons that allow them to completely close their shells. The diamondback terrapin (Malaclemys terrapin) has a shell closely resembling the wood turtle's; however its skin is gray in color, and it inhabits coastal brackish and saltwater marshes. The bog turtle and spotted turtle are also similar, but neither of these has the specific sculptured surface found on the carapace of the wood turtle.
Distribution and habitat
The wood turtle is found in most New England states, Nova Scotia, west to Michigan, northern Indiana and Minnesota, and south to Virginia. Overall, the distribution is disjunct, with populations often being small and isolated. Roughly 30% of its total population is in Canada. It prefers slow-moving streams containing a sandy bottom and heavily vegetated banks. The soft bottoms and muddy shores of these streams are ideal for overwintering. Also, the areas bordering the streams (usually with open canopies) are used for nesting. Spring to summer is spent in open areas including forests, fields, bogs, wet meadows, and beaver ponds. The rest of the year is spent in the aforementioned waterways.
The densities of wood turtle populations have also been studied. In the northern portion of its range (Quebec and other areas of Canada), populations are fairly dilute, containing an average of 0.44 individuals per , while in the south, over the same area, densities varied widely, from 6 to 90 turtles. In addition, colonies have been found to often contain more females than males.
In the western portion of its range, wood turtles are more aquatic. In the east, wood turtles are decidedly more terrestrial, especially during the summer. During this time, they can be found in wooded areas with wide open canopies. However, even here, they are never far from water and will enter it every few days.
Evolutionary history
In the past, wood turtle populations were forced south by extending glaciers. Remains from the Rancholabrean period (300,000 to 11,000 years ago) have been found in states such as Georgia and Tennessee, both of which are well south of their current range. After the receding of the ice, wood turtle colonies were able to re-inhabit their customary northern range (areas like New Brunswick and Nova Scotia).
Nesting behavior
The wood turtle is oviparous. It produces offspring by laying eggs, and does not provide parental care outside of nest-building. Thus, the location and quality of nesting sites determine the offspring survival and fitness; so females invest significant time and energy into nest site selection and construction. Females select nest sites based on soil temperature (preferring warmer temperature nest sites), but not soil composition. Average nest size is four inches wide and three inches deep. Also, females build nests in elevated areas in order to avoid flooding and predation. After laying eggs, female wood turtles will cover the nest with leaves or dirt in order to hide the unhatched eggs from predators, and then the female will leave the nest location until the next mating season. Nesting sites can be used by the same female for multiple years. Because nest building occurs along rivers, females tend to spend more time along river areas, compared to male turtles.
Ecology and behavior
During the spring, the wood turtle is active during the daytime (usually between about 7:00 a.m. and 7:00 p.m.) and will almost always be found within several hundred metres of a stream. The early morning and late afternoon are preferred foraging periods. Throughout this season, the wood turtle uses logs, sandy shores, or banks to bask in sunlight. In order to maintain its body temperatures through thermoregulation, it spends a considerable amount of time basking, most of which takes place in the late morning and late afternoon. The wood turtle reaches a peak body temperature of after basking. During times of extreme heat, it has been known to estivate. Several reports mention individuals resting under vegetation, fallen debris and in shallow puddles. During the summer, the wood turtle is considered a largely terrestrial animal. At night, its average body temperature drops to between and it will rest in small creeks or nearby land (usually in areas containing some sort of underbrush or grass).
During colder weather, the wood turtle stays in the water for a larger percentage of the time. For this reason, during the winter months (and the late fall and early spring) it is considered an aquatic turtle. November through February or March is spent in hibernation at the bottom of a small, flowing river. The wood turtle may hibernate alone or in large groups. During this period, individuals bury themselves in the thick mud at the bottom of the river and rarely move. During hibernation, it is vulnerable to flash floods. Emergence does not occur until March or sometimes April, months that mark the beginning of its active season (males are typically more active than females at this time).
Males are known to be aggressive, with larger and older turtles being more dominant. Larger males rank higher in the social hierarchy often created by wood turtle colonies. In the wild, the submissive turtle is either forced to flee or endures physical attacks, which include biting, shoving, and ramming. Larger and more dominant males will sometimes try to remove a subordinate male while he is mating with a female. The defender will, if he does not successfully fight for his position, lose the female to the larger male. Therefore, among males, there is a direct relationship between copulation opportunities and social rank. However, the outcome of encounters between two turtles is more aggression-dependent than size-dependent: the wood turtle that is more protective of his or her area is the victor. Physical bouts between wood turtles (regardless of sex) increase marginally during the fall and spring (times of mating).
The wood turtle is omnivorous, feeding mainly on plant matter and animals both on land and in water. It eats prey such as beetles, millipedes, and slugs. Also, wood turtles consume specific fungi (Amanita muscaria and Leccinum arcolatum), mosses, grasses, various insects, and also carrion. On occasion, it can be seen stomping the ground with alternating hits of the left and right front feet. This behavior imitates the vibrations caused by moles, sometimes causing earthworms to rise to the surface where they quickly become easy prey. When hunting, the wood turtle pokes its head into such areas as dead and decaying logs, the bottoms of bushes, and in other vegetation. In the water, it exhibits similar behavior, searching algae beds and cavities along the sides of the stream or river.
Many different animals are predators of or otherwise pose a threat to the wood turtle. They include snapping turtles, raccoons, otters, foxes, and cats. All of these species destroy unhatched eggs and prey upon hatchlings and juveniles. Several animals that often target wood turtle eggs are the common raven and coyote, which may completely destroy the nests they encounter. Evidence of predatory attacks (wounds to the skin and such) are common on individuals, but the northern populations tend to display more scarring than the southern ones. In addition to these threats, wood turtles also suffer from leech infestations.
Movement
The wood turtle can travel at a relatively fast speed (upwards of ); it also travels long distances during the months that it is active. In one instance, of nine turtles studied, the average distance covered in a 24-hour period was , with a net displacement of .
The wood turtle, an intelligent animal, has homing capabilities. Its capacity for directional movement was discovered through an experiment in which an individual found food in a maze; the results showed that these turtles have locating abilities comparable to those of a rat. A separate experiment supported this finding: one male wood turtle was displaced after being captured, and within five weeks it returned to its original location. The homing ability of the wood turtle does not vary among sexes, age groups, or directions of travel.
Life cycle
The wood turtle takes a long time to reach sexual maturity and has low fecundity (reproductive output), but a high adult survival rate; the high survival rates do not, however, extend to juveniles or hatchlings. Although males establish hierarchies, they are not territorial. The wood turtle becomes sexually mature between 14 and 18 years of age. Mating activity peaks in the spring and again in the fall, although the species is known to mate throughout its active season and has even been observed mating in December. In one rare instance, a female wood turtle hybridized with a male Blanding's turtle.
The courtship ritual consists of several hours of 'dancing,' which usually occurs on the edge of a small stream. Males often initiate this behavior, starting by nudging the female's shell, head, tail, and legs. The female may flee from the area, in which case the male will follow. After the chase (if it occurs), the male and female approach and back away from each other while continually raising and extending their heads. After some time, they lower their heads and swing them from left to right. Once it is certain that the two individuals will mate, the male gently bites the female's head and mounts her. Intercourse lasts between 22 and 33 minutes. Actual copulation takes place in the water, at depths between . Although unusual, copulation does occur on land. During the two prominent mating periods (spring and fall), females are mounted anywhere from one to eight times, and several of these matings may result in fertilization. For this reason, a number of wood turtle clutches have been found to contain hatchlings sired by more than one male.
Nesting occurs from May until July. Nesting areas receive ample sunlight, contain soft soil, are free from flooding, and are devoid of rocks and disruptively large vegetation. These sites, however, can be limited among wood turtle colonies, forcing females to travel long distances in search of a suitable site, sometimes a trip. Before laying her eggs, the female may prepare several false nests. After a proper area is found, she digs out a small cavity, lays about seven eggs (though anywhere from three to 20 is common), and fills in the area with earth. Oval and white, the eggs average in length and in width, and weigh about . The nests themselves are deep, and digging and filling one may take a total of four hours. Hatchlings emerge from the nest between August and October; overwintering in the nest is rare although entirely possible. With an average length of , the hatchlings lack the vibrant coloration of the adults. Female wood turtles generally lay one clutch per year and tend to congregate around optimal nesting areas.
The wood turtle grows rapidly throughout the first years of its life. Five years after hatching, it already measures ; by age 16, it is a full , depending on sex. The wood turtle can be expected to live for 40 years in the wild, with captives living up to 58 years.
The wood turtle is the only turtle species known to have been observed engaging in same-sex intercourse, although same-sex behavior is known in more than one species of tortoise.
The wood turtle exhibits genetic sex determination, in contrast to the temperature-dependent sex determination of most turtles.
Mating system
Specific mating courtship occurs most often in the fall months, usually during midday hours (11:00 to 13:00), when many of the population's turtles are out feeding. Mating is based on a male competitive hierarchy in which a few higher-ranked males gain the majority of mates in the population. Male wood turtles fight to gain access to females; these fights involve aggressive behaviors such as biting and chasing, and the combatants defend themselves by retracting their heads into their hard shells. The higher-ranked, winning males in the hierarchy sire a greater number of offspring than lower-ranked males, increasing the dominant males' fitness. Female wood turtles mate with multiple males and are able to store sperm from multiple mates. Although the mechanism of sperm storage is unknown for the wood turtle, other turtle species have internal compartments that can store viable sperm for years. Multiple mating ensures fertilization of all the female's eggs and often results in multiple paternity of a clutch, a phenomenon common among marine and freshwater turtles. Multiple paternity in wood turtle populations has been demonstrated by DNA fingerprinting, which uses an oligonucleotide probe to produce sex-specific markers, ultimately providing multi-locus DNA markers.
Conservation
Despite many sightings and a seemingly large and diverse distribution, wood turtle numbers are in decline. Many human-caused deaths result from habitat destruction, farming accidents, and road traffic. The species is also commonly collected illegally for the international pet trade. These combined threats have led many of the areas where it lives to enact protective laws; despite such legislation, however, enforcement of the laws and education of the public regarding the species remain minimal.
Proper protection of the wood turtle requires in-depth land surveys of its habitat to establish population numbers. One emerging solution to the highway mortality problem, which primarily affects nesting females, is the construction of under-road channels: tunnels that allow the wood turtle to pass under the road, helping to prevent accidental deaths. Brochures and other media warning people not to keep the wood turtle as a pet are currently being distributed. Finally, leaving nests undisturbed, especially at commonly used nesting sites, is among the best ways to enable the wood turtle's survival.
While considered threatened nationally by the Committee on the Status of Endangered Wildlife in Canada (COSEWIC), the wood turtle is listed as vulnerable within the province of Nova Scotia under the Species at Risk Act. The species is highly susceptible to human land use, so special management practices for woodlands, rivers, and farmland, as well as motor vehicle restrictions and protection from general disruption during critical times such as nesting and movement to overwintering habitat, are closely monitored. Since 2012, the Clean Annapolis River Project (CARP) has provided research and stewardship for this species, including the identification of crucial habitats, estimation of distribution and movement, and outreach.
| Biology and health sciences | Turtles | Animals |
16315657 | https://en.wikipedia.org/wiki/High-definition%20television | High-definition television | High-definition television (HDTV) describes a television or video system which provides a substantially higher image resolution than the previous generation of technologies. The term has been used since at least 1933; in more recent times, it refers to the generation following standard-definition television (SDTV). It is the standard video format used in most broadcasts: terrestrial broadcast television, cable television, and satellite television.
Formats
HDTV may be transmitted in various formats:
720p (1280 × 720): 921,600 pixels
1080i (1920 × 1080) interlaced scan: 1,036,800 pixels per field (≈1.04 Mpx).
1080p (1920 × 1080) progressive scan: 2,073,600 pixels (≈2.07 Mpx).
Some countries also use a non-standard CTA resolution, such as 1440 × 1080: 777,600 pixels (≈0.78 Mpx) per field or 1,555,200 pixels (≈1.56 Mpx) per frame
When transmitted at two megapixels per frame, HDTV provides about five times as many pixels as SD (standard-definition television). The increased resolution provides for a clearer, more detailed picture. In addition, progressive scan and higher frame rates result in a picture with less flicker and better rendering of fast motion. Modern HDTV broadcasting began in 1989 in Japan, under the MUSE/Hi-Vision analog system. HDTV was widely adopted worldwide in the late 2000s.
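The pixel arithmetic above is easy to verify; the short Python sketch below assumes the 720 × 576 frame of 576i as the SD reference, which is one common choice rather than a figure given in the text:

```python
# Pixel counts for common HDTV formats versus standard definition.
# Assumes 720 x 576 (576i) as the SD reference frame size.
formats = {
    "720p":  (1280, 720),
    "1080i": (1920, 1080),   # per full frame; a single field is half of this
    "1080p": (1920, 1080),
}

sd_pixels = 720 * 576  # 414,720 pixels

for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / 1e6:.2f} Mpx), "
          f"{pixels / sd_pixels:.1f}x SD")
```

Running it reproduces the 2,073,600-pixel frame and the roughly five-fold advantage over SD quoted above.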
Standards
All modern high-definition broadcasts utilize digital television standards.
The major digital television broadcast standards used for terrestrial, cable, satellite, and mobile devices are:
DVB, originating in Europe and also used in much of Asia, Africa, and Australia
ATSC, used in much of North America
DTMB, used in China and some neighboring countries
ISDB, used in two incompatible variations in Japan and South America
DMB, used by mobile devices in South Korea
These standards use a variety of video codecs, some of which are also used for internet video.
History
The term high definition once described a series of television systems first announced in 1933 and launched starting in August 1936; however, these systems were only high definition when compared to earlier systems that were based on mechanical systems with as few as 30 lines of resolution. The ongoing competition between companies and nations to create true HDTV spanned the entire 20th century, as each new system became higher definition than the last. In the early 21st century, this race has continued with 4K, 5K and 8K systems.
The British high-definition TV service started trials in August 1936 and a regular service on 2 November 1936 using both the (mechanical) Baird 240-line sequential scan (later referred to as progressive) and the (electronic) Marconi-EMI 405-line interlaced systems. The Baird system was discontinued in February 1937. In 1938, France followed with its own 441-line system, variants of which were also used by a number of other countries. The US NTSC 525-line system joined in 1941. In 1949, France introduced an even higher-resolution standard at 819 lines, a system that would have been high definition even by modern standards, but was monochrome only and had technical limitations that prevented it from achieving the intended definition. All of these systems used interlacing and a 4:3 aspect ratio except the 240-line system, which was progressive (actually described at the time by the technically correct term sequential), and the 405-line system, which started as 5:4 and later changed to 4:3. The 405-line system adopted the (at that time) revolutionary idea of interlaced scanning to overcome the flicker problem of the 240-line system with its 25 Hz frame rate. The 240-line system could have doubled its frame rate, but this would have meant that the transmitted signal would have doubled in bandwidth, an unacceptable option as the video baseband bandwidth was required to be not more than 3 MHz.
Color broadcasts started at similar line counts, first with the US NTSC color system in 1953, which was compatible with the earlier monochrome systems and therefore had the same 525 lines per frame. European standards did not follow until the 1960s, when the PAL and SECAM color systems were added to the monochrome 625-line broadcasts.
The NHK (Japan Broadcasting Corporation) began researching to "unlock the fundamental mechanism of video and sound interactions with the five human senses" in 1964, after the Tokyo Olympics. NHK set out to create an HDTV system that scored much higher in subjective tests than NTSC's previously dubbed HDTV. This new system, NHK Color, created in 1972, included 1125 lines, a 5:3 (1.67:1) aspect ratio and 60 Hz refresh rate. The Society of Motion Picture and Television Engineers (SMPTE), headed by Charles Ginsburg, became the testing and study authority for HDTV technology in the international theater. SMPTE would test HDTV systems from different companies from every conceivable perspective, but the problem of combining the different formats plagued the technology for many years.
There were four major HDTV systems tested by SMPTE in the late 1970s, and in 1979 an SMPTE study group released A Study of High Definition Television Systems:
EIA monochrome: 4:3 aspect ratio, 1023 lines, 60 Hz
NHK color: 5:3 aspect ratio, 1125 lines, 60 Hz
NHK monochrome: 4:3 aspect ratio, 2125 lines, 50 Hz
BBC colour: 8:3 aspect ratio, 1501 lines, 60 Hz
Since the formal adoption of Digital Video Broadcasting's (DVB) widescreen HDTV transmission modes in the mid to late 2000s, the 525-line NTSC (and PAL-M) systems, as well as the European 625-line PAL and SECAM systems, have been regarded as standard-definition television systems.
Analog systems
Early HDTV broadcasting used analog technology that was later converted to digital television with video compression.
In 1949, France started its transmissions with an 819-line system (with 737 active lines). The system was monochrome only and was used only on VHF for the first French TV channel. It was discontinued in 1983.
In 1958, the Soviet Union developed Transformator (meaning Transformer), the first high-resolution television system, capable of producing an image composed of 1,125 lines and aimed at providing teleconferencing for military command. It was a research project, and the system was never deployed by either the military or consumer broadcasting.
In 1986, the European Community proposed HD-MAC, an analog HDTV system with 1,152 lines. A public demonstration took place for the 1992 Summer Olympics in Barcelona. However, HD-MAC was scrapped in 1993 and the DVB project was formed, which would oversee development of a digital HDTV standard.
Japan
In 1979, the Japanese public broadcaster NHK first developed consumer high-definition television with a 5:3 display aspect ratio. The system, known as Hi-Vision or MUSE after its multiple sub-Nyquist sampling encoding (MUSE) for encoding the signal, required about twice the bandwidth of the existing NTSC system but provided about four times the resolution (1035i/1125 lines). In 1981, the MUSE system was demonstrated for the first time in the United States, using the same 5:3 aspect ratio as the Japanese system. Upon visiting a demonstration of MUSE in Washington, US President Ronald Reagan was impressed and officially declared it "a matter of national interest" to introduce HDTV to the US. NHK taped the 1984 Summer Olympics with a Hi-Vision camera, weighing 40 kg.
Satellite test broadcasts started on June 4, 1989, the first daily high-definition programs in the world, with regular testing starting on November 25, 1991, a date dubbed "Hi-Vision Day" because it echoes the system's 1,125-line resolution. Regular broadcasting on BS-9ch commenced on November 25, 1994, featuring commercial and NHK programming.
Several systems were proposed as the new standard for the US, including the Japanese MUSE system, but all were rejected by the Federal Communications Commission (FCC) because of their higher bandwidth requirements. At this time, the number of television channels was growing rapidly and bandwidth was already a problem. A new standard had to be more efficient, needing less bandwidth for HDTV than the existing NTSC.
Decrease of analog HD systems
The limited standardization of analog HDTV in the 1990s did not lead to global HDTV adoption, as technical and economic constraints at the time did not permit HDTV to use bandwidths greater than those of normal television. Early HDTV commercial experiments, such as NHK's MUSE, required over four times the bandwidth of a standard-definition broadcast. Despite efforts to reduce analog HDTV to about twice the bandwidth of SDTV, these television formats were still distributable only by satellite. In Europe, too, the HD-MAC standard was considered not technically viable.
In addition, recording and reproducing an HDTV signal was a significant technical challenge in the early years of HDTV (Sony HDVS). Japan remained the only country with successful public broadcasting of analog HDTV, with seven broadcasters sharing a single channel.
However, the Hi-Vision/MUSE system also faced commercial problems when it launched on November 25, 1991. Only 2,000 HDTV sets had been sold by that day, far short of the optimistic estimate of 1.32 million. Hi-Vision sets were very expensive, up to US$30,000 each, which contributed to low consumer adoption. A Hi-Vision VCR from NEC released at Christmas time retailed for US$115,000. In addition, the United States saw Hi-Vision/MUSE as an outdated system and had already made it clear that it would develop an all-digital system. Experts thought the commercial Hi-Vision system of 1992 was already eclipsed by digital technology developed in the U.S. since 1990, an American victory over the Japanese in terms of technological dominance. By mid-1993, prices of receivers were still as high as 1.5 million yen (US$15,000).
On February 23, 1994, a top broadcasting administrator in Japan admitted the failure of its analog-based HDTV system, saying the U.S. digital format was more likely to become the worldwide standard. However, this announcement drew angry protests from broadcasters and electronics companies who had invested heavily in the analog system. As a result, he retracted his statement the next day, saying that the government would continue to promote Hi-Vision/MUSE. That year, NHK started development of digital television in an attempt to catch up with America and Europe, which resulted in the ISDB format. Japan started digital satellite and HDTV broadcasting in December 2000.
Rise of digital compression
High-definition digital television was not possible with uncompressed video, which requires a bandwidth exceeding 1 Gbit/s for studio-quality HD digital video. Digital HDTV was made possible by the development of discrete cosine transform (DCT) video compression. DCT coding is a lossy image compression technique that was first proposed by Nasir Ahmed in 1972, and was later adapted into a motion-compensated DCT algorithm for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1993 onwards. Motion-compensated DCT compression significantly reduces the amount of bandwidth required for a digital TV signal. By 1991, it had achieved data compression ratios from 8:1 to 14:1 for near-studio-quality HDTV transmission, down to 70–140 Mbit/s. Between 1988 and 1991, DCT video compression was widely adopted as the video coding standard for HDTV implementations, enabling the development of practical digital HDTV. Dynamic random-access memory (DRAM) was also adopted as framebuffer semiconductor memory; the DRAM industry's increased manufacturing capacity and falling prices were important to the commercialization of HDTV.
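These bandwidth figures can be sanity-checked with simple arithmetic. The sketch below assumes 16 bits per pixel (8-bit samples with 4:2:2 chroma subsampling), a plausible studio configuration rather than a value stated in the text, so it is illustrative only:

```python
# Rough uncompressed bit rate for a 1080-line studio HD signal,
# assuming 16 bits per pixel (8-bit samples, 4:2:2 chroma subsampling).
width, height = 1920, 1080
bits_per_pixel = 16
frames_per_second = 30

uncompressed = width * height * bits_per_pixel * frames_per_second
print(f"uncompressed: {uncompressed / 1e9:.2f} Gbit/s")  # ~1.0 Gbit/s

# Applying the 8:1 to 14:1 DCT compression ratios cited above:
for ratio in (8, 14):
    print(f"{ratio}:1 -> {uncompressed / ratio / 1e6:.0f} Mbit/s")
```

Under these assumptions, the 8:1 and 14:1 ratios land at roughly 124 and 71 Mbit/s, consistent with the 70–140 Mbit/s range quoted above.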
Since 1972, the International Telecommunication Union's radio telecommunications sector (ITU-R) had been working on a global recommendation for analog HDTV. These recommendations, however, did not fit in the broadcasting bands that could reach home users. The standardization of MPEG-1 in 1993 led to the acceptance of recommendation ITU-R BT.709. In anticipation of these standards, the DVB organization was formed, an alliance of broadcasters, consumer electronics manufacturers, and regulatory bodies. The DVB develops and agrees upon specifications which are formally standardised by ETSI.
DVB first created the standards for DVB-S digital satellite TV, DVB-C digital cable TV, and DVB-T digital terrestrial TV. These broadcasting systems can be used for both SDTV and HDTV. In the US, the Grand Alliance proposed ATSC as the new standard for SDTV and HDTV. Both ATSC and DVB were based on the MPEG-2 standard, although DVB systems may also be used to transmit video using the newer and more efficient H.264/MPEG-4 AVC compression standards. Common to all DVB standards is the use of highly efficient modulation techniques for further reducing bandwidth and, foremost, for reducing receiver-hardware and antenna requirements.
In 1983, the International Telecommunication Union's radio telecommunications sector (ITU-R) set up a working party (IWP11/6) with the aim of setting a single international HDTV standard. One of the thornier issues concerned a suitable frame/field refresh rate, the world already having split into two camps, 25/50 Hz and 30/60 Hz, largely due to the differences in mains frequency. The IWP11/6 working party considered many views and throughout the 1980s served to encourage development in a number of video digital processing areas, not least conversion between the two main frame/field rates using motion vectors, which led to further developments in other areas. While a comprehensive HDTV standard was not in the end established, agreement on the aspect ratio was achieved.
Initially the existing 5:3 aspect ratio had been the main candidate but, due to the influence of widescreen cinema, the aspect ratio 16:9 (1.78) eventually emerged as being a reasonable compromise between 5:3 (1.67) and the common 1.85 widescreen cinema format. An aspect ratio of 16:9 was duly agreed upon at the first meeting of the IWP11/6 working party at the BBC's Research and Development establishment in Kingswood Warren. The resulting ITU-R Recommendation ITU-R BT.709-2 ("Rec. 709") includes the 16:9 aspect ratio, a specified colorimetry, and the scan modes 1080i (1,080 actively interlaced lines of resolution) and 1080p (1,080 progressively scanned lines). The British Freeview HD trials used MBAFF, which contains both progressive and interlaced content in the same encoding.
It also includes the alternative 1440 × 1152 HD-MAC scan format. (According to some reports, a mooted 750-line (720p) format (720 progressively scanned lines) was viewed by some at the ITU as an enhanced television format rather than a true HDTV format, and so was not included, although 1920 × 1080i and 1280 × 720p systems for a range of frame and field rates were defined by several US SMPTE standards.)
Inaugural HDTV broadcast in the United States
HDTV technology was introduced in the United States in the early 1990s and made official in 1993 by the Digital HDTV Grand Alliance, a group of television, electronic equipment, and communications companies consisting of AT&T Bell Labs, General Instrument, Philips, Sarnoff, Thomson, Zenith and the Massachusetts Institute of Technology. Field testing of HDTV at 199 sites in the United States was completed on August 14, 1994. The first public HDTV broadcast in the United States occurred on July 23, 1996, when the Raleigh, North Carolina television station WRAL-HD began broadcasting from the existing tower of WRAL-TV southeast of Raleigh, winning a race to be first with the HD Model Station in Washington, D.C., which began broadcasting on July 31, 1996 under the callsign WHD-TV, based out of the facilities of NBC owned-and-operated station WRC-TV. The American Advanced Television Systems Committee (ATSC) HDTV system had its public launch on October 29, 1998, during the live coverage of astronaut John Glenn's return mission to space on board the Space Shuttle Discovery. The signal was transmitted coast-to-coast and was seen by the public in science centers and other public theaters specially equipped to receive and display the broadcast.
European HDTV broadcasts
Between 1988 and 1991, several European organizations were working on discrete cosine transform (DCT) based digital video coding standards for both SDTV and HDTV. The EU 256 project by the CMTT and ETSI, along with research by Italian broadcaster RAI, developed a DCT video codec that enabled near-studio-quality HDTV transmission at about 70–140 Mbit/s. The first HDTV transmissions in Europe, albeit not direct-to-home, began in 1990, when RAI broadcast the 1990 FIFA World Cup using several experimental HDTV technologies, including the digital DCT-based EU 256 codec, the mixed analog-digital HD-MAC technology, and the analog MUSE technology. The matches were shown in 8 cinemas in Italy, where the tournament was played, and 2 in Spain. The connection with Spain was made via the Olympus satellite link from Rome to Barcelona and then with a fiber-optic connection from Barcelona to Madrid. After some HDTV transmissions in Europe, the standard was abandoned in 1993, to be replaced by a digital format from DVB.
The first regular broadcasts began on January 1, 2004, when the Belgian company Euro1080 launched the HD1 channel with the traditional Vienna New Year's Concert. Test transmissions had been active since the IBC exhibition in September 2003, but the New Year's Day broadcast marked the official launch of the HD1 channel, and the official start of direct-to-home HDTV in Europe.
Euro1080, a division of the later defunct Belgian TV services company Alfacam, broadcast HDTV channels to break the pan-European stalemate of "no HD broadcasts mean no HD TVs bought means no HD broadcasts ..." and kick-start HDTV interest in Europe. The HD1 channel was initially free-to-air and mainly comprised sporting, dramatic, musical and other cultural events broadcast with a multi-lingual soundtrack on a rolling schedule of four or five hours per day.
These first European HDTV broadcasts used the 1080i format with MPEG-2 compression on a DVB-S signal from SES's Astra 1H satellite. Euro1080 transmissions later changed to MPEG-4/AVC compression on a DVB-S2 signal in line with subsequent broadcast channels in Europe.
Despite delays in some countries, the number of European HD channels and viewers has risen steadily since the first HDTV broadcasts, with SES's annual Satellite Monitor market survey for 2010 reporting more than 200 commercial channels broadcasting in HD from Astra satellites, 185 million HD-capable TVs sold in Europe (60 million in 2010 alone), and 20 million households (27% of all European digital satellite TV homes) watching HD satellite broadcasts (16 million via Astra satellites).
In December 2009, the United Kingdom became the first European country to deploy high-definition content using the new DVB-T2 transmission standard, as specified in the Digital TV Group (DTG) D-book, on digital terrestrial television.
The Freeview HD service contains 13 HD channels and was rolled out region by region across the UK in accordance with the digital switchover process, finally being completed in October 2012. However, Freeview HD was not the first HDTV service over digital terrestrial television in Europe; Italy's RAI had started broadcasting in 1080i on April 24, 2008, using the DVB-T transmission standard.
In October 2008, France deployed five high definition channels using DVB-T transmission standard on digital terrestrial distribution.
Notation
HDTV broadcast systems are identified with three major parameters:
Frame size in pixels is defined as number of horizontal pixels × number of vertical pixels, for example 1280 × 720 or 1920 × 1080. Often the number of horizontal pixels is implied from context and is omitted, as in the case of 720p and 1080p.
Scanning system is identified with the letter p for progressive scanning or i for interlaced scanning.
Frame rate is identified as number of video frames per second. For interlaced systems, the number of frames per second should be specified, but it is not uncommon to see the field rate incorrectly used instead.
If all three parameters are used, they are specified in the following form: [frame size][scanning system][frame or field rate] or [frame size]/[frame or field rate][scanning system]. Often, frame size or frame rate can be dropped if its value is implied from context. In this case, the remaining numeric parameter is specified first, followed by the scanning system.
For example, 1920×1080p25 identifies progressive scanning format with 25 frames per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i25 or 1080i50 notation identifies interlaced scanning format with 25 frames (50 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i30 or 1080i60 notation identifies interlaced scanning format with 30 frames (60 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 720p60 notation identifies progressive scanning format with 60 frames per second, each frame being 720 pixels high; 1,280 pixels horizontally are implied.
Systems using 50 Hz support three scanning rates: 50i, 25p and 50p, while 60 Hz systems support a much wider set of frame rates: 59.94i, 60i, 23.976p, 24p, 29.97p, 30p, 59.94p and 60p. In the days of standard-definition television, the fractional rates were often rounded to whole numbers, e.g. 23.976p was often called 24p, and 59.94i was often called 60i. Sixty-hertz high-definition television supports both the fractional and the slightly different integer rates; therefore, strict usage of notation is required to avoid ambiguity. Nevertheless, 29.97p/59.94i is almost universally called 60i, and likewise 23.976p is called 24p.
For the commercial naming of a product, the frame rate is often dropped and is implied from context (e.g., a 1080i television set). A frame rate can also be specified without a resolution. For example, 24p means 24 progressive scan frames per second, and 50i means 25 interlaced frames per second.
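To make the notation rules concrete, here is a small, hypothetical Python helper; the function name and the lookup table of implied widths are inventions for this sketch, not part of any broadcast standard:

```python
import re

# Horizontal resolutions implied by common vertical resolutions.
IMPLIED_WIDTH = {720: 1280, 1080: 1920}

def parse_hdtv_notation(label: str):
    """Split e.g. '1080i25' or '720p60' into (width, height, scan, rate).

    Handles size-first labels; rate-first labels like '24p' and the
    fractional rates (59.94, 23.976) would need extra cases.
    """
    m = re.fullmatch(r"(\d+)([pi])(\d+(?:\.\d+)?)?", label)
    if not m:
        raise ValueError(f"not a recognized HDTV label: {label!r}")
    height = int(m.group(1))
    scan = "progressive" if m.group(2) == "p" else "interlaced"
    rate = float(m.group(3)) if m.group(3) else None  # implied by context
    return IMPLIED_WIDTH.get(height), height, scan, rate

print(parse_hdtv_notation("1080i25"))  # (1920, 1080, 'interlaced', 25.0)
print(parse_hdtv_notation("720p60"))   # (1280, 720, 'progressive', 60.0)
```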
There is no single standard for HDTV color support. Colors are typically broadcast using a 10-bit-per-channel YUV color space but, depending on the underlying image-generating technologies of the receiver, are subsequently converted to an RGB color space using standardized algorithms. When transmitted directly over the Internet, the colors are typically pre-converted to 8-bit RGB channels for additional storage savings, with the assumption that the video will be viewed only on an sRGB computer screen. As an added benefit to the original broadcasters, the losses of the pre-conversion essentially make these files unsuitable for professional TV re-broadcasting.
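As one concrete example of such a standardized conversion, the sketch below applies the Rec. 709 YPbPr-to-RGB matrix; the constants follow from the BT.709 luma weights (Kr = 0.2126, Kb = 0.0722), but real broadcast chains also involve quantization ranges and transfer functions that are omitted here:

```python
# Convert normalized Y'PbPr (Y' in [0,1], Pb/Pr in [-0.5,0.5]) to R'G'B'
# using Rec. 709 coefficients. Real pipelines also handle 10-bit
# quantization, limited/full range, and the transfer function.
def ypbpr_to_rgb_709(y: float, pb: float, pr: float):
    r = y + 1.5748 * pr
    g = y - 0.1873 * pb - 0.4681 * pr
    b = y + 1.8556 * pb
    # Clamp to the displayable [0, 1] range.
    return tuple(min(1.0, max(0.0, c)) for c in (r, g, b))

print(ypbpr_to_rgb_709(0.5, 0.0, 0.0))  # mid grey -> (0.5, 0.5, 0.5)
```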
Most HDTV systems support resolutions and frame rates defined either in the ATSC table 3, or in EBU specification. The most common are noted below.
Display resolutions
At a minimum, HDTV has twice the linear resolution of standard-definition television (SDTV), thus showing greater detail than either analog television or regular DVD. The technical standards for broadcasting HDTV also handle the 16:9 aspect ratio images without using letterboxing or anamorphic stretching, thus increasing the effective image resolution.
A very high-resolution source may require more bandwidth than available in order to be transmitted without loss of fidelity. The lossy compression that is used in all digital HDTV storage and transmission systems will distort the received picture when compared to the uncompressed source.
Standard frame or field rates
ATSC and DVB define the following frame rates for use with the various broadcast standards:
23.976 Hz (film-looking frame rate compatible with NTSC clock speed standards)
24 Hz (international film and ATSC high-definition material)
25 Hz (PAL film, DVB standard-definition and high-definition material)
29.97 Hz (NTSC film and standard-definition material)
30 Hz (NTSC film, ATSC high-definition material)
50 Hz (DVB high-definition material)
59.94 Hz (ATSC high-definition material)
60 Hz (ATSC high-definition material)
The optimum format for a broadcast depends upon the type of videographic recording medium used and the image's characteristics. For best fidelity to the source, the transmitted field ratio, lines, and frame rate should match those of the source.
PAL, SECAM and NTSC frame rates technically apply only to analog standard-definition television, not to digital or high definition broadcasts. However, with the rollout of digital broadcasting, and later HDTV broadcasting, countries retained their heritage systems. HDTV in former PAL and SECAM countries operates at a frame rate of 25/50 Hz, while HDTV in former NTSC countries operates at 30/60 Hz.
Types of media
High-definition image sources include terrestrial broadcast, direct broadcast satellite, digital cable, IPTV, Blu-ray video disc (BD), and internet downloads.
In the US, residents in the line of sight of television station broadcast antennas can receive free, over-the-air programming with a television set with an ATSC tuner, via a TV aerial. Laws prohibit homeowners' associations and city governments from banning the installation of antennas.
Standard 35mm photographic film used for cinema projection has a much higher image resolution than HDTV systems, and is exposed and projected at a rate of 24 frames per second (frame/s). To be shown on standard television, in PAL-system countries, cinema film is scanned at the TV rate of 25 frame/s, causing a speedup of 4.1 percent, which is generally considered acceptable. In NTSC-system countries, the TV scan rate of 30 frame/s would cause a perceptible speedup if the same were attempted, and the necessary correction is performed by a technique called 3:2 pulldown: Over each successive pair of film frames, one is held for three video fields (1/20 of a second) and the next is held for two video fields (1/30 of a second), giving a total time for the two frames of 1/12 of a second and thus achieving the correct average film frame rate.
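The 3:2 pulldown cadence described above can be verified with a few lines of Python; this is an illustration of the timing arithmetic, not production video code:

```python
# 3:2 pulldown: alternate film frames are held for 3 and 2 video fields.
# NTSC fields arrive at a nominal 60 per second (59.94 in practice).
FIELD_RATE = 60.0

def pulldown_pattern(n_film_frames: int):
    """Return the number of fields each film frame occupies."""
    return [3 if i % 2 == 0 else 2 for i in range(n_film_frames)]

fields = pulldown_pattern(24)          # one second of film
total_time = sum(fields) / FIELD_RATE  # 24 frames -> 60 fields -> 1.0 s
print(fields[:4])                      # [3, 2, 3, 2]
print(f"{len(fields)} film frames shown in {total_time} s")

# PAL instead runs 24 frame/s film at 25 frame/s:
print(f"PAL speedup: {(25 / 24 - 1) * 100:.2f}%")  # ~4.17%
```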
Non-cinematic HDTV video recordings intended for broadcast are typically recorded either in 720p or 1080i format as determined by the broadcaster. 720p is commonly used for Internet distribution of high-definition video, because most computer monitors operate in progressive-scan mode. 720p also imposes less strenuous storage and decoding requirements compared to both 1080i and 1080p. The 1080p/24, 1080i/30, 1080i/25, and 720p/30 formats are most often used on Blu-ray Disc.
Recording and compression
HDTV can be recorded to D-VHS (Digital-VHS or Data-VHS), to W-VHS (analog only), to an HDTV-capable digital video recorder (for example, DirecTV's high-definition digital video recorder, Sky HD's set-top box, Dish Network's VIP 622 or VIP 722 receivers, which allow HD on the primary TV and SD on a secondary TV without a second box, or TiVo's Series 3 or HD recorders), or to an HDTV-ready HTPC. Some cable boxes can receive or record two or more broadcasts at a time in HDTV format, and HDTV programming, some included in the monthly cable service subscription price and some for an additional fee, can be played back with the cable company's on-demand feature.
The massive amount of data storage required to archive uncompressed streams meant that inexpensive uncompressed storage options were not available to the consumer. In 2008, the Hauppauge 1212 Personal Video Recorder was introduced. This device accepts HD content through component video inputs and stores the content in MPEG-2 format in a .ts file or in a Blu-ray-compatible format .m2ts file on the hard drive or DVD burner of a computer connected to the PVR through a USB 2.0 interface. More recent systems are able to record a broadcast high definition program in its 'as broadcast' format or transcode to a format more compatible with Blu-ray.
Analog tape recorders with bandwidth capable of recording analog HD signals, such as W-VHS recorders, are no longer produced for the consumer market and are both expensive and scarce in the secondary market.
In the United States, as part of the FCC's plug and play agreement, cable companies are required to provide customers who rent HD set-top boxes with a set-top box with "functional" FireWire (IEEE 1394) on request. None of the direct broadcast satellite providers have offered this feature on any of their supported boxes, but some cable TV companies have. , boxes are not included in the FCC mandate. This content is protected by encryption known as 5C. This encryption can prevent duplication of content or simply limit the number of copies permitted, thus effectively denying most if not all fair use of the content.
| Technology | Broadcasting | null |
4246681 | https://en.wikipedia.org/wiki/Glaucus%20atlanticus | Glaucus atlanticus | Glaucus atlanticus (common names include the blue sea dragon, sea swallow, blue angel, blue glaucus, dragon slug, blue dragon, blue sea slug, and blue ocean slug) is a species of sea slug in the family Glaucidae.
These sea slugs live in the pelagic zone (open ocean), where they float upside-down by using the surface tension of the water to stay afloat. They are carried along by the winds and ocean currents. G. atlanticus makes use of countershading; the blue side of their bodies faces upwards, blending in with the blue of the water. The silver/grey side of the sea slug faces downwards, blending in with the sunlight reflecting on the ocean's surface when viewed from below the surface of the water.
G. atlanticus feeds on other pelagic creatures, including the Portuguese man o' war and other venomous siphonophores. This sea slug stores stinging nematocysts from the siphonophores within its own tissues as defence against predators. Humans handling the slug may receive a very painful and potentially dangerous sting.
Taxonomy
This species looks similar to, and is closely related to, G. marginatus, which is now understood to be not one species, but a cryptic species complex of four separate species that live in the Indian and Pacific Oceans. It shares the common name "blue dragon" with Pteraeolidia ianthina and G. marginatus.
Description
At maturity, G. atlanticus is usually around in length, though larger specimens have been found. It can live up to a year under the right conditions. It is silvery grey on its dorsal side and dark and pale blue ventrally. It has dark blue stripes on its head. It has a flat, tapering body and six appendages that branch out into rayed, finger-like cerata.
Cerata, also known as papillae, extend laterally from three different pairs of peduncles. The papillae are placed in a single row (uniseriate) and may number up to 84 in total.
G. atlanticus is usually found in tropical and subtropical waters, floating at the ocean's surface by means of gulped air stored in its stomach. It usually feeds on cnidarians, and its feeding can be audibly noisy as that air escapes from its stomach.
The radula of this species bears serrated teeth, which, paired with a strong jaw and denticles, allow it to grasp and "chip down" parts of its prey.
Buoyancy and coloration
With the aid of a gas-filled sac in its stomach, G. atlanticus floats at the surface. Due to the location of the gas sac, this species floats upside down. The upper surface is actually the foot (the underside in other slugs and snails), and this has either a blue or blue-white coloration. The true dorsal surface (carried downwards in G. atlanticus) is completely silver-grey. This coloration is an example of countershading, which helps protect it from predators that might attack from below and from above. The blue coloration is also thought to reflect harmful ultraviolet sunlight.
Distribution and habitat
This nudibranch is pelagic, and some evidence indicates that it occurs throughout the world's oceans, in temperate and tropical waters. It has been recorded from the east and south coasts of South Africa, European waters, the east coast of Australia, and Mozambique. Observations in 2015 and 2016 suggested that the geographical range of G. atlanticus had extended northward by 150 km in the Gulf of California compared with previous sightings.
Since the middle of the 19th century, records of this species have been reported from the Azores.
G. atlanticus was found in the Humboldt Current ecosystem off Peru in 2013, and off Andhra Pradesh in India in 2012. This is in line with the known habitat characteristics of the species; it thrives in warm, temperate climates in the Southern Pacific, and in circumtropical and Lusitanian environments. Before the Andhra Pradesh record, these nudibranchs had been documented in the Bay of Bengal and off the coast of Tamil Nadu, India, over 677 km apart. G. atlanticus was also found off Bermuda in January 2016, and uncommonly washes ashore on east-coast beaches of Barbados, Lesser Antilles.
Although these sea slugs live on the open ocean, they sometimes accidentally wash up onto the shore, so they may be found on beaches. In April 2022, specimens were found in the Gulf of Mexico along the Texas coast. On August 31, 2023, blue sea slugs were reported to be found along Karon Beach, Phuket, Thailand.
Life history and behavior
G. atlanticus preys on other, larger pelagic organisms. The sea slugs can move toward prey or mates by using their cerata, the thin, feather-like "fingers" on their bodies, to make slow swimming movements. They are known to prey on the dangerously venomous Portuguese man o' war (Physalia physalis), the by-the-wind-sailor (Velella velella), the blue button (Porpita porpita), and the violet snail, Janthina janthina. Occasionally, individuals attack and eat other individuals in captivity.
The species is able to feed on the Portuguese man o' war due to its immunity to the venomous nematocysts. The slug consumes chunks of the organism and appears to select and store the most venomous nematocysts for its own use against future prey. The nematocysts are collected in specialized sacs (cnidosacs) at the tip of the animal's cerata. Because G. atlanticus concentrates the venom, it can produce a more powerful and deadly sting than the man o' war on which it feeds.
Like almost all heterobranchs, blue dragons are hermaphrodites, and their male reproductive organs have evolved to be especially large and hooked to avoid their partner's venomous cerata. Unlike most nudibranchs, which mate with their right sides facing, sea swallows mate with ventral sides facing. After mating, both individuals are able to lay eggs and can release up to 20 eggs on an egg string, often laying them on pieces of wood or carcasses. On average, G. atlanticus can lay 55 egg strings per hour. G. atlanticus is not globally panmictic, but is localized within ocean basins. Gene flow between Afro-Eurasian and American populations is thus hindered by physical obstructions and by water temperatures in the Arctic and Southern Oceans.
Sting
G. atlanticus can swallow the venomous nematocysts from siphonophores, such as the Portuguese man o' war, and store them in the extremities of its finger-like cerata. Picking up the animal can result in a painful sting, with symptoms similar to those caused by the Portuguese man o' war. The symptoms that may appear after being stung are nausea, pain, vomiting, acute allergic contact dermatitis, erythema, urticarial papules, potential vesicle formation and postinflammatory hyperpigmentation.
In 2023, Julian Obayd, a TikTok user whose videos focus on marine life, went viral after going to the hospital for several blue dragon stings. In the viral video, Obayd claimed he had been stung while moving a group of blue dragons drying out in the sand and warned viewers about the risk of being stung.
| Biology and health sciences | Gastropods | Animals |
4248491 | https://en.wikipedia.org/wiki/Gauche%20effect | Gauche effect | In the study of conformational isomerism, the gauche effect is an atypical situation where a gauche conformation (groups separated by a torsion angle of approximately 60°) is more stable than the anti conformation (180°).
There are both steric and electronic effects that affect the relative stability of conformers. Ordinarily, steric effects predominate to place large substituents far from each other. However, this is not the case for certain substituents, typically those that are highly electronegative. Instead, there is an electronic preference for these groups to be gauche. Typically studied examples include 1,2-difluoroethane (H2FCCFH2), ethylene glycol, and vicinal-difluoroalkyl structures.
There are two main explanations for the gauche effect: hyperconjugation and bent bonds. In the hyperconjugation model, the donation of electron density from the C−H σ bonding orbital to the C−F σ* antibonding orbital is considered the source of stabilization in the gauche isomer. Due to the greater electronegativity of fluorine, the C−H σ orbital is a better electron donor than the C−F σ orbital, while the C−F σ* orbital is a better electron acceptor than the C−H σ* orbital. Only the gauche conformation allows good overlap between the better donor and the better acceptor.
Key to the bent bond explanation of the gauche effect in difluoroethane is the increased p orbital character of both C−F bonds due to the large electronegativity of fluorine. As a result, electron density builds up above and below, and to the left and right of, the central C−C bond. The resulting reduced orbital overlap can be partially compensated when a gauche conformation is assumed, forming a bent bond. Of these two models, hyperconjugation is generally considered the principal cause behind the gauche effect in difluoroethane.
The molecular geometry of both rotamers can be obtained experimentally by high-resolution infrared spectroscopy augmented with in silico work. In accordance with the model described above, the carbon–carbon bond is longer in the anti rotamer (151.4 pm vs. 150 pm). The steric repulsion between the fluorine atoms in the gauche rotamer increases the CCF bond angles (by 3.2°) and the FCCF dihedral angle (from the default 60° to 71°).
In the related compound 1,2-difluoro-1,2-diphenylethane, the threo isomer is found (by X-ray diffraction and from NMR coupling constants) to have an anti conformation between the two phenyl groups and the two fluorine groups and a gauche conformation is found for both groups for the erythro isomer. According to in silico results, this conformation is more stable by 0.21 kcal/mol (880 J/mol).
A gauche effect has also been reported for a molecule featuring an all-syn array of four consecutive fluoro substituents; the reaction to install the fourth substituent is stereoselective.
The gauche effect is also seen in 1,2-dimethoxyethane and some vicinal-dinitroalkyl compounds.
The alkene cis effect is an analogous atypical stabilizing of certain alkenes.
External influences
The gauche effect is very sensitive to solvent effects, owing to the large difference in polarity between the two conformers. For example, 2,3-dinitro-2,3-dimethylbutane, which in the solid state exists only in the gauche conformation, prefers the gauche conformer in benzene solution by a ratio of 79:21, but in carbon tetrachloride it prefers the anti conformer by a ratio of 58:42. Another case is trans-1,2-difluorocyclohexane, which shows a larger preference for the diequatorial conformer over the anti diaxial conformer in more polar solvents.
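These conformer ratios translate into free-energy differences via ΔG = −RT ln K; the short Python sketch below assumes a temperature of 298 K, which the text does not specify:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # assumed temperature; the text does not state one

def delta_g(gauche_fraction: float, anti_fraction: float) -> float:
    """Free-energy preference for gauche over anti, in J/mol
    (negative means gauche is favored), from dG = -RT ln(K)."""
    return -R * T * math.log(gauche_fraction / anti_fraction)

# 2,3-dinitro-2,3-dimethylbutane in benzene (79:21 gauche:anti):
print(f"benzene: {delta_g(0.79, 0.21):.0f} J/mol")   # ~ -3300 J/mol
# ...and in carbon tetrachloride (42:58 gauche:anti):
print(f"CCl4:    {delta_g(0.42, 0.58):.0f} J/mol")   # ~ +800 J/mol

# Unit check for the 0.21 kcal/mol figure quoted earlier:
print(f"0.21 kcal/mol = {0.21 * 4184:.0f} J/mol")    # ~880 J/mol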
| Physical sciences | Stereochemistry | Chemistry |
3113497 | https://en.wikipedia.org/wiki/Innate%20immune%20system | Innate immune system | The innate immune system or nonspecific immune system is one of the two main immunity strategies in vertebrates (the other being the adaptive immune system). The innate immune system is an alternate defense strategy and is the dominant immune system response found in plants, fungi, prokaryotes, and invertebrates (see Beyond vertebrates).
The major functions of the innate immune system are to:
recruit immune cells to infection sites by producing chemical factors, including chemical mediators called cytokines
activate the complement cascade to identify bacteria, activate cells, and promote clearance of antibody complexes or dead cells
identify and remove foreign substances present in organs, tissues, blood and lymph, by specialized white blood cells
activate the adaptive immune system through antigen presentation
act as a physical and chemical barrier to infectious agents; via physical measures such as skin and mucus, and chemical measures such as clotting factors and host defence peptides.
Anatomical barriers
Anatomical barriers include physical, chemical and biological barriers. The epithelial surfaces form a physical barrier that is impermeable to most infectious agents, acting as the first line of defense against invading organisms. Desquamation (shedding) of skin epithelium also helps remove bacteria and other infectious agents that have adhered to the epithelial surface. Lack of blood vessels, the inability of the epidermis to retain moisture, and the presence of sebaceous glands in the dermis, produces an environment unsuitable for the survival of microbes. In the gastrointestinal and respiratory tract, movement due to peristalsis or cilia, respectively, helps remove infectious agents. Also, mucus traps infectious agents. Gut flora can prevent the colonization of pathogenic bacteria by secreting toxic substances or by competing with pathogenic bacteria for nutrients or cell surface attachment sites. The flushing action of tears and saliva helps prevent infection of the eyes and mouth.
Inflammation
Inflammation is one of the first responses of the immune system to infection or irritation. Inflammation is stimulated by chemical factors released by injured cells. It establishes a physical barrier against the spread of infection and promotes healing of any damaged tissue following pathogen clearance.
The process of acute inflammation is initiated by cells already present in all tissues, mainly resident macrophages, dendritic cells, histiocytes, Kupffer cells, and mast cells. These cells present receptors contained on the surface or within the cell, named pattern recognition receptors (PRRs), which recognize molecules that are broadly shared by pathogens but distinguishable from host molecules, collectively referred to as pathogen-associated molecular patterns (PAMPs). At the onset of an infection, burn, or other injuries, these cells undergo activation (one of their PRRs recognizes a PAMP) and release inflammatory mediators, like cytokines and chemokines, which are responsible for the clinical signs of inflammation. PRR activation and its cellular consequences have been well-characterized as methods of inflammatory cell death, which include pyroptosis, necroptosis, and PANoptosis. These cell death pathways help clear infected or aberrant cells and release cellular contents and inflammatory mediators.
Chemical factors produced during inflammation (histamine, bradykinin, serotonin, leukotrienes, and prostaglandins) sensitize pain receptors, cause local vasodilation of the blood vessels, and attract phagocytes, especially neutrophils. Neutrophils then trigger other parts of the immune system by releasing factors that summon additional leukocytes and lymphocytes. Cytokines produced by macrophages and other cells of the innate immune system mediate the inflammatory response. These cytokines include TNF, HMGB1, and IL-1.
The inflammatory response is characterized by the following symptoms:
redness of the skin, due to locally increased blood circulation;
heat, either increased local temperature, such as a warm feeling around a localized infection, or a systemic fever;
swelling of affected tissues, such as the upper throat during the common cold or joints affected by rheumatoid arthritis;
increased production of mucus, which can cause symptoms like a runny nose or a productive cough;
pain, either local pain, such as painful joints or a sore throat, or affecting the whole body, such as body aches; and
possible dysfunction of involved organs/tissues.
Complement system
The complement system is a biochemical cascade of the immune system that helps, or "complements", the ability of antibodies to clear pathogens or mark them for destruction by other cells. The cascade is composed of many plasma proteins, synthesized in the liver, primarily by hepatocytes. The proteins work together to:
trigger the recruitment of inflammatory cells
"tag" pathogens for destruction by other cells by opsonizing, or coating, the surface of the pathogen
form holes in the plasma membrane of the pathogen, resulting in cytolysis of the pathogen cell, causing its death
rid the body of neutralised antigen-antibody complexes.
The three different complement activation pathways are the classical, alternative, and lectin pathways.
Classical: starts when antibody binds to bacteria
Alternative: starts "spontaneously"
Lectin: starts when lectins bind to mannose on bacteria
Elements of the complement cascade can be found in many non-mammalian species including plants, birds, fish, and some species of invertebrates.
White blood cells
White blood cells (WBCs) are also known as leukocytes. Most leukocytes differ from other cells of the body in that they are not tightly associated with a particular organ or tissue; thus, their function is similar to that of independent, single-cell organisms. Most leukocytes are able to move freely and interact with and capture cellular debris, foreign particles, and invading microorganisms (although macrophages, mast cells, and dendritic cells are less mobile). Unlike many other cells, most innate immune leukocytes cannot divide or reproduce on their own, but are the products of multipotent hematopoietic stem cells present in bone marrow.
The innate leukocytes include: natural killer cells, mast cells, eosinophils, basophils; and the phagocytic cells include macrophages, neutrophils, and dendritic cells, and function within the immune system by identifying and eliminating pathogens that might cause infection.
Mast cells
Mast cells are a type of innate immune cell that resides in connective tissue and in mucous membranes. They are intimately associated with wound healing and defense against pathogens, but are also often associated with allergy and anaphylaxis. When activated, mast cells rapidly release characteristic granules, rich in histamine and heparin, along with various hormonal mediators and chemokines, or chemotactic cytokines into the environment. Histamine dilates blood vessels, causing the characteristic signs of inflammation, and recruits neutrophils and macrophages.
Phagocytes
The word 'phagocyte' literally means 'eating cell'. These are immune cells that engulf, or 'phagocytose', pathogens or particles. To engulf a particle or pathogen, a phagocyte extends portions of its plasma membrane, wrapping the membrane around the particle until it is enveloped (i.e., the particle is now inside the cell). Once inside the cell, the invading pathogen is contained inside a phagosome, which merges with a lysosome. The lysosome contains enzymes and acids that kill and digest the particle or organism. In general, phagocytes patrol the body searching for pathogens, but are also able to react to a group of highly specialized molecular signals produced by other cells, called cytokines. The phagocytic cells of the immune system include macrophages, neutrophils, and dendritic cells.
Phagocytosis of the hosts' own cells is common as part of regular tissue development and maintenance. When host cells die, either by apoptosis or by cell injury due to an infection, phagocytic cells are responsible for their removal from the affected site. By helping to remove dead cells preceding growth and development of new healthy cells, phagocytosis is an important part of the healing process following tissue injury.
Macrophages
Macrophages, from the Greek, meaning "large eaters", are large phagocytic leukocytes, which are able to move beyond the vascular system by migrating through the walls of capillary vessels and entering the areas between cells in pursuit of invading pathogens. In tissues, organ-specific macrophages are differentiated from phagocytic cells present in the blood called monocytes. Macrophages are the most efficient phagocytes and can phagocytose substantial numbers of bacteria or other cells or microbes. The binding of bacterial molecules to receptors on the surface of a macrophage triggers it to engulf and destroy the bacteria through the generation of a "respiratory burst", causing the release of reactive oxygen species. Pathogens also stimulate the macrophage to produce chemokines, which summon other cells to the site of infection.
Neutrophils
Neutrophils, along with eosinophils and basophils, are known as granulocytes due to the presence of granules in their cytoplasm, or as polymorphonuclear cells (PMNs) due to their distinctive lobed nuclei. Neutrophil granules contain a variety of toxic substances that kill or inhibit growth of bacteria and fungi. Similar to macrophages, neutrophils attack pathogens by activating a respiratory burst. The main products of the neutrophil respiratory burst are strong oxidizing agents including hydrogen peroxide, free oxygen radicals and hypochlorite. Neutrophils are the most abundant type of phagocyte, normally representing 50–60% of the total circulating leukocytes, and are usually the first cells to arrive at the site of an infection. The bone marrow of a normal healthy adult produces more than 100 billion neutrophils per day, and more than 10 times that many per day during acute inflammation.
Dendritic cells
Dendritic cells (DCs) are phagocytic cells present in tissues that are in contact with the external environment, mainly the skin (where they are often called Langerhans cells), and the inner mucosal lining of the nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, but dendritic cells are not connected to the nervous system. Dendritic cells are very important in the process of antigen presentation, and serve as a link between the innate and adaptive immune systems.
Basophils and eosinophils
Basophils and eosinophils are cells related to the neutrophil. When activated by a pathogen encounter, histamine-releasing basophils are important in the defense against parasites and play a role in allergic reactions, such as asthma. Upon activation, eosinophils secrete a range of highly toxic proteins and free radicals that are highly effective in killing parasites, but may also damage tissue during an allergic reaction. Activation and release of toxins by eosinophils are, therefore, tightly regulated to prevent any inappropriate tissue destruction.
Natural killer cells
Natural killer cells (NK cells) do not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self". This term describes cells with abnormally low levels of a cell-surface marker called MHC I (major histocompatibility complex), a situation that can arise in viral infections of host cells. They were named "natural killer" because of the initial notion that they do not require activation in order to kill cells that are "missing self". The MHC makeup on the surface of damaged cells is altered, and NK cells become activated by recognizing this. Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin-like receptors (KIR), which slow the reaction of NK cells. The NK-92 cell line does not express KIR and has been developed for tumor therapy.
γδ T cells
Like other 'unconventional' T cell subsets bearing invariant T cell receptors (TCRs), such as CD1d-restricted Natural Killer T cells, γδ T cells exhibit characteristics that place them at the border between innate and adaptive immunity. γδ T cells may be considered a component of adaptive immunity in that they rearrange TCR genes to produce junctional diversity and develop a memory phenotype. The various subsets may be considered part of the innate immune system where a restricted TCR or NK receptors may be used as a pattern recognition receptor. For example, according to this paradigm, large numbers of Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted intraepithelial Vδ1 T cells will respond to stressed epithelial cells.
Other vertebrate mechanisms
The coagulation system overlaps with the immune system. Some products of the coagulation system can contribute to non-specific defenses via their ability to increase vascular permeability and act as chemotactic agents for phagocytic cells. In addition, some of the products of the coagulation system are directly antimicrobial. For example, beta-lysine, a protein produced by platelets during coagulation, can cause lysis of many Gram-positive bacteria by acting as a cationic detergent. Many acute-phase proteins of inflammation are involved in the coagulation system.
Increased levels of lactoferrin and transferrin inhibit bacterial growth by binding iron, an essential bacterial nutrient.
Neural regulation
The innate immune response to infectious and sterile injury is modulated by neural circuits that control cytokine production. The inflammatory reflex is a prototypical neural circuit that controls cytokine production in the spleen. Action potentials transmitted via the vagus nerve to the spleen mediate the release of acetylcholine, the neurotransmitter that inhibits cytokine release by interacting with alpha7 nicotinic acetylcholine receptors (CHRNA7) expressed on cytokine-producing cells. The motor arc of the inflammatory reflex is termed the cholinergic anti-inflammatory pathway.
Pathogen-specificity
Different parts of the innate immune system display specificity for different pathogens.
Immune evasion
Innate immune system cells prevent free growth of microorganisms within the body, but many pathogens have evolved mechanisms to evade it.
One strategy is intracellular replication, as practised by Mycobacterium tuberculosis, or wearing a protective capsule, which prevents lysis by complement and by phagocytes, as in Salmonella. Bacteroides species are normally mutualistic bacteria, making up a substantial portion of the mammalian gastrointestinal flora. Species such as B. fragilis are opportunistic pathogens, causing infections of the peritoneal cavity. They inhibit phagocytosis by affecting the phagocyte receptors used to engulf bacteria. They may also mimic host cells so the immune system does not recognize them as foreign. Staphylococcus aureus inhibits the ability of the phagocyte to respond to chemokine signals. M. tuberculosis, Streptococcus pyogenes, and Bacillus anthracis utilize mechanisms that directly kill the phagocyte.
Bacteria and fungi may form complex biofilms, protecting them from immune cells and proteins; biofilms are present in the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis.
Viruses
Type I interferons (IFN), secreted mainly by dendritic cells, play a central role in antiviral host defense and in establishing a cell's antiviral state. Viral components are recognized by different receptors: Toll-like receptors are located in the endosomal membrane and recognize double-stranded RNA (dsRNA), while the MDA5 and RIG-I receptors are located in the cytoplasm and recognize long dsRNA and phosphate-containing dsRNA, respectively. When the cytoplasmic receptors MDA5 and RIG-I recognize a virus, their conformation changes and they signal through the caspase-recruitment domain (CARD) to the CARD-containing adaptor MAVS. In parallel, when TLRs in the endocytic compartments recognize a virus, activation of the adaptor protein TRIF is induced. Both pathways converge in the recruitment and activation of the IKKε/TBK-1 complex, which induces dimerization of the transcription factors IRF3 and IRF7; these are translocated into the nucleus, where, together with other transcription factors such as activating transcription factor 2, they induce IFN production. IFN is secreted through secretory vesicles and can activate receptors both on the cell it was released from (autocrine) and on nearby cells (paracrine). This induces the expression of hundreds of interferon-stimulated genes, leading to the production of antiviral proteins such as protein kinase R, which inhibits viral protein synthesis, and the 2′,5′-oligoadenylate synthetase family, which degrades viral RNA.
Some viruses evade this by producing molecules that interfere with IFN production. For example, the influenza A virus produces the NS1 protein, which can bind to host and viral RNA, interact with immune signaling proteins, or block their activation by ubiquitination, thus inhibiting type I IFN production. Influenza A also blocks protein kinase R activation and the establishment of the antiviral state. The dengue virus also inhibits type I IFN production by blocking IRF-3 phosphorylation with its NS2B3 protease complex.
Beyond vertebrates
Prokaryotes
Bacteria (and perhaps other prokaryotic organisms) utilize a unique defense mechanism, called the restriction modification system, to protect themselves from pathogens such as bacteriophages. In this system, bacteria produce enzymes, called restriction endonucleases, that attack and destroy specific regions of the viral DNA of invading bacteriophages. Methylation of the host's own DNA marks it as "self" and prevents it from being attacked by endonucleases. Restriction endonucleases and the restriction modification system exist exclusively in prokaryotes.
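The logic of this system lends itself to a toy model. Below is a minimal sketch (Python; the EcoRI-style GAATTC recognition site and all names are illustrative assumptions, not from any bioinformatics library): sites on the host's methylated DNA are spared, while the same sequence on unmethylated phage DNA is cleaved.

```python
# Toy model of a restriction-modification system. EcoRI-style recognition of
# GAATTC is used for illustration; methylated site positions mark "self" DNA.

SITE = "GAATTC"  # recognition sequence (EcoRI's, used here as an example)

def restriction_fragments(dna, methylated):
    """Cut dna at every unmethylated occurrence of SITE.

    `methylated` is a set of start indices of sites protected by the host's
    methyltransferase; those sites are treated as "self" and left intact.
    """
    cuts = [i + 1 for i in range(len(dna) - len(SITE) + 1)
            if dna[i:i + len(SITE)] == SITE and i not in methylated]
    fragments, prev = [], 0
    for cut in cuts:                 # EcoRI cuts between G and A (offset 1)
        fragments.append(dna[prev:cut])
        prev = cut
    fragments.append(dna[prev:])
    return fragments

host = "ccGAATTCgg"    # site at index 2, methylated -> protected
phage = "ttGAATTCaa"   # same site, unmethylated -> cleaved
print(restriction_fragments(host, methylated={2}))    # ['ccGAATTCgg']
print(restriction_fragments(phage, methylated=set())) # ['ttG', 'AATTCaa']
```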
Invertebrates
Invertebrates do not possess lymphocytes or an antibody-based humoral immune system, and it is likely that a multicomponent, adaptive immune system arose with the first vertebrates. Nevertheless, invertebrates possess mechanisms that appear to be precursors of these aspects of vertebrate immunity. Pattern recognition receptors (PRRs) are proteins used by nearly all organisms to identify molecules associated with microbial pathogens. TLRs are a major class of pattern recognition receptor, that exists in all coelomates (animals with a body-cavity), including humans. The complement system exists in most life forms. Some invertebrates, including various insects, crabs, and worms utilize a modified form of the complement response known as the prophenoloxidase (proPO) system.
Antimicrobial peptides are an evolutionarily conserved component of the innate immune response found among all classes of life and represent the main form of invertebrate systemic immunity. Several species of insect produce antimicrobial peptides known as defensins and cecropins.
Proteolytic cascades
In invertebrates, PRRs trigger proteolytic cascades that degrade proteins and control many of the mechanisms of the innate immune system of invertebrates, including hemolymph coagulation and melanization. Proteolytic cascades are important components of the invertebrate immune system because they can be turned on more rapidly than other innate immune reactions, as they do not rely on changes in gene expression. Proteolytic cascades function in both vertebrates and invertebrates, even though different proteins are used throughout the cascades.
Clotting mechanisms
In the hemolymph, which makes up the fluid in the circulatory system of arthropods, a gel-like fluid surrounds pathogen invaders, similar to the way blood does in other animals. Various proteins and mechanisms are involved in invertebrate clotting. In crustaceans, transglutaminase from blood cells and mobile plasma proteins make up the clotting system, where the transglutaminase polymerizes 210 kDa subunits of a plasma-clotting protein. In the horseshoe crab clotting system, on the other hand, components of proteolytic cascades are stored as inactive forms in granules of hemocytes, which are released when foreign molecules, like lipopolysaccharides, enter.
Plants
Members of every class of pathogen that infect humans also infect plants. Although the exact pathogenic species vary with the infected species, bacteria, fungi, viruses, nematodes, and insects can all cause plant disease. As with animals, plants attacked by insects or other pathogens use a set of complex metabolic responses that lead to the formation of defensive chemical compounds that fight infection or make the plant less attractive to insects and other herbivores. (see: plant defense against herbivory).
Like invertebrates, plants neither generate antibody or T-cell responses nor possess mobile cells that detect and attack pathogens. In addition, in the case of infection, parts of some plants are treated as disposable and replaceable, in ways that few animals can manage. Walling off or discarding a part of a plant helps stop the spread of infection.
Most plant immune responses involve systemic chemical signals sent throughout a plant. Plants use PRRs to recognize conserved microbial signatures. This recognition triggers an immune response. The first plant receptors of conserved microbial signatures were identified in rice (XA21, 1995) and in Arabidopsis (FLS2, 2000). Plants also carry immune receptors that recognize variable pathogen effectors. These include the NBS-LRR class of proteins. When a part of a plant becomes infected with a microbial or viral pathogen, in case of an incompatible interaction triggered by specific elicitors, the plant produces a localized hypersensitive response (HR), in which cells at the site of infection undergo rapid apoptosis to prevent spread to other parts of the plant. HR has some similarities to animal pyroptosis, such as a requirement of caspase-1-like proteolytic activity of VPEγ, a cysteine protease that regulates cell disassembly during cell death.
"Resistance" (R) proteins, encoded by R genes, are widely present in plants and detect pathogens. These proteins contain domains similar to the NOD Like Receptors and TLRs. Systemic acquired resistance (SAR) is a type of defensive response that renders the entire plant resistant to a broad spectrum of infectious agents. SAR involves the production of chemical messengers, such as salicylic acid or jasmonic acid. Some of these travel through the plant and signal other cells to produce defensive compounds to protect uninfected parts, e.g., leaves. Salicylic acid itself, although indispensable for expression of SAR, is not the translocated signal responsible for the systemic response. Recent evidence indicates a role for jasmonates in transmission of the signal to distal portions of the plant. RNA silencing mechanisms are important in the plant systemic response, as they can block virus replication. The jasmonic acid response is stimulated in leaves damaged by insects, and involves the production of methyl jasmonate.
| Biology and health sciences | Immune system | Biology |
3115583 | https://en.wikipedia.org/wiki/Complexometric%20indicator | Complexometric indicator | A complexometric indicator is an ionochromic dye that undergoes a definite color change in presence of specific metal ions. It forms a weak complex with the ions present in the solution, which has a significantly different color from the form existing outside the complex.
Complexometric indicators are also known as pM indicators.
Complexometric titration
In analytical chemistry, complexometric indicators are used in complexometric titration to indicate the exact moment when all the metal ions in the solution are sequestered by a chelating agent (most usually EDTA). Such indicators are also called metallochromic indicators.
If the indicator is present in another liquid phase in equilibrium with the titrated phase, it is described as an extraction indicator.
Some complexometric indicators are sensitive to air and are destroyed by it. If such a solution loses its color during titration, a drop or two of fresh indicator may have to be added.
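Since EDTA forms 1:1 complexes with most metal ions, the titrant volume at the indicator's color change translates directly into a concentration. A minimal worked sketch follows (Python; the volumes and molarity are invented example values, not from any referenced procedure):

```python
# Hypothetical worked example: a hard-water sample titrated with EDTA to a
# complexometric end point. EDTA forms 1:1 complexes with most metal ions,
# so moles of metal = moles of EDTA delivered at the color change.

def metal_concentration(edta_molarity, edta_volume_ml, sample_volume_ml):
    """Return the metal-ion concentration (mol/L) for a 1:1 EDTA titration."""
    moles_edta = edta_molarity * edta_volume_ml / 1000.0  # mol of titrant used
    return moles_edta / (sample_volume_ml / 1000.0)       # mol/L in the sample

# Invented numbers: a 50.0 mL sample requiring 12.6 mL of 0.0100 M EDTA.
c = metal_concentration(0.0100, 12.6, 50.0)
print(f"total metal-ion concentration: {c:.5f} mol/L")    # 0.00252 mol/L
```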
Examples
Complexometric indicators are water-soluble organic molecules. Some examples are:
Calcein with EDTA for calcium
Patton-Reeder Indicator with EDTA for calcium with magnesium
Curcumin for boron, which forms rosocyanine, although the red color change of curcumin also occurs for pH > 8.4
Eriochrome Black T for aluminium, cadmium, zinc, calcium and magnesium
Fast Sulphon Black with EDTA for copper
Hematoxylin for copper
Murexide for calcium and rare earths, but also for copper, nickel, cobalt, and thorium
Xylenol orange for gallium, indium and scandium
Redox indicators
In some settings, when the titrated system is a redox system whose equilibrium is influenced by the removal of the metal ions, a redox indicator can function as a complexometric indicator.
| Physical sciences | Chemical methods | Chemistry |
3117240 | https://en.wikipedia.org/wiki/Gamma%20Cassiopeiae | Gamma Cassiopeiae | Gamma Cassiopeiae, Latinized from γ Cassiopeiae, is a bright star at the center of the distinctive "W" asterism in the northern circumpolar constellation of Cassiopeia. Although it is a fairly bright star with an apparent visual magnitude that varies from 1.6 to 3.0, it has no traditional Arabic or Latin name. It sometimes goes by the informal name Navi. It was observed in 1866 by Angelo Secchi, the first star ever observed with emission lines. It is now considered a Be star.
Gamma Cassiopeiae is also a variable star and a multiple star system. Based upon parallax measurements made by the Hipparcos satellite, it is located at a distance of roughly 550 light-years from Earth. Together with its common-proper-motion companion, HD 5408, the system could contain a total of eight stars. It is one of the highest multiplicity systems known.
Physical properties
Gamma Cassiopeiae is an eruptive variable star, whose apparent magnitude changes irregularly from 1.6 at its brightest to 3.0 at its dimmest. It is the prototype of the class of Gamma Cassiopeiae variable stars. In the late 1930s it underwent what is described as a shell episode and the brightness increased to above magnitude 2.0, then dropped rapidly to 3.4. It has since been gradually brightening back to around 2.2. At maximum intensity, γ Cassiopeiae outshines both Schedar (α Cas; magnitude 2.25) and Caph (β Cas; 2.3).
Gamma Cassiopeiae is a rapidly spinning star with a projected rotational velocity of 472 km s−1, giving it a pronounced equatorial bulge. When combined with the star's high luminosity, the result is the ejection of matter that forms a hot circumstellar disk of gas. The emissions and brightness variations are apparently caused by this "decretion disk".
The spectrum of this massive star matches a stellar classification of B0.5 IVe. A luminosity class of IV identifies it as a subgiant star that has reached a stage of its evolution where it is exhausting the supply of hydrogen in its core region and transforming into a giant star. The 'e' suffix is used for stars that show emission lines of hydrogen in the spectrum, caused in this case by the circumstellar disk. This places it among a category known as Be stars; in fact, it was the first such star ever to be so designated. It has 17 times the Sun's mass and is radiating as much energy as 34,000 Suns. At this rate of emission, the star has reached the end of its life as a late O-type main sequence star after a relatively brief 8 million years. The outer atmosphere has an effective temperature of 25,000 K, which causes it to glow with a blue-white hue.
X-ray emission
Gamma Cassiopeiae is the prototype of a small group of stellar sources of X-ray radiation that is about 10 times stronger than that emitted from other B or Be stars. The character of the X-ray spectrum is thermal, possibly emitted from plasmas of temperatures up to at least ten million kelvins, and shows very short-term and long-term cycles. Historically, it has been held that these X-rays might be excited by matter originating from the star, from a hot wind or a disk around the star, accreting onto the surface of a degenerate companion, such as a white dwarf or neutron star. However, there are difficulties with either of these hypotheses. For example, it is not clear that enough matter can be accreted by a white dwarf, at the distance of the purported secondary star implied by the orbital period, to power an X-ray emission of nearly 10³³ erg/s, or 100 YW. A neutron star could easily power this X-ray flux, but X-ray emission from neutron stars is known to be non-thermal, and thus at apparent variance with the spectral properties.
Evidence suggests that the X-rays may be associated with the Be star itself or caused by some complex interaction between the star and surrounding decretion disk. One line of evidence is that the X-ray production is known to vary on both short and long time scales with respect to various UV line and continuum changes associated with a B star or with circumstellar matter close to the star. Moreover, the X-ray emissions exhibit long-term cycles that correlate with the light curves in the visible wavelengths.
Gamma Cassiopeiae exhibits characteristics consistent with a strong disordered magnetic field. No field can be measured directly from the Zeeman effect because of the star's rotation-broadened spectral lines. Instead, the presence of this field is inferred from a robust periodic signal of 1.21 days that suggests a magnetic field rooted on the rotating star's surface. The star's UV and optical spectral lines show ripples moving from blue to red over several hours, which indicates clouds of matter being held frozen over the star's surface by strong magnetic fields. This evidence suggests that a magnetic field from the star is interacting with the decretion disk, resulting in the X-ray emission. A disk dynamo has been advanced as a mechanism to explain this modulation of the X-rays. However, difficulties remain with this mechanism, among which is that there are no disk dynamos known to exist in other stars, rendering this behavior more difficult to analyze.
Companions
Gamma Cassiopeiae has three faint companions, listed in double star catalogues as components B, C, and D. Star B is about 2 arc-seconds distant and magnitude 11, and has a similar space velocity to the bright primary, making it likely to be physically associated. Component C is magnitude 13, nearly an arc-minute distant, and is listed in Gaia Early Data Release 3 as having a very different proper motion and being much more distant than Gamma Cassiopeiae. Finally, component D, about 21 arc-minutes distant, is the naked-eye star HR 266 (HD 5408), itself a quadruple system.
Gamma Cassiopeiae A, the bright primary, itself contains a spectroscopic binary with an orbital period of about 203.5 days and an eccentricity alternately reported as 0.26 and "near zero." The mass of the companion is believed to be about that of the Sun, but its nature is unclear. It has been proposed that it is a degenerate star or a hot helium star, but it seems unlikely that it is a normal star. Therefore, it is likely to be more evolved than the primary and to have transferred mass to it during an earlier stage of evolution. Additionally, Hipparcos data show a "wobble" with an amplitude of about 150 mas, which may correspond to the orbit of a third star. This star would have an orbital period of at least 60 years.
Names
γ Cassiopeiae (Latinized to Gamma Cassiopeiae) is the object's Bayer designation, and it has the Flamsteed designation 27 Cassiopeiae.
The Chinese name Tsih, "the whip", is commonly associated with this star. The name however originally referred to Kappa Cassiopeiae, and Gamma Cassiopeiae was just one of four horses pulling the chariot of the legendary charioteer Wangliang. This representation was later changed to make Gamma the whip.
The star was used as an easily identifiable navigational reference point during space missions and American astronaut Virgil Ivan "Gus" Grissom nicknamed the star Navi after his own middle name spelled backwards.
| Physical sciences | Notable stars | Astronomy |
12049028 | https://en.wikipedia.org/wiki/Hypercarnivore | Hypercarnivore | A hypercarnivore is an animal which has a diet that is more than 70% meat, either via active predation or by scavenging. The remaining non-meat diet may consist of non-animal foods such as fungi, fruits or other plant material. Some extant examples of hypercarnivorous animals include crocodilians, owls, shrikes, eagles, vultures, felids, most wild canids, polar bears, odontocetid cetaceans (toothed whales), snakes, spiders, scorpions, mantises, marlins, groupers, piranhas and most sharks. Every species in the family Felidae, including the domesticated cat, is a hypercarnivore in its natural state. Additionally, this term is also used in paleobiology to describe taxa of animals which have an increased slicing component of their dentition relative to the grinding component. In domestic settings, cats, for example, may be fed a diet formulated entirely from plant and synthetic sources using modern processing methods. Feeding farmed animals such as alligators and crocodiles mostly or fully plant-based feed is sometimes done to save costs or as an environmentally friendly alternative. Hypercarnivores are not necessarily apex predators. For example, salmon are exclusively carnivorous, yet they are prey at all stages of life for a variety of organisms.
Many prehistoric mammals of the clade Carnivoramorpha (Carnivora and Miacoidea without Creodonta), along with the early order Creodonta, and some mammals of the even earlier order Cimolesta, were hypercarnivores. The earliest carnivorous mammal is considered to be Cimolestes, which existed during the Late Cretaceous and early Paleogene periods in North America about 66 million years ago. Theropod dinosaurs such as Tyrannosaurus rex that existed during the late Cretaceous, although not mammals, were obligate carnivores.
Large hypercarnivores evolved frequently in the fossil record, often in response to an ecological opportunity afforded by the decline or extinction of previously dominant hypercarnivorous taxa. While the evolution of large size and carnivory may be favored at the individual level, it can lead to a macroevolutionary decline, wherein such extreme dietary specialization results in reduced population densities and a greater vulnerability for extinction. As a result of these opposing forces, the fossil record of carnivores is dominated by successive clades of hypercarnivores that diversify and decline, only to be replaced by new hypercarnivorous clades.
As an example of related species with differing diets, even though they diverged only 150,000 years ago, the polar bear is the most highly carnivorous bear (more than 90% of its diet is meat) while the grizzly bear is one of the least carnivorous in many locales, with less than 10% of its diet being meat.
The genomes of the Tasmanian devil, killer whale, polar bear, leopard, lion, tiger, cheetah and domestic cat have been analysed, revealing shared positive selection for two genes related to bone development and repair (DMP1, PTN) that is not seen in omnivores or herbivores. This indicates that a stronger bone structure is a crucial requirement and drives selection towards a predatory hypercarnivore lifestyle in mammals. Positive selection of one gene related to enhanced bone mineralisation has been found in the scimitar-toothed cat (Homotherium latidens).
| Biology and health sciences | Ethology | Biology |
176670 | https://en.wikipedia.org/wiki/Operon | Operon | In genetics, an operon is a functioning unit of DNA containing a cluster of genes under the control of a single promoter. The genes are transcribed together into an mRNA strand and either translated together in the cytoplasm, or undergo splicing to create monocistronic mRNAs that are translated separately, i.e. several strands of mRNA that each encode a single gene product. The result of this is that the genes contained in the operon are either expressed together or not at all. Several genes must be co-transcribed to define an operon.
Originally, operons were thought to exist solely in prokaryotes (which includes organelles like plastids that are derived from bacteria), but they were discovered in eukaryotes in the early 1990s, where they are considered to be rare. In general, expression of prokaryotic operons leads to the generation of polycistronic mRNAs, while eukaryotic operons lead to monocistronic mRNAs.
Operons are also found in viruses such as bacteriophages. For example, T7 phages have two operons. The first operon codes for various products, including a special T7 RNA polymerase which can bind to and transcribe the second operon. The second operon includes a lysis gene meant to cause the host cell to burst.
History
The term "operon" was first proposed in a short paper in the Proceedings of the French Academy of Science in 1960. From this paper, the so-called general theory of the operon was developed. This theory suggested that in all cases, genes within an operon are negatively controlled by a repressor acting at a single operator located before the first gene. Later, it was discovered that genes could be positively regulated and also regulated at steps that follow transcription initiation. Therefore, it is not possible to talk of a general regulatory mechanism, because different operons have different mechanisms. Today, the operon is simply defined as a cluster of genes transcribed into a single mRNA molecule. Nevertheless, the development of the concept is considered a landmark event in the history of molecular biology. The first operon to be described was the lac operon in E. coli. The 1965 Nobel Prize in Physiology and Medicine was awarded to François Jacob, André Michel Lwoff and Jacques Monod for their discoveries concerning the operon and virus synthesis.
Overview
Operons occur primarily in prokaryotes but also rarely in some eukaryotes, including nematodes such as C. elegans and the fruit fly, Drosophila melanogaster. rRNA genes often exist in operons that have been found in a range of eukaryotes including chordates. An operon is made up of several structural genes arranged under a common promoter and regulated by a common operator. It is defined as a set of adjacent structural genes, plus the adjacent regulatory signals that affect transcription of the structural genes. The regulators of a given operon, including repressors, corepressors, and activators, are not necessarily coded for by that operon. The location and condition of the regulators, promoter, operator and structural DNA sequences can determine the effects of common mutations.
Operons are related to regulons, stimulons and modulons; whereas operons contain a set of genes regulated by the same operator, regulons contain a set of genes under regulation by a single regulatory protein, and stimulons contain a set of genes under regulation by a single cell stimulus. According to its authors, the term "operon" is derived from the verb "to operate".
As a unit of transcription
An operon contains one or more structural genes which are generally transcribed into one polycistronic mRNA (a single mRNA molecule that codes for more than one protein). However, the definition of an operon does not require the mRNA to be polycistronic, though in practice, it usually is. Upstream of the structural genes lies a promoter sequence which provides a site for RNA polymerase to bind and initiate transcription. Close to the promoter lies a section of DNA called an operator.
Operons versus clustering of prokaryotic genes
All the structural genes of an operon are turned ON or OFF together, due to a single promoter and operator upstream of them, but sometimes more control over the gene expression is needed. To achieve this, some bacterial genes are located close together, but each has its own specific promoter; this is called gene clustering. Usually these genes encode proteins which will work together in the same pathway, such as a metabolic pathway. Gene clustering helps a prokaryotic cell to produce metabolic enzymes in the correct order.
One study has posited that in the Asgard archaea, ribosomal protein coding genes occur in clusters that are less conserved in their organization than in other Archaea; the closer an Asgard archaeon is to the eukaryotes, the more dispersed is the arrangement of its ribosomal protein coding genes.
General structure
An operon is made up of 3 basic DNA components:
Promoter – a nucleotide sequence that enables a gene to be transcribed. The promoter is recognized by RNA polymerase, which then initiates transcription. In RNA synthesis, promoters indicate which genes should be used for messenger RNA creation – and, by extension, control which proteins the cell produces.
Operator – a segment of DNA to which a repressor binds. It is classically defined in the lac operon as a segment between the promoter and the genes of the operon. The main operator (O1) in the lac operon is located slightly downstream of the promoter; two additional operators, O2 and O3, are located at +412 and −82, respectively. In the case of a repressor, the repressor protein physically obstructs the RNA polymerase from transcribing the genes.
Structural genes – the genes that are co-regulated by the operon.
Not always included within the operon, but important in its function is a regulatory gene, a constantly expressed gene which codes for repressor proteins. The regulatory gene does not need to be in, adjacent to, or even near the operon to control it.
An inducer (small molecule) can displace a repressor (protein) from the operator site (DNA), resulting in an uninhibited operon.
Alternatively, a corepressor can bind to the repressor to allow its binding to the operator site. A good example of this type of regulation is seen for the trp operon.
Regulation
Control of an operon is a type of gene regulation that enables organisms to regulate the expression of various genes depending on environmental conditions. Operon regulation can be either negative or positive by induction or repression.
Negative control involves the binding of a repressor to the operator to prevent transcription.
In negative inducible operons, a regulatory repressor protein is normally bound to the operator, which prevents the transcription of the genes on the operon. If an inducer molecule is present, it binds to the repressor and changes its conformation so that it is unable to bind to the operator. This allows for expression of the operon. The lac operon is a negatively controlled inducible operon, where the inducer molecule is allolactose.
In negative repressible operons, transcription of the operon normally takes place. Repressor proteins are produced by a regulator gene, but they are unable to bind to the operator in their normal conformation. However, certain molecules called corepressors are bound by the repressor protein, causing a conformational change to the active site. The activated repressor protein binds to the operator and prevents transcription. The trp operon, involved in the synthesis of tryptophan (which itself acts as the corepressor), is a negatively controlled repressible operon.
Operons can also be positively controlled. With positive control, an activator protein stimulates transcription by binding to DNA (usually at a site other than the operator).
In positive inducible operons, activator proteins are normally unable to bind to the pertinent DNA. When an inducer is bound by the activator protein, it undergoes a change in conformation so that it can bind to the DNA and activate transcription. Examples of positive inducible operons include the MerR family of transcriptional activators.
In positive repressible operons, the activator proteins are normally bound to the pertinent DNA segment. However, when an inhibitor is bound by the activator, it is prevented from binding the DNA. This stops activation and transcription of the system.
The lac operon
The lac operon of the model bacterium Escherichia coli was the first operon to be discovered and provides a typical example of operon function. It consists of three adjacent structural genes, a promoter, a terminator, and an operator. The lac operon is regulated by several factors including the availability of glucose and lactose. It is activated by allolactose, an isomer of lactose: allolactose binds to the repressor protein and prevents it from repressing gene transcription. This is an example of the derepressible (from above: negative inducible) model. So it is a negative inducible operon, induced by the presence of lactose or allolactose.
The trp operon
Discovered in 1953 by Jacques Monod and colleagues, the trp operon in E. coli was the first repressible operon to be discovered. While the lac operon can be activated by a chemical (allolactose), the tryptophan (Trp) operon is inhibited by a chemical (tryptophan). This operon contains five structural genes: trp E, trp D, trp C, trp B, and trp A, the last two of which encode the subunits of tryptophan synthase. It also contains a promoter which binds to RNA polymerase and an operator which blocks transcription when bound to the protein synthesized by the repressor gene (trp R) that binds to the operator. In the lac operon, allolactose binds to the repressor protein and prevents it from repressing gene transcription, while in the trp operon, tryptophan binds to the repressor protein and enables it to repress gene transcription. Also unlike the lac operon, the trp operon contains a leader peptide and an attenuator sequence which allows for graded regulation. This is an example of the corepressible model.
Predicting the number and organization of operons
The number and organization of operons has been studied most critically in E. coli. As a result, predictions can be made based on an organism's genomic sequence.
One prediction method uses the intergenic distance between reading frames as a primary predictor of the number of operons in the genome: within an operon, adjacent genes are separated by only a few bases, a separation that merely shifts the reading frame and still allows efficient read-through, whereas longer intergenic stretches, often 40–50 bases or more, mark where operons start and stop (a minimal sketch of this heuristic is given below).
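Here is that minimal sketch (Python; the gene coordinates and the 50-base cutoff are invented for illustration, echoing the 40–50 base figure above): adjacent same-strand genes separated by less than the threshold are merged into one predicted operon.

```python
# Minimal operon prediction by intergenic distance. Each gene is a tuple
# (start, end, strand); coordinates here are invented for illustration.

def predict_operons(genes, max_gap=50):
    """Group adjacent same-strand genes whose intergenic gap is < max_gap."""
    genes = sorted(genes)               # order along the chromosome
    operons, current = [], [genes[0]]
    for gene in genes[1:]:
        prev = current[-1]
        same_strand = gene[2] == prev[2]
        gap = gene[0] - prev[1]         # intergenic distance in bases
        if same_strand and gap < max_gap:
            current.append(gene)        # short gap: likely co-transcribed
        else:
            operons.append(current)     # long gap or strand switch: new unit
            current = [gene]
    operons.append(current)
    return operons

genes = [(100, 400, "+"), (420, 900, "+"), (930, 1500, "+"),
         (1700, 2100, "-")]
for op in predict_operons(genes):
    print([g[:2] for g in op])
# first three genes merge into one operon; the fourth stands alone
```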
An alternative method to predict operons is based on finding gene clusters where gene order and orientation is conserved in two or more genomes.
Operon prediction is even more accurate if the functional class of the molecules is considered. Bacteria have clustered their reading frames into units, sequestered by co-involvement in protein complexes, common pathways, or shared substrates and transporters. Thus, accurate prediction would involve all of these data, a difficult task indeed.
Pascale Cossart's laboratory was the first to experimentally identify all operons of a microorganism, Listeria monocytogenes. The 517 polycistronic operons are listed in a 2009 study describing the global changes in transcription that occur in L. monocytogenes under different conditions.
| Biology and health sciences | Molecular biology | Biology |
176732 | https://en.wikipedia.org/wiki/Scent%20hound | Scent hound | Scent hounds (or scenthounds) are a type of hound that primarily hunts by scent rather than sight. These breeds are hunting dogs and are generally regarded as having some of the most sensitive noses among dogs. Scent hounds specialize in following scent or smells. Most of them tend to have long, drooping ears and large nasal cavities to enhance smell sensitivity. They need to have relatively high endurance to be able to keep track of scent over long distances and rough terrain. It is believed that they were first bred by the Celts by crossbreeding mastiff-type dogs with sighthounds. The first established scent hounds were St. Hubert Hounds (the ancestor of today's bloodhounds) bred by monks in Belgium during the Middle Ages.
Description
Hounds are hunting dogs that hunt either by following the scent of a game animal (scent hounds) or by following the animal by sight (sighthounds). There are many breeds in the scent hound type, and scent hounds may do other work as well, so exactly which breeds should be called scent hound can be controversial. Kennel clubs assign breeds of dogs to groups, which are loosely based on breed types. Each kennel club determines which breeds it will place in a given group.
Scent hounds specialize in following a smell or scent. Most of these breeds have long, drooping ears. One theory says that this trait helps to collect scent from the air and keep it near the dog's face and nose. They also have large nasal cavities, which helps them scent better. Their typically loose, moist lips are said to assist in trapping scent particles.
Because scent hounds tend to walk or run with their noses to the ground, many scent hound breeds have been developed such that the dogs hold their tails upright when on a scent. In addition, some breeds (e.g., the beagle) have been bred to have white hair on the tips of their tails. These traits allow the dog's master to identify it at a distance or in longer grass.
Scent hounds do not need to be as fast as sighthounds, because they do not need to keep prey in sight, but they need endurance so they can stick with a scent and follow it for long distances over rough terrain. The best scent hounds can follow a scent trail even across running water and even when it is several days old. Most scent hounds are used for hunting in packs of multiple dogs. Longer-legged hounds run more quickly and usually require that the hunters follow on horseback; shorter-legged hounds allow hunters to follow on foot. Hunting with some breeds, such as German Bracke, American Foxhounds, or coonhounds, involves allowing the pack of dogs to run freely while the hunters wait in a fixed spot until the dogs' baying announces that the game has been "treed". The hunters then go to the spot on foot, following the sound of the dogs' baying.
Vocalization
Most scent hounds have a range of vocalizations, which can vary depending upon the situation the dog finds itself in. Their baying voice—most often used when excited and useful in informing their master that they are following a scent trail—is deep and booming and can be distinct from their barking voice, which itself can have variations in tone, from excited to nervous or fearful.
As they are bred to "give voice" when excited, scent hounds may bark much more frequently than other dog breeds. Although this can be a nuisance in settled areas, it is a valuable trait that allows the dog's handler to follow the dog or pack of dogs during a hunt even when they are out of sight, such as when following a fox or raccoon through woodland.
Classification
The Fédération Cynologique Internationale (FCI) places scent hounds into their classification "Group 6". This includes a subdivision, "Section 2, Leash Hounds", some examples of which are the Bavarian Mountain Hound (Bayrischer Gebirgsschweisshund, no. 217), the Hanover Hound (Hannover'scher Schweisshund, no. 213), and the Alpine Dachsbracke (Alpenländische Dachsbracke, no. 254). In addition, the Dalmatian and the Rhodesian Ridgeback are placed in Group 6 as "Related breeds".
Genetic history
Genetic studies indicate that the scent hounds are more closely related to each other than they are with other branches on the dog family tree.
Breeds
The scent hound type includes the following breeds:
Alpine Dachsbracke
American Leopard Hound
Anglo-French hounds (French hounds crossed with English Foxhounds)
Anglo-Français de Petite Vénerie
Grand Anglo-Français Blanc et Noir
Grand Anglo-Français Blanc et Orange
Grand Anglo-Français Tricolore
Ariegeois
Artois Hound
Austrian Black and Tan Hound
Basset Artésien Normand
Basset Bleu de Gascogne
Basset Fauve de Bretagne
Basset Hound
Bavarian Mountain Hound
Beagle
Beagle-Harrier
Billy
Black Mouth Cur
Bloodhound
Blue Lacy
Bosnian Broken-haired Hound
Briquet Griffon Vendéen
Catahoula Leopard Dog
Coonhounds
Black and Tan Coonhound
Bluetick Coonhound
English Coonhound (a.k.a. American English Coonhound and Redtick Coonhound)
Redbone Coonhound
Treeing Walker Coonhound
Cretan Hound
Dachshund
Deutsche Bracke
Drever (Swedish Dachsbracke)
Dunker (Norwegian Hound)
Estonian Hound
Finnish Hound
Foxhounds
American Foxhound
English Foxhound
Dumfriesshire Black and Tan Foxhound (extinct)
Welsh Foxhound
French hounds
Chien Français Blanc et Noir
Chien Français Blanc et Orange
Chien Français Tricolore
Grand Basset Griffon Vendéen
Grand Bleu de Gascogne
Grand Gascon Saintongeois
Grand Griffon Vendéen
Greek Harehound
Griffon Bleu de Gascogne
Griffon Fauve de Bretagne
Hamiltonstövare
Hanover Hound
Harrier
Istrian Coarse-haired Hound
Istrian Shorthaired Hound
Kerry Beagle
Laconian (extinct)
Limer (obsolete term)
Montenegrin Mountain Hound
Mountain Cur
North Country Beagle (Northern Hound) (extinct)
Otterhound
Petit Basset Griffon Vendéen
Petit Bleu de Gascogne
Petit Gascon Saintongeois
Plott Hound
Polish Hound (pl. Ogar Polski)
Polish Hunting Dog (pl. Gończy Polski)
Porcelaine
Posavac Hound
Rache (obsolete term)
Sabueso Español (Spanish Scenthound)
Sabueso fino Colombiano
St. Hubert Jura Hound
Schillerstövare
Segugio dell'Appennino
Segugio Italiano a pelo forte
Segugio Italiano a pelo raso
Segugio Maremmano
Serbian Hound
Serbian Tricolour Hound
Schweizer Laufhund
Schweizerischer Niederlaufhund
Slovenský Kopov (Slovakian Hound)
Smalandstövare
Southern Hound (extinct)
Stephens Cur
Styrian Coarse-haired Hound
Talbot Hound (extinct)
Transylvanian Hound
Treeing Cur
Treeing Tennessee Brindle
Trigg Hound
Tyrolean Hound
Westphalian Dachsbracke
United Kennel Club (US) Scenthound Group
The Scenthound Group is the group category used by the United Kennel Club (US), which it divides into two categories. The first includes the American hunting dogs known as coonhounds and the European hounds from which they were developed. These are referred to as Tree Hounds. The category also includes curs, American dogs bred for hunting a variety of game, such as squirrels, raccoons, opossums, bobcats, cougars, American black bears, and feral pigs. The second category is referred to as Trailing Scenthounds, and includes dogs used for tracking of humans, reputedly descended from the St. Hubert Hounds.
| Biology and health sciences | Dogs | Animals |
176733 | https://en.wikipedia.org/wiki/Cylindrical%20coordinate%20system | Cylindrical coordinate system | A cylindrical coordinate system is a three-dimensional coordinate system that specifies point positions by the distance from a chosen reference axis (axis L in the image opposite), the direction from the axis relative to a chosen reference direction (axis A), and the distance from a chosen reference plane perpendicular to the axis (plane containing the purple section). The latter distance is given as a positive or negative number depending on which side of the reference plane faces the point.
The origin of the system is the point where all three coordinates can be given as zero. This is the intersection between the reference plane and the axis.
The axis is variously called the cylindrical or longitudinal axis, to differentiate it from the polar axis, which is the ray that lies in the reference plane, starting at the origin and pointing in the reference direction.
Other directions perpendicular to the longitudinal axis are called radial lines.
The distance from the axis may be called the radial distance or radius, while the angular coordinate is sometimes referred to as the angular position or as the azimuth. The radius and the azimuth are together called the polar coordinates, as they correspond to a two-dimensional polar coordinate system in the plane through the point, parallel to the reference plane. The third coordinate may be called the height or altitude (if the reference plane is considered horizontal), longitudinal position, or axial position.
Cylindrical coordinates are useful in connection with objects and phenomena that have some rotational symmetry about the longitudinal axis, such as water flow in a straight pipe with round cross-section, heat distribution in a metal cylinder, electromagnetic fields produced by an electric current in a long, straight wire, accretion disks in astronomy, and so on.
They are sometimes called cylindrical polar coordinates or polar cylindrical coordinates, and are sometimes used to specify the position of stars in a galaxy (galactocentric cylindrical polar coordinates).
Definition
The three coordinates (ρ, φ, z) of a point P are defined as:
The radial distance ρ is the Euclidean distance from the z-axis to the point P.
The azimuth φ is the angle between the reference direction on the chosen plane and the line from the origin to the projection of P on the plane.
The axial coordinate or height z is the signed distance from the chosen plane to the point P.
Unique cylindrical coordinates
As in polar coordinates, the same point with cylindrical coordinates (ρ, φ, z) has infinitely many equivalent coordinates, namely (ρ, φ ± n×360°, z) and (−ρ, φ ± (2n + 1)×180°, z), where n is any integer. Moreover, if the radius ρ is zero, the azimuth is arbitrary.
In situations where someone wants a unique set of coordinates for each point, one may restrict the radius to be non-negative (ρ ≥ 0) and the azimuth φ to lie in a specific interval spanning 360°, such as (−180°, +180°] or [0°, 360°).
Conventions
The notation for cylindrical coordinates is not uniform. The ISO standard 31-11 recommends (ρ, φ, z), where ρ is the radial coordinate, φ the azimuth, and z the height. However, the radius is also often denoted r or s, the azimuth by θ or t, and the third coordinate by h or (if the cylindrical axis is considered horizontal) x, or any context-specific letter.
In concrete situations, and in many mathematical illustrations, a positive angular coordinate is measured counterclockwise as seen from any point with positive height.
Coordinate system conversions
The cylindrical coordinate system is one of many three-dimensional coordinate systems. The following formulae may be used to convert between them.
Cartesian coordinates
For the conversion between cylindrical and Cartesian coordinates, it is convenient to assume that the reference plane of the former is the Cartesian xy-plane (with equation z = 0), and the cylindrical axis is the Cartesian z-axis. Then the z-coordinate is the same in both systems, and the correspondence between cylindrical (ρ, φ, z) and Cartesian (x, y, z) coordinates is the same as for polar coordinates, namely
x = ρ cos φ, y = ρ sin φ, z = z
in one direction, and
ρ = √(x² + y²), with φ = arcsin(y/ρ) if x ≥ 0 and φ = 180° − arcsin(y/ρ) if x < 0,
in the other. The arcsine function is the inverse of the sine function, and is assumed to return an angle in the range [−90°, +90°]. These formulas yield an azimuth φ in the range [−90°, +270°].
By using the arctangent function, which also returns an angle in the range (−90°, +90°), one may compute φ without computing ρ first: φ = arctan(y/x) if x > 0; φ = arctan(y/x) + 180° if x < 0 and y ≥ 0; φ = arctan(y/x) − 180° if x < 0 and y < 0; φ = +90° if x = 0 and y > 0; and φ = −90° if x = 0 and y < 0.
For other formulas, see the article Polar coordinate system.
Many modern programming languages provide a function that will compute the correct azimuth φ, in the range (−180°, +180°], given x and y, without the need to perform a case analysis as above. For example, this function is called atan2(y, x) in the C programming language, and (atan y x) in Common Lisp.
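To make the case analysis concrete, here is a minimal sketch (Python, chosen for brevity; math.atan2 plays the role of C's atan2, and the test point is arbitrary) converting a point both ways:

```python
import math

def cyl_to_cart(rho, phi, z):
    """Cylindrical (rho, phi, z) -> Cartesian (x, y, z); phi in radians."""
    return rho * math.cos(phi), rho * math.sin(phi), z

def cart_to_cyl(x, y, z):
    """Cartesian -> cylindrical, with rho >= 0 and phi in (-pi, pi]."""
    return math.hypot(x, y), math.atan2(y, x), z

x, y, z = cyl_to_cart(2.0, math.pi / 3, 1.0)
print(cart_to_cyl(x, y, z))  # recovers (2.0, 1.047..., 1.0)
```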
Spherical coordinates
Spherical coordinates (radius r, elevation or inclination θ, azimuth φ) may be converted to or from cylindrical coordinates, depending on whether θ represents elevation or inclination, by the following:
If θ is the inclination measured from the positive z-axis, then ρ = r sin θ, φ = φ, z = r cos θ; conversely, r = √(ρ² + z²), θ = arccos(z / √(ρ² + z²)), φ = φ.
If θ is the elevation measured from the reference plane, then ρ = r cos θ, φ = φ, z = r sin θ; conversely, r = √(ρ² + z²), θ = arcsin(z / √(ρ² + z²)), φ = φ.
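A short companion sketch (Python, same conventions as the block above; the functions and test values are illustrative) for the inclination convention, where θ is measured from the positive z-axis:

```python
import math

def sph_to_cyl(r, theta, phi):
    """Spherical (r, inclination theta, azimuth phi) -> cylindrical (rho, phi, z)."""
    return r * math.sin(theta), phi, r * math.cos(theta)

def cyl_to_sph(rho, phi, z):
    """Cylindrical -> spherical, with theta measured from the +z axis."""
    return math.hypot(rho, z), math.atan2(rho, z), phi  # theta in [0, pi]

print(sph_to_cyl(2.0, math.pi / 4, 0.0))  # (1.414..., 0.0, 1.414...)
```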
Line and volume elements
In many problems involving cylindrical polar coordinates, it is useful to know the line and volume elements; these are used in integration to solve problems involving paths and volumes.
The line element is
dl = dρ e_ρ + ρ dφ e_φ + dz e_z,
where e_ρ, e_φ, and e_z are the local unit vectors in the directions of increasing ρ, φ, and z, respectively.
The volume element is
dV = ρ dρ dφ dz.
The surface element in a surface of constant radius ρ (a vertical cylinder) is
dS = ρ dφ dz.
The surface element in a surface of constant azimuth φ (a vertical half-plane) is
dS = dρ dz.
The surface element in a surface of constant height z (a horizontal plane) is
dS = ρ dρ dφ.
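As a quick consistency check on the volume element, the following sketch (Python with SymPy, assumed available) integrates ρ dρ dφ dz over a solid cylinder of radius R and height h and recovers the familiar volume πR²h:

```python
import sympy as sp

rho, phi, z, R, h = sp.symbols('rho phi z R h', positive=True)

# Triple integral of the cylindrical volume element rho * drho * dphi * dz
V = sp.integrate(rho, (rho, 0, R), (phi, 0, 2 * sp.pi), (z, 0, h))
print(V)  # pi*R**2*h
```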
The del operator in this system leads to the following expressions for the gradient, divergence, curl and Laplacian:
∇f = (∂f/∂ρ) e_ρ + (1/ρ)(∂f/∂φ) e_φ + (∂f/∂z) e_z
∇·A = (1/ρ) ∂(ρ A_ρ)/∂ρ + (1/ρ) ∂A_φ/∂φ + ∂A_z/∂z
∇×A = ((1/ρ) ∂A_z/∂φ − ∂A_φ/∂z) e_ρ + (∂A_ρ/∂z − ∂A_z/∂ρ) e_φ + (1/ρ)(∂(ρ A_φ)/∂ρ − ∂A_ρ/∂φ) e_z
∇²f = (1/ρ) ∂/∂ρ(ρ ∂f/∂ρ) + (1/ρ²) ∂²f/∂φ² + ∂²f/∂z²
Cylindrical harmonics
The solutions to the Laplace equation in a system with cylindrical symmetry are called cylindrical harmonics.
Kinematics
In a cylindrical coordinate system, the position of a particle can be written as
r = ρ e_ρ + z e_z.
The velocity of the particle is the time derivative of its position,
v = dr/dt = (dρ/dt) e_ρ + ρ (dφ/dt) e_φ + (dz/dt) e_z,
where the term ρ (dφ/dt) e_φ comes from the Poisson formula de_ρ/dt = (dφ/dt) e_φ (the unit vectors rotate with the particle). Its acceleration is
a = (d²ρ/dt² − ρ (dφ/dt)²) e_ρ + (2 (dρ/dt)(dφ/dt) + ρ d²φ/dt²) e_φ + (d²z/dt²) e_z.
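As a sanity check on these expressions, the sketch below (Python with SymPy, assumed available) differentiates the Cartesian position of a particle whose cylindrical coordinates are functions of time and projects the result onto the rotating unit vectors; the printed components reproduce the velocity and acceleration formulas above.

```python
import sympy as sp

t = sp.symbols('t')
rho, phi, z = (sp.Function(n)(t) for n in ('rho', 'phi', 'z'))

# Cartesian position and the rotating cylindrical unit vectors
pos = sp.Matrix([rho * sp.cos(phi), rho * sp.sin(phi), z])
e_rho = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
e_phi = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])
e_z = sp.Matrix([0, 0, 1])

vel = pos.diff(t)
acc = vel.diff(t)

for name, vec in (('velocity', vel), ('acceleration', acc)):
    comps = [sp.simplify(vec.dot(e)) for e in (e_rho, e_phi, e_z)]
    print(name, comps)
# velocity     -> [rho', rho*phi', z']
# acceleration -> [rho'' - rho*phi'**2, 2*rho'*phi' + rho*phi'', z'']
```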
| Mathematics | Geometry: General | null |
176813 | https://en.wikipedia.org/wiki/Physical%20geodesy | Physical geodesy | Physical geodesy is the study of the physical properties of Earth's gravity and its potential field (the geopotential), with a view to their application in geodesy.
Measurement procedure
Traditional geodetic instruments such as theodolites rely on the gravity field for orienting their vertical axis along the local plumb line or local vertical direction with the aid of a spirit level. After that, vertical angles (zenith angles or, alternatively, elevation angles) are obtained with respect to this local vertical, and horizontal angles in the plane of the local horizon, perpendicular to the vertical.
Levelling instruments again are used to obtain geopotential differences between points on the Earth's surface. These can then be expressed as "height" differences by conversion to metric units.
Units
Gravity is commonly measured in units of m·s−2 (metres per second squared). This can equivalently be expressed as newtons per kilogram of attracted mass, since 1 m·s−2 = 1 N/kg.
Potential is expressed as gravity times distance, m2·s−2. Travelling one metre in the direction of a gravity vector of strength 1 m·s−2 will increase your potential by 1 m2·s−2. Equivalently, the units can be expressed as joules per kilogram of attracted mass, since 1 m2·s−2 = 1 J/kg.
A more convenient unit is the GPU, or geopotential unit: it equals 10 m2·s−2. This means that travelling one metre in the vertical direction, i.e., the direction of the 9.8 m·s−2 ambient gravity, will approximately change your potential by 1 GPU. Which again means that the difference in geopotential, in GPU, of a point with that of sea level can be used as a rough measure of height "above sea level" in metres.
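A tiny numerical sketch of this rule of thumb (Python; the geopotential number is an invented example value): dividing a geopotential difference by the ambient gravity gives an approximate metric height, so a value in GPU roughly equals the height in metres.

```python
# Convert a geopotential number (potential difference from sea level) into
# an approximate height. Example values are invented for illustration.

G_LOCAL = 9.8           # ambient gravity, m/s^2
C_GPU = 2847.0          # geopotential number in GPU (1 GPU = 10 m^2/s^2)

height_m = C_GPU * 10.0 / G_LOCAL   # GPU -> m^2/s^2, then divide by gravity
print(f"approximate height above sea level: {height_m:.0f} m")  # ~2905 m
```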
Gravity
Potential fields
Geoid
Due to the irregularity of the Earth's true gravity field, the equilibrium figure of sea water, or the geoid, will also be of irregular form. In some places, like west of Ireland, the geoid—mathematical mean sea level—sticks out as much as 100 m above the regular, rotationally symmetric reference ellipsoid of GRS80; in other places, like close to Sri Lanka, it dives under the ellipsoid by nearly the same amount.
The separation between the geoid and the reference ellipsoid is called the undulation of the geoid, symbol N.
The geoid, or mathematical mean sea surface, is defined not only on the seas, but also under land; it is the equilibrium water surface that would result if sea water were allowed to move freely (e.g., through tunnels) under the land. Technically, it is an equipotential surface of the true geopotential, chosen to coincide (on average) with mean sea level.
As mean sea level is physically realized by tide gauge bench marks on the coasts of different countries and continents, a number of slightly incompatible "near-geoids" will result, with differences of several decimetres to over one metre between them, due to the dynamic sea surface topography. These are referred to as vertical datums or height datums.
For every point on Earth, the local direction of gravity or vertical direction, materialized with the plumb line, is perpendicular to the geoid (see astrogeodetic leveling).
Gravity anomalies
Above we already made use of gravity anomalies Δg. These are computed as the differences between true (observed) gravity g and calculated (normal) gravity γ. (This is an oversimplification; in practice the location in space at which γ is evaluated will differ slightly from that where g has been measured.) We thus get
Δg = g − γ.
These anomalies are called free-air anomalies, and are the ones to be used in the above Stokes equation.
In geophysics, these anomalies are often further reduced by removing from them the attraction of the topography, which for a flat, horizontal plate (Bouguer plate) of thickness H is given by
a_B = 2π G ρ H,
where G is the gravitational constant and ρ is the mean density of the rock.
The Bouguer reduction is applied as follows:
Δg_B = Δg_FA − a_B,
yielding the so-called Bouguer anomalies. Here, Δg_FA is our earlier Δg, the free-air anomaly.
In case the terrain is not a flat plate (the usual case!) we use for H the local terrain height value but apply a further correction called the terrain correction.
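The two reductions can be condensed into a short sketch (Python; the station values are invented, and the conventional free-air gradient of 0.3086 mGal per metre is used as an assumption in place of a rigorous evaluation of normal gravity at the measurement point):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_ROCK = 2670.0    # assumed average crustal density, kg/m^3

def free_air_anomaly(g_obs_mgal, gamma_mgal, height_m):
    """Free-air anomaly: observed gravity reduced to the geoid, minus normal gravity."""
    return g_obs_mgal + 0.3086 * height_m - gamma_mgal   # mGal

def bouguer_anomaly(free_air_mgal, height_m):
    """Subtract the attraction a_B = 2*pi*G*rho*H of a Bouguer plate."""
    plate_mgal = 2 * math.pi * G * RHO_ROCK * height_m * 1e5  # m/s^2 -> mGal
    return free_air_mgal - plate_mgal

fa = free_air_anomaly(g_obs_mgal=979_625.0, gamma_mgal=979_780.0, height_m=500.0)
print(f"free-air anomaly: {fa:.1f} mGal")                 # -0.7 mGal
print(f"Bouguer anomaly: {bouguer_anomaly(fa, 500.0):.1f} mGal")  # ~ -56.7 mGal
```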
| Physical sciences | Geophysics | Earth science |
176865 | https://en.wikipedia.org/wiki/Cache%20coherence | Cache coherence | In computer architecture, cache coherence is the uniformity of shared resource data that is stored in multiple local caches. In a cache coherent system, if multiple clients have a cached copy of the same region of a shared memory resource, all copies are the same. Without cache coherence, a change made to the region by one client may not be seen by others, and errors can result when the data used by different clients is mismatched.
A cache coherence protocol is used to maintain cache coherency. The two main types are snooping and directory-based protocols.
Cache coherence is of particular relevance in multiprocessing systems, where each CPU may have its own local cache of a shared memory resource.
Overview
In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.
The following are the requirements for cache coherence:
Write Propagation: Changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches.
Transaction Serialization: Reads/Writes to a single memory location must be seen by all processors in the same order.
Theoretically, coherence can be performed at the load/store granularity. However, in practice it is generally performed at the granularity of cache blocks.
Definition
Coherence defines the behavior of reads and writes to a single address location.
In a multiprocessor system, consider that more than one processor has cached a copy of the memory location X. The following conditions are necessary to achieve cache coherence:
In a read made by a processor P to a location X that follows a write by the same processor P to X, with no writes to X by another processor occurring between the write and the read instructions made by P, X must always return the value written by P.
In a read made by a processor P1 to location X that follows a write by another processor P2 to X, with no other writes to X made by any processor occurring between the two accesses and with the read and write being sufficiently separated, X must always return the value written by P2. This condition defines the concept of coherent view of memory. Propagating the writes to the shared memory location ensures that all the caches have a coherent view of the memory. If processor P1 reads the old value of X, even after the write by P2, we can say that the memory is incoherent.
The above conditions satisfy the Write Propagation criteria required for cache coherence. However, they are not sufficient as they do not satisfy the Transaction Serialization condition. To illustrate this better, consider the following example:
A multi-processor system consists of four processors: P1, P2, P3 and P4, all containing cached copies of a shared variable S whose initial value is 0. Processor P1 changes the value of S (in its cached copy) to 10, following which processor P2 changes the value of S in its own cached copy to 20. If we ensure only write propagation, then P3 and P4 will certainly see the changes made to S by P1 and P2. However, P3 may see the change made by P1 after seeing the change made by P2 and hence return 10 on a read to S. P4, on the other hand, may see the changes made by P1 and P2 in the order in which they were made and hence return 20 on a read to S. The processors P3 and P4 now have an incoherent view of the memory.
Therefore, in order to satisfy Transaction Serialization, and hence achieve Cache Coherence, the following condition along with the previous two mentioned in this section must be met:
Writes to the same location must be sequenced. In other words, if location X received two different values A and B, in this order, from any two processors, the processors can never read location X as B and then read it as A. The location X must be seen with values A and B in that order.
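The need for serialization can be made concrete with a small sketch (Python; a hypothetical toy model, not any real protocol): with write propagation alone, nothing constrains the order in which the two writes are delivered to an observer, so different observers may settle on different final values:

```python
import itertools

# Two writes to the shared variable S, in the order they were issued.
updates = [("P1", 10), ("P2", 20)]

# With write propagation alone, each observer may apply the updates
# in any delivery order; the final value then depends on that order.
for order in itertools.permutations(updates):
    final = None
    for writer, value in order:
        final = value
    print("delivery order", [w for w, _ in order], "-> observer reads", final)
```

Running this prints one observer reading 20 and another reading 10, exactly the incoherent P3/P4 outcome described above.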
The alternative definition of a coherent system is via the definition of sequential consistency memory model: "the cache coherent system must appear to execute all threads’ loads and stores to a single memory location in a total order that respects the program order of each thread". Thus, the only difference between the cache coherent system and sequentially consistent system is in the number of address locations the definition talks about (single memory location for a cache coherent system, and all memory locations for a sequentially consistent system).
Another definition is: "a multiprocessor is cache consistent if all writes to the same memory location are performed in some sequential order".
Less commonly, particularly in the context of algorithms, coherence can instead refer to locality of reference.
Multiple copies of the same data can exist in different caches simultaneously, and if processors are allowed to update their own copies freely, an inconsistent view of memory can result.
Coherence mechanisms
The two most common mechanisms for ensuring coherency are snooping and directory-based protocols, each with its own benefits and drawbacks. Snooping-based protocols tend to be faster, if enough bandwidth is available, since all transactions are a request/response seen by all processors. The drawback is that snooping is not scalable: every request must be broadcast to all nodes in a system, meaning that as the system gets larger, the size of the (logical or physical) bus and the bandwidth it provides must grow. Directories, on the other hand, tend to have longer latencies (with a 3-hop request/forward/respond sequence) but use much less bandwidth, since messages are point-to-point rather than broadcast. For this reason, many of the larger systems (>64 processors) use this type of cache coherence.
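The traffic asymmetry can be illustrated with a back-of-envelope sketch (Python; it assumes one snoop broadcast must reach every other cache, and ignores invalidations to additional sharers on the directory side):

```python
# Rough per-transaction message counts for the two mechanisms.
def snooping_messages(n_processors):
    return n_processors - 1   # broadcast to every other cache

def directory_messages(n_processors):
    return 3                  # fixed request/forward/respond exchange

for n in (4, 16, 64, 256):
    print(f"{n:>3} processors: snoop={snooping_messages(n):>3}, "
          f"directory={directory_messages(n)}")
```

Broadcast cost grows with the processor count while the 3-hop directory cost stays flat, which is why large systems favor directories despite the extra latency.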
Snooping
First introduced in 1983, snooping is a process where the individual caches monitor address lines for accesses to memory locations that they have cached. The write-invalidate protocols and write-update protocols make use of this mechanism.
For the snooping mechanism, a snoop filter reduces the snooping traffic by maintaining a plurality of entries, each representing a cache line that may be owned by one or more nodes. When replacement of one of the entries is required, the snoop filter selects for the replacement of the entry representing the cache line or lines owned by the fewest nodes, as determined from a presence vector in each of the entries. A temporal or other type of algorithm is used to refine the selection if more than one cache line is owned by the fewest nodes.
Directory-based
In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which the processor must ask permission to load an entry from the primary memory to its cache. When an entry is changed, the directory either updates or invalidates the other caches with that entry.
Distributed shared memory systems mimic these mechanisms in an attempt to maintain consistency between blocks of memory in loosely coupled systems.
Coherence protocols
Coherence protocols apply cache coherence in multiprocessor systems. The intention is that two clients must never see different values for the same shared data.
The protocol must implement the basic requirements for coherence. It can be tailor-made for the target system or application.
Protocols can also be classified as snoopy or directory-based. Typically, early systems used directory-based protocols, in which a directory would keep track of the data being shared and of the sharers. In snoopy protocols, the transaction requests (to read, write, or upgrade) are sent out to all processors. All processors snoop the request and respond appropriately.
Write propagation in snoopy protocols can be implemented by either of the following methods:
Write-invalidate: When a write operation is observed to a location that a cache has a copy of, the cache controller invalidates its own copy of the snooped memory location, which forces a read from main memory of the new value on its next access.
Write-update: When a write operation is observed to a location that a cache has a copy of, the cache controller updates its own copy of the snooped memory location with the new data.
If the protocol design states that whenever any copy of the shared data is changed, all the other copies must be "updated" to reflect the change, then it is a write-update protocol. If the design states that a write to a cached copy by any processor requires other processors to discard or invalidate their cached copies, then it is a write-invalidate protocol.
However, scalability is one shortcoming of broadcast protocols.
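The following toy model (Python; a hypothetical two-cache sketch that omits coherence states, ordering, and main-memory write-back, all of which real protocols track) contrasts the two policies:

```python
# Toy model of the two snoopy write-propagation policies.
class Cache:
    def __init__(self, name):
        self.name = name
        self.data = {}   # address -> cached value

def write(writer, peers, addr, value, policy):
    writer.data[addr] = value
    for peer in peers:
        if addr in peer.data:
            if policy == "invalidate":
                del peer.data[addr]       # peer must re-fetch on next access
            elif policy == "update":
                peer.data[addr] = value   # new data is pushed to the peer copy

c1, c2 = Cache("C1"), Cache("C2")
c1.data[0x10] = c2.data[0x10] = 0         # both caches hold address 0x10

write(c1, [c2], 0x10, 42, policy="invalidate")
print(c2.data)                            # {} -- C2's copy was invalidated

c2.data[0x10] = 42                        # C2 re-fetches the line
write(c1, [c2], 0x10, 99, policy="update")
print(c2.data)                            # {16: 99} -- C2's copy was updated
```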
Various models and protocols have been devised for maintaining coherence, such as MSI, MESI (aka Illinois), MOSI, MOESI, MERSI, MESIF, write-once, Synapse, Berkeley, Firefly and Dragon protocol. In 2011, ARM Ltd proposed the AMBA 4 ACE for handling coherency in SoCs. The AMBA CHI (Coherent Hub Interface) specification from ARM Ltd, which belongs to the AMBA 5 group of specifications, defines the interfaces for the connection of fully coherent processors.
| Technology | Computer architecture concepts | null |
176883 | https://en.wikipedia.org/wiki/Major%20appliance | Major appliance | A major appliance, also known as a large domestic appliance or large electric appliance or simply a large appliance, large domestic, or large electric, is a non-portable or semi-portable machine used for routine housekeeping tasks such as cooking, washing laundry, or food preservation. Such appliances are sometimes collectively known as white goods, as the products were traditionally white in colour, although a variety of colours are now available. An appliance is different from a plumbing fixture because it uses electricity or fuel.
Major appliances differ from small appliances because they are bigger and not portable. They are often considered fixtures and part of real estate and as such they are often supplied to tenants as part of otherwise unfurnished rental properties. Major appliances may have special electrical connections, connections to gas supplies, or special plumbing and ventilation arrangements that may be permanently connected to the appliance. This limits where they can be placed in a home.
Since major appliances in a home consume a significant amount of energy, they have become the focus of programs to improve their energy efficiency in many countries. Increasing energy efficiency is often described as an important element of climate change mitigation, alongside other improvements such as retrofitting buildings to increase building performance. Energy efficiency improvements may require changes in the construction of the appliances, or improved control systems.
Brands
In the early days of electrification, many major consumer appliances were made by the same companies that made the generation and distribution equipment. While some of these brand names persist to the present day, even if only as licensed use of old popular brand names, today many major appliances are manufactured by companies or divisions of companies that specialize in particular appliances.
Types
Major appliances may be roughly divided as follows:
Refrigeration equipment
Freezer
Refrigerator
Water cooler
Ice maker
Cooking
Kitchen stove, also known as a range, cooker, oven, cooking plate, or cooktop
Wall oven
Steamer oven
Microwave oven
Washing and drying equipment
Washing machine
Clothes dryer
Drying cabinet
Dishwasher
Heating and cooling
Air conditioner
Furnace
Water heater
Whole house ventilator
Mechanical air ventilator
Efficiency
| Technology | Household appliances | null |
176931 | https://en.wikipedia.org/wiki/Internet%20Archive | Internet Archive | The Internet Archive is an American non-profit organization founded in 1996 by Brewster Kahle that runs a digital library website, archive.org. It provides free access to collections of digitized media including websites, software applications, music, audiovisual, and print materials. The Archive also advocates a free and open Internet. Its mission is committing to provide "universal access to all knowledge".
The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible. Its web archive, the Wayback Machine, contains hundreds of billions of web captures. The Archive also oversees numerous book digitization projects, collectively one of the world's largest book digitization efforts.
History
Brewster Kahle founded the Archive in May 1996, around the same time that he began the for-profit web crawling company Alexa Internet. The earliest known archived page on the site was saved on May 10, 1996, at 2:42 pm UTC (7:42 am PDT). By October of that year, the Internet Archive had begun to archive and preserve the World Wide Web in large amounts. The archived content became more easily available to the general public in 2001, through the Wayback Machine.
In late 1999, the Archive expanded its collections beyond the web archive, beginning with the Prelinger Archives. Now, the Internet Archive includes texts, audio, moving images, and software. It hosts a number of other projects: the NASA Images Archive, the contract crawling service Archive-It, and the wiki-editable library catalog and book information site Open Library. Soon after that, the Archive began working to provide specialized services relating to the information access needs of the print-disabled; publicly accessible books were made available in a protected Digital Accessible Information System (DAISY) format.
According to its website:
In August 2012, the Archive announced that it had added BitTorrent to its file download options for more than 1.3 million existing files, and all newly uploaded files. This method is the fastest means of downloading media from the Archive, as files are served from two Archive data centers, in addition to other torrent clients which have downloaded and continue to serve the files.
On November 6, 2013, the Internet Archive's headquarters in San Francisco's Richmond District caught fire, destroying equipment and damaging some nearby apartments. According to the Archive, it lost a side-building housing one of 30 of its scanning centers; cameras, lights, and scanning equipment worth hundreds of thousands of dollars; and "maybe 20 boxes of books and film, some irreplaceable, most already digitized, and some replaceable". The nonprofit Archive sought donations to cover the estimated $600,000 in damage.
An overhaul of the site was launched as beta in November 2014, and the legacy layout was removed in March 2016.
In November 2016, Kahle announced that the Internet Archive was building the Internet Archive of Canada, a copy of the Archive to be based somewhere in Canada. The announcement received widespread coverage due to the implication that the decision to build a backup archive in a foreign country was because of the upcoming presidency of Donald Trump.
Beginning in 2017, OCLC and the Internet Archive have collaborated to make the Archive's records of digitized books available in WorldCat.
Since 2018, the Internet Archive visual arts residency, which is organized by Amir Saber Esfahani and Andrew McClintock, helps connect artists with the Archive's over 48 petabytes of digitized materials. Over the course of the yearlong residency, visual artists create a body of work which culminates in an exhibition. The hope is to connect digital history with the arts and create something for future generations to appreciate online or off. Previous artists in residence include Taravat Talepasand, Whitney Lynn, and Jenny Odell.
The Internet Archive acquires most materials from donations, such as hundreds of thousands of 78 rpm discs from Boston Public Library in 2017, a donation of 250,000 books from Trent University in 2018, and the entire collection of Marygrove College's library after it closed in 2020. All material is then digitized and retained in digital storage, while a digital copy is returned to the original holder and the Internet Archive's copy, if not in the public domain, is lent to patrons worldwide one at a time under the controlled digital lending (CDL) theory of the first-sale doctrine.
On June 1, 2020, four large publishing houses – Hachette Book Group, Penguin Random House, HarperCollins, and John Wiley – filed a lawsuit against the Internet Archive before the United States District Court for the Southern District of New York, claiming that the Internet Archive's practice of controlled digital lending constituted copyright infringement. On March 25, 2023, the court found in favor of the publishers. The negotiated judgment of August 11, 2023, barred the Internet Archive from digitally lending books for which electronic copies are on sale.
Also on August 11, 2023, the music industry giants Universal Music Group, Sony Music and Concord (together with their respective labels Capitol Records, Arista Records and CMGI Recorded Music Assets) sued the Internet Archive before the same United States District Court for the Southern District of New York over the Internet Archive's Great 78 Project for $621 million in damages from alleged copyright infringement.
In September 2024, Google and the Internet Archive signed a partnership that lets people see previous versions of websites from Google Search through the Wayback Machine, in place of the discontinued Google Cache links.
Cyberattacks
During the week of May 27, 2024, the Internet Archive suffered a series of distributed denial of service (DDoS) attacks that made its services unavailable intermittently, sometimes for hours at a time, over a period of several days. The attack was claimed on May 28 by a hacker group called SN_BLACKMETA, with possible links to Anonymous Sudan. The incident drew a comparison with the 2023 British Library cyberattack, which affected the UK Web Archive.
Beginning October 9, 2024, the Internet Archive's team, including archivist Jason Scott and security researcher Scott Helme, confirmed DDoS attacks, site defacement, and a data breach. The purported hacktivist group SN_BLACKMETA again claimed responsibility. A pop-up on the defaced site claimed that there was a "catastrophic" security breach, stating "Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!" About 31 million user accounts were reported to be affected, their records compromised in a file called "ia_users.sql" dated September 28, 2024; the attackers stole users' email addresses and Bcrypt-hashed passwords. As of October 15, 2024, the website was still mostly offline, "prioritizing keeping data safe at the expense of service availability." On October 11, Kahle said that the data was safe and that service would return to normal "in days, not weeks." On October 13, the Wayback Machine was restored in a read-only format, with archiving of new web pages temporarily disabled. On October 14, Brewster Kahle said "[the Wayback Machine] volume is back to normal: 1,500 requests per second". On October 20, threat actors used stolen, unrotated API tokens to breach the Internet Archive's Zendesk email support platform; they also claimed responsibility for the other breaches, stating that SN_BLACKMETA was behind only the DDoS attacks. On October 21, the Internet Archive went back online in a read-only manner. On October 22, all Internet Archive services temporarily went offline; later that same day, only the Wayback Machine, Archive-It, and blog.archive.org resumed. On October 23, archive.org, the Wayback Machine, Archive-It, and the Open Library services all resumed, though some features, such as logging in, remained unavailable; staff said they would return within a day or two. On October 25, the login feature was restored and the site was active again.
Operations
The Archive is a 501(c)(3) nonprofit operating in the United States. In 2019, it had an annual budget of $37 million, derived from revenue from its Web crawling services, various partnerships, grants, donations, and the Kahle-Austin Foundation. The Internet Archive also manages periodic funding campaigns. For instance, a December 2019 campaign had a goal of reaching $6 million in donations. It uses Ubuntu as its choice of operating system for the website servers.
The Archive is headquartered in San Francisco, California. From 1996 to 2009, its headquarters were in the Presidio of San Francisco, a former U.S. military base. Since 2009, its headquarters have been at 300 Funston Avenue in San Francisco, a former Christian Science Church. At one time, most of its staff worked in its book-scanning centers; as of 2019, scanning is performed by 100 paid operators worldwide. The Archive also has data centers in three Californian cities: San Francisco, Redwood City, and Richmond. To reduce the risk of data loss, the Archive creates copies of parts of its collection at more distant locations, including the Bibliotheca Alexandrina in Egypt and a facility in Amsterdam.
In 2016, the Internet Archive began working to create a decentralized prototype of its digital library. From 2020, content from the Internet Archive started to be stored on Filecoin. By October 2023, one petabyte of data had been uploaded to the Filecoin network.
The Archive is a member of the International Internet Preservation Consortium and was officially designated as a library by the state of California in 2007.
Web archiving
Wayback Machine
The Wayback Machine is a service that allows archives of the World Wide Web to be searched and accessed. It can be used to see what previous versions of web sites looked like, or to visit web sites that no longer exist. The Wayback Machine was created as a joint effort between Alexa Internet (owned by Amazon.com) and the Internet Archive. Hundreds of billions of web sites and their associated data (images, source code, documents, etc.) are saved in a database. At its most recently reported count, the Internet Archive held over 866 billion web pages, more than 42.5 million print materials, 13 million videos, 3 million TV news programs, 1.2 million software programs, 14 million audio files, 5 million images, and 272,660 concerts in its Wayback Machine.
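The Wayback Machine exposes a public availability endpoint that returns the closest archived snapshot for a given URL. A minimal query sketch in Python (the example URL is illustrative):

```python
import json
import urllib.request

# Query the Wayback Machine's availability endpoint for the
# closest archived snapshot of a URL.
api = "https://archive.org/wayback/available?url=example.com"
with urllib.request.urlopen(api) as response:
    data = json.load(response)

closest = data.get("archived_snapshots", {}).get("closest")
if closest:
    print(closest["url"], closest["timestamp"])
```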
Archive-It
Created in early 2006, Archive-It is a web archiving subscription service that allows institutions and individuals to build and preserve collections of digital content and create digital archives. Archive-It allows the user to customize their capture or exclusion of web content they want to preserve for cultural heritage reasons. Through a web application, Archive-It partners can harvest, catalog, manage, browse, search, and view their archived collections.
In terms of accessibility, the archived web sites are full text searchable within seven days of capture. Content collected through Archive-It is captured and stored as a WARC file. A primary and back-up copy is stored at the Internet Archive data centers. A copy of the WARC file can be given to subscribing partner institutions for geo-redundant preservation and storage purposes to their best practice standards. Periodically, the data captured through Archive-It is indexed into the Internet Archive's general archive.
Archive-It has had more than 275 partner institutions in 46 U.S. states and 16 countries, which have captured more than 7.4 billion URLs for more than 2,444 public collections. Archive-It partners include university and college libraries, state archives, federal institutions, museums, law libraries, and cultural organizations, including the Electronic Literature Organization, North Carolina State Archives and Library, Stanford University, Columbia University, American University in Cairo, Georgetown Law Library, and many others.
Internet Archive Scholar
In September 2020, Internet Archive announced a new initiative to archive and preserve open access academic journals, called Internet Archive Scholar. Its full-text search index includes over 25 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth century journals through the latest open access conference proceedings and pre-prints crawled from the World Wide Web.
General Index
In 2021, the Internet Archive announced the initial version of the General Index, a publicly available index to a collection of 107 million academic journal articles.
Items and collections
The Archive stores files inside so-called items, which are similar to directories in that they can contain multiple files, but can have additional metadata such as a description and tags which make them more searchable.
Some file types can be previewed directly on the site, whereas others have to be downloaded in order to be opened. If multiple multimedia files exist in an item, the website generates a playlist for video or audio files, or a slide show for pictures. If an item contains at least one video or picture, the Archive generates a preview thumbnail that can be seen on collection pages and in searches. Items can contain mixed data, such as music files with an album cover picture, in which case the picture is used as the thumbnail.
Staff members of the Internet Archive organize items by placing them into so-called collections, which are pages listing multiple items.
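Items and their metadata can also be read programmatically with the official internetarchive Python package; a short sketch (the identifier and the printed fields are illustrative):

```python
# Requires: pip install internetarchive
from internetarchive import get_item

item = get_item("nasa")                    # any item identifier works here
print(item.metadata.get("title"))          # item-level metadata (description, tags, ...)
for f in item.files[:5]:                   # each file is a dict of file metadata
    print(f.get("name"), f.get("format"))
```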
Book collections
Text collection
The scanning performed by the Internet Archive is financially supported by libraries and foundations. When the collection held approximately 1 million texts, it occupied more than 500 terabytes, including raw camera images, cropped and skewed images, PDFs, and raw OCR data.
At one point, the Internet Archive was operating 33 scanning centers in five countries, digitizing about 1,000 books a day, for a total of more than 2 million books within a collection of 4.4 million books, including material digitized by others and fed into the Internet Archive; at that time, users were performing more than 15 million downloads per month.
The material digitized by others includes more than 300,000 books that were contributed to the collection, between about 2006 and 2008, by Microsoft through its Live Search Books project, which also included financial support and scanning equipment directly donated to the Internet Archive. On May 23, 2008, Microsoft announced it would be ending its Live Book Search project and would no longer be scanning books, donating its remaining scanning equipment to its former partners.
Around October 2007, Archive users began uploading public domain books from Google Book Search. Eventually, more than 900,000 Google-digitized books were in the Archive's collection; the books are identical to the copies found on Google, except without the Google watermarks, and are available for unrestricted use and download. Brewster Kahle revealed in 2013 that this archival effort was coordinated by Aaron Swartz, who, with a "bunch of friends", downloaded the public domain books from Google slowly enough and from enough computers to stay within Google's restrictions. They did this to ensure public access to the public domain. The Archive ensured the items were attributed and linked back to Google, which never complained, while libraries "grumbled". According to Kahle, this is an example of Swartz's "genius" to work on what could give the most to the public good for millions of people.
In addition to books, the Archive offers free and anonymous public access to more than four million court opinions, legal briefs, or exhibits uploaded from the United States Federal Courts' PACER electronic document system via the RECAP web browser plugin. These documents had been kept behind a federal court paywall. On the Archive, they had been accessed by more than six million people by 2013.
The Archive's BookReader web app, built into its website, has features such as single-page, two-page, and thumbnail modes; fullscreen mode; page zooming of high-resolution images; and flip page animation.
In October 2024, the Internet Archive struck a deal with the Leiden University Library to accept the paper copies of 400,000 uncatalogued foreign dissertations held at the Library that were to be pulped – with a view to digitising them and making them accessible online. The collection includes theses by Niels Bohr, Marie Curie, Émile Durkheim, Albert Einstein, Otto Hahn, Carl Jung, J. Robert Oppenheimer, Max Planck, Luigi Pirandello, Gustav Stresemann and Max Weber.
Open Library
The Open Library is another project of the Internet Archive. The project seeks to include a web page for every book ever published: it holds 25 million catalog records of editions. It also seeks to be a web-accessible public library: it contains the full texts of approximately 1,600,000 public domain books (out of the more than five million from the main texts collection), as well as in-print and in-copyright books, many of which are fully readable, downloadable and full-text searchable; it offers a two-week loan of e-books in its controlled digital lending program for over 647,784 books not in the public domain, in partnership with over 1,000 library partners from six countries after a free registration on the web site. Open Library is a free and open-source software project, with its source code freely available on GitHub.
The Open Library faces objections from some authors and the Society of Authors, who hold that the project is distributing books without authorization and is thus in violation of copyright laws, and four major publishers initiated a copyright infringement lawsuit against the Internet Archive in June 2020 to stop the Open Library project.
Digitizing sponsors for books
Many large institutional sponsors have helped the Internet Archive provide millions of scanned publications (text items). Some sponsors that have digitized large quantities of texts include the University of Toronto's Robarts Library, University of Alberta Libraries, University of Ottawa, Library of Congress, Boston Library Consortium member libraries, Boston Public Library, Princeton Theological Seminary Library, and many others.
In 2017, the MIT Press authorized the Internet Archive to digitize and lend books from the press's backlist, with financial support from the Arcadia Fund. A year later, the Internet Archive received further funding from the Arcadia Fund to invite some other university presses to partner with the Internet Archive to digitize books, a project called "Unlocking University Press Books".
The Library of Congress created numerous Handle System identifiers that pointed to free digitized books in the Internet Archive. The Internet Archive and Open Library are listed on the Library of Congress website as a source of e-books.
Media collections
In addition to web archives, the Internet Archive maintains extensive collections of digital media that are attested by the uploader to be in the public domain in the United States or licensed under a license that allows redistribution, such as Creative Commons licenses. Media are organized into collections by media type (moving images, audio, text, etc.), and into sub-collections by various criteria. Each of the main collections includes a "Community" sub-collection (formerly named "Open Source") where general contributions by the public are stored.
Audio
Audio Archive
The Audio Archive includes music, audiobooks, news broadcasts, old time radio shows, podcasts, and a wide variety of other audio files. The collection contains more than 15,000,000 free digital recordings. The subcollections include audio books and poetry, podcasts, non-English audio, and many others. The sound collections are curated by B. George, director of the ARChive of Contemporary Music.
Digital Library of Amateur Radio and Communications
A project to preserve recordings of amateur radio transmissions, with funding from the Amateur Radio Digital Communications foundation.
Live Music Archive
The Live Music Archive sub-collection includes more than 170,000 concert recordings from independent musicians, as well as more established artists and musical ensembles with permissive rules about recording their concerts, such as the Grateful Dead, and more recently, The Smashing Pumpkins. Also, Jordan Zevon has allowed the Internet Archive to host a definitive collection of his father Warren Zevon's concert recordings. The Zevon collection ranges from 1976 to 2001 and contains 126 concerts including 1,137 songs.
The Great 78 Project
The Great 78 Project aims to digitize 250,000 78 rpm singles (500,000 songs) from the period between 1880 and 1960, donated by various collectors and institutions. It has been developed in collaboration with the Archive of Contemporary Music and George Blood Audio, responsible for the audio digitization.
Netlabels
The Archive has a collection of freely distributable music that is streamed and available for download via its Netlabels service. The music in this collection generally has Creative Commons-license catalogs of virtual record labels.
Images collection
This collection contains more than 3.5 million items. Cover Art Archive, Metropolitan Museum of Art – Gallery Images, NASA Images, Occupy Wall Street Flickr Archive, and USGS Maps are some sub-collections of Image collection.
Cover Art Archive
The Cover Art Archive is a joint project between the Internet Archive and MusicBrainz, whose goal is to make cover art images available on the Internet. The collection contains more than 1,400,000 items.
Metropolitan Museum of Art images
The images of this collection are from the Metropolitan Museum of Art. This collection contains more than 140,000 items.
NASA Images
The NASA Images archive was created through a Space Act Agreement between the Internet Archive and NASA to bring public access to NASA's image, video, and audio collections in a single, searchable resource. The Internet Archive NASA Images team worked closely with all of the NASA centers to keep adding to the ever-growing collection. The nasaimages.org site launched in July 2008 and had more than 100,000 items online at the end of its hosting in 2012.
Occupy Wall Street Flickr archive
This collection contains Creative Commons-licensed photographs from Flickr related to the Occupy Wall Street movement. This collection contains more than 15,000 items.
USGS Maps
This collection contains more than 59,000 items from Libre Map Project.
Machinima Archive
One of the sub-collections of the Internet Archive's Video Archive is the Machinima Archive. This small section hosts many Machinima videos. Machinima is a digital artform in which computer games, game engines, or software engines are used in a sandbox-like mode to create motion pictures, recreate plays, or even publish presentations or keynotes. The archive collects a range of Machinima films from internet publishers such as Rooster Teeth and Machinima.com as well as independent producers. The sub-collection is a collaborative effort among the Internet Archive, the How They Got Game research project at Stanford University, the Academy of Machinima Arts and Sciences, and Machinima.com.
Microfilm collection
This collection contains approximately 160,000 microfilmed items from a variety of libraries including the University of Chicago Libraries, University of Illinois at Urbana-Champaign, University of Alberta, Allen County Public Library, and National Technical Information Service.
Moving image collection
The Internet Archive holds a collection of approximately 3,863 feature films. Additionally, the Internet Archive's Moving Image collection includes: newsreels, classic cartoons, pro- and anti-war propaganda, The Video Cellar Collection, Skip Elsheimer's "A.V. Geeks" collection, early television, and ephemeral material from Prelinger Archives, such as advertising, educational, and industrial films, as well as amateur and home movie collections.
Subcategories of this collection include:
IA's Brick Films collection, which contains stop-motion animation filmed with Lego bricks, some of which are "remakes" of feature films.
IA's Election 2004 collection, a non-partisan public resource for sharing video materials related to the 2004 United States presidential election.
IA's FedFlix collection, Joint Venture NTIS-1832 between the National Technical Information Service and Public.Resource.Org that features "the best movies of the United States Government, from training films to history, from our national parks to the U.S. Fire Academy and the Postal Inspectors"
IA's Independent News collection, which includes sub-collections such as the Internet Archive's World At War competition from 2001, in which contestants created short films demonstrating "why access to history matters". Among their most-downloaded video files are eyewitness recordings of the devastating 2004 Indian Ocean earthquake.
IA's September 11 Television Archive, which contains archival footage from the world's major television networks of the terrorist attacks of September 11, 2001, as they unfolded on live television.
Open Educational Resources
Open Educational Resources is a digital collection at archive.org. This collection contains hundreds of free courses, video lectures, and supplemental materials from universities in the United States and China. The contributors of this collection are ArsDigita University, Hewlett Foundation, MIT, Monterey Institute, and Naropa University.
TV News Search & Borrow
In September 2012, the Internet Archive launched the TV News Search & Borrow service for searching U.S. national news programs. The service is built on closed captioning transcripts and allows users to search and stream 30-second video clips. Upon launch, the service contained "350,000 news programs collected over 3 years from national U.S. networks and stations in San Francisco and Washington D.C." According to Kahle, the service was inspired by the Vanderbilt Television News Archive, a similar library of televised network news programs. In contrast to Vanderbilt, which limits access to streaming video to individuals associated with subscribing colleges and universities, the TV News Search & Borrow allows open access to its streaming video clips. In 2013, the Archive received an additional donation of "approximately 40,000 well-organized tapes" from the estate of a Philadelphia woman, Marion Stokes. Stokes "had recorded more than 35 years of TV news in Philadelphia and Boston with her VHS and Betamax machines."
Miscellaneous collections
Brooklyn Museum collection contains approximately 3,000 items from Brooklyn Museum. In December 2020, the film research library of Lillian Michelson was donated to the archive.
Other services and endeavors
Physical media
Voicing a strong reaction to the idea of books simply being thrown away, and inspired by the Svalbard Global Seed Vault, Kahle now envisions collecting one copy of every book ever published. "We're not going to get there, but that's our goal", he said. Alongside the books, Kahle plans to store the Internet Archive's old servers, which were replaced in 2010.
Software
The Internet Archive has "the largest collection of historical software online in the world", spanning 50 years of computer history in terabytes of computer magazines and journals, books, shareware discs, FTP sites, video games, etc. The Internet Archive has created an archive of what it describes as "vintage software", as a way to preserve it. The project advocated an exemption from the United States Digital Millennium Copyright Act to permit it to bypass copy protection, which the United States Copyright Office approved in 2003 for a period of three years. The Archive does not offer the software for download, as the exemption is solely "for the purpose of preservation or archival reproduction of published digital works by a library or archive." The Library of Congress renewed the exemption in 2006, and in 2009 indefinitely extended it pending further rulemakings. The Library reiterated the exemption as a "Final Rule" with no expiration date in 2010. In 2013, the Internet Archive began to provide select video games browser-playable via MESS, for instance the Atari 2600 game E.T. the Extra-Terrestrial. Since December 23, 2014, the Internet Archive presents, via a browser-based DOSBox emulation, thousands of DOS/PC games for "scholarship and research purposes only". In November 2020, the Archive introduced a new emulator for Adobe Flash called Ruffle, and began archiving Flash animations and games ahead of the December 31, 2020, end-of-life for the Flash plugin across all computer systems.
Table Top Scribe System
A combined hardware and software system has been developed to digitize content safely.
Credit Union
From 2012 to November 2015, the Internet Archive operated the Internet Archive Federal Credit Union, a federal credit union based in New Brunswick, New Jersey, with the goal of providing access to low- and middle-income people. Throughout its short existence, the IAFCU experienced significant conflicts with the National Credit Union Administration, which severely limited the IAFCU's loan portfolio and raised concerns over its serving Bitcoin firms. At the time of its dissolution, it consisted of 395 members and was worth $2.5 million.
Decentralization
Since 2019, the Internet Archive has organized an annual event called Decentralized Web Camp (DWeb Camp), which brings together a diverse global community of contributors in a natural setting. The camp aims to tackle real-world challenges facing the web and to co-create decentralized technologies for a better internet, fostering collaboration, learning, and fun while promoting principles of trust, human agency, mutual respect, and ecological awareness.
Wayforward Machine
On 30 September 2021, as a part of its 25th anniversary celebration, Internet Archive launched the "Wayforward Machine", a satirical, fictional website covered with pop-ups asking for personal information. The site was intended to depict a fictional dystopian timeline of real-world events leading to such a future, such as the repeal of Section 230 of the United States Code in 2022 and the introduction of advertising implants in 2041.
Ceramic archivists collection
The Great Room of the Internet Archive features a collection of more than 100 ceramic figures representing employees of the Internet Archive, with the 100th statue immortalizing Aaron Swartz. This collection, inspired by the terracotta warrior statues of Xi'an, China, was commissioned by Brewster Kahle, sculpted by Nuala Creed, and as of 2014, is ongoing.
Artists in residence
The Internet Archive visual arts residency, organized by Amir Saber Esfahani, is designed to connect emerging and mid-career artists with the Archive's millions of collections and to show what is possible when open access to information intersects with the arts. During this one-year residency, selected artists develop a body of work that responds to and utilizes the Archive's collections in their own practice.
2019 Residency Artists: Caleb Duarte, Whitney Lynn, and Jeffrey Alan Scudder
2018 Residency Artists: Mieke Marple, Chris Sollars, and Taravat Talepasand
2017 Residency Artists: Laura Kim, Jeremiah Jenkins, and Jenny Odell
Controversies, legal disputes, and activism
Opposition to National security letters, bills and settlements
On May 8, 2008, it was revealed that the Internet Archive had successfully challenged an FBI national security letter asking for logs on an undisclosed user.
On November 28, 2016, it was revealed that a second FBI national security letter had been successfully challenged that had been asking for logs on another undisclosed user.
The Internet Archive blacked out its web site for 12 hours on January 18, 2012, in protest of the Stop Online Piracy Act and the PROTECT IP Act bills, two pieces of legislation in the United States Congress that they argued would "negatively affect the ecosystem of web publishing that led to the emergence of the Internet Archive". This occurred in conjunction with the English Wikipedia blackout, as well as numerous other protests across the Internet.
The Internet Archive is a member of the Open Book Alliance, which has been among the most outspoken critics of the Google Book Settlement. The Archive advocates an alternative digital library project.
Hosting of disputed media
On October 9, 2016, the Internet Archive was temporarily blocked in Turkey after it was used (amongst other file hosting services) by hackers to host 17 GB of leaked government emails.
Because the Internet Archive only lightly moderates uploads, it includes resources that may be valued by extremists and the site may be used by them to evade block listing. In February 2018, the Counter Extremism Project said that the Archive hosted terrorist videos, including the beheading of Alan Henning, and had declined to respond to requests about the videos. In May 2018, a report published by the cyber-security firm Flashpoint stated that the Islamic State was using the Internet Archive to share its propaganda. Chris Butler, from the Internet Archive, responded that they regularly spoke to the US and EU governments about sharing information on terrorism. In April 2019, Europol, acting on a referral from French police, asked the Internet Archive to remove 550 sites of "terrorist propaganda". The Archive rejected the request, saying that the reports were wrong about the content they pointed to, or were too broad for the organization to comply with. On July 14, 2021, the Internet Archive held a joint "Referral Action Day" with Europol to target terrorist videos.
A 2021 article said that jihadists regularly used the Internet Archive for "dead drops" of terrorist videos. In January 2022, a former UCLA lecturer's 800-page manifesto, containing racist ideas and threats against UCLA staff, was uploaded to the Internet Archive. The manifesto was removed by the Internet Archive after a week, amidst discussion about whether such documents should be preserved by archivists or not. Another 2022 paper found "an alarming volume of terrorist, extremist, and racist material on the Internet Archive". A 2023 paper reported that Neo-Nazis collect links to online, publicly available resources to be shared with new recruits. As the Internet Archive hosts uploaded texts that are not allowed on other websites, Nazi and neo-Nazi books in the Archive (e.g., The Turner Diaries) frequently appear on these lists. These lists also feature older, public domain material created when white supremacist views were more mainstream.
2020 National Emergency Library
In the midst of the COVID-19 pandemic, which closed many schools, universities, and libraries, the Archive announced on March 24, 2020, that it was creating the National Emergency Library by removing the waitlist restrictions it had in place for 1.4 million digitized books in its Open Library, while still limiting the number of books users could check out and enforcing their return; normally, the site would allow only one digital lending for each physical copy of a book it held, by use of an encrypted file that would become unusable after the lending period was completed. The National Emergency Library would remain in place until at least June 30, 2020, or until the US national emergency was over, whichever came later. At launch, the Internet Archive allowed authors and rightsholders to submit opt-out requests for their works to be omitted from the National Emergency Library.
The Internet Archive said the National Emergency Library addressed an "unprecedented global and immediate need for access to reading and research material" due to the closures of physical libraries worldwide. They justified the move in a number of ways. Legally, they said they were promoting access to those inaccessible resources, which they claimed was an exercise in fair use principles. The Archive continued implementing their controlled digital lending policy that predated the National Emergency Library, meaning they still encrypted the lent copies and it was no easier for users to create new copies of the books than before. An ultimate determination of whether or not the National Emergency Library constituted fair use could only be made by a court. Morally, they also pointed out that the Internet Archive was a registered library like any other, that they either paid for the books themselves or received them as donations, and that lending through libraries predated copyright restrictions.
The Archive had already been criticized by authors and publishers for its prior lending approach, and upon announcement of the National Emergency Library, authors, publishers, and groups representing both took further issue with The Archive and its Open Library project, equating the move to copyright infringement and digital piracy, and using the COVID-19 pandemic as a reason to push the boundaries of copyright. After the works of some of these authors were ridiculed in responses, the Internet Archive's Jason Scott requested that supporters of the National Emergency Library not denigrate anyone's books: "I realize there's strong debate and disagreement here, but books are life-giving and life-changing and these writers made them."
Copyright issues
In November 2005, free downloads of Grateful Dead concerts were removed from the site, following what seemed to be disagreements between some of the former band members. John Perry Barlow identified Bob Weir, Mickey Hart, and Bill Kreutzmann as the instigators of the change, according to an article in The New York Times. Phil Lesh, a founding member of the band, commented on the change in a November 30, 2005, posting to his personal web site:
A November 30 forum post from Brewster Kahle summarized what appeared to be the compromise reached among the band members. Audience recordings could be downloaded or streamed, but soundboard recordings were to be available for streaming only. Concerts have since been re-added.
In February 2016, Internet Archive users had begun archiving digital copies of Nintendo Power, Nintendo's official magazine for their games and products, which ran from 1988 to 2012. The first 140 issues had been collected, before Nintendo had the archive removed on August 8, 2016. In response to the take-down, Nintendo told gaming website Polygon, "[Nintendo] must protect our own characters, trademarks and other content. The unapproved use of Nintendo's intellectual property can weaken our ability to protect and preserve it, or to possibly use it for new projects".
In August 2017, the Department of Telecommunications of the Government of India blocked the Internet Archive along with other file-sharing websites, in accordance with two court orders issued by the Madras High Court, citing piracy concerns after copies of two Bollywood films were allegedly shared via the service. The HTTP version of the Archive was blocked but it remained accessible using the HTTPS protocol.
In 2023, the Internet Archive became a popular site for Indians to watch the first episode of India: The Modi Question, a BBC documentary released on January 17 and banned in India by January 20. The video was reported to have been removed by the Archive on January 23. The Internet Archive then stated, on January 27, that they had removed the video in response to a BBC request under the Digital Millennium Copyright Act.
Book publishers' lawsuit
The operation of the National Emergency Library was part of a lawsuit filed against the Internet Archive by four major book publishers—Hachette, HarperCollins, John Wiley & Sons, and Penguin Random House—in June 2020, challenging the copyright validity of the controlled digital lending program. In response, the Internet Archive closed the National Emergency Library on June 16, 2020, rather than the planned June 30, 2020, due to the lawsuit. The plaintiffs, supported by the Copyright Alliance, claimed in their lawsuit that the Internet Archive's actions constituted a "willful mass copyright infringement."
Judge Koeltl ruled on March 24, 2023, against Internet Archive in the case, saying the National Emergency Library concept was not fair use, so the Archive infringed their copyrights by lending out the books without the waitlist restriction. An agreement was then reached for the Internet Archive to pay an undisclosed amount to the publishers. The Internet Archive appealed the ruling. On September 4, 2024, the U.S. Court of Appeals for the Second Circuit upheld the district court's ruling, calling the Internet Archive's argument that they were shielded by fair use doctrine "unpersuasive".
Music publishers' lawsuit
In August 2023, the music industry corporations Universal Music Group (UMG), Sony Music and Concord sued the Internet Archive over its Great 78 Project, asserting the project was engaged in copyright infringement. The Great 78 Project stores digitized versions of pre-1972 songs and albums from 78 rpm phonograph records, for "the preservation, research and discovery of 78rpm records." The project had started in 2016, when pre-1972 recordings had not been protected by copyright; in 2018, the U.S. Congress passed the Music Modernization Act (MMA) which enabled legal remedies for unauthorised use of pre-1972 recordings until 2067, thus effectively covering them with copyright.
UMG and Sony had been the two largest companies in this sector for more than a decade, with respective market shares of 31.8% and 22.1% in 2023. Concord was a rapidly expanding music business closely partnered with UMG since its transformation into Concord Music Group in 2004 and backed since at least 2000 by J.P. Morgan. It was the first music company to perform an asset-backed securitization, led by Apollo Global Management, in December 2022. Its assets consisted of over 1 million copyrights to music older than 18 months. According to its CEO Bob Valentine, Concord derived about 85% of its revenue "from catalog, rather than newly-developed, music". As Valentine stated in his first interview, "The phenomenon of artists' IP has never been more liquid; it is now a real and proven asset class. Investment bankers are focused on it, financiers are financing it, and then there's entities like us, that know how to buy rights, but also know how to manage them and have the relationships to do so." The share of catalog music in total album equivalent consumption in the United States rose from 62.8% to 72.6% between 2019 and 2023.
The publishers are seeking statutory damages for nearly 4,142 songs named in the suit, with a maximum possible fine of $621 million. The Internet Archive has argued that the primitive sound quality of the original recordings falls within the doctrine of "fair use" to digitize for preservation, that the number of downloads is so small it has almost no impact on the publishers' revenue, and over 95% of the collection is not readily available anywhere else. The plaintiffs said in response, "if ever there were a theory of fair use invented for litigation, this is it." According to a legal source at Mayer Brown, the music publishers' case could be challenged as unconstitutional, since the granting of copyright to pre-1972 works in the MMA only benefitted record companies without having a systemic effect.
| Technology | Utility | null |
176964 | https://en.wikipedia.org/wiki/Diode%20bridge | Diode bridge | A diode bridge is a bridge rectifier circuit of four diodes that is used in the process of converting alternating current (AC) from the input terminals to direct current (DC, i.e. fixed polarity) on the output terminals. Its function is to convert the negative voltage portions of the AC waveform to positive voltage, after which a low-pass filter can be used to smooth the result into DC.
When used in its most common application, for conversion of an alternating-current (AC) input into a direct-current (DC) output, it is known as a bridge rectifier. A bridge rectifier provides full-wave rectification from a two-wire AC input, resulting in lower cost and weight as compared to a rectifier with a three-wire input from a transformer with a center-tapped secondary winding.
Prior to the availability of integrated circuits, a bridge rectifier was constructed from separate diodes. Since about 1950, a single four-terminal component containing the four diodes connected in a bridge configuration has been available and is now available with various voltage and current ratings.
Diodes are also used in bridge topologies along with capacitors as voltage multipliers.
History
The diode bridge circuit was invented by Karol Pollak and patented in December 1895 in Great Britain and in January 1896 in Germany. In 1897, Leo Graetz independently invented and published a similar circuit. Today the circuit is sometimes referred to as a "Graetz circuit" or "Graetz bridge".
Current flow
According to the conventional model of current flow (originally established by Benjamin Franklin and still followed by most engineers today), current flows through electrical conductors from the positive to the negative pole (defined as positive flow). In actuality, free electrons in a conductor nearly always flow from the negative to the positive pole. In the vast majority of applications, however, the actual direction of current flow is irrelevant. Therefore, in the discussion below the conventional model is retained.
The fundamental characteristic of a diode is that current can flow only one way through it, which is defined as the forward direction. A diode bridge uses diodes as series components to allow current to pass in the forward direction during the positive part of the AC cycle and as shunt components to redirect current flowing in the reverse direction during the negative part of the AC cycle to the opposite rails.
Rectifier
In the diagrams below, when the input connected to the left corner of the diamond is positive, and the input connected to the right corner is negative, current flows from the upper supply terminal to the right along the red (positive) path to the output and returns to the lower supply terminal through the blue (negative) path.
When the input connected to the left corner is negative, and the input connected to the right corner is positive, current flows from the lower supply terminal to the right along the red (positive) path to the output and returns to the upper supply terminal through the blue (negative) path.
In each case, the upper right output remains positive, and lower right output negative. Since this is true whether the input is AC or DC, this circuit not only produces a DC output from an AC input, it can also provide reverse-polarity protection; that is, it permits normal functioning of DC-powered equipment when batteries have been installed backwards, or when the leads from a DC power source have been reversed, and protects the equipment from potential damage caused by reverse polarity.
Alternatives to the diode-bridge full-wave rectifier are the double-diode rectifier fed from a transformer with a center-tapped secondary, and the voltage-doubler rectifier using two diodes and two capacitors in a bridge topology.
Smoothing circuits
With AC input, the output of a diode bridge (called a full-wave rectifier for this purpose; there is also half-wave rectification, which does not use a diode bridge) is a polarized, pulsating, non-sinusoidal voltage of the same amplitude as, but twice the frequency of, the input. It may be considered as a DC voltage upon which a very large ripple voltage is superimposed. This kind of electric power is not very usable, because ripple is dissipated as waste heat in DC circuit components and may cause noise or distortion during circuit operation. So nearly all rectifiers are followed by a series of low-pass filter stages and/or a voltage regulator to convert most or all of the ripple voltage into a smoother and possibly higher DC output. A filter may be as simple as a single sufficiently large capacitor or choke, but most power-supply filters have multiple alternating series and shunt components. When the ripple voltage rises, energy is stored in the filter components, reducing the voltage rise; when the ripple voltage falls, energy is discharged from the filter components, raising the voltage. The final stage of rectification may consist of a zener diode-based voltage regulator, which almost completely eliminates any residual ripple.
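As a rough illustration of why the reservoir capacitor must be "sufficiently large", the peak-to-peak ripple of a capacitor-smoothed full-wave rectifier feeding a roughly constant load current is commonly estimated as V_ripple ≈ I / (2fC). The sketch below applies this textbook estimate; the component values are assumptions for the example, not figures from this article.

```python
def fullwave_ripple(i_load: float, f_line: float, c_filter: float) -> float:
    """Approximate peak-to-peak ripple (V) of a capacitor-smoothed
    full-wave rectifier. The capacitor discharges for about half a line
    period between charging pulses, so dV = I * dt / C with dt = 1/(2f)."""
    return i_load / (2 * f_line * c_filter)

# Example: 1 A load on 50 Hz mains (100 Hz ripple), 4700 uF reservoir capacitor.
print(fullwave_ripple(1.0, 50.0, 4700e-6))  # about 2.1 V peak-to-peak
```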
Polyphase diode bridges
The diode bridge can be generalized to rectify polyphase AC inputs. For example, for a three-phase AC input, a half-wave rectifier consists of three diodes, but a full-wave bridge rectifier consists of six diodes.
A half-wave rectifier may be considered a wye connection (star connection), because it returns the current through the center (neutral) wire. A full-wave rectifier is more like a delta connection, although it can be connected to the three-phase source of either wye or delta and it does not use the center (neutral) wire.
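To see numerically why the six-diode bridge is smoother than the single-phase bridge, one can model ideal diodes: at every instant the output rides on the most positive phase and returns through the most negative one. A minimal illustrative sketch (the function names and values are assumptions for the example):

```python
import math

def six_pulse_output(t: float, v_peak: float = 1.0, f: float = 50.0) -> float:
    """Ideal three-phase (six-diode) bridge: the output equals the most
    positive phase voltage minus the most negative one."""
    phases = [v_peak * math.sin(2 * math.pi * f * t - k * 2 * math.pi / 3)
              for k in range(3)]
    return max(phases) - min(phases)

# The output ripples between 1.5*v_peak and ~1.73*v_peak, never dipping to
# zero, unlike a single-phase bridge whose output reaches zero twice per cycle.
samples = [six_pulse_output(k / 1200.0) for k in range(24)]
print(min(samples), max(samples))
```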
| Technology | Functional circuits | null |
177092 | https://en.wikipedia.org/wiki/Active%20galactic%20nucleus | Active galactic nucleus | An active galactic nucleus (AGN) is a compact region at the center of a galaxy that emits a significant amount of energy across the electromagnetic spectrum, with characteristics indicating that this luminosity is not produced by the stars. Such excess, non-stellar emissions have been observed in the radio, microwave, infrared, optical, ultra-violet, X-ray and gamma ray wavebands. A galaxy hosting an AGN is called an active galaxy. The non-stellar radiation from an AGN is theorized to result from the accretion of matter by a supermassive black hole at the center of its host galaxy.
Active galactic nuclei are the most luminous persistent sources of electromagnetic radiation in the universe and, as such, can be used as a means of discovering distant objects; their evolution as a function of cosmic time also puts constraints on models of the cosmos.
The observed characteristics of an AGN depend on several properties such as the mass of the central black hole, the rate of gas accretion onto the black hole, the orientation of the accretion disk, the degree of obscuration of the nucleus by dust, and presence or absence of jets.
Numerous subclasses of AGN have been defined on the basis of their observed characteristics; the most powerful AGN are classified as quasars. A blazar is an AGN with a jet pointed toward the Earth, in which radiation from the jet is enhanced by relativistic beaming.
History
During the first half of the 20th century, photographic observations of nearby galaxies detected some characteristic signatures of AGN emission, although there was not yet a physical understanding of the nature of the AGN phenomenon. Some early observations included the first spectroscopic detection of emission lines from the nuclei of NGC 1068 and Messier 81 by Edward Fath (published in 1909), and the discovery of the jet in Messier 87 by Heber Curtis (published in 1918). Further spectroscopic studies by astronomers including Vesto Slipher, Milton Humason, and Nicholas Mayall noted the presence of unusual emission lines in some galaxy nuclei. In 1943, Carl Seyfert published a paper in which he described observations of nearby galaxies having bright nuclei that were sources of unusually broad emission lines. Galaxies observed as part of this study included NGC 1068, NGC 4151, NGC 3516, and NGC 7469. Active galaxies such as these are known as Seyfert galaxies in honor of Seyfert's pioneering work.
The development of radio astronomy was a major catalyst to understanding AGN. Some of the earliest detected radio sources are nearby active elliptical galaxies such as Messier 87 and Centaurus A. Another radio source, Cygnus A, was identified by Walter Baade and Rudolph Minkowski as a tidally distorted galaxy with an unusual emission-line spectrum, having a recessional velocity of 16,700 kilometers per second. The 3C radio survey led to further progress in discovery of new radio sources as well as identifying the visible-light sources associated with the radio emission. In photographic images, some of these objects were nearly point-like or quasi-stellar in appearance, and were classified as quasi-stellar radio sources (later abbreviated as "quasars").
Soviet Armenian astrophysicist Viktor Ambartsumian introduced the concept of active galactic nuclei in the early 1950s. At the Solvay Conference on Physics in 1958, Ambartsumian presented a report arguing that explosions in galactic nuclei cause large amounts of mass to be expelled, and that for these explosions to occur, galactic nuclei must contain bodies of huge mass and unknown nature. From this point forward, active galactic nuclei became a key component in theories of galactic evolution. His idea was initially accepted skeptically.
A major breakthrough was the measurement of the redshift of the quasar 3C 273 by Maarten Schmidt, published in 1963. Schmidt noted that if this object was extragalactic (outside the Milky Way, at a cosmological distance) then its large redshift of 0.158 implied that it was the nuclear region of a galaxy about 100 times more powerful than other radio galaxies that had been identified. Shortly afterward, optical spectra were used to measure the redshifts of a growing number of quasars including 3C 48, even more distant at redshift 0.37.
The enormous luminosities of these quasars as well as their unusual spectral properties indicated that their power source could not be ordinary stars. Accretion of gas onto a supermassive black hole was suggested as the source of quasars' power in papers by Edwin Salpeter and Yakov Zeldovich in 1964. In 1969 Donald Lynden-Bell proposed that nearby galaxies contain supermassive black holes at their centers as relics of "dead" quasars, and that black hole accretion was the power source for the non-stellar emission in nearby Seyfert galaxies. In the 1960s and 1970s, early X-ray astronomy observations demonstrated that Seyfert galaxies and quasars are powerful sources of X-ray emission, which originates from the inner regions of black hole accretion disks.
Today, AGN are a major topic of astrophysical research, both observational and theoretical. AGN research encompasses observational surveys to find AGN over broad ranges of luminosity and redshift, examination of the cosmic evolution and growth of black holes, studies of the physics of black hole accretion and the emission of electromagnetic radiation from AGN, examination of the properties of jets and outflows of matter from AGN, and the impact of black hole accretion and quasar activity on galaxy evolution.
Models
Since the late 1960s it has been argued that an AGN must be powered by accretion of mass onto massive black holes (10⁶ to 10¹⁰ times the solar mass). AGN are both compact and persistently extremely luminous. Accretion can potentially give very efficient conversion of potential and kinetic energy to radiation, and a massive black hole has a high Eddington luminosity, and as a result, it can provide the observed high persistent luminosity. Supermassive black holes are now believed to exist in the centres of most if not all massive galaxies since the mass of the black hole correlates well with the velocity dispersion of the galactic bulge (the M–sigma relation) or with bulge luminosity. Thus, AGN-like characteristics are expected whenever a supply of material for accretion comes within the sphere of influence of the central black hole.
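For scale, the Eddington luminosity mentioned above can be evaluated directly; for pure hydrogen (electron-scattering opacity) it is L_Edd = 4πGMm_p·c/σ_T ≈ 1.26 × 10³¹ W × (M/M_⊙). The sketch below is an illustrative calculation based on this standard formula, not part of the article text.

```python
import math

# Standard physical constants (SI), rounded.
G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_SUN   = 1.989e30    # solar mass, kg
M_P     = 1.673e-27   # proton mass, kg
C       = 2.998e8     # speed of light, m/s
SIGMA_T = 6.652e-29   # Thomson cross-section, m^2

def eddington_luminosity(m_bh_solar: float) -> float:
    """Eddington luminosity (W) for a black hole of the given mass in solar
    masses, assuming pure-hydrogen (electron-scattering) opacity."""
    return 4 * math.pi * G * (m_bh_solar * M_SUN) * M_P * C / SIGMA_T

# A 10^8 solar-mass black hole, typical of a luminous AGN:
print(f"{eddington_luminosity(1e8):.2e} W")  # ~1.3e39 W
```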
Accretion disc
In the standard model of AGN, cold material close to a black hole forms an accretion disc. Dissipative processes in the accretion disc transport matter inwards and angular momentum outwards, while causing the accretion disc to heat up. The expected spectrum of an accretion disc peaks in the optical-ultraviolet waveband; in addition, a corona of hot material forms above the accretion disc and can inverse-Compton scatter photons up to X-ray energies. The radiation from the accretion disc excites cold atomic material close to the black hole and this in turn radiates at particular emission lines. A large fraction of the AGN's radiation may be obscured by interstellar gas and dust close to the accretion disc, but (in a steady-state situation) this will be re-radiated at some other waveband, most likely the infrared.
Relativistic jets
Some accretion discs produce twin jets: highly collimated, fast outflows that emerge in opposite directions from close to the disc. The direction of the jet ejection is determined either by the angular momentum axis of the accretion disc or the spin axis of the black hole. The jet production mechanism, and indeed the jet composition on very small scales, are not understood at present because the resolution of astronomical instruments is too low. The jets have their most obvious observational effects in the radio waveband, where very-long-baseline interferometry can be used to study the synchrotron radiation they emit at resolutions of sub-parsec scales. However, they radiate in all wavebands from the radio through to the gamma-ray range via the synchrotron and the inverse-Compton scattering process, and so AGN jets are a second potential source of any observed continuum radiation.
Radiatively inefficient AGN
There exists a class of "radiatively inefficient" solutions to the equations that govern accretion. Several theories exist, but the most widely known of these is the Advection Dominated Accretion Flow (ADAF). In this type of accretion, which is important for accretion rates well below the Eddington limit, the accreting matter does not form a thin disc and consequently does not efficiently radiate away the energy that it acquired as it moved close to the black hole. Radiatively inefficient accretion has been used to explain the lack of strong AGN-type radiation from massive black holes at the centres of elliptical galaxies in clusters, where otherwise we might expect high accretion rates and correspondingly high luminosities. Radiatively inefficient AGN would be expected to lack many of the characteristic features of standard AGN with an accretion disc.
Particle acceleration
AGN are a candidate source of high and ultra-high energy cosmic rays (see also Centrifugal mechanism of acceleration).
Observational characteristics
Among the many interesting characteristics of AGNs:
very high luminosity, visible out to very high red shifts,
small emitting regions, milli-parsecs in diameter,
strong evolution of luminosity functions,
detectable emission across the entire electromagnetic spectrum.
Types of active galaxy
It is convenient to divide AGN into two classes, conventionally called radio-quiet and radio-loud. Radio-loud objects have emission contributions from both the jet(s) and the lobes that the jets inflate. These emission contributions dominate the luminosity of the AGN at radio wavelengths and possibly at some or all other wavelengths. Radio-quiet objects are simpler since jet and any jet-related emission can be neglected at all wavelengths.
AGN terminology is often confusing, since the distinctions between different types of AGN sometimes reflect historical differences in how the objects were discovered or initially classified, rather than real physical differences.
Radio-quiet AGN
Low-ionization nuclear emission-line regions (LINERs). As the name suggests, these systems show only weak nuclear emission-line regions, and no other signatures of AGN emission. It is debatable whether all such systems are true AGN (powered by accretion on to a supermassive black hole). If they are, they constitute the lowest-luminosity class of radio-quiet AGN. Some may be radio-quiet analogues of the low-excitation radio galaxies (see below).
Seyfert galaxies. Seyferts were the earliest distinct class of AGN to be identified. They show optical range nuclear continuum emission, narrow and occasionally broad emission lines, occasionally strong nuclear X-ray emission and sometimes a weak small-scale radio jet. Originally they were divided into two types known as Seyfert 1 and 2: Seyfert 1s show strong broad emission lines while Seyfert 2s do not, and Seyfert 1s are more likely to show strong low-energy X-ray emission. Various forms of elaboration on this scheme exist: for example, Seyfert 1s with relatively narrow broad lines are sometimes referred to as narrow-line Seyfert 1s. The host galaxies of Seyferts are usually spiral or irregular galaxies.
Radio-quiet quasars/QSOs. These are essentially more luminous versions of Seyfert 1s: the distinction is arbitrary and is usually expressed in terms of a limiting optical magnitude. Quasars were originally 'quasi-stellar' in optical images as they had optical luminosities that were greater than that of their host galaxy. They always show strong optical continuum emission, X-ray continuum emission, and broad and narrow optical emission lines. Some astronomers use the term QSO (Quasi-Stellar Object) for this class of AGN, reserving 'quasar' for radio-loud objects, while others talk about radio-quiet and radio-loud quasars. The host galaxies of quasars can be spirals, irregulars or ellipticals. There is a correlation between the quasar's luminosity and the mass of its host galaxy, in that the most luminous quasars inhabit the most massive galaxies (ellipticals).
'Quasar 2s'. By analogy with Seyfert 2s, these are objects with quasar-like luminosities but without strong optical nuclear continuum emission or broad line emission. They are scarce in surveys, though a number of possible candidate quasar 2s have been identified.
Radio-loud AGN
There are several subtypes of radio-loud active galactic nuclei.
Radio-loud quasars behave exactly like radio-quiet quasars with the addition of emission from a jet. Thus they show strong optical continuum emission, broad and narrow emission lines, and strong X-ray emission, together with nuclear and often extended radio emission.
"Blazars" (BL Lac objects and OVV quasars) classes are distinguished by rapidly variable, polarized optical, radio and X-ray emission. BL Lac objects show no optical emission lines, broad or narrow, so that their redshifts can only be determined from features in the spectra of their host galaxies. The emission-line features may be intrinsically absent or simply swamped by the additional variable component. In the latter case, emission lines may become visible when the variable component is at a low level. OVV quasars behave more like standard radio-loud quasars with the addition of a rapidly variable component. In both classes of source, the variable emission is believed to originate in a relativistic jet oriented close to the line of sight. Relativistic effects amplify both the luminosity of the jet and the amplitude of variability.
Radio galaxies. These objects show nuclear and extended radio emission. Their other AGN properties are heterogeneous. They can broadly be divided into low-excitation and high-excitation classes. Low-excitation objects show no strong narrow or broad emission lines, and the emission lines they do have may be excited by a different mechanism. Their optical and X-ray nuclear emission is consistent with originating purely in a jet. They may be the best current candidates for AGN with radiatively inefficient accretion. By contrast, high-excitation objects (narrow-line radio galaxies) have emission-line spectra similar to those of Seyfert 2s. The small class of broad-line radio galaxies, which show relatively strong nuclear optical continuum emission probably includes some objects that are simply low-luminosity radio-loud quasars. The host galaxies of radio galaxies, whatever their emission-line type, are essentially always ellipticals.
Unification of AGN species
Unified models propose that different observational classes of AGN are a single type of physical object observed under different conditions. The currently favoured unified models are 'orientation-based unified models' meaning that they propose that the apparent differences between different types of objects arise simply because of their different orientations to the observer. However, they are debated (see below).
Radio-quiet unification
At low luminosities, the objects to be unified are Seyfert galaxies. The unification models propose that in Seyfert 1s the observer has a direct view of the active nucleus. In Seyfert 2s the nucleus is observed through an obscuring structure which prevents a direct view of the optical continuum, broad-line region or (soft) X-ray emission. The key insight of orientation-dependent accretion models is that the two types of object can be the same if only certain angles to the line of sight are observed. The standard picture is of a torus of obscuring material surrounding the accretion disc. It must be large enough to obscure the broad-line region but not large enough to obscure the narrow-line region, which is seen in both classes of object. Seyfert 2s are seen through the torus. Outside the torus there is material that can scatter some of the nuclear emission into our line of sight, allowing us to see some optical and X-ray continuum and, in some cases, broad emission lines—which are strongly polarized, showing that they have been scattered and proving that some Seyfert 2s really do contain hidden Seyfert 1s. Infrared observations of the nuclei of Seyfert 2s also support this picture.
At higher luminosities, quasars take the place of Seyfert 1s, but, as already mentioned, the corresponding 'quasar 2s' are elusive at present. If they do not have the scattering component of Seyfert 2s they would be hard to detect except through their luminous narrow-line and hard X-ray emission.
Radio-loud unification
Historically, work on radio-loud unification has concentrated on high-luminosity radio-loud quasars. These can be unified with narrow-line radio galaxies in a manner directly analogous to the Seyfert 1/2 unification (but without the complication of much in the way of a reflection component: narrow-line radio galaxies show no nuclear optical continuum or reflected X-ray component, although they do occasionally show polarized broad-line emission). The large-scale radio structures of these objects provide compelling evidence that the orientation-based unified models really are true. X-ray evidence, where available, supports the unified picture: radio galaxies show evidence of obscuration from a torus, while quasars do not, although care must be taken since radio-loud objects also have a soft unabsorbed jet-related component, and high resolution is necessary to separate out thermal emission from the sources' large-scale hot-gas environment. At very small angles to the line of sight, relativistic beaming dominates, and we see a blazar of some variety.
However, the population of radio galaxies is completely dominated by low-luminosity, low-excitation objects. These do not show strong nuclear emission lines—broad or narrow—they have optical continua which appear to be entirely jet-related, and their X-ray emission is also consistent with coming purely from a jet, with no heavily absorbed nuclear component in general. These objects cannot be unified with quasars, even though they include some high-luminosity objects when looking at radio emission, since the torus can never hide the narrow-line region to the required extent, and since infrared studies show that they have no hidden nuclear component: in fact there is no evidence for a torus in these objects at all. Most likely, they form a separate class in which only jet-related emission is important. At small angles to the line of sight, they will appear as BL Lac objects.
Criticism of the radio-quiet unification
In the recent literature, the Unified Model is the subject of intense debate: an increasing set of observations appears to conflict with some of its key predictions, e.g. that each Seyfert 2 has an obscured Seyfert 1 nucleus (a hidden broad-line region).
Therefore, one cannot know whether the gas in all Seyfert 2 galaxies is ionized due to photoionization from a single, non-stellar continuum source in the center or due to shock-ionization from e.g. intense, nuclear starbursts. Spectropolarimetric studies reveal that only 50% of Seyfert 2s show a hidden broad-line region and thus split Seyfert 2 galaxies into two populations. The two classes of populations appear to differ by their luminosity, where the Seyfert 2s without a hidden broad-line region are generally less luminous. This suggests absence of broad-line region is connected to low Eddington ratio, and not to obscuration.
The covering factor of the torus might play an important role. Some torus models predict how Seyfert 1s and Seyfert 2s can obtain different covering factors from a luminosity and accretion-rate dependence of the torus covering factor, something supported by X-ray studies of AGN. The models also suggest an accretion-rate dependence of the broad-line region and provide a natural evolution from more active engines in Seyfert 1s to more "dead" Seyfert 2s, and can explain the observed break-down of the unified model at low luminosities and the evolution of the broad-line region.
While studies of single AGN show important deviations from the expectations of the unified model, results from statistical tests have been contradictory. The most important shortcoming of statistical tests based on direct comparisons of samples of Seyfert 1s and Seyfert 2s is the introduction of selection biases due to anisotropic selection criteria.
Studying neighbour galaxies rather than the AGN themselves first suggested that the numbers of neighbours were larger for Seyfert 2s than for Seyfert 1s, in contradiction with the Unified Model. Today, having overcome the previous limitations of small sample sizes and anisotropic selection, studies of the neighbours of hundreds to thousands of AGN have shown that the neighbours of Seyfert 2s are intrinsically dustier and more star-forming than those of Seyfert 1s, and have revealed a connection between AGN type, host-galaxy morphology and collision history. Moreover, angular clustering studies of the two AGN types confirm that they reside in different environments and show that they reside within dark matter halos of different masses. The AGN environment studies are in line with evolution-based unification models in which Seyfert 2s transform into Seyfert 1s during mergers, supporting earlier models of merger-driven activation of Seyfert 1 nuclei.
While controversy about the soundness of each individual study still prevails, they all agree that the simplest viewing-angle-based models of AGN unification are incomplete. Seyfert 1s and Seyfert 2s seem to differ in star formation and AGN engine power.
While it still might be valid that an obscured Seyfert 1 can appear as a Seyfert 2, not all Seyfert 2s must host an obscured Seyfert 1. Understanding whether it is the same engine driving all Seyfert 2s, the connection to radio-loud AGN, the mechanisms of the variability of some AGN that vary between the two types at very short time scales, and the connection of the AGN type to small and large-scale environment remain important issues to incorporate into any unified model of active galactic nuclei.
A study of Swift/BAT AGN published in July 2022 adds support to the "radiation-regulated unification model" outlined in 2017. In this model, the relative accretion rate (termed the "Eddington ratio") of the black hole has a significant impact on the observed features of the AGN. Black holes with higher Eddington ratios appear to be more likely to be unobscured, having cleared away locally obscuring material on a very short timescale.
Cosmological uses and evolution
For a long time, active galaxies held all the records for the highest-redshift objects known either in the optical or the radio spectrum, because of their high luminosity. They still have a role to play in studies of the early universe, but it is now recognised that an AGN gives a highly biased picture of the "typical" high-redshift galaxy.
Most luminous classes of AGN (radio-loud and radio-quiet) seem to have been much more numerous in the early universe. This suggests that massive black holes formed early on and that the conditions for the formation of luminous AGN were more common in the early universe, such as a much higher availability of cold gas near the centre of galaxies than at present. It also implies that many objects that were once luminous quasars are now much less luminous, or entirely quiescent. The evolution of the low-luminosity AGN population is much less well understood due to the difficulty of observing these objects at high redshifts.
| Physical sciences | Active galactic nucleus | null |
177111 | https://en.wikipedia.org/wiki/Error%20function | Error function | In mathematics, the error function (also called the Gauss error function), often denoted by $\operatorname{erf}$, is a function defined as: $\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt.$
The integral here is a complex contour integral which is path-independent because $e^{-t^2}$ is holomorphic on the whole complex plane $\mathbb{C}$. In many applications, the function argument is a real number, in which case the function value is also real.
In some old texts, the error function is defined without the factor of $\frac{2}{\sqrt{\pi}}$.
This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations.
In statistics, for non-negative real values of $x$, the error function has the following interpretation: for a real random variable $Y$ that is normally distributed with mean 0 and standard deviation $\frac{1}{\sqrt{2}}$, $\operatorname{erf} x$ is the probability that $Y$ falls in the range $[-x, x]$.
Two closely related functions are the complementary error function, defined as $\operatorname{erfc} z = 1 - \operatorname{erf} z,$ and the imaginary error function, defined as $\operatorname{erfi} z = -i \operatorname{erf}(iz),$ where $i$ is the imaginary unit.
Name
The name "error function" and its abbreviation were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors." The error function complement was also discussed by Glaisher in a separate publication in the same year.
For the "law of facility" of errors whose density is given by
(the normal distribution), Glaisher calculates the probability of an error lying between and as:
Applications
When the results of a series of measurements are described by a normal distribution with standard deviation $\sigma$ and expected value 0, then $\operatorname{erf}\left(\frac{a}{\sigma\sqrt{2}}\right)$ is the probability that the error of a single measurement lies between $-a$ and $+a$, for positive $a$. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable $X \sim \operatorname{Norm}[\mu, \sigma]$ (a normal distribution with mean $\mu$ and standard deviation $\sigma$) and a constant $L < \mu$, it can be shown via integration by substitution: $\Pr[X \le L] = \tfrac{1}{2} + \tfrac{1}{2}\operatorname{erf}\left(\frac{L-\mu}{\sqrt{2}\,\sigma}\right) \approx A \exp\left(-B \left(\frac{L-\mu}{\sigma}\right)^2\right),$ where $A$ and $B$ are certain numeric constants. If $L$ is sufficiently far from the mean, specifically $\mu - L \ge \sigma \sqrt{\ln k}$, then: $\Pr[X \le L] \le A \exp(-B \ln k) = \frac{A}{k^B},$ so the probability goes to 0 as $k \to \infty$.
The probability for $X$ being in the interval $[L_a, L_b]$ can be derived as $\Pr[L_a \le X \le L_b] = \tfrac{1}{2}\left(\operatorname{erf}\left(\frac{L_b-\mu}{\sqrt{2}\,\sigma}\right) - \operatorname{erf}\left(\frac{L_a-\mu}{\sqrt{2}\,\sigma}\right)\right).$
Properties
The property $\operatorname{erf}(-z) = -\operatorname{erf} z$ means that the error function is an odd function. This directly results from the fact that the integrand $e^{-t^2}$ is an even function (the antiderivative of an even function which is zero at the origin is an odd function and vice versa).
Since the error function is an entire function which takes real numbers to real numbers, for any complex number $z$: $\operatorname{erf} \bar{z} = \overline{\operatorname{erf} z},$ where $\bar{z}$ is the complex conjugate of $z$.
The integrand $f = \exp(-z^2)$ and $f = \operatorname{erf} z$ are shown in the complex $z$-plane in the figures at right with domain coloring.
The error function at $+\infty$ is exactly 1 (see Gaussian integral). At the real axis, $\operatorname{erf} z$ approaches unity at $z \to +\infty$ and $-1$ at $z \to -\infty$. At the imaginary axis, it tends to $\pm i\infty$.
Taylor series
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges. For $x \gg 1$, however, cancellation of leading terms makes the Taylor expansion unpractical.
The defining integral cannot be evaluated in closed form in terms of elementary functions (see Liouville's theorem), but by expanding the integrand $e^{-t^2}$ into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as: $\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n+1}}{n!\,(2n+1)} = \frac{2}{\sqrt{\pi}} \left(z - \frac{z^3}{3} + \frac{z^5}{10} - \frac{z^7}{42} + \frac{z^9}{216} - \cdots\right),$ which holds for every complex number $z$. The denominator terms are sequence A007680 in the OEIS.
For iterative calculation of the above series, the following alternative formulation may be useful: $\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \left(z \prod_{k=1}^{n} \frac{-(2k-1)\,z^2}{k\,(2k+1)}\right),$ because $\frac{-(2k-1)\,z^2}{k\,(2k+1)}$ expresses the multiplier to turn the $k$th term into the $(k+1)$th term (considering $z$ as the first term).
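As a sanity check on the series, the following illustrative sketch sums the Maclaurin terms using exactly this term-to-term multiplier and compares the result with Python's standard-library math.erf:

```python
import math

def erf_maclaurin(z: float, n_terms: int = 30) -> float:
    """Maclaurin series of erf, built with the multiplier
    -(2k-1) z^2 / (k (2k+1)) that turns the k-th term into the next
    (z itself is the first term)."""
    term = z
    total = z
    for k in range(1, n_terms):
        term *= -(2 * k - 1) * z * z / (k * (2 * k + 1))
        total += term
    return 2.0 / math.sqrt(math.pi) * total

for x in (0.5, 1.0, 2.0):
    print(x, erf_maclaurin(x), math.erf(x))  # agree to near machine precision
```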
The imaginary error function has a very similar Maclaurin series, which is: $\operatorname{erfi} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{z^{2n+1}}{n!\,(2n+1)},$ which holds for every complex number $z$.
Derivative and integral
The derivative of the error function follows immediately from its definition: $\frac{d}{dz} \operatorname{erf} z = \frac{2}{\sqrt{\pi}} e^{-z^2}.$
From this, the derivative of the imaginary error function is also immediate: $\frac{d}{dz} \operatorname{erfi} z = \frac{2}{\sqrt{\pi}} e^{z^2}.$
An antiderivative of the error function, obtainable by integration by parts, is $z \operatorname{erf} z + \frac{e^{-z^2}}{\sqrt{\pi}}.$
An antiderivative of the imaginary error function, also obtainable by integration by parts, is $z \operatorname{erfi} z - \frac{e^{z^2}}{\sqrt{\pi}}.$
Higher order derivatives are given by $\frac{d^k}{dz^k} \operatorname{erf} z = \frac{2\,(-1)^{k-1}}{\sqrt{\pi}} H_{k-1}(z)\, e^{-z^2} = \frac{2}{\sqrt{\pi}} \frac{d^{k-1}}{dz^{k-1}} \left(e^{-z^2}\right), \qquad k = 1, 2, \dots,$ where $H_k$ are the physicists' Hermite polynomials.
Bürmann series
An expansion, which converges more rapidly for all real values of $x$ than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem: $\operatorname{erf} x = \frac{2}{\sqrt{\pi}} \operatorname{sgn} x \cdot \sqrt{1 - e^{-x^2}} \left(\frac{\sqrt{\pi}}{2} + \sum_{k=1}^{\infty} c_k e^{-k x^2}\right),$ where $\operatorname{sgn}$ is the sign function. By keeping only the first two coefficients and choosing $c_1 = \frac{31}{200}$ and $c_2 = -\frac{341}{8000}$, the resulting approximation shows its largest relative error at $x = \pm 1.3796$, where it is less than 0.0036127: $\operatorname{erf} x \approx \frac{2}{\sqrt{\pi}} \operatorname{sgn} x \cdot \sqrt{1 - e^{-x^2}} \left(\frac{\sqrt{\pi}}{2} + \frac{31}{200} e^{-x^2} - \frac{341}{8000} e^{-2x^2}\right).$
Inverse functions
Given a complex number $z$, there is not a unique complex number $w$ satisfying $\operatorname{erf} w = z$, so a true inverse function would be multivalued. However, for $-1 < x < 1$, there is a unique real number denoted $\operatorname{erf}^{-1} x$ satisfying $\operatorname{erf}\left(\operatorname{erf}^{-1} x\right) = x.$
The inverse error function is usually defined with domain $(-1, 1)$, and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk $|z| < 1$ of the complex plane, using the Maclaurin series $\operatorname{erf}^{-1} z = \sum_{k=0}^{\infty} \frac{c_k}{2k+1} \left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1},$ where $c_0 = 1$ and $c_k = \sum_{m=0}^{k-1} \frac{c_m c_{k-1-m}}{(m+1)(2m+1)} = \left\{1, 1, \frac{7}{6}, \frac{127}{90}, \frac{4369}{2520}, \dots\right\}.$
So we have the series expansion (common factors have been canceled from numerators and denominators): $\operatorname{erf}^{-1} z = \frac{\sqrt{\pi}}{2} \left(z + \frac{\pi}{12} z^3 + \frac{7\pi^2}{480} z^5 + \frac{127\pi^3}{40320} z^7 + \frac{4369\pi^4}{5806080} z^9 + \cdots\right).$
(After cancellation the numerator and denominator values are sequences A092676 and A092677 in the OEIS respectively; without cancellation the numerator terms are values in A002067.) The error function's value at $\pm\infty$ is equal to $\pm 1$.
For $|z| < 1$, we have $\operatorname{erf}\left(\operatorname{erf}^{-1} z\right) = z$.
The inverse complementary error function is defined as $\operatorname{erfc}^{-1}(1 - z) = \operatorname{erf}^{-1} z.$
For real $x$, there is a unique real number $\operatorname{erfi}^{-1} x$ satisfying $\operatorname{erfi}\left(\operatorname{erfi}^{-1} x\right) = x$. The inverse imaginary error function is defined as $\operatorname{erfi}^{-1} x$.
For any real $x$, Newton's method can be used to compute $\operatorname{erfi}^{-1} x$, and for $-1 \le x \le 1$, the following Maclaurin series converges: $\operatorname{erfi}^{-1} z = \sum_{k=0}^{\infty} \frac{(-1)^k c_k}{2k+1} \left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1},$ where $c_k$ is defined as above.
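Since erfi and its derivative $\frac{2}{\sqrt{\pi}} e^{w^2}$ are both easy to evaluate, the Newton iteration mentioned above is straightforward to sketch. The series-based erfi helper below is an assumption of this example (it is not in Python's standard library), and the starting guess is deliberately crude:

```python
import math

def erfi(w: float, n_terms: int = 60) -> float:
    """Maclaurin series of the imaginary error function (real argument)."""
    term, total = w, w            # term holds w^(2n+1)/n!; total is the sum
    for n in range(1, n_terms):
        term *= w * w / n
        total += term / (2 * n + 1)
    return 2.0 / math.sqrt(math.pi) * total

def inverse_erfi(x: float, tol: float = 1e-14) -> float:
    """Solve erfi(w) = x by Newton's method; erfi'(w) = 2/sqrt(pi)*exp(w^2)."""
    w = math.copysign(min(abs(x), 1.0), x)  # crude starting guess
    for _ in range(100):
        step = (erfi(w) - x) / (2.0 / math.sqrt(math.pi) * math.exp(w * w))
        w -= step
        if abs(step) < tol:
            break
    return w

print(erfi(inverse_erfi(0.75)))  # ~0.75
```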
Asymptotic expansion
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real $x$ is $\operatorname{erfc} x = \frac{e^{-x^2}}{x \sqrt{\pi}} \sum_{n=0}^{\infty} (-1)^n \frac{(2n-1)!!}{(2x^2)^n},$ where $(2n-1)!!$ is the double factorial of $(2n-1)$, which is the product of all odd numbers up to $(2n-1)$. This series diverges for every finite $x$, and its meaning as asymptotic expansion is that for any integer $N \ge 1$ one has $\operatorname{erfc} x = \frac{e^{-x^2}}{x \sqrt{\pi}} \sum_{n=0}^{N-1} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} + R_N(x),$
where the remainder is $R_N(x) := \frac{(-1)^N}{\sqrt{\pi}}\, 2^{1-2N}\, \frac{(2N)!}{N!} \int_x^{\infty} t^{-2N} e^{-t^2}\,dt,$
which follows easily by induction, writing $e^{-t^2} = -(2t)^{-1} \left(e^{-t^2}\right)'$
and integrating by parts.
The asymptotic behavior of the remainder term, in Landau notation, is $R_N(x) = O\left(x^{-(1+2N)} e^{-x^2}\right)$ as $x \to \infty$. This can be found by $R_N(x) \propto \int_x^{\infty} t^{-2N} e^{-t^2}\,dt = e^{-x^2} \int_0^{\infty} (t+x)^{-2N} e^{-t^2 - 2tx}\,dt \le e^{-x^2} x^{-2N} \int_0^{\infty} e^{-2tx}\,dt \propto x^{-(1+2N)} e^{-x^2}.$
For large enough values of $x$, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of $\operatorname{erfc} x$ (while for not too large values of $x$, the above Taylor expansion at 0 provides a very fast convergence).
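Because the series diverges, practical use truncates it at its smallest term ("optimal truncation"). An illustrative sketch, compared against the standard-library erfc:

```python
import math

def erfc_asymptotic(x: float, max_terms: int = 20) -> float:
    """Asymptotic series erfc x ~ e^(-x^2)/(x sqrt(pi)) *
    sum_n (-1)^n (2n-1)!!/(2x^2)^n, truncated when terms stop shrinking."""
    prefactor = math.exp(-x * x) / (x * math.sqrt(math.pi))
    term, total = 1.0, 1.0
    for n in range(1, max_terms):
        next_term = term * -(2 * n - 1) / (2 * x * x)
        if abs(next_term) >= abs(term):  # optimal truncation point reached
            break
        term = next_term
        total += term
    return prefactor * total

for x in (2.0, 4.0):
    print(x, erfc_asymptotic(x), math.erfc(x))  # closer for larger x
```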
Continued fraction expansion
A continued fraction expansion of the complementary error function was found by Laplace: $\operatorname{erfc} z = \frac{z}{\sqrt{\pi}} e^{-z^2} \cfrac{1}{z^2 + \cfrac{a_1}{1 + \cfrac{a_2}{z^2 + \cfrac{a_3}{1 + \dotsb}}}}, \qquad a_m = \frac{m}{2}.$
Factorial series
The inverse factorial series: $\operatorname{erfc} z = \frac{e^{-z^2}}{\sqrt{\pi}\, z} \sum_{n=0}^{\infty} \frac{(-1)^n Q_n}{(z^2+1)^{\bar{n}}}$ converges for $\operatorname{Re}(z^2) > 0$. Here $Q_n \mathrel{\overset{\text{def}}{=}} \frac{1}{\Gamma(\frac{1}{2})} \int_0^{\infty} \tau(\tau-1)\cdots(\tau-n+1)\, \tau^{-\frac{1}{2}} e^{-\tau}\,d\tau = \sum_{k=0}^{n} \left(\tfrac{1}{2}\right)^{\bar{k}} s(n,k),$ $z^{\bar{n}}$ denotes the rising factorial, and $s(n,k)$ denotes a signed Stirling number of the first kind.
There also exists a representation by an infinite sum containing the double factorial: $\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-2)^n (2n-1)!!}{(2n+1)!} z^{2n+1}.$
Numerical approximations
Approximation with elementary functions
Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:
(maximum error: $5 \times 10^{-4}$) $\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4\right)^4}, \quad x \ge 0,$
where $a_1 = 0.278393$, $a_2 = 0.230389$, $a_3 = 0.000972$, $a_4 = 0.078108$
(maximum error: $2.5 \times 10^{-5}$) $\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3\right) e^{-x^2}, \quad t = \frac{1}{1 + px}, \quad x \ge 0,$
where $p = 0.47047$, $a_1 = 0.3480242$, $a_2 = -0.0958798$, $a_3 = 0.7478556$
(maximum error: $3 \times 10^{-7}$) $\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + \cdots + a_6 x^6\right)^{16}}, \quad x \ge 0,$
where $a_1 = 0.0705230784$, $a_2 = 0.0422820123$, $a_3 = 0.0092705272$, $a_4 = 0.0001520143$, $a_5 = 0.0002765672$, $a_6 = 0.0000430638$
(maximum error: $1.5 \times 10^{-7}$) $\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5\right) e^{-x^2}, \quad t = \frac{1}{1 + px}, \quad x \ge 0,$
where $p = 0.3275911$, $a_1 = 0.254829592$, $a_2 = -0.284496736$, $a_3 = 1.421413741$, $a_4 = -1.453152027$, $a_5 = 1.061405429$
All of these approximations are valid for $x \ge 0$. To use these approximations for negative $x$, use the fact that $\operatorname{erf} x$ is an odd function, so $\operatorname{erf} x = -\operatorname{erf}(-x)$.
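As an illustration, the last (most accurate) of these rational approximations is short to implement; treat the constants as quoted from Abramowitz & Stegun equation 7.1.26 as reconstructed above:

```python
import math

# Coefficients of Abramowitz & Stegun eq. 7.1.26 (max error about 1.5e-7).
P = 0.3275911
A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

def erf_as(x: float) -> float:
    """erf x ~ 1 - (a1 t + ... + a5 t^5) e^(-x^2), t = 1/(1 + p x),
    valid for x >= 0; oddness extends it to negative x."""
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + P * x)
    poly = t * (A[0] + t * (A[1] + t * (A[2] + t * (A[3] + t * A[4]))))
    return sign * (1.0 - poly * math.exp(-x * x))

print(erf_as(1.0), math.erf(1.0))  # differ by less than ~1.5e-7
```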
Exponential bounds and a pure exponential approximation for the complementary error function are given by $\operatorname{erfc} x \le \tfrac{1}{2} e^{-2x^2} + \tfrac{1}{2} e^{-x^2} \le e^{-x^2}, \quad x > 0;$ $\operatorname{erfc} x \approx \tfrac{1}{6} e^{-x^2} + \tfrac{1}{2} e^{-\frac{4}{3} x^2}, \quad x > 0.$
The above have been generalized to sums of $N$ exponentials with increasing accuracy in terms of $N$, so that $\operatorname{erfc} x$ can be accurately approximated or bounded by $2\tilde{Q}(\sqrt{2}\,x)$, where $\tilde{Q}(x) = \sum_{n=1}^{N} a_n e^{-b_n x^2}.$
In particular, there is a systematic methodology to solve the numerical coefficients $\{(a_n, b_n)\}_{n=1}^{N}$ that yield a minimax approximation or bound for the closely related Q-function: $Q(x) \approx \tilde{Q}(x)$, $Q(x) \le \tilde{Q}(x)$, or $Q(x) \ge \tilde{Q}(x)$ for $x \ge 0$. The coefficients for many variations of the exponential approximations and bounds up to $N = 25$ have been released to open access as a comprehensive dataset.
A tight approximation of the complementary error function for $x \in [0, \infty)$ is given by Karagiannidis & Lioumpas (2007), who showed for the appropriate choice of parameters $\{A, B\}$ that $\operatorname{erfc} x \approx \frac{\left(1 - e^{-Ax}\right) e^{-x^2}}{B \sqrt{\pi}\, x}.$
They determined $\{A, B\} = \{1.98, 1.135\}$, which gave a good approximation for all $x \ge 0$. Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound.
A single-term lower bound is $\operatorname{erfc} x \ge \sqrt{\frac{2e}{\pi}}\, \frac{\sqrt{\beta - 1}}{\beta}\, e^{-\beta x^2}, \quad x \ge 0, \quad \beta > 1,$ where the parameter $\beta$ can be picked to minimize error on the desired interval of approximation.
Another approximation is given by Sergei Winitzki using his "global Padé approximations": $\operatorname{erf} x \approx \operatorname{sgn} x \cdot \sqrt{1 - \exp\left(-x^2 \frac{\frac{4}{\pi} + a x^2}{1 + a x^2}\right)},$
where $a = \frac{8(\pi - 3)}{3\pi(4 - \pi)} \approx 0.140012.$
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the relative error is less than 0.00035 for all real $x$. Using the alternate value $a \approx 0.147$ reduces the maximum relative error to about 0.00013.
This approximation can be inverted to obtain an approximation for the inverse error function: $\operatorname{erf}^{-1} x \approx \operatorname{sgn} x \cdot \sqrt{\sqrt{\left(\frac{2}{\pi a} + \frac{\ln(1-x^2)}{2}\right)^2 - \frac{\ln(1-x^2)}{a}} - \left(\frac{2}{\pi a} + \frac{\ln(1-x^2)}{2}\right)}.$
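A sketch of the Winitzki approximation and its inversion, using the constant a given above (illustrative, with a round-trip check):

```python
import math

A_W = 8 * (math.pi - 3) / (3 * math.pi * (4 - math.pi))  # ~0.140012

def erf_winitzki(x: float) -> float:
    """Global Pade-style approximation; relative error < ~0.00035."""
    x2 = x * x
    inner = -x2 * (4 / math.pi + A_W * x2) / (1 + A_W * x2)
    return math.copysign(math.sqrt(1 - math.exp(inner)), x)

def erfinv_winitzki(y: float) -> float:
    """Inversion of the same approximation, for -1 < y < 1."""
    l = math.log(1 - y * y)
    u = 2 / (math.pi * A_W) + l / 2
    return math.copysign(math.sqrt(math.sqrt(u * u - l / A_W) - u), y)

print(erf_winitzki(erfinv_winitzki(0.5)))  # ~0.5, up to the stated error
```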
An approximation with a maximal error of $1.2 \times 10^{-7}$ for any real argument is: $\operatorname{erf} x = \begin{cases} 1 - \tau & x \ge 0 \\ \tau - 1 & x < 0 \end{cases}$
with $\tau = t \cdot \exp\left(-x^2 - 1.26551223 + 1.00002368\,t + 0.37409196\,t^2 + 0.09678418\,t^3 - 0.18628806\,t^4 + 0.27886807\,t^5 - 1.13520398\,t^6 + 1.48851587\,t^7 - 0.82215223\,t^8 + 0.17087277\,t^9\right)$
and $t = \frac{1}{1 + \frac{1}{2}|x|}.$
An approximation of $\operatorname{erfc} x$ with a maximum relative error less than $2^{-53}$ ($\approx 1.1 \times 10^{-16}$) in absolute value is available for $x \ge 0$; for $x < 0$ it extends via the reflection formula $\operatorname{erfc} x = 2 - \operatorname{erfc}(-x)$.
A simple approximation for real-valued arguments can be made through hyperbolic functions; such a formula keeps the absolute difference from $\operatorname{erf} x$ bounded for all real $x$.
Related functions
Complementary error function
The complementary error function, denoted $\operatorname{erfc}$, is defined as $\operatorname{erfc} x = 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt,$ which also defines $\operatorname{erfcx} x = e^{x^2} \operatorname{erfc} x$, the scaled complementary error function (which can be used instead of $\operatorname{erfc}$ to avoid arithmetic underflow). Another form of $\operatorname{erfc} x$ for $x \ge 0$ is known as Craig's formula, after its discoverer: $\operatorname{erfc} x = \frac{2}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{\sin^2\theta}\right) d\theta.$
This expression is valid only for positive values of $x$, but it can be used in conjunction with $\operatorname{erfc} x = 2 - \operatorname{erfc}(-x)$ to obtain $\operatorname{erfc} x$ for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the $\operatorname{erfc}$ of the sum of two non-negative variables is as follows: $\operatorname{erfc}(x + y) = \frac{2}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{\sin^2\theta} - \frac{y^2}{\cos^2\theta}\right) d\theta, \quad x, y \ge 0.$
Imaginary error function
The imaginary error function, denoted $\operatorname{erfi}$, is defined as $\operatorname{erfi} x = -i \operatorname{erf}(ix) = \frac{2}{\sqrt{\pi}} \int_0^x e^{t^2}\,dt = \frac{2}{\sqrt{\pi}} e^{x^2} D(x),$
where $D(x)$ is the Dawson function (which can be used instead of $\operatorname{erfi}$ to avoid arithmetic overflow).
Despite the name "imaginary error function", $\operatorname{erfi} x$ is real when $x$ is real.
When the error function is evaluated for arbitrary complex arguments $z$, the resulting complex error function is usually discussed in scaled form as the Faddeeva function: $w(z) = e^{-z^2} \operatorname{erfc}(-iz) = \operatorname{erfcx}(-iz).$
Cumulative distribution function
The error function is essentially identical to the standard normal cumulative distribution function, denoted $\Phi$, also named $\operatorname{norm}(x)$ by some software languages, as they differ only by scaling and translation. Indeed, $\Phi(x) = \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right) = \frac{1}{2} \operatorname{erfc}\left(-\frac{x}{\sqrt{2}}\right),$
or rearranged for $\operatorname{erf}$ and $\operatorname{erfc}$: $\operatorname{erf} x = 2\Phi\left(x\sqrt{2}\right) - 1, \qquad \operatorname{erfc} x = 2\Phi\left(-x\sqrt{2}\right) = 2\left(1 - \Phi\left(x\sqrt{2}\right)\right).$
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as $Q(x) = \frac{1}{2} - \frac{1}{2} \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right) = \frac{1}{2} \operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right).$
The inverse of $\Phi$ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as $\operatorname{probit}(p) = \Phi^{-1}(p) = \sqrt{2}\, \operatorname{erf}^{-1}(2p - 1) = -\sqrt{2}\, \operatorname{erfc}^{-1}(2p).$
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
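These scalings are exactly how the normal CDF and the Q-function are usually obtained from library erf/erfc routines; a minimal sketch:

```python
import math

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Phi(x) = (1 + erf((x - mu) / (sigma * sqrt(2)))) / 2."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def q_function(x: float) -> float:
    """Tail probability Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(normal_cdf(1.96))  # ~0.975
print(q_function(1.96))  # ~0.025
```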
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function): $\operatorname{erf} x = \frac{2x}{\sqrt{\pi}} M\left(\tfrac{1}{2}, \tfrac{3}{2}, -x^2\right).$
It has a simple expression in terms of the Fresnel integral.
In terms of the regularized gamma function $P$ and the incomplete gamma function, $\operatorname{erf} x = \operatorname{sgn} x \cdot P\left(\tfrac{1}{2}, x^2\right) = \frac{\operatorname{sgn} x}{\sqrt{\pi}} \gamma\left(\tfrac{1}{2}, x^2\right),$ where $\operatorname{sgn} x$ is the sign function.
Iterated integrals of the complementary error function
The iterated integrals of the complementary error function are defined by $i^n \operatorname{erfc} z = \int_z^{\infty} i^{n-1} \operatorname{erfc} \zeta \,d\zeta, \qquad i^0 \operatorname{erfc} z = \operatorname{erfc} z.$
The general recurrence formula is $2n \cdot i^n \operatorname{erfc} z = i^{n-2} \operatorname{erfc} z - 2z \cdot i^{n-1} \operatorname{erfc} z.$
They have the power series $i^n \operatorname{erfc} z = \sum_{j=0}^{\infty} \frac{(-z)^j}{2^{n-j}\, j!\, \Gamma\left(1 + \frac{n-j}{2}\right)},$
from which follow the symmetry properties $i^{2m} \operatorname{erfc}(-z) = -i^{2m} \operatorname{erfc} z + \sum_{q=0}^{m} \frac{z^{2q}}{2^{2(m-q)-1}\, (2q)!\, (m-q)!}$
and $i^{2m+1} \operatorname{erfc}(-z) = i^{2m+1} \operatorname{erfc} z + \sum_{q=0}^{m} \frac{z^{2q+1}}{2^{2(m-q)-1}\, (2q+1)!\, (m-q)!}.$
Implementations
As real function of a real argument
In POSIX-compliant operating systems, the header math.h shall declare and the mathematical library libm shall provide the functions erf and erfc (double precision) as well as their single precision and extended precision counterparts erff, erfl and erfcf, erfcl.
The GNU Scientific Library provides erf, erfc, log(erf), and scaled error functions.
As complex function of a complex argument
libcerf, numeric C library for complex error functions, provides the complex functions cerf, cerfc, cerfcx and the real functions erfi, erfcx with approximately 13–14 digits precision, based on the Faddeeva function as implemented in the MIT Faddeeva Package
| Mathematics | Specific functions | null |
177127 | https://en.wikipedia.org/wiki/Balance%20disorder | Balance disorder | A balance disorder is a disturbance that causes an individual to feel unsteady, for example when standing or walking. It may be accompanied by feelings of giddiness, or wooziness, or having a sensation of movement, spinning, or floating. Balance is the result of several body systems working together: the visual system (eyes), vestibular system (ears) and proprioception (the body's sense of where it is in space). Degeneration or loss of function in any of these systems can lead to balance deficits.
Signs and symptoms
Cognitive dysfunction (disorientation) may occur with vestibular disorders. Cognitive deficits are not just spatial in nature, but also include non-spatial functions such as object recognition memory. Vestibular dysfunction has been shown to adversely affect processes of attention and increased demands of attention can worsen the postural sway associated with vestibular disorders. Recent MRI studies also show that humans with bilateral vestibular damage (damage to both inner ears) undergo atrophy of the hippocampus which correlates with their degree of impairment on spatial memory tasks.
Causes
Problems with balance can occur when there is a disruption in any of the vestibular, visual, or proprioceptive systems. Abnormalities in balance function may indicate a wide range of pathologies from causes like inner ear disorders, low blood pressure, brain tumors, and brain injury including stroke.
Related to the ear
Causes of dizziness related to the ear are often characterized by vertigo (spinning) and nausea. Nystagmus (flickering of the eye, related to the Vestibulo-ocular reflex [VOR]) is often seen in patients with an acute peripheral cause of dizziness.
Benign paroxysmal positional vertigo (BPPV) – The most common cause of vertigo. It is typically described as a brief, intense sensation of spinning that occurs when there are changes in the position of the head with respect to gravity. An individual may experience BPPV when rolling over to the left or right, upon getting out of bed in the morning, or when looking up for an object on a high shelf. The cause of BPPV is the presence of normal but misplaced calcium crystals called otoconia, which are normally found in the utricle and saccule (the otolith organs) and are used to sense movement. If they fall from the utricle and become loose in the semicircular canals, they can distort the sense of movement and cause a mismatch between actual head movement and the information sent to the brain by the inner ear, causing a spinning sensation.
Migraine
Migraine headaches are a common neurological disease. Although typical migraines are characterized by moderate to severe throbbing headaches, vestibular migraines may be accompanied by symptoms of vestibular disorders such as dizziness, disequilibrium, nausea, and vomiting.
Presyncope
Presyncope is a feeling of lightheadedness or simply feeling faint. Syncope, by contrast, is actually fainting. A circulatory system deficiency, such as low blood pressure, can contribute to a feeling of dizziness when one suddenly stands up.
Diagnosis
The difficulty of making the right vestibular diagnosis is reflected in the fact that in some populations, more than one-third of the patients with a vestibular disease consult more than one physician – in some cases up to more than fifteen.
Treatment
There are various options for treating balance disorders. One option includes treatment for a disease or disorder that may be contributing to the balance problem, such as ear infection, stroke, multiple sclerosis, spinal cord injury, Parkinson's, neuromuscular conditions, acquired brain injury, cerebellar dysfunctions and/or ataxia, or some tumors, such as acoustic neuroma. Individual treatment will vary and will be based upon assessment results including symptoms, medical history, general health, and the results of medical tests. Additionally, tai chi may be a cost-effective method to prevent falls in the elderly.
Vestibular rehabilitation
Many types of balance disorders will require balance training, prescribed by an occupational therapist or physiotherapist. Physiotherapists often administer standardized outcome measures as part of their assessment in order to gain useful information and data about a patient's current status. Some standardized balance assessments or outcome measures include, but are not limited to, the Functional Reach Test, the Clinical Test for Sensory Integration in Balance (CTSIB), the Berg Balance Scale and/or the Timed Up and Go test. The data and information collected can further help the physiotherapist develop an intervention program that is specific to the individual assessed. Intervention programs may include training activities that can be used to improve static and dynamic postural control, body alignment, weight distribution, ambulation, fall prevention and sensory function.
Bilateral vestibular loss
Dysequilibrium arising from bilateral loss of vestibular function – such as can occur from ototoxic drugs such as gentamicin – can also be treated with balance retraining exercises (vestibular rehabilitation) although the improvement is not likely to be full recovery.
Research
Scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD) are working to understand the various balance disorders and the complex interactions between the labyrinth, other balance-sensing organs, and the brain. NIDCD scientists are studying eye movement to understand the changes that occur in aging, disease, and injury, as well as collecting data about eye movement and posture to improve diagnosis and treatment of balance disorders. They are also studying the effectiveness of certain exercises as a treatment option. Recently, a study published in JAMA Otolaryngology-Head & Neck Surgery found that balance problems are an indicator of mortality, potentially due to altered metabolism of the vestibular system.
| Biology and health sciences | Symptoms and signs | Health |
177163 | https://en.wikipedia.org/wiki/Emergency%20department | Emergency department | An emergency department (ED), also known as an accident and emergency department (A&E), emergency room (ER), emergency ward (EW) or casualty department, is a medical treatment facility specializing in emergency medicine, the acute care of patients who present without prior appointment; either by their own means or by that of an ambulance. The emergency department is usually found in a hospital or other primary care center.
Due to the unplanned nature of patient attendance, the department must provide initial treatment for a broad spectrum of illnesses and injuries, some of which may be life-threatening and require immediate attention. In some countries, emergency departments have become important entry points for those without other means of access to medical care.
The emergency departments of most hospitals operate 24 hours a day, although staffing levels may be varied in an attempt to reflect patient volume.
History
Accident services were provided by workmen's compensation plans, railway companies, and municipalities in Europe and the United States by the mid-to-late nineteenth century, but the world's first specialized trauma care center was opened in 1911 in the United States at the University of Louisville Hospital in Louisville, Kentucky. It was further developed in the 1930s by surgeon Arnold Griswold, who also equipped police and fire vehicles with medical supplies and trained officers to give emergency care while en route to the hospital.
Today, a typical hospital has its emergency department in its own section on the ground floor, with its own dedicated entrance. As patients can arrive at any time and with any complaint, a key part of the operation of an emergency department is the prioritization of cases based on clinical need. This process is called triage.
Triage is normally the first stage the patient passes through, and consists of a brief assessment, including a set of vital signs, and the assignment of a "chief complaint" (e.g. chest pain, abdominal pain, difficulty breathing, etc.). Most emergency departments have a dedicated area for this process to take place and may have staff dedicated to performing nothing but a triage role. In most departments, this role is fulfilled by a triage nurse, although dependent on training levels in the country and area, other health care professionals may perform the triage sorting, including paramedics and physicians. Triage is typically conducted face-to-face when the patient presents, or a form of triage may be conducted via radio with an ambulance crew; in this method, the paramedics will call the hospital's triage center with a short update about an incoming patient, who will then be triaged to the appropriate level of care.
Most patients will be initially assessed at triage and then passed to another area of the department, or another area of the hospital, with their waiting time determined by their clinical need. However, some patients may complete their treatment at the triage stage, for instance, if the condition is very minor and can be treated quickly, if only advice is required, or if the emergency department is not a suitable point of care for the patient. Conversely, patients with evidently serious conditions, such as cardiac arrest, will bypass triage altogether and move straight to the appropriate part of the department.
The resuscitation area, commonly referred to as "Trauma" or "Resus", is a key area in most departments. The most seriously ill or injured patients will be dealt with in this area, as it contains the equipment and staff required for dealing with immediately life-threatening illnesses and injuries. In such situations, the time in which the patient is treated is crucial. Typical resuscitation staffing involves at least one attending physician, and at least one and usually two nurses with trauma and Advanced Cardiac Life Support training. These personnel may be assigned to the resuscitation area for the entirety of the shift or may be "on call" for resuscitation coverage (i.e. if a critical case presents via walk-in triage or ambulance, the team will be paged to the resuscitation area to deal with the case immediately). Resuscitation cases may also be attended by residents, radiographers, ambulance personnel, respiratory therapists, hospital pharmacists and students of any of these professions depending upon the skill mix needed for any given case and whether or not the hospital provides teaching services.
Patients who exhibit signs of being seriously ill but are not in immediate danger of life or limb will be triaged to "acute care" or "majors", where they will be seen by a physician and receive a more thorough assessment and treatment. Examples of "majors" include chest pain, difficulty breathing, abdominal pain and neurological complaints. Advanced diagnostic testing may be conducted at this stage, including laboratory testing of blood and/or urine, ultrasonography, CT or MRI scanning. Medications appropriate to manage the patient's condition will also be given. Depending on underlying causes of the patient's chief complaint, he or she may be discharged home from this area or admitted to the hospital for further treatment.
Patients whose condition is not immediately life-threatening will be sent to an area suitable to deal with them, and these areas might typically be termed as a prompt care or minors area. Such patients may still have been found to have significant problems, including fractures, dislocations, and lacerations requiring suturing.
Children can present particular challenges in treatment. Some departments have dedicated pediatrics areas, and some departments employ a play therapist whose job is to put children at ease to reduce the anxiety caused by visiting the emergency department, as well as provide distraction therapy for simple procedures.
Many hospitals have a separate area for evaluation of psychiatric problems. These are often staffed by psychiatrists and mental health nurses and social workers. There is typically at least one room for people who are actively a risk to themselves or others (e.g. suicidal).
Fast decisions on life-and-death cases are critical in hospital emergency departments. As a result, doctors face great pressures to overtest and overtreat. The fear of missing something often leads to extra blood tests and imaging scans for what may be harmless chest pains, run-of-the-mill head bumps, and non-threatening stomach aches, with a high cost on the health care system.
Nomenclature in English
Emergency department became commonly used when emergency medicine was recognized as a medical specialty, and hospitals and medical centres developed departments of emergency medicine to provide services. Other common variations include 'emergency ward', 'emergency centre' or 'emergency unit'.
Accident and emergency (A&E) is deprecated in the United Kingdom but still in common parlance. It is also still in use in Hong Kong. Earlier terms such as 'casualty' or 'casualty department' were previously used officially and continue to be used informally. The same applies to 'emergency room', 'emerg', or 'ER' in North America, originating when emergency facilities were provided in a single room of the hospital by the department of surgery.
Signage
Regardless of naming convention, there is a widespread usage of directional signage in white text on a red background across the world, which indicates the location of the emergency department, or a hospital with such facilities.
Signs on emergency departments may contain additional information. In some American states, there is close regulation of the design and content of such signs. For example, California requires wording such as "Comprehensive Emergency Medical Service" and "Physician On Duty", to prevent persons in need of critical care from presenting to facilities that are not fully equipped and staffed.
In some countries, including the United States and Canada, a smaller facility that may provide assistance in medical emergencies is known as a clinic. Larger communities often have walk-in clinics where people with medical problems that would not be considered serious enough to warrant an emergency department visit can be seen. These clinics often do not operate on a 24-hour basis. Very large clinics may operate as "free-standing emergency centres", which are open 24 hours and can manage a very large number of conditions. However, if a patient presents to a free-standing clinic with a condition requiring hospital admission, he or she must be transferred to an actual hospital, as these facilities do not have the capability to provide inpatient care.
United States
The Centers for Medicare and Medicaid Services (CMS) classified emergency departments into two types: Type A, the majority, which are open 24 hours a day, 7 days a week, 365 days a year; and Type B, the rest, which are not. Many US emergency departments are exceedingly busy. A study found that in 2009, there were an estimated 128,885,040 ED encounters in US hospitals. Approximately one-fifth of ED visits in 2010 were for patients under the age of 18 years. In 2009–2010, a total of 19.6 million emergency department visits in the United States were made by persons aged 65 and over. Most encounters (82.8 percent) resulted in treatment and release; 17.2 percent were admitted to inpatient care.
The 1986 Emergency Medical Treatment and Active Labor Act is an act of the United States Congress that requires emergency departments, if the associated hospital receives payments from Medicare, to provide appropriate medical examination and emergency treatment to all individuals seeking treatment for a medical condition, regardless of citizenship, legal status, or ability to pay. As an unfunded mandate, the act contains no reimbursement provisions.
Rates of ED visits rose between 2006 and 2011 for almost every patient characteristic and location. The total rate of ED visits increased 4.5% in that time. However, the rate of visits for patients under one year of age declined 8.3%.
A survey of New York area doctors in February 2007 found that injuries and even deaths have been caused by excessive waits for hospital beds by ED patients. A 2005 patient survey found average ED wait times ranging from 2.3 hours in Iowa to 5.0 hours in Arizona.
One inspection of Los Angeles area hospitals by Congressional staff found the EDs operating at an average of 116% of capacity (meaning there were more patients than available treatment spaces) with insufficient beds to accommodate victims of a terrorist attack the size of the 2004 Madrid train bombings. Three of the five Level I trauma centres were on "diversion", meaning ambulances with all but the most severely injured patients were being directed elsewhere because the ED could not safely accommodate any more patients. This controversial practice was banned in Massachusetts (except for major incidents, such as a fire in the ED), effective 1 January 2009; in response, hospitals have devoted more staff to the ED at peak times and moved some elective procedures to non-peak times.
In 2009, there were 1,800 EDs in the country. In 2011, about 421 out of every 1,000 people in the United States visited the emergency department; five times as many were discharged as were admitted. Rural areas had the highest rate of ED visits (502 per 1,000 population) and large metro counties had the lowest (319 visits per 1,000 population). By region, the Midwest had the highest rate of ED visits (460 per 1,000 population) and Western states had the lowest (321 visits per 1,000 population).
Freestanding
In addition to normal hospital-based emergency departments, a trend has developed in some states (including Texas and Colorado) of emergency departments not attached to hospitals. These new emergency departments are referred to as free-standing emergency departments. The rationale for these operations is the ability to operate outside of hospital policies that may lead to increased wait times and reduced patient satisfaction.
These departments have attracted controversy due to consumer confusion around their prices and insurance coverage. In 2017, the largest operator, Adeptus Health, declared bankruptcy.
Overuse and utilization management
Patients may visit the emergency room for non-emergencies, which typically costs the patient and the managed-care insurance company more, and therefore the insurance company may apply utilization management to deny coverage. In 2004, a study found that emergency room visits were the most common reason for appealing disputes over coverage after receiving service. In 2017, Anthem expanded such coverage denials more broadly, provoking public policy reactions.
United Kingdom
All accident and emergency (A&E) departments throughout the United Kingdom are financed and managed publicly by the National Health Service (NHS of each constituent country: England, Scotland, Wales and Northern Ireland). The term "A&E" is widely recognised and used rather than the full name; it is used on road signs, official documentation, etc.
A&E services are provided to all, without charge. Other NHS medical care, including hospital treatment following an emergency, is free of charge only to those who are "ordinarily resident" in Britain; residency rather than citizenship is the criterion (details on charges vary from country to country).
In England departments are divided into three categories:
Type 1 department – major A&E, providing a consultant-led 24 hour service with full resuscitation facilities
Type 2 department – single specialty A&E service (e.g. ophthalmology, dentistry)
Type 3 department – other A&E/minor injury unit/walk-in centre, treating minor injuries and illnesses
Historically, waits for assessment in A&E were very long in some areas of the UK. In October 2002, the Department of Health introduced a four-hour target in emergency departments that required departments in England to assess and treat patients within four hours of arrival, with referral and assessment by other departments if deemed necessary. It was expected that the patients would have physically left the department within the four hours. Present policy is that 95% of all patient cases do not "breach" this four-hour wait. The busiest departments in the UK outside London include University Hospital of Wales in Cardiff, The North Wales Regional Hospital in Wrexham, the Royal Infirmary of Edinburgh and Queen Alexandra Hospital in Portsmouth.
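In practice, performance against the four-hour standard is a simple percentile check over attendance records. The following minimal sketch (with hypothetical timestamps; no standard data schema is implied) computes the proportion of attendances completed within four hours and compares it against the 95% threshold:

```python
from datetime import datetime, timedelta

# Hypothetical attendance records: (arrival time, time of discharge/admission/transfer).
attendances = [
    (datetime(2024, 1, 1, 9, 0),  datetime(2024, 1, 1, 11, 30)),
    (datetime(2024, 1, 1, 9, 20), datetime(2024, 1, 1, 14, 10)),
    (datetime(2024, 1, 1, 10, 5), datetime(2024, 1, 1, 13, 55)),
]

TARGET = timedelta(hours=4)

within_target = sum(1 for arrived, left in attendances if left - arrived <= TARGET)
rate = within_target / len(attendances)

print(f"{rate:.1%} of attendances completed within four hours")
print("standard met" if rate >= 0.95 else "standard breached")
```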
In July 2014, the QualityWatch research programme published in-depth analysis which tracked 41 million A&E attendances from 2010 to 2013. This showed that the number of patients in a department at any one time was closely linked to waiting times, and that crowding in A&E had increased as a result of a growing and ageing population, compounded by the freezing or reduction of A&E capacity. Between 2010/11 and 2012/13 crowding increased by 8%, despite a rise of just 3% in A&E visits, and this trend looks set to continue. Other influential factors identified by the report included temperature (with both hotter and colder weather pushing up A&E visits), staffing and inpatient bed numbers.
A&E services in the UK are often the focus of a great deal of media and political interest, and data on A&E performance is published weekly. However, this is only one part of a complex urgent and emergency care system. Reducing A&E waiting times therefore requires a comprehensive, coordinated strategy across a range of related services.
Many A&E departments are crowded and confusing. Many of those attending are understandably anxious, and some are mentally ill or, especially at night, under the influence of alcohol or other substances. Pearson Lloyd's redesign – 'A Better A&E' – is claimed to have reduced aggression against hospital staff in the departments by 50 per cent. A system of environmental signage provides location-specific information for patients. Screens provide live information about how many cases are being handled and the current status of the A&E department. Waiting times for patients to be seen at A&E were rising in the years leading up to 2020, and were hugely worsened during the COVID-19 pandemic that started in 2020.
In response to the year-on-year increasing pressure on A&E units, followed by the unprecedented effects of the COVID-19 pandemic, the NHS in late 2020 proposed a radical change to handling of urgent and emergency care,
separating "emergency" and "urgent". Emergencies are . Urgent requirements are for . As part of the response, walk-in Urgent Treatment Centres (UTC) were created. People potentially needing A&E treatment are recommended to phone the NHS111 line, which will either book an arrival time for A&E, or recommend a more appropriate procedure. (Information is for England; details may vary in different countries.)
Critical conditions handled
Cardiac arrest
Cardiac arrest is a sudden (in most cases, unexpected) loss of heart function, breathing, and consciousness. This emergency usually results from an electrical disturbance in the heart that disrupts its pumping action, stopping blood flow to the rest of the body. It is different from a heart attack, in which blood flow to a part of the heart is blocked. Cardiac arrest may occur in the ED/A&E, or a patient may be transported by ambulance to the emergency department already in this state. Treatment consists of basic life support, defibrillation with an automated external defibrillator (AED), and advanced life support as taught in advanced life support and advanced cardiac life support courses. Cardiac arrest cannot be self-diagnosed; it requires immediate medical attention and diagnosis by a healthcare professional.
Heart attack
Patients arriving at the emergency department with a myocardial infarction (heart attack) are likely to be triaged to the resuscitation area. They will receive oxygen and monitoring and have an early ECG; aspirin will be given if not contraindicated or not already administered by the ambulance team; morphine or diamorphine will be given for pain; sublingual (under the tongue) or buccal (between cheek and upper gum) glyceryl trinitrate (nitroglycerin) (GTN or NTG) will be given, unless contraindicated by the presence of other drugs.
An ECG that reveals ST segment elevation suggests complete blockage of one of the main coronary arteries. These patients require immediate reperfusion (re-opening) of the occluded vessel, which can be achieved in two ways: thrombolysis (clot-busting medication) or percutaneous transluminal coronary angioplasty (PTCA). Both significantly reduce the mortality of myocardial infarction. Many centers are now moving to the use of PTCA, as it is somewhat more effective than thrombolysis if it can be administered early; this may involve transfer to a nearby facility equipped for angioplasty.
Trauma
Major trauma, the term for patients with multiple injuries, often from a motor vehicle crash or a major fall, is initially handled in the Emergency Department. However, trauma is a separate (surgical) specialty from emergency medicine (which is itself a medical specialty, and has certifications in the United States from the American Board of Emergency Medicine).
Trauma is treated by a trauma team who have been trained using the principles taught in the internationally recognized Advanced Trauma Life Support (ATLS) course of the American College of Surgeons. Some other international training bodies have started to run similar courses based on the same principles.
The services that are provided in an emergency department can range from x-rays and the setting of broken bones to those of a full-scale trauma centre. A patient's chance of survival is greatly improved if the patient receives definitive treatment (i.e. surgery or reperfusion) within one hour of an accident (such as a car accident) or onset of acute illness (such as a heart attack). This critical time frame is commonly known as the "golden hour".
Some emergency departments in smaller hospitals are located near a helipad which is used by helicopters to transport a patient to a trauma centre. This inter-hospital transfer is often done when a patient requires advanced medical care unavailable at the local facility. In such cases the emergency department can only stabilize the patient for transport.
Mental illness
Some patients arrive at an emergency department with a complaint of mental illness. In many jurisdictions (including many U.S. states), patients who appear to be mentally ill and to present a danger to themselves or others may be brought against their will to an emergency department by law enforcement officers for psychiatric examination. The emergency department conducts medical clearance rather than treating acute behavioral disorders. From the emergency department, patients with significant mental illness will be transferred to a psychiatric unit (in many cases involuntarily). In recent years, EmPATH units have been developed to relieve pressure on hospital emergency departments and improve the treatment of psychiatric emergencies.
Emergency departments are often the first point of contact with healthcare for people who self-harm. As such they are crucial in supporting them and can play a role in preventing suicide. At the same time, according to a study conducted in England, people who self-harm often experience that they do not receive meaningful care at the emergency department. Higher ambient temperature may also increase mental illness related emergency department presentations, particularly in females.
Asthma and COPD
Acute exacerbations of chronic respiratory diseases, mainly asthma and chronic obstructive pulmonary disease (COPD), are assessed as emergencies and treated with oxygen therapy, bronchodilators, steroids, or theophylline; patients have an urgent chest X-ray and arterial blood gases taken and are referred for intensive care if necessary. Noninvasive ventilation in the ED has reduced the requirement for tracheal intubation in many cases of severe exacerbations of COPD.
Special facilities, training, and equipment
An ED requires different equipment and different approaches than most other hospital divisions. Patients frequently arrive with unstable conditions, and so must be treated quickly. They may be unconscious, and information such as their medical history, allergies, and blood type may be unavailable. ED staff are trained to work quickly and effectively even with minimal information.
ED staff must also interact efficiently with pre-hospital care providers such as EMTs, paramedics, and others who are occasionally based in an ED. The pre-hospital providers may use equipment unfamiliar to the average physician, but ED physicians must be expert in using (and safely removing) specialized equipment, since devices such as military anti-shock trousers ("MAST") and traction splints require special procedures. The need to master such equipment is among the reasons physicians can now specialize in emergency medicine, and EDs employ many such specialists.
ED staff have much in common with ambulance and fire crews, combat medics, search and rescue teams, and disaster response teams. Often, joint training and practice drills are organized to improve the coordination of this complex response system. Busy EDs exchange a great deal of equipment with ambulance crews, and both must provide for replacing, returning, or reimbursing for costly items.
Cardiac arrest and major trauma are relatively common in EDs, so defibrillators, automatic ventilation and CPR machines, and bleeding control dressings are used heavily. Survival in such cases is greatly enhanced by shortening the wait for key interventions, and in recent years some of this specialized equipment has spread to pre-hospital settings. The best-known example is defibrillators, which spread first to ambulances, then in an automatic version to police cars and fire apparatus, and most recently to public spaces such as airports, office buildings, hotels, and even shopping malls.
Because time is such an essential factor in emergency treatment, EDs typically have their own diagnostic equipment to avoid waiting for equipment installed elsewhere in the hospital. Nearly all have radiographic examination rooms staffed by dedicated radiographers, and many now have full radiology facilities including CT scanners and ultrasonography equipment. Laboratory services may be handled on a priority basis by the hospital lab, or the ED may have its own "STAT Lab" for basic labs (blood counts, blood typing, toxicology screens, etc.) that must be returned very rapidly.
Non-emergency use
Metrics applicable to the ED can be grouped into three main categories: volume, cycle time, and patient satisfaction. Volume metrics, including arrivals per hour, percentage of ED beds occupied, and age of patients, are understood at a basic level at all hospitals as an indication of staffing requirements. Cycle time metrics are the mainstays of the evaluation and tracking of process efficiency; they are less widespread because an active effort is needed to collect and analyze the data. Patient satisfaction metrics, already commonly collected by nursing groups, physician groups, and hospitals, are useful in demonstrating the impact of changes in patient perception of care over time. Since patient satisfaction metrics are derivative and subjective, they are less useful in primary process improvement. Health information exchanges can reduce nonurgent ED visits by supplying current data about admissions, discharges, and transfers to health plans and accountable care organizations, allowing them to shift ED use to primary care settings. A sketch of how cycle-time metrics can be computed appears below.
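As an illustration, the minimal sketch below derives two common cycle-time metrics, median door-to-provider time and median length of stay, from per-visit timestamps. The field names and figures are hypothetical, not a standard schema:

```python
from statistics import median

# Illustrative per-visit timestamps, in minutes from arrival.
visits = [
    {"seen_by_provider": 25, "departed": 180},
    {"seen_by_provider": 40, "departed": 260},
    {"seen_by_provider": 15, "departed": 95},
    {"seen_by_provider": 55, "departed": 310},
]

door_to_provider = median(v["seen_by_provider"] for v in visits)
length_of_stay = median(v["departed"] for v in visits)

print(f"median door-to-provider: {door_to_provider} min")
print(f"median length of stay: {length_of_stay} min")
```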
In all primary care trusts there are out of hours medical consultations provided by general practitioners or nurse practitioners.
In the United States, barriers to accessing care contribute to frequent emergency room use. The National Hospital Ambulatory Medical Care Survey looked at the ten most common symptoms giving rise to emergency room visits (cough, sore throat, back pain, fever, headache, abdominal pain, chest pain, other pain, shortness of breath, vomiting) and made suggestions as to which would be the most cost-effective choice among virtual care, retail clinic, urgent care, or emergency room. Notably, certain complaints may also be addressed by a telephone call to a person's primary care provider. However, subsequent studies have shown that identifying non-emergency visits based on discharge diagnoses is inaccurate, because people commonly present for emergency care for other reasons and are assigned a diagnosis only after testing and evaluation.
In the United States and many other countries, hospitals are beginning to create areas in their emergency rooms for people with minor injuries. These are commonly referred to as Fast Track or Minor Care units, and are for people with non-life-threatening injuries. The use of these units within a department has been shown to significantly improve the flow of patients through a department and to reduce waiting times. Urgent care clinics are another alternative, where patients can go to receive immediate care for non-life-threatening conditions. To reduce the strain on limited ED resources, American Medical Response created a checklist that allows EMTs to identify intoxicated individuals who can be safely sent to detoxification facilities instead.
Overcrowding
Emergency department overcrowding occurs when the function of a department is hindered by an inability to treat all patients in an adequate manner. This is a common occurrence in emergency departments worldwide. Overcrowding causes inadequate patient care, which leads to poorer patient outcomes. To address this problem, escalation policies are used by emergency departments when responding to an increase in demand (e.g., a sudden inflow of patients) or a reduction in capacity (e.g., a lack of beds to admit patients). The policies aim to maintain the ability to deliver patient care, without compromising safety, by modifying "normal" processes.
In recent years, rising ER overcrowding has often led to patients being treated in unsafe conditions. ER overcrowding puts patient care at risk, raises health care costs, and produces waits of several hours for hospital beds that are not available.
Emergency department waiting times
Emergency department (ED) waiting times have a serious impact on patient mortality, on morbidity with readmission in less than 30 days, on length of stay, and on patient satisfaction. For patients with major abdominal injuries, the probability of death increases by about 1% for every 3 minutes of waiting (Journal of Trauma and Acute Care Surgery). Equipment in emergency departments follows the prompt treatment principle, with the fewest possible patient transfers from admittance to X-ray diagnostics. A review of the literature bears out the logical premise that since the outcome of treatment for all diseases and injuries is time-sensitive, the sooner treatment is rendered, the better the outcome. Various studies have reported significant associations between waiting times and higher mortality and morbidity among those who survived. It is clear from the literature that untimely hospital deaths and morbidity can be reduced by reductions in ED waiting times.
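Taken at face value, the cited figure implies a compounding of risk with waiting time. The sketch below illustrates only the arithmetic; treating the 1%-per-3-minutes figure as a multiplicative increase on a hypothetical baseline is an assumption here, since the study's exact model is not given in the text:

```python
# Illustrative arithmetic only: assumes the cited 1% rise in the
# probability of death per 3 minutes compounds multiplicatively
# on a hypothetical baseline risk.
baseline_mortality = 0.10   # hypothetical baseline risk
wait_minutes = 30

increments = wait_minutes // 3          # number of 3-minute intervals waited
risk = baseline_mortality * (1.01 ** increments)
print(f"risk after a {wait_minutes}-minute wait: {risk:.4f}")  # ~0.1105
```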
Exit block
A significant proportion of emergency patients are discharged after treatment, but many require admission for ongoing observation, treatment, or to ensure adequate social care before discharge. If patients requiring admission cannot be placed in inpatient beds swiftly, "exit block" or "access block" occurs. This often leads to crowding and can lead to delays in treatment for newly presenting cases ("arrival access block"). This is more common in densely populated areas and affects adult departments more than pediatric ones. Exit block can lead to delays for the patients awaiting inpatient beds ("boarding") and also for new patients arriving at an exit-blocked department. Proposed solutions include changes in staffing or increasing inpatient capacity.
Frequent users
Frequent emergency service users are individuals who present themselves at a hospital much more often than non-frequent presenters. Many frequent users are homeless individuals seeking shelter and food at the hospital. Federal laws and regulations in the United States, like EMTALA and HIPAA, limit the options of hospital personnel when an individual presents to the ER with a fabricated problem. These individuals do not account for a significant number of visits but typically require a disproportionate amount of hospital resources. To help prevent inappropriate emergency department use and return visits some hospitals offer care coordination and support services such as at-home and in-shelter transitional primary care for frequent users and short-term housing for homeless patients recovering after discharge.
Telemedicine
A study found that telemedicine services in Saudi Arabia were effective in reducing emergency department overload by providing medical advice to patients with less urgent medical issues.
In the military
Emergency departments in the military benefit from the added support of enlisted personnel who are capable of performing a wide variety of tasks for which they have been trained through specialized military schooling. For example, in United States military hospitals, Air Force Aerospace Medical Technicians and Navy Hospital Corpsmen perform tasks that fall under the scope of practice of both doctors (e.g. sutures, staples, and incision and drainage) and nurses (e.g. medication administration, Foley catheter insertion, and obtaining intravenous access), and also perform splinting of injured extremities, nasogastric tube insertion, intubation, wound cauterizing, eye irrigation, and much more. Often, some civilian education or certification, such as EMT certification, is required in case care must be provided outside the base where the member is stationed. The presence of highly trained enlisted personnel in an emergency department drastically reduces the workload on nurses and doctors.
Violence against healthcare workers
According to a survey at an urban inner-city tertiary care center in Vancouver, 57% of health care workers were physically assaulted in 1996. 73% were afraid of patients as a result of violence, 49% hid their identities from patients, and 74% had reduced job satisfaction. Over one-quarter of the respondents took days off because of violence. Of respondents no longer working in the emergency department, 67% reported that they had left the job at least partly owing to violence. Twenty-four-hour security and a workshop on violence prevention strategies were felt to be the most useful potential interventions. Physical exercise, sleep and the company of family and friends were the most frequent coping strategies cited by those surveyed.
Medication errors
Medication errors are issues that lead to incorrect medication distribution or potential for patient harm. As of 2014, around 3% of all hospital-related adverse effects were due to medication errors in the emergency department (ED); between 4% and 14% of medications given to patients in the ED were incorrect and children were particularly at risk.
Errors can arise if the doctor prescribes the wrong medication, if the prescription intended by the doctor is not the one actually communicated to the pharmacy due to an illegibly written prescription or misheard verbal order, if the pharmacy dispenses the wrong medication, or if the medication is then given to the wrong person.
The ED is a riskier environment than other areas of the hospital due to medical practitioners not knowing the patient as well as they know longer term hospital patients, due to time pressure caused by overcrowding, and due to the emergency-driven nature of the medicine that is practiced there.
| Biology and health sciences | General concepts | null |
177174 | https://en.wikipedia.org/wiki/Orion%20Nebula | Orion Nebula | The Orion Nebula (also known as Messier 42, M42, or NGC 1976) is a diffuse nebula situated in the Milky Way, south of Orion's Belt in the constellation of Orion, and is known as the middle "star" in the "sword" of Orion. It is one of the brightest nebulae and is visible to the naked eye in the night sky with an apparent magnitude of 4.0. It is the closest region of massive star formation to Earth. The M42 nebula is estimated to be 25 light-years across (so its apparent size from Earth is approximately 1 degree). It has a mass of about 2,000 times that of the Sun. Older texts frequently refer to the Orion Nebula as the Great Nebula in Orion or the Great Orion Nebula.
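The quoted apparent size follows from the small-angle relation between physical diameter and distance. A minimal check in Python, assuming a distance of roughly 1,300 light-years (a commonly cited figure, used here as an assumption; the distance is not stated in the text above):

```python
import math

diameter_ly = 25.0    # quoted physical extent of M42
distance_ly = 1300.0  # assumed; commonly cited values are about 1,300-1,350 ly

# Small-angle approximation: angle (radians) = size / distance
angle_deg = math.degrees(diameter_ly / distance_ly)
print(f"apparent size: about {angle_deg:.1f} degrees")  # ~1.1 degrees
```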
The Orion Nebula is one of the most scrutinized and photographed objects in the night sky and is among the most intensely studied celestial features. The nebula has revealed much about the process of how stars and planetary systems are formed from collapsing clouds of gas and dust. Astronomers have directly observed protoplanetary disks and brown dwarfs within the nebula, intense and turbulent motions of the gas, and the photo-ionizing effects of massive nearby stars in the nebula.
Physical characteristics
The Orion Nebula is visible with the naked eye even from areas affected by light pollution. It is seen as the middle "star" in the "sword" of Orion, which are the three stars located south of Orion's Belt. The "star" appears fuzzy to sharp-eyed observers, and the nebulosity is obvious through binoculars or a small telescope. The peak surface brightness of the central region of M42 is about 17 mag/arcsec² and the outer bluish glow has a peak surface brightness of 21.3 mag/arcsec².
The Orion Nebula contains a very young open cluster, known as the Trapezium Cluster due to the asterism of its primary four stars within a diameter of 1.5 light years. Two of these can be resolved into their component binary systems on nights with good seeing, giving a total of six stars. The stars of the Trapezium Cluster, along with many other stars, are still in their early years. The Trapezium Cluster is a component of the much larger Orion Nebula cluster, an association of about 2,800 stars within a diameter of 20 light years. The Orion Nebula is in turn surrounded by the much larger Orion molecular cloud complex, which is hundreds of light years across, spanning the whole constellation of Orion. Two million years ago the Orion Nebula cluster may have been the home of the runaway stars AE Aurigae, 53 Arietis, and Mu Columbae, which are currently moving away from the nebula at high speeds.
Coloration
Observers have long noted a distinctive greenish tint to the nebula, in addition to regions of red and of blue-violet. The red hue is a result of the Hα recombination line radiation at a wavelength of 656.3 nm. The blue-violet coloration is the reflected radiation from the massive O-class stars at the core of the nebula.
The green hue was a puzzle for astronomers in the early part of the 20th century because none of the known spectral lines at that time could explain it. There was some speculation that the lines were caused by a new element, and the name nebulium was coined for this mysterious material. With better understanding of atomic physics, however, it was later determined that the green spectrum was caused by a low-probability electron transition in doubly ionized oxygen, a so-called "forbidden transition". This radiation was impossible to reproduce in the laboratory at the time, because it depended on the quiescent and nearly collision-free environment found in the high vacuum of deep space.
History
There has been speculation that the Mayans of Central America may have described the nebula within their "Three Hearthstones" creation myth. If so, the three hearthstones would correspond to Rigel and Saiph, the two stars at the base of Orion, and Alnitak, at the tip of the imagined hunter's belt; these form the vertices of a nearly perfect equilateral triangle, with Orion's Sword (including the Orion Nebula) in the middle. The nebula is seen as the smudge of smoke from copal incense in a modern myth or, in a suggested translation of an ancient one, as the literal or figurative embers of a fiery creation.
Neither Ptolemy's Almagest nor al Sufi's Book of Fixed Stars noted this nebula, even though they both listed patches of nebulosity elsewhere in the night sky; nor did Galileo mention it, even though he made telescopic observations of the region around it in 1610 and 1617. This has led to some speculation that a flare-up of the illuminating stars may have increased the brightness of the nebula.
The first discovery of the diffuse nebulous nature of the Orion Nebula is generally credited to French astronomer Nicolas-Claude Fabri de Peiresc, on November 26, 1610, when he made a record of observing it with a refracting telescope purchased by his patron Guillaume du Vair.
The first published observation of the nebula was by the Jesuit mathematician and astronomer Johann Baptist Cysat of Lucerne in his 1619 monograph on the comets (describing observations of the nebula that may date back to 1611).
He compared it to a bright comet seen in 1618 and described how the nebula appeared through his telescope. His description of the central stars as differing from a comet's head in that they formed a "rectangle" may have been an early description of the Trapezium Cluster. (The first detection of three of the four stars of this cluster is credited to Galileo Galilei on February 4, 1617.)
The nebula was independently "discovered" (though visible to the naked eye) by several other prominent astronomers in the following years, including by Giovanni Battista Hodierna (whose sketch was the first published in De systemate orbis cometici, deque admirandis coeli characteribus). In 1659, Dutch scientist Christiaan Huygens published the first detailed drawing of the central region of the nebula in Systema Saturnium.
Charles Messier observed the nebula on March 4, 1769, and he also noted three of the stars in Trapezium. Messier published the first edition of his catalog of deep sky objects in 1774 (completed in 1771). As the Orion Nebula was the 42nd object in his list, it became identified as M42.
In 1865, English amateur astronomer William Huggins used his visual spectroscopy method to examine the nebula, showing that it, like other nebulae he had examined, was made up of "luminous gas". On September 30, 1880, Henry Draper used the new dry plate photographic process with an 11-inch (28 cm) refracting telescope to make a 51-minute exposure of the Orion Nebula, the first instance of astrophotography of a nebula in history. Another breakthrough in astronomical photography occurred in 1883, when amateur astronomer Andrew Ainslie Common used the dry plate process to record several images in exposures up to 60 minutes with a 36-inch (91 cm) reflecting telescope that he constructed in the backyard of his home in Ealing, west London. These images, for the first time, showed stars and nebula detail too faint to be seen by the human eye.
In 1902, Vogel and Eberhard discovered differing velocities within the nebula, and by 1914 astronomers at Marseilles had used the interferometer to detect rotation and irregular motions. Campbell and Moore confirmed these results using the spectrograph, demonstrating turbulence within the nebula.
In 1931, Robert J. Trumpler noted that the fainter stars near the Trapezium formed a cluster, and he was the first to name them the Trapezium cluster. Based on their magnitudes and spectral types, he derived a distance estimate of 1,800 light years. This was three times farther than the commonly accepted distance estimate of the period but was much closer to the modern value.
In 1993, the Hubble Space Telescope first observed the Orion Nebula. Since then, the nebula has been a frequent target for HST studies. The images have been used to build a detailed model of the nebula in three dimensions. Protoplanetary disks have been observed around most of the newly formed stars in the nebula, and the destructive effects of high levels of ultraviolet energy from the most massive stars have been studied.
In 2005, the Advanced Camera for Surveys instrument of the Hubble Space Telescope finished capturing the most detailed image of the nebula yet taken. The image was taken through 104 orbits of the telescope, capturing over 3,000 stars down to the 23rd magnitude, including infant brown dwarfs and possible brown dwarf binary stars. A year later, scientists working with the HST announced the first-ever measured masses of a pair of eclipsing binary brown dwarfs, 2MASS J05352184–0546085. The pair, located in the Orion Nebula, orbit each other with a period of 9.8 days. Surprisingly, the more massive of the two also turned out to be the less luminous.
In October 2023, astronomers, based on observations of the Orion Nebula with the James Webb Space Telescope, reported the discovery of pairs of rogue planets, similar in mass to the planet Jupiter, and called JuMBOs (short for Jupiter Mass Binary Objects).
Structure
The entirety of the Orion Nebula extends across a 1° region of the sky, and includes neutral clouds of gas and dust, associations of stars, ionized volumes of gas, and reflection nebulae.
The Nebula is part of a much larger nebula that is known as the Orion molecular cloud complex. The Orion molecular cloud complex extends throughout the constellation of Orion and includes Barnard's Loop, the Horsehead Nebula, M43, M78, and the Flame Nebula.
Stars are forming throughout the entire Cloud Complex, but most of the young stars are concentrated in dense clusters like the one illuminating the Orion Nebula.
The current astronomical model for the nebula consists of an ionized (H II) region, roughly centered on Theta1 Orionis C, which lies on the side of an elongated molecular cloud in a cavity formed by the massive young stars.
(Theta1 Orionis C emits 3-4 times as much photoionizing light as the next brightest star, Theta2 Orionis A.) The H II region has a temperature ranging up to 10,000 K, but this temperature falls dramatically near the edge of the nebula. The nebulous emission comes primarily from photoionized gas on the back surface of the cavity.
The H II region is surrounded by an irregular, concave bay of more neutral, high-density cloud, with clumps of neutral gas lying outside the bay area. This in turn lies on the perimeter of the Orion Molecular Cloud. The gas in the molecular cloud displays a range of velocities and turbulence, particularly around the core region. Relative movements are up to 10 km/s (22,000 mi/h), with local variations of up to 50 km/s and possibly more.
Observers have given names to various features in the Orion Nebula. The dark bay that extends from the north into the bright region is known as "Sinus Magnus", also called the "Fish's Mouth". The illuminated regions to both sides are called the "Wings". Other features include "The Sword", "The Thrust", and "The Sail".
Star formation
The Orion Nebula is an example of a stellar nursery where new stars are being born. Observations of the nebula have revealed approximately 700 stars in various stages of formation within the nebula.
In 1979, observations with the Lallemand electronic camera at the Pic-du-Midi Observatory showed six unresolved high-ionization sources near the Trapezium Cluster. These sources were interpreted as partly ionized globules (PIGs), the idea being that these objects were being ionized from the outside by M42. Later observations with the Very Large Array showed solar-system-sized condensations associated with these sources, which suggested that the objects might be low-mass stars surrounded by evaporating protostellar accretion disks. In 1993, observations with the Hubble Space Telescope yielded the major confirmation of protoplanetary disks within the Orion Nebula, which have been dubbed proplyds. HST has revealed more than 150 of these within the nebula, and they are considered to be systems in the earliest stages of solar system formation. The sheer number of them has been used as evidence that the formation of planetary systems is fairly common in the universe.
Stars form when clumps of hydrogen and other gases in an H II region contract under their own gravity. As the gas collapses, the central clump grows denser and the gas heats to extreme temperatures by converting gravitational potential energy to thermal energy. If the temperature gets high enough, nuclear fusion will ignite and form a protostar. The protostar is 'born' when it begins to emit enough radiative energy to balance out its gravity and halt gravitational collapse.
Typically, a cloud of material remains a substantial distance from the star before the fusion reaction ignites. This remnant cloud is the protostar's protoplanetary disk, where planets may form. Recent infrared observations show that dust grains in these protoplanetary disks are growing, beginning on the path towards forming planetesimals.
Once the protostar enters into its main sequence phase, it is classified as a star. Even though most planetary disks can form planets, observations show that intense stellar radiation should have destroyed any proplyds that formed near the Trapezium group, if the group is as old as the low mass stars in the cluster. Since proplyds are found very close to the Trapezium group, it can be argued that those stars are much younger than the rest of the cluster members.
Stellar wind and effects
Once formed, the stars within the nebula emit a stream of charged particles known as a stellar wind. Massive stars and young stars have much stronger stellar winds than the Sun. The wind forms shock waves or hydrodynamical instabilities when it encounters the gas in the nebula, which then shapes the gas clouds. The shock waves from stellar wind also play a large part in stellar formation by compacting the gas clouds, creating density inhomogeneities that lead to gravitational collapse of the cloud.
There are three different kinds of shocks in the Orion Nebula. Many are featured in Herbig–Haro objects:
Bow shocks are stationary and are formed when two particle streams collide with each other. They are present near the hottest stars in the nebula where the stellar wind speed is estimated to be thousands of kilometers per second and in the outer parts of the nebula where the speeds are tens of kilometers per second. Bow shocks can also form at the front end of stellar jets when the jet hits interstellar particles.
Jet-driven shocks are formed from jets of material sprouting off newborn T Tauri stars. These narrow streams are traveling at hundreds of kilometers per second, and become shocks when they encounter relatively stationary gases.
Warped shocks appear bow-like to an observer. They are produced when a jet-driven shock encounters gas moving in a cross-current.
The interaction of the stellar wind with the surrounding cloud also forms "waves" which are believed to be due to the hydrodynamical Kelvin-Helmholtz instability.
The dynamic gas motions in M42 are complex, but are trending out through the opening in the bay and toward the Earth. The large neutral area behind the ionized region is currently contracting under its own gravity.
There are also supersonic "bullets" of gas piercing the hydrogen clouds of the Orion Nebula. Each bullet is ten times the diameter of Pluto's orbit and tipped with iron atoms glowing in the infrared. They were probably formed about a thousand years ago in an unknown violent event.
Evolution
Interstellar clouds like the Orion Nebula are found throughout galaxies such as the Milky Way. They begin as gravitationally bound blobs of cold, neutral hydrogen, intermixed with traces of other elements. The cloud can contain hundreds of thousands of solar masses and extend for hundreds of light years. The tiny force of gravity that could compel the cloud to collapse is counterbalanced by the very faint pressure of the gas in the cloud.
Whether due to collisions with a spiral arm, or through the shock wave emitted from supernovae, the atoms are precipitated into heavier molecules and the result is a molecular cloud. This presages the formation of stars within the cloud, usually thought to be within a period of 10–30 million years, as regions pass the Jeans mass and the destabilized volumes collapse into disks. The disk concentrates at the core to form a star, which may be surrounded by a protoplanetary disk. This is the current stage of evolution of the nebula, with additional stars still forming from the collapsing molecular cloud. The youngest and brightest stars we now see in the Orion Nebula are thought to be less than 300,000 years old, and the brightest may be only 10,000 years in age.
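The Jeans-mass criterion mentioned above can be made concrete with a back-of-the-envelope calculation. Below is a minimal sketch using the standard form of the Jeans mass for an idealized isothermal cloud; the temperature, number density, and mean molecular weight are typical assumed values, not figures from the text:

```python
import math

# Physical constants (SI)
k_B = 1.381e-23    # Boltzmann constant, J/K
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.673e-27    # hydrogen atom mass, kg
M_sun = 1.989e30   # solar mass, kg

def jeans_mass(T, n, mu=2.33):
    """Jeans mass M_J = (5 k_B T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
    for temperature T [K], number density n [m^-3], mean molecular weight mu."""
    rho = n * mu * m_H  # mass density, kg/m^3
    term1 = (5 * k_B * T / (G * mu * m_H)) ** 1.5
    term2 = math.sqrt(3 / (4 * math.pi * rho))
    return term1 * term2

# Typical cold molecular-cloud core: T ~ 10 K, n ~ 1e10 m^-3 (1e4 cm^-3)
print(f"M_J = {jeans_mass(10, 1e10) / M_sun:.1f} solar masses")  # a few solar masses
```

Regions denser or colder than this threshold are unstable and collapse, which is why star formation proceeds clump by clump rather than across the whole cloud at once.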
Some of these collapsing stars can be particularly massive, and can emit large quantities of ionizing ultraviolet radiation. An example of this is seen with the Trapezium cluster. Over time the ultraviolet light from the massive stars at the center of the nebula will push away the surrounding gas and dust in a process called photoevaporation. This process is responsible for creating the interior cavity of the nebula, allowing the stars at the core to be viewed from Earth. The largest of these stars have short life spans and will evolve to become supernovae.
Within about 100,000 years, most of the gas and dust will be ejected. The remains will form a young open cluster, a cluster of bright, young stars surrounded by wispy filaments from the former cloud.
| Physical sciences | Notable nebulae | null |
177215 | https://en.wikipedia.org/wiki/Photomultiplier%20tube | Photomultiplier tube | Photomultiplier tubes (photomultipliers or PMTs for short) are extremely sensitive detectors of light in the ultraviolet, visible, and near-infrared ranges of the electromagnetic spectrum. They are members of the class of vacuum tubes, more specifically vacuum phototubes. These detectors multiply the current produced by incident light by as much as 100 million times, or 10⁸ (i.e., 160 dB), in multiple dynode stages, enabling (for example) individual photons to be detected when the incident flux of light is low.
The combination of high gain, low noise, high frequency response (or, equivalently, ultra-fast response), and large area of collection has earned photomultipliers an essential place in low light level spectroscopy, confocal microscopy, Raman spectroscopy, fluorescence spectroscopy, nuclear and particle physics, astronomy, medical diagnostics including blood tests, medical imaging, motion picture film scanning (telecine), radar jamming, and high-end image scanners known as drum scanners. Elements of photomultiplier technology, when integrated differently, are the basis of night vision devices. Research that analyzes light scattering, such as the study of polymers in solution, often uses a laser and a PMT to collect the scattered light data.
Semiconductor devices, particularly silicon photomultipliers and avalanche photodiodes, are alternatives to classical photomultipliers; however, photomultipliers are uniquely well-suited for applications requiring low-noise, high-sensitivity detection of light that is imperfectly collimated.
Structure and operating principles
Photomultipliers are typically constructed with an evacuated glass housing (using an extremely tight and durable glass-to-metal seal like other vacuum tubes), containing a photocathode, several dynodes, and an anode. Incident photons strike the photocathode material, which is usually a thin vapor-deposited conducting layer on the inside of the entry window of the device. Electrons are ejected from the surface as a consequence of the photoelectric effect. These electrons are directed by the focusing electrode toward the electron multiplier, where electrons are multiplied by the process of secondary emission.
The electron multiplier consists of a number of electrodes called dynodes. Each dynode is held at a more positive potential, by ≈100 volts, than the preceding one. A primary electron leaves the photocathode with the energy of the incoming photon, or about 3 eV for "blue" photons, minus the work function of the photocathode. A small group of primary electrons is created by the arrival of a group of initial photons. (In Fig. 1, the number of primary electrons in the initial group is proportional to the energy of the incident high energy gamma ray.) The primary electrons move toward the first dynode because they are accelerated by the electric field. They each arrive with ≈100 eV kinetic energy imparted by the potential difference. Upon striking the first dynode, more low energy electrons are emitted, and these electrons are in turn accelerated toward the second dynode. The geometry of the dynode chain is such that a cascade occurs with an exponentially-increasing number of electrons being produced at each stage. For example, if at each stage an average of 5 new electrons are produced for each incoming electron, and if there are 12 dynode stages, then at the last stage one expects for each primary electron about 5¹² ≈ 10⁸ electrons. This last stage is called the anode. This large number of electrons reaching the anode results in a sharp current pulse that is easily detectable, for example on an oscilloscope, signaling the arrival of the photon(s) at the photocathode ≈50 nanoseconds earlier.
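The cascade arithmetic in this example is easy to reproduce. A minimal sketch (the per-stage yield of 5 and the 12 stages are the figures from the example above, not properties of any particular tube):

```python
import math

secondary_yield = 5   # average new electrons per incident electron at each dynode
stages = 12           # number of dynode stages

# Multiply the electron count stage by stage through the chain.
electrons = 1
for stage in range(stages):
    electrons *= secondary_yield

print(f"total gain: {electrons:.2e}")  # 5**12, roughly 10^8 as in the text
print(f"current gain in dB: {20 * math.log10(electrons):.0f} dB")
```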
The necessary distribution of voltage along the series of dynodes is created by a voltage divider chain, as illustrated in Fig. 2. In the example, the photocathode is held at a negative high voltage on the order of 1000 V, while the anode is very close to ground potential. The capacitors across the final few dynodes act as local reservoirs of charge to help maintain the voltage on the dynodes while electron avalanches propagate through the tube. Many variations of design are used in practice; the design shown is merely illustrative.
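A minimal sketch of how such a divider distributes potential along the chain follows; the equal resistor ratios and the -1000 V supply are illustrative assumptions, not values taken from Fig. 2:

```python
# Minimal sketch of a resistive voltage-divider chain for a PMT.
# Equal resistors give equal inter-dynode steps; real designs often
# use tapered ratios (values here are illustrative only).
supply = -1000.0      # cathode at -1000 V, anode near ground potential
ratios = [1.0] * 10   # relative resistor values along the chain

total = sum(ratios)
potential = supply
print(f" cathode: {potential:8.1f} V")
for i, r in enumerate(ratios, start=1):
    potential += (-supply) * r / total  # step toward ground at each resistor
    label = "anode" if i == len(ratios) else f"dynode {i}"
    print(f"{label:>8}: {potential:8.1f} V")
```

With equal ratios this prints 100 V steps from the cathode up to an anode near 0 V, matching the ≈100 V inter-dynode differences described earlier.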
There are two common photomultiplier orientations: the head-on or end-on (transmission mode) design, as shown above, where light enters the flat, circular top of the tube and passes through the photocathode, and the side-on design (reflection mode), where light enters at a particular spot on the side of the tube and strikes an opaque photocathode. The side-on design is used, for instance, in the type 931, the first mass-produced PMT. Besides the different photocathode materials, performance is also affected by the transmission of the window material that the light passes through and by the arrangement of the dynodes. Many photomultiplier models are available with various combinations of these and other design variables. The manufacturers' manuals provide the information needed to choose an appropriate design for a particular application.
History
The invention of the photomultiplier is predicated upon two prior achievements, the separate discoveries of the photoelectric effect and of secondary emission.
Photoelectric effect
The first demonstration of the photoelectric effect was carried out in 1887 by Heinrich Hertz using ultraviolet light. Significant for practical applications, Elster and Geitel two years later demonstrated the same effect using visible light striking alkali metals (potassium and sodium). The addition of caesium, another alkali metal, permitted the range of sensitive wavelengths to be extended towards longer wavelengths in the red portion of the visible spectrum.
Historically, the photoelectric effect is associated with Albert Einstein, who relied upon the phenomenon to establish the fundamental principle of quantum mechanics in 1905, an accomplishment for which Einstein received the 1921 Nobel Prize. It is worthwhile to note that Heinrich Hertz, working 18 years earlier, had not recognized that the kinetic energy of the emitted electrons is proportional to the frequency but independent of the optical intensity. This fact implied a discrete nature of light, i.e. the existence of quanta, for the first time.
Secondary emission
The phenomenon of secondary emission (the ability of electrons in a vacuum tube to cause the emission of additional electrons by striking an electrode) was, at first, limited to purely electronic phenomena and devices (which lacked photosensitivity). In 1899 the effect was first reported by Villard. In 1902, Austin and Starke reported that the metal surfaces impacted by electron beams emitted a larger number of electrons than were incident. The application of the newly discovered secondary emission to the amplification of signals was only proposed after World War I by Westinghouse scientist Joseph Slepian in a 1919 patent.
The race towards a practical electronic television camera
The ingredients for inventing the photomultiplier were coming together during the 1920s as the pace of vacuum tube technology accelerated. The primary goal for many, if not most, workers was the need for a practical television camera technology. Television had been pursued with primitive prototypes for decades prior to the 1934 introduction of the first practical video camera (the iconoscope). Early prototype television cameras lacked sensitivity. Photomultiplier technology was pursued to enable television camera tubes, such as the iconoscope and (later) the orthicon, to be sensitive enough to be practical. So the stage was set to combine the dual phenomena of photoemission (i.e., the photoelectric effect) with secondary emission, both of which had already been studied and adequately understood, to create a practical photomultiplier.
First photomultiplier, single-stage (early 1934)
The first documented photomultiplier demonstration dates to the early 1934 accomplishments of an RCA group based in Harrison, NJ. Harley Iams and Bernard Salzberg were the first to integrate a photoelectric-effect cathode and single secondary emission amplification stage in a single vacuum envelope and the first to characterize its performance as a photomultiplier with electron amplification gain. These accomplishments were finalized prior to June 1934 as detailed in the manuscript submitted to Proceedings of the Institute of Radio Engineers (Proc. IRE). The device consisted of a semi-cylindrical photocathode, a secondary emitter mounted on the axis, and a collector grid surrounding the secondary emitter. The tube had a gain of about eight and operated at frequencies well above 10 kHz.
Magnetic photomultipliers (mid 1934–1937)
Higher gains were sought than those available from the early single-stage photomultipliers. However, it is an empirical fact that the yield of secondary electrons is limited in any given secondary emission process, regardless of acceleration voltage. Thus, any single-stage photomultiplier is limited in gain. At the time the maximum first-stage gain that could be achieved was approximately 10 (very significant developments in the 1960s permitted gains above 25 to be reached using negative electron affinity dynodes). For this reason, multiple-stage photomultipliers, in which the photoelectron yield could be multiplied successively in several stages, were an important goal. The challenge was to cause the photoelectrons to impinge on successively higher-voltage electrodes rather than to travel directly to the highest voltage electrode. Initially this challenge was overcome by using strong magnetic fields to bend the electrons' trajectories. Such a scheme had earlier been conceived by inventor J. Slepian by 1919 (see above). Accordingly, leading international research organizations turned their attention towards improving photomultipliers to achieve higher gain with multiple stages.
In the USSR, RCA-manufactured radio equipment was introduced on a large scale by Joseph Stalin to construct broadcast networks, and the newly formed All-Union Scientific Research Institute for Television was gearing up a research program in vacuum tubes that was advanced for its time and place. Numerous visits were made by RCA scientific personnel to the USSR in the 1930s, prior to the Cold War, to instruct the Soviet customers on the capabilities of RCA equipment and to investigate customer needs. During one of these visits, in September 1934, RCA's Vladimir Zworykin was shown the first multiple-dynode photomultiplier, or photoelectron multiplier. This pioneering device had been proposed by Leonid A. Kubetsky in 1930, and he subsequently built it in 1934. The device achieved gains of 1000x or more when demonstrated in June 1934. The work was submitted for print publication only two years later, in July 1936, as emphasized in a 2006 publication of the Russian Academy of Sciences (RAS), which terms it "Kubetsky's Tube". The Soviet device used a magnetic field to confine the secondary electrons and relied on the Ag-O-Cs photocathode which had been demonstrated by General Electric in the 1920s.
By October 1935, Vladimir Zworykin, George Ashmun Morton, and Louis Malter of RCA in Camden, NJ submitted their manuscript describing the first comprehensive experimental and theoretical analysis of a multiple dynode tube — the device later called a photomultiplier — to Proc. IRE. The RCA prototype photomultipliers also used an Ag-O-Cs (silver oxide-caesium) photocathode. They exhibited a peak quantum efficiency of 0.4% at 800 nm.
Electrostatic photomultipliers (1937–present)
Whereas these early photomultipliers used the magnetic field principle, electrostatic photomultipliers (with no magnetic field) were demonstrated by Jan Rajchman of RCA Laboratories in Princeton, NJ in the late 1930s and became the standard for all future commercial photomultipliers. The first mass-produced photomultiplier, the Type 931, was of this design and is still commercially produced today.
Improved photocathodes
Also in 1936, a much improved photocathode, Cs3Sb (caesium-antimony), was reported by P. Görlich. The caesium-antimony photocathode had a dramatically improved quantum efficiency of 12% at 400 nm, and was used in the first commercially successful photomultipliers manufactured by RCA (i.e., the 931-type) both as a photocathode and as a secondary-emitting material for the dynodes. Different photocathodes provided differing spectral responses.
Spectral response of photocathodes
In the early 1940s, the JEDEC (Joint Electron Device Engineering Council), an industry committee on standardization, developed a system of designating spectral responses. The philosophy included the idea that the product's user need only be concerned about the response of the device rather than how the device may be fabricated. Various combinations of photocathode and window materials were assigned "S-numbers" (spectral numbers) ranging from S-1 through S-40, which are still in use today. For example, S-11 uses the caesium-antimony photocathode with a lime glass window, S-13 uses the same photocathode with a fused silica window, and S-25 uses a so-called "multialkali" photocathode (Na-K-Sb-Cs, or sodium-potassium-antimony-caesium) that provides extended response in the red portion of the visible light spectrum. No suitable photoemissive surfaces have yet been reported to detect wavelengths longer than approximately 1700 nanometers, which can be approached by a special (InP/InGaAs(Cs)) photocathode.
RCA Corporation
For decades, RCA was responsible for performing the most important work in developing and refining photomultipliers. RCA was also largely responsible for the commercialization of photomultipliers. The company compiled and published an authoritative and widely used Photomultiplier Handbook. RCA provided printed copies free upon request. The handbook, which continues to be made available online at no cost by the successors to RCA, is considered to be an essential reference.
Following a corporate break-up in the late 1980s involving the acquisition of RCA by General Electric and disposition of the divisions of RCA to numerous third parties, RCA's photomultiplier business became an independent company.
Lancaster, Pennsylvania facility
The Lancaster, Pennsylvania facility was opened by the U.S. Navy in 1942 and operated by RCA for the manufacture of radio and microwave tubes. Following World War II, the naval facility was acquired by RCA. RCA Lancaster, as it became known, was the base for the development and the production of commercial television products. In subsequent years other products were added, such as cathode-ray tubes, photomultiplier tubes, motion-sensing light control switches, and closed-circuit television systems.
Burle Industries
Burle Industries, as a successor to the RCA Corporation, carried the RCA photomultiplier business forward after 1986, based in the Lancaster, Pennsylvania facility. The 1986 acquisition of RCA by General Electric resulted in the divestiture of the RCA Lancaster New Products Division. Hence, 45 years after being founded by the U.S. Navy, its management team, led by Erich Burlefinger, purchased the division and in 1987 founded Burle Industries.
In 2005, after eighteen years as an independent enterprise, Burle Industries and a key subsidiary were acquired by Photonis, a European holding company Photonis Group. Following the acquisition, Photonis was composed of Photonis Netherlands, Photonis France, Photonis USA, and Burle Industries. Photonis USA operates the former Galileo Corporation Scientific Detector Products Group (Sturbridge, Massachusetts), which had been purchased by Burle Industries in 1999. The group is known for microchannel plate detector (MCP) electron multipliers—an integrated micro-vacuum tube version of photomultipliers. MCPs are used for imaging and scientific applications, including night vision devices.
On 9 March 2009, Photonis announced that it would cease all production of photomultipliers at both the Lancaster, Pennsylvania and the Brive, France plants.
Hamamatsu
The Japan-based company Hamamatsu Photonics (also known as Hamamatsu) has emerged since the 1950s as a leader in the photomultiplier industry. Hamamatsu, in the tradition of RCA, has published its own handbook, which is available without cost on the company's website. Hamamatsu uses different designations for particular photocathode formulations and introduces modifications to these designations based on Hamamatsu's proprietary research and development.
Photocathode materials
The photocathodes can be made of a variety of materials, with different properties. Typically the materials have a low work function and are therefore prone to thermionic emission, causing noise and dark current, especially the materials sensitive in the infrared; cooling the photocathode lowers this thermal noise. The most common photocathode materials are:
Ag-O-Cs (also called S1) – transmission-mode, sensitive from 300–1200 nm; high dark current, so used mainly in the near-infrared with the photocathode cooled
GaAs:Cs – caesium-activated gallium arsenide; flat response from 300 to 850 nm, fading towards the ultraviolet and towards 930 nm
InGaAs:Cs – caesium-activated indium gallium arsenide; higher infrared sensitivity than GaAs:Cs and, between 900–1000 nm, a much higher signal-to-noise ratio than Ag-O-Cs
Sb-Cs (also called S11) – caesium-activated antimony, used for reflective-mode photocathodes; response range from ultraviolet to visible; widely used
Bialkali (Sb-K-Cs, Sb-Rb-Cs) – caesium-activated antimony-rubidium or antimony-potassium alloy; similar to Sb-Cs, with higher sensitivity and lower noise; can be used in transmission mode; a favorable response to NaI:Tl scintillator flashes makes them widely used in gamma spectroscopy and radiation detection
High-temperature bialkali (Na-K-Sb) – can operate up to 175 °C, used in well logging; low dark current at room temperature
Multialkali (Na-K-Sb-Cs) (also called S20) – wide spectral response from ultraviolet to near-infrared; special cathode processing can extend the range to 930 nm; used in broadband spectrophotometers
Solar-blind (Cs-Te, Cs-I) – sensitive to vacuum-UV and ultraviolet, insensitive to visible light and infrared (Cs-Te has a cutoff at 320 nm, Cs-I at 200 nm)
Window materials
The windows of the photomultipliers act as wavelength filters; this may be irrelevant if the cutoff wavelengths are outside the application range or outside the photocathode sensitivity range, but special care has to be taken for uncommon wavelengths.
Borosilicate glass: commonly used from the near-infrared down to about 300 nm. High-borate borosilicate glasses also exist in high-UV-transmission versions with high transmission at 254 nm. Glass with a very low content of potassium can be used with bialkali photocathodes to lower the background radiation from the potassium-40 isotope.
Ultraviolet glass: transmits visible and ultraviolet light down to 185 nm; used in spectroscopy.
Synthetic silica: transmits down to 160 nm and absorbs less UV than fused silica. Its thermal expansion differs from that of Kovar (and from borosilicate glass that is expansion-matched to Kovar), so a graded seal is needed between the window and the rest of the tube; the seal is vulnerable to mechanical shocks.
Magnesium fluoride: transmits ultraviolet down to 115 nm; hygroscopic, though less so than the alkali halides usable for UV windows.
Usage considerations
Photomultiplier tubes typically utilize 1000 to 2000 volts to accelerate electrons within the chain of dynodes. (See Figure near top of article.) The most negative voltage is connected to the cathode, and the most positive voltage is connected to the anode. Negative high-voltage supplies (with the positive terminal grounded) are often preferred, because this configuration enables the photocurrent to be measured at the low voltage side of the circuit for amplification by subsequent electronic circuits operating at low voltage. However, with the photocathode at high voltage, leakage currents sometimes result in unwanted "dark current" pulses that may affect the operation. Voltages are distributed to the dynodes by a resistive voltage divider, although variations such as active designs (with transistors or diodes) are possible. The divider design, which influences frequency response or rise time, can be selected to suit varying applications. Some instruments that use photomultipliers have provisions to vary the anode voltage to control the gain of the system.
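To make the divider arithmetic concrete, here is a minimal Python sketch of the potentials along an equal-resistor divider and the resulting overall gain. The stage count, supply voltage, and per-dynode secondary-emission factor are illustrative assumptions, not values for any particular tube:

```python
# Sketch: potentials along an N-stage resistive divider and the resulting gain.
# Assumptions (not from the article): equal divider resistors and a
# secondary-emission factor DELTA = 4 per dynode; real tubes often use
# tapered dividers and tube-specific emission factors.

N_DYNODES = 10      # number of dynodes in the chain (assumed)
V_SUPPLY = -1500.0  # negative high voltage: cathode at -1500 V, anode near 0 V
DELTA = 4.0         # assumed secondary electrons emitted per incident electron

# Equal resistors split the supply into N_DYNODES + 1 equal steps
# (cathode -> dynode 1 -> ... -> dynode N -> anode).
step = abs(V_SUPPLY) / (N_DYNODES + 1)
potentials = [V_SUPPLY + step * (i + 1) for i in range(N_DYNODES)]

gain = DELTA ** N_DYNODES  # overall multiplication, ~1e6 for these numbers
print([round(v, 1) for v in potentials])
print(f"gain ≈ {gain:.2e}")
```

Because the gain scales as DELTA to the power of the stage count, even a small change in the supply voltage (which changes DELTA) shifts the gain strongly, which is why instruments vary the anode voltage to control system gain.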
While powered (energized), photomultipliers must be shielded from ambient light to prevent their destruction through overexcitation. In some applications this protection is accomplished mechanically by electrical interlocks or shutters that protect the tube when the photomultiplier compartment is opened. Another option is to add overcurrent protection in the external circuit, so that when the measured anode current exceeds a safe limit, the high voltage is reduced.
Strong magnetic fields can curve electron paths, steering the electrons away from the dynodes and causing loss of gain, so photomultipliers used in such locations are usually shielded by a layer of soft iron or mu-metal. This magnetic shield is often maintained at cathode potential; when this is the case, the external shield must also be electrically insulated because of the high voltage on it. Photomultipliers with large distances between the photocathode and the first dynode are especially sensitive to magnetic fields.
Applications
Photomultipliers were the first electric eye devices, being used to measure interruptions in beams of light. Photomultipliers are used in conjunction with scintillators to detect ionizing radiation by means of hand-held and fixed radiation protection instruments, and particle radiation in physics experiments. Photomultipliers are used in research laboratories to measure the intensity and spectrum of light-emitting materials such as compound semiconductors and quantum dots. Photomultipliers are used as the detector in many spectrophotometers. This allows an instrument design that escapes the thermal noise limit on sensitivity, and which can therefore substantially increase the dynamic range of the instrument.
Photomultipliers are used in numerous medical equipment designs. For example, blood analysis devices used by clinical medical laboratories, such as flow cytometers, utilize photomultipliers to determine the relative concentration of various components in blood samples, in combination with optical filters and incandescent lamps. An array of photomultipliers is used in a gamma camera. Photomultipliers are typically used as the detectors in flying-spot scanners.
High-sensitivity applications
After 50 years, during which solid-state electronic components have largely displaced the vacuum tube, the photomultiplier remains a unique and important optoelectronic component. Perhaps its most useful quality is that it acts, electronically, as a nearly perfect current source, owing to the high voltage utilized in extracting the tiny currents associated with weak light signals. There is no Johnson noise associated with photomultiplier signal currents, even though they are greatly amplified, e.g., by 100 thousand times (i.e., 100 dB) or more. The photocurrent still contains shot noise.
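As a rough illustration of why the gain matters, the following sketch compares the shot noise in an amplified photocurrent with the Johnson noise of a room-temperature load resistor. The photocurrent, gain, bandwidth, and load values are assumptions chosen only for illustration:

```python
import math

# Sketch: amplified shot noise versus the Johnson noise of a load resistor.
# All operating values below are illustrative assumptions.
q = 1.602e-19    # electron charge, C
k = 1.381e-23    # Boltzmann constant, J/K

i_photo = 1e-12  # assumed cathode photocurrent, A (weak light)
gain = 1e5       # assumed photomultiplier gain
bandwidth = 1e6  # assumed measurement bandwidth, Hz
R_load = 50.0    # assumed load resistance, ohm
T = 300.0        # room temperature, K

i_anode = i_photo * gain
# Shot noise at the cathode, lifted by the (idealized, noiseless) gain:
i_shot = gain * math.sqrt(2 * q * i_photo * bandwidth)
# Johnson (thermal) noise current of the load resistor:
i_johnson = math.sqrt(4 * k * T * bandwidth / R_load)

print(f"anode signal  : {i_anode:.2e} A")
print(f"shot noise    : {i_shot:.2e} A")
print(f"Johnson noise : {i_johnson:.2e} A")
```

With these numbers the amplified signal and its shot noise both sit above the Johnson floor of the load, which is the sense in which the gain lets the measurement escape thermal noise.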
Photomultiplier-amplified photocurrents can be electronically amplified by a high-input-impedance electronic amplifier (in the signal path subsequent to the photomultiplier), thus producing appreciable voltages even for nearly infinitesimally small photon fluxes. Photomultipliers offer the best possible opportunity to exceed the Johnson noise for many configurations. The aforementioned refers to measurement of light fluxes that, while small, nonetheless amount to a continuous stream of multiple photons.
For smaller photon fluxes, the photomultiplier can be operated in photon-counting, or Geiger, mode (see also Single-photon avalanche diode). In Geiger mode the photomultiplier gain is set so high (using high voltage) that a single photo-electron resulting from a single photon incident on the primary surface generates a very large current at the output circuit. However, owing to the avalanche of current, a reset of the photomultiplier is required. In either case, the photomultiplier can detect individual photons. The drawback, however, is that not every photon incident on the primary surface is counted either because of less-than-perfect efficiency of the photomultiplier, or because a second photon can arrive at the photomultiplier during the "dead time" associated with a first photon and never be noticed.
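The dead-time loss mentioned above can be estimated with a standard correction; the sketch below uses the common non-paralyzable dead-time model with an assumed dead time and count rate, both illustrative rather than properties of any particular tube:

```python
# Sketch: dead-time correction in photon counting, using the standard
# non-paralyzable model. The dead time and count rate are assumptions.
tau = 20e-9          # assumed dead time per detected photon, s
measured_rate = 5e6  # assumed measured counts per second

# Non-paralyzable model: true_rate = measured / (1 - measured * tau)
true_rate = measured_rate / (1.0 - measured_rate * tau)
lost_fraction = 1.0 - measured_rate / true_rate

print(f"estimated true rate: {true_rate:.3e} counts/s")
print(f"fraction of photons missed: {lost_fraction:.1%}")
```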
A photomultiplier will produce a small current even without incident photons; this is called the dark current. Photon-counting applications generally demand photomultipliers designed to minimise dark current.
Nonetheless, the ability to detect single photons striking the primary photosensitive surface reveals the quantization principle that Einstein put forth: photon counting, as it is called, shows that light is not only a wave but also consists of discrete particles (photons).
Temperature range
At cryogenic temperatures, photomultipliers show an increase in burst-like electron emission as the temperature is lowered. The phenomenon is not yet explained by any physical theory.
| Technology | Components | null |
177320 | https://en.wikipedia.org/wiki/Spectral%20line | Spectral line | A spectral line is a weaker or stronger region in an otherwise uniform and continuous spectrum. It may result from emission or absorption of light in a narrow frequency range, compared with the nearby frequencies. Spectral lines are often used to identify atoms and molecules. These "fingerprints" can be compared to the previously collected ones of atoms and molecules, and are thus used to identify the atomic and molecular components of stars and planets, which would otherwise be impossible.
Types of line spectra
Spectral lines are the result of interaction between a quantum system (usually atoms, but sometimes molecules or atomic nuclei) and a single photon. When a photon has about the right amount of energy (which is connected to its frequency) to allow a change in the energy state of the system (in the case of an atom this is usually an electron changing orbitals), the photon is absorbed. Then the energy will be spontaneously re-emitted, either as one photon at the same frequency as the original one or in a cascade, where the sum of the energies of the photons emitted will be equal to the energy of the one absorbed (assuming the system returns to its original state).
A spectral line may be observed either as an emission line or an absorption line. Which type of line is observed depends on the type of material and its temperature relative to another emission source. An absorption line is produced when photons from a hot, broad spectrum source pass through a cooler material. The intensity of light, over a narrow frequency range, is reduced due to absorption by the material and re-emission in random directions. By contrast, a bright emission line is produced when photons from a hot material are detected, perhaps in the presence of a broad spectrum from a cooler source. The intensity of light, over a narrow frequency range, is increased due to emission by the hot material.
Spectral lines are highly atom-specific, and can be used to identify the chemical composition of any medium. Several elements, including helium, thallium, and caesium, were discovered by spectroscopic means. Spectral lines also depend on the temperature and density of the material, so they are widely used to determine the physical conditions of stars and other celestial bodies that cannot be analyzed by other means.
Depending on the material and its physical conditions, the energy of the involved photons can vary widely, with the spectral lines observed across the electromagnetic spectrum, from radio waves to gamma rays.
Nomenclature
Strong spectral lines in the visible part of the electromagnetic spectrum often have a unique Fraunhofer line designation, such as K for a line at 393.366 nm emerging from singly ionized calcium, Ca+, though some of the Fraunhofer "lines" are blends of multiple lines from several different species.
In other cases, the lines are designated according to the level of ionization by adding a Roman numeral to the designation of the chemical element. Neutral atoms are denoted with the Roman numeral I, singly ionized atoms with II, and so on (a small conversion sketch follows the examples below), so that, for example:
Cu II – copper ion with +1 charge, Cu^+
Fe III – iron ion with +2 charge, Fe^2+
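A small, hypothetical helper showing the convention; the function and its name are illustrative, not a standard library routine:

```python
# Sketch: converting spectroscopic ionization notation ("Fe III") into an
# ionic charge. Roman numeral N denotes a charge of N - 1.
ROMAN = {"I": 1, "II": 2, "III": 3, "IV": 4, "V": 5}

def ion_charge(designation: str) -> str:
    """'Fe III' -> 'Fe^2+'; 'H I' -> 'H' (neutral)."""
    element, numeral = designation.split()
    charge = ROMAN[numeral] - 1
    return element if charge == 0 else f"{element}^{charge}+"

print(ion_charge("Cu II"))   # Cu^1+
print(ion_charge("Fe III"))  # Fe^2+
print(ion_charge("H I"))     # H (neutral)
```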
More detailed designations usually include the line wavelength and may include a multiplet number (for atomic lines) or band designation (for molecular lines). Many spectral lines of atomic hydrogen also have designations within their respective series, such as the Lyman series or Balmer series. Originally all spectral lines were classified into series: the principal series, sharp series, and diffuse series. These series exist across atoms of all elements, and the patterns for all atoms are well-predicted by the Rydberg-Ritz formula. These series were later associated with suborbitals.
Line broadening and shift
There are a number of effects which control spectral line shape. A spectral line extends over a tiny spectral band with a nonzero range of frequencies, not a single frequency (i.e., a nonzero spectral width). In addition, its center may be shifted from its nominal central wavelength. There are several reasons for this broadening and shift. These reasons may be divided into two general categories – broadening due to local conditions and broadening due to extended conditions. Broadening due to local conditions is due to effects which hold in a small region around the emitting element, usually small enough to assure local thermodynamic equilibrium. Broadening due to extended conditions may result from changes to the spectral distribution of the radiation as it traverses its path to the observer. It also may result from the combining of radiation from a number of regions which are far from each other.
Broadening due to local effects
Natural broadening
The lifetime of excited states results in natural broadening, also known as lifetime broadening. The uncertainty principle relates the lifetime of an excited state (due to spontaneous radiative decay or the Auger process) with the uncertainty of its energy.
Some authors use the term "radiative broadening" to refer specifically to the part of natural broadening caused by the spontaneous radiative decay.
A short lifetime will have a large energy uncertainty and a broad emission. This broadening effect results in an unshifted Lorentzian profile. The natural broadening can be experimentally altered only to the extent that decay rates can be artificially suppressed or enhanced.
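A short sketch of the lifetime-linewidth relation: for a Lorentzian line, the full width at half maximum in frequency is Δν = 1/(2πτ). The 16 ns lifetime below is an assumed example value:

```python
import math

# Sketch: natural (lifetime) linewidth from the uncertainty relation.
# The excited-state lifetime is an assumed illustrative value.
hbar = 1.0546e-34  # reduced Planck constant, J*s
tau = 16e-9        # assumed excited-state lifetime, s

delta_E = hbar / tau                   # energy uncertainty, J
delta_nu = 1.0 / (2 * math.pi * tau)   # Lorentzian FWHM in frequency, Hz

print(f"energy width : {delta_E / 1.602e-19:.2e} eV")  # ~4e-8 eV
print(f"FWHM         : {delta_nu:.2e} Hz")             # ~1e7 Hz
```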
Thermal Doppler broadening
The atoms in a gas which are emitting radiation will have a distribution of velocities. Each photon emitted will be "red"- or "blue"-shifted by the Doppler effect depending on the velocity of the atom relative to the observer. The higher the temperature of the gas, the wider the distribution of velocities in the gas. Since the spectral line is a combination of all of the emitted radiation, the higher the temperature of the gas, the broader the spectral line emitted from that gas. This broadening effect is described by a Gaussian profile and there is no associated shift.
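A minimal sketch of the Doppler width, using the standard Gaussian FWHM formula Δν = ν0 √(8 ln 2 · kT / mc²); the gas (atomic hydrogen at 6000 K) and the 656.3 nm line are assumptions chosen for illustration:

```python
import math

# Sketch: thermal Doppler FWHM of a spectral line. The temperature and the
# choice of line are illustrative assumptions.
k = 1.381e-23    # Boltzmann constant, J/K
c = 2.998e8      # speed of light, m/s
m_H = 1.674e-27  # mass of a hydrogen atom, kg

T = 6000.0    # assumed gas temperature, K
nu0 = 4.57e14 # rest frequency of a 656.3 nm line, Hz

# Gaussian FWHM: delta_nu = nu0 * sqrt(8 * ln2 * k * T / (m * c^2))
delta_nu = nu0 * math.sqrt(8 * math.log(2) * k * T / (m_H * c**2))
print(f"Doppler FWHM ≈ {delta_nu:.3e} Hz "
      f"({delta_nu / nu0 * 656.3:.3f} nm at 656.3 nm)")
```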
Pressure broadening
The presence of nearby particles will affect the radiation emitted by an individual particle. There are two limiting cases by which this occurs:
Impact pressure broadening or collisional broadening: The collision of other particles with the light emitting particle interrupts the emission process, and by shortening the characteristic time for the process, increases the uncertainty in the energy emitted (as occurs in natural broadening). The duration of the collision is much shorter than the lifetime of the emission process. This effect depends on both the density and the temperature of the gas. The broadening effect is described by a Lorentzian profile and there may be an associated shift.
Quasistatic pressure broadening: The presence of other particles shifts the energy levels in the emitting particle (see spectral band), thereby altering the frequency of the emitted radiation. The duration of the influence is much longer than the lifetime of the emission process. This effect depends on the density of the gas, but is rather insensitive to temperature. The form of the line profile is determined by the functional form of the perturbing force with respect to distance from the perturbing particle. There may also be a shift in the line center. The general expression for the lineshape resulting from quasistatic pressure broadening is a 4-parameter generalization of the Gaussian distribution known as a stable distribution.
Pressure broadening may also be classified by the nature of the perturbing force as follows:
Linear Stark broadening occurs via the linear Stark effect, which results from the interaction of an emitter with the electric field of a charged particle at a distance r, causing a shift in energy that is linear in the field strength.
Resonance broadening occurs when the perturbing particle is of the same type as the emitting particle, which introduces the possibility of an energy exchange process.
Quadratic Stark broadening occurs via the quadratic Stark effect, which results from the interaction of an emitter with an electric field, causing a shift in energy that is quadratic in the field strength.
Van der Waals broadening occurs when the emitting particle is being perturbed by Van der Waals forces. For the quasistatic case, a Van der Waals profile is often useful in describing the profile. The energy shift as a function of distance between the interacting particles is given in the wings by e.g. the Lennard-Jones potential.
Inhomogeneous broadening
Inhomogeneous broadening is a general term for broadening that occurs because some emitting particles are in a different local environment from others, and therefore emit at a different frequency. This term is used especially for solids, where surfaces, grain boundaries, and stoichiometry variations can create a variety of local environments for a given atom to occupy. In liquids, the effects of inhomogeneous broadening are sometimes reduced by a process called motional narrowing.
Broadening due to non-local effects
Certain types of broadening are the result of conditions over a large region of space rather than simply upon conditions that are local to the emitting particle.
Opacity broadening
Opacity broadening is an example of a non-local broadening mechanism. Electromagnetic radiation emitted at a particular point in space can be reabsorbed as it travels through space. This absorption depends on wavelength. The line is broadened because the photons at the line center have a greater reabsorption probability than the photons at the line wings. Indeed, the reabsorption near the line center may be so great as to cause a self reversal in which the intensity at the center of the line is less than in the wings. This process is also sometimes called self-absorption.
Macroscopic Doppler broadening
Radiation emitted by a moving source is subject to Doppler shift due to a finite line-of-sight velocity projection. If different parts of the emitting body have different velocities (along the line of sight), the resulting line will be broadened, with the line width proportional to the width of the velocity distribution. For example, radiation emitted from a distant rotating body, such as a star, will be broadened due to the line-of-sight variations in velocity on opposite sides of the star (this effect is usually referred to as rotational broadening). The greater the rate of rotation, the broader the line. Another example is an imploding plasma shell in a Z-pinch.
Combined effects
Each of these mechanisms can act in isolation or in combination with others. Assuming each effect is independent, the observed line profile is a convolution of the line profiles of each mechanism. For example, a combination of the thermal Doppler broadening and the impact pressure broadening yields a Voigt profile.
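As a concrete illustration, the sketch below evaluates a Voigt profile with SciPy's voigt_profile, which implements exactly this Gaussian–Lorentzian convolution; the width parameters are arbitrary illustrative values:

```python
import numpy as np
from scipy.special import voigt_profile  # available in SciPy >= 1.4

# Sketch: a Voigt profile, the convolution of a Gaussian (thermal Doppler)
# core with a Lorentzian (pressure/natural) component.
# sigma and gamma are arbitrary illustrative widths.
sigma = 1.0  # Gaussian standard deviation
gamma = 0.5  # Lorentzian half-width at half-maximum

x = np.linspace(-10, 10, 2001)
v = voigt_profile(x, sigma, gamma)

# Like its Gaussian and Lorentzian ingredients, the profile is normalized.
dx = x[1] - x[0]
print(f"area ≈ {v.sum() * dx:.4f}")            # ~1 (tails truncated at |x| = 10)
print(f"peak  = {v[len(x) // 2]:.4f} at x = 0")
```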
However, the different line broadening mechanisms are not always independent. For example, the collisional effects and the motional Doppler shifts can act in a coherent manner, resulting under some conditions even in a collisional narrowing, known as the Dicke effect.
Spectral lines of chemical elements
Bands
The phrase "spectral lines", when not qualified, usually refers to lines having wavelengths in the visible band of the full electromagnetic spectrum. Many spectral lines occur at wavelengths outside this range. At shorter wavelengths, which correspond to higher energies, ultraviolet spectral lines include the Lyman series of hydrogen. At the much shorter wavelengths of X-rays, the lines are known as characteristic X-rays because they remain largely unchanged for a given chemical element, independent of their chemical environment. Longer wavelengths correspond to lower energies, where the infrared spectral lines include the Paschen series of hydrogen. At even longer wavelengths, the radio spectrum includes the 21-cm line used to detect neutral hydrogen throughout the cosmos.
| Physical sciences | Atomic physics | Physics |
177434 | https://en.wikipedia.org/wiki/Virgo%20Supercluster | Virgo Supercluster | The Local Supercluster (LSC or LS), or Virgo Supercluster, is a formerly defined supercluster containing the Virgo Cluster and Local Group, which itself contains the Milky Way and Andromeda galaxies, as well as others. At least 100 galaxy groups and clusters are located within its diameter of 33 megaparsecs (110 million light-years). The Virgo Supercluster is one of about 10 million superclusters in the observable universe and is in the Pisces–Cetus Supercluster Complex, a galaxy filament.
A 2014 study indicates that the Local Supercluster is only a part of an even greater supercluster, Laniakea, a larger group centered on the Great Attractor, thus subsuming the former Virgo Supercluster under Laniakea.
Background
Beginning with the first large sample of nebulae published by William and John Herschel in 1863, it was known that there is a marked excess of nebular fields in the constellation Virgo, near the north galactic pole. In the 1950s, French–American astronomer Gérard de Vaucouleurs was the first to argue that this excess represented a large-scale galaxy-like structure, coining the term "Local Supergalaxy" in 1953, which he changed to "Local Supercluster" (LSC) in 1958. Harlow Shapley, in his 1959 book Of Stars and Men, suggested the term Metagalaxy.
Debate went on during the 1960s and 1970s as to whether the Local Supercluster (LS) was actually a structure or a chance alignment of galaxies.
The issue was resolved with the large redshift surveys of the late 1970s and early 1980s, which convincingly showed the flattened concentration of galaxies along the supergalactic plane.
Structure
In 1982, R. Brent Tully presented the conclusions of his research concerning the basic structure of the LS. It consists of two components: an appreciably flattened disk containing two thirds of the supercluster's luminous galaxies, and a roughly spherical halo containing the remaining third.
The disk itself is a thin (~1 Mpc) ellipsoid with a long axis / short axis ratio of at least 6 to 1, and possibly as high as 9 to 1.
Data released in June 2003 from the 5-year Two-degree-Field Galaxy Redshift Survey (2dF) has allowed astronomers to compare the LS to other superclusters. The LS represents a typical poor (that is, lacking a high density core) supercluster of rather small size. It has one rich galaxy cluster in the center, surrounded by filaments of galaxies and poor groups.
The Local Group is located on the outskirts of the LS in a small filament extending from the Fornax Cluster to the Virgo Cluster. The Virgo Supercluster's volume is roughly 7,000 times that of the Local Group, or 100 billion times that of the Milky Way.
Galaxy distribution
The number density of galaxies in the LS falls off with the square of the distance from its center near the Virgo Cluster, suggesting that this cluster is not randomly located. Overall, the vast majority of the luminous galaxies (those brighter than absolute magnitude −13) are concentrated in a small number of clouds (groups of galaxy clusters). Ninety-eight percent can be found in the following 11 clouds, given in decreasing order of number of luminous galaxies: Canes Venatici, Virgo Cluster, Virgo II (southern extension), Leo II, Virgo III, Crater (NGC 3672), Leo I, Leo Minor (NGC 2841), Draco (NGC 5907), Antlia (NGC 2997), and NGC 5643.
Of the luminous galaxies located in the disk, one third are in the Virgo Cluster. The remainder are found in the Canes Venatici Cloud and Virgo II Cloud, plus the somewhat insignificant NGC 5643 Group.
The luminous galaxies in the halo are concentrated in a small number of clouds (94% in 7 clouds). This distribution indicates that "most of the volume of the supergalactic plane is a great void." A helpful analogy that matches the observed distribution is that of soap bubbles. Flattish clusters and superclusters are found at the intersection of bubbles, which are large, roughly spherical (on the order of 20–60 Mpc in diameter) voids in space.
Long filamentary structures seem to predominate. An example of this is the Hydra–Centaurus Supercluster, the nearest supercluster to the Virgo Supercluster, which starts at a distance of roughly 30 Mpc and extends to 60 Mpc.
Cosmology
Large-scale dynamics
Since the late 1980s it has been apparent that not only the Local Group, but all matter out to a distance of at least 50 Mpc is experiencing a bulk flow on the order of 600 km/s in the direction of the Norma Cluster (Abell 3627).
Lynden-Bell et al. (1988) dubbed the cause of this the "Great Attractor". The Great Attractor is now understood to be the center of mass of an even larger structure of galaxy clusters, dubbed "Laniakea", which includes the Virgo Supercluster (including the Local Group) as well as the Hydra-Centaurus Supercluster, the Pavo-Indus Supercluster, and the Fornax Group.
The Great Attractor, together with the entire supercluster, is found to be moving toward the Shapley Supercluster, whose center is the Shapley Attractor.
Dark matter
The LS has a total mass of M ≈ 10^15 M☉ and a total optical luminosity of L ≈ 3 × 10^12 L☉. This yields a mass-to-light ratio of about 300 times the solar ratio (M☉/L☉ = 1), a figure that is consistent with results obtained for other superclusters. By comparison, the mass-to-light ratio for the Milky Way is 63.8, assuming a solar absolute magnitude of 4.83, a Milky Way absolute magnitude of −20.9, and a Milky Way mass of 1.25 × 10^12 M☉. These ratios are one of the main arguments in favor of the presence of large amounts of dark matter in the universe; if dark matter did not exist, much smaller mass-to-light ratios would be expected.
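A minimal sketch of the arithmetic behind these ratios; the magnitudes are the figures quoted above, and the Milky Way mass is the value implied by the quoted ratio:

```python
# Sketch of the mass-to-light arithmetic. Inputs are the figures quoted in
# the text; the mass is the value implied by the quoted ratio of 63.8.
M_abs_sun = 4.83   # absolute magnitude of the Sun
M_abs_mw = -20.9   # absolute magnitude of the Milky Way
mass_mw = 1.25e12  # Milky Way mass in solar masses (implied by the ratio)

# Each 2.5 magnitudes corresponds to a factor of 10 in luminosity:
L_mw = 10 ** (0.4 * (M_abs_sun - M_abs_mw))  # luminosity in solar units

print(f"L   ≈ {L_mw:.2e} L_sun")      # ~1.96e10
print(f"M/L ≈ {mass_mw / L_mw:.1f}")  # ~63.8
```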
| Physical sciences | Other notable objects | null |
177436 | https://en.wikipedia.org/wiki/Great%20Attractor | Great Attractor | The Great Attractor is a region of gravitational attraction in intergalactic space and the apparent central gravitational point of the Laniakea Supercluster of galaxies that includes the Milky Way galaxy, as well as about 100,000 other galaxies.
The observed attraction suggests a localized concentration of mass on the order of 10^16 solar masses. However, it is obscured by the Milky Way's galactic plane, lying behind the Zone of Avoidance (ZOA), so that in visible light wavelengths the Great Attractor is difficult to observe directly.
The attraction is observable by its effect on the motion of galaxies and their associated clusters over a region of hundreds of millions of light-years across the universe. These galaxies are observable above and below the Zone of Avoidance; all are redshifted in accordance with the Hubble flow, indicating that they are receding relative to the Milky Way and to each other, but the variations in their redshifts are large enough and regular enough to reveal that they are slightly drawn towards the attraction. The variations in their redshifts are known as peculiar velocities, and cover a range from about +700 km/s to −700 km/s, depending on the angular deviation from the direction to the Great Attractor.
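A peculiar velocity is simply the residual after the pure Hubble flow is subtracted from an observed recession velocity. The sketch below illustrates this with assumed values for the Hubble constant, the distance, and the observed velocity:

```python
# Sketch: a peculiar velocity as the residual between an observed recession
# velocity and the pure Hubble flow. H0, the distance, and the observed
# velocity are illustrative assumptions, not measurements from the article.
H0 = 70.0            # Hubble constant, km/s per Mpc (assumed)
distance_mpc = 60.0  # assumed distance to a galaxy, Mpc
v_observed = 4900.0  # assumed observed recession velocity, km/s

v_hubble = H0 * distance_mpc        # velocity expected from expansion alone
v_peculiar = v_observed - v_hubble  # residual attributable to local attraction
print(f"Hubble flow: {v_hubble:.0f} km/s, "
      f"peculiar velocity: {v_peculiar:+.0f} km/s")
```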
The Great Attractor itself is moving towards the Shapley Supercluster.
History
The Great Attractor was named by Alan Dressler in 1987, following decades of redshift surveys that built up a large dataset of redshift values. The redshift values and distance measurements independent of redshift measurements were then combined to create maps of peculiar velocity.
Through a series of peculiar velocity tests, astrophysicists found that the Milky Way was moving in the direction of the constellation Centaurus at about 600 km/s. The dipole of the cosmic microwave background (CMB) was then used to trace the motion of the Local Group of galaxies towards the Great Attractor. The 1980s brought many discoveries about the Great Attractor, such as the fact that the Milky Way is not the only galaxy affected: approximately 400 elliptical galaxies beyond the Zone of Avoidance, the region obscured by the Milky Way's light, are moving toward the Great Attractor.
Intense efforts to work through the difficulties caused by the occlusion by the Milky Way during the late 1990s identified the Norma Cluster at the center of the Great Attractor region.
Location
The first indications of a deviation from uniform expansion of the universe were reported in 1973 and again in 1978. The location of the Great Attractor was finally determined in 1986: It is situated at a distance of somewhere between 150 and 250 Mly (million light-years) (47–79 Mpc), the larger being the most recent estimate, away from the Milky Way, in the direction of the constellations Triangulum Australe (The Southern Triangle) and Norma (The Carpenter's Square). While objects in that direction lie in the Zone of Avoidance (the part of the night sky obscured by the Milky Way galaxy) and are thus difficult to study with visible wavelengths, X-ray observations have revealed that region of space to be dominated by the Norma Cluster (ACO 3627), a massive cluster of galaxies containing a preponderance of large, old galaxies, many of which are colliding with their neighbours and radiating large amounts of radio waves.
Debate over apparent mass
In 1992, much of the apparent signal of the Great Attractor was attributed to a statistical effect called Malmquist bias. In 2005, astronomers conducting an X-ray survey of part of the sky known as the Clusters in the Zone of Avoidance (CIZA) project reported that the Great Attractor was actually only one tenth the mass that scientists had originally estimated. The survey also confirmed earlier theories that the Milky Way galaxy is in fact being pulled toward a much more massive cluster of galaxies near the Shapley Supercluster, which lies beyond the Great Attractor, and which is called the Shapley Attractor.
Norma Wall
A massive galaxy filament, called the Norma Wall (also called the Great Attractor Wall), is located at the center of the supposed position of the Great Attractor. The Norma Wall contains the clusters Pavo II, Norma, Centaurus-Crux and CIZA J1324.7−5736; the most massive cluster in this region is the Norma Cluster. Later studies found that the wall continues over to the constellations of Centaurus and Vela.
Laniakea Supercluster
The proposed Laniakea Supercluster is defined as the Great Attractor's basin. It covers approximately four main galaxy superclusters, including the Virgo and Hydra–Centaurus superclusters, and spans about 500 million light-years. Because it is not dense enough to be gravitationally bound, it should be dispersing as the universe expands, but it is instead anchored by a gravitational focal point. Thus the Great Attractor would be the core of the new supercluster. The local flows of the Laniakea Supercluster converge in the region of the Norma and Centaurus Clusters, approximately at the position of the Great Attractor.
| Physical sciences | Basics_3 | null |
177442 | https://en.wikipedia.org/wiki/Jumbo | Jumbo | Jumbo (December 25, 1860 – September 15, 1885), also known as Jumbo the Elephant and Jumbo the Circus Elephant, was a 19th-century male African bush elephant born in Sudan. Jumbo was exported to Jardin des Plantes, a zoo in Paris, and then transferred in 1865 to London Zoo in England. Despite public protest, Jumbo was sold to P. T. Barnum, who took him to the United States for exhibition in March 1882.
The elephant's name spawned the common word "jumbo", meaning large in size. Examples of his lexical impact are phrases like "jumbo jet", "jumbo shrimp", and "jumbotron". Jumbo's shoulder height at the time of his death has been estimated at about 3.2 metres (10.5 ft), and was claimed by Barnum to be about 4 metres (13 ft). "Jumbo" has been the mascot of Tufts University for over one hundred years.
History
Jumbo was born around December 25, 1860, in Sudan, and after his mother was killed by poachers, the infant Jumbo was captured by Sudanese elephant poacher Taher Sheriff and German big-game poacher Johann Schmidt. The calf was sold to Lorenzo Casanova, an Italian animal dealer and explorer. Casanova transported the animals that he had bought from Sudan north to Suez, and then across the Mediterranean Sea to Trieste.
This collection was sold to Gottlieb Christian Kreutzberg's "Menagerie Kreutzberg" in Germany. Soon after, the elephant was imported to France and kept in the Paris zoo Jardin des Plantes. In 1865, he was transferred to the London Zoo and arrived on 26 June. In the following years, Jumbo became a crowd favorite due to his size, and would give rides to children on his back, including those of Queen Victoria.
While in London, Jumbo broke both tusks, and when they regrew, he ground them down against the stonework of his enclosure. His keeper in London was Matthew Scott, whose 1885 autobiography details his life with Jumbo.
In 1882, Abraham Bartlett, superintendent of the London zoo, sparked national controversy with his decision to sell Jumbo to the American entertainer Phineas T. Barnum of the Barnum & Bailey Circus for £2,000 (US$10,000). This decision came as a result of concern surrounding Jumbo's growing aggression and potential to cause a public disaster. The sale of Jumbo, however, sent the citizens of London into a panic, because they viewed the transaction as an enormous loss for the British empire. Some 100,000 schoolchildren wrote to Queen Victoria begging her not to sell the elephant.
John Ruskin, a fellow of the Zoological Society, wrote in The Morning Post in February 1882: "I, for one of the said fellows, am not in the habit of selling my old pets or parting with my old servants because I find them subject occasionally, perhaps even "periodically," to fits of ill temper; and I not only "regret" the proceedings of the council, but disclaim them utterly, as disgraceful to the city of London and dishonourable to common humanity." Despite a lawsuit against the Zoological Gardens alleging the sale was in violation of multiple zoo bylaws, and the zoo's attempt to renege on the sale, the court upheld the sale. Matthew Scott elected to go with Jumbo to the United States. The London-based newspaper The Daily Telegraph begged Barnum to lay down terms on which he would return Jumbo; however, no such terms existed in the eyes of Barnum.
In New York, Barnum exhibited Jumbo at Madison Square Garden, earning enough in three weeks from the enormous crowds to recoup the money he spent to buy the animal. In the 31-week season, the circus earned $1.75M, largely due to its star attraction. On May 17, 1884, Jumbo was one of Barnum's 21 elephants that crossed the Brooklyn Bridge to demonstrate that it was safe, a year after 15 people died during a stampede precipitated by fear that the bridge might collapse. On July 6, 1885, Jumbo was paraded in Saint John, New Brunswick, celebrating his first appearance in Canada.
Death
Jumbo died at a railway classification yard in St. Thomas, Ontario, Canada, on September 15, 1885. In those days, the circus crisscrossed North America by train. St. Thomas was the perfect location for a circus because many rail lines converged there. Jumbo and the other animals had finished their performances that night, and as they were being led to their box car, a train came down the track. Jumbo was hit and mortally wounded, dying within minutes.
Barnum told the (possibly fictional) story that Tom Thumb, a young circus elephant, was walking on the railroad tracks and Jumbo was attempting to lead him to safety. Barnum claimed that the locomotive hit and killed Tom Thumb before it derailed and hit Jumbo, and other witnesses supported Barnum's account. According to newspapers, the freight train hit Jumbo directly, killing him, while Tom Thumb suffered a broken leg.
Many metallic objects were found in the elephant's stomach, including English pennies, keys, rivets, and a police whistle.
Ever the showman, Barnum had portions of his star attraction separated, to have multiple sites attracting curious spectators. After touring with Barnum's circus, the skeleton was donated to the American Museum of Natural History in New York City, where it remains. The elephant's heart was sold to Burt Green Wilder of Cornell University, and had been lost by the 1940s. Jumbo's hide was stuffed by William J. Critchley and Carl Akeley, both of Ward's Natural Science Establishment, who stretched it during the mounting process; the mounted specimen traveled with Barnum's circus for two years.
Barnum eventually donated the stuffed Jumbo to Tufts University, where it was displayed in P.T. Barnum Hall for many years. The hide was destroyed in a fire in April 1975. Ashes from that fire, which are believed to contain the elephant's remains, are kept in a 14-ounce Peter Pan Crunchy Peanut Butter jar in the office of the Tufts athletic director, while his taxidermied tail, removed during earlier renovations, resides in the holdings of the Tufts Digital Collections and Archives. Jumbo is the official Tufts University athletic mascot.
Legacy
Remaining in the United Kingdom are statues and other memorabilia of Jumbo. The elephant – or rather his statuette in the Natural History Museum – was made holotype of Richard Lydekker's proposed subspecies (Loxodonta africana rothschildi) for the large elephants of the eastern Sahel. Modern authorities do not recognize this (or any other subspecies of African bush elephants), considering its purportedly diagnostic large size and peculiarly shaped ears to be individual variation.
While Jumbo's hide resided at Tufts' P.T. Barnum Hall, a superstition held that dropping a coin into a nostril of the trunk would bring good luck on an examination or sports event. Although the hide was destroyed by a major fire, Jumbo remains the mascot of Tufts, and representations of the elephant are featured prominently throughout the campus.
A life-sized statue of the elephant was erected in 1985 in St. Thomas, Ontario, to commemorate the centennial of the elephant's death. It is located on Talbot Street on the west side of the city. In 2006 the Jumbo statue was inducted into the North America Railway Hall of Fame in the category of "Railway Art Forms & Events" as having local significance.
St. Thomas's Railway City Brewery sells an IPA beer named Dead Elephant.
Jumbo was the inspiration of the nickname of the 19th-century Jumbo Water Tower in the town of Colchester in Essex, England.
Jumbo is referenced by a plaque outside the old Liberal Hall, now a Wetherspoons pub, in Crediton, United Kingdom.
Lucy the Elephant, a six-story structure in Margate City, New Jersey, was modeled after Jumbo. Built by James V. Lafferty in 1881, Lucy is the oldest surviving roadside tourist attraction in America and a National Historic Landmark. Lafferty also made other Jumbo-shaped structures, including Elephantine Colossus, on Coney Island.
Jumbo was lionized on a series of sheet-music covers from roughly 1882–83. The four-colour lithograph of Jumbo was created by Alfred Concanen of England, with the music title "Why Part With Jumbo", a song by G. H. MacDermott, the lion comique of Victorian British music halls. It pictured child zoo visitors riding, somewhat precariously, on Jumbo's back. Multiple American lithographic music covers were done, including by J. H. Bufford's Sons.
Canadian folk singer James Gordon wrote the song "Jumbo's Last Ride", which recounts the story of Jumbo's life and death. It is on his 1999 CD Pipe Street Dreams.
Canadian professional ice hockey player Joe Thornton (b. 1979) from St. Thomas, Ontario is nicknamed Jumbo Joe as a homage to Jumbo.
The 1941 animated film Dumbo released by Walt Disney Animation Studios was inspired by the story of Jumbo and is regarded as one of the greatest animated films of all time. Despite the film being fictional, many people have speculated that Jumbo might have been the title character's father.
Examination of Jumbo's skeleton
A television program about Jumbo, Attenborough and the Giant Elephant, presented by the naturalist and broadcaster David Attenborough, was transmitted on BBC One in the United Kingdom on 10 December 2017. An international team of scientists examined the skeleton and found:
Jumbo's molar teeth were malformed and out of line as a result of a long-term soft diet that did not wear his molar teeth down enough, obstructing the forward eruptive movement of the next molar.
Jumbo's nightly rages were probably caused by toothache, rather than musth, as his keeper thought at the time.
A post-mortem photograph of Jumbo shows skin abrasions consistent with an illustration, produced just after his death, of the freight train hitting him on a hip from behind as he was being led across to his traveling carriage; the team concluded that the likeliest cause of death was internal bleeding from his injuries.
Examination of Jumbo's limb bones showed overgrown tendon attachment areas consistent with a long-term history of being overloaded at his work.
Jumbo was still growing at the time of his death, as is normal for African male elephants of his age, and might eventually have attained the size claimed by Barnum.
| Biology and health sciences | Individual animals | Animals |
177515 | https://en.wikipedia.org/wiki/Molecular%20engineering | Molecular engineering | Molecular engineering is an emerging field of study concerned with the design and testing of molecular properties, behavior and interactions in order to assemble better materials, systems, and processes for specific functions. This approach, in which observable properties of a macroscopic system are influenced by direct alteration of a molecular structure, falls into the broader category of “bottom-up” design.
Molecular engineering is highly interdisciplinary by nature, encompassing aspects of chemical engineering, materials science, bioengineering, electrical engineering, physics, mechanical engineering, and chemistry. There is also considerable overlap with nanotechnology, in that both are concerned with the behavior of materials on the scale of nanometers or smaller. Given the highly fundamental nature of molecular interactions, there are a plethora of potential application areas, limited perhaps only by one's imagination and the laws of physics. However, some of the early successes of molecular engineering have come in the fields of immunotherapy, synthetic biology, and printable electronics (see molecular engineering applications).
Molecular engineering is a dynamic and evolving field with complex target problems; breakthroughs require sophisticated and creative engineers who are conversant across disciplines. A rational engineering methodology that is based on molecular principles is in contrast to the widespread trial-and-error approaches common throughout engineering disciplines. Rather than relying on well-described but poorly-understood empirical correlations between the makeup of a system and its properties, a molecular design approach seeks to manipulate system properties directly using an understanding of their chemical and physical origins. This often gives rise to fundamentally new materials and systems, which are required to address outstanding needs in numerous fields, from energy to healthcare to electronics. Additionally, with the increased sophistication of technology, trial-and-error approaches are often costly and difficult, as it may be difficult to account for all relevant dependencies among variables in a complex system. Molecular engineering efforts may include computational tools, experimental methods, or a combination of both.
History
Molecular engineering was first mentioned in the research literature in 1956 by Arthur R. von Hippel, who defined it as "… a new mode of thinking about engineering problems. Instead of taking prefabricated materials and trying to devise engineering applications consistent with their macroscopic properties, one builds materials from their atoms and molecules for the purpose at hand." This concept was echoed in Richard Feynman's seminal 1959 lecture There's Plenty of Room at the Bottom, which is widely regarded as giving birth to some of the fundamental ideas of the field of nanotechnology. In spite of the early introduction of these concepts, it was not until the mid-1980s with the publication of Engines of Creation: The Coming Era of Nanotechnology by Drexler that the modern concepts of nano and molecular-scale science began to grow in the public consciousness.
The discovery of electrically conductive properties in polyacetylene by Alan J. Heeger in 1977 effectively opened the field of organic electronics, which has proved foundational for many molecular engineering efforts. Design and optimization of these materials has led to a number of innovations including organic light-emitting diodes and flexible solar cells.
Applications
Molecular design has been an important element of many disciplines in academia, including bioengineering, chemical engineering, electrical engineering, materials science, mechanical engineering and chemistry. However, one of the ongoing challenges is in bringing together the critical mass of manpower amongst disciplines to span the realm from design theory to materials production, and from device design to product development. Thus, while the concept of rational engineering of technology from the bottom-up is not new, it is still far from being widely translated into R&D efforts.
Molecular engineering is used in many industries. Some applications of technologies where molecular engineering plays a critical role:
Consumer Products
Antibiotic surfaces (e.g. incorporation of silver nanoparticles or antibacterial peptides into coatings to prevent microbial infection)
Cosmetics (e.g. rheological modification with small molecules and surfactants in shampoo)
Cleaning products (e.g. nanosilver in laundry detergent)
Consumer electronics (e.g. organic light-emitting diode displays (OLED))
Electrochromic windows (e.g. windows in the Boeing 787 Dreamliner)
Zero emission vehicles (e.g. advanced fuel cells/batteries)
Self-cleaning surfaces (e.g. super hydrophobic surface coatings)
Energy Harvesting and Storage
Flow batteries - Synthesizing molecules for high-energy density electrolytes and highly-selective membranes in grid-scale energy storage systems.
Lithium-ion batteries - Creating new molecules for use as electrode binders, electrolytes, electrolyte additives, or even for energy storage directly in order to improve energy density (using materials such as graphene, silicon nanorods, and lithium metal), power density, cycle life, and safety.
Solar cells - Developing new materials for more efficient and cost-effective solar cells including organic, quantum dot or perovskite-based photovoltaics.
Photocatalytic water splitting - Enhancing the production of hydrogen fuel using solar energy and advanced catalytic materials such as semiconductor nanoparticles
Environmental Engineering
Water desalination (e.g. new membranes for highly-efficient low-cost ion removal)
Soil remediation (e.g. catalytic nanoparticles that accelerate the degradation of long-lived soil contaminants such as chlorinated organic compounds)
Carbon sequestration (e.g. new materials for CO2 adsorption)
Immunotherapy
Peptide-based vaccines (e.g. amphiphilic peptide macromolecular assemblies induce a robust immune response)
Peptide-containing biopharmaceuticals (e.g. nanoparticles, liposomes, polyelectrolyte micelles as delivery vehicles)
Synthetic Biology
CRISPR - Faster and more efficient gene editing technique
Gene delivery/gene therapy - Designing molecules to deliver modified or new genes into cells of live organisms to cure genetic disorders
Metabolic engineering - Modifying metabolism of organisms to optimize production of chemicals (e.g. synthetic genomics)
Protein engineering - Altering structure of existing proteins to enable specific new functions, or the creation of fully artificial proteins
DNA-functionalized materials - 3D assemblies of DNA-conjugated nanoparticle lattices
Techniques and instruments used
Molecular engineers utilize sophisticated tools and instruments to make and analyze the interactions of molecules and the surfaces of materials at the molecular and nano-scale. The complexity of molecules being introduced at the surface is increasing, and the techniques used to analyze surface characteristics at the molecular level are ever-changing and improving. Meanwhile, advancements in high-performance computing have greatly expanded the use of computer simulation in the study of molecular-scale systems; a minimal molecular-dynamics example is sketched after the list of computational approaches below.
Computational and Theoretical Approaches
Computational chemistry
High performance computing
Molecular dynamics
Molecular modeling
Statistical mechanics
Theoretical chemistry
Topology
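As promised above, a minimal molecular-dynamics sketch: one velocity-Verlet step for two particles interacting through a Lennard-Jones potential, in reduced units. All parameter values are illustrative assumptions, and a real simulation would loop over many steps and particles:

```python
import numpy as np

# Sketch: one velocity-Verlet step for two Lennard-Jones particles, in
# reduced units. Parameter values are illustrative assumptions.
eps, sig, dt, m = 1.0, 1.0, 1e-3, 1.0

def lj_force(r_vec):
    """Force on particle 0 from particle 1 for V = 4*eps*((sig/r)^12 - (sig/r)^6)."""
    r = np.linalg.norm(r_vec)
    mag = 24 * eps * (2 * (sig / r) ** 12 - (sig / r) ** 6) / r
    return mag * r_vec / r  # positive mag = repulsive

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # initial positions
vel = np.zeros_like(pos)                            # initially at rest

f0 = lj_force(pos[0] - pos[1])
forces = np.array([f0, -f0])                # Newton's third law
pos += vel * dt + 0.5 * forces / m * dt**2  # velocity-Verlet: positions
f0_new = lj_force(pos[0] - pos[1])
new_forces = np.array([f0_new, -f0_new])
vel += 0.5 * (forces + new_forces) / m * dt  # velocity-Verlet: velocities
print(pos[0], vel[0])  # the particles begin drifting together (r > 2^(1/6))
```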
Microscopy
Atomic Force Microscopy (AFM)
Scanning Electron Microscopy (SEM)
Transmission Electron Microscopy (TEM)
Molecular Characterization
Dynamic light scattering (DLS)
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry
Nuclear magnetic resonance (NMR) spectroscopy
Size exclusion chromatography (SEC)
Spectroscopy
Ellipsometry
2D X-Ray Diffraction (XRD)
Raman Spectroscopy/Microscopy
Surface Science
Glow Discharge Optical Emission Spectrometry
Time of Flight-Secondary Ion Mass Spectrometry (ToF-SIMS)
X-Ray Photoelectron Spectroscopy (XPS)
Synthetic Methods
DNA synthesis
Nanoparticle synthesis
Organic synthesis
Peptide synthesis
Polymer synthesis
Other Tools
Focused Ion Beam (FIB)
Profilometer
UV Photoelectron Spectroscopy (UPS)
Vibrational Sum Frequency Generation
Research / Education
At least three universities offer graduate degrees dedicated to molecular engineering: the University of Chicago, the University of Washington, and Kyoto University. These programs are interdisciplinary institutes with faculty from several research areas.
The academic journal Molecular Systems Design & Engineering publishes research from a wide variety of subject areas that demonstrates "a molecular design or optimisation strategy targeting specific systems functionality and performance."
| Technology | Disciplines | null |
177602 | https://en.wikipedia.org/wiki/Outer%20space | Outer space | Outer space (or simply space) is the expanse that exists beyond Earth's atmosphere and between celestial bodies. It contains ultra-low levels of particle densities, constituting a near-perfect vacuum of predominantly hydrogen and helium plasma, permeated by electromagnetic radiation, cosmic rays, neutrinos, magnetic fields and dust. The baseline temperature of outer space, as set by the background radiation from the Big Bang, is about 2.7 kelvins (−270 °C).
The plasma between galaxies is thought to account for about half of the baryonic (ordinary) matter in the universe, having a number density of less than one hydrogen atom per cubic metre and a kinetic temperature of millions of kelvins. Local concentrations of matter have condensed into stars and galaxies. Intergalactic space takes up most of the volume of the universe, but even galaxies and star systems consist almost entirely of empty space. Most of the remaining mass-energy in the observable universe is made up of an unknown form, dubbed dark matter and dark energy.
Outer space does not begin at a definite altitude above Earth's surface. The Kármán line, an altitude of 100 km (62 mi) above sea level, is conventionally used as the start of outer space in space treaties and for aerospace records keeping. Certain portions of the upper stratosphere and the mesosphere are sometimes referred to as "near space". The framework for international space law was established by the Outer Space Treaty, which entered into force on 10 October 1967. This treaty precludes any claims of national sovereignty and permits all states to freely explore outer space. Despite the drafting of UN resolutions for the peaceful uses of outer space, anti-satellite weapons have been tested in Earth orbit.
The concept that the space between the Earth and the Moon must be a vacuum was first proposed in the 17th century after scientists discovered that air pressure decreased with altitude. The immense scale of outer space was grasped in the 20th century when the distance to the Andromeda Galaxy was first measured. Humans began the physical exploration of space later in the same century with the advent of high-altitude balloon flights. This was followed by crewed rocket flights and, then, crewed Earth orbit, first achieved by Yuri Gagarin of the Soviet Union in 1961. The economic cost of putting objects, including humans, into space is very high, limiting human spaceflight to low Earth orbit and the Moon. On the other hand, uncrewed spacecraft have reached all of the known planets in the Solar System. Outer space represents a challenging environment for human exploration because of the hazards of vacuum and radiation. Microgravity has a negative effect on human physiology that causes both muscle atrophy and bone loss.
Terminology
The use of the short version space, as meaning 'the region beyond Earth's sky', predates the use of full term "outer space", with the earliest recorded use of this meaning in an epic poem by John Milton called Paradise Lost, published in 1667.
The term outward space existed in a poem from 1842 by the English poet Lady Emmeline Stuart-Wortley called "The Maiden of Moscow", but in astronomy the term outer space found its application for the first time in 1845 by Alexander von Humboldt. The term was eventually popularized through the writings of H. G. Wells after 1901. Theodore von Kármán used the term free space to name the space at altitudes above Earth where spacecraft reach conditions sufficiently free from atmospheric drag, differentiating it from airspace and identifying a legal space above territories free from the sovereign jurisdiction of countries.
"Spaceborne" denotes existing in outer space, especially if carried by a spacecraft; similarly, "space-based" means based in outer space or on a planet or moon.
Formation and state
The size of the whole universe is unknown, and it might be infinite in extent. According to the Big Bang theory, the very early universe was an extremely hot and dense state about 13.8 billion years ago which rapidly expanded. About 380,000 years later the universe had cooled sufficiently to allow protons and electrons to combine and form hydrogen—the so-called recombination epoch. When this happened, matter and energy became decoupled, allowing photons to travel freely through the continually expanding space. Matter that remained following the initial expansion has since undergone gravitational collapse to create stars, galaxies and other astronomical objects, leaving behind a deep vacuum that forms what is now called outer space. As light has a finite velocity, this theory constrains the size of the directly observable universe.
The present day shape of the universe has been determined from measurements of the cosmic microwave background using satellites like the Wilkinson Microwave Anisotropy Probe. These observations indicate that the spatial geometry of the observable universe is "flat", meaning that photons on parallel paths at one point remain parallel as they travel through space to the limit of the observable universe, except for local gravity. The flat universe, combined with the measured mass density of the universe and the accelerating expansion of the universe, indicates that space has a non-zero vacuum energy, which is called dark energy.
Estimates put the average energy density of the present day universe at the equivalent of 5.9 protons per cubic meter, including dark energy, dark matter, and baryonic matter (ordinary matter composed of atoms). The atoms account for only 4.6% of the total energy density, or a density of one proton per four cubic meters. The density of the universe is clearly not uniform; it ranges from relatively high density in galaxies—including very high density in structures within galaxies, such as planets, stars, and black holes—to conditions in vast voids that have much lower density, at least in terms of visible matter. Unlike matter and dark matter, dark energy seems not to be concentrated in galaxies: although dark energy may account for a majority of the mass-energy in the universe, dark energy's influence is 5 orders of magnitude smaller than the influence of gravity from matter and dark matter within the Milky Way.
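The "one proton per four cubic meters" figure follows directly from the numbers quoted above, as this small check shows:

```python
# Sketch: the arithmetic behind "one proton per four cubic meters".
# Both figures are the ones quoted in the text.
total_density = 5.9      # proton-mass equivalents per cubic meter
baryon_fraction = 0.046  # atoms as a share of the total energy density

protons_per_m3 = total_density * baryon_fraction
print(f"{protons_per_m3:.2f} protons/m^3 "
      f"≈ 1 proton per {1 / protons_per_m3:.1f} m^3")  # ~0.27 -> ~1 per 4 m^3
```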
Environment
Outer space is the closest known approximation to a perfect vacuum. It has effectively no friction, allowing stars, planets, and moons to move freely along their ideal orbits, following the initial formation stage. The deep vacuum of intergalactic space is not devoid of matter, as it contains a few hydrogen atoms per cubic meter. By comparison, the air humans breathe contains about 10^25 molecules per cubic meter. The low density of matter in outer space means that electromagnetic radiation can travel great distances without being scattered: the mean free path of a photon in intergalactic space is about 10^23 km, or 10 billion light years. In spite of this, extinction, which is the absorption and scattering of photons by dust and gas, is an important factor in galactic and intergalactic astronomy.
Stars, planets, and moons retain their atmospheres by gravitational attraction. Atmospheres have no clearly delineated upper boundary: the density of atmospheric gas gradually decreases with distance from the object until it becomes indistinguishable from outer space. The Earth's atmospheric pressure drops to about 0.032 Pa at 100 km (62 mi) of altitude, compared to 100,000 Pa for the International Union of Pure and Applied Chemistry (IUPAC) definition of standard pressure. Above this altitude, isotropic gas pressure rapidly becomes insignificant when compared to radiation pressure from the Sun and the dynamic pressure of the solar wind. The thermosphere in this range has large gradients of pressure, temperature and composition, and varies greatly due to space weather.
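A crude way to see why pressure becomes negligible is the isothermal barometric formula P = P0 · exp(−h/H). The scale height below is an assumed sea-level value, and because the real atmosphere is not isothermal, the estimate at 100 km comes out roughly an order of magnitude above the measured 0.032 Pa:

```python
import math

# Sketch: isothermal barometric estimate, P = P0 * exp(-h / H).
# H = 8.5 km is an assumed sea-level scale height; the real atmosphere is
# not isothermal, so this overestimates the pressure at 100 km.
P0 = 100_000.0  # standard pressure at sea level, Pa
H = 8_500.0     # assumed scale height, m

for h_km in (0, 20, 50, 100):
    p = P0 * math.exp(-h_km * 1000 / H)
    print(f"{h_km:>3} km : {p:10.3e} Pa")
```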
The temperature of outer space is measured in terms of the kinetic activity of the gas, as it is on Earth. The radiation of outer space has a different temperature than the kinetic temperature of the gas, meaning that the gas and radiation are not in thermodynamic equilibrium. All of the observable universe is filled with photons that were created during the Big Bang; these are known as the cosmic microwave background radiation (CMB). (There is quite likely a correspondingly large number of neutrinos called the cosmic neutrino background.) The current black body temperature of the background radiation is about 2.7 K. The gas temperatures in outer space can vary widely. For example, the temperature in the Boomerang Nebula is about 1 K, while the solar corona reaches temperatures over a million kelvin.
Magnetic fields have been detected in the space around just about every class of celestial object. Star formation in spiral galaxies can generate small-scale dynamos, creating turbulent magnetic field strengths of around 5–10 μG. The Davis–Greenstein effect causes elongated dust grains to align themselves with a galaxy's magnetic field, resulting in weak optical polarization. This has been used to show ordered magnetic fields that exist in several nearby galaxies. Magneto-hydrodynamic processes in active elliptical galaxies produce their characteristic jets and radio lobes. Non-thermal radio sources have been detected even among the most distant high-z sources, indicating the presence of magnetic fields.
Outside a protective atmosphere and magnetic field, there are few obstacles to the passage through space of energetic subatomic particles known as cosmic rays. These particles have energies ranging from about 10⁶ eV up to an extreme 10²⁰ eV of ultra-high-energy cosmic rays. The peak flux of cosmic rays occurs at energies of about 10⁹ eV, with approximately 87% protons, 12% helium nuclei and 1% heavier nuclei. In the high energy range, the flux of electrons is only about 1% of that of protons. Cosmic rays can damage electronic components and pose a health threat to space travelers.
Equipment returned from extravehicular activity in low Earth orbit carries a burned/metallic odor, similar to the scent of arc welding fumes; this is attributed to atomic oxygen around the ISS, which clings to suits and equipment. Other regions of space could have very different smells, such as that of different alcohols in molecular clouds.
Human access
Effect on biology and human bodies
Despite the harsh environment, several life forms have been found that can withstand extreme space conditions for extended periods. Species of lichen carried on the ESA BIOPAN facility survived exposure for ten days in 2007. Seeds of Arabidopsis thaliana and Nicotiana tabacum germinated after being exposed to space for 1.5 years. A strain of Bacillus subtilis has survived 559 days when exposed to low Earth orbit or a simulated Martian environment. The lithopanspermia hypothesis suggests that rocks ejected into outer space from life-harboring planets may successfully transport life forms to another habitable world. A conjecture is that just such a scenario occurred early in the history of the Solar System, with potentially microorganism-bearing rocks being exchanged between Venus, Earth, and Mars.
Vacuum
The lack of pressure in space is the most immediately dangerous characteristic of space to humans. Pressure decreases above Earth, reaching a level at an altitude of around 19 km that matches the vapor pressure of water at the temperature of the human body. This pressure level is called the Armstrong line, named after American physician Harry G. Armstrong. At or above the Armstrong line, fluids in the throat and lungs boil away. More specifically, exposed bodily liquids such as saliva, tears, and liquids in the lungs boil away. Hence, at this altitude, human survival requires a pressure suit or a pressurized capsule.
Out in space, sudden exposure of an unprotected human to very low pressure, such as during a rapid decompression, can cause pulmonary barotrauma—a rupture of the lungs, due to the large pressure differential between inside and outside the chest. Even if the subject's airway is fully open, the flow of air through the windpipe may be too slow to prevent the rupture. Rapid decompression can rupture eardrums and sinuses; bruising and blood seepage can occur in soft tissues; and shock can cause an increase in oxygen consumption that leads to hypoxia.
As a consequence of rapid decompression, oxygen dissolved in the blood empties into the lungs to try to equalize the partial pressure gradient. Once the deoxygenated blood arrives at the brain, humans lose consciousness after a few seconds and die of hypoxia within minutes. Blood and other body fluids boil when the pressure drops below 6.3 kPa, a condition called ebullism. The steam may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Ebullism is slowed by the pressure containment of blood vessels, so some blood remains liquid.
Swelling and ebullism can be reduced by containment in a pressure suit. The Crew Altitude Protection Suit (CAPS), a fitted elastic garment designed in the 1960s for astronauts, prevents ebullism at pressures as low as about 2 kPa. Supplemental oxygen is needed above roughly 8 km of altitude to provide enough oxygen for breathing and to prevent water loss, while above about 20 km pressure suits are essential to prevent ebullism. Most space suits use around 30–39 kPa of pure oxygen, about the same as the partial pressure of oxygen at the Earth's surface. This pressure is high enough to prevent ebullism, but evaporation of nitrogen dissolved in the blood could still cause decompression sickness and gas embolisms if not managed.
Weightlessness and radiation
Humans evolved for life in Earth gravity, and exposure to weightlessness has been shown to have deleterious effects on human health. Initially, more than 50% of astronauts experience space motion sickness. This can cause nausea and vomiting, vertigo, headaches, lethargy, and overall malaise. The duration of space sickness varies, but it typically lasts for 1–3 days, after which the body adjusts to the new environment. Longer-term exposure to weightlessness results in muscle atrophy and deterioration of the skeleton, or spaceflight osteopenia. These effects can be minimized through a regimen of exercise. Other effects include fluid redistribution, slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, nasal congestion, sleep disturbance, and puffiness of the face.
During long-duration space travel, radiation can pose an acute health hazard. Exposure to high-energy, ionizing cosmic rays can result in fatigue, nausea, vomiting, as well as damage to the immune system and changes to the white blood cell count. Over longer durations, symptoms include an increased risk of cancer, plus damage to the eyes, nervous system, lungs and the gastrointestinal tract. On a round-trip Mars mission lasting three years, a large fraction of the cells in an astronaut's body would be traversed and potentially damaged by high energy nuclei. The energy of such particles is significantly diminished by the shielding provided by the walls of a spacecraft and can be further diminished by water containers and other barriers. The impact of the cosmic rays upon the shielding produces additional radiation that can affect the crew. Further research is needed to assess the radiation hazards and determine suitable countermeasures.
Boundary
The transition between Earth's atmosphere and outer space lacks a well-defined physical boundary, with the air pressure steadily decreasing with altitude until it mixes with the solar wind. Various definitions for a practical boundary have been proposed, ranging from 30 km out to 1,600,000 km.
High-altitude vehicles, such as high-altitude balloons, have reached altitudes above Earth of up to 50 km. Up until 2021, the United States designated people who travel above an altitude of 50 miles (80 km) as astronauts. Astronaut wings are now only awarded to spacecraft crew members that "demonstrated activities during flight that were essential to public safety, or contributed to human space flight safety."
In 2009, measurements of the direction and speed of ions in the atmosphere were made from a sounding rocket. The altitude of 118 km above Earth was the midpoint for charged particles transitioning from the gentle winds of the Earth's atmosphere to the more extreme flows of outer space. The latter can reach velocities well over 268 m/s.
Spacecraft have entered into highly elliptical orbits with perigees as low as 80 to 90 km, surviving for multiple orbits. At an altitude of about 120 km, descending spacecraft such as NASA's Space Shuttle begin atmospheric entry (termed the Entry Interface), when atmospheric drag becomes noticeable, thus beginning the process of switching from steering with thrusters to maneuvering with aerodynamic control surfaces.
The Kármán line, established by the Fédération Aéronautique Internationale and used internationally by the United Nations, is set at an altitude of 100 km as a working definition for the boundary between aeronautics and astronautics. This line is named after Theodore von Kármán, who argued for an altitude above which a vehicle would have to travel faster than orbital velocity to derive sufficient aerodynamic lift from the atmosphere to support itself, which he calculated to be at an altitude of about 84 km. This distinguishes altitudes below 100 km as the region of aerodynamics and airspace, and above as the space of astronautics and free space.
There is no internationally recognized legal altitude limit on national airspace, although the Kármán line is the most frequently used for this purpose. Objections have been made to setting this limit too high, as it could inhibit space activities due to concerns about airspace violations. It has been argued that no single altitude should be specified in international law, and that different limits should instead apply depending on the case, in particular based on the craft and its purpose. Spacecraft have flown over foreign countries at altitudes well below the Kármán line, as in the example of the Space Shuttle during reentry.
Legal status
The Outer Space Treaty provides the basic framework for international space law. It covers the legal use of outer space by nation states, and includes the Moon and other celestial bodies in its definition of outer space. The treaty states that outer space is free for all nation states to explore and is not subject to claims of national sovereignty, calling outer space the "province of all mankind". This status as a common heritage of mankind has been used, though not without opposition, to enforce the right to access and shared use of outer space for all nations equally, particularly non-spacefaring nations. It prohibits the deployment of nuclear weapons in outer space. The treaty was passed by the United Nations General Assembly in 1963 and signed in 1967 by the Union of Soviet Socialist Republics (USSR), the United States of America (USA), and the United Kingdom (UK). As of 2017, 105 state parties have either ratified or acceded to the treaty. An additional 25 states signed the treaty without ratifying it.
Since 1958, outer space has been the subject of multiple United Nations resolutions. Of these, more than 50 have concerned international co-operation in the peaceful uses of outer space and the prevention of an arms race in space. Four additional space law treaties have been negotiated and drafted by the UN's Committee on the Peaceful Uses of Outer Space. Still, there remains no legal prohibition against deploying conventional weapons in space, and anti-satellite weapons have been successfully tested by the USA, USSR, China, and in 2019, India. The 1979 Moon Treaty turned the jurisdiction of all heavenly bodies (including the orbits around such bodies) over to the international community. The treaty has not been ratified by any nation that currently practices human spaceflight.
In 1976, eight equatorial states (Ecuador, Colombia, Brazil, The Republic of the Congo, Zaire, Uganda, Kenya, and Indonesia) met in Bogotá, Colombia: with their "Declaration of the First Meeting of Equatorial Countries", or the Bogotá Declaration, they claimed control of the segment of the geosynchronous orbital path corresponding to each country. These claims are not internationally accepted.
An increasing issue in international space law and regulation has been the danger posed by the growing amount of space debris.
Earth orbit
A spacecraft enters orbit when its centripetal acceleration due to gravity is less than or equal to the centrifugal acceleration due to the horizontal component of its velocity. For a low Earth orbit, this velocity is about 7.8 km/s; by contrast, the fastest piloted airplane speed ever achieved (excluding speeds achieved by deorbiting spacecraft) was 7,270 km/h, in 1967 by the North American X-15.
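The circular-orbit speed quoted here follows from equating gravitational and centripetal acceleration, v = √(GM/r). A short numerical check, evaluated at an assumed altitude of 300 km (the altitude is illustrative; any low Earth orbit gives a similar figure):

```python
# Circular-orbit speed v = sqrt(G*M / r), evaluated for a low Earth orbit.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # Earth mass, kg
R = 6.371e6      # Earth mean radius, m
h = 300e3        # assumed orbital altitude, m (illustrative)

v = (G * M / (R + h)) ** 0.5
print(f"circular orbit at {h/1e3:.0f} km altitude: v ~ {v/1e3:.2f} km/s")
```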
To achieve an orbit, a spacecraft must travel faster than a sub-orbital spaceflight along an arcing trajectory. The energy required to reach Earth orbital velocity at an altitude of 600 km is about 36 MJ/kg, which is six times the energy needed merely to climb to the corresponding altitude. The escape velocity required to pull free of Earth's gravitational field altogether and move into interplanetary space is about 11.2 km/s.
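This energy budget splits into a kinetic term, ½v², and the potential-energy cost of the climb, GM(1/R − 1/r); the escape velocity is √(2GM/R). A sketch with standard formulas and round constants comes out close to the figures above, with the climb term indeed only about a sixth of the total:

```python
# Specific energy to reach a 600 km circular orbit: kinetic term plus the
# potential-energy climb. Escape velocity from the surface for comparison.
G = 6.674e-11
M = 5.972e24
R = 6.371e6
h = 600e3

r = R + h
v_orb = (G * M / r) ** 0.5
kinetic = 0.5 * v_orb**2            # J/kg, ~29 MJ/kg
climb = G * M * (1 / R - 1 / r)     # J/kg to lift 1 kg to altitude h, ~5 MJ/kg
v_esc = (2 * G * M / R) ** 0.5      # escape velocity, ~11.2 km/s

print(f"kinetic {kinetic/1e6:.1f} + climb {climb/1e6:.1f} "
      f"= {(kinetic + climb)/1e6:.1f} MJ/kg")
print(f"escape velocity ~ {v_esc/1e3:.1f} km/s")
```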
Orbiting spacecraft with a perigee below about 2,000 km are subject to drag from the Earth's atmosphere, which decreases the orbital altitude. The rate of orbital decay depends on the satellite's cross-sectional area and mass, as well as variations in the air density of the upper atmosphere. At altitudes above 800 km, orbital lifetime is measured in centuries. Below about 300 km, decay becomes more rapid with lifetimes measured in days. Once a satellite descends to 180 km, it has only hours before it vaporizes in the atmosphere.
Low Earth orbit overlaps the inner Van Allen radiation belt, a toroidal region composed primarily of high energy protons. This region represents a space weather threat to space systems and is difficult to shield against. Operational problems for satellites are particularly prevalent over the South Atlantic Anomaly, where charged particles approach closest to Earth.
Regions
Regions near the Earth
Space in proximity to the Earth is physically similar to the remainder of interplanetary space, but is home to a multitude of Earth–orbiting satellites and has been subject to extensive studies. For identification purposes, this volume is divided into overlapping regions of space.
Near-Earth space is the region of space extending from low Earth orbits out to geostationary orbits. This region includes the major orbits for artificial satellites and is the site of most of humanity's space activity. The region has seen high levels of space debris, sometimes dubbed space pollution, threatening any space activity in this region. Some of this debris re-enters Earth's atmosphere periodically. Although it meets the definition of outer space, the atmospheric density inside low-Earth orbital space, the first few hundred kilometers above the Kármán line, is still sufficient to produce significant drag on satellites.
Geospace is a region of space that includes Earth's upper atmosphere and magnetosphere. The Van Allen radiation belts lie within the geospace. The outer boundary of geospace is the magnetopause, which forms an interface between the Earth's magnetosphere and the solar wind. The inner boundary is the ionosphere.
The variable space-weather conditions of geospace are affected by the behavior of the Sun and the solar wind; the subject of geospace is interlinked with heliophysics—the study of the Sun and its impact on the planets of the Solar System. The day-side magnetopause is compressed by solar-wind pressure—the subsolar distance from the center of the Earth is typically 10 Earth radii. On the night side, the solar wind stretches the magnetosphere to form a magnetotail that sometimes extends out to more than 100–200 Earth radii. For roughly four days of each month, the lunar surface is shielded from the solar wind as the Moon passes through the magnetotail.
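The roughly 10-Earth-radii standoff distance follows from balancing solar-wind ram pressure against the magnetic pressure of Earth's compressed dipole field. A rough sketch with typical assumed solar-wind values (the density and speed below are representative figures, not measurements for any particular day):

```python
import math

# Day-side magnetopause standoff: ram pressure n*m_p*v^2 balances the
# magnetic pressure of the compressed dipole, B(r) = 2*B0*(R_E/r)^3
# (the factor 2 models field pile-up at the boundary).
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T m/A
m_p = 1.673e-27            # proton mass, kg
B0 = 3.1e-5                # Earth's equatorial surface field, T
n = 7e6                    # assumed solar-wind density, protons/m^3
v = 4e5                    # assumed solar-wind speed, m/s

ram = n * m_p * v**2                                 # ram pressure, Pa
r_over_RE = (4 * B0**2 / (2 * mu0 * ram)) ** (1 / 6)
print(f"standoff distance ~ {r_over_RE:.0f} Earth radii")
```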
Geospace is populated by electrically charged particles at very low densities, the motions of which are controlled by the Earth's magnetic field. These plasmas form a medium from which storm-like disturbances powered by the solar wind can drive electrical currents into the Earth's upper atmosphere. Geomagnetic storms can disturb two regions of geospace, the radiation belts and the ionosphere. These storms increase fluxes of energetic electrons that can permanently damage satellite electronics, interfering with shortwave radio communication and GPS location and timing. Magnetic storms can be a hazard to astronauts, even in low Earth orbit. They create aurorae seen at high latitudes in an oval surrounding the geomagnetic poles.
xGeo space is a concept used by the US to refer to the space of high Earth orbits, ranging from beyond geosynchronous orbit (GEO) at approximately 35,786 km out to the L2 Earth–Moon Lagrange point at about 448,900 km. This is located beyond the orbit of the Moon and therefore includes cislunar space. Translunar space is the region of lunar transfer orbits, between the Moon and Earth. Cislunar space is a region outside of Earth that includes lunar orbits, the Moon's orbital space around Earth and the Lagrange points.
The region where a body's gravitational potential remains dominant against the gravitational potentials of other bodies is the body's sphere of influence or gravity well, mostly described with the Hill sphere model. In the case of Earth this includes all space from the Earth to a distance of roughly 1% of the mean distance from Earth to the Sun, or about 1.5 million km. Beyond Earth's Hill sphere extends along Earth's orbital path its orbital and co-orbital space. This space is co-populated by groups of co-orbital Near-Earth Objects (NEOs), such as horseshoe librators and Earth trojans, with some NEOs at times becoming temporary satellites and quasi-moons to Earth.
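The 1% figure is the standard Hill-sphere approximation r_H ≈ a·(m/3M)^(1/3). Evaluating it with round values for the Earth–Sun system:

```python
# Hill-sphere radius r_H ~ a * (m / (3*M))**(1/3) for Earth around the Sun.
a = 1.496e11    # Earth-Sun distance, m (1 au)
m = 5.972e24    # Earth mass, kg
M = 1.989e30    # Sun mass, kg

r_H = a * (m / (3 * M)) ** (1 / 3)
print(f"Earth's Hill radius ~ {r_H/1e9:.2f} million km "
      f"(~{100 * r_H / a:.0f}% of the Earth-Sun distance)")
```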
Deep space is defined by the United States government as all of outer space which lies further from Earth than a typical low Earth orbit, thus assigning the Moon to deep space. Other definitions vary the starting point of deep space from "that which lies beyond the orbit of the Moon" to "that which lies beyond the farthest reaches of the Solar System itself". The International Telecommunication Union, responsible for radio communication, including with satellites, defines deep space as "distances from the Earth equal to, or greater than, 2 million km", which is about five times the Moon's orbital distance but far less than the distance between Earth and any adjacent planet.
Interplanetary space
Interplanetary space within the Solar System is the space between the eight planets, the space between the planets and the Sun, as well as the space beyond the orbit of the outermost planet Neptune where the solar wind remains active. The solar wind is a continuous stream of charged particles emanating from the Sun which creates a very tenuous atmosphere (the heliosphere) for billions of kilometers into space. This wind has a particle density of 5–10 protons/cm³ and moves at a velocity of 350–400 km/s. Interplanetary space extends out to the heliopause, where the influence of the galactic environment starts to dominate over the magnetic field and particle flux from the Sun. The distance to and strength of the heliopause varies depending on the activity level of the solar wind. The heliopause in turn deflects away low-energy galactic cosmic rays, with this modulation effect peaking during solar maximum.
The volume of interplanetary space is a nearly total vacuum, with a mean free path of about one astronomical unit at the orbital distance of the Earth. This space is not completely empty, and is sparsely filled with cosmic rays, which include ionized atomic nuclei and various subatomic particles. There is gas, plasma and dust, small meteors, and several dozen types of organic molecules discovered to date by microwave spectroscopy. A cloud of interplanetary dust is visible at night as a faint band called the zodiacal light.
Interplanetary space contains the magnetic field generated by the Sun. Planets with their own magnetic fields, such as Jupiter, Saturn, Mercury and the Earth, generate their own magnetospheres. These are shaped by the influence of the solar wind into the approximation of a teardrop shape, with the long tail extending outward behind the planet. These magnetic fields can trap particles from the solar wind and other sources, creating belts of charged particles such as the Van Allen radiation belts. Planets without magnetic fields, such as Mars, have their atmospheres gradually eroded by the solar wind.
Interstellar space
Interstellar space is the physical space outside of the bubbles of plasma known as astrospheres, formed by stellar winds originating from individual stars, or formed by solar wind emanating from the Sun. It is the space between the stars or stellar systems within a nebula or galaxy. Interstellar space contains an interstellar medium of sparse matter and radiation. The boundary between an astrosphere and interstellar space is known as an astropause. For the Sun, the astrosphere and astropause are called the heliosphere and heliopause.
Approximately 70% of the mass of the interstellar medium consists of lone hydrogen atoms; most of the remainder consists of helium atoms. This is enriched with trace amounts of heavier atoms formed through stellar nucleosynthesis. These atoms are ejected into the interstellar medium by stellar winds or when evolved stars begin to shed their outer envelopes such as during the formation of a planetary nebula. The cataclysmic explosion of a supernova propagates shock waves of stellar ejecta outward, distributing it throughout the interstellar medium, including the heavy elements previously formed within the star's core. The density of matter in the interstellar medium can vary considerably: the average is around 10⁶ particles per m³, but cold molecular clouds can hold 10⁸–10¹² per m³.
A number of molecules exist in interstellar space, which can form dust particles as tiny as 0.1 μm. The tally of molecules discovered through radio astronomy is steadily increasing at the rate of about four new species per year. Large regions of higher density matter known as molecular clouds allow chemical reactions to occur, including the formation of organic polyatomic species. Much of this chemistry is driven by collisions. Energetic cosmic rays penetrate the cold, dense clouds and ionize hydrogen and helium, resulting, for example, in the trihydrogen cation. An ionized helium atom can then split relatively abundant carbon monoxide to produce ionized carbon, which in turn can lead to organic chemical reactions.
The local interstellar medium is a region of space within 100 pc of the Sun, which is of interest both for its proximity and for its interaction with the Solar System. This volume nearly coincides with a region of space known as the Local Bubble, which is characterized by a lack of dense, cold clouds. It forms a cavity in the Orion Arm of the Milky Way Galaxy, with dense molecular clouds lying along the borders, such as those in the constellations of Ophiuchus and Taurus. The actual distance to the border of this cavity varies from 60 to 250 pc or more. This volume contains about 10⁴–10⁵ stars and the local interstellar gas counterbalances the astrospheres that surround these stars, with the volume of each sphere varying depending on the local density of the interstellar medium. The Local Bubble contains dozens of warm interstellar clouds with temperatures of up to 7,000 K and radii of 0.5–5 pc.
When stars are moving at sufficiently high peculiar velocities, their astrospheres can generate bow shocks as they collide with the interstellar medium. For decades it was assumed that the Sun had a bow shock. In 2012, data from Interstellar Boundary Explorer (IBEX) and NASA's Voyager probes showed that the Sun's bow shock does not exist. Instead, these authors argue that a subsonic bow wave defines the transition from the solar wind flow to the interstellar medium. A bow shock is a third boundary characteristic of an astrosphere, lying outside the termination shock and the astropause.
Intergalactic space
Intergalactic space is the physical space between galaxies. Studies of the large-scale distribution of galaxies show that the universe has a foam-like structure, with groups and clusters of galaxies lying along filaments that occupy about a tenth of the total space. The remainder forms cosmic voids that are mostly empty of galaxies. Typically, a void spans a distance of 7–30 megaparsecs.
Surrounding and stretching between galaxies is the intergalactic medium (IGM). This rarefied plasma is organized into a cosmic filamentary structure. The diffuse photoionized gas contains filaments of higher density, about one atom per cubic meter, which is 5–200 times the average density of the universe. The IGM is inferred to be mostly primordial in composition, with 76% hydrogen by mass, and enriched with higher mass elements from high-velocity galactic outflows.
As gas falls into the intergalactic medium from the voids, it heats up to temperatures of 10⁵ K to 10⁷ K. At these temperatures, it is called the warm–hot intergalactic medium (WHIM). Although the plasma is very hot by terrestrial standards, 10⁵ K is often called "warm" in astrophysics. Computer simulations and observations indicate that up to half of the atomic matter in the universe might exist in this warm–hot, rarefied state. When gas falls from the filamentary structures of the WHIM into the galaxy clusters at the intersections of the cosmic filaments, it can heat up even more, reaching temperatures of 10⁸ K and above in the so-called intracluster medium (ICM).
History of discovery
In 350 BCE, Greek philosopher Aristotle suggested that nature abhors a vacuum, a principle that became known as the horror vacui. This concept built upon a 5th-century BCE ontological argument by the Greek philosopher Parmenides, who denied the possible existence of a void in space. Based on this idea that a vacuum could not exist, in the West it was widely held for many centuries that space could not be empty. As late as the 17th century, the French philosopher René Descartes argued that the entirety of space must be filled.
In ancient China, the 2nd-century astronomer Zhang Heng became convinced that space must be infinite, extending well beyond the mechanism that supported the Sun and the stars. The surviving books of the Hsüan Yeh school said that the heavens were boundless, "empty and void of substance". Likewise, the "sun, moon, and the company of stars float in the empty space, moving or standing still".
The Italian scientist Galileo Galilei knew that air had mass and so was subject to gravity. In 1640, he demonstrated that an established force resisted the formation of a vacuum. It would remain for his pupil Evangelista Torricelli to create an apparatus that would produce a partial vacuum in 1643. This experiment resulted in the first mercury barometer and created a scientific sensation in Europe. Torricelli suggested that since air has weight, air pressure should decrease with altitude. The French mathematician Blaise Pascal proposed an experiment to test this hypothesis. In 1648, his brother-in-law, Florin Périer, repeated the experiment on the Puy de Dôme mountain in central France and found that the mercury column was shorter by three inches at the summit. This decrease in pressure was further demonstrated by carrying a half-full balloon up a mountain and watching it gradually expand, then contract upon descent.
In 1650, German scientist Otto von Guericke constructed the first vacuum pump: a device that would further refute the principle of horror vacui. He correctly noted that the atmosphere of the Earth surrounds the planet like a shell, with the density gradually declining with altitude. He concluded that there must be a vacuum between the Earth and the Moon.
In the 15th century, German theologian Nicolaus Cusanus speculated that the universe lacked a center and a circumference. He believed that the universe, while not infinite, could not be held as finite as it lacked any bounds within which it could be contained. These ideas led to speculations as to the infinite dimension of space by the Italian philosopher Giordano Bruno in the 16th century. He extended the Copernican heliocentric cosmology to the concept of an infinite universe filled with a substance he called aether, which did not resist the motion of heavenly bodies. English philosopher William Gilbert arrived at a similar conclusion, arguing that the stars are visible to us only because they are surrounded by a thin aether or a void. This concept of an aether originated with ancient Greek philosophers, including Aristotle, who conceived of it as the medium through which the heavenly bodies move.
The concept of a universe filled with a luminiferous aether retained support among some scientists until the early 20th century. This form of aether was viewed as the medium through which light could propagate. In 1887, the Michelson–Morley experiment tried to detect the Earth's motion through this medium by looking for changes in the speed of light depending on the direction of the planet's motion. The null result indicated something was wrong with the concept. The idea of the luminiferous aether was then abandoned. It was replaced by Albert Einstein's theory of special relativity, which holds that the speed of light in a vacuum is a fixed constant, independent of the observer's motion or frame of reference.
The first professional astronomer to support the concept of an infinite universe was the Englishman Thomas Digges in 1576. But the scale of the universe remained unknown until the first successful measurement of the distance to a nearby star in 1838 by the German astronomer Friedrich Bessel. He showed that the star system 61 Cygni had a parallax of just 0.31 arcseconds (compared to the modern value of 0.287″). This corresponds to a distance of over 10 light years. In 1917, Heber Curtis noted that novae in spiral nebulae were, on average, 10 magnitudes fainter than galactic novae, suggesting that the former are 100 times further away. The distance to the Andromeda Galaxy was determined in 1923 by American astronomer Edwin Hubble by measuring the brightness of cepheid variables in that galaxy, a technique discovered by Henrietta Leavitt. This established that the Andromeda Galaxy, and by extension all galaxies, lay well outside the Milky Way. With this, Hubble formulated the Hubble constant, which for the first time allowed a calculation of the age of the universe and the size of the observable universe, initially estimated at 2 billion years and 280 million light-years. These estimates became increasingly precise with better measurements, until in 2006 data from the Hubble Space Telescope allowed a very accurate calculation of the age of the universe and the size of the observable universe.
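Both distance arguments in this paragraph are short calculations, restated in the sketch below: the parallax relation d [pc] = 1/p [arcsec], and Curtis's magnitude argument, where a 10-magnitude deficit means a 10,000-fold flux ratio and hence, for sources of equal luminosity, a 100-fold distance ratio:

```python
import math

# 1. Parallax distance: d [parsec] = 1 / p [arcsec]; 1 pc ~ 3.262 light years.
p = 0.287                      # modern parallax of 61 Cygni, arcsec
d_pc = 1 / p
print(f"61 Cygni: ~{d_pc:.2f} pc ~ {d_pc * 3.262:.1f} light years")

# 2. Nova argument: dm magnitudes fainter means a flux ratio of 100**(dm/5);
#    for equal luminosities, distance scales as the square root of that ratio.
dm = 10
distance_ratio = math.sqrt(100 ** (dm / 5))
print(f"{dm} magnitudes fainter -> {distance_ratio:.0f}x more distant")
```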
The modern concept of outer space is based on the "Big Bang" cosmology, first proposed in 1931 by the Belgian physicist Georges Lemaître. This theory holds that the universe originated from a state of extreme energy density that has since undergone continuous expansion.
The earliest known estimate of the temperature of outer space was by the Swiss physicist Charles É. Guillaume in 1896. Using the estimated radiation of the background stars, he concluded that space must be heated to a temperature of 5–6 K. British physicist Arthur Eddington made a similar calculation to derive a temperature of 3.18 K in 1926. German physicist Erich Regener used the total measured energy of cosmic rays to estimate an intergalactic temperature of 2.8 K in 1933. American physicists Ralph Alpher and Robert Herman predicted 5 K for the temperature of space in 1948, based on the gradual decrease in background energy following the then-new Big Bang theory.
Exploration
For most of human history, space was explored by observations made from the Earth's surface—initially with the unaided eye and then with the telescope. Before reliable rocket technology, the closest that humans had come to reaching outer space was through balloon flights. In 1935, the American Explorer II crewed balloon flight reached an altitude of 22 km. This was greatly exceeded in 1942 when the third launch of the German A-4 rocket climbed to an altitude of about 80 km. In 1957, the uncrewed satellite Sputnik 1 was launched by a Russian R-7 rocket, achieving Earth orbit at an altitude of 215–939 km. This was followed by the first human spaceflight in 1961, when Yuri Gagarin was sent into orbit on Vostok 1. The first humans to escape low Earth orbit were Frank Borman, Jim Lovell and William Anders in 1968 on board the American Apollo 8, which achieved lunar orbit and reached a maximum distance of about 377,000 km from the Earth.
The first spacecraft to reach escape velocity was the Soviet Luna 1, which performed a fly-by of the Moon in 1959. In 1961, Venera 1 became the first planetary probe. It revealed the presence of the solar wind and performed the first fly-by of Venus, although contact was lost before reaching Venus. The first successful planetary mission was the 1962 fly-by of Venus by Mariner 2. The first fly-by of Mars was by Mariner 4 in 1964. Since that time, uncrewed spacecraft have successfully examined each of the Solar System's planets, as well as their moons and many minor planets and comets. They remain a fundamental tool for the exploration of outer space, as well as for observation of the Earth. In August 2012, Voyager 1 became the first man-made object to leave the Solar System and enter interstellar space.
Application
Outer space has become an important element of global society. It provides multiple applications that are beneficial to the economy and scientific research.
The placing of artificial satellites in Earth orbit has produced numerous benefits and has become the dominant sector of the space economy. Satellites relay long-range communications such as television, provide a means of precise navigation, and permit direct monitoring of weather conditions and remote sensing of the Earth. The latter role serves a variety of purposes, including tracking soil moisture for agriculture, prediction of water outflow from seasonal snow packs, detection of diseases in plants and trees, and surveillance of military activities. Satellites also facilitate the discovery and monitoring of climate change influences. They make use of the significantly reduced drag in space to stay in stable orbits, allowing them to span the whole globe efficiently, in contrast to, for example, stratospheric balloons or high-altitude platform stations, which offer different benefits.
The absence of air makes outer space an ideal location for astronomy at all wavelengths of the electromagnetic spectrum. This is evidenced by the spectacular pictures sent back by the Hubble Space Telescope, allowing light from more than 13 billion years ago—almost to the time of the Big Bang—to be observed. Not every location in space is ideal for a telescope. The interplanetary zodiacal dust emits a diffuse near-infrared radiation that can mask the emission of faint sources such as extrasolar planets. Moving an infrared telescope out past the dust increases its effectiveness. Likewise, a site like the Daedalus crater on the far side of the Moon could shield a radio telescope from the radio frequency interference that hampers Earth-based observations.
The deep vacuum of space could make it an attractive environment for certain industrial processes, such as those requiring ultraclean surfaces. Like asteroid mining, space manufacturing would require a large financial investment with little prospect of immediate return. An important factor in the total expense is the high cost of placing mass into Earth orbit: on the order of thousands to tens of thousands of dollars per kilogram, according to a 2006 estimate (allowing for inflation since then). The cost of access to space has declined since 2013. Partially reusable rockets such as the Falcon 9 have lowered the cost of access to space to below $3,500 per kilogram. Even with these new rockets, the cost to send materials into space remains prohibitively high for many industries. Proposed concepts for addressing this issue include fully reusable launch systems, non-rocket spacelaunch, momentum exchange tethers, and space elevators.
Interstellar travel for a human crew remains at present only a theoretical possibility. The distances to the nearest stars mean it would require new technological developments and the ability to safely sustain crews for journeys lasting several decades. For example, the Daedalus Project study, which proposed a spacecraft powered by the fusion of deuterium and helium-3, would require 36 years to reach the "nearby" Alpha Centauri system. Other proposed interstellar propulsion systems include light sails, ramjets, and beam-powered propulsion. More advanced propulsion systems could use antimatter as a fuel, potentially reaching relativistic velocities.
From the Earth's surface, the ultracold temperature of outer space can be used as a renewable cooling technology for various applications on Earth through passive daytime radiative cooling. This enhances longwave infrared (LWIR) thermal radiation heat transfer through the atmosphere's infrared window into outer space, lowering ambient temperatures. Photonic metamaterials can be used to suppress solar heating.
| Physical sciences | Astronomy | null |
177938 | https://en.wikipedia.org/wiki/Moss | Moss | Mosses are small, non-vascular flowerless plants in the taxonomic division Bryophyta sensu stricto. Bryophyta (sensu lato, Schimp. 1879) may also refer to the parent group bryophytes, which comprise liverworts, mosses, and hornworts. Mosses typically form dense green clumps or mats, often in damp or shady locations. The individual plants are usually composed of simple leaves that are generally only one cell thick, attached to a stem that may be branched or unbranched and has only a limited role in conducting water and nutrients. Although some species have conducting tissues, these are generally poorly developed and structurally different from similar tissue found in vascular plants. Mosses do not have seeds and after fertilisation develop sporophytes with unbranched stalks topped with single capsules containing spores. They are typically 0.2–10 cm tall, though some species are much larger. Dawsonia, the tallest moss in the world, can grow to 50 cm in height. There are approximately 12,000 species.
Mosses are commonly confused with liverworts, hornworts and lichens. Although often described as non-vascular plants, many mosses have advanced vascular systems. Like liverworts and hornworts, the haploid gametophyte generation of mosses is the dominant phase of the life cycle. This contrasts with the pattern in all vascular plants (seed plants and pteridophytes), where the diploid sporophyte generation is dominant. Lichens may superficially resemble mosses, and sometimes have common names that include the word "moss" (e.g., "reindeer moss" or "Iceland moss"), but they are fungal symbioses and not related to mosses.
The main commercial significance of mosses is as the main constituent of peat (mostly the genus Sphagnum), although they are also used for decorative purposes, such as in gardens and in the florist trade. Traditional uses of mosses included insulation and the absorption of liquids, up to 20 times their weight. Mosses are keystone species that benefit habitat restoration and reforestation.
Physical characteristics
Description
Botanically, mosses are non-vascular plants in the land plant division Bryophyta. They are usually small (a few centimeters tall) herbaceous (non-woody) plants that absorb water and nutrients mainly through their leaves and harvest carbon dioxide and sunlight to create food by photosynthesis. With the exception of the ancient group Takakiopsida, no known mosses form mycorrhizae, but bryophilous fungi are widespread in mosses and other bryophytes, where they live as saprotrophs, parasites, pathogens and mutualists, some of them endophytes. Mosses differ from vascular plants in lacking water-bearing xylem tracheids or vessels. As in liverworts and hornworts, the haploid gametophyte generation is the dominant phase of the life cycle. This contrasts with the pattern in all vascular plants (seed plants and pteridophytes), where the diploid sporophyte generation is dominant. Mosses reproduce using spores, not seeds, and have no flowers.
Moss gametophytes have stems which may be simple or branched and upright (acrocarp) or prostrate (pleurocarp). The early divergent classes Takakiopsida, Sphagnopsida, Andreaeopsida and Andreaeobryopsida either lack stomata or have pseudostomata that do not form pores. In the remaining classes, stomata have been lost more than 60 times. Their leaves are simple, usually only a single layer of cells with no internal air spaces, often with thicker midribs (nerves). The nerve can run beyond the edge of the leaf tip, termed excurrent. The tip of the leaf blade can be extended as a hair point, made of colourless cells. These appear white against the dark green of the leaves. The edge of the leaf can be smooth or it may have teeth. There may be a distinct type of cell defining the edge of the leaf, distinct in shape and/or colour from the other leaf cells.
Mosses have threadlike rhizoids that anchor them to their substrate, comparable to root hairs rather than the more substantial root structures of spermatophytes. Mosses do not absorb water or nutrients from their substrate through their rhizoids. They can be distinguished from liverworts (Marchantiophyta or Hepaticae) by their multi-cellular rhizoids. Spore-bearing capsules or sporangia of mosses are borne singly on long, unbranched stems, thereby distinguishing them from the polysporangiophytes, which include all vascular plants. The spore-producing sporophytes (i.e. the diploid multicellular generation) are short-lived and usually capable of photosynthesis, but are dependent on the gametophyte for water supply and most or all of their nutrients. Also, in the majority of mosses, the spore-bearing capsule enlarges and matures after its stalk elongates, while in liverworts the capsule enlarges and matures before its stalk elongates. Other differences are not universal for all mosses and all liverworts, but the presence of a clearly differentiated stem with simple-shaped, non-vascular leaves that are not arranged in three ranks all points to the plant being a moss.
Life cycle
Vascular plants have two sets of chromosomes in their vegetative cells and are said to be diploid, i.e. each chromosome has a partner that contains the same, or similar, genetic information. By contrast, mosses and other bryophytes have only a single set of chromosomes and so are haploid (i.e. each chromosome exists in a unique copy within the cell). There is a period in the moss life cycle when they do have a double set of paired chromosomes, but this happens only during the sporophyte stage.
The moss life-cycle starts with a haploid spore that germinates to produce a protonema (pl. protonemata), which is either a mass of thread-like filaments or thalloid (flat and thallus-like). Massed moss protonemata typically look like a thin green felt, and may grow on damp soil, tree bark, rocks, concrete, or almost any other reasonably stable surface. This is a transitory stage in the life of a moss, but from the protonema grows the gametophore ("gamete-bearer") that is structurally differentiated into stems and leaves. A single mat of protonemata may develop several gametophore shoots, resulting in a clump of moss.
From the tips of the gametophore stems or branches develop the sex organs of the mosses. The female organs are known as archegonia (sing. archegonium) and are protected by a group of modified leaves known as the perichaetum (plural, perichaeta). The archegonia are small flask-shaped clumps of cells with an open neck down which the male sperm swim to the egg in the swollen base, or venter. The male organs are known as antheridia (sing. antheridium) and are enclosed by modified leaves called the perigonium (pl. perigonia). The surrounding leaves in some mosses form a splash cup, allowing the sperm contained in the cup to be splashed to neighboring stalks by falling water droplets.
Gametophore tip growth is disrupted by fungal chitin. Galotto et al., 2020 applied chitooctaose and found that tips detected and responded to this chitin derivative by changing gene expression. They concluded that this defense response was probably conserved from the most recent common ancestor of bryophytes and tracheophytes. Orr et al., 2020 found that the microtubules of growing tip cells were structurally similar to F-actin and served a similar purpose.
Mosses can be either dioicous (compare dioecious in seed plants) or monoicous (compare monoecious). In dioicous mosses, male and female sex organs are borne on different gametophyte plants. In monoicous (also called autoicous) mosses, both are borne on the same plant. In the presence of water, sperm from the antheridia swim to the archegonia and fertilisation occurs, leading to the production of a diploid sporophyte. The sperm of mosses is biflagellate, i.e. each has two flagella that aid in propulsion. Since the sperm must swim to the archegonium, fertilisation cannot occur without water. Some species (for example Mnium hornum or several species of Polytrichum) keep their antheridia in so-called 'splash cups', bowl-like structures on the shoot tips that propel the sperm several decimeters when water droplets hit them, increasing the fertilization distance.
After fertilisation, the immature sporophyte pushes its way out of the archegonial venter. It takes several months for the sporophyte to mature. The sporophyte body comprises a long stalk, called a seta, and a capsule capped by the operculum. The capsule and operculum are in turn sheathed by a haploid calyptra, which is the remains of the archegonial venter. The calyptra usually falls off when the capsule is mature. Within the capsule, spore-producing cells undergo meiosis to form haploid spores, upon which the cycle can start again. The mouth of the capsule is usually ringed by a set of teeth called the peristome. This may be absent in some mosses.
Most mosses rely on the wind to disperse the spores. In the genus Sphagnum the spores are projected about 10–20 cm off the ground by compressed air contained in the capsules; the spores are accelerated to about 36,000 times Earth's gravitational acceleration g.
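A back-of-envelope kinematic sketch shows what such an acceleration implies for launch speed. The distance over which the acceleration acts is an assumed value, of the order of the capsule's size, chosen purely for illustration:

```python
import math

# Constant-acceleration launch estimate, v = sqrt(2*a*d). The barrel
# length d is an assumption; only the acceleration comes from the text.
g = 9.81           # m/s^2
a = 36_000 * g     # stated spore acceleration
d = 0.5e-3         # assumed distance over which it acts, m (~capsule scale)

v = math.sqrt(2 * a * d)
print(f"launch speed ~ {v:.0f} m/s")
```

Air drag on spores this small dissipates such speeds within centimeters, which is consistent with the modest launch heights described above.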
It has recently been found that microarthropods, such as springtails and mites, can effect moss fertilization and that this process is mediated by moss-emitted scents. Male and female fire moss, for example, emit different and complex volatile organic scents. Female plants emit more compounds than male plants. Springtails were found to choose female plants preferentially, and one study found that springtails enhance moss fertilization, suggesting a scent-mediated relationship analogous to the plant-pollinator relationship found in many seed plants. The stinkmoss species Splachnum sphaericum develops insect-mediated spore dispersal further by attracting flies to its sporangia with a strong smell of carrion, and providing a strong visual cue in the form of red-coloured swollen collars beneath each spore capsule. Flies attracted to the moss carry its spores to fresh herbivore dung, which is the favoured habitat of the species of this genus.
In many mosses, e.g., Ulota phyllantha, green vegetative structures called gemmae are produced on leaves or branches, which can break off and form new plants without the need to go through the cycle of fertilization. This is a means of asexual reproduction, and the genetically identical units can lead to the formation of clonal populations.
Dwarf males
Moss dwarf males (also known as nannandry or phyllodioicy) originate from wind-dispersed male spores that settle and germinate on the female shoot, where their growth is restricted to a few millimeters. In some species, dwarfness is genetically determined, in that all male spores become dwarf. More often, it is environmentally determined, in that male spores that land on a female become dwarf, while those that land elsewhere develop into large, female-sized males. In the latter case, dwarf males that are transplanted from females to another substrate develop into large shoots, suggesting that the females emit a substance which inhibits the growth of germinating males and possibly also quickens their onset of sexual maturation. The nature of such a substance is unknown, but the phytohormone auxin may be involved.
Having the males growing as dwarfs on the female is expected to increase the fertilization efficiency by minimizing the distance between male and female reproductive organs. Accordingly, it has been observed that fertilization frequency is positively associated with the presence of dwarf males in several phyllodioicous species.
Dwarf males occur in several unrelated lineages and may be more common than previously thought. For example, it is estimated that between one quarter and half of all dioicous pleurocarps have dwarf males.
DNA repair
The moss Physcomitrium patens has been used as a model organism to study how plants repair damage to their DNA, especially the repair mechanism known as homologous recombination. If a plant cannot repair DNA damage, e.g., double-strand breaks, in its somatic cells, the cells can lose normal functions or die. If this occurs during meiosis (part of sexual reproduction), the plant could become infertile. The genome of P. patens has been sequenced, which has allowed several genes involved in DNA repair to be identified. P. patens mutants that are defective in key steps of homologous recombination have been used to work out how the repair mechanism functions in plants. For example, a study of P. patens mutants defective in PpRAD51, a gene that encodes a protein at the core of the recombinational repair reaction, indicated that homologous recombination is essential for repairing DNA double-strand breaks in this plant. Similarly, studies of mutants defective in Ppmre11 or Pprad50 (which encode key proteins of the MRN complex, the principal sensor of DNA double-strand breaks) showed that these genes are necessary for repair of DNA damage as well as for normal growth and development.
Classification
More recently, mosses have been grouped with the liverworts and hornworts in the division Bryophyta (bryophytes, or Bryophyta sensu lato). The bryophyte division itself contains three (former) divisions: Bryophyta (mosses), Marchantiophyta (liverworts) and Anthocerotophyta (hornworts); it has been proposed that these latter divisions are de-ranked to the classes Bryopsida, Marchantiopsida, and Anthocerotopsida, respectively. The mosses and liverworts are now considered to belong to a clade called Setaphyta.
The mosses (Bryophyta sensu stricto) are divided into eight classes: Takakiopsida, Sphagnopsida, Andreaeopsida, Andreaeobryopsida, Oedipodiopsida, Polytrichopsida, Tetraphidopsida, and Bryopsida.
Six of the eight classes contain only one or two genera each. Polytrichopsida includes 23 genera, and Bryopsida includes the majority of moss diversity with over 95% of moss species belonging to this class.
The Sphagnopsida, the peat-mosses, comprise the two living genera Ambuchanania and Sphagnum, as well as fossil taxa. Sphagnum is diverse, widespread, and economically important. These large mosses form extensive acidic bogs in peat swamps. The leaves of Sphagnum have large dead cells alternating with living photosynthetic cells. The dead cells help to store water. Aside from this character, the unique branching, thallose (flat and expanded) protonema, and explosively rupturing sporangium place it apart from other mosses.
Andreaeopsida and Andreaeobryopsida are distinguished by the biseriate (two rows of cells) rhizoids, multiseriate (many rows of cells) protonema, and sporangium that splits along longitudinal lines. Most mosses have capsules that open at the top.
Polytrichopsida have leaves with sets of parallel lamellae, flaps of chloroplast-containing cells that look like the fins on a heat sink. These carry out photosynthesis and may help to conserve moisture by partially enclosing the gas exchange surfaces. The Polytrichopsida differ from other mosses in other details of their development and anatomy too, and can also become larger than most other mosses, with, e.g., Polytrichum commune forming cushions up to 40 cm high. The tallest land moss, a member of the Polytrichidae, is probably Dawsonia superba, native to New Zealand and other parts of Australasia.
Geological history
The fossil record of moss is sparse, due to their soft-walled and fragile nature. Unambiguous moss fossils have been recovered from as early as the Permian of Antarctica and Russia, and a case has been made for Carboniferous mosses. It has further been claimed that tube-like fossils from the Silurian are the macerated remains of moss calyptræ. Mosses also appear to evolve 2–3 times slower than ferns, gymnosperms and angiosperms.
Recent research shows that ancient moss could explain why the Ordovician ice ages occurred. When the ancestors of today's moss started to spread on land 470 million years ago, they absorbed CO2 from the atmosphere and extracted minerals by secreting organic acids that dissolved the rocks they were growing on. These chemically altered rocks in turn reacted with the atmospheric CO2 and formed new carbonate rocks in the ocean through the weathering of calcium and magnesium ions from silicate rocks. The weathered rocks also released significant amounts of phosphorus and iron which ended up in the oceans, where it caused massive algal blooms, resulting in organic carbon burial, extracting more carbon dioxide from the atmosphere. Small organisms feeding on the nutrients created large areas without oxygen, which caused a mass extinction of marine species, while the levels of CO2 dropped all over the world, allowing the formation of ice caps on the poles.
Ecology
Habitat
Mosses live in almost every terrestrial habitat type on Earth. Though mosses are particularly abundant in certain habitats such as peatlands, where Sphagnum mosses are the dominant organism, and in moist boreal, temperate, and montane tropical forests, mosses grow in many other habitats, including habitats with conditions too extreme for vascular plants to survive. Desiccation tolerant mosses are important in arid and semi-arid ecosystems, where they help form biocrusts that mediate extremes of soil temperature, regulate soil moisture, and regulate the release and uptake of carbon. Mosses can live on substrates heated by geothermal activity to temperatures exceeding 50 degrees Celsius, on walls and pavement in urban areas, and in Antarctica. Moss diversity is generally not associated with latitude; boreal and temperate moss diversity is similar to tropical moss diversity. Moss diversity hotspots are found in the northern Andes mountains, Mexico, the Himalayan mountains, Madagascar, Japan, the highlands of eastern Africa, Southeast Asia, central Europe, Scandinavia, and British Columbia.
Moss gametophytes are autotrophic and require sunlight to perform photosynthesis. In most areas, mosses grow chiefly in moist, shaded areas, such as wooded areas and at the edges of streams, but shade tolerance varies by species.
Different moss species grow on different substrates as well. Moss species can be classed as growing on: rocks, exposed mineral soil, disturbed soils, acid soil, calcareous soil, cliff seeps and waterfall spray areas, streamsides, shaded humusy soil, downed logs, burnt stumps, tree trunk bases, upper tree trunks, and tree branches or in bogs. Moss species growing on or under trees are often specific about the species of trees they grow on, such as preferring conifers over broadleaf trees, oaks over alders, or vice versa. While mosses often grow on trees as epiphytes, they are never parasitic on the tree.
Mosses are also found in cracks between paving stones in damp city streets, and on roofs. Some species adapted to disturbed, sunny areas are well adapted to urban conditions and are commonly found in cities. Examples would be Rhytidiadelphus squarrosus, a garden weed in the Vancouver and Seattle areas; Bryum argenteum, the cosmopolitan sidewalk moss; and Ceratodon purpureus, red roof moss, another cosmopolitan species. A few species are wholly aquatic, such as Fontinalis antipyretica, common water moss; and others such as Sphagnum inhabit bogs, marshes and very slow-moving waterways. Such aquatic or semi-aquatic mosses can greatly exceed the normal range of lengths seen in terrestrial mosses. Individual plants 20–30 cm or more long are common in Sphagnum species, for example. But even aquatic species of moss and other bryophytes need their mature capsules to be exposed to air, by seta elongation or seasonal lowering of the water level, to be able to reproduce.
Wherever they occur, mosses require liquid water for at least part of the year to complete fertilisation. Many mosses can survive desiccation, sometimes for months, returning to life within a few hours of rehydration. Mosses in arid habitats, such as the moss Syntrichia caninervis, have adaptations for collecting non-rainfall sources of moisture like dew and fog, capturing condensation from the air.
It is generally believed that in the Northern Hemisphere, the north side of trees and rocks will have more luxuriant moss growth on average than other sides, the assumed reason being that sunshine on the south side creates a drier environment. The reverse would be true in the Southern Hemisphere. Some naturalists hold that mosses simply grow on the damper side of trees and rocks. In some cases, such as sunny climates in temperate northern latitudes, this will be the shaded north side of the tree or rock. On steep slopes, it may be the uphill side. For mosses that grow on tree branches, this is generally the upper side of the branch on horizontally growing sections, or near the crotch. In cool, humid, cloudy climates, all sides of tree trunks and rocks may be equally moist enough for moss growth. Each species of moss requires certain amounts of moisture and sunlight and thus will grow only on certain sections of the same tree or rock.
Some mosses grow underwater or in completely waterlogged conditions; many others prefer well-drained locations. There are mosses that preferentially grow on rocks and tree trunks of particular chemistries.
Relationship with cyanobacteria
In boreal forests, some species of moss play an important role in providing nitrogen for the ecosystem due to their relationship with nitrogen-fixing cyanobacteria. Cyanobacteria colonize moss and receive shelter in return for providing fixed nitrogen. Moss releases the fixed nitrogen, along with other nutrients, into the soil "upon disturbances like drying-rewetting and fire events", making it available throughout the ecosystem.
Cultivation
Moss is often considered a weed in grass lawns, but is deliberately encouraged to grow under aesthetic principles exemplified by Japanese gardening. In old temple gardens, moss can carpet a forest scene. Moss is thought to add a sense of calm, age, and stillness to a garden scene. Moss is also used in bonsai to cover the soil and enhance the impression of age. Rules of cultivation are not widely established. Moss collections are quite often begun using samples transplanted from the wild in a water-retaining bag. Some species of moss can be extremely difficult to maintain away from their natural sites with their unique requirements of combinations of light, humidity, substrate chemistry, shelter from wind, etc.
Growing moss from spores is even less controlled. Moss spores fall in a constant rain on exposed surfaces; those surfaces which are hospitable to a certain species of moss will typically be colonised by that moss within a few years of exposure to wind and rain. Materials which are porous and moisture retentive, such as brick, wood, and certain coarse concrete mixtures, are hospitable to moss. Surfaces can also be prepared with acidic substances, including buttermilk, yogurt, urine, and gently puréed mixtures of moss samples, water and ericaceous compost.
In the cool, humid, cloudy Pacific Northwest, moss is sometimes allowed to grow naturally as a moss lawn, one that needs little or no mowing, fertilizing or watering. In this case, grass is considered to be the weed. Landscapers in the Seattle area sometimes collect boulders and downed logs with mosses growing on them for installation in gardens and landscapes. Woodland gardens in many parts of the world can include a carpet of natural mosses. The Bloedel Reserve on Bainbridge Island, Washington State, is famous for its moss garden. The moss garden was created by removing shrubby underbrush and herbaceous groundcovers, thinning trees, and allowing mosses to fill in naturally.
Green roofs and walls
Mosses are sometimes used in green roofs. Advantages of mosses over higher plants in green roofs include reduced weight loads, increased water absorption, no fertilizer requirements, and high drought tolerance. Since mosses do not have true roots, they require less planting medium than higher plants with extensive root systems. With proper species selection for the local climate, mosses in green roofs require no irrigation once established and are low maintenance. Mosses are also used on green walls.
Mossery
A passing fad for moss-collecting in the late 19th century led to the establishment of mosseries in many British and American gardens. The mossery was typically constructed out of slatted wood, with a flat roof, open to the north side to maintain shade. Samples of moss were installed in the cracks between the wood slats, and the whole mossery was then regularly moistened to maintain growth.
Aquascaping
Aquascaping uses many aquatic mosses. They do best at low nutrient, light, and heat levels, and propagate fairly readily. They help maintain a water chemistry suitable for aquarium fish. They grow more slowly than many aquarium plants, and are fairly hardy.
Growth inhibition
Moss can be a troublesome weed in containerized nursery operations and greenhouses. Vigorous moss growth can inhibit seedling emergence and penetration of water and fertilizer to the plant roots.
Moss growth can be inhibited by a number of methods:
Decreasing availability of water through drainage.
Increasing direct sunlight.
Increasing the number of, and resources available to, competitive plants such as grasses.
Increasing the soil pH with the application of lime.
Heavy traffic, or manually disturbing the moss bed with a rake.
Application of chemicals such as ferrous sulfate (e.g., in lawns) or bleach (e.g., on solid surfaces).
In containerized nursery operations, coarse mineral materials such as sand, gravel, and rock chips are used as a fast-draining top dressing in plant containers to discourage moss growth.
The application of products containing ferrous sulfate or ferrous ammonium sulfate will kill moss; these ingredients are typically found in commercial moss-control products and fertilizers. Sulfur and iron are essential nutrients for some competing plants like grasses. Killing moss will not prevent regrowth unless the conditions favorable to its growth are changed.
Uses
Traditional
Preindustrial societies made use of the mosses growing in their areas.
Sámi people, North American tribes, and other circumpolar peoples used mosses for bedding. Mosses have also been used as insulation both for dwellings and in clothing. Traditionally, dried moss was used in some Nordic countries and Russia as an insulator between logs in log cabins, and tribes of the northeastern United States and southeastern Canada used moss to fill chinks in wooden longhouses. Circumpolar and alpine peoples have used mosses for insulation in boots and mittens. Ötzi the Iceman had moss-packed boots.
The capacity of dried mosses to absorb fluids has made their use practical in both medical and culinary uses. North American tribal people used mosses for diapers, wound dressing, and menstrual fluid absorption. Tribes of the Pacific Northwest in the United States and Canada used mosses to clean salmon prior to drying it, and packed wet moss into pit ovens for steaming camas bulbs. Food storage baskets and boiling baskets were also packed with mosses.
Recent research investigating the Neanderthal remains recovered from El Sidrón has provided evidence that their diet would have consisted primarily of pine nuts, moss and mushrooms. This contrasts with evidence from other European locations, which points to a more carnivorous diet.
In Finland, peat mosses have been used to make bread during famines.
Commercial
There is a substantial market in mosses gathered from the wild. The uses for intact moss are principally in the florist trade and for home decoration. Decaying moss in the genus Sphagnum is also the major component of peat, which is "mined" for use as a fuel, as a horticultural soil additive, and in smoking malt in the production of Scotch whisky.
Sphagnum moss, generally the species S. cristatum and S. subnitens, is harvested while still growing and is dried out to be used in nurseries and horticulture as a plant growing medium.
Some Sphagnum mosses can absorb up to 20 times their own weight in water. In World War I, Sphagnum mosses were used as first-aid dressings on soldiers' wounds, as these mosses were said to absorb liquids three times faster than cotton, retain liquids better, distribute liquids more uniformly throughout themselves, and be cooler, softer, and less irritating. Sphagnum is also claimed to have antibacterial properties. Native Americans were one of the peoples to use Sphagnum for diapers and menstrual pads, a practice that continues in Canada.
In rural UK, Fontinalis antipyretica was traditionally used to extinguish fires as it could be found in substantial quantities in slow-moving rivers and the moss retained large volumes of water which helped extinguish the flames. This historical use is reflected in its specific Latin/Greek name, which means "against fire".
In Mexico, moss is used as a Christmas decoration.
Physcomitrium patens is increasingly used in biotechnology. Prominent examples are the identification of moss genes with implications for crop improvement or human health, and the safe production of complex biopharmaceuticals in the moss bioreactor, developed by Ralf Reski and his co-workers.
London installed several structures called "City Trees": moss-filled walls, each of which is claimed to have "the air-cleaning capability of 275 regular trees" by consuming nitrogen oxides and other types of air pollution and producing oxygen.
| Biology and health sciences | Bryophytes | null |
178182 | https://en.wikipedia.org/wiki/Soyuz%20%28spacecraft%29 | Soyuz (spacecraft) | Soyuz () is a series of spacecraft which has been in service since the 1960s, having made more than 140 flights. It was designed for the Soviet space program by the Korolev Design Bureau (now Energia). The Soyuz succeeded the Voskhod spacecraft and was originally built as part of the Soviet crewed lunar programs. It is launched atop the similarly named Soyuz rocket from the Baikonur Cosmodrome in Kazakhstan.
Following the Soviet Union's dissolution, Roscosmos, the Russian space agency, continued to develop and utilize the Soyuz. Between the Space Shuttle's 2011 retirement and the SpaceX Crew Dragon's 2020 debut, Soyuz was the sole means of crewed transportation to and from the International Space Station, a role it continues to fulfill. The Soyuz design has also influenced other spacecraft, including China's Shenzhou and Russia's Progress cargo vehicle.
The Soyuz is a single-use spacecraft composed of three main sections. The descent module is where cosmonauts are seated for launch and reentry. The orbital module provides additional living space and storage during orbit but is jettisoned before reentry. The service module, responsible for propulsion and power, is also discarded prior to reentry. For added safety and aerodynamics, the spacecraft is encased within a fairing with a launch escape system during liftoff.
History
The first Soyuz mission, Kosmos 133, launched uncrewed on 28 November 1966. The first crewed Soyuz mission, Soyuz 1, launched on 23 April 1967 but ended tragically on 24 April 1967 when the parachute failed to deploy on reentry, killing cosmonaut Vladimir Komarov. The following flight, Soyuz 2, was uncrewed. Soyuz 3, launched on 26 October 1968, became the program's first successful crewed mission. The program suffered another fatal setback during Soyuz 11 in 1971, when cabin depressurization during reentry killed the entire crew. The Soyuz 11 crew remain the only humans to date known to have died above the Kármán line, the conventional definition of the edge of space.
Despite these early tragedies, Soyuz has earned a reputation as one of the safest and most cost-effective human spaceflight vehicles, a legacy built upon its unparalleled operational history. The spacecraft has served as the primary mode of transport for cosmonauts to and from the Salyut space stations, the Mir space station, and International Space Station (ISS).
Design
Soyuz spacecraft are composed of three primary sections (from top to bottom, when standing on the launch pad):
Orbital module: A spheroid compartment providing living space for the crew.
Descent module: A small, aerodynamic capsule in which the crew is seated for launch and which returns the crew to Earth.
Service module: A cylindrical section housing propulsion, power, and other systems.
The orbital and service modules are discarded and destroyed upon reentry. This design choice, while seemingly wasteful, reduces the spacecraft's weight by minimizing the amount of heat shielding required. As a result, Soyuz offers more habitable interior space () compared to its Apollo counterpart (). While the reentry module does return to Earth, it is not reusable; a new Soyuz spacecraft must be built for every mission.
Soyuz can carry up to three crew members and provide life support for about 30 person-days.
A payload fairing protects Soyuz during launch and is jettisoned early in flight. Equipped with an automated docking system, the spacecraft can operate autonomously or under manual control.
Launch escape system
The Vostok spacecraft used an ejector seat to bail out the cosmonaut in the event of a low-altitude launch failure, as well as during reentry; however, it would probably have been ineffective in the first 20 seconds after liftoff, when the altitude would be too low for the parachute to deploy. Inspired by the Mercury LES, Soviet designers began work on a similar system in 1962. This included developing a complex sensing system to monitor various launch-vehicle parameters and trigger an abort if a booster malfunction occurred. Based on data from R-7 launches over the years, engineers developed a list of the most likely failure modes for the vehicle and could narrow down abort conditions to premature separation of a strap-on booster, low engine thrust, loss of combustion-chamber pressure, or loss of booster guidance. The spacecraft abort system (SAS; ) could also be manually activated from the ground, but unlike American spacecraft, there was no way for the cosmonauts to trigger it themselves.
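The abort logic described above amounts to a rule-based monitor over booster telemetry. A purely illustrative sketch follows (parameter names and thresholds are invented for the example; the real SAS logic is not published in this form):

```python
from dataclasses import dataclass

@dataclass
class BoosterTelemetry:
    strap_on_attached: bool
    thrust_fraction: float         # fraction of nominal thrust
    chamber_pressure_ok: bool
    guidance_ok: bool

def should_abort(t: BoosterTelemetry) -> bool:
    # The four failure modes listed in the text, as boolean rules.
    return (not t.strap_on_attached       # premature strap-on separation
            or t.thrust_fraction < 0.9    # low engine thrust (assumed threshold)
            or not t.chamber_pressure_ok  # loss of combustion-chamber pressure
            or not t.guidance_ok)         # loss of booster guidance

print(should_abort(BoosterTelemetry(True, 1.0, True, True)))  # False: nominal flight
print(should_abort(BoosterTelemetry(True, 0.6, True, True)))  # True: thrust shortfall
```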
Since it turned out to be almost impossible to separate the entire payload shroud from the Soyuz service module cleanly, the decision was made to have the shroud split between the service module and descent module during an abort. Four folding stabilizers were added to improve aerodynamic stability during ascent. Two test runs of the SAS were carried out in 1966–1967.
The basic design of the SAS has remained almost unchanged in 50 years of use, and all Soyuz launches carry it. The only modification was in 1972, when the aerodynamic fairing over the SAS motor nozzles was removed for weight-saving reasons, as the redesigned Soyuz 7K-T spacecraft carried extra life-support equipment. The uncrewed Progress resupply ferry has a dummy escape tower and removes the stabilizer fins from the payload shroud. There have been three failed launches of a crewed Soyuz vehicle: Soyuz 18a in 1975, Soyuz T-10a in 1983 and Soyuz MS-10 in October 2018. The 1975 failure was aborted after escape-tower jettison. In 1983, Soyuz T-10a's SAS successfully rescued the cosmonauts from an on-pad fire and explosion of the launch vehicle. Most recently, in 2018, the SAS sub-system in the payload shroud of Soyuz MS-10 successfully rescued the cosmonauts from a rocket failure 2 minutes and 45 seconds after liftoff, after the escape tower had already been jettisoned.
Orbital module
The forepart of the spacecraft is the orbital module (), also known as habitation section. It houses all the equipment that will not be needed for reentry, such as experiments, cameras or cargo. The module also contains a toilet, docking avionics and communications gear. Internal volume is , living space is . On later Soyuz versions (since Soyuz TM), a small window was introduced, providing the crew with a forward view.
A hatch between the orbital module and the descent module can be closed to isolate the orbital module, allowing it to act as an airlock if needed, so that crew members can exit through its side port (near the descent module). On the launch pad, the crew enter the spacecraft through this port. This separation also lets the orbital module be customized to the mission with less risk to the life-critical descent module. The convention of orientation in a micro-g environment differs from that of the descent module: crew members stand or sit with their heads toward the docking port. The orbital module also complicates rescue of the crew while on the launch pad or during an SAS abort.
Separation of the orbital module is critical for a safe landing; without separation of the orbital module, it is not possible for the crew to survive landing in the descent module. This is because the orbital module would interfere with proper deployment of the descent module's parachutes, and the extra mass exceeds the capability of the main parachute and braking engines to provide a safe soft-landing speed. In view of this, the orbital module was separated before the ignition of the return engine until the late 1980s. This guaranteed that the descent module and orbital module would be separated before the descent module was placed in a reentry trajectory. However, after the problematic landing of Soyuz TM-5 in September 1988 this procedure was changed, and the orbital module is now separated after the return maneuver. This change was made as the TM-5 crew could not deorbit for 24 hours after they jettisoned their orbital module, which contained their sanitation facilities and the docking collar needed to attach to Mir. The risk of not being able to separate the orbital module is effectively judged to be less than the risk of needing the facilities in it, including the toilet, following a failed deorbit.
Descent module
The descent module (), also known as a reentry capsule, is used for launch and the journey back to Earth. Half of the descent module is covered by a heat-resistant covering to protect it during reentry; this half faces forward during reentry. It is slowed initially by the atmosphere, then by a braking parachute, followed by the main parachute, which slows the craft for landing. At one meter above the ground, solid-fuel braking engines mounted behind the heat shield are fired to give a soft landing. One of the design requirements for the descent module was for it to have the highest possible volumetric efficiency (internal volume divided by hull area). The best shape for this is a sphere – as the pioneering Vostok spacecraft's descent module used – but such a shape can provide no lift, resulting in a purely ballistic reentry. Ballistic reentries are hard on the occupants due to high deceleration and cannot be steered beyond their initial deorbit burn. Thus it was decided to go with the "headlight" shape that the Soyuz uses – a hemispherical upper area joined by a barely angled (seven degrees) conical section to a classic spherical section heat shield. This shape allows a small amount of lift to be generated due to the unequal weight distribution. The nickname was thought up at a time when nearly every headlight was circular. The small dimensions of the descent module led to it having only two-man crews after the death of the Soyuz 11 crew. The later Soyuz-T spacecraft solved this issue. Internal volume of Soyuz SA is ; is usable for crew (living space).
The thermal protection system on the slightly conical side walls is stood off from the structure to also provide micrometeoroid protection in orbit.
The slightly curved heat shield on the bottom consists of "21mm to 28mm thick ablator (glass-phenolic composite) which is held by brackets approximately 15mm from the 3.5mm thick aluminum AMg-6 substrate. VIM low-density silica fibrous insulation (8mm thick) is contained in the gap between the heat shield ablator and aluminum substrate."
Service module
At the back of the vehicle is the service module (). It has a pressurized container shaped like a bulging can (instrumentation compartment, ) that contains systems for temperature control, electric power supply, long-range radio communications, radio telemetry, and instruments for orientation and control. A non-pressurized part of the service module (propulsion compartment, ) contains the main engine and a liquid-fuelled propulsion system, using N2O4 and UDMH, for maneuvering in orbit and initiating the descent back to Earth. The ship also has a system of low-thrust engines for orientation, attached to the intermediate compartment (). Outside the service module are the sensors for the orientation system and the solar array, which is oriented towards the Sun by rotating the ship. An incomplete separation between the service and reentry modules led to emergency situations during Soyuz 5, Soyuz TMA-10 and Soyuz TMA-11, which led to an incorrect reentry orientation (crew ingress hatch first). The failure of several explosive bolts did not cut the connection between the service and reentry modules on the latter two flights.
Reentry procedure
The Soyuz uses a method similar to the 1970s-era United States Apollo command and service module to deorbit itself. The spacecraft is turned engine-forward, and the main engine is fired for deorbiting on the far side of Earth ahead of its planned landing site. This requires the least propellant for reentry; the spacecraft travels on an elliptical Hohmann transfer orbit to the entry interface point, where atmospheric drag slows it enough to fall out of orbit.
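A back-of-envelope sketch of such a deorbit burn using the vis-viva equation is shown below; the orbital and entry-interface altitudes are illustrative assumptions, not actual Soyuz flight parameters.

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m

r_orbit = R_EARTH + 400_000.0   # assumed ~400 km circular orbit
r_entry = R_EARTH + 100_000.0   # perigee lowered to a ~100 km entry interface

# Speed on the initial circular orbit.
v_circular = math.sqrt(MU / r_orbit)

# Transfer ellipse with apogee at the orbit and perigee at the interface.
a_transfer = (r_orbit + r_entry) / 2
v_apogee = math.sqrt(MU * (2 / r_orbit - 1 / a_transfer))

# Retrograde burn at apogee drops the perigee into the atmosphere.
delta_v = v_circular - v_apogee
print(f"deorbit burn ~{delta_v:.0f} m/s")  # roughly 90 m/s for these numbers
```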
Early Soyuz spacecraft would have the service and orbital modules detach simultaneously from the descent module. Because they are connected by tubing and electrical cables to the descent module, this aided in their separation and avoided having the descent module alter its orientation. Later Soyuz spacecraft detached the orbital module before firing the main engine, which saved propellant. Since the Soyuz TM-5 landing issue, the orbital module is once again detached only after the reentry firing, a procedure in place during (though not the cause of) the emergency situations of Soyuz TMA-10 and TMA-11. The orbital module cannot remain in orbit as an addition to a space station, because the airlock hatch between the orbital and reentry modules is a part of the reentry module, so the orbital module depressurizes after separation.
Reentry firing is usually done on the "dawn" side of the Earth, so that the spacecraft can be seen by recovery helicopters as it descends in the evening twilight, illuminated by the Sun when it is above the shadow of the Earth. The Soyuz craft is designed to come down on land, usually somewhere in the deserts of Kazakhstan in Central Asia. This is in contrast to the early United States crewed spacecraft and the current SpaceX Crew Dragon, which splash down in the ocean.
Spacecraft systems
Thermal control system –
Life support system –
Power supply system –
Communication and tracking systems – Rassvet (Dawn) radio communications system, onboard measurement system (SBI), Kvant-V spacecraft control, Klyost-M television system, orbit radio tracking (RKO)
Onboard complex control system –
Combined propulsion system –
Chaika-3 motion control system (SUD)
Optical/visual devices (OVP) – VSK-4 (), night vision device (VNUK-K, ), docking light, pilot's sight (VP-1, ), laser rangefinder (LPR-1, )
Kurs rendezvous system
SSVP docking system
TORU control mode
Entry actuators system –
Landing aids kit –
Portable survival kit – , containing a TP-82 or Makarov pistol
Launch escape system –
Orbital module (A)
1 SSVP docking mechanism
2, 4 Kurs rendezvous radar antenna
3 television transmission antenna
5 camera
6 hatch
Descent module (B)
7 parachute compartment
8 periscope
9 porthole
11 heat shield
Service module (C)
10, 18 attitude control engines
12 Earth sensors
13 Sun sensor
14 solar panel attachment point
15 thermal sensor
16 Kurs antenna
17 main propulsion
19 communication antenna
20 fuel tanks
21 oxygen tank
Variants
The Soyuz spacecraft has been the subject of continuous evolution since the early 1960s. Thus several different versions, proposals and projects exist.
Specifications
Soyuz 7K (part of the 7K-9K-11K circumlunar complex) (1963)
Sergei Korolev initially promoted the Soyuz A-B-V circumlunar complex (7K-9K-11K) concept (also known as L1) in which a two-man craft Soyuz 7K would rendezvous with other components (9K and 11K) in Earth orbit to assemble a lunar excursion vehicle, the components being delivered by the proven R-7 rocket.
First generation
The crewed Soyuz spacecraft can be classified into design generations. Soyuz 1 through Soyuz 11 (1967–1971) were first-generation vehicles, carrying a crew of up to three without spacesuits and distinguished from those following by their bent solar panels and their use of the Igla automatic docking navigation system, which required special radar antennas. This first generation encompassed the original Soyuz 7K-OK and the Soyuz 7K-OKS for docking with the Salyut 1 space station. The probe and drogue docking system permitted internal transfer of cosmonauts from the Soyuz to the station.
The Soyuz 7K-L1 was designed to launch a crew from the Earth to circle the Moon, and was the primary hope for a Soviet circumlunar flight. It had several test flights in the Zond program from 1967–1970 (Zond 4 to Zond 8), which produced multiple failures in the 7K-L1's reentry systems. The remaining 7K-L1s were scrapped. The Soyuz 7K-L3 was designed and developed in parallel to the Soyuz 7K-L1, but was also scrapped. Soyuz 1 was plagued with technical issues, and cosmonaut Vladimir Komarov was killed when the spacecraft crashed during its return to Earth. This was the first in-flight fatality in the history of spaceflight.
The next crewed version of the Soyuz was the Soyuz 7K-OKS. It was designed for space station flights and had a docking port that allowed internal transfer between spacecraft. The Soyuz 7K-OKS had two crewed flights, both in 1971. Soyuz 11, the second flight, depressurized upon reentry, killing its three-man crew.
Second generation
The second generation, called Soyuz Ferry or Soyuz 7K-T, comprised Soyuz 12 through Soyuz 40 (1973–1981). It did not have solar arrays; two long, thin antennas were fitted in their place. It was developed out of the military Soyuz concepts studied in previous years and was capable of carrying two cosmonauts wearing Sokol space suits (a change made after the Soyuz 11 accident). Several military models were planned, but none actually flew in space; these versions were named Soyuz P, Soyuz PPK, Soyuz R, Soyuz 7K-VI, and Soyuz OIS (Orbital Research Station).
The Soyuz 7K-T/A9 version was used for the flights to the military Almaz space station.
Soyuz 7K-TM was the spacecraft used in the Apollo-Soyuz Test Project in 1975, which saw the first and only docking of a Soyuz spacecraft with an Apollo command and service module. It was also flown in 1976 for the Earth-science mission, Soyuz 22. Soyuz 7K-TM served as a technological bridge to the third generation.
Third generation
The third generation Soyuz-T spacecraft, where the "T" stands for "transport" (), again featured solar panels, allowing longer missions, as well as a revised Igla rendezvous system and a new translation/attitude thruster system on the service module. It could carry a crew of three, now wearing spacesuits. It was used from 1976 until 1986.
Fourth generation
Soyuz-TM (1986–2002)
The fourth generation Soyuz-TM spacecraft, where the "M" stands for "modified" (), were used from 1986 to 2002 for ferry flights to Mir and the International Space Station (ISS).
Soyuz TMA (2003–2012)
The Soyuz TMA spacecraft, where the "A" stands for "anthropometric" (), features several changes to accommodate requirements requested by NASA in order to service the International Space Station (ISS), including more latitude in the height and weight of the crew and improved parachute systems. It is also the first expendable vehicle to feature digital control technology. Soyuz-TMA looks identical to a Soyuz-TM spacecraft on the outside, but interior differences allow it to accommodate taller occupants with new adjustable crew couches.
Soyuz TMA-M (2010–2016)
The Soyuz TMA-M was an upgrade of the Soyuz-TMA, using a new computer, digital interior displays, updated docking equipment, and the vehicle's total mass was reduced by 70 kilograms. The new version debuted on 7 October 2010 with the launch of Soyuz TMA-01M, carrying the ISS Expedition 25 crew.
The Soyuz TMA-08M mission set a new record for the fastest crewed docking with a space station. The mission used a new six-hour rendezvous, faster than the previous Soyuz launches, which had, since 1986, taken two days.
Soyuz MS (since 2016)
Soyuz MS is the final planned upgrade of the Soyuz spacecraft. The "MS" stands for "modernized systems," reflecting upgrades primarily focused on the communications and navigation subsystems. Its maiden flight was in July 2016 with mission Soyuz MS-01.
Major changes include the new Kurs-NA rendezvous system, satellite navigation, re-arranged attitude control thrusters, an improved docking mechanism, a reusable black box, power system improvements, additional micro-meteoroid protection, and a digital camera system.
Related craft
The uncrewed Progress spacecraft are derived from Soyuz and are used for servicing space stations.
In 1995, China signed an agreement with Russia to acquire space technologies. The deal included the transfer of Soyuz spacecraft, life support systems, docking systems, and space suits. Leveraging this equipment, China developed its own Shenzhou spacecraft. While the Shenzhou is not a direct derivative of the Soyuz, it incorporates significant technology and design elements from the spacecraft because of the collaboration.
Image gallery
| Technology | Crewed spacecraft | null |
178235 | https://en.wikipedia.org/wiki/Large%20Binocular%20Telescope | Large Binocular Telescope | The Large Binocular Telescope (LBT) is an optical telescope for astronomy located on Mount Graham, in the Pinaleno Mountains of southeastern Arizona, United States. It is a part of the Mount Graham International Observatory.
When using both 8.4 m (330 inch) wide mirrors, with centres 14.4 m apart, the LBT has the same light-gathering ability as an 11.8 m (464 inch) wide single circular telescope and the resolution of a 22.8 m (897 inch) wide one.
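The quoted equivalent sizes follow from simple geometry. A minimal sketch, assuming only the 8.4 m mirror diameter and 14.4 m centre spacing given above:

```python
import math

mirror_diameter = 8.4   # m, each primary mirror
centre_spacing = 14.4   # m, distance between mirror centres

# Light-gathering: two circles of diameter d collect as much light as one
# circle of diameter d * sqrt(2).
equivalent_diameter = mirror_diameter * math.sqrt(2)
print(f"{equivalent_diameter:.2f} m")   # ~11.88 m, quoted as 11.8 m

# Angular resolution: set by the edge-to-edge baseline of the mirror pair.
resolution_baseline = centre_spacing + mirror_diameter
print(f"{resolution_baseline:.1f} m")   # 22.8 m

# Combined collecting area of both mirrors.
area = 2 * math.pi * (mirror_diameter / 2) ** 2
print(f"{area:.0f} m^2")                # ~111 m^2
```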
Individually, the LBT's mirrors tie for second-largest optical telescope in continental North America, after the Hobby–Eberly Telescope in West Texas. It has the largest monolithic, or non-segmented, mirrors in an optical telescope.
Strehl ratios of 60–90% in the infrared H band and 95% in the infrared M band have been achieved by the LBT.
Project
The LBT was originally named the "Columbus Project". It is a joint project of these members: the Italian astronomical community represented by the Istituto Nazionale di Astrofisica, the University of Arizona, University of Minnesota, University of Notre Dame, University of Virginia, the LBT Beteiligungsgesellschaft in Germany (Max Planck Institute for Astronomy in Heidelberg, Landessternwarte in Heidelberg, Leibniz Institute for Astrophysics Potsdam (AIP), Max Planck Institute for Extraterrestrial Physics in Munich and Max Planck Institute for Radio Astronomy in Bonn); Ohio State University; and the Research Corporation for Science Advancement based in Tucson, Arizona, USA. The cost was around 100 million Euro.
The telescope design has two 8.4 m (330 inch) mirrors mounted on a common base, hence the name "binocular". LBT takes advantage of active and adaptive optics, provided by Arcetri Observatory. The collecting area consists of two 8.4-meter-aperture mirrors, which works out to about 111 m2 combined. This area is equivalent to an 11.8 m circular aperture, which would be greater than any other single telescope, but it is not comparable in many respects, since the light is collected at a lower diffraction limit and is not combined in the same way. Also, an interferometric mode will be available, with a maximum baseline of 22.8 m for aperture synthesis imaging observations and a baseline of 14.4 m for nulling interferometry. This feature operates along one axis with the LBTI instrument at wavelengths of 2.9–13 micrometres, spanning the near and mid-infrared.
The telescope was designed by a group of Italian firms, and assembled by Ansaldo in its Milanese plant.
Mountain controversy
The choice of location sparked considerable local controversy, both from the San Carlos Apache Tribe, who view the mountain as sacred, and from environmentalists who contended that the observatory would cause the demise of an endangered subspecies of the American red squirrel, the Mount Graham red squirrel. Environmentalists and members of the tribe filed some forty lawsuits – eight of which went before a federal appeals court – but the project ultimately prevailed after an act of the United States Congress.
The telescope and mountain observatory have survived two major forest fires in thirteen years, the more recent in the summer of 2017. The squirrels likewise continue to survive; some experts now believe their numbers fluctuate depending on the nut harvest, without regard to the observatory.
First light
The telescope was dedicated in October 2004 and saw first light with a single primary mirror on October 12, 2005, observing NGC 891. The second primary mirror was installed in January 2006 and became fully operational in January 2008. First light with the second primary mirror alone was on September 18, 2006, and with both mirrors together on January 11–12, 2008.
The first binocular light images show three false-color renditions of the spiral galaxy NGC 2770. The galaxy is 88 million light years from the Milky Way galaxy, a relatively close neighbor. The galaxy has a flat disk of stars and glowing gas tipped slightly toward Earth's line of sight.
The first image taken combined ultraviolet and green light, and emphasizes the clumpy regions of newly formed hot stars in the spiral arms. The second image combined two deep red colors to highlight the smoother distribution of older, cooler stars. The third image was a composite of ultraviolet, green and deep red light and shows the detailed structure of hot, moderate and cool stars in the galaxy. The cameras and images were produced by the Large Binocular Camera team, led by Emanuele Giallongo at the Rome Astrophysical Observatory.
In binocular aperture synthesis mode, LBT has a light-collecting area of 111 m2, equivalent to a single primary mirror 11.8 m in diameter, and will combine light to produce image sharpness equivalent to a single 22.8 m telescope. However, this requires a beam combiner that was tested in 2008 but has not been a part of regular operations. The telescope can take images with one side at 8.4 m aperture, or take two images of the same object using different instruments on each side of the telescope.
Adaptive optics
In the summer of 2010, the "First Light Adaptive Optics" (FLAO) – an adaptive optics system with a deformable secondary mirror rather than correcting atmospheric distortion further downstream in the optics – was inaugurated. Using one 8.4 m side, it surpassed Hubble sharpness (at certain light wavelengths), achieving a Strehl ratio of 60–80% rather than the 20–30% of older adaptive optic systems, or the 1% typically achieved without adaptive optics for telescopes of this size. Adaptive optics at a telescope's secondary (M2) was previously tested at MMT Observatory by the Arcetri Observatory and University of Arizona team.
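For readers unfamiliar with the Strehl ratio, the standard Maréchal approximation (general optics background, not taken from this article) relates it to the residual RMS wavefront error. A minimal sketch with illustrative numbers:

```python
import math

def strehl_marechal(rms_wavefront_error_nm: float, wavelength_nm: float) -> float:
    """Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2)."""
    sigma_over_lambda = rms_wavefront_error_nm / wavelength_nm
    return math.exp(-(2 * math.pi * sigma_over_lambda) ** 2)

# Illustrative numbers only: a residual wavefront error of ~120 nm RMS at
# H band (~1650 nm) gives a Strehl ratio of roughly 0.81, i.e. within the
# 60-80% range quoted above.
print(round(strehl_marechal(120, 1650), 2))  # 0.81
```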
In the media
The telescope has made appearances on an episode of the Discovery Channel TV show Really Big Things, National Geographic Channel Big, Bigger, Biggest, and the BBC program The Sky At Night. The BBC Radio 4 radio documentary The New Galileos covered the LBT and the James Webb Space Telescope.
Discoveries and observations
LBT, with the XMM-Newton, was used to discover the galaxy cluster 2XMM J083026+524133 in 2008, over 7 billion light years away from Earth. In 2007 the LBT detected a 26th magnitude afterglow from the gamma ray burst GRB 070125.
In 2017, LBT observed the OSIRIS-REx spacecraft, an uncrewed asteroid sample return spacecraft, in space while it was en route.
Instruments
Some current or planned LBT telescope instruments:
LBC – optical and near ultraviolet wide field prime focus cameras. One is optimized for the blue part of the optical spectrum and one for the red. (Both cameras operational)
PEPSI – A high-resolution and very-high-resolution optical spectrograph and imaging polarimeter at the combined focus. (In development)
MODS – two optical multi object and longslit spectrographs plus imagers. Capable of running in a single mirror or binocular mode. (MODS1 operational – MODS2 in integration on the mountain)
LUCI – two multi-object and longslit infrared spectrographs plus imagers, one for each side (associated with one of the 8m mirrors) of the telescope. The imager has 2 cameras and can observe in both seeing-limited and diffraction-limited (with adaptive optics) modes. End of commissioning and handover to the LBTO was in 2018.
LINC/Nirvana – wide-field interferometric imaging with adaptive optics at the combined focus (in commissioning).
LBTI/LMIRCAM – 2.9 to 5.2 micron Fizeau imaging and medium resolution grism spectroscopy at the combined focus.
LBTI/NOMIC – N band nulling imager for the study of protoplanetary and debris disks at the combined focus. (In commissioning phase – first stabilization of the fringes in December 2013)
FLAO – first light adaptive optics to correct atmospheric distortion
ARGOS – multiple laser guide star unit capable of supporting ground layer or multi conjugate adaptive optics. End of commissioning and handover to LBTO was in 2018.
LUCI
LUCI (originally LUCIFER: Large Binocular Telescope Near-infrared Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research) is the near-infrared instrument for the LBT.
The name of the instrument was changed to LUCI in 2012. LUCI operates in the 0.9–2.5 μm spectral range using a 2048 x 2048 element Hawaii-2RG detector array from Teledyne and provides imaging and spectroscopic capabilities in seeing- and diffraction-limited modes. In its focal plane area, long-slit and multi-slit masks can be installed for single-object and multi-object spectroscopy. A fixed collimator produces an image of the entrance aperture in which either a mirror (for imaging) or a grating can be positioned. Three camera optics with focal ratios (f-numbers) of 1.8, 3.75 and 30 provide image scales of 0.25, 0.12, and 0.015 arcsec/detector element for wide-field, seeing-limited and diffraction-limited observations. LUCI is operated at cryogenic temperatures, and is therefore enclosed in a cryostat of 1.6 m diameter and 1.6 m height, and cooled to about −200 °C by two closed-cycle coolers.
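The quoted image scales can be cross-checked from the focal ratios, assuming the 18 µm pixel pitch typical of Hawaii-2RG arrays and the LBT's 8.4 m primary (both assumptions here, not stated in this passage):

```python
D = 8.4             # m, primary mirror diameter (assumed)
PIXEL = 18e-6       # m, detector element pitch (assumed Hawaii-2RG value)
ARCSEC_PER_RAD = 206265

for f_ratio in (1.8, 3.75, 30):
    focal_length = D * f_ratio                     # effective focal length, m
    scale = ARCSEC_PER_RAD * PIXEL / focal_length  # arcsec per detector element
    print(f"f/{f_ratio}: {scale:.3f} arcsec/pixel")
# f/1.8: 0.246, f/3.75: 0.118, f/30: 0.015 -- matching the quoted
# 0.25, 0.12 and 0.015 arcsec/detector element
```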
LBTO collaboration
Partners in the LBT project
Arizona, USA (25%) – AZ
The University of Arizona (Headquarters) – Tucson
Arizona State University – Tempe
Northern Arizona University – Flagstaff
Germany (25%) – LBTB
Landessternwarte – Heidelberg
Leibniz-Institut für Astrophysik Potsdam – Potsdam
Max-Planck-Institut für Astronomie – Heidelberg
Max-Planck-Institut für Extraterrestrische Physik – Munich
Max-Planck-Institut für Radioastronomie – Bonn
Italy (25%) – INAF
Istituto Nazionale di Astrofisica
Research Corporation for Science Advancement, USA (12.5%) – RC
The Ohio State University – Ohio
University of Notre Dame – Indiana
University of Minnesota – Minnesota
University of Virginia – Virginia
The Ohio State University, Ohio, USA (12.5%) – OSU
Other MGIO facilities
Mount Graham Submillimeter Telescope
Vatican Advanced Technology Telescope
| Technology | Ground-based observatories | null |
178300 | https://en.wikipedia.org/wiki/Intermodal%20container | Intermodal container | An intermodal container, often called a shipping container, or cargo container, (or simply "container") is a large metal crate designed and built for intermodal freight transport, meaning these containers can be used across different modes of transport – such as from ships to trains to trucks – without unloading and reloading their cargo. Intermodal containers are primarily used to store and transport materials and products efficiently and securely in the global containerized intermodal freight transport system, but smaller numbers are in regional use as well. It is like a boxcar that does not have wheels. Based on size alone, up to 95% of intermodal containers comply with ISO standards, and can officially be called ISO containers. These containers are known by many names: freight container, sea container, ocean container, container van or sea van, sea can or C can, or MILVAN, or SEAVAN. The term CONEX (Box) is a technically incorrect carry-over usage of the name of an important predecessor of the ISO containers: the much smaller steel CONEX boxes used by the U.S. Army.
Intermodal containers exist in many types and standardized sizes, but 90 percent of the global container fleet are "dry freight" or "general purpose" containers: durable closed rectangular boxes, made of rust-retardant weathering steel; almost all 8 feet (2.44 m) wide, and of either 20 or 40 feet (6.10 or 12.19 m) standard length, as defined by International Organization for Standardization (ISO) standard 668:2020. The worldwide standard heights are 8 feet 6 inches (2.59 m) and 9 feet 6 inches (2.90 m) – the latter are known as High Cube or Hi-Cube (HC or HQ) containers. These sizes give rise to the terms twenty-foot equivalent unit (TEU) and forty-foot equivalent unit (FEU).
Invented in the early 20th century, 40-foot intermodal containers proliferated during the 1960s and 1970s under the containerization innovations of the American shipping company SeaLand. Like cardboard boxes and pallets, these containers are a means to bundle cargo and goods into larger, unitized loads that can be easily handled, moved, and stacked, and that will pack tightly in a ship or yard. Intermodal containers share a number of construction features to withstand the stresses of intermodal shipping, to facilitate their handling, and to allow stacking. Each has a unique ISO 6346 reporting mark.
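The ISO 6346 reporting mark mentioned above ends in a check digit. The sketch below implements the commonly documented check-digit scheme (letters valued 10–38, skipping multiples of 11, each character weighted by a power of two); treat it as an illustration rather than a normative implementation of the standard:

```python
def iso6346_check_digit(code: str) -> int:
    """Check digit for the first 10 characters of a reporting mark,
    e.g. 'CSQU305438' -> 3 (full mark CSQU3054383)."""
    def char_value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        i = ord(ch) - ord("A")
        # Letters map to 10..38, skipping multiples of 11 (11, 22, 33).
        return 10 + i + (i + 9) // 10

    total = sum(char_value(ch) * 2 ** pos for pos, ch in enumerate(code[:10]))
    return total % 11 % 10  # a remainder of 10 is recorded as 0

print(iso6346_check_digit("CSQU305438"))  # 3
```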
In 2012, there were about 20.5 million intermodal containers in the world of varying types to suit different cargoes. Containers have largely supplanted the traditional break bulk cargo; in 2010, containers accounted for 60% of the world's seaborne trade. The predominant alternative methods of transport carry bulk cargo, whether gaseous, liquid, or solid—e.g., by bulk carrier or tank ship, tank car, or truck. For air freight, the lighter weight IATA-defined unit load devices are used.
History
Origins
Containerization has its origins in early coal mining regions in England beginning in the late 18th century. In 1766 James Brindley designed the box boat 'Starvationer' with ten wooden containers, to transport coal from Worsley Delph (quarry) to Manchester by Bridgewater Canal. In 1795, Benjamin Outram opened the Little Eaton Gangway, upon which coal was carried in wagons built at his Butterley Ironworks. The horse-drawn wheeled wagons on the gangway took the form of containers, which, loaded with coal, could be transshipped from canal barges on the Derby Canal, which Outram had also promoted.
By the 1830s, railways were carrying containers that could be transferred to other modes of transport. The Liverpool and Manchester Railway in the UK was one of these, making use of "simple rectangular timber boxes" to convey coal from Lancashire collieries to Liverpool, where a crane transferred them to horse-drawn carriages. Originally used for moving coal on and off barges, "loose boxes" were used to containerize coal from the late 1780s, at places like the Bridgewater Canal. By the 1840s, iron boxes were in use as well as wooden ones. The early 1900s saw the adoption of closed container boxes designed for movement between road and rail.
Creation of international standards
The first international standard for containers was established by the Bureau International des Containers et du Transport Intermodal (B.I.C.) in 1933, and a second one in 1935, primarily for transport between European countries. American containers at this time were not standardized, and these early containers were not yet stackable – neither in the U.S. nor Europe. In November 1932, the first container terminal in the world was opened by the Pennsylvania Railroad Company in Enola, Pennsylvania. Containerization was developed in Europe and the US as a way to revitalize rail companies after the Wall Street Crash of 1929, which resulted in economic collapse and a drop in all modes of transport.
Mid 20th century innovations
In April 1951 at Zürich Tiefenbrunnen railway station, the Swiss Museum of Transport and the Bureau International des Containers (BIC) held demonstrations of container systems for representatives from a number of European countries, and from the United States. A system was selected for Western Europe, based on the Netherlands' system for consumer goods and waste transportation called Laadkisten (lit. "Loading chests"), in use since 1934. This system used roller containers for transport by rail, truck and ship, in various configurations up to capacity, and up to in size. This became the first post World War II European railway standard of the International Union of Railways – UIC-590, known as "pa-Behälter". It was implemented in the Netherlands, Belgium, Luxembourg, West Germany, Switzerland, Sweden and Denmark.
The use of standardized steel shipping containers began during the late 1940s and early 1950s, when commercial shipping operators and the US military started developing such units. In 1948 the U.S. Army Transportation Corps developed the "Transporter", a rigid, corrugated steel container, able to carry . It was long, wide, and high, with double doors on one end, was mounted on skids, and had lifting rings on the top four corners. After proving successful in Korea, the Transporter was developed into the Container Express (CONEX) box system in late 1952. Based on the Transporter, the size and capacity of the Conex were about the same, but the system was made modular, by the addition of a smaller, half-size unit of long, wide and high. CONEXes could be stacked three high, and protected their contents from the elements. By 1965 the US military used some 100,000 Conex boxes, and more than 200,000 in 1967, making this the first worldwide application of intermodal containers. Their invention made a major contribution to the globalization of commerce in the second half of the 20th century, dramatically reducing the cost of transporting goods and hence of long-distance trade.
From 1949 onward, engineer Keith Tantlinger repeatedly contributed to the development of containers, as well as their handling and transportation equipment. In 1949, while at Brown Trailers Inc. of Spokane, Washington, he modified the design of their stressed skin aluminum 30-foot trailer, to fulfil an order of two-hundred containers that could be stacked two high, for Alaska-based Ocean Van Lines. Steel castings on the top corners provided lifting and securing points.
In 1955, trucking magnate Malcom McLean bought Pan-Atlantic Steamship Company, to form a container shipping enterprise, later known as Sea-Land. The first containers were supplied by Brown Trailers Inc, where McLean met Keith Tantlinger, and hired him as vice-president of engineering and research. Under the supervision of Tantlinger, a new x x Sea-Land container was developed, the length determined by the maximum length of trailers then allowed on Pennsylvanian highways. Each container had a frame with eight corner castings that could withstand stacking loads. Tantlinger also designed automatic spreaders for handling the containers, as well as the twistlock mechanism that connects with the corner castings.
Modern form
Containers in their modern form first began to gain widespread use around 1956, as businesses devised structured processes to obtain optimal benefit from shipping containers. Over time, late-20th-century advances in telecommunications reinforced the value of standardization, making shipping processes more standardized, modular, easier to schedule, and easier to manage.
Two years after McLean's first container ship, the Ideal X, started container shipping on the US East Coast, Matson Navigation followed suit between California and Hawaii. Just like Pan-Atlantic's containers, Matson's were 8 feet (2.44 m) wide and 8 feet 6 inches (2.59 m) high, but due to California's different traffic code Matson chose to make theirs 24 feet (7.32 m) long. In 1968, McLean began container service to South Vietnam for the US military with great success.
Modern ISO standards
ISO standards for containers were published between 1968 and 1970 by the International Organization for Standardization (ISO). These standards allow for more consistent loading, transporting, and unloading of goods in ports throughout the world, thus saving time and resources.
The International Convention for Safe Containers (CSC) is a 1972 regulation by the Inter-governmental Maritime Consultative Organization on the safe handling and transport of containers. It decrees that every container traveling internationally be fitted with a CSC Safety-approval Plate. This holds essential information about the container, including age, registration number, dimensions and weights, as well as its strength and maximum stacking capability.
Impact of industry changes on workers
Longshoremen and related unions around the world struggled with this revolution in shipping goods. For example, by 1971 a clause in the International Longshoremen's Association (ILA) contract stipulated that the work of "stuffing" (filling) or "stripping" (emptying) a container within 50 miles (80 km) of a port must be done by ILA workers, or if not done by ILA, that the shipper needed to pay royalties and penalties to the ILA. Unions for truckers and consolidators argued that the ILA rules were not valid work-preservation clauses, because the work of stuffing and stripping containers away from the pier had not traditionally been done by ILA members. In 1980 the Supreme Court of the United States heard this case and ruled against the ILA.
Impact in worldwide supply shortage of 2020 to present
Some experts have said that the centralized, continuous shipping process made possible by containers has created dangerous liabilities: one bottleneck, delay, or other breakdown at any point in the process can easily cause major delays everywhere up and down the supply chain.
The reliance on containers exacerbated some of the economic and societal damage from the global supply chain crisis of 2020 and 2021 and the resulting shortages related to the COVID-19 pandemic. In January 2021, for example, a shortage of shipping containers at ports caused shipping to be backlogged.
Marc Levinson, author of Outside the Box: How Globalization Changed from Moving Stuff to Spreading Ideas and The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, said in an interview:
Because of delays in the process, it's taking a container longer to go from its origin to its final destination where it's unloaded, so the container is in use longer for each trip. You've just lost a big hunk of the total capacity because the containers can't be used as intensively.
We've had in the United States an additional problem, which is that the ship lines typically charge much higher rates on services from Asia to North America than from North America to Asia. This has resulted in complaints, for example, from farmers and agricultural companies, that it's hard to get containers in some parts of the country because the ship lines want to ship them empty back to Asia, rather than letting them go to South Dakota and load over the course of several days. So we've had exporters in the United States complaining that they have a hard time finding a container that they can use to send their own goods abroad.
Description
Ninety percent of the global container fleet consists of "dry freight" or "general purpose" containers – both of standard and special sizes.
Although container lengths vary widely, according to two 2012 container census reports about 80% of the world's containers are either 20- or 40-foot standard-length boxes of the dry freight design. These typical containers are rectangular, closed box models, with doors fitted at one end, and made of corrugated weathering steel (commonly known as CorTen) with a plywood floor. Although corrugating the sheet metal used for the sides and roof contributes significantly to the container's rigidity and stacking strength, just as in corrugated iron or in cardboard boxes, the corrugated sides cause aerodynamic drag, and up to 10% fuel economy loss in road or rail transport, compared to smooth-sided vans.
Standard containers are 8 feet (2.44 m) wide by 8 feet 6 inches (2.59 m) high, although the taller "High Cube" or "hi-cube" units measuring 9 feet 6 inches (2.90 m) have become very common in recent years. By the end of 2013, high-cube 40 ft containers represented almost 50% of the world's maritime container fleet, according to Drewry's Container Census report.
About 90% of the world's containers are either nominal 20-foot or 40-foot long, although the United States and Canada also use longer domestic units of 45 ft, 48 ft and 53 ft. ISO containers have castings with openings for twistlock fasteners at each of the eight corners, to allow gripping the box from above, below, or the side, and they can be stacked up to ten units high.
Although the 1990 version of ISO standard 1496 only required nine-high stacking, and only for containers rated at the then-standard maximum gross weight, current Ultra Large Container Vessels of the Post New Panamax and Maersk Triple E class stack them ten or eleven high. Moreover, vessels like the Marie Maersk no longer use separate stacks in their holds and other stacks above deck – instead they maximize their capacity by stacking continuously from the bottom of the hull, to as much as 21 high. This requires automated planning to keep heavy containers at the bottom of the stack and light ones on top, to stabilize the ship and to prevent crushing the bottom containers.
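A toy illustration of that planning rule follows; the container marks and weights are invented for the example, and real stowage planning involves many more constraints (not any carrier's actual software):

```python
# Within a single stack, heavier boxes go at the bottom so the stack carries
# its load safely and the ship's centre of gravity stays low.
containers = [
    ("MSKU1234560", 28.5),  # (reporting mark, gross weight in tonnes)
    ("MSKU2345671", 4.2),
    ("MSKU3456782", 30.0),
    ("MSKU4567893", 12.8),
]

# Bottom tier first: sort by descending weight.
stack = sorted(containers, key=lambda c: c[1], reverse=True)
for tier, (mark, weight) in enumerate(stack, start=1):
    print(f"tier {tier}: {mark} ({weight} t)")
```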
Regional intermodal containers, such as European, Japanese and U.S. domestic units, however, are mainly transported by road and rail, and can frequently only be stacked up to two or three laden units high. Although the two ends are quite rigid, containers flex somewhat during transport.
Container capacity is often expressed in twenty-foot equivalent units (TEU, or sometimes teu). A twenty-foot equivalent unit is a measure of containerized cargo capacity equal to one standard 20-foot-long container. This is an approximate measure, wherein the height of the box is not considered. For example, the 9 ft 6 in tall high-cube, as well as half-height 20-foot containers, are equally counted as one TEU. Similarly, extra-long 45 ft containers are commonly counted as just two TEU, no different from standard 40-foot-long units. Two TEU are equivalent to one forty-foot equivalent unit (FEU).
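A small sketch of this counting convention, in which height is ignored and length alone determines the TEU value:

```python
def nominal_teu(length_ft: int) -> int:
    """TEU by the convention described above: a 20 ft box is 1 TEU,
    while 40 ft and 45 ft boxes both count as 2 TEU; height is ignored."""
    return 1 if length_ft <= 20 else 2

fleet = [20, 40, 40, 45, 40]               # container lengths in feet (example data)
total = sum(nominal_teu(l) for l in fleet)
print(total, "TEU =", total / 2, "FEU")    # 9 TEU = 4.5 FEU
```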
In 2014 the global container fleet grew to a volume of 36.6 million TEU, based on Drewry Shipping Consultants' Container Census. Moreover, in 2014 for the first time in history 40-foot High-Cube containers accounted for the majority of boxes in service, measured in TEU. In 2019 it was noted by global logistics data analysis startup Upply that China's role as 'factory of the world' is further incentivizing the use of 40-foot containers, and that the computational standard 1 TEU boxes only make up 20% of units on major east–west liner routes, and demand for shipping them keeps dropping. In the 21st century, the market has shifted to using 40-foot high-cube dry and refrigerated containers more and more predominantly. Forty-foot units have become the standard to such an extent that the sea freight industry now charges less than 30% more for moving a 40-ft unit than for a 1 TEU box. Although 20-ft units mostly have heavy cargo, and are useful for stabilizing both ships and revenue, carriers financially penalize 1 TEU boxes by comparison.
For container manufacturers, 40-foot High-Cubes now dominate market demand both for dry and refrigerated units. Manufacturing prices for regular dry freight containers are typically in the range of $1750–$2000 U.S. per CEU (container equivalent unit), and about 90% of the world's containers are made in China. The average age of the global container fleet was a little over 5 years from end 1994 to end 2009, meaning containers remain in shipping use for well over 10 years.
Gooseneck tunnel
A gooseneck tunnel – an indentation in the floor structure that meshes with the gooseneck on dedicated container semi-trailers – is a mandatory feature in the bottom structure of 1AAA and 1EEE (40- and 45-ft high-cube) containers, and optional but typical on standard-height, forty-foot and longer containers.
Types
Other than the standard, general purpose container, many variations exist for use with different cargoes. The most prominent of these are refrigerated containers (also called reefers) for perishable goods, that make up 6% of the world's shipping boxes. Tanks in a frame, for bulk liquids, account for another 0.75% of the global container fleet.
Although these variations are not of the standard type, they mostly are ISO standard containers – in fact the ISO 6346 standard classifies a broad spectrum of container types in great detail. Aside from different size options, the most important container types are:
General-purpose dry vans, for boxes, cartons, cases, sacks, bales, pallets, drums, etc. Special interior layouts are known, such as:
rolling-floor containers, for difficult-to-handle cargo
garmentainers, for shipping garments on hangers (GOH)
Ventilated containers. Essentially dry vans, but either passively or actively ventilated, for instance for organic products requiring ventilation.
Temperature controlled – either insulated, refrigerated, and/or heated containers, for perishable goods
Tank containers, for liquids, gases, or powders. Frequently these are dangerous goods, and in the case of gases one shipping unit may contain multiple gas bottles
Bulk containers (sometimes bulktainers), either closed models with roof-lids, or hard or soft open-top units for top loading, for instance for bulk minerals. Containerized coal carriers and "bin-liners" (containers designed for the efficient road and rail transportation of rubbish from cities to recycling and dump sites) are used in Europe.
Open-top and open-side containers, for instance for easy loading of heavy machinery or oversize pallets. Crane systems can be used to load and unload crates without having to disassemble the container itself. Open sides are also used for ventilating hardy perishables like apples or potatoes.
Log cradles for cradling logs
Platform based containers such as:
flat-rack and bolster containers, for barrels, drums, crates, and any heavy or bulky out-of-gauge cargo, like machinery, semi-finished goods or processed timber. Empty flat-racks can either be stacked or shipped sideways in another ISO container
collapsible containers, ranging from flushfolding flat-racks to fully closed ISO and CSC certified units with roof and walls when erected.
trash containers, for carrying trash bags and cans to and from recycling facilities and landfills.
Containers for offshore use have a few different features, like pad eyes, and must meet additional strength and design requirements, standards and certification, such as DNV 2.7-1 by Det Norske Veritas, LRCCS by Lloyd's Register, the Guide for Certification of Offshore Containers by the American Bureau of Shipping, and the international standard ISO 10855: Offshore containers and associated lifting sets, in support of IMO MSC/Circ. 860.
A multitude of equipment, such as generators, has been installed in containers of different types to simplify logistics.
Swap body units usually have the same bottom corner fixtures as intermodal containers, and often have folding legs under their frame so that they can be moved between trucks without using a crane. However they frequently do not have the upper corner fittings of ISO containers, and are not stackable, nor can they be lifted and handled by the usual equipment like reach-stackers or straddle-carriers. They are generally more expensive to procure.
Specifications
Basic terminology of globally standardized intermodal shipping containers is set out in the following standard:
ISO 830:(1999) Freight containers – Vocabulary, 2nd edition; last reviewed and confirmed in 2016.
From its inception, the ISO standards on international shipping containers have consistently spoken of them as 'Series 1' containers – deliberately so conceived, to leave room for another such series of interrelated container standards in the future.
Basic dimensions and permissible gross weights of intermodal containers are largely determined by two ISO standards:
ISO 668:2013–2020 Series 1 freight containers—Classification, dimensions and ratings
ISO 1496-1:2013 Series 1 freight containers—Specification and testing—Part 1: General cargo containers for general purposes
Weights and dimensions of the most common (standardized) types of containers are given below. Forty-eight-foot and fifty-three-foot containers have not yet been incorporated in the latest, 2020 edition of ISO 668. The ISO standard maximum gross mass for all standard sizes except 10-ft boxes was raised to 36,000 kg (79,400 lb) per Amendment 1 to ISO 668:2013, in 2016. Draft Amendment 1 of ISO 668:2020 – for the eighth edition – maintains this. Given the average container lifespan, the majority of the global container fleet has not caught up with this change yet.
Values vary slightly from manufacturer to manufacturer, but must stay within the tolerances dictated by the standards. Empty weight (tare weight) is not determined by the standards but by the container's construction, and is therefore only indicative; it is nevertheless needed to calculate a net load figure, by subtracting it from the maximum permitted gross weight.
The bottom row in the table gives the legal maximum cargo weights for U.S. highway transport, and those based on use of an industry-common tri-axle chassis. Cargo must also be loaded evenly inside the container, to avoid axle weight violations. The maximum gross weights that U.S. railroads accept or deliver are lower for 20-foot containers than for 40-foot containers, in contrast to the global ISO-standard gross weight for 20-footers, which was raised to the same as 40-footers in 2005. In the U.S., containers loaded up to the rail cargo weight limit cannot move over the road, as they will exceed the U.S. highway limit.
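A minimal sketch of the net-load arithmetic described above, with assumed (typical, but not normative) figures for a 40-foot dry van:

```python
# Assumed example values; actual tare weights vary by manufacturer.
max_gross_kg = 30_480   # a common ISO rating for a 40 ft container
tare_kg = 3_750         # illustrative tare weight of a 40 ft dry van

payload_kg = max_gross_kg - tare_kg
print(f"Maximum net load: {payload_kg} kg")  # 26730 kg
```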
Other sizes
Australian RACE containers
Australian RACE containers are also slightly wider, to optimise them for the use of Australian standard pallets, or are longer and wider still, to be able to fit up to 40 pallets.
European pallet wide containers
European pallet wide (or PW) containers are minimally wider and have shallow side corrugation, to offer just enough internal width to allow common European Euro-pallets – 1.2 m (47 in) long by 0.8 m (31 in) wide – to be loaded with significantly greater efficiency and capacity. Their typical internal width of about 2.44 m (96 in), a gain of roughly 10 cm (4 in) over the ISO-usual 2.34 m (92 in), lets users either load two Euro-pallets end to end across the container's width, or three of them side by side (provided the pallets are neatly stacked, without overspill), whereas in standard ISO containers a strip of internal floor width cannot be used by Euro-pallets.
As a result, while being virtually interchangeable:
A 20-foot PW can load 15 Euro-pallets – four more, or 36% better than the normal 11 pallets in an ISO-standard 20-foot unit
A 40-foot PW can load 30 Euro-pallets – five more, or 20% better than the 25 pallets in a standard 40-foot unit, and
A 45-foot PW can load 34 Euro-pallets – seven more, or 26% better than 27 in a standard 45-foot container.
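The percentage gains quoted in the list above follow directly from the pallet counts; a throwaway check:

```python
for pw, std in [(15, 11), (30, 25), (34, 27)]:
    gain = (pw - std) / std * 100
    print(f"{pw} vs {std} pallets: +{gain:.0f}%")  # 36%, 20%, 26%
```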
Some pallet-wides are simply manufactured with the same ISO-standard floor structure, but with the side panels welded in such that the ribs/corrugations are embossed outwards instead of indenting to the inside. This makes it possible for some pallet-wides to remain barely wider externally than standard ISO units, while others are built wider still.
The pallet-wide high-cube container has gained particularly wide acceptance, as these containers can replace the swap bodies that are common for truck transport in Europe. The EU has started a standardization for pallet wide containerization in the European Intermodal Loading Unit (EILU) initiative.
Many sea shipping providers in Europe allow these on board, as their external width overhang over standard containers is sufficiently minor that they fit in the usual interlock spaces in ships' holds, as long as their corner-casting patterns (both in the floor and the top) still match those of regular 40-foot units, for stacking and securing.
North American containers
The North American market has widely adopted containerization, especially for domestic shipments that need to move between road and rail transport. While they appear similar to ISO-standard containers, there are several significant differences: they are high-cube units by ISO height standards, their width matches the maximum width of road vehicles in the region but is wider than that of ISO-standard containers, and they are often not built strongly enough to endure the rigors of ocean transport.
48-foot containers
The first North American containers to come to market were 48 ft (14.6 m) long. This size was introduced by container shipping company American President Lines (APL) in 1986. The size of the containers matched new federal regulations, passed in 1983, which prohibited states from outlawing the operation of single trailers up to 48 feet long or 102 inches wide. Being both longer and wider, this size has 29% more volume capacity than the standard 40-ft High-Cube, yet the costs of moving it by truck or rail are almost the same.
53-foot containers
In the late 1980s, the federal government announced it would once again allow an increase in the length of trailers, to 53 feet (16.2 m), at the start of 1990. Anticipating this change, 53-foot containers were introduced in 1989. These large boxes have 60% more capacity than 40-ft containers, enabling shippers to consolidate more cargo into fewer containers.
In 2007, APL introduced the first 53-foot ocean-capable containers, designed to withstand voyages on its South China-to-Los Angeles service. In 2013, APL stopped offering vessel space for 53-foot containers on its trans-Pacific ships. In 2015, Crowley and TOTE Maritime each announced the construction of their respective second combined container and roll-on/roll-off ships for the Puerto Rico trade, specifically designed to maximize cubic cargo capacity by carrying 53-foot containers. Within Canada, Oceanex offers 53-foot-container ocean service to and from Newfoundland. 53-foot containers are also used on some Asia-Pacific international shipping routes.
Canadian 60-foot containers
In April 2017, Canadian Tire and Canadian Pacific Railway announced the deployment of what they claimed to be the first 60-foot intermodal containers in North America. The containers are transportable on the road using specially configured trucks and telescoping trailers (where vehicle size limits permit), and on the railway using the top positions of double-stack container cars. According to initial projections, Canadian Tire believed the containers would allow it to increase the volume of goods shipped per container by 13%. Five years after their deployment, analyst Larry Gross observed that United States truck size regulations are more constraining than those in Canada, and predicted that for the foreseeable future these larger containers would remain exclusive to Canada.
Small containers
The ISO 668 standard has so far never standardized small containers to be the same height as so-called "standard height", 8 ft 6 in (2.59 m) tall, 20- and 40-foot containers. By the ISO standard, 10-foot containers (and the previously included 5-ft and 6-ft boxes) are only of unnamed, 8-foot (2.44 m) height. But industry makes 10-foot units more frequently at 8 ft 6 in (2.59 m) height, to mix, match (and stack) better in a fleet of longer, 8 ft 6 in tall containers. Smaller units, on the other hand, are no longer standardized, leading to deviating lengths, with non-standard widths of 2.20 m (86.6 in) and 1.95 m (76.8 in) respectively, and non-standard heights of 2.26 m (7 ft 5 in) and 1.91 m (6 ft 3.2 in) respectively, for storage or off-shore use.
U.S. military
The United States military continues to use small containers, strongly reminiscent of its Transporter and Conex boxes of the 1950s and 1960s. These mostly comply with (previous) ISO standard dimensions, or are a direct derivative thereof. Current terminology of the United States armed forces calls these small containers Bicon, Tricon and Quadcon, with sizes that correspond to (previous) ISO 668 standard sizes 1D, 1E and 1F respectively. These containers are 8 ft (2.44 m) tall, with a footprint either one half (Bicon), one third (Tricon) or one quarter (Quadcon) the size of a standard 20-foot, one-TEU container. At a nominal length of 10 ft (3.05 m), two Bicons coupled together lengthwise match one 20-foot ISO container, but their height is 6 in (15 cm) shy of the more commonly available 10-foot ISO containers of so-called "standard" height, which are 8 ft 6 in (2.59 m) tall. Tricons and Quadcons, however, have to be coupled transversely – either three or four in a row – to be stackable with twenty-foot containers. Their length of 8 ft (2.44 m) corresponds to the width of a standard 20-foot container, which is why there are forklift pockets at their ends as well as in the sides of these boxes, and the doors only have one locking bar each. The smallest of these, the Quadcon, exists in two heights, of which only the taller, 8 ft (2.44 m) version conforms to ISO 668 standard dimensions (size 1F).
ABC bulk containers
ABC containers are small containers, typically 20 ft long and 5 ft high, used for hauling dense materials. The smaller size reduces the tare weight (as compared to using a half-full standard height container). They are normally shipped on specialized railroad flatcars, where 6 containers can be carried in the space of 4 standard containers.
Japan: 12-foot containers
In Japan's domestic freight rail transport, most containers are 12 ft (3.66 m) long, in order to fit Japan's unique standard pallet sizes.
Reporting mark
Each container is allocated a standardized ISO 6346 reporting mark (ownership code), four letters long ending in either U, J or Z, followed by six digits and a check digit. The ownership code for intermodal containers is issued by the Bureau International des Conteneurs (International Container Bureau, abbr. BIC) in France, hence the name "BIC-Code" for the intermodal container reporting mark. So far there exist only four-letter BIC-Codes ending in "U".
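The check digit is derived from the ten preceding characters, with letter values that skip multiples of 11, each weighted by a power of two. A minimal sketch of the ISO 6346 algorithm (the function name is our own; "CSQU305438" is a commonly cited example code):

```python
def iso6346_check_digit(code: str) -> int:
    """Check digit for the first 10 characters of an ISO 6346 mark
    (4 letters + 6 digits), e.g. 'CSQU305438' -> 3."""
    # The index in this string is the character's value; '?' marks the
    # skipped multiples of 11 (11, 22, 33) that letters never take.
    values = "0123456789A?BCDEFGHIJK?LMNOPQRSTU?VWXYZ"
    total = sum(values.index(ch) * 2 ** pos
                for pos, ch in enumerate(code.upper()))
    return total % 11 % 10  # a remainder of 10 is conventionally 0

print(iso6346_check_digit("CSQU305438"))  # 3
```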
The placement and registration of BIC-Codes is standardized by commissions TC104 and TC122 in the JTC1 of the ISO, which are dominated by shipping companies. Shipping containers are labelled with a series of identification codes that includes the manufacturer code, the ownership code, the usage classification code, the UN placard for hazardous goods, and reference codes for additional transport control and security.
Following the extended usage of pallet-wide containers in Europe, the EU started the Intermodal Loading Unit (ILU) initiative, which showed advantages for the intermodal transport of containers and swap bodies. This led to the introduction of ILU-Codes, defined by the standard EN 13044, which have the same format as the earlier BIC-Codes. The International Container Bureau (BIC) agreed to only issue ownership codes ending with U, J or Z. The new allocation office of the UIRR (International Union of Combined Road-Rail Transport Companies) agreed to only issue ownership reporting marks for swap bodies ending with A, B, C, D or K – companies having a BIC-Code ending with U can be allocated an ILU-Code ending with K with the same preceding letters. Since July 2011 the new ILU-Codes can be registered; since July 2014 all intermodal ISO containers and intermodal swap bodies must have an ownership code, and by July 2019 all of them must bear a standard-conforming placard.
Handling
Containers are transferred between rail, truck, and ship by container cranes at container terminals. Forklifts, reach stackers, straddle carriers, container jacks and cranes may be used to load and unload trucks or trains outside of container terminals. Swap bodies, sidelifters, tilt deck trucks, and hook trucks allow transfer to and from trucks with no extra equipment.
ISO-standard containers can be handled and lifted in a variety of ways by their corner fixtures, but the structure and strength of 45-foot (type E) containers limits their tolerance of side-lifting; nor can they be forklifted, according to ISO 3874 (1997).
Transport
Containers can be transported by container ship, truck and freight trains as part of a single journey without unpacking. Units can be secured in transit using "twistlock" points located at each corner of the container. Every container has a unique BIC code painted on the outside for identification and tracking, and is capable of carrying up to 20–25 tonnes. Costs for transport are calculated in twenty-foot equivalent units (TEU).
Rail
When carried by rail, containers may be carried on spine cars, flatcars, or well cars. The latter are specially designed for container transport and can accommodate double-stacked containers. However, the loading gauge of a rail system may restrict the modes and types of container shipment. The smaller loading gauges often found in European railroads will only accommodate single-stacked containers. In some countries, such as the United Kingdom, there are sections of the rail network through which high-cube containers cannot pass, or can pass through only on well cars. On the other hand, Indian Railways runs double-stacked containers on flatcars under 25 kV overhead electrical wires, which must be sufficiently high above the track. China Railway also runs double-stacked containers under overhead wires, but must use well cars to do so, since its wires hang lower above the track.
Sea
About 90% of non-bulk cargo worldwide is transported by container, and the largest container ships can carry over 19,000 TEU (twenty-foot equivalent units, i.e. how many 20-foot containers can fit on a ship). Between 2011 and 2013, an average of 2,683 containers a year were reported lost at sea. Other estimates go up to 10,000; of these, 10% are expected to contain chemicals toxic to marine life. Various systems are used for securing containers on ships; overall, losses of containers at sea are low.
Air
Containers can also be transported by plane, as within intermodal freight transport. However, transporting containers by air is typically avoided due to the cost and the scarcity of aircraft that can accommodate such awkwardly sized cargo.
There are special aviation containers, smaller than intermodal containers, called unit load devices.
Securing and security
Securing containers and contents
There are many established methods and materials for stabilizing and securing intermodal containers loaded on ships, as well as the internal cargo inside the boxes. Conventional restraint methods and materials such as steel strapping and wood blocking and bracing have been around for decades and are still widely used. Polyester strapping and lashing, and synthetic webbings are also common today. Dunnage bags (also known as "air bags") are used to keep unit loads in place.
Flexi-bags can also be directly loaded, stacked in food-grade containers; indeed, their standard shape fills the entire floor surface of a 20 ft ISO container.
Non-shipping uses
Containerized equipment
Container-sized units are also often used for moving large pieces of equipment to temporary sites. Specialised containers are particularly attractive to militaries already using containerisation to move much of their freight around. Shipment of specialized equipment in this way simplifies logistics and may prevent identification of high value equipment by enemies. Such systems may include command and control facilities, mobile operating theatres or even missile launchers (such as the Russian 3M-54 Klub surface-to-surface missile).
Complete water treatment systems can be installed in containers and shipped around the world.
Electric generators can be permanently installed in containers to be used for portable power.
Repurposing
Half the containers that enter the United States leave empty. Their value in the US is lower than in China, so they are sometimes used for other purposes, typically but not always at the end of their voyaging lives. The US military often used its Conex containers as on-site storage, or as easily transportable housing for command staff and medical clinics. Nearly all of the more than 150,000 Conex containers shipped to Vietnam remained in the country, primarily as storage or other mobile facilities. Permanent or semi-permanent placement of containers for storage is common. A regular forty-foot container contains several tonnes of steel, which takes a considerable amount of energy to melt down, so repurposing used shipping containers is increasingly a practical solution to both social and ecological problems.
Shipping container architecture employs used shipping containers as the main framing of modular home designs, where the steel may be an integrated part of the design, or be camouflaged into a traditional looking home. They have also been used to make temporary shops, cafes, and computer datacenters, e.g. the Sun Modular Datacenter.
Intermodal containers are not strong enough for conversion to underground bunkers without additional bracing, as the walls cannot sustain much lateral pressure and will collapse. Also, the wooden floor of many used containers could contain some fumigation residues, rendering them unsuitable as confined spaces, such as for prison cells or bunkers. Cleaning or replacing the wood floor can make these used containers habitable, with proper attention to such essential issues as ventilation and insulation.
Single-time use
The City of Göttingen has deployed containers for the disablement of unexploded ordnance: either FIBCs filled with sand or IBCs filled with water. When the bomb squad performs controlled detonations, such prepared containers absorb shock and fragments. This use requires level, load-bearing ground; the deformed containers are unsuitable for further circulation.
International standards
ASTM D5728-00 Standard Practices for Securement of Cargo in Intermodal and Unimodal Surface Transport
ISO 668:2013 Series 1 freight containers – Classification, dimensions and ratings
ISO 830:1999 Freight containers – Vocabulary
ISO 1161:1984 Series 1 freight containers – Corner fittings – Specification
ISO 1496 – Series 1 freight containers – Specification and testing
ISO 1496-1:2013 – Part 1: General cargo containers for general purposes
ISO 1496-2:2008 – Part 2: Thermal containers
ISO 1496-3:1995 – Part 3: Tank containers for liquids, gases, and pressurized dry bulk
ISO 1496-4:1991 – Part 4: Non-pressurized container for dry bulk
ISO 1496-5:1991 – Part 5: Platform and platform based containers
ISO 2308:1972 Hooks for lifting freight containers of up to 30 tonnes capacity – Basic requirements
ISO 3874:1997 Series 1 freight containers – Handling and securing
ISO 6346:1995 Freight containers – Coding, identification and marking
ISO 9897:1997 Freight containers – Container equipment data exchange (CEDEX) – General communication codes
ISO/TS 10891:2009 Freight containers – Radio frequency identification (RFID) – Licence plate tag
ISO 14829:2002 Freight containers – Straddle carriers for freight container handling – Calculation of stability
ISO 17363:2007 Supply chain applications of RFID – Freight containers
ISO/PAS 17712:2006 Freight containers – Mechanical seals
ISO 18185-2:2007 Freight containers – Electronic seals
| Technology | Containers | null |
178345 | https://en.wikipedia.org/wiki/Acer%20platanoides | Acer platanoides | Acer platanoides, commonly known as the Norway maple, is a species of maple native to eastern and central Europe and western Asia, from Spain east to Russia, north to southern Scandinavia and southeast to northern Iran. It was introduced to North America in the mid-1700s as a shade tree. It is a member of the family Sapindaceae.
Description
Acer platanoides is a deciduous tree, growing to 20–30 m (65–100 ft) tall with a trunk up to 1.5 m (5 ft) in diameter, and a broad, rounded crown. The bark is grey-brown and shallowly grooved. Unlike many other maples, mature trees do not tend to develop shaggy bark. The shoots are green at first, soon becoming pale brown. The winter buds are shiny red-brown.
The leaves are opposite and palmately lobed with five lobes, 7–14 cm (2.8–5.5 in) long and 8–20 cm (3.1–7.9 in) across; the lobes each bear one to three side teeth, with an otherwise smooth margin. The leaf petiole is 8–20 cm (3.1–7.9 in) long, and secretes a milky juice when broken. The autumn colour is usually yellow, occasionally orange-red.
The flowers are borne in corymbs of 15–30 together, yellow to yellow-green with five sepals and five petals; flowering occurs in early spring before the new leaves emerge. The fruit is a double samara with two winged seeds. The seeds are disc-shaped and strongly flattened; the wings are widely spread, approaching a 180° angle. The tree typically produces a large quantity of viable seeds.
Under ideal conditions in its native range, the Norway maple may live up to 250 years, but it often has a much shorter life expectancy; in North America, for example, sometimes only 60 years. Especially when used on streets, it can have insufficient space for its root network and is prone to its roots wrapping around themselves, girdling and killing the tree. In addition, its roots tend to be quite shallow, and thereby easily out-compete nearby plants for nutrient uptake. Norway maples often cause significant damage and cleanup costs for municipalities and homeowners when branches break off in storms, as they do not have strong wood.
Classification and identification
The Norway maple is a member (and is the type species) of the section Platanoidea Pax, characterised by flattened, disc-shaped seeds and the shoots and leaves containing milky sap. Other related species in this section include Acer campestre (field maple), Acer cappadocicum (Cappadocian maple), Acer lobelii (Lobel's maple), and Acer truncatum (Shandong maple). From the field maple, the Norway maple is distinguished by its larger leaves with pointed, not blunt, lobes, and from the other species by the presence of one or more teeth on all of the lobes.
It is also frequently confused with the more distantly related Acer saccharum (sugar maple). The sugar maple is easy to differentiate by clear sap in the petiole (leaf stem); Norway maple petioles have white sap. The tips of the points on Norway maple leaves reduce to a fine "hair", while the tips of the points on sugar maple leaves are, on close inspection, rounded. On mature trees, sugar maple bark is more shaggy, while Norway maple bark has small, often criss-crossing grooves. While the shape and angle of leaf lobes vary somewhat within all maple species, the leaf lobes of Norway maple tend to have a more triangular (acuminate) shape, in contrast to the more finely toothed lobes of sugar maples, that narrow towards the base. Flowering and seed production begins at ten years of age; however, large quantities of seeds are not produced until the tree is 20. As with most maples, Norway maple is normally dioecious (separate male and female trees), occasionally monoecious, and trees may change gender from year to year.
The fruits of Norway maple are paired samaras with widely diverging wings, distinguishing them from those of sycamore, Acer pseudoplatanus, which are at 90 degrees to each other. Norway maple seeds are flattened, while those of sugar maple are globose. The sugar maple usually has a brighter orange autumn color, where the Norway maple is usually yellow, although some of the red-leaved cultivars appear more orange.
The flowers emerge in spring before the leaves and last 2–3 weeks. Leafout of Norway maple occurs roughly when air temperatures reach 55 °F (12 °C) and there are at least 13 hours of daylight. Leaf drop in autumn is initiated when day length falls to approximately 10 hours. Depending on the latitude, leaf drop may vary by as much as three weeks, beginning in the second week of October in Scandinavia and the first week of November in southern Europe. Unlike the seeds of some other maples, which wait for the soil to warm up, A. platanoides seeds require only about three months of cold exposure and will sprout in early spring, around the same time that leafout begins. Norway maple does not require freezing temperatures for proper growth; however, it is adapted to higher latitudes with long summer days and does not perform well when planted south of the 37th parallel, the approximate southern limit of its range in Europe. Further, most North American Norway maples are believed to be descended from stock brought from Germany, at approximately 48°N to 54°N, not from the more southerly ecotypes found in Italy and the Balkans that evolved under lighting conditions similar to those of the continental United States. The heavy seed crop and high germination rate contribute to its invasiveness in North America, where it forms dense monotypic stands that choke out native vegetation. The tree is also capable of growing in low light within a forest canopy, leafs out earlier than most North American maple species, and its growing season tends to run longer, as the lighting conditions of the United States (see above) result in fall dormancy occurring later than it does at the higher latitudes of Europe. It is one of the few introduced species that can successfully invade and colonize a virgin forest. By comparison, in its native range, Norway maple is rarely a dominant species and instead occurs mostly as a scattered understory tree.
Cultivation and uses
The wood is hard, yellowish-white to pale reddish, with the heartwood not distinct; it is used for furniture and woodturning. Norway maple sits ambiguously between the hard and soft maples in Janka hardness. The wood is rated as non-durable to perishable in regard to decay resistance. In Europe, it is used for furniture, flooring and musical instruments, especially violins.
Norway maple has been widely taken into cultivation in other areas, including western Europe northwest of its native range. It grows north of the Arctic Circle at Tromsø, Norway. In North America, it is planted as a street and shade tree as far north as Anchorage, Alaska. In Ontario, it is common in cultivation north to Sault Ste. Marie and Sudbury; although not considered reliably hardy farther north, it has been established at Kapuskasing and Iroquois Falls, and even at Moose Factory. It is most recommended in USDA Hardiness Zones 4 to 7, but will grow in warmer zones (at least up to Zone 10) where summer heat is moderate, as along the Pacific coast south to the Los Angeles basin. It tends to prefer wetter, oceanic climates. During the 1950s–60s it became popular as a street tree due to the large-scale loss of American elms from Dutch elm disease.
It is favored due to its tall trunk and tolerance of poor, compacted soils and urban pollution, conditions in which the sugar maple has difficulty. It has become a popular species for bonsai in Europe, and is used for medium to large bonsai sizes and a multitude of styles. Norway maples are not typically cultivated for maple syrup production due to the lower sugar content of the sap compared to sugar maple.
Cultivars
Many cultivars have been selected for distinctive leaf shapes or colorations, such as the dark purple of 'Crimson King' and 'Schwedleri', the variegated leaves of 'Drummondii', the light green of 'Emerald Queen', and the deeply divided, feathery leaves of 'Dissectum' and 'Lorbergii'. The purple-foliage cultivars have orange to red autumn colour. 'Columnare' is selected for its narrow upright growth. The cultivars 'Crimson King' and 'Prigold' have gained the Royal Horticultural Society's Award of Garden Merit.
As an invasive species in North America
The Norway maple was introduced to northeastern North America between 1750 and 1760 as an ornamental shade tree. It was brought to the Pacific Northwest in the 1870s. Today, Norway maples tend to be most common in the Pacific Northwest, in southern Ontario, and along the Kennebec River in southern Maine. The roots of Norway maples grow very close to the ground surface, starving other plants of moisture. For example, lawn grass (and even weeds) will usually not grow well beneath a Norway maple, but English ivy, with its minimal rooting needs, may thrive. In addition, the dense canopy of Norway maples can inhibit understory growth. Some have suggested Norway maples may also release chemicals that discourage undergrowth, although this claim is controversial. A. platanoides has been shown to inhibit the growth of native saplings, whether present as a canopy tree or as a sapling. The Norway maple also suffers less herbivory than the sugar maple, giving it a competitive advantage over the latter species. As a result of these characteristics, it is considered invasive in some states, and has been banned for sale in New Hampshire and Massachusetts. The state of New York has classified it as an invasive plant species. Despite these steps, the species is still available and widely used for urban plantings in many areas.
Natural enemies
The larvae of a number of species of Lepidoptera feed on Norway maple foliage. Ectoedemia sericopeza, the Norway maple seedminer, is a moth of the family Nepticulidae. The larvae emerge from eggs laid on the samara and tunnel to the seeds. Norway maple is generally free of serious diseases, though can be attacked by the powdery mildew Uncinula bicornis, and verticillium wilt disease caused by Verticillium spp. "Tar spots" caused by Rhytisma acerinum infection are common but largely harmless. Aceria pseudoplatani is an acarine mite that causes a 'felt gall', found on the underside of leaves of both sycamore maple (Acer pseudoplatanus) and Norway maples.
| Biology and health sciences | Sapindales | Plants |
9585510 | https://en.wikipedia.org/wiki/Animal%20slaughter | Animal slaughter | Animal slaughter is the killing of animals, usually referring to killing domestic livestock. It is estimated that each year, 80 billion land animals are slaughtered for food. Most animals are slaughtered for food; however, they may also be slaughtered for other reasons such as for harvesting of pelts, being diseased and unsuitable for consumption, or being surplus for maintaining a breeding stock. Slaughter typically involves some initial cutting, opening the major body cavities to remove the entrails and offal but usually leaving the carcass in one piece. Such dressing can be done by hunters in the field (field dressing of game) or in a slaughterhouse. Later, the carcass is usually butchered into smaller cuts.
The animals most commonly slaughtered for food are cattle and water buffalo, sheep, goats, pigs, deer, horses, rabbits, poultry (mainly chickens, turkeys, ducks and geese), insects (a commercial species is the house cricket), and increasingly, fish in the aquaculture industry (fish farming). In 2020, Faunalytics reported that the countries with the largest numbers of slaughtered cows and chickens are China, the United States, and Brazil. Pigs are slaughtered by far the most in China, followed by the United States, Germany, Spain, Vietnam, and Brazil. For sheep, China again slaughtered the most, this time followed by Australia and New Zealand. Similarly, the amount (in tonnes) of fish used for production is highest in China, Indonesia, Peru, India, Russia, and the United States (in that order).
Modern history
The use of a sharpened blade for the slaughtering of livestock has been practised throughout history. Prior to the development of electric stunning equipment, some species were killed by simply striking them with a blunt instrument, sometimes followed by exsanguination with a knife.
The belief that this was unnecessarily cruel and painful to the animal eventually led to the adoption of specific stunning and slaughter methods in many countries. One of the first campaigners on the matter was the eminent physician, Benjamin Ward Richardson, who spent many years of his later working life developing more humane methods of slaughter as a result of attempting to discover and adapt substances capable of producing general or local anaesthesia to relieve pain in people. As early as 1853, he designed a chamber that could kill animals by gassing them. He also founded the Model Abattoir Society in 1882 to investigate and campaign for humane methods of slaughter and experimented with the use of electric current at the Royal Polytechnic Institution.
The development of stunning technologies occurred largely in the first half of the twentieth century. In 1911, the Council of Justice to Animals (later the Humane Slaughter Association, or HSA) was established in England to improve the slaughter of livestock. In the early 1920s, the HSA introduced and demonstrated a mechanical stunner, which led to the adoption of humane stunning by many local authorities.
The HSA went on to play a key role in the passage of the Slaughter of Animals Act 1933. This made the mechanical stunning of cows and electrical stunning of pigs compulsory, with the exception of Jewish and Muslim meat. Modern methods, such as the captive bolt pistol and electric tongs were required, and the act's wording specifically outlawed the poleaxe. The period was marked by the development of various innovations in slaughterhouse technologies, not all of them particularly long-lasting.
Methods
Stunning
Various methods are used to kill or render an animal unconscious during animal slaughter.
Electrical (stunning or slaughtering with electric current known as electronarcosis) This method is used for swine, sheep, calves, cattle, and goats. Current is applied either across the brain or the heart to render the animal unconscious before being killed. In industrial slaughterhouses, chickens are killed prior to scalding by being passed through an electrified water-bath while shackled.
Gaseous (carbon dioxide) This method can be used for sheep, calves and swine. The animal is asphyxiated by the use of CO2 gas before being killed. In several countries, CO2 stunning is mainly used on pigs. A number of pigs enter a chamber which is then sealed and filled with 80% to 90% CO2 in air. The pigs lose consciousness within 13 to 30 seconds. Older research produced conflicting results, with some showing pigs tolerated CO2 stunning and others showing they did not. However, the current scientific consensus is that the "inhalation of high concentration of carbon dioxide is aversive and can be distressing to animals." Nitrogen has been used to induce unconsciousness, often in conjunction with CO2. Domestic turkeys are averse to high concentrations of CO2 (72% CO2 in air) but not low concentrations (a mixture of 30% CO2 and 60% argon in air with 3% residual oxygen).
Mechanical (captive bolt pistol) This method can be used for sheep, swine, goats, calves, cattle, horses, mules, and other equines. A captive bolt pistol is applied to the head of the animal to quickly render them unconscious before being killed. There are three types of captive bolt pistols, penetrating, non-penetrating and free bolt. The use of penetrating captive bolts has largely been discontinued in commercial situations to minimize the risk of transmission of disease when parts of the brain enter the bloodstream.
Firearm (gunshot/free bullet) This method can be used for cattle, calves, sheep, swine, goats, horses, mules, and other equines. It is also the standard method for taking down wild game animals such as deer with the intention of consuming their meat. A conventional firearm is used to fire a bullet into the brain or through the heart of the animal to render the animal quickly unconscious (and presumably dead).
Killing
Exsanguination The animal either has its throat cut or has a chest stick inserted cutting close to the heart. In both these methods, main veins and/or arteries are cut and allowed to bleed.
Manual Used on poultry and other animals; different methods are practiced, for example: a) grabbing the bird by the head, then snapping its neck using quick movements; b) the bird is put upside down inside a metal funnel, then the head is either quickly cut off or struck using the back end of a machete or knife; c) cattle, sheep and goats are tied, then struck multiple times in the head with a sledgehammer until the animal dies or loses consciousness.
Drug administration Used to ensure the animal is dead. However, because this method is expensive and time-consuming, and renders the animals' bodies toxic and inedible, it is mainly used for animal euthanasia, not as a commercialized slaughter method.
Preslaughter handling
Whether animals are stunned before slaughter or not, they suffer stress while waiting to be killed. A 1996 veterinary review found that there are many ways in which animals suffer and die during the preslaughter period. They include:
Dehydration: Animals may not be provided with water at market or during their journey to the slaughterhouse and may arrive dehydrated. The effects of severe dehydration include severe thirst, nausea, a hot-dry body, dry tongue, loss of co-ordination and concentrated urine of a small volume.
Emotional stress during transport: The unfamiliarity of being on board a transport truck causes fear in animals, and if they are cooped up with others who they do not know, they may start fighting. The noise and jolting of the truck also causes stress and cows, pigs, horses and birds are at particular risk of suffering from motion sickness.
Temperature stress during transport: Some animals die because of the heat that develops in the closely confined conditions on board the transport truck. During transport, animals are not able to express all the behaviors which normally allow them to keep cool like seeking shade, wallowing, licking their fur or stretching their wings and legs. During transport the only useful way they can dissipate heat is by panting. In colder climates, the animals can be exposed to extreme low temperatures, resulting in hypothermia.
Torn skin, bruising and injury: Caused by rough handling of animals, such as beating the animals with sticks when they refuse to move forward or dragging them along the ground when they fall down. The insults which lead to bruising may be painful, and the swelling and inflammation associated with a bruise lead to a longer-lasting pain.
Sickness and disease: Farmers vary between countries in their attitude as to which sick and diseased animals can be sent for slaughter. Some take the view that the slaughterhouses are expert at salvaging what they can from carcasses and so most diseased animals are sent in, whereas in other countries farmers appreciate that diseased stock are low grade and their likely low return does not justify sending them in. Sickness and disease are two of the most serious forms of animal suffering and transporting seriously ill animals imposes an additional stress.
Fecal soiling: In some countries, especially where animals come off lush pasture, transport is the main period when they pick up body surface fecal contamination. The emotional stress associated with transport no doubt induces defecation and this compounds the problem.
National laws
Europe
The measures for sanitary checks, animal welfare protection and slaughtering procedures are harmonised throughout the European Union, and detailed by the European Commission's regulations (EC) 853/2004, 854/2004 and 1099/2009.
Canada
In Canada, the handling and slaughter of food animals is a shared responsibility of the Canadian Food Inspection Agency (CFIA), industry, stakeholders, transporters, operators and every person who handles live animals.
Canadian law requires that all federally registered slaughter establishments ensure that all species of food animals are handled and slaughtered humanely. The CFIA verifies that federal slaughter establishments are compliant with the Meat Inspection Regulations.
The CFIA's humane slaughter requirements take effect when the animals arrive at the federally registered slaughter establishment. Industry is required to comply with the Meat Inspection Regulations for all animals under their care.
The Meat Inspection Regulations define the conditions for the humane slaughter of all species of food animals in federally registered establishments. Some of the provisions contained in the regulations include:
guidelines and procedures for the proper unloading, holding and movement of animals in slaughter facilities
requirements for the segregation and handling of sick or injured animals
requirements for the humane slaughter of food animals
United Kingdom
The Department for Environment, Food and Rural Affairs (Defra) is the main governing body responsible for legislation and codes of practice covering animal slaughter in the UK.
In the UK the methods of slaughter are largely the same as those used in the United States with some differences.
The use of captive bolt equipment and electrical stunning are approved methods of stunning sheep, goats, cattle and calves for consumption, with the use of gas reserved for swine.
Until 2004, it was illegal to slaughter animals in sight of their conspecifics (members of the same species), because it was thought to cause them distress. However, there was a concern that moving the animals away from their conspecifics to a different place to be slaughtered would increase the stun-to-kill time (the time between stunning the animal and killing it), increasing the risk that the animal would regain consciousness; it was consequently recommended that slaughter in front of conspecifics be permitted alongside a mandatory limit on stun-to-kill time. Legislation was introduced which allowed animals to be slaughtered in sight of their conspecifics, but no legal maximum stun-to-kill time was set. Some critics argue that this resulted in the "worst of both worlds", as it meant that the slaughter methods now caused distress to conspecifics without reliably ensuring the animals were killed before regaining consciousness.
United States
In the United States, the United States Department of Agriculture (USDA) specifies the approved methods of livestock slaughter.
Each of these methods is outlined in detail, and the regulations require that inspectors identify operations which cause "undue" "excitement and discomfort" of animals.
In 1958, the law that is enforced today by the USDA Food Safety and Inspection Service (FSIS) was passed as the Humane Slaughter Act of 1958. This Act requires the proper treatment and humane handling of all food animals slaughtered in USDA inspected slaughter plants. It does not apply to chickens or other birds.
4D Meat
Meat from animals which are dead, diseased, disabled or dying (4-D meat) on the arrival at the slaughterhouse is often salvaged for rendering, and used by a wide range of industries including pet food manufacturers, zoos, greyhound kennels, and mink ranches.
The U.S. Code (Title 21, Chapter 12, Subchapter II, § 644) regulates transactions, transportation, or importation of 4-D animals to prevent use as human food:
"No person, firm, or corporation engaged in the business of buying, selling, or transporting in commerce, or importing, dead, dying, disabled, or diseased animals, or any parts of the carcasses of any animals that died otherwise than by slaughter, shall buy, sell, transport, offer for sale or transportation, or receive for transportation, in commerce, or import, any dead, dying, disabled, or diseased cattle, sheep, swine, goats, horses, mules or other equines, or parts of the carcasses of any such animals that died otherwise than by slaughter, unless such transaction, transportation or importation is made in accordance with such regulations as the Secretary may prescribe to assure that such animals, or the unwholesome parts or products thereof, will be prevented from being used for human food purposes."
The 2004 report to the US Congress titled "Animal Rendering: Economics and Policy", available in the library of the Congressional Research Service, explains in its introduction that renderers in the US and Canada convert dead animals and other waste material into sellable products:
“Renderers convert dead animals and animal parts that otherwise would require disposal into a variety of materials, including edible and inedible tallow and lard and proteins such as meat and bone meal (MBM). These materials in turn are exported or sold to domestic manufacturers of a wide range of industrial and consumer goods such as livestock feed and pet food, soaps, pharmaceuticals, lubricants, plastics, personal care products, and even crayons.”
Although some authors have found health problems associated with the consumption of 4-D meat by certain species in its raw form, or found it potentially hazardous, the FDA considers it fit for animal consumption:
"Pet food consisting of material from diseased animals or animals which have died otherwise than by slaughter, which is in violation of 402(a)(5) will not ordinarily be actionable, if it is not otherwise in violation of the law. It will be considered fit for animal consumption."
Religious laws
Ritual slaughter is the overarching term accounting for various methods of slaughter used by religions around the world for food production. While keeping religious autonomy, these methods of slaughter, within the United States, are governed by the Humane Slaughter Act and various religion-specific laws, most notably, Shechita and Dhabihah.
Jewish law (Shechita)
Animal slaughter in Judaism falls in accordance with the religious law of Shechita. In preparation, the animal being prepared for slaughter must be considered kosher (fit) before the act of slaughter can commence and before the meat can be consumed. The basic law of the Shechita process requires the rapid and uninterrupted severance of the major vital organs and vessels. The slaughterer slits the throat, resulting in a quick drop in blood pressure, restricting blood flow to the brain. This abrupt loss of pressure results in the rapid and irreversible cessation of consciousness and sensibility to pain (a requirement held in high regard by most institutions).
Islamic law (Dhabihah)
Animal slaughtering in Islam is in accordance with the Qur'an. For the meat to be lawful (halal) according to Islam, it must come from an animal which is a member of a lawful species, and it must be ritually slaughtered, i.e. according to the Law, the sole code recognized by the group as legitimate. The animal is killed in a way similar to the Jewish ritual, with the throat being slit (dhabh), resulting in a quick drop in blood pressure, restricting blood flow to the brain. This abrupt loss of pressure results in the rapid and irreversible cessation of consciousness and sensibility to pain (a requirement held in high regard by most institutions). The slaughterer must say Bismillah (in the name of Allah/God) before slaughtering the animal. Blood must be drained out of the carcass.
Sikh customs (Jhatka)
The practice of Jhatka in India developed out of the Sikh tradition in accordance with the value of Ahimsa (no harm). Sikhs believe that an animal should be slaughtered quickly and with as little pain as possible in order to reduce bad Karma that may result from such a practice. In India today most establishments will provide both Halal and Jhatka options for dishes containing chicken and lamb. Jhatka meat is not widely available outside India. Jhatka meat is also often considered to be the preferred method of slaughter for Sikhs in India and abroad.
Effects on livestock workers
In 2010, Human Rights Watch described slaughterhouse line work in the United States as a human rights crime. Slaughterhouses in the United States commonly illegally employ and exploit underage workers and illegal immigrants. In a report by Oxfam America, slaughterhouse workers were observed not being allowed breaks, were often required to wear diapers, and were paid below minimum wage.
American slaughterhouse workers are three times more likely to suffer serious injury than the average American worker. NPR reports that pig and cattle slaughterhouse workers are nearly seven times more likely to suffer repetitive strain injuries than average. The Guardian reports that on average there are two amputations a week involving slaughterhouse workers in the United States. On average, one employee of Tyson Foods, the largest meat producer in America, suffers the amputation of a finger or limb per month. The Bureau of Investigative Journalism reported that over a period of six years in the UK, 78 slaughter workers lost fingers, parts of fingers or limbs, more than 800 workers had serious injuries, and at least 4,500 had to take more than three days off after accidents. According to a 2018 study in the Italian Journal of Food Safety, slaughterhouse workers are instructed to wear ear protectors to protect their hearing from the constant screams of animals being killed. A 2004 study in the Journal of Occupational and Environmental Medicine found that "excess risks were observed for mortality from all causes, all cancers, and lung cancer" in workers employed in the New Zealand meat processing industry.
The act of slaughtering animals, or of raising or transporting animals for slaughter, may engender psychological stress or trauma in the people involved. A 2016 study in Organization indicates, "Regression analyses of data from 10,605 Danish workers across 44 occupations suggest that slaughterhouse workers consistently experience lower physical and psychological well-being along with increased incidences of negative coping behavior." A 2009 study by criminologist Amy Fitzgerald indicates, "slaughterhouse employment increases total arrest rates, arrests for violent crimes, arrests for rape, and arrests for other sex offenses in comparison with other industries." As authors from the PTSD Journal explain, "These employees are hired to kill animals, such as pigs and cows that are largely gentle creatures. Carrying out this action requires workers to disconnect from what they are doing and from the creature standing before them. This emotional dissonance can lead to consequences such as domestic violence, social withdrawal, anxiety, drug and alcohol abuse, and PTSD."
Public attitudes
Even though around 90% of US adults regularly consume meat, almost half of them appear to support a ban on slaughterhouses: in the Sentience Institute's 2017 survey on attitudes towards animal farming, of 1,094 US adults, 49% supported a ban on factory farming, 47% a ban on slaughterhouses, and 33% a ban on animal farming. The 2017 survey was replicated by researchers at Oklahoma State University, who found similar results. They also found that 73% of respondents answered "yes" to the question "Were you aware that slaughterhouses are where livestock are killed and processed into meat, such that, without them, you would not be able to consume meat?".
In the United States, many public protest slaughters were held in the late 1960s and early 1970s by the National Farmers Organization. Protesting low prices for meat, farmers would kill their own animals in front of media representatives. The carcasses were wasted and not eaten. However, this effort backfired because it angered television audiences to see animals being needlessly and wastefully killed.
Animal welfare
There has been controversy over whether or not animals should be slaughtered and over the various methods used. Some people believe sentient beings should not be harmed regardless of the purpose, or that meat production is an insufficient justification for harm.
Religious slaughter laws and practices have always been a subject of debate, and the certification and labeling of meat products remain to be standardized. Animal welfare concerns are being addressed to improve slaughter practices by providing more training and new regulations. There are differences between conventional and religious slaughter practices, although both have been criticized on grounds of animal welfare. Concerns about religious slaughter focus on the stress caused during the preparation stages before the slaughtering, pain and distress that may be experienced during and after the neck cutting, and prolonged times to loss of brain function and death if stunning is not applied.
| Technology | Animal husbandry | null |
1044681 | https://en.wikipedia.org/wiki/Mesh | Mesh | A mesh is a barrier made of interlaced strands of metal, fiber or other flexible or ductile materials. A mesh is similar to a web or a net in that it has many interwoven strands.
Types
A plastic mesh may be extruded, oriented, expanded, woven or tubular. It can be made from polypropylene, polyethylene, nylon, PVC or PTFE.
A metal mesh may be woven, knitted, welded, expanded, sintered, photo-chemically etched or electroformed (screen filter) from steel or other metals.
In clothing, mesh is loosely woven or knitted fabric that has many closely spaced holes. Knitted mesh is frequently used for modern sports jerseys and other clothing like hosiery and lingerie.
A meshed skin graft is a piece of harvested skin that has been systematically fenestrated to create a mesh-like patch. Meshing of skin grafts provides coverage of a greater surface area at the recipient site, and also allows for the egress of excess serous or sanguinous fluid, which can compromise the graft establishment via formation of haematoma or seroma. However, it results in a rather pebbled appearance upon healing that may ultimately look less aesthetically pleasing.
Fiberglass mesh is a neatly woven, crisscross pattern of fiberglass thread that can be used to create new products such as door screens, filtration components, and reinforced adhesive tapes. It is commonly sprayed with a PVC coating to make it stronger, last longer, and to prevent skin irritation.
Coiled wire fabric is a type of mesh that is constructed by interlocking metal wire coils via a simple corkscrew method. The resulting spirals are then woven together to create a flexible metal fabric panel. Coiled wire fabric mesh is a product that is used by architects to design commercial and residential structures. It is also used in industrial settings to protect personnel and contain debris. Additionally, coiled wire fabric mesh is used for zoo enclosures, typically aviary and small mammal exhibits.
Uses
Meshes are often used to screen out insects. Wire screens on windows and mosquito netting are meshes.
Wire screens can be used to shield against radio frequency radiation, e.g. in microwave ovens and Faraday cages.
Metal and nylon wire mesh filters are used in filtration.
Wire mesh is used in guarding for secure areas and as protection in the form of vandal screens.
Wire mesh can be fabricated to produce park benches, waste baskets and other baskets for material handling.
Woven meshes are basic to screen printing.
Surgical mesh is used to provide a reinforcing structure in surgical procedures like inguinal hernioplasty, and umbilical hernia repair.
Meshes are used as drum heads in practice and electronic drum sets.
Meshes such as chicken wire or hardware cloth are used as fencing for livestock or poultry.
Humane animal trapping uses woven or welded wire mesh cages (chicken wire or hardware cloth) to trap wild animals like raccoons and skunks in populated areas.
Meshes can be used for eyes in masks.
| Technology | Flexible components | null |
1044856 | https://en.wikipedia.org/wiki/Shotgun%20cartridge | Shotgun cartridge | A shotgun cartridge, shotshell, or shell is a type of rimmed, cylindrical (straight-walled) ammunition used specifically in shotguns. It is typically loaded with numerous small, spherical sub-projectiles called shot. Shotguns typically use a smoothbore barrel with a tapered constriction at the muzzle to regulate the extent of scattering.
Some cartridges contain a single solid projectile known as a slug (sometimes fired through a rifled slug barrel). The casing usually consists of a paper or plastic tube with a metallic base containing the primer. The shot charge is typically contained by wadding inside the case. The caliber of the cartridge is known as its gauge.
The projectiles are traditionally made of lead, but other metals like steel, tungsten and bismuth are also used due to restrictions on lead, or for performance reasons such as achieving higher shot velocities by reducing the mass of the shot charge. Other unusual projectiles such as saboted flechettes, rubber balls, rock salt and magnesium shards also exist. Cartridges can also be made with specialty non-lethal projectiles such as rubber and bean bag rounds.
Shotguns have an effective range of about with buckshot, with birdshot, with slugs, and well over with saboted slugs in rifled barrels.
Most shotgun cartridges are designed to be fired from a smoothbore barrel, as "shot" would be spread too wide by rifling. A rifled barrel will increase the accuracy of sabot slugs, but makes it unsuitable for firing shot, as it imparts a spin to the shot cup, causing the shot cluster to disperse. A rifled slug uses rifling on the slug itself so it can be used in a smoothbore shotgun.
History
Early shotgun cartridges used brass cases, not unlike pistol and rifle cartridge cases of the same era. These brass shotgun hulls or cases resembled large rifle cartridges, in terms of both the head and primer portions of the cartridge, as well as in their dimensions. Card wads, made of felt, leather, and cork, as well as paperboard, were all used at various times. Waterglass (sodium silicate) was commonly used to cement the top overshot wad into these brass casings. No roll crimp or fold crimp was used on these early brass cases, though roll crimps were eventually used by some manufacturers to hold the overshot wad in place securely. The primers on these early shotgun cartridges were identical to pistol primers of the same diameter.
Starting in the late 1870s, paper hulls began replacing brass hulls. Paper hulls remained popular for nearly a century, until the early 1960s. These shotgun cartridges using paper hulls were nearly always roll crimped, although fold crimping also eventually became popular. The primers on these paper hull cartridges also changed from the pistol primers used on the early brass shotgun shells to a primer containing both the priming charge and an anvil, making the shotgun primer taller. Card wads, made of felt and cork, as well as paperboard, were all used at various times, gradually giving way to plastic over-powder wads with card wads and, eventually, to all-plastic wads. From the early 1960s to the late 1970s, plastic hulls steadily replaced paper hulls for the majority of cartridges, and by the early 1980s, plastic hulls had become universally adopted.
Typical construction
Modern shotgun cartridges typically consist of a plastic hull, with the base covered in a thin brass or plated steel covering. Paper cartridges used to be common and are still made, as are solid brass shells. Some companies have produced what appear to be all-plastic shells, although in these there is a small metal ring cast into the rim of the cartridge to provide strength. More powerful loads may use "high brass" shells, with the brass extended up further along the sides of the cartridge, while light loads will use "low brass" shells. The brass does not provide a significant amount of strength, but the difference in appearance gives shooters a way to quickly differentiate between high and low powered ammunition.
The base of the cartridge is fairly thick to hold the large primer, which is longer than primers used for rifle and pistol ammunition. Modern smokeless powders are far more efficient than the original black powder, so little space is actually taken by propellant; shotguns use small quantities of double base powders, equivalent to quick-burning pistol powders, with up to 50% nitroglycerin. After the powder comes the wadding or wad. The primary purpose of a wad is to prevent the shot and powder from mixing, and to provide a seal that stops gas from blowing through the shot rather than propelling it. The wad design may also encompass a shock absorber and a cup that holds the shot together until it is out of the barrel.
A modern wad consists of three parts, the powder wad, the cushion, and the shot cup, which may be separate pieces or be one part. The powder wad acts as the gas seal (known as obturation), and is placed firmly over the powder; it may be a paper or plastic part. The cushion comes next, and it is designed to compress under pressure, to act as a shock absorber and minimize the deformation of the shot; it also serves to take up as much space as is needed between the powder wad and the shot. Cushions are almost universally made of plastic with crumple zones, although for game shooting in areas grazed by farm stock or wildlife, biodegradable fiber wads are often preferred. The shot cup is the last part of the cartridge, and it serves to hold the shot together as it moves down the barrel. Shot cups have slits on the sides so that they peel open after leaving the barrel, allowing the shot to continue on in flight undisturbed. Shot cups, where used, are also almost universally plastic. The shot fills the shot cup (which must be of the correct length to hold the desired quantity of shot), and the cartridge is then crimped or rolled closed.
The only known shotgun cartridge using rebated rims is the 12-gauge RAS12, specially made for the RAS-12 semi-automatic shotgun.
Sizes
Standard
Shotgun cartridges are generally measured by "gauge", which is the weight, in fractions of a pound, of a pure lead round ball that is the same diameter as the internal diameter of the barrel. In Britain and some other locations outside the United States, the term "bore" is used with the same meaning. This contrasts with rifles and handguns, which are almost always measured in "caliber", a measurement of the internal diameter of the barrel measured in millimeters or inches and, consequently, is approximately equal to the diameter of the projectile that is fired.
For example, a shotgun is called "12-gauge" because a lead sphere that just fits the inside diameter of the barrel weighs . This measurement comes from the time when early cannons were designated in a similar manner—a "12 pounder" would be a cannon that fired a cannonball; inversely, an individual "12-gauge" shot would in fact be a pounder. Thus, a 10-gauge shotgun has a larger-diameter barrel than a 12-gauge shotgun, which has a larger-diameter barrel than a 20-gauge shotgun, and so forth.
The most popular shotgun gauge by far is 12-gauge. The bigger 10-gauge, once popular for hunting larger birds such as goose and turkey, is on the decline with the advent of the longer, "magnum" 12-gauge cartridges, which offer similar performance. The mid-size 20-gauge is also a very popular chambering for smaller-framed shooters who favor its reduced recoil, those hunting smaller game, and experienced trap and skeet shooters who like the additional challenge of hitting their targets with a smaller shot charge. Other less-common, but commercially available gauges are 16 and 28. Several other gauges may be encountered, but are considered obsolete. The 4, 8, 24, and 32 gauge guns are collector items. There are also some shotguns measured by diameter, rather than gauge. These are the .410 (10.4mm), .380 (9mm), and .22 (5.5mm); these are correctly called ".410 bore", not ".410-gauge".
The .410 bore is the smallest shotgun size widely available commercially in the United States. For size comparison purposes, the .410, when measured by gauge, would be around 67- or 68-gauge (it is 67.62-gauge). The .410 is often mistakenly called 36-gauge, but the true 36 gauge had a 0.506" bore. Reloading components are still available.
Other calibers
Snake shot (AKA: bird shot, rat shot, and dust shot) refers to handgun and rifle cartridges loaded with small lead shot. Snake shot is generally used for shooting at snakes, rodents, birds, and other pests at very close range. The most common snake shot cartridges are .22 Long Rifle, .22 Magnum, .38 Special, 9×19mm Luger, .40 Smith & Wesson, .44 Special, .45 ACP, and .45 Colt.
Commonly used by hikers, backpackers and campers, snake shot is ideally suited for use in revolvers and derringers chambered for .38 Special and .357 Magnum. Snake shot may not cycle properly in semi-automatic pistols. Rifles specifically made to fire .22 caliber snake shot are also commonly used by farmers for pest control inside barns and sheds, as the snake shot will not shoot holes in the roof or walls or, more importantly, injure livestock with a ricochet. They are also used for airport and warehouse pest control.
Shot shells have also been historically issued to soldiers, to be used in standard issue rifles. The .45-70 "Forager" round, which contained a thin wooden bullet filled with birdshot, was intended for hunting small game to supplement the soldiers' rations. This round in effect made the .45-70 rifle into a small gauge shotgun, capable of killing rabbits, ducks, and other small game.
During World War II, the United States military developed the .45 ACP M12 and M15 shot cartridges. They were issued to pilots, to be used as foraging ammunition in the event that they were shot down. While they were best used in the M1917 revolvers, the M15 cartridge would actually cycle the action of the semi-automatic M1911 pistol.
Garden guns
Garden guns are smooth-bore firearms specifically made to fire .22 caliber snake shot, and are commonly used by gardeners and farmers for pest control. Garden guns are short-range weapons that can do little harm past 15 to 20 yards, and they are quiet when fired with snake shot, compared to standard ammunition. These guns are especially effective inside of barns and sheds, as the snake shot will not shoot holes in the roof or walls, or more importantly injure livestock with a ricochet. They are also used for pest control at airports, warehouses, stockyards, etc.
Shotgun gauge diameter formula
The standard definition of shotgun gauge assumes that a pure lead ball is used. The following formulas relate the bore diameter $d_n$ (in inches) to the gauge $n$, where $\rho \approx 0.4097\ \text{lb/in}^3$ is the density of pure lead:

$$n = \frac{1}{\rho \cdot \frac{\pi}{6} d_n^{3}} \approx \frac{4.66}{d_n^{3}}, \qquad d_n \approx \frac{1.67}{\sqrt[3]{n}}$$

For example, the common bore diameter dn = 0.410 inches (.410 bore) is effectively gauge n ≈ 67.6.
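The relationship can be checked numerically. Below is a minimal sketch in Python, assuming pure lead at about 0.4097 lb/in³ (11.34 g/cm³); the constant and function names are illustrative rather than drawn from any standard library.

```python
import math

LEAD_DENSITY_LB_PER_IN3 = 0.4097  # pure lead, ~11.34 g/cm^3 (assumed value)

def gauge_from_diameter(d_in: float) -> float:
    """Gauge = number of pure-lead balls of bore diameter per pound."""
    ball_weight_lb = LEAD_DENSITY_LB_PER_IN3 * (math.pi / 6) * d_in ** 3
    return 1.0 / ball_weight_lb

def diameter_from_gauge(n: float) -> float:
    """Invert the relation: bore diameter in inches for gauge n."""
    return (6.0 / (math.pi * n * LEAD_DENSITY_LB_PER_IN3)) ** (1.0 / 3.0)

print(round(gauge_from_diameter(0.410), 1))  # ~67.6, the .410 bore
print(round(diameter_from_gauge(12), 3))     # ~0.73 in, the 12-gauge bore
```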
Lead free
By 1957, the ammunition industry was capable of producing nontoxic shot made of either iron or steel. In 1976, the United States Fish and Wildlife Service took the first steps toward phasing out lead shot by designating steel-shot-only hunting zones for waterfowl. In the 1970s, lead-free ammunition loaded with steel, bismuth, or tungsten composite pellets instead of more traditional lead-based shot was introduced and required for migratory bird hunting (ducks and geese). Lead shot in waterfowl hunting was banned throughout the United States in 1991. Due to environmental regulations, lead-loaded ammunition must be used carefully by hunters in Europe. For instance, in France it cannot be fired in the vicinity of a pond. The laws are complex enough that some hunters in Europe, rather than risk firing lead pellets in the wrong places, opt for composite pellets in all situations. The use of lead shot is banned in Canada and the United States when hunting migratory game birds such as ducks and geese, forcing the use of non-toxic shot in these countries for waterfowl hunting (lead shot can still legally be used in the United States for hunting game other than waterfowl). Manufacturers therefore need to market new types of lead-free shotgun ammunition loaded with alternative pellets to meet environmental restrictions on the use of lead, alongside cheaper lead-based shotshell ammunition, to remain competitive worldwide.
The C.I.P. enforces approval of all ammunition a manufacturer or importer intends to sell in any of the (mainly European) C.I.P. member states. The ammunition manufacturing plants are obliged to test their products during production against the C.I.P. pressure specifications. A compliance report must be issued for each production lot and archived for later verification if needed.
Besides pressure testing, cartridges containing steel pellets require an additional Vickers hardness test. The steel pellets used must have a hardness under 100 HV1, but, even so, steel is known to wear the barrel excessively over time if the steel pellet velocities become too high, leading to potentially harmful situations for the user. As a result, the measurement of pellet velocity is also an additional obligation for 12-, 16-, and 20-gauge cartridges in both standard and high-performance versions sold in Europe. The velocity of pellets must be below , and respectively for the standard versions. Another disadvantage of steel pellets is their tendency to ricochet unpredictably after striking any hard surface. This poses a major hazard at indoor ranges or wherever metal targets or hard backstops (e.g. a concrete wall rather than a dirt berm) are used. For this reason, steel shot is explicitly banned at most indoor shooting ranges. Shooters considering steel-loaded ammunition for anything other than hunting should first confirm that using it will not pose an undue hazard to themselves or others.
However, data supporting the claim that firing high-velocity cartridges loaded with steel shot causes barrel wear have not been published, and the US equivalent of the C.I.P., SAAMI, places no such restrictive limits on the velocity of commercial steel shot cartridges sold in the United States. Similarly, shotgun manufacturers selling shotguns in the United States select their own appropriate standards for steel hardness in shotgun barrels and for the velocities of steel shot ammunition.
Some indoor shooting ranges prohibit the use of steel shot over concern that it may spark when hitting an object downrange and cause a fire.
Shot sizes
Cartridges are loaded with different sizes of shot depending on the target. For skeet shooting, a small shot such as a No. 8 or No. 9 would be used, because range is short and a high density pattern is desirable. Trap shooting requires longer shots, and so a larger shot, usually # is used. For hunting game, the range and penetration needed to assure a clean kill is considered. Shot loses its velocity very quickly due to its low sectional density and ballistic coefficient (see external ballistics). Small shot, like that used for skeet and trap, will have lost all appreciable energy by around , which is why trap and skeet ranges can be located in relatively close proximity to inhabited areas with negligible risk of injury to those outside the range.
Birdshot
Birdshot is designed for waterfowl and upland hunting, where the game is agile small or medium-sized birds. Sizes are numbered similarly to shotgun gauges—the smaller the number, the larger the shot (except in the obsolete Swedish system, where it is reversed). Generally, birdshot is just called "shot", as in "number 9 shot" or "BB shot".
To make matters more complex, there are small differences in the size of American, Standard (European), Belgian, Italian, Norwegian, Spanish, Swedish, British, and Australian shot. That is because some systems go by diameter in inches (American), some go by diameter in millimeters (European), and the British system goes by the number of lead shot per ounce. Australia has a hybrid system due to its market being flooded with a mixture of British, American, and European cartridges.
For American shot, a useful method for remembering the diameter of numbered shot in inches is simply to subtract the shot size from 17. The resulting answer is the diameter of the shot in hundredths of an inch. For example, #2 shot gives 17−2 = 15, meaning that the diameter of #2 shot is or . B shot is , and sizes go up in increments for BB and BBB sizes.
In metric measurement, #5 shot is 3 mm; each number up or down represents a 0.25 mm change in diameter, so e.g. #7 shot is 2.5 mm.
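As an illustration of the two rules of thumb above, the short Python sketch below converts a numbered American or metric shot size to a pellet diameter. It covers only the numbered sizes described in the text, not letter sizes such as B, BB, or BBB, and the function names are hypothetical.

```python
def us_birdshot_diameter_in(size: int) -> float:
    """American rule of thumb: diameter in inches = (17 - size) / 100."""
    return (17 - size) / 100

def metric_birdshot_diameter_mm(size: int) -> float:
    """Metric rule: #5 shot is 3 mm, and each size step changes 0.25 mm."""
    return 3.0 + (5 - size) * 0.25

print(us_birdshot_diameter_in(2))      # 0.15 -> #2 shot is 0.15 in
print(metric_birdshot_diameter_mm(7))  # 2.5  -> #7 shot is 2.5 mm
```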
Number 11 and number 12 lead shot also exists. Shot of these sizes is used in specialized cartridges designed to be fired at close range (less than four yards) for killing snakes, rats and similar-sized animals. Such cartridges are typically intended to be fired from handguns, particularly revolvers. This type of ammunition is produced by Federal and CCI, among others.
Birdshot selection
For hunting, shot size must be chosen not only for the range, but also for the game. The shot must reach the target with enough energy to penetrate to a depth sufficient to kill the game. Lead shot is still the best ballistic performer, but environmental restrictions on the use of lead, especially with waterfowl, require steel, bismuth, or tungsten composites. Steel, being significantly less dense than lead, requires larger shot sizes, but is a good choice when lead is not legal and cost is a consideration. It is argued that steel shot cannot safely be used in some older shotguns without causing damage to either the bore or to the choke due to the hardness of steel shot. However, the increased pressure in most steel cartridges is a far greater problem, causing more strain to the breech of the gun. Since tungsten is very hard, it must also be used with care in older guns. Tungsten shot is often alloyed with nickel and iron, softening the base metal. That alloy is approximately 1/3 denser than lead, but far more expensive. Bismuth shot falls between steel and tungsten shot in both density and cost. The rule of thumb in converting appropriate steel shot is to go up by two numbers when switching from lead. However, there are different views on dense patterns versus higher pellet energies.
Buckshot
Larger sizes of shot, big enough that they must be carefully packed into the cartridge rather than simply dumped or poured in, are called "buckshot" or just "buck". Buckshot is used for hunting medium to large game, as a tactical round for law enforcement and military personnel, and for personal self-defense. Buckshot size is most commonly designated by a series of numbers and letters, with smaller numbers indicating larger shot. Sizes larger than "0" are designated by multiple zeros. "00" (usually pronounced "double-aught" in North American English) is the most commonly sold size.
The British system for designating buckshot size is based on the amount of shot per ounce. The sizes are LG (large grape – from grapeshot derived from musket shooting), MG (medium grape), and SG (small grape). For smaller game, SSG shot is half the weight of SG, SSSG shot is half the weight of SSG, SSSSG shot is half the weight of SSSG, and so on. The Australian system is similar, except that it has 00-SG, a small-game cartridge filled with 00 buckshot.
Loads of 12-gauge 00 buckshot are commonly available in cartridges holding from 8 to 18 pellets in standard lengths ( inches, 3 inches, and ). Reduced-recoil 00 buckshot is often used in tactical and self-defense rounds, minimizing shooter stress and improving the speed of follow-up shots.
Specialist loads
Other rounds include:
Ferret rounds: rounds designed to penetrate a thin barrier (e.g. a car door) and release a gas payload.
Bolo rounds: two large lead balls attached by a wire.
Piranha rounds: loaded with sharp tacks.
Dragon's breath rounds: loaded with incendiary chemicals that create a fireball/flame when discharged, and can ignite a flammable target at close range.
Spread and patterning
Most modern sporting shotguns have interchangeable choke tubes to let the shooter change the spread of shot that comes out of the gun. In some cases, it is not practical to do this; the gun might have a fixed choke, or a shooter firing at receding targets may want to fire a wide pattern immediately followed by a narrower pattern out of a single barrelled shotgun. The spread of the shot can also be altered by changing the characteristics of the cartridge.
Narrower patterns
A buffering material, such as granulated plastic, sawdust, or similar material can be mixed with the shot to fill the spaces between the individual pellets. When fired, the buffering material compresses and supports the shot, reducing the deformation the shot pellets experience under the extreme acceleration. Antimony-lead alloys, copper plated lead shot, steel, bismuth, and tungsten composite shot all have a hardness greater than that of plain lead shot, and will deform less. Reducing the deformation will result in tighter patterns, as the spherical pellets tend to fly straighter. One improvised method for achieving the same effect involves pouring molten wax or tar into the mass of shot. Another is a partial ring cut around the case intended to ensure the shot comes out tightly bunched along with the portion of the case forward of the cut, creating a 'cut-shell'. This can be dangerous, as it is thought to cause higher chamber pressures—especially if part of the cartridge remains behind in the barrel and is not cleared before another shot is fired.
Wider patterns
Shooting the softest possible shot will result in more shot deformation and a wider pattern. This is often the case with cheap ammunition, as the lead used will have minimal alloying elements such as antimony and be very soft. Spreader wads are wads that have a small plastic or paper insert in the middle of the shot cup, usually a cylinder or "X" cross-section. When the shot exits the barrel, the insert helps to push the shot out from the center, opening up the pattern. These often result in inconsistent performance, though modern designs are doing much better than the traditional improvised solutions. Intentionally deformed shot (hammered into ellipsoidal shape) or cubical shot will also result in a wider pattern, much wider than spherical shot, with more consistency than spreader wads. Spreader wads and non-spherical shot are disallowed in some competitions. Hunting loads that use either spreaders or non-spherical shot are usually called "brush loads", and are favored for hunting in areas where dense cover keeps shot distances very short.
Spread
Most shotgun cartridges contain multiple pellets in order to increase the likelihood of a target being hit. A shotgun's shot spread refers to the two-dimensional pattern that these projectiles (or shot) leave behind on a target. Another less important dimension of spread concerns the length of the in-flight shot string from the leading pellet to the trailing one. The use of multiple pellets is especially useful for hunting small game such as birds, rabbits, and other animals that fly or move quickly and can unpredictably change their direction of travel. However, some cartridges contain only a single projectile, known as a slug, for hunting large game such as deer.
As the shot leaves the barrel upon firing, the three-dimensional shot string is close together. When the shot moves further away, the individual pellets increasingly spread out and disperse. This leads to the effective range of a shotgun, when firing a multitude of shot, being limited to approximately . To control this effect, shooters may use a constriction within the barrel of a shotgun called a choke. The choke, whether fixed or selectable within a barrel, effectively reduces the diameter of the end of the barrel, forcing the shot even closer together as it leaves the barrel, thereby increasing the effective range. The tighter the choke, the narrower the end of the barrel. Consequently, the effective range of a shotgun is increased with a tighter choke, as the shot column is held tighter over longer ranges. Hunters or target shooters can install several types of chokes, on guns having selectable chokes, depending on the range at which their intended targets will be located. For fixed choke shotguns, different shotguns or barrels are often selected for the intended hunting application at hand. From tightest to loosest, the various choke sizes are: full choke, improved modified, modified, improved cylinder, skeet, and cylinder bore.
A hunter who intends to hunt an animal such as rabbit or grouse knows that the animal will be encountered at close range—usually within —and will be moving very quickly. An ideal choke would be a cylinder bore (the loosest), as the hunter wants the shot to spread out as quickly as possible. If this hunter were using a full choke (the tightest) at , the shot would be very close together and cause an unnecessarily large amount of damage to the rabbit, or, alternatively, a complete miss. A hit would waste virtually all of the meat, as the little meat remaining would be overly laden with shot and rendered inedible. By using a cylinder bore, this hunter maximizes both the likelihood of a kill and the amount of edible meat. Conversely, a hunter who intends to hunt geese knows that a goose will likely be approximately away, so that hunter would want to delay the spread of the shot as much as possible by using a full choke. By using a full choke for targets that are further away, the shooter again maximizes the likelihood of a kill and the amount of edible meat. This also maximizes the chances of a swift and humane kill, as the target is hit with enough shot to kill quickly instead of only wounding the animal.
For older shotguns having only one fixed choke, intended primarily for equally likely use against rabbits, squirrels, quail, doves, and pheasant, an often-chosen choke is the improved cylinder, in a barrel, making the shotgun suitable for use as a general all-round hunting shotgun, without having excess weight. Shotguns having fixed chokes intended for geese, in contrast, are often found with full choke barrels, in longer lengths, and are much heavier, being meant for fixed use within a blind against distant targets. Defensive shotguns with fixed chokes generally have a cylinder bore choke. Likewise, shotguns intended primarily for use with slugs invariably also are found with a choke that is a cylinder bore.
Dram equivalence
"Dram" equivalence is sometimes still used as a measure of the powder charge power in a cartridge. Today, it is an anachronistic equivalence that represents the equivalent power of a cartridge containing this equivalent amount of black-powder measured in drams avoirdupois. A dram in the avoirdupois system is the mass of pound or ounce or 27.3 grains. The reasoning behind this archaic equivalence is that when smokeless powder first came out, some method of establishing an equivalence with common loads was needed in order to sell a box of cartridges. For example, a cartridge containing a 3 or 3 1/2 dram load of black-powder was a common hunting field load, and a heavy full power load would have contained about a 4 to 4-1/2 dram load, whereas a cartridge containing only a 2 dram load of black-powder was a common target practice load. A hunter looking for a field or full power load familiar with black-powder shotgun loads would have known exactly what the equivalence of the cartridges would have been in the newly introduced smokeless powder. Today, however, this represents a poorly understood equivalence of the powder charge power in a cartridge. To further complicate matters, "dram" equivalence was only defined for 12 gauge cartridges, and only for lead shot, although it has often been used for describing other gauges of shells, and even steel shot loads. Furthermore, "dram" equivalence only came around about 15 years after smokeless powder had been introduced, long after the need for an equivalence had started to fade, and actual black-powder loaded shotshells had largely vanished. In practice, "dram" equivalence today most commonly equates just to a velocity rating equivalence in fps (feet-per-second), while assuming lead shot.
A secondary impact of this equivalence was that common cartridges needed to stay the same size, physically, e.g., 2-1/2 or 2-3/4-inch shells, in order to be used in pre-existing shotguns when smokeless powder began to be used in place of black powder. As smokeless powder, being more powerful, did not have to be loaded in the same volume as black powder to achieve the same power, the volume of the wads had to increase to fill the cartridge enough for proper crimps still to be made. Initially, this meant that increased numbers of over-powder card wads had to be stacked to achieve the same stack-up length. Eventually, this also led to the introduction of one-piece plastic wads in the late 1950s through the early 1960s, adding wad volume in order to maintain the same overall cartridge length.
Dram equivalence has no bearing on the reloading of cartridges with smokeless powder; loading a cartridge with an equivalent dram weight of smokeless powder would cause a shotgun to explode. It only has an equivalence in reloading with black powder.
| Technology | Ammunition | null |
1044906 | https://en.wikipedia.org/wiki/Flipper%20%28anatomy%29 | Flipper (anatomy) | A flipper is a broad, flattened limb adapted for aquatic locomotion. It refers to the fully webbed, swimming appendages of aquatic vertebrates that are not fish.
In animals with two flippers, such as whales, the flipper refers solely to the forelimbs. In animals with four flippers, such as pinnipeds and sea turtles, one may distinguish fore- and hind-flippers, or pectoral flippers and pelvic flippers.
Animals with flippers include penguins (whose flippers are also called wings), cetaceans (e.g., dolphins and whales), pinnipeds (e.g., walruses, earless and eared seals), sirenians (e.g., manatees and dugongs), and marine reptiles such as the sea turtles and the now-extinct plesiosaurs, mosasaurs, ichthyosaurs, and metriorhynchids.
Usage of the terms "fin" and "flipper" is sometimes inconsistent, even in the scientific literature. However, the hydrodynamic control surfaces of fish are always referred to as "fins" and never "flippers". Tetrapod limbs which have evolved into fin-like structures are usually (but not always) called "flippers" rather than fins. The dorsal structure on cetaceans is called the "dorsal fin" and the large cetacean tails are referred to primarily as flukes but occasionally as "caudal fins"; neither of these structures are flippers.
Some flippers are very efficient hydrofoils, analogous to wings (airfoils), used to propel and maneuver through the water with great speed and maneuverability (see Foil). Swimming appendages with the digits still apparent, as in the webbed forefeet of amphibious turtles and platypus, are considered paddles rather than flippers.
Locomotion
For all species of aquatic vertebrates, swimming performance depends upon the animal's control surfaces, which include flippers, flukes and fins. Flippers are used for different types of propulsion, control, and rotation. In cetaceans, they are primarily used for control while the fluke is used for propulsion.
The evolution of flippers in penguins came at the expense of their flying capability, despite their having evolved from an auk-like ancestor that could 'fly' underwater as well as in the air. Form constrains function, and the wings of diving flying species, such as the murre or cormorant, have not developed into flippers. The flippers of penguins became thicker, denser and smaller while being modified for hydrodynamic properties.
Hydrodynamics
Cetacean flippers may be viewed as being analogous to modern engineered hydrofoils, which have hydrodynamic properties: lift coefficient, drag coefficient and efficiency. Flippers are one of the principal control surfaces of cetaceans (whales, dolphins and porpoises) due to their position in front of the center of mass, and their mobility which provides three degrees of freedom.
Flippers on humpback whales (Megaptera novaeangliae) have non-smooth leading edges, yet demonstrate superior fluid dynamics to the characteristically smooth leading edges of artificial wings, turbines and other kinds of blades. The whale's surprising dexterity is due primarily to its non-conventional flippers, which have large, irregular looking bumps called tubercles across their leading edges. The tubercles break up the passage of water, maintaining even channels of the fast-moving water, limiting turbulence and providing greater maneuverability.
The foreflippers used by the pinnipeds act as oscillatory hydrofoils. Both fore and hind flippers are used for turning. A 2007 study of Steller's sea lion found that a majority of thrust was produced during the drive phase of the fore flipper stroke cycle. Although previous findings on eared seals suggested that thrust was generated by the initial outward movement of the fore flippers or the terminal drag-based paddling phase, the 2007 study found that little or no thrust was generated during those phases. Swimming performance in sea lions is modulated by changes in the duration and intensity of movements without changing their sequence. Using criteria based on velocity and the minimum radius of turns, pinnipeds' maneuverability is superior to that of cetaceans but inferior to that of many fish.
Evolution of flippers
Marine mammals have evolved several times, developing similar flippers. The forelimbs of cetaceans, pinnipeds, and sirenians present a classic example of convergent evolution. There is widespread convergence at the gene level. Distinct substitutions in common genes created various aquatic adaptations, most of which constitute parallel evolution because the substitutions in question are not unique to those animals.
When comparing cetaceans to pinnipeds to sirenians, 133 parallel amino acid substitutions occur. Comparing and contrasting cetaceans-pinnipeds, cetaceans-sirenians, and pinnipeds-sirenians, 2,351, 7,684, and 2,579 substitutions occur, respectively.
Digit processes
Whales and their relatives have a soft tissue flipper that encases most of the forelimb, and elongated digits with an increased number of phalanges. Hyperphalangy is an increase in the number of phalanges beyond the plesiomorphic mammal condition of three phalanges-per-digit. This trait is characteristic of secondarily aquatic vertebrates with flippers. Hyperphalangy was present among extinct ichthyosaurs, plesiosaurs, and mosasaurs.
Cetaceans are the sole mammals to have evolved hyperphalangy. Though the flippers of modern cetaceans are not correctly described as webbed feet, the intermediate webbed limbs of ancient semiaquatic cetaceans may be described as such. The presence of interdigital webbing within the fossils of semi-aquatic Eocene cetaceans was probably the result of BMP antagonists counteracting interdigital apoptosis during embryonic limb development. Modifications to signals in these tissues likely contributed to the origin of an early form of hyperphalangy in fully aquatic cetaceans about 35 million years ago. The process continued over time, and a very derived form of hyperphalangy, with six or more phalanges per digit, evolved convergently in rorqual whales and oceanic dolphins, and was likely associated with another wave of signaling within the interdigital tissues.
Although toothed cetaceans have five digits, most baleen whales have four digits and even lack a metacarpal. In the latter (mysticetes), the first digit ray may have been lost as late as 14 million years ago.
Flipper evolution in turtles
Sea turtles evolved in the Cretaceous. Their flippers developed gradually by a series of stepwise adaptations, with the most fundamental traits of flippers appearing in the deepest nodes (the earliest times) in their phylogeny. These initial traits evolved only once among chelonioids, and the bauplan was refined through a secondary process of specialization.
Evers et al. identified characters related to the pectoral girdle and forelimb that are related to the modification of sea turtle arms and hands into flippers.
Key biomechanical features of flippers
flattening of elements
lengthening of the humerus
reduction of mobility between individual flipper elements
Fundamental traits for flipper movement
lateral position of the humeral process
change in the angle of the internal scapula
Foraging behavior
Because of the specialization of flippers and their hydrodynamic constraints, it was thought that they were not used to interact significantly with the environment, unlike the legs of terrestrial tetrapods. However, the use of limbs for foraging is documented in marine tetrapods. Use of the flippers for foraging behavior is observed in marine mammals such as walruses, seals, and manatees, and even in reptiles such as sea turtles. Among turtles, observed behaviors include a green turtle holding a jellyfish, a loggerhead rolling a scallop on the sea floor, and a hawksbill turtle pushing against a reef for leverage to rip an anemone loose. Based on presumed limb use in ancestral turtles, these behaviors may have occurred as long ago as 70 million years.
| Biology and health sciences | External anatomy and regions of the body | Biology |
1045201 | https://en.wikipedia.org/wiki/Pinus%20halepensis | Pinus halepensis | Pinus halepensis, commonly known as the Aleppo pine, also known as the Jerusalem pine, is a pine native to the Mediterranean region. It was officially named by the botanist Philip Miller in his 1768 book The Gardener's Dictionary; he probably never went to Aleppo but mentions seeing large specimens at Goodwood in the garden of the Duke of Richmond, which were transplanted (perhaps sent by Alexander Russell from Syria) in 1739.
Description
Pinus halepensis is a small to medium-sized tree, tall, with a trunk diameter up to , exceptionally up to . The bark is orange-red, thick, and deeply fissured at the base of the trunk, and thin and flaky in the upper crown. The leaves ('needles') are very slender, long, distinctly yellowish green, and produced in pairs (rarely a few in threes). The cones are narrow conic, long and broad at the base when closed, green at first, ripening glossy red-brown when 24 months old. They open slowly over the next few years, a process quickened if they are exposed to heat such as in forest fires. The cones open wide to allow the seeds to disperse. The seeds are long, with a wing, and are wind-dispersed.
Related species
The Aleppo pine is closely related to the Turkish pine, Canary Island pine, and maritime pine, which all share many of its characteristics. Some authors include the Turkish pine as a subspecies of the Aleppo pine, as Pinus halepensis subsp. brutia (Ten.) Holmboe, but it is usually regarded as a distinct species. It is a relatively nonvariable species, in that its morphological characteristics stay constant over the entire range.
Distribution and habitat
The native range of Pinus halepensis extends from Morocco, Algeria, Tunisia, and Spain north to southern France, Malta, Italy, Croatia, Montenegro, and Albania, and east to Greece. It has been introduced into many parts of the world, including Portugal. There is an outlying population (from which it was first described) in Syria, Lebanon, southern Turkey, Jordan, Israel and Palestine.
The species is generally found at low altitudes, mostly from sea level to , but can grow above in southern and eastern Spain, well over on Crete, and up to in the south, in Morocco, Algeria and Tunisia.
The tree is able to quickly colonize open and disturbed areas. It is classed as an invasive species in South Africa. It can grow on all substrates and in almost all bioclimates in the Mediterranean.
Pinus halepensis is a diagnostic species of the vegetation class Pinetea halepensis.
Uses
The resin of the Aleppo pine is used to flavor the Greek wine retsina.
The pine nuts of the Aleppo pine are used to make a pudding called asidet zgougou in the Tunisian dialect; it is served in bowls, covered with cream, and topped with almonds and small candies.
The Maltese dessert prinjolata is also prepared using these pine nuts, both in its filling as well as a topping.
Aleppo pines are used for bonsai.
Forestry
In its native area, P. halepensis is widely planted for its fine timber, making it one of the most important forestry trees in Algeria and Morocco.
In Israel, natural patches of Aleppo pine forests can be found in the Carmel and Galilee regions. The Aleppo pine, along with Pinus brutia, has been planted extensively by the Jewish National Fund. It proved very successful in Yatir Forest in the northern Negev (on the edge of the desert), where foresters had not expected it to survive. Many Aleppo pine forests exist today in Israel and are used for recreational purposes. Although it is a local species, some argue that the historical replacement of natural oak maquis shrubland and garrigue with tall stands of pine has created "ecological deserts" and has significantly changed the species assemblage of these regions. The species produces timber which is valued for its hardness, density and unproblematic seasoning. Seasoned timber is inclined to tear out with planing, but this can be avoided by using sharp blades or adjusting the sharpening angle of tools.
The Aleppo pine is considered an invasive species in South Africa, though a useful one; in South Australia, a control program is in place on the Eyre Peninsula.
Landscape
Pinus halepensis is a popular ornamental tree, extensively planted in gardens, parks, and private and agency landscapes in hot dry areas such as Southern California and the Karoo in South Africa, where the Aleppo pine's considerable heat and drought tolerance, fast growth, and aesthetic qualities are highly valued.
In culture
Paul Cézanne had an Aleppo pine in his garden at Aix-en-Provence; this tree was the inspiration and model for his painting The Big Trees. As of 2005, the tree is still growing in Cézanne's garden.
The Aleppo pine is associated with ANZAC Day and the ANZACs in Australia due to its use by soldiers in the Battle of Lone Pine during the Gallipoli campaign. It is often planted at war memorials.
| Biology and health sciences | Pinaceae | Plants |
1045267 | https://en.wikipedia.org/wiki/Pinus%20pinaster | Pinus pinaster | Pinus pinaster, the maritime pine or cluster pine, is a pine native to the south Atlantic Europe region and parts of the western Mediterranean. It is a hard, fast-growing pine bearing small seeds with large wings.
Description
Pinus pinaster is a medium-size tree, reaching tall with a trunk diameter of up to , exceptionally .
The bark is orange-red, thick, and deeply fissured at the base of the trunk, somewhat thinner in the upper crown.
The leaves ('needles') are in pairs, very stout ( broad), up to long, and bluish-green to distinctly yellowish-green. The maritime pine features the longest and most robust needles of all European pine species.
The cones are conic, long and broad at the base when closed, green at first, ripening glossy red-brown when 24 months old. They open slowly over the next few years, or after being heated by a forest fire, to release the seeds, opening to broad.
The seeds are long, with a wing, and are wind-dispersed.
Similar species
Maritime pine is closely related to Turkish pine, Canary Island pine, and Aleppo pine, which all share many features with it. It is a relatively non-variable species, with constant morphology over the entire range.
Distribution and habitat
Its range is in the western Mediterranean Basin and the southern Atlantic coast of Europe, extending from central Portugal and northern Spain (especially in Galicia) to southern and western France, east to western Italy and Croatia, and south to northern Tunisia, Algeria and northern Morocco. It favours a Mediterranean climate, one with cool, rainy winters and hot, dry summers.
It generally occurs at low to moderate altitudes, mostly from sea level to , but up to in the south of its range in Morocco. The high degree of fragmentation in the current natural distribution is caused by two factors: the discontinuity and altitude of the mountain ranges causing isolation of even close populations, and human activity.
Ecology
Pinus pinaster is a popular topic in ecology because of its problematic growth and spread in South Africa over the past 150 years, after being imported into the region at the end of the 17th century (1685–1693). It was found spreading in the Cape Peninsula by 1772. Towards the end of the 18th century (1780), P. pinaster was widely planted, and at the beginning of the 19th century (1825–1830), P. pinaster was planted commercially as a timber resource and for the forestry industry. The species invades large areas, and more specifically fynbos vegetation. Fynbos vegetation is a fire-prone shrubland vegetation found in the southern and southwest Cape of South Africa. P. pinaster is found in greater abundance close to watercourses. Dispersal, habitat loss, and fecundity are all factors that affect spread rate. The species favors acidic soils with medium to high-density vegetation, but it can also grow in basic soils and even in sandy and poor soils, where only a few commercial species can grow.
Pinus pinaster is a diagnostic species of the vegetation class Pinetea halepensis.
Larvae of the moth Dioryctria sylvestrella feed on this pine. Their boring activity causes large quantities of resin to flow from the wounds which weakens the tree and allows fungi and other pathogens to gain entry.
Invasiveness
Results of invasion
Pinus pinaster is a successful invasive species in South Africa. One result of its invasion in South Africa is a decrease in the biodiversity of the native environment. The increase in extinction rates of native species is correlated with the introduction of these species to South Africa. Invasive species occupy the habitats of native species, often forcing them to extinction or endangerment. For example, invasive species have the potential to decrease the diversity of native plants by 50–86% in the Cape Peninsula of South Africa. P. pinaster is found in shrubland in South Africa; compared to other environments, shrublands show the largest decline in species richness when invaded by an invasive species (Z=–1.33, p<0.001). Compared to graminoids, trees, annual herbs, and creepers have a larger effect on the decline of species richness (Z=–3.78; p<0.001). Lastly, compared to other countries, South Africa had the largest decline in species richness when faced with invasive species. South Africa is also not home to many of the insects and diseases that limit the population of P. pinaster in its native habitat. Not only is there evidence that alien plant invasions decrease biodiversity, but there is also evidence that the location of P. pinaster increases its negative effect on species richness.
In addition, depending on the regions it invades, P. pinaster has the potential to dramatically alter the quantity of water in the environment. If P. pinaster invades an area covered with grasses and shrubs, the water level of the streams in this area falls significantly, because P. pinaster is an evergreen tree that takes up considerably more water than grasses and shrubs all year round. The trees deplete run-off in catchment areas and water flow in rivers, depleting the resources available for other species in the environment. P. pinaster tends to grow rapidly in riparian zones, areas with abundant water where trees and plants grow twice as fast; it takes advantage of the available water and consequently reduces the amount of water available for other species. The fynbos catchments of the Western Cape of South Africa are a habitat negatively affected by P. pinaster: twenty-three years after the pines were planted, there was a 55% decrease in streamflow in this area. Similarly, in the KwaZulu-Natal Drakensberg there was an 82% reduction in streamflow 20 years after P. pinaster was introduced to the area. In the Mpumalanga Province, six streams completely dried up 12 years after grasslands were replaced with pines. Reinforcing the conclusion that P. pinaster has a negative effect, these areas of dense P. pinaster were later thinned, decreasing the number of trees; as a result, the streamflow in the fynbos catchments of the Western Cape increased by 44%, and the streamflow in the Mpumalanga Province increased by 120%. As a result of P. pinaster growth, there is often less understory vegetation for livestock grazing. Here too there was a positive effect when some of the pines were removed and suitable range grasses were planted: grazing conditions for the sheep of the area were greatly improved when the P. pinaster plantation was thinned to 300 trees per hectare. The invasion of P. pinaster thus leads to a decrease in understory vegetation and therefore a decrease in livestock.
It is sporadically naturalizing in Oakland and San Leandro in northern California.
Ecological interactions
Pinus pinaster is particularly successful in regions with fynbos vegetation because it is adapted to high-intensity fires, allowing it to outcompete species that are not as well adapted to them. In areas of fire-prone shrubland, the cones of P. pinaster release seeds for germination when exposed to relatively high temperatures, as a recovery mechanism. This adaptation increases the competitive ability of P. pinaster among other species in fire-prone shrubland. In a 3-year observational study in northwestern Spain, P. pinaster showed a naturally high regeneration rate: observations showed a mean of 25.25 seedlings per square metre within the first year, slowly decreasing over the next two years due to intraspecific competition. P. pinaster thus competes not only with other species but also within its own. Seedling numbers were negatively correlated with the height of surrounding P. pinaster (r=–0.41, p<0.05).
Several other characteristics contribute to their success in the regions they have invaded, including their ability to grow rapidly and to produce small seeds with large wings. Their ability to grow quickly, with short juvenile periods, allows them to outcompete many native species, while their small seeds aid in their dispersal. The small seeds with large wings are beneficial for wind dispersal, which is the key to reaching new areas in regions with fynbos vegetation. Vertebrate seed dispersers are not commonly found in mountain fynbos vegetation; therefore, species that require the aid of vertebrate dispersal would be at a disadvantage in such an environment. For this reason, the small seed, low seed wing loading, and high winds found in mountainous regions all combine to provide a favorable situation for the dispersal of P. pinaster seeds. Without this efficient dispersal strategy, P. pinaster would not have been able to reach and invade areas, such as South Africa, that are suitable for its growth. Its dispersal ability is one of the key factors that have allowed P. pinaster to become such a successful invasive species.
In addition to being an efficient disperser, P. pinaster is known to produce oleoresins, such as oily terpenes or fatty acids, which can inhibit other species within the community from growing. These resins are produced as a defense mechanism against insect predators, such as the large pine weevil. In an experiment done in Spain, resin canal density was twice as high in P. pinaster seedlings attacked by the weevils compared to unattacked seedlings. Since P. pinaster can regulate its production of defense mechanisms, it can protect itself from predators in an energy-efficient manner. The resins make P. pinaster less vulnerable to insect damage, but they are only produced in high concentrations when the tree is under attack; in other words, P. pinaster does not waste energy producing resins in safe conditions, so the conserved energy can be used for growth or reproduction. These characteristics enhance the ability of P. pinaster to survive and flourish in the areas it invades. Both the traits of P. pinaster and the habitat in South Africa are conducive to the success of P. pinaster in this region of the world.
Options for biological control
Insects and mites that feed on the seeds and cones of P. pinaster can be effective biological control options. An insect or mite that acts as an ideal biological control should have a high reproductive rate and be host-specific, meaning that it preys specifically on P. pinaster. The life cycle of the predator should also match that of its specific host. Two key characteristics the predator should also exhibit are self-limitation and the ability to survive in the presence of a declining prey population. Seed-feeding insects are an effective control because they have high reproductive rates and target the seeds without diminishing the positive effect of the plant on the environment. Controlling the spread of P. pinaster seeds in the region is the key to limiting the growth and spread of this species, because P. pinaster has the ability to produce a large number of seeds that are capable of dispersing very efficiently. One possible option is Trisetacus, an eriophyid mite. The main advantage of using this mite is its specificity: it can effectively control the population of P. pinaster by destroying the growing conelets while limiting its impact to only this species. Another possible option is Pissodes validirostris, a cone-feeding weevil that lays eggs in developing cones. When the larvae hatch, they feed on the growing seed tissue, preventing P. pinaster seeds from forming and dispersing. Although the adults feed on the trees as well, they do no damage to the seeds and feed only on the shoots of the tree, so they do not appear to negatively impact the growth of the trees. Different forms of P. validirostris have diverged to become host-specific to different pine trees. The type of P. validirostris that originated from Portugal appears to have specialized to P. pinaster; therefore, this insect may be used in the future to control the spread of P. pinaster in South Africa. The uncertainties regarding the host-specificity of different types of P. validirostris, however, require more research to be completed before the introduction of the weevils into South Africa. An introduction of a species that is not host-specific to P. pinaster can lead to detrimental effects on both the environment and industries that are dependent on certain tree species. Two other biological control possibilities include the pyralid moth species Dioryctria mendasella and D. mitatella, but these species attack the vegetative tissue instead of just the seeds of P. pinaster, harming the plant itself. For now, the eriophyid mite and the cone-feeding weevil seem to hold the most potential for controlling the spread of P. pinaster in the regions it has invaded, because they destroy the reproductive structures of the target invasive species.
Uses
Pinus pinaster is widely planted for timber in its native area, being one of the most important trees in forestry in France, Spain and Portugal. Landes forest in southwest France is the largest man-made maritime pine forest in Europe. It has also been cultivated in Australia as a plantation tree, to provide softwood timber. P. pinaster resin is a useful source of turpentine and rosin.
In addition to industrial uses, maritime pine is also a popular ornamental tree, often planted in parks and gardens in areas with warm temperate climates. It has become naturalised in parts of southern England, Uruguay, Argentina, South Africa and Australia.
It is also used as a source of flavonoids, catechins, proanthocyanidins, and phenolic acids. A dietary supplement derived from extracts from P. pinaster bark called Pycnogenol is marketed with claims it can treat many conditions; however, according to a 2012 Cochrane review, the evidence is insufficient to support its use for the treatment of any chronic disorder.
Pests
Pestalotiopsis pini, a species of ascomycete fungus, was found as an emerging pathogen on Pinus pinea (stone pine) and on Pinus pinaster in Portugal. Evidence of shoot blight and stem necrosis was found in 2020. The fungus was found on needles, shoots, and trunks of the pines.
| Biology and health sciences | Pinaceae | Plants |
1045608 | https://en.wikipedia.org/wiki/Wright%20Flyer | Wright Flyer | The Wright Flyer (also known as the Kitty Hawk, Flyer I or the 1903 Flyer) made the first sustained flight by a manned heavier-than-air powered and controlled aircraft on December 17, 1903. Invented and flown by brothers Orville and Wilbur Wright, it marked the beginning of the pioneer era of aviation.
The aircraft is a single-place biplane design with anhedral (drooping) wings, front double elevator (a canard) and rear double rudder. It used a gasoline engine powering two pusher propellers. Employing "wing warping", it was relatively unstable and very difficult to fly.
The Wright brothers flew it four times in a location now part of the town of Kill Devil Hills, about four miles (6 km) south of Kitty Hawk, North Carolina. The airplane was damaged on landing after its fourth and final flight, and was wrecked minutes later when powerful gusts blew it over.
The aircraft never flew again. It was shipped home and subsequently restored by Orville, and was displayed in a place of honor at the London Science Museum until 1948, when the resolution of an acrimonious priority dispute finally allowed it to be displayed in the Smithsonian. It is now exhibited in the National Air and Space Museum in Washington, D.C.
Design and construction
The Flyer was based on the Wrights' experience testing gliders at Kitty Hawk between 1900 and 1902. Their last glider, the 1902 Glider, led directly to the design of the Wright Flyer.
The Wrights built the aircraft in 1903 using spruce for straight members of the airframe (such as wing spars) and ash wood for curved components (wing ribs). The wings were designed with a 1-in-20 camber. The fabric for the wing was 100% cotton muslin called "Pride of the West", a type used for women's underwear. It had a warp of 107 threads per inch, a weft of 102, and a total thread count of 209.
Since they could not find a suitable automobile engine for the task, they commissioned their employee Charlie Taylor to build a new design from scratch: a lightweight gasoline engine with its own fuel tank. A sprocket chain drive, borrowed from bicycle technology, powered the twin propellers, which were also made by hand. To keep torque effects from upsetting the aircraft's handling, one drive chain was crossed so that the propellers rotated in opposite directions.
The propellers were based on airfoil number 9 from the brothers' wind tunnel data, which provided the best "gliding angle" across a range of angles of attack. They were connected to the engine by chains from the Indianapolis Chain Company, with a sprocket gear reduction of 23-to-8. Wilbur had calculated that slower-turning blades generated greater thrust, and that two propellers were better than a single blade turning faster. Made from three laminations of spruce, the tips were covered with duck canvas, and the entire propeller was painted with aluminum paint.
On November 5, 1903, the brothers tested their engine on the Wright Flyer at Kitty Hawk, but before they could tune it, the propeller hubs came loose. The drive shafts were sent back to Dayton for repair and returned on November 20, whereupon a hairline crack was discovered in one of the propeller shafts. Orville returned to Dayton on November 30 to make new spring-steel shafts. On December 12, the brothers installed the new shafts on the Wright Flyer and tested it on their launching-rail system, which included a wheeled launching dolly.
In practice tests, they achieved a propeller speed of 351 rpm, with more than enough thrust for their Flyer.
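The chain reduction ties propeller speed to engine speed, so the measured propeller rpm implies the engine's speed. A back-of-the-envelope sketch using the 23-to-8 sprocket ratio above (the implied engine rpm is an inference from these figures, not a recorded measurement):

```python
# Illustrative sketch: chain-drive speed reduction on the 1903 Flyer.
# Assumes the 23-to-8 sprocket ratio and 351 propeller rpm cited above;
# the implied engine speed is an inference, not a recorded figure.

ENGINE_SPROCKET_TEETH = 8    # driving sprocket on the engine shaft
PROP_SPROCKET_TEETH = 23     # driven sprocket on each propeller shaft

def engine_rpm_from_prop(prop_rpm: float) -> float:
    """Engine shaft speed implied by a given propeller speed."""
    return prop_rpm * PROP_SPROCKET_TEETH / ENGINE_SPROCKET_TEETH

print(engine_rpm_from_prop(351))  # ~1009 rpm
```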
The Wright Flyer was a canard biplane configuration, with a wingspan of 40 feet 4 inches (12.3 m), a camber of 1-in-20, a wing area of 510 square feet (47 m2), and a length of 21 feet 1 inch (6.4 m). The right wing was longer because the engine was heavier than Orville or Wilbur. Unoccupied, the machine weighed 605 pounds (274 kg). As with the gliders, the pilot flew lying on his stomach on the lower wing, head toward the front of the craft, to reduce drag. He lay left of center while the engine sat right of center, and steered by moving a hip cradle in the direction he wished to fly; the cradle pulled wires that warped the wings and simultaneously turned the rudder, for coordinated flight. The pilot operated the elevator lever with his left hand while holding a strut with his right. The Wright Flyer's "runway" was a track of 2x4s, which the brothers nicknamed the "Junction Railroad". The Flyer's skids rested on a launching dolly consisting of a plank with a wheeled wooden section; the two tandem ball-bearing wheels were made from bicycle hubs. A restraining wire held the plane back, while the engine was running and the propellers turning, until the pilot was ready to be released.
The Wright Flyer had three instruments on board. A Veeder engine revolution recorder measured the number of propeller turns. A stopwatch recorded the flight time, and a Richard hand anemometer, attached to the front center strut, recorded the distance covered in meters.
Flight trials at Kitty Hawk
Upon returning to Kitty Hawk in 1903, the Wrights completed assembly of the Flyer while practicing on the 1902 Glider from the previous season. On December 14, 1903, they felt ready for their first attempt at powered flight. With the help of men from the nearby government life-saving station, the Wrights moved the Flyer and its launching rail to the incline of a nearby sand dune, Big Kill Devil Hill, intending to make a gravity-assisted takeoff. The brothers tossed a coin to decide who would get the first chance at piloting, and Wilbur won. The airplane left the rail, but Wilbur pulled up too sharply, stalled, and came down after about 3 seconds, sustaining little damage.
Repairs after the abortive first flight took three days. When they were ready again on December 17, the wind was strong, so the brothers laid the launching rail on level ground, pointed into the wind, near their camp. This time the wind, instead of an inclined launch, provided the necessary airspeed for takeoff. Because Wilbur had already had the first chance, Orville took his turn at the controls. His first flight lasted 12 seconds for a total distance of 120 feet (37 m) – shorter than the wingspan of a Boeing 747.
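The headwind's contribution can be made concrete with the flight's own figures: 120 feet in 12 seconds is a ground speed of only about 7 mph, so most of the airspeed over the wings came from the wind. A rough sketch (the headwind value here is an assumed illustration, not a measurement):

```python
# Rough illustration: ground speed vs. airspeed on the first flight.
# The 120 ft / 12 s figures are from the account above; the headwind
# value is an assumed illustration, not a recorded measurement.

FEET_PER_MILE = 5280

distance_ft = 120.0
time_s = 12.0

ground_speed_mph = (distance_ft / time_s) * 3600 / FEET_PER_MILE
print(round(ground_speed_mph, 1))  # ~6.8 mph over the ground

headwind_mph = 25.0  # assumed for illustration
airspeed_mph = ground_speed_mph + headwind_mph
print(round(airspeed_mph, 1))  # airspeed the wings actually experienced
```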
Taking turns, the Wrights made four brief, low-altitude flights that day. The flight paths were all essentially straight; turns were not attempted. Each flight ended in a bumpy and unintended landing. The last flight, by Wilbur, covered 852 feet (260 m) in 59 seconds, much longer than each of the three previous flights of 120, 175, and 200 feet (37, 53, and 61 m) in 12, 12, and 15 seconds respectively. The fourth flight's landing broke the front elevator supports, which the Wrights hoped to repair for a possible flight to Kitty Hawk village. Soon after, a heavy gust picked up the Flyer and tumbled it end over end, damaging it beyond any hope of quick repair. It was never flown again.
In 1904, the Wrights continued refining their designs and piloting techniques in order to obtain fully controlled flight. Major progress toward this goal was achieved with a new machine called the Wright Flyer II in 1904 and even more decisively in 1905 with the third, Wright Flyer III, in which Wilbur made a 39-minute, nonstop circling flight on October 5.
Influence
The Flyer series of aircraft were the first to achieve controlled heavier-than-air flight, but some of the mechanical techniques the Wrights used to accomplish this were not influential for the development of aviation as a whole, although their theoretical achievements were. The Flyer design depended on wing-warping controlled by a hip cradle under the pilot, and a foreplane or "canard" for pitch control, features which would not scale and produced a hard-to-control aircraft. The Wrights' pioneering use of "roll control" by twisting the wings to change wingtip angle in relation to the airstream led to the more practical use of ailerons by their imitators, such as Glenn Curtiss and Henri Farman. The Wrights' original concept of simultaneous coordinated roll and yaw control (rear rudder deflection), which they discovered in 1902, perfected in 1903–1905, and patented in 1906, represents the solution to controlled flight and is used today on virtually every fixed-wing aircraft. The Wright patent included the use of hinged rather than warped surfaces for the forward elevator and rear rudder. Other features that made the Flyer a success were highly efficient wings and propellers, which resulted from the Wrights' exacting wind tunnel tests and made the most of the marginal power delivered by their early homebuilt engines; slow flying speeds (and hence survivable accidents); and an incremental test/development approach. The future of aircraft design lay with rigid wings, ailerons and rear control surfaces. A British patent of 1868 for aileron technology had apparently been completely forgotten by the time the 20th century dawned.
After a single statement to the press in January 1904 and a failed public demonstration in May, the Wright Brothers did not publicize their efforts, and other aviators who were working on the problem of flight (notably Alberto Santos-Dumont) were thought by the press to have preceded them by many years. After their successful demonstration flight in France on August 8, 1908, they were accepted as pioneers and received extensive media coverage.
In 1909, the Wright Military Flyer became the world's first military aircraft after successful tests on June 3, 1909. This airplane was purchased by the army but was never used in combat; it was, however, used to train some pilots. It was donated to the Smithsonian Institution in 1911 and is on display in the Early Flight exhibit at the National Air and Space Museum. A modified version, the Wright Model B, was produced in larger numbers by the Wright brothers and was used by the army "for training pilots and conducting aerial experiments" including tests of "a bombsight and bomb-dropping device".
The issue of patent control was correctly seen as critical by the Wrights, and they acquired a wide American patent, intended to give them ownership of basic aerodynamic control. This was fought in both American and European courts. European designers were little affected by the litigation and continued their own development. The legal fight in the U.S. had a crushing effect on the nascent American aircraft industry, and even by the time of America's entry into World War I, in 1917, the U.S. had "only six [American made] airplanes, and fourteen trained pilots". The numbers increased substantially over the subsequent years but during the war, all of the fighter aircraft flown by Americans were designed and built in Europe.
Stability
The Wright Flyer was conceived as a control-canard, as the Wrights were more concerned with control than stability. It was found to be unstable and barely controllable. During flight tests near Dayton the Wrights added ballast to the nose of the aircraft to move the center of gravity forward and reduce pitch instability. The Wright Brothers did not understand the basics of pitch stability of the canard configuration. F.E.C. Culick stated, "The backward state of the general theory and understanding of flight mechanics hindered them... Indeed, the most serious gap in their knowledge was probably the basic reason for their unwitting mistake in selecting their canard configuration."
According to aviation author Harry Combs, "Wright designs incorporated a 'balanced' forward elevator...the movable surface extending an equal distance on both sides of its hinge or pivot axis, as opposed to an 'in-trail' configuration... which would have enhanced controllability in flight." Orville wrote of the elevator, which the brothers called a "front rudder", "I found the control of the front rudder quite difficult on account of its being balanced too near the center and thus had a tendency to turn itself when started so that the rudder was turned too far on one side and then too far on the other." Thus, these early flights suffered from overcontrol.
After Kitty Hawk
The Wright Brothers returned home to Dayton for Christmas after the flights of the Kitty Hawk Flyer. While they had abandoned their other gliders, they realized the historical significance of the Flyer. They shipped the heavily damaged craft back to Dayton, where it remained stored in crates behind a Wright Company shed for nine years. The Great Dayton Flood of March 1913 covered the Flyer in mud and water for 11 days.
Charlie Taylor related in a 1948 article that the Flyer was nearly disposed of by the Wrights. In early 1912, Roy Knabenshue, the Wrights' exhibition team manager, asked Wilbur what they planned to do with the Flyer; Wilbur said they would most likely burn it, as they had the 1904 machine. According to Taylor, Knabenshue talked Wilbur out of disposing of the machine for its historical value.
In 1910 the Wrights offered the Flyer as an exhibit at the Smithsonian Institution, but the Smithsonian declined, saying it would be willing to display other aeronautical artifacts from the brothers. Wilbur died in 1912, and in 1916 Orville brought the Flyer out of storage and prepared it for display at the Massachusetts Institute of Technology. He replaced parts of the wing covering, the props, and the engine's crankcase, crankshaft, and flywheel. The crankcase, crankshaft, and flywheel of the original engine had been sent to the Aero Club of America in New York for an exhibit in 1906 and were never returned to the Wrights. The replacement crankcase, crankshaft and flywheel came from the experimental engine Charlie Taylor had built in 1904 and used for testing in the bicycle shop. A replica crankcase of the Flyer is on display at the visitor center at the Wright Brothers National Memorial.
Debate with the Smithsonian
The Smithsonian Institution, and primarily its then-secretary Charles Walcott, refused to give credit to the Wright brothers for the first powered, controlled flight of an aircraft. Instead, it honored the former Smithsonian Secretary Samuel Pierpont Langley, whose 1903 tests of his Aerodrome on the Potomac were not successful. Walcott was a friend of Langley and wanted to see Langley's place in aviation history restored. By 1914, Glenn Curtiss had exhausted the appeal process in a patent infringement legal battle with the Wrights. Curtiss sought to prove that Langley's machine, which had failed piloted tests nine days before the Wrights' successful flight in 1903, was capable of controlled, piloted flight, in an attempt to invalidate the Wrights' wide-sweeping patents.
The Aerodrome was removed from exhibit at the Smithsonian and prepared for flight at Keuka Lake, New York. Curtiss called the preparations a "restoration", claiming that the only addition to the design was pontoons to support testing on the lake, but critics, including patent attorney Griffith Brewer, called them alterations of the original design. Curtiss flew the modified Aerodrome, hopping a few feet off the surface of the lake for five seconds at a time.
Between 1916 and 1928, the Wright Flyer was prepared and assembled for exhibition several times by Wright Company mechanic Jim Jacobs, under Orville's supervision. It was briefly exhibited at the Massachusetts Institute of Technology in 1916, the New York Aero Shows in 1917 and 1919, a Society of Automotive Engineers meeting in Dayton, Ohio, in 1918, and the National Air Races in Dayton in 1924.
In 1925, Orville attempted to pressure the Smithsonian by warning that he would send the Flyer to the Science Museum in London if the Institution refused to recognize his and Wilbur's accomplishment. The threat did not achieve its intended effect, and on January 28, 1928, Orville shipped the Kitty Hawk to London for display at the museum. It remained there in "the place of honour", except during World War II, when it was moved to an underground storage facility near Corsham.
In 1942, the Smithsonian Institution, under a new secretary, Charles Abbot, published a list of 35 Curtiss modifications to the Aerodrome and a retraction of its long-held claims for the craft. Abbot went on to list four regrets, including the role the Institution had played in supporting unsuccessful defendants in patent litigation brought by the Wrights, misinformation about modifications made to the Aerodrome after the Wright Flyer's first flight, and public statements attributing the "first aeroplane capable of sustained free flight with a man" to Secretary Langley. The entry in the 1942 Annual Report of the Smithsonian Institution begins with the statement "It is everywhere acknowledged that the Wright brothers were the first to make sustained flights in a heavier-than-air machine at Kitty Hawk, North Carolina, on December 17, 1903" and closes with a promise that "Should Dr. Wright decide to deposit the plane ... it would be given the highest place of honor which it is due".
The following year, Orville, after exchanging several letters with Abbot, agreed to return the Flyer to the United States. The Flyer stayed at the Science Museum until a replica could be built, based on the original. This change of heart by the Smithsonian is also mired in controversy: the Flyer was sold to the Smithsonian under several contractual conditions, one of which bars the Institution from recognizing any aircraft earlier than the 1903 Flyer as capable of manned, powered, controlled flight.
On October 18, 1948, the official handover of the Kitty Hawk was made to Livingston L. Satterthwaite, the American Civil Air Attaché, at a ceremony attended by representatives of the various flying organizations in the UK and by some British aviation pioneers, such as Sir Alliott Verdon-Roe.
On November 11, 1948, the Kitty Hawk arrived in North America on board the Mauretania with 1,111 passengers. When the liner docked at Halifax, Nova Scotia, Paul E. Garber of the Smithsonian's National Air Museum met the aircraft and took command of the proceedings, overseeing its transfer to the US Navy aircraft carrier USS Palau, which repatriated the aircraft by way of New York Harbor; the rest of the journey to Washington continued by flatbed truck. While in Halifax, Garber met John A. D. McCurdy, at the time the Lieutenant Governor of Nova Scotia. As a young man, McCurdy had been a member of Alexander Graham Bell's Aerial Experiment Association, which included Glenn Curtiss, and he later became a famous pioneer pilot. During the stay in Halifax, Garber and McCurdy reminisced about the pioneer aviation days and the Wright brothers, and McCurdy offered Garber any assistance he needed to get the Flyer home.
In the Smithsonian
The Wright Flyer was put on display in the Arts and Industries Building of the Smithsonian on December 17, 1948, 45 years to the day after the aircraft's only successful flights. (Orville did not live to see this, as he had died that January.) In 1976, it was moved to the Milestones of Flight Gallery of the new National Air and Space Museum. Since 2003 it has resided in a special exhibit in the museum titled "The Wright Brothers and the Invention of the Aerial Age," in recognition of the 100th anniversary of their first flight.
1985 restoration
In 1981, discussion began on the need to restore the Wright Flyer from the aging it sustained after many decades on display. During the ceremonies celebrating the 78th anniversary of the first flights, Mrs. Harold S. Miller (Ivonette Wright, Lorin's daughter), one of the Wright brothers' nieces, presented the Museum with the original covering of one wing of the Flyer, which she had received in her inheritance from Orville. She expressed her wish to see the aircraft restored.
The fabric covering on the aircraft at the time, which came from the 1927 restoration, was discolored and marked with water spots. Metal fasteners holding the wing uprights together had begun to corrode, marking the nearby fabric.
Work began in 1985. The restoration was supervised by Senior Curator Robert Mikesh and assisted by Wright Brothers expert Tom Crouch. Museum director Walter J. Boyne decided to perform the restoration in full view of the public.
The wooden framework was cleaned, and corrosion on metal parts removed. The covering was the only part of the aircraft replaced. The new covering was more accurate to the original than that of the 1927 restoration. To preserve the original paint on the engine, the restorers coated it in inert wax before putting on a new coat of paint. The effects of the 1985 restoration were intended to last 75 years (to 2060) before another restoration would be required.
Reproductions
In 1978, 23-year-old Ken Kellett built a replica Wright Flyer in Colorado and flew it at Kitty Hawk on the 75th and 80th anniversaries of the first flight there. Construction took a year and cost $3,000.
As the 100th anniversary on December 17, 2003, approached, the U.S. Centennial of Flight Commission along with other organizations opened bids for companies to recreate the original flight. The Wright Experience, led by Ken Hyde, won the bid and painstakingly recreated reproductions of the original Wright Flyer, plus many of the prototype gliders and kites and subsequent Wright aircraft. The completed Flyer reproduction was brought to Kitty Hawk and pilot Kevin Kochersberger attempted to recreate the original flight at 10:35 on December 17, 2003, on level ground near the bottom of Kill Devil Hill. Although the aircraft had previously made several successful test flights, poor weather, rain, and weak winds prevented a successful flight on the anniversary. Hyde's reproduction is displayed at the Henry Ford Museum in Dearborn, Michigan.
The Los Angeles Section of the American Institute of Aeronautics and Astronautics (AIAA) built a full-scale replica of the 1903 Wright Flyer between 1979 and 1993 using plans from the original Wright Flyer published by the Smithsonian Institution in 1950. Constructed in advance of the 100th anniversary of the Wright Brothers' first flight, the replica was intended for wind tunnel testing to provide a historically accurate aerodynamic database of the Wright Flyer design. The aircraft went on display at the March Field Air Museum in Riverside, California. Numerous static display-only, nonflying reproductions are on display around the United States and across the world, making this perhaps the most reproduced single aircraft of the "pioneer" era in history, rivaling the number of copies – some of which are airworthy – of Louis Blériot's cross-Channel Blériot XI from 1909.
Artifacts
In 1969, portions of the original fabric and wood from the Wright Flyer traveled to the Moon and its surface in Neil Armstrong's personal preference kit aboard the Apollo 11 Lunar Module Eagle, and then back to Earth in the Command module Columbia.
This artifact is on display at the visitors center at the Wright Brothers National Memorial in Kitty Hawk, North Carolina.
In 1986, separate portions of original wood and fabric, as well as a note by Orville Wright, were taken by North Carolina native astronaut Michael Smith aboard the Space Shuttle Challenger on mission STS-51-L, which was destroyed soon after liftoff. The portions of wood and fabric and Wright's note were recovered from the wreck of the Shuttle and are on display at the North Carolina Museum of History.
A small piece of the Wright Flyer's wing fabric is attached to a cable underneath the solar panel of the helicopter Ingenuity, which became the first vehicle to perform a controlled atmospheric flight on Mars on April 19, 2021. Ingenuity's first base on Mars, from which it moved on for further exploration and testing, was named Wright Brothers Field.
Commemorations
The Wright brothers and their airplane have been commemorated on a U.S. quarter and on several U.S. postage stamps.
| Technology | Specific aircraft_2 | null |
1045705 | https://en.wikipedia.org/wiki/Influenza%20vaccine | Influenza vaccine | Influenza vaccines, colloquially known as flu shots or the flu jab, are vaccines that protect against infection by influenza viruses. New versions of the vaccines are developed twice a year, as the influenza virus rapidly changes. While their effectiveness varies from year to year, most provide modest to high protection against influenza. Vaccination against influenza began in the 1930s, with large-scale availability in the United States beginning in 1945.
Both the World Health Organization and the US Centers for Disease Control and Prevention (CDC) recommend yearly vaccination for nearly all people over the age of six months, especially those at high risk, and the influenza vaccine is on the World Health Organization's List of Essential Medicines. The European Centre for Disease Prevention and Control (ECDC) also recommends yearly vaccination of high-risk groups, particularly pregnant women, the elderly, children between six months and five years, and those with certain health problems.
The vaccines are generally safe, including for people who have severe egg allergies. A common side effect is soreness near the site of injection. Fever occurs in five to ten percent of children vaccinated, and temporary muscle pains or feelings of tiredness may occur. In certain years, the vaccine was linked to an increase in Guillain–Barré syndrome among older people at a rate of about one case per million doses. Influenza vaccines are not recommended for those who have had a severe allergic reaction to previous versions of the vaccine itself. The vaccine comes in inactivated and live attenuated (weakened) forms. The live, weakened vaccine is generally not recommended in pregnant women, children less than two years old, adults older than 50, or people with a weakened immune system. Depending on the type, it can be injected into a muscle (intramuscular), sprayed into the nose (intranasal), or injected into the middle layer of the skin (intradermal). The intradermal vaccine was not available during the 2018–2019 and 2019–2020 influenza seasons.
History
Vaccines are used in both humans and non-humans. The human vaccine is meant unless specifically identified as a veterinary, poultry, or livestock vaccine.
Origins and development
During the worldwide Spanish flu pandemic of 1918, "Pharmacists tried everything they knew, everything they had ever heard of, from the ancient art of bleeding patients, to administering oxygen, to developing new vaccines and serums (chiefly against what we call Hemophilus influenzae, a name derived from the fact that it was originally considered the etiological agent, and several types of pneumococci). Only one therapeutic measure, transfusing blood from recovered patients to new victims, showed any hint of success."
In 1931, viral growth in embryonated hens' eggs was reported by Ernest William Goodpasture and colleagues at Vanderbilt University. The work was extended to the growth of influenza virus by several workers, including Thomas Francis, Jonas Salk, Wilson Smith, and Macfarlane Burnet, leading to the first experimental influenza vaccines. In the 1940s, the US military developed the first approved inactivated vaccines for influenza, which were used during World War II. Hens' eggs continued to be used to produce virus used in influenza vaccines, but manufacturers made improvements in the purity of the virus by developing improved processes to remove egg proteins and to reduce systemic reactivity of the vaccine. In 2012, the US Food and Drug Administration (FDA) approved influenza vaccines made by growing virus in cell cultures; influenza vaccines made from recombinant proteins have also been approved, and plant-based influenza vaccines have been tested in clinical trials.
Acceptance
The egg-based technology for producing influenza vaccine was created in the 1950s. During the US swine flu scare of 1976, President Gerald Ford was confronted with a potential swine flu pandemic. The vaccination program was rushed and plagued by delays and public relations problems. Meanwhile, maximum military containment efforts succeeded unexpectedly in confining the new strain to the single army base where it had originated; on that base, several soldiers fell severely ill, but only one died. The program was canceled after about 24% of the population had received vaccinations. An estimated 25 deaths and 400 hospitalizations in excess of normal annual levels, both from Guillain–Barré syndrome, were attributed to the vaccination program itself, demonstrating that the vaccine was not free of risks. In the end, however, even the maligned 1976 vaccine may have saved lives: a 2010 study found a significantly enhanced immune response against the 2009 pandemic H1N1 in study participants who had received the 1976 swine flu vaccination. The 2009 H1N1 "swine flu" outbreak resulted in the rapid approval of pandemic influenza vaccines; Pandemrix was quickly modified to target the circulating strain, and by late 2010, 70 million people had received a dose. Eight years later, the BMJ gained access to early vaccine pharmacovigilance reports compiled by GSK (GlaxoSmithKline) during the pandemic, which the BMJ reported indicated that death was 5.39-fold more likely with Pandemrix than with the other pandemic vaccines. However, later, more thorough and robust analyses did not establish any increase in fatalities or most other serious adverse effects, with a possible rare exception for narcolepsy.
Quadrivalent vaccines
A quadrivalent flu vaccine administered by nasal mist was approved by the FDA in March 2012. Fluarix Quadrivalent was approved by the FDA in December 2012.
In 2014, the Canadian National Advisory Committee on Immunization (NACI) published a review of quadrivalent influenza vaccines.
Starting with the 2018–2019 influenza season, most of the regular-dose egg-based flu shots and all of the recombinant and cell-grown flu vaccines in the United States were quadrivalent. In the 2019–2020 influenza season, all regular-dose flu shots and all recombinant influenza vaccines in the United States were quadrivalent.
In November 2019, the FDA approved Fluzone High-Dose Quadrivalent for use in the United States starting with the 2020–2021 influenza season.
In February 2020, the FDA approved Fluad Quadrivalent for use in the United States. In July 2020, the FDA approved both Fluad and Fluad Quadrivalent for use in the United States for the 2020–2021 influenza season.
The B/Yamagata lineage of influenza B, one of the four lineages targeted by quadrivalent vaccines, might have become extinct in 2020/2021 due to COVID-19 pandemic measures, and there have been no naturally occurring cases confirmed since March 2020. In 2023, the World Health Organization concluded that protection against the Yamagata lineage was no longer necessary in the seasonal flu vaccine, so future vaccines are recommended to be trivalent instead of quadrivalent. For the 2024–2025 Northern Hemisphere influenza season, the FDA recommends removing B/Yamagata from all influenza vaccines.
Medical uses
The influenza vaccine is indicated for active immunization for the prevention of influenza disease caused by the influenza A virus subtypes and the influenza B viruses contained in the vaccine.
The US Centers for Disease Control and Prevention (CDC) recommends the flu vaccine as the best way to protect people against the flu and prevent its spread. The flu vaccine can also reduce the severity of the flu if a person contracts a strain that the vaccine did not contain. It takes about two weeks following vaccination for protective antibodies to form.
A 2012 meta-analysis found that flu vaccination was effective 67 percent of the time; the populations that benefited the most were HIV-positive adults aged 18 to 55 (76 percent), healthy adults aged 18 to 46 (approximately 70 percent), and healthy children aged six months to 24 months (66 percent). The influenza vaccine also appears to protect against myocardial infarction, with a benefit of 15–45%.
Effectiveness
A vaccine is assessed by its efficacy – the extent to which it reduces the risk of disease under controlled conditions – and its effectiveness – the observed reduction in risk after the vaccine is put into use. In the case of influenza, effectiveness is expected to be lower than the efficacy because it is measured using the rates of influenza-like illness, which is not always caused by influenza. Studies on the effectiveness of flu vaccines in the real world are difficult; vaccines may be imperfectly matched, virus prevalence varies widely between years, and influenza is often confused with other influenza-like illnesses. However, in most years (16 of the 19 years before 2007), the flu vaccine strains have been a good match for the circulating strains, and even a mismatched vaccine can often provide cross-protection. The virus rapidly changes due to antigenic drift, a slight mutation in the virus that causes a new strain to arise.
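Both quantities are computed the same way, from attack rates in the unvaccinated and vaccinated groups; a standard epidemiological formulation (a general definition, not taken from any specific study cited here) is:

```latex
% Vaccine efficacy/effectiveness from attack rates (standard definition):
% ARU = attack rate among the unvaccinated, ARV = attack rate among the
% vaccinated, RR = ARV / ARU (relative risk).
VE = \frac{ARU - ARV}{ARU} \times 100\% = (1 - RR) \times 100\%
```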
The effectiveness of seasonal flu vaccines varies significantly, with an estimated average efficacy of 50–60% against symptomatic disease, depending on vaccine strain, age, prior immunity, and immune function, so vaccinated people can still contract influenza. The effectiveness of flu vaccines is considered suboptimal, particularly among the elderly, but vaccination still reduces the mortality rate, the hospitalization rate, and the duration of hospitalization due to influenza. Vaccination of school-age children has been shown to provide indirect protection for other age groups. LAIVs are recommended for children based on superior efficacy, especially for children under 6, and greater immunity against non-vaccine strains when compared to inactivated vaccines.
From 2012 to 2015 in New Zealand, vaccine effectiveness against admission to an intensive care unit was 82%. Effectiveness against hospitalized influenza illness in the 2019–2020 United States flu season was 41% overall and 54% in people aged 65 years or older. One review found 31% effectiveness against death among adults.
Repeated annual influenza vaccination generally offers consistent year-on-year protection against influenza. There is, however, suggestive evidence that repeated vaccinations may cause a reduction in vaccine effectiveness for certain influenza subtypes; this has no bearing on recommendations for yearly vaccination but might influence future vaccination policy. The CDC recommends a yearly vaccine, as most studies demonstrate overall effectiveness of annual influenza vaccination.
There is not enough evidence to establish significant differences in the effectiveness of different influenza vaccine types, but there are high-dose or adjuvanted products that induce a stronger immune response in the elderly.
According to a 2016 study by faculty at the University of New South Wales, getting a flu shot was as effective as, or better than, quitting smoking at preventing a heart attack.
A 2024 CDC study found that the 2024 flu vaccine reduced the risk of hospitalization from the flu by 35% in the Southern Hemisphere. The research, conducted across five countries—Argentina, Brazil, Chile, Paraguay, and Uruguay—showed the vaccine was less effective than the one used in the previous season.
Children
In April 2002, the Advisory Committee on Immunization Practices (ACIP) recommended that children 6 to 23 months of age be vaccinated annually against influenza. In 2010, ACIP recommended annual influenza vaccination for those 6 months of age and older. The CDC recommends that everyone except infants under the age of six months receive the seasonal influenza vaccine. Vaccination campaigns usually focus special attention on people who are at high risk of serious complications if they catch the flu, such as pregnant women, children under 59 months, the elderly, and people with chronic illnesses or weakened immune systems, as well as those to whom they are exposed, such as health care workers.
As the death rate is also high among infants who catch influenza, the CDC and the WHO recommend that household contacts and caregivers of infants be vaccinated to reduce the risk of passing an influenza infection to the infant.
In children, the vaccine appears to decrease the risk of influenza and possibly influenza-like illness. In children under the age of two, data are limited. During the 2017–18 flu season, the CDC director indicated that 85 percent of the children who died "likely will not have been vaccinated".
In the United States, the CDC recommends that children aged six through 35 months may receive either 0.25 milliliters or 0.5 milliliters per dose of Fluzone Quadrivalent; there is no preference for one or the other dose volume for that age group. All persons 36 months of age and older should receive 0.5 milliliters per dose of Fluzone Quadrivalent. Afluria Quadrivalent is licensed for children six months of age and older in the United States; children six months through 35 months of age should receive 0.25 milliliters per dose, and all persons 36 months of age and older should receive 0.5 milliliters per dose. Afluria Tetra is licensed for adults and children five years of age and older in Canada.
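The Afluria Quadrivalent dosing rule above reduces to a simple age threshold; a minimal sketch (a hypothetical helper for illustration only, not from any official immunization software):

```python
# Illustrative sketch of the age-based dose-volume rules described above.
# Hypothetical helper, not from any official immunization software.

def afluria_quadrivalent_dose_ml(age_months: int) -> float:
    """Dose volume per the US labeling summarized above."""
    if age_months < 6:
        raise ValueError("not licensed under six months of age")
    if age_months < 36:
        return 0.25  # six through 35 months
    return 0.5       # 36 months and older

print(afluria_quadrivalent_dose_ml(24))  # 0.25
print(afluria_quadrivalent_dose_ml(48))  # 0.5
```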
In 2014, the Canadian National Advisory Committee on Immunization (NACI) published a review of influenza vaccination in healthy 5–18-year-olds, and in 2015, it published a review of the use of pediatric Fluad in children 6–72 months of age. In one study, conducted in a tertiary referral center, the rate of influenza vaccination in children was only 31%; higher rates were found among immunosuppressed pediatric patients (46%) and in patients with inflammatory bowel disease (50%).
Adults
In unvaccinated adults, 16% get symptoms similar to the flu, while about 10% of vaccinated adults do. Vaccination decreased confirmed cases of influenza from about 2.4% to 1.1%. No effect on hospitalization was found.
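Those absolute rates imply a number needed to vaccinate (NNV); a back-of-the-envelope sketch using the 2.4% and 1.1% figures above (the NNV framing is standard epidemiology, not stated in the source):

```python
# Back-of-the-envelope: absolute risk reduction and number needed to
# vaccinate (NNV), using the confirmed-influenza rates quoted above.

risk_unvaccinated = 0.024  # ~2.4% confirmed influenza
risk_vaccinated = 0.011    # ~1.1% confirmed influenza

arr = risk_unvaccinated - risk_vaccinated  # absolute risk reduction
nnv = 1 / arr

print(round(arr, 3))  # ~0.013
print(round(nnv))     # ~77 people vaccinated to prevent one case
```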
In working adults, a review by the Cochrane Collaboration found that vaccination resulted in a modest decrease in both influenza symptoms and working days lost, without affecting transmission or influenza-related complications. In healthy working adults, influenza vaccines can provide moderate protection against virologically confirmed influenza, though such protection is greatly reduced or absent in some seasons.
In health care workers, a 2006 review found a net benefit. Of the eighteen studies in this review, only two also assessed the relationship of patient mortality relative to staff influenza vaccine uptake; both found that higher rates of healthcare worker vaccination correlated with reduced patient deaths. A 2014 review found benefits to patients when health care workers were immunized, as supported by moderate evidence based in part on the observed reduction in all-cause deaths in patients whose health care workers were given immunization compared with comparison patients where the workers were not offered the vaccine.
Elderly
Evidence for an effect in adults over 65 is unclear. Systematic reviews examining both randomized controlled trials and case–control studies found a lack of high-quality evidence. Reviews of case–control studies found effects against laboratory-confirmed influenza, pneumonia, and death among the community-dwelling elderly.
The group most vulnerable to non-pandemic flu, the elderly, benefits least from the vaccine. There are multiple reasons behind this steep decline in vaccine efficacy, the most common of which are the declining immunological function and frailty associated with advanced age. In a non-pandemic year, a person in the United States aged 50–64 is nearly ten times more likely to die an influenza-associated death than a younger person, and a person over 65 is more than ten times more likely to die an influenza-associated death than the 50–64 age group.
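Spelling out the compounding of those relative risks: if each step is roughly tenfold, a person over 65 is on the order of a hundred times more likely than a younger person to die an influenza-associated death. A toy multiplication, treating the text's "nearly ten times" and "more than ten times" as exactly tenfold:

```python
# Toy illustration of compounding the relative risks quoted above.
# The ~10x figures come from the text; their product is an inference.

rr_50_to_64_vs_young = 10    # nearly ten times the younger baseline
rr_over_65_vs_50_to_64 = 10  # more than ten times the 50-64 group

rr_over_65_vs_young = rr_50_to_64_vs_young * rr_over_65_vs_50_to_64
print(rr_over_65_vs_young)  # on the order of 100x the younger baseline
```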
There is a high-dose flu vaccine specifically formulated to provide a stronger immune response. Available evidence indicates that vaccinating the elderly with the high-dose vaccine leads to a stronger immune response against influenza than the regular-dose vaccine.
A flu vaccine containing an adjuvant was approved by the US Food and Drug Administration (FDA) in November 2015, for use by adults aged 65 years of age and older. The vaccine is marketed as Fluad in the US and was first available in the 2016–2017 flu season. The vaccine contains the MF59C.1 adjuvant which is an oil-in-water emulsion of squalene oil. It is the first adjuvanted seasonal flu vaccine marketed in the United States. It is not clear if there is a significant benefit for the elderly to use a flu vaccine containing the MF59C.1 adjuvant. Per Advisory Committee on Immunization Practices guidelines, Fluad can be used as an alternative to other influenza vaccines approved for people 65 years and older.
Vaccinating healthcare workers who work with elderly people is recommended in many countries, with the goal of reducing influenza outbreaks in this vulnerable population. While there is no conclusive evidence from randomized clinical trials that vaccinating health care workers helps protect elderly people from influenza, there is tentative evidence of benefit.
Fluad Quad was approved for use in Australia in September 2019, Fluad Quadrivalent was approved for use in the United States in February 2020, and Fluad Tetra was authorized for use in the European Union in May 2020.
Pregnancy
As well as protecting mother and child from the effects of an influenza infection, the immunization of pregnant women tends to increase their chances of experiencing a successful full-term pregnancy.
The trivalent inactivated influenza vaccine is protective in pregnant women infected with HIV.
Safety
Side effects
Common side effects of vaccination include local injection-site reactions and cold-like symptoms. Fever, malaise, and myalgia are less common. Flu vaccines are contraindicated for people who have experienced a severe allergic reaction in response to a flu vaccine or to any component of the vaccine. LAIVs are not given to children or adolescents with severe immunodeficiency or to those who are using salicylate treatments because of the risk of developing Reye syndrome. LAIVs are also not recommended for children under the age of 2, pregnant women, and adults with immunosuppression. Inactivated flu vaccines cannot cause influenza and are regarded as safe during pregnancy.
While side effects of the flu vaccine may occur, they are usually minor, including soreness, redness, swelling around the point of injection, headache, fever, nausea, or fatigue. Side effects of a nasal spray vaccine may include runny nose, wheezing, sore throat, cough, or vomiting.
In some people, a flu vaccine may cause serious side effects, including an allergic reaction, but this is rare. Furthermore, the common side effects and risks are mild and temporary when compared to the risks and severe health effects of the annual influenza epidemic.
Contrary to a common misconception, flu shots cannot cause people to get the flu.
Guillain–Barré syndrome
Although Guillain–Barré syndrome had been feared as a complication of vaccination, the CDC states that most studies on modern influenza vaccines have seen no link with Guillain–Barré. Infection with influenza virus itself increases both the risk of death (up to one in ten thousand) and the risk of developing Guillain–Barré syndrome to a far higher level than the highest level of suspected vaccine involvement (approximately ten times higher by 2009 estimates).
Although one review gives an incidence of about one case of Guillain–Barré syndrome per million vaccinations, a large study in China, covering close to a hundred million doses of vaccine against the 2009 H1N1 "swine" flu, found only eleven cases of Guillain–Barré syndrome (0.1 per million doses) among those vaccinated, an incidence actually lower than the normal rate of the disease in China, and no other notable side effects.
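The per-million-dose figure quoted there is simply the case count divided by the number of doses; a quick check of the arithmetic (approximating "close to a hundred million" as exactly 100 million):

```python
# Quick check of the per-million-dose rate quoted above; the dose count
# is approximated as 100 million, since the text says "close to".

cases = 11
doses = 100_000_000  # approximate

rate_per_million = cases / doses * 1_000_000
print(round(rate_per_million, 2))  # ~0.11 per million doses
```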
Egg allergy
Although most influenza vaccines are produced using egg-based techniques, influenza vaccines are nonetheless still recommended as safe for people with egg allergies, even if severe, as no increased risk of allergic reaction to the egg-based vaccines has been shown for people with egg allergies. Studies examining the safety of influenza vaccines in people with severe egg allergies found that anaphylaxis was very rare, occurring in 1.3 cases per million doses given.
Monitoring for symptoms after vaccination is recommended for those with more severe egg-allergy symptoms. A study of nearly 800 children with egg allergy, including over 250 with previous anaphylactic reactions, found zero systemic allergic reactions when the children were given the live attenuated flu vaccine.
Vaccines produced using other technologies, notably recombinant vaccines and those based on cell culture rather than egg protein, started to become available in 2012 in the US, and later in Europe and Australia.
Other
Several studies have identified an increased incidence of narcolepsy among recipients of the pandemic H1N1 influenza AS03-adjuvanted vaccine; efforts to identify a mechanism for this suggest that narcolepsy is autoimmune, and that the AS03-adjuvanted H1N1 vaccine may mimic hypocretin, serving as a trigger.
Some injection-based flu vaccines intended for adults in the United States contain thiomersal (also known as thimerosal), a mercury-based preservative. Despite some controversy in the media, the World Health Organization's Global Advisory Committee on Vaccine Safety has concluded that there is no evidence of toxicity from thiomersal in vaccines and no reason on grounds of safety to change to more-expensive single-dose administration.
Exercising before the influenza vaccine is not thought to be harmful but there is no evidence of a beneficial effect either.
Types
Seasonal flu vaccines are available either as:
a trivalent or quadrivalent injection, which contains the inactivated form of the virus. This is usually an intramuscular injection, though subcutaneous and intradermal routes can also be protective.
a nasal spray of live attenuated influenza vaccine, which contains the live but attenuated (weakened) form of the virus.
Injected vaccines induce protection based on an immune response to the antigens present on the inactivated virus, while the nasal spray works by establishing short-term infection in the nasal passages.
Annual reformulation
Each year, three influenza strains are chosen for inclusion in the forthcoming year's seasonal flu vaccination by the Global Influenza Surveillance and Response System of the World Health Organization (WHO). The recommendation for trivalent vaccine comprises two strains of Influenza A (one each of A/H1N1 and A/H3N2), and one strain of influenza B (B/Victoria), together representing strains thought most likely to cause significant human suffering in the coming season. Starting in 2012, WHO has also recommended a second influenza B strain (B/Yamagata) for use in quadrivalent vaccines; this was discontinued in 2024.
"The WHO Global Influenza Surveillance Network was established in 1952 (renamed "Global Influenza Surveillance and Response System" in 2011). The network comprises four WHO Collaborating Centres (WHO CCs) and 112 institutions in 83 countries, which are recognized by WHO as WHO National Influenza Centres (NICs). These NICs collect specimens in their country and perform primary virus isolation and preliminary antigenic characterization. They ship newly isolated strains to WHO CCs for high-level antigenic and genetic analysis, the result of which forms the basis for WHO recommendations on the composition of influenza vaccine for the Northern and Southern Hemisphere each year."
Formal WHO recommendations were first issued in 1973. Beginning in 1999 there have been two recommendations per year: one for the northern hemisphere and the other for the southern hemisphere.
Due to the widespread use of non-pharmaceutical interventions at the beginning of the COVID-19 pandemic, the B/Yamagata influenza lineage has not been isolated since March 2020 and may have been eradicated. Starting with the 2024 Southern Hemisphere influenza season, the WHO and other regulatory bodies have removed B/Yamagata from influenza vaccine recommendations.
Recommendations
Various public health organizations, including the World Health Organization (WHO), recommend that yearly influenza vaccination be routinely offered, particularly to people at risk of complications of influenza and those individuals who live with or care for high-risk individuals, including:
people aged 50 years of age or older
people with chronic lung diseases, including asthma
people with chronic heart diseases
people with chronic liver diseases
people with chronic kidney diseases
people who have had their spleen removed or whose spleen is not working properly
people who are immunocompromised
residents of nursing homes and other long-term care facilities
health care workers (both to prevent sickness and to prevent spread to their patients)
children and adolescents (aged 6 months through 18 years) who are receiving aspirin- or salicylate-containing medications and who might be at risk for experiencing Reye syndrome after influenza virus infection
American Indians/Alaska Natives
people who are extremely obese (body mass index ≥40 for adults)
The flu vaccine is contraindicated for those under six months of age and those with severe, life-threatening allergies to flu vaccine or any ingredient in the vaccine.
World Health Organization
The World Health Organization (WHO) recommends seasonal influenza vaccination for:
First priority:
Pregnant women
Second priority (in no particular order):
Children aged 6–59 months
Elderly
Individuals with specific chronic medical conditions
Health-care workers
Canada
The National Advisory Committee on Immunization (NACI), the group that advises the Public Health Agency of Canada, recommends that everyone over six months of age be encouraged to receive annual influenza vaccination and that children between the age of six months and 24 months, and their household contacts, should be considered a high priority for the flu vaccine.
Particularly:
People at high risk of influenza-related complications or hospitalization, including people who are morbidly obese, healthy pregnant women, children aged 6–59 months, the elderly, aboriginals, and people with one of an itemized list of chronic health conditions
People capable of transmitting influenza to those at high risk, including household contacts and healthcare workers
People who provide essential community services
Certain poultry workers
Live attenuated influenza vaccine (LAIV) was not available in Canada for the 2019–2020 season.
European Union
The European Centre for Disease Prevention and Control (ECDC) recommends vaccinating the elderly as a priority, with a secondary priority for people with chronic medical conditions and health care workers.
The influenza vaccination strategy is generally that of protecting vulnerable people, rather than limiting influenza circulation or eliminating human influenza sickness. This is in contrast with the high herd-immunity strategies for other infectious diseases such as polio and measles, and is due in part to the financial and logistical burden of an annual injection.
United Kingdom
The National Health Service in the United Kingdom provides flu vaccination to:
people who are aged 65 or over
people who have certain long-term health conditions
people who are pregnant
people who live in a care home
people who are the main carer for an older or disabled person, or receive a carer's allowance
people who live with someone who has a weakened immune system.
This vaccination is available free of charge to people in these groups. People outside these groups, aged between 18 and 65, can also receive private flu vaccination for a small fee from pharmacies and some private surgeries.
United States
In the United States routine influenza vaccination is recommended for all persons aged six months and over. It takes up to two weeks after vaccination for sufficient antibodies to develop in the body. The CDC recommends vaccination before the end of October, although it considers getting a vaccine in December or even later to be still beneficial. The U.S. military also requires a flu shot annually for its active and reserve servicemembers.
According to the CDC, the live attenuated virus (LAIV4) (which comes in the form of nasal spray in the US) should be avoided by some groups.
Within its blanket recommendation for general vaccination in the United States, the CDC, which began recommending the influenza vaccine to healthcare workers in 1981, emphasizes to clinicians the special urgency of vaccination for members of certain vulnerable groups, and their caregivers:
Vaccination is especially important for people at higher risk of serious influenza complications or people who live with or care for people at higher risk for serious complications. In 2009, a new high-dose formulation of the standard influenza vaccine was approved. The Fluzone High Dose is specifically for people 65 and older; the difference is that it has four times the antigen dose of the standard Fluzone.
The US government requires hospitals to report worker vaccination rates. Some US states and hundreds of US hospitals require healthcare workers to either get vaccinations or wear masks during flu season. These requirements occasionally engender union lawsuits on narrow collective bargaining grounds, but proponents note that courts have generally endorsed forced vaccination laws affecting the general population during disease outbreaks.
Vaccination against influenza is considered especially important for members of high-risk groups who would be likely to have complications from influenza, for example pregnant women, and children and teenagers from six months to 18 years of age who are receiving aspirin- or salicylate-containing medications and who might be at risk for experiencing Reye syndrome after influenza virus infection.
In raising the upper age limit to 18 years, the aim is to reduce both the time children and parents lose to pediatrician visits and missed school, and the need for antibiotics for complications.
An added benefit expected from the vaccination of children is a reduction in the number of influenza cases among parents and other household members, and of possible spread to the general community.
The CDC indicated that live attenuated influenza vaccine (LAIV), also called the nasal spray vaccine, was not recommended for the 2016–2017 flu season in the United States.
Furthermore, the CDC recommends that healthcare personnel who care for severely immunocompromised persons receive injections (TIV or QIV) rather than LAIV.
Australia
The Australian Government recommends seasonal flu vaccination for everyone over the age of six months. Australia uses inactivated vaccines. Until 2021, the egg-based vaccine had been the only one available (and it remains the only free one), but from March 2021 a new cell-based vaccine has been available for those who wish to pay for it, and it is expected to become the standard by 2026.
The standard flu vaccine is free for the following people:
children aged six months to five years;
people aged 65 years and over;
Aboriginal and Torres Strait Islander people aged six months and over;
pregnant women; and
anyone over six months of age with medical conditions such as severe asthma, lung disease or heart disease, low immunity, or diabetes that can lead to complications from influenza.
Uptake
At risk groups
Uptake of flu vaccination, both seasonally and during pandemics, is often low. Systematic reviews of pandemic flu vaccination uptake have identified several personal factors that may influence uptake, including gender (higher uptake in men), ethnicity (higher in people from ethnic minorities), and having a chronic illness. Beliefs in the safety and effectiveness of the vaccine are also important.
Several measures are useful to increase rates of vaccination in those over sixty, including patient reminders using leaflets and letters, postcard reminders, client outreach programs, vaccine home visits, group vaccinations, free vaccinations, physician payment, physician reminders, and encouraging physician competition.
Health care workers
Frontline healthcare workers are often recommended to get seasonal and any pandemic flu vaccinations. For example, in the UK all healthcare workers involved in patient care are recommended to receive the seasonal flu vaccine, and were also recommended to be vaccinated against the H1N1/09 (later renamed A(H1N1)pdm09) swine flu virus during the 2009 pandemic. However, uptake is often low. During the 2009 pandemic, low uptake by healthcare workers was seen in countries including the UK, Italy, Greece, and Hong Kong.
In a 2010 survey of United States healthcare workers, 63.5% reported that they received the flu vaccine during the 2010–11 season, an increase from the 61.9% reported the previous season. US health professionals with direct patient contact, such as physicians and dentists (84.2%) and nurse practitioners (82.6%), had higher vaccination uptake.
The main reason to vaccinate health care workers is to prevent staff from spreading flu to their patients and to reduce staff absence at a time of high service demand, but the reasons health care workers state for their decisions to accept or decline vaccination may more often be to do with perceived personal benefits.
In Victoria (Australia) public hospitals, rates of healthcare worker vaccination in 2005 ranged from 34% for non-clinical staff to 42% for laboratory staff. One of the reasons for rejecting vaccines was concern over adverse reactions; in one study, 31% of resident physicians at a teaching hospital incorrectly believed Australian vaccines could cause influenza.
Manufacturing
Research continues into the idea of a "universal" influenza vaccine that would not require tailoring to a particular strain, but would be effective against a broad variety of influenza viruses. No vaccine candidates had been announced by November 2007, but several universal vaccine candidates have since entered pre-clinical development and clinical trials.
In a 2007 report, the global production capacity of approximately 826 million seasonal influenza vaccine doses (inactivated and live) was double the actual production of 413 million doses. In an aggressive scenario of producing pandemic influenza vaccines by 2013, only 2.8 billion courses could be produced in a six-month time frame. If all high- and upper-middle-income countries sought vaccines for their entire populations in a pandemic, nearly two billion courses would be required. If China pursued this goal as well, more than three billion courses would be required to serve these populations. Vaccine research and development is ongoing to identify novel vaccine approaches that could produce much greater quantities of vaccine at a price that is affordable to the global population.
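As a rough illustration, the capacity and demand figures quoted above can be compared directly. The sketch below uses only the numbers in the text (2.8 billion courses producible in six months, roughly 2 billion courses for high- and upper-middle-income populations, 3 billion including China); the scenario groupings are a simplification, not data from the underlying report.

```python
# Illustrative comparison of the pandemic vaccine figures quoted above.
# All numbers come from the text; the scenarios are simplified assumptions.

six_month_capacity = 2.8e9  # courses producible in six months (aggressive scenario)

demand_scenarios = {
    "high- and upper-middle-income countries": 2.0e9,  # courses, from the text
    "plus China": 3.0e9,                               # courses, from the text
}

for scenario, courses_needed in demand_scenarios.items():
    shortfall = courses_needed - six_month_capacity
    status = "covered" if shortfall <= 0 else f"shortfall of {shortfall / 1e9:.1f} billion courses"
    print(f"{scenario}: need {courses_needed / 1e9:.1f}B, capacity {six_month_capacity / 1e9:.1f}B -> {status}")
```

On these figures, six-month capacity covers the first scenario but falls about 0.2 billion courses short once China is included, which is why the text emphasizes novel, higher-volume production approaches.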
Egg-based
Most flu vaccines are grown by vaccine manufacturers in fertilized chicken eggs. In the Northern hemisphere, the manufacturing process begins following the announcement (typically in February) of the WHO recommended strains for the winter flu season. Three strains (representing an H1N1, an H3N2, and a B strain) of flu are selected and chicken eggs are inoculated separately. These monovalent harvests are then combined to make the trivalent vaccine.
Both the conventional injection and the nasal spray are manufactured using chicken eggs. The European Union has also approved Optaflu, a vaccine produced by Novartis using vats of animal cells. This technique is expected to be more scalable and to avoid problems with eggs, such as allergic reactions and incompatibility with strains that affect birds, such as chickens.
Influenza vaccines are produced in pathogen-free eggs that are eleven or twelve days old. The top of the egg is disinfected by wiping it with alcohol and then the egg is candled to identify a non-veinous area in the allantoic cavity where a small hole is poked to serve as a pressure release. A second hole is made at the top of the egg, where the influenza virus is injected in the allantoic cavity, past the chorioallantoic membrane. The two holes are then sealed with melted paraffin and the inoculated eggs are incubated for 48 hours at 37 degrees Celsius. During the incubation time, the virus replicates and newly replicated viruses are released into the allantoic fluid.
After the 48-hour incubation period, the top of the egg is cracked and ten milliliters of allantoic fluid is removed, from which about fifteen micrograms of the flu vaccine can be obtained. At this point, the viruses have been weakened or killed and the viral antigen is purified and placed inside vials, syringes, or nasal sprayers. Up to 3 eggs are needed to produce one dose of a trivalent vaccine, and an estimated 600 million eggs are produced each year for flu vaccine production.
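A back-of-the-envelope calculation makes the scale of egg-based production concrete. It uses only the figures given above (up to three eggs per trivalent dose, roughly 600 million eggs per year); the resulting dose count is a lower bound under the worst-case eggs-per-dose assumption.

```python
# Rough throughput estimate for egg-based flu vaccine production,
# using only the figures quoted in the text above.

eggs_per_year = 600e6  # eggs produced annually for vaccine manufacture
eggs_per_dose = 3      # up to three eggs per trivalent dose (worst case)

min_doses = eggs_per_year / eggs_per_dose
print(f"At 3 eggs/dose, 600 million eggs yield at least {min_doses / 1e6:.0f} million doses/year")
```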
Other methods of manufacture
Methods of vaccine generation that bypass the need for eggs include the construction of influenza virus-like particles (VLP). VLP resemble viruses, but there is no need for inactivation, as they do not include viral coding elements, but merely present antigens in a similar manner to a virion. Some methods of producing VLP include cultures of Spodoptera frugiperda Sf9 insect cells and plant-based vaccine production (e.g., production in Nicotiana benthamiana). There is evidence that some VLPs elicit antibodies that recognize a broader panel of antigenically distinct viral isolates compared to other vaccines in the hemagglutination-inhibition assay (HIA).
A gene-based DNA vaccine, used to prime the immune system after boosting with an inactivated H5N1 vaccine, underwent clinical trials in 2011.
In November 2012, Novartis received FDA approval for the first cell-culture vaccine. In 2013, the recombinant influenza vaccine, Flublok, was approved for use in the United States.
On September 17, 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for Supemtek, a quadrivalent influenza vaccine (recombinant, prepared in cell culture). The applicant for this medicinal product is Sanofi Pasteur. Supemtek was authorized for medical use in the European Union in November 2020.
Australia authorized its first cell-based vaccine in March 2021, based on an "eternal cell line" derived from a dog kidney. Because of the way it is produced, it yields vaccines that are better matched to the circulating flu strains.
Vaccine manufacturing countries
According to the WHO, influenza vaccine is produced in a number of countries.
In addition, Kazakhstan, Serbia, and Thailand had facilities in the final stages of establishing production.
Cost-effectiveness
The cost-effectiveness of seasonal influenza vaccination has been widely evaluated for different groups and in different settings. In the elderly (over 65), the majority of published studies have found that vaccination is cost-saving, with the cost savings associated with influenza vaccination (e.g. prevented health care visits) outweighing the cost of vaccination. In older adults (aged 50–64 years), several published studies have found that influenza vaccination is likely to be cost-effective; however, the results of these studies were often found to be dependent on key assumptions used in the economic evaluations. The uncertainty in influenza cost-effectiveness models can partially be explained by the complexities involved in estimating the disease burden, as well as the seasonal variability in the circulating strains and the match of the vaccine. In healthy working adults (aged 18–49 years), a 2012 review found that vaccination was generally not cost-saving, with the suitability for funding depending on the willingness to pay to obtain the associated health benefits. In children, the majority of studies have found that influenza vaccination was cost-effective; however, many of the studies included (indirect) productivity gains, which may not be given the same weight in all settings. Several studies have attempted to predict the cost-effectiveness of interventions (including pre-pandemic vaccination) to help protect against a future pandemic; however, estimating the cost-effectiveness has been complicated by uncertainty as to the severity of a potential future pandemic and the efficacy of measures against it.
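The kind of calculation these studies perform can be sketched with a standard incremental cost-effectiveness ratio (ICER). Every number below is a hypothetical placeholder chosen only to show the mechanics; real evaluations derive these inputs from epidemiological and cost data.

```python
# Minimal sketch of an incremental cost-effectiveness calculation for a
# vaccination programme. All inputs are hypothetical placeholders.

vaccine_cost_per_person = 20.0       # currency units (assumed)
averted_care_cost_per_person = 15.0  # health-care costs prevented (assumed)
qaly_gain_per_person = 0.001         # quality-adjusted life-years gained (assumed)

net_cost = vaccine_cost_per_person - averted_care_cost_per_person
if net_cost <= 0:
    print("Vaccination is cost-saving")
else:
    # Compare the ICER against a willingness-to-pay threshold per QALY.
    icer = net_cost / qaly_gain_per_person
    print(f"ICER = {icer:.0f} per QALY gained")
```

With these placeholder numbers the programme is not cost-saving (net cost 5 per person), so whether it is "cost-effective" depends on the willingness-to-pay threshold, which is exactly the situation the 2012 review describes for healthy working adults.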
Research
Influenza research includes molecular virology, molecular evolution, pathogenesis, host immune responses, genomics, and epidemiology. These help in developing influenza countermeasures such as vaccines, therapies, and diagnostic tools. Improved influenza countermeasures require basic research on how viruses enter cells, replicate, mutate, evolve into new strains, and induce an immune response. The Influenza Genome Sequencing Project is creating a library of influenza sequences that will help researchers' understanding of what makes one strain more lethal than another, what genetic determinants most affect immunogenicity, and how the virus evolves.
A different approach uses Internet content to estimate the impact of an influenza vaccination campaign. More specifically, researchers have used data from Twitter and Microsoft's Bing search engine and proposed a statistical framework that, after a series of operations, maps this information to estimates of the influenza-like illness reduction percentage in areas where vaccinations have been performed. The method has been used to quantify the impact of two flu vaccination programmes in England (2013/14 and 2014/15), where school-age children were administered a live attenuated influenza vaccine (LAIV). Notably, the impact estimates were in accordance with estimations from Public Health England based on traditional syndromic surveillance endpoints.
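The published framework involves several modelling steps, but its headline quantity, the percentage reduction in influenza-like illness (ILI) in vaccinated areas relative to comparable control areas, is simple. The sketch below is a simplified stand-in for the actual statistical method, with invented rates.

```python
# Simplified sketch of the headline quantity in such impact studies: the
# percentage reduction in influenza-like-illness (ILI) rates in programme
# areas relative to control areas. The rates below are invented; the real
# framework infers them from Twitter and Bing data via several steps.

ili_rate_control = 12.4    # ILI cases per 100,000 in control areas (assumed)
ili_rate_vaccinated = 8.1  # ILI cases per 100,000 in programme areas (assumed)

reduction_pct = 100 * (ili_rate_control - ili_rate_vaccinated) / ili_rate_control
print(f"Estimated ILI reduction: {reduction_pct:.1f}%")
```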
Rapid response to pandemic flu
The rapid development, production, and distribution of pandemic influenza vaccines could potentially save millions of lives during an influenza pandemic. Because of the short time frame between the identification of a pandemic strain and the need for vaccination, researchers are looking at novel vaccine production technologies, such as live attenuated (egg-based or cell-based) technology and recombinant technologies (proteins and virus-like particles), that could provide better "real-time" access and be produced more affordably, thereby increasing access for people living in low- and moderate-income countries, where an influenza pandemic is likely to originate. More than seventy known clinical trials have been completed or are ongoing for pandemic influenza vaccines. In September 2009, the FDA approved four vaccines against the 2009 H1N1 influenza virus (the 2009 pandemic strain), and expected the initial vaccine lots to be available within the following month.
In January 2020, the US Food and Drug Administration (FDA) approved Audenz as a vaccine for the H5N1 flu virus. Audenz is a vaccine indicated for active immunization for the prevention of disease caused by the influenza A virus H5N1 subtype contained in the vaccine. Audenz is approved for use in persons six months of age and older at increased risk of exposure to the influenza A virus H5N1 subtype contained in the vaccine.
Zoonotic influenza vaccine Seqirus is authorized for use in the European Union. It is an H5N8 vaccine that is intended to provide acquired immunity against H5 subtype influenza A viruses.
Universal flu vaccines
A universal influenza vaccine that would not have to be designed and made for each flu season in each hemisphere would stabilize the supply, avoid errors in predicting the season's variants, and protect against the escape of the circulating strains by mutation. Such a vaccine has been the subject of research for decades.
One approach is to use broadly neutralizing antibodies that, unlike the annual seasonal vaccines used over the first decades of the 21st century, which provoke the body to generate an immune response, instead provide a component of the immune response itself. The first neutralizing antibodies were identified in 1993 and were found to bind to the stalk of the hemagglutinin protein. Antibodies that could bind to the head of those proteins were identified later. The highly conserved M2 proton channel has also been proposed as a potential target for broadly neutralizing antibodies.
The challenges for researchers are to identify single antibodies that could neutralize many subtypes of the virus so that they could be useful in any season, and that target conserved domains that are resistant to antigenic drift.
Another approach is to take the conserved domains identified from these projects, and to deliver groups of these antigens to provoke an immune response; various approaches with different antigens, presented in different ways (as fusion proteins, mounted on virus-like particles, on non-pathogenic viruses, as DNA, and others), are under development.
Efforts have also been undertaken to develop universal vaccines that specifically activate a T-cell response, based on clinical data showing that people with a strong, early T-cell response have better outcomes when infected with influenza and because T-cells respond to conserved epitopes. The challenge for developers is that these epitopes are on internal protein domains that are only mildly immunogenic.
Along with the rest of the vaccine field, people working on universal vaccines have experimented with vaccine adjuvants to improve the ability of their vaccines to create a sufficiently powerful and enduring immune response.
Oral influenza vaccine
As of 2019, an oral flu vaccine was in clinical research. The oral vaccine candidate is based on an adenovirus type 5 vector modified to remove genes needed for replication, with an added gene that expresses a small double-stranded RNA hairpin molecule as an adjuvant. In 2020, a phase II human trial of the pill form of the vaccine showed that it was well tolerated and provided similar immunity to a licensed injectable vaccine.
COVID-19
An influenza vaccine and a COVID-19 vaccine may be given safely at the same time. Preliminary research indicates that influenza vaccination does not prevent COVID-19, but may reduce the incidence and severity of COVID-19 infection.
Criticism
Tom Jefferson, who has led Cochrane Collaboration reviews of flu vaccines, has called clinical evidence concerning flu vaccines "rubbish" and has therefore declared them to be ineffective; he has called for placebo-controlled randomized clinical trials, which most in the field hold as unethical. His views on the efficacy of flu vaccines are rejected by medical institutions including the CDC and the National Institutes of Health, and by key figures in the field like Anthony Fauci.
Michael Osterholm, who led the Center for Infectious Disease Research and Policy 2012 review on flu vaccines, recommended getting the vaccine but criticized its promotion, saying, "We have overpromoted and overhyped this vaccine... it does not protect as promoted. It's all a sales job: it's all public relations."
Veterinary use
Veterinary influenza vaccination aims to achieve the following four objectives:
Protection from clinical disease
Protection from infection with virulent virus
Protection from virus excretion
Serological differentiation of infected from vaccinated animals (so-called DIVA principle).
Horses
Horses with horse flu can run a fever, have a dry hacking cough, have a runny nose, and become depressed and reluctant to eat or drink for several days but usually recover in two to three weeks. "Vaccination schedules generally require a primary course of two doses, 3–6 weeks apart, followed by boosters at 6–12 month intervals. It is generally recognized that in many cases such schedules may not maintain protective levels of antibody and more frequent administration is advised in high-risk situations."
It is a common requirement at shows in the United Kingdom that horses be vaccinated against equine flu and a vaccination card must be produced; the International Federation for Equestrian Sports (FEI) requires vaccination every six months.
Poultry
It is possible to vaccinate poultry against specific strains of highly pathogenic avian influenza. Vaccination should be combined with other control measures such as infection monitoring, early detection, and biosecurity.
Pigs
Swine influenza vaccines are extensively used in pig farming in Europe and North America. Most swine flu vaccines include an H1N1 and an H3N2 strain.
Swine influenza has been recognized as a major problem since the outbreak in 1976. Evolution of the virus has resulted in inconsistent responses to traditional vaccines. Standard commercial swine flu vaccines are effective in controlling the problem when the virus strains match enough to have significant cross-protection. Customised (autogenous) vaccines made from the specific viruses isolated are used in the more difficult cases. The vaccine manufacturer Novartis claims that the H3N2 strain (first identified in 1998) has brought major losses to pig farmers. Abortion storms are a common sign, and sows stop eating for a few days and run a high fever. The mortality rate can be as high as fifteen percent.
Dogs
In 2004, influenza A virus subtype H3N8 was discovered to cause canine influenza. Because of the lack of previous exposure, dogs have no natural immunity to this virus; however, a vaccine has since been developed.
| Biology and health sciences | Vaccines | Health |
1047111 | https://en.wikipedia.org/wiki/HSAB%20theory | HSAB theory | HSAB is an acronym for "hard and soft (Lewis) acids and bases". HSAB is widely used in chemistry for explaining the stability of compounds, reaction mechanisms and pathways. It assigns the terms 'hard' or 'soft', and 'acid' or 'base' to chemical species. 'Hard' applies to species which are small, have high charge states (the charge criterion applies mainly to acids, to a lesser extent to bases), and are weakly polarizable. 'Soft' applies to species which are big, have low charge states and are strongly polarizable.
The theory is used in contexts where a qualitative, rather than quantitative, description would help in understanding the predominant factors which drive chemical properties and reactions. This is especially so in transition metal chemistry, where numerous experiments have been done to determine the relative ordering of ligands and transition metal ions in terms of their hardness and softness.
HSAB theory is also useful in predicting the products of metathesis reactions. In 2005 it was shown that even the sensitivity and performance of explosive materials can be explained on the basis of HSAB theory.
Ralph Pearson introduced the HSAB principle in the early 1960s as an attempt to unify inorganic and organic reaction chemistry.
Theory
Essentially, the theory states that soft acids prefer to form bonds with soft bases, whereas hard acids prefer to form bonds with hard bases, all other factors being equal. It can also be said that hard acids bind strongly to hard bases and soft acids bind strongly to soft bases. The HSAB classification in the original work was largely based on equilibrium constants of Lewis acid/base reactions with a reference base for comparison.
Borderline cases are also identified: borderline acids are trimethylborane, sulfur dioxide, and the ferrous Fe2+, cobalt Co2+, caesium Cs+, and lead Pb2+ cations. Borderline bases are aniline, pyridine, nitrogen N2, and the azide, chloride, bromide, nitrate, and sulfate anions.
Generally speaking, acids and bases interact and the most stable interactions are hard–hard (ionogenic character) and soft–soft (covalent character).
An attempt to quantify the 'softness' of a base consists in determining the equilibrium constant for the following equilibrium:
BH+ + CH3Hg+ ⇌ H+ + CH3HgB
where CH3Hg+ (methylmercury ion) is a very soft acid and H+ (proton) is a hard acid, which compete for B (the base to be classified).
Some examples illustrating the effectiveness of the theory:
Bulk metals are soft acids and are poisoned by soft bases such as phosphines and sulfides.
Hard solvents such as hydrogen fluoride, water and the protic solvents tend to dissolve strong solute bases such as fluoride and oxide anions. On the other hand, dipolar aprotic solvents such as dimethyl sulfoxide and acetone are soft solvents with a preference for solvating large anions and soft bases.
In coordination chemistry soft–soft and hard–hard interactions exist between ligands and metal centers.
Chemical hardness
In 1983 Pearson together with Robert Parr extended the qualitative HSAB theory with a quantitative definition of the chemical hardness (η) as being proportional to the second derivative of the total energy of a chemical system with respect to changes in the number of electrons at a fixed nuclear environment:
η = (1/2)(∂²E/∂N²)
The factor of one-half is arbitrary and often dropped as Pearson has noted.
An operational definition for the chemical hardness is obtained by applying a three-point finite difference approximation to the second derivative:
η = [E(N−1) − 2E(N) + E(N+1)]/2 = (I − A)/2
where I is the ionization potential and A the electron affinity. This expression implies that the chemical hardness is proportional to the band gap of a chemical system, when a gap exists.
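These finite-difference formulas are easy to evaluate from tabulated ionization potentials and electron affinities. The sketch below uses approximate literature values for a few atoms, included for illustration only; χ = (I + A)/2 is the Mulliken electronegativity discussed just below.

```python
# Operational chemical hardness and Mulliken electronegativity from the
# finite-difference formulas above: eta = (I - A)/2 and chi = (I + A)/2.
# I and A are in eV; the values are approximate literature numbers,
# included for illustration only.

atoms = {
    # name: (ionization potential I, electron affinity A) in eV
    "H":  (13.60, 0.75),
    "F":  (17.42, 3.40),
    "Na": (5.14, 0.55),
    "I":  (10.45, 3.06),
}

for name, (ionization, affinity) in atoms.items():
    eta = (ionization - affinity) / 2  # chemical hardness
    chi = (ionization + affinity) / 2  # Mulliken electronegativity (= -mu)
    print(f"{name}: eta = {eta:.2f} eV, chi = {chi:.2f} eV")
```

On these numbers fluorine (η ≈ 7.0 eV) comes out much harder than iodine (η ≈ 3.7 eV), matching the qualitative HSAB classification.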
The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, μ, of the system,
μ = ∂E/∂N,
from which an operational definition for the chemical potential is obtained from a finite difference approximation to the first order derivative as
μ = [E(N+1) − E(N−1)]/2 = −(I + A)/2,
which is equal to the negative of the electronegativity (χ) definition on the Mulliken scale: μ = −χ.
The hardness and Mulliken electronegativity are related as
η = (1/2)(∂μ/∂N) = −(1/2)(∂χ/∂N),
and in this sense hardness is a measure for resistance to deformation or change. Likewise a value of zero denotes maximum softness, where softness is defined as the reciprocal of hardness.
In a compilation of hardness values, only that of the hydride anion deviates. Another discrepancy noted in the original 1983 article is the apparently higher hardness of Tl3+ compared to Tl+.
Modifications
If the interaction between acid and base in solution results in an equilibrium mixture, the strength of the interaction can be quantified in terms of an equilibrium constant. An alternative quantitative measure is the heat (enthalpy) of formation of the Lewis acid-base adduct in a non-coordinating solvent. The ECW model is a quantitative model that describes and predicts the strength of Lewis acid-base interactions, −ΔH. The model assigns E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is
-ΔH = EAEB + CACB + W
The W term represents a constant energy contribution for an acid-base reaction, such as the cleavage of a dimeric acid or base. The equation predicts reversals of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths. The ECW model accommodates the failure of single-parameter descriptions of acid-base interactions.
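A minimal sketch of how the ECW equation is applied follows. The parameter values are assumptions of roughly the right magnitude for an iodine/pyridine-type adduct (iodine is the conventional reference acid in the model); actual work should use published E and C tables.

```python
# Minimal sketch of the ECW equation: -dH = E_A*E_B + C_A*C_B + W.
# Parameter values are assumptions of roughly the right magnitude for an
# I2/pyridine-type adduct; consult published E and C tables for real work.

E_A, C_A = 0.50, 2.00  # acid parameters (I2 is the usual reference acid)
E_B, C_B = 1.78, 3.54  # base parameters (pyridine-like, assumed)
W = 0.0                # no constant contribution for a monomeric acid

minus_dH = E_A * E_B + C_A * C_B + W  # adduct formation enthalpy, kcal/mol
print(f"-dH = {minus_dH:.2f} kcal/mol")
```

The separate E (electrostatic) and C (covalent) products make the model's central point visible: two acids can rank differently against two bases depending on how each pair's electrostatic and covalent terms combine, so no single strength order exists.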
A related method adopting the E and C formalism of Drago and co-workers quantitatively predicts the formation constants for complexes of many metal ions plus the proton with a wide range of unidentate Lewis acids in aqueous solution, and also offered insights into factors governing HSAB behavior in solution.
Another quantitative system has been proposed, in which Lewis acid strength toward the Lewis base fluoride is based on gas-phase affinity for fluoride. Additional one-parameter base strength scales have been presented. However, it has been shown that to define the order of Lewis base strength (or Lewis acid strength) at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength, while for Drago's quantitative ECW model the two properties are electrostatic and covalent.
Kornblum's rule
An application of HSAB theory is the so-called Kornblum's rule (after Nathan Kornblum), which states that in reactions with ambident nucleophiles (nucleophiles that can attack from two or more places), the more electronegative atom reacts when the reaction mechanism is SN1 and the less electronegative one in an SN2 reaction. This rule (established in 1954) predates HSAB theory, but in HSAB terms its explanation is that in an SN1 reaction the carbocation (a hard acid) reacts with a hard base (high electronegativity) and that in an SN2 reaction tetravalent carbon (a soft acid) reacts with soft bases.
Experimental findings show that electrophilic alkylations at free CN− occur preferentially at carbon, regardless of whether the SN1 or SN2 mechanism is involved and whether hard or soft electrophiles are employed. Preferred N attack, as postulated for hard electrophiles by the HSAB principle, could not be observed with any alkylating agent. Isocyano compounds are only formed with highly reactive electrophiles that react without an activation barrier because the diffusion limit is approached. It is claimed that knowledge of absolute rate constants, and not of the hardness of the reaction partners, is needed to predict the outcome of alkylations of the cyanide ion.
Criticism
Reanalysis of a large number of the most typical ambident organic systems reveals that thermodynamic/kinetic control describes the reactivity of organic compounds perfectly, whereas the HSAB principle fails and should be abandoned in the rationalization of the ambident reactivity of organic compounds.
| Physical sciences | Concepts | Chemistry |
1047190 | https://en.wikipedia.org/wiki/Sidneyia | Sidneyia | Sidneyia is an extinct marine arthropod known from fossils found from the Early to the Mid Cambrian of China and the Mid Cambrian Burgess Shale of British Columbia, Canada.
Description
Sidneyia inexpectans reached lengths of at least . The largest known specimen of S. minor is around long and wide, while the largest specimen of S. malongensis is long and wide. The head shield is short, with notches present on the sides to accommodate the stalked eyes, and with a hypostome on the underside. The head has a pair of segmented antennae, as well as three pairs of post-antennal appendages. This was followed by a thorax of eight to ten segments/tergites, each associated with a pair of biramous appendages, and then by one to three abdominal segments/tergites, with the body terminating in a telson comprising a pair of tail flukes. The appendages bear heavily sclerotised, spined basal segments (basipods) called gnathobases, used to process food. In S. minor, the biramous appendages have 8 podomeres/segments on the endopod, with the last segment being a terminal claw. The exopod of these limbs is flattened and bears lamellae. In S. inexpectans, the endopods of the biramous limbs have seven podomeres, with the first four of these each bearing a number of thin, inward-projecting spines, while the outer three podomeres bore stouter claw-like spines; the fourth to ninth pairs of post-antennal limbs bear exopods with blade-like lamellae, which are thought to have been used as gills. S. inexpectans had three pairs of digestive glands within the head shield and front of the thorax, adjacent to the central gut tube.
Ecology
Sidneyia is thought to have been a seafloor-dwelling (epibenthic), generalist durophagous predator and/or scavenger that used its gnathobases (which closely resemble those of horseshoe crabs) to crush and shred prey items, including hard-shelled organisms like juvenile trilobites (which are abundantly preserved as stomach contents in S. inexpectans) and brachiopods (representing around 6% of the stomach contents of S. inexpectans), but possibly also softer animals like worms or soft-bodied arthropods like bradoriids.
Taxonomy
Sidneyia was discovered in 1910 during the first day of Charles Walcott's exploration of the Burgess Shale. He named it after his elder son, Sidney, who had helped to locate the site and collect the specimen. The full species name, Sidneyia inexpectans, means "Sidney's surprise".
144 specimens of Sidneyia are known from the Greater Phyllopod bed, where they comprise 0.27% of the community.
Sidneyia sinica was named in 2002 from a specimen found in the Chengjiang Biota of South China. However, it has since been rejected from the genus, and other indeterminate specimens assigned to the genus from the Spence Shale and Sirius Passet lack key diagnostic characters. Specimens that can be confidently assigned to the genus include Sidneyia cf. inexpectans, known from the Wuliuan Mantou Formation of North China; Sidneyia minor, from the Early Cambrian (Cambrian Stage 3) Xiaoshiba Biota of Yunnan, China; and a valid species of Sidneyia from Chengjiang, Sidneyia malongensis.
In 1923, Sidneyia was placed, along with Emeraldella, in the group "Xenopoda". Today, both Sidneyia and Emeraldella are placed in the clade Vicissicaudata within Artiopoda, which includes trilobites and other arthropods with similar body forms. However, Sidneyia and Emeraldella are usually not recovered as each other's closest relatives within Vicissicaudata, rendering "Xenopoda" invalid.
| Biology and health sciences | Fossil arthropods | Animals |
1047267 | https://en.wikipedia.org/wiki/Ottoia | Ottoia | Ottoia is a stem-group archaeopriapulid worm known from Cambrian fossils. Although priapulid-like worms from various Cambrian deposits are often referred to Ottoia on spurious grounds, the only clear Ottoia macrofossils come from the Burgess Shale of British Columbia, which was deposited . Microfossils extend the record of Ottoia throughout the Western Canada Sedimentary Basin, from the mid- to late- Cambrian. A few fossil finds are also known from China.
Morphology
Ottoia specimens are on average 8 centimeters in length. Both length and width vary with contraction, with shorter specimens often being wider than longer ones. The characteristic proboscis of priapulids is present at the anterior, attached to the trunk of the animal, which is followed by the "bursa" at the posterior. The organism's body is bilaterally symmetrical; however, its anterior displays external radial symmetry. Like some other modern invertebrates, a cuticle restricts the size of and protects the animal.
The trunk hosts the internal organs of the organism, divided into seventy to a hundred annulations of varying spacing, depending on curvature and contraction. The posterior displays a series of hooks, which likely acted as anchors during burrowing. Muscles support the animal and retract the bursa and proboscis. A gut leading from the anus in the bursa to the mouth in the proboscis runs through the trunk's spacious body cavity, and a concentration of gut muscles serves the function of a gizzard. A nerve cord runs down the organism's length. In addition to the other organs, it is possible Ottoia contained urogenital organs in its trunk. There is no evidence of a respiratory organ, though the bursa may have served this purpose.
The everted proboscis of Ottoia bears an armature of teeth and hooks. The detailed morphology of these elements distinguishes the two described species, O. tricuspida and O. prolifica. At the base of the pharynx, separated from the teeth by an unarmed region, sits a ring of spines. Behind this, at the front of the trunk, lies a series of hooks and spines, arranged in a quincunx pattern like the five dots on a domino or dice.
Ecology
Ottoia was a burrower that hunted prey with its eversible proboscis. It also appears to have scavenged on dead organisms such as the arthropod Sidneyia.
The spines on the proboscis of Ottoia have been interpreted as teeth used to capture prey. Its mode of life is uncertain, but it is thought to have been an active burrower, moving through the sediment after prey, and is believed to have lived within a U-shaped burrow that it constructed in the substrate. From that place of relative safety, it could extend its proboscis in search of prey. Gut contents show that this worm was a predator, often feasting on the hyolithid Haplophrentis (a shelled animal similar to mollusks), generally swallowing them head-first. They also show evidence of cannibalism, which is common in priapulids today.
Preservation
Because of its bottom-living habit and the location of the Burgess Shale site at the foot of a high limestone reef, one may presume that the relative immobility of Ottoia placed it in danger of being carried away and/or buried by any underwater mud avalanche from the cliff top. This may explain why it remains one of the more abundant members of the Burgess Shale fauna.
Distribution
At least 1000 Burgess Shale specimens are known in the USNM collections alone, in addition to the ROM collections and hundreds of specimens elsewhere. 677 specimens of Ottoia are known from the Greater Phyllopod bed, where they comprise 1.29% of the community.
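Taken together with the corresponding figures from the Sidneyia entry above (144 specimens, 0.27% of the community), these counts imply a consistent overall sample size for the Greater Phyllopod bed. The cross-check below is arithmetic on the figures in the text, not independent data.

```python
# Cross-check: the Greater Phyllopod bed counts and percentages quoted in
# the text imply a total community sample size. Both taxa should give
# roughly the same total if the figures are mutually consistent.

counts = {
    "Ottoia":   (677, 1.29),  # (specimens, percent of community)
    "Sidneyia": (144, 0.27),  # from the Sidneyia entry above
}

for taxon, (n, pct) in counts.items():
    implied_total = n / (pct / 100)
    print(f"{taxon}: implied community size ~ {implied_total:,.0f} specimens")
```

Both taxa imply a community sample of roughly 52,000–53,000 specimens, so the two entries' figures agree.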
Ottoia has also been reported from Middle Cambrian deposits in Utah, Spain, Nevada, and various other localities. Nevertheless, these reports are insecure, and the only verifiable Ottoia macrofossils hail from the Burgess Shale itself.
Microfossils corresponding to Ottoia teeth, however, have a much broader distribution, and are found throughout the Western Canada Sedimentary Basin. Indeed, putative candidates (initially described under the ICBN as Goniomorpha) may extend the range of Ottoia, or at least similar priapulans, into the Ordovician. One poorly preserved specimen that probably belongs to Ottoia was discovered in the Lower Ordovician Madaoyu Formation in Hunan, China.
| Biology and health sciences | Lophotrochozoa | Animals |
2259412 | https://en.wikipedia.org/wiki/Ilex%20aquifolium | Ilex aquifolium | Ilex aquifolium, the holly, common holly, English holly, European holly, or occasionally Christmas holly, is a species of flowering plant in the family Aquifoliaceae, native to western and southern Europe, northwest Africa, and southwest Asia. It is regarded as the type species of the genus Ilex, which by association is also called "holly". It is an evergreen tree or shrub found, for example, in shady areas of forests of oak and in beech hedges. In the British Isles it is one of very few native hardwood evergreen trees. It has a great capacity to adapt to different conditions and is a pioneer species that repopulates the margins of forests or clearcuts.
I. aquifolium can exceed 10 m in height, but is often found at much smaller heights, typically tall and broad, with a straight trunk and pyramidal crown, branching from the base. It grows slowly and does not usually fully mature due to cutting or fire. It can live 500 years, but usually does not reach 100.
Ilex aquifolium is the species of holly long associated with Christmas, and previously the Roman festival of Saturnalia. Its glossy green prickly leaves and bright red berries (produced only by the female plant) are represented in wreaths, garlands and cards wherever Christmas is celebrated. It is a subject of music and folklore, especially in the British tradition. It is also a popular ornamental shrub or hedge, with numerous cultivars in a range of colours.
Description
Ilex aquifolium grows to tall with a woody stem as wide as , rarely or more, in diameter. The leaves are 5–12 cm long and 2–6 cm broad; they are evergreen, lasting about five years, and are dark green on the upper surface and lighter on the underside, oval, leathery, shiny, and about 5 to 9 cm long. In the young and in the lower limbs of mature trees, the leaves have three to five sharp spines on each side, pointing alternately upward and downward, while leaves of the upper branches in mature trees lack spines.
The flowers are white, four-lobed, and pollinated by bees. Holly is dioecious, meaning that there are male plants and female plants. The sex cannot be determined until the plants begin flowering, usually between 4 and 12 years of age. In male specimens, the flowers are yellowish and appear in axillary groups. In the female, flowers are isolated or in groups of three and are small and white or slightly pink, and consist of four petals and four sepals partially fused at the base.
The fruit only appears on female plants, which require male plants nearby to fertilise them. The fruit is a drupe (stone fruit), about 6–10 mm in diameter, a bright red or bright yellow, which matures around October or November; at this time they are very bitter due to the ilicin content and so are rarely eaten until late winter after frost has made them softer and more palatable. They are eaten by rodents, birds and larger herbivores. Each fruit contains 3 to 4 seeds which do not germinate until the second or third spring.
Distribution and habitat
Today, holly is found in western Asia and Europe in the undergrowth of oak forest and beech forest in particular, although at times it can form a dense thicket as the dominant species. It requires moist, shady environments, found within forests or in shady slopes, cliffs, and mountain gorges.
Along the west coast of the United States and Canada, from California to British Columbia, non-native English Holly has proved very invasive, quickly spreading into native forest habitat, where it thrives in shade and crowds out native species. It has been placed on the Washington State Noxious Weed Control Board's monitor list, and is a Class C invasive plant in Portland.
During the Cenozoic Era, the Mediterranean region, Europe, and northwest Africa had a wetter climate and were largely covered by laurel forests. Holly was a typical representative species of this biome, where many current species of the genus Ilex were present. With the drying of the Mediterranean Basin during the Pliocene, the laurel forests gradually retreated, replaced by more drought-tolerant sclerophyll plant communities. The modern Ilex aquifolium resulted from this change. Most of the last remaining laurel forests around the Mediterranean are believed to have died out approximately 10,000 years ago at the end of the Pleistocene.
Ecology
Holly is a rugged pioneer species that prefers relatively moist areas, and tolerates frost as well as summer drought. The plant is common in the garrigue and maquis and is also found in deciduous forest and oak forest.
Pure stands of hollies can grow into a labyrinth of vaults in which thrushes and deer take refuge, while smaller birds are protected among their spiny leaves. After the first frost of the season, holly fruits become soft and fall to the ground serving as important food in its native regions for winter birds at a time of scarce resources.
The flowers are attractive as nectar sources for insects such as bees, wasps, flies, and small butterflies. The commonly-encountered pale patches on leaves are due to the leaf-mine insect Phytomyza ilicis.
It is an invasive species on the West Coast of Canada and the United States as well as in Hawaii.
Epigenetics
Holly is well known in epigenetics. Some cultivars have smooth leaf edges, or both smooth and prickly leaf edges on the same plant. In response to stress these cultivars can produce leaves with more prickles.
Cultivation
Ilex aquifolium is widely grown in parks and gardens in temperate regions. Hollies are often used for hedges; the spiny leaves make them difficult to penetrate, and they take well to pruning and shaping.
AGM cultivars
Numerous cultivars have been selected, of which the following have gained the Royal Horticultural Society's Award of Garden Merit:
I. aquifolium
'Amber' (female)
'Argentea Marginata'
'Ferox Argentea'
'Golden Queen'
'Handsworth New Silver'
'J.C. van Tol'
'Madame Briot'
'Pyramidalis'
'Silver Queen'
Ilex × altaclerensis
The hybrid Ilex × altaclerensis was developed at Highclere Castle in Hampshire, England, in 1835, a cross between I. aquifolium and the tender species I. perado. The following cultivars have gained the RHS AGM:
'Belgica Aurea'
'Camelliifolia'
'Golden King'
'Lawsoniana'
Chemistry and toxicity
Holly berries contain alkaloids, theobromine, saponins, caffeic acid, and a yellow pigment, ilixanthin. The berries are generally regarded as toxic to humans.
Uses
Between the thirteenth and eighteenth centuries, before the introduction of turnips, Ilex aquifolium was cultivated for use as winter fodder for cattle and sheep. Less spiny varieties of holly were preferred, and in practice the leaves growing near the top of the tree have far fewer spines, making them more suitable for fodder.
Ilex aquifolium was once among the traditional woods for Great Highland bagpipes before tastes turned to imported dense tropical woods such as cocuswood, ebony, and African blackwood.
| Biology and health sciences | Aquifoliales | Plants |
2260429 | https://en.wikipedia.org/wiki/Tail | Tail | The tail is the elongated section at the rear end of a bilaterian animal's body; in general, the term refers to a distinct, flexible appendage extending backwards from the midline of the torso. In vertebrate animals that evolved to lose their tails (e.g. frogs and hominid primates), the coccyx is the vestigial homologue of the tail. While tails are primarily considered a feature of vertebrates, some invertebrates such as scorpions and springtails, as well as snails and slugs, have tail-like appendages that are also referred to as tails.
Tail-shaped objects are sometimes referred to as "caudate" (e.g. caudate lobe, caudate nucleus), and the body part associated with or proximal to the tail are given the adjective "caudal" (which is considered a more precise anatomical terminology).
Function
Animal tails are used in a variety of ways. They provide a source of thrust for aquatic locomotion for fish, cetaceans and crocodilians and other forms of marine life. Terrestrial species of vertebrates that do not need to swim, e.g. cats and kangaroos, instead use their tails for balance; and some, such as monkeys and opossums, have grasping prehensile tails, which are adapted for arboreal locomotion.
Many animals use their tail for utility purposes; for example, many grazing animals, such as horses and oxen, use their tails to drive away parasitic flies and sweep off other biting insects. Some animals with broad, furry tails (e.g. foxes) often wrap the tail around the body as a means of thermal insulation, like a blanket.
Some species' tails serve aggressive functions, either predatorily or defensively. For example, the tails of scorpions bear a stinger that contains venom, which can be used to either kill large prey or fight off a threat. Similarly, stingrays have a thickened spine that can deliver penetrating trauma. Thresher sharks are known to use their long tails to stun prey. Many species of snakes wiggle their tails as a lure to attract prey, which may mistake the tail for a worm. The extinct armored dinosaurs (stegosaurs and ankylosaurs) had tails with spikes or clubs as defensive weapons against predators.
Tails are also used for communication and signalling. Most canines use their tails to communicate mood and intention. Some deer species flash the white underside of their tails to warn other nearby deer of possible danger, beavers slap the water with their tails to indicate danger, felids raise and quiver their tails while scent-marking, and canids (including domestic dogs) indicate emotions through the positioning and movement of their tails. Rattlesnakes perform tail vibration to generate a distinct rattling noise that signals aggression and warns potential predators to stay away.
Some species of lizard (e.g. geckos) can self-amputate ("cast") their tails from their bodies to help them escape predators, which are either distracted by the wriggling detached tail or manage only to seize the severed tail while the lizard flees. Tails cast in this manner generally grow back over time, though the replacement is typically darker in colour than the original and contains only cartilage, not bone. Various species of rat demonstrate a similar function with their tails, known as degloving, in which the outer layer is shed so that the animal can escape from a predator.
Most birds' tails end in long feathers called rectrices. These feathers are used as a rudder, helping the bird steer and maneuver in flight; they also help the bird to balance while it is perched. In some species—such as birds of paradise, lyrebirds, and most notably peafowl—modified tail feathers play an important role in courtship displays. The extra-stiff tail feathers of other species, including woodpeckers and woodcreepers, allow them to brace themselves firmly against tree trunks.
Human tails
In humans, the tail bud refers to the part of the embryo which develops into the end of the spine; however, this is not a tail.
Infrequently, a child is born with a "soft tail", which contains no vertebrae, only blood vessels, muscles, and nerves; this is regarded as an abnormality rather than a vestigial true tail, even when such an appendage is located where the tail would be expected. Fewer than 40 cases have been reported of infants with "true tails" containing the caudal vertebrae, a result of atavism.
In 2024, scientists claimed to have found a genetic mutation that contributed to the loss of the tail in the common ancestor of humans and other apes.
Humans have a "tail bone" (the coccyx) attached to the pelvis; it comprises fused vertebrae, usually four, at the bottom of the vertebral column. It does not normally protrude externally - humans are an acaudal (or acaudate) species (i.e., tailless).
| Biology and health sciences | External anatomy and regions of the body | Biology |
2260546 | https://en.wikipedia.org/wiki/South%20Pole%20Telescope | South Pole Telescope | The South Pole Telescope (SPT) is a diameter telescope located at the Amundsen–Scott South Pole Station, Antarctica. The telescope is designed for observations in the microwave, millimeter-wave, and submillimeter-wave regions of the electromagnetic spectrum, with the particular design goal of measuring the faint, diffuse emission from the cosmic microwave background (CMB). Key results include a wide and deep survey that discovered hundreds of clusters of galaxies using the Sunyaev–Zel'dovich effect, a sensitive 5-arcminute CMB power spectrum survey, and the first detection of B-mode polarized CMB.
The first major survey with the SPT—designed to find distant, massive, clusters of galaxies through their interaction with the CMB, with the goal of constraining the dark energy equation of state—was completed in October 2011. In early 2012, a new camera (SPTpol) was installed on the SPT with even greater sensitivity and the capability to measure the polarization of incoming light. This camera operated from 2012–2016 and was used to make unprecedentedly deep high-resolution maps of hundreds of square degrees of the Southern sky. In 2017, the third-generation camera SPT-3G was installed on the telescope, providing nearly an order-of-magnitude increase in detectors in the focal plane.
The SPT collaboration is made up of over a dozen (mostly North American) institutions, including the University of Chicago, the University of California, Berkeley, Case Western Reserve University, Harvard/Smithsonian Astrophysical Observatory, the University of Colorado Boulder, McGill University, Michigan State University, The University of Illinois at Urbana-Champaign, University of California, Davis, Ludwig Maximilian University of Munich, Argonne National Laboratory, and the Fermi National Accelerator Laboratory. It is funded by the National Science Foundation and the Department of Energy.
Microwave and millimeter-wave observations at the South Pole
The South Pole region is the premier observing site in the world for millimeter-wavelength observations. The Pole's high altitude of above sea level means the atmosphere is thin, and the extreme cold keeps the amount of water vapor in the air low. This is particularly important for observing at millimeter wavelengths, where incoming signals can be absorbed by water vapor, and where water vapor emits radiation that can be confused with astronomical signals. Since the sun does not rise and set daily, the atmosphere at the pole is particularly stable. In addition, no interference exists from the sun in the millimeter range during the months of polar night.
The telescope
The telescope is a 10-meter (394 in) diameter off-axis Gregorian telescope in an altazimuth mount (at the poles, an altazimuth mount is effectively identical to an equatorial mount). It was designed to allow a large field of view (over 1 square degree) while minimizing systematic uncertainties from ground spill-over and scattering off the telescope optics. The surface of the telescope mirror is smooth down to roughly , or about one-thousandth of an inch (i.e., one thou), which allows sub-millimeter wavelength observations. A key advantage of the SPT observing strategy is that the entire telescope is scanned, so the beam does not move relative to the telescope mirrors. The fast scanning of the telescope and its large field of view makes SPT efficient at surveying large areas of sky, which is required to achieve the science goals of the SPT cluster survey and CMB polarization measurements.
The SPT-SZ camera
The first camera installed on the SPT contained a 960-element bolometer array of superconducting transition edge sensors (TES), which made it one of the largest TES bolometer arrays ever built. The focal plane for this camera (referred to as the SPT-SZ camera because it was designed to conduct a survey of galaxy clusters through their Sunyaev–Zel'dovich effect signature) was split into six pie-shaped wedges, each with 160 detectors. These wedges observed at three different frequencies: 95 GHz, 150 GHz, and 220 GHz. The modularity of the focal plane allowed it to be broken into many different frequency configurations. For the majority of the life of the camera, the SPT-SZ focal plane had one wedge at 95 GHz, four at 150 GHz, and one at 220 GHz. The SPT-SZ camera was used primarily to conduct a survey of 2500 square degrees of the Southern sky (20h to 7h in right ascension, −65d to −40d declination) to a noise level of roughly 15 micro-Kelvin in a 1-arcminute pixel at 150 GHz.
The SPTpol camera
The second camera installed on the SPT–also designed with superconducting TES arrays–was even more sensitive than the SPT-SZ camera and, crucially, had the ability to measure the polarization of the incoming light (hence the name SPTpol – South Pole Telescope POLarimeter). The 780 polarization-sensitive pixels (each with two separate TES bolometers, one sensitive to each linear polarization) were divided between observing frequencies of 90 GHz and 150 GHz, and pixels at the two frequencies are designed with different detector architectures. The 150 GHz pixels were corrugated-feedhorn-coupled TES polarimeters fabricated in monolithic arrays at the National Institute of Standards and Technology. The 90 GHz pixels were individually packaged dual-polarization absorber-coupled polarimeters developed at Argonne National Laboratory. The 90 GHz pixels were coupled to the telescope optics through individually machined contoured feedhorns.
The first year of SPTpol observing was used to survey a 100-square-degree field centered at R.A. 23h30m, declination −55d. The next four years were primarily spent surveying a 500-square-degree region of which the original 100 square degrees is a subset. These are currently the deepest high-resolution maps of the millimeter-wave sky over more than a few square degrees, with the noise level at 150 GHz around 5 micro-Kelvin-arcminute, and a factor of the square root of two deeper on the 100-square-degree field.
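The quoted depths follow the usual radiometer scaling, in which map noise falls as the square root of the integration time per unit area. The snippet below simply verifies the "square root of two deeper" statement under the assumption (not stated in the text) that the 100-square-degree subfield accumulated roughly twice the per-area integration of the full survey.

```python
import math

# Map noise scales as 1/sqrt(observing time per unit area). If the
# 100 sq. deg. field received roughly twice the per-area integration
# of the full 500 sq. deg. survey (an assumption), it is sqrt(2) deeper.
# The 5 uK-arcmin figure is from the text.

depth_500 = 5.0  # uK-arcmin at 150 GHz, full survey
depth_100 = depth_500 / math.sqrt(2)
print(f"100 sq. deg. field depth ~ {depth_100:.1f} uK-arcmin")  # ~3.5
```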
The SPT-3G camera
In January 2017, the third-generation camera SPT-3G was installed on the SPT. Taking advantage of a combination of improvements to the optical system (providing a significantly larger diffraction-limited field of view) and new detector technology (enabling detectors in multiple observing bands in a single pixel), the SPT-3G detector array contains over ten times more sensors than SPTpol, translating almost directly into a tenfold increase in the speed with which the telescope and camera can map a patch of sky to a given noise level. The camera consists of over 16,000 detectors, split evenly between 90, 150, and 220 GHz.
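The "tenfold increase in mapping speed" follows directly from the detector counts, since mapping speed scales roughly linearly with the number of detectors when per-detector sensitivity is comparable (an assumption here). The SPTpol bolometer count is derived from its 780 dual-bolometer pixels described above.

```python
# Mapping speed scales roughly linearly with detector count, assuming
# comparable per-detector sensitivity. Counts are taken from the text:
# SPTpol had 780 pixels with two bolometers each; SPT-3G has over 16,000.

sptpol_detectors = 780 * 2  # two TES bolometers per pixel
spt3g_detectors = 16_000    # "over 16,000 detectors"

speed_ratio = spt3g_detectors / sptpol_detectors
print(f"Mapping-speed increase ~ {speed_ratio:.1f}x")  # ~10x, as stated
```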
In 2018, a new survey began using the SPT-3G camera. This survey was to cover 1500 square degrees to a depth of < 3 micro-Kelvin-arcminute at 150 GHz. Significantly, this field overlaps completely with the BICEP Array observing field, enabling joint analyses of SPT and BICEP data which will deliver significantly better constraints on a potential signal from primordial gravitational waves than either instrument can provide alone.
Science goals and results
The first key project for the SPT, completed in October 2011, was a 2500-square degree survey to search for clusters of galaxies using the Sunyaev–Zel'dovich effect, a distortion of the cosmic microwave background radiation (CMB) due to interactions between CMB photons and the intracluster medium in galaxy clusters. The survey has found hundreds of clusters of galaxies over an extremely wide redshift range. When combined with accurate redshifts and mass estimates for the clusters, this survey will place interesting constraints on the dark energy equation of state. Data from the SPT-SZ survey have also been used to make the most sensitive existing measurements of the CMB power spectrum at angular scales smaller than roughly 5 arcminutes (multipole number larger than 2000), and to discover a population of distant, gravitationally lensed, dusty star-forming galaxies.
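The correspondence between angular scale and multipole number used here is the standard rule of thumb ℓ ≈ 180°/θ; a quick check shows the two figures quoted above are consistent.

```python
# Rule-of-thumb conversion between angular scale and CMB multipole:
# ell ~ 180 degrees / theta. A 5-arcminute scale then corresponds to
# ell ~ 2160, matching "multipole number larger than 2000" above.

theta_arcmin = 5.0
theta_deg = theta_arcmin / 60.0
ell = 180.0 / theta_deg
print(f"theta = {theta_arcmin}' -> ell ~ {ell:.0f}")
```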
Data from the SPTpol camera was used to make several groundbreaking measurements, including the first detection of the so-called "B-mode" or "curl" component of the polarized CMB. This B-mode signal is generated at small angular scales by the gravitational lensing of the much larger primordial "E-mode" polarization signal (generated by scalar density perturbations at the time the CMB was emitted) and at large angular scales by the interaction of the CMB with a background of gravitational waves produced during the epoch of inflation. Measurements of the large-scale B-mode signal have the potential to constrain the energy scale of inflation, thus probing the physics of the universe at the earliest times and highest energy scales imaginable, but these measurements are limited by contamination from the lensing B modes. Using the larger E-mode component of the polarization and measurements of the CMB lensing potential, an estimate can be made of the lensing B modes and used to clean the large-scale measurements. This B-mode delensing was first demonstrated using SPTpol data. SPTpol data also has been used to make the most precise measurements of the E-mode power spectrum and temperature-E-mode correlation spectrum of the CMB and to make high-signal-to-noise maps of the projected matter density using reconstructions of the CMB lensing potential.
The 1500-square-degree SPT-3G survey will be used to achieve multiple science goals, including unprecedented constraints on a background of primordial gravitational waves through joint analysis of B-mode polarization with the BICEP Array, a unique sample of distant galaxy clusters for cosmological and cluster-evolution studies, and constraints on fundamental physics such as the mass of the neutrinos and the existence of light relic particles in the early Universe.
The Atacama Cosmology Telescope has similar, but complementary, science objectives.
Funding
The South Pole Telescope is funded through the National Science Foundation Office of Polar Programs and the United States Department of Energy, with additional support from the Kavli Foundation and the Gordon and Betty Moore Foundation. Funding for the SPTpol and SPT-3G instruments and operations are also provided by the United States Department of Energy Office of Science, Office of High Energy Physics.
Operations
The South Pole Telescope achieved first light, and formal science observations began in March 2007. Commissioning observations and an initial small survey were completed during austral winter 2007 with winter-overs Stephen Padin and Zak Staniszewski at the helm.
In 2008, larger survey fields were completed with winter-overs Keith Vanderlinde and Dana Hrubes, and in 2009 with winter-overs Erik Shirokoff and Ross Williamson.
In December 2009, the camera was upgraded again for the 2010 observing season. The full 2500 square-degree SPT-SZ survey was completed during the 2010 and 2011 observing seasons with winter-overs Dana Hrubes and Daniel Luong-Van.
First light (the first observation) with the SPTpol camera was achieved on January 27, 2012. During the first season of observations, the winterover crew, Cynthia Chiang and Nicholas Huang, took data on a 100 square degree survey field. 2013 winterovers Dana Hrubes and Jason Gallicchio surveyed a larger field as part of the full SPTpol survey. This larger survey was completed by 2014 winterovers Robert Citron and Nicholas Huang, 2015 winterovers Charlie Sievers and Todd Veach, and 2016 winterovers Christine Corbett Moran and Amy Lowitz. The first winter of SPT-3G observing was conducted by winterovers Daniel Michalik and Andrew Nadolski. Adam Jones and Joshua Montgomery followed in 2018, with Douglas Howe and David Riebel wintering in 2019, Geoff Chen and Allen Foster in 2020, Sasha Rahlin and Matt Young in 2021, Aman Chokshi and Allen Foster in 2022, and Kyle Ferguson and Alex Pollak in 2023.
| Technology | Ground-based observatories | null |
2260887 | https://en.wikipedia.org/wiki/Politics%20of%20climate%20change | Politics of climate change | The politics of climate change results from different perspectives on how to respond to climate change. Global warming is driven largely by the emissions of greenhouse gases due to human economic activity, especially the burning of fossil fuels, certain industries like cement and steel production, and land use for agriculture and forestry. Since the Industrial Revolution, fossil fuels have provided the main source of energy for economic and technological development. The centrality of fossil fuels and other carbon-intensive industries has resulted in much resistance to climate friendly policy, despite widespread scientific consensus that such policy is necessary.
Climate change first emerged as a political issue in the 1970s. Efforts to mitigate climate change have been prominent on the international political agenda since the 1990s, and are also increasingly addressed at national and local level. Climate change is a complex global problem. Greenhouse gas (GHG) emissions contribute to global warming across the world, regardless of where the emissions originate. Yet the impact of global warming varies widely depending on how vulnerable a location or economy is to its effects. Global warming is on the whole having a negative impact, which is predicted to worsen as heating increases. The ability to benefit from both fossil fuels and renewable energy sources varies substantially from nation to nation.
Different responsibilities, benefits and climate related threats faced by the world's nations contributed to early climate change conferences producing little beyond general statements of intent to address the problem, and non-binding commitments from the developed countries to reduce emissions. In the 21st century, there has been increased attention to mechanisms like climate finance in order for vulnerable nations to adapt to climate change. In some nations and local jurisdictions, climate friendly policies have been adopted that go well beyond what was committed to at international level. Yet local reductions in GHG emission that such policies achieve have limited ability to slow global warming unless the overall volume of GHG emission declines across the planet.
Since entering the 2020s, the feasibility of replacing energy from fossil fuels with renewable energy sources has increased significantly, with some countries now generating almost all their electricity from renewables. Public awareness of the climate change threat has risen, in large part due to social movements led by youth and the visibility of climate change impacts such as extreme weather events and flooding caused by sea level rise. Many surveys show a growing proportion of voters support tackling climate change as a high priority, making it easier for politicians to commit to policies that include climate action. The COVID-19 pandemic and economic recession led to widespread calls for a "green recovery", with some polities like the European Union successfully integrating climate action into policy change. Outright climate change denial had become a much less influential force by 2019, and opposition has pivoted to strategies of encouraging delay or inaction.
Policy debate
Like all policy debates, the political debate on climate change is fundamentally about action. Various distinct arguments underpin the politics of climate change - such as different assessments of the urgency of the threat, and on the feasibility, advantages and disadvantages of various responses. But essentially, these all relate to potential responses to climate change.
The statements that form political arguments can be divided into two types: positive and normative statements. Positive statements can generally be clarified or refuted by careful definition of terms, and scientific evidence. Whereas normative statements about what one "ought" to do often relate at least partly to morality, and are essentially a matter of judgement. Experience has indicated that better progress is often made at debates if participants attempt to disentangle the positive and normative parts of their arguments, reaching agreement on the positive statements first. In the early stages of a debate, the normative positions of participants can be strongly influenced by perceptions of the best interests of whatever constituency they represent. In achieving exceptional progress at the 2015 Paris conference, Christiana Figueres and others noted it was helpful that key participants were able to move beyond a competitive mindset concerning competing interests, to normative statements that reflected a shared abundance based collaborative mindset.
Actions in response to climate change can be divided into three classes: mitigation – actions to reduce greenhouse gas emissions and to enhance carbon sinks, adaptation – actions to defend against the negative results of global warming, and solar geoengineering – a technology in which sunlight would be reflected back to outer space.
Most 20th century international debate on climate change focused almost entirely on mitigation. It was sometimes considered defeatist to pay much attention to adaptation. Also, compared to mitigation, adaptation is more a local matter, with different parts of the world facing vastly different threats and opportunities from climate change. By the early 21st century, while mitigation still receives most attention in political debates, it is no longer the sole focus. Some degree of adaptation is now widely considered essential, and is discussed internationally at least at high level, though which specific actions to take remains mostly a local matter. A commitment to provide $100 billion per year worth of funding to developing countries was made at the 2009 Copenhagen Summit. At Paris, it was clarified that allocation of the funding should involve a balanced split between adaptation and mitigation, though not all funding had been provided, and what had been delivered was going mainly to mitigation projects. By 2019, possibilities for geoengineering were also increasingly being discussed, and were expected to become more prominent in future debates.
Political debate on how to mitigate tends to vary depending on the scale of governance concerned. Different considerations apply for international debate, compared with national and municipal level discussion. In the 1990s, when climate change first became prominent on the political agenda, there was optimism that the problem could be successfully tackled. The then recent signing of the 1987 Montreal Protocol to protect the ozone layer had indicated that the world was able to act collectively to address a threat warned about by scientists, even when it was not yet causing significant harm to humans. Yet by the early 2000s GHG emissions had continued to rise, with little sign of agreement to penalise emitters or reward climate friendly behaviour. It had become clear that achieving global agreement for effective action to limit global warming would be much more challenging. Some politicians, such as Arnold Schwarzenegger with his slogan "terminate pollution", say that activists should generate optimism by focusing on the health co-benefits of climate action.
Multilateral
Climate change became a fixture on the global political agenda in the early 1990s, with United Nations Climate Change conferences set to run yearly. These annual events are also called Conferences of the Parties (COPs). Major landmark COPs were the 1997 Kyoto Protocol, the 2009 Copenhagen Summit and the 2015 Paris conference. Kyoto was initially considered promising, yet by the early 2000s its results had proved disappointing. Copenhagen saw a major attempt to move beyond Kyoto with a much stronger package of commitments, yet largely failed. Paris was widely considered successful, yet how effective it will be at reducing long term global warming remains to be seen.
At international level, there are three broad approaches to emissions reduction that nations can attempt to negotiate. Firstly, the adoption of emissions reduction targets. Secondly, setting a carbon price. Lastly, creating a largely voluntary set of processes to encourage emissions reduction, including the sharing of information and progress reviews. These approaches are largely complementary, though at various conferences much of the focus has often been on a single approach. Until about 2010, international negotiations focused largely on emissions targets. The success of the Montreal treaty in reducing emissions that damaged the ozone layer suggested that targets could be effective. Yet in the case of greenhouse gas reductions, targets have not in general led to substantial cuts in emissions. Ambitious targets have usually not been met. Attempts to impose severe penalties that would incentivize more determined efforts to meet challenging targets have always been blocked by at least one or two nations.
In the 21st century, there is widespread agreement that a carbon price is the most effective way to reduce emissions, at least in theory. Generally though, nations have been reluctant to adopt a high carbon price, or in most cases any price at all. One of the main reasons for this reluctance is the problem of carbon leakage – the phenomenon whereby activities producing GHG emissions are moved out of the jurisdiction that imposes the carbon price, depriving that jurisdiction of jobs and revenue to no benefit, as the emissions will simply be released elsewhere. Nonetheless, the percentage of the world's emissions that are covered by a carbon price rose from 5% in 2005 to 15% by 2019, and should reach over 40% once China's carbon price comes fully into force. Existing carbon price regimes have been implemented mostly independently by the European Union, nations and sub-national jurisdictions acting autonomously.
The largely voluntary pledge and review system where states make their own plans for emissions reduction was introduced in 1991, but abandoned before the 1997 Kyoto treaty, where the focus was on securing agreement for "top down" emissions targets. The approach was revived at Copenhagen, and gained further prominence with the 2015 Paris Agreement, though pledges came to be called nationally determined contributions (NDCs). These are meant to be re-submitted in enhanced form every 5 years. How effective this approach is remains to be seen. Some countries submitted elevated NDCs in 2021, around the time of the Glasgow conference. Accounting rules for carbon trading were agreed at the 2021 Glasgow COP meeting.
Regional, national and sub-national
Policies to reduce GHG emissions are set by either national or sub-national jurisdictions, or at regional level in the case of the European Union. Many of the emission reduction policies that have been put into place go beyond those required by international agreements. Examples include the introduction of a carbon price by some individual US states, or Costa Rica reaching 99% electrical power generation from renewables in the 2010s.
Actual decisions to reduce emissions or deploy clean technologies are mostly not made by governments themselves, but by individuals, businesses and other organizations. Yet it is national and local governments that set policies to encourage climate friendly activity. Broadly these policies can be divided into four types: firstly, the implementation of a carbon price mechanism and other financial incentives; secondly, prescriptive regulations, for example mandating that a certain percentage of electricity generation must be from renewables; thirdly, direct government spending on climate friendly activity or research; and fourthly, approaches based on information sharing, education and encouraging voluntary climate friendly behavior. Local climate politics sometimes overlaps with air pollution policy; for example, the politics of creating low emission zones in cities may also aim to reduce carbon emissions from road transport.
Non-governmental actors
Individuals, businesses and NGOs can affect the politics of climate change both directly and indirectly. Mechanisms include individual rhetoric, aggregate expression of opinion by means of polls, and mass protests. Historically, a significant proportion of these protests have been against climate friendly policies. Since the 2000 UK fuel protests there have been dozens of protests across the world against fuel taxes or the ending of fuel subsidies. Since 2019 and the advent of the school strike and Extinction Rebellion, pro-climate protests have become more prominent. Indirect channels for apolitical actors to affect the politics of climate change include funding or working on green technologies, and the fossil fuel divestment movement.
Special interests and lobbying by non-country actors
There are numerous special interest groups, organizations, and corporations who have public and private positions on the multifaceted topic of global warming. The following is a partial list of the types of special interest parties that have shown an interest in the politics of global warming:
Fossil fuel companies: Traditional fossil fuel corporations stand to lose from stricter global warming regulations, though there are exceptions. Because fossil fuel companies are engaged in energy trading, their participation in trading schemes and other such mechanisms could give them a unique advantage, so it is unclear whether every traditional fossil fuel company would always be against stricter global warming policies. As an example, Enron, a traditional gas pipeline company with a large trading desk, heavily lobbied the United States government to regulate CO2: they thought that they would dominate the energy industry if they could be at the center of energy trading.
Farmers and agribusiness are an important lobby but vary in their views on effects of climate change on agriculture and greenhouse gas emissions from agriculture and, for example, the role of the EU Common Agricultural Policy.
Financial Institutions: Financial institutions generally support policies against global warming, particularly the implementation of carbon trading schemes and the creation of market mechanisms that associate a price with carbon. These new markets require trading infrastructures, which banking institutions can provide. Financial institutions are also well positioned to invest, trade and develop various financial instruments that they could profit from through speculative positions on carbon prices and the use of brokerage and other financial functions like insurance and derivative instruments.
Environmental groups: Environmental advocacy groups generally favor strict restrictions on emissions. Environmental groups, as activists, engage in raising awareness.
Renewable energy and energy efficiency companies: companies in wind, solar and energy efficiency generally support stricter global warming policies. They expect their share of the energy market to expand as fossil fuels are made more expensive through trading schemes or taxes.
Nuclear power companies: support and benefit from carbon pricing or subsidies of low-carbon energy production, as nuclear power produces minimal greenhouse gas emissions.
Electricity distribution companies: may lose from solar panels but benefit from electric vehicles.
Traditional retailers and marketers: traditional retailers, marketers, and general corporations respond by adopting policies that resonate with their customers. If "being green" provides customer appeal, then they could undertake modest programs to please and better align with their customers. However, since a general corporation does not make a profit from a particular position, it is unlikely that it would strongly lobby either for or against a stricter global warming policy position.
Medics: often say that climate change and air pollution can be tackled together and so save millions of lives.
Information and communications technology companies: say their products help others combat climate change, tend to benefit from reductions in travel, and many purchase green electricity.
The various interested parties sometimes align with one another to reinforce their message; for example, electricity companies fund the purchase of electric school buses to benefit medics by reducing the load on the health service whilst at the same time selling more electricity. Sometimes industries will fund specialty nonprofit organizations to raise awareness and lobby at their behest.
Collective action
Current climate politics are influenced by a number of social and political movements focused on different parts of building political will for climate action. This includes the climate justice movement, youth climate movement and movements to divest from fossil fuel industries.
Divestment movement
Youth movement
Outlook
Historical political attempts to agree on policies to limit global warming have largely failed to mitigate climate change. Commentators have expressed optimism that the 2020s can be more successful, due to various recent developments and opportunities that were not present during earlier periods. Other commentators have expressed warnings that there is now very little time to act in order to have any chance of keeping warming below 1.5 °C, or even to have a good chance of keeping global heating under 2 °C.
According to Torsten Lichtenau, a leading expert in global carbon transition, corporate climate action reached a huge peak in 2021–2022 around the time of COP26, but in 2024 "it's dropped back to 2019 levels." In 2024, issues like geopolitics, inflation and artificial intelligence became more important for corporations, even though the number of climate-concerned consumers rose. 2024 was also the first year in which the amount of money given to ESG declined.
Opportunities
In the late 2010s, various developments conducive to climate friendly politics saw commentators express optimism that the 2020s might see good progress in addressing the threat of global heating.
Tipping point in public opinion
The year 2019 has been described as "the year the world woke up to climate change", driven by factors such as growing recognition of the global warming threat resulting from recent extreme weather events, the Greta effect and the IPCC 1.5 °C report.
In 2019, the secretary general of OPEC recognized the school strike movement as the greatest threat faced by the fossil fuel industry. According to Christiana Figueres, once about 3.5% of a population starts participating in nonviolent protest, they are always successful in sparking political change, with the success of Greta Thunberg's Fridays for Future movement suggesting that this threshold may be attainable.
A 2023 review study published in One Earth stated that opinion polls show that most people perceive climate change as occurring now and close by. The study concluded that seeing climate change as more distant does not necessarily result in less climate action, and reducing psychological distancing does not reliably increase climate action.
Reduced influence of climate change denial
By 2019, outright climate change denial had become a much less influential force than it had been in previous years. Reasons for this include the increasing frequency of extreme weather events, more effective communication on the part of climate scientists, and the Greta effect. As an example, in 2019 the Cato Institute closed down its climate shop.
Growth of renewable energy
Renewable energy is an inexhaustible source of naturally replenishing energy. The major renewable energy sources are wind, hydropower, solar, geothermal, and biomass. In 2020, renewable energy generated 29% of world electricity.
In the wake of the Paris Agreement, adopted by 196 Parties, 194 of these Parties have submitted their Nationally Determined Contributions (NDCs), i.e., climate pledges, as of November 2021. These countries use many different measures to encourage renewable energy investment: 102 countries have implemented tax credits, 101 countries include some sort of public investment, and 100 countries currently use tax reductions. The largest emitters tend to be industrialized countries like the US, China, UK, and India. These countries are not implementing enough industrial policies (188) compared to deployment policies (more than 1,000).
In November 2021, the 26th United Nations Conference of the Parties (COP26) took place in Glasgow, Scotland. Almost 200 nations agreed to accelerate the fight against climate change and commit to more effective climate pledges. Some of the new pledges included reforms on methane gas pollution, deforestation, and coal financing. Surprisingly, the US and China (the two largest carbon emitters) also both agreed to work together on efforts to prevent global warming from surpassing 1.5 degrees Celsius. Some scientists, politicians, and activists say that not enough was done at this summit and that we will still reach that 1.5 degree tipping point. An independent report by Climate Action Tracker said the commitments were "lip service" and "we will emit roughly twice as much in 2030 as required for 1.5 degrees."
As of 2020, the feasibility of replacing energy from fossil fuel with nuclear and especially renewable energy has much increased, with dozens of countries now generating more than half of their electricity from renewable sources.
Green recovery
Challenges
Despite various promising conditions, commentators tend to warn that several difficult challenges remain, which need to be overcome if climate change politics is to result in a substantial reduction of greenhouse gas emissions. For example, increasing tax on meat can be politically difficult.
Urgency
As of 2021, atmospheric CO2 levels have already increased by about 50% since the pre-industrial era, with billions of tons more being released each year. Global warming has already passed the point where it is beginning to have a catastrophic impact in some localities. So major policy changes need to be implemented very soon if the risk of escalating environmental impact is to be avoided.
Centrality of fossil fuel
Energy from fossil fuels remains central to the world's economy, accounting for about 80% of its energy generation as of 2019. Suddenly removing fossil fuel subsidies from consumers has often been found to cause riots. While clean energy can sometimes be cheaper, provisioning large amounts of renewable energy in a short period of time tends to be challenging. According to a 2023 report by the International Energy Agency, coal emissions grew by 243 Mt to a new all-time high of almost 15.5 Gt. This 1.6% increase was faster than the 0.4% annual average growth over the past decade. In 2022 the European Central Bank argued that high energy prices were accelerating the energy transition away from fossil fuels, but that governments should take steps to prevent energy poverty without hindering the move to low carbon energy.
Inactivism
While outright denial of climate change is much less prevalent in the 2020s compared to the preceding decades, many arguments continue to be made against taking action to limit GHG emissions. Such arguments include the view that there are better ways to spend available funds (such as adaptation), that it would be better to wait until new technology is developed as that would make mitigation cheaper, that technology and innovation will render climate change moot or resolve certain aspects, and that the future negative effects of climate change should be heavily discounted compared to current needs.
Fossil fuel lobby and political spending
The largest oil and gas corporations that comprise Big Oil and their industry lobbyist arm, the American Petroleum Institute (API), spend large amounts of money on lobbying and political campaigns, and employ hundreds of lobbyists, to obstruct and delay government action to address climate change. The fossil fuel lobby has considerable clout in Washington, D.C. and in other political centers, including the European Union and the United Kingdom. Fossil fuel industry interests spend many times as much on advancing their agenda in the halls of power as do ordinary citizens and environmental activists, with the former spending $2 billion in the years 2000–2016 on climate change lobbying in the United States. The five largest Big Oil corporations spent hundreds of millions of euros to lobby for their agenda in Brussels.
Big Oil companies often adopt "sustainability principles" that are at odds with the policy agenda their lobbyists advocate, which often entails sowing doubt about the reality and impacts of climate change and forestalling government efforts to address them. API launched a public relations disinformation campaign with the aim of creating doubt in the public mind so that "climate change becomes a non-issue." This industry also spends lavishly on American political campaigns, with approximately 2/3 of its political contributions over the past several decades fueling Republican Party politicians, and outspending many-fold political contributions from renewable energy advocates. Fossil fuel industry political contributions reward politicians who vote against environmental protections. According to a study published by the Proceedings of the National Academy of Sciences of the United States of America, as voting by a member of the United States Congress turned more anti-environment, as measured by their voting record as scored by the League of Conservation Voters (LCV), the fossil fuel industry contributions that the member received increased. On average, a 10% decrease in the LCV score was correlated with an increase of $1,700 in campaign contributions from the fossil fuel industry for the campaign following the Congressional term.
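Read as a simple linear relationship (a back-of-the-envelope extrapolation of the reported coefficient, not an additional finding of the study), this implies

$$\Delta C \approx \$1{,}700 \times \frac{-\Delta S}{10\%}$$

where \(\Delta S\) is the change in a member's LCV score and \(\Delta C\) the change in fossil fuel campaign contributions; for example, a member whose score fell by 30% would, on this reading, have received on average about $5,100 more from the industry in the following campaign.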
Suppression of climate science
Big Oil companies, starting as early as the 1970s, suppressed their own scientists' reports of major climate impacts of the combustion of fossil fuels. ExxonMobil launched a corporate propaganda campaign promoting false information about the issue of climate change, a tactic that has been compared to Big Tobacco's public relations efforts to hoodwink the public about the dangers of smoking. Fossil fuel industry-funded think tanks harassed climate scientists who were publicly discussing the dire threat of climate change. As early as the 1980s, when larger segments of the American public began to become aware of the climate change issue, the administrations of some United States presidents scorned scientists who spoke publicly of the threat fossil fuels posed for the climate. Other U.S. administrations have silenced climate scientists and muzzled government whistleblowers. Political appointees at a number of federal agencies prevented scientists from reporting their findings regarding aspects of the climate crisis, changed data modeling to arrive at conclusions they had set out a priori to prove, and shut out the input of career scientists at the agencies.
Targeting of climate activists
Climate and environmental activists, including, increasingly, those defending woodlands against the logging industry, have been killed in several countries, such as Colombia, Brazil and the Philippines. The perpetrators of most such killings have not been punished. A record number of such killings was recorded for the year 2019. Indigenous environmental activists are disproportionately targeted, comprising as many as 40% of fatalities worldwide. Domestic intelligence services of several governments, such as those of the U.S. government, have targeted environmental activists and climate change organizations as "domestic terrorists," surveilling them, investigating them, questioning them, and placing them on national "watchlists" that could make it more difficult for them to board airplanes and could instigate local law enforcement monitoring. Other U.S. tactics have included preventing media coverage of American citizen assemblies and protests against climate change, and partnering with private security companies to monitor activists.
Doomism
In the context of climate change politics, doomism refers to pessimistic narratives that claim that it is now too late to do anything about climate change. Doomism can include exaggeration of the probability of cascading climate tipping points, and their likelihood in triggering runaway global heating beyond human ability to control, even if humanity was able to immediately stop all burning of fossil fuels. In the US, polls found that for people who did not support further action to limit global warming, a belief that it is too late to do so was given as a more common reason than skepticism about man made climate change.
Lack of compromise
Several climate friendly policies have been blocked in the legislative process by environmental and/or left leaning pressure groups and parties. For example, in 2009, the Australian green party voted against the Carbon Pollution Reduction Scheme, as they felt it did not impose a high enough carbon price. In the US, the Sierra Club helped defeat a 2016 climate tax bill which they saw as lacking in social justice. Some of the attempts to impose a carbon price in US states have been blocked by left wing politicians because they were to be implemented by a cap and trade mechanism, rather than a tax.
Multi-sector governance
The issue of climate change usually fits into various sectors, which means that the integration of climate change policies into other policy areas is frequently called for. Thus the problem is difficult, as it needs to be addressed at multiple scales with diverse actors involved in the complex governance process.
Maladaptation
Successful adaptation to climate change requires balancing competing economic, social, and political interests. In the absence of such balancing, harmful unintended consequences can undo the benefits of adaptation initiatives. For example, efforts to protect coral reefs in Tanzania forced local villagers to shift from traditional fishing activities to farming that produced higher greenhouse gas emissions.
Wars and tensions
"Conflict sensitivity and peacebuilding" are a "key for climate policy-making." Wars and geopolitical tensions harm climate action, including by preventing just distribution of needed resources. Climate change can increase conflicts, creating a vicious cycle. The war in Ukraine seriously disturbed climate action. Military forces are responsible for 5.5% of global emissions and wars diverte resources from climate action.
Technology
The promise of technology is seen as both a threat and a potential boon. New technologies can open up possibilities for new and more effective climate policies. Most models that indicate a path to limiting warming to 2 °C give a big role to carbon dioxide removal, one of the approaches of climate change mitigation. Commentators from across the political spectrum tend to welcome removal. But some are skeptical that it will ever be able to remove enough to slow global warming without there also being rapid cuts in emissions, and they warn that too much optimism about such technology may make it harder for mitigation policies to be enacted.
Solar radiation management is another technology aiming to reduce global warming. At least with stratospheric aerosol injection, there is broad agreement that it would be effective in bringing down average global temperatures. Yet the prospect is considered unwelcome by many climate scientists. They warn that side effects would include possible reductions in agricultural yields due to reduced sunlight and rainfall, and possible localized temperature rises and other weather disruptions. According to Michael Mann, the prospect of using solar management to reduce temperatures is another argument used to reduce willingness to enact emissions reduction policy.
Just transition
Economic disruption due to phaseout of carbon-intensive activities, such as coal mining, cattle farming or bottom trawling, can be politically sensitive due to the high political profile of coal miners, farmers and fishers in some countries. Many labor and environmental groups advocate for a just transition that minimizes the harm and maximizes the benefits associated with climate-related changes to society, for example by providing job training.
Different responses on the political spectrum
Climate friendly policies are generally supported across the political spectrum, though there have been many exceptions among voters and politicians leaning towards the right, and even politicians on the left have rarely made addressing climate change a top priority. In the 20th century, right wing politicians led much of the significant action against climate change, both internationally and domestically, with Richard Nixon and Margaret Thatcher being prominent examples. Yet by the 1990s, especially in some English speaking countries and most especially in the US, the issue began to be polarized. Right wing media started arguing that climate change was being invented or at least exaggerated by the left to justify an expansion in the size of government. As of 2020, some right wing governments have enacted more climate friendly policies. Various surveys indicated a slight trend for even U.S. right wing voters to become less skeptical of global warming, and groups like the American Conservation Coalition indicate that young Republican voters embrace climate as a central policy field. Though in the view of Anatol Lieven, for some right wing US voters, being skeptical of climate change has become part of their identity, so their position on the matter cannot easily be shifted by rational argument.
A 2014 study from the University of Dortmund concluded that countries with center and left-wing governments had higher emission reductions than right-wing governments in OECD countries during 1992–2008. Historically, nationalist governments have been among the worst performers in enacting policies. Though according to Lieven, as climate change is increasingly seen as a threat to the ongoing existence of nation states, nationalism is likely to become one of the most effective forces to drive determined mitigation efforts. The growing trend to securitize the climate change threat may be especially effective for increasing support among nationalist and conservatives.
A 2024 analysis found 100 U.S. representatives and 23 U.S. senators—23% of the 535 members of Congress—to be climate change deniers, all the deniers being Republicans.
History
Relationship to climate science
In the scientific literature, there is an overwhelming consensus that global surface temperatures have increased in recent decades and that the trend is caused primarily by human-induced emissions of greenhouse gases.
The politicization of science, in the sense of a manipulation of science for political gains, is a part of the political process. It features in the controversies about intelligent design (compare the Wedge strategy) and among the "Merchants of Doubt", scientists under suspicion of willingly obscuring findings about issues like tobacco smoke, ozone depletion, global warming or acid rain. However, in the case of ozone depletion, global regulation based on the Montreal Protocol was successful, in a climate of high uncertainty and against strong resistance, while in the case of climate change the Kyoto Protocol failed.
While the IPCC process tries to find and orchestrate the findings of global climate change research to shape a worldwide consensus on the matter, it has itself been the object of a strong politicization. Anthropogenic climate change evolved from a mere science issue to a top global policy topic.
The IPCC process, having built a broad science consensus, does not stop governments from following different, if not opposing, goals. For ozone depletion, global regulation was already being put into place before a scientific consensus was established. So a linear model of policy-making, based on the view that the more knowledge we have, the better the political response will be, is not necessarily accurate. Instead, knowledge policy, which successfully manages knowledge and uncertainties as a foundation for political decision making, requires a better understanding of the relation between science, public (lack of) understanding and policy.
Most of the policy debate concerning climate change mitigation has been framed by projections for the twenty-first century. Academics have criticized this as short term thinking, as decisions made in the next few decades will have environmental consequences that will last for many millennia.
It has been estimated that only 0.12% of all funding for climate-related research is spent on the social science of climate change mitigation. Vastly more funding is spent on natural science studies of climate change and considerable sums are also spent on studies of the impact of and adaptation to climate change. It has been argued that this is a misallocation of resources, as the most urgent challenge is to work out how to change human behavior to mitigate climate change, whereas the natural science of climate change is already well established and there will be decades and centuries to handle adaptation.
Political economy of climate change
Political economy of climate change is an approach that applies the political economy thinking concerning social and political processes to study the critical issues surrounding decision-making on climate change.
The ever-increasing awareness and urgency of climate change had led scholars to explore a better understanding of the multiple actors and influencing factors that affect climate change negotiation, and to seek more effective solutions to tackle climate change. Analyzing these complex issues from a political economy perspective helps to explain the interactions between different stakeholders in response to climate change impacts, and provides opportunities to achieve better implementation of climate change policies.
Introduction
Background
Climate change has become one of the most pressing environmental concerns and global challenges in society today. As the issue rises in prominence on the international agenda, researchers from different academic sectors have long been devoting great efforts to exploring effective solutions to climate change. Technologists and planners have been devising ways of mitigating and adapting to climate change; economists have been estimating the cost of climate change and the cost of tackling it; and development experts have been exploring the impact of climate change on social services and public goods. However, Cammack (2007) points out two problems with many of the above discussions, namely the disconnection between the proposed solutions to climate change from different disciplines, and the absence of politics in addressing climate change at the local level. Further, the issue of climate change faces various other challenges, such as the problem of elite resource capture, the resource constraints in developing countries and the conflicts that frequently result from such constraints, which have often received insufficient attention in suggested solutions. In recognition of these problems, it is advocated that “understanding the political economy of climate change is vital to tackling it”.
Meanwhile, the unequal distribution of the impacts of climate change and the resulting inequity and unfairness on the poor who contribute least to the problem have linked the issue of climate change with development study, which has given rise to various programs and policies that aim at addressing climate change and promoting development. Although great efforts have been made on international negotiations concerning the issue of climate change, it is argued that much of the theory, debate, evidence-gathering and implementation linking climate change and development assume a largely apolitical and linear policy process. In this context, Tanner and Allouche (2011) suggest that climate change initiatives must explicitly recognize the political economy of their inputs, processes and outcomes so as to find a balance between effectiveness, efficiency and equity.
Definition
In its earliest manifestations, the term “political economy” was basically a synonym of economics, while it is now a rather elusive term that typically refers to the study of the collective or political processes through which public economic decisions are made. In the climate change domain, Tanner and Allouche (2011) define the political economy as “the processes by which ideas, power and resources are conceptualized, negotiated and implemented by different groups at different scales”. While a substantial literature has emerged on the political economy of environmental policy, explaining the “political failure” of environmental programmes to efficiently and effectively protect the environment, systematic analysis of the specific issue of climate change using the political economy framework is relatively limited.
Characteristics of climate change
The urgent need to consider and understand the political economy of climate change is based on the specific characteristics of the problem.
The key issues include:
The cross-sectoral nature of climate change: The issue of climate change usually fits into various sectors, which means that the integration of climate change policies into other policy areas is frequently called for. Thus the problem is complicated as it needs to be tackled at multiple scales, with diverse actors involved in the complex governance process. The interaction of these facets leads to political processes with multiple and overlapping conceptualizations, negotiation and governance issues, which requires the understanding of political economy processes.
The problematic perception of climate change as simply a “global” issue: Climate change initiatives and governance approaches have tended to be driven from a global scale. While the development of international agreements has marked a progressive step towards global political action, this globally-led governance of the climate change issue may be unable to provide adequate flexibility for specific national or sub-national conditions. Besides, from the development point of view, the issue of equity and global environmental justice would require a fair international regime within which the impact of climate change and poverty could be simultaneously prevented. In this context, climate change is not only a global crisis that needs the presence of international politics, but also a challenge for national or sub-national governments. Understanding the political economy of climate change could explain the formulation and translation of international initiatives into specific national and sub-national policy contexts, which provides an important perspective for tackling climate change and achieving environmental justice.
The growth of climate change finance: Recent years have witnessed a growing number of financial flows and the development of financing mechanisms in the climate change arena. The 2010 United Nations Climate Change Conference in Cancun, Mexico committed a significant amount of money from developed countries to the developing world in support of adaptation and mitigation technologies. In the short term, this fast-start finance will be transferred through various channels including bilateral and multilateral official development assistance, the Global Environment Facility, and the UNFCCC. Besides, a growing number of public funds have provided greater incentives to tackle climate change in developing countries. For instance, the Pilot Program for Climate Resilience aims at creating an integrated and scaled-up approach to climate change adaptation in some low-income countries and preparing for future finance flows. In addition, climate change finance in developing countries could potentially change the traditional aid mechanisms, through the differential interpretations of ‘common but differentiated responsibilities’ by developing and developed countries. As a result, governance structures will inevitably have to change if developing countries are to break out of the traditional donor-recipient relationships. Within these contexts, understanding the political economy processes of financial flows in the climate change arena is crucial to effectively governing the resource transfer and tackling climate change.
Different ideological worldviews of responding to climate change: Nowadays, because of the perception of science as a dominant policy driver, much of the policy prescription and action in the climate change arena has concentrated on assumptions around standardized governance and planning systems, linear policy processes, readily transferable technology, economic rationality, and the ability of science and technology to overcome resource gaps. As a result, there tends to be a bias towards technology-led and managerial approaches to address climate change in apolitical terms. Besides, a wide range of different ideological worldviews leads to a high divergence in the perception of climate change solutions, which also has a great influence on decisions made in response to climate change. Exploring these issues from a political economy perspective provides the opportunity to better understand the “complexity of politics and decision-making processes in tackling climate change, the power relations mediating competing claims over resources, and the contextual conditions for enabling the adoption of technology”.
Unintended negative consequences of adaptation policies that fail to factor in environmental-economic trade-offs: Successful adaptation to climate change requires balancing competing economic, social, and political interests. In the absence of such balancing, harmful unintended consequences can undo the benefits of adaptation initiatives. For example, efforts to protect coral reefs in Tanzania forced local villagers to shift from traditional fishing activities to farming that produced higher greenhouse gas emissions.
Socio-political constraints
The role of political economy in understanding and tackling climate change is also founded upon the key issues surrounding the domestic socio-political constraints:
The problems of fragile states: Fragile states—defined as poor performers, conflict and/or post-conflict states—are usually incapable of using the aid for climate change effectively. The issues of power and social equity have exacerbated the climate change impacts, while insufficient attention has been paid to the dysfunction of fragile states. Considering the problems of fragile states, the political economy approach could improve the understanding of the long-standing constraints upon capacity and resilience, through which the problems associated with weak capacity, state-building and conflicts could be better addressed in the context of climate change.
Informal governance: In many poorly performing states, decision-making around the distribution and use of state resources is driven by informal relations and private incentives rather than formal state institutions that are based on equity and law. This informal governance, embedded in domestic social structures, prevents political systems and structures from functioning rationally and thus hinders an effective response to climate change. Therefore, domestic institutions and incentives are critical to the adoption of reforms.
The difficulty of social change: Developmental change in underdeveloped countries is painfully slow because of a series of long-term collective problems, including societies' incapacity to work collectively to improve wellbeing, the lack of technical and social ingenuity, and resistance to innovation and change. In the context of climate change, these problems significantly hinder the promotion of the climate change agenda. Taking a political economy view in underdeveloped countries could help to understand and create incentives that promote transformation and development, which lays a foundation for implementing a climate change adaptation agenda.
Research focuses and approaches
Brandt and Svendsen (2003) introduce a political economy framework, based on the political support function model of Hillman (1982), into the analysis of the choice of instruments to control climate change in European Union policy to implement its Kyoto Protocol target level. In this political economy framework, climate change policy is determined by the relative strength of stakeholder groups. By examining the different objectives of different interest groups, namely industry groups, consumer groups and environmental groups, the authors explain the complex interactions behind the choice of an instrument for EU climate change policy, specifically the shift from green taxation to a grandfathered permit system.
A report by the European Bank for Reconstruction and Development (EBRD) (2011) takes a political economy approach to explain why some countries adopt climate change policies while others do not, specifically among the countries in the transition region. This work analyzes the different political economy aspects of the characteristics of climate change policies so as to understand the likely factors driving climate change mitigation outcomes in many transition countries. The main conclusions are listed below:
The level of democracy alone is not a major driver of climate change policy adoption, which means that the expectations of contribution to global climate change mitigation are not necessarily limited by the political regime of a given country.
Public knowledge, shaped by various factors including the threat of climate change in a particular country, the national level of education and the existence of free media, is a critical element in climate change policy adoption, as countries whose public is more aware of the causes of climate change are significantly more likely to adopt climate change policies. The focus should, therefore, be on promoting public awareness of the urgent threat of climate change and on preventing information asymmetries in many transition countries.
The relative strength of the carbon-intensive industry is a major deterrent to the adoption of climate change policies, as it partly accounts for the information asymmetries. However, the carbon-intensive industries often influence governments' decision-making on climate change policy, which calls for a change in the incentives perceived by these industries and their transition to a low-carbon production pattern. Efficient means include energy price reform and the introduction of international carbon trading mechanisms.
A competitive edge gained by national economies in the transition region, in a global economy where increasing international pressure is placed on reducing emissions, would enhance their political regimes' domestic legitimacy, which could help to address the inherent economic weaknesses underlying the lack of economic diversification and the global economic crisis.
Tanner and Allouche (2011) propose a new conceptual and methodological framework for analyzing the political economy of climate change in their latest work, which focuses on climate change policy processes and outcomes in terms of ideas, power and resources. The new political economy approach is expected to go beyond the dominant political economy tools formulated by international development agencies to analyze climate change initiatives, which have ignored the way that ideas and ideologies determine policy outcomes. The authors assume that each of the three lenses, namely ideas, power and resources, tends to be predominant at one stage of the policy process of the political economy of climate change, with “ideas and ideologies predominant in the conceptualization phase, power in the negotiation phase and resource, institutional capacity and governance in the implementation phase”. It is argued that these elements are critical in the formulation of international climate change initiatives and their translation to national and sub-national policy context.
| Physical sciences | Climate change | Earth science |
2261920 | https://en.wikipedia.org/wiki/Zinc%20acetate | Zinc acetate | Zinc acetate is a salt with the formula Zn(CH3CO2)2, which commonly occurs as the dihydrate Zn(CH3CO2)2·2H2O. Both the hydrate and the anhydrous forms are colorless solids that are used as dietary supplements. When used as a food additive, it has the E number E650.
Uses
Zinc acetate is a component of some medicines, e.g., lozenges for treating the common cold. Zinc acetate can also be used as a dietary supplement. As an oral daily supplement it is used to inhibit the body's absorption of copper as part of the treatment for Wilson's disease. Zinc acetate is also sold as an astringent in the form of an ointment, a topical lotion, or combined with an antibiotic such as erythromycin for the topical treatment of acne. It is commonly sold as a topical anti-itch ointment.
Zinc acetate is used as the catalyst for the industrial production of vinyl acetate from acetylene and acetic acid:
CH3CO2H + C2H2 → CH3CO2CH=CH2
Approximately one third of the world's production uses this route, which, because of its environmental impact, is mainly practiced in countries with relaxed environmental regulations such as China.
Preparation
Zinc acetates are prepared by the action of acetic acid on zinc carbonate or zinc metal. Treatment of zinc nitrate with acetic anhydride is an alternative route.
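As a sketch of the stoichiometry (balanced textbook equations for the routes named above, reconstructed rather than reproduced from the source):
ZnCO3 + 2 CH3CO2H → Zn(CH3CO2)2 + H2O + CO2
Zn + 2 CH3CO2H → Zn(CH3CO2)2 + H2
In each case two equivalents of acetic acid supply the two acetate ligands, with the byproducts (water and carbon dioxide, or hydrogen gas) carrying away the displaced atoms.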
Structures
In anhydrous zinc acetate the zinc is coordinated to four oxygen atoms in a tetrahedral environment; these tetrahedral polyhedra are then interconnected by acetate ligands to give a range of polymeric structures.
In the dihydrate, zinc is octahedral, wherein both acetate groups are bidentate.
Reactions
Heating Zn(CH3CO2)2 in a vacuum results in a loss of acetic anhydride, leaving a residue of "basic zinc acetate," with the formula Zn4O(CH3CO2)6. It can also be prepared by a reaction of glacial acetic acid with zinc oxide. The cluster compound has a tetrahedral structure with an oxide ligand at its center. Basic zinc acetate is a common precursor to metal-organic frameworks (MOFs).
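As a stoichiometric check (a reconstruction from the formulas given above, not reproduced from the source), the thermal loss of acetic anhydride can be written:
4 Zn(CH3CO2)2 → Zn4O(CH3CO2)6 + (CH3CO)2O
Both sides balance: four zinc atoms, eight acetyl groups and sixteen oxygen atoms.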
| Physical sciences | Acetates | Chemistry |
2262550 | https://en.wikipedia.org/wiki/Thermodynamic%20limit | Thermodynamic limit | In statistical mechanics, the thermodynamic limit, or macroscopic limit, of a system is the limit for a large number of particles (e.g., atoms or molecules) where the volume is taken to grow in proportion with the number of particles.
The thermodynamic limit is defined as the limit of a system with a large volume, with the particle density held fixed.
In this limit, macroscopic thermodynamics is valid. There, thermal fluctuations in global quantities are negligible, and all thermodynamic quantities, such as pressure and energy, are simply functions of the thermodynamic variables, such as temperature and density. For example, for a large volume of gas, the fluctuations of the total internal energy are negligible and can be ignored, and the average internal energy can be predicted from knowledge of the pressure and temperature of the gas.
Note that not all types of thermal fluctuations disappear in the thermodynamic limit—only the fluctuations in system variables cease to be important.
There will still be detectable fluctuations (typically at microscopic scales) in some physically observable quantities, such as
microscopic spatial density fluctuations in a gas scatter light (Rayleigh scattering)
motion of visible particles (Brownian motion)
electromagnetic field fluctuations, (blackbody radiation in free space, Johnson–Nyquist noise in wires)
Mathematically an asymptotic analysis is performed when considering the thermodynamic limit.
Origin
The thermodynamic limit is essentially a consequence of the central limit theorem of probability theory. The internal energy of a gas of N molecules is the sum of order N contributions, each of which is approximately independent, and so the central limit theorem predicts that the ratio of the size of the fluctuations to the mean is of order 1/N^(1/2). Thus for a macroscopic volume with perhaps Avogadro's number of molecules, fluctuations are negligible, and so thermodynamics works. In general, almost all macroscopic volumes of gases, liquids and solids can be treated as being in the thermodynamic limit.
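To illustrate the 1/N^(1/2) scaling, the following minimal Monte Carlo sketch in Python treats the total energy as a sum of independent, uniformly distributed per-particle contributions; the toy model and the function name are assumptions for illustration, not a physical simulation.

    import random
    import statistics

    def relative_fluctuation(n_particles, n_samples=1000):
        """Estimate std/mean of a 'total energy' that is the sum of
        n_particles independent per-particle contributions."""
        totals = [sum(random.random() for _ in range(n_particles))
                  for _ in range(n_samples)]
        return statistics.stdev(totals) / statistics.mean(totals)

    for n in (10, 100, 1000):
        # Central limit theorem: the ratio shrinks roughly as 1/sqrt(n)
        print(n, round(relative_fluctuation(n), 4))

Running it shows the relative fluctuation dropping by roughly a factor of √10 for each tenfold increase in particle number.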
For small microscopic systems, different statistical ensembles (microcanonical, canonical, grand canonical) permit different behaviours. For example, in the canonical ensemble the number of particles inside the system is held fixed, whereas particle number can fluctuate in the grand canonical ensemble. In the thermodynamic limit, these global fluctuations cease to be important.
It is at the thermodynamic limit that the additivity property of macroscopic extensive variables is obeyed. That is, the entropy of two systems or objects taken together (as are their energy and volume) is the sum of the two separate values. In some models of statistical mechanics, the thermodynamic limit exists, but depends on boundary conditions. For example, this happens in the six-vertex model: the bulk free energy is different for periodic boundary conditions and for domain wall boundary conditions.
Inapplicability
A thermodynamic limit does not exist in all cases. Usually, a model is taken to the thermodynamic limit by increasing the volume together with the particle number while keeping the particle number density constant. Two common regularizations are the box regularization, where matter is confined to a geometrical box, and the periodic regularization, where matter is placed on the surface of a flat torus (i.e. box with periodic boundary conditions). However, the following three examples demonstrate cases where these approaches do not lead to a thermodynamic limit:
Particles with an attractive potential that (unlike the Van der Waals force between molecules) doesn't turn around and become repulsive even at very short distances: In such a case, matter tends to clump together instead of spreading out evenly over all the available space. This is the case for gravitational systems, where matter tends to clump into filaments, galactic superclusters, galaxies, stellar clusters and stars.
A system with a nonzero average charge density: In this case, periodic boundary conditions cannot be used because there is no consistent value for the electric flux. With a box regularization, on the other hand, matter tends to accumulate along the boundary of the box instead of being spread more or less evenly with only minor fringe effects.
Certain quantum mechanical phenomena near absolute zero temperature present anomalies; e.g., Bose–Einstein condensation, superconductivity and superfluidity.
Any system that is not H-stable; this case is also called catastrophic.
| Physical sciences | Thermodynamics | Physics |
2262599 | https://en.wikipedia.org/wiki/Cage | Cage | A cage is an enclosure often made of mesh, bars, or wires, used to confine, contain or protect something or someone. A cage can serve many purposes, including keeping an animal or person in captivity, capturing an animal or person, and displaying an animal at a zoo.
Construction
Since a cage is usually intended to hold living beings, at least some part of its structure must be such as to allow for the entry of light and air. Thus some cages may be made with bars spaced too closely together for the intended captive to slip between them, or with windows covered by a mesh of some sort.
Animal cages
Cages are often used to confine animals, and some are specially designed to fit a certain species of animal. One or more birds, rodents, reptiles, and even larger animals of certain breeds are sometimes confined in a cage as pets.
Animal cages have been a part of human culture since ancient times. For example, an Ancient Greek vase dated to 490 B.C. depicts a boy holding a possibly domesticated rabbit on his lap, with a cage with an open door in the background. The biblical Book of Jeremiah refers to a tribe being like "cages full of birds", and the Book of Ezekiel describes the capture of a lion in which the captors "pulled him into a cage and brought him to the king of Babylon".
The different laws governing the keeping of animals in captivity generally provide for the size of cages or minimum equipment, depending on the species, whether for transport or for breeding. Swiss legislation, for example, defines minimum absolute internal dimensions for pet cages, but the Swiss Animal Protection organization (PSA) states that even if these dimensions comply with the law, they are far from being in line with the needs of species. It is therefore necessary in practice to provide a much higher vital space to ensure the well-being of the occupants.
Animal protection associations have often argued for improving transport conditions in cages and for bans on battery cages, especially for egg-laying hens. The European legislation is constantly changing, but consumer behavior also influences breeding conditions.
Trapping
Cages also serve as trapping tools. This use is common, and is illegal when it constitutes poaching. Cages of this type are used to trap an animal, or to hold it for a certain period of time, and are well suited to capturing large animals; U.S. President Theodore Roosevelt himself used a cage to capture a bear.
Human cages
Punishment
Throughout history, prisoners have sometimes been kept in cages. During the Vietnam War such enclosures were referred to as "tiger cages". Captives would sometimes be chained inside in uncomfortable positions to intensify their suffering. In medieval England, King Edward I punished Robert the Bruce by having two of his female supporters encaged in public.
Safety
Safety cage, in automobile safety
Roll cage, a frame built in or around the cab of a vehicle
Shark-proof cage, used to protect divers
Entertainment
Cages are used in various forms of entertainment to create excitement from the sense that the entertainers are trapped in the cage. For example, cage dancing "refers to a scantily-clad feminine dancer, perhaps wearing a mini-skirt or hot-pants, and (supposedly) trapped inside of a hanging bird cage". Cage fighting involves two combatants, usually engaging in mixed martial arts, inside a cage-like structure, and "conjures up the image of two combatants trapped in a cage, trading vicious blows as the audience bays for blood". In Australia, a ban on the use of "cage-like enclosures" at such events was lifted in 2014. Steel cages are also one of the oldest forms of enclosures used in professional wrestling. The first "steel cage match" of any kind took place on June 25, 1937, in Atlanta, Georgia. This match took place in a ring surrounded by chicken wire, in order to keep the athletes inside, and prevent any potential interference.
Homes
Engineering
Rebar cages used in reinforced concrete
Cage (bearing) – a component of a rolling-element bearing
Gabion – a cage filled with coarse gravel or rock for use in civil engineering, road building, military applications and landscaping
Mine cage – similar to an elevator, for a shaft mine
Cage, a separated enclosure in a computer colocation centre
Faraday cage – an enclosure used to block electric fields
Other uses
Batting cage – an enclosure for baseball batting practice
Bottle cage – a bicycle bottle holder
Cage crinoline – a type of crinoline petticoat
Cage trolley – used for transporting goods
Fruit cage – used to protect fruit bushes from being eaten by birds
| Technology | Containers | null |
2263499 | https://en.wikipedia.org/wiki/Digital%20clock | Digital clock | A digital clock displays the time digitally (i.e. in numerals or other symbols), as opposed to an analogue clock.
Digital clocks are often associated with electronic drives, but the "digital" description refers only to the display, not to the drive mechanism. (Both analogue and digital clocks can be driven either mechanically or electronically, but "clockwork" mechanisms with digital displays are rare.)
History
The first digital pocket watch was the invention of Austrian engineer Josef Pallweber, who created his "jump-hour" mechanism in 1883. Instead of a conventional dial, the jump-hour featured two windows in an enamel dial, through which the hours and minutes were visible on rotating discs. The second hand remained conventional. By 1885, the Pallweber mechanism was already on the market in pocket watches by Cortébert and IWC, arguably contributing to the subsequent rise and commercial success of IWC. The principles of the Pallweber jump-hour movement had appeared in wristwatches by the 1920s (Cortébert) and are still used today (Chronoswiss Digiteur). While the original inventor did not have a watch brand at the time, his name has since been resurrected by a newly established watch manufacturer.
Plato clocks used a similar idea but a different layout. These spring-wound pieces consisted of a glass cylinder with a column inside, affixed to which were small digital cards with numbers printed on them, which flipped as time passed. The Plato clocks were introduced at the St. Louis World Fair in 1904, produced by Ansonia Clock Company. Eugene Fitch of New York patented the clock design in 1903.
Thirteen years earlier, Josef Pallweber had patented the same invention using digital cards (different from his 1885 patent using moving disks) in Germany (DRP No. 54093).
The German factory Aktiengesellschaft für Uhrenfabrikation Lenzkirch made such digital clocks in 1893 and 1894.
The earliest patent for a digital alarm clock was registered by D. E. Protzmann and others on October 23, 1956, in the United States. Protzmann and his associates also patented another digital clock in 1970, which was said to use a minimal amount of moving parts. Two side-plates held digital numerals between them, while an electric motor and cam gear outside controlled movement.
In 1970, the first digital wristwatch with an LED display was unveiled on The Tonight Show Starring Johnny Carson, although it was not released until 1972. Called the Pulsar, and produced by the Hamilton Watch Company, this watch had been hinted at two years earlier, when the same company created a non-functioning digital watch prop (with a main analogue face but a secondary digital display) for Kubrick's 2001: A Space Odyssey.
Construction
Digital clocks typically use the 50 or 60 hertz oscillation of AC power or a 32,768 hertz crystal oscillator as in a quartz clock to keep time. Most digital clocks display the hour of the day in 24-hour format; in the United States and a few other countries, a commonly used hour sequence option is 12-hour format (with some indication of AM or PM). Some timepieces, such as many digital watches, can be switched between 12-hour and 24-hour modes. Emulations of analog-style faces often use an LCD screen, and these are also sometimes described as "digital".
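A 32,768 Hz crystal is convenient because 32,768 = 2^15, so fifteen successive divide-by-two stages reduce the oscillation to exactly 1 Hz. A minimal Python sketch of that arithmetic (illustrative only, not a description of any particular circuit):

    freq = 32768  # Hz, the standard watch-crystal frequency
    stages = 0
    while freq > 1:
        freq //= 2   # each binary divider stage (flip-flop) halves the frequency
        stages += 1
    print(stages, freq)  # 15 stages -> 1 Hz, i.e. one tick per second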
Displays
To represent time, most digital clocks use a seven-segment LED, VFD, or LCD for each of the four digits. They generally also include other elements to indicate whether the time is AM or PM, whether or not an alarm is set, and so on. Older digital clocks used numbers painted on wheels, or a split-flap display. High-end digital clocks use dot matrix displays and use animations for digit changes.
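As an illustration of how such a display encodes digits, the sketch below uses the conventional segment labels a–g (a = top, b = upper right, c = lower right, d = bottom, e = lower left, f = upper left, g = middle); the dictionary and helper function are illustrative assumptions, not taken from any particular device.

    # Which segments light up for each decimal digit.
    SEGMENTS = {
        "0": "abcdef",  "1": "bc",     "2": "abged",  "3": "abgcd",
        "4": "fgbc",    "5": "afgcd",  "6": "afgedc", "7": "abc",
        "8": "abcdefg", "9": "abcfgd",
    }

    def lit_segments(time_str):
        """Return the lit segments for each digit of e.g. '12:34'."""
        return [SEGMENTS[ch] for ch in time_str if ch.isdigit()]

    print(lit_segments("12:34"))  # ['bc', 'abged', 'abgcd', 'fgbc']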
Setting
In some designs of electronic devices where the clock is not a critical function, setting the time can be difficult, and users may not set it at all, leaving the clock to display the default shown after power-on, typically 00:00 or 12:00.
Because they run on electricity, digital clocks often need to be reset whenever the power is cut off, even for a very brief period of time. This is a particular problem with alarm clocks that have no "battery" backup, because a power outage during the night usually prevents the clock from triggering the alarm in the morning.
To reduce the problem, many devices designed to operate on household electricity incorporate a battery backup to maintain the time during power outages and during times of disconnection from the power supply. More recently, some devices incorporate a method for automatically setting the time, such as using a broadcast radio time signal from an atomic clock, getting the time from an existing satellite television or computer connection, or by being set at the factory and then maintaining the time from then on with a quartz movement powered by an internal rechargeable battery.
Commercial digital clocks are typically more reliable than consumer clocks. Multi-decade backup batteries can be used to maintain time during power loss.
Uses
Because digital clocks can be very small and inexpensive, they are often incorporated into all kinds of devices such as cars, radios, televisions, microwave ovens, standard ovens, computers and cell phones. Sometimes their usefulness is disputed: a common complaint is that when time has to be set to Daylight Saving Time, many household clocks have to be readjusted. The incorporation of automatic synchronization by a radio time signal is reducing this problem (see Radio clock). Smart digital clocks, in addition to displaying time, scroll additional information such as weather and notifications.
| Technology | Clocks | null |
2263753 | https://en.wikipedia.org/wiki/Borate%20mineral | Borate mineral | Borate minerals are minerals which contain a borate anion group. The borate (BO3) units may be polymerised in a manner similar to the SiO4 unit of the silicate mineral class. This results in B2O5, B3O6, and B2O4 anions, as well as more complex structures which include hydroxide or halogen anions. The [B(O,OH)4]− anion exists as well.
Many borate minerals, such as borax, colemanite, and ulexite, are salts: soft, readily soluble, and found in evaporite contexts. However, some, such as boracite, are hard and resistant to weathering, more similar to the silicates.
There are over 100 different borate minerals. Borate minerals include:
Kernite Na2B4O6(OH)2·3H2O
Borax Na2B4O5(OH)4·8H2O
Ulexite NaCaB5O6(OH)6·5H2O
Colemanite CaB3O4(OH)3·H2O
Boracite Mg3B7O13Cl
Painite CaZrAl9O15(BO3)
Nickel–Strunz Classification -06- Borates
IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses it to modify the Classification of Nickel–Strunz (mindat.org, 10 ed, pending publication). Note that although Nickel–Strunz division letters were traditionally based on the number of boron atoms in a mineral's chemical formula (06.A are monoborates, 06.B are diborates, etc.), the IMA has reclassified borate minerals based on the polymerisation of the borate anion.
Abbreviations
REE: rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu)
PGE: platinum-group element (Ru, Rh, Pd, Os, Ir, Pt)
* : discredited (IMA/CNMNC status)
? : questionable/doubtful (IMA/CNMNC status)
Regarding 03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates:
neso-: insular (from Greek νησος nēsos, "island")
soro-: grouped (from Greek σωροῦ sōros, "heap, pile, mound")
cyclo-: rings of (from Greek κύκλος kyklos, "circle")
ino-: chained (from Greek ις is [genitive ινος inos], "fibre")
phyllo-: sheets of (from Greek φύλλον phyllon, "leaf")
tekto-: three-dimensional framework (from Greek τεκτονικός tektonikos, "of building")
Nickel–Strunz code scheme NN.XY.##x
NN: Nickel–Strunz mineral class number
X: Nickel–Strunz mineral division letter
Y: Nickel–Strunz mineral family letter
##x: Nickel–Strunz mineral/group number; x an add-on letter
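As a sketch of how the NN.XY.##x scheme just described can be handled programmatically, the following Python helper parses a code such as "06.AB.30"; the function name and the exact regular expression are assumptions built only from the structure described above.

    import re

    # NN = class number, X = division letter, Y = family letter,
    # ## = mineral/group number, x = optional add-on letter.
    NS_CODE = re.compile(r"^(\d{2})\.([A-Z])([A-Z])\.(\d{2})([a-z]?)$")

    def parse_nickel_strunz(code):
        m = NS_CODE.match(code)
        if m is None:
            raise ValueError("not a NN.XY.##x code: " + repr(code))
        nn, x, y, num, addon = m.groups()
        return {"class": int(nn), "division": x, "family": y,
                "number": int(num), "addon": addon or None}

    print(parse_nickel_strunz("06.AB.30"))
    # {'class': 6, 'division': 'A', 'family': 'B', 'number': 30, 'addon': None}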
Class: borates
06. Alfredstelznerite
06.A Monoborates
06.AA BO3, without additional anions; 1(D): 05 Sassolite; 15 Nordenskioldine, 15 Tusionite; 35 Jimboite, 35 Kotoite; 40 Takedaite
06.AB BO3, with additional anions; 1(D) + OH, etc.: 05 Hambergite, 10 Berborite, 15 Jeremejevite; 20 Warwickite, 20 Yuanfuliite; 25 Karlite; 30 Azoproite, 30 Bonaccordite, 30 Fredrikssonite, 30 Ludwigite, 30 Vonsenite; 35 Pinakiolite; 40 Blatterite, 40 Orthopinakiolite, 40 Takeuchiite, 40 Chestermanite; 45 Hulsite, 45 Magnesiohulsite, 45 Aluminomagnesiohulsite; 50 Hydroxylborite, 50 Fluoborite; 55 Shabynite, 55 Wightmanite; 60 Gaudefroyite, 65 Sakhaite, 70 Harkerite; 75 IMA2008-060, 75 Pertsevite; 80 Jacquesdietrichite, 85 Painite
06.AC B(O,OH)4, without and with additional anions; 1(T), 1(T)+OH, etc.: 05 Sinhalite; 10 Pseudosinhalite; 15 Béhierite, 15 Schiavinatoite; 20 Frolovite; 25 Hexahydroborite; 30 Henmilite; 35 Bandylite; 40 Teepleite; 45 Moydite-(Y); 50 Carboborite; 55 Sulfoborite; 60 Luneburgite; 65 Seamanite; 70 Cahnite
06.H Unclassified Borates
06.HA Unclassified borates: 05 Chelkarite; 10 Braitschite-(Ce); 15 Satimolite; 20 Iquiqueite; 25 Wardsmithite; 30 Korzhinskite; 35 Halurgite; 40 Ekaterinite; 45 Vitimite; 50 Canavesite; 55 Qilianshanite
Subclass: nesoborates
06.BA Neso-diborates with double triangles B2(O,OH)5; 2(2D); 2(2D) + OH, etc.: 05 Suanite; 10 Clinokurchatovite, 10 Kurchatovite; 15 Sussexite, 15 Szaibelyite; 20 Wiserite
06.BB Neso-diborates with double tetrahedra B2O(OH)6; 2(2T): 05 Pinnoite; 10 Pentahydroborite
06.CA Neso-triborates: 10 Ameghinite; 15 Inderite; 20 Kurnakovite; 25 Inderborite; 30 Meyerhofferite; 35 Inyoite; 40 Solongoite; 45 Peprossiite-(Ce); 50 Nifontovite; 55 Olshanskyite
06.DA Neso-tetraborates: 10 Borax; 15 Tincalconite; 20 Hungchaoite; 25 Fedorovskite, 25 Roweite; 30 Hydrochlorborite; 35 Uralborite; 40 Numanoite, 40 Borcarite; 60 Fontarnauite
06.EA Neso-pentaborates: 05 Sborgite; 10 Ramanite-(Rb), 10 Ramanite-(Cs), 10 Santite; 15 Ammonioborite; 25 Ulexite
06.FA Neso-hexaborates: 05 Aksaite; 10 Mcallisterite; 15 Admontite; 20 Rivadavite; 25 Teruggite
Subclass: inoborates
06.BC Ino-diborates with triangles and/or tetrahedra: 10 Calciborite, 10 Aldzhanite*; 15 Vimsite; 20 Sibirskite, 20 Parasibirskite
06.CB Ino-triborates: 10 Colemanite; 15 Hydroboracite; 20 Howlite; 25 Jarandolite
06.DB Ino-tetraborates: 05 Kernite
06.EB Ino-pentaborates: 05 Larderellite; 10 Ezcurrite; 15 Probertite; 20 Tertschite; 25 Priceite
06.FB Ino-hexaborates: 05 Aristarainite; 10 Kaliborite
Subclass: phylloborates
06.CC Phyllo-triborates: 05 Johachidolite
06.EC Phyllo-pentaborates: 05 Biringuccite, 05 Nasinite; 10 Gowerite; 15 Veatchite, 15 Veatchite-A, 15 Veatchite-p; 20 Volkovskite; 25 Tuzlaite; 30 Heidornite; 35 Brianroulstonite
06.FC Phyllo-hexaborates: 05 Nobleite, 05 Tunellite, 05 Balavinskite; 10 Strontioborite; 15 Ginorite, 15 Strontioginorite; 20 Fabianite
06.GB Phyllo-nonaborates, etc.: 05 Studenitsite; 10 Penobsquisite; 15 Preobrazhenskite; 20 Walkerite
Subclass: tektoborates
06.BD Tektodiborates with tetrahedra: 05 Santarosaite
06.DD Tekto-tetraborates: 05 Diomignite
06.ED Tekto-pentaborates: 05 IMA2007-047, 05 Tyretskite, 05 Hilgardite, 05 Kurgantaite
06.GA Tekto-heptaborates: 05 Boracite, 05 Chambersite, 05 Ericaite; 10 Congolite, 10 Trembathite
06.GC Tekto-dodecaborates: 05 Rhodizite, 05 Londonite
06.GD Mega-tektoborates: 05 Ruitenbergite, 05 Pringleite; 10 Metaborite
| Physical sciences | Minerals | Earth science |
2263803 | https://en.wikipedia.org/wiki/Phosphate%20mineral | Phosphate mineral | Phosphate minerals are minerals that contain the tetrahedrally coordinated phosphate () anion, sometimes with arsenate () and vanadate () substitutions, along with chloride (Cl−), fluoride (F−), and hydroxide (OH−) anions, that also fit into the crystal structure.
The phosphate class of minerals is a large and diverse group; however, only a few species are relatively common.
Applications
Phosphate rock has a high concentration of phosphate minerals, most commonly from the apatite group. It is the major resource mined to produce phosphate fertilizers for the agricultural industry. Phosphate is also used in animal feed supplements, food preservatives, anti-corrosion agents, cosmetics, fungicides, ceramics, water treatment and metallurgy.
Fertilizer production accounts for the largest share of minerals mined for their phosphate content.
Phosphate minerals are often used to control rust and to prevent corrosion on ferrous metals by means of electrochemical conversion coatings.
Examples
Phosphate minerals include:
Triphylite Li(Fe,Mn)PO4
Monazite (La, Y, Nd, Sm, Gd, Ce,Th)PO4, rare earth metals
Hinsdalite PbAl3(PO4)(SO4)(OH)6
Pyromorphite Pb5(PO4)3Cl
Amblygonite LiAlPO4F
Lazulite (Mg,Fe)Al2(PO4)2(OH)2
Wavellite Al3(PO4)2(OH)3·5H2O
Turquoise CuAl6(PO4)4(OH)8·5H2O
Autunite Ca(UO2)2(PO4)2·10–12H2O
Phosphophyllite Zn2(Fe,Mn)(PO4)2·4H2O
Struvite (NH4)MgPO4·6H2O
Xenotime-Y Y(PO4)
Apatite group Ca5(PO4)3(F,Cl,OH)
Hydroxylapatite Ca5(PO4)3OH
Fluorapatite Ca5(PO4)3F
Chlorapatite Ca5(PO4)3Cl
Bromapatite
Mitridatite group:
Arseniosiderite-mitridatite series (Ca2(Fe3+)3[(O)2|(AsO4)3]·3H2O -- Ca2(Fe3+)3[(O)2|(PO4)3]·3H2O)
Arseniosiderite-robertsite series (Ca2(Fe3+)3[(O)2|(AsO4)3]·3H2O -- Ca3(Mn3+)4[(OH)3|(PO4)2]2·3H2O)
Nickel–Strunz classification -08- phosphates
IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses it to modify the classification of Nickel–Strunz (mindat.org, 10 ed, pending publication).
Abbreviations:
"*" – discredited (IMA/CNMNC status).
"?" – questionable/doubtful (IMA/CNMNC status).
"REE" – Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu)
"PGE" – Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt)
03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates:
Neso: insular (from Greek νησος nēsos, island)
Soro: grouping (from Greek σωροῦ sōros, heap, mound (especially of corn))
Cyclo: ring
Ino: chain (from Greek ις [genitive: ινος inos], fibre)
Phyllo: sheet (from Greek φύλλον phyllon, leaf)
Tekto: three-dimensional framework
Nickel–Strunz code scheme: NN.XY.##x
NN: Nickel–Strunz mineral class number
X: Nickel–Strunz mineral division letter
Y: Nickel–Strunz mineral family letter
##x: Nickel–Strunz mineral/group number, x add-on letter
Class: phosphates
08.A Phosphates, etc. without additional anions, without H2O
08.AA With small cations (some also with larger ones): 05 Berlinite, 05 Rodolicoite; 10 Beryllonite, 15 Hurlbutite, 20 Lithiophosphate, 25 Nalipoite, 30 Olympite
08.AB With medium-sized cations: 05 Farringtonite; 10 Ferrisicklerite, 10 Heterosite, 10 Natrophilite, 10 Lithiophilite, 10 Purpurite, 10 Sicklerite, 10 Simferite, 10 Triphylite; 15 Chopinite, 15 Sarcopside; 20 Beusite, 20 Graftonite
08.AC With medium-sized and large cations: 10 IMA2008-054, 10 Alluaudite, 10 Hagendorfite, 10 Ferroalluaudite, 10 Maghagendorfite, 10 Varulite, 10 Ferrohagendorfite*; 15 Bobfergusonite, 15 Ferrowyllieite, 15 Qingheiite, 15 Rosemaryite, 15 Wyllieite, 15 Ferrorosemaryite; 20 Maricite, 30 Brianite, 35 Vitusite-(Ce); 40 Olgite?, 40 Bario-olgite; 45 Ferromerrillite, 45 Bobdownsite, 45 Merrillite-(Ca)*, 45 Merrillite, 45 Merrillite-(Y)*, 45 Whitlockite, 45 Tuite, 45 Strontiowhitlockite; 50 Stornesite-(Y), 50 Xenophyllite, 50 Fillowite, 50 Chladniite, 50 Johnsomervilleite, 50 Galileiite; 55 Harrisonite, 60 Kosnarite, 65 Panethite, 70 Stanfieldite, 90 IMA2008-064
08.AD With only large cations: 05 Nahpoite, 10 Monetite, 15 Archerite, 15 Biphosphammite; 20 Phosphammite, 25 Buchwaldite; 35 Pretulite, 35 Xenotime-(Y), 35 Xenotime-(Yb); 45 Ximengite, 50 Monazite-(Ce), 50 Monazite-(La), 50 Monazite-(Nd), 50 Monazite-(Sm), 50 Brabantite?
08.B Phosphates, etc. with Additional Anions, without H2O
08.BA With small and medium-sized cations: 05 Vayrynenite; 10 Hydroxylherderite, 10 Herderite; 15 Babefphite
08.BB With only medium-sized cations, (OH, etc.):RO4 £1:1: 05 Amblygonite, 05 Natromontebrasite?, 05 Montebrasite?, 05 Tavorite; 10 Zwieselite, 10 Triplite, 10 Magniotriplite?, 10 Hydroxylwagnerite; 15 Joosteite, 15 Stanekite, 15 Triploidite, 15 Wolfeite, 15 Wagnerite; 20 Satterlyite, 20 Holtedahlite; 25 Althausite; 30 Libethenite, 30 Zincolibethenite; 35 Tarbuttite; 40 Barbosalite, 40 Hentschelite, 40 Scorzalite, 40 Lazulite; 45 Trolleite, 55 Phosphoellenbergerite; 90 Zinclipscombite, 90 Lipscombite, 90 Richellite
08.BC With only medium-sized cations, (OH, etc.):RO4 > 1:1 and < 2:1: 10 Plimerite, 10 Frondelite, 10 Rockbridgeite
08.BD With only medium-sized cations, (OH, etc.):RO4 = 2:1: 05 Pseudomalachite, 05 Reichenbachite, 10 Gatehouseite, 25 Ludjibaite
08.BE With only medium-sized cations, (OH, etc.):RO4 > 2:1: 05 Augelite, 10 Grattarolaite, 15 Cornetite, 30 Raadeite, 85 Waterhouseite
08.BF With medium-sized and large cations, (OH, etc.):RO4 < 0.5:1: 05 Arrojadite, 05 Arrojadite-(BaFe), 05 Arrojadite-(KFe), 05 Arrojadite-(NaFe), 05 Arrojadite-(SrFe), 05 Arrojadite-(KNa), 05 Arrojadite-(PbFe), 05 Arrojadite-(BaNa), 05 Fluorarrojadite-(BaNa), 05 Fluorarrojadite-(KNa), 05 Fluorarrojadite-(BaFe), 05 Ferri-arrojadite-(BaNa), 05 Dickinsonite, 05 Dickinsonite-(KNa), 05 Dickinsonite-(KMnNa), 05 Dickinsonite-(KNaNa), 05 Dickinsonite-(NaNa); 10 Samuelsonite, 15 Griphite, 20 Nabiasite
08.BG With medium-sized and large cations, (OH, etc.):RO4 = 0.5:1: 05 Bearthite, 05 Goedkenite, 05 Tsumebite; 10 Melonjosephite, 15 Tancoite
08.BH With medium-sized and large cations, (OH,etc.):RO4 = 1:1: 05 Thadeuite; 10 Lacroixite, 10 Isokite, 10 Panasqueiraite; 15 Drugmanite; 20 Bjarebyite, 20 Kulanite, 20 Penikisite, 20 Perloffite, 20 Johntomaite; 25 Bertossaite, 25 Palermoite; 55 Jagowerite, 60 Attakolite
08.BK With medium-sized and large cations, (OH, etc.): 05 Brazilianite, 15 Curetonite, 25 Lulzacite
08.BL With medium-sized and large cations, (OH, etc.):RO4 = 3:1: 05 Corkite, 05 Hinsdalite, 05 Orpheite, 05 Woodhouseite, 05 Svanbergite; 10 Kintoreite, 10 Benauite, 10 Crandallite, 10 Goyazite, 10 Springcreekite, 10 Gorceixite; 10 Lusungite?, 10 Plumbogummite, 10 Ferrazite?; 13 Eylettersite, 13 Florencite-(Ce), 13 Florencite-(La), 13 Florencite-(Nd), 13 Waylandite, 13 Zairite; 15 Viitaniemiite, 20 Kuksite, 25 Pattersonite
08.BM With medium-sized and large cations, (OH, etc.):RO4 = 4:1: 10 Paulkellerite, 15 Brendelite
08.BN With only large cations, (OH, etc.):RO4 = 0.33:1: 05 IMA2008-068, 05 Phosphohedyphane, 05 IMA2008-009, 05 Alforsite, 05 Apatite*, 05 Apatite-(CaOH), 05 Apatite-(CaCl), 05 Apatite-(CaF), 05 Apatite-(SrOH), 05 Apatite-(CaOH)-M, Carbonate-fluorapatite?, 05 Carbonate-hydroxylapatite?, 05 Belovite-(Ce), 05 Belovite-(La), 05 Fluorcaphite, 05 Pyromorphite, 05 Hydroxylpyromorphite, 05 Deloneite-(Ce), 05 Kuannersuite-(Ce), 10 Arctite
08.BO With only large cations, (OH, etc.):RO4 1:1: 05 Nacaphite, 10 Petitjeanite, 15 Smrkovecite, 25 Heneuite, 30 Nefedovite, 40 Artsmithite
08.C Phosphates without Additional Anions, with H2O
08.CA With small and large/medium cations: 05 Fransoletite, 05 Parafransoletite; 10 Ehrleite, 15 Faheyite; 20 Gainesite, 20 Mccrillisite, 20 Selwynite; 25 Pahasapaite, 30 Hopeite, 40 Phosphophyllite; 45 Parascholzite, 45 Scholzite; 65 Gengenbachite, 70 Parahopeite
08.CB With only medium-sized cations, RO4:H2O = 1:1: 05 Serrabrancaite, 10 Hureaulite
08.CC With only medium-sized cations, RO4:H2O = 1:1.5: 05 Garyansellite, 05 Kryzhanovskite, 05 Landesite, 05 Phosphoferrite, 05 Reddingite
08.CD With only medium-sized cations, RO4:H2O = 1:2: 05 Kolbeckite, 05 Metavariscite, 05 Phosphosiderite; 10 Strengite, 10 Variscite; 20 Ludlamite
08.CE With only medium-sized cations, RO4:H2O £1:2.5: 10 Newberyite, 20 Phosphorrosslerite; 25 Metaswitzerite, 25 Switzerite; 35 Bobierrite; 40 Arupite, 40 Baricite, 40 Vivianite, 40 Pakhomovskyite; 50 Cattiite, 55 Koninckite; 75 IMA2008-046, 75 Malhmoodite; 80 Santabarbaraite, 85 Metavivianite
08.CF With large and medium-sized cations, RO4:H2O > 1:1: 05 Tassieite, 05 Wicksite, 05 Bederite; 10 Haigerachite
08.CG With large and medium-sized cations, RO4:H2O = 1:1: 05 Collinsite, 05 Cassidyite, 05 Fairfieldite, 05 Messelite, 05 Hillite, (05 Uranophane-beta but Uranophane 09.AK.15); 20 Phosphogartrellite
08.CH With large and medium-sized cations, RO4:H2O < 1:1: 10 Anapaite, 20 Dittmarite, 20 Niahite, 25 Francoanellite, 25 Taranakite, 30 Schertelite, 35 Hannayite, 40 Hazenite, 40 Struvite, 40 Struvite-(K), 45 Rimkorolgite, 50 Bakhchisaraitsevite, 55 IMA2008-048
08.CJ With only large cations: 05 Stercorite, 10 Mundrabillaite, 10 Swaknoite, 15 Nastrophite, 15 Nabaphite, 45 Brockite, 45 Grayite, 45 Rhabdophane-(Ce), 45 Rhabdophane-(La), 45 Rhabdophane-(Nd), 45 Tristramite, 50 Brushite, 50 Churchite-(Dy)*, 50 Churchite-(Nd), 50 Churchite-(Y), 50 Ardealite, 60 Dorfmanite, 70 Catalanoite, 80 Ningyoite
08.D Phosphates
08.DA With small (and occasionally larger) cations: 05 Moraesite, 10 Footemineite, 10 Ruifrancoite, 10 Guimaraesite, 10 Roscherite, 10 Zanazziite, 10 Atencioite, 10 Greifensteinite; 15 Uralolite, 20 Weinebeneite, 25 Tiptopite, 30 Veszelyite, 35 Kipushite, 40 Spencerite, 45 Glucine
08.DB With only medium-sized cations, (OH, etc.):RO4 < 1:1: 05 Diadochite, 10 Vashegyite, 15 Schoonerite, 20 Sinkankasite, 25 Mitryaevaite, 30 Sanjuanite, 50 Giniite, 55 Sasaite, 60 Mcauslanite, 65 Goldquarryite, 70 Birchite
08.DC With only medium-sized cations, (OH, etc.):RO4 = 1:1 and < 2:1: 05 Nissonite; 15 Kunatite, 15 Earlshannonite, 15 Whitmoreite; 17 Kleemanite, 20 Bermanite, 207? Oxiberaunite*, 22 Kovdorskite; 25 Ferrostrunzite, 25 Ferristrunzite, 25 Metavauxite, 25 Strunzite; 27 Beraunite; 30 Gordonite, 30 Laueite, 30 Sigloite, 30 Paravauxite, 30 Ushkovite, 30 Ferrolaueite, 30 Mangangordonite, 30 Pseudolaueite, 30 Stewartite, 30 Kastningite, 35 Vauxite, 37 Vantasselite, 40 Cacoxenite; 45 Gormanite, 45 Souzalite; 47 Kingite; 50 Wavellite, 50 Allanpringite, 52 Kribergite, 60 Nevadaite
08.DD With only medium-sized cations, (OH, etc.):RO4 = 2:1: 15 Aheylite, 15 Chalcosiderite, 15 Faustite, 15 Planerite, 15 Turquoise; 20 Ernstite, 20 Childrenite, 20 Eosphorite
08.DE With only medium-sized cations, (OH, etc.):RO4 = 3:1: 05 Senegalite, 10 Fluellite, 20 Zapatalite, (35 Alumoakermanite, Mindat.org: 09.BB.10), 35 Aldermanite
08.DF With only medium-sized cations, (OH,etc.):RO4 > 3:1: 05 Hotsonite-VII, 05 Hotsonite-VI; 10 Bolivarite, 10 Evansite, 10 Rosieresite, 25 Sieleckiite, 40 Gladiusite
08.DG With large and medium-sized cations, (OH, etc.):RO4 < 0.5:1: 05 Sampleite
08.DH With large and medium-sized cations, (OH, etc.):RO4 < 1:1: 05 Minyulite; 10 Leucophosphite, 10 Spheniscidite, 10 Tinsleyite; 15 Kaluginite*, 15 Keckite, 15 Jahnsite-(CaMnFe), 15 Jahnsite-(CaMnMg), 15 Jahnsite-(CaMnMn), 15 Jahnsite-(MnMnMn)*, 15 Jahnsite-(CaFeFe), 15 Jahnsite-(NaFeMg), 15 Jahnsite-(CaMgMg), 15 Jahnsite-(NaMnMg), 15 Rittmannite, 15 Whiteite-(MnFeMg), 15 Whiteite-(CaFeMg), 15 Whiteite-(CaMnMg); 20 Manganosegelerite, 20 Overite, 20 Segelerite, 20 Wilhelmvierlingite, 20 Juonniite; 25 Calcioferrite, 25 Kingsmountite, 25 Montgomeryite, 25 Zodacite; 30 Lunokite, 30 Pararobertsite, 30 Robertsite, 30 Mitridatite; 35 Matveevite?, 35 Mantienneite, 35 Paulkerrite, 35 Benyacarite, 40 Xanthoxenite, 55 Englishite
08.DJ With large and medium-sized cations, (OH, etc.):RO4 = 1:l: 05 Johnwalkite, 05 Olmsteadite, 10 Gatumbaite, 20 Meurigite-Na, 20 Meurigite-K, 20 Phosphofibrite, 25 Jungite, 30 Wycheproofite, 35 Ercitite, 40 Mrazekite
08.DK With large and medium-sized cations, (OH, etc.):RO4 > 1:1 and < 2:1: 15 Matioliite, 15 IMA2008-056, 15 Dufrenite, 15 Burangaite, 15 Natrodufrenite; 20 Kidwellite, 25 Bleasdaleite, 30 Matulaite, 35 Krasnovite
08.DL With large and medium-sized cations, (OH, etc.):RO4 = 2:1: 05 Foggite; 10 Cyrilovite, 10 Millisite, 10 Wardite; 15 Petersite-(Y), 15 Calciopetersite; 25 Angastonite
08.DM With large and medium-sized cations, (OH, etc.):RO4 > 2:1: 05 Morinite, 15 Melkovite, 25 Gutsevichite?, 35 Delvauxite
08.DN With only large cations: 05 Natrophosphate, 10 Isoclasite, 15 Lermontovite, 20 Vyacheslavite
08.DO With CO3, SO4, SiO4: 05 Girvasite, 10 Voggite, 15 Peisleyite, 20 Perhamite, 25 Saryarkite-(Y), 30 Micheelsenite, 40 Parwanite, 45 Skorpionite
08.E Uranyl Phosphates
08.EA UO2:RO4 = 1:2: 05 Phosphowalpurgite, 10 Parsonsite, 15 Ulrichite, 20 Lakebogaite
08.EB UO2:RO4 = 1:1: 05 Autunite, 05 Uranocircite, 05 Torbernite, 05 Xiangjiangite, 05 Saleeite; 10 Bassetite, 10 Meta-autunite, 10 Metauranocircite, 10 Metatorbernite, 10 Lehnerite, 10 Przhevalskite; 15 Chernikovite, 15 Meta-ankoleite, 15 Uramphite; 20 Threadgoldite, 25 Uranospathite, 30 Vochtenite, 35 Coconinoite, 40 Ranunculite, 45 Triangulite, 50 Furongite, 55 Sabugalite
08.EC UO2:RO4 = 3:2: 05 Francoisite-(Ce), 05 Francoisite-(Nd), 05 Phuralumite, 05 Upalite; 10 Kivuite?, 10 Yingjiangite, 10 Renardite, 10 Dewindtite, 10 Phosphuranylite; 15 Dumontite; 20 Metavanmeersscheite, 20 Vanmeersscheite; 25 Althupite, 30 Mundite, 35 Phurcalite, 40 Bergenite
08.ED Unclassified: 05 Moreauite, 10 Sreinite, 15 Kamitugaite
08.F Polyphosphates
08.FA Polyphosphates, without OH and H2O; dimers of corner-sharing RO4 tetrahedra: 20 Pyrocoproite*, 20 Pyrophosphite*
08.FC Polyphosphates, with H2O only: 10 Canaphite, 20 Arnhemite*, 25 Wooldridgeite, 30 Kanonerovite
08.X Unclassified Strunz Phosphates
08.XX Unknown: 00 Sodium-autunite, 00 Pseudo-autunite*, 00 Cheralite-(Ce)?, 00 Laubmannite?, 00 Spodiosite?, 00 Sodium meta-autunite, 00 Kerstenite?, 00 Lewisite, 00 Coeruleolactite, 00 Viseite, 00 IMA2009-005
| Physical sciences | Minerals | Earth science |
13612447 | https://en.wikipedia.org/wiki/Repeating%20decimal | Repeating decimal | A repeating decimal or recurring decimal is a decimal representation of a number whose digits are eventually periodic (that is, after some place, the same sequence of digits is repeated forever); if this sequence consists only of zeros (that is if there is only a finite number of nonzero digits), the decimal is said to be terminating, and is not considered as repeating.
It can be shown that a number is rational if and only if its decimal representation is repeating or terminating. For example, the decimal representation of 1/3 becomes periodic just after the decimal point, repeating the single digit "3" forever, i.e. 0.333.... A more complicated example is 3227/555, whose decimal becomes periodic at the second digit following the decimal point and then repeats the sequence "144" forever, i.e. 5.8144144144.... Another example of this is 593/53, which becomes periodic after the decimal point, repeating the 13-digit pattern "1886792452830" forever, i.e. 11.18867924528301886792452830....
The infinitely repeated digit sequence is called the repetend or reptend. If the repetend is a zero, this decimal representation is called a terminating decimal rather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros. Every terminating decimal representation can be written as a decimal fraction, a fraction whose denominator is a power of 10 (e.g. 1.585 = 1585/1000); it may also be written as a ratio of the form k/(2^n·5^m) (e.g. 1.585 = 317/(2^3·5^2)). However, every number with a terminating decimal representation also trivially has a second, alternative representation as a repeating decimal whose repetend is the digit 9. This is obtained by decreasing the final (rightmost) non-zero digit by one and appending a repetend of 9. Two examples of this are 1.000... = 0.999... and 1.585000... = 1.584999.... (This type of repeating decimal can be obtained by long division if one uses a modified form of the usual division algorithm.)
Any number that cannot be expressed as a ratio of two integers is said to be irrational. Their decimal representation neither terminates nor infinitely repeats, but extends forever without repetition. Examples of such irrational numbers are √2 and π.
Background
Notation
There are several notational conventions for representing repeating decimals. None of them are accepted universally.
Vinculum: In the United States, Canada, India, France, Germany, Italy, Switzerland, the Czech Republic, Slovakia, Slovenia, Chile, and Turkey, the convention is to draw a horizontal line (a vinculum) above the repetend.
Dots: In some Islamic countries, such as Malaysia, Morocco, Pakistan, Tunisia, Iran, Algeria and Egypt, as well as the United Kingdom, New Zealand, Australia, South Africa, Japan, Thailand, India, South Korea, Singapore, and the People's Republic of China, the convention is to place dots above the outermost numerals of the repetend.
Parentheses: In parts of Europe, incl. Austria, Denmark, Finland, the Netherlands, Norway, Poland, Russia and Ukraine, as well as Vietnam and Israel, the convention is to enclose the repetend in parentheses. This can cause confusion with the notation for standard uncertainty.
Arc: In Spain and some Latin American countries, such as Argentina, Brazil, and Mexico, the arc notation over the repetend is also used as an alternative to the vinculum and the dots notation.
Ellipsis: Informally, repeating decimals are often represented by an ellipsis (three periods, 0.333...), especially when the previous notational conventions are first taught in school. This notation introduces uncertainty as to which digits should be repeated and even whether repetition is occurring at all, since such ellipses are also employed for irrational numbers; , for example, can be represented as 3.14159....
In English, there are various ways to read repeating decimals aloud. For example, 1.2 with repetend 34 (that is, 1.2343434...) may be read "one point two repeating three four", "one point two repeated three four", "one point two recurring three four", "one point two repetend three four" or "one point two into infinity three four". Likewise, 11. with the 13-digit repetend 1886792452830 may be read "eleven point repeating one double eight six seven nine two four five two eight three zero", with the same variants ("repeated", "recurring", "repetend", "into infinity") as above.
Decimal expansion and recurrence sequence
In order to convert a rational number represented as a fraction into decimal form, one may use long division. For example, consider the rational number 5/74:

        0.0675...
   74 ) 5.00000
        4 44
          560
          518
           420
           370
            500

etc. Observe that at each step we have a remainder; the successive remainders displayed above are 56, 42, 50. When we arrive at 50 as the remainder, and bring down the "0", we find ourselves dividing 500 by 74, which is the same problem we began with. Therefore, the decimal repeats: 5/74 = 0.0675675675....
For any integer fraction A/B, the remainder at step k, for any positive integer k, is A × 10^k (modulo B).
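This remainder-tracking procedure can be mechanized directly. A minimal Python sketch follows (the function name is an assumption for illustration); it performs the long division above, recording remainders until one repeats or the division terminates.

    def decimal_expansion(a, b):
        """Long-divide a/b (with 0 <= a < b) and return (non-repeating
        digits, repetend); the repetend is '' when the expansion terminates."""
        digits, seen, r = [], {}, a
        while r != 0 and r not in seen:
            seen[r] = len(digits)        # where this remainder first appeared
            r *= 10
            digits.append(str(r // b))   # next quotient digit
            r %= b                       # next remainder
        if r == 0:
            return "".join(digits), ""
        start = seen[r]                  # a repeated remainder starts the cycle
        return "".join(digits[:start]), "".join(digits[start:])

    print(decimal_expansion(5, 74))   # ('0', '675'), i.e. 0.0675675675...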
Every rational number is either a terminating or repeating decimal
For any given divisor, only finitely many different remainders can occur. In the example above, the 74 possible remainders are 0, 1, 2, ..., 73. If at any point in the division the remainder is 0, the expansion terminates at that point. Then the length of the repetend, also called "period", is defined to be 0.
If 0 never occurs as a remainder, then the division process continues forever, and eventually, a remainder must occur that has occurred before. The next step in the division will yield the same new digit in the quotient, and the same new remainder, as the previous time the remainder was the same. Therefore, the following division will repeat the same results. The repeating sequence of digits is called "repetend" which has a certain length greater than 0, also called "period".
In base 10, a fraction has a repeating decimal if and only if, in lowest terms, its denominator has any prime factors besides 2 or 5; in other words, the denominator cannot be expressed as 2^m 5^n, where m and n are non-negative integers.
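Equivalently, in code one can reduce the fraction and strip the factors 2 and 5 from the denominator; the expansion terminates exactly when nothing else remains. A minimal sketch (function name assumed for illustration):

    from math import gcd

    def terminates_base10(a, b):
        """True iff a/b has a terminating decimal expansion."""
        b //= gcd(a, b)          # reduce to lowest terms
        for p in (2, 5):
            while b % p == 0:    # remove the prime factors of 10
                b //= p
        return b == 1

    print(terminates_base10(3, 8), terminates_base10(5, 74))  # True False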
Every repeating or terminating decimal is a rational number
Each repeating decimal number satisfies a linear equation with integer coefficients, and its unique solution is a rational number. In the example above, α = 5.8144144144... satisfies the equation

10000α − 10α = 58144.144144... − 58.144144...
9990α = 58086

Therefore, α = 58086/9990 = 3227/555.
The process of how to find these integer coefficients is described below.
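The manipulation can also be mechanized with Python's fractions module; in the sketch below the argument convention (a non-repeating prefix such as "5.8" plus a repetend such as "144") is an assumption for illustration.

    from fractions import Fraction

    def from_repeating(prefix, repetend):
        """Convert e.g. prefix='5.8', repetend='144' (5.8144144...) to a Fraction."""
        intpart, _, frac = prefix.partition(".")
        n, m = len(frac), len(repetend)
        shifted = int(intpart + frac + repetend)   # digits through one full repetend, e.g. 58144
        base = int(intpart + frac)                 # digits before the repetend, e.g. 58
        # 10^(n+m)·x − 10^n·x = shifted − base, so:
        return Fraction(shifted - base, 10**(n + m) - 10**n)

    print(from_repeating("5.8", "144"))   # 3227/555

Fraction reduces 58086/9990 to 3227/555 automatically, matching the derivation above.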
Formal proof
Given a repeating decimal x = a.b(c), where a, b, and c are groups of digits and the group c repeats, let k be the number of digits of b. Multiplying by 10^k separates the repeating and terminating groups:

10^k · x = ab.(c)

If the decimals terminate (c is empty), the proof is complete. For c with n digits, let y = 0.(c), so that 10^k · x = ab + y, where ab is a terminating group of digits read as an integer. Then

y = c · (10^−n + 10^−2n + 10^−3n + ...),

where c denotes the repetend read as an integer, and since 10^−n < 1 the geometric series converges:

y = c · 10^−n / (1 − 10^−n) = c / (10^n − 1).

Since 10^k · x is the sum of an integer (ab) and a rational number (y), x is also rational.
Table of values
Here the fraction is the unit fraction 1/n and ℓ10 is the length of the (decimal) repetend.
The lengths ℓ10(n) of the decimal repetends of 1/n, n = 1, 2, 3, ..., are:
0, 0, 1, 0, 0, 1, 6, 0, 1, 0, 2, 1, 6, 6, 1, 0, 16, 1, 18, 0, 6, 2, 22, 1, 0, 6, 3, 6, 28, 1, 15, 0, 2, 16, 6, 1, 3, 18, 6, 0, 5, 6, 21, 2, 1, 22, 46, 1, 42, 0, 16, 6, 13, 3, 2, 6, 18, 28, 58, 1, 60, 15, 6, 0, 6, 2, 33, 16, 22, 6, 35, 1, 8, 3, 1, 18, 6, 6, 13, 0, 9, 5, 41, 6, 16, 21, 28, 2, 44, 1, 6, 22, 15, 46, 18, 1, 96, 42, 2, 0... .
For comparison, the lengths ℓ2(n) of the binary repetends of the fractions 1/n, n = 1, 2, 3, ..., are:
0, 0, 2, 0, 4, 2, 3, 0, 6, 4, 10, 2, 12, 3, 4, 0, 8, 6, 18, 4, 6, 10, 11, 2, 20, 12, 18, 3, 28, 4, 5, 0, 10, 8, 12, 6, 36, 18, 12, 4, 20, 6, 14, 10, 12, 11, ... (equal to the multiplicative order of 2 modulo the odd part of n, or 0 if n is a power of 2).
The decimal repetends of 1/n, n = 1, 2, 3, ..., are:
0, 0, 3, 0, 0, 6, 142857, 0, 1, 0, 09, 3, 076923, 714285, 6, 0, 0588235294117647, 5, 052631578947368421, 0, 047619, 45, 0434782608695652173913, 6, 0, 384615, 037, 571428, 0344827586206896551724137931, 3, 032258064516129, 0, 03, 2941176470588235, 285714... .
The decimal repetend lengths of 1/p, p = 2, 3, 5, ... (nth prime), are:
0, 1, 0, 6, 2, 6, 16, 18, 22, 28, 15, 3, 5, 21, 46, 13, 58, 60, 33, 35, 8, 13, 41, 44, 96, 4, 34, 53, 108, 112, 42, 130, 8, 46, 148, 75, 78, 81, 166, 43, 178, 180, 95, 192, 98, 99, 30, 222, 113, 228, 232, 7, 30, 50, 256, 262, 268, 5, 69, 28, 141, 146, 153, 155, 312, 79... .
The least primes p for which 1/p has decimal repetend length n, n = 1, 2, 3, ..., are:
3, 11, 37, 101, 41, 7, 239, 73, 333667, 9091, 21649, 9901, 53, 909091, 31, 17, 2071723, 19, 1111111111111111111, 3541, 43, 23, 11111111111111111111111, 99990001, 21401, 859, 757, 29, 3191, 211, 2791, 353, 67, 103, 71, 999999000001, 2028119, 909090909090909091, 900900900900990990990991, 1676321, 83, 127, 173... .
The least primes p for which 1/p has n different cycles, n = 1, 2, 3, ..., are:
7, 3, 103, 53, 11, 79, 211, 41, 73, 281, 353, 37, 2393, 449, 3061, 1889, 137, 2467, 16189, 641, 3109, 4973, 11087, 1321, 101, 7151, 7669, 757, 38629, 1231, 49663, 12289, 859, 239, 27581, 9613, 18131, 13757, 33931... .
Fractions with prime denominators
A fraction in lowest terms with a prime denominator other than 2 or 5 (i.e. coprime to 10) always produces a repeating decimal. The length of the repetend (period of the repeating decimal segment) of 1/p is equal to the order of 10 modulo p. If 10 is a primitive root modulo p, then the repetend length is equal to p − 1; if not, then the repetend length is a factor of p − 1. This result can be deduced from Fermat's little theorem, which states that 10^(p−1) ≡ 1 (mod p).
The base-10 digital root of the repetend of the reciprocal of any prime number greater than 5 is 9.
If the repetend length of 1/p for prime p is equal to p − 1, then the repetend, expressed as an integer, is called a cyclic number.
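The repetend length can be computed directly as this multiplicative order. A brute-force Python sketch (function name assumed for illustration), which reproduces the periods of the cyclic-number fractions listed in the next section:

    def repetend_length(p):
        """Order of 10 modulo p, i.e. the period of the decimal
        expansion of 1/p, for a prime p other than 2 and 5."""
        k, r = 1, 10 % p
        while r != 1:
            r = (r * 10) % p
            k += 1
        return k

    for p in (7, 17, 19, 23, 29, 47, 59, 61, 97):
        # For these primes 10 is a primitive root, so the period is p - 1.
        print(p, repetend_length(p))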
Cyclic numbers
Examples of fractions belonging to this group are:
1/7 = 0.142857..., 6 repeating digits
1/17 = 0.0588235294117647..., 16 repeating digits
1/19 = 0.052631578947368421..., 18 repeating digits
1/23 = 0.0434782608695652173913..., 22 repeating digits
1/29 = 0.0344827586206896551724137931..., 28 repeating digits
1/47 = 0.0212765957446808510638297872340425531914893617..., 46 repeating digits
1/59 = 0.0169491525..., 58 repeating digits
1/61 = 0.0163934426..., 60 repeating digits
1/97 = 0.0103092783..., 96 repeating digits
The list can go on to include the fractions 1/109, 1/113, 1/131, 1/149, 1/167, 1/179, 1/181, 1/193, etc.
Every proper multiple of a cyclic number (that is, a multiple having the same number of digits) is a rotation:
1/7 = 1 × 0.142857... = 0.142857...
2/7 = 2 × 0.142857... = 0.285714...
3/7 = 3 × 0.142857... = 0.428571...
4/7 = 4 × 0.142857... = 0.571428...
5/7 = 5 × 0.142857... = 0.714285...
6/7 = 6 × 0.142857... = 0.857142...
The reason for the cyclic behavior is apparent from an arithmetic exercise of long division of 1/7: the sequential remainders are the cyclic sequence {1, 3, 2, 6, 4, 5}. | Mathematics | Basics | null |
8905895 | https://en.wikipedia.org/wiki/Freshwater%20pearl%20mussel | Freshwater pearl mussel | The freshwater pearl mussel (Margaritifera margaritifera) is an endangered species of freshwater mussel, an aquatic bivalve mollusc in the family Margaritiferidae.
Although the name "freshwater pearl mussel" is often used for this species, other freshwater mussel species (e.g. Margaritifera auricularia) can also create pearls and some can also be used as a source of mother of pearl. Most cultured pearls today come from Hyriopsis species in Asia, or Amblema species in North America, both members of the related family Unionidae; pearls are also found within species in the genus Unio.
The interior of the shell of Margaritifera margaritifera has thick nacre (the inner mother-of-pearl layer of the shell). This species is capable of making fine-quality pearls, and was historically exploited in the search for pearls from wild sources. In recent times, the Russian malacologist Valeriy Zyuganov gained a worldwide reputation after he discovered that the pearl mussel exhibited negligible senescence and determined that it had a maximum lifespan of 210–250 years. The data of V. V. Zyuganov have been confirmed by Finnish malacologists and gained general acceptance.
Subspecies
Subspecies within the species Margaritifera margaritifera include:
Margaritifera margaritifera margaritifera (Linnaeus, 1758)
Margaritifera margaritifera parvula (Haas, 1908)
Margaritifera margaritifera durrovensis Phillips, 1928 – critically endangered subspecies in Ireland. Synonym: Margaritifera durrovensis. This subspecies is mentioned in annexes II and V of Habitats Directive as Margaritifera durrovensis.
Description
The freshwater pearl mussel is one of the longest-living invertebrates in existence. The oldest known specimen in Europe was caught in 1993 in Estonia when it was 134 years old.
Like all bivalve molluscs, the freshwater pearl mussel has a shell consisting of two parts that are hinged together, which can be closed to protect the animal's soft body within. The shell is large, heavy and elongated, typically yellowish-brown in colour when young and becoming darker with age. Older parts of the shell often appear corroded, an identifying feature of this mussel species. The inner surface of the shell is pearl white, sometimes tinged with attractive iridescent colours. Like all molluscs, the freshwater pearl mussel has a muscular 'foot'; this very large, white foot enables the mussel to move slowly and bury itself within the bottom substrate of its freshwater habitat.
Distribution
The native distribution of this species is Holarctic. The freshwater pearl mussel can be found on both sides of the Atlantic, from the Arctic and temperate regions of western Russia, through Europe to northeastern North America.
North America: eastern Canada and New England in the United States' Northeast.
Europe, including:
Austria – estimated total population of 70 000 individuals in Mühlviertel (declining) and in Waldviertel (some recruitment), in the states of Upper and Lower Austria, respectively.
Belgium
Czech Republic – critically endangered (CR). In Bohemia, probably locally extinct in Moravia. Listed in Decree for implementation, No. 395/1992 Sb. (Czech code) (in Czech: Vyhláška 395/1992 Sb. ve znění vyhl. 175/2006 Sb.) as Critically Threatened species. Its conservation status in 2004-2006 was bad (U2) in a report for the European Commission in accordance with Habitats Directive.
Serbia – most commonly found along the shores of the Danube river and its lakes, as well as in some other rivers and freshwater areas in the Pannonian Basin
Denmark – only known from Varde River (never recorded elsewhere in the country in historical or recent times). Although sometimes suggested to have been extirpated in the period directly after 1970, it has been documented from the river in recent years and indirect evidence suggests that the population size is significant.
Estonia
Fennoscandia – vulnerable in Norway, endangered in Finland and Sweden. Very rare in southern Finland, more common in the north. Widespread but not common in Norway; Norway is considered to host a large proportion of the European stock. Rare in Sweden. Also in Kola Peninsula and Karelia (Russia) (see below).
France
Germany – critically endangered (vom Aussterben bedroht). Listed as strictly protected species in annex 1 in Bundesartenschutzverordnung.
Great Britain. More than half the world's recruiting population exists in Scotland with populations in more than 50 rivers, mainly in the Highlands, although illegal harvesting has seriously affected their survival. 75% of sites surveyed in 2010 had suffered "significant and lasting criminal damage" and in response the police and Scottish Natural Heritage have launched a campaign to protect the species. This species has been fully protected in the United Kingdom under the Wildlife and Countryside Act 1981 since 1998 and partly protected according to section 9(1) since 1991.
Iberian Peninsula (Portugal and Spain)
Ireland. The Cladagh (Swanlinbar) river contains one of the largest populations surviving in northern Ireland, estimated minimum 10,000, confined to a 6 km stretch of undisturbed river in the middle section.
Luxembourg
Latvia
Lithuania – extinct
Poland – extinct
Russian Federation – in the rivers of the White Sea basin of the Arkhangelsk and Murmansk Regions. This is the eastern border of the distribution area of M. margaritifera.
Habitat
Clean, fast-flowing streams and rivers are required for the freshwater pearl mussel, where it lives buried or partly buried in fine gravel and coarse sand, generally in water at depths between 0.5 and 2 metres, but sometimes at greater depths. Clean gravel and sand is essential, particularly for juvenile freshwater pearl mussels, for if the stream or river bottom becomes clogged with silt, they cannot obtain oxygen and will die. Also essential is the presence of a healthy population of salmonids, a group of fish including salmon and trout, on which the freshwater pearl mussel relies for part of its life cycle.
Lifecycle
Capable of living for up to 130 years, the freshwater pearl mussel begins life as a tiny larva, measuring just 0.6 to 0.7 millimetres long, which is ejected into the water from an adult mussel in a mass of one to four million other larvae. This remarkable event takes place over just one to two days, sometime between July and September. The larvae, known as glochidia, resemble tiny mussels, but their minute shells are held open until they snap shut on a suitable host. The host of freshwater pearl mussel larvae are juvenile fish from the salmonid family, which includes the Atlantic salmon and sea trout. The chances of a larva encountering a suitable fish are very low, and thus nearly all are swept away and die; only a few are inhaled by an Atlantic salmon or sea trout, where they snap shut onto the fish's gills.
Attached to the gills of a fish, the glochidia live and grow in this oxygen-rich environment until the following May or June, when they drop off. The juvenile must land on clean gravelly or sandy substrates if it is to successfully grow. Attached to the substrate, juvenile freshwater pearl mussels typically burrow themselves completely into the sand or gravel, while adults are generally found with a third of their shell exposed. Should they become dislodged, freshwater pearl mussels can rebury themselves, and are also capable of moving slowly across sandy sediments, using their large, muscular foot.
The freshwater pearl mussel grows extremely slowly, inhaling water through exposed siphons, and filtering out tiny organic particles on which it feeds. It is thought that in areas where this species was once abundant, this filter feeding acted to clarify the water, benefiting other species which inhabited the rivers and streams. Maturity is reached at an age of 10 to 15 years, followed by a reproductive period of over 75 years in which about 200 million larvae can be produced. In early summer each year, around June and July, male freshwater pearl mussels release sperm into the water, where they are inhaled by female mussels. Inside the female, the fertilized eggs develop in a pouch on the gills for several weeks, until temperature or other environmental cues trigger the female to release the larvae into the surrounding water.
Threats and conservation
Once the most abundant bivalve mollusc in ancient rivers around the world, numbers of the freshwater pearl mussel are now declining in all countries and this species is nearly extinct in many areas. The causes of this decline are not fully understood, but alteration and degradation of its freshwater habitat undoubtedly plays a central role. The negative impacts humans have on rivers and streams come from a wide range of activities such as river regulation, drainage, sewage disposal, dredging, and water pollution, including the introduction of excess nutrients. Anything that affects the abundance of the fish hosts will also affect the freshwater pearl mussel; for example, the introduction of exotic fish species, such as the rainbow trout, reduce the number of native fish hosts. Introduced species are also directly affecting the freshwater pearl mussel; the invasion of the zebra mussel (Dreissena polymorpha), which has been spread to new locations by being transported on the bottom of boats or in ballast waters, has impacted freshwater pearl mussel populations in all countries it has invaded.
The freshwater pearl mussel, which is completely protected in all European countries, has been the focus of a significant amount of conservation efforts. Measures have included the transfer of adult mussels to areas where it had gone extinct, the culture of juvenile mussels, and the release of juvenile trout, which have been infected with glochidia, into small rivers, but mainly the freshwater pearl mussel has benefited from habitat restoration projects in some areas. Due to the essential role salmonid fish play in the life of the freshwater pearl mussel, the conservation of salmon and trout is also central in the survival of this endangered freshwater mussel.
Conservation efforts
The LIFE R4ever Kent project is a five-year project worth £3.8 million, led by Natural England, that began in January 2022. Its aim is to save and restore the River Kent's population of freshwater pearl mussels, as well as to improve existing breeding areas to secure the long-term future of the population. The River Kent's population of freshwater pearl mussels was severely damaged by pollution, degraded habitats, low genetic diversity, and the lack of natural survival of juvenile pearl mussels. The project was developed in tandem with the Environment Agency, the Freshwater Biological Association, and the South Cumbria Rivers Trust. The River Kent catchment area is the only river in the UK where the freshwater pearl mussel and the white clawed crayfish are found in the same habitat. The goal is to increase the freshwater pearl mussel population by 4,000 individuals and expand its range within the River Kent Special Area of Conservation (SAC). The site's population will be bolstered using donor stock from other freshwater pearl mussel sites, while also improving breeding facilities within England. Louise Lavictoire, head of science at the Freshwater Biological Association, stated that the remnant populations in the River Kent and surrounding tributaries are too small to sustain a population into the future, and that maintaining a self-sustaining population would require captive breeding. The goal of the hatchery improvements is to breed more than 4,000 juveniles for release; release 3,000 of the 4,000 as juveniles; and retain 1,000 for reintroduction to the SAC once they have grown to a larger size.
| Biology and health sciences | Bivalvia | Animals |
5669923 | https://en.wikipedia.org/wiki/Field%20research | Field research | Field research, field studies, or fieldwork is the collection of raw data outside a laboratory, library, or workplace setting. The approaches and methods used in field research vary across disciplines. For example, biologists who conduct field research may simply observe animals interacting with their environments, whereas social scientists conducting field research may interview or observe people in their natural environments to learn their languages, folklore, and social structures.
Field research involves a range of well-defined, although variable, methods: informal interviews, direct observation, participation in the life of the group, collective discussions, analyses of personal documents produced within the group, self-analysis, results from activities undertaken off- or on-line, and life-histories. Although the method generally is characterized as qualitative research, it may (and often does) include quantitative dimensions.
History
Field research has a long history. Cultural anthropologists have long used field research to study other cultures. Although the cultures do not have to be different, this has often been the case in the past with the study of so-called primitive cultures, and even in sociology the cultural differences have been ones of class. The work is done "in 'fields', that is, circumscribed areas of study which have been the subject of social research". Fields could be education, industrial settings, or Amazonian rain forests. Field research may be conducted by ethologists such as Jane Goodall. Alfred Radcliffe-Brown [1910] and Bronisław Malinowski [1922] were early anthropologists who set the models for future work.
Conducting field research
The quality of results obtained from field research depends on the data gathered in the field. The data, in turn, depend upon the field worker, their level of involvement, and their ability to see and visualize things that other individuals visiting the area of study may fail to notice. The more open researchers are to new ideas, concepts, and things which they may not have seen in their own culture, the better will be the absorption of those ideas. A better grasp of such material means a better understanding of the forces of culture operating in the area and the ways they modify the lives of the people under study. Social scientists (e.g. anthropologists, social psychologists) have always been taught to be free from ethnocentrism (the belief in the superiority of one's own ethnic group) when conducting any type of field research.
When humans themselves are the subject of study, protocols must be devised to reduce the risk of observer bias and the acquisition of overly theoretical or idealized explanations of the workings of a culture. Participant observation, data collection, and survey research are examples of field research methods, in contrast to what is often called experimental or lab research.
Field notes
When conducting field research, keeping an ethnographic record is essential to the process. Field notes are a key part of the ethnographic record. The process of taking field notes begins as the researcher participates in local scenes and experiences in order to make observations that will later be written up. The field researcher tries first to take mental notes of certain details so that they can be written down later.
Kinds of field notes
Interviewing
Another method of data collection is interviewing, specifically interviewing in the qualitative paradigm. Interviewing can be done in different formats, depending on individual researcher preferences, the research purpose, and the research question asked.
Analyzing data
In qualitative research, there are many ways of analyzing data gathered in the field. Two of the most common methods of data analysis are thematic analysis and narrative analysis. As mentioned before, the type of analysis a researcher uses depends on the research question asked, the researcher's field, and the researcher's personal method of choice.
Field research across different disciplines
Anthropology
In anthropology, field research is organized so as to produce a kind of writing called ethnography. Ethnography can refer to both a methodology and a product of research, namely a monograph or book. Ethnography is a grounded, inductive method that relies heavily on participant-observation. Participant observation is a structured type of research strategy. It is a widely used methodology in many disciplines, particularly cultural anthropology, but also sociology, communication studies, and social psychology. Its aim is to gain a close and intimate familiarity with a given group of individuals (such as a religious, occupational, or subcultural group, or a particular community) and their practices through an intensive involvement with people in their natural environment, usually over an extended period of time.
The method originated in field work of social anthropologists, especially the students of Franz Boas in the United States, and in the urban research of the Chicago School of sociology. Max Gluckman noted that Bronisław Malinowski significantly developed the idea of fieldwork, but it originated with Alfred Cort Haddon in England and Franz Boas in the United States. Robert G. Burgess concluded that "it is Malinowski who is usually credited with being the originator of intensive anthropological field research".
Anthropological fieldwork uses an array of methods and approaches that include, but are not limited to: participant observation, structured and unstructured interviews, archival research, collecting demographic information from the community the anthropologist is studying, and data analysis. Traditional participant observation is usually undertaken over an extended period of time, ranging from several months to many years, and even generations. An extended research time period means that the researcher is able to obtain more detailed and accurate information about the individuals, community, and/or population under study. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time. A strength of observation and interaction over extended periods of time is that researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior.
Archaeology
Field research lies at the heart of archaeological research. It may include the undertaking of broad area surveys (including aerial surveys); of more localised site surveys (including photographic, drawn, and geophysical surveys, and exercises such as fieldwalking); and of excavation.
Biology and ecology
In biology, field research typically involves the study of free-living wild animals, in which the subjects are observed in their natural habitat without changing, harming, or materially altering the setting or behavior of the animals under study. Field research is an indispensable part of biological science.
Animal migration tracking (including bird ringing/banding) is a frequently-used field technique, allowing field scientists to track migration patterns and routes, and animal longevity in the wild. Knowledge about animal migrations is essential to accurately determining the size and location of protected areas.
Field research also can involve study of other kingdoms of life, such as plantae, fungi, and microbes, as well as ecological interactions among species.
Field courses have been shown to be effective at generating long-term interest in, and commitment to, STEM among undergraduate students, but the number of field courses has not kept pace with demand. Cost has been a barrier to student participation.
Consumer research
In applied business disciplines, such as in marketing, fieldwork is a standard research method both for commercial purposes, like market research, and academic research. For instance, researchers have used ethnography, netnography, and in-depth interviews within Consumer Culture Theory, a field that aims to understand the particularities of contemporary consumption. Several academic journals such as Consumption Markets & Culture, and the Journal of Consumer Research regularly publish qualitative research studies that use fieldwork.
Earth and atmospheric sciences
In geology, fieldwork is considered an essential part of training and remains an important component of many research projects. In other disciplines of the Earth and atmospheric sciences, field research refers to field experiments (such as the VORTEX projects) utilizing in situ instruments. Permanent observation networks are also maintained for other uses but are not necessarily considered field research, nor are permanent remote sensing installations.
Economics
The objective of field research in economics is to get beneath the surface, to contrast observed behaviour with the prevailing understanding of a process, and to relate language and description to behavior (Deirdre McCloskey, 1985).
The 2009 Nobel Prize winners in Economics, Elinor Ostrom and Oliver Williamson, have advocated mixed methods and complex approaches in economics and hinted implicitly at the relevance of field research approaches in economics. In a recent interview, Oliver Williamson and Elinor Ostrom discussed the importance of examining institutional contexts when performing economic analyses. Both Ostrom and Williamson agree that "top-down" panaceas or "cookie cutter" approaches to policy problems don't work. They believe that policymakers need to give local people a chance to shape the systems used to allocate resources and resolve disputes. Sometimes, Ostrom points out, local solutions can be the most efficient and effective options. This point of view fits well with anthropological research, which has for some time shown the logic of local systems of knowledge — and the damage that can be done when "solutions" to problems are imposed from outside or above without adequate consultation. Elinor Ostrom, for example, combines field case studies and experimental lab work in her research. Using this combination, she contested longstanding assumptions about the possibility that groups of people could cooperate to solve common pool problems, as opposed to being regulated by the state or governed by the market.
Edward J. Nell argued in 1998 that there are two types of field research in economics. One kind can give us a carefully drawn picture of institutions and practices, general in that it applies to all activities of a certain kind in a particular society or social setting, but still specialized to that society or setting. Although institutions and practices are intangibles, such a picture will be objective, a matter of fact, independent of the state of mind of the particular agents reported on. Approaching the economy from a different angle, another kind of fieldwork can give us a picture of the state of mind of economic agents (their true motivations, their beliefs, state of knowledge, expectations, and their preferences and values).
Business use of field research is an applied form of anthropology, though surveys are as likely to be advised by sociologists or statisticians. Consumer marketing field research is the primary marketing technique used by businesses to research their target market.
Ethnomusicology
Fieldwork in ethnomusicology has changed greatly over time. Alan P. Merriam cites the evolution of fieldwork as a constant interplay between the musicological and ethnological roots of the discipline. Before the 1950s, before ethnomusicology resembled what it is today, fieldwork and research were considered separate tasks. Scholars focused on analyzing music outside of its context through a scientific lens, drawing from the field of musicology. Notable scholars include Carl Stumpf and Erich von Hornbostel, who started as Stumpf's assistant. They are known for making countless recordings and establishing a library of music to be analyzed by other scholars. Methodologies began to shift in the early 20th century. George Herzog, an anthropologist and ethnomusicologist, published a seminal paper titled "Plains Ghost Dance and Great Basin Music", reflecting the increased importance of fieldwork through his extended residency in the Great Basin and his attention to cultural contexts. Herzog also raised the question of how the formal qualities of the music he was studying demonstrated the social function of the music itself. Ethnomusicology today relies heavily on the relationship between the researcher and their teachers and consultants. Many ethnomusicologists have assumed the role of student in order to fully learn an instrument and its role in society. Research in the discipline has grown to consider music a cultural product that cannot be understood without consideration of its context.
Law
Legal researchers conduct field research to understand how legal systems work in practice. Social, economic, cultural and other factors influence how legal processes, institutions and the law work (or do not work).
Management
Mintzberg played a crucial role in the popularization of field research in management. The tremendous amount of work that Mintzberg put into the findings earned him the title of leader of a new school of management, the descriptive school, as opposed to the prescriptive and normative schools that preceded his work. The preceding schools of thought, deriving from Taylor, Henri Fayol, Lyndall Urwick, Herbert A. Simon, and others, endeavored to prescribe and expound norms to show what managers must or should do. With the arrival of Mintzberg, the question was no longer what must or should be done, but what a manager actually does during the day. More recently, in his 2004 book Managers Not MBAs, Mintzberg examined what he believes to be wrong with management education today.
Aktouf (2006, p. 198) summed up Mintzberg's observations about what takes place in the field: "First, the manager's job is not ordered, continuous, and sequential, nor is it uniform or homogeneous. On the contrary, it is fragmented, irregular, choppy, extremely changeable and variable. This work is also marked by brevity: no sooner has a manager finished one activity than he or she is called upon to jump to another, and this pattern continues nonstop. Second, the manager's daily work is not a series of self-initiated, willful actions transformed into decisions after examining the circumstances. Rather, it is an unbroken series of reactions to all sorts of requests that come from all around the manager, from both the internal and external environments. Third, the manager deals with the same issues several times, for short periods of time; he or she is far from the traditional image of the individual who deals with one problem at a time, in a calm and orderly fashion. Fourth, the manager acts as a focal point, an interface, or an intersection between several series of actors in the organization: external and internal environments, collaborators, partners, superiors, subordinates, colleagues, and so forth. He or she must constantly ensure, achieve, or facilitate interactions between all these categories of actors to allow the firm to function smoothly."
Public health
In public health, the use of the term field research refers to epidemiology or the study of epidemics through the gathering of data about the epidemic (such as the pathogen and vector(s) as well as social or sexual contacts, depending upon the situation).
Sociology
Pierre Bourdieu played a crucial role in popularizing fieldwork in sociology. During the Algerian War in 1958–1962, Bourdieu undertook ethnographic research into the conflict through a study of the Kabyle people (a subgroup of the Berbers), which provided the groundwork for his anthropological reputation. His first book, Sociologie de L'Algerie (The Algerians), was successful in France and published in America in 1962. A follow-up, Algeria 1960: The Disenchantment of the World: The Sense of Honour: The Kabyle House or the World Reversed: Essays, published in English in 1979 by Cambridge University Press, established him as a significant figure in the field of ethnology and a pioneering advocate of more intensive fieldwork in the social sciences. The book was based on his decade of work as a participant-observer in Algerian society. One of the outstanding qualities of his work has been his innovative combination of different methods and research strategies, and his analytical skill in interpreting the obtained data.
Throughout his career, Bourdieu sought to connect his theoretical ideas with empirical research grounded in everyday life. His work can be seen as a sociology of culture, which Bourdieu labeled a "theory of practice". His contributions to sociology were both empirical and theoretical. His conceptual apparatus is based on three key terms: habitus, capital, and field.
Furthermore, Bourdieu fiercely opposed rational choice theory as grounded in a misunderstanding of how social agents operate. Bourdieu argued that social agents do not continuously calculate according to explicit rational and economic criteria. According to Bourdieu, social agents operate according to an implicit practical logic—a practical sense—and bodily dispositions. Social agents act according to their "feel for the game" (the "feel" being, roughly, habitus, and the "game" being the field).
Bourdieu's anthropological work was focused on the analysis of the mechanisms of reproduction of social hierarchies. Bourdieu criticized the primacy given to the economic factors, and stressed that the capacity of social actors to actively impose and engage their cultural productions and symbolic systems plays an essential role in the reproduction of social structures of domination. Bourdieu's empirical work played a crucial role in the popularization of correspondence analysis and particularly multiple correspondence analysis. Bourdieu held that these geometric techniques of data analysis are, like his sociology, inherently relational. In the preface to his book The Craft of Sociology, Bourdieu argued that: "I use Correspondence Analysis very much, because I think that it is essentially a relational procedure whose philosophy fully expresses what in my view constitutes social reality. It is a procedure that 'thinks' in relations, as I try to do it with the concept of field."
One of the classic ethnographies in Sociology is the book Ain't No Makin' It: Aspirations & Attainment in a Low-Income Neighborhood by Jay MacLeod. The study addresses the reproduction of social inequality among low-income, male teenagers. The researcher spent time studying two groups of teenagers in a housing project in a Northeastern city of the United States. The study concludes that three different levels of analysis play their part in the reproduction of social inequality: the individual, the cultural, and the structural.
An additional perspective in sociology is interactionism. This point of view focuses on understanding people's actions based on their experience of the world around them. Like Bourdieu's work, this perspective gathers statements, observations, and facts from real-world situations to create more robust research outcomes.
Notable field-workers
In anthropology
Napoleon Chagnon - ethnographer of the Yanomamö people of the Amazon
Georg Forster - ethnographer (1772–1775) to Captain James Cook
George M. Foster
Clifford Geertz
Alfred Cort Haddon
Claude Lévi-Strauss
Bronislaw Malinowski
Margaret Mead
Alfred Reginald Radcliffe-Brown
W.H.R. Rivers
Renato Rosaldo
James C. Scott
Colin Turnbull
Victor Turner
In sociology
William Foote Whyte
Erving Goffman
Pierre Bourdieu
Harriet Martineau
In management
Henry Mintzberg
In economics
Truman Bewley
Alan Blinder
Trygve Haavelmo
John Johnston
Lawrence Klein
Wassily Leontief
Edward J. Nell
Robert M. Townsend
In music
Alan Lomax
John Peel (with his Peel Sessions)
| Physical sciences | Basics | null |
5670193 | https://en.wikipedia.org/wiki/Hipparion | Hipparion | Hipparion is an extinct genus of three-toed, medium-sized equine belonging to the extinct tribe Hipparionini, which lived about 10-5 million years ago. While the genus formerly included most hipparionines, the genus is now more narrowly defined as hipparionines from Eurasia spanning the Late Miocene. Hipparion was a mixed-feeder who ate mostly grass, and lived in the savannah biome. Hipparion evolved from Cormohipparion, and went extinct due to environmental changes like cooling climates and decreasing atmospheric carbon dioxide levels.
Taxonomy
"Hipparion" in sensu lato
The genus "Hipparion" was used for over a century as a form classification to describe over a hundred species of Holartic hipparionines from the Pliocene and Miocene eras that had three toes and isolated protocones. Since then, groups such as the genera Cormohipparion and Neohipparion were proposed to further sort these species, typically based on differences in skull morphology. These species are now known as "Hipparion" in sensu lato (s.l.), or a broad sense.
Hipparion sensu stricto
Hipparion sensu stricto (s.s.), in the strict sense, describes the genus of Old World hipparionines known from remains found in Eurasia (France, Greece, Turkey, Iran, and China) from the Late Miocene epoch (~10–5 Ma, or million years ago). The assignment of remains from elsewhere, such as North America and Africa, to the genus is uncertain.
Morphology
Limbs
Hipparion generally resembled a smaller version of the modern horse, but was tridactyl, or three-toed. It had two vestigial outer toes on each limb in addition to its hoof. In some species, these outer toes were functional.
Size
Hipparion was typically medium-sized, standing about 1.4 m (4.6 ft) tall at the shoulder. The estimated body mass of Hipparion depends on the species, but ranges from about 135 to 200 kg (about 298 to 441 lbs).
Skull
Hipparion had hypsodont dentition (high-crowned teeth) for its premolars and molars, with a crown height of about 60 mm (2.36 in). Hipparion had isolated protocones in the upper molars, meaning a cusp of the teeth called a protocone was not connected to a tooth crest called a protoloph. Hipparion is also characterized by its facial fossa, or deep depression in the skull, located high on the head in front of the orbit.
Life
Habitat and diet
Hipparion lived in the Old World Savannah Biome, or OWSB, which ranged from woodlands to grasslands. Hipparion ate a mixed diet consisting mostly of grass. This diet is indicated by fossil evidence of microscopic wear patterns of scratches and pits on the enamel of Hipparion's teeth, observed using scanning electron microscopy (SEM).
Lifespan
Hipparion achieved skeletal maturity, and possibly sexual maturity, at about 3 years old. Known fossils of Hipparion include individuals up to 10 years old at death.
Evolution and extinction
Evolution
Hipparion likely evolved from a species of Cormohipparion during the Late Miocene, about 11.4–11.0 Ma. This species, C. occidentale, came to Eurasia and Africa from North America. The last common ancestor of Hipparion and the modern horse was Merychippus.
Extinction
In the Old World, Hipparion experienced population decline and extinction along a north-to-south gradient, as did many other Miocene vertebrates. This trend is believed to be due to environmental changes caused by global cooling and decreasing carbon dioxide levels in the atmosphere.
Species
| Biology and health sciences | Equidae | Animals |
5671944 | https://en.wikipedia.org/wiki/Arp%20220 | Arp 220 | Arp 220 is the result of a collision between two galaxies which are now in the process of merging. It is the 220th object in Halton Arp's Atlas of Peculiar Galaxies.
Features
Arp 220 is the closest ultraluminous infrared galaxy (ULIRG) to Earth, at 250 million light years away. Its energy output was discovered by IRAS to be dominated by the far-infrared part of the spectrum.
It is often regarded as the prototypical ULIRG and has been the subject of much study as a result.
Most of its energy output is thought to be the result of a massive burst of star formation, or starburst, probably triggered by the merging of two smaller galaxies. Hubble Space Telescope observations of Arp 220 in 2002 and 1997, taken in visible light with the ACS, and in infrared light with NICMOS, revealed more than 200 huge star clusters in the central part of the galaxy.
The most massive of these clusters contains enough material to equal about 10 million suns.
X-ray observations by the Chandra and XMM-Newton satellites have shown that Arp 220 probably includes an active galactic nucleus (AGN) at its core, which raises interesting questions about the link between galaxy mergers and AGN, since it is believed that galactic mergers often trigger starbursts, and may also give rise to the supermassive black holes that appear to power AGN.
Luminous far-infrared objects like Arp 220 have been found in surprisingly large numbers by sky surveys of submillimetre wavelengths using instruments such as the Submillimetre Common-User Bolometer Array (SCUBA) at the James Clerk Maxwell Telescope (JCMT).
Arp 220 and other relatively local ULIRGs are being studied as equivalents of this kind of object.
Astronomers from the Arecibo Observatory have detected organic molecules in the galaxy.
Arp 220 contains at least two bright maser sources: an OH megamaser and a water maser. In October 2011, astronomers spotted a record-breaking seven supernovae at the same time in Arp 220. The merging of the two galaxies started around 700 million years ago.
| Physical sciences | Notable galaxies | Astronomy |
5672031 | https://en.wikipedia.org/wiki/Orbit%20of%20the%20Moon | Orbit of the Moon | The Moon orbits Earth in the prograde direction and completes one revolution relative to the Vernal Equinox and the stars in about 27.32 days (a tropical month and sidereal month) and one revolution relative to the Sun in about 29.53 days (a synodic month). Earth and the Moon orbit about their barycentre (common centre of mass), which lies about from Earth's centre (about 73% of its radius), forming a satellite system called the Earth–Moon system. On average, the distance to the Moon is about from Earth's centre, which corresponds to about 60 Earth radii or 1.282 light-seconds.
With a mean orbital velocity around the Earth–Moon barycentre of 1.022 km/s (0.635 miles/s; 2,286 miles/h), the Moon covers a distance approximately equal to its diameter, or about half a degree on the celestial sphere, each hour. The Moon differs from most regular satellites of other planets in that its orbit is close to the ecliptic plane rather than to its primary's (in this case, Earth's) equatorial plane. The Moon's orbital plane is inclined by about 5.1° with respect to the ecliptic plane, whereas Earth's equatorial plane is tilted by about 23.4° with respect to the ecliptic plane.
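As a quick sanity check on the half-degree-per-hour figure, the arithmetic can be sketched in a few lines of Python (the mean distance of 384,400 km and a lunar diameter of roughly 3,475 km are standard values assumed here, not taken from this passage):

```python
import math

v = 1.022          # mean orbital velocity, km/s (quoted above)
a = 384_400        # mean Earth-Moon distance, km (assumed standard value)
d_moon = 3_475     # lunar diameter, km (assumed standard value)

km_per_hour = v * 3600                        # ~3,679 km in one hour
deg_per_hour = math.degrees(km_per_hour / a)  # small-angle arc: s / r

print(f"{km_per_hour:.0f} km, ~{km_per_hour / d_moon:.2f} lunar diameters")
print(f"{deg_per_hour:.2f} deg per hour")     # ~0.55 deg, about half a degree
```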
Properties
The properties of the orbit described in this section are approximations. The Moon's orbit around Earth has many variations (perturbations) due to the gravitational attraction of the Sun and planets, the study of which (lunar theory) has a long history.
Elliptic shape
The orbit of the Moon is a nearly circular ellipse about Earth (the semimajor and semiminor axes are 384,400 km and 383,800 km, respectively: a difference of only 0.16%). The equation of the ellipse yields an eccentricity of 0.0549 and perigee and apogee distances of 362,600 km (225,300 mi) and 405,400 km (251,900 mi) respectively (a difference of 12%).
Since nearer objects appear larger, the Moon's apparent size changes as it moves toward and away from an observer on Earth. An event called a "supermoon" occurs when the full Moon is closest to Earth (perigee). Because the apogee distance is about 12% greater than the perigee distance, the Moon's largest possible apparent diameter is about 12% larger than its smallest; the apparent area is about 25% greater, and so is the amount of light the Moon reflects toward Earth.
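A minimal Python sketch of the arithmetic behind these two paragraphs; the small mismatches with the quoted perigee and apogee distances arise because those figures are averages over the real, perturbed orbit rather than points on an ideal ellipse:

```python
import math

a = 384_400                      # semi-major axis, km (quoted above)
e = 0.0549                       # eccentricity (quoted above)

b = a * math.sqrt(1 - e**2)      # semi-minor axis: ~383,821 km (text: 383,800)
r_perigee = a * (1 - e)          # ~363,296 km (text quotes 362,600)
r_apogee = a * (1 + e)           # ~405,504 km (text quotes 405,400)

# Apparent-size arithmetic from the quoted distances:
diameter_ratio = 405_400 / 362_600   # ~1.118 -> ~12% larger at perigee
area_ratio = diameter_ratio ** 2     # ~1.25  -> ~25% more area and reflected light

print(f"b = {b:.0f} km, size ratio {diameter_ratio:.3f}, area ratio {area_ratio:.3f}")
```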
The variance in the Moon's orbital distance corresponds with changes in its tangential and angular speeds, per Kepler's second law. The mean angular movement relative to an imaginary observer at the Earth–Moon barycentre is about 13.2° per day to the east (J2000.0 epoch), completing 360° per sidereal month.
Elongation
The Moon's elongation is its angular distance east of the Sun at any time. At new moon, it is zero and the Moon is said to be in conjunction. At full moon, the elongation is 180° and it is said to be in opposition. In both cases, the Moon is in syzygy, that is, the Sun, Moon and Earth are nearly aligned. When elongation is either 90° or 270°, the Moon is said to be in quadrature.
Precession
The orientation of the orbit is not fixed in space but rotates over time. This orbital precession is called apsidal precession and is the rotation of the Moon's orbit within the orbital plane, i.e. the axes of the ellipse change direction. The lunar orbit's major axis – the longest diameter of the orbit, joining its nearest and farthest points, the perigee and apogee, respectively – makes one complete revolution every 8.85 Earth years, or 3,232.6054 days, as it rotates slowly in the same direction as the Moon itself (direct motion) – that is, it precesses eastward by 360° over that period. The Moon's apsidal precession is distinct from the nodal precession of its orbital plane and the axial precession of the Moon itself.
Inclination
The mean inclination of the lunar orbit to the ecliptic plane is 5.145°. Theoretical considerations show that the present inclination relative to the ecliptic plane arose by tidal evolution from an earlier near-Earth orbit with a fairly constant inclination relative to Earth's equator. It would require an inclination of this earlier orbit of about 10° to the equator to produce a present inclination of 5° to the ecliptic. It is thought that originally the inclination to the equator was near zero, but it could have been increased to 10° through the influence of planetesimals passing near the Moon while falling to the Earth. If this had not happened, the Moon would now lie much closer to the ecliptic and eclipses would be much more frequent.
The rotational axis of the Moon is not perpendicular to its orbital plane, so the lunar equator is not in the plane of its orbit, but is inclined to it by a constant value of 6.688° (this is the obliquity). As was discovered by Jacques Cassini in 1722, the rotational axis of the Moon precesses with the same rate as its orbital plane, but is 180° out of phase (see Cassini's Laws). Therefore, the angle between the ecliptic and the lunar equator is always 1.543°, even though the rotational axis of the Moon is not fixed with respect to the stars. It also means that when the Moon is farthest north of the ecliptic, the centre of the part seen from Earth is about 6.7° south of the lunar equator and the south pole is visible, whereas when the Moon is farthest south of the ecliptic the centre of the visible part is 6.7° north of the equator and the north pole is visible. This is called libration in latitude.
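The constant 1.543° quoted here is simply the difference of the two tilts given above, a consequence of the Cassini-state geometry in which the spin axis and the orbit normal precess together, 180° out of phase. A one-line check:

```python
i_orbit = 5.145              # orbit inclination to the ecliptic, deg (quoted above)
obliquity_to_orbit = 6.688   # lunar equator tilt to the orbital plane, deg (quoted above)

# Because the two tilts are coplanar and on the same side, they subtract:
print(round(obliquity_to_orbit - i_orbit, 3))   # 1.543 deg, as quoted
```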
Nodes
The nodes are points at which the Moon's orbit crosses the ecliptic. The Moon crosses the same node every 27.2122 days, an interval called the draconic month or draconitic month. The line of nodes, the intersection between the two respective planes, has a retrograde motion: for an observer on Earth, it rotates westward along the ecliptic with a period of 18.6 years, or 19.3549° per year. When viewed from the celestial north, the nodes move clockwise around Earth, opposite to Earth's own spin and its revolution around the Sun. An eclipse of the Moon or Sun can occur when the nodes align with the Sun, roughly every 173.3 days. Lunar orbit inclination also governs eclipses: shadows cross only when a node coincides with full or new moon, that is, when the Sun, Earth, and Moon align in three dimensions.
In effect, this means that the "tropical year" on the Moon is only 347 days long. This is called the draconic year or eclipse year. The "seasons" on the Moon fit into this period. For about half of this draconic year, the Sun is north of the lunar equator (but at most 1.543°), and for the other half, it is south of the lunar equator. The effect of these seasons, however, is minor compared to the difference between lunar night and lunar day. At the lunar poles, instead of the usual lunar days and nights of about 15 Earth days, the Sun will be "up" for 173 days and then "down" for 173 days; polar sunrise and sunset each take about 18 days. "Up" here means that the centre of the Sun is above the horizon. Lunar polar sunrises and sunsets occur around the time of eclipses (solar or lunar). For example, at the solar eclipse of March 9, 2016, the Moon was near its descending node, and the Sun was near the point in the sky where the equator of the Moon crosses the ecliptic. When the Sun reaches that point, the centre of the Sun sets at the lunar north pole and rises at the lunar south pole.
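Where the ~347-day figure comes from can be sketched directly from the nodal regression rate given above: the Sun moves eastward along the ecliptic while the nodes drift westward, so the Sun returns to the same node faster than once per calendar year (a sketch, not taken from the source):

```python
sun_rate = 360 / 365.25            # Sun's eastward motion, deg/day
node_rate = 19.3549 / 365.25       # nodes' westward drift, deg/day (quoted above)

draconic_year = 360 / (sun_rate + node_rate)   # relative rates add head-on
eclipse_spacing = draconic_year / 2            # node-Sun alignments per half cycle

print(f"{draconic_year:.1f} d")        # ~346.6 d, the "only 347 days" above
print(f"{eclipse_spacing:.1f} d")      # ~173.3 d between eclipse seasons
```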
At the solar eclipse of September 1 of the same year, the Moon was near its ascending node, and the Sun was near the point in the sky where the equator of the Moon crosses the ecliptic. When the Sun reaches that point, the centre of the Sun rises at the lunar north pole and sets at the lunar south pole.
Inclination to the equator and lunar standstill
Every 18.6 years, the angle between the Moon's orbit and Earth's equator reaches a maximum of 28°36′, the sum of Earth's equatorial tilt (23°27′) and the Moon's orbital inclination (5°09′) to the ecliptic. This is called major lunar standstill. Around this time, the Moon's declination will vary from −28°36′ to +28°36′. Conversely, 9.3 years later, the angle between the Moon's orbit and Earth's equator reaches its minimum of 18°20′. This is called a minor lunar standstill. The last lunar standstill was a minor standstill in October 2015. At that time the descending node was lined up with the equinox (the point in the sky having right ascension zero and declination zero). The nodes are moving west by about 19° per year. The Sun crosses a given node about 20 days earlier each year.
When the inclination of the Moon's orbit to the Earth's equator is at its minimum of 18°20′, the centre of the Moon's disk will be above the horizon every day from latitudes less than 70°43' (90° − 18°20' – 57' parallax) north or south. When the inclination is at its maximum of 28°36', the centre of the Moon's disk will be above the horizon every day only from latitudes less than 60°27' (90° − 28°36' – 57' parallax) north or south.
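The degree-and-arcminute arithmetic in the two parenthesized formulas above is easy to mishandle; a small helper makes it explicit (the 57′ term is the Moon's mean horizontal parallax):

```python
def dm(deg, minutes=0):
    """Degrees and arcminutes to decimal degrees."""
    return deg + minutes / 60

def fmt(x):
    """Decimal degrees back to degrees and arcminutes."""
    d = int(x)
    return f"{d} deg {round((x - d) * 60):02d}'"

parallax = dm(0, 57)                      # mean lunar horizontal parallax

print(fmt(90 - dm(18, 20) - parallax))    # 70 deg 43' (minimum inclination)
print(fmt(90 - dm(28, 36) - parallax))    # 60 deg 27' (maximum inclination)
```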
At higher latitudes, there will be a period of at least one day each month when the Moon does not rise, but there will also be a period of at least one day each month when the Moon does not set. This is similar to the seasonal behaviour of the Sun, but with a period of 27.2 days instead of 365 days. Note that a point on the Moon can actually be visible when it is about 34 arc minutes below the horizon, due to atmospheric refraction.
Because of the inclination of the Moon's orbit with respect to the Earth's equator, the Moon is above the horizon at the North and South Pole for almost two weeks every month, even though the Sun is below the horizon for six months at a time. The period from moonrise to moonrise at the poles is a tropical month, about 27.3 days, quite close to the sidereal period. When the Sun is the furthest below the horizon (winter solstice), the Moon will be full when it is at its highest point. When the Moon is in Gemini it will be above the horizon at the North Pole, and when it is in Sagittarius it will be up at the South Pole.
The Moon's light is used by zooplankton in the Arctic when the Sun is below the horizon for months and must have been helpful to the animals that lived in Arctic and Antarctic regions when the climate was warmer.
Scale model
History of observations and measurements
About 1000 BC, the Babylonians were the first human civilization known to have kept a consistent record of lunar observations. Clay tablets from that period, which have been found in Iraq, are inscribed with cuneiform writing recording the times and dates of moonrises and moonsets, the stars that the Moon passed close by, and the time differences between rising and setting of both the Sun and the Moon around the time of a full moon. Babylonian astronomy discovered the three main periods of the Moon's motion and used data analysis to build lunar calendars that extended well into the future. This use of detailed, systematic observations to make predictions based on experimental data may be classified as the first scientific study in human history. However, the Babylonians seem to have lacked any geometric or physical interpretation of their data, and they could not predict future lunar eclipses (though "warnings" were issued before likely eclipse times).
Ancient Greek astronomers were the first to introduce and analyze mathematical models of the motion of objects in the sky. Ptolemy described lunar motion by using a well-defined geometric model of epicycles and evection.
Isaac Newton was the first to develop a complete theory of motion, Newtonian mechanics. The observations of the lunar motion were the main test of his theory.
Lunar periods
There are several different periods associated with the lunar orbit. The sidereal month is the time it takes to make one complete orbit around Earth with respect to the fixed stars. It is about 27.32 days. The synodic month is the time it takes the Moon to reach the same visual phase. This varies notably throughout the year, but averages around 29.53 days. The synodic period is longer than the sidereal period because the Earth–Moon system moves in its orbit around the Sun during each sidereal month, hence a longer period is required to achieve a similar alignment of Earth, the Sun, and the Moon. The anomalistic month is the time between perigees and is about 27.55 days. The Earth–Moon separation determines the strength of the lunar tide-raising force.
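The relation between the sidereal and synodic months described above is the standard reciprocal-difference combination; a sketch with conventional period lengths (the precise values are assumptions, not taken from this passage):

```python
T_sidereal = 27.321661     # sidereal month, days (assumed standard value)
T_year = 365.256363        # sidereal year, days (assumed standard value)

# In one synodic month the Moon must also cover the extra angle the
# Earth-Moon system has swept out around the Sun:
T_synodic = 1 / (1 / T_sidereal - 1 / T_year)

print(f"{T_synodic:.2f} days")   # ~29.53, matching the quoted average
```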
The draconic month is the time from ascending node to ascending node. The time between two successive passes of the same ecliptic longitude is called the tropical month. The latter periods are slightly different from the sidereal month.
The average length of a calendar month (a twelfth of a year) is about 30.4 days. This is not a lunar period, though the calendar month is historically related to the visible lunar phase.
Tidal evolution
The gravitational attraction that the Moon exerts on Earth is the cause of tides in both the ocean and the solid Earth; the Sun has a smaller tidal influence. The solid Earth responds quickly to any change in the tidal forcing, the distortion taking the form of an ellipsoid with the high points roughly beneath the Moon and on the opposite side of Earth. This is a result of the high speed of seismic waves within the solid Earth.
However, the speed of seismic waves is not infinite and, together with the effect of energy loss within the Earth, this causes a slight delay between the moment of maximum lunar forcing at a given location and the maximum Earth tide there. As the Earth rotates faster than the Moon travels around its orbit, this small angle produces a gravitational torque which slows the Earth and accelerates the Moon in its orbit.
In the case of the ocean tides, the speed of tidal waves in the ocean is far slower than the speed of the Moon's tidal forcing. As a result, the ocean is never in near equilibrium with the tidal forcing. Instead, the forcing generates the long ocean waves which propagate around the ocean basins until eventually losing their energy through turbulence, either in the deep ocean or on shallow continental shelves.
Although the ocean's response is the more complex of the two, it is possible to split the ocean tides into a small ellipsoid term which affects the Moon plus a second term which has no effect. The ocean's ellipsoid term also slows the Earth and accelerates the Moon, but because the ocean dissipates so much tidal energy, the present ocean tides have an order of magnitude greater effect than the solid Earth tides.
Because of the tidal torque, caused by the ellipsoids, some of Earth's angular (or rotational) momentum is gradually being transferred to the rotation of the Earth–Moon pair around their mutual centre of mass, called the barycentre. See tidal acceleration for a more detailed description.
This slightly greater orbital angular momentum causes the Earth–Moon distance to increase at approximately 38 millimetres per year. Conservation of angular momentum means that Earth's axial rotation is gradually slowing, and because of this its day lengthens by approximately 24 microseconds every year (excluding glacial rebound). Both figures are valid only for the current configuration of the continents. Tidal rhythmites from 620 million years ago show that, over hundreds of millions of years, the Moon receded at an average rate of about 22 mm per year (2,200 km, or 0.56% of the Earth–Moon distance, per hundred million years) and the day lengthened at an average rate of 12 microseconds per year (or 20 minutes per hundred million years), both about half of their current values.
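The unit bookkeeping behind the paired rates in this paragraph, sketched in Python:

```python
# Recession: 2,200 km per hundred million years, expressed per year.
recession_mm_per_yr = 2_200e3 * 1_000 / 100e6    # -> 22 mm/yr, vs ~38 mm/yr today

# Day lengthening: 12 microseconds per year, accumulated over 100 Myr.
minutes_per_100My = 12e-6 * 100e6 / 60           # -> 20 minutes per 100 Myr

print(recession_mm_per_yr, minutes_per_100My)    # 22.0 20.0
```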
The present high rate may be due to near resonance between natural ocean frequencies and tidal frequencies. Another explanation is that in the past the Earth rotated much faster, a day possibly lasting only 9 hours on the early Earth. The resulting tidal waves in the ocean would have then been much shorter and it would have been more difficult for the long wavelength tidal forcing to excite the short wavelength tides.
The Moon is gradually receding from Earth into a higher orbit, and calculations suggest that this would continue for about 50 billion years. By that time, Earth and the Moon would be in a mutual spin–orbit resonance or tidal locking, in which the Moon will orbit Earth in about 47 days (currently 27 days), and both the Moon and Earth would rotate around their axes in the same time, always facing each other with the same side. This has already happened to the Moon—the same side always faces Earth—and is also slowly happening to the Earth. However, the slowdown of Earth's rotation is not occurring fast enough for the rotation to lengthen to a month before other effects change the situation: approximately 2.3 billion years from now, the increase of the Sun's radiation will have caused Earth's oceans to evaporate, removing the bulk of the tidal friction and acceleration.
Libration
The Moon is in synchronous rotation, meaning that it keeps the same face toward Earth at all times. This synchronous rotation is only true on average because the Moon's orbit has a definite eccentricity. As a result, the angular velocity of the Moon varies as it orbits Earth and hence is not always equal to the Moon's rotational velocity which is more constant. When the Moon is at its perigee, its orbital motion is faster than its rotation. At that time the Moon is a bit ahead in its orbit with respect to its rotation about its axis, and this creates a perspective effect which allows us to see up to eight degrees of longitude of its eastern (right) far side. Conversely, when the Moon reaches its apogee, its orbital motion is slower than its rotation, revealing eight degrees of longitude of its western (left) far side. This is referred to as optical libration in longitude.
The Moon's axis of rotation is inclined by a total of 6.7° relative to the normal to the plane of the ecliptic. This leads to a similar perspective effect in the north–south direction that is referred to as optical libration in latitude, which allows one to see almost 7° of latitude beyond the pole on the far side. Finally, because the Moon is only about 60 Earth radii away from Earth's centre of mass, an observer at the equator who observes the Moon throughout the night moves laterally by one Earth diameter. This gives rise to a diurnal libration, which allows one to view an additional one degree's worth of lunar longitude. For the same reason, observers at both of Earth's geographical poles would be able to see one additional degree's worth of libration in latitude.
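The "additional one degree" of diurnal libration follows from simple parallax: at the quoted distance of 60 Earth radii, an equatorial observer swings about one Earth radius to either side of the Earth–Moon line between moonrise and moonset. A sketch:

```python
import math

distance_earth_radii = 60                          # quoted above
half_shift = math.atan(1 / distance_earth_radii)   # baseline of one Earth radius

print(round(math.degrees(half_shift), 2))   # ~0.95 deg of extra longitude each way
```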
Besides these "optical librations" caused by the change in perspective for an observer on Earth, there are also "physical librations" which are actual nutations of the direction of the pole of rotation of the Moon in space: but these are very small.
Path of Earth and Moon around Sun
When viewed from the north celestial pole (that is, from the approximate direction of the star Polaris) the Moon orbits Earth anticlockwise and Earth orbits the Sun anticlockwise, and the Moon and Earth rotate on their own axes anticlockwise.
The right-hand rule can be used to indicate the direction of the angular velocity. If the thumb of the right hand points to the north celestial pole, its fingers curl in the direction that the Moon orbits Earth, Earth orbits the Sun, and the Moon and Earth rotate on their own axes.
In representations of the Solar System, it is common to draw the trajectory of Earth from the point of view of the Sun, and the trajectory of the Moon from the point of view of Earth. This could give the impression that the Moon orbits Earth in such a way that sometimes it goes backwards when viewed from the Sun's perspective. However, because the orbital velocity of the Moon around Earth (1 km/s) is small compared to the orbital velocity of Earth about the Sun (30 km/s), this never happens. There are no rearward loops in the Moon's solar orbit.
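The no-loops claim reduces to comparing the two speeds just quoted: the Moon's geocentric speed never exceeds Earth's heliocentric speed, so the Moon's net motion around the Sun is always forward. A sketch:

```python
v_earth = 30.0   # km/s, Earth around the Sun (quoted above)
v_moon = 1.0     # km/s, Moon around Earth (quoted above)

# Heliocentric speed of the Moon stays within this range and never
# goes negative, so the Moon never moves backward relative to the Sun:
print(v_earth - v_moon, "to", v_earth + v_moon, "km/s")   # 29.0 to 31.0
```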
Considering the Earth–Moon system as a binary planet, its centre of gravity is within Earth, about 4,670 km, or 73.3% of the Earth's radius, from the centre of the Earth. This centre of gravity remains on the line between the centres of the Earth and Moon as the Earth completes its diurnal rotation. The path of the Earth–Moon system in its solar orbit is defined as the movement of this mutual centre of gravity around the Sun. Consequently, Earth's centre veers inside and outside the solar orbital path during each synodic month as the Moon moves in its orbit around the common centre of gravity.
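The 73.3% figure can be recovered from the mass ratio alone (a sketch; the Earth-to-Moon mass ratio of ~81.3 and Earth radius of 6,371 km are standard values assumed here, not given in this passage):

```python
a = 384_400            # mean Earth-Moon distance, km
mass_ratio = 81.3      # M_earth / M_moon (assumed standard value)
R_earth = 6_371        # km (assumed standard value)

# The barycentre divides the separation in inverse proportion to the masses:
r_barycentre = a / (1 + mass_ratio)

print(f"{r_barycentre:.0f} km = {r_barycentre / R_earth:.1%} of Earth's radius")
```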
The Sun's gravitational effect on the Moon is more than twice that of Earth's on the Moon; consequently, the Moon's trajectory is always convex (as seen when looking Sunward at the entire Sun–Earth–Moon system from a great distance outside Earth–Moon solar orbit), and is nowhere concave (from the same perspective) or looped. That is, the region enclosed by the Moon's orbit of the Sun is a convex set.
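The "more than twice" figure can likewise be recovered from standard values (assumed here): the ratio of solar to terrestrial gravitational acceleration on the Moon is (M_sun/M_earth) × (a_moon/1 AU)².

```python
M_sun_over_earth = 333_000    # solar-to-terrestrial mass ratio (assumed standard value)
a_moon = 384_400              # km, Earth-Moon distance
AU = 1.496e8                  # km, Earth-Sun distance (assumed standard value)

# Both accelerations scale as GM / r^2, so the ratio is:
ratio = M_sun_over_earth * (a_moon / AU) ** 2
print(round(ratio, 2))        # ~2.2: the Sun's pull on the Moon exceeds twice Earth's
```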
| Physical sciences | Solar System | Astronomy |
1611195 | https://en.wikipedia.org/wiki/Hand%20cannon | Hand cannon | The hand cannon ( or ), also known as the gonne or handgonne, is the first true firearm and the successor of the fire lance. It is the oldest type of small arms, as well as the most mechanically simple form of metal barrel firearms. Unlike matchlock firearms it requires direct manual external ignition through a touch hole without any form of firing mechanism. It may also be considered a forerunner of the handgun. The hand cannon was widely used in China from the 13th century onward and later throughout Eurasia in the 14th century. In 15th century Europe, the hand cannon evolved to become the matchlock arquebus, which became the first firearm to have a trigger.
History
China
The earliest artistic depiction of what might be a hand cannon—a rock sculpture found among the Dazu Rock Carvings—is dated to 1128, much earlier than any recorded or precisely dated archaeological samples, so it is possible that the concept of a cannon-like firearm has existed since the 12th century. This has been challenged by others such as Liu Xu, Cheng Dong, and Benjamin Avichai Katz Sinvany. According to Liu, the weight of the cannon would have been too much for one person to hold, especially with just one arm, and points out that fire lances were being used a decade later at the Siege of De'an. Cheng Dong believes that the figure depicted is actually a wind spirit letting air out of a bag rather than a cannon emitting a blast. Stephen Haw also considered the possibility that the item in question was a bag of air but concludes that it is a cannon because it was grouped with other weapon-wielding sculptures. Sinvany concurred with the wind bag interpretation and that the cannonball indentation was added later on.
The first cannons were likely an evolution of the fire lance. In 1259 a type of "fire-emitting lance" (tūhuǒqiāng 突火槍) made an appearance. According to the History of Song: "It is made from a large bamboo tube, and inside is stuffed a pellet wad (zǐkē 子窠). Once the fire goes off it completely spews the rear pellet wad forth, and the sound is like a bomb that can be heard for five hundred or more paces." The pellet wad mentioned is possibly the first true bullet in recorded history depending on how bullet is defined, as it did occlude the barrel, unlike previous co-viatives (non-occluding shrapnel) used in the fire lance. Fire lances transformed from the "bamboo- (or wood- or paper-) barreled firearm to the metal-barreled firearm" to better withstand the explosive pressure of gunpowder. From there it branched off into several different gunpowder weapons known as "eruptors" in the late 12th and early 13th centuries, with different functions such as the "filling-the-sky erupting tube" which spewed out poisonous gas and porcelain shards, the "hole-boring flying sand magic mist tube" (zuànxuéfēishāshénwùtǒng 鑽穴飛砂神霧筒) which spewed forth sand and poisonous chemicals into orifices, and the more conventional "phalanx-charging fire gourd" which shot out lead pellets.
Hand cannons first saw widespread usage in China sometime during the 13th century and spread from there to the rest of the world. In 1287 Yuan Jurchen troops deployed hand cannons in putting down a rebellion by the Mongol prince Nayan. The History of Yuan reports that the cannons of Li Ting's soldiers "caused great damage" and created "such confusion that the enemy soldiers attacked and killed each other." The hand cannons were used again in the beginning of 1288. Li Ting's "gun-soldiers" or chòngzú were able to carry the hand cannons "on their backs". The passage on the 1288 battle is also the first to coin the name chòng, with the metal radical jīn, for metal-barrel firearms. Chòng was used instead of the earlier and more ambiguous term huǒtǒng (fire tube), which may refer to the tubes of fire lances, proto-cannons, or signal flares. Hand cannons may have also been used in the Mongol invasions of Japan. Japanese descriptions of the invasions talk of iron and bamboo pào causing "light and fire" and emitting 2,000–3,000 iron bullets. The Nihon Kokujokushi, written around 1300, mentions huǒtǒng (fire tubes) at the Battle of Tsushima in 1274 and the second coastal assault led by Holdon in 1281. The Hachiman Gudoukun of 1360 mentions iron pào "which caused a flash of light and a loud noise when fired." The Taiheiki of 1370 mentions "iron pào shaped like a bell." Mongol troops of the Yuan dynasty carried Chinese cannons to Java during their 1293 invasion.
The oldest extant hand cannon bearing a date of production is the Xanadu Gun, which contains an era date corresponding to 1298. The Heilongjiang hand cannon is dated a decade earlier to 1288, corresponding to the military conflict involving Li Ting, but the dating method is based on contextual evidence; the gun bears no inscription or era date. Another cannon bears an era date that could correspond with the year 1271 in the Gregorian Calendar, but contains an irregular character in the reign name. Other specimens also likely predate the Xanadu and Heilongjiang guns and have been traced as far back as the late Western Xia period (1214–1227), but these too lack inscriptions and era dates (see Wuwei bronze cannon).
Spread
The earliest reliable evidence of cannons in Europe appeared in 1326 in a register of the municipality of Florence, and evidence of their production can be dated as early as 1327. The first recorded use of gunpowder weapons in Europe was in 1331, when two mounted German knights attacked Cividale del Friuli with gunpowder weapons of some sort. By 1338 hand cannons were in widespread use in France. One of the oldest surviving weapons of this type is the "Loshult gun", a Swedish example from the mid-14th century. In 1999, a group of British and Danish researchers made a replica of the gun and tested it using four period-accurate mixes of gunpowder, firing both arrows and lead balls; the tests measured the velocities and maximum ranges achieved by each projectile type across the different powder charges. The first English source describing a handheld firearm (hand cannon) was written in 1473.
Although evidence of cannons appears later in the Middle East than Europe, fire lances were described earlier by Hasan al-Rammah between 1240 and 1280, and appeared in battles between Muslims and Mongols in 1299 and 1303. Hand cannons may have been used in the early 14th century. An Arabic text dating to 1320–1350 describes a type of gunpowder weapon called a midfa which uses gunpowder to shoot projectiles out of a tube at the end of a stock. Some scholars consider this a hand cannon while others dispute this claim. The Nasrid army besieging Elche in 1331 made use of "iron pellets shot with fire." According to Paul E. J. Hammer, the Mamluks certainly used cannons by 1342. According to J. Lavin, cannons were used by Moors at the siege of Algeciras in 1343. Shihab al-Din Abu al-Abbas al-Qalqashandi described a metal cannon firing an iron ball between 1365 and 1376.
Cannons are attested in India from 1366. The Joseon kingdom in Korea acquired knowledge of gunpowder from China by 1372 and started producing cannons by 1377. In Southeast Asia, Đại Việt soldiers were using hand cannons at the very latest by 1390, when they employed them in killing the Champa king Che Bong Nga. A Chinese observer recorded the Javanese use of hand cannons in a marriage ceremony in 1413, during Zheng He's voyages. Japan was already aware of gunpowder warfare due to the Mongol invasions during the 13th century, but did not acquire a cannon until a monk took one back to Japan from China in 1510, and firearms were not produced until 1543, when the Portuguese introduced matchlocks, which were known as tanegashima to the Japanese. The art of firing the hand cannon, called Ōzutsu (大筒), has remained as a Ko-budō martial arts form.
Middle East
The earliest surviving documentary evidence for the use of the hand cannon in the Islamic world are from several Arabic manuscripts dated to the 14th century. The historian Ahmad Y. al-Hassan argues that several 14th-century Arabic manuscripts, one of which was written by Shams al-Din Muhammad al-Ansari al-Dimashqi (1256–1327), report the use of hand cannons by Mamluk-Egyptian forces against the Mongols at the Battle of Ain Jalut in 1260. However, Hassan's claim contradicts other historians who claim hand cannons did not appear in the Middle East until the 14th century.
Iqtidar Alam Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannons only reached Mamluk Egypt in the 1370s. According to Joseph Needham, fire lances or proto-guns were known to Muslims by the late 13th century and early 14th century. However the term midfa, dated to textual sources from 1342 to 1352, cannot be proven to be true hand-guns or bombards, and contemporary accounts of a metal-barrel cannon in the Islamic world do not occur until 1365. Needham also concludes that in its original form the term midfa refers to the tube or cylinder of a naphtha projector (flamethrower), then after the invention of gunpowder it meant the tube of fire lances, and eventually it applied to the cylinder of hand-gun and cannon. Similarly, Tonio Andrade dates the textual appearance of cannon in Middle-Eastern sources to the 1360s. David Ayalon and Gabor Ágoston believe the Mamluks had certainly used siege cannon by the 1360s, but earlier uses of cannon in the Islamic World are vague with a possible appearance in the Emirate of Granada by the 1320s, however evidence is inconclusive.
Khan claims that it was invading Mongols who introduced gunpowder to the Islamic world and cites Mamluk antagonism towards early riflemen in their infantry as an example of how gunpowder weapons were not always met with open acceptance in the Middle East. Similarly, the refusal of their Qizilbash forces to use firearms contributed to the Safavid rout at Chaldiran in 1514.
Arquebus
Early European hand cannons, such as the socket-handgonne, were relatively easy to produce; smiths often used brass or bronze when making these early gonnes. The production of early hand cannons was not uniform; this resulted in complications when loading or using the gunpowder in the hand cannon. Improvements in hand cannon and gunpowder technology—corned powder, shot ammunition, and development of the flash pan—led to the invention of the arquebus in late 15th-century Europe.
Design and features
The hand cannon consists of a barrel, a handle, and sometimes a socket to insert a wooden stock. Extant samples show that some hand cannons also featured a metal extension as a handle.
The hand cannon could be held in two hands, but another person is often shown aiding in the ignition process using smoldering wood, coal, red-hot iron rods, or slow-burning matches. The hand cannon could be placed on a rest and held by one hand, while the gunner applied the means of ignition himself.
Projectiles used in hand cannons were known to include rocks, pebbles, and arrows. Eventually stone projectiles in the shape of balls became the preferred form of ammunition, and then they were replaced by iron balls from the late 14th to 15th centuries.
Later hand cannons have been shown to include a flash pan attached to the barrel and a touch hole drilled through the side wall instead of the top of the barrel. The flash pan had a leather cover and, later on, a hinged metal lid, to keep the priming powder dry until the moment of firing and to prevent premature firing. These features were carried over to subsequent firearms.
Gallery
Asia
Europe
| Technology | Firearms | null |
1613879 | https://en.wikipedia.org/wiki/Plastic%20bag | Plastic bag | A plastic bag, poly bag, or pouch is a type of container made of thin, flexible, plastic film, nonwoven fabric, or plastic textile. Plastic bags are used for containing and transporting goods such as foods, produce, powders, ice, magazines, chemicals, and waste. It is a common form of packaging.
Most plastic bags are heat sealed at the seams, while some are bonded with adhesives or are stitched.
Many countries are introducing legislation to phase out lightweight plastic bags, because plastic never fully degrades and so causes persistent plastic pollution and other environmental impacts. Every year, about 1 to 5 trillion plastic bags are used and discarded around the world. From point of sale to destination, a plastic bag has an average useful life of about 12 minutes. Approximately 320 bags per capita were used in the United States in 2014.
Package
Several design options and features are available. Some bags have gussets to allow a higher volume of contents, special stand-up pouches have the ability to stand up on a shelf or a refrigerator, and some have easy-opening or reclosable options. Handles are cut into or added into some.
Bags can be made from a variety of plastic films. Polyethylene (LDPE, LLDPE, etc.) is the most common. Other forms, including laminates and co-extrusions, can be used when particular physical properties are needed. The plastics used to create single-use bags are made primarily from fossil fuels. International Plastic Bag Free Day is celebrated on July 3.
Plastic bags usually use less material than comparable boxes, cartons, or jars, and are thus often considered "reduced or minimized packaging". In June 2009, Germany's Institute for Energy and Environmental Research concluded that oil-based plastics, especially if recycled, have a better life-cycle analysis than compostable plastics. They added that "The current bags made from bioplastics have less favourable environmental impact profiles than the other materials examined" and that this is due to the process of raw-material production.
Depending on the construction, plastic bags can be suited for plastic recycling. They can be incinerated in appropriate facilities for waste-to-energy conversion. They are stable and benign in sanitary landfills. If disposed of improperly, however, plastic bags can create unsightly litter and harm some types of wildlife. Plastic bags have low recycling rates because they are difficult to separate from other waste, and mixed-material recycling contaminates the recovered material. However, plastic bags are reused before disposal an average of 1.6 times.
Bags come with various features such as carrying handles, hanging holes, tape attachments, and security features. Some bags are designed for easy opening and have reclosable press-to-seal zipper strips. This feature is commonly found in empty kitchen bags and some food packaging. Some bags are sealed for tamper-evident capability, including some where the press-to-reseal feature becomes accessible only when a perforated outer seal has torn away.
Boil-in-bags are often used for sealed frozen foods, sometimes complete entrees. The bags are usually tough heat-sealed nylon or polyester to withstand the temperatures of boiling water. Some bags are porous or perforated to allow the hot water to contact the food: rice, noodles, etc. Grocery stores are the single largest supplier of single-use plastic bags.
Bag-in-box packaging is often used for liquids such as box wine and institutional sizes of other liquids.
Medical uses
Plastic bags are used for many medical purposes. The non-porous quality of plastic film means that they are useful for isolating infectious body fluids; other porous bags made of nonwoven plastics can be sterilized by gas and maintain this sterility. Bags can be made under regulated sterile manufacturing conditions, so they can be used when the infection is a health risk. They are lightweight and flexible, so they can be carried by or laid next to patients without making the patient as uncomfortable as a heavy glass bottle would be. They are less expensive than re-usable options, such as glass bottles. Moderate quality evidence from a 2018 systematic review showed that plastic wraps or bags prevented hypothermia compared to routine care, especially in extremely preterm infants.
Waste disposal bags
Flexible intermediate bulk container
Flexible intermediate bulk containers are large industrial containers, usually used for bulk powders or flowables. They are usually constructed of woven heavy-duty plastic fibers.
Plastic shopping bags
Open bags with carrying handles are used in large numbers. Stores often provide them as a convenience to shoppers. Some stores charge a nominal fee for a bag. Heavy-duty reusable shopping bags are often considered environmentally better than single-use paper or plastic shopping bags. Because of environmental and litter problems, some locations are working toward a phase-out of lightweight plastic bags.
History
American and European patent applications relating to the production of plastic shopping bags can be found dating back to the early 1950s, but these refer to composite constructions with handles fixed to the bag in a secondary manufacturing process. The modern lightweight shopping bag is the invention of Swedish engineer Sten Gustaf Thulin. In the early 1960s, Thulin developed a method of forming a simple one-piece bag by folding, welding and die-cutting a flat tube of plastic for the packaging company Celloplast of Norrköping, Sweden. Thulin's design produced a simple, strong bag with a high load-carrying capacity, and was patented worldwide by Celloplast in 1965.
From the mid-1980s onwards, plastic bags became common for carrying daily groceries from the store to vehicles and homes throughout the developed world. As plastic bags increasingly replaced paper bags, and as other plastic materials and products replaced glass, metal, stone, timber and other materials, a packaging materials war erupted, with plastic shopping bags at the center of highly publicized disputes.
In 1992, Sonoco Products Company of Hartsville, SC patented the "self-opening polyethylene bag stack". The main innovation of this redesign is that the removal of a bag from the rack opens the next bag in the stack.
International usage
The number of plastic bags used and discarded worldwide has been estimated to be on the order of one trillion annually. The use of plastic bags differs dramatically across countries. While the average consumer in China uses only two or three plastic bags a year, the numbers are much higher in most other countries: Denmark: four; Ireland: 20; Germany: 65; Poland, Hungary, Slovakia: more than 400.
A large number of cities and counties have banned the use of plastic bags by grocery stores or introduced a minimum charge. In September 2014, California became the first state to pass a law banning their use, but the ban contained what has since been called a loophole, allowing supermarkets to provide thicker plastic bags as "reusable" bags at a cost of 10 cents each. In 2024, California passed a new law, taking effect in 2026, which closes this loophole and reinforces the original ban. In India, the government has banned the use of plastic bags of a thickness below 50 microns. In 2018, Montreal, Canada, also banned plastic bags with Ottawa expected to also put the ban into effect.
Plastic bags and the environment
Plastic bags are made mostly from petroleum products and natural gas. An estimated 8% of the world's petroleum resources, about 12 million barrels of oil a day, go into plastic bags: roughly half as raw material and the other half as energy for manufacturing. Consumption at this rate depletes a finite resource on which many other products depend, and the manufacturing process also contributes to air pollution.
Non-compostable plastic bags can take up to 1,000 years to decompose. Plastic bags do not biodegrade; rather, they photodegrade, a process by which they break down into smaller, toxic fragments. In the 2000s, many stores and companies began to use different types of biodegradable bags in pursuit of perceived environmental benefits.
When plastic shopping bags are not disposed of properly, they can end up in streams, which then carry them to the open ocean. To mitigate marine plastic pollution from single-use shopping bags, many jurisdictions around the world have implemented bans or fees on the use of plastic bags. An estimated 300 million plastic bags end up in the Atlantic Ocean alone. A bag floating in open water can resemble a jellyfish, posing significant dangers to marine mammals and leatherback sea turtles, which may eat it by mistake. After ingestion, the plastic material can lead to premature death. Once the animal dies and its body decomposes, the plastic reenters the environment, posing further problems.
Huge masses of plastic waste enter the oceans each year, causing several kinds of damage: they put marine species at risk, disturb food webs and thereby whole marine ecosystems, carry colonizing microbes and alien species on their surfaces, which enhances their harmfulness, and, driven by winds, accumulate into garbage patches in various parts of the oceans.
Marine animals are not the only animals affected by improper plastic bag disposal. Seabirds hunt by detecting dimethyl sulfide (DMS), which is produced by algae. Because algae readily grow on floating plastic, seabirds mistakenly eat the bags rather than the fish that typically ingest the algae. (National Geographic)
Plastic bags fare poorly in the environment, but several government studies have found them to be an environmentally competitive carryout bag option. According to Recyc-Quebec, a Canadian recycling agency, "The conventional plastic bag has several environmental and economic advantages. Thin and light, its production requires little material and energy. It also avoids the production and purchase of garbage/bin liner bags since it benefits from a high reuse rate when reused for this purpose (77.7%)." Government studies from Denmark and the United Kingdom, as well as a study from Clemson University, came to similar conclusions.
Even though the bags are plastic, they typically cannot be recycled in curbside recycling bins. The material frequently jams the equipment at recycling plants, forcing operators to pause the machinery and slowing daily operations. However, plastic bags are 100% recyclable: users must drop them off at a location that accepts plastic film, usually a grocery store or another major retail store.
Danger to children
Thin, conformable plastic bags, especially dry cleaning bags, have the potential to cause suffocation. About 25 children in the United States suffocate each year due to plastic bags, almost nine-tenths of whom are under the age of one. This has led to voluntary warning labels on some bags that pose a hazard to small children.
Uses
Plastic bags are used for diverse applications:
| Technology | Containers | null |
1614492 | https://en.wikipedia.org/wiki/Connectivity%20%28graph%20theory%29 | Connectivity (graph theory) | In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to separate the remaining nodes into two or more isolated subgraphs. It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network.
Connected vertices and graphs
In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1 (that is, they are the endpoints of a single edge), the vertices are called adjacent.
A graph is said to be connected if every pair of vertices in the graph is connected. This means that there is a path between every pair of vertices. An undirected graph that is not connected is called disconnected. An undirected graph G is therefore disconnected if there exist two vertices in G such that no path in G has these vertices as endpoints. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected.
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is unilaterally connected or unilateral (also called semiconnected) if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected, or simply strong, if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v.
Components and cuts
A connected component is a maximal connected subgraph of an undirected graph. Each vertex belongs to exactly one connected component, as does each edge. A graph is connected if and only if it has exactly one connected component.
The strong components are the maximal strongly connected subgraphs of a directed graph.
A vertex cut or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The vertex connectivity κ(G) (where G is not a complete graph) is the size of a smallest vertex cut. A graph is called k-vertex-connected or k-connected if its vertex connectivity is k or greater.
More precisely, any graph G (complete or not) is said to be k-vertex-connected if it contains at least k + 1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph; and κ(G) is defined as the largest k such that G is k-connected. In particular, a complete graph with n vertices, denoted K_n, has no vertex cuts at all, but κ(K_n) = n − 1.
A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs; that is, κ(u, v) = κ(v, u). Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v.
2-connectivity is also called biconnectivity and 3-connectivity is also called triconnectivity. A graph which is connected but not 2-connected is sometimes called separable.
Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. The edge-connectivity λ(G) is the size of a smallest edge cut, and the local edge-connectivity λ(u, v) of two vertices u, v is the size of a smallest edge cut disconnecting u from v. Again, local edge-connectivity is symmetric. A graph is called k-edge-connected if its edge connectivity is k or greater.
A graph is said to be maximally connected if its connectivity equals its minimum degree. A graph is said to be maximally edge-connected if its edge-connectivity equals its minimum degree.
Super- and hyper-connectivity
A graph is said to be super-connected or super-κ if every minimum vertex cut isolates a vertex. A graph is said to be hyper-connected or hyper-κ if the deletion of each minimum vertex cut creates exactly two components, one of which is an isolated vertex. A graph is semi-hyper-connected or semi-hyper-κ if any minimum vertex cut separates the graph into exactly two components.
More precisely: a connected graph is said to be super-connected or super-κ if all minimum vertex-cuts consist of the vertices adjacent with one (minimum-degree) vertex.
A connected graph is said to be super-edge-connected or super-λ if all minimum edge-cuts consist of the edges incident on some (minimum-degree) vertex.
A cutset X of G is called a non-trivial cutset if X does not contain the neighborhood N(v) of any vertex v ∉ X. Then the superconnectivity κ₁(G) of G is the size of a smallest non-trivial cutset: κ₁(G) = min{|X| : X is a non-trivial cutset of G}.
A non-trivial edge-cut and the edge-superconnectivity are defined analogously.
Menger's theorem
One of the most important facts about connectivity in graphs is Menger's theorem, which characterizes the connectivity and edge-connectivity of a graph in terms of the number of independent paths between vertices.
If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex (other than u and v themselves). Similarly, the collection is edge-independent if no two paths in it share an edge. The number of mutually independent paths between u and v is written as κ′(u, v), and the number of mutually edge-independent paths between u and v is written as λ′(u, v).
Menger's theorem asserts that for distinct vertices u, v, λ(u, v) equals λ′(u, v), and if u is also not adjacent to v then κ(u, v) equals κ′(u, v). This fact is actually a special case of the max-flow min-cut theorem.
Computational aspects
The problem of determining whether two vertices in a graph are connected can be solved efficiently using a search algorithm, such as breadth-first search. More generally, it is easy to determine computationally whether a graph is connected (for example, by using a disjoint-set data structure), or to count the number of connected components. A simple algorithm might be written in pseudo-code as follows:
Begin at an arbitrary node of the graph G.
Proceed from that node using either depth-first or breadth-first search, counting all nodes reached.
Once the graph has been entirely traversed, if the number of nodes counted is equal to the number of nodes of G, the graph is connected; otherwise it is disconnected.
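As a concrete illustration, the three steps above translate directly into a breadth-first search. The following is a minimal Python sketch; the function name and the adjacency-list representation are illustrative choices, not taken from the article:

```python
from collections import deque

def is_connected(graph):
    """Return True if an undirected graph, given as a dict mapping each
    vertex to a list of its neighbors, is connected."""
    if not graph:
        return True                    # the empty graph is trivially connected
    start = next(iter(graph))          # 1. begin at an arbitrary node
    seen = {start}
    queue = deque([start])
    while queue:                       # 2. breadth-first search, counting nodes reached
        v = queue.popleft()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(graph)     # 3. connected iff every node was reached

# A path on three vertices is connected; an added isolated vertex disconnects it.
g = {1: [2], 2: [1, 3], 3: [2]}
print(is_connected(g))   # True
g[4] = []
print(is_connected(g))   # False
```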
By Menger's theorem, for any two vertices u and v in a connected graph G, the numbers κ(u, v) and λ(u, v) can be determined efficiently using the max-flow min-cut algorithm. The connectivity and edge-connectivity of G can then be computed as the minimum values of κ(u, v) and λ(u, v), respectively.
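In practice these flow-based computations are available off the shelf; as one sketch (assuming the networkx Python library is installed, and not part of the article's own material):

```python
import networkx as nx

G = nx.cycle_graph(5)  # a 5-cycle, where κ(G) = λ(G) = minimum degree = 2

# Local connectivities κ(u, v) and λ(u, v), computed via max-flow internally
print(nx.node_connectivity(G, 0, 2))  # 2: two vertex-disjoint paths from 0 to 2
print(nx.edge_connectivity(G, 0, 2))  # 2: two edge-disjoint paths from 0 to 2

# Global connectivities, the minima over suitable vertex pairs
print(nx.node_connectivity(G))        # 2
print(nx.edge_connectivity(G))        # 2
```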
In computational complexity theory, SL is the class of problems log-space reducible to the problem of determining whether two vertices in a graph are connected, which was proved to be equal to L by Omer Reingold in 2004. Hence, undirected graph connectivity may be solved in O(log n) space.
The problem of computing the probability that a Bernoulli random graph is connected is called network reliability and the problem of computing whether two given vertices are connected the ST-reliability problem. Both of these are #P-hard.
Number of connected graphs
The number of distinct connected labeled graphs with n nodes is tabulated in the On-Line Encyclopedia of Integer Sequences as sequence A001187. The first few non-trivial terms, for n = 2, 3, 4, 5, 6, 7, 8, are 1, 4, 38, 728, 26704, 1866256, and 251548592.
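These values can be reproduced with a short computation. A classical recurrence counts connected labeled graphs by conditioning on the size k of the component containing vertex 1; the Python sketch below (illustrative, not from the article) implements it:

```python
from math import comb

def connected_labeled_graphs(n_max):
    """Count connected labeled graphs on 1..n_max vertices.

    Uses the recurrence c(n) = g(n) - sum over k < n of
    C(n-1, k-1) * c(k) * g(n-k), where g(n) = 2^(n(n-1)/2) counts
    all labeled graphs and k is the size of vertex 1's component.
    """
    g = lambda n: 2 ** (n * (n - 1) // 2)
    c = [0, 1]  # c[1] = 1: the one-vertex graph is connected
    for n in range(2, n_max + 1):
        total = g(n)
        for k in range(1, n):
            total -= comb(n - 1, k - 1) * c[k] * g(n - k)
        c.append(total)
    return c[1:]

print(connected_labeled_graphs(8))
# [1, 1, 4, 38, 728, 26704, 1866256, 251548592]
```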
Examples
The vertex- and edge-connectivities of a disconnected graph are both 0.
1-connectedness is equivalent to connectedness for graphs of at least two vertices.
The complete graph on n vertices has edge-connectivity equal to n − 1. Every other simple graph on n vertices has strictly smaller edge-connectivity.
In a tree, the local edge-connectivity between any two distinct vertices is 1.
Bounds on connectivity
The vertex-connectivity of a graph is less than or equal to its edge-connectivity. That is, κ(G) ≤ λ(G).
The edge-connectivity of a graph with at least two vertices is less than or equal to its minimum degree, λ(G) ≤ δ(G), because removing all the edges incident to a vertex of minimum degree disconnects that vertex from the rest of the graph.
For a vertex-transitive graph of degree d, we have 2(d + 1)/3 ≤ κ(G) ≤ λ(G) = d.
For a vertex-transitive graph of degree d ≤ 4, or for any (undirected) minimal Cayley graph of degree d, or for any symmetric graph of degree d, both kinds of connectivity are equal: κ(G) = λ(G) = d.
Other properties
Connectedness is preserved by graph homomorphisms.
If G is connected then its line graph L(G) is also connected.
A graph G is 2-edge-connected if and only if it has an orientation that is strongly connected.
Balinski's theorem states that the polytopal graph (1-skeleton) of a k-dimensional convex polytope is a k-vertex-connected graph. Steinitz's earlier theorem that any 3-vertex-connected planar graph is a polytopal graph (Steinitz's theorem) gives a partial converse.
According to a theorem of G. A. Dirac, if a graph is k-connected for k ≥ 2, then for every set of k vertices in the graph there is a cycle that passes through all the vertices in the set. The converse is true when k = 2.
| Mathematics | Graph theory | null |
1614609 | https://en.wikipedia.org/wiki/Polarization%20in%20astronomy | Polarization in astronomy | Polarization of electromagnetic radiation is a useful tool for detecting various astronomical phenomena. For example, radiation can become polarized by passing through interstellar dust or by magnetic fields. Microwave radiation from the primordial universe can be used to study the physics of that environment.
Stars
The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields.
Though the integrated thermal radiation of stars is not usually appreciably polarized at source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A type stars).
Sun
Both circular and linear polarization of sunlight have been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization), though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions, which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. As a result, polarization of the scattered photons is also modified, providing a diagnostic tool for understanding stellar magnetic fields.
Other sources
Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers).
The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization.
Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field in our galaxy as well as in radio galaxies via Faraday rotation. In some cases it can be difficult to determine how much of the Faraday rotation is in the external source and how much is local to our own galaxy, but in many cases it is possible to find another distant source nearby in the sky; thus by comparing the candidate source and the reference source, the results can be untangled.
Cosmic microwave background
The polarization of the cosmic microwave background (CMB) is also being used to study the physics of the very early universe. The CMB exhibits two components of polarization: B-mode (divergence-free, like a magnetic field) and E-mode (curl-free, gradient-only, like an electric field) polarization. The BICEP2 telescope located at the South Pole initially claimed the detection of B-mode polarization in the CMB, though the result was later retracted. The polarization modes of the CMB may provide more information about the influence of gravitational waves on the development of the early universe.
It has been suggested that astronomical sources of polarised light caused the chirality found in biological molecules on Earth.
| Physical sciences | Basics | Astronomy |
1615402 | https://en.wikipedia.org/wiki/Sulfonic%20acid | Sulfonic acid | In organic chemistry, sulfonic acid (or sulphonic acid) refers to a member of the class of organosulfur compounds with the general formula R−S(=O)2−OH, where R is an organic alkyl or aryl group and the −S(=O)2−OH group a sulfonyl hydroxide. As a substituent, it is known as a sulfo group. A sulfonic acid can be thought of as sulfuric acid with one hydroxyl group replaced by an organic substituent. The parent compound (with the organic substituent replaced by hydrogen) is the parent sulfonic acid, H−S(=O)2−OH, a tautomer of sulfurous acid, S(=O)(OH)2. Salts or esters of sulfonic acids are called sulfonates.
Preparation
Aryl sulfonic acids are produced by the process of sulfonation. Usually the sulfonating agent is sulfur trioxide. A large scale application of this method is the production of alkylbenzenesulfonic acids:
RC6H5 + SO3 → RC6H4SO3H
In this reaction, sulfur trioxide is an electrophile and the arene is the nucleophile. The reaction is an example of electrophilic aromatic substitution. In a related process, carboxylic acids react with sulfur trioxide to give sulfonic acids. Direct reaction of alkanes with sulfur trioxide is used for the conversion of methane to methanedisulfonic acid.
Alkylsulfonic acids can be prepared by sulfoxidation whereby alkanes are irradiated with a mixture of sulfur dioxide and oxygen. This reaction is employed industrially to produce alkyl sulfonic acids, which are used as surfactants.
From terminal alkenes, alkane sulfonic acids can be obtained by the addition of bisulfite.
Bisulfite can also be alkylated by alkyl halides:
RX + HSO3− → RSO3H + X−
Sulfonic acids can be prepared by oxidation of thiols, shown here with a generic oxidant [O]:
RSH + 3 [O] → RSO3H
This pathway is the basis of the biosynthesis of taurine.
Hydrolysis routes
Many sulfonic acids are prepared by hydrolysis of sulfonyl halides and related precursors. Thus, perfluorooctanesulfonic acid is prepared by hydrolysis of the sulfonyl fluoride, which in turn is generated by the electrofluorination of octanesulfonic acid. Similarly the sulfonyl chloride derived from polyethylene is hydrolyzed to the sulfonic acid. These sulfonyl chlorides are produced by free-radical reactions of chlorine, sulfur dioxide, and the hydrocarbons using the Reed reaction.
Vinylsulfonic acid is derived by hydrolysis of carbyl sulfate, which in turn is obtained by the addition of sulfur trioxide to ethylene.
Properties
Sulfonic acids are strong acids. They are commonly cited as being around a million times stronger than the corresponding carboxylic acids. For example, p-toluenesulfonic acid and methanesulfonic acid have pKa values of −2.8 and −1.9, respectively, while those of benzoic acid and acetic acid are 4.20 and 4.76, respectively. However, as a consequence of their strong acidity, their pKa values cannot be measured directly, and values commonly quoted should be regarded as indirect estimates with significant uncertainties. For instance, various sources have reported the pKa of methanesulfonic acid to be as high as −0.6 or as low as −6.5. Sulfonic acids are known to react with solid sodium chloride (salt) to form the sodium sulfonate and hydrogen chloride. This property implies an acidity within two or three orders of magnitude of that of HCl(g), whose pKa was recently accurately determined (pKa(aq) = −5.9).
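The "million times stronger" comparison follows directly from the definition of pKa; as a worked check using the values quoted above (illustrative arithmetic, not from the source):

\[
K_a = 10^{-\mathrm{p}K_a}
\quad\Rightarrow\quad
\frac{K_a(\mathrm{CH_3SO_3H})}{K_a(\mathrm{CH_3CO_2H})}
= \frac{10^{1.9}}{10^{-4.76}}
= 10^{6.66} \approx 5 \times 10^{6}
\]

That is, methanesulfonic acid dissociates several million times more readily than acetic acid, consistent with the order-of-magnitude claim.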
Because of their polarity, sulfonic acids tend to be crystalline solids or viscous, high-boiling liquids. They are also usually colourless and nonoxidizing, which makes them suitable for use as acid catalysts in organic reactions. Their polarity, in conjunction with their high acidity, renders short-chain sulfonic acids water-soluble, while longer-chain ones exhibit detergent-like properties.
The structure of sulfonic acids is illustrated by the prototype, methanesulfonic acid. The sulfonic acid group, RSO2OH, features a tetrahedral sulfur centre, meaning that sulfur is at the center of four atoms: three oxygens and one carbon. The overall geometry of the sulfur centre is reminiscent of the shape of sulfuric acid.
Applications
Both alkyl and aryl sulfonic acids are known, but most large-scale applications are associated with the aromatic derivatives.
Detergents and surfactants
Detergents and surfactants are molecules that combine highly nonpolar and highly polar groups. Traditionally, soaps are the popular surfactants, being derived from fatty acids. Since the mid-20th century, the usage of sulfonic acids has surpassed soap in advanced societies. For example, an estimated 2 billion kilograms of alkylbenzenesulfonates are produced annually for diverse purposes. Lignin sulfonates, produced by sulfonation of lignin are components of drilling fluids and additives in certain kinds of concrete.
Dyes
Many if not most of the anthraquinone dyes are produced or processed via sulfonation. Sulfonic acids tend to bind tightly to proteins and carbohydrates. Most "washable" dyes are sulfonic acids (or have the functional sulfonyl group in them) for this reason. p-Cresidinesulfonic acid is used to make food dyes.
Acid catalysts
Being strong acids, sulfonic acids are also used as catalysts. The simplest examples are methanesulfonic acid, CH3SO2OH, and p-toluenesulfonic acid, which are regularly used in organic chemistry as acids that are lipophilic (soluble in organic solvents). Polymeric sulfonic acids are also useful. Dowex resins are sulfonic acid derivatives of polystyrene and are used as catalysts and for ion exchange (water softening). Nafion, a fluorinated polymeric sulfonic acid, is a component of proton exchange membranes in fuel cells.
Drugs
Sulfa drugs, a class of antibacterials, are produced from sulfonic acids.
Lignosulfonates
In the sulfite process for paper-making, lignin is removed from the lignocellulose by treating wood chips with solutions of sulfite and bisulfite ions. These reagents cleave the bonds between the cellulose and lignin components and especially within the lignin itself. The lignin is converted to lignosulfonates, useful ionomers, which are soluble and can be separated from the cellulose fibers.
Reactions
The reactivity of the sulfonic acid group is so extensive that it is difficult to summarize.
Hydrolysis to phenols
When treated with strong base, benzenesulfonic acid derivatives convert to phenols.
In this case the sulfonate behaves as a pseudohalide leaving group.
Hydrolytic desulfonation
Arylsulfonic acids are susceptible to hydrolysis, the reverse of the sulfonation reaction:
RC6H4SO3H + H2O → RC6H5 + H2SO4
Whereas benzenesulfonic acid hydrolyzes above 200 °C, many derivatives are easier to hydrolyze. Thus, heating aryl sulfonic acids in aqueous acid produces the parent arene. This reaction is employed in several scenarios. In some cases the sulfonic acid serves as a water-solubilizing protecting group, as illustrated by the purification of para-xylene via its sulfonic acid derivative. In the synthesis of 2,6-dichlorophenol, phenol is converted to its 4-sulfonic acid derivative, which then selectively chlorinates at the positions flanking the phenol. Hydrolysis releases the sulfonic acid group.
Esterification
Sulfonic acids can be converted to esters. This class of organic compounds has the general formula R−SO2−OR. Sulfonic esters such as methyl triflate are considered good alkylating agents in organic synthesis. Such sulfonate esters are often prepared by alcoholysis of the sulfonyl chlorides:
RSO2Cl + R′OH → RSO2OR′ + HCl
Halogenation
Sulfonyl halide groups (R−SO2−X) are produced by chlorination of sulfonic acids using thionyl chloride. Sulfonyl fluorides can be produced by treating sulfonic acids with sulfur tetrafluoride:
RSO3H + SF4 → RSO2F + SOF2 + HF
Displacement by hydroxide
Although strong, the (aryl)C−SO3− bond can be broken by nucleophilic reagents. Of historic and continuing significance is the α-sulfonation of anthraquinone followed by displacement of the sulfonate group by other nucleophiles, which cannot be installed directly. An early method for producing phenol involved the base hydrolysis of sodium benzenesulfonate, which can be generated readily from benzene.
C6H5SO3Na + NaOH → C6H5OH + Na2SO3
The conditions for this reaction are harsh, however, requiring 'fused alkali' or molten sodium hydroxide at 350 °C for benzenesulfonic acid itself. Unlike the mechanism for the fused-alkali hydrolysis of chlorobenzene, which proceeds through elimination-addition (the benzyne mechanism), benzenesulfonic acid undergoes the analogous conversion by an SNAr mechanism, as revealed by a 14C labeling study, despite the lack of stabilizing substituents. Sulfonic acids with electron-withdrawing groups (e.g., with NO2 or CN substituents) undergo this transformation much more readily.
o-Lithiation
Arylsulfonic acids react with two equivalents of butyllithium to give the ortho-lithio derivatives, i.e. ortho-lithiation. These dilithio compounds are poised for reactions with many electrophiles.
| Physical sciences | Specific acids | Chemistry |
620957 | https://en.wikipedia.org/wiki/Water%20dispenser | Water dispenser | A water dispenser, sometimes referred to as a water cooler (if used for cooling only), is a machine that dispenses water and often also cools it with a refrigeration unit or heats it. It is commonly located near the restroom because of the easy access to plumbing there. A drain line is also provided, running from the water cooler into the sewer system.
Water dispensers come in a variety of form factors, ranging from wall-mounted to bottle filler water dispenser combination units, to bi-level units and other formats. They are generally broken up into two categories: point-of-use (POU) water dispensers and bottled water dispensers. POU water dispensers are connected to a water supply, while bottled water dispensers require delivery (or self-pick-up) of water in large bottles from vendors. Bottled water dispensers can be top-mounted or bottom-loaded, depending on the design of the model.
Bottled water dispensers typically use 11- or 22-liter (3- or 6-gallon) bottles, commonly mounted on top of the unit. Pressure coolers are a subcategory of water dispensers encompassing drinking water fountains and direct-piping water dispensers. Water cooler may also refer to a primitive device for keeping water cool.
History
Dispenser types
Wall-mounted / recessed
The wall-mounted type is connected to the building's water supply for a continuous supply of water and electricity to run a refrigeration unit to cool the incoming water, and to the building's waste disposal system to dispose of unused water. Wall-mounted water coolers are frequently used in commercial buildings like hospitals, schools, businesses, and other facilities where a facility manager is present to monitor its installation and maintenance.
In the standard wall-mounted cooler, also commonly referred to as a water fountain or drinking fountain, a small tank in the machine holds chilled water so the user does not have to wait for it. Water is delivered by turning or pressing a button on a spring-loaded valve located on the top of the unit that shuts off the water when released. Some devices also offer a large button on the front or side. Newer machines may not have a button at all; instead, a sensor detects when someone is near and activates the water. Water is delivered in a stream that arches up, allowing the user to drink directly from the top of the stream. These devices usually dispense water directly from the municipal water supply, without treatment or filtering.
Wall mount water coolers come in a wide variety of styles, from recessed models to splash resistant, contoured basins protruding out from the wall, traditional rounded square edge designs, bottle filler and water cooler combination units, bi-level designs, with other features and options. These are sometimes installed to meet local, state or federal codes.
Bottom-load water dispenser
Water dispensers commonly have the water supply vessel mounted at the top of the unit. Bottom-load water dispensers have the vessel mounted at the bottom of the unit to make loading easier.
Tabletop water dispenser
There are also smaller versions of the water dispensers where the dispenser can be placed directly on top of a table. These dispensers are commonly classified as household appliances and can often be found in household kitchens and office pantries.
Direct-piping water dispenser (POU)
Water dispensers can be directly connected to the in-house water source for continuous dispensing of hot and cold drinking water.
These are commonly referred to as POU (point-of-use) water dispensers. POU units are generally more hygienic than bottled water coolers, provided the end user has access to clean water sources.
Freestanding
A freestanding design generally involves bottles of water placed spout-down into the dispensing machine.
Tabletop or kitchen worktop versions are available which utilize readily available five-liter water bottles from supermarkets. These coolers use air pumps to push the water into the cooling chamber and Peltier devices to chill the water.
A new development within the water cooler market is the advent of countertop appliances which are connected to the mains and provide an instant supply of not only chilled water but also hot and boiling water. This is often seen in the horeca (hotel, restaurant, and café) industry.
Water flows faster when the handle is in the upright position. The water is aerated, which allows it to come through the spout at a faster rate.
Water source
Water dispensed from water coolers may originate from many different sources, but are often classified into two major categories, namely natural mineral and spring water, and purified water.
Natural mineral and spring water
Natural mineral and spring water are waters emanating from underground geological rock formations collected from boreholes or emerging springs. Legislation in each respective country further differentiates between these two types of water and stipulates strict naming and labeling criteria based on natural source protection, total dissolved solids, and the amount of processing the water may undergo prior to bottling.
Purified water
Purified water is water from groundwater or municipal water supply and is produced by any one of several methods of purification including reverse osmosis, distillation, deionization, and filtration. The water is often treated by ultraviolet light or ozone for antimicrobial reasons and re-mineralized by injection of soluble inorganic salts.
Water delivery
The delivery of water in a water cooler comes in two main forms, namely bottled variants, or plumbed directly from the main water supply. The water is normally pumped into a water tank to be heated or chilled, depending on the model of the water cooler. Modern versions include hybrid models that are able to utilize both methods.
Bottled water coolers
To install the bottle, the bottle is tipped upside down and set onto the dispenser; a probe punctures the cap of the bottle and allows the water to flow into the machine's internal reservoir. These gravity-powered systems have a device to dispense water in a controlled manner.
These machines come in different sizes and vary from table units, intended for occasional use, to floor-mounted units intended for heavier use. Bottled water is normally delivered to households or businesses on a regular basis, where empty returnable polycarbonate bottles are exchanged for full ones. In developing markets, PET is often used for large bottles, despite shrinkage and lower washing temperatures making it a more challenging material to use.
The bottle size varies with the size of the unit, with the larger versions in the US using 5-gallon bottles. This is also the most common size elsewhere, labeled as 18.9 liters in countries that use the metric system. Originally, these bottles were manufactured at 3, 5 or 6 US gallon capacity (11.4, 18.9 or 22.7 liters) and supplied to rented water cooler units. These units usually do not have a place to dump excess water, only offering a small basin to catch minor spills. On the front, a lever or pushbutton dispenses the water into a cup held beneath the spigot. When the water container is empty, it is lifted off the top of the dispenser, and automatically seals to prevent any excess water still in the bottle from leaking.
Material
For many years and throughout the 20th century, glass was the main material used for bottling until the evolution of thermoplastics following World War II. PVC evolved as a multi-purpose plastic and gained wide adoption as an ideal mass-production material; only dark green glass bottles were retained for packaging carbonated waters. The 1980s saw the re-development of PVC bottles due to cost reduction. Advances in manufacturing and materials technology, such as new blow and injection molding techniques, have reduced the wall thickness and weight of bottles while improving durability and increasing service life.
Direct plumbed
Directly plumbed water coolers use tap water from the main water supply and therefore do not need bottles. Usually, some method of purification is used. Log reduction (e.g. a 6-log reduction, or 99.9999% effective) is used as a measure of the effectiveness of sanitization and disinfection.
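Log reduction is simply the base-10 logarithm of the ratio of viable organisms before and after treatment. A minimal Python illustration (the counts below are hypothetical):

```python
from math import log10

def log_reduction(count_before, count_after):
    """Log reduction = log10(organisms before treatment / organisms after)."""
    return log10(count_before / count_after)

# A 6-log reduction leaves one organism in a million alive:
print(log_reduction(1_000_000, 1))    # 6.0
# which corresponds to the quoted effectiveness of 99.9999%:
print((1 - 10 ** -6) * 100)           # 99.9999
```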
Purification
Filtration
Filtration methods include reverse osmosis, ion exchange, and activated carbon. Reverse osmosis works differently from chemical or ultraviolet treatment, using a membrane with fine pores that passes H2O while preventing larger molecules such as salts, carbonates, and other micro-organisms from passing through. If there is insufficient pressure to naturally force the water through the membrane, a powerful pump is required, resulting in potentially high energy costs. In addition, RO units are capable of softening water. Some living micro-organisms, including viruses, are capable of passing through an RO unit filter.
Deionizers or demineralizers use ion-exchange resins to remove ions from the stream of water and are most commonly twin-bed or mixed-bed deionizers. Because deionized water is a poor conductor of electricity, it is often used in sterile manufacturing environments such as computer chip fabrication.
In activated carbon, raw materials such as lignite, coal, bone charcoal, coconut shells, and wood charcoal are used, developing pores during activation when partly burning away carbon layers. In most cases, activated carbon is a single-use material as regeneration is often not possible on-site. Granular activated carbon (GAC) is most commonly used in the filtration of the water cooler. Regular sanitization using hot water and steam is required to limit bacterial growth.
Sanitization & disinfection
The sanitization of water is defined by the reduction of the number of micro-organisms to a safe level. According to the AOAC suspension test method, a sanitizer should be capable of killing 99.999% of a specific bacterial test population within 30 seconds at 25 °C (77 °F). Sanitizers may or may not necessarily destroy pathogenic or disease-causing bacteria. The sanitizer used must comply with regulations applicable in the geographic location. In the US, sanitizers are regulated by the EPA and FDA, and must pass the AOAC test in the reduction of microbial activity of two standard test organisms (Staphylococcus aureus and Escherichia coli) from a designated microbial load by a 5-log reduction.
The main difference between a sanitizer and a disinfectant is that at a specific use dilution, the disinfectant must have a higher kill capability for pathogenic bacteria than that of a sanitizer. If these micro-organisms are not destroyed, the bottled water being produced may be contaminated.
UVGI (ultraviolet germicidal irradiation) is a commonly used disinfection method that kills or inactivates micro-organisms, leaving them unable to perform vital cellular functions. Drawbacks to UV light water purifiers include sensitivity to turbidity: if the fluid is unclear, the UV light will not pass through completely, leaving the stream only partially sterilized.
Cooling and heating methods
Cooling
Most modern units offer a refrigeration function to chill the water, using Vapor compression refrigeration or Thermoelectric cooling.
Vapor compression refrigeration
Water coolers using vapor compression refrigeration come in one of the following systems:
Reservoir System - A tank where water is held, to be used for cooling or heating and is fitted with a float mechanism to prevent overflowing.
Removable Reservoir - a removable reservoir is an open-end tank with cooling coils that come into contact with the external tank surface. It operates on the basis of a modular system, allowing one to easily detach and refill water instead of keeping it in a closed system. One of the advantages in using a removable reservoir is the ease of sanitization. This allows end users to replace the reservoir completely rather than sending an entire water cooler back for servicing. A similar technology can be found in many modern water dispensers and coffee machines.
Stainless Steel - open end tank with cooling coils that come into contact with the external tank surface
Pressure Vessel Direct Chill System - The combination of a pressure vessel, which protects the water in the tank from air-borne contamination, and a direct chill system which cools water coming from the mains quickly.
Pressure Vessel - A sealed pressure vessel is filled at a lower pressure within the water cooler. As such, the water does not come into contact with the atmosphere, allowing a larger amount of cold water (depending on the size of the tank) to be dispensed at the expense of a slower cooling system.
Direct Chill - In a standard direct chill system, water is passed through a stainless steel coil that is in contact with a copper evaporator that circulates refrigerant gas. The refrigeration system is attached outside of the coil and the cold transfers through the pipe walls to chill the water in the coil through conduction. When the taps are operated, the chilled water is dispensed at mains pressure. The water never comes into contact with the atmosphere as the cold temperature emitted by the refrigerant gas is transferred through the copper coil which transfers the cold temperatures to water passing through the stainless steel coil without touching each other. This allows the water to get cold more quickly again at the expense of having a lower volume of cold water available.
Ice-bank Cooling System - A pressurized stainless steel coil and a copper coil is immersed in a reservoir full of pre-chilled water. The copper coil containing the refrigerant gas freezes the water contained within the reservoir producing a cold supply, which in turns cools the drinking water flowing through stainless steel coil.
Thermoelectric cooling
Thermoelectric cooling is a green alternative to HFC refrigerant that uses a solid state device that acts as a heat pump to transfer heat from one side of the device to another using the Peltier effect. It is made up of numerous pairs of semiconductors enclosed by ceramic wafers. Thermoelectric coolers use direct current power rather than refrigerant gas and a compressor and have no moving parts or complex assemblies.
Heating
Some versions also have a second dispenser that delivers room-temperature water or even heated water that can be used for tea, hot chocolate or other uses. The water in the alternate hot tap is generally heated with a heating element and stored in a hot tank (much like the traditional hot water heaters used in residential homes). Additionally, the hot tap is usually equipped with a push-in safety valve to prevent burns from an accidental or inadvertent pressing of the lever.
Additional features
Bottle filler
Newer variants of water coolers include an additional dispenser designed to fill water bottles directly on wall-mounted units. These are increasingly common in public places such as airports and railway stations. Such bottle-filling units can also display the number of single-use plastic bottles saved, as part of an ongoing public effort to reduce plastic pollution.
Carbonation
Modern variants of water coolers have been equipped with options for sparkling water as a result of increasing demand for carbonated beverages and also a greater awareness to healthy living, resulting in preference for carbonated water over sweetened carbonated beverages. This works with the addition of a mixer tank filled with compressed CO2 located inside the cooling tank. This brings the temperature of the CO2 gas down to the temperature of the cooling tank. As carbonated water is dispensed, the mixer tank is automatically refilled with cold water and carbon dioxide, ensuring a continuous supply of carbonated water is readily available.
Maintenance
All bottled water coolers need to be periodically cleaned to prevent mineral build-up inside the heating tank, also known as scaling. The frequency of the cleaning can be determined by the concentration of the minerals and the amount of water used. Descaling agents such as citric acid can be used for this cleaning process.
Heating tanks will require cleaning when normal hot water flow appears to be restricted or when noisy heating cycles can be heard during operation. Additional symptoms include water from the cooling tank being very warm, as well as a change in the taste of the water resulting from mineral build-up.
| Technology | Household appliances | null |
621267 | https://en.wikipedia.org/wiki/Gluteus%20maximus | Gluteus maximus | The gluteus maximus is the main extensor muscle of the hip in humans. It is the largest and outermost of the three gluteal muscles and makes up a large part of the shape and appearance of each side of the hips. It is the single largest muscle in the human body. Its thick fleshy mass, in a quadrilateral shape, forms the prominence of the buttocks. The other gluteal muscles are the medius and minimus, and sometimes informally these are collectively referred to as the glutes.
Its large size is one of the most characteristic features of the muscular system in humans, connected as it is with the power of maintaining the trunk in the erect posture. Other primates have much flatter hips and cannot sustain standing erectly.
The muscle is made up of muscle fascicles lying parallel with one another, which are collected together into larger bundles separated by fibrous septa.
Structure
The gluteus maximus (or buttock muscle) is the outermost muscle of the buttocks. It arises from connections to several nearby structures: from the posterior gluteal line of the outer upper ilium, a bone of the pelvis, as well as above it to the iliac crest and slightly below it; from the lower part of the sacrum and the side of the coccyx, the tailbone; and from the aponeurosis of the erector spinae (lumbodorsal fascia), the sacrotuberous ligament, and the fascia covering the gluteus medius (gluteal aponeurosis).
The fibers are directed obliquely inferiorly and laterally.
The gluteus maximus ends in two main areas:
those forming the upper and larger portion of the muscle, together with the superficial fibers of the lower portion, end in a thick tendinous lamina, which passes across the greater trochanter, and inserts into the iliotibial band of the fascia lata;
the deeper fibers of the lower portion are inserted into the gluteal tuberosity of the linea aspera, between the vastus lateralis and adductor magnus. If present, the third trochanter also serves as an attachment.
Bursae
Three bursae are usually found in relation with the deep surface of this muscle:
One of these, of large size, separates it from the greater trochanter (Bursa trochanterica m. glutaei maximi),
a second (often missing) is situated on the tuberosity of the ischium (Bursae glutaeofemorales),
a third is found between the skin and the tendon of the muscle, which sometimes extends to the vastus lateralis (Bursa trochanterica subcutanea).
Function
The gluteus maximus straightens the leg at the hip; when the leg is flexed at the hip, the gluteus maximus extends it to bring the leg into a straight line with the body. Taking its fixed point from below, it acts upon the pelvis, supporting it and the trunk upon the head of the femur; this is particularly obvious in standing on one leg. Its most powerful action is to cause the body to regain the erect position after stooping, by drawing the pelvis backward, being assisted in this action by the biceps femoris (long head), semitendinosus, semimembranosus, and adductor magnus.
The lower part of the muscle also acts as an adductor and external rotator of the limb. The upper fibers act as abductors of the hip joints.
The gluteus maximus is a tensor of the fascia lata, and by its connection with the iliotibial band steadies the femur on the articular surfaces of the tibia during standing, when the extensor muscles are relaxed. Therefore, the muscle carries out an extension, a valgisation and an external rotation of the knee.
Society and culture
Training
The gluteus maximus is involved in several sports, from running to weight-lifting. A number of exercises focus on the gluteus maximus and other muscles of the upper leg:
Hip thrusts
Glute bridge
Quadruped hip extensions
Kettlebell swings
Squats and variations like split squats, unilateral squats with the raised foot positioned either backwards or forwards (pistols), and wide-stance lunges
Deadlift (and variations)
Reverse hyperextension
Four-way hip extensions
Glute-ham raise
In art
In cultural terms, the glutes are often considered a symbol of health and strength, and aesthetically appealing. They frequently feature in artwork which seeks to emphasise and celebrate physicality, and the ability to move dynamically and powerfully. They are usually shown to be efficiently proportioned and prominent.
Evidence of such depictions of the gluteal muscles extends from at least Ancient Greece to the modern day.
Clinical significance
Functional assessment can be useful in assessing injuries to the gluteus maximus and surrounding muscles.
The 30-second chair-to-stand test measures a participant's ability to stand up from a seated position as many times as possible in a thirty-second period of time. Testing the number of times a person can stand up in a thirty-second period helps assess strength, flexibility, pain, and endurance, which can help determine how far along a person is in rehabilitation, or how much work is still to be done.
The piriformis test measures flexibility of the gluteus maximus. This requires a trained professional and is based on the angle of external and internal rotation in relation to normal range of motion without injury or impingement.
Other animals
The gluteus maximus is larger and thicker in humans than in other primates: it is approximately 1.6 times larger relative to body mass compared to chimpanzees and comprises about 18.3% of total hip musculature mass versus 11.7% in chimpanzees. This large size is connected with the power of maintaining the trunk in the erect posture, whereas other primates, with much flatter hips, cannot sustain standing erectly.
In other primates, the correlate to the human gluteus maximus consists of the ischiofemoralis, a small muscle that corresponds to the human gluteus maximus and originates from the ilium and the ligaments of the sacroiliac, and the gluteus maximus proprius, a large muscle that extends from the ischial tuberosity to a relatively more distant insertion on the femur. In adapting to bipedal gait, reorganization of the attachment of the muscle as well as the moment arm was required.
Running
The human gluteus maximus plays multiple important functional roles, particularly in running rather than walking. During running, it helps control trunk flexion, aids in decelerating the swing leg, and contributes to hip extension. During level walking, the muscle shows minimal activity, suggesting its enlargement was not primarily adapted for walking.
The muscle's size and position make it uniquely suited for controlling trunk position during rapid movements and stabilizing the trunk against flexion. While traditionally associated with maintaining erect posture, evidence suggests its enlargement was more likely selected for its role in running capability and trunk stabilization during various dynamic activities. These adaptations would have been particularly important for activities like running and climbing in early human evolution.
Additional images
| Biology and health sciences | Human anatomy | Health |
621379 | https://en.wikipedia.org/wiki/Oolite | Oolite | Oolite or oölite () is a sedimentary rock formed from ooids, spherical grains composed of concentric layers. Strictly, oolites consist of ooids of diameter 0.25–2 millimetres; rocks composed of ooids larger than 2 mm are called pisolites. The term oolith can refer to oolite or individual ooids.
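The size convention above amounts to a simple classification rule. As a minimal illustration (the function name and label wording are ours, not from any standard reference), the sketch below sorts a coated carbonate grain by diameter using the 0.25 mm and 2 mm cut-offs cited in this article:

```python
def classify_grain(diameter_mm: float) -> str:
    """Classify a coated carbonate grain by diameter, using the
    conventional 0.25 mm and 2 mm cut-offs cited above."""
    if diameter_mm <= 0:
        raise ValueError("diameter must be positive")
    if diameter_mm < 0.25:
        return "below the conventional ooid size range"
    if diameter_mm <= 2.0:
        return "ooid (a rock formed of these is an oolite)"
    return "pisoid (a rock formed of these is a pisolite)"

print(classify_grain(0.5))  # ooid (a rock formed of these is an oolite)
print(classify_grain(3.0))  # pisoid (a rock formed of these is a pisolite)
```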
Composition
Ooids are most commonly composed of calcium carbonate (calcite or aragonite), but can be composed of phosphate, clays, chert, dolomite or iron minerals, including hematite. Dolomitic and chert ooids are most likely the result of the replacement of the original texture in limestone. Oolitic hematite occurs at Red Mountain near Birmingham, Alabama, along with oolitic limestone.
They are usually formed in warm, supersaturated, shallow, highly agitated intertidal marine waters, though some are formed in inland lakes. The mechanism of formation starts with a small fragment of sediment acting as a 'seed', such as a piece of a shell. Strong intertidal currents wash the 'seeds' around on the seabed, where they accumulate layers of chemically precipitated calcite from the supersaturated water. Oolites are commonly found in large current bedding structures that resemble sand dunes. The size of the oolites reflects how long they were exposed to the water before they were covered with later sediment.
Oolites are often used in the home aquarium industry because their small grain size (0.2 to 1.22 mm) is ideal for shallow static beds and bottom coverings of up to 1" in depth. Also known as "oolitic" sand, the sugar-sized round grains pass easily through the gills of gobies and other sand-sifting organisms. Because of its small grain size, this unusually smooth sand has a large surface area, which promotes the growth of the bacteria that serve as important biofilters in home aquaria.
Occurrence
Some exemplar oolitic limestone was formed in England during the Jurassic period, and forms the Cotswold Hills, the Isle of Portland with its famous Portland Stone, and part of the North York Moors. A particular type, Bath Stone, gives the buildings of the World Heritage City of Bath their distinctive appearance. Carboniferous Hunts Bay Oolite lies under much of south Wales.
The Miami Rock Ridge of southeastern Florida, the islands of the Lower Florida Keys, and much of the Everglades, are underlain by Miami Oolite. This limestone was formed by deposition when shallow seas covered the area between periods of glaciation. The material consolidated and eroded during later exposure above the ocean surface.
One of the world's largest freshwater lakebed oolites is the Shoofly Oolite, a section of the Glenns Ferry Formation on southwestern Idaho's Snake River Plain. 10 million years ago, the Plain formed the bed of Lake Idaho. Wave action in the lake washed sediments back and forth in the shallows on the southwestern shore, forming ooids and depositing them on steeper benches near the shore in 2- to 40-foot thicknesses. When the lake drained (2 to 4 million years ago), the oolite was left behind, along with siltstone, volcanic tuffs and alluvium from adjacent mountain slopes. The other sediments eroded away, while the more resistant oolite weathered into hummocks, small arches and other natural "sculptures". The Shoofly Oolite lies on public land west of Bruneau, Idaho managed by the Bureau of Land Management (BLM). The physical and chemical properties of the Shoofly Oolite are the setting for a suite of rare plants, which the BLM protects through land use management and on-site interpretation.
This type of limestone is also found in Indiana in the United States. The town of Oolitic, Indiana, was founded for the trade in limestone and bears its name. Quarries in Oolitic, Bedford, and Bloomington contributed the materials for such U.S. landmarks as the Empire State Building in New York and the Pentagon in Arlington, Virginia. Many of the buildings on the Indiana University campus in Bloomington are built with native oolitic limestone material, and the Soldiers' and Sailors' Monument in downtown Indianapolis, Indiana, is built mainly of grey oolitic limestone.
Oolites also appear in the Conococheague limestone, of Cambrian age, in the Great Appalachian Valley in Pennsylvania, Maryland, West Virginia, and Virginia.
Rogenstein is a term describing a specific type of oolite in which the cementing matter is argillaceous.
| Physical sciences | Sedimentary rocks | Earth science |
622003 | https://en.wikipedia.org/wiki/Miniature%20horse | Miniature horse | A miniature horse is a breed or type of horse characterised by its small size. Usually it has been bred to display in miniature the physical characteristics of a full-sized horse, but to be little over in height, or even less. Although such horses have the appearance of small horses, they are genetically much more similar to pony breeds such as the Shetland. They have various colors and coat patterns.
Miniature horses are present in several countries, including Argentina, Australia, France, Germany, Holland, Ireland, Namibia, the Philippines, the United Kingdom and the United States. In some countries they have the status of a breed; these include the Falabella of Argentina, the Dutch Miniature, the South African Miniature Horse and the American Miniature Horse.
They are commonly kept as companion animals, or for sporting activities such as driving or other competitive horse show events. A few have been trained as guide horses for blind people.
History
Miniature horses originated in Europe, where there is written and iconographic documentation of them from the late eighteenth century. In the first half of the twentieth century small horses were bred in England by Lady Estella Mary Hope and her sister Lady Dorothea.
The Falabella was developed in Argentina in the mid-1800s by Patrick Newtall. When Newtall died, the herd and breeding methods were passed to his son-in-law, Juan Falabella. Falabella added additional bloodlines including the Welsh Pony, Shetland pony, and small Thoroughbreds. With considerable inbreeding he was able to gain consistently small size within the herd.
The South African Miniature Horse was bred in South Africa from about 1945, when Wynand de Wet of Lindley began selective breeding of Shetland stock. In 2011 there were about 700 of the horses in the country. Morphology is variable: some have an Arab appearance, while others have the conformation of a draft horse. A breed association was established in 1984, and in 1989 the South African Miniature was recognized by the national South African Stud Book and Livestock Improvement Association.
Characteristics
Miniature horses are generally quite hardy. They often live for longer than is typical for full-sized horses of some breeds; the usual life span is from 25 to 35 years.
Their predisposition to disease is markedly different from that of full-sized horses. They are only rarely affected by ailments such as laryngeal hemiplegia, osteochondrosis or navicular disease, all of which are common in larger horses, but are much more likely to develop other illnesses rare in large horses, such as hyperlipaemia — which may lead to hepatic lipidosis — or eclampsia. Dental misalignment and overcrowding are more common than in larger horses: brachygnathism ("parrot mouth") and prognathism ("sow mouth") are often seen; retention of caps can occur, as can infection of the sinuses associated with tooth eruption. Poor mastication can contribute to an increased incidence of colic caused by enteroliths, faecoliths, or sand.
Use
Miniature horses are commonly kept as companion animals. They are often too small for any but the smallest riders to ride, but are well suited to driving; some may participate in other horse show events. A small number have been trained as guide horses for blind people, particularly for those who consider dogs unclean, as is common in Muslim cultures.
| Biology and health sciences | Horses | Animals |
622836 | https://en.wikipedia.org/wiki/Myosotis | Myosotis | Myosotis ( ) is a genus of flowering plants in the family Boraginaceae. The name comes from the Ancient Greek "mouse's ear", which the foliage is thought to resemble. In the Northern Hemisphere, they are colloquially known as forget-me-nots or scorpion grasses. Myosotis alpestris is the official flower of Alaska and Dalsland, Sweden. Plants of the genus are not to be confused with Chatham Islands' forget-me-nots, which belong to the related genus Myosotidium.
Description
The genus was originally described by Carl Linnaeus. The type species is Myosotis scorpioides. Myosotis species are annual or perennial, herbaceous, flowering plants with pentamerous actinomorphic flowers with five sepals and petals. Flowers are typically 1 cm (½") in diameter or less, flat-faced, and typically blue, though sometimes pink, white or yellow with yellow centres, and are borne on scorpioid cymes. Their foliage is alternate, and their roots are generally diffuse. They typically flower in spring or soon after the melting of snow in alpine ecosystems.
The seeds are contained in small, tulip-shaped pods along the stem leading to the flower. The pods attach to clothing when brushed against and eventually fall off, leaving the small seed within the pod to germinate elsewhere. Seeds can be collected by placing a sheet of paper under the stems and shaking the seed pods onto the paper.
Myosotis scorpioides is colloquially called scorpion grass because of the spiraling curvature of its inflorescence.
Distribution
The genus is largely restricted to western Eurasia, with over 60 confirmed species, and New Zealand with around 40 endemic species. A few species occur elsewhere, including North America, South America, and Papua New Guinea. Despite this, Myosotis species are now common throughout temperate latitudes because of the introduction of cultivars and alien species. Many are popular in horticulture. They prefer moist habitats. In locales where they are not native, they frequently escape to wetlands and riverbanks.
One or two European species, especially Myosotis sylvatica, the "woodland" forget-me-not, have been introduced into most of the temperate regions of Europe, Asia, and the Americas.
Genetic analysis indicates that the genus originated in the Northern Hemisphere, and that species native to New Zealand, Australia, New Guinea, and South America form a lineage of closely related species that are likely derived from a single dispersal event to the Southern Hemisphere.
Ecology
Myosotis species are food for the larvae of some Lepidoptera species, including the setaceous Hebrew character. Many of the species in New Zealand are threatened.
Taxonomy
Of more than 510 recorded species names, only 156 species are presently accepted, listed below. The remainder are either synonyms or hybrids of presently accepted or proposed names.
Myosotis abyssinica Boiss. & Reut.
Myosotis afropalustris C.H. Wright
Myosotis albicans Riedl
Myosotis albiflora Banks & Sol. ex Hook.f.
Myosotis albosericea Hook.f.
Myosotis alpestris F.W.Schmidt (alpine forget-me-not)
Myosotis amabilis Cheeseman
Myosotis ambigens (Bég.) Grau
Myosotis angustata Cheeseman
Myosotis anomala Riedl
Myosotis antarctica Hook.f.
Myosotis arnoldii L.B.Moore
Myosotis arvensis (L.) Hill (field forget-me-not)
Myosotis asiatica (Vestergr. ex Hultén) Schischk. & Serg. (Asiatic forget-me-not)
Myosotis atlantica Vestergr.
Myosotis australis R.Br.
Myosotis austrosibirica O.D.Nikif.
Myosotis azorica H.C.Watson (Azores forget-me-not)
Myosotis baicalensis O.D.Nikif.
Myosotis balbisiana Jord.
Myosotis × bohemica Domin
Myosotis × bollandica P.Jeps.
Myosotis bothriospermoides Kitag.
Myosotis brachypoda Gren.
Myosotis brevis de Lange & Barkla
Myosotis brockiei L.B.Moore & M.J.A.Simpson
Myosotis bryonoma Meudt, Prebble & Thorsen
Myosotis butorinae Stepanov
Myosotis × cadevallii
Myosotis cadmea Kitag.
Myosotis cameroonensis Cheek & R.Becker
Myosotis capitata Hook.f.
Myosotis chaffeyorum Lehnebach
Myosotis chakassica O.D.Nikif.
Myosotis cheesemanii Petrie
Myosotis × cinerascens Petrie
Myosotis colensoi (Kirk) J.F.Macbr.
Myosotis concinna Cheeseman
Myosotis congesta Shuttlew.
Myosotis corsicana (Fiori) Grau
Myosotis czekanowskii (Trautv.) Kamelin & V.N.Tikhom.
Myosotis daralaghezica T.N.Popova
Myosotis debilis Pomel
Myosotis decumbens Host
Myosotis densiflora C. Koch
Myosotis diminuta Grau
Myosotis discolor Pers. (changing forget-me-not)
Myosotis dissitiflora Baker
Myosotis dubia
Myosotis ergakensis Stepanov
Myosotis exarrhena F. Muell.
Myosotis eximia Petrie
Myosotis explanata Cheeseman
Myosotis forsteri Lehm.
Myosotis gallica Vestergr.
Myosotis galpinii C.H.Wright
Myosotis glabrescens L.B.Moore
Myosotis glauca (G.Simpson & J.S.Thomson) de Lange & Barkla
Myosotis goyenii Petrie
Myosotis graminifolia DC.
Myosotis graui Selvi
Myosotis guneri A.P.Khokhr.
Myosotis heteropoda Trautv.
Myosotis hikuwai Meudt, Prebble & G.M.Rogers
Myosotis imitata Serg.
Myosotis incrassata Guss.
Myosotis jenissejensis O.D.Nikif.
Myosotis jordanovii N.Andreev & Peev
Myosotis × kablikiana
Myosotis kamelinii O.D.Nikif.
Myosotis kazakhstanica O.D.Nikif.
Myosotis kebeshensis Stepanov
Myosotis keniensis T.C.E.Fr.
Myosotis koelzii Riedl
Myosotis kolakovskyi A.P.Khokhr.
Myosotis × krajinae
Myosotis krasnoborovii O.D.Nikif. & Lomon.
Myosotis krylovii Serg.
Myosotis kurdica Riedl
Myosotis laeta Cheeseman
Myosotis laingii Cheeseman
Myosotis latifolia Poir. (broadleaf forget-me-not)
Myosotis laxa Lehm. (tufted forget-me-not or bay forget-me-not)
Myosotis lazica Popov
Myosotis lithospermifolia Hornem.
Myosotis lithuanica (Schmalh.) Besser ex Dobrocz.
Myosotis litoralis Steven ex M.Bieb.
Myosotis ludomilae Zaver.
Myosotis lyallii Hook.f.
Myosotis macrantha (Hook.f.) Benth. & Hook.f. (bronze forget-me-not)
Myosotis macrosiphon Font Quer & Maire
Myosotis macrosperma Engelm. (largeseed forget-me-not)
Myosotis magniflora A.P.Khokhr.
Myosotis margaritae Štěpánková
Myosotis maritima Hochst. ex Seub.
Myosotis martini Sennen
Myosotis matthewsii L.B.Moore
Myosotis michaelae Štěpánková
Myosotis micrantha Pall. ex Lehm.
Myosotis minutiflora Boiss. & Reut.
Myosotis monroi Cheeseman (Monro's forget-me-not)
Myosotis nemorosa Besser
Myosotis nikiforovae Stepanov
Myosotis ochotensis O.D.Nikif.
Myosotis olympica Boiss.
Myosotis oreophila Petrie
Myosotis pansa (L.B.Moore) Meudt, Prebble, R.J.Stanley & Thorsen
Myosotis × parviflora
Myosotis paucipilosa (Grau) Ristow & Hand
Myosotis × permixta
Myosotis persoonii Rouy & E.G.Camus
Myosotis petiolata Hook.f.
Myosotis platyphylla Boiss.
Myosotis popovii Dobrocz.
Myosotis pospelovae
Myosotis pottsiana (L.B.Moore) Meudt, Prebble, R.J.Stanley & Thorsen
Myosotis propinqua (Turcz.) Fisch. & C.A.Mey.
Myosotis × pseudohispida
Myosotis pulvinaris Hook.f. (cushion forget-me-not)
Myosotis pusilla Loisel.
Myosotis radix-palaris A.P.Khokhr.
Myosotis rakiura L.B.Moore
Myosotis ramosissima Rochel (early forget-me-not)
Myosotis refracta Boiss.
Myosotis rehsteineri (Hausm.) Wartm. ex Reut.
Myosotis retrorsa Meudt, Prebble & Hindmarsh-Walls
Myosotis rivularis (Vestergr.) A.P.Khokhr.
Myosotis robusta D.Don
Myosotis sajanensis O.D.Nikif.
Myosotis saxatilis Petrie
Myosotis saxosa Hook.f.
Myosotis schistosa A.P.Khokhr.
Myosotis schmakovii O.D.Nikif.
Myosotis scorpioides L. (true forget-me-not)
Myosotis secunda Al.Murray (creeping forget-me-not)
Myosotis semiamplexicaulis DC.
Myosotis sicula Guss. (Jersey forget-me-not)
Myosotis solange Greuter & Zaffran
Myosotis soleirolii Godr.
Myosotis sparsiflora J.C.Mikan ex Pohl
Myosotis spatulata G.Forst.
Myosotis speciosa Pomel
Myosotis speluncicola Schott ex Boiss.
Myosotis stenophylla Knaf
Myosotis stolonifera (J.Gay ex DC.) J.Gay ex Leresche & Levier
Myosotis stricta Link ex Roem. & Schult.
Myosotis suavis Petrie
Myosotis subcordata Riedl
Myosotis × suzae
Myosotis sylvatica Ehrh. ex Hoffm. (wood forget-me-not)
Myosotis taverae Valdés
Myosotis tenericaulis Petrie
Myosotis tineoi C.Brullo & Brullo
Myosotis traversii Hook.f.
Myosotis tuxeniana (O.Bolòs & Vigo) O.Bolòs & Vigo
Myosotis ucrainica Czern.
Myosotis ultramafica Meudt, Prebble & Rance
Myosotis umbrosa Meudt, Prebble & Thorsen
Myosotis uniflora Hook.f.
Myosotis urceolaris Shuttlew.
Myosotis venosa Colenso
Myosotis venticola Meudt & Prebble
Myosotis verchojanica
Myosotis verna Nutt. (spring forget-me-not)
Myosotis vestergrenii Stroh
Myosotis welwitschii Boiss. & Reut.
Myosotis wumengensis L.Wei
Gallery
Symbolism
The small, blue forget-me-not flower was first used by the Grand Lodge Zur Sonne, in 1926, as a Masonic emblem at the annual convention in Bremen, Germany. In 1938, a forget-me-not badge—made by the same factory as the Masonic badge—was chosen for the annual Nazi Party Winterhilfswerk, the annual charity drive of the National Socialist People's Welfare, the welfare branch of the Nazi party. This coincidence enabled Freemasons to wear the forget-me-not badge as a secret sign of membership.
After World War II, the forget-me-not flower was used again as a Masonic emblem in 1948 at the first Annual Convention of the United Grand Lodges of Germany. The badge is now worn in the coat lapel by Freemasons around the world to remember all who suffered in the name of Freemasonry, especially those during the Nazi era.
The flower is also used as a symbol of remembrance by the people of Newfoundland and Labrador. It is used to commemorate those from the province who were killed in the First World War, and worn around July 1.
It is also used in Germany to commemorate the fallen soldiers of the world wars in a similar manner to the use of remembrance poppies in the UK.
The flower is also the symbol for the Armenian genocide's 100th anniversary. The design of the flower is a black dot symbolising the past, and the suffering of Armenian people. The light purple appendages symbolise the present, and unity of Armenians. The five purple petals symbolise the future, and the five continents to which Armenians escaped. The yellow in the centre symbolises eternity, and the Tsitsernakaberd itself symbolises the 12 provinces lost to Turkey.
In Lithuania, the flower has become one of the symbols for the commemoration of the January events of 1991.
In the Netherlands, the forget-me-not has become a symbol for Alzheimer Nederland, a foundation advocating for people suffering from dementia.
In New Zealand, the forget-me-not is the symbol for Alzheimers New Zealand, the foundation advocating for people suffering from Alzheimer's disease and dementia.
In the United Kingdom, many health settings make use of the forget-me-not as a symbol to highlight that someone has dementia; it may be placed on notes, bedsides or patient boards. Also in the United Kingdom, the forget-me-not is the symbol of the Alzheimer's Society.
In the history of art, the forget-me-not is used to remember loved ones who have died, and so is very common in funerary portraits.
Since the Medieval period it has become a symbol of everlasting love and devotion. There is a German legend set as an origin story behind the name "Forget-Me-Not". In the legend, a knight was walking with his lady near the Danube River and decided to pick blue flowers for her. While picking the flowers he fell in the river and was swept away. He tossed the flowers to his lady and his last words to her were "Forget-me-not!".
| Biology and health sciences | Boraginales | Plants |
623099 | https://en.wikipedia.org/wiki/Switchblade | Switchblade | A switchblade (also known as switch knife, automatic knife, pushbutton knife, ejector knife, flick knife, gravity knife, flick blade, or spring knife) is a pocketknife with a sliding or pivoting blade contained in the handle which is extended automatically by a spring when a button, lever, or switch on the handle or bolster is activated. Virtually all switchblades incorporate a locking blade, a means of preventing the blade from being accidentally closed while in the open position. An unlocking mechanism must be activated in order to close the blade for storage.
During the 1950s, US newspapers as well as the tabloid press promoted the image of a new violent crime wave caused by young male delinquents with a switchblade or flick knife, based mostly on anecdotal evidence. In 1954, Democratic Rep. James J. Delaney of New York authored the first bill submitted to the U.S. Congress banning the manufacture and sale of switchblades, beginning a wave of legal restrictions worldwide and a consequent decline in their popularity.
Method of operation
Side-opening
The most common type of switchblade is the side-opening or out-the-side (OTS) knife. These resemble traditional manually-operated folding knives, but feature a coil or leaf spring which powers a blade that is released when the activation button is pressed. Side-opening knives may feature a safety mechanism that prevents the accidental actuation of the blade release mechanism. Manipulation of a lever, slide button, bolster, or picklock releases the blade for closure.
Out the front (OTF) knives
Double action OTF knives
A double action out the front knife is so called because the blade emerges from the front of the handle and the thumb stud can be moved forward or backward to extend or retract the blade respectively.
The knife blade (dark grey) is locked in position by a spring-loaded restraining pin (yellow and red) fitting into a notch in the blade at position 1. The two spring carriers (green) fit into the spaces on the slide (blue) and this assembly rests to the side of the blade. The right spring carrier is restrained by a tab at position 2 that fits over the end of the blade. Tension on the main spring (red zig-zag) holds the other spring carrier, slide and thumb stud (light grey) to the right.
When the thumb stud is pushed forward the slide and left spring carrier are free to travel. This increases tension on the main spring as the blade and right spring carrier are locked. A ramp on the slide impinges on the lower pin. When the pin evacuates the notch, the blade and right spring carrier are free to move. The right spring carrier moves only a short distance before it comes to rest in the slide. Momentum carries the blade further before flanges (not shown) retard its motion.
Another restraining pin at position 3 fits into a notch and locks the blade in the extended position. A tab on the left spring carrier fits into a hole in the blade at position 4 which restrains the left spring carrier. This allows reverse force on the thumb stud to increase tension in the main spring before the upper restraining pin releases and the blade and carrier can return to the closed position.
The small restraining pin at 3 is the only thing holding the blade open, and it is prone to failure if abused. The whole slide assembly moves only a short distance, exactly as far as the thumb stud moves. The force that extends or retracts the blade equals the force the user applies to the thumb stud to stretch the main spring before it releases. For this reason the tip of the blade is unlikely even to break skin, and is incapable of causing significant injury when released, though the edge of the blade may still cut as it moves, as with any knife. Any object in the path of the extending blade may stop the blade before it can lock in position; this is easily remedied by either pulling the blade out so that it locks, or pushing it in until it locks and then redeploying.
Double-action knives have the advantages of being able to automatically retract the blade, as well as allowing the main spring to be in the "at rest" position whenever the knife is fully open or closed. However, because they have more complicated mechanisms, double-action OTFs will tend to be more expensive, have a weaker firing action, and a less-solid lockup than comparable single-action OTFs.
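The extend/retract behaviour described above can be summarised as a small state machine. The sketch below is a toy model under our own naming (none of these identifiers come from the source), covering the three situations the text describes: closed, extended and locked, and stopped mid-travel by an obstruction:

```python
from enum import Enum

class BladeState(Enum):
    CLOSED = "closed"
    EXTENDED = "extended and locked"
    STOPPED = "stopped mid-travel, not locked"

class DoubleActionOTF:
    """Toy model of the double-action OTF behaviour described above."""

    def __init__(self) -> None:
        self.state = BladeState.CLOSED

    def push_stud_forward(self, path_obstructed: bool = False) -> None:
        # The spring fires the blade; an obstruction stops it before the
        # restraining pin can drop into the forward notch and lock it.
        if self.state is BladeState.CLOSED:
            self.state = (BladeState.STOPPED if path_obstructed
                          else BladeState.EXTENDED)

    def pull_stud_backward(self) -> None:
        # The same stud retracts the blade, whether locked or stopped.
        if self.state is not BladeState.CLOSED:
            self.state = BladeState.CLOSED

    def manually_seat_blade(self) -> None:
        # The remedy described above: pull the stopped blade out by hand
        # until it locks in the extended position.
        if self.state is BladeState.STOPPED:
            self.state = BladeState.EXTENDED
```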
Single action OTF knives
A single action out the front knife operates under similar principles but will only automatically extend the blade which must then be manually retracted.
One spring post (green, left) is rigidly fixed to the handle (orange), the other spring post (green, right) is fastened to the base of the blade. The main spring (red) is under tension, but the blade cannot eject because the spring mounted button (light grey, its spring not shown) is resting in a notch in the blade. The cocking arm (blue) emerges through the base of the handle; friction with the handle holds it in place.
When the button is depressed (sideways into the handle or, as illustrated, into the page), a slot in it aligns with the blade and allows the blade to move forward. When the blade is fully extended, flanges on the blade engage pins on the cocking arm, retarding the blade's motion. The blade is locked in position when the rear notch of the blade allows the button to return to its rest position. Even if the button is then pressed, spring tension holds the knife open.
To retract the blade the button is again pressed so that its slot aligns with the blade. The cocking arm is pulled backwards which itself pulls the blade backward. When the blade is fully retracted the spring mounted button rests in the forward notch and again pops up and locks the blade in the cocked position. The cocking arm is then manually pushed forward to again sit flush with the handle.
Because the main spring acts constantly on the blade, is extended by a far greater amount, and is cocked by the whole hand and arm rather than by the thumb, the force it can exert on the blade is greater than in a double action knife. This will easily allow the tip of the blade to break skin when deployed, and possibly to penetrate a few millimetres (or hundredths of an inch) past it, or to pass through light clothing. While still not an especially strong design, a good quality single action out the front blade, being more firmly attached, displays less wobble and play than a double action counterpart of comparable quality.
Spring-assist knife vs. switchblade
While operationally identical (in terms of one-handed opening), the "spring-assist" or "assisted opening" knife is not a switchblade or automatic knife. A switchblade opens its blade from the handle automatically to the fully locked and open position with the single press of a button, lever, or switch that is remotely mounted in the knife handle or bolster. In contrast, a spring-assist design uses either 1) manual pressure upon a protrusion on the blade itself or 2) movement of a lever or switch directly linked to the blade to initiate partial opening of the blade, at which point an internal spring propels the blade into the fully open, locked position. By this definition, some out-the-front automatics could be considered spring-assisted knives rather than switchblades, and some merchants in the United States sold them as such during periods of switchblade illegality.
Despite this difference in function, the criminal codes of many nations treat the assisted opening knife as a prohibited weapon like the switchblade.
In the US, persons have occasionally been arrested or prosecuted by state law enforcement authorities for carrying assisted-opening knives defined as an illegal switchblade. An attempt to criminalize the sale of spring-assisted knives by federal law enforcement was forestalled by a US 2009 amendment (Amendment 1447) to 15 U.S.C. §1244. This amendment provides that the Switchblade Knife Act shall not apply to spring-assist or assisted-opening knives (i.e. knives with closure-biased springs that require physical force applied to the blade to assist in opening the knife).
Legality
Austria
Beginning with the Austrian Arms Act of 1996, switchblades, like all other knives, are generally legal to buy, import, possess or carry, regardless of blade length or of the opening or locking mechanism. The only exceptions are minors (defined as persons under the age of 18) and people who have been expressly banned from owning and carrying any weapon (Waffenverbot): both groups may only possess knives which are not considered "weapons" under the Arms Act, defined as "objects that by their very nature are intended to reduce or eliminate the defensive ability of a person through direct impact". Switchblades usually fall under that definition.
In Austria the regulatory laws of individual states and the Assembly Act prohibit switchblades and other knives from being carried into a public building, school, public assembly, or public event.
Australia
In Australia, switchblades are banned by the Customs (Prohibited Imports) Regulations as a Prohibited Import. Australian customs refer to the automatic knife or switchblade as a flick knife. Australian law defines a flick knife as a knife that has a blade which opens automatically by gravity, by centrifugal force, or by any pressure applied to a button, spring or device in or attached to the handle of the knife, a definition that would cover not only switchblades and automatic-opening knives, but also gravity knives and balisongs.
At a state and local level, most jurisdictions declare flick knives to be prohibited weapons in their respective acts, codes and regulations. Persons residing in states that do not have specific weapons legislation covering switchblades (such as Tasmania) are still covered by Federal Customs legislation, but in conditions where the state has no legislation against such items, an exemption may be applied for and received if approved by the chief supervisory officer of the police service in that state.
Some states which have specific legislation against switchblades allow individuals to apply for an exemption from this legislation if they have a legitimate reason. For example, in the state of Victoria, a member of a bona fide knife-collectors' association, who is not a prohibited person (per the Firearms Act 1996), and meets other guidelines and conditions may apply to the Chief Commissioner of Police for a Prohibited Weapons Exemption to possess, carry, or otherwise own such a knife. This exemption may then, in turn, be used to apply to the Australian Customs Service for an import permit.
Belgium
Article 3, §1 of the 2006 Weapons Act lists the switchblade or automatic knife (couteaux à cran d'arrêt et à lame jaillissante) as a prohibited weapon. In Belgium, the police and local jurisdictions are also allowed to prohibit the carrying or possession of a wide variety of knives, which are not explicitly banned by law, if the owner cannot establish a legitimate reason (motif légitime) for having that knife, particularly in urban areas or at public events.
Canada
Under Part III of the Criminal Code a knife that has a blade that opens automatically by gravity or centrifugal force or by hand pressure applied to a button, spring or other device in or attached to the handle of the knife, is illegal to possess, import, sell, buy, trade, or carry on one's person. These are prohibited weapons (armes défendues). While certain businesses can be granted a licence to acquire and possess prohibited weapons such as switchblades for use as props in movie productions, these exemptions do not apply to individuals.
Czech Republic
It is legal to carry and possess switchblades or automatic knives in the Czech Republic.
Denmark
Any type of automatic-opening knife or bladed tool that can be opened using just one hand (this includes any one-handed knife that has been deactivated by removing its opening mechanism) is illegal to own or possess. Multi-tools featuring one-hand opening blades are also illegal to own or possess. Manually opened one-handed knives are legal.
Finland
In Finland switchblades and other automatic knives are legal to purchase and possess. All knives are considered dangerous weapons, and it is forbidden to carry any knife without a proper cause. The law forbids carrying or importing any automatic knife whose blade is completely hidden, such as OTF switchblades. The restriction does not apply to importing historically significant knives or those with significant artistic value. The law requires that switchblades be cased and secured while being transported.
France
French law defines switchblades as dangerous weapons, which may not be carried on one's person. If carried in a vehicle, such knives must be placed in a secure, locked compartment not accessible to the vehicle occupants. In addition, French law provides that authorities may classify any knife as a prohibited item depending upon circumstances and the discretion of the police or judicial authorities.
Germany
The switchblade is known in Germany as the Springmesser (also called a Sprenger or Springer). All large side-opening switchblade knives (blade longer than ), OTF switchblades, balisongs or butterfly knives (blade longer than ), and gravity knives are illegal to own, import or export under German law. Side-opening switchblade knives with single-edged blades not longer than and incorporating a continuous spine are legal to own. Legal switchblades may be carried both open and concealed on one's person if there is a justified need for it ("berechtigtes Bedürfnis") or if the weapon cannot be accessed with less than 3 moves ("Transport in verschlossenem Behältnis"). Other laws or regulations may still prohibit the carrying of otherwise legal automatic or switchblade knives, particularly in certain situations or places (gatherings on public ground, check-in areas of airports).
Hungary
According to decree 175/2003. (X. 28.) of the Hungarian government a közbiztonságra különösen veszélyes eszközökről (about the instruments particularly hazardous to public safety), it is prohibited to possess a switchblade in public places or private places open to the public – that includes the inside of vehicles present there – and on public transport vehicles, except for filmmaking and theatrical performances. Members of the Hungarian Army, law enforcement, national security agencies and armed forces stationed in Hungary are exempt from this limitation together with those who are authorised to carry such instruments by legislation. Sale of a switchblade is authorised only to the persons and organizations above. Customs clearance of switchblades may not be performed for private individuals such as tourists.
Hong Kong
According to the Weapons Ordinance (Cap. 217), any person who has possession of any prohibited items (including Gravity Knife and Flick Knife) commits an offence.
Ireland
Section 9 of the Firearms and Offensive Weapons Act 1990 makes it an offence to carry a "flick knife" in any public space without lawful authority or reasonable excuse. A summary conviction is punishable with either a €1000 fine, up to 12 months imprisonment or both but if indictable the penalty can be up to five years in prison. The Act, which classifies a flick knife as an offensive weapon, also prohibits the manufacture, importation, sale, hire or loan of these knives. Conviction for any of these offences carries a sentence of up to seven years imprisonment.
Italy
In Italy, the switchblade or automatic opening knife (coltello a scatto) is generally defined as an arma bianca (offensive weapon) rather than a tool. While legal for adults to purchase, such knives may not be transported outside of one's property nor carried on the person, either concealed or unconcealed, nor may it be carried in a motor vehicle where the knife may be accessed by driver or passengers. The Italian Ministry of Interior has warned that switchblade knives will be considered offensive weapons in their own right.
Japan
In Japan any switchblade over in blade length requires permission from the prefectural public safety commission in order to possess at home. However, switchblades and assisted open knives are prohibited from carry under any circumstances.
Lithuania
According to Lithuanian law it is illegal to carry or possess a switchblade if it meets one of the following criteria: the blade is longer than ; the width in the middle of the blade is less than 14% of its total length; the blade is double sided.
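Read as a checklist, the criteria above translate directly into a single predicate. The following sketch is purely illustrative (the function and parameter names are ours, and max_length_mm stands in for the statutory blade-length limit, which is elided in the text above):

```python
def is_prohibited_switchblade(blade_length_mm: float,
                              mid_width_mm: float,
                              double_sided: bool,
                              max_length_mm: float) -> bool:
    """Return True if a switchblade meets any of the three Lithuanian
    criteria quoted above. max_length_mm is a placeholder for the
    statutory limit, which is elided in the source text."""
    too_long = blade_length_mm > max_length_mm
    # Width at mid-blade less than 14% of the blade's total length.
    too_narrow = mid_width_mm < 0.14 * blade_length_mm
    return too_long or too_narrow or double_sided
```

For example, a single-edged blade 80 mm long would need a mid-blade width of at least 11.2 mm (14% of 80 mm) to fall outside the second criterion.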
Mexico
It is legal to carry and possess switchblades in Mexico.
Netherlands
As of 2012, it is prohibited to own or possess, whether kept at home or not, any stilettos, switchblades, folding knives with more than one cutting edge, and throwing knives.
New Zealand
The Customs Import Prohibition Order 2008 prohibits the importation of "any knife having a blade that opens automatically by hand pressure applied to a button, spring or other device in or attached to the handle of the knife (sometimes known as a 'flick-knife' or 'flick gun')". The Summary Offences Act 1981 and the Crimes Act 1961 section 202A(4)(a) make it an offence to possess any weapon in a public place without reasonable excuse.
Norway
Switchblades or automatic knives (springkniver) may not be acquired, possessed, or carried in Norway "without justifiable purpose" and also assuming they "appear as products of violence".
Poland
Knives, including switchblades, although regarded as dangerous tools, are not considered weapons under Polish law, except for blades hidden in umbrellas, canes, etc. It is legal to sell, buy, trade and possess a switchblade, and Polish law does not prohibit carrying a knife in a public place. However, certain prohibitions may apply during mass events.
Russia
In Russia, switchblades (rus. автоматический нож, выкидной нож, пружинный нож) are illegal weapons only if the blade is longer than ; carrying such a knife is punishable by a fine of 500–2,000 Russian rubles (about $8–30) and confiscation of the knife (article 20.8 of the Offences Code of Russia), but its purchase and possession (keeping it at home or elsewhere) are not offences. Only the unlicensed making and selling of bladed weapons (rus. холодное оружие) are crimes in Russia, punished under part 4 of article 222 and part 4 of article 223 of the Russian Criminal Code. If the blade is shorter than 9 centimetres, anyone (even a person younger than 18, or one with a criminal history or a mental illness) can buy, own and carry such a switchblade concealed without any license; open carry of any weapon, or of anything that looks like a weapon, is forbidden in populated areas of Russia, with an exception for police. Even in this case it is recommended to carry the official certificate (type approval) usually included in the box with a purchased knife, which proves that the knife is not a melee weapon and is not restricted to carry; with such a certificate, even knives longer than 9 cm are sometimes approved.
Singapore
The importation and possession of switchblades are illegal in Singapore. They may also not be listed or sold at auction there.
Slovenia
Switchblades are specifically prohibited under Slovenian law.
Slovakia
It is legal to carry and possess switchblade or automatic knives with no restriction to the length of the blade.
South Africa
In South Africa, little to no legislation exists on the possession, sale, manufacture, or carrying of weapons other than firearms. Switchblades are accordingly legal to possess, sell, manufacture, and carry.
South Korea
In South Korea, any knife that automatically opens wider than 45 degrees with the push of a button and has a blade that is longer than is subject to registration. In order to register the knife and legally possess it, one must be older than 20, have no previous criminal offences and be healthy both physically and psychologically. The registration process is carried out at nearby police stations. However, unless the owner of the knife has a hunting license, carrying the knife in public is generally prohibited.
Spain
Manufacture, importation, trade, use and possession of switchblade knives are prohibited in Spain.
Sweden
In Sweden, the possession of any knife in a public place, at school, or on public roads is prohibited. Exceptions are made for those who carry knives for professional or otherwise justified reasons. Switchblades may not be possessed by individuals under 21 years of age.
Switzerland
Knives whose blade can be opened with an automatic mechanism that can be operated with one hand are illegal to acquire (except with a special permit) in Switzerland under the Federal Weapons Act. Butterfly knives, throwing knives and daggers with a symmetrical blade are banned likewise. Violations are punishable with imprisonment of up to three years or a fiscal penalty, as provided for by article 33 of the same act.
Turkey
Switchblades are illegal to buy, sell and carry in Turkey under the corresponding law 6136 (4), which provides for an incarceration sentence of up to one year. However, due to the widespread use of switchblades and butterfly knives in the country, imprisonment is very rare, and sentences are often converted to a fine when this is the only violation.
Ukraine
Under Article 263 of the Criminal Code, switchblades are not specifically prohibited; however, any knife definable as a 'dagger' may not be manufactured, sold, repaired for sale, nor carried on one's person without a valid permit.
United Kingdom
In 1959, Parliament passed the Restriction of Offensive Weapons Act 1959, which banned the manufacture, sale, or offer for sale or hire of any type of automatic-opening or switchblade knife. The law came in response to the perceived use of such knives by juvenile delinquents and gangs and the associated media coverage, as well as to the 1958 passage of the Switchblade Knife Act in the United States. Indeed, much of the language in the Restriction of Offensive Weapons Act 1959 appears to be taken directly from the American law.
In 2019, parliamentary amendments to the Restriction of Offensive Weapons Act 1959, made by sections 43, 44, and 46 of the Offensive Weapons Act 2019, made it illegal to own, possess, sell or transfer a switchblade or flick knife within the United Kingdom, including possession at home. According to UK government guidance, assisted-opening knives are included in the amended and expanded definition of a prohibited 'flick knife'.
United States
Federal law
The Switchblade Knife Act (SWA), enacted on August 12, 1958, and codified at 15 U.S.C. § 1241 et seq., prohibits the manufacture, importation, distribution, transportation, and sale of switchblade knives in commercial transactions substantially affecting interstate commerce between any state, territory, possession of the United States, or the District of Columbia, and any place outside that state, territory, U.S. possession, or the District of Columbia. The Act also prohibits possession of such knives on federal or Indian lands or on lands subject to federal jurisdiction. The federal SWA does not prohibit the ownership or carrying of automatic knives or switchblades inside state lines while not on federal property, nor does it prohibit the acquisition or disposition of such knives in an intrastate (in-state) transaction. Finally, the law does not prohibit interstate knife sales or transactions that are either noncommercial in nature, or which do not substantially affect interstate commerce (as defined by recent decisions of the U.S. Supreme Court).
U.S. Code Title 15, Sect. 1241 defines switchblade knives as any knives which open "1) by hand pressure applied to a button or other device in the handle of the knife, or any knife having a blade which opens automatically; (2) by operation of inertia, gravity, or both". The Act also prohibits the manufacture, sale, or possession of switchblade knives on any Federal lands, Native American reservations, military bases, and Federal maritime or territorial jurisdictions including the District of Columbia, Puerto Rico, and other territories. The act was amended in 1986 to also prohibit the importation, sale, manufacture, or possession of ballistic knives in interstate commerce.
18 U.S.C. § 1716 prohibits the mailing or transport of switchblades or automatic knives through the U.S. mails (U.S. Postal Service), with a few designated exceptions, and provides for a fine and/or imprisonment of not more than one year.
15 U.S.C. § 1244 provides that the federal Switchblade Knife Act does not apply to: 1) any common carrier or contract carrier, with respect to any switchblade knife shipped, transported, or delivered for shipment in interstate commerce in the ordinary course of business; 2) the manufacture, sale, transportation, distribution, possession, or introduction into interstate commerce of switchblade knives pursuant to contract with the Armed Forces; 3) the Armed Forces or any member or employee thereof acting in the performance of his duty; 4) the possession and transportation upon his person of any switchblade knife with a blade or less in length by any individual who has only one arm; and 5) a knife "that contains a spring, detent, or other mechanism designed to create a bias toward closure of the blade and that requires exertion applied to the blade by hand, wrist, or arm to overcome the bias toward closure to assist in opening the knife".
State laws
In addition to federal law, some U.S. states have laws restricting or prohibiting automatic knives or switchblades, sometimes as part of a catchall category of deadly weapons or prohibited weapons. A few states, among them Delaware, Hawaii, New Jersey, New Mexico, and New York, prohibit sale, transfer, ownership or possession of automatic knives or switchblades as deadly or prohibited weapons, while others such as New Hampshire and Arizona have no restrictions on sale, ownership, possession, or carry (with some location-specific exceptions). Other states allow purchase, possession, and carrying on one's person to a limited degree, sometimes with restrictions on blade length or location.
The negative public reputation of the switchblade as the tool of the juvenile delinquent, derived from sensational media coverage of the 1950s, was enshrined in many states' criminal codes, and some of these laws persist to this day. Thus in some states, the possession or carrying of an automatic-opening knife or switchblade may become illegal based solely on its design or aesthetic appearance, or simply on its use as a weapon in a given circumstance. For example, switchblade knives with blade shapes originally designed for stabbing or thrusting, such as the dirk, dagger, poignard, or stiletto, are automatically considered to be 'deadly weapons' (i.e. knives designed or specially adapted for use as a weapon to inflict death or serious bodily injury).
Over the years, state judicial decisions have expanded the original reach of switchblade laws, either by reclassifying single-edged automatic pocket knives with short, general-purpose blades as illegal 'dirks or daggers', or by redefining otherwise legal manually operated lock-blade pocket knives as prohibited gravity knives, flick knives, or switchblades. Persons who used knives deemed prohibited in their work or for self-defense, or who could not afford adequate legal representation, particularly racial minorities, have been disproportionately affected by the capricious enforcement of such laws.
In response to complaints raised about the constitutionality and inconsistent application of existing statutes to modern knife designs, several states such as Alaska, Arkansas, Indiana, Kansas, Michigan, Missouri, Montana, Tennessee, Texas, Virginia, West Virginia, and Wisconsin have repealed older laws against possession or purchase of switchblade or automatic knives. Five states still prohibit anyone from selling, purchasing, owning or carrying a switchblade.
In August 2024, the Massachusetts Supreme Judicial Court relied on the 2022 U.S. Supreme Court decision New York State Rifle & Pistol Association, Inc. v. Bruen when it struck down a 1957 ban on switchblade knives in the state, on the grounds there were no similar bans at the time of the writing of the Second and Fourteenth Amendments. This may mean other switchblade laws at the federal and state level will be ruled unconstitutional.
City and county ordinances
Unless preempted by state law, various county, city, or other local jurisdictions may also have their own codes or ordinances further restricting or prohibiting switchblade possession or use, for example Sioux Falls, South Dakota, or Oakland, California.
History
Switchblades date from the mid-18th century. The development of the first automatic knife was made possible by the invention of small tempered springs by the clockmaker Benjamin Huntsman in 1742.
The first spring-fired switchblade that can be authenticated appeared in the late 1700s, probably constructed by a craftsman in Italy. After 1816, no automatic knives were produced in Italy for 50 years due to laws passed by the Austro-Hungarian Empire. By 1900, Italy had resumed production of automatic knives of the stiletto pattern, all hand-crafted by individual cutlers or small knife forges. Most of these knifemakers were concentrated in the towns of Maniago, Frosolone, Campobasso, and Scarperia.
Unknown artisans developed an automatic folding spike bayonet for use on flintlock pistols and coach guns. Examples of steel automatic folding knives from Sheffield, England, have crown markings that date to 1840. Cutlery makers such as Tillotson, A. Davey, Beever, Hobson, Ibbotson and others produced automatic-opening knives. Some have simple iron bolsters and wooden handles, while others feature ornate, embossed silver alloy bolsters and stag handles. English-made knives often incorporate a "pen release" instead of a central handle button, whereby the larger, main-spring-activated blade is released by pressing down on the closed smaller pen blade.
In France, 19th-century folding knives marked Châtellerault were available in both automatic and manually opened versions in several sizes and lengths. Châtellerault switchblades have recognizable features such as S-shaped cross guards, picklock-type mechanisms, and engraved decorative pearl and ivory handles. In Spain, Admiral D'Estaing is credited with a type of folding naval dirk that doubled as an eating utensil. In the closed (folded) position, the blade tip would extend beyond the handle for use at the dining table; it could be spring-activated to full length if needed as a side arm, by pressing a lever instead of a handle button. By 1850, at least one American company offered a .22 rimfire single-shot pistol equipped with a spring-operated knife. After the American Civil War (1865), knife production became industrialized. The oldest American-made mass-production automatic knife is the Korn Patent Knife, which used a rocking bolster release.
The advent of mass production methods enabled folding knives with multiple components to be produced in large numbers at lower cost. By 1890, US knife sales of all types were on the increase, buoyed by catalog mail order sales as well as mass marketing campaigns utilizing advertisements in periodicals and newspapers. In consequence, knife manufacturers began marketing new and much more affordable automatic knives to the general public. In Europe as well as the United States, automatic knife sales were never more than a fraction of sales generated by conventional folding knives, yet the type enjoyed consistent if modest sales from year to year.
In 1892, George Schrade, a toolmaker and machinist from New York City developed and patented the first of several practical automatic knife designs. The following year, Schrade founded the New York Press Button Knife Company to manufacture his switchblade knife pattern, which had a unique release button mounted in the knife bolster. Schrade's company operated out of a small workshop in New York City and employed about a dozen workmen.
1900–1945
Swordmakers in Toledo, Spain, developed a market in the 1920s for gold plated automatic leverlock knives with pearl handles and enamel inlaid blades. Italian knifemakers had their own style of knives including both pushbutton and leverlock styles, some bearing design characteristics similar to the early French Châtellerault knife. Prior to World War II, hand crafted automatic knives marked Campobasso or Frosolone were often called Flat Guards because of the two-piece top bolster design. Some Italian switchblades incorporated a bayonet-type blade equipped with a blade lock release activated by prying up a locking flange at the hinge end, and were known as picklock models. These knives were later supplanted by newer designs which incorporated the blade lock release into a tilting bolster.
In Italy, increased production of automatic knives resulted from the actions of German businessman Albert Marx, who owned two cutlery manufacturing concerns in Solingen, Germany. After a trip to Maniago in 1907, Marx was convinced of the appeal of Italian style automatic knives, and duly took note of attempts by Maniago knifemakers to increase productivity using powered cutting tools. Marx promptly introduced the Solingen methods of semi-mass production to the Maniago knife industry, increasing output and lowering individual costs. While Italian automatic knives would remain hand-assembled and to some extent hand-crafted products, Marx's innovations did increase production, enabling exports to other parts of Italy and eventually throughout Europe and abroad. Over time, Maniago became the central hub of automatic knife production in Italy.
In the United States, commercial development of the switchblade knife was primarily dominated by the inventions of George Schrade and his New York Press Button Knife Company, though W.R. Case, Union Cutlery, Camillus Cutlery, and other U.S. knife manufacturers also marketed automatic knives of their own design. Most of Schrade's switchblade patterns were automatic versions of utilitarian jackknives and pocket knives, as well as smaller penknife models designed to appeal to women buyers. In 1903, Schrade sold his interest in the New York Press Button Knife Co. to the Walden Knife Co., and moved to Walden, New York, where he opened a new factory. There Schrade became the company's production superintendent, establishing a production line to manufacture several patterns of Schrade-designed switchblade knives, ranging from a large folding hunter to a small pocket knife. Walden Knife Co. would go on to sell thousands of copies of Schrade's original bolster button design.
The advertising campaigns of the day by Schrade and other automatic knife manufacturers focused on marketing to farmers, ranchers, hunters, or outdoors men who needed a compact pocket knife that could be quickly brought into action when needed. In rural areas of America, these campaigns were partially successful, particularly with younger buyers, who aspired to own the most modern tools at a time when new labor-saving inventions were constantly appearing on the market. Most American-made switchblades made after 1900 were patterned after standard utilitarian pocketknives, though a few larger Bowie or Folding Hunter patterns were produced with blade shapes and lengths that could be considered useful as fighting knives. Most had flat or sabre-ground clip or spear-point blade profiles and single-sharpened edges. Blade lengths rarely exceeded . A few manufacturers introduced the double switchblade, featuring two blades that could be automatically opened and locked with the push of a button.
At the low end of the market, Shapleigh Hardware Company of St. Louis, Missouri contracted thousands of switchblades under the trademark Diamond Edge for distribution to dealers across the United States and Canada. Most of these knives were novelty items, assembled at the lowest possible cost. Sold off display cards in countless hardware and general stores, many low-end Diamond Edge switchblades failed to last more than a few months in actual use. Other companies such as Imperial Knife and Remington Arms paid royalties to Schrade in order to produce automatic "contract knives" for rebranding and sale by large mail-order catalog retailers such as Sears, Roebuck & Co.
In 1904, together with his brothers Louis and William, George Schrade formed the Schrade Cutlery Co. in Walden, and began developing a new series of switchblades, which he patented in 1906–07. Schrade's new Safety Pushbutton Knives incorporated several design improvements over his earlier work, and featured a handle-mounted operating button with a sliding safety switch. A multi-blade operating button allowed a single knife to carry up to four automatic blades. In successive patents from 1906 through 1916, Schrade steadily improved this design, which would later become known as the Presto series. With the Presto line, Schrade would largely dominate the automatic knife market in the United States for the next forty years. Schrade went on to manufacture thousands of contract switchblade knives under several trademarks and brands, including E. Weck, Wade & Butcher, and Case XX, while other companies used Schrade's patent as the basis for their own switchblade patterns. Among these were pocket and folding hunter pattern switchblades bearing the name Keen Kutter, a trademark owned by E.C. Simmons Hardware Co. (later purchased by the Shapleigh Hardware Co.).
Having earned a handsome return from his work, Schrade traveled to Europe in 1911, first to Sheffield, England, where he assisted Thomas Turner & Company in expediting a wartime order from the British Navy. He next moved to the knifemaking center of Solingen, Germany. Schrade was aware of Solingen's reputation for having the best cutlery steel in Europe, and he opened a factory to produce his safety pushbutton switchblade knife there. In 1915 or 1916 Schrade sold his Solingen holdings (some sources state they were seized by the German government) and returned to the United States.
In 1918, Captain Rupert Hughes of the U.S. Army submitted a patent application for a specialized automatic-opening trench knife of his own design, the Hughes Trench Knife. This was a curious device consisting of a folding spring-loaded knife blade attached to a handle which fastened to the back of the hand and was secured by a leather strap, leaving the palm and fingers free for grasping other objects. Pressing a button on the handle automatically extended the knife blade into an open, locked position, allowing the knife to be used as a stabbing weapon. The Hughes Trench Knife was evaluated as a potential military arm by a panel of U.S. Army officers from the American Expeditionary Force (AEF) in June 1918. After testing, the board found the Hughes design to be of no value, and it was never adopted. Hughes went on to patent his automatic trench knife in 1919, though he appears to have been unsuccessful in persuading a knife manufacturing company to produce his design.
From 1923 to 1951, the Union Cutlery Co. of Olean, New York produced a series of lever-operated switchblades designed for the mid and upper end of the market, with celluloid, stag, or jigged bone handles and a bolster-mounted push-button, all bearing the company's KA-BAR trademark on the blade tang. The line included the KA-BAR Grizzly, KA-BAR Baby Grizzly, and KA-BAR Model 6110 Lever Release knives. The largest model was the KA-BAR Grizzly, a folding hunter pattern with a broad bowie-type clip-point blade.
Upon returning to the United States, Schrade made a final improvement to his Presto series of switchblades, filing his patent application on June 6, 1916. The next year, Schrade licensed a new flylock switchblade design to the Challenge Cutlery Company, which he then joined. Under the trademark of the Flylock Knife Co., Challenge made several patterns of the flylock switchblade, including a large folding hunter model with a hinged floating guard and a small pen knife model designed to appeal to women buyers. A Challenge Cutlery advertisement of the day depicted a female hand operating a fly-lock automatic pen knife, accompanied by a caption urging women to buy one for their sewing kit so as not to break a nail while attempting to open a normal pen knife. Schrade pursued his knifemaking interests at both Challenge and at Schrade Cutlery, where one of his brothers now managed one of the company's factories.
With a few ex-Challenge employees, Schrade formed a second company, the Geo. Schrade Knife Company, primarily to manufacture his Presto series of switchblade knives. In 1937, Schrade came out with two more low-cost switchblade knives designed to appeal to youth, the Flying Jack and the Pull-Ball Knife. The Flying Jack had a sliding operating latch and could be produced with one or more automatically opening blades. The Pull-Ball opened by pulling a ball located on the butt end of the handle. Schrade would later manufacture alternative configurations of the ball operating handle, including dice, rings, eight-balls, and different colors. The Pull-Ball required two hands to open, removing much of the switchblade's utility as a one-handed knife. As the blade catch mechanism required a good deal of space within the handle, the knife's blade length was short relative to its handle length. Schrade manufactured many pull-ball knives for sale under other brands, including Remington, Case, and the "J.C.N. Co." (Jewelry Cutlery Novelty Company of North Attleboro, Massachusetts). Always looking for a new way to appeal to customers, Schrade continued to experiment with new switchblade designs up to the time of his death in 1940.
In the late 1930s the German Luftwaffe began training a Fallschirmjäger or paratroop force, and as part of this effort developed specialized equipment for the airborne soldier, including the Fallschirmjäger-Messer (paratrooper's knife), which used a gravity-operated mechanism to deploy its sliding spearpoint blade from the handle. The German paratrooper knife, which featured a marlinspike in addition to the cutting blade, was used to cut rigging and unknot lines, though it could be employed as a weapon in an emergency. In 1940, the U.S. Army tasked the Geo. Schrade Knife Co. with producing a small single-edge switchblade for U.S. airborne troops, to be used similarly to the Fallschirmjäger-Messer. The knife was not intended primarily as a fighting knife, but rather as a utility tool, enabling a paratrooper to rapidly cut himself out of his lines and harness if he could not otherwise escape them after landing.
The company's submission was approved by the U.S. Army Materiel Command in December 1940 as the Knife, Pocket, M2. The M2 had a clip-point blade and featured a carrying bail. Except for the bail, the M2 was for all intents and purposes a copy of George Schrade's popular Presto safety-button civilian model. The M2 was issued primarily to U.S. Army paratroopers during the war, though some knives appear to have been distributed to crews and members of the Office of Strategic Services. When issued to paratroopers, the M2 was normally carried in the dual-zippered knife pocket on the upper chest of the M42 jump uniform jacket. After the war, the M2 was manufactured by Schrade (now Schrade-Walden, Inc.) as the Parachutist's Snap Blade Knife (MIL-K-10043) under a postwar military contract. In addition, other companies such as the Colonial Knife Co. made civilian versions of the M2 after the war.
Postwar usage and the Italian stiletto
From the end of World War II until 1958, most U.S.-made switchblades were manufactured by Schrade (now Schrade-Walden, Inc., a division of Imperial Knife Co.) and the Colonial Knife Co.
Schrade-Walden Inc. made knives under the Schrade-Walden trademark, while Colonial made a number of mass-produced switchblade patterns during the 1950s under the trademark Shur Snap. Designed to a price point, Shur Snap switchblades featured stamped, plated sheet-metal bolsters and plastic scales.
In 1956, the U.S. Air Force requested development of a new aircrew knife with several requirements, including the ability to be opened with one hand. The final result was the MC-1 Aircrew Survival Knife. A development of the WW2-era M2 Parachutist Snap Blade knife, the MC-1 featured twin blades: the main blade was a blunt line-cutting blade with a protected sharpened inside edge for severing parachute lines, while the secondary blade opened automatically with a push button in the event the crew member could use only one hand. First issued in 1958, the MC-1 was restricted to U.S. military sales only, and was produced by the Camillus Cutlery Co., Logan/Smyth of Venice, Florida, and Schrade-Walden Inc. The last production contract for the MC-1 was cancelled in 1993.
After 1945, American soldiers returning home from Europe brought along individually purchased examples of the Italian style of stiletto pattern switchblade produced in Maniago and other cutlery towns. Though undeniably limited in practical usefulness, the style and beauty of the so-called stiletto switchblade was a revelation to US buyers accustomed to the utilitarian nature of most U.S.-made automatic knives such as the Schrade Presto pocketknife. Consumer demand for more of these knives resulted in the importation of large numbers of side-opening and telescoping blade switchblades, primarily from Italy. In the case of the switchblade, the name stiletto derives from the blade design, since most Italian designs incorporated a long, slender blade tapering to a needle-like point, together with a slim-profile handle and vestigial cross-guard reminiscent of the medieval weapon. The majority of these stiletto pattern switchblade knives used a now-iconic slender bayonet-style blade with a single sabre-ground edge and an opposing false edge. Other blade styles included the double-edged dagger and the curved-edge kris. Some were flimsy souvenirs made for tourists or novelty purchasers, while others were made with solid materials and workmanship. Eventually, many thousands of Italian switchblades were exported to the US. Around this time, the traditional Italian switchblade 'picklock' method of blade release was largely replaced by the tilt bolster mechanism, ending the "Golden Age" of hand-crafted Italian switchblades.
As with the medieval stiletto upon which it was based, the so-called stiletto switchblade was intended to be a concealable weapon optimized for thrusting rather than cutting or slashing (many imported stiletto switchblades had no sharpened cutting edge at all). These knives ranged in blade length from . As a weapon, the stiletto switchblade was much less effective than most fixed-blade hunting and military knives commonly available in the US, including the Bowie knife and dagger, which could inflict deep slashing cuts as well as stab wounds. However, its peculiar properties of easy concealment and rapid blade deployment appealed to some, and as with any other knife, the stiletto switchblade could inflict a severe wound, given sufficient blade length.
1950s gang usage and controversy
In 1950, an article titled The Toy That Kills appeared in the Woman's Home Companion, a widely read U.S. periodical of the day. The article sparked a storm of controversy and a nationwide campaign that would eventually result in state and federal laws criminalizing the importation, sale, and possession of automatic-opening knives. In the article, author Jack Harrison Pollack assured the reader that the growing switchblade "menace" could have deadly consequences, "as any crook can tell you". Pollack, a former aide to Democratic Senator Harley M. Kilgore and a ghostwriter for then-Senator Harry S. Truman, had authored a series of melodramatic magazine articles calling for new laws to address a variety of social ills. In The Toy That Kills, Pollack wrote that the switchblade was "Designed for violence, deadly as a revolver - that's the switchblade, the 'toy' youngsters all over the country are taking up as a fad. Press the button on this new version of the pocketknife and the blade darts out like a snake's tongue. Action against this killer should be taken now". To back up his charges, Pollack quoted an unnamed juvenile court judge as saying: "It's only a short step from carrying a switchblade to gang warfare".
During the 1950s, established U.S. newspapers as well as the sensationalist tabloid press joined forces in promoting the image of a young delinquent armed with a stiletto switchblade or flick knife. While the press focused on the switchblade as a symbol of youthful evil intent, the American public's attention was drawn by lurid stories of urban youth gang warfare and the fact that many gangs were composed of lower-class youth and/or racial minorities. The purported offensive nature of the stiletto switchblade, combined with reports of knife fights, robberies, and stabbings by youth gangs and other criminal elements in urban areas of the United States, generated continuing demands from newspaper editorial rooms and the public for new laws restricting the lawful possession and/or use of switchblade knives, with particular emphasis on racial minorities, especially African-American and Hispanic teens. In 1954, the state of New York passed the first law banning the sale or distribution of switchblade knives in hopes of reducing gang violence. That same year, Democratic Rep. James J. Delaney of New York authored the first bill submitted to the U.S. Congress banning the manufacture and sale of switchblades.
Some U.S. congressmen saw the switchblade controversy as a political opportunity to capitalize on constant negative accounts of the switchblade knife and its connection to violence and youth gangs. This coverage included not only magazine articles but also highly popular films of the late 1950s including Rebel Without a Cause (1955), Blackboard Jungle (1955), Crime in the Streets (1956), 12 Angry Men (1957), The Delinquents (1957), High School Confidential (1958), and the 1957 Broadway musical West Side Story. Hollywood's fixation on the switchblade as the symbol of youth violence, sex, and delinquency resulted in renewed demands from the public and Congress to control the sale and possession of such knives. State laws restricting or criminalizing switchblade possession and use were adopted by an increasing number of state legislatures.
In 1957, Senator Estes Kefauver (D) of Tennessee attempted unsuccessfully to pass a law restricting the importation and possession of switchblade knives. Opposition to the bill from the U.S. knife making industry was muted, with the exception of the Colonial Knife Co. and Schrade-Walden Inc., which were still manufacturing small quantities of pocket switchblades for the U.S. market. Some in the industry even supported the legislation, hoping to gain market share at the expense of Colonial and Schrade. However, the legislation failed to receive expected support from the U.S. Departments of Commerce and Justice, which considered the legislation unenforceable and an unwarranted intrusion into lawful sales in interstate commerce.
While Kefauver's bill failed, a new bill prohibiting the importation or possession of switchblade knives in interstate commerce was introduced the following year by Democratic Congressman Peter F. Mack Jr. of Illinois in an attempt to reduce gang violence in Chicago and other urban centers in the state. With youth violence and delinquency aggravated by the severe economic recession of 1957–58, Mack's bill was enacted by Congress and signed into law as the Switchblade Knife Act of 1958.
The melodrama created by the US media around the stiletto switchblade had its effect in Canada and the United Kingdom as well. The US Switchblade Knife Act was closely followed by the UK Restriction of Offensive Weapons Act of 1959. In Canada, Parliament amended the Criminal Code in 1959 to classify all new-production automatic knives as prohibited weapons, banned from importation, sale, or possession within that country.
The new laws treated all automatic-opening knives as a prohibited class, even knives with utility or general-purpose blades not generally used by criminals. Curiously, the sale and possession of stilettos and other 'offensive' knives with fixed or lockback blades were not prohibited. In other U.S. states, the sale and possession of switchblade knives remained legal, particularly in rural states where a significant proportion of the population possessed firearms. As late as 1968, Jack Pollack was still writing lurid articles demanding further federal legislation prohibiting the purchase or possession of switchblade knives. That same year, New York congressman Lester L. Wolff (D) even read one of Pollack's articles into the U.S. Congressional Record while introducing legislation to further restrict the sale and transportation of switchblades, arguing that 'switchblade knives have no redeeming social value and are restricted almost solely to violence'.
As an anti-violence measure, legislation against switchblade sales or use clearly failed in the United States, as youth street gangs increasingly turned from bats and knives to handguns, MAC-10s, and AK-47s to settle their disputes over territory as well as income from prostitution, extortion, and illicit drug sales. In fact, the U.S. murder rate using cutting or stabbing instruments of all types declined from 23% of all murders in 1965 to just 12% in 2012.
1970–2000
By the late 1960s, new production of switchblades in the United States was largely limited to military contract automatic knives such as the MC-1.
In Italy, switchblades known among collectors as "Transitionals" were made with a mix of modern parts and leftover old-style parts.
Switchblade knives continued to be sold and collected in those states in which possession remained legal. In the 1980s, automatic knife imports to the U.S. resumed with the concept of kit knives, allowing the user to assemble a working switchblade from a parts kit with the addition of a mainspring or other key part (often sold separately). Since no law prohibited importation of switchblade parts or unassembled kits, all risk of prosecution was assumed by the assembling purchaser, not the importer. This loophole was eventually closed by new federal regulations.
Present day
The ability to purchase or carry switchblades or automatic knives continues to be heavily restricted or prohibited throughout much of Europe, with some notable exceptions. In the UK, where the folding type of switchblade is commonly referred to as a flick knife, knives with an automated opening system are nearly impossible to acquire or carry legally; although they can legally be owned, it is illegal to manufacture, sell, hire, give, lend, or import such knives. This nominally restricts lawful ownership to 'grandfathered' automatic knives already in their owners' possession prior to the enactment of the applicable law in 1959. Even when such a knife is legally owned, carrying it in public without good reason or lawful authority is also illegal under current UK law.
Under US federal law, switchblades have been illegal to import from abroad or to purchase through interstate commerce since 1958 under the Switchblade Knife Act (15 U.S.C. §§ 1241–1245). In recent years, however, many U.S. states have entirely repealed their laws prohibiting the purchase or possession of automatic or switchblade knives.
Modern-day Switchblade Development
Despite federal law, a number of U.S. knife companies and custom makers still build automatic knives, primarily for use by the military and emergency personnel. Some well-known present-day automatic knife manufacturers include Buck Knives, Colonial Knife Co., Microtech Knives, Benchmade, Severtech, Gerber Legendary Blades, Mikov, Pro-Tech Knives, Dalton, Böker, Spyderco, Kershaw Knives, and Piranha. Colonial manufactures the M724 Automatic Rescue Knife, which is currently issued in all U.S. military aircraft ejection seat survival kits.
The classic Italian-style stiletto switchblade continues to be produced in Italy, Taiwan, and China. Automatic knife manufacture in Italy remains predominantly a cottage industry of family-run businesses. These include Frank Beltrame and AGA Campolin, who have been making automatic knives using hand-assembly techniques for more than half a century. Since the late 1990s, Taiwan and China have emerged as large-scale producers of automatic knives.
Automatic or switchblade knives have been produced in the following countries: Argentina, China, the Czech Republic, England, France, Germany, Hong Kong, India, Italy, Japan, Mexico, Korea, Pakistan, Poland, Russia, Spain, Switzerland, Taiwan, and the United States.
| Technology | Knives | null |
623118 | https://en.wikipedia.org/wiki/Ipomoea | Ipomoea | Ipomoea is the largest genus in the plant family Convolvulaceae, with over 600 species. It is a large and diverse group, with common names including morning glory, water convolvulus or water spinach, sweet potato, bindweed, moonflower, etc. The genus occurs throughout the tropical and subtropical regions of the world, and comprises annual and perennial herbaceous plants, lianas, shrubs, and small trees; most of the species are twining climbing plants.
Their most widespread common name is morning glory, but some species in related genera bear that same common name and some Ipomoea species are known by different common names. Those formerly separated in Calonyction (from the Greek καλός, kalos, "good" and νύξ, νυκτός, nyx, nyktos, "night") are called moonflowers. The name Ipomoea is derived from the Ancient Greek ἴψ (ips), meaning "woodworm", and ὅμοιος (homoios), meaning "resembling". It refers to their twining habit.
Uses and ecology
Human uses of Ipomoea include:
Most species have spectacular, colorful flowers and are often grown as ornamentals, and a number of cultivars have been developed. Their deep flowers attract large Lepidoptera, especially hawk moths (Sphingidae) such as the pink-spotted hawk moth (Agrius cingulata), and even hummingbirds.
The genus includes food crops; the tubers of sweet potatoes (I. batatas) and the leaves of water spinach (I. aquatica) are commercially important food items, and have been for millennia. The sweet potato is one of the Polynesian "canoe plants", transplanted by settlers on islands throughout the Pacific. Water spinach is used all over eastern Asia and the warmer regions of the Americas as a key component of well-known dishes, such as canh chua rau muống (Mekong sour soup) or callaloo; its numerous local names attest to its popularity. Other species are used on a smaller scale, e.g. the whitestar potato (I. lacunosa) traditionally eaten by some Native Americans, such as the Chiricahua Apaches, or the Australian bush potato (I. costata). The peduncles or seed pods of Ipomoea muricata are consumed as a delicacy in the Indian state of Kerala.
Peonidin, an anthocyanidin potentially useful as a food additive, is present in significant quantities in the flowers of the 'Heavenly Blue' morning glory cultivar.
Ipomoea sepiaria is part of the Dashapushpam (Ten sacred flowers) in Kerala and is known as "Thiruthali" in Malayalam.
Moon vine (I. alba) sap was used for vulcanization of the latex of Castilla elastica (the Panama rubber tree, Nahuatl: olicuáhuitl) to produce rubber; as it happens, the rubber tree seems well-suited for the vine to twine upon, and the two species are often found together. Using this process, the Olmecs were producing the balls used in the Mesoamerican ballgame as early as 1600 BCE.
The root called John the Conqueror in hoodoo and used in lucky and/or sexual charms (though apparently not as a component of love potions, because it is a strong laxative if ingested) usually seems to be from I. jalapa. The testicle-like dried tubers are carried as amulets and rubbed by the users to gain good luck in gambling or flirting. As Willie Dixon wrote, somewhat tongue-in-cheek, in his song "Rub My Root" (a Muddy Waters version is titled "My John the Conquer Root"):
My pistol may snap, my mojo is frail
But I rub my root, my luck will never fail
When I rub my root, my John the Conquer root
Aww, you know there ain't nothin' she can do, Lord,
I rub my John the Conquer root
As medicine and entheogen
Humans use Ipomoea spp. for their medicinal and psychoactive compounds, mainly alkaloids. Some species are renowned for their properties in folk medicine and herbalism; for example, Vera Cruz jalap (I. jalapa) and Tampico jalap (I. simulans) are used to produce jalap, a cathartic preparation that accelerates the passage of stool. Kiribadu ala (giant potato, I. mauritiana) is one of the many ingredients of chyawanprash, the ancient Ayurvedic tonic called "the elixir of life" for its wide-ranging properties.
The leaves of I. batatas are eaten as a vegetable, and have been shown to slow the oxidation of LDLs, with potential health benefits similar to those of green tea and grape polyphenols.
Other species were and still are used as potent entheogens. Seeds of Mexican morning glory (tlitliltzin, I. tricolor) were thus used by Aztecs and Zapotecs in shamanistic and priestly divination rituals, and at least by the former also as a poison, to give the victim a "horror trip" (see also Aztec entheogenic complex). Beach moonflower (I. violacea) was used in the same way, and the cultivars called 'Heavenly Blue', touted today for their psychoactive properties, seem to represent an indeterminable assembly of hybrids of these two species.

Ergoline derivatives (lysergamides) are probably responsible for the entheogenic activity. Ergine (LSA), isoergine, D-lysergic acid N-(α-hydroxyethyl)amide, and lysergol have been isolated from I. tricolor, I. violacea, and/or purple morning glory (I. purpurea); although these are often assumed to be the cause of the plants' effects, this is not supported by scientific studies, which show that although they are psychoactive, they are not notably hallucinogenic. Alexander Shulgin in TiHKAL suggests instead that ergonovine is responsible. It has verified psychoactive properties, though other, as yet undiscovered, lysergamides may also be present in the seeds.
Though most often noted as "recreational" drugs, the lysergamides are also of medical importance. Ergonovine enhances the action of oxytocin, used to stem postpartum bleeding. Ergine induces drowsiness and a relaxed state, and so might be useful in treating anxiety disorder. Whether Ipomoea species are useful sources of these compounds remains to be determined. In any case, in some jurisdictions certain Ipomoea are regulated; for example, Louisiana State Act 159 bans cultivation of I. violacea except for ornamental purposes.
Pests and diseases
Many herbivores avoid morning glories such as Ipomoea, as the high alkaloid content makes these plants unpalatable, if not toxic. Nonetheless, Ipomoea species are used as food plants by the caterpillars of certain Lepidoptera (butterflies and moths). For a selection of diseases of the sweet potato (I. batatas), many of which also infect other members of this genus, see List of sweet potato diseases.
Pollination
Species of Ipomoea can interfere with each other's pollination: pollen from one species competes in another's reproductive processes, imposing a fitness cost.
| Biology and health sciences | Solanales | Plants |
623686 | https://en.wikipedia.org/wiki/Brain%E2%80%93computer%20interface | Brain–computer interface | A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct communication link between the brain's electrical activity and an external device, most commonly a computer or robotic limb. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. They are often conceptualized as a human–machine interface that skips the intermediary of moving body parts such as the hands, although they also raise the possibility of erasing the distinction between brain and machine. BCI implementations range from non-invasive (EEG, MEG, MRI) and partially invasive (ECoG and endovascular) to invasive (microelectrode array), based on how physically close electrodes are to brain tissue.
Research on BCIs began in the 1970s by Jacques Vidal at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from the Defense Advanced Research Projects Agency (DARPA). Vidal's 1973 paper introduced the expression brain–computer interface into the scientific literature.
Due to the cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices were implanted in humans in the mid-1990s.
Studies in human-computer interaction that apply machine learning to statistical temporal features extracted from frontal lobe EEG data have achieved success in classifying mental states (relaxed, neutral, concentrating), mental emotional states (negative, neutral, positive), and thalamocortical dysrhythmia.
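As a rough illustration of this kind of pipeline (a minimal sketch, not any cited study's exact method; the window length, feature set, classifier choice, and synthetic data are all assumptions):

```python
# Sketch: statistical temporal features from sliding EEG windows, fed to a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(eeg, fs=256, win_s=1.0):
    """eeg: (n_samples, n_channels) array; returns (n_windows, n_features)."""
    win = int(fs * win_s)
    feats = []
    for start in range(0, len(eeg) - win + 1, win):
        w = eeg[start:start + win]
        # Simple statistical temporal features, computed per channel
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0),
                                     np.diff(w, axis=0).std(0)]))
    return np.array(feats)

# Synthetic stand-in data: 2 channels, 60 s at 256 Hz, 3 mental-state labels
rng = np.random.default_rng(0)
eeg = rng.normal(size=(60 * 256, 2))
X = window_features(eeg)
y = rng.integers(0, 3, size=len(X))  # e.g., relaxed / neutral / concentrating
print(cross_val_score(RandomForestClassifier(), X, y, cv=5).mean())
```

With real labeled recordings in place of the random arrays, the cross-validated score indicates how well the chosen features separate the mental states.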
History
The history of brain-computer interfaces (BCIs) starts with Hans Berger's discovery of the brain's electrical activity and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity utilizing EEG. Berger was able to identify oscillatory activity, such as the alpha wave (8–13 Hz), by analyzing EEG traces.
Berger's first recording device was rudimentary. He inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patient's head by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. However, more sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed voltages as small as 10⁻⁴ volt, led to success.
Berger analyzed the interrelation of alterations in his EEG wave diagrams with brain diseases. EEGs opened up completely new possibilities for brain research.
Although the term had not yet been coined, one of the earliest examples of a working brain-machine interface was the piece Music for Solo Performer (1965) by American composer Alvin Lucier. The piece makes use of EEG and analog signal processing hardware (filters, amplifiers, and a mixing board) to stimulate acoustic percussion instruments. Performing the piece requires producing alpha waves and thereby "playing" the various instruments via loudspeakers that are placed near or directly on the instruments.
Vidal coined the term "BCI" and produced the first peer-reviewed publications on this topic. He is widely recognized as the inventor of BCIs. A review pointed out that Vidal's 1973 paper stated the "BCI challenge" of controlling external objects using EEG signals, and especially the use of the Contingent Negative Variation (CNV) potential as a challenge for BCI control. Vidal's 1977 experiment was the first application of BCI after his 1973 BCI challenge: noninvasive EEG control (using visual evoked potentials, VEPs) of a cursor-like graphical object on a computer screen. The demonstration involved movement through a maze.
The first demonstration of noninvasive EEG control of a physical object, a robot, came in 1988. The experiment demonstrated EEG control of multiple start-stop-restart cycles of movement along an arbitrary trajectory defined by a line drawn on the floor. The line-following behavior was the default robot behavior, utilizing autonomous intelligence and an autonomous energy source.
In 1990, a report was given on a closed loop, bidirectional, adaptive BCI controlling a computer buzzer by an anticipatory brain potential, the Contingent Negative Variation (CNV) potential. The experiment described how an expectation state of the brain, manifested by CNV, used a feedback loop to control the S2 buzzer in the S1-S2-CNV paradigm. The resulting cognitive wave representing the expectation learning in the brain was termed Electroexpectogram (EXG). The CNV brain potential was part of Vidal's 1973 challenge.
Studies in the 2010s suggested neural stimulation's potential to restore functional connectivity and associated behaviors through modulation of molecular mechanisms. This opened the door for the concept that BCI technologies may be able to restore function.
Beginning in 2013, DARPA funded BCI technology through the BRAIN initiative, which supported work out of teams including University of Pittsburgh Medical Center, Paradromics, Brown, and Synchron.
Neuroprosthetics
Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, using artificial devices to replace the function of impaired nervous systems and brain-related problems, or of sensory or other organs (bladder, diaphragm, etc.). As of December 2010, cochlear implants had been implanted as neuroprosthetic devices in some 736,900 people worldwide. Other neuroprosthetic devices aim to restore vision, including retinal implants. The first neuroprosthetic device, however, was the pacemaker.
The terms are sometimes used interchangeably. Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, ability to communicate, and even cognitive function. Both use similar experimental methods and surgical techniques.
Animal research
Several laboratories have managed to read signals from monkey and rat cerebral cortices to operate BCIs to produce movement. Monkeys have moved computer cursors and commanded robotic arms to perform simple tasks simply by thinking about the task and seeing the results, without motor output. In May 2008, photographs showing a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were widely published. Sheep have also been used to evaluate BCI technology, including Synchron's Stentrode.
In 2020, Elon Musk's Neuralink was successfully implanted in a pig. In 2021, Musk announced that the company had successfully enabled a monkey to play video games using Neuralink's device.
Early work
In 1969 operant conditioning studies by Fetz et al. at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine showed that monkeys could learn to control the deflection of a biofeedback arm with neural activity. Similar work in the 1970s established that monkeys could learn to control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded accordingly.
Algorithms to reconstruct movements from motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Georgopoulos at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms. He also found that dispersed groups of neurons, in different areas of the monkey's brains, collectively controlled motor commands. He was able to record the firings of neurons in only one area at a time, due to equipment limitations.
Several groups have been able to capture complex brain motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices.
Research
Kennedy and Yang Dan
Phillip Kennedy (who founded Neural Signals in 1987) and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys.

In 1999, Yang Dan et al. at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates the brain's sensory input). Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. Neuron firings were recorded while the cats watched eight short movies. Using mathematical filters, the researchers decoded the signals to reconstruct recognizable scenes and moving objects.
Nicolelis
Duke University professor Miguel Nicolelis advocates using multiple electrodes spread over a greater area of the brain to obtain neuronal signals.
After initial studies in rats during the 1990s, Nicolelis and colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys' advanced reaching and grasping abilities and hand manipulation skills made them good test subjects.
By 2000, the group succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could remotely control a separate robot. But the monkeys received no feedback (open-loop BCI).
Later experiments on rhesus monkeys included feedback and reproduced monkey reaching and grasping movements in a robot arm. Their deeply cleft and furrowed brains made them better models for human neurophysiology than owl monkeys. The monkeys were trained to reach and grasp objects on a computer screen by manipulating a joystick while corresponding movements by a robot arm were hidden. The monkeys were later shown the robot and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted gripping force.
In 2011, O'Doherty and colleagues showed a BCI with sensory feedback in rhesus monkeys. The monkey controlled the position of an avatar arm while receiving sensory feedback through intracortical microstimulation (ICMS) in the arm representation area of the sensory cortex.
Donoghue, Schwartz, and Andersen
Other laboratories that have developed BCIs and algorithms that decode neuron signals include John Donoghue at the Carney Institute for Brain Science at Brown University, Andrew Schwartz at the University of Pittsburgh, and Richard Andersen at Caltech. These researchers produced working BCIs using recorded signals from far fewer neurons than Nicolelis (15–30 neurons versus 50–200 neurons).
The Carney Institute reported training rhesus monkeys to use a BCI to track visual targets on a computer screen (closed-loop BCI) with or without a joystick. The group created a BCI for three-dimensional tracking in virtual reality and reproduced BCI control in a robotic arm. The same group demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's brain signals.
Andersen's group used recordings of premovement activity from the posterior parietal cortex, including signals created when experimental animals anticipated receiving a reward.
Other research
In addition to predicting kinematic and kinetic parameters of limb movements, BCIs that predict the electromyographic or electrical activity of primate muscles are under development. Such BCIs could restore mobility in paralyzed limbs by electrically stimulating muscles.
Nicolelis and colleagues demonstrated that large neural ensembles can predict arm position. This work allowed BCIs to read arm movement intentions and translate them into actuator movements. Carmena and colleagues programmed a BCI that allowed a monkey to control reaching and grasping movements by a robotic arm. Lebedev and colleagues argued that brain networks reorganize to create a new representation of the robotic appendage in addition to the representation of the animal's own limbs.
In 2019, a study reported a BCI that had the potential to help patients with speech impairment caused by neurological disorders. Their BCI used high-density electrocorticography to tap neural activity from a patient's brain and used deep learning to synthesize speech. In 2021, those researchers reported the potential of a BCI to decode words and sentences in an anarthric patient who had been unable to speak for over 15 years.
The biggest impediment to BCI technology is the lack of a sensor modality that provides safe, accurate and robust access to brain signals. The use of a better sensor expands the range of communication functions that can be provided using a BCI.
Development and implementation of a BCI system is complex and time-consuming. In response to this problem, Gerwin Schalk has been developing BCI2000, a general-purpose system for BCI research, since 2000.
A new 'wireless' approach uses light-gated ion channels such as channelrhodopsin to control the activity of genetically defined subsets of neurons in vivo. In the context of a simple learning task, illumination of transfected cells in the somatosensory cortex influenced decision-making in mice.
BCIs led to a deeper understanding of neural networks and the central nervous system. Research has reported that despite neuroscientists' inclination to believe that neurons have the most effect when working together, single neurons can be conditioned through the use of BCIs to fire in a pattern that allows primates to control motor outputs. BCIs led to development of the single neuron insufficiency principle that states that even with a well-tuned firing rate, single neurons can only carry limited information and therefore the highest level of accuracy is achieved by recording ensemble firings. Other principles discovered with BCIs include the neuronal multitasking principle, the neuronal mass principle, the neural degeneracy principle, and the plasticity principle.
BCIs are proposed to be applied by users without disabilities. Passive BCIs allow for assessing and interpreting changes in the user state during Human-Computer Interaction (HCI). In a secondary, implicit control loop, the system adapts to its user, improving its usability.
BCI systems can potentially be used to encode signals from the periphery. These sensory BCI devices enable real-time, behaviorally-relevant decisions based upon closed-loop neural stimulation.
The BCI Award
The BCI Research Award is awarded annually in recognition of innovative research. Each year, a renowned research laboratory is asked to judge projects. The jury consists of BCI experts recruited by that laboratory. The jury selects twelve nominees, then chooses a first, second, and third-place winner, who receive awards of $3,000, $2,000, and $1,000, respectively.
Human research
Invasive BCIs
Invasive BCIs require surgery to implant electrodes in or on the brain to access brain signals. The main advantage is increased accuracy. Downsides include side effects from the surgery, including scar tissue that can obstruct brain signals, and the possibility that the body will not accept the implanted electrodes.
Vision
Invasive BCI research has targeted repairing damaged sight and providing new functionality for people with paralysis. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices produce the highest quality signals of BCI devices but are prone to scar-tissue build-up, causing the signal to weaken or even disappear as the body reacts to the foreign object.
In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to produce a working brain interface to restore sight was private researcher William Dobelle. Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame rate. It also required him to be hooked up to a mainframe computer, but shrinking electronics and faster computers eventually made his artificial eye more portable and enabled him to perform simple tasks unassisted.
In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second generation implant, one of the earliest commercial uses of BCIs. The second generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call "the starry-night effect". Immediately after his implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly around the parking area of the research institute. Dobelle died in 2004 before his processes and developments were documented, leaving no one to continue his work. Subsequently, Naumann and the other patients in the program began having problems with their vision, and eventually lost their "sight" again.
Movement
BCIs focusing on motor neuroprosthetics aim to restore movement in individuals with paralysis or provide devices to assist them, such as interfaces with computers or robot arms.
Kennedy and Bakay were first to install a human brain implant that produced signals of high enough quality to simulate movement. Their patient, Johnny Ray (1944–2002), developed 'locked-in syndrome' after a brain-stem stroke in 1997. Ray's implant was installed in 1998 and he lived long enough to start working with the implant, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.
Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005, as part of the first nine-month human trial of Cyberkinetics's BrainGate chip implant. Implanted in Nagle's right precentral gyrus (the area of the motor cortex for arm movement), the 96-electrode array allowed Nagle to control a robotic arm by thinking about moving his hand, as well as a computer cursor, lights, and a TV. One year later, Jonathan Wolpaw received the Altran Foundation for Innovation prize for developing a brain-computer interface with electrodes located on the surface of the skull, instead of directly in the brain.
Research teams led by the BrainGate group and another at University of Pittsburgh Medical Center, both in collaborations with the United States Department of Veterans Affairs (VA), demonstrated control of prosthetic limbs with many degrees of freedom using direct connections to arrays of neurons in the motor cortex of tetraplegia patients.
Communication
In May 2021, a Stanford University team reported a successful proof-of-concept test that enabled a quadriplegic participant to produce English sentences at about 86 characters per minute and 18 words per minute. The participant imagined moving his hand to write letters, and the system performed handwriting recognition on electrical signals detected in the motor cortex, using hidden Markov models and recurrent neural networks.
A 2021 study reported that a paralyzed patient was able to communicate 15 words per minute using a brain implant that analyzed vocal tract motor neurons.
In a review article, authors wondered whether human information transfer rates can surpass that of language with BCIs. Language research has reported that information transfer rates are relatively constant across many languages. This may reflect the brain's information processing limit. Alternatively, this limit may be intrinsic to language itself, as a modality for information transfer.
In 2023, two studies used BCIs with recurrent neural networks to decode speech at record rates of 62 words per minute and 78 words per minute.
Technical challenges
A number of technical challenges exist in recording brain activity with invasive BCIs. Advances in CMOS technology are enabling integrated, invasive BCI designs with smaller size, lower power requirements, and higher signal acquisition capabilities. Invasive BCIs involve electrodes that penetrate brain tissue in an attempt to record action potential signals (also known as spikes) from individual, or small groups of, neurons near the electrode. The interface between a recording electrode and the electrolytic solution surrounding neurons has been modeled using the Hodgkin-Huxley model.
Electronic limitations to invasive BCIs have been an active area of research in recent decades. While intracellular recordings of neurons reveal action potential voltages on the scale of hundreds of millivolts, chronic invasive BCIs rely on recording extracellular voltages, which are typically three orders of magnitude smaller, on the scale of hundreds of microvolts. Further adding to the challenge of detecting signals on the scale of microvolts is the fact that the electrode-tissue interface has a high capacitance at small voltages. Due to the small size of these signals, BCI systems that incorporate functionality onto an integrated circuit require each electrode to have its own amplifier and ADC, which convert analog extracellular voltages into digital signals. Because a typical neuron action potential lasts about one millisecond, BCIs measuring spikes must sample at several kilohertz, with the spike band typically spanning roughly 300 Hz to 5 kHz. Yet another concern is that invasive BCIs must be low-power, so as to dissipate less heat into surrounding tissue; at the most basic level, more power is traditionally needed to optimize signal-to-noise ratio. Optimal battery design is an active area of research in BCIs.

Challenges in materials science are central to the design of invasive BCIs. Variations in signal quality over time have been commonly observed with implantable microelectrodes. Optimal material and mechanical characteristics for long-term signal stability in invasive BCIs have been an active area of research. It has been proposed that the formation of glial scarring, secondary to damage at the electrode-tissue interface, is likely responsible for electrode failure and reduced recording performance. Research has suggested that blood-brain barrier leakage, either at the time of insertion or over time, may be responsible for the inflammatory and glial reaction to chronic microelectrodes implanted in the brain. As a result, flexible and tissue-like designs have been researched and developed to minimize the foreign-body reaction by matching the Young's modulus of the electrode more closely to that of brain tissue.
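To make the microvolt-scale signal processing described above concrete, the following sketch applies a standard threshold rule for spike detection to a simulated extracellular trace; it is a generic textbook-style approach, not any particular device's method, and the sampling rate, threshold multiplier, refractory period, and synthetic data are all assumptions:

```python
# Sketch: threshold-based spike detection on an extracellular voltage trace.
import numpy as np

def detect_spikes(signal, fs=30000, thresh_mult=4.5, refractory_ms=1.0):
    """signal: 1-D extracellular trace (microvolts); returns spike sample indices."""
    noise_sigma = np.median(np.abs(signal)) / 0.6745   # robust noise estimate
    threshold = -thresh_mult * noise_sigma             # spikes are negative-going
    crossings = np.flatnonzero((signal[1:] < threshold) &
                               (signal[:-1] >= threshold)) + 1
    refractory = int(fs * refractory_ms / 1000)        # min spacing between spikes
    spikes, last = [], -refractory
    for idx in crossings:
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes)

# Synthetic 1 s trace at 30 kHz: ~10 uV noise plus three injected deflections
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 10.0, 30000)
trace[[5000, 12000, 20000]] -= 120.0   # 120 uV negative "spikes"
print(detect_spikes(trace))            # expected: indices near 5000, 12000, 20000
```

The median-based noise estimate is preferred over a plain standard deviation because it is less biased by the spikes themselves.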
Partially invasive BCIs
Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce higher resolution signals than non-invasive BCIs where the bone tissue of the cranium deflects and deforms signals and have a lower risk of forming scar-tissue in the brain than fully invasive BCIs. Preclinical demonstration of intracortical BCIs from the stroke perilesional cortex has been conducted.
Endovascular
A systematic review published in 2020 detailed multiple clinical and non-clinical studies investigating the feasibility of endovascular BCIs.
In 2010, researchers affiliated with University of Melbourne began developing a BCI that could be inserted via the vascular system. Australian neurologist Thomas Oxley conceived the idea for this BCI, called Stentrode, earning funding from DARPA. Preclinical studies evaluated the technology in sheep.
Stentrode is a monolithic stent electrode array designed to be delivered via an intravenous catheter under image-guidance to the superior sagittal sinus, in the region which lies adjacent to the motor cortex. This proximity enables Stentrode to measure neural activity. The procedure is most similar to how venous sinus stents are placed for the treatment of idiopathic intracranial hypertension. Stentrode communicates neural activity to a battery-less telemetry unit implanted in the chest, which communicates wirelessly with an external telemetry unit capable of power and data transfer. While an endovascular BCI benefits from avoiding a craniotomy for insertion, risks such as clotting and venous thrombosis exist.
Human trials with Stentrode were underway as of 2021. In November 2020, two participants with amyotrophic lateral sclerosis were able to wirelessly control an operating system to text, email, shop, and bank through direct thought via Stentrode, marking the first time a brain-computer interface had been implanted via the patient's blood vessels, eliminating the need for brain surgery. In January 2023, researchers reported no serious adverse events during the first year for all four patients, who could use it to operate computers.
Electrocorticography
Electrocorticography (ECoG) measures brain electrical activity from beneath the skull in a way similar to non-invasive electroencephalography, using electrodes embedded in a thin plastic pad placed above the cortex, beneath the dura mater. ECoG technologies were first trialled in humans in 2004 by Eric Leuthardt and Daniel Moran from Washington University in St. Louis. In a later trial, the researchers enabled a teenage boy to play Space Invaders. This research indicates that control is rapid, requires minimal training, and balances signal fidelity against level of invasiveness.
Signals can be either subdural or epidural, but are not taken from within the brain parenchyma. Research participants are typically epilepsy patients who already require invasive monitoring for localization and resection of an epileptogenic focus.
ECoG offers higher spatial resolution, better signal-to-noise ratio, wider frequency range, and lower training requirements than scalp-recorded EEG, and at the same time has lower technical difficulty, lower clinical risk, and may have superior long-term stability compared to intracortical single-neuron recording. This feature profile, together with evidence of a high level of control with minimal training requirements, shows potential for real-world application for people with motor disabilities.
Edward Chang and Joseph Makin from UCSF reported that ECoG signals could be used to decode speech from epilepsy patients implanted with high-density ECoG arrays over the peri-Sylvian cortices. They reported word error rates of 3% (a marked improvement from prior efforts) utilizing an encoder-decoder neural network, which translated ECoG data into one of fifty sentences composed of 250 unique words.
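A minimal sketch of the general encoder-decoder idea follows; it is an assumed toy architecture for illustration, not the authors' published model, and the electrode count, hidden size, vocabulary size, and random stand-in data are all placeholders:

```python
# Sketch: a recurrent encoder summarizes an ECoG feature sequence; a recurrent
# decoder conditioned on that summary emits word tokens.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, n_electrodes=64, hidden=128, vocab=250):
        super().__init__()
        self.encoder = nn.GRU(n_electrodes, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, ecog, words):
        # ecog: (batch, time, electrodes); words: (batch, seq) token ids
        _, h = self.encoder(ecog)             # summarize the neural activity
        dec_in = self.embed(words)            # teacher forcing on shifted targets
        dec_out, _ = self.decoder(dec_in, h)  # decoder starts from encoder state
        return self.out(dec_out)              # (batch, seq, vocab) logits

# Toy training step on random stand-in data
model = Seq2Seq()
ecog = torch.randn(8, 100, 64)           # 8 trials, 100 time steps, 64 electrodes
words = torch.randint(0, 250, (8, 10))   # 10-token target sentences
logits = model(ecog, words[:, :-1])      # predict each next token
loss = nn.functional.cross_entropy(logits.reshape(-1, 250),
                                   words[:, 1:].reshape(-1))
loss.backward()
print(loss.item())
```

Real systems train on features derived from the recorded ECoG aligned to the spoken sentences; the toy step above only shows the data flow and loss.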
Non-invasive BCIs
Human experiments have used non-invasive neuroimaging interfaces. The majority of published BCI research involves noninvasive EEG-based BCIs. EEG-based technologies and interfaces have been used for the broadest variety of applications. Although EEG-based interfaces are easy to wear and do not require surgery, they have relatively poor spatial resolution and cannot effectively use higher-frequency signals because the skull interferes, dispersing and blurring the electromagnetic waves created by the neurons. Some EEG-based interfaces also require time and effort prior to each usage session, while others require no prior-usage training. The choice of a specific BCI for a patient depends on numerous factors.
Functional near-infrared spectroscopy
In 2014, a BCI using functional near-infrared spectroscopy for "locked-in" patients with amyotrophic lateral sclerosis (ALS) was able to restore basic ability to communicate.
Electroencephalography (EEG)-based brain-computer interfaces
After Vidal stated the BCI challenge, the initial reports on non-invasive approaches included control of a cursor in 2D using VEPs, control of a buzzer using CNV, control of a physical object (a robot) using a brain rhythm (alpha), and control of text written on a screen using P300.
In the early days of BCI research, another substantial barrier to using EEG was that extensive training was required. For example, in experiments beginning in the mid-1990s, Niels Birbaumer at the University of Tübingen in Germany trained paralysed people to self-regulate the slow cortical potentials in their EEG to such an extent that these signals could be used as a binary signal to control a computer cursor. (Birbaumer had earlier trained epileptics to prevent impending fits by controlling this low voltage wave.) The experiment trained ten patients to move a computer cursor. The process was slow, requiring more than an hour for patients to write 100 characters with the cursor, while training often took months. The slow cortical potential approach has fallen away in favor of approaches that require little or no training, are faster and more accurate, and work for a greater proportion of users.
Another research parameter is the type of oscillatory activity that is measured. Gert Pfurtscheller founded the BCI Lab in 1991 and conducted the first online BCI based on oscillatory features and classifiers. Together with Birbaumer and Jonathan Wolpaw at New York State University, they focused on developing technology that would allow users to choose the brain signals they found easiest to use to operate a BCI, including mu and beta rhythms.
A further parameter is the method of feedback used as shown in studies of P300 signals. Patterns of P300 waves are generated involuntarily (stimulus-feedback) when people see something they recognize and may allow BCIs to decode categories of thoughts without training.
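The stimulus-locked averaging that underlies P300-based BCIs can be sketched briefly; this is generic P300-speller logic, with the sampling rate, window boundaries, and synthetic data all assumed:

```python
# Sketch: average epochs time-locked to each stimulus; background EEG cancels,
# and the stimulus whose average shows the largest deflection ~300 ms after
# onset is taken as the recognized one.
import numpy as np

fs = 256                                           # assumed sampling rate (Hz)
epoch_len = int(0.6 * fs)                          # 600 ms epoch per stimulus
p300_win = slice(int(0.25 * fs), int(0.45 * fs))   # 250-450 ms window

def pick_stimulus(eeg, onsets_by_stimulus):
    """eeg: 1-D trace; onsets_by_stimulus: dict of stimulus -> onset indices."""
    scores = {}
    for stim, onsets in onsets_by_stimulus.items():
        epochs = np.stack([eeg[o:o + epoch_len] for o in onsets])
        erp = epochs.mean(axis=0)             # averaging suppresses background EEG
        scores[stim] = erp[p300_win].mean()   # mean amplitude in the P300 window
    return max(scores, key=scores.get)        # stimulus with the largest response

# Synthetic demo: only stimulus "B" gets a P300-like positive deflection
rng = np.random.default_rng(7)
eeg = rng.normal(0.0, 5.0, fs * 60)
onsets = {s: rng.integers(0, fs * 59, 20) for s in "ABC"}
for o in onsets["B"]:
    eeg[o + p300_win.start:o + p300_win.stop] += 4.0
print(pick_stimulus(eeg, onsets))             # expected: B
```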
A 2005 study reported EEG emulation of digital control circuits, using a CNV flip-flop. A 2009 study reported noninvasive EEG control of a robotic arm using a CNV flip-flop. A 2011 study reported control of two robotic arms solving Tower of Hanoi task with three disks using a CNV flip-flop. A 2015 study described EEG-emulation of a Schmitt trigger, flip-flop, demultiplexer, and modem.
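For intuition, the Schmitt-trigger emulation amounts to applying hysteresis to a noisy control signal so that it behaves like a clean binary switch. A minimal sketch follows; the thresholds and the normalized input stream are assumptions for illustration, not values from the cited studies:

```python
# Sketch of the "EEG as digital logic" idea: a Schmitt trigger with
# hysteresis applied to a decoded brain-signal amplitude, so noisy
# fluctuations near a single threshold do not cause rapid toggling.
# Thresholds and the normalized input are illustrative assumptions.

def schmitt_trigger(signal, low=0.3, high=0.7):
    """Return a binary control stream from a normalized amplitude stream."""
    state = 0
    out = []
    for x in signal:
        if state == 0 and x > high:
            state = 1          # switch on only above the upper threshold
        elif state == 1 and x < low:
            state = 0          # switch off only below the lower threshold
        out.append(state)
    return out
```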
Advances by Bin He and his team at the University of Minnesota suggest the potential of EEG-based brain-computer interfaces to accomplish tasks close to those of invasive brain-computer interfaces. Using advanced functional neuroimaging including BOLD functional MRI and EEG source imaging, they identified the co-variation and co-localization of electrophysiological and hemodynamic signals. Refined by a neuroimaging approach and a training protocol, they fashioned a non-invasive EEG-based brain-computer interface to control the flight of a virtual helicopter in 3-dimensional space, based upon motor imagination. In June 2013 they announced a technique to guide a remote-control helicopter through an obstacle course. They also solved the EEG inverse problem and then used the resulting virtual EEG for BCI tasks. Well-controlled studies suggested the merits of such a source analysis-based BCI.
A 2014 study reported that severely motor-impaired patients could communicate faster and more reliably with non-invasive EEG BCI than with muscle-based communication channels.
A 2019 study reported that the application of evolutionary algorithms could improve EEG mental state classification with a non-invasive Muse device, enabling classification of data acquired by a consumer-grade sensing device.
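The cited study couples an evolutionary search with EEG-derived features. As a generic illustration of evolutionary feature selection, not the study's actual algorithm or features, a minimal loop might look like this:

```python
# Toy evolutionary feature selection for EEG mental-state classification.
# Illustrative sketch only; not the algorithm or features of the 2019 study.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy using only the features selected by mask."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def evolve(X, y, pop_size=20, generations=30, mutation_rate=0.05):
    """Evolve boolean feature masks by truncation selection and bit-flips."""
    n_features = X.shape[1]
    population = rng.random((pop_size, n_features)) < 0.5
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in population])
        survivors = population[np.argsort(scores)[-pop_size // 2:]]
        children = survivors.copy()
        children ^= rng.random(children.shape) < mutation_rate  # mutate
        population = np.vstack([survivors, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[int(np.argmax(scores))]
```

Here X would hold per-epoch EEG features (for example band powers) and y the labeled mental states; each generation keeps the best-scoring feature masks and mutates copies of them.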
In a 2021 systematic review of randomized controlled trials using BCI for post-stroke upper-limb rehabilitation, EEG-based BCI was reported to have efficacy in improving upper-limb motor function compared to control therapies. More specifically, BCI studies that utilized band power features, motor imagery, and functional electrical stimulation were reported to be more effective than alternatives. Another 2021 systematic review focused on post-stroke robot-assisted EEG-based BCI for hand rehabilitation. Improvement in motor assessment scores was observed in three of eleven studies.
Dry active electrode arrays
In the early 1990s, Babak Taheri at the University of California, Davis, demonstrated the first single- and multichannel dry active electrode arrays. The arrayed electrode was demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sensor sites with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are:
no electrolyte used,
no skin preparation,
significantly reduced sensor size,
compatibility with EEG monitoring systems.
The active electrode array is an integrated system containing an array of capacitive sensors with local integrated circuitry packaged with batteries to power the circuitry. This level of integration was required to achieve the result.
The electrode was tested on a test bench and on human subjects in four modalities, namely:
spontaneous EEG,
sensory event-related potentials,
brain stem potentials,
cognitive event-related potentials.
Performance compared favorably with that of standard wet electrodes in terms of skin preparation, no gel requirements (dry), and higher signal-to-noise ratio.
In 1999, Hunter Peckham and others at Case Western Reserve University used a 64-electrode EEG skullcap to return limited hand movements to a quadriplegic. As he concentrated on simple but opposite concepts like up and down, a basic pattern was identified in his beta-rhythm EEG output and used to control a switch: above-average activity was interpreted as on, below-average as off. The signals were also used to drive nerve controllers embedded in his hands, restoring some movement.
SSVEP mobile EEG BCIs
In 2009, the NCTU Brain-Computer-Interface headband was announced. Those researchers also engineered silicon-based microelectromechanical system (MEMS) dry electrodes designed for application to non-hairy body sites. These electrodes were secured to the headband's DAQ board with snap-on electrode holders. The signal-processing module measured alpha activity and transferred it over Bluetooth to a phone that assessed the wearer's alertness and cognitive capacity. When the subject became drowsy, the phone sent arousing feedback to the operator to rouse them.
In 2011, researchers reported a cellular based BCI that could cause a phone to ring. The wearable system was composed of a four channel bio-signal acquisition/amplification module, a communication module, and a Bluetooth phone. The electrodes were placed to pick up steady state visual evoked potentials (SSVEPs). SSVEPs are electrical responses to flickering visual stimuli with repetition rates over 6 Hz that are best found in the parietal and occipital scalp regions of the visual cortex. It was reported that all study participants were able to initiate the phone call with minimal practice in natural environments.
The scientists reported that a single-channel fast Fourier transform (FFT) and a multiple-channel canonical correlation analysis (CCA) algorithm can support mobile BCIs. The CCA algorithm has been applied in experiments investigating BCIs with claimed high accuracy and speed. Cellular BCI technology can reportedly be translated for other applications, such as picking up sensorimotor mu/beta rhythms to function as a motor imagery-based BCI.
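As an illustration of the CCA approach, the following sketch scores a multichannel EEG segment against sine/cosine reference signals at each candidate flicker frequency and picks the best-correlated one; the sampling rate, channel layout, frequency set, and harmonic count are assumptions, not values from the cited studies:

```python
# Sketch of CCA-based SSVEP detection: correlate a multichannel EEG
# segment with sine/cosine references at each candidate flicker frequency.
# Sampling rate, frequencies, and harmonics are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                                 # sampling rate, Hz (assumed)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]     # candidate flicker rates (assumed)
N_HARMONICS = 2

def reference_signals(freq, n_samples, fs, n_harmonics):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)         # (n_samples, 2 * n_harmonics)

def detect_ssvep(eeg):
    """eeg: (n_samples, n_channels) occipital segment. Returns the
    candidate frequency whose canonical correlation is highest."""
    scores = []
    for f in STIM_FREQS:
        refs = reference_signals(f, eeg.shape[0], FS, N_HARMONICS)
        cca = CCA(n_components=1)
        cca.fit(eeg, refs)
        u, v = cca.transform(eeg, refs)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return STIM_FREQS[int(np.argmax(scores))]
```

In an SSVEP speller, the winning frequency maps back to the symbol flickering at that rate; the single-channel FFT variant simply compares spectral power at the candidate frequencies instead.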
In 2013, comparative tests of Android cell phone-, tablet-, and computer-based BCIs analyzed the power spectral density of the resulting EEG SSVEPs. The stated goals of this study were to "increase the practicability, portability, and ubiquity of an SSVEP-based BCI, for daily use". It was reported that the stimulation frequency was accurate on all media, although the phone's signal was not stable. The amplitudes of the SSVEPs for the laptop and tablet were reported to be larger than those of the cell phone. These two qualitative characterizations were suggested as indicators of the feasibility of using a mobile stimulus BCI.
One of the difficulties with EEG readings is susceptibility to motion artifacts. In most research projects, the participants were asked to sit still in a laboratory setting, reducing head and eye movements as much as possible. However, since these initiatives were intended to create a mobile device for daily use, the technology had to be tested in motion. In 2013, researchers tested mobile EEG-based BCI technology, measuring SSVEPs from participants as they walked on a treadmill. Reported results were that as speed increased, SSVEP detectability using CCA decreased. Independent component analysis (ICA) had been shown to be efficient in separating EEG signals from noise. The researchers stated that CCA data with and without ICA processing were similar. They concluded that CCA demonstrated robustness to motion artifacts. EEG-based BCI applications suffer from low spatial resolution. Possible solutions include EEG source connectivity based on graph theory, EEG pattern recognition based on topomaps, and EEG-fMRI fusion.
Prosthesis and environment control
Non-invasive BCIs have been applied to prosthetic upper and lower extremity devices in people with paralysis. For example, Gert Pfurtscheller of Graz University of Technology and colleagues demonstrated a BCI-controlled functional electrical stimulation system to restore upper extremity movements in a person with tetraplegia due to spinal cord injury. Between 2012 and 2013, researchers at the University of California, Irvine demonstrated for the first time that BCI technology can restore brain-controlled walking after spinal cord injury. In their study, a person with paraplegia operated a BCI-robotic gait orthosis to regain basic ambulation. In 2009, independent researcher Alex Blainey used the Emotiv EPOC to control a 5-axis robot arm. He made several demonstrations of mind-controlled wheelchairs and home automation.
Magnetoencephalography and fMRI
Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) have both been used as non-invasive BCIs. In a widely reported experiment, fMRI allowed two users to play Pong in real-time by altering their haemodynamic response or brain blood flow through biofeedback.
fMRI measurements of haemodynamic responses in real time have also been used to control robot arms with a seven-second delay between thought and movement.
In 2008 research developed in the Advanced Telecommunications Research (ATR) Computational Neuroscience Laboratories in Kyoto, Japan, allowed researchers to reconstruct images from brain signals at a resolution of 10x10 pixels.
A 2011 study reported second-by-second reconstruction of videos watched by the study's subjects, from fMRI data. This was achieved by creating a statistical model relating videos to brain activity. This model was then used to look up 100 one-second video segments, in a database of 18 million seconds of random YouTube videos, matching visual patterns to brain activity recorded when subjects watched a video. These 100 one-second video extracts were then combined into a mash-up image that resembled the video.
BCI control strategies in neurogaming
Motor imagery
Motor imagery involves imagining the movement of body parts, activating the sensorimotor cortex, which modulates sensorimotor oscillations in the EEG. These modulations can be detected by the BCI and used to infer user intent. Motor imagery typically requires training to acquire acceptable control, and training sessions typically consume hours over several days. Regardless of the duration of training, some users never master the control scheme, which results in a very slow pace of gameplay. Machine learning methods have been used to compute subject-specific models for detecting motor imagery performance. As of 2022, the top-performing algorithm for motor imagery on dataset 2 of BCI Competition IV was the Filter Bank Common Spatial Pattern (FBCSP), developed by Ang et al. from A*STAR, Singapore.
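FBCSP combines a filter bank with the Common Spatial Pattern (CSP) algorithm and feature selection. A minimal single-band CSP sketch, a simplification rather than Ang et al.'s implementation, could look like this:

```python
# Minimal single-band CSP sketch for two-class motor imagery. FBCSP
# (Ang et al.) repeats this across a filter bank and adds feature
# selection; this simplification is illustrative, not their code.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mean_covariance(trials):
    """trials: (n_trials, n_channels, n_samples); trace-normalized covariances."""
    covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
    return np.mean(covs, axis=0)

def fit_csp(trials_a, trials_b, n_pairs=3):
    """Spatial filters that maximize variance for one class vs. the other."""
    Ca, Cb = mean_covariance(trials_a), mean_covariance(trials_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                   # (2 * n_pairs, n_channels)

def csp_features(W, trials):
    """Standard CSP feature: normalized log-variance of projected trials."""
    feats = []
    for x in trials:
        var = (W @ x).var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)

# Typical use on band-pass-filtered epochs:
#   W = fit_csp(left_trials, right_trials)
#   clf = LinearDiscriminantAnalysis().fit(csp_features(W, X_train), y_train)
```

The full FBCSP pipeline applies this per band-pass filter and selects the most discriminative log-variance features before classification.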
Bio/neurofeedback for passive BCI designs
Biofeedback can be used to monitor a subject's mental relaxation. In some cases, biofeedback does not match EEG, while parameters such as electromyography (EMG), galvanic skin resistance (GSR), and heart rate variability (HRV) can do so. Many biofeedback systems treat disorders such as attention deficit hyperactivity disorder (ADHD), sleep problems in children, teeth grinding, and chronic pain. EEG biofeedback systems typically monitor four brainwave bands (theta: 4–7 Hz, alpha: 8–12 Hz, SMR: 12–15 Hz, beta: 15–18 Hz) and challenge the subject to control them. Passive BCI uses BCI to enrich human–machine interaction with information on the user's mental state, for example, simulations that detect when users intend to push the brakes during emergency vehicle braking. Game developers using passive BCIs understand that through repetition of game levels the user's cognitive state adapts. During the first play of a given level, the player reacts differently than during subsequent plays: for example, the user is less surprised by an event that they expect.
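For the four bands listed above, a neurofeedback loop needs per-band power estimates. A minimal sketch using Welch's method follows; the sampling rate and window length are assumptions:

```python
# Band-power estimates for the four neurofeedback bands named above,
# computed with Welch's method; the sampling rate and window length are
# illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                                   # sampling rate, Hz (assumed)
BANDS = {"theta": (4, 7), "alpha": (8, 12),
         "smr": (12, 15), "beta": (15, 18)}

def band_powers(eeg_channel):
    """Integrate the power spectral density over each band."""
    freqs, psd = welch(eeg_channel, fs=FS, nperseg=2 * FS)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])
    return powers
```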
Visual evoked potential (VEP)
A VEP is an electrical potential recorded after a subject is presented with a visual stimulus. Types of VEP include SSVEPs and the P300 potential.
Steady-state visually evoked potentials (SSVEPs) use potentials generated by exciting the retina with visual stimuli modulated at certain frequencies. SSVEP stimuli are often formed from alternating checkerboard patterns and at times use flashing images. The frequency of the phase reversal of the stimulus can be distinguished by EEG; this makes detection of SSVEP stimuli relatively easy. SSVEP is used within many BCI systems for several reasons. The signal elicited is measurable in as large a population as the transient VEP, and blink movement and electrocardiographic artefacts do not affect the frequencies monitored. The SSVEP signal is also robust: the topographic organization of the primary visual cortex is such that a broad area obtains afferents from the central, or foveal, region of the visual field. SSVEP does come with problems. As SSVEPs use flashing stimuli to infer user intent, the user must gaze at one of the flashing or iterating symbols in order to interact with the system. It is, therefore, likely that the symbols become irritating and uncomfortable during longer play sessions.
Another type of VEP is the P300 potential. This potential is a positive peak in the EEG that occurs roughly 300 ms after the appearance of a target stimulus (a stimulus for which the user is waiting or seeking) or oddball stimuli. P300 amplitude decreases as the target stimuli and the ignored stimuli grow more similar. P300 is thought to be related to a higher level attention process or an orienting response. Using P300 requires fewer training sessions. The first application to use it was the P300 matrix. Within this system, a subject chooses a letter from a 6 by 6 grid of letters and numbers. The rows and columns of the grid flashed sequentially and every time the selected "choice letter" was illuminated the user's P300 was (potentially) elicited. However, the communication process, at approximately 17 characters per minute, was slow. P300 offers a discrete selection rather than continuous control. The advantage of P300 within games is that the player does not have to learn how to use a new control system, requiring only short training instances to learn gameplay mechanics and the basic BCI paradigm.
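A rough sketch of the row/column decoding logic of such a matrix speller is shown below; real systems average many flashes per row/column and usually train a classifier (e.g. stepwise LDA) on the epochs rather than thresholding raw amplitude, and the timing window and sampling rate here are assumptions:

```python
# Sketch of P300 matrix-speller decoding: average the epochs recorded
# after each row/column flash and pick the row and column whose average
# shows the largest positive deflection ~300 ms post-stimulus. Window
# and sampling rate are illustrative assumptions.
import numpy as np

FS = 250                                       # sampling rate, Hz (assumed)
WIN = slice(int(0.25 * FS), int(0.45 * FS))    # ~250-450 ms after the flash
GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                 list("STUVWX"), list("YZ1234"), list("56789_")])

def p300_score(epochs):
    """epochs: (n_flashes, n_samples) epochs for one row or column;
    mean amplitude in the window approximates P300 strength."""
    return epochs.mean(axis=0)[WIN].mean()

def decode_letter(row_epochs, col_epochs):
    """row_epochs, col_epochs: lists of six (n_flashes, n_samples) arrays."""
    r = int(np.argmax([p300_score(e) for e in row_epochs]))
    c = int(np.argmax([p300_score(e) for e in col_epochs]))
    return GRID[r, c]
```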
Non-brain-based human–computer interface (physiological computing)
Human-computer interaction can exploit other recording modalities, such as electrooculography and eye-tracking. These modalities do not record brain activity and therefore do not qualify as BCIs.
Electrooculography (EOG)
In 1989, a study reported control of a mobile robot by eye movement using electrooculography signals. A mobile robot was driven to a goal point using five EOG commands, interpreted as forward, backward, left, right, and stop.
Pupil-size oscillation
A 2016 article described a new non-EEG-based HCI that requires no visual fixation or ability to move the eyes. The interface is based on covert interest: directing attention to a chosen letter on a virtual keyboard without the need to look directly at the letter. Each letter has its own (background) circle which micro-oscillates in brightness differently from the others. Letter selection is based on the best fit between unintentional pupil-size oscillation and the background circle's brightness oscillation pattern. Accuracy is additionally improved by the user's mental rehearsal of the words 'bright' and 'dark' in synchrony with the brightness transitions of the letter's circle.
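The selection rule amounts to a template-matching problem: compare the measured pupil-size signal against each letter's known brightness waveform. A minimal sketch, with signal shapes assumed for illustration:

```python
# Sketch of the pupil-based selection rule: the measured pupil-size
# signal is compared against each letter's known brightness-oscillation
# waveform, and the best-fitting letter is selected. Signal shapes are
# assumed for illustration.
import numpy as np

def select_letter(pupil_signal, brightness_templates, letters):
    """pupil_signal: (n_samples,) detrended pupil-diameter trace.
    brightness_templates: (n_letters, n_samples) per-letter waveforms."""
    corrs = [np.corrcoef(pupil_signal, tpl)[0, 1]
             for tpl in brightness_templates]
    return letters[int(np.argmax(corrs))]
```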
Brain-to-brain communication
In the 1960s, a researcher, after training, used EEG to create Morse code using alpha waves. On 27 February 2013, Miguel Nicolelis's group at Duke University and IINN-ELS connected the brains of two rats, allowing them to share information, in the first-ever direct brain-to-brain interface.
Gerwin Schalk reported that ECoG signals can discriminate vowels and consonants embedded in spoken and imagined words, shedding light on the mechanisms associated with their production and could provide a basis for brain-based communication using imagined speech.
In 2002 Kevin Warwick had an array of 100 electrodes fired into his nervous system in order to link his nervous system to the Internet. Warwick carried out a series of experiments. Electrodes were implanted into his wife's nervous system, allowing them to conduct the first direct electronic communication experiment between the nervous systems of two humans.
Other researchers achieved brain-to-brain communication between participants at a distance using non-invasive technology attached to the participants' scalps. The words were encoded in binary streams by the cognitive motor input of the person sending the information. Pseudo-random bits of the information carried encoded words "hola" ("hi" in Spanish) and "ciao" ("goodbye" in Italian) and were transmitted mind-to-mind.
Cell-culture BCIs
Researchers have built devices to interface with neural cells and entire neural networks in vitro. Experiments on cultured neural tissue have focused on building problem-solving networks, constructing basic computers, and manipulating robotic devices. Research into techniques for stimulating and recording from individual neurons grown on semiconductor chips is known as neuroelectronics or neurochip technology.
Development of the first neurochip was claimed by a Caltech team led by Jerome Pine and Michael Maher in 1997. The Caltech chip had room for 16 neurons.
In 2003 a team led by Theodore Berger, at the University of Southern California, worked on a neurochip designed to function as an artificial or prosthetic hippocampus. The neurochip was designed for rat brains. The hippocampus was chosen because it is thought to be the most structured and most studied part of the brain. Its function is to encode experiences for storage as long-term memories elsewhere in the brain.
In 2004 Thomas DeMarse at the University of Florida used a culture of 25,000 neurons taken from a rat's brain to fly an F-22 fighter jet simulator. After collection, the cortical neurons were cultured in a petri dish and reconnected themselves to form a living neural network. The cells were arranged over a grid of 60 electrodes and used to control the pitch and yaw functions of the simulator. The study's focus was on understanding how the human brain performs and learns computational tasks at a cellular level.
Collaborative BCIs
The idea of combining/integrating brain signals from multiple individuals was introduced at Humanity+ @Caltech, in December 2010, by Adrian Stoica, who referred to the concept as multi-brain aggregation. A patent was applied for in 2012. Stoica's first paper on the topic appeared in 2012, after the publication of his patent application.
Ethical considerations
BCIs present ethical questions, including concerns about privacy, autonomy, consent, and the consequences of merging human cognition with external devices. Exploring these ethical considerations highlights the complex interplay between advancing technology and preserving fundamental human rights and values. The concerns can be broadly categorized into user-centric issues and legal and social issues.
Concerns center on the safety and long-term effects on users. These include obtaining informed consent from individuals with communication difficulties, the impact on patients' and families' quality of life, health-related side effects, misuse of therapeutic applications, safety risks, and the non-reversible nature of some BCI-induced changes. Additionally, questions arise about access to maintenance, repair, and spare parts, particularly in the event of a company's bankruptcy.
The legal and social aspects of BCIs complicate mainstream adoption. Concerns include issues of accountability and responsibility, such as claims that BCI influence overrides free will and control over actions, inaccurate translation of cognitive intentions, personality changes resulting from deep-brain stimulation, and the blurring of the line between human and machine. Other concerns involve the use of BCIs in advanced interrogation techniques, unauthorized access ("brain hacking"), social stratification through selective enhancement, privacy issues related to mind-reading, tracking and "tagging" systems, and the potential for mind, movement, and emotion control. Researchers have also theorized that BCIs could exacerbate existing social inequalities.
In their current form, most BCIs are more akin to corrective therapies that raise few of these ethical issues. Bioethics is well equipped to address the challenges posed by BCI technologies, with Clausen suggesting in 2009 that "BCIs pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy." Haselager and colleagues highlighted the importance of managing expectations and value. Standard protocols can ensure ethically sound informed-consent procedures for locked-in patients.
The evolution of BCIs mirrors that of pharmaceutical science, which began as a means to address impairments and now enhances focus and reduces the need for sleep. As BCIs progress from therapies to enhancements, the BCI community is working to create consensus on ethical guidelines for research, development, and dissemination. Ensuring equitable access to BCIs will be crucial in preventing generational inequalities that could hinder the right to human flourishing.
Low-cost systems
Various companies are developing inexpensive BCIs for research and entertainment. Toys such as the NeuroSky and Mattel MindFlex have seen some commercial success.
In 2006, Sony patented a neural interface system allowing radio waves to affect signals in the neural cortex.
In 2007, NeuroSky released the first affordable consumer-based EEG along with the game NeuroBoy. It was the first large-scale EEG device to use dry sensor technology.
In 2008, OCZ Technology developed a device for use in video games relying primarily on electromyography.
In 2008, Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create Judecca, a game.
In 2009, Mattel partnered with NeuroSky to release Mindflex, a game that used an EEG to steer a ball through an obstacle course. It was by far the best-selling consumer-based EEG at the time.
In 2009, Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.
In 2009, Emotiv released the EPOC, a 14-channel EEG device that can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC was the first commercial BCI to use dry sensor technology, which can be dampened with a saline solution for a better connection.
In November 2011, Time magazine selected "necomimi" produced by Neurowear as one of the year's best inventions.
In February 2014, They Shall Walk (a nonprofit organization focused on constructing exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with James W. Shakarji on the development of a wireless BCI.
In 2016, a group of hobbyists developed an open-source BCI board that sends neural signals to the audio jack of a smartphone, dropping the cost of entry-level BCI to £20. Basic diagnostic software is available for Android devices, as well as a text entry app for Unity.
In 2020, NextMind released a dev kit including an EEG headset with dry electrodes for $399. The device can run various visual-BCI demonstration applications, or developers can create their own. The company was acquired by Snap Inc. in 2022.
In 2023, PiEEG released a shield that allows converting a single-board computer Raspberry Pi to a brain-computer interface for $350.
Future directions
A consortium of 12 European partners completed a roadmap to support the European Commission in their funding decisions for the Horizon 2020 framework program. The project was funded by the European Commission. It started in November 2013 and published a roadmap in April 2015. A 2015 publication describes this project, as well as the Brain-Computer Interface Society. It reviewed work within this project that further defined BCIs and applications, explored recent trends, discussed ethical issues, and evaluated directions for new BCIs.
Other recent publications have also explored future BCI directions for new groups of disabled users.
Disorders of consciousness (DOC)
Some people have a disorder of consciousness (DOC). This state is defined to include people in a coma as well as those in a vegetative state (VS) or minimally conscious state (MCS). BCI research seeks to address DOC. A key initial goal is to identify patients who can perform basic cognitive tasks, which would change their diagnosis and allow them to make important decisions (such as whether to seek therapy, where to live, and their views on end-of-life decisions regarding them). Patients incorrectly diagnosed may die as a result of end-of-life decisions made by others. The prospect of using BCI to communicate with such patients is tantalizing.
Many such patients cannot use BCIs based on vision. Hence, tools must rely on auditory and/or vibrotactile stimuli. Patients may wear headphones and/or vibrotactile stimulators placed on responsive body parts. Another challenge is that patients may be able to communicate only at unpredictable intervals. Home devices can allow communications when the patient is ready.
Automated tools can ask questions that patients can easily answer, such as "Is your father named George?" or "Were you born in the USA?" Automated instructions inform patients how to convey yes or no, for example by focusing their attention on stimuli on the right vs. left wrist. This focused attention produces reliable changes in EEG patterns that can help determine whether the patient is able to communicate.
Motor recovery
People may lose some of their ability to move due to many causes, such as stroke or injury. Research in recent years has demonstrated the utility of EEG-based BCI systems in aiding motor recovery and neurorehabilitation in patients who have had a stroke. Several groups have explored systems and methods for motor recovery that include BCIs. In this approach, a BCI measures motor activity while the patient imagines or attempts movements as directed by a therapist. The BCI may provide two benefits: (1) if the BCI indicates that a patient is not imagining a movement correctly (non-compliance), then the BCI could inform the patient and therapist; and (2) rewarding feedback such as functional stimulation or the movement of a virtual avatar also depends on the patient's correct movement imagery.
So far, BCIs for motor recovery have relied on the EEG to measure the patient's motor imagery. However, studies have also used fMRI to study the changes in the brain as persons undergo BCI-based stroke rehabilitation training. Imaging studies combined with EEG-based BCI systems hold promise for investigating neuroplasticity during motor recovery post-stroke. Future systems might include the fMRI and other measures for real-time control, such as functional near-infrared, probably in tandem with EEGs. Non-invasive brain stimulation has also been explored in combination with BCIs for motor recovery. In 2016, scientists at the University of Melbourne published preclinical proof-of-concept data related to a potential brain-computer interface technology platform being developed for patients with paralysis to facilitate control of external devices such as robotic limbs, computers, and exoskeletons by translating brain activity.
Functional brain mapping
In 2014, some 400,000 people underwent brain mapping during neurosurgery. This procedure is often required for people who do not respond to medication. During this procedure, electrodes are placed on the brain to precisely identify the locations of structures and functional areas. Patients may be awake during neurosurgery and asked to perform tasks, such as moving fingers or repeating words. This is necessary so that surgeons can remove the desired tissue while sparing other regions. Removing too much brain tissue can cause permanent damage, while removing too little can mandate additional neurosurgery.
Researchers explored ways to improve neurosurgical mapping. This work focuses largely on high gamma activity, which is difficult to detect non-invasively. Results improved methods for identifying key functional areas.
Flexible devices
Flexible electronics are polymers or other flexible materials (e.g. silk, pentacene, PDMS, Parylene, polyimide) printed with circuitry; the flexibility allows the electronics to bend. The fabrication techniques used to create these devices resemble those used to create integrated circuits and microelectromechanical systems (MEMS).
Flexible neural interfaces may minimize brain tissue trauma related to mechanical mismatch between electrode and tissue.
Neural dust
Neural dust refers to millimeter-sized devices operated as wirelessly powered nerve sensors, proposed in a 2011 paper from the University of California, Berkeley Wireless Research Center. In one model, local field potentials could be distinguished from action potential "spikes", which would offer greatly diversified data compared with conventional techniques.
| Biology and health sciences | Biology basics | Biology |
623697 | https://en.wikipedia.org/wiki/Dytiscus | Dytiscus | Dytiscus ("little diver", based on Greek δυτικός, "able to dive", and the diminutive suffix -ίσκος) is a Holarctic genus of predaceous diving beetles that usually live in wetlands and ponds. There are 26 species in this genus, distributed in Europe, Asia, North Africa, and North and Central America. They are predators that can reduce mosquito larvae populations.
Dytiscus are large water beetles with a robust, rounded shape and they measure long depending on the exact species involved. The largest, D. latissimus, is among the largest species in the family and its size is only matched by certain Megadytes. The tarsi of the males are modified into suckers which are used to grip the female in mating. Females are usually larger than the males and come in two forms, with grooved (sulcate) or smooth elytra. Males only ever have smooth elytra. The adults of most species can fly.
Life history
Adult beetles and their larvae are aquatic but the pupae spend their life in the ground. Females lay eggs inside the tissue of aquatic plants such as reeds. The eggs hatch in about three weeks.
The larvae (known as "water tigers") are elongate with a round, flat head and strong mandibles. They are predatory, and their mandibles have grooves on their inner edges through which they are able to suck the body fluids of their prey. The larvae take air from the surface of the water using hairs at the end of their abdomen. These lead to spiracles into which the air is taken.
Once the larvae grow to a certain size, they move to the soil at the edge of the water, burrow to form a cell, and pupate.
The adults breathe by going to the surface and upending. They collect air under their elytra and are able to breathe this collected air using spiracles hidden under the elytra.
In Dytiscus marginalis and other species the tarsus of the forelegs is modified in males to form a circular sucker. A reduced sucker is also seen in the midleg of the male.
Parasitoids
Eggs of Dytiscus are sometimes parasitized by wasps of the families Eulophidae, Mymaridae and other Chalcidoidea.
Species
Dytiscus contains the following species:
Dytiscus alaskanus J.Balfour-Browne, 1944
Dytiscus avunculus C.Heyden, 1862
Dytiscus caraboides Linnaeus, 1758
Dytiscus carolinus Aubé, 1838
Dytiscus circumcinctus (Ahrens, 1811)
Dytiscus circumflexus Fabricius, 1801
Dytiscus cordieri Aubé, 1838
Dytiscus dauricus Gebler, 1832
Dytiscus delictus (Zaitzev, 1906)
Dytiscus dimidiatus Bergsträsser, 1778
Dytiscus distantus Feng, 1936
Dytiscus fasciventris Say, 1824
Dytiscus habilis Say, 1830
Dytiscus harrisii Kirby, 1837
Dytiscus hatchi Wallis, 1950
Dytiscus hybridus Aubé, 1838
Dytiscus krausei H.J.Kolbe, 1931
Dytiscus lapponicus Gyllenhal, 1808
Dytiscus latahensis Wickham, 1931
Dytiscus latissimus Linnaeus, 1758
Dytiscus latro Sharp, 1882
Dytiscus lavateri Heer, 1847
Dytiscus marginalis Linnaeus, 1758
Dytiscus marginicollis LeConte, 1845
Dytiscus miocenicus Lewis & Gundersen, 1987
Dytiscus mutinensis Branden, 1885
Dytiscus persicus Wehncke, 1876
Dytiscus pisanus Laporte, 1835
Dytiscus semisulcatus (O.F.Müller, 1776)
Dytiscus sharpi Wehncke, 1875
Dytiscus sinensis Feng, 1935
Dytiscus thianschanicus (Gschwendtner, 1923)
Dytiscus verticalis Say, 1823
Dytiscus zersii Sordelli, 1882
| Biology and health sciences | Beetles (Coleoptera) | Animals |
9588432 | https://en.wikipedia.org/wiki/Carya%20glabra | Carya glabra | Carya glabra, the pignut hickory, is a common but not abundant species of hickory in the oak-hickory forest association in the Eastern United States and Canada. Other common names are pignut, sweet pignut, coast pignut hickory, smoothbark hickory, swamp hickory, and broom hickory. The pear-shaped nut ripens in September and October, has a sweet maple-like smell, and is an important part of the diet of many wild animals. The wood is used for a variety of products, including fuel for home heating. Its leaves turn yellow in the fall.
Habitat
Native range
The range of pignut hickory covers nearly all of the eastern United States. The species grows in central Florida and northward through North Carolina to southern Massachusetts. It also grows north of the Gulf Coast through Alabama, Mississippi north to Missouri and extreme southeastern Iowa, and the Lower Peninsula of Michigan.
The best development of this species is in the lower Ohio River Basin. It prevails over other species of hickory in the Appalachian forests. Pignut makes up much of the hickory harvested in Kentucky, West Virginia, the Cumberland Mountains of Tennessee, and the hill country of the Ohio Valley.
Pignut hickory is also found in Canada in extreme southern Ontario. It does, however, have a limited range and is restricted to the Niagara Peninsula, southern Halton Region, the Hamilton area along western Lake Ontario, and southward along the northern shore of Lake Erie and pockets of extreme southwestern Ontario.
Climate
Pignut hickory grows in a humid climate with an average annual precipitation of of which is rain during the growing season. Average snowfall varies from little to none in the South to or more in the mountains of West Virginia, upstate New York, and western North Carolina (25).
Within the range of pignut hickory, average annual temperatures vary from in the north to in Florida. Average January temperature varies from and average July temperature varies from . Extremes of have been recorded within the range. The growing season varies by latitude and elevation from 140 to 300 days.
Mean annual relative humidity ranges from 70 to 80 percent with small monthly differences; daytime relative humidity often falls below 50% while nighttime humidity approaches 100%.
Mean annual hours of sunshine range from 2,200 to 3,000. Average January sunshine varies from 100 to 200 hours, and July sunshine from 260 to 340 hours. Mean daily solar radiation ranges from 12.57 to 18.86 million J/m² (300 to 450 langleys). In January, daily radiation varies from 6.28 to 12.57 million J/m² (150 to 300 langleys), and in July from 20.95 to 23.04 million J/m² (500 to 550 langleys).
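As a plausibility check on these unit conversions (taking 1 langley = 1 cal/cm² and 1 cal ≈ 4.19 J, so 1 langley ≈ 4.19 × 10⁴ J/m²):

$$300 \times 4.19 \times 10^{4} \approx 12.57 \times 10^{6}\ \mathrm{J/m^2}, \qquad 450 \times 4.19 \times 10^{4} \approx 18.86 \times 10^{6}\ \mathrm{J/m^2}.$$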
According to one classification of climate (20), the range of pignut hickory south of the Ohio River, except for a small area in Florida, is designated as humid, mesothermal. That part of the range lying north of the Ohio River is designated humid, microthermal. Part of the species range in peninsular Florida is classed as subhumid, mesothermal. Mountains in Pennsylvania, West Virginia, North Carolina, and Tennessee are classed as wet, microthermal, and mountains in South Carolina and Georgia are classed as wet, mesothermal. Throughout its range, precipitation is rated adequate during all seasons.
Soils and topography
Pignut hickory frequently grows on dry ridgetops and sideslopes throughout its range, but it is also common on moist sites, particularly in the mountains and Piedmont. In the Great Smoky Mountains pignut hickory has been observed on dry sandy soils at low elevations. Whittaker placed pignut in a submesic class and charted it as the hickory with the greatest elevational range in the Great Smoky Mountains. In southwest Virginia, south-facing upper slopes of Beanfield Mountain are dominated by pignut hickory, northern red oak (Quercus rubra), and white oak (Q. alba). This site is the most xeric habitat on the mountain because of high insolation, 70 percent slopes, and medium- to coarse-textured soils derived from Clinch sandstone. Mid-elevation slopes are dominated by chestnut oak (Q. prinus), northern red oak, and pignut hickory and coincide with three shale formations.
The range of pignut hickory encompasses 7 orders, 12 suborders, and 22 great groups of soils. About two-thirds of the species' range is dominated by Ultisols, which are low in bases and have subsurface horizons of clay accumulation. They are usually moist but are dry during part of the warm season. Udults is the dominant suborder, and Hapludults and Paleudults are the dominant great groups. These soils are derived from a variety of parent materials (sedimentary and metamorphic rocks, glacial till, and in places loess of varying thickness), which vary in age from Precambrian to Quaternary.
A wide range of soil fertility exists, as evidenced by the soil orders present, from Alfisols and Mollisols, which are medium to high in base saturation, to Ultisols, which are low in base saturation. Pignut hickory responds to increases in soil nitrogen similarly to American beech (Fagus grandifolia), sugar maple (Acer saccharum), and blackgum (Nyssa sylvatica). These species are rated as intermediate in nitrogen deficiency tolerance and consequently are able to grow with lower levels of nitrogen than are required by the "nitrogen-demanding" white ash (Fraxinus americana), yellow-poplar (Liriodendron tulipifera), and American basswood (Tilia americana). Hickories are considered "soil improvers" because their leaves have a relatively high calcium content.
Associated forest cover
Hickories are consistently present in the broad eastern upland climax forest association commonly called oak-hickory, but they are not generally abundant. Locally, hickories may make up 20 to 30 percent of stand basal area, particularly in slope and cove forests below the escarpment of the Cumberland Plateau and in second-growth forests in the Cumberland Mountains, especially on benches. It has been hypothesized that hickory will replace chestnut (Castanea dentata) killed by the blight (Cryphonectria parasitica) in the Appalachian Highlands. On Beanfield Mountain in Giles County, Virginia, the former chestnut-oak complex has changed to an oak-hickory association over a period of 50 years. This association is dominated by pignut hickory with an importance value of 41.0 (maximum value = 300), northern red oak (36.0), and chestnut oak (25.0). White oak, red maple (Acer rubrum), and sugar maple are subdominant species.
Pignut hickory is an associated species in 20 of the 90 forest cover types listed by the Society of American Foresters for the eastern United States (6):
Northern forest region
53 White Pine-Chestnut Oak
Central forest region
40 Post Oak-Blackjack Oak
44 Chestnut Oak
45 Pitch Pine
46 Eastern Redcedar
52 White Oak-Black Oak-Northern Red Oak
53 White Oak
55 Northern Red Oak
57 Yellow-Poplar-Tulip tree
59 Yellow-Poplar-White Oak-Northern Red Oak
64 Sassafras-Persimmon
110 Black Oak
Southern forest region
75 Shortleaf Pine
76 Shortleaf Pine-Oak
78 Virginia Pine-Oak
79 Virginia Pine
80 Loblolly Pine-Shortleaf Pine
81 Loblolly Pine
82 Loblolly Pine-Hardwood
83 Longleaf Pine-Slash Pine
Because the range of pignut hickory is so extensive, it is not feasible to list the associated trees, shrubs, herbs, and grasses, which vary according to elevation, topographic conditions, edaphic features, and geographic locality.
Life history
Reproduction and early growth
Hickories are monoecious and flower in the spring. The staminate catkins of pignut hickory are long and develop from axils of leaves of the previous season or from inner scales of the terminal buds at the base of the current growth. The pistillate flowers appear in spikes about long on peduncles terminating in shoots of the current year. Flowers open from the middle of March in the southeast part (Florida) of the range to early June in Michigan. The catkins usually emerge before the pistillate flowers.
The fruit of hickory is pear shaped and enclosed in a thin husk developed from the floral involucre. The fruit ripens in September and October, and seeds are dispersed from September through December. Husks are green until maturity; they turn brown to brownish-black as they ripen. The husks become dry at maturity and split away from the nut into four valves along sutures. Husks of pignut hickory split only to the middle or slightly beyond and generally cling to the nut, which is unribbed, with a thick shell.
Seed production and dissemination
Pignut hickory begins to bear seed in quantity in 30 years, with optimum production between 75 and 200 years. The maximum age for seed production is about 300 years. Good seed crops occur every year or two with light crops in other years; frost can seriously hinder seed production. Usually less than half of the seeds are sound, but 50 to 75 percent of these will germinate. The hickory shuckworm (Laspeyresia caryana) can seriously reduce germination. Pignut seed, averaging 440/kg (200/lb), is lighter than the seed of other hickory species. The nuts are disseminated mainly by gravity, but the range of seeding is extended by squirrels and chipmunks.
Seedling development
Hickories exhibit embryo dormancy which is overcome naturally by overwintering in the duff and litter or artificially by stratification in a moist medium at for 30 to 150 days. In forest tree nurseries unstratified hickory nuts are sown in the fall and stratified nuts are sown in the spring. Hickories are hypogeously germinating plants, and the nuts seldom remain viable in the forest floor for more than one winter.
Seedling growth of hickories is slow. Height growth of pignut hickory seedlings in the Ohio Valley, in the open or under light shade on red clay soil, has been reported (2).
Vegetative reproduction
Hickories sprout readily from stumps and roots. Stump sprouting is not as prolific as in other deciduous tree species, but the sprouts that are produced are vigorous and grow fairly rapidly in height. Root sprouts also are vigorous and probably more numerous than stump sprouts in cut-over areas. Small stumps sprout more frequently than large ones. Sprouts that originate at or below ground level and from small stumps are less likely to develop heartwood decay. Pignut hickory is difficult to reproduce from cuttings.
Sapling and pole stages to maturity
Pignut hickory often grows tall and occasionally reaches , with d.b.h. of . The bole is often forked. Height and diameter by age have been reported for selected locations. Diameter growth of pignut hickory (along with chestnut oak, white oak, sweet birch (Betula lenta), and American beech) is rated slow. Since hickories constitute 15 percent or less of the basal area of oak-hickory forest types, most growth and yield information is written in terms of oak rather than oak-hickory. Yields of mixed oak stands and of hickory stands have been reported. Tree volume tables are available.
Rooting habit
Pignut hickory tends to develop a pronounced taproot with few laterals and is rated as windfirm. The taproot develops early, which may explain the slow growth of seedling shoots. Taproots may develop in compact and stony soils.
Reaction to competition
The hickories as a group are classed as intermediate in shade tolerance; however, pignut hickory has been classed as intolerant in the Northeast and tolerant in the Southeast. In much of the area covered by mixed oak forests, shade-tolerant hardwoods (including the hickories) are climax, and the trend of succession toward this climax is very strong. Although most silvicultural systems applied to oak types will maintain a hardwood forest, the cutting methods used affect the rapidity with which other species may replace the oaks and hickories.
Damaging agents
Pignut hickory is easily damaged by fire, which causes stem degrade or loss of volume, or both. Internal discolorations called mineral streak are common and are one major reason why so few standing hickories meet trade specifications. Streaks result from yellow-bellied sapsucker pecking, pin knots, worm holes, and mechanical injuries. Hickories strongly resist ice damage and seldom develop epicormic branches.
The Index of Plant Diseases in the United States lists 133 fungi and 10 other causes of diseases on Carya species. Most of the fungi are saprophytes, but a few are damaging to foliage, produce cankers, or cause trunk or root rots.
The most common disease of pignut hickory from Pennsylvania southward is a trunk rot caused by Poria spiculosa. Cankers vary in size and appearance depending on their age. A common form develops around a branch wound and resembles a swollen, nearly healed wound. On large trees these may become prominent burl-like bodies having several vertical or irregular folds in the callus covering. A single trunk canker near the base is a sign that the butt log is badly infected, and multiple cankers are evidence that the entire tree may be a cull.
Major leaf diseases are anthracnose (Gnomonia caryae) and mildew (Microstroma juglandis). The former causes brown spots with definite margins on the undersides of the leaf. These may coalesce and cause widespread blotching. Mildew invades the leaves and twigs and may form witches' brooms by stimulating bud formation. Although locally prevalent, mildew offers no problem in the management of hickory.
The stem canker (Nectria galligena) produces depressed areas with concentric bark rings that develop on the trunk and branches. Affected trees are sometimes eliminated through breakage or competition and sometimes live to reach merchantable size with cull section at the canker. No special control measures are required, but cankered trees should be harvested in stand improvement operations.
A gall-forming fungus species of Phomopsis can produce warty excrescences ranging from small twig galls to very large trunk burls on northern hickories and oaks. Little information is available on root diseases of hickory.
More than 100 insects have been reported to infest hickory trees and wood products, but only a few cause death or severe damage. The hickory bark beetle (Scolytus quadrispinosus) is the most important insect enemy of hickory, and also one of the most important insect pests of hardwoods in the Eastern United States. During drought periods in the Southeast, outbreaks often develop and large tracts of timber are killed. At other times, damage may be confined to the killing of a single tree or to portions of the tops of trees. The foliage of heavily infested trees turns red within a few weeks after attack, and the trees soon die. There is one generation per year in northern areas and normally two broods per year in the South. Control consists of felling infested trees and destroying the bark during winter months or storing infested logs in ponds.
Logs and dying trees of several hardwood species including pignut hickory are attacked by the ambrosia beetle (Platypus quadridentatus) throughout the South and north to West Virginia and North Carolina. The false powderpost beetle (Xylobiops basilaris) attacks recently felled or dying trees, logs, or limbs with bark in the Eastern and Southern States. Hickory, persimmon (Diospyros virginiana), and pecan (C. illinoinensis) are most frequently infested, but other hardwoods also are attacked. Healthy trees growing in proximity to heavily infested trees are occasionally attacked but almost always without success.
Hickory is one of several host species of the twig girdler (Oncideres cingulata). Infested trees and seedlings are not only damaged severely but become ragged and unattractive. A few of the more common species of gall-producing insects attacking hickory are Phylloxera caryaecaulis, Caryomyia holotricha, C. sanguinolenta, and C. tubicola.
Special uses
Hickories provide food to many kinds of wildlife. The nuts are relished by several species of squirrel and represent an estimated 10 to 25 percent of their diet. Hogs were observed consuming the nuts in colonial America, lending the species its common name. Nuts and flowers are eaten by the wild turkey and several species of songbirds. Nuts and bark are eaten by black bears, foxes, rabbits, and raccoons. Small mammals eat the nuts and leaves; 5 to 10 percent of the diet of eastern chipmunks is hickory nuts. White-tailed deer occasionally browse hickory leaves, twigs, and nuts.
The kernel of hickory seeds is exceptionally high in crude fat, up to 70 to 80 percent in some species. Crude protein, phosphorus, and calcium contents are generally moderate to low. Crude fiber is very low.
Pignut hickory makes up a small percentage of the biomass in low-quality upland hardwood stands that are prime candidates for clearcutting for chips or fuelwood as the first step toward rehabilitation to more productive stands. Hickory has a relatively high heating value and is used extensively as a home heating fuel.
Pignut hickory is an important shade tree in wooded suburban areas over most of the range but is seldom planted as an ornamental tree because of its size and difficulty of transplanting, although it has spectacular orangey-red fall colors.
Genetics
Carya glabra var. megacarpa (Sarg.) Sarg., coast pignut hickory, was once recognized as a distinct variety but is now considered to be a synonym of C. glabra (Mill.) Sweet. C. leiodermis Sarg., swamp hickory, has also been added as a synonym of C. glabra.
Carya glabra (Mill.) Sweet var. glabra distinguishes the (typical) pignut hickory from red hickory (C. glabra var. odorata (Marsh.) Little). The taxonomic position of red hickory is controversial. The binomial C. ovalis (Wangenh.) Sarg. was published in 1913 for a segregate of C. glabra. It was reduced to a synonym of C. glabra in Little's 1953 checklist but was elevated to a variety in the 1979 edition. The principal difference is in the husk of the fruit, opening late and only partly, or remaining closed in C. glabra but promptly splitting to the base in C. ovalis. However, many trees are intermediate in this trait, and the recorded ranges are almost the same. The leaves of C. ovalis have mostly seven leaflets; those of C. glabra have mostly five leaflets. The two can be distinguished with certainty only in November. Since the two ranges seem to overlap, the distributions have been mapped together as a Carya glabra-ovalis complex.
Carya ovalis has also been treated as an interspecific hybrid between C. glabra and C. ovata. C. ovalis was accepted as a polymorphic species especially variable in size and shape of its nuts and is possibly a hybrid. The relationships may be more complex after a long and reticulate phylogeny, according to detailed chemical analyses of hickory nut oils.
Carya glabra is a 64-chromosome species that readily hybridizes with other hickories, especially C. ovalis.
One hybrid, C. x demareei Palmer (C. glabra x cordiformis) was described in 1937 from northeastern Arkansas.
| Biology and health sciences | Fagales | Plants |
9593858 | https://en.wikipedia.org/wiki/Megarachne | Megarachne |
Megarachne is a genus of eurypterid, an extinct group of aquatic arthropods. Fossils of Megarachne have been discovered in deposits of Late Carboniferous age, from the Gzhelian stage, in the Bajo de Véliz Formation of San Luis, Argentina. The fossils of the single and type species, M. servinei, have been recovered from deposits that were once a freshwater environment. The generic name, composed of the Ancient Greek μέγας (megas), meaning "great", and ἀράχνη (arachne), meaning "spider", translates to "great spider", because the fossil was misidentified as a large, prehistoric spider.
With a body length of , Megarachne was a medium-sized eurypterid. If the original identification as a spider had been correct, Megarachne would have been the largest known spider to have ever lived. Eurypterids such as Megarachne are often called "sea scorpions", but the strata in which Megarachne has been found indicate that it dwelled in freshwater and not in marine environments. Megarachne was similar to other eurypterids within the Mycteropoidea, a rare group known primarily from South Africa and Scotland. The mycteropoids had evolved a specialized method of feeding referred to as sweep-feeding. This involved raking through the substrate of riverbeds in order to capture and eat smaller invertebrates. Despite only two specimens having been recovered, Megarachne represents the most complete eurypterid discovered in Carboniferous deposits in South America so far. Due to their fragmentary fossil record and similarities between the genera, some researchers have hypothesized that Megarachne and two other members of its family, Mycterops and Woodwardopterus, represent different developmental stages of a single genus.
Description
Known fossils of Megarachne indicate a body length of . While large for an arthropod, Megarachne was dwarfed by other eurypterids, even relatively close relatives such as Hibbertopterus, which could reach lengths exceeding . Though originally described as a giant spider, a multitude of features support the classification of Megarachne as a eurypterid. Among them, the raised lunules (the vaguely moon-shaped ornamentation, similar to scales) and the cuticular sculpture of the mucrones (a dividing ridge continuing uninterrupted throughout the carapace, the part of the exoskeleton which covers the head) are especially important, since these features are characteristic of eurypterids. Megarachne possessed blade-like structures on its appendages (limbs) which would have allowed it to engage in a feeding method known as sweep-feeding, raking through the soft sediment of aquatic environments in swamps and rivers with its frontal appendage blades to capture and feed on small invertebrates. Megarachne also possessed a large and circular second opisthosomal tergite (the second dorsal segment of the abdomen), the function of which remains unknown. Megarachne was very similar to other mycteroptid eurypterids in appearance, a group distinguished from other mycteropoids by the parabolic shape of their prosoma (the head plate), hastate telsons (the hindmost part of the body being shaped like a gladius, a Roman sword) with paired keel-shaped projections on the underside, and heads with small compound eyes that were roughly trapezoidal in shape.
History of research
Megarachne servinei was originally described in 1980 by the Argentine paleontologist Mario Hünicken. The generic name, composed of the Ancient Greek μέγας (megas) meaning "great" and Latin arachne meaning "spider", translates to "great spider". The holotype (now stored at the Museum of Paleontology at the National University of Córdoba) was recovered from the Pallero Member of the Bajo de Véliz Formation of Argentina, which has been dated to the Gzhelian age, to million years ago. The specimen preserves the carapace, the first two tergites, three partial appendages, and what is possibly a coxa (the proximal-most limb segment).
Hünicken wrongly identified the specimen as a mygalomorph spider (the group that includes tarantulas) based on the shape of the carapace; the wide, circular eye tubercle (round outgrowth) located in the center of the head between the two eyes; and a circular structure behind the first body segment, which he identified as the "moderately hairy" abdomen. Hünicken's identification relied heavily on X-ray microtomography of the holotype. Additional hidden structures, such as a sternum and labium, coxae, and cheliceral fangs, were also extrapolated from the X-radiographs.
With an estimated length of , based on the assumption that the fossil was that of a spider, and with a leg-span estimated to be , Megarachne servinei would have been the largest spider to have ever existed, exceeding the goliath birdeater (Theraphosa blondi), which has a maximum leg-span of around . Because of its status as the "largest spider to have ever lived", Megarachne quickly became popular. Based on Hünicken's detailed description of the fossil specimen and various other illustrations and reconstructions made by him, reconstructions of Megarachne as a giant spider were set up in museums around the world.
The identification of the specimen as a spider was doubted by some arachnologists, such as Shear and colleagues (1989), who stated that while Megarachne had been assigned to the Araneae, it "may represent an unnamed order or a ricinuleid". Even Hünicken himself acknowledged discrepancies in the morphology of the fossil that could not be accommodated with an arachnid identity. These discrepancies included an unusual cuticular ornamentation, the carapace being divided into frontal and rear parts by a suture, and spatulate (having a broad, rounded end) chelicerae (already noted by Hünicken as a strange feature, as no known spider possesses spatulate chelicerae); all features unknown in other spiders. However, the holotype was by then deposited in a bank vault so other paleontologists only had access to plaster casts.
In 2005, a second, more complete, specimen consisting of a part and counterpart (the matching halves of a compression fossil) was recovered. This specimen had preserved parts of the front section of the body, as well as coxae possibly from the fourth pair of appendages, and was recovered from the same locality and horizon. A research team led by the British paleontologist and arachnologist Paul A. Selden, and also consisting of Hünicken and the Argentine arachnologist José A. Corronca, re-examined the holotype in light of the new discovery. They concluded that Megarachne servinei was a large eurypterid (a group also known as "sea scorpions"), not a spider. Although Hünicken had misidentified Megarachne, his identification of it as an arachnid was not entirely absurd, as the two groups are closely related. A morphological comparison with other eurypterids indicated that Megarachne most closely resembled another large Permo-Carboniferous eurypterid, the mycteroptid Woodwardopterus scabrosus, which is known only from a single specimen. Selden and colleagues (2005) concluded that, despite being represented by only two known specimens, Megarachne is the most complete eurypterid discovered so far in Carboniferous deposits of South America.
Classification

Megarachne was part of the stylonurine suborder, a relatively rare clade of eurypterids. Within the stylonurines, Megarachne is a member of the superfamily Mycteropoidea and its constituent family Mycteroptidae, which includes the close relatives Woodwardopterus and Mycterops.
Fossilized remains of the second tergite of the mycteroptid Woodwardopterus were compared to the fossil remains of Megarachne by Selden and colleagues (2005), which revealed that they were virtually identical, including features previously not noted in Woodwardopterus, such as radiating lines covering the tergite. Selden and colleagues (2005) concluded that Megarachne and Woodwardopterus belonged to the same family, with two primary differences: the tergites and the mucrones on the carapace are more sparsely packed in Megarachne, and the protrusion of the anteromedian (i.e. before the middle) carapace, seen prominently in Megarachne, does not occur in Woodwardopterus.
It has been suggested that three of the four genera that constitute the Mycteroptidae (Mycterops, Woodwardopterus, and Megarachne) might represent different ontogenetic stages (different developmental stages of the animal during its life) of each other, based on their morphology and the size of the specimens. Should this interpretation be correct, the sparse mucrones of Megarachne might be a result of its age, as Megarachne is significantly larger than Woodwardopterus. The smallest genus, Mycterops, has even more densely packed ornaments on its carapace and tergite, and thus might be the youngest ontogenetic stage of the animal. Should Mycterops, Megarachne, and Woodwardopterus represent the same animal, the name Mycterops would take priority, as it was named first, in 1886.
The cladogram below is adapted from Lamsdell and colleagues (2010) and shows the relationship of Megarachne within the suborder Stylonurina.
Paleoecology
Both known specimens of Megarachne have been recovered from the Bajo de Véliz Formation of Argentina, dated to the Gzhelian stage of the Late Carboniferous. The environment of the Bajo de Véliz Formation was, unlike the typical living environments of eurypterids (especially the swimming eurypterids of the suborder Eurypterina), a freshwater environment in a floodplain. Similar Late Carboniferous floodplains with fossilized remnants discovered in modern-day Australia suggest a flora dominated by different types of pteridosperms with pockets of isoetoid lycopsids.
During Megarachne's time, Argentina and the rest of South America were part of the ancient supercontinent Gondwana, which was beginning to fuse with the northern continents of Euramerica, North China, Siberia and Kazakhstania to form Pangaea. In addition to Megarachne, the Bajo de Véliz Formation preserves a wide array of fossilized flying insects, such as Rigattoptera (classified in the order Protorthoptera), but as a freshwater predator, Megarachne would probably not have fed on them. Instead, the blades on the frontal appendages of Megarachne would have allowed it to sweep-feed, raking through the soft sediment of the rivers it inhabited in order to capture and feed on small invertebrates. This feeding strategy was common to other mycteropoids.
Compared with the relatively warm climate of the earlier parts of the Carboniferous, the Late Carboniferous was relatively cold globally. This climate change likely occurred during the Middle Carboniferous due to falling carbon dioxide levels in the atmosphere and high oxygen levels. The Southern Hemisphere, where Argentina was and still is located, may even have experienced glaciation, with large continental ice sheets similar to the modern glacial ice sheets of the Arctic and Antarctica, or smaller glaciers in dispersed centers. The spread of the ice sheets also affected sea levels, which would rise and fall throughout the period. Late Carboniferous flora was low in diversity but developed uniformly throughout Gondwana. The plant life consisted of pteridosperm trees such as Nothorhacopteris, Triphyllopteris and Botrychiopsis, and the lycopsid trees Malanzania, Lepidodendropsis and Bumbudendron. The plant fossils present also suggest that the region was subject to monsoons during certain time intervals.
In popular culture
During the production of the 2005 British documentary Walking with Monsters, Megarachne was slated to appear as a giant tarantula-like spider hunting the cat-sized reptile Petrolacosaurus in the segment detailing the Carboniferous, with the reconstruction closely following what was thought to be known of the genus at the time the series began production. The actual identity of the genus, as a eurypterid, was only discovered well into production, and by then it was far too late to update the reconstructions. The scenes were left in, but the giant spider was renamed as an unspecified species belonging to the primitive spider suborder Mesothelae, a suborder that actually exists but with genera much smaller than, and looking considerably different from, the spider featured in the program.
| Biology and health sciences | Fossil arthropods | Animals |
4250553 | https://en.wikipedia.org/wiki/Gene | Gene | In biology, the word gene has two meanings. The Mendelian gene is a basic unit of heredity. The molecular gene is a sequence of nucleotides in DNA that is transcribed to produce a functional RNA. There are two types of molecular genes: protein-coding genes and non-coding genes. During gene expression (the synthesis of RNA or protein from a gene), DNA is first copied into RNA. RNA can be directly functional or be the intermediate template for the synthesis of a protein.
The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits from one generation to the next. These genes make up different DNA sequences, together called a genotype, that is specific to every given individual within the gene pool of the population of a given species. The genotype, along with environmental and developmental factors, ultimately determines the phenotype of the individual.
Most biological traits occur under the combined influence of polygenes (a set of different genes) and gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs; others are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life. A gene can acquire mutations in its sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a gene, which may cause different phenotypical traits. Genes evolve due to natural selection (survival of the fittest) and genetic drift of the alleles.
Definitions
There are many different ways to use the term "gene" based on different aspects of their inheritance, selection, biological function, or molecular structure but most of these definitions fall into two categories, the Mendelian gene or the molecular gene.
The Mendelian gene is the classical gene of genetics and it refers to any heritable trait. This is the gene described in The Selfish Gene. More thorough discussions of this version of a gene can be found in the articles Genetics and Gene-centered view of evolution.
The molecular gene definition is more commonly used across biochemistry, molecular biology, and most of genetics—the gene that is described in terms of DNA sequence. There are many different definitions of this gene—some of which are misleading or incorrect.
Very early work in the field that became molecular genetics suggested the concept that one gene makes one protein (originally 'one gene – one enzyme'). However, genes that produce repressor RNAs were proposed in the 1950s and by the 1960s, textbooks were using molecular gene definitions that included those that specified functional RNA molecules such as ribosomal RNA and tRNA (noncoding genes) as well as protein-coding genes.
This idea of two kinds of genes is still part of the definition of a gene in most textbooks. For example,
The important parts of such definitions are: (1) that a gene corresponds to a transcription unit; (2) that genes produce both mRNA and noncoding RNAs; and (3) that regulatory sequences control gene expression but are not part of the gene itself. However, there is one other important part of the definition, and it is emphasized in Kostas Kampourakis' book Making Sense of Genes.
The emphasis on function is essential because there are stretches of DNA that produce non-functional transcripts and they do not qualify as genes. These include obvious examples such as transcribed pseudogenes as well as less obvious examples such as junk RNA produced as noise due to transcription errors. In order to qualify as a true gene, by this definition, one has to prove that the transcript has a biological function.
Early speculations on the size of a typical gene were based on high-resolution genetic mapping and on the size of proteins and RNA molecules. A length of 1500 base pairs seemed reasonable at the time (1965). This was based on the idea that the gene was the DNA that was directly responsible for production of the functional product. The discovery of introns in the 1970s meant that many eukaryotic genes were much larger than the size of the functional product would imply. Typical mammalian protein-coding genes, for example, are about 62,000 base pairs in length (transcribed region), and since there are about 20,000 of them, they occupy about 35–40% of the mammalian genome (including the human genome).
In spite of the fact that both protein-coding genes and noncoding genes have been known for more than 50 years, there are still a number of textbooks, websites, and scientific publications that define a gene as a DNA sequence that specifies a protein. In other words, the definition is restricted to protein-coding genes. Here is an example from a recent article in American Scientist.
This restricted definition is so common that it has spawned many recent articles that criticize this "standard definition" and call for a new expanded definition that includes noncoding genes. However, some modern writers still do not acknowledge noncoding genes although this so-called "new" definition has been recognised for more than half a century.
Although some definitions can be more broadly applicable than others, the fundamental complexity of biology means that no definition of a gene can capture all aspects perfectly. Not all genomes are DNA (e.g. RNA viruses), bacterial operons are multiple protein-coding regions transcribed into single large mRNAs, alternative splicing enables a single genomic region to encode multiple distinct products, and trans-splicing concatenates mRNAs from shorter coding sequences across the genome. Since molecular definitions exclude elements such as introns, promoters, and other regulatory regions, these are instead thought of as "associated" with the gene and affect its function.
An even broader operational definition is sometimes used to encompass the complexity of these diverse phenomena, where a gene is defined as a union of genomic sequences encoding a coherent set of potentially overlapping functional products. This definition categorizes genes by their functional products (proteins or RNA) rather than their specific DNA loci, with regulatory elements classified as gene-associated regions.
History
Discovery of discrete inherited units
The existence of discrete inheritable units was first suggested by Gregor Mendel (1822–1884). From 1857 to 1864, in Brno, Austrian Empire (today's Czech Republic), he studied inheritance patterns in 8000 common edible pea plants, tracking distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number of differing characteristics in the original peas. Although he did not use the term gene, he explained his results in terms of discrete inherited units that give rise to observable physical characteristics. This description prefigured Wilhelm Johannsen's distinction between genotype (the genetic material of an organism) and phenotype (the observable traits of that organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.
Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each parent contributed fluids to the fertilization process and that the traits of the parents blended and mixed to produce the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, from Greek pan ("all, whole") and genesis ("birth") / genos ("origin"). Darwin used the term gemmule to describe hypothetical particles that would mix during reproduction.
Mendel's work went largely unnoticed after its first publication in 1866, but was rediscovered in the late 19th century by Hugo de Vries, Carl Correns, and Erich von Tschermak, who (claimed to have) reached similar conclusions in their own research. Specifically, in 1889, Hugo de Vries published his book Intracellular Pangenesis, in which he postulated that different characters have individual hereditary carriers and that inheritance of specific traits in organisms comes in particles. De Vries called these units "pangenes" (Pangens in German), after Darwin's 1868 pangenesis theory.
Twenty years later, in 1909, Wilhelm Johannsen introduced the term "gene" (inspired by the ancient Greek: γόνος, gonos, meaning offspring and procreation) and, in 1906, William Bateson, that of "genetics" while Eduard Strasburger, among others, still used the term "pangene" for the fundamental physical and functional unit of heredity.
Discovery of DNA
Advances in understanding genes and inheritance continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic information by experiments in the 1940s to 1950s. The structure of DNA was studied by Rosalind Franklin and Maurice Wilkins using X-ray crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication.
In the early 1950s the prevailing view was that the genes in a chromosome acted like discrete entities arranged like beads on a string. The experiments of Benzer using mutants defective in the rII region of bacteriophage T4 (1955–1959) showed that individual genes have a simple linear structure and are likely to be equivalent to a linear section of DNA.
Collectively, this body of research established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics.
In 1972, Walter Fiers and his team were the first to determine the sequence of a gene: that of bacteriophage MS2 coat protein. The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved the efficiency of sequencing and turned it into a routine laboratory tool. An automated version of the Sanger method was used in early phases of the Human Genome Project.
Modern synthesis and its successors
The theories developed in the early 20th century to integrate Mendelian genetics with Darwinian evolution are called the modern synthesis, a term introduced by Julian Huxley.
This view of evolution was emphasized by George C. Williams' gene-centric view of evolution. He proposed that the Mendelian gene is a unit of natural selection with the definition: "that which segregates and recombines with appreciable frequency." Related ideas emphasizing the centrality of Mendelian genes and the importance of natural selection in evolution were popularized by Richard Dawkins.
The development of the neutral theory of evolution in the late 1960s led to the recognition that random genetic drift is a major player in evolution and that neutral theory should be the null hypothesis of molecular evolution. This led to the construction of phylogenetic trees and the development of the molecular clock, which is the basis of all dating techniques using DNA sequences. These techniques are not confined to molecular gene sequences but can be used on all DNA segments in the genome.
Molecular basis
DNA
The vast majority of organisms encode their genes in long strands of DNA (deoxyribonucleic acid). DNA consists of a chain made from four types of nucleotide subunits, each composed of: a five-carbon sugar (2-deoxyribose), a phosphate group, and one of the four bases adenine, cytosine, guanine, and thymine.
Two chains of DNA twist around each other to form a DNA double helix with the phosphate–sugar backbone spiralling around the outside, and the bases pointing inward with adenine base pairing to thymine and guanine to cytosine. The specificity of base pairing occurs because adenine and thymine align to form two hydrogen bonds, whereas cytosine and guanine form three hydrogen bonds. The two strands in a double helix must, therefore, be complementary, with their sequence of bases matching such that the adenines of one strand are paired with the thymines of the other strand, and so on.
Due to the chemical composition of the pentose residues of the nucleotides, DNA strands have directionality. One end of a DNA polymer contains an exposed hydroxyl group on the deoxyribose; this is known as the 3' end of the molecule. The other end contains an exposed phosphate group; this is the 5' end. The two strands of a double-helix run in opposite directions. Nucleic acid synthesis, including DNA replication and transcription, occurs in the 5'→3' direction, because new nucleotides are added via a dehydration reaction that uses the exposed 3' hydroxyl as a nucleophile.
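Because the strands are complementary and antiparallel, the sequence of one strand fully determines its partner. A minimal Python sketch of this pairing (the function name and the example sequence are illustrative, not from any standard library):

    # Watson-Crick pairing: A<->T, C<->G; reversal accounts for the strands
    # running in opposite (antiparallel) directions.
    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def reverse_complement(strand):
        """Return the 5'->3' sequence of the strand paired with `strand`."""
        return "".join(COMPLEMENT[base] for base in reversed(strand))

    print(reverse_complement("ATGCCG"))  # CGGCAT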
The expression of genes encoded in DNA begins by transcribing the gene into RNA, a second type of nucleic acid that is very similar to DNA, but whose monomers contain the sugar ribose rather than deoxyribose. RNA also contains the base uracil in place of thymine. RNA molecules are less stable than DNA and are typically single-stranded. Genes that encode proteins are composed of a series of three-nucleotide sequences called codons, which serve as the "words" in the genetic "language". The genetic code specifies the correspondence during protein translation between codons and amino acids. The genetic code is nearly the same for all known organisms.
Chromosomes
The total complement of genes in an organism or cell is known as its genome, which may be stored on one or more chromosomes. A chromosome consists of a single, very long DNA helix on which thousands of genes are encoded. The region of the chromosome at which a particular gene is located is called its locus. Each locus contains one allele of a gene; however, members of a population may have different alleles at the locus, each with a slightly different gene sequence.
The majority of eukaryotic genes are stored on a set of large, linear chromosomes. The chromosomes are packed within the nucleus in complex with storage proteins called histones to form a unit called a nucleosome. DNA packaged and condensed in this way is called chromatin. The manner in which DNA is stored on the histones, as well as chemical modifications of the histone itself, regulate whether a particular region of DNA is accessible for gene expression. In addition to genes, eukaryotic chromosomes contain sequences involved in ensuring that the DNA is copied without degradation of end regions and sorted into daughter cells during cell division: replication origins, telomeres, and the centromere. Replication origins are the sequence regions where DNA replication is initiated to make two copies of the chromosome. Telomeres are long stretches of repetitive sequences that cap the ends of the linear chromosomes and prevent degradation of coding and regulatory regions during DNA replication. The length of the telomeres decreases each time the genome is replicated and has been implicated in the aging process. The centromere is required for binding spindle fibres to separate sister chromatids into daughter cells during cell division.
Prokaryotes (bacteria and archaea) typically store their genomes on a single, large, circular chromosome. Similarly, some eukaryotic organelles contain a remnant circular chromosome with a small number of genes. Prokaryotes sometimes supplement their chromosome with additional small circles of DNA called plasmids, which usually encode only a few genes and are transferable between individuals. For example, the genes for antibiotic resistance are usually encoded on bacterial plasmids and can be passed between individual cells, even those of different species, via horizontal gene transfer.
Whereas the chromosomes of prokaryotes are relatively gene-dense, those of eukaryotes often contain regions of DNA that serve no obvious function. Simple single-celled eukaryotes have relatively small amounts of such DNA, whereas the genomes of complex multicellular organisms, including humans, contain an absolute majority of DNA without an identified function. This DNA has often been referred to as "junk DNA". However, more recent analyses suggest that, although protein-coding DNA makes up barely 2% of the human genome, about 80% of the bases in the genome may be expressed, so the term "junk DNA" may be a misnomer.
Structure and function
Structure
The structure of a protein-coding gene consists of many elements of which the actual protein coding sequence is often only a small part. These include introns and untranslated regions of the mature mRNA. Noncoding genes can also contain introns that are removed during processing to produce the mature functional RNA.
All genes are associated with regulatory sequences that are required for their expression. First, genes require a promoter sequence. The promoter is recognized and bound by transcription factors that recruit and help RNA polymerase bind to the region to initiate transcription. The recognition typically occurs as a consensus sequence like the TATA box. A gene can have more than one promoter, resulting in messenger RNAs (mRNA) that differ in how far they extend in the 5' end. Highly transcribed genes have "strong" promoter sequences that form strong associations with transcription factors, thereby initiating transcription at a high rate. Other genes have "weak" promoters that form weak associations with transcription factors and initiate transcription less frequently. Eukaryotic promoter regions are much more complex and difficult to identify than prokaryotic promoters.
Additionally, genes can have regulatory regions many kilobases upstream or downstream of the gene that alter expression. These act by binding to transcription factors which then cause the DNA to loop so that the regulatory sequence (and bound transcription factor) become close to the RNA polymerase binding site. For example, enhancers increase transcription by binding an activator protein which then helps to recruit the RNA polymerase to the promoter; conversely silencers bind repressor proteins and make the DNA less available for RNA polymerase.
The mature messenger RNA produced from protein-coding genes contains untranslated regions at both ends, which contain binding sites for ribosomes, RNA-binding proteins, and miRNA, as well as terminator sequences and start and stop codons. In addition, most eukaryotic open reading frames contain untranslated introns, which are removed, and exons, which are connected together in a process known as RNA splicing. Finally, the ends of gene transcripts are defined by cleavage and polyadenylation (CPA) sites, where newly produced pre-mRNA gets cleaved and a string of ~200 adenosine monophosphates is added at the 3' end. The poly(A) tail protects mature mRNA from degradation and has other functions, affecting translation, localization, and transport of the transcript from the nucleus. Splicing, followed by CPA, generates the final mature mRNA, which encodes the protein or RNA product.
Many noncoding genes in eukaryotes have different transcription termination mechanisms and they do not have poly(A) tails.
Many prokaryotic genes are organized into operons, with multiple protein-coding sequences that are transcribed as a unit. The genes in an operon are transcribed as a continuous messenger RNA, referred to as a polycistronic mRNA. The term cistron in this context is equivalent to gene. The transcription of an operon's mRNA is often controlled by a repressor that can occur in an active or inactive state depending on the presence of specific metabolites. When active, the repressor binds to a DNA sequence at the beginning of the operon, called the operator region, and represses transcription of the operon; when the repressor is inactive transcription of the operon can occur (see e.g. Lac operon). The products of operon genes typically have related functions and are involved in the same regulatory network.
Complexity
Though many genes have simple structures, as with much of biology, others can be quite complex or represent unusual edge cases. Eukaryotic genes often have introns that are much larger than their exons, and those introns can even have other genes nested inside them. Associated enhancers may be many kilobases away, or even on entirely different chromosomes, operating via physical contact between two chromosomes. A single gene can encode multiple different functional products by alternative splicing, and conversely a gene may be split across chromosomes, but those transcripts are concatenated back together into a functional sequence by trans-splicing. It is also possible for overlapping genes to share some of their DNA sequence, either on opposite strands or the same strand (in a different reading frame, or even the same reading frame).
Gene expression
In all organisms, two steps are required to read the information encoded in a gene's DNA and produce the protein it specifies. First, the gene's DNA is transcribed to messenger RNA (mRNA). Second, that mRNA is translated to protein. RNA-coding genes must still go through the first step, but are not translated into protein. The process of producing a biologically functional molecule of either RNA or protein is called gene expression, and the resulting molecule is called a gene product.
Genetic code
The nucleotide sequence of a gene's DNA specifies the amino acid sequence of a protein through the genetic code. Sets of three nucleotides, known as codons, each correspond to a specific amino acid. The principle that three sequential bases of DNA code for each amino acid was demonstrated in 1961 using frameshift mutations in the rIIB gene of bacteriophage T4 (see Crick, Brenner et al. experiment).
Additionally, a "start codon" and three "stop codons" indicate the beginning and end of the protein coding region. There are 64 possible codons (four possible nucleotides at each of three positions, hence 4^3 possible codons) and only 20 standard amino acids; hence the code is redundant and multiple codons can specify the same amino acid. The correspondence between codons and amino acids is nearly universal among all known living organisms.
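The triplet reading and redundancy described above can be made concrete in a short Python sketch; only a handful of the 64 codons are included here for illustration (a full table would list all of them):

    # Excerpt of the standard genetic code (mRNA codons -> amino acids).
    CODON_TABLE = {
        "AUG": "Met",                           # also serves as the start codon
        "UUU": "Phe", "UUC": "Phe",             # redundancy: two codons, one amino acid
        "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
        "UAA": None, "UAG": None, "UGA": None,  # the three stop codons
    }

    def translate(mrna):
        """Read triplets from the first AUG until a stop codon."""
        start = mrna.find("AUG")
        if start == -1:          # no start codon, no protein
            return []
        protein = []
        for i in range(start, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE[mrna[i:i + 3]]
            if amino_acid is None:   # stop codon ends translation
                break
            protein.append(amino_acid)
        return protein

    print(translate("AUGUUUGGAUAA"))  # ['Met', 'Phe', 'Gly']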
Transcription
Transcription produces a single-stranded RNA molecule known as messenger RNA, whose nucleotide sequence is complementary to the DNA from which it was transcribed. The mRNA acts as an intermediate between the DNA gene and its final protein product. The gene's DNA is used as a template to generate a complementary mRNA. The mRNA matches the sequence of the gene's DNA coding strand because it is synthesised as the complement of the template strand. Transcription is performed by an enzyme called an RNA polymerase, which reads the template strand in the 3' to 5' direction and synthesizes the RNA from 5' to 3'. To initiate transcription, the polymerase first recognizes and binds a promoter region of the gene. Thus, a major mechanism of gene regulation is the blocking or sequestering of the promoter region, either by tight binding by repressor molecules that physically block the polymerase or by organizing the DNA so that the promoter region is not accessible.
In prokaryotes, transcription occurs in the cytoplasm; for very long transcripts, translation may begin at the 5' end of the RNA while the 3' end is still being transcribed. In eukaryotes, transcription occurs in the nucleus, where the cell's DNA is stored. The RNA molecule produced by the polymerase is known as the primary transcript and undergoes post-transcriptional modifications before being exported to the cytoplasm for translation. One of the modifications performed is the splicing of introns which are sequences in the transcribed region that do not encode a protein. Alternative splicing mechanisms can result in mature transcripts from the same gene having different sequences and thus coding for different proteins. This is a major form of regulation in eukaryotic cells and also occurs in some prokaryotes.
Translation
Translation is the process by which a mature mRNA molecule is used as a template for synthesizing a new protein. Translation is carried out by ribosomes, large complexes of RNA and protein responsible for carrying out the chemical reactions to add new amino acids to a growing polypeptide chain by the formation of peptide bonds. The genetic code is read three nucleotides at a time, in units called codons, via interactions with specialized RNA molecules called transfer RNA (tRNA). Each tRNA has three unpaired bases known as the anticodon that are complementary to the codon it reads on the mRNA. The tRNA is also covalently attached to the amino acid specified by the complementary codon. When the tRNA binds to its complementary codon in an mRNA strand, the ribosome attaches its amino acid cargo to the new polypeptide chain, which is synthesized from amino terminus to carboxyl terminus. During and after synthesis, most new proteins must fold to their active three-dimensional structure before they can carry out their cellular functions.
Regulation
Genes are regulated so that they are expressed only when the product is needed, since expression draws on limited resources. A cell regulates its gene expression depending on its external environment (e.g. available nutrients, temperature and other stresses), its internal environment (e.g. cell division cycle, metabolism, infection status), and its specific role if in a multicellular organism. Gene expression can be regulated at any step: from transcriptional initiation, to RNA processing, to post-translational modification of the protein. The regulation of lactose metabolism genes in E. coli (lac operon) was the first such mechanism to be described in 1961.
RNA genes
A typical protein-coding gene is first copied into RNA as an intermediate in the manufacture of the final protein product. In other cases, the RNA molecules are the actual functional products, as in the synthesis of ribosomal RNA and transfer RNA. Some RNAs known as ribozymes are capable of enzymatic function, while others such as microRNAs and riboswitches have regulatory roles. The DNA sequences from which such RNAs are transcribed are known as non-coding RNA genes.
Some viruses store their entire genomes in the form of RNA, and contain no DNA at all. Because they use RNA to store genes, their cellular hosts may synthesize their proteins as soon as they are infected and without the delay in waiting for transcription. On the other hand, RNA retroviruses, such as HIV, require the reverse transcription of their genome from RNA into DNA before their proteins can be synthesized.
Inheritance
Organisms inherit their genes from their parents. Asexual organisms simply inherit a complete copy of their parent's genome. Sexual organisms have two copies of each chromosome because they inherit one complete set from each parent.
Mendelian inheritance
According to Mendelian inheritance, variations in an organism's phenotype (observable physical and behavioral characteristics) are due in part to variations in its genotype (particular set of genes). Each gene specifies a particular trait with a different sequence of a gene (alleles) giving rise to different phenotypes. Most eukaryotic organisms (such as the pea plants Mendel worked on) have two alleles for each trait, one inherited from each parent.
Alleles at a locus may be dominant or recessive; dominant alleles give rise to their corresponding phenotypes when paired with any other allele for the same trait, whereas recessive alleles give rise to their corresponding phenotype only when paired with another copy of the same allele. Knowing the genotypes of the organisms makes it possible to determine which alleles are dominant and which are recessive. For example, if the allele specifying tall stems in pea plants is dominant over the allele specifying short stems, then pea plants that inherit one tall allele from one parent and one short allele from the other parent will also have tall stems. Mendel's work demonstrated that alleles assort independently in the production of gametes, or germ cells, ensuring variation in the next generation. Although Mendelian inheritance remains a good model for many traits determined by single genes (including a number of well-known genetic disorders), it does not include the physical processes of DNA replication and cell division. A minimal cross is sketched below.
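The familiar 3:1 phenotype ratio of a monohybrid cross can be enumerated directly. A minimal Python sketch, assuming the tall/short pea alleles from the example above (the genotype strings are illustrative):

    # Monohybrid cross Tt x Tt: 'T' (tall) is dominant over 't' (short).
    from itertools import product

    def cross(parent1, parent2):
        """All equally likely offspring genotypes from two diploid parents."""
        return ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]

    offspring = cross("Tt", "Tt")
    tall = sum(1 for genotype in offspring if "T" in genotype)  # dominant phenotype
    print(offspring)                   # ['TT', 'Tt', 'Tt', 'tt']
    print(tall, "of", len(offspring))  # 3 of 4 are tall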
DNA replication and cell division
The growth, development, and reproduction of organisms relies on cell division; the process by which a single cell divides into two usually identical daughter cells. This requires first making a duplicate copy of every gene in the genome in a process called DNA replication. The copies are made by specialized enzymes known as DNA polymerases, which "read" one strand of the double-helical DNA, known as the template strand, and synthesize a new complementary strand. Because the DNA double helix is held together by base pairing, the sequence of one strand completely specifies the sequence of its complement; hence only one strand needs to be read by the enzyme to produce a faithful copy. The process of DNA replication is semiconservative; that is, the copy of the genome inherited by each daughter cell contains one original and one newly synthesized strand of DNA.
The rate of DNA replication in living cells was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli and found to be impressively rapid. During the period of exponential DNA increase at 37 °C, the rate of elongation was 749 nucleotides per second.
After DNA replication, the cell must physically separate the two genome copies and divide into two distinct membrane-bound cells. In prokaryotes (bacteria and archaea) this usually occurs via a relatively simple process called binary fission, in which each circular genome attaches to the cell membrane and is separated into the daughter cells as the membrane invaginates to split the cytoplasm into two membrane-bound portions. Binary fission is extremely fast compared to the rates of cell division in eukaryotes. Eukaryotic cell division is a more complex process known as the cell cycle; DNA replication occurs during a phase of this cycle known as S phase, whereas the process of segregating chromosomes and splitting the cytoplasm occurs during M phase.
Molecular inheritance
The duplication and transmission of genetic material from one generation of cells to the next is the basis for molecular inheritance and the link between the classical and molecular pictures of genes. Organisms inherit the characteristics of their parents because the cells of the offspring contain copies of the genes in their parents' cells. In asexually reproducing organisms, the offspring will be a genetic copy or clone of the parent organism. In sexually reproducing organisms, a specialized form of cell division called meiosis produces cells called gametes or germ cells that are haploid, or contain only one copy of each gene. The gametes produced by females are called eggs or ova, and those produced by males are called sperm. Two gametes fuse to form a diploid fertilized egg, a single cell that has two sets of genes, with one copy of each gene from the mother and one from the father.
During the process of meiotic cell division, an event called genetic recombination or crossing-over can sometimes occur, in which a length of DNA on one chromatid is swapped with a length of DNA on the corresponding homologous non-sister chromatid. This can result in reassortment of otherwise linked alleles. The Mendelian principle of independent assortment asserts that each of a parent's two genes for each trait will sort independently into gametes; which allele an organism inherits for one trait is unrelated to which allele it inherits for another trait. This is in fact only true for genes that do not reside on the same chromosome or are located very far from one another on the same chromosome. The closer two genes lie on the same chromosome, the more closely they will be associated in gametes and the more often they will appear together (known as genetic linkage). Genes that are very close are essentially never separated because it is extremely unlikely that a crossover point will occur between them.
Genome
The genome is the total genetic material of an organism and includes both the genes and non-coding sequences. Eukaryotic genes can be annotated using FINDER.
Number of genes
The genome size and the number of genes it encodes vary widely between organisms. The smallest genomes occur in viruses and viroids (which act as a single non-coding RNA gene). Conversely, plants can have extremely large genomes, with rice containing >46,000 protein-coding genes. The total number of protein-coding genes (the Earth's proteome) is estimated to be 5 million sequences.
Although the number of base-pairs of DNA in the human genome has been known since the 1950s, the estimated number of genes has changed over time as definitions of genes, and methods of detecting them have been refined. Initial theoretical predictions of the number of human genes in the 1960s and 1970s were based on mutation load estimates and the numbers of mRNAs and these estimates tended to be about 30,000 protein-coding genes. During the 1990s there were guesstimates of up to 100,000 genes and early data on detection of mRNAs (expressed sequence tags) suggested more than the traditional value of 30,000 genes that had been reported in the textbooks during the 1980s.
The initial draft sequences of the human genome confirmed the earlier predictions of about 30,000 protein-coding genes; however, that estimate has fallen to about 19,000 with the ongoing GENCODE annotation project. The number of noncoding genes is not known with certainty, but the latest estimates from Ensembl suggest 26,000 noncoding genes.
Essential genes
Essential genes are the set of genes thought to be critical for an organism's survival. This definition assumes the abundant availability of all relevant nutrients and the absence of environmental stress. Only a small portion of an organism's genes are essential. In bacteria, an estimated 250–400 genes are essential for Escherichia coli and Bacillus subtilis, which is less than 10% of their genes. Half of these genes are orthologs in both organisms and are largely involved in protein synthesis. In the budding yeast Saccharomyces cerevisiae the number of essential genes is slightly higher, at 1000 genes (~20% of their genes). Although the number is more difficult to measure in higher eukaryotes, mice and humans are estimated to have around 2000 essential genes (~10% of their genes). The synthetic organism, Syn 3, has a minimal genome of 473 essential genes and quasi-essential genes (necessary for fast growth), although 149 have unknown function.
Essential genes include housekeeping genes (critical for basic cell functions) as well as genes that are expressed at different times in the organism's development or life cycle. Housekeeping genes are used as experimental controls when analysing gene expression, since they are constitutively expressed at a relatively constant level.
Genetic and genomic nomenclature
Gene nomenclature was established by the HUGO Gene Nomenclature Committee (HGNC), a committee of the Human Genome Organisation, for each known human gene in the form of an approved gene name and symbol (short-form abbreviation), which can be accessed through a database maintained by HGNC. Symbols are chosen to be unique, and each gene has only one symbol (although approved symbols sometimes change). Symbols are preferably kept consistent with other members of a gene family and with homologs in other species, particularly the mouse due to its role as a common model organism.
Genetic engineering
Genetic engineering is the modification of an organism's genome through biotechnology. Since the 1970s, a variety of techniques have been developed to specifically add, remove and edit genes in an organism. Recently developed genome engineering techniques use engineered nuclease enzymes to create targeted DNA repair in a chromosome to either disrupt or edit a gene when the break is repaired. The related term synthetic biology is sometimes used to refer to extensive genetic engineering of an organism.
Genetic engineering is now a routine research tool with model organisms. For example, genes are easily added to bacteria and lineages of knockout mice with a specific gene's function disrupted are used to investigate that gene's function. Many organisms have been genetically modified for applications in agriculture, industrial biotechnology, and medicine.
For multicellular organisms, typically the embryo is engineered which grows into the adult genetically modified organism. However, the genomes of cells in an adult organism can be edited using gene therapy techniques to treat genetic diseases.
| Biology and health sciences | Biology | null |
4253054 | https://en.wikipedia.org/wiki/Ampere-hour | Ampere-hour | An ampere-hour or amp-hour (symbol: A⋅h or A h; often simplified as Ah) is a unit of electric charge, having dimensions of electric current multiplied by time, equal to the charge transferred by a steady current of one ampere flowing for one hour, or 3,600 coulombs.
The commonly seen milliampere-hour (symbol: mA⋅h, mA h, often simplified as mAh) is one-thousandth of an ampere-hour (3.6 coulombs).
Use
The ampere-hour is frequently used in measurements of electrochemical systems such as electroplating and for battery capacity where the commonly known nominal voltage is understood.
A milliampere second (mA⋅s) is a unit of measurement used in X-ray imaging, diagnostic imaging, and radiation therapy. It is equivalent to a millicoulomb. This quantity is proportional to the total X-ray energy produced by a given X-ray tube operated at a particular voltage. The same total dose can be delivered in different time periods depending on the X-ray tube current.
Converting a charge value in ampere-hours into energy requires knowledge of the voltage: in a battery system, for example, accurate calculation of the energy delivered requires integrating the power delivered (the product of instantaneous voltage and instantaneous current) over the discharge interval. Because the battery voltage generally varies during discharge, an average or nominal voltage may be used to approximate this integral.
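As a concrete illustration, the integral can be approximated numerically from sampled voltage and current. A minimal Python sketch with hypothetical discharge samples (the values are invented for illustration):

    def energy_wh(times_h, volts, amps):
        """Trapezoidal integration of v*i over time in hours -> watt-hours."""
        powers = [v * i for v, i in zip(volts, amps)]
        return sum((t2 - t1) * (p1 + p2) / 2
                   for t1, t2, p1, p2 in zip(times_h, times_h[1:],
                                             powers, powers[1:]))

    # Voltage sags from 4.2 V to 3.0 V at a constant 1 A over 3 hours:
    print(energy_wh([0, 1, 2, 3], [4.2, 3.9, 3.6, 3.0], [1, 1, 1, 1]))  # ~11.1 Wh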
When comparing the energy capacities of battery-based products that might have different internal cell chemistries or cell configurations, a simple ampere-hour rating is often insufficient. For example, a small UPS product with multiple DC outputs at different voltages may be listed with a single ampere-hour rating, e.g. 8800 mAh, based on its internal lithium iron phosphate cells at 3.2 V. Its perceived energy capacity would then be exaggerated by a factor of 3.75 compared with a sealed 12-volt lead-acid battery whose ampere-hour rating, e.g. 7 Ah, is based on the total output voltage rather than the internal cell voltage; the 12-volt output of the example UPS product can actually deliver only about a third of the energy of the example battery, not a quarter more. A direct replacement product for the example battery, in the same form factor and with comparable output voltage and energy capacity but based on lithium iron phosphate cells, might also be specified as 7 Ah, here based on output voltage rather than cell chemistry. For consumers without an engineering background, these difficulties would be avoided by a specification of the watt-hour rating instead (or additionally).
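The arithmetic behind the example can be checked by converting both ratings to watt-hours (values taken from the text above):

    ups_wh = 8.8 * 3.2    # 8800 mAh at the 3.2 V internal cell voltage -> 28.16 Wh
    battery_wh = 7 * 12   # 7 Ah at the 12 V output voltage -> 84 Wh
    print(ups_wh / battery_wh)  # ~0.335, i.e. about a third of the energy
    print(12 / 3.2)             # 3.75, the exaggeration factor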
In other units of electric charge
One ampere-hour is equal to (up to 4 significant figures; a short verification sketch follows this list):
3,600 coulombs
2.247 × 10^22 elementary charges
0.03731 faradays
1.079 × 10^13 statcoulombs (CGS-ESU equivalent)
360 abcoulombs (CGS-EMU equivalent)
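These equivalences follow from the coulomb value and standard constants. A short Python check (constants used: elementary charge ≈ 1.602 × 10^-19 C, Faraday constant ≈ 96,485 C, 1 C ≈ 2.998 × 10^9 statC, 1 abC = 10 C):

    coulombs = 1 * 3600                # one ampere-hour
    print(coulombs / 1.602176634e-19)  # ~2.247e22 elementary charges
    print(coulombs / 96485.332)        # ~0.03731 faradays
    print(coulombs * 2.99792458e9)     # ~1.079e13 statcoulombs
    print(coulombs / 10)               # 360 abcoulombs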
Examples
An AA size dry cell has a capacity of about 2,000 to 3,000 milliampere-hours.
An average smartphone battery usually has between 2,500 and 4,000 milliampere-hours of electric capacity.
Automotive car batteries vary in capacity but a large automobile propelled by an internal combustion engine would have about a 50-ampere-hour battery capacity.
Since one ampere-hour can produce 0.336 grams of aluminium from molten aluminium chloride, producing a ton of aluminium required transfer of at least 2.98 million ampere-hours.
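The ton figure is a direct division; a one-line check (1 metric ton = 10^6 grams):

    print(1e6 / 0.336)  # ~2.98e6 ampere-hours per metric ton of aluminium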
| Physical sciences | Charge | Basics and measurement |
4254776 | https://en.wikipedia.org/wiki/Postterm%20pregnancy | Postterm pregnancy | Postterm pregnancy is when a woman has not yet delivered her baby after 42 weeks of gestation, two weeks beyond the typical 40-week duration of pregnancy. Postmature births carry risks for both the mother and the baby, including fetal malnutrition, meconium aspiration syndrome, and stillbirths. After the 42nd week of gestation, the placenta, which supplies the baby with nutrients and oxygen from the mother, starts aging and will eventually fail. Postterm pregnancy is a reason to induce labor.
Definitions
The management of labor and delivery may vary depending on the gestational age. It is common to encounter the following terms when describing different time periods of pregnancy; a small classification sketch follows the list.
Postterm – ≥ 42 weeks + 0 days of gestation (> 293 days from the first day of last menstrual period, or > 13 days from the estimated due date)
Late term – 41 weeks + 0 days to 41 weeks + 6 days of gestation
Full term – 39 weeks + 0 days to 40 weeks + 6 days of gestation
Early term – 37 weeks + 0 days to 38 weeks + 6 days of gestation
Preterm – ≤ 36 weeks + 6 days of gestation
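Since the categories are defined purely by completed weeks and days of gestation, they can be expressed as a simple classifier. A minimal Python sketch (the function name and input convention are illustrative, not a clinical tool):

    def classify_gestation(weeks, days=0):
        """Map gestational age (completed weeks + days) to the terms above."""
        total_days = weeks * 7 + days
        if total_days >= 42 * 7:
            return "postterm"
        if total_days >= 41 * 7:
            return "late term"
        if total_days >= 39 * 7:
            return "full term"
        if total_days >= 37 * 7:
            return "early term"
        return "preterm"

    print(classify_gestation(42))     # postterm
    print(classify_gestation(40, 6))  # full term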
Besides postterm pregnancy, other terminologies have been used to describe the same condition (≥ 42w+0d), such as prolonged pregnancy, postdates, and postdatism. However, these terminologies are less commonly used to avoid confusion.
Postterm pregnancy should not be confused with postmaturity, postmaturity syndrome, or dysmaturity. These terms describe the neonatal condition that may be caused by postterm pregnancy instead of the duration of pregnancy.
Signs and symptoms
Because postterm pregnancy is a condition solely based on gestational age, there are no confirming physical signs or symptoms. While it is difficult to determine gestational age physically, infants that are born postterm may be associated with a physical condition called postmaturity. The most common symptoms for this condition are dry skin, overgrown nails, creases on the baby's palms and soles of their feet, minimal fat, abundant hair on their head, and either a brown, green, or yellow discoloration of their skin. Doctors diagnose postmature birth based on the baby's physical appearance and the length of the mother's pregnancy. However, some postmature babies may show no or few signs of postmaturity.
Baby
Reduced placental perfusion – Once a pregnancy has surpassed the 40-week gestation period, doctors closely monitor the mother for signs of placental deterioration. Toward the end of pregnancy, calcium is deposited on the walls of blood vessels, and proteins are deposited on the surface of the placenta, which changes the placenta. This limits the blood flow through the placenta and ultimately leads to placental insufficiency, and the baby is no longer properly nourished. Induced labor is strongly encouraged if this happens.
Oligohydramnios – Low volume of amniotic fluid surrounding the fetus. It is associated with complications such as cord compression, abnormal heart rate, fetal acidosis, and meconium amniotic fluid.
Meconium aspiration syndrome – Respiratory compromise secondary to meconium present in infant's lungs.
Macrosomia – Excessive birth weight, estimated fetal weight of ≥ 4.5 kg. It can further increase the risk of prolonged labor and shoulder dystocia.
Shoulder dystocia – Difficulty in delivering the shoulders due to increased body size.
Increased forceps-assisted or vacuum-assisted birth – When postterm babies are larger than average, forceps or vacuum delivery may be used to resolve the difficulties at the delivery time, such as shoulder dystocia. Complications include lacerations, skin markings, external eye trauma, intracranial injury, facial nerve injury, skull fracture, and rarely death.
Mother
Increased labor induction – Induction may be needed if labor progression is abnormal. Oxytocin, a medication used in induction, may have side effects such as low blood pressure.
Increased forceps assisted or vacuum assisted birth – operative vaginal deliveries increase maternal risks of genital trauma.
Increased Caesarean birth – Postterm babies may be larger than an average baby, thus increasing the length of labor. Labor is prolonged because the baby's head may be too big to pass through the mother's pelvis, a condition called cephalopelvic disproportion. Caesarean sections are encouraged if this happens. Complications include bleeding, infection, abnormal wound healing, abnormal placenta in future pregnancies, and rarely death.
A 2019 randomized controlled trial of induced labor at 42 or 43 weeks was terminated early due to statistical evidence of "significantly increased risk for women induced at the start of week 43". The study's findings support clinical guidelines recommending induction of labor no later than 41 gestational weeks.
Causes
The causes of post-term births are unknown, but postmature births are more likely when the mother has experienced a previous postmature birth. Due dates are easily miscalculated when the mother is unsure of her last menstrual period. When there is a miscalculation, the baby could be delivered before or after the expected due date. Postmature births can also be attributed to irregular menstrual cycles. When the menstrual period is irregular, it is difficult to judge the moment of ovulation and subsequent fertilization and pregnancy. Some postmature pregnancies may not be postmature in reality due to the uncertainty of the mother's last menstrual period. However, in most countries where gestation is measured by ultrasound scan technology, this is less likely.
Monitoring
Once a pregnancy is diagnosed postterm, usually at or greater than 42 weeks of gestational age, the mother should be offered additional monitoring as this can provide valuable clues that the fetal health is being maintained.
Fetal movement recording
Regular movements of the fetus are the best sign indicating that it is still in good health. The mother should keep a "kick-chart" to record the movements of her fetus. If there is a reduction in the number of movements, it could indicate placental deterioration.
Doppler fetal monitor
Doppler fetal monitor is a hand-held device that is routinely used in prenatal care. When it is used correctly, it can quickly measure the fetal heart rate. The baseline of fetal heart rate is typically between 110 and 160 beats per minute.
Doppler flow study
Doppler flow study is a type of ultrasound that measures the amount of blood flowing in and out of the placenta. The ultrasound machine can also detect the direction of blood flow and display it in red or blue. Usually, a red color indicates a flow toward the ultrasound transducer, while blue indicates a flow away from the transducer. Based on the display, doctors can evaluate blood flow to the umbilical arteries, umbilical veins, or other organs such as heart and brain.
Nonstress test
Nonstress test (NST) is a type of electronic fetal monitoring that uses a cardiotocograph to monitor fetal heartbeat, fetal movement and mother's contraction. NST is typically monitored for at least 20 minutes. Signs of a reactive (normal) NST include a baseline fetal heart rate (FHR) between 110 and 160 beats per minute (bpm) and 2 accelerations of FHR of at least 15 bpm above baseline for over 15 seconds. Vibroacoustic stimulation and longer monitoring may be needed if NST is non-reactive.
Biophysical profile
A biophysical profile is a noninvasive procedure that uses the ultrasound to evaluate the fetal health based on NST and four ultrasound parameters: fetal movement, fetal breathing, fetal muscle tone, and the amount of amniotic fluid surrounding the fetus. A score of 2 points is given for each category that meets the criteria or 0 points if the criteria are not met (no 1 point). Sometimes, the NST is omitted, making the highest score 8/8 instead of 10/10. Generally, a score of 8/10 or 10/10 is considered a normal test result, unless 0 points is given for amniotic fluid. A score of 6/10 with normal amniotic fluid is considered equivocal, and a repeated test within 24 hours may be needed. A score of 4/10 or less is considered abnormal, and delivery may be indicated. Low amniotic fluid can cause pinching umbilical cord, decreasing blood flow to the fetus. Therefore, a score of 0 points for amniotic fluid may indicate the fetus is at risk.
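The scoring rule described above is mechanical enough to sketch in code. The component names and thresholds follow the text; the functions are illustrative, not a clinical tool:

    def bpp_score(nst_reactive, movement, breathing, tone, fluid_normal):
        """Two points per component met, zero otherwise (no 1-point scores)."""
        components = [nst_reactive, movement, breathing, tone, fluid_normal]
        return sum(2 for met in components if met)

    def interpret(score, fluid_normal):
        if score >= 8 and fluid_normal:
            return "normal"
        if score == 6 and fluid_normal:
            return "equivocal; consider repeating within 24 hours"
        return "abnormal or at risk; further evaluation indicated"

    s = bpp_score(True, True, True, False, True)  # 8/10
    print(s, interpret(s, fluid_normal=True))     # 8 normal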
Management
Expectant
A woman who has reached 42 weeks of pregnancy is likely to be offered induction of labour. Alternatively, she can choose expectant management, that is, she waits for the natural onset of labour. Women opting for expectant management may also choose to carry on with additional monitoring of their baby, with regular CTG, ultrasound, and biophysical profile. Risks of expectant management vary between studies.
In many places in the world, according to the World Health Organization and others, such services are rudimentary or not available, and deserve improvement.
Inducing labor
Inducing labor artificially starts the labor process by using medication and other techniques. Labor is usually only induced if there is potential danger to the mother or child.
There are several reasons for labor induction: the mother's water has broken but contractions have not started; the child is postmature; the mother has diabetes or high blood pressure; or there is not enough amniotic fluid around the baby. Labor induction is not always the best choice because it has its own risks. Sometimes mothers will request to be induced for reasons that are not medical. This is called an elective induction. Doctors try to avoid inducing labor unless it is completely necessary.
Procedure
There are four common methods of starting contractions: stripping the membranes, breaking the mother's water, giving the hormone prostaglandin, and giving the synthetic hormone Pitocin. Stripping the membranes does not work for all women, but does for most. A doctor inserts a finger into the mother's cervix and moves it around to separate the membrane connecting the amniotic sac, which houses the baby, from the walls of the uterus. Once this membrane is stripped, the hormone prostaglandin is naturally released into the mother's body and initiates contractions. Most of the time, doing this only once will not immediately start labor. It may have to be done several times before the stimulant hormone is released and contractions start. The next method is breaking the mother's water, which is also referred to as an amniotomy. The doctor uses a plastic hook to break the membrane and rupture the amniotic sac. Within a few hours labor usually begins. Giving the hormone prostaglandin ripens the cervix, meaning the cervix softens, thins out, or dilates. The drug Cervidil is administered by mouth in tablet form or in gel form as an insert. This is most often done in the hospital overnight. The hormone oxytocin is usually given in the synthetic form of Pitocin. It is administered through an IV throughout the labor process. This hormone stimulates contractions. Pitocin is also used to "restart" labor when it is lagging.
The use of misoprostol is also allowed, but close monitoring of the mother is required.
Feelings
Stripping the membranes: Stripping the membranes takes only a few minutes and causes a few intense cramps. Many women report a feeling similar to urination; others report it to be quite painful.
Breaking the water: Having one's water broken feels like a slight tug and then a warm flow of liquid.
Pitocin: When the synthetic hormone Pitocin is used, contractions occur more frequently than in naturally occurring labor, and they are also more intense.
Epidemiology
Prevalence of postterm pregnancy may vary between countries due to differences in population characteristics or medical management. Factors include the proportion of first-time pregnancies, genetic predisposition, the timing of ultrasound assessment, and Caesarean section rates. The overall incidence is approximately 7%. According to birth certificate data, postterm pregnancy occurs in approximately 0.4% of pregnancies in the United States.
| Biology and health sciences | Human reproduction | Biology |
4255718 | https://en.wikipedia.org/wiki/Organosilicon%20chemistry | Organosilicon chemistry | Organosilicon chemistry is the study of organometallic compounds containing carbon–silicon bonds, which are called organosilicon compounds. Most organosilicon compounds are similar to ordinary organic compounds, being colourless, flammable, hydrophobic, and stable to air. Silicon carbide, by contrast, is an inorganic compound.
History
In 1863, Charles Friedel and James Crafts made the first organochlorosilane compound. The same year, they also described a "polysilicic acid ether" in the preparation of ethyl- and methyl-o-silicic acid. Extensive research in the field of organosilicon compounds was pioneered at the beginning of the 20th century by Frederic S. Kipping, who coined the term "silicone" (by analogy with ketones, though the resemblance is erroneous) in relation to these materials in 1904. In recognition of Kipping's achievements, the Dow Chemical Company established an award in the 1960s that is given for significant contributions to the field of silicon chemistry. In his works, Kipping was noted for using Grignard reagents to make alkylsilanes and arylsilanes and for preparing silicone oligomers and polymers for the first time.
In 1945, Eugene G. Rochow also made a significant contribution to the field of organosilicon chemistry by first describing the Müller-Rochow process.
Occurrence and applications
Organosilicon compounds are widely encountered in commercial products. Most common are antifoamers, caulks (sealant), adhesives, and coatings made from silicones. Other important uses include agricultural and plant control adjuvants commonly used in conjunction with herbicides and fungicides.
Biology and medicine
Carbon–silicon bonds are absent in biology; however, enzymes have been used to artificially create carbon–silicon bonds in living microbes. Silicates, by contrast, are well known to occur in diatoms. Silafluofen is an organosilicon compound that functions as a pyrethroid insecticide. Several organosilicon compounds have been investigated as pharmaceuticals.
Bonding
In the great majority of organosilicon compounds, Si is tetravalent with tetrahedral molecular geometry. Compared to carbon–carbon bonds, carbon–silicon bonds are longer and weaker.
The C–Si bond is somewhat polarised towards carbon due to carbon's greater electronegativity (C 2.55 vs Si 1.90), and single bonds from Si to electronegative elements are very strong. Silicon is thus susceptible to nucleophilic attack by O−, Cl−, or F−; the energy of an Si–O bond in particular is strikingly high. This feature is exploited in many reactions such as the Sakurai reaction, the Brook rearrangement, the Fleming–Tamao oxidation, and the Peterson olefination.
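As a quick arithmetic check using the Pauling values quoted above:

Δχ = χ(C) − χ(Si) = 2.55 − 1.90 = 0.65

This modest difference places the C–Si bond in the weakly polar range, with carbon carrying the partial negative charge.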
The Si–C bond (1.89 Å) is significantly longer than a typical C–C bond (1.54 Å), suggesting that silyl substituents have less steric demand than their organyl analogues. When geometry allows, silicon exhibits negative hyperconjugation, reversing the usual polarization on neighboring atoms.
Preparation
The first organosilicon compound, tetraethylsilane, was prepared by Charles Friedel and James Crafts in 1863 by reaction of tetrachlorosilane with diethylzinc.
The bulk of organosilicon compounds derive from organosilicon chlorides. These chlorides are produced by the "Direct process", which entails the reaction of methyl chloride with a silicon–copper alloy. The main and most sought-after product is dimethyldichlorosilane:
2 CH3Cl + Si → (CH3)2SiCl2
A variety of other products are obtained, including trimethylsilyl chloride and methyltrichlorosilane. About 1 million tons of organosilicon compounds are prepared annually by this route. The method can also be used for phenyl chlorosilanes.
Hydrosilylation
Another major method for the formation of Si-C bonds is hydrosilylation (also called hydrosilation). In this process, compounds with Si-H bonds (hydrosilanes) are added to unsaturated substrates. Commercially, the main substrates are alkenes. Other unsaturated functional groups — alkynes, imines, ketones, and aldehydes — also participate, but these reactions are of little economic value.
Hydrosilylation requires metal catalysts, especially those based on platinum group metals.
In the related silylmetalation, a metal replaces the hydrogen atom.
Cleavage of Si-Si bonds
Hexamethyldisilane reacts with methyl lithium to give trimethylsilyl lithium:
(CH3)3Si-Si(CH3)3 + CH3Li → (CH3)3SiLi + (CH3)4Si
Similarly, tris(trimethylsilyl)silyl lithium is derived from tetrakis(trimethylsilyl)silane:
((CH3)3Si)4Si + CH3Li → ((CH3)3Si)3SiLi + (CH3)4Si
Functional groups
Silicon is a component of many functional groups. Most of these are analogous to organic compounds. The overarching exception is the rarity of multiple bonds to silicon, as reflected in the double bond rule.
Silanols, siloxides, and siloxanes
Silanols are analogues of alcohols. They are generally prepared by hydrolysis of silyl chlorides:
R3SiCl + H2O → R3SiOH + HCl
Less frequently silanols are prepared by oxidation of silyl hydrides, a reaction that uses a metal catalyst:
2 R3SiH + O2 → 2 R3SiOH
Many silanols have been isolated, including trimethylsilanol and triphenylsilanol. They are about 500 times more acidic than the corresponding alcohols. Siloxides are the deprotonated derivatives of silanols:
R3SiOH + NaOH → R3SiONa + H2O
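The 500-fold acidity difference noted above can be restated on the pKa scale (simple arithmetic, not an additional measurement). Because pKa is the negative base-10 logarithm of the acid dissociation constant:

ΔpKa = log10(500) ≈ 2.7

so a silanol's pKa is roughly 2.7 units lower than that of the analogous alcohol.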
Silanols tend to dehydrate to give siloxanes:
2 R3SiOH → R3Si-O-SiR3 + H2O
Polymers with repeating siloxane linkages are called silicones. Compounds with an Si=O double bond, called silanones, are extremely unstable.
Silyl ethers
Silyl ethers have the connectivity Si-O-C. They are typically prepared by the reaction of alcohols with silyl chlorides:
R3SiCl + R'OH → R3Si-O-R' + HCl
Silyl ethers are extensively used as protective groups for alcohols.
Exploiting the strength of the Si-F bond, fluoride sources such as tetra-n-butylammonium fluoride (TBAF) are used in deprotection of silyl ethers:
R3Si-O-R' + F− + H2O → R3Si-F + H-O-R' + OH−
Silyl chlorides
Organosilyl chlorides are important commodity chemicals. They are mainly used to produce silicone polymers as described above. The especially important silyl chlorides dimethyldichlorosilane ((CH3)2SiCl2), methyltrichlorosilane (CH3SiCl3), and trimethylsilyl chloride ((CH3)3SiCl) are all produced by the direct process. More specialized derivatives that find commercial applications include dichloromethylphenylsilane, trichloro(chloromethyl)silane, trichloro(dichlorophenyl)silane, trichloroethylsilane, and phenyltrichlorosilane.
Although proportionately a minor outlet, organosilicon compounds are widely used in organic synthesis; notably, trimethylsilyl chloride is the main silylating agent. One classic method for the synthesis of this compound class, the Flood reaction, involves heating hexaalkyldisiloxanes with concentrated sulfuric acid and a sodium halide.
Silyl hydrides
The silicon to hydrogen bond is longer than the C–H bond (148 compared to 105 pm) and weaker (299 compared to 338 kJ/mol). Hydrogen is more electronegative than silicon, hence the naming convention of silyl hydrides. Commonly the presence of the hydride is not mentioned in the name of the compound: triethylsilane has the formula (C2H5)3SiH, and phenylsilane is C6H5SiH3. The parent compound, SiH4, is called silane.
Silenes
Organosilicon compounds, unlike their carbon counterparts, do not have a rich double bond chemistry. Compounds with silene Si=C bonds (also known as alkylidenesilanes) are laboratory curiosities, such as the silicon analogue of benzene, silabenzene. In 1967, Gusel'nikov and Flowers provided the first evidence for silenes from pyrolysis of dimethylsilacyclobutane. The first stable (kinetically shielded) silene was reported in 1981 by Brook.
Disilenes have Si=Si double bonds, and disilynes are silicon analogues of alkynes. The first silyne (with a silicon-to-carbon triple bond) was reported in 2010.
Siloles
Siloles, also called silacyclopentadienes, are members of a larger class of compounds called metalloles. They are the silicon analogs of cyclopentadienes and are of current academic interest due to their electroluminescence and other electronic properties. Siloles are efficient in electron transport. They owe their low-lying LUMO to a favorable interaction between the antibonding sigma orbital on silicon and an antibonding pi orbital of the butadiene fragment.
Pentacoordinated silicon
Unlike carbon, silicon compounds can also coordinate to five atoms, in a group of compounds ranging from so-called silatranes, such as phenylsilatrane, to a uniquely stable pentaorganosilicate.
The stability of hypervalent silicon is the basis of the Hiyama coupling, a coupling reaction used in certain specialized organic synthetic applications. The reaction begins with the activation of a Si-C bond by fluoride:
R-SiR'3 + R"-X + F− → R-R" + R'3Si-F + X−
Various reactions
In general, almost any silicon–heteroatom bond is water-sensitive and will hydrolyze spontaneously. Unstrained silicon–carbon bonds, however, are very strong, and cleave only under a small number of extreme conditions. Strong acids will protodesilylate arylsilanes and, in the presence of a Lewis acid catalyst, alkylsilanes. Most nucleophiles are too weak to displace carbon from silicon; the exceptions are fluoride ions and alkoxides, although the latter often deprotonate the organosilane to a silicon ylide instead.
As a covalent hydride source, hydrosilanes are good reductants.
Certain allylsilanes can be prepared from allylic esters such as 1 and monosilylcopper compounds, which are formed in situ by the reaction of the disilylzinc compound 2 with copper iodide.
In this reaction type, the usual polarity of silicon is reversed in its bond to zinc, and a formal allylic substitution of the benzoyloxy group takes place.
Unsaturated silanes like the above are susceptible to electrophilic substitution.
Environmental effects
Organosilicon compounds affect bee (and other insect) immune expression, making them more susceptible to viral infection.
| Physical sciences | Organic compounds | null |
4256725 | https://en.wikipedia.org/wiki/Archaeplastida | Archaeplastida | The Archaeplastida (or kingdom Plantae sensu lato, "in a broad sense") are a major group of eukaryotes, comprising the photoautotrophic red algae (Rhodophyta), green algae, land plants, and the minor group glaucophytes. It also includes the non-photosynthetic lineage Rhodelphidia, a predatory (eukaryotrophic) flagellate that is sister to the Rhodophyta, and probably the microscopic picozoans. The Archaeplastida have chloroplasts that are surrounded by two membranes, suggesting that they were acquired directly through a single endosymbiosis event by phagocytosis of a cyanobacterium. All other groups which have chloroplasts, besides the amoeboid genus Paulinella, have chloroplasts surrounded by three or four membranes, suggesting they were acquired secondarily from red or green algae. Unlike red and green algae, glaucophytes have never been involved in secondary endosymbiosis events.
The cells of the Archaeplastida typically lack centrioles and have mitochondria with flat cristae. They usually have a cell wall that contains cellulose, and food is stored in the form of starch. However, these characteristics are also shared with other eukaryotes. The main evidence that the Archaeplastida form a monophyletic group comes from genetic studies, which indicate their plastids probably had a single origin. This evidence is disputed. Based on the evidence to date, it is not possible to confirm or refute alternative evolutionary scenarios to a single primary endosymbiosis. Photosynthetic organisms with plastids of different origin (such as brown algae) do not belong to the Archaeplastida.
The archaeplastidans fall into two main evolutionary lines. The red algae are pigmented with chlorophyll a and phycobiliproteins, like most cyanobacteria, and accumulate starch outside the chloroplasts. The green algae and land plants – together known as Viridiplantae (Latin for "green plants") or Chloroplastida – are pigmented with chlorophylls a and b, but lack phycobiliproteins, and starch is accumulated inside the chloroplasts. The glaucophytes have typical cyanobacterial pigments, but their plastids (called cyanelles) differ in having a peptidoglycan outer layer.
Archaeplastida should not be confused with the older and obsolete name Archiplastideae, which refers to cyanobacteria and other groups of bacteria.
Taxonomy
The consensus in 2005, when the group consisting of the glaucophytes and red and green algae and land plants was named 'Archaeplastida', was that it was a clade, i.e. was monophyletic. Many studies published since then have provided evidence in agreement. Other studies, though, have suggested that the group is paraphyletic. To date, the situation appears unresolved, but a strong signal for Plantae (Archaeplastida) monophyly has been demonstrated in a recent study (with an enrichment of red algal genes). The assumption made here is that Archaeplastida is a valid clade.
Various names have been given to the group. Some authors have simply referred to the group as plants or Plantae. However, the name Plantae is ambiguous, since it has also been applied to less inclusive clades, such as Viridiplantae and embryophytes. To distinguish, the larger group is sometimes known as Plantae sensu lato ("plants in the broad sense").
To avoid ambiguity, other names have been proposed. Primoplantae, which appeared in 2004, seems to be the first new name suggested for this group. Another name applied to this node is Plastida, defined as the clade sharing "plastids of primary (direct prokaryote) origin [as] in Magnolia virginiana Linnaeus 1753".
Although many studies have suggested the Archaeplastida form a monophyletic group, a 2009 paper argues that they are in fact paraphyletic. The enrichment of novel red algal genes in a recent study demonstrates a strong signal for Plantae (Archaeplastida) monophyly and an equally strong signal of gene sharing history between the red/green algae and other lineages. This study provides insight on how rich mesophilic red algal gene data are crucial for testing controversial issues in eukaryote evolution and for understanding the complex patterns of gene inheritance in protists.
The name Archaeplastida was proposed in 2005 by a large international group of authors (Adl et al.), who aimed to produce a classification for the eukaryotes which took into account morphology, biochemistry, and phylogenetics, and which had "some stability in the near term." They rejected the use of formal taxonomic ranks in favour of a hierarchical arrangement where the clade names do not signify rank. Thus, the phylum name 'Glaucophyta' and the class name 'Rhodophyceae' appear at the same level in their classification. The divisions proposed for the Archaeplastida are shown below in both tabular and diagrammatic form.
Archaeplastida:
Glaucophyta Skuja, 1954 (Glaucocystophyta Kies & Kremer, 1986) – glaucophytes
Glaucophytes are a small group of freshwater single-celled algae. Their chloroplasts, called cyanelles, have a peptidoglycan layer, making them more similar to cyanobacteria than those of the remaining Archaeplastida.
Rhodophyceae Thuret, 1855, emend. Rabenhorst, 1863, emend. Adl et al., 2005 (Rhodophyta Wettstein 1901) – red algae
Red algae form one of the largest groups of algae. Most are seaweeds, being multicellular and marine. Their red colour comes from phycobiliproteins, used as accessory pigments in light capture for photosynthesis.
Chloroplastida Adl et al., 2005 (Viridiplantae Cavalier-Smith 1981; Chlorobionta Jeffrey 1982, emend. Bremer 1985, emend. Lewis and McCourt 2004; Chlorobiota Kendrick and Crane 1997)
Chloroplastida is the term chosen by Adl et al. for the group made up of the green algae and land plants (embryophytes). Except where lost secondarily, all have chloroplasts without a peptidoglycan layer and lack phycobiliproteins.
Chlorophyta Pascher, 1914, emend. Lewis & McCourt, 2004 – green algae (part)
Adl et al. employ a narrow definition of the Chlorophyta; other sources include the Chlorodendrales and Prasinophytae, which may themselves be combined.
Ulvophyceae Mattox & Stewart, 1984
Trebouxiophyceae Friedl, 1995 (Pleurastrophyceae Mattox et al. 1984; Microthamniales Melkonian 1990)
Chlorophyceae Christensen, 1994
Chlorodendrales Fritsch, 1917 – green algae (part)
Prasinophytae Cavalier-Smith, 1998, emend. Lewis & McCourt, 2004 – green algae (part)
Mesostigma Lauterborn, 1894, emend. McCourt in Adl et al., 2005 (Mesostigmata Turmel, Otis, and Lemieux 2002)
Charophyta Karol et al., 2001, emend. Lewis & McCourt, 2004 (Charophyceae Smith 1938, emend. Mattox and Stewart 1984) – green algae (part) and land plants
Charophyta sensu lato, as used by Adl et al., is a monophyletic group which is made up of some green algae, including the stoneworts (Charophyta sensu stricto), as well as the land plants (embryophytes).
Sub-divisions other than Streptophytina (below) were not given by Adl et al.
Other sources would include the green algal groups Chlorokybales, Klebsormidiales, Zygnematales and Coleochaetales.
Streptophytina Lewis & McCourt, 2004 – stoneworts and land plants
Charales Lindley 1836 (Charophytae Engler, 1887) – stoneworts
Plantae Haeckel 1866 (Cormophyta Endlicher, 1836; Embryophyta Endlicher, 1836, emend. Lewis & McCourt, 2004) – land plants (embryophytes)
External phylogeny
Below is a consensus reconstruction of the relationships of Archaeplastida with its nearest neighbours, mainly based on molecular data.
There has been disagreement near the Archaeplastida root, e.g. whether Cryptista emerged within the Archaeplastida. In 2014 a thorough review was published on these inconsistencies. The positions of Telonemia and Picozoa are not clear. Also, Hacrobia (Haptista + Cryptista) may be completely associated with the SAR clade. The SAR are often seen as eukaryote-eukaryote hybrids, which contributes to the confusion in genetic analyses. A sister of Gloeomargarita lithophora was engulfed by an ancestor of the Archaeplastida, leading to the plastids that live in permanent endosymbiosis in most of the descendant lineages. Because both Gloeomargarita and related cyanobacteria, in addition to the most primitive archaeplastids, all live in freshwater, it seems the Archaeplastida originated in freshwater, and only colonized the oceans in the late Proterozoic.
Internal phylogeny
In 2019, a phylogeny of the Archaeplastida based on genomes and transcriptomes from 1,153 plant species was proposed. The placing of algal groups is supported by phylogenies based on genomes from the Mesostigmatophyceae and Chlorokybophyceae that have since been sequenced. Both the "chlorophyte algae" and the "streptophyte algae" are treated as paraphyletic (vertical bars beside phylogenetic tree diagram) in this analysis. The classification of Bryophyta is supported both by Puttick et al. 2018, and by phylogenies involving the hornwort genomes that have also since been sequenced. Recent work on non-photosynthetic algae places Rhodelphidia as sister to Rhodophyta or to Glaucophyta and Viridiplantae; and Picozoa sister to that pair of groups.
Morphology
All archaeplastidans have plastids (chloroplasts) that carry out photosynthesis and are believed to be derived from endosymbiotic cyanobacteria. In glaucophytes, perhaps the most primitive members of the group, the chloroplast is called a cyanelle and shares several features with cyanobacteria, including a peptidoglycan cell wall, that are not retained in other members of the group. The resemblance of cyanelles to cyanobacteria supports the endosymbiotic theory.
The cells of most archaeplastidans have walls, commonly but not always made of cellulose.
The Archaeplastida vary widely in the degree of their cell organization, from isolated cells to filaments to colonies to multi-celled organisms. The earliest were unicellular, and many groups remain so today. Multicellularity evolved separately in several groups, including red algae, ulvophyte green algae, and in the green algae that gave rise to stoneworts and land plants.
Endosymbiosis
Because the ancestral archaeplastidan is hypothesized to have acquired its chloroplasts directly by engulfing cyanobacteria, the event is known as a primary endosymbiosis (as reflected in the name chosen for the group 'Archaeplastida', i.e. 'ancient plastid'). In 2013 it was discovered that one species of green algae, Cymbomonas tetramitiformis in the order Pyramimonadales, is a mixotroph, able to support itself through both phagotrophy and phototrophy. It is not yet known whether this is a primitive trait, and therefore a characteristic of the last common ancestor of Archaeplastida, which could explain how it obtained its chloroplasts, or a trait regained by horizontal gene transfer. Since then, more species of mixotrophic green algae, such as Pyramimonas tychotreta and Mantoniella antarctica, have been found.
Evidence for primary endosymbiosis includes the presence of a double membrane around the chloroplasts; one membrane belonged to the bacterium, and the other to the eukaryote that captured it. Over time, many genes from the chloroplast have been transferred to the nucleus of the host cell through endosymbiotic gene transfer (EGT). It is estimated that 6–20% of the archaeplastidan genome consists of genes transferred from the endosymbiont. The presence of such genes in the nuclei of eukaryotes without chloroplasts suggests this transfer happened early in the evolution of the group.
Other eukaryotes with chloroplasts appear to have gained them by engulfing a single-celled archaeplastidan with its own bacterially-derived chloroplasts. Because these events involve endosymbiosis of cells that have their own endosymbionts, the process is called secondary endosymbiosis. The chloroplasts of such eukaryotes are typically surrounded by more than two membranes, reflecting a history of multiple engulfment. The chloroplasts of euglenids, chlorarachniophytes and a small group of dinoflagellates appear to be captured green algae, whereas those of the remaining photosynthetic eukaryotes, such as heterokont algae, cryptophytes, haptophytes, and dinoflagellates, appear to be captured red algae.
Fossil record
Perhaps the most ancient remains of Archaeplastida are putative red algae (Rafatazmia) within stromatolites in 1600 Ma (million years ago) rocks in India, as well as possible alga fossils (Tuanshanzia) from China's Gaoyuzhuang Biota of a similar age. Somewhat more recent are microfossils from the Roper group in northern Australia. The structure of these single-celled fossils resembles that of modern green algae. They date to the Mesoproterozoic Era, about 1500 to 1300 Ma. These fossils are consistent with a molecular clock study that calculated that this clade diverged about 1500 Ma. The oldest fossil that can be assigned to a specific modern group is the red alga Bangiomorpha, from 1200 Ma.
In the late Neoproterozoic Era, algal fossils became more numerous and diverse. Eventually, in the Paleozoic Era, plants emerged onto land, and have continued to flourish up to the present.
| Biology and health sciences | Bikonts | Plants |
3122976 | https://en.wikipedia.org/wiki/Thescelosaurus | Thescelosaurus | Thescelosaurus is a genus of ornithischian dinosaur that lived during the Late Cretaceous period in western North America. It was named and described in 1913 by the paleontologist Charles W. Gilmore; the type species is T. neglectus. Two other species, T. garbanii and T. assiniboiensis, were named in 1976 and 2011, respectively. Additional species have been suggested but are currently not accepted. Thescelosaurus is the eponymous member of its family, the Thescelosauridae. Thescelosaurids are either considered to be basal ("primitive") ornithopods, or are placed outside of this group within the broader group Neornithischia.
Adult Thescelosaurus moved on two legs, with the body counter-balanced by a long tail that made up half of the total body length and was stiffened by rod-like ossified tendons. The animal had a long, low snout that ended in a toothless beak. It had more teeth than related genera, and the teeth were of different types. The hand bore five fingers, and the foot four toes. Thin plates are found along the sides of the ribs, the function of which is unclear. Scale impressions are known from the leg of one specimen. An herbivore, Thescelosaurus was likely a selective feeder, as indicated by its teeth and narrow snout. Its limbs were robust, and its femur (upper thigh bone) was longer than its tibia (shin bone), suggesting that it was not adapted to running. Its brain was comparatively small, possibly indicating small group sizes of two to three individuals. The senses of smell and balance were acute, but hearing was poor. It might have been a burrower, as an acute sense of smell and poor hearing are typical of modern burrowing animals. Burrowing has been confirmed for the closely related Oryctodromeus, and might have been widespread in thescelosaurids. The genus attracted media attention in 2000, when a specimen unearthed in 1999 was interpreted as including a fossilized heart, but scientists now doubt the identification of the object.
Thescelosaurus has been found across a wide geographic range in western North America. The first specimens were discovered in the Lance Formation of Wyoming, but subsequent discoveries have been made in North Dakota, South Dakota, Montana, Alberta, and Saskatchewan, in geological formations including the Frenchman Formation, Hell Creek Formation, and Scollard Formation. It was relatively common, and may have been the most common dinosaur in the Frenchman Formation. Living during the late Maastrichtian age, it was among the last of the non-avian dinosaurs before the entire group went extinct during the Cretaceous–Paleogene extinction event around 66 million years ago.
Discovery and history
T. neglectus and its type specimen
The first specimens of what would later be named Thescelosaurus were discovered during the bone wars, a heated rivalry between the paleontologists Edward Drinker Cope and Othniel Charles Marsh. In July 1891, the fossil hunter John Bell Hatcher, who had been hired by Marsh, and his assistant William H. Utterback discovered a near-complete skeleton of a small herbivorous dinosaur along Doegie Creek in Niobrara County, Wyoming, in rocks of the Lance Formation. The skeleton was found lying on its left side and largely in natural articulation, with only the head and neck lost to erosion. It was taken to the Smithsonian Institution's National Museum of Natural History (USNM), where it remained in its original, unlabelled packing box. In 1903, the USNM hired the paleontologist Charles W. Gilmore to work on the extensive collection that had been amassed under the direction of Marsh, who had died in 1899. It was not until 1913 that Gilmore opened the box and, to his surprise, found the skeleton of a new species of dinosaur. In 1913, Gilmore published a preliminary description naming the new genus and species Thescelosaurus neglectus. In addition to Hatcher's specimen (USNM 7757), which became the type specimen of the new species, Gilmore assigned a second, more fragmentary skeleton from Lance Creek, also in Niobrara County, to the species (paratype, USNM 7758). The generic name derives from the Greek words θέσκελος (theskelos), "godlike" or "wondrous", and σαῦρος (sauros), "lizard". The specific name, neglectus, is Latin for "neglected" or "overlooked", as the type specimen had been unattended to for so long.
Gilmore published a comprehensive description in 1915 after the type specimen was fully prepared. He identified six more specimens, including a shoulder blade with coracoid, a neck vertebra, and a toe bone, as well as three partial skeletons that had been collected by Barnum Brown and were stored in the American Museum of Natural History (AMNH). The neck and skull remained unknown, however, and Gilmore restored these missing parts based on Hypsilophodon, which he considered a close relative, in his skeletal and life reconstructions. For the museum display of the type specimen, Gilmore maintained its original posture and incompleteness. Only the right leg, which was slightly dislocated, was adjusted in position, and some minor damage to the bones was restored, but painted lighter than the original bones so that the real and reconstructed parts could be distinguished visually. In 1963, the display was included in a wall mount alongside the ornithischians Edmontosaurus and Corythosaurus and the theropod Gorgosaurus. In 1981 the display was rearranged, placing Thescelosaurus higher and more out-of-sight. Renovations of the exhibit from 2014 to 2019 removed the Thescelosaurus and other dinosaurs on display, replacing them with plaster casts so that the original fossils could be further prepared and studied.
T. edmontonensis, revision, and T. garbanii
In 1926, William Parks described the new species T. warreni from a well-preserved skeleton from Alberta, Canada, from what was then known as the Edmonton Formation. This skeleton had notable differences from T. neglectus, and so Charles M. Sternberg placed it in a new genus, Parksosaurus, in 1937. In 1940, Sternberg named an additional species, T. edmontonensis, based on another articulated skeleton (CMN 8537) that he had discovered in the Edmonton Formation of Rumsey, Alberta. Sternberg had already mentioned this specimen in 1926, though it was still unprepared at that time. It preserves most of the vertebral column, pelvis, legs, scapula, coracoid, arm, and, most significantly, multiple bones of the skull roof and a complete mandible, the first known from Thescelosaurus. More recent stratigraphy has divided the Edmonton Formation into four formations, with Parksosaurus coming from the older Horseshoe Canyon Formation and Thescelosaurus edmontonensis from the younger Scollard Formation.
In 1974, Peter M. Galton revised Thescelosaurus and described additional specimens, bringing the total to 15 known specimens. These include four specimens from the Hell Creek Formation collected by Barnum Brown in Montana in 1906 and 1909, some of which had already been mentioned by Gilmore in 1915; one specimen found in 1892 by Wortman and Peterson at an uncertain location; two specimens found in 1921 by Levi Sternberg in the Frenchman Formation of Rocky Creek, Saskatchewan; and two isolated bones, also from Saskatchewan. One of Brown's specimens, AMNH 5034, was found just below the Fort Union Formation, at the youngest locality from which dinosaurs were found. Galton concluded that T. edmontonensis was simply a more robust individual of T. neglectus (possibly the opposite sex of the type individual).
William J. Morris described three additional partial skeletons in 1976, two found in the Hell Creek Formation of Garfield County, Montana by preparator Harli Garbani, and one from an unknown location in Harding County, South Dakota. The first specimen (LACM 33543) preserves parts of the vertebral column and pelvis in addition to bones of the skull not yet known from Thescelosaurus such as the jugals and braincase. The second specimen (LACM 33542) includes vertebrae from the neck and back, and a nearly complete lower leg with a partial femur. Morris concluded that its ankle anatomy and larger size were unique, and therefore named the new species Thescelosaurus garbanii, in honor of the discoverer Garbani. Morris also argued that the ankle of T. edmontonensis, which Galton claimed was damaged and misinterpreted, was truly different from T. neglectus and more similar to T. garbanii. Therefore, he suggested that T. edmontonensis and T. garbanii may eventually be separated from Thescelosaurus as a new genus. The third specimen (SDSM 7210) includes a large part of the skull, some partial vertebrae from the back and two bones of the fingers, parts that do not overlap with the diagnostic regions of the T. neglectus type specimen, preventing comparisons. Morris provisionally assigned the specimen to Thescelosaurus, but suggested that it could represent a new species; this potential species has later been called the "Hell Creek hypsilophodontid".
Bugenasaura and the "Willo" specimen
Galton revised Thescelosaurus for a second time in 1995. He argued that the supposedly diagnostic traits of the ankle of the T. edmontonensis specimen are the result of breakage, as indicated by the previously undescribed left ankle of that specimen, which shows the same anatomy as T. neglectus. Consequently, he synonymized T. edmontonensis with T. neglectus. Galton determined that Morris correctly interpreted the ankle of T. garbanii and suggested that the species could be elevated to a genus of its own. There was also the possibility that the hindlimb of T. garbanii instead belonged to the pachycephalosaurid Stygimoloch, which is also known from the Hell Creek Formation and for which the hindlimb was unknown. Galton also concluded that the skull of SDSM 7210, the third of the specimens described by Morris, was distinct from Thescelosaurus, and therefore named the new taxon Bugenasaura infernalis. The name, meaning "large-cheeked lizard", combines Latin and Ancient Greek roots. The specific name, from the Latin infernalis, "of the lower regions", is a reference to the lower levels of the Hell Creek Formation from which it is known. Galton also tentatively assigned LACM 33543, the type of T. garbanii, to the new species, noting that additional material is necessary to determine if the referral is correct, and that the name garbanii should have priority if this turns out to be the case.
In his 1995 revision, Galton also reassigned isolated teeth from the Campanian Judith River Formation of Montana to the related genus Orodromeus. These teeth had been assigned to Thescelosaurus cf. neglectus by Ashok Sahni in 1972, which would have been the oldest occurrence of Thescelosaurus. In a 1999 study on the anatomy of Bugenasaura, Galton assigned a tooth in the collection of the University of California Museum of Paleontology (UCMP 49611) to the latter. Significantly, this tooth reportedly came from the Late Jurassic Kimmeridge Clay Formation of Weymouth, England, and therefore is roughly 70 million years older than the Bugenasaura type specimen and from another continent. Galton argued that it had possibly been mislabelled and was actually from the Lance Formation of Wyoming, but the tooth was first collected before the museum was active in the Lance region. The lack of diagnostic features led Paul M. Barrett and Susannah Maidment to classify the tooth as an indeterminate ornithischian in 2011.
After the discovery of additional specimens of Thescelosaurus preserving both the skull and skeleton, Clint Boyd and colleagues reassessed the historic and current species of Thescelosaurus in 2009. One of the new specimens (MOR 979) was found in the Hell Creek of Montana and preserves a nearly complete skull and skeleton. The researchers also identified previously overlooked skull material of the T. neglectus paratype USNM 7758, which allowed comparisons of the diagnostic regions of the skull and ankle across multiple specimens and species. The key specimen, however, was NCSM 15728, nicknamed "Willo", which was found in the upper Hell Creek Formation in Harding County, South Dakota by Michael Hammer in 1999. This specimen preserves most of the skeleton and a mass in the chest cavity that was initially interpreted as a heart. "Willo" also includes a complete skull, showing that it was much lower and longer than previously thought. "Willo" and the other new specimens made it clear that Bugenasaura infernalis must be assigned to Thescelosaurus. By reassigning the species, Boyd and colleagues created the new combination T. infernalis, which they considered undiagnostic.
T. assiniboiensis and further discoveries
Another species, T. assiniboiensis, was named by Caleb M. Brown and colleagues in 2011 based on a specimen (RSM P 1225.1) found in 1968 by Albert Swanston, a museum technician at the Royal Saskatchewan Museum. The specific name, assiniboiensis, derives from the historic District of Assiniboia that covered the southern Saskatchewan region where the Frenchman Formation is exposed, which in turn takes its name from the Assiniboine peoples. When discovered, the specimen was articulated, with its tail weathering out of a hillside. It is a small specimen including a fragmentary skull, most of the vertebral column, the pelvic girdle, and the hind limbs. The locality of the specimen as originally reported was incorrect: revisiting of the Frenchman River valley by Tim Tokaryk in the 1980s found that the excavation, identifiable by bone and plaster remnants, actually took place on the north side of the valley, approximately halfway up the exposed claystone. This places the specimen in the Frenchman Formation.
Specimens can only be directly compared if they preserve the same bones, but overlapping material is often not available – the assignment of most Thescelosaurus specimens to any of the three recognized species therefore remained uncertain. This situation improved in 2014, when Boyd and colleagues reported a new specimen from the Hell Creek Formation of Dewey County, South Dakota (TLAM.BA.2014.027.0001), that was collected from private lands by Bill Alley before being donated to the Timber Lake and Area Museum. This specimen had yet to be fully prepared but includes a mostly complete but slightly crushed skull and much of the skeleton. This find allowed the assignment of this specimen and the "Willo" specimen to T. neglectus. In 2022, news media reported that a specimen of Thescelosaurus was found at the Tanis fossil site in North Dakota, which is thought to show direct signs of the Chicxulub asteroid impact in the Gulf of Mexico that resulted in the K-Pg extinction.
Description
The skeletal anatomy of this genus is well documented overall, and restorations have been published in several papers, including skeletal restorations and models. The skeleton is known well enough that a detailed reconstruction of the hip and hindlimb muscles has been made. Thescelosaurus is the largest known thescelosaurid, with the type specimen of T. garbanii representing one of the largest individuals. It may have been sexually dimorphic, with one sex larger than the other.
Skull
The skull is best known from T. neglectus, mostly thanks to the excellently preserved "Willo" specimen which has been CT-scanned to reveal its internal details. A fragmentary skull is also known from T. assiniboiensis (RSM P 1225.1). Most autapomorphies – distinguishing features that are not found in related genera – are found in the skull. The skull also shows many plesiomorphies, "primitive" (basal) features that are typically found in ornithischians that are geologically much older, but also shows derived (advanced) features.
The skull had a long, low snout that ended in a toothless beak. As in other dinosaurs, it was perforated by several fenestrae, or skull openings. Of these, the orbit (eye socket) and the infratemporal fenestra (behind the orbit) were proportionally large, while the external naris (nostril) was small. The external naris was formed by the premaxilla (the front bone of the upper jaw) and the nasal, while the maxilla (the tooth-bearing "cheek" bone) was excluded. Another fenestra, the antorbital fenestra, lay between the external naris and the orbit and contained two smaller internal fenestrae. Long rod-like bones called palpebrals were present above the eyes, giving the animal heavy bony eyebrows. The palpebral was not aligned with the upper margin of the orbit as in some other ornithischians, but protruded across it. The frontals, which form the skull roof above the orbit, were widest at the level of the middle of the orbit and narrower at their posterior (rear) ends – an autapomorphy of Thescelosaurus.
There was a prominent ridge along the length of both maxillae; a similar ridge was also present on both dentaries (the tooth-bearing bones of the lower jaw). The ridges and the position of the teeth, deeply internal to the outside surface of the skull, have been interpreted as evidence for muscular cheeks. The morphology of the ridge on the maxilla, which is very pronounced and has small, oblique ridges covering its posterior end, is an autapomorphy of the genus. The teeth were of different types: small pointed premaxillary teeth, and leaf-shaped cheek teeth that differed between the maxilla and the dentary. The premaxillae had six teeth each, a primitive trait among ornithischians that is otherwise only found in much earlier and more basal forms such as Lesothosaurus and Scutellosaurus. Immature individuals may have had fewer than six premaxillary teeth. Unlike many other basal ornithischians, the premaxillary teeth lacked denticles (small protuberances on the cutting edges). Both the maxilla and the dentary had up to twenty cheek teeth on each side, which is again similar to basal ornithischians and unlike other neornithischians, which had a reduced tooth count. The cheek teeth themselves likewise showed primitive features, such as a constriction that separated the crowns from their roots, and a cingulum (bulge surrounding the tooth) above the constriction. The front bone of the lower jaw was the predentary, which was unique to ornithischians. When seen from below, the posterior end of the predentary was bifurcated, which is a derived feature.
Boyd and colleagues, in 2014, listed seven skull features that separate T. assiniboiensis from T. neglectus, most of which are found in the braincase at the back of the skull. These include, amongst others, a foramen (small opening) piercing the roof of the braincase (absent in T. neglectus); the flattened anterior (front) surface of one of the braincase bones (V-shaped in T. neglectus); and the trigeminal foramen (the opening for the trigeminal nerve) piercing both the prootic and laterosphenoid bones (restricted to the prootic in T. neglectus).
Vertebrae and limbs
T. neglectus had six sacral ("hip") vertebrae and 27 presacral ("neck and back") vertebrae. The type specimen of T. assiniboiensis appears to have had only five sacrals, but it is possible that this individual was not yet fully mature and that the last sacral was not yet fused to the others. The tail was long and made up half of the total body length. It was braced by ossified tendons from the middle to the tip, which would have reduced its flexibility. The rib cage was broad, giving it a wide back. Large, thin, flat mineralized plates have been found along the sides of the ribs, so-called intercostal plates. The anterior ribs were flattened and concave, and the posterior margins of their lower ends had a rough surface. These features are autapomorphies of Thescelosaurus and are possibly adaptations that allowed the plates to attach to the rib cage.
The limbs were robust. The femur (upper thigh bone) was longer than the tibia (shin bone), which distinguishes the genus from closely related genera. Thescelosaurus had short, broad, five-fingered hands. The second digit was the longest, and the fifth digit was strongly reduced in size. Only the first three digits ended in hooflike unguals. There were two phalanges (finger bones) in the first digit, three in the second, four in the third, three in the fourth, and two in the fifth. The foot had five metatarsals, though only the first four carried digits, with the fifth metatarsal being vestigial (reduced to a small splint). The first metatarsal was only half the length of the third, and its digit might not have regularly touched the ground. Most of the animal's weight was therefore supported by the central three metatarsals, of which the middle (third) was the longest. The first digit had two phalanges, the second had three, the third had four, and the fourth had five. The digits were shorter than the metatarsals, and their phalanges were distinctly flattened. The species T. garbanii differs from the other species in its unique ankle, with the calcaneus being reduced and not contributing to the midtarsal joint.
Integument
For most of its history, the nature of this genus' integument, be it scales or something else, remained unknown. Gilmore described patches of carbonized material near the shoulders as possible epidermis, with a "punctured" texture, but no regular pattern, while Morris suggested that armor was present, in the form of small scutes he interpreted as located at least along the midline of the neck of one specimen. Scutes have not been found with other articulated specimens of Thescelosaurus, though, and Galton argued in 2008 that Morris's scutes are crocodilian in origin. In 2022, news media reported that the Tanis specimen preserves skin impressions on a leg that show that parts of the animal were covered in scales.
Classification
In his 1913 description of Thescelosaurus, Gilmore considered it to be a member of Camptosauridae, alongside Hypsilophodon, Dryosaurus and Laosaurus. In 1915, he instead placed it within Hypsilophodontidae alongside only Hypsilophodon. Many authors followed this classification within Hypsilophodontidae. Franz Nopcsa and Friedrich von Huene instead retained Thescelosaurus as a relative of Camptosaurus in 1928 and 1956, respectively. In 1937, Sternberg separated Thescelosaurus and the related Parksosaurus into a family of their own, the Thescelosauridae, but considered both genera to be members of the subfamily Thescelosaurinae within Hypsilophodontidae in 1940. Anatoly Konstantinovich Rozhdestvensky and Richard A. Thulborn retained Thescelosauridae as a separate family in 1964 and 1974, respectively. Galton classified Thescelosaurus as a member of Iguanodontidae based on hindlimb proportions in 1974, but this family was found to be polyphyletic (not a natural group); he therefore returned to a hypsilophodontid classification in 1995.
Hypsilophodontidae only included four genera in 1940: Hypsilophodon, Thescelosaurus, Parksosaurus, and Dysalotosaurus. In 1966, Alfred Sherwood Romer assigned most small ornithopods to the family, which was followed by Galton and later authors, though Thescelosaurus was not always included in the family. As a result, Hypsilophodontidae included 13 genera in the first edition of the book The Dinosauria in 1990. This concept of Hypsilophodontidae as an inclusive monophyletic (natural) group was supported by the early cladistic studies of Paul C. Sereno, David B. Weishampel, and Ronald Heinrich, who found Thescelosaurus to be the most basal hypsilophodontid. The analysis of Weishampel and Heinrich in 1992 can be seen below.
The concept of Hypsilophodontidae as a monophyletic group then fell out of favor. Rodney Scheetz suggested in 1999 that "hypsilophodontids" were simply the primitive forms of ornithopods, the larger grouping to which they were commonly assigned. Scheetz found Thescelosaurus, Parksosaurus and Bugenasaura to be successively closer to Hypsilophodon and later ornithopods, but not a group of their own. Other studies had similar results, with Thescelosaurus or Bugenasaura as early ornithopods close to the origin of the group, sometimes forming a clade with Parksosaurus. An issue with T. neglectus prior to the revision by Boyd and colleagues in 2009 was the uncertainty about the assigned specimens, including the separation of Bugenasaura and the unresolved question of whether T. edmontonensis was distinct or not. Following their taxonomic revision, the systematic relationships of Thescelosaurus and "hypsilophodonts" have become clearer, and Boyd and colleagues found support for a larger group of early ornithopods consisting of Thescelosaurus, Parksosaurus, Zephyrosaurus, Orodromeus and Oryctodromeus. Brown and colleagues, while describing T. assiniboiensis in 2011, came to similar results. The same authors confirmed these results again in 2013, prompting them to reintroduce the name Thescelosauridae for the entire group, which was divided into the revised subfamily Thescelosaurinae and the new subfamily Orodrominae.
Other studies did not find Parksosaurus to be closely related to Thescelosaurus, and instead proposed that it was related to the South American Gasparinisaura. However, Boyd argued that the anatomy of Parksosaurus had been misinterpreted, and that Parksosaurus and Thescelosaurus were very closely related if not each other's closest relatives. The clades Thescelosauridae (or, alternatively, Parksosauridae) and Thescelosaurinae have been confirmed by numerous phylogenetic analyses, though not by all. There is also disagreement about whether Thescelosaurus and thescelosaurids are members of Ornithopoda or more basal. Boyd highlighted in 2015 that many phylogenetic studies that included Thescelosaurus either do not include marginocephalians or are unresolved, so there was no definitive evidence that Thescelosaurus was an ornithopod. In his analysis, Thescelosaurus and Thescelosauridae were outside Ornithopoda, instead forming an expansive clade of non-ornithopod neornithischians. Some studies agree with this placement for thescelosaurids, while others support Thescelosaurus as an ornithopod, and others are unresolved. Fonseca and colleagues gave the name Pyrodontia to the clade uniting Thescelosaurus with more derived ornithischians when Thescelosauridae is outside Ornithopoda, referencing the early and rapid diversification of Thescelosauridae, Marginocephalia and Ornithopoda. The thescelosaurid results of Fonseca and colleagues in 2024 can be seen below.
The earliest-known thescelosaurids, Changchunsaurus and Zephyrosaurus, are from the middle Cretaceous, roughly 40 million years younger than when the group would have evolved, suggesting a long ghost lineage (a period of geologic time during which a group existed but left no fossil evidence). In 2024, André Fonseca and colleagues recovered the Late Jurassic Nanosaurus as the earliest thescelosaurid, which would shorten the ghost lineage. Boyd concluded in 2015 that the split between Orodrominae and Thescelosaurinae took place in North America by the Aptian stage, with Orodrominae diversifying within North America. Thescelosaurinae might have diversified either in North America or Asia; the genus Fona, described in 2024, suggests that Thescelosaurinae was already established in North America at the beginning of the Late Cretaceous.
Paleobiology
Like other ornithischians, Thescelosaurus was probably herbivorous. The different types of teeth, as well as the narrow snout, suggest that it was a selective feeder. The contemporary pachycephalosaur Stegoceras, in contrast, was probably a more indiscriminate feeder, allowing both animals to share the same environment without competing for food (niche partitioning). One specimen is known to have had a bone pathology, with the long bones of the right foot fused at their tops.
Posture and locomotion
In his 1915 description, Gilmore suggested that Thescelosaurus was an agile, bipedal (two-legged) animal and was adapted for running. He also created a model to depict its life appearance, showing a light and agile body built with slender hind limbs. These ideas were contested by Sternberg in 1940, who argued that the skeleton, and especially the limbs, were robust. His own model, of the species T. edmontonensis, consequently showed limbs that were much more muscular. Other subsequent studies disagreed with Gilmore's idea of a proficient runner, given the robust skeleton, the proportionally long femur, and the short lower leg bones. Galton, in 1974, even suggested that Thescelosaurus could have occasionally moved quadrupedally (on all fours), given its fairly long arms and wide hands. Phil Senter and Jared Mackey, in 2024, concluded that a quadrupedal posture would have been theoretically possible, as the spine of the back was bent down, allowing the hand to touch the ground even when the hind limbs were straight. However, in such a posture the fingers would have pointed towards the sides rather than the front, and consequently could not have been used to propel the animal forward; quadrupedal locomotion therefore seems unlikely.
A 2023 study by David Button and Lindsay Zanno concluded that Thescelosaurus was less adapted for running than other thescelosaurids but nonetheless showed two traits that are common in runners. The first of these is the fourth trochanter, a bony crest on the femur that anchored the main locomotory muscle. This crest was relatively proximal (closer to the upper end of the bone), allowing for faster movements at the expense of power. The second trait is found in the inner ear, which contains the three semicircular canals that house the sense of balance: one of these canals, the anterior semicircular canal, was greatly enlarged, suggesting acute balance sensitivity, which in turn might suggest high agility but could also be explained by possible burrowing behavior.
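The speed-power trade-off of a proximal fourth trochanter follows from simple lever mechanics; the following is an illustrative sketch, not a calculation from the study itself. If the main retractor muscle pulls with force F at a moment arm r from the hip joint and shortens at velocity v, the torque it produces is

τ = F × r

while the angular velocity of the limb swing is approximately

ω = v / r

Moving the muscle's attachment point (the fourth trochanter) closer to the joint reduces r, which increases ω, and hence swing speed, for a given muscle shortening rate, at the cost of reduced torque.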
In Sternberg's 1940 model, the upper arm was horizontal and almost perpendicular to the body. Peter Galton pointed out in 1970 that the humerus (upper arm bone) of most ornithischians was articulated to the shoulder by an articular surface consisting of the entire end of the bone, rather than a distinct ball and socket as in mammals, and that the humerus would not have spread sidewards as in Sternberg's model. Senter and Mackey found that the humerus could swing forward to a vertical position, but not much beyond that point.
The semicircular canals may allow for reconstructing the habitual posture of the head. In modern animals, one of the canals, the lateral semicircular canal, is typically horizontal when the head is in an "alert" posture. Button and Zanno argued that the head of Thescelosaurus would be slightly up-tilted when oriented such that the canal is horizontal. This is similar to Dysalotosaurus, but contrasts with the down-tilted alert postures hypothesized for many other ornithischians including ceratopsians, ankylosaurs, and hadrosaurs.
Function of intercostal plates
The function of the plates at the side of the rib cage remains unclear. Such plates are known from several other ornithischians, and it was originally suggested that they were osteoderms (armor) for defence against predators. This hypothesis has been refuted, as both their outer and inner surfaces show Sharpey's fibres, indicating the insertion of tendons – consequently, the plates must have been completely embedded within the musculature. Furthermore, analysis of thin sections of the plates of Thescelosaurus, Hypsilophodon, and Talenkauen showed that the plates started as cartilage and became bone as the animal aged (endochondral ossification), which is not the case with osteoderms (which are intramembranous ossifications). Instead, the plates may have played a role in breathing, or simply made the thoracic cavity more rigid. The plates appear to be absent in smaller Thescelosaurus specimens, suggesting that they ossified only later in life.
Senses, sociality, and possible burrowing behavior
Button and Zanno, in 2023, discussed the sensory and cognitive abilities of Thescelosaurus based on a CT scan of the skull of the "Willo" specimen. Even though the brain itself is not preserved, a cast of the braincase cavity that housed it, known as an endocast, can be studied. Overall, the brain was small compared to most other neornithischian dinosaurs, but similar in size to that of ceratopsids such as Triceratops. Its cognitive abilities were therefore likely within the range of modern reptiles. These limited cognitive abilities might suggest that social interactions were comparatively simple, or that it lived in smaller groups. In localities of the related Oryctodromeus, two to three individuals are usually found together, which could reflect the group size typical for that genus. Thescelosaurus might also have lived in such small groups, although Button and Zanno cautioned that the evidence for such claims remains weak.
It had poor hearing, with an estimated best hearing range between around 296 and 2150 Hz, which is narrower than that of related genera such as Dysalotosaurus. The sense of smell, in contrast, was acute, as indicated by the large olfactory bulbs of the brain, which make up around 3% of the entire volume of the endocast. This is comparable to modern rodents and lagomorphs and more than in birds. Poor hearing and an acute sense of smell are commonly found in modern animals that create burrows, leading Button and Zanno to suggest that Thescelosaurus may have been semi-fossorial. The animal might have dug for food such as roots and tubers, which can be detected by smell. Some anatomical features of the skeleton could also be related to digging, such as the robust forelimbs and the premaxillae that were fused together towards their tips, reinforcing the tip of the snout to aid in digging. Furthermore, the shoulder blade was broad, possibly to provide a larger attachment surface for muscles important for scratch-digging. The relatively large size of Thescelosaurus does not necessarily preclude burrowing behaviour, as tunnels have been associated with the only slightly smaller Oryctodromeus and with much larger mammals.
Button and Zanno alternatively suggested that Thescelosaurus could have inherited its burrowing adaptations from burrowing ancestors, while not burrowing itself. This idea is supported by the lack of some of the burrowing adaptations seen in the closely related Oryctodromeus. Burrowing might have been widespread in thescelosaurids and other basal neornithischians.
Supposed fossilized heart
In 2000, the imaging specialist Paul Fisher and colleagues interpreted a concretion in the chest region of the "Willo" specimen as the remnant of a heart. The internal structure of the concretion was revealed using CT scans, showing three low-density areas that the researchers interpreted as the left and right ventricles and the aorta. They suggested that the heart had been saponified (turned to grave wax) under airless burial conditions, and then changed to goethite, an iron mineral, by replacement of the original material, forming a concretion that reflects the original shape of the heart. The two supposed ventricles and the single aorta are consistent with a four-chambered heart as found in modern birds and mammals, suggesting an elevated metabolic rate for Thescelosaurus. Following this discovery, "Willo" became widely known to the public as the "Dinosaur with a heart of stone".
In 2001, Timothy Rowe and colleagues commented that the anatomy of the object is inconsistent with a heart – for example, the supposed heart partially engulfs one of the ribs and has an internal structure of concentric layers in some places. Instead, they suggested that the structure is an ironstone concretion; such concretions are common in similar sediments, and another concretion is preserved behind the right leg of the same specimen. The original authors defended their position, arguing that the concretion is unique and had formed around the actual heart.
In 2011, researchers supervised by Mary Schweitzer applied multiple lines of inquiry to the question of the object's identity, including more advanced CT scanning, histology, X-ray diffraction, X-ray photoelectron spectroscopy, and scanning electron microscopy. The team found that the object's internal structure does not include chambers, but instead consists of three unconnected areas of lower-density material, and is not comparable to the structure of an ostrich's heart. The "walls" are composed of sedimentary minerals not known to be produced in biological systems, such as goethite, feldspar minerals, quartz, and gypsum, as well as some plant fragments. Carbon, nitrogen, and phosphorus, chemical elements important to life, were lacking in their samples, and cardiac cellular structures were absent, although one patch possibly preserved animal cellular structures. The authors concluded that their data supported identification of the object as a geologic concretion rather than a heart, while allowing the possibility that isolated areas of tissue were preserved.
Paleoecology
Temporal and geographic range
Thescelosaurus is definitively known only from deposits in western North America dating to the late Maastrichtian age, just before the Cretaceous-Paleogene extinction event 66.04 million years ago. T. neglectus is known from the Lance Formation of Wyoming and the Hell Creek Formation of South Dakota, T. garbanii from the Hell Creek Formation of Montana, and T. assiniboiensis from the Frenchman Formation of Saskatchewan. An additional definitive Thescelosaurus specimen that cannot be assigned to a diagnostic species, the type of T. edmontonensis, is known from the Scollard Formation of Alberta. The deposition of the Lance Formation began 69.42 million years ago; that of the Scollard and Frenchman formations began 66.88 million years ago; and that of the Hell Creek Formation began at least 67.2 million years ago. Equivocal material of Thescelosaurus has also been reported from the Horseshoe Canyon Formation of Alberta, the Hell Creek Formation of North Dakota, the Laramie Formation of Colorado, the Ferris, Medicine Bow, and Almond Formations of Wyoming, the Willow Creek Formation of Montana, and the Prince Creek Formation of Alaska. All of these localities are of similar late Maastrichtian age to those bearing clear Thescelosaurus material, except the Horseshoe Canyon and Prince Creek Formations. The presence of Thescelosaurus in these two formations would extend the known range of the genus into the middle or early Maastrichtian, but the specimens in question have since been reassigned as probable Parksosaurus.
Abundance
Thescelosaurus was historically thought to be relatively uncommon in its paleoenvironments. A 1987 study estimated that hypsilophodontids (including Thescelosaurus) and pachycephalosaurs together accounted for just 2% of the dinosaur faunas of the Lance, Hell Creek, and Frenchman formations. Two other studies, from 1998 and 2011, estimated that Thescelosaurus made up 3% and 5% of the total dinosaur fauna of the Hell Creek Formation, respectively. These low figures might be the result of sampling bias, as specimens of more spectacular dinosaurs such as Triceratops were preferentially collected, and Thescelosaurus is now thought to be one of the more abundant dinosaurs. A 2011 census study of an area of the Hell Creek Formation where fossils had been collected without such bias estimated that Thescelosaurus forms 8% of the dinosaur fauna. Brown and colleagues, in 2011, estimated that Thescelosaurus was perhaps the most abundant dinosaur in the Frenchman Formation, accounting for 31% of specimens. At one site in the Hell Creek Formation known as the "tooth draw deposit", Thescelosaurus accounted for 22.7% of all dinosaur bones. At the "convenience store" locality of the Frenchman Formation, Thescelosaurus accounted for as much as 42% of all tetrapod fossils; all Thescelosaurus specimens from this locality are very small and presumably juvenile. The most common readily identifiable Thescelosaurus fossils are the phalanges of the foot, while articulated skeletons are very rare.
Paleoenvironment
Paleoenvironments of the Scollard and Hell Creek formations indicate that the climate at the very end of the Cretaceous was intermediate between semi-arid and humid, with both formations preserving braided streams, floodplains, and meandering river channels; conditions shifted to become more humid following the Cretaceous-Paleogene extinction event. The formations where Thescelosaurus fossils have been found represent different sections of the western shore of the Western Interior Seaway, which divided western and eastern North America during the Cretaceous; this shore formed a broad coastal plain extending westward from the seaway to the newly formed Rocky Mountains. These formations are composed largely of sandstone and mudstone, which have been attributed to floodplain environments. While slightly older floras were dominated by cycad-palm-fern meadows, the Hell Creek flora was dominated by angiosperms in a forested landscape of small trees. The floral assemblages of the Frenchman Formation show that southern Saskatchewan was a subtropical to warm temperate environment with distinct seasons. The paleoenvironment would have been a swampy to lowland forest with a tree canopy of conifers and a diverse angiosperm-dominated mid-canopy and understory. There is also evidence of wildfires, with one site preserving a mature forest and another preserving a forest recovering from a fire.
The disproportionate presence of Thescelosaurus and hadrosaurs in sandstone, versus ceratopsians in mudstone, could suggest that Thescelosaurus preferred habitats along channel margins rather than floodplains, although its possible presence in the Laramie Formation would imply a preference for low coastal environments. Alternatively, these supposed habitat preferences may simply reflect Thescelosaurus fossils being more readily preserved in some environments than in others. Thescelosaurus would have occupied an ecomorphospace different from that of other dinosaurs, including the similarly sized and built pachycephalosaurids.
Many fossil vertebrates are found in the Scollard Formation alongside Thescelosaurus, including cartilaginous and bony fishes such as Palaeospinax, Myledaphus, Lepisosteus, and Cyclurus, amphibians like Scapherpeton, turtles including Compsemys, indeterminate champsosaurs, crocodilians, pterosaurs and birds, a variety of theropod groups including troodontids, ornithomimids, and the tyrannosaurid Tyrannosaurus, and ornithischians including Leptoceratops, pachycephalosaurids, Triceratops, and Ankylosaurus. Mammals are also very diverse, with multituberculates, deltatheridiids, the marsupials Alphadon, Pediomys, Didelphodon, and Eodelphis, and the insectivorans Gypsonictops, Cimolestes, and Batodon. Within the Hell Creek Formation of Montana, Thescelosaurus lived alongside dinosaurs including Leptoceratops, the pachycephalosaurids Pachycephalosaurus, Stygimoloch, and Sphaerotholus, the hadrosaurid Edmontosaurus and possibly Parasaurolophus, ceratopsians like Triceratops and Torosaurus, the nodosaurid Edmontonia and the ankylosaurid Ankylosaurus, multiple dromaeosaurids and troodontids, the ornithomimid Ornithomimus, the caenagnathid Elmisaurus, tyrannosaurids including Tyrannosaurus, an alvarezsaurid, and the bird Avisaurus. The dinosaur fauna of the Frenchman Formation is similar, with pachycephalosaurids, Edmontosaurus, Triceratops, Torosaurus, ankylosaurids, dromaeosaurids, troodontids, ornithomimids, caenagnathids, and Tyrannosaurus, as well as the intermediate-sized theropod Richardoestesia.
The Lance Formation contains one of the best-known faunas from the Late Cretaceous, with a diverse assemblage of cartilaginous and bony fishes, frogs, salamanders, turtles, champsosaurs, lizards, snakes, crocodilians, pterosaurs, mammals, and birds such as Potamornis and Palintropus. The dinosaurs of the Lance Formation include troodontids such as Pectinodon and Paronychodon, dromaeosaurids, the ornithomimid Ornithomimus, the caenagnathid Chirostenotes, the tyrannosaurid Tyrannosaurus, the pachycephalosaurids Pachycephalosaurus and Stygimoloch, the hadrosaurid Edmontosaurus, ankylosaurs such as Edmontonia and Ankylosaurus, and ceratopsians such as Leptoceratops, Triceratops, and Torosaurus. Small tyrannosaurids, large dromaeosaurids, and other second-tier predators likely targeted Thescelosaurus and other smaller ornithischians and theropods; very young ornithischians were also preyed on by smaller dromaeosaurids and troodontids, while crocodilians, lizards, and mammals acted as opportunistic lower-trophic-level hunters and scavengers.