8340209
https://en.wikipedia.org/wiki/Alternative%20fuel%20vehicle
Alternative fuel vehicle
An alternative fuel vehicle is a motor vehicle that runs on alternative fuel rather than traditional petroleum-based fossil fuels such as gasoline, petrodiesel or liquefied petroleum gas (autogas). The term typically refers to internal combustion engine vehicles or fuel cell vehicles that utilize synthetic renewable fuels such as biofuels (ethanol fuel, biodiesel and biogasoline), hydrogen fuel or so-called electrofuels. The term can also be used to describe an electric vehicle (particularly a battery electric vehicle or a solar vehicle), which would more appropriately be called an "alternative energy vehicle" or "new energy vehicle", as its propulsion actually relies on electricity rather than motor fuel. Vehicle engines powered by gasoline/petrol first emerged in the 1860s and 1870s; they took until the 1930s to completely dominate the original "alternative" engines driven by steam (18th century), by gases (early 19th century), or by electricity (1830s). Because of a combination of factors, such as environmental and health concerns including climate change and air pollution, high oil prices and the potential for peak oil, development of cleaner alternative fuels and advanced power systems for vehicles has become a high priority for many governments and vehicle manufacturers around the world in recent years. Hybrid electric vehicles such as the Toyota Prius are not actually alternative fuel vehicles, as they still use traditional fuels such as gasoline, but through advances in electric battery/supercapacitor and motor-generator technologies they achieve better overall fuel efficiency than conventional combustion vehicles. Other research and development efforts in alternative forms of power focus on developing plug-in electric, range extender and fuel cell vehicles, and even compressed-air vehicles. An environmental analysis of the impacts of various vehicle fuels extends beyond just operating efficiency and emissions, especially if a technology comes into wide use. A life-cycle assessment of a vehicle involves production and post-use considerations. In general, the lifecycle greenhouse gas emissions of battery-electric vehicles are lower than emissions from hydrogen, PHEV, hybrid, compressed natural gas, gasoline, and diesel vehicles. Current deployments By the early 2020s, there were more than 1.49 billion motor vehicles on the world's roads, compared with approximately 159 million alternative fuel and advanced technology vehicles that had been sold or converted worldwide by the end of 2022, consisting of: Over 65 million flex-fuel automobiles, motorcycles and light-duty trucks by the end of 2021, led by Brazil with 38.3 million and the United States with 27 million. Over 26 million plug-in electric vehicles, 70% of which were battery electric vehicles (BEVs) and 30% of which were plug-in hybrids (PHEVs). China had 13.8 million units, Europe 7.8 million, and the United States 3 million. In 2022, annual sales exceeded 10 million vehicles, up 55% relative to 2021. 24.9 million LPG-powered vehicles by December 2013, led by Turkey with 3.93 million, South Korea (2.4 million), and Poland (2.75 million). 24.5 million natural gas vehicles by the end of 2017, led by China (5.35 million) followed by Iran (4.0 million), India (3.05 million), Pakistan (3 million), Argentina (2.3 million), and Brazil (1.78 million). In 2015, 2.4 million units were sold. Over 13 million hybrid electric vehicles as of 2019.
5.7 million neat-ethanol-only light vehicles built in Brazil since 1979, with 2.4 to 3.0 million vehicles still in use by 2003 and 1.22 million units as of December 2011. 70,200 fuel cell electric vehicles (FCEVs) powered with hydrogen by the end of 2022. South Korea had 29,500 units, the United States 15,000, China 11,200, and Japan 7,700. In 2022, annual sales amounted to 15,391 vehicles. Hydrogen FCEV sales as a percentage of market share among electric vehicles (BEVs, PHEVs and FCEVs) declined for the 6th consecutive year. Mainstream commercial technologies Flexible fuel A flexible-fuel vehicle (FFV) or dual-fuel vehicle (DFV) is an alternative fuel automobile or light-duty truck with a multifuel engine that can use more than one fuel, usually mixed in the same tank, with the blend burned in the combustion chamber together. These vehicles are colloquially called flex-fuel, or flexifuel in Europe, or just flex in Brazil. FFVs are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks. The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with the major markets concentrated in the United States, Brazil, Sweden, and some other European countries. Ethanol flexible-fuel vehicles have standard gasoline engines that are capable of running on ethanol and gasoline mixed in the same tank. These mixtures have "E" numbers which describe the percentage of ethanol in the mixture; for example, E85 is 85% ethanol and 15% gasoline. (See common ethanol fuel mixtures for more information.) Though technology exists to allow ethanol FFVs to run on any mixture up to E100, in the U.S. and Europe flex-fuel vehicles are optimized to run on E85. This limit is set to avoid cold-starting problems during very cold weather. There were over 65 million flex-fuel automobiles, motorcycles and light-duty trucks by the end of 2021, led by Brazil with 38.3 million and the United States with 27 million. Other markets were Canada (1.6 million by 2014) and Sweden (243,100 through December 2014). The Brazilian flex-fuel fleet includes over 4 million flexible-fuel motorcycles produced from 2009 through March 2015. In Brazil, 65% of flex-fuel car owners were using ethanol fuel regularly in 2009, while the actual share of American FFVs being run on E85 is much lower; surveys conducted in the U.S. have found that 68% of American flex-fuel car owners were not aware they owned an E85 flex-fuel vehicle. There have been claims that American automakers are motivated to produce flex-fuel vehicles due to a loophole in the Corporate Average Fuel Economy (CAFE) requirements, which gives the automaker a "fuel economy credit" for every flex-fuel vehicle sold, whether or not the vehicle is actually fueled with E85 in regular use. This loophole allegedly allows the U.S. auto industry to meet CAFE fuel economy targets not by developing more fuel-efficient models, but by spending between US$100 and US$200 extra per vehicle to produce a certain number of flex-fuel models, enabling them to continue selling less fuel-efficient vehicles such as SUVs, which netted higher profit margins than smaller, more fuel-efficient cars. Plug-in electric Battery-electric Battery electric vehicles (BEVs), also known as all-electric vehicles (AEVs), are electric vehicles whose main energy storage is in the chemical energy of batteries.
BEVs are the most common form of what the California Air Resources Board (CARB) defines as a zero emission vehicle (ZEV), because they produce no tailpipe emissions at the point of operation. The electrical energy carried on board a BEV to power the motors is obtained from a variety of battery chemistries arranged into battery packs. For additional range, genset trailers or pusher trailers are sometimes used, forming a type of hybrid vehicle. Batteries used in electric vehicles include "flooded" lead-acid, absorbed glass mat, NiCd, nickel metal hydride, Li-ion, Li-poly and zinc-air batteries. Attempts at building viable, modern battery-powered electric vehicles began in the 1950s with the introduction of the first modern (transistor-controlled) electric car, the Henney Kilowatt, even though battery electric cars had been on the market since about 1890. Despite the poor sales of the early battery-powered vehicles, development of various battery-powered vehicles continued through the mid-1990s, with such models as the General Motors EV1 and the Toyota RAV4 EV. Battery-powered cars primarily used lead-acid batteries and NiMH batteries. Lead-acid batteries' recharge capacity is considerably reduced if they are discharged beyond 75% on a regular basis, making them a less-than-ideal solution. NiMH batteries are a better choice, but are considerably more expensive than lead-acid. Lithium-ion battery powered vehicles such as the Venturi Fetish and the Tesla Roadster have demonstrated excellent performance and range, and lithium-ion batteries have been used in most mass-production models launched since December 2010. An emerging line of research expands on the traditional lithium-ion batteries predominantly used in today's battery electric vehicles by using a carbon fiber structure (a vehicle body or chassis in this case) as a structural battery. Experiments at the Chalmers University of Technology in Sweden show that, when coupled with lithium-ion insertion mechanisms, an enhanced carbon fiber structure can have electromechanical properties. This means that the carbon fiber structure itself can act as its own battery/power source for propulsion, which would negate the need for traditional heavy battery banks, reducing weight and therefore increasing efficiency. Several neighborhood electric vehicles, city electric cars and series-production highway-capable electric cars and utility vans have been made available for retail sale, including the Tesla Roadster, GEM cars, Buddy, Mitsubishi i MiEV and its rebadged versions Peugeot iOn and Citroën C-Zero, Chery QQ3 EV, JAC J3 EV, Nissan Leaf, Smart ED, Mia electric, BYD e6, Renault Kangoo Z.E., Bolloré Bluecar, Renault Fluence Z.E., Ford Focus Electric, BMW ActiveE, Renault Twizy, Tesla Model S, Honda Fit EV, RAV4 EV second generation, Renault Zoe, Mitsubishi Minicab MiEV, Roewe E50, Chevrolet Spark EV, Fiat 500e, BMW i3, Volkswagen e-Up!, Nissan e-NV200, Volkswagen e-Golf, Mercedes-Benz B-Class Electric Drive, Kia Soul EV, BYD e5, and Tesla Model X. The world's all-time top-selling highway-legal electric car is the Nissan Leaf, released in December 2010, with global sales of more than 250,000 units through December 2016. The Tesla Model S, released in June 2012, ranks second with global sales of over 158,000 cars delivered. The Renault Kangoo Z.E. utility van is the leader of the light-duty all-electric segment with global sales of 25,205 units through December 2016.
Plug-in hybrid Plug-in hybrid electric vehicles (PHEVs) use batteries to power an electric motor, as well as another fuel, such as gasoline or diesel, to power an internal combustion engine or other propulsion source. PHEVs can charge their batteries through charging equipment and regenerative braking. Using electricity from the grid to run the vehicle some or all of the time reduces operating costs and fuel use relative to conventional vehicles. Until 2010 most plug-in hybrids on the road in the U.S. were conversions of conventional hybrid electric vehicles, and the most prominent PHEVs were conversions of 2004 or later Toyota Prius, to which plug-in charging and more batteries were added, extending their electric-only range. Chinese battery manufacturer and automaker BYD Auto released the F3DM to the Chinese fleet market in December 2008 and began sales to the general public in Shenzhen in March 2010. General Motors began deliveries of the Chevrolet Volt in the U.S. in December 2010. Deliveries to retail customers of the Fisker Karma began in the U.S. in November 2011. During 2012, the Toyota Prius Plug-in Hybrid, Ford C-Max Energi, and Volvo V60 Plug-in Hybrid were released. The following models were launched between 2013 and 2015: Honda Accord Plug-in Hybrid, Mitsubishi Outlander P-HEV, Ford Fusion Energi, McLaren P1 (limited edition), Porsche Panamera S E-Hybrid, BYD Qin, Cadillac ELR, BMW i3 REx, BMW i8, Porsche 918 Spyder (limited production), Volkswagen XL1 (limited production), Audi A3 Sportback e-tron, Volkswagen Golf GTE, Mercedes-Benz S 500 e, Porsche Cayenne S E-Hybrid, Mercedes-Benz C 350 e, BYD Tang, Volkswagen Passat GTE, Volvo XC90 T8, BMW X5 xDrive40e, Hyundai Sonata PHEV, and Volvo S60L PHEV. Cumulatively, about 500,000 highway-capable plug-in hybrid electric cars had been sold worldwide since December 2008, out of total cumulative global sales of 1.2 million light-duty plug-in electric vehicles. The Volt/Ampera family of plug-in hybrids, with combined sales of about 134,500 units, is the top-selling plug-in hybrid in the world. Ranking next are the Mitsubishi Outlander P-HEV with about 119,500, and the Toyota Prius Plug-in Hybrid with almost 78,000. Biofuels Bioalcohol and ethanol The first commercial vehicle that used ethanol as a fuel was the Ford Model T, produced from 1908 through 1927. It was fitted with a carburetor with adjustable jetting, allowing use of gasoline or ethanol, or a combination of both. Other car manufacturers also provided engines for ethanol fuel use. In the United States, alcohol fuel was produced in corn-alcohol stills until Prohibition criminalized the production of alcohol in 1919. The use of alcohol as a fuel for internal combustion engines, either alone or in combination with other fuels, lapsed until the oil price shocks of the 1970s. It gained additional attention because of its possible environmental and long-term economic advantages over fossil fuel. Both ethanol and methanol have been used as automotive fuels. While both can be obtained from petroleum or natural gas, ethanol has attracted more attention because it is considered a renewable resource, easily obtained from sugar or starch in crops and other agricultural produce such as grain, sugarcane, sugar beets or even lactose. Since ethanol occurs in nature whenever yeast happens to find a sugar solution such as overripe fruit, most organisms have evolved some tolerance to ethanol, whereas methanol is toxic.
Other experiments involve butanol, which can also be produced by fermentation of plants. Support for ethanol comes from the fact that it is a biomass fuel, which addresses climate change and greenhouse gas emissions, though these benefits are now highly debated, including in the heated 2008 food vs. fuel debate. Most modern cars designed to run on gasoline are capable of running on blends of 10% to 15% ethanol mixed into gasoline (E10–E15). With a small amount of redesign, gasoline-powered vehicles can run on ethanol concentrations as high as 85% (E85), the maximum set in the United States and Europe due to cold weather during the winter, or up to 100% (E100) in Brazil, with its warmer climate. Ethanol has close to 34% less energy per volume than gasoline; consequently, fuel economy ratings with ethanol blends are significantly lower than with pure gasoline. This lower energy content does not translate directly into a 34% reduction in mileage, however, because many other variables affect the performance of a particular fuel in a particular engine, and because ethanol has a higher octane rating, which is beneficial to high-compression-ratio engines. For this reason, for pure or high ethanol blends to be attractive to users, their price must be lower than gasoline's to offset the lower fuel economy. As a rule of thumb, Brazilian consumers are frequently advised by the local media to use more alcohol than gasoline in their mix only when ethanol prices are 30% or more below gasoline prices, as the ethanol price fluctuates heavily depending on seasonal sugar cane harvests and by region. In the US, based on EPA tests for all 2006 E85 models, the average fuel economy for E85 vehicles was found to be 25.56% lower than with unleaded gasoline. The EPA-rated mileage of current American flex-fuel vehicles could be considered when making price comparisons, though E85 has an octane rating of about 104 and could be used as a substitute for premium gasoline. Regional retail E85 prices vary widely across the US, with more favorable prices in the Midwest region, where most corn is grown and ethanol produced. In August 2008 the US average spread between the price of E85 and gasoline was 16.9%, while it was 35% in Indiana, 30% in Minnesota and Wisconsin, 19% in Maryland, 12 to 15% in California, and just 3% in Utah. Depending on the vehicle's capabilities, the break-even price of E85 usually has to be between 25 and 30% lower than gasoline; a short arithmetic sketch of this break-even comparison follows this passage. Reacting to the high price of oil and its growing dependence on imports, in 1975 Brazil launched the Pro-alcool program, a huge government-subsidized effort to manufacture ethanol fuel (from its sugar cane crop) and ethanol-powered automobiles. These ethanol-only vehicles were very popular in the 1980s, but became economically impractical when oil prices fell, and sugar prices rose, late in that decade. In May 2003 Volkswagen built the first commercial ethanol flexible-fuel car, the Gol 1.6 Total Flex. These vehicles were a commercial success, and by early 2009 other manufacturers were producing flexible-fuel vehicles in Brazil: Chevrolet, Fiat, Ford, Peugeot, Renault, Honda, Mitsubishi, Toyota, Citroën, and Nissan. The adoption of the flex technology was so rapid that flexible-fuel cars reached 87.6% of new car sales in July 2008. As of August 2008, the fleet of "flex" automobiles and light commercial vehicles had reached 6 million new vehicles sold, representing almost 19% of all registered light vehicles.
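The break-even comparison above is simple arithmetic: if E85 delivers roughly 25% fewer miles per gallon, its pump price must be at least roughly 25% lower for the cost per mile to match. A minimal sketch of that calculation, using the EPA 25.56% average figure quoted above; the function names and the example gasoline price and mileage are illustrative inputs, not values from the source:

```python
# Back-of-envelope E85 vs. gasoline cost-per-mile comparison.
# The 25.56% fuel-economy penalty is the EPA 2006-model-year average
# quoted in the text; the gasoline price and mpg below are illustrative.

FUEL_ECONOMY_PENALTY = 0.2556  # E85 delivers ~25.56% fewer miles per gallon

def breakeven_e85_price(gasoline_price_per_gallon: float) -> float:
    """Highest E85 price (per gallon) that matches gasoline's cost per mile."""
    return gasoline_price_per_gallon * (1 - FUEL_ECONOMY_PENALTY)

def cost_per_mile(price_per_gallon: float, miles_per_gallon: float) -> float:
    return price_per_gallon / miles_per_gallon

if __name__ == "__main__":
    gas_price = 3.50   # $/gallon, illustrative
    gas_mpg = 25.0     # illustrative gasoline fuel economy
    e85_mpg = gas_mpg * (1 - FUEL_ECONOMY_PENALTY)

    be = breakeven_e85_price(gas_price)
    print(f"E85 must cost at most ${be:.2f}/gal "
          f"({FUEL_ECONOMY_PENALTY:.1%} below ${gas_price:.2f}/gal)")
    # Sanity check: at the break-even price the cost per mile is equal.
    assert abs(cost_per_mile(gas_price, gas_mpg)
               - cost_per_mile(be, e85_mpg)) < 1e-12
```

This matches the rule of thumb in the text: the required discount equals the fuel-economy penalty, i.e. roughly 25 to 30% depending on the vehicle.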
The rapid success of "flex" vehicles, as they are popularly known, was made possible by the existence of 33,000 filling stations with at least one ethanol pump available by 2006, a heritage of the Pro-alcool program. In the United States, initial government support for developing alternative fuels was also a response to the 1973 oil crisis, and later a means to improve air quality. Liquid fuels were preferred over gaseous fuels not only because they have a better volumetric energy density, but also because they were the fuels most compatible with existing distribution systems and engines, thus avoiding a big departure from existing technologies and taking advantage of the existing vehicle and refueling infrastructure. California led the search for sustainable alternatives with an interest in methanol. In 1996, a new FFV Ford Taurus was developed, with models fully capable of running on either methanol or ethanol blended with gasoline. The ethanol version of the Taurus was the first commercial production E85 FFV. The momentum of the FFV production programs at the American car companies continued, although by the end of the 1990s the emphasis was on the E85 FFV version, as it remains today. Ethanol was preferred over methanol because there was broad support in the farming community, and thanks to the government's incentive programs and corn-based ethanol subsidies. Sweden also tested both M85 and E85 flexifuel vehicles, but due to agricultural policy the emphasis was ultimately placed on ethanol flexifuel vehicles. Biodiesel The main benefit of diesel combustion engines is their fuel-burn efficiency of up to about 44%, compared with just 25–30% for the best gasoline engines. In addition, diesel fuel has slightly higher energy density by volume than gasoline. This makes diesel engines capable of achieving much better fuel economy than gasoline vehicles. Biodiesel (fatty acid methyl ester) is commercially available in most oilseed-producing states in the United States. As of 2005, it is somewhat more expensive than fossil diesel, though it is still commonly produced in relatively small quantities (in comparison to petroleum products and ethanol). Many farmers who raise oilseeds use a biodiesel blend in tractors and equipment as a matter of policy, to foster production of biodiesel and raise public awareness. It is sometimes easier to find biodiesel in rural areas than in cities. Biodiesel has a lower energy density than fossil diesel fuel, so biodiesel vehicles cannot quite match the fuel economy of a fossil-fuelled diesel vehicle if the diesel injection system is not adjusted for the new fuel. If the injection timing is changed to take account of the higher cetane value of biodiesel, the difference in economy is negligible. Because biodiesel contains more oxygen than diesel or vegetable oil fuel, it produces the lowest emissions from diesel engines, and is lower in most emissions than gasoline engines. Biodiesel has a higher lubricity than mineral diesel and is used as an additive in European pump diesel for lubricity and emissions reduction. Some diesel-powered cars can run with minor modifications on 100% pure vegetable oils. Vegetable oils tend to thicken (or solidify, in the case of waste cooking oil) in cold weather, so vehicle modifications (a two-tank system with a diesel start/stop tank) are essential in order to heat the fuel prior to use under most circumstances.
Heating the oil to engine-coolant temperature reduces its viscosity to the range cited by injection-system manufacturers for systems predating "common rail" or "unit injection" (VW PD) designs. Waste vegetable oil, especially if it has been used for a long time, may become hydrogenated and have increased acidity. This can cause thickening of the fuel, gumming in the engine, and acid damage to the fuel system. Biodiesel does not have this problem, because it is chemically processed to be pH-neutral and of lower viscosity. Modern low-emission diesels (most often Euro 3 and Euro 4 compliant), typical of current production in the European industry, operate at higher injection pressures and are designed for thinner (heated) mineral diesel than ever before to achieve atomisation; using pure vegetable oil as fuel would therefore require extensive modification of the injector system, pumps, seals, and so on. Vegetable oil fuel is not suitable for these vehicles as they are currently produced, and this reduces its market as increasing numbers of new vehicles are unable to use it. However, the German Elsbett company has successfully produced single-tank vegetable oil fuel systems for several decades, and has worked with Volkswagen on their TDI engines. This shows that it is technologically possible to use vegetable oil as a fuel in high-efficiency, low-emission diesel engines. Greasestock is an event held yearly in Yorktown Heights, New York, and is one of the largest showcases of vehicles using waste oil as a biofuel in the United States. Biogas Compressed biogas may be used for internal combustion engines after purification of the raw gas. The removal of H2O, H2S and particulates is standard, producing a gas of the same quality as compressed natural gas. Compressed natural gas High-pressure compressed natural gas (CNG), mainly composed of methane, is used to fuel normal combustion engines instead of gasoline. Combustion of methane produces the least amount of CO2 of all fossil fuels. Gasoline cars can be retrofitted to CNG and become bi-fuel natural gas vehicles (NGVs), as the gasoline tank is kept; the driver can switch between CNG and gasoline during operation. Natural gas vehicles are popular in regions or countries where natural gas is abundant. Widespread use began in the Po River Valley of Italy, and the technology later became very popular in New Zealand in the eighties, though its use there has since declined. As of 2017, there were 24.5 million natural gas vehicles worldwide, led by China (5.35 million), followed by Iran (4.0 million), India (3.05 million), Pakistan (3 million), Argentina (2.3 million), and Brazil (1.78 million). As of 2010, the Asia-Pacific region led the global market with a share of 54%. In Europe they are popular in Italy (730,000), Ukraine (200,000), Armenia (101,352), Russia (100,000) and Germany (91,500), and they are becoming more so as various manufacturers produce factory-made cars, buses, vans and heavy vehicles. In the United States, CNG-powered buses are the favorite choice of several public transit agencies, with an estimated CNG bus fleet of some 130,000. Other countries where CNG-powered buses are popular include India, Australia, Argentina, and Germany. CNG vehicles are common in South America, where they are mainly used as taxicabs in the main cities of Argentina and Brazil. Normally, standard gasoline vehicles are retrofitted in specialized shops, which involves installing the gas cylinder in the trunk and adding the CNG injection system and electronics.
The Brazilian GNV fleet is concentrated in the cities of Rio de Janeiro and São Paulo. Pike Research reports that almost 90% of NGVs in Latin America have bi-fuel engines, allowing these vehicles to run on either gasoline or CNG. Dual fuel A dual-fuel vehicle uses two types of fuel at the same time (gas + liquid, gas + gas, or liquid + liquid), stored in separate fuel tanks. Diesel-CNG dual fuel is a system that uses diesel and compressed natural gas (CNG) at the same time, because CNG needs a source of ignition for combustion in a diesel engine. Hybrid electric A hybrid vehicle uses multiple propulsion systems to provide motive power. The most common type is the gasoline-electric hybrid, which uses gasoline (petrol) and electric batteries to power an internal-combustion engine (ICE) and electric motors. These motors are usually relatively small and would be considered "underpowered" by themselves, but they can provide a normal driving experience when used in combination during acceleration and other maneuvers that require greater power. The Toyota Prius first went on sale in Japan in 1997 and has been sold worldwide since 2000. There are over 50 models of hybrid electric cars available in several world markets, with more than 12 million hybrid electric vehicles sold worldwide since their inception in 1997. Hydrogen A hydrogen car is an automobile which uses hydrogen as its primary source of power for locomotion. These cars generally use the hydrogen in one of two ways: combustion or fuel-cell conversion. In combustion, the hydrogen is "burned" in engines in fundamentally the same way as in traditional gasoline cars. The common internal combustion engine, usually fueled with gasoline (petrol) or diesel liquids, can be converted to run on gaseous hydrogen. This emits water at the point of use, though NOx can be produced during combustion with air. However, the most efficient use of hydrogen involves fuel cells and electric motors instead of a traditional engine. Hydrogen reacts with oxygen inside the fuel cells, which produces electricity to power the motors, with the only byproduct of the spent hydrogen being water. A small number of commercially available hydrogen fuel cell cars currently exist: the Hyundai NEXO, Toyota Mirai, and previously the Honda FCX Clarity. One primary area of research is hydrogen storage, which aims to increase the range of hydrogen vehicles while reducing the weight, energy consumption, and complexity of the storage systems. Two primary methods of storage are metal hydrides and compression. Some believe that hydrogen cars will never be economically viable and that the emphasis on this technology is a diversion from the development and popularization of more efficient battery electric vehicles. In the light road vehicle segment, by the end of 2022, 70,200 hydrogen fuel cell electric vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles. With the rapid rise of electric vehicles and associated battery technology and infrastructure, the global scope for hydrogen's role in cars is shrinking relative to earlier expectations. Electric, fed by external source Electric power fed from an external source to the vehicle is standard in railway electrification. In such systems the tracks usually form one pole, while the other is usually a single overhead wire or a rail insulated from ground.
On roads this system does not work as described, as normal road surfaces are very poor electrical conductors, so electric vehicles fed with external power on roads require at least two overhead wires. The most common type of road vehicle fed with electricity from an external source is the trolleybus, but there are also some trucks powered with this technology. The advantage is that the vehicle can be operated without breaks for refueling or charging. Disadvantages include: a large infrastructure of electric wires; difficulty in driving, as one has to prevent dewirement of the vehicle; vehicles cannot overtake each other; a danger of electrocution; and an aesthetic problem. Wireless transmission (see wireless power transfer) is possible in principle, but the infrastructure (especially wiring) necessary for inductive or capacitive coupling would be extensive and expensive. In principle it is also possible to transmit energy by microwaves or by lasers to the vehicle, but this may be inefficient and dangerous at the power required. Besides this, in the case of lasers a guidance system is required to track the vehicle being powered, as laser beams have a small diameter. Comparative assessment of fossil and alternative fuels Comparative assessments of conventional fossil and alternative fuel vehicles usually encompass more than in-use environmental impacts and running costs. They factor in issues like resource-extraction impacts (e.g. for battery manufacture or fossil fuel extraction), "well-to-wheel" efficiency, and the carbon intensity of electricity in different geographies. In general, the lifecycle greenhouse gas emissions of battery-electric vehicles are lower than emissions from hydrogen, PHEV, hybrid, compressed natural gas, gasoline, and diesel vehicles. BEVs have lower emissions than internal combustion engine vehicles even in places where electricity generation is relatively carbon-intensive, for example in China, where electricity is predominantly generated from coal. Other technologies Compressed-air engine The air engine is an emission-free piston engine that uses compressed air as a source of energy. The first compressed-air car was invented by a French engineer named Guy Nègre. The expansion of compressed air may be used to drive the pistons in a modified piston engine. Efficiency of operation is gained through the use of environmental heat at normal temperature to warm the otherwise cold expanded air from the storage tank. This non-adiabatic expansion has the potential to greatly increase the efficiency of the machine. The only exhaust is cold air (−15 °C), which could also be used to air-condition the car. The source of the air is a pressurized carbon-fiber tank. Air is delivered to the engine via a rather conventional injection system. A unique crank design within the engine increases the time during which the air charge is warmed from ambient sources, and a two-stage process allows improved heat transfer rates. Electric, stored by other means Electricity can also be stored in supercapacitors and superconductors. However, superconductor storage is unsuitable for vehicle propulsion, as it requires extremely low temperatures and produces strong magnetic fields. Supercapacitors, however, can be used in vehicles and are used in some trams on sections without overhead wire. They can be charged during regular stops, while passengers enter and leave the tram, but can only travel a few kilometres on the stored energy.
However, this is not a problem in this case, as the next stop is usually within reach. Solar A solar car is an electric vehicle powered by solar energy obtained from solar panels on the car. Solar panels cannot currently supply a car directly with a suitable amount of power, but they can be used to extend the range of electric vehicles. As of 2022, a handful of solar electric cars with varying performance are becoming commercially available, from Fisker and Lightyear, among others. Solar cars are raced in competitions such as the World Solar Challenge and the North American Solar Challenge. These events are often sponsored by government agencies, such as the United States Department of Energy, keen to promote the development of alternative energy technology such as solar cells and electric vehicles. Such challenges are often entered by universities, to develop their students' engineering and technological skills, as well as by motor vehicle manufacturers such as GM and Honda. Dimethyl ether fuel Dimethyl ether (DME) is a promising fuel in diesel engines, petrol engines (30% DME / 70% LPG), and gas turbines owing to its high cetane number, which is 55, compared to diesel's 40–53. Only moderate modifications are needed to convert a diesel engine to burn DME. The simplicity of this short-carbon-chain compound leads during combustion to very low emissions of particulate matter, NOx, and CO. For these reasons, as well as being sulfur-free, DME meets even the most stringent emission regulations in Europe (EURO5), the U.S. (U.S. 2010), and Japan (2009 Japan). Mobil uses DME in its methanol-to-gasoline process. DME is being developed as a synthetic second-generation biofuel (BioDME), which can be manufactured from lignocellulosic biomass. In 2006 the EU considered BioDME in its potential biofuel mix for 2030; the Volvo Group was the coordinator for the European Community Seventh Framework Programme project BioDME, in which Chemrec's BioDME pilot plant based on black liquor gasification was nearing completion in Piteå, Sweden. Ammonia fuelled vehicles Ammonia is produced by combining gaseous hydrogen with nitrogen from the air. Large-scale ammonia production uses natural gas as the source of hydrogen. Ammonia was used during World War II to power buses in Belgium, and in engine and solar energy applications prior to 1900. Liquid ammonia also fuelled the Reaction Motors XLR99 rocket engine that powered the X-15 hypersonic research aircraft. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches the density of the oxidizer, liquid oxygen, which simplified the aircraft's design. Ammonia has been proposed as a practical alternative to fossil fuel for internal combustion engines. The calorific value of ammonia is 22.5 MJ/kg (9690 BTU/lb), which is about half that of diesel. In a normal engine, in which the water vapour is not condensed, the calorific value of ammonia will be about 21% less than this figure. It can be used in existing engines with only minor modifications to carburettors/injectors. When ammonia is produced using coal, the CO2 emitted has the potential to be sequestered (the combustion products are nitrogen and water). Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid instead of steam or compressed air.
Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and in streetcars in New Orleans. In 1981 a Canadian company converted a 1981 Chevrolet Impala to operate using ammonia as fuel. Ammonia is being used with success by developers in Canada, since it can run in spark-ignited or diesel engines with minor modifications; it has also been described as the only green fuel able to power jet engines, and despite its toxicity it is reckoned to be no more dangerous than petrol or LPG. It can be made from renewable electricity and, having about half the energy density of petrol or diesel, can be readily carried in sufficient quantities in vehicles. On complete combustion it has no emissions other than nitrogen and water vapour. The combustion reaction is 4 NH3 + 3 O2 → 2 N2 + 6 H2O; water accounts for 75% of the product molecules (6 of the 8). Charcoal In the 1930s Tang Zhongming invented a charcoal-fuelled car, making use of China's abundant charcoal resources for the Chinese auto market. The charcoal-fuelled car was later used intensively in China, serving the army and transport services after the outbreak of World War II. Liquefied natural gas Liquefied natural gas (LNG) is natural gas that has been cooled to the point at which it becomes a cryogenic liquid. In this liquid state, natural gas is more than twice as dense as highly compressed CNG. LNG fuel systems function on any vehicle capable of burning natural gas. Unlike CNG, which is stored at high pressure (typically 3000 or 3600 psi) and then regulated down to a pressure the engine can accept, LNG is stored at low pressure (50 to 150 psi) and simply vaporized by a heat exchanger before entering the fuel-metering devices of the engine. Because of its high energy density compared to CNG, it is very suitable for those interested in long ranges while running on natural gas. In the United States, the LNG supply chain is the main factor that has held this fuel source back from growing rapidly. The LNG supply chain is very analogous to that of diesel or gasoline: first, pipeline natural gas is liquefied in large quantities, which is analogous to refining gasoline or diesel; then the LNG is transported via semi-trailer to fuel stations, where it is stored in bulk tanks until it is dispensed into a vehicle. CNG, on the other hand, requires expensive compression at each station to fill the high-pressure cylinder cascades. Autogas LPG, or liquefied petroleum gas, is a low-pressure liquefied gas mixture composed mainly of propane and butane, which burns in conventional gasoline combustion engines with less CO2 than gasoline. Gasoline cars can be retrofitted to LPG (autogas) and become bi-fuel vehicles, as the gasoline tank is not removed, allowing drivers to switch between LPG and gasoline during operation. There were 24.9 million LPG-powered vehicles worldwide as of December 2013, led by Turkey with 3.93 million, South Korea (2.4 million), and Poland (2.75 million). In the U.S., 190,000 on-road vehicles use propane, and 450,000 forklifts use it for power. However, as of December 2013 it is banned in Pakistan, where OGRA considers it a risk to public safety. Formic acid Formic acid can be used by first converting it to hydrogen, which is then used in a hydrogen fuel cell. It can also be used directly in formic acid fuel cells. Formic acid is much easier to store than hydrogen. Liquid nitrogen car Liquid nitrogen (LN2) is a method of storing energy. Energy is used to liquefy air, from which LN2 is then produced by evaporation and distributed.
LN2 is exposed to ambient heat in the car, and the resulting nitrogen gas can be used to power a piston or turbine engine. The maximum amount of energy that can be extracted from LN2 is 213 watt-hours per kg (W·h/kg) or 173 W·h per liter, of which a maximum of 70 W·h/kg can be utilized with an isothermal expansion process. Such a vehicle with a 350-liter (93 gallon) tank can achieve ranges similar to a gasoline-powered vehicle with a 50-liter (13 gallon) tank. Theoretical future engines, using cascading topping cycles, could improve this to around 110 W·h/kg with a quasi-isothermal expansion process. The advantages are zero harmful emissions and superior energy densities compared to a compressed-air vehicle, as well as the ability to refill the tank in a matter of minutes. Nuclear power In principle, it is possible to build a vehicle powered by nuclear fission or nuclear decay. However, there are two major problems. First, the energy, which is released as heat and radiation, has to be transformed into energy usable for propulsion. One possibility would be to use a steam turbine as in a nuclear power plant, but such a device would take up too much space; a more suitable way would be direct conversion into electricity, for example with thermoelements or thermionic devices. The second problem is that nuclear fission produces high levels of neutron and gamma radiation, which require such extensive shielding that the vehicle would be too large for use on public roads. Design studies along these lines were nevertheless made, such as the Ford Nucleon. A better approach for a nuclear-powered vehicle would be to use the power of radioactive decay in radioisotope thermoelectric generators, which are also very safe and reliable. The required shielding of these devices depends on the radionuclide used; plutonium-238, as a nearly pure alpha emitter, does not require much shielding. As prices for suitable radionuclides are high and the power density is low (generating 1 watt with plutonium-238 requires half a gram of it), this form of propulsion is too expensive for wide use. Radioisotope thermoelectric generators also contain large amounts of highly radioactive material and thus pose an extreme danger in case of misuse, for example by terrorists. The only vehicle in use that is driven by radioisotope thermoelectric generators is the Mars rover Curiosity. Other forms of nuclear power, such as fusion and annihilation, are at present not available for vehicle propulsion, as no working fusion reactor exists and it is questionable whether one could ever be built at a size suitable for a road vehicle. Annihilation may perhaps work in some ways (see antimatter drive), but no existing technology can produce and store enough antimatter. Pedal-assisted electric hybrid vehicle In very small vehicles the power demand decreases, so human power can be employed to make a significant improvement in battery life. Three such commercially made vehicles are the Sinclair C5, ELF and TWIKE. Flywheels Flywheels can also be used to store propulsion energy, and were used in the 1950s for the propulsion of buses in Switzerland, the so-called gyrobuses. The flywheel of the bus was charged with electric power at the terminals of the line, allowing it to travel up to 8 kilometres on flywheel energy alone. Flywheel-powered vehicles are quieter than vehicles with combustion engines, require no overhead wire and generate no exhaust, but the flywheel device has great weight (1.5 tons for 5 kWh) and requires special safety measures due to its high rotational speed; a back-of-envelope comparison of these figures follows below.
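The plutonium-238 and gyrobus figures quoted above can be turned into quick back-of-envelope numbers. A minimal sketch, assuming an illustrative 20 kW average driving power for the radioisotope case; that power figure is my assumption, not from the source:

```python
# Back-of-envelope checks on two figures quoted in the text.
# Source figures: Pu-238 yields ~1 W per 0.5 g; a gyrobus flywheel
# stored 5 kWh in a 1.5-tonne device.
# Assumption (not from the source): ~20 kW average driving power.

PU238_GRAMS_PER_WATT = 0.5     # from the text
AVG_DRIVE_POWER_W = 20_000     # illustrative assumption

pu238_mass_kg = AVG_DRIVE_POWER_W * PU238_GRAMS_PER_WATT / 1000
print(f"Pu-238 needed for {AVG_DRIVE_POWER_W / 1000:.0f} kW: "
      f"{pu238_mass_kg:.0f} kg")   # -> 10 kg, hence "too expensive"

FLYWHEEL_ENERGY_WH = 5_000     # 5 kWh, from the text
FLYWHEEL_MASS_KG = 1_500       # 1.5 tonnes, from the text

specific_energy = FLYWHEEL_ENERGY_WH / FLYWHEEL_MASS_KG
print(f"Gyrobus flywheel: {specific_energy:.1f} W·h/kg")
# -> ~3.3 W·h/kg, far below the ~70 W·h/kg quoted as usable from LN2,
# consistent with the bus managing only ~8 km between charging stops.
```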
Silanes Silanes higher than heptasilane can be stored like gasoline and may also work as fuel. They have the advantage that they can also burn with the nitrogen of the air, but have the major disadvantages of high price and solid combustion products, which cause trouble in combustion engines. Spring The power of wound-up springs or twisted rubber cords can be used for the propulsion of small vehicles. However, this form of energy storage holds only small amounts of energy, unsuitable for propelling vehicles that transport people. Typical spring-powered vehicles are wind-up toys and mousetrap cars. Steam A steam car is a car that has a steam engine. Wood, coal, ethanol, or other fuels can be used. The fuel is burned in a boiler and the heat converts water into steam. When the water turns to steam, it expands; the expansion creates pressure, and the pressure pushes the pistons back and forth. This turns the driveshaft, which spins the wheels and moves the car forward. It works like a coal-fuelled steam train or steam boat. The steam car was the next logical step in independent transport. Steam cars take a long time to start, but some can eventually reach speeds over 100 mph (161 km/h). The late-model Doble steam cars could be brought to operational condition in less than 30 seconds, and had high top speeds and fast acceleration, but were expensive to buy. A steam engine uses external combustion, as opposed to internal combustion. Gasoline-powered cars are more efficient, at about 25–28% efficiency. In theory, a combined-cycle steam engine, in which the burning material is first used to drive a gas turbine, can reach 50% to 60% efficiency. However, practical examples of steam-engined cars work at only around 5–8% efficiency. The best known and best selling steam-powered car was the Stanley Steamer. It used a compact fire-tube boiler under the hood to power a simple two-piston engine which was connected directly to the rear axle. Before Henry Ford introduced monthly payment financing with great success, cars were typically purchased outright; this is why the Stanley was kept simple, to keep the purchase price affordable. Steam produced in refrigeration can also be used by a turbine in other vehicle types to produce electricity, which can be employed in electric motors or stored in a battery. Steam power can be combined with a standard oil-based engine to create a hybrid: water is injected into the cylinder after the fuel is burned, while the piston is still superheated, often at temperatures of 1500 degrees or more. The water is instantly vaporized into steam, taking advantage of heat that would otherwise be wasted. Wind Wind-powered vehicles have been well known for a long time. They can be realized with sails similar to those used on ships, by using an onboard wind turbine that drives the wheels directly or generates electricity for an electric motor, or by being pulled by a kite. Wind-powered land vehicles need an enormous clearance in height, especially when sails or kites are used, and are unsuitable in urban areas. They may also be difficult to steer. Wind-powered vehicles are only used for recreational activities on beaches or other open areas. Wood gas Wood gas can be used to power cars with ordinary internal combustion engines if a wood gasifier is attached. This was quite popular during World War II in several European and Asian countries because the war prevented easy and cost-effective access to oil.
Herb Hartman of Woodward, Iowa currently drives a wood-powered Cadillac. He claims to have attached the gasifier to the Cadillac for just $700. "A full hopper will go about fifty miles depending on how you drive it," Hartman says, adding that splitting the wood is "labor-intensive. That's the big drawback."
Technology
Basics_7
null
25309794
https://en.wikipedia.org/wiki/Lycopodium%20powder
Lycopodium powder
Lycopodium powder is a yellow-tan dust-like powder, consisting of the dry spores of clubmoss plants or various fern relatives. When it is mixed with air, the spores are highly flammable and are used to create dust explosions as theatrical special effects. The powder was traditionally used in physics experiments to demonstrate phenomena such as Brownian motion. Composition The powder consists of the dry spores of clubmoss plants, or various fern relatives, principally in the genera Lycopodium and Diphasiastrum. The preferred source species are Lycopodium clavatum (stag's horn clubmoss) and Diphasiastrum digitatum (common groundcedar), because these widespread and often locally abundant species are both prolific in their spore production and easy to collect. Main uses Today, the principal use of the powder is to create flashes or flames that are large and impressive but relatively easy to manage safely, in magic acts and for cinema and theatrical special effects. Historically it was also used as a photographic flash powder. Both these uses rely on the same principle as a dust explosion, as the spores have a large surface area per unit of volume (a single spore's diameter is about 33 micrometers (μm)) and a high fat content. It is also used in fireworks and explosives, fingerprint powders, as a covering for pills, and as an ice cream stabilizer. Other uses Lycopodium powder is also sometimes used as a lubricating dust on skin-contacting latex (natural rubber) goods, such as condoms and medical gloves. In physics experiments and demonstrations, lycopodium powder can be used to make sound waves in air visible for observation and measurement, and to make a pattern of electrostatic charge visible. The powder is also highly hydrophobic; if the surface of a cup of water is coated with lycopodium powder, a finger or other object inserted straight into the cup will come out dusted with the powder but remain completely dry. Because of the very small size of its particles, lycopodium powder can be used to demonstrate Brownian motion. A microscope slide, with or without a well, is prepared with a droplet of water, and a fine dusting of lycopodium powder is applied. A cover-glass can then be placed over the water and spore sample to reduce convection in the water caused by evaporation. Under several hundred diameters of magnification, when the microscope is well focused on individual lycopodium particles, one will see the spore particles "dance" randomly. This is a response to the asymmetric collisional forces applied to the macroscopic (but still quite small) powder particle by microscopic water molecules in random thermal motion; a small simulation sketch of this random walk follows this passage. As a then-common laboratory supply, lycopodium powder was often used by inventors developing experimental prototypes. For example, Nicéphore Niépce used lycopodium powder in the fuel for one of the first internal combustion engines, the Pyréolophore, in about 1807, and Chester Carlson used lycopodium powder in 1938 in his early experiments to demonstrate xerography.
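The "dance" described above is, statistically, a random walk: each observed displacement is the sum of many tiny, uncorrelated molecular kicks, so a particle's mean squared displacement grows linearly with time. A minimal simulation sketch of that behaviour; the step size and particle count are illustrative choices, not measured values:

```python
# Minimal 2D random-walk model of Brownian "dancing" particles.
# Each time step adds a small Gaussian displacement, standing in for
# the net effect of many molecular collisions. Step scale and counts
# are illustrative, not physical measurements.
import math
import random

N_PARTICLES = 100
N_STEPS = 1000
STEP_SIGMA = 1.0  # arbitrary length units per step

def walk(n_steps: int) -> tuple[float, float]:
    """Return the final (x, y) of one particle after n_steps random kicks."""
    x = y = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, STEP_SIGMA)
        y += random.gauss(0.0, STEP_SIGMA)
    return x, y

finals = [walk(N_STEPS) for _ in range(N_PARTICLES)]
msd = sum(x * x + y * y for x, y in finals) / N_PARTICLES

# For this 2D walk the expected mean squared displacement is
# 2 * sigma^2 * n_steps (sigma^2 * n_steps per axis):
expected = 2 * STEP_SIGMA**2 * N_STEPS
print(f"simulated MSD ≈ {msd:.0f}, expected ≈ {expected:.0f}")
print(f"rms displacement ≈ {math.sqrt(msd):.1f} length units")
```

The linear growth of the mean squared displacement with time is the signature Jean Perrin used, following Einstein's 1905 analysis, to connect such microscope observations to molecular motion.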
Biology and health sciences
Lycophytes
Plants
23962701
https://en.wikipedia.org/wiki/Tropical%20monsoon%20climate
Tropical monsoon climate
An area of tropical monsoon climate (occasionally known as a sub-equatorial, tropical wet climate or a tropical monsoon and trade-wind littoral climate) is a tropical climate subtype that corresponds to the Köppen climate classification category Am. Tropical monsoon climates have monthly mean temperatures above 18 °C (64 °F) in every month of the year and a dry season. The tropical monsoon climate is the intermediate climate between the wet Af (or tropical rainforest climate) and the drier Aw (or tropical savanna climate). A tropical monsoon climate's driest month has on average less than 60 mm (2.4 in) of precipitation, but more than 100 − (total annual precipitation in mm / 25). This is in direct contrast to a tropical savanna climate, whose driest month has less than 60 mm of precipitation and also less than 100 − (total annual precipitation in mm / 25) of average monthly precipitation. In essence, a tropical monsoon climate tends either to have more rainfall than a tropical savanna climate or to have less pronounced dry seasons. A tropical monsoon climate tends to vary less in temperature during a year than a tropical savanna climate does. Its driest month nearly always occurs at or soon after the "winter" solstice. Versions There are generally two versions of a tropical monsoon climate: Less pronounced dry seasons. Regions with this variation of the tropical monsoon climate typically see copious amounts of rain during the wet season(s), usually in the form of frequent thunderstorms. Unlike most tropical savanna climates, a sizeable amount of precipitation also falls during the dry season(s), but not quite enough for a tropical rainforest classification. In essence, this version of the tropical monsoon climate generally has less pronounced dry seasons than tropical savanna climates. Extraordinarily rainy wet seasons and pronounced dry seasons. This variation features pronounced dry seasons similar in length and character to dry seasons observed in tropical savanna climates, followed by a sustained period (or sustained periods) of extraordinary rainfall. In some instances, up to (and sometimes in excess of) 1,000 mm of precipitation is observed per month for two or more consecutive months. Tropical savanna climates generally do not see this level of sustained rainfall. Area Tropical monsoon climates are most commonly found in Africa (West and Central Africa), Asia (South and Southeast Asia), South America and Central America. This climate also occurs in sections of the Caribbean, North America, and northern Australia. Factors The major controlling factor over a tropical monsoon climate is its relationship to the monsoon circulation. The monsoon is a seasonal change in wind direction. In Asia, during the summer (or high-sun season) there is an onshore flow of air (air moving from ocean toward land). In the "winter" (or low-sun season) an offshore air flow (air moving from land toward water) is prevalent. The change in direction is due to the difference in the way water and land heat. Changing pressure patterns that affect the seasonality of precipitation also occur in Africa, though they generally differ from the way the monsoon operates in Asia. During the high-sun season, the Intertropical Convergence Zone (ITCZ) induces rain. During the low-sun season, the subtropical high creates dry conditions. The monsoon climates of Africa, and of the Americas for that matter, are typically located along trade-wind coasts. A short sketch of the Köppen decision rule for Af, Am and Aw follows this passage.
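The thresholds above form a simple decision rule. A minimal sketch of the standard Köppen A-group test (the function and the sample station data are illustrative, not from the source; precipitation in mm, temperature in °C):

```python
# Minimal Köppen A-group classifier based on the thresholds in the text:
# tropical (all months >= 18 °C); Af if the driest month has >= 60 mm;
# Am if it has < 60 mm but >= 100 - (annual precipitation / 25);
# Aw otherwise. Sample data are illustrative, not from the source.

def koppen_a_group(monthly_temps_c: list[float],
                   monthly_precip_mm: list[float]) -> str:
    if min(monthly_temps_c) < 18.0:
        return "not a tropical (A) climate"
    driest = min(monthly_precip_mm)
    annual = sum(monthly_precip_mm)
    if driest >= 60.0:
        return "Af (tropical rainforest)"
    if driest >= 100.0 - annual / 25.0:
        return "Am (tropical monsoon)"
    return "Aw (tropical savanna)"

# Illustrative station: hot all year, a dry low-sun season, and a
# very wet monsoon peak.
temps = [26, 26, 27, 28, 28, 27, 27, 27, 27, 27, 26, 26]
precip = [40, 30, 50, 120, 350, 500, 600, 550, 400, 200, 90, 50]
print(koppen_a_group(temps, precip))
# -> Am: driest month 30 mm < 60, but 30 >= 100 - (2980 / 25)
```

Note how a high annual total lowers the Am cutoff: a station with very heavy wet-season rain can qualify as monsoonal even with an almost rainless driest month, which is exactly the second "version" described above.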
Countries and cities Asia Chittagong, Bangladesh Sylhet, Bangladesh (bordering on Cwa) Phuntsholing, Bhutan (bordering on Cwa) Sihanoukville, Cambodia Qionghai, China Wanning, China Wenchang, China Kochi, Kerala, India Mangalore, Karnataka, India Thiruvananthapuram, Kerala, India Bandung, Indonesia (bordering on Af) Jakarta, Indonesia Makassar, Indonesia Malang, Indonesia Semarang, Indonesia Surakarta, Indonesia Chichijima, Japan (bordering on Aw and Cfa) Alor Setar, Kedah, Malaysia Langkawi, Malaysia Malé, Maldives Yangon, Myanmar Baguio, Philippines (bordering on Cwb) Calamba, Philippines Manila, Philippines Quezon City, Philippines Batticaloa, Sri Lanka (bordering on As) Kaohsiung, Taiwan Pingtung, Taiwan Taitung, Taiwan Hat Yai, Thailand (bordering on Aw) Ko Samui, Thailand (bordering on Af) Narathiwat, Thailand Pattani, Thailand Cà Mau, Vietnam Da Nang, Vietnam Huế, Vietnam Oceania Cairns, Queensland, Australia Christmas Island, Australia Saipan, Northern Mariana Islands (bordering on Af) Africa Douala, Cameroon Kisangani, Democratic Republic of the Congo Bata, Equatorial Guinea Malabo, Equatorial Guinea Libreville, Gabon Conakry, Guinea Monrovia, Liberia Curepipe, Mauritius Port Harcourt, Nigeria Freetown, Sierra Leone Zanzibar City, Tanzania The Americas Belmopan, Belize Trinidad, Bolivia Aracaju, Brazil Recife, Brazil Maceió, Brazil Manaus, Brazil Barrancabermeja, Colombia Cali, Colombia Villavicencio, Colombia Roseau, Dominica Santo Domingo, Dominican Republic Cayenne, French Guiana (bordering on Af) La Ceiba, Honduras Coatzacoalcos, Mexico Villahermosa, Mexico Managua, Nicaragua Panama City, Panama Pucallpa, Peru Puerto Maldonado, Peru Basseterre, Saint Kitts and Nevis Nassau, The Bahamas (bordering on Aw) Port of Spain, Trinidad and Tobago Fort Myers, Florida, United States (bordering on Cfa) Miami, Florida, United States San Juan, Puerto Rico, United States Guanare, Venezuela Mérida, Venezuela Puerto Ayacucho, Venezuela
Physical sciences
Climates
Earth science
1280893
https://en.wikipedia.org/wiki/Cooking%20apple
Cooking apple
A cooking apple or culinary apple is an apple that is used primarily for cooking, as opposed to a dessert apple, which is eaten raw. Cooking apples are generally larger, and can be tarter than dessert varieties. Some varieties have a firm flesh that does not break down much when cooked. Culinary varieties with a high acid content produce froth when cooked, which is desirable for some recipes. Britain grows a large range of apples specifically for cooking. Worldwide, dual-purpose varieties (for both cooking and eating raw) are more widely grown. There are many apples that have been cultivated to have the firmness and tartness desired for cooking. Yet each variety of apple has unique qualities, and categories such as "cooking" or "eating" are suggestive rather than exact. How an apple will perform once cooked is tested by simmering a half-inch wedge in water until tender, then prodding it to see if its shape is intact. The apple can then be tasted to see how its flavour has been maintained and whether sugar should be added. Apples can be cooked down into sauce, apple butter, or fruit preserves. They can be baked in an oven and served with custard, and made into pies or apple crumble. In the UK, roast pork is commonly served with cold apple sauce made from boiled and mashed apples. A baked apple is baked in an oven until it has become soft. The core is usually removed before baking and the resulting cavity stuffed with fruits, brown sugar, raisins, or cinnamon, and sometimes a liquor such as brandy. An apple dumpling adds a pastry crust. John Claudius Loudon wrote on the subject in 1842. History Popular cooking apples in the US in the late 19th century: Tart varieties: Duchess of Oldenburg Fallawater Gravenstein Horse Keswick Codlin Red Astrachan Rhode Island Greening Tetofsky Sweet varieties: Golden Sweet Maverack Sweet Peach Pound Sweet Tolman Sweet Willis Sweet Popular cooking apples in early 20th-century England: Alfriston Beauty of Kent Bismark Bramley Cox Pomona Dumelow Ecklinville Emneth Early Golden Noble Grenadier Lord Grosvenor Lord Derby Newton Wonder Stirling Castle Warner's King Cooking apple cultivars D = Dual purpose (table + cooking); Cooking result: P = puree, K = keeps shape Alfriston P Allington K Annie Elizabeth K Antonovka P Arthur Turner P Baldwin Ballyfatten Bancroft Baron Ward Beacon Beauty of Kent P Belle de Boskoop K Bismarck apple P Black Amish D Black Twig D Blenheim Orange P - K Bloody Ploughman Bountiful Braeburn K Bramley P Crab apple (primarily for jelly) Burr Knot P Byflett Seedling P Byford Wonder K Calville Blanc d'hiver K Calville Rouge d'automne K Calville Rouge d'hiver P Campanino Carlisle Codlin P Carolina Red June Carter's Blue Catshead P Cellini P Charles Ross K Chelmsford Wonder P Cockle Pippin P Colloggett Pippin P - K 'Cortland' D Coul Blush Cox Pomona P - K Custard Danziger Kantapfel K Duchess of Oldenburg Dudley Winter Dumelow's Seedling P Edward VII P Emneth Early Esopus Spitzenburg D Fallawater Flower of Kent Galloway K Gennet Moyle George Neal Glockenapfel Ginger Gold Golden Noble Golden Pippin Golden Reinette P - K Golden Sweet Gragg Gravenstein Granny Smith D Greenup's Pippin P Grenadier Hambledon Deux Ans P - K Harrison Cider Apple Hawthornden P Howgate Wonder K Irish Peach Isaac Newton James Grieve D Jonathan D Jumbo Keswick Codlin P King of the Pippins K D Landsberger Reinette Lane's Prince Albert P Lodi Lord Derby P Lowell Maiden Blush Malinda McIntosh D My Jewel Newell-Kimzey (aka Airlie Red Flesh) Newton Wonder P Nickajack Norfolk Biffin K Northern greening
Northern Spy Oldenburg Paulared D Peasgood's Nonsuch P - K Pink Lady D Pinova Porter's Pott's Seedling Pumpkin Sweet apple Queen P Red Astrachan Red Prince Reverend W. Wilks P Rhode Island Greening Rome Beauty Sandow Scotch Bridget Scotch Dumpling Schoolmaster P Stirling Castle P Smokehouse Snow apple (aka Fameuse) Spartan Stayman Stirling Castle P Surprise K Tetofsky Tickled Pink Tolman Sweet Tom Putt Topaz Transparante de Croncels K Twenty Ounce K Wagener Warner's King P Wealthy D White Melrose White Transparent Winesap K D Wolf River K York Imperial D
Biology and health sciences
Pomes
Plants
1282836
https://en.wikipedia.org/wiki/Gigantopithecus
Gigantopithecus
Gigantopithecus is an extinct genus of ape that lived in southern China from 2 million to approximately 300,000 to 200,000 years ago, during the Early to Middle Pleistocene, represented by one species, Gigantopithecus blacki. Potential identifications have also been made in Thailand, Vietnam, and Indonesia. The first remains of Gigantopithecus, two third molar teeth, were identified in a drugstore by anthropologist Ralph von Koenigswald in 1935, who subsequently described the ape. In 1956, the first mandible and more than 1,000 teeth were found in Liucheng, and numerous more remains have since been found in at least 16 sites. Only teeth and four mandibles are currently known; other skeletal elements were likely consumed by porcupines before they could fossilise. Gigantopithecus was once argued to be a hominin, a member of the human line, but it is now thought to be closely allied with orangutans, classified in the subfamily Ponginae. Gigantopithecus has traditionally been restored as a massive, gorilla-like ape, nearly tall and potentially when alive, but the paucity of remains makes total size estimates highly speculative. The species may have been sexually dimorphic, with males much bigger than females. The incisors are reduced and the canines appear to have functioned like cheek teeth (premolars and molars). The premolars are high-crowned, and the fourth premolar is very molar-like. The molars are the largest of any known ape, and have a relatively flat surface. Gigantopithecus had the thickest enamel by absolute measure of any ape, up to 6 mm (a quarter of an inch) in some areas, though this is only fairly thick when tooth size is taken into account. Gigantopithecus appears to have been a generalist herbivore of C3 forest plants, with the jaw adapted to grinding, crushing, and cutting through tough, fibrous plants, and the thick enamel functioning to resist foods with abrasive particles, such as stems, roots, and tubers with adhering dirt. Some teeth bear traces of fig family fruits, which may have been important dietary components. It primarily lived in subtropical to tropical forest, and went extinct about 300,000 years ago, likely because of the retreat of its preferred habitat due to climate change, and potentially archaic human activity. Gigantopithecus has become popular in cryptozoology circles as the identity of the Tibetan yeti or the American bigfoot, apelike creatures in local folklore.

Discovery

Research history

Gigantopithecus blacki was named by anthropologist Ralph von Koenigswald in 1935 based on two third lower molar teeth, which, he noted, were of enormous size (the first was "Ein gewaltig grosser (...) Molar", "a tremendously large molar"; the second was described as one "der enorme Grösse besitzt", "which possesses enormous size"), measuring . The specific name blacki is in honour of Canadian palaeoanthropologist Davidson Black, who had studied human evolution in China and had died the previous year. Von Koenigswald, working for the Dutch East Indies Mineralogical Survey on Java, had found the teeth in a drugstore in Hong Kong, where they were being sold as "dragon bones" to be used in traditional Chinese medicine. By 1939, after purchasing more teeth, he determined they had originated somewhere in Guangdong or Guangxi. He could not formally describe the type specimen until 1952 due to his internment by Japanese forces during World War II. The originally discovered teeth are part of the collection of the University of Utrecht.
In 1955, a survey team led by Chinese palaeontologist Pei Wenzhong was tasked by the Chinese Institute of Vertebrate Palaeontology and Palaeoanthropology (IVPP) with finding the original Gigantopithecus locality. They collected 47 teeth among shipments of "dragon bones" in Guangdong and Guangxi. In 1956, the team discovered the first in situ remains, a third molar and a premolar, in a cave (subsequently named "Gigantopithecus Cave") in Niusui Mountain, Guangxi. Also in 1956, Liucheng farmer Xiuhuai Qin discovered more teeth and the first mandible in his field. From 1957 to 1963, the IVPP survey team carried out excavations in this area and recovered two more mandibles and more than 1,000 teeth. In 2014, a fourth confirmed mandible was discovered in Yanliang, Central China. As indicated by extensive rodent gnawing marks, the teeth primarily accumulated in caves likely due to porcupine activity. Porcupines gnaw on bones to obtain nutrients necessary for quill growth, and can haul large bones into their underground dens and consume them entirely, except for the hard, enamel-capped crowns of teeth. This may explain why teeth are typically found in great quantity, and why remains other than teeth are so rare. Confirmed Gigantopithecus remains have since been found in 16 different sites across southern China. The northernmost sites are Longgupo and Longgudong, just south of the Yangtze River, and the southernmost is on Hainan Island in the South China Sea. An isolated canine from Thẩm Khuyên Cave, Vietnam, and a fourth premolar from Pha Bong, Thailand, could possibly be assigned to Gigantopithecus, though these could also represent the extinct orangutan Pongo weidenreichi. Two mandibular fragments, each preserving the last two molars, from Semono in Central Java, Indonesia, described in 2016, could represent Gigantopithecus. The oldest remains date to 2.2 million years ago from Baikong Cave, and the youngest to 295 to 215 thousand years ago from Shuangtan and Gongjisgan Caves.

Classification

G. blacki

In 1935, von Koenigswald considered Gigantopithecus to be closely allied with the Late Miocene Sivapithecus from India. In 1939, South African palaeontologist Robert Broom hypothesised that it was closely allied with Australopithecus and the last common ancestor of humans and other apes. In 1946, German-Jewish anthropologist Franz Weidenreich described Gigantopithecus as a human ancestor, as "Gigantanthropus", believing that the human lineage went through a gigantic phase. He stated that the teeth are more similar to those of modern humans and Homo erectus (at the time "Pithecanthropus" for early Javan specimens), and envisioned a lineage from Gigantopithecus, to the Javan ape Meganthropus (then considered a human ancestor), to "Pithecanthropus", to "Javanthropus", and finally to Aboriginal Australians. This was part of his multiregional hypothesis, that all modern races and ethnicities evolved independently from a local archaic human species, rather than sharing a more recent and fully modern common ancestor. In 1952, von Koenigswald agreed that Gigantopithecus was a hominin, but believed it was an offshoot rather than a human ancestor. Debate over whether Gigantopithecus was a hominin continued for the next three decades, until the Out of Africa hypothesis overturned the Out of Asia and multiregional hypotheses, firmly placing humanity's origins in Africa. Gigantopithecus is now classified in the subfamily Ponginae, closely allied with Sivapithecus and Indopithecus.
This would make its closest living relatives the orangutans. However, there are few shared derived traits (synapomorphies) linking Gigantopithecus and orangutans, owing to the fragmentary remains; the main morphological argument is its close affinity to Sivapithecus, which is better established as a pongine based on skull features. In 2017, Chinese palaeoanthropologist Yingqi Zhang and American anthropologist Terry Harrison suggested that Gigantopithecus is most closely allied to the Chinese Lufengpithecus, which went extinct 4 million years before Gigantopithecus. In 2019, peptide sequencing of dentine and enamel proteins of a Gigantopithecus molar from Chuifeng Cave indicated that Gigantopithecus was indeed closely allied with orangutans and, assuming the current mutation rate in orangutans has remained constant, shared a common ancestor with them about 12–10 million years ago in the Middle to Late Miocene. Their last common ancestor would have been a part of the Miocene radiation of apes. The same study calculated a divergence time between the Ponginae and the African great apes of about 26–17.7 million years ago. Cladogram according to Zhang and Harrison, 2017:

"G. bilaspurensis"

In 1969, an 8.6 million year old mandible from the Sivalik Hills in northern India was classified as "G. bilaspurensis" by palaeontologists Elwyn L. Simons and Shiv Raj Kumar Chopra, who believed it was the ancestor of G. blacki. This bore resemblance to a molar discovered in 1915 in the Pakistani Pothohar Plateau, then classified as "Dryopithecus giganteus". Von Koenigswald reclassified "D. giganteus" in 1950 into its own genus, Indopithecus, but this was changed again in 1979 to "G. giganteus" by American anthropologists Frederick Szalay and Eric Delson, until Indopithecus was resurrected in 2003 by Australian anthropologist David W. Cameron. "G. bilaspurensis" is now considered a synonym of Indopithecus giganteus, leaving Gigantopithecus monotypic (with only one species), G. blacki.

Description

Size

Total size estimates are highly speculative because only tooth and jaw elements are known, and molar size and total body weight do not always correlate, as in the case of hominins with postcanine megadontia (small-bodied primates exhibiting massive molars and thick enamel). In 1946, Weidenreich hypothesised that Gigantopithecus was twice the size of male gorillas. In 1957, Pei estimated a total height of about and a weight of more than . In 1970, Simons and American palaeontologist Peter Ettel approximated a height of almost and a weight of up to , which is about 40% heavier than the average male gorilla. In 1979, American anthropologist Alfred E. Johnson Jr. used the dimensions of gorillas to estimate a femur length of and a humerus length of for Gigantopithecus, about 20–25% longer than those of gorillas. In 2017, Zhang and Harrison suggested a body mass of , though they conceded that it is impossible to obtain a reliable body mass estimate without more complete remains. This size would make Gigantopithecus the biggest primate ever recorded. The average maximum lengths of the upper canines for presumed males and females are and , respectively, and Mandible III (presumed male) is 40% larger than Mandible I (presumed female). These imply sexual dimorphism, with males being larger than females. Such a high degree of dimorphism is only surpassed by gorillas among modern apes in canine size, and is surpassed by none for mandibular disparity.
Teeth and jaws

Like other apes, Gigantopithecus had a dental formula of 2.1.2.3, with two incisors, one canine, two premolars, and three molars in each half of both the upper and the lower jaw. The canines, due to a lack of honing facets (which keep them sharp) and their overall stoutness, have been suggested to have functioned like premolars and molars (cheek teeth). Like other apes with enlarged molars, the incisors of Gigantopithecus are reduced. Wear on the tongue side of the incisors (the lingual face), which can extend as far down as the tooth root, suggests an underbite. Overall mandibular anatomy and tooth wear suggest a side-to-side movement of the jaw while chewing (lateral excursion). The incisors and canines have extremely long tooth roots, at least double the length of the tooth crown (the visible part of the tooth). These teeth were closely packed together. In the upper jaw, the first premolar (P3) averages in surface area, the second premolar (P4) , the first and/or second molars (M1/2, which are difficult to distinguish) , and the third molar (M3) . In the lower jaw, P3 averages , P4 , M1/2 , and M3 . The molars are the biggest of any known ape, and the teeth became progressively larger over the history of the genus. The premolars are high-crowned, and the lower premolars have two tooth roots, whereas the upper have three. The molars are low-crowned, long and narrow, and waisted at the midline—which is more pronounced in the lower molars—with low-lying and bulbous cusps and rounded-off crests. The tooth enamel on the molars is in absolute measure the thickest of any known ape, averaging in three different molars, and over on the tongue-side (lingual) cusps of an upper molar. This has attracted comparisons with the extinct Paranthropus hominins, which had extremely large molars and thick enamel for their size. However, in relation to the tooth's size, enamel thickness for Gigantopithecus overlaps with that of several other living and extinct apes. Like orangutans, and potentially all pongines (though unlike African apes), the Gigantopithecus molar had a large and flat (tabular) grinding surface, with an even enamel coating and short dentine horns (the areas of the dentine layer which project upwards into the top enamel layer). The molars are the most hypsodont (where the enamel extends beyond the gums) of any ape.

Palaeobiology

Diet

Gigantopithecus is considered to have been a herbivore. Carbon-13 isotope analysis suggests consumption of C3 plants, such as fruits, leaves, and other forest plants. The robust mandible of Gigantopithecus indicates it was capable of resisting high strains while chewing through tough or hard foods. However, the same mandibular anatomy is typically seen in modern apes which primarily eat soft leaves (folivores) or seeds (granivores). Gigantopithecus teeth have a markedly lower rate of pitting (caused by eating small, hard objects) than orangutans, more similar to the rate seen in chimpanzees, which could indicate a similarly generalist diet. The molar-like premolars, large molars, and long-rooted cheek teeth could point to chewing, crushing, and grinding of bulky and fibrous materials. Thick enamel would suggest a diet of abrasive items, such as dirt particles on food gathered near or on the ground (like bamboo shoots). Similarly, oxygen isotope analysis suggests Gigantopithecus consumed more low-lying plants such as stems, roots, and grasses than orangutans. Dental calculus indicates the consumption of tubers.
Gigantopithecus does not appear to have consumed the commonplace savanna grasses (C4 plants). Nonetheless, in 1990, a few opal phytoliths adhering to four teeth from Gigantopithecus Cave were identified as having originated from grasses, though the majority of the phytoliths resemble the hairs of fig family fruits, which include figs, mulberry, breadfruit and banyan. This suggests that fruit was a significant dietary component for at least this population of Gigantopithecus. The 400,000–320,000-year-old Middle Pleistocene teeth from Hejiang Cave in southeastern China (near the time of extinction) show some differences from Early Pleistocene material from other sites, which could potentially indicate that the Hejiang Gigantopithecus were a specialised form adapting to a changing environment with different food resources. The Hejiang teeth display a less level (more crenulated) outer enamel surface, due to the presence of secondary crests emanating from the paracone and protocone on the side of the molar closer to the midline (medially), as well as sharper major crests. That is, the teeth are not as flat. In 1957, based on hoofed animal remains in a cave located in a seemingly inaccessible mountain, Pei had believed that Gigantopithecus was a cave-dwelling predator and had carried these animals in. This hypothesis is no longer considered viable because its dental anatomy is consistent with herbivory. In 1975, American palaeoanthropologist Tim D. White drew similarities between the jaws and dentition of Gigantopithecus and those of the giant panda, and suggested they both occupied the same niche as bamboo specialists. This garnered support from some subsequent researchers, but thicker enamel and hypsodonty in Gigantopithecus could suggest different functionality for these teeth. An increasing reliance on fallback foods such as bark and twigs may also have contributed to the species' demise.

Growth

A Gigantopithecus permanent third molar, based on the approximately 600–800 days required for the enamel on the cusps to form (which is quite long), was estimated to have taken four years to form, which is within the range (albeit at the far upper end of the range) of what is exhibited in humans and chimpanzees. Like many other fossil apes, the rate of enamel formation near the enamel-dentine junction (dentine is the nerve-filled layer beneath the enamel) was estimated to begin at about 4 μm per day; in modern apes this is seen only in baby teeth. Protein sequencing of Gigantopithecus enamel identified alpha-2-HS-glycoprotein (AHSG), which, in modern apes, is important in bone and dentine mineralisation. Because it was found in enamel, and not dentine, AHSG may have been an additional component in Gigantopithecus which facilitated biomineralisation of enamel during prolonged amelogenesis (enamel growth).

Pathology

Gigantopithecus molars have a high cavity rate of 11%, which could mean fruit was commonly included in its diet. The molars from Gigantopithecus Cave frequently exhibit pitting enamel hypoplasia, where the enamel forms improperly, with pits and grooves. This can be caused by malnutrition during the growth years, which could point to periodic food shortages, though it can also be induced by other factors. Specimen PA1601-1 from Yanliang Cave shows evidence of loss of the right second molar before the eruption of the neighbouring third molar (which erupted at an angle), which suggests this individual was able to survive for a long time despite impaired chewing abilities.
Society

The high level of sexual dimorphism could indicate relatively intense male–male competition, though, considering the upper canines only projected slightly farther than the cheek teeth, canine display was probably not very important in agonistic behaviour, unlike in modern non-human apes.

Palaeoecology

Gigantopithecus remains are generally found in what were subtropical evergreen broadleaf forests in South China, except on Hainan, which featured a tropical rainforest. Carbon and oxygen isotope analysis of Early Pleistocene enamel suggests Gigantopithecus inhabited dense, humid, closed-canopy forest. Queque Cave featured a mixed deciduous and evergreen forest dominated by birch, oak, and chinkapin, as well as several low-lying herbs and ferns. The "Gigantopithecus fauna", one of the most important mammalian faunal groups of the Early Pleistocene of southern China, includes tropical or subtropical forest species. This group has been subdivided into three stages spanning 2.6–1.8 million years ago, 1.8–1.2 million years ago, and 1.2–0.8 million years ago. The early stage is characterised by more ancient Neogene animals such as the gomphotheriid proboscidean (a relative of elephants) Sinomastodon, the chalicothere Hesperotherium, the suid Hippopotamodon, the tragulid Dorcabune, and the deer Cervavitus. The middle stage is indicated by the appearance of the panda Ailuropoda wulingshanensis, the dhole Cuon antiquus, and the tapir Tapirus sinensis. The late stage features more typical Middle Pleistocene animals such as the panda Ailuropoda baconi and the stegodontid proboscidean Stegodon. Other classic animals typically include orangutans, macaques, rhinos, the extinct pigs Sus xiaozhu and Sus peii, muntjac, Cervus (a deer), gaur (a cow), the bovid Megalovis, and, more rarely, the large saber-toothed cat Megantereon. In 2009, American palaeoanthropologist Russell Ciochon hypothesised that an undescribed, chimp-sized ape he identified from a few teeth coexisted with Gigantopithecus; in 2019 it was identified as the closely related Meganthropus. Longgudong Cave may have represented a transitional zone between the Palaearctic and Oriental realms, featuring, alongside the typical Gigantopithecus fauna, more boreal animals such as hedgehogs, hyenas, horses, the bovid Leptobos, and pikas.

Extinction

Gigantopithecus fossil sites range across Guangxi, Guizhou, Hainan and Hubei Provinces, but those post-dating about 400,000 years ago are known only from Guangxi. Its youngest remains in China are roughly 295,000 to 215,000 years old, but there is a possible occurrence in the Late Pleistocene of Vietnam. The former date correlates with a cooling trend marked by intensifying seasonality and monsoon strength in the region, which led to the encroachment of open grasslands on rainforests. Because Gigantopithecus teeth dating to this time show evidence of dietary shifts and chronic nutritional stress, it may have been less successful at adapting to these environmental stressors than contemporary great apes — namely Pongo weidenreichi and Homo — which could have led to its extinction. Similarly, Gigantopithecus seems to have been consuming only C3 forest plants, rather than the C4 savannah plants which were becoming more common during this time. Savannas remained the dominant habitat of Southeast Asia until the Late Pleistocene.
Human activity in southern China is known from as early as 800,000 years ago, but did not become prevalent until after the extinction of Gigantopithecus, so it is unclear whether pressures such as competition over resources or overhunting were factors. Zhang et al. in 2024 suggested that there is no evidence of any archaic hominin involvement in the early extinctions of the Pleistocene of southern China.

Cryptozoology

Gigantopithecus has been used in cryptozoology circles as the identity of the Tibetan yeti or the American bigfoot, apelike monsters in local folklore. This began in 1960, when zoologist Wladimir Tschernezky briefly described in the journal Nature a 1951 photograph of alleged yeti tracks taken by Himalayan mountaineers Michael Ward and Eric Shipton. Tschernezky concluded that the yeti walked like a human and was similar to Gigantopithecus. Subsequently, the yeti attracted short-lived scientific attention, with several more authors publishing in Nature and Science, but this also incited a popular monster-hunting following for both the yeti and the similar American bigfoot which has persisted into the present day. The only scientist who continued trying to prove such monsters exist was anthropologist Grover Krantz, who kept pushing for a connection between Gigantopithecus and bigfoot from 1970 until his death in 2002. Among the binomial names he came up with for bigfoot was "Gigantopithecus canadensis". Scientists and amateur monster hunters alike dismissed Krantz's arguments, saying he readily accepted clearly false evidence.
Biology and health sciences
Apes
Animals
1283013
https://en.wikipedia.org/wiki/Spalax
Spalax
Spalax is a genus of rodent in the family Spalacidae, subfamily Spalacinae (blind mole-rats). It is one of two extant genera in the subfamily Spalacinae, alongside Nannospalax. Species in this genus are found in eastern Europe and western and central Asia. They are completely blind and have a subterranean lifestyle.

Taxonomy

Prior to 2013, Spalax was widely considered the only member of Spalacinae, with all blind mole-rat species being grouped within it. However, phylogenetic and morphological evidence supported some of the species within it forming a distinct lineage that diverged from the others during the Late Miocene, when a marine barrier formed between Anatolia and the Balkans. These species were reclassified into the genus Nannospalax, making Spalax one of two extant spalacine genera.

Species of genus Spalax:
Mehely's blind mole-rat, S. antiquus
Sandy blind mole-rat, S. arenarius
Giant blind mole-rat, S. giganteus
Bukovina blind mole-rat, S. graecus
Oltenia blind mole-rat, S. istricus (possibly extinct)
Greater blind mole-rat, S. microphthalmus
Kazakhstan blind mole-rat, S. uralensis
Podolsk blind mole-rat, S. zemni
Biology and health sciences
Rodents
Animals
1283638
https://en.wikipedia.org/wiki/Canopus
Canopus
Canopus is the brightest star in the southern constellation of Carina and the second-brightest star in the night sky. It is also designated α Carinae, which is romanized (transliterated) to Alpha Carinae. With a visual apparent magnitude of −0.74, it is outshone only by Sirius. Located around from the Sun, Canopus is a bright giant of spectral type A9, so it is essentially white when seen with the naked eye. It has a luminosity over 10,000 times that of the Sun, is eight times as massive, and has expanded to 71 times the Sun's radius. Its enlarged photosphere has an effective temperature of around . Canopus is undergoing core helium burning and is currently in the so-called blue loop phase of its evolution, having already passed through the red-giant branch after exhausting the hydrogen in its core. Canopus is a source of X-rays, which are likely being emitted from its corona.

The prominent appearance of Canopus means it has been the subject of mythological lore among many ancient peoples. Its proper name is generally considered to originate from the mythological Canopus, who was a navigator for Menelaus, king of Sparta. Its acronychal rising marked the date of the Ptolemaia festival in Egypt. In ancient India, it was named Agastya after the revered Vedic sage. For Chinese astronomers, it was known as the Old Man of the South Pole. In Islamic astronomy, it is Suhail or Suhayl, a name that is also commonly used to imply rareness of appearance, as Canopus appeared only infrequently to observers at Middle Eastern latitudes.

Nomenclature

The name Canopus is a Latinisation of the Ancient Greek name Κάνωβος/Kanôbos, recorded in Claudius Ptolemy's Almagest (c. 150 AD). Eratosthenes used the same spelling. Hipparchos wrote it as Κάνωπος. John Flamsteed wrote Canobus, as did Edmond Halley in his 1679 Catalogus Stellarum Australium. The name has two possible derivations, both listed in Richard Hinckley Allen's seminal Star Names: Their Lore and Meaning. The brightest star in the obsolete constellation of Argo Navis, which represented the ship used by Jason and the Argonauts, was given the name of a ship's pilot from another Greek legend: Canopus, pilot of Menelaus' ship on his quest to retrieve Helen of Troy after she was taken by Paris. Alternatively, a ruined ancient Egyptian port named Canopus lies near the mouth of the Nile, site of the Battle of the Nile. It is speculated that its name is derived from the Egyptian Coptic Kahi Nub ("Golden Earth"), which refers to how Canopus would have appeared near the horizon in ancient Egypt, reddened by atmospheric extinction from that position. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Canopus for this star. Canopus is now included in the IAU Catalog of Star Names.

Canopus traditionally marked the steering oar of the ship Argo Navis. German celestial cartographer Johann Bayer gave it—as the brightest star in the constellation—the designation α Argus (Latinised to Alpha Argus) in 1603. In 1763, French astronomer Nicolas Louis de Lacaille divided the huge constellation into three smaller ones, and hence Canopus became α Carinae (Latinised to Alpha Carinae). It is listed in the Bright Star Catalogue as HR 2326, the Henry Draper Catalogue as HD 45348, and the Hipparcos catalogue as HIP 30438.
Flamsteed did not number this southern star, but Benjamin Apthorp Gould gave it the number 7 (7 G. Carinae) in his Uranometria Argentina. An occasional name seen in English is Soheil, or the feminine Soheila; in Turkish it is Süheyl, or the feminine Süheyla, from the Arabic name for several bright stars, سهيل suhayl, and Canopus was known as Suhel in medieval times. Alternative spellings include Suhail, Souhail, Suhilon, Suheyl, Sohayl, Sohail, Suhayil, Shoel, Sohil, Soheil, Sahil, Suhayeel, Sohayil, Sihel, and Sihil. An alternative name was Wazn ("weight") or Haḍar ("ground"), implying the anchor stone used by a ship rather than being related to the star's low position near the horizon. Hence comes its name in the Alfonsine tables, Suhel ponderosus, a Latinisation of Al Suhayl al Wazn. Its Greek name was revived during the Renaissance.

Observation

The Arab Muslim astronomer Ibn Rushd went to Marrakesh (in Morocco) to observe the star in 1153, as it was invisible in his native Córdoba, Al-Andalus. He used its different visibility at different latitudes to argue that the Earth is round, following Aristotle's argument, which held that such an observation was only possible if the Earth was a relatively small sphere. English explorer Robert Hues brought Canopus to the attention of European observers in his 1592 work Tractatus de Globis, along with Achernar and Alpha Centauri, noting: "Now, therefore, there are but three Stars of the first magnitude that I could perceive in all those parts which are never seene here in England. The first of these is that bright Star in the sterne of Argo which they call Canobus. The second is in the end of Eridanus. The third is in the right foote of the Centaure."

In the Southern Hemisphere, Canopus and Sirius are both visible high in the sky simultaneously, and reach the meridian just apart. Brighter than first magnitude, Canopus can be seen with the naked eye in the early twilight. Mostly visible in mid to late summer in the Southern Hemisphere, Canopus culminates at midnight on December 27, and at 9 p.m. on February 11. When seen from latitudes south of S, Canopus is a circumpolar star. Since Canopus is so far south in the sky, it never rises in mid- to far-northern latitudes; in theory the northern limit of visibility is latitude north. This is just south of Athens, San Francisco, and Seoul, and very close to Seville and Agrigento. It is almost exactly the latitude of Lick Observatory on Mount Hamilton, California, from which it is readily visible because of the effects of elevation and atmospheric refraction, which add another degree to its apparent altitude. Under ideal conditions, it can be spotted as far north as latitude from the Pacific coast. Another northernmost record of visibility came from Mount Nemrut in Turkey, latitude . It is more easily visible in places such as the Gulf Coast and Florida, and on the island of Crete (Greece), where the best season for viewing it around 9 p.m. is during late January and early February.

Canopus has a B–V color index of +0.15—where 0 is blue-white—indicating it is essentially white, although it has been described as yellow-white. Canopus' spectral type has been given as F0 and as the incrementally warmer A9. It is less yellow than Altair or Procyon, whose indices have been measured as 0.22 and 0.42, respectively. Some observers may have perceived Canopus as yellow-tinged because it is low in the sky and hence subject to atmospheric effects. Patrick Moore said that it never appeared anything but white to him.
The bolometric correction for Canopus is 0.00, indicating that its visual absolute magnitude and bolometric absolute magnitude are equal. Canopus was previously proposed to be a member of the Scorpius–Centaurus association; however, it is not located near the subgroups of that association, and it has not been included as a Sco-Cen member in kinematic studies that used Hipparcos astrometric data. Canopus is not thought to be a member of any nearby young stellar group. In 2014, astronomer Eric Mamajek reported that an extremely magnetically active M dwarf (having strong coronal X-ray emission), 1.16 degrees south of Canopus, appears to share a common proper motion with Canopus. The projected separation of the M dwarf 2MASS J06234738-5351131 ("Canopus B") is approximately 1.9 parsecs. However, despite this large separation, it is still within the estimated tidal radius (2.9 parsecs) for the massive star Canopus.

Since it is more luminous than any star closer to Earth, Canopus has been the brightest star in the night sky during three epochs over the past four million years. Other stars appear brighter only during relatively temporary periods, during which they are passing the Solar System much closer than Canopus. About 90,000 years ago, Sirius moved close enough that it became brighter than Canopus, and it will remain so for another 210,000 years. But in 480,000 years, as Sirius moves further away and appears fainter, Canopus will once again be the brightest, and will remain so for a period of about 510,000 years.

Role in navigation

The southeastern wall of the Kaaba in Mecca is aligned with the rising point of Canopus, and is also named Janūb. The Bedouin people of the Negev and Sinai knew Canopus as Suhayl, and used it and Polaris as the two principal stars for navigation at night. Because it disappears below the horizon in those regions, it became associated with a changeable nature, as opposed to the always-visible Polaris, which was circumpolar and hence 'steadfast'. The south celestial pole can be approximately located using Canopus and two other bright stars. The first, Achernar, makes an equilateral triangle with Canopus and the south pole. One can also locate the pole more roughly using an imaginary line between Sirius and Canopus; Canopus will be approximately at the midpoint, being one way to Sirius and to the pole. Canopus's brightness and location well off the ecliptic make it useful for space navigation. Many spacecraft carry a special camera known as a "Canopus Star Tracker" plus a Sun sensor for attitude determination. Mariner 4 used Canopus for second-axis stabilisation (after locking on the Sun) in 1964, the first time a star had been used for this purpose.

Spectrum

Canopus was little studied by Western scientists before the 20th century. It was given a spectral class of F in 1897, an early use of this extension to Secchi class I, applied to those stars where the hydrogen lines are relatively weak and the calcium K line relatively strong. It was given as a standard star of F0 in the Henry Draper Catalogue, with the spectral type F0 described as having hydrogen lines half the strength of an A0 star and the calcium K line three times as strong as Hδ. American astronomer Jesse Greenstein was interested in stellar spectra and used the newly built Otto Struve Telescope at McDonald Observatory to analyze the star's spectrum in detail. In a 1942 paper, he reported that the spectrum is dominated by strong, broad hydrogen lines.
There are also absorption lines of carbon, nitrogen, oxygen, sulphur, iron, and many ionised metals. The star was studied in the ultraviolet during the Gemini XI mission in 1966. The UV spectra were considered to be consistent with an F0 supergiant having a temperature of , the accepted parameters for Canopus at the time. New Zealand-based astronomers John Hearnshaw and Krishna Desikachary examined the spectrum in greater detail, publishing their results in 1982. When luminosity classes were added to the MK spectral classification scheme, Canopus was assigned class Iab, indicating an intermediate-luminosity supergiant. This was based on the relative strengths of certain spectral lines understood to be sensitive to the luminosity of a star. In the Bright Star Catalogue, 5th edition, it is given the spectral class F0II, the luminosity class indicating a bright giant. Balmer line profiles and oxygen line strengths indicate the size and luminosity of Canopus. When the effects of stellar rotation speed on spectral lines are accounted for, the MK spectral class of Canopus is adjusted to A9II. Its spectrum consists mostly of absorption lines on a visible continuum, but some emission has been detected. For example, the calcium K line has weak emission wings on each side of the strong central absorption line, first observed in 1966. The emission line profiles are usually correlated with the luminosity of the star, as described by the Wilson–Bappu effect, but in the case of Canopus they indicate a luminosity much lower than that calculated by other methods. More detailed observations have shown that the emission line profiles are variable and may be due to plage areas on the surface of the star. Emission can also be found in other lines, such as the h and k lines of ionised magnesium.

Distance

Before the launch of the Hipparcos satellite telescope, distance estimates for Canopus varied widely, from 96 light-years to 1,200 light-years (or 30 to 370 parsecs). For example, an old distance estimate of 200 parsecs (652 light-years) gave it a luminosity of , far higher than modern estimates. The closer distance was derived from parallax measurements of around . The larger distance derives from the assumption of a very bright absolute magnitude for Canopus. Hipparcos established Canopus as being from the Solar System; this is based on its 2007 parallax measurement of . At 95 parsecs, the interstellar extinction for Canopus is low, at 0.26 magnitudes. Canopus is too bright to be included in the normal observation runs of the Gaia satellite, and there is no published Gaia parallax for it. At present the star is drifting farther away from the Sun with a radial velocity of 20 km/s. Some 3.1 million years ago it made its closest approach to the Sun, at a distance of about . Canopus is orbiting the Milky Way with a heliocentric velocity of 24.5 km/s and a low eccentricity of 0.065.

Physical characteristics

The absorption lines in the spectrum of Canopus shift slightly with a period of . This was first detected in 1906, and the Doppler variations were interpreted as orbital motion. An orbit was even calculated, but no such companion exists, and the small radial velocity changes are due to movements in the atmosphere of the star. The maximum observed radial velocities are only 0.7 to . Canopus also has a magnetic field that varies with the same period, detected by the Zeeman splitting of its spectral lines. Canopus is bright at microwave wavelengths, one of the few F-class stars to be detected by radio.
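The distance estimates in the Distance section above all rest on the same trigonometric-parallax relation: the distance in parsecs is the reciprocal of the parallax in arcseconds. A minimal sketch of that conversion in Python; the 10.43 mas input is an assumed, illustrative value, not a quoted measurement:

```python
# Convert a stellar parallax to distance.
# d [parsec] = 1 / p [arcsec]; 1 parsec ≈ 3.26156 light-years.
LY_PER_PARSEC = 3.26156

def parallax_to_distance(parallax_mas: float) -> tuple[float, float]:
    """Return (parsecs, light-years) for a parallax given in milliarcseconds."""
    parsecs = 1000.0 / parallax_mas
    return parsecs, parsecs * LY_PER_PARSEC

# Hypothetical Hipparcos-like parallax of ~10.4 mas:
pc, ly = parallax_to_distance(10.43)
print(f"{pc:.1f} pc ≈ {ly:.0f} light-years")  # ≈ 95.9 pc ≈ 313 light-years
```

Because distance scales as the inverse of parallax, the fractional uncertainty in the parallax propagates directly into a comparable fractional uncertainty in the distance.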
The rotation period of the star is not accurately known, but may be over three hundred days. The projected rotational velocity has been measured at 9 km/s. An early interferometric measurement of its angular diameter in 1968 gave a limb-darkened value of , close to the accepted modern value. Very-long-baseline interferometry has been used to calculate Canopus' angular diameter at . Combined with the distance calculated from its Hipparcos parallax, this gives it a radius of 71 times that of the Sun. If it were at the centre of the Solar System, it would extend 90% of the way to the orbit of Mercury. The radius and temperature relative to the Sun mean that it is 10,700 times more luminous than the Sun, and its position in the H–R diagram relative to theoretical evolutionary tracks means that it is about eight times as massive as the Sun. Measurements of its shape find a 1.1° departure from spherical symmetry.

Canopus is a source of X-rays, which are probably produced by its corona, magnetically heated to several million kelvins. This heating is likely stimulated by fast rotation combined with strong convection percolating through the star's outer layers. The soft, sub-coronal X-ray emission is much weaker than the hard coronal X-ray emission. The same behaviour has been measured in other F-class supergiants such as α Persei, and it is now believed to be a normal property of such stars.

Evolution

The spectrum of Canopus indicates that it spent some 30 million years of its existence as a blue-white main-sequence star of around 10 solar masses before exhausting its core hydrogen and evolving away from the main sequence. The position of Canopus in the H–R diagram indicates that it is currently in the core-helium-burning phase. It is an intermediate-mass star that left the red-giant branch before its core became degenerate and is now in a blue loop. Models of stellar evolution in the blue loop phase show that the length of the blue loop is strongly affected by rotation and mixing effects inside the star. It is difficult to determine whether a star is currently evolving towards hotter temperatures or returning to cooler ones, since the evolutionary tracks for stars with different masses overlap during the blue loops. Canopus lies on the warm side of the instability strip and does not pulsate like Cepheid variables of similar luminosity. However, its atmosphere does appear to be unstable, showing strong signs of convection. Canopus may be massive enough to end its life in an iron-core-collapse supernova.

Cultural significance

Canopus was known to the ancient Mesopotamians and represented the city of Eridu in the Three Stars Each Babylonian star catalogues and later in the MUL.APIN around 1100 BC. Canopus was called MUL.NUNKI by the Babylonians, which translates as "star of the city of Eridu". Eridu was the southernmost and one of the oldest Sumerian cities. From there the view to the south was good, so that about 6,000 years ago, owing to the precession of the Earth's axis, the first rising of Canopus in Mesopotamia could be observed only from there, on the southern meridian at midnight. Today, the star Sigma Sagittarii is known by the common name Nunki. Canopus was not visible to the mainland ancient Greeks and Romans; it was, however, visible to the ancient Egyptians. Hence Aratus did not write of the star, as it remained below the horizon, while Eratosthenes and Ptolemy—observing from Alexandria—did, calling it Kanōbos.
An Egyptian priestly poet in the time of Thutmose III mentions the star as Karbana, "the star which pours his light in a glance of fire, when he disperses the morning dew." Under the Ptolemies, the star was known as Ptolemaion (Greek: Πτολεμαῖον), and its acronychal rising marked the date of the Ptolemaia festival, which was held every four years from 262 to 145 BC. The Greek astronomer Posidonius used observations of Canopus to calculate the Earth's circumference quite accurately, around 90–120 BC.

India

In Indian Vedic literature, Canopus is associated with the sage Agastya, one of the ancient siddhars and rishis (the others are associated with the stars of the Big Dipper). Agastya, the star, is said to be the 'cleanser of waters', and its rising coincides with the calming of the waters of the Indian Ocean. Canopus is described by Pliny the Elder and Gaius Julius Solinus as the largest, brightest and only source of starlight for navigators near Tamraparni island (ancient Sri Lanka) during many nights.

China

Canopus was described as Shou Xing, the Star of Longevity, in the Shiji (Records of the Grand Historian), completed in 94 BC by Chinese historian Sima Qian. Drawing on sources from the Warring States period, he noted it to be the southern counterpart of Sirius, and wrote of a sanctuary dedicated to it established by Emperor Qin Shi Huang between 221 and 210 BC. During the Han dynasty, the star was auspicious, its appearance in the southern sky heralding peace and the absence of war. From the imperial capital Chang'an, the star made a low transit across the southern sky, indicating true south to observers, and was often obscured by clouds. During this time it was also equated with the Old Man of the South Pole. Under this name, Canopus appears (albeit misplaced northwards) on the medieval Chinese manuscript the Dunhuang Star Chart, although it cannot be seen from the Chinese capital of Chang'an. The Chinese astronomer Yi Xing journeyed south to chart Canopus and other far southern stars in 724 AD. Its personification as the Old Man Star was popularised during the Tang dynasty, when it appeared often in poetry and memorials. Later still, during the Ming dynasty, the star was established as one of the Three Stars (Fu Lu Shou), appearing frequently in art and literature of the time. This symbolism spread into neighbouring cultures in Asia. In Japan, Canopus is known as Mera-boshi and Roujin-sei (the old man star), and in Mongolia it was personified as the White Old Man. Although the link was known in Tibet, with names such as Genpo karpo (Rgan po dkar po) or Genkar (Rgan dkar), "White Old Man", the symbolism was not popular there. Instead, Canopus was more commonly named Karma Rishi སྐར་མ་རི་ཥི།, derived from Indian mythology. Tibetans celebrated the star's heliacal rising with ritual bathing and associated it with morning dew.

Polynesia

Bright stars were important to the ancient Polynesians for navigation between the many islands and atolls of the Pacific Ocean. Low on the horizon, they acted as stellar compasses to assist mariners in charting courses to particular destinations. Canopus served as the southern wingtip of a "Great Bird" constellation called Manu, with Sirius as the body and Procyon the northern wingtip, which divided the Polynesian night sky into two hemispheres. The Hawaiian people called Canopus Ke Alii-o-kona-i-ka-lewa, "The chief of the southern expanse"; it was one of the stars used by Hawaiʻiloa and Ki when they traveled to the Southern Ocean.
The Māori people of New Zealand/Aotearoa had several names for Canopus. Ariki ("High-born") was known as a solitary star that appeared in the east, prompting people to weep and chant. They also named it Atutahi, Aotahi or Atuatahi, "Stand Alone". Its solitary nature indicates it is a tapu star, as tapu people are often solitary. Its appearance at the beginning of the Maruaroa season foretells the coming winter: light rays to the south indicate a cold, wet winter, and to the north a mild one. Food was offered to the star on its appearance. This name has several mythologies attached to it. One story tells of how Atutahi was left outside the basket representing the Milky Way when Tāne wove it. Another related myth about the star says that Atutahi was the first-born child of Rangi, who refused to enter the Milky Way and so turned it sideways and rose before it. The same name is used for other stars and constellations throughout Polynesia. Kapae-poto ("Short horizon") referred to the fact that the star rarely sets as seen from New Zealand; Kauanga ("Solitary") was the name for Canopus only when it was the last star visible before sunrise. The people of the Society Islands had two names for Canopus, as did the Tuamotu people. The Society Islanders called Canopus Taurua-e-tupu-tai-nanu, "Festivity-whence-comes-the-flux-of-the-sea", and Taurua-nui-o-te-hiti-apatoa, "Great-festivity-of-the-border-of-the-south", and the Tuamotu people called the star Te Tau-rari and Marere-te-tavahi, the latter said to be the true name for the former, "He-who-stands-alone".

Africa

In the Guanche mythology of the island of Tenerife (Spain), the star Canopus was linked with the goddess Chaxiraxi. The Tswana people of Botswana knew Canopus as Naka. Appearing late in winter skies, it heralded increasing winds and a time when trees lose their leaves. Stock owners knew it was time to put their sheep with rams. In southern Africa, the Sotho, Tswana and Venda people called Canopus Naka or Nanga, "the Horn Star", while the Zulu and Swazi called it inKhwenkwezi, "Brilliant star". It appears in the predawn sky in the third week of May. According to the Venda, the first person to see Canopus would blow a phalaphala horn from the top of a hill, receiving a cow as a reward. The Sotho chiefs also awarded a cow, and ordered their medicine men to roll bone dice and read the fortune for the coming year. To the ǀXam-speaking Bushmen of South Africa, Canopus and Sirius signalled the appearance of termites and flying ants. They also believed that stars had the power to cause death and misfortune, and they would pray to Sirius and Canopus in particular to impart good fortune or skill. The ǃKung people of the Kalahari Desert in Botswana held Canopus and Capella to be the horns of tshxum (the Pleiades), the appearance of all three marking the end of the dry season and the start of the rainy season.

Americas

The Navajo observed the star and named it Maʼii Bizòʼ, the "Coyote Star". According to legend, Maʼii (Coyote) took part in the naming and placing of the star constellations during the creation of the universe. He placed Canopus directly south, naming it after himself. The Kalapalo people of Mato Grosso state in Brazil saw Canopus and Procyon as Kofongo ("Duck"), with Castor and Pollux representing his hands. The asterism's appearance signified the coming of the rainy season and an increase in manioc, a food staple fed to guests at feasts.

Australia

Canopus is identified as the moiety ancestor Waa ("Crow") by some Koori people in southeastern Australia.
The Boorong people of northwestern Victoria recalled that War (Canopus) was the brother of Warepil (Sirius), and that he brought fire from the heavens and introduced it to humanity. His wife was Collowgullouric War (Eta Carinae). The Pirt-Kopan-noot people of western Victoria tell of Waa ("Crow") falling in love with a queen, Gneeanggar ("Wedge-tailed Eagle", Sirius), and her six attendants (the Pleiades). His advances spurned, he hears that the women are foraging for grubs and so transforms himself into a grub. When the women dig him out, he changes into a giant and carries the queen off. The Kulin people know Canopus as Lo-an-tuka. Objects in the sky are also associated with states of being for some tribes; the Wailwun of northern New South Wales know Canopus as Wumba ("deaf"), alongside Mars as Gumba ("fat") and Venus as Ngindigindoer ("you are laughing"). Tasmanian Aboriginal lore holds that Canopus is Dromerdene, the brother of Moinee; the two fought and fell out of the sky, with Dromerdene falling into Louisa Bay in southwest Tasmania. Astronomer Duane Hamacher has identified Canopus with Moinee in a paper dating Tasmanian Aboriginal oral tradition to the late Pleistocene, when Canopus was much closer to the south celestial pole.

Legacy

Canopus appears on the flag of Brazil, symbolising the state of Goiás. Two U.S. Navy submarine tenders have been named after Canopus, the first serving from 1922 to 1942 and the second from 1965 to 1994. The Royal Navy built nine Canopus-class ships of the line in the early 19th century, and six Canopus-class battleships which entered service between 1899 and 1902. There are at least two mountains named after the star: Mount Canopus in Antarctica, and Mount Canopus or Canopus Hill in Tasmania, the location of the Canopus Hill astronomical observatory.

In popular culture

The fictional planet Arrakis, of Frank Herbert's 1965 novel Dune, orbits Canopus. Canopus is the home of superior and benevolent aliens in Doris Lessing's Canopus in Argos books. Canopus is a star system present in the video game Helldivers 2, host to a desert world.
Physical sciences
Notable stars
null
1283865
https://en.wikipedia.org/wiki/Hyperbolic%20triangle
Hyperbolic triangle
In hyperbolic geometry, a hyperbolic triangle is a triangle in the hyperbolic plane. It consists of three line segments called sides or edges and three points called angles or vertices. Just as in the Euclidean case, three points of a hyperbolic space of arbitrary dimension always lie on the same plane. Hence planar hyperbolic triangles also describe the triangles possible in any higher-dimensional hyperbolic space.

Definition

A hyperbolic triangle consists of three non-collinear points and the three segments between them.

Properties

Hyperbolic triangles have some properties that are analogous to those of triangles in Euclidean geometry:
Each hyperbolic triangle has an inscribed circle, but not every hyperbolic triangle has a circumscribed circle (see below); its vertices can lie on a horocycle or hypercycle.

Hyperbolic triangles have some properties that are analogous to those of triangles in spherical or elliptic geometry:
Two triangles with the same angle sum are equal in area.
There is an upper bound for the area of triangles.
There is an upper bound for the radius of the inscribed circle.
Two triangles are congruent if and only if they correspond under a finite product of line reflections.
Two triangles with corresponding angles equal are congruent (i.e., all similar triangles are congruent).

Hyperbolic triangles have some properties that are the opposite of the properties of triangles in spherical or elliptic geometry:
The angle sum of a triangle is less than 180°.
The area of a triangle is proportional to the deficit of its angle sum from 180°.

Hyperbolic triangles also have some properties that are not found in other geometries:
Some hyperbolic triangles have no circumscribed circle; this is the case when at least one of the vertices is an ideal point or when all of the vertices lie on a horocycle or on a one-sided hypercycle.
Hyperbolic triangles are thin: there is a maximum distance δ from a point on an edge to one of the other two edges. This principle gave rise to the notion of δ-hyperbolic space.

Triangles with ideal vertices

The definition of a triangle can be generalized, permitting vertices on the ideal boundary of the plane while keeping the sides within the plane. If a pair of sides is limiting parallel (i.e. the distance between them approaches zero as they tend to the ideal point, but they do not intersect), then they end at an ideal vertex represented as an omega point. Such a pair of sides may also be said to form an angle of zero. A triangle with a zero angle is impossible in Euclidean geometry for straight sides lying on distinct lines. However, such zero angles are possible with tangent circles. A triangle with one ideal vertex is called an omega triangle. Special triangles with ideal vertices are:

Triangle of parallelism: a triangle where one vertex is an ideal point and one angle is right; the third angle is the angle of parallelism for the length of the side between the right angle and the third angle.

Schweikart triangle: a triangle where two vertices are ideal points and the remaining angle is right; one of the first hyperbolic triangles (1818) described by Ferdinand Karl Schweikart.

Ideal triangle: a triangle where all vertices are ideal points; an ideal triangle is the largest possible triangle in hyperbolic geometry because of the zero sum of its angles.
Standardized Gaussian curvature

The relations among the angles and sides are analogous to those of spherical trigonometry; the length scale for both spherical geometry and hyperbolic geometry can, for example, be defined as the length of a side of an equilateral triangle with fixed angles. The length scale is most convenient if the lengths are measured in terms of the absolute length (a special unit of length analogous to a relation between distances in spherical geometry). This choice of length scale makes formulas simpler. In terms of the Poincaré half-plane model, absolute length corresponds to the infinitesimal metric $ds = \frac{\sqrt{dx^2 + dy^2}}{y}$, and in the Poincaré disk model to $ds = \frac{2\sqrt{dx^2 + dy^2}}{1 - x^2 - y^2}$. In terms of the (constant and negative) Gaussian curvature $K$ of a hyperbolic plane, a unit of absolute length corresponds to a length of $R = \frac{1}{\sqrt{-K}}$.

In a hyperbolic triangle the sum of the angles $A$, $B$, $C$ (respectively opposite to the sides with the corresponding lower-case letters) is strictly less than a straight angle. The difference between the measure of a straight angle and the sum of the measures of a triangle's angles is called the defect of the triangle. The area of a hyperbolic triangle is equal to its defect multiplied by the square of $R$:

$$\text{Area} = R^2 \,(\pi - A - B - C).$$

This theorem, first proven by Johann Heinrich Lambert, is related to Girard's theorem in spherical geometry.

Trigonometry

In all the formulas stated below the sides $a$, $b$, and $c$ must be measured in absolute length, a unit so that the Gaussian curvature of the plane is $-1$. In other words, the quantity $R$ in the paragraph above is supposed to be equal to 1. Trigonometric formulas for hyperbolic triangles depend on the hyperbolic functions sinh, cosh, and tanh.

Trigonometry of right triangles

If $C$ is a right angle then:

The sine of angle A is the hyperbolic sine of the side opposite the angle divided by the hyperbolic sine of the hypotenuse: $\sin A = \dfrac{\sinh a}{\sinh c}$.

The cosine of angle A is the hyperbolic tangent of the adjacent leg divided by the hyperbolic tangent of the hypotenuse: $\cos A = \dfrac{\tanh b}{\tanh c}$.

The tangent of angle A is the hyperbolic tangent of the opposite leg divided by the hyperbolic sine of the adjacent leg: $\tan A = \dfrac{\tanh a}{\sinh b}$.

The hyperbolic cosine of the leg adjacent to angle A is the cosine of angle B divided by the sine of angle A: $\cosh b = \dfrac{\cos B}{\sin A}$.

The hyperbolic cosine of the hypotenuse is the product of the hyperbolic cosines of the legs: $\cosh c = \cosh a \, \cosh b$.

The hyperbolic cosine of the hypotenuse is also the product of the cosines of the angles divided by the product of their sines: $\cosh c = \dfrac{\cos A \cos B}{\sin A \sin B} = \cot A \, \cot B$.

Relations between angles

We also have the following equations, obtained by rearranging and combining the relations above:

$\cos A = \cosh a \, \sin B$

$\sin A = \dfrac{\cos B}{\cosh b}$

$\tan A = \dfrac{\cot B}{\cosh c}$

Area

The area of a right-angled triangle is its defect, $\text{Area} = \frac{\pi}{2} - A - B$; equivalently, in terms of the legs, $\text{Area} = 2\arctan\left(\tanh\frac{a}{2}\,\tanh\frac{b}{2}\right)$. The area for any other triangle is its defect, $\text{Area} = \pi - A - B - C$.

Angle of parallelism

The instance of an omega triangle with a right angle provides the configuration to examine the angle of parallelism in the triangle. In this case angle $B = 0$ and $a = c = \infty$, so $\tanh c \to 1$, resulting in $\cos A = \tanh b$.

Equilateral triangle

The trigonometry formulas of right triangles also give the relations between the side $s$ and the angle $A$ of an equilateral triangle (a triangle where all sides have the same length and all angles are equal). Bisecting such a triangle into two right triangles yields the relations

$$\cos A = \frac{\tanh(s/2)}{\tanh s} \quad \text{and} \quad 2 \cosh\left(\frac{s}{2}\right) \sin\left(\frac{A}{2}\right) = 1.$$

General trigonometry

Whether $C$ is a right angle or not, the following relationships hold. The hyperbolic law of cosines is as follows:

$$\cosh c = \cosh a \, \cosh b - \sinh a \, \sinh b \, \cos C.$$

Its dual theorem is

$$\cos C = -\cos A \, \cos B + \sin A \, \sin B \, \cosh c.$$

There is also a law of sines:

$$\frac{\sin A}{\sinh a} = \frac{\sin B}{\sinh b} = \frac{\sin C}{\sinh c},$$

and a four-parts formula:

$$\cos C \, \cosh a = \sinh a \, \coth b - \sin C \, \cot B,$$

which is derived in the same way as the analogous formula in spherical trigonometry.
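Because the formulas in this section were reconstructed from their prose descriptions, a quick numerical sanity check is useful. The following Python sketch, with arbitrarily chosen leg lengths and curvature K = −1 as assumed above, builds a hyperbolic right triangle and verifies the stated identities and the defect–area relation:

```python
import math

# Hyperbolic right triangle with the right angle at C; curvature K = -1.
a, b = 0.8, 1.3  # the two legs, in absolute length (arbitrary choice)

# Hyperbolic "Pythagorean theorem": cosh c = cosh a * cosh b
c = math.acosh(math.cosh(a) * math.cosh(b))

# Angles from sin A = sinh a / sinh c (and likewise for B)
A = math.asin(math.sinh(a) / math.sinh(c))
B = math.asin(math.sinh(b) / math.sinh(c))

# Cross-check the remaining right-triangle relations
assert math.isclose(math.cos(A), math.tanh(b) / math.tanh(c))       # cos A = tanh b / tanh c
assert math.isclose(math.tan(A), math.tanh(a) / math.sinh(b))       # tan A = tanh a / sinh b
assert math.isclose(math.cosh(c), 1 / (math.tan(A) * math.tan(B)))  # cosh c = cot A cot B

# The general law of cosines reduces to the Pythagorean form when C = pi/2
assert math.isclose(
    math.cosh(c),
    math.cosh(a) * math.cosh(b) - math.sinh(a) * math.sinh(b) * math.cos(math.pi / 2),
)

# Angle sum is strictly below pi; the defect equals the area (R = 1)
defect = math.pi - (A + B + math.pi / 2)
print(f"angle sum = {math.degrees(A + B + math.pi/2):.2f} deg, area = defect = {defect:.4f}")
```

Note that cos(pi/2) evaluates to roughly 6e-17 rather than exactly zero in floating point, but the resulting residual is far below the comparison tolerance, so the check still passes.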
Mathematics
Non-Euclidean geometry
null
1284973
https://en.wikipedia.org/wiki/Nile%20crocodile
Nile crocodile
The Nile crocodile (Crocodylus niloticus) is a large crocodilian native to freshwater habitats in Africa, where it is present in 26 countries. It is widely distributed in sub-Saharan Africa, occurring mostly in the eastern, southern, and central regions of the continent, and lives in different types of aquatic environments such as lakes, rivers, swamps and marshlands. It occasionally inhabits deltas, brackish lakes and rarely also saltwater. Its range once stretched from the Nile Delta throughout the Nile River. Lake Turkana in Kenya has one of the largest undisturbed Nile crocodile populations. Generally, the adult male Nile crocodile is between in length and weighs . However, specimens exceeding in length and in weight have been recorded. It is the largest predator in Africa, and may be considered the second-largest extant reptile in the world, after the saltwater crocodile (Crocodylus porosus). Size is sexually dimorphic, with females usually about 30% smaller than males. The crocodile has thick, scaly, heavily armoured skin.

Nile crocodiles are opportunistic apex predators; a very aggressive species of crocodile, they are capable of taking almost any animal within their range. They are generalists, taking a variety of prey, with a diet consisting mostly of different species of fish, reptiles, birds, and mammals. As ambush predators, they can wait for hours, days, and even weeks for the suitable moment to attack. They are agile predators and wait for the opportunity for a prey item to come well within attack range. Even swift prey are not immune to attack. Like other crocodiles, Nile crocodiles have a powerful bite that is unique among all animals, and sharp, conical teeth that sink into flesh, allowing a grip that is almost impossible to loosen. They can apply high force for extended periods of time, a great advantage for holding down large prey underwater to drown.

Nile crocodiles are relatively social. They share basking spots and large food sources, such as schools of fish and big carcasses. Their strict hierarchy is determined by size. Large, old males are at the top of this hierarchy and have first access to food and the best basking spots. Crocodiles tend to respect this order; when it is infringed, the results are often violent and sometimes fatal. Like most other reptiles, Nile crocodiles lay eggs; these are guarded by the females but also by males, making the Nile crocodile one of the few reptile species in which males contribute to parental care. The hatchlings are also protected for a period of time, but hunt by themselves and are not fed by the parents. The Nile crocodile is one of the most dangerous species of crocodile and is responsible for hundreds of human deaths every year. It is common and is not endangered, despite some regional declines or extirpations in the Maghreb.

Etymology and naming

The binomial name Crocodylus niloticus is derived from the Ancient Greek κρόκη, kroke ("pebble"), δρῖλος, drilos ("worm"), referring to its rough skin; and niloticus, meaning "from the Nile River". The Nile crocodile is called timsah al-nil in Arabic, mamba in Swahili, yaxaas in Somali, garwe in Shona, ngwenya in Ndebele, ngwena in Venda, kwena in Sotho and Tswana, and tanin ha-yeor in Hebrew. It is also sometimes referred to as the African crocodile, Ethiopian crocodile, and common crocodile.

Taxonomy

Although no subspecies are currently formally recognized, as many as seven have been proposed, mostly due to variations in appearance and size noted in various populations throughout Africa.
These have consisted of C. n. africanus (informally named the East African Nile crocodile), C. n. chamses (the West African Nile crocodile), C. n. cowiei (the South African Nile crocodile), C. n. madagascariensis (the Malagasy or Madagascar Nile crocodile, regionally also known as the croco Mada, which translates to Malagasy crocodile), C. n. niloticus (the Ethiopian Nile crocodile; this would be the nominate subspecies), C. n. pauciscutatus (the Kenyan Nile crocodile) and C. (n.) suchus (now widely considered a separate species).

In a study of the morphology of the various populations, including C. (n.) suchus, the appearance of the Nile crocodile sensu lato was found to be more variable than that of any other currently recognized crocodile species, and at least some of these variations were related to locality. For example, a study on Lake Turkana in Kenya (informally, this population would be placed in C. n. pauciscutatus) found that the local crocodiles have more osteoderms in their ventral surface than other known populations, and thus are of lesser value in the leather trade, accounting for an exceptionally large (possibly overpopulated) local population there in the late 20th century. The segregation of the West African crocodile (C. suchus) from the Nile crocodile has been supported by morphological characteristics, studies of genetic materials and habitat preferences. The separation of the two is not recognized by the IUCN, as its last evaluations of the group were made in 2008 and 2009, years before the primary publications supporting the distinctiveness of the West African crocodiles.

Evolution

Although originally thought to be the same species as the West African crocodile, genetic studies using DNA sequencing have revealed that the Nile crocodile is actually more closely related to the crocodiles of the Americas, namely the American (C. acutus), Cuban (C. rhombifer), Morelet's (C. moreletii), and Orinoco crocodiles (C. intermedius). The fossil species C. checchiai from the Miocene in Kenya was about the same size as the extant African Nile crocodiles and shared similar physical characteristics with this specific species. At one time, the fossil species Rimasuchus lloydi was thought to be the closest relative of the Nile crocodile, but more recent research has indicated that Rimasuchus, despite its very large size (about 20–30% bigger than a Nile crocodile, with a skull length estimated up to ), is more closely related to the dwarf crocodile (Osteolaemus tetraspis) among living species. Two other fossil species from Africa retained in the genus Crocodylus appear to be closely related to the Nile crocodile: C. anthropophagus from Plio-Pleistocene Tanzania and C. thorbjarnarsoni from Plio-Pleistocene Kenya. C. anthropophagus and C. thorbjarnarsoni were both somewhat larger, with projected total lengths up to . As well as being larger, C. anthropophagus and C. thorbjarnarsoni, as well as Rimasuchus spp., were all relatively broad-snouted, indicating a specialization in hunting sizeable prey, such as large mammals and freshwater turtles, the latter much larger than any in present-day Africa. Studies have since shown these other African crocodiles to be only more distantly related to the Nile crocodile. Below is a cladogram based on a 2018 tip dating study by Lee & Yates simultaneously using morphological, molecular (DNA sequencing), and stratigraphic (fossil age) data, as revised by the 2021 Hekkala et al. paleogenomics study using DNA extracted from the extinct Voay.
Characteristics and physiology

Adult Nile crocodiles have a dark bronze colouration above, with faded blackish spots and stripes variably appearing across the back and a dingy off-yellow on the belly, although mud can often obscure the crocodile's actual colour. The flanks, which are yellowish-green in colour, have dark patches arranged in oblique stripes in highly variable patterns. Some variation occurs relative to environment; specimens from swift-flowing waters tend to be lighter in colour than those dwelling in murkier lakes or swamps, which provides camouflage that suits their environment, an example of clinal variation. Nile crocodiles have green eyes. The colouration also helps to camouflage them; juveniles are grey, multicoloured, or brown, with dark cross-bands on the tail and body. The underbelly of young crocodiles is yellowish green. As they mature, Nile crocodiles become darker and the cross-bands fade, especially those on the upper body. A similar tendency in coloration change during maturation has been noted in most crocodile species.

Most morphological attributes of Nile crocodiles are typical of crocodilians as a whole. Like all crocodilians, for example, the Nile crocodile is a quadruped with four short, splayed legs, a long, powerful tail, a scaly hide with rows of ossified scutes running down its back and tail, and powerful, elongated jaws. Their skin has a number of poorly understood integumentary sense organs that may react to changes in water pressure, presumably allowing them to track prey movements in the water. The Nile crocodile has fewer osteoderms on the belly, which are much more conspicuous on some of the more modestly sized crocodilians. The species, however, also has small, oval osteoderms on the sides of the body, as well as the throat. The Nile crocodile shares with all crocodilians a nictitating membrane to protect the eyes and lachrymal glands to cleanse its eyes with tears. The nostrils, eyes, and ears are situated on the top of the head, so the rest of the body can remain concealed under water. They have a four-chambered heart, although modified for their ectothermic nature due to an elongated cardiac septum, physiologically similar to the heart of a bird, which is especially efficient at oxygenating their blood. As in all crocodilians, Nile crocodiles have exceptionally high levels of lactic acid in their blood, which allows them to sit motionless in water for up to 2 hours. Levels of lactic acid as high as they are in a crocodile would kill most vertebrates. However, exertion by crocodilians can lead to death due to lactic acid increasing to lethal levels, which in turn leads to failure of the animal's internal organs. This is rarely recorded in wild crocodiles, normally having been observed in cases where humans have mishandled crocodiles and put them through overly extended periods of physical struggling and stress.

Skull and head morphology

The mouths of Nile crocodiles are filled with 64 to 68 sharply pointed, cone-shaped teeth (about a dozen fewer than alligators have). For most of a crocodile's life, broken teeth can be replaced. On each side of the mouth, five teeth are in the front of the upper jaw (premaxilla), 13 or 14 are in the rest of the upper jaw (maxilla), and 14 or 15 are on either side of the lower jaw (mandible). The enlarged fourth lower tooth fits into a notch in the upper jaw and is visible when the jaws are closed, as is the case with all true crocodiles.
Hatchlings quickly lose a hardened piece of skin on the top of their mouths called the egg tooth, which they use to break through their eggshells at hatching. Among crocodilians, the Nile crocodile possesses a relatively long snout, which is about 1.6 to 2.0 times as long as broad at the level of the front corners of the eyes. Like the saltwater crocodile, the Nile crocodile is considered a species with a medium-width snout relative to other extant crocodilian species. In a search for the largest crocodilian skulls in museums, the largest verifiable Nile crocodile skulls found were several housed in Arba Minch, Ethiopia, sourced from nearby Lake Chamo, which apparently included several specimens with a skull length of more than , with the largest one being in length with a mandibular length of . Nile crocodiles with skulls this size are likely to measure in the range of , which is also the length of the animals according to the museum where they were found. However, larger skulls may exist, as this study largely focused on crocodilians from Asia. The detached head of an exceptionally large Nile crocodile (killed in 1968 and measuring in length) was found to have weighed , including the large tendons used to shut the jaw.

Biting force

The bite force exerted by an adult Nile crocodile has been shown by Brady Barr to measure . However, the muscles responsible for opening the mouth are exceptionally weak, allowing a person to easily hold the jaws shut, and even larger crocodiles can be brought under control by the use of duct tape to bind the jaws together. The broadest-snouted modern crocodilians are alligators and larger caimans. For example, a black caiman (Melanosuchus niger) was found to have a notably broader and heavier skull than that of a Nile crocodile measuring . However, despite their robust skulls, alligators and caimans appear to be proportionately equal in biting force to true crocodiles, as the muscular tendons used to shut the jaws are similar in proportional size. Only the gharial (Gavialis gangeticus) (and perhaps some of the few very thin-snouted crocodilians) is likely to have noticeably diminished bite force compared to other living species, due to its exceptionally narrow, fragile snout. More or less, the size of the tendons used to impart bite force increases with body size, and the larger the crocodilian gets, the stronger its bite is likely to be. Accordingly, a male saltwater crocodile, which had attained a length of around , was found to have the most powerful biting force ever tested in a lab setting for any type of animal.

Size

The Nile crocodile is the largest crocodilian in Africa, and is generally considered the second-largest crocodilian after the saltwater crocodile. Typical size has been reported to be as much as , but this is excessive for the actual average size per most studies and represents the upper limit of sizes attained by the largest animals in a majority of populations. Alexander and Marais (2007) give the typical mature size as ; Garrick and Lang (1977) put it at . According to Cott (1961), the average length and weight of Nile crocodiles from Uganda and Zambia in breeding maturity was and . Per Graham (1968), the average length and weight of a large sample of adult crocodiles from Lake Turkana (formerly known as Lake Rudolf), Kenya, was and a body mass of . Similarly, adult crocodiles from Kruger National Park reportedly average in length.
In comparison, the saltwater crocodile and gharial reportedly both average around , so are about longer on average, and the false gharial (Tomistoma schlegelii) may average about , so may be slightly longer as well. However, compared to the narrow-snouted, streamlined gharial and false gharial, the Nile crocodile is more robust and ranks second to the saltwater crocodile in total average body mass among living crocodilians, and is considered to be the second-largest extant reptile. The largest accurately measured male, shot near Mwanza, Tanzania, measured and weighed about . Another large male measuring in total length (Cott 1961) was among the largest Nile crocodiles ever recorded. It was estimated to weigh .

Size and sexual dimorphism

Like all crocodiles, they are sexually dimorphic, with the males up to 30% larger than the females, though the difference is considerably less than in some species, like the saltwater crocodile. Male Nile crocodiles are about longer on average at sexual maturity and grow more than females after becoming sexually mature, especially expanding in bulk after exceeding in length. Adult male Nile crocodiles usually range in length from ; at these lengths, an average-sized male may weigh from . Very old, mature ones can grow to or more in length (all specimens over from 1900 onward are cataloged later). Large mature males can reach or more in weight. Mature female Nile crocodiles typically measure , at which lengths the average female specimen would weigh . An old male individual, named "Big Daddy", housed at Mamba Village Centre, Mombasa, Kenya, is considered to be one of the largest living Nile crocodiles in captivity. It measures in length and weighs . In 2007, at Katavi National Park, Brady Barr captured a specimen measuring in total length (with a considerable portion of its tail tip missing). The weight of this specimen was estimated to be , making it one of the largest crocodiles ever to be captured and released alive. The bulk and mass of individual crocodiles can be fairly variable, with some animals relatively slender and others very robust; females are often bulkier than males of a similar length. As an example of the body mass increase undergone by mature crocodiles, one of the larger crocodiles handled firsthand by Cott (1961) was and weighed , while the largest specimen measured by Graham and Beard (1973) was and weighed more than . One of the largest known specimens from South Africa, caught by J. G. Kuhlmann in Venda, was long and weighed . On the other hand, another individual measuring in length was estimated to weigh between . In attempts to parse the mean male and female lengths across the species, the mean adult length was estimated at in males, at which males would average about in weight, while that of females is , at which females would average about . This gives the Nile crocodile somewhat of a size advantage over the next-largest non-marine predator on the African continent, the lion (Panthera leo), which averages in males and in females, and attains a maximum known weight of , far less than that of large male crocodiles. Nile crocodiles from cooler climates, like the southern tip of Africa, may be smaller, reaching maximum lengths of only . A crocodile population from Mali, the Sahara Desert, and elsewhere in West Africa reaches only in length, but it is now largely recognized as a separate species, the West African crocodile.
Distribution and habitat

The Nile crocodile's current distribution extends from the regional tributaries of the Nile in Sudan and Lake Nasser in Egypt to the Cunene River in Angola, the Okavango Delta of Botswana, and the Olifants River in South Africa. It is the most common crocodilian in Africa and occurs in Somalia, Ethiopia, Uganda, Kenya, Central African Republic, the Democratic Republic of the Congo, Equatorial Guinea, Tanzania, Rwanda, Burundi, Zambia, Zimbabwe, Gabon, Malawi, Mozambique, Namibia, Sudan, South Sudan and Cameroon. Its historic range extended to the Mediterranean coast in Libya, Tunisia, Algeria, Morocco and across the Red Sea in the Palestine region and Syria. Herodotus sighted it in Lake Moeris in Egypt. It is thought to have become extinct in the Seychelles in the early 19th century. It has rarely been spotted in Zanzibar and the Comoros. An isolated population also exists in western and southern Madagascar, from the Sambirano River to Tôlanaro. It likely colonized the island after the extinction of the endemic crocodile Voay within the last 2,000 years. In 2022, a skull of Crocodylus from Madagascar was found to be around 7,500 years old based on radiocarbon dating, suggesting that the extinction of Voay post-dates the arrival of Nile crocodiles on Madagascar.

The Nile crocodile was previously thought to also occur in West and Central Africa, but these populations are now typically recognized as a distinct species, the West African (or desert) crocodile. The West African crocodile occurs throughout much of West and Central Africa, ranging east to South Sudan and Uganda, where it may come into contact with the Nile crocodile. The Nile crocodile is absent from most of West and Central Africa, but ranges into the latter region in the eastern and southern Democratic Republic of the Congo, and along the Central African Atlantic coast north to Cameroon. A level of habitat segregation likely occurs between the two species, but this remains to be confirmed.

Nile crocodiles can tolerate an extremely broad range of habitat types, including small brackish streams, fast-flowing rivers, swamps, dams, and tidal lakes and estuaries. In East Africa, they are found mostly in rivers, lakes, marshes, and dams, favoring open, broad bodies of water over smaller ones. They are often found in waters adjacent to various open habitats such as savanna or even semi-desert, but can also acclimate to well-wooded swamps, extensively wooded riparian zones, waterways of other woodlands and the perimeter of forests. In Madagascar, the remnant population of Nile crocodiles has adapted to living within caves. Nile crocodiles may make use of ephemeral watering holes on occasion. The Nile crocodile possesses salt glands like all true crocodiles and does on occasion enter coastal and even marine waters. They have been known to enter the sea in some areas, with one specimen having been recorded off St. Lucia Bay in 1917.

Invasive potential

Nile crocodiles have recently been captured in South Florida, though no signs that the population is reproducing in the wild have been found. Genetic studies of Nile crocodiles captured in the wild in Florida have revealed that the specimens are all closely related to each other, suggesting a single source of the introduction. This source remains unclear, as their genetics do not match samples collected from captives at various zoos and theme parks in Florida.
When compared to Nile crocodiles from their native Africa, the Florida wild specimens are most closely related to South African Nile crocodiles. It is unknown how many Nile crocodiles are currently at large in Florida. The animals likely were either brought there to be released or are escapees.

Behaviour

Generally, Nile crocodiles are relatively inert creatures, as are most crocodilians and other large, cold-blooded creatures. More than half of the crocodiles observed by Cott (1961), if not disturbed, spent the hours from 9:00 a.m. to 4:00 p.m. continuously basking with their jaws open if conditions were sunny. If their jaws are bound together in the extreme midday heat, Nile crocodiles may easily die from overheating. Although they can remain practically motionless for hours on end, whether basking or sitting in shallows, Nile crocodiles are said to be constantly aware of their surroundings and of the presence of other animals. However, mouth-gaping (while essential to thermoregulation) may also serve as a threat display to other crocodiles. For example, some specimens have been observed mouth-gaping at night, when overheating is not a risk. In Lake Turkana, crocodiles rarely bask at all through the day, unlike crocodiles from most other areas, for unknown reasons, usually sitting motionless partially exposed at the surface in shallows, with no apparent ill effects from the lack of basking on land. In South Africa, Nile crocodiles are more easily observed in winter because of the extensive amount of time they spend basking at this time of year. More time is spent in water on overcast, rainy, or misty days.

In the southern reaches of their range, as a response to dry, cool conditions that they cannot survive externally, crocodiles may dig and take refuge in tunnels and engage in aestivation. Pooley found in Royal Natal National Park that, during aestivation, young crocodiles of total length would mostly dig tunnels around in depth, with some tunnels measuring more than , the longest there being . Crocodiles in aestivation are lethargic, entering a state similar to that of animals that hibernate. Only the largest individuals engaging in aestivation leave the burrow to sun on the warmest days; otherwise, these crocodiles rarely leave their burrows. Aestivation has been recorded from May to August.

Nile crocodiles usually dive for only a few minutes at a time, but can swim under water for up to 30 minutes if threatened. If they remain fully inactive, they can hold their breath for up to 2 hours (which, as aforementioned, is due to the high levels of lactic acid in their blood). They have a rich vocal range and good hearing. Nile crocodiles normally crawl along on their bellies, but they can also "high walk" with their trunks raised above the ground. Smaller specimens can gallop, and even larger individuals are capable of occasional, surprising bursts of speed, briefly reaching up to . They can swim much faster, moving their bodies and tails in a sinuous fashion, and they can sustain this form of movement much longer than movement on land, with a maximum known swimming speed of .

Nile crocodiles have been widely known to have gastroliths in their stomachs, which are stones swallowed by animals for various purposes. Although this is clearly a deliberate behaviour for the species, the purpose is not definitively known.
Gastroliths are not present in hatchlings, but increase quickly in prevalence, being present within most crocodiles examined at , yet normally become extremely rare again in very large specimens, meaning that some animals may eventually expel them. However, large specimens can have a large number of gastroliths. One crocodile measuring and weighing had of stones in its stomach, perhaps a record gastrolith weight for a crocodile. Specimens shot near Mpondwe on the Semliki River had gastroliths in their stomachs despite being shot miles away from any sources for the stones; the same holds true for specimens from the Kafue Flats, the Upper Zambezi and the Bangweulu Swamp, all of which often had stones inside them despite being nowhere near stony regions. Cott (1961) felt that gastroliths most likely serve as ballast, providing stability and additional weight to sink in water, and considered this more probable than the theories that they assist in digestion or stave off hunger. However, Alderton (1998) stated that a study using radiology found that gastroliths were seen to internally aid the grinding of food during digestion for a small Nile crocodile.

Herodotus claimed that Nile crocodiles have a symbiotic relationship with a bird named the Trochilus, which enters the crocodile's mouth and picks leeches feeding on the crocodile's blood. Guggisberg (1972) had seen examples of birds picking scraps of meat from the teeth of basking crocodiles (without entering the mouth) and taking prey from soil very near basking crocodiles, so felt it was not impossible that a bold, hungry bird may occasionally nearly enter a crocodile's mouth, but not likely as a habitual behaviour. MacFarland and Reeder, reviewing the evidence in 1974, found that: "Extensive observations of Nile crocodiles in regular or occasional association with various species of potential cleaners (e.g. Plovers, Sandpipers, water dikkop) ... have resulted in only a few reports of sandpipers removing leeches from the mouth and gular scutes and snapping at insects along the reptile's body."

Hunting and diet

Nile crocodiles are apex predators throughout their range. In the water, this species is an agile and rapid hunter relying on both movement and pressure sensors to catch any prey unfortunate enough to present itself inside or near the waterfront. Out of water, however, the Nile crocodile can only rely on its limbs, as it gallops on solid ground, to chase prey. No matter where they attack prey, this and other crocodilians take practically all of their food by ambush, needing to grab their prey in a matter of seconds to succeed. They have an ectothermic metabolism, so can survive for long periods between meals. However, for such large animals, their stomachs are relatively small, not much larger than a basketball in an average-sized adult, so as a rule, they are anything but voracious eaters. Young crocodiles feed more actively than their elders, according to studies in Uganda and Zambia. In general, at the smallest sizes (), Nile crocodiles were most likely to have full stomachs (17.4% full per Cott); adults at in length were most likely to have empty stomachs (20.2%). In the largest size range studied by Cott, , they were the second most likely to either have full stomachs (10%) or empty stomachs (20%). Other studies have also shown a large number of adult Nile crocodiles with empty stomachs. For example, in Lake Turkana, Kenya, 48.4% of crocodiles had empty stomachs.
The stomachs of brooding females are always empty, meaning that they can survive several months without food. The Nile crocodile mostly hunts within the confines of waterways, attacking aquatic prey or terrestrial animals when they come to the water to drink or to cross. The crocodile mainly hunts land animals by almost fully submerging its body under water. Occasionally, a crocodile quietly surfaces so that only its eyes (to check positioning) and nostrils are visible, and swims quietly and stealthily toward its mark. The attack is sudden and unpredictable: the crocodile lunges out of the water and grasps its prey. On other occasions, more of its head and upper body is visible, especially when the terrestrial prey animal is on higher ground, to get a sense of the direction of the prey item at the top of an embankment or on a tree branch. Crocodile teeth are not used for tearing up flesh, but to sink deep into it and hold on to the prey item. The immense bite force, which may be as high as in large adults, ensures that the prey item cannot escape the grip.

Prey taken is often much smaller than the crocodile itself, and such prey can be overpowered and swallowed with ease. When it comes to larger prey, success depends on the crocodile's body power and weight to pull the prey item back into the water, where it is either drowned or killed by sudden thrashes of the head or by tearing it into pieces with the help of other crocodiles. Subadult and smaller adult Nile crocodiles use their bodies and tails to herd groups of fish toward a bank, and eat them with quick sideways jerks of their heads. Some crocodiles of the species may habitually use their tails to sweep terrestrial prey off balance, sometimes forcing the prey into the water, where it can be more easily drowned. They also cooperate, blocking migrating fish by forming a semicircle across the river. The most dominant crocodile eats first. Their ability to lie concealed with most of their bodies under water, combined with their speed over short distances, makes them effective opportunistic hunters of larger prey. They grab such prey in their powerful jaws, drag it into the water, and hold it underneath until it drowns. They also scavenge or steal kills from other predators, such as lions and leopards (Panthera pardus). Groups of Nile crocodiles may travel hundreds of meters from a waterway to feast on a carcass. They also feed on dead hippopotamuses (Hippopotamus amphibius) as a group (sometimes including three or four dozen crocodiles), tolerating each other. Much of the food in crocodile stomachs may come from scavenging carrion, and the crocodiles could be viewed as performing a similar function at times as do vultures or hyenas on land. Once their prey is dead, they rip off and swallow chunks of flesh. When groups are sharing a kill, they use each other for leverage, biting down hard and then twisting their bodies to tear off large pieces of meat in a "death roll". They may also get the necessary leverage by lodging their prey under branches or stones before rolling and ripping.

The Nile crocodile possesses unique predation behavior, characterized by its ability to prey both within water, where it is best adapted, and out of it, which often results in unpredictable attacks on almost any other animal up to twice its size. Most hunting on land is done at night by lying in ambush near forest trails or roadsides, up to 50 m (160 ft) from the water's edge.
Since their speed and agility on land are rather outmatched by most terrestrial animals, they must use obscuring vegetation or terrain to have a chance of succeeding during land-based hunts. In one case, an adult crocodile charged from the water up a bank to kill a bushbuck (Tragelaphus scriptus) and, instead of dragging it into the water, was observed to pull the kill further on land into the cover of the bush. Two subadult crocodiles were once seen carrying the carcass of a nyala (Tragelaphus angasii) across land in unison. In South Africa, a game warden in a savannah-scrub area far from water sources reported that he saw a crocodile jump up and grab a donkey by the neck and then drag the prey off. Small carnivores are readily taken opportunistically, including African clawless otters (Aonyx capensis).

Interspecific predatory relationships

Living in the rich biosphere of Africa south of the Sahara, the Nile crocodile may come into contact with other large predators. Its place in the ecosystems it inhabits is unique, as it is the only large tetrapod carnivore that spends the majority of its life in water and hunts prey associated with aquatic zones. Large mammalian predators in Africa are often social animals and obligated to feed almost exclusively on land. The Nile crocodile is a strong example of an apex predator. Outside water, crocodiles can meet competition from other dominant savannah predators, notably big cats, which in Africa are represented by lions, cheetahs, and leopards. In general, big cats and crocodiles have a relationship of mutual avoidance. Occasionally, if regular food becomes scarce, both lions and crocodiles will steal kills on land from each other and, depending on size, will be dominant over one another. Both species may be attracted to carrion, and may occasionally fight over both kills and carrion. Most conflicts over food occur near the water and can literally lead to a tug-of-war over a carcass that can end either way, although seldom is there any serious fighting or bloodshed between the large carnivores. Intimidation displays may also resolve these conflicts. However, when size differences are prominent, the predators may prey on each other.

Reproduction

On average, sexual maturity is attained from 12 to 16 years of age. For males, the onset of sexual maturity occurs when they are about long with a mass of , and is fairly consistent. That for females, on the other hand, is rather more variable, and may be indicative of the health of a regional population based on size at sexual maturity. On average, according to Cott (1961), female sexual maturity occurs when they reach in length. Similarly, a wide range of studies from southern Africa found that the average length for females at the onset of sexual maturity was . However, stunted sexual maturity appears to occur in populations at opposite extremes, both where crocodiles are thought to be overpopulated and where they have been overly reduced by heavy hunting, sometimes with females laying eggs when they measure as small as , although it is questionable whether such clutches would bear healthy hatchlings. According to Bourquin (2008), the average breeding female in southern Africa is between . Earlier studies support that breeding is often inconsistent in females less than and that clutch size is smaller: a female at reportedly never lays more than 35 eggs, while a female measuring can expect a clutch of up to 95 eggs. In "stunted" newly mature females from Lake Turkana measuring , the average clutch size was only 15.
Graham and Beard (1968) hypothesized that, while females, like males, do continue to grow throughout life, past a certain age and size females much over in length in Lake Turkana no longer breed (supported by the physiology of the females examined there); however, subsequent studies in Botswana and South Africa have found evidence of nesting females at least in length. In the Olifants River in South Africa, rainfall influenced the size of nesting females, as only larger females (greater than ) nested during the driest years. Breeding females along the Olifants were overall larger than those in Zimbabwe. Most females nest only every two to three years, while mature males may breed every year. During the mating season, males attract females by bellowing, slapping their snouts in the water, blowing water out of their noses, and making a variety of other noises. Among the larger males of a population, territorial clashes can lead to physical fighting between males, especially if they are near the same size. Such clashes can be brutal affairs and can end in mortality, but typically end with victor and loser still alive, the latter withdrawing into deep waters. Once a female has been attracted, the pair warble and rub the undersides of their jaws together. Compared to the tender behaviour of the female accepting the male, copulation is rather rough (even described as "rape"-like by Graham & Beard (1968)), in which the male often roars and pins the female underwater. Cott noted little detectable discrepancy between the mating habits of Nile crocodiles and American alligators. In some regions, males have reportedly mated with several females, perhaps any female that enters their claimed territory, though in most regions annual monogamy appears to be most common in this species.

Females lay their eggs about one to two months after mating. The nesting season can fall in nearly every month of the year. In the northern extremes of the distribution (i.e. Somalia or Egypt), the nesting season is December through February, while in the southern limits (i.e. South Africa or Tanzania), it is August through December. In populations between these extremes, egg-laying occurs in intermediate months, often focused between April and July. The dates correspond to about a month or two into the dry season within a given region. The benefits of this are presumably that the risk of nest flooding is considerably reduced at this time, and that the stage at which hatchlings begin their lives out of the egg falls roughly at the beginning of the rainy season, when water levels are still relatively low but insect prey is in recovery. Preferred nesting locations are sandy shores, dry stream beds, or riverbanks. The female digs a hole a few metres from the bank and up to 0.5 m (20 in) deep, and lays on average between 25 and 80 eggs. The number of eggs varies and depends partially on the size of the female. The most significant prerequisites for a nesting site are soil deep enough to permit the female to dig out the nest mound, shade to which the mother can retire during the heat of the day, and access to water. She finds a spot soft enough to allow her to dig a sideways-slanted burrow. The mother Nile crocodile deposits the eggs in the terminal chamber and packs the sand or earth back over the nest pit. While, like all crocodilians, the Nile crocodile digs out a hole for a nest site, unlike most other modern crocodilians, female Nile crocodiles bury their eggs in sand or soil rather than incubating them in rotting vegetation.
The female may urinate sporadically on the soil to keep it moist, which prevents the soil from hardening excessively. After burying the eggs, the female then guards them for the three-month incubation period. Nests have seldom been recorded in concealed positions such as under a bush or in grasses, but normally in open spots on the bank. It is thought the Nile crocodile cannot nest under heavy forest cover, as two of the three other African crocodiles can, because it does not use rotting leaves (a very effective method of producing heat for the eggs) and thus requires sunlight on the sand or soil surface of the egg chamber to provide the appropriate warmth for embryo development. In South Africa, the invasive plant Chromolaena odorata has recently exploded along banks traditionally used by crocodiles as nesting sites and has caused nest failures by blocking sunlight over the nest chamber. Where Nile crocodiles have been entirely free from disturbance in the past, they may nest gregariously, with nests lying so close together that after hatching time the rims of the craters are almost contiguous. These communal nesting sites are not known to exist today, perhaps being most recently recorded at the Ntoroko peninsula, Uganda, where two such sites remained until 1952. In one area, 17 craters were found in an area of ; in another, 24 in an area of . Communal nesting areas were also reported from Lake Victoria (up until the 1930s) and also in the 20th century at the Rahad River, Lake Turkana and Malawi.

The behaviour of the female Nile crocodile is considered unpredictable and may be driven by the regional extent of prior human disturbance and human persecution rather than natural variability. In some areas, the mother crocodile will only leave the nest if she needs to cool off (thermoregulation) by taking a quick dip or seeking out a patch of shade. A female will not leave the nest site even if rocks are thrown at her back, and several authors note her trance-like state while standing near the nest, similar to that of crocodiles in aestivation but unlike any other stage in their life-cycle. In such a trance, some mother Nile crocodiles may show no discernible reaction even if pelted with stones. At other times, the female will fiercely attack anything approaching her eggs, sometimes joined by another crocodile, which may be the sire of the young. In other areas, the nesting female may disappear upon potential disturbance, which may allow the presence of both the female and her buried nest to escape unwanted detection by predators. Despite the attentive care of both parents, the nests are often raided by humans and monitor lizards or other animals while the female is temporarily absent.

At about 90 days, the reported incubation period is notably shorter than that of the American alligator (110–120 days) but slightly longer than that of the mugger crocodile. Nile crocodiles have temperature-dependent sex determination (TSD), which means the sex of their hatchlings is determined not by genetics, as is the case in mammals and birds, but by the average temperature during the middle third of their incubation period. If the temperature inside the nest is below 31.7 °C (89.1 °F) or above 34.5 °C (94.1 °F), the offspring will be female. Males can only be born if the temperature is within that narrow range. The hatchlings start to make a high-pitched chirping noise before hatching, which is the signal for the mother to rip open the nest.
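Because the sex rule just described is a pure temperature threshold, it can be captured in a few lines. The following minimal Python sketch is purely illustrative (the function name is invented here; only the 31.7 °C and 34.5 °C cut-offs come from the text above):

```python
def hatchling_sex(mean_mid_third_temp_c: float) -> str:
    """Nile crocodile temperature-dependent sex determination (TSD).

    `mean_mid_third_temp_c` is the average nest temperature (deg C) during
    the middle third of incubation. The thresholds are the ones quoted in
    the text; behaviour exactly at the boundaries is an assumption here.
    """
    if 31.7 <= mean_mid_third_temp_c <= 34.5:
        return "male"    # males hatch only within this narrow band
    return "female"      # below 31.7 C or above 34.5 C yields females

print(hatchling_sex(33.0))  # -> male
print(hatchling_sex(30.0))  # -> female
```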
It is thought to be either difficult or impossible for hatchlings to escape the nest burrow without assistance, as the surface may become very heavy and packed above them. The mother crocodile may pick up the eggs in her mouth and roll them between her tongue and upper palate to help crack the shells and release her offspring. Once the eggs hatch, the female may lead the hatchlings to water, or even carry them there in her mouth, as female American alligators have been observed doing. Hatchling Nile crocodiles are between long at first and weigh around . The hatchlings grow approximately that length each year for the first several years. The new mother will protect her offspring for up to two years, and if there are multiple nests in the same area, the mothers may form a crèche. During this time, the mothers may pick up their offspring either in their mouths or gular fold (throat pouch) to keep the babies safe. The mother will sometimes carry her young on her back to avoid the natural predators of small crocodiles, which can be surprisingly bold even with the mother around. Nile crocodiles under two years old are much more rarely observed than larger specimens, and more seldom seen than young of the same age in several other types of crocodilian. Young crocodiles are shy and evasive due to the formidable array of predators that they must face in sub-Saharan Africa, spending little time sunning and moving about nocturnally whenever possible. Crocodiles two years old and younger may spend a surprising amount of time on land, as evidenced by the range of terrestrial insects found in their stomachs, and their lifestyle may resemble that of a semi-aquatic mid-sized lizard more than the very aquatic lives of older crocodiles. At the end of the two years, the hatchlings will be about long, and will naturally depart the nest area, avoiding the territories of older and larger crocodiles. After this stage, crocodiles may loosely associate with similarly sized crocodiles, and many enter feeding congregations of crocodiles once they attain , at which size predators and cannibal crocodiles become much less of a concern. Crocodile longevity is not well established, but larger species like the Nile crocodile live longer, and may have a potential maximum lifespan of 70 to 100 years, though no crocodilian species commonly exceeds a lifespan of 50 to 60 years in captivity.

Natural mortality of young Nile crocodiles

An estimated 10% of eggs survive to hatch, and a mere 1% of young that hatch successfully reach adulthood. The full range of causes of mortality of young Nile crocodiles is not well understood, as very young and small Nile crocodiles or well-concealed nests are only sporadically observed. Unseasonable flooding (during nesting, which corresponds with the regional dry season) is not uncommon and has probably destroyed several nests, although the statistical likelihood of such an event is not known. The only aspect of mortality in this age range that is well studied is predation, and this is most likely the primary cause of death while the saurians are still diminutive. The single most virulent predator of nests is almost certainly the Nile monitor. This predator can destroy about 50% of studied Nile crocodile eggs on its own, often succeeding (as do other nest predators) in light of the trance-like state that the mother crocodile enters while brooding, or by taking advantage of moments when she is distracted or needs to leave the nest.
In comparison, perenties (Varanus giganteus) (the Australian ecological equivalent of the Nile monitor) succeed in depredating about 90% of freshwater crocodile (Crocodylus johnsoni) eggs and about 25% of saltwater crocodile nests. Mammalian predators can take nearly as heavy a toll, especially large mongooses such as the Egyptian mongoose (Herpestes ichneumon) in the north and the water mongoose in the south of the crocodile's range. Opportunistic mammals that attack Nile crocodile nests have included wild pigs, medium-sized wild cats and baboon troops. Like Nile monitors, mammalian predators probably locate crocodile nests by scent, as the padded-down mound is easy to miss visually. Marabou storks sometimes follow monitors to pirate crocodile eggs for themselves to consume, although they can also dig out nests on their own with their massive, awl-like bills if they can visually discern the nest mound. Predators of Nile crocodile eggs have ranged from insects such as the red flour beetle (Tribolium castaneum) to predators as large and formidable as spotted hyenas (Crocuta crocuta).

Unsurprisingly, once exposed to the elements as hatchlings, the young, small Nile crocodiles are even more vulnerable. Most of the predators of eggs also opportunistically eat young crocodiles, including monitors and marabous, plus almost all co-existing raptorial birds, including vultures, eagles, and large owls and buzzards. Many large waders are virulent predators of crocodile hatchlings, from dainty little egrets (Egretta garzetta) and compact hamerkops (Scopus umbretta) to towering saddle-billed storks (Ephippiorhynchus senegalensis), goliath herons and shoebills (Balaeniceps rex). Larger corvids and some non-wading water birds (i.e. pelicans) can also take some young Nile crocodiles. Mammalian carnivores take many hatchlings, as do large turtles and snakes and large predatory freshwater fish, such as the African tigerfish, the introduced largemouth bass, and possibly bull sharks when they enter river systems. When crocodile nests are dug out and the young placed in water by the mother, in areas such as Royal Natal National Park, predators can essentially enter a feeding frenzy. It may take a few years before predation is no longer a major cause of mortality for young crocodiles. African fish eagles can take crocodile hatchlings up to a few months of age, and honey badgers can prey on yearlings. Once they reach their juvenile stage, large African rock pythons and big cats remain the only predatory threats to young crocodiles. Perhaps no predator is more deadly to young Nile crocodiles than larger crocodiles of their own species, as, like most crocodilians, they are cannibalistic. The species may be particularly dangerous to its own kind considering its aggressive disposition. While the mother crocodile will react aggressively toward potential predators, and has been recorded chasing and occasionally catching and killing such interlopers into her range, due to the sheer number of animals that feed on baby crocodiles and the large number of hatchlings, she is more often unsuccessful at deflecting such predators.

Environmental status

Conservation organizations have determined that the main threats to Nile crocodiles are loss of habitat, pollution, hunting, and human activities such as accidental entanglement in fishing nets. Though the Nile crocodile has been hunted since ancient times, the advent of the readily available firearm made it much easier to kill these potentially dangerous reptiles.
The species began to be hunted on a much larger scale from the 1940s to the 1960s, primarily for its high-quality leather, but also for meat, which purportedly has curative properties. The population was severely depleted, and the species faced extinction. National laws and international trade regulations have resulted in a resurgence in many areas, and the species as a whole is no longer threatened with extinction. By the 1970s, the status of Nile crocodiles was variable, depending on regional prosperity and the extent of conserved wetlands. However, as is the case for many large animal species, whether protected or not, persecution and poaching have continued apace, and between the 1950s and 1980s, an estimated 3 million Nile crocodiles were slaughtered by humans for the leather trade. In Lake Sibaya, South Africa, it was determined that in the 21st century, persecution continues as the direct cause of the inability of Nile crocodiles to recover after the leather trade of the last century. Recovery of the species has been quite gradual, and few areas have recovered crocodile populations on par with those prior to the peak of the leather trade; most remain largely insufficient to produce sustainable numbers of young crocodiles. Crocodile "protection programs" are artificial environments where crocodiles exist safely and without the threat of extermination from hunters. An estimated 250,000 to 500,000 individuals occur in the wild today. The IUCN Red List assesses the Nile crocodile as "Least Concern (LR/lc)". The Convention on International Trade in Endangered Species (CITES) lists the Nile crocodile under Appendix I (threatened with extinction) in most of its range, and under Appendix II (not threatened, but trade must be controlled) in the remainder, which either allows ranching or sets an annual quota of skins taken from the wild.

The Nile crocodile is widely distributed, with strong, documented populations in many countries in eastern and southern Africa, including Somalia, Ethiopia, Kenya, Zambia and Zimbabwe. The species is farmed for its meat and leather in some parts of Africa. Sustainable-yield programs focused on ranching crocodiles for their skins have been successfully implemented in this area, and even countries with quotas are moving toward ranching. In 1993, 80,000 Nile crocodile skins were produced, the majority from ranches in Zimbabwe and South Africa. Crocodile farming is one of the few burgeoning industries in Zimbabwe. Unlike American alligator flesh, Nile crocodile meat is generally considered unappetizing, although edible; tribes such as the Turkana may opportunistically feed on it. According to Graham and Beard (1968), Nile crocodile meat has an "indescribable" and unpleasant taste, a greasy texture and a "repellent" smell.

The conservation situation is grimmer in Central and West Africa, presumably for both the Nile and West African crocodiles. The crocodile population in this area is much sparser, and has not been adequately surveyed. While the natural population in these areas may be lower due to a less-than-ideal environment and competition with the sympatric slender-snouted and dwarf crocodiles, extirpation may be a serious threat in some of these areas. At some point in the 20th century, the Nile crocodile appeared to have been extirpated as a breeding species from Egypt, but it has locally re-established in some areas, such as around the Aswan Dam.
An additional factor is the loss of wetland habitat, which, in addition to direct dredging, damming and irrigation by humans, has retracted in the east, south and north of the crocodile's range, possibly in correlation with global warming. Retraction of wetlands, due both to direct habitat destruction by humans and to environmental factors possibly related to global warming, is perhaps linked to the extinction of Nile crocodiles in the last few centuries in Syria, Israel and Tunisia. In Lake St. Lucia, highly saline water has been pumped into the already brackish waters due to irrigation practices. Some deaths of crocodiles appear to have been caused by this dangerous salinity, and this one-time stronghold for breeding crocodiles has experienced a major population decline. In yet another historic crocodile stronghold, the Olifants River, which flows through Kruger National Park, numerous crocodile deaths have been reported. These are officially attributed to unknown causes, but analysis has indicated that human-caused environmental pollutants, particularly from the burgeoning coal industry, are the primary cause. Much of the contamination of crocodiles occurs when they consume fish themselves killed by pollutants. Additional ecological surveys and the establishment of management programs are necessary to resolve these questions. The Nile crocodile is the top predator in its environment, and is responsible for checking the populations of mesopredator species, such as the barbel catfish and lungfish, that could otherwise overeat the fish populations on which other species, including birds, rely. One of the fish predators seriously affected by unchecked mesopredator fish populations (due, again, to crocodile declines) is humans, particularly with respect to tilapia, an important commercial fish that has declined due to excessive predation. The Nile crocodile also consumes dead animals that would otherwise pollute the waters.

Attacks on humans

Much of the hunting of and general animosity toward Nile crocodiles stems from their reputation as man-eaters, which is not entirely unjustified. Although most attacks go unreported, the Nile crocodile, along with the saltwater crocodile, is estimated to kill hundreds (possibly thousands) of people each year, more than all other crocodilian species combined. While these two species are much more aggressive toward people than other living crocodilians (as is statistically supported by estimated numbers of crocodile attacks), Nile crocodiles are not particularly more likely than saltwater crocodiles to behave aggressively toward humans or to regard humans as potential prey. However, unlike other "man-eating" crocodile species, including the saltwater crocodile, the Nile crocodile lives in close proximity to human populations through most of its range, so contact is more frequent. This, combined with the species' large size, creates a higher risk of attack. Crocodiles as small as are capable of overpowering and successfully preying on small apes and hominids, presumably including children and smaller adult humans, but a majority of fatal attacks on humans are by crocodiles reportedly exceeding in length. In studies preceding the slaughter of crocodiles for the leather trade, when there were believed to be many more Nile crocodiles, a roughly estimated 1,000 human fatalities per annum by Nile crocodiles were posited, with a roughly equal number of aborted attacks.
A more contemporary study claimed the number of attacks by Nile crocodiles per year as 275 to 745, of which 63% are fatal, as opposed to an estimated 30 attacks per year by saltwater crocodiles, of which 50% are fatal. For both the Nile crocodile and the saltwater crocodile, the mean size of crocodiles involved in non-fatal attacks was about , as opposed to a reported range of or larger for crocodiles responsible for fatal attacks. The average estimated size of Nile crocodiles involved in fatal attacks is . Since a majority of fatal attacks are believed to be predatory in nature, the Nile crocodile can be considered the most prolific predator of humans among wild animals. In comparison, lions, in the years from 1990 to 2006, were responsible for an estimated one-eighth as many fatal attacks on humans in Africa as were Nile crocodiles. Although Nile crocodiles are more than a dozen times more numerous than lions in the wild, probably fewer than a quarter of living Nile crocodiles are old and large enough to pose a danger to humans. Other wild animals responsible for more annual human mortalities either attack humans in self-defense, as venomous snakes do, or are deadly only as vectors of disease or infection, such as snails, rats and mosquitoes.

Regional reportage from numerous areas with large nearby crocodile populations indicates that, per district or large village, crocodiles often claim a dozen or more lives per year. Miscellaneous examples of areas in the last few decades with a dozen or more fatal crocodile attacks annually include Korogwe District, Tanzania; Niassa Reserve, Mozambique; and the area around Lower Zambezi National Park, Zambia. Despite historic claims that the victims of Nile crocodile attacks are usually women and children, there are no detectable trends in this regard, and any human, regardless of age, gender, or size, is potentially vulnerable. Incautious human behavior is the primary driver behind crocodile attacks. Most fatal attacks occur when a person is standing a few feet away from water on a non-steep bank, is wading in shallow waters, is swimming, or has limbs dangling over a boat or pier. Many victims are caught while crouching, and people in jobs that require heavy use of water, including laundry workers, fishermen, game wardens and regional guides, are more likely to be attacked. Many fishermen and other workers who are not poverty-stricken will go out of their way to avoid waterways known to harbor large crocodile populations. Most biologists who have engaged in months or even years of field work with Nile crocodiles, including Cott (1961), Graham and Beard (1968) and Guggisberg (1972), have found that with sufficient precautions, their own lives and the lives of their local guides were rarely, if ever, at risk in areas with many crocodiles. However, Guggisberg accumulated several earlier writings that noted a lack of fear of crocodiles among Africans, driven in part perhaps by poverty and superstition, which caused many observed cases of an "appalling" lack of caution within view of large crocodiles, as opposed to the presence of bold lions, which engendered an appropriate panic. Per Guggisberg, this disregard (essentially regarding the crocodile as a lowly creature and thus non-threatening to humans) may account for the higher frequency of deadly attacks by crocodiles than by large mammalian carnivores.
Most locals are well aware of how to behave in crocodile-occupied areas, and some of the writings quoted by Guggisberg from the 19th and 20th centuries may need to be taken with a "grain of salt".
Biology and health sciences
Crocodilia
Animals
827692
https://en.wikipedia.org/wiki/Transmission%20tower
Transmission tower
A transmission tower (also electricity pylon, hydro tower, or pylon) is a tall structure, usually a lattice tower made of steel, that is used to support an overhead power line. In electrical grids, transmission towers carry high-voltage transmission lines that transport bulk electric power from generating stations to electrical substations, from which electricity is delivered to end consumers; utility poles, by contrast, support lower-voltage sub-transmission and distribution lines that transport electricity from substations to electricity customers. There are four categories of transmission towers: (i) the suspension tower, (ii) the dead-end terminal tower, (iii) the tension tower, and (iv) the transposition tower. The heights of transmission towers typically range from , although when longer spans are needed, such as for crossing water, taller towers are sometimes used. More transmission towers are needed to mitigate climate change, and as a result transmission towers became politically important in the 2020s.
Terminology
Transmission tower is the name for the structure used in the industry in the United States and some other English-speaking countries. In Europe and the U.K., the terms electricity pylon and pylon derive from the basic shape of the structure, an obelisk with a tapered top. In Canada, the term hydro tower is used, because hydroelectricity is the principal source of electricity for the country.
High voltage AC transmission towers
Three-phase electric power systems are used for high voltage (66- or 69-kV and above) and extra-high voltage (110- or 115-kV and above; most often 138- or 230-kV and above in contemporary systems) AC transmission lines. In some European countries, e.g. Germany, Spain or the Czech Republic, smaller lattice towers are used for medium voltage (above 10 kV) transmission lines too. The towers must be designed to carry three (or multiples of three) conductors. The towers are usually steel lattices or trusses (wooden structures are used in Australia, Canada, Germany, and Scandinavia in some cases), and the insulators are either glass or porcelain discs or composite insulators using silicone rubber or EPDM rubber material, assembled in strings or long rods whose lengths are dependent on the line voltage and environmental conditions. Typically, one or two ground wires, also called "guard" wires, are placed on top to intercept lightning and harmlessly divert it to ground. Towers for high and extra-high voltage are usually designed to carry two or more electric circuits. If a line is constructed using towers designed to carry several circuits, it is not necessary to install all the circuits at the time of construction. Indeed, for economic reasons, some transmission lines are designed for three (or four) circuits, but only two (or three) circuits are initially installed. Some high voltage circuits are often erected on the same tower as 110 kV lines. Paralleling circuits of 380 kV, 220 kV and 110 kV lines on the same towers is common. Sometimes, especially with 110 kV circuits, a parallel circuit carries traction lines for railway electrification.
High voltage DC transmission towers
High-voltage direct current (HVDC) transmission lines are either monopolar or bipolar systems. With bipolar systems, a conductor arrangement with one conductor on each side of the tower is used. On some schemes, the ground conductor is used as an electrode line or ground return.
In this case, it has to be installed with insulators equipped with surge arresters on the pylons in order to prevent electrochemical corrosion of the pylons. For single-pole HVDC transmission with ground return, towers with only one conductor can be used. In many cases, however, the towers are designed for later conversion to a two-pole system. In these cases, conductors are often installed on both sides of the tower for mechanical reasons. Until the second pole is needed, it is either used as an electrode line or joined in parallel with the pole in use. In the latter case, the line from the converter station to the earthing (grounding) electrode is built as an underground cable, as an overhead line on a separate right of way, or by using the ground conductors. Electrode line towers are used in some HVDC schemes to carry the power line from the converter station to the grounding electrode. They are similar to structures used for lines with voltages of 10–30 kV, but normally carry only one or two conductors. AC transmission towers may be converted to full or mixed HVDC use, to increase power transmission levels at a lower cost than building a new transmission line.
Railway traction line towers
Towers used for single-phase AC railway traction lines are similar in construction to those used for 110 kV three-phase lines. Steel tube or concrete poles are also often used for these lines. However, railway traction current systems are two-pole AC systems, so traction lines are designed for two conductors (or multiples of two, usually four, eight, or twelve). These are usually arranged on one level, whereby each circuit occupies one half of the cross arm. For four traction circuits the conductors are arranged in two levels, and for six circuits in three levels.
Tower designs
Transmission towers must withstand various external forces, including wind, ice, and seismic activity, while supporting the weight of heavy conductors.
Shape
Different shapes of transmission towers are typical for different countries. The shape also depends on the voltage and the number of circuits.
One circuit
Delta pylons are the most common design for single circuit lines because of their stability. They have a V-shaped body with a horizontal arm on the top, which forms an inverted delta. Larger delta towers usually use two guard cables. Portal pylons are widely used in the USA, Ireland, Scandinavia and Canada. They stand on two legs with one cross arm, which gives them an H-shape. Up to 110 kV they were often made from wood, but higher voltage lines use steel pylons. Smaller single circuit pylons may have two small cross arms on one side and one on the other.
Two circuits
One-level pylons have only one cross arm, carrying three cables on each side. Sometimes they have an additional cross arm for the protection cables. They are frequently used close to airports due to their reduced height. Danube pylons or Donaumasten got their name from a line built in 1927 next to the Danube river. They are the most common design in central European countries like Germany or Poland. They have two cross arms; the upper arm carries one and the lower arm carries two cables on each side. Sometimes they have an additional cross arm for the protection cables. Ton-shaped towers are the most common design; they have three horizontal levels, with one cable very close to the pylon on each side.
In the United Kingdom the second level is often (but not always) wider than the others, while in the United States all cross arms have the same width.
T-pylons
In 2021 the first T-pylons, a new tubular T-shaped design, were installed in the United Kingdom for a new power line to Hinkley Point C nuclear power station, carrying two high voltage 400 kV circuits. The design features electricity cables strung below a cross-arm atop a single pole, which reduces the visual impact on the environment compared to lattice pylons. These 36 T-pylons were the first major UK redesign since 1927, designed by the Danish company Bystrup, winner of a 2011 competition, with more than 250 entries, held by the Royal Institute of British Architects and Her Majesty's Government.
Y-pylons
Y-pylons are a newer concept for electrical transmission towers. They usually have a guy-wire or support beam to help support the "Y" shape of the tower.
Four circuits
Christmas-tree-shaped towers for four or even six circuits are common in Germany. They have three cross arms: the highest arm carries one cable, the second two cables, and the third three cables on each side. The cables on the third arm usually carry lower-voltage circuits.
Branch pylons
Specially designed pylons are necessary to introduce branching lines, e.g. to connect nearby substations.
Support structures
Towers may be self-supporting and capable of resisting all forces due to conductor loads, unbalanced conductors, wind and ice in any direction. Such towers often have approximately square bases and usually four points of contact with the ground. A semi-flexible tower is designed so that it can use overhead grounding wires to transfer mechanical load to adjacent structures if a phase conductor breaks and the structure is subject to unbalanced loads. This type is useful at extra-high voltages, where phase conductors are bundled (two or more wires per phase). It is unlikely for all of them to break at once, barring a catastrophic crash or storm. A guyed mast has a very small footprint and relies on guy wires in tension to support the structure and any unbalanced tension load from the conductors. A guyed tower can be made in a V shape, which saves weight and cost.
Materials
Tubular steel
Poles made of tubular steel are generally assembled at the factory and placed on the right-of-way afterward. Because of their durability and ease of manufacturing and installation, many utilities in recent years have preferred monopole steel or concrete towers over lattice steel for new power lines and tower replacements. In Germany, steel tube pylons are used predominantly for medium voltage lines, and also for high voltage transmission lines carrying one or two electric circuits at operating voltages of up to 110 kV. Steel tube pylons are also frequently used for 380 kV lines in France, and for 500 kV lines in the United States.
Lattice
A lattice tower is a framework construction made of steel or aluminium sections. Lattice towers are used for power lines of all voltages, and are the most common type for high-voltage transmission lines. Lattice towers are usually made of galvanized steel. Aluminium is used for reduced weight, such as in mountainous areas where structures are placed by helicopter. Aluminium is also used in environments that would be corrosive to steel. The extra material cost of aluminium towers may be offset by lower installation costs.
Design of aluminium lattice towers is similar to that for steel, but must take into account aluminium's lower Young's modulus. A lattice tower is usually assembled at the location where it is to be erected. This makes very tall towers possible, up to (and in special cases even higher, as in the Elbe crossing 1 and Elbe crossing 2). Assembly of lattice steel towers can be done using a crane. Lattice steel towers are generally made of angle-profiled steel beams (L- or T-beams). For very tall towers, trusses are often used.
Wood
Wood sees only limited use in high-voltage transmission. Because of the limited height of available trees, the maximum height of wooden pylons is limited to approximately . Wood is rarely used for lattice frameworks; instead, it is used to build multi-pole structures, such as H-frame and K-frame structures. The voltages carried are also limited; in some regions, wood structures carry voltages only up to approximately 30 kV. In countries such as Canada or the United States, however, wooden towers carry voltages up to 345 kV; these can be less costly than steel structures and take advantage of the surge voltage insulating properties of wood. 345 kV lines on wood towers are still in use in the US, and some are still being constructed using this technology. Wood can also be used for temporary structures while constructing a permanent replacement.
Concrete
In Germany, concrete pylons are normally used only for lines with operating voltages below 30 kV. In exceptional cases, concrete pylons are used for 110 kV lines as well, both for the public grid and for the railway traction current grid. Concrete poles for medium voltage are also used in Canada and the United States. In Switzerland, concrete pylons with heights of up to 59.5 metres (the world's tallest pylon of prefabricated concrete, at Littau) are used for 380 kV overhead lines. In Argentina and some other South American countries, many overhead power lines, except for the ultra-high voltage grid, were placed on tubular concrete pylons. In former Soviet countries, concrete pylons are also common, though with crossarms made of steel. Concrete pylons which are not prefabricated are also used for constructions taller than 60 metres. One example is a tall pylon of a 380 kV powerline near Reuter West Power Plant in Berlin. In China, some pylons for lines crossing rivers were built of concrete. The tallest of these belongs to the Yangtze Powerline crossing at Nanjing, with a height of .
Special designs
Sometimes (in particular on steel lattice towers for the highest voltage levels) radio transmitting equipment is installed, with antennas mounted on the top, above or below the overhead ground wire. Usually these installations are for mobile phone services or the operating radio of the power supply firm, but occasionally also for other radio services, like directional radio. Transmitting antennas for low-power FM radio and television transmitters have also been installed on pylons. On the Elbe Crossing 1 tower, there is a radar facility belonging to the Hamburg water and navigation office. For crossing broad valleys, a large distance between the conductors must be maintained to avoid short-circuits caused by conductor cables colliding during storms. To achieve this, sometimes a separate mast or tower is used for each conductor. For crossing wide rivers and straits with flat coastlines, very tall towers must be built due to the necessity of a large height clearance for navigation.
Such towers and the conductors they carry must be equipped with flight safety lamps and reflectors. Two well-known wide river crossings are the Elbe Crossing 1 and Elbe Crossing 2. The latter has the tallest overhead line masts in Europe. In Spain, the overhead line crossing pylons in the bay of Cádiz have a particularly interesting construction. The main crossing towers are tall with one crossarm atop a frustum framework construction. The longest overhead line spans are the crossing of the Norwegian Sognefjord ( between two masts) and the Ameralik Span in Greenland (). In Germany, the overhead line of EnBW AG crossing the Eyachtal has the longest span in the country at . In order to drop overhead lines into steep, deep valleys, inclined towers are occasionally used. These are utilized at the Hoover Dam, in the United States, to descend the cliff walls of the Black Canyon of the Colorado. In Switzerland, a pylon inclined around 20 degrees to the vertical is located near Sargans, St. Gallen. Highly sloping masts are used on two 380 kV pylons in Switzerland, the top 32 meters of one of them being bent by 18 degrees to the vertical. Power station chimneys are sometimes equipped with crossbars for fixing conductors of the outgoing lines. Because of possible problems with corrosion by flue gases, such constructions are very rare. A variety of pylons and powerline poles are also mounted on buildings. The most common form is the small rooftop pole, used in some countries like Germany to realize overhead 400/230 volt grids for the power supply of homes. However, there are also roof-mounted support structures for high voltage. Some thermal power plants in Poland, like Połaniec Power Station, and in the former Soviet Union, like Lukoml Power Station, use portal pylons on the roof of the power station building for the high voltage line from the machine transformer to the switchyard. Other industrial buildings may also have a rooftop powerline support structure. One can find such a device at a steel work in Dnipro, Ukraine at 48°28'57"N 34°58'43"E and at a steel work in Freital, Germany at 50°59'53"N 13°38'26"E. In the United States such devices may be more common than in other countries. There are also true rooftop high voltage towers on industrial buildings, as at a steel plant in Piombino, Italy, and on the roof of an industrial building at Cherepovets, Russia at 59°8'52"N 37°51'55"E. Until 2015, a powerline tower stood on a residential highrise building in Dazhou, China at 31°11'28"N 107°30'43"E. Besides this, the lower part of an electricity pylon may stand inside a building; a person who cannot see the interior of the building cannot distinguish such a structure from a true rooftop pylon. A structure of this type is Tower 9108 of a 110 kV high-voltage traction power line in Fulda. Another unconventional way of installing powerlines is as catenaries spun across a valley. Two such structures are used on the Kemano–Kitimat powerline, and another can be found near Cape Town at 34.149954 S 18.926239 E. A new type of pylon, called the Wintrack pylon, has been used in the Netherlands since 2010. The pylons were designed as a minimalist structure by the Dutch architects Zwarts and Jansma. The use of physical laws in the design made a reduction of the magnetic field possible. The visual impact on the surrounding landscape is also reduced.
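The field reduction claimed for compact designs like Wintrack follows from how closely the three phase conductors are grouped: the fields of conductors carrying currents 120° apart partially cancel, and the cancellation improves as the spacing shrinks. The sketch below is illustrative only; the geometry, the current, and the infinite-straight-conductor assumption are ours, not data about any real tower design.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def field_at(point, conductors, current=1000.0):
    """Net magnetic flux density (T) at `point` from three phase conductors.

    Each conductor is modelled as an infinite straight wire: its field has
    magnitude mu0*I/(2*pi*r) and points perpendicular to the radius vector.
    The 0/120/240-degree phase shift is carried as a complex phasor, so
    summing the phasor-weighted field vectors captures the cancellation
    between phases.
    """
    phases = np.exp(1j * np.deg2rad([0.0, 120.0, 240.0]))
    total = np.zeros(2, dtype=complex)
    for (cx, cy), ph in zip(conductors, phases):
        dx, dy = point[0] - cx, point[1] - cy
        r = np.hypot(dx, dy)
        magnitude = MU0 * current / (2 * np.pi * r)
        direction = np.array([-dy, dx]) / r  # unit vector perpendicular to radius
        total += magnitude * ph * direction
    return np.linalg.norm(np.abs(total))  # amplitude of the oscillating field

spread = [(-6.0, 20.0), (0.0, 24.0), (6.0, 20.0)]   # wide lattice-style layout (m)
compact = [(-1.5, 20.0), (0.0, 22.0), (1.5, 20.0)]  # tighter, Wintrack-style layout
observer = (10.0, 1.0)                              # 10 m to the side, 1 m above ground
print(f"spread:  {field_at(observer, spread):.2e} T")
print(f"compact: {field_at(observer, compact):.2e} T")
```

Running this shows the compact layout producing a markedly weaker field at the same observer point, which is the qualitative effect the Wintrack design exploits.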
Two clown-shaped pylons appear in Hungary, on both sides of the M5 motorway, near Újhartyán. The Pro Football Hall of Fame in Canton, Ohio, U.S., and American Electric Power paired to conceive, design, and install goal-post-shaped towers located on both sides of Interstate 77 near the hall as part of a power infrastructure upgrade. The Mickey pylon is a Mickey Mouse shaped transmission tower on the side of Interstate 4, near Walt Disney World in Orlando, Florida. Bog Fox is a design pylon in Estonia south of Risti at 58° 59′ 33.44″ N, 24° 3′ 33.19″ E. In Russia, several pylons designed as artwork have been built.
Assembly
Before transmission towers are even erected, prototype towers are tested at tower testing stations. There are a variety of ways they can then be assembled and erected: They can be assembled horizontally on the ground and erected by push-pull cable; this method is rarely used because of the large assembly area needed. They can be assembled vertically (in their final upright position); very tall towers, such as those of the Yangtze River Crossing, were assembled in this way. A jin-pole crane can be used to assemble lattice towers; this is also used for utility poles. Helicopters can serve as aerial cranes for assembly in areas with limited accessibility. Towers can also be assembled elsewhere and flown to their place on the transmission right-of-way. Helicopters may also be used to transport disassembled towers for scrapping.
Markers
The International Civil Aviation Organization issues recommendations on markers for towers and the conductors suspended between them. Certain jurisdictions make these recommendations mandatory, for example requiring that certain power lines have overhead wire markers placed at intervals, and that warning lights be placed on any sufficiently high towers; this is particularly true of transmission towers in close vicinity to airports. Electricity pylons often have an identification tag marked with the name of the line (either the terminal points of the line or the internal designation of the power company) and the tower number. This makes it easier to identify the location of a fault to the power company that owns the tower. Transmission towers, much like other steel lattice towers including broadcasting or cellphone towers, are marked with signs which discourage public access due to the danger of the high voltage. Often this is accomplished with a sign warning of the high voltage. At other times, the entire access point to the transmission corridor is marked with a sign. Signs warning of the high voltage may also state the name of the company that built the structures and acquired and designated the lands where the transmission structures, line segments, or right of way stand.
Tower functions
Tower structures can be classified by the way in which they support the line conductors. Suspension structures support the conductor vertically using suspension insulators. Strain structures resist net tension in the conductors, which attach to the structure through strain insulators. Dead-end structures support the full weight of the conductor and also all the tension in it; they likewise use strain insulators. Structures are classified as tangent suspension, angle suspension, tangent strain, angle strain, tangent dead-end and angle dead-end. Where the conductors are in a straight line, a tangent tower is used. Angle towers are used where a line must change direction.
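These six classes lend themselves to a simple decision rule: a structure is "angle" rather than "tangent" when the line deflects noticeably, and suspension, strain, or dead-end depending on its mechanical role. The sketch below is a toy illustration; the 3° cut-off and the function names are our own assumptions, not values from any design standard.

```python
from enum import Enum

class TowerFunction(Enum):
    TANGENT_SUSPENSION = "tangent suspension"
    ANGLE_SUSPENSION = "angle suspension"
    TANGENT_STRAIN = "tangent strain"
    ANGLE_STRAIN = "angle strain"
    TANGENT_DEAD_END = "tangent dead-end"
    ANGLE_DEAD_END = "angle dead-end"

def classify(deflection_deg: float, dead_end: bool = False,
             strain: bool = False, angle_threshold_deg: float = 3.0) -> TowerFunction:
    """Classify a tower by mechanical role and line deflection angle.

    angle_threshold_deg is an invented illustrative cut-off between
    "tangent" (straight-line) and "angle" structures.
    """
    is_angle = abs(deflection_deg) > angle_threshold_deg
    if dead_end:  # full conductor weight plus all tension
        return TowerFunction.ANGLE_DEAD_END if is_angle else TowerFunction.TANGENT_DEAD_END
    if strain:    # resists net tension via strain insulators
        return TowerFunction.ANGLE_STRAIN if is_angle else TowerFunction.TANGENT_STRAIN
    return TowerFunction.ANGLE_SUSPENSION if is_angle else TowerFunction.TANGENT_SUSPENSION

print(classify(0.5))                  # TowerFunction.TANGENT_SUSPENSION
print(classify(30.0, dead_end=True))  # TowerFunction.ANGLE_DEAD_END
```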
Cross arms and conductor arrangement
Generally, three conductors are required per AC three-phase circuit, although single-phase and DC circuits are also carried on towers. Conductors may be arranged in one plane, or, by use of several cross-arms, may be arranged in a roughly symmetrical, triangulated pattern to balance the impedances of all three phases. If more than one circuit is required and the width of the line right-of-way does not permit multiple towers to be used, two or three circuits can be carried on the same tower using several levels of cross-arms. Often the multiple circuits are at the same voltage, but mixed voltages can be found on some structures.
Other features
Insulators
Insulators electrically isolate the live side of the transmission cables from the tower structure and earth. They are either glass or porcelain discs or composite insulators using silicone rubber or EPDM rubber material. They are assembled in strings or long rods whose lengths depend on the line voltage and environmental conditions; a rough string-sizing sketch appears at the end of this section. By using discs, the shortest electrical path along the surface between the ends is maximised, which reduces the chance of leakage in moist conditions.
Stockbridge dampers
Stockbridge dampers are added to the transmission lines a meter or two from the tower. They consist of a short length of cable clamped in place parallel to the line itself and weighted at each end. The size and dimensions are carefully designed to damp any buildup of mechanical oscillation of the lines, most likely induced by wind. Without them, a standing wave can become established that grows in magnitude and destroys the line or the tower.
Arcing horns
Arcing horns are sometimes added to the ends of the insulators in areas where voltage surges may occur. These may be caused by either lightning strikes or switching operations. They protect power line insulators from damage due to arcing. They can be seen as rounded metal pipework at either end of the insulator and provide a path to earth in extreme circumstances without damaging the insulator.
Physical security
Towers have a level of physical security to prevent members of the public or climbing animals from ascending them. This may take the form of a security fence or climbing baffles added to the supporting legs. Some countries require that lattice steel towers be equipped with a barbed wire barrier approximately above ground in order to deter unauthorized climbing. Such barriers can often be found on towers close to roads or other areas with easy public access, even where there is no legal requirement. In the United Kingdom, all such towers are fitted with barbed wire.
Other features
Some electricity pylons, especially for voltages above 100 kV, carry transmission antennas. In most cases these are cellphone antennas and antennas for radio relay links adjoined with them, but antennas of power companies' radio relay systems, or antennas for small broadcasting transmitters in the VHF/UHF range, may also be installed. The northern tower of Elbekreuzung 1 carries, at a height of 30 metres, a radar station for monitoring ship traffic on the Elbe river. Tower 93 of Facility 4101, a strain tower at Hürth, south of Cologne, Germany, carried a public observation deck from 1977 to 2010, accessible by a staircase.
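As a rough illustration of how insulator string length follows line voltage, the sketch below divides the phase-to-ground voltage by an assumed per-disc rating. Both the 15 kV-per-disc figure and the one-disc margin are invented placeholders; real strings are sized from creepage distance, pollution class, and surge withstand requirements, not this simple division.

```python
import math

def discs_needed(line_kv: float, kv_per_disc: float = 15.0) -> int:
    """Crude estimate of cap-and-pin discs in a suspension insulator string.

    kv_per_disc is an assumed ballpark rating for a single standard disc;
    the +1 is an arbitrary illustrative safety margin.
    """
    phase_to_ground_kv = line_kv / math.sqrt(3)  # line voltages are phase-to-phase
    return math.ceil(phase_to_ground_kv / kv_per_disc) + 1

for v in (66, 132, 220, 400):
    print(f"{v} kV line -> about {discs_needed(v)} discs")
```

Even this toy model reproduces the qualitative pattern visible in the field: string length grows roughly linearly with line voltage.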
Notable electricity transmission towers
The following electricity transmission towers are notable due to their enormous height, unusual design, unusual construction site, or their use in artworks. Bold type denotes a structure which was at one time the tallest transmission tower in the world.
Technology
Electricity transmission and distribution
827736
https://en.wikipedia.org/wiki/Shield%20%28geology%29
Shield (geology)
A shield is a large area of exposed Precambrian crystalline igneous and high-grade metamorphic rocks that form tectonically stable areas. These rocks are older than 570 million years and sometimes date back to around 2 to 3.5 billion years. They have been little affected by tectonic events following the end of the Precambrian, and are relatively flat regions where mountain building, faulting, and other tectonic processes are minor compared with the activity at their margins and between tectonic plates. Shields occur on all continents.
Terminology
The term shield cannot be used interchangeably with the term craton. However, shield can be used interchangeably with the term basement. The difference is that a craton describes a basement overlain by a sedimentary platform, while shield describes only the basement. The term shield, used to describe this type of geographic region, appears in the 1901 English translation of Eduard Suess's The Face of the Earth by H. B. C. Sollas, and comes from the shape "not unlike a flat shield" of the Canadian Shield, which has an outline that "suggests the shape of the shields carried by soldiers in the days of hand-to-hand combat."
Lithology
A shield is that part of the continental crust in which these usually Precambrian basement rocks crop out extensively at the surface. Shields can be very complex: they consist of vast areas of granitic or granodioritic gneisses, usually of tonalitic composition, and they also contain belts of sedimentary rocks, often surrounded by low-grade volcano-sedimentary sequences, or greenstone belts. These rocks are frequently metamorphosed to greenschist, amphibolite, and granulite facies. It is estimated that over 50% of the surface of Earth's shields is made up of gneiss.
Erosion and landforms
Being relatively stable regions, shields have rather old relief, with elements such as peneplains being shaped in Precambrian times. The oldest peneplain identifiable in a shield is called a "primary peneplain"; in the case of the Fennoscandian Shield, this is the Sub-Cambrian peneplain. The landforms and shallow deposits of northern shields that have been subject to Quaternary glaciation and periglaciation are distinct from those found closer to the equator. Shield relief, including peneplains, can be protected from erosion by various means. Shield surfaces exposed to sub-tropical and tropical climates for long enough can end up being silicified, becoming hard and extremely difficult to erode. Erosion of peneplains by glaciers in shield regions is limited. In the Fennoscandian Shield, average glacier erosion during the Quaternary has amounted to tens of meters, though this was not evenly distributed. For glacier erosion to be effective in shields, a long "preparation period" of weathering under non-glacial conditions may be a requirement. In weathered and eroded shields, inselbergs are common sights.
List of shields
The Canadian Shield forms the nucleus of North America and extends from Lake Superior on the south to the Arctic Islands on the north, and from western Canada eastward to include most of Greenland.
The Atlantic Shield.
The Amazonian (Brazilian) Shield on the eastern bulge of South America. Bordering this is the Guiana Shield to the north, and the Platian Shield to the south.
The Uruguayan Shield.
The Baltic (Fennoscandian) Shield is located in eastern Norway, Finland and Sweden.
The African (Ethiopian) Shield is located in Africa.
The Tuareg Shield is located in northern Africa, primarily within southern Algeria, northern Mali, and western Niger.
The Australian Shield occupies most of the western half of Australia.
The Arabian-Nubian Shield on the western edge of Arabia.
The Antarctic Shield.
In Asia, an area in China and North Korea is sometimes referred to as the China-Korean Shield.
The Angaran Shield, as it is sometimes called, is bounded by the Yenisey River on the west, the Lena River on the east, the Arctic Ocean on the north, and Lake Baikal on the south.
The Indian Shield occupies two-thirds of the southern Indian peninsula.
Physical sciences
Tectonics
Earth science
827792
https://en.wikipedia.org/wiki/Rare%20Earth%20hypothesis
Rare Earth hypothesis
In planetary astronomy and astrobiology, the Rare Earth hypothesis argues that the origin of life and the evolution of biological complexity, such as sexually reproducing, multicellular organisms on Earth (and, subsequently, human intelligence), required an improbable combination of astrophysical and geological events and circumstances. According to the hypothesis, complex extraterrestrial life is an improbable phenomenon and likely to be rare throughout the universe as a whole. The term "Rare Earth" originates from Rare Earth: Why Complex Life Is Uncommon in the Universe (2000), a book by Peter Ward, a geologist and paleontologist, and Donald E. Brownlee, an astronomer and astrobiologist, both faculty members at the University of Washington. In the 1970s and 1980s, Carl Sagan and Frank Drake, among others, argued that Earth is a typical rocky planet in a typical planetary system, located in a non-exceptional region of a common barred spiral galaxy. From the principle of mediocrity (extended from the Copernican principle), they argued that the evolution of life on Earth, including human beings, was also typical, and therefore that the universe teems with complex life. Ward and Brownlee argue, by contrast, that planets, planetary systems, and galactic regions that are as accommodating for complex life as the Earth, the Solar System, and our own galactic region are not typical at all, but exceedingly rare.
Fermi paradox
There is no reliable or reproducible evidence that extraterrestrial organisms of any kind have visited Earth. No transmissions or other evidence of intelligent extraterrestrial life have been detected or observed anywhere other than Earth in the universe. This runs counter to the knowledge that the universe is filled with a very large number of planets, some of which likely hold conditions hospitable for life, and to the observation that life typically expands until it fills all available niches. These contradictory facts form the basis for the Fermi paradox, of which the Rare Earth hypothesis is one proposed solution.
Requirements for complex life
The Rare Earth hypothesis argues that the evolution of biological complexity anywhere in the universe requires the coincidence of a large number of fortuitous circumstances, including, among others: a galactic habitable zone; a central star and planetary system having the requisite character (i.e. a circumstellar habitable zone); a terrestrial planet of the right mass; the advantage of one or more gas giant guardians like Jupiter and possibly a large natural satellite to shield the planet from frequent impact events; conditions needed to ensure the planet has a magnetosphere and plate tectonics; a chemistry similar to that present in the Earth's lithosphere, atmosphere, and oceans; the influence of periodic "evolutionary pumps" such as massive glaciations and bolide impacts; and whatever factors may have led to the emergence of eukaryotic cells, sexual reproduction, and the Cambrian explosion of animal, plant, and fungi phyla. The evolution of human beings and of human intelligence may have required yet further specific events and circumstances, all of which would have been extremely unlikely without the Cretaceous–Paleogene extinction event 66 million years ago removing the dinosaurs as the dominant terrestrial vertebrates. In order for a small rocky planet to support complex life, Ward and Brownlee argue, the values of several variables must fall within narrow ranges.
The universe is so vast that it might still contain many Earth-like planets, but if such planets exist, they are likely to be separated from each other by many thousands of light-years. Such distances may preclude communication among any intelligent species that may evolve on such planets, which would solve the Fermi paradox: "If extraterrestrial aliens are common, why aren't they obvious?"
The right location in the right kind of galaxy
Rare Earth suggests that much of the known universe, including large parts of our galaxy, consists of "dead zones" unable to support complex life. Those parts of a galaxy where complex life is possible make up the galactic habitable zone, which is primarily characterized by distance from the Galactic Center:
1. As that distance increases, star metallicity declines. Metals (which in astronomy refers to all elements other than hydrogen and helium) are necessary for the formation of terrestrial planets.
2. The X-ray and gamma ray radiation from the black hole at the galactic center, and from nearby neutron stars, becomes less intense as distance increases. Thus the early universe, and present-day galactic regions where stellar density is high and supernovae are common, will be dead zones.
3. Gravitational perturbation of planets and planetesimals by nearby stars becomes less likely as the density of stars decreases. Hence the further a planet lies from the Galactic Center or a spiral arm, the less likely it is to be struck by a large bolide which could extinguish all complex life on it.
Item #1 rules out the outermost reaches of a galaxy; items #2 and #3 rule out galactic inner regions. Hence a galaxy's habitable zone may be a relatively narrow ring of adequate conditions sandwiched between its uninhabitable center and outer reaches. Also, a habitable planetary system must maintain its favorable location long enough for complex life to evolve. A star with an eccentric (elliptical or hyperbolic) galactic orbit will pass through some spiral arms, unfavorable regions of high star density; thus a life-bearing star must have a galactic orbit that is nearly circular, with a close synchronization between the orbital velocity of the star and that of the spiral arms. This further restricts the galactic habitable zone to a fairly narrow range of distances from the Galactic Center. Lineweaver et al. calculate this zone to be a ring 7 to 9 kiloparsecs in radius, including no more than 10% of the stars in the Milky Way, about 20 to 40 billion stars. Gonzalez et al. would halve these numbers; they estimate that at most 5% of stars in the Milky Way fall within the galactic habitable zone. Approximately 77% of observed galaxies are spiral, two-thirds of all spiral galaxies are barred, and more than half, like the Milky Way, exhibit multiple arms. According to Rare Earth, our own galaxy is unusually quiet and dim (see below), representing just 7% of its kind. Even so, this would still represent more than 200 billion galaxies in the known universe. Our galaxy also appears unusually favorable in having suffered fewer collisions with other galaxies over the last 10 billion years, which can cause more supernovae and other disturbances. Also, the Milky Way's central black hole seems to have neither too much nor too little activity. The orbit of the Sun around the center of the Milky Way is indeed almost perfectly circular, with a period of 226 Ma (million years), closely matching the rotational period of the galaxy.
However, the majority of stars in barred spiral galaxies populate the spiral arms rather than the halo and tend to move in gravitationally aligned orbits, so there is little that is unusual about the Sun's orbit. While the Rare Earth hypothesis predicts that the Sun should rarely, if ever, have passed through a spiral arm since its formation, astronomer Karen Masters has calculated that the orbit of the Sun takes it through a major spiral arm approximately every 100 million years. Some researchers have suggested that several mass extinctions do indeed correspond with previous crossings of the spiral arms.
The right orbital distance from the right type of star
The terrestrial example suggests that complex life requires liquid water, the maintenance of which requires an orbital distance neither too close nor too far from the central star: another scale of habitable zone, or Goldilocks principle. The habitable zone varies with the star's type and age. For advanced life, the star must also be highly stable, which is typical of the middle of a star's life; the Sun, for example, is about 4.6 billion years old. Proper metallicity and size are also important to stability. The Sun has a low (0.1%) luminosity variation. To date, no solar twin star, with an exact match of the Sun's luminosity variation, has been found, though some come close. The star must also have no stellar companions, as in binary systems, which would disrupt the orbits of any planets. Estimates suggest 50% or more of all star systems are binary. Stars gradually brighten over time, and it takes hundreds of millions or billions of years for animal life to evolve. The requirement for a planet to remain in the habitable zone even as the zone's boundaries move outwards over time restricts the size of what Ward and Brownlee call the "continuously habitable zone" for animals. They cite a calculation that it is very narrow, between 0.95 and 1.15 astronomical units (one AU is the distance between the Earth and the Sun), and argue that even this may be too large, because it is based on the whole zone within which liquid water can exist, and water near boiling point may be much too hot for animal life. The liquid water and other gases available in the habitable zone bring the benefit of the greenhouse effect. Even though the Earth's atmosphere contains a water vapor concentration from 0% (in arid regions) to 4% (in rainforest and ocean regions) and – as of November 2022 – only 417.2 parts per million of CO2, these small amounts suffice to raise the average surface temperature by about 40 °C, with the dominant contribution being due to water vapor. Rocky planets must orbit within the habitable zone for life to form.
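The edges of such a zone can be sketched from first principles: a planet's blackbody equilibrium temperature follows from the stellar flux at its orbit, with greenhouse warming (the roughly 40 °C quoted above for Earth) added on top. The minimal calculation below, which assumes Earth's albedo of 0.3 for every orbit shown, illustrates why the liquid-water band around a Sun-like star sits near 1 AU.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.495978707e11      # astronomical unit, m

def equilibrium_temp_k(distance_au: float, luminosity_w: float = L_SUN,
                       albedo: float = 0.3) -> float:
    """Blackbody equilibrium temperature, ignoring greenhouse warming."""
    flux = luminosity_w / (4 * math.pi * (distance_au * AU) ** 2)
    return (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

# Venus, the 0.95-1.15 AU "continuously habitable zone" edges, Earth, and Mars:
for d in (0.72, 0.95, 1.00, 1.15, 1.52):
    print(f"{d:.2f} AU -> {equilibrium_temp_k(d):.0f} K before greenhouse warming")
```

For Earth this gives about 255 K; the observed mean surface temperature of roughly 288 K shows the size of the greenhouse contribution on top of the bare radiative balance.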
Although the habitable zone of hot stars such as Sirius or Vega is wide, hot stars also emit much more ultraviolet radiation, which ionizes any planetary atmosphere. Such stars may also become red giants before advanced life evolves on their planets. These considerations rule out the massive and powerful stars of types F6 to O (see stellar classification) as homes to evolved metazoan life. Conversely, small red dwarf stars have small habitable zones wherein planets are in tidal lock, with one very hot side always facing the star and another very cold side always facing away, and they are also at increased risk of solar flares (see Aurelia). As such, it is disputed whether they can support life. Rare Earth proponents claim that only stars from F7 to K1 types are hospitable. Such stars are rare: G type stars such as the Sun (between the hotter F and cooler K) comprise only 9% of the hydrogen-burning stars in the Milky Way. Aged stars such as red giants and white dwarfs are also unlikely to support life. Red giants are common in globular clusters and elliptical galaxies. White dwarfs are mostly dying stars that have already completed their red giant phase. Stars that become red giants expand into or overheat the habitable zones of their youth and middle age (though theoretically planets at much greater distances may then become habitable). An energy output that varies over the lifetime of the star will likely prevent life (e.g., Cepheid variables). A sudden decrease, even if brief, may freeze the water of orbiting planets, and a significant increase may evaporate it and cause a greenhouse effect that prevents the oceans from reforming. All known life requires the complex chemistry of metallic elements. The absorption spectrum of a star reveals the presence of metals within, and studies of stellar spectra reveal that many, perhaps most, stars are poor in metals. Because heavy metals originate in supernova explosions, metallicity increases in the universe over time. Low metallicity characterizes the early universe: globular clusters and other stars that formed when the universe was young, stars in most galaxies other than large spirals, and stars in the outer regions of all galaxies. Metal-rich central stars capable of supporting complex life are therefore believed to be most common in the less dense regions of the larger spiral galaxies, where radiation also happens to be weak.
The right arrangement of planets around the star
Rare Earth proponents argue that a planetary system capable of sustaining complex life must be structured more or less like the Solar System, with small, rocky inner planets and massive outer gas giants. Without the protection of "celestial vacuum cleaner" planets with strong gravitational pulls, such as Jupiter, other planets would be subject to more frequent catastrophic asteroid collisions. An asteroid only twice the size of the one which caused the Cretaceous–Paleogene extinction might have wiped out all complex life. Observations of exoplanets have shown that arrangements of planets similar to the Solar System are rare. Most planetary systems have super-Earths, several times larger than Earth, close to their star, whereas the Solar System's inner region has only a few small rocky planets and none inside Mercury's orbit. Only 10% of stars have giant planets similar to Jupiter and Saturn, and those few rarely have stable, nearly circular orbits distant from their star. Konstantin Batygin and colleagues argue that these features can be explained if, early in the history of the Solar System, Jupiter and Saturn drifted towards the Sun, sending showers of planetesimals towards the super-Earths which sent them spiralling into the Sun, and ferrying icy building blocks into the terrestrial region of the Solar System, which provided the building blocks for the rocky planets. The two giant planets then drifted out again to their present positions. In the view of Batygin and his colleagues: "The concatenation of chance events required for this delicate choreography suggest that small, Earth-like rocky planets – and perhaps life itself – could be rare throughout the cosmos."
A continuously stable orbit
Rare Earth proponents argue that a gas giant also must not be too close to a body where life is developing.
Close placement of one or more gas giants could disrupt the orbit of a potential life-bearing planet, either directly or by itself drifting into the habitable zone. Newtonian dynamics can produce chaotic planetary orbits, especially in a system having large planets at high orbital eccentricity. The need for stable orbits rules out stars with planetary systems that contain large planets with orbits close to the host star (called "hot Jupiters"). It is believed that hot Jupiters have migrated inwards to their current orbits. In the process, they would have catastrophically disrupted the orbits of any planets in the habitable zone. To exacerbate matters, hot Jupiters are much more common orbiting F and G class stars.
A terrestrial planet of the right size
The Rare Earth hypothesis argues that life requires terrestrial planets like Earth, and since gas giants lack such a surface, that complex life cannot arise there. A planet that is too small cannot maintain much atmosphere, rendering its surface temperature low and variable and oceans impossible. A small planet will also tend to have a rough surface, with large mountains and deep canyons. The core will cool faster, and plate tectonics may be brief or entirely absent. A planet that is too large will retain too dense an atmosphere, like Venus. Although Venus is similar in size and mass to Earth, its surface atmospheric pressure is 92 times that of Earth, and its surface temperature is 735 K (462 °C; 863 °F). The early Earth once had a similar atmosphere, but may have lost it in the giant impact event which formed the Moon.
Plate tectonics
Rare Earth proponents argue that plate tectonics and a strong magnetic field are essential for biodiversity, global temperature regulation, and the carbon cycle. The lack of mountain chains elsewhere in the Solar System is evidence that Earth is the only body which now has plate tectonics, and thus the only one capable of supporting life. Plate tectonics depend on the right chemical composition and a long-lasting source of heat from radioactive decay. Continents must be made of less dense felsic rocks that "float" on underlying denser mafic rock. Taylor emphasizes that tectonic subduction zones require the lubrication of oceans of water. Plate tectonics also provide a means of biochemical cycling. Plate tectonics and, as a result, continental drift and the creation of separate landmasses would create diversified ecosystems and biodiversity, one of the strongest defenses against extinction. An example of species diversification and later competition on Earth's continents is the Great American Interchange. North and Middle America drifted into South America at around 3.5 to 3 Ma. The fauna of South America had already evolved separately for about 30 million years, since Antarctica separated, but, after the merger, many species were wiped out, mainly in South America, by competing North American animals.
A large moon
The Moon is unusual because the other rocky planets in the Solar System either have no satellites (Mercury and Venus), or have only relatively tiny satellites which are probably captured asteroids (Mars). After Charon, the Moon is also the largest natural satellite in the Solar System relative to the size of its parent body, being 27% the size of Earth. The giant-impact theory hypothesizes that the Moon resulted from the impact of a roughly Mars-sized body, dubbed Theia, with the young Earth. This giant impact also gave the Earth its axial tilt (inclination) and velocity of rotation.
Rapid rotation reduces the daily variation in temperature and makes photosynthesis viable. The Rare Earth hypothesis further argues that the axial tilt cannot be too large or too small (relative to the orbital plane). A planet with a large tilt will experience extreme seasonal variations in climate. A planet with little or no tilt will lack the stimulus to evolution that climate variation provides. In this view, the Earth's tilt is "just right". The gravity of a large satellite also stabilizes the planet's tilt; without this effect, the variation in tilt would be chaotic, probably making complex life forms on land impossible. If the Earth had no Moon, the ocean tides resulting solely from the Sun's gravity would be only half that of the lunar tides. A large satellite gives rise to tidal pools, which may be essential for the formation of complex life, though this is far from certain. A large satellite also increases the likelihood of plate tectonics through the effect of tidal forces on the planet's crust. The impact that formed the Moon may also have initiated plate tectonics, without which the continental crust would cover the entire planet, leaving no room for oceanic crust. It is possible that the large-scale mantle convection needed to drive plate tectonics could not have emerged if the crust had a uniform composition. A further theory suggests that such a large moon may also contribute to maintaining a planet's magnetic shield by continually acting upon a metallic planetary core as a dynamo, thus protecting the surface of the planet from charged particles and cosmic rays, and helping to ensure the atmosphere is not stripped over time by solar winds.
An atmosphere
A terrestrial planet must be the right size, like Earth and Venus, in order to retain an atmosphere. On Earth, once the giant impact of Theia thinned Earth's atmosphere, other events were needed to make the atmosphere capable of sustaining life. The Late Heavy Bombardment reseeded Earth with water lost after the impact of Theia. The development of an ozone layer generated a protective shield against ultraviolet (UV) sunlight. Nitrogen and carbon dioxide are needed in a correct ratio for life to form. Lightning is needed for nitrogen fixation. The gaseous carbon dioxide needed for life comes from sources such as volcanoes and geysers. Carbon dioxide is needed at relatively low levels (currently at approximately 400 ppm on Earth) because at high levels it is poisonous. Precipitation is needed to have a stable water cycle. A proper atmosphere must reduce diurnal temperature variation.
One or more evolutionary triggers for complex life
Regardless of whether planets with similar physical attributes to the Earth are rare or not, some argue that life tends not to evolve into anything more complex than simple bacteria without being provoked by rare and specific circumstances. Biochemist Nick Lane argues that simple cells (prokaryotes) emerged soon after Earth's formation, but since almost half of the planet's history had passed before they evolved into complex ones (eukaryotes), all of which share a common ancestor, this event can only have happened once. According to some views, prokaryotes lack the cellular architecture to evolve into eukaryotes because a bacterium expanded up to eukaryotic proportions would have tens of thousands of times less energy available to power its metabolism.
Two billion years ago, one simple cell incorporated itself into another, multiplied, and evolved into mitochondria that supplied the vast increase in available energy that enabled the evolution of complex eukaryotic life. If this incorporation occurred only once in four billion years, or is otherwise unlikely, then life on most planets remains simple. An alternative view is that the evolution of mitochondria was environmentally triggered, and that mitochondria-containing organisms appeared soon after the first traces of atmospheric oxygen. The evolution and persistence of sexual reproduction is another mystery in biology. The purpose of sexual reproduction is unclear, as in many organisms it has a 50% cost (fitness disadvantage) in relation to asexual reproduction. Mating types (types of gametes, according to their compatibility) may have arisen as a result of anisogamy (gamete dimorphism), or the male and female sexes may have evolved before anisogamy. It is also unknown why most sexual organisms use a binary mating system, and why some organisms have gamete dimorphism. Charles Darwin was the first to suggest that sexual selection drives speciation; without it, complex life would probably not have evolved.
The right time in evolutionary history
While life on Earth is regarded as having arisen relatively early in the planet's history, the evolution from multicellular to intelligent organisms took around 800 million years. Civilizations on Earth have existed for about 12,000 years, and radio communication reaching space has existed for little more than 100 years. Relative to the age of the Solar System (~4.57 Ga) this is a short time, in which extreme climatic variations, super volcanoes, and large meteorite impacts were absent. These events would severely harm intelligent life, as well as life in general. For example, the Permian-Triassic mass extinction, caused by widespread and continuous volcanic eruptions in an area the size of Western Europe, led to the extinction of 95% of known species around 251.2 Ma ago. About 65 million years ago, the Chicxulub impact at the Cretaceous–Paleogene boundary (~65.5 Ma) on the Yucatán peninsula in Mexico led to a mass extinction of the most advanced species at that time.
Rare Earth equation
The following discussion is adapted from Cramer. The Rare Earth equation is Ward and Brownlee's riposte to the Drake equation. It calculates N, the number of Earth-like planets in the Milky Way having complex life forms, as:
N = N^* \cdot n_e \cdot f_g \cdot f_p \cdot f_{pm} \cdot f_i \cdot f_c \cdot f_l \cdot f_m \cdot f_j \cdot f_{me}
where:
N^* is the number of stars in the Milky Way. This number is not well-estimated, because the Milky Way's mass is not well estimated, with little information about the number of small stars. N^* is at least 100 billion, and may be as high as 500 billion, if there are many low visibility stars.
n_e is the average number of planets in a star's habitable zone. This zone is fairly narrow, being constrained by the requirement that the average planetary temperature be consistent with water remaining liquid throughout the time required for complex life to evolve. Thus, n_e = 1 is a likely upper bound. We assume n_e = 1.
The Rare Earth hypothesis can then be viewed as asserting that the product of the other nine Rare Earth equation factors listed below, which are all fractions, is no greater than 10^{-10} and could plausibly be as small as 10^{-12}. In the latter case, N could be as small as 0 or 1. Ward and Brownlee do not actually calculate the value of N, because the numerical values of quite a few of the factors below can only be conjectured (an illustrative numerical evaluation with placeholder values appears at the end of this section).
They cannot be estimated simply because we have but one data point: the Earth, a rocky planet orbiting a G2 star in a quiet suburb of a large barred spiral galaxy, and the home of the only intelligent species we know; namely, ourselves.
f_g is the fraction of stars in the galactic habitable zone (Ward, Brownlee, and Gonzalez estimate this factor as 0.1).
f_p is the fraction of stars in the Milky Way with planets.
f_pm is the fraction of planets that are rocky ("metallic") rather than gaseous.
f_i is the fraction of habitable planets where microbial life arises. Ward and Brownlee believe this fraction is unlikely to be small.
f_c is the fraction of planets where complex life evolves. For 80% of the time since microbial life first appeared on the Earth, there was only bacterial life. Hence Ward and Brownlee argue that this fraction may be small.
f_l is the fraction of the total lifespan of a planet during which complex life is present. Complex life cannot endure indefinitely, because the energy put out by the sort of star that allows complex life to emerge gradually rises, and the central star eventually becomes a red giant, engulfing all planets in the planetary habitable zone. Also, given enough time, a catastrophic extinction of all complex life becomes ever more likely.
f_m is the fraction of habitable planets with a large moon. If the giant impact theory of the Moon's origin is correct, this fraction is small.
f_j is the fraction of planetary systems with large Jovian planets. This fraction could be large.
f_me is the fraction of planets with a sufficiently low number of extinction events. Ward and Brownlee argue that the low number of such events the Earth has experienced since the Cambrian explosion may be unusual, in which case this fraction would be small.
Lammer, Scherf et al. define Earth-like habitats (EHs) as rocky exoplanets within the habitable zone of complex life (HZCL) on which Earth-like N2-O2-dominated atmospheres with minor amounts of CO2 can exist. They estimate the maximum number of EHs in the Milky Way as , with the actual number of EHs being possibly much less than that. This would reduce the Rare Earth equation accordingly. The Rare Earth equation, unlike the Drake equation, does not factor in the probability that complex life evolves into intelligent life that discovers technology. Barrow and Tipler review the consensus among biologists that the evolutionary path from primitive Cambrian chordates, e.g., Pikaia, to Homo sapiens was a highly improbable event. For example, the large brains of humans have marked adaptive disadvantages, requiring as they do an expensive metabolism, a long gestation period, and a childhood lasting more than 25% of the average total life span. Other improbable features of humans include:
Being one of a handful of extant bipedal land (non-avian) vertebrates. Combined with an unusual eye–hand coordination, this permits dextrous manipulations of the physical environment with the hands;
A vocal apparatus far more expressive than that of any other mammal, enabling speech. Speech makes it possible for humans to interact cooperatively, to share knowledge, and to acquire a culture;
The capability of formulating abstractions to a degree permitting the invention of mathematics, and the discovery of science and technology. Only recently did humans acquire anything like their current scientific and technological sophistication.
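Because the Rare Earth equation is a plain product, it is trivial to evaluate once values are conjectured. The sketch below does so with placeholder numbers chosen by us purely for illustration (the 0.1 for f_g is the one estimate quoted above; Ward and Brownlee commit to almost none of the others). The point is only how quickly nine fractions multiply down to a near-zero product.

```python
# Placeholder values invented purely for illustration; Ward and Brownlee
# deliberately decline to commit to numbers for most of these factors.
N_STAR = 5e11   # stars in the Milky Way (upper end of the range quoted above)
N_E = 1.0       # average planets per star in the habitable zone (upper bound)

factors = {
    "f_g (galactic habitable zone)": 0.1,   # the one published estimate
    "f_p (stars with planets)":      0.5,
    "f_pm (rocky planets)":          0.3,
    "f_i (microbial life arises)":   0.5,
    "f_c (complex life evolves)":    0.005,
    "f_l (lifespan fraction)":       0.1,
    "f_m (large moon)":              0.01,
    "f_j (large Jovian planets)":    0.1,
    "f_me (few extinction events)":  0.02,
}

product = 1.0
for value in factors.values():
    product *= value

print(f"product of the nine fractions: {product:.1e}")  # ~7.5e-11
print(f"N = {N_STAR * N_E * product:.1f}")              # a few dozen planets
```

With these made-up inputs the product lands just under the 10^{-10} bound discussed above, yielding only a few dozen complex-life planets in a galaxy of hundreds of billions of stars.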
Taylor concluded that the Solar System is probably unusual, because it resulted from so many chance factors and events. Stephen Webb, a physicist, mainly presents and rejects candidate solutions for the Fermi paradox; the Rare Earth hypothesis emerges as one of the few solutions left standing by the end of his book. Simon Conway Morris, a paleontologist, endorses the Rare Earth hypothesis in chapter 5 of his Life's Solution: Inevitable Humans in a Lonely Universe, and cites Ward and Brownlee's book with approval. John D. Barrow and Frank J. Tipler, cosmologists, vigorously defend the hypothesis that humans are likely to be the only intelligent life in the Milky Way, and perhaps the entire universe. But this hypothesis is not central to their book The Anthropic Cosmological Principle, a thorough study of the anthropic principle and of how the laws of physics are peculiarly suited to enable the emergence of complexity in nature. Ray Kurzweil, a computer pioneer and self-proclaimed Singularitarian, argues in his 2005 book The Singularity Is Near that the coming Singularity requires that Earth be the first planet on which sapient, technology-using life evolved. Although other Earth-like planets could exist, Earth must be the most evolutionarily advanced, because otherwise we would have seen evidence that another culture had experienced the Singularity and expanded to harness the full computational capacity of the physical universe. John Gribbin, a prolific science writer, defends the hypothesis in Alone in the Universe: Why Our Planet Is Unique (2011). Michael H. Hart, an astrophysicist who proposed a narrow habitable zone based on climate studies, edited the influential 1982 book Extraterrestrials: Where Are They? and authored one of its chapters, "Atmospheric Evolution, the Drake Equation and DNA: Sparse Life in an Infinite Universe". Marc J. Defant, professor of geochemistry and volcanology, elaborated on several aspects of the rare Earth hypothesis in his TEDx talk entitled Why We Are Alone in the Galaxy. He also wrote in his 1998 book: "I do not believe that we were the destined outcome of evolution. In fact, we are probably the result of an incredible number of chance circumstances (one example is the meteorite impact at the end of the Cretaceous which probably destroyed the dinosaurs and led to mammal domination). The coincidental nature of our evolution should be clear from this book. I might even contend that so many "coincidences" had to take place during the history of the universe, that intelligent life on this planet may be the only life in our universe. I do not mean to suggest that we must have been "created." I mean to say that maybe there is not as much chance of finding life in our galaxy or universe as some would have us believe. We may be it." Brian Cox, physicist and popular-science celebrity, professes his support for the hypothesis in his 2014 BBC production Human Universe. Richard Dawkins, evolutionary biologist, notes the Fermi paradox in his book The Greatest Show on Earth while discussing how life first evolved on Earth. Although we do not yet know the precise process by which life began on Earth, Dawkins's view is that the origin of life was probably a highly improbable event, given that we have not encountered any evidence for life existing elsewhere in the universe. He concludes that life is probably very rare throughout the universe. Criticism Cases against the Rare Earth hypothesis take various forms.
The hypothesis appears anthropocentric The hypothesis concludes, more or less, that complex life is rare because it can evolve only on the surface of an Earth-like planet or on a suitable satellite of a planet. Some biologists, such as Jack Cohen, consider this assumption too restrictive and unimaginative; they see it as a form of circular reasoning. According to David Darling, the Rare Earth hypothesis is neither hypothesis nor prediction, but merely a description of how life arose on Earth. In his view, Ward and Brownlee have done nothing more than select the factors that best suit their case. Critics also argue that there is a link between the Rare Earth hypothesis and the unscientific idea of intelligent design. Exoplanets around main sequence stars are being discovered in large numbers Extrasolar planet discoveries continue to accumulate, with several thousand planets in planetary systems now known. Rare Earth proponents argue that life cannot arise outside Sun-like systems, due to tidal locking and ionizing radiation outside the F7–K1 range. However, some exobiologists have suggested that stars outside this range may give rise to life under the right circumstances; this possibility is a central point of contention for the theory, because late-K and M category stars make up about 82% of all hydrogen-burning stars. Current technology limits the testing of important Rare Earth criteria: surface water, plate tectonics, a large moon, and biosignatures are currently undetectable. Though planets the size of Earth are difficult to detect and classify, scientists now think that rocky planets are common around Sun-like stars. The Earth Similarity Index (ESI) of mass, radius, and temperature provides a means of measurement, but falls short of the full Rare Earth criteria. Rocky planets orbiting within habitable zones may not be rare Some argue that Rare Earth's estimates of rocky planets in habitable zones (n_e in the Rare Earth equation) are too restrictive. James Kasting cites the Titius–Bode law to contend that it is a misnomer to describe habitable zones as narrow when there is a 50% chance of at least one planet orbiting within one. In 2013, astronomers using the Kepler space telescope's data estimated that about one-fifth of G-type and K-type stars (Sun-like stars and orange dwarfs) are expected to have an Earth-sized or super-Earth-sized planet orbiting close to an Earth-like orbit, yielding about 8.8 billion such planets for the entire Milky Way Galaxy. Uncertainty over Jupiter's role The requirement for a system to have a Jovian planet as protector (Rare Earth equation factor f_j) has been challenged, affecting the number of proposed extinction events (Rare Earth equation factor f_me). Kasting's 2001 review of Rare Earth questions whether a Jupiter protector has any bearing on the incidence of complex life. Computer modelling, including the 2005 Nice model and 2007 Nice 2 model, yields inconclusive results in relation to Jupiter's gravitational influence and impacts on the inner planets. A study by Horner and Jones (2008) using computer simulation found that while the total effect on all orbital bodies within the Solar System is unclear, Jupiter has caused more impacts on Earth than it has prevented. Lexell's Comet, whose 1770 near miss brought it closer to Earth than any other comet in recorded history, was known to have been deflected by the gravitational influence of Jupiter.
Plate tectonics may not be unique to Earth or a requirement for complex life Ward and Brownlee argue that for complex life to evolve (Rare Earth equation factor f_c), tectonics must be present to generate biogeochemical cycles, and predicted that such geological features would not be found outside of Earth, pointing to a lack of observable mountain ranges and subduction. There is, however, no scientific consensus on the evolution of plate tectonics on Earth. Though it is believed that tectonic motion first began around three billion years ago, by this time photosynthesis and oxygenation had already begun. Furthermore, recent studies point to plate tectonics as an episodic planetary phenomenon, and life may evolve during periods of "stagnant-lid" rather than plate tectonic states. Recent evidence also points to similar activity either having occurred or continuing to occur elsewhere. The geology of Pluto, for example, described by Ward and Brownlee as "without mountains or volcanoes ... devoid of volcanic activity", has since been found to be quite the contrary, with a geologically active surface possessing organic molecules and mountain ranges like Tenzing Montes and Hillary Montes comparable in relative size to those of Earth, and observations suggest the involvement of endogenic processes. Plate tectonics has been suggested as a hypothesis for the Martian dichotomy, and in 2012 geologist An Yin put forward evidence for active plate tectonics on Mars. Europa has long been suspected to have plate tectonics, and in 2014 NASA announced evidence of active subduction. Likewise, analysis of strike-slip faulting and of surface materials of possible endogenic origin on Jupiter's largest moon, Ganymede, suggests that plate tectonics has also taken place there. In 2017, scientists studying the geology of Charon confirmed that icy plate tectonics also operated on Pluto's largest moon. Since 2017, several studies of the geodynamics of Venus have also found that, contrary to the view that the lithosphere of Venus is static, it is actually being deformed via active processes similar to plate tectonics, though with less subduction, implying that geodynamics are not a rare occurrence in Earth-sized bodies. Kasting suggests that there is nothing unusual about the occurrence of plate tectonics and surface liquid water on large rocky planets, as most should generate internal heat even without the assistance of radioactive elements. Studies by Valencia and Cowan suggest that plate tectonics may be inevitable for terrestrial planets Earth-sized or larger, that is, super-Earths, which are now known to be more common in planetary systems. Free oxygen may be neither rare nor a prerequisite for multicellular life The hypothesis that molecular oxygen, necessary for animal life, is rare and that a Great Oxygenation Event (a Rare Earth equation factor) could only have been triggered and sustained by tectonics appears to have been invalidated by more recent discoveries. Ward and Brownlee ask "whether oxygenation, and hence the rise of animals, would ever have occurred on a world where there were no continents to erode". Extraterrestrial free oxygen has recently been detected around other solid objects, including Mercury, Venus, Mars, Jupiter's four Galilean moons, Saturn's moons Enceladus, Dione, and Rhea, and even the atmosphere of a comet. This has led scientists to speculate whether processes other than photosynthesis could be capable of generating an environment rich in free oxygen.
Wordsworth (2014) concludes that oxygen generated other than through photodissociation may be likely on Earth-like exoplanets, and could actually lead to false positive detections of life. Narita (2015) suggests photocatalysis by titanium dioxide as a geochemical mechanism for producing oxygen atmospheres. Since Ward and Brownlee's assertion that "there is irrefutable evidence that oxygen is a necessary ingredient for animal life", anaerobic metazoa have been found that do indeed metabolise without oxygen. Spinoloricus cinziae, for example, a species discovered in the hypersaline anoxic L'Atalante basin at the bottom of the Mediterranean Sea in 2010, appears to metabolise with hydrogen, lacking mitochondria and instead using hydrogenosomes. Studies since 2015 of the eukaryotic genus Monocercomonoides are also significant, as its members lack mitochondrial organelles and show no detectable signs that mitochondria were ever part of the organism. Since then further eukaryotes, particularly parasites, have been found to lack a mitochondrial genome entirely, such as the 2020 discovery in Henneguya zschokkei. Further investigation into the alternative metabolic pathways used by these organisms appears to present further problems for the premise. Stevenson (2015) has proposed membrane alternatives for complex life in worlds without oxygen. In 2017, scientists from the NASA Astrobiology Institute discovered the necessary chemical preconditions for the formation of azotosomes on Saturn's moon Titan, a world that lacks atmospheric oxygen. Independent studies by Schirrmeister and by Mills concluded that Earth's multicellular life existed prior to the Great Oxygenation Event, not as a consequence of it. NASA scientists Hartman and McKay argue that plate tectonics may in fact slow the rise of oxygenation (and thus stymie complex life rather than promote it). Computer modelling by Tilman Spohn in 2014 found that plate tectonics on Earth may have arisen from the effects of complex life's emergence, rather than the other way around as the Rare Earth hypothesis might suggest. The action of lichens on rock may have contributed to the formation of subduction zones in the presence of water. Kasting argues that if oxygenation caused the Cambrian explosion, then any planet with oxygen-producing photosynthesis should have complex life. A magnetosphere may not be rare or a requirement The importance of Earth's magnetic field to the development of complex life has been disputed. The origin of Earth's magnetic field remains a mystery, though the presence of a magnetosphere appears to be relatively common for larger planetary-mass objects, as all Solar System planets larger than Earth possess one. There is increasing evidence of present or past magnetic activity in terrestrial bodies such as the Moon, Ganymede, Mercury, and Mars. Without sufficient measurements, present studies rely heavily on modelling methods developed in 2006 by Olson & Christensen to predict field strength. Using a sample of 496 planets, such models predict Kepler-186f to be one of the few planets of Earth size that would support a magnetosphere (though such a field around this planet has not yet been confirmed). However, recent empirical evidence points to the occurrence of much larger and more powerful fields than those found in our Solar System, some of which cannot be explained by these models. Kasting argues that the atmosphere provides sufficient protection against cosmic rays even during times of magnetic pole reversal and atmosphere loss by sputtering.
Kasting also dismisses the role of the magnetic field in the evolution of eukaryotes, citing the age of the oldest known magnetofossils. A large moon may be neither rare nor necessary The requirement of a large moon (Rare Earth equation factor f_m) has also been challenged. Even if it were required, such an occurrence may not be as unique as predicted by the Rare Earth hypothesis. Work by Edward Belbruno and J. Richard Gott of Princeton University suggests that giant impactors such as the one that may have formed the Moon can indeed form at a planet's trojan points (the L4 or L5 Lagrangian points), which means that similar circumstances may occur in other planetary systems. The assertion that the Moon's stabilization of Earth's obliquity and spin is a requirement for complex life has been questioned. Kasting argues that a moonless Earth would still possess habitats with climates suitable for complex life, and questions whether the spin rate of a moonless Earth can be predicted. Although the giant impact theory posits that the impact forming the Moon increased Earth's rotational speed to make a day about 5 hours long, the Moon has slowly "stolen" much of this speed, lengthening Earth's solar day to about 24 hours, and continues to do so: in 100 million years Earth's solar day will be roughly 24 hours 38 minutes (the same as Mars's solar day); in 1 billion years, 30 hours 23 minutes. Larger secondary bodies would exert proportionally larger tidal forces that would in turn decelerate their primaries faster, potentially increasing the solar day of a planet in all other respects like Earth to over 120 hours within a few billion years. This long solar day would make effective heat dissipation extremely difficult for organisms in the tropics and subtropics, in a manner similar to tidal locking to a red dwarf star. Short days (high rotation speed) cause high wind speeds at ground level; long days (slow rotation speed) cause day and night temperatures to be too extreme. Many Rare Earth proponents argue that the Earth's plate tectonics would probably not exist if not for the tidal forces of the Moon or the impact of Theia (prolonging mantle effects). The hypothesis that the Moon's tidal influence initiated or sustained Earth's plate tectonics remains unproven, though at least one study implies a temporal correlation to the formation of the Moon. Evidence for the past existence of plate tectonics on planets like Mars, which may never have had a large moon, would counter this argument, although plate tectonics may in any case fade before a moon becomes relevant to life. Kasting argues that a large moon is not required to initiate plate tectonics. Complex life may arise in alternative habitats Rare Earth proponents argue that simple life may be common, though complex life requires specific environmental conditions to arise. Critics suggest that life could arise on a moon of a gas giant, though this is less likely if life requires volcanicity. Such a moon must experience tidal stresses sufficient to induce heating, but not so dramatic as those seen on Jupiter's Io; it would, however, lie within the gas giant's intense radiation belts, sterilizing any biodiversity before it could become established. Dirk Schulze-Makuch disputes this, hypothesizing alternative biochemistries for alien life. While Rare Earth proponents argue that only microbial extremophiles could exist in subsurface habitats beyond Earth, some argue that complex life can also arise in these environments.
Examples of extremophile animals are sometimes cited by critics as complex life capable of thriving in "alien" environments: Hesiocaeca methanicola, an animal that inhabits ocean-floor methane clathrates, substances more commonly found in the outer Solar System; the tardigrades, which can survive in the vacuum of space; and Halicephalobus mephisto, which endures crushing pressure, scorching temperatures, and extremely low oxygen levels 3.6 kilometres (2.2 miles) deep in the Earth's crust. Jill Tarter counters the classic rejoinder that these species adapted to these environments rather than arising in them by suggesting that we cannot assume the conditions required for life to emerge, since those conditions are not actually known. There are suggestions that complex life could arise in subsurface conditions which may be similar to those where life may have arisen on Earth, such as the tidally heated subsurfaces of Europa or Enceladus. Ancient hydrothermal vent ecosystems of this kind support complex life on Earth, such as Riftia pachyptila, that exists completely independently of the surface biosphere.
Physical sciences
Planetary science
Astronomy
828436
https://en.wikipedia.org/wiki/QR%20code
QR code
A QR code (quick-response code) is a type of two-dimensional matrix barcode invented in 1994 by Masahiro Hara of the Japanese company Denso Wave for labelling automobile parts. It features black squares on a white background with fiducial markers, readable by imaging devices like cameras, and is processed using Reed–Solomon error correction until the image can be appropriately interpreted. The required data is then extracted from patterns that are present in both the horizontal and the vertical components of the QR image. Whereas a barcode is a machine-readable optical image that contains information specific to the labeled item, the QR code contains the data for a locator, an identifier, and web tracking. To store data efficiently, QR codes use four standardized modes of encoding: (1) numeric, (2) alphanumeric, (3) byte or binary, and (4) kanji. Compared to standard UPC barcodes, the QR labeling system was applied beyond the automobile industry because of the faster reading of the optical image and greater data-storage capacity in applications such as product tracking, item identification, time tracking, document management, and general marketing. History The QR code system was invented in 1994 at the Denso Wave automotive products company in Japan. The initial alternating-square design presented by the team of researchers, headed by Masahiro Hara, was influenced by the black and white counters played on a Go board; the pattern of the position detection markers was determined by finding the least-used sequence of alternating black-white areas on printed matter, which was found to be 1:1:3:1:1. The functional purpose of the QR code system was to facilitate keeping track of the types and numbers of automobile parts, by replacing individually scanned bar-code labels on each box of auto parts with a single label that contained the data of each label. The quadrangular configuration of the QR code system consolidated the data of the various bar-code labels with kanji, kana, and alphanumeric codes printed onto a single label. Adoption QR codes are now used in a much broader context, including both commercial tracking applications and convenience-oriented applications aimed at mobile phone users (termed mobile tagging). QR codes may be used to display text to the user, to open a webpage on the user's device, to add a vCard contact to the user's device, to open a Uniform Resource Identifier (URI), to connect to a wireless network, or to compose an email or text message. There are a great many QR code generators available as software or as online tools that are either free or require a paid subscription. The QR code has become one of the most-used types of two-dimensional code. During June 2011, 14 million American mobile users scanned a QR code or a barcode. Some 58% of those users scanned a QR or barcode from their homes, while 39% scanned from retail stores; 53% of the 14 million users were men between the ages of 18 and 34. In 2022, 89 million people in the United States scanned a QR code using their mobile devices, up by 26 percent compared to 2020. The majority of QR code users used them to make payments or to access product and menu information. In September 2020, a survey found that 18.8 percent of consumers in the United States and the United Kingdom strongly agreed that they had noticed an increase in QR code use since the then-active COVID-19-related restrictions had begun several months prior.
Standards Several standards cover the encoding of data as QR codes:
October 1997 – AIM (Association for Automatic Identification and Mobility) International.
January 1999 – JIS X 0510.
June 2000 – ISO/IEC 18004:2000 Information technology – Automatic identification and data capture techniques – Bar code symbology – QR code (now withdrawn). Defines QR code models 1 and 2 symbols.
1 September 2006 – ISO/IEC 18004:2006 Information technology – Automatic identification and data capture techniques – QR Code 2005 bar code symbology specification (now withdrawn). Defines QR code 2005 symbols, an extension of QR code model 2; does not specify how to read QR code model 1 symbols, or require this for compliance.
1 February 2015 – ISO/IEC 18004:2015 Information technology – Automatic identification and data capture techniques – QR Code bar code symbology specification. Renames the QR Code 2005 symbol to QR Code and adds clarification to some procedures and minor corrections.
May 2022 – ISO/IEC 23941:2022 Information technology – Automatic identification and data capture techniques – Rectangular Micro QR Code (rMQR) bar code symbology specification. Defines the requirements for rMQR Code.
At the application layer, there is some variation between most of the implementations. Japan's NTT DoCoMo has established de facto standards for the encoding of URLs, contact information, and several other data types. The open-source "ZXing" project maintains a list of QR code data types. Uses QR codes have become common in consumer advertising. Typically, a smartphone is used as a QR code scanner, displaying the code and converting it to some useful form (such as a standard URL for a website, thereby obviating the need for a user to type it into a Web browser). QR codes have become a focus of advertising strategy, since they provide a way to access a brand's website more quickly than by manually entering a URL. Beyond mere convenience to the consumer, the importance of this capability is that it increases the conversion rate: the chance that contact with the advertisement will convert to a sale. It coaxes interested prospects further down the conversion funnel with little delay or effort, bringing the viewer to the advertiser's website immediately, whereas a longer and more targeted sales pitch may lose the viewer's interest. Although initially used to track parts in vehicle manufacturing, QR codes are now used over a much wider range of applications. These include commercial tracking, warehouse stock control, entertainment and transport ticketing, product and loyalty marketing, and in-store product labeling. Examples of marketing include cases where a company's discount offers can be captured using a QR code decoder that is a mobile app, or where a company's information, such as its address, is stored alongside its alphanumeric text data, as can be seen in telephone directory yellow pages. QR codes can also be used to store personal information for use by organizations; an example is the Philippines National Bureau of Investigation (NBI), whose NBI clearances now come with a QR code. Many of these applications target mobile-phone users (via mobile tagging). Users may receive text, add a vCard contact to their device, open a URL, or compose an e-mail or text message after scanning QR codes. They can generate and print their own QR codes for others to scan and use by visiting one of several pay or free QR code-generating sites or apps.
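As one illustration of such generation software, the snippet below uses the third-party Python package qrcode; the package name and calls shown belong to that particular library, offered here as a sketch rather than as part of any QR standard:

# Generate a QR code image with the third-party "qrcode" package
# (pip install qrcode[pil]); one illustrative generator among many.
import qrcode

qr = qrcode.QRCode(
    version=None,                                       # let the library pick the smallest symbol
    error_correction=qrcode.constants.ERROR_CORRECT_M,  # roughly 15% of codewords recoverable
)
qr.add_data("https://en.wikipedia.org/wiki/QR_code")
qr.make(fit=True)
qr.make_image().save("example_qr.png")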
Google had an API, now deprecated, to generate QR codes, and apps for scanning QR codes can be found on nearly all smartphone devices. QR codes storing addresses and URLs may appear in magazines, on signs, on buses, on business cards, or on almost any object about which users might want information. Users with a camera phone equipped with the correct reader application can scan the image of the QR code to display text and contact information, connect to a wireless network, or open a web page in the phone's browser. This act of linking from physical-world objects is termed hardlinking or object hyperlinking. QR codes may also be linked to a location to track where a code has been scanned: either the application that scans the QR code retrieves the geo-information by using GPS and cell tower triangulation (aGPS), or the URL encoded in the QR code itself is associated with a location. In 2008, a Japanese stonemason announced plans to engrave QR codes on gravestones, allowing visitors to view information about the deceased, and family members to keep track of visits. Psychologist Richard Wiseman was one of the first authors to include QR codes in a book, in Paranormality: Why We See What Isn't There (2011). Microsoft Office and LibreOffice have functionality to insert QR codes into documents. QR codes have been incorporated into currency. In June 2011, the Royal Dutch Mint (Koninklijke Nederlandse Munt) issued the world's first official coin with a QR code, to celebrate the centenary of its current building and premises. The coin can be scanned by a smartphone and originally linked to a special website with content about the historical event and design of the coin. In 2014, the Central Bank of Nigeria issued a 100-naira banknote to commemorate its centennial, the first banknote to incorporate a QR code in its design. When scanned with an internet-enabled mobile device, the code goes to a website that tells the centenary story of Nigeria. In 2015, the Central Bank of the Russian Federation issued a 100-ruble note to commemorate the annexation of Crimea by the Russian Federation. It contains a QR code in its design, and when scanned with an internet-enabled mobile device, the code goes to a website that details the historical and technical background of the commemorative note. In 2017, the Bank of Ghana issued a 5-cedis banknote to commemorate 60 years of central banking in Ghana; it contains a QR code in its design which, when scanned with an internet-enabled mobile device, goes to the official Bank of Ghana website. Credit card functionality is under development. In September 2016, the Reserve Bank of India (RBI) launched BharatQR, a common QR code jointly developed by the four major card payment companies: National Payments Corporation of India, which runs RuPay cards, along with Mastercard, Visa, and American Express. It will also have the capability of accepting payments on the Unified Payments Interface (UPI) platform. Augmented reality QR codes are used in some augmented reality systems to determine the positions of objects in 3-dimensional space. Mobile operating systems QR codes can be used on various mobile device operating systems. While initially requiring the installation and use of third-party apps, both Android and iOS (since iOS 11) devices can now natively scan QR codes, without requiring an external app: the camera app can scan and display the kind of QR code along with the link.
These devices support URL redirection, which allows QR codes to send metadata to existing applications on the device. Virtual stores QR codes have been used to establish "virtual stores", where a gallery of product information and QR codes is presented to the customer, e.g. on a train station wall. The customers scan the QR codes, and the products are delivered to their homes. This use started in South Korea and Argentina, but is currently expanding globally. Walmart, Procter & Gamble, and Woolworths have already adopted the virtual store concept. QR code payment QR codes can be used to store bank account information or credit card information, or they can be specifically designed to work with particular payment provider applications. There are several trial applications of QR code payments across the world. In developing countries, including China and India, QR code payment is a very popular and convenient method of making payments. Since Alipay designed a QR code payment method in 2011, mobile payment has been quickly adopted in China; as of 2018, around 83% of all payments there were made via mobile payment. In November 2012, QR code payments were deployed on a larger scale in the Czech Republic when an open format for payment information exchange – a Short Payment Descriptor – was introduced and endorsed by the Czech Banking Association as the official local solution for QR payments. In 2013, the European Payments Council provided guidelines for the EPC QR code, enabling SCT initiation within the Eurozone. In 2017, Singapore created a task force, including government agencies such as the Monetary Authority of Singapore (MAS) and the Infocomm Media Development Authority (IMDA), to spearhead a system for e-payments using standardized QR code specifications tailored to Singapore. The e-payment system, Singapore Quick Response Code (SGQR), essentially merges various QR codes into one label that can be used by both parties in the payment system. This allows various banking apps to facilitate payments between multiple customers and a merchant that displays a single QR code. The SGQR scheme is co-owned by MAS and IMDA. A single SGQR label contains e-payments and combines multiple payment options; people making purchases can scan the code and see which payment options the merchant accepts. Website login QR codes can be used to log into websites: a QR code is shown on the login page on a computer screen, and when a registered user scans it with a verified smartphone, they will automatically be logged in. Authentication is performed by the smartphone, which contacts the server. Google deployed such a login scheme in 2012. Mobile ticket There is a system whereby a QR code can be displayed on a device such as a smartphone and used as an admission ticket. Its use is common for J1 League and Nippon Professional Baseball tickets in Japan. In some cases, rights can be transferred via the Internet. In Latvia, QR codes can be scanned in Riga public transport to validate Rīgas Satiksme e-tickets. Restaurant ordering Restaurants can present a QR code near the front door or at the table allowing guests to view an online menu, or even redirect them to an online ordering website or app, allowing them to order and possibly pay for their meal without having to use a cashier or waiter. QR codes can also link to daily or weekly specials that are not printed on the standardized menus, and enable the establishment to update the entire menu without needing to print copies.
At table-service restaurants, QR codes enable guests to order and pay for their meals without a waiter involved: the QR code contains the table number, so servers know where to bring the food. This application has grown especially since the need for social distancing during the 2020 COVID-19 pandemic prompted reduced contact between service staff and customers. Joining a Wi‑Fi network By specifying the SSID, encryption type, password/passphrase, and whether the SSID is hidden, mobile device users can quickly scan and join networks without having to manually enter the data. A MeCard-like format is supported by Android and iOS 11+. Common format: WIFI:S:<SSID>;T:<WEP|WPA|nopass>;P:<PASSWORD>;H:<true|false|blank>;; Sample: WIFI:S:MySSID;T:WPA;P:MyPassW0rd;; (see the code sketch below). Funerary use A QR code can link to an obituary and can be placed on a headstone. In 2008, Ishinokoe in Yamanashi Prefecture, Japan, began to sell tombstones with QR codes produced by IT DeSign, where the code leads to a virtual grave site of the deceased. Other companies, such as Wisconsin-based Interactive Headstones, have also begun implementing QR codes into tombstones. In 2014, the Jewish Cemetery of La Paz in Uruguay began implementing QR codes for tombstones. Electronic authentication QR codes can be used to generate time-based one-time passwords for electronic authentication. Loyalty programs QR codes have been used by various retail outlets that have loyalty programs. Sometimes these programs are accessed with an app that is loaded onto a phone and includes a process triggered by a QR code scan. The QR codes for loyalty programs tend to be found printed on the receipt for a purchase or on the products themselves. Users in these schemes collect award points by scanning a code. Counterfeit detection Serialised QR codes have been used by brands and governments to let consumers, retailers, and distributors verify the authenticity of products and to help detect counterfeit products, as part of a brand protection program. However, the security level of a regular QR code is limited, since QR codes printed on original products are easily reproduced on fake products, even though the analysis of data generated as a result of QR code scanning can be used to detect counterfeiting and illicit activity. A higher security level can be attained by embedding a digital watermark or copy detection pattern into the image of the QR code. This makes the QR code more secure against counterfeiting attempts; products that display a code which is counterfeit, although valid as a QR code, can be detected by scanning the secure QR code with the appropriate app. The treaty regulating apostilles (documents bearing a seal of authenticity) has been updated to allow the issuance of digital apostilles by countries; a digital apostille is a PDF document with a cryptographic signature containing a QR code for a canonical URL of the original document, allowing users to verify the apostille from a printed version of the document. Product tracing Different studies have been conducted to assess the effectiveness of QR codes as a means of conveying labelling information and their use as part of a food traceability system. In a field experiment, it was found that when provided free access to a smartphone with a QR code scanning app, 52.6% of participants would use it to access labelling information.
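Returning to the Wi-Fi format shown above, a minimal Python sketch can compose such a payload. The helper name wifi_qr_payload is hypothetical, and the backslash-escaping of special characters follows the common ZXing convention rather than a formal standard:

# Compose a Wi-Fi network payload in the MeCard-like format shown above.
# The escaping rule (backslash before \ ; , " :) follows the common ZXing
# convention and is an assumption here, not a formal standard.
def wifi_qr_payload(ssid, password, auth="WPA", hidden=False):
    def esc(s):
        for ch in ('\\', ';', ',', '"', ':'):   # escape backslash first
            s = s.replace(ch, '\\' + ch)
        return s
    hidden_field = "H:true;" if hidden else ""
    return f"WIFI:S:{esc(ssid)};T:{auth};P:{esc(password)};{hidden_field};"

print(wifi_qr_payload("MySSID", "MyPassW0rd"))  # WIFI:S:MySSID;T:WPA;P:MyPassW0rd;;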
A study in South Korea showed that consumers appreciate QR codes used in a food traceability system, as they provide detailed information about food, as well as information that helps them in their purchasing decisions. If QR codes are serialised, consumers can access a web page showing the supply chain for each ingredient, as well as information specific to each related batch, including meat processors and manufacturers, which helps address the concerns they have about the origin of their food. COVID-19 pandemic After the COVID-19 pandemic began spreading, QR codes began to be used as a "touchless" system to display information, show menus, or provide updated consumer information, especially in the hospitality industry. Restaurants replaced paper or laminated plastic menus with QR code decals on the table, which opened an online version of the menu. This prevented the need to dispose of single-use paper menus, or to institute cleaning and sanitizing procedures for permanent menus after each use. Local television stations have also begun to utilize codes on local newscasts to allow viewers quicker access to stories or information involving the pandemic, including testing and immunization scheduling websites, or for links within stories mentioned in the newscasts overall. In Australia, patrons were required to scan QR codes at shops, clubs, supermarkets, and other service and retail establishments on entry to assist contact tracing. Singapore, Taiwan, the United Kingdom, and New Zealand used similar systems. QR codes are also present on COVID-19 vaccination certificates in places such as Canada and the EU (EU Digital COVID certificate), where they can be scanned to verify the information on the certificate. Design Unlike the older, one-dimensional barcodes that were designed to be mechanically scanned by a narrow beam of light, a QR code is detected by a two-dimensional digital image sensor and then digitally analyzed by a programmed processor. The processor locates the three distinctive squares at the corners of the QR code image, using a smaller square (or multiple squares) near the fourth corner to normalize the image for size, orientation, and angle of viewing. The small dots throughout the QR code are then converted to binary numbers and validated with an error-correcting algorithm. Information capacity The amount of data that can be represented by a QR code symbol depends on the data type (mode, or input character set), version (1, ..., 40, indicating the overall dimensions of the symbol, i.e. 4 × version number + 17 dots on each side), and error correction level. The maximum storage capacities occur for version 40 and error correction level L (low), denoted by 40-L: 7,089 numeric characters, 4,296 alphanumeric characters, 2,953 bytes, or 1,817 kanji characters. Error correction QR codes use Reed–Solomon error correction over the finite field GF(2^8), the elements of which are encoded as bytes of 8 bits; the byte b_7b_6...b_0 with standard numerical value b_7·2^7 + b_6·2^6 + ... + b_0 encodes the field element b_7·α^7 + b_6·α^6 + ... + b_0, where α is taken to be a primitive element satisfying α^8 + α^4 + α^3 + α^2 + 1 = 0. The primitive polynomial is x^8 + x^4 + x^3 + x^2 + 1, corresponding to the polynomial number 285, with initial root exponent 0 used to obtain the generator polynomials. The Reed–Solomon code uses one of 37 different polynomials over GF(2^8), with degrees ranging from 7 to 68, depending on how many error correction bytes the code adds. It is implied by the form of Reed–Solomon used (systematic BCH view) that these polynomials are all of the form (x − α^0)(x − α^1)⋯(x − α^(d−1)), where d is the degree. However, the rules for selecting the degree are specific to the QR standard.
For example, the generator polynomial used for the Version 1 QR code (21×21), when 7 error correction bytes are used, is obtained by multiplying the first seven terms (x − α^0)(x − α^1)(x − α^2)(x − α^3)(x − α^4)(x − α^5)(x − α^6); expressed using decimal coefficients (over GF(2^8)), this is x^7 + 127x^6 + 122x^5 + 154x^4 + 164x^3 + 11x^2 + 68x + 117. The highest power of x in the polynomial (the degree d of the polynomial) determines the number of error correction bytes; in this case, the degree is 7. When discussing the Reed–Solomon code phase there is some risk of confusion, in that the QR ISO/IEC standard uses the term codeword for the elements of GF(2^8), which with respect to the Reed–Solomon code are symbols, whereas it uses the term block for what are, with respect to the Reed–Solomon code, the codewords. The number of data versus error correction bytes within each block depends on (i) the version (side length) of the QR symbol and (ii) the error correction level, of which there are four. The higher the error correction level, the less storage capacity. The approximate error correction capability at each of the four levels is: L (low), about 7% of data bytes can be restored; M (medium), 15%; Q (quartile), 25%; H (high), 30%. In larger QR symbols, the message is broken up into several Reed–Solomon code blocks. The block size is chosen so that no attempt is made at correcting more than 15 errors per block; this limits the complexity of the decoding algorithm. The code blocks are then interleaved together, making it less likely that localized damage to a QR symbol will overwhelm the capacity of any single block. The Version 1 QR symbol with level L error correction, for example, consists of a single error correction block with a total of 26 code bytes (made of 19 message bytes and seven error correction bytes). It can correct up to 2 byte errors; hence, this code is known as a (26,19,2) error correction code over GF(2^8), sometimes written in short as a (26,19) code. Due to error correction, it is possible to create artistic QR codes with embellishments to make them more readable or attractive to the human eye, and to incorporate colors, logos, and other features into the QR code block; the embellishments are treated as errors, but the codes still scan correctly. It is also possible to design artistic QR codes without reducing the error correction capacity by manipulating the underlying mathematical constructs. Image processing algorithms are also used to reduce errors in QR codes. Encoding Format information and masking The format information records two things: the error correction level and the mask pattern used for the symbol. Masking is used to break up patterns in the data area that might confuse a scanner, such as large blank areas or misleading features that look like the locator marks. The mask patterns are defined on a grid that is repeated as necessary to cover the whole symbol. Modules corresponding to the dark areas of the mask are inverted. The 5-bit format information is protected from errors with a BCH code, and two complete copies are included in each QR symbol. A (15,5) triple-error-correcting BCH code over GF(2) is used, having the generator polynomial x^10 + x^8 + x^5 + x^4 + x^2 + x + 1. It can correct at most 3 bit errors in the 15-bit codeword, which comprises the 5 data bits plus 10 bits added for error correction. This 15-bit code is itself XORed with a fixed 15-bit mask pattern (101010000010010) to prevent an all-zero string.
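This computation fits in a few lines of Python. The helper name format_bits is hypothetical, and the bit layout assumed here (two error-correction-level bits followed by three mask-pattern bits) is consistent with the standard as described above:

# Sketch of the 15-bit format information: 5 data bits, 10 BCH check bits,
# then XOR with the fixed mask. Assumed layout: 2 EC-level bits + 3 mask bits.
def format_bits(data5):
    g = 0b10100110111                # generator x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
    rem = data5 << 10                # append ten zero check bits
    for i in range(14, 9, -1):       # polynomial division over GF(2)
        if rem >> i & 1:
            rem ^= g << (i - 10)
    return ((data5 << 10) | rem) ^ 0b101010000010010  # apply the fixed XOR mask

print(f"{format_bits(0b01001):015b}")  # EC level L (01), mask 001 -> 111001011110011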
Error correction bytes To obtain the error correction (EC) bytes for the message "www.wikipedia.org", the following procedure may be carried out: The message is 17 bytes long, hence it can be encoded using a (26,19,2) Reed–Solomon code to fit in a Ver1 (21×21) symbol, which has a maximum capacity of 19 bytes (for L-level error correction). The generator polynomial specified for the (26,19,2) code is the degree-7 polynomial given above, which may also be written as a row of decimal coefficients: [1 127 122 154 164 11 68 117]. The 17-byte message "www.wikipedia.org" as hexadecimal coefficients (ASCII values), denoted by M1 through M17, is: [77 77 77 2E 77 69 6B 69 70 65 64 69 61 2E 6F 72 67]. The encoding mode is "Byte encoding", hence the 'Enc' field is [0100] (4 bits). The length of the above message is 17 bytes, hence the 'Len' field is [00010001] (8 bits). The 'End' field is the end-of-message marker [0000] (4 bits). The message code word (without EC bytes) is of the form: ['Enc' 'Len' w w w . w i k i p e d i a . o r g 'End']. Substituting the hexadecimal values, it can be expressed as: [4 11 77 77 77 2E 77 69 6B 69 70 65 64 69 61 2E 6F 72 67 0]. This is rearranged as 19 byte-sized blocks of 8 bits each: [41 17 77 77 72 E7 76 96 B6 97 06 56 46 96 12 E6 F7 26 70]. Using the procedure for Reed–Solomon systematic encoding, the 7 EC bytes obtained (E1 through E7), which are the coefficients (in decimal) of the remainder after polynomial division, are: [174 173 239 6 151 143 37], or in hexadecimal values: [AE AD EF 06 97 8F 25]. These 7 EC bytes are then appended to the 19-byte message. The resulting coded message has 26 bytes (in hexadecimal): [41 17 77 77 72 E7 76 96 B6 97 06 56 46 96 12 E6 F7 26 70 AE AD EF 06 97 8F 25]. Note: the bit values placed in the actual Ver1 QR symbol will not match the above values directly, as the symbol is masked using a mask pattern (001). Message placement The message dataset is placed from right to left in a zigzag pattern. In larger symbols, this is complicated by the presence of the alignment patterns and the use of multiple interleaved error-correction blocks. The general structure of a QR encoding is a sequence of 4-bit mode indicators, each followed by a payload whose length depends on the indicator mode (e.g. the byte-encoding payload length depends on the first byte). Note: the size of the character count indicator depends on how many modules are in a QR code (symbol version). ECI Assignment number size: 8 × 1 bits if the ECI Assignment Bitstream starts with '0'; 8 × 2 bits if it starts with '10'; 8 × 3 bits if it starts with '110'. Four-bit indicators are used to select the encoding mode and convey other information. Encoding modes can be mixed as needed within a QR symbol (e.g., a URL with a long string of alphanumeric characters): [ Mode Indicator][ Mode bitstream ] --> [ Mode Indicator][ Mode bitstream ] --> etc... --> [ 0000 End of message (Terminator) ]. After every indicator that selects an encoding mode is a length field that tells how many characters are encoded in that mode. The number of bits in the length field depends on the encoding and the symbol version. Alphanumeric encoding mode stores a message more compactly than byte mode can, but cannot store lower-case letters and has only a limited selection of punctuation marks, which are sufficient for rudimentary web addresses.
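The error-correction worked example above can be reproduced with a short, self-contained Python sketch. The table names EXP/LOG and the helper functions are illustrative, and the field parameters (primitive polynomial 285, generator roots α^0 through α^6) follow the description given earlier; this is a sketch under those assumptions, not production code:

# Reproduce the worked example: 7 Reed-Solomon EC bytes for the 19-byte
# Version 1-L message. Field tables use the primitive polynomial 285 (0x11D).
EXP, LOG = [0] * 510, [0] * 256
x = 1
for i in range(255):
    EXP[i] = EXP[i + 255] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D                      # reduce modulo x^8 + x^4 + x^3 + x^2 + 1

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def generator(degree):                  # g(x) = (x - a^0)(x - a^1)...(x - a^(degree-1))
    g = [1]
    for i in range(degree):
        nxt = [0] * (len(g) + 1)
        for j, c in enumerate(g):
            nxt[j] ^= c                 # coefficient times x
            nxt[j + 1] ^= gf_mul(c, EXP[i])
        g = nxt
    return g                            # degree 7 matches the decimal coefficients quoted above

def ec_bytes(message, n_ec):
    g, rem = generator(n_ec), list(message) + [0] * n_ec
    for i in range(len(message)):       # polynomial long division over GF(2^8)
        lead = rem[i]
        if lead:
            for j, c in enumerate(g):
                rem[i + j] ^= gf_mul(c, lead)
    return rem[-n_ec:]

msg = [0x41, 0x17, 0x77, 0x77, 0x72, 0xE7, 0x76, 0x96, 0xB6, 0x97,
       0x06, 0x56, 0x46, 0x96, 0x12, 0xE6, 0xF7, 0x26, 0x70]
print([f"{b:02X}" for b in ec_bytes(msg, 7)])  # the text above gives AE AD EF 06 97 8F 25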
In alphanumeric mode, two characters are coded in an 11-bit value by the formula V = 45 × C1 + C2, where C1 and C2 are the positions of the two characters in the 45-character alphanumeric table; as an exception, the last character in an alphanumeric string with an odd length is read as a 6-bit value instead. Variants Model 1 Model 1 QR code is an older version of the specification. It is visually similar to the widely seen Model 2 codes, but lacks alignment patterns; differences are in the bottom-right corner and in the midsections of the bottom and right edges, which carry additional functional regions. Micro QR code Micro QR code is a smaller version of the QR code standard for applications where symbol size is limited. There are four different versions (sizes) of Micro QR codes: the smallest is 11×11 modules; the largest can hold 35 numeric characters, or 21 ASCII alphanumeric characters, or 15 bytes (128 bits). Rectangular Micro QR Code Rectangular Micro QR Code (also known as rMQR Code) is a two-dimensional (2D) matrix barcode invented and standardized in 2022 by Denso Wave as ISO/IEC 23941. rMQR Code is designed as a rectangular variation of the QR code and has the same parameters and applications as original QR codes; however, rMQR Code is more suitable for rectangular areas, with a width-to-height ratio of up to about 19 in the R7×139 version. iQR code iQR code is an alternative to existing square QR codes developed by Denso Wave. iQR codes can be created in square or rectangular formations; the latter is intended for situations where a longer and narrower rectangular shape is more suitable, such as on cylindrical objects. iQR codes can fit the same amount of information in 30% less space. There are 61 versions of square iQR codes, and 15 versions of rectangular codes. For squares, the minimum size is 9 × 9 modules; rectangles have a minimum of 19 × 5 modules. iQR codes add error correction level S, which allows for 50% error correction. iQR Codes had not been given an ISO/IEC specification as of 2015, and only proprietary Denso Wave products could create or read iQR codes. Secure QR code Secure Quick Response (SQR) code is a QR code that contains a "private data" segment after the terminator instead of the specified filler bytes "ec 11". This private data segment must be deciphered with an encryption key. This can be used to store private information and to manage a company's internal information. Frame QR Frame QR is a QR code with a "canvas area" that can be flexibly used. In the center of this code is the canvas area, where graphics, letters, and more can be flexibly arranged, making it possible to lay out the code without losing the design of illustrations, photos, etc. HCC2D Researchers have proposed a new High Capacity Colored 2-Dimensional (HCC2D) code, which builds upon a QR code basis for preserving the QR robustness to distortions and uses colors for increasing data density (as of 2014 it was still in the prototyping phase). The HCC2D code specification is described in detail in Querini et al. (2011), while techniques for color classification of HCC2D code cells are described in detail in Querini and Italiano (2014), which is an extended version of Querini and Italiano (2013). Introducing colors into QR codes requires addressing additional issues. In particular, during QR code reading only the brightness information is taken into account, while HCC2D codes have to cope with chromatic distortions during the decoding phase.
In order to ensure adaptation to chromatic distortions that arise in each scanned code, HCC2D codes make use of an additional field: the Color Palette Pattern. This is because the color cells of a Color Palette Pattern are supposed to be distorted in the same way as the color cells of the Encoding Region. Replicated color palettes are used for training machine-learning classifiers. AQR Accessible QR is a type of QR code that combines a standard QR code with a dot-dash pattern positioned around one corner of the code to provide product information for people who are blind or partially sighted. The codes announce product categories and product details such as instructions, ingredients, safety warnings, and recycling information. The data is structured for the needs of users who are blind or partially sighted and offers larger text or audio output. A scanning app can read the codes from a metre away, activating the smartphone's accessibility features, like VoiceOver, to announce product details. License The use of QR code technology is freely licensed as long as users follow the standards for QR code documented with JIS or ISO/IEC. Non-standardized codes may require special licensing. Denso Wave owns a number of patents on QR code technology, but has chosen to exercise them in a limited fashion. In order to promote widespread usage of the technology, Denso Wave chose to waive its rights to a key patent in its possession for standardized codes only. In the US, the granted QR code patent is 5726435, and in Japan 2938338, both of which have expired. The European Patent Office granted patent 0672994 to Denso Wave, which was then validated into French, UK, and German patents, all of which expired in March 2015. The text QR Code itself is a registered trademark and wordmark of Denso Wave Incorporated. In the UK, the trademark is registered as E921775, the term QR Code, with a filing date of 3 September 1998. The UK version of the trademark is based on the Kabushiki Kaisha Denso (DENSO CORPORATION) trademark, filed as Trademark 000921775, the term QR Code, on 3 September 1998 and registered on 16 December 1999 with the European Union OHIM (Office for Harmonization in the Internal Market). The U.S. trademark for the term QR Code is Trademark 2435991; it was filed on 29 September 1998 with an amended registration date of 13 March 2001, assigned to Denso Corporation. In South Korea, a trademark application filed on 18 November 2011 was refused on 20 March 2012, because the Korean Intellectual Property Office took the view that the phrase had become genericized among South Koreans as a reference to matrix barcodes in general. Risks The only context in which common QR codes can carry executable data is the URL data type. These URLs may host JavaScript code, which can be used to exploit vulnerabilities in applications on the host system, such as the reader, the web browser, or the image viewer, since a reader will typically send the data to the application associated with the data type used by the QR code. Even in the absence of software exploits, malicious QR codes combined with a permissive reader can still put a computer's contents and user's privacy at risk. This practice is known as "attagging", a portmanteau of "attack tagging". Malicious codes are easily created and can be affixed over legitimate QR codes. On a smartphone, the reader's permissions may allow use of the camera, full Internet access, read/write contact data, GPS, read browser history, read/write local storage, and global system changes.
Risks include linking to dangerous web sites with browser exploits; enabling the microphone, camera, or GPS and then streaming those feeds to a remote server; analysis of sensitive data (passwords, files, contacts, transactions); sending email/SMS/IM messages, or packets for DDoS as part of a botnet; corrupting privacy settings; stealing identity; and even containing malicious logic themselves, such as JavaScript or a virus. These actions could occur in the background while the user sees only the reader opening a seemingly harmless web page. In Russia, a malicious QR code caused phones that scanned it to send premium texts at a fee of $6 each. QR codes have also been linked to scams in which stickers are placed on parking meters and other devices, posing as quick payment options, as seen in Austin, San Antonio, and Boston, among other cities across the United States and Australia.
Technology
Data storage and memory
null
829159
https://en.wikipedia.org/wiki/Tokyo%20Bay%20Aqua-Line
Tokyo Bay Aqua-Line
The Tokyo Bay Aqua-Line, also known as the Trans-Tokyo Bay Expressway, is an expressway that is mainly made up of a bridge–tunnel combination across Tokyo Bay in Japan. It connects the city of Kawasaki in Kanagawa Prefecture with the city of Kisarazu in Chiba Prefecture, and forms part of National Route 409. With an overall length of 23.7 km, it includes a 4.4 km bridge and a 9.6 km tunnel underneath the bay, the fourth-longest underwater tunnel in the world. Overview An artificial island, Umihotaru, marks the transition between the bridge and tunnel segments and provides a rest stop with restaurants, shops, and amusement facilities. A distinctive tower standing above the middle of the tunnel, the Kaze no Tō (風の塔, "the tower of wind"), supplies air to the tunnel, its ventilation system powered by the bay's almost-constant winds. The Tokyo Bay Aqua-Line shortened the drive between Chiba and Kanagawa, two important industrial areas, from 90 to 15 minutes, and also helped cut travel time from Tokyo and Kanagawa to the seaside leisure spots of the southern Bōsō Peninsula. Before it opened, the trip entailed a 100 km journey along Tokyo Bay, passing through central Tokyo. An explicit goal of the Aqua-Line was to redirect vehicular flow away from central Tokyo, but the expensive toll has meant only a limited reduction in central-Tokyo traffic. Many highway bus services now use the Tokyo Bay Aqua-Line, including lines from Tokyo Station, Yokohama Station, Kawasaki Station, Shinagawa Station, Shibuya Station, Shinjuku Station, and Haneda Airport to Kisarazu, Kimitsu, Nagaura Station, Ichihara, Mobara, Tōgane, Kamogawa, Katsuura, and Tateyama. History One of the last Japanese megaprojects of the 20th century, the roadway was built at a cost of ¥1.44 trillion (US$11.2 billion) and opened on December 18, 1997, by then-Crown Prince Naruhito and then-Crown Princess Masako, after 23 years of planning and nine years of construction. The roadway was conceived during the bubble economy of the late 1980s. At opening, the roadway had the highest toll in Japan: a one-way trip cost ¥5,050, or ¥334 per kilometer. Because of the expensive toll, analysts have seen lower traffic volume than the 25,000 cars expected by Japan Highway Public Corporation, the operator of the roadway. Tolls The cash toll for a single trip on the Aqua-Line is ¥3,140 for ordinary-size cars (¥2,510 for kei cars); however, using the ETC (electronic toll collection) system, the fare is ¥2,320 (¥1,860 for kei cars). The ETC toll is reduced to ¥1,000 on Saturdays, Sundays, and holidays. In general, tolls for usage of the Aqua-Line in either direction are collected at the mainline toll plaza on the Kisarazu end.
Technology
Multi-modal crossings
null
831170
https://en.wikipedia.org/wiki/Sloan%20Digital%20Sky%20Survey
Sloan Digital Sky Survey
The Sloan Digital Sky Survey or SDSS is a major multi-spectral imaging and spectroscopic redshift survey using a dedicated 2.5-m wide-angle optical telescope at Apache Point Observatory in New Mexico, United States. The project began in 2000 and was named after the Alfred P. Sloan Foundation, which contributed significant funding. A consortium of the University of Washington and Princeton University was established to conduct a redshift survey. The Astrophysical Research Consortium (ARC) was established in 1984 with the additional participation of New Mexico State University and Washington State University to manage activities at Apache Point. In 1991, the Sloan Foundation granted the ARC funding for survey efforts and the construction of equipment to carry out the work. Background At the time of its design, the SDSS was a pioneering combination of novel instrumentation as well as data reduction and storage techniques that drove major advances in astronomical observations, discoveries, and theory. The SDSS project was centered around two instruments and data processing pipelines that were groundbreaking for the scale at which they were implemented: a multi-filter/multi-array scanning CCD camera to take an imaging survey of the sky at high efficiency, followed by a multi-object/multi-fiber spectrograph that could take spectra in bulk (several hundred objects at a time) of targets identified from the survey. A major new challenge was how to deal with the exceptional data volume generated by the telescope and instruments. At the time, hundreds of gigabytes of raw data per night was unprecedented, and a collaborating team as complex as the original hardware and engineering team was needed to design a software and storage system for processing the data. From each imaging run, object catalogs, reduced images, and associated files were produced in a highly automated pipeline, yielding the largest astronomical object catalogs (billions of objects) available in digital queryable form at the time. For each spectral run, thousands of two-dimensional spectral images had to be processed to automatically extract calibrated spectra (flux versus wavelength). In the approximate decade it took to achieve these goals, SDSS contributed to notable advances in massive database storage and accessing technology, such as SQL, and was one of the first major astronomical projects to make data available in this form. The model of giving the scientific community and public broad, internet-accessible access to the survey data products was also relatively new at the time. The collaboration model around the project was also complex but successful, given the large numbers of institutions and individuals needed to bring expertise to the system. Universities and foundations were participants along with the managing partner ARC. Other participants included Fermi National Accelerator Laboratory (Fermilab), which supplied computer processing and storage capabilities, and colleagues from the computing industry. Operation Data collection began in 2000; the final imaging data release (DR9) covers over 35% of the sky, with photometric observations of nearly 1 billion objects, while the survey continues to acquire spectra, having so far taken spectra of over 4 million objects. The main galaxy sample has a median redshift of z = 0.1; there are redshifts for luminous red galaxies as far as z = 0.7, and for quasars as far as z = 5; and the imaging survey has been involved in the detection of quasars beyond a redshift z = 6.
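To put these redshifts in perspective, the short sketch below (an illustration, not part of the survey's own tooling; the choice of astropy's built-in Planck 2018 cosmology is an assumption) converts them to approximate lookback times:

```python
# Convert the survey's characteristic redshifts to approximate lookback times.
# Illustrative only; exact values depend on the cosmology assumed.
from astropy.cosmology import Planck18

for z in (0.1, 0.7, 5.0, 6.0):
    t = Planck18.lookback_time(z)  # time elapsed since the light was emitted
    print(f"z = {z}: light emitted about {t:.1f} ago")
```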
Data release 8 (DR8), released in January 2011, includes all photometric observations taken with the SDSS imaging camera, covering 14,555 square degrees on the sky (just over 35% of the full sky). Data release 9 (DR9), released to the public on 31 July 2012, includes the first results from the Baryon Oscillation Spectroscopic Survey (BOSS), including over 800,000 new spectra. Over 500,000 of the new spectra are of objects observed as they were 7 billion years ago (roughly half the age of the universe). Data release 10 (DR10), released to the public on 31 July 2013, includes all data from previous releases, plus the first results from the APO Galactic Evolution Experiment (APOGEE), including over 57,000 high-resolution infrared spectra of stars in the Milky Way. DR10 also includes over 670,000 new BOSS spectra of galaxies and quasars in the distant universe. The publicly available images from the survey were made between 1998 and 2009. In July 2020, after a 20-year-long survey, astrophysicists of the Sloan Digital Sky Survey published the largest, most detailed 3D map of the universe so far, filling a gap of 11 billion years in its expansion history and providing data that support the theory of a flat geometry of the universe and confirm that different regions seem to be expanding at different speeds. Observations SDSS uses a dedicated 2.5 m wide-angle optical telescope; from 1998 to 2009 it observed in both imaging and spectroscopic modes. The imaging camera was retired in late 2009; since then, the telescope has observed entirely in spectroscopic mode. Images were taken using a photometric system of five filters (named u, g, r, i and z). These images are processed to produce lists of objects observed and various parameters, such as whether they seem pointlike or extended (as a galaxy might) and how the brightness on the CCDs relates to various kinds of astronomical magnitude. For imaging observations, the SDSS telescope used the drift scanning technique, but with a choreographed variation of right ascension, declination, tracking rate, and image rotation which allows the telescope to track along great circles and continuously record small strips of the sky. The image of the stars in the focal plane drifts along the CCD chip, and the charge is electronically shifted along the detectors at the same rate, instead of staying fixed as in tracked telescopes. (Simply parking the telescope as the sky moves is only workable on the celestial equator, since stars at different declinations move at different apparent speeds.) This method allows consistent astrometry over the widest possible field and minimises overheads from reading out the detectors. The disadvantage is minor distortion effects. The telescope's imaging camera is made up of 30 CCD chips, each with a resolution of 2048 × 2048 pixels, totaling approximately 120 megapixels. The chips are arranged in 5 rows of 6 chips. Each row has a different optical filter with average wavelengths of 355.1 (u), 468.6 (g), 616.5 (r), 748.1 (i), and 893.1 (z) nm, with 95% completeness in typical seeing to magnitudes of 22.0, 22.2, 22.2, 21.3, and 20.5, for u, g, r, i, z respectively. The filters are placed on the camera in the order r, i, u, z, g. To reduce noise, the camera is cooled to 190 kelvins (about −80 °C) by liquid nitrogen. Using these photometric data, stars, galaxies, and quasars are also selected for spectroscopy.
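The five-band system lends itself to a compact tabulation; the sketch below (illustrative, using only the mean wavelengths and limiting magnitudes quoted above, with hypothetical names) stores the bands as a data structure and finds the band nearest a given wavelength:

```python
# The SDSS ugriz bands, with mean wavelength (nm) and the 95%-completeness
# limiting magnitude quoted above (illustrative data structure).
SDSS_BANDS = {
    "u": {"wavelength_nm": 355.1, "limit_mag": 22.0},
    "g": {"wavelength_nm": 468.6, "limit_mag": 22.2},
    "r": {"wavelength_nm": 616.5, "limit_mag": 22.2},
    "i": {"wavelength_nm": 748.1, "limit_mag": 21.3},
    "z": {"wavelength_nm": 893.1, "limit_mag": 20.5},
}

def nearest_band(wavelength_nm: float) -> str:
    """Return the band whose mean wavelength is closest to the input."""
    return min(SDSS_BANDS,
               key=lambda b: abs(SDSS_BANDS[b]["wavelength_nm"] - wavelength_nm))

print(nearest_band(650.0))  # 'r'
```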
The spectrograph operates by feeding an individual optical fibre for each target through a hole drilled in an aluminum plate. Each hole is positioned specifically for a selected target, so every field in which spectra are to be acquired requires a unique plate. The original spectrograph attached to the telescope was capable of recording 640 spectra simultaneously, while the updated spectrograph for SDSS-III can record 1,000 spectra at once. Throughout each night, between six and nine plates are typically used for recording spectra. In spectroscopic mode, the telescope tracks the sky in the standard way, keeping the objects focused on their corresponding fiber tips. Every night the telescope produces about 200 GB of data. Phases SDSS-I: 2000–2005 During its first phase of operations, 2000–2005, the SDSS imaged more than 8,000 square degrees of the sky in five optical bandpasses, and it obtained spectra of galaxies and quasars selected from 5,700 square degrees of that imaging. It also obtained repeated imaging (roughly 30 scans) of a 300 square-degree stripe in the southern Galactic cap. SDSS-II: 2005–2008 In 2005 the survey entered a new phase, SDSS-II, extending the observations with SEGUE, which explores the structure and stellar makeup of the Milky Way, and the Sloan Supernova Survey, which watched for Type Ia supernova events to measure the distances to far objects. Sloan Legacy Survey The Sloan Legacy Survey covers over 7,500 square degrees of the Northern Galactic Cap with data from nearly 2 million objects and spectra from over 800,000 galaxies and 100,000 quasars. The information on the position and distance of the objects has allowed the large-scale structure of the Universe, with its voids and filaments, to be investigated for the first time. Almost all of these data were obtained in SDSS-I, but a small part of the footprint was finished in SDSS-II. Sloan Extension for Galactic Understanding and Exploration (SEGUE) The Sloan Extension for Galactic Understanding and Exploration obtained spectra of 240,000 stars (with a typical radial-velocity precision of 10 km/s) to create a detailed three-dimensional map of the Milky Way. SEGUE data provide evidence for the age, composition and phase space distribution of stars within the various Galactic components, providing crucial clues for understanding the structure, formation and evolution of our galaxy. The stellar spectra, imaging data, and derived parameter catalogs for this survey are publicly available as part of SDSS Data Release 7 (DR7). Sloan Supernova Survey The SDSS Supernova Survey, which ran from 2005 to 2008, performed repeat imaging of one stripe of sky 2.5° wide centered on the celestial equator, going from 20 hours right ascension to 4 hours RA, so that it was in the southern galactic cap and did not suffer from galactic extinction. The survey rapidly scanned a 300 square degree area to detect variable objects and supernovae, and discovered more than 500 Type Ia supernovae: 130 confirmed events in 2005 and a further 197 in 2006. In 2014 an even larger catalogue was released containing 10,258 variable and transient sources. Of these, 4,607 sources are either confirmed or likely supernovae, which makes this the largest set of supernovae so far compiled. SDSS III: 2008–2014 In mid-2008, SDSS-III was started.
It comprised four separate surveys: APO Galactic Evolution Experiment (APOGEE) The APO Galactic Evolution Experiment (APOGEE) used high-resolution, high signal-to-noise infrared spectroscopy to penetrate the dust that obscures the inner Galaxy. APOGEE surveyed 100,000 red giant stars across the full range of the galactic bulge, bar, disk, and halo. It increased the number of stars observed at high spectroscopic resolution (R ≈ 20,000 at λ ≈ 1.6 μm) and high signal-to-noise ratio by more than a factor of 100. The high-resolution spectra revealed the abundances of about 15 elements, giving information on the composition of the gas clouds the red giants formed from. APOGEE planned to collect data from 2011 to 2014, with the first data released as part of SDSS DR10 in late 2013. Baryon Oscillation Spectroscopic Survey (BOSS) The SDSS-III's Baryon Oscillation Spectroscopic Survey (BOSS) was designed to measure the expansion rate of the Universe. It mapped the spatial distribution of luminous red galaxies (LRGs) and quasars to detect the characteristic scale imprinted by baryon acoustic oscillations in the early universe. Sound waves that propagate in the early universe, like spreading ripples in a pond, imprint a characteristic scale on the positions of galaxies relative to each other. It was announced that BOSS had measured the scale of the universe to an accuracy of one percent; the survey was completed in spring 2014. Multi-object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) The Multi-object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) monitored the radial velocities of 11,000 bright stars, with the precision and cadence needed to detect gas giant planets that have orbital periods ranging from several hours to two years. This ground-based Doppler survey used the SDSS telescope and new multi-object Doppler instruments to monitor radial velocities. The main goal of the project was to generate a large-scale, statistically well-defined sample of giant planets. It searched for gaseous planets with orbital periods ranging from hours to two years and masses between 0.5 and 10 times that of Jupiter. A total of 11,000 stars were analyzed with 25–35 observations per star over 18 months. It was expected to detect between 150 and 200 new exoplanets, and was able to study rare systems, such as planets with extreme eccentricity, and objects in the "brown dwarf desert". The collected data was used as a statistical sample for the theoretical comparison and discovery of rare systems. The project started in the fall of 2008, and continued until spring 2014. SEGUE-2 The original Sloan Extension for Galactic Understanding and Exploration (SEGUE-1) obtained spectra of nearly 240,000 stars of a range of spectral types. Building on this success, SEGUE-2 spectroscopically observed around 120,000 stars, focusing on the in situ stellar halo of the Milky Way, from distances of 10 to 60 kpc. SEGUE-2 doubled the sample size of SEGUE-1. Combining SEGUE-1 and 2 revealed the complex kinematic and chemical substructure of the galactic halo and disks, providing essential clues to the assembly and enrichment history of the galaxy. In particular, the outer halo was expected to be dominated by late-time accretion events. SEGUE data can help constrain existing models for the formation of the stellar halo and inform the next generation of high-resolution simulations of galaxy formation.
In addition, SEGUE-1 and SEGUE-2 may help uncover rare, chemically primitive stars that are fossils of the earliest generations of cosmic star formation. SDSS IV: 2014–2020 The fourth generation of the SDSS (SDSS-IV, 2014–2020) extended precision cosmological measurements to a critical early phase of cosmic history (eBOSS), expanded the infrared spectroscopic survey of the Galaxy in the northern and southern hemispheres (APOGEE-2), and for the first time used the Sloan spectrographs to make spatially resolved maps of individual galaxies (MaNGA). APO Galactic Evolution Experiment (APOGEE-2) A stellar survey of the Milky Way, with two major components: a northern survey using the bright time at APO, and a southern survey using the 2.5 m du Pont Telescope at Las Campanas. Extended Baryon Oscillation Spectroscopic Survey (eBOSS) A cosmological survey of quasars and galaxies, also encompassing subprograms to survey variable objects (TDSS) and X-ray sources (SPIDERS). Mapping Nearby Galaxies at APO (MaNGA) MaNGA (Mapping Nearby Galaxies at Apache Point Observatory) explored the detailed internal structure of nearly 10,000 nearby galaxies from 2014 to the spring of 2020. Earlier SDSS surveys only allowed spectra to be observed from the center of galaxies. By using two-dimensional arrays of optical fibers bundled together into a hexagonal shape, MaNGA was able to use spatially resolved spectroscopy to construct maps of the areas within galaxies, allowing deeper analysis of their structure, such as radial velocities and star formation regions. SDSS-V: 2020–current Apache Point Observatory in New Mexico began to gather data for SDSS-V in October 2020. Apache Point was scheduled to be converted by mid-2021 from plug plates (aluminum plates with manually placed holes for starlight to shine through) to small automated robot arms, with Las Campanas Observatory in Chile following later in the year. The Milky Way Mapper survey will target the spectra of six million stars. The Black Hole Mapper survey will target galaxies to indirectly analyze their supermassive black holes. The Local Volume Mapper will target nearby galaxies to analyze their clouds of interstellar gas. Data access The survey makes the data releases available over the Internet. The SkyServer provides a range of interfaces to an underlying Microsoft SQL Server. Both spectra and images are available in this way, and interfaces are made very easy to use so that, for example, a full-color image of any region of the sky covered by an SDSS data release can be obtained just by providing the coordinates. Without written permission, the data are available for non-commercial use only. The SkyServer also provides a range of tutorials aimed at everyone from schoolchildren up to professional astronomers. The tenth major data release, DR10, released in July 2013, provides images, imaging catalogs, spectra, and redshifts via a variety of search interfaces. The raw data (from before being processed into databases of objects) are also available through another Internet server and were first experienced as a 'fly-through' via the NASA World Wind program. Sky in Google Earth includes data from the SDSS, for those regions where such data are available. There are also KML plugins for SDSS photometry and spectroscopy layers, allowing direct access to SkyServer data from within Google Sky. The data are also available on Hayden Planetarium with a 3D visualizer. There is also an ever-growing list of data for the Stripe 82 region of the SDSS.
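As an illustration of the SQL access described above, the sketch below (an example query, not an official recipe; the use of the astroquery package, the table choice, and the row limit are assumptions) retrieves a few photometric objects from SkyServer:

```python
# Query a handful of photometric objects in a small patch of sky via
# SkyServer's SQL interface, using astroquery's SDSS wrapper (illustrative).
from astroquery.sdss import SDSS

sql = """
SELECT TOP 5 objID, ra, dec, u, g, r, i, z
FROM PhotoObj
WHERE ra BETWEEN 180.0 AND 180.1
  AND dec BETWEEN 0.0 AND 0.1
"""
table = SDSS.query_sql(sql)  # returns an astropy Table (or None if no rows)
print(table)
```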
Following Microsoft Research Technical Fellow Jim Gray's contributions to the SkyServer project, Microsoft's WorldWide Telescope makes use of SDSS and other data sources. MilkyWay@home also used SDSS's data to create a highly accurate three-dimensional model of the Milky Way galaxy. Results Along with publications describing the survey itself, SDSS data have been used in publications over a huge range of astronomical topics. The SDSS website has a full list of these publications covering distant quasars at the limits of the observable universe, the distribution of galaxies, the properties of stars in our galaxy and also subjects such as dark matter and dark energy in the universe. Maps Based on Data Release 9, a new 3D map of massive galaxies and distant black holes was published on August 8, 2012.
Physical sciences
Surveys and Catalogs
Astronomy
831650
https://en.wikipedia.org/wiki/Anodizing
Anodizing
Anodizing is an electrolytic passivation process used to increase the thickness of the natural oxide layer on the surface of metal parts. The process is called anodizing because the part to be treated forms the anode electrode of an electrolytic cell. Anodizing increases resistance to corrosion and wear, and provides better adhesion for paint primers and glues than bare metal does. Anodic films can also be used for several cosmetic effects, either with thick porous coatings that can absorb dyes or with thin transparent coatings that add reflected light wave interference effects. Anodizing is also used to prevent galling of threaded components and to make dielectric films for electrolytic capacitors. Anodic films are most commonly applied to protect aluminium alloys, although processes also exist for titanium, zinc, magnesium, niobium, zirconium, hafnium, and tantalum. Iron or carbon steel metal exfoliates when oxidized under neutral or alkaline micro-electrolytic conditions; i.e., the iron oxide (actually ferric hydroxide or hydrated iron oxide, also known as rust) forms by anoxic anodic pits and a large cathodic surface, and these pits concentrate anions such as sulfate and chloride, accelerating corrosion of the underlying metal. Carbon flakes or nodules in iron or steel with high carbon content (high-carbon steel, cast iron) may cause an electrolytic potential and interfere with coating or plating. Ferrous metals are commonly anodized electrolytically in nitric acid or by treatment with red fuming nitric acid to form hard black iron(II,III) oxide. This oxide remains conformal even when plated on wiring and the wiring is bent. Anodizing changes the microscopic texture of the surface and the crystal structure of the metal near the surface. Thick coatings are normally porous, so a sealing process is often needed to achieve corrosion resistance. Anodized aluminium surfaces, for example, are harder than aluminium but have low to moderate wear resistance that can be improved with increasing thickness or by applying suitable sealing substances. Anodic films are generally much stronger and more adherent than most types of paint and metal plating, but also more brittle. This makes them less likely to crack and peel from ageing and wear, but more susceptible to cracking from thermal stress. History Anodizing was first used on an industrial scale in 1923 to protect Duralumin seaplane parts from corrosion. This early chromic acid–based process was called the Bengough–Stuart process and was documented in British defence specification DEF STAN 03-24/3. It is still used today despite its legacy requirements for a complicated voltage cycle now known to be unnecessary. Variations of this process soon evolved, and the first sulfuric acid anodizing process was patented by Gower and O'Brien in 1927. Sulfuric acid soon became and remains the most common anodizing electrolyte. Oxalic acid anodizing was first patented in Japan in 1923 and later widely used in Germany, particularly for architectural applications. Anodized aluminium extrusion was a popular architectural material in the 1960s and 1970s, but has since been displaced by cheaper plastics and powder coating. The phosphoric acid processes are the most recent major development, so far only used as pretreatments for adhesives or organic paints.
A wide variety of proprietary and increasingly complex variations of all these anodizing processes continue to be developed by industry, so the growing trend in military and industrial standards is to classify by coating properties rather than by process chemistry. Aluminium Aluminium alloys are anodized to increase corrosion resistance and to allow dyeing (coloring), improved lubrication, or improved adhesion. However, anodizing does not increase the strength of the aluminium object. The anodic layer is insulative. When exposed to air at room temperature, or any other gas containing oxygen, pure aluminium self-passivates by forming a surface layer of amorphous aluminium oxide 2 to 3 nm thick, which provides very effective protection against corrosion. Aluminium alloys typically form a thicker oxide layer, 5–15 nm thick, but tend to be more susceptible to corrosion. Aluminium alloy parts are anodized to greatly increase the thickness of this layer for corrosion resistance. The corrosion resistance of aluminium alloys is significantly decreased by certain alloying elements or impurities: copper, iron, and silicon, so 2000-, 4000-, 6000- and 7000-series Al alloys tend to be most susceptible. Although anodizing produces a very regular and uniform coating, microscopic fissures in the coating can lead to corrosion. Further, the coating is susceptible to chemical dissolution in the presence of high- and low-pH chemistry, which results in stripping the coating and corrosion of the substrate. To combat this, various techniques have been developed either to reduce the number of fissures, to insert more chemically stable compounds into the oxide, or both. For instance, sulphuric-anodized articles are normally sealed, either through hydro-thermal sealing or precipitating sealing, to reduce porosity and interstitial pathways that allow corrosive ion exchange between the surface and the substrate. Precipitating seals enhance chemical stability but are less effective in eliminating ionic exchange pathways. Most recently, new techniques to partially convert the amorphous oxide coating into more stable micro-crystalline compounds have been developed that have shown significant improvement based on shorter bond lengths. Some aluminium aircraft parts, architectural materials, and consumer products are anodized. Anodized aluminium can be found on MP3 players, smartphones, multi-tools, flashlights, cookware, cameras, sporting goods, firearms, window frames, roofs, in electrolytic capacitors, and on many other products both for corrosion resistance and the ability to retain dye. Although anodizing only has moderate wear resistance, the deeper pores can better retain a lubricating film than a smooth surface would. Anodized coatings have a much lower thermal conductivity and coefficient of linear expansion than aluminium. As a result, the coating will crack from thermal stress if exposed to temperatures above 80 °C (353 K). The coating can crack, but it will not peel. The melting point of aluminium oxide is 2050 °C (2323 K), much higher than pure aluminium's 658 °C (931 K). This and the insulativity of aluminium oxide can make welding more difficult. In typical commercial aluminium anodizing processes, the aluminium oxide is grown down into the surface and out from the surface by equal amounts. Therefore, anodizing will increase the part dimensions on each surface by half the oxide thickness. For example, a coating that is 2 μm thick will increase the part dimensions by 1 μm per surface.
If the part is anodized on all sides, then all linear dimensions will increase by the oxide thickness. Anodized aluminium surfaces are harder than aluminium but have low to moderate wear resistance, although this can be improved with thickness and sealing. Process Desmut A desmut solution can be applied to the surface of aluminium to remove contaminants. Nitric acid is typically used to remove smut (residue), but is being replaced because of environmental concerns. Electrolysis The anodized aluminium layer is created by passing a direct current through an electrolytic solution, with the aluminium object serving as the anode (the positive electrode in an electrolytic cell). The current releases hydrogen at the cathode (the negative electrode) and oxygen at the surface of the aluminium anode, creating a build-up of aluminium oxide. Alternating current and pulsed current are also possible but rarely used. The voltage required by various solutions may range from 1 to 300 V DC, although most fall in the range of 15 to 21 V. Higher voltages are typically required for thicker coatings formed in sulfuric and organic acid. The anodizing current varies with the area of aluminium being anodized and typically ranges from 30 to 300 A/m2. Aluminium anodizing (eloxal or Electrolytic Oxidation of Aluminium) is usually performed in an acidic solution, typically sulphuric acid or chromic acid, which slowly dissolves the aluminium oxide. The acid action is balanced with the oxidation rate to form a coating with nanopores, 10–150 nm in diameter. These pores are what allow the electrolyte solution and current to reach the aluminium substrate and continue growing the coating to greater thickness beyond what is produced by auto-passivation. These pores allow dye to be absorbed; however, dyeing must be followed by sealing or the dye will not stay. Dyeing is typically followed by a clean nickel acetate seal. Because the dye is only superficial, the underlying oxide may continue to provide corrosion protection even if minor wear and scratches break through the dyed layer. Conditions such as electrolyte concentration, acidity, solution temperature, and current must be controlled to allow the formation of a consistent oxide layer. Harder, thicker films tend to be produced by more concentrated solutions at lower temperatures with higher voltages and currents. The film thickness can range from under 0.5 micrometers for bright decorative work up to 150 micrometers for architectural applications. Dual-finishing Anodizing can be performed in combination with chromate conversion coating. Each process provides corrosion resistance, with anodizing offering a significant advantage when it comes to ruggedness or physical wear resistance. The reason for combining the processes can vary; however, the significant difference between anodizing and chromate conversion coating is the electrical conductivity of the films produced. Although both are stable compounds, the chromate conversion coating has much greater electrical conductivity. Applications where this may be useful are varied; however, the issue of grounding components as part of a larger system is an obvious one. The dual finishing process uses the best each process has to offer, anodizing with its hard wear resistance and chromate conversion coating with its electrical conductivity. The process steps can typically involve chromate conversion coating the entire component, followed by a masking of the surface in areas where the chromate coating must remain intact.
Beyond that, the chromate coating is then dissolved in unmasked areas. The component can then be anodized, with the anodic coating forming on the unmasked areas. The exact process will vary depending on the service provider, component geometry and required outcome. The combined finish helps protect the aluminium article. Widely used specifications The most widely used anodizing specification in the US is a U.S. military spec, MIL-A-8625, which defines three types of aluminium anodizing. Type I is chromic acid anodizing, Type II is sulphuric acid anodizing, and Type III is sulphuric acid hard anodizing. Other anodizing specifications include more MIL-SPECs (e.g., MIL-A-63576), aerospace industry specs by organizations such as SAE, ASTM, and ISO (e.g., AMS 2469, AMS 2470, AMS 2471, AMS 2472, AMS 2482, ASTM B580, ASTM D3933, ISO 10074, and BS 5599), and corporation-specific specs (such as those of Boeing, Lockheed Martin, Airbus and other large contractors). AMS 2468 is obsolete. None of these specifications define a detailed process or chemistry, but rather a set of tests and quality assurance measures which the anodized product must meet. BS 1615 guides the selection of alloys for anodizing. For British defense work, detailed chromic and sulfuric acid anodizing processes are described by DEF STAN 03-24/3 and DEF STAN 03-25/3 respectively. Chromic acid (Type I) The oldest anodizing process uses chromic acid. It is widely known as the Bengough–Stuart process but, because of safety regulations regarding air quality control, vendors tend to avoid it unless the added material associated with Type II would break tolerances. In North America, it is known as Type I because it is so designated by the MIL-A-8625 standard, but it is also covered by AMS 2470 and MIL-A-8625 Type IB. In the UK it is normally specified as Def Stan 03/24 and used in areas that are prone to come into contact with propellants, etc. There are also Boeing and Airbus standards. Chromic acid produces thinner (0.5 μm to 18 μm; 0.00002" to 0.0007"), more opaque films that are softer, ductile, and to a degree self-healing. They are harder to dye and may be applied as a pretreatment before painting. The method of film formation is different from using sulfuric acid in that the voltage is ramped up through the process cycle. Sulfuric acid (Type II & III) Sulfuric acid is the most widely used solution to produce an anodized coating. Coatings of moderate thickness, 1.8 μm to 25 μm (0.00007" to 0.001"), are known as Type II in North America, as named by MIL-A-8625, while coatings thicker than 25 μm (0.001") are known as Type III, hard-coat, hard anodizing, or engineered anodizing. Very thin coatings similar to those produced by chromic anodizing are known as Type IIB. Thick coatings require more process control, and are produced in a refrigerated tank near the freezing point of water with higher voltages than the thinner coatings. Hard anodizing can be made between 13 and 150 μm (0.0005" to 0.006") thick. Anodizing thickness increases wear resistance, corrosion resistance, ability to retain lubricants and PTFE coatings, and electrical and thermal insulation. Sealing Type III coatings improves corrosion resistance but greatly reduces abrasion resistance. Standards for thin (Soft/Standard) sulfuric anodizing are given by MIL-A-8625 Types II and IIB, AMS 2471 (undyed), and AMS 2472 (dyed), BS EN ISO 12373/1 (decorative), BS 3987 (Architectural).
Standards for thick sulphuric anodizing are given by MIL-A-8625 Type III, AMS 2469, BS ISO 10074, BS EN 2536 and the obsolete AMS 2468 and DEF STAN 03-26/1. Organic acid Anodizing can produce yellowish integral colors without dyes if it is carried out in weak acids with high voltages, high current densities, and strong refrigeration. Shades of color are restricted to a range which includes pale yellow, gold, deep bronze, brown, grey, and black. Some advanced variations can produce a white coating with 80% reflectivity. The shade of color produced is sensitive to variations in the metallurgy of the underlying alloy and cannot be reproduced consistently. Anodizing in some organic acids, for example malic acid, can enter a 'runaway' situation, in which the current drives the acid to attack the aluminium far more aggressively than normal, resulting in huge pits and scarring. Also, if the current or voltage is driven too high, 'burning' can set in; in this case, the supplies act as if nearly shorted and large, uneven and amorphous black regions develop. Integral color anodizing is generally done with organic acids, but the same effect has been produced in laboratories with very dilute sulfuric acid. Integral color anodizing was originally performed with oxalic acid, but sulfonated aromatic compounds containing oxygen, particularly sulfosalicylic acid, have been more common since the 1960s. Thicknesses of up to 50 μm can be achieved. Organic acid anodizing is called Type IC by MIL-A-8625. Phosphoric acid Anodizing can be carried out in phosphoric acid, usually as a surface preparation for adhesives. This is described in standard ASTM D3933. Borate and tartrate baths Anodizing can also be performed in borate or tartrate baths in which aluminium oxide is insoluble. In these processes, the coating growth stops when the part is fully covered, and the thickness is linearly related to the voltage applied. These coatings are free of pores, relative to the sulfuric and chromic acid processes. This type of coating is widely used to make electrolytic capacitors because the thin aluminium oxide films (typically less than 0.5 μm) would risk being pierced by acidic processes. Plasma electrolytic oxidation Plasma electrolytic oxidation is a similar process, but where higher voltages are applied. This causes sparks to occur and results in more crystalline/ceramic type coatings. Other metals Magnesium Magnesium is anodized primarily as a primer for paint. A thin (5 μm) film is sufficient for this. Thicker coatings of 25 μm and up can provide mild corrosion resistance when sealed with oil, wax, or sodium silicate. Standards for magnesium anodizing are given in AMS 2466, AMS 2478, AMS 2479, and ASTM B893. Niobium Niobium anodizes in a similar fashion to titanium, with a range of attractive colors being formed by interference at different film thicknesses. Again the film thickness is dependent on the anodizing voltage. Uses include jewelry and commemorative coins. Tantalum Tantalum anodizes similarly to titanium and niobium, with a range of attractive colors being formed by interference at different film thicknesses. Again the film thickness is dependent on the anodizing voltage and typically ranges from 18 to 23 angstroms per volt depending on electrolyte and temperature. Uses include tantalum capacitors. Titanium An anodized oxide layer on titanium has a thickness of up to several micrometers. Standards for titanium anodizing are given by AMS 2487 and AMS 2488.
AMS 2488 Type III anodizing of titanium generates an array of different colors without dyes, for which it is sometimes used in art, costume jewellery, body piercing jewellery and wedding rings. The color formed is dependent on the thickness of the oxide (which is determined by the anodizing voltage); it is caused by the interference of light reflecting off the oxide surface with light travelling through it and reflecting off the underlying metal surface. AMS 2488 Type II anodizing produces a thicker matte grey finish with higher wear resistance. Zinc Zinc is rarely anodized, but a process was developed by the International Lead Zinc Research Organization and covered by MIL-A-81801. A solution of ammonium phosphate, chromate and fluoride with voltages of up to 200 V can produce olive green coatings up to 80 μm thick. The coatings are hard and corrosion resistant. Zinc or galvanized steel can be anodized using DC at lower voltages (20–30 V) in silicate baths containing varying concentrations of sodium silicate, sodium hydroxide, borax, sodium nitrite, and nickel sulfate. Dyeing The most common anodizing processes, for example, sulphuric acid on aluminium, produce a porous surface which can accept dyes easily. The number of dye colors is almost endless; however, the colors produced tend to vary according to the base alloy. The most common colors in the industry, because they are relatively cheap, are yellow, green, blue, black, orange, purple and red. Though some may prefer lighter colors, in practice they may be difficult to produce on certain alloys such as high-silicon casting grades and 2000-series aluminium-copper alloys. Another concern is the "lightfastness" of organic dyestuffs—some colors (reds and blues) are particularly prone to fading. Black dyes and gold produced by inorganic means (ferric ammonium oxalate) are more lightfast. Dyed anodizing is usually sealed to reduce or eliminate dye bleed out. White cannot be applied as a dye because the dye molecules are larger than the pores of the oxide layer. Alternatively, metal (usually tin) can be electrolytically deposited in the pores of the anodic coating to provide more lightfast colors. Metal dye colors range from pale champagne to black. Bronze shades are commonly used for architectural metals. Alternatively, the color may be produced integral to the film. This is done during the anodizing process using organic acids mixed with the sulfuric electrolyte and a pulsed current. Splash effects are created by dying the unsealed porous surface in lighter colors and then splashing darker color dyes onto the surface. Aqueous and solvent-based dye mixtures may also be alternately applied since the colored dyes will resist each other and leave spotted effects. Another interesting coloring method is interference coloring. A thin oil film resting on a water surface displays a rainbow hue due to the interference between light reflected from the water-oil interface and from the oil film's surface. Because the oil film's thickness is not regulated, the resulting rainbow color appears random. In the interference coloring of anodized aluminium, desired colors are achieved by depositing a metal layer (typically tin) of controlled thickness at the base of the porous structure. Here the interference involves reflections from the aluminium substrate and the upper metal surface. The color resulting from interference shifts from blue, green, and yellow to red as the deposited metal layer thickens.
Beyond a specific thickness, the optical interference vanishes, and the color turns bronze. Interference-colored anodized aluminium parts exhibit a distinctive quality: their color varies when viewed from different angles. The interference coloring involves a 3-step process: sulfuric acid anodizing, electrochemical modification of the anodic pore, and metal (tin) deposition. Sealing Sealing is the final step in the anodizing process. Acidic anodizing solutions produce pores in the anodized coating. These pores can absorb dyes and retain lubricants but are also an avenue for corrosion. When lubrication properties are not critical, they are usually sealed after dyeing to increase corrosion resistance and dye retention. The three most common types of sealing are as follows. Long immersion in boiling-hot deionized water or steam is the simplest sealing process, although it is not completely effective and reduces abrasion resistance by 20%. The oxide is converted into its hydrated form and the resulting swelling reduces the porosity of the surface. A mid-temperature sealing process works in solutions containing organic additives and metal salts; however, this process will likely leach the colors. The cold sealing process, in which the pores are closed by impregnation of a sealant in a room-temperature bath, is more popular due to energy savings. Coatings sealed in this method are not suitable for adhesive bonding. Teflon, nickel acetate, cobalt acetate, and hot sodium or potassium dichromate seals are commonly used. MIL-A-8625 requires sealing for thin coatings (Types I and II) and allows it as an option for thick ones (Type III). Cleaning Anodized aluminium surfaces that are not regularly cleaned are susceptible to panel edge staining, a unique type of surface staining that can affect the structural integrity of the metal. Environmental impact Anodizing is one of the more environmentally friendly metal finishing processes. Except for organic (aka integral color) anodizing, the by-products contain only small amounts of heavy metals, halogens, or volatile organic compounds. Integral color anodizing produces no VOCs, heavy metals, or halogens as all of the byproducts found in the effluent streams of other processes come from their dyes or plating materials. The most common anodizing effluents, aluminium hydroxide and aluminium sulfate, are recycled for the manufacturing of alum, baking powder, cosmetics, newsprint and fertilizer or used by industrial wastewater treatment systems. Mechanical considerations Anodizing will raise the surface since the oxide created occupies more space than the base metal converted. This will generally not be of consequence except where there are tight tolerances. If so, the thickness of the anodizing layer has to be taken into account when choosing the machining dimension. A general practice on engineering drawings is to specify that "dimensions apply after all surface finishes". This will force the machine shop to take into account the anodization thickness when performing the final machining of the mechanical part before anodization. Also in the case of small holes threaded to accept screws, anodizing may cause the screws to bind, thus the threaded holes may need to be chased with a tap to restore the original dimensions. Alternatively, special oversize taps may be used to precompensate for this growth. In the case of unthreaded holes that accept fixed-diameter pins or rods, a slightly oversized hole to allow for the dimension change may be appropriate.
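The machining allowances discussed above follow directly from the 50% penetration behaviour described in the aluminium section (the oxide grows half into and half out of the original surface); the sketch below (illustrative only, with hypothetical names) applies that rule:

```python
# Post-anodize dimensional changes, assuming the oxide grows half into and
# half out of the original surface, as described earlier (illustrative only).
def anodized_dimensions(external_dim_mm: float, hole_dia_mm: float,
                        coating_um: float) -> tuple[float, float]:
    growth_mm = (coating_um / 1000.0) / 2.0  # outward growth per surface
    # An external dimension spans two surfaces, so it grows by the full
    # coating thickness; a hole loses the same amount from its diameter.
    return (external_dim_mm + 2 * growth_mm, hole_dia_mm - 2 * growth_mm)

# Example: a 25 um Type III coating on a 10 mm shaft with a 5 mm hole.
print(anodized_dimensions(10.0, 5.0, 25.0))  # (10.025, 4.975)
```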
Depending on the alloy and thickness of the anodized coating, the coating may have a significantly negative effect on fatigue life. Conversely, anodizing may increase fatigue life by preventing corrosion pitting.
Technology
Metallurgy
null
832128
https://en.wikipedia.org/wiki/Liquefied%20natural%20gas
Liquefied natural gas
Liquefied natural gas (LNG) is natural gas (predominantly methane, CH4, with some mixture of ethane, C2H6) that has been cooled to liquid form for ease and safety of non-pressurized storage or transport. It takes up about 1/600th the volume of natural gas in the gaseous state at standard conditions for temperature and pressure. LNG is odorless, colorless, non-toxic and non-corrosive. Hazards include flammability after vaporization into a gaseous state, freezing and asphyxia. The liquefaction process involves removal of certain components, such as dust, acid gases, helium, water, and heavy hydrocarbons, which could cause difficulty downstream. The natural gas is then condensed into a liquid at close to atmospheric pressure by cooling it to approximately −162 °C (−260 °F); maximum transport pressure is set at around 25 kPa (gauge pressure), which is about 0.25 times atmospheric pressure at sea level. The gas extracted from underground hydrocarbon deposits contains a varying mix of hydrocarbon components, which usually includes mostly methane (CH4), along with ethane (C2H6), propane (C3H8) and butane (C4H10). Other gases also occur in natural gas, notably CO2. These gases have wide-ranging boiling points and also different heating values, allowing different routes to commercialization and also different uses. The "acidic" elements such as hydrogen sulphide (H2S) and carbon dioxide (CO2), together with oil, mud, water, and mercury, are removed from the gas to deliver a clean sweetened stream of gas. Failure to remove much or all of such acidic molecules, mercury, and other impurities could result in damage to the equipment. Corrosion of steel pipes and amalgamation of mercury with aluminum within cryogenic heat exchangers could cause expensive damage. The gas stream is typically separated into the liquefied petroleum fractions (butane and propane), which can be stored in liquid form at relatively low pressure, and the lighter ethane and methane fractions. These lighter fractions of methane and ethane are then liquefied to make up the bulk of LNG that is shipped. Natural gas was considered during the 20th century to be economically unimportant wherever gas-producing oil or gas fields were distant from gas pipelines or located in offshore locations where pipelines were not viable. In the past this usually meant that natural gas produced was typically flared, especially since, unlike oil, no viable method for natural gas storage or transport existed other than compressed gas pipelines to end users of the same gas. This meant that natural gas markets were historically entirely local, and any production had to be consumed within the local or regional network. Developments of production processes, cryogenic storage, and transportation effectively created the tools required to commercialize natural gas into a global market which now competes with other fuels. Furthermore, the development of LNG storage also introduced a reliability in networks which was previously thought impossible. Given that storage of other fuels is relatively easily secured using simple tanks, a supply for several months could be kept in storage. With the advent of large-scale cryogenic storage, it became possible to create long term gas storage reserves. These reserves of liquefied gas could be deployed at a moment's notice through regasification processes, and today are the main means for networks to handle local peak shaving requirements.
Specific energy content and energy density The heating value depends on the source of gas that is used and the process that is used to liquefy the gas. The range of heating value can span ±10 to 15 percent. A typical value of the higher heating value of LNG is approximately 50 MJ/kg or 21,500 BTU/lb. A typical value of the lower heating value of LNG is 45 MJ/kg or 19,350 BTU/lb. For the purpose of comparison of different fuels, the heating value may be expressed in terms of energy per volume, which is known as the energy density expressed in MJ/litre. The density of LNG is roughly 0.41 kg/litre to 0.5 kg/litre, depending on temperature, pressure, and composition, compared to water at 1.0 kg/litre. Using the median value of 0.45 kg/litre, the typical energy density values are 22.5 MJ/litre (based on higher heating value) or 20.3 MJ/litre (based on lower heating value). The volumetric energy density of LNG is approximately 2.4 times that of compressed natural gas (CNG), which makes it economical to transport natural gas by ship in the form of LNG. The energy density of LNG is comparable to propane and ethanol but is only 60 percent that of diesel and 70 percent that of gasoline. History Experiments on the properties of gases started early in the 17th century. By the middle of the seventeenth century Robert Boyle had derived the inverse relationship between the pressure and the volume of gases. About the same time, Guillaume Amontons started looking into temperature effects on gas. Various gas experiments continued for the next 200 years. During that time there were efforts to liquefy gases. Many new facts about the nature of gases were discovered. For example, early in the nineteenth century Cagniard de la Tour showed there was a temperature above which a gas could not be liquefied. There was a major push in the mid to late nineteenth century to liquefy all gases. A number of scientists including Michael Faraday, James Joule, and William Thomson (Lord Kelvin) did experiments in this area. In 1886 Karol Olszewski liquefied methane, the primary constituent of natural gas. By 1900 all gases had been liquefied except helium, which was liquefied in 1908. The first large-scale liquefaction of natural gas in the U.S. was in 1918 when the U.S. government liquefied natural gas as a way to extract helium, which is a small component of some natural gas. This helium was intended for use in British dirigibles for World War I. The liquid natural gas (LNG) was not stored, but regasified and immediately put into the gas mains. The key patents having to do with natural gas liquefaction date from 1915 and the mid-1930s. In 1915 Godfrey Cabot patented a method for storing liquid gases at very low temperatures. It consisted of a Thermos bottle-type design which included a cold inner tank within an outer tank; the tanks being separated by insulation. In 1937 Lee Twomey received patents for a process for large-scale liquefaction of natural gas. The intention was to store natural gas as a liquid so it could be used for shaving peak energy loads during cold snaps. Because of large volumes it is not practical to store natural gas, as a gas, near atmospheric pressure. However, when liquefied, it can be stored in a volume 1/600th as large. This is a practical way to store it, but the gas must be kept at about −162 °C (−260 °F). There are two processes for liquefying natural gas in large quantities.
The first is the cascade process, in which the natural gas is cooled by another gas which in turn has been cooled by still another gas, hence named the "cascade" process. There are usually two cascade cycles before the liquid natural gas cycle. The other method is the Linde process, with a variation of the Linde process, called the Claude process, being sometimes used. In this process, the gas is cooled regeneratively by continually passing and expanding it through an orifice until it is cooled to temperatures at which it liquefies. This cooling effect was discovered by James Joule and William Thomson and is known as the Joule–Thomson effect. Lee Twomey used the cascade process for his patents. Commercial operations in the United States The East Ohio Gas Company built a full-scale commercial LNG plant in Cleveland, Ohio, in 1940 just after a successful pilot plant built by its sister company, Hope Natural Gas Company of West Virginia. This was the first such plant in the world. Originally it had three spheres, approximately 63 feet in diameter, containing LNG at −260 °F. Each sphere held the equivalent of about 50 million cubic feet of natural gas. A fourth tank, a cylinder, was added in 1942. It had an equivalent capacity of 100 million cubic feet of gas. The plant operated successfully for three years. The stored gas was regasified and put into the mains when cold snaps hit and extra capacity was needed. This precluded the denial of gas to some customers during a cold snap. The Cleveland plant failed on October 20, 1944, when the cylindrical tank ruptured, spilling thousands of gallons of LNG over the plant and nearby neighborhood. The gas evaporated and caught fire, which caused 130 fatalities. The fire delayed further implementation of LNG facilities for several years. However, over the next 15 years new research on low-temperature alloys, and better insulation materials, set the stage for a revival of the industry. It restarted in 1959 when a U.S. World War II Liberty ship, the Methane Pioneer, converted to carry LNG, made a delivery of LNG from the U.S. Gulf Coast to energy-starved Great Britain. In June 1964, the world's first purpose-built LNG carrier, the Methane Princess, entered service. Soon after that a large natural gas field was discovered in Algeria. International trade in LNG quickly followed as LNG was shipped to France and Great Britain from the Algerian fields. One more important attribute of LNG had now been exploited. Once natural gas was liquefied it could not only be stored more easily, but it could be transported. Thus energy could now be shipped over the oceans via LNG the same way it was shipped in the form of oil. The LNG industry in the U.S. restarted in 1965 with the building of a number of new plants, which continued through the 1970s. These plants were not only used for peak-shaving, as in Cleveland, but also for base-load supplies for places that never had natural gas before this. A number of import facilities were built on the East Coast in anticipation of the need to import energy via LNG. However, a boom in U.S. natural gas production (2010–2014), enabled by hydraulic fracturing ("fracking"), has led to many of these import facilities being reconsidered as export facilities. The first U.S. LNG export was completed in early 2016. By 2023, the U.S. had become the biggest exporter in the world, and projects already under construction or permitted would double its export capacities by 2027.
The largest exporters were Cheniere Energy Inc., Freeport LNG, and Venture Global LNG Inc. The U.S. Energy Information Administration reported that the U.S. had exported 4.3 trillion cubic feet in 2023. LNG life cycle The process begins with the pre-treatment of a feedstock of natural gas entering the system to remove impurities such as H2S, CO2, H2O, mercury and higher-chained hydrocarbons. Feedstock gas then enters the liquefaction unit where it is cooled to between −145 °C and −163 °C. Although the type or number of cooling cycles and/or refrigerants used may vary based on the technology, the basic process involves circulating the gas through aluminum tube coils and exposure to a compressed refrigerant. As the refrigerant is vaporized, the heat transfer causes the gas in the coils to cool. The LNG is then stored in a specialized double-walled insulated tank at atmospheric pressure, ready to be transported to its final destination. Most domestic LNG is transported by land via trucks/trailers designed for cryogenic temperatures. Intercontinental LNG transport travels by special tanker ships. LNG transport tanks comprise an internal steel or aluminum compartment and an external carbon or steel compartment with a vacuum system in between to reduce the amount of heat transfer. Once on site, the LNG must be stored in vacuum-insulated or flat-bottom storage tanks. When ready for distribution, the LNG enters a regasification facility where it is pumped into a vaporizer and heated back into gaseous form. The gas then enters the gas pipeline distribution system and is delivered to the end-user. Production The natural gas fed into the LNG plant will be treated to remove water, hydrogen sulfide, carbon dioxide, benzene and other components that will freeze under the low temperatures needed for storage or be destructive to the liquefaction facility. LNG typically contains more than 90% methane. It also contains small amounts of ethane, propane, butane, some heavier alkanes, and nitrogen. The purification process can be designed to give almost 100% methane. One of the risks of LNG is a rapid phase transition explosion (RPT), which occurs when cold LNG comes into contact with water. The most important infrastructure needed for LNG production and transportation is an LNG plant consisting of one or more LNG trains, each of which is an independent unit for gas liquefaction and purification. A typical train consists of a compression area, propane condenser area, and methane and ethane areas. The largest LNG train in operation is in Qatar, with a total production capacity of 7.8 million tonnes per annum (MTPA). LNG is loaded onto ships and delivered to a regasification terminal, where the LNG is allowed to expand and reconvert into gas. Regasification terminals are usually connected to a storage and pipeline distribution network to distribute natural gas to local distribution companies (LDCs) or independent power plants (IPPs). LNG plant production Information for the following table is derived in part from publications by the U.S. Energy Information Administration.
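As a rough cross-check of the figures given under "Specific energy content and energy density", the following sketch (illustrative only, not authoritative data) recomputes the volumetric values from the quoted per-mass heating values and the median liquid density:

```python
# Recompute LNG volumetric energy density from the figures quoted earlier.
hhv_mj_per_kg = 50.0      # typical higher heating value
lhv_mj_per_kg = 45.0      # typical lower heating value
density_kg_per_l = 0.45   # median of the quoted 0.41-0.5 kg/litre range

# Prints 22.50 and 20.25 MJ/litre, matching the ~22.5 and ~20.3 quoted above.
print(f"HHV energy density: {hhv_mj_per_kg * density_kg_per_l:.2f} MJ/litre")
print(f"LHV energy density: {lhv_mj_per_kg * density_kg_per_l:.2f} MJ/litre")

# Liquefaction shrinks the gas to about 1/600th of its standard-condition
# volume, which is what makes ship transport economical.
print(f"1 m3 of LNG holds roughly {600} m3 of gaseous natural gas")
```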
Technology
Fuel
null
832314
https://en.wikipedia.org/wiki/Clinic
Clinic
A clinic (or outpatient clinic or ambulatory care clinic) is a health facility that is primarily focused on the care of outpatients. Clinics can be privately operated or publicly managed and funded. They typically cover the primary care needs of populations in local communities, in contrast to larger hospitals which offer more specialized treatments and admit inpatients for overnight stays. Most commonly, the English word clinic refers to a general practice, run by one or more general practitioners offering small therapeutic treatments, but it can also mean a specialist clinic. Some clinics retain the name "clinic" even while growing into institutions as large as major hospitals or becoming associated with a hospital or medical school. Etymology The word clinic derives from Ancient Greek klinein meaning to slope, lean or recline. Hence klinē is a couch or bed and klinikos is a physician who visits his patients in their beds. In Latin, this became clīnicus. An early use of the word clinic was "one who receives baptism on a sick bed". Overview Clinics are often associated with a general medical practice run by one or several general practitioners. Other types of clinics are run by the type of specialist associated with that type: physical therapy clinics by physiotherapists and psychology clinics by clinical psychologists, and so on for each health profession. (This can even hold true for certain services outside the medical field: for example, legal clinics are run by lawyers.) Some clinics are operated in-house by employers, government organizations, or hospitals, and some clinical services are outsourced to private corporations which specialize in providing health services. In China, for example, owners of such clinics do not have formal medical education. There were 659,596 village clinics in China in 2011. Health care in India, China, Russia and Africa is provided to those regions' vast rural areas by mobile health clinics or roadside dispensaries, some of which integrate traditional medicine. In India these traditional clinics provide ayurvedic medicine and unani herbal medical practice. In each of these countries, traditional medicine tends to be a hereditary practice. Function The function of clinics differs from country to country. For instance, a local general practice run by a single general practitioner provides primary health care and is usually run as a for-profit business by the owner, whereas a government-run specialist clinic may provide subsidized or specialized health care. Some clinics serve as a place for people with injuries or illnesses to be seen by a triage nurse or other health worker. In these clinics, the injury or illness may not be serious enough to require a visit to an emergency room (ER), but the person can be transferred to one if needed. Treatment at these clinics is often less expensive than it would be at a casualty department. Also, unlike an ER, these clinics are often not open on a 24/7/365 basis. They sometimes have access to diagnostic equipment such as X-ray machines, especially if the clinic is part of a larger facility. Doctors at such clinics can often refer patients to specialists if the need arises. Large outpatient clinics Large outpatient clinics vary in size, but can be as large as hospitals.
Function Typical large outpatient clinics house general practitioners (GPs) and nurses to provide ambulatory care and some acute care services, but lack the major surgical and pre- and post-operative care facilities commonly associated with hospitals. Besides GPs, if a clinic is a polyclinic, it can house outpatient departments of some medical specialties, such as gynecology, dermatology, ophthalmology, otolaryngology, neurology, pulmonology, cardiology, and endocrinology. In some university cities, polyclinics contain outpatient departments for the entire teaching hospital in one building. Internationally Large outpatient clinics are a common type of healthcare facility in many countries, including France, Germany (long tradition), Switzerland, and most of the countries of Central and Eastern Europe (often using a mixed Soviet-German model), as well as in former Soviet republics such as Russia and Ukraine, and in many countries across Asia and Africa. In Europe, especially in Central and Eastern Europe, bigger outpatient health centers, commonly in cities and towns, are called policlinics (derived from the word polis, not from poly-). Recent Russian governments have attempted to replace the policlinic model introduced during Soviet times with a more western model; however, this has failed. In the Czech Republic, many policlinics were privatized or leased out and decentralized in the post-communist era: some of them are now merely lessors and coordinators of healthcare provided by private doctors' offices in the policlinic building. India has also set up huge numbers of polyclinics for former defense personnel. The network envisages 426 polyclinics in 343 districts of the country, which will benefit about 33 lakh (3.3 million) ex-servicemen residing in remote and far-flung areas. Policlinics are also the backbone of Cuba's primary care system and have been credited with a role in improving that nation's health indicators. Mobile clinics Mobile clinics provide accessible healthcare services to remote areas that established health systems have yet to reach. For example, mobile clinics have proved helpful in dealing with new settlement patterns in Costa Rica. Before foreign aid organizations or the state government became involved in healthcare, Costa Rica's people managed their own health maintenance and protection. People relied on various socio-cultural adaptations and remedies to prevent illnesses, such as personal hygiene and settlement patterns. When new settlements that sprang up along the coast became "artificial" communities lacking traditional home healing practices, alternative methods such as mobile clinics had to be implemented in these communities for the protection against and prevention of diseases. A study done in rural Namibia tracked the health of orphans and vulnerable children (OVC) and non-vulnerable children visiting a mobile clinic in areas where health facilities are far from the remote villages. Over 6 months, information on immunization status, diagnosis of anemia, skin and intestinal disorders, nutrition, and dental disorders was collected, and showed that visits to mobile clinics improved the overall health of children who visited regularly.
It concluded that "planning of these programs in areas with similarly identified barriers may help correct the health disparities among Namibian OVC and could be a first step in improving child morbidity and mortality in difficult-to-reach rural areas." Food supplementation in the context of routine mobile clinic visits has also been shown to improve the nutritional status of children, and warrants further exploration as a way to reduce childhood malnutrition in resource-scarce areas. A cross-sectional study focused on comparing acute and chronic undernutrition rates before and after a food-supplementation program run as an adjunct to routine health care for children of migrant workers residing in rural communities in the Dominican Republic. Rates of chronic undernutrition decreased from 33% to 18% after the initiation of the food-supplementation program, suggesting that the community members attending the mobile clinics were not just passively receiving the information but were incorporating it and helping keep their children nourished. Types There are many different types of clinics providing outpatient services. Such clinics may be public (government-funded) or private medical practices. A CLSC, found in Quebec, is a type of free clinic funded by the provincial government; CLSCs provide services not covered by Canada's healthcare plan, including those of social workers. In the United States, a free clinic provides free or low-cost healthcare for those with little or no insurance. A retail-based clinic is housed in a supermarket or similar retail outlet and provides walk-in health care; such clinics may be staffed by nurse practitioners. A general out-patient clinic offers general diagnoses or treatments without an overnight stay. A polyclinic or policlinic provides a range of healthcare services (including diagnostics) without the need for an overnight stay. A specialist clinic provides advanced diagnostic or treatment services for specific diseases or parts of the body. This type contrasts with general out-patient clinics. A sexual health clinic deals with sexual health related problems, such as prevention and treatment of sexually transmitted infections. A gender identity clinic provides services relating to transgender health care. A fertility clinic aims to help women and couples to become pregnant. An abortion clinic is a medical facility providing abortion services to women. An ambulatory surgery clinic offers outpatient or same day surgery services, usually for surgical procedures less complicated than those requiring hospitalization. An ultrasound clinic offers medical ultrasound investigations for patients and is normally run privately.
Biology and health sciences
Health facilities
Health
832482
https://en.wikipedia.org/wiki/Rotten%20Tomatoes
Rotten Tomatoes
Rotten Tomatoes is an American review-aggregation website for film and television. The company was launched in August 1998 by three undergraduate students at the University of California, Berkeley: Senh Duong, Patrick Y. Lee, and Stephen Wang. Although the name "Rotten Tomatoes" connects to the practice of audiences throwing rotten tomatoes in disapproval of a poor stage performance, the direct inspiration for the name for Duong, Lee, and Wang came from an equivalent scene in the 1992 Canadian film Léolo. Since January 2010, Rotten Tomatoes has been owned by Flixster, which was in turn acquired by Warner Bros. in 2011. In February 2016, Rotten Tomatoes and its parent site Flixster were sold to Comcast's Fandango ticketing company. Warner Bros. retained a minority stake in the merged entities, including Fandango. The site is influential among moviegoers in the U.S., a third of whom say they consult it before going to the cinema. It has been criticized for oversimplifying reviews by flattening them into a fresh vs. rotten dichotomy. It has also been criticized for being easy for studios to manipulate by limiting early screenings to critics inclined to be favorable, among other tactics. History Rotten Tomatoes was launched on August 12, 1998, as a spare-time project by Senh Duong. His objective in creating Rotten Tomatoes was "to create a site where people can get access to reviews from a variety of critics in the U.S". As a fan of Jackie Chan, Duong was inspired to create the website after collecting all the reviews of Chan's Hong Kong action movies as they were being released in the United States. The catalyst for the creation of the website was Rush Hour (1998), Chan's first major Hollywood crossover, which was originally planned for release in August 1998. Duong coded the website in two weeks and the site went live the same month, but the release of Rush Hour was delayed until September 1998. Besides Jackie Chan films, he began including other films on Rotten Tomatoes, extending it beyond Chan's fandom. The first non-Chan Hollywood movie whose reviews were featured on Rotten Tomatoes was Your Friends & Neighbors (1998). The website was an immediate success, receiving mentions by Netscape, Yahoo!, and USA Today within the first week of its launch; it attracted "600–1,000 daily unique visitors" as a result. Duong teamed up with University of California, Berkeley classmates Patrick Y. Lee and Stephen Wang, his former partners at the Berkeley, California-based web design firm Design Reactor, to pursue Rotten Tomatoes on a full-time basis. They officially launched it on April 1, 2000. In June 2004, IGN Entertainment acquired Rotten Tomatoes for an undisclosed sum. In September 2005, IGN was bought by News Corp's Fox Interactive Media. In January 2010, IGN sold the website to Flixster. The combined reach of both companies is 30 million unique visitors a month across all platforms, according to the companies. In 2011, Warner Bros. acquired Rotten Tomatoes. In early 2009, Current Television launched The Rotten Tomatoes Show, a televised version of the web review site. It was hosted by Brett Erlich and Ellen Fox and written by Mark Ganek. The show aired Thursdays at 10:30 EST until September 16, 2010. It returned as a much shorter segment of InfoMania, a satirical news show that ended in 2011. By late 2009, the website was designed to enable Rotten Tomatoes users to create and join groups to discuss various aspects of film.
One group, "The Golden Oyster Awards", accepted votes of members for various awards, spoofing the better-known Academy Awards or Golden Globes. When Flixster bought the company, they disbanded the groups. As of February 2011, new community features have been added and others removed. For example, users can no longer sort films by Fresh Ratings from Rotten Ratings, and vice versa. On September 17, 2013, a section devoted to scripted television series, called TV Zone, was created as a subsection of the website. In February 2016, Rotten Tomatoes and its parent site Flixster were sold to Comcast's Fandango Media. Warner Bros retained a minority stake in the merged entities, including Fandango. In December 2016, Fandango and all its various websites moved to Fox Interactive Media's former headquarters in Beverly Hills, California. In July 2017, the website's editor-in-chief since 2007, Matt Atchity, left to join The Young Turks YouTube channel. On November 1, 2017, the site launched a new web series on Facebook, See It/Skip It, hosted by Jacqueline Coley and Segun Oduolowu. In March 2018, the site announced its new design, icons and logo for the first time in 19 years at South by Southwest. On May 19, 2020, Rotten Tomatoes won the 2020 Webby People's Voice Award for Entertainment in the Web category. In February 2021, the Rotten Tomatoes staff made an entry on their Product Blog, announcing several design changes to the site: Each film's 'Score Box' at the top of the page would now also include its release year, genre, and runtimes, with an MPAA rating to be soon added; the number of ratings would be shown in groupings – from 50+ up to 250,000+ ratings, for easier visualization. Links to critics and viewers are included underneath the ratings. By clicking on either the Tomatometer Score or the Audience Score, the users can access "Score Details" information, such as the number of Fresh and Rotten reviews, average rating, and Top Critics' score. The team also added a new "What to Know" section for each film entry page, which could combine the "Critics Consensus" blurb with a new "Audience Says" blurb, so users can see an at-a-glance summary of the sentiments of both certified critics and verified audience members. Features Critics' aggregate score Rotten Tomatoes staff first collect online reviews from writers who are certified members of various writing guilds or film critic-associations. To be accepted as a critic on the website, a critic's original reviews must garner a specific number of "likes" from users. Those classified as "Top Critics" generally write for major newspapers. The critics upload their reviews to the movie page on the website, and need to mark their review "fresh" if it is generally favorable or "rotten" otherwise. It is necessary for the critic to do so as some reviews are qualitative and do not grant a numeric score, making it impossible for the system to be automatic. The website keeps track of all the reviews counted for each film and calculates the percentage of positive reviews. If the positive reviews make up 60% or more, the film is considered "fresh". If the positive reviews are less than 60%, the film is considered "rotten". An average score on a 0 to 10 scale is also calculated. With each review, a short excerpt of the review is quoted that also serves a hyperlink to the complete review essay for anyone interested to read the critic's full thoughts on the subject. 
"Top Critics", such as Roger Ebert, Desson Thomson, Stephen Hunter, Owen Gleiberman, Lisa Schwarzbaum, Peter Travers and Michael Phillips are identified in a sub-listing that calculates their reviews separately. Their opinions are also included in the general rating. When there are sufficient reviews, the staff creates and posts a consensus statement to express the general reasons for the collective opinion of the film. This rating is indicated by an equivalent icon at the film listing, to give the reader a one-glance look at the general critical opinion about the work. The "Certified Fresh" seal is reserved for movies that satisfy two criteria: a "Tomatometer" of 75% or better and at least 80 reviews (40 for limited release movies) from "Tomatometer" critics (including 5 Top Critics). Films earning this status will keep it unless the positive critical percentage drops below 70%. Films with 100% positive ratings that lack the required number of reviews may not receive the "Certified Fresh" seal. When a film or TV show reaches the requirements for the "Certified Fresh", it is not automatically granted the seal; "the Tomatometer score must be consistent and unlikely to deviate significantly" before it is thus marked. Once certified, if a film's score drops and remains consistently below 70%, it loses its Certified Fresh designation. Golden Tomato Awards In 2000, Rotten Tomatoes announced the RT Awards honoring the best-reviewed films of the year according to the website's rating system. The awards were later renamed the Golden Tomato Awards. The nominees and winners are announced on the website, although there is no actual awards ceremony. The films are divided into wide release and limited release categories. Limited releases are defined as opening in 599 or fewer theaters at initial release. Platform releases, movies initially released under 600 theaters but later receiving wider distribution, fall under this definition. Any film opening in more than 600 theaters is considered wide release. There are also two categories purely for British and Australian films. The "User"-category represents the highest rated film among users, and the "Mouldy"-award represents the worst-reviewed films of the year. A movie must have 40 (originally 20) or more rated reviews to be considered for domestic categories. It must have 500 or more user ratings to be considered for the "User"-category. Films are further classified based on film genre. Each movie is eligible in only one genre, aside from non-English-language films, which can be included in both their genre and the respective "Foreign" category. Once a film is considered eligible, its "votes" are counted. Each critic from the website's list gets one vote (as determined by their review), all weighted equally. Because reviews are continually added, manually and otherwise, a cutoff date at which new reviews are not counted toward the Golden Tomato awards is initiated each year, usually the first of the new year. Reviews without ratings are not counted toward the results of the Golden Tomato Awards. Audience score and reviews Each movie features a "user average", which calculates the percentage of registered users who have rated the film positively on a 5-star scale, similar to calculation of recognized critics' reviews. On May 24, 2019, Rotten Tomatoes introduced a verified rating system that would replace the earlier system where users were merely required to register to submit a rating. 
In addition to creating an account, users now have to verify their ticket purchase through the ticketing company Fandango Media, the parent company of Rotten Tomatoes. While users can still leave reviews without verifying, those reviews do not count toward the average audience score displayed next to the Tomatometer. On August 21, 2024, Rotten Tomatoes rebranded its audience score as the Popcornmeter and introduced a new "Verified Hot" badge. The designation is only given to films which have reached an audience score of 90 percent or higher among users whom Rotten Tomatoes has verified as having purchased a ticket to the film through Fandango. A representative for Rotten Tomatoes stated that their goal is to include other services in the future for users who do not use Fandango. Upon its creation, the "Verified Hot" badge was applied retroactively to over 200 films which had achieved a verified audience score of 90% or higher since the launch of Rotten Tomatoes' verified audience ratings in May 2019. "What to Know" In February 2021, a new "What to Know" section was created for each film entry, combining the "Critics Consensus" blurb and a new "Audience Says" blurb, to give users an at-a-glance summary of the general sentiments about a film as experienced by critics and audiences. Prior to February 2021, only the "Critics Consensus" blurb was posted for each entry, after enough certified critics had submitted reviews. When the "Audience Says" blurbs were added, Rotten Tomatoes initially included them only for newer films and those with a significant audience rating, but suggested that they may later be added for older films as well. "Critics Consensus" / "Audience Says" Each movie features a brief blurb summarizing the critics' reviews, called the "Critics Consensus", displayed alongside that entry's Tomatometer aggregate score. In February 2021, Rotten Tomatoes added an "Audience Says" section; similar to the "Critics Consensus", it summarizes the reviews submitted by registered users into a concise blurb. The Rotten Tomatoes staff noted that, for any given film, if external factors such as controversies affected sentiment about the film, they may address this in the "Audience Says" section to give users the most relevant information regarding their viewing choices. Localized versions Localized versions of the site available in the United Kingdom, India, and Australia were discontinued following the acquisition of Rotten Tomatoes by Fandango. The Mexican version of the site remains active. API The Rotten Tomatoes API provides limited access to critic and audience ratings and reviews, allowing developers to incorporate Rotten Tomatoes data on other websites. The free service is intended for use in the US only; permission is required for use elsewhere. As of 2022, API access is restricted to approved developers who must go through an application process. Influence Major Hollywood studios have come to see Rotten Tomatoes as a potential threat to their marketing. In 2017, several blockbuster films like Pirates of the Caribbean: Dead Men Tell No Tales, Baywatch and The Mummy were projected to open with gross receipts of $90 million, $50 million and $45 million, respectively, but ended up debuting with $62.6 million, $23.1 million and $31.6 million. Rotten Tomatoes, which scored the films at 30%, 19% and 16%, respectively, was blamed for undermining them.
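As a quick arithmetic check of the shortfalls just cited, the three films opened roughly 30%, 54% and 30% below their projections; a few lines of Python reproduce the figures (all numbers are those quoted above, in millions of US dollars):

# Shortfall between projected and actual opening grosses (figures cited above).
films = {
    "Dead Men Tell No Tales": (90.0, 62.6),
    "Baywatch": (50.0, 23.1),
    "The Mummy": (45.0, 31.6),
}
for title, (projected, actual) in films.items():
    shortfall_pct = 100.0 * (projected - actual) / projected
    print(f"{title}: opened {shortfall_pct:.0f}% below projection")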
That same summer, films like Wonder Woman and Spider-Man: Homecoming (both 92%) received high scores and opened at or above their $100-plus million tracking estimates. As a result of this concern, 20th Century Fox commissioned a 2015 study, titled "Rotten Tomatoes and Box Office", which stated that the website, combined with social media, was going to be an increasingly serious complication for the film business: "The power of Rotten Tomatoes and fast-breaking word of mouth will only get stronger. Many Millennials and even Gen X-ers now vet every purchase through the Internet, whether it's restaurants, video games, make-up, consumer electronics or movies. As they get older and comprise an even larger share of total moviegoers, this behavior is unlikely to change". Other studios have commissioned a number of studies on the subject, finding that seven in ten people said they would be less interested in seeing a film if its Rotten Tomatoes score was below 25%, and that the site has the most influence on people 25 and younger. The scores have reached a level of online ubiquity which film companies have found threatening. For instance, the scores are regularly posted in Google search results for films so reviewed. Furthermore, the scores are prominently featured on Fandango's popular ticket-purchasing website, on its mobile app, on popular streaming services like Peacock, and on Flixster, which led to complaints that "rotten" scores damaged films' performances. Others have argued that filmmakers and studios have only themselves to blame if Rotten Tomatoes produces a bad score, as this only reflects a poor reception among film critics. As one independent film distributor marketing executive noted, "To me, it's a ridiculous argument that Rotten Tomatoes is the problem ... make a good movie!". ComScore's Paul Dergarabedian had similar comments, saying: "The best way for studios to combat the 'Rotten Tomatoes Effect' is to make better movies, plain and simple". Some studios have suggested embargoing or cancelling early critic screenings in response to poor reviews prior to a film's release affecting pre-sales and opening weekend numbers. In July 2017, Sony embargoed critic reviews for The Emoji Movie until mid-day the Thursday before its release. The film ended up with a 9% rating (including 0% after the first 25 reviews), but still opened to $24 million, on par with projections. Josh Greenstein, Sony Pictures President of Worldwide Marketing and Distribution, said, "The Emoji Movie was built for people under 18 ... so we wanted to give the movie its best chance. What other wide release with a score under 8 percent has opened north of $20 million? I don't think there is one". Conversely, Warner Bros. also did not hold critic pre-screenings for The House, which held a score of 16% until the day of its release, and opened to just $8.7 million, the lowest of star Will Ferrell's career. That marketing tactic can backfire: it drew the vocal disgust of influential critics such as Roger Ebert, who was prone to derisively condemning such moves with gestures such as "The Wagging Finger of Shame" on At the Movies. Furthermore, withholding reviews can itself lead the public to conclude early on that a film is of poor quality.
On February 26, 2019, in response to issues surrounding coordinated "bombing" of user reviews for several films, most notably Captain Marvel and Star Wars: The Rise of Skywalker, prior to their release, the site announced that user reviews would no longer be accepted until a film is publicly released. The site also announced plans to introduce a system for "verified" reviews, and that the "Want to See" statistic would now be expressed as a number so that it would not be confused with the audience score. Despite arguments over how Rotten Tomatoes scores impact the box office, academic researchers have so far found no evidence that Rotten Tomatoes ratings affect box office performance. Criticism Oversimplification In January 2010, on the occasion of the 75th anniversary of the New York Film Critics Circle, its chairman Armond White cited Rotten Tomatoes in particular and film review aggregators in general as examples of how "the Internet takes revenge on individual expression". He said they work by "dumping reviewers onto one website and assigning spurious percentage-enthusiasm points to the discrete reviews". According to White, such websites "offer consensus as a substitute for assessment". Landon Palmer, a film and media historian and assistant professor in the Department of Journalism and Creative Media in the College of Communication and Information Sciences at the University of Alabama, agreed with White, stating that "[Rotten Tomatoes applies a] problematic algorithm to pretty much all avenues of modern media art and entertainment". Director and producer Brett Ratner has criticized the website for "reducing hundreds of reviews culled from print and online sources into a popularized aggregate score", while expressing respect for traditional film critics. Writer Max Landis, after his film Victor Frankenstein received an approval rating of 24% on the site, wrote that the site "breaks down entire reviews into just the word 'yes' or 'no', making criticism binary in a destructive arbitrary way". Review manipulation Vulture ran an article in September 2023 that raised several criticisms of Rotten Tomatoes's system, including the ease with which large companies are able to manipulate reviewer ratings. The article cited the publicity company Bunker 15 as an example of how scores can be boosted by recruiting obscure, often self-published reviewers, using the example of 2018's Ophelia. Rotten Tomatoes responded by delisting several Bunker 15 films, including Ophelia. It told Vulture in a statement, "We take the integrity of our scores seriously and do not tolerate any attempts to manipulate them. We have a dedicated team who monitors our platforms regularly and thoroughly investigates and resolves any suspicious activity." In February 2024, WIRED published an article by Christopher Null, a former film critic, arguing that such methods are standard activities performed by all PR agencies. In particular, Null points out that sponsoring legitimate, honest reviews has a long history in other industries and is a "common tactic employed by indie titles to get visibility." Other criticisms American director Martin Scorsese wrote a column in The Hollywood Reporter criticizing both Rotten Tomatoes and CinemaScore for promoting the idea that films like Mother! had to be "instantly liked" to be successful.
Scorsese, in a dedication for the Roger Ebert Center for Film Studies at the University of Illinois, later continued his criticism, voicing that Rotten Tomatoes and other review services "devalue cinema on streaming platforms to the level of content". In 2015, while promoting the film Suffragette (which has a 73% approval rating), actress Meryl Streep accused Rotten Tomatoes of disproportionately representing the opinions of male film critics, resulting in a skewed ratio that adversely affected the commercial performances of female-driven films. "I submit to you that men and women are not the same, they like different things," she said. "Sometimes they like the same thing, but sometimes their tastes diverge. If the Tomatometer is slighted so completely to one set of tastes that drives box office in the United States, absolutely". Critics took issue with the sentiment that someone's gender or ethnic background would dictate their response to art. Rotten Tomatoes deliberately withheld the critic score for Justice League, based on early reviews, until the premiere of its See It/Skip It episode on the Thursday before the film's release. Some critics viewed the move as a ploy to promote the web series, while others argued that it represented a conflict of interest, on account of Warner Bros.' ownership of both the film and Rotten Tomatoes and the tepid critical reception of the DC Extended Universe films at the time. The New York Times aggregated statistics on the critical reception of audience scores versus critic scores, and noticed in almost every genre that "The public rates a movie more positively than do the critics. The only exceptions are black comedies and documentaries. Critics systematically rate films in these genres more highly than do Rotten Tomatoes users". Slate magazine collected data in a similar survey that revealed a noticeable favoring of movies released before the 1990s, which "may be explained by a bias toward reviewers reviewing, or Rotten Tomatoes scoring, only the best movies from bygone eras".
Technology
Utility
null
833499
https://en.wikipedia.org/wiki/Prestressed%20concrete
Prestressed concrete
Prestressed concrete is a form of concrete used in construction. It is substantially "prestressed" (compressed) during production, in a manner that strengthens it against tensile forces which will exist when in service. It was patented by Eugène Freyssinet in 1928. This compression is produced by the tensioning of high-strength "tendons" located within or adjacent to the concrete and is done to improve the performance of the concrete in service. Tendons may consist of single wires, multi-wire strands or threaded bars that are most commonly made from high-tensile steels, carbon fiber or aramid fiber. The essence of prestressed concrete is that once the initial compression has been applied, the resulting material has the characteristics of high-strength concrete when subject to any subsequent compression forces and of ductile high-strength steel when subject to tension forces. This can result in improved structural capacity and/or serviceability compared with conventionally reinforced concrete in many situations. In a prestressed concrete member, the internal stresses are introduced in a planned manner so that the stresses resulting from the imposed loads are counteracted to the desired degree. Prestressed concrete is used in a wide range of building and civil structures where its improved performance can allow for longer spans, reduced structural thicknesses, and material savings compared with simple reinforced concrete. Typical applications include high-rise buildings, residential concrete slabs, foundation systems, bridge and dam structures, silos and tanks, industrial pavements and nuclear containment structures. First used in the late nineteenth century, prestressed concrete has developed beyond pre-tensioning to include post-tensioning, which occurs after the concrete is cast. Tensioning systems may be classed as either 'monostrand', where each tendon's strand or wire is stressed individually, or 'multi-strand', where all strands or wires in a tendon are stressed simultaneously. Tendons may be located either within the concrete volume (internal prestressing) or wholly outside of it (external prestressing). While pre-tensioned concrete uses tendons directly bonded to the concrete, post-tensioned concrete can use either bonded or unbonded tendons. Pre-tensioned concrete Pre-tensioned concrete is a variant of prestressed concrete where the tendons are tensioned prior to the concrete being cast. The concrete bonds to the tendons as it cures, following which the end-anchoring of the tendons is released, and the tendon tension forces are transferred to the concrete as compression by static friction. Pre-tensioning is a common prefabrication technique, where the resulting concrete element is manufactured off-site from the final structure location and transported to site once cured. It requires strong, stable end-anchorage points between which the tendons are stretched. These anchorages form the ends of a "casting bed" which may be many times the length of the concrete element being fabricated. This allows multiple elements to be constructed end-to-end in the one pre-tensioning operation, allowing significant productivity benefits and economies of scale to be realized. The amount of bond (or adhesion) achievable between the freshly set concrete and the surface of the tendons is critical to the pre-tensioning process, as it determines when the tendon anchorages can be safely released. Higher bond strength in early-age concrete will speed production and allow more economical fabrication. 
To promote this, pre-tensioned tendons are usually composed of isolated single wires or strands, which provides a greater surface area for bonding than bundled-strand tendons. Unlike those of post-tensioned concrete (see below), the tendons of pre-tensioned concrete elements generally form straight lines between end-anchorages. Where "profiled" or "harped" tendons are required, one or more intermediate deviators are located between the ends of the tendon to hold the tendon to the desired non-linear alignment during tensioning. Such deviators usually act against substantial forces, and hence require a robust casting-bed foundation system. Straight tendons are typically used in "linear" precast concrete elements, such as shallow beams and hollow-core slabs, whereas profiled tendons are more commonly found in deeper precast bridge beams and girders. Pre-tensioned concrete is most commonly used for the fabrication of structural beams, floor slabs, hollow-core slabs, balconies, lintels, driven piles, water tanks and concrete pipes. Post-tensioned concrete Post-tensioned concrete is a variant of prestressed concrete where the tendons are tensioned after the surrounding concrete structure has been cast. The tendons are not placed in direct contact with the concrete, but are encapsulated within a protective sleeve or duct which is either cast into the concrete structure or placed adjacent to it. At each end of a tendon is an anchorage assembly firmly fixed to the surrounding concrete. Once the concrete has been cast and set, the tendons are tensioned ("stressed") by pulling the tendon ends through the anchorages while pressing against the concrete. The large forces required to tension the tendons result in a significant permanent compression being applied to the concrete once the tendon is "locked-off" at the anchorage. The method of locking the tendon-ends to the anchorage is dependent upon the tendon composition, with the most common systems being "button-head" anchoring (for wire tendons), split-wedge anchoring (for strand tendons), and threaded anchoring (for bar tendons). Tendon encapsulation systems are constructed from plastic or galvanised steel materials, and are classified into two main types: those where the tendon element is subsequently bonded to the surrounding concrete by internal grouting of the duct after stressing (bonded post-tensioning); and those where the tendon element is permanently debonded from the surrounding concrete, usually by means of a greased sheath over the tendon strands (unbonded post-tensioning). Casting the tendon ducts/sleeves into the concrete before any tensioning occurs allows them to be readily "profiled" to any desired shape, including incorporating vertical and/or horizontal curvature. When the tendons are tensioned, this profiling results in reaction forces being imparted onto the hardened concrete, and these can be beneficially used to counter any loadings subsequently applied to the structure. Bonded post-tensioning In bonded post-tensioning, tendons are permanently bonded to the surrounding concrete by the in situ grouting of their encapsulating ducting (after tendon tensioning). This grouting is undertaken for three main purposes: to protect the tendons against corrosion; to permanently "lock-in" the tendon pre-tension, thereby removing the long-term reliance upon the end-anchorage systems; and to improve certain structural behaviors of the final concrete structure.
Bonded post-tensioning characteristically uses tendons each comprising bundles of elements (e.g., strands or wires) placed inside a single tendon duct, with the exception of bars which are mostly used unbundled. This bundling makes for more efficient tendon installation and grouting processes, since each complete tendon requires only one set of end-anchorages and one grouting operation. Ducting is fabricated from a durable and corrosion-resistant material such as plastic (e.g., polyethylene) or galvanised steel, and can be either round or rectangular/oval in cross-section. The tendon sizes used are highly dependent upon the application, ranging from building works typically using between 2 and 6 strands per tendon, to specialized dam works using up to 91 strands per tendon. Fabrication of bonded tendons is generally undertaken on-site, commencing with the fitting of end-anchorages to formwork, placing the tendon ducting to the required curvature profiles, and reeving (or threading) the strands or wires through the ducting. Following concreting and tensioning, the ducts are pressure-grouted and the tendon stressing-ends sealed against corrosion. Unbonded post-tensioning Unbonded post-tensioning differs from bonded post-tensioning by allowing the tendons permanent freedom of longitudinal movement relative to the concrete. This is most commonly achieved by encasing each individual tendon element within a plastic sheathing filled with a corrosion-inhibiting grease, usually lithium based. Anchorages at each end of the tendon transfer the tensioning force to the concrete, and are required to reliably perform this role for the life of the structure. Unbonded post-tensioning can take the form of: Individual strand tendons placed directly into the concreted structure (e.g., buildings, ground slabs) Bundled strands, individually greased-and-sheathed, forming a single tendon within an encapsulating duct that is placed either within or adjacent to the concrete (e.g., restressable anchors, external post-tensioning) For individual strand tendons, no additional tendon ducting is used and no post-stressing grouting operation is required, unlike for bonded post-tensioning. Permanent corrosion protection of the strands is provided by the combined layers of grease, plastic sheathing, and surrounding concrete. Where strands are bundled to form a single unbonded tendon, an enveloping duct of plastic or galvanised steel is used and its interior free-spaces grouted after stressing. In this way, additional corrosion protection is provided via the grease, plastic sheathing, grout, external sheathing, and surrounding concrete layers. Individually greased-and-sheathed tendons are usually fabricated off-site by an extrusion process. The bare steel strand is fed into a greasing chamber and then passed to an extrusion unit where molten plastic forms a continuous outer coating. Finished strands can be cut-to-length and fitted with "dead-end" anchor assemblies as required for the project. Comparison between bonded and unbonded post-tensioning Both bonded and unbonded post-tensioning technologies are widely used around the world, and the choice of system is often dictated by regional preferences, contractor experience, or the availability of alternative systems. Either one is capable of delivering code-compliant, durable structures meeting the structural strength and serviceability requirements of the designer. 
The benefits that bonded post-tensioning can offer over unbonded systems are: Reduced reliance on end-anchorage integrity. Following tensioning and grouting, bonded tendons are connected to the surrounding concrete along their full length by high-strength grout. Once cured, this grout can transfer the full tendon tension force to the concrete within a very short distance (approximately 1 metre). As a result, any inadvertent severing of the tendon or failure of an end anchorage has only a very localised impact on tendon performance, and almost never results in tendon ejection from the anchorage. Increased ultimate strength in flexure. With bonded post-tensioning, any flexure of the structure is directly resisted by tendon strains at that same location (i.e. no strain re-distribution occurs). This results in significantly higher tensile strains in the tendons than if they were unbonded, allowing their full yield strength to be realised, and producing a higher ultimate load capacity. Improved crack-control. In the presence of concrete cracking, bonded tendons respond similarly to conventional reinforcement (rebar). With the tendons fixed to the concrete at each side of the crack, greater resistance to crack expansion is offered than with unbonded tendons, allowing many design codes to specify reduced reinforcement requirements for bonded post-tensioning. Improved fire performance. The absence of strain redistribution in bonded tendons may limit the impact that any localised overheating has on the overall structure. As a result, bonded structures may display a higher capacity to resist fire conditions than unbonded ones. The benefits that unbonded post-tensioning can offer over bonded systems are: Ability to be prefabricated. Unbonded tendons can be readily prefabricated off-site complete with end-anchorages, facilitating faster installation during construction. Additional lead time may need to be allowed for this fabrication process. Improved site productivity. The elimination of the post-stressing grouting process required in bonded structures improves the site-labour productivity of unbonded post-tensioning. Improved installation flexibility. Unbonded single-strand tendons have greater handling flexibility than bonded ducting during installation, allowing them a greater ability to be deviated around service penetrations or obstructions. Reduced concrete cover. Unbonded tendons may allow some reduction in concrete element thickness, as their smaller size and increased corrosion protection may allow them to be placed closer to the concrete surface. Simpler replacement and/or adjustment. Being permanently isolated from the concrete, unbonded tendons are able to be readily de-stressed, re-stressed and/or replaced should they become damaged or need their force levels to be modified in-service. Superior overload performance. Although having a lower ultimate strength than bonded tendons, unbonded tendons' ability to redistribute strains over their full length can give them superior pre-collapse ductility. In extremes, unbonded tendons can resort to a catenary-type action instead of pure flexure, allowing significantly greater deformation before structural failure. Tendon durability and corrosion protection Long-term durability is an essential requirement for prestressed concrete given its widespread use. 
Research on the durability performance of in-service prestressed structures has been undertaken since the 1960s, and anti-corrosion technologies for tendon protection have been continually improved since the earliest systems were developed. The durability of prestressed concrete is principally determined by the level of corrosion protection provided to any high-strength steel elements within the prestressing tendons. Also critical is the protection afforded to the end-anchorage assemblies of unbonded tendons or cable-stay systems, as the anchorages of both of these are required to retain the prestressing forces. Failure of any of these components can result in the release of prestressing forces, or the physical rupture of stressing tendons. Modern prestressing systems deliver long-term durability by addressing the following areas: Tendon grouting (bonded tendons): Bonded tendons consist of bundled strands placed inside ducts located within the surrounding concrete. To ensure full protection to the bundled strands, the ducts must be pressure-filled with a corrosion-inhibiting grout, without leaving any voids, following strand-tensioning. Tendon coating (unbonded tendons): Unbonded tendons comprise individual strands coated in an anti-corrosion grease or wax, and fitted with a durable plastic-based full-length sleeve or sheath. The sleeving is required to be undamaged over the tendon length, and it must extend fully into the anchorage fittings at each end of the tendon. Double-layer encapsulation: Prestressing tendons requiring permanent monitoring and/or force adjustment, such as stay-cables and re-stressable dam anchors, will typically employ double-layer corrosion protection. Such tendons are composed of individual strands, grease-coated and sleeved, collected into a strand-bundle and placed inside encapsulating polyethylene outer ducting. The remaining void space within the duct is pressure-grouted, providing a multi-layer polythene-grout-plastic-grease protection barrier system for each strand. Anchorage protection: In all post-tensioned installations, protection of the end-anchorages against corrosion is essential, and critically so for unbonded systems. Several durability-related events are listed below: Ynys-y-Gwas bridge, West Glamorgan, Wales, 1985: A single-span, precast-segmental structure constructed in 1953 with longitudinal and transverse post-tensioning. Corrosion attacked the under-protected tendons where they crossed the in-situ joints between the segments, leading to sudden collapse. Scheldt River bridge, Melle, Belgium, 1991: A three-span prestressed cantilever structure constructed in the 1950s. Inadequate concrete cover in the side abutments resulted in tie-down cable corrosion, leading to a progressive failure of the main bridge span and the death of one person. UK Highways Agency, 1992: Following discovery of tendon corrosion in several bridges in England, the Highways Agency issued a moratorium on the construction of new internally grouted post-tensioned bridges and embarked on a 5-year programme of inspections on its existing post-tensioned bridge stock. The moratorium was lifted in 1996. Pedestrian bridge, Charlotte Motor Speedway, North Carolina, US, 2000: A multi-span steel and concrete structure constructed in 1995. An unauthorised chemical was added to the tendon grout to speed construction, leading to corrosion of the prestressing strands and the sudden collapse of one span, injuring many spectators.
Hammersmith Flyover, London, England, 2011: Sixteen-span prestressed structure constructed in 1961. Corrosion from road de-icing salts was detected in some of the prestressing tendons, necessitating initial closure of the road while additional investigations were done. Subsequent repairs and strengthening using external post-tensioning were carried out and completed in 2015. Petrulla Viaduct ("Viadotto Petrulla"), Sicily, Italy, 2014: One span of the 12-span viaduct collapsed on 7 July 2014, causing 4 injuries, due to corrosion of the post-tensioning tendons. Genoa bridge collapse, 2018: The Ponte Morandi was a cable-stayed bridge characterised by a prestressed concrete structure for the piers, pylons and deck, very few stays (as few as two per span), and a hybrid system for the stays, constructed from steel cables with prestressed concrete shells poured on. The concrete was prestressed to only 10 MPa, leaving it prone to cracks and water intrusion, which caused corrosion of the embedded steel. Churchill Way flyovers, Liverpool, England: The flyovers were closed in September 2018 after inspections revealed poor quality concrete, tendon corrosion and signs of structural distress. They were demolished in 2019. Applications Prestressed concrete is a highly versatile construction material, the result of it being an almost ideal combination of its two main constituents: high-strength steel, pre-stretched to allow its full strength to be easily realised; and modern concrete, pre-compressed to minimise cracking under tensile forces. Its wide range of application is reflected in its incorporation into the major design codes covering most areas of structural and civil engineering, including buildings, bridges, dams, foundations, pavements, piles, stadiums, silos, and tanks. Building structures Building structures are typically required to satisfy a broad range of structural, aesthetic and economic requirements. Significant among these are: a minimum number of (intrusive) supporting walls or columns; low structural thickness (depth), allowing space for services, or for additional floors in high-rise construction; fast construction cycles, especially for multi-storey buildings; and a low cost-per-unit-area, to maximise the building owner's return on investment. The prestressing of concrete allows "load-balancing" forces to be introduced into the structure to counter in-service loadings (a standard relation quantifying this effect is sketched after this list). This provides many benefits to building structures: Longer spans for the same structural depth: Load balancing results in lower in-service deflections, which allows spans to be increased (and the number of supports reduced) without adding to structural depth. Reduced structural thickness: For a given span, lower in-service deflections allow thinner structural sections to be used, in turn resulting in lower floor-to-floor heights, or more room for building services. Faster stripping time: Typically, prestressed concrete building elements are fully stressed and self-supporting within five days. At this point they can have their formwork stripped and re-deployed to the next section of the building, accelerating construction "cycle-times". Reduced material costs: The combination of reduced structural thickness, reduced conventional reinforcement quantities, and fast construction often results in prestressed concrete showing significant cost benefits in building structures compared to alternative structural materials.
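The "load-balancing" effect referred to before the list above is commonly quantified with a standard textbook relation, included here for illustration rather than taken from this article: a tendon carrying a prestress force P, draped to a parabolic profile with mid-span drape (eccentricity) e over a span L, applies an equivalent upward uniform load to the concrete of

w_p = \frac{8 P e}{L^2}

so the designer selects P and e such that w_p offsets a chosen fraction of the applied gravity load, with in-service deflections reduced accordingly.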
Some notable building structures constructed from prestressed concrete include: Sydney Opera House and World Tower, Sydney; St George Wharf Tower, London; CN Tower, Toronto; Kai Tak Cruise Terminal and International Commerce Centre, Hong Kong; Ocean Heights 2, Dubai; Eureka Tower, Melbourne; Torre Espacio, Madrid; Guoco Tower (Tanjong Pagar Centre), Singapore; Zagreb International Airport, Croatia; and Capital Gate, Abu Dhabi UAE. Civil structures Bridges Concrete is the most popular structural material for bridges, and prestressed concrete is frequently adopted. When investigated in the 1940s for use on heavy-duty bridges, the advantages of this type of bridge over more traditional designs were that it was quicker to install, more economical and longer-lasting, with the bridge being less lively (less prone to vibration). One of the first bridges built in this way is the Adam Viaduct, a railway bridge constructed in 1946 in the UK. By the 1960s, prestressed concrete had largely superseded reinforced concrete bridges in the UK, with box girders being the dominant form. In short-span bridges, prestressing is commonly employed in the form of precast pre-tensioned girders or planks. Medium-length structures typically use precast-segmental, in-situ balanced-cantilever and incrementally-launched designs. For the longest bridges, prestressed concrete deck structures often form an integral part of cable-stayed designs. Dams Concrete dams have used prestressing to counter uplift and increase their overall stability since the mid-1930s. Prestressing is also frequently retro-fitted as part of dam remediation works, such as for structural strengthening, or when raising crest or spillway heights. Most commonly, dam prestressing takes the form of post-tensioned anchors drilled into the dam's concrete structure and/or the underlying rock strata. Such anchors typically comprise tendons of high-tensile bundled steel strands or individual threaded bars. Tendons are grouted to the concrete or rock at their far (internal) end, and have a significant "de-bonded" free-length at their external end which allows the tendon to stretch during tensioning. Tendons may be full-length bonded to the surrounding concrete or rock once tensioned, or (more commonly) have strands permanently encapsulated in corrosion-inhibiting grease over the free-length to permit long-term load monitoring and re-stressability. Silos and tanks Circular storage structures such as silos and tanks can use prestressing forces to directly resist the outward pressures generated by stored liquids or bulk-solids. Horizontally curved tendons are installed within the concrete wall to form a series of hoops, spaced vertically up the structure. When tensioned, these tendons exert both axial (compressive) and radial (inward) forces onto the structure, which can directly oppose the subsequent storage loadings. If the magnitude of the prestress is designed to always exceed the tensile stresses produced by the loadings, a permanent residual compression will exist in the wall concrete, assisting in maintaining a watertight crack-free structure. Nuclear and blast Prestressed concrete has been established as a reliable construction material for high-pressure containment structures such as nuclear reactor vessels and containment buildings, and petrochemical tank blast-containment walls.
Using pre-stressing to place such structures into an initial state of bi-axial or tri-axial compression increases their resistance to concrete cracking and leakage, while providing a proof-loaded, redundant and monitorable pressure-containment system. Nuclear reactor and containment vessels will commonly employ separate sets of post-tensioned tendons curved horizontally or vertically to completely envelop the reactor core. Blast containment walls, such as for liquid natural gas (LNG) tanks, will normally utilize layers of horizontally-curved hoop tendons for containment in combination with vertically looped tendons for axial wall pre-stressing. Hardstands and pavements Heavily loaded concrete ground-slabs and pavements can be sensitive to cracking and subsequent traffic-driven deterioration. As a result, prestressed concrete is regularly used in such structures as its pre-compression provides the concrete with the ability to resist the crack-inducing tensile stresses generated by in-service loading. This crack-resistance also allows individual slab sections to be constructed in larger pours than for conventionally reinforced concrete, resulting in wider joint spacings, reduced jointing costs and less long-term joint maintenance issues. Initial works have also been successfully conducted on the use of precast prestressed concrete for road pavements, where the speed and quality of the construction has been noted as being beneficial for this technique. Some notable civil structures constructed using prestressed concrete include: Gateway Bridge, Brisbane Australia; Incheon Bridge, South Korea; Roseires Dam, Sudan; Wanapum Dam, Washington, US; LNG tanks, South Hook, Wales; Cement silos, Brevik Norway; Autobahn A73 bridge, Itz Valley, Germany; Ostankino Tower, Moscow, Russia; CN Tower, Toronto, Canada; and Ringhals nuclear reactor, Videbergshamn Sweden. Design agencies and regulations Worldwide, many professional organizations exist to promote best practices in the design and construction of prestressed concrete structures. In the United States, such organizations include the Post-Tensioning Institute (PTI) and the Precast/Prestressed Concrete Institute (PCI). Similar bodies include the Canadian Precast/Prestressed Concrete Institute (CPCI), the UK's Post-Tensioning Association, the Post Tensioning Institute of Australia and the South African Post Tensioning Association. Europe has similar country-based associations and institutions. These organizations are not the authorities of building codes or standards, but rather exist to promote the understanding and development of prestressed concrete design, codes and best practices. Rules and requirements for the detailing of reinforcement and prestressing tendons are specified by individual national codes and standards such as: European Standard EN 1992-2:2005 – Eurocode 2: Design of Concrete Structures; US Standard ACI318: Building Code Requirements for Reinforced Concrete; and Australian Standard AS 3600-2009: Concrete Structures.
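As a quantitative footnote to the "Silos and tanks" section above, and a standard thin-walled membrane result rather than a statement from this article: an internal pressure p acting at wall radius R produces a ring (hoop) tension per unit height of wall of

N = p R

and the horizontal hoop tendons are sized so that their compressive prestress meets or exceeds this tensile demand, keeping the wall in the residual compression needed for a watertight, crack-free structure.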
Technology
Building materials
null
835031
https://en.wikipedia.org/wiki/Riverboat
Riverboat
A riverboat is a watercraft designed for inland navigation on lakes, rivers, and artificial waterways. They are generally equipped and outfitted as work boats in one of the carrying trades, for freight or people transport, including luxury units constructed for entertainment enterprises, such as lake or harbour tour boats. As larger water craft, virtually all riverboats are especially designed and constructed, or alternatively, constructed with special-purpose features that optimize them as riverine or lake service craft, for instance, dredgers, survey boats, fisheries management craft, fireboats and law enforcement patrol craft. Design differences Riverboats are usually less sturdy than ships built for the open seas, with limited navigational and rescue equipment, as they do not have to withstand the high winds or large waves characteristic of large lakes, seas or oceans. They can thus be built from light composite materials. They are limited in size by the width and depth of the river as well as the height of bridges spanning the river. They can be designed with shallow drafts, as were the paddle wheel steamers on the Mississippi River that could operate in water under two metres deep. While a ferry is often used to cross a river, a riverboat is used to travel along the course of the river, while carrying passengers or cargo, or both, for revenue. (Vessels like "riverboat casinos" are not considered here, as they are essentially stationary.) The significance of riverboats depends on the number of navigable rivers and channels as well as the condition of the road and rail network. Generally speaking, riverboats provide slow but cheap transport especially suited for bulk cargo and containers. History As early as 20,000 BC, people started fishing in rivers and lakes using rafts and dugouts. Roman sources dated 50 BC mention extensive transportation of goods and people on the river Rhine. Upstream, boats were usually powered by sails or oars. In the Middle Ages, towpaths were built along most waterways to allow working animals or people to pull riverboats. In the 19th century, steamboats became common. The most famous riverboats were on the rivers of the midwestern and central southern United States, on the Mississippi, Ohio and Missouri rivers in the early 19th century. Out west, riverboats were common transportation on the Colorado, Columbia, and Sacramento rivers. These American riverboats were designed to draw very little water, and in fact it was commonly said that they could "navigate on a heavy dew". Australia has a history of riverboats. Australia's biggest river, the Murray, has an inland port called Echuca. Many large riverboats once worked on the Murray, but lower water levels now prevent them from operating. The Kalgan River in Western Australia has had two main riverboats: the Silver Star, which from 1918 to 1935 would lower her funnel to get under the low bridge; and today the Kalgan Queen, which takes tourists up the river to taste the local wines, lowering her roof to get under the same bridge. It is these early steam-driven river craft that typically come to mind when "steamboat" is mentioned, as these were powered by burning wood, with iron boilers drafted by a pair of tall smokestacks belching smoke and cinders, and twin double-acting pistons driving a large paddlewheel at the stern, churning foam. This type of propulsion was an advantage, as a rear paddlewheel operates in an area clear of snags, is easily repaired, and is not likely to suffer damage in a grounding.
By burning wood, the boat could consume fuel provided by woodcutters along the shore of the river. These early boats carried a brow (a short bridge) on the bow, so they could head in to an unimproved shore to transfer cargo and passengers. Modern riverboats are generally screw (propeller)-driven, with pairs of diesel engines of several thousand horsepower. The standard reference for the development of the steamboat is Steamboats on Western Rivers: An Economic and Technological History by Louis C. Hunter (1949). Terrace, British Columbia, Canada, celebrates "Riverboat Days" each summer. The Skeena River passes through Terrace and played a crucial role during the age of the steamboat. The first steam-powered vessel to enter the Skeena was the Union in 1864. In 1866 the Mumford attempted to ascend the river but was only able to reach the Kitsumkalum River. It was not until 1891 that the Hudson's Bay Company sternwheeler Caledonia successfully negotiated the Kitselas Canyon and reached Hazelton. A number of other steamers were built around the turn of the century, in part due to the growing fish industry and the gold rush. The WT Preston, a museum ship, was once a specialised river dredge, also called a "snagboat". Modern riverboats Luxury tourist transport Some large riverboats are comparable in accommodation, food service, and entertainment to a modern oceanic cruise ship. Tourist boats provide a scenic and relaxing trip through the segment of river they operate in. On the Yangtze River, employees typically have double duties, serving both as waitstaff and as evening costumed dancers. Smaller luxury craft (without entertainment) operate on European waterways, both rivers and canals, with some providing bicycle and van side trips to smaller villages. High-speed passenger transport High-speed boats had a special advantage in some operations on the free-running Yangtze. In several locations within the Three Gorges, one-way travel was enforced through fast narrows. While less maneuverable and deeper-draft vessels were obliged to wait for clearance, these high-speed boats were free to zip past waiting traffic by running in the shallows. Local and low-cost passenger transport Smaller riverboats are used in urban and suburban areas for sightseeing and public transport. Sightseeing boats can be found in Amsterdam, Paris, and other tourist cities where historical monuments are located near water. The concept of local waterborne public transport is known as the water taxi in English-speaking countries, the vaporetto in Venice, and the water or river tramway in the former Soviet Union and Poland (although sightseeing boats can be called water tramways too). Local waterborne public transport is similar to a ferry service. Transport craft of this kind are used for short-distance carriage of passengers between villages and small cities along the Yangtze, while larger craft are used for low-cost carriage over longer distances, without the fancy food or shows seen on the tourist riverboats. In some cases, the traveller must provide their own food. Goods transport Multimodal As the major rivers in China run mostly east-west, most rail and road transport runs north-south. As roads along the rivers are inadequate for heavy truck transport and in some cases extremely dangerous, drive-on/drive-off ramp barges are used to transport trucks. In many cases the trucks transported are new and are being delivered to customers or dealers.
In a detail perhaps unique to China, the new trucks observed traveling upstream were all blue, while the new trucks traveling downstream were all white. Bulk cargo Low-value goods are transported on rivers and canals worldwide, since slow-speed barge traffic offers the lowest possible cost per ton-mile, and the capital cost per ton carried is also quite low compared to other modes of transport.
Technology
Maritime transport
null
18697521
https://en.wikipedia.org/wiki/Toucan
Toucan
Toucans are Neotropical birds in the family Ramphastidae. The Ramphastidae are most closely related to the toucan barbets. They are brightly marked and have large, often colorful bills. The family includes five genera and over 40 different species. Toucans are arboreal and typically lay two to four white eggs in their nests. They make their nests in tree hollows and holes excavated by other animals such as woodpeckers—the toucan bill has very limited use as an excavation tool. When the eggs hatch, the young emerge completely naked, without any down. Toucans are resident breeders and do not migrate. Toucans are usually found in pairs or small flocks. They sometimes fence with their bills and wrestle, which scientists hypothesize they do to establish dominance hierarchies. In Africa and Asia, hornbills occupy the toucans' ecological niche, an example of convergent evolution. Taxonomy and systematics The name of this bird group is derived from the Tupi word tukana or the Guaraní word tukã, via Portuguese. The family includes toucans, aracaris and toucanets; more distant relatives include various families of barbets and woodpeckers in the suborder Pici. The phylogenetic relationships between the toucans and the eight other families in the order Piciformes have been resolved; the number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC). Description Toucans range in size from the lettered aracari (Pteroglossus inscriptus), the smallest member of the family, to the toco toucan (Ramphastos toco), the largest. Their bodies are short (of comparable size to a crow's) and compact. The tail is rounded and varies in length, from half the length to the whole length of the body. The neck is short and thick. The wings are small, as toucans are forest-dwelling birds that only need to travel short distances, and are often of about the same span as the bill-tip-to-tail-tip measurement of the bird. The legs of the toucan are strong and rather short. Their toes are arranged in pairs with the first and fourth toes turned backward. The majority of toucans do not show any sexual dimorphism in their coloration, the genus Selenidera being the most notable exception to this rule (hence their common name, "dichromatic toucanets"). However, the bills of female toucans are usually shorter, deeper and sometimes straighter, giving more of a "blocky" impression compared to male bills. The feathers in the genus containing the largest toucans are generally black, with touches of white, yellow, and scarlet. The underparts of the araçaris (smaller toucans) are yellow, crossed by one or more black or red bands. The toucanets have mostly green plumage with blue markings. The colorful and large bill, which in some large species measures more than half the length of the body, is the hallmark of toucans. Despite its size, the toucan's bill is very light, being composed of bone struts filled with spongy keratin tissue, which takes on the structure of a biofoam. The bill has forward-facing serrations resembling teeth, which historically led naturalists to believe that toucans captured fish and were primarily carnivorous; today it is known that they eat mostly fruit. Researchers have discovered that the large bill of the toucan is a highly efficient thermoregulation system, though its size may still be advantageous in other ways.
It does aid in their feeding behavior (as they sit in one spot and reach for all fruit in range, thereby reducing energy expenditure), and it has also been theorized that the bill may intimidate smaller birds, so that the toucan may plunder nests undisturbed (see Diet below). The beak allows the bird to reach deep into tree-holes to access food unavailable to other birds, and also to ransack suspended nests built by smaller birds. A toucan's tongue is long, narrow, grey, and singularly frayed on each side, adding to its sensitivity as a tasting organ. A structural complex probably unique to toucans involves the modification of several tail vertebrae. The rear three vertebrae are fused and attached to the spine by a ball and socket joint. Because of this, toucans may snap their tail forward until it touches the head. This is the posture in which they sleep, often appearing simply as a ball of feathers, with the tip of the tail sticking out over the head. Distribution and habitat Toucans are native to the Neotropics, from southern Mexico, through Central America, into South America south to northern Argentina. They mostly live in the lowland tropics, but the mountain species from the genus Andigena reach temperate climates at high altitudes in the Andes and can be found up to the tree line. For the most part the toucans are forest species, restricted to primary forests. They will enter secondary forests to forage, but are limited to forests with large old trees that have holes large enough to breed in. Toucans are poor dispersers, particularly across water, and have not reached the West Indies. The only non-forest living toucan is the toco toucan, which is found in savannah with forest patches and open woodlands. Behaviour and ecology Toucans are highly social, and most species occur in groups, sometimes of 20 or more birds, for most of the time. Pairs may retire from the groups during the breeding season, then return with their offspring after the breeding season. Larger groups may form during irruptions, migration or around a particularly large fruiting tree. Toucans often spend time sparring with their bills, tag-chasing, and calling during the long time it takes for fruit to digest. These behaviours may be related to maintenance of the pair bond or establishing dominance hierarchies, but the time required to digest fruit, which can be up to 75 minutes and during which the toucan cannot feed, provides ample opportunity for socializing. Diet Toucans are primarily frugivorous (fruit eating), but are opportunistically omnivorous and will take prey such as insects, smaller birds, and small lizards. Captive toucans have been reported to hunt insects actively in their cages, and it is possible to keep toucans on an insect-only diet. They also plunder nests of smaller birds, taking eggs and nestlings. This probably provides a crucial addition of protein to their diet. Indeed, besides being systematically predatory as well as frugivorous, like many omnivorous birds they particularly prefer animal food for feeding their chicks. However, in their range, toucans are the dominant frugivores, and as such play an extremely important ecological role as vectors for seed dispersal of fruiting trees. Breeding behaviour Toucans nest in cavities in trees, and the presence of suitable trees is a habitat prerequisite for toucans. For the most part toucans do not excavate nesting cavities, although some green toucanets do. Calls Toucans make a variety of sounds.
The very name of the bird (from Tupi) refers to its predominant frog-like croaking call, but toucans also make barking and growling sounds. They also use their bills to make tapping and clattering sounds. Mountain toucans are known for donkey-like braying. Relationship with humans The toucans are, due to their unique appearance, among the most popular and well-known birds in the world. Across their native range they were hunted for food and also kept as pets, and their plumage and bills were used for decorations. In some places anyone who discovers a nest is deemed its owner and is entitled to sell the birds within. In the western world they were first popularised by John Gould, who devoted two editions of a detailed monograph to the family. The constellation Tucana, containing most of the Small Magellanic Cloud, is named after the toucan. The family has been used prominently in advertising. During the 1930s and 1940s Guinness beer advertising featured a toucan, as the black and white appearance of the bird mirrored the stout. A cartoon toucan, Toucan Sam, has been used as the mascot of Froot Loops breakfast cereal since 1963, and a toucan is the mascot of the Brazilian Social Democracy Party; its party members are called tucanos for this reason. Toucans have also been used in popular media. They have been used as the principal characters in Toucan Tecs, a 1992 UK television cartoon about two detectives named Zippi and Zac. In Dora the Explorer, the character Señor Tucán is a Spanish-speaking toucan who occasionally gives Dora and her friends advice. Tuca, the anthropomorphic title character of the 2019 show Tuca & Bertie, is a toucan and the companion of the song thrush Bertie. In the 2016 Nintendo 3DS game Pokémon Sun and Moon, the Pokémon Toucannon and its previous evolutions were modeled after the toco toucan.
Biology and health sciences
Piciformes
null
18698592
https://en.wikipedia.org/wiki/Haloform%20reaction
Haloform reaction
In chemistry, the haloform reaction (also referred to as the Lieben haloform reaction) is a chemical reaction in which a haloform (CHX3, where X is a halogen) is produced by the exhaustive halogenation of an acetyl group (RCOCH3, where R can be either a hydrogen atom, an alkyl or an aryl group), in the presence of a base. The reaction can be used to transform acetyl groups into carboxyl groups (COOH) or to produce chloroform (CHCl3), bromoform (CHBr3), or iodoform (CHI3). Note that fluoroform (CHF3) cannot be prepared in this way. Mechanism In the first step, the halogen disproportionates in the presence of hydroxide to give the halide and hypohalite: Br2 + 2 OH− → Br− + BrO− + H2O. If a secondary alcohol is present, it is oxidized to a ketone by the hypohalite. If a methyl ketone is present, it reacts with the hypohalite in a three-step process: 1. Under basic conditions, the ketone undergoes keto-enol tautomerisation, and the enolate undergoes electrophilic attack by the hypohalite (containing a halogen with a formal +1 charge). 2. When the α (alpha) position has been exhaustively halogenated, the molecule reacts with hydroxide, with the trihalomethyl carbanion −CX3 being the leaving group, stabilized by its three electron-withdrawing halogens. 3. In the third step the −CX3 anion abstracts a proton from either the solvent or the carboxylic acid formed in the previous step, and forms the haloform. At least in some cases (chloral hydrate) the reaction may stop and the intermediate product be isolated if conditions are acidic and hypohalite is used. Scope Substrates are broadly limited to methyl ketones and secondary alcohols oxidizable to methyl ketones, such as isopropanol. The only primary alcohol and aldehyde to undergo this reaction are ethanol and acetaldehyde, respectively. 1,3-Diketones such as acetylacetone also undergo this reaction. β-Ketoacids such as acetoacetic acid will also give the test upon heating. Acetyl chloride and acetamide do not undergo this reaction. The halogen used may be chlorine, bromine, iodine or sodium hypochlorite. Fluoroform (CHF3) cannot be prepared by this method, as it would require the presence of the highly unstable hypofluorite ion. However, ketones with the structure RCOCF3 do cleave upon treatment with base to produce fluoroform; this is equivalent to the second and third steps in the process shown above. Applications Laboratory scale This reaction forms the basis of the iodoform test, which was historically used as a chemical test for the presence of a methyl ketone, or a secondary alcohol oxidizable to a methyl ketone. When iodine and sodium hydroxide are used as the reagents, a positive reaction gives iodoform, which is a solid at room temperature and tends to precipitate out of solution, causing a distinctive cloudiness. In organic chemistry, this reaction may be used to convert a terminal methyl ketone into the analogous carboxylic acid. Industrially It was formerly used to produce iodoform, bromoform, and even chloroform industrially. A variant of this reaction is used to manufacture deuterated chloroform, by the reaction of hexachloroacetone with heavy water catalysed by base: (CCl3)2CO + D2O → CDCl3 + CCl3COOD. A further variant uses the decomposition of calcium trichloroacetate in heavy water. As a by-product of water chlorination Water chlorination can result in the formation of haloforms if the water contains suitable reactive impurities (e.g. humic acid). There is a concern that such reactions may lead to the presence of carcinogenic compounds in drinking water. History The haloform reaction is one of the oldest organic reactions known.
In 1822, Georges-Simon Serullas added potassium metal to a solution of iodine in ethanol and water to form potassium formate and iodoform, called in the language of that time hydroiodide of carbon. In 1832, Justus von Liebig reported the reaction of chloral with calcium hydroxide to form chloroform and calcium formate. The reaction was rediscovered by Adolf Lieben in 1870. The iodoform test is also called the Lieben iodoform reaction. A review of the haloform reaction with a history section was published in 1934.
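Summarizing the mechanism described above, the overall stoichiometry for a methyl ketone can be written as a single balanced equation (a standard textbook summary, with R as defined above):

$$\mathrm{RCOCH_3} + 3\,\mathrm{X_2} + 4\,\mathrm{OH^-} \longrightarrow \mathrm{RCOO^-} + \mathrm{CHX_3} + 3\,\mathrm{X^-} + 3\,\mathrm{H_2O}$$

Acidification of the carboxylate then gives the carboxylic acid RCOOH.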
Physical sciences
Organic reactions
Chemistry
3600408
https://en.wikipedia.org/wiki/Volume%20fraction
Volume fraction
In chemistry and fluid mechanics, the volume fraction $\phi_i$ is defined as the volume of a constituent $V_i$ divided by the volume of all constituents of the mixture $V$ prior to mixing: $\phi_i = \frac{V_i}{\sum_j V_j}$. Being dimensionless, its unit is 1; it is expressed as a number, e.g., 0.18. It is the same concept as volume percent (vol%) except that the latter is expressed with a denominator of 100, e.g., 18%. The volume fraction coincides with the volume concentration in ideal solutions, where the volumes of the constituents are additive (the volume of the solution is equal to the sum of the volumes of its ingredients). The sum of all volume fractions of a mixture is equal to 1: $\sum_i \phi_i = 1$. The volume fraction (percentage by volume, vol%) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others. Volume concentration and volume percent Volume percent is the concentration of a certain solute, measured by volume, in a solution. It has as a denominator the volume of the mixture itself, as usual for expressions of concentration, rather than the total of all the individual components' volumes prior to mixing: $\text{volume percent} = \frac{\text{volume of solute}}{\text{volume of solution}} \times 100\%$. Volume percent is usually used when the solution is made by mixing two fluids, such as liquids or gases. However, percentages are only additive for ideal gases. The percentage by volume (vol%, % v/v) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others. In the case of a mixture of ethanol and water, which are miscible in all proportions, the designation of solvent and solute is arbitrary. The volume of such a mixture is slightly less than the sum of the volumes of the components. Thus, by the above definition, the term "40% alcohol by volume" refers to a mixture of 40 volume units of ethanol with enough water to make a final volume of 100 units, rather than a mixture of 40 units of ethanol with 60 units of water. The "enough water" is actually slightly more than 60 volume units, since the water-ethanol mixture loses volume due to intermolecular attraction. Relation to mass fraction Volume fraction $\phi_i$ is related to mass fraction $w_i$ by $\phi_i = w_i\,\frac{\rho}{\rho_i}$, where $\rho_i$ is the constituent density and $\rho$ is the mixture density.
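The ethanol-water example above can be made concrete with a short calculation; this is a minimal sketch in which the 63-unit water volume and 100-unit final volume are illustrative assumptions standing in for the real contraction, not measured data:

```python
ethanol_vol = 40.0   # volume units of pure ethanol, prior to mixing
water_vol = 63.0     # volume units of pure water, prior to mixing (assumed)
mixture_vol = 100.0  # assumed final volume after contraction on mixing

# Volume fraction uses the component volumes *prior to mixing*:
volume_fraction = ethanol_vol / (ethanol_vol + water_vol)

# Volume percent ("% alcohol by volume") uses the final solution volume:
volume_percent = 100.0 * ethanol_vol / mixture_vol

print(f"volume fraction: {volume_fraction:.3f}")  # 0.388
print(f"volume percent:  {volume_percent:.1f}%")  # 40.0%
```

The two figures differ precisely because the mixture volume is not the sum of the component volumes.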
Physical sciences
Ratio
Basics and measurement
132729
https://en.wikipedia.org/wiki/Tuple
Tuple
In mathematics, a tuple is a finite sequence or ordered list of numbers or, more generally, mathematical objects, which are called the elements of the tuple. An $n$-tuple is a tuple of $n$ elements, where $n$ is a non-negative integer. There is only one 0-tuple, called the empty tuple. A 1-tuple and a 2-tuple are commonly called a singleton and an ordered pair, respectively. The term "infinite tuple" is occasionally used for "infinite sequences". Tuples are usually written by listing the elements within parentheses "( )" and separated by commas; for example, $(2, 7, 4, 1, 7)$ denotes a 5-tuple. Other types of brackets are sometimes used, although they may have a different meaning. An $n$-tuple can be formally defined as the image of a function that has the set of the $n$ first natural numbers as its domain. Tuples may be also defined from ordered pairs by a recurrence starting from ordered pairs; indeed, an $n$-tuple can be identified with the ordered pair of its first $(n-1)$ elements and its $n$th element. In computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples. Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy. Etymology The term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., $n$‑tuple, ..., where the prefixes are taken from the Latin names of the numerals. The unique 0-tuple is called the null tuple or empty tuple. A 1‑tuple is called a single (or singleton), a 2‑tuple is called an ordered pair or couple, and a 3‑tuple is called a triple (or triplet). The number $n$ can be any nonnegative integer. For example, a complex number can be represented as a 2‑tuple of reals, a quaternion can be represented as a 4‑tuple, an octonion can be represented as an 8‑tuple, and a sedenion can be represented as a 16‑tuple. Although these uses treat ‑uple as the suffix, the original suffix was ‑ple as in "triple" (three-fold) or "decuple" (ten‑fold). This originates from medieval Latin plus (meaning "more") related to Greek ‑πλοῦς, which replaced the classical and late antique ‑plex (meaning "folded"), as in "duplex". Properties The general rule for the identity of two $n$-tuples is $(a_1, a_2, \ldots, a_n) = (b_1, b_2, \ldots, b_n)$ if and only if $a_1 = b_1,\ a_2 = b_2,\ \ldots,\ a_n = b_n$. Thus a tuple has properties that distinguish it from a set: A tuple may contain multiple instances of the same element, so tuple $(1, 2, 2, 3) \neq (1, 2, 3)$; but set $\{1, 2, 2, 3\} = \{1, 2, 3\}$. Tuple elements are ordered: tuple $(1, 2, 3) \neq (3, 2, 1)$, but set $\{1, 2, 3\} = \{3, 2, 1\}$. A tuple has a finite number of elements, while a set or a multiset may have an infinite number of elements. Definitions There are several definitions of tuples that give them the properties described in the previous section. Tuples as functions The $0$-tuple may be identified as the empty function. For $n \geq 1$ the $n$-tuple $(a_1, \ldots, a_n)$ may be identified with the (surjective) function $F \colon \{1, \ldots, n\} \to \{a_1, \ldots, a_n\}$, with domain $\{1, \ldots, n\}$ and with codomain $\{a_1, \ldots, a_n\}$, that is defined at $i$ by $F(i) = a_i$. That is, $F$ is the function defined by $F(1) = a_1, \ldots, F(n) = a_n$, in which case the equality $(a_1, a_2, \ldots, a_n) = (F(1), F(2), \ldots, F(n))$ necessarily holds.
Tuples as sets of ordered pairs Functions are commonly identified with their graphs, which is a certain set of ordered pairs. Indeed, many authors use graphs as the definition of a function. Using this definition of "function", the above function $F$ can be defined as: $F = \{(1, a_1), (2, a_2), \ldots, (n, a_n)\}$. Tuples as nested ordered pairs Another way of modeling tuples in set theory is as nested ordered pairs. This approach assumes that the notion of ordered pair has already been defined. The 0-tuple (i.e. the empty tuple) is represented by the empty set $\emptyset$. An $n$-tuple, with $n > 0$, can be defined as an ordered pair of its first entry and an $(n-1)$-tuple (which contains the remaining entries when $n > 1$): $(a_1, a_2, \ldots, a_n) = (a_1, (a_2, \ldots, a_n))$. This definition can be applied recursively to the $(n-1)$-tuple: $(a_1, a_2, \ldots, a_n) = (a_1, (a_2, (a_3, (\ldots, (a_n, \emptyset) \ldots))))$. Thus, for example: $(1, 2, 3) = (1, (2, (3, \emptyset)))$ and $(1, 2, 3, 4) = (1, (2, (3, (4, \emptyset))))$. A variant of this definition starts "peeling off" elements from the other end: The 0-tuple is the empty set $\emptyset$. For $n > 0$: $(a_1, a_2, \ldots, a_n) = ((a_1, a_2, \ldots, a_{n-1}), a_n)$. This definition can be applied recursively: $(a_1, a_2, \ldots, a_n) = ((\ldots((\emptyset, a_1), a_2), \ldots), a_n)$. Thus, for example: $(1, 2, 3) = (((\emptyset, 1), 2), 3)$. Tuples as nested sets Using Kuratowski's representation for an ordered pair, the second definition above can be reformulated in terms of pure set theory: The 0-tuple (i.e. the empty tuple) is represented by the empty set $\emptyset$; Let $x$ be an $n$-tuple $(a_1, a_2, \ldots, a_n)$, and let $x \rightarrow b \equiv \{\{x\}, \{x, b\}\}$. Then, $(a_1, a_2, \ldots, a_n, b) \equiv x \rightarrow b$. (The right arrow, $\rightarrow$, could be read as "adjoined with".) In this formulation: $() = \emptyset$, $(1) = () \rightarrow 1 = \{\{()\}, \{(), 1\}\}$, $(1, 2) = (1) \rightarrow 2 = \{\{(1)\}, \{(1), 2\}\}$, and so on. $n$-tuples of $m$-sets In discrete mathematics, especially combinatorics and finite probability theory, $n$-tuples arise in the context of various counting problems and are treated more informally as ordered lists of length $n$. $n$-tuples whose entries come from a set of $m$ elements are also called arrangements with repetition, permutations of a multiset and, in some non-English literature, variations with repetition. The number of $n$-tuples of an $m$-set is $m^n$. This follows from the combinatorial rule of product. If $S$ is a finite set of cardinality $m$, this number is the cardinality of the $n$-fold Cartesian power $S \times S \times \cdots \times S$. Tuples are elements of this product set. Type theory In type theory, commonly used in programming languages, a tuple has a product type; this fixes not only the length, but also the underlying types of each component. Formally: $(x_1, x_2, \ldots, x_n) : \mathsf{T}_1 \times \mathsf{T}_2 \times \cdots \times \mathsf{T}_n$, and the projections are term constructors: $\pi_1(x) : \mathsf{T}_1,\ \pi_2(x) : \mathsf{T}_2,\ \ldots,\ \pi_n(x) : \mathsf{T}_n$. The tuple with labeled elements used in the relational model has a record type. Both of these types can be defined as simple extensions of the simply typed lambda calculus. The notion of a tuple in type theory and that in set theory are related in the following way: If we consider the natural model of a type theory, and use the Scott brackets to indicate the semantic interpretation, then the model consists of some sets $S_1, S_2, \ldots, S_n$ (note: the use of italics here that distinguishes sets from types) such that: $[\![\mathsf{T}_1]\!] = S_1,\ \ldots,\ [\![\mathsf{T}_n]\!] = S_n$, and the interpretation of the basic terms is: $[\![x_1]\!] \in [\![\mathsf{T}_1]\!],\ \ldots,\ [\![x_n]\!] \in [\![\mathsf{T}_n]\!]$. The $n$-tuple of type theory has the natural interpretation as an $n$-tuple of set theory: $[\![(x_1, x_2, \ldots, x_n)]\!] = (\,[\![x_1]\!], [\![x_2]\!], \ldots, [\![x_n]\!]\,)$. The unit type has as semantic interpretation the 0-tuple.
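The nested-ordered-pair definition and the counting rule above can be illustrated with a minimal sketch (Python is used purely for illustration; the function name is ours, not from the literature, and the empty Python tuple stands in for the empty set):

```python
from itertools import product

def as_nested_pairs(*elements):
    """Encode (a1, a2, ..., an) as (a1, (a2, (..., (an, empty)...)))."""
    if not elements:
        return ()  # the 0-tuple
    head, *rest = elements
    return (head, as_nested_pairs(*rest))

print(as_nested_pairs(1, 2, 3))  # (1, (2, (3, ())))

# Identity is componentwise and order matters for tuples, unlike sets:
assert (1, 2, 3) != (3, 2, 1)
assert {1, 2, 2, 3} == {1, 2, 3}

# Counting: the number of n-tuples over an m-set is m**n.
m, n = 2, 3
assert len(list(product(range(m), repeat=n))) == m ** n
```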
Mathematics
Set theory
null
132784
https://en.wikipedia.org/wiki/Cast%20iron
Cast iron
Cast iron is a class of iron–carbon alloys with a carbon content of more than 2% and silicon content around 1–3%. Its usefulness derives from its relatively low melting temperature. The alloying elements determine the form in which its carbon appears: white cast iron has its carbon combined into an iron carbide named cementite, which is very hard, but brittle, as it allows cracks to pass straight through; grey cast iron has graphite flakes which deflect a passing crack and initiate countless new cracks as the material breaks, and ductile cast iron has spherical graphite "nodules" which stop the crack from further progressing. Carbon (C), ranging from 1.8 to 4 wt%, and silicon (Si), 1–3 wt%, are the main alloying elements of cast iron. Iron alloys with lower carbon content are known as steel. Cast iron tends to be brittle, except for malleable cast irons. With its relatively low melting point, good fluidity, castability, excellent machinability, resistance to deformation and wear resistance, cast irons have become an engineering material with a wide range of applications and are used in pipes, machines and automotive industry parts, such as cylinder heads, cylinder blocks and gearbox cases. Some alloys are resistant to damage by oxidation. In general, cast iron is notoriously difficult to weld. The earliest cast-iron artifacts date to the 5th century BC, and were discovered by archaeologists in what is now Jiangsu, China. Cast iron was used in ancient China to mass-produce weaponry for warfare, as well as agriculture and architecture. During the 15th century AD, cast iron became utilized for cannons and shot in Burgundy, France, and in England during the Reformation. The amounts of cast iron used for cannons required large-scale production. The first cast-iron bridge was built during the 1770s by Abraham Darby III, and is known as the Iron Bridge in Shropshire, England. Cast iron was also used in the construction of buildings. Production Cast iron is made from pig iron, which is the product of melting iron ore in a blast furnace. Cast iron can be made directly from the molten pig iron or by re-melting pig iron, often along with substantial quantities of iron, steel, limestone, carbon (coke) and taking various steps to remove undesirable contaminants. Phosphorus and sulfur may be burnt out of the molten iron, but this also burns out the carbon, which must be replaced. Depending on the application, carbon and silicon content are adjusted to the desired levels, which may be anywhere from 2–3.5% and 1–3%, respectively. If desired, other elements are then added to the melt before the final form is produced by casting. Cast iron is sometimes melted in a special type of blast furnace known as a cupola, but in modern applications, it is more often melted in electric induction furnaces or electric arc furnaces. After melting is complete, the molten cast iron is poured into a holding furnace or ladle. Types Alloying elements Cast iron's properties are changed by adding various alloying elements, or alloyants. Next to carbon, silicon is the most important alloyant because it forces carbon out of solution. A low percentage of silicon allows carbon to remain in solution, forming iron carbide and producing white cast iron. A high percentage of silicon forces carbon out of solution, forming graphite and producing grey cast iron. Other alloying agents, manganese, chromium, molybdenum, titanium, and vanadium counteract silicon, and promote the retention of carbon and the formation of those carbides. 
Nickel and copper increase strength and machinability, but do not change the amount of graphite formed. Carbon as graphite produces a softer iron, reduces shrinkage, lowers strength, and decreases density. Sulfur, largely a contaminant when present, forms iron sulfide, which prevents the formation of graphite and increases hardness. Sulfur makes molten cast iron viscous, which causes defects. To counter the effects of sulfur, manganese is added, because the two form into manganese sulfide instead of iron sulfide. The manganese sulfide is lighter than the melt, so it tends to float out of the melt and into the slag. The amount of manganese required to neutralize sulfur is 1.7 × sulfur content + 0.3%. If more than this amount of manganese is added, then manganese carbide forms, which increases hardness and chilling, except in grey iron, where up to 1% of manganese increases strength and density. Nickel is one of the most common alloying elements, because it refines the pearlite and graphite structures, improves toughness, and evens out hardness differences between section thicknesses. Chromium is added in small amounts to reduce free graphite, produce chill, and because it is a powerful carbide stabilizer; nickel is often added in conjunction. A small amount of tin can be added as a substitute for 0.5% chromium. Copper is added in the ladle or in the furnace, on the order of 0.5–2.5%, to decrease chill, refine graphite, and increase fluidity. Molybdenum is added on the order of 0.3–1% to increase chill and refine the graphite and pearlite structure; it is often added in conjunction with nickel, copper, and chromium to form high strength irons. Titanium is added as a degasser and deoxidizer, but it also increases fluidity. Vanadium at 0.15–0.5% is added to cast iron to stabilize cementite, increase hardness, and increase resistance to wear and heat. Zirconium at 0.1–0.3% helps to form graphite, deoxidize, and increase fluidity. In malleable iron melts, bismuth is added at 0.002–0.01% to increase how much silicon can be added. In white iron, boron is added to aid in the production of malleable iron; it also reduces the coarsening effect of bismuth. Grey cast iron Grey cast iron is characterised by its graphitic microstructure, which causes fractures of the material to have a grey appearance. It is the most commonly used cast iron and the most widely used cast material based on weight. Most cast irons have a chemical composition of 2.5–4.0% carbon, 1–3% silicon, and the remainder iron. Grey cast iron has less tensile strength and shock resistance than steel, but its compressive strength is comparable to low- and medium-carbon steel. These mechanical properties are controlled by the size and shape of the graphite flakes present in the microstructure and can be characterised according to the guidelines given by the ASTM. White cast iron White cast iron displays white fractured surfaces due to the presence of an iron carbide precipitate called cementite. With a lower silicon content (graphitizing agent) and faster cooling rate, the carbon in white cast iron precipitates out of the melt as the metastable phase cementite, Fe3C, rather than graphite. The cementite which precipitates from the melt forms as relatively large particles. As the iron carbide precipitates out, it withdraws carbon from the original melt, moving the mixture toward one that is closer to eutectic, and the remaining phase is the lower iron-carbon austenite (which on cooling might transform to martensite). 
These eutectic carbides are much too large to provide the benefit of what is called precipitation hardening (as in some steels, where much smaller cementite precipitates might inhibit plastic deformation by impeding the movement of dislocations through the pure iron ferrite matrix). Rather, they increase the bulk hardness of the cast iron simply by virtue of their own very high hardness and their substantial volume fraction, such that the bulk hardness can be approximated by a rule of mixtures. In any case, they offer hardness at the expense of toughness. Since carbide makes up a large fraction of the material, white cast iron could reasonably be classified as a cermet. White iron is too brittle for use in many structural components, but with good hardness and abrasion resistance and relatively low cost, it finds use in such applications as the wear surfaces (impeller and volute) of slurry pumps, shell liners and lifter bars in ball mills and autogenous grinding mills, and balls and rings in coal pulverisers. It is difficult to cool thick castings fast enough to solidify the melt as white cast iron all the way through. However, rapid cooling can be used to solidify a shell of white cast iron, after which the remainder cools more slowly to form a core of grey cast iron. The resulting casting, called a chilled casting, has the benefits of a hard surface with a somewhat tougher interior. High-chromium white iron alloys allow massive castings (for example, a 10-tonne impeller) to be sand cast, as the chromium reduces the cooling rate required to produce carbides through the greater thicknesses of material. Chromium also produces carbides with impressive abrasion resistance. These high-chromium alloys owe their superior hardness to the presence of chromium carbides. The main form of these carbides is the eutectic or primary M7C3 carbide, where "M" represents iron or chromium and can vary depending on the alloy's composition. The eutectic carbides form as bundles of hollow hexagonal rods and grow perpendicular to the hexagonal basal plane. The hardness of these carbides is within the range of 1500–1800 HV. Malleable cast iron Malleable iron starts as a white iron casting that is then heat treated for a day or two at high temperature and then cooled over a day or two. As a result, the carbon in the iron carbide transforms into graphite plus ferrite. The slow process allows the surface tension to form the graphite into spheroidal particles rather than flakes. Due to their lower aspect ratio, the spheroids are relatively short and far from one another, and have a lower cross section vis-a-vis a propagating crack or phonon. They also have blunt boundaries, as opposed to flakes, which alleviates the stress concentration problems found in grey cast iron. In general, the properties of malleable cast iron are more like those of mild steel. There is a limit to how large a part can be cast in malleable iron, as it is made from white cast iron. Ductile cast iron Developed in 1948, nodular or ductile cast iron has its graphite in the form of very tiny nodules, with the graphite in the form of concentric layers forming the nodules. As a result, the properties of ductile cast iron are those of a spongy steel without the stress concentration effects that flakes of graphite would produce.
The carbon percentage present is 3–4% and the percentage of silicon is 1.8–2.8%. Tiny amounts of magnesium (0.02 to 0.1%) and cerium (only 0.02 to 0.04%) added to these alloys slow the growth of graphite precipitates by bonding to the edges of the graphite planes. Along with careful control of other elements and timing, this allows the carbon to separate as spheroidal particles as the material solidifies. The properties are similar to malleable iron, but parts can be cast with larger sections. History Cast iron and wrought iron can be produced unintentionally when smelting copper using iron ore as a flux. The earliest cast-iron artifacts date to the 5th century BC, and were discovered by archaeologists in what is now modern Luhe County, Jiangsu in China during the Warring States period. This is based on an analysis of the artifacts' microstructures. Because cast iron is comparatively brittle, it is not suitable for purposes where a sharp edge or flexibility is required. It is strong under compression, but not under tension. Cast iron was invented in China in the 5th century BC and poured into molds to make ploughshares and pots as well as weapons and pagodas. Although steel was more desirable, cast iron was cheaper and thus was more commonly used for implements in ancient China, while wrought iron or steel was used for weapons. The Chinese developed a method of annealing cast iron by keeping hot castings in an oxidizing atmosphere for a week or longer in order to burn off some carbon near the surface and so keep the surface layer from being too brittle. Deep within the Congo region of the Central African forest, blacksmiths invented sophisticated furnaces capable of high temperatures over 1000 years ago. There are countless examples of welding, soldering, and cast iron created in crucibles and poured into molds. These techniques were employed for composite tools and weapons with cast iron or steel blades and soft, flexible wrought iron interiors. Iron wire was also produced. Numerous testimonies were made by early European missionaries of the Luba people pouring cast iron into molds to make hoes. These technological innovations were accomplished without the invention of the blast furnace, which was the prerequisite for the deployment of such innovations in Europe and Asia. The technology of cast iron was transferred to the West from China. Al-Qazvini in the 13th century and other travellers subsequently noted an iron industry in the Alburz Mountains to the south of the Caspian Sea. This is close to the silk route, so the derivation of cast-iron technology from China is conceivable. Upon its introduction to the West in the 15th century, it was used for cannon and shot. Henry VIII (reigned 1509–1547) initiated the casting of cannon in England. Soon, English iron workers using blast furnaces developed the technique of producing cast-iron cannons, which, while heavier than the prevailing bronze cannons, were much cheaper and enabled England to arm her navy better. Cast-iron pots were made at many English blast furnaces at the time. In 1707, Abraham Darby patented a new method of making pots (and kettles) thinner and hence cheaper than those made by traditional methods. This meant that his Coalbrookdale furnaces became dominant as suppliers of pots, an activity in which they were joined in the 1720s and 1730s by a small number of other coke-fired blast furnaces.
Application of the steam engine to power blast bellows (indirectly by pumping water to a waterwheel) in Britain, beginning in 1743 and increasing in the 1750s, was a key factor in increasing the production of cast iron, which surged in the following decades. In addition to overcoming the limitation on water power, the steam-pumped-water powered blast gave higher furnace temperatures which allowed the use of higher lime ratios, enabling the conversion from charcoal (supplies of wood for which were inadequate) to coke. The ironmasters of the Weald continued producing cast irons until the 1760s, and armament was one of the main uses of irons after the Restoration. Cast-iron bridges The use of cast iron for structural purposes began in the late 1770s, when Abraham Darby III built The Iron Bridge, although short beams had already been used, such as in the blast furnaces at Coalbrookdale. Other inventions followed, including one patented by Thomas Paine. Cast-iron bridges became commonplace as the Industrial Revolution gathered pace. Thomas Telford adopted the material for his bridge upstream at Buildwas, and then for Longdon-on-Tern Aqueduct, a canal trough aqueduct at Longdon-on-Tern on the Shrewsbury Canal. It was followed by the Chirk Aqueduct and the Pontcysyllte Aqueduct, both of which remain in use following the recent restorations. The best way of using cast iron for bridge construction was by using arches, so that all the material is in compression. Cast iron, again like masonry, is very strong in compression. Wrought iron, like most other kinds of iron and indeed like most metals in general, is strong in tension, and also tough – resistant to fracturing. The relationship between wrought iron and cast iron, for structural purposes, may be thought of as analogous to the relationship between wood and stone. Cast-iron beam bridges were used widely by the early railways, such as the Water Street Bridge in 1830 at the Manchester terminus of the Liverpool and Manchester Railway, but problems with its use became all too apparent when a new bridge carrying the Chester and Holyhead Railway across the River Dee in Chester collapsed killing five people in May 1847, less than a year after it was opened. The Dee bridge disaster was caused by excessive loading at the centre of the beam by a passing train, and many similar bridges had to be demolished and rebuilt, often in wrought iron. The bridge had been badly designed, being trussed with wrought iron straps, which were wrongly thought to reinforce the structure. The centres of the beams were put into bending, with the lower edge in tension, where cast iron, like masonry, is very weak. Nevertheless, cast iron continued to be used in inappropriate structural ways, until the Tay Rail Bridge disaster of 1879 cast serious doubt on the use of the material. Crucial lugs for holding tie bars and struts in the Tay Bridge had been cast integral with the columns, and they failed in the early stages of the accident. In addition, the bolt holes were also cast and not drilled. Thus, because of casting's draft angle, the tension from the tie bars was placed on the hole's edge rather than being spread over the length of the hole. The replacement bridge was built in wrought iron and steel. Further bridge collapses occurred, however, culminating in the Norwood Junction rail accident of 1891. 
Thousands of cast-iron rail underbridges were eventually replaced by steel equivalents by 1900, owing to widespread concern about cast-iron underbridges on the rail network in Britain. Buildings Cast-iron columns, pioneered in mill buildings, enabled architects to build multi-storey buildings without the enormously thick walls required for masonry buildings of any height. They also opened up floor spaces in factories, and sight lines in churches and auditoriums. By the mid-19th century, cast-iron columns were common in warehouse and industrial buildings, combined with wrought- or cast-iron beams, eventually leading to the development of steel-framed skyscrapers. Cast iron was also sometimes used for decorative facades, especially in the United States, and the SoHo district of New York has numerous examples. It was also used occasionally for complete prefabricated buildings, such as the historic Iron Building in Watervliet, New York. Textile mills Another important use was in textile mills. The air in the mills contained flammable fibres from the cotton, hemp, or wool being spun. As a result, textile mills had an alarming propensity to burn down. The solution was to build them completely of non-combustible materials, and it was found convenient to provide the building with an iron frame, largely of cast iron, replacing flammable wood. The first such building was at Ditherington in Shrewsbury, Shropshire. Many other warehouses were built using cast-iron columns and beams, although faulty designs, flawed beams or overloading sometimes caused building collapses and structural failures. During the Industrial Revolution, cast iron was also widely used for frames and other fixed parts of machinery, including spinning and later weaving machines in textile mills. Cast iron became widely used, and many towns had foundries producing industrial and agricultural machinery.
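The manganese rule of thumb quoted earlier (required Mn = 1.7 × sulfur content + 0.3%) is simple enough to put in a short sketch; the sample sulfur analysis below is a hypothetical value for illustration, not from the text:

```python
def required_manganese(sulfur_wt_pct: float) -> float:
    """Weight-% manganese needed to tie up sulfur as manganese sulfide."""
    return 1.7 * sulfur_wt_pct + 0.3

sulfur = 0.06  # wt% sulfur in the melt (assumed)
print(f"Mn required: {required_manganese(sulfur):.3f} wt%")  # 0.402 wt%
```

Adding more manganese than this figure forms manganese carbide, with the hardness and chilling effects noted above.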
Physical sciences
Specific alloys
null
133017
https://en.wikipedia.org/wiki/Second%20law%20of%20thermodynamics
Second law of thermodynamics
The second law of thermodynamics is a physical law based on universal empirical observation concerning heat and energy interconversions. A simple statement of the law is that heat always flows spontaneously from hotter to colder regions of matter (or 'downhill' in terms of the temperature gradient). Another statement is: "Not all heat can be converted into work in a cyclic process." The second law of thermodynamics establishes the concept of entropy as a physical property of a thermodynamic system. It predicts whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes. For example, the first law allows the process of a cup falling off a table and breaking on the floor, as well as allowing the reverse process of the cup fragments coming back together and 'jumping' back onto the table, while the second law allows the former and denies the latter. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always tend toward a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time. Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. Statistical mechanics provides a microscopic explanation of the law in terms of probability distributions of the states of large assemblies of atoms or molecules. The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on caloric theory, is Carnot's theorem, formulated by the French scientist Sadi Carnot, who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, but this has been formally delegated to the zeroth law of thermodynamics. Introduction The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system, and expresses its change for a closed system in terms of work and heat. It can be linked to the law of conservation of energy. Conceptually, the first law describes the fundamental principle that systems do not consume or 'use up' energy, that energy is neither created nor destroyed, but is simply converted from one form to another. The second law is concerned with the direction of natural processes. It asserts that a natural process runs only in one sense, and is not reversible. That is, the state of a natural system itself can be reversed, but not without increasing the entropy of the system's surroundings, that is, both the state of the system plus the state of its surroundings cannot be together, fully reversed, without implying the destruction of entropy. For example, when a path for conduction or radiation is made available, heat always flows spontaneously from a hotter to a colder body. 
Such phenomena are accounted for in terms of entropy change. A heat pump can reverse this heat flow, but the reversal process and the original process both cause entropy production, thereby increasing the entropy of the system's surroundings. If an isolated system containing distinct subsystems is held initially in internal thermodynamic equilibrium by internal partitioning by impermeable walls between the subsystems, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium, and its total entropy, $S$, increases. In a reversible or quasi-static, idealized process of transfer of energy as heat to a closed thermodynamic system of interest (which allows the entry or exit of energy – but not transfer of matter), from an auxiliary thermodynamic system, an infinitesimal increment ($\mathrm{d}S$) in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat ($\delta Q$) to the system of interest, divided by the common thermodynamic temperature ($T$) of the system of interest and the auxiliary thermodynamic system: $\mathrm{d}S = \frac{\delta Q}{T}$. Different notations are used for an infinitesimal amount of heat ($\delta$) and infinitesimal change of entropy ($\mathrm{d}$) because entropy is a function of state, while heat, like work, is not. For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the inequality $\mathrm{d}S > \frac{\delta Q}{T_{\text{surr}}}$. This is because a general process for this case (no mass exchange between the system and its surroundings) may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature ($T$) and the temperature of the surroundings ($T_{\text{surr}}$). The equality still applies for pure heat flow (only heat flow, no change in chemical composition and mass), which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry. Introducing a set of internal variables $\xi_j$ to describe the deviation of a thermodynamic system from a chemical equilibrium state in physical equilibrium (with the required well-defined uniform pressure P and temperature T), one can record the equality $T\,\mathrm{d}S = \delta Q + \sum_j \Xi_j\,\delta \xi_j$. The second term represents work of internal variables that can be perturbed by external influences, but the system cannot perform any positive work via internal variables. This statement introduces the impossibility of the reversion of evolution of the thermodynamic system in time and can be considered as a formulation of the second principle of thermodynamics – the formulation, which is, of course, equivalent to the formulation of the principle in terms of entropy. The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body. For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body.
The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body. Various statements of the law The second law of thermodynamics may be expressed in many specific ways, the most prominent classical statements being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent. Carnot's principle The historical origin of the second law of thermodynamics was in Sadi Carnot's theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the caloric theory represented the dominant understanding of the nature of heat, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are: ...wherever there exists a difference of temperature, motive power can be produced. The production of motive power is then due in steam engines not to an actual consumption of caloric, but to its transportation from a warm body to a cold body ... The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of caloric. In modern terms, Carnot's principle may be stated more precisely: The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures. Clausius statement The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. His formulation of the second law, which was published in German in 1854, is known as the Clausius statement: Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other. Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat is transferred from cold to hot, but only when forced by an external agent, the refrigeration system. 
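The upper limit in Carnot's principle above can be written in closed form (a standard result, stated here for concreteness): for reservoirs at absolute temperatures $T_h > T_c$,

$$\eta_{\max} = \frac{W}{Q_h} = 1 - \frac{T_c}{T_h},$$

so, for example, an engine operating between 500 K and 300 K can convert at most 40% of the heat drawn from the hot reservoir into work, whatever the working substance.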
Kelvin statements Lord Kelvin expressed the second law in several wordings. It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature. It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects. Equivalence of the Clausius and the Kelvin statements Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work (the drained heat is fully converted to work) in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine, driving the reversed engine with the work output $W$ of the Kelvin-violating engine. The efficiency of a normal heat engine is $\eta$ and so the efficiency of the reversed heat engine is $1/\eta$. The net and sole effect of the combined pair of engines is to transfer heat $\Delta Q = W\left(\tfrac{1}{\eta} - 1\right)$ from the cooler reservoir to the hotter one, which violates the Clausius statement. This is a consequence of the first law of thermodynamics, as for the total system's energy to remain the same, $Q_h = Q_c + W$, so therefore $Q_c = W\left(\tfrac{1}{\eta} - 1\right)$, where (1) the sign convention of heat is used in which heat entering into (leaving from) an engine is positive (negative) and (2) $\eta = \tfrac{W}{Q_h}$ is obtained by the definition of efficiency of the engine when the engine operation is not reversed. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent. Planck's proposition Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law. It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the production of work and cooling of a heat reservoir. Relation between Kelvin's statement and Planck's proposition It is almost customary in textbooks to speak of the "Kelvin–Planck statement" of the law, as for example in the text by ter Haar and Wergeland. This version, also known as the heat engine statement, of the second law states that It is impossible to devise a cyclically operating device, the sole effect of which is to absorb energy in the form of heat from a single thermal reservoir and to deliver an equivalent amount of work. Planck's statement Max Planck stated the second law as follows. Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged. Rather like Planck's statement is that of George Uhlenbeck and G. W. Ford for irreversible phenomena. ... in an irreversible or spontaneous change from one equilibrium state to another (as for example the equalization of temperature of two bodies A and B, when brought in contact) the entropy always increases. Principle of Carathéodory Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows: In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S.
With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carathéodory's principle that the quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, δQ = T dS. Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium. Planck's principle In 1926, Max Planck wrote an important paper on the basics of thermodynamics. He indicated the principle The internal energy of a closed system is increased by an adiabatic process, throughout the duration of which, the volume of the system remains constant. This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work." Planck wrote: "The production of heat by friction is irreversible." Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above. It is relevant that for a system at constant volume and mole numbers, the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy. A statement that in a sense is complementary to Planck's principle is made by Claus Borgnakke and Richard E. Sonntag. They do not offer it as a full statement of the second law: ... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system. Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Removal of matter from a system can also decrease its entropy. Relating the second law to the definition of temperature The second law has been shown to be equivalent to the internal energy U defined as a convex function of the other extensive properties of the system. That is, when a system is described by stating its internal energy U, an extensive variable, as a function of its entropy S, volume V, and mol number N, i.e. U = U(S, V, N), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy (essentially equivalent to the first T dS equation, for V and N held constant): T = (∂U/∂S)_{V,N}. Second law statements, such as the Clausius inequality, involving radiative fluxes The Clausius inequality, as well as some other statements of the second law, must be re-stated to have general applicability for all forms of heat transfer, i.e. scenarios involving radiative fluxes. For example, the integrand (đQ/T) of the Clausius expression applies to heat conduction and convection, and the case of ideal infinitesimal blackbody radiation (BR) transfer, but does not apply to most radiative transfer scenarios and in some cases has no physical meaning whatsoever.
Consequently, the Clausius inequality was re-stated so that it is applicable to cycles with processes involving any form of heat transfer. The entropy transfer with radiative fluxes (δS_rad) is taken separately from that due to heat transfer by conduction and convection (δQ_cc), where the temperature is evaluated at the system boundary (T_b) where the heat transfer occurs. The modified Clausius inequality, for all heat transfer scenarios, can then be expressed as ∮ (δQ_cc/T_b + δS_rad) ≤ 0. In a nutshell, the Clausius inequality is saying that when a cycle is completed, the change in the state property S will be zero, so the entropy that was produced during the cycle must have transferred out of the system by heat transfer. The đ (or δ) indicates a path dependent integration. Due to the inherent emission of radiation from all matter, most entropy flux calculations involve incident, reflected and emitted radiative fluxes. The energy and entropy of unpolarized blackbody thermal radiation is calculated using the spectral energy and entropy radiance expressions derived by Max Planck using equilibrium statistical mechanics, Kν = (2hν³/c²)/(exp(hν/kT) − 1) and Lν = (2kν²/c²)[(1 + n) ln(1 + n) − n ln n], with n = c²Kν/(2hν³) the mean photon occupation, where c is the speed of light, k is the Boltzmann constant, h is the Planck constant, ν is frequency, and the quantities Kν and Lν are the energy and entropy fluxes per unit frequency, area, and solid angle. In deriving this blackbody spectral entropy radiance, with the goal of deriving the blackbody energy formula, Planck postulated that the energy of a photon was quantized (partly to simplify the mathematics), thereby starting quantum theory. A non-equilibrium statistical mechanics approach has also been used to obtain the same result as Planck, indicating it has wider significance and represents a non-equilibrium entropy. A plot of Kν versus frequency (ν) for various values of temperature (T) gives a family of blackbody radiation energy spectra, and likewise for the entropy spectra. For non-blackbody radiation (NBR) emission fluxes, the spectral entropy radiance Lν is found by substituting Kν spectral energy radiance data into the Lν expression (noting that emitted and reflected entropy fluxes are, in general, not independent). For the emission of NBR, including graybody radiation (GR), the resultant emitted entropy flux, or radiance L, has a higher ratio of entropy-to-energy (L/K) than that of BR. That is, the entropy flux of NBR emission is farther removed from the conduction and convection q/T result than that for BR emission. This observation is consistent with Max Planck's blackbody radiation energy and entropy formulas and is consistent with the fact that blackbody radiation emission represents the maximum emission of entropy for all materials with the same temperature, as well as the maximum entropy emission for all radiation with the same energy radiance.
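A minimal numerical sketch of these spectral quantities, in Python, may help; the frequency and temperature chosen are illustrative, and the formulas are the Planck expressions quoted above:

import math

h = 6.62607015e-34   # J s, Planck constant
k = 1.380649e-23     # J/K, Boltzmann constant
c = 2.99792458e8     # m/s, speed of light

def blackbody_radiances(nu, T):
    # Spectral energy radiance K and entropy radiance L of blackbody
    # radiation, per unit frequency, area and solid angle.
    n = 1.0 / math.expm1(h * nu / (k * T))   # mean photon occupation number
    K = (2.0 * h * nu**3 / c**2) * n
    L = (2.0 * k * nu**2 / c**2) * ((1 + n) * math.log(1 + n) - n * math.log(n))
    return K, L

K, L = blackbody_radiances(nu=5.0e14, T=5800.0)   # ~visible light, ~solar temperature
print(K, L, L / K)   # the ratio L/K (units 1/K) characterizes the entropy content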
Generalized conceptual statement of the second law principle Second law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (exergy content), understanding fundamental physical phenomena, and improving performance evaluation and optimization. As a result, a conceptual statement of the principle is very useful in engineering analysis. Thermodynamic systems can be categorized by the four combinations of either entropy (S) up or down, and uniformity (Y) – between system and its environment – up or down. This 'special' category of processes, category IV, is characterized by movement in the direction of low disorder and low uniformity, counteracting the second law tendency towards uniformity and disorder. The second law can be conceptually stated as follows: Matter and energy have the tendency to reach a state of uniformity or internal and external equilibrium, a state of maximum disorder (entropy). Real non-equilibrium processes always produce entropy, causing increased disorder in the universe, while idealized reversible processes produce no entropy and no process is known to exist that destroys entropy. The tendency of a system to approach uniformity may be counteracted, and the system may become more ordered or complex, by the combination of two things, a work or exergy source and some form of instruction or intelligence, where 'exergy' is the thermal, mechanical, electric or chemical work potential of an energy source or flow, and 'instruction or intelligence', although subjective, is in the context of the set of category IV processes. Consider a category IV example of robotic manufacturing and assembly of vehicles in a factory. The robotic machinery requires electrical work input and instructions, but when completed, the manufactured products have less uniformity with their surroundings, or more complexity (higher order) relative to the raw materials they were made from. Thus, system entropy or disorder decreases while the tendency towards uniformity between the system and its environment is counteracted. In this example, the instructions, as well as the source of work may be internal or external to the system, and they may or may not cross the system boundary. To illustrate, the instructions may be pre-coded and the electrical work may be stored in an energy storage system on-site. Alternatively, the control of the machinery may be by remote operation over a communications network, while the electric work is supplied to the factory from the local electric grid. In addition, humans may directly play, in whole or in part, the role that the robotic machinery plays in manufacturing. In this case, instructions may be involved, but intelligence is either directly responsible, or indirectly responsible, for the direction or application of work in such a way as to counteract the tendency towards disorder and uniformity. There are also situations where the entropy spontaneously decreases by means of energy and entropy transfer. When thermodynamic constraints are not present, energy or mass, as well as accompanying entropy, may be transferred spontaneously out of a system as it progresses towards external equilibrium or uniformity in intensive properties of the system with its surroundings. This occurs spontaneously because the energy or mass transferred from the system to its surroundings results in a higher entropy in the surroundings, that is, it results in higher overall entropy of the system plus its surroundings. Note that this transfer of entropy requires dis-equilibrium in properties, such as a temperature difference. One example of this is the cooling crystallization of water that can occur when the system's surroundings are below freezing temperatures. Unconstrained heat transfer can spontaneously occur, leading to water molecules freezing into a crystallized structure of reduced disorder (sticking together in a certain order due to molecular attraction). The entropy of the system decreases, but the system approaches uniformity with its surroundings (category III).
On the other hand, consider the refrigeration of water in a warm environment. Due to refrigeration, as heat is extracted from the water, the temperature and entropy of the water decrease, as the system moves further away from uniformity with its warm surroundings or environment (category IV). The main take-away is that refrigeration not only requires a source of work, it requires designed equipment, as well as pre-coded or direct operational intelligence or instructions to achieve the desired refrigeration effect. Corollaries Perpetual motion of the second kind Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declared the impossibility of such machines. Carnot's theorem Carnot's theorem (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency depends solely on the temperatures of the hot and cold thermal reservoirs. Carnot's theorem states: All irreversible heat engines between two heat reservoirs are less efficient than a Carnot engine operating between the same reservoirs. All reversible heat engines between two heat reservoirs are equally efficient with a Carnot engine operating between the same reservoirs. In his ideal model, the heat of caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realize the Carnot cycle's reversibility and was condemned to be less efficient. Though formulated in terms of caloric (see the obsolete caloric theory), rather than entropy, this was an early insight into the second law. Clausius inequality The Clausius theorem (1854) states that in a cyclic process ∮ δQ/Tsurr ≤ 0. The equality holds in the reversible case and the strict inequality holds in the irreversible case, with Tsurr as the temperature of the heat bath (surroundings) here. The reversible case is used to introduce the state function entropy. This is because in cyclic processes the variation of a state function is zero from state functionality. Thermodynamic temperature For an arbitrary heat engine, the efficiency is: η = Wn/qH = (qH + qC)/qH = 1 + qC/qH, where Wn is the net work done by the engine per cycle, qH > 0 is the heat added to the engine from a hot reservoir, and qC = −|qC| < 0 is waste heat given off to a cold reservoir from the engine. Thus the efficiency depends only on the ratio |qC|/|qH|. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures TH and TC must have the same efficiency, that is to say, the efficiency is a function of the two temperatures only: |qC|/|qH| = f(TH, TC), so that η = 1 − f(TH, TC).
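Before continuing the derivation, both results can be illustrated numerically in Python; the reservoir temperatures and the assumed real-engine efficiency are arbitrary sketch values:

# Carnot bound and Clausius inequality in numbers (illustrative values).
T_hot, T_cold = 500.0, 300.0        # K
eta_carnot = 1.0 - T_cold / T_hot   # 0.4, the reversible (maximum) efficiency

q_hot = 1000.0                      # J drawn from the hot reservoir per cycle
eta_real = 0.30                     # assumed efficiency of an irreversible engine
q_cold = q_hot * (1.0 - eta_real)   # J rejected to the cold reservoir
print(q_hot / T_hot - q_cold / T_cold)   # about -0.33 J/K < 0 (Clausius inequality)
# For a reversible engine (eta_real = eta_carnot) the sum would be exactly 0.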
In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3, where T1 > T2 > T3. This is because, if a part of the two cycle engine is hidden such that it is recognized as an engine between the reservoirs at the temperatures T1 and T3, then the efficiency of this engine must be the same as the other engine at the same reservoirs. If we choose engines such that the work done by the one cycle engine and the two cycle engine are the same, then the efficiency of each heat engine is written as below: η1 = 1 − |q3|/|q1| = 1 − f(T1, T3), η2 = 1 − |q2|/|q1| = 1 − f(T1, T2), η3 = 1 − |q3|/|q2| = 1 − f(T2, T3). Here, the engine 1 is the one cycle engine, and the engines 2 and 3 make the two cycle engine where there is the intermediate reservoir at T2. Since |q3|/|q1| = (|q2|/|q1|)(|q3|/|q2|), it follows that f(T1, T3) = f(T1, T2) f(T2, T3). We also have used the fact that the heat q2 passes through the intermediate thermal reservoir at T2 without losing its energy. (I.e., q2 is not lost during its passage through the reservoir at T2.) This fact can be proved by the following. In order to have consistency in the last equation, the heat q2 flowing from the engine 2 to the intermediate reservoir must be equal to the heat flowing out from the reservoir to the engine 3. Now consider the case where T1 is a fixed reference temperature: the temperature of the triple point of water, 273.16 K. Then for any T2 and T3, f(T2, T3) = f(T1, T3)/f(T1, T2) = (273.16 K · f(T1, T3))/(273.16 K · f(T1, T2)). Therefore, if thermodynamic temperature T* is defined by T* = 273.16 K · f(T1, T), then the function f, viewed as a function of thermodynamic temperatures, is simply f(T2, T3) = T3*/T2*, and the reference temperature T1* = 273.16 K × f(T1, T1) = 273.16 K. (Any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.) Entropy According to the Clausius equality, for a reversible process, ∮ δQ/T = 0. That means the line integral ∫ δQ/T is path independent for reversible processes. So we can define a state function S called entropy, which for a reversible process or for pure heat transfer satisfies dS = δQ/T. With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals. For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrate on that path to calculate the difference in entropy. Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop, with Tsurr as the temperature of the surroundings, −ΔS + ∫ δQ/Tsurr ≤ 0. Thus, ΔS ≥ ∫ δQ/Tsurr, where the equality holds if the transformation is reversible. If the process is an adiabatic process, then δQ = 0, so ΔS ≥ 0.
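The use of an imaginary reversible path to evaluate the entropy change of an irreversible process can be made concrete in Python; equal masses of water with constant specific heat are an illustrative assumption:

import math

m = 1.0                  # kg, each mass of water
c_p = 4186.0             # J/(kg K), specific heat of water (taken constant)
T1, T2 = 350.0, 290.0    # K, initial temperatures before thermal contact
Tf = (T1 + T2) / 2       # K, common final temperature for equal masses

# Integrate dS = dQ/T along reversible heating/cooling paths: m*c*ln(Tf/Ti).
dS = m * c_p * (math.log(Tf / T1) + math.log(Tf / T2))
print(dS)                # about +37 J/K: positive, as the process is irreversible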
Energy, available useful work An important and revealing idealized special case is to consider applying the second law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an unlimited heat reservoir at temperature TR and pressure PR so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain TR; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain PR. Whatever changes to dS and dSR occur in the entropies of the sub-system and the surroundings individually, the entropy Stot of the isolated total system must not decrease according to the second law of thermodynamics: dStot = dS + dSR ≥ 0. According to the first law of thermodynamics, the change dU in the internal energy of the sub-system is the sum of the heat δq added to the sub-system, minus any work δw done by the sub-system, plus any net chemical energy entering the sub-system d(ΣμiRNi), so that: dU = δq − δw + d(ΣμiRNi), where μiR are the chemical potentials of chemical species in the external surroundings. Now the heat leaving the reservoir and entering the sub-system is δq = −TR dSR ≤ TR dS, where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the second law inequality from above. It therefore follows that any net work δw done by the sub-system must obey δw ≤ −dU + TR dS + ΣμiR dNi. It is useful to separate the work δw done by the subsystem into the useful work δwu that can be done by the sub-system, over and beyond the work pR dV done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done: δwu ≤ −d(U − TR S + pR V − ΣμiR Ni). It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the availability or exergy E of the subsystem: E = U − TR S + pR V − ΣμiR Ni. The second law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact, dE + δwu ≤ 0, i.e. the change in the subsystem's exergy plus the useful work done by the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done on the system) must be less than or equal to zero. In sum, if a proper infinite-reservoir-like reference state is chosen as the system surroundings in the real world, then the second law predicts a decrease in E for an irreversible process and no change for a reversible process: dStot ≥ 0 is equivalent to dE + δwu ≤ 0. This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit) to utilize the second law without directly measuring or considering entropy change in a total isolated system (see also Process engineer). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (see Exergy efficiency). This approach to the second law is widely utilized in engineering practice, environmental accounting, systems ecology, and other disciplines.
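As a sketch of how the availability E is used in practice, consider extracting the maximum useful work from a hot block as it comes to equilibrium with the surroundings (Python; the mass, specific heat and temperatures are illustrative assumptions):

import math

m, c = 10.0, 450.0     # kg and J/(kg K), roughly a steel block (assumed)
T, T_R = 800.0, 300.0  # K, initial block and reservoir temperatures

# At constant volume, the decrease in E = U - T_R*S bounds the useful work:
W_max = m * c * (T - T_R) - T_R * m * c * math.log(T / T_R)
Q_total = m * c * (T - T_R)     # total heat the block gives up
print(W_max, W_max / Q_total)   # ~0.93 MJ, i.e. only ~41% of the heat

The remainder, TR times the block's entropy decrease, must be discharged to the reservoir as heat and is unavailable for work.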
Direction of spontaneous processes The second law determines whether a proposed physical or chemical process is forbidden or may occur spontaneously. For isolated systems, no energy is provided by the surroundings and the second law requires that the entropy of the system alone must increase: ΔS > 0. Examples of spontaneous physical processes in isolated systems include the following: 1) Heat can be transferred from a region of higher temperature to a lower temperature (but not the reverse). 2) Mechanical energy can be converted to thermal energy (but not the reverse). 3) A solute can move from a region of higher concentration to a region of lower concentration (but not the reverse). However, for some non-isolated systems which can exchange energy with their surroundings, the surroundings exchange enough heat with the system, or do sufficient work on the system, so that the processes occur in the opposite direction. This is possible provided the total entropy change of the system plus the surroundings is positive as required by the second law: ΔStot = ΔS + ΔSR > 0. For the three examples given above: 1) Heat can be transferred from a region of lower temperature to a higher temperature in a refrigerator or in a heat pump. These machines must provide sufficient work to the system. 2) Thermal energy can be converted to mechanical work in a heat engine, if sufficient heat is also expelled to the surroundings. 3) A solute can move from a region of lower concentration to a region of higher concentration in the biochemical process of active transport, if sufficient work is provided by a concentration gradient of a chemical such as ATP or by an electrochemical gradient. Second law in chemical thermodynamics For a spontaneous chemical process in a closed system at constant temperature and pressure without non-PV work, the Clausius inequality ΔS > Q/Tsurr transforms into a condition for the change in Gibbs free energy, dG < 0. For a similar process at constant temperature and volume, the change in Helmholtz free energy must be negative, dA < 0. Thus, a negative value of the change in free energy (G or A) is a necessary condition for a process to be spontaneous. This is the most useful form of the second law of thermodynamics in chemistry, where free-energy changes can be calculated from tabulated enthalpies of formation and standard molar entropies of reactants and products. The chemical equilibrium condition at constant T and p without electrical work is dG = 0.
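A short calculation shows how tabulated data decide spontaneity (Python; the enthalpy and entropy of fusion below are rounded, illustrative figures for water at 1 atm):

dH = 6010.0   # J/mol, enthalpy of fusion of ice (approximate)
dS = 22.0     # J/(mol K), entropy of fusion (approximate)
for T in (263.15, 273.15, 283.15):   # -10 C, 0 C, +10 C
    dG = dH - T * dS
    print(T, round(dG), "spontaneous" if dG < 0 else "not spontaneous")
# Melting is non-spontaneous below ~273 K, borderline at it, spontaneous above.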
History The first theory of the conversion of heat into mechanical work is due to Nicolas Léonard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its surroundings. Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law during 1850, in this form: heat does not flow spontaneously from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865). Established during the 19th century, the Kelvin-Planck statement of the second law says, "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." This statement was shown to be equivalent to the statement of Clausius. The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same. There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a macroscopic system. This doctrine is obsolescent. Account given by Clausius In 1865, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in the following form: ∮ δQ/T = −N, where Q is heat, T is temperature and N is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, in which, in the end of his presentation, Clausius concludes: The entropy of the universe tends to a maximum. This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. universe, as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This is not true; this statement is only a simplified version of a more extended and precise description. In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is: dS/dt ≥ 0, where S is the entropy of the system and t is time. The equality sign applies after equilibration. An alternative way of formulating the second law for isolated systems is: dS/dt = Ṡi with Ṡi ≥ 0, with Ṡi the sum of the rate of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied with ambient temperature Ta it gives the so-called dissipated energy Pdiss = Ta Ṡi. The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is: dS/dt = Q̇/T + Ṡi, with Ṡi ≥ 0. Here, Q̇ is the heat flow into the system and T is the temperature at the point where the heat enters the system. The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms. For open systems (also allowing exchange of matter): dS/dt = Q̇/T + Ṡm + Ṡi, with Ṡi ≥ 0. Here, Ṡm is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions. Statistical mechanics Statistical mechanics gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a microstate of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/√N where N is the number of particles in the system.
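The 1/√N scaling can be seen in a toy model (Python): N particles, each equally likely to be in the left or right half of a box, so the left-half count has standard deviation √N/2 (a binomial result) and relative fluctuation 1/√N:

import math

for N in (100, 10_000, 1_000_000):
    sigma = math.sqrt(N) / 2    # std. dev. of the left-half occupation
    print(N, sigma / (N / 2))   # relative fluctuation, i.e. 1/sqrt(N)
# For N ~ 1e23 particles this is ~3e-12: far too small to observe macroscopically.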
For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations. Derivation from statistical mechanics The first mechanical argument of the Kinetic theory of gases that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium was due to James Clerk Maxwell in 1860; Ludwig Boltzmann with his H-theorem of 1872 also argued that due to collisions gases should over time tend toward the Maxwell–Boltzmann distribution. Due to Loschmidt's paradox, derivations of the second law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a boundary condition, and thus the second law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested. Given these assumptions, in statistical mechanics, the second law is not a postulate, rather it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy E is: S = kB ln Ω(E), where Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δE. Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then Ω will depend on the values of these variables. If a variable is not fixed, (e.g. we do not clamp a piston in a certain position), then because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that Ω is maximized at the given energy of the isolated system as that is the most probable situation in equilibrium. If the variable was initially fixed to some value then upon release and when the new equilibrium has been reached, the fact that the variable will adjust itself so that Ω is maximized, implies that the entropy will have increased or it will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value).
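A toy model (Python) illustrates why the released variable settles where Ω is maximal; it assumes two Einstein solids, for which the microstate count has the closed form C(q + N − 1, q) for q energy quanta shared among N oscillators:

from math import comb

N1, N2, q = 30, 70, 100    # oscillators in each solid, total energy quanta

def omega(N, q):           # microstates of one Einstein solid
    return comb(q + N - 1, q)

best_q1 = max(range(q + 1), key=lambda q1: omega(N1, q1) * omega(N2, q - q1))
print(best_q1)   # 30: energy divides in proportion to system size, maximizing Omega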
Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number Ω of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of 1/Ω. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the quantity H increases monotonically as a function of time during the intermediate out of equilibrium state. Derivation of the entropy change for reversible processes The second part of the second law states that the entropy change of a system undergoing a reversible process is given by: dS = δQ/T, where the temperature is defined as: 1/(kB T) ≡ β ≡ d ln Ω(E)/dE. See Microcanonical ensemble for the justification for this definition. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in. The generalized force, X, corresponding to the external variable x is defined such that X dx is the work performed by the system if x is increased by an amount dx. For example, if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate Er is given by: X = −dEr/dx. Since the system can be in any energy eigenstate within an interval of δE, we define the generalized force for the system as the expectation value of the above expression: X = −⟨dEr/dx⟩. To evaluate the average, we partition the Ω(E) energy eigenstates by counting how many of them have a value for dEr/dx within a range between Y and Y + δY. Calling this number ΩY(E), we have: Ω(E) = ΣY ΩY(E). The average defining the generalized force can now be written: X = −(1/Ω(E)) ΣY Y ΩY(E). We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then Ω will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E and E + δE. Let's focus again on the energy eigenstates for which dEr/dx lies within the range between Y and Y + δY. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are NY(E) = (ΩY(E)/δE) Y dx such energy eigenstates. If Y dx ≤ δE, all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω. The number of energy eigenstates that move from below E + δE to above E + δE is given by NY(E + δE). The difference NY(E) − NY(E + δE) is thus the net contribution to the increase in Ω. If Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE. They are counted in both NY(E) and NY(E + δE), therefore the above expression is also valid in that case. Expressing the above expression as a derivative with respect to E and summing over Y yields the expression: (∂Ω/∂x)E = −ΣY Y (∂ΩY/∂E) = ∂(ΩX)/∂E. The logarithmic derivative of Ω with respect to x is thus given by: (∂ ln Ω/∂x)E = βX + ∂X/∂E. The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit.
We have thus found that: (∂S/∂x)E = X/T. Combining this with (∂S/∂E)x = 1/T gives: dS = (∂S/∂E)x dE + (∂S/∂x)E dx = (dE + X dx)/T = δQ/T. Derivation for systems described by the canonical ensemble If a system is in thermal contact with a heat bath at some temperature T then, in equilibrium, the probability distribution over the energy eigenvalues is given by the canonical ensemble: Pj = exp(−Ej/(kB T))/Z. Here Z is a factor that normalizes the sum of all the probabilities to 1; this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy: S = −kB Σj Pj ln Pj, that dS = −kB Σj ln Pj dPj. Inserting the formula for Pj for the canonical ensemble in here gives: dS = (1/T) Σj Ej dPj = δQ/T.
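For a concrete instance, a two-level system in contact with a heat bath can be evaluated directly (Python; the level spacing is an illustrative assumption), and the identity dS = δQ/T = dU/T (the levels being held fixed) can be checked by finite differences:

import math

k = 1.380649e-23   # J/K, Boltzmann constant
eps = 2.0e-21      # J, energy of the upper level (assumed; lower level at 0)

def entropy_and_energy(T):
    Z = 1.0 + math.exp(-eps / (k * T))              # partition function
    p0, p1 = 1.0 / Z, math.exp(-eps / (k * T)) / Z  # Boltzmann probabilities
    S = -k * (p0 * math.log(p0) + p1 * math.log(p1))
    return S, p1 * eps                              # entropy, mean energy

T, dT = 150.0, 1e-3
S_lo, U_lo = entropy_and_energy(T - dT)
S_hi, U_hi = entropy_and_energy(T + dT)
print(S_hi - S_lo, (U_hi - U_lo) / T)   # the two values agree closely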
Initial conditions at the Big Bang As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the Big Bang. From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the observable universe developed - seems to have been extremely uniform. This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is solved once realizing that gravitational systems have negative heat capacity, so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space). Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium, its entropy only slightly increased compared to the increase in maximal possible entropy, and thus it has arrived at a very low entropy when compared to the much larger possible maximum given its later size. As for the reason why initial conditions were such, one suggestion is that cosmological inflation was enough to wipe off non-smoothness, while another is that the universe was created spontaneously where the mechanism of creation implies low-entropy initial conditions. Living organisms There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, including Erwin Schrödinger (in his book What is Life?) and Léon Brillouin. To a fair approximation, living organisms may be considered as examples of (b). Approximately, an animal's physical state cycles by the day, leaving the animal nearly unchanged. Animals take in food, water, and oxygen, and, as a result of metabolism, give out breakdown products and heat. Plants take in radiative energy from the sun, which may be regarded as heat, and carbon dioxide and water. They give out oxygen. In this way they grow. Eventually they die, and their remains rot away, turning mostly back into carbon dioxide and water. This can be regarded as a cyclic process. Overall, the sunlight is from a high temperature source, the sun, and its energy is passed to a lower temperature sink, i.e. radiated into space. This is an increase of entropy of the surroundings of the plant. Thus animals and plants obey the second law of thermodynamics, considered in terms of cyclic processes. Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaptation and memory, is not opposed to the second law – rather, it is akin to general results following from it: Under some definitions, an increase in entropy also results in an increase in complexity, and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs. Living organisms may be considered as open systems, because matter passes into and out from them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations are a subject of ongoing research. Gravitational systems Commonly, systems for which gravity is not important have a positive heat capacity, meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature is increased; hence temperature differences tend to diminish over time. This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease but their internal temperature may increase. This can be significant for protostars and even gas giant planets such as Jupiter. When the entropy of the black-body radiation emitted by the bodies is included, however, the total entropy of the system can be shown to increase even as the entropy of the planet or star decreases. Non-equilibrium states The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, that may be found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium. For purposes of physical analysis, it is often enough convenient to make an assumption of thermodynamic equilibrium. Such an assumption may rely on trial and error for its justification. If the assumption is justified, it can often be very valuable and useful because it makes available the theory of thermodynamics. Elements of the equilibrium assumption are that a system is observed to be unchanging over an indefinitely long time, and that there are so many particles in a system, that its particulate nature can be entirely ignored. Under such an equilibrium assumption, in general, there are no macroscopically detectable fluctuations. There is an exception, the case of critical states, which exhibit to the naked eye the phenomenon of critical opalescence. For laboratory studies of critical states, exceptionally long observation times are needed.
In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system. It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy. Or that a physical system has so few particles that the particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states. There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation, but strictly speaking it is still an approximation, not theoretically ideal. For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law. The physics of macroscopically observable fluctuations is beyond the scope of this article. Arrow of time The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry) since the second law applies statistically on time-asymmetric boundary conditions. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality). Irreversibility Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed "paradoxes" that arise from failure to recognize this. Loschmidt's paradox Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system. In the opinion of Schrödinger, "It is now quite obvious in what manner you have to reformulate the law of entropy – or for that matter, all other irreversible statements – so that they be capable of being derived from reversible models. You must not speak of one isolated system but at least of two, which you may for the moment consider isolated from the rest of the world, but not always from each other." The two systems are isolated from each other by the wall, until it is removed by the thermodynamic operation, as envisaged by the law. The thermodynamic operation is externally imposed, not subject to the reversible microscopic dynamical laws that govern the constituents of the systems. It is the cause of the irreversibility. The statement of the law in this present article complies with Schrödinger's advice. The cause–effect relation is logically prior to the second law, not derived from it. This accords with the view, underlying special and general relativity, that the flow of time is irreversible, although it is relative.
Cause must precede effect, but only within the constraints as defined explicitly within General Relativity (or Special Relativity, depending on the local spacetime conditions). Good examples of this are the Ladder Paradox, time dilation and length contraction exhibited by objects approaching the velocity of light or within proximity of a super-dense region of mass/energy - e.g. black holes, neutron stars, magnetars and quasars. Poincaré recurrence theorem The Poincaré recurrence theorem considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return. It is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamical system, the recurrence time is so large (many many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal. Maxwell's demon James Clerk Maxwell imagined one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. One response to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Likewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed. 
Maxwell's 'demon' repeatedly alters the permeability of the wall between A and B. It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes.
Physical sciences
Thermodynamics
Physics
22459054
https://en.wikipedia.org/wiki/Pachypasa
Pachypasa
Pachypasa is a genus of moths in the family Lasiocampidae. The genus was erected by Francis Walker in 1855.
Species
Pachypasa otus (Drury, 1773) – Italy, Greece, Asia Minor, Iraq, Iran
Pachypasa limosa (de Villiers, 1827) – southwestern Europe, northern Africa
Pachypasa denticula (Bethune-Baker, 1908) – Zimbabwe
Pachypasa drucei (Bethune-Baker, 1908) – Zimbabwe
Pachypasa argibasis (Mabille, 1893) – western and eastern Africa
Pachypasa pallens (Bethune-Baker, 1908) – Zimbabwe
Pachypasa subfascia Walker, 1855 – western Africa
Pachypasa multipunctata (Hering, 1932)
Biology and health sciences
Lepidoptera
Animals
12991932
https://en.wikipedia.org/wiki/Level%20staff
Level staff
A level staff, also called levelling rod, is a graduated wooden or aluminium rod, used with a levelling instrument to determine the difference in height between points or heights of points above a vertical datum. When used for stadiametric rangefinding, the level staff is called a stadia rod. Rod construction and materials Levelling rods can be one piece, but many are sectional and can be shortened for storage and transport or lengthened for use. Aluminum rods may be shortened by telescoping sections inside each other, while wooden rod sections can be attached to each other with sliding connections or slip joints, or hinged to fold when not in use. There are many types of rods, with names that identify the form of the graduations and other characteristics. Markings can be in imperial or metric units. Some rods are graduated on one side only while others are marked on both sides. If marked on both sides, the markings can be identical or can have imperial units on one side and metric on the other. Reading a rod In the photograph on the right, both a metric (left) and imperial (right) levelling rod are seen. This is a two-sided aluminum rod, coated white with markings in contrasting colours. The imperial side has a bright yellow background. The metric rod has major numbered graduations in meters and tenths of meters (e.g. 18 is 1.8 m - there is a tiny decimal point between the numbers). Between the major marks are either a pattern of squares and spaces in different colours or an E shape (or its mirror image) with horizontal components and spaces between of equal size. In both parts of the pattern, the squares, lines or spaces are precisely one centimetre high. When viewed through an instrument's telescope, the observer can visually interpolate a 1 cm mark to a tenth of its height, yielding a reading with precision in mm. Usually readings are recorded with millimetre precision. On this side of the rod, the colours of the markings alternate between red and black with each meter of length. The imperial graduations are in feet (large red numbers), tenths of a foot (small black numbers) and hundredths of a foot (unnumbered marks or spaces between the marks). The tenths of a foot point is indicated by the top of the long mark with the upward sloped end. The point halfway between tenths of a foot marks is indicated by the bottom of a medium length black mark with a downward sloped end. Each mark or space is approximately 3mm, yielding roughly the same accuracy as the metric rod. Classes of rods Rods come in two classes: Self-reading rods (sometimes called speaking rods). Target rods. Self-reading rods are rods that are read by the person viewing the rod through the telescope of the instrument. The graduations are sufficiently clear to read with good accuracy. Target rods, on the other hand, are equipped with a target. The target is a round or oval plate marked in quarters in contrasting colours such as red and white in opposite quarters. A hole in the centre allows the instrument user to see the rod's scale. The target is adjusted by the rodman according to the instructions from the instrument man. When the target is set to align with the crosshairs of the instrument, the rodman records the level value. The target may have a vernier to allow fractional increments of the graduation to be read. Digital levels electronically read a bar-coded scale on the staff. These instruments usually include data recording capability. 
The automation removes the requirement for the operator to read a scale and write down the value, and so reduces blunders. It may also compute and apply refraction and curvature corrections. Topographer's rods Topographer's rods are special purpose rods used in topographical surveys. The rod has the zero mark at mid-height and the graduations increase in both directions away from the mid-height. In use, the rod is adjusted so that the zero point is level with the instrument (or the surveyor's eye if he is using a hand level for low-resolution work). When placed at any point where the level is to be read, the value seen is the height above or below the viewer's position. An alternative topographer's rod has the graduations numbered upwards from the base.
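As an illustration of how staff readings are reduced to elevations, the height-of-instrument method can be written out in a few lines (Python; the benchmark elevation and staff readings are illustrative values in metres):

benchmark = 100.000   # known elevation of the starting benchmark
backsight = 1.873     # staff reading taken on the benchmark
foresight = 0.642     # staff reading taken on the point being surveyed

height_of_instrument = benchmark + backsight
print(height_of_instrument - foresight)   # 101.231: the new point's elevation

The difference backsight − foresight (here 1.231 m) is the rise from the benchmark to the new point.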
Technology
Surveying tools
null
4865141
https://en.wikipedia.org/wiki/Flood%20control%20in%20the%20Netherlands
Flood control in the Netherlands
Flood control is an important issue for the Netherlands, as due to its low elevation, approximately two thirds of its area is vulnerable to flooding, while the country is densely populated. Natural sand dunes and constructed dikes, dams, and floodgates provide defense against storm surges from the sea. River dikes prevent flooding from water flowing into the country by the major rivers Rhine and Meuse, while a complicated system of drainage ditches, canals, and pumping stations (historically: windmills) keep the low-lying parts dry for habitation and agriculture. Water control boards are the independent local government bodies responsible for maintaining this system. In modern times, flood disasters coupled with technological developments have led to large construction works to reduce the influence of the sea and prevent future floods. These have proved essential over the course of Dutch history, both geographically and militarily, and have greatly impacted the lives of many living in the cities affected, stimulating their economies through constant infrastructural improvement. History The Greek geographer Pytheas noted of the Low Countries, as he passed them on his way to Heligoland in the 4th century BCE, that "more people died in the struggle against water than in the struggle against men". Roman author Pliny, of the 1st century, wrote something similar in his Natural History: There, twice in every twenty-four hours, the ocean's vast tide sweeps in a flood over a large stretch of land and hides Nature's everlasting controversy about whether this region belongs to the land or to the sea. There these wretched peoples occupy high ground, or manmade platforms constructed above the level of the highest tide they experience; they live in huts built on the site so chosen and are like sailors in ships when the waters cover the surrounding land, but when the tide has receded they are like shipwrecked victims. Around their huts they catch fish as they try to escape with the ebbing tide. It does not fall to their lot to keep herds and live on milk, like neighboring tribes, nor even to fight with wild animals, since all undergrowth has been pushed far back. The flood-threatened area of the Netherlands is essentially an alluvial plain, built up from sediment left by thousands of years of flooding by rivers and the sea. About 2,000 years ago most of the Netherlands was covered by extensive peat swamps. The coast consisted of a row of coastal dunes and natural embankments which kept the swamps from draining but also from being washed away by the sea. The only areas suitable for habitation were on the higher grounds in the east and south and on the dunes and natural embankments along the coast and the rivers. In several places the sea had broken through these natural defenses and created extensive floodplains in the north. The first permanent inhabitants of this area were probably attracted by the sea-deposited clay soil which was much more fertile than the peat and sandy soil further inland. To protect themselves against floods they built their homes on artificial dwelling hills called terpen or wierden (known as Warften or Halligen in Germany). Between 500 BC and AD 700 there were probably several periods of habitation and abandonment as the sea level periodically rose and fell. The first dikes were low embankments of only a meter or so in height surrounding fields to protect the crops against occasional flooding. Around the 9th century the sea was on the advance again and many terps had to be raised to keep them safe.
Many single terps had by this time grown together as villages. These were now connected by the first dikes. After about AD 1000 the population grew, which meant there was a greater demand for arable land but also that there was a greater workforce available, and dike construction was taken up more seriously. The major contributors in later dike building were the monasteries. As the largest landowners, they had the organization, resources and manpower to undertake such large construction works. By 1250 most dikes had been connected into a continuous sea defense. The next step was to move the dikes ever further seaward. Every cycle of high and low tide left a small layer of sediment. Over the years these layers had built up to such a height that they were rarely flooded. It was then considered safe to build a new dike around this area. The old dike was often kept as a secondary defense, called a sleeper dike. A dike could not always be moved seaward. Especially in the southwest river delta it was often the case that the primary sea dike was undermined by a tidal channel. A secondary dike was then built, called an inlaagdijk. When the seaward dike collapsed, this secondary inland dike became the primary defense. Although the redundancy provides security, the land between the first and second dikes is lost; over the years the loss can become significant. Taking land out of the cycle of flooding by putting a dike around it prevents it from being raised by silt left behind after a flood. At the same time the drained soil consolidates and peat decomposes, leading to land subsidence. In this way the difference between the water level on one side and the land level on the other side of the dike grew. While floods became rarer, when a dike did overflow or was breached, the destruction was much greater. The construction method of dikes has changed over the centuries. Popular in the Middle Ages were wierdijken, earth dikes with a protective layer of seaweed. An earth embankment was cut vertically on the sea-facing side. Seaweed was then stacked against this edge, held in place with poles. Compression and rotting processes resulted in a solid residue that proved very effective against wave action, and these dikes needed very little maintenance. In places where seaweed was unavailable, other materials such as reeds or wicker mats were used. Another system, widely used for a long time, was a vertical screen of timbers backed by an earth bank. Technically these vertical constructions were less successful, as vibration from crashing waves and the washing out of the dike foundations weakened the dike. Much damage was done to these wood constructions with the arrival of the shipworm (Teredo navalis), a bivalve thought to have been brought to the Netherlands by VOC trading ships, which ate its way through Dutch sea defenses around 1730. The change was made from wood to stone for reinforcement. This was a great financial setback, as there is no naturally occurring rock in the Netherlands and it all had to be imported from abroad. Current dikes are made with a core of sand, covered by a thick layer of clay to provide waterproofing and resistance against erosion. Dikes without a foreland have a layer of crushed rock below the waterline to slow wave action. Up to the high waterline the dike is often covered with carefully laid basalt stones or a layer of tarmac. The remainder is covered by grass and maintained by grazing sheep. Sheep keep the grass dense and compact the soil, in contrast to cattle.
Developing the peat swamps At about the same time as the building of dikes, the first swamps were made suitable for agriculture by colonists. By digging a system of parallel drainage ditches, water was drained from the land so that grain could be grown. However, the peat settled much more than other soil types when drained, and land subsidence resulted in developed areas becoming wet again. Cultivated lands which were at first primarily used for growing grain thus became too wet, and the switch was made to dairy farming. A new area behind the existing field was then cultivated, heading deeper into the wild. This cycle repeated itself several times until the different developments met each other and no further undeveloped land was available. All land was then used for grazing cattle. Because of the continuous land subsidence it became ever more difficult to remove excess water. The mouths of streams and rivers were dammed to prevent high water levels flowing back upstream and overflowing cultivated lands. These dams had a wooden culvert equipped with a valve, allowing drainage but preventing water from flowing upstream. These dams, however, blocked shipping, and the economic activity generated by the need to transship goods caused villages to grow up near the dams; famous examples are Amsterdam (dam in the river Amstel) and Rotterdam (dam in the Rotte). Only in later centuries were locks developed to allow ships to pass. Further drainage could only be accomplished after the development of the polder windmill in the 15th century. The wind-driven water pump has become one of the trademark tourist attractions of the Netherlands. The first drainage mills, using a scoop wheel, could raise water at most 1.5 m. By combining mills the pumping height could be increased. Later mills were equipped with an Archimedes' screw, which could raise water much higher. The polders, now often below sea level, were kept dry with mills pumping water from the polder ditches and canals to the boezem ("bosom"), a system of canals and lakes connecting the different polders and acting as a storage basin until the water could be let out to river or sea, either by a sluice gate at low tide or using further pumps. This system is still in use today, though drainage mills have been replaced first by steam and later by diesel and electric pumping stations. The growth of towns and industry in the Middle Ages resulted in an increased demand for dried peat as fuel. First all the peat down to the groundwater table was dug away. In the 16th century a method was developed to dig peat below water, using a dredging net on a long pole. Large scale peat dredging was taken up by companies, supported by investors from the cities. These undertakings often devastated the landscape, as agricultural land was dug away and the leftover ridges, used for drying the peat, collapsed under the action of waves. Small lakes were created which quickly grew in area, every increase in surface water giving the wind more leverage to attack more land. It even led to villages being lost to the waves of human-made lakes. The development of the polder mill gave the option of draining the lakes. In the 16th century this work was started on small, shallow lakes, continuing with ever-larger and deeper lakes, though it was not until the 19th century that the most dangerous of these lakes, the Haarlemmermeer near Amsterdam, was drained using steam power.
Drained lakes and new polders can often be easily distinguished on topographic maps by their regular division pattern, compared with their older surroundings. Millwright and hydraulic engineer Jan Leeghwater has become famous for his involvement in these works. Control of river floods Three major European rivers, the Rhine, Meuse, and Scheldt, flow through the Netherlands, of which the Rhine and Meuse cross the country from east to west. The first large construction works on the rivers were conducted by the Romans. Nero Claudius Drusus was responsible for building a dam in the Rhine to divert water from the Waal branch to the Nederrijn, and possibly for connecting the river IJssel, previously only a small stream, to the Rhine. Whether these were intended as flood control measures or just for military defense and transport purposes is unclear. The first river dikes appeared near the river mouths in the 11th century, where incursions from the sea added to the danger from high water levels on the river. Local rulers dammed branches of rivers to prevent flooding on their lands (Graaf van Holland, c. 1160, Kromme Rijn; Floris V, 1285, Hollandse IJssel), only to cause problems to others living further upstream. Large scale deforestation upstream caused the river levels to become ever more extreme, while the demand for arable land led to more land being protected by dikes, giving less space to the river stream bed and so causing even higher water levels. Local dikes to protect villages were connected to create a ban dike to contain the river at all times. These developments meant that while the regular floods had been just a nuisance to the first inhabitants of the river valleys, the later incidental floods, when dikes burst, were much more destructive. The 17th and 18th centuries were a period of many infamous river floods resulting in much loss of life. They were often caused by ice dams blocking the river. Land reclamation works, large willow plantations, and building in the winter bed of the river all worsened the problem. Besides the obvious clearing of the winter bed, overflows (overlaten) were created. These were intentionally low dikes where excess water could be diverted downstream. The land in such a diversion channel was kept clear of buildings and obstructions. As this so-called green river could essentially only be used for grazing cattle, it was in later centuries seen as a wasteful use of land. Most overflows have now been removed, the focus shifting instead to stronger dikes and more control over the distribution of water across the river branches. To achieve this, canals such as the Pannerdens Kanaal and Nieuwe Merwede were dug. A committee reported in 1977 on the weakness of the river dikes, but there was too much resistance from the local population against demolishing houses and straightening and strengthening the old meandering dikes. It took the flood threats in 1993 and again in 1995, when over 250,000 people had to be evacuated and the dikes only just held, to put plans into action. Now the risk of a river flooding has been reduced from once every 100 years to once every 1,250 years. Further works in the Room for the River project are being carried out to give the rivers more space to flood, thereby reducing the flood height. Water control boards The first dikes and water control structures were built and maintained by those directly benefiting from them, mostly farmers.
As the structures got more extensive and complex, councils were formed from people with a common interest in the control of water levels on their land, and so the first water boards began to emerge. These often controlled only a small area, a single polder or dike. Later they merged, or an overall organization was formed when different water boards had conflicting interests. The original water boards differed greatly from each other in their organization, power, and the area they managed. The differences were often regional and were dictated by differing circumstances, whether they had to defend a sea dike against a storm surge or keep the water level in a polder within bounds. In the middle of the 20th century there were about 2,700 water control boards. After many mergers, there are currently 21 water boards left. Water boards hold separate elections, levy taxes, and function independently from other government bodies. The dikes were maintained by the individuals who benefited from their existence, every farmer having been designated a part of the dike to maintain, with a viewing every three years by the water board directors. The old rule "Whom the water hurts, he the water stops" (Wie het water deert, die het water keert) meant that those living at the dike had to pay for it and care for it. This led to haphazard maintenance, and it is believed that many floods would not have happened, or would not have been as severe, if the dikes had been in better condition. Those living further inland often refused to pay or help in the upkeep of the dikes, though they were just as much affected by floods, while those living at the dike itself could go bankrupt from having to repair a breached dike. Rijkswaterstaat (Directorate-General for Public Works and Water Management) was set up in 1798, under French rule, to put water control in the Netherlands under a central government. Local water boards, however, were too attached to their autonomy, and for most of its history Rijkswaterstaat has worked alongside them. Rijkswaterstaat has been responsible for many major water control structures, and later became, and still is, involved in building railroads and highways. Water boards may try new experiments, like the sand engine off the coast of South Holland. Notorious floods Over the years there have been many storm surges and floods in the Netherlands. Some deserve special mention, as they markedly changed the contours of the Netherlands. A series of devastating storm surges, more or less starting with the First All Saints' flood (Allerheiligenvloed) in 1170, washed away a large area of peat marshes, enlarging the Wadden Sea and connecting the previously existing Lake Almere in the middle of the country to the North Sea, thereby creating the Zuiderzee. The Zuiderzee would itself cause much trouble until the building of the Afsluitdijk in 1933. Several storms starting in 1219 created the Dollart from the mouth of the river Ems. By 1520 the Dollart had reached its largest area. Reiderland, containing several towns and villages, was lost. Much of this land was later reclaimed. In 1421 the St. Elizabeth's flood caused the loss of De Grote Waard in the southwest of the country. In particular, the digging of peat near the dikes for salt production, and neglect during a civil war, caused dikes to fail, which created the Biesbosch, now a valued nature reserve. The more recent floods of 1916 and 1953 gave rise to the building of the Afsluitdijk and the Delta Works, respectively.
Flooding as military defense The deliberate inundation of certain areas could create a military defensive line. In case of an advancing enemy army, the area was to be inundated with about 30 cm (1 ft) of water, too shallow for boats but deep enough to make advance on foot difficult by hiding underwater obstacles such as canals, ditches, and purpose-built traps. Dikes crossing the flooded area and other strategic points were to be protected by fortifications. The system proved successful on the Hollandic Water Line in the rampjaar ("disaster year") of 1672 during the Third Anglo-Dutch War, but was overcome in 1795 because of heavy frost. It was also used with the Stelling van Amsterdam, the Grebbe Line and the IJssel Line. The advent of heavier artillery, and especially airplanes, has made that strategy largely obsolete. Modern developments Technological development in the 20th century meant that larger projects could be undertaken to further improve the safety against flooding and to reclaim large areas of land. The most important are the Zuiderzee Works and the Delta Works. By the end of the 20th century all sea inlets had been closed off from the sea by dams and barriers. Only the Westerschelde needs to remain open, for shipping access to the port of Antwerp. Plans to reclaim parts of the Wadden Sea and the Markermeer were eventually called off because of the ecological and recreational values of these waters. Zuiderzee Works The Zuiderzee Works (Zuiderzeewerken) are a system of dams, land reclamation, and water drainage works. The basis of the project was the damming off of the Zuiderzee, a large shallow inlet of the North Sea. This dam, called the Afsluitdijk, was built in 1932–33, separating the Zuiderzee from the North Sea. As a result, the Zuiderzee became the IJsselmeer (IJssel lake). Following the damming, large areas of land were reclaimed in the newly freshwater lake by means of polders. The works were performed in several steps from 1920 to 1975. Engineer Cornelis Lely played a major part in its design, and as statesman in the authorization of its construction. Delta Works A study done by Rijkswaterstaat in 1937 showed that the sea defenses in the southwest river delta were inadequate to withstand a major storm surge. The proposed solution was to dam all the river mouths and sea inlets, thereby shortening the coast. However, because of the scale of this project and the intervention of the Second World War, its construction was delayed, and the first works were only completed in 1950. The North Sea flood of 1953 gave a major impulse to speed up the project. In the following years a number of dams were built to close off the estuary mouths. In 1976, under pressure from environmental groups and the fishing industry, it was decided not to close off the Oosterschelde estuary by a solid dam but instead to build the Oosterscheldekering, a storm surge barrier which is only closed during storms. It is the best-known (and most expensive) dam of the project. A second major hurdle for the works was in the Rijnmond area. A storm surge through the Nieuwe Waterweg would threaten about 1.5 million people around Rotterdam. However, closing off this river mouth would be very detrimental to the Dutch economy, as the Port of Rotterdam, one of the biggest sea ports in the world, uses this river mouth. Eventually, the Maeslantkering was built in 1997 with economic factors in mind: it is a set of two swinging doors that can shut off the river mouth when necessary, but which are usually open.
The Maeslantkering is forecast to close about once per decade. Up until January 2012, it had closed only once, in 2007. Current situation and future The current sea defenses are stronger than ever, but experts warn that complacency would be a mistake. New calculation methods have revealed numerous weak spots. Sea level rise could increase the mean sea level by one to two meters by the end of this century, with even more to follow. This, together with land subsidence and increased storminess, makes further upgrades to the flood control and water management infrastructure necessary. The sea defenses are continuously being strengthened and raised to meet the safety norm of a flood chance of once every 10,000 years for the west, which is the economic heart and most densely populated part of the Netherlands, and once every 4,000 years for less densely populated areas. The primary flood defenses are tested against this norm every five years. In 2010 about 800 km of dikes out of a total of 3,500 km failed to meet the norm. This does not mean there is an immediate flooding risk; it reflects the norm becoming stricter as a result of scientific research on, for example, wave action and sea level rise. The amount of coastal erosion is compared against the so-called "reference coastline" (Dutch: basiskustlijn), the average coastline in 1990. Sand replenishment is used where beaches have retreated too far. About 12 million m3 of sand are deposited yearly on the beaches and below the waterline in front of the coast. The Stormvloedwaarschuwingsdienst (SVSD; Storm Surge Warning Service) makes a water level forecast in case of a storm surge and warns the responsible parties in the affected coastal districts. These can then take appropriate measures depending on the expected water levels, such as evacuating areas outside the dikes, closing barriers and, in extreme cases, patrolling the dikes during the storm. The Second Delta Committee, or Veerman Committee, officially the Staatscommissie voor Duurzame Kustontwikkeling (State Committee for Sustainable Coastal Development), gave its advice in 2008. It expects a sea level rise of 65 to 130 cm by the year 2100. Among its suggestions are: to increase the safety norms tenfold and strengthen dikes accordingly; to use sand replenishment to broaden the North Sea coast and allow it to grow naturally; to use the lakes in the southwest river delta as river water retention basins; and to raise the water level in the IJsselmeer to provide freshwater. These measures would cost approximately 1 billion euros per year. Room for the River Global warming in the 21st century might result in a rise in sea level which could overwhelm the measures the Netherlands has taken to control floods. The Room for the River project allows for periodic flooding of indefensible lands. In such regions residents have been removed to higher ground, some of which has been raised above anticipated flood levels.
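The safety norms above are annual exceedance probabilities, and these compound over a planning horizon. A minimal sketch in Python of how such a norm translates into a cumulative chance of at least one exceedance (illustrative arithmetic only, not any official Dutch calculation method):

# Probability of at least one exceedance of a design water level
# over a horizon of n years, given an annual exceedance probability p.
def prob_at_least_one(p, years):
    return 1 - (1 - p) ** years

for norm in (1 / 10000, 1 / 4000, 1 / 1250):
    print(f"norm 1/{round(1 / norm)}: "
          f"{prob_at_least_one(norm, 100):.1%} chance in 100 years")

For the 1/10,000 norm this gives roughly a 1% chance of at least one exceedance per century, which is why the norm is periodically reviewed against new science rather than treated as absolute safety.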
Technology
Hydraulic infrastructure
null
4870290
https://en.wikipedia.org/wiki/Complex%20logarithm
Complex logarithm
In mathematics, a complex logarithm is a generalization of the natural logarithm to nonzero complex numbers. The term refers to one of the following, which are strongly related: A complex logarithm of a nonzero complex number z, defined to be any complex number w for which e^w = z. Such a number is denoted by log z. If z is given in polar form as z = re^(iθ), where r and θ are real numbers with r > 0, then ln r + iθ is one logarithm of z, and all the complex logarithms of z are exactly the numbers of the form ln r + i(θ + 2πk) for integers k. These logarithms are equally spaced along a vertical line in the complex plane. A complex-valued function log, defined on some subset S of the set C \ {0} of nonzero complex numbers, satisfying e^(log z) = z for all z in S. Such complex logarithm functions are analogous to the real logarithm function ln, which is the inverse of the real exponential function and hence satisfies e^(ln x) = x for all positive real numbers x. Complex logarithm functions can be constructed by explicit formulas involving real-valued functions, by integration of 1/z, or by the process of analytic continuation. There is no continuous complex logarithm function defined on all of C \ {0}. Ways of dealing with this include branches, the associated Riemann surface, and partial inverses of the complex exponential function. The principal value Log z defines a particular complex logarithm function that is continuous except along the negative real axis; on the complex plane with the negative real numbers and 0 removed, it is the analytic continuation of the (real) natural logarithm. Problems with inverting the complex exponential function For a function to have an inverse, it must map distinct values to distinct values; that is, it must be injective. But the complex exponential function is not injective, because e^(w + 2πik) = e^w for any complex number w and integer k, since adding iθ to w has the effect of rotating e^w counterclockwise by θ radians. So the points w + 2πik for integers k, equally spaced along a vertical line, are all mapped to the same number by the exponential function. This means that the exponential function does not have an inverse function in the standard sense. There are two solutions to this problem. One is to restrict the domain of the exponential function to a region that does not contain any two numbers differing by an integer multiple of 2πi: this leads naturally to the definition of branches of log z, which are certain functions that single out one logarithm of each number in their domains. This is analogous to the definition of arcsin x on [−1, 1] as the inverse of the restriction of sin θ to the interval [−π/2, π/2]: there are infinitely many real numbers θ with sin θ = x, but one arbitrarily chooses the one in [−π/2, π/2]. Another way to resolve the indeterminacy is to view the logarithm as a function whose domain is not a region in the complex plane, but a Riemann surface that covers the punctured complex plane in an infinite-to-1 way. Branches have the advantage that they can be evaluated at complex numbers. On the other hand, the function on the Riemann surface is elegant in that it packages together all branches of the logarithm and does not require an arbitrary choice as part of its definition. Principal value Definition For each nonzero complex number z, the principal value Log z is the logarithm whose imaginary part lies in the interval (−π, π]. The expression Log 0 is left undefined since there is no complex number w satisfying e^w = 0. When the notation log z appears without any particular logarithm having been specified, it is generally best to assume that the principal value is intended. In particular, this gives a value consistent with the real value of ln z when z is a positive real number.
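These definitions can be checked numerically. A short sketch using Python's standard cmath module (the variable names are ours; cmath.log returns the principal value, whose imaginary part lies in (−π, π]):

import cmath, math

z = -3j                      # a nonzero complex number
w = cmath.log(z)             # principal value: ln|z| + i*Arg(z)
print(w)                     # approx (1.0986 - 1.5708j), i.e. ln 3 - i*pi/2

# Every logarithm of z differs from w by an integer multiple of 2*pi*i.
for k in range(-2, 3):
    print(k, cmath.exp(w + 2j * math.pi * k))   # each is approx -3j, up to rounding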
The capitalization in the notation Log is used by some authors to distinguish the principal value from other logarithms of z. Calculating the principal value The polar form of a nonzero complex number z is z = re^(iθ), where r = |z| is the absolute value of z, and θ is its argument. The absolute value is real and positive. The argument is defined up to addition of an integer multiple of 2π. Its principal value is the value that belongs to the interval (−π, π], which is expressed as Arg z. This leads to the following formula for the principal value of the complex logarithm: Log z = ln|z| + i Arg z. For example, Log(−3i) = ln 3 − πi/2, and Log(−3) = ln 3 + πi. The principal value as an inverse function Another way to describe Log z is as the inverse of a restriction of the complex exponential function, as in the previous section. The horizontal strip S consisting of complex numbers w = x + yi such that −π < y ≤ π is an example of a region not containing any two numbers differing by an integer multiple of 2πi, so the restriction of the exponential function to S has an inverse. In fact, the exponential function maps S bijectively to the punctured complex plane C \ {0}, and the inverse of this restriction is Log. The conformal mapping section below explains the geometric properties of this map in more detail. The principal value as an analytic continuation On the region consisting of complex numbers z that are not negative real numbers or 0, the function Log z is the analytic continuation of the natural logarithm. The values on the negative real line can be obtained as limits of values at nearby complex numbers with positive imaginary parts. Properties Not all identities satisfied by ln extend to complex numbers. It is true that e^(Log z) = z for all z ≠ 0 (this is what it means for Log z to be a logarithm of z), but the identity Log(e^z) = z fails for z outside the strip S. For this reason, one cannot always apply Log to both sides of an identity e^z = e^w to deduce z = w. Also, the identity Log(z1 z2) = Log z1 + Log z2 can fail: the two sides can differ by an integer multiple of 2πi; for instance, Log((−1)·(−1)) = Log 1 = 0, but Log(−1) + Log(−1) = πi + πi = 2πi. The function Log z is discontinuous at each negative real number, but continuous everywhere else in C \ {0}. To explain the discontinuity, consider what happens to Arg z as z approaches a negative real number a. If z approaches a from above, then Arg z approaches π, which is also the value of Arg a itself. But if z approaches a from below, then Arg z approaches −π. So Arg z "jumps" by 2π as z crosses the negative real axis, and similarly Log z jumps by 2πi. Branches of the complex logarithm Is there a different way to choose a logarithm of each nonzero complex number so as to make a function L(z) that is continuous on all of C \ {0}? The answer is no. To see why, imagine tracking such a logarithm function along the unit circle, by evaluating L(e^(iθ)) as θ increases from 0 to 2π. If L(e^(iθ)) is continuous, then so is L(e^(iθ)) − iθ, but the latter is a difference of two logarithms of e^(iθ), so it takes values in the discrete set {2πik : k an integer}, so it is constant. In particular, L(e^(2πi)) − 2πi = L(e^0) − 0, which contradicts L(e^(2πi)) = L(1) = L(e^0). To obtain a continuous logarithm defined on complex numbers, it is hence necessary to restrict the domain to a smaller subset S of the complex plane. Because one of the goals is to be able to differentiate the function, it is reasonable to assume that the function is defined on a neighborhood of each point of its domain; in other words, S should be an open set. Also, it is reasonable to assume that S is connected, since otherwise the function values on different components of S could be unrelated to each other. All this motivates the following definition: A branch of log z is a continuous function L(z) defined on a connected open subset S of the complex plane such that L(z) is a logarithm of z for each z in S.
For example, the principal value defines a branch on the open set where it is continuous, which is the set C \ (−∞, 0] obtained by removing 0 and all negative real numbers from the complex plane. Another example: The Mercator series ln(1 + u) = u − u^2/2 + u^3/3 − ⋯ converges locally uniformly for |u| < 1, so setting L(z) = (z − 1) − (z − 1)^2/2 + (z − 1)^3/3 − ⋯ defines a branch of log z on the open disk of radius 1 centered at 1. (Actually, this is just a restriction of Log z, as can be shown by differentiating the difference and comparing values at 1.) Once a branch is fixed, it may be denoted "log z" if no confusion can result. Different branches can give different values for the logarithm of a particular complex number, however, so a branch must be fixed in advance (or else the principal branch must be understood) in order for "log z" to have a precise unambiguous meaning. Branch cuts The argument above involving the unit circle generalizes to show that no branch of log z exists on an open set S containing a closed curve that winds around 0. One says that "log z has a branch point at 0". To avoid containing closed curves winding around 0, S is typically chosen as the complement of a ray or curve in the complex plane going from 0 (inclusive) to infinity in some direction. In this case, the curve is known as a branch cut. For example, the principal branch has a branch cut along the negative real axis. If the function L(z) is extended to be defined at a point of the branch cut, it will necessarily be discontinuous there; at best it will be continuous "on one side", like Arg z at a negative real number. The derivative of the complex logarithm Each branch L(z) of log z on an open set S is the inverse of a restriction of the exponential function, namely the restriction to the image L(S). Since the exponential function is holomorphic (that is, complex differentiable) with nonvanishing derivative, the complex analogue of the inverse function theorem applies. It shows that L(z) is holomorphic on S, and L′(z) = 1/z for each z in S. Another way to prove this is to check the Cauchy–Riemann equations in polar coordinates. Constructing branches via integration The function ln x for real x > 0 can be constructed by the formula ln x = ∫_1^x dt/t. If the range of integration started at a positive number a other than 1, the formula would have to be ln x = ln a + ∫_a^x dt/t instead. In developing the analogue for the complex logarithm, there is an additional complication: the definition of the complex integral requires a choice of path. Fortunately, if the integrand is holomorphic, then the value of the integral is unchanged by deforming the path (while holding the endpoints fixed), and in a simply connected region U (a region with "no holes"), any path from a to z inside U can be continuously deformed inside U into any other. All this leads to the following: If U is a simply connected open subset of C not containing 0, then a branch of log z defined on U can be constructed by choosing a starting point a in U, choosing a logarithm b of a, and defining L(z) = b + ∫_a^z dw/w for each z in U. The complex logarithm as a conformal map Any holomorphic map f satisfying f′(z) ≠ 0 for all z in its domain U is a conformal map, which means that if two curves passing through a point a of U form an angle α (in the sense that the tangent lines to the curves at a form an angle α), then the images of the two curves form the same angle α at f(a). Since a branch of log z is holomorphic, and since its derivative 1/z is never 0, it defines a conformal map. For example, the principal branch w = Log z, viewed as a mapping from C \ (−∞, 0] to the horizontal strip defined by |Im w| < π, has the following properties, which are direct consequences of the formula in terms of polar form: Circles in the z-plane centered at 0 are mapped to vertical segments in the w-plane connecting a − πi to a + πi, where a is the real log of the radius of the circle. Rays emanating from 0 in the z-plane are mapped to horizontal lines in the w-plane. Each circle and ray in the z-plane as above meet at a right angle.
Their images under Log are a vertical segment and a horizontal line (respectively) in the w-plane, and these too meet at a right angle. This is an illustration of the conformal property of Log. The associated Riemann surface Construction The various branches of log z cannot be glued to give a single continuous function on C \ {0}, because two branches may give different values at a point where both are defined. Compare, for example, the principal branch Log(z) on C \ (−∞, 0], with imaginary part θ in (−π, π), and the branch L(z) on C \ [0, ∞), whose imaginary part θ lies in (0, 2π). These agree on the upper half plane, but not on the lower half plane. So it makes sense to glue the domains of these branches only along the copies of the upper half plane. The resulting glued domain is connected, but it has two copies of the lower half plane. Those two copies can be visualized as two levels of a parking garage, and one can get from the Log level of the lower half plane up to the L level of the lower half plane by going 2π radians counterclockwise around 0, first crossing the positive real axis (of the Log level) into the shared copy of the upper half plane and then crossing the negative real axis (of the L level) into the L level of the lower half plane. One can continue by gluing branches with imaginary part θ in (π, 3π), in (2π, 4π), and so on, and in the other direction, branches with imaginary part θ in (−2π, 0), in (−3π, −π), and so on. The final result is a connected surface R that can be viewed as a spiraling parking garage with infinitely many levels extending both upward and downward. This is the Riemann surface R associated to log z. A point on R can be thought of as a pair (z, θ) where θ is a possible value of the argument of z. In this way, R can be embedded in the product of the complex plane and the real line. The logarithm function on the Riemann surface Because the domains of the branches were glued only along open sets where their values agreed, the branches glue to give a single well-defined function log_R from R to C. It maps each point (z, θ) on R to ln|z| + iθ. This process of extending the original branch Log by gluing compatible holomorphic functions is known as analytic continuation. There is a "projection map" from R down to C \ {0} that "flattens" the spiral, sending (z, θ) to z. For any z in C \ {0}, if one takes all the points (z, θ) of R lying "directly above" z and evaluates log_R at all these points, one gets all the logarithms of z. Gluing all branches of log z Instead of gluing only the branches chosen above, one can start with all branches of log z, and simultaneously glue every pair of branches L1 (on U1) and L2 (on U2) along the largest open subset of U1 ∩ U2 on which L1 and L2 agree. This yields the same Riemann surface R and function log_R as before. This approach, although slightly harder to visualize, is more natural in that it does not require selecting any particular branches. If U′ is an open subset of R projecting bijectively to its image U in C \ {0}, then the restriction of log_R to U′ corresponds to a branch of log z defined on U. Every branch of log z arises in this way. The Riemann surface as a universal cover The projection map realizes R as a covering space of C \ {0}. In fact, it is a Galois covering with deck transformation group isomorphic to the integers Z, generated by the homeomorphism sending (z, θ) to (z, θ + 2π). As a complex manifold, R is biholomorphic with C via log_R. (The inverse map sends w to (e^w, Im w).) This shows that R is simply connected, so R is the universal cover of C \ {0}. Applications The complex logarithm is needed to define exponentiation in which the base is a complex number. Namely, if a and b are complex numbers with a ≠ 0, one can use the principal value to define a^b = e^(b Log a). One can also replace Log a by other logarithms of a to obtain other values of a^b, differing by factors of the form e^(2πikb). The expression a^b has a single value if and only if b is an integer.
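The multivaluedness of complex exponentiation described above can be seen numerically; a sketch in Python with the classic example a = b = i (our own illustration):

import cmath, math

a = b = 1j
print(a ** b)    # approx 0.2079, the principal value e**(-pi/2), a real number

# Other values of a**b come from the other logarithms of a: Log a + 2*pi*i*k.
for k in (-1, 0, 1):
    log_a = cmath.log(a) + 2j * math.pi * k
    print(k, cmath.exp(b * log_a))   # e**(-pi/2 - 2*pi*k) for each k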
Because trigonometric functions can be expressed as rational functions of e^(iz), the inverse trigonometric functions can be expressed in terms of complex logarithms. In electrical engineering, the propagation constant involves a complex logarithm. Generalizations Logarithms to other bases Just as for real numbers, one can define for complex numbers b and x: log_b x = (log x)/(log b), with the only caveat that its value depends on the choice of a branch of log defined at b and x (with log b ≠ 0). For example, using the principal value gives log_i(−1) = Log(−1)/Log(i) = (πi)/(πi/2) = 2. Logarithms of holomorphic functions If f is a holomorphic function on a connected open subset U of C, then a branch of log f on U is a continuous function g on U such that e^(g(z)) = f(z) for all z in U. Such a function g is necessarily holomorphic with g′(z) = f′(z)/f(z) for all z in U. If U is a simply connected open subset of C, and f is a nowhere-vanishing holomorphic function on U, then a branch of log f defined on U can be constructed by choosing a starting point a in U, choosing a logarithm b of f(a), and defining g(z) = b + ∫_a^z f′(w)/f(w) dw for each z in U.
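For computation, Python's standard cmath.log accepts an optional base and implements the quotient formula above using principal values (a sketch; the example is ours):

import cmath

# log base i of -1: Log(-1)/Log(i) = (pi*i)/(pi*i/2) = 2
print(cmath.log(-1, 1j))   # approx (2+0j)
print((1j) ** 2)           # (-1+0j), confirming i**2 = -1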
Mathematics
Specific functions
null
2647057
https://en.wikipedia.org/wiki/Trinomial
Trinomial
In elementary algebra, a trinomial is a polynomial consisting of three terms or monomials. Examples of trinomial expressions: 3x + 5y + 8z, with x, y, z variables; 3t + 9s^2 + 3y^3, with t, s, y variables; 3ts + 9t^2 s − 8ts^2, with t, s variables; ax^2 + bx + c, the quadratic polynomial in standard form, with variable x and constants a, b, c (a ≠ 0); Ax^a + By^b + Cz^c, with x, y, z variables, the exponents a, b, c nonnegative integers and A, B, C any constants; Px^a + Qx^b + Rx^c, where x is a variable, the exponents a, b, c are nonnegative integers and P, Q, R are any constants. Trinomial equation A trinomial equation is a polynomial equation involving three terms. An example is the equation x = q + x^m studied by Johann Heinrich Lambert in the 18th century. Some notable trinomials The quadratic trinomial in standard form (as from above): ax^2 + bx + c. The sum or difference of two cubes: a^3 ± b^3 = (a ± b)(a^2 ∓ ab + b^2). A special type of trinomial can be factored in a manner similar to quadratics since it can be viewed as a quadratic in a new variable (x^n below). This form, x^(2n) + rx^n + s, is factored as: x^(2n) + rx^n + s = (x^n + a1)(x^n + a2), where a1 + a2 = r and a1 a2 = s. For instance, the polynomial x^4 + 3x^2 − 10 is an example of this type of trinomial with n = 2. The solution a1 = 5 and a2 = −2 of the above system gives the trinomial factorization: x^4 + 3x^2 − 10 = (x^2 + 5)(x^2 − 2). The same result can be provided by Ruffini's rule, but with a more complex and time-consuming process.
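A quick computational check of this factorization (a sketch using the third-party SymPy library; the polynomial is the worked example above):

import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**4 + 3*x**2 - 10))   # (x**2 - 2)*(x**2 + 5)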
Mathematics
Basics
null
2648952
https://en.wikipedia.org/wiki/Charonia
Charonia
Charonia is a genus of very large sea snails, commonly known as Triton's trumpet or Triton snail. They are marine gastropod mollusks in the monotypic family Charoniidae. They are one of the few natural predators of the crown-of-thorns starfish. Etymology The common name "Triton's trumpet" is derived from the Greek god Triton, who was the son of Poseidon, god of the sea. The god Triton is often portrayed blowing a large seashell horn similar to this species. Fossil records This genus is known in the fossil record as far back as the Cretaceous period. Fossils are found in marine strata throughout the world. Description Species within the genus Charonia have large fusiform shells, usually whitish with brown or yellow markings. The shell of the giant triton Charonia tritonis (Linnaeus, 1758), which lives in the Indo-Pacific, can grow to over half a metre (20 inches) in length. One slightly smaller but still very large species, Charonia variegata (Lamarck, 1816), lives in the western Atlantic, from North Carolina to Brazil. Distribution Charonia species inhabit temperate and tropical waters worldwide. Life habits Unlike pulmonate and opisthobranch gastropods, tritons are not hermaphrodites; they have separate sexes and undergo sexual reproduction with internal fertilization. The female deposits white capsules in clusters, each of which contains many developing larvae. The larvae emerge free-swimming and enter the plankton, where they drift in open water for up to three months. Feeding behavior Adult tritons are active predators and feed on other molluscs and starfish. The giant triton has gained fame for its ability to capture and eat crown-of-thorns starfish, a large species (up to 1 m in diameter) covered in venomous spikes an inch long. The crown-of-thorns starfish has few other natural predators and is capable of destroying large sections of coral reef. Tritons can be observed to turn and give chase when the scent of prey is detected. Some starfish (including the crown-of-thorns starfish) appear to be able to detect the approach of the mollusc by means which are not clearly understood, and they will attempt flight before any physical contact has taken place. Tritons, however, are faster than starfish, and only large starfish have a reasonable hope of escape, and then only by abandoning whichever limb the snail seizes first. The triton grips its prey with its muscular foot and uses its toothy radula (a serrated, scraping organ found in gastropods) to saw through the starfish's armoured skin. Once it has penetrated, a paralyzing saliva subdues the prey and the snail feeds at leisure, often beginning with the softest parts such as the gonads and gut. Tritons ingest smaller prey animals whole without troubling to paralyze them, and will spit out any poisonous spines, shells, or other unwanted parts later.
Species and subspecies Species within the genus Charonia include: Charonia guichemerrei Lozouet, 1998 † Charonia lampas (Linnaeus, 1758) Charonia marylenae Petuch & Berschauer, 2020 Charonia seguenzae (Aradas & Benoit, 1872) Charonia tritonis (Linnaeus, 1758) Charonia variegata (Lamarck, 1816) - Caribbean Triton's trumpet Charonia veterior Lozouet, 1999 † Synonymized species Charonia capax Finlay, 1926: synonym of Charonia lampas (Linnaeus, 1758) Charonia digitalis (Reeve, 1844): synonym of Maculotriton serriale (Deshayes, 1834) Charonia eucla Hedley, 1914: synonym of Charonia lampas (Linnaeus, 1758) Charonia eucla instructa Iredale, 1929: synonym of Charonia lampas (Linnaeus, 1758) Charonia grandimaculatus Reeve: synonym of Lotoria grandimaculata (Reeve, 1844) Charonia maculosum Gmelin: synonym of Colubraria maculosa (Gmelin, 1791) (new combination) Charonia mirabilis Parenzan, 1970: synonym of Charonia lampas (Linnaeus, 1758) Charonia nodifera (Lamarck, 1822): synonym of Charonia lampas (Linnaeus, 1758) Charonia poecilostoma Smith, 1915: synonym of Ranella gemmifera (Euthyme, 1889) Charonia powelli Cotton, 1957: synonym of Charonia lampas (Linnaeus, 1758) Charonia rubicunda (Perry, 1811): synonym of Charonia lampas (Linnaeus, 1758) Charonia sauliae (Reeve, 1844): synonym of Charonia lampas (Linnaeus, 1758) Charonia seguenzae (Aradas & Benoit, 1872): synonym of Charonia variegata (Lamarck, 1816) Charonia variegatus Reeve: synonym of Charonia variegata (Lamarck, 1816)
Biology and health sciences
Gastropods
Animals
2649115
https://en.wikipedia.org/wiki/Submarine%20volcano
Submarine volcano
Submarine volcanoes are underwater vents or fissures in the Earth's surface from which magma can erupt. Many submarine volcanoes are located near areas of tectonic plate formation, known as mid-ocean ridges. The volcanoes at mid-ocean ridges alone are estimated to account for 75% of the magma output on Earth. Although most submarine volcanoes are located in the depths of seas and oceans, some also exist in shallow water, and these can discharge material into the atmosphere during an eruption. The total number of submarine volcanoes is estimated to be over one million (most are now extinct), of which some 75,000 rise more than 1 km above the seabed. Only 119 submarine volcanoes in Earth's oceans and seas are known to have erupted during the last 11,700 years. Hydrothermal vents, sites of abundant biological activity, are commonly found near submarine volcanoes. Effect of water on volcanoes The presence of water can greatly alter the characteristics of a volcanic eruption and the explosions of underwater volcanoes in comparison to those on land. For instance, water causes magma to cool and solidify much more quickly than in a terrestrial eruption, often turning it into volcanic glass. The shapes and textures of lava formed by submarine volcanoes are different from lava erupted on land. Upon contact with water, a solid crust forms around the lava. Advancing lava flows into this crust, forming what is known as pillow lava. Below ocean depths of about 2,200 m, where the pressure exceeds the critical pressure of water (22.06 MPa or about 218 atmospheres for pure water), it can no longer boil; it becomes a supercritical fluid. Without boiling sounds, deep-sea volcanoes can be difficult to detect at great distances using hydrophones. The critical temperature and pressure increase in solutions of salts, which are normally present in seawater. The composition of the aqueous solution in the vicinity of hot basalt, circulating within the conduits of hot rocks, is expected to differ from that of bulk water (i.e., of sea water away from the hot surfaces). One estimate puts the critical point at about 407 °C and 29.9 MPa, for a solution composition corresponding to approximately 3.2% NaCl. Research Scientists still have much to learn about the location and activity of underwater volcanoes. In the first two decades of this century, NOAA's Office of Ocean Exploration funded exploration of submarine volcanoes, with the Ring of Fire missions to the Mariana Arc in the Pacific Ocean being particularly noteworthy. Using remotely operated vehicles (ROVs), scientists studied underwater eruptions, ponds of molten sulfur, black smoker chimneys and even marine life adapted to this deep, hot environment. Research from the ROV KAIKO off the coast of Hawaii has suggested that pahoehoe lava flows occur underwater, and that the degree of the submarine terrain slope and the rate of lava supply determine the shape of the resulting lobes. In August 2019, news media reported a large pumice raft floating in the South Pacific between Fiji and Tonga. Subsequent scientific investigations revealed the pumice raft originated from the eruption of a nearby submarine volcano, which was directly observed as a volcanic plume in satellite images. This discovery will help scientists better detect the precursors of a submarine eruption, such as low-frequency earthquakes or hydrophone data, using machine learning. Seamounts Many submarine volcanoes are seamounts, typically extinct volcanoes that rise abruptly from a seafloor of 1,000–4,000 m depth.
They are defined by oceanographers as independent features that rise to at least 1,000 m above the seafloor. The peaks are often found hundreds to thousands of meters below the surface, and are therefore considered to be within the deep sea. An estimated 30,000 seamounts occur across the globe, with only a few having been studied. However, some seamounts are also unusual. For example, while the summits of seamounts are normally hundreds of meters below sea level, the Bowie Seamount in Canada's Pacific waters rises from a depth of about 3,000 m to within about 24 m of the sea surface. Identifying types of eruptions by sounds There are two types of sound generated by submarine eruptions: one is created by the slow release and bursting of large lava bubbles, the other by quick explosions of gas bubbles. Being able to distinguish the two helps measure the related effects on marine animals and ecosystems; the volume and composition of the lava flow can also be estimated and built into a model to extrapolate potential effects. Scientists have connected sounds to sights in both types of eruptions. In 2009, a video camera and a hydrophone were deployed deep below the surface of the Pacific Ocean near Samoa, watching and listening as the West Mata Volcano erupted in several ways. Putting video and audio together let researchers learn the sounds made by slow lava bursting and the different noises made by hundreds of gas bubbles.
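The critical-pressure figure discussed above can be turned into an approximate depth with the hydrostatic relation p = ρgh. A back-of-the-envelope sketch in Python (the seawater density is a typical assumed value, and atmospheric pressure is ignored):

P_CRIT = 22.06e6      # Pa, critical pressure of pure water
RHO_SEAWATER = 1025   # kg/m^3, typical near-surface seawater density
G = 9.81              # m/s^2

depth = P_CRIT / (RHO_SEAWATER * G)
print(f"approx {depth:.0f} m")   # about 2,200 m, consistent with the text above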
Physical sciences
Volcanology
Earth science
2649805
https://en.wikipedia.org/wiki/Aepycamelus
Aepycamelus
Aepycamelus is an extinct genus of camelids that lived during the Miocene, 20.6–4.9 million years ago, existing for about 15.7 million years. Its name is derived from the Homeric Greek αἰπύς, "high and steep", and κάμηλος, "camel"; thus, "high camel"; alticamelus in Latin. Aepycamelus spp. walked on their toes only. Unlike earlier species of camelids, they possessed cushioned pads like those of modern camels. Taxonomy Aepycamelus was formerly referred to the genus Alticamelus, which Matthew (1901) erected for "Procamelus" altus Marsh, 1894, a camel species described from a calcaneum found in Neogene deposits in Oregon, after he referred a complete skeleton of a tall camel from Colorado to that species. Matthew and Cook (1909) erected Alticamelus giraffinus for the Colorado specimen after recognizing the A. altus holotype as indeterminate. MacDonald (1956) recognized Alticamelus as a nomen dubium and erected Aepycamelus for species previously assigned to Alticamelus. Morphology Aepycamelus was a prairie dweller of North America (Colorado, etc.). It was a highly specialized animal. Its head was relatively small compared with the rest of its body, its neck was long, as a result of giraffe-like lengthening of the cervical vertebrae, and its legs were long and stilt-like, with the elbow and knee joints on the same level. The top of its head was about 3 m (10 ft) above the ground. Its unusual body structure gives information on its mode of life and habits. Aepycamelus evidently inhabited dry grasslands with groups of trees. It is presumed to have moved about singly or in small groups, like today's giraffes, and like them, browsed high up in the trees. In this respect, it had no competitors. It survived a relatively long time, through most of the Miocene epoch, and died out prior to the start of the Pliocene, possibly due to climatic changes. Fossil distribution Its fossils are distributed widely, from Montana to Florida to California.
Biology and health sciences
Camelidae
Animals
2649947
https://en.wikipedia.org/wiki/Economic%20analysis%20of%20climate%20change
Economic analysis of climate change
An economic analysis of climate change uses economic tools and models to calculate the magnitude and distribution of damages caused by climate change. It can also give guidance for the best policies for mitigation and adaptation to climate change from an economic perspective. There are many economic models and frameworks. For example, in a cost–benefit analysis, the trade-offs between climate change impacts, adaptation, and mitigation are made explicit. For this kind of analysis, integrated assessment models (IAMs) are useful. These models link the main features of society and economy with the biosphere and atmosphere into one modelling framework. The total economic impacts from climate change are difficult to estimate. In general, they increase the more the global surface temperature increases (see climate change scenarios). Many effects of climate change are linked to market transactions and therefore directly affect metrics like GDP or inflation. However, there are also non-market impacts, which are harder to translate into economic costs. These include the impacts of climate change on human health, biomes and ecosystem services. Economic analysis of climate change is challenging as climate change is a long-term problem. Furthermore, there is still a lot of uncertainty about the exact impacts of climate change and the associated damages to be expected. Future policy responses and socioeconomic development are also uncertain. Economic analysis also looks at the economics of climate change mitigation and the cost of climate adaptation. Mitigation costs will vary according to how and when emissions are cut. Early, well-planned action will minimize the costs. Globally, the benefits and co-benefits of keeping warming under 2 °C exceed the costs. Cost estimates for mitigation for specific regions depend on the quantity of emissions allowed for that region in future, as well as the timing of interventions. Economists estimate the incremental cost of climate change mitigation at less than 1% of GDP. The costs of planning, preparing for, facilitating and implementing adaptation are also difficult to estimate, depending on different factors. Across all developing countries, they have been estimated at about USD 215 billion per year up to 2030, and are expected to be higher in the following years. Purposes Economic analysis of climate change is an umbrella term for a range of investigations into the economic costs around the effects of climate change, and for preventing or softening those effects. These investigations can serve any of the following purposes: estimating the potential global aggregate economic costs of climate change (i.e. global climate damages); estimating sectoral or regional economic costs of climate change (e.g. costs to the agriculture sector or energy services); estimating economic costs of facilitating and implementing climate change mitigation and adaptation strategies (varying with the objectives and the levels of action required), see also economics of climate change mitigation; monetising the projected impacts to society per additional metric tonne of carbon emissions (social cost of carbon); informing decisions about global climate management strategy (through UN institutions) or policy decisions in some countries. The economic impacts of climate change also include the effects of any mitigation (for example, limiting the global average temperature below 2 °C) or adaptation (for example, building flood defences) measures employed by nations or groups of nations, which may have economic consequences.
They also take into account that some regions or sectors benefit from low levels of warming, for example through lower energy demand or agricultural advantages in some markets. There are wider policy (and policy coherence) considerations of interest. For example, in some areas, policies designed to mitigate climate change may contribute positively towards other sustainable development objectives, such as abolishing fossil fuel subsidies, which would reduce air pollution and thus save lives. Direct global fossil fuel subsidies reached $319 billion in 2017, and $5.2 trillion when indirect costs such as air pollution are priced in. In other areas, the cost of climate change mitigation may divert resources away from other socially and environmentally beneficial investments (the opportunity costs of climate change policy). Types of economic models Various economic tools are employed to understand the economic aspects around impacts of climate change, climate change mitigation and adaptation. Several sets of tools or approaches exist. Econometric models (statistical models) are used to integrate the broad impacts of climate change with other economic drivers, to quantify the economic costs and assess the value of climate-related policies, often for a specific sector or region. Structural economic models look at market and non-market impacts affecting the whole economy through its inputs and outputs. Process models simulate physical, chemical and biological processes under climate change, and the economic effects. The main categories are process-based models, structural models (including computable general equilibrium models), and aggregate cost-benefit models. Aggregate cost-benefit models Integrated assessment models (IAMs) are also used to make aggregate estimates of the costs of climate change. These (cost-benefit) models balance the economic implications of mitigation and climate damages to identify the pathway of emissions reductions that will maximize total economic welfare. In other words, the trade-offs between climate change impacts, adaptation, and mitigation are made explicit. The costs of each policy and the outcomes modelled are converted into monetary estimates. The models incorporate aspects of the natural, social, and economic sciences in a highly aggregated way. Compared to other climate-economy models (including process-based IAMs), they do not have the structural detail necessary to model interactions with energy systems, land use etc. and their economic implications. Statistical (econometric) methods A more recent modelling approach uses empirical, statistical methods to investigate how the economy is affected by weather variation. This approach can causally identify effects of temperature, rainfall and other climate variables on agriculture, energy demand, industry and other economic activity. Panel data are used, giving weather variation over time and across spatial areas, e.g., ground station observations or (interpolated) gridded data. These are typically aggregated for economic analysis, e.g., to investigate effects on national economies. These studies examine temperature and rainfall, and events such as droughts and windstorms. They show, for example, that hot years are linked to lower income growth in poor countries, and that low rainfall is linked to reduced incomes in Africa. Other econometric studies show that there are negative impacts of hotter temperatures on agricultural output, and on labour productivity in factories, call centres and in outdoor industries such as mining and forestry.
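A minimal sketch of the panel-regression idea just described, using synthetic data and a two-way fixed-effects (within) estimator (illustrative only; the variable names and the assumed effect size are invented, not results from the cited studies):

import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 20, 30
temp = rng.normal(0.0, 1.0, (n_countries, n_years))      # temperature anomaly
growth = -0.5 * temp + rng.normal(0.0, 1.0, temp.shape)  # assumed true effect: -0.5

# Demean within each country and each year (two-way fixed effects),
# then regress growth on temperature by ordinary least squares.
def demean(a):
    return a - a.mean(axis=1, keepdims=True) - a.mean(axis=0, keepdims=True) + a.mean()

t = demean(temp).ravel()
g = demean(growth).ravel()
beta = (t @ g) / (t @ t)
print(f"estimated effect of a hotter-than-usual year on growth: {beta:.2f}")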
Such analyses are used to estimate the costs of climate change in the future. Analytical frameworks Cost–benefit analysis Standard cost–benefit analysis (CBA) has been applied to the problem of climate change. In a CBA framework, the negative and positive impacts associated with a given action are converted into monetary estimates. This is also referred to as a monetized cost–benefit framework. Various types of model can provide information for CBA, including energy-economy-environment models (process models) that study energy systems and their transitions. Some of these models may include a physical model of the climate. Computable general equilibrium (CGE) structural models investigate the effects of policies (including climate policies) on economic growth, trade, employment, and public revenues. However, most CBA analyses are produced using aggregate integrated assessment models. These aggregate-type IAMs are specifically designed for doing CBA of climate change. The CBA framework requires (1) the valuation of costs and benefits using willingness to pay (WTP) or willingness to accept (WTA) compensation as a measure of value, and (2) a criterion for accepting or rejecting proposals. For (1), in CBA where WTP/WTA is used, climate change impacts are aggregated into a monetary value, with environmental impacts converted into consumption equivalents, and risk accounted for using certainty equivalents. Values over time are then discounted to produce their equivalent present values. The valuation of costs and benefits of climate change can be controversial because some climate change impacts are difficult to assign a value to, e.g., ecosystems and human health. For (2), the standard criterion is the Kaldor–Hicks compensation principle. According to the compensation principle, so long as those benefiting from a particular project compensate the losers, and there is still something left over, then the result is an unambiguous gain in welfare. If there are no mechanisms allowing compensation to be paid, then it is necessary to assign weights to particular individuals. One of the mechanisms for compensation is impossible for this problem: mitigation might benefit future generations at the expense of current generations, but there is no way that future generations can compensate current generations for the costs of mitigation. On the other hand, should future generations bear most of the costs of climate change, compensation to them would not be possible. CBA has several strengths: it offers an internally consistent and globally comprehensive analysis of impacts. Furthermore, sensitivity analysis allows critical assumptions in CBA analysis to be changed. This can identify areas where the value of information is highest and where additional research might have the highest payoffs. However, there are many uncertainties that affect cost–benefit analysis, for example, sector- and country-specific damage functions. Damage functions Damage functions play an important role in estimating the costs associated with potential damages caused by climate-related hazards. They quantify the relationship between the intensity of the hazard, other factors such as the vulnerability of the system, and the resulting damages. For example, damage functions have been developed for sea level rise, agricultural productivity, and heat effects on labour productivity. In a CBA framework, damages are monetized to facilitate comparison with the benefits of proposed actions or policies.
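A stylized sketch of how a damage function and discounting combine in a CBA-style calculation (all parameter values here are invented for illustration and are not estimates from any published model):

# Quadratic damage function: damages as a fraction of GDP, D(T) = A * T**2.
A = 0.002   # illustrative damage coefficient (fraction of GDP per degC squared)
R = 0.03    # illustrative discount rate

def present_value_of_damages(warming_path, gdp0=100.0, growth=0.02):
    pv = 0.0
    for year, temp in enumerate(warming_path):
        gdp = gdp0 * (1 + growth) ** year            # economy grows over time
        pv += A * temp ** 2 * gdp / (1 + R) ** year  # discount each year's damages
    return pv

path = [1.0 + 0.03 * y for y in range(80)]   # warming rising from 1.0 to about 3.4 degC
print(f"present value of damages: {present_value_of_damages(path):.1f} (year-0 GDP units)")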
Sensitivity analysis is conducted to assess the robustness of the results to changes in assumptions and parameters, including those of the damage function. Cost-effectiveness analysis Cost-effectiveness analysis (CEA) is preferable to CBA when the benefits of impacts, adaptation and mitigation are difficult to estimate in monetary terms. A CEA can be used to compare different policy options for achieving a well-defined goal. This goal (i.e. the benefit) is usually expressed as the amount of GHG emissions reduction in the analysis of mitigation measures. For adaptation measures, there is no single common goal or metric for the economic benefits. Adaptation involves responding to different types of risks in different sectors and local contexts. For example, the goal might be the reduction of the land area in hectares at risk from sea level rise. CEA involves costing each option and providing a cost per unit of effectiveness, for example, a cost per tonne of GHG reduced ($/tCO2). This allows the ranking of policy options. This ranking can help decision-makers understand which are the most cost-effective options, i.e. those that deliver high benefits for low costs. CEA can be used for minimising net costs for achieving pre-defined policy targets, such as meeting an emissions reduction target for a given sector. CEA, like CBA, is a type of decision analysis method. Many of these methods work well when different stakeholders work together on a problem to understand and manage risks, for example, by discussing how well certain options might work in the real world, or by helping to measure the costs and benefits as part of a CEA. Some authors have focused on a disaggregated analysis of climate change impacts. "Disaggregated" refers to the choice to assess impacts in a variety of indicators or units, e.g., changes in agricultural yields and loss of biodiversity. By contrast, monetized CBA converts all impacts into a common unit (money), which is used to assess changes in social welfare. Scenario-based assessments The long time scales and uncertainty associated with global warming have led analysts to develop "scenarios" of future environmental, social and economic changes. These scenarios can help governments understand the potential consequences of their decisions. The projected temperature in climate change scenarios is subject to scientific uncertainty (e.g., the relationship between concentrations of GHGs and global mean temperature, which is called the climate sensitivity). Projections of future atmospheric concentrations based on emission pathways are also affected by scientific uncertainties, e.g., over how carbon sinks, such as forests, will be affected by future climate change. One of the economic aspects of climate change is producing scenarios of future economic development. Future economic developments can, for example, affect how vulnerable society is to future climate change, what the future impacts of climate change might be, as well as the level of future GHG emissions. Scenarios are neither "predictions" nor "forecasts" but are stories of possible futures that provide alternate outcomes relevant to a decision-maker or other user. These alternatives usually also include a "baseline" or reference scenario for comparison. "Business-as-usual" scenarios have been developed in which there are no additional policies beyond those currently in place, and socio-economic development is consistent with recent trends. This term is now used less frequently than in the past.
In scenario analysis, scenarios are developed that are based on differing assumptions of future development patterns. An example is the shared socioeconomic pathways produced by the Intergovernmental Panel on Climate Change (IPCC). These project a wide range of possible future emissions levels. Scenarios often support sector-specific analysis of the physical effects and economic costs of climate change. Scenarios are used with cost–benefit analysis or cost-effectiveness analysis of climate policies. Risk management Risk management can be used to evaluate policy decisions based on a range of criteria or viewpoints, and is not restricted to the results of a particular type of analysis, e.g., monetized CBA. Another approach is that of uncertainty analysis, where analysts attempt to estimate the probability of future changes in emission levels. In a cost–benefit analysis, an acceptable risk means that the benefits of a climate policy outweigh the costs of the policy. The standard rule used by public and private decision makers is that a risk will be acceptable if the expected net present value is positive. The expected value is the mean of the distribution of expected outcomes. In other words, it is the average expected outcome for a particular decision. This criterion has been justified on the basis that (1) a policy's benefits and costs have known probabilities, and (2) economic agents (people and organizations) can diversify their own risk through insurance and other markets. On the second point, it has been suggested that insurance could be bought against climate change risks. Policymakers and investors are beginning to recognize the implications of climate change for the financial sector, from both physical risks (damage to property, infrastructure, and land) and transition risk due to changes in policy, technology, and consumer and market behavior. Financial institutions are becoming increasingly aware of the need to incorporate the economics of low carbon emissions into business models. In the scientific literature, there is sometimes a focus on "best estimate" or "likely" values of climate sensitivity. However, from a risk management perspective, values outside of "likely" ranges are relevant, because, though these values are less probable, they could be associated with more severe climate impacts (the statistical definition of risk = probability of an impact × magnitude of the impact). Analysts have also looked at how uncertainty over climate sensitivity affects economic estimates of climate change impacts. Policy guidance from cost–benefit analysis (CBA) can be extremely divergent depending on the assumptions employed. Hassler et al. use integrated assessment modeling to examine a range of estimates and what happens at extremes. Iterative risk management Two related ways of thinking about the problem of climate change decision-making in the presence of uncertainty are iterative risk management and sequential decision making. Considerations in a risk-based approach might include, for example, the potential for low-probability, worst-case climate change impacts. One of the responses to the uncertainties of global warming is to adopt a strategy of sequential decision making. Sequential decision making refers to the process in which the decision maker makes consecutive observations of the process before making a final decision.
This strategy recognizes that decisions on global warming need to be made with incomplete information, and that decisions in the near term will have potentially long-term impacts. Governments may use risk management as part of their policy response to global warming. An approach based on sequential decision making recognizes that, over time, decisions related to climate change can be revised in the light of improved information. This is particularly important with respect to climate change, due to the long-term nature of the problem. A near-term hedging strategy concerned with reducing future climate impacts might favor stringent, near-term emissions reductions. As stated earlier, carbon dioxide accumulates in the atmosphere, and to stabilize its atmospheric concentration, emissions would need to be drastically reduced from their present level. Stringent near-term emissions reductions allow for greater future flexibility with regard to a low stabilization target, e.g., 450 parts per million (ppm) CO2. To put it differently, stringent near-term emissions abatement can be seen as having an option value in allowing for lower, long-term stabilization targets. This option may be lost if near-term emissions abatement is less stringent. On the other hand, a view may be taken that points to the benefits of improved information over time. This may suggest an approach where near-term emissions abatement is more modest. Another way of viewing the problem is to weigh the potential irreversibility of future climate change impacts (e.g., damages to biomes and ecosystems) against the irreversibility of making investments in efforts to reduce emissions. Portfolio analysis An example of a framework that is based on risk management is portfolio analysis. This approach is based on portfolio theory, originally applied in the areas of finance and investment. It has also been applied to the analysis of climate change. The idea is that a reasonable response to uncertainty is to invest in a wide portfolio of options. More specifically, the aim is to minimise the variance and covariance of the performance of investments in the portfolio. In the case of climate change mitigation, performance is measured by how much GHG emissions reduction is achieved. Climate change adaptation, on the other hand, acts as insurance against the chance that unfavourable impacts occur. The performance of adaptation options could be defined either in economic terms, e.g. revenue, or as physical metrics, e.g. the quantity of water conserved. It is important to compare alternative portfolios of options across different future climate change scenarios in order to take into account uncertainty in climate impacts, GHG emission trends, etc. The options should ideally be diversified to be effective in different scenarios: i.e. some options suited for a no/low climate change scenario, with other options suited for scenarios with severe climate changes. Investment and financial flows Investment and financial flow (I&FF) studies typically consider how much it might cost to increase the resilience of future investments or financial flows. They also investigate the potential sources of investment funds and the types of financing entities or actors. Aggregated studies assess the sensitivity of future investments, estimating the risk from climate change and the additional investment needed to increase resilience.
More detailed studies undertake investment and financial flow analysis at a sectoral level to provide detailed costing of the additional marginal costs needed for building resilience. Costs of impacts of climate change At the global level (aggregate costs) Global aggregate costs (also known as global damages or losses) sum up the predicted impacts of climate change across all market sectors (e.g. including costs to agriculture, energy services and tourism) and can also include non-market impacts (e.g. on ecosystems and human health) for which it is possible to assign monetary values. A study in 2024 projected that by 2050, climate change will reduce average global incomes by a likely 19% (confidence interval 11–29%), relative to a counterfactual in which no climate change occurs. The global economy and per capita income would still grow relative to the present, but the global annual damages would reach about $38 trillion (in 2005 International dollars) by 2050, and would increase substantially further under high emissions. In comparison, limiting global warming to 2 °C would by 2050 cost about $6 trillion per year, far less than the anticipated annual damages, emphasizing the economic benefits of proactive climate mitigation. Another study, which examined data from the last 120 years, found that climate change has already reduced welfare by 29%, and that further temperature rise will bring this number to 47%. The temperature rise during the years 1960–2019 alone has already cut current GDP per capita by 18%. A rise of 1 degree in global temperature reduces global GDP by 12%, and an increase of 3 degrees by 2100 would reduce capital by 50%. The effects would be comparable to experiencing the Great Depression of 1929 permanently. The social cost of carbon implied by this study is 1,065 dollars per tonne of CO2. Global estimates are often based on an aggregation of independent sector and/or regional studies and results, with complex interactions modelled. For example, there is uncertainty in how physical and natural systems may respond to climate change. Potential socioeconomic changes, including how human societies might mitigate and adapt to climate change, also need consideration. The uncertainty and complexities associated with climate change have led analysts to develop "scenarios" with which they can explore different possibilities. Global economic losses due to extreme weather, climate and water events are increasing. Costs have increased sevenfold from the 1970s to the 2010s. Direct losses from disasters have averaged above US$330 billion annually between 2015 and 2021. Climate change has contributed to the increased probability and magnitude of extreme events. When a vulnerable community is exposed to extreme climate or weather events, disasters can occur. Socio-economic factors, such as population growth and increased wealth, have contributed to the observed trend of global disaster losses. This shows that increased exposure is the most important driver of losses. However, part of these losses is also due to human-induced climate change. Extreme event attribution quantifies how climate change is altering the probability and magnitude of extreme events. On a case-by-case basis, it is feasible to estimate how the magnitude and/or probability of an extreme event has shifted due to climate change. These attributable changes have been identified for many individual extreme heat events and rainfall events.
Using all available data on attributable changes, one study estimated the global losses to average US$143 billion per year between 2000 and 2019. This includes a statistical loss-of-life value of US$90 billion and economic damages of US$53 billion per year. Estimates of the economic impacts from climate change in future years are most often measured as a percent change in global GDP, relative to GDP without additional climate change. The 2022 IPCC report compared the latest estimates of many modelling and meta-analysis studies. It found a wide variety of results, which vary depending on the assumptions used in the IPCC socioeconomic scenarios. The same set of scenarios is used in all of the climate models. Estimates are found to increase non-linearly with global average temperature change. Global temperature change projection ranges (corresponding to each cost estimate) are based on the IPCC assessment of the physical science in the same report. It finds that with high warming (~4 °C) and low adaptation, annual global GDP might be reduced by 10–23% by 2100 because of climate change. The same assessment finds smaller GDP changes, with reductions of 1–8%, assuming low warming, more adaptation, and using different models. These global economic cost estimates do not take into account impacts on social well-being or welfare, or distributional effects. Nor do they fully consider climate change adaptation responses. One 2020 study estimated that economic losses due to climate change could be between 127 and 616 trillion dollars extra by 2100 under current commitments, compared to action compatible with 1.5 °C or well below 2 °C. Failure to implement current commitments raises economic losses to 150–792 trillion dollars by 2100. Economic impacts also include inflation from rising insurance premiums, energy costs and food prices. High emissions scenarios The total economic impacts from climate change increase for higher temperature changes. For instance, total damages are estimated to be 90% less if global warming is limited to 1.5 °C compared to 3.66 °C, a warming level chosen to represent no mitigation. In the high-emission scenario of an Oxford Economics study, a temperature rise of 2 degrees by the year 2050 would reduce global GDP by 2.5–7.5%. By the year 2100 in this case, the temperature would rise by 4 degrees, which could reduce global GDP by 30% in the worst case. One 2018 study found that the potential global economic gains, if countries implement mitigation strategies to comply with the 2 °C target set at the Paris Agreement, are in the vicinity of US$17 trillion per year up to 2100, compared to a very high emission scenario. Underestimation of economic impacts Studies in 2019 suggested that economic damages due to climate change have been underestimated and may be severe, with a probability of disastrous tail-risk events. Tipping points are critical thresholds that, when crossed, lead to large, accelerating and often irreversible changes in the climate system. The science of tipping points is complex, and there is great uncertainty as to how they might unfold. Economic analyses often exclude the potential effect of tipping points. A 2018 study noted that the global economic impact is underestimated by a factor of two to eight when tipping points are excluded from consideration. The Stern Review from 2006 for the British Government predicted that world GDP would be reduced by several percent due to climate-related costs.
However, their calculations may omit ecological effects that are difficult to quantify economically (such as human deaths or loss of biodiversity) or whose economic consequences will manifest slowly. Therefore, their calculations may be an underestimate. The study has received both criticism and support from other economists. By region Other studies investigate economic losses through GDP change per country, or per capita within countries. Findings show large differences among countries and within countries. The estimated GDP changes in some developing countries are similar to some of the worst country-level losses during historical economic recessions. Economic losses pose risks to living standards, which are more likely to be severe in developing countries. Climate change can push more people into extreme poverty or keep people poor, especially through particularly climate-sensitive sectors such as agriculture and fisheries. Climate change may also increase income inequality within countries as well as between them, particularly affecting low-income groups. The economic impact of changes in annual mean temperature is estimated to be lower at higher latitudes, despite larger temperature changes, due to lower estimated economic vulnerability to temperature changes. Reduced daily temperature variability at high latitudes shows a positive estimated economic impact, with opposite effects at lower latitudes and in Europe. Economic effects due to changes in total annual precipitation show regional patterns generally opposite to changes in the number of wet days. According to a 2021 study by the reinsurance company Swiss Re, the economies of wealthy countries like the US would likely shrink by approximately 7%, while some developing nations would be devastated, losing around 20% or in some cases 40% of their economic output. A United States government report in November 2018 raised the possibility of US GDP falling by 10% as a result of the warming climate, including huge shifts in geography, demographics and technology. By sector A number of economic sectors will be affected by climate change, including the livestock, forestry, and fisheries industries. Other sectors sensitive to climate change include the energy, insurance, tourism and recreation industries. Health and productivity Among the health impacts that have been studied, aggregate costs of heat stress (through loss of work time) have been estimated, as have the costs of malnutrition. However, it is usual for studies to measure effects on health by aggregating the number of 'years of life lost', adjusted for years lived with disability. In 2019 the International Labour Organization published a report titled "Working on a warmer planet: The impact of heat stress on labour productivity and decent work", which claims that even if the rise in temperature is limited to 1.5 °C, by the year 2030 climate change will cause annual productivity losses reaching 2.2% of all working hours. This is equivalent to 80 million full-time jobs, or 2,400 billion dollars. The sector expected to be most affected is agriculture, which is projected to account for 60% of this loss. The construction sector is also projected to be severely impacted, accounting for 19% of projected losses. Other sectors most at risk are environmental goods and services, refuse collection, emergency services, repair work, transport, tourism, sports and some forms of industrial work.
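As a rough consistency check of the ILO figures above (our back-of-the-envelope arithmetic, not taken from the report):

```python
# Back-of-the-envelope check of how the ILO figures relate (illustrative).
lost_share = 0.022            # 2.2% of all working hours lost per year
fte_equivalent = 80e6         # stated equivalent: 80 million full-time jobs
dollar_loss = 2_400e9         # stated equivalent: 2,400 billion dollars

implied_global_fte = fte_equivalent / lost_share      # ~3.6 billion full-time jobs
implied_loss_per_fte = dollar_loss / fte_equivalent   # ~$30,000 per lost job-year

print(f"implied global workforce: {implied_global_fte / 1e9:.1f} billion FTE")
print(f"implied output per full-time job: ${implied_loss_per_fte:,.0f} per year")
```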
It has been estimated that 3.5 million people die prematurely each year from air pollution from fossil fuels. The health benefits of meeting climate goals substantially outweigh the costs of action. The health benefits of phasing out fossil fuels, measured in money (estimated by economists using the value of life for each country), are substantially greater than the cost of achieving the 2 °C goal of the Paris Agreement. Agriculture Industry Carbon-intensive industries and investors are expected to experience a significant increase in stranded assets, with a potential ripple effect throughout the world economy. Impacts on living costs The effects of climate change contribute to inflation due to additional costs. For example, food prices could rise by as much as 3% per year due to climate change impacts. Climate change was one of the factors involved in the world food crises (2022–2023), which led to higher food prices. Natural disasters fueled by climate change have increased housing costs through insurance and by exacerbating housing shortages when those events make homes unlivable. Utility of aggregated assessment There are a number of benefits of using aggregated assessments to measure the economic impacts of climate change. They allow impacts to be directly compared between different regions and times. Impacts can be compared with other environmental problems and also with the costs of avoiding those impacts. A problem of aggregated analyses is that they often reduce different types of impacts into a small number of indicators. It can be argued that some impacts are not well-suited to this, e.g., the monetization of mortality and loss of species diversity. On the other hand, where there are monetary costs of avoiding impacts, it may not be possible to avoid monetary valuation of those impacts. Costs of climate change mitigation measures Climate change mitigation consists of human actions to reduce greenhouse gas emissions or to enhance carbon sinks that absorb greenhouse gases from the atmosphere. Costs of climate change adaptation measures Challenges and debates Efficiency and equity No consensus exists on who should bear the burden of adaptation and mitigation costs. Several different arguments have been made over how to spread the costs and benefits of taxes or systems based on emissions trading. One approach considers the problem from the perspective of who benefits most from the public good. This approach is sensitive to the fact that different preferences exist between different income classes. The public good is viewed in a similar way as a private good, where those who use the public good must pay for it. Some people will benefit more from the public good than others, thus creating inequalities in the absence of benefit taxes. A difficulty with public goods is determining who exactly benefits from the public good, although some estimates of the distribution of the costs and benefits of global warming have been made – see above. Additionally, this approach does not provide guidance as to how the surplus of benefits from climate policy should be shared. A second approach has been suggested based on economics and the social welfare function. To calculate the social welfare function requires an aggregation of the impacts of climate change policies and climate change itself across all affected individuals. This calculation involves a number of complexities and controversial equity issues, for example, the monetization of certain impacts on human health.
There is also controversy over the issue of benefits affecting one individual offsetting negative impacts on another. These issues of equity and aggregation cannot be fully resolved by economics. On a utilitarian basis, which has traditionally been used in welfare economics, an argument can be made for richer countries taking on most of the burdens of mitigation. However, another result is possible with a different modeling of impacts. If an approach is taken where the interests of poorer people have lower weighting, the result is a much weaker argument in favour of mitigation action in rich countries. Valuing climate change impacts in poorer countries less than domestic climate change impacts (both in terms of policy and the impacts of climate change) would be consistent with observed spending in rich countries on foreign aid. A third approach looks at the problem from the perspective of who has contributed most to the problem. Because the industrialized countries have contributed more than two-thirds of the stock of human-induced GHGs in the atmosphere, this approach suggests that they should bear the largest share of the costs. This stock of emissions has been described as an "environmental debt". In terms of efficiency, this view is not supported. This is because efficiency requires incentives to be forward-looking, not retrospective. The question of historical responsibility is a matter of ethics. It has been suggested that developed countries could address the issue by making side-payments to developing countries. A 2019 modelling study found climate change had contributed towards global economic inequality. Wealthy countries in colder regions had either felt little overall economic impact from climate change, or possibly benefited, whereas poor hotter countries very likely grew less than if global warming had not occurred. Part of this observation stems from the fact that greenhouse gas emissions come mainly from high-income countries, while low-income countries are affected negatively by them. High-income countries produce significant amounts of emissions, but the impacts disproportionately threaten low-income countries, which do not have access to the resources needed to recover from such impacts. This further deepens the inequalities between the poor and the rich, hindering sustainability efforts. Impacts of climate change could even push millions of people into poverty. Insurance and markets Traditional insurance works by transferring risk to those better able or more willing to bear risk, and also by the pooling of risk. Since the risks of climate change are, to some extent, correlated, this reduces the effectiveness of pooling. However, there is reason to believe that different regions will be affected differently by climate change. This suggests that pooling might be effective. Since developing countries appear to be potentially most at risk from the effects of climate change, developed countries could provide insurance against these risks. Disease, rising seas, reduced crop yields, and other harms driven by climate change will likely have a major deleterious impact on the economy by 2050 unless the world sharply reduces greenhouse gas emissions in the near term, according to a number of studies, including a study by the Carbon Disclosure Project and a study by the insurance giant Swiss Re. The Swiss Re assessment found that output by the world economy would be reduced by $23 trillion annually, unless greenhouse gas emissions are adequately mitigated.
As a consequence, according to the Swiss Re study, climate change will affect how the insurance industry prices a variety of risks. Effects of economic growth and degrowth scenarios on emissions Economic growth is one of the causes of increasing greenhouse gas emissions. As the economy expands, demand for energy and energy-intensive goods increases, pushing up CO2 emissions. On the other hand, economic growth may drive technological change and increase energy efficiency. Economic growth may be associated with specialization in certain economic sectors. If specialization is in energy-intensive sectors, then there will be a strong link between economic growth and emissions growth. If specialization is in less energy-intensive sectors, e.g. the services sector, then there might be a weak link between economic growth and emissions growth. In general, there is some degree of flexibility between economic growth and emissions growth. Some studies found that degrowth scenarios, in which economic output declines in terms of contemporary economic metrics such as current GDP, have been neglected in considerations of 1.5 °C scenarios reported by the Intergovernmental Panel on Climate Change (IPCC). They find that some degrowth scenarios "minimize many key risks for feasibility and sustainability compared to technology-driven pathways", with a core problem being their feasibility in the context of contemporary political decision-making, and globalized rebound and relocation effects. This is supported by other studies, which state that absolute decoupling is highly unlikely to be achieved fast enough to prevent global warming over 1.5 °C or 2 °C, even under optimistic policy conditions. Economics of climate change mitigation The economics of climate change mitigation is a contentious part of climate change mitigation – action aimed at limiting the dangerous socio-economic and environmental consequences of climate change. Climate change mitigation centres on two main strategies: the reduction of greenhouse gas (GHG) emissions and the preservation and expansion of sinks that absorb greenhouse gases, such as the sea and forests. The economics of climate change mitigation are a central point of contention, whose considerations significantly affect the level of climate action at every level from local to global. For example, higher interest rates are slowing solar panel installation in developing countries. Policies and approaches to reduce emissions Price signals A carbon price is a system of applying a price to carbon emissions, as a method of emissions mitigation. Potential methods of pricing include carbon emission trading, results-based climate finance, crediting mechanisms and more. Carbon pricing can lend itself to the creation of carbon taxes, which allow governments to tax emissions. Carbon taxes are considered useful because, once a price has been set, they provide the government either with revenue or with a lowering of emissions, or both, and therefore benefit the environment. There is near-consensus that carbon taxing is the most cost-effective method of achieving a substantial and rapid response to climate change and carbon emissions. However, one criticism of the tax is that it can be regressive, since its impact can fall disproportionately on the poor, who spend much of their income on energy for their homes. Still, even with near-universal approval, there are issues regarding both the collection and the redistribution of the taxes.
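As a rough worked example of how a carbon price feeds through to consumer prices (our illustration; the tax level is hypothetical): burning a litre of gasoline releases roughly 2.3 kg of CO2, so a tax of $50 per tonne of CO2 adds about 12 cents per litre.

```python
# Rough illustration: pass-through of a carbon tax to gasoline prices.
# The emission factor is approximate; the tax level is a hypothetical example.
CO2_PER_LITRE_KG = 2.3     # approx. CO2 released by burning 1 L of gasoline
tax_per_tonne = 50.0       # hypothetical carbon tax, $ per tonne of CO2

tax_per_litre = tax_per_tonne * CO2_PER_LITRE_KG / 1000.0
print(f"${tax_per_litre:.3f} added per litre")   # about $0.115 per litre
```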
A central question is how the newly collected taxes will be redistributed. Some or all of the proceeds of a carbon tax can be used to stop it disadvantaging the poor. Structural market reforms In addition to price-based instruments such as carbon taxes, governments can also use quantity-based market approaches to mitigate emissions. One such method is emissions trading, where governments set the total emissions of all polluters to a maximum and distribute permits, through auction or allocation, that allow entities to emit a portion, typically one ton of carbon dioxide equivalent (CO2e), of the mandated total emissions. In other words, the amount of pollution an entity can emit in an emissions trading system is limited by the number of permits it holds. If a polluter wants to increase its emissions, it can only do so after buying permits from those who are willing to sell them. Many economists prefer this method of reducing emissions, as it is market-based and highly cost-effective. That being said, emissions trading alone is not perfect, since it fails to place a predictable price on emissions. Because the supply of permits is fixed, permit prices are volatile, being entirely determined by shifts in demand. This uncertainty in price is especially disliked by businesses, since it prevents them from investing in abatement technologies with confidence, which hinders efforts to mitigate emissions. Regardless, while emissions trading alone has its problems and cannot reduce pollutants to the point of stabilizing the global climate, it remains an important tool for addressing climate change. Degrowth There is a debate about a potentially critical need for new ways of economic accounting, including directly monitoring and quantifying positive real-world environmental effects, such as air quality improvements, and related unprofitable work like forest protection, alongside far-reaching structural changes of lifestyles, as well as acknowledging and moving beyond the limits of current economic measures such as GDP. Some argue that for effective climate change mitigation degrowth has to occur, while others argue that eco-economic decoupling could limit climate change enough while continuing high rates of traditional GDP growth. There is also research and debate about how economic systems could be transformed for sustainability – such as how jobs could transition harmoniously into green jobs (a just transition) and how relevant sectors of the economy, like the renewable energy industry and the bioeconomy, could be adequately supported. While degrowth is often believed to be associated with decreased living standards and austerity measures, many of its proponents seek to expand universal public goods (such as public transport), increase health (fitness, wellbeing and freedom from diseases) and increase various forms of often unconventional, commons-oriented labor. To this end, the application of both advanced technologies and reductions in various demands, including via overall reduced labor time or sufficiency-oriented strategies, is considered important by some. Finance Assessing costs and benefits GDP The costs of mitigation and adaptation policies can be measured as a percentage of GDP. A problem with this method of assessing costs is that GDP is an imperfect measure of welfare: there are externalities in the economy, which mean that some prices might not be truly reflective of their social costs.
Corrections can be made to GDP estimates to allow for these problems, but they are difficult to calculate. In response to this problem, some have suggested using other methods to assess policy. For example, the United Nations Commission for Sustainable Development has developed a system for "Green" GDP accounting and a list of sustainable development indicators. Baselines The emissions baseline is, by definition, the emissions that would occur in the absence of policy intervention. Definition of the baseline scenario is critical in the assessment of mitigation costs. This is because the baseline determines both the potential for emissions reductions and the costs of implementing emission reduction policies. There are several concepts used in the literature concerning baselines, including the "efficient" and "business-as-usual" (BAU) baseline cases. In the efficient baseline, it is assumed that all resources are being employed efficiently. In the BAU case, it is assumed that future development trends follow those of the past, and that no changes in policies will take place. The BAU baseline is often associated with high GHG emissions, and may reflect the continuation of current energy-subsidy policies or other market failures. Some high-emission BAU baselines imply relatively low net mitigation costs per unit of emissions. If the BAU scenario projects a large growth in emissions, total mitigation costs can be relatively high. Conversely, in an efficient baseline, mitigation costs per unit of emissions can be relatively high, but total mitigation costs low. Ancillary impacts These are the secondary or side effects of mitigation policies, and including them in studies can result in higher or lower mitigation cost estimates. Reduced mortality and morbidity costs are potentially a major ancillary benefit of mitigation. This benefit is associated with reduced use of fossil fuels, thereby resulting in less air pollution, which by itself might be a benefit greater than the cost of mitigation. There may also be ancillary costs. Flexibility Flexibility is the ability to reduce emissions at the lowest cost. The greater the flexibility that governments allow in their regulatory framework to reduce emissions, the lower the potential costs are for achieving emissions reductions (Markandya et al., 2001:455). "Where" flexibility allows costs to be reduced by allowing emissions to be cut at locations where it is most efficient to do so. For example, the Flexibility Mechanisms of the Kyoto Protocol allow "where" flexibility (Toth et al., 2001:660). "When" flexibility potentially lowers costs by allowing reductions to be made at a time when it is most efficient to do so. Including carbon sinks in a policy framework is another source of flexibility. Tree planting and forestry management actions can increase the capacity of sinks. Soils and other types of vegetation are also potential sinks. There is, however, uncertainty over how net emissions are affected by activities in this area. No regrets options No-regret options are social and economic benefits developed under the assumption of taking action and establishing preventative measures now, without fully knowing what climate change will look like in the future. These are emission reduction options that can also be profitable in their own right – such as adding solar and wind power.
Different studies make different assumptions about how far the economy is from the production frontier (defined as the maximum outputs attainable with the optimal use of available inputs – natural resources, labour, etc.). The benefits of a coal phase-out exceed the costs. Switching from cars by improving walking and cycling infrastructure is either free or beneficial to a country's economy as a whole. Technology Assumptions about technological development and efficiency in the baseline and mitigation scenarios have a major impact on mitigation costs, in particular in bottom-up studies. The magnitude of potential technological efficiency improvements depends on assumptions about future technological innovation and market penetration rates for these technologies. Discount rates Assessing climate change impacts and mitigation policies involves comparing economic flows that occur at different points in time. The discount rate is used by economists to compare economic effects occurring at different times. Discounting converts future economic impacts into their present-day value. The discount rate is generally positive because resources invested today can, on average, be transformed into more resources later. If climate change mitigation is viewed as an investment, then the return on investment can be used to decide how much should be spent on mitigation. Integrated assessment models (IAMs) are used to estimate the social cost of carbon. The discount rate is one of the factors used in these models. The IAM frequently used is the Dynamic Integrated Climate-Economy (DICE) model developed by William Nordhaus. The DICE model uses discount rates, uncertainty, and risks to estimate the benefits and costs of climate policies in a way that reflects current economic behavior. The choice of discount rate has a large effect on the result of any climate change cost analysis (Halsnæs et al., 2007:136). Using too high a discount rate will result in too little investment in mitigation, while using too low a rate will result in too much investment in mitigation. A high discount rate implies that a dollar in the future is worth much less than a dollar today. Discounting can be either prescriptive or descriptive. The descriptive approach is based on the discount rates that are observed in the behaviour of people making everyday decisions (the private discount rate) (IPCC, 2007c:813). In the prescriptive approach, a discount rate is chosen based on what is thought to be in the best interests of future generations (the social discount rate). The descriptive approach can be interpreted as an effort to maximize the economic resources available to future generations, allowing them to decide how to use those resources (Arrow et al., 1996b:133–134). The prescriptive approach can be interpreted as an effort to do as much as is economically justified to reduce the risk of climate change. The DICE model incorporates a descriptive approach, in which discounting reflects actual economic conditions.
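A small numerical sketch of how strongly the discount rate drives present values (the rates are chosen to span the range of the debate, not taken from any specific calibration): the present value of damages incurred in 2100 is computed as PV = damages / (1 + r)^t.

```python
# Sketch: present value of $1 trillion of climate damages in 2100 under
# different discount rates. Rates are illustrative of the Stern-Nordhaus
# debate, not taken from any calibrated model.
damage_2100 = 1e12       # $1 trillion of damages occurring in 2100
years = 2100 - 2025      # discounting horizon in years

for rate in (0.001, 0.014, 0.03, 0.045):
    present_value = damage_2100 / (1 + rate) ** years
    print(f"rate {rate:5.1%}: present value ${present_value / 1e9:,.0f} billion")
# At near-zero rates the future damages dominate; at rates above 4%,
# the same damages shrink to a few tens of billions in present value.
```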
In a recent version of the DICE model, the DICE-2013R model, the social cost of carbon is estimated under the following alternative scenarios: (1) a baseline scenario, in which climate change policies have not changed since 2010; (2) an optimal scenario, in which climate change policies are optimal (fully implemented and followed); (3) an optimal scenario constrained so that warming does not exceed the 2 °C limit relative to 1900; (4) a scenario in which the 2 °C limit is an average and not the optimum; (5) a scenario using a near-zero (low) discount rate of 0.1% (as assumed in the Stern Review); (6) a scenario also using a near-zero discount rate but with calibrated interest rates; and (7) a scenario using a high discount rate of 3.5%. According to Markandya et al. (2001:466), discount rates used in assessing mitigation programmes need to at least partly reflect the opportunity costs of capital. In developed countries, Markandya et al. (2001:466) thought that a discount rate of around 4–6% was probably justified, while in developing countries a rate of 10–12% was cited. The discount rates used in assessing private projects were found to be higher – with potential rates of between 10% and 25%. When deciding how to discount future climate change impacts, value judgements are necessary (Arrow et al., 1996b:130). IPCC (2001a:9) found that there was no consensus on the use of long-term discount rates in this area. The prescriptive approach to discounting leads to long-term discount rates of 2–3% in real terms, while the descriptive approach leads to rates of at least 4% after tax – sometimes much higher (Halsnæs et al., 2007:136). Even today, it is difficult to agree on an appropriate discount rate. The framing of discounting as either prescriptive or descriptive stems from the views of Nordhaus and Stern. Nordhaus takes a descriptive approach, which "assumes that investments to slow climate change must compete with investments in other areas", while Stern takes a prescriptive approach, which "leads to the conclusion that any positive pure rate of time preference is unethical". In Nordhaus's view, the descriptive approach implies that, because the impact of climate change arrives slowly, investments in climate change should compete on the same footing as other investments. He defines the discount rate to be the rate of return on capital investments; the DICE model uses the estimated market return on capital as the discount rate, around an average of 4%. He argues that a higher discount rate makes future damages look small, so that less effort is put into reducing emissions today, while a lower discount rate makes future damages look larger, justifying more effort to reduce emissions today. In Stern's view, the pure rate of time preference is defined as the discount rate in a scenario where present and future generations have equal resources and opportunities. A zero pure rate of time preference in this case would indicate that all generations are treated equally. Future generations do not have a "voice" in today's policies, so the present generation is morally responsible for treating future generations in the same manner. He therefore argues for a lower discount rate, under which the present generation should invest in the future to reduce the risks of climate change. Assumptions are made to support estimating high and low discount rates. These estimates depend on future emissions, climate sensitivity relative to the increase in greenhouse gas concentrations, and the seriousness of impacts over time.
Long-term climate policies will significantly impact future generations; discounting over such horizons is called intergenerational discounting. Factors that make intergenerational discounting complicated include the great uncertainty of economic growth, the fact that future generations are affected by today's policies, and the longer "investment horizon", which affects private discounting. Discounting is a relatively controversial issue in both climate change mitigation and environmental economics, due to the ethical implications of valuing future generations less than present ones. Non-economists often find it difficult to grapple with the idea that thousands of dollars of future costs and benefits can be valued at less than a cent in the present after discounting. Cost estimates Global costs Mitigation cost estimates depend critically on the baseline (in this case, a reference scenario that the alternative scenario is compared with), the way costs are modelled, and assumptions about future government policy. Macroeconomic costs in 2030 were estimated for multi-gas mitigation (reducing emissions of carbon dioxide and other GHGs, such as methane) as ranging from a 3% decrease in global GDP to a small increase, relative to baseline. This was for an emissions pathway consistent with atmospheric stabilization of GHGs between 445 and 710 ppm CO2-eq. In 2050, the estimated costs for stabilization between 710 and 445 ppm CO2-eq ranged from a 1% gain to a 5.5% decrease in global GDP, relative to baseline. These cost estimates were supported by a moderate amount of evidence and much agreement in the literature. Macroeconomic cost estimates were mostly based on models that assumed transparent markets, no transaction costs, and perfect implementation of cost-effective policy measures across all regions throughout the 21st century. Relaxation of some or all of these assumptions would lead to an appreciable increase in cost estimates. On the other hand, cost estimates could be reduced by allowing for accelerated technological learning, or by the possible use of carbon tax/emission permit revenues to reform national tax systems. In most of the assessed studies, costs rose for increasingly stringent stabilization targets. In scenarios that had high baseline emissions, mitigation costs were generally higher for comparable stabilization targets; in scenarios with low emissions baselines, mitigation costs were generally lower. Regional costs Several studies have estimated regional mitigation costs. The conclusions of these studies are as follows: Regional abatement costs are largely dependent on the assumed stabilization level and baseline scenario. The allocation of emission allowances/permits is also an important factor, but for most countries it is less important than the stabilization level. Other costs arise from changes in international trade. Fossil fuel-exporting regions are likely to be affected by losses in coal and oil exports compared to baseline, while some regions might experience increased bio-energy (energy derived from biomass) exports. Allocation schemes based on current emissions (i.e., where the most allowances/permits are given to the largest current polluters, and the fewest to the smallest current polluters) lead to welfare losses for developing countries, while allocation schemes based on a per capita convergence of emissions (i.e., where per capita emissions are equalized) lead to welfare gains for developing countries; a toy illustration follows below.
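A toy numerical sketch of these two allocation schemes (all figures hypothetical): two countries share a fixed emissions cap, one rich and high-emitting, the other poor and populous.

```python
# Toy comparison of permit allocation schemes under a shared emissions cap.
# All numbers are hypothetical, for illustration only.
cap = 100.0  # total permits (emission units) to allocate

countries = {
    #        current emissions, population (millions)
    "rich": {"emissions": 80.0, "population": 50.0},
    "poor": {"emissions": 20.0, "population": 150.0},
}

total_emissions = sum(c["emissions"] for c in countries.values())
total_population = sum(c["population"] for c in countries.values())

for name, c in countries.items():
    grandfathered = cap * c["emissions"] / total_emissions   # based on current emissions
    per_capita = cap * c["population"] / total_population    # equal per-person entitlement
    print(f"{name}: grandfathered {grandfathered:.0f}, per-capita {per_capita:.0f}")
# rich: grandfathered 80, per-capita 25 -- the per-capita scheme shifts permits
# (and thus wealth, via permit trading) toward the poorer country.
```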
Cost sharing Distributing emissions abatement costs There have been different proposals on how to allocate responsibility for cutting emissions: Egalitarianism: this system interprets the problem as one where each person has equal rights to a global resource, i.e., polluting the atmosphere. Basic needs: this system would have emissions allocated according to basic needs, as defined according to a minimum level of consumption. Consumption above basic needs would require countries to buy more emission rights. From this viewpoint, developing countries would need to be at least as well off under an emissions control regime as they would be outside the regime. Proportionality and polluter-pays principle: proportionality reflects the ancient Aristotelian principle that people should receive in proportion to what they put in, and pay in proportion to the damages they cause. This has a potential relationship with the "polluter-pays principle", which can be interpreted in a number of ways: Historical responsibilities: this asserts that the allocation of emission rights should be based on patterns of past emissions. Two-thirds of the stock of GHGs in the atmosphere at present is due to the past actions of developed countries. Comparable burdens and ability to pay: with this approach, countries would reduce emissions based on comparable burdens and their ability to take on the costs of reduction. Ways to assess burdens include monetary costs per head of population, as well as other, more complex measures, like the UNDP's Human Development Index. Willingness to pay: with this approach, countries take on emission reductions based on their ability to pay along with how much they benefit from reducing their emissions. Specific proposals Equal per capita entitlements: this is the most widely cited method of distributing abatement costs, and is derived from egalitarianism. This approach can be divided into two categories. In the first category, emissions are allocated according to national population. In the second category, emissions are allocated in a way that attempts to account for historical (cumulative) emissions. Status quo: with this approach, historical emissions are ignored, and current emission levels are taken as a status quo right to emit. An analogy for this approach can be made with fisheries, a common, limited resource. The analogy would be with the atmosphere, which can be viewed as an exhaustible natural resource. In international law, one state recognized another state's long-established use of the fisheries resource. It was also recognized that part of the other state's economy was dependent on that resource. Economic barriers to addressing climate change mitigation Economic institutions like the stock market underestimate, or cannot value, the social benefits of climate change mitigation. Climate change is largely an externality, despite a limited recent internalization of impacts that previously were fully 'external' to the economy. Consumers can be and are affected by policies that relate to, e.g., ethical consumer literacy, the available choices they have, transportation policy, product transparency policies, and larger-order economic policies that, for example, facilitate large-scale shifts of jobs. Such policies or measures are sometimes unpopular with the population. Therefore, they may be difficult for politicians to enact directly or help facilitate indirectly.
In current economics, the future financial profits lost from globally stranded fossil-fuel assets as a result of climate policies would lead to major losses for the freely managed wealth of investors in advanced economies.
Physical sciences
Climate change
Earth science
2650394
https://en.wikipedia.org/wiki/Respiratory%20rate
Respiratory rate
The respiratory rate is the rate at which breathing occurs; it is set and controlled by the respiratory center of the brain. A person's respiratory rate is usually measured in breaths per minute. Measurement The respiratory rate in humans is measured by counting the number of breaths over one minute, by counting how many times the chest rises. A fibre-optic breath rate sensor can be used for monitoring patients during a magnetic resonance imaging scan. Respiration rates may increase with fever, illness, or other medical conditions. Inaccuracies in respiratory measurement have been reported in the literature. One study compared respiratory rates counted using a 90-second count period to those counted over a full minute, and found significant differences in the rates. Another study found that rapid respiratory rates in babies, counted using a stethoscope, were 60–80% higher than those counted from beside the cot without the aid of the stethoscope. Similar results are seen with animals when they are being handled and not being handled—the invasiveness of touch apparently is enough to make significant changes in breathing. Various other methods to measure respiratory rate are commonly used, including impedance pneumography and capnography, which are widely implemented in patient monitoring. In addition, novel techniques for automatically monitoring respiratory rate using wearable sensors are in development, such as estimation of respiratory rate from the electrocardiogram, photoplethysmogram, or accelerometry signals. Breathing rate is often used interchangeably with the term breathing frequency. However, it should not be considered the frequency of breathing, because a realistic breathing signal is composed of many frequencies. Normal range For humans, the typical respiratory rate for a healthy adult at rest is 12–15 breaths per minute. The respiratory center sets the quiet respiratory rhythm at around two seconds for inhalation and three seconds for exhalation. This gives the lower end of the average rate, at 12 breaths per minute. Average resting respiratory rates by age are: birth to 6 weeks: 30–40 breaths per minute; 6 months: 25–40 breaths per minute; 3 years: 20–30 breaths per minute; 6 years: 18–25 breaths per minute; 10 years: 17–23 breaths per minute; adults: 15–18 breaths per minute; 50 years: 18–25 breaths per minute; elderly ≥ 65 years old: 12–28 breaths per minute; elderly ≥ 80 years old: 10–30 breaths per minute. Minute volume Respiratory minute volume is the volume of air which is inhaled (inhaled minute volume) or exhaled (exhaled minute volume) from the lungs in one minute. Diagnostic value The value of respiratory rate as an indicator of potential respiratory dysfunction has been investigated, but findings suggest it is of limited use. One study found that only 33% of people presenting to an emergency department with an oxygen saturation below 90% had an increased respiratory rate. An evaluation of respiratory rate for differentiating the severity of illness in babies under 6 months found it not to be very useful. Approximately half of the babies had a respiratory rate above 50 breaths per minute, thereby questioning the value of having a "cut-off" at 50 breaths per minute as the indicator of serious respiratory illness. It has also been reported that factors such as crying, sleeping, agitation and age have a significant influence on the respiratory rate. As a result of these and similar studies, the value of respiratory rate as an indicator of serious illness is limited.
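As an illustration of the signal-based estimation mentioned above, the sketch below counts breaths in a respiration waveform by simple peak detection; the synthetic signal and the thresholds are our own assumptions, not a clinical algorithm.

```python
# Sketch: estimate respiratory rate from a respiration waveform by
# peak counting (synthetic signal; all thresholds are illustrative).
import numpy as np
from scipy.signal import find_peaks

fs = 50                          # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)     # one minute of data
true_rate_hz = 15 / 60           # 15 breaths per minute
signal = np.sin(2 * np.pi * true_rate_hz * t) + 0.1 * np.random.randn(t.size)

# Require peaks to be at least 1.5 s apart (i.e. no more than 40 breaths/min).
peaks, _ = find_peaks(signal, distance=int(1.5 * fs), prominence=0.5)
print(f"estimated rate: {len(peaks)} breaths per minute")
```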
Despite these limitations, respiratory rate is widely used to monitor the physiology of acutely ill hospital patients. It is measured regularly, along with other vital signs, to facilitate the identification of changes in physiology. This practice has been widely adopted as part of early warning systems. Abnormal respiratory rates
Biology and health sciences
Diagnostics
Health
19832108
https://en.wikipedia.org/wiki/Diplostraca
Diplostraca
The Diplostraca or Cladocera, commonly known as water fleas, is a superorder of small, mostly freshwater crustaceans, most of which feed on microscopic chunks of organic matter, though some forms are predatory. Over 1000 species have been recognised so far, with many more undescribed. The oldest fossils of diplostracans date to the Jurassic, though their modern morphology suggests that they originated substantially earlier, during the Paleozoic. Some have also adapted to a life in the ocean, the only members of Branchiopoda to do so, though several anostracans live in hypersaline lakes. Most are typically 0.2–6.0 mm long, with a down-turned head with a single median compound eye, and a carapace covering the apparently unsegmented thorax and abdomen. Most species show cyclical parthenogenesis, where asexual reproduction is occasionally supplemented by sexual reproduction, which produces resting eggs that allow the species to survive harsh conditions and disperse to distant habitats. Description They are mostly 0.2–6.0 mm long, with the exception of Leptodora, which can be up to about 18 mm long. The body is not obviously segmented and bears a folded carapace which covers the thorax and abdomen. The head is angled downwards, and may be separated from the rest of the body by a "cervical sinus" or notch. It bears a single black compound eye, located on the animal's midline, in all but two genera, and often, a single ocellus is present. The head also bears two pairs of antennae – the first antennae are small, unsegmented appendages, while the second antennae are large, segmented, and branched, with powerful muscles. The first antennae bear olfactory setae, while the second are used for swimming by most species. The pattern of setae on the second antennae is useful for identification. The part of the head which projects in front of the first antennae is known as the rostrum or "beak". The mouthparts are small, and consist of an unpaired labrum, a pair of mandibles, a pair of maxillae, and an unpaired labium. They are used to eat "organic detritus of all kinds" and bacteria. The thorax bears five or six pairs of lobed, leaf-like appendages, each with numerous hairs or setae. Carbon dioxide is lost, and oxygen taken up, through the body surface. Lifecycle With the exception of a few purely asexual species, the lifecycle of diplostracans is dominated by asexual reproduction, with occasional periods of sexual reproduction; this is known as cyclical parthenogenesis. When conditions are favourable, reproduction occurs by parthenogenesis for several generations, producing only female clones. As the conditions deteriorate, males are produced, and sexual reproduction occurs. This results in the production of long-lasting dormant eggs. These ephippial eggs can be transported over land by wind, and hatch when they reach favourable conditions, allowing many species to have very wide – even cosmopolitan – distributions. Except for the genus Leptodora, which has a metanauplius stage, a nauplius larval stage is absent in Diplostraca. Evolutionary history Diplostraca are nested within the clam shrimp, being most closely related to the order Cyclestherida, the only living genus of which is Cyclestheria. Though several fossils from the Paleozoic have been claimed to represent diplostracans, none of these records can be confirmed. The oldest confirmed records of diplostracans are from the Early Jurassic of Asia.
Fossils from the Jurassic are assignable to modern as well as extinct groups, indicating that the initial radiation of the group occurred prior to the beginning of the Jurassic, likely during the late Paleozoic. A Devonian fossil, Ebullitiocaris, has tentatively been placed as a diplostracan; however, since it is known only from its carapace, this placement is uncertain.

Ecology

Most diplostracan species live in fresh water and other inland water bodies, with only eight species being truly oceanic. The marine species are all in the family Podonidae, except for the genus Penilia. Some diplostracans inhabit leaf litter.

Taxonomy

According to the World Register of Marine Species, Cladocera is a synonym of the superorder Diplostraca, which is included in the class Branchiopoda. Both names are currently in use. The superorder forms a monophyletic group of 7 orders, about 24 families, and more than 1,000 species. Many more species remain undescribed. The genus Daphnia alone contains around 150 species. Many groups of the water fleas are cryptic species or species flocks. The following families are recognised:

Superorder Diplostraca Gerstaecker, 1866 (=Cladocera)
Order Anomopoda G.O. Sars, 1865
Family Acantholeberidae Smirnov, 1976
Family Bosminidae Baird, 1845
Family Chydoridae Dybowski & Grochowski, 1894
Family Daphniidae Straus, 1820
Family Dumontiidae Santos-Flores & Dodson, 2003
Family Eurycercidae Kurz, 1875
Family Gondwanothrichidae Van Damme, Shiel & Dumont, 2007
Family Ilyocryptidae Smirnov, 1976
Family Macrothricidae Norman & Brady, 1867
Family Moinidae Goulden, 1968
Family Ophryoxidae Smirnov, 1976
Order Ctenopoda G.O. Sars, 1865
Family Holopediidae G.O. Sars, 1865
Family Pseudopenilidae Korovchinsky & Sergeeva, 2008
Family Sididae Baird, 1850
Order Cyclestherida Sars G.O., 1899
Family Cyclestheriidae Sars G.O., 1899
Order Haplopoda G.O. Sars, 1865
Family Leptodoridae Lilljeborg, 1861
Order Laevicaudata Linder, 1945
Family Lynceidae Stebbing, 1902
Order Onychopoda G.O. Sars, 1865
Family Cercopagididae Mordukhai-Boltovskoi, 1968
Family Podonidae Mordukhai-Boltovskoi, 1968
Family Polyphemidae Baird, 1845
Order Spinicaudata Linder, 1945
Family Cyzicidae Stebbing, 1910
Family Eocyzicidae Schwentner, et al., 2020
Family Leptestheriidae Daday, 1913
Family Limnadiidae Burmeister, 1843

Etymology

The word "Cladocera" derives via Neo-Latin from the Ancient Greek words for "branch" and "horn".
Biology and health sciences
Crustaceans
Animals
19835580
https://en.wikipedia.org/wiki/Nice%20model
Nice model
In astronomy, the Nice model is a scenario for the dynamical evolution of the Solar System. It is named for the location of the Côte d'Azur Observatory in Nice, France, where it was initially developed in 2005. It proposes the migration of the giant planets from an initial compact configuration into their present positions, long after the dissipation of the initial protoplanetary disk. In this way, it differs from earlier models of the Solar System's formation. This planetary migration is used in dynamical simulations of the Solar System to explain historical events including the Late Heavy Bombardment of the inner Solar System, the formation of the Oort cloud, and the existence of populations of small Solar System bodies such as the Kuiper belt, the Neptune and Jupiter trojans, and the numerous resonant trans-Neptunian objects dominated by Neptune.

Description

The original core of the Nice model is a triplet of papers published in the general science journal Nature in 2005 by an international collaboration of scientists. In these publications, the four authors proposed that after the dissipation of the gas and dust of the primordial Solar System disk, the four giant planets (Jupiter, Saturn, Uranus, and Neptune) were originally found on near-circular orbits between ~5.5 and ~17 astronomical units (au), much more closely spaced and compact than in the present. A large, dense disk of small rock and ice planetesimals totalling about 35 Earth masses extended from the orbit of the outermost giant planet to some 35 au. According to the Nice model, the planetary system evolved in the following manner: Planetesimals at the disk's inner edge occasionally pass through gravitational encounters with the outermost giant planet, which change the planetesimals' orbits. The planet scatters inward the majority of the small icy bodies that it encounters, and in turn moves outwards as it acquires angular momentum from the scattered objects. The inward-deflected planetesimals successively encounter Uranus, Neptune, and Saturn, moving each outwards in turn by the same process. Despite the minute movement each exchange of momentum produces, cumulatively these planetesimal encounters shift (migrate) the orbits of the planets by significant amounts. This process continues until the planetesimals interact with the innermost and most massive giant planet, Jupiter, whose immense gravity sends them into highly elliptical orbits or even ejects them outright from the Solar System. This, in contrast, causes Jupiter to move slightly inward. The low rate of orbital encounters governs the rate at which planetesimals are lost from the disk, and the corresponding rate of migration. After several hundreds of millions of years of slow, gradual migration, Jupiter and Saturn, the two innermost giant planets, cross their mutual 2:1 mean-motion resonance. This resonance increases their orbital eccentricities, destabilizing the entire planetary system. The arrangement of the giant planets alters quickly and dramatically. Jupiter shifts Saturn out towards its present position, and this relocation causes mutual gravitational encounters between Saturn and the two ice giants, which propel Neptune and Uranus onto much more eccentric orbits. These ice giants then plough into the planetesimal disk, scattering tens of thousands of planetesimals from their formerly stable orbits in the outer Solar System. This disruption almost entirely scatters the primordial disk, removing 99% of its mass.
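As an illustrative aside, not taken from the Nice model papers themselves, the geometry of such a resonance crossing follows from Kepler's third law, P^2 ∝ a^3. For the 2:1 Jupiter–Saturn resonance,

    P_S / P_J = 2, and hence a_S / a_J = (P_S / P_J)^{2/3} = 2^{2/3} ≈ 1.59.

With present-day semi-major axes of roughly 5.20 au for Jupiter and 9.58 au for Saturn, a_S / a_J ≈ 1.84, placing the pair well beyond the resonance they are proposed to have crossed during their divergent migration.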
Although the scenario explains the absence of a dense trans-Neptunian population, alternative models that achieve the same depletion of trans-Saturnian asteroids, but without planet migration or chaotic resonances, have been proposed. The details of the calculations of the Nice model are sensitive to chaotic interactions between planets and asteroids. Such calculations are notoriously plagued by numerical errors, in particular round-off and time-discretisation errors. Originally it was thought that the model would cause some of the planetesimals to be thrown into the inner Solar System, producing a sudden influx of impacts on the terrestrial planets: the Late Heavy Bombardment (LHB). However, it has since been demonstrated that the LHB is inconsistent with the age and abundance of craters on the asteroid 4 Vesta, and that the original lunar observations were the result of statistical aberrations in crater age determination. Following the Nice model, the giant planets eventually reach their final orbital semi-major axes, and dynamical friction with the remaining planetesimal disc damps their eccentricities and makes the orbits of Uranus and Neptune circular again. In some 50% of the initial models of Tsiganis and colleagues, Neptune and Uranus also exchange places. Such statistics, however, cannot be interpreted as a probability in a dynamically chaotic system. Although an exchange of Uranus and Neptune would be consistent with models of their formation in a disk that had a surface density that declined with distance from the Sun, there is no compelling argument why planet mass should follow the disc's density profile.

Solar System features

Running dynamical models of the Solar System with different initial conditions for the simulated length of the history of the Solar System produces various distributions of minor bodies in the Solar System. In order to explain the wide variety of object families in their respective observed abundances, a wide range of initial conditions for the Solar System is necessary. This diversity in initial conditions then renders the model impractical and suspect, because there can only be one realization of the early Solar System: that realization should explain all the families of minor bodies in their observed abundances. Proving a model of the evolution of the early Solar System is difficult, since the evolution cannot be directly observed. However, the success of any dynamical model can be judged by comparing the population predictions from the simulations to astronomical observations of these populations. At the present time, there is no satisfactory computer model that explains the current Solar System's architecture.

The Late Heavy Bombardment

The main motivation for the introduction of the Nice model is to explain the Late Heavy Bombardment (LHB), a hypothetical surge in asteroid impacts and crater formation on the lunar surface and the terrestrial planets at about 600 million years after the Solar System's formation. However, newer studies on the age of lunar craters show no peak in the cratering record, but rather an exponential decay of the number of craters with time. The surge may be a statistical artifact, with a finite uncertainty in the determination of a crater's age combining with the cutoff age of the Moon to create an apparent peak in the inferred age distribution, the LHB. Also, recent laser-ablation microprobe measurements of the 40Ar/39Ar isotope ratio on the surface of (4) Vesta are in considerable tension with the LHB.
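To make the contrast concrete, a smoothly declining bombardment can be described by a simple exponential impact flux; the functional form below is an illustrative assumption, not a fit taken from the studies cited above:

    f(t) = f_0 e^{-t/τ},

where f_0 is the initial impact flux and τ a decay timescale. A genuine Late Heavy Bombardment would appear as a pronounced excess above this smooth curve roughly 600 million years after the Solar System's formation; the newer crater chronologies are instead consistent with a monotonic decay.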
The Nice model would explain the LHB as follows. Icy planetesimals are scattered onto planet-crossing orbits when the outer disc is disrupted by Uranus and Neptune, causing a sharp spike of impacts by icy objects. The migration of the outer planets also causes mean-motion and secular resonances to sweep through the inner Solar System. In the asteroid belt these excite the eccentricities of the asteroids, driving them onto orbits that intersect those of the terrestrial planets, causing a more extended period of impacts by stony objects and removing roughly 90% of the belt's mass. The number of planetesimals that would reach the Moon is consistent with the crater record from the LHB. However, the predicted orbital distribution of the remaining asteroids does not match observations. In the outer Solar System the impacts onto Jupiter's moons are sufficient to trigger Ganymede's differentiation but not Callisto's. The impacts of icy planetesimals onto Saturn's inner moons are excessive, however, resulting in the vaporization of their ice. Strong doubts about the LHB as a unique phase in the Solar System's early evolution also weaken the credibility of the Nice model.

Trojans and the asteroid belt

After Jupiter and Saturn cross the 2:1 resonance, their combined gravitational influence destabilizes the Trojan co-orbital region, allowing existing Trojan groups in the L4 and L5 Lagrange points of Jupiter and Neptune to escape and new objects from the outer planetesimal disk to be captured. Objects in the Trojan co-orbital region undergo libration, drifting cyclically relative to the L4 and L5 points. When Jupiter and Saturn are near but not in resonance, the location at which Jupiter passes Saturn relative to their perihelia circulates slowly. If the period of this circulation falls into resonance with the period at which the Trojans librate, then the libration range can increase until they escape. When this phenomenon occurs, the Trojan co-orbital region is "dynamically open" and objects can both escape and enter it. Primordial Trojans escape and a fraction of the numerous objects from the disrupted planetesimal disk temporarily inhabit it. Later, when the separation of the Jupiter and Saturn orbits increases, the Trojan region becomes "dynamically closed", and the planetesimals in the Trojan region are captured, with many remaining today. The captured Trojans have a wide range of inclinations, a feature that had not previously been understood, acquired through their repeated encounters with the giant planets. The libration angle and eccentricity of the simulated population also match observations of the orbits of the Jupiter trojans. This mechanism of the Nice model similarly generates the Neptune trojans. A large number of planetesimals would have also been captured in Jupiter's mean-motion resonances as Jupiter migrated inward. Those that remained in a 3:2 resonance with Jupiter form the Hilda family. The eccentricity of other objects declined while they were in a resonance, and these escaped onto stable orbits in the outer asteroid belt, at distances greater than 2.6 au, as the resonances moved inward. These captured objects would then have undergone collisional erosion, grinding the population away into progressively smaller fragments that can then be subject to the Yarkovsky effect, which causes small objects to drift into unstable resonances, and to Poynting–Robertson drag, which causes smaller grains to drift toward the Sun. These processes may have removed more than 90% of the original mass implanted into the asteroid belt.
The size-frequency distribution of this simulated population following this erosion is in excellent agreement with observations. This agreement suggests that the Jupiter trojans, Hildas, and spectral D-type asteroids, such as some objects in the outer asteroid belt, are remnant planetesimals from this capture and erosion process. The dwarf planet Ceres may be a Kuiper-belt object that was captured by this process. A few recently discovered D-type asteroids have semi-major axes <2.5 au, which is closer than those that would be captured in the original Nice model.

Outer-system satellites

Any original populations of irregular satellites captured by traditional mechanisms, such as drag or impacts from the accretion disks, would be lost during the encounters between the planets at the time of global system instability. In the Nice model, the outer planets encounter large numbers of planetesimals after Uranus and Neptune enter and disrupt the planetesimal disk. A fraction of these planetesimals are captured by these planets via three-way interactions during encounters between planets. The probability for any planetesimal to be captured by an ice giant is relatively high, a few ×10⁻⁷. These new satellites could be captured at almost any angle, so unlike the regular satellites of Saturn, Uranus, and Neptune, they do not necessarily orbit in the planets' equatorial planes. Some irregulars may even have been exchanged between planets. The resulting irregular orbits match well with the observed populations' semi-major axes, inclinations, and eccentricities. Subsequent collisions between these captured satellites may have created the suspected collisional families seen today. These collisions are also required to erode the population to the present size distribution. Triton, the largest moon of Neptune, can be explained if it was captured in a three-body interaction involving the disruption of a binary planetoid. Such binary disruption would be more likely if Triton were the smaller member of the binary. However, Triton's capture would be more likely in the early Solar System, when the gas disk would damp relative velocities, and binary exchange reactions would not in general have supplied the large number of small irregulars. There were not enough interactions between Jupiter and the other planets to explain Jupiter's retinue of irregulars in the initial Nice model simulations that reproduced other aspects of the outer Solar System. This suggests either that a second mechanism was at work for that planet, or that the early simulations did not reproduce the evolution of the giant planets' orbits.

Formation of the Kuiper belt

The migration of the outer planets is also necessary to account for the existence and properties of the Solar System's outermost regions. Originally, the Kuiper belt was much denser and closer to the Sun, with an outer edge at approximately 30 au. Its inner edge would have been just beyond the orbits of Uranus and Neptune, which were in turn far closer to the Sun when they formed (most likely in the range of 15–20 au), and in opposite locations, with Uranus farther from the Sun than Neptune. Gravitational encounters between the planets scatter Neptune outward into the planetesimal disk with a semi-major axis of ~28 au and an eccentricity as high as 0.4. Neptune's high eccentricity causes its mean-motion resonances to overlap and orbits in the region between Neptune and its 2:1 mean-motion resonance to become chaotic.
The orbits of objects between Neptune and the edge of the planetesimal disk at this time can evolve outward onto stable low-eccentricity orbits within this region. When Neptune's eccentricity is damped by dynamical friction they become trapped on these orbits. These objects form a dynamically cold belt, since their inclinations remain small during the short time they interact with Neptune. Later, as Neptune migrates outward on a low-eccentricity orbit, objects that have been scattered outward are captured into its resonances and can have their eccentricities decline and their inclinations increase due to the Kozai mechanism, allowing them to escape onto stable higher-inclination orbits. Other objects remain captured in resonance, forming the plutinos and other resonant populations. These two populations are dynamically hot, with higher inclinations and eccentricities, owing to their being scattered outward and to the longer period over which these objects interact with Neptune. This evolution of Neptune's orbit produces both resonant and non-resonant populations, an outer edge at Neptune's 2:1 resonance, and a small mass relative to the original planetesimal disk. The excess of low-inclination plutinos in other models is avoided due to Neptune being scattered outward, leaving its 3:2 resonance beyond the original edge of the planetesimal disk. The differing initial locations, with the cold classical objects originating primarily from the outer disk, and capture processes offer explanations for the bimodal inclination distribution and its correlation with compositions. However, this evolution of Neptune's orbit fails to account for some of the characteristics of the orbital distribution. It predicts a greater average eccentricity in classical Kuiper belt object orbits than is observed (0.10–0.13 versus 0.07) and it does not produce enough higher-inclination objects. It also cannot explain the apparent complete absence of gray objects in the cold population, although it has been suggested that color differences arise in part from surface evolution processes rather than entirely from differences in primordial composition. The shortage of the lowest-eccentricity objects predicted in the Nice model may indicate that the cold population formed in situ. In addition to their differing orbits, the hot and cold populations have differing colors. The cold population is markedly redder than the hot, suggesting it has a different composition and formed in a different region. The cold population also includes a large number of binary objects with loosely bound orbits that would be unlikely to survive a close encounter with Neptune. If the cold population formed at its current location, preserving it would require that Neptune's eccentricity remained small, or that its perihelion precessed rapidly due to a strong interaction between it and Uranus.

Scattered disc and Oort cloud

Objects scattered outward by Neptune onto orbits with semi-major axes greater than 50 au can be captured in resonances, forming the resonant population of the scattered disc, or, if their eccentricities are reduced while in resonance, they can escape from the resonance onto stable orbits in the scattered disc while Neptune is migrating. When Neptune's eccentricity is large, its aphelion can reach well beyond its current orbit. Objects that attain perihelia close to or larger than Neptune's at this time can become detached from Neptune when its eccentricity is damped, reducing its aphelion and leaving them on stable orbits in the scattered disc.
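The resonance locations invoked in this section can be checked with the same Kepler's-law relation used above. The short sketch below is illustrative only; the function name and the present-day value of Neptune's semi-major axis are assumptions for the example, not inputs taken from the Nice model literature:

    # Semi-major axis of an exterior j:k mean-motion resonance with a planet,
    # where the body completes k orbits for every j orbits of the planet.
    # Kepler's third law: P^2 is proportional to a^3, so a scales as P^(2/3).
    def resonance_location(a_planet_au, j, k):
        period_ratio = j / k  # body's period divided by the planet's period
        return a_planet_au * period_ratio ** (2.0 / 3.0)

    a_neptune = 30.1  # au, approximate present-day value (assumed)

    print(resonance_location(a_neptune, 3, 2))  # ~39.4 au: the plutinos (3:2)
    print(resonance_location(a_neptune, 2, 1))  # ~47.8 au: the classical belt's outer edge (2:1)

For the Kozai mechanism mentioned above, the standard Kozai–Lidov picture approximately conserves the quantity √(1 − e²) cos i, which is why an eccentricity that declines while an object is trapped in resonance must be compensated by a rise in inclination.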
Objects scattered outward by Uranus and Neptune onto larger orbits (roughly 5,000 au) can have their perihelia raised by the galactic tide, detaching them from the influence of the planets and forming the inner Oort cloud with moderate inclinations. Others that reach even larger orbits can be perturbed by nearby stars, forming the outer Oort cloud with isotropic inclinations. Objects scattered by Jupiter and Saturn are typically ejected from the Solar System. Several percent of the initial planetesimal disc can be deposited in these reservoirs.

Modifications

The Nice model has undergone a number of modifications since its initial publication. Some changes reflect a better understanding of the formation of the Solar System, while others were made after significant differences between its predictions and observations were identified. Hydrodynamical models of the early Solar System indicate that the orbits of the giant planets would converge, resulting in their capture into a series of resonances. The slow approach of Jupiter and Saturn to the 2:1 resonance before the instability, and the smooth separation of their orbits afterwards, were also shown to alter the orbits of objects in the inner Solar System due to sweeping secular resonances. The first could result in the orbit of Mars crossing that of the other terrestrial planets, destabilizing the inner Solar System. If the first were avoided, the latter would still leave the orbits of the terrestrial planets with larger eccentricities. The orbital distribution of the asteroid belt would also be altered, leaving it with an excess of high-inclination objects. Other differences between predictions and observations included the capture of few irregular satellites by Jupiter, the vaporization of the ice from Saturn's inner moons, a shortage of high-inclination objects captured in the Kuiper belt, and the recent discovery of D-type asteroids in the inner asteroid belt. The first modification to the Nice model concerned the initial positions of the giant planets. Investigations of the behavior of planets orbiting in a gas disk using hydrodynamical models reveal that the giant planets would migrate toward the Sun. If the migration continued, it would have resulted in Jupiter orbiting close to the Sun, like the recently discovered exoplanets known as hot Jupiters. Saturn's capture in a resonance with Jupiter prevents this, however, and the later capture of the other planets results in a quadruple resonant configuration, with Jupiter and Saturn in their 3:2 resonance. A mechanism for a delayed disruption of this resonance was also proposed. Gravitational encounters with Pluto-massed objects in the outer disk would stir their orbits, causing an increase in eccentricities, and through a coupling of their orbits, an inward migration of the giant planets. During this inward migration secular resonances would be crossed that altered the eccentricities of the planets' orbits and disrupted the quadruple resonance. A late instability similar to the original Nice model then follows. Unlike the original Nice model, the timing of this instability is not sensitive to the planets' initial orbits or the distance between the outer planet and the planetesimal disk. The combination of resonant planetary orbits and the late instability triggered by these long-distance interactions was referred to as the Nice 2 model. The second modification was the requirement that one of the ice giants encounter Jupiter, causing its semi-major axis to jump.
In this jumping-Jupiter scenario, an ice giant encounters Saturn and is scattered inward onto a Jupiter-crossing orbit, causing Saturn's orbit to expand; it then encounters Jupiter and is scattered outward, causing Jupiter's orbit to shrink. This results in a step-wise separation of Jupiter's and Saturn's orbits instead of a smooth divergent migration. The step-wise separation of the orbits of Jupiter and Saturn avoids the slow sweeping of secular resonances across the inner Solar System that increases the eccentricities of the terrestrial planets and leaves the asteroid belt with an excessive ratio of high- to low-inclination objects. The encounters between the ice giant and Jupiter in this model allow Jupiter to acquire its own irregular satellites. Jupiter trojans are also captured following these encounters when Jupiter's semi-major axis jumps and, if the ice giant passes through one of the libration points, scattering trojans, one population is depleted relative to the other. The faster traverse of the secular resonances across the asteroid belt limits the loss of asteroids from its core. Most of the rocky impactors of the Late Heavy Bombardment instead originate from an inner extension of the asteroid belt that is disrupted when the giant planets reach their current positions, with a remnant remaining as the Hungaria asteroids. Some D-type asteroids are embedded in the inner asteroid belt, within 2.5 au, during encounters with the ice giant when it is crossing the asteroid belt.

Five-planet Nice model

The frequent ejection in simulations of the ice giant encountering Jupiter has led David Nesvorný and others to hypothesize an early Solar System with five giant planets, one of which was ejected during the instability. This five-planet Nice model begins with the giant planets in a 3:2, 3:2, 2:1, 3:2 resonant chain, with a planetesimal disk orbiting beyond them. Following the breaking of the resonant chain, Neptune first migrates outward into the planetesimal disk, reaching 28 au before encounters between planets begin. This initial migration reduces the mass of the outer disk, enabling Jupiter's eccentricity to be preserved, and produces a Kuiper belt with an inclination distribution that matches observations if 20 Earth masses remained in the planetesimal disk when that migration began. Neptune's eccentricity can remain small during the instability, since it only encounters the ejected ice giant, allowing an in-situ cold-classical belt to be preserved. The lower-mass planetesimal belt, in combination with the excitation of inclinations and eccentricities by the Pluto-massed objects, also significantly reduces the loss of ice by Saturn's inner moons. The combination of a late breaking of the resonance chain and a migration of Neptune to 28 au before the instability is unlikely with the Nice 2 model. This gap may be bridged by a slow dust-driven migration over several million years following an early escape from resonance. A recent study found that the five-planet Nice model has a statistically small likelihood of reproducing the orbits of the terrestrial planets. Although this implies that the instability occurred before the formation of the terrestrial planets and could not be the source of the Late Heavy Bombardment, the advantage of an early instability is reduced by the sizable jumps in the semi-major axis of Jupiter and Saturn required to preserve the asteroid belt.
Physical sciences
Solar System
Astronomy
23974535
https://en.wikipedia.org/wiki/Omnivore
Omnivore
An omnivore is an animal that regularly consumes significant quantities of both plant and animal matter. Obtaining energy and nutrients from plant and animal matter, omnivores digest carbohydrates, protein, fat, and fiber, and metabolize the nutrients and energy of the sources absorbed. Often, they have the ability to incorporate food sources such as algae, fungi, and bacteria into their diet. Omnivores come from diverse backgrounds that often independently evolved sophisticated consumption capabilities. For instance, dogs evolved from primarily carnivorous organisms (Carnivora) while pigs evolved from primarily herbivorous organisms (Artiodactyla). Despite this, physical characteristics such as tooth morphology may be reliable indicators of diet in mammals, with such morphological adaptation having been observed in bears. The variety of different animals that are classified as omnivores can be placed into further sub-categories depending on their feeding behaviors. Frugivores include cassowaries, orangutans and grey parrots; insectivores include swallows and pink fairy armadillos; granivores include large ground finches and mice. All of these animals are omnivores, yet still fall into special niches in terms of feeding behavior and preferred foods. Being omnivores gives these animals more food security in stressful times or makes possible living in less consistent environments.

Etymology and definitions

The word omnivore derives from Latin omnis 'all' and vora, from vorare 'to eat or devour', having been coined by the French and later adopted by the English in the 1800s. Traditionally the definition of omnivory was entirely behavioral, by means of simply "including both animal and vegetable tissue in the diet." In more recent times, with the advent of advanced technological capabilities in fields like gastroenterology, biologists have formulated a standardized variation of omnivore used for labeling a species' actual ability to obtain energy and nutrients from materials. This has subsequently given rise to two context-specific definitions.

Behavioral: This definition is used to specify whether a species or individual is actively consuming both plant and animal materials (e.g. "vegans do not participate in the omnivore-based diet"). In the fields of nutrition, sociology and psychology, the terms "omnivore" and "omnivory" are often used to distinguish prototypical, highly diverse human diet patterns from restricted diet patterns that exclude major categories of food.

Physiological: This definition is often used in academia to specify species that have the capability to obtain energy and nutrients from both plant and animal matter (e.g. "humans are omnivores due to their capability to obtain energy and nutrients from both plant and animal materials").

The taxonomic utility of omnivore's traditional and behavioral definition is limited, since the diet, behavior, and phylogeny of one omnivorous species may be very different from that of another: for instance, an omnivorous pig digging for roots and scavenging for fruit and carrion is taxonomically and ecologically quite distinct from an omnivorous chameleon that eats leaves and insects. The term "omnivory" is also not always comprehensive, because it does not deal with mineral foods such as salt licks or with non-omnivores that self-medicate by consuming either plant or animal material which they otherwise would not (i.e. zoopharmacognosy).
Classification, contradictions and difficulties

Though Carnivora is a taxon for species classification, no such equivalent exists for omnivores, as omnivores are widespread across multiple taxonomic clades. The Carnivora order does not include all carnivorous species, and not all species within the Carnivora taxon are carnivorous. (The members of Carnivora are formally referred to as carnivorans.) It is common to find physiological carnivores consuming materials from plants, or physiological herbivores consuming material from animals, e.g. felines eating grass and deer eating birds. From a behavioral aspect, this would make them omnivores, but from the physiological standpoint, this may be due to zoopharmacognosy. Physiologically, animals must be able to obtain both energy and nutrients from plant and animal materials to be considered omnivorous. Thus, such animals are still able to be classified as carnivores and herbivores when they are merely obtaining nutrients from materials originating from sources that do not seemingly complement their classification. For instance, it is well documented that animals such as giraffes, camels, and cattle will gnaw on bones, preferably dry bones, for particular minerals and nutrients. Felines, which are usually regarded as obligate carnivores, occasionally eat grass to regurgitate indigestibles (e.g. hair, bones), aid with hemoglobin production, and as a laxative. Occasionally, it is found that animals historically classified as carnivorous may deliberately eat plant material. For example, in 2013, after investigations into why American alligators (Alligator mississippiensis) occasionally eat fruits, it was considered that the species may be physiologically omnivorous. It was suggested that alligators probably ate fruits both accidentally and deliberately. "Life-history omnivores" is a specialized classification given to organisms that change their eating habits during their life cycle. Some species, such as grazing waterfowl like geese, are known to eat mainly animal tissue at one stage of their lives, but plant matter at another. The same is true for many insects, such as beetles in the family Meloidae, which begin by eating animal tissue as larvae, but change to eating plant matter after they mature. Likewise, many mosquito species in early life eat plants or assorted detritus, but as they mature, males continue to eat plant matter and nectar, whereas the females (such as those of Anopheles, Aedes and Culex) also eat blood to reproduce effectively.

Omnivorous species

General

Although cases exist of herbivores eating meat and carnivores eating plant matter, the classification "omnivore" refers to the adaptation and main food source of the species in general, so these exceptions do not make either individual animals or the species as a whole omnivorous. For the concept of "omnivore" to be regarded as a scientific classification, some clear set of measurable and relevant criteria would need to be considered to differentiate between an "omnivore" and other categories, e.g. faunivore, folivore, and scavenger. Some researchers argue that evolution of any species from herbivory to carnivory, or carnivory to herbivory, would be rare except via an intermediate stage of omnivory.

Omnivorous mammals

Various mammals are omnivorous in the wild, such as species of hominids, pigs, badgers, bears, foxes, coatis, civets, hedgehogs, opossums, skunks, sloths, squirrels, raccoons, chipmunks, mice, hamsters and rats.
Most bear species are omnivores, but individual diets can range from almost exclusively herbivorous (hypocarnivore) to almost exclusively carnivorous (hypercarnivore), depending on what food sources are available locally and seasonally. Polar bears are classified as carnivores, both taxonomically (they are in the order Carnivora) and behaviorally (they subsist on a largely carnivorous diet). Depending on the species of bear, there is generally a preference for one class of food, as plants and animals are digested differently. Canines, including wolves, dogs, dingoes, and coyotes, eat some plant matter, but they have a general preference for, and are evolutionarily geared towards, meat. However, the maned wolf is a canid whose diet is naturally 50% plant matter. Like most arboreal species, squirrels are primarily granivores, subsisting on nuts and seeds. However, like virtually all mammals, squirrels avidly consume some animal food when it becomes available. For example, the American eastern gray squirrel has been introduced to parts of Britain, continental Europe and South Africa. Its effect on populations of nesting birds is often serious because of its consumption of eggs and nestlings.

Other species

Various birds are omnivorous, with diets varying from berries and nectar to insects, worms, fish, and small rodents. Examples include cranes, cassowaries, chickens, crows and related corvids, kea, rails, and rheas. In addition, some lizards (such as the Galápagos lava lizard), turtles, fish (such as piranhas and catfish), and invertebrates are omnivorous. Quite often, mainly herbivorous creatures will eagerly eat small quantities of animal food when it becomes available. Although this is trivial most of the time, omnivorous or herbivorous birds, such as sparrows, often will feed their chicks insects while food is most needed for growth. On close inspection it appears that nectar-feeding birds such as sunbirds rely on the ants and other insects that they find in flowers, not for a richer supply of protein, but for essential nutrients such as cobalt/vitamin B12 that are absent from nectar. Similarly, monkeys of many species eat maggoty fruit, sometimes in clear preference to sound fruit. When to refer to such animals as omnivorous, or otherwise, is a question of context and emphasis, rather than of definition.
Biology and health sciences
Ethology
Biology
6373390
https://en.wikipedia.org/wiki/Saddle
Saddle
A saddle is a supportive structure for a rider of an animal, fastened to an animal's back by a girth. The most common type is equestrian. However, specialized saddles have been created for oxen, camels and other animals. It is not known precisely when riders first began to use some sort of padding or protection, but a blanket attached by some form of surcingle or girth was probably the first "saddle", followed later by more elaborate padded designs. The solid saddle tree was a later invention, and though early stirrup designs predated the invention of the solid tree, the paired stirrup, which attached to the tree, was the last element of the saddle to reach the basic form that is still used today. Today, modern saddles come in a wide variety of styles, each designed for a specific equestrianism discipline, and require careful fit to both the rider and the horse. Proper saddle care can extend the useful life of a saddle, often for decades. The saddle was a crucial step in the increased use of domesticated animals during the Classical Era.

Etymology

The word "saddle" originates from the Old English word sadol, which in turn comes from the Proto-Germanic language, with cognates in various other Indo-European languages, including the Latin sella.

Parts

Tree: the base on which the rest of the saddle is built, usually of wood or a similar synthetic material. The tree is then covered with leather or a leather-like synthetic. The tree's size determines its fit on the horse's back, as well as the size of the seat for the rider. The tree supports and distributes the weight of the rider.
Seat: the part of the saddle where the rider sits. It is usually lower than the pommel and cantle, to provide security.
Pommel (English) / swells (Western) or saddlebow: the front, slightly raised area of the saddle.
Cantle: the rear of the saddle.
Stirrup: part of the saddle in which the rider's feet are placed; provides support and leverage to the rider.
Leathers and flaps (English), or fenders (Western): the leather straps connecting the stirrups to the saddle tree and leather flaps giving support to the rider's leg and protecting the rider from sweat.
D-ring: a D-shaped ring on the front of a saddle, to which certain pieces of equipment (such as breastplates) can be attached.
Girth or cinch: a wide strap that goes under the horse's barrel, just behind the front legs of the horse, and holds the saddle on.
Panels, lining, or padding: cushioning on the underside of the saddle.

Some saddles also include:
Surcingle: a long strap that goes all the way around the horse's barrel. Depending on purpose, it may be used by itself, placed over a pad or blanket only, or placed over a saddle (often in addition to a girth) to help hold it on.
Monkey grip or, less commonly, jug handle: a handle that may be attached to the front of European saddles or on the right side of Australian stock saddles. Riders may use it to help maintain their seat or to assist in mounting.
Horn: a knob-like appendage attached to the pommel or swells, most commonly associated with the modern western saddle, but seen on some saddle designs in other cultures.
Knee rolls: seen on some English and Australian saddles, extra padding on the front of the flaps to help stabilize the rider's leg. Sometimes thigh rolls are also added to the back of the flap.

History and development

There is evidence, though disputed, that humans first began riding the horse not long after domestication, possibly as early as 4000 BC.
The earliest saddle known thus far was discovered inside a woman's tomb in the Turpan Basin, in what is now Xinjiang, China, dating to between 727 and 396 BC. The saddle is made of cushioned cow hide, and shows signs of usage and repair. The tomb is associated with the Subeixi culture, which is associated with the Jushi Kingdom described in later Chinese sources. The Subeixi people had contact with Scythians, and share a similar material culture with the Pazyryk culture, where later saddles were found. Eurasian and Northern Asian nomads on the Mongolian plateau developed an early form of saddle with a rudimentary frame, which included two parallel leather cushions with a girth attached to them, a pommel and cantle with detachable bone, horn, or hardened-leather facings, leather thongs, a crupper, a breastplate, and a felt shabrack adorned with animal motifs. These were found in Pazyryk burials. These saddles, found on the Ukok Plateau in Siberia, were dated to 500–400 BC. Iconographic evidence of a predecessor to the modern saddle has been found in the art of the ancient Armenians, Assyrians, and steppe nomads depicted on the Assyrian stone relief carvings from the time of Ashurnasirpal II. Some of the earliest saddle-like equipment were fringed cloths or pads used by Assyrian cavalry around 700 BC. These were held on with a girth or surcingle that included breast straps and cruppers. From the earliest depictions, saddles became status symbols. To show off an individual's wealth and status, embellishments were added to saddles, including elaborate sewing and leather work, precious metals such as gold, carvings of wood and horn, and other ornamentation. The Scythians also developed an early saddle that included padding and decorative embellishments. Though they had neither a solid tree nor stirrups, these early treeless saddles and pads provided protection and comfort to the rider, with a slight increase in security. The Sarmatians also used a padded treeless early saddle, possibly as early as the seventh century BC, and ancient Greek artworks of Alexander the Great of Macedon depict a saddle cloth. The Greeks called the saddle cloth or pad ephippium (ἐφίππιον or ἐφίππειον). Early solid-treed saddles were made of felt that covered a wooden frame. Chinese saddles are depicted among the cavalry horses in the Terracotta Army of the Qin dynasty, completed by 206 BC. Asian designs proliferated during China's Han dynasty, around 200 BC. One of the earliest solid-treed saddles in the Western world was the "four horn" design, first used by the Romans as early as the 1st century BC. Neither design had stirrups. Recent archeological finds in Mongolia (e.g. the Urd Ulaan Uneet site) suggest that the Mongolic Rouran tribes had sophisticated wooden-frame saddles as early as the 3rd century AD. The wooden-frame saddle found at the Urd Ulaan Uneet site in Mongolia is one of the earliest examples found in Central and East Asia. The development of the solid saddle tree was significant; it raised the rider above the horse's back, and distributed the rider's weight on either side of the animal's spine instead of pinpointing pressure at the rider's seat bones, reducing the pressure (force per unit area) on any one part of the horse's back, thus greatly increasing the comfort of the horse and prolonging its useful life. The invention of the solid saddle tree also allowed development of the true stirrup as it is known today.
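The mechanical benefit of the tree can be quantified with the pressure relation p = F/A. The figures below are purely illustrative assumptions, not measurements from the sources discussed here: a 75 kg rider exerts a force of roughly F = 75 kg × 9.8 m/s² ≈ 735 N. Concentrated on perhaps 100 cm² under the seat bones, that corresponds to about 7.4 N/cm²; spread by a treed saddle over something like 2,000 cm² of bar contact, it falls to about 0.37 N/cm², a roughly twenty-fold reduction in peak loading on the horse's back.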
Without a solid tree, the rider's weight in the stirrups creates abnormal pressure points and makes the horse's back sore. Thermography studies on "treeless" and flexible-tree saddle designs have found that there is considerable friction across the center line of a horse's back. The stirrup was one of the milestones in saddle development. The first stirrup-like object was invented in India in the 2nd century BC, and consisted of a simple leather strap in which the rider's toe was placed. It offered very little support, however. Mongolic Rouran tribes in Mongolia are thought to have been the inventors of the modern stirrup, but the first dependable representation of a rider with paired stirrups was found in China, in a Jin dynasty tomb of about 302 AD. The stirrup appeared to be in widespread use across China by 477 AD, and later spread to Europe. This invention gave great support to the rider, and was essential in later warfare.

Post-classical West Africa

Accounts of the cavalry system of the Mali Empire describe the use of stirrups and saddles in the cavalry. Stirrups and saddles brought about innovation in new tactics, such as mass charges with thrusting spears and swords.

Middle Ages

Saddles were improved upon during the Middle Ages, as knights needed saddles that were stronger and offered more support. The resulting saddle had a higher cantle and pommel (to prevent the rider from being unseated in warfare) and was built on a wooden tree that supported more weight from a rider with armor and weapons. This saddle, a predecessor to the modern Western saddle, was originally padded with wool or horsehair and covered in leather or textiles. It was later modified for cattle tending and bullfighting, in addition to its continual development for use in war. Other saddles, derived from earlier treeless designs, sometimes added solid trees to support stirrups, but were kept light for use by messengers and for horse racing.

Modernity

The saddle eventually branched off into different designs that became the modern English and Western saddles. One variant of the English saddle was developed by François Robichon de la Guérinière, a French riding master and author of École de Cavalerie, who made major contributions to what today is known as classical dressage. He put great emphasis on the proper development of a "three-point" seat that is still used today by many dressage riders. In the 18th century, fox hunting became increasingly popular in England. The high-cantle, high-pommel design of earlier saddles became a hindrance, unsafe and uncomfortable for riders as they jumped. Because of this, Guérinière's saddle design, which included a low pommel and cantle and allowed more freedom of movement for both horse and rider, became increasingly popular throughout northern Europe. In the early 20th century, Captain Federico Caprilli revolutionized the jumping saddle by placing the flap at an angle that allowed a rider to achieve the forward seat necessary for jumping high fences and traveling rapidly across rugged terrain. The modern Western saddle was developed from the Spanish saddles that were brought by the Spanish conquistadors when they came to the Americas. These saddles were adapted to suit the needs of the vaqueros and cowboys of Mexico, Texas and California, including the addition of a horn that allowed a lariat to be tied or dallied for the purpose of holding cattle and other livestock.
Types

In the Western world there are two basic types of saddles used today for horseback riding, usually called the English saddle and the "stock" saddle. The best-known stock saddle is the American western saddle, followed by the Australian stock saddle. In Asia and throughout the world, there are numerous saddles of unique designs used by various nationalities and ethnic groups.

English

English saddles are used for English riding throughout the world, not just in England or English-speaking countries. They are the saddles used in all of the Olympic equestrian disciplines. The term English saddle encompasses several different styles of saddle, including those used for eventing, show jumping and hunt seat, dressage, saddle seat, horse racing and polo. The major distinguishing features of an English saddle are its flatter appearance, the lack of a horn, and the self-padding design of the panels: a pair of pads attached to the underside of the seat and filled with wool, foam, or air. However, the length and angle of the flaps, the depth of the seat and the height of the cantle all play a role in the use for which a particular saddle is intended. The "tree" that underlies the saddle is usually one of the defining features of saddle quality. Traditionally, the tree of an English saddle is built of laminated layers of high-quality wood reinforced with spring steel along its length, with a riveted gullet plate. These trees are semi-adjustable and are considered "spring trees". They have some give, but a minimum amount of flexibility. More recently, saddle manufacturers have been using various materials to replace wood and create a synthetic molded tree (some with the integrated spring steel and gullet plate, some without). Synthetic materials vary widely in quality. Polyurethane trees are often very well made, but some cheap saddles are made with fiberglass trees of limited durability. Synthetic trees are often lighter, more durable, and easier to customize. Some designs are intended to be more flexible and move with the horse. Several companies offer flexible trees or adjustable gullets that allow the same saddle to be used on different sizes of horses.

Stock

Western saddles are saddles originally designed to be used on horses on working cattle ranches in the United States. Used today in a wide variety of western riding activities, they are the "cowboy saddles" familiar to movie viewers, rodeo fans, and those who have gone on tourist trail rides. The Western saddle has minimal padding of its own, and must be used with a saddle blanket or pad in order to provide a comfortable fit for the horse. It also has sturdier stirrups and uses a cinch rather than a girth. Its most distinctive feature is the horn on the front of the saddle, originally used to dally a lariat when roping cattle. Other nations such as Australia and Argentina have stock saddles that usually do not have a horn, but have other features commonly seen in a western saddle, including a deep seat, high cantle, and heavier leather. The tree of a western saddle is the most critical component, defining the size and shape of the finished product. The tree determines both the width and length of the saddle as it sits on the back of the horse, as well as the length of the seat for the rider, the width of the swells (pommel), the height of the cantle, and, usually, the shape of the horn. Traditional trees were made of wood or wood laminate covered with rawhide, and this style is still manufactured today, though modern synthetic materials are also used.
The rawhide is stretched and molded around the tree, with minimal padding between the tree and the exterior leather, usually a bit of relatively thin padding on the seat, and a sheepskin cover on the underside of the skirts to prevent chafing and rubbing on the horse. Though a western saddle is often considerably heavier than an English saddle, the tree is designed to spread out the weight of the rider and any equipment the rider may be carrying, so that there are fewer pounds per square inch on the horse's back and, when properly fitted, few if any pressure points. Thus, the design, in spite of its weight, can be used for many hours with relatively little discomfort to a properly conditioned horse and rider.

Military

British Universal Pattern military saddles were used by the mounted forces from Australia, Britain, Canada, New Zealand and South Africa. The Steel Arch Universal Pattern Mark I was issued in 1891. This was found to irritate riders, and in 1893 it was discontinued in favour of the Mark II. In 1898, the Mark III appeared, which had the addition of a V-shaped arrangement of strap billets on the sideboards for the attachment of the girth. This girthing system could be moved forward or back to obtain an optimum fit on a wide range of horses. From 1902 the Universal Military Saddle was manufactured with a fixed tree, broad panels to spread the load, and initially a front arch in three sizes. The advantage of this saddle was its lightness, ease of repair and comfort for horse and rider. From 1912 the saddle was built on an adjustable tree, and consequently only one size was needed. Its advantage over the fixed-tree 1902 pattern was its ability to maintain a better fit on the horse's back as the horse gained or lost weight. This saddle was made using traditional methods and featured a seat blocked from sole leather, which maintained its shape well. Military saddles were fitted with metal staples and dees to carry a sword, spare horse shoes and other equipment. In the US, the McClellan saddle was introduced in the 1850s by George B. McClellan for use by the United States Cavalry, and the core design was used continuously, with some improvements, until the 1940s. Today, the McClellan saddle continues to be used by ceremonial mounted units in the U.S. Army. The basic design that inspired McClellan saw use by military units in several other nations, including Rhodesia and Mexico, and even to a degree by the British in the Boer War. Military saddles are still produced and are now used in exhibitions, parades and other events.

Asian

Saddles in Asia date to the time of the Scythians and Cimmerians. Modern Asian saddles can be divided into two groups: those from nomadic Eurasia, which have a prominent horn and leather covering, and those from East Asia, which have a high pommel and cantle. Central Asian saddles are noted for their wide seats and high horns. The saddle has a base of wood with a thin leather covering that frequently has a lacquer finish. Central Asian saddles have no pad and must be ridden with a saddle blanket. The horn is put to particularly good use during the rough horseback sport of buzkashi, played throughout Central Asia, which involves two teams of riders wrestling over a decapitated goat's carcass. In the Near East, a saddle large enough to carry more than one person is called a howdah; these are fitted on elephants.
Some of the largest examples of saddles, elaborate howdahs, were used in warfare, outfitted with weaponry, and alternatively carried monarchs, maharajahs, and sultans. Howdahs continue to play a role in modern Indian ceremonies. In recent years, the elephant chosen to carry the Golden Howdah has been contentious and newsworthy. In 2020, the elephant Arjuna was deemed too old to carry the Golden Howdah after a Supreme Court and Union Government guideline stated that elephants over the age of 60 could no longer serve in this role. A younger, 54-year-old elephant, Abhimanyu, was chosen to carry out the duty instead. In preparation for carrying the Golden Howdah, Abhimanyu's strength and endurance were tested by carrying a large wooden howdah. Saddles from East Asia differ from Central Asian saddles in their high pommel and cantle and lack of a horn. East Asian saddles can be divided into several types that are associated with certain nationalities and ethnic groups. Saddles used by the Han Chinese are noted for their use of inlay work for ornamentation. Tibetan saddles typically employ iron covers inlaid with precious metals on the pommel and cantle and universally come with padding. Mongolian saddles are similar to the Tibetan style, except that they are typically smaller and the seat has a high ridge. Saddles from ethnic minority groups in China's southwest, such as in Sichuan and Yunnan provinces, have colorful lacquer work over a leather covering.

Japanese

Japanese saddles are classified as Chinese-style (karagura) or Japanese-style (yamatogura). In the Nara period the Chinese style was adopted. Gradually the Japanese changed the saddle to suit their needs, and in the Heian period the saddle typically associated with the samurai class was developed. These saddles, known as kura, were lacquered as protection from the weather. Early samurai warfare was conducted primarily on horseback, and the kura provided a rugged, stable, comfortable platform for shooting arrows, but it was not well suited for speed or distance. In the Edo period horses were no longer needed for warfare, and Japanese saddles became quite elaborate, decorated with mother-of-pearl inlays, gold leaf, and designs in colored lacquer.

Other

Sidesaddle, designed originally as a woman's saddle that allowed a rider in a skirt to stay on and control a horse. Sidesaddle riding is still seen today in horse shows, fox hunting, parades and other exhibitions.
Trick (or stunt) riding saddles are similar to western saddles and have a tall metal horn, low front and back, reinforced hand holds and extended double rigging for a wide back girth.
Endurance riding saddle, a saddle designed to be comfortable for the horse, with broad panels but a lightweight design, as well as comfortable for the rider over long hours of riding over challenging terrain.
Police saddle, similar to an English saddle in general design, but with a tree that provides greater security to the rider and distributes a rider's weight over a greater area so that the horse is comfortable with a rider on its back for long hours.
McClellan saddle, a specific American cavalry model that entered service just before the Civil War with the United States Army. It was designed with an English-type tree, but with a higher pommel and cantle. Also, the area upon which the rider sits was divided into two sections with a gap between the two panels.
Pack saddle, similar to a cavalry saddle in the simplicity of its construction, but intended solely for the support of heavy bags or other objects being carried by the horse. Double seat saddles have two pairs of stirrups and two deep padded seats for use when double-banking or riding double with a child behind an adult rider. The western variety has one horn on the front of the saddle. Treeless saddles are available in both Western and English designs and are not built upon a solid saddle tree. They are intended to be flexible and comfortable on a variety of horses, but do not always provide the weight support that a solid tree does. The use of an appropriate saddle pad is essential for treeless saddles. A flexible saddle uses a traditional tree, but the panels are not permanently attached to the finished saddle. These saddles use flexible panels (the part that sits along the horse's back) that are moveable and adjustable to provide a custom fit for the horse and allow for changes of placement as the horse's body develops. Although there is not one specific kind, therapy saddles that aid in the riding experience of those who are taking part in equine-assisted therapy are made to fit differing individuals according to their needs. Typically, these saddles are made of soft materials and allow the rider to sit closer to the back of the animal, which in turn transfers the horse's heat to the rider to allow for muscle relaxation and stimulation. Bareback pad, usually a simple pad in the shape of an English-style saddle pad, made of cordura nylon or leather, padded with fleece, wool or synthetic foam, and equipped with a girth. It is used as an alternative to bareback riding to provide padding for both horse and rider and to help keep the rider's clothing a bit cleaner. Depending on materials, bareback pads offer a bit more grip to the rider's seat and legs. However, though some bareback pads come with handles and even stirrups, these appendages are unsafe without the support of a saddle tree, and pads with them should be avoided. In some cases, the addition of stirrups without a supporting tree places pressure on the horse's spinous processes, potentially causing damage. Fitting A saddle, regardless of type, must fit both horse and rider. Saddle fitting is an art and in ideal circumstances is performed by a professional saddle maker or saddle fitter. Custom-made saddles designed for an individual horse and rider will fit the best, but are also the most expensive. However, many manufactured saddles provide a decent fit if properly selected, and some minor adjustments can be made. The definition of a well-fitting saddle is still controversial; however, one vital rule is that no damage should occur to the horse's skin and no injury should be caused to any muscular or neural tissues beneath the saddle. Width of the saddle is the primary means by which a saddle is measured and fitted to a horse, though length of the tree and proper balance must also be considered. The gullet of a saddle must clear the withers of the horse, yet must not be so narrow as to pinch the horse's back. The tree must be positioned so that the tree points (English) or bars (Western) do not interfere with the movement of the horse's shoulder. The seat of the saddle must be positioned so that the rider, when riding correctly, is placed over the horse's center of balance. The bars of the saddle must not be so long that they place pressure beyond the last rib of the horse.
A too-short tree alone does not usually create a problem, as shorter trees are most often on saddles made for children, though a short tree with an unbalanced adult rider may create abnormal pressure points. While a horse's back can be measured for size and shape, the saddle must be tried on the individual animal to assure proper fit. Saddle blankets or pads can provide assistance to correct minor fit problems, as well as provide comfort and protection to the horse's back, but no amount of padding can compensate for a poorly fitting saddle. For example, saddles that are either too wide or too narrow for the horse will cause changes in pressure points and ultimately muscle atrophy in the epaxial muscles. The common problems associated with saddle fitting are bridging, ill-fitting headplates, and incorrect stuffing of the panels. Saddle-related injuries Contact-point injuries Depending on the rider, the saddle may need to be adjusted or replaced entirely to ensure a proper fit. Riding in a saddle that does not properly secure and balance the rider can cause pain in the hips and back, as well as saddle sores under the bones that make contact with the saddle during riding. Saddle-horn injury On horseback, a rider's pelvis may receive a saddle-horn injury from falling onto the saddle after being bounced into the air. The strikes against the saddle's horn compress the pelvic ring, which can lead to further complications such as injury to the pubic symphysis or the sacroiliac joint.
https://en.wikipedia.org/wiki/Einstein%20solid
Einstein solid
The Einstein solid is a model of a crystalline solid that contains a large number of independent three-dimensional quantum harmonic oscillators of the same frequency. The independence assumption is relaxed in the Debye model. While the model provides qualitative agreement with experimental data, especially for the high-temperature limit, these oscillations are in fact phonons, or collective modes involving many atoms. Albert Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem of classical mechanics. Historical impact The original theory proposed by Einstein in 1907 has great historical relevance. The heat capacity of solids as predicted by the empirical Dulong–Petit law was required by classical mechanics: the specific heat of solids should be independent of temperature. But experiments at low temperatures showed that the heat capacity changes, going to zero at absolute zero. As the temperature goes up, the specific heat goes up until it approaches the Dulong and Petit prediction at high temperature. By employing Planck's quantization assumption, Einstein's theory accounted for the observed experimental trend for the first time. Together with the photoelectric effect, this became one of the most important pieces of evidence for the need of quantization. Einstein used the levels of the quantum mechanical oscillator many years before the advent of modern quantum mechanics. Heat capacity For a thermodynamic approach, the heat capacity can be derived using different statistical ensembles. All solutions are equivalent at the thermodynamic limit. Microcanonical ensemble The heat capacity of an object at constant volume V is defined through the internal energy U as $C_V = \left(\frac{\partial U}{\partial T}\right)_V$. The temperature of the system, $T$, can be found from the entropy: $\frac{1}{T} = \frac{\partial S}{\partial U}$. To find the entropy, consider a solid made of $N$ atoms, each of which has 3 degrees of freedom. So there are $3N$ quantum harmonic oscillators (hereafter SHOs for "Simple Harmonic Oscillators"). Possible energies of an SHO are given by $E_n = \hbar\omega\left(n + \tfrac{1}{2}\right)$, where $n$ is usually interpreted as the excitation state of the oscillating mass, but here is interpreted as the number of phonons (bosons) occupying that vibrational mode (frequency). The net effect is that the energy levels are evenly spaced, and one can define a quantum of energy due to a phonon as $\varepsilon = \hbar\omega$, which is the smallest and only amount by which the energy of an SHO is increased. Next, we must compute the multiplicity of the system. That is, compute the number of ways to distribute $q$ quanta of energy among the $3N$ SHOs. This task becomes simpler if one thinks of distributing $q$ pebbles over $3N$ boxes, or separating stacks of pebbles with $3N - 1$ partitions, or arranging $q$ pebbles and $3N - 1$ partitions. The last picture is the most telling. The number of arrangements of $n$ objects is $n!$. So the number of possible arrangements of $q$ pebbles and $3N - 1$ partitions is $(q + 3N - 1)!$. However, if partition #3 and partition #5 trade places, no one would notice. The same argument goes for quanta. To obtain the number of possible distinguishable arrangements one has to divide the total number of arrangements by the number of indistinguishable arrangements. There are $q!$ identical quanta arrangements, and $(3N - 1)!$ identical partition arrangements. Therefore, the multiplicity of the system is given by $\Omega = \frac{(q + 3N - 1)!}{q!\,(3N - 1)!}$, which, as mentioned before, is the number of ways to deposit $q$ quanta of energy into $3N$ oscillators.
Entropy of the system has the form $\frac{S}{k} = \ln\Omega = \ln\frac{(q + 3N - 1)!}{q!\,(3N - 1)!}$. $3N$ is a huge number—subtracting one from it has no overall effect whatsoever: $\frac{S}{k} \approx \ln\frac{(q + 3N)!}{q!\,(3N)!}$. With the help of Stirling's approximation, entropy can be simplified: $\frac{S}{k} \approx (q + 3N)\ln(q + 3N) - q\ln q - 3N\ln 3N$. Total energy of the solid is given by $U = \frac{3N\varepsilon}{2} + q\varepsilon$, since there are q energy quanta in total in the system in addition to the ground state energy of each oscillator. Some authors, such as Schroeder, omit this ground state energy in their definition of the total energy of an Einstein solid. We are now ready to compute the temperature: $\frac{1}{T} = \frac{\partial S}{\partial U} = \frac{1}{\varepsilon}\frac{\partial S}{\partial q} = \frac{k}{\varepsilon}\ln\left(1 + \frac{3N}{q}\right)$. Elimination of q between the two preceding formulas gives for U: $U = \frac{3N\varepsilon}{2} + \frac{3N\varepsilon}{e^{\varepsilon/kT} - 1}$. The first term is associated with zero point energy and does not contribute to specific heat. It will therefore be lost in the next step. Differentiating with respect to temperature to find $C_V$ we obtain: $C_V = \frac{\partial U}{\partial T} = 3Nk\left(\frac{\varepsilon}{kT}\right)^2 \frac{e^{\varepsilon/kT}}{\left(e^{\varepsilon/kT} - 1\right)^2}$, or equivalently $C_V = 3Nk\left(\frac{\varepsilon}{2kT}\right)^2 \operatorname{csch}^2\left(\frac{\varepsilon}{2kT}\right)$. Although the Einstein model of the solid predicts the heat capacity accurately at high temperatures, where in this limit $C_V \to 3Nk$, which is equivalent to the Dulong–Petit law, the heat capacity noticeably deviates from experimental values at low temperatures. See Debye model for how to calculate accurate low-temperature heat capacities. Canonical ensemble Heat capacity is obtained through the use of the canonical partition function of a simple quantum harmonic oscillator, $Z = \sum_{n=0}^{\infty} e^{-E_n/kT}$, where $E_n = \hbar\omega\left(n + \tfrac{1}{2}\right)$; substituting this into the partition function formula yields $Z = e^{-\hbar\omega/2kT}\sum_{n=0}^{\infty} e^{-n\hbar\omega/kT} = \frac{e^{-\hbar\omega/2kT}}{1 - e^{-\hbar\omega/kT}} = \frac{1}{2\sinh\left(\frac{\hbar\omega}{2kT}\right)}$. This is the partition function of one harmonic oscillator. Because, statistically, heat capacity, energy, and entropy of the solid are equally distributed among its atoms, we can work with this partition function to obtain those quantities and then simply multiply them by $3N$ to get the total. Next, let's compute the average energy of each oscillator, $\langle E \rangle = -\frac{1}{Z}\frac{\partial Z}{\partial \beta}$, where $\beta = \frac{1}{kT}$. Therefore, $\langle E \rangle = \frac{\hbar\omega}{2}\coth\left(\frac{\hbar\omega}{2kT}\right)$. Heat capacity of one oscillator is then $C = \frac{\partial \langle E \rangle}{\partial T} = k\left(\frac{\hbar\omega}{2kT}\right)^2 \operatorname{csch}^2\left(\frac{\hbar\omega}{2kT}\right)$. Up to now, we calculated the heat capacity of a unique degree of freedom, which has been modeled as a quantum harmonic oscillator. The heat capacity of the entire solid is then given by $C_V = 3NC$, where the total number of degrees of freedom of the solid is three (for the three directional degrees of freedom) times $N$, the number of atoms in the solid. One thus obtains $C_V = 3Nk\left(\frac{\hbar\omega}{2kT}\right)^2 \operatorname{csch}^2\left(\frac{\hbar\omega}{2kT}\right)$, which is algebraically identical to the formula derived in the previous section. The quantity $T_E = \hbar\omega/k$ has the dimensions of temperature and is a characteristic property of a crystal. It is known as the Einstein temperature. Hence, the Einstein crystal model predicts that the energy and heat capacities of a crystal are universal functions of the dimensionless ratio $T/T_E$. Similarly, the Debye model predicts a universal function of the ratio $T/T_D$, where $T_D$ is the Debye temperature. Limitations and succeeding model In Einstein's model, the specific heat approaches zero exponentially fast at low temperatures. This is because all the oscillations have one common frequency. The correct behavior is found by quantizing the normal modes of the solid in the same way that Einstein suggested. Then the frequencies of the waves are not all the same, and the specific heat goes to zero as a $T^3$ power law, which matches experiment. This modification is called the Debye model, which appeared in 1912.
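The closed-form result above is easy to evaluate numerically. The following is a minimal Python sketch, not part of the original derivation: it computes the reduced heat capacity $C_V/3Nk$ as a function of $T/T_E$ and confirms the Dulong–Petit limit at high temperature. The function name and the sample temperature ratios are illustrative choices, not standard notation.

```python
import numpy as np

def einstein_heat_capacity(T, T_E):
    """Reduced heat capacity C_V / (3*N*k) of an Einstein solid
    at temperature T, for Einstein temperature T_E."""
    x = T_E / np.asarray(T, dtype=float)
    # np.expm1 keeps the denominator accurate when x is small (high T).
    return x**2 * np.exp(x) / np.expm1(x)**2

# High-temperature limit: C_V -> 3Nk (the Dulong-Petit value, 1.0 in
# these reduced units); low temperature: exponential fall-off.
for ratio in (0.1, 0.5, 1.0, 2.0, 10.0):
    c = float(einstein_heat_capacity(ratio, 1.0))
    print(f"T/T_E = {ratio:5.1f}  ->  C_V/(3Nk) = {c:.4f}")
```

At $T/T_E = 10$ the output is already within about 0.1% of the Dulong–Petit value, while at $T/T_E = 0.1$ it has collapsed to roughly $0.005$, illustrating the exponential low-temperature suppression discussed above.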
https://en.wikipedia.org/wiki/Neon%20lighting
Neon lighting
Neon lighting consists of brightly glowing, electrified glass tubes or bulbs that contain rarefied neon or other gases. Neon lights are a type of cold cathode gas-discharge light. A neon tube is a sealed glass tube with a metal electrode at each end, filled with one of a number of gases at low pressure. A high potential of several thousand volts applied to the electrodes ionizes the gas in the tube, causing it to emit colored light. The color of the light depends on the gas in the tube. Neon lights were named for neon, a noble gas which gives off a popular orange light, but other gases, along with fluorescent coatings called phosphors, are used to produce other colors, such as hydrogen (purple-red), helium (yellow or pink), carbon dioxide (white), and mercury (blue). Neon tubes can be fabricated in curving artistic shapes, to form letters or pictures. They are mainly used to make dramatic, multicolored glowing signage for advertising, called neon signs, which were popular from the 1920s to 1960s and again in the 1980s. The term can also refer to the miniature neon glow lamp, developed in 1917, about seven years after neon tube lighting. While neon tube lights are typically meters long, the neon lamps can be less than one centimeter in length and glow much more dimly than the tube lights. They are still in use as small indicator lights. Through the 1970s, neon glow lamps were widely used for numerical displays in electronics, for small decorative lamps, and as signal-processing devices in circuitry. While these lamps are now antiques, the technology of the neon glow lamp developed into contemporary plasma displays and televisions. Neon was discovered in 1898 by the British scientists William Ramsay and Morris W. Travers. After obtaining pure neon from the atmosphere, they explored its properties using an "electrical gas-discharge" tube that was similar to the tubes used for neon signs today. Georges Claude, a French engineer and inventor, presented neon tube lighting in essentially its modern form at the Paris Motor Show, December 3–18, 1910. Claude, sometimes called "the Edison of France", had a near monopoly on the new technology, which became very popular for signage and displays in the period 1920–1940. Neon lighting was an important cultural phenomenon in the United States in that era; by 1940, the downtowns of nearly every city in the US were bright with neon signage, and Times Square in New York City was known worldwide for its neon extravagances. There were 2,000 shops nationwide designing and fabricating neon signs. The popularity, intricacy, and scale of neon signage for advertising declined in the U.S. following the Second World War (1939–1945), but development continued vigorously in Japan, Iran, and some other countries. In recent decades architects and artists, in addition to sign designers, have again adopted neon tube lighting as a component in their works. Neon lighting is closely related to fluorescent lighting, which developed about 25 years after neon tube lighting. In fluorescent lights, the light emitted by rarefied gases within a tube is used exclusively to excite fluorescent materials that coat the tube, which then shine with their own colors that become the tube's visible, usually white, glow. Fluorescent coatings and glasses are also an option for neon tube lighting, but are usually selected to obtain bright colors. History and science Neon is a noble gas chemical element and an inert gas that is a minor component of the Earth's atmosphere.
It was discovered in 1898 by the British scientists William Ramsay and Morris W. Travers. When Ramsay and Travers had succeeded in obtaining pure neon from the atmosphere, they explored its properties using an "electrical gas-discharge" tube that was similar to the tubes used today for neon signs. Travers later wrote, "the blaze of crimson light from the tube told its own story and was a sight to dwell upon and never forget." The procedure of examining the colors of the light emitted from gas-discharge (or "Geissler") tubes was well known at the time, since the colors of light (the "spectral lines") emitted by a gas-discharge tube are, effectively, fingerprints that identify the gases inside. Immediately following neon's discovery, neon tubes were used as scientific instruments and novelties. However, the scarcity of purified neon gas precluded its prompt application for electrical gas-discharge lighting along the lines of Moore tubes, which used more common nitrogen or carbon dioxide as the working gas, and enjoyed some commercial success in the US in the early 1900s. After 1902, Georges Claude's company in France, Air Liquide, began producing industrial quantities of neon as a byproduct of the air liquefaction business. From December 3 to 18, 1910, Claude demonstrated two large, bright red neon tubes at the Paris Motor Show. These neon tubes were essentially in their contemporary form. The outer diameter of the glass tubing used in neon lighting ranges from 9 to 25 mm; with standard electrical equipment, the tubes can be several metres long. The pressure of the gas inside ranges from 3 to 20 Torr (0.4–3 kPa), which corresponds to a partial vacuum in the tubing. Claude had also solved two technical problems that substantially shortened the working life of neon and some other gas-discharge tubes, and effectively gave birth to a neon lighting industry. In 1915, a US patent was issued to Claude covering the design of the electrodes for gas-discharge lighting; this patent became the basis for the monopoly held in the US by his company, Claude Neon Lights, for neon signs through the early 1930s. Claude's patents envisioned the use of gases such as argon and mercury vapor to create different colors beyond those produced by neon. For instance, mixing metallic mercury with neon gas creates blue. Green can then be achieved using uranium (yellow) glass. White and gold can also be created by adding argon and helium. In the 1920s, fluorescent glasses and coatings were developed to further expand the range of colors and effects for tubes with argon gas or argon-neon mixtures; generally, the fluorescent coatings are used with an argon/mercury-vapor mixture, which emits ultraviolet light that activates the fluorescent coatings. By the 1930s, the colors from combinations of neon tube lights had become satisfactory for some general interior lighting applications, and achieved some success in Europe, but not in the US. Since the 1950s, the development of phosphors for color televisions has created nearly 100 new colors for neon tube lighting. Around 1917, Daniel McFarlan Moore, then working at the General Electric Company, developed the miniature neon lamp. The glow lamp has a very different design than the much larger neon tubes used for signage; the difference was sufficient that a separate US patent was issued for the lamp in 1919. A Smithsonian Institution website notes, "These small, low power devices use a physical principle called 'coronal discharge'."
Moore mounted two electrodes close together in a bulb and added neon or argon gas. The electrodes would glow brightly in red or blue, depending on the gas, and the lamps lasted for years. Since the electrodes could take almost any shape imaginable, a popular application has been fanciful decorative lamps. Glow lamps found practical use as electronic components, and as indicators in instrument panels and in many home appliances, until the acceptance of light-emitting diodes (LEDs) starting in the 1970s. Although some neon lamps themselves are now antiques, and their use in electronics has declined markedly, the technology has continued to develop in artistic and entertainment contexts. Neon lighting technology has been reshaped from long tubes into thin flat panels used for plasma displays and plasma television sets. Neon tube lighting and signs When Georges Claude demonstrated an impressive, practical form of neon tube lighting in 1910, he apparently envisioned that it would be used as a form of lighting, which had been the application of the earlier Moore tubes that were based on nitrogen and carbon dioxide discharges. Claude's 1910 demonstration of neon lighting at the Grand Palais (Grand Palace) in Paris lit a peristyle of this large exhibition space. Claude's associate, Jacques Fonseque, realized the possibilities for a business based on signage and advertising. By 1913 a large sign for the vermouth Cinzano illuminated the night sky in Paris, and by 1919 the entrance to the Paris Opera was adorned with neon tube lighting. Neon signage was received with particular enthusiasm in the United States. In 1923, Earle C. Anthony purchased two neon signs from Claude for his Packard car dealership in Los Angeles, California; these literally stopped traffic. Claude's US patents had secured him a monopoly on neon signage, and following Anthony's success with neon signs, many companies arranged franchises with Claude to manufacture neon signs. In many cases companies were given exclusive licenses for the production of neon signs in a given geographical area; by 1931, the value of the neon sign business was $16.9 million, of which a significant percentage was paid to Claude Neon Lights, Inc. through the franchising arrangements. Claude's principal patent expired in 1932, which led to a great expansion in the production of neon signage. The industry's sales in 1939 were about $22.0 million; the expansion in volume from 1931 to 1939 was much larger than the ratio of sales in the two years suggests. Rudi Stern has written, "The 1930s were years of great creativity for neon, a period when many design and animation techniques were developed. ... Men like O. J. Gude and, in particular, Douglas Leigh took neon advertising further than Georges Claude and his associates had ever envisioned. Leigh, who conceived and created the archetypal Times Square spectacular, experimented with displays that incorporated smells, fog, and sounds as part of their total effect. ... Much of the visual excitement of Times Square in the thirties was a result of Leigh's genius as a kinetic and luminal artist." Major cities throughout the United States and in several other countries also had elaborate displays of neon signs. Events such as the Chicago Century of Progress Exposition (1933–34), the Paris World's Fair (1937) and the New York World's Fair (1939) were remarkable for their extensive use of neon tubes as architectural features.
Stern has argued that the creation of "glorious" neon displays for movie theaters led to an association of the two: "One's joy in going to the movies became inseparably associated with neon." The Second World War (1939–1945) arrested new sign installations around most of the world. Following the war, the industry resumed. Marcus Thielen writes of this era, "...after World War II, government programs were established to help re-educate soldiers. The Egani Institute (New York City) was one of few schools in the country that taught neon-trade secrets. The American streamlined design from the 1950s would be unimaginable without the use of neon." The development of Las Vegas, Nevada as a resort city is inextricably linked with neon signage; Tom Wolfe wrote in 1965, "Las Vegas is the only city in the world whose skyline is made neither of buildings, like New York, nor of trees, like Wilbraham, Massachusetts, but signs. One can look at Las Vegas from a mile away on route 91 and see no buildings, no trees, only signs. But such signs! They tower. They revolve, they oscillate, they soar in shapes before which the existing vocabulary of art history is helpless." Overall, however, neon displays became less fashionable, and some cities discouraged their construction with ordinances. Nelson Algren titled his 1947 collection of short stories The Neon Wilderness (as a synonym of "urban jungle" for Chicago). Margalit Fox has written, "... after World War II, as neon signs were replaced increasingly by fluorescent-lighted plastic, the art of bending colored tubes into sinuous, gas-filled forms began to wane." This dark age persisted until the 1970s, when artists adopted neon with enthusiasm; in 1979 Rudi Stern published his manifesto, Let There Be Neon. Marcus Thielen wrote in 2005, on the 90th anniversary of the US patent issued to Georges Claude, "The demand for the use of neon and cold cathode in architectural applications is growing, and the introduction of new techniques like fiber optics and LED into the sign market have strengthened, rather than replaced, neon technology. The evolution of the 'waste' product neon tube remains incomplete 90 years after the patent was filed."
In 1964, at the University of Illinois, the first monochrome dot-matrix plasma displays were developed for the PLATO educational system. Inventors Donald L. Bitzer, H. Gene Slottow, and Robert H. Wilson created a display that could retain its state without constant updates. In 2006, Larry F. Weber explained that modern plasma TVs still use key features of these early displays, such as alternating sustain voltage and a neon-based gas mixture. Plasma displays emit ultraviolet light, with each pixel containing phosphors for red, green, or blue light. Neon lighting and artists in light The mid to late 1980s was a period of resurgence in neon production. Sign companies developed a new type of signage called channel lettering, in which individual letters were fashioned from sheet metal. While the market for neon lighting in outdoor advertising signage has declined since the mid twentieth century, in recent decades neon lighting has been used consciously in art, both in individual objects and integrated into architecture. Frank Popper traces the use of neon lighting as the principal element in artworks to Gyula Košice's late 1940s work in Argentina. Among the later artists whom Popper notes in a brief history of neon lighting in art are Stephen Antonakos, the conceptual artists Billy Apple, Joseph Kosuth, Bruce Nauman, Martial Raysse, Chryssa, Piotr Kowalski, Maurizio Nannucci and François Morellet in addition to Lucio Fontana or Mario Merz. Several museums in the United States are now devoted to neon lighting and art, including the Museum of Neon Art (founded by neon artist Lili Lakich, Los Angeles, 1981), the Neon Museum (Las Vegas, founded 1996), the American Sign Museum (Cincinnati, founded 1999). These museums restore and display historical signage that was originally designed as advertising, in addition to presenting exhibits of neon art. Several books of photographs have also been published to draw attention to neon lighting as art. List of neon light artists Billy Apple (1935) New Zealand / USA Frida Blumenberg (1935) South Africa Chryssa (1962) Greek-American Michael Flechtner (1951) US Michael Hayden (1943) Canada Joseph Kosuth (1965) US Piotr Kowalski (1927) Poland, France Brigitte Kowanz (1957) Austria Lili Lakich (1944) US Mario Merz (1925) Italy Victor Millonzi (1915) US Maurizio Nannucci (1939) Italy Bruce Nauman (1941) US Carla O'Brien Australia LED neon flex Bill Parker (1950) US - plasma lamp Stepan Ryabchenko (1987) Ukraine Lisa Schulte (1956) US Keith Sonnier (1941) US Rudi Stern (1936) US Tim White-Sobieski (1961) Poland
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance spectroscopy, most commonly known as NMR spectroscopy or magnetic resonance spectroscopy (MRS), is a spectroscopic technique based on re-orientation of atomic nuclei with non-zero nuclear spins in an external magnetic field. This re-orientation occurs with absorption of electromagnetic radiation in the radio frequency region from roughly 4 to 900 MHz, which depends on the isotopic nature of the nucleus and increases proportionally to the strength of the external magnetic field. Notably, the resonance frequency of each NMR-active nucleus depends on its chemical environment. As a result, NMR spectra provide information about individual functional groups present in the sample, as well as about connections between nearby nuclei in the same molecule. As the NMR spectra are unique or highly characteristic to individual compounds and functional groups, NMR spectroscopy is one of the most important methods to identify molecular structures, particularly of organic compounds. The principle of NMR usually involves three sequential steps: The alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B0. The perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio-frequency (RF) pulse. Detection and analysis of the electromagnetic waves emitted by the nuclei of the sample as a result of this perturbation. Biochemists likewise use NMR to identify proteins and other complex molecules. Besides identification, NMR spectroscopy provides detailed information about the structure, dynamics, reaction state, and chemical environment of molecules. The most common types of NMR are proton and carbon-13 NMR spectroscopy, but the technique is applicable to any kind of sample that contains nuclei possessing spin. NMR spectra are unique, well-resolved, analytically tractable and often highly predictable for small molecules. Different functional groups are clearly distinguishable, and identical functional groups with differing neighboring substituents still give distinguishable signals. NMR has largely replaced traditional wet-chemistry tests such as color reagents or typical chromatography for identification. The most significant drawback of NMR spectroscopy is its poor sensitivity (compared to other analytical methods, such as mass spectrometry). Typically 2–50 mg of a substance is required to record a decent-quality NMR spectrum. The NMR method is non-destructive, thus the substance may be recovered. To obtain high-resolution NMR spectra, solid substances are usually dissolved to make liquid solutions, although solid-state NMR spectroscopy is also possible. The timescale of NMR is relatively long, and thus it is not suitable for observing fast phenomena, producing only an averaged spectrum. Although large amounts of impurities do show on an NMR spectrum, better methods exist for detecting impurities, as NMR is inherently not very sensitive, though sensitivity is higher at higher frequencies. Correlation spectroscopy is a development of ordinary NMR. In two-dimensional NMR, the emission is centered around a single frequency, and correlated resonances are observed. This allows identification of the neighboring substituents of the observed functional group, enabling unambiguous assignment of the resonances. There are also more complex 3D and 4D methods and a variety of methods designed to suppress or amplify particular types of resonances.
In nuclear Overhauser effect (NOE) spectroscopy, the relaxation of the resonances is observed. As NOE depends on the proximity of the nuclei, quantifying the NOE for each nucleus allows construction of a three-dimensional model of the molecule. NMR spectrometers are relatively expensive; universities usually have them, but they are less common in private companies. Between 2000 and 2015, an NMR spectrometer cost around 0.5–5 million USD. Modern NMR spectrometers have a very strong, large and expensive liquid-helium-cooled superconducting magnet, because resolution directly depends on magnetic field strength. A higher magnetic field also improves the sensitivity of NMR spectroscopy, which depends on the population difference between the two nuclear levels; this difference increases exponentially with the magnetic field strength. Less expensive machines using permanent magnets and lower resolution are also available, which still give sufficient performance for certain applications such as reaction monitoring and quick checking of samples. There are even benchtop nuclear magnetic resonance spectrometers. NMR spectra of protons (1H nuclei) can be observed even in the Earth's magnetic field. Low-resolution NMR produces broader peaks, which can easily overlap one another, causing issues in resolving complex structures. The use of higher-strength magnetic fields results in better sensitivity and higher resolution of the peaks, and is preferred for research purposes. History Credit for the discovery of NMR goes to Isidor Isaac Rabi, who received the Nobel Prize in Physics in 1944. The Purcell group at Harvard University and the Bloch group at Stanford University independently developed NMR spectroscopy in the late 1940s and early 1950s. Edward Mills Purcell and Felix Bloch shared the 1952 Nobel Prize in Physics for their inventions. NMR-active criteria The key determinant of NMR activity in atomic nuclei is the nuclear spin quantum number (I). This intrinsic quantum property, similar to an atom's "spin", characterizes the angular momentum of the nucleus. To be NMR-active, a nucleus must have a non-zero nuclear spin (I ≠ 0). It is this non-zero spin that enables nuclei to interact with external magnetic fields and show signals in NMR. Atoms with an odd sum of protons and neutrons exhibit half-integer values for the nuclear spin quantum number (I = 1/2, 3/2, 5/2, and so on). These atoms are NMR-active because they possess non-zero nuclear spin. Atoms with an even sum of protons and neutrons, but with an odd number of each, exhibit integer nuclear spins (I = 1, 2, 3, and so on). Conversely, atoms with an even number of both protons and neutrons have a nuclear spin quantum number of zero (I = 0), and therefore are not NMR-active. NMR-active nuclei, particularly those with a spin quantum number of 1/2, are of great significance in NMR spectroscopy. Examples include 1H, 13C, 15N, and 31P. Some nuclei with very high spin (such as 9/2 for 99Tc) are also extensively studied with NMR spectroscopy. Main aspects of NMR techniques Resonant frequency When placed in a magnetic field, NMR-active nuclei (such as 1H or 13C) absorb electromagnetic radiation at a frequency characteristic of the isotope. The resonant frequency, energy of the radiation absorbed, and the intensity of the signal are proportional to the strength of the magnetic field. For example, in a 21-tesla magnetic field, hydrogen nuclei (protons) resonate at 900 MHz, as the short calculation sketched below illustrates.
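As a rough check of the numbers quoted above, the Larmor relation f = γB/2π can be evaluated directly. This is a minimal sketch assuming the standard proton gyromagnetic ratio; the function name is our own, and nothing here is specific to any particular instrument.

```python
import math

# Gyromagnetic ratio of 1H, approximately 267.522e6 rad s^-1 T^-1.
GAMMA_1H = 267.522e6

def larmor_frequency_mhz(b_field_tesla, gamma=GAMMA_1H):
    """Resonance frequency in MHz for a given field strength in tesla."""
    return gamma * b_field_tesla / (2 * math.pi) / 1e6

print(larmor_frequency_mhz(21.1))  # ~898 MHz: the "900 MHz" magnet
print(larmor_frequency_mhz(11.7))  # ~498 MHz: a common "500 MHz" instrument
```

The same relation, with a different gyromagnetic ratio, gives the lower resonance frequency of each other NMR-active nucleus at the same field strength.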
It is common to refer to a 21 T magnet as a 900 MHz magnet, since hydrogen is the most common nucleus detected. However, different nuclei will resonate at different frequencies at this field strength, in proportion to their nuclear magnetic moments. Sample handling An NMR spectrometer typically consists of a spinning sample-holder inside a very strong magnet, a radio-frequency emitter, a receiver with a probe (an antenna assembly) that goes inside the magnet to surround the sample, optionally gradient coils for diffusion measurements, and electronics to control the system. Spinning the sample is usually necessary to average out diffusional motion; however, some experiments call for a stationary sample when solution movement is an important variable. For instance, measurements of diffusion constants (diffusion-ordered spectroscopy, or DOSY) are done using a stationary sample with spinning off, and flow cells can be used for online analysis of process flows. Deuterated solvents The vast majority of molecules in a solution are solvent molecules, and most regular solvents are hydrocarbons and so contain NMR-active hydrogen-1 nuclei. In order to avoid having the signals from solvent hydrogen atoms overwhelm the experiment and interfere in analysis of the dissolved analyte, deuterated solvents are used where >99% of the protons are replaced with deuterium (hydrogen-2). The most widely used deuterated solvent is deuterochloroform (CDCl3), although other solvents may be used for various reasons, such as solubility of a sample, desire to control hydrogen bonding, or melting or boiling points. The chemical shifts of a molecule change slightly between solvents, and therefore the solvent used is almost always reported with chemical shifts. Proton NMR spectra are often calibrated against the known solvent residual proton peak as an internal standard instead of adding tetramethylsilane (TMS), which is conventionally defined as having a chemical shift of zero. Shim and lock To detect the very small frequency shifts due to nuclear magnetic resonance, the applied magnetic field must be extremely uniform throughout the sample volume. High-resolution NMR spectrometers use shims to adjust the homogeneity of the magnetic field to parts per billion (ppb) in a volume of a few cubic centimeters. In order to detect and compensate for inhomogeneity and drift in the magnetic field, the spectrometer maintains a "lock" on the solvent deuterium frequency with a separate lock unit, which is essentially an additional transmitter and RF processor tuned to the lock nucleus (deuterium) rather than the nuclei of the sample of interest. In modern NMR spectrometers shimming is adjusted automatically, though in some cases the operator has to optimize the shim parameters manually to obtain the best possible resolution. Acquisition of spectra Upon excitation of the sample with a radio-frequency (60–1000 MHz) pulse, a nuclear magnetic resonance response, a free induction decay (FID), is obtained. It is a very weak signal and requires sensitive radio receivers to pick it up. A Fourier transform is carried out to extract the frequency-domain spectrum from the raw time-domain FID; a toy simulation of this step is sketched below. A spectrum from a single FID has a low signal-to-noise ratio, but it improves readily with averaging of repeated acquisitions. Good 1H NMR spectra can be acquired with 16 repeats, which takes only minutes. However, for elements heavier than hydrogen, the relaxation time is rather long, e.g. around 8 seconds for 13C.
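To make the FID-to-spectrum step concrete, here is a small self-contained simulation, purely illustrative and not tied to any real instrument: it synthesizes a noisy, exponentially damped cosine as a stand-in for an FID, averages 16 acquisitions as mentioned above, and Fourier-transforms the average into a frequency-domain peak. All parameters (offset frequency, decay constant, noise level) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 4096)  # acquisition time axis, seconds

def acquire_fid(noise_level=2.0):
    """One simulated scan: damped cosine at a 100 Hz offset, T2* = 0.5 s."""
    signal = np.cos(2 * np.pi * 100.0 * t) * np.exp(-t / 0.5)
    return signal + noise_level * rng.standard_normal(t.size)

# Averaging n scans improves signal-to-noise roughly as sqrt(n).
n_scans = 16
fid_avg = sum(acquire_fid() for _ in range(n_scans)) / n_scans

# Fourier transform the time-domain FID into a frequency-domain spectrum.
spectrum = np.abs(np.fft.rfft(fid_avg))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(f"peak at {freqs[spectrum.argmax()]:.1f} Hz")  # ~100 Hz
```

Running with n_scans = 1 instead of 16 shows the peak nearly buried in noise, which is the practical reason repeated acquisitions are averaged.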
Because of such long relaxation times, acquisition of quantitative heavy-element spectra can be time-consuming, taking tens of minutes to hours. Following the pulse, the nuclei are, on average, excited to a certain angle relative to the spectrometer magnetic field. The extent of excitation can be controlled with the pulse width, typically about 3–8 μs for the optimal 90° pulse. The pulse width can be determined by plotting the (signed) intensity as a function of pulse width. It follows a sine curve and, accordingly, changes sign at pulse widths corresponding to 180° and 360° pulses. Decay times of the excitation, typically measured in seconds, depend on the effectiveness of relaxation, which is faster for lighter nuclei and in solids, slower for heavier nuclei and in solutions, and can be very long in gases. If the second excitation pulse is sent prematurely, before the relaxation is complete, the average magnetization vector has not decayed to the ground state, which affects the strength of the signal in an unpredictable manner. In practice, the peak areas are then not proportional to the stoichiometry; only the presence, but not the amount, of functional groups can be discerned. An inversion recovery experiment can be done to determine the relaxation time and thus the required delay between pulses. A 180° pulse, an adjustable delay, and a 90° pulse are transmitted. When the 90° pulse exactly cancels out the signal, the delay corresponds to the time needed for 90° of relaxation (Parella, T., "T1 Measurement using Inversion-Recovery", NMRGuide3.5). Inversion recovery is worthwhile for quantitative 13C, 2D and other time-consuming experiments. Spectral interpretation NMR signals are ordinarily characterized by three variables: chemical shift, spin–spin coupling, and relaxation time. Chemical shift The energy difference ΔE between nuclear spin states is proportional to the magnetic field (Zeeman effect). ΔE is also sensitive to the electronic environment of the nucleus, giving rise to what is known as the chemical shift, δ. The simplest types of NMR graphs are plots of the different chemical shifts of the nuclei being studied in the molecule. The value of δ is often expressed in terms of "shielding": shielded nuclei have higher ΔE. The range of δ values is called the dispersion. It is rather small for 1H signals, but much larger for other nuclei. NMR signals are reported relative to a reference signal, usually that of TMS (tetramethylsilane). Additionally, since the distribution of NMR signals is field-dependent, these frequencies are divided by the spectrometer frequency. However, since we are dividing Hz by MHz, the resulting number would be too small, and thus it is multiplied by a million. This operation therefore gives a locator number called the "chemical shift" with units of parts per million. The interpretation of chemical shifts (and J's, see below) in terms of specific nuclei is called assigning the spectrum. For diamagnetic organic compounds, assignments of 1H and 13C NMR spectra are extremely sophisticated because of the large databases and easy computational tools. In general, chemical shifts for protons are highly predictable, since the shifts are primarily determined by shielding effects (electron density).
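The ppm arithmetic described above amounts to a single division; the sketch below, with illustrative numbers only, shows why the same resonance reports the same shift on spectrometers of different field strengths.

```python
def chemical_shift_ppm(f_sample_hz, f_reference_hz, spectrometer_mhz):
    """Chemical shift in ppm: frequency offset from the reference (Hz)
    divided by the spectrometer frequency (MHz); Hz/MHz gives ppm directly."""
    return (f_sample_hz - f_reference_hz) / spectrometer_mhz

# A proton resonating 2170 Hz above TMS on a 500 MHz instrument sits at
# about 4.34 ppm; on a 900 MHz instrument the same 4.34 ppm shift
# corresponds to a larger offset in Hz, which is why ppm is preferred.
print(chemical_shift_ppm(2170.0, 0.0, 500.0))  # ~4.34 ppm
print(chemical_shift_ppm(3906.0, 0.0, 900.0))  # ~4.34 ppm
```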
The chemical shifts for many heavier nuclei are more strongly influenced by other factors, including excited states (the "paramagnetic" contribution to the shielding tensor). This paramagnetic contribution (which is unrelated to paramagnetism) not only disrupts trends in chemical shifts, which complicates assignments, but also gives rise to very large chemical shift ranges. For example, 1H NMR signals for most organic compounds are within 15 ppm. For 31P NMR, the range is hundreds of ppm. In paramagnetic NMR spectroscopy, the samples are paramagnetic, i.e. they contain unpaired electrons. The paramagnetism gives rise to very diverse chemical shifts. In 1H NMR spectroscopy, the chemical shift range can span up to thousands of ppm. J-coupling Some of the most useful information for structure determination in a one-dimensional NMR spectrum comes from J-coupling, or scalar coupling (a special case of spin–spin coupling), between NMR-active nuclei. This coupling arises from the interaction of different spin states through the chemical bonds of a molecule and results in the splitting of NMR signals. For a proton, the local magnetic field is slightly different depending on whether an adjacent nucleus points towards or against the spectrometer magnetic field, which gives rise to two signals per proton instead of one. These splitting patterns can be complex or simple and, likewise, can be straightforwardly interpretable or deceptive. This coupling provides detailed insight into the connectivity of atoms in a molecule. The multiplicity of the splitting is an effect of the spins of the nuclei that are coupled and the number of such nuclei involved in the coupling. Coupling to n equivalent spin-1/2 nuclei splits the signal into an n + 1 multiplet with intensity ratios following Pascal's triangle. Coupling to additional spins leads to further splittings of each component of the multiplet, e.g. coupling to two different spin-1/2 nuclei with significantly different coupling constants leads to a doublet of doublets (abbreviation: dd). Note that coupling between nuclei that are chemically equivalent (that is, have the same chemical shift) has no effect on the NMR spectra, and couplings between nuclei that are distant (usually more than 3 bonds apart for protons in flexible molecules) are usually too small to cause observable splittings. Long-range couplings over more than three bonds can often be observed in cyclic and aromatic compounds, leading to more complex splitting patterns. For example, in the proton spectrum for ethanol, the CH3 group is split into a triplet with an intensity ratio of 1:2:1 by the two neighboring CH2 protons. Similarly, the CH2 is split into a quartet with an intensity ratio of 1:3:3:1 by the three neighboring CH3 protons. In principle, the two CH2 protons would also be split again into a doublet to form a doublet of quartets by the hydroxyl proton, but intermolecular exchange of the acidic hydroxyl proton often results in a loss of coupling information. Coupling to any spin-1/2 nuclei such as phosphorus-31 or fluorine-19 works in this fashion (although the magnitudes of the coupling constants may be very different). But the splitting patterns differ from those described above for nuclei with spin greater than 1/2, because the spin quantum number has more than two possible values. For instance, coupling to deuterium (a spin-1 nucleus) splits the signal into a 1:1:1 triplet because the spin 1 has three spin states; the short sketch below generates these multiplet patterns.
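As a check on the n + 1 rule and its generalization to higher spins, the following sketch builds multiplet intensity patterns by repeated convolution. Each equivalent spin-I neighbor contributes 2I + 1 equally probable orientations, so n such neighbors give 2nI + 1 lines; the function name and examples are our own, not standard library calls.

```python
def multiplet(n_neighbors, spin=0.5):
    """Intensity pattern from coupling to n equivalent spin-I nuclei.
    Each neighbor contributes 2I + 1 equally likely states, so the
    pattern is the n-fold convolution of a flat [1]*(2I+1) kernel."""
    kernel = [1] * int(2 * spin + 1)
    pattern = [1]
    for _ in range(n_neighbors):
        out = [0] * (len(pattern) + len(kernel) - 1)
        for i, p in enumerate(pattern):
            for j, k in enumerate(kernel):
                out[i + j] += p * k
        pattern = out
    return pattern

print(multiplet(2))            # [1, 2, 1]    CH3 of ethanol: triplet
print(multiplet(3))            # [1, 3, 3, 1] CH2 of ethanol: quartet
print(multiplet(1, spin=1.0))  # [1, 1, 1]    coupling to one deuterium
```

For spin-1/2 neighbors the convolution reproduces Pascal's triangle row by row, which is exactly the n + 1 rule described above.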
Similarly, a spin-3/2 nucleus such as 35Cl splits a signal into a 1:1:1:1 quartet, and so on. Coupling combined with the chemical shift (and the integration for protons) tells us not only about the chemical environment of the nuclei, but also the number of neighboring NMR-active nuclei within the molecule. In more complex spectra with multiple peaks at similar chemical shifts, or in spectra of nuclei other than hydrogen, coupling is often the only way to distinguish different nuclei. The magnitude of the coupling (the coupling constant J) reflects how strongly the nuclei are coupled to each other. For simple cases, it depends on the bonding distance between the nuclei, the magnetic moments of the nuclei, and the dihedral angle between them. Second-order (or strong) coupling The above description assumes that the coupling constant is small in comparison with the difference in NMR frequencies between the inequivalent spins. If the shift separation decreases (or the coupling strength increases), the multiplet intensity patterns are first distorted, and then become more complex and less easily analyzed (especially if more than two spins are involved). Intensification of some peaks in a multiplet is achieved at the expense of the remainder, which sometimes almost disappear in the background noise, although the integrated area under the peaks remains constant. In most high-field NMR, however, the distortions are usually modest, and the characteristic distortions ("roofing") can in fact help to identify related peaks. Some of these patterns can be analyzed with the method published by John Pople, though it has limited scope. Second-order effects decrease as the frequency difference between multiplets increases, so that high-field (i.e. high-frequency) NMR spectra display less distortion than lower-frequency spectra. Early spectra at 60 MHz were more prone to distortion than spectra from later machines typically operating at frequencies of 200 MHz or above. Furthermore, J-coupling can be used to identify ortho, meta, and para substitution of a ring: ortho coupling is the strongest, at about 15 Hz; meta coupling follows, with an average of 2 Hz; and para coupling is usually insignificant. Magnetic inequivalence More subtle effects can occur if chemically equivalent spins (i.e., nuclei related by symmetry and so having the same NMR frequency) have different coupling relationships to external spins. Spins that are chemically equivalent but are not indistinguishable (based on their coupling relationships) are termed magnetically inequivalent. For example, the 4 H sites of 1,2-dichlorobenzene divide into two chemically equivalent pairs by symmetry, but an individual member of one of the pairs has different couplings to the spins making up the other pair. Magnetic inequivalence can lead to highly complex spectra, which can only be analyzed by computational modeling. Such effects are more common in NMR spectra of aromatic and other non-flexible systems, while conformational averaging about C−C bonds in flexible molecules tends to equalize the couplings between protons on adjacent carbons, reducing problems with magnetic inequivalence. Correlation spectroscopy Correlation spectroscopy is one of several types of two-dimensional nuclear magnetic resonance (NMR) spectroscopy, or 2D-NMR. This type of NMR experiment is best known by its acronym, COSY.
Other types of two-dimensional NMR include J-spectroscopy, exchange spectroscopy (EXSY), nuclear Overhauser effect spectroscopy (NOESY), total correlation spectroscopy (TOCSY), and heteronuclear correlation experiments, such as HSQC, HMQC, and HMBC. In correlation spectroscopy, emission is centered on the peak of an individual nucleus; if its magnetic field is correlated with another nucleus by through-bond (COSY, HSQC, etc.) or through-space (NOE) coupling, a response can also be detected on the frequency of the correlated nucleus. Two-dimensional NMR spectra provide more information about a molecule than one-dimensional NMR spectra and are especially useful in determining the structure of a molecule, particularly for molecules that are too complicated to work with using one-dimensional NMR. The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976. Solid-state nuclear magnetic resonance A variety of physical circumstances do not allow molecules to be studied in solution, nor to be studied by other spectroscopic techniques at an atomic level. In solid-phase media, such as crystals, microcrystalline powders, gels, anisotropic solutions, etc., it is in particular the dipolar coupling and chemical shift anisotropy that dominate the behaviour of the nuclear spin systems. In conventional solution-state NMR spectroscopy, these additional interactions would lead to a significant broadening of spectral lines. A variety of techniques allows high-resolution conditions to be established that can, at least for 13C spectra, be comparable to solution-state NMR spectra. Two important concepts for high-resolution solid-state NMR spectroscopy are the limitation of possible molecular orientation by sample orientation, and the reduction of anisotropic nuclear magnetic interactions by sample spinning. Of the latter approach, fast spinning around the magic angle is a very prominent method when the system comprises spin-1/2 nuclei. Spinning rates of about 20 kHz are used, which demands special equipment. A number of intermediate techniques, with samples of partial alignment or reduced mobility, are currently being used in NMR spectroscopy. Applications in which solid-state NMR effects occur are often related to structure investigations on membrane proteins, protein fibrils or all kinds of polymers, and chemical analysis in inorganic chemistry, but also include "exotic" applications like plant leaves and fuel cells. For example, Rahmani et al. studied the effect of pressure and temperature on the bicellar structures' self-assembly using deuterium NMR spectroscopy. Solid-state NMR is also useful for understanding metal structure in X-ray amorphous metal samples (such as nano-sized refractory metal 99Tc). Biomolecular NMR spectroscopy Proteins Much of the innovation within NMR spectroscopy has been within the field of protein NMR spectroscopy, an important technique in structural biology. A common goal of these investigations is to obtain high-resolution 3-dimensional structures of the protein, similar to what can be achieved by X-ray crystallography. In contrast to X-ray crystallography, NMR spectroscopy is usually limited to proteins smaller than 35 kDa, although larger structures have been solved.
NMR spectroscopy is often the only way to obtain high-resolution information on partially or wholly intrinsically unstructured proteins. It is now a common tool for the determination of conformation–activity relationships, where the structure before and after interaction with, for example, a drug candidate is compared to its known biochemical activity. Proteins are orders of magnitude larger than the small organic molecules discussed earlier in this article, but the basic NMR techniques and some NMR theory also apply. Because of the much higher number of atoms present in a protein molecule in comparison with a small organic compound, the basic 1D spectra become crowded with overlapping signals to an extent where direct spectral analysis becomes untenable. Therefore, multidimensional (2, 3 or 4D) experiments have been devised to deal with this problem. To facilitate these experiments, it is desirable to isotopically label the protein with 13C and 15N, because the predominant naturally occurring isotope 12C is not NMR-active and the nuclear quadrupole moment of the predominant naturally occurring 14N isotope prevents high-resolution information from being obtained from this nitrogen isotope. The most important method used for structure determination of proteins utilizes NOE experiments to measure distances between atoms within the molecule. Subsequently, the distances obtained are used to generate a 3D structure of the molecule by solving a distance geometry problem. NMR can also be used to obtain information on the dynamics and conformational flexibility of different regions of a protein. Nucleic acids Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of polynucleic acids, such as DNA or RNA. Nearly half of all known RNA structures have been determined by NMR spectroscopy. Nucleic acid and protein NMR spectroscopy are similar, but differences exist. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR spectroscopy, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY) and total coherence transfer spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants, can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation), and sugar pucker conformations. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. NMR is also useful for investigating nonstandard geometries such as bent helices, non-Watson–Crick base pairing, and coaxial stacking. It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots.
NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs, by seeing which resonances are shifted upon binding of the other molecule. Carbohydrates Carbohydrate NMR spectroscopy addresses questions on the structure and conformation of carbohydrates. The analysis of carbohydrates by 1H NMR is challenging due to the limited variation in functional groups, which leads to 1H resonances concentrated in narrow bands of the NMR spectrum. In other words, there is poor spectral dispersion. The anomeric proton resonances are segregated from the others due to the fact that the anomeric carbons bear two oxygen atoms. For smaller carbohydrates, the dispersion of the anomeric proton resonances facilitates the use of 1D TOCSY experiments to investigate the entire spin systems of individual carbohydrate residues. Drug discovery Knowledge of energy minima and rotational energy barriers of small molecules in solution can be found using NMR, e.g. by looking at free ligand conformational preferences and conformational dynamics, respectively. This can be used to guide drug design hypotheses, since experimental and calculated values are comparable. For example, AstraZeneca uses NMR for its oncology research and development. High-pressure NMR spectroscopy One of the first scientific works devoted to the use of pressure as a variable parameter in NMR experiments was the work of J. Jonas, published in the journal Annual Review of Biophysics in 1994. The use of high pressures in NMR spectroscopy was primarily driven by the desire to study biochemical systems, where the use of high pressure allows controlled changes in intermolecular interactions without significant perturbations. Attempts have, of course, been made to solve scientific problems with high-pressure NMR spectroscopy, but most were difficult to reproduce because of the difficulty of building equipment to create and maintain high pressure. Several common types of NMR cells for carrying out high-pressure NMR experiments have been described. High-pressure NMR spectroscopy has been widely used for a variety of applications, mainly related to the characterization of the structure of protein molecules. However, in recent years, software and design solutions have been proposed to characterize the chemical and spatial structures of small molecules in a supercritical fluid environment, using state parameters as a driving force for such changes.
Physical sciences
Nuclear physics
Physics
https://en.wikipedia.org/wiki/Lepospondyli
Lepospondyli
Lepospondyli is a diverse taxon of early tetrapods. With the exception of one late-surviving lepospondyl from the Late Permian of Morocco (Diplocaulus minimus), lepospondyls lived from the Visean stage of the Early Carboniferous to the Early Permian and were geographically restricted to what is now Europe and North America. Five major groups of lepospondyls are known: Adelospondyli; Aïstopoda; Lysorophia; Microsauria; and Nectridea. Lepospondyls have a diverse range of body forms and include species with newt-like, eel- or snake-like, and lizard-like forms. Various species were aquatic, semiaquatic, or terrestrial. None were large (the biggest genus, the diplocaulid Diplocaulus, reached a meter in length, but most were much smaller), and they are assumed to have lived in specialized ecological niches not taken by the more numerous temnospondyl amphibians that coexisted with them in the Paleozoic. Lepospondyli was named in 1888 by Karl Alfred von Zittel, who coined the name to include some tetrapods from the Paleozoic that shared some specific characteristics in the notochord and teeth. Lepospondyls have sometimes been considered to be either related or ancestral to modern amphibians or to Amniota (the clade containing reptiles and mammals). It has been suggested that the grouping is polyphyletic, with aïstopods being primitive stem-tetrapods, while recumbirostran microsaurs are primitive reptiles. Description All lepospondyls are characterised by having simple, spool-shaped vertebrae that did not ossify from cartilage, but rather grew as bony cylinders around the notochord. In addition, the upper portion of the vertebra, the neural arch, is usually fused to the centrum (the main body of the vertebra). Classification The position of the Lepospondyli within the Tetrapoda is uncertain because the earliest lepospondyls were already highly specialized when they first appeared in the fossil record. Some lepospondyls were once thought to be related or perhaps ancestral to modern salamanders (Urodela), but not the other modern amphibians. This view is no longer held and all modern amphibians (frogs, salamanders, and caecilians) are now grouped within the clade Lissamphibia. For a long time, the Lepospondyli were considered one of the three subclasses of Amphibia, along with the Lissamphibia and the Labyrinthodontia. However, the dissolution of "labyrinthodonts" into separate groups such as temnospondyls and anthracosaurs has cast doubt on these traditional amphibian subclasses. Much like "Labyrinthodontia", some studies proposed that Lepospondyli is an artificial (polyphyletic) grouping with some members closely related to extinct stem tetrapod groups and others more closely related to modern amphibians or reptiles. Early phylogenetic analyses conducted in the 1980s and 1990s often maintained the idea that lepospondyls were paraphyletic, with nectrideans close to colosteids and microsaurs close to temnospondyls, which were considered to be ancestral to modern amphibians. However, a 1995 paper by Robert Carroll argued that lepospondyls were actually a monophyletic group closer to reptiles. Carroll considered them closer to reptiles than the seymouriamorphs, but not as close as the diadectomorphs. Many phylogenetic analyses since Carroll (1995) agreed with his interpretation, including Laurin & Reisz (1997), Anderson (2001), and Ruta et al. (2003). 
A few have still considered lepospondyls ancestral to amphibians, but came to this conclusion without changing the position of lepospondyls compared to seymouriamorphs and diadectomorphs. Lepospondyl and tetrapod classification is still controversial, and even recent studies have had doubts about lepospondyl monophyly. For example, a 2007 paper suggested that adelospondyls are stem-tetrapods close to colosteids, and a 2017 paper on Lethiscus placed Aïstopoda in the tetrapod stem based on their primitive braincase. These studies differ in the internal and external relationships of the remaining lepospondyl taxa. The former places the remaining lepospondyls into a single clade along the amniote stem. The latter does not treat the relationships of nectrideans or adelospondyls, but finds microsaurs to be early amniotes, and places lysorophians within microsaurs. Interrelationships Five main groups of lepospondyls are often recognized: Microsauria, a superficially lizard- or salamander-like and species-rich group; Lysorophia, a group with elongated bodies and very small limbs; Aïstopoda, a group of limbless, extremely elongated snake-like forms; Adelospondyli, a group of presumably aquatic forms that resemble aïstopods, but have more solidly built skulls; and Nectridea, another diverse group that includes terrestrial and aquatic newt-like forms. Microsauria is generally considered paraphyletic; rather than being a monophyletic group, it has been considered an evolutionary grade of basal ("primitive") lepospondyls, although there is growing consensus that a large subset of fossorially adapted microsaurs, the Recumbirostra, is monophyletic. Lysorophia may belong within the recumbirostran clade, distinct from other derived lepospondyls. Nectridea may also be paraphyletic, consisting of a range of more anatomically specialized lepospondyls. The name Holospondyli has been proposed for a clade including aïstopods and nectrideans, and possibly adelospondyls, although not all recent phylogenetic analyses support the grouping. The following cladogram, simplified, is after an analysis of tetrapods and stem-tetrapods presented by Ruta et al. in 2003: Position within Tetrapoda The "lepospondyl hypothesis" of modern amphibian origins proposes that lissamphibians are monophyletic (that is, they form their own clade) and that they evolved from lepospondyl ancestors. Two alternatives are the "temnospondyl hypothesis", in which lissamphibians originated within Temnospondyli, and the "polyphyly hypothesis", in which caecilians originated from lepospondyls while frogs and salamanders (collectively grouped within Batrachia) evolved from temnospondyls. Of the three hypotheses, the temnospondyl hypothesis is currently the most widely accepted among researchers. Strong support for this relationship comes from a suite of anatomical features shared between lissamphibians and a group of Paleozoic temnospondyls called dissorophoids. Under this hypothesis, Lepospondyli either falls outside crown group Tetrapoda (the smallest clade containing all living tetrapods, i.e. the smallest clade containing Lissamphibia and Amniota), or is closer to amniotes and therefore part of Reptiliomorpha. However, some phylogenetic analyses continue to find support for the lepospondyl hypothesis. The analysis by Vallin and Laurin (2004) found lissamphibians to be most closely related to lysorophians, followed by microsaurs.
Pawley (2006) also found lysorophians to be the closest relatives of lissamphibians, but found aïstopods and adelogyrinids rather than microsaurs to be the second most closely related groups. Marjanović (2010) found holospondyls to be the most closely related group to lissamphibians, followed by lysorophians. Under this hypothesis, lepospondyls would be crown tetrapods and temnospondyls would be stem tetrapods. Below is a cladogram from Ruta et al. (2003) that supports the "temnospondyl hypothesis", showing the position of Lepospondyli within crown group Tetrapoda:
Biology and health sciences
Prehistoric amphibians
Animals
https://en.wikipedia.org/wiki/Cultivator
Cultivator
A cultivator (also known as a rotavator) is a piece of agricultural equipment used for secondary tillage. One sense of the name refers to frames with teeth (also called shanks) that pierce the soil as they are dragged through it linearly. Another sense of the name also refers to machines that use the rotary motion of disks or teeth to accomplish a similar result, such as a rotary tiller. Cultivators stir and pulverize the soil, either before planting (to aerate the soil and prepare a smooth, loose seedbed) or after the crop has begun growing (to kill weeds—controlled disturbance of the topsoil close to the crop plants kills the surrounding weeds by uprooting them, burying their leaves to disrupt their photosynthesis, or a combination of both). Unlike a harrow, which disturbs the entire surface of the soil, cultivators are designed to disturb the soil in careful patterns, sparing the crop plants but disrupting the weeds. Cultivators of the toothed type are often similar in form to chisel plows, but their goals are different. Cultivators' teeth work near the surface, usually for weed control, whereas chisel plow shanks work deep beneath the surface, breaking up hardpan. Small toothed cultivators pushed or pulled by a single person are used as garden tools for small-scale gardening, such as for the household's own use or for small market gardens. Similarly sized rotary tillers combine the functions of a harrow and cultivator into one multipurpose machine. Cultivators are usually either self-propelled or drawn as an attachment behind either a two-wheel tractor or four-wheel tractor. For two-wheel tractors, they are usually rigidly fixed and powered via couplings to the tractors' transmission. For four-wheel tractors they are usually attached by means of a three-point hitch and driven by a power take-off. Drawbar hookup is also still commonly used worldwide. Draft-animal power is sometimes still used today, being somewhat common in developing nations although rare in more industrialized economies. History The basic idea of soil scratching for weed control is ancient and was done with hoes or ploughs for millennia before any larger or more complex equipment was developed to reduce the manual labor and to speed the work. The notion of ganging several hoes together and applying draft animal power to drag them led to harrows, which while newer than the hoe are still quite ancient. In the eighteenth and nineteenth centuries, as the Industrial Revolution developed, cultivator designs proliferated. These new cultivators were drawn by draft animals (such as horses, mules, or oxen) or were pushed or drawn by people, depending on the need and expense. The powered rotary hoe was invented by Arthur Clifford Howard who, in 1912, began experimenting with rotary tillage on his father's farm at Gilgandra, New South Wales, Australia. Initially using his father's steam tractor engine as a power source, he found that ground could be mechanically tilled without soil-packing occurring, as was the case with normal ploughing. His earliest designs threw the tilled soil sideways, until he improved his invention by designing an L-shaped blade mounted on widely spaced flanges fixed to a small-diameter rotor. With fellow apprentice Everard McCleary, he established a company to make his machine, but plans were interrupted by World War I.
In 1919 Howard returned to Australia and resumed his design work, patenting a design with 5 rotary hoe cultivator blades and an internal combustion engine in 1920. In March 1922, Howard formed the company Austral Auto Cultivators Pty Ltd, which later became known as Howard Auto Cultivators. It was based in Northmead, a suburb of Sydney, from 1927. Meanwhile, in North America during the 1910s, tractors were evolving away from traction engine–sized monsters toward smaller, lighter, more affordable machines. The Fordson tractor especially had made tractors affordable and practical for small and medium family farms for the first time in history. Cultivating was somewhat of an afterthought in the Fordson's design, which reflected the fact that even just bringing practical motorized tractive power alone to this market segment was in itself a milestone. This left an opportunity for others to pursue better motorized cultivating. Between 1915 and 1920, various inventors and farm implement companies experimented with a class of machines referred to as motor cultivators, which were simply modified horse-drawn shank-type cultivators with motors added for self-propulsion. This class of machines found limited market success. But by 1921 International Harvester had combined motorized cultivating with the other tasks of tractors (tractive power and belt work) to create the Farmall, the general-purpose tractor tailored to cultivating that basically invented the category of row-crop tractors. In Australia, by the 1930s, Howard was finding it increasingly difficult to meet a growing worldwide demand for exports of his machines. He travelled to the United Kingdom, founding the company Rotary Hoes Ltd in East Horndon, Essex, in July 1938. Branches of this new company subsequently opened in the United States of America, South Africa, Germany, France, Italy, Spain, Brazil, Malaysia, Australia and New Zealand. It later became the holding company for Howard Rotavator Co. Ltd. The Howard Group of companies was acquired by the Danish Thrige Agro Group in 1985, and in December 2000 the Howard Group became a member of Kongskilde Industries of Soroe, Denmark. In modern commercial agriculture, the amount of cultivating done for weed control has been greatly reduced via use of herbicides instead. However, herbicides are not always desirable—for example, in organic farming. When herbicidal weed control was first widely commercialized in the 1950s and 1960s, it played into that era's optimistic worldview in which sciences such as chemistry would usher in a new age of modernity that would leave old-fashioned practices (such as weed control via cultivators) in the dustbin of history. Thus, herbicidal weed control was adopted very widely, and in some cases too heavily and hastily. In subsequent decades, people overcame this initial imbalance and came to realize that herbicidal weed control has limitations and externalities, and it must be managed intelligently. It is still widely used, and probably will continue to be indispensable to affordable food production worldwide for the foreseeable future; but its wise management includes seeking alternate methods, such as the traditional standby of mechanical cultivation, where practical. Industrial use To the extent that cultivating is done commercially today (such as in truck farming), it is usually powered by tractors, especially row-crop tractors. Industrial cultivators can vary greatly in size and shape, from to wide. 
Many are equipped with hydraulic wings that fold up to make road travel easier and safer. Different types are used for preparation of fields before planting, and for the control of weeds between row crops. The cultivator may be an implement trailed after the tractor via a drawbar; mounted on the three-point hitch; or mounted on a frame beneath the tractor. Active cultivator implements are driven by a power take-off shaft. While most cultivators are considered secondary tillage implements, active cultivators are commonly used for primary tillage in lighter soils instead of plowing. The largest versions available are about wide, and require a tractor with an excess of (PTO) to drive them. Field cultivators are used to complete tillage operations in many types of arable crop fields. The main function of the field cultivator is to prepare a proper seedbed for the crop to be planted into, to bury crop residue in the soil (helping to warm the soil before planting), to control weeds, and to mix and incorporate the soil to ensure the growing crop has enough water and nutrients to grow well during the growing season. The implement has many shanks mounted on the underside of a metal frame, and small narrow rods at the rear of the machine that smooth out the soil surface for easier travel later when planting. In most field cultivators, one or more hydraulic cylinders raise and lower the implement and control its depth. Row crop cultivators The main function of the row crop cultivator is weed control between the rows of an established crop. Row crop cultivators are usually raised and lowered by a three-point hitch and the depth is controlled by gauge wheels. Sometimes referred to as sweep cultivators, these commonly have two center blades that cut weeds from the roots near the base of the crop and turn over soil, while two rear sweeps further outward than the center blades deal with the center of the row, and can be anywhere from 1 to 36 rows wide. Garden cultivators Small tilling equipment, used in small gardens such as household gardens and small commercial gardens, can provide both primary and secondary tillage. For example, a rotary tiller does both the "plowing" and the "harrowing", preparing a smooth, loose seedbed. It does not provide the row-wise weed control that cultivator teeth would. For that task, there are single-person-pushable toothed cultivators. Variants and trademarks Rotary tillers are a type of cultivator. They are popular with home gardeners who want large vegetable gardens. The garden may be tilled a few times before planting each crop. Rotary tillers may be rented from tool rental centers for single-use applications, such as when planting grass. A small rotary hoe for domestic gardens was known by the trademark Rototiller and another, made by the Howard Group, who produced a range of rotary tillers, was known as the Rotavator. Rototiller The small rototiller is typically propelled by a petrol engine rotating the tines; some have powered wheels, though they may have small transport/level control wheel(s). To keep the machine from moving forward too fast, an adjustable tine is usually fixed just behind the blades so that through friction with deeper un-tilled soil, it acts as a brake, slowing the machine and allowing it to pulverize the soil. The slower a rototiller moves forward, the more soil tilth can be obtained. The operator can control the amount of friction/braking action by raising and lowering the handlebars of the tiller.
Rototillers often do not have a reverse gear, as backward movement toward the operator could cause serious injury. While operating, the rototiller can be pulled backwards to go over areas that were not pulverized enough, but care must be taken to ensure that the operator does not stumble and pull the rototiller on top of themselves. Rototilling is much faster than manual tilling, but is notoriously difficult to handle and exhausting work, especially with the heavier and higher-powered models. If the rototiller's blades catch on unseen subsurface objects, such as tree roots and buried garbage, it can cause the rototiller to abruptly and violently move in an unexpected direction. Rotavator Unlike the Rototiller, the self-propelled Howard Rotavator is equipped with a gearbox and driven forward, or held back, by its wheels. The gearbox enables the forward speed to be adjusted while the rotational speed of the tines remains constant, which enables the operator to easily regulate the extent to which soil is engaged. For a two-wheel tractor rotavator this greatly reduces the workload of the operator as compared to a rototiller. These rotavators are generally more heavy duty, come in versions with either a petrol or diesel engine, and can cover larger areas. The trademarked word "Rotavator" is one of the longest single-word palindromes in the English language. Mini tiller Mini tillers are a newer type of small agricultural tiller or cultivator, used by farmers or homeowners. These are also known as power tillers or garden tillers. Compact, powerful and, most importantly, inexpensive, these agricultural rotary tillers provide an alternative to four-wheel tractors and, in small farmers' fields in developing countries, are more economical than four-wheel tractors. Two-wheel tractor The higher power "riding" rotavators cross out of the home garden category into the farming category, especially in Europe, being capable of preparing 1 hectare of land in 8–10 hours. These are also known as walk-behind tractors or walking tractors. Years ago they were considered only useful for rice growing areas, where they were fitted with steel cage-wheels for traction, but now the same machines are used in both wetland and dryland farming all over the world. They have multiple functions, with related tools for dryland or paddy cultivation, pumping, transportation, threshing, ditching, and pesticide spraying. They can be used on hills, mountains, in greenhouses and orchards.
Technology
Farm and garden machinery
https://en.wikipedia.org/wiki/Madagascar%20hissing%20cockroach
Madagascar hissing cockroach
The Madagascar hissing cockroach (Gromphadorhina portentosa), also known as the hissing cockroach or simply hisser, is one of the largest species of cockroach, reaching at maturity. They are native to the island of Madagascar, which is off the African mainland, where they are commonly found in rotting logs. It is one of some 20 known species of large hissing roaches from Madagascar, many of which are kept as pets, and often confused with one another by pet dealers; in particular, G. portentosa is commonly confused with G. oblongonota and G. picea. Unlike most cockroaches, they are wingless. The "hissing" sound, made by expelling air through the body, is their primary defense for frightening potential predators, as they cannot fly and are easily captured. They are excellent climbers and can scale smooth glass. Males can be distinguished from females by their thicker, hairier antennae and the very pronounced bumps on the pronotum. The female carries the ootheca internally and releases the young nymphs only after the eggs have hatched within her (this is known as ovoviviparity). As in some other wood-inhabiting roaches, the parents and offspring will commonly remain in close physical contact for extended periods of time. In captivity, they have been known to live up to 5 years. They feed primarily on vegetable material. Hissing As the common name suggests, the Madagascar hissing cockroach is characterized by its "hissing" sound, which some people claim sounds more like a rattlesnake's tail or a rainstick. This is their primary method of warding off potential predators. The sound is produced as the insect forcefully expels air out of its specialized respiratory spiracles (orifices), mainly those located on the fourth segment of the abdomen, although spiracles are found on more or less all segments of the abdomen. The Madagascar hissing cockroach is the only member of its group of cockroaches that can make audible sounds. This exact mode of sound production is atypical: most insects that make noise, such as crickets, do so by rubbing together various body parts ("stridulation"), for example the hind legs. Some long-horned beetles, e.g., the giant Fijian long-horned beetle, hiss by squeezing air out from under their elytra, but this does not involve the spiracles. In hissing cockroaches, the sound takes three forms: the disturbance hiss, the female-attracting hiss, and the aggressive or fighting hiss. All cockroaches from the fourth instar (fourth molting cycle) and older are capable of the disturbance hiss. Only males use the female-attracting hiss and fighting hiss; the latter is used between males to settle territory disputes over breeding rights. The hissing makes them a popular pet; initially, they will make the noises when picked up, though they quickly calm down and adjust to being handled and observed up close. Associations with other animals The mite species Gromphadorholaelaps schaeferi lives on this species of cockroach along the undersides and bases of the legs and takes some of its host's food as well as consuming particulates along the host's body. As these mites do not harm the cockroaches they live upon, they are commensals, not parasites, unless they build up to abnormal levels and start starving their host.
Recent studies have shown that these mites also may have beneficial qualities for the cockroaches, in that they clean the surfaces of the cockroaches of pathogenic mold spores, which in turn increases the life expectancy of the cockroaches. Popular culture The Madagascar hissing cockroach has been featured in Hollywood movies, prominently in Bug (1975), as cockroaches who could set fires by rubbing their legs together, and in Damnation Alley (1977) as post-nuclear-war mutant armor-plated "killer" cockroaches. In Starship Troopers, a sci-fi satire film about future humans' war against an alien species called "The Bugs", a teacher is shown encouraging her students to step on this species as part of a TV propaganda broadcast. In 1984, a guest named Adam Zweig appeared on Late Night with David Letterman, demonstrating his pet Madagascar cockroach "climbing the tightrope over the fires of hell and the pit of doom". A Madagascar hissing cockroach was used by artist Garnet Hertz as the driver of a mobile robotic artwork. They were frequently used in the reality television series Fear Factor, where in one episode of the 2002 series, featuring celebrities competing for charity, the host, Joe Rogan, ate one as part of a wager with contestant Alison Sweeney of Days of Our Lives after she had what Rogan has since described as "the greatest freak-out in Fear Factor history" after panicking during a stunt. While normally she would have been eliminated, as the show was for charity, it was decided that if she ate 3 worms she would advance to the final stunt regardless. In addition, Rogan would eat a cockroach as part of the bet. The species also made an appearance in the movie Men in Black in 1997. This was later parodied in the comedy Team America: World Police (2004), where a cockroach emerges from a Kim Jong-il puppet's body after his death, enters a tiny spaceship, and flies away. In September 2006, amusement park Six Flags Great America announced that it would be granting unlimited line-jumping privileges (for all rides) to anyone who could eat a live Madagascar hissing cockroach, as part of a Halloween-themed promotion for their annual FrightFest. Furthermore, if a contestant managed to beat the previous world record (eating 36 cockroaches in 1 minute), they would receive season passes, for four people, for the 2007 season. Whatever protein or other nutrients they may provide, the cockroaches contain a mild neurotoxin that numbs the mouth and makes it difficult to swallow. The promotion ended on October 29, 2006. Since 2011, the Bronx Zoo has held a Valentine's Day-themed roach-naming and gifting program that allows benefactors to name its Madagascar hissing cockroaches. Funds raised are donated to the Wildlife Conservation Society, the parent nonprofit organization of the zoo. As pets Madagascar cockroaches can be kept as exotic pets. They require a small living area with a place for them to hide, because they dislike light sources. The cockroaches prefer warmth and cannot function in cold weather. Due to their propensity to climb, the living area must be tested to see if they can climb it as they do in their natural environment. Fish tanks with screens work best, but it is also wise to coat the top few inches with petroleum jelly to keep them from getting out of the habitat in which they are kept. They can live on fresh vegetables along with any kind of pellet food that is high in protein, such as dry dog food.
In the US, some states require permits before this species can be kept as a pet or in breeding colonies. The state of Florida requires such a permit. This is because of the similarity between Madagascar and Florida in climate, which makes them potentially invasive. In fact, during outreach programs, the University of Florida's Department of Entomology and Nematology, which has such a permit, allows only males to be taken out of the laboratory. This is to prevent the possible introduction of a pregnant female into the environment. It is also possible to raise them to feed other pets, as they are reasonably high in protein. Reptiles are often given roaches as food.
Biology and health sciences
Cockroaches &amp; Termites (Blattodea)
Animals
https://en.wikipedia.org/wiki/Brachypelma%20hamorii
Brachypelma hamorii
Brachypelma hamorii is a vulnerable species of tarantula found in Mexico. It has been confused with B. smithi; both have been called Mexican redknee tarantulas. Many earlier sources referring to B. smithi either do not distinguish between the two species or relate to B. hamorii. B. hamorii is a terrestrial tarantula native to the western faces of the Sierra Madre Occidental and Sierra Madre del Sur mountain ranges in the Mexican states of Colima, Jalisco, and Michoacán. The species is a large spider, adult females having a total body length over and males having legs up to long. Mexican redknee tarantulas are a popular choice for enthusiasts. Like most tarantulas, it has a long lifespan. Description Brachypelma hamorii is a large spider. A sample of seven females had a total body length (excluding chelicerae and spinnerets) in the range . A sample of 11 males was slightly smaller, with a total body length in the range . Although males have slightly shorter bodies, they have longer legs. The fourth leg is the longest, measuring in the type male and in a female. The legs and palps are black to reddish black with three distinctly colored rings, deep orange on the part of the patellae closest to the body with pale orange–yellow further away, pale orange–yellow on the lower part of the tibiae, and yellowish-white at the end of the metatarsi. Adult males have light greyish-red around the border of the carapace with a darker reddish-black marking from the middle of the carapace to the front of the head; the upper surface of the abdomen is black. Adult females vary more in carapace color and pattern. The carapace may be mainly black with a brownish-pink border, or the dark area may be broken up into a "starburst" pattern with pale orange–yellow elsewhere. Taxonomy Brachypelma hamorii was initially misidentified as the very similar B. smithi, a species originally described in 1897. In 1968, the holotype of B. smithi was found to be an immature male, and in 1994, A. M. Smith redescribed B. smithi using two adult specimens. The specimens cannot now be found, but his description makes it clear that they actually belonged to what is now B. hamorii, not B. smithi. B. hamorii was first described by Marc Tesmoingt, Frédéric Cleton and Jean Verdez in 1997. They stated that it was close to B. smithi, but could be distinguished by a number of characteristics, including the spermathecae of the females. However, following Smith's description, B. hamorii continued to be misidentified as B. smithi until the situation was clarified by J. Mendoza and O. Francke in 2017. The two species have very similar color patterns. When viewed from above, the chelicerae of B. hamorii have two brownish-pink bands on a greyish background, not visible on all individuals. B. smithi lacks these bands. Mature males of the two species can be distinguished by the shape of the palpal bulb. When viewed retrolaterally, the palpal bulb of B. hamorii is narrower and less straight than the broad, spoon-shaped one of B. smithi. It also has a narrower keel at the apex. In mature females of B. hamorii, the baseplate of the spermatheca is elliptical, rather than divided and subtriangular as in B. smithi; also, the ventral face of the spermatheca is smooth rather than striated. DNA barcoding DNA barcoding has been applied to Mexican species of Brachypelma. 
In this approach, a portion of about 650 base pairs of the mitochondrial gene cytochrome oxidase I is used, primarily to identify existing species, but also sometimes to support a separation between species. In 2017, Mendoza and Francke showed that although B. hamorii and B. smithi are similar in external appearance, they are clearly distinguished by their DNA barcodes. Longevity B. hamorii grows very slowly and matures relatively late. The females of this species can live up to 30 years, but the males tend to live for only 5 years or so. Molting Like all tarantulas, B. hamorii is an arthropod, and must go through a molting process to grow. Molting serves several purposes, such as renewing the tarantula's outer cover (shell) and replacing missing appendages. As tarantulas grow, they regularly molt (shed their skin) multiple times a year, depending on the tarantula's age. Since the exoskeleton cannot stretch, it must be replaced by a new one from beneath for the tarantula to grow. A tarantula may also regenerate lost appendages gradually, with each succeeding molt. Prior to molting, the spider becomes sluggish and stops eating to conserve as much energy as possible. Its abdomen darkens; this is the new exoskeleton beneath. Normally, the spider turns on its back to molt and stays in that position for several hours, as it pushes fluids just beneath its old exoskeleton and wiggles its limbs to loosen off the old and reveal the new exoskeleton. Once this has been accomplished, the tarantula does not eat for several days to weeks, and not uncommonly for up to a month after a molt, as its fangs are still soft; the fangs are also part of the exoskeleton and are shed with the rest of the skin. The whole process can take several hours and sheaths the tarantula with a moist, new skin in place of an old, faded one. Behavior Like most New World tarantulas, B. hamorii specimens kick urticating hairs from their abdomens and back legs if disturbed, rather than bite. They are only slightly venomous to humans and are considered extremely docile; however, as with all tarantulas, their large fangs can cause very painful puncture wounds, which can lead to secondary bacterial infection if not properly treated, and allergies may intensify with any bite. Distribution and habitat B. hamorii and the very similar B. smithi are found along the Pacific Coast of Mexico on opposite sides of the Balsas River basin as it opens onto the Pacific. B. hamorii is found to the north, in the states of Colima, Jalisco, and Michoacán. The natural habitat of the species is in hilly deciduous tropical forests. It constructs or extends burrows under logs, rocks, and tree roots, among thorny shrubs and tall grass. Their burrows were described in 1999 by a source that did not distinguish between B. hamorii and B. smithi. The deep burrows keep them protected from predators, such as the white-nosed coati, and enable them to ambush passing prey. The females spend the majority of their lives in their burrows, which are typically located in, or not far from, vegetation, and consist of a single entrance with a tunnel leading to one or two chambers. The entrance is just slightly larger than the body size of the spider. The tunnel, usually about three times the tarantula's leg span in length, leads to a chamber that is large enough for the spider to safely molt. Further down the burrow, via a shorter tunnel, a larger chamber is located where the spider rests and eats its prey. When the tarantula needs privacy, e.g.
when molting or laying eggs, the entrance is sealed with silk, sometimes supplemented with soil and leaves. Conservation In 1985, B. smithi (then not distinguished from B. hamorii) was placed on CITES Appendix II. Wild-caught specimens shipped for the Chinese market were decreasing in size. The smaller sizes were suspected to be a consequence of a declining population due to excessive export. Exporting is not the only threat, though; some local people have reportedly made a habit of killing these spiders in a nearly systematic way using pesticides, pouring gasoline into burrows, or simply killing migrating spiders on sight. The reasons for these actions seem to be an irrational fear based on myth surrounding B. hamorii and related species. Thus, whether the listing strengthened the wild population or not remains uncertain. The species has been bred successfully in captivity. In 1994, all remaining Brachypelma species were added to Appendix II. Large numbers of Mexican redknee tarantulas caught in the wild continue to be smuggled out of Mexico. At least 3,000 specimens of Mexican tarantulas were reported to have been sent to the United States or Europe a few years prior to 2017, most of which were Mexican redknee tarantulas.
Biology and health sciences
Spiders
Animals
https://en.wikipedia.org/wiki/Annona%20squamosa
Annona squamosa
Annona squamosa is a small, well-branched tree or shrub from the family Annonaceae that bears edible fruits called sugar apples or sweetsops. It tolerates a tropical lowland climate better than its relatives Annona reticulata and Annona cherimola (whose fruits often share the same name), helping make it the most widely cultivated of these species. Annona squamosa is a small, semi-(or late) deciduous, much-branched shrub or small tree tall similar to soursop (Annona muricata). It is native to the tropical Americas and West Indies, and Spanish traders aboard the Manila galleons docking in the Philippines brought it to Asia. The fruit is spherical-conical, in diameter and long, and weighing , with a thick rind composed of knobby segments. The colour is typically pale green through blue-green, with a deep pink blush in certain varieties, and typically has a bloom. It is unique among Annona fruits in being segmented; the segments tend to separate when ripe, exposing the innards. The flesh is fragrant and sweet, creamy white through light yellow, and resembles and tastes like custard. The flesh coats the seeds and adheres to them, forming individual segments arranged in a single layer around a conical core; it is soft, slightly grainy, and slippery. The hard, shiny seeds may number 20–40 or more per fruit and have a brown to black coat, although varieties exist that are almost seedless. The seeds can be ground for use as an insecticide, although this has not been approved by the US EPA or EU authorities. The stems run through the centre of the fruit, connecting it to the outside. The skin is shaped like a Reuleaux triangle, coloured green and rough in texture. Due to the soft flesh and structure of the sugar apple, it is very fragile to pressure when ripe. New varieties are also being developed in Taiwan and Hong Kong. The atemoya or "pineapple sugar-apple", a hybrid between the sugar-apple and the cherimoya, is popular in Taiwan, although it was first developed in the United States in 1908. The fruit is similar in sweetness to the sugar-apple, but has a very different taste. As its name suggests, it tastes like pineapple. Description The fruit of A. squamosa (sugar-apple) has sweet whitish pulp, and is popular in tropical markets. In Bengal it is called Ata phal. Stems and leaves Branches with light brown bark and visible leaf scars; inner bark light yellow and slightly bitter; twigs become brown with light brown dots (lenticels – small, oval, rounded spots upon the stem or branch of a plant, from which the underlying tissues may protrude or roots may issue). Thin, simple, alternate leaves occur singly, long and wide; rounded at the base and pointed at the tip (oblong-lanceolate). They are pale green on both surfaces and mostly hairless, with slight hairs on the underside when young. The sides sometimes are slightly unequal and the leaf edges are without teeth, inconspicuously hairy when young. The leaf stalks are long, green, and sparsely pubescent. Flowers Solitary or in short lateral clusters of 2–4, about long, greenish-yellow flowers on a hairy, slender long stalk. Three green outer petals, purplish at the base, oblong, long, and wide; three inner petals reduced to minute scales or absent. Very numerous stamens; crowded, white, less than long; ovary light green. Styles white, crowded on the raised axis. Each pistil forms a separate tubercle (small rounded wartlike protuberance), mostly long and wide, which matures into the aggregate fruit.
Flowering occurs in spring-early summer, and flowers are pollinated by nitidulid beetles. Its pollen is shed as permanent tetrads. Fruits and reproduction Fruits ripen 3 to 4 months after flowering. Aggregate and soft fruits form from the numerous and loosely united pistils of a flower, which become enlarged and mature into fruits distinct from those of other species of the genus (more like a giant raspberry instead). The round or heart-shaped greenish yellow, ripened aggregate fruit is pendulous on a thickened stalk; in diameter with many round protuberances and covered with a powdery bloom. Fruits are formed of loosely cohering or almost free carpels (the ripened pistils). The pulp is white tinged yellow, edible and sweetly aromatic. Each carpel contains an oblong, shiny and smooth, dark brown to black, long seed. Nutrition and uses Sugar-apple is high in energy, an excellent source of vitamin C and manganese, a good source of thiamine and vitamin B6, and provides vitamins B2, B3, B5, and B9, iron, magnesium, phosphorus and potassium in fair quantities. Chemistry The diterpenoid alkaloid atisine is the most abundant alkaloid in the root. Other constituents of Annona squamosa include the alkaloids oxophoebine, reticuline, isocorydine, and methylcorydaldine, and the flavonoid quercetin-3-O-glucoside. Bayer AG has patented the extraction process and molecular identity of the annonaceous acetogenin annonin, as well as its use as a biopesticide, although this use has not been approved by US or EU authorities. Other acetogenins have been isolated from the seeds, bark, and leaves. Distribution and habitat Annona squamosa is native to the tropical Americas and West Indies, but the exact origin is unknown. It is now the most widely cultivated of all the species of Annona, being grown for its fruit throughout the tropics and warmer subtropics, such as India, Indonesia, Thailand, Taiwan, and China as far north as Suzhou; it was introduced to southern Asia before 1590. It is naturalized as far north as southern Florida in the United States and as far south as Bahia in Brazil, and is an invasive species in some areas. Native Neotropic Caribbean: Antigua and Barbuda, Bahamas, Barbados, Cuba, Dominica, Dominican Republic, Grenada, Guadeloupe, Haiti, Jamaica, Martinique, Montserrat, Netherlands Antilles, Puerto Rico, St Kitts and Nevis, St Lucia, St Vincent and the Grenadines, Suriname, Trinidad and Tobago, Virgin Islands. Central America: Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, Panama Northern South America: Suriname, French Guiana, Guyana, Venezuela Western South America: Bolivia, Colombia, Ecuador, Peru Southern South America: Argentina, Brazil, Chile, Paraguay, Uruguay Naturalised Pacific: Samoa, Tonga North America: Mexico, Belize Afrotropic: Angola, Namibia, Sudan, Tanzania, Uganda, Zanzibar, Kenya Australasia: Australia, Fiji, New Zealand, Papua New Guinea, Solomon Islands Indomalaya: Bangladesh, Cambodia, China, India, Indonesia, Laos, Malaysia, Nepal, Pakistan, Philippines, Sri Lanka, Taiwan, Thailand, Myanmar, Vietnam Palearctic: Cyprus, Greece, Lebanon, Malta, Israel Climate and cultivation Like most species of Annona, it requires a tropical or subtropical climate with summer temperatures from to , and mean winter temperatures above . It is sensitive to cold and frost, being defoliated below and killed by temperatures of a couple of degrees below freezing.
It is only moderately drought-tolerant, requiring at least of annual rainfall, and does not produce fruit well during droughts. It will grow from sea level to an altitude of and thrives in hot dry climates, differing in its tolerance of lowland tropics from many of the other fruit bearers in the Annona family. It is quite a prolific bearer, and it produces fruit within as little as two to three years. A five-year-old tree can produce as many as 50 sugar apples. Poor fruit production has been reported in Florida because there are few natural pollinators (honeybees have a difficult time penetrating the tightly closed female flowers); however, hand pollination with a natural fibre brush is effective in increasing yield. Natural pollinators include beetles (Coleoptera) of the families Nitidulidae, Staphylinidae, Chrysomelidae, Curculionidae and Scarabaeidae. Ecology In the Philippines, the fruit is commonly eaten by the Philippine fruit bat (kabag or kabog), which then spreads the seeds from island to island. It is a host plant for larvae of the butterfly Graphium agamemnon (tailed jay). Uses In traditional Indian, Thai, and Native American medicines, the leaves are boiled down with water, possibly mixed with other specific botanicals, and used in a decoction to treat dysentery and urinary tract infection. In traditional Indian medicine, the leaves are also crushed for use as a poultice and applied to wounds. In Mexico, the leaves are rubbed on floors and put in hens' nests to repel lice. In Haiti, the fruit is known as cachiman and is used simply to make juice.
Biology and health sciences
Tropical and tropical-like fruit
Plants
https://en.wikipedia.org/wiki/Humulus%20lupulus
Humulus lupulus
Humulus lupulus, the common hop or hops, is a species of flowering plant in the hemp family, Cannabaceae. It is a perennial, herbaceous climbing plant which sends up new shoots in early spring and dies back to a cold-hardy rhizome in autumn. It is dioecious (having separate male and female plants) and native to West Asia, Europe and North America. As the female cone-shaped flowers (hops) are used to preserve and flavor beer, the species is widely cultivated for the brewing industry. Description Humulus lupulus is a perennial herbaceous plant up to tall, living up to 20 years. It has simple leaves with 3–5 deep lobes that can be opposite or alternate. The species is triggered by the longer summer days to flower, usually around July or August in the Northern Hemisphere. The plant is dioecious, with male and female flowers on separate plants. The fragrant flowers are wind-pollinated. The staminate (male) flowers do not have petals, while the pistillate (female) flowers have petals enveloping the fruit. The female flower cones (or strobili) are known as hops. The fruit is an achene, meaning that it is dry and does not split open at maturity. The achene is surrounded by tepals, and lupulin-secreting glands are concentrated on the fruit. The species is sometimes described as a bine rather than a vine because it has stiff downward-facing hairs that provide stability and allow it to climb. Chemistry H. lupulus contains myrcene, humulene, xanthohumol, myrcenol, and linalool, as well as less well defined tannins and resin. Hops are notable for containing secondary metabolites, flavonoids, oils, and polyphenols that impact the flavor of the products they are common in, such as beer. The bitter flavors in hops can be accounted for by acids composed of prenylated polyketides (a group of secondary metabolites), which strongly influence the taste of hop-based products. Multiple genes have been identified as factors in the expression of taste, including geranyl diphosphate synthase and chalcone synthase. Genomic analyses have shown evidence that human intervention in the selection of the hop over the thousands of years it has been cultivated has provided noticeable enhancements in aroma and bitterness, as well as selection of varieties with high yield rates. Flowering, growth, and stress response Predicted genes in homologous primary contigs have been identified as accounting for various traits expressed via variation in the growth, flowering, and stress responses of the plant. These homologous primary contigs correspond to regions with large amounts of sequence variation. Genes in the hop that show higher rates of sequence divergence in homologous primary contigs (overlapping DNA sequences inherited from a common ancestor) have been attributed to the expression of flowering, growth, and responses to both abiotic and biotic stress in the plant. The responses to stress are thought to manifest in the distinct differences and difficulties in the cultivation processes between geographically popular varieties of the hop plant. Environmental stress, such as changes in temperature and water availability, has also been shown to significantly alter the transcriptome and to reduce expression of genes known to be involved in the synthesis of secondary metabolites (including bitter acids), which are organic compounds that do not impact the development or reproduction of hops.
Environmental stress has also been shown to reduce expression of the valerophenone synthase gene, which is known to be an essential genetic component in the regulation of bitter acid production. This suggests that outside stress on H. lupulus likely has a direct effect on the expression of the bitter flavor that remains an essential component of the plant's popularity. Research Humulus lupulus contains xanthohumol, which is converted by large intestine bacteria into the phytoestrogen 8-prenylnaringenin, which may have a relative binding affinity to estrogen receptors as well as potentiating effects on GABAA receptor activity. Humulus lupulus extract is antimicrobial, an activity which has been exploited in the manufacture of natural deodorant. Spent H. lupulus extract has also been shown to have antimicrobial and anti-biofilm activities, raising the possibility that this waste product of the brewing industry could be developed for medical applications. Extracts of the bitter alpha-acids present in H. lupulus have been shown, at certain concentrations, to decrease nocturnal activity, acting as a sleep aid. Because of the growing understanding of the hop's overlap in gene structures with cannabidiolic acid synthase, which produces the precursor to cannabidiol, there is a gap in general understanding about potential unknown compounds and benefits in hops. As the understanding of the health benefits available in cannabidiol increases, there is a growing demand to further investigate the overlap between cannabidiolic acid synthase and H. lupulus. Limitations The genome of H. lupulus is relatively large and has been shown to be similar in size to the human genome. The complexity of the hop genome has made it difficult to understand and identify unknown genetic properties; however, with the growing availability of accessible sequencing, there is room for a more advanced understanding of the plant. Because of the growing concern of climate change, and the assumption that there will be an increase in heat waves, it is likely that growing large yields of hops could become more difficult. This could result in changes to the transcriptome of the hop, or in a decrease of certain varieties, leaving less room for further research. Taxonomy Relation to Cannabis sativa The hop belongs to the same family of plants as hemp and marijuana, the Cannabaceae. The hop plant diverged from Cannabis sativa over 20 million years ago and has evolved to be three times the physical size. The hop and C. sativa are estimated to have approximately a 73% overlap in genomic content. The overlap between enzymes includes polyketide synthases and prenyltransferases. The hop and C. sativa also have significant overlap in the cannabidiolic acid synthase gene, which is expressed in the tissues of the leaves in both plants. Varieties The five varieties of this species (Humulus lupulus) are: H. l. var. lupulus – Europe, western Asia H. l. var. cordifolius – eastern Asia H. l. var. lupuloides (syn. H. americanus) – eastern North America H. l. var. neomexicanus – western North America H. l. var. pubescens – midwestern and eastern North America Many cultivars are found in the list of hop varieties. A yellow-leafed ornamental cultivar, Humulus lupulus 'Aureus', is cultivated for garden use. It is also known as golden hop, and holds the Royal Horticultural Society's Award of Garden Merit (AGM).
Etymology The genus name Humulus is a medieval name that was at some point Latinized after being borrowed from a Germanic source exhibiting the h•m•l consonant cluster, as in Middle Low German homele. According to the Soviet Iranist Vasily Abaev, this could be a word of Sarmatian origin which is present in the modern Ossetian language () and derives from proto-Iranian hauma-arayka, an Aryan haoma. From Sarmatian dialects this word spread across Eurasia, thus creating a group of related words in Turkic, Finno-Ugric, Slavic and Germanic languages (see Chuvash хăмла, Finnish humala, Hungarian komló, Mordovian комла, Avar хомеллег). The specific epithet lupulus is Latin for "small wolf". The name refers to the plant's tendency to strangle other plants, mainly osiers or basket willows (Salix viminalis), as a wolf does a sheep. Hops could be seen growing over these willows so often that it was named the willow-wolf. The English word hop is derived from a Middle Dutch word also meaning Humulus lupulus. Distribution and habitat The plant is native to Europe, western Asia and North America. It grows best in the latitude range of 38°–51° in full sun with moderate amounts of rainfall. Ecology The flowers attract butterflies, amongst other insects. Animal pests Damson hop aphid (Phorodon humuli) Two spotted spider mite (Tetranychus urticae) Japanese beetle (Popillia japonica) Comma butterfly (Polygonia c-album) Pale tussock moth (Calliteara pudibunda) Currant pug moth (Eupithecia assimilata) Buttoned snout moth (Hypena rostralis) Buff ermine moth (Spilosoma lutea) Diseases Downy mildew (Pseudoperonospora humuli) Powdery mildew (Podosphaera macularis) Toxicity H. lupulus can cause dermatitis in some who handle it. It is estimated that about 1 in 30 people are affected by this. Uses H. lupulus is first mentioned in 768 CE, when King Pepin donated hops to a monastery in Paris. Cultivation was first recorded in 859 CE, in documents from a monastery in Freising, Germany. The chemical compounds found in H. lupulus are the main components in flavoring and bittering beer. The fragrant flower cones, known as hops, impart a bitter flavor and also have aromatic and preservative qualities. Some other compounds help with creating foam in beer. Chemicals such as linalool and aldehydes contribute to the flavor of beer. The main components of bitterness in beer are iso-alpha acids, with many other compounds contributing to beer's overall bitterness. Until the Middle Ages, many varieties of plant were used to flavor beer, including most commonly Myrica gale. H. lupulus became favored because it contains preserving agents which prolong the viability of a brew. In culture H. lupulus was voted the county flower of Kent in 2002 following a poll by the wild flora conservation charity Plantlife.
Biology and health sciences
Rosales
Plants
https://en.wikipedia.org/wiki/Integrating%20factor
Integrating factor
In mathematics, an integrating factor is a function that is chosen to facilitate the solving of a given equation involving differentials. It is commonly used to solve non-exact ordinary differential equations, but is also used within multivariable calculus when multiplying through by an integrating factor allows an inexact differential to be made into an exact differential (which can then be integrated to give a scalar field). This is especially useful in thermodynamics where temperature becomes the integrating factor that makes entropy an exact differential. Use An integrating factor is any expression that a differential equation is multiplied by to facilitate integration. For example, the nonlinear second order equation admits as an integrating factor: To integrate, note that both sides of the equation may be expressed as derivatives by going backwards with the chain rule: Therefore, where is a constant. This form may be more useful, depending on application. Performing a separation of variables will give This is an implicit solution which involves a nonelementary integral. This same method is used to solve the period of a simple pendulum. Solving first order linear ordinary differential equations Integrating factors are useful for solving ordinary differential equations that can be expressed in the form The basic idea is to find some function, say , called the "integrating factor", which we can multiply through our differential equation in order to bring the left-hand side under a common derivative. For the canonical first-order linear differential equation shown above, the integrating factor is . Note that it is not necessary to include the arbitrary constant in the integral, or absolute values in case the integral of involves a logarithm. Firstly, we only need one integrating factor to solve the equation, not all possible ones; secondly, such constants and absolute values will cancel out even if included. For absolute values, this can be seen by writing , where refers to the sign function, which will be constant on an interval if is continuous. As is undefined when , and a logarithm in the antiderivative only appears when the original function involved a logarithm or a reciprocal (neither of which are defined for 0), such an interval will be the interval of validity of our solution. To derive this, let be the integrating factor of a first order linear differential equation such that multiplication by transforms a non-integrable expression into an integrable derivative, then: Going from step 2 to step 3 requires that , which is a separable differential equation, whose solution yields in terms of : To verify, multiplying by gives By applying the product rule in reverse, we see that the left-hand side can be expressed as a single derivative in We use this fact to simplify our expression to Integrating both sides with respect to where is a constant. Moving the exponential to the right-hand side, the general solution to the ordinary differential equation is: In the case of a homogeneous differential equation, the general solution to the ordinary differential equation is: . For example, consider the differential equation We can see that in this case Multiplying both sides by we obtain The above equation can be rewritten as By integrating both sides with respect to x we obtain or The same result may be achieved using the following approach Reversing the quotient rule gives or or where is a constant.
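As a compact summary of the first-order procedure described above, written here in generic notation (P, Q, and M are placeholder names for the coefficient functions and the integrating factor; the worked instance at the end is chosen for illustration and is not the example from the text):

\[ \frac{dy}{dx} + P(x)\,y = Q(x), \qquad M(x) = e^{\int P(x)\,dx}. \]

Multiplying through by M(x) turns the left-hand side into a single derivative,

\[ \frac{d}{dx}\bigl(M(x)\,y\bigr) = M(x)\,Q(x), \]

so that integrating once and dividing by M(x) gives

\[ y = e^{-\int P(x)\,dx}\left(\int Q(x)\,e^{\int P(x)\,dx}\,dx + C\right). \]

For instance, for y' + y/x = x with x > 0, we have P(x) = 1/x, so M(x) = e^{ln x} = x; the equation becomes (xy)' = x², and integrating gives y = x²/3 + C/x.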
Solving second order linear ordinary differential equations The method of integrating factors for first order equations can be naturally extended to second order equations as well. The main goal in solving first order equations was to find an integrating factor $M(x)$ such that multiplying $y' + P(x)\,y$ by it would yield $\big(M(x)\,y\big)'$, after which subsequent integration and division by $M(x)$ would yield $y$. For second order linear differential equations, if we want $M(x) = e^{\int p(x)\,dx}$ to work as an integrating factor, then $\big(M(x)\,y\big)'' = M(x)\left(y'' + 2p(x)\,y' + \big(p(x)^2 + p'(x)\big)\,y\right)$. This implies that a second order equation must be exactly in the form $y'' + 2p(x)\,y' + \big(p(x)^2 + p'(x)\big)\,y = h(x)$ for the integrating factor to be usable. Example 1 For example, the differential equation $y'' + 2x\,y' + \big(x^2 + 1\big)\,y = 0$ can be solved exactly with integrating factors. The appropriate $p(x)$ can be deduced by examining the $y'$ term. In this case, $2p(x) = 2x$, so $p(x) = x$. After examining the $y$ term, we see that we do in fact have $p(x)^2 + p'(x) = x^2 + 1$, so we will multiply all terms by the integrating factor $e^{\int x\,dx} = e^{x^2/2}$. This gives us $e^{x^2/2}\,y'' + 2x\,e^{x^2/2}\,y' + \big(x^2 + 1\big)\,e^{x^2/2}\,y = 0$, which can be rearranged to give $\big(e^{x^2/2}\,y\big)'' = 0$. Integrating twice yields $e^{x^2/2}\,y = c_1 x + c_2$. Dividing by the integrating factor gives: $y = \big(c_1 x + c_2\big)\,e^{-x^2/2}$. Example 2 A slightly less obvious application of second order integrating factors involves the following differential equation: $y'' + 2\cot(x)\,y' - y = 1$. At first glance, this is clearly not in the form needed for second order integrating factors. We have a $2p(x)$ term in front of $y'$ but no $p(x)^2 + p'(x)$ in front of $y$. However, $p(x)^2 + p'(x) = \cot^2(x) - \csc^2(x)$, and $\cot^2(x) - \csc^2(x) = -1$ from the Pythagorean identity relating cotangent and cosecant, so we actually do have the required term in front of $y$ and can use integrating factors. Multiplying each term by $e^{\int \cot(x)\,dx} = \sin(x)$ gives $\sin(x)\,y'' + 2\cos(x)\,y' - \sin(x)\,y = \sin(x)$, which rearranged is $\big(\sin(x)\,y\big)'' = \sin(x)$. Integrating twice gives $\sin(x)\,y = -\sin(x) + c_1 x + c_2$. Finally, dividing by the integrating factor gives $y = \frac{c_1 x + c_2}{\sin(x)} - 1$. Solving nth order linear differential equations Integrating factors can be extended to any order, though the form of the equation needed to apply them gets more and more specific as order increases, making them less useful for orders 3 and above. The general idea is to differentiate the function $M(x)\,y$ $n$ times for an $n$th order differential equation and combine like terms. This will yield an equation in the form $\big(M(x)\,y\big)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} M^{(n-k)}(x)\,y^{(k)}$. If an $n$th order equation matches the form that is gotten after differentiating $n$ times, one can multiply all terms by the integrating factor and integrate $n$ times, dividing by the integrating factor on both sides to achieve the final result. Example A third order usage of integrating factors gives $\big(M(x)\,y\big)''' = M(x)\left(y''' + 3p(x)\,y'' + \big(3p(x)^2 + 3p'(x)\big)\,y' + \big(p(x)^3 + 3p(x)\,p'(x) + p''(x)\big)\,y\right)$, thus requiring our equation to be in the form $y''' + 3p(x)\,y'' + \big(3p(x)^2 + 3p'(x)\big)\,y' + \big(p(x)^3 + 3p(x)\,p'(x) + p''(x)\big)\,y = h(x)$. For example, in the differential equation $y''' + 3x\,y'' + \big(3x^2 + 3\big)\,y' + \big(x^3 + 3x\big)\,y = 0$ we have $p(x) = x$, so our integrating factor is $e^{x^2/2}$. Rearranging gives $\big(e^{x^2/2}\,y\big)''' = 0$. Integrating thrice and dividing by the integrating factor yields $y = \big(c_1 x^2 + c_2 x + c_3\big)\,e^{-x^2/2}$.
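Example 1 can likewise be sanity-checked by substituting the claimed solution back into the equation; in this minimal SymPy sketch, c1 and c2 stand for the arbitrary constants, and the residual simplifying to zero confirms the solution:

    # Substitute y = (c1*x + c2)*exp(-x**2/2) into y'' + 2x y' + (x**2 + 1) y.
    import sympy as sp

    x, c1, c2 = sp.symbols('x c1 c2')
    y = (c1*x + c2) * sp.exp(-x**2/2)

    residual = sp.diff(y, x, 2) + 2*x*sp.diff(y, x) + (x**2 + 1)*y
    print(sp.simplify(residual))  # prints 0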
Mathematics
Differential equations
null
1912904
https://en.wikipedia.org/wiki/Areca%20catechu
Areca catechu
Areca catechu is a species of palm native to the Philippines cultivated for areca nuts. It was carried widely through the tropics by Austronesian migrations and trade, beginning at least 1500 BCE, due to its use in betel nut chewing. It is widespread in cultivation and is considered naturalized in Malaysia, Indonesia, New Guinea, Taiwan, Madagascar, Cambodia, Laos, Myanmar, Thailand, Vietnam, southern China (Guangxi, Hainan, Yunnan), India, Nepal, Bangladesh, the Maldives, Sri Lanka, parts of the Pacific Islands, and also in the West Indies. Its fruits (called areca nuts or betel nuts) are chewed together with slaked lime and betel leaves for their stimulant and narcotic effects. Taxonomy Common names in English include areca palm, areca nut palm, betel palm, betel nut palm, Indian nut, Pinang palm and catechu. This palm is commonly called the betel tree because its fruit, the areca nut, is often chewed along with the betel leaf, the leaf of a vine in the family Piperaceae. The species was first published by Carl Linnaeus in his book Species Plantarum on page 1189 in 1753. Description Areca catechu is a medium-sized palm tree, growing straight to tall, with a trunk in diameter. The leaves are long, pinnate, with numerous, crowded leaflets. Chemical composition The seed contains alkaloids such as arecaidine and arecoline, which, when chewed, are intoxicating and slightly addictive. The seed also contains condensed tannins (procyanidins) called arecatannins, which are carcinogenic. The antibacterial activity of the seed has also been studied. Uses Betel nut chewing Areca catechu is grown for its commercially important seed crop, the areca nut, which is the main component of the practice of betel nut chewing. The practice is popular throughout Southeast Asia, South Asia, Taiwan, Papua New Guinea and some nearby islands, parts of southern China, Madagascar, and the Maldives. The nut itself can be addictive and has a direct link to oral cancers. Chewing areca nut is a cause of oral submucous fibrosis, a premalignant lesion which frequently progresses to mouth cancer. The practice of chewing areca nuts originated in Island Southeast Asia, where the areca palm is native. The oldest known evidence of areca nut chewing was found in a burial pit in the Duyong Cave site in the Philippines (to which areca palms are native), which dates to around 4,630±250 BP. Its diffusion is closely tied to the Neolithic expansion of the Austronesian peoples. It was spread to the Indo-Pacific during prehistoric times, reaching Micronesia at 3,500 to 3,000 BP; Near Oceania at 3,400 to 3,000 BP; South India and Sri Lanka by 3,500 BP; Mainland Southeast Asia by 3,000 to 2,500 BP; Northern India by 1,500 BP; and Madagascar by 600 BP. From India, it was also spread westwards to Persia and the Mediterranean. It was also previously present in the Lapita culture, based on archaeological remains dated from 3,600 to 2,500 BP, but it was not carried into Polynesia. Other uses The areca palm is also used as an interior landscaping species, often in large indoor areas such as malls and hotels. It will not fruit or reach full size if grown in this way. Indoors, it is a slow-growing, low-water, high-light plant that is sensitive to spider mites and occasionally mealybugs. In India, the dried fallen leaves are collected and hot-pressed into disposable palm leaf plates and bowls.
Cultural significance In Indonesia and Malaysia there are numerous place names using the words pinang, jambi or jambe (areca in Javanese, Sundanese, Balinese, and Old Malay), for example the cities of Tanjung Pinang and Pangkal Pinang in Indonesia, the Indonesian province of Jambi, and Penang Island (Pulau Pinang) off the west coast of Peninsular Malaysia. Fua Mulaku in the Maldives and Guwahati in Assam, as well as places in West Bengal (where the nut is called supari, সুপারি) and in the coastal areas of Kerala and Karnataka in India, are also named after local names for the areca nut.
Biology and health sciences
Arecales (inc. Palms)
Plants
1913920
https://en.wikipedia.org/wiki/Vanessa%20cardui
Vanessa cardui
Vanessa cardui is the most widespread of all butterfly species. It is commonly called the painted lady, or, formerly in North America, the cosmopolitan. Distribution V. cardui is one of the most widespread of all butterflies, found on every continent except Antarctica and South America. In Australia, V. cardui has a limited range around Bunbury, Fremantle, and Rottnest Island. However, its close relative, the Australian painted lady (V. kershawi, sometimes considered a subspecies), ranges over half the continent. Other closely related species are the American painted lady (V. virginiensis) and the West Coast lady (V. annabella). Ecology and behavior Food sources and host plants Larvae feed on Asteraceae species, including Cirsium, Carduus, Centaurea, Arctium, Onopordum, Helianthus, and Artemisia. The painted lady uses over 300 recorded host plants, according to the HOSTS database. Adult butterflies feed on flower nectar and aphid honeydew. Migration V. cardui occurs in any temperate zone, including mountains in the tropics. The species is resident only in warmer areas, but migrates in spring, and sometimes again in autumn. It migrates from North Africa and the Mediterranean to Britain and Europe in May and June, occasionally reaching Iceland, and from the Red Sea basin, via Israel and Cyprus, to Turkey in March and April. The occasional autumn migration made by V. cardui is likely a response to changing resource availability; it consists of a round trip from Europe to Africa. For decades, naturalists have debated whether the offspring of these immigrants ever make a southwards return migration. Research suggests that British painted ladies do undertake an autumn migration, making a round trip from tropical Africa to the Arctic Circle in a series of steps by up to six successive generations. The Radar Entomology Unit at Rothamsted Research provided evidence that autumn migrations take place at high altitude, which explains why these migrations are seldom witnessed. In recent years, thanks to the activity of The Worldwide Painted Lady Migration citizen science project, led by the Barcelona-based Institute of Evolutionary Biology (Catalan: Institut de Biologia Evolutiva), the huge range of migration has begun to be revealed. For example, some butterflies migrated from Iceland to the Sahara desert, and even further south. V. cardui is known for its distinct migratory behaviour. In California, they are usually seen flying from north to north-west. These migrations appear to be partially initiated by heavy winter rains in the desert, where rainfall controls the growth of larval food plants. In March 2019, after heavy rain produced an abundance of vegetation in the deserts, Southern California saw these butterflies migrating by the millions across the state. Similarly, heavier than usual rain during the 2018–2019 winter seems to have been the cause of the extraordinarily large migration observed in Israel at the end of March, estimated at a billion individual butterflies. Painted lady migration patterns are highly erratic, and they do not migrate every year. Some evidence suggests that global climatic events, such as El Niño, may affect the migratory behaviour of the painted lady butterflies, causing large-scale migrations. The first noticeable wave of migration in eastern Ukraine was noted in the last ten days of April 2019. From May 15, numbers began to grow, and hundreds of individuals of this species could be observed in the Kharkiv region of Ukraine, including on the city streets of Kharkiv.
Based on experimental data, the painted lady's migration pattern in northern Europe apparently does not follow a strict north-west heading. The range of headings suggests that migrating butterflies may adjust their migration patterns in response to local topographical features and weather, such as strong wind patterns. Laboratory-raised autumn-generation painted lady butterflies were able to distinguish a southern orientation for a return migration path. According to the same laboratory-based study, when butterflies were isolated from the sun, they were unable to orient themselves in a specific direction, as opposed to those that did have access to the sun. This suggests that V. cardui requires a direct view of the sky, implying the use of a solar compass to orient its migratory direction and maintain a straight flight path. A 2024 Nature Communications article provided the first evidence of any insect, in this case the painted lady, having traveled across an ocean. Specimens were captured on a beach in French Guiana, outside the painted lady's natural habitat. Pollen grains from the butterflies' bodies matched species of West African shrubs that flowered at the same time of year. The researchers also analyzed the butterflies' genomes and used isotope tracing to confirm that they were born in Europe or Africa. Finally, the study found that the trade wind conditions from Africa to South America were "exceptionally favorable" at that time, which would have allowed the butterflies to be propelled over , one of the longest journeys of an insect ever recorded. Behavior Groups of two to eight painted lady butterflies have been observed to fly in circles around each other for about one to five seconds before separating, a behavior associated with courtship. Groups of butterflies usually will not fly more than away from the starting point. To establish and defend their territories, adult males perch in the late afternoon in areas where females are most likely to appear. Once the male spots a female of the same species, he begins pursuing her. If the approaching butterfly is instead another male, the original male will give chase, flying vertically for a few feet before returning to his perch. V. cardui establishes territories within areas sheltered by hedgerows. Vanessa cardui tend to inhabit sunny, brightly lit, open environments and are often attracted to open areas of flowers and clovers. Adults spend time in small depressions in the ground on overcast days. Mating V. cardui displays a unique system of continuous mating throughout all seasons, including the winter. This may be attributed to its migratory patterns, which significantly affect its mating behaviour. During European migrations, the butterflies immediately begin to mate and lay eggs upon arrival in the Mediterranean in the spring, starting in late May. In the United States, painted lady butterflies migrating towards the north experience poor mating conditions, and many butterflies have limited breeding capabilities. The "local adult generation" develops during this time, roughly from the middle of May through early June, in conjunction with the butterflies' progression along their flight path. These painted lady butterflies begin breeding during their migratory process and reproduce throughout the entire migration. Scientists have not been able to find evidence of their overwintering; this may be because they migrate to warmer locations to survive and reproduce.
Female painted lady butterflies may suspend their flight temporarily when they are "ready to oviposit"; this allows them the opportunity to continually reproduce throughout their migrations. Because these butterflies are constantly migrating, male butterflies are thought to lack consistent territory. Instead of requiring territory to mate with females and developing evolutionary behaviour to defend this territory, the mating butterflies appear to establish a particular "time and place" in certain locations that they find to be suitable for reproduction. More specifically, they locate certain perches, hilltops, forest-meadow edges, or other landmarks where they will stay until, presumably, a female arrives to mate. Equally important for the reproduction of the painted lady butterflies is the males' polygynous mating behaviour, in which they often mate with more than one female. This matters for painted lady butterflies because the benefits may supersede the costs of polygyny, since no permanent breeding ground is used. Upon mating, which typically occurs in the afternoon, female painted lady butterflies lay eggs one by one in their desired breeding locations. The variety of eclosion locations ultimately dictates the male painted lady behaviour. Female painted lady butterflies have been observed to have a relatively "high biotic potential", meaning they each produce large numbers of offspring. This perpetual influx of reproduction may be a reason why these painted lady butterflies have propagated so successfully. Scientists have also observed that these butterflies tend to fly towards rain. Further studies have suggested that the large amounts of rainfall may somehow "activate more eggs or induce better larval development". Inhabited locations begin to observe a large influx of new generations of painted lady butterflies in the fall, particularly in September and October. Their reproductive success declines throughout the winter, primarily through November. However, they still continue to reproduce, an aspect of behaviour that is unusual among butterflies. Scientists hypothesize that these extensive migratory patterns help the painted lady butterflies find suitable conditions for breeding, thus offering a possible reason as to why these butterflies mate continuously. Oviposition Females oviposit on plants with nectar immediately available for the adults, even if this leads to high mortality of the larvae. This lack of discrimination indicates they do not take into account volatile chemicals released from potential host plants when searching for oviposition sites. The availability of adult resources dictates a preference for specific areas of flowers. Flowers with more available nectar result in a larger number of eggs deposited on the plants. This reinforces the idea that the painted lady butterfly does not discriminate among host plants and chooses mainly based on the availability of adult food sources, even if this increases the mortality rate of the offspring. The data also suggest that the painted lady butterfly favors quantity of offspring over quality. Defence mechanisms The main defence mechanisms of painted lady butterflies include flight and camouflage. The caterpillars hide in small silk nests on top of leaves from their main predators, which include wasps, spiders, ants, and birds. Vision Painted lady butterflies have a visual system that resembles that of a honey bee. Adult V. cardui eyes contain ultraviolet, blue, and green opsins.
Unlike other butterflies, such as the monarch or red postman butterflies, painted ladies lack red receptors, which means that they are not sensitive to red light. Behavioral studies on the related species, Vanessa atalanta, have demonstrated that V. atalanta cannot distinguish yellow light from orange light or orange light from red light. Human interaction Vanessa cardui and other painted lady species are bred in schools for educational purposes and used for butterfly releases at hospices, memorial events, and weddings.
Biology and health sciences
Lepidoptera
Animals
1913997
https://en.wikipedia.org/wiki/Okra
Okra
Okra (, ), Abelmoschus esculentus, known in some English-speaking countries as lady's fingers, is a flowering plant in the mallow family native to East Africa. Cultivated in tropical, subtropical, and warm temperate regions around the world for its edible green seed pods, okra is featured in the cuisines of many countries. Description The species is a perennial, often cultivated as an annual in temperate climates, often growing to around tall. As a member of the Malvaceae, it is related to such species as cotton, cocoa, and hibiscus. The leaves are long and broad, palmately lobed with 5–7 lobes. The flowers are in diameter, with five white to yellow petals, often with a red or purple spot at the base of each petal. The pollen grains are spherical, approximately 188 microns in diameter. The fruit is a capsule up to long with a pentagonal cross-section, containing numerous seeds. Etymology The genus name Abelmoschus is Neo-Latin, derived from an Arabic name alluding to the musky scent of the seeds, while esculentus is Latin for being fit for human consumption. The first use of the word okra (alternatively okro or ochro) appeared in 1679 in the Colony of Virginia, deriving from an Igbo word for the plant. The word gumbo was first used in American English around 1805, derived from Louisiana Creole and ultimately from a Bantu word for okra. Even though the word gumbo often refers to the dish gumbo in most of the United States, many places in the Deep South may have used it to refer to the pods and the plant itself, as do many other variants of the word found across the African diaspora in the Americas. Origin and distribution Okra is an allopolyploid of uncertain parentage. However, proposed parents include Abelmoschus ficulneus, A. tuberculatus and a reported "diploid" form of okra. Truly wild (as opposed to naturalised) populations are not known with certainty, and the West African variety has been described as a cultigen. Okra originated in East Africa, in Ethiopia, Eritrea, and eastern Sudan. From Arabia, the plant spread around the shores of the Mediterranean Sea and eastward. Okra was introduced to Europe through the Umayyad conquest of Hispania. One of the earliest accounts is by Abu al-Abbas al-Nabati, who visited Ayyubid Egypt in 1216 and described the plant under cultivation by the locals, who ate the tender, young pods with meal. The plant was introduced to the Americas by ships plying the Atlantic slave trade by 1658, when its presence was recorded in Brazil. It was further documented in Suriname in 1686. Okra may have been introduced to southeastern North America from Africa in the early 18th century. By 1748, it was being grown as far north as Philadelphia. Thomas Jefferson noted it was well established in Virginia by 1781. It was commonplace throughout the Southern United States by 1800, and the first mention of different cultivars was in 1806. Cultivation Abelmoschus esculentus is cultivated throughout the tropical and warm temperate regions of the world for its fibrous fruits or pods containing round, white seeds. It is among the most heat- and drought-tolerant vegetable species in the world and will tolerate soils with heavy clay and intermittent moisture, but frost can damage the pods. In cultivation, the seeds are soaked overnight prior to planting to a depth of . It prefers a soil temperature of at least for germination, which occurs between six days (soaked seeds) and three weeks. As a tropical plant, it also requires a lot of sunlight, and it should be cultivated in soil that has a pH between 5.8 and 7, ideally on the acidic side. Seedlings require ample water.
The seed pods rapidly become fibrous and woody and, to be edible as a vegetable, must be harvested when immature, usually within a week after pollination. The first harvest will typically be ready about two months after planting, and the pods will be approximately long. The most common disease afflicting the okra plant is verticillium wilt, often causing a yellowing and wilting of the leaves. Other diseases include powdery mildew in dry tropical regions, leaf spots, yellow mosaic and root-knot nematodes. Resistance to yellow mosaic virus in A. esculentus was transferred through a cross with Abelmoschus manihot, resulting in a new variety called Parbhani Kranti. In the U.S., much of the supply is grown in Florida, especially around Dade in southern Florida. Okra is grown throughout the state to some degree, so it is available ten months of the year. Yields range from less than to over . Wholesale prices can go as high as $18/bushel, which is . The Regional IPM Centers provide integrated pest management plans for use in the state. Production In 2021, world production of okra was 10.8 million tonnes, led by India with 60% of the total, with Nigeria and Mali as secondary producers. Uses Nutrition Culinary Okra is one of three thickeners that may be used in gumbo, a soup from Louisiana. Fried okra is a dish from the cuisine of the Southern United States. In Cuba and Puerto Rico, the vegetable is referred to as quimbombó, and is used in dishes such as quimbombó guisado (stewed okra), a dish similar to gumbo. It is also used in traditional dishes in the Dominican Republic, where it is called molondrón. In Brazil, it is an important component of several regional dishes, such as caruru, made with shrimp, in the Northeastern region, and frango com quiabo (chicken with okra) and carne refogada com quiabo (stewed meat with okra) in Minas Gerais. In South Asia, the pods are used in many spicy vegetable preparations as well as cooked with beef, mutton, lamb and chicken. Pods The pods of the plant are mucilaginous, resulting in the characteristic "goo" or slime when the seed pods are cooked; the mucilage contains soluble fiber. One possible way to de-slime okra is to cook it with an acidic food, such as tomatoes, to minimize the mucilage. Pods are cooked, pickled, eaten raw, or included in salads. Okra may be used in developing countries to mitigate malnutrition and alleviate food insecurity. Leaves and seeds Young okra leaves may be cooked similarly to the greens of beets or dandelions, or used in salads. Okra seeds may be roasted and ground to form a caffeine-free substitute for coffee. When importation of coffee was disrupted by the American Civil War in 1861, the Austin State Gazette said, "An acre of okra will produce seed enough to furnish a plantation with coffee in every way equal to that imported from Rio." Greenish-yellow edible okra oil is pressed from okra seeds; it has a pleasant taste and odor, and is high in unsaturated fats such as oleic acid and linoleic acid. The oil content of some varieties of the seed is about 40%. At , the yield was exceeded only by that of sunflower oil in one trial. Industrial Bast fibre from the stem of the plant has industrial uses, such as the reinforcement of polymer composites. The mucilage produced by the okra plant can be used for the removal of turbidity from wastewater by virtue of its flocculant properties. Having a composition similar to a thick polysaccharide film, okra mucilage is under development as a biodegradable food packaging material, as of 2018.
A 2009 study found okra oil suitable for use as a biofuel. Trivia Okra is the national vegetable of Pakistan, where it is known as bhindi.
Biology and health sciences
Others
null
1914137
https://en.wikipedia.org/wiki/Cynodon
Cynodon
Cynodon is a genus of plants in the grass family. It is native to warm temperate to tropical regions of the Old World, and is also cultivated and naturalized in the New World and on many oceanic islands. Taxonomy The genus name comes from Greek words meaning "dog-tooth". The genus as a whole, as well as its species, is commonly known as Bermuda grass or dog's tooth grass. Species Cynodon ambiguus (Ohwi) P.M.Peterson Cynodon barberi Rang. & Tadul. – India, Sri Lanka Cynodon convergens F.Muell. Cynodon coursii A.Camus – Madagascar Cynodon dactylon (L.) Pers. – Old World; introduced in New World and on various islands Cynodon incompletus Nees – southern Africa; introduced in Australia, Argentina Cynodon × magennisii Hurcombe – Limpopo, Gauteng, Mpumalanga; introduced in Texas, Alabama Cynodon nlemfuensis Vanderyst – Africa from Ethiopia to Zimbabwe; introduced in South Africa, West Africa, Saudi Arabia, Philippines, Texas, Florida, Mesoamerica, northern South America, various islands Cynodon plectostachyus (K.Schum.) Pilg. – Chad, East Africa; introduced in Madagascar, Bangladesh, Mexico, West Indies, Paraguay, northeastern Argentina, Texas, California Cynodon prostratus (C.A.Gardner & C.E.Hubb.) P.M.Peterson Cynodon radiatus Roth – China, Indian Subcontinent, Southeast Asia, Madagascar; introduced in Australia, New Guinea Cynodon simonii P.M.Peterson Cynodon tenellus R.Br. Cynodon transvaalensis Burtt Davy – South Africa, Lesotho; introduced in other parts of Africa plus in scattered locales in Iran, Australia, and the Americas Formerly included Several species are now considered better suited to other genera, namely Arundo, Bouteloua, Chloris, Cortaderia, Ctenium, Digitaria, Diplachne, Eleusine, Enteropogon, Eragrostis, Eustachys, Gynerium, Leptochloa, Molinia, Muhlenbergia, Phragmites, Poa, Spartina, Tridens, and Trigonochloa. Cultivation and uses Some species, most commonly C. dactylon, are grown as lawn grasses in warm temperate regions, such as the Sunbelt area of the United States, where they are valued for their drought tolerance compared to most other lawn grasses. Propagation is by rhizomes, stolons, or seeds. In some cases it is considered to be a weed; it spreads through lawns and flower beds, where it can be difficult to kill with herbicides without damaging other grasses or plants. It is difficult to pull out because the rhizomes and stolons break readily and then re-grow. It is also noted for its common use on the surface of greens on golf courses, as well as on football and baseball playing fields. News reports in 2012 claimed that a Bermuda-derived F1 hybrid called Tifton 85 had suddenly started producing cyanide, killing a cattle herd in Texas, United States.
Biology and health sciences
Poales
Plants
18701436
https://en.wikipedia.org/wiki/Tumblr
Tumblr
Tumblr (pronounced "tumbler") is a microblogging and social networking website founded by David Karp in 2007 and currently owned by the American company Automattic. The service allows users to post multimedia and other content to a short-form blog. History Beginnings (2006–2012) Development of Tumblr began in 2006 during a two-week gap between contracts at David Karp's software consulting company, Davidville. Karp had been interested in tumblelogs (short-form blogs, hence the name Tumblr) for some time and was waiting for one of the established blogging platforms to introduce its own tumblelogging platform. As none had done so after a year of waiting, Karp and developer Marco Arment began working on their own platform. Tumblr was launched in February 2007, and within two weeks had gained 75,000 users. Arment left the company in September 2010 to work on Instapaper. In June 2012, Tumblr featured its first major brand advertising campaign in collaboration with Adidas, which launched an official soccer Tumblr blog and bought ad placements on the user dashboard. This launch came only two months after Tumblr announced it would be moving towards paid advertising on its site. Ownership by Yahoo! (2013–2018) On May 20, 2013, it was announced that Yahoo and Tumblr had reached an agreement for Yahoo! Inc. to acquire Tumblr for $1.1 billion in cash. Many of Tumblr's users were unhappy with the news, and some started a petition that gained nearly 170,000 signatures. David Karp remained CEO, and the deal was finalized on June 20, 2013. Advertising sales goals were not met, and in 2016 Yahoo wrote down $712 million of Tumblr's value. Verizon Communications acquired Yahoo in June 2017, and placed Yahoo and Tumblr under its Oath subsidiary. Karp announced in November 2017 that he would be leaving Tumblr by the end of the year. Jeff D'Onofrio, Tumblr's president and COO, took over leading the company. The site, along with the rest of the Oath division (renamed Verizon Media Group in 2019), continued to struggle under Verizon. In March 2019, SimilarWeb estimated Tumblr had lost 30% of its user traffic since December 2018, when the site had introduced a stricter content policy with heavier restrictions on adult content (which had been a notable draw to the service). In May 2019, it was reported that Verizon was considering selling the site due to its continued struggles since the purchase (as it had done with another Yahoo property, Flickr, via its sale to SmugMug). Following this news, Pornhub's vice president publicly expressed interest in purchasing Tumblr, with a promise to reinstate the previous adult content policies. Automattic (2019–present) On August 12, 2019, Verizon Media announced that it would sell Tumblr to Automattic, the operator of blog service WordPress.com and corporate backer of the open source blog software of the same name. The sale was for an undisclosed amount, but Axios reported that the sale price was less than $3 million, less than 0.3% of Yahoo's original purchase price. Automattic CEO Matt Mullenweg stated that the site would operate as a complementary service to WordPress.com, and that there were no plans to reverse the content policy decisions made during Verizon ownership. In November 2022, Mullenweg stated that Tumblr would add support for the decentralized social networking protocol ActivityPub. In November 2023, most of Tumblr's product development and marketing teams were transferred to other groups within Automattic.
Mullenweg stated that focus would shift to core functionality and streamlining existing features. In February 2024, Automattic announced that it would begin selling user data from Tumblr and WordPress.com to Midjourney and OpenAI. Tumblr users are opted in by default, with an option to opt out. In August 2024, Automattic announced that it would migrate Tumblr's backend to an architecture derived from WordPress, in order to ease development and code sharing between the platforms. The company stated that this migration would not impact the service's user experience and content, and that users "won't even notice a difference from the outside". Features Blog management Dashboard: The dashboard is the primary tool for the typical Tumblr user. It is a live feed of recent posts from blogs that they follow. Through the dashboard, users are able to comment on, reblog, and like posts from other blogs that appear on their dashboard. The dashboard allows the user to upload text posts, images, videos, quotes, or links to their blog with a click of a button displayed at the top of the dashboard. Users are also able to connect their blogs to their Twitter and Facebook accounts, so that whenever they make a post, it will also be sent as a tweet and a status update. As of June 2022, users can also turn off reblogs on specific posts through the dashboard. Queue: Users are able to set up a schedule to delay posts that they make, spreading them over several hours or even days. Tags: Users can help their audience find posts about certain topics by adding tags. For example, a user who uploads a picture to their blog and wants viewers to find pictures could add the tag #picture, and viewers could then use that word to search for posts with that tag. HTML editing: Tumblr allows users to edit their blog's theme using HTML to control the appearance of their blog. Custom themes can be shared and used by other users, or sold. Custom domains: Tumblr allows users to use custom domains for their blogs. Users must purchase a domain from Tumblr Domains, an in-house registrar that provides domains that can only be used with Tumblr unless removed from the user's blog and transferred to another registrar. Previously, blogs could be linked with any domain or subdomain from any registrar; following the introduction of the Tumblr Domains service, however, a domain must be purchased directly from Tumblr to be used with a blog. Users who kept their blogs connected to a domain after the introduction got to keep their custom domain, as long as they do not disconnect it from Tumblr or let the domain expire. Tags The tagging system on the website operates as a hybrid, involving both self-tagging (users write their own tags on their posts) and an auto-manual function (the website recommends popular tags and ones that the user has used before). Only the first 20 tags added to any post will be indexed by the site. The tags are prefaced by a hashtag and separated by commas; spaces and special characters are allowed, but only up to 140 characters total per tag. There are two main types used by Tumblr users: descriptive tagging, and opinion or commentary tagging. Descriptive tags are usually introduced by the original poster, and describe what is in the post (e.g. #art, #sky). These are important for the original poster to use, so their post will be indexed and searchable by others wishing to view that subject of content.
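The tag limits described above can be expressed compactly in code. The following Python sketch is illustrative only (it is not Tumblr's implementation) and assumes exactly the rules stated here: commas separate tags, each tag is capped at 140 characters, and only the first 20 are indexed:

    # Illustrative only -- applies the tag rules described in the text above.
    def normalize_tags(raw: str) -> list[str]:
        # Commas separate tags; the leading '#' is display decoration.
        tags = [t.strip().lstrip('#') for t in raw.split(',')]
        # Keep non-empty tags of at most 140 characters (per-tag cap).
        tags = [t for t in tags if t and len(t) <= 140]
        # Only the first 20 tags on a post are indexed for search.
        return tags[:20]

    print(normalize_tags('#art, #sky, landscape photography'))
    # ['art', 'sky', 'landscape photography']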
Tags used as a form of communication are unique to Tumblr, and are typically more personal, expressing opinions, reactions, meta-commentary, background information, and more. Instead of adding onto the reblogged post (with their comments becoming an addition to each subsequent reblog from them), a user may add their comments in the tags, without changing the content or appearance of the original post in any way. Not all users choose to use tags this way, but those who do use tags for commentary may prefer it over adding a comment on the actual post. Mobile With Tumblr's 2009 acquisition of Tumblerette, an iOS application created by Jeff Rock and Garrett Ross, the service launched its official iPhone app. The site became available to BlackBerry smartphones on April 17, 2010, via a Mobelux application in BlackBerry World. In June 2012, Tumblr released a new version of its iOS app, Tumblr 3.0, adding support for Spotify integration, hi-res images and offline access. An app for Android is also available. A Windows Phone app was released on April 23, 2013. An app for Google Glass was released on May 16, 2013. Inbox and messaging Tumblr blogs have the option to allow users to submit questions, either as themselves or anonymously, to the blog for a response. Tumblr also previously offered a "fan mail" function, allowing users to send messages to blogs that they followed. On November 10, 2015, Tumblr introduced an integrated instant messaging function, allowing users to chat with other Tumblr users. The feature was rolled out in a "viral" manner: it was initially made available to a group of 1,500 users, and other users could receive access to the messaging system if they were sent a message by any user that had received access to the system itself. The messaging platform replaced the fan mail system, which was deprecated. The ability to send posts to others via the Dashboard was added the following month. Discontinued features In May 2012, Tumblr launched Storyboard, a blog managed by an in-house editorial team which featured stories and videos about noteworthy blogs and users on Tumblr. In April 2013, Storyboard was shut down. In March 2018, Tumblr began to syndicate original video content from the Verizon-owned video network go90, as part of an ongoing integration of Oath properties, and reported plans to wind down go90 in favor of using Oath properties to distribute its content instead. This made the respective content available internationally, since go90 was a U.S.-only service. Go90 shut down at the end of the following July. In November 2019, Tumblr introduced "group chats": ephemeral chat rooms surfaced via searches, designed to allow users to share content in real time with users who share their interests. Posts would disappear after 24 hours and could not be edited. The group chat function was discontinued on September 22, 2021. On July 21, 2021, Tumblr launched Post+ for some beta users, allowing bloggers to monetize their content. Post+ was removed in January 2024 due to low usage. At the end of 2022, Tumblr announced a livestreaming service called Tumblr Live, an adapted version of The Meet Group's product Livebox. In 2024, Tumblr announced that it would discontinue Tumblr Live as of January 24, with options for users to migrate to MeetMe. A feature that allowed users to tip small amounts of money to other users, introduced in February 2022, is scheduled to be removed on June 1, 2024, due to low usage.
Usage Tumblr has been noted for the socially progressive views of its users. In 2011, the service was most popular with the teen and college-aged user segments, with half of Tumblr's visitor base being under the age of 25. In April 2013, the website received more than 13 billion global page views. User activity, measured by the number of blog posts per quarter, peaked at over 100 million in early 2014 and declined in each of the next three years, to approximately 30 million by October 2018. , Tumblr hosted over 465 million blogs and more than 172 billion posts in total, with over 21 million posts created on the site each day. According to then-CEO Jeff D'Onofrio, members of Generation Z made up 48% of active and 61% of new users, reflecting a resurgence in activity on the platform. LGBTQ+ content and community Multiple researchers looking into Tumblr have found that the website is often used for community-building and as a place to explore identity formation and gender expression for LGBT groups. Prior to the 2018 adult content ban, transgender users posted their personal gender transitioning experiences, including photos taken after gender-confirming surgery and of the healing process. Many users felt that the ability to be anonymous, or to cultivate the identity they were transitioning to, made posting personal information to the website acceptable and safe. Adult content At the time of its acquisition by Yahoo, Tumblr was described by technology journalists as having a sizable amount of pornographic content. An analysis conducted by news and technology site TechCrunch on May 20, 2013, showed that over 22% of all traffic in and out of Tumblr was classified as pornography. In addition, a reported 16.45% of blogs on Tumblr exclusively contained pornographic material. Following its acquisition by Yahoo in 2013, Tumblr progressively restricted adult content on the site. In July 2013, Tumblr began to filter content in adult-tagged blogs from appearing in search results and tagged displays unless the user was logged in. In February 2018, Safe Mode (which filters "sensitive" content and blogs) became enabled by default for all users on an opt-out basis. On December 3, 2018, Tumblr announced that effective December 17, all images and videos depicting sex acts, and real-life images and videos depicting human genitalia or "female-presenting" nipples, would be banned from the service. Exceptions are provided for illustrations or art that depict nudity, nudity related to "political or newsworthy speech", and depictions of "female-presenting" nipples in relation to medical events such as childbirth, breastfeeding, mastectomy and gender reassignment surgery. The rules do not apply to text content. All posts in violation of the policy are hidden from public view, and repeat offenders may be reprimanded. Shortly prior to the announcement, Tumblr's Android app was patched to remove the ability to disable Safe Mode.
The change faced wide criticism among Tumblr's community; in particular, it has been argued that the service should have focused on other major issues (such as controlling hate speech or the number of porn-related spambots on the service), and that the service's adult community provided a platform for sex education, for independent adult performers (especially those representing LGBT communities who feel that they are under-represented by a heteronormative mainstream industry) seeking an outlet for their work, and for those seeking a safe haven from "over-policed" platforms to share creative work with adult themes. Tumblr stated that it was using various algorithms to detect potential violations, in combination with manual reviews. Users quickly discovered a wide array of false positives. A large number of users scheduled protest actions on December 17. On the day the ban took effect, Tumblr issued a new post clarifying the new policy, showcasing examples of adult images still allowed on the service, and stating that it "fully recognized" its "special obligation" to serve its LGBT userbase, and that "LGBTQ+ conversations, exploration of sexuality and gender, efforts to document the lives and challenges of those in the sex worker industry, and posts with pictures, videos, and GIFs of gender reassignment surgery are all examples of content that is not only permitted on Tumblr but actively encouraged." Wired cited multiple potential factors in the ban, including that the presence of adult content made the service unappealing to potential advertisers; the Stop Enabling Sex Traffickers Act (a U.S. federal law which makes websites liable for knowingly assisting or facilitating illegal sex trafficking); and heavy restrictions on adult content imposed by Apple for software offered on the iOS App Store (which similarly prompted several Reddit clients to heavily restrict users' ability to access forums on the site containing adult content). In January 2022, Tumblr reached a settlement with New York City's Commission on Human Rights, which had claimed that the 2018 ban on adult content disproportionately affected LGBTQ+ users. The agreement required the company to review its algorithms, revise its appeals process and review closed cases, and train its human moderators on diversity and inclusion issues. In November 2022, Tumblr changed its rules to allow nudity, but not sexually explicit images. Corporate affairs Tumblr's headquarters were at 770 Broadway in New York City. The company also maintains a support office in Richmond, Virginia. , Tumblr had 411 employees. Tumblr (and Automattic) now has a mostly distributed workforce, with a small office in San Francisco. The company's logo is set in Bookman Old Style with some modifications. Funding , Tumblr had received about $125 million of funding from investors. The company has raised funding from Union Square Ventures, Spark Capital, Martín Varsavsky, John Borthwick (Betaworks), Fred Seibert, Krum Capital, and Sequoia Capital (among other investors). In its first round of funding in October 2007, Tumblr raised $750,000 from Spark Capital and Union Square Ventures. In December 2008 the company raised $4.5 million in Series B funding, and a further $5 million in April 2010. In December 2010, Tumblr raised $30 million in Series D funding. The company had an $800 million valuation in August 2011. In September 2011, the company raised $85 million in a round of funding led by Greylock Partners and Insight Venture Partners.
Revenue sources In an interview with Nicole Lapin of Bloomberg West on September 7, 2012, David Karp said the site was monetized by advertising. Its first advertising launch started in May 2012 after 16 experimental campaigns. Tumblr made $13 million in revenue in 2012 and hoped to make $100 million in 2013. Tumblr reportedly spent $25 million to fund operations in 2012. In 2013, Tumblr began allowing companies to pay to promote their own posts to a larger audience. Tumblr's head of sales, Lee Brown, has quoted the average ad purchase on Tumblr as nearly six figures. Tumblr also allows premium theme templates to be sold for use by blogs. In July 2016, advertisements were implemented by default across all blogs. Users may opt out, and the service stated that a revenue sharing program would be implemented at a later date. In February 2022, Tumblr launched an ad-free subscription option that removes the marketing from microblogs for $5 per month, or $40 per year. During an AMA on 11 July 2023, the CEO said that Tumblr was financially in the red, losing $30 million a year. Criticism Copyright issues Tumblr has received criticism for copyright violations by participating bloggers; however, Tumblr accepts Digital Millennium Copyright Act (DMCA) take-down notices. Tumblr's visual appeal has made it ideal for photoblogs that often include copyrighted works from others that are re-published without payment. Tumblr users can post unoriginal content by "reblogging", a feature on Tumblr that allows users to re-post content taken from another blog onto their own blog with attribution. In addition to these copyright infringements, Tumblr has at times been weaponised by individuals seeking to raise DMCA notices against other sites. Former Wall Street Journal reporter and Pulitzer Prize-winning journalist Bradley Hope and his investigative publication Project Brazen published a story on 10 October 2023 reporting that Indian businessman Gaurav Srivastava was fraudulently representing himself as an agent of the Central Intelligence Agency (the Gaurav Srivastava Fake Spy Scam). Shortly after, a fake blog was created on blogging site Tumblr that republished the content of their story and backdated it to 8 October 2023, two days before their article came out. Following a copyright infringement complaint filed on the legal archive Lumen, and without checking the veracity of the source, Google delisted the Project Brazen article from its search results. Tumblr later confirmed that it had removed several accounts and posts from its platform following the alleged abuse of the site by Srivastava. Security Tumblr has been forced to manage spam and security problems. For example, a chain letter scam in May 2011 affected 130,000 users. On December 3, 2012, Tumblr was attacked by a cross-site scripting worm deployed by the internet troll group Gay Nigger Association of America. The message urged users to harm themselves and criticized blogging in general. User interface changes In 2015, Tumblr faced criticism from users for changes to its reblog mechanisms. In July 2015, the system was modified so that users cannot remove or edit individual comments by other users when reblogging a post; existing comments can only be removed all at once. Tumblr staff argued that the change was intended to combat "misattribution", though this move was met by criticism from "ask blogs" and "RP blogs", which often shortened long chains of reblogs between users to improve readability.
In September 2015, Tumblr changed how threads of comments on reblogged posts are displayed; rather than a nested view with indentations for each post, all reblogs are now shown in a flat view, and user avatars were also added. The change was intended to improve the legibility of reblogs, especially on mobile platforms, and complements the inability to edit existing comments. Although some users had requested such a change to combat posts made illegible by extremely large numbers of comments on a reblogged post, the majority of users (even those who had requested such a change) criticized the new format. The Verge was also critical of the changes, noting that the new view was cleaner but made the site lose its "nostalgic charm". Userbase behaviour While Tumblr's userbase has generally been received as accommodating people from a wide range of ideologies and identities, a common point of criticism is that attitudes from users on the site stifle discussion and discourse. In 2015, members of the Steven Universe fandom drove an artist to the point of attempting suicide over their artwork, in which they had drawn characters typically seen as fat in the show as thin. In 2018, Kotaku reporter Gita Jackson described the site as a "joyless black hole", citing how the website's design and functionality led to "fandoms spinning out of control", as well as an environment that inhibited discussion and discourse. Promotion of self-harm and suicide In February 2012, Tumblr banned blogs that promote or advocate suicide, self-harm and eating disorders (pro-ana). The suicide of a British teenager, Tallulah Wilson, raised the issue of suicide and self-harm promotion on Tumblr, as Wilson was reported to have maintained a self-harm blog on the site. A user on the site is reported to have sent Wilson an image of a noose accompanied by the message: "here is your new necklace, try it on." In response to the Wilson case, Maria Miller, the UK's minister for culture, media, and sport at the time, said that social media sites like Tumblr needed to remove "toxic" self-harm content. Searching terms like "depression", "anxiety", and "suicide" on Tumblr now brings up a PSA page directing the user to resources like the National Suicide Prevention Lifeline, The Trevor Project, the National Eating Disorders Association, and RAINN, as well as an option to continue to the search results. There are concerns that some Tumblr posts glorify suicide and depression among young people. Politics In February 2018, BuzzFeed published a report claiming that Tumblr was utilized as a distribution channel for Russian agents to influence American voting habits during the 2016 presidential election. Despite policies forbidding hate speech, Tumblr has been noted for hosting content from Neo-Nazis and white supremacists. In May 2020, Tumblr announced that it would remove reblogs of terminated hate speech posts, specifically Nazi and white supremacist content. Censorship Several countries have blocked access to Tumblr because of pornography, religious extremism or LGBT content. These countries include China, Indonesia, Kazakhstan and Iran. In February 2016, the Indonesian government temporarily blocked access to Tumblr within the country because the site hosted pages that carried pornography. The government shortly reversed its decision to block the site and said it had asked Tumblr to self-censor its pornographic content.
Adult content ban In November 2018, Tumblr's iOS app was removed by Apple from its App Store after illegal child pornography was found on the service. Tumblr stated that all images uploaded to the service are scanned against an industry database, but that a "routine audit" had revealed images that had not yet been added to the database. In the wake of the incident, a number of Tumblr blogs (particularly those dealing primarily in adult-tagged artwork such as erotica, as well as art study and anatomy resources) were also deleted, with affected users taking to other platforms (such as Twitter) to warn others and complain about the deletions, as well as encourage users to back up their blogs' contents. Tumblr subsequently removed the ability to disable "Safe Mode" from its Android app, and announced a wider ban on explicit images of sex acts and nudity on the platform, with certain limited exceptions. Tumblr deployed an automatic content recognition system, which resulted in many non-pornographic images being removed from the platform. In December 2018, about a month after it was initially banned, Tumblr's iOS app was restored to the App Store. The site had been known for adult content that attracted women and catered to other under-served audiences. Notable matters On October 21, 2011, then-U.S. President Barack Obama created a Tumblr account. In late 2015, a user on the website went viral after allegedly having collected human bones at a graveyard, sparking a controversy known as "Boneghazi" (a portmanteau of "bone" and "Benghazi"). The user, from New Orleans, Louisiana, had offered to share the human bones, reportedly procured from Holt Cemetery, by making a post in a Facebook group known as the "Queer Witch Collective". The Facebook post was later re-posted to Tumblr by another user, and the account from Facebook was traced to a profile on Tumblr because the profile pictures matched. In January 2016, the user's home was searched by law enforcement, who found 11 bones and four teeth.
Technology
Social network and blogging
null
18707721
https://en.wikipedia.org/wiki/Head%20lice%20infestation
Head lice infestation
Head lice infestation, also known as pediculosis capitis, is the infection of the head hair and scalp by the head louse (Pediculus humanus capitis). Itching from lice bites is common. During a person's first infection, the itch may not develop for up to six weeks. If a person is infected again, symptoms may begin much more quickly. The itch may cause problems with sleeping. Generally, however, it is not a serious condition. While head lice appear to spread some other diseases in Africa, they do not appear to do so in Europe or North America. Head lice are spread by direct contact with the hair of someone who is infected. The cause of head lice infestations in children is not related to cleanliness. Other animals, such as cats and dogs, do not play a role in transmission. Head lice feed only on human blood and are only able to survive on human head hair. As adults, they are about 2 to 3 mm long. When not attached to a human, they are unable to live beyond three days. Humans can also become infected with two other lice – the body louse and the crab louse. To make the diagnosis, live lice must be found. Using a comb can help with detection. Empty eggshells (known as nits) are not sufficient for the diagnosis. Possible treatments include combing the hair frequently with a fine-toothed comb or shaving the head completely. A number of topical medications are also effective, including malathion, ivermectin, and dimethicone. Dimethicone, which is a silicone oil, is often preferred due to the low risk of side effects. Pyrethroids such as permethrin have been commonly used; however, they have become less effective due to increasing pesticide resistance. There is little evidence for alternative medicines. Head-lice infestations are common, especially in children. In Europe, they infect between 1 and 20% of different groups of people. In the United States, between 6 and 12 million children are infected a year. They occur more often in girls than boys. It has been suggested that historically, head lice infections were beneficial, as they protected against the more dangerous body louse. Infestations may cause stigmatization of the infected individual. Signs and symptoms Head lice are generally uncomfortable, but typically do not constitute a serious condition. The most common symptom is itching of the head, which normally worsens 3 to 4 weeks after the initial infestation. The bite reaction is very mild, and bites can rarely be seen between the hairs. Bites can be seen especially on the neck of long-haired individuals when the hair is pushed aside. Swelling of the local lymph nodes and fever are rare. Itching may cause skin breakdown and uncommonly result in a bacterial infection. Many individuals do not experience symptoms. Itching may take 2–6 weeks to develop upon first infestation, and sooner in subsequent infestations. In Ethiopia, head lice appear to be able to spread louse-borne epidemic typhus and Bartonella quintana. In Europe, head lice do not appear to carry these infections. Transmission Head lice spread through direct contact of the head of an infested person with the head of a non-infested person. The presence of live lice indicates an active infestation, while the presence of nits indicates a past or currently inactive infection with the potential to become active. Head lice do not leap or spring as a means to transfer to their hosts; instead, they move by crawling. Transmission by indirect contact (e.g. sharing bedding, clothing, headwear, or the same comb) is much less common.
The cause of head lice infestations is not related to cleanliness. Neither hair length nor how often the hair is brushed affects the risk of infection. Pets are not vectors for head lice. Other lice that infest humans are the body louse and the crab louse (also known as pubic lice). The claws of these three species are adapted to attach to specific hair diameters. Pubic lice are most often spread by sexual contact with an infested person. Body lice can be found on clothing and are not known to burrow into the skin. Diagnosis The condition is diagnosed by finding live lice and unhatched eggs in the hair. Finding empty eggs is not enough. Dandruff, lint, sand, hair casts, and dried hairspray can be mistaken for eggs and nits. Diagnosis is made easier by using a magnifying glass or running a comb through the child's wet hair, the latter of which is the most assured method of diagnosis and can be used to monitor treatment. In questionable cases, a child can be referred to a health professional. However, head lice infestation is commonly overdiagnosed, with extinct infestations being mistaken for active ones. Infestations are only considered extinct if nits are more than 0.25 inches away from the scalp and nymphs and adult lice are absent. As a result, lice-killing treatments are more often used on non-infested than infested children. The use of a louse comb is the most effective way to detect living lice. With both methods, special attention should be paid to the area near the ears and the nape of the neck. The use of a magnifying glass to examine the material collected between the teeth of the comb could prevent misdiagnosis. The presence of nits alone, however, is not an accurate indicator of an active head louse infestation. Generally, white nits are empty egg casings, while brown nits may still contain viable louse larvae. One way of checking whether a nit is viable is to squeeze it between two fingernails; it gives a characteristic snapping pop as the egg bursts. Children with nits on their hair have a 35–40% chance of also being infested with living lice and eggs. If lice are detected, the entire family needs to be checked (especially children up to the age of 13 years) with a louse comb, and only those who are infested with living lice should be treated. As long as no living lice are detected, the child should be considered negative for head louse infestation. Accordingly, a child should be treated with a pediculicide only when living lice are detected on their hair (not because they have louse eggs/nits on their hair and not because the scalp is itchy). Prevention Examination of the child's head at regular intervals using a louse comb allows the diagnosis of louse infestation at an early stage. Early diagnosis makes treatment easier and reduces the possibility of infesting others. In times and areas where louse infestations are common, weekly examinations of children, especially those 4–15 years old, carried out by their parents, will aid control. Additional examinations are necessary if the child came in contact with infested individuals, if the child frequently scratches their head, or if nits suddenly appear on the child's hair. Clothes, towels, bedding, combs, and brushes that came into contact with the infested individual can be disinfected either by leaving them outside for at least two days or by washing them at 60 °C (140 °F) for 30 minutes. This is because adult lice can survive only one to two days without a blood meal and are highly dependent on human body warmth. 
Treatment There are a number of effective treatments for head lice. These methods include combs, shaving, medical creams, and hot air. Medical creams usually require two treatments a week apart. Head lice are not a justification to keep children home from school, as the risk of spread is low. Mechanical measures Wet combing (mechanical removal of lice through combing wet hair) can be used as a treatment measure for those who are too young for pediculicide treatment, which is intended for those 6 years of age or older. Wet combing a few times a day for a few weeks may also get rid of the infestation in half of people. This requires the use of a special lice comb with extra-fine teeth. This is the recommended method for infants and women who are pregnant. Shaving the head can also effectively treat lice. Another treatment is the use of heated air applied by a hair dryer. This can be of special use in the early stages of an infestation, since it has very high mortality for eggs. Medications There are many medications which can kill lice. Dimethicone is between 70 and 97% effective with a low rate of side effects, and thus is seen as the preferred treatment. Dimethicone is a silicone oil with a low surface tension and a propensity to coat surfaces completely. It is thought to work not by suffocation or poisoning, but by blocking water excretion, which causes insects to die from physiological stress either through prolonged immobilisation or disruption of internal organs such as the gut. There is no evidence of pesticide resistance. Ivermectin is around 80% effective, but can cause local skin irritation. Malathion has an effectiveness around 90%, but carries a possibility of toxicity. Pyrethroids such as permethrin, while commonly used, have lower rates of effectiveness due to resistance among lice. Effectiveness varies from 10 to 80%, depending on the population studied. Medications within a lotion appear to work better than those within a shampoo. Benzyl alcohol appears effective, but it is unclear whether it is better than standard treatments. Abametapir was approved for medical use in the United States in July 2020. Resistance to several commonly used treatments is increasing worldwide, with patterns of resistance varying by region. Head lice have demonstrated resistance to permethrin, malathion, phenothrin, and carbaryl in several countries around the world. One earlier method of delaying resistance was for health authorities to rotate the list of recommended insecticides. The mosaic model is the current recommendation, in which it is advised to use one product for a treatment course, followed by a different insecticide from another substance class if the first treatment fails. Home remedies Tea tree oil has been promoted as a treatment for head lice; however, there is no clear evidence of its effectiveness. A 2012 review of head lice treatment recommended against the use of tea tree oil for children because it could cause skin irritation or allergic reactions, because of contraindications, and because of a lack of knowledge about the oil's safety and effectiveness. Other home remedies, such as putting vinegar, isopropyl alcohol, olive oil, mayonnaise, or melted butter under a shower cap, have been disproven. The CDC states that swimming does not drown lice and can decrease the effectiveness of some treatments. 
Environment After treatment, people are often instructed to wash all bedding and vacuum all areas where the head may have been, such as car seats, coat hoods, and sofas, but this is not always necessary, since adult lice will die within 2 days without a blood meal, and newly hatched lice die within minutes of hatching. Combs and brushes may be deloused in boiling water for 5–10 minutes. Items may also be frozen for 24 hours at temperatures well below the freezing point of water to ensure that ice crystals form within the cells of the lice. Outbreak management In addition to environmental management, an outbreak of head lice infestation requires synchronous treatment of all who are infested and evaluation of those who have been exposed or are suspected to have head lice. Synchronous ovicidal dimethicone treatment has been shown to successfully manage and terminate outbreaks, and a single treatment is likely sufficient. Other treatment methods can be repeated 8–10 days following initial treatment, and may sometimes require a third treatment. Outbreak status and treatment effectiveness can be monitored using the wet combing method. Epidemiology The number of cases of human louse infestations (or pediculosis) has increased worldwide since the mid-1960s, reaching hundreds of millions annually. It is estimated that between 1 and 20% of specific groups in Europe are infected. Despite improvements in medical treatment and prevention of human diseases during the 20th century, head louse infestation remains stubbornly prevalent. In 1997, 80% of American elementary schools reported at least one outbreak of lice. Lice infestation during that same period was more prevalent than chickenpox. About 6–12 million children between the ages of 3 and 11 are treated annually for head lice in the United States alone. High levels of louse infestations have also been reported from all over the world, including Israel, Denmark, Sweden, the U.K., France, and Australia. The United Kingdom's National Health Service reports that lice have no preference for any type of hair, be it clean, dirty, or short. The number of children per family, the sharing of beds and closets, hair washing habits, local customs and social contacts, healthcare in a particular area (e.g. school), and socioeconomic status were found to be factors in head louse infestation in Iran. Other studies found no relationship between infestation and the frequency of brushing or shampooing. The California Department of Public Health indicates that chronic head lice infestation may be a sign of socioeconomic or family problems. Children between 4 and 13 years of age are the most frequently infested group. In the U.S., African-American children have lower rates of infestation. Head lice (Pediculus humanus capitis) infestation is most frequent among children aged 3–10 and their families. Females get head lice twice as often as males, and infestation in persons of Afro-Caribbean or other black descent could be rare due to differences in hair shape or width. However, these children may have nits that hatch, and the live lice can be transferred by head contact to other children. Stigma Head lice infestations are notably common, as is the stigma associated with those who experience infestations. Such stigma is even evidenced in the English language by the term "lousy", an adjective that describes something as very poor, bad, or disgusting. Misperceptions of those infected with head lice include that infestation is associated with low socioeconomic status, poor hygiene, unhealthiness, immigration status, and homelessness. 
Though these negative beliefs are unfounded, they can lead to consequences for both the caregivers and the affected individual, such as social exclusion and isolation from peers, victim-blaming, caregiver strain, inappropriate or unsafe treatment practices, and missed work or school. Public-health implications Over-treatment or mismanagement of head lice, which can be driven by stigma, has important implications at the level of the individual and the community. Though evidence-based guidelines from the CDC, American Academy of Pediatrics (AAP) and National Association of School Nurses (NASN) all recommend discontinuing "no-nit" policies in schools (meaning that a child does not need to be free of nits before returning to school), 80 percent of schools in the United States still maintain stringent policies that prevent children with infestations from attending. Thus, to foster a return to school in a timely fashion, these policies can encourage unsafe or harsh treatment practices, including the use of chemicals such as bleach or kerosene. Similarly, over-treatment of head lice using pesticide-based pediculicides has been linked to increased resistance and declining efficacy of these treatments. Society and culture Perhaps the most widely known cultural reference to pediculosis capitis is Robert Burns's noted poem "To a Louse", written on seeing one on a lady's bonnet. Other animals Lice infestation in general is known as pediculosis, and occurs in many mammalian and bird species. Lice infesting other host species are not the same organism as that which causes head lice infestations in humans, nor do the three louse species which infest humans infest any other host species.
Biology and health sciences
Helminthic diseases and infestations
Health
18707980
https://en.wikipedia.org/wiki/Baudet%20du%20Poitou
Baudet du Poitou
The Baudet du Poitou, also called the Poitevin or Poitou donkey, is a French breed of donkey. It is one of the largest breeds, and jacks (donkey stallions) were bred to mares of the Poitevin horse breed to produce Poitevin mules, which were formerly in worldwide demand for agricultural and other work. The Baudet has a distinctive coat, which hangs in long, ungroomed locks or cadenettes. The Baudet developed in the former province of Poitou, possibly from donkeys introduced to the area by the Romans. They may have been a status symbol during the Middle Ages, and by the early 18th century, their physical characteristics had been established. A studbook for the breed was established in France in 1884, and the 19th and early 20th centuries saw them being used for the production of mules throughout Europe. During this same time, Poitou bloodlines were also used to develop other donkey breeds, including the American Mammoth Jack in the United States. Increasing mechanization in the mid-20th century saw a decline in the need for, and hence the population of, the breed, and by 1977, a survey found only 44 members of the breed worldwide. Conservation efforts were begun by a number of public and private breeders and organizations, and by 2005 there were 450 purebred Poitou donkeys. History The exact origins of the Poitou breed are unknown, but donkeys and their use in the breeding of mules may have been introduced to the Poitou region of France by the Roman Empire. The Baudet de Poitou and the Mulassière (mule breeder) horse breed (also known as the Poitevin) were developed together for the production of superior mules. In the Middle Ages, owning a Poitou donkey may have been a status symbol among the local French nobility. It is not known when the Poitou's distinctive characteristics were gained, but they seem to have been well-developed by 1717, when an advisor to King Louis XV described them: There is found, in northern Poitou, donkeys which are as tall as large mules. They are almost completely covered in hair a half-foot long, with legs and joints as large as those of a carriage horse. In the mid-1800s, Poitevin mules were "regarded as the finest and strongest in France", and between 15,000 and 18,000 were sold annually. In 1884, a studbook was established for the Poitou donkey in France. During the first half of the twentieth century, the mules bred by the Poitou and the Poitevin continued to be desired throughout Europe, and were called the "finest working mule in the world". Purchasers paid higher prices for Poitevin mules than for others, and up to 30,000 were bred annually in Poitou, with some estimates putting the number as high as 50,000. As mechanization increased around World War II, mules became outmoded, and population numbers for both mules and donkeys dropped dramatically. Poitou donkey and mule breeders were extremely protective of their breeding practices, some of which were "highly unusual and misguided." Jacks were kept in closed-in stalls throughout the year once they had begun covering mares, often in unhygienic conditions. Once the mares had been covered, a folk belief held that if they were underfed, they would produce colts, which were more valuable, rather than fillies. This often led to mares being starved during their pregnancies. Colostrum, vital for foal development, was considered unhealthy and withheld from newborns. 
A lack of breeding records resulted in fertility problems, and foal mortality was significant: jacks were used to cover horse mares before jennies of their own kind, resulting in late-born foals that were vulnerable to cold fall and winter temperatures. Despite these husbandry issues, one author, writing in 1883, stated that "mule-breeding is about the only branch of agricultural industry in which France has no rival abroad, owing its prosperity entirely to the zeal of those engaged in it." Conservation efforts A breed census in 1977 found only 44 Poitou donkeys worldwide, and in 1980 there were still fewer than 80 animals. Conservation efforts were led by several public and private groups in France. In 1979, the Haras Nationaux (the French national stud) and the Parc Naturel Regional du Marais Poitevin, working with private breeders, launched an effort to improve the genetics of the Poitou, develop new breeding techniques and collect traditional knowledge on the breed. In 1981, 18 large donkeys from Portugal were acquired for use in breeding Poitou donkeys. This preceded the creation of the Asinerie Nationale Experimentale, an experimental breeding farm which opened in Dampierre-sur-Boutonne, Charente-Maritime, in 1982. The Parc also works to preserve the Poitevin horse breed. In 1988, the Association pour la Sauvegarde du Baudet du Poitou (SABAUD) was formed as a breeder network that focuses on marketing and fundraising for the breed, and in 1989 became the financial support arm of the Asinerie Nationale Experimentale. The Association des Éleveurs des Races Équine, Mulassière et Asine, Baudet du Poitou is the registering body for the Poitou donkey. The early conservation efforts were sometimes sidetracked as some breeders sold crossbred Poitous as purebreds, which can be worth up to ten times as much. Forged pedigrees and registration papers were sometimes used to legitimize these sales. However, by the 1990s, DNA testing and microchip technology began to be used to identify and track purebred animals. The conservation efforts in the latter decades of the 20th century and the early years of the 21st were successful, and a 2005 survey revealed 450 purebred registered animals. This number dropped to just under 400 by 2011. The French studbook for the breed is split into two sections. The first, Livre A, is for purebred animals with documented Poitou parentage on both sides of their pedigree. The second, Livre B, is for animals with one purebred Poitou parent. The American Livestock Breeds Conservancy lists the Poitou as "Critical" on its Conservation Priority List, a category for breeds with fewer than 2,000 animals worldwide and fewer than 200 registrations annually in the US. In 2001, scientists in Australia successfully implanted a Poitou donkey embryo created by artificial insemination into the womb of a Standardbred mare. The initiative was prompted by worries that joint problems might prevent a healthy pregnancy in the foal's biological mother. The resultant foal became one of three Poitou donkeys in Australia. The procedure was unusual because it is often difficult for members of one Equus species to accept implanted embryos from another species in the same genus. In the United States Historical records exist of several sets of exports of Poitous from France to the US during the 19th and early 20th centuries, including a 1910 import of 10 donkeys. Most of these were integrated into the generic pool of donkey bloodstock, rather than being bred pure. 
During this time, Poitous were used in the creation of the American Mammoth Jack breed. Due to high purchase and transportation costs, the breed played a smaller role in the development of the Mammoth Jack than some breeders would have preferred. Imports to the US continued until at least 1937, when a successful breeding jack named Kaki, who stood high, was brought to the country. The 1940s through the 1960s saw a dearth of Poitou imports, and only a few arrived between 1978 and the 1990s. By 1996, there were estimated to be only around 30 Poitous in North America. In 1996, Debbie Hamilton, an American, founded the Hamilton Rare Breeds Foundation on a farm in Hartland, Vermont, to breed Poitou donkeys. As of 2004, she owned 26 purebred and 14 partbred Poitous, making hers the largest Poitou breeding operation in the United States, and the second largest in the world, behind the French government-sponsored experimental farm. Hamilton works with French officials toward the preservation of the breed, and has received praise from French veterinarians, who appreciate her technical and financial contributions to the breed. Techniques for using cryopreservation to develop a sperm bank for Poitou donkeys have been in development in France since at least 1997, but Hamilton has pioneered the use of artificial insemination using frozen semen in the breed, in order to use genetic material from France to improve Poitou herds in the US. The North American Baudet de Poitou Society, organized by the American Donkey and Mule Society, is the American registry for the breed, coordinating with French officials for inspections and registrations of American-bred Poitou stock. Characteristics The Baudet is a large breed; among other European donkeys, only the Andalucian and Catalonian donkeys reach a similar size. In order to breed large mules, the original breeders of the Poitou chose animals with large features, such as ears, heads and leg joints. The ears developed to such an extent that their weight sometimes causes them to be carried horizontally. Minimum height is for jacks and for jennies. They have large, long heads, strong necks, long backs, short croups and round haunches. The limb joints and feet are large, and the legs strong. The temperament has been described as "friendly, affectionate and docile". In Poitou, the coat of the Baudet was traditionally – and deliberately – left ungroomed; with time, it formed cadenettes, long shaggy locks somewhat like dreadlocks. These sometimes became so long that they reached the ground; a Baudet with such a long coat was termed or . The genes responsible for the unusual coat type are recessive, so Poitou mules do not exhibit the trait, and cross-bred donkeys do not exhibit it unless of a related donkey breed that occasionally carries the same genes. The coat is dark bay, ranging from dark brown to black; it may also be , in which the silver-grey surround of the mouth and eyes has a reddish border. The underbelly and the insides of the thighs are pale. It may not display rubican markings ("white ticking") or a dorsal mule-stripe. Use The Baudet was traditionally used only for breeding mules; the word means "donkey sire", but it is used to describe the breed as a whole. With the decline of mule-breeding, some may be used for agricultural work, for driving or for riding.
Biology and health sciences
Donkeys
Animals
18710520
https://en.wikipedia.org/wiki/Intergalactic%20dust
Intergalactic dust
Intergalactic dust is cosmic dust found between galaxies in intergalactic space. Evidence for intergalactic dust was suggested as early as 1949, and its study grew throughout the late 20th century. There are large variations in the distribution of intergalactic dust. Dust may affect measurements of distances to objects in other galaxies, such as supernovae and quasars. Because dust absorbs visible light and re-emits it at longer wavelengths, observations of more distant astronomical objects are less affected by dust when conducted in the infrared. Intergalactic dust can form intergalactic dust clouds, known since the 1960s to exist around some galaxies. By the 1980s, at least four intergalactic dust clouds had been discovered within several megaparsecs of the Milky Way galaxy, exemplified by the Okroy Cloud.
Physical sciences
Basics_2
Astronomy
22462464
https://en.wikipedia.org/wiki/Banpo%20Bridge
Banpo Bridge
The Banpo Bridge is a major bridge for vehicular traffic over the Han River in central Seoul, South Korea. It is a double-decked bridge, and is above the pedestrian Jamsu Bridge. The bridge is a popular tourist attraction, and is known for its daily Moonlight Rainbow Fountain and light shows between April and October. The bridge holds the Guinness World Record for the longest fountain bridge in the world. It is centrally located in Seoul, and accessible via public transportation. Description The bridge is situated over the Han River, and connects Seobinggo-dong in Yongsan District with Banpo-dong in Seocho District. It is wide and long. The bridge is intended for vehicular traffic. It is the first double-deck bridge built in South Korea. It is also a major landmark of the city and attracts both locals and tourists. Moonlight Rainbow Fountain Since April 2009, the bridge has had a fountain off its west side called the Moonlight Rainbow Fountain. On November 7, 2008, the bridge was awarded the Guinness World Record for the longest fountain bridge in the world. The bridge has 38 water pumps and 380 nozzles installed. It also has speakers, lights, and projectors. Five to six times per day from April to October, weather permitting, the fountain has a 20-minute water and light show. Its projectors can display images on the water. The show is set to music that includes various popular South Korean and international songs. The set list is available online. Additionally, Banpo Hangang Park organizes the "Moonlight Square Cultural Weekend" every Saturday from 7:00 to 8:30 p.m. between May and October. This event showcases a range of musical genres, including classical music with commentary, a cappella, popera, jazz, brass bands, and orchestras. Jamsu Bridge Beneath Banpo Bridge is the pedestrian Jamsu Bridge. It is wide and long. During periods of high rainfall, the Jamsu Bridge is designed to submerge as the water level of the river rises, as the lower deck lies close to the waterline. It often hosts cultural events such as a yearly fall market with live music and food trucks. History The lower Jamsu Bridge was completed in 1979, before Banpo Bridge. Banpo Bridge began construction on August 11, 1980 and was completed in November 1982. It cost ₩22 billion to build (US$20 million). Its construction was intended to reduce the traffic load on the Hangang Bridge. Jamsu Bridge was given an elevated arch section in 1986 in order to accommodate tourist cruise ships passing underneath it. The bridge underwent repairs from December 30, 1994 to June 30, 1996, and again from December 1998 to 2002. Between October 2003 and 2005, it was repaved.
Technology
Bridges
null
133345
https://en.wikipedia.org/wiki/Dead%20reckoning
Dead reckoning
In navigation, dead reckoning is the process of calculating the current position of a moving object by using a previously determined position, or fix, and incorporating estimates of speed, heading (or direction or course), and elapsed time. The corresponding term in biology, used to describe the processes by which animals update their estimates of position or heading, is path integration. Advances in navigational aids that give accurate information on position, in particular satellite navigation using the Global Positioning System, have made simple dead reckoning by humans obsolete for most purposes. However, inertial navigation systems, which provide very accurate directional information, use dead reckoning and are very widely applied. Etymology Contrary to myth, the term "dead reckoning" was not originally used to abbreviate "deduced reckoning", nor is it a misspelling of the term "ded reckoning". The use of "ded" or "deduced reckoning" is not known to have appeared earlier than 1931, much later in history than "dead reckoning", which appeared as early as 1613 in the Oxford English Dictionary. The original intention of "dead" in the term is generally assumed to mean using a stationary object that is "dead in the water" as a basis for calculations. Additionally, at the time of the first appearance of "dead reckoning", "ded" was considered a common spelling of "dead". This potentially led to later confusion about the origin of the term. By analogy with their navigational use, the words dead reckoning are also used to mean the process of estimating the value of any variable quantity by using an earlier value and adding whatever changes have occurred in the meantime. Often, this usage implies that the changes are not known accurately. The earlier value and the changes may be measured or calculated quantities. Errors While dead reckoning can give the best available information on the present position with little math or analysis, it is subject to significant errors of approximation. For precise positional information, both speed and direction must be accurately known at all times during travel. Most notably, dead reckoning does not account for directional drift during travel through a fluid medium. These errors tend to compound themselves over greater distances, making dead reckoning a difficult method of navigation for longer journeys. For example, if displacement is measured by the number of rotations of a wheel, any discrepancy between the actual and assumed traveled distance per rotation, due perhaps to slippage or surface irregularities, will be a source of error. As each estimate of position is relative to the previous one, errors are cumulative, or compounding, over time. The accuracy of dead reckoning can be increased significantly by using other, more reliable methods to get a new fix part way through the journey. For example, if one were navigating on land in poor visibility, then dead reckoning could be used to get close enough to the known position of a landmark to be able to see it, before walking to the landmark itself (giving a precisely known starting point) and then setting off again. Localization of mobile sensor nodes Localizing a static sensor node is not a difficult task, because attaching a Global Positioning System (GPS) device suffices for localization. But a mobile sensor node, which continuously changes its geographical location with time, is difficult to localize. 
Mobile sensor nodes are mostly used within some particular domain for data collection, e.g., a sensor node attached to an animal within a grazing field or to a soldier on a battlefield. In these scenarios, a GPS device for each sensor node cannot be afforded; the reasons include the cost, size, and battery drain of constrained sensor nodes. To overcome this problem, a limited number of reference nodes (with GPS) within a field are employed. These nodes continuously broadcast their locations, and other nodes in proximity receive these locations and calculate their position using some mathematical technique like trilateration. At least three known reference locations are necessary for localization. Several localization algorithms based on the Sequential Monte Carlo (SMC) method have been proposed in the literature. Sometimes a node in some places receives only two known locations and hence becomes impossible to localize. To overcome this problem, the dead reckoning technique is used. With this technique, a sensor node uses its previously calculated location for localization at later time intervals. For example, if at time instant 1 node A calculates its position as loca_1 with the help of three known reference locations, then at time instant 2 it uses loca_1 along with two other reference locations received from two other reference nodes. This not only localizes a node in less time, but also localizes it in positions where it is difficult to get three reference locations. Animal navigation In studies of animal navigation, dead reckoning is more commonly (though not exclusively) known as path integration. Animals use it to estimate their current location based on their movements from their last known location. Animals such as ants, rodents, and geese have been shown to track their locations continuously relative to a starting point and to return to it, an important skill for foragers with a fixed home. Vehicular navigation Marine In marine navigation a "dead" reckoning plot generally does not take into account the effect of currents or wind. Aboard ship a dead reckoning plot is considered important in evaluating position information and planning the movement of the vessel. Dead reckoning begins with a known position, or fix, which is then advanced, mathematically or directly on the chart, by means of recorded heading, speed, and time. Speed can be determined by many methods. Before modern instrumentation, it was determined aboard ship using a chip log. More modern methods include pit log referencing engine speed (e.g. in rpm) against a table of total displacement (for ships), or referencing one's indicated airspeed fed by the pressure from a pitot tube. This measurement is converted to an equivalent airspeed based upon known atmospheric conditions and measured errors in the indicated airspeed system. A naval vessel uses a device called a pit sword (rodmeter), which uses two sensors on a metal rod to measure the electromagnetic variance caused by the ship moving through water. This change is then converted to the ship's speed. Distance is determined by multiplying the speed and the time. This initial position can then be adjusted, resulting in an estimated position, by taking into account the current (known as set and drift in marine navigation). If there is no positional information available, a new dead reckoning plot may start from an estimated position. In this case subsequent dead reckoning positions will have taken into account estimated set and drift. 
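Advancing a fix in this way can be sketched numerically. The following Python fragment is a minimal illustration, assuming a flat local chart with x pointing east and y pointing north and distances in nautical miles; the function name and the way set and drift are applied are illustrative, not a standard navigation library.

```python
import math

def advance_fix(x, y, heading_deg, speed_kn, hours, set_deg=0.0, drift_kn=0.0):
    """Advance a known position (x east, y north, nautical miles) by
    heading/speed/time, then apply the current's set (direction, degrees
    true) and drift (knots) to obtain an estimated position."""
    run = speed_kn * hours                 # distance run through the water
    h = math.radians(heading_deg)
    x += run * math.sin(h)                 # east component of the run
    y += run * math.cos(h)                 # north component of the run
    push = drift_kn * hours                # displacement due to the current
    s = math.radians(set_deg)
    return x + push * math.sin(s), y + push * math.cos(s)

# Example: 2 hours at 10 knots on heading 090 deg, with a 1-knot current
# setting due north: roughly 20 nm east and 2 nm north of the starting fix.
print(advance_fix(0.0, 0.0, 90.0, 10.0, 2.0, set_deg=0.0, drift_kn=1.0))
```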
Dead reckoning positions are calculated at predetermined intervals, and are maintained between fixes. The duration of the interval varies. Factors including one's speed made good, the nature of heading and other course changes, and the navigator's judgment determine when dead reckoning positions are calculated. Before the 18th-century development of the marine chronometer by John Harrison and the lunar distance method, dead reckoning was the primary method of determining longitude available to mariners such as Christopher Columbus and John Cabot on their trans-Atlantic voyages. Tools such as the traverse board were developed to enable even illiterate crew members to collect the data needed for dead reckoning. Polynesian navigation, however, uses different wayfinding techniques. Air On 14 June 1919, John Alcock and Arthur Brown took off from Lester's Field in St. John's, Newfoundland, in a Vickers Vimy. They navigated across the Atlantic Ocean by dead reckoning and landed in County Galway, Ireland, at 8:40 a.m. on 15 June, completing the first non-stop transatlantic flight. On 21 May 1927 Charles Lindbergh landed in Paris, France after a successful non-stop flight from the United States in the single-engined Spirit of St. Louis. As the aircraft was equipped with very basic instruments, Lindbergh used dead reckoning to navigate. Dead reckoning in the air is similar to dead reckoning on the sea, but slightly more complicated. The aircraft's performance is affected by the density of the air it moves through, as well as by winds, weight, and power settings. The basic formula for DR is Distance = Speed x Time. An aircraft flying at 250 knots airspeed for 2 hours has flown 500 nautical miles through the air. The wind triangle is used to calculate the effects of wind on heading and airspeed to obtain a magnetic heading to steer and the speed over the ground (groundspeed). Printed tables, formulae, or an E6B flight computer are used to calculate the effects of air density on aircraft rate of climb, rate of fuel burn, and airspeed. A course line is drawn on the aeronautical chart along with estimated positions at fixed intervals (say every half hour). Visual observations of ground features are used to obtain fixes. By comparing the fix and the estimated position, corrections are made to the aircraft's heading and groundspeed. Dead reckoning is on the curriculum for VFR (visual flight rules – or basic level) pilots worldwide. It is taught regardless of whether the aircraft has navigation aids such as GPS, ADF and VOR, and is an ICAO requirement. Many flying training schools will prevent a student from using electronic aids until they have mastered dead reckoning. Inertial navigation systems (INSes), which are nearly universal on more advanced aircraft, use dead reckoning internally. The INS provides reliable navigation capability under virtually any conditions, without the need for external navigation references, although it is still prone to slight errors. Automotive Dead reckoning is today implemented in some high-end automotive navigation systems in order to overcome the limitations of GPS/GNSS technology alone. Satellite microwave signals are unavailable in parking garages and tunnels, and often severely degraded in urban canyons and near trees due to blocked lines of sight to the satellites or multipath propagation. In a dead-reckoning navigation system, the car is equipped with sensors that know the wheel circumference and record wheel rotations and steering direction. 
These sensors are often already present in cars for other purposes (anti-lock braking system, electronic stability control) and can be read by the navigation system from the controller-area network bus. The navigation system then uses a Kalman filter to integrate the always-available sensor data with the accurate but occasionally unavailable position information from the satellite data into a combined position fix. Autonomous navigation in robotics Dead reckoning is utilized in some robotic applications. It is usually used to reduce the need for sensing technology, such as ultrasonic sensors, GPS, or placement of some linear and rotary encoders, in an autonomous robot, thus greatly reducing cost and complexity at the expense of performance and repeatability. The proper utilization of dead reckoning in this sense would be to supply a known percentage of electrical power or hydraulic pressure to the robot's drive motors over a given amount of time from a general starting point. Dead reckoning is not totally accurate, which can lead to errors in distance estimates ranging from a few millimeters (in CNC machining) to kilometers (in UAVs), based upon the duration of the run, the speed of the robot, the length of the run, and several other factors. Pedestrian dead reckoning With the increased sensor offering in smartphones, built-in accelerometers can be used as a pedometer and a built-in magnetometer as a compass heading provider. Pedestrian dead reckoning (PDR) can be used to supplement other navigation methods in a similar way to automotive navigation, or to extend navigation into areas where other navigation systems are unavailable. In a simple implementation, the user holds their phone in front of them, and each step causes the position estimate to move forward a fixed distance in the direction measured by the compass. Accuracy is limited by the sensor precision, magnetic disturbances inside structures, and unknown variables such as carrying position and stride length. Another challenge is differentiating walking from running, and recognizing movements like bicycling, climbing stairs, or riding an elevator. Many custom PDR systems existed before phone-based ones. While a pedometer can only be used to measure linear distance traveled, PDR systems have an embedded magnetometer for heading measurement. Custom PDR systems can take many forms, including special boots, belts, and watches, where the variability of carrying position has been minimized to better utilize magnetometer heading. True dead reckoning is fairly complicated, as it is not only important to minimize basic drift, but also to handle different carrying scenarios and movements, as well as hardware differences across phone models. Directional dead reckoning The south-pointing chariot was an ancient Chinese device consisting of a two-wheeled horse-drawn vehicle which carried a pointer that was intended always to aim to the south, no matter how the chariot turned. The chariot pre-dated the navigational use of the magnetic compass, and could not itself detect which direction was south. Instead it used a kind of directional dead reckoning: at the start of a journey, the pointer was aimed southward by hand, using local knowledge or astronomical observations, e.g. of the Pole Star. 
Then, as it traveled, a mechanism possibly containing differential gears used the different rotational speeds of the two wheels to turn the pointer relative to the body of the chariot by the angle of turns made (subject to available mechanical accuracy), keeping the pointer aiming in its original direction, to the south. Errors, as always with dead reckoning, would accumulate as distance traveled increased. For networked games Networked games and simulation tools routinely use dead reckoning to predict where an actor should be right now, using its last known kinematic state (position, velocity, acceleration, orientation, and angular velocity). This is primarily needed because it is impractical to send network updates at the rate that most games run, 60 Hz. The basic solution starts by projecting into the future using linear physics: P = P0 + V0 x T + 0.5 x A0 x T^2. This formula is used to move the object until a new update is received over the network. At that point, the problem is that there are now two kinematic states: the currently estimated position and the just received, actual position. Resolving these two states in a believable way can be quite complex. One approach is to create a curve (e.g. cubic Bézier splines, centripetal Catmull–Rom splines, and Hermite curves) between the two states while still projecting into the future. Another technique is to use projective velocity blending, which is the blending of two projections (last known and current) where the current projection uses a blending between the last known and current velocity over a set time. The first equation calculates a blended velocity V_b = V_c + (V_s - V_c) x T̂, given the client-side velocity V_c at the time of the last server update and the last known server-side velocity V_s. This essentially blends from the client-side velocity towards the server-side velocity for a smooth transition. Note that T̂ should go from zero (at the time of the server update) to one (at the time at which the next update should be arriving). A late server update is unproblematic as long as T̂ remains at one. Next, two positions are calculated: firstly, the blended velocity V_b and the last known server-side acceleration A_s are used to calculate P_c. This is a position which is projected from the client-side start position based on T_t, the time which has passed since the last server update. Secondly, the same equation is used with the last known server-side parameters to calculate the position projected from the last known server-side position and velocity V_s, resulting in P_s. Finally, the new position to display on the client is the result of interpolating from the projected position based on client information (P_c) towards the projected position based on the last known server information (P_s): Pos = P_c + (P_s - P_c) x T̂. The resulting movement smoothly resolves the discrepancy between client-side and server-side information, even if this server-side information arrives infrequently or inconsistently. It is also free of oscillations which spline-based interpolation may suffer from. Computer science In computer science, dead-reckoning refers to navigating an array data structure using indexes. Since every array element has the same size, it is possible to directly access one array element by knowing any position in the array. 
Given an array of equally sized elements – say A, B, C, D, E – and the memory address where the array starts, it is easy to compute the memory address of D: it is the start address plus three times the element size. Likewise, knowing D's memory address, it is easy to compute the memory address of B: it is D's address minus two times the element size. This property is particularly important for performance when used in conjunction with arrays of structures, because data can be accessed directly, without going through a pointer dereference.
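A minimal sketch of this address arithmetic, here written in Python with a hypothetical base address and a 4-byte element size chosen only to make the numbers concrete:

```python
ELEM_SIZE = 4          # every element occupies the same number of bytes
BASE = 0x1000          # hypothetical address of the first element, A (index 0)

def address_of(index):
    # Any element is reachable directly from the start of the array.
    return BASE + index * ELEM_SIZE

addr_d = address_of(3)            # D is the fourth element -> 0x100c
addr_b = addr_d - 2 * ELEM_SIZE   # two elements back from D -> 0x1004
print(hex(addr_d), hex(addr_b))
```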
Technology
Navigation
null
133496
https://en.wikipedia.org/wiki/Parallelogram
Parallelogram
In Euclidean geometry, a parallelogram is a simple (non-self-intersecting) quadrilateral with two pairs of parallel sides. The opposite or facing sides of a parallelogram are of equal length and the opposite angles of a parallelogram are of equal measure. The congruence of opposite sides and opposite angles is a direct consequence of the Euclidean parallel postulate and neither condition can be proven without appealing to the Euclidean parallel postulate or one of its equivalent formulations. By comparison, a quadrilateral with at least one pair of parallel sides is a trapezoid in American English or a trapezium in British English. The three-dimensional counterpart of a parallelogram is a parallelepiped. The word "parallelogram" comes from the Greek παραλληλό-γραμμον, parallēló-grammon, which means "a shape of parallel lines". Special cases Rectangle – A parallelogram with four angles of equal size (right angles). Rhombus – A parallelogram with four sides of equal length. Any parallelogram that is neither a rectangle nor a rhombus was traditionally called a rhomboid but this term is not used in modern mathematics. Square – A parallelogram with four sides of equal length and angles of equal size (right angles). Characterizations A simple (non-self-intersecting) quadrilateral is a parallelogram if and only if any one of the following statements is true: Two pairs of opposite sides are parallel (by definition). Two pairs of opposite sides are equal in length. Two pairs of opposite angles are equal in measure. The diagonals bisect each other. One pair of opposite sides is parallel and equal in length. Adjacent angles are supplementary. Each diagonal divides the quadrilateral into two congruent triangles. The sum of the squares of the sides equals the sum of the squares of the diagonals. (This is the parallelogram law.) It has rotational symmetry of order 2. The sum of the distances from any interior point to the sides is independent of the location of the point. (This is an extension of Viviani's theorem.) There is a point X in the plane of the quadrilateral with the property that every straight line through X divides the quadrilateral into two regions of equal area. Thus, all parallelograms have all the properties listed above, and conversely, if just any one of these statements is true in a simple quadrilateral, then it is considered a parallelogram. Other properties Opposite sides of a parallelogram are parallel (by definition) and so will never intersect. The area of a parallelogram is twice the area of a triangle created by one of its diagonals. The area of a parallelogram is also equal to the magnitude of the vector cross product of two adjacent sides. Any line through the midpoint of a parallelogram bisects the area. Any non-degenerate affine transformation takes a parallelogram to another parallelogram. A parallelogram has rotational symmetry of order 2 (through 180°) (or order 4 if a square). If it also has exactly two lines of reflectional symmetry then it must be a rhombus or an oblong (a non-square rectangle). If it has four lines of reflectional symmetry, it is a square. The perimeter of a parallelogram is 2(a + b) where a and b are the lengths of adjacent sides. Unlike any other convex polygon, a parallelogram cannot be inscribed in any triangle with less than twice its area. The centers of four squares all constructed either internally or externally on the sides of a parallelogram are the vertices of a square. 
If two lines parallel to sides of a parallelogram are constructed concurrent to a diagonal, then the parallelograms formed on opposite sides of that diagonal are equal in area. The diagonals of a parallelogram divide it into four triangles of equal area. Area formula All of the area formulas for general convex quadrilaterals apply to parallelograms. Further formulas are specific to parallelograms: A parallelogram with base b and height h can be divided into a trapezoid and a right triangle, and rearranged into a rectangle, as shown in the figure to the left. This means that the area of a parallelogram is the same as that of a rectangle with the same base and height: K = bh. The base × height area formula can also be derived using the figure to the right. The area K of the parallelogram to the right (the blue area) is the total area of the rectangle less the area of the two orange triangles. The area of the rectangle is (b + a) × h and the area of a single triangle is (a × h) / 2. Therefore, the area of the parallelogram is K = (b + a)h − 2 × (ah/2) = bh. Another area formula, for two sides B and C and angle θ, is K = B × C × sin θ. Provided that the parallelogram is not a rhombus, the area can be expressed using sides B and C and the angle γ at the intersection of the diagonals: K = (|tan γ| / 2) × |B² − C²|. When the parallelogram is specified from the lengths B and C of two adjacent sides together with the length D1 of either diagonal, then the area can be found from Heron's formula. Specifically it is K = 2 × √(s(s − B)(s − C)(s − D1)), where s = (B + C + D1)/2 and the leading factor 2 comes from the fact that the chosen diagonal divides the parallelogram into two congruent triangles. From vertex coordinates Let vectors a, b ∈ R² and let V denote the 2 × 2 matrix with rows a and b. Then the area of the parallelogram generated by a and b is equal to |det(V)| = |a1b2 − a2b1|. More generally, let vectors a, b ∈ Rⁿ and let V be the n × 2 matrix with columns a and b. Then the area of the parallelogram generated by a and b is equal to √(det(VᵀV)). Let points a, b, c ∈ R². Then the signed area of the parallelogram with vertices at a, b and c is equivalent to the determinant of a matrix built using a, b and c as rows, with the last column padded using ones: K = det [[a1, a2, 1], [b1, b2, 1], [c1, c2, 1]]. Proof that diagonals bisect each other To prove that the diagonals of a parallelogram ABCD bisect each other at the point E where they cross, we will use congruent triangles: ∠ABE ≅ ∠CDE and ∠BAE ≅ ∠DCE (alternate interior angles are equal in measure, since these are angles that a transversal makes with parallel lines AB and DC). Also, side AB is equal in length to side DC, since opposite sides of a parallelogram are equal in length. Therefore, triangles ABE and CDE are congruent (ASA postulate, two corresponding angles and the included side). Therefore, AE = CE and BE = DE. Since the diagonals AC and BD divide each other into segments of equal length, the diagonals bisect each other. Separately, since the diagonals AC and BD bisect each other at point E, point E is the midpoint of each diagonal. Lattice of parallelograms Parallelograms can tile the plane by translation. If edges are equal, or angles are right, the symmetry of the lattice is higher. These represent the four Bravais lattices in 2 dimensions. Parallelograms arising from other figures Automedian triangle An automedian triangle is one whose medians are in the same proportions as its sides (though in a different order). If ABC is an automedian triangle in which vertex A stands opposite the side a, G is the centroid (where the three medians of ABC intersect), and AL is one of the extended medians of ABC with L lying on the circumcircle of ABC, then BGCL is a parallelogram. 
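As a quick numerical check of the vertex-coordinate formulas above, the following short Python sketch computes the area from two side vectors via the 2 × 2 determinant; the function name is illustrative.

```python
def parallelogram_area(a, b):
    """Area of the parallelogram spanned by 2D vectors a and b,
    i.e. the absolute value of the determinant a1*b2 - a2*b1."""
    return abs(a[0] * b[1] - a[1] * b[0])

# Example: sides (4, 0) and (1, 3) span a parallelogram of area 12.
print(parallelogram_area((4, 0), (1, 3)))   # -> 12
```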
Varignon parallelogram Varignon's theorem holds that the midpoints of the sides of an arbitrary quadrilateral are the vertices of a parallelogram, called its Varignon parallelogram. If the quadrilateral is convex or concave (that is, not self-intersecting), then the area of the Varignon parallelogram is half the area of the quadrilateral. Proof without words (see figure): An arbitrary quadrilateral and its diagonals. Bases of similar triangles are parallel to the blue diagonal. Ditto for the red diagonal. The base pairs form a parallelogram with half the area of the quadrilateral, Aq, as the sum of the areas of the four large triangles, Al is 2 Aq (each of the two pairs reconstructs the quadrilateral) while that of the small triangles, As is a quarter of Al (half linear dimensions yields quarter area), and the area of the parallelogram is Aq minus As. Tangent parallelogram of an ellipse For an ellipse, two diameters are said to be conjugate if and only if the tangent line to the ellipse at an endpoint of one diameter is parallel to the other diameter. Each pair of conjugate diameters of an ellipse has a corresponding tangent parallelogram, sometimes called a bounding parallelogram, formed by the tangent lines to the ellipse at the four endpoints of the conjugate diameters. All tangent parallelograms for a given ellipse have the same area. It is possible to reconstruct an ellipse from any pair of conjugate diameters, or from any tangent parallelogram. Faces of a parallelepiped A parallelepiped is a three-dimensional figure whose six faces are parallelograms.
Mathematics
Two-dimensional space
null
133824
https://en.wikipedia.org/wiki/Oyster
Oyster
Oyster is the common name for a number of different families of salt-water bivalve molluscs that live in marine or brackish habitats. In some species, the valves are highly calcified, and many are somewhat irregular in shape. Many, but not all, oysters are in the superfamily Ostreoidea. Some species of oyster are commonly consumed and are regarded as a delicacy in some localities. Some types of pearl oysters are harvested for the pearl produced within the mantle. Others, such as the translucent windowpane oysters, are harvested for their shells. Etymology The word oyster comes from Old French oistre, and first appeared in English during the 14th century. The French word derived from the Latin ostrea, the feminine form of ostreum, which is the latinisation of the Ancient Greek ὄστρεον (óstreon) 'oyster'. Compare ὀστέον (ostéon) 'bone'. Types True oysters True oysters are members of the family Ostreidae. This family includes the edible oysters, which mainly belong to the genera Ostrea, Crassostrea, Magallana, and Saccostrea. Examples include the European flat oyster, eastern oyster, Olympia oyster, Pacific oyster, and the Sydney rock oyster. Ostreidae evolved in the Early Triassic epoch: the genus Liostrea grew on the shells of living ammonoids. Pearl oysters Almost all shell-bearing mollusks can secrete pearls, yet most are not very valuable. Pearls can form in both saltwater and freshwater environments. Pearl oysters are not closely related to true oysters, being members of a distinct family, the feathered oysters (Pteriidae). Both cultured pearls and natural pearls can be extracted from pearl oysters, though other molluscs, such as the freshwater mussels, also yield pearls of commercial value. The largest pearl-bearing oyster is the marine Pinctada maxima, which is roughly the size of a dinner plate. Not all individual oysters produce pearls. In nature, pearl oysters produce pearls by covering a minute invasive object with nacre. Over the years, the irritating object is covered with enough layers of nacre to become a pearl. The many different types, colours and shapes of pearls depend on the natural pigment of the nacre, and the shape of the original irritant. Pearl farmers can culture a pearl by placing a nucleus, usually a piece of polished mussel shell, inside the oyster. In three to seven years, the oyster can produce a perfect pearl. Since the beginning of the 20th century, when several researchers discovered how to produce artificial pearls, the cultured pearl market has far outgrown the natural pearl market. Other types A number of bivalve molluscs (other than true oysters and pearl oysters) also have common names that include the word "oyster", usually because they either taste like or look somewhat like true oysters, or because they yield noticeable pearls. Examples include: Thorny oysters in the genus Spondylus Pilgrim oyster, another term for a scallop, in reference to the scallop shell of St. James Saddle oysters, members of the Anomiidae family also known as jingle shells Dimydarian oysters, members of the family Dimyidae Windowpane oysters In the Philippines, a local thorny oyster species known as Tikod amo is a favorite seafood source in the southern part of the country. Because of its good flavor, it commands high prices. Anatomy Oysters breathe primarily via gills. In addition to their gills, oysters can exchange gases across their mantles, which are lined with many small, thin-walled blood vessels. A small, three-chambered heart, lying under the adductor muscle, pumps colorless blood to all parts of the body. 
At the same time, two kidneys, located on the underside of the muscle, remove waste products from the blood. Their nervous system includes two pairs of nerve cords and three pairs of ganglia. There is no evidence that oysters have a brain. While some oysters have two sexes (European oyster and Olympia oyster), their reproductive organs contain both eggs and sperm. Because of this, it is technically possible for an oyster to fertilize its own eggs. The gonads surround the digestive organs, and are made up of sex cells, branching tubules, and connective tissue. Once her millions of eggs are fertilized, the female discharges them into the water. The larvae develop in about six hours and exist suspended in the water column as veliger larvae for two to three weeks before settling on a bed and reaching sexual maturity within a year. Feeding Oysters are filter feeders, drawing water in over their gills through the beating of cilia. Suspended plankton and non-food particles are trapped in the mucus of a gill, and from there are transported to the mouth, where they are eaten, digested, and expelled as feces or pseudofeces that fall to the bottom and remain out of the water column. Oysters feed most actively at water temperatures ranging from the high 60s to the high 70s degrees Fahrenheit. Under ideal laboratory conditions, an oyster can filter up to of water per day. Under average conditions, mature oysters filter . Chesapeake Bay's once-flourishing oyster population historically filtered excess nutrients from the estuary's entire water volume every three to four days. As of 2008 it was estimated that a complete cycle would take nearly a year. Habitat and behaviour A group of oysters is commonly called a bed or oyster reef. As a keystone species, oysters provide habitat for many marine species. Crassostrea and Saccostrea live mainly in the intertidal zone, while Ostrea is subtidal. The hard surfaces of oyster shells and the nooks between the shells provide places where a host of small animals can live. Hundreds of animals, such as sea anemones, barnacles, and hooked mussels, inhabit oyster reefs. Many of these animals are prey to larger animals, including fish such as striped bass, black drum and croakers. An oyster reef can increase the surface area of a flat bottom 50-fold. An oyster's mature shape often depends on the type of bottom to which it is originally attached, but it always orients itself with its outer, flared shell tilted upward. One valve is cupped and the other is flat. Oysters usually reach maturity in one year. They are protandric; during their first year, they spawn as males by releasing sperm into the water. As they grow over the next two or three years and develop greater energy reserves, they spawn as females by releasing eggs. Bay oysters usually spawn from the end of June until mid-August. An increase in water temperature prompts a few oysters to spawn. This triggers spawning in the rest, clouding the water with millions of eggs and sperm. A single female oyster can produce up to 100 million eggs annually. The eggs become fertilized in the water and develop into larvae, which eventually find suitable sites, such as another oyster's shell, on which to settle. Attached oyster larvae are called spat. Spat are oysters less than long. Many species of bivalves, oysters included, seem to be stimulated to settle near adult conspecifics. Oysters filter large amounts of water to feed and breathe (exchanging oxygen and carbon dioxide with the water), but they are not permanently open. 
They regularly shut their valves to enter a resting state, even when they are permanently submersed. Their behaviour follows very strict circatidal and circadian rhythms according to the relative moon and sun positions. During neap tides, they exhibit much longer closing periods than during the spring tide. Some tropical oysters, such as the mangrove oyster in the family Ostreidae, grow best on mangrove roots. Low tide can expose them, making them easy to collect.

The largest oyster-producing body of water in the United States is the Chesapeake Bay, although these beds have decreased in number due to overfishing and pollution. Other large oyster farming areas in the US include the bays and estuaries along the coast of the Gulf of Mexico from Apalachicola, Florida, in the east to Galveston, Texas, in the west. Large beds of edible oysters are also found in Japan and Australia. In 2005, China accounted for 80% of the global oyster harvest. In Europe, France remained the industry leader. Common oyster predators include crabs, seabirds, starfish, and humans. Some oysters contain crabs, known as oyster crabs.

Nutrient cycling
Bivalves, including oysters, are effective filter feeders and can have large effects on the water columns in which they occur. As filter feeders, oysters remove plankton and organic particles from the water column. Multiple studies have shown individual oysters are capable of filtering up to of water per day, and thus oyster reefs can significantly improve water quality and clarity. Oysters consume nitrogen-containing compounds (nitrates and ammonia), phosphates, plankton, detritus, bacteria, and dissolved organic matter, removing them from the water. What is not used for animal growth is then expelled as solid waste pellets, which eventually decompose into the atmosphere as nitrogen. In Maryland, the Chesapeake Bay Program implemented a plan to use oysters to reduce the amount of nitrogen compounds entering the Chesapeake Bay by per year by 2010. Several studies have shown that oysters and mussels have the capacity to dramatically alter nitrogen levels in estuaries. In the U.S., Delaware is the only East Coast state without aquaculture, but making aquaculture a state-controlled industry of leasing water by the acre for commercial harvesting of shellfish is being considered. Supporters of Delaware's legislation to allow oyster aquaculture cite revenue, job creation, and nutrient cycling benefits. It is estimated that one acre can produce nearly 750,000 oysters, which could filter between of water daily. See also nutrient pollution for an extended explanation of nutrient remediation.

Ecosystem services
As an ecosystem engineer, oysters provide supporting ecosystem services, along with provisioning, regulating and cultural services. Oysters influence nutrient cycling, water filtration, habitat structure, biodiversity, and food web dynamics. Oyster reef habitats have been recognized as green infrastructure for shoreline protection. Assimilation of nitrogen and phosphorus into shellfish tissues provides an opportunity to remove these nutrients from the water column. In California's Tomales Bay, native oyster presence is associated with higher species diversity of benthic invertebrates. As the ecological and economic importance of oyster reefs has become more acknowledged, restoration efforts have increased.

Human history
Middens testify to the prehistoric importance of oysters as food, with some middens in New South Wales, Australia, dated at ten thousand years.
They have been cultivated in Japan from at least 2000 BC. In the United Kingdom, the town of Whitstable is noted for oyster farming from beds on the Kentish Flats that have been used since Roman times. The borough of Colchester holds an annual Oyster Feast each October, at which "Colchester Natives" (the native oyster, Ostrea edulis) are consumed. The United Kingdom hosts several other annual oyster festivals; for example, Woburn Oyster Festival is held in September. In Victorian England, it was quite common for people to go to the pub and enjoy their favorite beer with some oysters; drinkers quickly realized that the "rich, sweet, malty stouts" went well with the "briny, creamy oyster". Brewers then found that oyster shells naturally clarify a beer, and began putting crushed oyster shells into their brews. The first brewery known to have done this, in 1938, was the Hammerton Brewery in London, where the oyster stout originated.

The French seaside resort of Cancale in Brittany is noted for its oysters, which also date from Roman times. Sergius Orata of the Roman Republic is considered the first major merchant and cultivator of oysters. Using his considerable knowledge of hydraulics, he built a sophisticated cultivation system, including channels and locks, to control the tides. He was so famous for this that the Romans used to say he could breed oysters on the roof of his house.

In the early 19th century, oysters were cheap and mainly eaten by the working class. Throughout the 19th century, oyster beds in New York Harbor became the largest source of oysters worldwide. On any day in the late 19th century, six million oysters could be found on barges tied up along the city's waterfront. They were naturally quite popular in New York City, and helped initiate the city's restaurant trade. New York's oystermen became skilled cultivators of their beds, which provided employment for hundreds of workers and nutritious food for thousands. Eventually, rising demand exhausted many of the beds. To increase production, they introduced foreign species, which brought disease; effluent and increasing sedimentation from erosion destroyed most of the beds by the early 20th century. Oysters' popularity has put ever-increasing demands on wild oyster stocks. This scarcity increased prices, converting them from their original role as working-class food to their current status as an expensive delicacy. In Britain, the native species (the European flat oyster) takes five years to mature and is protected during its May-to-August spawning season. The current market is dominated by the larger Pacific oyster and rock oyster species, which are farmed year-round.

Fishing from the wild
Oysters are harvested by simply gathering them from their beds. In very shallow waters, they can be gathered by hand or with small rakes. In somewhat deeper water, long-handled rakes or oyster tongs are used to reach the beds. Patent tongs can be lowered on a line to reach beds that are too deep to reach directly. In all cases, the task is the same: the oysterman scrapes oysters into a pile, and then scoops them up with the rake or tongs. In some areas, a scallop dredge is used. This is a toothed bar attached to a chain bag. The dredge is towed through an oyster bed by a boat, picking up the oysters in its path. While dredges collect oysters more quickly, they heavily damage the beds, and their use is highly restricted.
Until 1965, Maryland limited dredging to sailboats, and even since then motorboats have been permitted only on certain days of the week. These regulations prompted the development of specialized sailboats (the bugeye and later the skipjack) for dredging. Similar laws were enacted in Connecticut before World War I and lasted until 1969. The laws restricted the harvesting of oysters in state-owned beds to vessels under sail, and prompted the construction of oyster sloops, which remained in use well into the 20th century. Hope is believed to be the last-built Connecticut oyster sloop, completed in 1948.

Oysters can also be collected by divers. In any case, when the oysters are collected, they are sorted to eliminate dead animals, bycatch (unwanted catch), and debris. Then they are taken to market, where they are either canned or sold live.

Cultivation
Oysters have been cultured since at least the days of the Roman Empire. The Pacific oyster (Magallana gigas) is presently the most widely grown bivalve around the world. Two methods are commonly used, release and bagging. In both cases, oysters are cultivated onshore to the size of spat, when they can attach themselves to a substrate. They may be allowed to mature further to form "seed oysters". In either case, they are then placed in the water to mature. The release technique involves distributing the spat throughout existing oyster beds, allowing them to mature naturally to be collected like wild oysters. Bagging has the cultivator putting spat in racks or bags and keeping them above the bottom. Harvesting involves simply lifting the bags or rack to the surface and removing the mature oysters. The latter method prevents losses to some predators, but is more expensive.

The Pacific oyster has been grown in the outflow of mariculture ponds. When fish or prawns are grown in ponds, it takes typically of feed to produce of product (dry-dry basis). The other goes into the pond and, after mineralization, provides food for phytoplankton, which in turn feeds the oyster. To prevent spawning, sterile oysters are now cultured by crossbreeding tetraploid and diploid oysters. The resulting triploid oyster cannot propagate, which prevents introduced oysters from spreading into unwanted habitats.

Restoration and recovery
In many areas, non-native oysters have been introduced in attempts to prop up failing harvests of native varieties. For example, the eastern oyster (Crassostrea virginica) was introduced to California waters in 1875, while the Pacific oyster was introduced there in 1929. Proposals for further such introductions remain controversial. The Pacific oyster prospered in Pendrell Sound, where the surface water is typically warm enough for spawning in the summer. Over the following years, spat spread out sporadically and populated adjacent areas. Eventually, possibly following adaptation to the local conditions, the Pacific oyster spread up and down the coast and now is the basis of the North American west coast oyster industry. Pendrell Sound is now a reserve that supplies spat for cultivation. Near the mouth of the Great Wicomico River in the Chesapeake Bay, five-year-old artificial reefs now harbor more than 180 million native Crassostrea virginica. That is far lower than in the late 1880s, when the bay's population was in the billions, and watermen harvested about annually. The 2009 harvest was less than .
Researchers claim the keys to the project were:
using waste oyster shells to elevate the reef floor, keeping the spat free of bottom sediments
building larger reefs, ranging up to in size
using disease-resistant broodstock

The "oyster-tecture" movement promotes the use of oyster reefs for water purification and wave attenuation. An oyster-tecture project has been implemented at Withers Estuary, Withers Swash, South Carolina, by volunteers led by Neil Chambers, at a site where pollution was affecting beach tourism. Currently, for the installation cost of $3000, roughly 4.8 million liters of water are being filtered daily. In New Jersey, however, the Department of Environmental Protection refused to allow oysters as a filtering system in Sandy Hook Bay and the Raritan Bay, citing worries that commercial shellfish growers would be at risk and that members of the public might disregard warnings and consume tainted oysters. New Jersey Baykeepers responded by changing their strategy for utilizing oysters to clean up the waterway, by collaborating with Naval Weapons Station Earle. The Navy station is under 24/7 security, which eliminates any poaching and the associated human health risk. Oyster-tecture projects have been proposed to protect coastal cities, such as New York, from the threat of rising sea levels due to climate change. Additionally, oyster reef restoration has been shown to increase oyster populations while also conserving the marine life within the reefs.

Human impact
The accidental or intentional introduction of species by humans has the potential to negatively impact native oyster populations. For example, non-native species in Tomales Bay have resulted in the loss of half of California's Olympia oysters. Oyster reefs now occupy a small fraction of their distribution prior to mass harvesting during the last three centuries. In October 2017, it was reported that underwater noise pollution can affect oysters, as they close their shells when exposed to low frequencies of sound in experimental conditions. Oysters rely on hearing waves and currents to regulate their circadian rhythms, and perception of weather events, such as rain, may induce spawning. Cargo ships, pile drivers, and explosions conducted underwater produce low frequencies that may be detected by oysters. Environmental stressors as a result of global change are also negatively impacting oysters around the world, with many impacts affecting molecular, physiological, and behavioral processes in species including Magallana gigas.

Shell recycling
Recycled oyster shells can help restore oyster reefs to provide marine life habitat that reduces flooding and protects shorelines from storms. Shell-recycling non-profits retrieve shells from restaurants, wash and dry them, and set them in the sun for up to a year to kill bacteria. Some states encourage shell recycling by offering tax incentives.

As food
Jonathan Swift is quoted as having said, "He was a bold man that first ate an oyster". Evidence of oyster consumption goes back into prehistory, evidenced by oyster middens found worldwide. Oysters were an important food source in all coastal areas where they could be found, and oyster fisheries were an important industry where they were plentiful. Overfishing and pressure from diseases and pollution have sharply reduced supplies, but they remain a popular treat celebrated in oyster festivals in many cities and towns.
It was once assumed that oysters were only safe to eat in months with the letter 'r' in their English and French names. This myth is based in truth, in that in the Northern Hemisphere, oysters are much more likely to spoil in the warmer months of May, June, July, and August. In recent years, pathogens such as Vibrio parahaemolyticus have caused outbreaks in several harvesting areas of the eastern United States during the summer months, lending further credence to this belief.

Dishes
Oysters can be eaten on the half shell, raw, smoked, boiled, baked, fried, roasted, stewed, canned, pickled, steamed, or broiled, or used in a variety of drinks. Eating can be as simple as opening the shell and eating the contents, including juice. Butter and salt are often added. Poached oysters can be served on toast with a cream roux. In the case of Oysters Rockefeller, preparation can be very elaborate. They are sometimes served on edible seaweed, such as brown algae. Care should be taken when consuming oysters. They may be eaten raw, with no dressing or with lemon juice, vinegar (most commonly shallot vinegar), or cocktail sauce. Upscale restaurants pair raw oysters with mignonette sauce, which consists primarily of fresh chopped shallot, mixed peppercorn, dry white wine and lemon juice or sherry vinegar. Raw oysters have complex flavors that vary among varieties and regions: salty, briny, buttery, metallic or fruity. The texture is soft and fleshy. North American varieties include Kumamoto and Yaquina Bay from Oregon, Duxbury and Wellfleet from Massachusetts, Malpeque from Prince Edward Island, Canada, Blue Point from Long Island, New York, Pemaquid from Maine, Rappahannock River and James River from Virginia, Chesapeake from Maryland and Cape May from New Jersey. Variations in water salinity, alkalinity, and mineral and nutritional content influence their flavor.

Nutrition
Oysters are an excellent source of zinc, iron, calcium, and selenium, as well as vitamin A and vitamin B12. Oysters are low in food energy; one dozen raw oysters provides only . They are rich in protein (approximately 9 g in 100 g of Pacific oysters). Two oysters () provide the Reference Daily Intake of zinc and vitamin B12. Traditionally, oysters are considered to be an aphrodisiac, partially because they resemble female sex organs. A team of American and Italian researchers analyzed bivalves and found they were rich in amino acids that trigger increased levels of sex hormones. Their high zinc content aids the production of testosterone.

Shucking oysters
Opening oysters, referred to as "oyster-shucking", requires skill. The preferred method is to use a special knife (called an oyster knife, a variant of a shucking knife), with a short and thick blade about long. While different methods are used to open an oyster (which sometimes depend on the type), the following is one commonly accepted oyster-shucking method. Insert the blade, with moderate force and vibration if necessary, at the hinge between the two valves. Twist the blade until there is a slight pop. Slide the blade upward to cut the adductor muscle which holds the shell closed. Inexperienced shuckers can apply too much force, which can result in injury if the blade slips. Heavy gloves, sometimes sold as oyster gloves, are recommended; apart from the knife, the shell itself can be razor-sharp. Professional shuckers require fewer than three seconds to open the shell.
If the oyster has a particularly soft shell, the knife can be inserted instead in the "sidedoor", about halfway along one side where the oyster lips widen with a slight indentation. Oyster-shucking has become a competitive sport; competitions are staged around the world. The Guinness World Oyster Opening Championship was held annually in September at the Galway International Oyster Festival in Galway, Ireland, until 2010. Since 2011, "Guinness" has been dropped from the title.

Food safety and storage
Unlike most shellfish, oysters can have a fairly long shelf life of up to four weeks. However, their taste becomes less pleasant as they age. Fresh oysters must be alive just before consumption or cooking. Cooked oysters that do not open are generally assumed to have been dead beforehand and therefore unsafe. There is only one criterion: the oyster must be capable of tightly closing its shell. Open oysters should be tapped on the shell; a live oyster will close up and is safe to eat. Oysters which are open and unresponsive are dead and must be discarded. Some dead oysters, or oyster shells which are full of sand, may be closed. These make a distinctive noise when tapped, and are known as "clackers".

Oysters can contain harmful bacteria. Oysters are filter feeders, and so naturally concentrate anything present in the surrounding water. Oysters from the Gulf Coast of the United States, for example, contain high bacterial loads of human pathogens in the warm months, most notably Vibrio vulnificus and Vibrio parahaemolyticus. In these cases, the main danger is for immunocompromised individuals, who are unable to fight off infection and can succumb to sepsis, leading to death. Vibrio vulnificus is the most deadly seafood-borne pathogen.

Depuration
Depuration of oysters is a common industry practice and widely researched in the scientific community, but it is not commonly known by end consumers. The main objective of seafood depuration is to remove fecal contamination in seafood before it is sold to end consumers. Oyster depuration is useful since oysters are generally eaten raw, and in many countries the requirement to process them is government-regulated or mandatory. The United Nations Food and Agriculture Organization (FAO) formally recognizes depuration and has published detailed documents on the process, and the Codex Alimentarius encourages the application of seafood depuration.

Oyster depuration begins after the harvest of oysters from farmed locations. The oysters are transported and placed into tanks pumped with clean water for periods of 48 to 72 hours. The holding temperatures and salinity vary according to species. The seawater that the oysters were originally farmed in does not remain in the oyster, since the water used for depuration must be fully sterilized, and the depuration facility would not necessarily be located near the farming location. Depuration of oysters can remove moderate levels of contamination of most bacterial indicators and pathogens. Well-known contaminants include Vibrio parahaemolyticus, a temperature-sensitive bacterium found in seawater animals, and Escherichia coli, a bacterium found in coastal waters near highly populated cities with sewage systems discharging waste nearby, or near agricultural discharges.
Depuration expands beyond oysters into many shellfish and other related products, especially in seafood that is known to come from potentially polluted areas; depurated seafood is effectively a product cleansed from the inside out to make it safe for human consumption.

Cultural aspects

Religious
As shellfish, consumption of oysters is forbidden by Jewish dietary law. Similarly, in Islam, Jaʽafari Shia and Hanafi Sunni dietary jurisprudence regard consuming bivalves, including oysters, as makruh (highly disliked).

Diseases
Oysters are subject to various diseases which can reduce harvests and severely deplete local populations. Disease control focuses on containing infections and breeding resistant strains, and is the subject of much ongoing research.

"Dermo" is caused by a protozoan parasite (Perkinsus marinus). It is a prevalent pathogen, causes massive mortality, and poses a significant economic threat to the oyster industry. The disease is not a direct threat to humans consuming infected oysters. Dermo first appeared in the Gulf of Mexico in the 1950s, and until 1978 was believed to be caused by a fungus. While it is most serious in warmer waters, it has gradually spread up the east coast of the United States.

Multinucleated sphere X (MSX) is caused by the protozoan Haplosporidium nelsoni, generally seen as a multinucleated plasmodium. It is infectious and causes heavy mortality in the eastern oyster; survivors, however, develop resistance and can help propagate resistant populations. MSX is associated with high salinity and water temperatures. MSX was first noted in Delaware Bay in 1957, and is now found all up and down the East Coast of the United States. Evidence suggests it was brought to the US when Crassostrea gigas, the Pacific oyster, was introduced to Delaware Bay.

Denman Island disease causes visible yellow/green pustules on the body and adductor muscles of oysters. This disease mainly affects Pacific oysters (Crassostrea gigas). The disease was first described in 1960 near Denman Island, off the eastern side of Vancouver Island, British Columbia. The causative agent of these lesions is associated with amitochondriate protistan microcells, which were later identified as Mikrocytos mackini.

Some oysters also harbor bacterial species which can cause human disease; of importance is Vibrio vulnificus, which causes gastroenteritis, which is usually self-limiting, and cellulitis. Cellulitis can be severe and rapidly spreading, requiring antibiotics, medical care, and in some severe cases amputation. It is usually acquired when the contents of the oyster come in contact with a cut skin lesion, as when shucking an oyster.
Biology and health sciences
Mollusks
null
133923
https://en.wikipedia.org/wiki/Chisel
Chisel
A chisel is a wedged hand tool with a characteristically shaped cutting edge on the end of its blade, for carving or cutting a hard material (e.g. wood, stone, or metal). The tool can be used by hand, struck with a mallet, or applied with mechanical power. The handle and blade of some types of chisel are made of metal or of wood with a sharp edge in it (wood chisels have lent part of their name to a particular edge grind, the chisel grind). Chiselling involves forcing the blade into some material to cut it. The driving force may be applied by pushing by hand, or by using a mallet or hammer. In industrial use, a hydraulic ram or falling weight ('trip hammer') may be used to drive a chisel into the material.

A gouge is a type of chisel that serves to carve small pieces from the material, particularly in woodworking, woodturning and sculpture. Gouges most frequently produce concave surfaces and have a U-shaped cross-section.

Etymology
Chisel comes from the Old French cisel, modern ciseau, Late Latin cisellum, a cutting tool, from caedere, to cut.

History
Chisels are common in the archeological record. Chisel-cut materials have also been found.

Woodworking
Woodworking chisels range from small hand tools for tiny details, to large chisels used to remove big sections of wood, in 'roughing out' the shape of a pattern or design. Typically, in woodcarving, one starts with a larger tool, and gradually progresses to smaller tools to finish the detail. One of the largest types of chisel is the slick, used in timber frame construction and wooden shipbuilding. There are many types of woodworking chisels used for specific purposes, such as:
Firmer chisel: has a blade with a thick rectangular cross section, making it stronger for use on tougher and heavier work.
Bevel edge chisel: can get into acute angles with its bevelled edges.
Mortise chisel: a thick, rigid blade with a straight cutting edge and deep, slightly tapered sides, used to make mortises and similar joints. Common types are registered and sash mortise chisels.
Paring chisel: has a long blade ideal for cleaning grooves and accessing tight spaces.
Skew chisel: has a 60-degree cutting angle and is used for trimming and finishing across the grain on a wood lathe.
Dovetail chisel: made specifically for cutting dovetail joints; the thickness of the body of the chisel, as well as the angle of the edges, permits easier access to the joint.
Butt chisel: a short chisel with beveled sides and a straight edge, for creating joints.
Carving chisels: used for intricate designs and sculpting; cutting edges come in many shapes, such as gouge, skew, parting, straight, paring, and V-groove.
Corner chisel: resembles a punch and has an L-shaped cutting edge; cleans out square holes, mortises and corners with 90-degree angles.
Flooring chisel: cuts and lifts flooring materials for removal and repair; ideal for tongue-and-groove flooring.
Framing chisel: usually used with a mallet; similar to a butt chisel, except it has a longer, slightly flexible blade.
Slick: a very large chisel driven by manual pressure, never struck.
Drawer lock chisel: an all-metal chisel with two angled blades, used for tight spaces such as cutting out the space for fitting a desk drawer lock.

Lathe tools
Woodturners use a woodworking gouge or chisel designed to cut wood as it is spun on a lathe. These tools have longer handles for more leverage, needed to counteract the tendency of the tool to react to the downward force of the spinning wood being cut or carved.
In addition, the angle and method of sharpening is different.

Metalworking
Chisels used in metalwork can be divided into two main categories: hot chisels and cold chisels.

Cold chisel
A cold chisel is a tool made of tempered steel used for cutting 'cold' metals, meaning that it is not used in conjunction with heating torches, forges, etc. Cold chisels are used to remove waste metal when a very smooth finish is not required, or when the work cannot be done easily with other tools, such as a hacksaw, file, bench shears or power tools. The name cold chisel comes from its use by blacksmiths to cut metal while it was cold, as compared to other tools they used to cut hot metal. Because cold chisels are used to form metal, they have a less-acute angle to the sharp portion of the blade than a woodworking chisel. This gives the cutting edge greater strength at the expense of sharpness.

Cold chisels come in a variety of sizes, from fine engraving tools that are tapped with very light hammers, to massive tools that are driven with sledgehammers. Cold chisels are forged to shape and hardened and tempered (to a blue colour) at the cutting edge. The head of the chisel is chamfered to slow down the formation of the mushroom shape caused by hammering, and is left soft to avoid brittle fracture splintering from hammer blows.

There are four common types of cold chisels. The flat chisel, the most widely known type, is used to cut bars and rods, to reduce surfaces, and to cut sheet metal that is too thick or difficult to cut with tin snips. The cross cut chisel is used for cutting grooves and slots; the blade narrows behind the cutting edge to provide clearance. The round nose chisel is used for cutting semi-circular grooves for oil ways in bearings. The diamond point chisel is used for cleaning out corners or difficult places and pulling over centre punch marks wrongly placed for drilling.

Although the vast majority of cold chisels are made of steel, a few are manufactured from beryllium copper, for use in special situations where non-sparking tools are required. Cold chisels are predominantly used in repoussé and chasing processes for the fabrication of bronze and aluminium sculptures.

Hot chisel
A hot chisel is used to cut metal that has been heated in a forge to soften the metal. One type of hot chisel is the hotcut hardy, which is used in an anvil hardy hole with the cutting edge facing up. The hot workpiece to be cut is placed over the chisel and struck with a hammer. The hammer drives the workpiece into the chisel, which allows it to be snapped off with a pair of tongs. This tool is also often used in combination with a "top fuller" type of hotcut when the piece being cut is particularly large.

Stone
Stone chisels are used to carve or cut stone, bricks or concrete slabs. To cut, as opposed to carve, a brick bolster is used; this has a wide, flat blade that is tapped along the cut line to produce a groove, then hit hard in the centre to crack the stone. Sculptors use a spoon chisel, which is bent, with the bevel on both sides. To increase the force, stone chisels are often hit with club hammers, a heavier type of hammer.

Masonry
Masonry chisels are typically heavy, with a relatively dull head that wedges and breaks, rather than cuts. Often used as a demolition tool, they may be mounted on a hammer drill, jackhammer, or hammered manually, usually with a heavy hammer of three pounds or more. These chisels normally have an SDS, SDS-MAX, or 1-1/8" Hex connection.
Types of masonry chisels include the following:
Moil (point) chisels
Flat chisels
Asphalt cutters
Carbide bushing tools
Clay spades
Flexible chisels
Tampers
A plugging chisel has a tapered edge for cleaning out hardened mortar. The chisel is held with one hand and struck with a hammer. The direction of the taper in the blade determines if the chisel cuts deep or runs shallow along the joint.

Leather
In leather work, a chisel is a tool used to punch holes in a piece of leather. The chisel has between one and seven (or possibly more) tines that are carefully placed along the line where the holes are desired, and then the top of the chisel is struck with a hammer until the tines penetrate the leather. They are then withdrawn, and the leather worker then stitches through the resulting holes.

Gouge
A modern gouge is similar to a chisel except its blade edge is not flat, but instead is curved or angled in cross-section. The modern version is generally hafted inline, the blade and handle typically having the same long axis. If the bevel of the blade is on the outer surface of the curve the gouge is called an 'outcannel' gouge; otherwise it is known as an 'incannel' gouge. Gouges with angled rather than curved blades are often called 'V-gouges' or 'vee-parting tools'.

The blade geometry is defined by a semi-standardized numbering system that varies by manufacturer and country of origin. For each gouge a "sweep number" is specified that expresses the part of a circle defined by the curve of the blade. The sweep number usually ranges from #1, or flat, up to #9, a semi-circle, with additional specialized gouges at higher numbers, such as the U-shaped #11, and a v-tool or parting tool, which may be an even higher number such as #41. In addition to sweep, gouges are also specified by the distance from one edge of the blade to the other (this corresponds to the chord of the circle section defined by the edge of the blade). Putting these pieces together, two numbers are used to specify the shape of the cutting edge of a gouge, such as a '#7-20mm' (a short parsing sketch of this notation follows at the end of this section). Some manufacturers provide charts with the sweeps of their blades shown graphically.

In addition to varying blade sweeps, bevels, and widths, blade variations include:
'Crank-neck' gouges, in which the blade is offset from the handle by a small distance, to allow working flat to a surface
'Spoon-bent' gouges, in which the blade is curved along its length, to allow working in a hollow not otherwise accessible with a straight-bladed gouge
'Fishtail' gouges, in which the blade is very narrow for most of its length and then broadens out near the working edge, to allow working in tight spaces
All of these specialized gouges allow a craftsperson to cut into areas that may not be possible with a regular, straight-bladed gouge. The cutting shape of a gouge may also be held in an adze, roughly as the shape of a modern-day mattock.

Gouges are used in woodworking and the arts. For example, a violin luthier uses gouges to carve the violin, a cabinetmaker may use one for running flutes or paring curves, and an artist may produce a piece of art by cutting some bits out of a sheet of linoleum (see also Linocut). Gouges have been found in a number of Bronze Age hoards in Great Britain.
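To make the two-number gouge notation concrete, here is a minimal, hypothetical Python sketch that parses a spec such as '#7-20mm' into its sweep number and edge width. The verbal sweep glosses are approximations only, since, as noted above, the numbering is only semi-standardized and varies by manufacturer.

```python
import re

# Illustrative parser for the semi-standardized gouge notation described
# above (e.g. "#7-20mm": sweep number 7, cutting edge 20 mm across).
GOUGE_SPEC = re.compile(r"#(?P<sweep>\d+)-(?P<width>\d+(?:\.\d+)?)mm")

def parse_gouge(spec: str) -> tuple[int, float]:
    """Split a spec such as '#7-20mm' into (sweep_number, width_mm)."""
    m = GOUGE_SPEC.fullmatch(spec.strip())
    if not m:
        raise ValueError(f"not a recognizable gouge spec: {spec!r}")
    return int(m.group("sweep")), float(m.group("width"))

def describe_sweep(sweep: int) -> str:
    """Rough verbal gloss of common sweep numbers (illustrative only;
    real sweeps vary by maker)."""
    if sweep == 1:
        return "flat"
    if 2 <= sweep <= 8:
        return "curved (deeper as the number rises)"
    if sweep == 9:
        return "semi-circular"
    if sweep == 11:
        return "U-shaped"
    if sweep >= 39:  # v-tools often carry high numbers such as #41
        return "V/parting tool"
    return "specialized"

if __name__ == "__main__":
    sweep, width = parse_gouge("#7-20mm")
    print(sweep, width, describe_sweep(sweep))
    # -> 7 20.0 curved (deeper as the number rises)
```

The chord-based width means two gouges with the same width but different sweeps remove very different amounts of material, which is why both numbers are needed to specify a blade.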
Technology
Hand tools
null
3609705
https://en.wikipedia.org/wiki/Star%20chart
Star chart
A star chart is a celestial map of the night sky with astronomical objects laid out on a grid system. They are used to identify and locate constellations, stars, nebulae, galaxies, and planets. They have been used for human navigation since time immemorial. Note that a star chart differs from an astronomical catalog, which is a listing or tabulation of astronomical objects for a particular purpose. Tools using a star chart include the astrolabe and planisphere.

History

Prehistory
A variety of archaeological sites and artifacts are thought to be ancient star charts. The oldest known star chart may be a carved ivory mammoth tusk, drawn by early people from Asia who moved into Europe, that was discovered in Germany in 1979. This artifact is 32,500 years old and has a carving that resembles the constellation Orion, although this could not be confirmed and the carving could also be a pregnancy chart.

The German researcher Michael Rappenglueck, of the University of Munich, has suggested that a drawing on the wall of the Lascaux caves in France could be a graphical representation of the Pleiades open cluster of stars. This is dated from 33,000 to 10,000 years ago. He has also suggested that a panel in the same caves, depicting a charging bison, a man with a bird's head, and the head of a bird on top of a piece of wood, may together depict the Summer Triangle, which at the time was a circumpolar formation. Rappenglueck also discovered a drawing of the Northern Crown constellation in the cave of El Castillo, in the north of Spain, made in the same period as the Lascaux chart. Another star chart panel, created more than 21,000 years ago, was found in the La Tête du Lion cave (fr). The bovine in this panel may represent the constellation Taurus, with a pattern representing the Pleiades just above it.

A star chart drawn 5,000 years ago in Kashmir, India, also depicts a supernova for the first time in human history. The Nebra sky disk, a 30 cm wide bronze disk dated to 1600 BC, bears gold symbols generally interpreted as a sun or full moon, a lunar crescent, several stars including the Pleiades cluster, and possibly the Milky Way.

Antiquity
The oldest accurately dated star chart appeared in ancient Egyptian astronomy in 1534 BC. The earliest known star catalogues were compiled by the ancient Babylonian astronomers of Mesopotamia in the late 2nd millennium BC, during the Kassite Period (ca. 1531–1155 BC). The oldest records of Chinese astronomy date to the Warring States period (476–221 BC), but the earliest preserved Chinese star catalogues of the astronomers Shi Shen and Gan De are found in the 2nd-century BC Shiji by the Western Han historian Sima Qian. The oldest Chinese graphical representation of the night sky is a lacquerware box from the 5th-century BC Tomb of Marquis Yi of Zeng, although this depiction shows the positions of the Chinese constellations by name and does not show individual stars.

The Farnese Atlas is a 2nd-century AD Roman copy of a Hellenistic era Greek statue depicting the Titan Atlas holding the celestial sphere on his shoulder. It is the oldest surviving depiction of the ancient Greek constellations, and includes grid circles that provide coordinate positions. Because of precession, the positions of the constellations slowly change over time. By comparing the positions of the 41 constellations against the grid circles, an accurate determination can be made of the epoch when the original observations were performed. Based upon this information, the constellations were catalogued at .
This evidence indicates that the star catalogue of the 2nd-century BC Greek astronomer Hipparchus was used (a simple numerical sketch of this precession-dating idea appears at the end of this article). A Roman era example of a graphical representation of the night sky is the Ptolemaic Egyptian Dendera zodiac, dating from 50 BC. This is a bas relief sculpted on a ceiling at the Dendera Temple complex. It is a planisphere depicting the zodiac in graphical representations. However, individual stars are not plotted.

Medieval
The oldest surviving manuscript star chart is the Dunhuang Star Chart, dated to the Tang dynasty (618–907) and discovered in the Mogao Caves of Dunhuang in Gansu, Western China, along the Silk Road. This is a scroll 210 cm in length and 24.4 cm wide, showing the sky between declinations 40° south and 40° north in twelve panels, plus a thirteenth panel showing the northern circumpolar sky. A total of 1,345 stars are drawn, grouped into 257 asterisms. The date of this chart is uncertain, but is estimated as 705–710 AD.

During the Song dynasty (960–1279), the Chinese astronomer Su Song wrote a book titled Xin Yixiang Fa Yao (New Design for the Armillary Clock) containing five maps of 1,464 stars. This has been dated to 1092. In 1193, the astronomer Huang Shang prepared a planisphere along with explanatory text. It was engraved in stone in 1247, and this chart still exists in the Wen Miao temple in Suzhou.

In Muslim astronomy, the first star chart to be drawn accurately was most likely the illustrations produced by the Persian astronomer Abd al-Rahman al-Sufi in his 964 work titled Book of Fixed Stars. This book was an update of parts VII.5 and VIII.1 of the 2nd-century Almagest star catalogue by Ptolemy. The work of al-Sufi contained illustrations of the constellations and portrayed the brighter stars as dots. The original book did not survive, but a copy from about 1009 is preserved at Oxford University.

Perhaps the oldest European star map was a parchment manuscript titled De Composicione Spere Solide. It was most likely produced in Vienna, Austria, in 1440, and consisted of a two-part map depicting the constellations of the northern celestial hemisphere and the ecliptic. This may have served as a prototype for the oldest European printed star chart, a 1515 set of woodcut charts produced by Albrecht Dürer in Nuremberg, Germany.

Early modern
During the European Age of Discovery, expeditions to the southern hemisphere began to result in the addition of new constellations. These most likely came from the records of two Dutch sailors, Pieter Dirkszoon Keyser and Frederick de Houtman, who in 1595 traveled together to the Dutch East Indies. Their compilations resulted in the 1601 globe of Jodocus Hondius, who added 12 new southern constellations. Several other such maps were produced, including Johann Bayer's Uranometria in 1603. The latter was the first atlas to chart both celestial hemispheres, and it introduced the Bayer designations for identifying the brightest stars using the Greek alphabet. The Uranometria contained 48 maps of Ptolemaic constellations, a plate of the southern constellations, and two plates showing the entire northern and southern hemispheres in stereographic polar projection.

Polish astronomer Johannes Hevelius published his Firmamentum Sobiescianum star atlas posthumously in 1690. It contained 56 large, double-page star maps and improved the accuracy in the position of the southern stars. He introduced 11 more constellations, including Scutum, Lacerta, and Canes Venatici.
Modern
In 1824, Sidney Hall produced a set of star charts called Urania's Mirror. They are illustrations based on Alexander Jamieson's A Celestial Atlas, but with holes punched in them so that they could be held up to a light to reveal depictions of the constellations' stars.
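The precession-based dating described under Antiquity can be illustrated numerically. The following is a minimal, hypothetical Python sketch, not the procedure used in the actual Farnese Atlas analysis: it simply converts an assumed mean shift in ecliptic longitude between charted and modern star positions into an estimated epoch, using the approximate general-precession rate of about 50.29 arcseconds per year. The 29.5° offset in the example is an invented figure chosen only to show the arithmetic.

```python
# Ecliptic longitudes of stars drift by roughly 50.29 arcseconds per year
# due to precession, so a systematic longitude offset between a chart and
# a modern reference implies how long ago the chart was drawn.
PRECESSION_ARCSEC_PER_YEAR = 50.29  # approximate general precession rate

def epoch_from_offset(offset_degrees: float, reference_year: float = 2000.0) -> float:
    """Estimate the observation epoch from the mean longitude offset
    (reference minus chart) in degrees; negative results are years BC
    in astronomical year numbering."""
    years_before_reference = (offset_degrees * 3600.0) / PRECESSION_ARCSEC_PER_YEAR
    return reference_year - years_before_reference

if __name__ == "__main__":
    # Hypothetical: charted positions trail modern ones by ~29.5 degrees.
    print(round(epoch_from_offset(29.5)))
    # -> -112 (astronomical numbering), i.e. roughly the 2nd century BC,
    # the era of Hipparchus.
```

In practice such an analysis must also account for the scatter of individual star positions, which is why published datings of the Farnese Atlas quote an epoch with an uncertainty of several decades.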
Technology
Astronomical technology
null
13001588
https://en.wikipedia.org/wiki/Animal%20consciousness
Animal consciousness
Animal consciousness, or animal awareness, is the quality or state of self-awareness within an animal, or of being aware of an external object or something within itself. In humans, consciousness has been defined as: sentience, awareness, subjectivity, qualia, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind. Despite the difficulty in definition, many philosophers believe there is a broadly shared underlying intuition about what consciousness is.

The topic of animal consciousness is beset with a number of difficulties. It poses the problem of other minds in an especially severe form because animals, lacking the ability to use human language, cannot communicate their experiences. It is also difficult to reason objectively about the question because a denial that an animal is conscious is often taken to imply that they do not feel, their life has no value, and that harming them is not morally wrong. For example, the 17th-century French philosopher René Descartes is sometimes criticised for providing a rationale for the mistreatment of animals because he argued that only humans are conscious.

Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. The American philosopher Thomas Nagel spelled out this point of view in an influential essay titled What Is it Like to Be a Bat? He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience their world in the way they do themselves. Other thinkers, such as the cognitive scientist Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Walter Veit's 2023 book A Philosophy for the Science of Animal Consciousness reviews a substantial portion of the evidence.

Animal consciousness has been actively researched for over one hundred years. In 1927, the American functional psychologist Harvey Carr argued that any valid measure or understanding of awareness in animals depends on "an accurate and complete knowledge of its essential conditions in man". A more recent review concluded in 1985 that "the best approach is to use experiment (especially psychophysics) and observation to trace the dawning and ontogeny of self-consciousness, perception, communication, intention, beliefs, and reflection in normal human fetuses, infants, and children". In 2012, a group of neuroscientists signed the Cambridge Declaration on Consciousness, which "unequivocally" asserted that "humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neural substrates."

Philosophical background
The mind–body problem in philosophy examines the relationship between mind and matter, and in particular the relationship between consciousness and the brain. A variety of approaches have been proposed. Most are either dualist or monist.
Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one kind of stuff, and that mind and matter are both aspects of it. The problem was addressed by pre-Aristotelian philosophers, and was famously addressed by René Descartes in the 17th century, resulting in Cartesian dualism. Descartes believed that only humans, and not other animals, have this non-physical mind. The rejection of the mind–body dichotomy is found in French structuralism, and is a position that generally characterized post-war French philosophy. The absence of an empirically identifiable meeting point between the non-physical mind and its physical extension has proven problematic to dualism, and many modern philosophers of mind maintain that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology, and the neurosciences.

Epiphenomenalism
Epiphenomenalism is the theory in philosophy of mind that mental phenomena are caused by physical processes in the brain, or that both are effects of a common cause, as opposed to mental phenomena driving the physical mechanics of the brain. The impression that thoughts, feelings, or sensations cause physical effects is therefore to be understood as illusory to some extent. For example, it is not the feeling of fear that produces an increase in heart beat; both are symptomatic of a common physiological origin, possibly in response to a legitimate external threat. The history of epiphenomenalism goes back to the post-Cartesian attempt to solve the riddle of Cartesian dualism, i.e., of how mind and body could interact. La Mettrie, Leibniz and Spinoza all, in their own ways, began this way of thinking. The idea that, even if the animal were conscious, nothing would be added to the production of behavior, even in animals of the human type, was first voiced by La Mettrie (1745), and then by Cabanis (1802), and was further explicated by Hodgson (1870) and Huxley (1874). Huxley (1874) likened mental phenomena to the whistle on a steam locomotive.

However, epiphenomenalism flourished primarily as it found a niche within methodological or scientific behaviorism. In the early 1900s, scientific behaviorists such as Ivan Pavlov, John B. Watson, and B. F. Skinner began the attempt to uncover laws describing the relationship between stimuli and responses, without reference to inner mental phenomena. Instead of adopting a form of eliminativism or mental fictionalism, positions that deny that inner mental phenomena exist, a behaviorist was able to adopt epiphenomenalism in order to allow for the existence of mind. However, by the 1960s, scientific behaviorism met substantial difficulties and eventually gave way to the cognitive revolution. Participants in that revolution, such as Jerry Fodor, rejected epiphenomenalism and insisted upon the efficacy of the mind. Fodor even speaks of "epiphobia"—fear that one is becoming an epiphenomenalist.

Thomas Henry Huxley defends, in an essay titled On the Hypothesis that Animals are Automata, and its History, an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this, William James objects in his essay Are We Automata?
by stating an evolutionary argument for mind-brain interaction: if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.

Animal ethics
Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether animals experience pain, and veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. In his interactions with scientists and other veterinarians, Rollin asserts that he was regularly asked to prove animals are conscious and to provide scientifically acceptable grounds for claiming they feel pain. The denial of animal consciousness by scientists has been described as mentophobia by Donald Griffin. Academic reviews of the topic are equivocal, noting that the argument that animals have at least simple conscious thoughts and feelings has strong support, but some critics continue to question how reliably animal mental states can be determined. A refereed journal, Animal Sentience, launched in 2015 by the Institute of Science and Policy of The Humane Society of the United States, is devoted to research on this and related topics.

Defining consciousness
Consciousness is an elusive concept that presents many difficulties when attempts are made to define it. Its study has progressively become an interdisciplinary challenge for numerous researchers, including ethologists, neurologists, cognitive neuroscientists, philosophers, psychologists and psychiatrists. In 1976, Richard Dawkins wrote, "The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology." In 2004, eight neuroscientists felt it was still too soon for a definition. They wrote an apology in Human Brain Function: "We have no idea how consciousness emerges from the physical activity of the brain and we do not know whether consciousness can emerge from non-biological systems, such as computers... At this point the reader will expect to find a careful and precise definition of consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. Currently we all use the term consciousness in many different and often ambiguous ways. Precise definitions of different aspects of consciousness will emerge ... but to make precise definitions at this stage is premature."

Consciousness is sometimes defined as the quality or state of being aware of an external object or something within oneself. It has been defined somewhat vaguely as: subjectivity, awareness, sentience, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind. Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is.
Max Velmans and Susan Schneider wrote in The Blackwell Companion to Consciousness: "Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."

Related terms, also often used in vague or ambiguous ways, are:
Awareness: the state or ability to perceive, to feel, or to be conscious of events, objects, or sensory patterns. In this level of consciousness, sense data can be confirmed by an observer without necessarily implying understanding. More broadly, it is the state or quality of being aware of something. In biological psychology, awareness is defined as a human's or an animal's perception and cognitive reaction to a condition or event.
Self-awareness: the capacity for introspection and the ability to reconcile oneself as an individual separate from the environment and other individuals.
Self-consciousness: an acute sense of self-awareness. It is a preoccupation with oneself, as opposed to the philosophical state of self-awareness, which is the awareness that one exists as an individual being; although some writers use both terms interchangeably or synonymously.
Sentience: the ability to be aware (feel, perceive, or be conscious) of one's surroundings or to have subjective experiences. Sentience is a minimalistic way of defining consciousness, which is otherwise commonly used to collectively describe sentience plus other characteristics of the mind.
Sapience: often defined as wisdom, or the ability of an organism or entity to act with appropriate judgment; a mental faculty which is a component of intelligence, or alternatively may be considered an additional faculty, apart from intelligence, with its own properties.
Qualia: individual instances of subjective, conscious experience.

Sentience (the ability to feel, perceive, or to experience subjectivity) is not the same as self-awareness (being aware of oneself as an individual). The mirror test is sometimes considered to be an operational test for self-awareness, and the handful of animals that have passed it are often considered to be self-aware. It remains debatable whether recognition of one's mirror image can be properly construed to imply full self-awareness, particularly given that robots are being constructed which appear to pass the test.

Much has been learned in neuroscience about correlations between brain activity and subjective, conscious experiences, and many suggest that neuroscience will ultimately explain consciousness; "...consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells...". However, this view has been criticized because consciousness has yet to be shown to be a process, and the so-called "hard problem" of relating consciousness directly to brain activity remains elusive.

Scientific approaches
Since Descartes's proposal of dualism, it became the general consensus that the mind was a matter for philosophy and that science was not able to penetrate the issue of consciousness – that consciousness was outside of space and time. However, in recent decades many scholars have begun to move toward a science of consciousness. Antonio Damasio and Gerald Edelman are two neuroscientists who have led the move to neural correlates of the self and of consciousness.
Damasio has demonstrated that emotions and their biological foundation play a critical role in high-level cognition, and Edelman has created a framework for analyzing consciousness through a scientific outlook. The current problem consciousness researchers face involves explaining how and why consciousness arises from neural computation. In his research on this problem, Edelman has developed a theory of consciousness, in which he has coined the terms primary consciousness and secondary consciousness.

Eugene Linden, author of The Parrot's Lament, suggests there are many examples of animal behavior and intelligence that surpass what people would suppose to be the boundary of animal consciousness. Linden contends that in many of these documented examples, a variety of animal species exhibit behavior that can only be attributed to emotion, and to a level of consciousness that we would normally ascribe only to our own species. Philosopher Daniel Dennett counters:

Consciousness in mammals (including humans) is an aspect of the mind generally thought to comprise qualities such as subjectivity, sentience, and the ability to perceive the relationship between oneself and one's environment. It is a subject of much research in philosophy of mind, psychology, neuroscience, and cognitive science. Some philosophers divide consciousness into phenomenal consciousness, which is subjective experience itself, and access consciousness, which refers to the global availability of information to processing systems in the brain. Phenomenal consciousness has many different experienced qualities, often referred to as qualia. Phenomenal consciousness is usually consciousness of something or about something, a property known as intentionality in philosophy of mind. In humans, there are three common methods of studying consciousness: verbal reporting, behavioural demonstrations, and neural correlation with conscious activity, though these can only be generalized to non-human taxa with varying degrees of difficulty. In a study conducted in rhesus monkeys, Ben-Haim and his team used a process dissociation approach that predicted opposite behavioral outcomes for the two modes of perception. They found that monkeys displayed exactly the same opposite behavioral outcomes as humans when they were aware or unaware of the stimuli presented.

Mirror test
The sense in which animals (or human infants) can be said to have consciousness or a self-concept has been hotly debated; it is often referred to as the debate over animal minds. The best-known research technique in this area is the mirror test devised by Gordon G. Gallup, in which the skin of an animal (or human infant) is marked, while it is asleep or sedated, with a mark that cannot be seen directly but is visible in a mirror. The animal is then allowed to see its reflection in a mirror; if the animal spontaneously directs grooming behaviour towards the mark, that is taken as an indication that it is aware of itself. Over the past 30 years, many studies have found evidence that animals recognise themselves in mirrors. Self-awareness by this criterion has been reported for:
Land mammals: apes (chimpanzees, bonobos, orangutans and gorillas) and elephants.
Cetaceans: bottlenose dolphins, killer whales and possibly false killer whales.
Birds: magpies, and pigeons (which can pass the mirror test after training in the prerequisite behaviors).
Until recently, it was thought that self-recognition was absent in animals without a neocortex, and was restricted to mammals with large brains and well-developed social cognition. However, in 2008, a study of self-recognition in corvids reported significant results for magpies. Mammals and birds inherited the same brain components from their last common ancestor nearly 300 million years ago, and have since independently evolved and formed significantly different brain types. The results of the mirror and mark tests showed that neocortex-less magpies are capable of understanding that a mirror image belongs to their own body. The findings show that magpies respond in the mirror and mark tests in a manner similar to apes, dolphins and elephants. Magpies were chosen for study based on their empathy and lifestyle, possible precursors to the ability to develop self-awareness. For chimpanzees, the occurrence is about 75% in young adults and considerably less in young and old individuals. For monkeys, non-primate mammals, and a number of bird species, exploration of the mirror and social displays were observed, and hints at mirror-induced self-directed behavior have been obtained. According to a 2019 study, cleaner wrasses became the first fish ever observed to pass the mirror test. However, the test's inventor, Gordon Gallup, has said that the fish were most likely trying to scrape off a perceived parasite on another fish and did not demonstrate self-recognition. The authors of the study responded that because the fish checked themselves in the mirror before and after the scraping, the fish had self-awareness and recognized that their reflections belonged to their own bodies. The mirror test has attracted controversy among some researchers because it is entirely focused on vision, the primary sense in humans, while other species rely more heavily on other senses, such as the olfactory sense in dogs. A study in 2015 showed that the "sniff test of self-recognition (STSR)" provides evidence of self-awareness in dogs. Language Another approach to determine whether a non-human animal is conscious derives from passive speech research with a macaw (see Arielle). Some researchers propose that by passively listening to an animal's voluntary speech, it is possible to learn about the thoughts of another creature and to determine that the speaker is conscious. This type of research was originally used by Weir (1962) to investigate a child's crib speech and in investigations of early speech in children by Greenfield and others (1976). Zipf's law might be used to indicate whether a given dataset of animal communication reflects an intelligent natural language, and some researchers have used this approach to study bottlenose dolphin communication (a sketch of such an analysis follows this passage). Pain or suffering Further arguments revolve around the ability of animals to feel pain or suffering. Suffering implies consciousness. If animals can be shown to suffer in a way similar or identical to humans, many of the arguments against human suffering could then, presumably, be extended to animals. Others have argued that pain can be demonstrated by adverse reactions to negative stimuli that are non-purposeful or even maladaptive. One such reaction is transmarginal inhibition, a phenomenon observed in humans and some animals akin to mental breakdown.
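To make the Zipf's-law approach above concrete, the following is a minimal sketch, in Python, of the rank-frequency analysis it implies. The toy signal inventory and the helper name zipf_slope are illustrative assumptions, not details from the dolphin studies; real analyses use catalogued whistle types and more careful statistics.

# In many natural languages the frequency of the r-th most common signal
# falls roughly as 1/r, so a log-log plot of rank against frequency has a
# slope near -1. A fitted slope far from -1 suggests the inventory is not
# distributed like a natural language.
from collections import Counter
import math

def zipf_slope(signals):
    """Least-squares slope of log(frequency) versus log(rank)."""
    freqs = sorted(Counter(signals).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Toy corpus: six signal types with frequencies 32, 16, 8, 4, 2, 1.
corpus = ["a"] * 32 + ["b"] * 16 + ["c"] * 8 + ["d"] * 4 + ["e"] * 2 + ["f"]
print(zipf_slope(corpus))  # about -2.1 here; a Zipfian corpus would give ~-1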
Carl Sagan, the American cosmologist, points to reasons why humans have had a tendency to deny that animals can suffer: John Webster, a professor of animal husbandry at Bristol, argues: However, there is no agreement on where the line should be drawn between organisms that can feel pain and those that cannot. Justin Leiber, a philosophy professor at Oxford University, writes that: There are also some who reject the argument entirely, arguing that although suffering animals feel anguish, a suffering plant also struggles to stay alive (albeit in a less visible way); in fact, no living organism "wants" to die for another organism's sustenance. In an article written for The New York Times, Carol Kaesuk Yoon argues that: Cognitive bias and emotion Cognitive bias in animals is a pattern of deviation in judgment, whereby inferences about other animals and situations may be drawn in an illogical fashion. Individuals create their own "subjective social reality" from their perception of the input. A much-used indicator refers to the question "Is the glass half empty or half full?", taken as a measure of optimism or pessimism. Cognitive biases have been shown in a wide range of species, including rats, dogs, rhesus macaques, sheep, chicks, starlings and honeybees. The neuroscientist Joseph LeDoux advocates avoiding terms derived from human subjective experience when discussing brain functions in animals. For example, the common practice of calling brain circuits that detect and respond to threats "fear circuits" implies that these circuits are responsible for feelings of fear. LeDoux argues that Pavlovian fear conditioning should be renamed Pavlovian threat conditioning to avoid the implication that "fear" is being acquired in rats or humans. Key to his theoretical change is the notion of survival functions mediated by survival circuits, the purpose of which is to keep organisms alive rather than to make emotions. For example, defensive survival circuits exist to detect and respond to threats. While all organisms can do this, only organisms that can be conscious of their own brain's activities can feel fear. Fear is a conscious experience and occurs in the same way as any other kind of conscious experience: via cortical circuits that allow attention to certain forms of brain activity. LeDoux argues that the only differences between an emotional and a non-emotional state of consciousness are the underlying neural ingredients that contribute to the state. Neuroscience Neuroscience is the scientific study of the nervous system. It is a highly active interdisciplinary science that collaborates with many other fields. The scope of neuroscience has broadened recently to include molecular, cellular, developmental, structural, functional, evolutionary, computational, and medical aspects of the nervous system. Theoretical studies of neural networks are being complemented with techniques for imaging sensory and motor tasks in the brain. According to a 2008 paper, neuroscience explanations of psychological phenomena currently have a "seductive allure", and "seem to generate more public interest" than explanations which do not contain neuroscientific information. The authors found that subjects who were not neuroscience experts "judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without". Neural correlates The neural correlates of consciousness constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept.
Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena. The set should be minimal because, if the brain is sufficient to give rise to any given conscious experience, the question is which of its components are necessary to produce it. Visual sense and representation were reviewed in 1998 by Francis Crick and Christof Koch. They concluded that sensory neuroscience can be used as a bottom-up approach to studying consciousness, and suggested experiments to test various hypotheses in this research stream. A feature that distinguishes humans from most animals is that we are not born with an extensive repertoire of behavioral programs that would enable us to survive on our own ("physiological prematurity"). To compensate for this, we have an unmatched ability to learn, i.e., to consciously acquire such programs by imitation or exploration. Once consciously acquired and sufficiently exercised, these programs can become automated to the extent that their execution happens beyond the realms of our awareness. Take, as an example, the fine motor skills exerted in playing a Beethoven piano sonata or the sensorimotor coordination required to ride a motorcycle along a curvy mountain road. Such complex behaviors are possible only because a sufficient number of the subprograms involved can be executed with minimal or even suspended conscious control. In fact, the conscious system may actually interfere somewhat with these automated programs. The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools depends on the simultaneous development of appropriate behavioural assays and model organisms amenable to large-scale genomic analysis and manipulation. A combination of such fine-grained neuronal analysis in animals with ever more sensitive psychophysical and brain imaging techniques in humans, complemented by the development of a robust theoretical predictive framework, will hopefully lead to a rational understanding of consciousness. Neocortex and equivalents The neocortex is a part of the brain of mammals. It consists of the grey matter, or neuronal cell bodies and unmyelinated fibers, surrounding the deeper white matter (myelinated axons) in the cerebrum. The neocortex is smooth in rodents and other small mammals, whereas in primates and other larger mammals it has deep grooves and wrinkles. These folds increase the surface area of the neocortex considerably without taking up too much more volume. Also, neurons within the same wrinkle have more opportunity for connectivity, while neurons in different wrinkles have less opportunity for connectivity, leading to compartmentalization of the cortex. The neocortex is divided into frontal, parietal, occipital, and temporal lobes, which perform different functions. For example, the occipital lobe contains the primary visual cortex, and the temporal lobe contains the primary auditory cortex. Further subdivisions or areas of neocortex are responsible for more specific cognitive processes. The neocortex is the newest part of the cerebral cortex to evolve (hence the prefix "neo"); the other parts of the cerebral cortex are the paleocortex and archicortex, collectively known as the allocortex. In humans, 90% of the cerebral cortex is neocortex. Researchers have argued that consciousness in mammals arises in the neocortex, and therefore by extension that it cannot arise in animals which lack a neocortex.
For example, Rose argued in 2002 that "fishes have nervous systems that mediate effective escape and avoidance responses to noxious stimuli, but, these responses must occur without a concurrent, human-like awareness of pain, suffering or distress, which depend on separately evolved neocortex." Recently that view has been challenged, and many researchers now believe that animal consciousness can arise from homologous subcortical brain networks. For instance, evidence suggests that the pallium in bird brains is functionally equivalent to the mammalian cerebral cortex as a basis of consciousness. Attention Attention is the cognitive process of selectively concentrating on one aspect of the environment while ignoring other things. Attention has also been referred to as the allocation of processing resources. Attention also varies across cultures: voluntary attention develops in specific cultural and institutional contexts through engagement in cultural activities with more competent community members. Most experiments show that one neural correlate of attention is enhanced firing. If a neuron has a certain response to a stimulus when the animal is not attending to the stimulus, then when the animal does attend to the stimulus, the neuron's response will be enhanced even if the physical characteristics of the stimulus remain the same. In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity (a sketch of how such gamma-band activity can be quantified follows this passage). Extended consciousness Extended consciousness is an animal's autobiographical self-perception. It is thought to arise in the brains of animals which have a substantial capacity for memory and reason. It does not necessarily require language. The perception of a historic and future self arises from a stream of information from the immediate environment and from neural structures related to memory. The concept was popularised by Antonio Damasio and is used in biological psychology. Extended consciousness is said to arise in structures in the human brain described as image spaces and dispositional spaces. Image spaces imply areas where sensory impressions of all types are processed, including the focused awareness of the core consciousness. Dispositional spaces include convergence zones, which are networks in the brain where memories are processed and recalled, and where knowledge is merged with immediate experience. Metacognition Metacognition is defined as "cognition about cognition", or "knowing about knowing". It can take many forms; it includes knowledge about when and how to use particular strategies for learning or for problem solving. It has been suggested that metacognition in some animals provides evidence for cognitive self-awareness. There are generally two components of metacognition: knowledge about cognition, and regulation of cognition. Writings on metacognition can be traced back at least as far as De Anima and the Parva Naturalia of the Greek philosopher Aristotle. Metacognologists believe that the ability to consciously think about thinking is unique to sapient species and indeed is one of the definitions of sapience. There is evidence that rhesus monkeys and apes can make accurate judgments about the strengths of their memories of fact and monitor their own uncertainty, while attempts to demonstrate metacognition in birds have been inconclusive.
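As a concrete illustration of the gamma-band measurement mentioned above, here is a minimal sketch assuming a digitized single-channel EEG trace; the 250 Hz sampling rate, the filter order, and the condition names are illustrative assumptions, not details from any particular study.

# Quantify 40-60 Hz ("gamma") activity by bandpass-filtering the trace
# and taking the mean power of the filtered signal.
import numpy as np
from scipy.signal import butter, filtfilt

def gamma_band_power(eeg, fs=250.0):
    """Mean power of the 40-60 Hz component of a 1-D EEG trace."""
    b, a = butter(4, [40.0, 60.0], btype="bandpass", fs=fs)
    return float(np.mean(filtfilt(b, a, eeg) ** 2))

# Toy comparison: a 55 Hz oscillation buried in noise shows more gamma
# power than noise alone, mimicking an attended vs. unattended contrast.
t = np.arange(0, 2, 1 / 250.0)
attended = np.sin(2 * np.pi * 55 * t) + np.random.randn(t.size)
unattended = np.random.randn(t.size)
print(gamma_band_power(attended) > gamma_band_power(unattended))  # usually True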
A 2007 study provided some evidence for metacognition in rats, but further analysis suggested that they may have been following simple operant conditioning principles, or a behavioral economic model. Mirror neurons Mirror neurons are neurons that fire both when an animal acts and when the animal observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primates and other species, including birds. The function of the mirror system is a subject of much speculation. Many researchers in cognitive neuroscience and cognitive psychology consider that this system provides the physiological mechanism for perception-action coupling (see the common coding theory). They argue that mirror neurons may be important for understanding the actions of other people, and for learning new skills by imitation. Some researchers also speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills, while others relate mirror neurons to language abilities. Neuroscientists such as Marco Iacoboni (UCLA) have argued that mirror neuron systems in the human brain help us understand the actions and intentions of other people. In a study published in March 2005, Iacoboni and his colleagues reported that mirror neurons could discern whether another person who was picking up a cup of tea planned to drink from it or clear it from the table. In addition, Iacoboni and a number of other researchers have argued that mirror neurons are the neural basis of the human capacity for emotions such as empathy. Vilayanur S. Ramachandran has speculated that mirror neurons may provide the neurological basis of self-awareness. Evolutionary psychology Consciousness is likely an evolved adaptation, since it meets George Williams' criteria of species universality, complexity, and functionality, and it is a trait that apparently increases fitness. Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has survival value. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness. In his paper "Evolution of consciousness", John Eccles argues that special anatomical and physical adaptations of the mammalian cerebral cortex gave rise to consciousness. In contrast, others have argued that the recursive circuitry underwriting consciousness is much more primitive, having evolved initially in pre-mammalian species because it improves the capacity for interaction with both social and natural environments by providing an energy-saving "neutral" gear in an otherwise energy-expensive motor output machine. Once in place, this recursive circuitry may well have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms, as outlined by Bernard J. Baars. Richard Dawkins suggested that humans evolved consciousness in order to make themselves the subjects of thought. Daniel Povinelli suggests that large, tree-climbing apes evolved consciousness to take into account their own mass when moving safely among tree branches.
Consistent with this hypothesis, Gordon Gallup found that chimpanzees and orangutans, but not monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests. The concept of consciousness can refer to voluntary action, awareness, or wakefulness. However, even voluntary behaviour involves unconscious mechanisms. Many cognitive processes take place in the cognitive unconscious, unavailable to conscious awareness. Some behaviours are conscious when learned but then become unconscious, seemingly automatic. Learning, especially implicitly learning a skill, can take place outside of consciousness. For example, plenty of people know how to turn right when they ride a bike, but very few can accurately explain how they actually do so. Neural Darwinism Neural Darwinism is a large-scale theory of brain function initially proposed in 1978 by the American biologist Gerald Edelman. Edelman distinguishes between what he calls primary and secondary consciousness: Primary consciousness: the ability, found in humans and some animals, to integrate observed events with memory to create an awareness of the present and immediate past of the world around them. This form of consciousness is also sometimes called "sensory consciousness". Put another way, primary consciousness is the presence of various subjective sensory contents of consciousness such as sensations, perceptions, and mental images. For example, primary consciousness includes a person's experience of the blueness of the ocean, a bird's song, and the feeling of pain. Thus, primary consciousness refers to being mentally aware of things in the world in the present without any sense of past and future; it is composed of mental images bound to a time around the measurable present. Secondary consciousness: an individual's accessibility to their history and plans. The concept is also loosely and commonly associated with having awareness of one's own consciousness. The ability allows its possessors to go beyond the limits of the remembered present of primary consciousness. Primary consciousness can be defined as simple awareness that includes perception and emotion. As such, it is ascribed to most animals. By contrast, secondary consciousness depends on and includes such features as self-reflective awareness, abstract thinking, volition and metacognition. Edelman's theory focuses on two nervous system organizations: the brainstem and limbic systems on one side and the thalamus and cerebral cortex on the other. The brainstem and limbic system take care of essential body functioning and survival, while the thalamocortical system receives signals from sensory receptors and sends out signals to voluntary muscles such as those of the arms and legs. The theory asserts that the connection of these two systems during evolution helped animals learn adaptive behaviors. Other scientists have argued against Edelman's theory, instead suggesting that primary consciousness might have emerged with the basic vegetative systems of the brain. That is, the evolutionary origin might have come from sensations and primal emotions arising from sensors and receptors, both internal and surface, signaling that the well-being of the creature was immediately threatened – for example, hunger for air, thirst, hunger, pain, and extreme temperature change. This is based on neurological data showing that thalamic, hippocampal, orbitofrontal, insular, and midbrain sites are key to the consciousness of thirst.
These scientists also point out that the cortex might not be as important to primary consciousness as some neuroscientists have believed. Evidence of this lies in studies showing that systematically disabling parts of the cortex in animals does not remove consciousness, and in a study which found that children born without a cortex are conscious. Instead of cortical mechanisms, these scientists emphasize brainstem mechanisms as essential to consciousness. Still, these scientists concede that higher order consciousness does involve the cortex and complex communication between different areas of the brain. While animals with primary consciousness have long-term memory, they lack explicit narrative and, at best, can only deal with the immediate scene in the remembered present. While they still have an advantage over animals lacking such ability, evolution has brought forth a growing complexity in consciousness, particularly in mammals. Animals with this complexity are said to have secondary consciousness. Secondary consciousness is seen in animals with semantic capabilities, such as the four great apes. It is present in its richest form in the human species, which is unique in possessing complex language made up of syntax and semantics. In considering how the neural mechanisms underlying primary consciousness arose and were maintained during evolution, it is proposed that at some time around the divergence of reptiles into mammals and then into birds, the embryological development of large numbers of new reciprocal connections allowed rich re-entrant activity to take place between the more posterior brain systems carrying out perceptual categorization and the more frontally located systems responsible for value-category memory. The ability of an animal to relate a present complex scene to its own previous history of learning conferred an adaptive evolutionary advantage. At much later evolutionary epochs, further re-entrant circuits appeared that linked semantic and linguistic performance to categorical and conceptual memory systems. This development enabled the emergence of secondary consciousness. Ursula Voss of the Universität Bonn believes that the theory of protoconsciousness may serve as an adequate explanation for self-recognition found in birds, as they would develop secondary consciousness during REM sleep. She added that many types of birds have very sophisticated language systems. Don Kuiken of the University of Alberta finds such research interesting as well: if we continue to study consciousness with animal models (with differing types of consciousness), we may be able to separate the different forms of reflectiveness found in today's world. For the advocates of the idea of a secondary consciousness, self-recognition serves as a critical component and a key defining measure. What is most interesting, then, is the evolutionary appeal that arises with the concept of self-recognition. In non-human species and in children, the mirror test (see above) has been used as an indicator of self-awareness. Declarations on animal consciousness Cambridge Declaration on Consciousness In 2012, a group of neuroscientists attending a conference on "Consciousness in Human and non-Human Animals" at the University of Cambridge in the UK signed the Cambridge Declaration on Consciousness. In the accompanying text they "unequivocally" asserted: "The field of Consciousness research is rapidly evolving.
Abundant new techniques and strategies for human and non-human animal research have been developed. Consequently, more data is becoming readily available, and this calls for a periodic reevaluation of previously held preconceptions in this field. Studies of non-human animals have shown that homologous brain circuits correlated with conscious experience and perception can be selectively facilitated and disrupted to assess whether they are in fact necessary for those experiences. Moreover, in humans, new non-invasive techniques are readily available to survey the correlates of consciousness." "The neural substrates of emotions do not appear to be confined to cortical structures. In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals. Artificial arousal of the same brain regions generates corresponding behavior and feeling states in both humans and non-human animals. Wherever in the brain one evokes instinctual emotional behaviors in non-human animals, many of the ensuing behaviors are consistent with experienced feeling states, including those internal states that are rewarding and punishing. Deep brain stimulation of these systems in humans can also generate similar affective states. Systems associated with affect are concentrated in subcortical regions where neural homologies abound. Young human and non-human animals without neocortices retain these brain-mind functions. Furthermore, neural circuits supporting behavioral/electrophysiological states of attentiveness, sleep and decision making appear to have arisen in evolution as early as the invertebrate radiation, being evident in insects and cephalopod mollusks (e.g., octopus)." "Birds appear to offer, in their behavior, neurophysiology, and neuroanatomy a striking case of parallel evolution of consciousness. Evidence of near human-like levels of consciousness has been most dramatically observed in grey parrots. Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought. Moreover, certain species of birds have been found to exhibit neural sleep patterns similar to those of mammals, including REM sleep and, as was demonstrated in zebra finches, neurophysiological patterns previously thought to require a mammalian neocortex. Magpies in particular have been shown to exhibit striking similarities to humans, great apes, dolphins, and elephants in studies of mirror self-recognition." "In humans, the effect of certain hallucinogens appears to be associated with a disruption in cortical feedforward and feedback processing. Pharmacological interventions in non-human animals with compounds known to affect conscious behavior in humans can lead to similar perturbations in behavior in non-human animals. In humans, there is evidence to suggest that awareness is correlated with cortical activity, which does not exclude possible contributions by subcortical or early cortical processing, as in visual awareness. Evidence that human and non-human animal emotional feelings arise from homologous subcortical brain networks provide compelling evidence for evolutionarily shared primal affective qualia." New York Declaration on Animal Consciousness In 2024, a conference on "The Emerging Science of Animal Consciousness" at New York University produced The New York Declaration on Animal Consciousness. 
This brief declaration, signed by a number of academics, asserts that, as well as "strong scientific support for attributions of conscious experience to other mammals and to birds", there is additional empirical evidence which "indicates at least a realistic possibility of conscious experience in all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans, and insects)." The declaration further asserts that "when there is a realistic possibility of conscious experience in an animal, it is irresponsible to ignore that possibility in decisions affecting that animal". Examples A common image is the scala naturae, the ladder of nature on which animals of different species occupy successively higher rungs, with humans typically at the top. A more useful approach has been to recognize that different animals may have different kinds of cognitive processes, which are better understood in terms of the ways in which they are cognitively adapted to their different ecological niches than by positing any kind of hierarchy. Mammals Dogs Dogs were previously listed as non-self-aware animals. Traditionally, self-consciousness was evaluated via the mirror test. But dogs, like many other animals, are not as visually oriented as humans. A 2015 study claims that the "sniff test of self-recognition" (STSR) provides significant evidence of self-awareness in dogs, and could play a crucial role in showing that this capacity is not a specific feature of only great apes, humans and a few other animals, but depends on the way in which researchers try to verify it. According to the biologist Roberto Cazzolla Gatti (who published the study), "the innovative approach to test the self-awareness with a smell test highlights the need to shift the paradigm of the anthropocentric idea of consciousness to a species-specific perspective". These findings have since been confirmed by a second study. Birds Grey parrots Research with captive grey parrots, especially Irene Pepperberg's work with an individual named Alex, has demonstrated that they possess the ability to associate simple human words with meanings, and to intelligently apply the abstract concepts of shape, colour, number, zero-sense, etc. According to Pepperberg and other scientists, they perform many cognitive tasks at the level of dolphins, chimpanzees, and even human toddlers. Another notable African grey is N'kisi, who in 2004 was said to have a vocabulary of over 950 words which he used in creative ways. For example, when Jane Goodall visited N'kisi in his New York home, he greeted her with "Got a chimp?" because he had seen pictures of her with chimpanzees in Africa. In 2011, research led by Dalila Bovet of Paris West University Nanterre La Défense demonstrated that grey parrots were able to coordinate and collaborate with each other to an extent. They were able to solve problems such as two birds having to pull strings at the same time to obtain food. In another example, one bird stood on a perch to release a food-laden tray, while the other pulled the tray out from the test apparatus. Both would then feed. The birds were observed waiting for their partners to perform the necessary actions so their behaviour could be synchronized. The parrots appeared to express individual preferences as to which of the other test birds they would work with.
Corvids It was thought that self-recognition was restricted to mammals with large brains and highly evolved social cognition, but absent from animals without a neocortex. However, in 2008, an investigation of self-recognition in corvids was conducted to determine the ability of self-recognition in the magpie. Mammals and birds inherited the same brain components from their last common ancestor nearly 300 million years ago, and have since independently evolved and formed significantly different brain types. The results of the mirror test showed that although magpies do not have a neocortex, they are capable of understanding that a mirror image belongs to their own body. The findings show that magpies respond to the mirror test in a manner similar to that of apes, dolphins, killer whales, pigs and elephants. A 2020 study found that carrion crows show a neuronal response that correlates with their perception of a stimulus, which the authors argue to be an empirical marker of (avian) sensory consciousness – the conscious perception of sensory input – in crows, which do not have a cerebral cortex. The study thereby supports the theory that conscious perception does not require a cerebral cortex and that the basic foundations for it – and possibly for human-type consciousness – may have evolved before the last common ancestor more than 320 million years ago, or independently in birds. A related study showed that the neuroarchitecture of the avian pallium is reminiscent of the mammalian cortex. Invertebrates Octopuses are highly intelligent, possibly more so than any other order of invertebrates. The level of their intelligence and learning capability are debated, but maze and problem-solving studies show they have both short- and long-term memory. Octopuses have a highly complex nervous system, only part of which is localized in their brain. Two-thirds of an octopus's neurons are found in the nerve cords of their arms. Octopus arms show a variety of complex reflex actions that persist even when they have no input from the brain. Unlike vertebrates, the complex motor skills of octopuses are not organized in their brain using an internal somatotopic map of their body, instead using a non-somatotopic system unique to large-brained invertebrates. Some octopuses, such as the mimic octopus, move their arms in ways that emulate the shape and movements of other sea creatures. In laboratory studies, octopuses can easily be trained to distinguish between different shapes and patterns. They reportedly use observational learning, although the validity of these findings is contested. Octopuses have also been observed to play: repeatedly releasing bottles or toys into a circular current in their aquariums and then catching them. Octopuses often escape from their aquarium and sometimes enter others. They have boarded fishing boats and opened holds to eat crabs. At least four specimens of the veined octopus (Amphioctopus marginatus) have been witnessed retrieving discarded coconut shells, manipulating them, and then reassembling them to use as shelter. Shamanistic and religious views Shamanistic and other traditional cultures and folk tales speak of animal spirits and the consciousness of animals. In India, Jains consider all the jivas (living organisms, including plants, animals and insects) to be conscious. Researchers Some contributors to relevant research on animal consciousness include: Marc Bekoff Peter Carruthers Antonio Damasio Marian Stamp Dawkins Frans de Waal Shaun Gallagher Gordon G.
Gallup Donald Griffin Nicholas Humphrey Christof Koch Thomas Nagel Irene Pepperberg Bernard Rollin Jeffrey M. Schwartz Jakob von Uexküll
Biology and health sciences
Basics_2
Biology
491658
https://en.wikipedia.org/wiki/Heavy%20equipment
Heavy equipment
Heavy equipment, heavy machinery, earthmovers, construction vehicles, or construction equipment, refers to heavy-duty vehicles specially designed to execute construction tasks, most frequently involving earthwork operations or other large construction tasks. Heavy equipment usually comprises five equipment systems: the implement, traction, structure, power train, and control/information. Heavy equipment has been used since at least the 1st century BC, when the ancient Roman engineer Vitruvius described a crane powered by human or animal labor in De architectura. Heavy equipment functions through the mechanical advantage of a simple machine that multiplies the ratio between input force applied and force exerted, easing and speeding tasks which often could otherwise take hundreds of people and many weeks' labor (see the sketch following this passage). Some such equipment uses hydraulic drives as a primary source of motion. The word plant, in this context, has come to mean any type of industrial equipment, including mobile equipment (e.g. in the same sense as powerplant). However, plant originally meant "structure" or "establishment" – usually in the sense of factory or warehouse premises; as such, it was used in contradistinction to movable machinery, often in the phrase "plant and equipment". History The use of heavy equipment has a long history; the ancient Roman engineer Vitruvius (1st century BCE) gave descriptions of heavy equipment and cranes in ancient Rome in his treatise De architectura. The pile driver was invented around 1500. The first tunnelling shield was patented by Marc Isambard Brunel in 1818. From horses, through steam and diesel, to electric and robotic Until the 19th century and into the early 20th century, heavy machines were drawn by human or animal power. With the advent of portable steam-powered engines, the drawn machine precursors were reconfigured with the new engines, such as the combine harvester. The core tractor design evolved around the new steam power source into a new machine core, the traction engine, which could be configured as a steam tractor or a steamroller. During the 20th century, internal-combustion engines became the major power source of heavy equipment. Kerosene and ethanol engines were used, but today diesel engines are dominant. Mechanical transmission was in many cases replaced by hydraulic machinery. The early 20th century also saw new electric-powered machines such as the forklift. Caterpillar Inc. is a present-day brand from these days, starting out as the Holt Manufacturing Company. The first mass-produced heavy machine was the Fordson tractor in 1917. The first commercial continuous track vehicle was the 1901 Lombard Steam Log Hauler. The use of tracks became popular for tanks during World War I, and later for civilian machinery like the bulldozer. The largest engineering vehicles and mobile land machines are bucket-wheel excavators, built since the 1920s. Until almost the twentieth century, one simple tool constituted the primary earthmoving machine: the hand shovel, with material moved by animal and human power using sleds, barges, and wagons. This tool was the principal method by which material was either sidecast or elevated to load a conveyance, usually a wheelbarrow, or a cart or wagon drawn by a draft animal. In antiquity, an equivalent of the hand shovel or hoe and head basket – and masses of men – were used to move earth to build civil works.
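To illustrate the mechanical advantage mentioned above, here is a minimal sketch in Python of the force multiplication in a hydraulic cylinder pair, the simple machine behind most modern earthmovers. The piston diameters are illustrative, and this is a textbook idealization (Pascal's law) rather than a model of any particular machine.

# By Pascal's law, pressure (force / area) is equal throughout a confined
# fluid, so a small force on a narrow piston yields a large force on a
# wide piston. The trade-off is travel: output distance shrinks by the
# same factor that force grows.
import math

def hydraulic_mechanical_advantage(d_in, d_out):
    """Ratio of output force to input force for two piston diameters."""
    return (math.pi * (d_out / 2) ** 2) / (math.pi * (d_in / 2) ** 2)

# Example: a 2 cm pump piston driving a 20 cm lift cylinder multiplies
# force by (0.20 / 0.02)^2 = 100.
print(hydraulic_mechanical_advantage(0.02, 0.20))  # -> 100.0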
Builders have long used the inclined plane, levers, and pulleys to place solid building materials, but these labor-saving devices did not lend themselves to earthmoving, which required digging, raising, moving, and placing loose materials. The two elements required for mechanized earthmoving, then as now, were an independent power source and off-road mobility, neither of which could be provided by the technology of that time. Container cranes were used from the 1950s onwards, and made containerization possible. Nowadays, such is the importance of this machinery that some transport companies have developed specific equipment to transport heavy construction equipment to and from sites. Most of the major equipment manufacturers, such as Caterpillar, Volvo, Liebherr, and Bobcat, have released or have been developing fully or partially electric-powered heavy equipment. Commercially available models and R&D models were announced in 2019 and 2020. Robotics and autonomy have been a growing focus for heavy equipment manufacturers, with firms beginning research and technology acquisition. A number of companies are currently developing (Caterpillar and Bobcat) or have launched (Built Robotics) commercial solutions to the market. Types These subdivisions, in this order, are the standard heavy equipment categorization. Tractor Bulldozer (dozer, track dozer) Snowcat Snowplow Skidder Tractor (wheel tractor) Track tractor Locomotive Artillery tractor Crawler-transporter Military engineering vehicles Grader Grader Excavator Amphibious excavator Compact excavator Dragline excavator Dredger Bucket-wheel excavator Excavator (digger) Long reach excavator Power shovel Reclaimer Suction excavator Walking excavator Trencher Yarder Backhoe Backhoe Backhoe loader Timber Feller buncher Harvester Forwarder Skidder Power saw Track harvester Wheel forwarder Wheel skidder Pipelayer Pipelayer (sideboom) Scraper Fresno scraper Scraper Wheel tractor-scraper (belly scraper) Mining Construction and mining tractor Construction and mining truck Dumper Dump truck Haul truck Mining equipment Articulated Articulated hauler Compactor Wheel dozer (soil compactor) Soil stabilizer Loader Loader (payloader, front loader, wheel loader, integrated tool carrier) Skip loader (skippy) Track loader Track loader Skid-steer loader Skid-steer loader Material handler Aerial work platform, Lift table Crane Block-setting crane Bulk-handling crane Crane vessel Aerial crane Container crane Gantry crane Overhead crane Electric overhead traveling crane Ring crane Level luffing crane Mobile crane Travel lift Forklift Garbage truck Grapple truck, Knuckleboom loader (trailer mount) Straddle carrier Sidelifter Reach stacker Telescopic handlers Tow truck Paving Asphalt paver Asphalt plant Cold planer Cure rig Paver Pavement milling Pneumatic tire compactor Roller (road roller, roller compactor) Slipform paver Vibratory compactor, Compactor Underground Roadheader Tunnel boring machine Underground mining equipment Hydromatic tool Ballast tamper Attachments Drilling rig Horizontal directional drilling Earth auger Pile driver Post pounder Rotary tiller (rototiller, rotovator) Hydraulic machinery Highway Tractor unit Ballast tractor Pushback tractor Railcar mover Highway 10 yard rear dump Highway bottom dump (stiff), pup (belly train), triple Highway end dump and side dump Highway transfer, Transfer train Concrete mixer Concrete mix truck Concrete mix dozer Lowboy (trailer) Street sweeper Street sweep truck Street sweep dozer Implements and
hydromechanical work tools auger backhoe bale spear broom bulldozer blade clam shell bucket cold plane demolition shears equipment bucket excavator bucket forks grapple hydraulic hammer, hoe ram hydraulics hydraulic tilting bucket (4-in-1) landscape tiller material handling arm mechanical pulverizer, crusher multi processor pavement removal bucket pile driver power take-off (PTO) quick coupler rake ripper rotating grab sheep's foot compactor skeleton bucket snow blower stump grinder stump shear thumb tiltrotator trencher vibratory plate compactor wheel saw Traction: Off-the-road tires and tracks Heavy equipment requires specialized tires for various construction applications. While many types of equipment have continuous tracks applicable to more severe service requirements, tires are used where greater speed or mobility is required. An understanding of what the equipment will be used for during the life of the tires is required for proper selection. Tire selection can have a significant impact on production and unit cost. There are three types of off-the-road tires: transport, for earthmoving machines; work, for slow-moving earthmoving machines; and load-and-carry, for transporting as well as digging. Off-highway tires have six categories of service: C (compactor), E (earthmover), G (grader), L (loader), LS (log-skidder) and ML (mining and logging). Within these service categories are various tread types designed for use on hard-packed surfaces, soft surfaces and rock. Tires are a large expense on any construction project; careful consideration should be given to prevent excessive wear or damage. Heavy equipment operator A heavy equipment operator drives and operates heavy equipment used in engineering and construction projects. Typically only skilled workers may operate heavy equipment, and there is specialized training for learning to use heavy equipment. Much publication about heavy equipment operators focuses on improving safety for such workers. The field of occupational medicine researches and makes recommendations about safety for these and other workers in safety-sensitive positions. Equipment cost Due to the small profit margins on construction projects, it is important to maintain accurate records concerning equipment utilization, repairs and maintenance. The two main categories of equipment costs are ownership cost and operating cost. Ownership cost To be classified as an ownership cost, an expense must be incurred regardless of whether the equipment is used. These costs are as follows: purchase expense salvage value tax savings from depreciation major repairs and overhauls property taxes insurance storage Depreciation can be calculated in several ways; the simplest is the straight-line method, in which the annual depreciation is constant, reducing the equipment value by the same amount each year. Simple equations for this are given in the Peurifoy & Schexnayder text (see the sketch at the end of this passage). Operating cost For an expense to be classified as an operating cost, it must be incurred through use of the equipment. The biggest distinction from a cost standpoint is whether a repair is classified as a major repair or a minor repair. A major repair can change the depreciable equipment value due to an extension in service life, while a minor repair is normal maintenance. How a firm chooses to cost major and minor repairs varies from firm to firm, depending on the costing strategies being used. Some firms will charge only major repairs to the equipment while minor repairs are costed to a project.
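The straight-line depreciation referred to above can be sketched as follows. This is a minimal illustration of the standard method with illustrative names and numbers, not a reproduction of the Peurifoy & Schexnayder equations.

# Straight-line depreciation: the equipment loses the same value each
# year of its service life, ending at its salvage value.

def annual_depreciation(purchase_price, salvage_value, service_life_years):
    """Constant yearly depreciation over the service life."""
    return (purchase_price - salvage_value) / service_life_years

def book_value(purchase_price, salvage_value, service_life_years, year):
    """Equipment value after a given number of years in service."""
    d = annual_depreciation(purchase_price, salvage_value, service_life_years)
    return purchase_price - d * min(year, service_life_years)

# Example: a $250,000 machine with a $40,000 salvage value over 10 years
# depreciates $21,000 per year, so its book value after 4 years is $166,000.
print(annual_depreciation(250_000, 40_000, 10))  # -> 21000.0
print(book_value(250_000, 40_000, 10, 4))        # -> 166000.0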
Another common costing strategy is to cost all repairs to the equipment, with only frequently replaced wear items excluded from the equipment cost. Many firms keep their costing structure closely guarded, as it can impact the bidding strategies of their competition. In a company with multiple semi-independent divisions, the equipment department often wants to classify all repairs as "minor" and charge the work to a job – thereby improving their 'profit' from the equipment. Models Die-cast metal promotional scale models of heavy equipment are often produced for each vehicle to give to prospective customers. These are typically in 1:50 scale. The popular manufacturers of these models are Conrad and NZG in Germany, even for US vehicles. Notable manufacturers The 10 largest heavy equipment manufacturers in 2022 Other manufacturers include: Anhui Heli Atlas Copco BEML Limited Bobcat Company Case Construction Equipment Chelyabinsk Tractor Plant CNH Global Demag Fiat-Allis HEPCO HIAB Hidromek Hyundai Heavy Industries Ingersoll Rand Kubota Kobelco LiuGong MARAIS Navistar International Corporation NCK New Holland Track Marshall Orenstein and Koppel GmbH (O&K) Paccar Poclain Rototilt Shantui ST Kinetics Takeuchi Manufacturing Wacker Neuson Yanmar Zoomlion
Technology
Specific-purpose transportation
null
491851
https://en.wikipedia.org/wiki/Radio%20receiver
Radio receiver
In radio communications, a radio receiver, also known as a receiver, a wireless, or simply a radio, is an electronic device that receives radio waves and converts the information carried by them to a usable form. It is used with an antenna. The antenna intercepts radio waves (electromagnetic waves of radio frequency) and converts them to tiny alternating currents which are applied to the receiver, and the receiver extracts the desired information. The receiver uses electronic filters to separate the desired radio frequency signal from all the other signals picked up by the antenna, an electronic amplifier to increase the power of the signal for further processing, and finally recovers the desired information through demodulation. Radio receivers are essential components of all systems that use radio. The information produced by the receiver may be in the form of sound, video (television), or digital data. A radio receiver may be a separate piece of electronic equipment, or an electronic circuit within another device. The most familiar type of radio receiver for most people is a broadcast radio receiver, which reproduces sound transmitted by radio broadcasting stations, historically the first mass-market radio application. A broadcast receiver is commonly called a "radio". However, radio receivers are very widely used in other areas of modern technology, in televisions, cell phones, wireless modems, radio clocks and other components of communications, remote control, and wireless networking systems. Applications Broadcasting Broadcast audio reception Broadcast television reception Televisions receive a video signal representing a moving image, composed of a sequence of still images, and a synchronized audio signal representing the associated sound. The television channel received by a TV occupies a wider bandwidth than an audio signal, from 600 kHz to 6 MHz. Terrestrial television receiver, broadcast television or just television (TV) - Televisions contain an integral receiver (TV tuner) which receives free broadcast television from local television stations on TV channels in the VHF and UHF bands. Satellite TV receiver - a set-top box which receives subscription direct-broadcast satellite television and displays it on an ordinary television. A rooftop satellite dish receives many channels, all modulated on a Ku band microwave downlink signal from a geostationary direct broadcast satellite above the Earth, and the signal is converted to a lower intermediate frequency and transported to the box through a coaxial cable. The subscriber pays a monthly fee. Voice communications Two-way voice communications A two-way radio is an audio transceiver, a receiver and transmitter in the same device, used for bidirectional person-to-person voice communication. The radio link may be half-duplex, using a single radio channel in which only one radio can transmit at a time, so different users take turns talking, pressing a push-to-talk button on their radio which switches on the transmitter. Or the radio link may be full-duplex, a bidirectional link using two radio channels so both people can talk at the same time, as in a cell phone. Cellphone - a portable telephone that is connected to the telephone network by radio signals exchanged with a local antenna called a cell tower.
Cellphones have highly automated digital receivers working in the UHF and microwave bands that receive the incoming side of the duplex voice channel, as well as a control channel that handles dialing calls and switching the phone between cell towers. They usually also have several other receivers that connect them with other networks: a WiFi modem, a Bluetooth modem, and a GPS receiver. The cell tower has sophisticated multichannel receivers that receive the signals from many cell phones simultaneously. Cordless phone - a landline telephone in which the handset is portable and communicates with the rest of the phone by a short-range duplex radio link, instead of being attached by a cord. Both the handset and the base station have radio receivers operating in the UHF band that receive the short-range bidirectional duplex radio link. Citizens band radio - a two-way half-duplex radio operating in the 27 MHz band that can be used without a license. They are often installed in vehicles and used by truckers and delivery services. Walkie-talkie - a handheld short-range half-duplex two-way radio. Scanner - a receiver that continuously monitors multiple frequencies or radio channels by stepping through the channels repeatedly, listening briefly to each channel for a transmission. When a transmitter is found, the receiver stops at that channel. Scanners are used to monitor emergency police, fire, and ambulance frequencies, as well as other two-way radio frequencies such as citizens band. Scanning capabilities have also become a standard feature in communications receivers, walkie-talkies, and other two-way radios. Communications receiver or shortwave receiver - a general-purpose audio receiver covering the LF, MF, shortwave (HF), and VHF bands. Used mostly with a separate shortwave transmitter for two-way voice communication in communication stations, amateur radio stations, and for shortwave listening. One-way voice communications Wireless microphone receiver - these receive the short-range signal from wireless microphones used onstage by musical artists, public speakers, and television personalities. Baby monitor - a cribside appliance for parents of infants that transmits the baby's sounds to a receiver carried by the parents, so they can monitor the baby while they are in other parts of the house. Many baby monitors now have video cameras to show a picture of the baby. Data communication Wireless (WiFi) modem - an automated short-range digital data transmitter and receiver on a portable wireless device that communicates by microwaves with a nearby access point, a router or gateway, connecting the portable device with a local computer network (WLAN) to exchange data with other devices. Bluetooth modem - a very short range (up to 10 m) 2.4–2.483 GHz data transceiver on a portable wireless device used as a substitute for a wire or cable connection, mainly to exchange files between portable devices and to connect cellphones and music players with wireless earphones. Microwave relay - a long-distance, high-bandwidth point-to-point data transmission link consisting of a dish antenna and transmitter that transmits a beam of microwaves to another dish antenna and receiver. Since the antennas must be in line-of-sight, distances are limited by the visual horizon to 30–40 miles. Microwave links are used for private business data, wide area computer networks (WANs), and by telephone companies to transmit long-distance phone calls and television signals between cities.
Satellite communications - Communication satellites are used for data transmission between widely separated points on Earth. Other satellites are used for search and rescue, remote sensing, weather reporting and scientific research. Radio communication with satellites and spacecraft can involve very long path lengths, from 35,786 km (22,236 mi) for geosynchronous satellites to billions of kilometers for interplanetary spacecraft. This and the limited power available to a spacecraft transmitter mean very sensitive receivers must be used. Satellite transponder - a receiver and transmitter in a communications satellite that receives multiple data channels carrying long-distance telephone calls, television signals, or internet traffic on a microwave uplink signal from a satellite ground station and retransmits the data to another ground station on a different downlink frequency. In a direct broadcast satellite, the transponder broadcasts a stronger signal directly to satellite radio or satellite television receivers in consumers' homes. Satellite ground station receiver - communication satellite ground stations receive data from communications satellites orbiting the Earth. Deep space ground stations such as those of the NASA Deep Space Network receive the weak signals from distant scientific spacecraft on interplanetary exploration missions. These have large dish antennas around 85 ft (25 m) in diameter and extremely sensitive radio receivers similar to radio telescopes. The RF front end of the receiver is often cryogenically cooled to −195.79 °C (−320 °F) by liquid nitrogen to reduce radio noise in the circuit. Remote control - remote control receivers receive digital commands that control a device, which may be as complex as a space vehicle or unmanned aerial vehicle, or as simple as a garage door opener. Remote control systems often also incorporate a telemetry channel to transmit data on the state of the controlled device back to the controller. Radio-controlled models such as model cars, boats, airplanes, and helicopters include multichannel receivers, and a short-range radio system is used in keyless entry systems. Other applications Radiolocation - This is the use of radio waves to determine the location or direction of an object. Radar - a device that transmits a narrow beam of microwaves which reflect from a target back to a receiver, used to locate objects such as aircraft, spacecraft, missiles, ships or land vehicles. The reflected waves from the target are received by a receiver usually connected to the same antenna, indicating the direction to the target. Widely used in aviation, shipping, navigation, weather forecasting, space flight, vehicle collision avoidance systems, and the military. Global navigation satellite system (GNSS) receiver, such as a GPS receiver used with the US Global Positioning System - the most widely used electronic navigation device. An automated digital receiver that receives simultaneous data signals from several satellites in orbit around the Earth. Using extremely precise time signals, it calculates the distance to the satellites, and from this the receiver's location on Earth. GNSS receivers are sold as portable devices, and are also incorporated in cell phones, vehicles and weapons, even artillery shells. VOR receiver - a navigational instrument on an aircraft that uses the VHF signal from VOR navigational beacons between 108 and 117.95 MHz to determine the direction to the beacon very accurately, for air navigation.
Wild animal tracking receiver - a receiver with a directional antenna used to track wild animals which have been tagged with a small VHF transmitter, for wildlife management purposes. Other Telemetry receiver - this receives data signals to monitor conditions of a process. Telemetry is used to monitor missiles and spacecraft in flight, well logging during oil and gas drilling, and unmanned scientific instruments in remote locations. Measuring receiver - a calibrated, laboratory grade radio receiver used to measure the characteristics of radio signals. Often incorporates a spectrum analyzer. Radio telescope - specialized antenna and radio receiver used as a scientific instrument to study weak radio waves from astronomical radio sources in space like stars, nebulas and galaxies in radio astronomy. They are the most sensitive radio receivers that exist, having large parabolic (dish) antennas up to 500 meters in diameter, and extremely sensitive radio circuits. The RF front end of the receiver is often cryogenically cooled by liquid nitrogen to reduce radio noise. Principles A radio receiver is connected to an antenna which converts some of the energy from the incoming radio wave into a tiny radio frequency AC voltage which is applied to the receiver's input. An antenna typically consists of an arrangement of metal conductors. The oscillating electric and magnetic fields of the radio wave push the electrons in the antenna back and forth, creating an oscillating voltage. The antenna may be enclosed inside the receiver's case, as with the ferrite loop antennas of AM radios and the flat inverted F antenna of cell phones; attached to the outside of the receiver, as with whip antennas used on FM radios; or mounted separately and connected to the receiver by a cable, as with rooftop television antennas and satellite dishes. Practical radio receivers perform three basic functions on the signal from the antenna: filtering, amplification, and demodulation. Reception The signal strength of radio waves decreases the farther they travel from the transmitter, so a radio station can only be received within a limited range of its transmitter. The range depends on the power of the transmitter, the sensitivity of the receiver, atmospheric and internal noise, as well as any geographical obstructions such as hills between transmitter and receiver. AM broadcast band radio waves travel as ground waves which follow the contour of the Earth, so AM radio stations can be reliably received at hundreds of miles distance. Due to their higher frequency, FM band radio signals cannot travel far beyond the visual horizon, limiting reception distance to about 40 miles (64 km), and can be blocked by hills between the transmitter and receiver. However, FM radio is less susceptible to interference from radio noise (RFI, sferics, static) and has higher fidelity: better frequency response and less audio distortion than AM. So in countries that still broadcast AM radio, serious music is typically only broadcast by FM stations, and AM stations specialize in radio news, talk radio, and sports radio. Like FM, DAB signals travel by line of sight so reception distances are limited by the visual horizon to about 30–40 miles (48–64 km). Bandpass filtering Radio waves from many transmitters pass through the air simultaneously without interfering with each other and are received by the antenna.
These can be separated in the receiver because they have different frequencies; that is, the radio wave from each transmitter oscillates at a different rate. To separate out the desired radio signal, the bandpass filter allows the frequency of the desired radio transmission to pass through, and blocks signals at all other frequencies. The bandpass filter consists of one or more resonant circuits (tuned circuits). The resonant circuit is connected between the antenna input and ground. When the incoming radio signal is at the resonant frequency, the resonant circuit has high impedance and the radio signal from the desired station is passed on to the following stages of the receiver. At all other frequencies the resonant circuit has low impedance, so signals at these frequencies are conducted to ground. Bandwidth and selectivity: See graphs. The information (modulation) in a radio transmission is contained in two narrow bands of frequencies called sidebands (SB) on either side of the carrier frequency (C), so the filter has to pass a band of frequencies, not just a single frequency. The band of frequencies received by the receiver is called its passband (PB), and the width of the passband in kilohertz is called the bandwidth (BW). The bandwidth of the filter must be wide enough to allow the sidebands through without distortion, but narrow enough to block any interfering transmissions on adjacent frequencies (such as S2 in the diagram). The ability of the receiver to reject unwanted radio stations near in frequency to the desired station is an important parameter called selectivity, determined by the filter. In modern receivers quartz crystal, ceramic resonator, or surface acoustic wave (SAW) filters are often used; these have sharper selectivity than networks of capacitor-inductor tuned circuits. Tuning: To select a particular station the radio is "tuned" to the frequency of the desired transmitter. The radio has a dial or digital display showing the frequency it is tuned to. Tuning is adjusting the frequency of the receiver's passband to the frequency of the desired radio transmitter. Turning the tuning knob changes the resonant frequency of the tuned circuit. When the resonant frequency is equal to the radio transmitter's frequency the tuned circuit oscillates in sympathy, passing the signal on to the rest of the receiver. Amplification The power of the radio waves picked up by a receiving antenna decreases with the square of its distance from the transmitting antenna. Even with the powerful transmitters used in radio broadcasting stations, if the receiver is more than a few miles from the transmitter the power intercepted by the receiver's antenna is very small, perhaps as low as picowatts or femtowatts. To increase the power of the recovered signal, an amplifier circuit uses electric power from batteries or the wall plug to increase the amplitude (voltage or current) of the signal. In most modern receivers, the electronic components which do the actual amplifying are transistors. Receivers usually have several stages of amplification: the radio signal from the bandpass filter is amplified to make it powerful enough to drive the demodulator, then the audio signal from the demodulator is amplified to make it powerful enough to operate the speaker.
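The inverse-square falloff of received power can be quantified with the free-space (Friis) formula. A minimal sketch, assuming idealized isotropic antennas and an illustrative 1 W transmitter at 2.4 GHz (real antennas, terrain, and propagation change the numbers considerably):

```python
# Free-space estimate of received power (Friis equation with idealized
# isotropic antennas): P_r = P_t * (wavelength / (4*pi*d))**2.
# The transmitter power, frequency, and distances are illustrative
# assumptions, not figures from the text.
import math

c = 3e8            # speed of light, m/s
p_t = 1.0          # transmitter power, watts
f = 2.4e9          # frequency, Hz
wavelength = c / f

for d_km in (1, 10, 100):
    d = d_km * 1e3
    p_r = p_t * (wavelength / (4 * math.pi * d)) ** 2
    print(f"{d_km:4d} km: received power = {p_r:.1e} W")
```

With these assumed values the received power falls from about a tenth of a nanowatt at 1 km to roughly a picowatt at 10 km and tens of femtowatts at 100 km, which is the scale of signal the amplifier stages described above must work with.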
The degree of amplification of a radio receiver is measured by a parameter called its sensitivity, which is the minimum signal strength of a station at the antenna, measured in microvolts, necessary to receive the signal clearly, with a certain signal-to-noise ratio. Since it is easy to amplify a signal to any desired degree, the limit to the sensitivity of many modern receivers is not the degree of amplification but random electronic noise present in the circuit, which can drown out a weak radio signal. Demodulation After the radio signal is filtered and amplified, the receiver must extract the information-bearing modulation signal from the modulated radio frequency carrier wave. This is done by a circuit called a demodulator (detector). Each type of modulation requires a different type of demodulator: an AM receiver that receives an amplitude modulated radio signal uses an AM demodulator; an FM receiver that receives a frequency modulated signal uses an FM demodulator; an FSK receiver which receives frequency-shift keying (used to transmit digital data in wireless devices) uses an FSK demodulator. Many other types of modulation are also used for specialized purposes. The modulation signal output by the demodulator is usually amplified to increase its strength, then the information is converted back to a human-usable form by some type of transducer. An audio signal, representing sound, as in a broadcast radio, is converted to sound waves by an earphone or loudspeaker. A video signal, representing moving images, as in a television receiver, is converted to light by a display. Digital data, as in a wireless modem, is applied as input to a computer or microprocessor, which interacts with human users. AM demodulation The easiest type of demodulation to understand is AM demodulation, used in AM radios to recover the audio modulation signal, which represents sound and is converted to sound waves by the radio's speaker. It is accomplished by a circuit called an envelope detector (see circuit), consisting of a diode (D) with a bypass capacitor (C) across its output. See graphs. The amplitude modulated radio signal from the tuned circuit is shown at (A). The rapid oscillations are the radio frequency carrier wave. The audio signal (the sound) is contained in the slow variations (modulation) of the amplitude (size) of the waves. This signal cannot be converted to sound by applying it directly to the speaker, because the audio excursions are the same on both sides of the axis, averaging out to zero, which would result in no net motion of the speaker's diaphragm. (B) When this signal is applied as input VI to the detector, the diode (D) conducts current in one direction but not in the opposite direction, thus allowing through pulses of current on only one side of the signal. In other words, it rectifies the AC current to a pulsing DC current. The resulting voltage VO applied to the load RL no longer averages zero; its peak value is proportional to the audio signal. (C) The bypass capacitor (C) is charged up by the current pulses from the diode, and its voltage follows the peaks of the pulses, the envelope of the audio wave. It performs a smoothing (low pass filtering) function, removing the radio frequency carrier pulses, leaving the low frequency audio signal to pass through the load RL. The audio signal is amplified and applied to earphones or a speaker.
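The envelope detector's rectify-then-smooth action can also be shown numerically. A minimal sketch, assuming an illustrative 20 kHz carrier and 500 Hz audio tone (chosen only to keep the simulation small): the diode is modeled as half-wave rectification and the bypass capacitor as a low-pass filter.

```python
# Numerical sketch of AM envelope detection: half-wave rectification
# (the diode) followed by low-pass filtering (the bypass capacitor).
# Carrier and audio frequencies are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000                    # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)
fc, fa = 20_000, 500            # carrier and audio tone, Hz

audio = 0.5 * np.sin(2 * np.pi * fa * t)
am = (1 + audio) * np.cos(2 * np.pi * fc * t)  # modulated carrier

rectified = np.maximum(am, 0)                  # diode passes one polarity
b, a = butter(4, 2_000 / (fs / 2))             # smooth away the carrier
recovered = filtfilt(b, a, rectified)

# The output follows the envelope (1 + audio) up to a scale factor.
r = np.corrcoef(recovered - recovered.mean(), audio)[0, 1]
print(f"correlation between recovered signal and original audio: {r:.3f}")
```

The recovered waveform tracks the modulation envelope rather than the carrier, which is exactly the smoothing role the bypass capacitor plays in the circuit described above.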
Automatic gain control (AGC) The signal strength (amplitude) of the radio signal from a receiver's antenna varies drastically, by orders of magnitude, depending on how far away the radio transmitter is, how powerful it is, and propagation conditions along the path of the radio waves. The strength of the signal received from a given transmitter varies with time due to changing propagation conditions of the path through which the radio wave passes, such as multipath interference; this is called fading. In an AM receiver, the amplitude of the audio signal from the detector, and the sound volume, is proportional to the amplitude of the radio signal, so fading causes variations in the volume. In addition, as the receiver is tuned between strong and weak stations, the volume of the sound from the speaker would vary drastically. Without an automatic system to handle it, constant adjustment of the volume control would be required. With other types of modulation like FM or FSK the amplitude of the modulation does not vary with the radio signal strength, but in all types the demodulator requires a certain range of signal amplitude to operate properly. Insufficient signal amplitude will cause an increase of noise in the demodulator, while excessive signal amplitude will cause amplifier stages to overload (saturate), causing distortion (clipping) of the signal. Therefore, almost all modern receivers include a feedback control system which monitors the average level of the radio signal at the detector, and adjusts the gain of the amplifiers to give the optimum signal level for demodulation. This is called automatic gain control (AGC). AGC can be compared to the dark adaptation mechanism in the human eye; on entering a dark room the gain of the eye is increased by the iris opening. In its simplest form, an AGC system consists of a rectifier which converts the RF signal to a varying DC level, and a lowpass filter to smooth the variations and produce an average level. This is applied as a control signal to an earlier amplifier stage, to control its gain. In a superheterodyne receiver, AGC is usually applied to the IF amplifier, and there may be a second AGC loop to control the gain of the RF amplifier to prevent it from overloading as well. In certain receiver designs such as modern digital receivers, a related problem is DC offset of the signal. This is corrected by a similar feedback system; a simple loop of this kind is sketched numerically below. Designs Tuned radio frequency (TRF) receiver In the simplest type of radio receiver, called a tuned radio frequency (TRF) receiver, the three functions above are performed consecutively: (1) the mix of radio signals from the antenna is filtered to extract the signal of the desired transmitter; (2) this oscillating voltage is sent through a radio frequency (RF) amplifier to increase its strength to a level sufficient to drive the demodulator; (3) the demodulator recovers the modulation signal (which in broadcast receivers is an audio signal, a voltage oscillating at an audio frequency rate representing the sound waves) from the modulated radio carrier wave; (4) the modulation signal is amplified further in an audio amplifier, then is applied to a loudspeaker or earphone to convert it to sound waves. Although the TRF receiver is used in a few applications, it has practical disadvantages which make it inferior to the superheterodyne receiver below, which is used in most applications.
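The AGC loop described above can be sketched as a simple feedback simulation. A minimal sketch, with an invented slow fade and illustrative loop constants: the average rectified output level is compared with a target, and the gain is nudged toward it on each sample.

```python
# Minimal AGC sketch: rectify and smooth the output level, then adjust
# the gain so the level tracks a target despite a slowly fading input.
# The fade, target, and loop constants are illustrative assumptions.
import numpy as np

n = 6000
fade = 1.0 + 0.8 * np.sin(2 * np.pi * np.arange(n) / 3000)  # slow fading
x = fade * np.cos(2 * np.pi * 0.05 * np.arange(n))          # received signal

target = 0.64          # desired average rectified level (~2/pi for a cosine)
alpha, k = 0.02, 0.01  # level smoothing and loop gain
gain, level = 1.0, target
y = np.empty(n)
for i in range(n):
    y[i] = gain * x[i]
    level = (1 - alpha) * level + alpha * abs(y[i])  # averaged rectifier
    gain += k * (target - level) * gain              # feedback correction

print(f"input amplitude ranged {fade.min():.1f} to {fade.max():.1f}")
print(f"loop settled at level {level:.2f} (target {target})")
```

Even though the input amplitude varies by roughly a factor of nine, the loop holds the detector level near its target, which is the constant-volume behavior described above.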
The drawbacks stem from the fact that in the TRF the filtering, amplification, and demodulation are done at the high frequency of the incoming radio signal. The bandwidth of a filter increases with its center frequency, so as the TRF receiver is tuned to different frequencies its bandwidth varies. Most important, the increasing congestion of the radio spectrum requires that radio channels be spaced very close together in frequency. It is extremely difficult to build filters operating at radio frequencies that have a narrow enough bandwidth to separate closely spaced radio stations. TRF receivers typically must have many cascaded tuning stages to achieve adequate selectivity. The Advantages section below describes how the superheterodyne receiver overcomes these problems. The superheterodyne design The superheterodyne receiver, invented in 1918 by Edwin Armstrong, is the design used in almost all modern receivers except a few specialized applications. In the superheterodyne, the radio frequency signal from the antenna is shifted down to a lower "intermediate frequency" (IF), before it is processed. The incoming radio frequency signal from the antenna is mixed with an unmodulated signal generated by a local oscillator (LO) in the receiver. The mixing is done in a nonlinear circuit called the "mixer". The result at the output of the mixer is a heterodyne or beat frequency at the difference between these two frequencies. The process is similar to the way two musical notes at different frequencies played together produce a beat note. This lower frequency is called the intermediate frequency (IF). The IF signal also has the modulation sidebands that carry the information that was present in the original RF signal. The IF signal passes through filter and amplifier stages, then is demodulated in a detector, recovering the original modulation. The receiver is easy to tune; to receive a different frequency it is only necessary to change the local oscillator frequency. The stages of the receiver after the mixer operate at the fixed intermediate frequency (IF) so the IF bandpass filter does not have to be adjusted to different frequencies. The fixed frequency allows modern receivers to use sophisticated quartz crystal, ceramic resonator, or surface acoustic wave (SAW) IF filters that have very high Q factors, to improve selectivity. The RF filter on the front end of the receiver is needed to prevent interference from any radio signals at the image frequency. Without an input filter the receiver can receive incoming RF signals at two different frequencies, the desired frequency and the image frequency. The receiver can be designed to receive on either of these two frequencies; if the receiver is designed to receive on one, any other radio station or radio noise on the other frequency may pass through and interfere with the desired signal. A single tunable RF filter stage rejects the image frequency; since the image is relatively far from the desired frequency, a simple filter provides adequate rejection. Rejection of interfering signals much closer in frequency to the desired signal is handled by the multiple sharply-tuned stages of the intermediate frequency amplifiers, which do not need to change their tuning. The RF filter does not need great selectivity, but as the receiver is tuned to different frequencies it must "track" in tandem with the local oscillator. The RF filter also serves to limit the bandwidth applied to the RF amplifier, preventing it from being overloaded by strong out-of-band signals.
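The frequency arithmetic behind the mixer and the image problem can be made concrete. A small sketch, using the traditional 455 kHz AM-broadcast intermediate frequency and an illustrative 1000 kHz station; the quality-factor comparison at the end previews the selectivity argument below.

```python
# Superheterodyne frequency arithmetic. The 455 kHz IF is the
# traditional AM-broadcast choice; the station frequency and the
# Q value are illustrative assumptions.
f_if = 455e3             # intermediate frequency, Hz
f_rf = 1000e3            # desired station
f_lo = f_rf + f_if       # high-side local oscillator

f_image = f_lo + f_if    # this frequency also mixes down to the IF
for f in (f_rf, f_image):
    print(f"input {f / 1e3:6.0f} kHz -> |f - f_LO| = {abs(f - f_lo) / 1e3:5.0f} kHz")

# Why filtering at the IF is easier: a resonator of quality factor Q
# has a passband of roughly BW = f / Q, so the same Q gives a much
# narrower filter at the lower intermediate frequency.
Q = 100
print(f"BW of a Q={Q} filter at {f_rf / 1e3:.0f} kHz RF: {f_rf / Q / 1e3:.2f} kHz")
print(f"BW of a Q={Q} filter at {f_if / 1e3:.0f} kHz IF: {f_if / Q / 1e3:.2f} kHz")
```

Both the 1000 kHz station and its 1910 kHz image land on the 455 kHz IF, which is exactly why the front-end RF filter must reject the image before mixing.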
To achieve both good image rejection and selectivity, many modern superhet receivers use two intermediate frequencies; this is called a dual-conversion or double-conversion superheterodyne. The incoming RF signal is first mixed with one local oscillator signal in the first mixer to convert it to a high IF frequency, to allow efficient filtering out of the image frequency, then this first IF is mixed with a second local oscillator signal in a second mixer to convert it to a low IF frequency for good bandpass filtering. Some receivers even use triple-conversion. At the cost of the extra stages, the superheterodyne receiver provides the advantage of greater selectivity than can be achieved with a TRF design. Where very high frequencies are in use, only the initial stage of the receiver needs to operate at the highest frequencies; the remaining stages can provide much of the receiver gain at lower frequencies which may be easier to manage. Tuning is simplified compared to a multi-stage TRF design, and only two stages need to track over the tuning range. The total amplification of the receiver is divided between three amplifiers at different frequencies: the RF, IF, and audio amplifiers. This reduces problems with feedback and parasitic oscillations that are encountered in receivers where most of the amplifier stages operate at the same frequency, as in the TRF receiver. The most important advantage is that better selectivity can be achieved by doing the filtering at the lower intermediate frequency. One of the most important parameters of a receiver is its bandwidth, the band of frequencies it accepts. In order to reject nearby interfering stations or noise, a narrow bandwidth is required. In all known filtering techniques, the bandwidth of the filter increases in proportion with the frequency, so by performing the filtering at the lower intermediate frequency, rather than at the frequency of the original radio signal, a narrower bandwidth can be achieved. Modern FM and television broadcasting, cellphones and other communications services, with their narrow channel widths, would be impossible without the superheterodyne. History Television receive-only
Technology
Broadcasting
null
491868
https://en.wikipedia.org/wiki/Dragonet
Dragonet
Dragonets are small, percomorph, marine fish of the diverse family Callionymidae (from the Greek kallis, "beautiful", and onyma, "name") found mainly in the tropical waters of the western Indo-Pacific. They are benthic organisms, spending most of their time near the sandy bottoms, at a depth of roughly two hundred meters. There exist 139 species of the fish, in nineteen genera. Due to similarities in morphology and behavior, dragonets are sometimes confused with members of the goby family. However, male dragonets can be differentiated from the goby by their very long dorsal fins, and females by their protruding lower jaws. The Draconettidae may be considered a sister family, whose members are very much alike, though rarely seen. Genera The following genera are classified within the Callionymidae: Anaora J. E. Gray, 1835; Bathycallionymus Nakabo, 1982; Callionymus Linnaeus, 1758 (including Calliurichthys); Diplogrammus Gill, 1865 (including Chalinops); Draculo Snyder, 1911; Eleutherochir Bleeker, 1879; Eocallionymus Nakabo, 1982; Foetorepus Whitley, 1931; Neosynchiropus Nalbant, 1979; Paracallionymus Barnard, 1927; Protogrammus Fricke, 1985; Pseudocalliurichthys Nakabo, 1982; Repomucenus Whitley, 1931; Spinicapitichthys Fricke, 1980; Synchiropus Gill, 1859; Tonlesapia Motomura & Mukai, 2006. Description These "little dragons" are generally very colorful and possess cryptic patterns. Their bodies are elongated and scaleless. A large preopercular spine is characteristic of this fish, and has been reported to be venomous in some species. All fins are large, showy and elongated; the first high dorsal fin usually has four spines; in males, the first of these spines may be further adorned with filamentous extensions. Dragonets have flattened, triangular heads with large mouths and eyes; their tail fins are fan-shaped and tapered. The largest species, the longtail dragonet (Callionymus gardineri) reaches a length of . At the other end of the spectrum, the Saint Helena dragonet (Callionymus sanctaehelenae) reaches a length of just . Many species exhibit marked sexual dimorphism: males and females are coloured and patterned differently from each other, and (in addition to the spine filament) males have a much higher dorsal fin. This difference is extreme in the high-finned dragonet (Synchiropus rameus). Reproduction Dragonet spawning occurs during late afternoons, right before the sun sets. The fish's spawning behavior is divided into four distinctive stages: courtship display, pairing, ascending, and the release of eggs and milt. Both male and female dragonets have been observed displaying and courting each other, although the practice is much more frequent in the males. Females only do so when they are ready to spawn and are in need of a mate. Both sexes display by spreading their pectoral and caudal fins, and moving around or by the side of the other sex. Males will sometimes also spread their dorsal fins, repeatedly open and close their mouths, and position themselves on top of the females and rub their abdomens with their bodies. If a female accepts a male for spawning, they form a pair. Occasionally, another male might intrude upon the pair as they are mating and attempt to sneak fertilizations with the female. Such an act would result in aggression by the original male. Prior to spawning, a male and female dragonet pair will ascend approximately 0.7–1.2 meters up a water column from the sand at the bottom of the ocean.
The male assumes a parallel position to the female, touching the female's side with the part of its body near its ventral fin. The pair rises slowly up the water column, moving in a semicircular manner by swimming with their pectoral fins. The ascent occurs in two phases. During the first phase, the dragonet pair moves upward about fifteen centimeters and rests for around five seconds. Then it proceeds with its second rise. During this second phase of the ascent, the male and female flex their bodies and move their genital papillae toward each other. The male releases its ejaculate and the female releases its eggs. The release of eggs occurs singly and continuously for approximately five seconds. The eggs are pelagic, floating freely in the water column. The female releases a high number of eggs during each spawning, and the dragonets do not guard their offspring. The eggs are buoyant, so they intermingle with plankton and get swept away by the ocean current. After the spawning, the dragonet pair parts from each other and swims back down to the ocean floor. Male dragonets are polygynous, and will begin to search for other females to repeat the mating process with. They generally spawn with several different females within one reproductive day. Dragonets are very sexually dimorphic, with the males being much larger and having longer fins than the females. This sexual dimorphism may have evolved in males in response to female mate choice, male-male competition, or both. Competition/aggression Male dragonets form dominance hierarchies and act extremely aggressively towards each other. They are often observed chasing and biting, which occurs primarily when two males are close to a female during courtship and pairing. Fights can be very intense; when one male recognizes another male near its breeding site, it will rush toward it and bite at its rival's mouth. The two may bite at each other and twist their bodies around one another for longer than a minute. As a result of this behavior, male dragonets suffer higher mortality rates than females do after attaining maturation. The highest mortality rates in adult males occur during breeding. Males have evolved larger bodies, as well as longer spines and rays, in order to achieve dominance in reproduction. They have also developed bright colors so as to more effectively compete for female attention. These secondary sex characteristics further reduce the survival potential of male dragonets, as they increase the risk of predation, require greater energy costs, and escalate the risk of suffering injuries. Feeding Feeding by the dragonet occurs throughout the day, including the intervals between courtships and spawning. The fish feeds entirely on benthic sources, primarily copepods, amphipods, and other small invertebrates living on blades of sea grass. Species of dragonets from different locations show variations in specific food preference, attributable to the different availabilities and abundances of food organisms in those places. All of them feed by extending their highly protractible jaws toward their food and drawing it into their mouths, frequently followed by the expulsion of sand. No evidence suggests that dragonets are territorial. Individuals do not defend specific areas of substrate, or any resources that might be present on them, from intrusion by conspecifics or other fish species.
Among Calliurichthys japonicus and Repomucenus huguenini, the two most abundant dragonet species, amphipods are the most plentiful prey during the spring and winter months. The fish also supplement their diets with polychaetes, bivalves, and gastropods in these periods. During the summer, the dragonets feed primarily on ophiuroids and amphipods. In this season, ophiuroids are the most abundant prey. Finally, in the fall, the two species predominantly consume polychaetes, amphipods, and gastropods, with polychaetes contributing the largest share. Locomotion Four types of swimming are observed in the dragonet. The first is burst swimming, the most common of the four, and utilized during foraging. The dragonet uses its pelvic fins to propel its body off a substrate, and then its pectoral fins to guide itself forward. The second is continuous swimming, often utilized by males when approaching a potential mate or retreating during an aggressive encounter with another male. The dragonet uses its pectoral fins to propel its body forward, and its pelvic fins to lift and guide itself. The third type of swimming is rapid swimming, which is observed when the dragonet is attacking or fleeing. The fish primarily uses its caudal fins to achieve a quick speed. Finally, the fourth type is vertical swimming, utilized by the dragonet during spawning when it ascends. The pectoral fins are used to propel the fish's body up the water column. Defense In defense against its predators, the dragonet rapidly buries itself under the sand at the bottom of the ocean so that only its eyes remain visible. Many species of the fish also are capable of producing and secreting foul-tasting and -smelling substances that may ward off any potential predators. Timeline
Biology and health sciences
Acanthomorpha
Animals
491936
https://en.wikipedia.org/wiki/Pilates
Pilates
Pilates is a type of mind-body exercise developed in the early 20th century by German physical trainer Joseph Pilates, after whom it was named. Pilates called his method "Contrology". It is practiced worldwide, especially in developed countries such as Australia, Canada, Germany, South Korea, New Zealand, the United Arab Emirates, the United Kingdom, and the United States. Pilates uses a combination of around 50 repetitive exercises to spur muscle exertion. Each exercise flows from the "five essentials": breath, cervical alignment, rib and scapular stabilization, pelvic mobility, and utilization of the transversus abdominis. Each exercise is typically repeated three to five times. As of 2023, over 12 million people practice Pilates. Pilates developed in the aftermath of the late nineteenth century physical culture of exercising to alleviate ill health. There is, however, only limited evidence to support the use of Pilates to alleviate problems such as lower back pain. While studies have found that regular sessions improve balance and can help muscle conditioning in healthy adults (compared to doing no exercise), it has not been shown to be an effective treatment for any medical condition. History Pilates was developed by Joseph Pilates from Mönchengladbach, Germany. His father was a gymnast and his mother a naturopath. Pilates said that the inspiration for his method came to him during World War I, while he was being held at the Knockaloe internment camp in the Isle of Man. Pilates spent four years there, working on his fellow internees and developing his method, a system of exercises intended to strengthen the human mind and body, believing that mental and physical health were interrelated. In his youth, Pilates had practiced many of the physical training regimens available in Germany, and it was from these that he developed his own method. It has clear connections with the physical culture of the late nineteenth century, such as the use of special apparatuses, and claims that the exercises could cure ill health. It is also related to the tradition of "corrective exercise" or "medical gymnastics" as typified by Pehr Henrik Ling. Pilates accompanied his method with a variety of equipment, which he called "apparatus". Each apparatus was designed to help accelerate the process of stretching, strengthening, body alignment and increased core strength started by mat work. The best-known and most popular apparatus today, the Reformer, was originally called the Universal Reformer, aptly named for "universally reforming the body". Eventually Pilates designed other apparatus, including the Cadillac, Wunda Chair, High "Electric" Chair, Spine Corrector, Ladder Barrel and Pedi-Pole. He published two books related to his training method: Your Health: A Corrective System of Exercising That Revolutionizes the Entire Field of Physical Education (1934) and Return to Life Through Contrology (1945). During his lifetime, Joseph Pilates directly trained and certified two assistants, Kathy Stanford Grant and Lolita San Miguel. Description A systematic review of Pilates in 2012 examined its literature to form a consensus description of it, and found it could be described as "a mind-body exercise that requires core stability, strength, and flexibility, and attention to muscle control, posture, and breathing". According to The New York Times, Pilates "can be tailored to a spectrum of fitness goals, ages and abilities". Pilates is not a cardiovascular workout, but rather a strength and flexibility workout.
There are various elements that contribute to distinguishing Pilates from other forms of resistance training. For example, Pilates places a heavy emphasis on breathwork and creating a mind-body connection. Joseph Pilates even stated, "Above all, learn how to breathe correctly." Participants consciously use the core and breath for all forms of movement. In his book Return to Life through Contrology, Joseph Pilates presented his method as the art of controlled movements, which should look and feel like a workout (not a therapy) when properly done. If practiced consistently, Pilates improves flexibility, builds strength, and develops control and endurance in the entire body. It puts emphasis on alignment, breathing, developing a strong core, and improving coordination and balance. The core, consisting of the muscles of the abdomen, low back and hips, is often called the "powerhouse" and is thought to be the key to a person's stability. Pilates' system allows for exercises to be modified in difficulty, from beginner to advanced or any other level, and to accommodate the instructor's and practitioner's goals and/or limitations. Their intensity can be increased as the body adapts itself to the exercises. A number of versions of Pilates are taught today; most are based on up to nine principles. Effectiveness In 2015 the Australian Government's Department of Health published a meta study which reviewed the existing literature on 17 alternative therapies, including Pilates, to determine whether any were suitable for being covered by health insurance. The review found that due to the small number and methodologically limited nature of the existing studies, the effectiveness of Pilates was uncertain. Accordingly, in 2017, the Australian government named it a practice that would not qualify for insurance subsidy, saying this step would "ensure taxpayer funds are expended appropriately and not directed to therapies lacking evidence". For the treatment of lower back pain, low-quality evidence suggests that while Pilates is better than doing nothing, it is no more effective than other forms of physical exercise. There is some evidence that regular sessions can help condition the abdominal muscles of healthy people, when compared to doing no exercise. There is no good evidence that it helps improve balance in elderly people. From the limited data available, statistically and clinically significant findings suggest that Pilates has some efficacy as a tool for the rehabilitation of a wide range of conditions. Mat and reformer Pilates Pilates is continuously evolving through the use of modern equipment, but the core of the technique is tied to the movement patterns designed by Joseph Pilates. Pilates can be performed on a mat or on specialized equipment. Pilates often incorporates spring-based resistance machines known as reformers, which consist of a box-like frame, sliding platform, springs, straps/ropes, and pulleys that help support the spine and target different muscle groups. For example, in order to target the upper back, a typical Pilates move on the reformer involves lying face-down on top of an accessory called a long box which is placed on top of the sliding platform. The participant then lifts their head and chest while pulling the straps down toward their hips to slide forward with the moving platform, repeating a few times. The straps can be heavier or lighter depending on the resistance that is controlled by the springs.
With mat Pilates, people sit or lie with their body weight as the main resistance, using gravity to stabilize their core. For example, a common mat Pilates exercise is called the "roll-up", where participants start by sitting on the floor with their legs straight out in front of them and their arms extended over their legs. Participants then slowly, using the breath to control the motion, uncurl their upper bodies backward into a supine position, until they are lying on their backs with their arms extended over their heads. They then curl back up into the starting position as they exhale, repeating this process multiple times. Accessories such as resistance circle rings or resistance bands may be used in both mat and reformer Pilates. Comparison with yoga Modern yoga, like Pilates, is a mind-and-body discipline, though yoga classes are more likely to address spiritual aspects explicitly. Both yoga and Pilates incorporate elements of stretching and breathing. Both are low-impact, low-intensity exercises, but there are key differences. When practicing yoga, individuals hold certain poses for longer periods of time and flow into others; when practicing Pilates, individuals move their arms or legs while in certain positions. With yoga, breath is used for relaxation and to hold poses. With Pilates, breath is used to power the muscles with more energy. Most Pilates exercises start from lying down, while most yoga poses start from standing up. Some poses are similar in the two disciplines, for example, open leg balance closely resembles Navasana (boat pose), roll over is similar to Halasana (plough pose), and swan and push-up are essentially identical to Bhujangasana (cobra pose) and Chaturanga Dandasana (low plank pose). Both disciplines develop strength, flexibility and fitness. Pilates, however, emphasizes core strength, while yoga emphasizes flexibility. Legal status Pilates is not professionally regulated. In October 2000, "Pilates" was ruled a generic term by a U.S. federal court, making it free for unrestricted use. The term is still capitalized in writing due to its origin from the proper name of the method's founder. As a result of the court ruling, the Pilates Method Alliance was formed as a professional association for the Pilates community. Its purpose is to provide an international organization to connect teachers, teacher trainers, studios, and facilities dedicated to preserving and enhancing the legacy of Joseph H. Pilates and his exercise method by establishing standards, encouraging unity, and promoting professionalism.
Biology and health sciences
Physical fitness
Health
491962
https://en.wikipedia.org/wiki/Respiration%20%28physiology%29
Respiration (physiology)
In physiology, respiration is the transport of oxygen from the outside environment to the cells within tissues, and the removal of carbon dioxide in the opposite direction to the environment by a respiratory system. The physiological definition of respiration differs from the biochemical definition, which refers to a metabolic process by which an organism obtains energy (in the form of ATP and NADPH) by oxidizing nutrients and releasing waste products. Although physiologic respiration is necessary to sustain cellular respiration and thus life in animals, the processes are distinct: cellular respiration takes place in individual cells of the organism, while physiologic respiration concerns the diffusion and transport of metabolites between the organism and the external environment. Exchange of gases in the lung occurs by ventilation and perfusion. Ventilation refers to the movement of air into and out of the lungs, and perfusion is the circulation of blood in the pulmonary capillaries. In mammals, physiological respiration involves respiratory cycles of inhaled and exhaled breaths. Inhalation (breathing in) is usually an active movement that brings air into the lungs where the process of gas exchange takes place between the air in the alveoli and the blood in the pulmonary capillaries. Contraction of the diaphragm muscle causes a pressure variation, which is equal to the pressures caused by elastic, resistive and inertial components of the respiratory system. In contrast, exhalation (breathing out) is usually a passive process, though there are many exceptions: when generating functional overpressure (speaking, singing, humming, laughing, blowing, snorting, sneezing, coughing, powerlifting); when exhaling underwater (swimming, diving); at high levels of physiological exertion (running, climbing, throwing) where more rapid gas exchange is necessitated; or in some forms of breath-controlled meditation. Speaking and singing in humans requires sustained breath control that many mammals are not capable of performing. The process of breathing does not fill the alveoli with atmospheric air during each inhalation (about 350 ml per breath); rather, the inhaled air is carefully diluted and thoroughly mixed with a large volume of gas (about 2.5 liters in adult humans) known as the functional residual capacity, which remains in the lungs after each exhalation and whose gaseous composition differs markedly from that of the ambient air. Physiological respiration involves the mechanisms that ensure that the composition of the functional residual capacity is kept constant, and equilibrates with the gases dissolved in the pulmonary capillary blood, and thus throughout the body. Thus, in precise usage, the words breathing and ventilation are hyponyms, not synonyms, of respiration; but this prescription is not consistently followed, even by most health care providers, because the term respiratory rate (RR) is a well-established term in health care, even though it would need to be consistently replaced with ventilation rate if the precise usage were to be followed. During respiration the C-H bonds are broken by oxidation-reduction reactions, and so carbon dioxide and water are also produced. The cellular energy-yielding process is called cellular respiration.
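The dilution described above can be made concrete with the figures given in the text. A small worked sketch, using the roughly 350 ml delivered per breath and the roughly 2.5 liter functional residual capacity:

```python
# Worked example of alveolar gas dilution, using the approximate
# figures from the text: each breath adds ~350 ml of fresh air to the
# ~2.5 litres of gas remaining in the lungs (the functional residual
# capacity), so alveolar composition changes only gradually.
tidal_to_alveoli_ml = 350
frc_ml = 2500

turnover = tidal_to_alveoli_ml / (frc_ml + tidal_to_alveoli_ml)
print(f"fraction of alveolar gas replaced per breath: {turnover:.1%}")
```

Only about an eighth of the alveolar gas is exchanged per breath, which is what keeps its composition stable despite tidal breathing.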
Classifications of respiration There are several ways to classify the physiology of respiration. By species: Aquatic respiration, Buccal pumping, Cutaneous respiration, Intestinal respiration, Respiratory system. By mechanism: Breathing, Gas exchange, Arterial blood gas, Control of respiration, Apnea. By experiments: Huff and puff apparatus, Spirometry, Selected ion flow tube mass spectrometry. By intensive care and emergency medicine: CPR, Mechanical ventilation, Intubation, Iron lung, Intensive care medicine, Liquid breathing, ECMO, Oxygen toxicity, Medical ventilator, Life support, General anaesthesia, Laryngoscope. By other medical topics: Respiratory therapy, Breathing gases, Hyperbaric oxygen therapy, Hypoxia, Gas embolism, Decompression sickness, Barotrauma, Oxygen equivalent, Oxygen toxicity, Nitrogen narcosis, Carbon dioxide poisoning, Carbon monoxide poisoning, HPNS. Additional images
Biology and health sciences
Basics_3
null
491986
https://en.wikipedia.org/wiki/Phaseolus%20vulgaris
Phaseolus vulgaris
Phaseolus vulgaris, the common bean, is a herbaceous annual plant grown worldwide for its edible dry seeds or green, unripe pods. Its leaf is also occasionally used as a vegetable and the straw as fodder. Its botanical classification, along with other Phaseolus species, is as a member of the legume family, Fabaceae. Like most members of this family, common beans acquire the nitrogen they require through an association with rhizobia, which are nitrogen-fixing bacteria. The common bean has a long history of cultivation. All wild members of the species have a climbing habit, but many cultivars are classified either as bush beans or climbing beans, depending on their style of growth. The other major types of commercially grown beans are the runner bean (Phaseolus coccineus) and the broad bean (Vicia faba). Beans are grown on every continent except Antarctica. In 2022, 28 million tonnes of dry common beans were produced worldwide, led by India with 23% of the total. Raw dry beans contain the toxic compound phytohaemagglutinin, which can be deactivated by cooking beans for ten minutes at boiling point (100 °C, 212 °F). The U.S. Food and Drug Administration also recommends an initial soak of at least 5 hours in water, which should then be discarded. Description Bush varieties form erect bushes tall, while pole or running varieties form vines long. All varieties bear alternate, green or purple leaves, which are divided into three oval, smooth-edged leaflets, each long and wide. The white, pink, or purple flowers are about 1 cm long and have 10 stamens. The flowers are self-pollinating, which facilitates the selection of stable cultivars. The flowers give way to pods long and 1–1.5 cm wide. These may be green, yellow, black, or purple, each containing 4–8 beans. Some varieties develop a string along the pod; these are generally cultivated for dry beans, as green stringy beans are not commercially desirable. The beans are smooth, plump, kidney-shaped, up to 1.5 cm long, range widely in color and are often mottled in two or more colors. The beans maintain their germination capacity for up to 5 years. Like most species of Phaseolus, the genome of P. vulgaris has 11 chromosomal pairs (2n = 22). Its genome is one of the smallest in the legume family at 625 Mbp per haploid genome. Raw or undercooked beans contain a toxic protein called phytohaemagglutinin. Taxonomy The common bean, like all species of Phaseolus, is a member of the legume family Fabaceae. In Species Plantarum in 1753, Carl Linnaeus classified the beans known to him into genus Phaseolus and genus Dolichos, naming 11 species of Phaseolus, including 6 cultivated species and 5 "wild" species. The beans cultivated in Europe prior to the Columbian Exchange were of Asian origin and are unrelated to New World Phaseolus species. The Eurasian species were transferred to other genera including Vigna, Vicia and Lablab, so members of the Phaseolus genus are now all from the Americas. Etymology Ancient Greeks used the word φάσηλος (phasēlos) to refer to the beans of Asian origins that were cultivated in Europe at the time. The Romans used both the Latinized phaseolus and their own faba to refer to different pre-Columbian species of beans, presumably using the word faseolus for smaller seeds like those belonging to the genus Vigna such as the black-eyed peas and the word faba for larger seeds, such as the fava beans.
This latter word, faba, was related to the Proto-Germanic bauno, from which the Old English word bean, meaning "bean, pea, legume", is derived. When Phaseolus vulgaris arrived in Europe in the 16th century, this species was yet another seed in a pod, thus there were already words in the European languages describing it. In the Americas, P. vulgaris is also known as ayacotl in Nahuatl (Aztec language), búul in Mayan (Maya language) and purutu in Quechua (Inca language). In Argentina, Bolivia, Chile, Paraguay and Uruguay, the Spanish name 'poroto' is used, being derived from its corresponding Quechua word. Additional names include the Castilian Spanish frijol, the Portuguese feijão, and the Catalan fesol. Distribution Wild P. vulgaris is native to the Americas. It was originally believed that it had been domesticated separately in Mesoamerica and in the southern Andes region some 8,000 years ago, giving the domesticated bean two gene pools. However, recent genetic analyses show that it was domesticated in Mexico first, then split into the Mesoamerican and Andean P. vulgaris gene pools. Beans, squash and maize (corn) are the three Mesoamerican crops that constitute the "Three Sisters", central to indigenous American agriculture. The common bean arrived in Europe as part of the Columbian exchange. Cultivation Good commercial yield in favorable environments under irrigation is 6 to 8 ton/ha fresh and 1.5 to 2 ton/ha dry seed. Cultivars and varieties Archeologists found large-seeded varieties of the domesticated bean in the highlands of Peru, dating to 2300 BC, and spreading to the coastal regions by around 500 BC. Small-seeded varieties were found in sites in Mexico, dating to 300 BC, which then spread north and east of the Mississippi River by 1000 AD. Many well-known bean cultivars and varieties belong to this species, and the list below is in no way exhaustive. Both bush and running (pole) cultivars/varieties exist. The colors and shapes of pods and seeds vary over a wide range. Production In 2022, world production of dry common beans was 28 million tonnes, led by India with 23% of the total. Brazil and Myanmar were secondary producers. Toxicity The toxic compound phytohaemagglutinin, a lectin, is present in many common bean varieties but is especially concentrated in red kidney beans. White kidney beans contain about a third as many toxins as the red variety; broad beans (Vicia faba) contain 5 to 10% as much as red kidney beans. Phytohaemagglutinin can be inactivated by cooking beans for ten minutes at boiling point (100 °C, 212 °F). Cooking at a lower temperature, such as in a slow cooker at 80 °C (176 °F), is insufficient to deactivate all toxins. To safely cook the beans, the U.S. Food and Drug Administration recommends boiling for 30 minutes to ensure they reach a sufficient temperature for long enough to destroy the toxin completely. For dry beans, the FDA also recommends an initial soak of at least 5 hours in water which should then be discarded. Outbreaks of poisoning have been associated with cooking kidney beans in slow cookers. The primary symptoms of phytohaemagglutinin poisoning are nausea, vomiting, and diarrhea. Onset is from one to three hours after consumption of improperly prepared beans, and symptoms typically resolve within a few hours. Consumption of as few as four or five raw, soaked kidney beans can cause symptoms. Canned red kidney beans are safe to use immediately, as they have already been cooked. Beans are high in purines, which are metabolized to uric acid.
Uric acid is not a toxin but may promote the development or exacerbation of gout. However, more recent research has questioned this association, finding that moderate intake of purine-rich foods is not associated with an increased risk of gout. Uses Nutrition Raw green beans are 90% water, 7% carbohydrates, 1% protein, and contain negligible fat. In a reference amount of , raw green beans supply 36 calories, and are a rich source (20% or more of the Daily Value, DV) of vitamin K (41% DV) and a moderate source (10-19% DV) of vitamin C, vitamin B6, and manganese. Dry white common beans, after boiling, are 63% water, 25% carbohydrates, 10% protein, and contain little fat. In a reference amount of , boiled white common beans supply 139 calories and are a rich source of folate and manganese, with moderate amounts of thiamine and several dietary minerals. Dry beans Dry beans will keep indefinitely if stored in a cool, dry place, but as time passes, their nutritive value and flavor degrade, and cooking times lengthen. Dried beans are almost always cooked by boiling, often after being soaked in water for several hours. While the soaking is not strictly necessary, it shortens cooking time and results in more evenly textured beans. In addition, soaking beans removes 5 to 10% of the gas-producing sugars that can cause flatulence for some people. The methods include simple overnight soaking and the power soak method, in which beans are boiled for three minutes and then set aside for 2–4 hours. Before cooking, the soaking water is drained off and discarded. Dry common beans take longer to cook than most pulses: cooking times vary from one to four hours but are substantially reduced with pressure cooking. In Mexico, Central America, and South America, the traditional spice used with beans is epazote, which is also said to aid digestion. In East Asia, a type of seaweed, kombu, is added to beans as they cook for the same purpose. Salt, sugar, and acidic foods such as tomatoes may harden uncooked beans, resulting in seasoned beans at the expense of slightly longer cooking times. Dry beans may also be bought cooked and canned as refried beans, or whole with water, salt, and sometimes sugar. Green beans and wax beans The three commonly known types of green beans are string or snap beans, which may be round or have a flat pod; stringless or French beans, which lack a tough, fibrous string running along the length of the pod; and runner beans, which belong to a separate species, Phaseolus coccineus. Green beans may have a purple rather than green pod, which changes to green when cooked. Wax beans are P. vulgaris beans that have a yellow or white pod. Wax bean cultivars are commonly grown; the plants are often of the bush or dwarf form. As the name implies, snap beans break easily when the pod is bent, giving off a distinct audible snap sound. The pods of snap beans (green, yellow, and purple) are harvested when they are rapidly growing, fleshy, tender (not tough and stringy), and bright in color, and the seeds are small and underdeveloped (8 to 10 days after flowering). Green beans and wax beans are often steamed, boiled, stir-fried, or baked in casseroles. Shelling beans Shell, shelled, or shelling beans are beans removed from their pods before being cooked or dried. Common beans can be used as shell beans, but the term also refers to other species of beans whose pods are not typically eaten, such as lima beans, soybeans, peas, and fava beans. 
Fresh shell beans are nutritionally similar to dry beans but are prepared more like vegetables, often steamed, fried, or made into soups. Popping beans The nuña is an Andean subspecies, P. v. subsp. nunas (formerly P. vulgaris Nuñas group), with round, multicolored seeds that resemble pigeon eggs. When cooked on high heat, the bean explodes, exposing the inner part in the manner of popcorn and other puffed grains. Other uses Bean leaves have been used to trap bedbugs in houses. Microscopic hairs (trichomes) on the bean leaves entrap the insects. Beans have been used as devices in various methods of divination since ancient times. Fortune-telling using beans is called favomancy. P. vulgaris has been found to bio-accumulate zinc, manganese, and iron and have some tolerance to their respective toxicities, suggesting suitability for natural bio-remediation of heavy-metal-contaminated soils. In culture In 1528, Pope Clement VII received some white beans, which thrived. Five years later, he gave a bag of beans as a present to his niece, Catherine, on her wedding to Prince Henri of France, along with the county of the Lauragais, whose county town is Castelnaudary, now synonymous with the white bean dish of cassoulet. Gallery
Biology and health sciences
Fabales
null
492012
https://en.wikipedia.org/wiki/C4%20carbon%20fixation
C4 carbon fixation
C4 carbon fixation or the Hatch–Slack pathway is one of three known photosynthetic processes of carbon fixation in plants. It owes the names to the 1960s discovery by Marshall Davidson Hatch and Charles Roger Slack. C4 fixation is an addition to the ancestral and more common C3 carbon fixation. The main carboxylating enzyme in C3 photosynthesis is called RuBisCO, which catalyses two distinct reactions using either CO2 (carboxylation) or oxygen (oxygenation) as a substrate. RuBisCO oxygenation gives rise to phosphoglycolate, which is toxic and requires the expenditure of energy to recycle through photorespiration. C4 photosynthesis reduces photorespiration by concentrating CO2 around RuBisCO. To enable RuBisCO to work in a cellular environment where there is a lot of carbon dioxide and very little oxygen, C4 leaves generally contain two partially isolated compartments called mesophyll cells and bundle-sheath cells. CO2 is initially fixed in the mesophyll cells in a reaction catalysed by the enzyme PEP carboxylase in which the three-carbon phosphoenolpyruvate (PEP) reacts with CO2 to form the four-carbon oxaloacetic acid (OAA). OAA can then be reduced to malate or transaminated to aspartate. These intermediates diffuse to the bundle sheath cells, where they are decarboxylated, creating a CO2-rich environment around RuBisCO and thereby suppressing photorespiration. The resulting pyruvate (PYR), together with about half of the phosphoglycerate (PGA) produced by RuBisCO, diffuses back to the mesophyll. PGA is then chemically reduced and diffuses back to the bundle sheath to complete the reductive pentose phosphate cycle (RPP). This exchange of metabolites is essential for C4 photosynthesis to work. Additional biochemical steps require more energy in the form of ATP to regenerate PEP, but concentrating CO2 allows high rates of photosynthesis at higher temperatures. Higher CO2 concentration overcomes the reduction of gas solubility with temperature (Henry's law). The CO2-concentrating mechanism also maintains high gradients of CO2 concentration across the stomatal pores. This means that C4 plants generally have lower stomatal conductance, reduced water losses and generally higher water-use efficiency. C4 plants are also more efficient in using nitrogen, since PEP carboxylase is cheaper to make than RuBisCO. However, since the C3 pathway does not require extra energy for the regeneration of PEP, it is more efficient in conditions where photorespiration is limited, typically at low temperatures and in the shade. Discovery The first experiments indicating that some plants do not use C3 carbon fixation but instead produce malate and aspartate in the first step of carbon fixation were done in the 1950s and early 1960s by Hugo Peter Kortschak and Yuri Karpilov. The C4 pathway was elucidated by Marshall Davidson Hatch and Charles Roger Slack, in Australia, in 1966. While Hatch and Slack originally referred to the pathway as the "C4 dicarboxylic acid pathway", it is sometimes called the Hatch–Slack pathway. Anatomy C4 plants often possess a characteristic leaf anatomy called kranz anatomy, from the German word for wreath. Their vascular bundles are surrounded by two rings of cells; the inner ring, called bundle sheath cells, contains starch-rich chloroplasts lacking grana, which differ from those in mesophyll cells present as the outer ring. Hence, the chloroplasts are called dimorphic. The primary function of kranz anatomy is to provide a site in which CO2 can be concentrated around RuBisCO, thereby avoiding photorespiration.
Mesophyll and bundle sheath cells are connected through numerous cytoplasmic sleeves called plasmodesmata, whose permeability at leaf level is called bundle sheath conductance. A layer of suberin is often deposited at the level of the middle lamella (the tangential interface between mesophyll and bundle sheath) in order to reduce the apoplastic diffusion of CO2 out of the bundle sheath (called leakage). The carbon concentration mechanism in C4 plants distinguishes their isotopic signature from that of other photosynthetic organisms. Although most C4 plants exhibit kranz anatomy, there are, however, a few species that operate a limited C4 cycle without any distinct bundle sheath tissue. Suaeda aralocaspica, Bienertia cycloptera, Bienertia sinuspersici and Bienertia kavirense (all chenopods) are terrestrial plants that inhabit dry, salty depressions in the deserts of the Middle East. These plants have been shown to operate single-cell CO2-concentrating mechanisms, which are unique among the known C4 mechanisms. Although the cytology of both genera differs slightly, the basic principle is that fluid-filled vacuoles are employed to divide the cell into two separate areas. Carboxylation enzymes in the cytosol are thereby separated from decarboxylase enzymes and RuBisCO in the chloroplasts, and a diffusive barrier sits between the chloroplasts (which contain RuBisCO) and the cytosol. This enables a bundle-sheath-type area and a mesophyll-type area to be established within a single cell. Although this does allow a limited C4 cycle to operate, it is relatively inefficient, with much leakage of CO2 from around RuBisCO. There is also evidence of inducible C4 photosynthesis by the non-kranz aquatic macrophyte Hydrilla verticillata under warm conditions, although the mechanism by which CO2 leakage from around RuBisCO is minimised is currently uncertain.
Biochemistry
In C3 plants, the first step in the light-independent reactions of photosynthesis is the fixation of CO2 by the enzyme RuBisCO to form 3-phosphoglycerate. However, RuBisCO has a dual carboxylase and oxygenase activity. Oxygenation results in part of the substrate being oxidized rather than carboxylated, resulting in loss of substrate and consumption of energy, in what is known as photorespiration. Oxygenation and carboxylation are competitive, meaning that the rate of the reactions depends on the relative concentrations of oxygen and CO2. In order to reduce the rate of photorespiration, C4 plants increase the concentration of CO2 around RuBisCO. To do so, two partially isolated compartments differentiate within leaves: the mesophyll and the bundle sheath. Instead of direct fixation by RuBisCO, CO2 is initially incorporated into a four-carbon organic acid (either malate or aspartate) in the mesophyll. The organic acids then diffuse through plasmodesmata into the bundle sheath cells, where they are decarboxylated, creating a CO2-rich environment. The chloroplasts of the bundle sheath cells convert this CO2 into carbohydrates by the conventional C3 pathway. There is large variability in the biochemical features of C4 assimilation, which is generally grouped into three subtypes, differentiated by the main enzyme used for decarboxylation (NADP-malic enzyme, NADP-ME; NAD-malic enzyme, NAD-ME; and PEP carboxykinase, PEPCK). Since PEPCK is often recruited atop NADP-ME or NAD-ME, it has been proposed to classify the biochemical variability into two subtypes. For instance, maize and sugarcane use a combination of NADP-ME and PEPCK, millet uses preferentially NAD-ME, and Megathyrsus maximus uses preferentially PEPCK.
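The competitive carboxylation/oxygenation kinetics described above can be made concrete with a toy calculation. The sketch below uses the standard textbook simplification that the carboxylation:oxygenation ratio scales as the enzyme's relative specificity times [CO2]/[O2]; the specificity value and gas concentrations are assumed round numbers chosen for illustration, not measurements.

```python
# Toy model of RuBisCO's two competing reactions. Under the usual
# relative-specificity simplification, the ratio of carboxylation to
# oxygenation is S * [CO2]/[O2]. All numbers are illustrative guesses.

def carboxylation_to_oxygenation(co2_umol, o2_umol, specificity=80.0):
    """Return v_c / v_o under the relative-specificity model."""
    return specificity * co2_umol / o2_umol

O2 = 250.0                 # dissolved O2 (umol/L), assumed equal in both cases
CO2_C3_SITE = 5.0          # CO2 at RuBisCO without a concentrating pump
CO2_BUNDLE_SHEATH = 50.0   # CO2 after the C4 pump concentrates it ~10x

for label, co2 in [("C3-like site", CO2_C3_SITE),
                   ("C4 bundle sheath", CO2_BUNDLE_SHEATH)]:
    ratio = carboxylation_to_oxygenation(co2, O2)
    print(f"{label}: ~{ratio:.0f} carboxylations per oxygenation")
```

With these assumed numbers, a tenfold concentration of CO2 raises the carboxylation:oxygenation ratio tenfold: the pump suppresses photorespiration not by changing RuBisCO itself but by changing the gas mixture the enzyme sees.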
NADP-ME
The first step in the NADP-ME type C4 pathway is the conversion of pyruvate (Pyr) to phosphoenolpyruvate (PEP) by the enzyme pyruvate phosphate dikinase (PPDK). This reaction requires inorganic phosphate and ATP plus pyruvate, producing PEP, AMP, and inorganic pyrophosphate (PPi). The next step is the carboxylation of PEP by the PEP carboxylase enzyme (PEPC), producing oxaloacetate. Both of these steps occur in the mesophyll cells:
pyruvate + Pi + ATP → PEP + AMP + PPi
PEP + CO2 → oxaloacetate
PEPC has a low KM for CO2 and, hence, a high affinity for it; it is not confounded by O2, so it will work even at low concentrations of CO2. The product is usually converted to malate (M), which diffuses to the bundle-sheath cells surrounding a nearby vein. Here, it is decarboxylated by the NADP-malic enzyme (NADP-ME) to produce CO2 and pyruvate. The CO2 is fixed by RuBisCO to produce phosphoglycerate (PGA), while the pyruvate is transported back to the mesophyll cell, together with about half of the phosphoglycerate (PGA). This PGA is chemically reduced in the mesophyll and diffuses back to the bundle sheath, where it enters the conversion phase of the Calvin cycle. For each CO2 molecule exported to the bundle sheath, the malate shuttle transfers two electrons, and therefore reduces the demand for reducing power in the bundle sheath.
NAD-ME
Here, the OAA produced by PEPC is transaminated by aspartate aminotransferase to aspartate (ASP), which is the metabolite that diffuses to the bundle sheath. In the bundle sheath, ASP is transaminated again to OAA, which then undergoes a futile reduction and oxidative decarboxylation to release CO2. The resulting pyruvate is transaminated to alanine, which diffuses to the mesophyll. Alanine is finally transaminated back to pyruvate (PYR), which can be regenerated to PEP by PPDK in the mesophyll chloroplasts. This cycle bypasses the reaction of malate dehydrogenase in the mesophyll and therefore does not transfer reducing equivalents to the bundle sheath.
PEPCK
In this variant, the OAA produced by aspartate aminotransferase in the bundle sheath is decarboxylated to PEP by PEPCK. The fate of PEP is still debated. The simplest explanation is that PEP diffuses back to the mesophyll to serve as a substrate for PEPC. Because PEPCK uses only one ATP molecule, the regeneration of PEP through PEPCK would theoretically increase the photosynthetic efficiency of this subtype; however, this has never been measured. An increase in the relative expression of PEPCK has been observed under low light, and it has been proposed to play a role in facilitating the balancing of energy requirements between mesophyll and bundle sheath.
Metabolite exchange
While in C3 photosynthesis each chloroplast is capable of completing both light reactions and dark reactions, C4 chloroplasts differentiate into two populations, contained in the mesophyll and bundle sheath cells. The division of the photosynthetic work between two types of chloroplasts inevitably results in a prolific exchange of intermediates between them. The fluxes are large and can be up to ten times the rate of gross assimilation. The type of metabolite exchanged and the overall rate will depend on the subtype. To reduce product inhibition of photosynthetic enzymes (for instance PEPC), concentration gradients need to be as low as possible.
This requires increasing the conductance of metabolites between mesophyll and bundle sheath, but doing so would also increase the retro-diffusion of CO2 out of the bundle sheath, resulting in an inherent and inevitable trade-off in the optimisation of the CO2-concentrating mechanism.
Light harvesting and light reactions
To meet the NADPH and ATP demands in the mesophyll and bundle sheath, light needs to be harvested and shared between two distinct electron transfer chains. ATP may be produced in the bundle sheath mainly through cyclic electron flow around Photosystem I, or in the mesophyll mainly through linear electron flow, depending on the light available in the bundle sheath or in the mesophyll. The relative requirement of ATP and NADPH in each type of cell will depend on the photosynthetic subtype. The apportioning of excitation energy between the two cell types will influence the availability of ATP and NADPH in the mesophyll and bundle sheath. For instance, green light is not strongly absorbed by mesophyll cells and can preferentially excite bundle sheath cells, or vice versa for blue light. Because bundle sheaths are surrounded by mesophyll, light harvesting in the mesophyll reduces the light available to reach bundle sheath cells. Also, the bundle sheath size limits the amount of light that can be harvested.
Efficiency
Different formulations of efficiency are possible depending on which outputs and inputs are considered. For instance, average quantum efficiency is the ratio between gross assimilation and either absorbed or incident light intensity. Large variability of measured quantum efficiency is reported in the literature between plants grown in different conditions and classified in different subtypes, but the underpinnings are still unclear. One of the components of quantum efficiency is the efficiency of dark reactions, the biochemical efficiency, which is generally expressed in reciprocal terms as the ATP cost of gross assimilation (ATP/GA). In C3 photosynthesis, ATP/GA depends mainly on the CO2 and O2 concentrations at the carboxylating sites of RuBisCO. When the CO2 concentration is high and the O2 concentration is low, photorespiration is suppressed and C3 assimilation is fast and efficient, with ATP/GA approaching the theoretical minimum of 3. In C4 photosynthesis, the CO2 concentration at the RuBisCO carboxylating sites is mainly the result of the operation of the CO2-concentrating mechanism, which costs circa an additional 2 ATP/GA but makes efficiency relatively insensitive to the external CO2 concentration in a broad range of conditions. Biochemical efficiency depends mainly on the speed of CO2 delivery to the bundle sheath, and will generally decrease under low light, when the PEP carboxylation rate decreases, lowering the CO2/O2 concentration ratio at the carboxylating sites of RuBisCO. The key parameter defining how much efficiency will decrease under low light is bundle sheath conductance. Plants with higher bundle sheath conductance will be facilitated in the exchange of metabolites between the mesophyll and bundle sheath and will be capable of high rates of assimilation under high light. However, they will also have high rates of CO2 retro-diffusion from the bundle sheath (called leakage), which will increase photorespiration and decrease biochemical efficiency under dim light. This represents an inherent and inevitable trade-off in the operation of C4 photosynthesis. C4 plants have an outstanding capacity to attune bundle sheath conductance. Interestingly, bundle sheath conductance is downregulated in plants grown under low light and in plants grown under high light and subsequently transferred to low light, as occurs in crop canopies where older leaves are shaded by new growth.
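As a rough check on the figures just quoted (a back-of-envelope sketch only: it ignores NADPH costs, leakage, and photorespiration in the C3 case):

```latex
% ATP cost of gross assimilation (ATP/GA), using the figures above
(\mathrm{ATP/GA})_{\mathrm{C_3,\,min}} \approx 3
\qquad
(\mathrm{ATP/GA})_{\mathrm{C_4}} \approx
    \underbrace{3}_{\text{Calvin cycle}}
  + \underbrace{2}_{\text{CO$_2$ pump (PEP regeneration)}}
  = 5
```

The extra ~2 ATP per CO2 is the fixed price of the concentrating mechanism; it pays off whenever photorespiration would otherwise cost a C3 leaf more than that, which is consistent with C4 winning at high temperature and light but losing in cool shade.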
Evolution and advantages
C4 plants have a competitive advantage over plants possessing the more common C3 carbon fixation pathway under conditions of drought, high temperatures, and nitrogen or CO2 limitation. When grown in the same environment, at 30 °C, C3 grasses lose approximately 833 molecules of water per CO2 molecule that is fixed, whereas C4 grasses lose only 277 (roughly a threefold difference in water-use efficiency). This increased water-use efficiency of C4 grasses means that soil moisture is conserved, allowing them to grow for longer in arid environments. C4 carbon fixation has evolved on at least 62 independent occasions in 19 different families of plants, making it a prime example of convergent evolution. This convergence may have been facilitated by the fact that many potential evolutionary pathways to a C4 phenotype exist, many of which involve initial evolutionary steps not directly related to photosynthesis. C4 plants arose during the Oligocene (precisely when is difficult to determine) and were becoming ecologically significant in the early Miocene. C4 metabolism in grasses originated when their habitat migrated from the shady forest undercanopy to more open environments, where the high sunlight gave it an advantage over the C3 pathway. Drought was not necessary for its innovation; rather, the increased parsimony in water use was a byproduct of the pathway and allowed C4 plants to more readily colonize arid environments. Today, C4 plants represent about 5% of Earth's plant biomass and 3% of its known plant species. Despite this scarcity, they account for about 23% of terrestrial carbon fixation. Increasing the proportion of C4 plants on Earth could assist biosequestration of CO2 and represent an important climate change avoidance strategy. Present-day C4 plants are concentrated in the tropics and subtropics (below latitudes of 45 degrees), where the high air temperature increases rates of photorespiration in C3 plants.
Plants that use C4 carbon fixation
About 8,100 plant species use C4 carbon fixation, which represents about 3% of all terrestrial species of plants. All these 8,100 species are angiosperms. C4 carbon fixation is more common in monocots than in dicots, with 40% of monocots using the C4 pathway, compared with only 4.5% of dicots. Despite this, only three families of monocots use C4 carbon fixation, compared to 15 dicot families. Of the monocot clades containing C4 plants, the grass (Poaceae) species use the C4 photosynthetic pathway most: 46% of grasses are C4, and together they account for 61% of C4 species. C4 has arisen independently in the grass family some twenty or more times, in various subfamilies, tribes, and genera, including the Andropogoneae tribe, which contains the food crops maize, sugar cane, and sorghum. Various kinds of millet are also C4. Of the dicot clades containing C4 species, the order Caryophyllales contains the most C4 species. Of the families in the Caryophyllales, the Chenopodiaceae use C4 carbon fixation the most, with 550 out of 1,400 species using it. About 250 of the 1,000 species of the related Amaranthaceae also use C4. Members of the sedge family Cyperaceae, and members of numerous families of eudicots – including Asteraceae (the daisy family), Brassicaceae (the cabbage family), and Euphorbiaceae (the spurge family) – also use C4.
No large trees (above 15 m in height) use C4; however, a number of small trees or shrubs smaller than 10 m exist which do: six species of Euphorbiaceae, all native to Hawaii, and two species of Amaranthaceae growing in the deserts of the Middle East and Asia.
Converting C3 plants to C4
Given the advantages of C4, a group of scientists from institutions around the world are working on the C4 Rice Project to produce a strain of rice, naturally a C3 plant, that uses the C4 pathway, by studying the plants maize and Brachypodium. As rice is the world's most important human food—it is the staple food for more than half the planet—having rice that is more efficient at converting sunlight into grain could have significant global benefits towards improving food security. The team claims C4 rice could produce up to 50% more grain—and be able to do it with less water and nutrients. The researchers have already identified genes needed for C4 photosynthesis in rice and are now looking towards developing a prototype C4 rice plant. In 2012, the Government of the United Kingdom, along with the Bill & Melinda Gates Foundation, provided US$14 million over three years towards the C4 Rice Project at the International Rice Research Institute. In 2019, the Bill & Melinda Gates Foundation granted another US$15 million to the Oxford-University-led C4 Rice Project. The goal of the 5-year project is to have experimental field plots up and running in Taiwan by 2024. C2 photosynthesis, an intermediate step between C3 and kranz-type C4, may be preferred over C4 for rice conversion. The simpler C2 system is less optimized for high-light and high-temperature conditions than C4, but has the advantage of requiring fewer steps of genetic engineering and performing better than C3 under all temperatures and light levels. In 2021, the UK Government provided £1.2 million for studying C2 engineering.
https://en.wikipedia.org/wiki/Multiple%20rocket%20launcher
Multiple rocket launcher
A multiple rocket launcher (MRL) or multiple launch rocket system (MLRS) is a type of rocket artillery system that contains multiple launchers fixed to a single platform, and shoots its rocket ordnance in a fashion similar to a volley gun. Rockets are self-propelled in flight and have different capabilities than conventional artillery shells, such as longer effective range, lower recoil, a typically considerably higher payload than a similarly sized gun artillery platform, or even the ability to carry multiple warheads. Unguided rocket artillery is notoriously inaccurate and slow to reload compared to gun artillery. A multiple rocket launcher helps compensate for this with its ability to launch multiple rockets in rapid succession, which, coupled with the large kill zone of each warhead, can easily deliver saturation fire over a target area. However, modern rockets can use GPS or inertial guidance to combine the advantages of rockets with the higher accuracy of precision-guided munitions.
History
The first multiple rocket launchers, known as Huo Che, were invented during the medieval Chinese Song dynasty, in which the Chinese fire lance was fixed backward on a pike or arrow and shot at an enemy as early as 1180. This form of rocket was used during the Mongol siege of Kaifeng. Chinese militaries later created multiple rocket launchers that fired up to 100 small fire-arrow rockets simultaneously. The typical powder section of the arrow-rockets was 1/3 to 1/2 ft (10 to 15 cm) long. Bamboo arrow shafts varied from 1.5 ft (45 cm) to 2.5 ft (75 cm) long, and the striking distance reached 300 to 400 paces. The Chinese also enhanced rocket tips with poison and made sure that the launchers were mobile. They designed a multiple rocket launcher to be carried and operated by a single soldier. Various forms of MRLs evolved, including a launcher mounted on a wheelbarrow. The Joseon dynasty of Korea used an expanded variant of such a launcher (called a hwacha), made of 100 to 200 holes containing rocket arrows placed on a two-wheeled cart. The range of the fired arrows is estimated to have been 2,000 meters. The hwacha was used to great effect against invading armies during the Japanese invasions of 1592–1598, most notably at the Battle of Haengju, in which 40 hwachas were deployed to repel 30,000 Japanese soldiers. European armies preferred relatively large single-launch rockets prior to World War II. Napoleonic-era armies on both sides followed the British adoption of Mysorean rockets as the Congreve rocket. These were explosive steel-cased bombardment rockets with minimal launchers. European navies developed naval multiple launcher mounts with steadily improving explosive rockets for light and coastal vessels. These weapons were largely replaced by conventional light artillery during the late nineteenth century.
World War II
The first self-propelled MRL—and arguably the most famous—was the Soviet BM-13 Katyusha, first used during World War II and exported to Soviet allies afterwards. These were simple systems in which a rack of launch rails was mounted on the back of a truck. This set the template for modern MRLs. The Americans mounted tubular launchers atop M4 Sherman tanks to create the T34 Calliope rocket launching tank, only used in small numbers, as their closest equivalent to the Katyusha. The Germans began using a towed six-tube multiple rocket launcher during World War II, the Nebelwerfer, called the "Screaming Mimi" by the Allies.
The system was developed before the war to skirt the limitations of the Treaty of Versailles. Later in the war, 15 cm Nebelwerfer 41s were mounted on modified Opel Maultier ("Mule") halftracks, becoming Panzerwerfer 42 4/1s. Another version, produced in limited numbers towards the end of the war, was a conversion of the Schwerer Wehrmachtschlepper ("heavy military transport", sWS) halftrack to a configuration similar to the Panzerwerfer 42 4/1, mounting the 10-barreled 15 cm Nebelwerfer. Another German halftrack MRL system was inspired by the Russian BM-13. Keeping the Soviet 82 mm rocket caliber as well as the launch and rocket stabilisation designs, the Germans developed a system of two rows of 12 guide rails mounted on a Maultier chassis, each row providing the capacity for 24 rockets, underslung as well as on top of the rails, for 48 rockets total. This vehicle was designated 8 cm Raketen-Vielfachwerfer (8 cm multiple rocket launcher). As the launch system was inspired by and looked similar to the BM-13, which the Germans had nicknamed the "Stalin-Orgel" ("Stalin organ"), the Vielfachwerfer soon became known as the "Himmler-Orgel" ("Himmler organ").
Types
There are two main types of MRLs:
With tubes or pipes, usually made of steel, that are not removable from the launcher and are reloaded on the battlefield, with rockets loaded manually or semi-automatically. This was the most common type until the 21st century. It is more convenient for battlefield use because, unlike the other type, it does not require special tools to reload the modules and test them before use.
With containers, pods or modules that can be removed from the launcher and quickly replaced with the same or different types and calibers of rockets. These are usually reloaded at a factory or in specially equipped army workshops. This is the more modern type: a launcher is not necessarily tied to one type of rocket, which gives commanders in the field more options for dealing with different tactical situations with different types of rockets, or for reloading quickly. Such launchers are also easier to upgrade for new types of rockets.
Current usage
Like all artillery, MRLs have a reputation for devastating the morale of ill-disciplined or already-shaken troops. The material effect depends on circumstances, as well-covered field fortifications may provide reasonable protection. MRLs are still unable to properly engage reverse-slope positions in mountain warfare, because a rocket's trajectory cannot be tuned the way a howitzer's can by adding or removing propellant increments. Simple MRL rocket types have a rather long minimum firing range for the same reason. One approach to lessening this limit is the addition of drag rings to the rocket nose. The increased drag slows the rocket down relative to a clean configuration and creates a less flat trajectory. Pre-packaged MRL munitions do not offer this option, but some MRL types with individually loaded rockets do. Improvised MRLs based on helicopter- or aircraft-mounted rocket pods (typically of 57–80 mm caliber), especially on light trucks and pickups (so-called "technicals"), are often seen in civil wars when rebels make use of captured launchers and munitions. Modern MRL systems can use land navigation technology (especially satellite navigation such as GPS) for quick and accurate positioning. Previously, the accurate determination of the battery position required so much effort that dispersed operation of a battery was at times impractical.
MRL systems with GPS can have their launchers dispersed and firing from various positions at a single target, just as multiple batteries were previously often united on one target area. Radar may be used to track weather balloons to determine winds, or to track special rockets that self-destruct in the air. The tracking allows determination of the influence of winds and propellant temperatures on the rockets' flight paths. These observations can then be factored into the firing solution for the rocket salvo fired for effect. Such tracking radars can also be used to predict the range error of individual rockets. Trajectory-correcting munitions may then benefit from this, as a directional radio may send a coded message to the rocket to deploy air brakes at just the right time to correct most of the range error. This requires that the rockets were originally aimed too far, as the range can only be shortened by the air brakes, not extended. A more sophisticated system makes use of radar data and a one-way radio datalink to initiate a two-dimensional (range and azimuth) correction of the rocket's flight path, with steering by fins or nose thrusters. The latter is more common with systems which can be used to upgrade old rockets; the IMI ACCULAR is an example. Fin-stabilised rockets also allow for easy course corrections using rudders or minute charges. Precision-guided munitions have been introduced to exploit this. Guidance principles such as satellite navigation, inertial navigation systems and semi-active laser seekers are used for this purpose. This improves dispersion from a CEP of hundreds of meters at dozens of kilometers' range to just a few meters, largely independent of the range of the round (except for INS, as INS navigation creates a small dispersion that is about proportional to range). This in turn made great increases of rocket (or missile) ranges useful; previously, dispersion had made rockets too inefficient and often too dangerous to friendly troops at long ranges. Long-range MRL missiles often fly a higher quasi-ballistic trajectory than shorter-ranged rockets and thus pose a de-confliction challenge, as they might collide with friendly aircraft in the air. The differences between an MRL missile and a large anti-tank guided missile, such as the Nimrod, have blurred due to guided MRL missiles such as the M31 GMLRS (guided unitary multiple launch rocket system), which passed flight tests in 2014.
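As an illustration of the "aim long, brake to shorten" correction scheme described above, here is a minimal sketch. It is not modeled on any real fire-control system: the one-dimensional range bookkeeping, the fixed braking authority, and all numbers are invented for illustration.

```python
# Toy sketch of one-sided range correction with air brakes: the rocket
# is deliberately aimed beyond the target, radar predicts its ballistic
# range, and deploying the brakes can only shorten the shot.

def brake_correction(predicted_range_m, target_range_m, max_shed_m=400.0):
    """Metres of range to shed via air brakes, or None if uncorrectable."""
    overshoot = predicted_range_m - target_range_m
    if overshoot < 0:
        return None   # fell short: brakes cannot extend range
    if overshoot > max_shed_m:
        return None   # overshoot exceeds braking authority
    return overshoot

# The aim point is biased long (e.g. +200 m) so that typical predicted
# errors fall inside the one-sided correction window.
for predicted in (19900.0, 20150.0, 20700.0):
    shed = brake_correction(predicted, target_range_m=20000.0)
    print(predicted, "->",
          "no correction possible" if shed is None
          else f"deploy brakes to shed {shed:.0f} m")
```

Biasing the aim point long converts a two-sided error distribution into one that a shorten-only actuator can handle, which is why the text notes the rockets must originally be aimed too far.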
https://en.wikipedia.org/wiki/Archosaur
Archosaur
Archosauria, whose members are known as archosaurs, is a clade of diapsid sauropsid tetrapods, with birds and crocodilians being the only extant representatives. Although broadly classified as reptiles, which traditionally exclude birds, the cladistic sense of the term includes all living and extinct relatives of birds and crocodilians, such as non-avian dinosaurs, pterosaurs, phytosaurs, aetosaurs and rauisuchians, as well as many Mesozoic marine reptiles. Modern paleontologists define Archosauria as a crown group that includes the most recent common ancestor of living birds and crocodilians, and all of its descendants. The base of Archosauria splits into two clades: Pseudosuchia, which includes crocodilians and their extinct relatives; and Avemetatarsalia, which includes birds and their extinct relatives (such as non-avian dinosaurs and pterosaurs). Older definitions of the group Archosauria rely on shared morphological characteristics, such as an antorbital fenestra in the skull, serrated teeth, and an upright stance. Some extinct reptiles, such as proterosuchids and euparkeriids, also possessed these features yet originated prior to the split between the crocodilian and bird lineages. The older morphological definition of Archosauria nowadays roughly corresponds to Archosauriformes, a group named to encompass crown-group archosaurs and their close relatives. The oldest true archosaur fossils are known from the Early Triassic period, though the first archosauriforms and archosauromorphs (reptiles closer to archosaurs than to lizards or other lepidosaurs) appeared in the Permian. Archosaurs quickly diversified in the aftermath of the Permian-Triassic mass extinction (~252 Ma), which wiped out most of the then-dominant therapsid competitors such as the gorgonopsians and anomodonts. The subsequent arid Triassic climate allowed the more drought-resilient archosaurs (largely due to their uric acid-based urinary system) to eventually become the largest and most ecologically dominant terrestrial vertebrates, from the Middle Triassic period up until the Cretaceous–Paleogene extinction event (~66 Ma). Birds and several crocodyliform lineages were the only archosaurs to survive the K-Pg extinction, rediversifying in the subsequent Cenozoic era. Birds in particular have become among the most species-rich groups of terrestrial vertebrates in the present day.
Distinguishing characteristics
Archosaurs can traditionally be distinguished from other tetrapods on the basis of several synapomorphies, or shared characteristics, which were present in their last common ancestor. Many of these characteristics appeared prior to the origin of the clade Archosauria, as they were present in archosauriforms such as Proterosuchus and Euparkeria, which were outside the crown group. The most obvious features include teeth set in deep sockets, antorbital and mandibular fenestrae (openings in front of the eyes and in the jaw, respectively), and a pronounced fourth trochanter (a prominent ridge on the femur). Being set in sockets, the teeth were less likely to be torn loose during feeding. This feature is responsible for the name "thecodont" (meaning "socket teeth"), which early paleontologists applied to many Triassic archosaurs. Additionally, non-muscular cheek and lip tissue appear in various forms throughout the clade, with all living archosaurs lacking non-muscular lips, unlike most non-avian saurischian dinosaurs. Some archosaurs, such as birds, are secondarily toothless.
Antorbital fenestrae reduced the weight of the skull, which was relatively large in early archosaurs, rather like that of modern crocodilians. Mandibular fenestrae may also have reduced the weight of the jaw in some forms. The fourth trochanter provides a large site for the attachment of muscles on the femur. Stronger muscles allowed for erect gaits in early archosaurs, and may also be connected with the ability of the archosaurs or their immediate ancestors to survive the catastrophic Permian-Triassic extinction event. Unlike their close living relatives, the lepidosaurs, archosaurs lost the vomeronasal organ.
Origins
Archosaurs are a subgroup of archosauriforms, which themselves are a subgroup of archosauromorphs. Both the oldest archosauromorph (Protorosaurus speneri) and the oldest archosauriform (Archosaurus rossicus) lived in the late Permian. The oldest true archosaurs appeared during the Olenekian stage (251–247 Ma) of the Early Triassic. A few fragmentary fossils of large carnivorous crocodilian-line archosaurs (informally termed "rauisuchians") are known from this stage. These include Scythosuchus and Tsylmosuchus (both of which have been found in Russia), as well as Xilousuchus, a ctenosauriscid from China. The oldest known fossils of bird-line archosaurs are from the Anisian stage (247–242 Ma) of Tanzania, and include Asilisaurus (an early silesaurid), Teleocrater (an aphanosaur), and Nyasasaurus (a possible early dinosaur).
Archosaurian domination in the Triassic
Synapsids are a clade that includes mammals and their extinct ancestors. The latter group are often referred to as mammal-like reptiles, but should be termed protomammals, stem mammals, or basal synapsids, because they are not true reptiles by modern cladistic classification. They were the dominant land vertebrates throughout the Permian, but most perished in the Permian–Triassic extinction event. Very few large synapsids survived the event, but one form, Lystrosaurus (a herbivorous dicynodont), attained a widespread distribution soon after the extinction. Following this, archosaurs and other archosauriforms quickly became the dominant land vertebrates in the early Triassic. Fossils from before the mass extinction have only been found around the Equator, but after the event fossils can be found all over the world. Suggested explanations for this include:
Archosaurs made more rapid progress towards erect limbs than synapsids, and this gave them greater stamina by avoiding Carrier's constraint. An objection to this explanation is that archosaurs became dominant while they still had sprawling or semi-erect limbs, similar to those of Lystrosaurus and other synapsids.
Archosaurs have more efficient respiratory systems featuring unidirectional air flow, as opposed to the tidal respiration of synapsids. The ability to breathe more efficiently in hypoxic conditions may have been advantageous to early archosaurs during the suspected drop in oxygen levels at the end of the Permian.
The Early Triassic was predominantly arid, because most of the Earth's land was concentrated in the supercontinent Pangaea. Archosaurs were probably better at conserving water than early synapsids because: Modern diapsids (lizards, snakes, crocodilians, birds) excrete uric acid, which can be excreted as a paste, resulting in low water loss, as opposed to a more dilute urine. It is reasonable to suppose that archosaurs (the ancestors of crocodilians, dinosaurs and pterosaurs) also excreted uric acid, and therefore were good at conserving water.
The aglandular (glandless) skins of diapsids would also have helped to conserve water. Modern mammals, by contrast, excrete urea, which requires a relatively high urine flow to keep it from leaving the urine by diffusion in the kidney tubules. Their skins also contain many glands, which likewise lose water. Assuming that early synapsids had similar features, as argued for instance by the authors of Palaeos, they were at a disadvantage in a mainly arid world. The same well-respected site points out that "for much of Australia's Plio-Pleistocene history, where conditions were probably similar, the largest terrestrial predators were not mammals but gigantic varanid lizards (Megalania) and land crocs." However, this theory has been questioned, since it implies that synapsids were necessarily less advantaged in water retention; it assumes that synapsid decline coincided with climate changes or with archosaur diversification (neither of which has been tested); desert-dwelling mammals are as well adapted in this respect as archosaurs; and some cynodonts, such as Trucidocynodon, were large predators. One study favors competition among mammaliaforms as the main explanation for Mesozoic mammals being small.
Main forms
Since the 1970s, scientists have classified archosaurs mainly on the basis of their ankles. The earliest archosaurs had "primitive mesotarsal" ankles: the astragalus and calcaneum were fixed to the tibia and fibula by sutures, and the joint bent about the contact between these bones and the foot. The Pseudosuchia appeared early in the Triassic. In their ankles, the astragalus was joined to the tibia by a suture, and the joint rotated around a peg on the astragalus which fitted into a socket in the calcaneum. Early "crurotarsans" still walked with sprawling limbs, but some later crurotarsans developed fully erect limbs. Modern crocodilians are crurotarsans that can employ a diverse range of gaits depending on speed. Euparkeria and the Ornithosuchidae had "reversed crurotarsal" ankles, with a peg on the calcaneum and a socket on the astragalus. The earliest fossils of Avemetatarsalia ("bird ankles") appear in the Anisian age of the Middle Triassic. Most ornithodirans had "advanced mesotarsal" ankles. This form of ankle incorporated a very large astragalus and very small calcaneum, and could only move in one plane, like a simple hinge. This arrangement, which was only suitable for animals with erect limbs, provided more stability when the animals were running. The earliest avemetatarsalians, such as Teleocrater and Asilisaurus, retained "primitive mesotarsal" ankles. The ornithodirans differed from other archosaurs in other ways: they were lightly built and usually small, their necks were long and had an S-shaped curve, their skulls were much more lightly built, and many ornithodirans were completely bipedal. The archosaurian fourth trochanter on the femur may have made it easier for ornithodirans to become bipeds, because it provided more leverage for the thigh muscles. In the late Triassic, the ornithodirans diversified to produce dinosaurs and pterosaurs.
Classification
Modern classification
Archosauria is normally defined as a crown group, which means that it only includes descendants of the last common ancestor of its living representatives. In the case of archosaurs, these are birds and crocodilians. Archosauria sits within the larger clade Archosauriformes, which includes some close relatives of archosaurs, such as proterochampsids and euparkeriids.
These relatives are often referred to as archosaurs despite being placed outside of the crown group Archosauria, in a more basal position within Archosauriformes. Historically, many archosauriforms were described as archosaurs, including proterosuchids and erythrosuchids, based on the presence of an antorbital fenestra. While many researchers prefer to treat Archosauria as an unranked clade, some continue to assign it a traditional biological rank. Traditionally, Archosauria has been treated as a superorder, though a few 21st-century researchers have assigned it to different ranks, including division and class.
History of classification
Archosauria as a term was first coined by American paleontologist Edward Drinker Cope in 1869, and included a wide range of taxa, including dinosaurs, crocodilians, thecodonts, sauropterygians (which may be related to turtles), rhynchocephalians (a group that according to Cope included rhynchosaurs, which nowadays are considered to be more basal archosauromorphs, and tuataras, which are lepidosaurs), and anomodonts, which are now considered synapsids. It was not until 1986 that Archosauria was defined as a crown clade, restricting its use to more derived taxa. Cope's term was a Greek-Latin hybrid intended to refer to the cranial arches, but it has since also been understood as "leading reptiles" or "ruling reptiles" by association with Greek ἀρχός "leader, ruler". The term "thecodont", now considered obsolete, was first used by the English paleontologist Richard Owen in 1859 to describe Triassic archosaurs, and it became widely used in the 20th century. Thecodonts were considered the "basal stock" from which the more advanced archosaurs descended. They did not possess features seen in later avian and crocodilian lines, and therefore were considered more primitive and ancestral to the two groups. With the cladistic revolution of the 1980s and 90s, in which cladistics became the most widely used method of classifying organisms, thecodonts were no longer considered a valid grouping. Because they are considered a "basal stock", thecodonts are paraphyletic, meaning that they form a group that does not include all descendants of its last common ancestor: in this case, the more derived crocodilians and birds are excluded from "Thecodontia" as it was formerly understood. The description of the basal ornithodires Lagerpeton and Lagosuchus in the 1970s provided evidence that linked thecodonts with dinosaurs and contributed to the disuse of the term "Thecodontia", which many cladists consider an artificial grouping. With the identification of "crocodilian normal" and "crocodilian reversed" ankles by Sankar Chatterjee in 1978, a basal split in Archosauria was identified. Chatterjee considered these two groups to be Pseudosuchia, with the "normal" ankle, and Ornithosuchidae, with the "reversed" ankle. Ornithosuchids were thought to be ancestral to dinosaurs at this time. In 1979, A. R. I. Cruickshank identified the basal split and thought that the crurotarsan ankle developed independently in these two groups, but in opposite ways. Cruickshank also thought that the development of these ankle types progressed in each group to allow advanced members to have semi-erect (in the case of crocodilians) or erect (in the case of dinosaurs) gaits.
Phylogeny
In many phylogenetic analyses, archosaurs have been shown to be a monophyletic grouping, thus forming a true clade. One of the first studies of archosaur phylogeny was authored by American paleontologist Jacques Gauthier in 1986.
Gauthier split Archosauria into Pseudosuchia, the crocodilian line, and Ornithosuchia, the dinosaur and pterosaur line. Pseudosuchia was defined as all archosaurs more closely related to crocodiles, while Ornithosuchia was defined as all archosaurs more closely related to birds. Proterochampsids, erythrosuchids, and proterosuchids fell successively outside Archosauria in the resulting tree, which Gauthier (1986) presented as a cladogram. In 1988, paleontologists Michael Benton and J. M. Clark produced a new tree in a phylogenetic study of basal archosaurs. As in Gauthier's tree, Benton and Clark's revealed a basal split within Archosauria. They referred to the two groups as Crocodylotarsi and Ornithosuchia. Crocodylotarsi was defined as an apomorphy-based taxon based on the presence of a "crocodile-normal" ankle joint (considered to be the defining apomorphy of the clade). Gauthier's Pseudosuchia, by contrast, was a stem-based taxon. Unlike Gauthier's tree, Benton and Clark's places Euparkeria outside Ornithosuchia and outside the crown group Archosauria altogether. The clades Crurotarsi and Ornithodira were first used together in 1990 by paleontologist Paul Sereno and A. B. Arcucci in their phylogenetic study of archosaurs. They were the first to erect the clade Crurotarsi, while Ornithodira had been named by Gauthier in 1986. Crurotarsi and Ornithodira replaced Pseudosuchia and Ornithosuchia, respectively, as the monophyly of both of those clades was questioned. Sereno and Arcucci incorporated archosaur features other than ankle types in their analyses, which resulted in a different tree from previous analyses; Sereno (1991) published a similar cladogram. Ornithodira and Crurotarsi are both node-based clades, meaning that they are defined to include the last common ancestor of two or more taxa and all of its descendants. Ornithodira includes the last common ancestor of pterosaurs and dinosaurs (which include birds), while Crurotarsi includes the last common ancestor of living crocodilians and three groups of Triassic archosaurs: ornithosuchids, aetosaurs, and phytosaurs. These clades are not equivalent to "bird-line" and "crocodile-line" archosaurs, which would be branch-based clades defined as all taxa more closely related to one living group (either birds or crocodiles) than the other. Benton proposed the name Avemetatarsalia in 1999 to include all bird-line archosaurs (under his definition, all archosaurs more closely related to dinosaurs than to crocodilians). His analysis of the small Triassic archosaur Scleromochlus placed it within bird-line archosaurs but outside Ornithodira, meaning that Ornithodira was no longer equivalent to bird-line archosaurs; Benton (2004) presented a cladogram showing this phylogeny. In Sterling Nesbitt's 2011 monograph on early archosaurs, a phylogenetic analysis found strong support for phytosaurs falling outside Archosauria. Many subsequent studies supported this phylogeny. Because Crurotarsi is defined by the inclusion of phytosaurs, the placement of phytosaurs outside Archosauria means that Crurotarsi must include all of Archosauria. Nesbitt reinstated Pseudosuchia as a clade name for crocodile-line archosaurs, using it as a stem-based taxon.
Nesbitt (2011) presented a revised cladogram reflecting these placements.
Extinction and survival
Crocodylomorphs, pterosaurs and dinosaurs survived the Triassic–Jurassic extinction event about 200 million years ago, but other archosaurs had become extinct at or prior to the Triassic–Jurassic boundary. Non-avian dinosaurs and pterosaurs perished in the Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago, but crown-group birds (the only remaining dinosaur group) and many crocodyliforms survived. Both are descendants of archosaurs, and are therefore archosaurs themselves under phylogenetic taxonomy. Crocodilians (which include all modern crocodiles, alligators, and gharials) and birds flourish today in the Holocene. It is generally agreed that birds have the most species of all terrestrial vertebrates.
Archosaur lifestyle
Hip joints and locomotion
Like the early tetrapods, early archosaurs had a sprawling gait because their hip sockets faced sideways, and the knobs at the tops of their femurs were in line with the femur. In the early to middle Triassic, some archosaur groups developed hip joints that allowed (or required) a more erect gait. This gave them greater stamina, because it avoided Carrier's constraint, i.e. they could run and breathe easily at the same time. There were two main types of joint which allowed erect legs:
The hip sockets faced sideways, but the knobs on the femurs were at right angles to the rest of the femur, which therefore pointed downwards. Dinosaurs evolved from archosaurs with this hip arrangement.
The hip sockets faced downwards and the knobs on the femurs were in line with the femur. This "pillar-erect" arrangement appears to have evolved independently in various archosaur lineages; for example, it was common in "Rauisuchia" (non-crocodylomorph paracrocodylomorphs) and also appeared in some aetosaurs.
It has been pointed out that an upright stance requires more energy, so it may indicate a higher metabolism and a higher body temperature.
Diet
Most early archosaurs were large predators, but members of various lines diversified into other niches. Aetosaurs were herbivores, and some developed extensive armor. A few crocodyliforms were herbivores, e.g., Simosuchus and Phyllodontosuchus. The large crocodyliform Stomatosuchus may have been a filter feeder. Sauropodomorphs and ornithischian dinosaurs were herbivores with diverse adaptations for feeding biomechanics.
Land, water and air
Archosaurs are mainly portrayed as land animals, but:
Many phytosaurs and crocodyliforms dominated the rivers and swamps and even invaded the seas (e.g., the teleosaurs, Metriorhynchidae and Dyrosauridae). The Metriorhynchidae were rather dolphin-like, with paddle-like forelimbs, a tail fluke and smooth, unarmoured skins.
Two clades of ornithodirans, the pterosaurs and the birds, dominated the air after becoming adapted to a volant lifestyle.
Some dinosaurs, such as Spinosaurus, have been argued to have had a semiaquatic lifestyle; Hesperornithes and penguins also adapted to this lifestyle.
Metabolism
The metabolism of archosaurs is still a controversial topic. They certainly evolved from cold-blooded ancestors, and the surviving non-dinosaurian archosaurs, the crocodilians, are cold-blooded. But crocodilians have some features which are normally associated with a warm-blooded metabolism, because they improve the animal's oxygen supply:
Four-chambered hearts. Both birds and mammals have 4-chambered hearts, which completely separate the flows of oxygenated and de-oxygenated blood.
Non-crocodilian reptiles have 3-chambered hearts, which are less efficient because they let the two flows mix and thus send some de-oxygenated blood out to the body instead of to the lungs. Modern crocodilians' hearts are 4-chambered, but are smaller relative to body size and run at lower pressure than those of modern birds and mammals. They also have a pulmonary bypass, which makes them functionally 3-chambered when under water, conserving oxygen.
A secondary palate, which allows the animal to eat and breathe at the same time.
A hepatic piston mechanism for pumping the lungs. This is different from the lung-pumping mechanisms of mammals and birds, but similar to what some researchers claim to have found in some dinosaurs.
Historically there has been uncertainty as to why natural selection favored the development of these features, which are very important for active warm-blooded creatures but of little apparent use to cold-blooded aquatic ambush predators that spend the vast majority of their time floating in water or lying on river banks. Paleontological evidence shows that the ancestors of living crocodilians were active and endothermic (warm-blooded). Some experts believe that their archosaur ancestors were warm-blooded as well, likely because feather-like filaments evolved to cover the whole body and were capable of providing thermal insulation. Physiological, anatomical, and developmental features of the crocodilian heart support the paleontological evidence and show that the lineage reverted to ectothermy when it invaded the aquatic, ambush-predator niche. Crocodilian embryos develop fully 4-chambered hearts at an early stage. Modifications to the growing heart form a pulmonary bypass shunt that includes the left aortic arch, which originates from the right ventricle, the foramen of Panizza between the left and right aortic arches, and the cog-tooth valve at the base of the pulmonary artery. The shunt is used during diving to make the heart function as a 3-chambered heart, providing the crocodilian with the neurally controlled shunting used by ectotherms. The researchers concluded that the ancestors of living crocodilians had fully 4-chambered hearts, and were therefore warm-blooded, before they reverted to a cold-blooded or ectothermic metabolism. The authors also provide other evidence for endothermy in stem archosaurs. It is reasonable to suggest that later crocodilians developed the pulmonary bypass shunt as they became cold-blooded, aquatic, and less active. If the crocodilian ancestors and other Triassic archosaurs were warm-blooded, this would help to resolve some evolutionary puzzles:
The earliest crocodylomorphs, e.g., Terrestrisuchus, were slim, leggy terrestrial predators whose build suggests a fairly active lifestyle, which requires a fairly fast metabolism. Some other crurotarsan archosaurs appear to have had erect limbs, while those of rauisuchians are very poorly adapted for any other posture. Erect limbs are advantageous for active animals because they avoid Carrier's constraint, but disadvantageous for more sluggish animals because they increase the energy costs of standing up and lying down.
If early archosaurs were completely cold-blooded and (as seems most likely) dinosaurs were at least fairly warm-blooded, dinosaurs would have had to evolve warm-blooded metabolisms in less than half the time it took for synapsids to do the same.
Respiratory system
A recent study of the lungs of Alligator mississippiensis (the American alligator) has shown that airflow through them is unidirectional, moving in the same direction during inhalation and exhalation. This is also seen in birds and many non-avian dinosaurs, which have air sacs to further aid in respiration. Both birds and alligators achieve unidirectional air flow through the presence of parabronchi, which are responsible for gas exchange. The study found that in alligators, air enters through the second bronchial branch, moves through the parabronchi, and exits through the first bronchial branch. Unidirectional airflow in both birds and alligators suggests that this type of respiration was present at the base of Archosauria and retained by both dinosaurs and non-dinosaurian archosaurs, such as aetosaurs, "rauisuchians" (non-crocodylomorph paracrocodylomorphs), crocodylomorphs, and pterosaurs. The use of unidirectional airflow in the lungs of archosaurs may have given the group an advantage over synapsids, whose lungs moved air tidally in and out through a network of bronchi that terminated in alveoli, which are cul-de-sacs. The better efficiency in gas transfer seen in archosaur lungs may have been advantageous during the times of low atmospheric oxygen which are thought to have existed during the Mesozoic.
Reproduction
Most (if not all) archosaurs are oviparous. Birds and crocodilians lay hard-shelled eggs, as did extinct dinosaurs and crocodylomorphs. The presence of hard-shelled eggs in both dinosaurs and crocodilians has been used as an explanation for the absence of viviparity or ovoviviparity in archosaurs. However, both pterosaurs and baurusuchids have soft-shelled eggs, implying that hard shells are not a plesiomorphic condition. The pelvic anatomy of Cricosaurus and other metriorhynchids, together with fossilized embryos belonging to the non-archosaur archosauromorph Dinocephalosaurus, suggests that the lack of viviparity among archosaurs may be a consequence of lineage-specific restrictions. Archosaurs are ancestrally superprecocial, as evidenced in various dinosaurs, pterosaurs, and crocodylomorphs. However, parental care did evolve independently multiple times in crocodilians, dinosaurs, and aetosaurs. In most such species the animals bury their eggs and rely on temperature-dependent sex determination. The notable exception is Neornithes, which incubate their eggs and rely on genetic sex determination – a trait that might have given them a survival advantage over other dinosaurs.
https://en.wikipedia.org/wiki/Presbyopia
Presbyopia
Presbyopia is a physiological insufficiency of optical accommodation associated with the aging of the eye; it results in progressively worsening ability to focus clearly on close objects. Also known as age-related farsightedness (or as age-related long sight in the UK), it affects many adults over the age of 40. A common sign of presbyopia is difficulty in reading small print, which results in having to hold reading material farther away. Other associated symptoms can include headaches and eyestrain. Different people experience different degrees of problems. Other types of refractive error may exist at the same time as presbyopia. The condition is similar to hypermetropia, or far-sightedness, which starts in childhood and exhibits similar symptoms of blurred vision for close objects. Presbyopia is a typical part of the aging process. It occurs due to age-related changes in the lens (decreased elasticity and increased hardness) and ciliary muscle (decreased strength and ability to move the lens), causing the eye to focus light just behind, rather than on, the retina when looking at close objects. It is a type of refractive error, along with nearsightedness, farsightedness, and astigmatism. Diagnosis is by an eye examination. Presbyopia can be corrected using glasses, contact lenses, multifocal intraocular lenses, or LASIK (PresbyLASIK) surgery. The most common treatment is glasses correction using an appropriate convex lens. Glasses prescribed to correct presbyopia may be simple reading glasses, bifocals, trifocals, or progressive lenses. People over 40 are at risk of developing presbyopia, and all people become affected to some degree. An estimated 25% of people (1.8 billion globally) had presbyopia as of 2015.
Signs and symptoms
The first symptoms most people notice are difficulty reading fine print, particularly in low-light conditions, eyestrain when reading for long periods, blurring of near objects, or temporarily blurred vision when changing the viewing distance. Many extreme presbyopes complain that their arms have become "too short" to hold reading material at a comfortable distance. Presbyopia, like other focal imperfections, becomes less noticeable in bright sunlight, when the pupil becomes smaller. As with any lens, increasing the focal ratio of the lens increases depth of field by reducing the level of blur of out-of-focus objects (compare the effect of aperture on depth of field in photography). The onset of presbyopia varies among those in certain professions and those with miotic pupils. In particular, farmers and homemakers seek correction later, whereas service workers and construction workers seek correction earlier. Scuba divers with an interest in underwater photography may notice presbyopic changes while diving before they recognize the symptoms in their normal routines, due to the near focus in low-light conditions.
Interaction with myopia
People with low near-sightedness can read comfortably without eyeglasses or contact lenses even after age forty, but higher myopes might require two pairs of glasses (one for distance, one for near), bifocals, or progressive lenses. However, their myopia does not disappear, and the long-distance visual challenges remain. Myopes considering refractive surgery are advised that surgically correcting their nearsightedness may be a disadvantage after age forty, when the eyes become presbyopic and lose their ability to accommodate or change focus, because they will then need to use glasses for reading.
Myopes with low astigmatism find near vision better, though not perfect, without glasses or contact lenses when presbyopia sets in, but the more astigmatism there is, the poorer the uncorrected near vision. A surgical technique offered is to create a "reading eye" and a "distance vision eye", a technique commonly used in contact lens practice, known as monovision. Monovision can be created with contact lenses, so candidates for this procedure can determine whether they are prepared to have their corneas reshaped by surgery to cause this effect permanently.
Mechanism
The cause of presbyopia is lens hardening by decreasing levels of α-crystallin, a process which may be sped up by higher temperatures. It results in a near point greater than 25 cm (or, equivalently, an accommodation amplitude of less than 4 diopters). In optics, the closest point at which an object can be brought into focus by the eye is called the eye's near point. A standard near point distance of 25 cm is typically assumed in the design of optical instruments, and in characterizing optical devices such as magnifying glasses. There is some confusion over how the focusing mechanism of the eye works. In the 1977 book Eye and Brain, for example, the lens is said to be suspended by a membrane, the "zonula", which holds it under tension. The tension is released, by contraction of the ciliary muscle, to allow the lens to become more round, for close vision. This implies that the ciliary muscle, which is outside the zonula, must be circumferential, contracting like a sphincter, to slacken the tension of the zonula pulling outwards on the lens. This is consistent with the fact that our eyes seem to be in the "relaxed" state when focusing at infinity, and also explains why no amount of effort seems to enable a myopic person to see farther away. The ability to focus on near objects declines throughout life, from an accommodation of about 20 dioptres (the ability to focus at 5 cm) in a child, to 10 dioptres at age 25 (10 cm), and levels off at 0.5 to 1 dioptre at age 60 (the ability to focus down to only 1–2 m). The expected, maximum, and minimum amplitudes of accommodation in diopters (D) for a corrected patient of a given age can be estimated using Hofstetter's formulas: expected amplitude = 18.5 − 0.30 × (age in years); maximum amplitude = 25 − 0.40 × (age in years); minimum amplitude = 15 − 0.25 × (age in years).
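As a worked example combining the reciprocal dioptre–distance relation with Hofstetter's expected-amplitude formula quoted above (a back-of-envelope illustration, not a clinical rule):

```latex
% Near point of a distance-corrected eye with accommodation amplitude A
d_{\mathrm{near}} = \frac{1}{A}
  \qquad (A~\text{in dioptres},\ d~\text{in metres})

% Hofstetter's expected amplitude at age 50:
A = 18.5 - 0.30 \times 50 = 3.5~\mathrm{D}
  \quad\Rightarrow\quad
d_{\mathrm{near}} = \tfrac{1}{3.5} \approx 0.29~\mathrm{m}
```

At about 3.5 D the near point (roughly 29 cm) has receded past the standard 25 cm reading distance, matching the typical onset of symptoms in the mid-to-late forties.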
Surgery Refractive surgery has been done to create multifocal corneas. PresbyLASIK, a type of multifocal corneal ablation LASIK procedure, may be used to correct presbyopia. Results are, however, more variable, and some people experience a decrease in visual acuity. Concerns with refractive surgeries for presbyopia include people's eyes changing with time. Other side effects of multifocal corneal ablation include postoperative glare, halos, ghost images, and monocular diplopia. Image processing in the brain A number of studies have claimed improvements in near visual acuity through training protocols based on perceptual learning that require the detection of briefly presented low-contrast Gabor stimuli; study participants with presbyopia were enabled to read smaller font sizes and to increase their reading speed. Eye drops Pilocarpine, an eye drop that constricts the pupil, has been approved by the FDA for presbyopia. Research on other drugs is in progress. Eye drops intended to restore lens elasticity are also being investigated. Etymology The term is from Greek πρέσβυς (presbys, "old man") and ὤψ (ōps, "eye"; genitive ὠπός). History The condition was mentioned as early as the writings of Aristotle in the 4th century BC. Glass lenses first came into use for the problem in the late 13th century.
Biology and health sciences
Disabilities
Health
492177
https://en.wikipedia.org/wiki/Sizing
Sizing
Sizing or size is a substance that is applied to, or incorporated into, other materials (especially papers and textiles) to act as a protective filler or glaze. Sizing is used in papermaking and textile manufacturing to change the absorption and wear characteristics of those materials. It is used for oil-based surface preparation for gilding (sometimes called mordant in this context), and by painters and artists to prepare paper and textile surfaces for some art techniques. Sizing is used in photography to increase the sharpness of a print, to change the glossiness of a print, or for other purposes depending on the type of paper and printing technique. Fibers used in composite materials are treated with various sizing agents to promote adhesion with the matrix material. Sizing is used during paper manufacture to reduce the paper's tendency, when dry, to absorb liquid, with the goal of allowing inks and paints to remain on the surface of the paper and to dry there, rather than be absorbed into the paper. This provides a more consistent, economical, and precise printing, painting, and writing surface. It is achieved by curbing the paper fibers' tendency to absorb liquids by capillary action. In addition, sizing affects abrasiveness, creasability, finish, printability, smoothness, and surface bond strength, and decreases surface porosity and fuzzing. There are three categories of papers with respect to sizing: unsized (waterleaf), weak sized (slack sized), and strong sized (hard sized). Waterleaf has low water resistance and includes absorbent papers for blotting. Slack sized paper is somewhat absorbent and includes newsprint, while hard sized papers have the highest water resistance, such as coated fine papers and liquid packaging board. There are two types of sizing: internal sizing, sometimes also called engine sizing, and surface sizing (tub sizing). Internal sizing is applied to almost all papers and especially to all those that are machine made, while surface sizing is added for the highest grade bond, ledger, and writing papers. Surface sizing Surface sizing solutions consist mainly of modified starches and sometimes other hydrocolloids, such as gelatine, or surface sizing agents such as acrylic co-polymers. Surface sizing agents are amphiphilic molecules, having both hydrophilic (water-loving) and hydrophobic (water-repelling) ends. The sizing agent adheres to substrate fibers and forms a film, with the hydrophilic tail facing the fiber and the hydrophobic tail facing outwards, resulting in a smooth finish that tends to be water-repellent. Sizing improves the surface strength, printability, and water resistance of the paper or material to which it is applied. In the sizing solution, optical brightening agents (OBA) may also be added to improve the opacity and whiteness of the paper or material surface. Internal sizing The usual internal sizing chemicals used in papermaking at the wet end are alkyl ketene dimer (AKD) and alkyl succinic anhydride (ASA), used in neutral pH conditions, and the older rosin system, which requires acidic conditions and is still used in some mills. Preservation While sizing is intended to make paper more suitable for printing, acidic sizing using rosin also makes printing paper less durable and poses a problem for preservation of printed documents. Sizing with starch was introduced quite early in the history of papermaking.
Dard Hunter, in Papermaking through Eighteen Centuries, corroborates this by writing, "The Chinese used starch as a size for paper as early as A.D. 768 and its use continued until the fourteenth century when animal glue was substituted." In the early modern paper mills in Europe, which produced paper for printing and other uses, the sizing agent of choice was gelatin, as Susan Swartzburg writes in Preserving Library Materials: "Various substances have been used for sizing through the ages, from gypsum to animal gelatin." Hunter also describes the process of sizing used in these paper mills. With the advent of the mass production of paper, the type of size used for paper production also changed. As Swartzburg writes, "By 1850 rosin size had come into use. Unfortunately, it produces a chemical action that hastens the decomposition of even the finest papers." In the field of library preservation it is known "that acid hydrolysis of cellulose and related carbo-hydrates [sic] is one of the key factors responsible for the degradation of paper during ageing." Some professional work has focused on the specific processes involved in the degradation of rosin-sized paper, in addition to work on developing permanent paper and sizing agents that will not eventually destroy the paper. An issue on the periphery of the preservation of paper and sizing is washing, which is described by V. Daniels and J. Kosek as, "The removal of discolouration ... in water is principally effected by the dissolution of water-soluble material; this is usually done by immersing paper in water." In such a process, surface-level materials applied to the paper, such as the size used in early papermaking processes described above, may be removed from the paper, which might be of item-specific interest in a special collections library. With later papermaking processes being more akin to engine sizing ("Engine sizing, which is part of the manufacturing process, has the ingredients added to the furnish or stock prior to sheet formation," as H. Hardman and E. J. Cole describe it), the concern about the removal of size is smaller, and as such most literature focuses on the more pressing issue of preserving acidic papers and similar problems. Gilding Sizing is a term used for any substance which is applied to a surface before gilding in order to ensure adhesion of the thin gold layer to the substrate. Egg whites have often been used as sizing; the Ancient Egyptians sometimes used blood. Other commonly used traditional materials for gold leaf sizing are rabbit-skin glue diluted and heated in water (water gilding) and boiled linseed oil (oil gilding); modern materials include polyvinyl acetate. Textile warp sizing Sizing of warp yarn, also known as tape sizing, is essential to reduce breakage of the yarn and thus production stops on the weaving machine. On the weaving machine, the warp yarns are subjected to several types of actions: cyclic strain, flexing, abrasion at various loom parts, and inter-yarn friction. With sizing, the strength and abrasion resistance of the yarn improve, and the hairiness of the yarn decreases. The degree of improvement in strength depends on the adhesion force between fiber and size, size penetration, and encapsulation of the yarn. Different types of water-soluble polymers called textile sizing agents/chemicals, such as modified starch, polyvinyl alcohol (PVA), carboxymethyl cellulose (CMC), and acrylates, are used to protect the yarn.
Wax is also added to reduce the abrasiveness of the warp yarns. The type of yarn material (e.g. cotton, polyester, linen), the thickness of the yarn, and the type of weaving machinery will determine the sizing recipe. Often, the sizing liquor contains mutton tallow, an animal fat used to improve the abrasion resistance of yarns during weaving. The sizing liquor is applied to the warp yarn with a warp sizing machine. After the weaving process, the fabric is desized (washed). Sizing may be done by hand or in a sizing machine. Canvas sizing for oil painting Preparation of canvas for oil painting always includes sizing: the canvas will "rot" if directly exposed to the paint. Aqueous glue, frequently hide glue, was used to size canvas for centuries. Size in art is not a replacement for the ground: it is not intended to form a level surface for painting; it is used simply to fill pores and isolate the canvas from the actual ground.
Technology
Materials
null
492445
https://en.wikipedia.org/wiki/Compass%20%28drawing%20tool%29
Compass (drawing tool)
A compass, also commonly known as a pair of compasses, is a technical drawing instrument that can be used for inscribing circles or arcs. As dividers, it can also be used as a tool to mark out distances, particularly on maps. Compasses can be used for mathematics, drafting, navigation and other purposes. Prior to computerization, compasses and other tools for manual drafting were often packaged as a set with interchangeable parts. By the mid-twentieth century, circle templates supplemented the use of compasses. Today those facilities are more often provided by computer-aided design programs, so the physical tools serve mainly a didactic purpose in teaching geometry, technical drawing, etc. Construction and parts Compasses are usually made of metal or plastic, and consist of two "legs" connected by a hinge which can be adjusted to allow changing of the radius of the circle drawn. Typically one leg has a spike at its end for anchoring, and the other leg holds a drawing tool, such as a pencil, a short length of pencil lead, or sometimes a pen. Handle The handle, a small knurled rod above the hinge, is usually about half an inch long. Users can grip it between their index finger and thumb. Legs There are two types of leg in a pair of compasses: the straight (or steady) leg and the adjustable one. Each has a separate purpose: the steady leg serves as the basis or support for the needle point, while the adjustable leg can be altered in order to draw different sizes of circles. Hinge The screw through the hinge holds the two legs in position. The hinge can be adjusted depending on the desired stiffness; the tighter the hinge screw, the more accurate the compass's performance. A better quality compass, made of plated metal, can be finely adjusted via a small serrated wheel, usually set between the legs, and has a (dangerously powerful) spring encompassing the hinge. This sort of compass is often known as a pair of spring-bow compasses. Needle point The needle point is located on the steady leg and serves as the center point of the circle that is about to be drawn. Pencil lead The pencil lead draws the circle on a particular paper or material. Alternatively, an ink nib or attachment with a technical pen may be used. A better quality compass, made of metal, has its piece of pencil lead specially sharpened to a "chisel edge" shape, rather than to a point. Adjusting nut This holds the pencil lead or pen in place. Uses Circles can be made by pushing one leg of the compasses into the paper with the spike, putting the pencil on the paper, and moving the pencil around while keeping the legs at the same angle. Some people who find this action difficult hold the compasses still and rotate the paper instead. The radius of the intended circle can be changed by adjusting the initial angle between the two legs. Distances can be measured on a map using compasses with two spikes, also called a dividing compass (or simply "dividers"). The hinge is set in such a way that the distance between the spikes on the map represents a certain distance in reality; by measuring how many times the compasses fit between two points on the map, the distance between those points can be calculated (a worked example appears at the end of this article). Compasses and straightedge Compasses-and-straightedge constructions are used to illustrate principles of plane geometry.
Although a real pair of compasses is used to draft visible illustrations, the ideal compass used in proofs is an abstract creator of perfect circles. The most rigorous definition of this abstract tool is the "collapsing compass": having drawn a circle from a given point with a given radius, it disappears; it cannot simply be moved to another point and used to draw another circle of equal radius (unlike a real pair of compasses). Euclid showed in his second proposition (Book I of the Elements) that such a collapsing compass could be used to transfer a distance, proving that a collapsing compass could do anything a real compass can do. Variants A beam compass is an instrument, with a wooden or brass beam and sliding sockets, cursors or trammels, for drawing and dividing circles larger than those made by a regular pair of compasses. The scribe-compass is an instrument used by carpenters and other tradesmen. Some compasses can be used to draw circles, bisect angles and, in this case, to trace a line. It is the compass in its simplest form. Both branches are crimped metal. One branch has a pencil sleeve, while the other is crimped with a fine point protruding from the end. A wing nut on the hinge serves two purposes: first, it tightens the pencil, and second, it locks in the desired distance when the wing nut is turned clockwise. Loose-leg wing dividers are made entirely of forged steel. The pencil holder, thumb screws, brass pivot and branches are all well built. They are used for scribing circles and stepping off repetitive measurements with some accuracy. A reduction compass or proportional dividers is used to reduce or enlarge patterns while conserving angles. Ellipse-drawing compasses are used to draw ellipses. As a symbol A pair of compasses is often used as a symbol of precision and discernment. As such it finds a place in logos and symbols such as the Freemasons' Square and Compasses and in various computer icons. English poet John Donne used the compass as a conceit in "A Valediction: Forbidding Mourning" (1611).
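As a worked illustration of the dividers technique described under Uses, here is a small Python sketch; the 2 cm span, seven steps, and 1:50,000 scale are arbitrary illustrative values, not figures from this article.

def map_distance_km(step_count, divider_span_cm, map_scale):
    """Distance measured by 'walking' dividers along a map route.

    step_count      : how many times the divider span fits along the route
    divider_span_cm : opening of the dividers, in centimetres on the map
    map_scale       : denominator of the map scale (e.g. 50_000 for 1:50,000)
    """
    map_cm = step_count * divider_span_cm   # distance on the paper
    real_cm = map_cm * map_scale            # distance on the ground
    return real_cm / 100_000                # centimetres -> kilometres

# Example: dividers set to 2 cm fit 7 times along a route on a 1:50,000 map.
print(map_distance_km(7, 2.0, 50_000))      # -> 7.0 km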
Technology
Artist's and drafting tools
null
493399
https://en.wikipedia.org/wiki/Loudness
Loudness
In acoustics, loudness is the subjective perception of sound pressure. More formally, it is defined as the "attribute of auditory sensation in terms of which sounds can be ordered on a scale extending from quiet to loud". The relation of physical attributes of sound to perceived loudness consists of physical, physiological and psychological components. The study of apparent loudness is included in the topic of psychoacoustics and employs methods of psychophysics. In different industries, loudness may have different meanings and different measurement standards. Some definitions, such as ITU-R BS.1770, refer to the relative loudness of different segments of electronically reproduced sounds, such as for broadcasting and cinema. Others, such as ISO 532A (Stevens loudness, measured in sones), ISO 532B (Zwicker loudness), DIN 45631 and ASA/ANSI S3.4, have a more general scope and are often used to characterize loudness of environmental noise. More modern standards, such as Nordtest ACOU112 and ISO/AWI 532-3 (in progress), take into account other components of loudness, such as onset rate, time variation and spectral masking. Loudness, a subjective measure, is often confused with physical measures of sound strength such as sound pressure, sound pressure level (in decibels), sound intensity or sound power. Weighting filters such as A-weighting and LKFS attempt to compensate measurements to correspond to loudness as perceived by the typical human. Explanation The perception of loudness is related to the sound pressure level (SPL), frequency content and duration of a sound. The relationship between SPL and the loudness of a single tone can be approximated by Stevens's power law, in which SPL has an exponent of 0.67 (a worked example appears at the end of this article). A more precise model, known as the inflected exponential function, indicates that loudness increases with a higher exponent at low and high levels and with a lower exponent at moderate levels. The sensitivity of the human ear changes as a function of frequency, as shown in the equal-loudness graph. Each line on this graph shows the SPL required for frequencies to be perceived as equally loud, and different curves pertain to different sound pressure levels. It also shows that humans with normal hearing are most sensitive to sounds around 2–4 kHz, with sensitivity declining to either side of this region. A complete model of the perception of loudness will include the integration of SPL by frequency. Historically, loudness was measured using an ear-balancing method with an audiometer, in which the amplitude of a sine wave was adjusted by the user to equal the perceived loudness of the sound being evaluated. Contemporary standards for the measurement of loudness are based on the summation of energy in critical bands. Hearing loss When sensorineural hearing loss (damage to the cochlea or in the brain) is present, the perception of loudness is altered. Sounds at low levels (often perceived by those without hearing loss as relatively quiet) are no longer audible to the hearing impaired, but sounds at high levels often are perceived as having the same loudness as they would for an unimpaired listener. This phenomenon can be explained by two theories, called loudness recruitment and softness imperception. Loudness recruitment posits that loudness grows more rapidly with changes in level for certain listeners than for normal listeners. This theory has been accepted as the classical explanation.
Softness imperception, a term coined by Mary Florentine around 2002, proposes that some listeners with sensorineural hearing loss may exhibit a normal rate of loudness growth, but instead have an elevated loudness at their threshold. That is, the softest sound that is audible to these listeners is louder than the softest sound audible to normal listeners. Compensation The loudness control associated with a loudness compensation feature on some consumer stereos alters the frequency response curve to correspond roughly with the equal-loudness characteristic of the ear. Loudness compensation is intended to make the recorded music sound more natural when played at lower levels by boosting low frequencies, to which the ear is less sensitive at lower sound pressure levels. Normalization Loudness normalization is a specific type of audio normalization that equalizes perceived level such that, for instance, commercials do not sound louder than television programs. Loudness normalization schemes exist for a number of audio applications. Broadcast Commercial Advertisement Loudness Mitigation Act EBU R 128 Movie and home theaters Dialnorm Music playback Sound Check in iTunes ReplayGain Normalization systems built into streaming services such as Spotify and YouTube. Measurement Historically, the sone (loudness N) and phon (loudness level LN) units have been used to measure loudness. A-weighting follows human sensitivity to sound and describes relative perceived loudness at quiet to moderate speech levels, around 40 phons. Relative loudness monitoring in production is measured in accordance with ITU-R BS.1770 in units of LKFS. Work began on ITU-R BS.1770 in 2001, after 0 dBFS+ level distortion in converters and lossy codecs had become evident; the original Leq(RLB) loudness metric was proposed by Gilbert Soulodre in 2003. Based on data from subjective listening tests, Leq(RLB) compared favorably to numerous other algorithms. CBC, Dolby, TC Electronic and numerous broadcasters contributed to the listening tests. Loudness levels measured according to the Leq(RLB) specified in ITU-R BS.1770 are reported in LKFS units. The ITU-R BS.1770 measurement system was later improved for multi-channel applications (monaural to 5.1 surround sound). To make the loudness metric cross-genre friendly, a relative measurement gate was added; this work was carried out in 2008 by the EBU. The improvements were incorporated into BS.1770-2. The ITU subsequently updated the true-peak metric (BS.1770-3) and added provision for even more audio channels, for instance 22.2 surround sound (BS.1770-4).
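The quantitative relationships mentioned in this article, Stevens's 0.67 power-law exponent and the sone/phon scales, can be illustrated with a short Python sketch. This is a simplified illustration: the sone formula below is the textbook definition (1 sone = 40 phons, each additional 10 phons doubling loudness), which holds only above roughly 40 phons.

import math

# 1) Stevens's power law with the 0.67 exponent cited above: a 10 dB
#    increase multiplies sound pressure by 10**(10/20) ~ 3.16, and
#    perceived loudness by 3.16**0.67 ~ 2.2, roughly a doubling.
def loudness_ratio(delta_db, exponent=0.67):
    return (10 ** (delta_db / 20)) ** exponent

# 2) Sone/phon conversion, valid above about 40 phons.
def phon_to_sone(phon):
    return 2 ** ((phon - 40) / 10)

def sone_to_phon(sone):
    return 40 + 10 * math.log2(sone)

print(round(loudness_ratio(10), 2))  # ~2.16
print(phon_to_sone(60))              # 4.0: 60 phons is ~4x as loud as 40 phons
print(sone_to_phon(8))               # 70.0 phons

The first result, about 2.16, is the familiar rule of thumb that a 10 dB increase is perceived as roughly a doubling of loudness.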
Physical sciences
Waves
Physics
493465
https://en.wikipedia.org/wiki/North%20American%20plate
North American plate
The North American plate is a tectonic plate containing most of North America, Cuba, the Bahamas, extreme northeastern Asia, and parts of Iceland and the Azores. With an area of roughly 76 million km2, it is the Earth's second largest tectonic plate, behind the Pacific plate (which borders the plate to the west). It extends eastward to the seismically active Mid-Atlantic Ridge, where it meets the Eurasian plate and the Nubian plate at the Azores triple junction, and westward to the Chersky Range in eastern Siberia. The plate includes both continental and oceanic crust. The interior of the main continental landmass includes an extensive granitic core called a craton. Along most of the edges of this craton are fragments of crustal material called terranes, which were accreted to the craton by tectonic action over a long span of time. Much of North America west of the Rocky Mountains is composed of such terranes. Boundaries The southern boundary with the Cocos plate to the west and the Caribbean plate to the east is a transform fault, represented by the Swan Islands Transform Fault under the Caribbean Sea and the Motagua Fault through Guatemala. The parallel Septentrional and Enriquillo–Plantain Garden faults running through Hispaniola and bounding the Gonâve microplate, and the parallel Puerto Rico Trench running north of Puerto Rico and the Virgin Islands and bounding the Puerto Rico–Virgin Islands microplate, are also a part of the boundary. The rest of the southerly margin, which extends east to the Mid-Atlantic Ridge and marks the boundary between the North American plate and the South American plate, is vague but located near the Fifteen-Twenty fracture zone, around 16°N. On the northerly boundary is a continuation of the Mid-Atlantic Ridge called the Gakkel Ridge. The rest of the boundary in the far northwestern part of the plate extends into Siberia. This boundary continues from the end of the Gakkel Ridge as the Laptev Sea Rift, on to a transitional deformation zone in the Chersky Range, then the Ulakhan Fault between it and the Okhotsk microplate, and finally the Aleutian Trench to the end of the Queen Charlotte Fault system (see also: Aleutian Arc). The westerly boundary is the Queen Charlotte Fault running offshore along the coast of Alaska and the Cascadia subduction zone to the north, the San Andreas Fault through California, the East Pacific Rise in the Gulf of California, and the Middle America Trench to the south. On its western edge, the Farallon plate has been subducting under the North American plate since the Jurassic period. The Farallon plate has almost completely subducted beneath the western portion of the North American plate, leaving that part of the North American plate in contact with the Pacific plate as the San Andreas Fault. The Juan de Fuca, Explorer, Gorda, Rivera, Cocos and Nazca plates are remnants of the Farallon plate. The boundary along the Gulf of California is complex. The gulf is underlain by the Gulf of California Rift Zone, a series of rift basins and transform fault segments from the northern end of the East Pacific Rise in the mouth of the gulf to the San Andreas Fault system in the vicinity of the Salton Trough rift/Brawley seismic zone. It is generally accepted that a piece of the North American plate was broken off and transported north as the East Pacific Rise propagated northward, creating the Gulf of California.
However, it is as yet unclear whether the oceanic crust between the rise and the mainland coast of Mexico is actually a new plate beginning to converge with the North American plate, consistent with the standard model of rift zone spreading centers generally. Hotspots A few hotspots are thought to exist below the North American plate. The most notable are the Yellowstone (Wyoming), Jemez Lineament (New Mexico), and Anahim (British Columbia) hotspots. These are thought to be caused by a narrow stream of hot mantle convecting up from the Earth's core–mantle boundary, called a mantle plume, although some geologists think that upper-mantle convection is a more likely cause. The Yellowstone and Anahim hotspots are thought to have first arrived during the Miocene epoch and are still geologically active, creating earthquakes and volcanoes. The Yellowstone hotspot is most notable for the Yellowstone Caldera and the many calderas that lie in the Snake River Plain, while the Anahim hotspot is most notable for the Anahim Volcanic Belt in the Nazko Cone area. Plate motion For the most part, the North American plate moves in a roughly southwest direction away from the Mid-Atlantic Ridge at a rate of about 2.3 centimeters (~1 inch) per year. At the same time, the Pacific plate is moving to the northwest at a speed of between 7 and 11 centimeters (~3–4 inches) per year. The motion of the plate cannot be driven by subduction, as no part of the North American plate is being subducted, except for a small section comprising part of the Puerto Rico Trench; thus, other mechanisms continue to be investigated. One study in 2007 suggests that a mantle convective current is propelling the plate.
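To make the relative-motion arithmetic concrete, here is a toy Python sketch. The speeds come from the figures above, but the pure southwest and northwest bearings and the 9 cm/yr mid-range value are simplifying assumptions, so the result is only indicative.

import math

def velocity_vector(speed_cm_yr, bearing_deg):
    """East and north components of a plate velocity from a compass bearing."""
    rad = math.radians(bearing_deg)
    return speed_cm_yr * math.sin(rad), speed_cm_yr * math.cos(rad)

# Idealized bearings: North American plate moving SW (225 deg) at 2.3 cm/yr,
# Pacific plate moving NW (315 deg) at ~9 cm/yr (mid-range of 7-11).
na = velocity_vector(2.3, 225)
pac = velocity_vector(9.0, 315)

# Pacific motion relative to North America: the sense of slip seen along
# the shared boundary between the two plates.
rel = (pac[0] - na[0], pac[1] - na[1])
print(math.hypot(*rel))  # ~9.3 cm/yr of relative motion in this toy setup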
Physical sciences
Tectonic plates
Earth science
493760
https://en.wikipedia.org/wiki/Ice%20sheet
Ice sheet
In glaciology, an ice sheet, also known as a continental glacier, is a mass of glacial ice that covers surrounding terrain and is greater than 50,000 km2 in extent. The only current ice sheets are the Antarctic ice sheet and the Greenland ice sheet. Ice sheets are bigger than ice shelves or alpine glaciers. Masses of ice covering less than 50,000 km2 are termed an ice cap. An ice cap will typically feed a series of glaciers around its periphery. Although the surface is cold, the base of an ice sheet is generally warmer due to geothermal heat. In places, melting occurs and the melt-water lubricates the ice sheet so that it flows more rapidly. This process produces fast-flowing channels in the ice sheet; these are ice streams. Even stable ice sheets are continually in motion as the ice gradually flows outward from the central plateau, which is the tallest point of the ice sheet, and towards the margins. The ice sheet slope is low around the plateau but increases steeply at the margins. Increasing global air temperatures due to climate change take around 10,000 years to directly propagate through the ice before they influence bed temperatures, but may have an effect through increased surface melting, producing more supraglacial lakes. These lakes may feed warm water to glacial bases and facilitate glacial motion. In previous geologic time spans (glacial periods) there were other ice sheets. During the Last Glacial Period, at the Last Glacial Maximum, the Laurentide Ice Sheet covered much of North America. In the same period, the Weichselian ice sheet covered Northern Europe and the Patagonian Ice Sheet covered southern South America. Overview An ice sheet is a body of ice which covers a land area of continental size, meaning that it exceeds 50,000 km2. The two currently existing ice sheets, in Greenland and Antarctica, have a much greater area than this minimum definition, measuring 1.7 million km2 and 14 million km2, respectively. Both ice sheets are also very thick, as they consist of a continuous ice layer with an average thickness of around 2 km. This ice layer forms because most of the snow which falls onto the ice sheet never melts and is instead compressed by the mass of newer snow layers. This process of ice sheet growth is still occurring today, as illustrated by a well-known example from World War II: a Lockheed P-38 Lightning fighter plane crashed in Greenland in 1942 and was recovered only 50 years later, by which time it had been buried under 81 m (268 ft) of ice that had formed over that period. Dynamics Glacial flows Even stable ice sheets are continually in motion as the ice gradually flows outward from the central plateau, which is the tallest point of the ice sheet, and towards the margins. The ice sheet slope is low around the plateau but increases steeply at the margins. This difference in slope occurs due to an imbalance between high ice accumulation in the central plateau and lower accumulation, as well as higher ablation, at the margins. This imbalance increases the shear stress on a glacier until it begins to flow. The flow velocity and deformation will increase as the equilibrium line between these two processes is approached. This motion is driven by gravity but is controlled by temperature and the strength of individual glacier bases. A number of processes alter these two factors, resulting in cyclic surges of activity interspersed with longer periods of inactivity, on time scales ranging from the hourly (i.e. tidal flows) to the centennial (Milankovitch cycles).
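The link between slope, thickness and flow described above is usually quantified with the glaciological driving stress, tau_d = rho * g * H * sin(alpha). The following Python sketch uses illustrative thickness and slope values, not measurements from this article.

import math

RHO_ICE = 917.0   # kg/m^3, density of glacier ice
G = 9.81          # m/s^2, gravitational acceleration

def driving_stress_kpa(thickness_m, surface_slope_deg):
    """Gravitational driving stress at the bed of an ice sheet:
    tau_d = rho * g * H * sin(alpha). Flow begins once this shear
    stress overcomes the strength of the ice and its bed."""
    alpha = math.radians(surface_slope_deg)
    return RHO_ICE * G * thickness_m * math.sin(alpha) / 1000.0

# Gentle interior plateau versus a steeper, thinner margin:
print(driving_stress_kpa(3000, 0.1))  # ~47 kPa in the flat interior
print(driving_stress_kpa(1000, 1.0))  # ~157 kPa near the steeper margin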
On much shorter, hour-to-hour timescales, surges of ice motion can be modulated by tidal activity. The influence of a 1 m tidal oscillation can be felt as much as 100 km from the sea. During larger spring tides, an ice stream will remain almost stationary for hours at a time, before a surge of around a foot in under an hour, just after the peak high tide; a stationary period then takes hold until another surge towards the middle or end of the falling tide. At neap tides, this interaction is less pronounced, and surges instead occur approximately every 12 hours. Increasing global air temperatures due to climate change take around 10,000 years to directly propagate through the ice before they influence bed temperatures, but may have an effect through increased surface melting, producing more supraglacial lakes. These lakes may feed warm water to glacial bases and facilitate glacial motion. Lakes of a diameter greater than ~300 m are capable of creating a fluid-filled crevasse to the glacier/bed interface. When these crevasses form, the entirety of the lake's (relatively warm) contents can reach the base of the glacier in as little as 2–18 hours, lubricating the bed and causing the glacier to surge. Water that reaches the bed of a glacier may freeze there, increasing the thickness of the glacier by pushing it up from below. Boundary conditions Where the margins end at a marine boundary, excess ice is discharged through ice streams or outlet glaciers. There, it either falls directly into the sea or accumulates atop floating ice shelves. Those ice shelves then calve icebergs at their periphery if they carry an excess of ice. Ice shelves can also experience accelerated calving due to basal melting. In Antarctica, this is driven by heat fed to the shelf by the circumpolar deep water current, which is 3 °C above the ice's melting point. The presence of ice shelves has a stabilizing influence on the glaciers behind them, while their absence is destabilizing. For instance, when the Larsen B ice shelf in the Antarctic Peninsula collapsed over three weeks in February 2002, the four glaciers behind it (Crane Glacier, Green Glacier, Hektoria Glacier and Jorum Glacier) all started to flow at a much faster rate, while the two glaciers (Flask and Leppard) stabilized by the remnants of the ice shelf did not accelerate. The collapse of the Larsen B shelf was preceded by thinning of just 1 metre per year, while some other Antarctic ice shelves have displayed thinning of tens of metres per year. Further, increased ocean temperatures of 1 °C may lead to up to 10 metres per year of basal melting. Ice shelves are always stable under mean annual temperatures of −9 °C, but never stable above −5 °C; this places the regional warming of 1.5 °C which preceded the collapse of Larsen B in context. Marine ice sheet instability In the 1970s, Johannes Weertman proposed that, because seawater is denser than ice, any ice sheet grounded below sea level inherently becomes less stable as it melts, due to Archimedes' principle. Effectively, these marine ice sheets must have enough mass to exceed the mass of the seawater displaced by the ice, which requires excess thickness. As the ice sheet melts and becomes thinner, the weight of the overlying ice decreases. At a certain point, sea water could force itself into the gaps which form at the base of the ice sheet, and marine ice sheet instability (MISI) would occur.
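A minimal sketch of the flotation arithmetic behind Weertman's argument, using standard densities for ice and seawater; the thickness and depth values are illustrative assumptions.

RHO_ICE = 917.0        # kg/m^3
RHO_SEAWATER = 1028.0  # kg/m^3

def flotation_thickness(water_depth_m):
    """Minimum ice thickness needed to stay grounded in water of a given
    depth: the ice column must outweigh the seawater it displaces, so
    H_f = (rho_sw / rho_ice) * depth. Thinner ice floats off its bed."""
    return (RHO_SEAWATER / RHO_ICE) * water_depth_m

def is_grounded(ice_thickness_m, water_depth_m):
    return ice_thickness_m > flotation_thickness(water_depth_m)

# Ice 1,000 m thick resting on a bed 800 m below sea level:
print(flotation_thickness(800))  # ~897 m of ice needed to stay grounded
print(is_grounded(1000, 800))    # True, but thinning toward ~897 m brings
                                 # the grounding line to flotation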
Even if the ice sheet is grounded below sea level, MISI cannot occur as long as there is a stable ice shelf in front of it. The boundary between the ice sheet and the ice shelf, known as the grounding line, is particularly stable if it is constrained in an embayment. In that case, the ice sheet may not be thinning at all, as the amount of ice flowing over the grounding line is likely to match the annual accumulation of ice from snow upstream. Otherwise, ocean warming at the base of an ice shelf tends to thin it through basal melting. As the ice shelf becomes thinner, it exerts less of a buttressing effect on the ice sheet: the so-called back stress decreases, and the grounding line is pushed backwards. The ice sheet is likely to start losing more ice from the new location of the grounding line, and so become lighter and less capable of displacing seawater. This eventually pushes the grounding line back even further, creating a self-reinforcing mechanism. Vulnerable locations Because the entire West Antarctic Ice Sheet is grounded below sea level, it would be vulnerable to geologically rapid ice loss in this scenario. In particular, the Thwaites and Pine Island glaciers are most likely to be prone to MISI, and both glaciers have been rapidly thinning and accelerating in recent decades. As a result, sea level rise from the ice sheet could be accelerated by tens of centimeters within the 21st century alone. The majority of the East Antarctic Ice Sheet would not be affected. Totten Glacier is the largest glacier there known to be subject to MISI; its potential contribution to sea level rise is comparable to that of the entire West Antarctic Ice Sheet. Totten Glacier has been losing mass nearly monotonically in recent decades, suggesting that rapid retreat is possible in the near future, although the dynamic behavior of Totten Ice Shelf is known to vary on seasonal to interannual timescales. The Wilkes Basin is the only major submarine basin in Antarctica that is not thought to be sensitive to warming. Ultimately, even geologically rapid sea level rise would still most likely require several millennia for the entirety of these ice masses (WAIS and the subglacial basins) to be lost. Marine ice cliff instability A related process known as marine ice cliff instability (MICI) posits that ice cliffs exceeding a critical above-ground height and basal (underground) depth are likely to collapse under their own weight once the peripheral ice stabilizing them is gone. Their collapse then exposes the ice masses behind them to the same instability, potentially resulting in a self-sustaining cycle of cliff collapse and rapid ice sheet retreat, i.e. sea level rise of a meter or more by 2100 from Antarctica alone. This theory has been highly influential: in a 2020 survey of 106 experts, the paper which advanced it was considered more important than even the 2014 IPCC Fifth Assessment Report. Sea level rise projections which involve MICI are much larger than the others, particularly under high warming rates. At the same time, this theory has also been highly controversial. It was originally proposed in order to describe how the large sea level rise during the Pliocene and the Last Interglacial could have occurred, yet more recent research has found that these sea level rise episodes can be explained without any ice cliff instability taking place.
Research in Pine Island Bay in West Antarctica (the location of Thwaites and Pine Island Glacier) has found seabed gouging by ice from the Younger Dryas period which appears consistent with MICI. However, it indicates a "relatively rapid" yet still prolonged ice sheet retreat, with substantial inland movement of the grounding line taking place over an estimated 1,100 years (from ~12,300 years Before Present to ~11,200 B.P.). In recent years, the fast 2002–2004 retreat of Crane Glacier immediately after the collapse of the Larsen B ice shelf (before it reached a shallow fjord and stabilized) could have involved MICI, but there were not enough observations to confirm or refute this theory. The retreat of the Greenland ice sheet's three largest glaciers (Jakobshavn, Helheim, and Kangerdlugssuaq Glacier) did not resemble predictions from ice cliff collapse at least up until the end of 2013, but an event observed at Helheim Glacier in August 2014 may fit the definition. Further, modelling done after the initial hypothesis indicates that ice-cliff instability would require implausibly fast ice shelf collapse (within an hour, for cliffs of the heights in question), unless the ice had already been substantially damaged beforehand. Ice cliff breakdown would also produce a large amount of debris in the coastal waters, known as ice mélange, and multiple studies indicate that its build-up would slow or even outright stop the instability soon after it started. Some scientists, including the originators of the hypothesis, Robert DeConto and David Pollard, have suggested that the best way to resolve the question would be to precisely determine sea level rise during the Last Interglacial. MICI can be effectively ruled out if the SLR at the time stayed below a lower threshold, while it is very likely if the SLR exceeded a higher one. As of 2023, the most recent analysis indicates that the Last Interglacial SLR is unlikely to have reached the highest values proposed in earlier research, which appear inconsistent with the new paleoclimate data from The Bahamas and the known history of the Greenland Ice Sheet. Earth's current two ice sheets Antarctic ice sheet West Antarctic ice sheet East Antarctic ice sheet Greenland ice sheet Role in carbon cycle Historically, ice sheets were viewed as inert components of the carbon cycle and were largely disregarded in global models. In the 2010s, research demonstrated the existence of uniquely adapted microbial communities, high rates of biogeochemical and physical weathering in ice sheets, and storage and cycling of organic carbon in excess of 100 billion tonnes. There is a massive contrast in carbon storage between the two ice sheets. While only about 0.5–27 billion tonnes of pure carbon are present underneath the Greenland ice sheet, 6,000–21,000 billion tonnes of pure carbon are thought to be located underneath Antarctica. This carbon can act as a climate change feedback if it is gradually released through meltwater, thus increasing overall carbon dioxide emissions. For comparison, 1,400–1,650 billion tonnes of carbon are contained within the Arctic permafrost, and annual human-caused carbon dioxide emissions amount to around 40 billion tonnes of CO2 (see the worked comparison below). In Greenland, there is one known area, at Russell Glacier, where meltwater carbon is released into the atmosphere as methane, which has a much larger global warming potential than carbon dioxide. However, it also harbours large numbers of methanotrophic bacteria, which limit those emissions.
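Purely to illustrate the comparison above, this Python sketch converts the Antarctic subglacial carbon range to CO2-equivalent mass using the molar-mass ratio 44/12 and expresses it in years of present-day emissions. It assumes, unrealistically, that all of the carbon would be released as CO2.

# Converting a carbon stock to CO2-equivalent mass uses the molar-mass
# ratio of CO2 to C: 44/12 ~ 3.67.
CO2_PER_C = 44.0 / 12.0

antarctic_c_gt = (6_000, 21_000)  # billion tonnes of carbon (range above)
annual_co2_gt = 40                # billion tonnes of CO2 per year (approx.)

low, high = (c * CO2_PER_C / annual_co2_gt for c in antarctic_c_gt)
print(f"Antarctic subglacial carbon ~ {low:,.0f}-{high:,.0f} years "
      "of current human CO2 emissions if fully released")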
In geologic timescales Normally, the transitions between glacial and interglacial states are governed by Milankovitch cycles, which are patterns in insolation (the amount of sunlight reaching the Earth). These patterns are caused by variations in the shape of the Earth's orbit and its angle relative to the Sun, caused by the gravitational pull of other planets as they go through their own orbits. For instance, during at least the last 100,000 years, portions of the ice sheet covering much of North America, the Laurentide Ice Sheet, broke apart, sending large flotillas of icebergs into the North Atlantic. When these icebergs melted, they dropped the boulders and other continental rocks they carried, leaving layers known as ice-rafted debris. These so-called Heinrich events, named after their discoverer Hartmut Heinrich, appear to have a 7,000–10,000-year periodicity and occur during cold periods within the last glacial period. Internal ice sheet "binge-purge" cycles may be responsible for the observed effects, where the ice builds to unstable levels, then a portion of the ice sheet collapses. External factors might also play a role in forcing ice sheets. Dansgaard–Oeschger events are abrupt warmings of the northern hemisphere occurring over the space of perhaps 40 years. While these D–O events occur directly after each Heinrich event, they also occur more frequently, around every 1,500 years; from this evidence, paleoclimatologists surmise that the same forcings may drive both Heinrich and D–O events. Hemispheric asynchrony in ice sheet behavior has been observed by linking short-term spikes of methane in Greenland ice cores and Antarctic ice cores. During Dansgaard–Oeschger events, the northern hemisphere warmed considerably, dramatically increasing the release of methane from wetlands that were otherwise tundra during glacial times. This methane quickly distributes evenly across the globe, becoming incorporated in Antarctic and Greenland ice. With this tie, paleoclimatologists have been able to say that the ice sheets on Greenland only began to warm after the Antarctic ice sheet had been warming for several thousand years. Why this pattern occurs is still open for debate. Antarctic ice sheet during geologic timescales Greenland ice sheet during geologic timescales
Physical sciences
Glaciology
null
493890
https://en.wikipedia.org/wiki/Juan%20de%20Fuca%20plate
Juan de Fuca plate
The Juan de Fuca plate is a small tectonic plate (microplate) generated from the Juan de Fuca Ridge that is subducting beneath the northerly portion of the western side of the North American plate at the Cascadia subduction zone. It is named after the explorer Juan de Fuca. One of the smallest of Earth's tectonic plates, the Juan de Fuca plate is a remnant of the once-vast Farallon plate, which is now largely subducted underneath the North American plate. In plate tectonic reconstructions, the Juan de Fuca plate is referred to as the Vancouver plate between the break-up of the Farallon plate 55–52 Ma and the activation of the San Andreas Fault c. 30 Ma. Origins The Juan de Fuca plate system has its origins in Panthalassa's oceanic basin and crust. This oceanic crust has primarily been subducted under the North American plate and the Eurasian plate. Panthalassa's oceanic plate remnants are understood to be the Juan de Fuca, Gorda, Cocos and Nazca plates, all four of which were part of the Farallon plate. Extent The Juan de Fuca plate is bounded on the south by the Blanco fracture zone (running northwest off the coast of Oregon), on the north by the Nootka Fault (running southwest off Nootka Island, near Vancouver Island, British Columbia), and along the west by the Pacific plate (which covers most of the Pacific Ocean and is the largest of Earth's tectonic plates). The Juan de Fuca plate has since fractured into three pieces, and the name is applied to the entire plate in some references, but in others only to the central portion. The three fragments are differentiated as follows: the piece to the south is known as the Gorda plate, the piece to the north is known as the Explorer plate, and the central portion retains the Juan de Fuca name. The separate pieces are demarcated by the large offsets of the undersea spreading zone. Volcanism This subducting plate system has formed the Cascade Range, the Cascade Volcanic Arc, and the Pacific Ranges along the west coast of North America, from southern British Columbia to northern California. These in turn are part of the Pacific Ring of Fire, a much larger-scale volcanic feature that extends around much of the rim of the Pacific Ocean. Earthquakes The last megathrust earthquake at the Cascadia subduction zone was the 1700 Cascadia earthquake, estimated to have had a moment magnitude of 8.7 to 9.2. Based on carbon dating of local tsunami deposits, it is inferred to have occurred around 1700. Evidence of this earthquake is also seen in the ghost forest along the bank of the Copalis River in Washington. The rings of the dead trees indicate that they died around 1700; it is believed that they were killed when the earthquake lowered the ground beneath them, causing the trees to be flooded by saltwater. Japanese records indicate that a tsunami occurred in Japan on 26 January 1700, which was likely caused by this earthquake. In 2008, small earthquakes were observed within the Juan de Fuca plate. The unusual quakes were described as "more than 600 quakes over the past 10 days in a basin 150 miles [240 km] southwest of Newport". The quakes were unlike most quakes in that they did not follow the pattern of a large quake followed by smaller aftershocks; rather, they were simply a continual deluge of small quakes. Furthermore, they did not occur on the tectonic plate boundary, but rather in the middle of the plate. The subterranean quakes were detected on hydrophones, and scientists described the sounds as similar to thunder, and unlike anything previously recorded.
Carbon sequestration potential The basaltic formations of the Juan de Fuca plate could potentially be suitable for long-term CO2 sequestration as part of a carbon capture and storage (CCS) system. Injection of CO2 would lead to the formation of stable carbonates. It is estimated that 100 years of US carbon emissions (at the current rate) could be stored securely, without risk of leakage back into the atmosphere. Tearing In 2019, scientists from the University of California, Berkeley published a study in Geophysical Research Letters reporting that, by using data from over 30,000 seismic waves and 217 earthquakes to create a three-dimensional map, they had revealed the existence of a hole in the subducted part of the Juan de Fuca plate, and speculated that the hole is an indication of a deep tear in the plate along a "preexisting zone of weakness". According to William B. Hawley and Richard M. Allen, the authors of the study, the hole may be the cause of volcanism and earthquakes on the plate, and is causing deformation of the offshore part of the plate. The deformation may cause the plate to fragment, with the remaining un-subducted small pieces becoming attached to other plates nearby. Lithosphere–asthenosphere boundary beneath Juan de Fuca In 2016, a geophysical study was published on the possible presence of a layer of buoyant material between the Earth's lithosphere and the asthenosphere under the Juan de Fuca plate. The study extends the theory of partial melt at the lithosphere–asthenosphere boundary to subduction zones, specifically at convergent margins. Using teleseismic body-wave tomography, a low-velocity zone 50–100 km thick was detected in the sublithospheric region beneath the Juan de Fuca plate. The observation, along with fluid-mechanical calculations that factor in Couette and Poiseuille flows, supports the hypothesis of the accumulation of a buoyant material characterized by low viscosity. The exact source of this anomaly remains unknown, although its highly conductive nature and low seismic-wave velocity are well observed.
Physical sciences
Tectonic plates
Earth science
493913
https://en.wikipedia.org/wiki/Plate%20armour
Plate armour
Plate armour is a historical type of personal body armour made from bronze, iron, or steel plates, culminating in the iconic suit of armour entirely encasing the wearer. Full plate steel armour developed in Europe during the Late Middle Ages, especially in the context of the Hundred Years' War, from the coat of plates (popular in the late 13th and early 14th centuries) worn over mail suits during the 14th century, a century famous for transitional armour, in which plate gradually replaced mail. In Europe, full plate armour reached its peak in the 15th and 16th centuries. The full suit of armour, also referred to as a panoply, is thus a feature of the very end of the Middle Ages and the Renaissance period. Its popular association with the "medieval knight" is due to the specialised jousting armour which developed in the 16th century. Full suits of Gothic plate armour and Milanese plate armour were worn on the battlefields of the Burgundian Wars, Wars of the Roses, Polish–Teutonic Wars, Eighty Years' War, French Wars of Religion, Italian Wars, Hungarian–Ottoman Wars, Ottoman–Habsburg wars, Polish–Ottoman Wars, a significant part of the Hundred Years' War, and even the Thirty Years' War. The most heavily armoured troops of the period were heavy cavalry, such as the gendarmes and early cuirassiers, but the infantry troops of the Swiss mercenaries and the Landsknechts also took to wearing lighter suits of "three-quarters" munition armour, leaving the lower legs unprotected. The use of plate armour began to decline in the early 17th century, but it remained common both among the nobility (e.g. the Emperor Ferdinand II, Louis XIII, Philip IV of Spain, Maurice of Orange and Gustavus Adolphus) and among the cuirassiers throughout the European wars of religion. After the mid-17th century, plate armour was mostly reduced to the simple breastplate or cuirass worn by cuirassiers, with the exception of the Polish hussars, who still used considerable amounts of plate. This was due to the development of the musket, which could penetrate armour at a considerable distance. For infantry, the breastplate gained renewed importance with the development of shrapnel in the late Napoleonic Wars. The use of steel plates sewn into flak jackets dates to World War II, and such plates have been replaced by more modern materials, such as fibre-reinforced plastic, since the mid-20th century. Mail armour is a layer of protective clothing worn most commonly from the 9th to the 13th century, though it continued to be worn under plate armour until the 15th century. Mail was made from hundreds of small interlinking iron or steel rings held together by rivets. It was made this way so that it would be able to follow the contour of the wearer's body, maximizing comfort. Mail armour was designed mainly to defend against thrusting and cutting weapons, rather than bludgeons. Typical clothing articles made of mail at the time were hooded cloaks, gloves, trousers, and shoes. From the 10th to the 13th century, mail armour was so popular in Europe that the period is known as the age of mail. Early history Partial plate armour, made out of bronze, which protected the chest and the lower limbs, was used by the ancient Greeks as early as the late Bronze Age. The Dendra panoply protected the entire torso on both sides and included shoulder and neck protections. Less restrictive and heavy armour would become more widespread in the form of the muscle cuirass during classical antiquity, before being superseded by other types of armour.
Parthian and Sassanian heavy cavalry, known as clibanarii, used cuirasses made out of scales or mail and small, overlapping plates in the manner of the manica for the protection of arms and legs. Plate armour in the form of the lorica segmentata was used by the Roman Empire between the 1st century BC and the 4th century AD. Single plates of metal armour were again used from the late 13th century on, to protect joints and shins, and these were worn over a mail hauberk. Gradually the number of plate components of medieval armour increased, protecting further areas of the body, and in barding those of a cavalryman's horse. Armourers developed skills in articulating the lames or individual plates for parts of the body that needed to be flexible, and in fitting armour to the individual wearer like a tailor. The cost of a full suit of high-quality fitted armour, as opposed to the cheaper munition armour (the equivalent of ready-to-wear), was enormous, and inevitably restricted it to the wealthy who were seriously committed to either soldiering or jousting. The rest of an army wore inconsistent mixtures of pieces, with mail still playing an important part. Japan In the Kofun period (250–538), iron plate cuirasses (tankō) and helmets were being made. Plate armour was used in Japan during the Nara period (646–793); both plate and lamellar armours have been found in burial mounds, and haniwa (ancient clay figures) have been found depicting warriors wearing full armour. In Japan, the warfare of the Sengoku period (1467–1615) required large quantities of armour to be produced for the ever-growing armies of foot soldiers (ashigaru). Simple munition-quality chest armours (dō) and helmets (kabuto) were mass-produced. In 1543, the Portuguese brought matchlock firearms (tanegashima) to Japan. As Japanese swordsmiths began mass-producing matchlock firearms and firearms became used in war, the use of lamellar armour (ō-yoroi and dō-maru), previously used as samurai armour, gradually decreased. Japanese armour makers started to make new types of armour made of larger iron plates and plated leather. This new suit of armour is called tōsei gusoku (gusoku), which means "modern armour". The type of gusoku which, like plate armour, covered the front and back of the body with a single iron plate with a raised center and a V-shaped bottom was specifically called nanban dō gusoku ("Western-style gusoku") and was used by some samurai. Japanese armour makers designed bulletproof plate armour called tameshi gusoku ("bullet-tested"), which allowed soldiers to continue wearing their armour despite the heavy use of firearms in the late 16th century. In the 17th century, warfare in Japan came to an end, but the samurai continued to use plate armour until the end of the samurai era in the 1860s, with the last known use of samurai armour occurring in 1877, during the Satsuma Rebellion. Late Middle Ages By about 1420, complete suits of plate armour had been developed in Europe.
A full suit of plate armour would have consisted of a helmet, a gorget (or bevor), spaulders, pauldrons with gardbraces to cover the armpits (as seen in French armour) or besagews (also known as rondels, mostly used in Gothic armour), rerebraces, couters, vambraces, gauntlets, a cuirass (breastplate and backplate) with a fauld, tassets and a culet, a mail skirt, cuisses, poleyns, greaves, and sabatons. The very fullest sets, known as garnitures, more often made for jousting than war, included pieces of exchange, alternate pieces suiting different purposes, so that the suit could be configured for a range of different uses, for example fighting on foot or on horse. By the Late Middle Ages even infantry could afford to wear several pieces of plate armour. Armour production was a profitable and pervasive industry during the Middle Ages and the Renaissance. A complete suit of plate armour made from well-tempered steel would weigh around 15–25 kg (33–55 lb). The wearer remained highly agile and could jump, run and otherwise move freely, as the weight of the armour was spread evenly throughout the body. The armour was articulated and covered a man's entire body completely from neck to toe. In the 15th and 16th centuries, plate-armoured soldiers were the nucleus of every army. Large bodies of men-at-arms numbering thousands, or even more than ten thousand men (approximately 60% to 70% of French armies were men-at-arms, and the percentage was also high in other countries), fought on foot, wearing full plate next to archers and crossbowmen. This was commonly seen in the Western European armies, especially during the Hundred Years' War, the Wars of the Roses and the Italian Wars. European leaders in armouring techniques were northern Italians, especially from Milan, and southern Germans, who had somewhat different styles. But styles were diffused around Europe, often by the movement of armourers; the Renaissance Greenwich armour was made by a royal workshop near London that had imported Italian, Flemish and (mostly) German craftsmen, though it soon developed its own unique style. Ottoman Turkey also made wide use of plate armour, but incorporated large amounts of mail into its armour, which was widely used by shock troops such as the Janissary corps.
In armoured techniques taught in the German school of swordsmanship, the attacker concentrates on these "weak spots", resulting in a fighting style very different from unarmoured sword-fighting. Because of this weakness, most warriors wore a mail shirt (haubergeon or hauberk) beneath their plate armour (or coat-of-plates). Later, full mail shirts were replaced with mail patches, called gussets, sewn onto a gambeson or arming jacket. Further protection for plate armour came from the use of small round plates called besagews, which covered the armpit area, and the addition of couters and poleyns with "wings" to protect the inside of the joint.

Renaissance

German so-called Maximilian armour of the early 16th century is a style using heavy fluting and some decorative etching, as opposed to the plainer finish on 15th-century white armour. The shapes include influence from Italian styles. This era also saw the use of closed helms, as opposed to the 15th-century-style sallets and barbutes. During the early 16th century, the helmet and neckguard design was reformed to produce the so-called Nürnberg armour, many examples of which were masterpieces of workmanship and design.

As firearms became better and more common on the battlefield, the utility of full armour gradually declined, and full suits became restricted to those made for jousting, which continued to develop. The decoration of fine armour greatly increased in the period, using a range of techniques and further greatly increasing the cost. Elaborately decorated plate armour for royalty and the very wealthy was being produced. Highly decorated armour is often called parade armour, a somewhat misleading term as such armour might well be worn on active military service. Steel plate armour for Henry II of France, made in 1555, is covered with meticulous embossing, which has been subjected to blueing, silvering and gilding. Such work required armourers to either collaborate with artists or have artistic skill of their own; another alternative was to take designs from ornament prints and other prints, as was often done. Daniel Hopfer was an etcher of armour by training, who developed etching as a form of printmaking. Other artists such as Hans Holbein the Younger produced designs for armour. The Milanese armourer Filippo Negroli, from a leading dynasty of armourers, was the most famous modeller of figurative relief decoration on armour.

Infantry

Reduced plate armour, typically consisting of a breastplate, a burgonet, morion or cabasset, and gauntlets, also became popular among 16th-century mercenaries, and there are many references to so-called munition armour being ordered for infantrymen at a fraction of the cost of full plate armour. This mass-produced armour was often heavier and made of lower quality metal than fine armour for commanders.

Jousting

Specialised jousting armour produced in the late 15th to 16th century was heavier, and could weigh as much as 50 kg (110 lb); as it was not intended for free combat, it did not need to permit free movement, the only limiting factor being the maximum weight that could be carried by a warhorse of the period. The medieval joust has its origins in the military tactics of heavy cavalry during the High Middle Ages. From the 15th century, jousting had become a sport (hastilude) with less direct relevance to warfare, for example using separate specialized armour and equipment.
During the 1490s, Emperor Maximilian I invested a great deal of effort in perfecting the sport, for which he received his nickname of "The Last Knight". Rennen and Stechen were two sportive forms of the joust developed during the 15th century and practiced throughout the 16th century. The armours used for these two styles of the joust were known as Rennzeug and Stechzeug, respectively. The Stechzeug in particular developed into extremely heavy armour which completely inhibited the movement of the rider, in its latest forms resembling an armour-shaped cabin integrated into the horse armour more than a functional suit of armour. Such forms of sportive equipment during the final phase of the joust in 16th-century Germany gave rise to modern misconceptions about the heaviness or clumsiness of "medieval armour", as notably popularised by Mark Twain's A Connecticut Yankee in King Arthur's Court.

The extremely heavy helmets of the Stechzeug are explained by the fact that the aim was to detach the crest of the opponent's helmet, resulting in frequent full impact of the lance on the helmet. By contrast, the Rennen was a type of joust with lighter contact. Here, the aim was to hit the opponent's shield. The specialised Rennzeug was developed at the request of Maximilian, who desired a return to a more agile form of joust compared to the heavily armoured "full contact" Stechen. In the Rennzeug, the shield was attached to the armour with a mechanism of springs and would detach itself upon contact.

Early modern period

Plate armour was widely used by most armies until the end of the 17th century for both foot and mounted troops such as the cuirassiers, London lobsters, dragoons, demi-lancers and Polish hussars. The infantry armour of the 16th century developed into the Savoyard type of three-quarters armour by 1600. Full plate armour was expensive to produce and therefore remained restricted to the upper strata of society; lavishly decorated suits of armour remained the fashion with 18th-century nobles and generals long after they had ceased to be militarily useful on the battlefield due to the advent of inexpensive muskets.

The development of powerful firearms made all but the finest and heaviest armour obsolete. The increasing power and availability of firearms and the nature of large, state-supported infantry led to more portions of plate armour being cast off in favour of cheaper, more mobile troops. Leg protection was the first part to go, replaced by tall leather boots. By the beginning of the 18th century, only field marshals, commanders and royalty remained in full armour on the battlefield, more as a sign of rank than for practical considerations. It remained fashionable for monarchs to be portrayed in armour during the first half of the 18th century (late Baroque period), but even this tradition became obsolete. Thus, a portrait of Frederick the Great in 1739 still shows him in armour, while a later painting showing him as a commander in the Seven Years' War (c. 1760) depicts him without armour.

Modern body armour

Body armour made a brief reappearance in the American Civil War with mixed success. During World War I, both sides experimented with shrapnel armour, and some soldiers used their own dedicated ballistic armour such as the American Brewster Body Shield, although none were widely produced.
The heavy cavalry armour (cuirass) used by the German, British, and French empires during the Napoleonic Wars remained in active use until the first few months of World War I, when French cuirassiers went to meet the enemy dressed in armour outside of Paris. The cuirass represents the final stage of the tradition of plate armour descended from the Late Middle Ages. Meanwhile, makeshift steel armour for protection against shrapnel, along with early forms of ballistic vests, was developed from the mid-19th century onwards.

Plate armour was also famously used in Australia by the Kelly Gang, a group of four bushrangers led by Edward "Ned" Kelly, who had constructed four suits of improvised armour from plough mouldboards and whose crime spree culminated in a violent shootout with police at the town of Glenrowan in 1880. The armour was reasonably effective against bullets and made Kelly seem almost invincible to the policemen, who likened him to an evil spirit or Bunyip, with one constable reporting that "[I] fired at him point blank and hit him straight in the body. But there is no use firing at Ned Kelly; he can't be hurt". However, the armour left sections of the groin and limbs exposed; during the infamous "Glenrowan Affair", gang member Joe Byrne was killed by a bullet to the groin, Kelly was captured after a fifteen-minute last stand against police (having sustained a total of 28 bullet wounds over his body), and the remaining two members are thought to have committed suicide shortly after. Although the recovered suits were almost immediately mismatched, they have since been reorganized and restored, and today remain a powerful symbol of the Australian outback.

In 1916, General Adrian of the French army provided an abdominal shield which was light in weight (approximately one kilogram) and easy to wear. A number of British officers recognised that many casualties could be avoided if effective armour were available.

The first usage of the term "flak jacket" refers to the armour originally developed by the Wilkinson Sword company during World War II to help protect Royal Air Force (RAF) air personnel from flying debris and shrapnel. The Red Army also made use of ballistic steel body armour, typically chestplates, for combat engineers and assault infantry. After World War II, steel plates were soon replaced by vests made from synthetic fibre; in the 1950s, these were supplemented by plates made of either boron carbide, silicon carbide, or aluminium oxide. They were issued to the crews of low-flying aircraft, such as the UH-1 and UC-123, during the Vietnam War. The synthetic fibre Kevlar was introduced in 1971, and most ballistic vests since the 1970s are based on Kevlar, optionally with the addition of trauma plates to reduce the risk of blunt trauma injury. Such plates may be made of ceramic, metal (steel or titanium) or synthetic materials.
https://en.wikipedia.org/wiki/Farallon%20plate
Farallon plate
The Farallon plate was an ancient oceanic tectonic plate. It formed one of the three main plates of Panthalassa, alongside the Izanagi plate and the Phoenix plate, which were connected by a triple junction. The Farallon plate began subducting under the west coast of the North American plate—then located in modern Utah—as Pangaea broke apart and after the formation of the Pacific plate at the center of the triple junction during the Early Jurassic. It is named for the Farallon Islands, which are located just west of San Francisco, California. Over time, the central part of the Farallon plate was subducted under the southwestern part of the North American plate. The remains of the Farallon plate are the Explorer, Gorda, and Juan de Fuca plates, which are subducting under the northern part of the North American plate; the Cocos plate, subducting under Central America; and the Nazca plate, subducting under the South American plate.

The Farallon plate is also responsible for transporting old island arcs and various fragments of continental crust which have rifted off other, distant plates. These fragments from elsewhere are called terranes (sometimes, "exotic" terranes). During the subduction of the Farallon plate, it accreted these island arcs and terranes to the North American plate. Much of western North America is composed of these accreted terranes.

Tomographic imaging of the plate

As an ancient tectonic plate, the Farallon plate must be studied using methods that allow researchers to see deep beneath the Earth's surface. The understanding of the Farallon plate has evolved as details from seismic tomography provide improved images of the submerged remnants. Since the North American west coast has a convoluted structure, significant work has been required to resolve the complexity. Seismic tomography can be used to image the remainder of the subducted plate because it is still "cold", that is, it has not yet reached thermal equilibrium with the mantle. This matters for tomography because seismic waves travel at different velocities through materials of different temperatures, so the Farallon slab appears as a velocity anomaly in the tomographic model.

Shallow angle subduction and deformation

Multiple studies show that the subduction of the Farallon plate was characterized by a period of "flat-slab subduction", that is, the subduction of a plate at a relatively shallow angle beneath the overriding crust (in this case, North America). This phenomenon accounts for the far-inland orogenesis of the Rocky Mountains and other ranges in North America which are much farther from the convergent plate boundary than is typical of a subduction-generated orogeny. Significant deformation of the slab also occurred due to this flat subduction, and it has been imaged by seismic tomography. There is a concentration of velocity anomalies in the tomography that is thicker than the slab itself should be, indicating that folding and deformation occurred beneath the surface during subduction. In other words, more of the slab should be in the lower mantle, but the deformation has caused it to remain shallower, in the upper mantle. Multiple hypotheses have been proposed to explain this shallow subduction angle and the resulting deformation. Some studies suggest that the faster movement of the North American plate caused the slab to flatten, resulting in slab rollback.
Another cause of flat slab subduction may be slab buoyancy, a characteristic influenced by the presence of oceanic plateaus (or oceanic flood basalts). In addition to influencing slab buoyancy, some oceanic plateaus may have also become accreted to North America. It has been suggested that this deformation may go so far as to include a tear in the slab, where a piece of the subducted Farallon plate has broken off, creating multiple slab remnants. This is supported by tomography studies and provides further explanation for the formation of Laramide structures that are further inland from the plate edge.

Interpretations of Farallon plate subduction

A 2013 study proposed two additional now-subducted plates that would account for some of the unexplained complexities of the accreted terranes, suggesting that the Farallon should be partitioned into Northern Farallon, Angayucham, Mezcalera and Southern Farallon segments based on recent tomographic models. Under this model, the North American continent overrode a series of subduction trenches, and several microcontinents (similar to those in the modern-day Indonesian Archipelago) were added to it. These microcontinents must have had adjacent oceanic plates that are not represented in previous models of Farallon subduction, so this interpretation offers a different perspective on the history of collision. Based on this model, the plate moved west, causing the following geologic events to occur:

165–155 Myr ago: an exotic terrane (the Mezcalera promontory) started to be subducted and the orogenesis of the Rocky Mountains began.
125 Myr ago: the North American plate collided with a chain of island arcs, causing the Sevier orogeny.
124–90 Myr ago: the Omineca magmatic belts formed in the Pacific Northwest as the Mezcalera promontory was overridden.
85 Myr ago: Sonora volcanism occurred in the Moctezuma volcanic field.
85–55 Myr ago: the Laramide orogeny occurred as buoyant terranes were accreted.
72–69 Myr ago: the Carmacks volcanism occurred in present-day Canada as an island arc was completely subducted.
55–50 Myr ago: the accretion of the Siletzia and Pacific Rim terranes occurred.
55–50 Myr ago: the volcanism of the Coast Mountain island arc concluded.

When the final archipelago, the Siletzia archipelago, lodged as a terrane, the associated trench stepped west. When this happened, the trench that had been characterized as an oceanic-oceanic subduction environment approached the North American margin and eventually became the current Cascadia subduction zone. This created a slab window. Other models have been proposed for the Farallon's influence on the Laramide orogeny, including dewatering of the slab, which would have led to intense uplift and magmatism.
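To make the notion of a tomographic velocity anomaly used above concrete, the following minimal Python sketch shows how a cold, seismically fast slab registers as a fractional velocity perturbation against a 1-D reference model. The reference profile, anomaly depth and amplitude here are illustrative assumptions, not a real Earth model or real Farallon data.

import numpy as np

# Depth axis and a toy 1-D reference P-wave speed profile (km/s).
depth_km = np.linspace(0, 1500, 301)
v_ref = 8.0 + 0.003 * depth_km

# A cold slab is seismically fast; model it as a +1% Gaussian anomaly
# centred at 800 km depth, mimicking a stalled slab fragment.
v_obs = v_ref * (1.0 + 0.01 * np.exp(-((depth_km - 800) / 150) ** 2))

# Tomographic images are usually plotted as the fractional anomaly dv/v.
dv_over_v = (v_obs - v_ref) / v_ref * 100.0   # per cent

print(f"peak anomaly: {dv_over_v.max():.2f}% at {depth_km[dv_over_v.argmax()]:.0f} km depth")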
https://en.wikipedia.org/wiki/South%20American%20plate
South American plate
The South American plate is a major tectonic plate which includes the continent of South America as well as a sizable region of the Atlantic Ocean seabed extending eastward to the African plate, with which it forms the southern part of the Mid-Atlantic Ridge. The easterly edge is a divergent boundary with the African plate; the southerly edge is a complex boundary with the Antarctic plate, the Scotia plate, and the Sandwich plate; the westerly edge is a convergent boundary with the subducting Nazca plate; and the northerly edge is a boundary with the Caribbean plate and the oceanic crust of the North American plate. At the Chile triple junction, near the west coast of the Taitao–Tres Montes Peninsula, an oceanic ridge known as the Chile Rise is actively subducting under the South American plate.

Geological research suggests that the South American plate is moving west, away from the Mid-Atlantic Ridge: "Parts of the plate boundaries consisting of alternations of relatively short transform fault and spreading ridge segments are represented by a boundary following the general trend." As a result, the eastward-moving and denser Nazca plate is subducting under the western edge of the South American plate, along the continent's Pacific coast, at a rate of several centimetres per year. The collision of these two plates is responsible for lifting the massive Andes Mountains and for creating the numerous volcanoes (including both stratovolcanoes and shield volcanoes) that are strewn throughout the Andes.
https://en.wikipedia.org/wiki/Proper%20time
Proper time
In relativity, proper time (from Latin, meaning "own time") along a timelike world line is defined as the time as measured by a clock following that line. The proper time interval between two events on a world line is the change in proper time, which is independent of coordinates, and is a Lorentz scalar. The interval is the quantity of interest, since proper time itself is fixed only up to an arbitrary additive constant, namely the setting of the clock at some event along the world line. The proper time interval between two events depends not only on the events, but also on the world line connecting them, and hence on the motion of the clock between the events. It is expressed as an integral over the world line (analogous to arc length in Euclidean space). An accelerated clock will measure a smaller elapsed time between two events than that measured by a non-accelerated (inertial) clock between the same two events. The twin paradox is an example of this effect.

By convention, proper time is usually represented by the Greek letter τ (tau) to distinguish it from coordinate time, represented by t. Coordinate time is the time between two events as measured by an observer using that observer's own method of assigning a time to an event. In the special case of an inertial observer in special relativity, the time is measured using the observer's clock and the observer's definition of simultaneity. The concept of proper time was introduced by Hermann Minkowski in 1908, and is an important feature of Minkowski diagrams.

Mathematical formalism

The formal definition of proper time involves describing the path through spacetime that represents a clock, observer, or test particle, and the metric structure of that spacetime. Proper time is the pseudo-Riemannian arc length of world lines in four-dimensional spacetime. From the mathematical point of view, coordinate time is assumed to be predefined and an expression for proper time as a function of coordinate time is required. On the other hand, proper time is measured experimentally and coordinate time is calculated from the proper time of inertial clocks.

Proper time can only be defined for timelike paths through spacetime which allow for the construction of an accompanying set of physical rulers and clocks. The same formalism for spacelike paths leads to a measurement of proper distance rather than proper time. For lightlike paths, there exists no concept of proper time and it is undefined, as the spacetime interval is zero. Instead, an arbitrary and physically irrelevant affine parameter unrelated to time must be introduced.

In special relativity

With the timelike convention for the metric signature, the Minkowski metric is defined by

$$\eta_{\mu\nu} = \operatorname{diag}(1, -1, -1, -1),$$

and the coordinates by

$$(x^0, x^1, x^2, x^3) = (ct, x, y, z)$$

for arbitrary Lorentz frames. In any such frame an infinitesimal interval, here assumed timelike, between two events is expressed as

$$ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2$$

and separates points on the trajectory of a particle (think of a clock). The same interval can be expressed in coordinates such that at each moment, the particle is at rest. Such a frame is called an instantaneous rest frame, denoted here by the coordinates $(c\tau, x_\tau, y_\tau, z_\tau)$ for each instant. Due to the invariance of the interval (instantaneous rest frames taken at different times are related by Lorentz transformations) one may write

$$ds^2 = c^2\,d\tau^2 - dx_\tau^2 - dy_\tau^2 - dz_\tau^2 = c^2\,d\tau^2,$$

since in the instantaneous rest frame the particle or the frame itself is at rest, i.e., $dx_\tau = dy_\tau = dz_\tau = 0$. Since the interval is assumed timelike (i.e., $ds^2 > 0$),
taking the square root of the above yields

$$ds = c\,d\tau,$$

or

$$d\tau = \frac{ds}{c}.$$

Given this differential expression for $\tau$, the proper time interval is defined as

$$\Delta\tau = \int_P d\tau = \int \frac{ds}{c}.$$

Here $P$ is the worldline from some initial event to some final event, with the ordering of the events fixed by the requirement that the final event occurs later according to the clock than the initial event.

Using the expression for $ds^2$ and again the invariance of the interval, one may write

$$\Delta\tau = \int_P \frac{1}{c}\sqrt{c^2\,dt^2 - dx^2 - dy^2 - dz^2} = \int_a^b \sqrt{1 - \frac{v(t)^2}{c^2}}\,dt,$$

where $t$ is an arbitrary bijective parametrization of the worldline such that $t(a)$ and $t(b)$ give the endpoints of $P$ and $a < b$; $v(t) = \sqrt{(dx/dt)^2 + (dy/dt)^2 + (dz/dt)^2}$ is the coordinate speed at coordinate time $t$; and $x(t)$, $y(t)$, and $z(t)$ are space coordinates. The first expression is manifestly Lorentz invariant. They are all Lorentz invariant, since proper time and proper time intervals are coordinate-independent by definition.

If $t$, $x$, $y$, $z$ are parameterised by a parameter $\lambda$, this can be written as

$$\Delta\tau = \int \sqrt{\left(\frac{dt}{d\lambda}\right)^2 - \frac{1}{c^2}\left[\left(\frac{dx}{d\lambda}\right)^2 + \left(\frac{dy}{d\lambda}\right)^2 + \left(\frac{dz}{d\lambda}\right)^2\right]}\;d\lambda.$$

If the motion of the particle is constant, the expression simplifies to

$$\Delta\tau = \sqrt{(\Delta t)^2 - \frac{(\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2}{c^2}},$$

where Δ means the change in coordinates between the initial and final events. The definition in special relativity generalizes straightforwardly to general relativity as follows below.

In general relativity

Proper time is defined in general relativity as follows: Given a pseudo-Riemannian manifold with local coordinates $x^\mu$ and equipped with a metric tensor $g_{\mu\nu}$, the proper time interval $\Delta\tau$ between two events along a timelike path $P$ is given by the line integral

$$\Delta\tau = \int_P d\tau = \int_P \frac{1}{c}\sqrt{g_{\mu\nu}\,dx^\mu\,dx^\nu}.$$

This expression is, as it should be, invariant under coordinate changes. It reduces (in appropriate coordinates) to the expression of special relativity in flat spacetime. In the same way that coordinates can be chosen such that the space coordinates are constant along the path in special relativity, this can be done in general relativity too. Then, in these coordinates,

$$\Delta\tau = \int_P \frac{1}{c}\sqrt{g_{00}}\,dx^0.$$

This expression generalizes the earlier definition and can be taken as the definition. Then, using the invariance of the interval, the line-integral expression follows from it in the same way the special-relativistic integral follows from its differential form, except that here arbitrary coordinate changes are allowed.

Examples in special relativity

Example 1: The twin "paradox"

For a twin paradox scenario, let there be an observer A who moves between the A-coordinates (0,0,0,0) and (10 years, 0, 0, 0) inertially. This means that A stays at $x = y = z = 0$ for 10 years of A-coordinate time. The proper time interval for A between the two events is then

$$\Delta\tau = \sqrt{(10\ \text{years})^2} = 10\ \text{years}.$$

So being "at rest" in a special relativity coordinate system means that proper time and coordinate time are the same.

Let there now be another observer B who travels in the x direction from (0,0,0,0) for 5 years of A-coordinate time at 0.866c to (5 years, 4.33 light-years, 0, 0). Once there, B accelerates, and travels in the other spatial direction for another 5 years of A-coordinate time to (10 years, 0, 0, 0). For each leg of the trip, the proper time interval can be calculated using A-coordinates, and is given by

$$\Delta\tau = \sqrt{(5\ \text{years})^2 - (4.33\ \text{years})^2} = \sqrt{6.25\ \text{years}^2} = 2.5\ \text{years}.$$

So the total proper time for observer B to go from (0,0,0,0) to (5 years, 4.33 light-years, 0, 0) and then to (10 years, 0, 0, 0) is

$$\Delta\tau = 2.5\ \text{years} + 2.5\ \text{years} = 5\ \text{years}.$$

Thus it is shown that the proper time equation incorporates the time dilation effect. In fact, for an object in a SR (special relativity) spacetime traveling with velocity $v$ for a time $\Delta t$, the proper time interval experienced is

$$\Delta\tau = \Delta t\,\sqrt{1 - \frac{v^2}{c^2}},$$

which is the SR time dilation formula.

Example 2: The rotating disk

An observer rotating around another inertial observer is in an accelerated frame of reference. For such an observer, the incremental ($d\tau$) form of the proper time equation is needed, along with a parameterized description of the path being taken, as shown below.
Let there be an observer C on a disk rotating in the xy plane at a coordinate angular rate of $\omega$ and who is at a distance of $r$ from the center of the disk, with the center of the disk at $x = y = z = 0$. The path of observer C is given by $(ct,\; r\cos\omega t,\; r\sin\omega t,\; 0)$, where $t$ is the current coordinate time. When $r$ and $\omega$ are constant, $dx = -r\omega\sin(\omega t)\,dt$ and $dy = r\omega\cos(\omega t)\,dt$. The incremental proper time formula then becomes

$$d\tau = \sqrt{1 - \frac{(r\omega)^2}{c^2}}\,dt.$$

So for an observer rotating at a constant distance of r from a given point in spacetime at a constant angular rate of ω between coordinate times $t_1$ and $t_2$, the proper time experienced will be

$$\Delta\tau = (t_2 - t_1)\sqrt{1 - \frac{(r\omega)^2}{c^2}} = \Delta t\,\sqrt{1 - \frac{v^2}{c^2}},$$

since $v = r\omega$ for a rotating observer. This result is the same as for the linear motion example, and shows the general application of the integral form of the proper time formula.

Examples in general relativity

The difference between SR and general relativity (GR) is that in GR one can use any metric which is a solution of the Einstein field equations, not just the Minkowski metric. Because inertial motion in curved spacetimes lacks the simple expression it has in SR, the line integral form of the proper time equation must always be used.

Example 3: The rotating disk (again)

An appropriate coordinate conversion done against the Minkowski metric creates coordinates where an object on a rotating disk stays in the same spatial coordinate position. The new coordinates are

$$r = \sqrt{x^2 + y^2}$$

and

$$\theta = \arctan\left(\frac{y}{x}\right) - \omega t.$$

The t and z coordinates remain unchanged. In this new coordinate system, the incremental proper time equation is

$$d\tau = \sqrt{\left[1 - \left(\frac{r\omega}{c}\right)^2\right]dt^2 - \frac{dr^2}{c^2} - \frac{r^2\,d\theta^2}{c^2} - \frac{dz^2}{c^2} - \frac{2r^2\omega\,dt\,d\theta}{c^2}}.$$

With r, θ, and z being constant over time, this simplifies to

$$d\tau = dt\,\sqrt{1 - \left(\frac{r\omega}{c}\right)^2},$$

which is the same as in Example 2.

Now let there be an object off of the rotating disk and at inertial rest with respect to the center of the disk and at a distance of R from it. This object has a coordinate motion described by $d\theta = -\omega\,dt$, which describes the inertially at-rest object as counter-rotating in the view of the rotating observer. Now the proper time equation becomes

$$d\tau = \sqrt{\left[1 - \left(\frac{R\omega}{c}\right)^2\right]dt^2 - \frac{R^2\omega^2}{c^2}\,dt^2 + 2\,\frac{R^2\omega^2}{c^2}\,dt^2} = dt.$$

So for the inertial at-rest observer, coordinate time and proper time are once again found to pass at the same rate, as expected and required for the internal self-consistency of relativity theory.

Example 4: The Schwarzschild solution – time on the Earth

The Schwarzschild solution has an incremental proper time equation of

$$d\tau = \sqrt{\left(1 - \frac{2m}{r}\right)dt^2 - \frac{1}{c^2}\left(1 - \frac{2m}{r}\right)^{-1}dr^2 - \frac{r^2}{c^2}\,d\phi^2 - \frac{r^2}{c^2}\sin^2\phi\;d\theta^2},$$

where

t is time as calibrated with a clock distant from and at inertial rest with respect to the Earth,
r is a radial coordinate (which is effectively the distance from the Earth's center),
ɸ is a co-latitudinal coordinate, the angular separation from the north pole in radians,
θ is a longitudinal coordinate, analogous to the longitude on the Earth's surface but independent of the Earth's rotation, also given in radians,
m is the geometrized mass of the Earth, m = GM/c², where
M is the mass of the Earth and
G is the gravitational constant.

To demonstrate the use of the proper time relationship, several sub-examples involving the Earth will be used here. For the Earth, $M = 5.97 \times 10^{24}\ \mathrm{kg}$, meaning that $m = 4.435 \times 10^{-3}\ \mathrm{m}$. When standing on the north pole, we can assume $dr = d\theta = d\phi = 0$ (meaning that we are neither moving up nor down, nor along the surface of the Earth). In this case, the Schwarzschild solution proper time equation becomes $d\tau = dt\,\sqrt{1 - 2m/r}$. Then using the polar radius of the Earth as the radial coordinate (or $r = 6{,}356{,}752$ metres), we find that

$$d\tau = \sqrt{1 - 1.395 \times 10^{-9}}\;dt \approx \left(1 - 6.98 \times 10^{-10}\right)dt.$$

At the equator, the radius of the Earth is $r = 6{,}378{,}137$ metres. In addition, the rotation of the Earth needs to be taken into account. This imparts on an observer an angular velocity $d\theta/dt$ of 2π divided by the sidereal period of the Earth's rotation, 86162.4 seconds. So $d\theta = 7.292 \times 10^{-5}\,dt$. The proper time equation then produces

$$d\tau = \sqrt{1 - 1.391 \times 10^{-9} - 2.4 \times 10^{-12}}\;dt \approx \left(1 - 6.97 \times 10^{-10}\right)dt.$$

From a non-relativistic point of view this should have been the same as the previous result.
This example demonstrates how the proper time equation is used, even though the Earth rotates and hence is not spherically symmetric as assumed by the Schwarzschild solution. To describe the effects of rotation more accurately, the Kerr metric may be used.
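As a numerical cross-check of the examples above, the following minimal Python sketch integrates the special-relativistic proper time for the twin scenario of Example 1 and evaluates the Schwarzschild proper-time rates of Example 4. The unit choices (c = 1 in the first part) and the step count are assumptions made purely for illustration.

import numpy as np

# --- Example 1: the twin "paradox" (SR), with c = 1 and time in years ---
def proper_time(velocity, t0, t1, steps=200_000):
    """Integrate d(tau) = sqrt(1 - v(t)^2) dt along a worldline (c = 1)."""
    t = np.linspace(t0, t1, steps)
    integrand = np.sqrt(1.0 - velocity(t) ** 2)
    # trapezoidal rule, written out to avoid NumPy version differences
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

tau_A = proper_time(lambda t: np.zeros_like(t), 0.0, 10.0)        # A stays at rest
tau_B = proper_time(lambda t: np.full_like(t, 0.866), 0.0, 10.0)  # B moves at 0.866c on both legs
print(f"tau_A = {tau_A:.3f} yr, tau_B = {tau_B:.3f} yr")          # ~10.000 and ~5.000

# --- Example 4: proper-time rate on Earth (Schwarzschild, SI units) ---
c = 2.99792458e8               # speed of light, m/s
m = 4.435e-3                   # geometrized Earth mass GM/c^2, metres
r_pole, r_eq = 6_356_752.0, 6_378_137.0
omega = 2 * np.pi / 86_162.4   # sidereal rotation rate, rad/s

rate_pole = np.sqrt(1 - 2 * m / r_pole)
rate_eq = np.sqrt(1 - 2 * m / r_eq - (r_eq * omega / c) ** 2)
print(f"1 - dtau/dt at pole:    {1 - rate_pole:.3e}")   # ~6.98e-10
print(f"1 - dtau/dt at equator: {1 - rate_eq:.3e}")     # ~6.97e-10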
https://en.wikipedia.org/wiki/Corynebacterium%20diphtheriae
Corynebacterium diphtheriae
Corynebacterium diphtheriae is a Gram-positive pathogenic bacterium that causes diphtheria. It is also known as the Klebs–Löffler bacillus because it was discovered in 1884 by German bacteriologists Edwin Klebs (1834–1912) and Friedrich Löffler (1852–1915). These bacteria are usually harmless unless they are infected by a bacteriophage carrying a gene that gives rise to a toxin. This toxin causes the disease. Diphtheria is caused by the adhesion and infiltration of the bacteria into the mucosal layers of the body, primarily affecting the respiratory tract, and the subsequent release of an exotoxin. The toxin has a localized effect on skin lesions, as well as metastatic, proteolytic effects on other organ systems in severe infections. Originally a major cause of childhood mortality, diphtheria has been almost entirely eradicated due to the vigorous administration of the diphtheria vaccination in the 1910s. Diphtheria is no longer transmitted as frequently due to the development of the vaccine, DTaP. Although diphtheria outbreaks continue to occur, this is often in developing countries where the majority of the population is not vaccinated.

Classification

Four subspecies are recognized: C. d. mitis, C. d. intermedius, C. d. gravis, and C. d. belfanti. The four subspecies differ slightly in their colonial morphology and biochemical properties, such as the ability to metabolize certain nutrients. All may be either toxigenic (and therefore cause diphtheria) or not toxigenic. Strain subtyping involves comparing species of bacteria and categorizing them into subspecies, and also helps with identifying the origin of an outbreak. However, when it comes to the subtyping of C. diphtheriae, little useful or accurate classification exists, due to the lack of publicly available resources to identify strains and thereby trace the origin of outbreaks.

Toxin

C. diphtheriae produces the diphtheria toxin, which alters protein function in the host by inactivating the elongation factor EF-2. This causes pharyngitis and a 'pseudomembrane' in the throat. The toxigenic strains are those which have been infected with a bacteriophage. The diphtheria toxin gene is encoded by the bacteriophage found in toxigenic strains, integrated into the bacterial chromosome. The diphtheria toxin repressor is mainly controlled by iron, which serves as the essential cofactor for the activation of target DNA binding. A low concentration of iron is required in the medium for toxin production. At high iron concentrations, iron molecules bind to an aporepressor on the beta bacteriophage, which carries the tox gene. When bound to iron, the aporepressor shuts down toxin production. Elek's test for toxigenicity is used to determine whether the organism is able to produce the diphtheria toxin.

Identification

To identify C. diphtheriae, a Gram stain is performed to show Gram-positive, highly pleomorphic organisms often looking like Chinese letters. Stains such as Albert's stain and Ponder's stain are used to demonstrate the metachromatic granules formed in the polar regions. The granules are called polar granules, or volutin granules, known under the eponymous name Babes-Ernst granules after their discoverers, Paul Ernst and Victor Babes. An enrichment medium, such as Löffler's medium, preferentially grows C. diphtheriae. After that, a differential plate known as tellurite agar allows all corynebacteria (including C. diphtheriae) to reduce tellurite to metallic tellurium.
The tellurite reduction is colourimetrically indicated by brown colonies for most Corynebacterium species or by a black halo around the C. diphtheriae colonies. The organism produces catalase but not urease, which differentiates it from Corynebacterium ulcerans. C. diphtheriae does not produce pyrazinamidase, which differentiates it from Corynebacterium striatum and Corynebacterium jeikeium.

Pathogenicity

Corynebacterium diphtheriae is the bacterium that causes the disease called diphtheria. Bacteriophages introduce a gene into the bacterial cells that makes a strain toxigenic. The strains that are not infected with these viruses are harmless. C. diphtheriae is a rod-shaped, Gram-positive, non-spore-forming, and nonmotile bacterium. C. diphtheriae has been shown to infect humans almost exclusively, and humans are believed to be the reservoir for this pathogen. However, there have been extremely rare cases in which C. diphtheriae has been found in animals; toxigenic infections have been reported in only two dogs and two horses. The disease occurs primarily in tropical regions and developing countries. Immunocompromised individuals, poorly immunized adults, and unvaccinated children are at the greatest risk of contracting diphtheria. The mode of transmission is person-to-person contact via respiratory droplets (i.e., coughing or sneezing). Less commonly, it can also be passed by touching open sores or contaminated surfaces. During the typical course of disease, the body region most commonly affected is the upper respiratory system. A thick, grey coating accumulates in the nasopharyngeal region, making breathing and swallowing more difficult. The disease remains contagious for at least two weeks following the disappearance of symptoms, but has been known to remain so for up to a month.

The most common routes of entry for C. diphtheriae are the nose, tonsils, and throat. Individuals suffering from the disease may experience sore throat, weakness, fever, and swollen glands, and the infection can cause even more dangerous symptoms such as shortness of breath. If left untreated, diphtheria toxin may enter the bloodstream, causing damage to the kidneys, nerves, and heart. Extremely rare complications include suffocation and partial paralysis. A vaccine, DTaP, effectively prevents the disease and is mandatory in the United States for participation in public education and some professions (exceptions apply).

The first step of C. diphtheriae infection involves the toxigenic bacteria colonizing a mucosal layer. In young children, this typically occurs in the upper respiratory tract mucosa. In adults, the infection is limited mostly to the tonsillar region. Some unusual sites of infection include the heart, larynx, trachea, bronchi, and anterior areas of the mouth including the buccal mucosa, the lips, tongue, and the hard and soft palate. The bacteria have several virulence factors that help them localize on areas of the respiratory tract, many of which are yet to be fully understood, as diphtheria does not affect many model hosts such as mice. One common virulence factor that has been studied in vitro is DIP0733, a multi-functional protein that has been shown to have a role in bacterial adhesion to host cells and to have fibrinogen-binding qualities. In experiments with mutant strains of C. diphtheriae, adhesion and epithelial infiltration decreased significantly. The ability to bind to extracellular matrices aids the bacteria in avoiding detection by the body's immune system.
The diphtheritic lesion is often covered by a pseudomembrane composed of fibrin, bacterial cells, and inflammatory cells. Diphtheria toxin can be proteolytically cleaved into two fragments: an N-terminal fragment A (catalytic domain), and fragment B (transmembrane and receptor-binding domain). Fragment A catalyzes the NAD+-dependent ADP-ribosylation of elongation factor 2, thereby inhibiting protein synthesis in eukaryotic cells. Fragment B binds to the cell surface receptor and facilitates the delivery of fragment A to the cytosol.

Once the bacteria have localized in one area, they start multiplying and create the inflammatory pseudomembrane. Individuals with faucial diphtheria typically have the pseudomembrane grow over the tonsil and accessory structures, uvula, soft palate, and possibly the nasopharyngeal area. In upper respiratory tract diphtheria, the pseudomembrane can grow on the pharynx, larynx, trachea, and bronchi/bronchioles. The pseudomembrane starts off white in colour and later becomes dirty-grey and tough due to the necrotic epithelium. Pseudomembrane formation on the trachea or bronchi decreases the efficiency of airflow. Over time, the diffusion rate in the alveoli decreases due to the lower airflow, decreasing the partial pressure of oxygen in the systemic circulation, which can cause cyanosis and suffocation.

Transmission

The mode of transmission is person-to-person contact via respiratory droplets (i.e., coughing or sneezing), and less commonly, by touching open sores or contaminated surfaces.

Vaccine

A vaccine, DTaP, effectively prevents the disease and is mandatory in the United States for participation in public education and some professions (exceptions apply). The invention of the toxoid vaccine, which provides protection against Corynebacterium diphtheriae, caused a dramatic shift in the bacterium's rate of infection in the United States. Even though the vaccine was first made in the early 1900s, it did not become widely available until the early 1910s. According to the National Health and Nutrition Examination Survey (NHANES), "80 per cent of persons age 12 to 19 years were immune to diphtheria" due to the wide use of the vaccine in the United States.

Diagnosis

Respiratory C. diphtheriae infection is diagnosed based on clinical presentation, whereas non-respiratory diphtheria may not be clinically suspected, so its diagnosis relies more heavily on laboratory testing. Culturing is the most accurate kind of testing for confirming or ruling out the presence of diphtheria toxin. The testing is done by swabbing the possibly infected area, as well as any lesions and sores.

Treatment and prevention

When a toxigenic strain of Corynebacterium diphtheriae infects the human body, it releases harmful toxins, which are especially damaging to the throat. Antitoxins are used to prevent further harm, and antibiotics are also used to fight the infection. Typical antibiotics used against diphtheria include penicillin and erythromycin. People infected with diphtheria must quarantine for at least 48 hours after being prescribed antibiotics. To confirm that a person is no longer contagious, tests are performed to ensure that the bacteria have been cleared. People are then vaccinated to prevent further transmission of the disease. The wide use of the diphtheria vaccine dramatically decreased the rate of infection and allows for primary prevention of the disease.
Most people receive a 3-in-1 vaccine that protects against diphtheria, tetanus and pertussis, commonly known as the DTaP or Tdap vaccine. The DTaP vaccine is for children, while the Tdap vaccine is for adolescents and adults. In the United States, the DTaP vaccine, which typically involves a series of five shots, is recommended for infants. These vaccines are injected into the arm or thigh and are administered when the infant is 2 months, 4 months, 6 months, 15–18 months and then 4–6 years old. Possible side effects associated with the diphtheria vaccine include "mild fever, fussiness, drowsiness or tenderness at the injection site". Although it is rare, the DTaP vaccine may cause an allergic reaction, with hives or a rash breaking out within minutes of administering the vaccine.

Genetics

The genome of C. diphtheriae consists of a single circular chromosome of 2.5 Mbp, with no plasmids. The genome shows an extreme compositional bias, being noticeably higher in G+C near the origin of replication than at the terminus; this high G+C content also contributes to the species' genetic diversity. The content of guanine and cytosine is not constant across the entire genome: there is a terminus of replication around the ~740 kb region where the G+C content decreases. A drop in G+C content near the terminus is often seen in other bacteria as well, a pattern associated with chromosomal replication, and C. diphtheriae shows this effect markedly.
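The compositional bias described here is commonly visualized by computing G+C content in sliding windows along the chromosome. Below is a minimal, illustrative Python sketch of that calculation; the random sequence and the window sizes are stand-ins, not real C. diphtheriae data.

import random

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_profile(genome: str, window: int = 10_000, step: int = 5_000):
    """G+C fraction in sliding windows, wrapping around a circular chromosome."""
    doubled = genome + genome[:window]   # handle the wrap-around at the end
    return [(start, gc_content(doubled[start:start + window]))
            for start in range(0, len(genome), step)]

random.seed(0)
toy_genome = "".join(random.choice("ACGT") for _ in range(100_000))
for position, gc in gc_profile(toy_genome)[:5]:
    print(f"{position:>7} bp  GC = {gc:.3f}")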
https://en.wikipedia.org/wiki/Corynebacterium
Corynebacterium
Corynebacterium is a genus of Gram-positive bacteria, most of which are aerobic. They are bacilli (rod-shaped), and in some phases of life they are, more specifically, club-shaped, which inspired the genus name (coryneform means "club-shaped"). They are widely distributed in nature in the microbiota of animals (including the human microbiota) and are mostly innocuous, most commonly existing in commensal relationships with their hosts. Some, such as C. glutamicum, are commercially and industrially useful. Others can cause human disease, including, most notably, diphtheria, which is caused by C. diphtheriae. Like various species of microbiota (including their relatives in the genera Arcanobacterium and Trueperella), they usually are not pathogenic, but can occasionally opportunistically capitalize on atypical access to tissues (via wounds) or weakened host defenses.

Taxonomy

The genus Corynebacterium was created by Lehmann and Neumann in 1896 as a taxonomic group to contain the bacterial rods responsible for causing diphtheria. The genus was defined based on morphological characteristics. Based on studies of 16S rRNA, they have been grouped into the subdivision of Gram-positive eubacteria with high G:C content, with close phylogenetic relationships to Arthrobacter, Mycobacterium, Nocardia, and Streptomyces. The term comes from Greek κορύνη, 'club, mace, staff, knobby plant bud or shoot' and βακτήριον, 'little rod'. The term "diphtheroids" is used to represent corynebacteria that are nonpathogenic; for example, C. diphtheriae would be excluded. The term diphtheroid comes from Greek διφθέρα, 'prepared hide, leather'.

Genomics

Comparative analysis of corynebacterial genomes has led to the identification of several conserved signature indels (CSIs) that are unique to the genus. Two examples of CSIs are a two-amino-acid insertion in a conserved region of the enzyme phosphoribose diphosphate:decaprenyl-phosphate phosphoribosyltransferase and a three-amino-acid insertion in acetate kinase, both of which are found only in Corynebacterium species. Both of these indels serve as molecular markers for species of the genus Corynebacterium. Additionally, 16 conserved signature proteins, which are uniquely found in Corynebacterium species, have been identified. Three of these have homologs found in the genus Dietzia, which is believed to be the closest related genus to Corynebacterium. In phylogenetic trees based on concatenated protein sequences or 16S rRNA, the genus Corynebacterium forms a distinct clade, within which is a distinct subclade, cluster I. The cluster is made up of the species C. diphtheriae, C. pseudotuberculosis, C. ulcerans, C. aurimucosum, C. glutamicum, and C. efficiens. This cluster is distinguished by several conserved signature indels, such as a two-amino-acid insertion in LepA and a seven- or eight-amino-acid insertion in RpoC. Also, 21 conserved signature proteins are found only in members of cluster I. Another cluster has been proposed, consisting of C. jeikeium and C. urealyticum, which is supported by the presence of 19 distinct conserved signature proteins unique to these two species. Corynebacteria have a high G+C content, ranging from 46 to 74 mol%.

Characteristics

The principal features of the genus Corynebacterium were described by Collins and Cummins in 1986. They are gram-positive, catalase-positive, non-spore-forming, non-motile, rod-shaped bacteria that are straight or slightly curved.
Metachromatic granules are usually present, representing stored phosphate regions. Cells fall between 2 and 6 μm in length and around 0.5 μm in diameter. The bacteria group together in a characteristic way, which has been described as the form of a "V", "palisades", or "Chinese characters". They may also appear elliptical. They are aerobic or facultatively anaerobic chemoorganotrophs. They are pleomorphic through their life cycles: they occur in various lengths and frequently have thickenings at either end, depending on the surrounding conditions. Some corynebacteria are lipophilic (such as CDC coryneform groups F-1 and G, C. accolens, C. afermentans subsp. lipophilum, C. bovis, C. jeikeium, C. macginleyi, C. uropygiale, and C. urealyticum), but medically relevant corynebacteria are typically not. The nonlipophilic bacteria may be classified as fermentative (such as C. amycolatum, C. argentoratense, members of the C. diphtheriae group, C. glucuronolyticum, C. glutamicum, C. matruchotii, C. minutissimum, C. striatum, and C. xerosis) or nonfermentative (such as C. afermentans subsp. afermentans, C. auris, C. pseudodiphtheriticum, and C. propinquum).

Cell wall

The cell wall is distinctive, with a predominance of meso-diaminopimelic acid in the murein wall and many repetitions of arabinogalactan, as well as corynemycolic acid (a mycolic acid with 22 to 26 carbon atoms), bound by disaccharide bonds called L-Rhap-(1→4)-α-D-GlcNAc-phosphate. These form a complex commonly seen in Corynebacterium species: the mycolyl-AG–peptidoglycan (mAGP) complex. Unlike most corynebacteria, Corynebacterium kroppenstedtii does not contain mycolic acids.

Culture

Corynebacteria grow slowly, even on enriched media. In terms of nutritional requirements, all need biotin to grow. Some strains also need thiamine and PABA. Some of the Corynebacterium species with sequenced genomes have between 2.5 and 3.0 million base pairs. The bacteria grow in Loeffler's medium, blood agar, and trypticase soy agar (TSA). They form small, grayish colonies with a granular appearance, mostly translucent, but with opaque centers, convex, with continuous borders. The color tends to be yellowish-white in Loeffler's medium. In TSA, they can form grey colonies with black centers and dentate borders that either resemble flowers (C. gravis), have continuous borders (C. mitis), or show a mix of the two forms (C. intermedium).

Habitat

Corynebacterium species occur commonly in nature in soil, water, plants, and food products. The non-diphtheroid Corynebacterium species can even be found in the mucosa and normal skin flora of humans and animals. Unusual habitats, such as the preen gland of birds, have recently been reported for Corynebacterium uropygiale. Some species are known for their pathogenic effects in humans and other animals. Perhaps the most notable is C. diphtheriae, which acquires the capacity to produce diphtheria toxin only after interacting with a bacteriophage. Other pathogenic species in humans include C. amycolatum, C. striatum, C. jeikeium, C. urealyticum, and C. xerosis; all of these are important as pathogens in immunosuppressed patients. Pathogenic species in other animals include C. bovis and C. renale. This genus has been found to be part of the human salivary microbiome.

Role in disease

The most notable human infection is diphtheria, caused by C. diphtheriae.
It is an acute, contagious infection characterized by pseudomembranes of dead epithelial cells, white blood cells, red blood cells, and fibrin that form around the tonsils and back of the throat. In developed countries, it is an uncommon illness that tends to occur in unvaccinated individuals, especially school-aged children, the elderly, neutropenic or immunocompromised patients, and those with prosthetic devices such as prosthetic heart valves, shunts, or catheters. It is more common in developing countries. It can occasionally infect wounds, the vulva, the conjunctiva, and the middle ear. It can be spread within a hospital. The virulent and toxigenic strains produce an exotoxin formed by two polypeptide chains, which is itself produced when a bacterium is transformed by a gene from the β prophage.

Several species cause disease in animals, most notably C. pseudotuberculosis, which causes the disease caseous lymphadenitis, and some are also pathogenic in humans. Some attack healthy hosts, while others tend to attack the immunocompromised. Effects of infection include granulomatous lymphadenopathy, pneumonitis, pharyngitis, skin infections, and endocarditis. Corynebacterial endocarditis is seen most frequently in patients with intravascular devices. Several species of Corynebacterium can cause trichomycosis axillaris. C. striatum may cause axillary odor. C. minutissimum causes erythrasma.

Industrial uses

Nonpathogenic species of Corynebacterium are used for important industrial applications, such as the production of amino acids and nucleotides, bioconversion of steroids, degradation of hydrocarbons, cheese aging, and production of enzymes. Some species produce metabolites similar to antibiotics: bacteriocins of the corynecin-linocin type, antitumor agents, etc. One of the most studied species is C. glutamicum, whose name refers to its capacity to produce glutamic acid in aerobic conditions. L-Lysine production is specific to C. glutamicum, in which core metabolic enzymes are manipulated through genetic engineering to drive metabolic flux towards the production of NADPH from the pentose phosphate pathway and of L-4-aspartyl phosphate, the commitment step in the synthesis of L-lysine, catalysed by aspartokinase (lysC) and downstream pathway enzymes. These enzymes are up-regulated in industry through genetic engineering to ensure that adequate amounts of lysine precursors are produced to increase metabolic flux. Unwanted side reactions such as threonine and asparagine production can occur if a buildup of intermediates occurs, so scientists have developed mutant strains of C. glutamicum through PCR engineering and chemical knockouts to ensure that production of side-reaction enzymes is limited. Many genetic manipulations conducted in industry are carried out by traditional cross-over methods or inhibition of transcriptional activators.

Expression of functionally active human epidermal growth factor has been brought about in C. glutamicum, thus demonstrating a potential for industrial-scale production of human proteins. Expressed proteins can be targeted for secretion through either the general secretory pathway or the twin-arginine translocation pathway. Unlike Gram-negative bacteria, the Gram-positive Corynebacterium species lack lipopolysaccharides that function as antigenic endotoxins in humans.

Species

Corynebacterium comprises the following species: C. accolens Neubauer et al. 1991 C. afermentans Riegel et al. 1993 C. alimapuense Claverias et al. 2019 "C. alkanolyticum" Lee and Reichenbach 2006 C. ammoniagenes (Cooke and Keith 1927) Collins 1987 C.
amycolatum Collins et al. 1988 C. anserum Liu et al. 2021 C. appendicis Yassin et al. 2002 C. aquatimens Aravena-Román et al. 2012 C. aquilae Fernández-Garayzábal et al. 2003 C. argentoratense Riegel et al. 1995 "C. asperum" De Briel et al. 1992 C. atrinae Kim et al. 2015 C. atypicum Hall et al. 2003 C. aurimucosum Yassin et al. 2002 C. auris Funke et al. 1995 C. auriscanis Collins et al. 2000 C. belfantii Dazas et al. 2018 C. beticola Abdou 1969 (Approved Lists 1980) "C. bouchesdurhonense" Ndongo et al. 2017 "C. bouchesdurhonense" Lo et al. 2019 C. bovis Bergey et al. 1923 (Approved Lists 1980) C. callunae (Lee and Good 1963) Yamada and Komagata 1972 (Approved Lists 1980) C. camporealensis Fernández-Garayzábal et al. 1998 C. canis Funke et al. 2010 C. capitovis Collins et al. 2001 C. casei Brennan et al. 2001 C. caspium Collins et al. 2004 C. choanae Busse et al. 2019 C. ciconiae Fernández-Garayzábal et al. 2004 C. comes Schaffert et al. 2021 C. confusum Funke et al. 1998 C. coyleae Funke et al. 1997 C. crudilactis Zimmermann et al. 2016 C. cystitidis Yanagawa and Honda 1978 (Approved Lists 1980) "C. defluvii" Yu et al. 2017 "C. dentalis" Benabdelkader et al. 2020 C. deserti Zhou et al. 2012 C. diphtheriae (Kruse 1886) Lehmann and Neumann 1896 (Approved Lists 1980) C. doosanense Lee et al. 2009 C. durum Riegel et al. 1997 C. efficiens Fudou et al. 2002 C. endometrii Ballas et al. 2020 C. epidermidicanis Frischmann et al. 2012 C. faecale Chen et al. 2016 C. falsenii Sjödén et al. 1998 C. felinum Collins et al. 2001 C. flavescens Barksdale et al. 1979 (Approved Lists 1980) C. fournieri corrig. Diop et al. 2018 C. frankenforstense Wiertz et al. 2013 C. freiburgense Funke et al. 2009 C. freneyi Renaud et al. 2001 C. gerontici Busse et al. 2019 C. glaucum Yassin et al. 2003 C. glucuronolyticum Funke et al. 1995 C. glutamicum (Kinoshita et al. 1958) Abe et al. 1967 (Approved Lists 1980) C. glyciniphilum (ex Kubota et al. 1972) Al-Dilaimi et al. 2015 C. gottingense Atasayar et al. 2017 C. guangdongense Li et al. 2016 "C. haemomassiliense" Boxberger et al. 2020 C. halotolerans Chen et al. 2004 C. hansenii Renaud et al. 2007 C. heidelbergense Braun et al. 2021 C. hindlerae Bernard et al. 2021 C. humireducens Wu et al. 2011 "C. ihumii" Padmanabhan et al. 2014 C. ilicis Mandel et al. 1961 (Approved Lists 1980) C. imitans Funke et al. 1997 "C. incognitum" Boxberger et al. 2021 C. jeddahense Edouard et al. 2017 C. jeikeium Jackman et al. 1988 C. kalinowskii Schaffert et al. 2021 "C. kefirresidentii" Blasche et al. 2017 C. kroppenstedtii Collins et al. 1998 C. kutscheri (Migula 1900) Bergey et al. 1925 (Approved Lists 1980) C. lactis Wiertz et al. 2013 "C. lactofermentum" Gubler et al. 1994 C. liangguodongii Zhu et al. 2020 C. lipophiloflavum Funke et al. 1997 C. lizhenjunii Zhou et al. 2021 C. lowii Bernard et al. 2016 C. lubricantis Kämpfer et al. 2009 C. lujinxingii Zhang et al. 2021 C. macginleyi Riegel et al. 1995 C. marinum Du et al. 2010 C. maris Ben-Dov et al. 2009 C. massiliense Merhej et al. 2009 C. mastitidis Fernandez-Garayzabal et al. 1997 C. matruchotii (Mendel 1919) Collins 1983 C. minutissimum (ex Sarkany et al. 1962) Collins and Jones 1983 C. mucifaciens Funke et al. 1997 C. mustelae Funke et al. 2010 C. mycetoides (ex Castellani 1942) Collins 1983 C. nasicanis Baumgardt et al. 2015 "C. neomassiliense" Boxberger et al. 2020 C. nuruki Shin et al. 2011 C. occultum Schaffert et al. 2021 C. oculi Bernard et al. 2016 C. otitidis (Funke et al. 1994) Baek et al. 2018 "C.
pacaense" Bellali et al. 2019 "C. parakroppenstedtii" Luo et al. 2022 "C. parvulum" Nakamura et al. 1983 C. pelargi Kämpfer et al. 2015 C. phocae Pascual et al. 1998 "C. phoceense" Cresci et al. 2016 C. pilbarense Aravena-Roman et al. 2010 C. pilosum Yanagawa and Honda 1978 (Approved Lists 1980) C. pollutisoli Negi et al. 2016 C. propinquum Riegel et al. 1994 "C. provencense" Ndongo et al. 2017 "C. provencense" Lo et al. 2019 C. pseudodiphtheriticum Lehmann and Neumann 1896 (Approved Lists 1980) "C. pseudokroppenstedtii" Luo et al. 2022 C. pseudopelargi Busse et al. 2019 C. pseudotuberculosis (Buchanan 1911) Eberson 1918 (Approved Lists 1980) C. pyruviciproducens Tong et al. 2010 C. qintianiae Zhou et al. 2021 C. renale (Migula 1900) Ernst 1906 (Approved Lists 1980) C. resistens Otsuka et al. 2005 C. riegelii Funke et al. 1998 C. rouxii Badell et al. 2020 C. sanguinis Jaén-Luchoro et al. 2020 "C. segmentosum" Collins et al. 1998 "C. senegalense" Ndiaye et al. 2019 C. silvaticum Dangel et al. 2020 C. simulans Wattiau et al. 2000 C. singulare Riegel et al. 1997 C. sphenisci Goyache et al. 2003 C. spheniscorum Goyache et al. 2003 C. sputi Yassin and Siering 2008 C. stationis (ZoBell and Upham 1944) Bernard et al. 2010 C. striatum (Chester 1901) Eberson 1918 (Approved Lists 1980) C. suicordis Vela et al. 2003 C. sundsvallense Collins et al. 1999 C. suranareeae Nantapong et al. 2020 C. tapiri Baumgardt et al. 2015 C. terpenotabidum Takeuchi et al. 1999 C. testudinoris Collins et al. 2001 C. thomssenii Zimmermann et al. 1998 C. timonense Merhej et al. 2009 C. trachiae Kämpfer et al. 2015 C. tuberculostearicum Feurer et al. 2004 C. tuscaniense corrig. Riegel et al. 2006 "C. uberis" Kittl et al. 2022 C. ulcerans (ex Gilbert and Stewart 1927) Riegel et al. 1995 C. ulceribovis Yassin 2009 C. urealyticum Pitcher et al. 1992 C. ureicelerivorans Yassin 2007 "C. urinapleomorphum" Morand et al. 2017 C. urinipleomorphum corrig. Niang et al. 2021 C. urogenitale Ballas et al. 2020 C. uropygiale Braun et al. 2016 C. uterequi Hoyles et al. 2013 C. variabile corrig. (Müller 1961) Collins 1987 C. vitaeruminis corrig. (Bechdel et al. 1928) Lanéelle et al. 1980 C. wankanglinii Zhang et al. 2021 C. xerosis (Lehmann and Neumann 1896) Lehmann and Neumann 1899 (Approved Lists 1980) C. yudongzhengii Zhu et al. 2020 C. zhongnanshanii Zhang et al. 2021
Biology and health sciences
Gram-positive bacteria
Plants
1285827
https://en.wikipedia.org/wiki/Steam%20distillation
Steam distillation
Steam distillation is a separation process that consists of distilling water together with other volatile and non-volatile components. The steam from the boiling water carries the vapor of the volatiles to a condenser; both are cooled and return to the liquid or solid state, while the non-volatile residues remain behind in the boiling container. If, as is usually the case, the volatiles are not miscible with water, they will spontaneously form a distinct phase after condensation, allowing them to be separated by decantation or with a separatory funnel. Steam distillation can be used when the boiling point of the substance to be extracted is higher than that of water, and the starting material cannot be heated to that temperature because of decomposition or other unwanted reactions. It may also be useful when the amount of the desired substance is small compared to that of the non-volatile residues. It is often used to separate volatile essential oils from plant material; for example, to extract limonene (boiling point 176 °C) from orange peels. Steam distillation once was a popular laboratory method for purification of organic compounds, but it has been replaced in many such uses by vacuum distillation and supercritical fluid extraction. It is, however, much simpler and more economical than those alternatives, and remains important in certain industrial sectors. In the simplest form, water distillation or hydrodistillation, the water is mixed with the starting material in the boiling container. In direct steam distillation, the starting material is suspended above the water in the boiling flask, supported by a metal mesh or perforated screen. In dry steam distillation, the steam from a boiler is forced to flow through the starting material in a separate container. The latter variant allows the steam to be heated above the boiling point of water (thus becoming superheated steam), for more efficient extraction. History Steam distillation is used in many of the recipes given in the 'Book of Gentleness on Perfume', also known as the 'Book of the Chemistry of Perfume and Distillations', attributed to the early Arabic philosopher al-Kindi (died 873). Steam distillation was also used by the Persian philosopher and physician Avicenna (980–1037) to produce essential oils by adding water to rose petals and distilling the mixture. The process was also used by al-Dimashqi (1256–1327) to produce rose water on a large scale. Principle Every substance has some vapor pressure even below its boiling point, so in theory it could be distilled at any temperature by collecting and condensing its vapors. However, ordinary distillation below the boiling point is not practical, because a layer of vapor-rich air would form over the liquid, and evaporation would stop as soon as the partial pressure of the vapor in that layer reached the substance's vapor pressure. The vapor would then flow to the condenser only by diffusion, which is an extremely slow process. Simple distillation is generally done by boiling the starting material because, once its vapor pressure exceeds atmospheric pressure, that otherwise stagnant, vapor-rich layer of air will be disrupted, and there will be a significant and steady flow of vapor from the boiling flask to the condenser. In steam distillation, that positive flow is provided by steam from boiling water, rather than by the boiling of the substances of interest. The steam carries with it the vapors of the latter. The substance of interest does not need to be miscible with water or soluble in it.
It suffices that it has significant vapor pressure at the steam's temperature. If the water forms an azeotrope with the substances of interest, the boiling point of the mixture may be lower than the boiling point of water. For example, bromobenzene boils at 156 °C (at normal atmospheric pressure), but a mixture with water boils at 95 °C. However, the formation of an azeotrope is not necessary for steam distillation to work. Applications Steam distillation is often employed in the isolation of essential oils, for use in perfumes, for example. In this method, steam is passed through the plant material containing the desired oils. Eucalyptus oil, camphor oil and orange oil are obtained by this method on an industrial scale. Steam distillation is a means of purifying fatty acids, e.g. from tall oils. Steam distillation is sometimes used in the chemical laboratory. Illustrative is a classic preparation of bromobiphenyl, where steam distillation is used first to remove the excess benzene and subsequently to purify the brominated product. In one preparation of benzophenone, steam is employed first to recover unreacted carbon tetrachloride and subsequently to hydrolyze the intermediate benzophenone dichloride into benzophenone, which is in fact not steam distilled. In one preparation of a purine, steam distillation is used to remove volatile benzaldehyde from the nonvolatile product. Equipment On a lab scale, steam distillations are carried out using steam generated outside the system and piped through the mixture to be purified. Steam can also be generated in situ using a Clevenger-type apparatus.
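The principle admits a simple quantitative statement. Because the organic substance and water are immiscible, each exerts its full vapor pressure independently, and the mixture boils when the two pressures sum to the ambient pressure. A worked sketch for the bromobenzene example above, using rounded textbook vapor-pressure values at 95 °C (assumed here for illustration):

```latex
% Boiling condition for two immiscible liquids: A = bromobenzene, W = water.
% Each exerts its full vapor pressure, so boiling occurs when the sum
% reaches ambient pressure (the 95 °C values are rounded textbook numbers).
\[
  p_A(T) + p_W(T) = P_{\mathrm{atm}},
  \qquad
  120~\mathrm{mmHg} + 640~\mathrm{mmHg} = 760~\mathrm{mmHg}
  \quad \text{at } T \approx 95\,^{\circ}\mathrm{C}
\]
% The distillate's molar ratio equals the partial-pressure ratio; the mass
% ratio follows from the molar masses (M_A = 157 g/mol, M_W = 18 g/mol):
\[
  \frac{n_A}{n_W} = \frac{p_A}{p_W},
  \qquad
  \frac{m_A}{m_W} = \frac{p_A M_A}{p_W M_W}
  = \frac{120 \times 157}{640 \times 18} \approx 1.6
\]
```

So although bromobenzene supplies only about one-sixth of the vapor molecules, its much higher molar mass makes the distillate roughly 60% bromobenzene by weight, all collected well below its own boiling point of 156 °C.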
Physical sciences
Phase separations
Chemistry
1286213
https://en.wikipedia.org/wiki/Oroville%20Dam
Oroville Dam
Oroville Dam is an earthfill embankment dam on the Feather River east of the city of Oroville, California, in the Sierra Nevada foothills east of the Sacramento Valley. At 770 feet (235 m) high, it is the tallest dam in the U.S. and serves mainly for water supply, hydroelectricity generation, and flood control. The dam impounds Lake Oroville, the second-largest reservoir in California, capable of storing more than . Built by the California Department of Water Resources, Oroville Dam is one of the key features of the California State Water Project (SWP), one of the two major projects that set up California's statewide water system. Construction was initiated in 1961, and despite numerous difficulties encountered along the way, including multiple floods and a major train wreck on the rail line used to transport materials to the dam site, the embankment was topped out in 1967 and the entire project was ready for use in 1968. The dam began to generate electricity shortly afterwards with the completion of the Edward Hyatt Power Plant, then the country's largest underground power station. Since its completion in 1968, Oroville Dam has regulated the flow of the Feather River down to the Sacramento-San Joaquin Delta, where the SWP's California Aqueduct diverts water that provides a major supply for irrigation in the San Joaquin Valley, as well as municipal and industrial supplies to coastal Southern California; the dam has also prevented large amounts of flood damage to the area (more than $1.3 billion between 1987 and 1999). The dam blocks fish migration up the Feather River, and its regulation of the river's flow has affected riparian habitat. Efforts to counter the dam's impact on fish migration have included the construction of a salmon/steelhead fish hatchery on the river, which began operating shortly after the dam was completed. In February 2017, the main and emergency spillways threatened to fail, leading to the evacuation of 188,000 people living near the dam. After deterioration of the main spillway largely stabilized and the water level of the dam's reservoir dropped below the top of the emergency spillway, the evacuation order was lifted. The main spillway was reconstructed by November 1, 2018, and water releases were successfully tested, up to , during April 2019. History Planning In 1935, work began on the Central Valley Project, a federal water project that would develop the Sacramento and San Joaquin River systems for irrigation of the highly fertile Central Valley. However, after the end of World War II in 1945, the state experienced an economic boom that led to rapid urban and commercial growth in the central and southern portions of the state, and it became clear that California's economy could not depend solely on a water system geared primarily towards agriculture. A new study of California's water supplies by the Division of Water Resources (now the California Department of Water Resources, DWR) was carried out under an act of the California State Legislature in 1945. In 1951, California State Engineer A. D. Edmonston proposed the Feather River Project, the direct predecessor to the SWP, which included a major dam on the Feather River at Oroville, and aqueducts and pumping plants to transfer stored water to destinations in central and southern California.
The proposed project was strongly opposed by voters in Northern California and parts of Southern California that received water from the Colorado River, but was supported by other Southern Californians and San Joaquin Valley farmers. However, major flooding in the 1950s prompted the 1957 passage of an emergency flood-control bill that provided sufficient funding for construction for a dam at Oroville – regardless of whether it would become part of the SWP. Construction Groundbreaking on the dam site occurred in May 1957 with the relocation of the Western Pacific Railroad tracks that ran through the Feather River Canyon. The Burns-Porter Act of the California Legislature, which authorized the SWP, was not passed until November 8, 1960, and only by a slim margin. Engineer Donald Thayer of the DWR was commissioned to design and head construction of Oroville Dam, and the primary work contract was awarded to Oro Dam Constructors Inc., a joint venture led by Oman Construction Co. Two concrete-lined diversion tunnels, each long and in diameter, were excavated to channel the Feather River around the dam site. One of the tunnels was located at river level and was to carry normal water flows, while the second one was only to be used during floods. In May 1963, workers poured the last of of concrete that comprised the high cofferdam, to protect the construction site from floods. This structure later served as an impervious core for the completed dam. With the cofferdam in place, an rail line was constructed to move earth and rock to the dam site. An average of 120 train cars ran along the line each hour, transporting fill that was mainly excavated from enormous piles of hydraulic mining debris that was washed down by the Feather River after the California Gold Rush. On December 22, 1964, disaster nearly struck when the Feather River, after days of heavy rain, reached a peak flow of above the Oroville Dam site. The water rose behind the partially completed embankment dam and nearly overtopped it, while a maximum of poured from the diversion tunnels. This Christmas flood of 1964 was one of the most disastrous floods on record in Northern California, but the incomplete dam was able to reduce the peak flow of the Feather River by nearly 40%, averting massive damage to the area. Ten months later, four men died in a tragic accident on the construction rail line. On October 7, 1965, two 40-car work trains, one fully loaded and the other empty, collided head-on at a tunnel entrance, igniting of diesel fuel, completely destroying two locomotives. The burning fuel from the collision started a forest fire that burned before it could be extinguished. The crash delayed construction of the dam by a week while the train wreckage was cleared. Overall, 34 men died in the construction of the dam. Oroville Dam was designed to withstand the strongest possible earthquake for the region, and was fitted with hundreds of instruments that serve to measure water pressure and settlement of the earth fill used in its construction, earning it the nickname "the dam that talks back". (A ML 5.7 earthquake in the Oroville area in 1975 is believed to have been caused by induced seismicity from the weight of the Oroville Dam and reservoir on a local fault line.) The embankment was finally topped out on October 6, 1967, with the last of of material that took over 40,000 train trips to transport. On May 4, 1968, Oroville Dam was officially dedicated by the state of California. 
Among the notable figures present were California governor Ronald Reagan, who spoke, Chief Justice (formerly California governor) Earl Warren, Senator Thomas Kuchel, and California Representative Harold T. "Bizz" Johnson. The dedication was accompanied by a week of festivities in nearby Oroville, attended by nearly 50,000 people. 2005 dam relicensing On October 17, 2005, three environmental groups filed a motion with the Federal Energy Regulatory Commission (FERC) urging federal officials to require that the dam's emergency spillway be armored with concrete, rather than remain as an earthen spillway, as it did not meet modern safety standards. "In the event of extreme rain and flooding, fast-rising water would overwhelm the main concrete spillway, then flow down the emergency spillway, and that could cause heavy erosion that would create flooding for communities downstream, but also could cause a failure, known as 'loss of crest control'." FERC and the water agencies responsible for the cost of the upgrades said this was unnecessary and that the concerns were overblown. In 2006, a senior civil engineer sent a memorandum to his managers stating, "The emergency spillway meets FERC's engineering guidelines for an emergency spillway", and "The guidelines specify that during a rare flood event, it is acceptable for the emergency spillway to sustain significant damage." 2009 river valve accident At around 7:30 a.m. on July 22, 2009, several workers were deep below the reservoir operating flow controls to test a river valve chamber in the Oroville Dam. When the flow reached 85%, suction pulled a breakaway wall downstream into a diversion tunnel, cutting the lights and nearly sending three workers to their deaths in the roaring current. One of the workers, badly injured, survived by clinging to a bent rail, where he was struck by tools and equipment being sucked into the tunnel. He was hospitalized for four days with head trauma, a broken leg and arm, cuts, and bruises. The California Division of Occupational Safety and Health (Cal/OSHA) concluded that opening the valves without an energy-dispersion ring, which reportedly was absent, "created water flow with such great turbulence that it blocked an air vent and created a vacuum". It sanctioned the DWR with six citations, including five classified as serious, and the department was initially fined $141,375. Two of the "serious" citations were overturned on appeal. This river valve system was one of the first parts of the dam to be built when the dam project started in 1961, because its initial purpose was to divert the river while the dam was under construction. After that, it served various purposes, including as a possible emergency release valve. Since the accident, DWR had maintained a standing order that prohibited the operation of the river outlet system and significantly limited access to the river valve chamber. Following the accident, DWR entered into a 2012 agreement with Cal/OSHA to hire a third-party expert to improve the safety of the river valve outlet system (RVOS) and make it operational again. In 2014, DWR embarked on an accelerated refurbishment program to respond to concerns about operational needs during the ongoing drought. The system was mostly refurbished and was used during 2014 and 2015 to meet Endangered Species Act temperature requirements for the Feather River. Some additional refurbishments were being made to portions of the RVOS and were expected to conclude in early 2017.
2013, 2015 spillway cracks and inspection The spillway cracked in 2013. A senior civil engineer with the DWR, interviewed by the Sacramento Bee, explained, "It's common for spillways to develop a void because of the drainage systems under them", and "There were some patches needed and so we made repairs and everything checked out." In July 2015, the state Division of Safety of Dams inspected the dam spillway visually "from some distance" and did not walk it. 2017 spillway failure Initial spillway damage The rainy season of 2016–2017 was Northern California's wettest winter in over 100 years. Heavy rainfall resulted in record inflows from the Feather River, and the spillway was opened in January to relieve pressure on Oroville Dam. After a second series of heavy storms in February, the spillway flow was increased to , and on February 7, DWR employees noticed an unusual flow pattern. Spillway outflow was halted, and DWR brought engineers onto the spillway to inspect its integrity. The engineers found a large area of concrete and foundation erosion. The erosion was too extensive to repair without diverting water to the emergency spillway, and outflow along the main spillway was halted for a period in an attempt to fix the hole. High inflows to Lake Oroville forced dam operators to continue using the damaged spillway, causing additional damage. The spillway hole continued to grow. Debris from the crater in the main spillway was carried downstream, and caused damage to the Feather River Fish Hatchery due to high turbidity. Although engineers had hoped that using the damaged spillway could drain the lake enough to avoid use of the emergency spillway, they were forced to reduce its discharge from to due to potential damage to nearby power lines. Emergency spillway use and evacuation Shortly after 8:00 pm on February 11, 2017, the emergency spillway began carrying water for the first time since the dam's construction in 1968. The water flowed directly onto the earthen hillside below the emergency spillway, as designed. However, headward erosion of the emergency spillway threatened to undermine and collapse the concrete weir. On February 12, an evacuation was ordered for low-lying areas due to possible failure of the emergency spillway. The flow over the main spillway was increased to to try to slow erosion of the emergency spillway. By 8:00 pm on the evening of February 12, the increased flow had lowered the water level, causing the emergency spillway to stop overflowing. On February 14, the sheriff of Butte County lifted the mandatory evacuation order. Investigation and reconstruction On May 19, 2017, the spillway was shut down for the summer, to allow demolition and repair work to begin. The total cost of the repair was projected to exceed $400 million, with the $275 million primary contract awarded to Kiewit Construction. FEMA was expected to cover a large portion of the expenses. According to an independent forensics team led by John France, the exact cause of the spillway failure remains uncertain, though the team identified "24 possible causes for the spillway failure, including a faulty drainage system, variations in concrete thickness, and corrosion in the structure's rebar". For 2018 the DWR planned to demolish and reconstruct the portion of the spillway which was undamaged by the flood, but which had also been identified as structurally defective. In addition, crews worked to extend a cutoff wall under the emergency spillway to prevent erosion should that structure be used again in the future.
On November 1, 2017, DWR director Grant Davis said, "Lake Oroville's main spillway is indeed ready to safely handle winter flows if needed". While this completed phase 1 of the construction, a phase 2 remained to be completed in 2018. The second phase would include rebuilding the top section of the spillway (which was not rebuilt that season), putting slabs over the roller-compacted concrete section, and constructing a concrete secant cutoff wall for the emergency spillway. The cost estimate at that point was over $500 million. In October 2017, hairline cracks were found in the rebuilt spillway. Factors that added to the cost included relocating power lines, dredging the river downstream of the dam, and the discovery that the bedrock under the spillway was weak, necessitating deeper excavations and more concrete. The DWR commissioned an independent board of consultants (BOC) to review and comment on repairs to Oroville Dam. Memoranda (reports) prepared by the BOC are posted on the DWR website. The independent forensic team (IFT) was selected to determine the cause of the spillway incident, including effects of operations, management, structural design and geological conditions. According to its 2017–18 operations plan, the DWR maintained Lake Oroville at a lower-than-normal level to reduce the possibility that the spillway would have to be used the following winter. In a second phase of spillway repairs in 2018–19, temporary repairs on the main spillway done during phase one were torn out and replaced with steel-reinforced structural concrete. On April 2, 2019, due to heavy rainfall upstream, the DWR began releasing water over the newly reconstructed spillway at a rate of . Releases were increased to on April 7 to test how the spillway performed at higher flows. They were decreased to on April 9. 2020 Safety assessment The DWR released an assessment, dated October 1, 2020, concluding that Oroville Dam was suitable for continued safe and reliable operation. Meanwhile, the Federal Energy Regulatory Commission demanded that California submit a plan by September 2022 for addressing the greater amounts of rain predicted in the future. 2020–21 drought Due to low precipitation in the catchment area, water levels were below normal beginning in 2020. In August 2021, the Hyatt power plant had to be shut down because the water level fell below its water inlets. After the lake fell to a record low of 22% of capacity by September 30, winter storms raised its level by December, and the plant was restarted on January 4, 2022. Operations Hydroelectricity Construction of the underground Edward Hyatt Pump-Generating Plant was finished shortly after the completion of Oroville Dam. At the time, it was the largest underground power station in the United States, with three 132-megawatt (MW) conventional turbines and three 141 MW pump-generators for a total installed capacity of 819 MW. The Hyatt Powerplant is capable of pumping water back into Lake Oroville when surplus power is available. The pump-generators at Hyatt can lift up to into Lake Oroville (with a net consumption of 519 MW), while the six turbines combined use a flow of at maximum generation. Since 1969, the Hyatt plant has worked in tandem with an extensive pumped-storage operation comprising two offstream reservoirs west of Oroville. These two facilities are collectively known as the Oroville–Thermalito Complex.
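As a rough plausibility check on turbine ratings of this order, the standard hydropower relation can be applied; the head, flow, and efficiency figures below are illustrative assumptions for the sketch, not Hyatt's published operating values:

```latex
% Hydropower output: P = rho * g * Q * H * eta, where
%   rho = 1000 kg/m^3 (density of water), g = 9.81 m/s^2,
%   Q   = flow through one turbine (assumed 100 m^3/s),
%   H   = net head (assumed 150 m),
%   eta = overall turbine/generator efficiency (assumed 0.9).
\[
  P = \rho g Q H \eta
    = 1000 \times 9.81 \times 100 \times 150 \times 0.9
    \approx 1.3 \times 10^{8}~\mathrm{W} \approx 132~\mathrm{MW}
\]
```

That is the order of magnitude of a single Hyatt unit. Pumping the same water back uphill consumes more energy than generation later recovers, since the efficiency penalty applies in both directions, which is why pumped storage pays off only when surplus off-peak power is cheap.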
Water is diverted into the upper Thermalito reservoir (Thermalito Forebay) via the Thermalito Diversion Dam on the Feather River. During periods of off-peak power use, surplus energy generated at Hyatt is used to lift water from Thermalito's lower reservoir (the Thermalito Afterbay) to the forebay, which releases water back into the afterbay to generate up to 114 MW of power at times of high demand. The Hyatt and Thermalito plants produce an average of 2,200 gigawatt hours (GWh) of electricity each year, about half of the total power produced by the SWP's eight hydroelectric facilities. Water supply Water released from Oroville Dam travels down the Feather River before joining with the Sacramento River, eventually reaching the Sacramento-San Joaquin Delta, where the SWP's California Aqueduct diverts the fresh water for transport to the arid San Joaquin Valley and Southern California. Oroville–Thermalito hydroelectric facilities furnish about one-third of the power necessary to drive the pumps that lift the water in the aqueduct from the delta into the valley, and then from the valley over the Tehachapi Mountains into coastal Southern California. Water and power from the dam contribute to the irrigation of in the arid San Joaquin Valley Westside and municipal supplies to some 25 million people. At least of water is released. Flood control During the winter and early spring, Lake Oroville is required to have at least , or a fifth of the reservoir's storage capacity, available for flood control. The dam is operated to maintain an objective flood-control release of , which may be further reduced during large storms when flows below the Feather's confluence with the Yuba River exceed . In the particularly devastating flood of 1997, inflows to the reservoir hit more than , but dam operators managed to limit the outflow to , sparing large regions of the Sacramento Valley from flooding. Feather River Fish Hatchery Oroville Dam completely blocks the anadromous migrations of Chinook salmon and steelhead trout in the Feather River. In 1967, in an effort to compensate for lost habitat, the DWR and the California Department of Fish and Game completed the Feather River Fish Hatchery. The Fish Barrier Dam, built in 1962, intercepts salmon and trout before they reach the base of the impassable Thermalito Diversion Dam and forces them to swim up a fish ladder to the hatchery, which is located on the north bank of the Feather River. The hatchery produces 10 million salmon smolt, along with 450,000 trout smolt, to stock in the river each year. The salmon smolt are released in two runs, with 20% for the spring run and 80% for the fall run. This facility has been successful enough that concern exists that salmon of hatchery stock are outcompeting remaining wild salmon in the Feather River system.
Technology
Dams
null
1287695
https://en.wikipedia.org/wiki/Sand%20tiger%20shark
Sand tiger shark
The sand tiger shark (Carcharias taurus), grey/gray nurse shark, spotted ragged-tooth shark, or blue-nurse sand tiger, is a species of shark that inhabits subtropical and temperate waters worldwide. It inhabits the continental shelf, from sandy shorelines (hence the name sand tiger shark) and submerged reefs to a depth of around . They dwell in the waters of Japan, Australia, South Africa, and the east coasts of North and South America. The sand tiger shark also inhabited the Mediterranean; however, it was last seen there in 2003 and is presumed extinct in the region. Despite its common names, it is not closely related to either the tiger shark (Galeocerdo cuvier) or the nurse shark (Ginglymostoma cirratum). Despite its fearsome appearance and strong swimming ability, it is a relatively placid and slow-moving shark with no confirmed human fatalities. This species has a sharp, pointy head, and a bulky body. The sand tiger's length can reach but is normally 2.2–2.5 m. They are grey with reddish-brown spots on their backs. Shivers (groups) have been observed to hunt large schools of fish. Their diet consists of bony fish, crustaceans, squid, skates and other sharks. Unlike other sharks, the sand tiger can gulp air from the surface, allowing it to be suspended in the water column with minimal effort. During pregnancy, the most developed embryo will feed on its siblings, a reproductive strategy known as intrauterine cannibalism i.e. "embryophagy" or, more colorfully, adelphophagy—literally "eating one's brother". The sand tiger is categorized as critically endangered on the International Union for Conservation of Nature Red List. It is the most widely kept large shark in public aquariums owing to its tolerance for captivity. Taxonomy The sand tiger shark's description as Carcharias taurus by Constantine Rafinesque came from a specimen caught off the coast of Sicily. Carcharias taurus means "bull shark". This taxonomic classification has been long disputed. Twenty-seven years after Rafinesque's original description, the German biologists Müller and Henle changed the genus name from C. taurus to Triglochis taurus. The following year, Swiss-American naturalist Jean Louis Rodolphe Agassiz reclassified the shark as Odontaspis cuspidata based on examples of fossilized teeth. Agassiz's name was used until 1961 when three palaeontologists and ichthyologists, W. Tucker, E. I. White, and N. B. Marshall, requested that the shark be returned to the genus Carcharias. This request was rejected and Odontaspis was approved by the International Commission on Zoological Nomenclature (ICZN). When experts concluded that taurus belonged in Odontaspis, the name was changed to Odontaspis taurus. In 1977, Compagno and Follett challenged the Odontaspis taurus name and substituted Eugomphodus, a little-known classification, for Odontaspis. Many taxonomists questioned this change, arguing that there was no significant difference between Odontaspis and Carcharias. After changing the name to Eugomphodus taurus, Compagno successfully advocated establishing the shark's current scientific name as Carcharias taurus. The ICZN approved this name, and today it is used among biologists. Common names Because the sand tiger shark is worldwide in distribution, it has many common names. The term "sand tiger shark" actually refers to four different sand tiger shark species in the family Odontaspididae. Furthermore, the name creates confusion with the unrelated tiger shark Galeocerdo cuvier.
The grey nurse shark, the name used in Australia, is the second-most used name for the shark, and in India it is known as blue-nurse sand tiger. However, there are unrelated nurse sharks in the family Ginglymostomatidae. The most unambiguous and descriptive English name is probably the South African one, spotted ragged-tooth shark. Identification There are four species referred to as sand tiger sharks: The sand tiger shark Carcharias taurus The Indian sand tiger shark Carcharias tricuspidatus. Very little is known about this species which, described before 1900, is probably the same as (a synonym of) the sand tiger C. taurus The small-toothed sand tiger shark Odontaspis ferox. This species has a worldwide distribution, is seldom seen, but normally inhabits deeper water than does C. taurus. The bigeye sand tiger shark Odontaspis noronhai, a deep-water shark of the Americas, of which little is known. The most likely identification problem arises when the sand tiger shark is in the presence of either of the two species of Odontaspis. Firstly, the sand tiger is usually spotted, especially on the hind half of the body. However, there are several other differences that are probably more reliable: The bottom part of the caudal fin (tail fin) of the sand tiger is smaller. The second (i.e., hind) dorsal fin of the sand tiger is almost as large as the first (i.e., front) dorsal fin. The first (i.e., front) dorsal fin of the sand tiger is relatively non-symmetric. The first (i.e., front) dorsal fin of the sand tiger is closer to the pelvic fin than to the pectoral fin (i.e., the first dorsal fin is positioned further backwards in the case of the sand tiger). Description Adult sand tigers range from to in length, with most specimens reaching a length of around 2.2–2.5 m and to in weight. The head is pointy, as opposed to round, while the snout is flattened with a conical shape. Its body is stout and bulky and its mouth extends beyond the eyes. The eyes of the sand tiger shark are small, lacking eyelids. A sand tiger usually swims with its mouth open, displaying three rows of protruding, smooth-edged, sharp-pointed teeth. The males have grey claspers with white tips located on the underside of their body. The caudal fin is elongated with a long upper lobe (i.e. strongly heterocercal). They have two large, broad-based grey dorsal fins set back beyond the pectoral fins. The sand tiger shark has a grey-brown back and pale underside. Adults tend to have reddish-brown spots scattered, mostly on the hind part of the body. In August 2007, an albino specimen was photographed off South West Rocks, Australia. The teeth of these sharks have no transverse serrations (as have many other sharks) but they have a large, smooth main cusp with a tiny cusplet on each side of the main cusp. The upper front teeth are separated from the teeth on the side of the mouth by small intermediate teeth. Habitat and range Geographical range Sand tiger sharks roam the epipelagic and mesopelagic regions of the ocean, sandy coastal waters, estuaries, shallow bays, and rocky or tropical reefs, at depths of up to . The sand tiger shark can be found in the Atlantic, Pacific and Indian Oceans, and in the Adriatic Sea. In the Western Atlantic Ocean, it is found in coastal waters around from the Gulf of Maine to Florida, in the northern Gulf of Mexico, around the Bahamas and Bermuda, and from southern Brazil to northern Argentina.
It is also found in the eastern Atlantic Ocean from the Mediterranean Sea to the Canary Islands, at the Cape Verde islands, along the coasts of Senegal and Ghana, and from southern Nigeria to Cameroon. In the western Indian Ocean, the shark ranges from South Africa to southern Mozambique, but not around Madagascar. The sand tiger shark has also been sighted in the Red Sea and may be found as far east as India. In the western Pacific, it has been sighted in the waters around the coasts of Japan and Australia, but not around New Zealand. Annual migration Sand tigers in South Africa and Australia undertake an annual migration that may cover more than . They pup during the summer in relatively cold water (temperature ca. ). After parturition, they swim northwards toward sites where there are suitable rocks or caves, often at a water depth ca. , where they mate during and just after the winter. Mating normally takes place at night. After mating, they swim further north to even warmer water where gestation takes place. In autumn they return southwards to give birth in cooler water. This round trip may encompass as much as . The young sharks do not take part in this migration, but they are absent from the normal birth grounds during winter: it is thought that they move deeper into the ocean. At Cape Cod (USA), juveniles move away from coastal areas when water temperatures decrease below 16 °C and day length decreases to less than 12 h. Juveniles, however, return to their usual summer haunts, and as they mature they begin larger migratory movements. Behavior Hunting The sand tiger shark is a nocturnal feeder. During the day, they take shelter near rocks, overhangs, caves and reefs, often at relatively shallow depths (<20 m). This is the typical environment where divers encounter sand tigers, hovering just above the bottom in large sandy gutters and caves. However, at night they leave the shelter and hunt over the ocean bottom, often ranging far from their shelter. Sand tigers hunt by stealth. It is the only shark known to gulp air and store it in the stomach, allowing the shark to maintain near-neutral buoyancy, which helps it to hunt motionlessly and quietly. Aquarium observations indicate that when it comes close enough to a prey item, it grabs the prey with a quick sideways snap. The sand tiger shark has been observed to gather in hunting groups when preying upon large schools of fish. Diet The majority of prey items of sand tigers are demersal (i.e. from the sea bottom), suggesting that they hunt extensively on the sea bottom as far out as the continental shelf. Bony fish (teleosts) form about 60% of sand tigers' food, the remaining prey comprising sharks, skates, other rays, lobsters, crabs and squid. In Argentina, the prey includes mostly demersal fishes, e.g. the striped weakfish (Cynoscion guatucupa) and whitemouth croaker (Micropogonias furnieri). The most important elasmobranch prey is the bottom-living narrownose smooth-hound shark (Mustelus schmitti). Benthic (i.e. bottom-dwelling) rays and skates are also taken, including fanskates, eagle rays and the angular angel shark, with larger individuals feeding on a higher number of benthic elasmobranchs than smaller individuals. Stomach content analysis indicates that smaller sand tigers mainly focus on the sea bottom and as they grow larger they start to take more pelagic prey.
This perspective of the diet of sand tigers is consistent with similar observations in the northwest Atlantic and in South Africa, where large sand tigers capture a wider range of shark and skate species as prey, from the surf zone to the continental shelf, indicating the opportunistic nature of sand tiger feeding. Off South Africa, sand tigers less than in length prey on fish about a quarter of their own length; however, large sand tigers capture prey up to about half of their own length. The prey items are usually swallowed in three or four chunks. Courtship and mating Mating occurs around the months of August and December in the northern hemisphere and during August–October in the southern hemisphere. The courtship and mating of sand tigers has been best documented from observations in large aquaria. At Oceanworld in Sydney, the females tended to hover just above the sandy bottom ("shielding") when they were receptive. This prevented males from approaching from underneath towards their cloaca. Often there is more than one male close by, with the dominant one remaining close to the female, intimidating others with an aggressive display in which the dominant shark closely follows the tail of the subordinate, forcing the subordinate to accelerate and swim away. The dominant male snaps at smaller fish of other species. The male approaches the female and the two sharks protect the sandy bottom over which they interact. Strong interest of the male is indicated by superficial bites in the anal and pectoral fin areas of the female. The female responds with superficial biting of the male. This behaviour continues for several days, during which the male patrols the area around the female. The male regularly approaches the female in "nosing" behaviour to "smell" the cloaca of the female. If she is ready, she swims off with the male, while both partners contort their bodies so that the right clasper of the male enters the cloaca of the female. The male bites the base of her right pectoral fin, leaving scars that are easily visible afterwards. After one or two minutes, mating is complete and the two separate. Females often mate with more than one male. Females mate only every second or third year. After mating, the females remain behind, while the males move off to seek other areas to feed, resulting in many observations of sand tiger populations comprising almost exclusively females. Reproduction and growth Reproduction The reproductive pattern is similar to that of many of the Odontaspididae, the shark family to which sand tigers belong. Female sand tigers have two uterine horns that, during early embryonic development, may contain as many as 50 embryos that obtain nutrients from their yolk sacs and possibly consume uterine fluids. When one of the embryos reaches some in length, it eats all the smaller embryos so that only one large embryo remains in each uterine horn, a process called intrauterine cannibalism i.e. "embryophagy" or, more colorfully, adelphophagy—literally "eating one's brother." While multiple male sand tigers commonly fertilize a single female, adelphophagy sometimes excludes all but one of them from gaining offspring. These surviving embryos continue to feed on a steady supply of unfertilised eggs. After a lengthy labour, the female gives birth to long, fully independent offspring. The gestation period is approximately eight to twelve months.
These sharks give birth only every second or third year, resulting in an overall mean reproductive rate of less than one pup per year, one of the lowest reproductive rates for sharks. Growth In the north Atlantic, sand tiger sharks are born about 1 m in length. During the first year, they grow about 27 cm to reach 1.3 m. After that, the growth rate decreases by about 2.5 cm each year until it stabilises at about 7 cm/y. Males reach sexual maturity at an age of five to seven years and approximately in length. Females reach maturity when approximately long at about seven to ten years of age. They are normally not expected to reach lengths over 3 m; lengths around 2.2–2.5 m are more common. In the informal media, such as YouTube, there have been several reports of sand tigers around 5 m long, but none of these have been verified scientifically. Interaction with humans Attacks on humans As of 2023, the Florida Museum's International Shark Attack File lists 36 unprovoked, non-fatal attacks by sand tiger sharks. Over the weekend of 4 July 2023, there were four attacks attributed to sand tiger sharks in New York, USA. This followed a recent spike in shark attacks in New York state, with 13 incidents reported over a two-year period. Nets around swimming beaches In Australia and South Africa, one of the common practices in beach holiday areas is to erect shark nets around the beaches frequently used by swimmers. These nets are erected some from the shore and act as gill nets that trap incoming sharks: this was the norm until about 2005. In South Africa, the mortality of sand tiger sharks caused a significant decrease in the length of these animals, and it was concluded that the shark nets pose a significant threat to this species with its very low reproductive rate. Before 2000, these nets snagged about 200 sand tiger sharks per year in South Africa, of which only about 40% survived and were released alive. The efficiency of shark nets for the prevention of unprovoked shark attacks on bathers has been questioned; since 2000 the use of these nets has been reduced and alternative approaches are being developed. Competition for food with humans In Argentina, the prey items of sand tigers largely coincided with important commercial fisheries targets. Humans affect sand tiger food availability, and the sharks in turn compete with humans for food that has already been heavily exploited by the fisheries industry. The same applies to the bottom-living sea catfish (Galeichthys feliceps), a fisheries resource off the South African coast. Effects of scuba divers A study near Sydney in Australia found that the behaviour of the sharks is affected by the proximity of scuba divers. Diver activity affects the aggregation, swimming and respiratory behaviour of sharks, but only at short time scales. The group size of scuba divers was less important in affecting sand tiger behaviour than the distance within which they approached the sharks. Divers approaching to within 3 m of sharks affected their behaviour, but after the divers had retreated, the sharks resumed normal behaviour. Other studies indicate sand tiger sharks can be indifferent to divers. Scuba divers are normally compliant with Australian shark-diving regulations. In captivity Its large and menacing appearance, combined with its relative placidity, has made the sand tiger shark among the most popular shark species to be displayed in public aquaria.
However, as with all large sharks, keeping them in captivity is not without its difficulties. Sand tiger sharks have been found to be highly susceptible to developing spinal deformities, with as many as one in every three captive sharks being affected, giving them a hunched appearance. These deformities have been hypothesized to be correlated to both the size and shape of their tank. If the tank is too small, the sharks have to spend more time actively swimming than they would in the wild, where they have space to glide. Also, sharks in small, circular tanks often spend most of their time circling along the edges in only one direction, causing asymmetrical stress on their bodies. Threats and conservation status Threats There are several factors contributing to the decline in the population of the sand tigers. Sand tigers reproduce at an unusually low rate because they do not have more than two pups at a time and because they breed only every second or third year. This shark is a highly prized food item in the western North Pacific, off Ghana and off India and Pakistan, where they are caught by fishing trawlers, although they are more commonly caught with a fishing line. Sand tigers' fins are a popular trade item in Japan. Off North America, it is fished for its hide and fins. Shark liver oil is a popular ingredient in cosmetic products such as lipstick. It is sought by anglers in fishing competitions in South Africa and some other countries. In Australia, its numbers were reduced by spear fishers using poison; it is now protected there. It is also prized as an aquarium exhibit in the United States, Europe, Australia and South Africa because of its docile and hardy nature. Thus, overfishing is a major contributor to the population decline. All indications are that the world population of sand tigers has been reduced significantly in size since 1980. Many sand tigers are caught in shark nets, and then either strangled or taken by fishermen. Estuaries along the United States' eastern Atlantic coast house many of the young sand tiger sharks. These estuaries are susceptible to non-point source pollution that is harmful to the pups. In Eastern Australia, the breeding population was estimated to be fewer than 400 reproductively mature animals, a number believed to be too small to sustain a healthy population. Conservation status This species is therefore listed as Critically Endangered on the International Union for Conservation of Nature Red List, and as endangered under Queensland's Nature Conservation Act 1992. It is a U.S. National Marine Fisheries Service Species of Concern, a designation for species about which the U.S. Government's National Oceanic and Atmospheric Administration, National Marine Fisheries Service (NMFS), has some concerns regarding status and threats, but for which insufficient information is available to indicate a need to list the species under the U.S. Endangered Species Act. According to the National Marine Fisheries Service, any sand tiger shark caught must be released immediately with minimal harm; it is considered a prohibited species, making it illegal to harvest any part of the sand tiger shark on the United States' Atlantic coast. A recent report from the Pew Charitable Trusts suggests that a new management approach used for large mammals that have suffered population declines could hold promise for sharks.
Because of the life-history characteristics of sharks, conventional fisheries management approaches, such as reaching maximum sustainable yield, may not be sufficient to rebuild depleted shark populations. Some of the more stringent approaches used to reverse declines in large mammals may be appropriate for sharks, including prohibitions on the retention of the most vulnerable species and regulation of international trade.
Biology and health sciences
Sharks
Animals
1288805
https://en.wikipedia.org/wiki/Ceratopsia
Ceratopsia
Ceratopsia or Ceratopia (Greek: "horned faces") is a group of herbivorous, beaked dinosaurs that thrived in what are now North America, Europe, and Asia during the Cretaceous Period, although ancestral forms lived earlier, in the Jurassic. The earliest known ceratopsian, Yinlong downsi, lived between 161.2 and 155.7 million years ago. The last ceratopsian species, Triceratops prorsus, became extinct during the Cretaceous–Paleogene extinction event. Triceratops is by far the best-known ceratopsian to the general public. It is traditional for ceratopsian genus names to end in "-ceratops", although this is not always the case. One of the first named genera was Ceratops itself, which lent its name to the group, although it is considered a nomen dubium today, as its fossil remains have no distinguishing characteristics that are not also found in other ceratopsians. Description Early members of the ceratopsian group, such as Psittacosaurus, were small bipedal animals. Later members, including ceratopsids like Centrosaurus and Triceratops, became very large quadrupeds and developed elaborate facial horns and frills extending over the neck. While these frills might have served to protect the vulnerable neck from predators, they may also have been used for display, thermoregulation, the attachment of large neck and chewing muscles, or some combination of the above. Ceratopsians thus ranged in size from small early bipeds to the very large later quadrupeds. Ceratopsians are easily recognized by features of the skull. On the tip of a ceratopsian upper jaw is the rostral bone, an edentulous (toothless) ossification unique to ceratopsians. Othniel Charles Marsh recognized and named this bone, which acts as a mirror image of the predentary bone on the lower jaw. This ossification evolved to aid the chewing of plant matter. Along with the predentary bone, which forms the tip of the lower jaw in all ornithischians, the rostral forms a superficially parrot-like beak. Also, the jugal bones below the eye are prominent, flaring out sideways to make the skull appear somewhat triangular when viewed from above. This triangular appearance is accentuated in later ceratopsians by the rearwards extension of the parietal and squamosal bones of the skull roof, to form the neck frill. The neck frills of ceratopsids are surrounded by the epoccipital bones. The name is a misnomer, as they are not associated with the occipital bone. Epoccipitals begin as separate bones that fuse during the animal's growth to either the squamosal or parietal bones that make up the base of the frill. These bones were ornamental instead of functional, and may have helped differentiate species. Epoccipitals probably were present in all known ceratopsids. They appear to have been broadly different between short-frilled ceratopsids (centrosaurines) and long-frilled ceratopsids (chasmosaurines), being elliptical with constricted bases in the former group, and triangular with wide bases in the latter group. Within these broad definitions, different species would have somewhat different shapes and numbers. In centrosaurines especially, like Centrosaurus, Pachyrhinosaurus, and Styracosaurus, these bones become long and spike- or hook-like. A well-known example is the coarse sawtooth fringe of broad triangular epoccipitals on the frill of Triceratops. In terms of its development, this ossification is dermal. The term epoccipital was coined by paleontologist Othniel Charles Marsh in 1889.
History of study The first ceratopsian remains known to science were discovered during the U.S. Geological and Geographical Survey of the Territories led by the American geologist F.V. Hayden. Teeth discovered during an 1855 expedition to Montana were first assigned to hadrosaurids and included within the genus Trachodon. It was not until the early 20th century that some of these were recognized as ceratopsian teeth. During another of Hayden's expeditions in 1872, Fielding Bradford Meek found several giant bones protruding from a hillside in southwestern Wyoming. He alerted paleontologist Edward Drinker Cope, who led a dig to recover the partial skeleton. Cope recognized the remains as a dinosaur, but noted that even though the fossil lacked a skull, it was different from any type of dinosaur then known. He named the new species Agathaumas sylvestris, meaning "marvellous forest-dweller". Soon after, Cope named two more dinosaurs that would eventually come to be recognized as ceratopsids: Polyonax and Monoclonius. Monoclonius was notable for the number of disassociated remains found, including the first evidence of ceratopsid horns and frills. Several Monoclonius fossils were found by Cope, assisted by Charles Hazelius Sternberg, in summer 1876 near the Judith River in Chouteau County, Montana. Since the ceratopsians had not yet been recognized as a distinctive group, Cope was uncertain about much of the fossil material, recognizing neither the nasal horn core nor the brow horns for what they were. The frill bone was interpreted as a part of the breastbone. In 1888 and 1889, Othniel Charles Marsh described the first well-preserved horned dinosaurs, Ceratops and Triceratops. In 1890 Marsh classified them together in the family Ceratopsidae and the order Ceratopsia. This prompted Cope to reexamine his own specimens and to realize that Triceratops, Monoclonius, and Agathaumas all represented a single group of similar dinosaurs, which he named Agathaumidae in 1891. Cope redescribed Monoclonius as a horned dinosaur, with a large nasal horn and two smaller horns over the eyes, and a large frill. Classification Ceratopsia was coined by Othniel Charles Marsh in 1890 to include dinosaurs possessing certain characteristic features, including horns, a rostral bone, teeth with two roots, fused neck vertebrae, and a forward-oriented pubis. Marsh considered the group distinct enough to warrant its own suborder within Ornithischia. The name is derived from the Greek κέρας/kéras meaning 'horn' and ὄψῐς/ópsis meaning 'appearance, view' and by extension 'face'. As early as the 1960s, it was noted that the name Ceratopsia is linguistically incorrect and that it should be Ceratopia. However, this spelling, while technically correct, has been used only rarely in the scientific literature, and the vast majority of paleontologists continue to use Ceratopsia. As the ICZN does not govern taxa above the level of superfamily, this is unlikely to change. Following Marsh, Ceratopsia has usually been classified as a suborder within the order Ornithischia. While ranked taxonomy has largely fallen out of favor among dinosaur paleontologists, some researchers have continued to employ such a classification, though sources have differed on what its rank should be. Most who still employ ranks have retained its traditional ranking of suborder, though some have reduced it to the level of infraorder.
Phylogeny In clade-based phylogenetic taxonomy, Ceratopsia is officially defined in the PhyloCode as "the largest clade containing Ceratops montanus and Triceratops horridus, but not Pachycephalosaurus wyomingensis". Under this definition, the most basal known ceratopsians are the family Chaoyangsauridae and the well-known genus Psittacosaurus, from the Early Cretaceous Period, all of which were discovered in northern China or Mongolia. The rostral bone and flared jugals are already present in all of these forms, indicating that even earlier ceratopsians remain to be discovered. The clade Neoceratopsia is defined as "the largest clade containing Triceratops horridus, but not Chaoyangsaurus youngi and Psittacosaurus mongoliensis". By this definition, only the members of Chaoyangsauridae and Psittacosaurus are excluded from Neoceratopsia, while all more derived ceratopsians are part of this clade. A slightly less inclusive group is Euceratopsia, named and defined by Daniel Madzia and colleagues in 2021 as "the smallest clade containing Leptoceratops gracilis, Protoceratops andrewsi, and Triceratops horridus". This clade includes the family Leptoceratopsidae and all more derived ceratopsians. Leptoceratopsids are a mostly North American group of generally small-bodied, quadrupedal ceratopsians. Another subset of neoceratopsians is called Coronosauria, which is "the smallest clade containing Protoceratops andrewsi and Triceratops horridus". Coronosaurs show the first development of the neck frill and the fusion of the first several neck vertebrae to support the increasingly heavy head. Within Coronosauria, two groups are generally recognized. One group can be called Protoceratopsidae and includes Protoceratops and its closest relatives, all Asian. The other group, Ceratopsoidea, includes the family Ceratopsidae and closely related animals like Zuniceratops. This clade is defined as "the largest clade containing Ceratops montanus and Triceratops horridus, but not Protoceratops andrewsi". Ceratopsidae itself includes Triceratops and all the large North American ceratopsians and is further divided into the subfamilies Centrosaurinae and Chasmosaurinae. All previously published neoceratopsian phylogenetic analyses were incorporated into the analysis of Eric M. Morschhauser and colleagues in 2019, along with all previously published diagnostic species excluding the incomplete juvenile Archaeoceratops yujingziensis and the problematic genera Bainoceratops, Lamaceratops, Platyceratops and Gobiceratops, which are very closely related to and potentially synonymous with Bagaceratops. While there were many unresolved areas of the strict consensus, including all of Leptoceratopsidae, a single most parsimonious tree was found that was most consistent with the relative ages of the taxa included, which is shown below. Paleobiology Unlike in almost all other dinosaur groups, skulls are the most commonly preserved elements of ceratopsian skeletons, and many species are known only from skulls. There is a great deal of variation between and even within ceratopsian species. Complete growth series from embryo to adult are known for Psittacosaurus and Protoceratops, allowing the study of ontogenetic variation in these species. Most restorations of ceratopsians show them with erect hindlimbs but semi-sprawling forelimbs, which suggests that they were not fast movers.
However, Paul and Christiansen (2000) argued that at least the later ceratopsians had upright forelimbs, and that the larger species may have been as fast as rhinos, which can run at up to 56 km/h (35 mph). A nocturnal lifestyle has been suggested for the primitive ceratopsian Protoceratops. However, comparisons between the scleral rings of Protoceratops and Psittacosaurus and those of modern birds and reptiles indicate that they may have been cathemeral, active throughout the day at short intervals. Paleoecology Paleobiogeography Ceratopsia appears to have originated in Asia, as all of the earliest members are found there. Fragmentary remains, including teeth, which appear to be neoceratopsian, are found in North America from the Albian stage (112 to 100 million years ago), indicating that the group had dispersed across what is now the Bering Strait by the middle of the Cretaceous Period. Almost all leptoceratopsids are North American, aside from Udanoceratops, which may represent a separate dispersal event back into Asia. Ceratopsids and their immediate ancestors, such as Zuniceratops, were unknown outside of western North America, and were presumed endemic to that continent. The traditional view that ceratopsoids originated in North America was called into question by the 2009 discovery of better specimens of the dubious Asian form Turanoceratops, which suggest it may actually be a ceratopsid. It is unknown whether this would indicate that ceratopsids actually originated in Asia, or that Turanoceratops immigrated from North America. Possible ceratopsians from the Southern Hemisphere include the Australian Serendipaceratops, known from an ulna, and Notoceratops from Argentina, known from a single toothless jaw (which has since been lost). Craspedodon from the Late Cretaceous (Santonian) of Belgium may also be a ceratopsian, specifically a neoceratopsian closer to Ceratopsoidea than to Protoceratopsidae. Possible leptoceratopsid remains have also been described from the early Campanian of Sweden. Ecological role Psittacosaurus and Protoceratops are the most common dinosaurs in the different Mongolian sediments where they are found. Triceratops fossils are by far the most common dinosaur remains found in the latest Cretaceous rocks of the western United States, making up as much as five-sixths of the large dinosaur fauna in some areas. These facts indicate that some ceratopsians were the dominant herbivores in their environments. Some species of ceratopsians, especially Centrosaurus and its relatives, appear to have been gregarious, living in herds. This is suggested by bonebed finds containing the remains of many individuals of different ages. Like modern migratory herds, they would have had a significant effect on their environment, as well as serving as a major food source for predators.
Bike lane
Bike lanes (US) or cycle lanes (UK) are types of bikeways (cycleways) with lanes on the roadway for cyclists only. In the United Kingdom, an on-road cycle lane can be mandatory (marked with a solid white line; entry by motor vehicles is prohibited) or advisory (marked with a broken white line; entry by motor vehicles is permitted). In the United States, a designated bicycle lane (1988 MUTCD) or class II bikeway (Caltrans) is always marked by a solid white stripe on the pavement and is for 'preferential use' by bicyclists. There is also a class III bicycle route, which has roadside signs suggesting a route for cyclists and urging sharing the road. A class IV separated bikeway (Caltrans) is a bike lane that is physically separated from motor traffic and restricted to bicyclists only. Research shows that separated bike lanes improve the safety of bicyclists, and have either positive or non-significant economic effects on nearby businesses. Effects In a 2024 assessment of existing research, the U.S. Department of Transportation concluded that "separated bicycle lanes have an overall improved safety performance." According to a 2019 study, cities with separated bike lanes had 44% fewer road fatalities and 50% fewer serious injuries from crashes. The relationship was particularly strong in cities where bike lanes were separated from car lanes with physical barriers. Research published in 2020 drew on communities where on-road cycling for transportation is less common, particularly in the Southeast U.S., and reported that potential cyclists say separated bike lanes would make them more likely to participate in active transportation. However, scientific research indicates that different groups of cyclists show varying preferences regarding which aspects of cycling infrastructure are most relevant when choosing one cycling route over another; thus, to maximize use, these different groups of cyclists have to be taken into account. A 2019 study, which examined the replacement of 136 on-street parking spots with a bike lane on the Bloor Street retail corridor in Toronto, Canada, found that the change increased monthly customer spending and the number of customers on the street. These findings run contrary to a popular sentiment that bike lanes have an adverse effect on local economic activity. A 2021 review of existing research found that closing car lanes and replacing them with bike lanes or pedestrian lanes had positive or non-significant economic effects on nearby businesses. United States According to the National Association of City Transportation Officials (NACTO), bike lanes designate an exclusive space for cyclists through pavement markings and signage. Bike lanes flow in the same direction as motor vehicle traffic and are located adjacent to it. Conventional bike lanes provide limited buffer space between vehicles and cyclists; those with additional protective space are referred to as buffered bike lanes. In a buffered bike lane, the extra space can lie between the bike lane and moving vehicles, parked vehicles, or both, and is typically about the width of a car door. Contra-flow bike lanes allow cyclists to travel in the opposite direction of vehicle traffic flow; they are found on one-way streets, which then allow two-way directional traffic for cyclists. Left-side bike lanes are lanes placed on the left side of one-way streets, or along a median on two-way divided streets.
The Manual on Uniform Traffic Control Devices (MUTCD) by the U.S. Department of Transportation Federal Highway Administration (FHWA) sets standards for how bike lanes should be implemented with regard to pavement markings and signage. These include the word, symbol, and arrow sizes to be used in a bike lane and the width of the lane itself, which ranges from 5 to 7 feet. Cities across America are actively expanding their bike lane networks; Boston, Massachusetts, for example, has created city-wide goals, Go Boston 2030, to increase its bike network. Europe In France, segregated cycling facilities on the carriageway are called bande cyclable, those beside the carriageway or totally independent ones piste cyclable, and all together voie cyclable. In Belgium, traffic laws do not distinguish cycle lanes from cyclepaths. Cycle lanes are marked by two parallel broken white lines, and they are defined as being "not wide enough to allow use by motor vehicles". Some confusion is possible here: both in French (piste cyclable) and in Dutch (fietspad), the term for these lanes can also denote a segregated cycle track, marked by a road sign; the cycle lane is therefore often referred to as a "piste cyclable marquée" (in French) or a "gemarkeerd fietspad" (in Dutch), i.e. a cycle lane/track which is "marked" (identified by road markings) rather than one which is identified by a road sign. In the Netherlands, the cycle lane is normally called "fietsstrook" instead of "fietspad". Asia Commuting by bicycle is quite common in some Asian countries like Japan, where bicycle ridership has been increasing dramatically since the 1970s. Despite this, many parts of Japan have been slow to adopt effective and safe means of cycling transport, so in recent times steps have been taken to promote biking in the nation's largest city, Tokyo. Many bike lanes in Tokyo have been constructed to allow two-directional bicycle traffic in a single lane while adding physical separation between pedestrians, bike lanes, and the roads. In addition to these, there are other forms of bike lanes within various parts of the Tokyo wards that do not protect bicycle users from pedestrians but do protect them from the road. These lanes are typically designated with overhead signs and some form of painted line to denote a lane for pedestrians and a lane for bikers, yet these rules are often not adhered to. Beyond these forms of bike lanes in Tokyo, there are several other types, which mostly consist of some alteration of the aforementioned two, or are simply painted lanes on the side of the road. In other parts of Japan, such as the city of Fukuoka, there are clear types of bike lanes being implemented to promote biking in the city: "Bicycle roads, Bicycle lanes, Sidewalks shared between pedestrians and cyclists with markings, and Sidewalks shared with pedestrian with no markings." Other countries in Asia, like China, have larger networks of bike paths and lanes dedicated to cycling infrastructure. The city of Nanjing, China, has several types of bike lanes: protected, unprotected, and shared lanes. These lanes are similar to those of other nations, in that separated bike lanes are created either through physical barriers of some form or as entirely separate street paths. Unprotected bike lanes are painted on the street alongside vehicle traffic but mark out their own lane, and shared bike lanes are not marked at all; bikes are simply expected to share the entirety of the road with cars on that stretch.
In addition, Chinese bike usage is relatively high compared to other nations, and cycling is therefore taken into account when designing road interchanges. Many interchanges include various paths for bicycle users to take so that they do not have to come into direct contact with motorized vehicles. Lastly, there has been increasing concern over the nature of biking accidents in China; a case study in Shanghai identified the conversion of unprotected bicycle lanes into protected ones as the most urgently needed change.
Chemical weapon
A chemical weapon (CW) is a specialized munition that uses chemicals formulated to inflict death or harm on humans. According to the Organisation for the Prohibition of Chemical Weapons (OPCW), this can be any chemical compound intended as a weapon "or its precursor that can cause death, injury, temporary incapacitation or sensory irritation through its chemical action. Munitions or other delivery devices designed to deliver chemical weapons, whether filled or unfilled, are also considered weapons themselves." Chemical weapons are classified as weapons of mass destruction (WMD), though they are distinct from nuclear weapons, biological weapons, and radiological weapons. All may be used in warfare and are known by the military acronym NBC (for nuclear, biological, and chemical warfare). Weapons of mass destruction are distinct from conventional weapons, which are primarily effective due to their explosive, kinetic, or incendiary potential. Chemical weapons can be widely dispersed in gas, liquid and solid forms, and may easily afflict people other than the intended targets. Nerve gas, tear gas, and pepper spray are three modern examples of chemical weapons. Lethal unitary chemical agents and munitions are extremely volatile, and they constitute a class of hazardous chemical weapons that have been stockpiled by many nations. Unitary agents are effective on their own and do not require mixing with other agents. The most dangerous of these are nerve agents (GA, GB, GD, and VX) and vesicant (blister) agents, which include formulations of sulfur mustard such as H, HT, and HD. All are liquids at normal room temperature but become gaseous when released. Widely used during World War I, the effects of so-called mustard gas, phosgene gas, and others caused searing of the lungs, blindness, death, and maiming. During World War II, the Nazi regime used a commercial hydrogen cyanide blood agent trade-named Zyklon B to commit industrialised genocide against Jews and other targeted populations in large gas chambers. The Holocaust resulted in the largest death toll from chemical weapons in history. CS gas and pepper spray remain in common use for policing and riot control; both are considered non-lethal weapons. Under the Chemical Weapons Convention (1993), there is a legally binding, worldwide ban on the production, stockpiling, and use of chemical weapons and their precursors. However, large stockpiles of chemical weapons continue to exist, usually justified as a precaution against possible use by an aggressor. Continued storage of these chemical weapons is a hazard, as many of the weapons are now more than 50 years old, raising risks significantly. Use Chemical warfare involves using the toxic properties of chemical substances as weapons. This type of warfare is distinct from nuclear warfare and biological warfare, which together make up NBC, the military initialism for nuclear, biological, and chemical warfare or weapons. None of these fall under the term conventional weapons, which are primarily effective because of their destructive potential. Chemical warfare does not depend upon explosive force to achieve an objective; it depends upon the unique properties of the chemical agent weaponized. A lethal agent is designed to injure, incapacitate, or kill an opposing force, or to deny unhindered use of a particular area of terrain. Defoliants are used to quickly kill vegetation and deny its use for cover and concealment.
Chemical warfare can also be used against agriculture and livestock to promote hunger and starvation. Chemical payloads can be delivered by remote-controlled container release, aircraft, or rocket. Protection against chemical weapons includes proper equipment, training, and decontamination measures. History Simple chemical weapons were used sporadically throughout antiquity and into the Industrial age. It was not until the 19th century that the modern conception of chemical warfare emerged, as various scientists and nations proposed the use of asphyxiating or poisonous gases. So alarmed were nations that multiple international treaties were passed banning chemical weapons. This, however, did not prevent the extensive use of chemical weapons in World War I. Chlorine gas, among others, was used by both sides to try to break the stalemate of trench warfare. Though largely ineffective over the long run, it decidedly changed the nature of the war. In most cases the gases used did not kill, but instead horribly maimed, injured, or disfigured casualties. Estimates for military gas casualties range from 500,000 to 1.3 million, with a few thousand additional civilian casualties as collateral damage or production accidents. The interwar period saw occasional use of chemical weapons, mainly by European colonial forces to put down rebellions. The Italians also used poison gas during their invasion of Ethiopia in 1935–36. In Nazi Germany, much research went into developing new chemical weapons, such as potent nerve agents. However, chemical weapons saw little battlefield use in World War II. Both sides were prepared to use such weapons, but the Allied powers never did, and the Axis used them only very sparingly. The reason for the lack of use by the Nazis, despite the considerable efforts that had gone into developing new varieties, might have been a lack of technical ability or fears that the Allies would retaliate with their own chemical weapons. Those fears were not unfounded: the Allies made comprehensive plans for defensive and retaliatory use of chemical weapons, and stockpiled large quantities. Japanese forces used them more widely, though only against their Asian enemies, as they likewise feared that using them on Western powers would result in retaliation. Chemical weapons were frequently used against Kuomintang and Chinese communist troops. The Nazis, however, did extensively use poison gas against civilians in the Holocaust. Vast quantities of Zyklon B gas and carbon monoxide were used in the gas chambers of Nazi extermination camps, accounting for the overwhelming majority of some three million deaths there. This remains the deadliest use of poison gas in history. The post-war era has seen limited, though devastating, use of chemical weapons. Some 100,000 Iranian troops were casualties of Iraqi chemical weapons during the Iran–Iraq War. Iraq used mustard gas and nerve agents against its own civilians in the 1988 Halabja chemical attack. The Cuban intervention in Angola saw limited use of organophosphates. The Syrian government has used sarin, chlorine, and mustard gas in the Syrian civil war, generally against civilians. Terrorist groups have also used chemical weapons, notably in the Tokyo subway sarin attack and the Matsumoto incident.
Redlich–Kwong equation of state
In physics and thermodynamics, the Redlich–Kwong equation of state is an empirical, algebraic equation that relates temperature, pressure, and volume of gases. It is generally more accurate than the van der Waals equation and the ideal gas equation at temperatures above the critical temperature. It was formulated by Otto Redlich and Joseph Neng Shun Kwong in 1949. It showed that a two-parameter, cubic equation of state could well reflect reality in many situations, standing alongside the much more complicated Beattie–Bridgeman model and Benedict–Webb–Rubin equation that were used at the time. Although it was initially developed for gases, the Redlich–Kwong equation is among the most frequently modified equations of state, as the modifications aim to generalize the predictive results obtained from it. Although the original equation is not currently employed in practical applications, modifications derived from this mathematical model, like the Soave–Redlich–Kwong (SRK) and Peng–Robinson equations, have been improved and are currently used in simulation and research of vapor–liquid equilibria. Equation The Redlich–Kwong equation is formulated as:

p = \frac{RT}{V_m - b} - \frac{a}{\sqrt{T}\,V_m(V_m + b)}

where: p is the gas pressure, R is the gas constant, T is temperature, V_m is the molar volume (V/n), a is a constant that corrects for the attractive potential of molecules, and b is a constant that corrects for volume. The constants are different depending on which gas is being analyzed. The constants can be calculated from the critical point data of the gas:

a = 0.42748\,\frac{R^2 T_c^{5/2}}{p_c}, \qquad b = 0.08664\,\frac{R T_c}{p_c}

where: T_c is the temperature at the critical point, and p_c is the pressure at the critical point. The Redlich–Kwong equation can also be represented as an equation for the compressibility factor of a gas, as a function of temperature and pressure:

Z = \frac{pV_m}{RT} = \frac{1}{1 - h} - \frac{a}{bRT^{3/2}}\cdot\frac{h}{1 + h}, \qquad \text{where } h = \frac{b}{V_m}.

Or more simply, as a cubic in Z:

Z^3 - Z^2 + \left(A - B - B^2\right)Z - AB = 0, \qquad A = \frac{ap}{R^2T^{5/2}}, \quad B = \frac{bp}{RT}.

This equation only implicitly gives Z as a function of pressure and temperature, but it is easily solved numerically, originally by graphical interpolation and now more easily by computer. Moreover, analytic solutions to cubic functions have been known for centuries and are even faster for computers. The Redlich–Kwong equation of state may also be expressed as a cubic function of the molar volume:

V_m^3 - \frac{RT}{p}V_m^2 + \left(\frac{a}{p\sqrt{T}} - \frac{bRT}{p} - b^2\right)V_m - \frac{ab}{p\sqrt{T}} = 0.

For all Redlich–Kwong gases,

Z_c = \frac{1}{3}

where: Z_c is the compressibility factor at the critical point. Using p = p_r p_c, T = T_r T_c and V_m = V_{m,r} V_{m,c}, the equation of state can be written in the reduced form:

p_r = \frac{3T_r}{V_{m,r} - b'} - \frac{1}{b'\sqrt{T_r}\,V_{m,r}(V_{m,r} + b')}

with b' = 2^{1/3} - 1 \approx 0.26. From the Redlich–Kwong equation, the fugacity coefficient of a gas can be estimated:

\ln\varphi = \ln\frac{f}{p} = Z - 1 - \ln(Z - B) - \frac{A}{B}\ln\left(1 + \frac{B}{Z}\right).

Critical constants It is possible to express the critical constants T_c and p_c as functions of a and b by reversing the system of the two equations a(T_c, p_c) and b(T_c, p_c) in the two variables T_c, p_c:

T_c = \left(\frac{0.08664\,a}{0.42748\,R\,b}\right)^{2/3}, \qquad p_c = 0.08664\,\frac{R T_c}{b}.

Because of the definition of the compressibility factor at the critical condition, it is possible to invert it to find the critical molar volume V_{m,c} = Z_c R T_c / p_c, knowing the previously found p_c, T_c and Z_c = 1/3. Multiple components The Redlich–Kwong equation was developed with the intent that it also be applicable to mixtures of gases. In a mixture, the b term, representing the volume of the molecules, is an average of the b values of the components, weighted by the mole fractions:

b = \sum_i x_i b_i, \qquad \text{or equivalently} \qquad B = \sum_i x_i B_i, \qquad \text{or more generally} \qquad b = \sum_i \sum_j x_i x_j\, b_{ij}

where: x_i is the mole fraction of the ith component of the mixture, b_{ij} is the covolume parameter of the i–j pair in the mixture, and B_i is the B value of the ith component of the mixture. The cross-terms b_{ij} (i.e. terms for which i ≠ j) are commonly computed as

b_{ij} = \frac{b_i + b_j}{2}\left(1 - \ell_{ij}\right),

where \ell_{ij} is an often empirically fitted interaction parameter accounting for asymmetry in the cross interactions.
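As a concrete illustration of the pure-component relations given above, the following is a minimal Python sketch, not part of the original article: it computes a and b from critical-point data and solves the cubic in Z numerically. The function names and the approximate CO2 critical constants are assumptions made for the example.

```python
import numpy as np

R = 8.314  # J/(mol*K), universal gas constant

def rk_constants(Tc, pc):
    """Redlich-Kwong a and b from critical temperature (K) and pressure (Pa)."""
    a = 0.42748 * R**2 * Tc**2.5 / pc
    b = 0.08664 * R * Tc / pc
    return a, b

def rk_Z(T, p, Tc, pc):
    """Compressibility factor from the cubic
    Z^3 - Z^2 + (A - B - B^2) Z - A B = 0,
    with A = a p / (R^2 T^2.5) and B = b p / (R T).
    Returns the largest real root, which corresponds to the vapor phase;
    the smallest real root, when distinct, corresponds to the liquid."""
    a, b = rk_constants(Tc, pc)
    A = a * p / (R**2 * T**2.5)
    B = b * p / (R * T)
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    return roots[np.isreal(roots)].real.max()

# Example: CO2, using approximate critical data Tc ~ 304.1 K, pc ~ 7.38 MPa
Z = rk_Z(T=350.0, p=2.0e6, Tc=304.1, pc=7.38e6)
print(f"Z = {Z:.4f}")  # noticeably below 1, as expected for a real gas
```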
The constant representing the attractive forces, a, is not linear with respect to mole fraction, but rather depends on the square of the mole fractions. That is:

a = \sum_i \sum_j x_i x_j\, a_{ij}

where: a_{ij} is the attractive term between a molecule of species i and a molecule of species j, x_i is the mole fraction of the ith component of the mixture, and x_j is the mole fraction of the jth component of the mixture. It is generally assumed that the attractive cross terms represent the geometric average of the individual a terms, adjusted using an interaction parameter k_{ij}, that is:

a_{ij} = \sqrt{a_i a_j}\,\left(1 - k_{ij}\right),

where the interaction parameter k_{ij} is an often empirically fitted parameter accounting for asymmetry in the molecular cross-interactions. In this case (all k_{ij} = 0), the following equation for the attractive term is furnished:

A = \left(\sum_i x_i \sqrt{A_i}\right)^2

where A_i is the A term for the ith component of the mixture. These manners of creating a and b parameters for a mixture from the parameters of the pure fluids are commonly known as the van der Waals one-fluid mixing and combining rules. History The Van der Waals equation, formulated in 1873 by Johannes Diderik van der Waals, is generally regarded as the first somewhat realistic equation of state (beyond the ideal gas law):

p = \frac{RT}{V_m - b} - \frac{a}{V_m^2}

However, its modeling of real behavior is not sufficient for many applications, and by 1949 it had fallen out of favor, with the Beattie–Bridgeman and Benedict–Webb–Rubin equations of state being used preferentially, both of which contain more parameters than the Van der Waals equation. The Redlich–Kwong equation was developed by Redlich and Kwong while they were both working for the Shell Development Company at Emeryville, California. Kwong had begun working at Shell in 1944, where he met Otto Redlich when he joined the group in 1945. The equation arose out of their work at Shell: they wanted an easy, algebraic way to relate the pressures, volumes, and temperatures of the gases they were working with, mostly non-polar and slightly polar hydrocarbons (the Redlich–Kwong equation is less accurate for hydrogen-bonding gases). It was presented jointly in Portland, Oregon, at the Symposium on Thermodynamics and Molecular Structure of Solutions in 1948, as part of the 114th Meeting of the American Chemical Society. The success of the Redlich–Kwong equation in modeling many real gases accurately demonstrated that a cubic, two-parameter equation of state can give adequate results, if it is properly constructed. After they demonstrated the viability of such equations, many others created equations of similar form to try to improve on the results of Redlich and Kwong. Derivation The equation is essentially empirical; the derivation is neither direct nor rigorous. The Redlich–Kwong equation is very similar to the Van der Waals equation, with only a slight modification being made to the attractive term, giving that term a temperature dependence. At high pressures, the volume of all gases approaches some finite volume, largely independent of temperature, that is related to the size of the gas molecules. This volume is reflected in the b in the equation. It is empirically true that this volume is about 0.26V_c (where V_c is the volume at the critical point). This approximation is quite good for many small, non-polar compounds: the value ranges between about 0.24V_c and 0.28V_c. In order for the equation to provide a good approximation of volume at high pressures, it had to be constructed such that b \approx 0.26V_c. The first term in the equation represents this high-pressure behavior. The second term corrects for the attractive force of the molecules to each other.
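The van der Waals one-fluid mixing and combining rules described above translate directly into a few lines of code. The following minimal Python sketch is an illustration rather than part of the original article; the function name is an assumption and the component parameters shown in the usage line are made-up placeholders.

```python
import numpy as np

def mix_rk_parameters(x, a_pure, b_pure, k=None):
    """van der Waals one-fluid mixing rules for Redlich-Kwong parameters.

    x      : mole fractions (should sum to 1)
    a_pure : pure-component attractive parameters a_i
    b_pure : pure-component covolumes b_i
    k      : optional matrix of binary interaction parameters k_ij
    """
    x = np.asarray(x, dtype=float)
    a_pure = np.asarray(a_pure, dtype=float)
    b_pure = np.asarray(b_pure, dtype=float)
    if k is None:
        k = np.zeros((len(x), len(x)))  # default: no empirical correction

    # Attractive term: a = sum_i sum_j x_i x_j sqrt(a_i a_j) (1 - k_ij)
    a_ij = np.sqrt(np.outer(a_pure, a_pure)) * (1.0 - k)
    a_mix = x @ a_ij @ x

    # Covolume: linear in mole fraction, b = sum_i x_i b_i
    b_mix = x @ b_pure
    return a_mix, b_mix

# Usage with illustrative (made-up) values in SI units:
a_mix, b_mix = mix_rk_parameters(x=[0.3, 0.7],
                                 a_pure=[6.46, 1.45],
                                 b_pure=[2.97e-5, 2.68e-5])
```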
The functional form of a with respect to the critical temperature and pressure is empirically chosen to give the best fit at moderate pressures for most relatively non-polar gases. In reality, the values of a and b are completely determined by the equation's shape and cannot be chosen freely. Requiring the equation to hold at the critical point, enforcing the thermodynamic criteria for a critical point,

\left(\frac{\partial p}{\partial V_m}\right)_T = 0 \qquad \text{and} \qquad \left(\frac{\partial^2 p}{\partial V_m^2}\right)_T = 0,

and, without loss of generality, defining b' = b/V_{m,c} and Z_c = p_c V_{m,c}/(RT_c) yields three constraints. Simultaneously solving these while requiring b' and Z_c to be positive yields only one solution:

Z_c = \frac{1}{3}, \qquad b' = 2^{1/3} - 1 \approx 0.26,

from which the expressions for a and b given above follow. Modification The Redlich–Kwong equation was designed largely to predict the properties of small, non-polar molecules in the vapor phase, which it generally does well. However, it has been subject to various attempts to refine and improve it. In 1975, Redlich himself published an equation of state adding a third parameter, in order to better model the behavior of both long-chained and more polar molecules. His 1975 equation was not so much a modification of the original equation as a re-invention of a new equation of state, and it was also formulated so as to take advantage of computer calculation, which was not available at the time the original equation was published. Many others have offered competing equations of state, either modifications of the original equation or equations quite different in form. It was recognized by the mid-1960s that, to significantly improve the equation, the parameters, especially a, would need to become temperature dependent. As early as 1966, Barner noted that the Redlich–Kwong equation worked best for molecules with an acentric factor (ω) close to zero. He therefore proposed a modification to the attractive term in which α, the attractive term of the original Redlich–Kwong equation, is adjusted by a parameter γ related to ω, with γ = 0 for ω = 0. It soon became desirable to obtain an equation that would also model well the vapor–liquid equilibrium (VLE) properties of fluids, in addition to the vapor-phase properties. Perhaps the best-known application of the Redlich–Kwong equation was in calculating the gas fugacities of hydrocarbon mixtures, which it does well; these were then used in the VLE model developed by Chao and Seader in 1961. However, in order for the Redlich–Kwong equation to stand on its own in modeling vapor–liquid equilibria, more substantial modifications needed to be made. The most successful of these modifications is the Soave modification, proposed in 1972. Soave's modification involved replacing the T^{1/2} power found in the denominator of the attractive term of the original equation with a more complicated temperature-dependent expression. He presented the equation as follows:

p = \frac{RT}{V_m - b} - \frac{a\,\alpha}{V_m(V_m + b)}, \qquad \alpha = \left(1 + \left(0.480 + 1.574\,\omega - 0.176\,\omega^2\right)\left(1 - T_r^{1/2}\right)\right)^2

where: T_r is the reduced temperature of the compound, and ω is the acentric factor. The Peng–Robinson equation of state further modified the Redlich–Kwong equation by changing the attractive term, giving

p = \frac{RT}{V_m - b} - \frac{a\,\alpha}{V_m^2 + 2bV_m - b^2};

the parameters a, b, and α are slightly modified, with

a = 0.45724\,\frac{R^2T_c^2}{p_c}, \qquad b = 0.07780\,\frac{RT_c}{p_c}, \qquad \alpha = \left(1 + \left(0.37464 + 1.54226\,\omega - 0.26992\,\omega^2\right)\left(1 - T_r^{1/2}\right)\right)^2.

The Peng–Robinson equation typically gives VLE properties similar to those of the Soave modification, but often gives better estimations of the liquid-phase density. Several modifications have been made that attempt to more accurately represent the first term, related to the molecular size. The first significant modification of the repulsive term beyond the Van der Waals equation's

p_{\text{repulsive}} = \frac{RT}{V_m - b}

(where p_{hs} below represents a hard-spheres equation-of-state term)
was developed in 1963 by Thiele:

p_{hs} = \frac{RT}{V_m}\cdot\frac{1 + y + y^2}{(1 - y)^3}, \qquad \text{where } y = \frac{b}{4V_m},

and this expression was improved by Carnahan and Starling to give

p_{hs} = \frac{RT}{V_m}\cdot\frac{1 + y + y^2 - y^3}{(1 - y)^3}.

The Carnahan–Starling hard-sphere equation of state has been used extensively in developing other equations of state, and tends to give very good approximations for the repulsive term. Beyond improved two-parameter equations of state, a number of three-parameter equations have been developed, often with the third parameter depending on either Z_c, the compressibility factor at the critical point, or ω, the acentric factor. Schmidt and Wenzel proposed an equation of state with an attractive term that incorporates the acentric factor:

p = \frac{RT}{V_m - b} - \frac{a\,\alpha}{V_m^2 + (1 + 3\omega)\,bV_m - 3\omega\,b^2}

This equation reduces to the original Redlich–Kwong equation in the case when ω = 0, and to the Peng–Robinson equation when ω = 1/3.
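The temperature dependence at the heart of the Soave modification can be illustrated with a short Python sketch (again an illustration, not from the original article; function names and the sample acentric factor are assumptions). It compares the original Redlich–Kwong attractive-term scaling, which is equivalent to α = T_r^{-1/2}, with Soave's α function.

```python
import numpy as np

def alpha_soave(Tr, omega):
    """Soave's alpha function: replaces the 1/sqrt(T) temperature dependence
    of the original Redlich-Kwong attractive term."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    return (1.0 + m * (1.0 - np.sqrt(Tr)))**2

def alpha_rk(Tr):
    """Equivalent alpha for the original RK equation, whose attractive term
    scales as 1/sqrt(T), i.e. alpha = Tr**-0.5."""
    return 1.0 / np.sqrt(Tr)

# At the critical temperature (Tr = 1) both forms give alpha = 1;
# away from it they diverge, increasingly so for non-zero acentric factor.
for Tr in (0.7, 1.0, 1.3):
    print(Tr, alpha_rk(Tr), alpha_soave(Tr, omega=0.30))  # omega is illustrative
```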
Introduction to evolution
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits. The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution. All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This process is responsible for the many diverse life forms in the world. The modern understanding of evolution began with the 1859 publication of Charles Darwin's On the Origin of Species. In addition, Gregor Mendel's work with plants helped to explain the hereditary patterns of genetics. Fossil discoveries in palaeontology, advances in population genetics and a global network of scientific research have provided further details into the mechanisms of evolution. Scientists now have a good understanding of the origin of new species (speciation) and have observed the speciation process in the laboratory and in the wild. Evolution is the principal scientific theory that biologists use to understand life and is used in many disciplines, including medicine, psychology, conservation biology, anthropology, forensics, agriculture and other social-cultural applications. Simple overview The main ideas of evolution may be summarised as follows: Life forms reproduce and therefore have a tendency to become more numerous. Factors such as predation and competition work against the survival of individuals. Each offspring differs from their parent(s) in minor, random ways. If these differences are beneficial, the offspring is more likely to survive and reproduce. This makes it likely that more offspring in the next generation will have beneficial differences and fewer will have detrimental differences. These differences accumulate over generations, resulting in changes within the population. Over time, populations can split or branch off into new species. 
These processes, collectively known as evolution, are responsible for the many diverse life forms seen in the world. Natural selection In the 19th century, natural history collections and museums were popular. The European expansion and naval expeditions employed naturalists, while curators of grand museums showcased preserved and live specimens of the varieties of life. Charles Darwin was an English graduate, educated and trained in the disciplines of natural history. Such natural historians would collect, catalogue, describe and study the vast collections of specimens stored and managed by curators at these museums. Darwin served as a ship's naturalist on board HMS Beagle, assigned to a five-year research expedition around the world. During his voyage, he observed and collected an abundance of organisms, being very interested in the diverse forms of life along the coasts of South America and the neighbouring Galápagos Islands. Darwin gained extensive experience as he collected and studied the natural history of life forms from distant places. Through his studies, he formulated the idea that each species had developed from ancestors with similar features. In 1838, he described how a process he called natural selection would make this happen. The size of a population depends on how many resources are available to support it. For the population to remain the same size year after year, there must be an equilibrium, or balance, between the population size and available resources. Since organisms produce more offspring than their environment can support, not all individuals of each generation can survive. There must be a competitive struggle for resources that aid in survival. As a result, Darwin realised that it was not chance alone that determined survival. Instead, survival of an organism depends on the differences of each individual organism, or "traits", that aid or hinder survival and reproduction. Well-adapted individuals are likely to leave more offspring than their less well-adapted competitors. Traits that hinder survival and reproduction would disappear over generations. Traits that help an organism survive and reproduce would accumulate over generations. Darwin realised that the unequal ability of individuals to survive and reproduce could cause gradual changes in the population, and he used the term natural selection to describe this process. Observations of variations in animals and plants formed the basis of the theory of natural selection. For example, Darwin observed that orchids and insects have a close relationship that allows the pollination of the plants. He noted that orchids have a variety of structures that attract insects, so that pollen from the flowers gets stuck to the insects' bodies. In this way, insects transport the pollen from a male to a female orchid. In spite of the elaborate appearance of orchids, these specialised parts are made from the same basic structures that make up other flowers. In his book Fertilisation of Orchids (1862), Darwin proposed that the orchid flowers were adapted from pre-existing parts, through natural selection. Darwin was still researching and experimenting with his ideas on natural selection when he received a letter from Alfred Russel Wallace describing a theory very similar to his own. This led to an immediate joint publication of both theories. Both Wallace and Darwin saw the history of life like a family tree, with each fork in the tree's limbs being a common ancestor.
The tips of the limbs represented modern species and the branches represented the common ancestors that are shared amongst many different species. To explain these relationships, Darwin said that all living things were related, and this meant that all life must be descended from a few forms, or even from a single common ancestor. He called this process descent with modification. Darwin published his theory of evolution by natural selection in On the Origin of Species in 1859. His theory means that all life, including humanity, is a product of continuing natural processes. The implication that all life on Earth has a common ancestor has been met with objections from some religious groups. Their objections are in contrast to the level of support for the theory by more than 99 percent of those within the scientific community today. Natural selection is commonly equated with survival of the fittest, but this expression originated in Herbert Spencer's Principles of Biology in 1864, five years after Charles Darwin published his original works. Survival of the fittest describes the process of natural selection incorrectly, because natural selection is not only about survival, and it is not always the fittest that survives. Source of variation Darwin's theory of natural selection laid the groundwork for modern evolutionary theory, and his experiments and observations showed that the organisms in populations varied from each other, that some of these variations were inherited, and that these differences could be acted on by natural selection. However, he could not explain the source of these variations. Like many of his predecessors, Darwin mistakenly thought that heritable traits were a product of use and disuse, and that features acquired during an organism's lifetime could be passed on to its offspring. He looked for examples, such as large ground-feeding birds getting stronger legs through exercise, and weaker wings from not flying until, like the ostrich, they could not fly at all. This misunderstanding was called the inheritance of acquired characters and was part of the theory of transmutation of species put forward in 1809 by Jean-Baptiste Lamarck. In the late 19th century this theory became known as Lamarckism. Darwin produced an unsuccessful theory he called pangenesis to try to explain how acquired characteristics could be inherited. In the 1880s, August Weismann's experiments indicated that changes from use and disuse could not be inherited, and Lamarckism gradually fell from favour. The missing information needed to help explain how new features could pass from a parent to its offspring was provided by the pioneering genetics work of Gregor Mendel. Mendel's experiments with several generations of pea plants demonstrated that inheritance works by separating and reshuffling hereditary information during the formation of sex cells and recombining that information during fertilisation. This is like mixing different hands of playing cards, with an organism getting a random mix of half of the cards from one parent, and half of the cards from the other. Mendel called the information factors; however, they later became known as genes. Genes are the basic units of heredity in living organisms. They contain the information that directs the physical development and behaviour of organisms. Genes are made of DNA, a long molecule made up of individual molecules called nucleotides.
Genetic information is encoded in the sequence of nucleotides that make up the DNA, just as the sequence of the letters in words carries information on a page. The genes are like short instructions built up of the "letters" of the DNA alphabet. Put together, the entire set of these genes gives enough information to serve as an "instruction manual" for how to build and run an organism. The instructions spelled out by this DNA alphabet can be changed, however, by mutations, and this may alter the instructions carried within the genes. Within the cell, the genes are carried in chromosomes, which are packages for carrying the DNA. It is the reshuffling of the chromosomes that results in unique combinations of genes in offspring. Since genes interact with one another during the development of an organism, novel combinations of genes produced by sexual reproduction can increase the genetic variability of the population even without new mutations. The genetic variability of a population can also increase when members of that population interbreed with individuals from a different population, causing gene flow between the populations. This can introduce genes into a population that were not present before. Evolution is not a random process. Although mutations in DNA are random, natural selection is not a process of chance: the environment determines the probability of reproductive success. Evolution is an inevitable result of imperfectly copying, self-replicating organisms reproducing over billions of years under the selective pressure of the environment. The outcome of evolution is not a perfectly designed organism. The end products of natural selection are organisms that are adapted to their present environments. Natural selection does not involve progress towards an ultimate goal. Evolution does not strive for more advanced, more intelligent, or more sophisticated life forms. For example, fleas (wingless parasites) are descended from a winged, ancestral scorpionfly, and snakes are lizards that no longer require limbs, although pythons still grow tiny structures that are the remains of their ancestors' hind legs. Organisms are merely the outcome of variations that succeed or fail, dependent upon the environmental conditions at the time. Rapid environmental changes typically cause extinctions. Of all species that have existed on Earth, 99.9 percent are now extinct. Since life began on Earth, five major mass extinctions have led to large and sudden drops in the variety of species. The most recent, the Cretaceous–Paleogene extinction event, occurred 66 million years ago. Genetic drift Genetic drift is a cause of change in allelic frequencies within populations of a species. Alleles are different variations of specific genes. They determine things like hair colour, skin tone, eye colour and blood type; in other words, all the genetic traits that vary between individuals. Genetic drift does not introduce new alleles to a population, but it can reduce variation within a population by removing an allele from the gene pool. Genetic drift is caused by random sampling of alleles. A truly random sample is a sample in which no outside forces affect what is selected. It is like pulling marbles of the same size and weight but of different colours from a brown paper bag. In any offspring, the alleles present are samples of the previous generation's alleles, and chance plays a role in whether an individual survives to reproduce and passes a sample of its generation onward to the next.
The allelic frequency of an allele in a population is the proportion of copies of a gene that are of that specific form, relative to all copies of the gene present in the population. Genetic drift affects smaller populations more than it affects larger populations (the simulation sketch at the end of this section illustrates the effect). Hardy–Weinberg principle The Hardy–Weinberg principle states that under certain idealised conditions, including the absence of selection pressures, a large population will have no change in the frequency of alleles as generations pass. A population that satisfies these conditions is said to be in Hardy–Weinberg equilibrium. In particular, Hardy and Weinberg showed that dominant and recessive alleles do not automatically tend to become more and less frequent respectively, as had been thought previously. The conditions for Hardy–Weinberg equilibrium include that there must be no mutations, immigration, or emigration, all of which can directly change allelic frequencies. Additionally, mating must be totally random, with all males (or females in some cases) being equally desirable mates. This ensures a true random mixing of alleles. A population in Hardy–Weinberg equilibrium is analogous to a deck of cards: no matter how many times the deck is shuffled, no new cards are added and no old ones are taken away. Cards in the deck represent alleles in a population's gene pool. In practice, no population can be in perfect Hardy–Weinberg equilibrium. The population's finite size, combined with natural selection and many other effects, causes the allelic frequencies to change over time. Population bottleneck A population bottleneck occurs when the population of a species is reduced drastically over a short period of time due to external forces. In a true population bottleneck, the reduction does not favour any combination of alleles; it is entirely a matter of chance which individuals survive. A bottleneck can reduce or eliminate genetic variation from a population, and further drift events after the bottleneck can reduce the population's genetic diversity still more. The resulting lack of diversity can leave the population at risk from other selective pressures. A common example of a population bottleneck is the northern elephant seal. Due to excessive hunting throughout the 19th century, its population was reduced to 30 individuals or fewer. The species has made a full recovery, with the total number of individuals at around 100,000 and growing. The effects of the bottleneck are visible, however: the seals are more likely to have serious problems with disease or genetic disorders, because there is almost no diversity in the population. Founder effect The founder effect occurs when a small group from one population splits off and forms a new population, often through geographic isolation. This new population's allelic frequency is probably different from the original population's, and this will change how common certain alleles are in the two populations. The founders of the population will determine the genetic makeup, and potentially the survival, of the new population for generations. One example of the founder effect is found in the Amish migration to Pennsylvania in 1744. Two of the founders of the colony in Pennsylvania carried the recessive allele for Ellis–van Creveld syndrome. Because the Amish tend to be religious isolates who marry within their community, through generations of this practice the frequency of Ellis–van Creveld syndrome among the Amish is much higher than in the general population.
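The random sampling that drives genetic drift is straightforward to simulate. The following minimal Python sketch is an illustration added here, not part of the original article; the function name and parameter values are arbitrary. Each generation's allele copies are drawn at random from the previous generation's frequency, so a small population wanders to fixation or loss, while a large one stays near its starting frequency, as the Hardy–Weinberg reasoning above predicts.

```python
import random

def drift(p0=0.5, pop_size=20, generations=100, seed=1):
    """Simulate genetic drift for one biallelic locus.

    Each generation, the 2N allele copies of the offspring are sampled
    at random (with replacement) from the parents' allele pool, so the
    allele frequency performs a random walk until fixation or loss.
    """
    random.seed(seed)
    p = p0
    for _ in range(generations):
        copies = 2 * pop_size  # diploid population: 2N allele copies
        hits = sum(random.random() < p for _ in range(copies))
        p = hits / copies
        if p in (0.0, 1.0):    # allele lost (0.0) or fixed (1.0)
            break
    return p

print("small population:", drift(pop_size=20))     # often ends at 0.0 or 1.0
print("large population:", drift(pop_size=20000))  # stays near 0.5
```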
Modern synthesis The modern evolutionary synthesis is based on the concept that populations of organisms have significant genetic variation caused by mutation and by the recombination of genes during sexual reproduction. It defines evolution as the change in allelic frequencies within a population caused by genetic drift, gene flow between subpopulations, and natural selection. Natural selection is emphasised as the most important mechanism of evolution; large changes are the result of the gradual accumulation of small changes over long periods of time. The modern evolutionary synthesis is the outcome of a merger of several different scientific fields to produce a more cohesive understanding of evolutionary theory. In the 1920s, Ronald Fisher, J.B.S. Haldane and Sewall Wright combined Darwin's theory of natural selection with statistical models of Mendelian genetics, founding the discipline of population genetics. In the 1930s and 1940s, efforts were made to merge population genetics, the observations of field naturalists on the distribution of species and subspecies, and analysis of the fossil record into a unified explanatory model. The application of the principles of genetics to naturally occurring populations, by scientists such as Theodosius Dobzhansky and Ernst Mayr, advanced the understanding of the processes of evolution. Dobzhansky's 1937 work Genetics and the Origin of Species helped bridge the gap between genetics and field biology by presenting the mathematical work of the population geneticists in a form more useful to field biologists, and by showing that wild populations had much more genetic variability, with geographically isolated subspecies and reservoirs of genetic diversity in recessive genes, than the models of the early population geneticists had allowed for. Mayr, on the basis of an understanding of genes and direct observations of evolutionary processes from field research, introduced the biological species concept, which defined a species as a group of interbreeding or potentially interbreeding populations that are reproductively isolated from all other populations. Both Dobzhansky and Mayr emphasised the importance of subspecies reproductively isolated by geographical barriers in the emergence of new species. The palaeontologist George Gaylord Simpson helped to incorporate palaeontology with a statistical analysis of the fossil record that showed a pattern consistent with the branching and non-directional pathway of evolution of organisms predicted by the modern synthesis. Evidence for evolution Scientific evidence for evolution comes from many aspects of biology and includes fossils, homologous structures, and molecular similarities between species' DNA. Fossil record Research in the field of palaeontology, the study of fossils, supports the idea that all living organisms are related. Fossils provide evidence that accumulated changes in organisms over long periods of time have led to the diverse forms of life we see today. A fossil itself reveals the organism's structure and the relationships between present and extinct species, allowing palaeontologists to construct a family tree for all of the life forms on Earth. Modern palaeontology began with the work of Georges Cuvier. Cuvier noted that, in sedimentary rock, each layer contained a specific group of fossils. The deeper layers, which he proposed to be older, contained simpler life forms. He noted that many forms of life from the past are no longer present today.
One of Cuvier's successful contributions to the understanding of the fossil record was establishing extinction as a fact. In an attempt to explain extinction, Cuvier proposed the idea of "revolutions", or catastrophism, in which he speculated that geological catastrophes had occurred throughout the Earth's history, wiping out large numbers of species. Cuvier's theory of revolutions was later replaced by uniformitarian theories, notably those of James Hutton and Charles Lyell, who proposed that the Earth's geological changes were gradual and consistent. However, current evidence in the fossil record supports the concept of mass extinctions. As a result, the general idea of catastrophism has re-emerged as a valid hypothesis for at least some of the rapid changes in life forms that appear in the fossil record. A very large number of fossils have now been discovered and identified. These fossils serve as a chronological record of evolution. The fossil record provides examples of transitional species that demonstrate ancestral links between past and present life forms. One such transitional fossil is Archaeopteryx, an ancient organism that had the distinct characteristics of a reptile (such as a long, bony tail and conical teeth) yet also had characteristics of birds (such as feathers and a wishbone). The implication from such a find is that modern reptiles and birds arose from a common ancestor. Comparative anatomy The comparison of similarities between organisms in the form or appearance of their parts, called their morphology, has long been a way to classify life into closely related groups. This can be done by comparing the structure of adult organisms in different species or by comparing the patterns of how cells grow, divide and even migrate during an organism's development. Taxonomy Taxonomy is the branch of biology that names and classifies all living things. Scientists use morphological and genetic similarities to assist them in categorising life forms based on ancestral relationships. For example, orangutans, gorillas, chimpanzees and humans all belong to the same taxonomic grouping referred to as a family, in this case the family called Hominidae. These animals are grouped together because of similarities in morphology that come from common ancestry (called homology). Strong evidence for evolution comes from the analysis of homologous structures: structures in different species that no longer perform the same task but which share a similar structure. Such is the case with the forelimbs of mammals. The forelimbs of a human, cat, whale, and bat all have strikingly similar bone structures. However, each of these four species' forelimbs performs a different task. The same bones that construct a bat's wings, which are used for flight, also construct a whale's flippers, which are used for swimming. Such a "design" makes little sense if the species are unrelated and uniquely constructed for their particular tasks. The theory of evolution explains these homologous structures: all four animals shared a common ancestor, and each has undergone change over many generations. These changes in structure have produced forelimbs adapted for different tasks. However, anatomical comparisons can be misleading, as not all anatomical similarities indicate a close relationship. Organisms that share similar environments will often develop similar physical features, a process known as convergent evolution.
Both sharks and dolphins have similar body forms, yet are only distantly related: sharks are fish and dolphins are mammals. Such similarities are a result of both populations being exposed to the same selective pressures. Within both groups, changes that aid swimming have been favoured. Thus, over time, they developed similar appearances (morphology), even though they are not closely related. Embryology In some cases, anatomical comparison of structures in the embryos of two or more species provides evidence for a shared ancestor that may not be obvious in the adult forms. As the embryo develops, these homologies can be lost to view, and the structures can take on different functions. Part of the basis of classifying the vertebrate group (which includes humans) is the presence of a tail (extending beyond the anus) and pharyngeal slits. Both structures appear during some stage of embryonic development but are not always obvious in the adult form. Because of the morphological similarities present in embryos of different species during development, it was once assumed that organisms re-enact their evolutionary history as embryos. It was thought that human embryos passed through an amphibian and then a reptilian stage before completing their development as mammals. Such a re-enactment, often called recapitulation theory, is not supported by scientific evidence. What does occur, however, is that the first stages of development are similar in broad groups of organisms. At very early stages, for instance, all vertebrates appear extremely similar, but do not exactly resemble any ancestral species. As development continues, specific features emerge from this basic pattern. Vestigial structures Homology includes a unique group of shared structures referred to as vestigial structures. Vestigial refers to anatomical parts that are of minimal, if any, value to the organism that possesses them. These apparently illogical structures are remnants of organs that played an important role in ancestral forms. Such is the case in whales, which have small vestigial bones that appear to be remnants of the leg bones of their ancestors, which walked on land. Humans also have vestigial structures, including the ear muscles, the wisdom teeth, the appendix, the tail bone, body hair (including goose bumps), and the semilunar fold in the corner of the eye. Biogeography Biogeography is the study of the geographical distribution of species. Evidence from biogeography, especially from the biogeography of oceanic islands, played a key role in convincing both Darwin and Alfred Russel Wallace that species evolved with a branching pattern of common descent. Islands often contain endemic species, species not found anywhere else, but those species are often related to species found on the nearest continent. Furthermore, islands often contain clusters of closely related species that have very different ecological niches, that is, they have different ways of making a living in the environment. Such clusters form through a process of adaptive radiation, in which a single ancestral species colonises an island that has a variety of open ecological niches and then diversifies by evolving into different species adapted to fill those empty niches.
Well-studied examples include Darwin's finches, a group of 13 finch species endemic to the Galápagos Islands, and the Hawaiian honeycreepers, a group of birds that once, before extinctions caused by humans, numbered 60 species filling diverse ecological roles, all descended from a single finch-like ancestor that arrived on the Hawaiian Islands some 4 million years ago. Another example is the silversword alliance, a group of perennial plant species, also endemic to the Hawaiian Islands, that inhabit a variety of habitats and come in a variety of shapes and sizes, including trees, shrubs, and ground-hugging mats, but which can be hybridised with one another and with certain tarweed species found on the west coast of North America; it appears that one of those tarweeds colonised Hawaii in the past and gave rise to the entire silversword alliance. Molecular biology Every living organism (with the possible exception of RNA viruses) contains molecules of DNA, which carries genetic information. Genes are the pieces of DNA that carry this information, and they influence the properties of an organism. Genes determine an individual's general appearance and, to some extent, their behaviour. If two organisms are closely related, their DNA will be very similar. On the other hand, the more distantly related two organisms are, the more differences they will have. For example, brothers are closely related and have very similar DNA, while cousins share a more distant relationship and have far more differences in their DNA. Similarities in DNA are used to determine the relationships between species in much the same manner as they are used to show relationships between individuals. For example, comparing chimpanzees with gorillas and humans shows that there is as much as a 96 percent similarity between the DNA of humans and chimpanzees. Comparisons of DNA indicate that humans and chimpanzees are more closely related to each other than either species is to gorillas. The field of molecular systematics focuses on measuring the similarities in these molecules and using this information to work out how different types of organisms are related through evolution. These comparisons have allowed biologists to build a relationship tree of the evolution of life on Earth. They have even allowed scientists to unravel the relationships between organisms whose common ancestors lived such a long time ago that no real similarities remain in the appearance of the organisms. Artificial selection Artificial selection is the controlled breeding of domestic plants and animals. Humans determine which animal or plant will reproduce and which of the offspring will survive; thus, they determine which genes will be passed on to future generations. The process of artificial selection has had a significant impact on the evolution of domestic animals. For example, people have produced different types of dogs by controlled breeding. The differences in size between the Chihuahua and the Great Dane are the result of artificial selection. Despite their dramatically different physical appearance, they and all other dogs evolved from a few wolves domesticated by humans in what is now China less than 15,000 years ago. Artificial selection has produced a wide variety of plants. In the case of maize (corn), recent genetic evidence suggests that domestication occurred 10,000 years ago in central Mexico. Prior to domestication, the edible portion of the wild form was small and difficult to collect.
Today the Maize Genetics Cooperation Stock Center maintains a collection of more than 10,000 genetic variations of maize that have arisen by random mutations and chromosomal variations from the original wild type. In artificial selection the new breed or variety that emerges is the one with random mutations attractive to humans, while in natural selection the surviving species is the one with random mutations useful to it in its non-human environment. In both natural and artificial selection the variations are a result of random mutations, and the underlying genetic processes are essentially the same. Darwin carefully observed the outcomes of artificial selection in animals and plants to form many of his arguments in support of natural selection. Much of his book On the Origin of Species was based on these observations of the many varieties of domestic pigeons arising from artificial selection. Darwin proposed that if humans could achieve dramatic changes in domestic animals in short periods, then natural selection, given millions of years, could produce the differences seen in living things today. Coevolution Coevolution is a process in which two or more species influence the evolution of each other. All organisms are influenced by life around them; however, in coevolution there is evidence that genetically determined traits in each species directly resulted from the interaction between the two organisms. An extensively documented case of coevolution is the relationship between Pseudomyrmex, a type of ant, and the acacia, a plant that the ant uses for food and shelter. The relationship between the two is so intimate that it has led to the evolution of special structures and behaviours in both organisms. The ant defends the acacia against herbivores and clears the forest floor of the seeds from competing plants. In response, the plant has evolved swollen thorns that the ants use as shelter and special flower parts that the ants eat. Such coevolution does not imply that the ants and the tree choose to behave in an altruistic manner. Rather, across a population, small genetic changes in both ant and tree benefited each. The benefit gave a slightly higher chance of the characteristic being passed on to the next generation. Over time, successive mutations created the relationship we observe today. Speciation Given the right circumstances, and enough time, evolution leads to the emergence of new species. Scientists have struggled to find a precise and all-inclusive definition of species. Ernst Mayr defined a species as a population or group of populations whose members have the potential to interbreed naturally with one another to produce viable, fertile offspring. (The members of a species cannot produce viable, fertile offspring with members of other species.) Mayr's definition has gained wide acceptance among biologists, but does not apply to organisms such as bacteria, which reproduce asexually. Speciation is the lineage-splitting event that results in two separate species forming from a single common ancestral population. A widely accepted mechanism of speciation is allopatric speciation. Allopatric speciation begins when a population becomes geographically separated. Geological processes, such as the emergence of mountain ranges, the formation of canyons, or the flooding of land bridges by changes in sea level, may result in separate populations. For speciation to occur, separation must be substantial, so that genetic exchange between the two populations is completely disrupted.
In their separate environments, the genetically isolated groups follow their own unique evolutionary pathways. Each group will accumulate different mutations as well as be subjected to different selective pressures. The accumulated genetic changes may result in separated populations that can no longer interbreed if they are reunited. Barriers that prevent interbreeding are either prezygotic (they prevent mating or fertilisation) or postzygotic (barriers that occur after fertilisation). If interbreeding is no longer possible, the two populations are considered different species. The result of four billion years of evolution is the diversity of life around us, with an estimated 1.75 million different species in existence today. Usually the process of speciation is slow, occurring over very long time spans; thus direct observations within human life-spans are rare. However, speciation has been observed in present-day organisms, and past speciation events are recorded in fossils. Scientists have documented the formation of five new species of cichlid fishes from a single common ancestor that was isolated fewer than 5,000 years ago from the parent stock in Lake Nagubago. The evidence for speciation in this case was morphology (physical appearance) and lack of natural interbreeding. These fish have complex mating rituals and a variety of colorations; the slight modifications introduced in the new species have changed the mate selection process, and the five forms that arose could not be convinced to interbreed. Mechanism The theory of evolution is widely accepted among the scientific community, serving to link the diverse speciality areas of biology. Evolution provides the field of biology with a solid scientific base. The significance of evolutionary theory is summarised by Theodosius Dobzhansky as "nothing in biology makes sense except in the light of evolution." Nevertheless, the theory of evolution is not static. There is much discussion within the scientific community concerning the mechanisms behind the evolutionary process. For example, the rate at which evolution occurs is still under discussion. In addition, there are conflicting opinions as to which is the primary unit of evolutionary change—the organism or the gene. Rate of change Darwin and his contemporaries viewed evolution as a slow and gradual process. Evolutionary trees are based on the idea that profound differences in species are the result of many small changes that accumulate over long periods. Gradualism had its basis in the works of the geologists James Hutton and Charles Lyell. Hutton's view suggests that profound geological change was the cumulative product of a relatively slow, continuing operation of processes which can still be seen in operation today, as opposed to catastrophism, which promoted the idea that sudden changes had causes which can no longer be seen at work. A uniformitarian perspective was adopted for biological changes. Such a view can seem to contradict the fossil record, which often shows evidence of new species appearing suddenly, then persisting in that form for long periods. In the 1970s, palaeontologists Niles Eldredge and Stephen Jay Gould developed a theoretical model suggesting that evolution, although a slow process in human terms, undergoes periods of relatively rapid change (ranging between 50,000 and 100,000 years) alternating with long periods of relative stability. Their theory is called punctuated equilibrium and explains the fossil record without contradicting Darwin's ideas.
Unit of change A common unit of selection in evolution is the organism. Natural selection occurs when the reproductive success of an individual is improved or reduced by an inherited characteristic, and reproductive success is measured by the number of an individual's surviving offspring. The organism view has been challenged by a variety of biologists as well as philosophers. Evolutionary biologist Richard Dawkins proposes that much insight can be gained if we look at evolution from the gene's point of view; that is, that natural selection operates as an evolutionary mechanism on genes as well as organisms. He set out this argument in his 1976 book The Selfish Gene. Others view selection working on many levels, not just at a single level of organism or gene; for example, Stephen Jay Gould called for a hierarchical perspective on selection.
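A minimal sketch, in Python, of the molecular-systematics approach described above: pairwise DNA similarity is computed from aligned sequences, and the most similar pair of taxa is grouped first. The sequences below are invented toy data (not real genomes), chosen only so that the human-chimp pair comes out most similar, as the text describes; real studies align millions of sites and use far more sophisticated models.

from itertools import combinations

# Hypothetical aligned sequences (toy data for illustration only).
sequences = {
    "human":   "ATGCTGACCTGAACGT",
    "chimp":   "ATGCTGACCTGAACGA",   # one site differs from human
    "gorilla": "ATGCTAACCTGTACGA",   # three sites differ from human
}

def p_distance(a, b):
    """Proportion of aligned sites that differ (the simplest distance)."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Compute all pairwise distances between taxa.
distances = {
    pair: p_distance(sequences[pair[0]], sequences[pair[1]])
    for pair in combinations(sequences, 2)
}

# Report percent identity for each pair, most similar first.
for (a, b), d in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"{a} vs {b}: {1 - d:.0%} identical")

# The closest pair is grouped first when building the tree; here that
# recovers the branching order described in the text: human and chimp.
closest = min(distances, key=distances.get)
print("grouped first:", closest)

Grouping the closest pair and then repeating the comparison on the merged groups is the essence of simple distance-based tree-building methods such as UPGMA.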
Biology and health sciences
Basics_4
Biology
6376505
https://en.wikipedia.org/wiki/Lophiodon
Lophiodon
Lophiodon (from Greek lophos 'crest' and odon 'tooth') is an extinct genus of mammal related to chalicotheres. It lived in Europe during the Eocene, and was previously thought to be closely related to Hyrachyus. Lophiodon was named and described by Cuvier (1822) based on specimens from the Sables du Castrais Formation. Various species of Lophiodon are known from throughout Eocene Europe, where they were the largest herbivorous mammals.
Biology and health sciences
Perissodactyla
Animals
2654847
https://en.wikipedia.org/wiki/Biopharmaceutical
Biopharmaceutical
A biopharmaceutical, also known as a biological medical product, or biologic, is any pharmaceutical drug product manufactured in, extracted from, or semisynthesized from biological sources. Unlike totally synthesized pharmaceuticals, they include vaccines, whole blood, blood components, allergenics, somatic cells, gene therapies, tissues, recombinant therapeutic proteins, and living medicines used in cell therapy. Biologics can be composed of sugars, proteins, nucleic acids, or complex combinations of these substances, or may be living cells or tissues. They (or their precursors or components) are isolated from living sources—human, animal, plant, fungal, or microbial. They can be used in both human and animal medicine. Terminology surrounding biopharmaceuticals varies between groups and entities, with different terms referring to different subsets of therapeutics within the general biopharmaceutical category. Some regulatory agencies use the terms biological medicinal products or therapeutic biological product to refer specifically to engineered macromolecular products like protein- and nucleic acid-based drugs, distinguishing them from products like blood, blood components, or vaccines, which are usually extracted directly from a biological source. Biopharmaceutics is the branch of pharmaceutics that works with biopharmaceuticals. Biopharmacology is the branch of pharmacology that studies biopharmaceuticals. Specialty drugs, a recent classification of pharmaceuticals, are high-cost drugs that are often biologics. The European Medicines Agency uses the term advanced therapy medicinal products (ATMPs) for medicines for human use that are "based on genes, cells, or tissue engineering", including gene therapy medicines, somatic-cell therapy medicines, tissue-engineered medicines, and combinations thereof. Within EMA contexts, the term advanced therapies refers specifically to ATMPs, although that term is rather nonspecific outside those contexts. Gene-based and cellular biologics, for example, are often at the forefront of biomedicine and biomedical research, and may be used to treat a variety of medical conditions for which no other treatments are available. Building on the market approvals and sales of recombinant virus-based biopharmaceuticals for veterinary and human medicine, the use of engineered plant viruses has been proposed to enhance crop performance and promote sustainable production. In some jurisdictions, biologics are regulated via different pathways from small-molecule drugs and medical devices. Major classes Extracted from living systems Some of the oldest forms of biologics are extracted from the bodies of animals, and especially from those of other humans. Important biologics include: Whole blood and other blood components Organ transplantation and tissue transplants Stem-cell therapy Antibodies for passive immunity (e.g., to treat a virus infection) Human reproductive cells Human breast milk Fecal microbiota Some biologics that were previously extracted from animals, such as insulin, are now more commonly produced by recombinant DNA. Produced by recombinant DNA Biologics can refer to a wide range of biological products in medicine. However, in most cases, the term is used more restrictively for a class of therapeutics (either approved or in development) that are produced using biological processes involving recombinant DNA technology. These medications are usually one of three types: Substances that are (nearly) identical to the body's key signaling proteins.
Examples are the blood-production-stimulating protein erythropoietin, the growth-stimulating hormone named "growth hormone", and biosynthetic human insulin and its analogues. Monoclonal antibodies. These are similar to the antibodies that the human immune system uses to fight off bacteria and viruses, but they are "custom-designed" (using hybridoma technology or other methods) and can therefore be made specifically to counteract or block any given substance in the body, or to target any specific cell type; examples of such monoclonal antibodies for use in various diseases are given in the table below. Receptor constructs (fusion proteins), usually based on a naturally occurring receptor linked to the immunoglobulin frame. In this case, the receptor provides the construct with detailed specificity, whereas the immunoglobulin structure imparts stability and other useful features in terms of pharmacology. Some examples are listed in the table below. Biologics as a class of medications in this narrower sense have had a profound impact on many medical fields, primarily rheumatology and oncology, but also cardiology, dermatology, gastroenterology, neurology, and others. In most of these disciplines, biologics have added major therapeutic options for treating many diseases, including some for which no effective therapies were available, and others where previously existing therapies were inadequate. However, the advent of biologic therapeutics has also raised complex regulatory issues (see below), and significant pharmacoeconomic concerns, because the cost of biologic therapies has been dramatically higher than for conventional (pharmacological) medications. This factor has been particularly relevant since many biological medications are used to treat chronic diseases, such as rheumatoid arthritis or inflammatory bowel disease, or for the treatment of otherwise untreatable cancer during the remainder of life. The cost of treatment with a typical monoclonal antibody therapy for relatively common indications is generally in the range of €7,000–14,000 per patient per year. Older patients who receive biologic therapy for diseases such as rheumatoid arthritis, psoriatic arthritis, or ankylosing spondylitis are at increased risk for life-threatening infection, adverse cardiovascular events, and malignancy. The first such substance approved for therapeutic use was biosynthetic "human" insulin made via recombinant DNA. Sometimes referred to as rHI and sold under the trade name Humulin, it was developed by Genentech but licensed to Eli Lilly and Company, which manufactured and marketed it starting in 1982. Major kinds of biopharmaceuticals include: Blood factors (Factor VIII and Factor IX) Thrombolytic agents (tissue plasminogen activator) Hormones (insulin, glucagon, growth hormone, gonadotrophins) Haematopoietic growth factors (Erythropoietin, colony-stimulating factors) Interferons (Interferons-α, -β, -γ) Interleukin-based products (Interleukin-2) Vaccines (Hepatitis B surface antigen) Monoclonal antibodies (Various) Additional products (tumour necrosis factor, therapeutic enzymes) Research and development investment in new medicines by the biopharmaceutical industry stood at $65.2 billion in 2008. A few examples of biologics made with recombinant DNA technology include: Vaccines Many vaccines are grown in tissue cultures. Gene therapy Viral gene therapy involves artificially manipulating a virus to include a desirable piece of genetic material.
Viral gene therapies using engineered plant viruses have been proposed to enhance crop performance and promote sustainable production. Biosimilars With the expiration of many patents for blockbuster biologics between 2012 and 2019, interest in biosimilar production, i.e., follow-on biologics, has increased. Compared to small-molecule drugs, which consist of chemically identical active ingredients, biologics are vastly more complex and consist of a multitude of subspecies. Due to their heterogeneity and the high process sensitivity, both originators and follow-on biosimilars will exhibit variability in specific variants over time. The safety and clinical performance of both originator and biosimilar biopharmaceuticals must remain equivalent throughout their lifecycle. Process variations are monitored by modern analytical tools (e.g., liquid chromatography, immunoassays, mass spectrometry) and describe a unique design space for each biologic; a simple illustration of such a comparability check is sketched below. Biosimilars require a different regulatory framework compared to small-molecule generics. Legislation in the 21st century has addressed this by recognizing an intermediate ground of testing for biosimilars. The filing pathway requires more testing than for small-molecule generics, but less testing than for registering completely new therapeutics. In 2003, the European Medicines Agency introduced an adapted pathway for biosimilars, termed similar biological medicinal products. This pathway is based on a thorough demonstration of comparability of the product to an existing approved product. Within the United States, the Patient Protection and Affordable Care Act of 2010 created an abbreviated approval pathway for biological products shown to be biosimilar to, or interchangeable with, an FDA-licensed reference biological product. Researchers are optimistic that the introduction of biosimilars will reduce medical expenses to patients and the healthcare system. Commercialization When a new biopharmaceutical is developed, the company will typically apply for a patent, a grant of exclusive manufacturing rights. This is the primary means by which the drug developer can recover the investment cost for development of the biopharmaceutical. The patent laws in the United States and Europe differ somewhat on the requirements for a patent, with the European requirements perceived as more difficult to satisfy. The total number of patents granted for biopharmaceuticals has risen significantly since the 1970s. In 1978, the total number of patents granted was 30. This had climbed to 15,600 in 1995, and by 2001 there were 34,527 patent applications. In 2012 the US had the highest IP (intellectual property) generation within the biopharmaceutical industry, generating 37 percent of the total number of granted patents worldwide; however, there is still a large margin for growth and innovation within the industry. Revisions to the current IP system to ensure greater reliability for R&D (research and development) investments are a prominent topic of debate in the US as well. Blood products and other human-derived biologics such as breast milk have highly regulated or very hard-to-access markets; therefore, customers generally face a supply shortage for these products. Institutions housing these biologics, designated as 'banks', often cannot distribute their product to customers effectively. Conversely, banks for reproductive cells are much more widespread and available due to the ease with which spermatozoa and egg cells can be used for fertility treatment.
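As a purely hypothetical illustration of the comparability monitoring described in the Biosimilars passage above, the following Python sketch derives an acceptance range for a single quality attribute from originator lots and checks biosimilar lots against it. The attribute, the mean plus-or-minus three standard deviations rule, and all values are invented for illustration only; real comparability exercises combine many orthogonal analytical methods and formal regulatory statistics.

import statistics

# Hypothetical % abundance of one charge variant, measured across
# several lots of the originator product (invented values).
originator_lots = [12.1, 11.8, 12.6, 12.3, 11.9, 12.4, 12.0, 12.2]

mean = statistics.mean(originator_lots)
sd = statistics.stdev(originator_lots)
low, high = mean - 3 * sd, mean + 3 * sd   # simple mean +/- 3 SD range

# Check hypothetical biosimilar lots against the originator-derived range.
biosimilar_lots = {"lot A": 12.2, "lot B": 13.9}
for lot, value in biosimilar_lots.items():
    verdict = "within" if low <= value <= high else "outside"
    print(f"{lot}: {value:.1f}% is {verdict} the range {low:.2f}-{high:.2f}%")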
Large-scale production Biopharmaceuticals may be produced from microbial cells (e.g., recombinant E. coli or yeast cultures), mammalian cell lines (see Cell culture), plant cell cultures (see Plant tissue culture), and moss plants, in bioreactors of various configurations, including photo-bioreactors. Important issues of concern are cost of production (low-volume, high-purity products are desirable) and microbial contamination (by bacteria, viruses, mycoplasma). Alternative platforms of production being tested include whole plants (plant-made pharmaceuticals). Transgenics A potentially controversial method of producing biopharmaceuticals involves transgenic organisms, particularly plants and animals that have been genetically modified to produce drugs. This approach carries significant risk for investors, due to the possibility of production failure or of scrutiny from regulatory bodies based on perceived risks and ethical issues. Biopharmaceutical crops also present a risk of cross-contamination with non-engineered crops, or crops engineered for non-medical purposes. One potential approach to this technology is the creation of a transgenic mammal that can produce the biopharmaceutical in its milk, blood, or urine. Once such an animal is produced, typically using the pronuclear microinjection method, it becomes practical to use cloning technology to create additional offspring that carry the favorable modified genome. The first such drug manufactured from the milk of a genetically modified goat was ATryn, but marketing permission was blocked by the European Medicines Agency in February 2006. This decision was reversed in June 2006 and approval was given in August 2006. Regulation European Union In the European Union, a biological medicinal product is one whose active substance(s) are produced by or extracted from a biological (living) system, and which requires, in addition to physicochemical testing, biological testing for full characterisation. The characterisation of a biological medicinal product is a combination of testing the active substance and the final medicinal product, together with assessment of the production process and its control. For example: Production process – it can be derived from biotechnology or from other technologies. It may be prepared using more conventional techniques, as is the case for blood or plasma-derived products and a number of vaccines. Active substance – consisting of entire microorganisms, mammalian cells, nucleic acids, or proteinaceous or polysaccharide components originating from a microbial, animal, human, or plant source. Mode of action – therapeutic and immunological medicinal products, gene transfer materials, or cell therapy materials. United States In the United States, biologics are licensed through a biologics license application (BLA), which is submitted to and regulated by the FDA's Center for Biologics Evaluation and Research (CBER), whereas drugs are regulated by the Center for Drug Evaluation and Research. Approval may require several years of clinical trials, including trials with human volunteers. Even after the drug is released, it will still be monitored for performance and safety risks. The manufacturing process must satisfy the FDA's "Good Manufacturing Practices"; biologics are typically manufactured in a cleanroom environment with strict limits on the amount of airborne particles and other microbial contaminants that may alter the efficacy of the drug.
Canada In Canada, biologics (and radiopharmaceuticals) are reviewed through the Biologics and Genetic Therapies Directorate within Health Canada.
Technology
Biotechnology
null
2655767
https://en.wikipedia.org/wiki/Domestic%20duck
Domestic duck
Domestic ducks (mainly mallard, Anas platyrhynchos domesticus, with some Muscovy ducks, Cairina moschata domestica) are ducks that have been domesticated and raised for meat and eggs. A few are kept for show, or for their ornamental value. Most varieties of domesticated ducks, apart from the Muscovy duck and hybrids, are descended from the mallard, which was domesticated in China around 2000 BC. Duck farming is simplified by their reliable flocking behaviour and their ability to forage effectively for themselves. Over 80% of global duck production is in China. Breeds such as the White Pekin are raised for meat, while the prolific Indian Runner can produce over 300 eggs per year. In East and Southeast Asia, polycultures such as rice-duck farming are widely practised: the ducks assist the rice with manure and by eating small pest animals, so that the same land produces rice and ducks at once. In culture, ducks feature in children's stories such as The Tale of Jemima Puddle-Duck, and in Sergei Prokofiev's musical composition Peter and the Wolf; they have appeared in art since the time of ancient Egypt, where they served as a fertility symbol. Origins Domestication Domestic ducks appear from whole-genome sequencing to have originated from a single domestication event of mallards during the Neolithic, followed by rapid selection for lineages favouring meat or egg production. They were probably domesticated in Southern China around 2000 BC by the rice paddy-farming ancestors of modern Southeast Asians and spread outwards from that region. There are few archaeological records, so the date of domestication is unknown; the earliest written records are in Han Chinese writings from central China dating to about 500 BC. Duck farming for both meat and eggs is a widespread and ancient industry in Southeast Asia. Wild ducks were hunted extensively in Egypt and other parts of the world in ancient times, but were not domesticated. Ducks are documented in Ancient Rome from the second century BC, but descriptions – such as by Columella – suggest that ducks in Roman agriculture were captured in the wild, not domesticated; there was no duck breeding in Roman times, so eggs from wild ducks were needed to start duck farms. Mallards were domesticated in Eurasia. The Muscovy duck was domesticated in Mexico and South America. Origins of breeds Most breeds and varieties of domestic duck derive from the mallard, Anas platyrhynchos; a few derive from Cairina moschata, the Muscovy duck, or are mulards, hybrids of these with A. platyrhynchos stock. Domestication has greatly altered their characteristics. Domestic ducks are mostly promiscuous, whereas wild mallards are monogamous. Domestic ducks have lost the mallard's territorial behaviour, and are less aggressive than mallards. Despite these differences, domestic ducks frequently mate with wild mallards, producing fully fertile hybrid offspring. Breeds vary greatly in weight: large breeds like the Aylesbury are several times the weight of a wild mallard (and hybrids can be heavier still), while small breeds like the Appleyard weigh much less. Those breeds are raised for meat and eggs, while other breeds are purely ornamental, having been selected for their crests, tufts, or striking plumage, for exhibition in competitions. A phylogenomic analysis found that Indian breeds of ducks formed a cluster that was sister to the White Pekin duck (a breed derived from ducks domesticated in China), while Muscovy ducks are from another genus. Farming Husbandry Ducks have been farmed for thousands of years.
They are reared principally for meat, but also for duck eggs. Duck husbandry is simplified by aspects of their behaviour, including reliable flocking and the ability to forage effectively for themselves in wetlands and water bodies. Most breeds of duck may lay some 200 eggs per year, though the Indian Runner may produce over 300 eggs annually. The females of many breeds of domestic duck are unreliable at sitting their eggs and raising their young. Exceptions include the Rouen duck and especially the Muscovy duck. It has been a custom on farms for centuries to put duck eggs under broody hens for hatching; nowadays this role is often played by an incubator. However, young ducklings rely on their mothers for a supply of preen oil to make them waterproof; a chicken does not make as much preen oil as a duck, and an incubator makes none. Once the duckling grows its own feathers, it produces preen oil from the sebaceous gland near the base of its tail. Systems In East and Southeast Asia, rice-duck farming is widely practised. This polyculture yields both rice and ducks from the same land; the ducks eat small pest animals in the crop, stir the water, limiting weeds, and manure the rice. Other rice polycultures in the region include rice-fish-duck and rice-fish-duck-azolla systems, where fish further manure the rice and help to control pests. Pests and diseases Domestic ducks have the advantage over other poultry of being strongly resistant to many bird diseases, including such serious conditions as duck plague (viral enteritis). They are, however, susceptible to the dangerous H5N1 strain of avian influenza. Ducks are subject to ectoparasites such as lice and endoparasites such as trematodes, cestodes, and acanthocephalans. A high parasitic load can result in a substantial reduction in the ducks' growth rate. Production In 2021 approximately 4.3 billion ducks were slaughtered for meat worldwide, for a total yield of about 6.2 million tonnes; over 80% of this production was in China, where more than 3.6 billion ducks were killed, yielding some 4.9 million tonnes of meat. Worldwide production of duck meat was substantially lower than that of chicken – 73.8 billion birds slaughtered, 121.6 million tonnes – but considerably greater than that of goose – about 750 million birds killed for 4.4 million tonnes of meat; a back-of-the-envelope comparison of these figures is sketched at the end of this article. Feathers are a by-product of duck farming. As food Meat Since ancient times, the duck has been eaten as food. Usually only the breast and thigh meat is eaten. It does not need to be hung before preparation, and is often braised or roasted, sometimes flavoured with bitter orange or with port. Peking duck is a dish of roast duck from Beijing, China, that has been prepared since medieval times. It is today traditionally served with spring pancakes, spring onions and sweet bean sauce. Eggs and other products In France, ducks are used for the production of foie gras de canard. In some cultures the blood of ducks slaughtered for meat is used as food; it may be eaten seasoned and lightly cooked, as in Ireland, or be used as an ingredient, as in a number of regional types of blood soup, among them the czarnina of Poland and the tiết canh of Vietnam. Duck eggs are eaten mainly in Asian countries such as China; in the Philippines, balut – a fertilised duck egg at about 17 days of development, boiled and eaten with salt – is considered a delicacy and is sold as street food. In culture For children The domestic duck has appeared numerous times in children's stories.
Beatrix Potter's The Tale of Jemima Puddle-Duck was published by Frederick Warne & Co in 1908. One of Potter's best-known books, the tale was included in the Royal Ballet's The Tales of Beatrix Potter. It is the story of how Jemima, a domestic duck, is saved from a cunning fox who plans to kill her when she tries to find a safe place for her eggs to hatch. The Story About Ping is a 1933 American children's book by Marjorie Flack, illustrated by Kurt Wiese, about a domestic duck lost on the Yangtze River. Make Way for Ducklings, a 1941 children's picture book by Robert McCloskey, tells the story of a pair of mallards who decide to raise their family on an island in the lagoon in Boston Public Garden. It won the 1942 Caldecott Medal for its illustrations. The Disney cartoon character Donald Duck, one of the world's most recognizable pop culture icons, is a domestic duck of the American Pekin breed. The domestic duck features in the musical composition Peter and the Wolf, written by the Russian composer Sergei Prokofiev in 1936. The orchestra illustrates the children's story while the narrator tells it. In this, a domestic duck and a little bird argue about each other's flight abilities. The duck is represented by the oboe. The story ends with the wolf eating the duck alive, its quack heard from inside the wolf's belly. In art and folk culture Domestic ducks are featured in a range of ancient artefacts, which reveal that they served as a fertility symbol.
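As a quick sanity check on the figures quoted in the Production section above, the following Python snippet divides each species' total meat yield by the number of birds slaughtered to get an average dressed weight per bird. The totals are those cited in the text; the per-bird averages are simple derived estimates for illustration, not sourced statistics.

# 2021 figures quoted in the Production section:
# species: (birds slaughtered, tonnes of meat)
production = {
    "duck":    (4.3e9, 6.2e6),
    "chicken": (73.8e9, 121.6e6),
    "goose":   (0.75e9, 4.4e6),
}

for species, (birds, tonnes) in production.items():
    kg_per_bird = tonnes * 1000 / birds   # tonnes -> kg, divided by bird count
    print(f"{species}: about {kg_per_bird:.1f} kg of meat per bird")

# Result: roughly 1.4 kg per duck, 1.6 kg per chicken, and 5.9 kg per goose,
# consistent with geese being much larger birds than ducks or chickens.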
Biology and health sciences
Anseriformes
Animals
2656829
https://en.wikipedia.org/wiki/Giant%20petrel
Giant petrel
Giant petrels form a genus, Macronectes, from the family Procellariidae, which consists of two living and one extinct species. They are the largest birds in this family. Both extant species in the genus are native to the Southern Hemisphere. Giant petrels are extremely aggressive predators and scavengers, inspiring another common name, the stinker. Seamen and whalers also referred to the giant petrel as the molly-hawk, gong, glutton bird and nelly. They are the only members of their family that are capable of walking on land. Taxonomy The genus Macronectes was introduced in 1905 by the American ornithologist Charles Wallace Richmond to accommodate what is now the southern giant petrel. It replaced the previous genus Ossifraga, which was found to have been earlier applied to a different group of birds. The name Macronectes combines the Ancient Greek makros meaning "great" and nēktēs meaning "swimmer". The present-day giant petrels are two large seabirds from the genus Macronectes. Long considered to be conspecific (they were not established as separate species until 1966), the two species, the southern giant petrel, M. giganteus, and the northern giant petrel, M. halli, are considered with the two species of fulmars, Fulmarus, to form a distinct subgroup within the Procellariidae; together with the Antarctic petrel, Cape petrel, and snow petrel, they form a group separate from the rest of the family. A fossil giant petrel, Macronectes tinae, is known from the Pliocene epoch of New Zealand. Distribution The living species are restricted to the Southern Hemisphere, and though their distributions overlap significantly, with both species breeding on the Prince Edward Islands, Crozet Islands, Kerguelen Islands, Macquarie Island, and South Georgia, many southern giant petrels nest farther south, with colonies as far south as Antarctica. In July 2019, an individual, either of M. giganteus or M. halli, was found as a vagrant in County Durham and Northumberland in the United Kingdom, marking the first record of the genus in Europe. Description The southern giant petrel is slightly larger than the northern giant petrel in both wingspan and body length. They superficially resemble the albatrosses, and are the only procellariids that can equal them in size. They can be separated from the albatrosses by their bill: the two tube nostrils are joined on the top of the bill, unlike on albatrosses, where they are separate and sit on the sides of the bill. Giant petrels are also the only members of the family Procellariidae to have legs strong enough to walk on land. They are also much darker and more mottled brown (except for the white-morph southern giant petrels, which are whiter than any albatross) and have a more hunch-backed look. The bills of Procellariiformes are also unique in that they are split into between seven and nine horny plates. The petrels have a hooked bill tip, called the maxillary unguis, which can hold slippery prey. They produce a stomach oil made up of wax esters and triglycerides which is stored in the proventriculus. This can be sprayed out of their mouths as a defence against predators, and serves as a protein-rich food source for chicks and for the adults during their long flights. Petrels have a salt gland situated above the nasal passage that helps to desalinate their bodies by excreting a highly saline solution from their noses.
The two species are difficult to tell from each other, possessing similar long, pale, orange bills and uniform, mottled grey plumage (except for around 15% of southern giant petrels, which are almost completely white). The bill tip of M. halli is reddish-pink and that of M. giganteus is pale green, appearing slightly darker and lighter than the rest of the bill, respectively. The underside of older M. halli birds is paler and more uniform than in M. giganteus, the latter showing a contrast between a paler head and neck and a darker belly. Additionally, adults of M. halli typically appear pale-eyed, while adults of M. giganteus of the normal morph typically appear dark-eyed (occasionally flecked paler). Classic examples of the northern giant petrel are identifiable at some range. Young birds of both species are all dark and very hard to distinguish unless the bill tip colour can be seen. Some relatively young northern giant petrels can appear paler on the head, suggesting the southern giant petrel; thus that species is harder to confirm. The extinct Macronectes tinae is characterized by a smaller body than its living relatives. Etymology Macronectes comes from the Greek words makros meaning "long" and nēktēs meaning "swimmer". The name petrel is derived from St. Peter and the story of his walking on water, as these birds appear to run on the water when they take off. Behaviour Feeding Petrels are highly opportunistic feeders. Unique among procellariids, they will feed both on land and at sea; the majority of their food is found near coastlines. On land, they feed on carrion, and regularly scavenge the breeding colonies of penguins and seals. They will display their dominance over carcasses with a "sealmaster posture": the head and the wings are held outstretched, the head pointing at the opponent and the wingtips pointing slightly back; the tail is raised to a vertical position. Giant petrels are extremely aggressive and will kill other seabirds (usually penguin chicks, sick or injured adult penguins and the chicks of other seabirds), even those as large as an albatross, which they kill either by battering them to death or drowning them. At sea, they feed on krill, squid, and fish. They often follow fishing boats and other ships in the hope of picking up offal and other waste. Reproduction The southern giant petrel is more likely to form loose colonies than the northern, both species laying a single egg in a rough nest built a little off the ground. The egg is incubated for about 60 days; once hatched, the chick is brooded for three weeks. Chicks fledge after about four months, but do not achieve sexual maturity until six or seven years after fledging. Conservation While both species were listed as near threatened on the 2008 IUCN Red List, subsequent evidence suggested they were less threatened than previously believed, and the populations of both actually appeared to have increased, at least locally. Consequently, they were listed as least concern on the 2009 Red List and afterwards (as of IUCN's last assessment in 2018, they continue to be listed as least concern). The southern giant petrel is listed as endangered under the Australian Environment Protection and Biodiversity Conservation Act 1999, while the northern giant petrel is listed under the same act as vulnerable. Their conservation status also varies from state to state within Australia.
Biology and health sciences
Procellariiformes
Animals