https://en.wikipedia.org/wiki/Police%20dog
Police dog
A police dog, also known as a K-9, is a dog trained to assist police and other law enforcement officers. Their duties may include searching for drugs and explosives, locating missing people, finding crime scene evidence, protecting officers and other people, and attacking suspects who flee from officers. The breeds most commonly used by law enforcement are the German Shepherd, Belgian Malinois, Bloodhound, Dutch Shepherd, and Labrador Retriever. In recent years, the Belgian Malinois has become the leading choice for police and military work due to its intense drive, focus, agility, and smaller size, though the German Shepherd remains the breed most associated with law enforcement. Police dogs are used at the federal and local levels for law enforcement purposes in many parts of the world. They are often assigned to what in some nations is referred to as a K-9 unit, with a specific handler, and must remember several verbal cues and hand gestures. Initial training for a police dog typically takes between eight months and a year, depending on where and how they are trained, and for what purpose. Police dogs regularly take training programs with their assigned handler to reinforce their training. In many countries, intentionally injuring or killing a police dog is a criminal offense.

History

Early history

Dogs have been used in law enforcement since the Middle Ages. Money was then tithed in the villages for the upkeep of the parish constable's bloodhounds, which were used for hunting down outlaws. The first recorded use of police dogs was in the early 14th century in St. Malo, France, where dogs were used to guard docks and piers. By the late 14th century, bloodhounds known as "slough dogs" were used in Scotland; the word "sleuth" (meaning detective) was derived from this. Between the 12th and 20th centuries, police dogs in the British Isles and on the European continent were primarily used for their tracking abilities.
The rapid urbanization of England and France in the 19th century increased public concern about growing lawlessness. In London, the existing law enforcement body, the Bow Street Runners, struggled to contain crime on its own, and as a result private associations were formed to help combat it. Night watchmen were employed to guard premises and were provided with firearms and dogs to protect themselves from criminals.

Modern era

One of the first attempts to use dogs in policing was made in 1889 by Sir Charles Warren, Commissioner of the Metropolitan Police of London. Warren's repeated failures to identify and apprehend the serial killer Jack the Ripper had earned him much vilification from the press, including denunciations for not using bloodhounds to track the killer. He soon had two bloodhounds trained to perform a simple tracking test from the scene of another of the killer's crimes. The results were far from satisfactory: one of the hounds bit the Commissioner, and both dogs later ran off, requiring a police search to find them. It was in Continental Europe that dogs were first used on a large scale. Police in Paris began using dogs against roaming criminal gangs at night, but it was the police department in Ghent, Belgium, that introduced the first organized police dog service program, in 1899. These methods soon spread to Austria-Hungary and Germany; in the latter, the first scientific developments in the field took place, with experiments in dog breeding and training. The German police selected the German Shepherd as the ideal breed for police work and opened the first dog training school in 1920 in Greenheide. In later years, many Belgian Malinois were added to the unit. The dogs were systematically trained in obedience to their officers and in tracking and attacking criminals.
In Britain, the North Eastern Railway Police were among the first to use police dogs, in 1908, to put a stop to theft from the docks in Hull. By 1910, railway police forces were experimenting with other breeds such as the Belgian Malinois, Labrador Retriever, and German Shepherd.

Training

Popular dog breeds used by law enforcement include the Airedale Terrier, Akita, Groenendael, Tervueren, Malinois, Bernese Mountain Dog, Bloodhound, Border Collie, Boxer, Bouvier des Flandres, Briard, Cane Corso, Bullmastiff, Croatian Sheepdog, Doberman Pinscher, German Shepherd, German Shorthaired Pointer, Golden Retriever, Labrador Retriever, Rottweiler, English Springer Spaniel, and Dogo Argentino. Training of police dogs is a lengthy process, since it begins with the training of the canine handler. Canine handlers go through a long process of training to ensure that they will train the dog to the best of its ability. First, the handler has to complete the requisite police academy training and one to two years of patrol experience before becoming eligible to transfer to a specialty canine unit, because patrol work gives prospective canine officers valuable experience in law enforcement. Dog knowledge and training gained outside the police academy are also considered an asset; this could include dog obedience, crowd control, communicating effectively with animals, and being approachable and personable, since a dog draws attention from surrounding citizens. For a dog to be considered for a police department, it must first pass a basic obedience training course. It must be able to obey the commands of its handler without hesitation; this allows the officer to control exactly how much force the dog uses against a suspect. Dogs trained in Europe are usually given commands in the country's native language.
Dogs are initially trained with this language for basic behavior, so it is easier for the officer to learn new words and commands than to retrain the dog to respond to new ones. This is contrary to the popular belief that police dogs are trained in a different language so that a suspect cannot command the dog against the officer. Dogs used in law enforcement are trained to be either "single purpose" or "dual purpose". Single-purpose dogs are used primarily for backup, personal protection, and tracking. Dual-purpose dogs, however, are more typical: they do everything that single-purpose dogs do and also detect either explosives or narcotics. Dogs can only be trained for one or the other, because a dog cannot communicate to the officer whether it has found explosives or narcotics. When a narcotics dog in the United States indicates to the officer that it has found something, the officer has probable cause to search whatever the dog alerted on (e.g. a bag or vehicle) without a warrant, in most states. In suspect apprehension, a loud barking dog is helpful and can result in suspects surrendering without delay.

Specialization

Police dogs can be specialized to perform in specific areas.

Apprehension and attack dogs – used to locate, apprehend, and sometimes subdue suspects.

Detection dogs – trained to detect explosives or drugs such as marijuana, heroin, cocaine, crack cocaine, or methamphetamine. Some dogs are specifically trained to detect firearms and ammunition.

Dual-purpose dogs – also known as patrol dogs, these dogs are trained and skilled in tracking, handler protection, off-leash obedience, criminal apprehension, and article, area, and building search.

Search and rescue (SAR) dogs – used to locate suspects or find missing people or objects. Belgian Malinois, German Shepherds, Golden Retrievers, Labrador Retrievers, and Bloodhounds can all be used.
Retirement

Police dogs are retired if they are injured to an extent from which they will not recover completely, are pregnant or raising puppies, or are too old or sick to continue working. Since many dogs are raised in working environments for the first year of their life and retired before they become unable to perform, the working life of a dog is 6–9 years. In some countries, retiring police dogs may receive a pension for their contribution to policing. In 2013, a pension scheme for police dogs was introduced in Nottinghamshire, England, wherein the police force offered £805 over the span of three years to cover any additional medical costs; the dogs were also allowed to be adopted by their original handler. In many countries, police dogs killed in the line of duty receive the same honors as their human partners.

Accusations of brutality and racial partiality

A 2020 investigation coordinated by the Marshall Project found evidence that police dogs in the U.S. are widely deployed as disproportionate force, and disproportionately against people of color. A series of 13 linked reports found more than 150 cases from 2015 to 2020 of K-9 officers improperly using dogs as weapons to catch, bite, and injure people. The rate of police K-9 bites in Baton Rouge, Louisiana, a majority-Black city of 220,000 residents, averages more than double that of the next-ranked city, Indianapolis, and nearly one-third of the police dog bites there are inflicted on teenage males, most of whom are Black. Medical researchers found that police dog attacks are "more like shark attacks than nips from a family pet" due to the aggressive training police dogs undergo. Many people bitten were not violent and were not suspected of crimes. Police officers are often shielded from liability, and federal civil rights laws don't typically cover bystanders who are bitten by mistake.
Even when victims can bring cases, lawyers say they struggle because jurors tend to love police dogs.

Usage by country and region
https://en.wikipedia.org/wiki/Classical%20swine%20fever
Classical swine fever
Classical swine fever (CSF) or hog cholera (also sometimes called pig plague, after its German name) is a highly contagious disease of swine (Old World and New World pigs). It has been mentioned as a potential bioweapon.

Clinical signs

Swine fever causes fever, skin lesions, convulsions, splenic infarctions, and usually (particularly in young animals) death within 15 days. The disease has acute and chronic forms and can range from severe, with high mortality, to mild or even inapparent. In the acute form of the disease, in all age groups, there is fever, huddling of sick animals, loss of appetite, dullness, weakness, conjunctivitis, constipation followed by diarrhoea, and an unsteady gait. Several days after the onset of clinical signs, the ears, abdomen, and inner thighs may show a purple discoloration. Animals with acute disease die within 1–2 weeks. Severe cases of the disease appear very similar to African swine fever. With low-virulence strains, the only expression may be poor reproductive performance and the birth of piglets with neurologic defects such as congenital tremor.

Immunization

A small fraction of infected pigs may survive and are rendered immune. Artificial immunization procedures were first developed by Marion Dorset.

Epidemiology

The disease is endemic in much of Asia, Central and South America, and parts of Europe and Africa. It was believed to have been eradicated in the United Kingdom by 1966 (according to the Department for Environment, Food and Rural Affairs), but an outbreak occurred in East Anglia in 2000. On January 31, 1978, USDA Secretary Bob Bergland declared that the United States was free of the disease. The appearance of CSF in Italy and Spain was traced by a retrospective genetic analysis (Greiser-Wilke et al., 2000) to shipments of piglets from the Netherlands. Other regions believed free of CSF include Australia, Belgium (1998), Canada (1962), Ireland, New Zealand, and Scandinavia.
Virus

The infectious agent responsible is a virus, CSFV (previously called hog cholera virus), of the genus Pestivirus in the family Flaviviridae. CSFV is closely related to the ruminant pestiviruses that cause bovine viral diarrhoea and border disease. The effect of different CSFV strains varies widely, leading to a wide range of clinical signs. Highly virulent strains correlate with acute, obvious disease and high mortality, including neurological signs and hemorrhages within the skin. Less virulent strains can give rise to subacute or chronic infections that may escape detection, while still causing abortions and stillbirths. In these cases, herds in high-risk areas are usually serologically tested on a thorough statistical basis. Infected piglets born to infected but subclinical sows help maintain the disease within a population. Other signs can include lethargy, fever, immunosuppression, chronic diarrhoea, and secondary respiratory infections. The incubation period of CSF ranges from 2 to 14 days, but clinical signs may not be apparent until after 2 to 3 weeks. Preventive state regulations usually assume 21 days as the outside limit of the incubation period. Animals with an acute infection can survive 2 to 3 months before their eventual death. Eradicating CSF is problematic. Current programmes revolve around rapid detection, diagnosis, and slaughter, possibly followed by emergency vaccination with inactivated or live vaccines. Vaccination is used only where the virus is widespread in the domestic pig population and/or in wild or feral pigs. In the latter case, a slaughter policy alone is usually impracticable. Instead, countries within the EU have implemented hunting restrictions designed to limit the movement of infected boars, as well as marker and emergency vaccines to inhibit the spread of infection.
Possible sources for maintaining and introducing infection include the wide transport of pigs and pork products, as well as endemic CSF within wild boar and feral pig populations.

Strains

Genotype 1, including subgenotypes 1.1, 1.2, 1.3, 1.4, and the unassigned 1.x
Genotype 2, including 2.1, 2.2, 2.3
Genotype 3, including 3.1, 3.2, 3.3, 3.4

Diagnosis

Standard diagnostic tests include:

Fluorescent antibody test (FAT) – detection of viral protein in tissue using fluorescently labelled antibodies.
Serum enzyme-linked immunosorbent assay (ELISA) – detection of the host animal's antibody response in serum samples.
Antigen ELISA – detection of viral protein (antigen) in serum samples.
RT-qPCR – detection of viral RNA in samples; especially useful for differentiating strains. Direct genetic typing for CSF was first developed by Greiser-Wilke et al. (2000) to trace descendants of the 1997–1998 EU epizootic.
Virus isolation – isolation of the virus in cell culture.
Histopathological examination – histology of the brain shows vasculoendothelial proliferation and perivascular cuffing (cuffing is highly suggestive when accompanied by other signs, but is not pathognomonic for the disease).
https://en.wikipedia.org/wiki/Plateau
Plateau
In geology and physical geography, a plateau (plural plateaus or plateaux), also called a high plain or a tableland, is an area of highland consisting of flat terrain that is raised sharply above the surrounding area on at least one side. Often one or more sides have steep hills or escarpments. Plateaus can be formed by a number of processes, including upwelling of volcanic magma, extrusion of lava, and erosion by water and glaciers. Plateaus are classified according to their surrounding environment as intermontane, piedmont, or continental. A few plateaus may have a small flat top while others have wider ones.

Formation

Plateaus can be formed by a number of processes, including upwelling of volcanic magma, extrusion of lava, tectonic plate movements, and erosion by water and glaciers.

Volcanic

Volcanic plateaus are produced by volcanic activity; the Columbia Plateau in the north-western United States is an example. They may be formed by upwelling of volcanic magma or by extrusion of lava. The underlying mechanism in forming plateaus by upwelling starts when magma rises from the mantle, causing the ground to swell upward. In this way, large, flat areas of rock are uplifted to form a plateau. For plateaus formed by extrusion, the rock is built up from lava spreading outward from cracks and weak areas in the crust.

Tectonic

Tectonic plateaus are formed by tectonic plate movements which cause uplift, and are normally of a considerable size and fairly uniform altitude. Examples are the Deccan Plateau in India and the Meseta Central on the Iberian Peninsula.

Erosion

Plateaus can also be formed by the erosional action of glaciers on mountain ranges, leaving plateaus sitting between the ranges. Water can also erode mountains and other landforms down into plateaus. Dissected plateaus are highly eroded plateaus cut by rivers and broken by deep, narrow valleys. An example is the Scottish Highlands.
Classification

Plateaus are classified according to their surrounding environment.

Intermontane plateaus are among the highest and most extensive plateaus in the world, enclosed by fold mountains. Examples are the Tibetan Plateau, between the Himalayas and the Kunlun Mountains, and the Altiplano, between two ranges of the Andes.

Lava or volcanic plateaus occur in areas of widespread volcanic eruptions. Magma that emerges through narrow cracks or fissures in the crust spreads over a large area and solidifies; these layers of lava sheets form lava or volcanic plateaus. The Antrim Plateau in Northern Ireland, the Deccan Plateau in India, and the Columbia Plateau in the United States are examples of lava plateaus.

Piedmont plateaus are bordered on one side by mountains and on the other by a plain or a sea. The Piedmont Plateau of the Eastern United States, between the Appalachian Mountains and the Atlantic Plain, is an example.

Continental plateaus are bordered on all sides by plains or oceans, forming away from mountains. An example of a continental plateau is the Antarctic Plateau in East Antarctica.

Large plateaus by continent

Africa

The highest African plateau is the Ethiopian Highlands, which cover the central part of Ethiopia. It forms the largest continuous area of its altitude on the continent, with little of its surface falling below 1,500 metres (4,921 ft), while the summits reach heights of up to 4,550 metres (14,928 ft). It is sometimes called the Roof of Africa due to its height and large area. Another example is the Highveld, the portion of the South African inland plateau with an altitude above approximately 1,500 metres but below 2,100 metres, thus excluding the Lesotho mountain regions. It is home to some of the largest South African urban agglomerations. In Egypt are the Giza Plateau and Galala Mountain (once called the Gallayat Plateaus), rising 3,300 ft above sea level.
Antarctica

Another very large plateau is the icy Antarctic Plateau, sometimes referred to as the Polar Plateau or King Haakon VII Plateau, home to the geographic South Pole and the Amundsen–Scott South Pole Station. It covers most of East Antarctica, where there are no known mountains but rather a great thickness of superficial ice, and it spreads very slowly toward the surrounding coastline through enormous glaciers. The polar ice cap is so massive that echolocation measurements of ice thickness have shown that large areas are below sea level; were the ice to melt, the land beneath would rebound through isostasy and ultimately rise above sea level.

Asia

The largest and highest plateau in the world is the Tibetan Plateau, sometimes metaphorically described as the "Roof of the World", which is still being formed by the collision of the Indo-Australian and Eurasian tectonic plates. The Tibetan Plateau covers a vast area at great height above sea level. The plateau is sufficiently high to reverse the Hadley cell convection cycles and to drive the monsoons of India towards the south. The Deosai Plains in Pakistan are situated at an average elevation of 4,114 metres (13,497 ft) above sea level and are considered the second-highest plateau in the world. Other major plateaus in Asia are the Najd on the Arabian Peninsula (elevation 762 to 1,525 m; 2,500 to 5,003 ft), the Armenian Highlands, the Iranian Plateau, the Anatolian Plateau, the Mongolian Plateau, and the Deccan Plateau.

North America

A large plateau in North America is the Colorado Plateau, which covers parts of Colorado, Arizona, New Mexico, and Utah. In northern Arizona and southern Utah the Colorado Plateau is bisected by the Colorado River and the Grand Canyon. This came to be over 10 million years ago, when a river was already there, though not necessarily on exactly the same course.
Then, subterranean geological forces caused the land in that part of North America to rise gradually, by about a centimetre per year, for millions of years. An unusual balance occurred: the river that would become the Colorado River was able to erode into the crust of the Earth at nearly the same rate as the uplift of the plateau. Now, millions of years later, the North Rim of the Grand Canyon sits at a higher elevation than the South Rim, and at its deepest the Colorado River runs far below the level of the North Rim. Another high-altitude plateau in North America is the Mexican Plateau. With an average height of 1,825 metres, it is home to more than 70 million people.

Oceania

The Western Plateau, part of the Australian Shield, is an ancient craton covering much of the continent's southwest, an area of some 700,000 square kilometres. It has an average elevation between 305 and 460 metres. The North Island Volcanic Plateau is an area of high land occupying much of the centre of the North Island of New Zealand, with volcanoes, lava plateaus, and crater lakes, the most notable of which is the country's largest lake, Lake Taupō. The plateau stretches approximately 100 km east to west and 130 km north to south, and most of it lies more than 600 metres above sea level.

South America

A tepui, or tepuy, is a table-top mountain or mesa found in the Guiana Highlands of South America, especially in Venezuela and western Guyana. The word tepui means "house of the gods" in the native tongue of the Pemon, the Indigenous people who inhabit the Gran Sabana. Tepuis can be considered minute plateaus and tend to be found as isolated entities rather than in connected ranges, which makes them the host of a unique array of endemic plant and animal species. Some of the most outstanding tepuis are Neblina, Autana, Auyán and Mount Roraima.
They are typically composed of sheer blocks of Precambrian quartz arenite sandstone that rise abruptly from the jungle, giving rise to spectacular natural scenery. Auyán-tepui is the source of Angel Falls, the world's tallest waterfall. The Colombian capital city of Bogotá sits on an Andean plateau known as the Altiplano Cundiboyacense, roughly the size of Switzerland. This high northern Andean plateau is situated in the country's eastern range and is divided into three main flat regions: the Bogotá savanna, the valleys of Ubaté and Chiquinquirá, and the valleys of Duitama and Sogamoso. Parallel ranges of the Andes delimit one of the world's highest plateaus: the Altiplano (Spanish for "high plain"), also called the Andean Plateau or Bolivian Plateau. It lies in west-central South America, where the Andes are at their widest, and is the most extensive area of high plateau on Earth outside Tibet. The bulk of the Altiplano lies within Bolivian and Peruvian territory, while its southern parts lie in Chile. The Altiplano hosts several cities, such as Puno, Oruro, El Alto, and La Paz, the administrative seat of Bolivia. The northeastern Altiplano is more humid than the southwestern, which hosts several salares, or salt flats, owing to its aridity. At the Bolivia–Peru border lies Lake Titicaca, the largest lake in South America.
https://en.wikipedia.org/wiki/Friesian%20horse
Friesian horse
The Friesian is a horse breed originating in Friesland in the northern Netherlands. The breed nearly became extinct on more than one occasion. It is classified as a light draught horse, and the modern-day Friesian is used for riding and driving. The Friesian horse is best known for its all-black coat colour, its long flowing mane and tail, the feathering on its legs, a high head carriage, and a high-stepping action.

Breed characteristics

The breed has powerful overall conformation and good bone structure, with what is sometimes called a Baroque body type. Friesians have long arched necks, well-chiseled short ears, and Spanish-type heads. They have sloping shoulders and compact, muscular bodies with sloping hindquarters and a low-set tail. Limbs are short and strong, with feathering (long hair on the lower legs). A Friesian also has a long, thick mane and tail, often wavy. The breed is known for a brisk, high-stepping trot. The Friesian is considered willing, active, and energetic, but also gentle and docile, and it tends to have great presence and to carry itself with elegance. Today there are two distinct conformation types: the "baroque" type, which has the more robust build of the classical Friesian, and the modern "sport horse" type, which is finer-boned. Both types are common, though the modern type is currently more popular in the show ring than the baroque Friesian. However, conformation type is considered less important than correct movement.

Size

Height at the withers varies across the breed, and mares or geldings must meet a minimum height to qualify for a "star-designation" pedigree.

Colour

Friesians rarely have white markings of any kind; most registries allow only a small star on the forehead for purebred registration. Though Friesian horses are characteristically black, chestnut colouring occasionally appears, as some bloodlines carry the "red" ("e") gene.
In the 1930s, chestnuts and bays were seen. The chestnut colour is generally not acceptable for stallions, though it is sometimes allowed for mares and geldings. A chestnut-coloured Friesian that competes is penalised; discoloration from old injuries or a black coat faded by the sun is not. In 1990, the registry began attempts to breed out the chestnut colour, and today stallions undergo genetic testing. If testing indicates the presence of the chestnut or "red" gene, even if heterozygous and masked by black colour, the horse is not accepted for registration with the FPS. In 2014 there were eight stallion lines known to still carry the chestnut gene. The American Friesian Association, which is not affiliated with the FPS, allows horses with white markings and/or chestnut colour to be registered if purebred parentage can be proven.

Genetic disorders

Four genetic disorders acknowledged by the industry may affect horses of Friesian breeding: dwarfism, hydrocephalus, a tendency for aortic rupture, and megaesophagus. There are genetic tests for the first two conditions. The Friesian is also among several breeds that may develop equine polysaccharide storage myopathy. Approximately 0.25% of Friesians are affected by dwarfism, a recessive condition that results in horses with a normal-sized head, a broader-than-normal chest, an abnormally long back, and very short limbs. Additionally, the breed has a higher-than-usual rate of digestive system disorders and a greater tendency to insect bite hypersensitivity. Like other feathered draught breeds, Friesians are prone to a skin condition called verrucous pastern dermatopathy and may be generally prone to a compromised immune system. Friesian mares have a very high rate of retained placenta after foaling, at 54%. Some normal-sized Friesians also have a propensity toward tendon and ligament laxity, which may or may not be associated with dwarfism.
The relatively small gene pool and inbreeding are thought to be factors behind most of these disorders.

History

The Friesian originates in the province of Friesland in the northern Netherlands, where there is evidence of thousands of years of horse populations. As far back as the 4th century there are mentions of Friesian troops who rode their own horses; one of the best-known sources is the English writer Anthony Dent, who wrote about Friesian mounted troops at Carlisle. Dent, amongst others, wrote that the Friesian horse was the ancestor of both the British Shire horse and the Fell pony; however, this is speculation. It was not until the 11th century that there were illustrations of what appeared to be Friesians. Many of the illustrations found depict knights riding horses which resemble the breed, one of the most famous examples being William the Conqueror. These ancestors of the modern Friesians were used in medieval times to carry knights to battle. In the 12th and 13th centuries, some eastern horses of crusaders were mated with Friesian stock. During the 16th and 17th centuries, when the Netherlands were briefly linked with Spain, there was less demand for heavy war horses, as battle arms changed and became lighter; Andalusian horses were bred with Friesians, producing a lighter horse more suitable for work as an urban carriage horse. Historian Ann Hyland has also written about the Friesian breed. The breed was especially popular in the 18th and 19th centuries, when Friesians were in demand not only as harness horses and for agricultural work, but also for the trotting races then so popular. In the 1800s, the Friesian was bred to be lighter and faster for trotting, but this led to what some owners and breeders regarded as inferior stock, so a movement to return to pureblood stock took place at the end of the 19th century.
A studbook society, the FRS, was founded in 1879 by Frisian farmers and landowners. Its horse studbook was first published in 1880 and initially registered both Friesian horses and a group of heavy warmblood breeds, including Ostfriesen and Alt-Oldenburgers, collectively known as "Bovenlanders". At the time, the Friesian horse was declining in numbers and being replaced by the more fashionable Bovenlanders, both directly and through the crossbreeding of Bovenlander stallions on Friesian mares. By 1879 this had already virtually exterminated the pure Friesian in significant parts of the province, which made the inclusion of Bovenlanders necessary. While the work of the society led to a revival of the breed in the late 19th century, it also resulted in the sale and disappearance of many of the best stallions from the breeding area, and Friesian horse populations dwindled. By the early 20th century, the number of available breeding stallions was down to three. Therefore, in 1906 the two parts of the registry were joined, and in 1907 the studbook was renamed the FPS. In 1913 a society was founded to protect and promote the breed, and by 1915 it had convinced the FPS to split registration into two groups. By 1943, the breeders of non-Friesian horses had left the FPS completely to form a separate association, which later became the Koninklijk Warmbloed Paardenstamboek Nederland (Royal Warmblood Studbook of the Netherlands, KWPN). Displacement by mechanical farm equipment on dairy farms was also a threat to the survival of the Friesian horse; the last draught function performed by Friesians on a significant scale was on farms that raised dairy cattle. World War II slowed the process of displacement, allowing the population and popularity of the breed to rebound.
Important in the initial stage of the breed's recovery was the family-owned Circus Strassburger, which, having fled Nazi Germany for the Low Countries, discovered the show qualities of the breed and demonstrated its abilities outside its local breeding area during and after the Nazi occupation.

Uses

As use in agricultural pursuits declined, the Friesian became popular for recreational uses. Today, about seven percent of the horses in the Netherlands are Friesians. The Friesian horse today is used both in harness and under saddle, particularly in the discipline of dressage. In harness, Friesians are used for competitive and recreational driving, and a traditional high-wheeled cart is seen in some events designed for the breed. Friesians are also used to pull vintage carriages at ceremonial events and parades. Because of their colour and striking appearance, Friesian horses are a popular breed in movies and television, particularly in historic and fantasy dramas. They are viewed as calm in the face of the activity associated with filmmaking, but also elegant on camera.

Registration and breeding

The KFPS, whose name means "Royal Association, The Friesian Horse Studbook", is the original Friesian studbook, founded in 1879 in the Netherlands, and is the world-recognized official studbook for the Friesian horse breed. The KFPS has licensed about 30 organizations around the world as authorized representatives to uphold its breeding program standards, record registrations, and arrange horse evaluations. Most KFPS-registered horses are in the Netherlands, Germany, and North America. KFPS studbook breeding is strictly controlled, and breeding a KFPS-registered and approved stallion to any non-KFPS mare or to any other breed of horse is strongly discouraged. Other registries exist for Friesians and Friesian crossbreeds, but the KFPS does not permit dual registration. For a stallion to be approved as breeding stock, it must pass a rigorous approval process.
Horses are judged at an inspection, or , by Dutch judges, who decide whether the horse is worthy of breeding. There are multiple registries within the KFPS. The two main registries are the studbook, for approved stallions, and the foalbook, for horses from the mating of an approved stallion and a mare in the foalbook. There are two auxiliary registries. B-Book I is for horses from a mating with a limited-approved stallion: in countries where approved-stallion stock is low, some stallions are given limited breeding rights, and their offspring can be registered in B-Book I with the possibility of upgrading to a higher studbook grade after three successive generations of breeding by KFPS-approved studbook stallions. North America is one of the regions outside the Netherlands with a sufficient number of approved stallions available that there have been no B-Book I registrations since 1992. B-Book II is for horses from the mating of two purebred Friesians where the stallion is not approved; such offspring must be registered directly with the KFPS in the Netherlands. A few special status awards can be obtained at evaluation events: a star predicate is awarded to a mare, gelding, or unapproved stallion that meets minimum standards of conformation, movement, and height, and a sport predicate is awarded to horses that have achieved certain performance goals in dressage or driving competitions. KFPS-registered horses born prior to 1997 had tongue tattoos; horses born after have a microchip implant in the upper left neck. The tattoo or chip numbers are recorded on the registration papers. Names given to foals born in a certain year must begin with a fixed letter as determined by the KFPS. Parentage is verified with DNA testing, and mares are DNA tested for hydrocephalus and dwarfism unless their sire and dam had both tested as non-carriers.
Share taxi
A share taxi, shared taxi, taxibus, jitney or dollar van in the US, or marshrutka in former Soviet countries, is a mode of transport that falls between a taxicab and a bus. Share taxis are a form of paratransit. They are vehicles for hire and are typically smaller than buses. Share taxis usually take passengers on a fixed or semi-fixed route without timetables, sometimes departing only when all seats are filled. They may stop anywhere to pick up or drop off their passengers. They are most common in developing countries and inner cities. The vehicles used as share taxis range from four-seat cars to minibuses, midibuses, covered pickup trucks, station wagons, and trucks; certain vehicle types may be better suited than others. They are often owner-operated. An increase in bus fares usually leads to a significant rise in usage of share taxis. Liberalization is often encouraged by libertarian urban economists, such as Richard Allen Epstein of the University of Chicago, James Dunn of Rutgers, and Peter Gordon of the University of Southern California, as a more "market-friendly" alternative to public transportation. However, concerns over fares, insurance liabilities, and passenger safety have kept legislative support decidedly tepid. Some share taxi services are forms of demand-responsive transport, including shared shuttle bus services to airports. Some can be booked online using mobile apps. Operation Terminus A given share taxi route may start and finish in fixed central locations, and landmarks may serve as route names or route termini. In other places there may be no formal termini, with taxis simply congregating at a central location instead. Even more-formal terminals may be just parking lots. The term "rank" denotes an area, specifically built for taxi operators by a municipality or city, where commuters may start and end their journey. Route Where they exist, shared taxis provide service on set routes within and sometimes between towns. 
After a shared taxi has picked up passengers at its terminus, it proceeds along a semi-fixed route, where the driver may determine the actual path within an area according to traffic conditions. Drivers will stop anywhere to allow riders to disembark, and may sometimes do the same when prospective passengers want to ride. Vehicle ownership Most share taxis are operated under one of two regimes. Some are operated by a company; for example, in Dakar there are company-owned fleets of hundreds of cars rapides. In the Soviet Union, share taxis, known as marshrutka, were operated by state-owned taxi parks (RAF-977DM marshrutnoye taksi, Avtomobil Na Sluzhbie, No. 28, DeAgostini, 2012, in Russian). There are also individual operators in many countries. In Africa, while there are company share taxis, individual owners are more common. Rarely owning more than two vehicles at a time, they rent out a minibus to operators, who pay fuel and other running costs and keep the revenue. Syndicates In some places, such as some African cities and Hong Kong, share taxi minibuses are overseen by syndicates, unions, or route associations. These groups often function in the absence of a regulatory environment and may collect dues or fees from drivers (such as per-use terminal payments, sometimes illegally), set routes, manage terminals, and fix fares. Terminal management may include ensuring that each vehicle leaves with a full load of passengers. Because the syndicates represent owners, their regulatory efforts tend to favor operators rather than passengers, and the very termini the syndicates maintain can cost passengers time and money, as well as force them to disembark at inconvenient locations, in a phenomenon called "terminal constraint". By location Africa Some Francophone African countries use the term ('bush taxi', often spelled with a space rather than a hyphen in English) for share taxis. 
In some African cities, routes run between formal termini, where the majority of passengers board. In these places, the share taxis wait for a full load of passengers prior to departing, and off-peak wait times may be in excess of an hour. In Africa, regulation mainly pertains to the vehicle itself, not its operator or its mode of operation. African minibuses are difficult to tax, and may operate in a "regulatory vacuum", perhaps because their existence is not part of a government scheme but simply a market response to a growing demand for such services. Route syndicates and operators' associations often exercise unrestricted control, and existing rules may see little enforcement. In many traffic-choked, sprawling, and low-density African cities, minibuses are used. Algeria In Algeria, taxis collectifs ply fixed routes with their destination displayed. Rides are shared with others who are picked up along the way, and the taxi leaves only when it seats all the passengers it can. While stations, set locations to board and disembark, exist, prospective passengers flag down a taxi collectif when they want a ride. Operating both inter- and intra-city, taxis collectifs that travel between towns may be called interwilaya taxis. Along with all forms of public transport in Algeria, Foreign Affairs and International Trade Canada recommends against using these share taxis. The Irish Department of Foreign Affairs advises travellers to use taxis recommended by a hotel. Burkina Faso In Ouagadougou, capital of Burkina Faso, the share taxi role is not filled by the traditional African minibus. Democratic Republic of the Congo People in Kinshasa, DRC (or perhaps just the Kongo people), may call share taxis fula fula, meaning "quick quick". There was no independent transport authority in the city of Kinshasa as of 2008. Cameroon Share taxis do exist in Cameroon, but as of 2008 minibuses could not legally be used for this purpose. 
That same year, Douala, Cameroon, was also without an independent transport authority. Egypt Egyptian share cabs are generally known as micro-bus or, particularly among Alexandrians, by a name meaning "project". Micro-buses are licensed by each of the governorates of Egypt as taxicabs, and are generally operated privately by their drivers. Although each governorate attempts to maintain a consistent paint scheme for them, in practice their colors vary widely, as the "consistent" schemes have changed from time to time and many drivers have not bothered to repaint their cars. Rates vary depending on distance traveled, although these rates are generally well known to those riding the micro-bus. The fares also depend on the city. Riders can typically hail micro-buses from any point along the route, often with well-established hand signals indicating the prospective rider's destination, although certain areas tend to be well-known micro-bus stops. Like the Eastern European marshrutka, a typical micro-bus is a large van, most often a Toyota HiAce or its Jinbei equivalent, the Haise, the latter produced by the Bavarian Auto Manufacturing Group in 6th of October City in Egypt. Smaller vans and larger small buses are also used. Ethiopia Minibus taxis in Ethiopia are one of the most important modes of transport in big cities like Addis Ababa. They are preferred by the majority of the populace over public buses and more traditional taxicabs because they are generally cheap, operate on diverse routes, and are available in abundance. All minibus taxis in Ethiopia have a standard blue-and-white coloring scheme, much as New York taxis are yellow. Minibus taxis, usually Toyota HiAces, frequent the streets. They typically carry 11 passengers, but will always have room for one more until that is no longer the case. The minibus driver has a crew member called a weyala, whose job is to collect the fare from passengers. 
In 2008, publicly operated transport was available in Addis Ababa in addition to that provided by the minibuses; a fleet of some 350 large buses served this purpose. Also as of 2008, the city lacked an independent transport authority, but some regulation, such as that controlling market entry, did exist. Route syndicates may be present but are described as "various". Ghana In Ghana and neighboring countries, share taxis are called tro tro. They are privately owned minibuses that travel fixed routes and leave when filled to capacity. While there are tro tro stations, these shared taxis can also be boarded anywhere along the route. They are operated by a driver and a bus conductor, called a "mate", who collects money and shouts out the destination; many are decorated with slogans and sayings, often religious, and few operate on Sundays. A 2010 report by the World Bank found that tro tro are used by 70% of Ghanaian commuters. This popularity may be because cities such as Accra had only basic public transportation apart from these small minibuses. An informal means of transportation, in Ghana they are licensed by the government, but the industry is self-regulated. In Accra, syndicates include the GPRTU and PROTOA. Aayalolo, a bus rapid transit system, opened in November 2016; however, most people continued to use tro tros as of 2019. The term "tro tro" is believed to derive from the Ga word tro, "threepence", because the conductors usually asked for "three three pence", which was the standard bus fare in the 1940s, when Ghana still used the British West African pound and later the Ghanaian pound. Alternatively, its origin is not "three times three pence" but rather "threepence [tro] each": doubling a coin's name in the vernacular means "that coin for each person (or item)". 
Three pence was the price per passenger in the early 1960s, when pounds, shillings, and pence were still in use, including threepence coins, before decimalization of the currency into cedi and pesewa in 1965. There was no independent transport authority in the capital, Accra, as of 2008; in its absence, syndicates such as the Ghana Private Road Transport Union (GPRTU) and PROTOA oversee share taxis there. Despite the regulatory challenges, the service was regulated during the COVID-19 pandemic in Ghana: there was 98% compliance with guidelines on physical distancing, although guidelines on individual use of face masks were more difficult to enforce. Ivory Coast In the Ivory Coast, gbaka is a name for minibus public transport. The transport regulator in Abidjan, Ivory Coast, is the Agence de Gestion des Transports Urbains (AGETU). As of 2008, Abidjan public transport was serviced by large buses as well as minibuses. Syndicates include UPETCA and SNTMVCI. Kenya In Kenya, regulation does extend to operators and mode of operation (such as routes used) as well as the vehicle (Mbugua wa-Mungai, "Hidden $ Centz: Rolling the Wheels of Nairobi Matatu", in Nairobi Today: The Paradox of a Fragmented City, ed. Helene Charton-Bigot and Deyssi Rodriguez-Torres, African Books Collective, 2010, p. 376). Madagascar Mali In Mali, share taxis are called sotrama and dourouni. As of 2008, Bamako, Mali, had no independent transport authority, but share taxi activity could fall under the control of the municipal regulator Direction de la régulation et du contrôle du transport urbain (DRCTU). Morocco In Morocco, intercity share taxis are called grand taxis. They are generally old full-size Mercedes-Benz sedans, and seat six or more passengers. 
Nigeria In Nigeria, both minibuses (called danfo) and midibuses (molue) may be operated as share taxis. Such forms of public transport may also be referred to as bolekaja, and many bear slogans or sayings. Lagos, Nigeria, has a transport-dedicated regulator, the Lagos Metropolitan Area Transport Agency (LAMATA). Outside of Lagos, most major cities in Africa have similar systems of transport. Syndicates in Lagos include the National Union of Road and Transport Workers (NURTW). Rwanda Minibus public transports in Rwanda may be called coaster buses, share taxis, or twegerane; the latter may mean "stuffed" or "full". As of 2020, in Kigali, Rwanda, syndicates include RFTC, Kigali Bus Services, and Royal Express. South Africa Over 60% of South African commuters use shared minibus taxis, 16-seater commuter buses sometimes referred to as kombis. Many of these vehicles are unsafe and not roadworthy, and often dangerously overloaded. Since the 1980s, share taxis have been severely affected by turf wars. Prior to 1987, the taxi industry in South Africa was highly regulated and controlled. Black taxi operators were denied permits in the apartheid era, and all minibus taxi operations were, by their very nature, illegal. After 1987, the industry was rapidly deregulated, leading to an influx of new minibus taxi operators keen to make money off the high demand for this service. Taxi operators banded together to form local and national associations. Because the industry was largely unregulated and the official regulating bodies corrupt, these associations soon engaged in anti-competitive price fixing and exhibited gangster tactics, including the hiring of hit-men and all-out gang warfare. During the height of the conflict, it was common for taxi drivers to carry shotguns and AK-47s and to shoot rival taxi drivers and their passengers on sight. 
Along with new legislation, the government has instituted a recapitalization scheme to replace the old and un-roadworthy vehicles with new 18- and 35-seater minibuses. These new minibus taxis carry the South African flag on the side and are notably more spacious and safe. Tanzania Minivans and minibuses are used as vehicles for hire and referred to as dala dala in Tanzania. While dala dala may run fixed routes, picking up passengers at central locations, they will also stop along the route to drop someone off or allow a prospective passenger to board. Before minibuses became widely used, the typical dala dala was a pick-up truck with benches placed in the truck bed. In Dar es Salaam, as of 2008, publicly operated minibus service also existed. Dala dala are usually run by both a driver and a bus conductor called a mpigadebe, literally "a person who hits a debe" (a 4-gallon tin container used for transporting gasoline or water). The name refers to the fact that conductors often hit the roof and side of the van to attract customers and to notify the driver when to leave the station. Often crowded, they have their routes allocated by a Tanzanian transport regulator, the Surface and Marine Transport Regulatory Authority (SUMATRA), but syndicates also exist and include DARCOBOA. Tunisia Share taxis in Tunisia are called louages; in French, the name means "rental". They ply fixed or semi-fixed routes, departing from stations only when filled with passengers rather than at specific times, and fares are set by the government. At most louage stations, tickets must be purchased at a booth and given to the driver. In contrast to other share taxis in Africa, louages are sparsely decorated. These white vans sport a single colored stripe that alerts potential passengers to the type of transport they offer. 
Red-striped vans travel from one state to another, blue-striped vans travel from city to city within a state, and yellow-striped vans serve rural locales. Small placards atop the vans specify either a van's exact destination or the town in which it is registered. Prior to the introduction of vans, French-made station wagons were used as louages. West Africa The term kia kia may be used in Yorùbáland to refer to minibus public transports, and means "quick quick". Asia Hong Kong Public light buses (), also known as minibuses or maxicabs (), run the length and breadth of Hong Kong, through areas which the standard bus lines cannot or do not reach as frequently, quickly, or directly. They are 16- or 19-seater minibuses. Public light buses are differentiated from regular minibuses by their red-coloured roofs and, with very few exceptions, their lack of route numbers. With no timetable, drivers depart when they deem the passenger count on board financially equitable. Special features include high speeds (up to 110 km/h on some routes, which is illegal where it exceeds the 80 km/h limit) and permission for the driver to end the journey prematurely, even with passengers on board. Although within their right to charge the full fare, drivers usually lower or omit the fare if they are unable to deliver the passenger to the promised destination. Because of their small size, limited carrying capacity, frequency, and diverse range of routes, minibuses typically offer a faster and more efficient transportation option, although they are generally slightly more expensive than standard buses. Minibuses carry a maximum of 19 seated passengers; standing passengers are not allowed. There are two types of public light minibus: green and red. 
Both types have a cream-coloured body, the distinguishing feature being the colour of the external roof and the type of service that the colour denotes: green operates like a regular transit bus, with a fixed number, route, schedule, and fare (but generally no fixed stops); red is a shared taxi operating unregulated on a semi-fixed route, with the driver waiting for enough passengers to justify leaving, as his income depends on the revenue. Cyprus In Cyprus, there are privately owned share taxis, called service taxis, that travel to set destinations and board additional passengers en route. India In India, several cities have minibuses in addition to the three-wheeler taxicabs called rickshaws. Minibuses are especially popular in the city of Kolkata for intra-city travel but are also present elsewhere. They are also a crucial mode of public transport in the Himalayan region and in the hilly tracts of Northeast India, as other modes of transport are infrequent or absent altogether. Shared taxis have been operating in Mumbai, India, since the early 1970s. These are point-to-point services that operate during peak hours. During off-peak hours they ply like regular taxis: they can be hailed anywhere on the roads and passengers are charged by the meter. During peak hours they take a full cab load of passengers to a more or less common destination. The pick-up points are usually fixed, and sometimes (but not always) marked by a sign saying "shared taxis". Cabs typically line up at these points during peak hours. They sometimes display their general destination on their windscreens, and passengers get in and wait for the cab to fill up; it leaves when full. Fares are fixed and much lower than the metered fare to the same destination, but higher than a bus or train fare. Such informal arrangements also exist in other Indian cities. Share jeeps are a common form of transportation in the Himalayas, the North Eastern States, and elsewhere. 
Indonesia Angkutan kota, abbreviated as angkot, are share taxis in Indonesia operating widely throughout the country, usually as microbuses. In some places there were also three-wheelers called bemo (such as autorickshaws based on the Daihatsu Midget), but these have been phased out. The older version of the angkot is called an oplet. The name of this form of transportation differs between provinces and areas of the country. In Jakarta, it is called angkot or "mikrotrans"; in other parts, such as Sulawesi, the term mikrolet (shortened to "mikro") is more widely used, especially in Manado. In Makassar it is called "pete-pete", in Malang "angkota", in Medan "sudako", in Indonesian Papua "taksi", in Aceh "labi-labi", and in Samarinda "minibus" (though even within the city it is also called angkot). Share taxis operated across rural or village routes are called angkutan desa, abbreviated as angkudes. Angkot and angkudes run according to their exact routes and may stop at any class of bus station (A-, B-, and C-type bus stations). Additionally, passengers can stop the van anywhere along its route; it is not required to stop at a bus stop. In late 2024, Jakarta and Surabaya introduced modern share taxis operating in their city areas, run by Transjakarta and Suroboyo Bus respectively. Iran In Iran, a share taxi is usually called a "taxi", while a non-shared one is called an "ajans" (اژانس, pronounced [aʒans]). Four passengers share a taxi; sometimes there is no terminus, and they wait at the street side and call out their destination to all taxis until one of them stops. These are regular taxis, but passengers who want a non-shared taxi can call an ajans (taxi service) or wait at the street side and say "darbast" (which means non-shared), indicating that they are not interested in sharing the taxi and are consequently willing to pay more for the privilege. 
Minibuses with a capacity of 18 passengers and van taxis with a capacity of 10 passengers are other kinds of shared transport in Iran. Israel In Israel, a monit sherut (pl. moniyot sherut), meaning "service taxi", is a van or minibus that serves as a share taxi (Frommer's Israel, Robert Ullian, Frommer's, 2010, 544 pp.); these can be picked up from anywhere on their route. They follow fixed routes (sometimes the same routes as public transport buses) and usually leave the initial station only when full. Moniyot sherut operate both inter- and intra-city ("International Issue; Going Abroad Without Going Broke", nytimes.com, March 11, 1990). Payment is often made by passing money to the driver in a "human chain" formed by the passengers seated in front; the change (and the receipt, when requested) is returned to the person who paid by the same means. On intra-city routes, where they compete with official buses, the drivers usually coordinate their travel by radio so that they can arrive at the bus station just before the public transport buses and take the most passengers. The monit sherut is one of the only forms of transit accessible to many Israelis during Shabbat, as most public transportation in the country closes down between sunset on Friday and nightfall on Saturday. Mercedes vans, generally Arab-owned and very efficient, are commonly used; they have space for 7–8 people and follow loosely fixed routes, dropping a passenger either at a specific terminus or going a little out of the way to accommodate the passenger. Philippines The most popular means of public transportation in the Philippines as of 2007, jeepneys were originally made out of US military jeeps left over from World War II and are known for their color and flamboyant decoration. 
Jeepneys are built by local automobile repair shops from a combination of prefabricated elements (from a handful of Filipino manufacturers) and improvisation, and in most cases are equipped with "surplus" or used Japanese SUV or light truck engines, drive train, suspension, and steering components (from recycled vehicles in Japan). They have not changed much since their post-war creation, even in the face of increased access to pre-made vehicles such as minibuses. However, due to the government's Public Utility Vehicle Modernization Program, jeepneys and other modes of transportation must comply with the newer Philippine National Standards, which are more closely aligned with international standards. Older jeepneys have the entrance at the back, with space for two people beside the driver (or more if they are small), while modern jeepneys have two doors on the right side of the vehicle. The back cab of the jeepney is equipped with two long bench seats along the sides, and the people seated closest to the driver are responsible for passing the fares of new passengers forward to the driver and the change back to the passenger. The start and end point of a jeepney route is often a jeepney terminal, where there is a queue system so that only one jeepney plying a particular route is filled at a time, and where a person helps the driver to collect fares and fill the vehicle with people, usually to more than comfortable capacity. Drivers prefer to leave only when full and to stop only for crowds of potential passengers, but riders can nonetheless disembark at any time; and while jeepneys ply fixed routes, these may be subject to change over time. New routes may need approval from a Philippine transport regulator. Jeepney stations do exist. Another share taxi that is common in the Philippines is the UV Express, which uses compact MPVs and vans. These vehicles seat 10–18 people and charge an additional 2 Philippine pesos per kilometer (as of 2013). 
Thailand Literally "two rows", a songthaew or song thaew (Thai: สองแถว; Lao: ສອງແຖວ, [sɔ̌ːŋtʰíw]) is a passenger vehicle in Thailand and Laos adapted from a pick-up or a larger truck and used as a share taxi. They are also known as baht buses. Turkey and Northern Cyprus In Turkey and Northern Cyprus, dolmuş (pronounced "dolmush") are share taxis that run on set routes within and between cities. Each of these cars or minibuses displays its particular route on a signboard behind the windscreen. Some cities may only allow dolmuş to pick up and set down passengers at designated stops, and terminals also exist. The word derives from the Turkish for "full" or "stuffed", as these share taxis depart from the terminal only when a sufficient number of passengers have boarded. Visitors to Turkey have been surprised by the speed of dolmuş travel. Traveling intra- and inter-city, the privately owned minibuses are overseen by a governance institution; routes are leased and vehicles licensed. Passengers board anywhere along the route as well as at termini and official stations. Dolmuş in Turkish-controlled Northern Cyprus display their routes but do not follow timetables. West Bank, Palestine Share taxis are often called "ser-vees" (service taxis) in the West Bank. Minibuses are often used in lieu of vans. Ford Transit vans were a popular vehicle for conversion, resulting in the generic trademarks "Ford" and "Fordat" (pl.) being used to describe minibuses of various makes, which replaced aging Mercedes sedans. Oceania New Zealand In New Zealand, the first widespread motor vehicle services were shared taxi services termed service cars; a significant early provider was Aard, operating elongated Hudson Super Sixes. By 1930, there were 597 service cars. Aard was taken over by New Zealand Railways Road Services in 1928. Shared taxis in New Zealand are nowadays referred to as shuttles or shuttle vans. 
Shared buses or vans are available in many more-developed countries, connecting frequent destinations and charging a fixed fee per passenger. The most common case is a connection between an airport and central city locations. These services are often known as shuttles. Such services usually use smaller vehicles than normal buses and often operate on demand. An air traveler can contact the shuttle company by telephone or Internet, not necessarily in advance, and the company will ensure that a shuttle is provided without unreasonable delay. The shuttle will typically connect one airport with several large hotels, or with addresses in a specified area of the city. The shuttle offers much of the convenience of a taxi, although it takes longer, at a price that is significantly lower for one or two passengers. Scheduled services between an airport and a hotel, usually operated by the hotel, are also called shuttles. In many cases the shuttle operator takes the risk of there not being enough passengers to make the trip profitable; in others, there is a minimum charge when there are not enough passengers. Usually, there are regulations covering vehicles and drivers; for example, in New Zealand under NZTA regulations, shuttles are only allowed to have up to eleven passenger seats, and the driver must have a passenger endorsement (P) on their driver's license. Europe Former Soviet Union Moldova In Moldova, share taxis are called rutiere (singular rutieră). Introduced in 1981, they are private, owner-operated minibuses that run along fixed routes. In cities, each rutieră route has a given number, as with buses or trolleybuses. Netherlands Besides the conventional deeltaxi, there are treintaxis in some Dutch towns. Operated on behalf of the Netherlands Railways, they run to and from railway stations, and the ride is shared with additional passengers picked up along the way. 
Tickets can be purchased at railway ticket offices or from the cab driver, but treintaxis must be ordered by phone unless boarding at a railway station. Bulgaria Marshrutkas are rare in Bulgaria. As of 2021, only a single route operates in Sofia, while 10 lines operate in Plovdiv. They are customized passenger vans modified to include large windows in the back, rails, and handles. Marshrutkas are commonly white, although their colour varies, and are partially covered in advertising. In some cases, the seating has been modified: popular routes carrying more passengers typically have more standing space. Examples of van models include the Peugeot Boxer, Citroën Jumper, Ford Transit, Iveco Daily, and Renault Master. They have a fixed fare, paid upon boarding. Marshrutkas were not obliged to stop anywhere on the route, although they did slow down around popular spots. Prospective passengers would flag down a marshrutka in a taxi-like manner; getting off was arranged with the driver, often by just standing up and approaching the door. Sometimes the driver would ask for consent to veer off the route to avoid a traffic jam or roadworks. Romania In Romania, microbuze or maxi-taxis supplied the need for affordable public transportation in smaller towns when some local administrations abolished the expensive community-owned bus systems. In Bucharest, this form of transportation appeared in the early 1980s, when the ITB introduced them as a peak-hour service using Iran National-made Mercedes-Benz T2 vans, supplemented in the late 1980s by Rocar-TV vans. In 1990, the newly founded RATB sold off these operations to private operators, who began running them in competition with the RATB. They enjoyed wide popularity, especially from 2003 to 2007, and from 2011 onwards, when the RATB lost the rights to operate suburban routes. 
On the Black Sea shore, it is very common to travel from Constanţa or Mangalia to the resorts on minibuses (microbuze), especially to those resorts where the competing train service is far from the beach and/or lodging facilities. These minibuses have been criticised for their shady operations, lack of safety and primitive transportation conditions. Greece In Athens, Greece, most taxis were once share taxis, but after the country joined the EU in 1981 this tradition began to disappear. United Kingdom In 2018, Arriva launched the shared taxi service Arriva Click in Liverpool and at Sittingbourne and Kent Science Park in the United Kingdom. Northern Ireland In some towns in Northern Ireland, notably certain districts in Ballymena, Belfast, Derry and Newry, share taxi services operate using Hackney carriages and are called black taxis. These services developed during The Troubles, as public bus services were often interrupted by street rioting. Taxi collectives are closely linked with political groups – those operating in Catholic areas with Sinn Féin, those in Protestant areas with loyalist paramilitaries and their political wings. Typically, fares approximate those of Translink-operated bus services on the same route. Service frequencies are typically higher than those of bus services, especially at peak times, although limited capacities mean that passengers living close to the termini may find it difficult to find a black taxi with seats available in the rush hour. Switzerland Major providers of share taxis in Switzerland are Telebus Kriens LU, Taxito, myBuxi, Kollibri by Swiss Postal Bus, and Pikmi by VBZ Verkehrsbetriebe Zurich ZH. North America Barbados Most areas of Barbados are served by ZRs, which run in addition to the government-run bus service. Dominican Republic In the Dominican Republic, share taxis are privately owned vehicles running fixed routes with no designated stops.
Foreign Affairs and International Trade Canada advises against traveling in the Dominican Republic's carros públicos, because doing so makes passengers targets for robbery, and because the taxis are known to "disregard traffic laws, often resulting in serious accidents involving injuries and sometimes death." The United States Department of State also warns that using them is hazardous due to pickpockets, and that passengers are sometimes robbed by the drivers themselves. Haiti Tap taps, gaily painted buses or pick-up trucks, and publiques, usually older saloon cars, serve as share taxis in Haiti. Tap taps are privately owned and ornately decorated. They follow fixed routes and do not leave until filled with passengers; many feature wild colors, portraits of famous people, and intricate, hand-cut wooden window covers. Often they are painted with religious names or slogans. Riders can disembark at any point in the journey. Their name refers to "fast motion". The publiques operate on fixed routes and pick up additional passengers all along the way. Foreign Affairs and International Trade Canada advises against using any form of public transport in Haiti, and against tap tap travel especially. The United States Department of State also warns travelers not to use tap taps, "because they are often overloaded, mechanically unsound, and driven unsafely." Saint Lucia In Saint Lucia, waycheh is the name for minibus public transport using Toyota HiAce vans. Canada In Vancouver, British Columbia, Canada, in the 1920s, jitneys competed directly with the streetcar monopoly, operating along the same routes as the streetcars but charging lower fares. In Quebec, share taxis or jitneys are called taxis collectifs (in English "shared taxis") or transport collectif par taxi, literally "public transport by taxi" (which the STM translates as "taxibus"), and are operated by subcontractors to the local transit authorities on fixed routes.
In the case of Montréal, the fare is the same as the local bus fare, but cash is not accepted and transfers are neither issued nor accepted; in the case of the STL, only bus passes are accepted. The Réseau de transport de Longueuil accepts regular RTL tickets and all RTL and some Réseau de transport métropolitain TRAM passes. Guatemala In Guatemala, ruleteros, minibus share taxis, pick up and discharge passengers along major streets. United States In the United States, share taxis are called jitneys or dollar vans. They are typically modified passenger vans, and often operate in urban neighborhoods that are underserved by public mass transit or taxis. Some are licensed and regulated, while others operate illegally. They operate at designated stops or can be hailed from the street. Both common names – dollar van and jitney – originated similarly. Jitney is an archaic term for an American nickel, the common fare for early jitneys. In the late 20th century, when a typical fare was one dollar, the corresponding name came into usage, though "jitney" is still also common. A jitney is generally a small-capacity vehicle that follows a rough service route but can go slightly out of its way to pick up and drop off passengers. In many US cities, such as Pittsburgh and Detroit, the term jitney refers to an unlicensed taxi cab. They are often owned and used by members of inner-city communities, such as African/Caribbean American, Latino, and Asian-American populations. Travelers cite cost and greater frequency as factors in choosing jitneys over larger bus service, whereas safety and comfort are cited as reasons for choosing buses. The first jitneys in the United States operated in Los Angeles, California, in 1914. By 1915, there were 62,000 nationwide. Local regulations, demanded by streetcar companies, forced jitneys out of business in most places. By the end of 1916, only 6,000 jitneys remained. Operators were referred to as "jitney men."
They were so successful that the city government banned them at the request of the streetcar operators. Atlanta Jitneys were popular in Atlanta from 1915 to 1925 as an alternative to streetcars. In Atlanta, jitneys run along Buford Highway. New York City In New York City, dollar vans serve major areas that lack adequate subway service in transit deserts. The vans pick up and drop off anywhere along a route, and payment is made at the end of a trip. During periods when public mass transit is limited or unavailable, dollar vans have been the only feasible method of transportation for many commuters. In such situations, city governments may pass legislation to deter price gouging. Most dollar vans operate illegally, risking citations and fines. Dollar vans and other jitneys mainly serve low-income, immigrant communities in transit deserts, which lack sufficient bus and subway service. New Jersey In New Jersey, 6,500 jitney buses are registered, and are required to have an "Omnibus" license plate, which denotes the vehicle's federal registration. They are also required to undergo inspection by the state MVC mobile inspection team on the vehicles' companies' property twice a year, and are subject to surprise inspections. Drivers of jitneys are required to qualify for a Class B or Class C Commercial Driver's License (CDL), depending on whether the vehicle seats up to 15 or up to 30 passengers. Violations against a driver's CDL must be resolved, and fines paid, before the driver may resume driving, with retesting required if the driver waits longer than three years to resolve the issues. Denser urban areas of northern New Jersey, such as Hudson, Bergen and Passaic counties, are also served by dollar vans, which are commonly known as jitneys, and most of which are run by Spanish Transportation and Community Line, Inc. ("Minutes of the Meeting of the Historic New Bridge Landing Park Commission", March 6, 2008)
Nungessers, along the Anderson Avenue–Bergenline Avenue transit corridor, is a major origination/termination point, as are 42nd Street in Manhattan, Newport Mall and Five Corners in Jersey City, and GWB Plaza in Fort Lee. These interstate vans are under the purview of the federal government. In Atlantic City, the Atlantic City Jitney Association operates a jitney service that travels the main strip of casinos. One of the routes also serves the new cluster of casinos west of Atlantic City proper. Hudson County commuters who prefer NJ Transit buses, for example, cite senior citizen discounts and air conditioning among their reasons, which has led some jitney operators to display bumper stickers advertising air conditioning aboard their vehicles in order to lure passengers. Some who prefer the buses will nonetheless take the jitneys if they arrive before the buses, as they pass bus stops more frequently than the buses and are cheaper. Others choose buses because, they claim, jitney drivers are less safe and are prone to using cell phones and playing loud music while driving. Although Union City jitney driver Samuel Martinez has complained that authorities unfairly target jitneys and not the larger buses, North Bergen Patrol Commander Lt. James Somers has contended that jitneys are less safe and sometimes exhibit higher levels of aggressive driving in order to pick up passengers, which has led to arguments among drivers. Somers also stated that police can only stop a vehicle that appears to have an obvious problem, and that only certified inspectors from the state MVC can stop a vehicle for less apparent, more serious problems. Dollar vans may change ownership over the course of decades, and the mostly immigrant drivers are subject to police searches. Between 1994 and 2015, the TLC issued 418 van licenses, although the vast majority of vans are unlicensed.
Licensed vans cannot pick up at New York City bus stops, and all pick-ups must be predetermined and all passengers logged. Additionally, in the 1980s and 1990s, the predominantly black and mostly immigrant dollar van drivers stated that they were harassed "day and night" by the New York City Police Department (NYPD), with some van drivers having their keys confiscated and thrown away by NYPD officers. Over the course of the 2000s, surprise inspections in Hudson County, New Jersey have been imposed on jitney operators, whose lack of regulation, licensing or regular scheduling has been cited as the cause of numerous fines. A series of such inspections of the vans on Bergenline Avenue in June 2010 resulted in 285 citations for violations, including problems involving brake lights, bald tires, steering wheels, suspensions, exhaust pipes, and emergency doors welded shut. An early July 2010 surprise inspection by the Hudson County Prosecutor's Office, which receives federal funding for regulating jitneys, found 23 of 33 jitneys to be unsafe; these were taken out of service. (Hague, Jim (May 13, 2007). "Erratic driving, lack of licensing: Prosecutor's Office cracks down on commuter vans". The Hudson Reporter.) Claims have also been made that jitneys cause congestion and undermine licensed bus service. Drivers of these vans have also developed a reputation for ignoring traffic laws in the course of competing for fares, picking up and dropping off passengers at random locations, and driving recklessly. On July 30, 2013, an accident occurred at 56th Street and Boulevard East in West New York, New Jersey, in which Angelie Paredes, an 8-month-old North Bergen resident, was killed in her stroller when a full-sized jitney bus belonging to the New York-based Sphinx company toppled a light pole. The driver, Idowu Daramola of Queens, was arrested and charged with a number of offenses, including using a cell phone while driving. Williams, Barbara (August 4, 2013).
"Family to hold public prayer vigil for infant girl killed in West New York jitney bus accident". NorthJersey.com. Officials also stated that he was speeding; however, this was later disputed by an investigator at the scene, who concluded that there was insufficient evidence to determine the speed of the bus. At an August 6 press conference, legislators including U.S. Representative Albio Sires, New Jersey State Senator Nicholas Sacco, State Assembly members Vincent Prieto, Charles Mainor and Angelica Jimenez, West New York Mayor Felix Roque, Weehawken Mayor Richard Turner, Guttenberg Mayor Gerald Drasheff, Freeholder Junior Maldonado and Hudson County Sheriff Frank Schillari noted that problems with jitneys had existed since the 1980s, and called for stricter regulations for drivers and bus companies. This included increased monitoring and enforcement, and heightened participation by the public in identifying poor drivers, as jitneys had been exempt from regulations imposed on buses and other forms of transportation. In February 2014, New Jersey Governor Chris Christie signed Angelie's Law, strengthening regulations on commuter buses. Several companies run vans in Northern New Jersey, often following similar routes to New Jersey Transit buses but at a slightly lower price and greater frequency. The most common routes have an eastern terminus at street level in Manhattan, either near the Port Authority Bus Terminal or the George Washington Bridge Bus Station. Often, several different companies ply the same route. Miami In Miami, jitneys (also known as the Miami Mini Bus) run through various neighborhoods, mostly those stretching between Downtown Miami and The Mall at 163rd Street in North Miami Beach, Florida. Miami has the country's most comprehensive jitney network, owing to Caribbean influence. San Francisco Jitneys ran in San Francisco from late 1914 to January 2016. In the 1910s, there were more than 1,400 jitneys operating in the city.
However, by 2016, declining ridership combined with mounting penalties for traffic citations had made the operations unprofitable. Houston The Houston Wave, Houston's first jitney service in 17 years, operated between 2009 and 2019. It expanded into a network of buses operating within Loop 610 and to all special event venues in Houston. Latin America In Puerto Rico and the Dominican Republic, carros públicos (literally "public cars") are share taxis. Carros públicos follow set routes, with several passengers sharing the ride and others picked up throughout the journey. In Puerto Rico the industry is regulated by the Puerto Rico Public Service Commission. While these cars do travel inter-city, they may not be available for longer, cross-island travel. Stations may exist in cities, and Puerto Rican carros públicos may congregate in specific places around town. Mexico There are some areas in which traditional buses and minibuses cannot operate due to the size restrictions of the streets and overhanging objects. Some of these places are served by share taxis that are regulated by the state or city. The share taxis charge the standard minibus fare. Often, share taxi routes in Mexico are ad hoc arrangements to fill gaps in regular public transportation, and many operate inter-city as well as local routes. In many rural areas, they are the only public transportation. In some cases, truck/taxi combination vehicles have evolved to transport light goods as well as passengers. Heavily used share taxi routes often evolve into regulated microbus public transit routes, as has occurred in Mexico City and in Lima. Central and South America Argentina Colectivos operated as share taxis from the late 1920s until the 1950s in Buenos Aires, Argentina, when they were integrated into the public transportation system. Vehicles still known as colectivos operate throughout the country, but have long been indistinguishable from buses. Chile, Peru and Guatemala Taxis colectivos are also found in Peru, Chile, Guatemala, and Argentina, where they are most commonly referred to simply as colectivos, although in some places they have become essentially standard buses.
Copenhagen Metro
The Copenhagen Metro is a light rapid transit system in Copenhagen, Denmark, serving the municipalities of Copenhagen, Frederiksberg, and Tårnby. The original system opened in October 2002, serving nine stations on two lines: M1 and M2. In 2003 and 2007, the Metro was extended to Vanløse and Copenhagen Airport (Lufthavnen) respectively, adding six and five further stations to the network. In 2019, a wholly underground circle line, the M3, added seventeen stations, bringing the total to 37. In 2024, the M4 line was extended to København Syd, adding five stations in Sydhavn. M5 is set to open in 2035. The driverless light metro supplements the larger S-train rapid transit system and is integrated with local DSB and regional (Øresundståg) trains and municipal Movia buses. Through the city centre and west to Vanløse, M1 and M2 share a common line. To the southeast, the system serves Amager, with the M1 running through the new neighborhood of Ørestad and the M2 serving the eastern neighborhoods and Copenhagen Airport. The M3 is a circle line connecting Copenhagen Central Station with the Vesterbro, Frederiksberg, Nørrebro, Østerbro and Indre By districts. The metro has 44 stations, 30 of which are underground. Service is provided 24/7, making Copenhagen, along with New York City and Chicago, one of the only cities in the world to provide 24/7 rapid transit service throughout its city limits. In 2023, the metro carried 120 million passengers. Overview The system is owned by Metroselskabet (The Metro Company), which is in turn owned by the municipalities of Copenhagen and Frederiksberg and the Ministry of Transport. The M1 and M2 use 34 trains of the Hitachi Rail Italy Driverless Metro class, stationed at the Control and Maintenance Center at Vestamager. The trains are three cars long, and power is supplied by a 750-volt third rail.
The metro trains were originally planned to be four cars long, but trains were reduced to three cars per set as a savings measure. Platforms are – although shorter than originally planned – built to accommodate trains with four cars, and the automatic doors can be modified accordingly should the need arise. Operation of the system is subcontracted to a private company; throughout the system's history, this has been Metro Service A/S. Trains run continually, twenty-four hours a day, with the headway varying from two to four minutes in daytime and longer intervals (up to twenty minutes) during the night. Planning of the Metro started in 1992 as part of the redevelopment plans for Ørestad, with construction starting in 1996, and stage 1, from Nørreport to Vestamager and Lergravsparken, opened in 2002. Stage 2, from Nørreport to Vanløse, opened in 2003, followed by stage 3, from Lergravsparken to Lufthavnen, in 2007. The City Circle Line (Danish: Cityringen) is an entirely underground loop through central Copenhagen and Frederiksberg with 17 stops. It does not share any track with the M1 and M2 lines, but intersects them at Kongens Nytorv and Frederiksberg stations. Before the Cityringen opened, the Metro expected it to cause ridership to almost double from its 2016 level, to 116 million annual passengers. A fourth line, M4, was developed as a separate line as extensions of the Cityringen to Nordhavn and Sydhavn opened. The two-stop line to Nordhavn opened in March 2020, adding an interchange with Nordhavn S-train station. The five-stop extension to Sydhavn opened in 2024. The Sydhavn line terminates at Copenhagen South Station, where it creates a new regional rail transport hub by connecting the metro system to the S-train network, regional trains, and long-distance trains on the current lines and the high-speed Copenhagen–Ringsted railway.
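The headways quoted above are directly tied to fleet size: to dispatch a train every few minutes, an operator needs roughly the round-trip time divided by the headway in trains. A minimal sketch of that arithmetic follows; the 80-minute round-trip figure is an assumed illustrative value, not taken from the source.

```python
import math

def fleet_size(round_trip_minutes: float, headway_minutes: float) -> int:
    """Minimum trains needed to sustain a given headway on a line.

    A train returns to its starting point every `round_trip_minutes`;
    to dispatch one every `headway_minutes`, at least
    round_trip / headway trains must be in service, rounded up.
    """
    return math.ceil(round_trip_minutes / headway_minutes)

# Illustrative only: an assumed 80-minute round trip, at the 4- and
# 2-minute daytime headways and the 20-minute night headway cited above.
print(fleet_size(80, 4))   # 20 trains at a 4-minute headway
print(fleet_size(80, 2))   # 40 trains at a 2-minute headway
print(fleet_size(80, 20))  # 4 trains at the 20-minute night headway
```

This also illustrates why night maintenance windows matter: at long night headways only a handful of trains are needed, which is what lets one track be closed for work while service continues on the other.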
Once these extensions are complete, Metro expects daily ridership to triple from its current level of 200,000 riders per weekday to 600,000 in 2030. History Background The planning of the metro was spurred by the development of the Ørestad area of Copenhagen. The principle of building a rail transit line was approved by the Parliament of Denmark on 24 June 1992, with the Ørestad Act. The responsibility for developing the area, as well as building and operating the metro, was given to the Ørestad Development Corporation, a joint venture between Copenhagen Municipality (45%) and the Ministry of Finance (55%). Initially, three modes were considered: a tramway, a light rail and a rapid transit system. In October 1994, the Development Corporation chose a light rapid transit. The tram solution would have been a street tram, without any major infrastructure investments in the city centre, such as a dedicated right-of-way. Through Ørestad it would have had level crossings, except for a grade-separated crossing with the European Route E20 and the Øresund Line. It would have had a driver and operated at about a 150-second interval, twice the cycle time of the city's traffic lights. Power would have been provided by overhead wires. Stops were to be located at street level. The articulated trams would have had a capacity of 230 passengers. The light rail model would have used the same approach as the tram in Ørestad, but would instead have run through a tunnel in the city centre. The tunnel sections would be shorter, but of larger diameter, because they would have to accommodate overhead wires. The system would have had the same frequency as the tram, but used double trams and would therefore have required larger stations. The metro solution was chosen because it combined the highest average speeds, the highest passenger capacity, the lowest visual and noise impact, and the lowest number of accidents.
Despite requiring the highest investment, it had the highest net present value. The decision to build stage 2, from Nørreport to Vanløse, and stage 3, to the airport, was taken by Parliament on 21 December 1994. Stage 2 involved the establishment of the company Frederiksbergbaneselskabet I/S in February 1995, owned 70% by the Ørestad Development Corporation and 30% by Frederiksberg Municipality. The third stage would be built by Østamagerbaneselskabet I/S, established in September 1995 and owned 55% by the Ørestad Development Corporation and 45% by Copenhagen County. In October 1996, a contract was signed with the Copenhagen Metro Construction Group (COMET) for building the lines (civil works), and with Ansaldo STS for delivery of the technological systems and trains and for operating the system for the first five years. COMET was a single-purpose consortium composed of Astaldi, Bachy, SAE, Ilbau, NCC Rasmussen & Schiøtz Anlæg and Tarmac Construction. Construction of lines M1 and M2 Construction started in November 1996, with the moving of underground pipes and wires around the station areas. In August 1997, work started at the depot, and in September, COMET started the first mainline work. In October and November, the two tunnel boring machines (TBMs), christened Liva and Betty, were delivered. They started boring each barrel of the tunnel from Islands Brygge in February 1998. The same month, the Public Transport Authority gave the necessary permits to operate a driverless metro. The section between Fasanvej and Frederiksberg is a former S-train line, which was last operated as such on 20 June 1998. The first section of tunnel was completed in September 1998, and the TBMs were moved to Havnegade. By December 1998, work had started on the initial nine stations. Plans for M2 were presented to the public in April 1999, with a debate emerging over whether the proposed elevated solution was the best. In May, the first trains were delivered, and trial runs began at the depot.
In December, the tunnels were completed to Strandlodsvej, and the TBMs were moved to Havnegade, where they started boring towards Frederiksberg. From 1 January 2000, the S-train service from Solbjerg to Vanløse was terminated, and work commenced to rebuild the section to metro standard. The last section of tunnel was completed in February 2001. In March 2001, Copenhagen County Council decided to start construction of stage 3. On 6 November 2001, the first train operated through a tunnel section. On 28 November, the laying of tracks along stage 1, and stage 2A from Nørreport to Frederiksberg, was completed. An agreement about financing stage 3 was reached on 12 April. By 22 May, the 18 delivered trains had completed their test runs. The section from Nørreport to Lergravsparken and Vestamager was opened on 19 October 2002. Initially, the system had a 12-minute headway on each of the two services. From 3 December this was reduced to 9 minutes, and from 19 December to 6 minutes. Operation of the system was subcontracted to Ansaldo, which in turn subcontracted it to Metro Service, a subsidiary of Serco. The contract had a duration of five years, with an option for extension for another three. Opening of lines M1 and M2 Trial runs on stage 2A began on 24 February 2003, and the stage opened on 29 May. All changes to bus and train schedules in Copenhagen took place on 25 May, but to allow Queen Margrethe II to open the line, the opening had to be adapted to her calendar. This caused four days without a bus service along the line. Stage 2B, from Frederiksberg to Vanløse, opened on 12 October. Forum Station was nominated for the European Union Prize for Contemporary Architecture in 2005. On 2 December 2005, the final agreement to build the City Circle Line was made between the local and national governments. The price was estimated at 11.5 to 18.3 billion Danish kroner (DKK), of which DKK 5.4 billion would be financed through ticket sales, and the remainder by the state and municipalities.
In 2006, it was announced that the contract with Ansaldo to operate the metro had been prolonged for another three years. However, the subcontract between Ansaldo and Serco Group was not extended, and the contract was instead given to Azienda Trasporti Milanesi in joint venture with Ansaldo; they took over operations from October 2007. The Ørestad Development Corporation was discontinued in 2007, and the ownership of the metro was transferred to Metroselskabet I/S. In January 2007, the city council decided that a branch chamber was to be built during construction at Nørrebro, to allow a future branch line from the City Circle Line towards Brønshøj. The first part of this line was intended to be constructed at the same time as the City Circle Line, to avoid a much higher construction cost and long interruptions of operations later. This did not involve a final decision, only an option for future construction. The Herlev/Brønshøj line was ultimately dropped as the City of Copenhagen withdrew its share of the cost of the Nørrebro branch chamber in its 2009 budget, and the state refused to continue the project. Any branch to the Herlev/Brønshøj region would now require a shutdown of the City Circle Line for an extended period of time. In March 2007, a proposal to establish a station at Valby, where the Carlsberg Group was planning an urban redevelopment, was scrapped. The proposal would have increased construction costs by DKK 900 million and was deemed uneconomical. The increased cost was, in part, due to an extra TBM being needed to complete the project on time. The City Circle Line was passed by parliament on 1 June 2007, with only the Red–Green Alliance voting against. Stage 3 opened on 28 September 2007, from Lergravsparken to the airport. It followed, for the most part, the route of the former Amager Line of the Danish State Railways. With this stage complete, all 34 trains had been delivered for use by the M1 and M2.
However, the line caused a heated debate, with several locals organizing themselves into the Amager Metro Group. The group argued that the line should have been built underground, citing concerns that it would create noise pollution and a physical barrier in Amager. In April 2008, the Copenhagen Metro won the award for the world's best metro at MetroRail 2008. The jury noted the system's high regularity, safety and passenger satisfaction, as well as the efficient transport to the airport. During 2008, the metro experienced 16% passenger growth, to 44 million passengers per year. Several parties agreed in September 2008 not to fund a northwest expansion of the metro. Initially, the system operated trains from 01:00 to 05:00 only on Thursdays to Saturdays, but, starting on 19 March 2009, night service was extended to the rest of the week. This caused a logistical challenge, because Metro Service used the nights for maintenance. The routes were therefore set up in such a way that the system could be operated on only a single track, leaving the other free for work. In May 2009, six companies were pre-qualified to bid for the public service obligation to operate the metro. These were Serco-NedRailways, Ansaldo STS, Arriva, S-Bahn Hamburg, Keolis and DSB Metro – a joint venture between DSB and RATP. The process was delayed because of a procedural error by Metroselskabet, which had failed to pre-qualify DSB Metro. Construction of lines M3 (City Circle Line) and M4 (the Harbour Line) An expansion of the metro, the City Circle Line, opened on 29 September 2019. Independent of the existing system, it circles the city centre and connects the areas of Østerbro, Nørrebro and Vesterbro to Frederiksberg and Indre By. The line runs entirely in tunnel. The circle has 17 stations, two of which are interchanges with both the M1 and M2 lines, and three of which interchange with Copenhagen S-train stations. It takes 25 minutes to complete a full lap in either direction.
Archaeological and geological surveys started in 2007, preferred bidders were announced in November 2010, and contracts were signed in 2011. Preparations began with the moving of utilities in 2010, and construction of work sites and stations began in 2011. Drilling of tunnels began in 2013. On 7 January 2011, the new project, called Cityringen, started with the signing of new contracts by Metroselskabet: with AnsaldoBreda and Ansaldo STS (Finmeccanica Group) for the supply of trains and control systems, and with an Italian joint venture led by Salin Construttori (about 60%) and Tecnimont (about 40%), with Seli as third partner, for the construction. In July 2013, Natur- og Miljøklagenævnet, the environmental appeals board, ruled that the city had been wrong to grant Metroselskabet permission for 24-hour work days and noise levels of up to 78 dB at the Marmorkirken site. This forced construction to stop work at 6 PM until a final ruling was made, thus delaying the completion date. The City Circle Line is serviced by lines M3 and M4. The M3 opened on 29 September 2019, and its trains operate around the entire circle in either direction. The M3 has transfers to the M1 and M2 at Frederiksberg and Kongens Nytorv. The line is estimated to carry 240,000 daily passengers, bringing the metro's total daily ridership to 460,000. The M4 opened on 28 March 2020, when two additional stations opened in the Nordhavn district. This line runs from Copenhagen Central Station (København H) via Østerport to Orientkaj station in Nordhavn, sharing six stations with the M3 and featuring two additional Nordhavn stations. The M4 line interchanges with the M1 and M2 at Kongens Nytorv. An extension to the Sydhavn district opened in June 2024, served by the M4. This extension relocated the M4's southern terminus from Copenhagen Central Station to Ny Ellebjerg.
Evolution of plans A northwestern expansion of the City Circle Line was planned, under which the M4 would have diverged at Nørrebro and run to the suburbs of Brønshøj and Gladsaxe. This project was abandoned when the interchange chamber between any such line and the City Circle Line was scrapped as part of the City of Copenhagen's 2009 budget. In subsequent plans, the northern extension of the M4 was instead relocated as a Nordhavn branch, which connects with the City Circle Line at Østerport. The Nordhavn extension, with two stations, opened on 28 March 2020. The southern extension of the M4 runs from Copenhagen Central Station through Sydhavn to Ny Ellebjerg, where the M4 links up with the S-train and regional train system. The Danish Transport Authority (Trafikstyrelsen) has suggested converting the F-line of the S-train network to metro standard as an M5 line. If the M5 line becomes reality, it will connect with existing lines at Flintholm station (interchange with M1 and M2), Nørrebro station (interchange with M3), and Ny Ellebjerg station (interchange with M4). The fourth line, M4 or the Harbour Line, shares track with the M3 between Copenhagen Central Station and Østerport station (six stations shared). The extension of the M4 serving the southern (Sydhavn) harbour district added five stops, with the line's southern terminus moving from Copenhagen Central Station to Ny Ellebjerg; it opened in 2024. The completed M4 between Orientkaj and Ny Ellebjerg features 13 stations. The northern extension's stations, Nordhavn and Orientkaj, both began service on 28 March 2020. As of 2019, the M1 and M2 had a total of 22 stations. With the opening of the City Circle Line, the metro system features three lines with a total of 37 stations.
Upon completion of both extensions of the M4, the system will feature four lines with 44 stations. Eight of these will be interchanges with the S-train. Future lines discussed Many new lines have been discussed. Initially, Line M4 was supposed to supplement the circular M3 on the eastern side of the Inner City between Nørrebro station and Copenhagen Central Station. At this time, an extension was suggested from Nørrebro to the northwestern suburbs with a terminus at Husum station. This was abandoned when the City of Copenhagen rejected funding the interchange chamber under Nørrebro station necessary for this extension. Instead, the city preferred the M4 to branch at Østerport station to facilitate development of the Nordhavn harbour area. The "M5" label appears to have been reserved for a potential future conversion of Line F of the Copenhagen S-train to metro standard. In 2011, the City of Copenhagen suggested two additional lines, M6 and M7, the M6 linking the northwestern suburbs and central Amager and the M7 forming a second ring line further east than the M3; a western extension of the M1 or M2 was also suggested. In 2017, the city of Copenhagen suggested a new M6 line connecting Brønshøj and Refshaleøen via Copenhagen Central Station. In 2018, the government and the city agreed on plans to construct an artificial island, Lynetteholmen, north of Refshaleøen, and the city included its plans to link Copenhagen Central Station and Refshaleøen in this discussion. As of January 2018, no further development was planned after the construction of the Harbour Line, Line M4 between Ny Ellebjerg station and Orientkaj in the Nordhavn area, except for a few more stations northeast of Orientkaj. 
In September 2011, the city of Copenhagen and neighbouring Malmö in Sweden announced that they were seeking European Union funding to study a potential metro line under the Øresund to the neighbourhood of Malmö Central Station, providing faster trips and additional capacity beyond that of the existing Øresund Bridge. The study, for which the EU granted funding in the following December, will consider both a simple shuttle between the two stations and a continuous line integrated with the local transport networks on each side, and they anticipate a travel time of 15 minutes between the two city centers. Work on the study is expected to continue until 2020. The planned M5 line towards Refshaleøen and Lynetteholm is supposed to open in 2035. It will encompass 10 stations, of which 5 will be new. Route The metro consists of four lines, M1, M2, M3 and M4, with a planned M5 line expected to be operational in 2035. M1 and M2 share a common section from Vanløse to Christianshavn, where they split along two lines: M1 follows the Ørestad Line to Vestamager, while M2 follows the Østamager Line to the airport. The metro consists of a total route length of , and 22 stations, 9 of which are on the section shared by both lines. M1 is long and serves 15 stations, while M2 is long and serves 16 stations. About of the lines and 9 stations are in tunnel, located at below ground level. The remaining sections are on embankments, viaducts or at ground level. The section from Vanløse to Frederiksberg follows the Frederiksberg Line, a former S-train line which runs on an embankment. From Fasanvej station, the line runs underground, and continues this way through the city center. After Christianshavn, the line splits in two. M1 reaches ground level at Islands Brygge, and continues on a viaduct through the Vestamager area. M2 continues in tunnel until after Lergravsparken, where it starts to follow the former Amager Line. 
The tunnels consist of two parallel tubes that run through stable limestone at about depth, but are elevated slightly at each station. There are emergency exits every , so that no train is ever further than from an exit. The outer tunnel diameter is , while the inner diameter is . The tunnels were excavated by the cut-and-cover method, the New Austrian Tunnelling method and by tunnel boring machines (TBM). Along the elevated sections, the tracks run on alternating sections of separate reinforced concrete viaducts and joint embankments made of reinforced earth. M3 is a 15.5-kilometre (9.63 mi) looping line which serves 17 stations, including Frederiksberg and Kongens Nytorv, which also serve the M1 and M2. A full trip around the line takes approximately 29 minutes. The M4 line serves 13 stations, 6 of which are shared with the M3 line. It branches off the M3 line at Østerport in the north and at København H in the south. The southern extension is the newest in the system and opened on 22 June 2024. Service The system operates 24/7 with a varying headway throughout the day. During rush hour (07:00–10:00 and 15:00–18:00), there is a two-minute headway on the common section and a four-minute headway on the single-service sections. On Thursday through Saturday nights (00:00–05:00) on the M1 and M2 lines there is a seven-to-eight-minute headway on the common section and a fifteen-minute headway on the single-service sections; on other nights it is twenty minutes on all sections of the metro. At all other times, there is a three-minute headway on the common section and a six-minute headway on the single-service sections. Travel time from Nørreport to Vestamager on M1 is 14 minutes, to the airport on M2 is 15 minutes, and to Vanløse on M1 and M2 is 9 minutes. During rush hour (07:00–10:00 and 15:00–18:00) on the M3 (Cityringen), there is a three-minute headway. 
On Thursday through Saturday nights (00:00–05:00) on the M3 there is a six-minute headway (one direction), while on weekends it is twelve minutes (two directions). At all other times, there is a four-to-five-minute headway. Travel time around the Cityringen on the M3 is 29 minutes. On Thursday through Saturday nights (00:00–05:00) on the M4 there is a twelve-minute headway between Østerport and Orientkaj stations, while on weekends there is a ten-minute headway between København H (Central Station) and Orientkaj station. At all other times, there is a six-to-ten-minute headway. Travel time on the M4 is 12 minutes (only 3 minutes late at night between Østerport and Orientkaj stations). In 2009, the metro transported 50 million passengers, or 137,000 per day; by 2013, the metro's ridership had increased to 55 million. The metro operates with a proof-of-payment system, so riders must have a valid ticket before entering the station platforms. The system is divided into zones, and the fare structure is integrated with other public transport in Copenhagen, including the buses managed by Movia, local DSB trains and the S-train. The system lies within four different zones. Ticket machines are available at all stations, where special tickets for dogs and bicycles can also be purchased. A two-zone ticket costs DKK 24 and a three-zone ticket DKK 36, and tickets are valid for 60 minutes. Holders of the Copenhagen Card museum pass ride free of charge, as do up to two children under twelve years of age accompanied by an adult. As of 2012, the metro has fully adopted the national electronic fare card system Rejsekort. Outside the central zones, the outer zones are divided into sub-zones, and ticketing can be confusing for visitors familiar with how zones work in London or Berlin: passengers must specify on their ticket which sub-zone they wish to travel to. The system is integrated with other public transport in Copenhagen. 
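The zonal fare rules above can be sketched as a small lookup table. Only the two-zone (DKK 24) and three-zone (DKK 36) prices and the 60-minute validity come from the text; the function name and the handling of unlisted zone counts are illustrative assumptions:

```python
# Illustrative sketch of the zonal fare rules described above.
# Only the two-zone (DKK 24) and three-zone (DKK 36) prices are
# given in the text; other zone counts are not covered here.

FARES_DKK = {2: 24, 3: 36}   # price per single ticket, by zone count
VALIDITY_MINUTES = 60        # tickets are valid for 60 minutes

def ticket_price(zones: int) -> int:
    """Return the single-ticket price in DKK for a trip spanning `zones` zones."""
    try:
        return FARES_DKK[zones]
    except KeyError:
        raise ValueError(f"no price listed in the text for {zones} zones")

print(ticket_price(2))  # 24
print(ticket_price(3))  # 36
```

A real fare engine would also need the sub-zone rules mentioned above; this sketch only captures the flat per-zone-count pricing.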
There is transfer to the S-train at Vanløse, Flintholm and Nørreport, to DSB's local trains at Nørreport, Ørestad and Lufthavnen, and to Copenhagen Airport at Lufthavnen. There are transfers to Movia bus services at all but four stations. The system is owned by Metroselskabet, which is also responsible for building the City Circle Line. The company is owned by Copenhagen Municipality (50.0%), the Ministry of Transport (41.7%) and Frederiksberg Municipality (8.3%). Construction and operation are subcontracted through public tenders, while consultants are used for planning. The contract to operate the system was made with Ansaldo STS, which has subcontracted it to Metro Service, a joint venture between Ansaldo STS and Azienda Trasporti Milanesi (ATM), the public transport company of the city of Milan, Italy. The company has 285 employees, the majority of whom work as stewards. Stations There are 37 stations on the network. Of the initial 22 stations on lines M1 and M2, nine are underground and six of these are deep-level. They were all designed by KHR Arkitekter, who created open stations with daylight. Stations have an information column in front, marked with a large 'M' and featuring information screens. All stations have a vestibule at below ground level, which has ticket and local information, ticket machines and validators. The stations are built with island platforms and are fully accessible for people with disabilities. The deep-level stations are built as rectangular, open boxes long, wide and deep. The platforms are located below the surface. Access to the surface is via escalators and elevators. The design allows the stations to be located below streets and squares, so they could be built without expropriation. Access to the track is blocked by platform screen doors. 
The underground stations on M1 and M2 were built as cut-and-cover from the top down (except Christianshavn, which was excavated as a large hole with the station built bottom-up), and the first part of construction was building a water-tight wall on all sides. There are glass pyramids on the roofs of the stations permitting daylight to enter. Inside the pyramids, there are prisms reflecting and splitting the light, sometimes resulting in rainbows on the walls. The light in the stations is automatically regulated to make best use of the daylight and maintain a constant level of illumination at all times. The elevated stations are built in glass, concrete and steel to minimize their visual impact. Outside, there is parking for bicycles, cars, buses and taxis. Access to the trains is blocked by platform screen doors. Trains The system uses 64 driverless electric multiple units built by Hitachi Rail Italy and designed by Giugiaro Design of Italy, called the Hitachi Rail Italy Driverless Metro. The trains are long, wide, and weigh . Each train consists of three articulated cars with a total of six automated, wide doors, holding up to 96 seated and 204 standing passengers (300 in total). There are four large 'flex areas' in each train with folding seats, providing space for wheelchairs, strollers and bicycles. Each car is equipped with two three-phase asynchronous motors, giving each train a power output of . In each car, the two motors are fed by the car's own IGBT motor drive. They transform the 750-volt direct current collected from the third-rail shoe to the three-phase alternating current used in the motors. The trains' top speed is , while the average service speed is , with an acceleration and deceleration capacity of along the standard-gauge track. 
Operations The entire metro system and the trains are run by a fully automated computer system, located at the two Control and Maintenance Centers, south of Vestamager Station (M1 + M2) and at Sydhavnen (M3 + M4). The automatic train control (ATC) consists of three subsystems: automatic train protection (ATP), automatic train operation (ATO) and automatic train supervision (ATS). The ATP is responsible for supervising the trains' speed, ensuring that doors are closed before departure and that switches are correctly set. The system uses fixed-block signaling, except around stations, where moving-block signaling is used. The ATO is the autopilot that runs the trains on a predefined schedule and ensures that the trains stop at the stations and open the doors. The ATS keeps track of all the components in the network, including the rails and all of the trains in the system, and displays a live schematic at the control center. The ATC is designed so that the ATP is the only safety-critical system, as it halts the trains if the other systems fail. The safety and signaling specifications are based on the German BOStrab, controlled by TÜV Rheinland under supervision of the Public Transport Authority. Other aspects of the system, such as power supply, ventilation, security alarms, cameras and pumps, are controlled by a system called "control, regulating and surveillance". Vestamager CMC The Control and Maintenance Center is a facility located at the south end of M1. It consists of a storage area for trains not in use, a maintenance area and the control facility. Trains operate automatically through the system, and can also be washed automatically on the exterior. The facility has of track, of which is a test track for use after maintenance. The most common repair is wheel grinding; more complicated repairs are made by replacing entire components that are sent to the manufacturer. By keeping components in reserve, trains can have shorter maintenance times. 
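The fail-safe principle of the ATC described earlier (the ATP as the only safety-critical subsystem, halting trains when the others fail) can be modelled as a simple rule. All class and function names below are invented for this sketch and do not reflect the real ATC software interfaces:

```python
# Toy model of the fail-safe principle described above: the ATP is the
# only safety-critical subsystem, so trains are halted whenever the
# ATO (autopilot) or ATS (supervision) report a failure, or whenever
# the ATP's own preconditions (doors closed, switches set) are unmet.
# All names here are illustrative, not the real ATC interfaces.

from dataclasses import dataclass

@dataclass
class SubsystemStatus:
    ato_ok: bool  # automatic train operation (autopilot) healthy
    ats_ok: bool  # automatic train supervision healthy

def atp_allows_movement(status: SubsystemStatus,
                        doors_closed: bool,
                        switches_set: bool) -> bool:
    """ATP permits movement only if doors are closed, switches are set,
    and the non-safety-critical subsystems report healthy."""
    if not (doors_closed and switches_set):
        return False                        # ATP's own preconditions
    return status.ato_ok and status.ats_ok  # halt on any other failure

# A healthy system may move; any subsystem failure halts the trains.
print(atp_allows_movement(SubsystemStatus(True, True), True, True))   # True
print(atp_allows_movement(SubsystemStatus(True, False), True, True))  # False
```

The point of the sketch is only the asymmetry: the ATO and ATS can fail without endangering anyone, because the ATP's default answer is "stop".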
The depot also has several maintenance trains, including diesel locomotives that are able to retrieve broken down or disabled trains. At any time, there are four or five people working at the control center: two monitor the ATC system, one monitors passenger information, and one is responsible for secondary systems, such as power supply. In case of technical problems, there is always a team of linepeople that can be dispatched to perform repairs. Although the trains are not equipped with drivers, there are stewards at stations and on most trains that help passengers, perform ticket controls and assist in emergency situations.
Technology
Scandinavia
null
540448
https://en.wikipedia.org/wiki/Product%20%28chemistry%29
Product (chemistry)
Products are the species formed from chemical reactions. During a chemical reaction, reactants are transformed into products after passing through a high-energy transition state. This process results in the consumption of the reactants. A reaction can be spontaneous or mediated by catalysts, which lower the energy of the transition state, and by solvents, which provide the chemical environment necessary for the reaction to take place. When represented in chemical equations, products are by convention drawn on the right-hand side, even in the case of reversible reactions. The properties of products, such as their energies, help determine several characteristics of a chemical reaction, such as whether the reaction is exergonic or endergonic. Additionally, the properties of a product can make it easier to extract and purify following a chemical reaction, especially if the product has a different state of matter than the reactants. Spontaneous reaction: R → P, where R is reactant and P is product. Catalysed reaction: R + C → P + C, where R is reactant, P is product and C is catalyst. Much of chemistry research is focused on the synthesis and characterization of beneficial products, as well as the detection and removal of undesirable products. Synthetic chemists can be subdivided into research chemists, who design new chemicals and pioneer new methods for synthesizing chemicals, and process chemists, who scale up chemical production and make it safer, more environmentally sustainable, and more efficient. Other fields include natural product chemistry, whose practitioners isolate products created by living organisms and then characterize and study these products. Determination of reaction The products of a chemical reaction influence several aspects of the reaction. If the products are lower in energy than the reactants, then the reaction will give off excess energy, making it an exergonic reaction. Such reactions are thermodynamically favorable and tend to happen on their own. 
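The energetic criterion just stated reduces to the sign of the energy difference between products and reactants. A minimal sketch, with made-up energy values (the classification rule follows the text; the numbers are illustrative):

```python
# Minimal sketch of the energetic criterion described above:
# exergonic if the products are lower in (free) energy than the
# reactants, endergonic if higher. Energy values are made up.

def classify_reaction(reactant_energy: float, product_energy: float) -> str:
    """Classify by the sign of dG = G(products) - G(reactants)."""
    dG = product_energy - reactant_energy
    if dG < 0:
        return "exergonic"    # releases energy, thermodynamically favorable
    elif dG > 0:
        return "endergonic"   # requires an energy input to proceed
    return "equilibrium"

print(classify_reaction(reactant_energy=100.0, product_energy=60.0))  # exergonic
print(classify_reaction(reactant_energy=60.0, product_energy=100.0))  # endergonic
```

Note that, as the surrounding text explains, a negative dG says nothing about rate: a thermodynamically favorable reaction can still be kinetically hindered.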
If the kinetic barrier of the reaction is high enough, however, then the reaction may occur too slowly to be observed, or not occur at all. This is the case with the conversion of diamond to lower-energy graphite at atmospheric pressure: in such a reaction, diamond is considered metastable and will not be observed converting into graphite. If the products are higher in chemical energy than the reactants, then the reaction will require energy to be performed and is therefore an endergonic reaction. Additionally, if the product is less stable than a reactant, then Leffler's assumption holds that the transition state will more closely resemble the product than the reactant. Sometimes the product will differ significantly enough from the reactant that it is easily purified following the reaction, such as when a product is insoluble and precipitates out of solution while the reactants remain dissolved. History Ever since the mid-nineteenth century, chemists have been increasingly preoccupied with synthesizing chemical products. Disciplines focused on the isolation and characterization of products, such as natural products chemistry, remain important to the field, and the combination of their contributions alongside those of synthetic chemists has resulted in much of the framework through which chemistry is understood today. Much of synthetic chemistry is concerned with the synthesis of new chemicals, as occurs in the design and creation of new drugs, as well as the discovery of new synthetic techniques. Beginning in the early 2000s, process chemistry began emerging as a distinct field of synthetic chemistry focused on scaling up chemical synthesis to industrial levels, as well as finding ways to make these processes more efficient, safer, and environmentally responsible. Biochemistry In biochemistry, enzymes act as biological catalysts to convert substrate to product. For example, the products of the enzyme lactase are galactose and glucose, which are produced from the substrate lactose. 
S + E → P + E, where S is substrate, P is product and E is enzyme. Product promiscuity Some enzymes display a form of promiscuity whereby they convert a single substrate into multiple different products. This occurs when the reaction proceeds via a high-energy transition state that can be resolved into a variety of different chemical products. Product inhibition Some enzymes are inhibited when the product of their reaction binds to the enzyme and reduces its activity. This can be important in the regulation of metabolism as a form of negative feedback controlling metabolic pathways. Product inhibition is also an important topic in biotechnology, as overcoming this effect can increase the yield of a product.
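The rate-reducing effect of product inhibition can be illustrated with the standard competitive-inhibition form of the Michaelis–Menten equation. This specific rate law is a textbook model, not taken from the article, and all parameter values below are arbitrary assumptions:

```python
# Illustration of product inhibition: as product P accumulates and
# binds the enzyme, the observed rate falls. Uses the standard
# competitive-inhibition Michaelis-Menten form; all parameter values
# are arbitrary, textbook-style assumptions.

def rate(S: float, P: float, Vmax: float = 10.0,
         Km: float = 1.0, Ki: float = 0.5) -> float:
    """v = Vmax*S / (Km*(1 + P/Ki) + S), competitive product inhibition.
    S, P: substrate and product concentrations; Ki: inhibition constant."""
    return Vmax * S / (Km * (1.0 + P / Ki) + S)

# Rate at fixed substrate concentration, with increasing product:
for P in (0.0, 0.5, 1.0, 2.0):
    print(f"P={P:.1f}  v={rate(S=2.0, P=P):.2f}")
```

The monotonic fall in v as P grows is the negative-feedback behaviour the text describes; raising Ki (weaker product binding) flattens the effect, which is one way biotechnology processes "overcome" product inhibition.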
Physical sciences
Reaction
Chemistry
540869
https://en.wikipedia.org/wiki/Lindane
Lindane
Lindane, also known as gamma-hexachlorocyclohexane (γ-HCH), gammaxene, Gammallin and benzene hexachloride (BHC), is an organochlorine chemical and an isomer of hexachlorocyclohexane that has been used both as an agricultural insecticide and as a pharmaceutical treatment for lice and scabies. Lindane is a neurotoxin that interferes with GABA neurotransmitter function by interacting with the GABAA receptor–chloride channel complex at the picrotoxin binding site. In humans, lindane affects the nervous system, liver, and kidneys, and may be a carcinogen. Whether lindane is an endocrine disruptor is unclear. The World Health Organization classifies lindane as "moderately hazardous", and its international trade is restricted and regulated under the Rotterdam Convention on Prior Informed Consent. In 2009, the production and agricultural use of lindane was banned under the Stockholm Convention on persistent organic pollutants. A specific exemption to that ban allows it to continue to be used as a second-line pharmaceutical treatment for lice and scabies. History and use The chemical was originally synthesized in 1825 by Michael Faraday. It is named after the Dutch chemist Teunis van der Linden (1884–1965), the first to isolate and describe γ-hexachlorocyclohexane, in 1912. The fact that mixtures of isomers of hexachlorocyclohexane have insecticidal activity is a case of multiple discovery. Work in the 1930s at the Jealott's Hill laboratories of Imperial Chemical Industries Ltd (ICI) led in 1942 to the realization that the γ isomer was the key active component in the mixture which had hitherto been tested. Development work in the UK was accelerated because at that time in World War II imports of derris containing the insecticide rotenone were restricted owing to the Japanese occupation of Malaya, and alternatives were urgently needed. 
In trials in 1943, it was found that a five-fold increase in the yield of oats and wheat was achieved using a dust formulation of the available material, owing to its efficacy against wireworm pests. By the end of 1945, γ-hexachlorocyclohexane of 98% purity became available, and ICI commercialised a seed treatment launched in 1949 as Mergamma A, containing 1% mercury and 20% lindane. Subsequently, lindane has been used to treat food crops and forestry products, as a seed or soil treatment, and to treat livestock and pets. It was used as a household pesticide as the active pesticide ingredient of an insecticidal floor wax product called "Freewax". It has also been used as a pharmaceutical treatment for lice and scabies, formulated as a shampoo or lotion. Between 1950 and 2000, an estimated 600,000 tonnes of lindane were produced globally, the vast majority of which was used in agriculture. It has been manufactured by several countries, including the United States, China, Brazil, and several European countries. By November 2006, the use of lindane had been banned in 52 countries and restricted in 33 others. Seventeen countries, including the US and Canada, allowed either limited agricultural or pharmaceutical use. In 2009, an international ban on the use of lindane in agriculture was implemented under the Stockholm Convention on Persistent Organic Pollutants. A specific exemption allows it to continue to be used in second-line treatments for head lice and scabies for five more years. The production of the lindane isomers α- and β-hexachlorocyclohexane was also banned. Although the US has not ratified the convention, it has similarly banned agricultural uses while still allowing its use as a second-line lice and scabies treatment. United States In the US, lindane pesticide products were regulated by the U.S. Environmental Protection Agency (EPA), while lindane medications are regulated by the Food and Drug Administration (FDA). 
It was registered as an agricultural insecticide in the 1940s, and as a pharmaceutical in 1951. The EPA gradually began restricting its agricultural use in the 1970s due to concerns over its effects on human health and the environment. By 2002, its use was limited to seed treatments for just six crops, and in 2007, these last uses were cancelled. Pharmaceutical uses Lindane medications continue to be available in the US, though since 1995, they have been designated "second-line" treatments, meaning they should be prescribed only when other "first-line" treatments have failed or cannot be used. In December 2007, the FDA sent a Warning Letter to Morton Grove Pharmaceuticals, the sole U.S. manufacturer of lindane products, requesting that the company correct misleading information on two of its lindane websites. The letter said, in part, that the materials "are misleading in that they omit and/or minimize the most serious and important risk information associated with the use of Lindane Shampoo, particularly in pediatric patients; include a misleading dosing claim; and overstate the efficacy of Lindane Shampoo." California banned the pharmaceutical lindane, effective 2002, and the Michigan House of Representatives passed a bill in 2009 to restrict its use to doctors' offices. A subsequent analysis of the California ban concluded that a majority of pediatricians had not experienced problems treating lice or scabies since that ban took effect. The study also documented a marked decrease in lindane wastewater contamination and a dramatic decline in lindane poisoning incidents reported to poison control centers. The authors concluded, "The California experience suggests elimination of pharmaceutical lindane produced environmental benefits, was associated with a reduction in reported unintentional exposures and did not adversely affect head lice and scabies treatment." 
The Persistent Organic Pollutants Review Committee of the Stockholm Convention on Persistent Organic Pollutants considers the use of lindane in agriculture largely redundant, given the availability of less toxic and less persistent pesticides. In the case of pharmaceutical use, the committee noted, "alternatives for pharmaceutical uses have often failed for scabies and lice treatment and the number of available alternative products for this use is scarce. For this particular case, a reasonable alternative would be to use lindane as a second-line treatment when other treatments fail, while potential new treatments are assessed." Other uses Pest repellent Lindane is a bird repellent. Rudd and Genelly (1954) noticed that bird pests, specifically pheasants and blackbirds around Davis, CA, US, seemed uninterested in treated seeds. They tested its repellent effect on pheasants and found it effective, speculating that it may be usable as a general bird repellent. Synthesis Lindane is not known to occur naturally. Hexachlorocyclohexane (HCH) was discovered in 1825. Its insecticidal properties were not known until the 1940s. Technical-grade HCH, a mixture of isomers, is synthesized from benzene and chlorine in the presence of ultraviolet light. The resulting product mixture comprises 65–70% α-HCH, 7–10% β-HCH, 14–15% lindane (γ-HCH), approximately 7% δ-HCH, 1–2% ε-HCH, and 1–2% other components. It can also be prepared by exposing a mixture of benzene and chlorine to alpha radiation. Human health effects The EPA and WHO both classify lindane as "moderately" acutely toxic. It has an oral LD50 of 88 mg/kg in rats and a dermal LD50 of 1000 mg/kg. Most of the adverse human health effects reported for lindane have been related to agricultural uses and chronic, occupational exposure of seed-treatment workers. Exposure to large amounts of lindane can harm the nervous system, producing a range of symptoms from headache and dizziness to seizures, convulsions, and, more rarely, death. 
Lindane has not been shown to affect the immune system in humans, and it is not considered to be genotoxic. Prenatal exposure to β-HCH, an isomer of lindane and production byproduct, has been associated with altered thyroid hormone levels and could affect brain development. The Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set occupational exposure limits (a permissible exposure limit and a recommended exposure limit, respectively) for lindane at 0.5 mg/m3 as an eight-hour time-weighted average, with a skin designation. People can be exposed to lindane in the workplace by inhaling it, absorbing it through their skin, swallowing it, and eye contact. At levels of 50 mg/m3, lindane is immediately dangerous to life and health. It is classified as an extremely hazardous substance in the United States as defined in section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. Cancer risk Based primarily on evidence from animal studies, most evaluations of lindane have concluded that it may possibly cause cancer. In 2015, the International Agency for Research on Cancer classified lindane as a known human carcinogen, while in 2001 the EPA concluded there was "suggestive evidence of carcinogenicity, but not sufficient to assess human carcinogenic potential." The U.S. Department of Health and Human Services determined that all isomers of hexachlorocyclohexane, including lindane, "may reasonably be anticipated to cause cancer in humans," and in 1999, the EPA characterized the evidence of carcinogenicity for lindane as "suggestive ... of carcinogenicity, but not sufficient to assess human carcinogenic potential." Lindane and its isomers have also been on California's Proposition 65 list of known carcinogens since 1989. 
In contrast, the World Health Organization concluded in 2004 that "lindane is not likely to pose a carcinogenic risk to humans." India's BIS considers lindane a "confirmed carcinogen". Adverse reactions A variety of adverse reactions to lindane pharmaceuticals have been reported, ranging from skin irritation to seizures and, in rare instances, death. The most common side effects are burning sensations, itching, dryness, and rash. While serious effects are rare and have most often resulted from misuse, adverse reactions have also occurred when the products were used properly. The FDA therefore requires a so-called black box warning on lindane products, which explains the risks of lindane products and their proper use. The black box warning emphasizes that lindane should not be used on premature infants or individuals with known uncontrolled seizure disorders, and should be used with caution in infants, children, the elderly, individuals with other skin conditions (e.g., dermatitis, psoriasis) and people who weigh less than , as they may be at risk of serious neurotoxicity. Environmental contamination Lindane is a persistent organic pollutant: it is relatively long-lived in the environment, it is transported long distances by natural processes like global distillation, and it can bioaccumulate in food chains, though it is rapidly eliminated when exposure is discontinued. The production and agricultural use of lindane are the primary causes of environmental contamination, and levels of lindane in the environment have been decreasing in the U.S., consistent with decreasing agricultural usage patterns. The production of lindane generates large amounts of waste hexachlorocyclohexane isomers, and "every ton of lindane manufactured produces about nine tons of toxic waste." Modern manufacturing standards for lindane involve the treatment and conversion of waste isomers to less toxic molecules, a process known as "cracking". 
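Combining two figures quoted in this article (roughly 600,000 tonnes of lindane produced between 1950 and 2000, and about nine tonnes of waste isomers per tonne of lindane) gives a rough order-of-magnitude estimate of the historical waste burden. This back-of-envelope arithmetic is illustrative only; the actual waste depends on the era and process used:

```python
# Back-of-envelope estimate combining two figures quoted above:
# ~600,000 tonnes of lindane produced globally (1950-2000), and
# ~9 tonnes of waste HCH isomers per tonne of lindane. Illustrative
# only; real waste volumes varied by era and manufacturing process.

lindane_produced_tonnes = 600_000
waste_ratio = 9  # tonnes of waste isomers per tonne of lindane

waste_tonnes = lindane_produced_tonnes * waste_ratio
print(f"~{waste_tonnes / 1e6:.1f} million tonnes of waste isomers")  # ~5.4
```

The resulting figure, on the order of several million tonnes, is consistent with the open-heap stockpiling problem described in the Isomers section below.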
When lindane is used in agriculture, an estimated 12–30% of it volatilizes into the atmosphere, where it is subject to long-range transport and can be deposited by rainfall. Lindane in soil can leach to surface and even ground water, and can bioaccumulate in the food chain. However, biotransformation and elimination are relatively rapid when exposure is discontinued. Most exposure of the general population to lindane has resulted from agricultural uses and the intake of foods, such as produce, meats, and milk, produced from treated agricultural commodities. Human exposure has decreased significantly since the cancellation of agricultural uses in 2006. Even so, the CDC published in 2005 its Third National Report on Human Exposures to Environmental Chemicals, which found no detectable amounts of lindane in human blood taken from a random sampling of about 5,000 people in the US as part of the NHANES study (National Health and Nutrition Examination Survey). The lack of detection of lindane in this large human "biomonitoring" study likely reflects the increasingly limited agricultural uses of lindane over the last two decades. The cancellation of agricultural uses in the United States will further reduce the amount of lindane introduced into the environment by more than 99%. Over time, lindane is broken down in soil, sediment, and water into less harmful substances by algae, fungi, and bacteria; however, the process is relatively slow and dependent on ambient environmental conditions. Lindane residues in honey and beeswax are reported to be the highest of any historical or current pesticide and to continue to pose a threat to honeybee health. The ecological impact of lindane's environmental persistence continues to be debated. The US EPA determined in 2002 that the agency does not believe that lindane contaminates drinking water in excess of levels considered safe. U.S. Geological Survey teams concluded the same in 1999 and 2000. 
With regard to lindane medications, the EPA conducted "down-the-drain" estimates of the amount of lindane reaching public water supplies and concluded that lindane levels from pharmaceutical sources were "extremely low" and not of concern. Note that the EPA has set the maximum contaminant level or "MCL" for lindane allowed in public water supplies and considered safe for drinking at 200 parts per trillion (ppt). By comparison, the state of California imposes a lower MCL for lindane of 19 ppt. However, the California standard is based on a dated 1988 national water criterion that was subsequently revised by the EPA in 2003 to 980 ppt. The EPA stated that the change resulted from "significant scientific advances made in the last two decades particularly in the areas of cancer and noncancer risk assessments." While the EPA considered raising the MCL standard for lindane to 980 ppt at that time, the change was never implemented because states had little difficulty in maintaining lindane levels below the 200 ppt MCL limit already in place. Today, the legally enforceable MCL standard for lindane is 200 ppt, while the national water criterion for lindane is 980 ppt. Isomers Lindane is the gamma isomer of hexachlorocyclohexane ("γ-HCH"). In addition to the issue of lindane pollution, some concerns are related to the other isomers of HCH, namely alpha-HCH and beta-HCH, which are notably more toxic than lindane, lack its insecticidal properties, and are byproducts of lindane production. In the 1940s and 1950s, lindane producers stockpiled these isomers in open heaps, which led to ground and water contamination. The International HCH and Pesticide Forum has since been established to bring together experts to address the clean-up and containment of these sites. Modern manufacturing standards for lindane involve the treatment and conversion of waste isomers to less toxic industrial chemicals, a process known as "cracking". 
Today, only a few production plants remain active worldwide to accommodate public-health uses of lindane and declining agricultural needs. Lindane has not been manufactured in the U.S. since the mid-1970s, but continues to be imported.
Magnetic moment
In electromagnetism, the magnetic moment or magnetic dipole moment is the combination of strength and orientation of a magnet or other object or system that exerts a magnetic field. The magnetic dipole moment of an object determines the magnitude of torque the object experiences in a given magnetic field. When the same magnetic field is applied, objects with larger magnetic moments experience larger torques. The strength (and direction) of this torque depends not only on the magnitude of the magnetic moment but also on its orientation relative to the direction of the magnetic field. Its direction points from the south pole to the north pole of the magnet (i.e., inside the magnet). The magnetic moment also expresses the magnetic force effect of a magnet. The magnetic field of a magnetic dipole is proportional to its magnetic dipole moment. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object. Examples of objects or systems that produce magnetic moments include: permanent magnets; astronomical objects such as many planets, including the Earth, and some moons, stars, etc.; various molecules; elementary particles (e.g. electrons); composites of elementary particles (protons and neutrons, as in the nucleus of an atom); and loops of electric current, such as those produced by electromagnets. Definition, units, and measurement Definition The magnetic moment can be defined as a vector (really a pseudovector) relating the aligning torque on the object from an externally applied magnetic field to the field vector itself. The relationship is given by τ = m × B, where τ is the torque acting on the dipole, B is the external magnetic field, and m is the magnetic moment. This definition is based on how one could, in principle, measure the magnetic moment of an unknown sample. 
For a current loop, this definition leads to the magnitude of the magnetic dipole moment equaling the product of the current times the area of the loop. Further, this definition allows the calculation of the expected magnetic moment for any known macroscopic current distribution. An alternative definition is useful for thermodynamics calculations of the magnetic moment. In this definition, the magnetic dipole moment of a system is the negative gradient of its intrinsic energy, U_int, with respect to the external magnetic field: m = −∂U_int/∂B. Generically, the intrinsic energy includes the self-field energy of the system plus the energy of the internal workings of the system. For example, for a hydrogen atom in a 2p state in an external field, the self-field energy is negligible, so the internal energy is essentially the eigenenergy of the 2p state, which includes Coulomb potential energy and the kinetic energy of the electron. The interaction-field energy between the internal dipoles and external fields is not part of this internal energy. Units The unit for magnetic moment in International System of Units (SI) base units is A⋅m2, where A is the ampere (SI base unit of current) and m is the meter (SI base unit of distance). This unit has equivalents in other SI derived units, including A⋅m2 = N⋅m/T = J/T, where N is the newton (SI derived unit of force), T is the tesla (SI derived unit of magnetic flux density), and J is the joule (SI derived unit of energy). Although torque (N·m) and energy (J) are dimensionally equivalent, torques are never expressed in units of energy. In the CGS system, there are several different sets of electromagnetism units, of which the main ones are ESU, Gaussian, and EMU. Among these, there are two alternative (non-equivalent) units of magnetic dipole moment, the statA⋅cm2 (ESU) and the erg/G (EMU), where statA is statamperes, cm is centimeters, erg is ergs, and G is gauss. The ratio of these two non-equivalent CGS units (EMU/ESU) is equal to the speed of light in free space, expressed in cm⋅s−1. 
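As a concrete illustration of the current-loop relation m = IA and the defining torque equation τ = m × B, the following sketch computes both for an assumed circular loop. The current, radius, and field values are illustrative assumptions, not values from the article.

```python
import numpy as np

# Magnetic moment of a planar current loop: m = I * A * n_hat,
# and the torque it feels in a uniform field: tau = m x B.
# All numeric values below are illustrative assumptions.

I = 2.0                    # current in amperes
radius = 0.05              # loop radius in meters
area = np.pi * radius**2   # loop area in m^2
n_hat = np.array([0.0, 0.0, 1.0])  # unit normal, from the right-hand rule

m = I * area * n_hat       # magnetic moment in A*m^2

B = np.array([0.1, 0.0, 0.0])      # uniform field in teslas
tau = np.cross(m, B)               # torque in N*m, perpendicular to both

print(m)    # moment points along +z, magnitude ~0.0157 A*m^2
print(tau)  # torque points along +y
```

The torque vanishes when m is parallel to B, which is why a free dipole aligns with the applied field.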
All formulae in this article are correct in SI units; they may need to be changed for use in other unit systems. For example, in SI units, a loop with current I and area A has magnetic moment m = IA (see below), but in Gaussian units the magnetic moment is m = IA/c. Other units for measuring the magnetic dipole moment include the Bohr magneton and the nuclear magneton. Measurement The magnetic moments of objects are typically measured with devices called magnetometers, though not all magnetometers measure magnetic moment: some are configured to measure magnetic field instead. If the magnetic field surrounding an object is known well enough, though, then the magnetic moment can be calculated from that magnetic field. Relation to magnetization The magnetic moment is a quantity that describes the magnetic strength of an entire object. Sometimes, though, it is useful or necessary to know how much of the net magnetic moment of the object is produced by a particular portion of that magnet. Therefore, it is useful to define the magnetization field M as: M = m/V, where m and V are the magnetic dipole moment and volume of a sufficiently small portion of the magnet. This equation is often represented using derivative notation such that M = dm/dV, where dm is the elementary magnetic moment and dV is the volume element. The net magnetic moment of the magnet therefore is m = ∭ M dV, where the triple integral denotes integration over the volume of the magnet. For uniform magnetization (where both the magnitude and the direction of M are the same for the entire magnet, such as a straight bar magnet) the last equation simplifies to: m = MV, where V is the volume of the bar magnet. The magnetization is often not listed as a material parameter for commercially available ferromagnetic materials, though. Instead the parameter that is listed is residual flux density (or remanence), denoted B_r. The formula needed in this case to calculate m (in units of A⋅m2) is: m = B_r V / μ0, where: B_r is the residual flux density, expressed in teslas. 
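The remanence relation just described, m = B_r V / μ0, is easy to check numerically. In the sketch below the remanence value and magnet dimensions are assumed for illustration (roughly typical of a small NdFeB bar magnet), not taken from the article.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of vacuum, T*m/A

# Moment of a uniformly magnetized bar magnet from its remanence:
# m = B_r * V / mu0. Dimensions and B_r are illustrative assumptions.
B_r = 1.2                      # residual flux density in teslas
volume = 0.01 * 0.01 * 0.02    # 1 cm x 1 cm x 2 cm bar, in m^3

m = B_r * volume / MU0         # moment magnitude in A*m^2
print(round(m, 3))             # ~1.91 A*m^2
```

Note that the result has the same units (A⋅m2) as the current-loop definition, as it must.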
V is the volume of the magnet (in m3). μ0 is the permeability of vacuum (4π×10−7 H/m). Models The preferred classical explanation of a magnetic moment has changed over time. Before the 1930s, textbooks explained the moment using hypothetical magnetic point charges. Since then, most have defined it in terms of Ampèrian currents. In magnetic materials, the causes of the magnetic moment are the spin and orbital angular momentum states of the electrons, and the moment varies depending on whether atoms in one region are aligned with atoms in another. Magnetic pole model The sources of magnetic moments in materials can be represented by poles in analogy to electrostatics. This is sometimes known as the Gilbert model. In this model, a small magnet is modeled by a pair of fictitious magnetic monopoles of equal magnitude but opposite polarity. Each pole is the source of magnetic force which weakens with distance. Since magnetic poles always come in pairs, their forces partially cancel each other because while one pole pulls, the other repels. This cancellation is greatest when the poles are close to each other, i.e. when the bar magnet is short. The magnetic force produced by a bar magnet, at a given point in space, therefore depends on two factors: the strength p of its poles (magnetic pole strength), and the vector ℓ separating them. The magnetic dipole moment is related to the fictitious poles as m = p ℓ. It points in the direction from south to north pole. The analogy with electric dipoles should not be taken too far because magnetic dipoles are associated with angular momentum (see Relation to angular momentum). Nevertheless, magnetic poles are very useful for magnetostatic calculations, particularly in applications to ferromagnets. Practitioners using the magnetic pole approach generally represent the magnetic field by the irrotational field H, in analogy to the electric field E. 
Ampèrian loop model After Hans Christian Ørsted discovered that electric currents produce a magnetic field and André-Marie Ampère discovered that electric currents attract and repel each other similar to magnets, it was natural to hypothesize that all magnetic fields are due to electric current loops. In this model developed by Ampère, the elementary magnetic dipole that makes up all magnets is a sufficiently small amperian loop of current I. The dipole moment of this loop is m = I S, where S is the area of the loop. The direction of the magnetic moment is in a direction normal to the area enclosed by the current, consistent with the direction of the current using the right-hand rule. Localized current distributions The magnetic dipole moment can be calculated for a localized (does not extend to infinity) current distribution assuming that we know all of the currents involved. Conventionally, the derivation starts from a multipole expansion of the vector potential. This leads to the definition of the magnetic dipole moment as: m = ½ ∭ r × J dV, where × is the vector cross product, r is the position vector, J is the electric current density, and the integral is a volume integral. When the current density in the integral is replaced by a loop of current I in a plane enclosing an area S, then the volume integral becomes a line integral and the resulting dipole moment becomes m = I S, which is how the magnetic dipole moment for an amperian loop is derived. Practitioners using the current loop model generally represent the magnetic field by the solenoidal field B, analogous to the electrostatic field D. Magnetic moment of a solenoid A generalization of the above current loop is a coil, or solenoid. Its moment is the vector sum of the moments of individual turns. 
If the solenoid has N identical turns (single-layer winding) and vector area S, its magnetic moment is m = N I S. Quantum mechanical model When calculating the magnetic moments of materials or molecules on the microscopic level, it is often convenient to use a third model for the magnetic moment that exploits the linear relationship between the angular momentum and the magnetic moment of a particle. While this relation is straightforward to develop for macroscopic currents using the amperian loop model (see below), neither the magnetic pole model nor the amperian loop model truly represents what is occurring at the atomic and molecular levels. At that level quantum mechanics must be used. Fortunately, the linear relationship between the magnetic dipole moment of a particle and its angular momentum still holds, although it is different for each particle. Further, care must be used to distinguish between the intrinsic angular momentum (or spin) of the particle and the particle's orbital angular momentum. See below for more details. Effects of an external magnetic field Torque on a moment The torque on an object having a magnetic dipole moment m in a uniform magnetic field B is: τ = m × B. This is valid for the moment due to any localized current distribution provided that the magnetic field is uniform. For non-uniform B the equation is also valid for the torque about the center of the magnetic dipole provided that the magnetic dipole is small enough. An electron, nucleus, or atom placed in a uniform magnetic field will precess with a frequency known as the Larmor frequency. See Resonance. Force on a moment A magnetic moment in an externally produced magnetic field has a potential energy U: U = −m ⋅ B. In a case when the external magnetic field is non-uniform, there will be a force, proportional to the magnetic field gradient, acting on the magnetic moment itself. 
There are two expressions for the force acting on a magnetic dipole, depending on whether the model used for the dipole is a current loop or two monopoles (analogous to the electric dipole). The force obtained in the case of a current loop model is F = ∇(m ⋅ B); assuming the existence of magnetic monopoles would add further terms to this expression. In the case of a pair of monopoles being used (i.e. the electric dipole model), the force is F = (m ⋅ ∇)B. One can be put in terms of the other via the relation ∇(m ⋅ B) = (m ⋅ ∇)B + m × (∇ × B). In all these expressions m is the dipole moment and B is the magnetic field at its position. Note that if there are no currents or time-varying electric fields or magnetic charge, ∇ × B = 0, and the two expressions agree. Relation to free energy One can relate the magnetic moment of a system to the free energy of that system. In a uniform magnetic field B, the free energy F can be related to the magnetic moment of the system as dF = −S dT − m ⋅ dB, where S is the entropy of the system and T is the temperature. Therefore, the magnetic moment can also be defined in terms of the free energy of a system as m = −∂F/∂B at constant temperature. Magnetism In addition, an applied magnetic field can change the magnetic moment of the object itself; for example by magnetizing it. This phenomenon is known as magnetism. An applied magnetic field can flip the magnetic dipoles that make up the material causing both paramagnetism and ferromagnetism. Additionally, the magnetic field can affect the currents that create the magnetic fields (such as the atomic orbits) which causes diamagnetism. Effects on environment Magnetic field of a magnetic moment Any system possessing a net magnetic dipole moment will produce a dipolar magnetic field (described below) in the space surrounding the system. While the net magnetic field produced by the system can also have higher-order multipole components, those will drop off with distance more rapidly, so that only the dipole component will dominate the magnetic field of the system at distances far away from it. 
The magnetic field of a magnetic dipole depends on the strength and direction of a magnet's magnetic moment m but drops off as the cube of the distance such that: H(r) = (1/4π) [3(m ⋅ r̂)r̂ − m] / |r|³, where H is the magnetic field produced by the magnet and r is a vector from the center of the magnetic dipole to the location where the magnetic field is measured. The inverse-cube nature of this equation is made explicit by expressing the location vector as the product of its magnitude times the unit vector in its direction (r = |r| r̂). The equivalent equations for the magnetic B-field are the same except for a multiplicative factor of μ0 = 4π×10−7 H/m, where μ0 is known as the vacuum permeability. For example: B(r) = (μ0/4π) [3(m ⋅ r̂)r̂ − m] / |r|³. Forces between two magnetic dipoles As discussed earlier, the force exerted by a dipole loop with moment m1 on another with moment m2 is F = ∇(m2 ⋅ B1), where B1 is the magnetic field due to moment m1. The result of calculating the gradient is F(r, m1, m2) = (3μ0 / 4π|r|⁴) [(m1 ⋅ r̂)m2 + (m2 ⋅ r̂)m1 + (m1 ⋅ m2)r̂ − 5(m1 ⋅ r̂)(m2 ⋅ r̂)r̂], where r̂ is the unit vector pointing from magnet 1 to magnet 2 and r is the distance. The force acting on m1 is in the opposite direction. Torque of one magnetic dipole on another The torque of magnet 1 on magnet 2 is τ = m2 × B1. Theory underlying magnetic dipoles The magnetic field of any magnet can be modeled by a series of terms for which each term is more complicated (having finer angular detail) than the one before it. The first three terms of that series are called the monopole (represented by an isolated magnetic north or south pole), the dipole (represented by two equal and opposite magnetic poles), and the quadrupole (represented by four poles that together form two equal and opposite dipoles). The magnitude of the magnetic field for each term decreases progressively faster with distance than the previous term, so that at large enough distances the first non-zero term will dominate. For many magnets the first non-zero term is the magnetic dipole moment. (To date, no isolated magnetic monopoles have been experimentally detected.) 
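The inverse-cube falloff of the dipole field can be demonstrated numerically. The sketch below implements the standard point-dipole B-field formula, B(r) = (μ0/4π)[3(m ⋅ r̂)r̂ − m]/|r|³, and verifies that doubling the distance reduces the field by a factor of eight. The dipole moment and distances are illustrative assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """B-field of a point magnetic dipole m (A*m^2) at displacement r (m).
    Implements B = mu0/(4*pi) * (3*(m . r_hat)*r_hat - m) / |r|^3,
    valid away from the source region."""
    r_norm = np.linalg.norm(r)
    r_hat = r / r_norm
    return MU0 / (4 * np.pi) * (3 * np.dot(m, r_hat) * r_hat - m) / r_norm**3

# Illustrative values, not from the article.
m = np.array([0.0, 0.0, 1.0])  # 1 A*m^2 dipole along z

B1 = dipole_field(m, np.array([0.0, 0.0, 0.1]))  # on-axis, 10 cm away
B2 = dipole_field(m, np.array([0.0, 0.0, 0.2]))  # twice as far

# Doubling the distance reduces the field by 2^3 = 8 (inverse-cube law).
print(np.linalg.norm(B1) / np.linalg.norm(B2))  # ~8.0
```

On the dipole axis the bracketed factor reduces to 2m, so B1 here is (μ0/4π)(2m)/r³ = 2×10⁻⁴ T along z.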
A magnetic dipole is the limit of either a current loop or a pair of poles as the dimensions of the source are reduced to zero while keeping the moment constant. As long as these limits only apply to fields far from the sources, they are equivalent. However, the two models give different predictions for the internal field (see below). Magnetic potentials Traditionally, the equations for the magnetic dipole moment (and higher order terms) are derived from theoretical quantities called magnetic potentials, which are simpler to deal with mathematically than the magnetic fields. In the magnetic pole model, the relevant magnetic field is the demagnetizing field H. Since the demagnetizing portion of H does not include, by definition, the part of H due to free currents, there exists a magnetic scalar potential ψ such that H = −∇ψ. In the amperian loop model, the relevant magnetic field is the magnetic induction B. Since magnetic monopoles do not exist, there exists a magnetic vector potential A such that B = ∇ × A. Both of these potentials can be calculated for any arbitrary current distribution (for the amperian loop model) or magnetic charge distribution (for the magnetic charge model) provided that these are limited to a small enough region to give: A(r) = (μ0/4π) ∭ J(r′)/|r − r′| dV′ and ψ(r) = (1/4π) ∭ ρ(r′)/|r − r′| dV′, where J is the current density in the amperian loop model, ρ is the magnetic pole strength density in analogy to the electric charge density that leads to the electric potential, and the integrals are the volume (triple) integrals over the primed coordinates. The denominators of these equations can be expanded using the multipole expansion to give a series of terms that have larger powers of distance in the denominator. The first nonzero term, therefore, will dominate for large distances. The first non-zero term for the vector potential is: A(r) = (μ0/4π) (m × r̂)/|r|², where m = ½ ∭ r′ × J(r′) dV′, × is the vector cross product, r′ is the position vector, J is the electric current density, and the integral is a volume integral. 
In the magnetic pole perspective, the first non-zero term of the scalar potential is ψ(r) = (m ⋅ r̂)/(4π|r|²). Here m may be represented in terms of the magnetic pole strength density, but is more usefully expressed in terms of the magnetization field as: m = ∭ M dV. The same symbol m is used for both equations since they produce equivalent results outside of the magnet. External magnetic field produced by a magnetic dipole moment The magnetic flux density for a magnetic dipole in the amperian loop model, therefore, is B(r) = (μ0/4π) [3(m ⋅ r̂)r̂ − m]/|r|³. Further, the magnetic field strength is H(r) = (1/4π) [3(m ⋅ r̂)r̂ − m]/|r|³. Internal magnetic field of a dipole The two models for a dipole (magnetic poles or current loop) give the same predictions for the magnetic field far from the source. However, inside the source region, they give different predictions. The magnetic field between poles (see the figure for Magnetic pole model) is in the opposite direction to the magnetic moment (which points from the negative charge to the positive charge), while inside a current loop it is in the same direction (see the figure to the right). The limits of these fields must also be different as the sources shrink to zero size. This distinction only matters if the dipole limit is used to calculate fields inside a magnetic material. If a magnetic dipole is formed by taking a "north pole" and a "south pole", bringing them closer and closer together but keeping the product of magnetic pole charge and distance constant, the limiting field is H(r) = (1/4π) [3(m ⋅ r̂)r̂ − m]/|r|³ − (1/3) m δ³(r). If a magnetic dipole is formed by making a current loop smaller and smaller, but keeping the product of current and area constant, the limiting field is B(r) = (μ0/4π) [3(m ⋅ r̂)r̂ − m]/|r|³ + (2μ0/3) m δ³(r). Unlike the expressions in the previous section, this limit is correct for the internal field of the dipole. These fields are related by B = μ0(H + M), where M is the magnetization. Relation to angular momentum The magnetic moment has a close connection with angular momentum called the gyromagnetic effect. 
This effect is expressed on a macroscopic scale in the Einstein–de Haas effect, or "rotation by magnetization", and its inverse, the Barnett effect, or "magnetization by rotation". Further, a torque applied to a relatively isolated magnetic dipole such as an atomic nucleus can cause it to precess (rotate about the axis of the applied field). This phenomenon is used in nuclear magnetic resonance. Viewing a magnetic dipole as a current loop brings out the close connection between magnetic moment and angular momentum. Since the particles creating the current (by rotating around the loop) have charge and mass, both the magnetic moment and the angular momentum increase with the rate of rotation. The ratio of the two is called the gyromagnetic ratio γ, so that: m = γ L, where L is the angular momentum of the particle or particles that are creating the magnetic moment. In the amperian loop model, which applies for macroscopic currents, the gyromagnetic ratio is one half of the charge-to-mass ratio. This can be shown as follows. The angular momentum of a moving charged particle is defined as: L = M r × v, where M is the mass of the particle and v is the particle's velocity. The angular momentum of the very large number of charged particles that make up a current therefore is: L = ∭ ρ_M (r × v) dV, where ρ_M is the mass density of the moving particles. By convention the direction of the cross product is given by the right-hand rule. This is similar to the magnetic moment created by the very large number of charged particles that make up that current: m = ½ ∭ ρ_Q (r × v) dV, where ρ_Q is the charge density of the moving charged particles. Comparing the two equations results in: m = [Q/(2M)] L, where Q is the charge of the particle and M is the mass of the particle. Even though atomic particles cannot be accurately described as orbiting (and spinning) charge distributions of uniform charge-to-mass ratio, this general trend can be observed in the atomic world so that: m = g [Q/(2M)] L, where the g-factor depends on the particle and configuration. 
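The classical relation γ = Q/(2M) and the magneton units it motivates can be computed directly from physical constants. The sketch below hardcodes CODATA values rather than relying on any external library; it reproduces the Bohr magneton (electron charge-to-mass ratio) and nuclear magneton (proton charge-to-mass ratio).

```python
# Classical gyromagnetic ratio gamma = Q / (2M) and the magneton units
# built from it. Constants are CODATA values, hardcoded for self-containment.
E_CHARGE = 1.602176634e-19     # elementary charge, C (exact in the 2019 SI)
M_ELECTRON = 9.1093837015e-31  # electron mass, kg
M_PROTON = 1.67262192369e-27   # proton mass, kg
HBAR = 1.054571817e-34         # reduced Planck constant, J*s

# Gyromagnetic ratio of a classical orbiting electron: half its charge-to-mass ratio.
gamma_classical = E_CHARGE / (2 * M_ELECTRON)  # ~8.79e10 rad/(s*T)

# Magnetons: moment of one hbar of angular momentum at the respective gamma.
bohr_magneton = E_CHARGE * HBAR / (2 * M_ELECTRON)   # ~9.274e-24 J/T
nuclear_magneton = E_CHARGE * HBAR / (2 * M_PROTON)  # ~5.051e-27 J/T

print(f"{bohr_magneton:.4e}")     # 9.2740e-24
print(f"{nuclear_magneton:.4e}")  # 5.0508e-27
```

The ratio of the two magnetons is just the proton-to-electron mass ratio (about 1836), which is why nuclear moments are so much smaller than electronic ones.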
For example, the g-factor for the magnetic moment due to an electron orbiting a nucleus is one, while the g-factor for the magnetic moment of an electron due to its intrinsic angular momentum (spin) is a little larger than 2. The g-factor of atoms and molecules must account for the orbital and intrinsic moments of its electrons and possibly the intrinsic moment of its nuclei as well. In the atomic world the angular momentum (spin) of a particle is an integer (or half-integer in the case of fermions) multiple of the reduced Planck constant ħ. This is the basis for defining the magnetic moment units of the Bohr magneton (assuming the charge-to-mass ratio of the electron) and the nuclear magneton (assuming the charge-to-mass ratio of the proton). See electron magnetic moment and Bohr magneton for more details. Atoms, molecules, and elementary particles Fundamentally, contributions to any system's magnetic moment may come from sources of two kinds: 1) motion of electric charges, such as electric currents; and 2) the intrinsic magnetism due to the spin of elementary particles, such as the electron. Contributions due to the sources of the first kind can be calculated from knowing the distribution of all the electric currents (or, alternatively, of all the electric charges and their velocities) inside the system, by using the formulas below. Contributions due to particle spin are obtained by summing each elementary particle's intrinsic magnetic moment, a fixed number, often measured experimentally to a great precision. For example, any electron's magnetic moment is measured to be approximately −9.285×10−24 J/T. The direction of the magnetic moment of any elementary particle is entirely determined by the direction of its spin, with the negative value indicating that any electron's magnetic moment is antiparallel to its spin. The net magnetic moment of any system is a vector sum of contributions from one or both types of sources. 
For example, the magnetic moment of an atom of hydrogen-1 (the lightest hydrogen isotope, consisting of a proton and an electron) is a vector sum of the following contributions: the intrinsic moment of the electron, the orbital motion of the electron around the proton, and the intrinsic moment of the proton. Similarly, the magnetic moment of a bar magnet is the sum of the contributing magnetic moments, which include the intrinsic and orbital magnetic moments of the unpaired electrons of the magnet's material and the nuclear magnetic moments. Magnetic moment of an atom For an atom, individual electron spins are added to get a total spin, and individual orbital angular momenta are added to get a total orbital angular momentum. These two then are added using angular momentum coupling to get a total angular momentum. For an atom with no nuclear magnetic moment, the magnitude of the atomic dipole moment is then m_atom = g_J μ_B √(J(J + 1)), where J is the total angular momentum quantum number, g_J is the Landé g-factor, and μ_B is the Bohr magneton. The component of this magnetic moment along the direction of the magnetic field is then m_z = −m g_J μ_B. The negative sign occurs because electrons have negative charge. The integer m (not to be confused with the moment) is called the magnetic quantum number or the equatorial quantum number, which can take on any of the 2J + 1 values −J, −J + 1, ..., J. Due to the angular momentum, the dynamics of a magnetic dipole in a magnetic field differs from that of an electric dipole in an electric field. The field does exert a torque on the magnetic dipole tending to align it with the field. However, torque is proportional to rate of change of angular momentum, so precession occurs: the direction of spin changes. This behavior is described by the Landau–Lifshitz–Gilbert equation: dm/dt = −γ m × H_eff + (α/|m|) m × dm/dt, where γ is the gyromagnetic ratio, m is the magnetic moment, α is the damping coefficient and H_eff is the effective magnetic field (the external field plus any self-induced field). 
The first term describes precession of the moment about the effective field, while the second is a damping term related to dissipation of energy caused by interaction with the surroundings. Magnetic moment of an electron Electrons and many elementary particles also have intrinsic magnetic moments, an explanation of which requires a quantum mechanical treatment and relates to the intrinsic angular momentum of the particles as discussed in the article Electron magnetic moment. It is these intrinsic magnetic moments that give rise to the macroscopic effects of magnetism, and other phenomena, such as electron paramagnetic resonance. The magnetic moment of the electron is m_S = −g_S μ_B S/ħ, where μ_B is the Bohr magneton, S is the electron spin, and the g-factor g_S is 2 according to Dirac's theory, but due to quantum electrodynamic effects it is slightly larger in reality: g_S ≈ 2.00231930436. The deviation from 2 is known as the anomalous magnetic dipole moment. Again it is important to notice that m is a negative constant multiplied by the spin, so the magnetic moment of the electron is antiparallel to the spin. This can be understood with the following classical picture: if we imagine that the spin angular momentum is created by the electron mass spinning around some axis, the electric current that this rotation creates circulates in the opposite direction, because of the negative charge of the electron; such current loops produce a magnetic moment which is antiparallel to the spin. Hence, for a positron (the anti-particle of the electron) the magnetic moment is parallel to its spin. Magnetic moment of a nucleus The nuclear system is a complex physical system consisting of nucleons, i.e., protons and neutrons. The quantum mechanical properties of the nucleons include the spin among others. Since the electromagnetic moments of the nucleus depend on the spin of the individual nucleons, one can look at these properties with measurements of nuclear moments, and more specifically the nuclear magnetic dipole moment. 
Most common nuclei exist in their ground state, although nuclei of some isotopes have long-lived excited states. Each energy state of a nucleus of a given isotope is characterized by a well-defined magnetic dipole moment, the magnitude of which is a fixed number, often measured experimentally to a great precision. This number is very sensitive to the individual contributions from nucleons, and a measurement or prediction of its value can reveal important information about the content of the nuclear wave function. There are several theoretical models that predict the value of the magnetic dipole moment and a number of experimental techniques aiming to carry out measurements in nuclei along the nuclear chart. Magnetic moment of a molecule Any molecule has a well-defined magnitude of magnetic moment, which may depend on the molecule's energy state. Typically, the overall magnetic moment of a molecule is a combination of the following contributions, in the order of their typical strength: magnetic moments due to its unpaired electron spins (paramagnetic contribution), if any; the orbital motion of its electrons, which in the ground state is often proportional to the external magnetic field (diamagnetic contribution); and the combined magnetic moment of its nuclear spins, which depends on the nuclear spin configuration. Examples of molecular magnetism The dioxygen molecule, O2, exhibits strong paramagnetism, due to unpaired spins of its outermost two electrons. The carbon dioxide molecule, CO2, mostly exhibits diamagnetism, a much weaker magnetic moment of the electron orbitals that is proportional to the external magnetic field. The nuclear magnetism of a magnetic isotope such as 13C or 17O will contribute to the molecule's magnetic moment. The dihydrogen molecule, H2, in a weak (or zero) magnetic field exhibits nuclear magnetism, and can be in a para- or an ortho- nuclear spin configuration. Many transition metal complexes are magnetic. 
The spin-only formula is a good first approximation for high-spin complexes of first-row transition metals. Elementary particles In atomic and nuclear physics, the Greek symbol μ represents the magnitude of the magnetic moment, often measured in Bohr magnetons or nuclear magnetons, associated with the intrinsic spin of the particle and/or with the orbital motion of the particle in a system. Values of the intrinsic magnetic moments of some particles are given in the table below:
Siesta
A siesta (from Spanish, meaning "nap") is a short nap taken in the early afternoon, often after the midday meal. Such a period of sleep is a common tradition in some countries, particularly those in warm-weather zones. "Siesta" can refer to the nap itself, or more generally to a period of the day, generally between 2 and 5 p.m. This period is used for sleep, as well as leisure, midday meals, or other activities. Siestas are historically common throughout the Mediterranean and Southern Europe, the Middle East, mainland China, and the Indian subcontinent. The siesta is an old tradition in Spain and, through Spanish influence, in most of Latin America and the Philippines. The Spanish word is originally derived from the Latin phrase hora sexta ('sixth [hour]', counting from dawn, hence "midday rest"). Factors explaining the geographical distribution of the modern siesta are warm temperatures and heavy intake of food at midday meals. Combined, these two factors contribute to the feeling of post-lunch drowsiness. In many countries that practice the siesta, the summer heat can be unbearable in the early afternoon, making a midday break at home welcome. Biological need for naps The timing of sleep in humans depends upon a balance between homeostatic sleep propensity, the need for sleep as a function of the amount of time elapsed since the last adequate sleep episode, and circadian rhythms which determine the ideal timing of a correctly structured and restorative sleep episode. The homeostatic pressure to sleep starts growing upon awakening. The circadian signal for wakefulness starts building in the (late) afternoon. As professor of sleep medicine Charles Czeisler notes, "the circadian system is set up in a beautiful way to override the homeostatic drive for sleep." Thus, in many people, there is a dip when the drive for sleep has been building for hours and the drive for wakefulness has not yet started. This is, again quoting Czeisler, "a great time for a nap". 
The drive for wakefulness intensifies through the evening, making it difficult to get to sleep 2–3 hours before one's usual bedtime, when the wake maintenance zone ends. In different countries Taking a long lunch break including a nap is common in a number of Mediterranean, tropical, and subtropical countries. The Washington Post of 13 February 2007 reports at length on studies in Greece indicating that those who nap have less risk of heart attacks. In the United States, the United Kingdom, and a growing number of other countries, a short sleep has been referred to as a "power nap", a term coined by Cornell University social psychologist James Maas and recognized by other research scientists such as Sara Mednick, as well as in the popular press. The siesta is also practiced in some colder regions, such as Patagonia. The power nap is called riposo in Northern Italy and pennichella or pisolino in Southern Italy. It used to be the custom in Russia, with Adam Olearius stating such was "the custom of the Countrey, where sleep is as necessary after Dinner as in the Night". One source of hostility toward False Dmitriy I was that he did not "...indulge in the siesta." In Southern Italy, the siesta is called controra (from contro ("counter") + ora ("hour")), and is considered a magical time of day in which the world comes back into the possession of ghosts and spirits. In Dalmatia (coastal Croatia), the traditional afternoon nap is known as pižolot (from Venetian pixolotto). In Egypt, as in other Middle Eastern countries, government workers typically work for six hours a day, six days a week. Due to this schedule, workers do not eat lunch at work, but instead leave work around 2 pm and eat their main meal, which is the heaviest of the day, at lunchtime. Following the heavy lunch, they take a taaseela, or nap, and have tea upon waking. For dinner, they usually have a smaller meal.
Einhard's Life of Charlemagne describes the emperor's summertime siestas: "In summer, after his midday meal, he would eat some fruit and take another drink; then he would remove his shoes and undress completely, just as he did at night, and rest for two or three hours." In China, taking a nap after lunch, known as 午睡 (noon sleep), is a common practice. Surveys indicate that about two-thirds of the Chinese population habitually takes afternoon naps, with the average duration being approximately 30 minutes. Spain In modern Spain, the midday nap during the working week is being gradually abandoned among the adult working population. According to a 2009 survey, 16.2 percent of Spaniards polled claimed to take a nap "daily", whereas 22 percent did so "sometimes", 3.2 percent "weekends only" and the remainder, 58.6 percent, "never". The share of those who claimed to have a nap daily had diminished by 7 percent compared to a previous poll in 1998. Nearly three out of four siesta-takers claimed to take siestas on the sofa rather than in bed. The habit is more likely among the elderly or during summer holidays, when it helps people avoid the high temperatures of the day and extend social life into the cooler late evenings and nights. English-language media often conflate the siesta with the two- to three-hour lunch break that is characteristic of Spanish working hours, even though the working population is less likely to have time for a siesta and the two are not necessarily connected. In fact, the average Spaniard works longer hours than almost all their European counterparts (typically 11-hour days, from 9 am to 8 pm). As for the origins of the practice in Spain, the scorching summer heat, predominant mostly in the south, is thought to have motivated those doing agrarian work to take a break to avoid the hottest part of the day and be able to work longer hours when it is cooler.
In cities, the economic situation in Spain during the post-Spanish Civil War years was dismal. At that time, a long midday break—with or without a siesta—was necessary for those commuting between the part-time jobs that were common in the sputtering economy. This situation was soon followed by the advent of a modern economy and urbanization. Cardiovascular benefits The siesta habit has been associated with a 37 percent reduction in coronary mortality, possibly due to reduced cardiovascular stress mediated by daytime sleep. Epidemiological studies on the relationship between cardiovascular health and the siesta have reached conflicting conclusions, possibly because of poor control of confounding variables such as physical activity. It is possible that people who take a siesta have different physical activity habits, for example, waking earlier and scheduling more activity during the morning. Such differences in physical activity may lead to different 24-hour profiles in cardiovascular function. Even if such effects of physical activity can be discounted in explaining the relationship between siesta and cardiovascular health, it is still not known whether the daytime nap itself, a supine posture, or the expectancy of a nap is the most important factor.
https://en.wikipedia.org/wiki/Feral%20cat
Feral cat
A feral cat or a stray cat is an unowned domestic cat (Felis catus) that lives outdoors and avoids human contact; it does not allow itself to be handled or touched, and usually remains hidden from humans. Feral cats may breed over dozens of generations and become a local apex predator in urban, savannah and bushland environments, and especially on islands where native animals did not evolve alongside predators. Some feral cats may become more comfortable with people who regularly feed them, but even with long-term attempts at socialization, they usually remain aloof and reject human touch. Of the 700 million cats in the world, an estimated 480 million are feral. Feral cats are devastating to wildlife, and conservation biologists consider them to be one of the worst invasive species on Earth. They are included in the list of the world's 100 worst invasive alien species. Attempts to control feral cat populations are widespread but generally of greatest impact within purpose-fenced reserves. Some animal rights groups advocate for trap-neuter-return (TNR) programs, also known as trap-neuter-vaccinate-release (TNVR) programs, to prevent feral cats from breeding and from being nuisances by spraying urine and fighting over territory. Scientific evidence has demonstrated that TNR alone is not effective at controlling feral cat populations and must be done alongside removal. For TNR to be effective, at least 88% of the cat colony should be neutered, which is an unrealistic goal in the majority of cases, and lower rates actually increase cat populations. TNR also takes much longer to eliminate cat colonies than trap-and-euthanize approaches, and it is more expensive and resource-intensive. TNR has also been criticized as inhumane, to the point that some say TNR stands for trap-neuter-reabandon, because feral cats do not have good lives and die early from disease, poisoning, predation, vehicle collisions and sometimes violence by humans.
A possibly more effective alternative to TNR has been proposed: trap-vasectomy-hysterectomy-release (TVHR). Definitions The meaning of the term feral cat varies between professions and countries, and it is sometimes used interchangeably with other terms such as free-roaming, street, alley, or community cat. Some of these terms are also used to refer to stray cats, although stray and feral cats are generally considered to be different by rescuers, veterinarians, and researchers. The line between stray and feral cats is blurred. The general idea is that owned cats that wander away from their homes may become stray cats, and stray cats that have lived in the wild for some time may become feral. Activists who seek to normalize feral cats in the environment are attempting to rebrand them as community cats. Biologists say that this new term is euphemistic and distracts from feral cats being an environmental problem, and that it has connotations that falsely imply that feral cats exist with the consent of the communities where they live and that the public has a moral obligation to support them in the outdoors. Studies have shown that the public does not support there being large numbers of free-ranging cats in the outdoors, but that the use of language in surveys appears to influence the levels of support for different management options. United Kingdom In the United Kingdom, a feral cat is defined as a cat that chooses not to interact with humans, survives with or without human assistance, and hides or defends itself when trapped rather than allowing itself to be handled. Animal rescuers and veterinarians consider cats to be feral when they have not had much human contact, particularly before eight weeks of age, avoid humans, and prefer to escape rather than attack a human.
Feral cats are distinguished from domesticated cats based on their levels of socialization, ownership, and confinement, and on the amount of fear of, interaction with, and dependence upon humans. However, veterinarians and rescuers disagreed on whether a feral cat would tend to hiss and spit at or attack a human during an encounter, and disagreed on whether adult feral cats could potentially be tamed. Italy In Italy, feral cats have been protected since 1991, and it is illegal to kill them. In Rome, they are surgically neutered by veterinarians of the Veterinary Public Services. Programs for sterilization of stray cats are also implemented in the Padua and Venice Provinces. United States A survey of rescue and veterinary facilities in the United States revealed that no widely accepted definition of a feral cat exists. Many facilities used waiting periods to evaluate whether a cat was feral by observing whether the cat became less afraid and evasive over time. Other indicators included the cat's response to touch with an inanimate object, and observation of the cats' social behavior in varying environments such as response to human contact, with a human nearby, or when moved to a quieter environment. Australia The Australian government categorizes cats who have no interaction with or assistance from humans as feral, and unowned cats who rely on humans as semi-feral or stray. However, even these so-called 'managed colonies' often have a devastating impact on wildlife as demonstrated in the decimation of native mammals in adjacent reserves, such as occurred with numbats and woylies in Western Australia. Farm cat A farm cat is a free-ranging domestic cat that lives in a cat colony on agricultural farms in a feral or semi-feral condition. Farm cats primarily live outdoors and usually shelter in barns. They are partially supplied with food and milk, but mainly subsist on hunting rodents such as black rat, brown rat, common vole and Apodemus species. 
In England, farm cat colonies are present on the majority of farms and consist of up to 30 cats. Female farm cats show allomothering behaviour; they use communal nests and take care of kittens of other colony members. Some animal rescue organizations maintain Barn Cat Programs and rehome neutered feral cats to people who are looking for barn cats. Ship's cat Domestic cats have been members of ship crews since the beginning of commercial navigation. Phoenician and Etruscan traders probably carried cats on board their trading vessels to Italy and the Mediterranean islands. History Cats in ancient Egypt were venerated for killing rodents and venomous snakes. The need to keep rodents from consuming or contaminating grain crops stored for later human consumption may be the original reason that cats were domesticated. The spread of cats throughout much of the world is thought to have originated in Egypt. Scientists do not agree on whether cats were domesticated in Ancient Egypt or introduced there after domestication. Phoenician traders brought them to Europe for control of rat populations, and monks brought them further into Asia. Roman armies also contributed to spreading cats and eventually brought them to Britain. Since then, cats have continued to be introduced to new countries, often by sailors or settlers. Cats are thought to have been introduced to Australia either by Dutch shipwrecks in the 1600s or by English settlers in the late 1700s. These domesticated cats began to form feral populations after their offspring began living away from human contact. In the 19th and 20th centuries, several cat specimens were described as wildcat subspecies that are considered feral cat populations today: Felis silvestris sarda, proposed by Fernand Lataste in 1885, was a skin and a skull of a male cat from Sarrabus in Sardinia that looked like an African wildcat (Felis lybica), but was more reddish, gray and brown and had longer hair on the back.
In the 1980s, Colin Groves assessed values of Schauenberg's index of cat skulls of zoological specimens that originated in the Mediterranean islands. Based on these values, he concluded that Sardinian wildcats are descendants of African wildcats that were introduced from North Africa's Maghreb region. Results of zooarchaeological research indicate that Sardinian wildcats descended from domestic cats that were introduced during the Roman Empire, and probably originated in the Near East. Felis reyi, proposed by Louis Lavauden in 1929, was a skin and a skull of a specimen from Biguglia in Corsica that was smaller and darker than the European wildcat (Felis silvestris), had a much shorter tail than the African wildcat, and differed in fur colour and markings from both. When Reginald Innes Pocock reviewed Felis skins in the collection of the Natural History Museum, London, he considered Felis reyi a synonym of Felis lybica sarda, the Sardinian wild cat. The Corsican wildcat is considered to have been introduced in the early first millennium. The earliest known fossil records of cats date to the early 14th century, but older chronostratigraphic layers revealed fossils of livestock introduced since the Iron Age. Felis lybica jordansi, proposed by Ernst Schwarz in 1930, was a skull and skin of a male specimen from Santa Margarita in Mallorca that had more pronounced stripes than the African wildcat. This is also considered to have descended from domestic cats introduced to the island. Felis silvestris cretensis, proposed by Theodor Haltenorth in 1953, was a cat skin purchased in a bazaar in Chania that resembled an African wildcat, but had a bushy tail like a European wildcat. Groves considered the Cretan wildcat an introduced feral cat. Distribution and habitat The feral cat is the most widely distributed terrestrial carnivore. 
It occurs between 55° North and 54.3° South latitudes in a wide range of climatic zones and islands in the Atlantic, Indian and Pacific Oceans, and the Mediterranean Sea, including the Canary Islands, Port-Cros, Dassen Island, Marion Island, Juan de Nova Island, Réunion, Hahajima, Okinawa Island, Raoul Island, Herekopare Island, Stewart Island, Macquarie Island, the Galápagos Islands, San Clemente Island, Isla Natividad, San José Island, and New Island. Feral cat colonies also occur on the Japanese islands of Ainoshima, Hahajima and Aoshima, Ehime. The feral cat population on the Hawaiian Islands is mainly of European origin and probably arrived in the 19th century on ships. Feral cat colonies in Rome have been monitored since 1991. Urban feral cats have been studied in Madrid, Jerusalem and Ottawa. Behavior and ecology Some behaviors of feral cats are commonly observed, although there is disagreement among veterinarians, rescuers and researchers on the prevalence of some. In a free-roaming environment, feral cats avoid humans. They do not allow themselves to be handled or touched by humans, and back away or run when they are able to do so. If trapped, they hiss, growl, bare their teeth, or strike out. They remain fairly hidden from humans and will not approach, although some feral cats gradually become more comfortable around humans who feed them regularly. Most feral cats have small home ranges, although some are more transient and travel long distances. The home ranges of male feral cats, which are generally two or three times larger than those of female cats, vary widely in size. This variation is often due to breeding season, access to females, whether the cat is neutered, age, time of day, and availability of prey. Feral cats depend on the presence of human settlement to subsist.
Colonies of feral and stray cats settle in urban, suburban, and rural developments such as cities and farms, wherever they can find easy access to food or prey animals. Few to no feral cats are found significantly distant from human settlements. While feral cats prey on small mammals and reptiles, their home ranges do not change to reflect the seasonal availability of prey animals. This indicates that feral cats have a fairly consistent home range, and that migration is more representative of mate availability, consistency in human-related food sources, or other less transient stimuli. Colonies Feral cats often live in groups called colonies, which are located close to food sources and shelter. Researchers disagree on the existence, extent, and structure of dominance hierarchies among feral cats in colonies. Different types of hierarchies have been observed in colonies, including despotic and linear hierarchies. Some colonies are organized in more complex structures, such as relative hierarchies, where the social status of individual cats varies with location, time of day, or the activity the cats are engaged in, particularly feeding and mating. A 'managed colony' is taken care of by humans who supply food and water to the cats, provide shelters and veterinary care, implement trap-neuter-return programs, find foster homes for cats that can be socialized for eventual adoption, and educate people in the neighborhood. Feral cats are known to move from colony to colony when home ranges overlap. Additionally, colony populations fluctuate as cats leave family homes and some feral and semi-feral cats become socialized to home life and become family pets. Socialization Feral kittens can be trapped and socialized, then adopted into a home. The age at which a kitten becomes difficult to socialize is not agreed upon, but suggestions generally range from seven weeks to four months of age.
Although older cats can sometimes be socialized, it is a very long and difficult process, and the cat rarely becomes friendly and may remain fearful. In a 2013 study with British participants, rescuers tended to be more willing than veterinarians to attempt to tame adult feral cats. Veterinarians tended to be more opposed to this practice, with some expressing concerns for the welfare of such a cat in a home environment. In a 2010 interview survey of veterinarians and rescuers in the United States, 66% of respondents had socialization programs for kittens, and 8% for adult cats. In Parañaque, Philippines, netizens lauded the building of wooden catteries, "Cat Homes", for "Puspin" (Pusang Pinoy), or stray, cats. Diet Feral cats are either mesopredators (mid-ranking predators) or apex predators (top predators) in local ecosystems. They prey on a wide variety of both vertebrates and invertebrates, and typically prefer smaller animals, particularly mammals, birds, and lizards. Their global prey spectrum encompasses over 1,000 species; the most commonly observed are the house mouse, European rabbit, black rat, house sparrow, and common blackbird. In Australia, they prey on introduced species like the European rabbit and house mouse, and on native rodents and marsupials, particularly the common ringtail possum. In the United States, some people advocate for feral cats as a means to control pigeons and invasive rodents like the house mouse and brown rat, although these cosmopolitan species co-evolved with cats in human-disturbed environments and so have an advantage over native rodents in evading cat predation. Studies in California showed that 67% of the mice killed by cats were native species, and that areas near feral cat colonies actually have larger house mouse populations, but fewer birds and native rodents.
Though cats usually prey on animals less than half their size, a feral cat in Australia was photographed killing an adult pademelon of around the cat's own weight. African feral cats have been observed pilfering milk directly from the teats of elephant seals. Predators Feral cats are prey of feral dogs, dingoes, coyotes, caracals and birds of prey. Health Life span and survival Without human assistance Adult feral cats without human assistance have been found in surprisingly good condition. In Florida, a study of feral cats admitted to a trap-neuter-return (TNR) program concluded that "euthanasia for debilitated cats for humane reasons is rarely necessary". A further study of over 100,000 feral and stray cats admitted to TNR programs in diverse locations of the U.S. found the same 0.4% rate of euthanasia for debilitating conditions. The body condition of feral cats entering a TNR program in Florida was described as "generally lean but not emaciated". However, many feral cats had suffered from parasites such as fleas and ear mites before entering TNR programs. With human assistance Feral cats in managed colonies can live long lives. A number of cats in managed colonies in the United Kingdom died of old age. A long-term study of a trap-neuter-return (TNR) program on a university campus in Central Florida found that, despite widespread concern about the welfare of free-roaming cats, 83% of the cats studied had been present for more than six years, with almost half first observed as adults of unknown age. The authors compared this result to a 1984 study that found the mean life span for domesticated cats was 7.1 years.
Disease Types Feral cats, like all cats, are susceptible to diseases and infections including rabies, bartonellosis, toxoplasmosis, feline panleukopenia virus, external and internal parasites, feline immunodeficiency virus (FIV), feline leukemia virus (FeLV), rickettsial diseases, ringworm, and feline respiratory disease complex (a group of respiratory illnesses including feline herpesvirus type 1, feline calicivirus, Chlamydophila felis, and Mycoplasma haemofelis). Feline leukemia virus and feline immunodeficiency virus belong to the Retroviridae family, and both cause immunosuppression in cats, which can increase their susceptibility to other infections. Research has shown that the prevalence of these viruses among feral cat populations is low and is similar to prevalence rates for owned cats in the United States. Researchers studying 553 feral cats in North Florida in the United States tested them for a number of infections that could be detrimental to feline or human health. The study found the most prevalent infection to be Bartonella henselae, the cause of cat-scratch disease in humans, with 33.6% of the cats testing positive. Feline coronavirus was the next most common infection, found in 18.3% of the cats, although the researchers noted that antibody levels were low in most of the cats who tested positive, and concluded that the cats they tested did not appear to pose a greater risk of shedding the virus than pet cats. Researchers studying 96 feral cats on Prince Edward Island in Canada found that feline roundworm was the most common infection in that colony, afflicting 34% of cats. This was followed by Toxoplasma gondii, which was detected in 29.8% of cats, although only one of the 78 cats for which fecal samples were available was shedding T. gondii oocysts. They did note that most fecal samples collected indicated the presence of one intestinal parasite, with some samples indicating the presence of multiple parasites.
Transmission to humans The Centers for Disease Control and Prevention has warned about the rabies risk associated with feral cats. With 16% of human rabies infections resulting from exposure to rabid cats, cats have been the primary animal responsible for transmission of the virus to humans in the United States since efforts to control rabies in dogs began in the 1970s. In 2010, there were 303 rabid cats reported within the United States. Although some colony management programs involve administering rabies vaccines, the need to revaccinate every few years makes this challenging to maintain. Furthermore, lack of documentation can mean that contact with vaccinated feral cats may still require post-exposure treatment. The study of feral cats on Prince Edward Island warned of "considerable zoonotic risk" for transmission of intestinal parasites. Although the authors noted that their study did not provide evidence for great risk associated with T. gondii in cats, they advised that the risk should still be considered, as the infection in humans can cause significant health problems, and cats who are not otherwise transmitting the infection can begin shedding the parasite in times of stress. Control and management Feral cats are controlled or managed by various agencies to control disease, to protect native wildlife, and to protect the cats' welfare. Feral cat populations can be controlled through trapping and euthanasia or other forms of lethal control, or, some claim, through trap-neuter-return (TNR). Scientific research has not found TNR to be an effective means of controlling the feral cat population. Literature reviews have found that, in the instances where studies documented TNR colonies that declined in population, those declines were being driven primarily by substantial percentages of colony cats being permanently removed from colonies by some combination of re-homing and euthanasia on an ongoing basis.
TNR colonies often increase in population because cats breed quickly and the trapping and sterilization rates are frequently too low to stop this population growth, because food is usually being provided to the cats, and because public awareness of a TNR colony tends to encourage people in the surrounding community to dump their own unwanted pet cats there. The growing popularity of TNR, even near areas of particular ecological sensitivity, has been attributed in part to the failure of scientists to communicate the environmental harm caused by feral cats to the public, and their unwillingness to engage with TNR advocates. Trap-neuter-return involves trapping feral cats, vaccinating and spaying or neutering them, and then returning them to the place where they were originally trapped. TNR programs are prevalent in several countries, including England, Italy, Canada, and the United States, and are supported by many local and state governments. Proponents of TNR argue that it is effective in stopping reproduction and reducing the population over time. TNR results in fewer complaints, as nuisance behaviors diminish following neutering, and the quality of life of the cats is improved. The practice is reported to save money and garner more public support and better morale than efforts that involve killing cats. TNR is popular, but there is little evidence that TNR by itself can control the growing population of free-roaming cats. The International Companion Animal Management Coalition advocates for TNR as a humane method of controlling feral cat populations. In the U.S., the practice is endorsed by the Humane Society of the United States and the National Animal Control Association. TNR is opposed by the Australian Veterinary Association, the National Audubon Society, the National Wildlife Federation, the Cornell Lab of Ornithology, the American Association of Wildlife Veterinarians, the Wildlife Society, the American Bird Conservancy, and PETA. Some U.S.
military bases have TNR programs, but the United States Navy prohibits such programs on Navy land. In 2016, the American Veterinary Medical Association (AVMA) in the US adopted a resolution that "encourages collaborative efforts to identify humane and effective alternatives to the destruction of healthy cats for animal control purposes, while minimizing their negative impact on native wildlife and public health." The AVMA voiced support for "properly managed [feral cat] colonies" outside "wildlife-sensitive ecosystems" but stated that "[t]he goal of colony management should be continual reduction and eventual elimination of the colony through attrition." The AVMA stated that "free-roaming abandoned and feral cats that are not in properly managed colonies should be removed from their environment and treated in the same manner as other abandoned and stray animals in accordance with local and state ordinances" and that "[f]or colonies not achieving attrition and posing active threats to the area in which they are residing, the AVMA does not oppose the consideration of euthanasia when conducted by qualified personnel, using appropriate humane methods as described in the AVMA Guidelines for the Euthanasia of Animals." According to estimates from the Humane Society of the United States, the population of feral cats in the US ranges from 50 to 70 million. In contrast, the number of pet cats in the US stands at approximately 76 million. The effectiveness of both trap-and-euthanise and TNR programmes is largely dependent upon controlling immigration of cats into cleared or controlled areas; where immigration of new cats is controlled, both techniques can be effective. However, where immigration is not controlled, culling is more effective. Comparisons of different techniques have also found that trap-and-euthanise programmes are half the cost of TNR ones.
An analysis of both techniques in Hawaii suggested they are less effective when new cats were introduced by the abandonment of pets. The usefulness of TNR is disputed by some scientists and conservation specialists, who argue that TNR is only concerned with cat welfare and ignores the ongoing damage caused by feeding outdoor populations of neutered cats, including the depredation of wildlife, transmission of diseases, and the accumulation of cat faeces in the environment. Conservation scientists also question the effectiveness of TNR at controlling numbers of feral cats. Some studies that have supported TNR have also been criticised for using anecdotal data to evaluate their effectiveness. In order for TNR to reduce the cat population, sterilisation rates of at least 75% must be maintained at all times, particularly because TNR practitioners providing cats with food makes the problem worse by increasing the survival rate of feral kittens. Also, this food source causes other cats to be drawn into the colony from outside. Members of the public often begin dumping unwanted pet cats at TNR sites, increasing the rate of recruitment. And neutered cats are less territorial, allowing for higher populations. TNR programs are sometimes able to attain local reductions in the numbers of cats at specific colony locations, but it has never been demonstrated to meaningfully impact cat populations over large areas or regions, because the effort necessary to maintain sufficient sterilisation rates means that systemic TNR will never be a credible option. For example, to reduce a typical Australian city's population of 700,000 feral cats through TNR would require sterilising at least 500,000 of them initially, and then continuing to sterilise more than 75% of the kittens that the other 200,000 would continue to produce each year indefinitely, along with all the new recruits from other cat populations drawn by the food supply. 
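The arithmetic behind these sterilisation thresholds can be illustrated with a simple yearly projection. This is a hypothetical toy model, not drawn from any study cited above; the growth and mortality parameters are invented round numbers chosen only to show the qualitative effect that the unsterilised fraction of a colony drives population change:

```python
def project_population(start, sterilised_fraction, growth_rate=0.8,
                       mortality=0.3, years=10):
    """Project a colony size assuming only the unsterilised fraction
    reproduces. growth_rate is surviving kittens per unsterilised cat
    per year and mortality is the annual death rate; both values are
    hypothetical, for illustration only."""
    population = start
    for _ in range(years):
        breeders = population * (1 - sterilised_fraction)
        # survivors from last year plus this year's surviving kittens
        population = population * (1 - mortality) + breeders * growth_rate
    return round(population)

# With no sterilisation the colony grows each year; with a high,
# continuously maintained sterilisation rate it declines.
print(project_population(1000, 0.0))   # grows
print(project_population(1000, 0.9))   # shrinks
```

In this toy model the population only declines once the sterilised fraction is held above the break-even point set by the birth and death rates, which is why one-off or low-rate sterilisation campaigns can leave a colony growing.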
TNR is backed by well-funded advocacy organizations: in 2010, Alley Cat Allies spent US$3 million advocating to legalise TNR throughout the United States, while the Best Friends Animal Society spent $11 million on a "Focus on Felines" initiative that included TNR advocacy. Promoters of TNR are often funded by big businesses with a commercial interest in selling cat food, such as pet food mills and the pet products retailer PetSmart. While TNR is a popular approach to resolving the overpopulation problem, it is not a universally accepted method. Another perspective emphasizes the poor outdoor living conditions of feral cats, and advocates for rehoming, adoption, or euthanasia as a more ethical response. This perspective centers on the pressure feral cats place on the ecosystem, in contrast to the popular position that centers on the value of each cat's life. TNR and wildlife De-sexing cats, as in TNR programs, does nothing to prevent them from continuing to destroy wildlife. In Mandurah, Western Australia, a single neutered, semi-feral cat raided a protected fairy tern colony on at least six nights in November 2018. It killed at least six breeding adult fairy terns, directly or indirectly killed at least 40 nestlings, and caused enough stress on the fairy tern colony that all 111 nests were abandoned, resulting in a complete breeding failure for the entire colony of threatened seabirds. The predation was documented by wildlife cameras, as well as by the presence of cat tracks, cat scat, decapitated fairy terns, and injured and missing fairy tern nestlings. Though the colony was surrounded by ultrasound generators intended to deter cats, the fairy tern colony might have been an irresistible target, and this particular cat was white and had a blue eye, traits commonly associated with deafness.
Management in sensitive environments In sensitive environments, such as delicate ecosystems that have been degraded by feral cats, population management can be quite difficult. On isolated Pacific islands, trapping and removing the feral population too quickly can have adverse effects, including booms in rodent and small reptile populations previously checked by the feral cat population. This new dynamic may prove to be more harmful, with further upstream effects on the ecosystem that were not predicted before removal of the feral cat population. With such a sensitive system to account for, solutions for population control will likely differ from case to case, and especially in different ecosystems where feral cats are to be controlled. Effects on wildlife In the United States, free-ranging cats kill one to four billion birds and six to 22 billion mammals annually. In Australia, domestic cats were introduced in the 1800s to settlements that had developed near gold mining sites and farms as a pest control strategy to decimate rabbits, mice, and rats. Feral cats kill on average one million reptiles each day. Feral cats in Australia kill over 1.5 billion native mammals, birds, reptiles and frogs, and 1.1 billion invertebrates each year. Predation by cats is a recognised threat to over 200 nationally threatened species, and 37 listed migratory species. They have contributed to the extinction of more than 20 Australian mammal species, including the pig-footed bandicoots, lesser bilby and broad-faced potoroo. Impact on prey species Even well-fed domestic cats may hunt and kill, mainly catching small mammals such as rodents and lagomorphs, but also birds, reptiles, amphibians, fish, and invertebrates. Hunting by domestic cats contributes to the decline in the numbers of birds in urban areas.
Feral cats can threaten native species with extinction, particularly on island ecosystems where native animals did not evolve alongside predators, especially ambush predators like cats. Controlling or eliminating the populations of the non-native cats can produce a rapid recovery in native animals. Native species such as the New Zealand kākāpō and the Australian bettong tend to be more ecologically vulnerable when faced with predation by cats due to lack of an evolutionary response to predation. Feral cats have had a major impact on these native species and have played a leading role in the endangerment and extinction of many species. In Hawaii's remote, mountainous areas, they destroy the nests of seabirds including Newell's shearwater (Puffinus newelli) and the Hawaiian petrel (Pterodroma sandwichensis), amongst many other ground-nesting birds. In agricultural settings, cats can be effective at keeping mouse and rat populations low, but only if rodent harborage locations (such as tall grass) are kept under control. While cats are effective at preventing rodent population explosions, they are not effective for eliminating pre-existing severe infestations. Rat terriers and native predators such as raptors and snakes are far better at eliminating mice and rats than the domestic cat. In systems where wildlife is threatened by predation by both rats and cats, there are concerns that controlling cats could increase predation by rats, due to rat populations increasing. For example, on Christmas Island, it was shown that decreasing cat populations would improve the growth rate of a threatened bird, as long as rats did not increase by more than 77 rats per cat removed. However, cats are actually ineffective predators of rats and prefer going after smaller prey, such as mice and small native animals. Hybridisation with wildcats Feral cats have interbred with wildcats to various extents throughout the world, the first reported case occurring more than 200 years ago.
The significance of hybridisation is disputed. Modern genetic analysis revealed that the domestic cat descends from, and is a subspecies of, the African wildcat. Pure Scottish wildcats are unlikely to exist, but the current wildcat population is distinct enough from domestic cats to be worth protecting. High levels of hybridisation have led to difficulties in distinguishing pure wildcats from feral and domestic cats, which can complicate conservation efforts. Trap-neuter-return programs have been established to prevent hybridisation. Notable gene introgression into European wildcat populations exists also in Italy, Hungary, Spain and Portugal.
https://en.wikipedia.org/wiki/Magnetic%20storage
Magnetic storage
Magnetic storage or magnetic recording is the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetisation in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. Magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred, while in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference. Other examples of magnetic storage media include floppy disks, magnetic tape, and magnetic stripes on credit cards. History Magnetic storage in the form of wire recording—audio recording on a wire—was publicized by Oberlin Smith in the Sept 8, 1888 issue of Electrical World. Smith had previously filed a patent in September 1878 but found no opportunity to pursue the idea, as his business was machine tools. The first publicly demonstrated (Paris Exposition of 1900) magnetic recorder was invented by Valdemar Poulsen in 1898. Poulsen's device recorded a signal on a wire wrapped around a drum. In 1928, Fritz Pfleumer developed the first magnetic tape recorder. Early magnetic storage devices were designed to record analog audio signals. Computers and now most audio and video magnetic storage devices record digital data. In early computers, magnetic storage was also used for primary storage in the form of magnetic drum memory, core memory, core rope memory, thin film memory, twistor memory or bubble memory. Unlike in modern computers, magnetic tape was also often used for secondary storage. Design Information is written to and read from the storage medium as it moves past devices called read-and-write heads that operate very close (often tens of nanometers) over the magnetic surface.
The read-and-write head is used to detect and modify the magnetisation of the material immediately under it. There are two magnetic polarities, each of which is used to represent either 0 or 1. The magnetic surface is conceptually divided into many small sub-micrometer-sized magnetic regions, referred to as magnetic domains (although these are not magnetic domains in a rigorous physical sense), each of which has a mostly uniform magnetisation. Due to the polycrystalline nature of the magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each form a single true magnetic domain. Each magnetic region in total forms a magnetic dipole which generates a magnetic field. In older hard disk drive (HDD) designs the regions were oriented horizontally and parallel to the disk surface, but beginning about 2005, the orientation was changed to perpendicular to allow for closer magnetic domain spacing. Older hard disk drives used iron(III) oxide (Fe2O3) as the magnetic material, but current disks use a cobalt-based alloy. For reliable storage of data, the recording material needs to resist self-demagnetisation, which occurs when the magnetic domains repel each other. Magnetic domains written too close together in a weakly magnetisable material will degrade over time due to rotation of the magnetic moment of one or more domains to cancel out these forces. The domains rotate sideways to a halfway position that weakens the readability of the domain and relieves the magnetic stresses. A write head magnetises a region by generating a strong local magnetic field, and a read head detects the magnetisation of the regions. Early HDDs used an electromagnet both to magnetise the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included Metal In Gap (MIG) heads and thin film heads.
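The region-of-grains picture above can be caricatured in a few lines of code. This is a hedged toy model, not head physics: each region is treated as ~100 grains, a write aligns most (not all) of them with the chosen polarity, and a read simply takes the sign of the net magnetisation, which is why a "mostly uniform" region suffices to store one bit. The grain count and alignment fraction are illustrative assumptions.

```python
import random

GRAINS_PER_REGION = 100  # illustrative; real counts vary by design

def write_bit(bit, fidelity=0.95):
    """Magnetise one region: most grains take the chosen polarity
    (+1 for a 1 bit, -1 for a 0 bit); a few stray grains do not.
    'fidelity' is a hypothetical alignment fraction."""
    polarity = 1 if bit else -1
    return [polarity if random.random() < fidelity else -polarity
            for _ in range(GRAINS_PER_REGION)]

def read_bit(region):
    """A read head senses the region's net field, i.e. the sign of
    the summed grain magnetisations."""
    return 1 if sum(region) > 0 else 0
```

Because the readout is effectively a majority vote over many grains, a handful of misaligned or later-flipped grains does not corrupt the bit; self-demagnetisation becomes a problem only once enough moments rotate that the net field weakens below what the head can resolve.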
As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in read heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive. The heads are kept from contacting the platter surface by the air that is extremely close to the platter; that air moves at or near the platter speed. The record and playback head are mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact. This forms a type of air bearing. Magnetic recording classes Analog recording Analog recording is based on the fact that the remanent magnetisation of a given material depends on the magnitude of the applied field. The magnetic material is normally in the form of tape, with the tape in its blank form being initially demagnetised. When recording, the tape runs at a constant speed. The writing head magnetises the tape with current proportional to the signal. A magnetisation distribution is achieved along the magnetic tape. Finally, the distribution of the magnetisation can be read out, reproducing the original signal. The magnetic tape is typically made by embedding magnetic particles (approximately 0.5 micrometers in size) in a plastic binder on polyester film tape. The most commonly used of these was ferric oxide, though chromium dioxide, cobalt, and later pure metal particles were also used. Analog recording was the most popular method of audio and video recording. Since the late 1990s, however, tape recording has declined in popularity due to digital recording.
Digital recording Instead of creating a magnetisation distribution as in analog recording, digital recording only needs two stable magnetic states, which are the +Ms and −Ms on the hysteresis loop. Examples of digital recording are floppy disks, hard disk drives (HDDs), and tape drives. HDDs offer large capacities at reasonable prices; consumer-grade HDDs offer data storage at about per terabyte. Magneto-optical recording Magneto-optical recording writes/reads optically. When writing, the magnetic medium is heated locally by a laser, which induces a rapid decrease of the coercive field. Then, a small magnetic field can be used to switch the magnetisation. The reading process is based on the magneto-optical Kerr effect. The magnetic medium is typically an amorphous R-Fe-Co thin film (R being a rare earth element). Magneto-optical recording is not very popular. One famous example is the MiniDisc, developed by Sony. Domain propagation memory Domain propagation memory is also called bubble memory. The basic idea is to control domain wall motion in a magnetic medium that is free of microstructure. Bubble refers to a stable cylindrical domain. Data is then recorded by the presence/absence of a bubble domain. Domain propagation memory has high insensitivity to shock and vibration, so its application is usually in space and aeronautics. Technical details Access method Magnetic storage media can be classified as either sequential access memory or random access memory, although in some cases the distinction is not perfectly clear. The access time can be defined as the average time needed to gain access to stored records. In the case of magnetic wire, the read/write head only covers a very small part of the recording surface at any given time. Accessing different parts of the wire involves winding the wire forward or backward until the point of interest is found. The time to access this point depends on how far away it is from the starting point.
The case of ferrite-core memory is the opposite. Every core location is immediately accessible at any given time. Hard disks and modern linear serpentine tape drives do not precisely fit into either category. Both have many parallel tracks across the width of the media, and the read/write heads take time to switch between tracks and to scan within tracks. Different spots on the storage media take different amounts of time to access. For a hard disk this time is typically less than 10 ms, but tapes might take as much as 100 s. Coding schemes Magnetic disk heads and magnetic tape heads cannot pass DC (direct current), so the coding schemes for both tape and disk data are designed to minimize the DC offset. Most magnetic storage devices use error correction. Many magnetic disks internally use some form of run-length limited coding and partial-response maximum-likelihood. Current usage Common uses of magnetic storage media are for computer data mass storage on hard disks and the recording of analog audio and video works on analog tape. Since much of audio and video production is moving to digital systems, the usage of hard disks is expected to increase at the expense of analog tape. Digital tape and tape libraries are popular for the high capacity data storage of archives and backups. Floppy disks see some marginal usage, particularly in dealing with older computer systems and software. Magnetic storage is also widely used in some specific applications, such as bank cheques (MICR) and credit/debit cards (mag stripes). Future A new type of magnetic storage, called magnetoresistive random-access memory or MRAM, is being produced that stores data in magnetic bits based on the tunnel magnetoresistance (TMR) effect. Its advantages are non-volatility, low power usage, and good shock robustness. The 1st generation that was developed was produced by Everspin Technologies, and utilized field induced writing.
The 2nd generation is being developed through two approaches: thermal-assisted switching (TAS), which is currently being developed by Crocus Technology, and spin-transfer torque (STT), on which Crocus, Hynix, IBM, and several other companies are working. However, with storage density and capacity orders of magnitude smaller than an HDD, MRAM is useful in applications where moderate amounts of storage with a need for very frequent updates are required, which flash memory cannot support due to its limited write endurance. Six-state MRAM is also being developed, echoing multi-level flash memory cells; these cells have six different states, as opposed to two. Research is also being done by Aleksei Kimel at Radboud University in the Netherlands towards the possibility of using terahertz radiation rather than standard electropulses for writing data on magnetic storage media. By using terahertz radiation, writing time can be reduced considerably (50 times faster than when using standard electropulses). Another advantage is that terahertz radiation generates almost no heat, thus reducing cooling requirements.
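The run-length limited coding mentioned under "Coding schemes" can be illustrated with Modified Frequency Modulation (MFM), the classic (1,3) RLL code of floppy disks and early hard disks. In the encoded stream a 1 stands for a flux transition; MFM inserts a clock bit only between two consecutive 0 data bits, which guarantees at least one and at most three 0s between transitions, bounding both the flux-change density and the low-frequency content the head cannot pass. A minimal sketch:

```python
def mfm_encode(data_bits):
    """MFM encode: each data bit becomes a (clock, data) pair.
    The clock bit is 1 only between two 0 data bits."""
    encoded, prev = [], 0
    for bit in data_bits:
        encoded.append(1 if (prev == 0 and bit == 0) else 0)  # clock
        encoded.append(bit)                                   # data
        prev = bit
    return encoded

def max_zero_run(bits):
    """Longest run of 0s, i.e. the longest gap between transitions."""
    run = longest = 0
    for b in bits:
        run = run + 1 if b == 0 else 0
        longest = max(longest, run)
    return longest

# The (1,3) property: transitions are never adjacent and never more
# than three 0s apart, for every possible input byte.
for value in range(256):
    bits = [(value >> i) & 1 for i in range(8)]
    stream = mfm_encode(bits)
    assert all(not (a and b) for a, b in zip(stream, stream[1:]))
    assert max_zero_run(stream) <= 3
```

MFM is shown here only because its rule fits in two lines; modern drives use denser RLL codes together with the partial-response maximum-likelihood detection mentioned above.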
https://en.wikipedia.org/wiki/Gray%20fox
Gray fox
The gray fox (Urocyon cinereoargenteus), or grey fox, is an omnivorous mammal of the family Canidae, widespread throughout North America and Central America. This species and its only congener, the diminutive island fox (Urocyon littoralis) of the California Channel Islands, are the only living members of the genus Urocyon, which is considered to be genetically sister to all other living canids. Its species name cinereoargenteus means "ashen silver". It was once the most common fox in the eastern United States, and though still found there, human advancement and deforestation allowed the red fox to become the predominant fox-like canid. Despite this post-colonial competition, the gray fox has been able to thrive in urban and suburban environments, one of the best examples being southern Florida. The Pacific States and Great Lakes region still have the gray fox as their prevalent fox. Etymology The genus Urocyon comes from Ancient Greek οὐρά (ourá, “tail”) + κύων (kúōn, “dog”). The species epithet cinereoargenteus is a combination of 'cinereo' (from 'cinereus') meaning ashen, and 'argenteus' (from argentum), meaning 'silver', referencing the color of the tail. Description The gray fox is mainly distinguished from most other canids by its grizzled upper parts, black stripe down its tail and strong neck, ending in a black-tipped tail, while the skull can be easily distinguished from all other North American canids by its widely separated temporal ridges that form a 'U'-shape. Like other canids, the fox's ears and muzzle are angular and pointed. Its claws tend to be lengthier and curved. There is little sexual dimorphism, save for the females being slightly smaller than males. The gray fox ranges from in total length. The tail measures of that length and its hind feet measure . The gray fox typically weighs , though exceptionally large individuals can weigh as much as . 
The gray fox is readily distinguished from the red fox by its obvious lack of the "black stockings" that stand out on the red fox. The gray fox has a stripe of black hair that runs along the middle of its tail, and individual guard hairs that are banded with white, gray, and black. The gray fox displays white on the ears, throat, chest, belly, and hind legs. Gray foxes also have black around their eyes, on the lips, and on their noses. In contrast to the species in genus Vulpes, such as the red fox, the gray fox has oval (instead of slit-like) pupils. The gray fox also has reddish coloration on parts of its body, including the legs, sides, feet, chest, and back and sides of the head and neck. The stripe on the fox's tail ends in a black tip as well. Its weight can be similar to that of a red fox, but the gray fox appears smaller because its fur is not as long and it has shorter limbs. The dental formula of U. cinereoargenteus is 3.1.4.2/3.1.4.3 = 42. Origin and genetics The gray fox appeared in North America during the mid-Pliocene (Hemphillian land animal age) epoch ago (AEO) with the first fossil evidence found at the lower 111 Ranch site, Graham County, Arizona, with contemporary mammals like the giant sloth, the elephant-like Cuvieronius, the large-headed llama, and the early small horses of Nannippus and Equus. Faunal remains at two northern California cave sites confirm the presence of the gray fox during the late Pleistocene. Genetic analysis has shown that the gray fox migrated into the northeastern United States post-Pleistocene in association with the Medieval Climate Anomaly warming trend. Genetic analyses of the fox-like canids confirmed that the gray fox is a distinct genus from the red foxes (Vulpes spp.). The genus Urocyon is considered to be sister to the other living canid taxa. Genetically, the gray fox often clusters with two other ancient lineages: the east Asian raccoon dog (Nyctereutes procyonoides) and the African bat-eared fox (Otocyon megalotis).
The chromosome number is 66 (diploid) with a fundamental number of 70. The autosomes include 31 pairs of sub-graded subacrocentrics, but only one pair of metacentrics. Recent mitochondrial genetic studies suggest divergence of North American eastern and western gray foxes in the Irvingtonian mid-Pleistocene into separate sister taxa. The gray fox's dwarf relative, the island fox, is likely descended from mainland gray foxes. These foxes apparently were transported by humans to the islands and from island to island, and are descended from a minimum of 3–4 matrilineal founders. Distribution and habitat The species occurs throughout most rocky, wooded, brushy regions of the southern half of North America from southern Canada (Manitoba through southeastern Quebec) to the northern part of South America (Venezuela and Colombia), excluding the mountains of the northwestern United States. The species prefers a mix of wooded and agricultural land in the Midwest, juniper forests as well as ponderosa pine in the west, and deciduous forests in the east. It is the only canid whose natural range spans both North and South America. In some areas, high population densities exist near brush-covered bluffs. The species prefers a mix of forest and agricultural land towards the southern part of its range (Belize). In southeastern Mexico, the species prefers areas with a human presence such as near roads. Behavior The gray fox is specifically adapted to climb trees. Its strong, hooked claws allow it to scramble up trees to escape many predators, such as the domestic dog or the coyote, or to reach tree-bound or arboreal food sources. It can climb branchless, vertical trunks to heights of and jump from branch to branch. It descends primarily by jumping from branch to branch, or by descending slowly backwards like a domestic cat. The gray fox is primarily nocturnal or crepuscular and makes its den in hollow trees, stumps or appropriated burrows during the day.
Such gray fox tree dens may be located above the ground. For the most part, they rest on the ground rather than higher up in trees. Prior to European colonization of North America, the red fox was found primarily in boreal forest and the gray fox in deciduous forest. With the increase in human populations in North America, their habitat selection has adapted: Gray foxes that live near human populations tend to choose areas near hardwood trees, locations used primarily by humans, or roads to utilize as their habitat. The increase of coyote populations around North America has reduced certain fox populations, so gray foxes have to choose a habitat that will allow them to escape the coyote threat as much as possible, hence the choice of habitat nearer to areas where humans are active. The larger predators of the gray fox, like coyotes and bobcats, tend to avoid human-use areas and paved roads, making this habitat useful for the gray fox. They heavily utilize the edges of forests as a travel corridor, which is used for primary movement from place to place. Their choices do not change based on sex, the season, or the time of day. They also do the majority of their hunting in edges, and use them to escape from predators as well. Gray foxes are thus known as an "edge species". Interspecies competition Gray foxes often hunt for the same prey as bobcats and coyotes who occupy the same region. To avoid interspecific competition, the gray fox has developed certain behaviors and habits to increase their survival chances. In regions where gray foxes and coyotes hunt for the same food, the gray fox has been observed to give space to the coyote, staying within its own established range for hunting. Gray foxes might also avoid their competitors by occupying different habitats from them. In California, gray foxes do this by living in chaparral where their competitors are fewer and the low shrubbery provides them a greater chance to escape from a dangerous encounter. 
It also has been suggested that gray foxes could be more active at night than during the day to avoid their larger, diurnal competitors. Still, gray foxes frequently fall victim to bobcats and coyotes. When killed, the carcasses are often unconsumed, suggesting they are victims of intraguild predation. These gray foxes are often killed on or near the boundary of their established range, when they begin to interfere with their competitors. Gray foxes are known as mesopredators because they are mid-tier predators and their prey consists mostly of smaller mammals, while coyotes are known as de facto apex predators due to the removal of other apex predators, like wolves, in North America. This explains the gray fox's tendency to change behavior in response to the coyote threat, as they are essentially lower on the food chain. Reproduction The gray fox is assumed to be monogamous, like other foxes. The breeding season of the gray fox varies geographically; in Michigan, the gray fox mates in early March, in Alabama, breeding peaks occur in February. The gestation period lasts approximately 53 days. Litter size ranges from 1–7, with a mean of 3.8 young per female. The sexual maturity of females is around 10 months of age. Kits begin to hunt with their parents at the age of 3 months. By the time that they are 4 months old, the kits will have developed their permanent dentition and can now easily forage on their own. The family group remains together until the autumn, when the young males reach sexual maturity, then they disperse. In a study of 9 juvenile gray foxes, only the males dispersed, moving up to . The juvenile females stayed within proximity of the den within and always returned. Adult gray foxes showed no signs of dispersion for either sex. The gray fox will typically live between six and ten years. The annual reproductive cycle of males has been described through epididymal smears. They become fertile earlier and remain fertile longer than females. 
Logs, trees, rocks, burrows, or abandoned dwellings serve as suitable den sites. Dens are used at any time during the year but mostly during whelping season. Dens are built in brushy or wooded regions and are better concealed than the dens of the red fox. Diet The gray fox is an omnivorous, solitary hunter. It frequently preys on the eastern cottontail (Sylvilagus floridanus) in the eastern U.S., though it will readily catch voles, shrews, and birds. In California, the gray fox primarily eats rodents (such as deer mice, woodrats, and cotton rats), followed by lagomorphs, e.g. jackrabbit, brush rabbit, etc. When it is available, gray foxes may also feed on carrion. In some parts of the Western United States (such as in the Zion National Park in Utah), the gray fox is primarily insectivorous and herbivorous. Fruit is an important component of the diet of the gray fox, and they seek whatever fruits are readily available, generally eating more vegetable matter than does the red fox (Vulpes vulpes). Generally, there is an increase in fruits and invertebrates (such as grasshoppers, beetles, butterflies, and moths) within the gray fox's diet in the transition from winter to spring. As nuts, grains, and fruits become more numerous, they are cached by foxes. Typically, they attempt to cover the area with their scent either through their scent glands or urine. This marking serves the dual purpose of allowing them to find the food again later and preventing other animals from taking it. Ecosystem role Since woodrats, cotton rats, and mice make up a large part of the gray fox's diet, they serve as important regulators of small rodent populations. In addition to their beneficial predation on rodents, gray foxes are also less welcome hosts to some external and internal parasites, which include fleas, lice, nematodes, and tapeworms. 
In the United States, the most common parasite of the gray fox is a flea (Pulex simulans); however, several new parasitic arthropods were found in populations in central Mexico, and a warming climate may encourage them to migrate north. Hunting Gray foxes are hunted in the U.S. The intensity of the hunting has correlated with the value of their pelts. Between the 1970–1971 and 1975–1976 hunting seasons, the price of gray fox pelts greatly increased and the number of individuals hunted jumped over six-fold from 26,109 to 163,458. It has been recently reported that over 500,000 gray foxes are killed every year for their fur. Subspecies There are 16 subspecies recognized for the gray fox. Urocyon cinereoargenteus borealis (New England) Urocyon cinereoargenteus californicus (southern California) Urocyon cinereoargenteus cinereoargenteus (eastern United States) Urocyon cinereoargenteus costaricensis (Costa Rica) Urocyon cinereoargenteus floridanus (Gulf states) Urocyon cinereoargenteus fraterculus (Yucatán) Urocyon cinereoargenteus furvus (Panama) Urocyon cinereoargenteus guatemalae (southernmost Mexico south to Nicaragua) Urocyon cinereoargenteus madrensis (southern Sonora, south-west Chihuahua, and north-west Durango) Urocyon cinereoargenteus nigrirostris (south-west Mexico) Urocyon cinereoargenteus ocythous (Central Plains states) Urocyon cinereoargenteus orinomus (southern Mexico, Isthmus of Tehuantepec) Urocyon cinereoargenteus peninsularis (Baja California) Urocyon cinereoargenteus scottii (south-western United States and northern Mexico) Urocyon cinereoargenteus townsendi (northern California and Oregon) Urocyon cinereoargenteus venezuelae (Colombia and Venezuela) Parasites Parasites of gray fox include trematode Metorchis conjunctus. 
Other common parasites that were collected on gray foxes in Texas were a variety of tapeworms (Mesocestoides litteratus, Taenia pisiformis, Taenia serialis) and roundworms (Ancylostoma caninum, Ancylostoma braziliense, Haemonchus similis, Spirocerca lupi, Physaloptera rara, Eucoleus aerophilus). T. pisiformis was the most common parasite species and was associated with frequent impacts on health.
https://en.wikipedia.org/wiki/Magazine%20%28firearms%29
Magazine (firearms)
A magazine, often simply called a mag, is an ammunition storage and feeding device for a repeating firearm, either integral within the gun (internal/fixed magazine) or externally attached (detachable magazine). The magazine functions by holding several cartridges within itself and sequentially pushing each one into a position where it may be readily loaded into the barrel chamber by the firearm's moving action. The detachable magazine is sometimes colloquially referred to as a "clip", although this is technically inaccurate since a clip is actually an accessory device used to help load ammunition into a magazine or cylinder. Magazines come in many shapes and sizes, from integral tubular magazines on lever-action and pump-action rifles and shotguns, that may hold more than five rounds, to detachable box magazines and drum magazines for automatic rifles and light machine guns, that may hold more than fifty rounds. Various jurisdictions ban what they define as "high-capacity magazines". Nomenclature With the increased use of semi-automatic and automatic firearms, the detachable magazine became increasingly common. Soon after the adoption of the M1911 pistol, the term "magazine" was settled on by the military and firearms experts, though the term "clip" is often used in its place (though only for detachable magazines, never fixed). The defining difference between clips and magazines is the presence of a feed mechanism in a magazine, typically a spring-loaded follower, which a clip lacks. A magazine has four parts as follows: a spring, a spring follower, a body and a base. A clip may be made of one continuous piece of stamped metal and have no moving parts. Examples of clips are moon clips for revolvers; "stripper" clips such as what is used for military 5.56 ammo, in association with a speedloader; or the en bloc clip for M1 Garand rifles, among others. Use of the term "clip" to refer to detachable magazines is a point of strong disagreement. 
History The earliest firearms were loaded with loose powder and a lead ball, and to fire more than a single shot without reloading required multiple barrels, such as in pepper-box guns, double-barreled rifles, double-barreled shotguns, or multiple chambers, such as in revolvers. The main problem with these solutions is that they increase the bulk and/or weight of a firearm, over a firearm with a single barrel and/or single chamber. However, many attempts were made to get multiple shots from loading a single barrel through the use of superposed loads. While some early repeaters such as the Kalthoff repeater managed to operate using complex systems with multiple feed sources for ball, powder, and primer, easily mass-produced repeating mechanisms did not appear until self-contained cartridges were developed in the 19th century. Early tubular magazines The first successful mass-produced repeating weapon to use a "tubular magazine" permanently mounted to the weapon was the Austrian Army's Girandoni air rifle, first produced in 1779. The first mass-produced repeating firearm was the Volcanic Rifle, which used a hollow bullet with the base filled with powder and primer, fed into the chamber from a tube called a "magazine" with an integral spring to push the cartridges into the action, thence to be loaded into the chamber and fired. It was named after a building or room used to store ammunition. The anemic power of the Rocket Ball ammunition used in the Volcanic doomed it to limited popularity. The Henry repeating rifle is a lever-action, breech-loading, tubular magazine-fed repeating rifle, and was an improved version of the earlier Volcanic rifle. Designed by Benjamin Tyler Henry in 1860, it was one of the first firearms to use self-contained metallic cartridges. The Henry was introduced in 1860 and was in production until 1866 in the United States by the New Haven Arms Company.
It was adopted in small quantities by the Union Army in the American Civil War and was favored for its greater firepower than the standard-issue carbine. Many later found their way westward, and the rifle was famed both for its use at the Battle of the Little Bighorn and for being the basis of the iconic Winchester lever-action repeating rifle, which is still in production to the present day. The Henry and Winchester rifles would go on to see service with a number of militaries, including Turkey's. Switzerland and Italy adopted similar designs. The second magazine-fed firearm to achieve widespread success was the Spencer repeating rifle, which saw service in the American Civil War. The Spencer used a tubular magazine located in the butt of the gun instead of under the barrel, and it used new rimfire metallic cartridges. The Spencer was successful, but the rimfire ammunition did occasionally ignite in the magazine tube, destroying the magazine and potentially injuring the user. The new bolt-action rifles began to gain favor with militaries in the mid-1880s and were often equipped with tubular magazines. The Mauser Model 1871 was originally a single-shot action that added a tubular magazine in its 1884 update. The Norwegian Jarmann M1884 was adopted in 1884 and also used a tubular magazine. The French Lebel Model 1886 rifle also used an 8-round tubular magazine. Tubular magazines remain common on many makes and models of shotgun. Integral box magazines The military cartridge was evolving as the magazine rifle evolved. Cartridges evolved from large bores (.40 caliber/10 mm and larger) to smaller bores that fired lighter, higher-velocity bullets and incorporated new smokeless propellants. The Lebel Model 1886 rifle was the first rifle and cartridge to be designed for use with smokeless powder and used an 8 mm wadcutter-shaped bullet that was drawn from a tubular magazine. 
This would later become a problem when the Lebel's ammunition was updated to use a more aerodynamic pointed bullet. Modifications had to be made to the centerfire case to prevent the spitzer point from igniting the primer of the next cartridge in line in the magazine through recoil or simply rough handling. This remains a concern with lever-action firearms today. Two early box magazine patents were those by Rollin White in 1855 and William Harding in 1859. A detachable box magazine was patented in 1864 by the American Robert Wilson. Unlike later box magazines, this magazine fed into a tube magazine and was located in the stock of the gun. Another box magazine, closer to the modern type though non-detachable, was patented in Britain (No. 483) by Mowbray Walker, George Henry Money and Francis Little in 1867. James Paris Lee patented a box magazine which held rounds stacked vertically in 1875, 1879 and 1882, and it was first adopted by Austria in the form of an 11 mm straight-pull bolt-action rifle, the Mannlicher M1886. It also used a cartridge clip which held 5 rounds ready to load into the magazine. One of the first detachable box magazines with a double-stack, staggered feed was that of the Schmidt-Rubin of 1889. Other examples include the patent of Fritz von Stepski and Erich Sterzinger of Austria-Hungary in May 1888 and the British patents by George Vincent Fosbery in 1883 and 1884. James Paris Lee is sometimes claimed to have invented the double-stack, staggered-feed detachable box magazine, but he did not design one until 1892, for the Mark II Lee-Metford, three years after the Schmidt-Rubin. The first pistol with a double-stack, staggered-feed magazine was the Mauser C96, although it was an integral design fed by stripper clips. The first detachable double-stack, single-feed magazine for pistols was probably the one patented by the American Elbert H. Searle in 1904 and adopted by Arthur Savage, though Savage did not apply it in practice to his designs until much later. 
One of the first double-stack, single-feed box magazines was patented in November 1888 by an English inventor, Joseph James Speed of Waltham Cross. Another was patented in May 1887 by the Austro-Hungarian Karl Krnka. The bolt-action Krag–Jørgensen rifle, designed in Norway in 1886, used a unique rotary magazine that was built into the receiver. Like Lee's box magazine, the rotary magazine held the rounds side-by-side, rather than end-to-end. Like most rotary magazines, it was loaded one round at a time through a loading gate, in this case located on the side of the receiver. While reliable, the Krag–Jørgensen's magazine was expensive to produce and slow to reload. It was adopted by only three countries: Denmark in 1889, the United States in 1892, and Norway in 1894. Clip-fed revolution A clip (called a charger in the United Kingdom) is a device that is used to store multiple rounds of ammunition together as a unit, ready for insertion into the magazine or cylinder of a firearm. This speeds up the process of reloading the firearm, as several rounds can be loaded at once rather than one round at a time. Several different types of clips exist, most of which are made of inexpensive metal stampings that are designed to be disposable, though they are often re-used. The first clips used were of the en bloc variety, developed by Ferdinand Mannlicher and first adopted by the Austro-Hungarian Army, which used the system during the First World War in the form of the Mannlicher M1895, derivatives of which would be adopted by many national militaries. The Germans used this system for their Model 1888 Commission Rifle, featuring a 5-round en bloc clip-fed internal box magazine. One problem with the en bloc system is that the firearm cannot be practically used without a ready supply of (mostly disposable) clips. 
Paul Mauser solved this problem by introducing a stripper clip that served only to help the user load the magazine quickly; the clip was not required in order to load the magazine to full capacity. He would continue to make improved models of rifles that took advantage of this new clip design from 1889 through 1898 in various calibers; these proved enormously successful and were adopted by a wide range of national militaries. In 1890 the French adopted the 8mm Lebel Berthier rifles with 3-round internal magazines, fed from en bloc clips; the empty clips were pushed from the bottom of the action by the insertion of a loaded clip from the top. In the late 19th century, there were many short-lived designs, such as the M1895 Lee Navy and Gewehr 1888, eventually replaced by the M1903 Springfield rifle and Gewehr 98 respectively. The Russian Mosin–Nagant, adopted in 1891, was an exception. It was not revolutionary; it was a bolt-action rifle, used a small-bore smokeless powder cartridge, and had a fixed box magazine loaded from the top with stripper clips, all features that had appeared in earlier military rifles. What made the Mosin–Nagant stand out was that it combined all these earlier features in a form that lasted virtually unchanged from its issue by Russia in 1894 through World War II, with its sniper rifle variants still in use today. Magazine cut-off A feature of many late 19th and early 20th century bolt-action rifles was the magazine cut-off, sometimes called a feed interrupter. This was a mechanical device that prevented the rifle from loading a round from the magazine, requiring the shooter to manually load each individual round as he fired, saving the rounds in the magazine for short periods of rapid fire when ordered to use them. Most military authorities that specified them assumed that their riflemen would waste ammunition indiscriminately if allowed to load from the magazine all the time. 
By the mid-20th century, most manufacturers deleted this feature to save costs and manufacturing time; it is also likely that battlefield experience had proven the futility of this philosophy. Final fixed-magazine developments One of the last new clip-fed, fixed-magazine rifles to be widely adopted that was not a modification of an earlier rifle was the M1 Garand. The M1 Garand was the first gas-operated semi-automatic rifle adopted and issued in large numbers as the standard service rifle of any military in the world. The M1 Garand was fed by a special eight-round en bloc clip. The clip itself was inserted into the rifle's magazine during loading, where it was locked in place. The rounds were fed directly from the clip, with a spring-loaded follower in the rifle pushing the rounds up into feeding position. When empty, the bolt would lock open, and a spring would automatically eject the empty clip with a distinctive pinging sound, leaving the rifle ready to be quickly reloaded. The M14 rifle, which was based on incremental changes to the Garand action, switched to a detachable box magazine. However, the M14 with magazine attached could also be loaded via 5-round stripper clips. The Soviet SKS carbine, which entered service in 1945, was something of a stopgap between the semi-automatic service rifles being developed in the period leading up to World War II and the new assault rifle developed by the Germans. The SKS used a fixed magazine, holding ten rounds and fed by a conventional stripper clip. It was a modification of the earlier AVS-36 rifle, shortened and chambered for the new reduced-power 7.62×39mm cartridge. It was rendered obsolete for military use almost immediately by the 1947 introduction of the magazine-fed AK-47 assault rifle, though it remained in service for many years in Soviet Bloc nations alongside the AK-47. The detachable magazine quickly came to dominate post-war military rifle designs. 
Detachable box magazines Firearms using detachable magazines are made with an opening known as a magazine well into which the detachable magazine is inserted. The magazine well locks the magazine in position for feeding cartridges into the chamber of the firearm, and requires a device known as a magazine release to allow the magazine to be separated from the firearm. The Lee–Metford rifle, developed in 1888, was one of the first rifles to use a detachable box magazine, and a spare one could optionally be carried in the soldier's equipment, although with the adoption of the Short Magazine Lee–Enfield Mk I the magazine became detachable only for cleaning and was not swapped to reload the weapon. However, the first completely modern removable box magazine was patented in 1908 by Arthur Savage for the Savage Model 99 (1899), although it was not implemented on the 99 until 1965. James Paris Lee's patent of November 4, 1879, Number 221,328, predates Arthur Savage's magazine. Lee's magazine was also used on the Remington Lee model 1899 factory sporting rifle. Other guns did not adopt all of the Savage magazine's features until the patent expired in 1942: it has shoulders to retain cartridges when it is removed from the rifle; it operates reliably with cartridges of different lengths; and it is insertable and removable at any time with any number of cartridges. These features allow the operator to reload the gun infrequently, carry magazines rather than loose cartridges, and easily change the types of cartridges in the field. The magazine is assembled from inexpensive stamped sheet metal. It also includes a crucial safety feature for hunting dangerous game: when the magazine is empty, the follower stops the bolt from engaging the chamber, informing the operator that the gun is empty before any attempt to fire. The first successful semi-automatic pistol was the Borchardt C-93 (1893), which incorporated a detachable box magazine. Nearly all subsequent semi-automatic pistol designs adopted detachable box magazines. 
The Swiss Army evaluated the Luger pistol using a detachable box magazine in 7.65×21mm Parabellum and adopted it in 1900 as its standard sidearm. The Luger pistol was accepted by the Imperial German Navy in 1904. This version is known as the Pistole 04 (or P.04). In 1908 the German Army adopted the Luger to replace the Reichsrevolver in front-line service. The Pistole 08 (or P.08) was chambered in 9×19mm Parabellum. The P.08 was the usual sidearm for German Army personnel in both World Wars. The M1911 semi-automatic pistol set the standard for most modern handguns and likewise for the mechanics of the handgun magazine. In most handguns the magazine follower engages a slide stop to hold the slide back and keep the firearm out of battery when the magazine is empty and all rounds have been fired. Upon inserting a loaded magazine, the user depresses the slide stop, throwing the slide forward, stripping a round from the top of the magazine stack and chambering it. In single-action pistols this action keeps the hammer cocked back as the new round is chambered, keeping the gun ready to begin firing again. During World War One, detachable box magazines found favor, being used in all manner of firearms, such as pistols, light machine guns, submachine guns, and semi-automatic and automatic rifles. After the war, however, military planners failed to recognize the importance of automatic rifles and the detachable box magazine concept, and instead maintained their traditional preference for clip-fed bolt-action rifles. As a result, many promising new automatic rifle designs that used detachable box magazines were abandoned. An important development that took place during this war was the invention of Schmeisser's Cone in 1916 by Hugo Schmeisser, which allowed guns using high-capacity double-stack, single-feed box magazines to function reliably, although it was not implemented on any of his designs until after World War One. 
The first reliable high-capacity double-stack, staggered-feed box magazine was developed by the American designer Oscar V. Payne for the Thompson submachine gun around the same time as Schmeisser's Cone. As World War II loomed, most of the world's major powers began to develop submachine guns fed by 20- to 40-round detachable box magazines. However, of the major powers, only the United States would adopt a general-issue semi-automatic rifle that used detachable box magazines: the M1 carbine with its 15-round magazines. As the war progressed the Germans developed the Sturmgewehr 44 assault rifle concept with its 30-round detachable magazine. After WWII, automatic weapons using detachable box magazines were developed and used by all of the world's armies. Today, detachable box magazines are the norm; they are so widely used that they are simply referred to as magazines or "mags" for short. Function and types All cartridge-based single-barrel firearms designed to fire more than a single round of ammunition without manual reloading require some form of magazine designed to store and feed cartridges into the firearm's action. Magazines come in many shapes and sizes, with the most common type in modern firearms being the detachable box type. Most magazines designed for use with a reciprocating-bolt firearm (tube-fed firearms being the exception) make use of a set of feed lips which stop the vertical motion of the cartridges out of the magazine but allow one cartridge at a time to be pushed forward (stripped) out of the feed lips by the firearm's bolt into the chamber. Some form of spring-and-follower combination is almost always used to feed cartridges to the lips; it can be located either in the magazine (most removable box magazines) or built into the firearm (fixed box magazines). There are also two distinct styles of feed lips. 
In a single-feed design the top cartridge touches both lips; this is commonly used in single-column box magazines. A staggered-feed magazine (sometimes called a "double-feed" magazine, not to be confused with the firearm malfunction) has a wider set of lips, so that the second cartridge in line forces the top cartridge against one of the lips. The staggered-feed design has proven more resistant to jamming in use with double-column magazines than single-feed variants, since narrowing a magazine tube to a single feed induces extra friction which the magazine spring needs to overcome. Some magazine types are strongly associated with certain firearm types, such as the fixed "tubular" magazine found on most modern lever-action rifles and pump-action shotguns. A firearm using detachable magazines may accept a variety of types of magazine; the Thompson submachine gun, for example, would in most variations accept box or drum magazines. Some types of firearm, such as the M249 and other squad automatic weapons, can feed from both magazines and belts. Tubular Many of the first repeating rifles and shotguns, particularly lever-action rifles and pump-action shotguns, used magazines that stored cartridges nose-to-end inside a spring-loaded tube that typically runs parallel underneath the barrel, or inside the buttstock. Tubular magazines are also commonly used in .22 caliber bolt-action rimfire rifles, such as the Marlin Model XT. Tubular magazines and centerfire cartridges with pointed (spitzer) bullets present a safety issue: a pointed bullet may (through the forces of recoil or simply rough handling) strike the next round's primer and ignite that round, or even cause a chain ignition of other rounds, within the magazine. The Winchester Model 1873 used blunt-nosed centerfire cartridges such as the .44-40 Winchester. 
Certain modern rifle cartridges using soft pointed plastic tips have been designed to avoid this problem while improving the aerodynamic qualities of the bullet to match those available in bolt-action designs, thereby extending the effective range of lever-actions. Box The most popular type of magazine in modern rifles and handguns, a box magazine stores cartridges in a column, either one above the other or in staggered zigzag fashion. This zigzag stack is often identified as a double-column or double-stack (the double-stack is much more common because of its ability to store more rounds), since a staggered column is actually two single side-by-side vertical columns offset by half the diameter of a round. As the firearm cycles, cartridges are moved to the top of the magazine by a follower driven by spring compression to either a single-feed (center-feed) position or side-by-side (staggered-feed) positions. Box magazines may be integral to the firearm or removable: An internal box, integral box or fixed magazine (also known as a blind box magazine when lacking a floorplate) is built into the firearm and is not easily removable. This type of magazine is found most often on bolt-action rifles. An internal box magazine is usually charged through the action, one round at a time. Military rifles often use stripper clips, a.k.a. chargers, permitting multiple rounds, commonly 5 or 10 at a time, to be loaded in rapid sequence. Some internal box magazines use en bloc clips that are loaded into the magazine with the ammunition and that are ejected from the firearm when empty. A detachable box magazine is a self-contained mechanism capable of being loaded or unloaded while detached from the host firearm. They are attached via a slot in the firearm receiver, usually below the action, to the side of the action, or on top of the action. When necessary, the magazine can easily be detached from the firearm and replaced by another. 
This significantly speeds the process of reloading, allowing the operator quick access to ammunition. This type of magazine may be straight or curved, the curve being necessary if the rifle uses rimmed ammunition or ammunition with a tapered case. Detachable box magazines may be metal or plastic. Plastic magazines are sometimes partially transparent so the operator can easily check the remaining ammunition. Box magazines are often affixed to each other with clamps, clips, tape, straps, or built-in studs to facilitate faster reloading: see jungle style. There are, however, exceptions to these rules. The Lee–Enfield rifle had a detachable box magazine only to facilitate cleaning. The Lee–Enfield magazine could, however, be opened, permitting rapid unloading without having to operate the bolt-action repeatedly. Other designs, like the Breda Modello 30, had a fixed magazine protruding from the right side that resembled a conventional detachable box, but it was non-detachable and could only be reloaded using 20-round stripper clips. Box magazines may come in straight, angled, or curved forms depending on whether the cartridges are tapered, rimmed/rimless or bottlenecked. Straight or slightly curved magazines work well with straight-sided rimless cartridges, angled magazines work well with straight-sided rimmed/rimless cartridges, and curved magazines work well with rimmed/rimless tapered cartridges. Pistol magazines are often single- or double-stack with single feed, which may be because this design is slimmer at the top, simplifying the design of the pistol frame with regard to grip thickness. Horizontal The FN P90, Kel-Tec P50, and AR-57 personal defense weapons use horizontally mounted feeding systems. The magazine sits parallel to the barrel, fitting flush with the top of the receiver, and the ammunition is rotated 90 degrees by a spiral feed ramp before being chambered. 
The Heckler & Koch G11, an experimental assault rifle that uses caseless ammunition, also functions similarly, with the magazine aligned horizontally over the barrel. Rather than being positioned laterally to the barrel as in the aforementioned examples, the ammunition is positioned vertically, with the bullet facing downward at a 90-degree angle relative to the barrel, where it is fed into a rotary chamber before firing. The AR-57, also known as the AR Five-seven, is an upper receiver for the AR-15 rifle lower receiver, firing FN 5.7×28mm rounds from standard FN P90 magazines. Casket Another form of box magazine, sometimes referred to as a "quad-column", can hold a large amount of ammunition. It is wider than a standard box magazine, but retains the same length. Casket magazines can be found on the Suomi KP/-31, Hafdasa C-4, Spectre M4, QCW-05 and on 5.45×39mm AK rifle derivatives, and now the Kel-Tec CP33 as well. Magpul has been granted a patent for a STANAG-compatible casket magazine, and such a magazine was also debuted by SureFire in December 2010; it is now sold as the MAG5-60 and MAG5-100 high-capacity magazine (HCM) in 60- and 100-round capacities, respectively, in 5.56mm for M4/M16/AR-15 variants and other firearms that accept STANAG 4179 magazines. Izhmash has also developed a casket magazine for the AK-12. Desert Tech has also released the QMAG-53-compatible Quattro-15 lower receiver for the AR-15. Tandem A tandem magazine is a type of box magazine with another magazine placed in front. When firing, the bolt travels further back past the front-section magazine until the rear section is empty, then uses the front section. Firearms using tandem magazines include the Special Purpose Individual Weapon (SPIW) and the Gerasimenko VAG-73. 
Rotary The rotary (or spool) magazine consists of a cylindrical sprocket actuated by a torsion spring, with cartridges fitting between the teeth of the sprocket, which is mounted on a spindle parallel to the bore axis and rotates each round sequentially into the feeding position. Rotary magazines may be fixed or detachable, and are usually of low capacity, generally 5 to 10 rounds, depending on the caliber used. John Smith patented a rotary magazine in 1856. Another rotary magazine was produced by Sylvester Roper in 1866 and was also used in weapons by Anton Spitalsky and in the Savage Model 1892. Otto Schönauer first patented a spool magazine in 1886, and his later design, patented in 1900, was used on bolt-action rifles produced at least until 1979, among them the Mannlicher–Schönauer adopted by the Greek Army in 1903. The M1941 Johnson rifle also uses a rotary magazine. The design is still used in some modern firearms, most notably the Ruger American series, the semi-automatic Ruger 10/22, the bolt-action Ruger 77/22 and the Steyr SSG 69. Capsule A capsule magazine functions similarly to a box magazine, but the spring and follower are stowed away when the magazine bottom is flipped open. The cartridges are loosely dumped into the magazine and spring-fed to the chamber when the bottom is closed. On the Krag–Jørgensen the magazine is wrapped around the bolt action to save vertical space and ease loading from the side. The Krag–Jørgensen bolt-action rifle is the only firearm to use this type of magazine, and it was adopted by the militaries of Denmark, Norway, and the United States in the late 19th century. Drum Drum magazines are used primarily for light machine guns. In one type, a moving partition within a cylindrical chamber forces loose rounds into an exit slot, with the cartridges being stored parallel to the axis of rotation. After loading of the magazine, a wound spring or other mechanism forces the partition against the rounds. 
In all models a single column is pushed by a follower through a curved path. From there the rounds enter the vertical riser from either single or dual drums. Cylindrical designs such as rotary and drum magazines allow for larger capacity than box magazines, without growing to excessive length. The downside of a drum magazine's extra capacity is its added weight, which, combined with that of the gun, can affect handling during prolonged use. Drum magazines can also be more difficult to incorporate into combat gear than more regular, rectangular box magazines. Many drum-fed firearms can also load from conventional box magazines, such as the Soviet PPSh-41 submachine gun, the RPK light machine gun, and the American Thompson submachine gun. The term "drum" is sometimes applied to a belt box for a belt-fed machine gun, though this is just a case that houses a length of ammunition belt, not a drum magazine. Saddle-drum Before WWII the Germans developed 75-round saddle-drum magazines for use in their MG 13 and MG 15 machine guns. The MG 34 machine gun could also use a saddle-drum magazine when fitted with a special feed cover. The 75 rounds of ammunition were evenly distributed between the two sides of the magazine, with a central feed "tower" where the ammunition is fed to the bolt. The ammunition was fed by spring force, with rounds alternating from each side of the double drum so that the gun would not become unbalanced. Pan Pan magazines differ from other circular magazines in that the cartridges are stored perpendicular to the axis of rotation, rather than parallel, and are usually mounted on top of the firearm. This type is used on the Lewis Gun, Vickers K, Bren Gun (only in anti-aircraft mountings), Degtyaryov light machine gun, and American-180 submachine gun. A highly unusual example was found on the Type 89 machine gun, fed from two 45-round quadrant-shaped pan magazines (each magazine held nine five-round stripper clips). 
Helical Helical magazines extend the drum magazine design so that rounds follow a spiral path around an auger-shaped rotating follower or drive member, allowing for a large ammunition capacity in a relatively compact package (compared to a regular box magazine of similar capacity). Early helical magazine designs include one patented by an unidentified inventor through the patent agent William Edward Newton in 1857 and the internal magazine of the Evans Repeating Rifle, patented in the late 1860s. This type of magazine is used by the Calico M960, PP-19 Bizon, CS/LS06 and KBP PP90M1. The North Korean military uses a 100- to 150-round helical magazine in the Type 88 assault rifle. Helical magazines offer substantially greater ammunition capacity; however, they are inherently complex designs. As such, they can be difficult to load and may decrease the reliability of feeding. Hopper The hopper magazine is a very unusual design. Unlike many other magazine-fed machine guns, which commonly used either box magazines or belts to feed ammunition into the firearm's action, a hopper-fed gun was supplied with ammunition on the stripper clips carried by infantrymen or the machine gunner. This could be done at any time, by simply dropping an entire stripper clip into the hopper. The Japanese Type 11 light machine gun was the only weapon system that used a hopper magazine. This light machine gun was fed by the standard 6.5×50mmSR Arisaka stripper clips used by riflemen armed with the Type 38 bolt-action rifle. The hopper is located on the left side of the receiver and held six of the 5-round clips, for a total of 30 rounds of ammunition. The hopper magazine was designed with a series of mechanical teeth, activated by a cam track on the gas piston, to pull cartridges off each clip and into the action. 
After the fifth and final round from each stripper clip was fed and fired, the empty clip would fall out of the bottom of the hopper magazine and the next fully loaded stripper clip would drop into place for feeding. A spring-loaded follower applied pressure to the top of the clips to hold them in place so they would not fall out while the weapon was being transported or fired. STANAG magazine A STANAG magazine or NATO magazine is a type of detachable magazine proposed by NATO in October 1980. Shortly after NATO's acceptance of the 5.56×45mm NATO rifle cartridge, Draft Standardization Agreement (STANAG) 4179 was proposed in order to allow NATO members to easily share rifle ammunition and magazines down to the individual soldier level. The U.S. M16 rifle magazine was proposed for standardization. Many NATO members subsequently developed or purchased rifles with the ability to accept this type of magazine. However, the standard was never ratified and remains a "Draft STANAG". The STANAG magazine concept is only an interface, dimensional, and control (magazine latch, bolt stop, etc.) requirement. It therefore not only allows one type of magazine to interface with various weapon systems, but also allows STANAG magazines to be made in various configurations and capacities. The standard STANAG magazines are 20-, 30-, and 40-round box magazines, but many other designs are available, with capacities ranging from one round to 60- and 100-round casket magazines, 90-round snail-drum magazines, and 100-round and 150-round double-drum magazines. High-capacity magazines In the United States, a number of states have passed laws that ban magazines defined as "high-capacity" by statute. High-capacity or large-capacity magazines are generally those defined by statute to be capable of holding more than 10 to 15 rounds, although the definitions vary by state. Other nations impose restrictions on magazine capacity as well. 
In Canada, magazines are generally limited to 5 rounds for rifles and shotguns and 10 rounds for handguns, with some exceptions depending on the firearm.
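The spring-and-follower feed described under "Function and types" behaves like a last-in, first-out stack: loading pushes each round down past the feed lips onto the top of the column, and the bolt always strips the topmost round. A toy Python sketch of this behavior (purely illustrative; the class and method names are invented and model no particular firearm):

```python
class BoxMagazine:
    """Toy model of a spring-and-follower box magazine (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._column = []  # last element sits at the feed lips

    def load(self, cartridge):
        """Push a round down past the feed lips onto the top of the column."""
        if len(self._column) >= self.capacity:
            raise ValueError("magazine full")
        self._column.append(cartridge)

    def strip(self):
        """Bolt strips the top round out of the feed lips; None when empty."""
        if not self._column:
            return None  # empty: a real follower might trip a bolt hold-open
        return self._column.pop()


mag = BoxMagazine(capacity=5)
for n in range(3):
    mag.load(f"round-{n}")
print(mag.strip())  # the last round loaded feeds first: round-2
```

Because the column is last-in, first-out, the last round loaded is the first to feed, and an exhausted stack is what lets a follower signal "empty" to a bolt hold-open.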
Arc-fault circuit interrupter
An arc-fault circuit interrupter (AFCI) or arc-fault detection device (AFDD) is a circuit breaker that breaks the circuit when it detects the electric arcs that are a signature of loose connections in home wiring. Loose connections, which can develop over time, can sometimes become hot enough to ignite house fires. An AFCI selectively distinguishes between a harmless arc (incidental to normal operation of switches, plugs, and brushed motors) and a potentially dangerous arc (which can occur, for example, in a lamp cord that has a broken conductor). In Canada and the United States, AFCI breakers have been required by the electrical codes for circuits feeding electrical outlets in residential bedrooms since the beginning of the 21st century; the US National Electrical Code has required them to protect most residential outlets since 2014, and the Canadian Electrical Code has since 2015. In regions using 230 V, the combination of higher voltage and lower load currents leads to different conditions being required to initiate an arc fault that does not either burn clear or weld into a short circuit after a short time, and to different arc characteristics once an arc is struck. Because of this, adoption is slower in Western Europe (where in many countries a domestic supply may be 400 V three-phase) and the UK (where a single-phase 230 V domestic supply is common), and their use is optional, being mandated only in specified high-risk locations. The Australian and New Zealand regulations – Wiring Rules (AS/NZS 3000:2018) – do not require installation of AFDDs in Australia. However, in New Zealand all final sub-circuits with ratings up to 20 A will require protection by an AFDD if they supply locations with significant fire risk, locations containing irreplaceable items, certain historic buildings, or socket-outlets in school sleeping accommodation. Most sockets in these countries are on circuits rated at 20 A or less. 
In the US, arc faults are said to be one of the leading causes of residential electrical fires. Each year in the United States, over 40,000 fires are attributed to home electrical wiring. These fires result in over 350 deaths and over 1,400 injuries each year. Conventional circuit breakers respond only to overloads and short circuits, so they do not protect against arcing conditions that produce erratic and often reduced current. AFCIs are devices designed to protect against fires caused by arcing faults in the home electrical wiring. The AFCI circuitry continuously monitors the current and discriminates between normal and unwanted arcing conditions. Once an arc is detected, the AFCI opens its internal contacts, thus de-energizing the circuit and reducing the potential for a fire to occur. Operating principle The electronics inside an AFCI breaker detect electrical current alternating at characteristic frequencies, usually around 100 kHz, known to be associated with wire arcing, which are sustained for more than a few milliseconds. A combination AFCI breaker provides protection against parallel arcing (line to neutral), series arcing (a loose, broken, or otherwise high-resistance segment in a single line), ground arcing (from line or neutral to ground), overload, and short circuit. The AFCI will open the circuit if dangerous arcing is detected. When installed as the first outlet on a branch circuit, AFCI receptacles provide series arc protection for the entire branch circuit. They also provide parallel arc protection for the branch circuit starting at the AFCI receptacle. Unlike AFCI breakers, AFCI receptacles may be used on any wiring system regardless of the panel. Electrical code requirements US and Canada Starting with the 1999 version of the National Electrical Code in the United States, and the 2002 version of the Canadian Electrical Code in Canada, the national codes require AFCIs in all circuits that feed outlets in bedrooms of dwelling units. 
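The detection scheme just described — high-frequency content that persists for more than a few milliseconds — can be sketched in a few lines. This is an illustrative toy model only: the band limits, energy threshold, and minimum duration below are invented for the example and are not UL 1699 test values, and real AFCIs use purpose-built analog and digital front ends rather than an offline FFT.

```python
import numpy as np

def detects_arc(current, fs=1_000_000, band=(50e3, 150e3),
                energy_threshold=0.05, min_duration_s=0.003):
    """Flag a trip when energy in the arcing band persists for
    longer than min_duration_s (all thresholds illustrative)."""
    # Crude band-pass: zero out spectral components outside the band
    spectrum = np.fft.rfft(current)
    freqs = np.fft.rfftfreq(len(current), d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_signal = np.fft.irfft(spectrum * mask, n=len(current))
    # Sliding-window mean-square energy of the band-limited signal
    window = int(min_duration_s * fs)
    energy = np.convolve(band_signal ** 2,
                         np.ones(window) / window, mode='valid')
    return bool(np.any(energy > energy_threshold))

# A clean 60 Hz waveform should not trip; the same waveform with a
# 100 kHz component superimposed for ~6 ms should.
t = np.arange(0, 0.01, 1 / 1_000_000)
clean = np.sin(2 * np.pi * 60 * t)
arcing = clean.copy()
arcing[2000:8000] += 0.5 * np.sin(2 * np.pi * 100e3 * t[2000:8000])
```

The duration requirement is what separates a sustained fault from the momentary high-frequency bursts that switches and brushed motors produce in normal operation.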
As of the 2014 NEC, AFCI protection is required on all branch circuits supplying outlets or devices installed in dwelling unit kitchens, along with the 2008 NEC additions of family rooms, dining rooms, living rooms, parlors, libraries, dens, bedrooms, sunrooms, recreation rooms, closets, hallways, laundry areas, and similar rooms and areas. They are also required in dormitory units. This requirement may be accomplished by using a "combination type" breaker—a specific kind of circuit-breaker defined by UL 1699—in the breaker panel that provides combined arc-fault and overcurrent protection or by using an AFCI receptacle for modifications/extensions, as replacement receptacles or in new construction, at the first outlet on the branch. Not all U.S. jurisdictions have adopted the NEC's AFCI requirements, so it is important to check local code requirements. The AFCI is intended to prevent fire from arcs. AFCI circuit breakers are designed to meet one of two standards as specified by UL 1699: "branch" type or "combination" type (note: the Canadian Electrical Code uses different terminology but similar technical requirements). A branch type AFCI trips on 75 amperes of arcing current from the line wire to either the neutral or ground wire. A combination type adds series arcing detection to branch type performance. Combination type AFCIs trip on 5 amperes of series arcing. AFCI receptacles are an alternative solution to AFCI breakers. These receptacles are designed to address the dangers associated with both types of potentially hazardous arcing: parallel and series. AFCI receptacles offer the benefit of localized test and reset with such buttons located on the face of the device. This can save a journey to the breaker panel but can also encourage simply resetting by a user without investigating the underlying fault, as would presumably happen if someone with access to the electrical panel were notified. 
In 2002, the NEC removed the word "receptacle", leaving "outlets", with the effect that lights and other wired-in devices such as ceiling fans within bedrooms were added to the requirement. The 2005 code made it clearer that all outlets must be protected despite discussion in the code-making panel about excluding bedroom smoke detectors from the requirement. "Outlets" as defined in the NEC includes receptacles, light fixtures and smoke alarms, among other things. In essence, any point where AC electricity is used to power something is an outlet. As of January 2008, only "combination type" AFCIs meet the NEC requirement. The 2008 NEC requires the installation of combination-type AFCIs in all 15 and 20 ampere residential circuits with the exception of laundries, kitchens, bathrooms, garages, and unfinished basements, though many of these require GFCI protection. The 2014 NEC adds kitchens and laundry rooms to the list of rooms requiring AFCI circuitry, as well as any devices (such as lighting) requiring protection. As of January 2023, there are a total of 6 means of protection covered as part of 210.12(A). These include the following: 1. A listed combination-type AFCI, which is the primary method used to meet these requirements. 2. A listed branch/feeder-type AFCI installed at the origin of the branch circuit, working in combination with a listed outlet branch-circuit-type AFCI (OBC AFCI) installed at the first outlet box, which must also be marked as the first outlet box of the branch circuit. 3. A listed "Supplemental Arc Protection Circuit Breaker"; no standard exists for this device and no such product is available, so this option cannot currently be used. 4. This option has a single manufacturer with a solution on the market. 
This option includes a listed outlet branch-circuit-type AFCI that is installed on the branch circuit at the first outlet in combination with a listed branch-circuit overcurrent protective device when the following four conditions are met: (a) The "Home Run" circuit must be continuous from the branch circuit overcurrent device to the OBC AFCI. (b) Maximum length for a 14 AWG conductor is 50 ft and the maximum length for a 12 AWG conductor is 70 ft. (c) The first outlet box has to be marked as such. (d) The circuit breaker and the OBC AFCI must be listed to meet the requirements of a system combination-type AFCI. Options 5 and 6 are the same options that have appeared in this section in past editions, now included as positive text rather than as exceptions. These options are required for the following areas in dwelling units: (1) Kitchens (2) Family rooms (3) Dining rooms (4) Living rooms (5) Parlors (6) Libraries (7) Dens (8) Bedrooms (9) Sunrooms (10) Recreation rooms (11) Closets (12) Hallways (13) Laundry areas (14) Similar areas United Kingdom In the UK, the Wiring Regulations 18th edition (BS 7671:2018) is the first edition to make any mention of arc fault devices, and indicates they may be installed if the design has an unusually high risk of fire from arc faults. The annexes relating to testing indicate that when AFDDs are installed, their correct operation must be verified before completion, but the method of testing is not described. This is in contrast to RCDs, where a number of trip times at different fault current levels must be verified. Germany The German wiring rules (VDE 0100) recommend AFDDs for high-risk situations and give as examples rooms with sleeping accommodation, rooms or places with a particular fire risk, rooms or places made of building components with combustible building materials, if these have a lower fire resistance than fire-retardant (< F30), and rooms or places with hazards for irreplaceable goods. 
Australia and New Zealand The Australian and New Zealand regulations – Wiring Rules (AS NZS 3000:2018) do not require installation of AFDDs in Australia. However, in New Zealand all final sub-circuits with ratings up to 20 A will require protection by an AFDD if they supply locations with significant fire risk, locations containing irreplaceable items, certain historic buildings, and socket-outlets in school sleeping accommodation. Most power circuits in these countries fall under this clause as the common sockets are rated at 10 A and 15 A. The Australian standards are used in Argentina, Fiji, Tonga, Solomon Islands and Papua New Guinea. Limitations AFCIs are designed to protect against fires caused by electrical arc faults. While the sensitivity of the AFCIs helps in the detection of arc faults, these breakers can also produce false positives by misidentifying normal circuit behaviors as arc faults. For instance, lightning strikes provide voltage and current profiles that resemble arc faults, and vacuum cleaners and some laser printers trip AFCIs. This nuisance tripping reduces the overall effectiveness of AFCIs. Research into advancements in this area is being pursued. AFCIs are also known to be sensitive (false tripping) to the presence of radio frequency energy, especially within the so-called high frequency (HF) spectrum (3–30 MHz), which includes legitimate shortwave broadcasting, over-the-horizon aircraft and marine communications, amateur radio, and citizens band radio operations. Sensitivities and mitigation have been known since 2013. AFCI circuit breakers include a standard inverse-time circuit breaker but provide no specific protection against "glowing" connections (also known as high-resistance connections), high line voltages, or low line voltages. An AFCI does not detect high line voltage caused by an open neutral in a multiwire branch circuit. A multiwire branch circuit uses both energized wires of a 120–240 V split phase service. 
If the neutral is broken along the return path to the circuit breaker panel, devices connected from a 120 V leg to the neutral may experience excess voltage, up to twice normal. AFCIs do not detect low line voltage. Low line voltage can cause electromechanical relays to repeatedly turn off and on, or "chatter". If current is flowing through the load contacts, it causes arcing across the contacts as they open. The arcing can oxidize, pit, and melt the contacts. This process can increase the contact resistance, superheat the relay, and lead to fires. Power fault circuit interrupters are designed to prevent fires from low voltage across loads. Interference with power line networking AFCIs may interfere with the operation of some power line communication technologies.
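The open-neutral overvoltage described above follows from a simple voltage divider: with the neutral gone, the two legs' loads sit in series across the full 240 V. A minimal sketch (the function name and resistor values are invented for illustration):

```python
def open_neutral_voltages(r1, r2, supply=240.0):
    """With the neutral conductor broken, the loads on the two 120 V
    legs end up in series across the full 240 V supply, so the voltage
    across each load is set by the resistance ratio, not fixed at 120 V."""
    i = supply / (r1 + r2)      # single series loop current
    return i * r1, i * r2       # voltage across each leg's load

# Example: a 10-ohm heater on one leg, a 100-ohm lamp on the other.
v_heater, v_lamp = open_neutral_voltages(10.0, 100.0)
# The lightly loaded leg sees roughly 218 V instead of its rated 120 V.
```

The more unequal the two legs' loads, the closer the lightly loaded leg's voltage approaches the full 240 V — the "up to twice normal" case.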
Technology
Electrical protective devices
null
897091
https://en.wikipedia.org/wiki/Asteroid%20family
Asteroid family
An asteroid family is a population of asteroids that share similar proper orbital elements, such as semimajor axis, eccentricity, and orbital inclination. The members of the families are thought to be fragments of past asteroid collisions. An asteroid family is a more specific term than asteroid group, whose members, while sharing some broad orbital characteristics, may be otherwise unrelated to each other. General properties Large prominent families contain several hundred recognized asteroids (and many more smaller objects which may be either not-yet-analyzed, or not-yet-discovered). Small, compact families may have only about ten identified members. About 33% to 35% of asteroids in the main belt are family members. There are about 20 to 30 reliably recognized families, with several tens of less certain groupings. Most asteroid families are found in the main asteroid belt, although several family-like groups such as the Pallas family, Hungaria family, and the Phocaea family lie at smaller semi-major axes or larger inclinations than the main belt. One family has been identified associated with the dwarf planet Haumea. Some studies have tried to find evidence of collisional families among the trojan asteroids, but at present the evidence is inconclusive. Origin and evolution The families are thought to form as a result of collisions between asteroids. In many or most cases the parent body was shattered, but there are also several families which resulted from a large cratering event which did not disrupt the parent body (e.g. the Vesta, Pallas, Hygiea, and Massalia families). Such cratering families typically consist of a single large body and a swarm of asteroids that are much smaller. Some families (e.g. the Flora family) have complex internal structures which are not satisfactorily explained at the moment, but may be due to several collisions in the same region at different times. 
Due to this method of origin, the members of most families have closely matching compositions. Notable exceptions are those families (such as the Vesta family) which formed from a large differentiated parent body. Asteroid families are thought to have lifetimes of the order of a billion years, depending on various factors (e.g. smaller asteroids are lost faster). This is significantly shorter than the Solar System's age, so few if any are relics of the early Solar System. Decay of families occurs both because of slow dissipation of the orbits due to perturbations from Jupiter or other large bodies, and because of collisions between asteroids which grind them down to small bodies. Such small asteroids then become subject to perturbations such as the Yarkovsky effect that can push them towards orbital resonances with Jupiter over time. Once there, they are relatively rapidly ejected from the asteroid belt. Tentative age estimates have been obtained for some families, ranging from hundreds of millions of years down to only a few million years, as for the compact Karin family. Old families are thought to contain few small members, and this is the basis of the age determinations. It is supposed that many very old families have lost all the smaller and medium-sized members, leaving only a few of the largest intact. A suggested example of such old family remains is the 9 Metis and 113 Amalthea asteroid pair. Further evidence for a large number of past families (now dispersed) comes from analysis of chemical ratios in iron meteorites. These show that there must have once been at least 50 to 100 parent bodies large enough to be differentiated, that have since been shattered to expose their cores and produce the actual meteorites (Kelley & Gaffey 2000). Identification of members, interlopers and background asteroids When the orbital elements of main belt asteroids are plotted (typically inclination vs. eccentricity, or vs. 
semi-major axis), a number of distinct concentrations are seen against the rather uniform distribution of non-family background asteroids. These concentrations are the asteroid families (see above). Interlopers are asteroids classified as family members based on their so-called proper orbital elements but having spectroscopic properties distinct from the bulk of the family, suggesting that they, contrary to the true family members, did not originate from the same parent body that once fragmented upon a collisional impact. Description Strictly speaking, families and their membership are identified by analysing the proper orbital elements rather than the current osculating orbital elements, which regularly fluctuate on timescales of tens of thousands of years. The proper elements are related constants of motion that remain almost constant for at least tens of millions of years, and perhaps longer. The Japanese astronomer Kiyotsugu Hirayama (1874–1943) pioneered the estimation of proper elements for asteroids, and first identified several of the most prominent families in 1918. In his honor, asteroid families are sometimes called Hirayama families. This particularly applies to the five prominent groupings discovered by him. Hierarchical clustering method Present day computer-assisted searches have identified more than a hundred asteroid families. The most prominent algorithms have been the hierarchical clustering method (HCM), which looks for groupings with small nearest-neighbour distances in orbital element space, and wavelet analysis, which builds a density-of-asteroids map in orbital element space, and looks for density peaks. The boundaries of the families are somewhat vague because at the edges they blend into the background density of asteroids in the main belt. For this reason the number of members even among discovered asteroids is usually only known approximately, and membership is uncertain for asteroids near the edges. 
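The HCM idea can be sketched as single-linkage clustering in proper-element space. The sketch below assumes the commonly quoted Zappalà-style "standard metric" (a velocity-like distance built from differences in proper semi-major axis, eccentricity, and sine of inclination); the coefficients and the cutoff value are illustrative, not the ones used in any particular survey.

```python
import math

def d_metric(p, q):
    """Zappala-style standard metric between two bodies, expressed as a
    velocity (m/s); p and q are proper elements (a [AU], e, sin i)."""
    a = 0.5 * (p[0] + q[0])
    v = 29784.0 / math.sqrt(a)              # heliocentric orbital speed, m/s
    da, de, dsi = (p[0] - q[0]) / a, p[1] - q[1], p[2] - q[2]
    return v * math.sqrt(1.25 * da * da + 2 * de * de + 2 * dsi * dsi)

def hcm_family(elements, seed, cutoff):
    """Single-linkage HCM: grow a family from `seed` by repeatedly adding
    any body closer than `cutoff` (m/s) to some current member."""
    family = {seed}
    grew = True
    while grew:
        grew = False
        for i, e in enumerate(elements):
            if i not in family and any(d_metric(e, elements[j]) < cutoff
                                       for j in family):
                family.add(i)
                grew = True
    return family
```

With a tight cutoff, bodies whose proper elements nearly coincide chain together into one family, while bodies elsewhere in element space remain part of the background — which is also why the family boundaries blur as the cutoff approaches the typical spacing of background asteroids.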
Additionally, some interlopers from the heterogeneous background asteroid population are expected even in the central regions of a family. Since the true family members caused by the collision are expected to have similar compositions, most such interlopers can in principle be recognised by spectral properties which do not match those of the bulk of family members. A prominent example is 1 Ceres, the largest asteroid, which is an interloper in the family once named after it (the Ceres family, now the Gefion family). Spectral characteristics can also be used to determine the membership (or otherwise) of asteroids in the outer regions of a family, as has been used e.g. for the Vesta family, whose members have an unusual composition. Family types As previously mentioned, families caused by an impact that did not disrupt the parent body but only ejected fragments are called cratering families. Other terminology has been used to distinguish various types of groups which are less distinct or less statistically certain from the most prominent "nominal families" (or clusters). Clusters, clumps, clans and tribes The term cluster is also used to describe a small asteroid family, such as the Karin cluster. Clumps are groupings which have relatively few members but are clearly distinct from the background (e.g. the Juno clump). Clans are groupings which merge very gradually into the background density and/or have a complex internal structure making it difficult to decide whether they are one complex group or several unrelated overlapping groups (e.g. the Flora family has been called a clan). Tribes are groups that are less certain to be statistically significant against the background either because of small density or large uncertainty in the orbital parameters of the members. List Prominent families Among the many asteroid families, the Eos, Eunomia, Flora, Hungaria, Hygiea, Koronis, Nysa, Themis and Vesta families are the most prominent ones in the asteroid belt. 
For a complete list, see . Eos family The Eos family (adj. Eoan; 9,789 members, named after 221 Eos) Eunomia family The Eunomia family (adj. Eunomian; 5,670 known members, named after 15 Eunomia) is a family of S-type asteroids. It is the most prominent family in the intermediate asteroid belt and the 6th-largest family with approximately 1.4% of all main belt asteroids. Flora family The Flora family (adj. Florian; 13,786 members, named after 8 Flora) is the 3rd-largest family. Broad in extent, it has no clear boundary and gradually fades into the surrounding background population. It contains several distinct groupings, possibly created by later, secondary collisions. It has also been described as an asteroid clan. Hungaria family The Hungaria family (adj. Hungarian; 2,965 members, named after 434 Hungaria) Hygiea family The Hygiea family (adj. Hygiean; 4,854 members, named after 10 Hygiea) Koronis family The Koronis family (adj. Koronian; 5,949 members, named after 158 Koronis) Nysa family The Nysa family (adj. Nysian; 19,073 members, named after 44 Nysa). Alternatively named Hertha family after 135 Hertha. Themis family The Themis family (adj. Themistian; 4,782 members, named after 24 Themis) Vesta family The Vesta family (adj. Vestian; 15,252 members, named after 4 Vesta) All families In 2015, a study identified 122 notable families with a total of approximately 100,000 member asteroids, based on the entire catalog of numbered minor planets, which consisted of almost 400,000 numbered bodies at the time (see catalog index for a current listing of numbered minor planets). The data has been made available at the "Small Bodies Data Ferret". 
The first column of this table contains the family identification number or family identifier number (FIN), an attempt at labeling identified families numerically, independent of their currently used names, as a family's name may change with refined observations, leading to multiple names being used in the literature and to subsequent confusion. Other families or dynamical groups Other asteroid families from miscellaneous sources (not listed in the above table), as well as non-asteroid families, include:
Physical sciences
Planetary science
Astronomy
897567
https://en.wikipedia.org/wiki/Insemination
Insemination
Insemination is the introduction of sperm (in semen) into a female or hermaphrodite's reproductive system in order to fertilize the ovum through sexual reproduction. The sperm enters into the uterus of a mammal or the oviduct of an oviparous (egg-laying) animal. Female humans and other mammals are inseminated during sexual intercourse or copulation, but can also be inseminated by artificial insemination. In humans, the act and form of insemination has legal, moral and interpersonal implications. However, whether insemination takes place naturally or by artificial means, the pregnancy and its progress will be the same. Insemination may be called in vivo fertilisation (from in vivo, meaning "within the living") because the egg is fertilized inside the body, in contrast with in vitro fertilisation (IVF). Plants In plants, the fertilization process is referred to as pollination: the transfer of pollen grains from the anther of one plant to the stigma of another. Natural insemination Insemination during sexual intercourse through penile–vaginal penetration is referred to as natural insemination (i.e., insemination by natural means). If an artificial lubricant needs to be used, care must be taken that it does not have spermicidal properties. During ejaculation, semen, containing male gametes known as sperm, is expelled from the penis through the male urethra into the moist and warm environment of the female reproductive tract. In humans, semen is usually ejaculated into the posterior vaginal fornix in direct contact with cervical mucus, though sperm may swim from other areas of the vagina or vulva to the cervix. The average volume of semen produced at ejaculation is 2 to 5 millilitres (about a teaspoon), containing an average of 182 million sperm. 
Only a small proportion of the sperm in each ejaculation reach the site of fertilization in the fallopian tubes, their numbers decreasing exponentially as they progress through the female reproductive tract. The majority of sperm either die in the acidic environment of the vagina or drip out with the semen. Prior to ovulation, the cervical mucus becomes thinner and more hospitable to sperm. Sperm swim rapidly into the uterus upon encountering cervical mucus, though many become lost in the cervical crypts where they either die or are delayed. Further attrition occurs in the uterus, where sperm are attacked by the female immune system. Only about 100–1000 sperm reach the fallopian tubes, where they may survive for up to six days. If ovulation occurs and the sperm encounter an ovum in the fallopian tube, fertilization may occur. A woman may also be naturally inseminated while having penile–vaginal intercourse for pleasure without any intent to conceive. This may be unintentional as a result of the failure of a barrier or behavioural method of contraception, or may be intentional if relying on female contraceptive methods or indifferent to the possibility of pregnancy. In most cultures, insemination by a male through sexual intercourse, whether the woman's husband, normal sex partner or not, is subject to social and sexual inhibitions and taboos, and has legal, moral and interpersonal implications. The term is also used in the context of third-party insemination, where a male who is not the woman's usual sexual partner (i.e., a sperm donor) fathers a child for the woman by providing his sperm through sexual intercourse rather than by providing his sperm for it to be used to produce a pregnancy in the woman by artificial means. The incidence of natural insemination by a sperm donor is usually a private matter, and may also carry greater health risks than where sperm has been processed by a fertility center. 
Advocates claim natural insemination generates higher pregnancy rates and a more 'natural' conception which does not involve the intervention and intrusion of third parties. However, it has not been medically proven that natural insemination has an increased chance of pregnancy. Additionally, conceiving through natural insemination is considered a natural process, so the father may be liable for child support and have custody and other rights of the child. The law usually draws a distinction between a man fathering a child by natural means, and a man who provides his sperm for it to be used to father a child by artificial means (i.e. by artificial insemination). Artificial insemination Artificial insemination is the introduction of sperm into the reproductive tract of a female by means other than sexual intercourse for the purpose of impregnating the female. In humans, artificial insemination may be used when a woman or her normal sex partner cannot, for any of a number of reasons, conceive by natural means. A number of artificial insemination strategies are available, including intracervical insemination (ICI) and intrauterine insemination (IUI). Compared with natural insemination, artificial insemination may be more invasive, and may require professional assistance and medical expertise, at a correspondingly higher cost. ICI attempts to simulate natural insemination, without the sexual element. It is painless and is the simplest, easiest and most common method of artificial insemination; it can be performed in the home, either by the female on herself or with non-professional assistance. ICI involves the introduction of unwashed or raw semen into the vagina at the entrance to the cervix, usually by means of a needleless syringe. The sperm for insemination may be provided either by a sexual partner of the female's choice or by a sperm donor. 
Donor sperm is most commonly used by lesbian couples and single women, and by heterosexual couples when the male partner is suffering from male infertility. In some circumstances, sperm has been collected from males before they go off to war, or even right after they have died, and used to inseminate their female partners. In some countries, there are laws restricting and regulating who can donate sperm, who can receive artificial insemination, and what the legal consequences are of such insemination. Subject to any such restrictions, donor sperm is available to all women who want or need it. Women who live in a jurisdiction that prohibits them from being artificially inseminated may sometimes choose to obtain it by traveling to a jurisdiction that permits it. Artificial insemination has been and continues to be commonly used in livestock breeding as an efficient way of increasing production. Other forms of insemination In various other animal species, sperm can be introduced into the female's reproductive tract by various means. For example, in some species of hemiptera sperm can be introduced violently by traumatic insemination, parenteral injection through the body wall. In some species of animals, sperm finds its way through the body wall when the spermatophore is left in contact with the female's skin, such as in the onychophora (velvet worms).
Biology and health sciences
Animal reproduction
Biology
897576
https://en.wikipedia.org/wiki/Adder
Adder
Vipera berus, also known as the common European adder and the common European viper, is a species of venomous snake in the family Viperidae. The species is extremely widespread and can be found throughout much of Europe, and as far as East Asia. There are three recognised subspecies. Known by a host of common names including common adder and common viper, the adder has been the subject of much folklore in Britain and other European countries. It is not regarded as especially dangerous; the snake is not aggressive and usually bites only when really provoked, stepped on, or picked up. Bites can be very painful, but are seldom fatal. The specific name, berus, is Neo-Latin and was at one time used to refer to a snake, possibly the grass snake, Natrix natrix. The common adder is found in different terrains, habitat complexity being essential for different aspects of its behaviour. It feeds on small mammals, birds, lizards, and amphibians, and in some cases on spiders, worms, and insects. The common adder, like most other vipers, is ovoviviparous. Females breed once every two or three years, with litters usually being born in late summer to early autumn in the Northern Hemisphere. Litters range in size from three to 20 with young staying with their mothers for a few days. Adults grow to a total length (including tail) of and a mass of . Three subspecies are recognised, including the nominate subspecies, Vipera berus berus described here. The snake is not considered to be threatened, though it is protected in some countries. Taxonomy There are three subspecies of V. berus that are recognised as being valid including the nominotypical subspecies. The subspecies V. b. bosniensis and V. b. sachalinensis have been regarded as full species in some recent publications. The name 'adder' is derived from nædre, an Old English word that had the generic meaning of serpent in the older forms of many Germanic languages. 
It was commonly used in the Old English version of the Christian Scriptures for the devil and the serpent in the Book of Genesis. In the 14th century, 'a nadder' in Middle English was rebracketed to 'an adder' (just as 'a napron' became 'an apron' and 'a nompere' changed into 'an umpire'). In keeping with its wide distribution and familiarity through the ages, Vipera berus has a large number of common names in English, which include: Common European adder, common European viper, European viper, northern viper, adder, common adder, crossed viper, European adder, common viper, European common viper, cross adder, or common cross adder. In Welsh, it is called gwiber, a name derived from Latin vīpera. In Denmark, Norway and Sweden, the snake is known as hugorm, hoggorm and huggorm, roughly translated as 'striking snake'. In Finland, it is known as kyykäärme or simply kyy, in Estonia it is known as rästik, while in Lithuania it is known as angis. In Poland the snake is called żmija zygzakowata, which translates as 'zigzag viper', due to the pattern on its back. Description Relatively thick-bodied, adults usually grow to in total length (including tail), with an average of . Maximum size varies by region. The largest, at over , are found in Scandinavia; specimens of have been observed there on two occasions. In France and Great Britain, the maximum size is . Mass ranges from to about . The head is fairly large and distinct and its sides are almost flat and vertical. The edge of the snout is usually raised into a low ridge. Seen from above, the rostral scale is not visible, or only just. Immediately behind the rostral, there are two (rarely one) small scales. Dorsally, there are usually five large plates: a squarish frontal (longer than wide, sometimes rectangular), two parietals (sometimes with a tiny scale between the frontal and the parietals), and two long and narrow supraoculars. 
The latter are large and distinct, each separated from the frontal by one to four small scales. The nostril is situated in a shallow depression within a large nasal scale. The eye is relatively large—equal in size or slightly larger than the nasal scale—but often smaller in females. Below the supraoculars are six to 13 (usually eight to 10) small circumorbital scales. The temporal scales are smooth (rarely weakly keeled). There are 10–12 sublabials and six to 10 (usually eight or nine) supralabials. Of the latter, the numbers 3 and 4 are the largest, while 4 and 5 (rarely 3 and 4) are separated from the eye by a single row of small scales (sometimes two rows in alpine specimens). At midbody there are 21 dorsal scale rows (rarely 19, 20, 22, or 23). These are strongly keeled scales, except for those bordering the ventral scales. These scales seem loosely attached to the skin, and lower rows become increasingly wide; those closest to the ventral scales are twice as wide as the ones along the midline. The ventral scales number 132–150 in males and 132–158 in females. The anal plate is single. The subcaudals are paired, numbering 32–46 in males and 23–38 in females. The colour pattern varies, ranging from very light-coloured specimens with small, incomplete, dark dorsal crossbars to entirely brown ones with faint or clear, darker brown markings, and on to melanistic individuals that are entirely dark and lack any apparent dorsal pattern. However, most have some kind of zigzag dorsal pattern down the entire length of their bodies and tails. The head usually has a distinctive dark V or X on the back. A dark streak runs from the eye to the neck and continues as a longitudinal series of spots along the flanks. Unusually for snakes, it is often possible to distinguish the sexes by their colour. Females are usually brownish in hue with dark-brown markings, while the males are pure grey with black markings. 
The basal colour of males will often be slightly lighter than that of the females, making the black zigzag pattern stand out. The melanistic individuals are often females. Distribution and habitat Vipera berus has a wide range. It can be found across the Eurasian land-mass, from northwestern Europe (Great Britain, Belgium, Netherlands, Scandinavia, Germany, France) across southern Europe (Italy, Serbia, Albania, Croatia, Montenegro, Bosnia and Herzegovina, North Macedonia, Bulgaria, and northern Greece) and eastern Europe to north of the Arctic Circle, and Russia to the Pacific Ocean, Sakhalin Island, North Korea, northern Mongolia and northern China. It is found farther north than any other snake species. The type locality was originally listed as 'Europa'. Mertens and Müller (1940) proposed restricting the type locality to Uppsala, Sweden, and it was eventually restricted to Berthåga, Uppsala by designation of a neotype by Krecsák & Wahlgren (2008). In several European countries, it is notable as being the only native venomous snake. It is one of only three snake species native to Britain. The other two, the barred grass snake and the smooth snake, are non-venomous. Sufficient habitat complexity is a crucial requirement for the presence of this species, in order to support its various behaviours—basking, foraging, and hibernation—as well as to offer some protection from predators and human harassment. It is found in a variety of habitats, including: chalky downs, rocky hillsides, moors, sandy heaths, meadows, rough commons, edges of woods, sunny glades and clearings, bushy slopes and hedgerows, dumps, coastal dunes, and stone quarries. It will venture into wetlands if dry ground is available nearby and thus may be found on the banks of streams, lakes, and ponds. In much of southern Europe, such as southern France and northern Italy, it is found either in low-lying wetlands or at high altitudes. In the Swiss Alps, it may ascend to about .
In Hungary and Russia, it avoids open steppeland, a habitat in which V. ursinii is more likely to occur. In Russia, however, it does occur in the forest steppe zone. Conservation status In Great Britain, it is illegal to kill, injure, harm or sell adders under the Wildlife and Countryside Act 1981. The same protection applies in Norway under the Wildlife Act 1981 and in Denmark (1981). In Finland, under the Nature Conservation Act 9/2023, killing an adder is legal only if it is not possible to capture it and transfer it to another location; the same provision also applies in Sweden. The common viper is categorised as 'endangered' in Switzerland, and is also protected in some other countries in its range. It is also found in many protected areas. This species is listed as protected (Appendix III) under the Berne Convention. The International Union for Conservation of Nature Red List of Threatened Species describes the conservation status as 'least concern' in view of its wide distribution, presumed large population, broad range of habitats, and likely slow rate of decline, though it acknowledges the population to be decreasing. Reduction in habitat for a variety of reasons, fragmentation of populations in Europe due to intensive agricultural practices, and collection for the pet trade or for venom extraction have been recorded as major contributing factors for its decline. A citizen-science survey found evidence of extensive population declines in the UK, especially affecting smaller populations. A combination of public pressure and disturbance, habitat fragmentation and poor habitat management were considered the most likely causes of the decline. The release of 47 million non-native pheasants and 10 million partridges each year by countryside estates has also been suggested to have a significant impact on adder populations across the UK, with the possibility that the reptile could be extinct there by 2032.
Behaviour This species is mainly diurnal, especially in the north of its range. Further south it is said to be active in the evening, and it may even be active at night during the summer months. It is predominantly a terrestrial species, although it has been known to climb up banks and into low bushes in order to bask or search for prey. Adders are not usually aggressive, tending to be rather timid and biting only when cornered or alarmed. People are generally bitten only after stepping on them or attempting to pick them up. They will usually disappear into the undergrowth at a hint of any danger, but will return once all is quiet, often to the same spot. Occasionally, individual snakes will reveal their presence with a loud and sustained hissing, presumably to warn off potential aggressors. Often, these turn out to be pregnant females. When the adder is threatened, the front part of the body is drawn into an S-shape to prepare for a strike. The species is cold-adapted and hibernates in the winter. In Great Britain, males and females hibernate for about 150 and 180 days, respectively. In northern Sweden, hibernation lasts 8–9 months. On mild winter days, they may emerge to bask where the snow has melted and will often travel across snow. About 15% of adults and 30–40% of juveniles die during hibernation. Feeding Their diet consists mainly of small mammals, such as mice, rats, voles, and shrews, as well as lizards. Sometimes, slow worms are taken, and even weasels and moles. Adders also feed on amphibians, such as frogs, newts, and salamanders. Birds are also reported to be consumed, especially nestlings and even eggs, for which they will climb into shrubbery and bushes. Generally, diet varies depending on locality. Juveniles will eat nestling mammals, small lizards and frogs as well as worms and spiders. One important dietary source for young adders is the alpine salamander (Salamandra atra). Because both species live at higher altitudes, S.
atra could be a prevalent food source for adders, since there may be few other animals. One study suggests that alpine salamanders could make up almost half of adders' diets in some locations. They have been witnessed swallowing these salamanders in the early morning hours. Once they reach about in length, their diet begins to resemble that of the adults. Reproduction In Hungary, mating takes place in the last week of April, whilst in the north it happens later (in the second week of May). Mating has also been observed in June and even early October, but it is not known if this autumn mating results in any offspring. Females often breed once every two years, or even once every three years if the seasons are short and the climate is unfavourable. Males find females by following their scent trails, sometimes tracking them for hundreds of metres a day. If a female is found and then flees, the male follows. Courtship involves side-by-side parallel 'flowing' behaviour, tongue flicking along the back and excited lashing of the tail. Pairs stay together for one or two days after mating. Males chase away their rivals and engage in combat. Often, this also starts with the aforementioned flowing behaviour before culminating in the dramatic 'adder dance'. In this act, the males confront each other, raise the front part of the body vertically, make swaying movements and attempt to push each other to the ground. This is repeated until one of the two becomes exhausted and crawls off to find another mate. Appleby (1971) notes that he has never seen an intruder win one of these contests, as if the frustrated defender is so aroused by courtship that he refuses to lose his chance to mate. There is no record of any biting taking place during these bouts. Females usually give birth in August or September, but sometimes as early as July, or as late as early October. Litters range in size from 3 to 20.
The young are usually born encased in a transparent sac from which they must free themselves. Sometimes, they succeed in freeing themselves from this membrane while still inside the female. Neonates measure in total length (including tail), with an average total length of . They are born with a fully functional venom apparatus and a reserve supply of yolk within their bodies. They shed their skins for the first time within a day or two. Females do not appear to take much interest in their offspring, but the young have been observed to remain near their mothers for several days after birth. Venom Because of the rapid rate of human expansion throughout the range of this species, bites are relatively common. Domestic animals and livestock are frequent victims. In Great Britain, most instances occur in March–October. In Sweden, there are about 1,300 bites a year, an estimated 12% of which require hospitalisation. At least eight different antivenoms are available against bites from this species. Mallow et al. (2003) describe the venom toxicity as being relatively low compared to other viper species. They cite Minton (1974), who reported the LD50 values for mice to be 0.55 mg/kg IV, 0.80 mg/kg IP and 6.45 mg/kg SC. As a comparison, in one test the minimum lethal dose of venom for a guinea pig was 40–67 mg, but only 1.7 mg was necessary when Daboia russelii venom was used. Brown (1973) gives a higher subcutaneous LD50 range of 1.0–4.0 mg/kg. All agree that the venom yield is low: Minton (1974) mentions 10–18 mg for specimens in length, while Brown (1973) lists only 6 mg. Relatively speaking, bites from this species are not highly dangerous. In Britain there were only 14 known fatalities between 1876 and 2005—the last a 5-year-old child in 1975—and one nearly fatal bite of a 39-year-old woman in Essex in 1998. An 82-year-old woman died following a bite in Germany in 2004, although it is not clear whether her death was due to the effect of the venom.
A 44-year-old British man was left seriously ill after he was bitten by an adder in the Dalby Forest, Yorkshire, in 2014. Even so, professional medical help should always be sought as soon as possible after any bite. Very occasionally bites can be life-threatening, particularly in small children, while adults may experience discomfort and disability long after the bite. The length of recovery varies, but may take up to a year. Local symptoms include immediate and intense pain, followed after a few minutes (but perhaps by as much as 30 minutes) by swelling and a tingling sensation. Blisters containing blood are not common. The pain may spread within a few hours, along with tenderness and inflammation. Reddish lymphangitic lines and bruising may appear, and the whole limb can become swollen and bruised within 24 hours. Swelling may also spread to the trunk, and with children, throughout the entire body. Necrosis and intracompartmental syndromes are very rare. Systemic symptoms resulting from anaphylaxis can be dramatic. These may appear within 5 minutes post bite, or can be delayed for many hours. Such symptoms include nausea, retching and vomiting, abdominal colic and diarrhoea, incontinence of urine and faeces, sweating, fever, vasoconstriction, tachycardia, lightheadedness, loss of consciousness, blindness, shock, angioedema of the face, lips, gums, tongue, throat and epiglottis, urticaria and bronchospasm. If left untreated, these symptoms may persist or fluctuate for up to 48 hours. In severe cases, cardiovascular failure may occur. In culture and beliefs Adders were believed to be deaf, which is mentioned in Psalm 58 (v. 4), but snake oil made from them was used as a cure for deafness and earache. Females were thought to swallow their young when threatened and regurgitate them unharmed later. It was believed that they did not die until sunset. 
Remedies for adder "stings" included killing the snake responsible and rubbing the corpse or its fat on the wound, also holding a pigeon or chicken on the bite, or jumping over water. Adders were thought to be attracted to hazel trees and repelled by ash trees. Druids believed that large frenzied gatherings of adders occurred in spring, at the centre of which could be found a polished rock called an adder stone or Glain Neidr in the Welsh language. These stones were said to have held supernatural powers.
Steam (service)
Steam is a digital distribution service and storefront developed by Valve Corporation. It was launched as a software client in September 2003 to provide game updates automatically for Valve's games and expanded to distributing third-party titles in late 2005. Steam offers various features, such as game server matchmaking with Valve Anti-Cheat (VAC) measures, social networking, and game streaming services. The Steam client functions include update maintenance, cloud storage, and community features such as direct messaging, an in-game overlay, discussion forums, and a virtual collectable marketplace. The storefront also offers productivity software, game soundtracks, videos, and sells hardware made by Valve, such as the Index and Steam Deck. Steamworks, an application programming interface (API) released in 2008, is used by developers to integrate Steam's functions, including digital rights management (DRM), into their products. Several game publishers began distributing their products on Steam that year. Initially developed for Windows, Steam was ported to macOS and Linux in 2010 and 2013 respectively, while a mobile version of Steam for interacting with the service's online features was released on iOS and Android in 2012. The service is the largest digital distribution platform for PC games, with an estimated 75% of the market share in 2013 according to IHS Screen Digest. By 2017, game purchases through Steam totaled about 4.3 billion, or at least 18% of global PC game sales according to Steam Spy. By 2021, the service had over 34,000 games with over 132 million monthly active users. Steam's success has led to the development of the Steam Machine gaming PCs in 2015, including the SteamOS Linux distribution and Steam Controller; Steam Link devices for local game streaming; and in 2022, the handheld Steam Deck tailored for running Steam games. 
History In the early 2000s, Valve was looking for a better way to update its published games, as providing downloadable patches for multiplayer games resulted in most users disconnecting for several days until they had installed the patch. They decided to create a platform that would update games automatically, and implement stronger anti-piracy and anti-cheat measures. They approached several companies, including Microsoft, Yahoo!, and RealNetworks, to build a client with these features, but were rejected. Valve began its own platform development in 2002, using the working names "Grid" and "Gazelle". The Steam platform was announced at the Game Developers Conference event on March 22, 2002, and released for beta testing that day. Prior to the implementation of Steam, Valve had a publishing contract with Sierra Studios; the 2001 version of the contract gave Valve rights to digital distribution of its games. Valve took Sierra and its owners, Vivendi Games, to court in 2002 over a claimed breach of contract. Sierra counter-sued, asserting that Valve had undermined the contract by offering a digital storefront for their games, to compete directly with Sierra. Steam was released out of beta on September 12, 2003. In November 2004, Half-Life 2 was the first high-profile game to be offered digitally on Steam, requiring installation of the Steam client for retail copies. During this time, users faced problems attempting to play, partly due to the ongoing legal dispute between Valve and Vivendi, who claimed that the physical copies they had published could not be activated because, in their view, the game had not been released. The Steam requirement was met with concerns about software ownership and requirements, as well as problems with overloaded servers, as demonstrated previously by the Counter-Strike rollout. In 2005, third-party developers were contracted to release games on Steam, such as Rag Doll Kung Fu and Darwinia.
In May 2007, ATI included Steam in the ATI Catalyst GPU driver as well as offering a free Steam copy of Half-Life 2: Lost Coast and Half-Life 2: Deathmatch to ATI Radeon owners. In January 2008, Nvidia promoted Steam in the GeForce GPU driver, as well as offering a free Steam copy of Portal: The First Slice to Nvidia hardware owners. In 2011, some Electronic Arts games, such as Crysis 2, Dragon Age II, and Alice: Madness Returns, were removed from sale because of Steam's terms of service, which prevented the games from having their own in-game storefronts for downloadable content. These were later launched on the Origin service. In 2019, Ubisoft announced that it would stop selling future games on Steam, starting with Tom Clancy's The Division 2, because Valve would not modify its revenue sharing model. In May 2019, Microsoft distributed its games on Steam in addition to the Microsoft Store. In 2020, Electronic Arts started to publish selected games on Steam, and offered its rebranded subscription service EA Play on the platform. In 2022, Ubisoft announced that it would return to selling its recent games on Steam, starting with Assassin's Creed Valhalla, stating that it was "constantly evaluating how to bring our games to different audiences wherever they are". By 2014, total annual game sales on Steam were estimated at $1.5 billion. By 2018, the service had over 90 million monthly active users. In 2018, its network delivered 15 billion gigabytes of data, compared to less than 4 billion in 2014. Features and functionality Software delivery and maintenance Steam's primary service is to allow its users to purchase games and other software, adding them to a virtual library from which they may be downloaded and installed an unlimited number of times.
Initially, Valve was required to be the publisher for these games since they had sole access to Steam's database and engine, but with the introduction of the Steamworks software development kit (SDK) in May 2008, anyone could integrate Steam into their game without Valve's direct involvement. Though Valve intended to "make DRM obsolete", games released on Steam retained traditional anti-piracy measures, including the assignment and distribution of product keys and support for digital rights management software tools such as SecuROM or non-malicious rootkits. With an update to the Steamworks SDK in March 2009, Valve added "Custom Executable Generation" (CEG), which creates a unique, encrypted copy of the game's executable files for the given user, which allows them to install it multiple times and on multiple devices, and make backup copies of their software. Once the software is downloaded and installed, the user must then authenticate through Steam to decrypt the executable files to play the game. Normally this is done while connected to the Internet following the user's credential validation, but once they have logged into Steam once, a user can instruct Steam to launch in a special offline mode to be able to play their games without a network connection. Developers are not limited to Steam's CEG and may include other forms of DRM (or none at all) and authentication services other than Steam; for example, some games from publisher Ubisoft require the use of their Uplay gaming service. In September 2008, Valve added support for Steam Cloud, a service that can automatically store saved game and related custom files on Valve's servers; users can access this data from any machine running the Steam client. Users can disable this feature on a per-game and per-account basis.
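The idea behind CEG can be illustrated with a toy sketch: a key derived from the user's account stands in for CEG's per-account binding, and the "executable" bytes are encrypted so that only a client holding that key can decrypt and run them. The XOR cipher, function names, and credentials below are purely illustrative assumptions, not Valve's actual scheme.

```python
import hashlib

def derive_user_key(account_id: str, ticket: str) -> bytes:
    """Derive a per-user key; a stand-in for CEG's per-account binding."""
    return hashlib.sha256(f"{account_id}:{ticket}".encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A stand-in for a game's executable bytes.
plain_exe = b"\x7fELF...game code..."

key = derive_user_key("alice", "session-ticket-123")
user_copy = xor_crypt(plain_exe, key)   # Alice's unique encrypted copy
assert user_copy != plain_exe

# After authenticating through Steam, the same key decrypts the copy.
assert xor_crypt(user_copy, key) == plain_exe
```

Because each user's copy is keyed differently, a copied installation is useless without that user's authentication, which is the property CEG relies on.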
Cloud saving was expanded in January 2022 with Dynamic Cloud Sync, allowing games developed with this feature to store saved states to Steam Cloud while a game is running rather than waiting until the user quits; this was added ahead of the portable Steam Deck unit so that users can save from the Deck and then put the unit into a suspended state. In May 2012, the service added the ability for users to manage their game libraries from remote clients, including computers and mobile devices. Product keys sold through third-party retailers can also be redeemed on Steam. For games that incorporate Steamworks, users can buy redemption codes from other vendors and redeem these in the Steam client to add the title to their libraries. Steam also offers a framework for selling and distributing downloadable content (DLC) for games. In September 2013, Steam introduced the ability to share most games with family members and close friends by authorizing machines to access one's library. Authorized players can install the game locally and play it separately from the owning account. Users can access their saved games and achievements provided the main owner is not playing. When the main player initiates a game while a shared account is using it, the shared account user is allowed a few minutes to either save their progress and close the game or purchase the game for their own account. Within Family View, introduced in January 2014, parents can adjust settings for their children's tied accounts, limiting the functionality and accessibility to the Steam client and purchased games. A more robust implementation of Family Sharing, titled "Steam Families", was released in September 2024, allowing up to five members of a household to share games from a single account, including the ability to play different games on those accounts along with different game saves and profiles, and enhanced parental control tools for those accounts.
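The family-sharing arbitration rule described above (the owner always wins the license, and a borrower who is mid-game gets a grace period to save or buy the game) can be sketched as a small state machine. The class and its messages are hypothetical, not Steam's implementation.

```python
from dataclasses import dataclass

@dataclass
class SharedLibrary:
    """Toy model of the 2013 sharing rule: one player at a time,
    with the owner able to preempt a borrower."""
    owner: str
    active_player: str = ""

    def start(self, player: str) -> str:
        if not self.active_player:
            self.active_player = player
            return "playing"
        if player == self.owner:
            # Owner preempts: the borrower is told to save or purchase.
            evicted = self.active_player
            self.active_player = player
            return f"playing; {evicted} given a few minutes to save or purchase"
        return "unavailable"

lib = SharedLibrary(owner="alice")
assert lib.start("bob") == "playing"                 # borrower starts first
assert lib.start("alice").startswith("playing; bob") # owner preempts bob
```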
By its acceptable use policy, Valve retains the right to block customers' access to their games and Steam services when Valve's Anti-Cheat (VAC) software determines that the user is cheating in multiplayer games, selling accounts to others, or trading games to exploit regional price differences. Blocking such users initially removed access to their other games, leading to some users with high-value accounts losing access because of minor infractions. Valve later changed its policy to be similar to that of Electronic Arts' Origin platform, in which blocked users can still access their games but are heavily restricted, limited to playing in offline mode and unable to participate in Steam Community features. Customers also lose access to their games and Steam account if they refuse to accept changes to Steam's end user license agreements; this last occurred in August 2012. In April 2015, Valve began allowing developers to set bans on players for their games, with the bans enacted and enforced at the Steam level, allowing developers to police their own gaming communities in a customizable manner. Storefront features The Steam client includes a digital storefront called the Steam Store through which users can purchase games. Once the game is bought, a software license is permanently attached to the user's Steam account, allowing them to download the software on any compatible device. Game licenses can be given to other accounts under certain conditions. Content is delivered from an international network of servers using a proprietary file transfer protocol. Products sold on Steam are available for sale in different currencies, which change depending on the user's location. In December 2010, the storefront began supporting WebMoney for payments, and from April 2016 until December 2017 supported Bitcoin payments before dropping support due to high value fluctuations and costly service fees.
The Steam storefront validates the user's region; the purchase of games may be restricted to specific regions because of release dates, game classification, or agreements with publishers. Since 2010, the Steam Translation Server project allows Steam users to assist with the translation of the Steam client, storefront, and a selected library of Steam games for twenty-eight languages. In October 2018, official support for Vietnamese and Latin American Spanish was added, in addition to Steam's then 26 languages. Steam also allows users to purchase downloadable content for games, and for some specific games such as Team Fortress 2, the ability to purchase in-game inventory items. In February 2015, Steam began to open similar options for in-game item purchases for third-party games. In November 2007, achievements were added, similar to Xbox 360 Achievements. In conjunction with developers and publishers, Valve frequently provides discounted sales on games on a daily and weekly basis, sometimes oriented around a publisher, genre, or holiday theme, and sometimes allows games to be tried for free during the days of these sales. The site normally offers a large selection of games at a discount during its annual Summer and Holiday sales, including gamification of these sales. Users of Steam's storefront can also purchase games and other software as gifts for another Steam user. Before May 2017, users could purchase these gifts to be held in their profile's inventory until they opted to gift them. However, this feature enabled a gray market around some games, where a user in a country where the price of a game was substantially lower than elsewhere could stockpile giftable copies to sell to others in regions with much higher prices. 
In August 2016, Valve changed its gifting policy to require that VAC- and Game Ban-enabled games be gifted immediately to another Steam user, which also served to combat players that worked around VAC and Game Bans; in May 2017, Valve expanded this policy to all games. The changes also placed limitations on gifts between users of different countries if there is a large difference in pricing. Due to runaway inflation in Argentina and Turkey, Valve eliminated the use of local currency pricing for users in those storefronts in November 2023, instead moving them to a special regional pricing model based on U.S. dollars as a means to provide fair payments to publishers and developers, though these local users saw effective price hikes as high as 2900%. The Steam store also enables users to redeem store product keys to add software to their library. The keys are sold by third-party providers such as Humble Bundle, distributed as part of a physical release, or given to a user as part of promotions, often used to deliver Kickstarter and other crowdfunding rewards. A grey market exists around Steam keys, where less reputable buyers purchase a large number of Steam keys for a game when it is offered for a low cost, and then resell these keys to users or other third-party sites at a higher price. This caused some of these third-party sites, such as G2A, to be embroiled in this grey market. It is possible for publishers to have Valve track down where specific keys have been used and cancel them, removing the product from the user's libraries. Other legitimate storefronts, like Humble Bundle, have set a minimum price that must be spent to obtain Steam keys, to discourage mass purchases. In June 2021, Valve began limiting how frequently Steam users could change their default region to prevent them from purchasing games from outside their home region at lower prices. In 2013, Steam began to accept player reviews of games.
Other users can subsequently rate these reviews as helpful, humorous, or otherwise unhelpful; these ratings are then used to highlight the most useful reviews on the game's Steam store page. Steam also aggregates these reviews and enables users to sort products based on this feedback while browsing the store. In May 2016, Steam split these aggregations into an overall score and a score for reviews made in the last 30 days, a change Valve attributed to the way game updates, particularly those in Early Access, can alter users' impressions of a game. To prevent observed abuse of the review system by developers or other third-party agents, Valve modified the review system in September 2016 to discount review scores for a game from users that activated the product through a product key rather than purchasing it directly from the Steam Store, though their reviews remain visible. Alongside this, Valve announced that it would end business relations with any developer or publisher that it found to be abusing the review system. Separately, Valve has taken actions to minimize the effects of review bombs on Steam. In particular, Valve announced in March 2019 that it would mark reviews it believes are "off-topic" as the result of a review bomb, and eliminate their contribution to summary review scores; the first games it took such action on were the Borderlands games after it was announced Borderlands 3 would be a timed-exclusive to the Epic Games Store.
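The aggregation rules described above (an overall score, a separate score for the last 30 days, and key-activated reviews excluded from scoring while remaining visible) can be sketched as follows. Valve's exact formula is not public; this toy model only mirrors the stated rules, and the data layout is an assumption.

```python
from datetime import datetime, timedelta

def review_scores(reviews, now):
    """reviews: list of (positive: bool, date: datetime, key_activated: bool).
    Key-activated reviews stay visible but are discounted from scoring;
    a separate 'recent' score covers only the last 30 days."""
    counted = [r for r in reviews if not r[2]]
    def pct(rs):
        return round(100 * sum(1 for p, _, _ in rs if p) / len(rs)) if rs else None
    recent = [r for r in counted if now - r[1] <= timedelta(days=30)]
    return pct(counted), pct(recent)

now = datetime(2024, 6, 1)
reviews = [
    (True,  datetime(2024, 1, 1),  False),
    (False, datetime(2024, 5, 20), False),
    (True,  datetime(2024, 5, 25), False),
    (True,  datetime(2024, 5, 30), True),   # key-activated: not counted
]
overall, recent = review_scores(reviews, now)
assert overall == 67   # 2 of 3 counted reviews positive
assert recent == 50    # 1 of 2 recent counted reviews positive
```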
Steam Coupons can be provided to users by developers and publishers; users can trade these coupons between friends in a similar fashion to gifts and in-game items. In May 2015, GameStop began selling Steam Wallet cards. Steam Market, a feature introduced in beta in December 2012 that would allow users to sell virtual items to others via Steam Wallet funds, further extended the idea. Valve levies a transaction fee of 15% on such sales, and game publishers that use Steam Market pay an additional transaction fee. For example, Team Fortress 2, the first game supported during the beta phase, incurred both fees. Full support for other games was expected to be available in early 2013. In April 2013, Valve added subscription-based game support to Steam; the first game to use this service was Darkfall Unholy Wars. In October 2012, Steam introduced non-gaming applications, which are sold through the service in the same manner as games. Creativity and productivity applications can access the core functions of the Steamworks API, allowing them to use Steam's simplified installation and updating process, and incorporate features including cloud saving and Steam Workshop. Steam also allows game soundtracks to be purchased to be played via Steam Music or integrated with the user's other media players. Valve adjusted its approach to soundtracks in 2020, no longer requiring them to be offered as DLC, meaning that users can buy soundtracks to games they do not own, and publishers can offer soundtracks to games not on Steam. Valve has also added the ability for publishers to rent and sell digital movies via the service, initially mostly video game documentaries. Following Warner Bros. Entertainment offering the Mad Max films alongside the September 2015 release of the game based on the series, Lionsgate entered into an agreement with Valve to rent over one hundred feature films from its catalog through Steam starting in April 2016, with more films following later.
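The Steam Market fee structure mentioned above (Valve's 15% transaction fee plus a game-specific publisher fee, both of which applied to Team Fortress 2) can be expressed as a small calculation. The 10% publisher rate in the example is a made-up figure for illustration, not a documented value.

```python
def market_proceeds(sale_price_cents: int, valve_fee: float = 0.15,
                    publisher_fee: float = 0.0) -> int:
    """Seller's proceeds after Valve's 15% Steam Market transaction fee
    and an optional game-specific publisher fee (rate assumed here)."""
    fees = round(sale_price_cents * valve_fee) + round(sale_price_cents * publisher_fee)
    return sale_price_cents - fees

assert market_proceeds(1000) == 850                      # 15% Valve fee only
assert market_proceeds(1000, publisher_fee=0.10) == 750  # plus a hypothetical 10% publisher fee
```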
In March 2017, Crunchyroll started offering various anime for purchase or rent through Steam. However, by February 2019, Valve removed video content from its storefront, save for videos directly related to gaming content. While available, users could also purchase Steam Machine-related hardware. Valve took a flat 30% share of all revenue generated from direct Steam sales and microtransactions until October 2018, when it changed its policy to reduce the cut to 25% once revenue for a game surpasses , and further to 20% at . The policy change was seen by journalists as trying to entice larger developers to stay with Steam, while the decision was also met with backlash from indie and other small game developers, as their revenue split remained unchanged. While Steam allows developers to offer demo versions of their games at any time, Valve worked with Geoff Keighley in 2019 in conjunction with The Game Awards to hold a week-long Steam Game Festival to feature a large selection of game demos of current and upcoming games, alongside sales for games already released. This event has since been repeated two or three times a year, typically in conjunction with game expositions or award events, and has since been renamed the Steam Next Fest. Valve expanded support for demo versions of games in July 2024, allowing demos to have their own store page with user reviews, and made it easier for users to manage demos within their game library. A Steam Points system and storefront was added in June 2020, which mirrored similar temporary points systems that had been used in prior sales on the storefront. Users earn points through purchases on Steam or by receiving community recognition for helpful reviews or discussion comments. These points can be redeemed in the separate storefront for cosmetics that apply to the user's profile and chat interface. Privacy, security and abuse The popularity of Steam has led to the service being attacked by hackers.
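The October 2018 tiered revenue split (30% up to a first revenue threshold, 25% up to a second, 20% beyond it) works like a marginal tax schedule. The dollar thresholds are not given in the text above, so they are left as parameters here; the numbers in the example are arbitrary units, not the real figures.

```python
def valve_cut(revenue: float, tier1: float, tier2: float) -> float:
    """Valve's share under the October 2018 tiers: 30% of revenue up to
    tier1, 25% of the slice between tier1 and tier2, 20% beyond tier2.
    Thresholds are parameters; the actual dollar figures are elided above."""
    cut = 0.30 * min(revenue, tier1)
    if revenue > tier1:
        cut += 0.25 * (min(revenue, tier2) - tier1)
    if revenue > tier2:
        cut += 0.20 * (revenue - tier2)
    return cut

# With hypothetical thresholds of 10 and 50 (arbitrary units):
assert valve_cut(5, 10, 50) == 0.30 * 5
assert valve_cut(60, 10, 50) == 0.30 * 10 + 0.25 * 40 + 0.20 * 10
```

Because only revenue above each threshold gets the lower rate, crossing a tier never reduces a developer's take, which is why the change mainly benefited high-revenue titles.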
An attempt occurred in November 2011, when Valve temporarily closed the community forums, citing potential hacking threats to the service. Days later, Valve reported that the hack had compromised one of its customer databases, potentially allowing the perpetrators to access customer information, including encrypted passwords and credit card details. At that time, Valve was not aware whether the intruders actually accessed this information or discovered the encryption method, but nevertheless warned users to be alert for fraudulent activity. Valve launched Steam Guard in March 2011 with the goal of protecting Steam users against account hijacking via phishing schemes, one of the largest security problems Valve had at the time. Steam Guard was advertised to take advantage of the identity protection provided by Intel's second-generation Core processors and compatible motherboard hardware, which allows users to lock their account to a specific computer. Once locked, activity by that account on other computers must first be approved by the user on the locked computer. Support APIs for Steam Guard are available to third-party developers through Steamworks. Steam Guard also offers two-factor, risk-based authentication that uses a one-time verification code sent to a verified email address associated with the Steam account; this was later expanded to include two-factor authentication through the Steam mobile application, known as Steam Guard Mobile Authenticator. In 2015, Valve stated that the potential monetary value of virtual goods attached to user accounts had drawn hackers to try to access accounts for financial benefit. Valve reported that in December 2015, around 77,000 accounts per month were hijacked, enabling the hijackers to empty the user's inventory of items through the trading features. 
To improve security, the company announced that new restrictions would be added in March 2016, under which 15-day holds are placed on traded items unless the user activates and authenticates with the Steam Guard Mobile Authenticator. After a Counter-Strike: Global Offensive gambling controversy, Valve stated in July 2016 that it was cracking down on third-party websites using Steam inventory trading for skin gambling. ReVuln, a commercial vulnerability research firm, published a paper in October 2012 that said the Steam browser protocol posed a security risk by enabling malicious exploits through a simple user click on a maliciously crafted steam:// URL in a browser. This was the second serious vulnerability in gaming-related software, following a problem with Ubisoft's Uplay. German IT platform Heise online recommended strict separation of gaming and sensitive data, for example using a PC dedicated to gaming, gaming from a second Windows installation, or using a computer account with limited rights dedicated to gaming. In July 2015, a bug in the software allowed anyone to reset the password to any account by using the "forgot password" function of the client. High-profile professional gamers and streamers lost access to their accounts. In December 2015, Steam's content delivery network was misconfigured in response to a DDoS attack, causing cached store pages containing personal information to be temporarily exposed for 34,000 users. Valve added new privacy settings to Steam in April 2018, allowing users to hide their activity status, game lists, inventory, and other profile elements. While these changes brought Steam's privacy settings in line with services such as PlayStation Network and the Xbox network, third-party services such as Steam Spy were impacted, due to their reliance on public data to estimate Steam product sales. Valve established a HackerOne bug bounty program in May 2018, a crowdsourced method to test and improve the security features of the Steam client.
In August 2019, a security researcher exposed a zero-day vulnerability in the Windows client of Steam, which allowed any user to run arbitrary code with LocalSystem privileges using just a few simple commands. The vulnerability was then reported to Valve via the program, but it was initially rejected for being "out-of-scope". Following a second vulnerability found by the same user, Valve apologized and patched them both, and expanded the program's rules to accept any other similar problems. In April 2020, the Anti-Defamation League published a report stating that the Steam Community platform harbors hateful content. In January 2021, a trading card glitch let players use bots to generate Steam Wallet funds from free Steam trading cards in Capcom Arcade Stadium and other games, resulting in Capcom Arcade Stadium statistically becoming one of the most played titles on the service.

User interface and functionality

Since November 2013, Steam has allowed users to review their purchased games, organize them into categories set by the user, and add them to favorite lists for quick access. Players can add non-Steam games to their libraries, allowing the game to be easily accessed from the Steam client and providing support where possible for Steam Overlay features. The Steam interface allows for user-defined shortcuts to be added. In this way, third-party modifications and games not purchased through the Steam Store can use Steam features. Valve sponsors and distributes some modifications free of charge; and modifications that use Steamworks can also use any Steam features supported by their parent game. For most games launched from Steam, the client provides an in-game overlay from which the user can access Steam Community lists and participate in chat, manage selected Steam settings, and access a built-in web browser without having to exit the game. Since the beginning of February 2011 as a beta version, the overlay has also allowed players to take screenshots of games in progress.
With the full release on February 24, 2011, this feature was reimplemented so that users could share screenshots on Facebook, Twitter, and Reddit directly from the screenshot manager. Store game pages have displayed a score from Metacritic since 2007. Steam's "Big Picture" mode was announced in 2011; public betas started in September 2012 and were integrated into the software in December 2012. Big Picture mode is a 10-foot user interface, which optimizes the Steam display to work on high-definition televisions, allowing the user to control Steam with a gamepad or with a keyboard and mouse. Newell stated that Big Picture mode was a step towards a dedicated Steam entertainment hardware unit. With the introduction of the Steam Deck, Valve began beta testing a new Big Picture mode based on the Steam Deck UI in October 2022, with full release in February 2023. The new UI was also adopted by SteamVR in October 2023. In 2012, Valve announced Steam for Schools, a free function-limited version of the Steam client for schools. It was part of Valve's initiative to support gamification of learning. It was released alongside free versions of Portal 2 and a standalone program called "Puzzle Maker" that allowed teachers and students to create and manipulate levels. It featured additional authentication security that allowed teachers to share and distribute content via a Steam Workshop-type interface but blocked access by students. In-Home Streaming was introduced in May 2014; it allows users to stream games installed on one computer to another on the same home network with low latency. In June 2019, Valve renamed this feature to Remote Play, allowing users to stream games across devices that may be outside of their home network.
Steam's "Remote Play Together", added in November 2019 after a month of beta testing, allows local multiplayer games to be played by people in disparate locations, though it will not necessarily resolve the latency problems typical of these types of games. Remote Play Together was expanded in February 2021 to allow users to invite non-Steam players to play through the Steam Link app. The Steam client, as part of a social network service, allows users to identify friends and join groups using the Steam Community feature. Through the Steam Chat feature, users can use text chat and peer-to-peer VoIP with other users, identify which games their friends and other group members are playing, and join and invite friends to Steamworks-based multiplayer games that support this feature. Users can participate in forums hosted by Valve to discuss Steam games. Each user has a unique page that shows his or her groups and friends, game library including earned achievements, game wishlists, and other social features; users can choose to keep this information private. In January 2010, Valve reported that 10 million of the 25 million active Steam accounts had signed up to Steam Community. In conjunction with the 2012 Steam Summer Sale, user profiles were updated with Badges reflecting the user's participation in the Steam community and past events. Steam Trading Cards, a system where players earn virtual trading cards based on games they own, were introduced in May 2013. Using them, players can trade with other Steam users on the Steam Community Marketplace and use them to craft "Badges", which grant rewards such as discount coupons and user profile page customization options. In 2010, the Steam client became an OpenID provider, allowing third-party websites to use a Steam user's identity without requiring the user to expose his or her Steam credentials.
In order to prevent abuse, access to most community features is restricted until a one-time payment of at least US$5 is made to Valve. This requirement can be fulfilled by making any purchase of five dollars or more on Steam, or by adding the same amount to the user's wallet. Through Steamworks, Steam provides a means of server browsing for multiplayer games that use the Steam Community features, allowing users to create lobbies with friends or members of common groups. Steamworks also provides Valve Anti-Cheat (VAC), Valve's anti-cheat system; game servers automatically detect and report users who are using cheats in online, multiplayer games. In August 2012, Valve added new features to the Community area, including dedicated hub pages for games that highlight the best user-created content, top forum posts, and screenshots. In December 2012, a feature where users can upload walkthroughs and guides detailing game strategy was added. Starting in January 2015, the Steam client allowed players to livestream to Steam friends or the public while playing games on the platform. For the main event of The International 2018 Dota 2 tournament, Valve launched Steam.tv as a major update to Steam Broadcasting, adding Steam Chat and Steamworks integration for spectating matches played at the event. It has also been used for other events, such as a pre-release tournament for the digital card game Artifact and for The Game Awards 2018 and Steam Awards award shows. Game Recording was added in beta in June 2024 and released in full by November 2024, allowing for recording of gameplay sessions either on demand or as a background recording. Users can then edit and clip footage to share via Steam with other users. In September 2014, Steam Music was added to the Steam client, allowing users to play through music stored on their computer or to stream from a locally networked computer directly in Steam.
An update to the friends and chat system was released in July 2018, allowing for non-peer-to-peer chats integrated with voice chat and other features that were compared to Discord. A standalone mobile app based on this was released for Android and iOS in May 2019. A major visual overhaul of the Library was released in October 2019, with the goal of aiding users in organizing their games and showcasing what shared games a user's friends are playing, games that are being live-streamed, and new content that may be available, along with more customization options for sorting games. Along with the redesign, Valve launched Steam Events, allowing game developers to communicate when new in-game events are approaching, which appear to players in the Library and game listings. In June 2023, a visual and architectural overhaul was released, unifying the backend functions of the Steam and Steam Deck clients and redesigning the desktop client. As part of this, the in-game overlay received a new customizable design where users can pin windows such as chat or game guides on top of the current game window. It also received several new features, including the ability to create pinnable personal notes stored in the cloud.

Developer features

Valve provides developers the ability to create storefront pages to help generate interest in their game ahead of release. A storefront page is also necessary to fix a release date, which factors into Valve's "build review", a free service performed by Valve about a week before the release date to make sure the game's launch is trouble-free. Updates in 2020 to Discovery queues have given developers more options for customizing their storefront page and how these pages integrate with users' experiences with the Steam client. Valve offers Steamworks, an application programming interface (API) that provides development and publishing tools free of charge to game and software developers.
Steamworks provides networking and player authentication tools for both server and peer-to-peer multiplayer games, matchmaking services, support for Steam community friends and groups, Steam statistics and achievements, integrated voice communications, and Steam Cloud support, allowing games to integrate with the Steam client. The API also provides anti-cheating devices and digital copy management. In 2016, after introducing the Steam Controller and improvements to the Steam interface to support numerous customization options, the Steamworks API was also updated to provide a generic controller library for developers and these customization features for other third-party controllers, starting with the DualShock 4. Steam's Input API has since been updated to include official support for other console controllers, such as the Nintendo Switch Pro Controller in 2018, the Xbox Wireless Controller for the Xbox Series X and Series S consoles, and the PlayStation 5's DualSense, as well as compatible controllers from third-party manufacturers in 2020. In November 2020, Valve said controller usage had more than doubled over the previous two years. In March 2019, Steam's game server network was opened to third-party developers. Developers of software available on Steam can track sales of their games through the Steam store. In February 2014, Valve announced that it would begin to allow developers to set up their own sales for their games independent of any sales that Valve may set. Valve may also work with developers to suggest their participation in sales on themed days. Steam conducted and partially published a monthly opt-in hardware and software survey between 2007 and 2010. Valve added the ability for developers to sell games under an early access model with a special section of the Steam store, starting in March 2013.
This program allows developers to release functional, but not finished, products such as beta versions to the service to allow users to buy the games and help provide testing and feedback towards the final production. Early access also helps to provide funding to the developers to help complete their games. The early access approach allowed more developers to publish games onto the Steam service without the need for Valve's direct curation of games, significantly increasing the number of available games on the service. Developers can request Steam keys of their products to use as they see fit, such as to give away in promotions, to provide to selected users for review, or to give to key resellers for different prioritization. Valve generally honors all such requests, but clarified that it would evaluate some requests to avoid giving keys to games or other offerings that are designed to manipulate the Steam storefront and other features. Valve enabled the ability for multiple developers to create bundles of games from their offerings in June 2021.

Steam Workshop

The Steam Workshop is a service that allows users to share user-made content and modifications for video games available on Steam. New levels, art assets, gameplay modifications, or other content may be published to or installed from the Workshop, depending on the title. The Workshop was originally used for distribution of new in-game items for Team Fortress 2; it was redesigned in early 2012 to extend support to any game, including modifications for The Elder Scrolls V: Skyrim. A May 2012 patch for Portal 2 introduced the ability to share user-created levels through the Workshop, enabled by a new map-making tool. Independently developed games, including Dungeons of Dredmor, are able to provide Workshop support for user-made content. Dota 2 became Valve's third published title available for the Workshop in June 2012; its features include customizable accessories, character skins, and announcer packs.
Workshop content may be monetized; Newell said that the Workshop was inspired by gold farming in World of Warcraft, which led Valve to find a way to incentivize both players and content creators in video games and informed its approach to Team Fortress 2 and its later multiplayer games. By January 2015, Valve itself had provided some user-developed Workshop content as paid-for features in Valve-developed games, including Team Fortress 2 and Dota 2, with over $57 million paid to content creators using the Workshop. Valve began allowing developers to use these advanced features in January 2015; both the developer and content creator share the profits of the sale of these items. The feature went live in April 2015, starting with various mods for Skyrim, but was pulled a few days afterward following negative user feedback and reports of pricing and copyright misuse. Six months later, Valve stated it was still interested in offering this type of functionality in the future. In November 2015, the Steam client was updated with the ability for game developers to offer in-game items for direct sale via the store interface, with Rust being the first game to use the feature.

SteamVR

SteamVR is a virtual reality hardware and software platform developed by Valve, with a focus on allowing "room-scale" experiences using positional tracking base stations, as opposed to those requiring the player to stay in a single location. SteamVR was first introduced for the Oculus Rift headset in 2014, and later expanded to support other virtual reality headsets. Initially released for Windows, macOS, and Linux, Valve dropped macOS support for SteamVR in May 2020. SteamVR 2.0 was released in October 2023, introducing a new overlay interface that is unified with the updated SteamOS and Big Picture mode interfaces.
Storefront curation

Until 2012, Valve handpicked games to be included onto the Steam service, limiting these to games that either had a major developer supporting them or came from smaller studios with proven track records. Since then, Valve has sought ways to enable more games to be offered through Steam, while pulling away from manually approving games, short of validating that a game runs on the platforms the publisher had indicated. In 2017, Steam development team member Alden Kroll said that Valve knows Steam holds a near-monopoly on game sales on personal computers, and the company does not want to be in a position to determine what gets sold, and thus had tried to find ways to make the process of adding games to Steam outside of its control. At the same time, Valve recognized that the unfettered addition of games to the service can lead to discovery problems as well as low-quality games.

Steam Greenlight

In July 2012, Valve announced Steam Greenlight, a system intended to streamline the addition of games to the service, and released it the following month. Through Greenlight, Steam users would choose which games were added to the service. Developers were able to submit information about their games, as well as early builds or beta versions, for consideration by users. Users would pledge support for these games, and Valve would make top-pledged games available on Steam. In response to complaints during its first week that finding games to support was made difficult by a flood of inappropriate or false submissions, Valve required developers to pay a fee to list a game on the service. Those fees were donated to the charity Child's Play. This fee was met with some concern from smaller developers, who often are already working at a deficit and may not have the money to cover such fees. A later modification allowed developers to put conceptual ideas on the Greenlight service to garner interest in potential projects free of charge; votes from such projects are visible only to the developer.
Valve also allowed non-gaming software to be voted onto the service through Greenlight. The initial process offered by Greenlight was panned by developers because, while they favored the concept, the rate at which games were eventually approved was small. In January 2013, Newell stated that Valve recognized that its role in Greenlight was perceived as a bottleneck, something the company was planning to eliminate in the future through an open marketplace infrastructure. On the eve of Greenlight's first anniversary, Valve simultaneously approved 100 games to demonstrate this change of direction.

Steam Direct

Valve launched Steam Direct on June 13, 2017, following Greenlight's shutdown the week before. With Steam Direct, a developer or publisher wishing to distribute their game on Steam needs only to complete appropriate identification and tax forms for Valve and then pay a recoupable application fee for each game they intend to publish. Once they apply, a developer must wait thirty days before publishing the game to allow Valve to review the game to ensure it is "configured correctly, matches the description provided on the store page, and doesn't contain malicious content". On announcing its plans for Steam Direct, Valve suggested the fee would be in the range of $100–5,000, meant to encourage earnest software submissions to the service and weed out poor-quality games that are treated as shovelware, improving the discovery pipeline to Steam's customers. Smaller developers raised concerns about the Direct fee harming them, and excluding potentially good indie games from reaching the Steam store. Valve opted to set the Direct fee at $100 after reviewing concerns from the community and outlined plans to improve its discovery algorithms and inject more human involvement to support them. Valve refunds the fee should the game exceed $1,000 in sales.
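The recoupable fee reduces to simple arithmetic; a sketch (the function name is hypothetical; the $100 fee and $1,000 recoupment threshold are the figures from the text):

```python
def direct_fee_cost(gross_revenue: float, fee: float = 100.0,
                    recoup_threshold: float = 1_000.0) -> float:
    """Net cost of the Steam Direct application fee for one game.

    The $100 fee is paid up front and refunded in full once the
    game's sales pass the $1,000 recoupment threshold.
    """
    return 0.0 if gross_revenue >= recoup_threshold else fee

print(direct_fee_cost(250.0))    # fee not yet recouped
print(direct_fee_cost(5_000.0))  # fee refunded
```

In other words, for any game that sells even modestly, the fee is effectively a deposit rather than a cost, which is part of why Valve settled on the low end of the proposed range.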
In the process of transitioning from Greenlight to Direct, Valve mass-approved most of the 3,400 remaining games that were still in Greenlight, though the company noted that not all of these were at a state to be published. Valve anticipated that the volume of new games added to the service would further increase with Direct in place. Some groups, such as publisher Raw Fury and crowdfunding/investment site Fig, have offered to pay the Direct fee for indie developers who cannot afford it. VentureBeat compared the system to the Google Play Store.

Games discovery changes

Without more direct interaction in the curation process, Valve looked for methods to allow players to find games they would be more likely to buy based on previous purchase patterns. Valve has rejected the use of paid advertising or placement on the storefront, which would have created a "pay to win" scenario. Instead, the company has relied on algorithms and other automatic features for game discovery, which has allowed unexpected hits to gain more visibility. The September 2014 "Discovery Update" added tools that would allow existing Steam users to be curators for game recommendations, and sorting functions that presented more popular games and recommended games specific to the user. This Discovery update was considered successful by Valve, which reported in March 2015 increased use of the Steam storefront and an 18% increase in sales by revenue from just prior to the update. A second Discovery update was released in November 2016, giving users more control over what games they want to see or ignore within the Steam Store, alongside tools for developers and publishers to better customize and present their games. By February 2017, Valve reported that with the second Discovery update, the number of games shown to users via the store's front page increased by 42%, with more conversions into sales from that viewership.
By 2016, more games were meeting a rough metric of success defined by Valve: selling more than $200,000 in revenue in the first 90 days of release. Valve added a "Curator Connect" program in December 2017. Curators can set up descriptors for the type of games they are interested in, preferred languages, and other tags, along with social media profiles, while developers can find and reach out to specific curators from this information and, after review, provide them directly with access to their game. This step, which eliminates the use of a Steam redemption key, is aimed at reducing the reselling of keys, as well as dissuading users who may be trying to game the curator system to obtain free game keys. Valve has attempted to deal with "fake games", those that are built around reused assets and little other innovation, by adding Steam Explorers atop its existing Steam Curator program. Any Steam user can sign up to be an Explorer and be asked to look at under-performing games on the service to either vouch that the game is truly original or flag it as an example of a "fake game", at which point Valve can take action to remove the game. In July 2019, the Steam Labs feature was introduced as a means to showcase experimental discovery features Valve considered for inclusion in Steam, and to seek public feedback. For example, an initial experiment released at launch was the Interactive Recommender, which uses artificial intelligence algorithms pulling data from the user's past gameplay history to suggest new games that may be of interest to them. As these experiments mature through end-user testing, they have been brought into the storefront as direct features.
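Valve has not published the Interactive Recommender's internal model. As a toy illustration of the general idea behind playtime-based recommendation, here is a minimal user-based collaborative filter; the users, games, hours, and the choice of cosine similarity are all invented for the example and are not Valve's actual approach:

```python
import math

# Hypothetical hours-played matrix: rows are users, columns are games.
games = ["Portal 2", "Dota 2", "Stardew Valley", "Factorio"]
playtime = {
    "alice": [20.0, 0.0, 90.0, 80.0],
    "bob":   [15.0, 0.0, 70.0, 0.0],
    "carol": [0.0, 300.0, 0.0, 5.0],
}

def cosine(u, v):
    """Cosine similarity between two playtime vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target: str) -> str:
    """Suggest the target's unplayed game with the highest
    similarity-weighted playtime among other users."""
    scores = [0.0] * len(games)
    for other, hours in playtime.items():
        if other == target:
            continue
        sim = cosine(playtime[target], hours)
        for i, h in enumerate(hours):
            scores[i] += sim * h
    unplayed = [i for i, h in enumerate(playtime[target]) if h == 0.0]
    return games[max(unplayed, key=lambda i: scores[i])]

print(recommend("bob"))
```

Here "bob" resembles "alice" (both play Portal 2 and Stardew Valley), so alice's heavy Factorio playtime outweighs carol's Dota 2 hours; a production system would of course work at vastly larger scale with learned models rather than a single similarity heuristic.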
The September 2019 Discovery update, which Valve claimed would improve the visibility of niche and lesser-known games, was met with criticism from some indie game developers, who recorded a significant drop in the exposure of their games, including new wishlist additions and appearances in the "More Like This" and "Discovery queue" sections of the store. Steam Charts were introduced in September 2022 and publicly track the storefront's best-selling and most-played games, including historically by week and month. The charts replaced a previous statistics page, are more comprehensive, and feature content that had previously been part of third-party websites including SteamSpy, SteamDB, and SteamCharts.

Games and account policies

In June 2015, Valve created a formal process to allow purchasers to request refunds, with refunds guaranteed within the first two weeks as long as the player had not spent more than two hours in a game. Prior to June 2015, Valve had a no-refunds policy, but allowed them in certain circumstances, such as digital rights management issues or false advertising. Games that are no longer available for sale for various reasons can still be downloaded and played by those who have already purchased them.

Quality control

With the launch of Steam Direct, effectively removing any curation of games by Valve prior to being published on Steam, there have been several incidents of published games that have attempted to mislead Steam users. Starting in June 2018, Valve has taken actions against games and developers that are "trolling" the system; in September 2018, Valve explicitly defined that trolls on Steam "aren't actually interested in good faith efforts to make and sell games to you or anyone".
As an example, Valve's Doug Lombardi stated that the game Active Shooter, which would have allowed the player to play as either a SWAT team member tasked with taking down the shooter in a school shooting incident or as the shooter themselves, was an example of trolling, as he described it was "designed to do nothing but generate outrage and cause conflict through its existence". Within a month of clarifying its definition of trolling, Valve removed approximately 170 games from Steam. In addition to removing bad actors, Valve has also taken steps to reduce the impact of "fake games", which could be used to manipulate the trading card marketplace or artificially boost a user's Steam level, in addition to changes in Steam to prevent such abuse. Some of these changes have resulted in false positives for legitimate games with unusual end-user usage patterns, such as Wandersong, which was flagged in January 2019 for what the developer believed was related to its near-unanimous positive user reviews. Other actions taken by developers against the terms of service or other policies have prompted Valve to remove games, including for asset flips, review manipulation, misuse of Steamworks tools, and hostile activities towards Steam users. Valve has banned games that incorporate blockchain-type technologies, such as non-fungible tokens (NFTs), since 2022 due to the questionable nature of their markets. With the rise of generative artificial intelligence in 2023, Valve originally established that games with content generated in this manner could be distributed through Steam, though it cautioned developers about assuring that they had the rights to this type of content.
As greater concerns arose about the copyright and ethical implications of generative AI in the latter half of 2023, Valve clarified its stance in January 2024, requiring games that used content from generative AI to disclose this on the game's store page, including the methods the developers used to assure that the AI engines did not generate illegal content.

Mature content and moderation

Valve has also removed or threatened to remove games due to inappropriate or mature content, though there was often confusion as to what material qualified for this. For example, Eek Games' House Party included scenes of nudity and sexual encounters in its original release, which drew criticism from the conservative religious organization National Center on Sexual Exploitation, leading Valve to remove the title. Eek Games later included censor bars within the game, allowing the game to be added back to Steam, though they offered a patch on their website to remove the bars. In May 2018, several developers of anime-stylized games that contained some light nudity, such as HuniePop, were told by Valve that they had to address sexual content within their games or face removal from Steam, leading to questions of inconsistent application of Valve's policies. The National Center on Sexual Exploitation took credit for convincing Valve to target these games. However, Valve later rescinded its orders, allowing these games to remain. In June 2018, Valve clarified its policy on content, taking a more hands-off approach outside of illegal material. Rather than trying to make decisions itself on what content is appropriate, Valve enhanced its filtering system to allow developers and publishers to indicate and justify the types of mature content (violence, nudity, and sexual content) in their games. Users can block games that are marked with this type of content from appearing in the store, and if they have not blocked it, they are presented with the description before they can continue to the store page.
Developers and publishers with existing games on Steam have been strongly encouraged to complete these forms for those games, while Valve uses moderators to make sure new games are appropriately marked. Valve also committed to developing anti-harassment tools to support developers who may find their game amid controversy. Until these tools were in place, some adult-themed games were delayed for release. Negligee: Love Stories, developed by Dharker Studios, was one of the first sexually explicit games to be offered after the introduction of the tools in September 2018. Dharker noted that, under a clause of its publishing contract for Steam, it would be liable for any content-related fines or penalties that countries may impose on Valve, and it took steps to restrict sale of the game in over 20 regions. Games that feature mature themes with primary characters that visually appear to be underage, even if the game's narrative establishes them as adults, have been banned by Valve. In March 2019, Valve faced pressure over Rape Day, a planned game described as being a dark comedy and power fantasy where the player would control a serial rapist amid a zombie apocalypse. Valve ultimately decided against offering the game on Steam, arguing that while it "[respects] developers' desire to express themselves", there were "costs and risks" associated with the game, and the developers had "chosen content matter and a way of representing it that makes it very difficult for us to help them [find an audience]". The Anti-Defamation League published a report in November 2024 accusing Valve of allowing the proliferation of hateful and antisemitic content generated by users and user groups, with over 40,000 groups identified as having names referring to such extreme views. Senator Mark Warner followed with a letter to Valve, asking the company if it was following its published online content policies and to review the cases identified by the ADL.
Platforms, devices and regions
Valve introduced the Steam Hardware Survey in 2003 ahead of the release of Half-Life 2. At that time, no information was available on the distribution of CPU and GPU units among gamers, so Valve used the survey, which automatically collected hardware information with the user's permission through the Steam client, to gather this information and refine the hardware targets for Half-Life 2 to meet the widest possible specifications. Since then, Valve has continued to use the Steam Hardware Survey to collect hardware distribution information, sharing the net results with other developers to help them understand the current market, as well as to make choices on when to discontinue support for older hardware and software.
Windows
Steam was originally released exclusively for Microsoft Windows in 2003, but has since been ported to other platforms. More recent Steam client versions use the Chromium Embedded Framework. To take advantage of some of its features for newer interface elements, Steam uses 64-bit versions of Chromium, which makes it unsupported on older operating systems such as Windows XP and Windows Vista. Steam on Windows also relies on some security features built into later versions of Windows. Support for XP and Vista was dropped in 2019. While users still on those operating systems can use the client, they do not have access to newer features. Around 0.2% of Steam users were affected by this when it began. In March 2023, Valve announced that Steam would drop support for Windows 7 and 8 on January 1, 2024.
macOS
Valve announced a client for macOS in March 2010. The announcement was preceded by a change in the Steam beta client to support the cross-platform WebKit web browser rendering engine instead of the Trident engine of Internet Explorer.
Valve teased the release by emailing several images to Mac community and gaming websites; the images featured characters from Valve games with Apple logos and parodies of vintage Macintosh advertisements. Valve developed a full video homage to Apple's 1984 Macintosh commercial to announce the availability of Half-Life 2 on the service; some concept images for the video had previously been used to tease the Mac Steam client. Steam for macOS was originally planned for release in April 2010 before being pushed back to May 12, 2010. In addition to the Steam client, several features were made available to developers, allowing them to take advantage of the cross-platform Source engine and Steamworks' platform and network capabilities. Through the Steam Play functionality, the macOS client allows players who have purchased compatible products in the Windows version to download the Mac versions at no cost. The Steam Cloud, along with many multiplayer PC games, also supports cross-platform play.
Linux
In July 2012, Valve announced that it was developing a client for Linux based on the Ubuntu distribution. This announcement followed months of speculation, primarily from the website Phoronix, which had discovered evidence of Linux support being developed in recent builds of Steam and other Valve games. Newell stated that getting Steam and games to work on Linux is a key strategy for Valve; Newell called the closed nature of Microsoft Windows 8 "a catastrophe for everyone in the PC space", and said that Linux would maintain "the openness of the platform". Valve is extending support to any developers that want to bring their games to Linux, by "making it as easy as possible for anybody who's engaged with us – putting their games on Steam and getting those running on Linux", according to Newell. The team developing the Linux client had been working for a year before the announcement to validate that such a port would be possible.
As of the official announcement, a near-feature-complete Steam client for Linux had been developed and successfully run on Ubuntu. Internal beta testing of the Linux client started in October 2012; external beta testing occurred in early November the same year. Open beta clients for Linux were made available in late December 2012, and the client was officially released in mid-February 2013. At the time of announcement, Valve's Linux division assured that its first game on the OS, Left 4 Dead 2, would run at an acceptable frame rate and with a degree of connectivity with the Windows and Mac OS X versions. From there, it began working on porting other games to Ubuntu and expanding to other Linux distributions. Versions of Steam working under Fedora and Red Hat Enterprise Linux were released by October 2013. There were over 500 Linux-compatible games on Steam in June 2014, and in February 2019, Steam for Linux had 5,800 native games and was described as having "the power to keep Linux [gaming] alive" by Engadget. In August 2018, Valve released a beta version of Proton (named Steam Play), an open-source Windows compatibility layer for Linux, so that Linux users could run Windows games directly through Steam for Linux. Proton comprises a set of open-source tools including Wine and DXVK. The software allows the use of Steam-supported controllers, even those not compatible with Windows. Released in February 2022, Valve's handheld computer, the Steam Deck, runs SteamOS 3.0 which is based on the Arch Linux distribution and uses Proton to support Windows-based games without native Linux ports. Valve worked with various middleware developers to make sure their tools were compatible with Proton on Linux and maximize the number of games that the Steam Deck would support. This included working with various anti-cheat developers such as Easy Anti-Cheat and BattlEye to make sure their solutions worked with Proton. 
To help with compatibility, Valve developed a classification system, which it populates itself, ranking how well each game works either natively on Linux or through Proton. Support for Nvidia's proprietary deep learning super sampling (DLSS) on supported video cards and games was added to Proton in June 2021, though this is not available on the Steam Deck, which is based on AMD hardware. In March 2022, Google offered a prerelease version of Steam on Chromebooks, which entered public beta in November 2022.
Steamworks on consoles
At E3 2010, Newell announced that Steamworks would arrive on the PlayStation 3 with Portal 2. Steamworks made its debut on consoles with Portal 2's PlayStation 3 release. Several features were offered, including cross-platform play and instant messaging, Steam Cloud for saved games, and the ability for PS3 owners to download Portal 2 from Steam (Windows and Mac). Valve's Counter-Strike: Global Offensive also supports Steamworks and cross-platform features on the PlayStation 3, including using keyboard and mouse controls as an alternative to the gamepad. Valve said it "hope[s] to expand upon this foundation with more Steam features and functionality in DLC and future content releases". The Xbox 360 does not have support for Steamworks. Newell said that Valve would have liked to bring the service to the console through the game Counter-Strike: Global Offensive, but later said that cross-platform play would not be present in the final version of the game. Valve attributes the inability to use Steamworks on the Xbox 360 to limitations in the Xbox Live regulations on the ability to deliver patches and new content. Valve's Erik Johnson stated that Microsoft required new content on the console to be certified and validated before distribution, which would limit the usefulness of Steamworks' delivery approach.
Mobile apps
Valve released an official Steam client for iOS and Android devices in late January 2012, following a short beta period.
The application allows players to log into their accounts to browse the storefront, manage their games, and communicate with friends in the Steam community. The application also incorporates a two-factor authentication system that works with Steam Guard. Newell stated that the application was a strong request from Steam users and sees it as a means "to make [Steam] richer and more accessible for everyone". A mobile Steam client for Windows Phone devices was released in June 2016. In May 2019, a mobile chat-only client for Steam was released under the name Steam Chat. On May 14, 2018, a "Steam Link" app with remote play features was released in beta to allow users to stream games to Android phones, named after the discontinued set-top box Steam Link. It was also submitted to the iOS App Store, but was denied by Apple Inc., which cited "business conflicts with app guidelines". Apple later clarified its rule at the following Apple Worldwide Developers Conference in early June: iOS apps may not offer an app-like purchasing store, but apps that provide remote desktop support are not restricted. In response, Valve removed the ability to purchase games or other content through the app and resubmitted it for approval in June 2018; it was accepted by Apple and allowed on the App Store in May 2019.
Steam-branded devices
Before 2013, industry analysts believed that Valve was developing hardware and tuning features of Steam for apparent use on its own hardware. These computers were pre-emptively dubbed "Steam Boxes" by the gaming community and expected to be dedicated machines focused on Steam functionality while maintaining the core functions of a traditional video game console. In September 2013, Valve unveiled SteamOS, a custom Linux-based operating system it had developed specifically for running Steam and games, and the final concept of the Steam Machine hardware.
Unlike other consoles, the Steam Machine does not have set hardware; its technology is implemented at the discretion of the manufacturer and is fully customizable, much like a personal computer. In 2018, the Steam Machines were removed from the storefront due to low sales and small user traffic. In November 2015, Valve released the set-top box Steam Link and the Steam Controller (which was discontinued in 2019). The Steam Link removed the need for HDMI cables for displaying a PC's screen and allowed for wireless connection when connecting to a TV. The set-top box was discontinued in 2018, and "Steam Link" now refers to the Remote Play mobile app that allows users to stream content, such as games, from a PC to a mobile device over a network. Valve released the Steam Deck, a handheld gaming computer running an updated version of SteamOS, with initial shipments starting on February 25, 2022. The Deck is designed for playing Steam games, but it can be placed into a separate dock that allows it to output to an external display. Updates to Steam and SteamOS accompanying the Deck included better Proton layer support for Windows-based games, improved user interface features in the Steam client for the Steam Deck display, and the addition of Dynamic Cloud Saves to Steam to allow synchronizing saved games while a game is being played. Valve began marking all games on the service through a Steam Deck Verified program to indicate how compatible they are with the Steam Deck software.
Steam Cloud Play
Valve introduced beta support for Steam Cloud Play in May 2020, allowing users to play games in their library that developers and publishers have opted into a cloud gaming service. At launch, Steam Cloud Play only worked through Nvidia's GeForce Now service; Valve said it would link up to other cloud services in the future, though whether Valve would run its own cloud gaming service was unclear.
Steam China
China has strict regulations on video games and Internet use; however, access to Steam is allowed through China's governmental firewalls, and a large portion of Steam users are from China. By November 2017, more than half of the Steam userbase was fluent in Chinese, an effect created by the large popularity of Dota 2 and PlayerUnknown's Battlegrounds in the country, and several developers have reported that Chinese players make up close to 30% of the total players for their games. Following a Chinese government-ordered temporary block of many of Steam's functions in December 2017, Valve and Perfect World announced they would help to provide an officially sanctioned version of Steam that meets Chinese Internet requirements. Perfect World has worked with Valve before, helping to bring Dota 2 and Counter-Strike: Global Offensive to the country through approved government processes. All games to be released on Steam China are expected to pass through the government approval process and meet other governmental requirements, such as the requirement that a Chinese company run any game with an online presence. The platform is known locally as "Steam Platform" and runs independently from the rest of Steam; it was made to comply with China's strict regulations on video games. Valve does not plan to prevent Chinese users from accessing the global Steam platform and will try to ensure that a player's cloud data remains usable between the two. The client launched as an open beta on February 9, 2021, with about 40 games available at launch. As of December 2021, only around 100 games that have been reviewed and licensed by the government are available through Steam China. On December 25, 2021, reports emerged that Steam's global service was the target of a domain name system attack that prevented users in China from accessing its site.
China's Ministry of Industry and Information Technology (MIIT) later confirmed that Chinese gamers would no longer be able to use Steam's global service, as its international domain name had been designated as "illegal". The block effectively locked all Chinese users out of games they had purchased through Steam's international service. In 2023, reports emerged that the Steam Store could be used as normal in China, while the Steam Community was still blocked.
Reception and impact
Steam's success has led to some criticism over its support for DRM and its position as an effective monopoly. In 2012, Free Software Foundation founder Richard Stallman called DRM used by Steam on Linux "unethical", but still better than Windows. Steam's customer service has been highly criticized, with users citing poor response times or lack of response. In March 2015, Valve was given a failing "F" grade by the Better Business Bureau due to a large number of complaints about Valve's handling of Steam, leading Valve's Erik Johnson to state that "we don't feel like our customer service support is where it needs to be right now". Johnson stated the company planned to better integrate customer support features into the Steam client and be more responsive. In May 2017, in addition to hiring more staff for customer service, Valve publicized pages showing the number and type of customer service requests it had handled over the previous 90 days, averaging 75,000 per day. Of those, requests for refunds were the largest segment, which Valve could typically resolve within hours, followed by account security and recovery requests. Valve stated at this time that 98% of all service requests were processed within 24 hours of filing.
Users and revenue
In August 2011, Valve said Steam's revenue, estimated to be $1 billion in 2010, was comparable to that of its published games and that this made the company "tremendously profitable."
Valve reported that there were 125 million active accounts on Steam by the end of 2015. By August 2017, the company reported 27 million new active accounts since January 2016, bringing the total number of active users to at least 150 million. Most accounts were from North America and Western Europe, with significant growth in accounts from Asia around 2017, spurred by Valve's work to localize the client and make additional currency options available to purchasers. In September 2014, 1.4 million accounts belonged to Australian users; this grew to 2.2 million by October 2015. Valve also considers concurrent users – how many accounts are logged in at the same time – a key indicator of the platform's success. By August 2017, Valve reported a peak of 14 million concurrent players, up from 8.4 million in 2015, with 33 million concurrent players each day and 67 million each month. By January 2018, the peak online count had reached 18.5 million, with over 47 million daily active users. During the COVID-19 pandemic in 2020, when a large proportion of the world's population was at home, Steam saw a concurrent player count of over 23 million in March, along with several games seeing similar record-breaking concurrent counts. The highest concurrent player count reached 39.2 million by December 2024, in part from the combined releases of Marvel Rivals and Path of Exile 2.
Sales and distribution
Steam has grown from seven games in 2004 to over 30,000 by 2019, with additional non-gaming products, such as creation software, DLC, and videos, numbering over 20,000. More than 50,000 games were on the service as of February 2021. The growth of games on Steam is attributed to changes in Valve's curation approach, which allows publishers to add games without Valve's direct involvement, and to games supporting virtual reality technology.
The addition of Greenlight and Direct accelerated the number of games on the service; almost 40% of the 19,000 games on Steam by the end of 2017 had been released that year. Before Greenlight, Valve saw about five new games published each week. Greenlight expanded this to about seventy per week, which roughly doubled to one hundred and eighty per week following the introduction of Direct. Although Steam provides direct sales data to developers and publishers, it does not provide public sales data. In 2011, Valve's Jason Holtman stated that the company felt such data was outdated for a digital market. Data that Valve does provide cannot be released without permission because of a non-disclosure agreement. Developers and publishers have asked for some metrics of sales for games, to allow them to judge the potential success of a title by reviewing how similar games have performed. Algorithms that worked on publicly available data from user profiles to estimate sales data with some accuracy led to the creation of the website Steam Spy in 2015. Steam Spy was credited with being reasonably accurate, but in April 2018, Valve added new privacy settings that defaulted to hiding user game profiles, stating this was part of compliance with the General Data Protection Regulation (GDPR) of the European Union. The change broke the method by which Steam Spy had collected data, rendering it unusable. A few months later, another method was developed that used game achievements to estimate sales with similar accuracy, but Valve soon changed the Steam API in a way that reduced its functionality. Some have asserted that Valve used the GDPR change as a means to block methods of estimating sales, although Valve subsequently promised to provide tools to developers to help gain such insights, which it says will be more accurate.
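Third-party estimates of this kind generally work by multiplying a game's publicly visible review count by an empirically derived factor. A minimal sketch of the idea (the multiplier and spread values here are illustrative placeholders, not actual published figures):

```python
# Illustrative review-count sales estimator. The multiplier is an
# empirical "owners per review" factor; the values below are
# placeholders, not actual published figures.

def estimate_sales(review_count, multiplier=30, error_factor=2):
    """Return a (low, central, high) estimate of units sold.

    review_count: number of public user reviews the game has.
    multiplier:   assumed average number of owners per review.
    error_factor: spread expressing the estimate's uncertainty.
    """
    central = review_count * multiplier
    return (central // error_factor, central, central * error_factor)

low, mid, high = estimate_sales(1_200)
print(f"~{mid:,} units (range {low:,}-{high:,})")
```

Because the multiplier drifts over time with review behavior and genre, such estimates are typically quoted with a wide error band rather than as a single figure.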
In 2020, Simon Carless revised an approach originally proposed by Mike Boxleiter as early as 2013; Carless's method estimates a game's sales by multiplying its number of Steam reviews by a modified "Boxleiter number".
Competition and curation impact
The accessibility of publishing games on digital storefronts like Steam has been described as key to the popularity of indie games. As these processes allow developers to publish games on Steam with minimal oversight from Valve, journalists have criticized Valve for lacking curation policies that make it difficult to find quality games among poorly produced games, sometimes called "shovelware". Following the launch of Steam Direct, the video game industry was split on Valve's hands-off approach. Some praised Valve for declining to act as a moral adjudicator of content and letting consumers decide what content they want to see, while others felt that this would encourage developers to publish games that are purposely hateful, and that Valve's reliance on user filters and algorithms might not succeed in blocking undesirable content. Some further criticized the decision based on the financial gain from not blocking any game content, as Valve collects a cut from sales through Steam. The National Center on Sexual Exploitation denounced the policy for avoiding corporate and social responsibility "in light of the rise of sexual violence and exploitation games being hosted on Steam". Steam was estimated to have the largest share of the PC digital distribution market in the 2010s. In 2013, sales via the Steam catalog were estimated to be between 50 and 75 percent of the total PC gaming market.
Between 2010 and 2013, as increasing numbers of retail copies of major publishers' games integrated or required Steam, retailers and journalists referred to the service as a monopoly, which they claimed could be detrimental to the industry, arguing that competition in the sector would yield positive results for consumers. Several developers also noted that Steam's influence on the PC gaming market is powerful and one that smaller developers cannot afford to ignore or not work with, but believe that Valve's corporate practices make it a type of "benevolent dictator". Because Valve keeps its sales data private, estimates of Steam's share of the video game market are difficult to compile. Stardock, developer of the competing platform Impulse, estimated that Steam had a 70% share in 2009. In February 2011, Forbes reported that Steam sales constituted 50–70% of the market for downloaded PC games and that Steam offered game producers gross margins of 70% of the purchase price, compared with 30% at retail. Steam has been criticized for its reported 30% cut of revenue from game sales, a value that is similar to other digital storefronts according to IGN. However, some critics have asserted that the share no longer scales with the cheaper costs of serving data. A 2019 Game Developers Conference survey showed only 6% of the 400 respondents deemed the share justified. Epic Games' Tim Sweeney postulated that Valve could reduce its cut to 8%, given that content delivery network costs have dropped significantly. Other services, including the Epic Games Store and Discord, have promoted their lower cuts. In November 2009, online retailers Impulse, Direct2Drive and GamersGate refused to offer Call of Duty: Modern Warfare 2 because it included mandatory installation of Steamworks. Direct2Drive accused Steamworks of being a "trojan horse".
Valve's business development director Jason Holtman replied that Steamworks' features were chosen by developers and based on consumer wants, and that Modern Warfare 2 was one of Steam's "greatest sellers". In December 2010, MCV/Develop reported that "key traditional retailers" would stop offering games that integrate Steam.
Legal disputes
Steam's predominance has led to Valve becoming involved in various legal cases. The lack of a formal refund policy led the Australian Competition and Consumer Commission (ACCC) to sue Valve in September 2014 for violating Australian consumer laws that required stores to offer refunds for faulty or broken products. The ACCC won the lawsuit in March 2016, though the court recognized that Valve had changed its policy in the interim. In December 2016, the court fined Valve and required it to include proper language for Australian consumers outlining their rights when purchasing games off of Steam. In January 2018, Valve filed for special leave to appeal the decision to the High Court of Australia, but the High Court dismissed this request. In September 2018, Valve's Steam refund policy was found to violate France's consumer laws, and it was fined and required to modify its refund policy. In December 2015, the French consumer group UFC-Que Choisir initiated a lawsuit against Valve over several of its Steam policies that conflicted with French law, including the restriction on reselling purchased games, which is legal within the European Union. In September 2019, the Tribunal de grande instance de Paris found that Valve's practice of preventing resales violated the EU's Information Society Directive of 2001 and the Computer Programs Directive of 2009, and required Valve to allow resales in the future.
The Interactive Software Federation of Europe (ISFE) issued a statement that the French court ruling goes against established EU case law related to digital copies and threatened to upend much of the digital distribution system in Europe should it be upheld. In August 2016, BT Group filed a lawsuit against Valve, stating that Steam's client infringed on four of its patents, which it said were used within Steam's Library, Chat, Messaging, and Broadcasting services. In 2017, the European Commission began investigating Valve and five other publishers—Bandai Namco Entertainment, Capcom, Focus Home Interactive, Koch Media, and ZeniMax Media—for anti-competitive practices, specifically the use of geo-blocking to prevent access to software within certain countries within the European Economic Area. Such practices would be against the Digital Single Market initiative set by the European Union. The French gaming trade group Syndicat National du Jeu Vidéo noted that geo-blocking was a necessary feature to hinder inappropriate product key reselling. The Commission found, in January 2021, that Valve and its co-defendants had violated antitrust rules of the European Union, issued combined fines, and determined that these companies may be further liable to lawsuits from affected consumers. Valve had chosen "not to cooperate" and received the largest fine of any of the defendants. A January 2021 class-action lawsuit filed against Valve asserted that it forced developers into a "most favored nation"-type pricing contract to offer games on its storefront, which required the developers to price their games the same on other platforms as they did on Steam, thus stifling competition. Gamasutra's Simon Carless analyzed the lawsuit and observed that Valve's terms only apply to the resale of Steam keys and not games themselves, and thus the lawsuit may be without merit.
A separate class-action lawsuit filed against Valve by Wolfire Games in April 2021 asserted that Steam is essentially a monopoly, since developers must sell to PC users through it, and that its 30% cut and "most favored nation" pricing practices violate antitrust laws as a result of that position. Valve's response, filed in July 2021, dismissed the complaint, stating that Valve "has no duty under antitrust law to allow developers to use free Steam Keys to undersell prices for the games they sell on Steam—or to provide Steam Keys at all". Valve defended its 30% revenue share as meeting the current industry standard and claimed that Wolfire's figure for Steam's market share lacked evidence. Wolfire's suit was dismissed by the presiding judge in November 2021, who determined that Wolfire had failed to show that Valve had a monopoly on game sales and noted that the 30% cut had remained unchanged throughout Valve's history. Wolfire refiled its case, narrowing the complaint to Valve's use of its dominance to intimidate developers that sell their games for less on other marketplaces, which the judge allowed to proceed in May 2022. During discovery, Valve was ordered to have Gabe Newell submit to a deposition for discussion of Valve's business strategy related to Steam. Valve changed the Steam terms of service in September 2024 to eliminate the forced arbitration clause, such that any disputes with the storefront are to be resolved in courtrooms, allowing for class-action lawsuits.
African swine fever virus
African swine fever virus (ASFV) is a large, double-stranded DNA virus in the family Asfarviridae. It is the causative agent of African swine fever (ASF). The virus causes a hemorrhagic fever with high mortality rates in domestic pigs; some isolates can cause death of animals as quickly as a week after infection. It persistently infects its natural hosts – warthogs, bushpigs, and soft ticks of the genus Ornithodoros, which likely act as a vector – with no disease signs. It does not cause disease in humans. ASFV is endemic to sub-Saharan Africa and exists in the wild through a cycle of infection between ticks and wild pigs, bushpigs, and warthogs. The disease was first described after European settlers brought pigs into areas endemic for ASFV, and as such, is an example of an emerging infectious disease. ASFV replicates in the cytoplasm of infected cells. It is the only virus with a double-stranded DNA genome known to be transmitted by arthropods.
Virology
ASFV is a large (175–215 nm), icosahedral, double-stranded DNA virus with a linear genome of 189 kilobases containing more than 180 genes. The number of genes differs slightly among different isolates of the virus. ASFV has similarities to other large DNA viruses, e.g., poxvirus, iridovirus, and mimivirus. In common with other viral hemorrhagic fevers, the main target cells for replication are those of the monocyte/macrophage lineage. Entry of the virus into the host cell is receptor-mediated, but the precise mechanism of endocytosis is presently unclear. The virus encodes enzymes required for replication and transcription of its genome, including elements of a base excision repair system, structural proteins, and many proteins that are not essential for replication in cells, but instead have roles in virus survival and transmission in its hosts. Virus replication takes place in perinuclear factory areas.
It is a highly orchestrated process with at least four stages of transcription—immediate-early, early, intermediate, and late. The majority of replication and assembly occurs in discrete, perinuclear regions of the cell called virus factories, and finally progeny virions are transported to the plasma membrane along microtubules, where they bud out or are propelled away along actin projections to infect new cells. As the virus progresses through its lifecycle, most if not all of the host cell's organelles are modified, adapted, or in some cases destroyed. Assembly of the icosahedral capsid occurs on modified membranes from the endoplasmic reticulum. Products from proteolytically processed polyproteins form the core shell between the internal membrane and the nucleoprotein core. An additional outer membrane is gained as particles bud from the plasma membrane. The virus encodes proteins that inhibit signalling pathways in infected macrophages and thus modulate transcriptional activation of immune response genes. In addition, the virus encodes proteins which inhibit apoptosis of infected cells to facilitate production of progeny virions. Viral membrane proteins with similarity to cellular adhesion proteins modulate interaction of virus-infected cells and extracellular virions with host components.
Genotypes
Based on sequence variation in the C-terminal region of the B646L gene encoding the major capsid protein p72, 23 ASFV genotypes (I–XXIII) have been identified. All ASFV p72 genotypes have been circulating in eastern and southern Africa. Genotype I has been circulating in Europe, South America, the Caribbean, and western Africa. Genotype VIII is confined to four East African countries.
Evolution
The virus is thought to be derived from a virus of soft ticks (genus Ornithodoros) that infects wild swine, including giant forest hogs (Hylochoerus meinertzhageni), warthogs (Phacochoerus africanus), and bushpigs (Potamochoerus porcus).
In these wild hosts, infection is generally asymptomatic. The virus appears to have evolved around 1700 AD, a date corroborated by the historical record. Pigs were initially domesticated in North Africa and Eurasia. They were introduced into southern Africa from Europe and the Far East by the Portuguese (300 years ago) and Chinese (600 years ago), respectively. At the end of the 19th century, the extensive pig industry in the native region of ASFV (Kenya) started after massive losses of cattle due to a rinderpest outbreak. Pigs were imported on a massive scale for breeding by colonizers, from the Seychelles in 1904 and from England in 1905. Pig farming was free-range at that time. The first outbreak of ASF was reported in 1907.
Taxonomy
ASFV has no known close relatives. It is the only species in the genus Asfivirus, family Asfarviridae, and order Asfuvirales. Each of these three taxa is at least partly named after ASFV.
Signs and symptoms
In the acute form of the disease caused by highly virulent strains, pigs may develop a high fever, but show no other noticeable symptoms for the first few days. They then gradually lose their appetites and become depressed. In white-skinned pigs, the extremities turn bluish-purple, and hemorrhages become apparent on the ears and abdomen. Groups of infected pigs lie huddled together, shivering, breathing abnormally, and sometimes coughing. If forced to stand, they appear unsteady on their legs; in newborn piglets this is called congenital tremor type A-I. Within a few days of infection, they enter a comatose state and then die. In pregnant sows, spontaneous abortions may occur. In milder infections, affected pigs lose weight, become thin, and develop signs of pneumonia, skin ulcers, and swollen joints.
Diagnosis
The clinical symptoms of ASFV infection are very similar to those of classical swine fever, and the two diseases normally have to be distinguished by laboratory diagnosis.
This diagnosis is usually performed by an ELISA, real-time PCR, or isolation of the virus from the blood, lymph nodes, spleen, or serum of an infected pig. Spread The virus can be spread by ticks, but also by swine eating pork products that contain the virus. Biosecurity measures are essential for prevention and control of African swine fever. Disinfection procedures are an important component of the mitigation phase. Laboratory tests have been conducted to assess the efficacy of chemical products and commercial disinfectants against African swine fever. The National Pig Association, a UK industry body, states that the virus can also be transmitted by direct or indirect contact with infected pigs, faeces or body fluids. As the virus may survive 11 days in pig faeces, and months or years in pork products, the association advises strict biosecurity measures for pig farms, including a three-day quarantine on entering the UK, and avoiding both pigs and areas where wild boar are found. Vaccine research Vietnam successfully produced the first vaccine against African swine fever on June 1, 2022. An experimental vaccine against the Eurasian "Georgia07" isolate, attenuated by deletion of the viral I177L gene, has been in development in the United States since January 22, 2020. This experimental vaccine was licensed as a vaccine candidate as of January 3, 2023. History The first outbreak was retrospectively recognized as having occurred in 1907, after ASF was first described in 1921 in Kenya. The disease remained restricted to Africa until 1957, when it was reported in Lisbon, Portugal. A further outbreak occurred in Portugal in 1960. Subsequent to these initial introductions, the disease became established in the Iberian Peninsula, and sporadic outbreaks occurred in France, Belgium, and other European countries during the 1980s. Both Spain and Portugal had managed to eradicate the disease by the mid-1990s through a slaughter policy.
ASFV crossed the Atlantic Ocean, and outbreaks were reported in some Caribbean islands, including Hispaniola (Dominican Republic and Haiti). As a result, US Customs and Border Protection is on high alert to prevent any spread to the US, which would inflict billions of dollars of damage to the country's pork industry. Major outbreaks of ASF in Africa are regularly reported to the World Organisation for Animal Health (previously called the Office International des Épizooties). In 2018, the virus spread to Asia, affecting more than 10 percent of the total pig population in several countries and leading to severe economic losses in the pig sector. Cuba In 1971, an outbreak of the disease occurred in Cuba, resulting in the slaughter of 500,000 pigs to prevent a nationwide animal epidemic. The outbreak was labeled the "most alarming event" of 1971 by the United Nations Food and Agriculture Organization. Six years after the event, the newspaper Newsday, citing untraceable sources, claimed that anti-Castro saboteurs, with at least the tacit backing of U.S. Central Intelligence Agency officials, had introduced African swine fever virus into Cuba six weeks before the outbreak in 1971 to destabilize the Cuban economy and encourage domestic opposition to Fidel Castro. The virus was allegedly delivered to the operatives from an army base in the Panama Canal Zone by an unnamed U.S. intelligence source. Europe ASFV first occurred in Europe in 1957, when it was introduced in Portugal. From there, it spread to Spain and France. Although concerted efforts to eradicate ASFV were undertaken, such as widespread culling and the construction of modern farming facilities, the disease was only eradicated in the 1990s. An outbreak occurred at the beginning of 2007 in Georgia, and subsequently spread to Armenia, Azerbaijan, Iran, Russia, and Belarus, raising concerns that ASFV might spread further geographically and have negative economic effects on the swine industry.
African swine fever became endemic in the Russian Federation after spreading into the North Caucasus in November 2007, most likely through movements of infected wild boar from Georgia to Chechnya, said a 2013 report by the Food and Agriculture Organization, a United Nations agency. The report showed how the disease had spread north from the Caucasus to other parts of the country where pig production was more concentrated, namely the Central Federal District (home to 28.8% of Russia's pigs) and the Volga Federal District (with 25.4% of the national herd), and northwest towards Ukraine, Belarus, Poland and the Baltic nations. In Russia, the report added, the disease was 'on its way to becoming endemic in Tver oblast' (about 106 km north of Moscow, and about 500 km east of Russia's littoral neighbours on the Baltic). Among the vectors for the spread of African swine fever virus in Russia was the 'distribution' of 'infected pig products' outside affected (quarantined and trade-restricted) areas, travelling large distances (thousands of kilometers) within the country. "Wholesale buyers, particularly the military food supply system, hav[ing] been implicated multiple times in the illegal distribution of contaminated meat" were vectors for the virus's spread, said the Food and Agriculture Organization report; evidence of that was "repeated outbreaks in Leningrad oblast". The report warned that "countries immediately bordering the Russian Federation, particularly Ukraine, Moldova, Kazakhstan, and Latvia, are most vulnerable to [African swine fever] introduction and endemic establishment, largely because the biosecurity of their pig sector is predominantly low. Preventing the spread of [African swine fever] into Ukraine is particularly critical for the whole pig production sector in Europe. Given the worrisome developments in the Russian Federation, European countries have to be alert.
They must be ready to prevent and to react effectively to [African swine fever] introductions into their territories for many years to come...". To stop the virus's spread, "the current scenario in the Russian Federation suggests that [prevention] should be particularly stressed at the often informal backyard level and should involve not just pig keepers, but all actors along the whole value chain—butchers, middlemen, slaughterhouses, etc. … They need to be aware of how to prevent and recognize the disease, and must understand the importance of reporting outbreaks to the national authorities … It is particularly important that [African swine fever]-free areas remain free by preventing the [re]introduction of the disease and by swiftly responding to it when it occurs". From around 2007 to August 31, 2018, 1,367 cases of ASF in domestic or wild pigs were reported by the veterinary department of Rosselkhoznadzor, a Russian federal agency that oversees agriculture, and by state media. According to official reports, the central and southern districts were among the most affected by the disease (with several cases in the east). Many regions established local quarantines, some of which were later lifted. In August 2012, an outbreak of African swine fever was reported in Ukraine. In June 2013, an outbreak was reported in Belarus. In January 2014, authorities announced the presence of African swine fever in Lithuania and Poland, in June 2014 in Latvia, and in July 2015 in Estonia. Estonia's first case of African swine fever in farmed pigs, recorded in July 2015, was in Valgamaa on the country's border with Latvia. Another case was reported the same day in Viljandi County, which also borders Latvia. All the pigs were culled and their carcasses incinerated. Less than a month later, almost 15,000 farmed pigs had been culled and the country was "struggling to get rid of hundreds of tons of carcasses". The death toll was "expected to rise".
In January 2017, Latvia declared an African swine fever emergency in relation to outbreaks in three regions, including a pig farm in Krimulda region, which resulted in a cull of around 5,000 sows and piglets using gas. In February, another massive pig cull was required after an industrial-scale farm of the same company in Salaspils region was found infected, leading to a cull of about 10,000 pigs. In June 2017, the Czech Republic recorded its first-ever case of African swine fever. The veterinary administration of Zlin prevented the spread of the ASF infection by confining the contaminated zone with odor fences. Odor fences with a total length of 44.5 km were effective in keeping wild boar inside the health zone. In 2018, Romania experienced a nationwide African swine fever epidemic, which prompted the slaughter of most farm pigs. In August 2018, authorities announced the first outbreak of African swine fever in Bulgaria. By July 2019, five Bulgarian pig farms had had outbreaks of African swine fever. In September 2018, an outbreak occurred in wild boars in southern Belgium. Professional observers suspect that importation of wild boars from Eastern European countries by game hunters was the origin of the virus. By 4 October, 32 wild boars had tested positive for the virus. To control the outbreak, 4,000 domestic pigs were slaughtered preventively in the Gaume region, and the forest was declared off-limits for recreation. In July 2019, authorities announced the first outbreak of African swine fever in Slovakia. In February 2020, authorities announced the first outbreak of African swine fever in a restricted area of northern Greece. In September 2020, the German agriculture minister confirmed at a press conference that the African swine fever virus had reached Germany. The virus was discovered in a wild boar carcass, with test results confirming the infection.
The dead boar was located in the district of Spree-Neisse, Brandenburg, just a few kilometres from the Polish border. The dead animal had been there for quite a while, according to the Friedrich-Loeffler Institute, the agency responsible for testing. Following these events, the German government tried to lobby China and other Asian countries to allow western German states to keep exporting to Asia while barring the eastern states from doing so; the Asian countries ultimately refused these proposals. In January 2021, Romania suffered a new outbreak of ASFV that started in Brăila County. In January 2022, an outbreak occurred in northern Italy, detected in dead wild boars in Liguria and Piedmont. In the same month, ASF cases were reported in northern Italy, Latvia, and Hungary. In September 2023, one case was reported in Sweden. 2018–2020 African swine fever panzootic China and East Asia In August 2018, China reported the first African swine fever outbreak in Liaoning province, which was also the first reported case in East Asia. By September 1, 2018, the country had culled more than 38,000 hogs. Since the week of September 10, 2018, China has blocked transports of live pigs and pig products in a large part of the country to avoid spread beyond the six provinces where the virus was then confirmed. By the end of 2018, outbreaks had been reported in 23 provinces and municipalities across China. On April 25, 2019, the virus was reported to have spread to every region of China, as well as parts of Southeast Asia, including Cambodia, Laos, Thailand, and Vietnam. The Chinese pig population was reported to have declined by almost 100 million compared with the previous year, driving European pork prices to a six-year high. Ze Chen, Shan Gao and co-workers from Nankai University detected ASFV in Dermacentor (hard ticks) from sheep and bovines using small RNA sequencing.
The detected 235-bp segment had 99% identity to a 235-bp DNA segment of ASFV and contained three single-nucleotide mutations (C38T, C76T and A108C). C38T, resulting in a single amino-acid mutation, G66D, suggests the existence of a new ASFV strain, different from all ASFV strains reported in the NCBI GenBank database and from the ASFV strain (GenBank: MH713612.1) reported in China in 2018. In December 2019, China banned imports of pigs and wild boars from Indonesia because of African swine fever outbreaks there, which numbered 392 by 17 December. In September 2019, South Korea confirmed a second case of ASF at a pig farm in Yeoncheon, where 4,700 pigs had been raised, a day after reporting its first-ever outbreak of the virus. As of 31 October 2019, the virus had been detected in domestic pigs in nine places in Gyeonggi-do and five places in Incheon City. It was also confirmed in 18 wild boars from Gangwon-do and Gyeonggi-do, inside or near the Civilian Control Zone. The South Korean government set up a buffer zone to separate affected areas from the rest of the country and instituted a compensation scheme for farms within a given radius of infected farms. In Taiwan, between December 2018 and June 2019, ten pig carcasses infected with ASF washed up on the shore in Kinmen (Quemoy) County. In June 2019, the Taiwanese government temporarily halted the export of pigs and pork products from Kinmen County after two infected pig carcasses were discovered on the shore. Nearby farms were inspected by veterinarians, and no live pigs tested positive for ASF. On February 3, 2020, another infected pig carcass was discovered on the shore of Lieyu (Lesser Kinmen), bringing the total number of infected pig carcasses found on Kinmen County shores to 11. In April 2020, a twelfth pig carcass confirmed to be carrying African swine fever virus washed up on the shore of Lieyu (Lesser Kinmen). After testing, no outbreak of the disease was detected in the seven active hog farms on the island.
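The mutation labels used earlier in this section (C38T, C76T, A108C) encode a reference base, a 1-based nucleotide position, and a substituted base. As a rough illustration of how such a label is parsed, applied to a sequence, and mapped to an amino-acid change, here is a Python sketch; the toy sequence, positions, and abbreviated codon table below are illustrative assumptions, not the actual ASFV segment:

```python
import re

# Abbreviated codon table, covering only the demo sequence below.
CODON_TABLE = {"ATG": "M", "GGT": "G", "GAT": "D", "TAA": "*"}

def parse_snp(label):
    """Split a label like 'C38T' into (ref_base, 1-based position, alt_base)."""
    m = re.fullmatch(r"([ACGT])(\d+)([ACGT])", label)
    if not m:
        raise ValueError(f"unrecognized SNP label: {label}")
    return m.group(1), int(m.group(2)), m.group(3)

def apply_snp(seq, label):
    """Return seq with the point mutation applied, verifying the reference base."""
    ref, pos, alt = parse_snp(label)
    if seq[pos - 1] != ref:
        raise ValueError(f"reference mismatch at {pos}: expected {ref}, found {seq[pos - 1]}")
    return seq[:pos - 1] + alt + seq[pos:]

def codon_change(seq, label, frame=0):
    """Report the amino-acid effect of a SNP, assuming translation starts at `frame`."""
    mutated = apply_snp(seq, label)
    _, pos, _ = parse_snp(label)
    start = frame + 3 * ((pos - 1 - frame) // 3)   # first base of the affected codon
    before = CODON_TABLE[seq[start:start + 3]]
    after = CODON_TABLE[mutated[start:start + 3]]
    aa_pos = (start - frame) // 3 + 1
    return f"{before}{aa_pos}{after}"

# Toy 12-nt open reading frame (M-G-D-stop); a G-to-A change at nucleotide 5
# turns codon 2 from GGT (Gly) into GAT (Asp), i.e. a G2D change, analogous
# in form to the G66D change the study reports for C38T.
demo = "ATGGGTGATTAA"
print(codon_change(demo, "G5A"))   # → G2D
```

The same parse-check-apply pattern underlies real variant-calling pipelines, though those work against full reference genomes rather than a hand-written codon table.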
On May 29, China's Ministry of Agriculture and Rural Affairs said it had found a new outbreak of ASF near the city of Lanzhou in northwestern Gansu Province. On April 9, 2019, a pig carcass with ASF was discovered on the shore of Nangan in the Matsu Islands (Lienchiang County), leading to a week-long ban on the transport of pigs from the county. Southeast Asia The Philippines' Department of Agriculture started a probe in August 2019 regarding incidents of hog deaths in towns in Rizal and Bulacan for suspected ASF cases. The department ordered the culling of pigs within a set radius of affected farms. The first case of African swine fever in the Philippines was confirmed on 9 September 2019 by the country's agriculture department. The department sent 20 blood samples to the United Kingdom for testing, with 14 samples testing positive for African swine fever virus. At the time of the confirmation, though, the department stated that the disease had not yet reached epidemic levels and that it was still assessing its severity. On 16 September 2019, the Bureau of Animal Industry's director, Ronnie Domingo, confirmed that the virus had reached outbreak levels in Rizal, Bulacan, and Quezon City; outbreaks were also confirmed across the provinces of Pampanga and Pangasinan at the end of September. Vietnam confirmed its first case of African swine fever on 19 February 2019. As of 31 October 2019, all 63 provinces and municipalities had reported outbreaks, and more than 5,700,000 pigs had been culled. The Vietnamese government acted to limit the movement of pigs and pig products and implemented measures to prevent, promptly detect, and strictly handle cases of smuggling, illegal transportation, and trafficking of animals and animal products—especially of pigs and pig products—into the country. ASF was reported on farms near the capital city, Dili, in East Timor from 9 September 2019. On 30 September 2019, a reported 405 animals were killed or culled.
By the end of October 2019, a total of 100 outbreaks in smallholder pig farms had been recorded in Dili. In Sabah, Malaysia, outbreaks of ASF had been detected in domestic pigs by February 2021. However, large numbers of wild pigs had been reported dead since January, and over a hundred Bornean bearded pigs had been found dead as of March 2021; the Sabah Wildlife Department later stated that this was probably due to ASF. Other Southeast Asian pig species, including the Celebes warty pig of Sulawesi, are thought to be vulnerable as well. India On 29 April 2020, India reported its first ASF disease outbreak, in the states of Assam and Arunachal Pradesh. The disease reportedly spread via transboundary transmission from China. According to data from the veterinary department, over 15,000 pigs had been recorded dead so far across the nine affected districts of Assam: Golaghat, Majuli, Dibrugarh, Kamrup, Dhemaji, Biswanath, North Lakhimpur, Sivasagar, and Jorhat. The sale and consumption of pork have been banned in the affected districts of Assam. On 4 July, test results of pig carcass samples confirmed ASF in the Bokakhat subdivision of Golaghat. Pig mortality in Bokakhat reached around 800. The infection was suspected to have come into the region from Kamargaon, which is located near Bokakhat. Historical theories The appearance of ASF outside Africa at about the same time as the emergence of AIDS led to some interest in whether the two were related, and a report supporting this idea appeared in The Lancet in 1986. However, the realization that the human immunodeficiency virus (HIV) causes AIDS discredited any potential connection with ASF.
Ephedra (plant)
Ephedra is a genus of gymnosperm shrubs. The various species of Ephedra are widespread in many arid regions of the world, ranging across southwestern North America, southern Europe, northern Africa, southwest and central Asia, northern China, and western South America. It is the only extant genus in its family, Ephedraceae, and order, Ephedrales, and one of the three living members of the division Gnetophyta alongside Gnetum and Welwitschia. In temperate climates, most Ephedra species grow on shores or in sandy soils with direct sun exposure. Common names in English include joint-pine, jointfir, Mormon-tea, or Brigham tea. The Chinese name for Ephedra species is mahuang. Ephedra is the origin of the name of the stimulant ephedrine, which the plants contain in significant concentration. Description The family Ephedraceae, of which Ephedra is the only extant genus, are gymnosperms, and generally shrubs, sometimes clambering vines, and rarely small trees. Members of the genus frequently spread by means of rhizomes. The stems are green and photosynthetic. The leaves are opposite or whorled. The scalelike leaves are fused into a sheath at the base and are often shed soon after development. There are no resin canals. The plants are mostly dioecious, with the pollen strobili in whorls of 1–10, each consisting of a series of decussate bracts. The pollen is furrowed. The female strobili also occur in whorls, with bracts which fuse around a single ovule. Fleshy bracts are white (such as in Ephedra frustillata) or red. There are generally 1–2 yellow to dark brown seeds per strobilus. Taxonomy The genus Ephedra was first described in 1753 by Carl Linnaeus, and the type species is Ephedra distachya. The family, Ephedraceae, was first described in 1829 by Dumortier.
Evolutionary history The oldest known members of the genus are from the Early Cretaceous around 125 million years ago, with records being known from the Aptian-Albian of Argentina, China, Portugal and the United States. The fossil record of Ephedra outside of pollen disappears after the Early Cretaceous. Molecular clock estimates have suggested that the last common ancestor of living Ephedra species lived much more recently, during the Early Oligocene around 30 million years ago. However, pollen modified from the ancestral condition of the genus with branched pseudosulci (grooves), which evolved in parallel in the living North American and Asian lineages, is known from the Late Cretaceous, suggesting that the last common ancestor is at least this old. Species Plants of the World Online accepted the following species: Ephedra alata Decne. – North Africa, Arabian Peninsula Ephedra altissima Desf. non-Bové (1834), non-Delile (1813), non-Buch (1828) (High-climbing jointfir) – North Africa, Canary Islands Ephedra americana Humb. & Bonpl. ex Willd. – Bolivia, Ecuador, Peru, Argentina, Chile Ephedra antisyphilitica Berland ex C.A.Mey. – Clapweed, Erect Ephedra – Texas, Oklahoma, New Mexico, Nuevo León, Coahuila, Chihuahua Ephedra aphylla Forssk. – eastern Mediterranean from Libya and Cyprus to the Persian Gulf Ephedra × arenicola H.C.Cutler – Arizona, Utah (hybrid, E. cutleri × E. torreyana) Ephedra aspera Engelm. ex S.Watson – Boundary Ephedra, Pitamoreal – Texas, New Mexico, Arizona, Utah, Nevada, California, Chihuahua, Durango, Zacatecas, Sinaloa, Sonora, Baja California Ephedra aurantiaca Takht. & Pachom. – Caucasus, Kazakhstan, Turkmenistan Ephedra aurea Brullo et al. Ephedra boelckei F.A.Roig – Argentina Ephedra botschantzevii Pachom. – Kazakhstan, Tuva region of Siberia Ephedra breana Phil. (frutilla de campo) – Peru, Bolivia, Chile, Argentina Ephedra brevifoliata Ghahr.
– Iran Ephedra californica S.Watson – California Ephedra, California Jointfir – California, western Arizona, Baja California Ephedra chengiae Yang & Ferguson Ephedra chilensis C.Presl – Pingo-pingo - Chile, Argentina Ephedra compacta Rose – widespread in much of Mexico Ephedra coryi E.L.Reed (Cory's Ephedra) – Texas, New Mexico Ephedra cutleri Peebles – Navajo Ephedra, Cutler's Ephedra, Cutler Mormon-tea, Cutler's Jointfir – Colorado, Utah, Arizona, New Mexico, Wyoming Ephedra dahurica Turcz. – Siberia, Mongolia Ephedra dawuensis Y.Yang – Sichuan Ephedra distachya L. – Joint-pine, Jointfir – southern Europe and central Asia from Portugal to Kazakhstan Ephedra × eleutherolepis V.A.Nikitin – Tajikistan (hybrid E. intermedia × E. strobilacea) Ephedra equisetina Bunge – Ma huang – Caucasus, Central Asia, Siberia, Mongolia, Gansu, Hebei, Inner Mongolia, Ningxia, Qinghai, Shanxi, Xinjiang Ephedra fasciculata A.Nelson – Arizona Ephedra, Arizona Jointfir, Desert Mormon-tea – Arizona, California, Nevada, Utah Ephedra fedtschenkoae Paulsen – Central Asia, Siberia, Mongolia, Xinjiang Ephedra foeminea Forssk. – North Africa, Somalia, Balkans, Italy, Middle East; naturalized in Santa Barbara County of California Ephedra foliata Boiss. ex C.A.Mey. (Shrubby horsetail) – North Africa, Somalia, Middle East, India Ephedra fragilis Desf. (joint pine) – Mediterranean, Canary Islands, Madeira Ephedra frustillata Miers – Patagonian Ephedra – Chile, Argentina Ephedra funerea Coville & C.V.Morton – Death Valley Ephedra, Death Valley Jointfir – California, Arizona, Nevada Ephedra gerardiana Wall. ex Klotzsch & Garcke – Gerard's Jointfir, Shan Ling Ma Huang – Himalayas, Tibet, Yunnan, Siberia, Central Asia Ephedra gracilis Phil. ex Stapf Ephedra holoptera Riedl – Iran Ephedra intermedia Schrenk & C.A.Mey. 
(Zhong Ma Huang) – China, Siberia, Central Asia, Himalayas, Iran, Pakistan Ephedra kardangensis P.Sharma & P.L.Uniyal – western Himalayas Ephedra khurikensis P.Sharma & P.L.Uniyal – western Himalayas Ephedra laristanica Assadi – Iran Ephedra likiangensis Florin – Guizhou, Sichuan, Tibet, Yunnan Ephedra lomatolepis Schrenk – Kazakhstan, Tuva region of Siberia Ephedra major Host – Mediterranean, Middle East, Central Asia; from Canary Islands to Kashmir Ephedra milleri Freitag & Maier-St. – Oman, Yemen Ephedra minuta Florin – Qinghai, Sichuan Ephedra monosperma J.G.Gmel. ex C.A.Mey. (dan zi ma huang) – Siberia, Mongolia, much of China including Tibet and Xinjiang Ephedra multiflora Phil. ex Stapf – Chile, Argentina Ephedra nevadensis S.Watson – Nevada Ephedra, Nevada Jointfir, Nevada Mormon-tea – Baja California, California, Arizona, Nevada, Utah, Oregon Ephedra ochreata Miers – Argentina Ephedra oxyphylla Riedl – Afghanistan Ephedra pachyclada Boiss. – Middle East from Sinai and Yemen to Pakistan Ephedra pangiensis Rita Singh & P.Sharma Ephedra pedunculata Engelm. ex S.Watson – Vine Ephedra, Vine Jointfir – Texas, Chihuahua, Coahuila, Durango, San Luis Potosí, Nuevo León, Zacatecas Ephedra pentandra Pachom. – Iran Ephedra procera Fisch. & C.A.Mey. − Iran, Caucasus Ephedra przewalskii Stapf – Central Asia, Mongolia, Pakistan, Gansu, Inner Mongolia, Ningxia, Qinghai, Tibet Ephedra pseudodistachya Pachom. – Siberia, Mongolia Ephedra regeliana Florin – Xi Zi Ma Huang – Central Asia, Siberia, Pakistan, Xinjiang Ephedra rhytidosperma Pachom., syn. E. lepidosperma C.Y.Cheng – Gansu, Inner Mongolia, Ningxia, Mongolia Ephedra rituensis Y.Yang, D.Z.Fu & G.H.Zhu – Qinghai, Xinjiang, Tibet Ephedra rupestris Benth. – Ecuador, Peru, Bolivia, Argentina Ephedra sarcocarpa Aitch. & Hemsl. – Pakistan, Afghanistan Ephedra saxatilis (Stapf) Royle ex Florin Ephedra sinica Stapf – Cao Ma Huang, Chinese ephedra – Mongolia, Siberia, Primorye, Manchuria Ephedra somalensis Freitag & Maier-St. 
– Somalia, Eritrea Ephedra stipitata Biswas & Rita Singh Ephedra strobilacea Bunge – Iran, Central Asia Ephedra strongylensis Brullo et al. Ephedra sumlingensis P.Sharma & P.L.Uniyal – western Himalayas Ephedra tilhoana Maire – Chad Ephedra torreyana S.Watson – Torrey's Ephedra, Torrey's Jointfir, Torrey's Mormon-tea, Cañutillo – Nevada, Utah, Colorado, Arizona, New Mexico, Texas, Chihuahua Ephedra transitoria Riedl – Iraq, Syria, Palestine, Saudi Arabia Ephedra triandra Tul. − Bolivia, Argentina Ephedra trifurca Torrey ex S.Watson – Longleaf Ephedra, Longleaf Jointfir, Longleaf Mormon-tea, Popotilla, Teposote – California, Arizona, New Mexico, Texas, Chihuahua, Sonora, Baja California Ephedra trifurcata Zöllner Ephedra tweedieana C.A.Mey. – Brazil, Argentina, Uruguay Ephedra viridis Coville – Green Ephedra, Green Mormon-tea – California, Nevada, Utah, Arizona, New Mexico, Colorado, Wyoming, South Dakota, Oregon Ephedra vvedenskyi Pachom. – Iran, Caucasus, Turkmenistan Ephedra yangthangensis Prabha Sharma & Rita Singh – Yangthang to Ka, Leo, Nako, Chango, Chulling, Sumdo, Hoorling and Lira of Kinnaur district of Himachal Pradesh Distribution The genus is found in desert regions worldwide, but not in Australia. Ecology Ephedraceae are adapted to extremely arid regions, growing often in high sunny habitats, and occur as high as 4000 m above sea level in both the Andes and the Himalayas. They make up a significant part of the North American Great Basin sagebrush ecosystem. Human use The remains of a buried Neanderthal found at Shanidar Cave in Iraqi Kurdistan, over 50,000 years old, were found associated with Ephedra pollen among those of other plants. While some authors have suggested that these represent plant remains deliberately buried alongside the Neanderthal, other authors have suggested that natural agents like bees may have been responsible for the accumulation of pollen.
In addition, archaeological remains of Ephedra dating back 15,000 years have been discovered at Taforalt Cave in Morocco. Fossil cones of Ephedra were found concentrated in the cemetery area, specifically within a human burial. The Ephedra alkaloids ephedrine and pseudoephedrine, constituents of E. sinica and other members of the genus, have sympathomimetic and decongestant qualities and have been used as dietary supplements, mainly for weight loss. The drug ephedrine is used to prevent low blood pressure during spinal anesthesia. In the United States, ephedra supplements were banned from the market in the early 21st century due to serious safety risks. Plants of the genus Ephedra, including E. sinica and others, were used in traditional medicine for treating headache and respiratory infections, but there is no scientific evidence they are effective or safe for these purposes. Ephedra has also had a role as a precursor in the clandestine manufacture of methamphetamine. Adverse effects Alkaloids obtained from the species of Ephedra used in herbal medicines, which are used to synthetically prepare pseudoephedrine and ephedrine, can cause cardiovascular events. These include arrhythmias, palpitations, tachycardia and myocardial infarction. Caffeine consumption in combination with ephedrine has been reported to increase the risk of these cardiovascular events. Economic botany and alkaloid content The earliest uses of Ephedra species (mahuang) for specific illnesses date back to 5000 BC. Ephedrine and its isomers were isolated in 1881 from Ephedra distachya and characterized by the Japanese organic chemist Nagai Nagayoshi. His work on isolating Ephedra's active ingredients as pure pharmaceutical substances led to the systematic production of semi-synthetic derivatives and is still relevant today.
Three species, Ephedra sinica, Ephedra vulgaris, and to a lesser extent Ephedra equisetina, are commercially grown in mainland China as a source of natural ephedrines and their isomers for use in pharmaceuticals. E. sinica and E. distachya usually carry six optically active phenylethylamines, mostly ephedrine and pseudoephedrine, with minor amounts of norephedrine and norpseudoephedrine as well as the three methylated analogs. Reliable information on the total alkaloid content of the crude drug is difficult to obtain. Based on HPLC analyses in industrial settings, the concentrations of total alkaloids in dried Herba Ephedra ranged between 1 and 4%, and in some cases up to 6%. For a review of the alkaloid distribution in different species of the genus Ephedra, see Jian-fang Cui (1991). Other American and European species of Ephedra, e.g. Ephedra nevadensis (Nevada Mormon tea), have not been systematically assayed; based on unpublished field investigations, they contain very low levels (less than 0.1%) or none at all.
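The percentage ranges quoted above translate directly into milligrams of alkaloid per gram of dried herb. A trivial Python sketch of that conversion (the function name is ours; the 1–6% figures are just the HPLC range reported above):

```python
def alkaloid_mass_mg(herb_g, content_pct):
    """Total alkaloid mass (mg) for a given mass of dried herb at a given content percentage."""
    return herb_g * 1000.0 * content_pct / 100.0

# The reported HPLC range of 1-4% (occasionally up to 6%) per gram of dried herb:
for pct in (1, 4, 6):
    print(f"{pct}% of 1 g -> {alkaloid_mass_mg(1, pct):.0f} mg")
```

So a 1–4% total alkaloid content corresponds to roughly 10–40 mg of alkaloids per gram of crude drug, which is why batch-to-batch assay variation matters for dosing.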
Lunar south pole
The lunar south pole is the southernmost point on the Moon. It is of interest to scientists because of the occurrence of water ice in permanently shadowed areas around it. The lunar south pole region features craters that are unique in that sunlight never reaches their interiors, even though their rims receive near-constant illumination. Such craters are cold traps that contain fossil records of hydrogen, water ice, and other volatiles dating from the early Solar System. In contrast, the lunar north pole region exhibits a much lower quantity of similarly sheltered craters. Geography The lunar south pole is located at the center of the lunar Antarctic Circle (80°S to 90°S). (The Moon's spin axis is inclined 88.5 degrees to the plane of the ecliptic.) The lunar south pole has shifted 5.5 degrees from its original position billions of years ago. This shift has changed the rotational axis of the Moon, allowing sunlight to reach previously shadowed areas, but the south pole still features some completely shadowed areas. Conversely, the pole also contains areas with permanent exposure to sunlight. The south pole region features many craters and basins, such as the South Pole–Aitken basin, which appears to be one of the most fundamental features of the Moon, and mountains, such as Epsilon Peak, which at 9.050 km is taller than any mountain found on Earth. The south pole temperature averages approximately . Craters The pole defined by the rotational axis of the Moon lies within Shackleton Crater. Notable craters nearest to the lunar south pole include de Gerlache, Sverdrup, Shoemaker, Faustini, Haworth, Nobile, and Cabeus. Discoveries Illumination The lunar south pole features a region with crater rims exposed to near-constant solar illumination, yet the interiors of the craters are permanently shaded from sunlight. The area's illumination was studied using high-resolution digital models produced from Lunar Reconnaissance Orbiter data. The lunar surface can also reflect solar wind as energetic neutral atoms.
On average, about 16% of the incident solar wind protons are reflected in this way, a fraction that varies with location. These backscattered hydrogen atoms form an integral flux that depends on how much of the incident plasma is reflected at the Moon's surface. They also reveal the boundary lines and the magnetic dynamics within the regions of these neutral atoms on the Moon's surface. Cold traps Cold traps are among the most important places in the lunar south pole region in terms of possible water ice and other volatile deposits. Cold traps can contain water and ice that originally came from comets, meteorites and solar wind-induced iron reduction. From experiments and sample readings, scientists were able to confirm that cold traps do contain ice. Hydroxyl has also been found in these cold traps. The discovery of these two compounds has led to the funding of missions focusing primarily on the lunar poles using global-scale infrared detection. The ice stays in these traps because of the thermal behavior of the Moon, which is controlled by thermophysical properties such as scattered sunlight, thermal re-radiation, internal heat and light given off by the Earth. Magnetic surface There are areas of the Moon where the crust is magnetized. This is known as a magnetic anomaly and is attributed to remnants of metallic iron emplaced by the impactor that formed the South Pole–Aitken basin (SPA basin). However, the concentration of iron thought to be in the basin was not seen in the mappings, as the deposits could be too deep in the Moon's crust for the mappings to detect. Alternatively, the magnetic anomaly may be caused by another factor that does not involve metallic materials. The findings proved inadequate due to inconsistencies between the maps that were used, which were also unable to resolve the magnitude of the magnetic fluctuations at the Moon's surface. Exploration Missions Orbiters from several countries have explored the region around the lunar south pole.
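The permanently shadowed interiors described above follow from simple geometry: with the spin axis tilted only about 1.5 degrees from the ecliptic normal (88.5 degrees from the ecliptic plane), the Sun never climbs more than roughly 1.5 degrees above the polar horizon, so even a modest crater rim keeps the floor behind it in permanent shadow. A back-of-the-envelope Python sketch (the function and example distance are illustrative assumptions; real illumination modeling also accounts for the Sun's angular size and local terrain):

```python
import math

# At the lunar poles the Sun's maximum elevation is about 90 - 88.5 = 1.5 degrees.
MAX_SUN_ELEV_DEG = 90.0 - 88.5

def rim_height_to_shadow(distance_m, elev_deg=MAX_SUN_ELEV_DEG):
    """Minimum rim height (m) that keeps a point `distance_m` behind it permanently shadowed."""
    return distance_m * math.tan(math.radians(elev_deg))

# A point 10 km inside a polar crater needs only a ~260 m rim to never see the Sun:
print(round(rim_height_to_shadow(10_000)))   # → 262
```

This is why polar craters of quite ordinary depth act as cold traps, while at lower latitudes the higher solar elevation lights crater floors for part of each lunar day.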
Extensive studies were conducted by the Lunar Orbiters, Clementine, Lunar Prospector, Lunar Reconnaissance Orbiter, Kaguya, and Chandrayaan-1, which discovered the presence of lunar water. NASA's LCROSS mission found a significant amount of water in Cabeus: the mission deliberately crashed an impactor into the floor of Cabeus and found from the ejecta samples that the floor material contained nearly 5% water. Lunar Reconnaissance Orbiter The Lunar Reconnaissance Orbiter (LRO) was launched by NASA on 18 June 2009 and is still mapping the lunar south pole region. The mission will help scientists determine whether the lunar south pole region has enough resources to sustain a permanent crewed station. The LRO carries the Diviner Lunar Radiometer Experiment, which investigates the radiation and thermophysical properties of the south pole surface. It can detect reflected solar radiation and internal infrared emissions, allowing it to identify where water ice could be trapped on the surface. LCROSS The Lunar Crater Observation and Sensing Satellite (LCROSS) was a robotic spacecraft operated by NASA. The mission was conceived as a low-cost means of determining the nature of the hydrogen detected at the polar regions of the Moon. Launched shortly after the discovery of lunar water by Chandrayaan-1, the main LCROSS mission objective was to further explore the presence of water in the form of ice in a permanently shadowed crater near a lunar polar region. It was launched together with the Lunar Reconnaissance Orbiter (LRO), with the rocket's Centaur upper stage serving as the impactor, and it was successful in confirming water in the southern lunar crater Cabeus. Moon Impact Probe The Moon Impact Probe (MIP), developed by the Indian Space Research Organisation (ISRO), India's national space agency, was a lunar probe released by ISRO's Chandrayaan-1 lunar remote sensing orbiter, which in turn was launched on 22 October 2008. 
The Moon Impact Probe separated from the Moon-orbiting Chandrayaan-1 on 14 November 2008 at 20:06 IST and, after nearly 25 minutes, crashed as planned near the rim of the crater Shackleton. With this mission, India became the first country to hard-land (impact) a probe on the lunar south pole. Luna 25 Russia launched its Luna 25 lunar lander on August 10, 2023. Luna 25 spent five days journeying to the Moon and was then to circle it for another five to seven days before setting down in the Moon's south polar region, near the crater Boguslawsky. However, an "emergency situation" occurred while the probe was being lowered to a pre-landing orbit, and the lander abruptly lost communication at 2:57 p.m. (11:57 GMT). Luna 25 was a lander only, with a primary mission of proving the landing technology. It carried scientific instruments, including a robotic arm for soil samples and possible drilling hardware. The launch took place on a Soyuz-2.1b rocket with a Fregat upper stage, from Vostochny Cosmodrome. Chandrayaan-3 On August 23, 2023 at 12:34 UTC, India's Chandrayaan-3 became the first lunar mission to achieve a soft landing near the lunar south pole. The mission consisted of a lander and a rover for carrying out scientific experiments. IM-1 The IM-1 Odysseus lander took about six days to travel from the Earth to the Moon, then spent approximately one more Earth day orbiting the Moon, setting February 22, 2024 at 11:24 PM UTC as the lander's lunar landing date. The initial aim was to land within the Malapert-A crater, near the lunar south pole. Odysseus became the first US moon landing of the 21st century. 
EagleCam to record lunar landing Just before landing, the Odysseus lander was to eject EagleCam, a camera-equipped CubeSat that would have dropped onto the lunar surface near the lander. However, due to complications arising from a software patch, it was decided that EagleCam would not be ejected during the landing. It was ejected later, on 28 February, but was only a partial success: it returned all of its data types except the post-landing images of IM-1 that were the main aim of its mission. Role in future exploration and observations The lunar south pole region is deemed a compelling place for future exploration missions and a suitable site for a lunar outpost. The permanently shadowed places on the Moon could contain ice and other minerals, which would be vital resources for future explorers. The mountain peaks near the pole are illuminated for long periods of time and could be used to provide solar energy to an outpost. With an outpost on the Moon, scientists would be able to analyze water and other volatile samples dating to the formation of the Solar System. Using LOLA (the Lunar Orbiter Laser Altimeter), an instrument NASA used to produce an accurate topographic model of the Moon, locations were identified near the south pole on Connecting Ridge, which links Shackleton to the crater de Gerlache, that receive sunlight 92.27–95.65% of the time at heights ranging from 2 m to 10 m above the ground. At the same spots, the longest continuous periods of darkness were found to be only 3 to 5 days. The lunar south pole is also a place where scientists may be able to perform unique astronomical observations of radio waves below 30 MHz. The Chinese Longjiang microsatellites were launched in May 2018 to orbit the Moon, and Longjiang-2 observed in this frequency range until 31 July 2019. 
Before Longjiang-2, no space observatory had been able to observe astronomical radio waves in this frequency range because of interference from equipment on Earth. The lunar south pole has mountains and basins, such as the south side of Malapert Mountain, that do not face Earth and would be ideal places for a ground radio observatory to receive such astronomical radio signals. Resources Solar power, oxygen, and metals are abundant resources in the south polar region. By locating a lunar resource processing facility near the south pole, solar-generated electrical power would allow nearly constant operation. Elements known to be present on the lunar surface include, among others, hydrogen (H), oxygen (O), silicon (Si), iron (Fe), magnesium (Mg), calcium (Ca), aluminium (Al), manganese (Mn) and titanium (Ti). Among the more abundant are oxygen, iron and silicon. The oxygen content is estimated at 45% (by weight). Future Blue Origin is planning a mission to the south polar region in about 2024. The Blue Moon lander derives from the vertical-landing technology used in Blue Origin's New Shepard sub-orbital rocket. This would lead to a series of missions landing equipment for a crewed base in a south polar region crater using the Blue Moon lander. NASA's Artemis program has proposed to land several robotic landers and rovers (CLPS) in preparation for the planned late-2020s Artemis III crewed landing in the south polar region.
Physical sciences
Solar System
Astronomy
2798024
https://en.wikipedia.org/wiki/Ferricyanide
Ferricyanide
Ferricyanide is the name of the anion [Fe(CN)6]3−. It is also called hexacyanoferrate(III) and, in rare but systematic nomenclature, hexacyanidoferrate(III). The most common salt of this anion is potassium ferricyanide, a red crystalline material that is used as an oxidant in organic chemistry. Properties [Fe(CN)6]3− consists of an Fe3+ center bound in octahedral geometry to six cyanide ligands. The complex has Oh symmetry. The iron is low-spin and easily reduced to the related ferrocyanide ion [Fe(CN)6]4−, which is a ferrous (Fe2+) derivative. This redox couple is reversible and entails no making or breaking of Fe–C bonds: [Fe(CN)6]3− + e− ⇌ [Fe(CN)6]4− This redox couple is a standard in electrochemistry. Compared to main-group cyanides like potassium cyanide, ferricyanides are much less toxic because of the strong bond between the cyanide ion (CN−) and the Fe3+ center. They do react with mineral acids, however, to release highly toxic hydrogen cyanide gas. Uses Treatment of ferricyanide with iron(II) salts affords the brilliant, long-lasting pigment Prussian blue, the traditional color of blueprints.
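Because the couple is reversible, its electrode potential follows the Nernst equation. A minimal Python sketch (the standard potential of about +0.36 V vs. SHE is a commonly cited textbook value and an assumption here, not stated in the text; real values shift with ionic strength):

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/(mol*K)), Faraday constant (C/mol)

def nernst_potential(ox, red, e0=0.36, temp_k=298.15):
    """Half-cell potential (V) for the one-electron couple ox + e- <=> red.

    ox, red: concentrations (activities) of ferricyanide and ferrocyanide.
    e0: standard potential vs. SHE; +0.36 V is an assumed textbook value.
    """
    return e0 + (R * temp_k / F) * math.log(ox / red)

# Equal concentrations: the measured potential equals the standard potential.
print(nernst_potential(1e-3, 1e-3))  # 0.36
# A 10:1 excess of ferricyanide shifts E up by ~59 mV at 25 degrees C.
print(nernst_potential(1e-2, 1e-3))
```

This one-line relationship is why the couple serves as an electrochemical standard: the potential depends only on the concentration ratio, not on any Fe–C bond chemistry.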
Physical sciences
Cyanide salts
Chemistry
2798040
https://en.wikipedia.org/wiki/Ferrocyanide
Ferrocyanide
Ferrocyanide is the name of the anion [Fe(CN)6]4−. Salts of this coordination complex give yellow solutions. It is usually available as the salt potassium ferrocyanide, which has the formula K4Fe(CN)6. [Fe(CN)6]4− is a diamagnetic species, featuring a low-spin iron(II) center in an octahedral ligand environment. Although many salts of cyanide are highly toxic, ferro- and ferricyanides are less toxic because they tend not to release free cyanide. Ferrocyanide is of commercial interest as a precursor to the pigment Prussian blue and, as its potassium salt, an anticaking agent. Reactions Treatment of ferrocyanide with ferric-containing salts gives the intensely coloured pigment Prussian blue (sometimes called ferric ferrocyanide and ferrous ferricyanide). Ferrocyanide is reversibly oxidized by one electron, giving ferricyanide: [Fe(CN)6]4− ⇌ [Fe(CN)6]3− + e− This conversion can be followed spectroscopically at 420 nm, since ferrocyanide has negligible absorption at this wavelength while ferricyanide has an extinction coefficient of 1040 M−1 cm−1. Applications The dominant use of ferrocyanides is as precursors to the Prussian blue pigments. Sodium ferrocyanide is a common anti-caking agent. Specialized applications involve their use as precipitating agents in the production of citric acid and wine. Research Ferrocyanide and its oxidized product ferricyanide cannot freely pass through the plasma membrane. For this reason ferrocyanide has been used as a probe of extracellular electron acceptors in the study of redox reactions in cells. Ferricyanide is consumed in the process, so any increase in ferrocyanide can be attributed to secretion of reductants or to transplasma-membrane electron-transport activity. Nickel ferrocyanide (Ni2Fe(CN)6) is also used as a catalyst in the electro-oxidation (anodic oxidation) of urea. Aspirational applications range from hydrogen production for cleaner energy with lower CO2 emissions to wastewater treatment. 
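The 420 nm monitoring described above is a direct Beer-Lambert calculation: since only ferricyanide absorbs at this wavelength, the measured absorbance maps straight to its concentration. A short illustrative sketch (the absorbance value is invented for the example; the extinction coefficient is the one quoted in the text):

```python
EPSILON_FERRI_420 = 1040.0  # extinction coefficient of ferricyanide at 420 nm, M^-1 cm^-1

def ferricyanide_conc(absorbance, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)."""
    return absorbance / (EPSILON_FERRI_420 * path_cm)

# Example: A = 0.52 in a 1 cm cuvette -> 5.0e-4 M (500 uM) ferricyanide.
print(ferricyanide_conc(0.52))
```

In a redox assay, the change in this concentration over time gives the rate of ferricyanide consumption, and hence of ferrocyanide production.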
Ferrocyanide is also studied as an electrolyte in flow batteries. Nomenclature According to the recommendations of IUPAC, ferrocyanide should be called "hexacyanidoferrate(II)". Cyanides as a chemical class were named because they were discovered in ferrocyanide, which in turn was named in Latin to mean "blue substance with iron"; the dye Prussian blue had first been made in the early 18th century. The word "cyanide" used in the name is from κύανος kyanos, Greek for "(dark) blue."
Physical sciences
Cyanide salts
Chemistry
2799611
https://en.wikipedia.org/wiki/Zosterophyll
Zosterophyll
The zosterophylls are a group of extinct land plants that first appeared in the Silurian period. The taxon was first established by Banks in 1968 as the subdivision Zosterophyllophytina; they have since also been treated as the division Zosterophyllophyta or Zosterophyta and the class or plesion Zosterophyllopsida or Zosteropsida. They were among the first vascular plants in the fossil record, and had a world-wide distribution. They were probably stem-group lycophytes, forming a sister group to the ancestors of the living lycophytes. By the late Silurian (late Ludlovian, about ) a diverse assemblage of species existed, examples of which have been found fossilised in what is now Bathurst Island in Arctic Canada. Morphology The stems of zosterophylls were either smooth or covered with small spines known as enations, branched dichotomously, and grew at the ends by unrolling, a process known as circinate vernation. The stems had a central vascular column in which the protoxylem was exarch, and the metaxylem developed centripetally. The sporangia were kidney-shaped (reniform), with conspicuous lateral dehiscence and were borne laterally in a fertile zone towards the tips of the branches. The zosterophylls were named after the aquatic flowering plant Zostera from a mistaken belief that the two groups were related. David P. Penhallow's generic description of the type genus Zosterophyllum refers to "Aquatic plants with creeping stems, from which arise narrow dichotomous branches and narrow linear leaves of the aspect of Zostera." Zosterophyllum rhenanum was reconstructed as aquatic, the lack of stomata on the lower axes giving support to this interpretation. However, current opinion is that the Zosterophylls were terrestrial plants, and Penhallow's "linear leaves" are interpreted as the aerial stems of the plant that had become flattened during fossilization. Stomata were present, particularly on the upper axes. 
Their absence on the lower portions of the axes suggests that this part of the plants may have been submerged, and that the plants dwelt in boggy ground or even shallow water. In many fossils these appear to consist of a slit-like opening in the middle of a single elongated guard cell, leading to comparison with the stomata of some mosses. However, this is now thought to result from the loss of the wall separating paired guard cells during fossilisation. Taxonomy and classification At first most of the fossilized early land plants other than bryophytes were placed in the class Psilophyta, established in 1917 by Kidston and Lang. As additional fossils were discovered and described, it became apparent that the Psilophyta were not a homogeneous group of plants, and in 1975 Banks developed his earlier proposal to split it into three groups, which he put at the rank of subdivision. One of these was the subdivision Zosterophyllophytina, named after the genus Zosterophyllum. For Banks, zosterophyllophytes or zosterophylls comprised plants with lateral sporangia which released their spores by splitting distally (i.e. away from their attachment), and which had exarch strands of xylem. 
Banks's classification produces the hierarchy:
Division Tracheata
  Subdivision †Zosterophyllophytina = zosterophyllophytes, zosterophylls
  Subdivision Lycophytina = lycopods
  + other subdivisions
Those who treat most of the extant groups of plants as divisions may raise both the zosterophylls and the Lycophytina sensu Banks to the rank of division:
Division Zosterophyllophyta = zosterophylls, zosterophyllophytes
Division Lycophyta = lycophytes
In their cladistic study published in 1997, Kenrick and Crane provided support for a clade uniting both the zosterophylls and the lycopsids, producing a classification which places the zosterophylls in a class Zosterophyllopsida of the subdivision Lycophytina:
Division Tracheata
  Subdivision Lycophytina = lycophytes
    Class †Zosterophyllopsida = zosterophylls
    Class Lycopodiopsida = lycopsids
This approach has been widely used alongside previous systems. A consequence is that "lycophyte" and corresponding formal names such as "Lycophyta" and "Lycophytina" are used by different authors in at least two senses: either excluding zosterophylls in the sense of Banks or including them in the sense of Kenrick and Crane. A further complication is that the cladograms of Kenrick and Crane show that the zosterophylls, broadly defined, are paraphyletic, but contain a 'core' clade of plants with marked bilateral symmetry and circinate tips. The class Zosterophyllopsida sensu Kenrick & Crane may be restricted to this core clade, leaving many genera (e.g. Hicklingia, Nothia) with no systematic placement other than Lycophytina sensu Kenrick & Crane, but nevertheless still informally called "zosterophylls". Under whatever name and rank, the zosterophylls have been divided into orders and families, e.g. the Zosterophyllales containing the Zosterophyllaceae and the Sawdoniales containing the Sawdoniaceae. 
Since the publication of cladograms showing that the group is paraphyletic, divisions of the class have been less used, being ignored, for example, in the 2009 paleobotany textbook by Taylor et al. Phylogeny In 2004, Crane et al. published a unified cladogram for the polysporangiophytes (plants with branched stems bearing sporangia), based on cladistic analyses of morphological features. This suggests that the zosterophylls were a paraphyletic stem group, related to the ancestors of modern lycophytes. Genera Genera which are included at or around the zosterophyll position in the cladogram, or have otherwise been included in the group by at least one source, and hence may be considered zosterophylls in the broadest sense, are listed below. "B" indicates genera included by Banks in his 1975 description of the Zosterophyllophytina: Adoketophyton, Anisophyton, Barinophyton, Bathurstia (B), Crenaticaulis (B), Danziella, Deheubarthia, Demersatheca, Discalis, Distichophytum (B), Gosferia (= Forgesia), Gosslingia (B), Guangnania, Gumuia, Hicklingia, Hsua, Huia, Jugumella, Konioria, Macivera, Nothia, Oricilla, Protobarinophyton, Ramoferis, Rebuchia (see Distichophytum), Sawdonia (B), Serrulacaulis, Tarella, Thrinkophyton, Trichopherophyton, Ventarura, Wenshania, Xitunia, Yunia, Zosterophyllum (B). Genera may not be assigned to this group by other authors; for example, Adoketophyton was regarded by Hao et al., who named the genus, as having evolved separately from the lycopsids, so that its taxonomic placement was uncertain. Barinophytes, like Barinophyton, have been considered to be possible lycopsids, or to fall between the lycopsids and the euphyllophytes.
Biology and health sciences
Lycophytes
Plants
2800195
https://en.wikipedia.org/wiki/Yakhch%C4%81l
Yakhchāl
A yakhchāl ( "ice pit"; yakh meaning "ice" and chāl meaning "pit") is an ancient type of ice house, which also made ice. They are primarily found in the Dasht-e Lut and Dasht-e-Kavir deserts, whose climates range from cold (BWk) to hot (BWh) desert regions. In present-day Iran, Afghanistan, and Tajikistan, the term yakhchāl is also used to refer to modern refrigerators. The structure typically had a domed shape above ground, a subterranean storage space, shade walls, and ice pools. It was most often used to store ice, but sometimes also to store food or to produce ice. The subterranean space and thick heat-resistant construction material insulated the storage space year-round. These structures have been built and used in Persia since ancient times. History Records indicate that these structures were built as far back as 400 BCE, when Persian engineers built yakhchāls in the desert to store ice, usually made nearby; many built hundreds of years ago remain standing. The ice created nearby and stored in yakhchāls was used throughout the year, especially during hot summer days, for various purposes, including the preservation of food, the chilling of treats, and the making of traditional Persian desserts like faloodeh and sorbets. Although many have deteriorated over the years with the spread of commercial refrigeration technology, some interest in them has been revived as a source of inspiration in low-energy housing design and sustainable architecture. Some, like a yakhchāl in Kerman (over a mile above sea level), have been well preserved: it still has its cone-shaped, eighteen-meter-high building, massive insulation, and continuous cooling waters that spiral down its side and keep the ice frozen throughout the summer. Design A yakhchāl's engineering is optimized to take advantage of the physics of evaporative cooling and radiative cooling, and the fact that the arid desert climate is low in both relative and absolute humidity. 
The low relative humidity increases the efficiency of evaporative cooling due to the vapor pressure differential, and the low absolute humidity increases the efficiency of radiative cooling, because water vapor in the air would otherwise inhibit it. In addition, in some desert climates, like those at high altitudes, temperatures drop below freezing at night. Their design is generally split into three areas:
The ice house or reservoir
The shade walls
The ice pits or pools
However, they varied greatly: some used all three components, whereas others were simply a large shade wall over a thin pool. Ice house Most yakhchāls operate like a traditional ice house. The tall, conical shape of the building optimizes the solar chimney effect, creating a convection current that guides any remaining heat upward and out through openings at the very top of the building. Through this passive process, the air inside the yakhchāl remains cooler than the outside. At the same time, the building allows cold air to pour in from entries at the structure's base and descend to the lowest part of the yakhchāl: large underground spaces up to in volume. The yakhchāl is built of a unique water-resistant mortar called sarooj. This mortar, composed of sand, clay, egg whites, lime, goat hair, and ash in specific proportions, resists heat transfer and is thought to be completely impenetrable to water. It acts as effective insulation all year round. The sarooj walls are at least two meters thick at the base. Yakhchāls also often have access to a qanat (Iranian aqueduct), and are sometimes equipped with bâdgirs (windcatchers or wind towers), built of mud or mud brick in square or round shapes with vents at the top, which funnel cool air down through internal, vertically placed wooden slats to the water or structure below. 
A bâdgir can also function as a chimney, releasing warm air out of the top and pulling cool air in from a base opening or a connected qanat (the air in a qanat is cooled by the underground stream). It is this construction that allows the ice house of a yakhchāl to take advantage of evaporative cooling, keeping the structure well below ambient temperatures. The ice inside the structure was often layered with wood and straw to keep the blocks from sticking to each other. Furthermore, most designs incorporated a hole at the bottom that connected back to the qanat, or simply acted as a well for drainage. Shade walls In most areas where yakhchāls were constructed, shaded ground is markedly cooler than unshaded ground, making shade walls necessary for production and storage, as well as giving workers extra time to harvest ice. A wall is usually built in an east–west direction near the yakhchāl; the walls were often very tall, both to minimize convection losses and to provide shade. Because of their height, the base of the walls was often significantly thicker, and in some designs the walls were arched and/or buttressed to support the load (as pictured at the yakhchāl at Sirjan). Water is often channeled from a qanat to a yakhchāl, where it fills the provisioning pools or powers the evaporative cooling throughout the ice house. Incoming water is channeled along the north side of the wall so that radiative cooling in the wall's shadow pre-chills the water before it enters the yakhchāl (as pictured at the yakhchāl at Kowsar). Ice is then brought either from the ice pools covered by the walls or from nearby mountains to be stored in the reservoir. Ice pools Many yakhchāls contained ice pools. 
These pools were constructed either to provision the yakhchāl with the water needed for evaporative cooling, so that ice could easily be prepared and transferred to inside storage, or for the production of ice on site. Sometimes these pools were square channels comparable in depth to a reflecting pool; often, no special material was used to finish the channel surface. At night, the ice pools would often have a negative energy budget:
Heat conduction into the pool would be minimal, thanks to the shade walls and/or a straw covering over the pool bed during the day.
Convection of hot air towards the pools would be minimal, either because of the pools' location or, again, due to the height and position of the walls.
Evaporation would take heat away from the pools, as from the rest of the yakhchāl.
Because of the low moisture content of the air, very little of the heat radiated upwards from the pool would be reflected back by the air above it, allowing the pool's heat to be emitted largely into space.
This meant that ice pools could use the cold of the desert nights and/or radiative cooling to freeze water, which would later be transported to storage as ice.
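The night-time energy budget described above can be put into rough numbers with the Stefan-Boltzmann law. The sketch below estimates the ice layer a shallow pool could form overnight by radiative cooling alone; the emissivity, effective sky temperature, and ten-hour night are illustrative assumptions (not figures from the text), and conduction and convection gains are ignored:

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2*K^4)
EMISSIVITY = 0.95  # assumed emissivity of the water surface
L_FUSION = 334e3   # latent heat of fusion of water, J/kg
RHO_ICE = 917.0    # density of ice, kg/m^3

def overnight_ice_thickness_m(t_water_k=273.15, t_sky_k=230.0, hours=10.0):
    """Estimate ice thickness (m) formed by net radiative loss alone.

    t_sky_k is an assumed effective clear-desert-sky temperature; real
    values vary with humidity and altitude. Conduction and convection
    into the pool are neglected, so this is an upper-end sketch.
    """
    net_flux = EMISSIVITY * SIGMA * (t_water_k**4 - t_sky_k**4)  # W/m^2
    energy = net_flux * hours * 3600.0                           # J/m^2
    return energy / (L_FUSION * RHO_ICE)

# Roughly a centimeter-scale layer of ice per clear desert night.
print(overnight_ice_thickness_m())
```

Even under these simplified assumptions the result is centimeter-scale ice per night, which is consistent with the pools being harvested each morning and the ice moved into insulated storage.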
Technology
Heating and cooling
null
2800677
https://en.wikipedia.org/wiki/D%C3%A9sir%C3%A9e%20potato
Désirée potato
The Désirée potato (sometimes rendered Desirée or Desiree) is a red-skinned main-crop potato originally bred in the Netherlands in 1962. It has yellow flesh with a distinctive flavour and is a favourite with allotment-holders because of its resistance to drought and its fair resistance to disease. It is a versatile, fairly waxy variety which is firm, holds its shape, and is useful for all methods of cooking, from roasting to mashing and salads. It is immune to potato wart and resistant to skin spot, with good resistance to PVY, tuber late blight and blackleg, and moderate resistance to PVA, PVX and fusarium dry rot. It is moderately susceptible to leaf late blight and leaf roll, and susceptible to common scab. Description
Habit: medium height, later spreading
Foliage: medium to dark grey-green, with strong purple colour throughout the plant
Stems: numerous, purple
Leaf: rigid, open, slightly arched
Leaflets: oval, pointed
Secondaries: few
Buds/flowers: buds large, red-purple on hairy stalks; flowers red-violet fading to white
Biology and health sciences
Root vegetables
Plants
2801560
https://en.wikipedia.org/wiki/Ocean%20acidification
Ocean acidification
Ocean acidification is the ongoing decrease in the pH of the Earth's ocean. Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide (CO2) levels now exceeding 422 ppm. CO2 from the atmosphere is absorbed by the oceans. This chemical reaction produces carbonic acid (H2CO3), which dissociates into a bicarbonate ion (HCO3−) and a hydrogen ion (H+). The presence of free hydrogen ions (H+) lowers the pH of the ocean, increasing acidity (this does not mean that seawater is acidic yet; it is still alkaline, with a pH higher than 8). Marine calcifying organisms, such as mollusks and corals, are especially vulnerable because they rely on calcium carbonate to build shells and skeletons. A change in pH of 0.1 represents a 26% increase in hydrogen ion concentration in the world's oceans (the pH scale is logarithmic, so a change of one pH unit is equivalent to a tenfold change in hydrogen ion concentration). Sea-surface pH and carbonate saturation states vary depending on ocean depth and location. Colder and higher-latitude waters are capable of absorbing more CO2. This can cause acidity to rise, lowering the pH and carbonate saturation levels in these areas. Several other factors influence the atmosphere-ocean CO2 exchange, and thus local ocean acidification: ocean currents and upwelling zones, proximity to large continental rivers, sea-ice coverage, and atmospheric exchange with nitrogen and sulfur from fossil fuel burning and agriculture. A lower ocean pH has a range of potentially harmful effects for marine organisms. Scientists have observed, for example, reduced calcification, lowered immune responses, and reduced energy for basic functions such as reproduction. Ocean acidification can impact marine ecosystems that provide food and livelihoods for many people. 
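The logarithmic relationship quoted above (a 0.1 pH drop corresponds to roughly a 26% rise in hydrogen ion concentration) is easy to verify numerically; a short sketch:

```python
def h_ion_increase(ph_before, ph_after):
    """Fractional increase in [H+] when pH falls from ph_before to ph_after.

    pH = -log10([H+]), so [H+] = 10**(-pH) and the ratio depends only
    on the pH difference, not on the absolute values.
    """
    return 10 ** (-ph_after) / 10 ** (-ph_before) - 1.0

# The 1950-2020 surface-ocean change quoted above: 8.15 -> 8.05.
print(round(h_ion_increase(8.15, 8.05) * 100, 1))  # 25.9 (i.e. ~26% more H+)
# A full pH unit is a tenfold change in [H+].
print(round(h_ion_increase(8.0, 7.0) + 1.0, 6))  # 10.0
```

This is why a seemingly small 0.1-unit pH change is significant: the underlying ion concentration shifts by over a quarter.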
About one billion people are wholly or partially dependent on the fishing, tourism, and coastal management services provided by coral reefs. Ongoing acidification of the oceans may therefore threaten food chains linked with the oceans. The only solution that addresses the root cause of ocean acidification is to reduce carbon dioxide emissions, which is one of the main objectives of climate change mitigation measures. The removal of carbon dioxide from the atmosphere would also help to reverse ocean acidification. In addition, there are some specific ocean-based mitigation methods, for example ocean alkalinity enhancement and enhanced weathering. These strategies are under investigation, but generally have a low technology readiness level and carry many risks. Ocean acidification has happened before in Earth's geologic history. The resulting ecological collapse in the oceans had long-lasting effects on the global carbon cycle and climate. Cause Present-day (2021) atmospheric carbon dioxide (CO2) levels of around 415 ppm are about 50% higher than preindustrial concentrations. The current elevated levels and rapid growth rates are unprecedented in the past 55 million years of the geological record. The sources of this excess CO2 are clearly established as human-driven: they include anthropogenic fossil fuel, industrial, and land-use/land-change emissions. Fossil fuels burned for energy release CO2 into the atmosphere as a byproduct of combustion and are a significant contributor to the increasing levels of CO2 in the Earth's atmosphere. The ocean acts as a carbon sink for anthropogenic CO2 and takes up roughly a quarter of total anthropogenic CO2 emissions. However, the additional CO2 in the ocean results in a wholesale shift in seawater acid-base chemistry toward more acidic, lower-pH conditions and lower saturation states for the carbonate minerals used in many marine organisms' shells and skeletons. 
Accumulated since 1850, the ocean sink holds up to 175 ± 35 gigatons of carbon, with more than two-thirds of this amount (120 GtC) taken up by the global ocean since 1960. Over the historical period, the ocean sink increased in pace with the exponential increase in anthropogenic emissions. From 1850 until 2022, the ocean absorbed 26% of total anthropogenic emissions. Emissions during the period 1850–2021 amounted to 670 ± 65 gigatons of carbon and were partitioned among the atmosphere (41%), ocean (26%), and land (31%). The carbon cycle describes the fluxes of carbon dioxide (CO2) between the oceans, terrestrial biosphere, lithosphere, and atmosphere. It involves both organic compounds such as cellulose and inorganic carbon compounds such as carbon dioxide, the carbonate ion, and the bicarbonate ion, together referred to as dissolved inorganic carbon (DIC). These inorganic compounds are particularly significant for ocean acidification, as they include many forms of dissolved CO2 present in the Earth's oceans. When CO2 dissolves, it reacts with water to form a balance of ionic and non-ionic chemical species: dissolved free carbon dioxide (CO2(aq)), carbonic acid (H2CO3), bicarbonate (HCO3−) and carbonate (CO32−). The ratio of these species depends on factors such as seawater temperature, pressure and salinity (as shown in a Bjerrum plot). These different forms of dissolved inorganic carbon are transferred from the ocean's surface to its interior by the ocean's solubility pump. The resistance of an area of ocean to absorbing atmospheric CO2 is known as the Revelle factor. Main effects The ocean's chemistry is changing due to the uptake of anthropogenic carbon dioxide (CO2). Ocean pH, carbonate ion concentrations ([CO32−]), and calcium carbonate mineral saturation states (Ω) have been declining as a result of the uptake of approximately 30% of anthropogenic carbon dioxide emissions over the past 270 years (since around 1750). 
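The speciation balance described above (the basis of the Bjerrum plot) can be sketched numerically from the two acid-dissociation constants of the carbonate system. The pK values below are typical freshwater figures at 25 °C, used only for illustration; apparent seawater constants differ with salinity, temperature and pressure:

```python
def carbonate_fractions(ph, pk1=6.35, pk2=10.33):
    """Fractions of (CO2(aq)+H2CO3, HCO3-, CO3 2-) at a given pH.

    Standard diprotic-acid speciation: each fraction follows from the
    equilibrium constants K1, K2 and the hydrogen-ion activity h.
    pk1/pk2 default to illustrative freshwater values at 25 degrees C.
    """
    h = 10 ** (-ph)
    k1, k2 = 10 ** (-pk1), 10 ** (-pk2)
    denom = h * h + k1 * h + k1 * k2
    return (h * h / denom, k1 * h / denom, k1 * k2 / denom)

co2, hco3, co3 = carbonate_fractions(8.1)
# At a seawater-like pH of ~8.1, bicarbonate dominates dissolved inorganic carbon.
print(round(co2, 3), round(hco3, 3), round(co3, 3))
```

Lowering the pH input shifts the output toward CO2 and away from carbonate, which is exactly the change that reduces the carbonate available to calcifiers.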
This process, commonly referred to as "ocean acidification", is making it harder for marine calcifiers to build shells or skeletal structures, endangering coral reefs and the broader marine ecosystems. Ocean acidification has been called the "evil twin of global warming" and "the other CO2 problem". Increased ocean temperatures and oxygen loss act concurrently with ocean acidification and constitute the "deadly trio" of climate change pressures on the marine environment. The impacts will be most severe for coral reefs and other shelled marine organisms, as well as for the populations that depend on the ecosystem services they provide. Reduction in pH value Dissolving CO2 in seawater increases the hydrogen ion (H+) concentration in the ocean, and thus decreases ocean pH, as follows: CO2(aq) + H2O ⇌ H2CO3 ⇌ HCO3− + H+. In shallow coastal and shelf regions, a number of factors interplay to affect air-ocean CO2 exchange and the resulting pH change. These include biological processes, such as photosynthesis and respiration, as well as water upwelling. Also, ecosystem metabolism in freshwater sources reaching coastal waters can lead to large, but local, pH changes. Freshwater bodies also appear to be acidifying, although this is a more complex and less obvious phenomenon. The absorption of CO2 from the atmosphere does not affect the ocean's alkalinity. This matters in this context because alkalinity is the capacity of water to resist acidification. Ocean alkalinity enhancement has been proposed as one option to add alkalinity to the ocean and thereby buffer against pH changes. Decreased calcification in marine organisms Changes in ocean chemistry can have extensive direct and indirect effects on organisms and their habitats. One of the most important repercussions of increasing ocean acidity relates to the production of shells out of calcium carbonate (CaCO3). This process is called calcification and is important to the biology and survival of a wide range of marine organisms. 
Calcification involves the precipitation of dissolved ions into solid CaCO3 structures, which are important for many marine organisms such as coccolithophores, foraminifera, crustaceans and mollusks. After they are formed, these structures are vulnerable to dissolution unless the surrounding seawater contains saturating concentrations of carbonate ions (CO32−). Very little of the extra carbon dioxide that is added into the ocean remains as dissolved carbon dioxide. The majority dissociates into additional bicarbonate and free hydrogen ions. The increase in hydrogen is larger than the increase in bicarbonate, creating an imbalance in the reaction HCO3− ⇌ CO32− + H+. To maintain chemical equilibrium, some of the carbonate ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate. Thus the ocean's concentration of carbonate ions is reduced, removing an essential building block for marine organisms to build shells, or calcify: Ca2+ + CO32− ⇌ CaCO3. The increase in concentrations of dissolved carbon dioxide and bicarbonate, and the reduction in carbonate, are shown in the Bjerrum plot. Disruption of the food chain is also a possible effect, as many marine organisms rely on calcium carbonate-based organisms at the base of the food chain for food and habitat. This can have detrimental effects throughout the food web and potentially lead to a decline in the availability of fish stocks, which would have an impact on human livelihoods. Decrease in saturation state The saturation state (known as Ω) of seawater for a mineral is a measure of the thermodynamic potential for the mineral to form or to dissolve, and for calcium carbonate is described by the following equation: Ω = [Ca2+][CO32−] / Ksp. Here Ω is the product of the concentrations (or activities) of the reacting ions that form the mineral (Ca2+ and CO32−), divided by the apparent solubility product at equilibrium (Ksp), that is, when the rates of precipitation and dissolution are equal.
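The saturation-state expression can be evaluated with round illustrative numbers (the ion concentrations and Ksp below are typical textbook magnitudes for warm surface seawater and aragonite, not measured values):

```python
# Sketch: saturation state Omega = [Ca2+][CO3^2-] / Ksp.
# Illustrative values: [Ca2+] ~ 0.0103 mol/kg in seawater,
# [CO3^2-] ~ 200 umol/kg at the warm surface,
# Ksp for aragonite ~ 6.5e-7 (mol/kg)^2.
def saturation_state(ca, co3, ksp):
    return ca * co3 / ksp

omega_aragonite = saturation_state(0.0103, 200e-6, 6.5e-7)
# Omega > 1: aragonite tends to precipitate (shells are stable);
# Omega < 1: aragonite tends to dissolve.
```

Because Ω scales linearly with the carbonate ion concentration, the acidification-driven drop in [CO32−] translates directly into a lower saturation state.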
In seawater, a dissolution boundary forms as a result of temperature, pressure, and depth, and is known as the saturation horizon. Above this saturation horizon, Ω has a value greater than 1, and CaCO3 does not readily dissolve. Most calcifying organisms live in such waters. Below this depth, Ω has a value less than 1, and CaCO3 will dissolve. The carbonate compensation depth is the ocean depth at which carbonate dissolution balances the supply of carbonate to the sea floor; sediment below this depth is therefore devoid of calcium carbonate. Increasing CO2 levels, and the resulting lower pH of seawater, decrease the concentration of CO32− and the saturation state of CaCO3, therefore increasing CaCO3 dissolution. Calcium carbonate occurs in two common polymorphs (crystalline forms): aragonite and calcite. Aragonite is much more soluble than calcite, so the aragonite saturation horizon, and aragonite compensation depth, is always nearer to the surface than the calcite saturation horizon. This also means that organisms that produce aragonite may be more vulnerable to changes in ocean acidity than those that produce calcite. Ocean acidification and the resulting decrease in carbonate saturation states raise the saturation horizons of both forms closer to the surface. This decrease in saturation state is one of the main factors leading to decreased calcification in marine organisms, because the inorganic precipitation of CaCO3 is directly proportional to its saturation state, and calcifying organisms exhibit stress in waters with lower saturation states. Natural variability and climate feedbacks Large quantities of water undersaturated in aragonite are already upwelling close to the Pacific continental shelf area of North America, from Vancouver to Northern California. These continental shelves play an important role in marine ecosystems, since most marine organisms live or are spawned there. Other shelf areas may be experiencing similar effects.
At depths of thousands of meters in the ocean, calcium carbonate shells begin to dissolve as increasing pressure and decreasing temperature shift the chemical equilibria controlling calcium carbonate precipitation. The depth at which this occurs is known as the carbonate compensation depth. Ocean acidification will increase such dissolution and shallow the carbonate compensation depth on timescales of tens to hundreds of years. Zones of downwelling are being affected first. In the North Pacific and North Atlantic, saturation states are also decreasing (the saturation depth is getting shallower). Ocean acidification is progressing in the open ocean as CO2 is carried to greater depths by ocean mixing. In the open ocean, this causes carbonate compensation depths to become shallower, meaning that dissolution of calcium carbonate will occur below those depths. In the North Pacific these carbonate saturation depths are shallowing at a rate of 1–2 m per year. It is expected that ocean acidification will lead to a significant decrease in the burial of carbonate sediments for several centuries, and even to the dissolution of existing carbonate sediments. Measured and estimated values Present day and recent history Between 1950 and 2020, the average pH value of the ocean surface is estimated to have decreased from approximately 8.15 to 8.05. This represents an increase of around 26% in the hydrogen ion concentration of the world's oceans (the pH scale is logarithmic, so a change of one pH unit is equivalent to a tenfold change in hydrogen ion concentration). For example, in the 15-year period 1995–2010 alone, acidity increased 6 percent in the upper 100 meters of the Pacific Ocean from Hawaii to Alaska. The IPCC Sixth Assessment Report in 2021 stated that "present-day surface pH values are unprecedented for at least 26,000 years and current rates of pH change are unprecedented since at least that time".
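The quoted 26% rise in hydrogen ion concentration follows directly from the logarithmic pH scale and the 0.1-unit drop, and can be checked with a line of arithmetic:

```python
# pH = -log10([H+]), so a surface-ocean pH drop from 8.15 to 8.05
# implies a 10**0.1-fold (~1.26x) rise in hydrogen ion concentration.
h_1950 = 10.0 ** -8.15
h_2020 = 10.0 ** -8.05
percent_increase = (h_2020 / h_1950 - 1.0) * 100.0
# percent_increase is about 25.9, i.e. the "around 26%" quoted above
```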
The pH value of the ocean interior has declined over the last 20–30 years everywhere in the global ocean. The report also found that "pH in open ocean surface water has declined by about 0.017 to 0.027 pH units per decade since the late 1980s". The rate of decline differs by region, due to complex interactions between different types of forcing mechanisms: "In the tropical Pacific, its central and eastern upwelling zones exhibited a faster pH decline of minus 0.022 to minus 0.026 pH unit per decade." This is thought to be "due to increased upwelling of CO2-rich sub-surface waters in addition to anthropogenic CO2 uptake." Some regions exhibited a slower acidification rate: a pH decline of minus 0.010 to minus 0.013 pH unit per decade has been observed in warm pools in the western tropical Pacific. The rate at which ocean acidification occurs may be influenced by the rate of surface ocean warming, because warm waters will not absorb as much CO2. Greater seawater warming could therefore limit CO2 absorption and lead to a smaller change in pH for a given increase in CO2. The difference in temperature changes between basins is one of the main reasons for the differences in acidification rates between localities. Current rates of ocean acidification have been likened to the greenhouse event at the Paleocene–Eocene boundary (about 56 million years ago), when surface ocean temperatures rose by 5–6 degrees Celsius. In that event, surface ecosystems experienced a variety of impacts, but bottom-dwelling organisms in the deep ocean experienced a major extinction. Currently, the rate of carbon addition to the atmosphere–ocean system is about ten times the rate that occurred at the Paleocene–Eocene boundary. Extensive observational systems are now in place or being built for monitoring seawater chemistry and acidification, for both the global open ocean and some coastal systems. Geologic past Ocean acidification has occurred previously in Earth's history.
It happened during the Capitanian mass extinction, at the end-Permian extinction, during the end-Triassic extinction, and during the Cretaceous–Palaeogene extinction event. Three of the big five mass extinction events in the geologic past were associated with a rapid increase in atmospheric carbon dioxide, probably due to volcanism and/or thermal dissociation of marine gas hydrates. Elevated CO2 levels impacted biodiversity. Decreased CaCO3 saturation due to seawater uptake of volcanogenic CO2 has been suggested as a possible kill mechanism during the marine mass extinction at the end of the Triassic. The end-Triassic biotic crisis remains the most well-established example of a marine mass extinction due to ocean acidification, because (a) carbon isotope records suggest enhanced volcanic activity that decreased carbonate sedimentation, which reduced the carbonate compensation depth and the carbonate saturation state, and a marine extinction coincided precisely with this in the stratigraphic record, and (b) there was pronounced selectivity of the extinction against organisms with thick aragonitic skeletons, which is predicted from experimental studies. Ocean acidification has also been suggested as one cause of the end-Permian mass extinction and the end-Cretaceous crisis. Overall, multiple climatic stressors, including ocean acidification, were likely the cause of these geologic extinction events. The most notable example of ocean acidification is the Paleocene–Eocene Thermal Maximum (PETM), which occurred approximately 56 million years ago, when massive amounts of carbon entered the ocean and atmosphere and led to the dissolution of carbonate sediments across many ocean basins. Relatively new geochemical methods of testing for pH in the past indicate that pH dropped 0.3 units across the PETM.
One study that solved the marine carbonate system for saturation state found that it may not have changed much over the PETM, suggesting that the rate of carbon release during our best geological analogue was much slower than human-induced carbon emissions. However, stronger proxy methods for saturation state are needed to assess how much this pH change may have affected calcifying organisms. Predicted future values Importantly, the rate of change in ocean acidification is much higher than in the geological past. This faster change prevents organisms from gradually adapting, and prevents slower-acting carbon cycle feedbacks from mitigating ocean acidification. Ocean acidification is now on a path to reach lower pH levels than at any other point in the last 300 million years. The rate of ocean acidification (i.e. the rate of change in pH value) is also estimated to be unprecedented over that same time scale. In combination with other ocean biogeochemical changes, this drop in pH value could undermine the functioning of marine ecosystems and disrupt the provision of many goods and services associated with the ocean, beginning as early as 2100. The extent of further ocean chemistry changes, including ocean pH, will depend on climate change mitigation efforts taken by nations and their governments. Different scenarios of projected socioeconomic global changes are modelled using the Shared Socioeconomic Pathways (SSP) scenarios. Under a very high emission scenario (SSP5-8.5), model projections estimate that surface ocean pH could decrease by as much as 0.44 units by the end of this century, compared to the end of the 19th century. This would mean a pH as low as about 7.7, and represents a further increase in H+ concentration of two to four times beyond the increase to date.
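The projected 0.44-unit drop can be converted to a hydrogen ion multiplier the same way as the historical figure, as a consistency check against the "two to four times" range quoted for the high-emission projections:

```python
# A projected surface pH decrease of 0.44 units (SSP5-8.5, end of the
# 21st century relative to the end of the 19th) multiplies the hydrogen
# ion concentration by 10**0.44.
h_factor = 10.0 ** 0.44
# h_factor is about 2.75, within the two-to-four-fold range given above
```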
Impacts on oceanic calcifying organisms Complexity of research findings The full ecological consequences of the changes in calcification due to ocean acidification are complex, but it appears likely that many calcifying species will be adversely affected by ocean acidification. Increasing ocean acidification makes it more difficult for shell-accreting organisms to access carbonate ions, essential for the production of their hard exoskeletal shells. Oceanic calcifying organisms span the food chain from autotrophs to heterotrophs and include organisms such as coccolithophores, corals, foraminifera, echinoderms, crustaceans and molluscs. Overall, all marine ecosystems on Earth will be exposed to changes in acidification and several other ocean biogeochemical changes. Ocean acidification may force some organisms to reallocate resources away from productive endpoints in order to maintain calcification. For example, the oyster Magallana gigas is known to experience metabolic changes alongside altered calcification rates due to energetic tradeoffs resulting from pH imbalances. Under normal conditions, calcite and aragonite are stable in surface waters, since surface seawater is supersaturated with respect to these carbonate minerals. However, as ocean pH falls, the concentration of carbonate ions also decreases. Calcium carbonate thus becomes undersaturated, and structures made of calcium carbonate are vulnerable to calcification stress and dissolution. In particular, studies show that corals, coccolithophores, coralline algae, foraminifera, shellfish and pteropods experience reduced calcification or enhanced dissolution when exposed to elevated CO2. Even with active marine conservation practices, it may be impossible to bring back many previous shellfish populations.
Some studies have found different responses to ocean acidification, with coccolithophore calcification and photosynthesis both increasing under elevated atmospheric pCO2, an equal decline in primary production and calcification in response to elevated CO2, or the direction of the response varying between species. Similarly, the sea star Pisaster ochraceus shows enhanced growth in waters with increased acidity. Reduced calcification from ocean acidification may affect the ocean's biologically driven sequestration of carbon from the atmosphere to the ocean interior and seafloor sediment, weakening the so-called biological pump. Seawater acidification could also reduce the size of Antarctic phytoplankton, making them less effective at storing carbon. Such changes are being increasingly studied and synthesized through the use of physiological frameworks, including the Adverse Outcome Pathway (AOP) framework. Coccolithophores A coccolithophore is a unicellular, eukaryotic phytoplankton (alga). Understanding calcification changes in coccolithophores may be particularly important because a decline in coccolithophores may have secondary effects on climate: it could contribute to global warming by decreasing the Earth's albedo via their effects on oceanic cloud cover. A study in 2008 examined a sediment core from the North Atlantic and found that the species composition of coccolithophorids remained unchanged over the past 224 years (1780 to 2004), but that the average coccolith mass had increased by 40% during the same period. Corals Warm water corals are clearly in decline, with losses of 50% over the last 30–50 years due to multiple threats from ocean warming, ocean acidification, pollution and physical damage from activities such as fishing, and these pressures are expected to intensify. The fluid in the internal compartments (the coelenteron) where corals grow their exoskeleton is also extremely important for calcification growth.
When the saturation state of aragonite in the external seawater is at ambient levels, corals grow aragonite crystals rapidly in their internal compartments, and hence their exoskeletons grow rapidly. If the saturation state of aragonite in the external seawater is lower than ambient, the corals have to work harder to maintain the right balance in the internal compartment. When that happens, the process of growing the crystals slows down, and this slows the rate at which their exoskeleton grows. Depending on the aragonite saturation state in the surrounding water, the corals may halt growth, because pumping aragonite into the internal compartment will not be energetically favorable. Under the current progression of carbon emissions, around 70% of North Atlantic cold-water corals will be living in corrosive waters by 2050–60. Acidified conditions primarily reduce the coral's capacity to build dense exoskeletons, rather than affecting the linear extension of the exoskeleton. The density of some species of corals could be reduced by over 20% by the end of this century. An in situ experiment, conducted on a 400 m2 patch of the Great Barrier Reef, that decreased the seawater CO2 level (raising pH) to near the preindustrial value showed a 7% increase in net calcification. A similar experiment that raised the in situ seawater CO2 level (lowering pH) to a level expected soon after 2050 found that net calcification decreased by 34%. However, a field study of coral reefs in Queensland and Western Australia from 2007 to 2012 found that corals are more resistant to environmental pH changes than previously thought, due to internal homeostasis regulation; this makes thermal change (marine heatwaves), which leads to coral bleaching, rather than acidification, the main factor in coral reef vulnerability to climate change.
Studies at carbon dioxide seep sites In some places carbon dioxide bubbles out from the sea floor, locally changing the pH and other aspects of the chemistry of the seawater. Studies of these carbon dioxide seeps have documented a variety of responses by different organisms. Coral reef communities located near carbon dioxide seeps are of particular interest because of the sensitivity of some coral species to acidification. In Papua New Guinea, declining pH caused by carbon dioxide seeps is associated with declines in coral species diversity. However, in Palau carbon dioxide seeps are not associated with reduced species diversity of corals, although bioerosion of coral skeletons is much higher at low-pH sites. Pteropods and brittle stars Pteropods and brittle stars both form the base of Arctic food webs, and both are seriously damaged by acidification. Pteropod shells dissolve with increasing acidification, and brittle stars lose muscle mass when re-growing appendages. To create shells, pteropods require aragonite, which is produced from carbonate ions and dissolved calcium and strontium. Pteropods are severely affected because increasing acidification levels have steadily decreased the amount of water supersaturated with carbonate. The degradation of organic matter in Arctic waters has amplified ocean acidification; some Arctic waters are already undersaturated with respect to aragonite. Brittle star eggs die within a few days when exposed to the conditions expected to result from Arctic acidification. Similarly, when exposed in experiments to pH reduced by 0.2 to 0.4, fewer than 0.1 percent of the larvae of a temperate brittle star, a relative of the common sea star, survived more than eight days.
Other impacts on ecosystems Other biological impacts Aside from the slowing and/or reversal of calcification, organisms may suffer other adverse effects, either indirectly through negative impacts on food resources, or directly as reproductive or physiological effects. For example, elevated oceanic levels of CO2 may produce CO2-induced acidification of body fluids, known as hypercapnia. Increasing acidity has been observed to reduce metabolic rates in jumbo squid and depress the immune responses of blue mussels. Atlantic longfin squid eggs took longer to hatch in acidified water, and the squid's statolith was smaller and malformed in animals placed in sea water with a lower pH. However, these studies are ongoing and there is not yet a full understanding of these processes in marine organisms or ecosystems. Acoustic properties Another potential route to ecosystem impacts is through bioacoustics. Ocean acidification can alter the acoustic properties of seawater, allowing sound to propagate further and increasing ocean noise. This impacts all animals that use sound for echolocation or communication. Algae and seagrasses Another possible effect would be an increase in harmful algal bloom events, which could contribute to the accumulation of toxins (domoic acid, brevetoxin, saxitoxin) in small organisms such as anchovies and shellfish, in turn increasing occurrences of amnesic shellfish poisoning, neurotoxic shellfish poisoning and paralytic shellfish poisoning. Although algal blooms can be harmful, other beneficial photosynthetic organisms may benefit from increased levels of carbon dioxide. Most importantly, seagrasses will benefit. Research found that as seagrasses increased their photosynthetic activity, the calcification rates of calcifying algae rose, likely because the localized photosynthetic activity absorbed carbon dioxide and elevated local pH. Fish larvae Ocean acidification can also affect marine fish larvae.
It internally affects their olfactory systems, a crucial part of their early development. Orange clownfish larvae mostly live on oceanic reefs that are surrounded by vegetated islands. Larvae are known to use their sense of smell to detect the differences between reefs surrounded by vegetated islands and reefs that are not. Clownfish larvae need to be able to distinguish between these two destinations to find a suitable area for their growth. Another use of marine fish olfactory systems is to distinguish between their parents and other adult fish, in order to avoid inbreeding. In an experimental aquarium facility, clownfish were kept in non-manipulated seawater with a pH of 8.15 ± 0.07, similar to the current ocean pH. To test for the effects of different pH levels, the seawater was adjusted to two other pH levels, which corresponded with climate change models that predict future atmospheric CO2 levels. For the year 2100, the models project possible CO2 levels of 1,000 ppm, which correlates with a pH of 7.8 ± 0.05. This experiment showed that when larvae are exposed to a pH of 7.8 ± 0.05, their reaction to environmental cues differs drastically from their reaction to cues at current ocean pH levels. At pH 7.6 ± 0.05 larvae had no reaction to any type of cue. However, a meta-analysis published in 2022 found that the effect sizes of published studies testing for ocean acidification effects on fish behavior have declined by an order of magnitude over the past decade, and have been negligible for the past five years. Embryos of the European eel, a critically endangered species that is nevertheless prominent in aquaculture, are also affected by ocean acidification. Although European eels spend most of their lives in fresh water, usually in rivers, streams, or estuaries, they go to spawn and die in the Sargasso Sea. It is there that they experience the effects of acidification in one of their key life stages.
Fish embryos and larvae are usually more sensitive to pH changes than adults, as their organs for pH regulation are not fully developed. Because of this, European eel embryos are especially vulnerable to changes in pH in the Sargasso Sea. A study of the European eel in the Sargasso Sea, conducted in 2021 to analyze the specific effects of ocean acidification on embryos, found that exposure to predicted end-of-century ocean pCO2 conditions may affect the normal development of this species in nature during sensitive early life-history stages with limited physiological response capacities, while extreme acidification would negatively influence embryonic survival and development under hatchery conditions. Compounded effects of acidification, warming and deoxygenation There is a substantial body of research showing that a combination of ocean acidification and elevated ocean temperature has a compounded effect on marine life and the ocean environment. This effect far exceeds the individual harmful impact of either. In addition, ocean warming, along with increased productivity of phytoplankton from higher CO2 levels, exacerbates ocean deoxygenation. Deoxygenation of ocean waters is an additional stressor on marine organisms; warming also increases ocean stratification, limiting the supply of nutrients over time and reducing biological gradients. Meta-analyses have quantified the direction and magnitude of the harmful effects of combined ocean acidification, warming and deoxygenation on the ocean. These meta-analyses have been further tested by mesocosm studies that simulated the interaction of these stressors and found a catastrophic effect on the marine food web: thermal stress more than negates any increase in productivity passed from primary producers to herbivores under elevated CO2. Impacts on the economy and societies The increase in ocean acidity decelerates the rate of calcification in salt water, leading to smaller and slower-growing coral reefs, which support approximately 25% of marine life.
Impacts are far-reaching, from fisheries and coastal environments down to the deepest depths of the ocean. The increase in ocean acidity is not only killing coral, but also harming the widely diverse populations of marine inhabitants that coral reefs support. Fishing and tourism industry The threat of acidification includes a decline in commercial fisheries and in the coast-based tourism industry. Several ocean goods and services are likely to be undermined by future ocean acidification, potentially affecting the livelihoods of some 400 to 800 million people, depending upon the greenhouse gas emission scenario. Some 1 billion people are completely or partially dependent on the fishing, tourism, and coastal management services provided by coral reefs. Ongoing acidification of the oceans may therefore threaten future food chains linked with the oceans. Arctic In the Arctic, commercial fisheries are threatened because acidification harms the calcifying organisms which form the base of the Arctic food webs (pteropods and brittle stars, see above). Acidification threatens Arctic food webs from the base up. Arctic food webs are considered simple, meaning there are few steps in the food chain from small organisms to larger predators. For example, pteropods are "a key prey item of a number of higher predators – larger plankton, fish, seabirds, whales". Both pteropods and sea stars serve as a substantial food source, and their removal from this simple food web would pose a serious threat to the whole ecosystem. The effects on calcifying organisms at the base of the food webs could potentially destroy fisheries. US commercial fisheries The catch from US commercial fisheries in 2007 was valued at $3.8 billion, and of that 73% was derived from calcifiers and their direct predators. Other organisms are directly harmed as a result of acidification.
For example, decreased growth of marine calcifiers such as the American lobster, ocean quahog, and scallops means there is less shellfish meat available for sale and consumption. Red king crab fisheries are also under serious threat, because crabs are calcifiers too. Juvenile red king crab exposed to increased acidification levels experienced 100% mortality after 95 days. In 2006, red king crab accounted for 23% of the total guideline harvest levels, and a serious decline in red king crab populations would threaten the crab harvesting industry. Possible responses Climate change mitigation Reducing carbon dioxide emissions (i.e. climate change mitigation measures) is the only solution that addresses the root cause of ocean acidification. Some mitigation measures focus on carbon dioxide removal (CDR) from the atmosphere (e.g. direct air capture (DAC), bioenergy with carbon capture and storage (BECCS)). These would also slow the rate of acidification. Approaches that remove carbon dioxide from the ocean include ocean nutrient fertilization, artificial upwelling/downwelling, seaweed farming, ecosystem recovery, ocean alkalinity enhancement, enhanced weathering and electrochemical processes. All of these methods use the ocean to remove CO2 from the atmosphere and store it in the ocean. They could assist with mitigation, but they can have side effects on marine life. The research field for all CDR methods has grown considerably since 2019. In total, "ocean-based methods have a combined potential to remove 1–100 gigatons of CO2 per year". Their costs are in the order of US$40–500 per ton of CO2. For example, enhanced weathering could remove 2–4 gigatons of CO2 per year, at a cost of US$50–200 per ton of CO2. Carbon removal technologies which add alkalinity Some carbon removal techniques add alkalinity to the ocean and therefore immediately buffer pH changes, which might help organisms in the region where the extra alkalinity is added.
The two technologies that fall into this category are ocean alkalinity enhancement and electrochemical methods. Due to dilution and diffusion, the alkalinity addition reaching distant waters will eventually be quite small; this is why the term local ocean acidification mitigation is used. Both of these technologies have the potential to operate on a large scale and to be efficient at removing carbon dioxide. However, they are expensive, carry many risks and side effects, and currently have a low technology readiness level. Ocean alkalinity enhancement Ocean alkalinity enhancement (OAE) is a proposed "carbon dioxide removal (CDR) method that involves deposition of alkaline minerals or their dissociation products at the ocean surface". The process would increase surface total alkalinity and thereby increase ocean absorption of CO2. It involves increasing the amount of bicarbonate (HCO3−) through accelerated weathering (enhanced weathering) of rocks (silicate, limestone and quicklime). This process mimics the silicate–carbonate cycle. The CO2 either becomes bicarbonate, remaining in that form for more than 100 years, or may precipitate into calcium carbonate (CaCO3). When calcium carbonate is buried in the deep ocean, it can hold the carbon indefinitely when silicate rocks are used for the weathering. Enhanced weathering is one type of ocean alkalinity enhancement. It increases alkalinity by scattering fine rock particles, which can happen on land or in the ocean (even though the outcome eventually affects the ocean). In addition to sequestering CO2, alkalinity addition buffers the pH of the ocean, therefore reducing ocean acidification. However, little is known about how organisms respond to added alkalinity, even from natural sources. For example, weathering of some silicate rocks could release a large amount of trace metals at the weathering site. The cost and energy consumed by ocean alkalinity enhancement (mining, pulverizing, transport) is high compared to other CDR techniques.
The cost is estimated to be US$20–50 per ton of CO2 (for "direct addition of alkaline minerals to the ocean"). Carbon sequestered as bicarbonate in the ocean amounts to about 30% of carbon emissions since the Industrial Revolution. Experimental materials include limestone, brucite, olivine and alkaline solutions. Another approach is to use electricity to raise alkalinity during desalination to capture waterborne CO2. Electrochemical methods Electrochemical methods, or electrolysis, can strip carbon dioxide directly from seawater. Electrochemical processes are also a type of ocean alkalinity enhancement. Some methods focus on direct CO2 removal (in the form of carbonate and gas), while others increase the alkalinity of seawater by precipitating metal hydroxide residues, which absorb CO2 in the manner described in the ocean alkalinity enhancement section. The hydrogen produced during direct carbon capture can then be used for energy, or to manufacture laboratory reagents such as hydrochloric acid. However, implementation of electrolysis for carbon capture is expensive, and the energy consumed by the process is high compared to other CDR techniques. In addition, research to assess the environmental impact of the process is ongoing. Some complications include toxic chemicals in wastewaters and reduced DIC in effluents; both of these may negatively impact marine life. Policies and goals Global policies As awareness of ocean acidification grows, policies geared towards increasing the monitoring of ocean acidification have been drafted. In 2015, ocean scientist Jean-Pierre Gattuso remarked that "The ocean has been minimally considered at previous climate negotiations. Our study provides compelling arguments for a radical change at the UN conference (in Paris) on climate change".
International efforts, such as the Wider Caribbean's Cartagena Convention (entered into force in 1986), may enhance the support provided by regional governments to highly vulnerable areas in response to ocean acidification. Many countries, for example in the Pacific Islands and Territories, have constructed regional policies (National Ocean Policies, National Action Plans, National Adaptation Plans of Action and Joint National Action Plans on Climate Change and Disaster Risk Reduction) to help work towards SDG 14. Ocean acidification is now starting to be considered within those frameworks. UN Ocean Decade The UN Ocean Decade has a program called "Ocean Acidification Research for Sustainability" (OARS). It was proposed by the Global Ocean Acidification Observing Network (GOA-ON) and its partners, and has been formally endorsed as a program of the UN Decade of Ocean Science for Sustainable Development. The OARS program builds on the work of GOA-ON and has the following aims: to further develop the science of ocean acidification; to increase observations of ocean chemistry changes; to identify the impacts on marine ecosystems at local and global scales; and to provide decision makers with the information needed to mitigate and adapt to ocean acidification. Global Climate Indicators The importance of ocean acidification is reflected in its inclusion as one of seven Global Climate Indicators. These Indicators are a set of parameters that describe the changing climate without reducing climate change to only rising temperature. The Indicators include key information for the most relevant domains of climate change: temperature and energy, atmospheric composition, ocean and water, as well as the cryosphere. The Global Climate Indicators have been identified by scientists and communication specialists in a process led by the Global Climate Observing System (GCOS). The Indicators have been endorsed by the World Meteorological Organization (WMO).
They form the basis of the annual WMO Statement of the State of the Global Climate, which is submitted to the Conference of Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC). Additionally, the Copernicus Climate Change Service (C3S) of the European Commission uses the Indicators for its annual "European State of the Climate". Sustainable Development Goal 14 In 2015, the United Nations adopted the 2030 Agenda and a set of 17 Sustainable Development Goals (SDGs), including a goal dedicated to the ocean, Sustainable Development Goal 14, which calls to "conserve and sustainably use the oceans, seas and marine resources for sustainable development". Ocean acidification is directly addressed by target SDG 14.3. The full title of Target 14.3 is: "Minimize and address the impacts of ocean acidification, including through enhanced scientific cooperation at all levels". This target has one indicator: Indicator 14.3.1, which calls for the "Average marine acidity (pH) measured at agreed suite of representative sampling stations". The Intergovernmental Oceanographic Commission (IOC) of UNESCO was identified as the custodian agency for the SDG 14.3.1 Indicator. In this role, IOC-UNESCO is tasked with developing the SDG 14.3.1 Indicator Methodology, the annual collection of data towards the SDG 14.3.1 Indicator and the reporting of progress to the United Nations. Policies at country level United States In the United States, the Federal Ocean Acidification Research And Monitoring Act of 2009 supports government coordination, such as the National Oceanic and Atmospheric Administration's (NOAA) "Ocean Acidification Program". In 2015, the U.S. Environmental Protection Agency (EPA) denied a citizens' petition that asked it to regulate CO2 under the Toxic Substances Control Act of 1976 in order to mitigate ocean acidification.
In the denial, the EPA said that risks from ocean acidification were being "more efficiently and effectively addressed" under domestic actions, e.g., under the Presidential Climate Action Plan, and that multiple avenues were being pursued to work with and in other nations to reduce emissions and deforestation and promote clean energy and energy efficiency. History Research into the phenomenon of ocean acidification, as well as awareness raising about the problem, has been going on for several decades. The fundamental research began with the creation of the pH scale by Danish chemist Søren Peder Lauritz Sørensen in 1909. By around the 1950s the massive role of the ocean in absorbing fossil fuel CO2 was known to specialists, but not appreciated by the greater scientific community. Throughout much of the 20th century, the dominant focus was the beneficial process of oceanic CO2 uptake, which has enormously ameliorated climate change. The concept of "too much of a good thing" was late in developing and was triggered only by some key events; the oceanic sink for heat and CO2 remains critical as the primary buffer against climate change. In the early 1970s questions over the long-term impact of the accumulation of fossil fuel CO2 in the sea were already arising around the world and causing strong debate. Researchers commented on the accumulation of fossil CO2 in the atmosphere and sea and drew attention to the possible impacts on marine life. By the mid-1990s, the likely impact of CO2 levels rising so high, with the inevitable changes in pH and carbonate ion concentration, became a concern of scientists studying the fate of coral reefs. By the end of the 20th century the trade-offs between the beneficial role of the ocean in absorbing some 90% of all added heat and the accumulation of some 50% of all fossil fuel CO2 emitted, and the impacts on marine life, were becoming clearer.
By 2003, the time of planning for the "First Symposium on the Ocean in a High-CO2 World" meeting to be held in Paris in 2004, many new research results on ocean acidification were published. In 2009, members of the InterAcademy Panel called on world leaders to "Recognize that reducing the build up of CO2 in the atmosphere is the only practicable solution to mitigating ocean acidification". The statement also stressed the need to "Reinvigorate action to reduce stressors, such as overfishing and pollution, on marine ecosystems to increase resilience to ocean acidification". For example, research in 2010 found that in the 15-year period 1995–2010 alone, acidity had increased 6 percent in the upper 100 meters of the Pacific Ocean from Hawaii to Alaska. According to a statement in July 2012 by Jane Lubchenco, head of the U.S. National Oceanic and Atmospheric Administration, "surface waters are changing much more rapidly than initial calculations have suggested. It's yet another reason to be very seriously concerned about the amount of carbon dioxide that is in the atmosphere now and the additional amount we continue to put out." A 2013 study found acidity was increasing at a rate 10 times faster than in any of the evolutionary crises in Earth's history. The "Third Symposium on the Ocean in a High-CO2 World" took place in Monterey, California, in 2012. The summary for policy makers from the conference stated that "Ocean acidification research is growing rapidly". In a synthesis report published in Science in 2015, 22 leading marine scientists stated that CO2 from burning fossil fuels is changing the oceans' chemistry more rapidly than at any time since the Great Dying (Earth's most severe known extinction event). Their report emphasized that the 2 °C maximum temperature increase agreed upon by governments reflects too small a cut in emissions to prevent "dramatic impacts" on the world's oceans.
A study published in 2020 argues that ocean acidification negatively affects not only marine life but also human health, with impacts on areas such as food quality and respiratory health.
Physical sciences
Climate change
Earth science
https://en.wikipedia.org/wiki/Sternum
Sternum
The sternum (plural: sternums or sterna) or breastbone is a long flat bone located in the central part of the chest. It connects to the ribs via cartilage and forms the front of the rib cage, thus helping to protect the heart, lungs, and major blood vessels from injury. Shaped roughly like a necktie, it is one of the largest and longest flat bones of the body. Its three regions are the manubrium, the body, and the xiphoid process. The word sternum originates from Ancient Greek στέρνον (stérnon) 'chest'. Structure The sternum is a narrow, flat bone, forming the middle portion of the front of the chest. The top of the sternum supports the clavicles (collarbones) and its edges join with the costal cartilages of the first two pairs of ribs. The inner surface of the sternum is also the attachment of the sternopericardial ligaments. Its top is also connected to the sternocleidomastoid muscle. The sternum consists of three main parts, listed from the top: Manubrium Body (gladiolus) Xiphoid process In its natural position, the sternum is angled obliquely, downward and forward. It is slightly convex in front and concave behind; broad above, shaped like a "T", becoming narrowed at the point where the manubrium joins the body, after which it again widens a little to below the middle of the body, and then narrows to its lower extremity. In adults the sternum is on average about 1.7 cm longer in the male than in the female. Manubrium The manubrium (Latin for 'handle') is the broad upper part of the sternum. It has a quadrangular shape, narrowing from the top, which gives it four borders. The suprasternal notch (jugular notch) is located in the middle at the upper broadest part of the manubrium. This notch can be felt between the two clavicles. On either side of this notch are the right and left clavicular notches. The manubrium joins with the body of the sternum, the clavicles, and the cartilages of the first pair of ribs and the upper part of the second pair.
The inferior border, oval and rough, is covered with a thin layer of cartilage for articulation with the body. The lateral borders are each marked above by a depression for the first costal cartilage, and below by a small facet, which, with a similar facet on the upper angle of the body, forms a notch for the reception of the costal cartilage of the second rib. Between the depression for the first costal cartilage and the demi-facet for the second is a narrow, curved edge, which slopes from above downward towards the middle. Also, the superior sternopericardial ligament attaches the pericardium to the posterior side of the manubrium. Body The body, or gladiolus, is the longest sternal part. It is flat and considered to have only a front and back surface. It is flat on the front, directed upward and forward, and marked by three transverse ridges which cross the bone opposite the third, fourth, and fifth articular depressions. The pectoralis major attaches to it on either side. At the junction of the third and fourth parts of the body is occasionally seen an orifice, the sternal foramen, of varying size and form. The posterior surface, slightly concave, is also marked by three transverse lines, less distinct, however, than those in front; from its lower part, on either side, the transversus thoracis takes origin. The sternal angle is located at the point where the body joins the manubrium. The sternal angle can be felt at the point where the sternum projects farthest forward. However, in some people the sternal angle is concave or rounded. During physical examinations, the sternal angle is a useful landmark because the second rib attaches here. Each outer border, at its superior angle, has a small facet, which with a similar facet on the manubrium, forms a cavity for the cartilage of the second rib; below this are four angular depressions which receive the cartilages of the third, fourth, fifth, and sixth ribs. 
The inferior angle has a small facet, which, with a corresponding one on the xiphoid process, forms a notch for the cartilage of the seventh rib. These articular depressions are separated by a series of curved interarticular intervals, which diminish in length from above downward, and correspond to the intercostal spaces. Most of the cartilages belonging to the true ribs articulate with the sternum at the lines of junction of its primitive component segments. This is well seen in some other vertebrates, where the parts of the bone remain separated for longer. The upper border is oval and articulates with the manubrium, at the sternal angle. The lower border is narrow, and articulates with the xiphoid process. Xiphoid process Located at the inferior end of the sternum is the pointed xiphoid process. Improperly performed chest compressions during cardiopulmonary resuscitation can cause the xiphoid process to snap off, driving it into the liver, which can cause a fatal hemorrhage. The sternum is composed of highly vascular tissue, covered by a thin layer of compact bone which is thickest in the manubrium between the articular facets for the clavicles. The inferior sternopericardial ligament attaches the pericardium to the posterior xiphoid process. Joints The cartilages of the top five ribs join with the sternum at the sternocostal joints. The right and left clavicular notches articulate with the right and left clavicles, respectively. The costal cartilage of the second rib articulates with the sternum at the sternal angle, making it easy to locate. The transversus thoracis muscle is innervated by one of the intercostal nerves and superiorly attaches at the posterior surface of the lower sternum. Its inferior attachment is the internal surface of costal cartilages two through six, and it works to depress the ribs. Development The sternum develops from two cartilaginous bars, one on the left and one on the right, connected with the cartilages of the ribs on each side.
These two bars fuse together along the middle to form the cartilaginous sternum which is ossified from six centers: one for the manubrium, four for the body, and one for the xiphoid process. The ossification centers appear in the intervals between the articular depressions for the costal cartilages, in the following order: in the manubrium and first piece of the body, during the sixth month of fetal life; in the second and third pieces of the body, during the seventh month of fetal life; in its fourth piece, during the first year after birth; and in the xiphoid process, between the fifth and eighteenth years. The centers make their appearance at the upper parts of the segments, and proceed gradually downward. To these may be added the occasional existence of two small episternal centers, which make their appearance one on either side of the jugular notch; they are probably vestiges of the episternal bone of the monotremata and lizards. Occasionally some of the segments are formed from more than one center, the number and position of which vary [Fig. 6]. Thus, the first piece may have two, three, or even six centers. When two are present, they are generally situated one above the other, the upper being the larger; the second piece has seldom more than one; the third, fourth, and fifth pieces are often formed from two centers placed laterally, the irregular union of which explains the rare occurrence of the sternal foramen [Fig. 7], or of the vertical fissure which occasionally intersects this part of the bone constituting the malformation known as fissura sterni; these conditions are further explained by the manner in which the cartilaginous sternum is formed. More rarely still the upper end of the sternum may be divided by a fissure. Union of the various centers of the body begins about puberty, and proceeds from below upward [Fig. 5]; by the age of 25 they are all united. 
The xiphoid process may become joined to the body before the age of thirty, but this occurs more frequently after forty; on the other hand, it sometimes remains ununited in old age. In advanced life the manubrium is occasionally joined to the body by bone. When this takes place, however, the bony tissue is generally only superficial, the central portion of the intervening cartilage remaining unossified. The body of the sternum is formed by the fusion of four segments called sternebrae. Variations In 2.5–13.5% of the population, a foramen known as the sternal foramen may be present at the lower third of the sternal body. In extremely rare cases, multiple foramina may be observed. Fusion of the manubriosternal joint also occurs in around 5% of the population. Small ossicles known as episternal ossicles may also be present posterior to the superior end of the manubrium. Another variant, called the suprasternal tubercle, is formed when the episternal ossicles fuse with the manubrium. Clinical significance Bone marrow biopsy Because the sternum contains bone marrow, it is sometimes used as a site for bone marrow biopsy. In particular, patients with a high BMI (obese or grossly overweight) may present with excess tissue that makes access to traditional marrow biopsy sites such as the pelvis difficult. Sternal opening A somewhat rare congenital disorder of the sternum, sometimes referred to as an anatomical variation, is a sternal foramen, a single round hole in the sternum that is present from birth and usually is off-centered to the right or left, commonly forming in the 2nd, 3rd, and 4th segments of the breastbone body. Congenital sternal foramina can often be mistaken for bullet holes. They are usually without symptoms but can be problematic if acupuncture in the area is intended. Fractures Fractures of the sternum are rather uncommon. They may result from trauma, such as when a driver's chest is forced into the steering column of a car in a car accident.
A fracture of the sternum is usually a comminuted fracture. The most common site of sternal fractures is at the sternal angle. Some studies reveal that repeated punches or continual beatings, sometimes called "breastbone punches", to the sternum area have also caused fractured sternums. Such injuries are known to have occurred in contact sports such as hockey and football. Sternal fractures are frequently associated with underlying injuries such as pulmonary contusions, or bruised lung tissue. Dislocation A manubriosternal dislocation is rare and usually caused by severe trauma. It may also result from minor trauma where there is a precondition of arthritis. Sternotomy The breastbone is sometimes cut open (a median sternotomy) to gain access to the thoracic contents when performing cardiothoracic surgery. Surgical fixation of sternotomy is achieved through the use of either wire cerclage or a plate and screw technique. The incidence of sternotomy complications falls within the narrow range of 0.5% to 5%. Nevertheless, these complications can have severe consequences, including the need for reoperation and mortality rates as high as 40%. Such complications often entail issues like dehiscence and sternal non-union, primarily stemming from lateral forces exerted during post-operative activities such as coughing and sneezing. Resection The sternum can be totally removed (resected) as part of a radical surgery, usually to surgically treat a malignancy, either with or without a mediastinal lymphadenectomy (Current Procedural Terminology codes # 21632 and # 21630, respectively). Bifid sternum or sternal cleft A bifid sternum is an extremely rare congenital abnormality caused by the fusion failure of the sternum. This condition results in a sternal cleft, which can be observed at birth without any symptom. Other animals The sternum, in vertebrate anatomy, is a flat bone that lies in the middle front part of the rib cage. It is endochondral in origin.
It probably first evolved in early tetrapods as an extension of the pectoral girdle; it is not found in fish. In amphibians and reptiles, it is typically a shield-shaped structure, often composed entirely of cartilage. It is absent in both turtles and snakes. In birds, it is a relatively large bone and typically bears an enormous projecting keel to which the flight muscles are attached. Only in mammals does the sternum take on the elongated, segmented form seen in humans. Arthropods In arthropods, a sternum is the ventral part of a segment of the thorax or abdomen. Etymology English sternum is a translation of Ancient Greek στέρνον (stérnon). The Greek writer Homer used the term στέρνον to refer to the male chest, and the term στῆθος (stêthos) to refer to the chest of both sexes. The Greek physician Hippocrates used στέρνον to refer to the chest, and στῆθος to the breastbone. The Greek physician Galen was the first to use στέρνον in the present meaning of breastbone. The sternum as the solid bony part of the chest can be related to Ancient Greek στερεός (stereós), meaning firm or solid. The English term breastbone is actually more like the Latin os pectoris, derived from classical Latin os, bone, and pectus, chest or breast. Confusingly, pectus is also used in classical Latin as breastbone.
Biology and health sciences
Skeletal system
Biology
https://en.wikipedia.org/wiki/Belemnitida
Belemnitida
Belemnitida (or belemnites) is an extinct order of squid-like cephalopods that existed from the Late Triassic to Late Cretaceous. Unlike squid, belemnites had an internal skeleton that made up the cone. The parts are, from the arms-most to the tip: the tongue-shaped pro-ostracum, the conical phragmocone, and the pointy guard. The calcitic guard is the most common belemnite remain. Belemnites, in life, are thought to have had 10 hooked arms and a pair of fins on the guard. The chitinous hooks were usually no bigger than , though a belemnite could have had between 100 and 800 hooks in total, using them to stab and hold onto prey. Belemnites were an important food source for many Mesozoic marine creatures, both the adults and the planktonic juveniles and they likely played an important role in restructuring marine ecosystems after the Triassic–Jurassic extinction event. They may have laid between 100 and 1,000 eggs. Some species may have been adapted to speed and swam in the turbulent open ocean, whereas others resided in the calmer littoral zone (nearshore) and fed off the seafloor. The largest belemnite known, Megateuthis elliptica, would have measured up to in total body length. Belemnites were coleoids, a group that includes squid and octopuses, and are often grouped into the superorder Belemnoidea, though the higher classification of cephalopods is volatile and there is no clear consensus on how belemnites are related to modern coleoids. Guards can give information on the climate, habitat, and carbon cycle of the ancient waters they inhabited. Guards have been found since antiquity and have become part of folklore. Description Shell The belemnite cone is composed of three parts. Going from arms to tip, these are the tongue-shaped pro-ostracum; the conical, chambered phragmocone; and the spear-shaped guard at the very tip. The guard is attached to the phragmocone in a socket called the alveolus. 
The cone, in life, would have been encased in muscle and connective tissue. They had calcite guards, and aragonite pro-ostraca and phragmocones, though a few belemnites also had aragonite guards, and the alveolar side of the guards of belemnitellids may have also been of aragonite. The pro-ostracum probably supported the soft parts of the belemnite, similar to the gladius of squid, and completely surrounded the phragmocone. The phragmocone was divided by septa into chambers, much like the shells of cuttlefish and nautiluses. The chambered phragmocone was probably the center of buoyancy, and so was positioned directly above the center of mass for stability purposes. Concerning buoyancy, belemnites may have behaved much like modern ram's horn squid, having the chambers of the phragmocone flooded and slowly releasing more seawater via the siphuncle tube as the animal increases in size and weight over its lifetime to maintain neutral buoyancy. At the tip of the phragmocone beneath the guard is a tiny, cup-like protoconch, the remains of the embryonic shell. The dense guard probably served to counterbalance the weight of the soft parts in the mantle cavity near the arms on the opposite end of the animal, analogous to the camera of nautiloids. This would have allowed the animal to move horizontally through the water. The guard may have also served to cut through waves while swimming at the surface, though modern cephalopods generally stay completely submerged. Though unlikely, it is possible fossilization increased the perceived density of the guard, and it may have been up to 20% more porous in life. Fins may have been attached to the guard, or the guard may have lent support for large fins. Including arms, guards could have accounted for one-fifth to one-third of the total length of a belemnite. Soft anatomy Belemnites had a radula – the "tongue" embedded in the buccal mass, the first part of a gastropod digestive system – similar to open ocean predatory cephalopods. 
The radula had rows of seven teeth, consistent with modern predatory squid. The statocysts – which give a sense of balance and function much like the cochlea of the ear – were large, much like in modern fast-moving squid. Like other cephalopods, the skin was likely thin and slippery. The eyeballs were likely thicker, stronger, and more convex than in other cephalopods. The mantle cavity of cephalopods serves to contain the gills, gonads, and other organs; also, water is siphoned into and expelled out of the mantle cavity via a tube opening near the arms of the animal, the hyponome, for jet propulsion. Though the hyponome was well-developed in belemnites, the phragmocone was large, implying a small mantle cavity and thus less jet propulsion efficiency. Like some modern squid, belemnites may have mainly used large fins to coast along currents. Two Acanthoteuthis specimens with preserved soft anatomy elements had a pair of rhomboid fins near the top of their guards; however, the specimens had different-sized fins, possibly owing to sexual dimorphism, age, or distortion during fossilization. These specimens appeared to have had similar adaptations to modern squid for speed and may have been able to reach similar maximum speeds of like modern migrating Todarodes flying squid. Limbs and hooks Belemnites had 10 hooked arms of, more or less, equal length with suckers. The hooks were rarely larger than , and increased in size toward the midsection of the arm, possibly because the midsection is where maximum power could be exerted when grabbing, or bigger hooks on the extremities of the arm increased the risk of losing the arm. Having two rows of hooks covering the entire breadth of the arm, a belemnite could have had between 100 and 800 hooks in total. Some hooks have a spur just above the base, but this may be a distortion from fossilization or preparation of the material. 
The chitinous hooks are subdivided into three sections: the base, which can be either flat or concave; the shaft, which projects upward at an incline, either straight or bent; and the uncinus, which can be hook- or saber-like. Overall, they were fish-hook shaped, and probably only the uncinus was exposed. Different hook shapes were probably specialized for certain tasks; for example, a strongly hooked uncinus was designed to stab prey at a constant angle, and it would force and sink in deeper if the prey tried to move away from the belemnite. Hook shapes and forms vary from species to species. In Chondroteuthis, large hooks were common near the mouth, and were either used for surrounding small prey or ramming into large prey; however, these large hooks were not present in a small specimen, indicating either that it was a juvenile, and the development of different hooks coincided with a difference in prey selection, or that the specimen was a female and the hooks were used by males for male-on-male combat or during copulation. In modern hook-bearing squid species, only matured males have hooks, indicating a reproductive purpose. It is possible the hooks, being analogous to suckers, could move. The males, like in modern squid, probably had one or two hectocotyli: long, modified arms used in copulation or combat with other males. Instead of several hooks, the hectocotyli feature a pair of enlarged hooks (mega-onychites) to latch onto the female at a safe distance to prevent getting stuck on one of her hooks. Like squid, the positioning of the mega-onychites could have been either at the tip or origin of the arm depending on the species. Copulation probably involved the male depositing spermatophores into the female's internal mantle chamber. Development Like other cephalopods, belemnites may have laid floating egg masses, and a single female may have laid between 100 and 1,000 eggs. Hatchlings were either miniature forms of adults or went through a larval stage.
According to the latter model, the egg was formed by the protoconch and a single-layered shell wall. During the larval stage, the protoconch became internal and the guard began to form. The embryo of Passaloteuthis, the most well-studied among belemnite embryos, had a protoconch, a developing guard, and a solid guard. The developing guard tightly surrounded the protoconch. The embryonic shell consisted of an ovoid protoconch and several chambers. The protoconch had two layers, and several compartments - called "protoconch pockets" - formed between the layers, which may have stored gas or liquid in life to stay buoyant. The protoconch and guard were probably made of chitin, a protective material that may have allowed the embryo to survive at greater depths and colder temperatures, develop into adults faster, and allow juveniles and adults to venture into deeper waters. Further, the protoconch would have allowed them to form limbs before reaching the phragmocone stage, and thus inhabit the open ocean earlier. These may have allowed belemnites to colonize a range of habitats across the world. Much like in cuttlefish, nautiluses, and ammonites, the number and successive size of the chambers of the phragmocone are used to analyze the growth of an individual over their life. Successive belemnite chambers tend to increase in size exponentially. Unlike other cephalopods, there is no decreasing trend of chamber size in the earliest stages. The decreasing trend generally coincides with hatching, meaning embryonic belemnites had no or few chambers and hatched only with a protoconch. The phragmocone, thus, developed after hatching. Ammonites are thought to have done the same, implying a similar reproductive strategy, and, considering both reached cosmopolitan distributions, a rather efficient one. Belemnite hatchling protoconches are estimated to have been generally around . 
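The growth analysis described above, which uses the number and successive size of phragmocone chambers, can be illustrated with a minimal sketch. The chamber measurements and the 10% growth factor below are hypothetical, chosen only to show how a roughly constant ratio between successive chamber sizes would be estimated from a specimen:

```python
# Illustrative sketch (not from the source text): if successive phragmocone
# chambers grow exponentially, the ratio between consecutive chamber sizes is
# roughly constant, so a single growth factor summarizes the growth record.

def growth_ratios(chamber_sizes):
    """Return the ratios between consecutive chamber sizes."""
    return [b / a for a, b in zip(chamber_sizes, chamber_sizes[1:])]

def mean_growth_factor(chamber_sizes):
    """Geometric-mean growth factor across all consecutive chamber pairs."""
    ratios = growth_ratios(chamber_sizes)
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

# Hypothetical chamber heights (in mm) following a 10% exponential increase:
sizes = [0.5 * 1.1 ** n for n in range(8)]
print(round(mean_growth_factor(sizes), 3))  # prints 1.1
```

A departure from such a constant ratio in the earliest chambers is what, in other cephalopods, marks the hatching point; its absence in belemnites is the basis for the inference above that the phragmocone developed after hatching.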
The guards of Megateuthis elliptica are the largest among belemnites, measuring in length and up to in diameter. The Cretaceous Neohibolites is one of the smallest known with a guard length of around . In the New Zealand Belemnopsis, four major annual growth stages were preserved in the guard, giving belemnites a lifespan of about three to four years. The mesohibolitid belemnites, using the same methods, had a lifespan of about a year. In Megateuthis, the guard was demonstrated to have fully developed after one or two years, and growth spurts followed the lunar cycle. Pathology Belemnite guards have sometimes been found with fractures with signs of healing. It has been interpreted in the past that these are evidence of digging, with belemnites using their guard to dig up prey on the seafloor; however, belemnites are now generally interpreted to have been open ocean predators. A deformed, zigzag-like guard of a Gonioteuthis was likely the result of a failed predation attempt. Two other Gonioteuthis guard specimens exhibit a double-pointed tip, probably stemming from some traumatic event. One belemnite guard also presents a double-pointed tip, with one of the points projecting higher than the other, probably a sign of an infection or settlement of a parasite. A Neoclavibelus guard features a large growth on the side likely stemming from a parasitic infection. A Hibolithes guard shows a large ovoid bubble near the base, likely deriving from a parasitic cyst. A Goniocamax guard has several blister-like formations, thought to have come from a polychaete flatworm infection. The calcitic guards were desirable habitats for boring parasites indicated by the diversity of trace fossils left on some guards, including the sponge Entobia, worm Trypanites, and barnacle Rogerella. Taxonomy Evolution Belemnites, being coleoids, derive from the orthoconic (conical) Devonian belemnoid order Aulacocerida, which, in turn, is derived from the Devonian Bactritida. 
Belemnites were traditionally thought to have evolved in northern Europe in the Hettangian stage of the Early Jurassic 201.6–197 million years ago (mya), and to have later spread to the rest of the world by the Pliensbachian stage 190 mya. However, the 2012 discovery of early Asian forms, classified into the family Sinobelemnitidae, pushes their origin back to around 234 mya, in the Carnian stage of the Late Triassic. Belemnites probably originated in the Asian part of the Panthalassic Ocean around the eastern coasts of the ancient continent of Laurasia in a cephalopod radiation, alongside the octopus-like Prototeuthina and the belemnoid Phragmoteuthida. However, there is a dubious Permian occurrence, the Palaeobelemnopsidae, reported from Southern China. By the Early Jurassic, belemnites were probably quite common, having spread out into the western Laurasian coasts as well as Gondwanan waters to the south. Guard shapes in the Early Jurassic ranged from conical to spearheaded, but spearheaded shapes became more prevalent as the Jurassic progressed. This was probably due to pressure to become more streamlined and increase swimming efficiency, coevolving with increasingly faster predators and competitors. Their early evolution and apparent abundance likely played an important role in the rebuilding of marine ecosystems after the Triassic–Jurassic extinction event, providing an ample food source for marine reptiles and sharks. Belemnoidea, as a group, featured a progressive reduction of the projection of the otherwise conical phragmocone into the pro-ostracum: in the most ancient order, Aulacocerida, the phragmocone is orthoconic (nothing projects); in Phragmoteuthida, three-quarters projects; in Belemnitida, a quarter; and in the most derived Diplobelida, an eighth. 
Research history The first mention of belemnites in writing comes from the Greek philosopher Theophrastus, who lived in the 4th and 3rd centuries BCE, in his book De Animalibus Quæ Dicuntur Invidere, in which he described them as lyngurium: lynx urine which had been buried and solidified. Pliny the Elder, in the first century CE, did not believe in lyngurium and called the gemstone a belemnite for the first time, though without recognizing it as a fossil. The name is from Ancient Greek βέλεμνον bélemnon, meaning "dart", for the guard's shape. Subsequent authors considered it to be either lyngurium or amber. The first identification of a belemnite as a fossil was made in 1546 by German mineralogist Georgius Agricola, and subsequent authors gave several hypotheses as to its nature in life, including them being shellfish, sea urchin spines, sea cucumbers, coral polyps, or some internal shell. In 1823, English naturalist John Samuel Miller classified belemnites as cephalopods, comparing the newly discovered phragmocone remains to that of a nautilus and concluding a resemblance to Sepia cuttlefish. He also erected the genus Belemnites with 11 species. This classification was confirmed when the first impressions of belemnite soft body anatomy were described by English paleontologist Richard Owen in 1844. In 1895, German paleontologist Karl Alfred Ritter von Zittel organized the clade Belemnoidea and included in it the families Belemnitidae, Asteroconites, and Xiphoteuthis. The guard, also known as the rostrum, scabbard, gaine, or sheath, is the part of the animal most likely to be fossilized. Guards are difficult to distinguish at the species level, and, consequently, synonyms are common and inflate the group's apparent diversity. Preserved hooks can be used to distinguish belemnite species, as each species has unique hook shapes. However, scolecodont segmented worm fossils have been mistaken for belemnite hooks and vice versa. 
Preserved fossil guards are used to measure the ancient isotopic signature of the waters the individual inhabited in life, which gives information on the climate, habitat, and carbon cycle. Phylogeny Belemnites were cephalopods. Having no outer shells, they are classified into the subclass Coleoidea. In 1994, American geologist Peter Doyle defined Coleoidea as composed of three superorders: Decapodiformes (squid and cuttlefish), Octopodiformes (octopuses), and Belemnoidea; with Belemnoidea containing the orders Aulacocerida, Diplobelida, and Belemnitida. Also, the order Phragmoteuthida is sometimes believed to be a sister group to Belemnoidea, but Doyle considered it to be a stem-group to Decapodiformes and Octopodiformes. However, the higher classification of cephalopods is volatile with no clear consensus. Coleoidea is sometimes divided into Neocoleoidea (containing all modern cephalopods) and Paleocoleoidea (containing Belemnoidea), so belemnites would be a sister group of modern cephalopods. However, this grouping is probably paraphyletic—it does not contain a common ancestor and all its descendants—and, thus, invalid. According to some authors, belemnites were a stem-group of Decapodiformes: According to the "belemnoid root-stock theory", belemnoids gave rise to modern coleoids sometime in the Mesozoic, with octopuses deriving from Phragmoteuthida and squid from Diplobelida, making Belemnoidea paraphyletic. The spirulid Longibelus could be a transitional species between belemnoids and squid. However, molecular evidence suggests that the squid and octopus lineage diverged from Belemnoidea in the Permian. The order Belemnitida is a monophyletic taxon, consisting of a common ancestor and all of its descendants, and is characterized by the possession of ten hooked appendages, a multilayered outer wall of the phragmocone, and a septum between the pro-ostracum and the phragmocone. 
Belemnitida is separated into two suborders: Belemnitina and Belemnopseina, though a third possible suborder may exist with Sinobelemnitidae. The Belemnopseina guards have a groove on their alveolus, whereas the Belemnitina have a groove at their apex. The grooves probably corresponded to blood vessels. Another suborder, Belemnotheutina, is also proposed, whose members have an aragonitic guard in contrast to the calcitic guards of other belemnites. Aragonitic guards are usually only seen in the ancestral Aulacocerida belemnoids, and Belemnotheutina may represent a transitional stage between the two orders, though some believe Belemnitida derived from Phragmoteuthida, which derived from Aulacocerida.

Family Dimitobelidae: Conobelus, Dimitobelus, Pumiliobelus
Family Belemnitellidae: Actinocamax, Belemnitella, Belemnites
Family Belemnopseidae: Belemnopsis, Vaunagites
Family Cylindroteuthidae: Cylindroteuthis
Family Dicoelitidae: Dicoelites
Family Duvaliidae: Duvalia, Pseudobelus, Rhopaloteuthis, Pseudoduvalia
Family Hastitidae: Hastites, Pleurobelus, Rhabdobelus, Bairstowius
Family Holcobelidae: Holcobelus, Calabribelus, Lissajousibelus
Family Megateuthidae: Acrocoelites, Cuspiteuthis, Dactyloteuthis, Megateuthis
Family Mesohibolithidae: Curtohibolithes, Hibolithes, Mesohibolithes
Family Nipponoteuthidae: Nipponoteuthis
Family Oxyteuthidae: Oxyteuthis
Family Passaloteuthidae: Acroteuthis, Angeloteuthis, Brevibelus, Clastoteuthis, Parapassaloteuthis, Passaloteuthis, Pseudohastites
Family Pseudodicoelitidae: Pseudodicoelites
Family Salpingoteuthidae: Salpingoteuthis
Family Sinobelemnitidae: Sichuanobelus
Incertae sedis: Aulacoteuthis, Belemnella, Coeloteuthis, Eobelemnites, Gonioteuthis, Nannobelus, Pachyteuthis, Simpsonibelus, Youngibelus, Rhaphibelus, Winkleriteuthis

Paleoecology Habitat Belemnite remains are found in what were littoral (nearshore) and mid-shelf zones. To hunt, they may have quickly or stealthily grabbed prey, maintaining a grip with the hooks, and then dived down to eat. 
It is traditionally thought they resided on the shelf their entire life, and preyed on crustaceans and other mollusks. Belemnites with slender guards may have been better swimmers than those with more massive guards, with the former diving into deeper waters and hunting in the open ocean, and the latter restricted to the nearshore and feeding from the seafloor. Broadly speaking, they may have preferred temperatures of , and, like modern squid, warmer waters may have heightened their metabolism, increasing birth and growth rates, but also decreasing lifespan. It has been suggested that most belemnite species were stenothermic, inhabiting only a narrow range of temperatures, though Neohibolites had a cosmopolitan distribution during the Cretaceous Thermal Maximum, a period of dramatic increase in global temperatures. Mortality Belemnites were likely an abundant and important food source for many sea-going creatures of the Mesozoic. Belemnite hook remains have been found in the stomach contents of crocodilians, plesiosaurs, and ichthyosaurs, and in the coprolite remains of ichthyosaurs and the extinct thylacocephalan crustaceans. Some animals may have eaten only the heads, leaving the phragmocone and guards; however, the guards of around 250 Acrocoelites were found in the stomach of a Hybodus shark, and a fragment in an Oxford Clay marine crocodile, meaning some belemnites were eaten whole. The predators may have regurgitated the indigestible matter later, similar to the modern sperm whale. To defend themselves, belemnites were likely able to eject a cloud of ink. The abundant planktonic belemnite larvae, along with planktonic ammonite larvae, likely formed the base of Mesozoic food webs, serving a greater ecological function than the adults. Giant pachycormid fish are thought to have been the main filter feeders of the time, occupying the same niche as modern baleen whales. Large accumulations of guards are commonly found and have been nicknamed "belemnite battlefields". 
The most quoted explanation is that belemnites were semelparous, dying shortly after spawning, much like modern coleoids, which migrate from the open ocean to the shelf area to spawn. For battlefields comprising both adults and juveniles (the spawning model would produce accumulations consisting entirely of adults), large groups of belemnites may instead have been killed by volcanism, changes in salinity or temperature, harmful algal blooms (and, thereby, anoxia), or mass stranding. Another popular theory is that the guards were simply moved or redeposited by ocean currents into large aggregations. Some battlefields may be regurgitated indigestible matter from a predator. Extinction Squid and octopuses diversified and began to outcompete belemnites by the Late Jurassic to Early Cretaceous. Belemnites declined through the Late Cretaceous, and their range became more restricted to the polar regions; the southern populations became extinct in the early Maastrichtian, and the last belemnites, of the family Belemnitellidae, inhabited what is now northern Europe. They finally became extinct in the Cretaceous–Paleogene extinction event, around 66 mya, when, as with ammonites, it is thought the protoconch of embryos could not survive the ensuing acidification of the oceans. However, the dubious genus Bayanoteuthis is reported from the Eocene, though it is often excluded from Belemnitida. Following the extinction of the belemnites at the end of the Cretaceous, holoplanktonic gastropods, namely sea butterflies, replaced planktonic belemnite larvae at the base of the food chain. In culture Belemnite guards have been known since antiquity, and much folklore has evolved around them. The symbol of the Egyptian god Min has been described, among other interpretations, as two fossil belemnites. Before belemnites were identified as fossils, it was believed the guards were gemstones, namely lyngurium and amber. After a thunderstorm, guards would sometimes be left exposed in the soil, and were explained as lightning bolts thrown from the sky. 
This belief persists in parts of rural Britain. In Germanic folklore, belemnites are known by at least 27 different names, such as Fingerstein ("finger stone"), Teufelsfinger ("Devil's finger"), and Gespensterkerze ("ghostly candle"). In Southern England, the pointy guards were used to cure rheumatism and ground up to cure sore eyes (which only aggravated the problem), and, in Western Scotland, they were put into water used to cure distemper in horses. Belemnitella was declared the state fossil of Delaware on 2 July 1996.
T-top
A T-top (UK: T-bar) is an automobile roof with a removable panel on each side of a rigid bar running from the center of one structural bar between pillars to the center of the next structural bar. The panels of a traditional T-top are usually made of auto grade safety glass (tempered or laminated), or acrylic – but they can also be black or body-colored and made of other (often light-weight) materials. The removable panel roof was patented by Gordon Buehrig on June 5, 1951. It was first used in a 1948 prototype by The American Sportscar Company or “Tasco.” The 1968 Chevrolet Corvette coupe was the first U.S.-built production automobile to feature a T-top roof. This increased the popularity of the coupe, such that it outsold the convertible and later led to the discontinuation of the Corvette convertible after 1975 until it was revived in 1986. Post-C3 models were built with a targa top instead of a T-top.

Examples of traditional T-top
Buick Regal (1978–1987)
Chevrolet Corvette (1968–1982)
Chevrolet Camaro (1978–2002)
Chevrolet Monte Carlo
Chrysler Cordoba
Datsun 280ZX
Dodge Daytona
Dodge Magnum
Dodge Mirada
Ford Mustang (second and third generation)
Ford Thunderbird (seventh generation)
Mercury Capri
Nissan NX
Nissan 300ZX
Nissan EXA
Nissan URGE (concept)
Pontiac Fiero
Pontiac Firebird, incl. Trans Am (1976–2002)
Pontiac Formula 350
Pontiac Grand Prix
Rover 200 Coupe (1992–1999)
Oldsmobile Cutlass Supreme
Suzuki X-90
Subaru BRAT
Subaru Vivio
Toyota MR2 (AW11/SW20/SW21/SW22)

T-top variations
Jeep Wranglers including 'JK' and 'JL' offer removable roof-panel designs that build upon the T-top construction concept
Suzuki Cappuccino - has an optional solid roof which can be converted into a T-top
Triumph Stag - has the underlying T-top structure, but has a one piece, non-glass, roof panel which passes over the central front-to-back bar when in place
Azhdarchidae
Azhdarchidae (from the Persian word , , a dragon-like creature in Persian mythology) is a family of pterosaurs known primarily from the Late Cretaceous Period, though an isolated vertebra apparently from an azhdarchid is known from the Early Cretaceous as well (late Berriasian age, about 140 million years ago). Azhdarchids are mainly known for including some of the largest flying animals discovered, but smaller cat-size members have also been found. Originally considered a sub-family of Pteranodontidae, Nesov (1984) named the Azhdarchinae to include the pterosaurs Azhdarcho, Quetzalcoatlus, and Titanopteryx (now known as Arambourgiania). They were among the last known surviving members of the pterosaurs, and were a rather successful group with a worldwide distribution. Previously it was thought that by the end of the Cretaceous, most pterosaur families except for the Azhdarchidae disappeared from the fossil record, but recent studies indicate a wealth of pterosaurian fauna, including pteranodontids, nyctosaurids, tapejarids and several indeterminate forms. Description Azhdarchids are characterized by their long legs and extremely long necks, made up of elongated neck vertebrae which are round in cross section. Most species of azhdarchids are still known mainly from their distinctive neck bones and not much else. The few azhdarchids that are known from reasonably good skeletons include Zhejiangopterus and Quetzalcoatlus. Azhdarchids are also distinguished by their relatively large heads and long, spear-like jaws. There are two major types of azhdarchid morphologies: the "blunt-beaked" forms with shorter and deeper bills and the "slender-beaked" forms with longer and thinner jaws. 
It had been suggested azhdarchids were skimmers, but further research has cast doubt on this idea, demonstrating that azhdarchids lacked the necessary adaptations for a skim-feeding lifestyle, and that they may have led a more terrestrial existence similar to modern storks and ground hornbills. Most large azhdarchids probably fed on small prey, including hatchling and small dinosaurs; in an unusual modification of the azhdarchid bodyplan, the robust Hatzegopteryx may have tackled larger prey as the apex predator in its ecosystem. In another departure from typical azhdarchid lifestyles, the jaw of Alanqa may possibly be an adaptation to crushing shellfish and other hard foodstuffs. Azhdarchids are generally medium- to large-sized pterosaurs, with the largest achieving wingspans of , but several small-sized species have recently been discovered. Another azhdarchid that is currently unnamed, recently discovered in Transylvania, may be the largest representative of the family thus far discovered. This unnamed specimen (nicknamed "Dracula" by paleontologists), currently on display in the Altmühltal Dinosaur Museum in Bavaria is estimated to have a wingspan of , although similarities to the contemporary azhdarchid Hatzegopteryx have also been noted. Systematics Azhdarchids were originally classified as close relatives of Pteranodon due to their long, toothless beaks. Others have suggested they were more closely related to the toothy ctenochasmatids (which include filter-feeders like Ctenochasma and Pterodaustro), but this classification is largely obsolete. Currently it is widely agreed that azhdarchids were closely related to pterosaurs such as chaoyangopterids, thalassodromids, and tapejarids, all of which belong to the superfamily Azhdarchoidea. Phylogeny Two of the most complete cladograms that include the family Azhdarchidae are presented below. 
The first one is by Brian Andres in 2021, in which Azhdarchidae was found to be the sister taxon to Montanazhdarcho within the clade Azhdarchiformes. Within Azhdarchidae, two different subfamilies were recovered, the Azhdarchinae and the Quetzalcoatlinae. The former contains azhdarchids more closely related to Azhdarcho, which are smaller in size, while the latter contains azhdarchids more closely related to Quetzalcoatlus, which are much larger in size. The second cladogram is by Xuanyu Zhou and colleagues in 2024, which is based on the phylogenetic analysis by Rodrigo Pêgas in the same year. In this study, Azhdarchidae was recovered as the sister taxon to the family Alanqidae within the clade Azhdarchiformes. The subfamily Quetzalcoatlinae comprised more azhdarchid genera, while the subfamily Azhdarchinae was not recovered. Various azhdarchids found within the Quetzalcoatlinae by Andres in 2021 have been found outside said subfamily in this study. These include Aralazhdarcho, Eurazhdarcho, Wellnhopterus, and Phosphatodraco, which together form a subgroup, as well as Zhejiangopterus, Mistralazhdarcho, and Aerotitan, which form another subgroup. Azhdarcho is recovered as the sister taxon to Quetzalcoatlinae, while Albadraco was found within Quetzalcoatlinae instead of Azhdarchinae. Topology 1: Andres (2021). Topology 2: Zhou and colleagues (2024). Former and possible azhdarchid genera There have been many pterosaur genera that were once assigned to the Azhdarchidae, but have since been reassigned to other pterosaur groups. Alanqa and Argentinadraco, for example, have sometimes been referred to Azhdarchidae, but recent phylogenetic studies have recovered these as either forming their own family, the Alanqidae, or within the family Thalassodromidae. The genus Bakonydraco, also initially classified as an azhdarchid, has been recovered as a tapejarid in many recent studies. 
Volgadraco and Bogolubovia, both assigned to this family in at least one study, are currently considered pteranodontians. The pterosaur Montanazhdarcho has also been reclassified as a non-azhdarchid, with phylogenetic analyses recovering it as either an alanqid, or as a basal azhdarchiform. The genus Navajodactylus was tentatively assigned to this family, but its status has been questioned. The pterosaur Tethydraco has been suggested to have been an azhdarchid in one study, but this assignment has not been found in other analyses, with most finding it as a pteranodontian.
Edge contraction
In graph theory, an edge contraction is an operation that removes an edge from a graph while simultaneously merging the two vertices that it previously joined. Edge contraction is a fundamental operation in the theory of graph minors. Vertex identification is a less restrictive form of this operation. Definition The edge contraction operation occurs relative to a particular edge, e. The edge e is removed and its two incident vertices, u and v, are merged into a new vertex w, where the edges incident to w each correspond to an edge incident to either u or v. More generally, the operation may be performed on a set of edges by contracting each edge (in any order). The resulting graph is sometimes written as G/e. (Contrast this with G − e, which means simply removing the edge e without merging its incident vertices.) As defined below, an edge contraction operation may result in a graph with multiple edges even if the original graph was a simple graph. However, some authors disallow the creation of multiple edges, so that edge contractions performed on simple graphs always produce simple graphs. Formal definition Let G = (V, E) be a graph (or directed graph) containing an edge e = (u, v) with u ≠ v. Let f be a function that maps every vertex in V \ {u, v} to itself, and otherwise maps it to a new vertex w. The contraction of e results in a new graph G′ = (V′, E′), where V′ = (V \ {u, v}) ∪ {w}, E′ = E \ {e}, and for every x ∈ V, x′ = f(x) ∈ V′ is incident to an edge e′ ∈ E′ if and only if the corresponding edge e ∈ E is incident to x in G. Vertex identification Vertex identification (sometimes called vertex contraction) removes the restriction that the contraction must occur over vertices sharing an incident edge. (Thus, edge contraction is a special case of vertex identification.) The operation may occur on any pair (or subset) of vertices in the graph. Edges between two contracting vertices are sometimes removed. If v and v′ are vertices of distinct components of G, then we can create a new graph G′ by identifying v and v′ in G as a new vertex v in G′. 
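The edge contraction defined above can be sketched in a few lines; this is a hedged illustration under the simple-graph convention (the resulting self-loop and any parallel edges are dropped), with the adjacency-set representation and the merged-vertex label "w" as assumptions of this sketch:

```python
# Sketch of edge contraction on a simple undirected graph, represented
# as a dict mapping each vertex to the set of its neighbours.

def contract_edge(graph, u, v, merged="w"):
    """Contract edge (u, v): delete it and merge u, v into `merged`."""
    if v not in graph[u]:
        raise ValueError("no edge between u and v")
    # Neighbours of the new vertex: everything adjacent to u or v,
    # excluding u and v themselves (this removes the contracted edge
    # and the would-be self-loop, keeping the result simple).
    neighbours = (graph[u] | graph[v]) - {u, v}
    new_graph = {}
    for x, adj in graph.items():
        if x in (u, v):
            continue
        # Redirect every edge that pointed at u or v to the new vertex.
        new_graph[x] = {merged if y in (u, v) else y for y in adj}
    new_graph[merged] = neighbours
    return new_graph

# A triangle a-b-c plus a pendant vertex d attached to c:
G = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
H = contract_edge(G, "a", "b")
# Contracting a-b merges the pair into "w"; the triangle collapses to
# the single edge w-c, and d stays attached to c.
```

Dropping the self-loop and merging parallel edges corresponds to the authors who insist contraction of a simple graph yields a simple graph; a multigraph version would instead keep the duplicated edges.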
More generally, given a partition of the vertex set, one can identify vertices in the partition; the resulting graph is known as a quotient graph. Vertex cleaving Vertex cleaving, which is the same as vertex splitting, means one vertex is being split into two, where these two new vertices are adjacent to the vertices that the original vertex was adjacent to. This is a reverse operation of vertex identification, although in general for vertex identification, adjacent vertices of the two identified vertices are not the same set. Path contraction Path contraction occurs upon the set of edges in a path that contract to form a single edge between the endpoints of the path. Edges incident to vertices along the path are either eliminated, or arbitrarily (or systematically) connected to one of the endpoints. Twisting Consider two disjoint graphs G1 and G2, where G1 contains vertices u1 and v1 and G2 contains vertices u2 and v2. Suppose we can obtain the graph G by identifying the vertices u1 of G1 and u2 of G2 as the vertex u of G and identifying the vertices v1 of G1 and v2 of G2 as the vertex v of G. In a twisting G′ of G with respect to the vertex set {u, v}, we identify, instead, u1 with v2 and u2 with v1. Repeated contractions Given a finite set of edges, the order in which contractions are performed on a graph does not change the result (up to isomorphism). The result reduces to showing that (G/e)/f is isomorphic to (G/f)/e for two edges e and f of G. Applications Both edge and vertex contraction techniques are valuable in proof by induction on the number of vertices or edges in a graph, where it can be assumed that a property holds for all smaller graphs and this can be used to prove the property for the larger graph. Edge contraction is used in the recursive formula for the number of spanning trees of an arbitrary connected graph, and in the recurrence formula for the chromatic polynomial of a simple graph. Contractions are also useful in structures where we wish to simplify a graph by identifying vertices that represent essentially equivalent entities. 
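The recursive spanning-tree formula mentioned above is the classic deletion–contraction recurrence t(G) = t(G − e) + t(G / e): every spanning tree either omits e (counted by deleting e) or contains e (counted by contracting e). A hedged sketch, assuming an edge-list multigraph representation (parallel edges repeat; contraction may create them, so they must be kept, while self-loops are discarded):

```python
# Deletion-contraction count of spanning trees, t(G) = t(G-e) + t(G/e).
# `vertices` is a set of labels; `edges` is a list of (u, v) tuples,
# with parallel edges simply repeated in the list.

def spanning_trees(vertices, edges):
    """Count the spanning trees of a multigraph (0 if disconnected)."""
    # Self-loops belong to no spanning tree, so drop them first.
    edges = [e for e in edges if e[0] != e[1]]
    if len(vertices) == 1:
        return 1                      # a single vertex: one (empty) tree
    if not edges:
        return 0                      # >1 vertex, no edges: disconnected
    u, v = edges[0]
    deleted = edges[1:]               # G - e: just drop the edge
    # G / e: relabel v as u everywhere (may create parallel edges,
    # which the recurrence must keep, and self-loops, dropped above).
    contracted = [tuple(u if x == v else x for x in e) for e in deleted]
    return (spanning_trees(vertices, deleted)
            + spanning_trees(vertices - {v}, contracted))

# The 4-cycle has exactly 4 spanning trees (drop any one edge):
C4 = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(spanning_trees({"a", "b", "c", "d"}, C4))  # → 4
```

The exponential recursion is only meant to exhibit the recurrence; in practice the count is obtained in polynomial time from Kirchhoff's matrix-tree theorem.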
One of the most common examples is the reduction of a general directed graph to an acyclic directed graph by contracting all of the vertices in each strongly connected component. If the relation described by the graph is transitive, no information is lost as long as we label each vertex with the set of labels of the vertices that were contracted to form it. Another example is the coalescing performed in global graph coloring register allocation, where vertices are contracted (where it is safe) in order to eliminate move operations between distinct variables. Edge contraction is used in 3D modelling packages (either manually, or through some feature of the modelling software) to consistently reduce vertex count, aiding in the creation of low-polygon models.
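The condensation into strongly connected components described above can be sketched as follows; Kosaraju's two-pass algorithm is used here as one standard way to find the components (the function names and the frozenset labelling of the merged vertices are choices of this sketch, not part of any particular library):

```python
# Condense a directed graph by identifying the vertices of each strongly
# connected component (Kosaraju's algorithm), yielding an acyclic
# quotient graph whose vertices are frozensets of the merged labels.

def condense(graph):
    """graph: dict vertex -> set of successors. Returns the DAG of SCCs."""
    # Pass 1: record DFS finish order on the original graph.
    visited, order = set(), []
    def dfs1(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                dfs1(w)
        order.append(v)
    for v in graph:
        if v not in visited:
            dfs1(v)
    # Pass 2: walk the reversed graph in reverse finish order;
    # each walk discovers exactly one strongly connected component.
    rev = {v: set() for v in graph}
    for v, succs in graph.items():
        for w in succs:
            rev[w].add(v)
    comp = {}
    def dfs2(v, label):
        comp[v] = label
        for w in rev[v]:
            if w not in comp:
                dfs2(w, label)
    for v in reversed(order):
        if v not in comp:
            dfs2(v, v)
    # Identify each component's vertices; edges between components survive.
    members = {}
    for v, label in comp.items():
        members.setdefault(label, set()).add(v)
    node = {label: frozenset(vs) for label, vs in members.items()}
    dag = {n: set() for n in node.values()}
    for v, succs in graph.items():
        for w in succs:
            if comp[v] != comp[w]:
                dag[node[comp[v]]].add(node[comp[w]])
    return dag

# A 2-cycle a<->b feeding into c: the cycle collapses to one vertex.
G = {"a": {"b"}, "b": {"a", "c"}, "c": set()}
print(condense(G))  # {a, b} -> {c}
```

Labelling each condensed vertex with the frozenset of its members is exactly the labelling the text recommends for preserving the information of a transitive relation.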
Ice pellets
Ice pellets (Canadian English) or sleet (American English) is a form of precipitation consisting of small, hard, translucent balls of ice. Ice pellets are different from graupel ("soft hail"), which is made of frosty white opaque rime, and from a mixture of rain and snow, which is a slushy liquid or semisolid. Ice pellets often bounce when they hit the ground or other solid objects, and make a higher-pitched "tap" when striking objects like jackets, windshields, and dried leaves, compared to the dull splat of liquid raindrops. Pellets generally do not freeze into other solid masses unless mixed with freezing rain. The METAR code for ice pellets is PL (PE before November 1998). Terminology Ice pellets are known as sleet in the United States, the official term used by the U.S. National Weather Service. However, the term sleet refers to a mixture of rain and snow in most Commonwealth countries instead, including Canada. Because of this, Environment Canada never uses the term sleet, and uses the terms "ice pellets" or "wet snow" instead. Formation Ice pellets form when a layer of above-freezing air is located between above the ground, with sub-freezing air both above and below it. This causes the partial or complete melting of any snowflakes falling through the warm layer (the French term for sleet, neige fondue, literally means "melted snow" because of this). As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too small, the precipitation will not have time to re-freeze before hitting the surface, so it will become freezing rain and freeze on the surface instead. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front, and often with a stationary front. 
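The melt-then-refreeze logic above can be summarized as a toy decision rule; this is an illustrative sketch of the described mechanism, not a meteorological model, and the three-parameter sounding (an elevated-layer temperature, a surface-layer temperature, and whether the cold surface layer is deep enough for refreezing) is a deliberate simplification:

```python
# Toy classifier for the precipitation types discussed above, given a
# simplified two-layer temperature profile.  Illustrative only.

def classify_precip(aloft_temp_c, surface_temp_c, cold_layer_deep):
    """Classify what reaches the ground.

    aloft_temp_c:    temperature of the elevated layer snow falls through
    surface_temp_c:  temperature of the layer next to the ground
    cold_layer_deep: True if the sub-freezing surface layer is deep
                     enough for melted flakes to refreeze before landing
    """
    if aloft_temp_c <= 0 and surface_temp_c <= 0:
        return "snow"            # never melts on the way down
    if aloft_temp_c > 0 and surface_temp_c <= 0:
        # Flakes melt in the warm layer; the outcome depends on how
        # deep the sub-freezing layer beneath it is.
        return "ice pellets" if cold_layer_deep else "freezing rain"
    return "rain"                # above freezing near the surface

print(classify_precip(3.0, -5.0, cold_layer_deep=True))   # ice pellets
print(classify_precip(3.0, -2.0, cold_layer_deep=False))  # freezing rain
```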
Effects In most parts of the world, ice pellets occur only for brief periods and do not accumulate in significant or troublesome amounts. However, across the eastern United States and southeastern Canada, warm air flowing north from the Gulf of Mexico ahead of a strong synoptic-scale storm system can overrun cold, dense air at the surface for many hundreds of miles for an extended period of time. In these areas, ice pellet accumulations of are not unheard of. The effects of a significant accumulation of ice pellets are not unlike those of an accumulation of snow. One significant difference, however, is that for the same volume, ice pellets are significantly heavier than snow and thus more difficult to clear away. Additionally, a volume of ice pellets takes significantly longer to melt than an equal volume of fresh snowfall, due to its smaller surface area.
Spinal cord
The spinal cord is a long, thin, tubular structure made up of nervous tissue that extends from the medulla oblongata in the lower brainstem to the lumbar region of the vertebral column (backbone) of vertebrate animals. The center of the spinal cord is hollow and contains a structure called the central canal, which contains cerebrospinal fluid. The spinal cord is also covered by meninges and enclosed by the neural arches. Together, the brain and spinal cord make up the central nervous system. In humans, the spinal cord is a continuation of the brainstem; it anatomically begins at the occipital bone, passing out of the foramen magnum and entering the spinal canal at the beginning of the cervical vertebrae. The spinal cord extends down to between the first and second lumbar vertebrae, where it tapers off into the conus medullaris, below which the nerve roots continue as the cauda equina. The enclosing bony vertebral column protects the relatively shorter spinal cord. It is around long in adult men and around long in adult women. The diameter of the spinal cord ranges from in the cervical and lumbar regions to in the thoracic area. The spinal cord functions primarily in the transmission of nerve signals from the motor cortex to the body, and from the afferent fibers of the sensory neurons to the sensory cortex. It is also a center for coordinating many reflexes and contains reflex arcs that can independently control reflexes. It is also the location of groups of spinal interneurons that make up the neural circuits known as central pattern generators. These circuits are responsible for controlling motor instructions for rhythmic movements such as walking. Structure The spinal cord is the main pathway for information connecting the brain and peripheral nervous system. 
Much shorter than its protecting spinal column, the human spinal cord originates in the brainstem, passes through the foramen magnum, and continues through to the conus medullaris near the second lumbar vertebra before terminating in a fibrous extension known as the filum terminale. The spinal cord is an estimated long in males and about in females. It is ovoid-shaped and is enlarged in the cervical and lumbar regions. The cervical enlargement, stretching from the C4 to T1 vertebrae, is where sensory input comes from and motor output goes to the arms and trunk. The lumbar enlargement, located between T10 and L1, handles sensory input and motor output coming from and going to the legs. The spinal cord is continuous with the caudal portion of the medulla, running from the base of the skull to the body of the first lumbar vertebra. It does not run the full length of the vertebral column in adults. It is made of 31 segments from which branch one pair of sensory nerve roots and one pair of motor nerve roots. The nerve roots then merge into bilaterally symmetrical pairs of spinal nerves. The peripheral nervous system is made up of these spinal roots, nerves, and ganglia. The dorsal roots are afferent fascicles, receiving sensory information from the skin, muscles, and visceral organs to be relayed to the brain. The roots terminate in dorsal root ganglia, which are composed of the cell bodies of the corresponding neurons. Ventral roots consist of efferent fibers that arise from motor neurons whose cell bodies are found in the ventral (or anterior) gray horns of the spinal cord. The spinal cord (and brain) are protected by three layers of tissue or membranes called meninges, that surround the canal. The dura mater is the outermost layer, and it forms a tough protective coating. Between the dura mater and the surrounding bone of the vertebrae is a space called the epidural space. The epidural space is filled with adipose tissue, and it contains a network of blood vessels. 
The arachnoid mater, the middle protective layer, is named for its open, spiderweb-like appearance. The space between the arachnoid and the underlying pia mater is called the subarachnoid space. The subarachnoid space contains cerebrospinal fluid, which can be sampled with a lumbar puncture, or "spinal tap" procedure. The delicate pia mater, the innermost protective layer, is tightly associated with the surface of the spinal cord. The cord is stabilized within the dura mater by the connecting denticulate ligaments, which extend from the enveloping pia mater laterally between the dorsal and ventral roots. The dural sac ends at the vertebral level of the second sacral vertebra. In cross-section, the peripheral region of the cord contains neuronal white matter tracts containing sensory and motor axons. Internal to this peripheral region is the grey matter, which contains the nerve cell bodies arranged in the three grey columns that give the region its butterfly-shape. This central region surrounds the central canal, which is an extension of the fourth ventricle and contains cerebrospinal fluid. The spinal cord is elliptical in cross section, being compressed dorsolaterally. Two prominent grooves, or sulci, run along its length. The posterior median sulcus is the groove in the dorsal side, and the anterior median fissure is the groove in the ventral side. Segments The human spinal cord is divided into segments where pairs of spinal nerves (mixed; sensory and motor) form. Six to eight motor nerve rootlets branch out of right and left ventralateral sulci in a very orderly manner. Nerve rootlets combine to form nerve roots. Likewise, sensory nerve rootlets form off right and left dorsal lateral sulci and form sensory nerve roots. The ventral (motor) and dorsal (sensory) roots combine to form spinal nerves (mixed; motor and sensory), one on each side of the spinal cord. Spinal nerves, with the exception of C1 and C2, form inside the intervertebral foramen. 
These rootlets form the demarcation between the central and peripheral nervous systems. Generally, the spinal cord segments do not correspond to bony vertebra levels. As the spinal cord terminates at the L1–L2 level, other segments of the spinal cord are positioned superior to their corresponding bony vertebral body. For example, the T11 spinal segment is located higher than the T11 bony vertebra, and the sacral spinal cord segments are higher than the L1 vertebral body. The grey columns (three regions of grey matter) in the center of the cord are shaped like a butterfly and consist of cell bodies of interneurons, motor neurons, neuroglia cells and unmyelinated axons. The anterior and posterior grey columns present as projections of grey matter and are also known as the horns of the spinal cord. The white matter is located outside of the grey matter and consists almost totally of myelinated motor and sensory axons. Columns of white matter known as funiculi carry information either up or down the spinal cord. The spinal cord proper terminates in a region called the conus medullaris, while the pia mater continues as an extension called the filum terminale, which anchors the spinal cord to the coccyx. The cauda equina ("horse's tail") is a collection of nerves inferior to the conus medullaris that continue to travel through the vertebral column to the coccyx. The cauda equina forms because the spinal cord stops growing in length at about age four, even though the vertebral column continues to lengthen until adulthood. This results in sacral spinal nerves originating in the upper lumbar region. For that reason, the spinal cord occupies only two-thirds of the vertebral canal. The inferior part of the vertebral canal is filled with cerebrospinal fluid and the space is called the lumbar cistern. Within the central nervous system (CNS), nerve cell bodies are generally organized into functional clusters called nuclei, while their axons are grouped into tracts.
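The segmental organization described above can be sketched as a short enumeration. The per-region counts used here (8 cervical, 12 thoracic, 5 lumbar, 5 sacral, 1 coccygeal) are the standard human values:

```python
# Enumerate the 31 human spinal cord segments by region.
# Counts: 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, 1 coccygeal.
REGIONS = [("C", 8), ("T", 12), ("L", 5), ("S", 5), ("Co", 1)]

def spinal_segments():
    """Return the segment labels C1..C8, T1..T12, L1..L5, S1..S5, Co1."""
    return [f"{prefix}{i}" for prefix, count in REGIONS for i in range(1, count + 1)]

segments = spinal_segments()
assert len(segments) == 31  # one pair of spinal nerves forms at each segment
```

Pairing each label with its dorsal (sensory) and ventral (motor) roots gives the 31 bilaterally symmetrical pairs of spinal nerves described above.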
There are 31 spinal cord nerve segments in a human spinal cord:

8 cervical segments forming 8 pairs of cervical nerves (C1 spinal nerves exit the spinal column between the foramen magnum and the C1 vertebra; C2 nerves exit between the posterior arch of the C1 vertebra and the lamina of C2; C3–C8 spinal nerves pass through the intervertebral foramen above their corresponding cervical vertebrae, with the exception of the C8 pair, which exits between the C7 and T1 vertebrae)
12 thoracic segments forming 12 pairs of thoracic nerves
5 lumbar segments forming 5 pairs of lumbar nerves
5 sacral segments forming 5 pairs of sacral nerves
1 coccygeal segment

In the fetus, vertebral segments correspond with spinal cord segments. However, because the vertebral column grows longer than the spinal cord, spinal cord segments do not correspond to vertebral segments in the adult, particularly in the lower spinal cord. For example, lumbar and sacral spinal cord segments are found between vertebral levels T9 and L2, and the spinal cord ends around the L1/L2 vertebral level, forming a structure known as the conus medullaris. Although the spinal cord cell bodies end around the L1/L2 vertebral level, the spinal nerves for each segment exit at the level of the corresponding vertebra. For the nerves of the lower spinal cord, this means that they exit the vertebral column much lower (more caudally) than their roots. As these nerves travel from their respective roots to their point of exit from the vertebral column, the nerves of the lower spinal segments form a bundle called the cauda equina.

Enlargements

There are two regions where the spinal cord enlarges:

Cervical enlargement – corresponds roughly to the brachial plexus nerves, which innervate the upper limb. It includes spinal cord segments from about C4 to T1. The vertebral levels of the enlargement are roughly the same (C4 to T1).
Lumbar enlargement – corresponds to the lumbosacral plexus nerves, which innervate the lower limb.
It comprises the spinal cord segments from L2 to S3 and is found about the vertebral levels of T9 to T12.

Blood supply

The spinal cord is supplied with blood by three arteries that run along its length starting in the brain, and many arteries that approach it through the sides of the spinal column. The three longitudinal arteries are the anterior spinal artery and the right and left posterior spinal arteries. These travel in the subarachnoid space and send branches into the spinal cord. They form anastomoses (connections) via the anterior and posterior segmental medullary arteries, which enter the spinal cord at various points along its length. The actual blood flow caudally through these arteries, derived from the posterior cerebral circulation, is inadequate to maintain the spinal cord beyond the cervical segments. The major contribution to the arterial blood supply of the spinal cord below the cervical region comes from the radially arranged posterior and anterior radicular arteries, which run into the spinal cord alongside the dorsal and ventral nerve roots, but with one exception do not connect directly with any of the three longitudinal arteries. These intercostal and lumbar radicular arteries arise from the aorta, provide major anastomoses and supplement the blood flow to the spinal cord. In humans, the largest of the anterior radicular arteries is known as the artery of Adamkiewicz, or anterior radicularis magna (ARM) artery, which usually arises between L1 and L2, but can arise anywhere from T9 to L5. Impaired blood flow through these critical radicular arteries, especially during surgical procedures that involve abrupt disruption of blood flow through the aorta, for example during aortic aneurysm repair, can result in spinal cord infarction and paraplegia.

Development

The spinal cord is made from part of the neural tube during development.
The spinal cord arises from the neural tube in four stages: the neural plate, the neural fold, the neural tube, and finally the spinal cord itself. Neural differentiation occurs within the spinal cord portion of the tube. As the neural tube begins to develop, the notochord begins to secrete a factor known as Sonic hedgehog (SHH). As a result, the floor plate then also begins to secrete SHH, and this will induce the basal plate to develop motor neurons. During the maturation of the neural tube, its lateral walls thicken and form a longitudinal groove called the sulcus limitans. This groove extends the length of the spinal cord and divides it into dorsal and ventral portions. Meanwhile, the overlying ectoderm secretes bone morphogenetic protein (BMP). This induces the roof plate to begin to secrete BMP, which will induce the alar plate to develop sensory neurons. Opposing gradients of morphogens such as BMP and SHH form different domains of dividing cells along the dorsoventral axis. Dorsal root ganglion neurons differentiate from neural crest progenitors. As the dorsal and ventral column cells proliferate, the lumen of the neural tube narrows to form the small central canal of the spinal cord. The alar plate and the basal plate are separated by the sulcus limitans. Additionally, the floor plate also secretes netrins. The netrins act as chemoattractants that guide the axons of pain and temperature sensory neurons in the alar plate to decussate across the anterior white commissure, where they then ascend towards the thalamus. Following the closure of the caudal neuropore and formation of the brain's ventricles that contain the choroid plexus tissue, the central canal of the caudal spinal cord is filled with cerebrospinal fluid. Earlier findings by Viktor Hamburger and Rita Levi-Montalcini in the chick embryo have been confirmed by more recent studies which have demonstrated that the elimination of neuronal cells by programmed cell death is necessary for the correct assembly of the nervous system.
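The opposing SHH (ventral, from the floor plate) and BMP (dorsal, from the roof plate) gradients can be illustrated with a deliberately simplified toy model. The linear gradients and the single crossover threshold below are illustrative assumptions, not measured biology:

```python
def toy_fate(position):
    """Assign a crude neural-tube fate from opposing morphogen gradients.

    position: 0.0 = ventral (floor plate), 1.0 = dorsal (roof plate).
    SHH is modeled as strongest ventrally, BMP strongest dorsally;
    linear gradients are an illustrative simplification.
    """
    shh = 1.0 - position   # secreted by the notochord/floor plate, ventral high
    bmp = position         # secreted by the ectoderm/roof plate, dorsal high
    if shh > bmp:
        return "basal plate (motor neurons)"
    return "alar plate (sensory neurons)"

assert toy_fate(0.1) == "basal plate (motor neurons)"
assert toy_fate(0.9) == "alar plate (sensory neurons)"
```

In the real cord, graded morphogen concentrations specify several distinct progenitor domains along the dorsoventral axis, not just two; the single threshold here only captures the basal-versus-alar distinction named in the text.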
Overall, spontaneous embryonic activity has been shown to play a role in neuron and muscle development but is probably not involved in the initial formation of connections between spinal neurons.

Spinal cord tracts

The spinal cord mainly functions to carry information to and from the brain, in ascending and descending tracts.

Ascending tracts

There are two ascending somatosensory pathways in the spinal cord: the dorsal column–medial lemniscus (DCML) pathway and the anterolateral system (ALS).

DCML

In the dorsal column–medial lemniscus pathway, a primary neuron's axon enters the spinal cord and then enters the dorsal column. If the primary axon enters below spinal level T6, the axon travels in the gracile fasciculus, the medial part of the column. If the axon enters above level T6, then it travels in the cuneate fasciculus, which is lateral to the gracile fasciculus. Either way, the primary axon ascends to the lower medulla, where it leaves its fasciculus and synapses with a secondary neuron in one of the dorsal column nuclei: either the nucleus gracilis or the nucleus cuneatus, depending on the pathway it took. At this point, the secondary axon leaves its nucleus and passes anteriorly and medially. The collection of secondary axons that do this are known as internal arcuate fibers. The internal arcuate fibers decussate and continue ascending as the contralateral medial lemniscus. Secondary axons from the medial lemniscus finally terminate in the ventral posterolateral nucleus (VPLN) of the thalamus, where they synapse with tertiary neurons. From there, tertiary neurons ascend via the posterior limb of the internal capsule and end in the primary sensory cortex. The proprioception of the lower limbs differs from that of the upper limbs and upper trunk. There is a four-neuron pathway for lower limb proprioception. This pathway initially follows the dorsal spino-cerebellar pathway.
It is arranged as follows: proprioceptive receptors of the lower limb → peripheral process → dorsal root ganglion → central process → Clarke's column → 2nd-order neuron → spinocerebellar tract → cerebellum.

Anterolateral system

The anterolateral system (ALS) works somewhat differently. Its primary neurons' axons enter the spinal cord and then ascend one to two levels before synapsing in the substantia gelatinosa. The tract that ascends before synapsing is known as Lissauer's tract. After synapsing, secondary axons decussate and ascend in the anterior lateral portion of the spinal cord as the spinothalamic tract. This tract ascends all the way to the VPLN, where it synapses on tertiary neurons. Tertiary neuronal axons then travel to the primary sensory cortex via the posterior limb of the internal capsule. Some of the "pain fibers" in the ALS deviate from their pathway towards the VPLN. In one such deviation, axons travel towards the reticular formation in the midbrain. The reticular formation then projects to a number of places including the hippocampus (to create memories about the pain), the centromedian nucleus (to cause diffuse, non-specific pain) and various parts of the cortex. Additionally, some ALS axons from the spinomesencephalic pathway project to the periaqueductal gray in the midbrain, and neurons of the periaqueductal gray then project to the nucleus raphes magnus, which projects back down to where the pain signal is coming from and inhibits it. This helps control the sensation of pain to some degree.

Spinocerebellar tracts

Proprioceptive information in the body travels up the spinal cord via three tracts. Below L2, the proprioceptive information travels up the spinal cord in the ventral spinocerebellar tract. Also known as the anterior spinocerebellar tract, this pathway begins with sensory receptors that take in the information and travel into the spinal cord. The cell bodies of these primary neurons are located in the dorsal root ganglia.
In the spinal cord, the axons synapse, and the secondary neuronal axons decussate and then travel up to the superior cerebellar peduncle, where they decussate again. From here, the information is brought to deep nuclei of the cerebellum including the fastigial and interposed nuclei. From the levels of L2 to T1, proprioceptive information enters the spinal cord and ascends ipsilaterally, where it synapses in Clarke's nucleus. The secondary neuronal axons continue to ascend ipsilaterally and then pass into the cerebellum via the inferior cerebellar peduncle. This tract is known as the dorsal spinocerebellar tract. From above T1, proprioceptive primary axons enter the spinal cord and ascend ipsilaterally until reaching the accessory cuneate nucleus, where they synapse. The secondary axons pass into the cerebellum via the inferior cerebellar peduncle, where again these axons synapse on cerebellar deep nuclei. This tract is known as the cuneocerebellar tract.

Descending tracts

The descending tracts carry motor information. Descending tracts involve two neurons: the upper motor neuron and the lower motor neuron. A nerve signal travels down the upper motor neuron until it synapses with the lower motor neuron in the spinal cord. Then, the lower motor neuron conducts the nerve signal to the spinal root, where efferent nerve fibers carry the motor signal toward the target muscle. The descending tracts are composed of white matter. There are several descending tracts serving different functions. The corticospinal tracts (lateral and anterior) are responsible for coordinated limb movements. The corticospinal tract serves as the motor pathway for upper motor neuronal signals coming from the cerebral cortex and from primitive brainstem motor nuclei.
Cortical upper motor neurons originate from Brodmann areas 1, 2, 3, 4, and 6 and then descend in the posterior limb of the internal capsule, through the crus cerebri, down through the pons, and to the medullary pyramids, where about 90% of the axons cross to the contralateral side at the decussation of the pyramids. They then descend as the lateral corticospinal tract. These axons synapse with lower motor neurons in the ventral horns of all levels of the spinal cord. The remaining 10% of axons descend on the ipsilateral side as the ventral corticospinal tract. These axons also synapse with lower motor neurons in the ventral horns. Most of them will cross to the contralateral side of the cord (via the anterior white commissure) right before synapsing. The midbrain nuclei include four motor tracts that send upper motor neuronal axons down the spinal cord to lower motor neurons. These are the rubrospinal tract, the vestibulospinal tract, the tectospinal tract and the reticulospinal tract. The rubrospinal tract descends with the lateral corticospinal tract, and the remaining three descend with the anterior corticospinal tract. The lower motor neurons they target can be divided into two different groups, served by the lateral corticospinal tract and the anterior corticospinal tract respectively. The lateral tract contains upper motor neuronal axons which synapse on dorsolateral (DL) lower motor neurons. The DL neurons are involved in distal limb control. Therefore, these DL neurons are found specifically only in the cervical and lumbosacral enlargements within the spinal cord. There is no decussation in the lateral corticospinal tract after the decussation at the medullary pyramids. The anterior corticospinal tract descends ipsilaterally in the anterior column, where the axons emerge and either synapse on lower ventromedial (VM) motor neurons in the ventral horn ipsilaterally or decussate at the anterior white commissure, where they synapse on VM lower motor neurons contralaterally.
The tectospinal, vestibulospinal and reticulospinal tracts descend ipsilaterally in the anterior column but do not synapse across the anterior white commissure. Rather, they only synapse on VM lower motor neurons ipsilaterally. The VM lower motor neurons control the large, postural muscles of the axial skeleton. These lower motor neurons, unlike those of the DL, are located in the ventral horn all the way throughout the spinal cord.

Other functions

The spinal cord is a center for coordinating many reflexes and contains reflex arcs that can independently control reflexes. It is also the location of groups of spinal interneurons that make up the neural circuits known as central pattern generators. These circuits are responsible for controlling motor instructions for rhythmic movements such as walking.

Clinical significance

A congenital disorder is diastematomyelia, in which part of the spinal cord is split, usually at the level of the upper lumbar vertebrae. Sometimes the split can be along the length of the spinal cord.

Injury

Spinal cord injuries can be caused by trauma to the spinal column (stretching, bruising, applying pressure, severing, laceration, etc.). The vertebral bones or intervertebral disks can shatter, causing the spinal cord to be punctured by a sharp fragment of bone. Usually, victims of spinal cord injuries will suffer loss of feeling in certain parts of their body. In milder cases, a victim might only suffer loss of hand or foot function. More severe injuries may result in paraplegia, tetraplegia (also known as quadriplegia), or full body paralysis below the site of injury to the spinal cord. Damage to upper motor neuron axons in the spinal cord results in a characteristic pattern of ipsilateral deficits. These include hyperreflexia, hypertonia and muscle weakness. Lower motor neuronal damage results in its own characteristic pattern of deficits. Rather than an entire side of deficits, there is a pattern relating to the myotome affected by the damage.
Additionally, lower motor neuron damage is characterized by muscle weakness, hypotonia, hyporeflexia and muscle atrophy. Spinal shock and neurogenic shock can occur from a spinal injury. Spinal shock is usually temporary, lasting only for 24–48 hours, and is a temporary absence of sensory and motor functions. Neurogenic shock lasts for weeks and can lead to a loss of muscle tone due to disuse of the muscles below the injured site. The two areas of the spinal cord most commonly injured are the cervical spine (C1–C7) and the lumbar spine (L1–L5). (The notation C1, C7, L1, L5 refers to the location of a specific vertebra in either the cervical, thoracic, or lumbar region of the spine.) Spinal cord injury can also be non-traumatic and caused by disease (transverse myelitis, polio, spina bifida, Friedreich's ataxia, spinal cord tumor, spinal stenosis, etc.). Globally, there are an estimated 40 to 80 new cases of spinal cord injury per million population each year, and approximately 90% of these cases result from traumatic events. Real or suspected spinal cord injuries need immediate immobilisation, including that of the head. Scans will be needed to assess the injury. A steroid, methylprednisolone, can be of help, as can physical therapy and possibly antioxidants. Treatments need to focus on limiting post-injury cell death, promoting cell regeneration, and replacing lost cells. Regeneration is facilitated by maintaining electric transmission in neural elements.

Stenosis

Spinal stenoses at the lumbar region are usually due to disc herniation, hypertrophy of the facet joint and ligamentum flavum, osteophytes, and spondylolisthesis. An uncommon cause of lumbar spinal stenosis is spinal epidural lipomatosis, a condition in which there is an excessive deposit of fat in the epidural space, causing compression of the nerve roots and spinal cord. The epidural fat can be seen as low density on CT scans and high intensity on T2-weighted fast spin echo MRI images.
Tumors

Spinal tumors can occur in the spinal cord, and these can be either inside (intradural) or outside (extradural) the dura mater.

Procedures

The spinal cord ends at the level of vertebrae L1–L2, while the subarachnoid space, the compartment that contains cerebrospinal fluid, extends down to the lower border of S2. Lumbar punctures in adults are usually performed between L3–L5 (cauda equina level) in order to avoid damage to the spinal cord. In the fetus, the spinal cord extends the full length of the spine and regresses as the body grows.
Pony
A pony is a type of small horse, usually measured under a specified height at maturity. Ponies often have thicker coats, manes and tails compared to larger horses, and proportionally shorter legs, wider barrels, heavier bone, thicker necks and shorter heads. In modern use, breed registries and horse shows may define a pony as measuring below a certain height at the withers, with height limits varying from about 14 hands (142 cm) to 14.2 hands (147 cm). Some distinguish between a horse and a pony based on breed or phenotype, regardless of height. The word pony derives from the old French poulenet, a diminutive of poulain, meaning foal, a young, immature horse. A full-sized horse may sometimes be called a pony as a term of endearment.

Definition

For many forms of competition, the official definition of a pony is a horse that measures up to 14.2 hands at the withers. Standard horses are taller than 14.2 hands. The International Federation for Equestrian Sports defines the official cutoff point at 148 cm without shoes and 149 cm with shoes. However, the term pony can be used in general (or affectionately) for any small horse, regardless of its actual size or breed. Furthermore, some horse breeds may have individuals who mature under that height but are still called horses and are allowed to compete as horses. In Australia, horses that measure from 14 to 15 hands are known as "galloways", and ponies in Australia measure under 14 hands.

History

Ponies originally developed as a landrace adapted to a harsh natural environment, and were considered part of the "draft" subtype typical of Northern Europe. At one time, it was hypothesized that they may have descended from a wild "draft" subspecies of Equus ferus. Studies of mitochondrial DNA (which is passed on through the female line) indicate that a large number of wild mares have contributed to modern domestic breeds; in contrast, studies of Y-DNA (passed down the male line) suggest that there was possibly just one single male ancestor of all domesticated breeds.
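A "hand" is 4 inches, and the digit after the point in notation such as 14.2 counts whole inches, not tenths. A small conversion sketch (the 14.2-hand pony cutoff and the 148 cm FEI limit are the commonly cited values and are assumed here):

```python
def hands_to_cm(hands_str):
    """Convert height notation like '14.2' (14 hands 2 inches) to centimetres."""
    whole, _, extra = hands_str.partition(".")
    inches = int(whole) * 4 + (int(extra) if extra else 0)
    return inches * 2.54

height = hands_to_cm("14.2")   # 14 hands 2 inches = 58 in
assert abs(height - 147.32) < 0.01
# The common 14.2-hand competition cutoff (assumed) sits just inside the
# FEI 148 cm (unshod) limit.
assert height <= 148
```

Note that "14.5" is not valid in this notation: the inch part only runs from 0 to 3, so 14.3 is followed by 15.0.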
Domestication of the horse probably first occurred in the Eurasian steppes with horses of roughly pony height, and as horse domestication spread, the male descendants of the original stallion went on to be bred with local wild mares. Domesticated ponies of all breeds originally developed mainly from the need for a working animal that could fulfill specific local draft and transportation needs while surviving in harsh environments. The usefulness of the pony was noted by farmers, who observed that a pony could outperform a draft horse on small farms. By the 20th century, many pony breeds had Arabian and other blood added to make a more refined pony suitable for riding.

Uses

In many parts of the world, ponies are used as working animals, as pack animals and for pulling various horse-drawn vehicles. They are seen in many different equestrian pursuits. Some breeds, such as the Hackney pony, are primarily used for driving, while other breeds, such as the Connemara pony and Australian Pony, are used primarily for riding. Others, such as the Welsh pony, are used for both riding and driving. There is no direct correlation between a horse's size and its inherent athletic ability.

Characteristics

Ponies are often distinguished by their phenotype: a stocky body, dense bone, round shape and well-sprung ribs. They have a short head, large eyes and small ears. In addition to being smaller than a horse, their legs are proportionately shorter. They have strong hooves and grow a heavier hair coat, seen in a thicker mane and tail as well as a particularly heavy winter coat. Pony breeds have developed all over the world, particularly in cold and harsh climates where hardy, sturdy working animals were needed. They are remarkably strong for their size. Breeds such as the Connemara pony are recognized for their ability to carry a full-sized adult rider. Pound for pound, ponies can pull and carry more weight than a horse.
Draft-type ponies are able to pull loads significantly greater than their own weight, with larger ponies capable of pulling loads comparable to those pulled by full-sized draft horses, and even very small ponies are able to pull as much as 450 percent of their own weight. Nearly all pony breeds are very hardy, easy keepers that share the ability to thrive on a more limited diet than that of a regular-sized horse, requiring about half the hay for their weight that a horse would, and often not needing grain at all. However, for the same reason, they are also more vulnerable to laminitis and Cushing's syndrome. They may also have problems with hyperlipidemia. Ponies are generally considered intelligent and friendly, though sometimes they also are described as stubborn or cunning. The differences of opinion often result from an individual pony's degree of proper training. Ponies trained by inexperienced individuals, or only ridden by beginners, can turn out to be spoiled because their riders typically lack the experience base to correct bad habits. Properly trained ponies are appropriate mounts for children who are learning to ride. Larger ponies can be ridden by adults, as ponies are usually strong for their size. For showing purposes, ponies are often grouped into small, medium, and large sizes. Small ponies are 12.2 hands and under, medium ponies are over 12.2 but no taller than 13.2 hands, and large ponies are over 13.2 but no taller than 14.2 hands. The smallest equines are called miniature horses by many of their breeders and breed organizations, rather than ponies, even though they stand smaller than small ponies, usually no taller than 38 inches (97 cm) at the withers. There are also miniature pony breeds.

Similar or similarly named horses

Some horse breeds are not defined as ponies, even when they have some animals that measure under 14.2 hands. This is usually due to body build, traditional uses and overall physiology.
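The small/medium/large show grouping can be expressed as a simple classifier. The cutoffs used here (12.2, 13.2, and 14.2 hands) are the commonly cited US show divisions and are an assumption of this sketch:

```python
def pony_division(hands, inches=0):
    """Classify a pony into a show-size division by height.

    Cutoffs (assumed, commonly cited): small <= 12.2 hh, medium <= 13.2 hh,
    large <= 14.2 hh; anything taller is measured as a horse.
    A hand is 4 inches, so 12.2 hh = 12 * 4 + 2 = 50 inches.
    """
    total_inches = hands * 4 + inches
    if total_inches <= 12 * 4 + 2:
        return "small"
    if total_inches <= 13 * 4 + 2:
        return "medium"
    if total_inches <= 14 * 4 + 2:
        return "large"
    return "horse"

assert pony_division(12, 1) == "small"
assert pony_division(13) == "medium"
assert pony_division(14, 2) == "large"
assert pony_division(15) == "horse"
```

Working in whole inches avoids the pitfall of treating 12.2 hands as a decimal number, since the ".2" denotes 2 inches rather than two-tenths of a hand.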
Breeds that are considered horses regardless of height include the Arabian horse, American Quarter Horse and the Morgan horse, all of which have individual members both over and under 14.2 hands. Many horse breeds have some pony characteristics, such as small size, a heavy coat, a thick mane or heavy bone, but are considered to be horses. In cases such as these, there can be considerable debate over whether to call certain breeds "horses" or "ponies." However, individual breed registries usually are the arbiters of such debates, weighing the relative horse and pony characteristics of a breed. In some breeds, such as the Welsh pony, the horse-versus-pony controversy is resolved by creating separate divisions for consistently horse-sized animals, such as the "Section D" Welsh Cob. Some horses may be pony height due to environment more than genetics. For example, the Chincoteague pony, a feral horse that lives on Assateague Island off the coasts of Maryland and Virginia, often matures to the height of an average small horse when raised from a foal under domesticated conditions. Conversely, the term "pony" is occasionally used to describe horses of normal height. Horses used for polo are often called "polo ponies" regardless of height, even though they are often of Thoroughbred breeding and often well over 14.2 hands. American Indigenous tribes also have the tradition of referring to their horses as "ponies" when speaking in English, even though many of the Mustang horses they used in the 19th century were close to or over 14 hands, and most horses owned and bred by Native peoples today are of full horse height. Non-racing horses at racetracks that are used to lead the racehorses, ponying them, are called "pony horses". The term "pony" is also sometimes used to describe a full-sized horse in a humorous or affectionate sense. The Pony Club uses the term "pony" for any mount ridden by a member, regardless of its breed or size.
Pony Club members are allowed to compete with full-size horses and are not limited to pony-sized mounts.
Lemon
The lemon (Citrus × limon) is a species of small evergreen tree in the Citrus genus of the flowering plant family Rutaceae. The lemon is a hybrid of the citron and the bitter orange. Its origins are uncertain, but some evidence suggests lemons originated during the 1st millennium BC in what is now northeastern India. The yellow fruit of the lemon tree is used throughout the world, primarily for its juice. The pulp and rind are used in cooking and baking. The juice of the lemon is about 5–6% citric acid, giving it a sour taste. This makes it a key ingredient in drinks and foods such as lemonade and lemon meringue pie. In 2022, world production was 22 million tonnes, led by India with 18% of the total.

Description

The lemon tree produces a pointed oval yellow fruit. Botanically this is a hesperidium, a modified berry with a tough, leathery rind. The rind is divided into an outer colored layer or zest, which is aromatic with essential oils, and an inner layer of white spongy pith. Inside are multiple carpels arranged as radial segments. The seeds develop inside the carpels. The space inside each segment is a locule filled with juice vesicles. Lemons contain many phytochemicals, including polyphenols, terpenes, and tannins. Their juice contains slightly more citric acid than lime juice (about 47 g/L), nearly twice as much as grapefruit juice, and about five times as much as orange juice.

Origins

The lemon, like many other cultivated Citrus species, is a hybrid, in its case of the citron and the bitter orange. Lemons were most likely first grown in northeast India. The origin of the word lemon may be Middle Eastern. The word draws from the Old French limon, then Italian limone, from the Arabic laymūn or līmūn, and from the Persian līmūn, a generic term for citrus fruit, which is a cognate of Sanskrit nimbū ('lime'). Lemons entered Europe near southern Italy no later than the second century AD, during the time of Ancient Rome.
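The relative citric-acid contents quoted above (lemon juice about 47 g/L, nearly twice grapefruit juice, about five times orange juice) imply the following rough figures. The derived values are back-of-envelope arithmetic from those stated ratios, not separate measurements:

```python
LEMON_CITRIC_ACID_G_PER_L = 47.0  # figure given for lemon juice

# Derived from the stated ratios (approximate, illustrative only):
grapefruit = LEMON_CITRIC_ACID_G_PER_L / 2   # "nearly twice as much as grapefruit"
orange = LEMON_CITRIC_ACID_G_PER_L / 5       # "about five times as much as orange"

assert round(grapefruit, 1) == 23.5
assert round(orange, 1) == 9.4

# 47 g per litre of juice that is mostly water corresponds to roughly 5%
# citric acid by weight, consistent with the 5-6% figure quoted for lemons.
assert 4 <= LEMON_CITRIC_ACID_G_PER_L / 10 <= 6
```

The last check works because 1 L of juice weighs close to 1 kg, so g/L divided by 10 approximates percent by weight.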
They were later introduced to Persia and then to Iraq and Egypt around 700 AD. The lemon was first recorded in literature in a 10th-century Arabic treatise on farming; it was used as an ornamental plant in early Islamic gardens. It was distributed widely throughout the Arab world and the Mediterranean region in the Arab Agricultural Revolution between 1000 and 1150. A section on lemon and lime tree cultivation in Andalusia, Spain, was included in Ibn al-'Awwam's 12th-century agricultural work, the Kitāb al-Filāḥa ("Book on Agriculture"). The first substantial cultivation of lemons in Europe began in Genoa in the middle of the 15th century. It was introduced to the Americas in 1493, when Christopher Columbus brought lemon seeds to Hispaniola on his voyages. Spanish conquest throughout the New World helped spread lemon seeds, part of the Columbian exchange of plants between the Old and New Worlds. It was mainly used as an ornamental plant and for medicine. In the 19th century, lemons were increasingly planted in Florida and California. In 1747, the English physician James Lind's experiments on seamen suffering from scurvy involved adding lemon juice to their diets, though vitamin C was not yet known as an important dietary ingredient.

Cultivation

Growing and pruning

Lemons need a minimum temperature of around 7 °C (45 °F), so they are not hardy year-round in temperate climates, but become hardier as they mature. Citrus trees require minimal pruning; overcrowded branches are trimmed, with the tallest branch cut back to encourage bushy growth. Throughout summer, pinching back the tips of the most vigorous growth assures more abundant canopy development. As mature plants may produce unwanted, fast-growing shoots (called "water shoots"), these are removed from the main branches at the bottom or middle of the plant. There is reputed merit in the tradition of urinating near a lemon tree.
In cultivation in the UK, the cultivars "Meyer" and "Variegata" have gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017). Production In 2022, world production of lemons (combined with limes for reporting) was 22 million tonnes led by India with 18% of the total. Mexico and China were major secondary producers (table). Varieties The 'Bonnie Brae' is oblong, smooth, thin-skinned, and seedless. These are mostly grown in San Diego County, US. The 'Eureka' grows year-round and abundantly. This is the common supermarket lemon, also known as "Four Seasons" (Quatre Saisons) because of its ability to produce fruit and flowers together throughout the year. This variety is also available as a plant for domestic customers. There is also a pink-fleshed Eureka lemon with a green and yellow variegated outer skin. The Lisbon lemon is very similar to the Eureka and is the other common supermarket lemon. It is smoother than the Eureka, has thinner skin, and has fewer or no seeds. It generally produces more juice than the Eureka. The 'Femminello St. Teresa', or 'Sorrento' originates in Italy. This fruit's zest is high in lemon oils. It is the variety traditionally used in the making of limoncello. The 'Yen Ben' is an Australasian cultivar. Uses Nutrition Lemon is a rich source of vitamin C, providing 64% of the Daily Value in a 100 g reference amount (table). Other essential nutrients are low in content. Culinary Lemon juice and rind are used in a wide variety of foods and drinks, the juice for its sour taste, from its content of 5–6% citric acid. The whole lemon is used to make marmalade, lemon curd and lemon liqueurs such as Limoncello. Lemon slices and lemon rind are used as a garnish for food and drinks. Lemon zest, the grated outer rind of the fruit, is used to add flavor to baked goods. The juice is used to make lemonade and some cocktails. It is used in marinades for fish, where its acid neutralizes amines in fish. 
In meat, the acid partially hydrolyzes tough collagen fibers, tenderizing it. In the United Kingdom, lemon juice is frequently added to pancakes eaten to celebrate Shrove Tuesday. Lemon juice is used as a short-term preservative on certain foods that tend to oxidize and turn brown after being sliced (enzymatic browning), such as apples, bananas, and avocados: its acidity suppresses oxidation by polyphenol oxidase enzymes. Lemon peel is used in the manufacture of pectin, a gelling agent and stabilizer in food and other products. In Mediterranean countries including Morocco, lemons are preserved in jars or barrels of salt. The salt penetrates the peel and rind, softening them, and curing them so that they last almost indefinitely. Lemon oil is extracted from oil-containing cells in the skin. A machine breaks up the cells and uses a water spray to flush off the oil. The oil–water mixture is then filtered and separated by centrifugation. The leaves of the lemon tree are used to make a tea and for preparing cooked meats and seafoods. Other uses Lemons were the primary commercial source of citric acid before the development of fermentation-based processes. Lemon oil is used in aromatherapy. Lemon oil aroma does not influence the human immune system, but may contribute to relaxation. An educational science experiment involves attaching electrodes to a lemon and using it as a battery to produce electricity. Although very low power, several lemon batteries can power a small digital watch. Lemon juice forms a simple invisible ink, developed by heat. Lemon juice is sometimes used to increase the blonde color of hair, acting as a natural highlight after the moistened hair is exposed to sunlight. This works because citric acid acts as bleach. Other citrus called 'lemons' Flat lemon, a mandarin hybrid. 
Meyer lemon, a cross between a citron and a mandarin/pomelo hybrid distinct from sour or sweet orange. Ponderosa lemon, more cold-sensitive than true lemons; the fruit are thick-skinned and very large. Genetic analysis showed it to be a complex hybrid of citron and pomelo. Rough lemon, a citron-mandarin cross, cold-hardy and often used as a citrus rootstock. Sweet lemons or sweet limes, a mixed group including the lumia (pear lemon), limetta, and Palestinian sweet lime. Among them is the Jaffa lemon, a pomelo-citron hybrid. Volkamer lemon, like the rough lemon, a citron-mandarin cross. In art and culture Lemons appear in paintings, pop art, and novels. A wall painting in the tomb of Nakht in 15th century BC Egypt depicts a woman in a festival, holding a lemon. In the 17th century, Giovanna Garzoni painted a Still Life with Bowl of Citrons, the fruits still attached to leafy flowering twigs, with a wasp on one of the fruits. The impressionist Édouard Manet depicted a lemon on a pewter plate. In modern art, Arshile Gorky painted Still Life with Lemons in the 1930s. In India, a lemon may be ritually encircled around a person in the belief that it repels negative energies. It is a common practice for Hindu owners of a new car to drive over four lemons, one under each wheel, crushing them during their first drive. This is believed to protect the driver from accidents. Hindu deities are sometimes depicted with lemons in their iconography, representing the attribute of wealth or abundance. In 20th century American self-improvement culture, Dale Carnegie advised readers "If You Have a Lemon, Make a Lemonade", meaning to make the best of what you have. In the 21st century, a defective machine, such as a car, is called a lemon.
https://en.wikipedia.org/wiki/Quartz%20clock
Quartz clock
Quartz clocks and quartz watches are timepieces that use an electronic oscillator regulated by a quartz crystal to keep time. This crystal oscillator creates a signal with very precise frequency, so that quartz clocks and watches are at least an order of magnitude more accurate than mechanical clocks. Generally, some form of digital logic counts the cycles of this signal and provides a numerical time display, usually in units of hours, minutes, and seconds. As the advent of solid-state digital electronics in the 1980s allowed them to be made more compact and inexpensive, quartz timekeepers became the world's most widely used timekeeping technology, used in most clocks and watches as well as computers and other appliances that keep time. Explanation Chemically, quartz is a specific form of a compound called silicon dioxide. Many materials can be formed into plates that will resonate. However, quartz is also a piezoelectric material: that is, when a quartz crystal is subject to mechanical stress, such as bending, it accumulates electrical charge across some planes. In a reverse effect, if charges are placed across the crystal plane, quartz crystals will bend. Since quartz can be directly driven (to flex) by an electric signal, no additional transducer is required to use it in a resonator. Similar crystals are used in low-end phonograph cartridges: The movement of the stylus (needle) flexes a quartz crystal, which produces a small voltage, which is amplified and played through speakers. Quartz microphones are still available, though not common. Quartz has a further advantage in that its size does not change much as temperature fluctuates. Fused quartz is often used for laboratory equipment that must not change shape along with the temperature. A quartz plate's resonance frequency, based on its size, will not significantly rise or fall. Similarly, since its resonator does not change shape, a quartz clock will remain relatively accurate as the temperature changes. 
In the early 20th century, radio engineers sought a precise, stable source of radio frequencies and began with steel resonators. However, when Walter Guyton Cady found in the early 1920s that quartz can resonate with less equipment and better temperature stability, steel resonators disappeared within a few years. Later, scientists at the National Institute of Standards and Technology (then the U.S. National Bureau of Standards) discovered that a crystal oscillator could be more accurate than a pendulum clock. The electronic circuit is an oscillator, an amplifier whose output passes through the quartz resonator. The resonator acts as an electronic filter, eliminating all but the single frequency of interest. The output of the resonator feeds back to the input of the amplifier, and the resonator assures that the oscillator runs at the exact frequency of interest. When the circuit is powered up, a single burst of shot noise (always present in electronic circuits) can cascade to bring the oscillator into oscillation at the desired frequency. If the amplifier were perfectly noise-free, the oscillator would not start. The frequency at which the crystal oscillates depends on its shape, size, and the crystal plane on which the quartz is cut. The positions at which electrodes are placed can slightly change the tuning as well. If the crystal is accurately shaped and positioned, it will oscillate at a desired frequency. In nearly all quartz clocks and watches, the frequency is 32,768 Hz, and the crystal is cut in a small tuning fork shape on a particular crystal plane. This frequency is a power of two (32,768 = 2^15), just high enough to exceed the human hearing range, yet low enough to keep electric energy consumption, cost and size at a modest level and to permit inexpensive counters to derive a 1-second pulse. The data line output from such a quartz resonator goes high and low 32,768 times a second.
This is fed into a flip-flop (which is essentially two transistors with a bit of cross-connection) which changes from low to high, or vice versa, whenever the line from the crystal goes from high to low. The output from that is fed into a second flip-flop, and so on through a chain of 15 flip-flops, each of which divides the frequency of its input signal by 2. The result is a 15-bit binary digital counter driven by the 32,768 Hz signal, which overflows once per second, creating a digital pulse once per second. The pulse-per-second output can be used to drive many kinds of clocks. In analog quartz clocks and wristwatches, the electric pulse-per-second output is nearly always transferred to a Lavet-type stepping motor that converts the electronic input pulses from the flip-flop counting unit into mechanical output that can be used to move hands. It is also possible for quartz clocks and watches to have their quartz crystal oscillate at a higher frequency than 32,768 (= 2^15) Hz (high frequency quartz movements) and/or generate digital pulses more than once per second, to drive a stepping motor powered second hand at a higher rate than once every second. However, the electric energy consumption (drain on the battery) rises, because higher oscillation frequencies and every activation of the stepping motor cost energy, so such small battery powered quartz watch movements are relatively rare. Some analog quartz clocks feature a sweep second hand moved by a non-stepped battery or mains powered electric motor, often resulting in reduced mechanical output noise. Mechanism In modern standard-quality quartz clocks, the quartz-crystal resonator or oscillator is cut in the shape of a small tuning fork (XY-cut), laser-trimmed or precision-lapped to vibrate at 32,768 Hz. This frequency is equal to 2^15 cycles per second.
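The divide-by-2 chain just described can be sketched in a few lines of code. The following Python sketch (an illustration only, not firmware from any real movement) models a 15-bit binary counter clocked at 32,768 Hz and shows that it overflows exactly once per second:

```python
# Sketch of the 15-stage divide-by-2 chain described above: a 15-bit
# counter clocked at 32,768 Hz wraps around exactly once per second.

CRYSTAL_HZ = 32768  # 2**15, the standard watch-crystal frequency

def pulses_per_second(crystal_hz: int = CRYSTAL_HZ, stages: int = 15) -> float:
    """Frequency (Hz) remaining after `stages` divide-by-2 flip-flops."""
    return crystal_hz / (2 ** stages)

# Simulate one second of crystal edges through a 15-bit counter and
# count the overflows (each overflow is one 1 Hz output pulse).
counter = 0
overflows = 0
for _ in range(CRYSTAL_HZ):
    counter = (counter + 1) % (2 ** 15)
    if counter == 0:  # counter wrapped: emit a pulse
        overflows += 1

print(pulses_per_second())  # 1.0
print(overflows)            # 1
```

In hardware the same division is performed by a chain of flip-flops rather than software, but the arithmetic is identical.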
A power of 2 is chosen so a simple chain of digital divide-by-2 stages can derive the 1 Hz signal needed to drive the watch's second hand. In most clocks, the resonator is in a small cylindrical or flat package, about long. The 32,768 Hz resonator has become so common due to a compromise between the large physical size of low-frequency crystals for watches and the larger current drain of high-frequency crystals, which reduces the life of the watch battery. The basic formula for calculating the fundamental frequency f of vibration of a cantilever as a function of its dimensions (quadratic cross-section) is f = (β₁²/2π) × (a/l²) × √(E/(12ρ)), where β₁ ≈ 1.875 (rounded) is the smallest positive solution of the equation cos(β)·cosh(β) = −1; l is the length of the cantilever; a is its thickness along the direction of motion; E is its Young's modulus; and ρ is its density. A quartz cantilever a few millimetres long and a few tenths of a millimetre thick thus has a fundamental frequency in the tens of kilohertz, near the desired value. The crystal is tuned to exactly 2^15 = 32,768 Hz, or runs at a slightly higher frequency with inhibition compensation (see below). Accuracy The relative stability of the quartz resonator and its driving circuit is much better than its absolute accuracy. Standard-quality resonators of this type are warranted to have a long-term accuracy of about six parts per million (0.0006%): that is, a typical quartz clock or wristwatch will gain or lose 15 seconds per 30 days (within a normal temperature range) or less than a half second of clock drift per day when worn near the body. Temperature and frequency variation Though quartz has a very low coefficient of thermal expansion, temperature changes are the major cause of frequency variation in crystal oscillators. The most obvious way of reducing the effect of temperature on the oscillation rate is to keep the crystal at a constant temperature. For laboratory-grade oscillators, an oven-controlled crystal oscillator is used, in which the crystal is kept in a very small oven that is held at a constant temperature.
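As an illustration of the cantilever formula above, the following Python sketch evaluates it using assumed round values for quartz (Young's modulus about 1×10^11 Pa, density about 2650 kg/m³) and plausible tuning-fork dimensions (3 mm long, 0.3 mm thick); these numbers are assumptions for the example, not manufacturer data:

```python
import math

# Sketch of the cantilever formula above. The material constants and
# dimensions used here are assumed round values, not manufacturer data.
BETA1 = 1.875104  # smallest positive root of cos(x) * cosh(x) = -1

def cantilever_frequency(l: float, a: float, E: float, rho: float) -> float:
    """Fundamental frequency (Hz) of a cantilever of length l and
    thickness a (quadratic cross-section), Young's modulus E, density rho."""
    return (BETA1 ** 2 / (2 * math.pi)) * (a / l ** 2) * math.sqrt(E / (12 * rho))

# Assumed inputs: E ~ 1e11 Pa, rho ~ 2650 kg/m^3, l = 3 mm, a = 0.3 mm
f = cantilever_frequency(l=3e-3, a=0.3e-3, E=1e11, rho=2650)
print(round(f))  # roughly 33,000 Hz, near the 32,768 Hz target
```

With these inputs the formula lands in the neighborhood of the 32,768 Hz target; the final tuning is done by trimming the crystal itself.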
This method is, however, impractical for consumer quartz clock and wristwatch movements. The crystal planes and tuning of consumer-grade clock crystal resonators used in wristwatches are designed for minimal temperature sensitivity to frequency and operate best over a narrow temperature range. The exact temperature where the crystal oscillates at its fastest is called the "turnover point" and can be chosen within limits. A well-chosen turnover point can minimize the negative effect of temperature-induced frequency drift, and hence improve the practical timekeeping accuracy of a consumer-grade crystal oscillator without adding significant cost. A temperature above or below the turnover point slows the oscillation rate parabolically, by about −0.035 ppm/°C² of deviation. So a ±1 °C temperature deviation will account for a (±1)² × −0.035 ppm = −0.035 ppm rate change, which is equivalent to −1.1 seconds per year. If, instead, the crystal experiences a ±10 °C temperature deviation, then the rate change will be (±10)² × −0.035 ppm = −3.5 ppm, which is equivalent to −110 seconds per year. Quartz watch manufacturers use a simplified version of the oven-controlled crystal oscillator method by recommending that their watches be worn regularly to ensure the best time-keeping performance. Regular wearing of a quartz watch significantly reduces the magnitude of environmental temperature swings, since a correctly designed watch case forms an expedient crystal oven that uses the stable temperature of the human body to keep the crystal oscillator in its most accurate temperature range.
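The parabolic temperature error worked out above can be captured in a short sketch; the −0.035 ppm/°C² coefficient is the typical figure for tuning-fork watch crystals, used here as a representative value:

```python
# Sketch of the parabolic temperature error described above:
# frequency offset (ppm) = -0.035 * (deviation from turnover point in C)^2

SECONDS_PER_YEAR = 365.25 * 24 * 3600
TEMPCO = -0.035  # ppm per degC^2, typical tuning-fork watch crystal

def drift_seconds_per_year(delta_t_c: float) -> float:
    """Annual timekeeping drift for a constant deviation from the turnover point."""
    offset_ppm = TEMPCO * delta_t_c ** 2
    return offset_ppm * 1e-6 * SECONDS_PER_YEAR

print(round(drift_seconds_per_year(1), 1))  # about -1.1 s/yr
print(round(drift_seconds_per_year(10)))    # about -110 s/yr
```

This reproduces the −1.1 and −110 seconds-per-year figures quoted above for ±1 °C and ±10 °C deviations.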
Clocks regulated by service centers after leaving the factory, with the help of a precision timer and adjustment terminal, also become more accurate as their quartz crystal ages, since somewhat unpredictable aging effects can be appropriately compensated. Autonomous high-accuracy quartz movements, even in wristwatches, can be accurate to within ±1 to ±25 seconds per year and can be certified and used as marine chronometers to determine longitude (the East–West position of a point on the Earth's surface) by means of celestial navigation. When time at the prime meridian (or another starting point) is accurately enough known, celestial navigation can determine longitude, and the more accurately time is known the more accurate the longitude determination. At latitude 45° one second of time is equivalent in longitude to , or one-tenth of a second means . Trimmer condenser Regardless of the precision of the oscillator, a quartz analog or digital watch movement can have a trimmer condenser. They are generally found in older, vintage quartz watches – even many of the cheaper ones. A trimmer condenser or variable capacitor changes the frequency coming from the quartz crystal oscillator when its capacitance is changed. The frequency dividers remain unchanged, so the trimmer condenser can be used to adjust the electric pulse-per-second (or other desired time interval) output. The trimmer condenser looks like a small screw that has been wired into the circuit board. Typically, turning the screw clockwise speeds the movement up, and counterclockwise slows it down, at about 1 second per day per turn of the screw. Few newer quartz movement designs feature a mechanical trimmer condenser; they generally rely on digital correction methods instead. Thermal compensation It is possible for a computerized high-accuracy quartz movement to measure its temperature and adjust for that.
For this, the movement autonomously measures the crystal's temperature a few hundred to a few thousand times a day and compensates for it with a small calculated offset. Both analog and digital temperature compensation have been used in high-end quartz watches. In more expensive high-end quartz watches, thermal compensation can be implemented by varying the number of cycles to inhibit depending on the output from a temperature sensor. The COSC average daily rate standard for officially certified COSC quartz chronometers is ±25.55 seconds per year. To acquire the COSC chronometer label, a quartz instrument must benefit from thermo-compensation and rigorous encapsulation. Each quartz chronometer is tested for 13 days, in one position, at 3 different temperatures and 4 different relative humidity levels. Only approximately 0.2% of Swiss-made quartz watches are chronometer-certified by the COSC. These COSC chronometer-certified movements can be used as marine chronometers to determine longitude by means of celestial navigation. Additional accuracy enhancing methods As of 2019, an autonomous light-powered high-accuracy quartz watch movement became commercially available which is claimed to be accurate to ±1 second per year. Key elements of the high claimed accuracy are an unusually shaped (for a watch) AT-cut quartz crystal operated at 2^23 (8,388,608) Hz, thermal compensation, and hand selection of pre-aged crystals. AT-cut variations allow for greater temperature tolerances; they also exhibit reduced deviations caused by changes in gravitational orientation. As a result, errors caused by spatial orientation and positioning become less of a concern. Inhibition compensation Many inexpensive quartz clocks and watches use a rating and compensation technique known as inhibition compensation. The crystal is deliberately made to run somewhat faster.
After manufacturing, each module is calibrated against a precision clock at the factory and adjusted to keep accurate time by programming the digital logic to skip a small number of crystal cycles at regular intervals, such as 10 seconds or 1 minute. For a typical quartz movement, this allows programmed adjustments in increments of 7.91 seconds per 30 days on a 10-second measurement gate, or in increments of 1.32 seconds per 30 days on a 60-second measurement gate. The advantage of this method is that using digital programming to store the number of pulses to suppress in a non-volatile memory register on the chip is less expensive than the older technique of trimming the quartz tuning-fork frequency. The inhibition-compensation logic of some quartz movements can be regulated by service centers with the help of a professional precision timer and adjustment terminal after leaving the factory, though many inexpensive quartz watch movements do not offer this functionality. External time signal correction If a quartz movement is daily "rated" by measuring its timekeeping characteristics against a radio time signal or satellite time signal, to determine how much time the movement gained or lost between time signal receptions, and adjustments are made to the circuitry to "regulate" the timekeeping, then the corrected time will be accurate within ±1 second per year. This is more than adequate to perform longitude determination by celestial navigation. These quartz movements become less accurate over time when no external time signal has been successfully received and internally processed to set or synchronize their time automatically, and without such external compensation generally fall back on autonomous timekeeping.
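The inhibition-compensation granularity quoted above follows directly from the crystal frequency and the length of the measurement gate: skipping one crystal cycle per gate changes the rate by 1/(32,768 × gate) of the elapsed time. A minimal sketch (illustrative arithmetic only):

```python
# Sketch of the inhibition-compensation arithmetic described above.
# Skipping one 32,768 Hz crystal cycle per measurement gate slows the
# clock by 1 / (32768 * gate_s) of the elapsed time.

CRYSTAL_HZ = 32768
SECONDS_PER_30_DAYS = 30 * 24 * 3600  # 2,592,000

def adjustment_step_s(gate_s: int) -> float:
    """Seconds per 30 days adjusted by inhibiting one cycle per gate."""
    return SECONDS_PER_30_DAYS / (CRYSTAL_HZ * gate_s)

print(round(adjustment_step_s(10), 2))  # 7.91 s per 30 days per inhibited cycle
print(round(adjustment_step_s(60), 2))  # 1.32 s per 30 days per inhibited cycle
```

This reproduces the 7.91 and 1.32 second figures for the 10-second and 60-second gates.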
The United States National Institute of Standards and Technology (NIST) has published guidelines recommending that these movements keep the time between synchronizations to within ±0.5 seconds to keep time correct when rounded to the nearest second. Some of these movements can keep the time between synchronizations to within ±0.2 seconds by synchronizing more than once over the course of a day. Quartz crystal aging Clock quartz crystals are manufactured in an ultra-clean environment, then protected by an inert ultra-high vacuum in hermetically sealed containers. Despite these measures, the frequency of a quartz crystal can slowly change over time. The effect of aging is much smaller than the effect of frequency variation caused by temperature changes, however, and manufacturers can estimate its effects. Generally, the aging effect eventually decreases a given crystal's frequency, but it can also increase a given crystal's frequency. Factors that can cause a small frequency drift over time are stress relief in the mounting structure, loss of hermetic seal, contamination of the crystal lattice, moisture absorption, changes in or on the quartz crystal, severe shock and vibration, and exposure to very high temperatures. Crystal aging tends to be logarithmic, meaning the maximum rate of change of frequency occurs immediately after manufacture and decays thereafter. Most of the aging will occur within the first year of the crystal's service life. Crystals do eventually stop aging (asymptotically), but it can take many years. Movement manufacturers can pre-age crystals before assembling them into clock movements. To promote accelerated aging, the crystals are exposed to high temperatures. If a crystal is pre-aged, the manufacturer can measure its aging rates (strictly, the coefficients in the aging formula) and have a microcontroller calculate the corrections over time. The initial calibration of a movement will stay accurate longer if the crystals are pre-aged.
The advantage would end after subsequent regulation, which resets any cumulative aging error to zero. A reason more expensive movements tend to be more accurate is that the crystals are pre-aged longer and selected for better aging performance. Sometimes, pre-aged crystals are hand-selected for movement performance. Chronometers Quartz chronometers designed as time standards often include a crystal oven, to keep the crystal at a constant temperature. Some self-rate and include "crystal farms", so that the clock can take the average of a set of time measurements. External magnetic interference The Lavet-type stepping motors used in analog quartz clock movements, which are themselves driven by a magnetic field generated by the coil, can be affected by nearby external magnetic sources, and this may impact the rotor sprocket output. As a result, the mechanical output of analog quartz clock movements may temporarily stop, advance, or reverse, negatively impacting correct timekeeping. As the strength of a magnetic field almost always decreases with distance, moving an analog quartz clock movement away from an interfering external magnetic source normally results in a resumption of correct mechanical output. Some quartz wristwatch testers feature a magnetic field function to test if the stepping motor can provide mechanical output and let the gear train and hands deliberately spin overly fast to clear minor fouling. In general, magnetism encountered in daily life has no effect on digital quartz clock movements since there are no stepping motors in these movements. Powerful magnetic sources like MRI magnets can damage quartz clock movements. History The piezoelectric properties of quartz were discovered by Jacques and Pierre Curie in 1880. The vacuum tube oscillator was invented in 1912.
An electrical oscillator was first used to sustain the motion of a tuning fork by the British physicist William Eccles in 1919; his achievement removed much of the damping associated with mechanical devices and maximised the stability of the vibration's frequency. The first quartz crystal oscillator was built by Walter G. Cady in 1921. In 1923, D. W. Dye at the National Physical Laboratory in the UK and Warren Marrison at Bell Telephone Laboratories produced sequences of precision time signals with quartz oscillators. In October 1927 the first quartz clock was described and built by Joseph W. Horton and Warren A. Marrison at Bell Telephone Laboratories. The 1927 clock used a block of crystal, stimulated by electricity, to produce pulses at a frequency of 50,000 cycles per second. A submultiple controlled frequency generator then divided this down to a usable, regular pulse that drove a synchronous motor. The next 3 decades saw the development of quartz clocks as precision time standards in laboratory settings; the bulky delicate counting electronics, built with vacuum tubes, limited their use elsewhere. In 1932 a quartz clock was able to measure tiny variations in the rotation rate of the Earth over periods as short as a few weeks. In Japan in 1932, Issac Koga developed a crystal cut that gave an oscillation frequency with greatly reduced temperature dependence. The National Bureau of Standards (now NIST) based the time standard of the US on quartz clocks between the 1930s and the 1960s, after which it transitioned to atomic clocks, which rely on the same mechanism that the International System of Units (SI) uses to define the second. In 1953, Longines deployed the first quartz movement. The wider use of quartz clock technology had to await the development of cheap semiconductor digital logic in the 1960s. The revised 1929 14th edition of Encyclopædia Britannica stated that quartz clocks would probably never be affordable enough to be used domestically. 
Their inherent physical and chemical stability and accuracy have resulted in the subsequent proliferation, and since the 1940s they have formed the basis for precision measurements of time and frequency worldwide. Development of quartz clocks for the consumer market took place during the 1960s. One of the first successes was a portable quartz clock called the Seiko Crystal Chronometer QC-951. This portable clock was used as a backup timer for marathon events in the 1964 Summer Olympics in Tokyo. In 1966, prototypes of the world's first quartz pocket watch were unveiled by Seiko and Longines at that year's Neuchâtel Observatory competition. In 1967, both the CEH and Seiko presented prototypes of quartz wristwatches to the Neuchâtel Observatory competition. The world's first prototype analog quartz wristwatches were revealed in 1967: the Beta 1 revealed by the Centre Electronique Horloger (CEH) in Neuchâtel, Switzerland, and the prototype of the Astron revealed by Seiko in Japan (Seiko had been working on quartz clocks since 1958). The first Swiss quartz watch – the Ebauches SA Beta 21 – arrived at the 1970 Basel Fair. In December 1969, Seiko produced the world's first commercial quartz wristwatch, the Seiko Quartz-Astron 35SQ, which is now honored as an IEEE Milestone. The Astron had a quartz oscillator with a frequency of 8,192 Hz and was accurate to 0.2 seconds per day, 5 seconds per month, or 1 minute per year. The Astron was released less than a year prior to the introduction of the Swiss Beta 21, which was developed by 16 Swiss watch manufacturers and used by Rolex, Patek and Omega in their electroquartz models. These first quartz watches were quite expensive and marketed as luxury watches. The inherent accuracy and eventually achieved low cost of production have resulted in the proliferation of quartz clocks and watches since that time.
Girard-Perregaux introduced the Caliber 350 in 1971, with an advertised accuracy within about 0.164 seconds per day, which had a quartz oscillator with a frequency of 32,768 Hz, which was faster than previous quartz watch movements and has since become the oscillation frequency used by most quartz clocks. The introduction during the 1970s of metal–oxide–semiconductor (MOS) integrated circuits allowed a 12-month battery life from a single coin cell when driving either a mechanical Lavet-type stepping motor, a smooth sweeping non-stepping motor, or a liquid-crystal display (in an LCD digital watch). Light-emitting diode (LED) displays for watches have become rare due to their comparatively high battery consumption. These innovations made the technology suitable for mass market adoption. In laboratory settings atomic clocks had replaced quartz clocks as the basis for precision measurements of time and frequency, resulting in International Atomic Time. By the 1980s, quartz technology had taken over applications such as kitchen timers, alarm clocks, bank vault time locks, and time fuzes on munitions, from earlier mechanical balance wheel movements, an upheaval known in watchmaking as the quartz crisis. Quartz timepieces have dominated the wristwatch and domestic clock market since the 1980s. Because of the high Q factor and low-temperature coefficient of the quartz crystal, they are more accurate than the best mechanical timepieces, and the elimination of all moving parts and significantly lower sensitivity to disturbances from external causes like magnetism and shock makes them more rugged and eliminates the need for periodic maintenance. Standard 'Watch' or Real-time clock (RTC) crystal units have become cheap mass-produced items on the electronic parts market. With the proliferation of the Internet, consumer timekeeping devices (e.g. smartphones and smartwatches) may now automatically synchronize their internal clocks via automated protocols (e.g. 
network time protocol) to atomic clock time servers.
https://en.wikipedia.org/wiki/DOS
DOS
DOS is a family of disk-based operating systems for IBM PC compatible computers. The DOS family primarily consists of IBM PC DOS and a rebranded version, Microsoft's MS-DOS, both of which were introduced in 1981. Later compatible systems from other manufacturers include DR-DOS (1988), ROM-DOS (1989), PTS-DOS (1993), and FreeDOS (1994). MS-DOS dominated the IBM PC compatible market between 1981 and 1995. Although the name has come to be identified specifically with this particular family of operating systems, DOS is a platform-independent acronym for disk operating system, whose use predates the IBM PC. Dozens of other operating systems also use the acronym, beginning with the mainframe DOS/360 from 1966. Others include Apple DOS, Apple ProDOS, Atari DOS, Commodore DOS, TRSDOS, and AmigaDOS. History Origins IBM PC DOS (and the separately sold MS-DOS) and its predecessor, 86-DOS, ran on Intel 8086 16-bit processors. It was developed to be similar to Digital Research's CP/M—the dominant disk operating system for 8-bit Intel 8080 and Zilog Z80 microcomputers—in order to simplify porting CP/M applications to MS-DOS. When IBM introduced the IBM PC, built with the Intel 8088 microprocessor, they needed an operating system. Chairman John Opel had a conversation with fellow United Way National Board Executive Committee member Mary Maxwell Gates, who referred Opel to her son Bill Gates for help with an 8088-compatible build of CP/M. IBM was then referred to Digital Research, and a meeting was set up. However, initial negotiations for the use of CP/M broke down: Digital Research wished to sell CP/M on a royalty basis, while IBM sought a single license and wanted to change the name to "PC DOS". Digital Research founder Gary Kildall refused, and IBM withdrew. IBM again approached Bill Gates. Gates in turn approached Seattle Computer Products.
There, programmer Tim Paterson had developed a variant of CP/M-80, intended as an internal product for testing SCP's new 16-bit Intel 8086 CPU card for the S-100 bus. The system was initially named QDOS (Quick and Dirty Operating System), before being made commercially available as 86-DOS. Microsoft purchased 86-DOS, allegedly for . This became Microsoft Disk Operating System, MS-DOS, introduced in 1981. Within a year Microsoft licensed MS-DOS to over 70 other companies, which supplied the operating system for their own hardware, sometimes under their own names. Microsoft later required the use of the MS-DOS name, with the exception of the IBM variant. IBM continued to develop their version, PC DOS, for the IBM PC. Digital Research became aware that an operating system similar to CP/M was being sold by IBM (under the same name that IBM insisted upon for CP/M), and threatened legal action. IBM responded by offering an agreement: they would give PC consumers a choice of PC DOS or CP/M-86, Kildall's 8086 version. Sold side by side, CP/M cost more than PC DOS, and sales were low. CP/M faded, with MS-DOS and PC DOS becoming the dominant operating system for PCs and PC compatibles. Microsoft originally sold MS-DOS only to original equipment manufacturers (OEMs). One major reason for this was that not all early PCs were 100% IBM PC compatible. DOS was structured such that there was a separation between the system-specific device driver code (IO.SYS) and the DOS kernel (MSDOS.SYS). Microsoft provided an OEM Adaptation Kit (OAK) which allowed OEMs to customize the device driver code to their particular system. By the early 1990s, most PCs adhered to IBM PC standards, so Microsoft began selling a retail version of MS-DOS, starting with MS-DOS 5.0. In the mid-1980s, Microsoft developed a multitasking version of DOS. This version of DOS is generally referred to as "European MS-DOS 4" because it was developed for ICL and licensed to several European companies.
This version of DOS supports preemptive multitasking, shared memory, device helper services and New Executable ("NE") format executables. None of these features were used in later versions of DOS, but they were used to form the basis of the OS/2 1.0 kernel. This version of DOS is distinct from the widely released PC DOS 4.0, which was developed by IBM and based upon DOS 3.3. Digital Research attempted to regain the market lost from CP/M-86, initially with Concurrent DOS, FlexOS and DOS Plus (all compatible with both MS-DOS and CP/M-86 software), later with Multiuser DOS (compatible with both MS-DOS and CP/M-86 software) and DR DOS (compatible with MS-DOS software). Digital Research was bought by Novell, and DR DOS became PalmDOS and Novell DOS; later, it was part of Caldera (under the names OpenDOS and DR-DOS 7.02/7.03), Lineo, and DeviceLogics. Gordon Letwin wrote in 1995 that "DOS was, when we first wrote it, a one-time throw-away product intended to keep IBM happy so that they'd buy our languages." Microsoft expected that it would be an interim solution before the introduction of Xenix. The company planned to improve MS-DOS over time, so it would be almost indistinguishable from single-user Xenix, or XEDOS, which would also run on the Motorola 68000, Zilog Z-8000, and LSI-11; they would be upwardly compatible with Xenix, which BYTE in 1983 described as "the multi-user MS-DOS of the future". IBM, however, did not want to replace DOS. After AT&T began selling Unix, Microsoft and IBM began developing OS/2 as an alternative. The two companies later had a series of disagreements over two successor operating systems to DOS, OS/2 and Windows. They split development of their DOS systems as a result. The last retail version of MS-DOS was MS-DOS 6.22; after this, MS-DOS became part of Windows 95, 98 and Me. The last retail version of PC DOS was PC DOS 2000 (also called PC DOS 7 revision 1), though IBM did later develop PC DOS 7.10 for OEMs and internal use.
The FreeDOS project began on 26 June 1994, when Microsoft announced it would no longer sell or support MS-DOS. Jim Hall then posted a manifesto proposing the development of an open-source replacement. Within a few weeks, other programmers including Pat Villani and Tim Norman joined the project. A kernel, the COMMAND.COM command line interpreter (shell), and core utilities were created by pooling code they had written or found available. There were several official pre-release distributions of FreeDOS before the FreeDOS 1.0 distribution was released on 3 September 2006. Made available under the GNU General Public License (GPL), FreeDOS does not require license fees or royalties.

Decline

Early versions of Microsoft Windows ran on MS-DOS. By the early 1990s, the Windows graphical shell saw heavy use on new DOS systems. In 1995, Windows 95 was released as a standalone operating system that did not require a separate DOS license. Windows 95 (and Windows 98 and ME, which followed it) took over as the default OS kernel, though the MS-DOS component remained for compatibility. With Windows 95 and 98, but not ME, the MS-DOS component could be run without starting Windows. With DOS no longer required to use Windows, the majority of users stopped using it directly.

Continued use

Available compatible systems include FreeDOS, ROM-DOS, PTS-DOS, RxDOS and REAL/32. Some computer manufacturers, including Dell and HP, sell computers with FreeDOS as an OEM operating system. A few developers and computer engineers still use it because it is close to the hardware.

Embedded systems

DOS's structure of accessing hardware directly makes it well suited for use in embedded devices. The final versions of DR-DOS are still aimed at this market. ROM-DOS is used as the operating system for the Canon PowerShot Pro 70.

Emulation

On Linux, it is possible to run DOSEMU, a Linux-native virtual machine for running DOS programs at near native speed.
There are a number of other emulators for running DOS on various versions of Unix and Microsoft Windows, such as DOSBox. DOSBox is designed for legacy gaming (e.g. King's Quest, Doom) on modern operating systems. DOSBox includes its own implementation of DOS which is strongly tied to the emulator and cannot run on real hardware, but it can also boot MS-DOS, FreeDOS, or other DOS operating systems if needed.

Design

MS-DOS and IBM PC DOS related operating systems are commonly associated with machines using the Intel x86 or compatible CPUs, mainly IBM PC compatibles. Machine-dependent versions of MS-DOS were produced for many non-IBM-compatible x86-based machines, with variations ranging from a relabelling of the Microsoft distribution under the manufacturer's name to versions specifically designed to work with non-IBM-PC-compatible hardware. As long as application programs used DOS APIs instead of direct hardware access, they could run on both IBM-PC-compatible and incompatible machines. The original FreeDOS kernel, DOS-C, was derived from DOS/NT for the Motorola 68000 series of CPUs in the early 1990s. While these systems loosely resembled the DOS architecture, applications were not binary compatible due to the incompatible instruction sets of these non-x86 CPUs. However, applications written in high-level languages could be ported easily. DOS is a single-user, single-tasking operating system with basic kernel functions that are non-reentrant: only one program at a time can use them, and DOS itself has no functionality to allow more than one program to execute at a time. The DOS kernel provides various functions for programs (an application program interface), like character I/O, file management, memory management, program loading and termination. DOS provides the ability for shell scripting via batch files (with the filename extension .BAT). Each line of a batch file is interpreted as a program to run.
Batch files can also make use of internal commands, such as GOTO and conditional statements. The operating system offers an application programming interface that allows development of character-based applications, but not for accessing most of the hardware, such as graphics cards, printers, or mice. This required programmers to access the hardware directly, usually resulting in each application having its own set of device drivers for each hardware peripheral. Hardware manufacturers would release specifications to ensure device drivers for popular applications were available.

Boot sequence

The bootstrap loader on PC-compatible computers, the master boot record, is located beginning at the boot sector, the first sector on the first track (track zero), of the boot disk. The ROM BIOS will load this sector into memory at address 0000h:7C00h, and typically check for the signature bytes 55h AAh at offset +1FEh. If the sector is not considered to be valid, the ROM BIOS will try the next physical disk in the row, otherwise it will jump to the load address with certain registers set up. If the loaded boot sector happens to be a Master Boot Record (MBR), as found on partitioned media, it will relocate itself to 0000h:0600h in memory, otherwise this step is skipped. The MBR code will scan the partition table, which is located within this sector, for an active partition (modern MBRs check if bit 7 of the entry's boot indicator byte is set, whereas old MBRs simply check for a value of 80h), and, if found, load the first sector of the corresponding partition, which holds the Volume Boot Record (VBR) of that volume, into memory at 0000h:7C00h in a similar fashion as if it had been loaded by the ROM BIOS itself. The MBR will then pass execution to the loaded portion with certain registers set up. The sector content loaded at 0000h:7C00h constitutes a VBR now. VBRs are operating system specific and cannot be exchanged between different DOS versions in general, as the exact behaviour differs between different DOS versions.
In very old versions of DOS such as DOS 1.x, the VBR would load the whole IO.SYS/IBMBIO.COM file into memory. For this to work, these sectors had to be stored in consecutive order on disk by SYS. In later versions, it would locate and store the contents of the first two entries in the root directory at 0000h:0500h, and if they happen to reflect the correct boot files as recorded in the VBR, the VBR would load the first 3 consecutive sectors of the IO.SYS/IBMBIO.COM file into memory at 0070h:0000h. The VBR also has to take care to preserve the contents of the Disk Parameter Table (DPT). Finally, it passes control to the loaded portion by jumping to its entry point with certain registers set up (with considerable differences between different DOS versions). In later DOS versions, where the VBR has loaded only the first 3 sectors of the IO.SYS/IBMBIO.COM file into memory, the loaded portion contains another boot loader, which will then load the remainder of itself into memory, using the root directory information stored at 0000h:0500h. For most versions, the file contents still need to be stored in consecutive order on disk. In older versions of DOS, which were still loaded as a whole, this step is skipped. The DOS system initialization code will initialize its built-in device drivers and then load the DOS kernel, located in MSDOS.SYS on MS-DOS systems, into memory as well. In Windows 9x, the DOS system initialization code and built-in device drivers and the DOS kernel are combined into a single IO.SYS file while MSDOS.SYS is used as a text configuration file. The CONFIG.SYS file is then read to parse configuration parameters. The SHELL directive specifies the location of the shell, which defaults to COMMAND.COM. The shell is loaded and executed. The startup batch file AUTOEXEC.BAT is then run by the shell. The DOS system files loaded by the boot sector must be contiguous and be the first two directory entries. As such, removing or adding these files is likely to render the media unbootable.
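The signature check and active-partition scan performed during boot can be illustrated with a short Python sketch. This is a simplified model of the checks described above, not actual firmware or MBR code; field offsets follow the standard MBR layout (partition table at +1BEh, four 16-byte entries, signature at +1FEh).

```python
def find_active_partition(sector: bytes):
    """Model of the MBR checks described above.

    Returns the index (0-3) of the active partition entry, or None
    if the signature is invalid or no entry is marked active.
    """
    if len(sector) != 512:
        raise ValueError("boot sector must be 512 bytes")
    # ROM BIOS validity check: signature bytes 55h AAh at offset +1FEh.
    if sector[0x1FE] != 0x55 or sector[0x1FF] != 0xAA:
        return None  # not bootable; BIOS would try the next disk
    for i in range(4):
        entry = sector[0x1BE + 16 * i : 0x1BE + 16 * (i + 1)]
        # Modern check: bit 7 of the boot indicator byte set
        # (old MBRs compared the whole byte against 80h).
        if entry[0] & 0x80:
            return i
    return None

# Build a fake MBR: second entry marked active, valid signature.
mbr = bytearray(512)
mbr[0x1BE + 16] = 0x80           # boot indicator of entry 1
mbr[0x1FE:0x200] = b"\x55\xAA"   # boot signature
print(find_active_partition(bytes(mbr)))  # → 1
```

A sector without the 55h AAh signature is rejected outright, mirroring the BIOS moving on to the next physical disk.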
It is, however, possible to replace the shell at will, a method that can be used to start the execution of dedicated applications faster. This limitation does not apply to any version of DR DOS, where the system files can be located anywhere in the root directory and do not need to be contiguous. Therefore, system files can simply be copied to a disk, provided the boot sector is already DR DOS compatible. In PC DOS and DR DOS 5.0 and above, the DOS system files are named IBMBIO.COM instead of IO.SYS and IBMDOS.COM instead of MSDOS.SYS. Older versions of DR DOS used DRBIOS.SYS and DRBDOS.SYS instead. Starting with MS-DOS 7.0 the binary system files IO.SYS and MSDOS.SYS were combined into a single file IO.SYS, whilst MSDOS.SYS became a configuration file similar to CONFIG.SYS and AUTOEXEC.BAT. If the MSDOS.SYS BootGUI directive is set to 0, the boot process will stop with the command processor (typically COMMAND.COM) loaded, instead of executing WIN.COM automatically.

File system

DOS uses a filesystem which supports 8.3 filenames: 8 characters for the filename and 3 characters for the extension. Starting with DOS 2.0, hierarchical directories are supported. Each directory name is also in 8.3 format, but the maximum directory path length is 64 characters due to the internal current directory structure (CDS) tables that DOS maintains. Including the drive name, the maximum length of a fully qualified filename that DOS supports is 80 characters using the format drive:\path\filename.ext followed by a null byte. DOS uses the File Allocation Table (FAT) filesystem. This was originally FAT12, which supported up to 4078 clusters per drive. DOS 3.0 added support for FAT16, which used 16-bit allocation entries and supported up to 65518 clusters per drive. Compaq MS-DOS 3.31 added support for FAT16B, which removed the 32‑MiB drive limit and could support up to 512 MiB.
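The 8.3 naming rule above is easy to state as code. The following is a simplified sketch (a hypothetical helper, not DOS's actual validation, which also forbids certain characters): a name is split at the first dot, the base part may be 1 to 8 characters, and the extension at most 3.

```python
def is_valid_83(name: str) -> bool:
    """Check a name against the 8.3 length limits described above.

    Simplified: only the lengths and single-dot structure are
    checked, not the full set of characters DOS forbids.
    """
    base, _, ext = name.partition(".")
    return (
        1 <= len(base) <= 8
        and len(ext) <= 3
        and "." not in ext  # only one dot allowed
    )

print(is_valid_83("AUTOEXEC.BAT"))      # → True
print(is_valid_83("IO.SYS"))            # → True
print(is_valid_83("LONGFILENAME.TXT"))  # → False (base > 8 chars)
```

Names that fit this pattern occupy exactly one 32-byte directory entry, which is why the limit is so rigid.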
Finally, MS-DOS 7.1 (the DOS component of Windows 9x) added support for FAT32, which used 32-bit allocation entries and could support hard drives up to 137 GiB and beyond. Starting with DOS 3.1, file redirector support was added to DOS. This was initially used to support networking but was later used to support CD-ROM drives with MSCDEX. IBM PC DOS 4.0 also had preliminary installable file system (IFS) support, but this was unused and removed in DOS 5.0. DOS also supported block devices ("Disk Drive" devices) loaded from CONFIG.SYS that could be used under the DOS file system to support network devices.

Drive naming scheme

In DOS, drives are referred to by identifying letters. Standard practice is to reserve "A" and "B" for floppy drives. On systems with only one floppy drive DOS assigns both letters to the drive, prompting the user to swap disks as programs alternate access between them. This facilitates copying from floppy to floppy or having a program run from one floppy while accessing its data on another. Hard drives were originally assigned the letters "C" and "D". DOS could only support one active partition per drive. As support for more hard drives became available, this developed into first assigning a drive letter to each drive's active primary partition, then making a second pass over the drives to allocate letters to logical drives in the extended partition, then a third pass to give any other non-active primary partitions their names (where such additional partitions existed and contained a DOS-supported file system). Lastly, DOS allocates letters for optical disc drives, RAM disks, and other hardware. Letter assignments usually occur in the order the drivers are loaded, but the drivers can instruct DOS to assign a different letter; drivers for network drives, for example, typically assign letters nearer to the end of the alphabet.
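The multi-pass assignment order can be sketched as follows. This is a simplified model under stated assumptions (two floppies reserved as A: and B:, hard-drive letters starting at C:, drives described by hypothetical dictionaries), not DOS's internal code.

```python
from string import ascii_uppercase

def assign_drive_letters(disks):
    """Sketch of the pass order described above.

    `disks` is a list of dicts, one per physical hard drive, e.g.
    {"active_primary": True, "logicals": 2, "other_primaries": 1}.
    A: and B: are reserved for floppies, so letters start at C:.
    """
    letters = iter(ascii_uppercase[2:])  # start at 'C'
    assignment = {}
    # Pass 1: the active primary partition of each drive.
    for i, d in enumerate(disks):
        if d.get("active_primary"):
            assignment[f"disk{i}:primary"] = next(letters)
    # Pass 2: logical drives in extended partitions.
    for i, d in enumerate(disks):
        for j in range(d.get("logicals", 0)):
            assignment[f"disk{i}:logical{j}"] = next(letters)
    # Pass 3: remaining non-active primary partitions.
    for i, d in enumerate(disks):
        for j in range(d.get("other_primaries", 0)):
            assignment[f"disk{i}:other{j}"] = next(letters)
    return assignment

drives = assign_drive_letters([
    {"active_primary": True, "logicals": 2},
    {"active_primary": True},
])
print(drives)
# → {'disk0:primary': 'C', 'disk1:primary': 'D',
#    'disk0:logical0': 'E', 'disk0:logical1': 'F'}
```

Note how the second disk's primary partition takes D: ahead of the first disk's logical drives; this pass order is exactly what makes letter assignments shift when hardware is added, as described next.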
Because DOS applications use these drive letters directly (unlike the /dev directory in Unix-like systems), they can be disrupted by adding new hardware that needs a drive letter. An example is the addition of a new hard drive having a primary partition where a pre-existing hard drive contains logical drives in extended partitions; the new drive will be assigned a letter that was previously assigned to one of the extended partition logical drives. Moreover, even adding a new hard drive having only logical drives in an extended partition would still disrupt the letters of RAM disks and optical drives. This problem persisted through Microsoft's DOS-based 9x versions of Windows until they were replaced by versions based on the NT line, which preserves the letters of existing drives until the user changes them. Under DOS, this problem can be worked around by defining a SUBST drive and installing the DOS program into this logical drive. The assignment of this drive would then be changed in a batch job whenever the application starts. Under some versions of Concurrent DOS, as well as under Multiuser DOS, System Manager and REAL/32, the reserved drive letter L: will automatically be assigned to the corresponding load drive whenever an application starts.

Reserved device names

There are reserved device names in DOS that cannot be used as filenames regardless of extension, as they are occupied by built-in character devices. These restrictions also affect several Windows versions, in some cases causing crashes and security vulnerabilities. The reserved names are:

COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9 (serial communication ports)
CON, for console
LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9 (line printers)
AUX, for auxiliary
PRN, for printer
NUL, for null devices; added in 86-DOS 1.10 and PC DOS 1.0.
In Windows 95 and Windows 98, accessing a path built from reserved names (such as CON\CON, AUX\AUX, or PRN\PRN) crashes the operating system; Microsoft provided a security fix for the issue. In Windows XP, the name of a file or folder using a reserved name silently reverts to its previous name, with no notification or error message. In Windows Vista and later, attempting to use a reserved name for a file or folder brings up an error message saying "The specified device name is invalid." These names (except for NUL) have continued to be supported in all versions of MS-DOS, PC DOS and DR-DOS ever since. LST was also available in some OEM versions of MS-DOS 1.25, whereas other OEM versions of MS-DOS 1.25 already used LPT1 (first line printer) and COM1 (first serial communication device) instead, as introduced with PC DOS. In addition to LPT1 and LPT2 as well as COM1 to COM3, Hewlett-Packard's OEM version of MS-DOS 2.11 for the HP Portable Plus also supported LST as an alias for LPT2 and 82164A as an alias for COM2; it also supported PLT for plotters. Otherwise, COM2, LPT2, LPT3 and the CLOCK$ (still named CLOCK in some issues of MS-DOS 2.11) clock device were introduced with DOS 2.0, and COM3 and COM4 were added with DOS 3.3. Only the multitasking MS-DOS 4 supported KEYBD$ and SCREEN$. DR DOS 5.0 and higher and Multiuser DOS support an $IDLE$ device for dynamic idle detection to save power and improve multitasking. LPT4 is an optional built-in driver for a fourth line printer supported in some versions of DR-DOS since 7.02. CONFIG$ constitutes the real mode PnP manager in MS-DOS 7.0–8.0. AUX typically defaults to COM1, and PRN to LPT1 (LST), but these defaults can be changed in some versions of DOS to point to other serial or parallel devices. The PLT device (present only in some HP OEM versions of MS-DOS) was reconfigurable as well.
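A key subtlety above is that the extension is irrelevant: CON.TXT still names the console. A short sketch of the core name check (a simplified hypothetical helper, covering only the base set of reserved names, not the OEM-specific ones):

```python
# Base reserved device names common to DOS versions (OEM-specific
# names like LST, PLT or CLOCK$ are omitted in this sketch).
RESERVED = (
    {"CON", "AUX", "PRN", "NUL"}
    | {f"COM{i}" for i in range(1, 10)}
    | {f"LPT{i}" for i in range(1, 10)}
)

def is_reserved(filename: str) -> bool:
    """True if the name collides with a built-in character device.

    Only the stem is compared: the extension is ignored, and a
    trailing colon (a conventional device marker) is stripped.
    """
    stem = filename.upper().split(".")[0].rstrip(":")
    return stem in RESERVED

print(is_reserved("CON.TXT"))     # → True (still refers to the console)
print(is_reserved("NUL:"))        # → True
print(is_reserved("README.TXT"))  # → False
```

This stem-based matching is why the restriction applies "regardless of extension", and why such names leaked into Windows path handling as the crashes described above.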
Filenames ending with a colon (:) such as NUL: conventionally indicate device names, but the colon is not actually a part of the name of the built-in device drivers. The colon may be omitted in some cases, for example: ECHO This achieves nothing > NUL. It is still possible to create files or directories using these reserved device names, such as through direct editing of directory data structures in disk sectors. Such naming, such as starting a file name with a space, has sometimes been used by viruses or hacking programs to obscure files from users who do not know how to access these locations.

Memory management

DOS was designed for the Intel 8088 processor, which can only directly access a maximum of 1 MiB of RAM. Both IBM and Microsoft chose 640 kibibytes (KiB) as the maximum amount of memory available to programs and reserved the remaining 384 KiB for video memory, the read-only memory of adapters on some video and network peripherals, and the system's BIOS. By 1985, some DOS applications were already hitting the memory limit, while much of the reserved area was unused, depending on the machine's specifications. Specifications were developed to allow access to additional memory. The first, the Expanded Memory Specification (EMS), was designed to allow memory on an add-on card to be accessed via a 64 KiB page frame in the reserved upper memory area. 80386 and later systems could use a virtual 8086 (V86) mode memory manager like EMM386 to create expanded memory from extended memory without the need of an add-on card. The second specification was the Extended Memory Specification (XMS) for 80286 and later systems. This provided a way to copy data to and from extended memory, access to the 65,520-byte high memory area directly above the first megabyte of memory, and the upper memory block area. Generally XMS support was provided by HIMEM.SYS or a V86 mode memory manager like QEMM or 386MAX, which also supported EMS.
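The 65,520-byte figure for the high memory area falls directly out of real-mode address arithmetic: a segment:offset pair maps to segment × 16 + offset, so FFFFh:0010h through FFFFh:FFFFh address the bytes just above the first megabyte. A quick check of that arithmetic:

```python
def linear(segment: int, offset: int) -> int:
    """Real-mode address translation: segment * 16 + offset."""
    return (segment << 4) + offset

MIB = 1024 * 1024
hma_start = linear(0xFFFF, 0x0010)  # first byte above 1 MiB
hma_end   = linear(0xFFFF, 0xFFFF)  # highest address reachable

print(hex(hma_start))               # → 0x100000 (exactly 1 MiB)
print(hma_end - hma_start + 1)      # → 65520 bytes: the HMA
```

On an 8088 this range wrapped around to the bottom of memory (only 20 address lines), which is why the HMA only became usable on the 80286 and later, where the A20 line exists.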
Starting with DOS 5, DOS could directly take advantage of the HMA by loading its kernel code and disk buffers there via the DOS=HIGH statement in CONFIG.SYS. DOS 5+ also allowed the use of available upper memory blocks via the DOS=UMB statement in CONFIG.SYS.

DOS under OS/2 and Windows

The DOS emulation in OS/2 and Windows runs in much the same way as native applications do. They can access all of the drives and services, and can even use the host's clipboard services. Because the drivers for file systems and so forth reside in the host system, the DOS emulation needs only provide a DOS API translation layer which converts DOS calls to OS/2 or Windows system calls. The translation layer generally also converts BIOS calls and virtualizes common I/O port accesses which many DOS programs commonly use. In Windows 3.1 and 9x, the DOS virtual machine is provided by WINOLDAP. WinOldAp creates a virtual machine based on the program's PIF file, and the system state when Windows was loaded. DOS display output, in both character and graphics modes, can be captured and run in the window. DOS applications can use the Windows clipboard by accessing extra published calls in WinOldAp, and one can paste text through the WinOldAp graphics. The emulated DOS in OS/2 and Windows NT is based upon DOS 5. Although there is a default configuration (config.sys and autoexec.bat), one can use alternate files on a session-by-session basis. It is possible to load drivers in these files to access the host system, although these are typically third-party. Under OS/2 2.x and later, the DOS emulation is provided by DOSKRNL. This is a file that represents the combined IBMBIO.COM and IBMDOS.COM; system calls are passed through to the OS/2 windowing services. DOS programs run in their own environment; the bulk of the DOS utilities are provided by bound DOS/OS2 applications in the \OS2 directory. OS/2 can run Windows 3.1 applications by using a modified copy of Windows (Win-OS/2).
The modifications allow Windows 3.1 programs to run seamlessly on the OS/2 desktop, or one can start a WinOS/2 desktop, similar to starting Windows from DOS. OS/2 also allows for "DOS from Drive A:" (VMDISK). This is a real DOS, like MS-DOS 6.22 or PC DOS 5.00. One makes a bootable floppy disk of the DOS, adds a number of drivers from OS/2, and then creates a special image. The DOS booted this way has full access to the system, but provides its own drivers for hardware. One can use such a disk to access CD-ROM drives for which there is no OS/2 driver. In all 32-bit (IA-32) editions of the Windows NT family since 1993, DOS emulation is provided by way of a virtual DOS machine (NTVDM). 64-bit (IA-64 and x86-64) versions of Windows do not support NTVDM and cannot run 16-bit DOS applications directly; third-party emulators such as DOSBox can be used to run DOS programs on those machines.

User interface

DOS systems use a command-line interface. A program is started by entering its filename at the command prompt. DOS systems include utility programs and provide internal commands that do not correspond to programs. In an attempt to provide a more user-friendly environment, numerous software manufacturers wrote file management programs that provided users with WIMP interfaces. Microsoft Windows is a notable example, eventually resulting in Microsoft Windows 9x becoming a self-contained program loader, and replacing DOS as the most-used PC-compatible program loader. Text user interface programs included Norton Commander, DOS Navigator, Volkov Commander, Quarterdeck DESQview, and Sidekick. Graphical user interface programs included Digital Research's GEM (originally written for CP/M) and GEOS. Eventually, the manufacturers of major DOS systems began to include their own environment managers. MS-DOS/IBM DOS 4 included DOS Shell; DR DOS 5.0, released the following year, included ViewMAX, based upon GEM.
Terminate and stay resident

Although DOS is not a multitasking operating system, it does provide a terminate-and-stay-resident (TSR) function which allows programs to remain resident in memory. These programs can hook the system timer or keyboard interrupts to allow themselves to run tasks in the background or to be invoked at any time, preempting the current running program and effectively implementing a simple form of multitasking on a program-specific basis. The DOS PRINT command does this to implement background print spooling. Borland Sidekick, a popup personal information manager (PIM), also uses this technique. Terminate-and-stay-resident programs are also used to provide additional features not available by default. Programs like CED and DOSKEY provide command-line editing facilities beyond what is available in COMMAND.COM. Programs like the Microsoft CD-ROM Extensions (MSCDEX) provide access to files on CD-ROM disks. Some TSRs can even perform a rudimentary form of task switching. For example, the shareware program Back and Forth (1990) has a hotkey to save the state of the currently-running program to disk, load another program, and switch to it, making it possible to switch "back and forth" between programs (albeit slowly, due to the disk access required). Back and Forth could not enable background processing, however; that needed DESQview (on at least a 386).
Software

Arachne, a 16-bit graphical web browser
dBase, database program
Harvard Graphics, a presentation graphics design program
Lotus 1-2-3, a spreadsheet which has been credited with the success of the IBM PC
Norton Commander and XTree, file management utilities
PKZIP, the utility that quickly became the standard in file compression
ProComm, Qmodem, and Telix, modem communication programs
Sidekick, personal information manager that could be used from within other programs
WordPerfect, a word processor that was dominant in the 1980s
WordStar, word processor originally for CP/M that became popular on the IBM PC

Development tools

BASICA and GW-BASIC, BASIC language interpreters
DJGPP, the 32-bit DPMI DOS port of gcc
Microsoft Macro Assembler, Microsoft C, and CodeView from Microsoft
Watcom C/C++ from Watcom
Turbo Pascal, Turbo BASIC, Turbo C, Turbo Prolog, and Turbo Assembler from Borland
Sexual reproduction
Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes. In placental mammals, sperm cells exit the penis through the male urethra and enter the vagina during copulation, while egg cells enter the uterus through the oviduct. Other vertebrates of both sexes possess a cloaca for the release of sperm or egg cells. Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor. In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. 
Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations. During sexual reproduction, two haploid gametes combine into one diploid cell known as a zygote in a process called fertilization. The nuclei from the gametes fuse, and each gamete contributes half of the genetic material of the zygote. Multiple cell divisions by mitosis (without change in the number of chromosomes) then develop into a multicellular diploid phase or generation. In plants, the diploid phase, known as the sporophyte, produces spores by meiosis. These spores then germinate and divide by mitosis to form a haploid multicellular phase, the gametophyte, which produces gametes directly by mitosis. This type of life cycle, involving alternation between two multicellular phases, the sexual haploid gametophyte and asexual diploid sporophyte, is known as alternation of generations. The evolution of sexual reproduction is considered paradoxical, because asexual reproduction should be able to outperform it, as every young organism created can bear its own young. This implies that an asexual population has an intrinsic capacity to grow more rapidly with each generation; because only the females in a sexual population directly bear offspring, a sexual lineage grows at roughly half that rate. This 50% cost is a fitness disadvantage of sexual reproduction. The two-fold cost of sex includes this cost and the fact that any organism can only pass on 50% of its own genes to its offspring. However, one definite advantage of sexual reproduction is that it increases genetic diversity and impedes the accumulation of harmful genetic mutations. Sexual selection is a mode of natural selection in which some individuals out-reproduce others of a population because they are better at securing mates for sexual reproduction. It has been described as "a powerful evolutionary force that does not exist in asexual populations".
Evolution

The first fossilized evidence of sexual reproduction in eukaryotes is from the Stenian period, about 1.05 billion years old. Biologists studying evolution propose several explanations for the development of sexual reproduction and its maintenance. These reasons include reducing the likelihood of the accumulation of deleterious mutations, increasing the rate of adaptation to changing environments, dealing with competition, DNA repair, masking deleterious mutations, and reducing genetic variation on the genomic level. All of these ideas about why sexual reproduction has been maintained are generally supported, but ultimately the size of the population determines if sexual reproduction is entirely beneficial. Larger populations appear to respond more quickly to some of the benefits obtained through sexual reproduction than do smaller population sizes. However, newer models presented in recent years suggest a basic advantage for sexual reproduction in slowly reproducing complex organisms. Sexual reproduction allows these species to exhibit characteristics that depend on the specific environment that they inhabit, and the particular survival strategies that they employ.

Sexual selection

In order to reproduce sexually, both males and females need to find a mate. Generally in animals mate choice is made by females while males compete to be chosen. This can lead organisms to extreme efforts in order to reproduce, such as combat and display, or produce extreme features caused by a positive feedback known as a Fisherian runaway. Thus sexual reproduction, as a form of natural selection, has an effect on evolution. Sexual dimorphism is where the basic phenotypic traits vary between males and females of the same species. Dimorphism is found in both sex organs and in secondary sex characteristics, body size, physical strength and morphology, biological ornamentation, behavior and other bodily traits.
However, sexual selection operates only over extended periods of time, eventually leading to sexual dimorphism.

Animals

Arthropods

Insects

Insect species make up more than two-thirds of all extant animal species. Most insect species reproduce sexually, though some species are facultatively parthenogenetic. Many insect species have sexual dimorphism, while in others the sexes look nearly identical. Typically they have two sexes, with males producing spermatozoa and females ova. The ova develop into eggs that have a covering called the chorion, which forms before internal fertilization. Insects have very diverse mating and reproductive strategies, most often resulting in the male depositing a spermatophore within the female, which she stores until she is ready for egg fertilization. After fertilization, and the formation of a zygote, and varying degrees of development, in many species the eggs are deposited outside the female; while in others, they develop further within the female and the young are born live.

Mammals

There are three extant kinds of mammals: monotremes, placentals and marsupials, all with internal fertilization. In placental mammals, offspring are born as juveniles: complete animals with the sex organs present although not reproductively functional. After several months or years, depending on the species, the sex organs develop further to maturity and the animal becomes sexually mature. Most female mammals are only fertile during certain periods during their estrous cycle, at which point they are ready to mate. For most mammals, males and females exchange sexual partners throughout their adult lives.

Fish

The vast majority of fish species lay eggs that are then fertilized by the male. Some species lay their eggs on a substrate like a rock or on plants, while others scatter their eggs and the eggs are fertilized as they drift or sink in the water column.
Some fish species use internal fertilization and then disperse the developing eggs or give birth to live offspring. Fish with live-bearing offspring include the guppies and mollies of the genus Poecilia. Fishes that give birth to live young can be ovoviviparous, where the eggs are fertilized within the female and simply hatch within her body, or, as in seahorses, the male carries the developing young within a pouch and gives birth to live young. Fishes can also be viviparous, where the female supplies nourishment to the internally growing offspring. Some fish are hermaphrodites, where a single fish is both male and female and can produce eggs and sperm. Some hermaphroditic fish are male and female at the same time, while others are serially hermaphroditic, starting as one sex and changing to the other. In at least one hermaphroditic species, self-fertilization occurs when the eggs and sperm are released together. Internal self-fertilization may occur in some other species. One fish species does not reproduce by sexual reproduction but uses sex to produce offspring; Poecilia formosa is a unisex species that uses a form of parthenogenesis called gynogenesis, where unfertilized eggs develop into embryos that produce female offspring. Poecilia formosa mates with males of other fish species that use internal fertilization; the sperm does not fertilize the eggs but stimulates their growth, and they develop into embryos. Plants Animals have life cycles with a single diploid multicellular phase that produces haploid gametes directly by meiosis. Male gametes are called sperm, and female gametes are called eggs or ova. In animals, fertilization of the ovum by a sperm results in the formation of a diploid zygote that develops by repeated mitotic divisions into a diploid adult. Plants have two multicellular life-cycle phases, resulting in an alternation of generations. 
Plant zygotes germinate and divide repeatedly by mitosis to produce a diploid multicellular organism known as the sporophyte. The mature sporophyte produces haploid spores by meiosis that germinate and divide by mitosis to form a multicellular gametophyte phase that produces gametes at maturity. The gametophytes of different groups of plants vary in size. Mosses, ferns and other seedless plants may have gametophytes consisting of several million cells, while angiosperms have as few as three cells in each pollen grain. Flowering plants Flowering plants are the dominant plant form on land and they reproduce either sexually or asexually. Often their most distinctive feature is their reproductive organs, commonly called flowers. The anther produces pollen grains which contain the male gametophytes that produce sperm nuclei. For pollination to occur, pollen grains must attach to the stigma of the female reproductive structure (carpel), where the female gametophytes are located within ovules enclosed within the ovary. After the pollen tube grows through the carpel's style, the sex cell nuclei from the pollen grain migrate into the ovule to fertilize the egg cell and endosperm nuclei within the female gametophyte in a process termed double fertilization. The resulting zygote develops into an embryo, while the triploid endosperm (one sperm cell plus two female cells) and female tissues of the ovule give rise to the surrounding tissues in the developing seed. The ovary, which produced the female gametophyte(s), then grows into a fruit, which surrounds the seed(s). Plants may either self-pollinate or cross-pollinate. In 2013, flowers dating from the Cretaceous (100 million years before present) were found encased in amber, the oldest evidence of sexual reproduction in a flowering plant. Microscopic images showed tubes growing out of pollen and penetrating the flower's stigma. The pollen was sticky, suggesting it was carried by insects. 
Ferns Ferns produce large diploid sporophytes with rhizomes, roots and leaves. Fertile leaves produce sporangia that contain haploid spores. The spores are released and germinate to produce small, thin gametophytes that are typically heart shaped and green in color. The gametophyte prothalli produce motile sperm in the antheridia and egg cells in archegonia on the same or different plants. After rains or when dew deposits a film of water, the motile sperm are splashed away from the antheridia, which are normally produced on the top side of the thallus, and swim in the film of water to the archegonia where they fertilize the egg. To promote outcrossing or cross-fertilization the sperm are released before the eggs are receptive of the sperm, making it more likely that the sperm will fertilize the eggs of a different thallus. After fertilization, a zygote is formed, which grows into a new sporophytic plant. The condition of having separate sporophyte and gametophyte plants is called alternation of generations. Bryophytes The bryophytes, which include liverworts, hornworts and mosses, reproduce both sexually and vegetatively. They are small plants found growing in moist locations and, like ferns, have motile sperm with flagella and need water to facilitate sexual reproduction. These plants start as a haploid spore that grows into the dominant gametophyte form, which is a multicellular haploid body with leaf-like structures that photosynthesize. Haploid gametes are produced in antheridia (male) and archegonia (female) by mitosis. The sperm released from the antheridia respond to chemicals released by ripe archegonia and swim to them in a film of water and fertilize the egg cells, thus producing a zygote. The zygote divides by mitotic division and grows into a multicellular, diploid sporophyte. The sporophyte produces spore capsules (sporangia), which are connected by stalks (setae) to the archegonia. 
The spore capsules produce spores by meiosis and when ripe the capsules burst open to release the spores. Bryophytes show considerable variation in their reproductive structures and the above is a basic outline. Also in some species each plant is one sex (dioicous) while other species produce both sexes on the same plant (monoicous). Fungi Fungi are classified by the methods of sexual reproduction they employ. The outcome of sexual reproduction most often is the production of resting spores that are used to survive inclement times and to spread. There are typically three phases in the sexual reproduction of fungi: plasmogamy, karyogamy and meiosis. The cytoplasm of two parent cells fuse during plasmogamy and the nuclei fuse during karyogamy. New haploid gametes are formed during meiosis and develop into spores. The adaptive basis for the maintenance of sexual reproduction in the Ascomycota and Basidiomycota (dikaryon) fungi was reviewed by Wallen and Perlin. They concluded that the most plausible reason for maintaining this capability is the benefit of repairing DNA damage, caused by a variety of stresses, through recombination that occurs during meiosis. Bacteria and archaea Three distinct processes in prokaryotes are regarded as similar to eukaryotic sex: bacterial transformation, which involves the incorporation of foreign DNA into the bacterial chromosome; bacterial conjugation, which is a transfer of plasmid DNA between bacteria, but the plasmids are rarely incorporated into the bacterial chromosome; and gene transfer and genetic exchange in archaea. Bacterial transformation involves the recombination of genetic material and its function is mainly associated with DNA repair. Bacterial transformation is a complex process encoded by numerous bacterial genes, and is a bacterial adaptation for DNA transfer. This process occurs naturally in at least 40 bacterial species. 
For a bacterium to bind, take up, and recombine exogenous DNA into its chromosome, it must enter a special physiological state referred to as competence (see Natural competence). Sexual reproduction in early single-celled eukaryotes may have evolved from bacterial transformation, or from a similar process in archaea (see below). On the other hand, bacterial conjugation is a type of direct transfer of DNA between two bacteria mediated by an external appendage called the conjugation pilus. Bacterial conjugation is controlled by plasmid genes that are adapted for spreading copies of the plasmid between bacteria. The infrequent integration of a plasmid into a host bacterial chromosome, and the subsequent transfer of a part of the host chromosome to another cell, do not appear to be bacterial adaptations. Exposure of hyperthermophilic archaeal Sulfolobus species to DNA damaging conditions induces cellular aggregation accompanied by high-frequency genetic marker exchange. Ajon et al. hypothesized that this cellular aggregation enhances species-specific DNA repair by homologous recombination. DNA transfer in Sulfolobus may be an early form of sexual interaction similar to the better-studied bacterial transformation systems that also involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage.
Steam
Steam is water vapour (water in the gas phase), often mixed with air and/or an aerosol of liquid water droplets. This may occur due to evaporation or due to boiling, where heat is applied until water reaches the enthalpy of vaporization. Steam that is saturated or superheated (water vapour) is invisible; however, wet steam, a visible mist or aerosol of water droplets, is often referred to as "steam". When liquid water becomes steam, it increases in volume by 1,700 times at standard temperature and pressure; this change in volume can be converted into mechanical work by steam engines such as reciprocating piston type engines and steam turbines, which are a sub-group of steam engines. Piston type steam engines played a central role in the Industrial Revolution and modern steam turbines are used to generate more than 80% of the world's electricity. If liquid water comes in contact with a very hot surface or depressurizes quickly below its vapour pressure, it can create a steam explosion. Types of steam and conversions Steam is traditionally created by heating a boiler via burning coal and other fuels, but it is also possible to create steam with solar energy. Water vapour that includes water droplets is described as wet steam. As wet steam is heated further, the droplets evaporate, and at a high enough temperature (which depends on the pressure) all of the water evaporates and the system is in vapour–liquid equilibrium. When steam has reached this equilibrium point, it is referred to as saturated steam. Superheated steam or live steam is steam at a temperature higher than its boiling point for the pressure, which only occurs when all liquid water has evaporated or has been removed from the system. Steam tables contain thermodynamic data for water/saturated steam and are often used by engineers and scientists in design and operation of equipment where thermodynamic cycles involving steam are used. 
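The roughly 1,700-fold expansion quoted above can be checked with a quick back-of-the-envelope calculation. The sketch below treats steam as an ideal gas and assumes a liquid density of 1,000 kg/m³, so the result is only approximate:

```python
# Estimate the liquid-to-steam volume expansion of water at 100 °C and
# atmospheric pressure, treating steam as an ideal gas.

R = 8.314           # J/(mol·K), universal gas constant
T = 373.15          # K, boiling point of water at 1 atm
P = 101_325         # Pa, standard atmospheric pressure
M = 18.015e-3       # kg/mol, molar mass of water
rho_liquid = 1000.0 # kg/m^3, assumed density of liquid water

v_gas = R * T / P        # m^3 per mole of steam (ideal gas law)
v_liquid = M / rho_liquid  # m^3 per mole of liquid water

expansion = v_gas / v_liquid
print(f"Expansion ratio: {expansion:.0f}x")  # prints "Expansion ratio: 1700x"
```

Using the density of liquid water at 100 °C (about 958 kg/m³) instead gives a ratio closer to 1,630; the familiar 1,700 figure assumes room-temperature liquid water.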
Additionally, thermodynamic phase diagrams for water/steam, such as a temperature–entropy diagram or a Mollier diagram, may be useful. Steam charts are also used for analysing thermodynamic cycles. Uses Agricultural In agriculture, steam is used for soil sterilization to avoid the use of harmful chemical agents and increase soil health. Domestic Steam's capacity to transfer heat is also used in the home: for cooking vegetables, steam cleaning of fabric, carpets and flooring, and for heating buildings. In each case, water is heated in a boiler, and the steam carries the energy to a target object. Steam is also used in ironing clothes to add enough humidity with the heat to take wrinkles out and put intentional creases into the clothing. Electricity generation (and co-generation) As of 2000, around 90% of all electricity was generated using steam as the working fluid, nearly all by steam turbines. In electric generation, steam is typically condensed at the end of its expansion cycle, and returned to the boiler for re-use. However, in co-generation, steam is piped into buildings through a district heating system to provide heat energy after its use in the electric generation cycle. The world's biggest steam generation system is the New York City steam system, which pumps steam into 100,000 buildings in Manhattan from seven co-generation plants. Energy storage In other industrial applications steam is used for energy storage, which is introduced and extracted by heat transfer, usually through pipes. Steam is a capacious reservoir for thermal energy because of water's high heat of vaporization. Fireless steam locomotives were steam locomotives that operated from a supply of steam stored on board in a large tank resembling a conventional locomotive's boiler. This tank was filled by process steam, as is available in many sorts of large factory, such as paper mills. 
The locomotive's propulsion used pistons and connecting rods, as for a typical steam locomotive. These locomotives were mostly used in places where there was a risk of fire from a boiler's firebox, but were also used in factories that simply had a plentiful supply of steam to spare. Mechanical effort Steam engines and steam turbines use the expansion of steam to drive a piston or turbine to perform mechanical work. The ability to return condensed steam as water-liquid to the boiler at high pressure with relatively little expenditure of pumping power is important. Condensation of steam to water often occurs at the low-pressure end of a steam turbine, since this maximizes the energy efficiency, but such wet-steam conditions must be limited to avoid excessive turbine blade erosion. Engineers use an idealised thermodynamic cycle, the Rankine cycle, to model the behaviour of steam engines. Steam turbines are often used in the production of electricity. Sterilization An autoclave, which uses steam under pressure, is used in microbiology laboratories and similar environments for sterilization. Steam, especially dry (highly superheated) steam, may be used for antimicrobial cleaning even to the levels of sterilization. Steam is a non-toxic antimicrobial agent. Steam in piping Steam is used in piping for utility lines. It is also used in jacketing and tracing of piping to maintain the uniform temperature in pipelines and vessels. Industrial processes Steam is used across multiple industries for its ability to transfer heat, to drive chemical reactions, to sterilize or disinfect objects and to maintain constant temperatures. In the lumber industry, steam is used in the process of wood bending, killing insects, and increasing plasticity. Steam is used to accelerate the drying of concrete, especially in prefabricates. Care should be taken since concrete produces heat during hydration and additional heat from the steam could be detrimental to the hardening reaction processes of the concrete. 
In chemical and petrochemical industries, steam is used in various chemical processes as a reactant. Steam cracking of long chain hydrocarbons produces lower molecular weight hydrocarbons for fuel or other chemical applications. Steam reforming produces syngas or hydrogen. Cleaning Steam is used in cleaning fibers and other materials, sometimes in preparation for painting. Steam is also useful in melting hardened grease and oil residues, so it is useful in cleaning kitchen floors and equipment and internal combustion engines and parts. Among the advantages of using steam versus a hot water spray are the facts that steam can operate at higher temperatures and it uses substantially less water per minute.
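The Rankine cycle mentioned under Mechanical effort can be made concrete with a toy efficiency calculation. The enthalpy values below are rough illustrative numbers, not real steam-table data, so the result is only indicative:

```python
# Toy Rankine-cycle thermal efficiency estimate. The specific enthalpies
# (kJ/kg) below are assumed, illustrative values, not steam-table data.

h_boiler_exit = 3100.0    # superheated steam entering the turbine
h_turbine_exit = 2200.0   # wet steam leaving the turbine
h_condenser_exit = 190.0  # saturated liquid leaving the condenser
h_pump_exit = 193.0       # compressed liquid entering the boiler

w_turbine = h_boiler_exit - h_turbine_exit  # work extracted by the turbine
w_pump = h_pump_exit - h_condenser_exit     # small work spent pumping liquid
q_in = h_boiler_exit - h_pump_exit          # heat added in the boiler

efficiency = (w_turbine - w_pump) / q_in
print(f"Thermal efficiency: {efficiency:.1%}")  # prints "Thermal efficiency: 30.9%"
```

In practice, the h values are looked up in steam tables for the actual boiler and condenser pressures; simple steam cycles typically achieve thermal efficiencies in the 30–40% range.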
Random-access memory
Random-access memory (RAM) is a form of electronic computer memory that can be read and changed in any order, typically used to store working data and machine code. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory, in contrast with other direct-access data storage media (such as hard disks and magnetic tape), where the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. In today's technology, random-access memory takes the form of integrated circuit (IC) chips with MOS (metal–oxide–semiconductor) memory cells. RAM is normally associated with volatile types of memory where stored information is lost if power is removed. The two main types of volatile random-access semiconductor memory are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Non-volatile RAM has also been developed, and other types of non-volatile memories allow random access for read operations but either do not allow write operations or have other kinds of limitations. These include most types of ROM and NOR flash memory. The use of semiconductor RAM dates back to 1965, when IBM introduced the monolithic (single-chip) 16-bit SP95 SRAM chip for their System/360 Model 95 computer, and Toshiba used bipolar DRAM memory cells for its 180-bit Toscal BC-1411 electronic calculator, both based on bipolar transistors. While it offered higher speeds than magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory. In 1966, Robert Dennard invented the modern DRAM architecture, in which there is a single MOS transistor per capacitor. The first commercial DRAM IC chip, the 1K Intel 1103, was introduced in October 1970. 
Synchronous dynamic random-access memory (SDRAM) was reintroduced with the Samsung KM48SL2000 chip in 1992. History Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines were serial devices which could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of triode vacuum tubes, and later out of discrete transistors, were used for smaller and faster memories such as registers. Such registers were relatively large and too costly to use for large amounts of data; generally only a few dozen or few hundred bits of such memory could be provided. The first practical form of random-access memory was the Williams tube. It stored data as electrically charged spots on the face of a cathode-ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first successfully ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. 
Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible. Magnetic core memory was the standard form of computer memory until displaced by semiconductor memory in integrated circuits (ICs) during the early 1970s. Prior to the development of integrated read-only memory (ROM) circuits, permanent (or read-only) random-access memory was often constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. Semiconductor memory appeared in the 1960s with bipolar memory, which used bipolar transistors. Although it was faster, it could not compete with the lower price of magnetic core memory. MOS RAM In 1957, Frosch and Derick manufactured the first silicon dioxide field-effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, in 1960, a team demonstrated a working MOSFET at Bell Labs. This led to the development of metal–oxide–semiconductor (MOS) memory by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher speeds, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s. Integrated bipolar static random-access memory (SRAM) was invented by Robert H. Norman at Fairchild Semiconductor in 1963. It was followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced the SP95 memory chip for the System/360 Model 95. 
Dynamic random-access memory (DRAM) allowed replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away. Toshiba's Toscal BC-1411 electronic calculator, which was introduced in 1965, used a form of capacitor-based bipolar DRAM, storing 180-bit data on discrete memory cells consisting of germanium bipolar transistors and capacitors. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube. While it offered higher speeds than magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory. In 1966, Robert Dennard invented the modern DRAM architecture, in which there is a single MOS transistor per capacitor. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM memory cell, based on MOS technology. The first commercial DRAM IC chip was the Intel 1103, which was manufactured on an 8 µm MOS process with a capacity of 1 kbit, and was released in 1970. The earliest DRAMs were often synchronized with the CPU clock (clocked) and were used with early microprocessors. In the mid-1970s, DRAMs moved to the asynchronous design, but in the 1990s returned to synchronous operation. In 1992, Samsung released the KM48SL2000, which had a capacity of 16 Mbit and was mass-produced in 1993. 
The first commercial DDR SDRAM (double data rate SDRAM) memory chip was Samsung's 64 Mbit DDR SDRAM chip, released in June 1998. GDDR (graphics DDR) is a form of DDR SGRAM (synchronous graphics RAM), which was first released by Samsung as a 16 Mbit memory chip in 1998. Types The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell, typically implemented with six MOSFETs. This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair (typically a MOSFET and MOS capacitor, respectively), which together comprise a DRAM cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writable variants of ROM (such as EEPROM and NOR flash) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction codes. In general, the term RAM refers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. 
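As one illustration of how ECC memory can detect and correct errors with parity bits, the sketch below implements a Hamming(7,4) code, one of the simplest single-error-correcting codes. Real ECC modules typically use wider SECDED codes over 64-bit words, so this is illustrative only:

```python
# Minimal Hamming(7,4) encoder/decoder: three parity bits protect four
# data bits, allowing any single flipped bit to be located and corrected.

def encode(data4):
    """data4: list of 4 bits -> 7-bit codeword (bit positions 1..7)."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4  # even parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # even parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # even parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(code7):
    """Return (corrected 4 data bits, position of flipped bit or 0)."""
    c = code7[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # recheck positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # recheck positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # syndrome points at the bad bit
    if error_pos:
        c[error_pos - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], error_pos

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a random bit flip in memory
data, pos = decode(word)
print(data, pos)  # prints "[1, 0, 1, 1] 5": data recovered, error at position 5
```

Flipping any single bit of the 7-bit codeword is detected and corrected; two simultaneous flips exceed what this code can correct, which is why stronger codes are used in practice.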
In optical storage, the term DVD-RAM is somewhat of a misnomer since it is not random access; it behaves much like a hard disk drive, if somewhat slower. However, unlike CD-RW or DVD-RW, DVD-RAM does not need to be erased before reuse. Memory cell The memory cell is the fundamental building block of computer memory. The memory cell is an electronic circuit that stores one bit of binary information and it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it. In SRAM, the memory cell is a type of flip-flop circuit, usually implemented using FETs. This means that SRAM requires very low power when not being accessed, but it is expensive and has low storage density. A second type, DRAM, is based around a capacitor. Charging and discharging this capacitor can store a "1" or a "0" in the cell. However, the charge in this capacitor slowly leaks away, and must be refreshed periodically. Because of this refresh process, DRAM uses more power, but it can achieve greater storage densities and lower unit costs compared to SRAM. Addressing To be useful, memory cells must be readable and writable. Within the RAM device, multiplexing and demultiplexing circuitry is used to select memory cells. Typically, a RAM device has a set of address lines, and for each combination of bits that may be applied to these lines, a set of memory cells are activated. Due to this addressing, RAM devices virtually always have a memory capacity that is a power of two. Usually several memory cells share the same address. For example, a 4-bit "wide" RAM chip has four memory cells for each address. Often the width of the memory and that of the microprocessor are different; for a 32-bit microprocessor, eight 4-bit RAM chips would be needed. Often more addresses are needed than can be provided by a device. 
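The addressing arithmetic just described can be sketched in a few lines: n address lines select one of 2**n locations (hence the power-of-two capacities), and several narrow chips are combined to match a processor's word width. The helper function below is illustrative, not a real API:

```python
# Sketch of RAM addressing arithmetic: n address lines select one of
# 2**n locations, and narrow chips are ganged to form a wide word.

import math

def address_lines(num_locations):
    """Smallest number of address lines able to select num_locations."""
    return math.ceil(math.log2(num_locations))

# A chip with 1024 addressable locations needs 10 address lines (2**10 = 1024).
print(address_lines(1024))  # prints 10

# A 32-bit wide memory built from 4-bit "wide" chips needs 8 chips per word.
word_width, chip_width = 32, 4
print(word_width // chip_width)  # prints 8
```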
In that case, multiplexors external to the device are used to activate the correct device that is being accessed. RAM is often byte addressable, although it is also possible to make RAM that is word-addressable. Memory hierarchy One can read and over-write data in RAM. Many computer systems have a memory hierarchy consisting of processor registers, on-die SRAM caches, external caches, DRAM, paging systems and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that access time to rotating storage media or a tape is variable. The overall goal of using a memory hierarchy is to obtain the fastest possible average access time while minimizing the total cost of the entire memory system (generally, the memory hierarchy follows the access time, with the fast CPU registers at the top and the slow hard drive at the bottom). In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules, about the size of a few sticks of chewing gum. These can be quickly replaced should they become damaged or when changing needs demand more storage capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system. Other uses of RAM In addition to serving as temporary storage and working space for the operating system and applications, RAM is used in numerous other ways. Virtual memory Most modern operating systems employ a method of extending RAM capacity, known as "virtual memory". 
A portion of the computer's hard drive is set aside for a paging file or a scratch partition, and the combination of physical RAM and the paging file form the system's total memory. (For example, if a computer has 2 GB (2 × 1024³ B) of RAM and a 1 GB page file, the operating system has 3 GB total memory available to it.) When the system runs low on physical memory, it can "swap" portions of RAM to the paging file to make room for new data, as well as to read previously swapped information back into RAM. Excessive use of this mechanism results in thrashing and generally hampers overall system performance, mainly because hard drives are far slower than RAM. RAM disk Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive that is called a RAM disk. A RAM disk loses the stored data when the computer is shut down, unless memory is arranged to have a standby battery source, or changes to the RAM disk are written out to a nonvolatile disk. The RAM disk is reloaded from the physical disk upon RAM disk initialization. Shadow RAM Sometimes, the contents of a relatively slow ROM chip are copied to read/write memory to allow for shorter access times. The ROM chip is then disabled while the initialized memory locations are switched in on the same block of addresses (often write-protected). This process, sometimes called shadowing, is fairly common in both computers and embedded systems. As a common example, the BIOS in typical personal computers often has an option called "use shadow BIOS" or similar. When enabled, functions that rely on data from the BIOS's ROM instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM sections). Depending on the system, this may not result in increased performance, and may cause incompatibilities. For example, some hardware may be inaccessible to the operating system if shadow RAM is used. 
On some systems the benefit may be hypothetical because the BIOS is not used after booting in favor of direct hardware access. Free memory is reduced by the size of the shadowed ROMs. Memory wall The "memory wall" is the growing disparity of speed between the CPU and the response time of memory (known as memory latency) outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries, which is also referred to as the "bandwidth wall". From 1986 to 2000, CPU speed improved at an annual rate of 55% while off-chip memory response time only improved at 10% per year. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance. Another reason for the disparity is the enormous increase in the size of memory since the start of the PC revolution in the 1980s. Originally, PCs contained less than 1 mebibyte of RAM, which often had a response time of 1 CPU clock cycle, meaning that it required 0 wait states. Larger memory units are inherently slower than smaller ones of the same type, simply because it takes longer for signals to traverse a larger circuit. Constructing a memory unit of many gibibytes with a response time of one clock cycle is difficult or impossible. Today's CPUs often still have a mebibyte of 0 wait state cache memory, but it resides on the same chip as the CPU cores due to the bandwidth limitations of chip-to-chip communication. It must also be constructed from static RAM, which is far more expensive than the dynamic RAM used for larger memories. Static RAM also consumes far more power. CPU speed improvements slowed significantly, partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in a 2005 document. First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat... 
Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address. The RC delays in signal transmission were also noted in "Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures", which projected a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014. A different concept is the processor-memory performance gap, which can be addressed by 3D integrated circuits that reduce the distance between the logic and memory aspects that are further apart in a 2D chip. Memory subsystem design requires a focus on the gap, which is widening over time. The main method of bridging the gap is the use of caches: small amounts of high-speed memory near the processor that house recent operations and instructions, speeding up the execution of those operations or instructions in cases where they are called upon frequently. Multiple levels of caching have been developed to deal with the widening gap, and the performance of high-speed modern computers relies on evolving caching techniques. The growth in processor speed can outpace the improvement in main memory access speed by as much as 53% per year.
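As a back-of-the-envelope check on the growth rates quoted above (CPU speed improving about 55% per year versus about 10% per year for off-chip memory latency between 1986 and 2000), the compounded gap can be sketched in a few lines; the rates are the ones cited in the text, and the helper name is illustrative:

```python
# Annual improvement rates quoted above: CPU ~55%/year, off-chip memory ~10%/year.
CPU_GROWTH = 1.55
MEM_GROWTH = 1.10

def relative_gap(years: int) -> float:
    """Factor by which CPU speed outgrows memory speed after `years` years."""
    return (CPU_GROWTH / MEM_GROWTH) ** years

# Over the 14 years from 1986 to 2000, the gap compounds to roughly 120x,
# which is why memory latency was expected to become the dominant bottleneck.
gap_1986_to_2000 = relative_gap(14)
```

Compounding at these rates, a roughly 1.4x relative improvement per year accumulates into a two-orders-of-magnitude disparity over the period, which is the trend the "memory wall" name describes.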
Solid-state drives have continued to increase in speed, from ~400 Mbit/s via SATA3 in 2012 up to ~7 GB/s via NVMe/PCIe in 2024, closing the gap between RAM and hard disk speeds, although RAM continues to be an order of magnitude faster, with single-lane DDR5-8000 capable of 128 GB/s, and modern GDDR even faster. Fast, cheap, non-volatile solid-state drives have replaced some functions formerly performed by RAM, such as holding certain data for immediate availability in server farms: 1 terabyte of SSD storage can be had for $200, while 1 TB of RAM would cost thousands of dollars.
Technology
Data storage
24277294
https://en.wikipedia.org/wiki/Blossom%20algorithm
Blossom algorithm
In graph theory, the blossom algorithm is an algorithm for constructing maximum matchings on graphs. The algorithm was developed by Jack Edmonds in 1961, and published in 1965. Given a general graph G = (V, E), the algorithm finds a matching M such that each vertex in V is incident with at most one edge in M and |M| is maximized. The matching is constructed by iteratively improving an initial empty matching along augmenting paths in the graph. Unlike bipartite matching, the key new idea is that an odd-length cycle in the graph (blossom) is contracted to a single vertex, with the search continuing iteratively in the contracted graph. The algorithm runs in time O(|E||V|²), where |E| is the number of edges of the graph and |V| is its number of vertices. A better running time of O(|E|√|V|) for the same task can be achieved with the much more complex algorithm of Micali and Vazirani. A major reason that the blossom algorithm is important is that it gave the first proof that a maximum-size matching could be found using a polynomial amount of computation time. Another reason is that it led to a linear programming polyhedral description of the matching polytope, yielding an algorithm for min-weight matching. As elaborated by Alexander Schrijver, further significance of the result comes from the fact that this was the first polytope whose proof of integrality "does not simply follow just from total unimodularity, and its description was a breakthrough in polyhedral combinatorics."

Augmenting paths

Given G = (V, E) and a matching M of G, a vertex v is exposed if no edge of M is incident with v. A path in G is an alternating path if its edges are alternately not in M and in M (or in M and not in M). An augmenting path P is an alternating path that starts and ends at two distinct exposed vertices. Note that the number of unmatched edges in an augmenting path is greater by one than the number of matched edges, and hence the total number of edges in an augmenting path is odd.
A matching augmentation along an augmenting path P is the operation of replacing M with a new matching M₁ = M ⊕ P = (M ∖ P) ∪ (P ∖ M). By Berge's lemma, matching M is maximum if and only if there is no M-augmenting path in G. Hence, either a matching is maximum, or it can be augmented. Thus, starting from an initial matching, we can compute a maximum matching by augmenting the current matching with augmenting paths as long as we can find them, and return whenever no augmenting paths are left. We can formalize the algorithm as follows:

INPUT: Graph G, initial matching M on G
OUTPUT: maximum matching M* on G
A1 function find_maximum_matching(G, M) : M*
A2     P ← find_augmenting_path(G, M)
A3     if P is non-empty then
A4         return find_maximum_matching(G, augment M along P)
A5     else
A6         return M
A7     end if
A8 end function

We still have to describe how augmenting paths can be found efficiently. The subroutine to find them uses blossoms and contractions.

Blossoms and contractions

Given G = (V, E) and a matching M of G, a blossom B is a cycle in G consisting of 2k + 1 edges of which exactly k belong to M, and where one of the vertices v of the cycle (the base) is such that there exists an alternating path of even length (the stem) from v to an exposed vertex w.

Finding Blossoms: Traverse the graph starting from an exposed vertex. Starting from that vertex, label it as an outer vertex "o". Alternate the labeling between vertices being inner "i" and outer "o" such that no two adjacent vertices have the same label. If we end up with two adjacent vertices labeled as outer "o" then we have an odd-length cycle and hence a blossom.

Define the contracted graph G′ as the graph obtained from G by contracting every edge of B, and define the contracted matching M′ as the matching of G′ corresponding to M. G′ has an M′-augmenting path if and only if G has an M-augmenting path, and any M′-augmenting path P′ in G′ can be lifted to an M-augmenting path in G by undoing the contraction by B so that the segment of P′ (if any) traversing through vB is replaced by an appropriate segment traversing through B.
In more detail:
- if P′ traverses through a segment u → vB → w in G′, then this segment is replaced with the segment u → (u′ → … → w′) → w in G, where blossom vertices u′ and w′ and the side of B, (u′ → … → w′), going from u′ to w′ are chosen to ensure that the new path is still alternating (u′ is exposed with respect to M ∩ B, {w′, w} ∈ E ∖ M).
- if P′ has an endpoint vB, then the path segment u → vB in P′ is replaced with the segment u → (u′ → … → v′) in G, where blossom vertices u′ and v′ and the side of B, (u′ → … → v′), going from u′ to v′ are chosen to ensure that the path is alternating (v′ is exposed, {u′, u} ∈ E ∖ M).

Thus blossoms can be contracted and the search performed in the contracted graphs. This reduction is at the heart of Edmonds' algorithm.

Finding an augmenting path

The search for an augmenting path uses an auxiliary data structure consisting of a forest F whose individual trees correspond to specific portions of the graph G. In fact, the forest F is the same that would be used to find maximum matchings in bipartite graphs (without need for shrinking blossoms). In each iteration the algorithm either (1) finds an augmenting path, (2) finds a blossom and recurses onto the corresponding contracted graph, or (3) concludes there are no augmenting paths. The auxiliary structure is built by an incremental procedure discussed next. The construction procedure considers vertices v and edges e in G and incrementally updates F as appropriate. If v is in a tree T of the forest, we let root(v) denote the root of T. If both v and w are in the same tree T in F, we let distance(v, w) denote the length of the unique path from v to w in T.
INPUT: Graph G, matching M on G
OUTPUT: augmenting path P in G or empty path if none found
B01 function find_augmenting_path(G, M) : P
B02     F ← empty forest
B03     unmark all vertices and edges in G, mark all edges of M
B05     for each exposed vertex v do
B06         create a singleton tree { v } and add the tree to F
B07     end for
B08     while there is an unmarked vertex v in F with distance(v, root(v)) even do
B09         while there exists an unmarked edge e = { v, w } do
B10             if w is not in F then // w is matched, so add e and w's matched edge to F
B11                 x ← vertex matched to w in M
B12                 add edges { v, w } and { w, x } to the tree of v
B13             else
B14                 if distance(w, root(w)) is odd then // Do nothing.
B15                 else
B16                     if root(v) ≠ root(w) then // Report an augmenting path in F ∪ { e }.
B17                         P ← path (root(v) → ... → v) → (w → ... → root(w))
B18                         return P
B19                     else // Contract a blossom in G and look for the path in the contracted graph.
B20                         B ← blossom formed by e and edges on the path v → w in T
B21                         G′, M′ ← contract G and M by B
B22                         P′ ← find_augmenting_path(G′, M′)
B23                         P ← lift P′ to G
B24                         return P
B25                     end if
B26                 end if
B27             end if
B28             mark edge e
B29         end while
B30         mark vertex v
B31     end while
B32     return empty path
B33 end function

Examples

The following four figures illustrate the execution of the algorithm. Dashed lines indicate edges that are currently not present in the forest. First, the algorithm processes an out-of-forest edge that causes the expansion of the current forest (lines B10 – B12). Next, it detects a blossom and contracts the graph (lines B20 – B21). Finally, it locates an augmenting path in the contracted graph (line B22) and lifts it to the original graph (line B23). Note that the ability of the algorithm to contract blossoms is crucial here; the algorithm cannot find the augmenting path P in the original graph directly because only out-of-forest edges between vertices at even distances from the roots are considered on line B17 of the algorithm.
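The augmentation step used by the outer algorithm (replacing M with the symmetric difference M ⊕ P once an augmenting path P is found) can be sketched as a plain set operation; this is a minimal illustration, not tied to any particular matching library:

```python
def augment(matching, path_edges):
    """Return the new matching M (+) P: edges in exactly one of M and P.

    Edges are given as 2-element iterables; frozensets make them
    order-independent, so (1, 2) and (2, 1) denote the same edge.
    """
    m = {frozenset(e) for e in matching}
    p = {frozenset(e) for e in path_edges}
    return m ^ p  # symmetric difference

# Augmenting the matching {2-3} along the augmenting path 1-2, 2-3, 3-4
# (whose endpoints 1 and 4 are exposed) yields the larger matching {1-2, 3-4}.
new_m = augment([(2, 3)], [(1, 2), (2, 3), (3, 4)])
```

Because an augmenting path always has one more unmatched edge than matched edges, each such step grows the matching by exactly one edge.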
Analysis

The forest F constructed by the find_augmenting_path function is an alternating forest.
- a tree T in G is an alternating tree with respect to M, if
  - T contains exactly one exposed vertex r called the tree root,
  - every vertex at an odd distance from the root has exactly two incident edges in T, and
  - all paths from r to leaves in T have even lengths, their odd edges are not in M and their even edges are in M.
- a forest F in G is an alternating forest with respect to M, if
  - its connected components are alternating trees, and
  - every exposed vertex in G is a root of an alternating tree in F.

Each iteration of the loop starting at line B09 either adds to a tree T in F (line B10) or finds an augmenting path (line B17) or finds a blossom (line B20). It is easy to see that the running time is O(|E||V|²).

Bipartite matching

When G is bipartite, there are no odd cycles in G. In that case, blossoms will never be found and one can simply remove lines B20 – B24 of the algorithm. The algorithm thus reduces to the standard algorithm to construct maximum cardinality matchings in bipartite graphs, where we repeatedly search for an augmenting path by a simple graph traversal: this is for instance the case of the Ford–Fulkerson algorithm.

Weighted matching

The matching problem can be generalized by assigning weights to edges in G and asking for a set M that produces a matching of maximum (minimum) total weight: this is the maximum weight matching problem. This problem can be solved by a combinatorial algorithm that uses the unweighted Edmonds's algorithm as a subroutine. Kolmogorov provides an efficient C++ implementation of this.
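Since the bipartite case above needs no blossom handling, the repeated augmenting-path search reduces to a simple graph traversal. A minimal sketch of that reduction (the function and variable names are illustrative; adjacency is given as a dict from left-side vertices to lists of right-side neighbours):

```python
def max_bipartite_matching(left, adj):
    """Maximum matching in a bipartite graph via repeated augmenting-path DFS."""
    match_of = {}  # right vertex -> left vertex currently matched to it

    def try_augment(u, visited):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is exposed, or its partner can be re-routed along another path:
            if v not in match_of or try_augment(match_of[v], visited):
                match_of[v] = u
                return True
        return False

    size = 0
    for u in left:          # one augmentation attempt per left vertex
        if try_augment(u, set()):
            size += 1
    return size, match_of

# Left vertices 0, 1, 2; vertex 1 only likes 'a' and vertex 2 only likes 'b',
# so the maximum matching has size 2 (vertex 0 is left unmatched).
size, matching = max_bipartite_matching([0, 1, 2], {0: ['a', 'b'], 1: ['a'], 2: ['b']})
```

Each successful call to try_augment implicitly performs the M ⊕ P augmentation by re-pointing match_of along the discovered path, so the matching grows by one edge per augmentation, as in the general algorithm.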
Mathematics
Graph theory
32882672
https://en.wikipedia.org/wiki/Sup%27ung%20Dam
Sup'ung Dam
The Sup'ung Dam (), also referred to as the Shuifeng Dam and originally the Suihō Dam, is a gravity dam on the Yalu River between Kuandian Manchu Autonomous County, Liaoning Province in China and Sakju County, North Pyongan Province in North Korea. The dam was constructed by the Japanese between 1937 and 1943 in order to generate electricity and has been repaired and renovated several times throughout the years, mainly due to spillway damage from flooding. During the Korean War, the dam was bombed by the United Nations Command three separate times in order to disrupt power generation for the North Koreans. At the time of its completion, the dam was the largest in Asia, and its power station was the third-largest hydroelectric power station in the world (after those of the Hoover Dam and Wilson Dam). It is still the largest hydroelectric power station on the Yalu (Korean: Amnok) River. Power produced at the dam's main 630 MW power station is evenly shared between China and North Korea. The dam is featured on the national emblem of North Korea.

Background

In 1937, during Japan's colonization of Korea, the Yalu Hydroelectric Company was established and in the same year construction began on the dam, with the Pyeongbuk Railway opening a rail line in 1939 to assist with the construction. In 1941, the dam was complete with two 100 MW generators operational, and the emperor of Manchukuo, Puyi, visited the dam. Four more generators were operational by 1943. The seventh generator was German-made and was not delivered due to shipping difficulties during World War II. At the time of its completion, the dam was the largest in Asia and third largest in the world. Power from the dam was used throughout the Korean peninsula and southern Manchuria (Manchukuo at the time). After World War II, in 1947, the Soviet Union occupied the area, and dismantled three of the seven generators and carried them to the Irtysh River dam in Kazakhstan. They were later re-installed during the 1950s.
Korean War attacks

The dam's power station and transformer yard were targeted by the United Nations Command three times during the Korean War in order to disrupt the power supply. Between 23 and 24 June 1952, the dam was attacked by 250 bombers and fighters, dropping 90 tons of munitions on the power station, transformer yard and auxiliary facilities. The power station was destroyed but the dam was left intact. After intelligence indicated it may have been partially operational again, the power station was again targeted and disabled on 12 September 1952 by B-29 bombers. By 1 February 1953, it was believed that two generators had been repaired and were operational once again. This resulted in a third raid on the dam on 15 February which left the power station inoperable once again.

Repairs and renovations

Throughout the dam's history, it underwent several renovations and repairs. Flooding in 1946 damaged the stilling basin at the toe of the dam and destroyed its spillway, requiring repairs the next year. Between September 1949 and April 1950, in a second repair, the spillway and plunge pool were renovated. Between 1955 and 1958, permanent post-war repairs were made to the dam and power station. The generators removed by the Soviets were replaced and the installed capacity of the power station was upgraded to 630 MW. In 1983, China began constructing an additional power station just downstream of the dam on their side of the river with two 67.5 MW generators. The first was commissioned in 1987 and the second in 1988. The most recent renovation occurred between 2009 and 2011 in order to improve the function of the dam's spillways. The US$24.5 million renovation was funded by the State Grid Corporation of China.

Design

The Supung is a tall and long concrete gravity dam with a crest elevation of . The dam's spillway consists of 26 sluice gates with a maximum discharge capacity of .
An auxiliary spillway north of the dam consists of 16 sluice gates and has a maximum discharge capacity of . The dam's reservoir has a capacity of of which is active (or "useful") for power generation. The dam sits at the head of a catchment area and its reservoir has a surface area of . The original power station at the base of the dam contains six 105 MW Francis turbine-generators which are afforded an average hydraulic head of . The additional power station on China's side contains two 67.5 MW Francis turbine generators. The total installed capacity of the dam's power stations is 765 MW.
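The 765 MW total quoted above is consistent with the generator counts given in the text: six 105 MW units at the original station plus two 67.5 MW units at the Chinese-side station (a simple arithmetic check):

```python
# Original power station: six 105 MW Francis turbine-generators.
original_station = 6 * 105          # 630 MW, matching the figure cited earlier
# Chinese-side power station: two 67.5 MW Francis turbine-generators.
chinese_station = 2 * 67.5          # 135 MW
# Combined installed capacity of both power stations.
total_capacity = original_station + chinese_station  # 765 MW
```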
Technology
Dams
28825877
https://en.wikipedia.org/wiki/Smartwatch
Smartwatch
A smartwatch is a portable wearable computer that resembles a wristwatch. Most modern smartwatches are operated via a touchscreen, and rely on mobile apps that run on a connected device (such as a smartphone) in order to provide core functions. Early smartwatches were capable of performing basic functions like calculating, displaying digital time, translating text, and playing games. More recent models often offer features comparable to smartphones, including apps, a mobile operating system, Bluetooth and Wi-Fi connectivity, and the ability to function as portable media players or FM radios. Some high-end models have cellular capabilities, allowing users to make and receive phone calls. While internal hardware varies, most smartwatches have a backlit LCD or OLED electronic visual display and are powered by a rechargeable lithium-ion battery. They may also incorporate GPS receivers, digital cameras, and microSD card readers, as well as various internal and environmental sensors such as thermometers, accelerometers, altimeters, barometers, gyroscopes, and ambient light sensors. Some smartwatches also function as activity trackers and include body sensors such as pedometers, heart rate monitors, galvanic skin response sensors, and ECG sensors. Software may include maps, health and exercise-related apps, calendars, and various watch faces.

History

Early years

The first digital watch was the Pulsar, introduced by the Hamilton Watch Company in 1972. The "Pulsar" became a brand name, and would later be acquired by Seiko in 1978. In 1982, a Pulsar watch (NL C01) was released which could store 24 digits, likely making it the first watch with user-programmable memory, or the first "memorybank" watch. With the introduction of personal computers in the 1980s, Seiko began to develop computers in the form of watches. The Data 2000 watch, named for its ability to store 2000 characters, came with an external keyboard for data entry.
Data was synchronised from the keyboard to the watch via electromagnetic coupling (wireless docking). Its memory was small, at only 112 digits. It was released in 1984, in gold, silver, and black. These models were followed by many others from Seiko during the 1980s, most notably the "RC Series". The RC-1000 Wrist Terminal from Seiko Epson was released in 1984; it was the first Seiko model to interface with a computer and was priced at around £100. It featured 2 KB of storage, a two-line, 12-character display, and data transfer with a computer via an RS232C interface. It was powered by a computer on a chip, and was compatible with most of the popular PCs of that time, including the Apple II, II+ and IIe, BBC Micro, Commodore 64, IBM PC, NEC 8201, Tandy Color Computer, Model 1000, 1200, 2000 and TRS-80 Model I, III, 4 and 4p. The RC-20 Wrist Computer was released in 1985, followed by the RC-4000 and RC-4500. During the 1980s, Casio began to market a successful line of "computer watches" in addition to its calculator watches, most notably the Casio Databank series. Casio and other companies also produced novelty "game watches", such as the Nelsonic game watches. Although pager watches were predicted in the early 1980s, it took until the end of the decade for them to become more common. Two models were particularly notable: Motorola and Timex produced the Wrist Watch Pager, while AT&T Corporation and Seiko produced the MessageWatch.

1990s

The Timex Datalink, introduced in 1994, was the first watch capable of transferring data wirelessly from a computer. Appointments and contacts created with Microsoft Schedule+ (the predecessor to MS Outlook) could be downloaded to the watch via patterns of visible light, which were displayed by a computer monitor and then detected by the watch's optical sensor. In 1998, Steve Mann designed and built the world's first Linux wristwatch.
He presented it at the IEEE ISSCC on 7 February 2000, where he was dubbed "the father of wearable computing". The watch later appeared on the cover of Linux Journal in July 2000, in which it was the topic of a featured article. Seiko launched the Ruputer in 1998 in Japan, a wristwatch computer with a 3.6 MHz processor. The Ruputer failed to achieve wide success due to its small, hard-to-read screen, cumbersome joystick method of navigation and text input, and poor battery life. Outside of Japan, this watch was distributed as the Matsucom onHand PC. Despite low demand, it was distributed until 2006, making it a smartwatch with a long life cycle. Ruputer and onHand PC applications are fully compatible with each other. This watch is sometimes considered the first smartwatch, as it was the first to display graphics (albeit in monochrome) and run third-party applications (mostly homebrew). In 1999, Samsung launched the world's first watch phone, the SPH-WP10. It had a protruding antenna, monochrome LCD screen, and 90-minute talk time with an integrated speaker and microphone.

2000s

In June 2000, IBM displayed a prototype for the WatchPad, a wristwatch that ran Linux. The original version had only 6 hours of battery life, which was later extended to 12. It featured 8 MB of memory and ran Linux 2.2. The device was later upgraded with an accelerometer, vibrating mechanism, and fingerprint sensor. IBM began to collaborate with Citizen Watch Co. to create the "WatchPad". The WatchPad 1.5 features a 320 × 240 QVGA monochrome touch-sensitive display and runs Linux 2.4. It also features calendar software, Bluetooth, 8 MB of RAM and 16 MB of flash memory. Citizen was hoping to market the watch to students and businessmen, with a retail price of around $399. Epson Seiko introduced their Chrono-bit wristwatch in September 2000. The Chrono-bit watches have a rotating bezel for data input, synchronize PIM data via a serial cable, and can load custom watch faces.
In 2003, Fossil released the Wrist PDA, a watch that ran the Palm OS and contained 8 MB of RAM and 4 MB of flash memory. It contained a built-in stylus to assist in using the tiny monochrome display, which had a resolution of 160×160 pixels. Although many reviewers declared the watch revolutionary, it was criticized for its weight (108 grams) and was discontinued in 2005. Also in 2003, Microsoft announced its SPOT smartwatch, which it released in early 2004. SPOT stands for Smart Personal Objects Technology, an initiative by Microsoft to personalize household electronics and other everyday gadgets. For instance, the company demonstrated coffee makers, weather stations, and alarm clocks featuring built-in SPOT technology. The device was a standalone smartwatch that offered information at a glance, in comparison to other devices that required more immersion and interaction. The information included weather, news, stock prices, and sports scores, and was transmitted through FM waves. It was accessible through a yearly subscription that cost between $39 and $59. The Microsoft SPOT Watch had a monochrome 90×126 pixel screen. Fossil, Suunto, and Tissot also sold smartwatches using SPOT technology. For instance, Fossil's Abacus, which was a variant of the Fossil Wrist PDA, retailed from $130 to $150. Sony Ericsson teamed up with Fossil to release the MBW-100, the first watch to connect to a phone via Bluetooth. This watch notified the user when receiving calls and text messages. The watch struggled to gain popularity, however, due to its exclusivity to Sony Ericsson phones. In 2009, Hermen van den Burg, CEO of Smartwatch and Burg Wearables, launched Burg, the first smartphone watch with its own SIM card. The watch was "standalone", meaning it did not require tethering to a smartphone. Burg received the award for the Most Innovative Product at the Canton Fair in April 2009. Samsung also launched their S9110 Watch Phone, which featured a color LCD display and a thin profile.
2010s

Sony Ericsson launched the Sony Ericsson LiveView, a wearable watch device which was essentially an external Bluetooth display for an Android smartphone. Vyzin Electronics Private Limited launched a ZigBee-enabled smart watch called VESAG, which featured cellular connectivity for remote health monitoring. Motorola released MOTOACTV on 6 November 2011. Pebble was a smartwatch funded via Kickstarter, which set a fundraising record for the site, raising $10.3 million between 12 April and 18 May 2012. The watch has a 144 × 168 pixel black and white memory LCD, using an ultra low-power "transflective LCD" manufactured by Sharp. It features a backlight, vibrating motor, magnetometer, ambient light sensors, and three-axis accelerometer. It can communicate with an Android or iOS device using both Bluetooth 2.1 and Bluetooth 4.0 (Bluetooth Low Energy) via Stonestreet One's Bluetopia+MFi software stack. Bluetooth 4.0 support was not initially enabled, but a firmware update in November 2013 enabled it. The watch is charged using a modified USB cable that attaches magnetically to the watch, allowing it to maintain water resistance. The battery was reported in April 2012 to last seven days. Based on feedback from Kickstarter backers, the developers added water resistance to the device's feature set. The Pebble has a waterproof rating of 5 atm, which means it can be submerged down to and has been tested in both fresh and salt water, allowing one to shower, dive or swim while wearing the watch. In 2013, startup Omate announced its TrueSmart watch via a Kickstarter campaign, claiming it was the first smartwatch to capture the full capabilities of a smartphone. The campaign raised over $1 million, making it the 5th most successful Kickstarter at that time. The device made its public debut in early 2014.
Consumer device analyst Avi Greengart, from research firm Current Analysis, suggested that 2013 may be the "year of the smartwatch", as "the components have gotten small enough and cheap enough" and many consumers own smartphones that are compatible with a wearable device. Wearable technology, such as Google Glass, was speculated to evolve into a business worth US$6 billion annually, and a July 2013 media report revealed that the majority of major consumer electronics manufacturers were undertaking work on a smartwatch device at the time of publication. The retail price of a smartwatch could be over US$300, plus data charges, while the minimum cost of smartphone-linked devices may be US$100. As of July 2013, the list of companies that were engaged in smartwatch development activities consisted of Acer, Apple, BlackBerry, Foxconn/Hon Hai, Google, LG, Microsoft, Qualcomm, Samsung, Sony, VESAG and Toshiba. Some notable omissions from this list include HP, HTC, Lenovo, and Nokia. Science and technology journalist Christopher Mims identified the following points in relation to the future of smartwatches:
- The physical size of smartwatches is likely to be large.
- Insufficient battery life is an ongoing problem for smartwatch developers, as the battery life of devices at the time of publication was three to four days, and this is likely to be reduced if further functions are added.
- New display technologies will be invented as a result of smartwatch research.
- The market success of smartwatches is unpredictable, as they may follow a similar trajectory to netbooks, or they may fulfil aims akin to those of Google Glass, another wearable electronic product.

Acer's S.T. Liew stated in an interview with gadget website Pocket-Lint that he believed that companies should be researching wearable technology, and that it could grow to "billions of dollars' worth of industry".
As of 4 September 2013, three new smartwatches had been launched: the Samsung Galaxy Gear, Sony SmartWatch 2, and the Qualcomm Toq. PHTL, a company based in Dallas, Texas, completed a Kickstarter campaign for its HOT Watch smartwatch in September 2013. This device enables users to leave their handsets in their pockets, since it has a speaker for phone calls in both quiet and noisy environments. In a September 2013 interview, Pebble founder Eric Migicovsky stated that his company was not interested in any acquisition offers. Two months later, he revealed that his company had sold 190,000 smartwatches, most of which were sold after its Kickstarter campaign closed. Motorola Mobility CEO Dennis Woodside confirmed during a December 2013 interview that his company was working on a smartwatch. Woodside further discussed the difficulties that other companies had experienced with wrist-wearable technologies. In April 2014, the Samsung Gear 2 was released, one of few smartwatches to be equipped with a digital camera. It has a resolution of two megapixels and can record video in 720p. At the 2014 Consumer Electronics Show, a large number of new smartwatches were released from various companies such as Razer Inc. and Archos; some called the show a "wrist revolution". At Google I/O on 25 June 2014, the Android Wear platform was introduced and the LG G Watch and Samsung Gear Live were released. The Wear-based Moto 360 was announced by Motorola in 2014. At the end of July, Swatch's CEO Nick Hayek announced that they would launch a Swatch Touch with smartwatch technologies in 2015. In the UK, London's Wearable Technology Show debuted several new models from smartwatch companies. Samsung's Gear S smartwatch was launched in late August 2014. The model features a curved Super AMOLED display and a built-in 3G modem.
TechCrunch's Darrell Etherington said that "we're finally starting to see displays that wrap around the contours of the wrist, rather than sticking out as a traditional flat surface". Samsung commenced selling the Gear S smartwatch in October 2014, alongside the Gear Circle headset accessory. At IFA 2014, Sony Mobile announced the third generation of its smartwatch series, the Sony Smartwatch 3, powered by Android Wear. Fashion Entertainments' e-paper watch was also announced at the show. On 9 September 2014, Apple Inc. announced its first smartwatch, called Apple Watch, with an early 2015 release date. On 24 April 2015, Apple Watch began shipping worldwide. Apple's first wearable attempt was met with considerable criticism during its pre-launch period, with many early technology reviews citing issues with battery life and hardware malfunctions. However, other outlets praised Apple for creating a device with the potential to compete with "traditional watches" instead of just smartwatches. The watch's screen only wakes when activated by lifting one's wrist, touching the screen, or pressing a button. On 29 October 2014, Microsoft announced the Microsoft Band, a smart fitness tracker and the company's first venture into wrist-worn devices since SPOT (Smart Personal Objects Technology) a decade earlier. The Microsoft Band was released at $199 the following day. In October 2015, Samsung unveiled the Samsung Gear S2. It features a rotating bezel for ease of use, and an IP68 rating for water resistance up to 1.5 meters deep for 30 minutes. The watch is compatible with industry-standard 20 mm straps. At the 2016 Consumer Electronics Show, Razer released the Nabu Watch, a dual-screen smartwatch. The first screen integrates an always-on illuminated backlit display and handles standard features such as date and time. The second OLED screen, activated by raising one's wrist, allows access to additional smart features.
Luxury watchmaker TAG Heuer also released TAG Heuer Connected, a smartwatch powered by Android Wear. On 31 August 2016, Samsung unveiled the Samsung Gear S3 smartwatch, with improved specifications. There are two models of the watch: the Samsung Gear S3 Classic and the LTE version Samsung Gear S3 Frontier. The top smartwatches that debuted at the 2017 Consumer Electronics Show included the Casio WSD-F20, the Misfit Wearables Vapor and the Garmin Fenix 5 series. On 22 September 2017, Apple released its Apple Watch Series 3 model, which offers built-in LTE cellular connectivity, allowing phone calls, messaging and data without relying on a nearby smartphone connection. In 2018, Samsung introduced the Samsung Galaxy Watch series. In its September 2018 keynote, Apple introduced a redesigned Apple Watch Series 4. It featured a larger display with smaller bezels, as well as an EKG feature built to detect abnormal heart function. In its September 2018 presentation, Qualcomm unveiled its Snapdragon Wear 3100 chip. It is a successor to the Wear 2100, and it includes greater power efficiency, and a separate low power core that can run basic watch functions as well as slightly more advanced functions, such as step tracking.

2020s

In 2020, the United States Food and Drug Administration granted marketing approval for an Apple Watch app called NightWare. The app aims to improve sleep for people suffering from PTSD-related nightmares, by vibrating when it detects a nightmare in progress based on heart rate monitoring and body movement.

Market and popularity

Smartwatches rose in popularity during the 2010s. Today, they are often used as fitness trackers, and smartphone "companions". According to a study from Statista, smartwatch revenue was estimated to reach $44.15 billion by 2023, and revenue per year was expected to continue to grow to $62.46 billion by 2028.
The top contributors to the market size of smartwatches include Apple Inc, Fossil Group Inc, Garmin Ltd, Google LLC, Huawei Technologies Co, Samsung, and Xiaomi. Typical features Many smartwatch models manufactured in the 2010s are completely functional as standalone products. Some are used in sports and feature a GPS tracking unit that can record historical data. For example, after a workout, data can be uploaded onto a computer or online in order to create a log of activities for analysis or sharing. Some watches can provide full GPS support, displaying maps and current coordinates, recording tracks, and bookmarking locations. With Apple, Sony, Samsung, and Motorola introducing smartwatch models, 15 percent of tech consumers use wearable technologies, which has attracted advertisers. Advertising on wearable devices was expected to increase heavily by 2017 as advanced hypertargeting modules were introduced to the devices; companies aim to create advertisements that are tailored for smartwatches. "Sport watch" functionality often includes activity tracker features, as included on GPS watches made for training, diving, and outdoor sports. Functions may include training programs (such as intervals), lap times, speed display, GPS tracking unit, route tracking, dive computer, heart rate monitor compatibility, Cadence sensor compatibility, and compatibility with sport transitions (as in triathlons). Other watches can cooperate with a smartphone app to execute their functions. They are paired to a smartphone, usually via Bluetooth. Some of these only work with a phone that runs the same mobile operating system; others use an OS that is unique to the watch, or otherwise is able to work with most smartphones. Once paired, the watch may function as a remote control for the phone. This allows the watch to display data such as calls, SMS messages, emails, calendar invitations, and any data that may be made available by relevant phone apps. 
LTE From about 2015, several manufacturers began to release smartwatches with LTE support, enabling direct connection to 3G/4G mobile networks for voice and SMS use, without the need to carry a paired smartphone. Security and health issues Tests by UK consumer organization Which? found that ultra-cheap smartwatches and fitness trackers sold online often had serious security flaws, including excessive data collection, insecure data storage, the inability to opt out of data collection, and a lack of a security lock function. Typically, a watch app can request permission to collect and store personally identifiable information, and apps can be rendered unusable if permission is denied. The user cannot know if information is being stored securely, and it cannot be deleted. There is no control over whether the supplier views it or sells it on, for whatever purpose. In many cases, data collected is not encrypted when transmitted to the supplier. Which? did not specifically test the functionality of ultra-cheap watches, but noticed during their security audit that some could detect heart rate, blood oxygen measurements, and steps while not being worn or moved. They said that this "suggests [that] they are at best inaccurate and at worst useless". In the United Kingdom, the Product Security and Telecommunications Infrastructure Act was passed in December 2022, effective from 2024. The Act, which should cover smartwatches, specifies security standards that manufacturers, importers and distributors (including online marketplaces) of smart devices must meet. A 2024 study by the University of Notre Dame found that some smartwatch straps contain high levels of PFAS, chemical compounds that have been classified as toxic or carcinogenic and might penetrate the skin. The researchers recommend replacing straps containing fluoroelastomer with straps made of silicone, which does not contain PFAS. 
Social implications and biases Hardware and software design choices in current smartwatches have sometimes favored certain demographics. For example, smartwatches track data more accurately for individuals with lighter skin than for individuals with darker skin. This is due to the method that smartwatches use to monitor heart rate. An article published by the Healthcare Degree describes the most common method, in which devices use optical sensors to track the presence of blood in the wrist, indicating a heartbeat. This type of lighting technique is cheaper and simpler to use than other methods; however, because the green light used has shorter wavelengths, it is less able to penetrate melanin, the pigment which causes darker skin. This can make heart rate tracking for darker-skinned individuals less accurate. Social consequences from the increase in popularity of smartwatches include data collection and data privacy concerns. Smartwatches are capable of collecting personal health data such as activity levels, heart rate, sleep patterns, and other metrics. This user data is often collected and stored in the cloud, which can sometimes be accessed by companies and researchers, and used for many purposes. There have been many cases of data misuse. One instance published by the Warren Alpert Medical School involved Fitbit facing a lawsuit in 2011 for selling personal health data to advertisers without user consent. Another instance occurred when Strava allowed users to share their routes, which led to the accidental revelation of several military base locations throughout the world. Operating systems AsteroidOS AsteroidOS is an open source firmware replacement for some Android Wear devices. Flyme OS Flyme OS is firmware based on the Android operating system, developed by Meizu. InfiniTime InfiniTime is the default firmware for the PineTime smartwatch, produced by Pine64. 
It is a community project based on FreeRTOS, as well as being free software licensed under the GNU General Public License. It supports Android, desktop Linux, the PinePhone, and SailfishOS as companion devices for features such as music playback, call and text notifications, navigation instructions, and time synchronization. As of January 2022, InfiniTime version 1.8's additional features include secure Bluetooth pairing, customisable watch faces, a flashlight, basic paint program, stopwatch, alarm clock, countdown timer, step counter, heart rate monitor, a one-player pong clone, a numerical puzzle game and a metronome. Features are under ongoing development, with firmware updates available via GitHub. HarmonyOS HarmonyOS is an operating system developed by Huawei, intended for the various "smart" devices they manufacture. Starting in 2021, it has seen use in Huawei Watches, replacing its predecessor, LiteOS. Sailfish OS Sailfish OS is a Linux-based operating system for various platforms, including Sailfish smartwatches. Tizen Tizen is a Linux-based operating system developed for various platforms, including smartwatches. Tizen is a project within the Linux Foundation and is governed by a Technical Steering Group (TSG) composed of Samsung and Intel among others. Samsung released the Samsung Gear 2, Gear 2 Neo, Samsung Gear S, Samsung Gear S2 and Samsung Gear S3, all running Tizen. watchOS watchOS is a proprietary mobile operating system developed by Apple Inc. to run on the Apple Watch. Wear OS Wear OS, previously known as Android Wear, is a smartwatch operating system developed by Google Inc. For children and the elderly In China, since around 2015, smartwatches have become widely used by schoolchildren, and are widely advertised on Chinese television as a safety device for them. The devices are typically colorful and made of plastic, and they often lack a display unless a button is pressed. 
While their functionality is limited, they primarily allow children to make and receive calls, display the time, and sometimes measure air temperature. These smartwatches typically cost between US$100 and US$200. Children's smartwatches are also sold in other countries. Some smartwatches can also help elderly or disabled people, reporting their location to a caretaker if they fall or become lost. Smart strap A "smart strap" is a technology that is capable of providing enhanced functionality to smartwatches, through built-in sensors located within the strap. For example, smart strap accessories can add a webcam, ECG sensor, and bioimpedance measurement features.
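The green-light optical heart-rate method described under "Social implications and biases" can be illustrated with a toy sketch: a photoplethysmography (PPG) sensor yields a roughly periodic signal, and the pulse rate can be estimated by counting oscillations. The function name, the zero-crossing approach, and the synthetic waveform below are all illustrative assumptions, not any vendor's actual algorithm.

```python
# Illustrative sketch (not any vendor's algorithm): estimating heart rate
# from a PPG-like signal by counting upward zero-crossings.

import math

def estimate_bpm(samples, sample_rate_hz):
    """Estimate beats per minute by counting upward crossings of the mean."""
    mean = sum(samples) / len(samples)
    centred = [s - mean for s in samples]
    crossings = sum(1 for a, b in zip(centred, centred[1:]) if a <= 0 < b)
    duration_min = len(samples) / sample_rate_hz / 60.0
    return crossings / duration_min

# Synthetic PPG stand-in: a 70 beats-per-minute sinusoid, 50 Hz for 30 s.
rate = 50
signal = [math.sin(2 * math.pi * (70 / 60) * t / rate) for t in range(rate * 30)]
print(round(estimate_bpm(signal, rate)))  # ~70 for this synthetic input
```

Real devices must additionally cope with motion artifacts and, as the section notes, with the weaker penetration of green light through melanin-rich skin, which degrades the signal-to-noise ratio this sketch takes for granted.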
Technology
Computer hardware
null
28831147
https://en.wikipedia.org/wiki/Roundness%20%28geology%29
Roundness (geology)
Roundness is the degree of smoothing due to abrasion of sedimentary particles. It is expressed as the ratio of the average radius of curvature of the edges or corners to the radius of curvature of the maximum inscribed sphere. Measure of roundness Rounding, roundness or angularity are terms used to describe the shape of the corners on a particle (or clast) of sediment. Such a particle may be a grain of sand, a pebble, cobble or boulder. Although roundness can be numerically quantified, for practical reasons geologists typically use a simple visual chart with up to six categories of roundness: Very angular: corners sharp and jagged Angular Sub-angular Sub-rounded Rounded Well-rounded: corners completely rounded This six-fold category characterisation is used in the Shepard and Young comparison chart and the Powers chart but the Krumbein chart has nine categories. Rounding of sediment particles can indicate the distance and time involved in the transportation of the sediment from the source area to where it is deposited. Speed of rounding will depend on composition, hardness and mineral cleavage. For example, a soft claystone pebble will obviously round much faster, and over a shorter distance of transport, than a more resistant quartz pebble. The rate of rounding is also affected by the grain size and energy conditions. Angularity (A) and roundness (R) are but two parameters of the complexity of a clast's generalised form (F). A defining expression is given by: F=f(Sh, A, R, Sp, T) where f denotes a functional relationship between these terms and where Sh denotes the shape, Sp the sphericity and T the micro-scale surface texture. An example of this practical use has been applied to the roundness of the grains in the Gulf of Mexico in order to observe the distance from the source rocks. 
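The ratio definition at the top of this article (average radius of curvature of the corners over the radius of the maximum inscribed circle, after Wadell) can be sketched numerically. The corner radii below are invented illustrative values, not measured data.

```python
# Sketch of Wadell-style 2-D roundness: mean radius of curvature of a
# grain's corners divided by the radius of the maximum inscribed circle.
# Roundness ranges from near 0 (very angular) to 1 (well-rounded).

def wadell_roundness(corner_radii_mm, max_inscribed_radius_mm):
    if not corner_radii_mm:
        raise ValueError("need at least one corner radius")
    mean_corner = sum(corner_radii_mm) / len(corner_radii_mm)
    return mean_corner / max_inscribed_radius_mm

# An angular grain: small, sharp corners relative to its body.
angular = wadell_roundness([0.1, 0.15, 0.08, 0.12], max_inscribed_radius_mm=1.0)
# A well-rounded grain: corner curvature approaches the inscribed circle.
rounded = wadell_roundness([0.8, 0.9, 0.85], max_inscribed_radius_mm=1.0)
print(angular, rounded)
```

In practice the corner radii are measured from grain outlines (historically by hand with a comparison chart, now often by image analysis), which is why the simple visual six-category charts remain the routine field method.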
Abrasion Abrasion occurs in natural environments such as beaches, sand dunes, river or stream beds by the action of current flow, wave impact, glacial action, wind, gravitational creep and other erosive agents. Recent studies have demonstrated that aeolian processes are more efficient in the rounding of sedimentary grains. Experimental studies have shown that the angularity of sand-sized detrital quartz can remain virtually unchanged after hundreds of kilometers of fluvial transport. Paleogeographic value of determining the degree of roundness of clastic material Roundness is an important indicator of the genetic affiliation of a clastic rock. The degree of roundness points to the range and mode of transport of clastic material, and can also serve as a search criterion in mineral exploration, especially for placer deposits. Alluvial debris in major rivers tends to exhibit a high degree of roundness. Alluvium from small rivers is less rounded. Deposits of ephemeral streams exhibit little rounding with angular clasts. Clast rounding in non-sedimentary environments Pebble dikes are dikelike bodies found in intrusive environments, usually associated with porphyry-type ore deposits, which contain variably rounded fragments in a finely-ground matrix of pulverized rock. The clasts originate in deeper formations in hydrothermal systems, and have been brought up explosively by diatreme or intrusive breccias as groundwater and/or magmatic water flash boils. The clasts have been rounded due to thermal spallation, milling action, or corrosion by hydrothermal fluids. The ore deposits of Tintic mining district and White Pine mining district, and East Traverse Mountain, Utah; Urad, Mt. Emmons, Central City, Leadville, and Ouray, Colorado; Butte, Montana; Silver Bell; and Bisbee, Arizona; and the Kiruna iron deposit in Sweden, Cuajone and Toquepala in Peru; El Salvador in Chile; Mt. Morgan in Australia; and Agua Rica in Argentina contain these pebble dikes.
Physical sciences
Sedimentology
Earth science
2010826
https://en.wikipedia.org/wiki/Sodium%20oxide
Sodium oxide
Sodium oxide is a chemical compound with the formula Na2O. It is used in ceramics and glasses. It is a white solid but the compound is rarely encountered. Instead, "sodium oxide" is used to describe components of various materials such as glasses and fertilizers which contain oxides that include sodium and other elements; sodium oxide is one such component. Structure The structure of sodium oxide has been determined by X-ray crystallography. Most alkali metal oxides M2O (M = Li, Na, K, Rb) crystallise in the antifluorite structure. In this motif the positions of the anions and cations are reversed relative to their positions in CaF2, with sodium ions tetrahedrally coordinated to 4 oxide ions and oxide cubically coordinated to 8 sodium ions. Preparation Sodium oxide is produced by the reaction of sodium with sodium hydroxide, sodium peroxide, or sodium nitrite: 2 NaOH + 2 Na → 2 Na2O + H2; Na2O2 + 2 Na → 2 Na2O; 2 NaNO2 + 6 Na → 4 Na2O + N2. To the extent that NaOH is contaminated with water, correspondingly greater amounts of sodium are employed. Excess sodium is distilled from the crude product. A second method involves heating a mixture of sodium azide and sodium nitrate: 5 NaN3 + NaNO3 → 3 Na2O + 8 N2. Burning sodium in air produces a mixture of Na2O and sodium peroxide (Na2O2). A third much less known method involves heating sodium metal with iron(III) oxide (rust): 6 Na + Fe2O3 → 3 Na2O + 2 Fe. The reaction should be done in an inert atmosphere to avoid the reaction of sodium with the air instead. Applications Glassmaking Glasses are often described in terms of their sodium oxide content although they do not really contain Na2O. Furthermore, such glasses are not made from sodium oxide, but the equivalent of Na2O is added in the form of "soda" (sodium carbonate), which loses carbon dioxide at high temperatures: Na2CO3 → Na2O + CO2. A typical manufactured glass contains around 15% sodium oxide, 70% silica (silicon dioxide), and 9% lime (calcium oxide). The sodium carbonate "soda" serves as a flux to lower the temperature at which the silica mixture melts. 
Such soda-lime glass has a much lower melting temperature than pure silica and has slightly higher elasticity. These changes arise because the Na2O-based material is somewhat more flexible. Reactions Sodium oxide reacts readily and irreversibly with water to give sodium hydroxide: Na2O + H2O → 2 NaOH. Because of this reaction, sodium oxide is sometimes referred to as the base anhydride of sodium hydroxide (more archaically, "anhydride of caustic soda").
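The soda-for-Na2O conversion in the glassmaking section is a simple molar-mass ratio, sketched below. The function name is illustrative; molar masses are standard values rounded to two decimals.

```python
# Illustrative batch arithmetic: mass of soda (Na2CO3) that must be
# charged to leave a given mass of Na2O in the glass, since on melting
# Na2CO3 -> Na2O + CO2 (the CO2 escapes).

M_NA2CO3 = 105.99  # g/mol, sodium carbonate
M_NA2O = 61.98     # g/mol, sodium oxide

def soda_needed(na2o_mass_g):
    """Mass of sodium carbonate decomposing to the given Na2O mass."""
    return na2o_mass_g * M_NA2CO3 / M_NA2O

# For 100 g of a glass that is 15% Na2O by mass:
print(round(soda_needed(15.0), 1))  # ~25.7 g of soda per 100 g of glass
```

The ratio exceeds 1 because roughly 40% of the carbonate's mass leaves the melt as carbon dioxide.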
Physical sciences
Alkali oxide salts
Chemistry
2010836
https://en.wikipedia.org/wiki/Sodium%20peroxide
Sodium peroxide
Sodium peroxide is an inorganic compound with the formula Na2O2. This yellowish solid is the product of sodium ignited in excess oxygen. It is a strong base. This metal peroxide exists in several hydrates and peroxyhydrates including Na2O2·2H2O2·4H2O, Na2O2·2H2O, Na2O2·2H2O2, and Na2O2·8H2O. The octahydrate, which is simple to prepare, is white, in contrast to the anhydrous material. Properties Sodium peroxide crystallizes with hexagonal symmetry. Upon heating, the hexagonal form undergoes a transition into a phase of unknown symmetry at 512 °C. With further heating above the 657 °C boiling point, the compound decomposes to Na2O, releasing O2. 2 Na2O2 → 2 Na2O + O2 Preparation Commercially, sodium peroxide is produced from the elements in a two-stage process. First sodium is oxidized to sodium oxide: 4 Na + O2 → 2 Na2O. Subsequently, this oxide is treated with more oxygen: 2 Na2O + O2 → 2 Na2O2. This was the method by which the substance was discovered in 1810 by Joseph Louis Gay-Lussac and Louis Jacques Thénard, as well as how it was for the first time commercially made by Hamilton Castner in the 1890s. It may also be produced by passing ozone gas over solid sodium iodide inside a platinum or palladium tube. The ozone oxidizes the sodium to form sodium peroxide. The iodine can be sublimed by mild heating. The platinum or palladium catalyzes the reaction and is not attacked by the sodium peroxide. The octahydrate can be produced by treating sodium hydroxide with hydrogen peroxide. Uses Sodium peroxide hydrolyzes to give sodium hydroxide and hydrogen peroxide according to the reaction Na2O2 + 2 H2O → 2 NaOH + H2O2 Sodium peroxide was used to bleach wood pulp for the production of paper and textiles. Presently it is mainly used for specialized laboratory operations, e.g., the extraction of minerals from various ores. Sodium peroxide may go by the commercial names of Solozone and Flocool. In chemistry preparations, sodium peroxide is used as an oxidizing agent. 
It is also used as an oxygen source by reacting it with carbon dioxide to produce oxygen and sodium carbonate: 2 Na2O2 + 2 CO2 → 2 Na2CO3 + O2; 2 Na2O2 + 2 H2O + 4 CO2 → 4 NaHCO3 + O2. It is thus particularly useful in scuba gear, submarines, etc. Lithium peroxide and potassium superoxide have similar uses. Sodium peroxide was once used on a large scale for the production of sodium perborate, but alternative routes to that cleaning agent have been developed.
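The air-revitalisation chemistry above reduces to simple stoichiometry: two moles of peroxide liberate one mole of O2. A rough sketch (function name illustrative, molar masses standard rounded values):

```python
# Stoichiometric sketch for 2 Na2O2 + 2 CO2 -> 2 Na2CO3 + O2:
# grams of oxygen liberated per mass of sodium peroxide consumed.

M_NA2O2 = 77.98  # g/mol, sodium peroxide
M_O2 = 32.00     # g/mol, dioxygen

def o2_yield_g(na2o2_mass_g):
    """Grams of O2 released; 2 mol of peroxide give 1 mol of O2."""
    mol_peroxide = na2o2_mass_g / M_NA2O2
    return (mol_peroxide / 2) * M_O2

print(round(o2_yield_g(1000.0), 1))  # ~205.2 g of O2 per kg of Na2O2
```

This oxygen-per-mass figure is what makes the compound attractive for submarines and diving rebreathers, where every kilogram of stored chemistry counts.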
Physical sciences
Peroxide salts
Chemistry
2010867
https://en.wikipedia.org/wiki/Potassium%20peroxide
Potassium peroxide
Potassium peroxide is an inorganic compound with the molecular formula K2O2. It is formed as potassium reacts with oxygen in the air, along with potassium oxide (K2O) and potassium superoxide (KO2). Potassium peroxide reacts with water to form potassium hydroxide and oxygen: 2 K2O2 + 2 H2O → 4 KOH + O2 Properties Potassium peroxide is a highly reactive, oxidizing white to yellowish solid which, while not flammable itself, reacts violently with flammable materials. It decomposes violently on contact with water. The standard enthalpy of formation of potassium peroxide is ΔHf° = −496 kJ/mol. Usage Potassium peroxide is used as an oxidizing agent and bleach (due to the peroxide), and to purify air.
Physical sciences
Peroxide salts
Chemistry
2011471
https://en.wikipedia.org/wiki/Sodium%20perchlorate
Sodium perchlorate
Sodium perchlorate is an inorganic compound with the chemical formula NaClO4. It consists of sodium cations (Na+) and perchlorate anions (ClO4−). It is a white crystalline, hygroscopic solid that is highly soluble in water and ethanol. It is usually encountered as sodium perchlorate monohydrate (NaClO4·H2O). The compound is noteworthy as the most water-soluble of the common perchlorate salts. Sodium perchlorate and other perchlorates have been found on the planet Mars, first detected by the NASA probe Phoenix in 2009. This was later confirmed by spectral analysis by the Mars Reconnaissance Orbiter in 2015 of what is thought to be brine seeps, which may be the first evidence of flowing liquid water containing hydrated salts on Mars. Selected properties Its heat of formation is −382.75 kJ/mol, i.e. it is thermally stable up to high temperatures. At 490 °C it undergoes thermal decomposition, producing sodium chloride and dioxygen. It crystallizes in the rhombic crystal system. Uses Perchloric acid is made by treating NaClO4 with HCl. Ammonium perchlorate and potassium perchlorate, of interest in rocketry and pyrotechnics, are prepared by double decomposition from a solution of sodium perchlorate and ammonium chloride or potassium chloride, respectively. Laboratory applications Because of its high solubility (2096 g/L at 25 °C) and the inert behaviour of dissolved perchlorate, solutions of NaClO4 are often used as unreactive background electrolyte (supporting electrolyte). Indeed, because the reduction reaction of perchlorate is kinetically limited even if it is a thermodynamically unstable compound, perchlorate is a redox non-sensitive anion. It is also a non-complexing anion with a fairly low ligand binding capacity. 
In the past perchlorates were quite widely used in the synthesis of coordination compounds because their larger size (compared to halides) and excellent hydrogen bonding abilities made them highly effective counter-ions for complexes with ammine, aquo and halido ligands, often yielding highly crystalline products. However, because of the hazards (see Safety section below) associated with their use, they have been largely superseded in most labs by much less risky counterions such as tetrafluoroborate (BF4−), hexafluorophosphate (PF6−) and related anions. Sodium perchlorate is the precursor to ammonium, potassium and lithium perchlorate salts, often taking advantage of their low solubility in water relative to NaClO4 (209 g/(100 mL) at 25 °C). It is used for denaturing proteins in biochemistry and in standard DNA extraction and hybridization reactions in molecular biology. In medicine Sodium perchlorate can be used to block iodine uptake before administration of iodinated contrast agents in patients with subclinical hyperthyroidism (suppressed TSH). Production Sodium perchlorate is produced by anodic oxidation of sodium chlorate (NaClO3) at an inert electrode, such as platinum: ClO3− + H2O → ClO4− + 2 H+ + 2 e− (acidic medium); ClO3− + 2 OH− → ClO4− + H2O + 2 e− (alkaline medium). Safety All perchlorates are potent oxidisers. When mixed with organic compounds extreme combustion reactions can result, hence the use of such materials in fireworks, low tech rocket propellants and improvised explosives. Despite their kinetic inertness, mixtures of perchlorate with organic compounds can ignite/detonate spontaneously and be shock sensitive. Acute toxicity: The median lethal dose (LD50) is 2 – 4 g/kg (rabbits, oral). Chronic toxicity: The frequent consumption of drinking water with low concentrations (in the range of μg/L, ppb) of perchlorate is harmful for the thyroid gland as the perchlorate anion competes with the uptake of iodide, severely disrupting thyroid function. 
Environmental effects: Perchlorate anions are regarded as persistent pollutants that can cause long term contamination of drinking water, and NaClO4's high solubility makes it highly mobile in the environment. Significant concerns have been raised about the environmental impacts of perchlorates because of their ability to disrupt iodide uptake and metabolism.
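The anodic oxidation in the Production section is a two-electron process per perchlorate ion, so the charge budget follows directly from Faraday's law. A sketch, assuming an idealised 100% current efficiency (real cells run lower), with an illustrative function name:

```python
# Charge-budget sketch for ClO3- + H2O -> ClO4- + 2 H+ + 2 e-,
# assuming 100% current efficiency (an idealisation).

F = 96485.0        # C/mol, Faraday constant
M_NACLO4 = 122.44  # g/mol, sodium perchlorate

def hours_at_current(mass_g, amps):
    """Electrolysis time to produce `mass_g` of NaClO4 at a given current."""
    mol = mass_g / M_NACLO4
    coulombs = mol * 2 * F  # two electrons transferred per perchlorate ion
    return coulombs / amps / 3600.0

# One mole (122.44 g) at a steady 10 A:
print(round(hours_at_current(122.44, 10.0), 2))  # ~5.36 h
```

Dividing the idealised figure by the cell's actual current efficiency gives a more realistic run time.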
Physical sciences
Halide oxyanions
Chemistry
2013229
https://en.wikipedia.org/wiki/Megazostrodon
Megazostrodon
Megazostrodon is an extinct genus of basal mammaliaforms belonging to the order Morganucodonta. It is approximately 200 million years old. Two species are known: M. rudnerae from the Early Jurassic of Lesotho and South Africa, and M. chenali from the Late Triassic of France. Discovery The type species M. rudnerae was first discovered in 1966 in the Elliot Formation of Lesotho, southern Africa, by palaeontologist and archaeologist Ione Rudner. It was first described by A.W. Crompton and F.A. Jenkins Jr in 1968. The generic name Megazostrodon means, literally, 'large girdle tooth' (from the Greek mega-large, zostros-girdle and don-tooth—referring to the large external cingula of the upper molars). The specific name honours Rudner for her discovery. A second species, M. chenali, was named in 2015 based on remains found in Saint-Nicolas-de-Port, France. It is named after the French palaeontologist Emmanuel Chenal. Characteristics Megazostrodon was a small, shrew-like animal between long which probably ate insects and small reptiles. It is thought to have been nocturnal as it had a larger brain than earlier cynodonts and the enlarged areas of its brain were found to be those that process sounds and smells. This was probably in order to avoid being in competition with the reptiles or becoming prey to the dinosaurs. Although considered a close relative of mammals, it did have some non-mammalian characteristics inherited from its predecessors: the first two vertebrae (atlas and axis) were still unfused as in earlier cynodonts, and it only had three sacral vertebrae instead of the usual mammalian five. An interclavicle is also present, which is still present in monotremes but lost in the line leading to therian mammals. Evolution Megazostrodon is the best-known genus of the family Megazostrodontidae, part of the larger group Morganucodonta. The other members of this family that are currently known are Indozostrodon, Dinnetherium, Wareolestes and Brachyzostrodon. 
The megazostrodontids used to be classified as members of a group of mammals called the triconodonts, which are thought to have evolved from a specific group of cynodonts during the late Triassic and early Jurassic periods. However, recent classifications consider the megazostrodontids to be mammaliaforms outside of the stricter grouping of Mammalia proper, while the triconodonts remain within the mammalian crown group. These early mammaliaforms possessed many traits which made them well-suited for an active lifestyle. They had a heterodont dentition consisting of four types of teeth: incisors, canines, premolars and molars, as opposed to the uniform (homodont) teeth of most reptiles. This enabled them to chew and therefore process their food more thoroughly than their reptilian cousins. There is evidence that the movement of the mandible allowed a shearing action to chew food. Their skeletons changed so that their limbs were more mobile, being less laterally splayed, and allowing faster forward motion. They had a short ribcage and large lungs, which allowed efficient respiration. Their lower jaw comprised a single bone—the dentary (as opposed to the multiple bones in the jaws of their ancestors, or seven different bones found in reptilian lower jaws). The other bones which once made up the jaw had reduced, and in later mammals would become incorporated into the middle ear, enhancing their hearing. Probably the most important change in the evolution of the first mammals was that their ancestors, the cynodonts, had become endothermic. This meant that they generated their own body heat, relying on the food they ate to help sustain their body temperature rather than depending on their surrounding environment. This permitted higher, more sustained activity levels during the day than reptiles (reptiles must frequently perform temperature regulation activities such as sun basking and seeking shade). 
It was probably the key to becoming nocturnal—a major advantage in a world where most predators were active during the day. Phylogeny Reproduction Like placentals and possibly Erythrotherium, Megazostrodon is unique among mammaliaforms in lacking epipubic bones. It is likely that Megazostrodon, like the modern monotremes, laid eggs.
Biology and health sciences
Stem-mammals
Animals
2013448
https://en.wikipedia.org/wiki/Scrubber
Scrubber
Scrubber systems (e.g. chemical scrubbers, gas scrubbers) are a diverse group of air pollution control devices that can be used to remove some particulates and/or gases from industrial exhaust streams. An early application of a carbon dioxide scrubber was in the submarine Ictíneo I in 1859, a role in which they continue to be used today. Traditionally, the term "scrubber" has referred to pollution control devices that use liquid to wash unwanted pollutants from a gas stream. Recently, the term has also been used to describe systems that inject a dry reagent or slurry into a dirty exhaust stream to "wash out" acid gases. Scrubbers are one of the primary devices that control gaseous emissions, especially acid gases. Scrubbers can also be used for heat recovery from hot gases by flue-gas condensation. They are also used for the high flows in solar, PV, or LED processes. There are several methods to remove toxic or corrosive compounds from exhaust gas and neutralize it. Combustion Combustion is sometimes the cause of harmful exhausts, but, in many cases, combustion may also be used for exhaust gas cleaning if the temperature is high enough and enough oxygen is available. Wet scrubbing The exhaust gases of combustion may contain substances considered harmful to the environment, and the scrubber may remove or neutralize those. A wet scrubber is used for cleaning air, fuel gas or other gases of various pollutants and dust particles. Wet scrubbing works via the contact of target compounds or particulate matter with the scrubbing solution. Water is the most common solvent used to remove inorganic contaminants, particularly for dust, but solutions of reagents that specifically target certain compounds may also be used. Process exhaust gas can also contain water-soluble toxic and/or corrosive gases like hydrochloric acid (HCl) or ammonia (NH3). These can be removed very well by a wet scrubber. 
Removal efficiency of pollutants is improved by increasing residence time in the scrubber or by the increase of surface area of the scrubber solution by the use of a spray nozzle, packed towers or an aspirator. Wet scrubbers may increase the proportion of water in the gas, resulting in a visible stack plume, if the gas is sent to a stack. Wet scrubbers can also be used for heat recovery from hot gases by flue-gas condensation. In this mode, termed a condensing scrubber, water from the scrubber drain is circulated through a cooler to the nozzles at the top of the scrubber. The hot gas enters the scrubber at the bottom. If the gas temperature is above the water dew point, it is initially cooled by evaporation of water drops. Further cooling causes water vapors to condense, adding to the amount of circulating water. The condensation of water releases significant amounts of low temperature heat due to the high value of the specific latent heat of the vaporisation of water (more than 2 GJ per ton of water), which can be recovered by the cooler for e.g. district heating purposes. Excess condensed water must continuously be removed from the circulating water. Dry scrubbing A dry or semi-dry scrubbing system, unlike the wet scrubber, does not saturate the flue gas stream that is being treated with moisture. In some cases no moisture is added, while in others only the amount of moisture that can be evaporated in the flue gas without condensing is added. Therefore, dry scrubbers generally do not have a stack steam plume or wastewater handling/disposal requirements. Dry scrubbing systems are used to remove acid gases (such as SO2 and HCl) primarily from combustion sources. There are a number of dry type scrubbing system designs. 
However, all consist of two main sections or devices: a device to introduce the acid gas sorbent material into the gas stream and a particulate matter control device to remove reaction products, excess sorbent material as well as any particulate matter already in the flue gas. Dry scrubbing systems can be categorized as dry sorbent injectors (DSIs) or as spray dryer absorbers (SDAs). Spray dryer absorbers are also called semi-dry scrubbers or spray dryers. Dry scrubbing systems are often used for the removal of odorous and corrosive gases from wastewater treatment plant operations. The medium used is typically an activated alumina compound impregnated with materials to handle specific gases such as hydrogen sulfide. Media used can be mixed together to offer a wide range of removal for other odorous compounds such as methyl mercaptans, aldehydes, volatile organic compounds, dimethyl sulfide, and dimethyl disulfide. Dry sorbent injection involves the addition of an alkaline material (usually hydrated lime, soda ash, or sodium bicarbonate) into the gas stream to react with the acid gases. The sorbent can be injected directly into several different locations: the combustion process, the flue gas duct (ahead of the particulate control device), or an open reaction chamber (if one exists). The acid gases react with the alkaline sorbents to form solid salts which are removed in the particulate control device. These simple systems can achieve only limited acid gas (SO2 and HCl) removal efficiencies. Higher collection efficiencies can be achieved by increasing the flue gas humidity (i.e., cooling using water spray). These devices have been used on medical waste incinerators and a few municipal waste combustors. In spray dryer absorbers, the flue gases are introduced into an absorbing tower (dryer) where the gases are contacted with a finely atomized alkaline slurry. 
Acid gases are absorbed by the slurry mixture and react to form solid salts which are removed by the particulate control device. The heat of the flue gas is used to evaporate all the water droplets, leaving a non-saturated flue gas to exit the absorber tower. Spray dryers are capable of achieving high (80+%) acid gas removal efficiencies. These devices have been used on industrial and utility boilers and municipal waste incinerators. Adsorber Many chemicals can also be removed from exhaust gas by using adsorber material. The flue gas is passed through a cartridge which is filled with one or several adsorber materials and has been adapted to the chemical properties of the components to be removed. This type of scrubber is sometimes also called a dry scrubber. The adsorber material has to be replaced after its surface is saturated. Note: adsorption is a surface phenomenon, whereas absorption involves the entire material; for example, activated carbon is an adsorbent used for the adsorption of odorous compounds. Mercury removal Mercury is a highly toxic element commonly found in coal and municipal waste. Wet scrubbers are only effective for removal of soluble mercury species, such as oxidized mercury, Hg2+. Mercury vapor in its elemental form, Hg0, is insoluble in the scrubber slurry and is not removed. Therefore, an additional process of Hg0 conversion is required to complete mercury capture. Usually halogens are added to the flue gas for this purpose. The type of coal burned as well as the presence of a selective catalytic reduction unit both affect the ratio of elemental to oxidized mercury in the flue gas and thus the degree to which the mercury is removed. In July 2015, one study found that some mercury scrubbers installed on coal power plants inadvertently capture PAH (polycyclic aromatic hydrocarbon) emissions as well.
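The sorbent chemistry and removal-efficiency figures discussed above can be illustrated with a small calculation. The sketch below is purely illustrative: the 1:1 lime/SO2 stoichiometry is the standard Ca(OH)2 + SO2 reaction, but the sorbent excess and the sample inlet/outlet concentrations are assumptions, not values from the text.

```python
# Molar masses in g/mol
M_SO2 = 64.07      # sulfur dioxide
M_LIME = 74.09     # hydrated lime, Ca(OH)2

def lime_required(so2_kg, stoich_excess=1.0):
    """Stoichiometric Ca(OH)2 mass (kg) to capture so2_kg of SO2.

    Assumes the 1:1 reaction Ca(OH)2 + SO2 -> CaSO3 + H2O;
    stoich_excess > 1 models the sorbent excess used in practice.
    """
    return so2_kg * (M_LIME / M_SO2) * stoich_excess

def removal_efficiency(c_in, c_out):
    """Fractional removal efficiency from inlet/outlet concentrations."""
    return (c_in - c_out) / c_in

# Hypothetical example: capture 100 kg of SO2 with 20% sorbent excess,
# and a flue gas cleaned from 500 ppm down to 75 ppm of SO2:
print(round(lime_required(100.0, 1.2), 1))  # ~138.8 kg of lime
print(removal_efficiency(500.0, 75.0))      # 0.85, i.e. 85% removal
```

The same efficiency formula applies to any of the pollutants named above; only the stoichiometric factor changes with the sorbent/gas pair.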
Scrubber waste products One side effect of scrubbing is that the process only moves the unwanted substance from the exhaust gases into a liquid solution, solid paste or powder form. This must be disposed of safely if it cannot be reused. For example, mercury removal results in a waste product that either needs further processing to extract the raw mercury, or must be buried in a special hazardous waste landfill that prevents the mercury from seeping out into the environment. Such disposal is problematic: the waste is extremely hazardous to the environment, and many facilities can neither process it on site nor readily transfer it to a suitable landfill. As an example of reuse, limestone-based scrubbers in coal-fired power plants can produce a synthetic gypsum of sufficient quality to be used to manufacture drywall and other industrial products. Bacteria spread Poorly maintained scrubbers have the potential to spread disease-causing bacteria, a problem resulting from inadequate cleaning. For example, a 2005 outbreak of Legionnaires' disease in Norway was traced to a few contaminated scrubbers; the outbreak caused 10 deaths and more than 50 cases of infection. Scrubbers on ships Scrubbers were first used on board ships for the production of inert gas for oil tanker operations. Later, in preparation for the global 0.5% sulfur cap in 2020, the International Maritime Organization (IMO) adopted guidelines on the approval, installation and use of exhaust gas scrubbers (exhaust gas cleaning systems) on board ships to ensure compliance with the sulfur regulation of MARPOL Annex VI. Flag states must approve such systems and port states can (as part of their port state control) ensure that such systems are functioning correctly. If a scrubber system is not functioning properly (and the IMO procedures for such malfunctions are not adhered to), port states can sanction the ship.
The United Nations Convention on the Law of the Sea also bestows port states with a right to regulate (and even ban) the use of open loop scrubber systems within ports and internal waters.
Technology
Food, water and health
null
2014227
https://en.wikipedia.org/wiki/Ravine
Ravine
A ravine is a landform that is narrower than a canyon and is often the product of streambank erosion. Ravines are typically classified as larger in scale than gullies, although smaller than valleys. Ravines may also be called a cleuch, dell, ghout (Nevis), gill or ghyll, glen, gorge, kloof (South Africa), or chine (Isle of Wight). A ravine is generally a fluvial slope landform of relatively steep (cross-sectional) sides, on the order of twenty to seventy percent in gradient. Ravines may or may not have active streams flowing along the downslope channel which originally formed them; moreover, they are often characterized by intermittent streams, since their geographic scale may not be sufficiently large to support a perennial stream. Definition According to Merriam-Webster, a ravine is "a small, narrow, steep-sided valley that is larger than a gully and smaller than a canyon and that is usually worn by running water". Some societies and languages do not differentiate between a gully and a ravine; in others, there is a distinction, particularly when concerning environmental management. Formation Gullies are often found in hilly or mountainous regions, where water runoff is guided downhill by steep slopes and over time erodes the landscape. A ravine is the final step in gully erosion, formed when a stream has eroded so severely that it forms a deep cut in the earth. A gully can be classified as a ravine after it reaches a large depth, typically in excess of . Environmental impact Ravine erosion contributes heavily to land loss globally and particularly threatens agricultural lands. Additionally, soil loss contributes to pollution, flooding, and sedimentation of waterways. The formation of ravine lands can be sped up by deforestation and overgrazing. In Indian badlands, soil erosion is estimated to exceed a rate of annually.
Examples Hawaii The shield volcanoes of Hawaii have significant impact on the distribution of ravines across the islands, specifically Mauna Loa and Mauna Kea, the former of which is one of the most active shield volcanoes on Earth. Both of these volcanoes show V-shaped ravines on their flanks, solely where they have been mantled by Pahala ash. Being the older of the two, Mauna Kea displays more pronounced dissection of these ravines. Rainfall and infiltration capacity are critical to valley initiation on the Hawaiian volcanoes. Once these valleys are initiated, their streams incise to form V-shaped ravines. Eventually, they become sufficiently deep ravine systems and expose groundwater activity. The deepest of these incisions are U-shaped, theatre-headed valleys. Notable ravines Babi Yar, Ukraine Bam Bam Amphitheaters, Gabon Barranco de Badajoz, Spain Barranco del Infierno, Spain Gravina Ravine, Italy Moola Chotok, Pakistan Ravenna Park, United States Rauðfeldsgjá, Iceland Stuðlagil, Iceland Taishaku Valley, Japan Toronto ravine system, Canada
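The twenty-to-seventy-percent side gradients mentioned above are rise-over-run figures; converting them to slope angles is simple trigonometry. A minimal illustrative sketch:

```python
import math

def grade_to_degrees(percent_grade):
    """Convert a percent grade (100 * rise/run) to a slope angle in degrees."""
    return math.degrees(math.atan(percent_grade / 100.0))

# The 20-70% gradient range typical of ravine sides:
print(round(grade_to_degrees(20), 1))  # 11.3 degrees
print(round(grade_to_degrees(70), 1))  # 35.0 degrees
```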
Physical sciences
Fluvial landforms
Earth science
5101086
https://en.wikipedia.org/wiki/Cycle%20track
Cycle track
A cycle track or cycleway (British) or bikeway (mainly North American), sometimes historically referred to as a sidepath, is a separate route for cycles and not motor vehicles. In some cases cycle tracks are also used by other users such as pedestrians and horse riders (see shared-use route). A cycle track can be next to a normal road, and can either be a shared route with pedestrians (common in countries such as the United Kingdom) or be made distinct from both the pavement and general roadway by vertical barriers or elevation differences. In urban planning, cycle tracks are designed to encourage cycling and reduce motor vehicle congestion and pollution, cycling accidents (by alleviating the conflict between motor vehicles and cycles sharing the same road space) and general confusion and inconvenience for road users. Cycle tracks may be one-way or two-way, and may be at road level, at sidewalk level, or at an intermediate level. When located alongside normal roads, they usually have some separation from motor traffic in the form of bollards, car parking, barriers or boulevards. Barriers may include curbs, concrete berms, posts, planting/median strips, walls, trenches, or fences. They are often accompanied by a curb extension or other features at intersections to simplify crossing. In the UK, a cycle track is a road specifically for use by cyclists and not motor vehicles. In Ireland the term cycle track also includes cycle lanes marked on the carriageway, but only if accompanied by a specific sign. In the UK, a cycle track may be alongside a roadway (or carriageway) for all vehicles or it may be on its own alignment. The term does not include cycle lanes or other facilities within an all-vehicle carriageway. Impact Levels of bicycle traffic In the United States, an academic analysis of eight cycle tracks found that they had increased bike traffic on the street by 75 percent within one year of installation. 
Rider surveys indicated that 10 percent of riders after installation would have chosen a different mode for that trip without the cycle track, and 25 percent said they were biking more in general since the installation of the cycle track. However, scientific research indicates that different groups of cyclists show varying preferences as to which aspects of cycling infrastructure are most relevant when choosing a specific cycling route over another; these different preferences thus need to be accounted for in order to maximize utilization of new cycling infrastructure. A 2015 study of a street in Toronto, Canada, where cycle tracks replaced a painted cycle lane involved a survey of cyclists. Of the respondents, 38% reported that they would have used travel modes other than cycling before the redevelopment (most of whom would have taken transit); an improvement to safety was the most commonly cited reason. Safety Recent studies generally affirm that segregated cycle tracks have a better safety record between intersections than cycling on major roads in traffic. The increase in cycling caused by cycle tracks may lead to a "safety in numbers" effect, though some contributors caution against this hypothesis. Older studies tended to come to negative conclusions about mid-block cycle track safety. The implications for road safety of cycle tracks at intersections are disputed. Studies generally show an increase in collisions at junctions, especially where cyclists are travelling in the direction opposite to the flow of traffic (e.g. on two-way cycle tracks). Protected intersection designs generally improve safety records over non-protected junction types. Specifications Netherlands The Dutch guidance for cycle traffic specifies that one-way cycle paths should be a minimum width of 2 metres. United Kingdom The LTN 1/20 guidance covers cycle infrastructure design in England and Northern Ireland. LTN 1/20 states that one-way cycle tracks should be a minimum of 1.5-2.5 metres wide, depending on the number of cyclists.
Two-way cycle tracks should be a minimum of 2-4 m, depending on the number of cyclists. Cycling by Design covers cycle infrastructure design in Scotland. It specifies minimum widths varying from 1.5 to 2.5 metres for one-way tracks and between 2 and 4 metres for two-way tracks. Shared pedestrian tracks should only be used if there are fewer than 300 cycles per hour at the peak hour, and such a track should be 4 metres wide (2.5 metres at minimum). Gallery
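The width minima quoted above can be collapsed into a small lookup. This is an illustrative simplification only: the real LTN 1/20 and Cycling by Design tables grade widths against peak-hour cycle flow in finer steps, and the high_flow flag below is an assumption of this sketch rather than a threshold from either document.

```python
def min_track_width_m(two_way, high_flow):
    """Illustrative minimum cycle track width in metres.

    Simplified from the ranges in the text: one-way tracks 1.5-2.5 m,
    two-way tracks 2-4 m, with the upper figure applied at high cycle flows.
    """
    if two_way:
        return 4.0 if high_flow else 2.0
    return 2.5 if high_flow else 1.5

print(min_track_width_m(two_way=False, high_flow=False))  # 1.5
print(min_track_width_m(two_way=True, high_flow=True))    # 4.0
```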
Technology
Road infrastructure
null
1376450
https://en.wikipedia.org/wiki/Square%20inch
Square inch
A square inch (plural: square inches) is a unit of area, equal to the area of a square with sides of one inch. The following symbols are used to denote square inches: square in; sq inches, sq inch, sq in; inches/-2, inch/-2, in/-2; inches^2, inch^2, in^2; inches2, inch2, in2 (also denoted by "2); and, on historic engineering drawings, □″ (a number with a square and a double apostrophe, both as an exponent). The square inch is a common unit of measurement in the United States and the United Kingdom. A common unit of pressure, the pound per square inch (psi), is derived from this unit of area. Equivalence with other units of area 1 square inch (assuming an international inch) is equal to: 1/144 ≈ 0.00694 square feet (1 square foot is equal to 144 square inches); 1/1296 ≈ 0.000772 square yards (1 square yard is equal to 1,296 square inches); exactly 6.4516 square centimetres (1 square centimetre is equal to approximately 0.155 square inches); and exactly 0.00064516 square metres (1 square metre is equal to approximately 1,550 square inches).
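Since the international inch is defined as exactly 2.54 cm, the equivalences above can be computed directly; a minimal sketch:

```python
# 1 international inch = 2.54 cm exactly, so 1 sq in = 2.54**2 = 6.4516 cm^2
CM2_PER_SQIN = 2.54 ** 2
M2_PER_SQIN = CM2_PER_SQIN / 1e4   # 0.00064516 m^2
SQIN_PER_SQFT = 12 ** 2            # 144 sq in per sq ft
SQIN_PER_SQYD = 36 ** 2            # 1296 sq in per sq yd

def sqin_to_cm2(area_sqin):
    """Convert an area in square inches to square centimetres."""
    return area_sqin * CM2_PER_SQIN

def sqft_to_sqin(area_sqft):
    """Convert an area in square feet to square inches."""
    return area_sqft * SQIN_PER_SQFT

print(round(sqin_to_cm2(1), 4))  # 6.4516
print(sqft_to_sqin(1))           # 144
```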
Physical sciences
Area
Basics and measurement
1377614
https://en.wikipedia.org/wiki/Elevated%20railway
Elevated railway
An elevated railway or elevated train (also known as an el train or el for short) is a railway with the tracks above street level on a viaduct or other elevated structure (usually constructed from steel, cast iron, concrete, or bricks). The railway may be a broad-gauge, standard-gauge or narrow-gauge railway, light rail, monorail, or a suspension railway. Elevated railways are normally found in urban areas where there would otherwise be multiple level crossings. Usually, the tracks of elevated railways that run on steel viaducts can be seen from street level. History The earliest elevated railway was the London and Greenwich Railway, on a brick viaduct of 878 arches, built between 1836 and 1838. The first of the London and Blackwall Railway (1840) was also built on a viaduct. During the 1840s there were other plans for elevated railways in London that never came to fruition. From the late 1860s onward, elevated railways became popular in US cities. New York's West Side and Yonkers Patent Railway opened in 1868 as a cable-hauled elevated railway and was operated using locomotives after 1871, when it was renamed the New York Elevated Railroad. This was followed in 1875 by the Manhattan Railway Company, which took over the New York Elevated Railroad. Other early elevated systems in the US included the Chicago "L", which was built by multiple competing companies beginning in 1892, as well as the Boston Elevated Railway in 1901 and the Market–Frankford Line in Philadelphia in 1907. Globally, the Berlin Stadtbahn (1882) and the Vienna Stadtbahn (1898) are also mainly elevated. The first electric elevated railway was the Liverpool Overhead Railway, which operated through Liverpool docks from 1893 until 1956. In London, the Docklands Light Railway is a modern elevated railway that opened in 1987 and has since expanded; its trains are driverless and automatic. Another modern elevated railway is Tokyo's driverless Yurikamome line, opened in 1995.
Systems Monorail systems Most monorails are elevated railways, such as the Disneyland Monorail System (1959), the Tokyo Monorail (1964), the Sydney Monorail (1988–2013), the KL Monorail, the Las Vegas Monorail, the Seattle Center Monorail and the São Paulo Monorail. Most maglev railways are also elevated. Suspension railways During the 1890s there was some interest in suspension railways, particularly in Germany, with the Schwebebahn Dresden (1891–) and the Wuppertal Schwebebahn (1901). H-Bahn suspension railways were built in Dortmund and at Düsseldorf airport from 1975. The Memphis Suspension Railway opened in 1982. Suspension railways are usually monorails; the Shonan Monorail and Chiba Urban Monorail in Japan, despite their names, are suspension railways. People mover systems A people mover or automated people mover (APM) is a type of driverless, grade-separated mass-transit system. The term is generally used only to describe systems that serve as loops or feeder systems, but is sometimes applied to considerably more complex automated systems. Similar to monorails, Bombardier Innovia APM technology uses only one rail to guide the vehicle along the guideway. APMs are common at airports and effective at helping passengers quickly reach their gates. Several elevated APM systems operate at airports, including the PHX Sky Train at Phoenix Sky Harbor International Airport; the AeroTrain at Kuala Lumpur International Airport; and the Tracked Shuttle System at London Gatwick Airport, United Kingdom.
Modern systems Metro or commuter rail systems Africa Addis Ababa Light Rail Cairo Metro (Line 3) Lagos Metro Americas Baltimore Metro (west of Mondawmin) BART (partial) Chicago "L" (except for underground sections of the Red Line and Blue Line and at-grade sections of the Brown Line, Purple Line, Pink Line, and Yellow Line) Cleveland Red Line (partial) DART Green Line (north branch) Guadalajara light rail system Line 3 (partial under construction) Lima Metro (partial) Market–Frankford Line (underground in downtown Philadelphia and West Philadelphia up to 40th Street Station but elevated elsewhere) Medellín Metro MARTA (partial) Montreal REM (partial, not fully) Metrorrey (partial) Mexico City Metro (partial) Miami Metrorail New York City Subway (partial, 40% of tracks) Panama City Metro PATCO (partial) PATH (partial) Santiago Metro (partial) Skyline SkyTrain, Vancouver, British Columbia, Canada (partial) Tren Urbano Washington Metro (partial) UP Express (partial), an airport express train connecting Lester B. Pearson International Airport to Toronto Union Station in Toronto, Ontario Asia Bangalore metro BTS Skytrain, two elevated rapid transit lines in Bangkok Chennai Metro Chennai Mass Rapid Transit System Chongqing Rail Transit (Line 3) Daegu Metro (Line 3) Danhai Light Rail Delhi Metro (Yellow Line, Green Line, Red Line) Dhaka Metro Rail Dubai Metro Hanoi Metro Hong Kong MTR (Kwun Tong Line, Tsuen Wan Line, Tuen Ma Line, East Rail Line, South Island Line and Tung Chung Line) (all partial) Hyderabad Metro Jabodebek LRT Jakarta LRT Jakarta MRT (North-South Line, partial) Kochi Metro Kolkata Metro (future line 5 and 6, later is under construction) Lahore Metro (Orange Line) Manila Light Rail Transit System Navi Mumbai Metro Mumbai Metro Mumbai Monorail Nagpur Metro Rapid Metro Gurgaon Rapid Rail, the operator of the rapid transit (metro) system serving Kuala Lumpur and the Klang Valley area in Malaysia. 
Singapore MRT (North–South Line and East–West Line) (all partial) Singapore LRT (automated people mover (APM)) New Taipei Metro (Circular line, north branch of Tamsui–Xinyi line, and Wenhu line) Doha Metro (all partial except Gold Line which is fully underground.) Riyadh Metro (except Green Line which is fully underground.) Europe Amsterdam Metro (except Line 52) Berlin S-Bahn (Berlin Stadtbahn and Siemensbahn) Berlin U-Bahn (U1 and U2 lines) Charleroi light rail (partial) Copenhagen Metro (partial) Devon Metro (partial) Docklands Light Railway (partial) Frankfurt U-Bahn (U1) (partial) Hamburg U-Bahn (U3 line) London Overground (Windrush line) (partial) Moscow Metro (Butovskaya line) Paris Metro (Line 1, Line 2, Line 5, Line 6, Line 8 and Line 13) (all partial) Rhine-Main S-Bahn (S3, S4, S5, S6) (partial) Rotterdam Metro (partial) Vienna S-Bahn (Vienna Stadtbahn) Vienna U-Bahn (U6 line) Wuppertal Suspension Railway Oceania Metro Trains Melbourne, mainly built by the Level Crossing Removal Project Sydney Metro Northwest Line in Sydney, New South Wales, Australia (between Bella Vista and Tallawong) Disused Boston Elevated Railways: Atlantic Avenue Elevated, Charlestown Elevated, Washington Street Elevated, Causeway Street Elevated Elevated railways operated by the Interborough Rapid Transit Company and Brooklyn Rapid Transit Company in New York City Liverpool Overhead Railway The elevated Airport line of Kolkata Suburban Railway, closed in 2016 for reconstruction relating Kolkata Metro line 4 Line 3 Scarborough, a medium capacity metro rail line in Toronto, Ontario, Canada (ceased operation in July 2023 due to derailment and age, currently undergoing replacement by an extension of TTC Line 2) People mover Tomorrowland Transit Authority PeopleMover, a people mover at and around Tomorrowland, Magic Kingdom, Walt Disney World Resort, Orlando, Florida, United States AirTrain JFK, a people mover at and around John F. 
Kennedy International Airport, New York City, New York, United States ATL Skytrain, a people mover at Hartsfield–Jackson Atlanta International Airport, Atlanta, Georgia, United States Changi Airport Skytrain, an inter-terminal people mover at Changi International Airport in Singapore Detroit People Mover, an urban transit people mover in Detroit, Michigan, United States H-Bahn, an inter-terminal automated people mover in Dortmund and Düsseldorf, Germany Jacksonville Skyway, an automated people mover in Jacksonville, Florida, United States Metromover, a people mover at Miami, Florida, United States PHX Sky Train, a people mover at Phoenix Sky Harbor International Airport, Phoenix, Arizona, United States Proposed designs Phnom Penh SkyTrain (Cambodia) Ho Chi Minh City Metro (Vietnam) will be partially elevated Managua Metro (Nicaragua) San Salvador Metro (El Salvador) Ljubljana Metro (Slovenia) Transperth's Armadale line will be partially elevated by the Victoria Park-Canning Level Crossing Removal Project
Technology
Rail and cable transport
null
1378708
https://en.wikipedia.org/wiki/Sodium%20thiosulfate
Sodium thiosulfate
Sodium thiosulfate (sodium thiosulphate) is an inorganic compound with the formula . Typically it is available as the pentahydrate (x = 5), a white or colorless solid that dissolves well in water. The compound is a reducing agent and a ligand, and these properties underpin its applications. Uses Sodium thiosulfate is used predominantly in dyeing. It converts some dyes to their soluble colorless "leuco" forms. It is also used to bleach "wool, cotton, silk, ...soaps, glues, clay, sand, bauxite, and... edible oils, edible fats, and gelatin." Medical uses Sodium thiosulfate is used in the treatment of cyanide poisoning. It is on the World Health Organization's List of Essential Medicines. Other uses include topical treatment of ringworm and tinea versicolor, and treating some side effects of hemodialysis and chemotherapy. In September 2022, the U.S. Food and Drug Administration (FDA) approved sodium thiosulfate under the trade name Pedmark to lessen the risk of ototoxicity and hearing loss in infant, child, and adolescent cancer patients receiving the chemotherapy medication cisplatin. Photographic processing In photography, sodium thiosulfate is used in both film and photographic paper processing as a fixer, sometimes still called 'hypo' from the original chemical name, hyposulphite of soda. It functions to dissolve silver halides, e.g., AgBr, components of photographic emulsions. Ammonium thiosulfate is typically preferred to sodium thiosulfate for this application. The ability of thiosulfate to dissolve silver ions is related to its ability to dissolve gold ions. Neutralizing chlorinated water It is used to dechlorinate tap water, including lowering chlorine levels for use in aquariums, swimming pools, and spas (e.g., following superchlorination), and within water treatment plants to treat settled backwash water prior to release into rivers. The reduction reaction is analogous to the iodine reduction reaction.
In pH testing of bleach substances, sodium thiosulfate neutralizes the color-removing effects of bleach and allows one to test the pH of bleach solutions with liquid indicators. The relevant reaction is akin to the iodine reaction: thiosulfate reduces the hypochlorite (the active ingredient in bleach) and in so doing becomes oxidized to sulfate. The complete reaction is: Na2S2O3 + 4 NaOCl + H2O → 2 NaHSO4 + 4 NaCl. Similarly, sodium thiosulfate reacts with bromine, removing the free bromine from the solution. Solutions of sodium thiosulfate are commonly used as a precaution in chemistry laboratories when working with bromine and for the safe disposal of bromine, iodine, or other strong oxidizers. Structure Two polymorphs of the pentahydrate are known. The anhydrous salt exists in several polymorphs. In the solid state, the thiosulfate anion is tetrahedral in shape and is notionally derived by replacing one of the oxygen atoms by a sulfur atom in a sulfate anion. The S-S distance indicates a single bond, implying that the terminal sulfur holds a significant negative charge and the S-O interactions have more double-bond character. Production Sodium thiosulfate is prepared by oxidation of sodium sulfite with sulfur. It is also produced from waste sodium sulfide from the manufacture of sulfur dyes. This salt can also be prepared by boiling aqueous sodium hydroxide and sulfur according to the following equation: 6 NaOH + 4 S → Na2S2O3 + 2 Na2S + 3 H2O. However, this is not recommended outside of a laboratory, as exposure to hydrogen sulfide can result if improperly handled. Principal reactions Upon heating to 300 °C, it decomposes to sodium sulfate and sodium polysulfide: 4 Na2S2O3 → 3 Na2SO4 + Na2S5. Thiosulfate salts characteristically decompose upon treatment with acids. Initial protonation occurs at sulfur. When the protonation is conducted in diethyl ether at −78 °C, H2S2O3 (thiosulfuric acid) can be obtained. It is a somewhat strong acid with pKas of 0.6 and 1.7 for the first and second dissociations, respectively.
Under normal conditions, acidification of solutions of this salt with an excess of even dilute acids results in complete decomposition to sulfur, sulfur dioxide, and water: Na2S2O3 + 2 HCl → 2 NaCl + S + SO2 + H2O. Coordination chemistry Thiosulfate forms complexes with transition metal ions. One such complex is . Iodometry Some analytical procedures exploit the oxidizability of the thiosulfate anion by iodine. The reaction produces tetrathionate: 2 Na2S2O3 + I2 → Na2S4O6 + 2 NaI. Due to the quantitative nature of this reaction, as well as because sodium thiosulfate has an excellent shelf-life, it is used as a titrant in iodometry. Sodium thiosulfate is also a component of iodine clock experiments. This particular use can be set up to measure the oxygen content of water through a long series of reactions in the Winkler test for dissolved oxygen. It is also used in estimating volumetrically the concentrations of certain compounds in solution (hydrogen peroxide, for instance) and in estimating the chlorine content in commercial bleaching powder and water. Organic chemistry Alkylation of sodium thiosulfate gives S-alkylthiosulfates, which are called Bunte salts. The alkylthiosulfates are susceptible to hydrolysis, affording the thiol; this reaction is illustrated by one synthesis of thioglycolic acid. Safety Sodium thiosulfate has low toxicity. The LDLo for rabbits is 4000 mg/kg.
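As a numerical illustration of the iodometric titration described above: the stoichiometry consumes two moles of thiosulfate per mole of iodine, so the iodine determined is half the thiosulfate delivered at the endpoint. The concentration and volume below are hypothetical sample values, not from the text.

```python
def iodine_moles(thio_molarity, titrant_volume_l):
    """Moles of I2 determined by a thiosulfate titration.

    Uses the stoichiometry 2 Na2S2O3 + I2 -> Na2S4O6 + 2 NaI,
    i.e. moles of I2 = moles of thiosulfate / 2.
    """
    return thio_molarity * titrant_volume_l / 2.0

# Hypothetical endpoint: 25.0 mL of 0.100 M Na2S2O3 consumed
mol_i2 = iodine_moles(0.100, 25.0e-3)
print(round(mol_i2, 5))  # 0.00125 mol of I2
```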
Physical sciences
Sulfuric oxyanions
Chemistry
1380144
https://en.wikipedia.org/wiki/Linalool
Linalool
Linalool () refers to two enantiomers of a naturally occurring terpene alcohol found in many flowers and spice plants. Together with geraniol, nerol, and citronellol, linalool is one of the rose alcohols. Linalool has multiple commercial applications, the majority of which are based on its pleasant scent (floral, with a touch of spiciness). A colorless oil, linalool is classified as an acyclic monoterpenoid. In plants, it is a metabolite, a volatile oil component, an antimicrobial agent, and an aroma compound. Linalool has uses in the manufacturing of soaps, fragrances, food additives as flavors, household products, and insecticides. Esters of linalool are referred to as linalyl, e.g. linalyl pyrophosphate, an isomer of geranyl pyrophosphate. The word linalool is based on linaloe (a type of wood) and the suffix -ol. In food manufacturing, it may be called coriandrol. Occurrence Both enantiomeric forms are found in nature: (S)-linalool is found, for example, as a major constituent of the essential oils of coriander (Coriandrum sativum L.), cymbopogon (Cymbopogon martini var. martinii), and sweet orange (Citrus sinensis) flowers. (R)-linalool is present in lavender (Lavandula officinalis), bay laurel (Laurus nobilis), and sweet basil (Ocimum basilicum), among others. Each enantiomer evokes distinct neural responses in humans, so each is classified as possessing a distinct scent. (S)-(+)-Linalool is perceived as sweet, floral, petitgrain-like (odor threshold 7.4 ppb) and the (R)-form as more woody and lavender-like (odor threshold 0.8 ppb). Over 200 species of plants produce linalool, notably from the families Lamiaceae (mint and other herbs), Lauraceae (laurels, cinnamon, rosewood), and Rutaceae (citrus fruits), but also birch trees and other plants, from tropical to boreal climate zones.
Aniba rosaeodora Lavandula Cinnamomum tamala Cannabis sativa Basil Solidago (goldenrod) Artemisia vulgaris (mugwort) Humulus lupulus (hop) It was first synthesized in the laboratory of Leopold Ružička in 1919. Production Linalool is produced commercially from several terpenes and terpenoid precursors, which are often components of turpentine. 2-Pinanol, derived from pinene, gives linalool upon pyrolysis. Biosynthesis In higher plants, linalool is formed by rearrangement of geranyl pyrophosphate (GPP). With the aid of linalool synthase (LIS), water attacks to form the chiral center. LIS appears to show a limonene synthase-type catalysis through a simplified "metal-cofactor-binding domain [where the majority] of the residues involved in substrate...binding [are] in the C-terminal part of the protein", suggesting stereoselectivity and the reasoning behind why some plants have varying levels of each enantiomer. Odor and flavor Linalool has complex odor and flavor properties. Its odor is similar to floral, spicy wood, somewhat resembling French lavender plants, bergamot oil or lily of the valley. It has a light, citrus-like flavor, sweet with a spicy tropical accent. Linalool is used as a scent in perfumed hygiene products and cleaning agents, including soaps, detergents, shampoos, and lotions. It exhibits antimicrobial and antifungal properties. Chemical derivatives Linalool is hydrogenated to give dihydro- and tetrahydrolinalool, which are fragrances that are more resilient toward oxidants, as might be found in household cleaning products. Linalyl acetate, a popular scent, is produced by esterification of linalool (as well as occurring naturally). Isomerization of linalool gives geraniol and nerol. Safety Linalool can be absorbed by inhalation of its aerosol and by oral intake or skin absorption, potentially causing irritation, pain and allergic reactions. Some 7% of people undergoing patch testing in Europe were found to be allergic to the oxidized form of linalool.
The US Food and Drug Administration (FDA) lists linalool in the Code of Federal Regulations under substances generally recognized as safe, synthetic flavoring substances and adjuvants.
Physical sciences
Terpenes and terpenoids
Chemistry
25686223
https://en.wikipedia.org/wiki/Symbian
Symbian
Symbian was a mobile operating system (OS) and computing platform designed for smartphones. It was originally developed as a proprietary software OS for personal digital assistants in 1998 by the Symbian Ltd. consortium. Symbian OS is a descendant of Psion's EPOC, and was released exclusively on ARM processors, although an unreleased x86 port existed. Symbian was used by many major mobile phone brands, like Samsung, Motorola, Sony Ericsson, and above all by Nokia. It was also prevalent in Japan by brands including Fujitsu, Sharp and Mitsubishi. As a pioneer that established the smartphone industry, it was the most popular smartphone OS on a worldwide average until the end of 2010, at a time when smartphones were in limited use, when it was overtaken by iOS and Android. It was notably less popular in North America. The Symbian OS platform is formed of two components: one being the microkernel-based operating system with its associated libraries, and the other being the user interface (as middleware), which provides the graphical shell atop the OS. The most prominent user interface was the S60 (formerly Series 60) platform built by Nokia, first released in 2002 and powering most Nokia Symbian devices. UIQ was a competing user interface mostly used by Motorola and Sony Ericsson that focused on pen-based devices, rather than a traditional keyboard interface from S60. Another interface was the MOAP(S) platform from carrier NTT DoCoMo in the Japanese market. Applications for these different interfaces were not compatible with each other, despite each being built atop Symbian OS. Nokia became the largest shareholder of Symbian Ltd. in 2004 and purchased the entire company in 2008. The non-profit Symbian Foundation was then created to make a royalty-free successor to Symbian OS. Seeking to unify the platform, S60 became the Foundation's favoured interface and UIQ stopped development. The touchscreen-focused Symbian^1 (or S60 5th Edition) was created as a result in 2009. 
Symbian^2 (based on MOAP) was used by NTT DoCoMo, one of the members of the Foundation, for the Japanese market. Symbian^3 was released in 2010 as the successor to S60 5th Edition, by which time the platform had become fully free software. The transition from a proprietary operating system to a free software project is believed to be one of the largest in history. Symbian^3 received the Anna and Belle updates in 2011.

The Symbian Foundation disintegrated in late 2010 and Nokia took back control of the OS development. In February 2011, Nokia, by then the only remaining company still supporting Symbian outside Japan, announced that it would use Microsoft's Windows Phone 7 as its primary smartphone platform, while Symbian would be gradually wound down. Two months later, Nokia moved the OS back to proprietary licensing, collaborating only with the Japanese OEMs, and later outsourced Symbian development to Accenture. Although support was promised until 2016, including two major planned updates, by 2012 Nokia had mostly abandoned development and most Symbian developers had already left Accenture, and in January 2014 Nokia stopped accepting new or changed Symbian software from developers. The Nokia 808 PureView in 2012 was officially the last Symbian smartphone from Nokia. NTT DoCoMo continued releasing OPP(S) (Operator Pack Symbian, successor of MOAP) devices in Japan, which still acts as middleware on top of Symbian; phones running it include 2014 models from Fujitsu and Sharp.

History

Symbian originated from EPOC32, an operating system created by Psion in the 1990s. In June 1998, Psion Software became Symbian Ltd., a major joint venture between Psion and phone manufacturers Ericsson, Motorola, and Nokia. Afterwards, different software platforms were created for Symbian, backed by different groups of mobile phone manufacturers. They include S60 (Nokia, Samsung and LG), UIQ (Sony Ericsson and Motorola) and MOAP(S) (Japanese manufacturers only, such as Fujitsu and Sharp).
With no major competition in the smartphone OS market at the time (Palm OS and Windows Mobile were comparatively small players), Symbian reached as high as 67% of the global smartphone market share in 2006. Despite its sizable market share, Symbian was at various stages difficult to develop for. First, around the early-to-mid-2000s, there was the complexity of the OS and of its only native programming languages, Open Programming Language (OPL) and Symbian C++. Then there was a stubborn developer bureaucracy, along with the high prices of various integrated development environments (IDEs) and software development kits (SDKs), which were prohibitive for independent or very small developers. Later came fragmentation, caused in part by infighting among and within manufacturers, each of which also had its own IDEs and SDKs. All of this discouraged third-party developers, and the native app ecosystem for Symbian never evolved to the scale later reached by Apple's App Store or Android's Google Play. By contrast, iPhone OS (renamed iOS in 2010) and Android had comparatively simpler designs, provided easier and much more centralized infrastructure for creating and obtaining third-party apps, offered developer tools and programming languages with a manageable level of complexity, and had capabilities such as multitasking and graphics that could meet future consumer demands.

Although Symbian was difficult to program for, this issue could be worked around by creating Java Mobile Edition apps, ostensibly under a "write once, run anywhere" slogan. In practice, fragmentation caused by different device screen sizes and differing levels of Java ME support across devices meant even this did not always hold. In June 2008, Nokia announced the acquisition of Symbian Ltd., and a new independent non-profit organization called the Symbian Foundation was established.
Symbian OS and its associated user interfaces S60, UIQ, and MOAP(S) were contributed by their owners Nokia, NTT DoCoMo, Sony Ericsson, and Symbian Ltd. to the foundation, with the objective of creating the Symbian platform as royalty-free free software under the Free Software Foundation (FSF)- and Open Source Initiative (OSI)-approved Eclipse Public License (EPL). The platform was designated as the successor to Symbian OS, following the official launch of the Symbian Foundation in April 2009. The Symbian platform was officially made available as free software in February 2010; the foundation reported this to be the largest codebase moved to free software in history. Nokia became the major contributor to Symbian's code, since it possessed the development resources for both the Symbian OS core and the user interface. It maintained its own code repository for platform development, regularly releasing its work to the public repository.

However, some important components within Symbian OS were licensed from third parties, which prevented the foundation from publishing the full source under the EPL immediately; instead, much of the source was published under a more restrictive Symbian Foundation License (SFL), and access to the full source code was limited to member companies, although membership was open to any organisation. Also in 2010, the free software Qt framework was introduced to Symbian as the primary upgrade path to MeeGo, which was to be the next mobile operating system to replace Symbian on high-end devices; Qt was free software and comparatively convenient to develop with.
Several other frameworks were deployed to the platform, among them Standard C and C++, Python, Ruby, and Adobe Flash Lite. IDEs and SDKs were developed and then released free of charge, and application software (app) development for Symbian picked up.

In November 2010, the Symbian Foundation announced that, due to changes in global economic and market conditions (and a lack of support from members such as Samsung and Sony Ericsson), it would transition to a licensing-only organisation; Nokia announced it would take over stewardship of the Symbian platform. The Symbian Foundation would remain the trademark holder and licensing entity and would have only non-executive directors involved. With market share sliding from 39% in Q3 2010 to 31% in Q4 2010, Symbian was quickly losing ground to iOS and Android, eventually falling behind Android in Q4 2010. Stephen Elop was appointed CEO of Nokia in September 2010, and on 11 February 2011 he announced a partnership with Microsoft that would see Nokia adopt Windows Phone as its primary smartphone platform, with Symbian gradually phased out, together with MeeGo. As a consequence, Symbian's market share fell, and application developers abandoned the platform rapidly: research in June 2011 indicated that over 39% of mobile developers then using Symbian were planning to leave it.

On 5 April 2011, Nokia ceased to release any portion of the Symbian software as free software and reduced its collaboration to a small group of preselected partners in Japan. Source code released under the original EPL remains available in third-party repositories, including a full set of all public code from the project as of 7 December 2010. On 22 June 2011, Nokia made an agreement with Accenture for an outsourcing program, under which Accenture would provide Symbian-based software development and support services to Nokia through 2016.
The transfer of Nokia employees to Accenture was completed on 30 September 2011, and 2,800 Nokia employees became Accenture employees as of October 2011. Nokia terminated its support of software development and maintenance for Symbian with effect from 1 January 2014, thereafter refusing to publish new or changed Symbian applications or content in the Nokia Store and terminating its 'Symbian Signed' program for software certification.

Features

User interface

Symbian has had a native graphics toolkit since its inception, known as AVKON (formerly Series 60). S60 was designed to be manipulated by a keyboard-like interface metaphor, such as the ~15-key augmented telephone keypad or mini-QWERTY keyboards. AVKON-based software is binary-compatible with Symbian versions up to and including Symbian^3.

Symbian^3 includes the Qt framework, which became the recommended user interface toolkit for new applications. Qt can also be installed on older Symbian devices. Symbian^4 was planned to introduce a new GUI library framework specifically designed for a touch-based interface, known as "UI Extensions for Mobile" or UIEMO (internal project name "Orbit"), built on top of Qt Widget; a preview was released in January 2010, but in October 2010 Nokia announced that Orbit/UIEMO had been cancelled. Nokia later recommended that developers use Qt Quick with QML, the new high-level declarative UI and scripting framework for creating visually rich touchscreen interfaces, which allowed development for both Symbian and MeeGo; it would be delivered to existing Symbian^3 devices as a Qt update. As more applications gradually gained user interfaces reworked in Qt, the legacy S60 framework (AVKON) was to be deprecated and eventually dropped from new devices, breaking binary compatibility with older S60 applications.

Browser

Symbian^3 and earlier have a built-in WebKit-based browser.
Symbian was the first mobile platform to make use of WebKit (in June 2005). Some older Symbian models have Opera Mobile as their default browser. Nokia released a new browser with the release of Symbian Anna, with improved speed and an improved user interface.

Multiple language support

Symbian had strong localization support, enabling manufacturers and third-party application developers to localize Symbian-based products for global distribution. Nokia made languages available on devices in language packs: sets of languages covering those commonly spoken in the area where a device variant was to be sold. All language packs include English, or a locally relevant dialect of it. The last release, Symbian Belle, supports these 48 languages, with [dialects] and (scripts): Arabic (Arabic), Basque (Latin), Bulgarian (Cyrillic), Catalan (Latin), Chinese [PRC] (Simplified Chinese), Chinese [Hong Kong] (Traditional Chinese), Chinese [Taiwan] (Traditional Chinese), Croatian (Latin), Czech (Latin), Danish (Latin), Dutch (Latin), English [UK] (Latin), English [US] (Latin), Estonian (Latin), Finnish (Latin), French (Latin), French [Canadian] (Latin), Galician (Latin), German (Latin), Greek (Greek), Hebrew (Hebrew), Hindi (Indian), Hungarian (Latin), Icelandic (Latin), Indonesian [Bahasa Indonesia] (Latin), Italian (Latin), Japanese (Japanese script)*, Kazakh (Cyrillic), Latvian (Latin), Lithuanian (Latin), Malay [Bahasa Malaysia] (Latin), Marathi (India: Maharashtra), Norwegian (Latin), Persian [Farsi], Polish (Latin), Portuguese (Latin), Portuguese [Brazilian] (Latin), Romanian (Latin), Russian (Cyrillic), Serbian (Latin), Slovak (Latin), Slovene (Latin), Spanish (Latin), Spanish [Latin America] (Latin), Swedish (Latin), Tagalog [Filipino] (Latin), Thai (Thai), Tamil (India), Turkish (Latin), Ukrainian (Cyrillic), Urdu (Arabic), and Vietnamese (Latin). Symbian Belle marks the introduction of Kazakh, while Korean is no longer supported.
* Japanese is fully available only on Symbian^2 devices, which were made for the Japanese market; on other Symbian devices, Japanese is supported with limitations.

Application development

From 2010, Symbian switched to using standard C++ with Qt as the main SDK, which can be used with either Qt Creator or Carbide.c++. Qt supports the older Symbian/S60 3rd (starting with Feature Pack 1, a.k.a. S60 3.1) and Symbian/S60 5th Edition (a.k.a. S60 5.01b) releases, as well as the new Symbian platform. It also supports Maemo and MeeGo, Windows, Linux and Mac OS X. Alternative application development can be done using Python (see Python for S60), Adobe Flash Lite or Java ME. Symbian OS previously used a Symbian-specific C++ dialect, along with the CodeWarrior and later Carbide.c++ integrated development environments (IDEs), as the native application development environment. Web Runtime (WRT) is a portable application framework that allows creating widgets on the S60 Platform; it is an extension to the S60 WebKit-based browser that allows launching multiple browser instances as separate JavaScript applications.

Qt

As of 2010, the SDK for Symbian is standard C++ using Qt. It can be used with either Qt Creator or Carbide (the older IDE previously used for Symbian development). A phone simulator allows testing of Qt apps; apps compiled for the simulator are compiled to native code for the development platform, rather than having to be emulated. Application development can use either C++ or QML.

Symbian C++

As Symbian OS is written in C++ using Symbian Software's coding standards, it is possible to develop using Symbian C++, although it is not a standard C++ implementation. Before the release of the Qt SDK, this was the standard development environment. There were multiple platforms based on Symbian OS that provided software development kits (SDKs) for application developers wishing to target Symbian OS devices, the main ones being UIQ and S60.
Individual phone products, or families, often had SDKs or SDK extensions downloadable from the maker's website too. The SDKs contain documentation, the header files and library files needed to build Symbian OS software, and a Windows-based emulator ("WINS"). Up until Symbian OS version 8, the SDKs also included a version of the GNU Compiler Collection (GCC) (a cross-compiler) needed to build software to work on the device. Symbian OS 9 and the Symbian platform use a new application binary interface (ABI) and need a different compiler; a choice of compilers is available, including a newer version of GCC.

Symbian C++ programming has a steep learning curve, as it requires the use of special techniques such as descriptors, active objects and the cleanup stack. This can make even relatively simple programs initially harder to implement than in other environments. These techniques, developed for the much more restricted mobile hardware and compilers of the 1990s, arguably caused extra complexity in source code because programmers had to concentrate on low-level details instead of more application-specific features. As of 2010, these issues no longer apply when using standard C++ with the Qt SDK.

Symbian C++ programming is commonly done with an integrated development environment (IDE). For earlier versions of Symbian OS, the commercial IDE CodeWarrior for Symbian OS was favoured. The CodeWarrior tools were replaced during 2006 by Carbide.c++, an Eclipse-based IDE developed by Nokia. Carbide.c++ is offered in four versions with increasing levels of capability: Express, Developer, Professional, and OEM. Fully featured software can be created and released with the free Express edition; features such as UI design and crash debugging are available in the paid editions. Microsoft Visual Studio 2003 and 2005 are also supported via the Carbide.vs plugin.
Other languages

Symbian devices can also be programmed using Python, Java ME, Flash Lite, Ruby, .NET, Web Runtime (WRT) widgets and standard C/C++. Visual Basic programmers can use NS Basic to develop apps for S60 3rd Edition and UIQ 3 devices.

In the past, Visual Basic, Visual Basic .NET, and C# development for Symbian was possible through AppForge Crossfire, a plug-in for Microsoft Visual Studio. On 13 March 2007 AppForge ceased operations; Oracle purchased the intellectual property, but announced that it did not plan to sell or provide support for former AppForge products. Net60, a .NET compact framework for Symbian developed by redFIVElabs, was sold as a commercial product. With Net60, VB.NET, C# (and other) source code is compiled into an intermediate language (IL) which is executed within Symbian OS using a just-in-time compiler. (As of 18 January 2010, RedFiveLabs has ceased development of Net60 with this announcement on their landing page: "At this stage we are pursuing some options to sell the IP so that Net60 may continue to have a future.") There was also a version of a Borland IDE for Symbian OS.

Symbian development is also possible on Linux and macOS using tools and methods developed by the community, partly enabled by Symbian releasing the source code for key tools. A plug-in that allows development of Symbian OS applications in Apple's Xcode IDE for Mac OS X was available.

Java ME applications for Symbian OS are developed using standard techniques and tools such as the Sun Java Wireless Toolkit (formerly the J2ME Wireless Toolkit). They are packaged as JAR (and possibly JAD) files. Both CLDC and CDC applications can be created with NetBeans. Other tools include SuperWaba, which can be used to build Symbian 7.0 and 7.0s programs using Java.

Nokia S60 phones can also run Python scripts when the interpreter Python for S60 is installed, with a custom-made API that provides, among other things, Bluetooth support.
There is also an interactive console that allows the user to write Python scripts directly on the phone.

Deployment

Once developed, Symbian applications need a route to customers' mobile phones. They are packaged in SIS files, which may be installed over-the-air, via PC connect, via Bluetooth, or from a memory card. An alternative is to partner with a phone manufacturer and have the software included on the phone itself. On Symbian OS 9.x, applications must be Symbian Signed to make use of certain capabilities (system capabilities, restricted capabilities and device manufacturer capabilities). As of 2010, applications could be signed free of charge.

Architecture

Technology domains and packages

Symbian's design is subdivided into technology domains, each of which comprises a set of software packages. Each technology domain has its own roadmap, and the Symbian Foundation has a team of technology managers who manage these roadmaps. Every package is allocated to exactly one technology domain, based on the general functional area to which the package contributes and by which it may be influenced. By grouping related packages by themes, the Symbian Foundation hopes to encourage a strong community to form around them and to generate discussion and review. The Symbian System Model illustrates the scope of each technology domain across the platform packages. Packages are owned and maintained by a package owner: a named individual from an organization member of the Symbian Foundation who accepts code contributions from the wider Symbian community and is responsible for the package.

Symbian kernel

The Symbian kernel (EKA2) supports sufficiently fast real-time response to build a single-core phone around it – that is, a phone in which a single processor core executes both the user applications and the signalling stack.
The real-time kernel has a microkernel architecture containing only the minimum, most basic primitives and functionality, for maximum robustness, availability and responsiveness. It has been termed a nanokernel, because it needs an extended kernel to implement any other abstractions. It contains a scheduler, memory management and device drivers, with networking, telephony, and file system support services in the OS Services Layer or the Base Services Layer. The inclusion of device drivers means the kernel is not a true microkernel.

Design

Symbian features pre-emptive multitasking and memory protection, like other operating systems (especially those created for use on desktop computers). EPOC's approach to multitasking was inspired by VMS and is based on asynchronous server-based events.

Symbian OS was created with three systems design principles in mind: the integrity and security of user data is paramount; user time must not be wasted; and all resources are scarce. To best follow these principles, Symbian uses a microkernel, has a request-and-callback approach to services, and maintains separation between user interface and engine. The OS is optimised for low-power battery-based devices and for read-only memory (ROM)-based systems (e.g. features like XIP and re-entrancy in shared libraries). The OS and application software follow an object-oriented design named model–view–controller (MVC). Later OS iterations diluted this approach in response to market demands, notably with the introduction of a real-time kernel and a platform security model in versions 8 and 9.

There is a strong emphasis on conserving resources, exemplified by Symbian-specific programming idioms like descriptors and a cleanup stack. Similar methods exist to conserve storage space. Further, all Symbian programming is event-based, and the central processing unit (CPU) is switched into a low power mode when applications are not directly dealing with an event.
This is done via a programming idiom called active objects. Similarly, the Symbian approach to threads and processes is driven by reducing overheads.

Operating system

The All over Model contains the following layers, from top to bottom:

- UI Framework Layer
- Application Services Layer
  - Java ME
- OS Services Layer
  - generic OS services
  - communications services
  - multimedia and graphics services
  - connectivity services
- Base Services Layer
- Kernel Services & Hardware Interface Layer

The Base Services Layer is the lowest level reachable by user-side operations; it includes the File Server and User Library, a Plug-In Framework which manages all plug-ins, Store, Central Repository, DBMS and cryptographic services. It also includes the Text Window Server and the Text Shell: the two basic services from which a completely functional port can be created without the need for any higher layer services.

Symbian has a microkernel architecture: the minimum necessary is within the kernel, to maximise robustness, availability and responsiveness. It contains a scheduler, memory management and device drivers, but other services like networking, telephony and file system support are placed in the OS Services Layer or the Base Services Layer. The inclusion of device drivers means the kernel is not a true microkernel. The EKA2 real-time kernel, which has been termed a nanokernel, contains only the most basic primitives and requires an extended kernel to implement any other abstractions.

Symbian is designed to emphasise compatibility with other devices, especially removable media file systems. Early development of EPOC led to adopting File Allocation Table (FAT) as the internal file system, and this remains, but an object-oriented persistence model was placed over the underlying FAT to provide a POSIX-style interface and a streaming model. The internal data formats rely on using the same APIs that create the data to run all file manipulations.
This has resulted in data-dependence and associated difficulties with changes and data migration. There is a large networking and communication subsystem, which has three main servers: ETEL (EPOC telephony), ESOCK (EPOC sockets) and C32 (responsible for serial communication). Each of these has a plug-in scheme; for example, ESOCK allows different ".PRT" protocol modules to implement various networking protocol schemes. The subsystem also contains code that supports short-range communication links, such as Bluetooth, IrDA and USB.

There is also a large volume of user interface (UI) code. Only the base classes and substructure were contained in Symbian OS, while most of the actual user interfaces were maintained by third parties. This is no longer the case: the three major UIs – S60, UIQ and MOAP – were contributed to Symbian in 2009. Symbian also contains graphics, text layout and font rendering libraries.

All native Symbian C++ applications are built up from three framework classes defined by the application architecture: an application class, a document class and an application user interface class. These classes create the fundamental application behaviour. The remaining needed functions – the application view, data model and data interface – are created independently and interact solely through their APIs with the other classes.

Many other things do not fit into this model – for example, SyncML, Java ME (which provides another set of APIs on top of most of the OS) and multimedia. Many of these are frameworks, and vendors are expected to supply plug-ins to these frameworks from third parties (for example, Helix Player for multimedia codecs). This has the advantage that the APIs to such areas of functionality are the same on many phone models, and that vendors get a lot of flexibility, but it means that phone vendors needed to do a great deal of integration work to make a Symbian OS phone.

Symbian includes a reference user interface called "TechView".
It provides a basis for starting customisation and is the environment in which much Symbian test and example code runs. It is very similar to the user interface of the Psion Series 5 personal organiser and is not used for any production phone user interface.

Symbian UI variants, platforms

As Symbian advanced to OS version 7.0, it spun off several different graphical user interfaces, each backed by a certain company or group of companies. Unlike the cosmetic GUI skins layered on Android, Symbian GUIs are referred to as "platforms" because they involved more significant modification and integration. Matters were complicated further because applications developed for different Symbian GUI platforms were not compatible with each other, which led to OS fragmentation. User interface platforms that run on or are based on Symbian OS include:

- S60, also called Series 60, backed mainly by Nokia. There are several editions of this platform, appearing first as S60 (1st Edition) on the Nokia 7650. It was followed by S60 2nd Edition (e.g. Nokia N70), S60 3rd Edition (e.g. Nokia N73) and S60 5th Edition (which introduced a touch UI, e.g. Nokia N97). The name S60 was changed to just Symbian after the formation of the Symbian Foundation, and subsequently to Symbian^1, ^2 and ^3.
- Series 80, used by Nokia Communicators such as the Nokia 9300i.
- Series 90, touch and button based. The only phone using this platform is the Nokia 7710.
- UIQ, backed mainly by Sony Ericsson and then Motorola. It is compatible with both button and touch/stylus-based input. The last major release was UIQ 3.1 in 2008, on the Sony Ericsson G900. It was discontinued after the formation of the Symbian Foundation, when the decision to consolidate the different Symbian UI variants into one led to the adoption of S60 as the version going forward.
- MOAP (Mobile Oriented Applications Platform) [Japan only], used by Fujitsu, Mitsubishi, Sony Ericsson and Sharp-developed phones for NTT DoCoMo.
It uses an interface developed specifically for DoCoMo's FOMA ("Freedom of Mobile Access") network brand and is based on the UI from earlier Fujitsu FOMA models. The user cannot install new C++ applications.

- OPP [Japan only], the successor of MOAP, used on NTT DoCoMo's FOMA phones.

Version comparison

(Table footnotes: * manufactured by Fujitsu; † manufactured by Sharp; ▲ the software update service for Nokia Belle and Symbian (S60) phones was discontinued at the end of December 2015.)

Market share and competition

In Q1 2004, 2.4 million Symbian phones were shipped, double the number shipped in Q1 2003. Symbian Ltd. was particularly impressed by progress made in Japan. 3.7 million devices were shipped in Q3 2004, a growth of 201% compared to Q3 2003, with market share growing from 30.5% to 50.2%. However, in the United States it was much less popular, with a 6% market share in Q3 2004, well behind Palm OS (43%) and Windows Mobile (25%). This has been attributed to North American customers preferring wireless PDAs over smartphones, as well as Nokia's low popularity there. On 16 November 2006, the 100 millionth smartphone running the OS was shipped. As of 21 July 2009, more than 250 million devices running Symbian OS had been produced. In 2006, Symbian had 73% of the smartphone market, compared with 22.1% in the second quarter of 2011. By the end of May 2006, 10 million Symbian-powered phones had been sold in Japan, representing 11% of Symbian's total worldwide shipments of 89 million; by November 2007 the figure was 30 million, reaching a 65% share of the Japanese market by June 2007. Symbian lost market share over the years as the market grew dramatically and new competing platforms entered it, even though its unit sales increased over the same period: for example, although Symbian's share of the global smartphone market dropped from 52.4% in 2008 to 47.2% in 2009, shipments of Symbian devices grew 4.8%, from 74.9 million units to 78.5 million units.
From Q2 2009 to Q2 2010, shipments of Symbian devices grew 41.5%, by 8.0 million units, from 19,178,910 units to 27,129,340, compared to an increase of 9.6 million units for Android, 3.3 million units for RIM, and 3.2 million units for Apple. Reports published in February 2010 showed that Symbian devices formed a 47.2% share of the smart mobile devices shipped in 2009, with RIM having 20.8%, Apple 15.1% (via iOS), Microsoft 8.8% (via Windows CE and Windows Mobile) and Android 4.7%. By number of units sold, Symbian devices were the "smart mobile device" market leaders for 2010: statistics showed that Symbian devices formed a 37.6% share of smart mobile devices sold, with Android at 22.7%, RIM at 16%, and Apple at 15.7% (via iOS). Some estimates indicate that the cumulative number of mobile devices shipped with the Symbian OS up to the end of Q2 2010 was 385 million.

Over the course of 2009–10, Motorola, Samsung, LG, and Sony Ericsson announced their withdrawal from Symbian in favour of alternative platforms, including Google's Android and Microsoft's Windows Phone. In Q2 2012, according to IDC, Symbian's worldwide market share dropped to an all-time low of 4.4%.

Criticism

Users of Symbian in countries with non-Latin alphabets (such as Russia and Ukraine) criticized the platform's complicated method of language switching for many years. For example, to type a Latin letter, a user must open the menu, click the languages item, use the arrow keys to choose, say, English from among many other languages, and then press the 'OK' button. After typing the Latin letter, the user must repeat the procedure to return to their native keyboard. This slows down typing significantly. On touch-phones and QWERTY phones the procedure is slightly different but remains time-consuming.
All other mobile operating systems, as well as Nokia's S40 phones, allow switching between two initially selected languages with one click or a single gesture. Early versions of the firmware for the original Nokia N97, running Symbian^1/Series 60 5th Edition, were heavily criticized as buggy, a problem compounded by the low amount of RAM installed in the phone. In November 2010, the smartphone blog All About Symbian criticized the performance of Symbian's default web browser and recommended the alternative browser Opera Mobile; Nokia's Senior Vice President Jo Harlow promised an updated browser in the first quarter of 2011.

There are many different versions and editions of Symbian, which led to fragmentation: apps and software may be incompatible when installed across different versions of Symbian.

Malware

Symbian OS is subject to a variety of viruses, the best known of which is Cabir. Usually these send themselves from phone to phone by Bluetooth. So far, none have exploited any flaws in Symbian OS. Instead, they have all asked the user whether they want to install the software, with somewhat prominent warnings that it cannot be trusted, although some rely on social engineering, often in the form of messages that come with the malware: rogue software purporting to be a utility, game, or some other application for Symbian.

However, with a view that the average mobile phone user should not have to worry about security, Symbian OS 9.x adopted a Unix-style capability model (permissions per process, not per object). Installed software is theoretically unable to do damaging things (such as costing the user money by sending network data) without being digitally signed – thus making it traceable. Commercial developers who can afford the cost can apply to have their software signed via the Symbian Signed program. Developers also have the option of self-signing their programs.
However, the set of available features does not include access to Bluetooth, IrDA, GSM CellID, voice calls, GPS and a few others. Some operators opted to disable all certificates other than the Symbian Signed certificates. Some other hostile programs are listed below, but all of them still require input from the user to run. Drever.A is a malicious SIS file trojan that attempts to disable the automatic startup of the SimWorks and Kaspersky Symbian anti-virus applications. Locknut.B is a malicious SIS file trojan that pretends to be a patch for Symbian S60 mobile phones. When installed, it drops a binary that crashes a critical system service component, preventing any application from being launched on the phone. Mabir.A is essentially Cabir with added MMS functionality. The two were written by the same author, and the code shares many similarities. It spreads using Bluetooth via the same routine as early variants of Cabir. When Mabir.A activates, it searches for the first phone it can find and starts sending copies of itself to that phone. Fontal.A is an SIS file trojan that installs a corrupted file which causes the phone to fail at reboot. If the user tries to reboot the infected phone, it will be permanently stuck on the reboot screen and cannot be used without disinfection – that is, without the reformat key combination, which causes the phone to lose all data. Being a trojan, Fontal cannot spread by itself – the most likely way for a user to become infected is to acquire the file from an untrusted source and then install it to the phone, inadvertently or otherwise. A new form of malware threat to Symbian OS, in the form of 'cooked firmware', was demonstrated at the International Malware Conference, Malcon, in December 2010 by Indian hacker Atul Alex. Bypassing platform security Symbian OS 9.x devices can be hacked to remove the platform security introduced in OS 9.1 onwards, allowing users to execute unsigned code. 
This allows altering system files, and access to previously locked areas of the OS. The hack was criticised by Nokia for potentially increasing the threat posed by mobile viruses as unsigned code can be executed. Version history List of devices
Technology
Operating Systems
null
25687534
https://en.wikipedia.org/wiki/Ranina%20ranina
Ranina ranina
Ranina ranina, also known as the Huỳnh Đế crab, (red) frog crab or spanner crab, is a species of crab found throughout tropical and subtropical habitats. It is often fished for its meat. Description It may grow up to long, and may weigh up to . The carapace is wider at the front, reddish brown in color, with ten white spots. Ranina ranina is mainly nocturnal, and remains buried in the sand during the day. It is easily distinguished from other crab species in its habitat by its red carapace and elongated midsection. Distribution and ecology Spanner crabs inhabit coastal waters along the east coast of Australia, from Yeppoon in Queensland to the north coast of New South Wales. There is also a population to the north of Perth in Western Australia. Ranina ranina is abundant in the coastal waters of south-western Mindanao, Philippines. These crabs are also found on the eastern coast of Africa and across the Indian Ocean to Indonesia, Japan, Hawaii and Vietnam. Ranina ranina inhabits depths of on sandy-smooth substrata, in which it buries itself and from which it attacks small bottom-dwelling fish. When waiting for prey, Ranina ranina covers itself with sand but leaves its eyes and mouthparts sticking out to help detect its food. Offshore areas within this range in subtropical or tropical environments serve as habitat for Ranina ranina, but they must have ample sand for the crabs to flourish, as burying themselves in sand is instrumental to their method of catching prey. Ranina ranina is a popular target of fisheries, yet in Australia about 96% of female crabs escape capture. This pattern does not hold at one location during spawning season, suggesting that the area around Tallows Beach is a focus of spawning migration. Fishery The species is commercially exploited over much of its range, but the largest fishery is in Australia, where the annual commercial catch is estimated at . 
In Queensland, only adults above carapace length may be landed. In the Philippines in 2008, prices for Ranina ranina were around 200–300 pesos per kilogram. Ranina ranina populations have been surveyed to avoid overfishing and are currently stable. Although Ranina ranina is a target of commercial fishing operations, little is known about the species' biology, population dynamics and ecology. Attempts have been made to grow Ranina ranina in captivity, but have so far met with little success. Culinary use Ranina ranina is a regional specialty in some regions of the Philippines, where it is known as curacha. It is generally eaten steamed as halabos, or cooked in coconut milk as ginataan. A notable variant of the latter is the curacha Alavar of Zamboanga City. In Vietnam the species is known as the "Huỳnh Đế crab", literally "emperor crab". The name refers to the fact that R. ranina was a favored delicacy of historical Vietnamese monarchs. It is harvested in the provinces of Bình Định and Quảng Ngãi, and is hailed as the "monarch of all crabs".
Biology and health sciences
Crabs and hermit crabs
Animals
2808556
https://en.wikipedia.org/wiki/Alpher%E2%80%93Bethe%E2%80%93Gamow%20paper
Alpher–Bethe–Gamow paper
In physical cosmology, the Alpher–Bethe–Gamow paper, or αβγ paper, was written by Ralph Alpher, then a physics PhD student, his advisor George Gamow, and Hans Bethe. The work, which would become the subject of Alpher's PhD dissertation, argued that the Big Bang would create hydrogen, helium and heavier elements in the correct proportions to explain their abundance in the early universe. While the original theory neglected a number of processes important to the formation of heavy elements, subsequent developments showed that Big Bang nucleosynthesis is consistent with the observed constraints on all primordial elements. Formally titled "The Origin of Chemical Elements", it was published in the April 1948 issue of Physical Review. Bethe's name Gamow humorously decided to add the name of his friend—the eminent physicist Hans Bethe—to this paper in order to create the whimsical author list of Alpher, Bethe, Gamow, a play on the Greek letters α, β, and γ (alpha, beta, gamma). Bethe () was listed in the article as "H. Bethe, Cornell University, Ithaca, New York". In his 1952 book The Creation of the Universe, Gamow explained Hans Bethe's association with the theory. Bethe himself later did work on Big Bang nucleosynthesis. Alpher, at the time only a graduate student, was generally dismayed by the inclusion of Bethe's name on this paper. He felt that the inclusion of another eminent physicist would overshadow his personal contribution to this work and prevent him from receiving proper recognition for such an important discovery. He expressed resentment over Gamow's whimsy as late as 1999. Main shortcoming of the theory The theory originally proposed that all atomic nuclei are produced by the successive capture of neutrons, one mass unit at a time. However, later study challenged the universality of the successive-capture theory. No element was found to have a stable isotope with an atomic mass of five or eight. 
Physicists soon noticed that these mass gaps would hinder the production of elements beyond helium. Just as it is impossible to climb a staircase one step at a time when one of the steps is missing, this discovery meant that the successive-capture theory could not account for higher elements. It was eventually recognized that most of the heavy elements observed in the present universe are the result of stellar nucleosynthesis in stars, a theory first suggested by Arthur Stanley Eddington, given credence by Hans Bethe, and quantitatively developed by Fred Hoyle and a number of other scientists. However, the Alpher–Bethe–Gamow theory does correctly explain the relative abundances of the isotopes of hydrogen and helium. Taken together, these account for more than 99% of the baryonic mass of the universe. Today, nucleosynthesis is widely considered to have taken place in two stages: formation of hydrogen and helium according to the Alpher–Bethe–Gamow theory, and stellar nucleosynthesis of higher elements according to Bethe and Hoyle's later theories.
Physical sciences
Physical cosmology
Astronomy
898756
https://en.wikipedia.org/wiki/Basking%20shark
Basking shark
The basking shark (Cetorhinus maximus) is the second-largest living shark and fish, after the whale shark. It is one of three plankton-eating shark species, along with the whale shark and megamouth shark. Typically, basking sharks reach in length. It is usually greyish-brown, with mottled skin, with the inside of the mouth being white in colour. The caudal fin has a strong lateral keel and a crescent shape. Other common names include bone shark, elephant shark, sailfish, and sunfish. In Orkney, it is called hoe-mother (contracted homer), meaning "the mother of the piked dogfish". The basking shark is a cosmopolitan migratory species found in all the world's temperate oceans. A slow-moving filter feeder, its common name derives from its habit of feeding at the surface, appearing to be basking in the warmer water there. It has anatomical adaptations for filter-feeding, such as a greatly enlarged mouth and highly developed gill rakers. Its snout is conical, and the gill slits extend around the top and bottom of its head. The gill rakers, dark and bristle-like, are used to catch plankton as water filters through the mouth and over the gills. The teeth are numerous and very small and often number 100 per row. The teeth have a single conical cusp, are curved backwards and are the same on both the upper and lower jaws. This species has the smallest weight-for-weight brain size of any shark, reflecting its relatively passive lifestyle. Basking sharks have been shown from satellite tracking to overwinter in both continental shelf (less than ) and deeper waters. They may be found in either small shoals or alone. Despite their large size and threatening appearance, basking sharks are not aggressive and are harmless to humans. The basking shark has long been a commercially important fish as a source of food, shark fin, animal feed, and shark liver oil. Overexploitation has reduced its populations to the point where some have disappeared and others need protection. 
Taxonomy The basking shark is the only extant member of the family Cetorhinidae, part of the mackerel shark order Lamniformes. Johan Ernst Gunnerus first described the species as Cetorhinus maximus from a specimen found in Norway. The genus name Cetorhinus comes from the Greek ketos, meaning "marine monster" or "whale", and rhinos, meaning "nose". The species name maximus is from Latin and means "greatest". Following its initial description, further attempts at naming included: Squalus isodus, in 1819 by Italian zoologist Saverio Macri (1754–1848); Squalus elephas, by Charles Alexandre Lesueur in 1822; Squalus rashleighanus, by Jonathan Couch in 1838; Squalus cetaceus, by Laurens Theodorus Gronovius in 1854; Cetorhinus blainvillei by the Portuguese biologist Felix Antonio de Brito Capello (1828–1879) in 1869; Selachus pennantii, by Charles John Cornish in 1885; Cetorhinus maximus infanuncula, by Dutch zoologists Antonius Boudewijn Deinse (1885–1965) and Marcus Jan Adriani (1929–1995) in 1953; and Cetorhinus maximus normani, by Siccardi in 1961. Evolutionary history The oldest known members of Cetorhinidae are members of the extinct genus Keasius, from the middle Eocene of Antarctica, the Eocene of Oregon and possibly the Eocene of Russia. Members of the modern genus Cetorhinus appear during the Miocene, with members of the modern species appearing during the Late Miocene. The association of Pseudocetorhinus from the Late Triassic of Europe with Cetorhinidae is doubtful. Range and habitat The basking shark is a coastal-pelagic shark found worldwide in boreal to warm-temperate waters. It lives around the continental shelf and occasionally enters brackish waters. It is found from the surface down to at least . It prefers temperatures of but has been confirmed to cross the much warmer waters at the equator. It is often seen close to land, including in bays with narrow openings. 
The shark follows plankton concentrations in the water column, so it is often visible at the surface. It characteristically migrates with the seasons. Anatomy and appearance The basking shark regularly reaches in length, with some individuals reaching . The average length of an adult is around , weighing about . Historical sightings between 1884 and 1905 suggest basking sharks around in length, including three estimated at ~40 fod () and one at ~45 fod (), but these visual estimates lack good evidence. A specimen trapped in a herring net in the Bay of Fundy, Canada, in 1851 has been credited as the largest recorded. Its weight has been estimated at . A study looking at the growth and longevity of the basking shark suggested that individuals larger than ~ are unlikely. This is the second-largest extant fish species, after the whale shark. They possess the typical lamniform body plan and have been mistaken for great white sharks. The two species can be easily distinguished by the basking shark's cavernous jaw, up to in width, longer and more obvious gill slits that nearly encircle the head and are accompanied by well-developed gill rakers, smaller eyes, much larger overall size and smaller average girth. Great whites possess large, dagger-like teeth, while basking shark teeth are much smaller and hooked; only the first three or four rows of the upper jaw and six or seven rows of the lower jaw function. In behaviour, the great white is an active predator of large animals, not a filter feeder. Other distinctive characteristics include a strongly keeled caudal peduncle, highly textured skin covered in placoid scales and a mucus layer, a pointed snout—distinctly hooked in younger specimens—and a lunate caudal fin. In large individuals, the dorsal fin may flop to one side when above the surface. 
Colouration is highly variable (and likely dependent on observation conditions and the individual's condition): commonly, the colouring is dark brown to black or blue dorsally, fading to a dull white ventrally. The sharks are often noticeably scarred, possibly through encounters with lampreys or cookiecutter sharks. The basking shark's liver, which may account for 25% of its body weight, runs the entire length of the abdominal cavity and is thought to play a role in buoyancy regulation and long-term energy storage. On several occasions, "globster" corpses initially identified by non-scientists as sea serpents or plesiosaurs have later been identified as likely to be the decomposing carcasses of basking sharks, as in the Stronsay Beast and the Zuiyo-maru cases. Life history Basking sharks do not hibernate and are active year-round. In winter, basking sharks often move to deeper waters, even down to and have been tracked making vertical movements consistent with feeding on overwintering zooplankton. Surfacing behaviors They are slow-moving sharks (feeding at about ) and do not evade approaching boats (unlike great white sharks). They are not attracted to chum. The basking shark is large and slow, but it can breach, jumping entirely out of the water. This behaviour could be an attempt to dislodge parasites or commensals. Such interpretations are speculative, however, and difficult to verify; breaching in large marine animals such as whales and sharks might equally well be an intraspecific threat display of size and strength. Migration Argos system satellite tagging of 20 basking sharks in 2003 confirmed that basking sharks move thousands of kilometres during the summer and winter, seeking the richest zooplankton patches, often along ocean fronts. They shed and renew their gill rakers in an ongoing process, rather than over one short period. A 2009 study tagged 25 sharks off the coast of Cape Cod, Massachusetts, and indicated at least some migrate south in the winter. 
Remaining at depths between for many weeks, the tagged sharks crossed the equator to reach Brazil. One individual spent a month near the mouth of the Amazon River. They may undertake this journey to aid reproduction. On 23 June 2015, a , basking shark was caught accidentally by a fishing trawler in the Bass Strait near Portland, Victoria, in southeast Australia, the first basking shark caught in the region since the 1930s, and only the third reported in the region in 160 years. The whole shark was donated to the Victoria Museum for research, instead of the fins being sold for use in shark fin soup. While basking sharks are not infrequently seen in the Mediterranean Sea, and records exist from the Dardanelles Strait, it is unclear whether they historically reached the deeper basins of the Sea of Marmara, the Black Sea and the Sea of Azov. Social behaviour Basking sharks are usually solitary, but during summer months in particular, they aggregate in dense patches of zooplankton, where they engage in social behaviour. They can form sex-segregated shoals, usually in small numbers (three or four), but reportedly up to 100 individuals. Small schools in the Bay of Fundy and the Hebrides have been seen swimming nose to tail in circles; their social behaviour in summer months has been studied and is thought to represent courtship. Predators Basking sharks have few predators. White sharks have been reported to scavenge on the remains of these sharks. Killer whales have been observed feeding on basking sharks off California in the US and New Zealand. Lampreys are often seen attached to them, although they are unlikely to be able to cut through the shark's thick skin. Diet The basking shark is a ram feeder, filtering zooplankton, very small fish, and invertebrates from the water with its gill rakers by swimming forwards with its mouth open. A basking shark has been calculated to filter up to of water per hour swimming at an observed speed of . 
Basking sharks are not indiscriminate feeders on zooplankton. Samples taken in the presence of feeding individuals recorded zooplankton densities 75% higher than in adjacent non-feeding areas. Basking sharks feed preferentially in zooplankton patches dominated by small planktonic crustaceans called calanoid copepods (on average 1,700 individuals per cubic metre of water). They will also feed on copepods of the genera Pseudocalanus and Oithona. Basking sharks sometimes congregate in large groups; as many as 1,400 have been spotted along the northeastern U.S. coast. Samples taken near feeding sharks contained 2.5 times as many Calanus helgolandicus individuals per cubic metre, which were also found to be 50% longer. Unlike the megamouth shark and whale shark, the basking shark relies only on the water it pushes through its gills by swimming; the megamouth shark and whale shark can suck or pump water through their gills. Reproduction Basking sharks are ovoviviparous: the developing embryos first rely on a yolk sac, with no placental connection. Their seemingly useless teeth may play a role before birth in helping them feed on the mother's unfertilized ova (a behaviour known as oophagy). In females, only the right ovary appears to function; it is currently unknown why. Gestation is thought to span over a year (perhaps two to three years), with a small, though unknown, number of young born fully developed at . Only one pregnant female is known to have been caught; she was carrying six unborn young. Mating is thought to occur in early summer, and birthing in late summer, following the female's movement into shallow waters. The age of maturity is thought to be between six and 13 years, at a length of . Breeding frequency is thought to be two to four years. The exact lifespan of the basking shark is unknown, but experts estimate it to be about 50 years. 
Conservation Aside from direct catches, by-catches in trawl nets have been one of several threats to basking sharks. In New Zealand, basking sharks had been abundant historically; however, after the mass by-catches recorded in the 1990s and 2000s, confirmations of the species became very scarce. Management plans have been declared to promote effective conservation. In June 2018 the Department of Conservation classified the basking shark as "Threatened - Nationally Vulnerable" under the New Zealand Threat Classification System. The eastern north Pacific Ocean population is a U.S. National Marine Fisheries Service species of concern, one of those species about which the U.S. Government's National Oceanic and Atmospheric Administration has some concerns regarding status and threats, but for which insufficient information is available to indicate a need to list the species under the U.S. Endangered Species Act (ESA). The IUCN Red List indicates this as an endangered species. The endangered aspect of this shark was publicized in 2005 with a postage stamp issued by Guernsey Post. Importance to humans Historically, the basking shark has been a staple of fisheries because of its slow swimming speed, placid nature, and previously abundant numbers. Commercially, it was put to many uses: the flesh for food and fishmeal, the hide for leather, and its large liver (which has a high squalene content) for oil. It is currently fished mainly for its fins (for shark fin soup). Parts (such as cartilage) are also used in traditional Chinese medicine and as an aphrodisiac in Japan, further adding to demand. As a result of rapidly declining numbers, the basking shark has been protected in some territorial waters and trade in its products is restricted in many countries under CITES. Among others, it is fully protected in the United Kingdom and the Atlantic and Mexican Gulf regions of the United States. 
Since 2008, it has been illegal to fish for, or retain if accidentally caught, basking sharks in waters of the European Union. It is partially protected in Norway and New Zealand, as targeted commercial fishing is illegal, but accidental bycatch can be used (in Norway, any basking shark caught as bycatch and still alive must be released). As of March 2010, it was also listed under Annex I of the CMS Migratory Sharks Memorandum of Understanding. Once considered a nuisance along the Canadian Pacific coast, basking sharks were the target of a government eradication programme from 1945 to 1970. , efforts were underway to determine whether any sharks still lived in the area and monitor their potential recovery. It is tolerant of boats and divers approaching it and may even circle divers, making it an important draw for dive tourism in areas where it is common.
Biology and health sciences
Sharks
Animals
899115
https://en.wikipedia.org/wiki/Lunar%20distance
Lunar distance
The instantaneous Earth–Moon distance, or distance to the Moon, is the distance from the center of Earth to the center of the Moon. In contrast, the lunar distance (LD or ), or Earth–Moon characteristic distance, is a unit of measure in astronomy. More technically, it is the semi-major axis of the geocentric lunar orbit. The lunar distance is on average approximately , or 1.28 light-seconds; this is roughly 30 times Earth's diameter or 9.5 times Earth's circumference. Around 389 lunar distances make up an astronomical unit (AU; roughly the distance from Earth to the Sun). Lunar distance is commonly used to express the distance to near-Earth object encounters. The lunar semi-major axis is an important astronomical datum; the few-millimeter precision of the range measurements determines the semi-major axis to a few decimeters; it has implications for testing gravitational theories such as general relativity, and for refining other astronomical values, such as the mass, radius, and rotation of Earth. The measurement is also useful in expressing the lunar radius, as well as the distance to the Sun. Millimeter-precision measurements of the lunar distance are made by measuring the time taken for laser beam light to travel between stations on Earth and retroreflectors placed on the Moon. The Moon is spiraling away from Earth at an average rate of per year, as detected by the Lunar Laser Ranging experiment. Value Because of the influence of the Sun and other perturbations, the Moon's orbit around the Earth is not a precise ellipse. Nevertheless, different methods have been used to define a semi-major axis. Ernest William Brown provided a formula for the parallax of the Moon as viewed from opposite sides of the Earth, involving trigonometric terms. This is equivalent to a formula for the inverse of the distance, and the average value of this is the inverse of . 
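The unit conversions quoted above (1.28 light-seconds, roughly 30 Earth diameters, about 389 lunar distances per astronomical unit) can be checked with a few lines of arithmetic. A minimal sketch, using standard reference values for the constants (the IAU astronomical unit, the defined speed of light, and Earth's mean diameter and circumference, which are assumptions not taken from this text):

```python
# Back-of-envelope check of the lunar-distance conversions quoted above.
LD_KM = 384_399.0                  # lunar semi-major axis, km
AU_KM = 149_597_870.7              # IAU astronomical unit, km
C_KM_S = 299_792.458               # speed of light, km/s
EARTH_DIAMETER_KM = 12_742.0       # Earth's mean diameter (assumed value)
EARTH_CIRCUMFERENCE_KM = 40_075.0  # Earth's equatorial circumference (assumed)

light_seconds = LD_KM / C_KM_S                          # ≈ 1.28 s
ld_per_au = AU_KM / LD_KM                               # ≈ 389
earth_diameters = LD_KM / EARTH_DIAMETER_KM             # ≈ 30
earth_circumferences = LD_KM / EARTH_CIRCUMFERENCE_KM   # ≈ 9.5–9.6

print(f"{light_seconds:.2f} s, {ld_per_au:.0f} LD/AU, "
      f"{earth_diameters:.1f} diameters, {earth_circumferences:.1f} circumferences")
```

Each ratio reproduces the rounded figure given in the text.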
On the other hand, the time-averaged distance (rather than the inverse of the average inverse distance) between the centers of Earth and the Moon is . One can also model the orbit as an ellipse that is constantly changing, and in this case one can find a formula for the semi-major axis, again involving trigonometric terms. The average value by this method is 383,397 km. The actual distance varies over the course of the orbit of the Moon. Values at closest approach (perigee) or at farthest (apogee) are rarer the more extreme they are. The graph at right shows the distribution of perigee and apogee over six thousand years. Jean Meeus gives the following extreme values for 1500 BC to AD 8000: greatest distance: 406 719.97 km on January 7, AD 2266; smallest distance: 356 352.93 km on November 13, 1054 BC. An AU is lunar distances. A light-year is 24,611,700 lunar distances. Geostationary Earth orbit is from Earth's center, or LD (or LDEO). Variation The instantaneous lunar distance is constantly changing. The actual distance between the Moon and Earth can change as quickly as , or more than in just 6 hours, due to its non-circular orbit. There are other effects that also influence the lunar distance. Some factors include: The formula of Chapront and Touzé for the distance in kilometres is a series of trigonometric terms in G, the mean anomaly (more or less how far the Moon has moved from perigee), and D, the mean elongation (more or less how far it has moved from conjunction with the Sun at new moon). They can be calculated from G = 134.963 411 38° + 13.064 992 953 630°/d · t, D = 297.850 204 20° + 12.190 749 117 502°/d · t, where t is the time (in days) since January 1, 2000 (see Epoch (astronomy)). This shows that the smallest perigee occurs at either new moon or full moon (ca 356 870 km), as does the greatest apogee (ca 406 079 km), whereas the greatest perigee will be around half-moon (ca 370 180 km), as will be the smallest apogee (ca 404 593 km). 
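The two mean angles can be evaluated directly from the linear expressions just given. A minimal sketch (the function names are illustrative, not from any library):

```python
# Mean anomaly G and mean elongation D of the Moon, in degrees,
# from the linear terms quoted above; t is days since the J2000.0 epoch
# (January 1, 2000). Angles are reduced to the range [0, 360).
def mean_anomaly(t_days: float) -> float:
    return (134.96341138 + 13.064992953630 * t_days) % 360.0

def mean_elongation(t_days: float) -> float:
    return (297.85020420 + 12.190749117502 * t_days) % 360.0

# At the epoch itself (t = 0) the angles are just the constant terms:
print(mean_anomaly(0.0), mean_elongation(0.0))  # 134.963..., 297.850...
```

G near 0° corresponds to the Moon at perigee, and D near 0° or 180° to new or full moon, which is how the perigee/apogee pattern described in the text follows from the series.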
The exact values will be slightly different due to other terms. Twice in every full moon cycle of about 411 days there will be a minimal perigee and a maximal apogee, separated by two weeks, and a maximal perigee and a minimal apogee, also separated by two weeks. Perturbations and eccentricity The distance to the Moon can be measured to an accuracy of over a 1-hour sampling period, which results in an overall uncertainty of a decimeter for the semi-major axis. However, due to its elliptical orbit with varying eccentricity, the instantaneous distance varies with monthly periodicity. Furthermore, the distance is perturbed by the gravitational effects of various astronomical bodies – most significantly the Sun and less so Venus and Jupiter. Other forces responsible for minute perturbations are: gravitational attraction to other planets in the Solar System and to asteroids; tidal forces; and relativistic effects. The effect of radiation pressure from the Sun contributes an amount of ± to the lunar distance. Although the instantaneous uncertainty is a few millimeters, the measured lunar distance can change by more than from the mean value throughout a typical month. These perturbations are well understood and the lunar distance can be accurately modeled over thousands of years. Tidal dissipation Through the action of tidal forces, the angular momentum of Earth's rotation is slowly being transferred to the Moon's orbit. The result is that Earth's rate of spin is gradually decreasing (at a rate of ), and the lunar orbit is gradually expanding. The rate of recession is . However, it is believed that this rate has recently increased, as a rate of would imply that the Moon is only 1.5 billion years old, whereas scientific consensus supports an age of about 4 billion years. It is also believed that this anomalously high rate of recession may continue to accelerate. 
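The "only 1.5 billion years" figure above can be reproduced with a back-of-envelope integration. In the standard equilibrium-tide simplification, the recession rate scales as a⁻¹¹ᐟ², so integrating da/dt backwards from today to a ≈ 0 gives t = a / (6.5 ȧ). A sketch, assuming the commonly quoted present-day rate of about 3.8 cm/yr (an assumption, since the exact figure is elided in the text):

```python
# Naive tidal age of the lunar orbit: if da/dt ∝ a**(-11/2) with today's
# constants held fixed, then integrating gives t = a / (6.5 * adot).
A_M = 3.844e8           # current lunar semi-major axis, metres
ADOT_M_PER_YR = 0.038   # assumed present recession rate, ~3.8 cm/yr

age_yr = A_M / (6.5 * ADOT_M_PER_YR)
print(f"{age_yr / 1e9:.1f} billion years")  # ≈ 1.5–1.6 billion years
```

The result, roughly 1.5 billion years, is far less than the Moon's accepted age, which is why the present recession rate is considered anomalously high.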
Theoretically, the lunar distance will continue to increase until the Earth and Moon become tidally locked, as are Pluto and Charon. This would occur when the duration of the lunar orbital period equals the rotational period of Earth, which is estimated to be 47 Earth days. The two bodies would then be at equilibrium, and no further rotational energy would be exchanged. However, models predict that 50 billion years would be required to achieve this configuration, which is significantly longer than the expected lifetime of the Solar System. Orbital history Laser measurements show that the average lunar distance is increasing, which implies that the Moon was closer in the past, and that Earth's days were shorter. Fossil studies of mollusk shells from the Campanian era (80 million years ago) show that there were 372 days (of 23 h 33 min) per year during that time, which implies that the lunar distance was about (383,000 km or 238,000 mi). There is geological evidence that the average lunar distance was about (332,000 km or 205,000 mi) during the Precambrian Era; 2500 million years BP. The widely accepted giant impact hypothesis states that the Moon was created as a result of a catastrophic impact between Earth and another planet, resulting in a re-accumulation of fragments at an initial distance of (24,000 km or 15,000 mi). This theory assumes the initial impact to have occurred 4.5 billion years ago. History of measurement Until the late 1950s all measurements of lunar distance were based on optical angular measurements: the earliest accurate measurement was by Hipparchus in the 2nd century BC. The space age marked a turning point when the precision of this value was much improved. During the 1950s and 1960s, there were experiments using radar, lasers, and spacecraft, conducted with the benefit of computer processing and modeling. 
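The Campanian day length quoted above follows from assuming the year's absolute duration has stayed constant while being divided into 372 (shorter) days. A quick check, using the modern year of 365.25 × 24 hours (an assumed simplification):

```python
# Day length 80 million years ago, assuming a year of constant absolute
# duration divided into 372 days instead of today's 365.25.
HOURS_PER_MODERN_YEAR = 365.25 * 24   # ≈ 8766 h

campanian_day_h = HOURS_PER_MODERN_YEAR / 372
h = int(campanian_day_h)
m = round((campanian_day_h - h) * 60)
print(f"{h} h {m} min")  # ≈ 23 h 34 min
```

This lands within a minute of the 23 h 33 min figure from the mollusk-shell study; the small difference comes from the exact year length assumed.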
Some historically significant or otherwise interesting methods of determining the lunar distance: Parallax The oldest method of determining the lunar distance involved measuring the angle between the Moon and a chosen reference point from multiple locations, simultaneously. The synchronization can be coordinated by making measurements at a pre-determined time, or during an event which is observable to all parties. Before accurate mechanical chronometers, the synchronization event was typically a lunar eclipse, or the moment when the Moon crossed the meridian (if the observers shared the same longitude). This measurement technique is known as lunar parallax. For increased accuracy, the measured angle can be adjusted to account for refraction and distortion of light passing through the atmosphere. Lunar eclipse Early attempts to measure the distance to the Moon exploited observations of a lunar eclipse combined with knowledge of Earth's radius and an understanding that the Sun is much further away than the Moon. By observing the geometry of a lunar eclipse, the lunar distance can be calculated using trigonometry. The earliest accounts of attempts to measure the lunar distance using this technique were by the Greek astronomer and mathematician Aristarchus of Samos in the 3rd century BC and later by Hipparchus, whose calculations produced a result of ( or ). This method later found its way into the work of Ptolemy, who produced a result of ( or ) at its farthest point. Meridian crossing An expedition by British astronomer A.C.D. Crommelin observed lunar meridian transits on the same night from two different locations. Careful observations from 1905 to 1910 measured the angle of elevation at the moment when a specific lunar crater (Mösting A) crossed the local meridian, from stations at Greenwich and at the Cape of Good Hope. A distance was calculated with an uncertainty of , and this remained the definitive lunar distance value for the next half century. 
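The parallax methods above all reduce to the same trigonometry: the observers' baseline divided by the sine of the measured parallax angle gives the distance. For the limiting case of the horizontal parallax (a baseline of one Earth radius), a sketch, assuming the Moon's mean equatorial horizontal parallax of about 57 arcminutes (a standard figure, not taken from this text):

```python
import math

# Distance from horizontal parallax: d = R_earth / sin(p), where p is
# the angle one Earth radius subtends as seen from the Moon.
R_EARTH_KM = 6378.1        # equatorial radius of Earth
PARALLAX_ARCMIN = 57.0     # assumed mean equatorial horizontal parallax

p_rad = math.radians(PARALLAX_ARCMIN / 60.0)
distance_km = R_EARTH_KM / math.sin(p_rad)
print(f"{distance_km:,.0f} km")  # ≈ 385,000 km
```

A parallax of just under one degree recovers the familiar ~384,000 km, which is why even pre-telescopic angular measurements could get the scale of the lunar orbit roughly right.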
Occultations By recording the instant when the Moon occults a background star (or similarly, measuring the angle between the Moon and a background star at a predetermined moment), the lunar distance can be determined, as long as the measurements are taken from multiple locations of known separation. Astronomers O'Keefe and Anderson calculated the lunar distance by observing four occultations from nine locations in 1952. They calculated a semi-major axis of ( ± ). This value was refined in 1962 by Irene Fischer, who incorporated updated geodetic data to produce a value of ( ± ). Radar The distance to the Moon was first measured by means of radar in 1946 as part of Project Diana. Later, an experiment was conducted in 1957 at the U.S. Naval Research Laboratory that used the echo from radar signals to determine the Earth-Moon distance. Radar pulses lasting were broadcast from a diameter radio dish. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. From that measurement, the distance could be calculated. In practice, however, the signal-to-noise ratio was so low that an accurate measurement could not be reliably produced. The experiment was repeated in 1958 at the Royal Radar Establishment, in England. Radar pulses lasting were transmitted with a peak power of 2 megawatts, at a repetition rate of 260 pulses per second. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. Multiple signals were added together to obtain a reliable signal by superimposing oscilloscope traces onto photographic film. From the measurements, the distance was calculated with an uncertainty of . These initial experiments were intended to be proof-of-concept experiments and only lasted one day. Follow-on experiments lasting one month produced a semi-major axis of ( ± ), which was the most precise measurement of the lunar distance at the time.
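The radar (and later laser) experiments all rest on the same time-of-flight relation: the one-way distance is the speed of light times half the round-trip delay. A minimal sketch (the 2.564 s delay below is an illustrative value consistent with the mean lunar distance, not a quoted measurement):

```python
# Time-of-flight ranging, the principle behind the radar experiments
# described above: distance = speed of light * round-trip time / 2.
C_KM_S = 299_792.458    # speed of light, km/s

def distance_from_round_trip(seconds):
    """One-way distance for a pulse whose echo returns after `seconds`."""
    return C_KM_S * seconds / 2

# A round-trip delay of ~2.564 s corresponds roughly to the mean
# Earth-Moon distance.
print(round(distance_from_round_trip(2.564)))   # ~384,334 km
```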
Laser ranging An experiment which measured the round-trip time of flight of laser pulses reflected directly off the surface of the Moon was performed in 1962 by a team from the Massachusetts Institute of Technology and a Soviet team at the Crimean Astrophysical Observatory. During the Apollo missions in 1969, astronauts placed retroreflectors on the surface of the Moon for the purpose of refining the accuracy and precision of this technique. The measurements are ongoing and involve multiple laser facilities. The instantaneous precision of the Lunar Laser Ranging experiments can achieve sub-millimeter resolution, and laser ranging is the most reliable method of determining the lunar distance. The semi-major axis is determined to be 384,399.0 km. Amateur astronomers and citizen scientists Due to the modern accessibility of accurate timing devices, high-resolution digital cameras, GPS receivers, powerful computers and near-instantaneous communication, it has become possible for amateur astronomers to make high-accuracy measurements of the lunar distance. On May 23, 2007, digital photographs of the Moon during a near-occultation of Regulus were taken from two locations, in Greece and England. By measuring the parallax between the Moon and the chosen background star, the lunar distance was calculated. A more ambitious project called the "Aristarchus Campaign" was conducted during the lunar eclipse of 15 April 2014. During this event, participants were invited to record a series of five digital photographs from moonrise until culmination (the point of greatest altitude). The method took advantage of the fact that the Moon is actually closest to an observer when it is at its highest point in the sky, compared to when it is on the horizon. Although it appears that the Moon is biggest when it is near the horizon, the opposite is true. This phenomenon is known as the Moon illusion.
The reason for the difference in distance is that the distance from the center of the Moon to the center of the Earth is nearly constant throughout the night, but an observer on the surface of Earth is actually 1 Earth radius from the center of Earth. This offset brings them closest to the Moon when it is overhead. Modern cameras have achieved a resolution capable of capturing the Moon with enough precision to detect and measure this tiny variation in apparent size. The results of this experiment were calculated as LD = . The accepted value for that night was , which implied a accuracy. The benefit of this method is that the only measuring equipment needed is a modern digital camera (equipped with an accurate clock, and a GPS receiver). Other experimental methods of measuring the lunar distance that can be performed by amateur astronomers involve: Taking pictures of the Moon before it enters the penumbra and after it is completely eclipsed. Measuring, as precisely as possible, the time of the eclipse contacts. Taking good pictures of the partial eclipse when the shape and size of the Earth shadow are clearly visible. Taking a picture of the Moon including, in the same field of view, Spica and Mars – from various locations.
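The one-Earth-radius effect exploited by the Aristarchus Campaign is easy to quantify. A sketch using the accepted mean lunar distance and Earth radius:

```python
import math

# Topocentric lunar distance: an observer stands one Earth radius from
# Earth's center, so the Moon is nearer when overhead than on the horizon.
D = 384_400.0   # mean geocentric Earth-Moon distance, km
R = 6_371.0     # mean Earth radius, km

overhead = D - R                    # Moon at the zenith
horizon = math.sqrt(D**2 - R**2)    # Moon at altitude 0 deg (line of
                                    # sight perpendicular to the vertical)

size_ratio = horizon / overhead     # ratio of apparent diameters
print(round(size_ratio, 4))         # ~1.0167: ~1.7% bigger at the zenith
```

This roughly 1.7% change in apparent diameter over a night is the "tiny variation" that modern digital cameras can resolve.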
Physical sciences
Astronomical
Basics and measurement
899474
https://en.wikipedia.org/wiki/Acre-foot
Acre-foot
The acre-foot is a non-SI unit of volume equal to about commonly used in the United States in reference to large-scale water resources, such as reservoirs, aqueducts, canals, sewer flow capacity, irrigation water, and river flows. An acre-foot equals approximately an eight-lane swimming pool, long, wide and deep. Definitions As the name suggests, an acre-foot is defined as the volume of one acre of surface area to a depth of one foot. Since an acre is defined as a chain by a furlong (i.e. ), an acre-foot is . There have been two definitions of the acre-foot (differing by about 0.0006%), using either the international foot (0.3048 m) or a U.S. survey foot (exactly meters since 1893). On December 31, 2022, the National Institute of Standards and Technology, the National Geodetic Survey, and the United States Department of Commerce deprecated use of the US survey foot and recommended conversion to either the meter or the international foot. Application As a rule of thumb in US water management, one acre-foot is taken to be the planned annual water usage of a suburban family household. In some areas of the desert Southwest, where water conservation is followed and often enforced, a typical family uses only about of water per year. One acre-foot/year is approximately . The acre-foot per year has been used historically in the US in many water-management agreements, for example the Colorado River Compact, which divides among seven western US states. Water reservoir capacities in the US are commonly given in thousands of acre-feet, abbreviated TAF or KAF. In most other countries, the metric system is in common use and water volumes are normally expressed in liters, cubic meters or cubic kilometers. One acre-foot is approximately equivalent to 1.233 megaliters. Large bodies of water may be measured in cubic kilometers (1,000,000,000 m³, or 1,000 gigaliters), with 1 million acre-feet approximately equaling 1.233 km³.
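The definitions above (a chain by a furlong, one foot deep, using the international foot) can be turned directly into the metric equivalents quoted in the text. A minimal sketch:

```python
# Acre-foot conversions using the international foot (0.3048 m exactly).
FT_PER_CHAIN = 66                 # 1 chain = 66 ft
FT_PER_FURLONG = 660              # 1 furlong = 660 ft
ACRE_SQ_FT = FT_PER_CHAIN * FT_PER_FURLONG   # 43,560 sq ft
ACRE_FOOT_CU_FT = ACRE_SQ_FT * 1             # one foot deep

M_PER_FT = 0.3048
ACRE_FOOT_M3 = ACRE_FOOT_CU_FT * M_PER_FT**3

print(ACRE_FOOT_CU_FT)            # 43560 cubic feet
print(round(ACRE_FOOT_M3, 2))     # ~1233.48 m^3, i.e. ~1.233 megaliters
```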
Physical sciences
Volume
Basics and measurement
899880
https://en.wikipedia.org/wiki/Sea%20surface%20temperature
Sea surface temperature
Sea surface temperature (or ocean surface temperature) is the temperature of ocean water close to the surface. The exact meaning of surface varies in the literature and in practice. It is usually between and below the sea surface. Sea surface temperatures greatly modify air masses in the Earth's atmosphere within a short distance of the shore. The thermohaline circulation has a major impact on average sea surface temperature throughout most of the world's oceans. Warm sea surface temperatures can develop and strengthen cyclones over the ocean. Tropical cyclones can also cause a cool wake, due to turbulent mixing of the upper of the ocean. Sea surface temperature changes during the day, like the air above it, but to a lesser degree. There is less variation in sea surface temperature on breezy days than on calm days. Coastal sea surface temperatures can cause offshore winds to generate upwelling, which can significantly cool or warm nearby landmasses, but shallower waters over a continental shelf are often warmer. Onshore winds can cause a considerable warm-up even in areas where upwelling is fairly constant, such as the northwest coast of South America. Its values are important within numerical weather prediction as the sea surface temperature influences the atmosphere above, such as in the formation of sea breezes and sea fog. It is very likely that global mean sea surface temperature increased by 0.88 °C between 1850–1900 and 2011–2020 due to global warming, with most of that warming (0.60 °C) occurring between 1980 and 2020. Temperatures over land are rising faster than ocean temperatures because the ocean absorbs about 90% of the excess heat generated by climate change. Definitions Sea surface temperature (SST), or ocean surface temperature, is the water temperature close to the ocean's surface. The exact meaning of surface varies according to the measurement method used, but it is between and below the sea surface.
For comparison, the sea surface skin temperature relates to the top 20 or so micrometres of the ocean's surface. The definition proposed by the IPCC for sea surface temperature does not specify the number of metres but focuses more on measurement techniques: Sea surface temperature is "the subsurface bulk temperature in the top few metres of the ocean, measured by ships, buoys and drifters. [...] Satellite measurements of skin temperature (uppermost layer; a micrometre thick) in the infrared or the top centimetre or so in the microwave are also used, but must be adjusted to be compatible with the bulk temperature." The temperature further below that is called ocean temperature or deeper ocean temperature. Ocean temperatures (more than 20 metres below the surface) also vary by region and time, and they contribute to variations in ocean heat content and ocean stratification. The increase of both ocean surface temperature and deeper ocean temperature is an important effect of climate change on oceans. Extent of "surface" The extent of the ocean surface down into the ocean is influenced by the amount of mixing that takes place between the surface water and the deeper water. This depends on the temperature: in the tropics the warm surface layer of about 100 m is quite stable and does not mix much with deeper water, while near the poles winter cooling and storms make the surface layer denser, and it mixes to great depth and then stratifies again in summer. This is why there is no simple single depth for the ocean surface. The photic depth of the ocean is typically about 100 m and is related to this heated surface layer. It can be up to around 200 m deep in the open ocean. Variations and changes Local variations The sea surface temperature (SST) has a diurnal range, just like the Earth's atmosphere above, though to a lesser degree due to its greater thermal inertia. On calm days, the temperature can vary by .
The temperature of the ocean at depth lags the Earth's atmosphere temperature by 15 days per , which means for locations like the Aral Sea, temperatures near its bottom reach a maximum in December and a minimum in May and June. Near the coastline, some offshore and longshore winds move the warm waters near the surface offshore, and replace them with cooler water from below in the process known as Ekman transport. This pattern generally increases nutrients for marine life in the region, and can have a profound effect in some regions where the bottom waters are particularly nutrient-rich. Offshore of river deltas, freshwater flows over the top of the denser seawater, which allows it to heat faster due to limited vertical mixing. Remotely sensed SST can be used to detect the surface temperature signature due to tropical cyclones. In general, an SST cooling is observed after the passing of a hurricane, primarily as the result of mixed-layer deepening and surface heat losses. In the wake of several-day-long Saharan dust outbreaks across the adjacent northern Atlantic Ocean, sea surface temperatures are reduced 0.2 °C to 0.4 °C (0.3 to 0.7 °F). Other sources of short-term SST fluctuation include extratropical cyclones, rapid influxes of glacial fresh water and concentrated phytoplankton blooms due to seasonal cycles or agricultural run-off. The tropical ocean has been warming faster than other regions since 1950, with the greatest rates of warming in the tropical Indian Ocean, western Pacific Ocean, and western boundary currents of the subtropical gyres. However, the eastern Pacific Ocean, subtropical North Atlantic Ocean, and Southern Ocean have warmed more slowly than the global average or have experienced cooling since the 1950s. Atlantic Multidecadal Oscillation Ocean currents, such as the Atlantic Multidecadal Oscillation, can affect sea surface temperatures over several decades.
The Atlantic Multidecadal Oscillation (AMO) is an important driver of North Atlantic SST and Northern Hemisphere climate, but the mechanisms controlling AMO variability remain poorly understood. Atmospheric internal variability, changes in ocean circulation, or anthropogenic drivers may control the multidecadal temperature variability associated with AMO. These changes in North Atlantic SST may influence winds in the subtropical North Pacific and produce warmer SSTs in the western Pacific Ocean. Regional variations El Niño is defined by prolonged differences in Pacific Ocean surface temperatures when compared with the average value. The accepted definition is a warming or cooling of at least 0.5 °C (0.9 °F) averaged over the east-central tropical Pacific Ocean. Typically, this anomaly happens at irregular intervals of 2–7 years and lasts nine months to two years. The average period length is 5 years. When this warming or cooling occurs for only seven to nine months, it is classified as El Niño/La Niña "conditions"; when it occurs for more than that period, it is classified as El Niño/La Niña "episodes". The sign of an El Niño in the sea surface temperature pattern is when warm water spreads from the west Pacific and the Indian Ocean to the east Pacific. It takes the rain with it, causing extensive drought in the western Pacific and rainfall in the normally dry eastern Pacific. El Niño's warm rush of nutrient-poor tropical water, heated by its eastward passage in the Equatorial Current, replaces the cold, nutrient-rich surface water of the Humboldt Current. When El Niño conditions last for many months, extensive ocean warming and the reduction in Easterly Trade winds limits upwelling of cold nutrient-rich deep water and its economic impact to local fishing for an international market can be serious. 
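The thresholds in the El Niño definition above lend themselves to a simple classification rule. A toy sketch (the function and its exact cutoffs are illustrative restatements of the text, not an official algorithm):

```python
def classify_enso(anomaly_c, duration_months):
    """Toy ENSO classifier from the thresholds quoted in the text:
    a sustained east-central tropical Pacific SST anomaly of at least
    +/-0.5 C; 7-9 months => "conditions", longer => "episode"."""
    if abs(anomaly_c) < 0.5 or duration_months < 7:
        return "neutral"
    phase = "El Niño" if anomaly_c > 0 else "La Niña"
    kind = "conditions" if duration_months <= 9 else "episode"
    return f"{phase} {kind}"

print(classify_enso(+0.8, 12))   # El Niño episode
print(classify_enso(-0.6, 8))    # La Niña conditions
print(classify_enso(+0.3, 24))   # neutral
```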
Among scientists, there is medium confidence that the tropical Pacific will transition to a mean pattern resembling that of El Niño on a centennial time scale, but there is still high uncertainty in tropical Pacific SST projections because it is difficult to capture El Niño variability in climate models. Recent increase due to climate change Overall, scientists project that all regions of the oceans will warm by 2050, but models disagree on the SST changes expected in the subpolar North Atlantic, the equatorial Pacific, and the Southern Ocean. The future global mean SST increase for the period 1995–2014 to 2081–2100 is 0.86 °C under the most modest greenhouse gas emissions scenarios, and up to 2.89 °C under the most severe emissions scenarios. Measurement There are a variety of techniques for measuring this parameter that can potentially yield different results because different things are actually being measured. Away from the immediate sea surface, general temperature measurements are accompanied by a reference to the specific depth of measurement. This is because of significant differences encountered between measurements made at different depths, especially during the daytime, when low wind speed and high sunshine conditions may lead to the formation of a warm layer at the ocean's surface and strong vertical temperature gradients (a diurnal thermocline). Sea surface temperature measurements are confined to the top portion of the ocean, known as the near-surface layer. Thermometers The sea surface temperature was one of the first oceanographic variables to be measured. Benjamin Franklin suspended a mercury thermometer from a ship while travelling between the United States and Europe in his survey of the Gulf Stream in the late eighteenth century. SST was later measured by dipping a thermometer into a bucket of water that was manually drawn from the sea surface.
The first automated technique for determining SST was accomplished by measuring the temperature of water in the intake port of large ships, which was underway by 1963. These observations have a warm bias of around due to the heat of the engine room. Fixed weather buoys measure the water temperature at a depth of . Measurements of SST have had inconsistencies over the last 130 years due to the way they were taken. In the nineteenth century, measurements were taken in a bucket off a ship. However, there was a slight variation in temperature because of the differences in buckets. Samples were collected in either a wood or an uninsulated canvas bucket, but the canvas bucket cooled quicker than the wood bucket. The sudden change in temperature between 1940 and 1941 was the result of an undocumented change in procedure. The samples were taken near the engine intake because it was too dangerous to use lights to take measurements over the side of the ship at night. Many different drifting buoys exist around the world that vary in design, and the location of reliable temperature sensors varies. These measurements are beamed to satellites for automated and immediate data distribution. A large network of coastal buoys in U.S. waters is maintained by the National Data Buoy Center (NDBC). Between 1985 and 1994, an extensive array of moored and drifting buoys was deployed across the equatorial Pacific Ocean designed to help monitor and predict the El Niño phenomenon. Weather satellites Weather satellites have been available to determine sea surface temperature information since 1967, with the first global composites created during 1970. Since 1982, satellites have been increasingly utilized to measure SST and have allowed its spatial and temporal variation to be viewed more fully. Satellite measurements of SST are in reasonable agreement with in situ temperature measurements. 
The satellite measurement is made by sensing the ocean radiation in two or more wavelengths within the infrared part of the electromagnetic spectrum or other parts of the spectrum, which can then be empirically related to SST. These wavelengths are chosen because they are: within the peak of the blackbody radiation expected from the Earth, and able to transmit adequately well through the atmosphere. The satellite-measured SST provides both a synoptic view of the ocean and a high frequency of repeat views, allowing the examination of basin-wide upper ocean dynamics not possible with ships or buoys. NASA's (National Aeronautics and Space Administration) Moderate Resolution Imaging Spectroradiometer (MODIS) SST satellites have been providing global SST data since 2000, available with a one-day lag. NOAA's GOES (Geostationary Operational Environmental Satellites) are geostationary above the Western Hemisphere, which enables them to deliver SST data on an hourly basis with only a few hours of lag time. There are several difficulties with satellite-based absolute SST measurements. First, in infrared remote sensing methodology the radiation emanates from the top "skin" of the ocean, approximately the top 0.01 mm or less, which may not represent the bulk temperature of the upper meter of ocean, due primarily to effects of solar surface heating during the daytime, reflected radiation, as well as sensible heat loss and surface evaporation. All these factors make it somewhat difficult to compare satellite data to measurements from buoys or shipboard methods, complicating ground truth efforts. Secondly, the satellite cannot look through clouds, creating a cool bias in satellite-derived SSTs within cloudy areas. However, passive microwave techniques can accurately measure SST and penetrate cloud cover. Within atmospheric sounder channels on weather satellites, which peak just above the ocean's surface, knowledge of the sea surface temperature is important to their calibration.
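The two-wavelength infrared approach described above is commonly implemented as a "split-window" regression, in which the difference between two nearby infrared channels serves as a proxy for atmospheric water-vapour absorption. The sketch below illustrates the form of such a regression only; the coefficients are invented placeholders, not values from any operational product:

```python
def split_window_sst(t11_k, t12_k, a=1.02, b=2.1, c=-7.0):
    """Toy split-window SST estimate (kelvins) from brightness
    temperatures in the ~11 and ~12 micron channels.  The t11 - t12
    difference corrects for water-vapour absorption.  Coefficients
    a, b, c are illustrative placeholders, normally fit by regression
    against in situ buoy measurements."""
    return a * t11_k + b * (t11_k - t12_k) + c

# Hypothetical brightness temperatures of 290.0 K and 289.0 K:
print(round(split_window_sst(290.0, 289.0), 1))   # ~290.9 K
```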
Importance to the Earth's atmosphere Sea surface temperature affects the behavior of the Earth's atmosphere above, so its initialization into atmospheric models is important. While sea surface temperature is important for tropical cyclogenesis, it is also important in determining the formation of sea fog and sea breezes. Heat from underlying warmer waters can significantly modify an air mass over distances as short as to . For example, southwest of Northern Hemisphere extratropical cyclones, curved cyclonic flow bringing cold air across relatively warm water bodies can lead to narrow lake-effect snow (or sea-effect) bands. Those bands bring strong localized precipitation, often in the form of snow, since large water bodies such as lakes efficiently store heat that results in significant temperature differences—larger than —between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds which produce snow showers. The temperature decrease with height and cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the taller the clouds get, and the greater the precipitation rate becomes. Tropical cyclones Ocean temperature of at least 26.5 °C (79.7 °F) extending through at least a 50-metre depth is one of the precursors needed to maintain a tropical cyclone (a type of mesocyclone). These warm waters are needed to maintain the warm core that fuels tropical systems. This value is well above 16.1 °C (60.9 °F), the long-term global average surface temperature of the oceans. However, this requirement can be considered only a general baseline because it assumes that the ambient atmospheric environment surrounding an area of disturbed weather presents average conditions. Tropical cyclones have intensified when SSTs were slightly below this standard temperature.
Tropical cyclones are known to form even when normal conditions are not met. For example, cooler air temperatures at a higher altitude (e.g., at the 500 hPa level, or 5.9 km) can lead to tropical cyclogenesis at lower water temperatures, as a certain lapse rate is required to force the atmosphere to be unstable enough for convection. In a moist atmosphere, this lapse rate is 6.5 °C/km, while in an atmosphere with less than 100% relative humidity, the required lapse rate is 9.8 °C/km. At the 500 hPa level, the air temperature averages −7 °C (18 °F) within the tropics, but air in the tropics is normally dry at this height, giving the air room to wet-bulb, or cool as it moistens, to a more favorable temperature that can then support convection. A wet-bulb temperature at 500 hPa in a tropical atmosphere of is required to initiate convection if the water temperature is , and this temperature requirement increases or decreases proportionally by 1 °C in the sea surface temperature for each 1 °C change at 500 hPa. Inside a cold cyclone, 500 hPa temperatures can fall as low as , which can initiate convection even in the driest atmospheres. This also explains why moisture in the mid-levels of the troposphere, roughly at the 500 hPa level, is normally a requirement for development. However, when dry air is found at the same height, temperatures at 500 hPa need to be even colder, as dry atmospheres require a greater lapse rate for instability than moist atmospheres. At heights near the tropopause, the 30-year average temperature (as measured in the period encompassing 1961 through 1990) was −77 °C (−132 °F). One example of a tropical cyclone maintaining itself over cooler waters was Epsilon late in the 2005 Atlantic hurricane season.
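The lapse-rate reasoning above can be made concrete with the figures quoted in the text (26.5 °C at the surface, −7 °C at the 500 hPa level, taken to lie near 5.9 km). A sketch:

```python
# Mean environmental lapse rate between the sea surface and 500 hPa,
# compared with the instability thresholds quoted in the text.
sst_c = 26.5      # minimum SST for tropical cyclone maintenance, C
t500_c = -7.0     # mean tropical temperature at the 500 hPa level, C
z500_km = 5.9     # approximate height of the 500 hPa level, km

lapse_rate = (sst_c - t500_c) / z500_km
print(round(lapse_rate, 2))            # ~5.68 C/km

MOIST_THRESHOLD = 6.5   # C/km needed for instability in saturated air
DRY_THRESHOLD = 9.8     # C/km needed in unsaturated air
# The mean profile falls short of even the moist threshold, which is
# why mid-level air must first cool toward its wet-bulb temperature
# before deep convection can begin.
print(lapse_rate >= MOIST_THRESHOLD)   # False
```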
Physical sciences
Oceanography
Earth science
899973
https://en.wikipedia.org/wiki/Definition%20of%20planet
Definition of planet
The definition of the term planet has changed several times since the word was coined by the ancient Greeks. Greek astronomers employed the term (), 'wandering stars', for star-like objects which apparently moved over the sky. Over the millennia, the term has included a variety of different celestial bodies, from the Sun and the Moon to satellites and asteroids. In modern astronomy, there are two primary conceptions of a planet. A planet can be an astronomical body that dynamically dominates its region (that is, whether it controls the fate of other smaller bodies in its vicinity) or it is defined to be in hydrostatic equilibrium (it has become gravitationally rounded and compacted). These may be characterized as the dynamical dominance definition and the geophysical definition. The issue of a clear definition for planet came to a head in January 2005 with the discovery of the trans-Neptunian object Eris, a body more massive than the smallest then-accepted planet, Pluto. In its August 2006 response, the International Astronomical Union (IAU), which is recognised by astronomers as the international governing body responsible for resolving issues of nomenclature, released its decision on the matter during a meeting in Prague. This definition, which applies only to the Solar System (though exoplanets had been addressed in 2003), states that a planet is a body that orbits the Sun, is massive enough for its own gravity to make it round, and has "cleared its neighbourhood" of smaller objects approaching its orbit. Pluto fulfills the first two of these criteria, but not the third and therefore does not qualify as a planet under this formalized definition. The IAU's decision has not resolved all controversies. While many astronomers have accepted it, some planetary scientists have rejected it outright, proposing a geophysical or similar definition instead. 
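The three IAU criteria above amount to a small decision procedure. A toy restatement (the 2006 resolution also created the "dwarf planet" category for bodies, like Pluto, that satisfy only the first two criteria; the function name and boolean inputs are illustrative):

```python
def iau_class(orbits_sun, round_shape, cleared_neighbourhood):
    """Toy restatement of the 2006 IAU Solar System classification:
    all three criteria -> planet; round and orbiting the Sun but not
    dynamically dominant -> dwarf planet; otherwise a small body."""
    if orbits_sun and round_shape and cleared_neighbourhood:
        return "planet"
    if orbits_sun and round_shape:
        return "dwarf planet"
    return "small Solar System body"

print(iau_class(True, True, True))    # e.g. Earth  -> planet
print(iau_class(True, True, False))   # e.g. Pluto  -> dwarf planet
```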
History Planets in antiquity While knowledge of the planets predates history and is common to most civilizations, the word planet dates back to ancient Greece. Most Greeks believed the Earth to be stationary and at the center of the universe in accordance with the geocentric model and that the objects in the sky, and indeed the sky itself, revolved around it (an exception was Aristarchus of Samos, who put forward an early version of heliocentrism). Greek astronomers employed the term (), 'wandering stars', to describe those starlike lights in the heavens that moved over the course of the year, in contrast to the (), the 'fixed stars', which stayed motionless relative to one another. The five bodies currently called "planets" that were known to the Greeks were those visible to the naked eye: Mercury, Venus, Mars, Jupiter, and Saturn. Graeco-Roman cosmology commonly considered seven planets, with the Sun and the Moon counted among them (as is the case in modern astrology); however, there is some ambiguity on that point, as many ancient astronomers distinguished the five star-like planets from the Sun and Moon. As the 19th-century German naturalist Alexander von Humboldt noted in his work Cosmos, Of the seven cosmical bodies which, by their continually varying relative positions and distances apart, have ever since the remotest antiquity been distinguished from the "unwandering orbs" of the heaven of the "fixed stars", which to all sensible appearance preserve their relative positions and distances unchanged, five only—Mercury, Venus, Mars, Jupiter and Saturn—wear the appearance of stars—"cinque stellas errantes"—while the Sun and Moon, from the size of their disks, their importance to man, and the place assigned to them in mythological systems, were classed apart. In his Timaeus, written in roughly 360 BCE, Plato mentions, "the Sun and Moon and five other stars, which are called the planets". 
His student Aristotle makes a similar distinction in his On the Heavens: "The movements of the sun and moon are fewer than those of some of the planets". In his Phaenomena, which set to verse an astronomical treatise written by the philosopher Eudoxus in roughly 350 BCE, the poet Aratus describes "those five other orbs, that intermingle with [the constellations] and wheel wandering on every side of the twelve figures of the Zodiac." In his Almagest written in the 2nd century, Ptolemy refers to "the Sun, Moon and five planets." Hyginus explicitly mentions "the five stars which many have called wandering, and which the Greeks call Planeta." Marcus Manilius, a Latin writer who lived during the time of Caesar Augustus and whose poem Astronomica is considered one of the principal texts for modern astrology, says, "Now the dodecatemory is divided into five parts, for so many are the stars called wanderers which with passing brightness shine in heaven." The single view of the seven planets is found in Cicero's Dream of Scipio, written sometime around 53 BCE, where the spirit of Scipio Africanus proclaims, "Seven of these spheres contain the planets, one planet in each sphere, which all move contrary to the movement of heaven." In his Natural History, written in 77 CE, Pliny the Elder refers to "the seven stars, which owing to their motion we call planets, though no stars wander less than they do." Nonnus, the 5th century Greek poet, says in his Dionysiaca, "I have oracles of history on seven tablets, and the tablets bear the names of the seven planets." Planets in the Middle Ages Medieval and Renaissance writers generally accepted the idea of seven planets. 
The standard medieval introduction to astronomy, Sacrobosco's De Sphaera, includes the Sun and Moon among the planets, the more advanced Theorica planetarum presents the "theory of the seven planets," while the instructions to the Alfonsine Tables show how "to find by means of tables the mean motuses of the sun, moon, and the rest of the planets." In his Confessio Amantis, 14th-century poet John Gower, referring to the planets' connection with the craft of alchemy, writes, "Of the planetes ben begonne/The gold is tilted to the Sonne/The Mone of Selver hath his part...", indicating that the Sun and the Moon were planets. Even Nicolaus Copernicus, who rejected the geocentric model, was ambivalent concerning whether the Sun and Moon were planets. In his De Revolutionibus, Copernicus clearly separates "the sun, moon, planets and stars"; however, in his Dedication of the work to Pope Paul III, Copernicus refers to, "the motion of the sun and the moon... and of the five other planets." Earth When Copernicus's heliocentric model was accepted over the geocentric, Earth was placed among the planets and the Sun and Moon were reclassified, necessitating a conceptual revolution in the understanding of planets. As the historian of science Thomas Kuhn noted in his book, The Structure of Scientific Revolutions: The Copernicans who denied its traditional title 'planet' to the sun ... were changing the meaning of 'planet' so that it would continue to make useful distinctions in a world where all celestial bodies ... were seen differently from the way they had been seen before... Looking at the moon, the convert to Copernicanism ... says, 'I once took the moon to be (or saw the moon as) a planet, but I was mistaken.' 
Copernicus obliquely refers to Earth as a planet in De Revolutionibus when he says, "Having thus assumed the motions which I ascribe to the Earth later on in the volume, by long and intense study I finally found that if the motions of the other planets are correlated with the orbiting of the earth..." Galileo also asserts that Earth is a planet in the Dialogue Concerning the Two Chief World Systems: "[T]he Earth, no less than the moon or any other planet, is to be numbered among the natural bodies that move circularly." Modern planets In 1781, the astronomer William Herschel was searching the sky for elusive stellar parallaxes when he observed what he termed a comet in the constellation of Taurus. Unlike stars, which remained mere points of light even under high magnification, this object's size increased in proportion to the power used. That this strange object might have been a planet simply did not occur to Herschel; the five planets beyond Earth had been part of humanity's conception of the universe since antiquity. As the asteroids had yet to be discovered, comets were the only moving objects one expected to find in a telescope. However, unlike a comet, this object's orbit was nearly circular and within the ecliptic plane. Before Herschel announced his discovery of his "comet", his colleague, British Astronomer Royal Nevil Maskelyne, wrote to him, saying "I don't know what to call it. It is as likely to be a regular planet moving in an orbit nearly circular to the sun as a Comet moving in a very eccentric ellipsis. I have not yet seen any coma or tail to it." The "comet" was also very far away, too far away for a mere comet to resolve itself. Eventually it was recognized as the seventh planet and named Uranus after the father of Saturn.
Gravitationally induced irregularities in Uranus's observed orbit led eventually to the discovery of Neptune in 1846, and presumed irregularities in Neptune's orbit subsequently led to a search which did not find the perturbing object (later found to be spurious) but did find Pluto in 1930. Pluto was initially believed to be roughly the mass of the Earth, but observation gradually shrank its estimated mass until it was revealed to be a mere one five-hundredth as large, far too small to have influenced Neptune's orbit at all. In 1989, Voyager 2 determined the irregularities to be due to an overestimation of Neptune's mass. Satellites When Copernicus placed Earth among the planets, he also placed the Moon in orbit around Earth, making the Moon the first natural satellite to be identified. When Galileo discovered his four satellites of Jupiter in 1610, they lent weight to Copernicus's argument, because if other planets could have satellites, then Earth could too. However, there remained some confusion as to whether these objects were "planets"; Galileo referred to them as "four planets flying around the star of Jupiter at unequal intervals and periods with wonderful swiftness." Similarly, Christiaan Huygens, upon discovering Saturn's largest moon Titan in 1655, employed many terms to describe it, including "planeta" (planet), "stella" (star), "luna" (moon), and "satellite" (attendant), a word coined by Johannes Kepler. Giovanni Cassini, in announcing his discovery of Saturn's moons Iapetus and Rhea in 1671 and 1672, described them as Nouvelles Planetes autour de Saturne ("New planets around Saturn"). However, when the "Journal de Scavans" reported Cassini's discovery of two new Saturnian moons (Dione and Tethys) in 1686, it referred to them strictly as "satellites", though it sometimes referred to Saturn as the "primary planet". 
When William Herschel announced his discovery of two objects in orbit around Uranus in 1787 (Titania and Oberon), he referred to them as "satellites" and "secondary planets". All subsequent reports of natural satellite discoveries used the term "satellite" exclusively, though the 1868 book "Smith's Illustrated Astronomy" referred to satellites as "secondary planets". Minor planets One of the unexpected results of William Herschel's discovery of Uranus was that it appeared to validate Bode's law, a mathematical function which generates the size of the semimajor axis of planetary orbits. Astronomers had considered the "law" a meaningless coincidence, but Uranus fell at very nearly the exact distance it predicted. Since Bode's law also predicted a body between Mars and Jupiter that at that point had not been observed, astronomers turned their attention to that region in the hope that it might be vindicated again. Finally, in 1801, astronomer Giuseppe Piazzi found a miniature new world, Ceres, lying at just the correct point in space. The object was hailed as a new planet. Then in 1802, Heinrich Olbers discovered Pallas, a second "planet" at roughly the same distance from the Sun as Ceres. The fact that two planets could occupy the same orbit was an affront to centuries of thinking. In 1804, another world, Juno, was discovered in a similar orbit. In 1807, Olbers discovered a fourth object, Vesta, at a similar orbital distance. Herschel suggested that these four worlds be given their own separate classification, asteroids (meaning "starlike" since they were too small for their disks to resolve and thus resembled stars), though most astronomers preferred to refer to them as planets. This conception was entrenched by the fact that, due to the difficulty of distinguishing asteroids from yet-uncharted stars, those four remained the only asteroids known until 1845. Science textbooks in 1828, after Herschel's death, still numbered the asteroids among the planets. 
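Bode's law, as the astronomers of the day applied it, is easy to state numerically. A minimal sketch (the standard formulation, with distances in astronomical units; the "actual" values in the comments are rounded modern figures added for comparison, not drawn from the text):

```python
# Titius-Bode law: a = 0.4 + 0.3 * 2**n AU, with n = -infinity, 0, 1, 2, ...
# The n = 3 slot is the then-unobserved gap between Mars and Jupiter
# where Ceres was later found; n = 6 is where Uranus turned up.

def bode_distance(n):
    """Predicted semimajor axis in AU; n=None stands for the n = -infinity term."""
    if n is None:
        return 0.4
    return 0.4 + 0.3 * 2 ** n

predictions = [
    ("Mercury", None),  # predicts 0.4  (actual ~0.39)
    ("Venus", 0),       # predicts 0.7  (actual ~0.72)
    ("Earth", 1),       # predicts 1.0  (actual 1.00)
    ("Mars", 2),        # predicts 1.6  (actual ~1.52)
    ("Ceres", 3),       # predicts 2.8  (actual ~2.77)
    ("Jupiter", 4),     # predicts 5.2  (actual ~5.20)
    ("Saturn", 5),      # predicts 10.0 (actual ~9.58)
    ("Uranus", 6),      # predicts 19.6 (actual ~19.2)
]

for body, n in predictions:
    print(f"{body}: {bode_distance(n):.1f} AU")
```

The close match at n = 6 (19.6 AU predicted against roughly 19.2 AU observed) is the coincidence that sent astronomers hunting in the gap between Mars and Jupiter.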
With the arrival of more refined star charts, the search for asteroids resumed, and a fifth and sixth were discovered by Karl Ludwig Hencke in 1845 and 1847. By 1851 the number of asteroids had increased to 15, and a new method of classifying them, by affixing a number before their names in order of discovery, was adopted, inadvertently placing them in their own distinct category. Ceres became "(1) Ceres", Pallas became "(2) Pallas", and so on. By the 1860s, the number of known asteroids had increased to over a hundred, and observatories in Europe and the United States began referring to them collectively as "minor planets", or "small planets", though it took the first four asteroids longer to be grouped as such. To this day, "minor planet" remains the official designation for all small bodies in orbit around the Sun, and each new discovery is numbered accordingly in the IAU's Minor Planet Catalogue. Pluto The long road from planethood to reconsideration undergone by Ceres is mirrored in the story of Pluto, which was named a planet soon after its discovery by Clyde Tombaugh in 1930. Uranus and Neptune had been declared planets based on their circular orbits, large masses and proximity to the ecliptic plane. None of these applied to Pluto, a tiny and icy world in a region of gas giants with an orbit that carried it high above the ecliptic and even inside that of Neptune. In 1978, astronomers discovered Pluto's largest moon, Charon, which allowed them to determine its mass. Pluto was found to be much tinier than anyone had expected: only one-sixth the mass of Earth's Moon. However, as far as anyone could yet tell, it was unique. Then, beginning in 1992, astronomers began to detect large numbers of icy bodies beyond the orbit of Neptune that were similar to Pluto in composition, size, and orbital characteristics. 
They concluded that they had discovered the hypothesized Kuiper belt (sometimes called the Edgeworth–Kuiper belt), a band of icy debris that is the source for "short-period" comets—those with orbital periods of up to 200 years. Pluto's orbit lies within this band and thus its planetary status was thrown into question. Many scientists concluded that tiny Pluto should be reclassified as a minor planet, just as Ceres had been a century earlier. Mike Brown of the California Institute of Technology suggested that a "planet" should be redefined as "any body in the Solar System that is more massive than the total mass of all of the other bodies in a similar orbit." Those objects under that mass limit would become minor planets. In 1999, Brian G. Marsden of Harvard University's Minor Planet Center suggested that Pluto be given the minor planet number 10000 while still retaining its official position as a planet. The prospect of Pluto's "demotion" created a public outcry, and in response the International Astronomical Union clarified that it was not at that time proposing to remove Pluto from the planet list. The discovery of several other trans-Neptunian objects, such as Quaoar and Sedna, continued to erode arguments that Pluto was distinct from the rest of the trans-Neptunian population. On July 29, 2005, Mike Brown and his team announced the discovery of a trans-Neptunian object confirmed to be more massive than Pluto, named Eris. In the immediate aftermath of the object's discovery, there was much discussion as to whether it could be termed a "tenth planet". NASA even put out a press release describing it as such. However, acceptance of Eris as the tenth planet implicitly demanded a definition of planet that set Pluto as an arbitrary minimum size. Many astronomers, claiming that the definition of planet was of little scientific importance, preferred to recognise Pluto's historical identity as a planet by "grandfathering" it into the planet list. 
IAU definition The discovery of Eris forced the IAU to act on a definition. In October 2005, a group of 19 IAU members, which had already been working on a definition since the discovery of Sedna in 2003, narrowed their choices to a shortlist of three, using approval voting. The definitions were: A planet is any object in orbit around the Sun with a diameter greater than 2,000 km. (Eleven votes in favor) A planet is any object in orbit around the Sun whose shape is stable due to its own gravity. (Eight votes in favor) A planet is any object in orbit around the Sun that is dominant in its immediate neighbourhood. (Six votes in favor) Since no consensus could be reached, the committee decided to put these three definitions to a wider vote at the IAU General Assembly meeting in Prague in August 2006, and on August 24, the IAU put a final draft to a vote, which combined elements from two of the three proposals. It essentially created a medial classification between planet and rock (or, in the new parlance, small Solar System body), called dwarf planet and placed Pluto in it, along with Ceres and Eris. The vote was passed, with 424 astronomers taking part in the ballot. The IAU also resolved that "planets and dwarf planets are two distinct classes of objects", meaning that dwarf planets, despite their name, would not be considered planets. On September 13, 2006, the IAU placed Eris, its moon Dysnomia, and Pluto into its Minor Planet Catalogue, giving them the official minor planet designations (134340) Pluto, (136199) Eris, and (136199) Eris I Dysnomia. Other possible dwarf planets, such as 2003 EL61, 2005 FY9, Sedna and Quaoar, were left in temporary limbo until a formal decision could be reached regarding their status. On June 11, 2008, the IAU executive committee announced the establishment of a subclass of dwarf planets comprising the aforementioned "new category of trans-Neptunian objects" of which Pluto is the prototype. 
This new class of objects, termed plutoids, would include Pluto, Eris and any other trans-Neptunian dwarf planets, but excluded Ceres. The IAU decided that those TNOs with an absolute magnitude brighter than +1 would be named by a joint commission of the planetary and minor-planet naming committees, under the assumption that they were likely to be dwarf planets. To date, only two other TNOs, 2003 EL61 and 2005 FY9, have met the absolute magnitude requirement, while other possible dwarf planets, such as Sedna, Orcus and Quaoar, were named by the minor-planet committee alone. On July 11, 2008, the Working Group on Planetary Nomenclature named 2005 FY9 Makemake, and on September 17, 2008, they named 2003 EL61 Haumea. Acceptance of the IAU definition Among the most vocal proponents of the IAU's definition are Mike Brown, the discoverer of Eris; Steven Soter, professor of astrophysics at the American Museum of Natural History; and Neil deGrasse Tyson, director of the Hayden Planetarium. In the early 2000s, when the Hayden Planetarium was undergoing a $100 million renovation, Tyson refused to refer to Pluto as the ninth planet at the planetarium. He explained that he would rather group planets according to their commonalities rather than counting them. This decision resulted in Tyson receiving large amounts of hate mail, primarily from children. In 2009, Tyson wrote a book detailing the demotion of Pluto. In an article in the January 2007 issue of Scientific American, Soter cited the definition's incorporation of current theories of the formation and evolution of the Solar System; that as the earliest protoplanets emerged from the swirling dust of the protoplanetary disc, some bodies "won" the initial competition for limited material and, as they grew, their increased gravity meant that they accumulated more material, and thus grew larger, eventually outstripping the other bodies in the Solar System by a very wide margin. 
The asteroid belt, disturbed by the gravitational tug of nearby Jupiter, and the Kuiper belt, too widely spaced for its constituent objects to collect together before the end of the initial formation period, both failed to win the accretion competition. When the numbers for the winning objects are compared to those of the losers, the contrast is striking; if Soter's concept that each planet occupies an "orbital zone" is accepted, then the least orbitally dominant planet, Mars, is larger than all other collected material in its orbital zone by a factor of 5100. Ceres, the largest object in the asteroid belt, only accounts for one third of the material in its orbit; Pluto's ratio is even lower, at around 7 percent. Mike Brown asserts that this massive difference in orbital dominance leaves "absolutely no room for doubt about which objects do and do not belong." Ongoing controversies Despite the IAU's declaration, a number of critics remain unconvinced. The definition is seen by some as arbitrary and confusing. A number of Pluto-as-planet proponents, in particular Alan Stern, head of NASA's New Horizons mission to Pluto, have circulated a petition among astronomers to alter the definition. Stern's claim is that, since less than 5 percent of astronomers voted for it, the decision was not representative of the entire astronomical community. Even with this controversy excluded, however, there remain several ambiguities in the definition. Clearing the neighbourhood One of the main points at issue is the precise meaning of "cleared the neighbourhood around its orbit". Alan Stern objects that "it is impossible and contrived to put a dividing line between dwarf planets and planets", and that since neither Earth, Mars, Jupiter, nor Neptune have entirely cleared their regions of debris, none could properly be considered planets under the IAU definition. 
Mike Brown responds to these claims by saying that, far from not having cleared their orbits, the major planets completely control the orbits of the other bodies within their orbital zone. Jupiter may coexist with a large number of small bodies in its orbit (the Trojan asteroids), but these bodies only exist in Jupiter's orbit because they are in the sway of the planet's huge gravity. Similarly, Pluto may cross the orbit of Neptune, but Neptune long ago locked Pluto and its attendant Kuiper belt objects, called plutinos, into a 3:2 resonance, i.e., they orbit the Sun twice for every three Neptune orbits. The orbits of these objects are entirely dictated by Neptune's gravity, and thus, Neptune is gravitationally dominant. In October 2015, astronomer Jean-Luc Margot of the University of California, Los Angeles proposed a metric for orbital zone clearance derived from whether an object can clear an orbital zone of extent 2√3 times its Hill radius in a specific time scale. This metric places a clear dividing line between the dwarf planets and the planets of the solar system. The calculation is based on the mass of the host star, the mass of the body, and the orbital period of the body. An Earth-mass body orbiting a solar-mass star clears its orbit at distances of up to 400 astronomical units from the star. A Mars-mass body at the orbit of Pluto clears its orbit. This metric, which leaves Pluto as a dwarf planet, applies to both the Solar System and to extrasolar systems. Some opponents of the definition have claimed that "clearing the neighbourhood" is an ambiguous concept. Mark Sykes, director of the Planetary Science Institute in Tucson, Arizona, and organiser of the petition, expressed this opinion to National Public Radio. He believes that the definition does not categorize a planet by composition or formation, but, effectively, by its location. 
He believes that a Mars-sized or larger object beyond the orbit of Pluto would not be considered a planet, because he believes that it would not have time to clear its orbit. Brown notes, however, that were the "clearing the neighbourhood" criterion to be abandoned, the number of planets in the Solar System could rise from eight to more than 50, with hundreds more potentially to be discovered. Hydrostatic equilibrium The IAU's definition mandates that planets be large enough for their own gravity to form them into a state of hydrostatic equilibrium; this means that they will reach a round, ellipsoidal shape. Up to a certain mass, an object can be irregular in shape, but beyond that point gravity begins to pull an object towards its own centre of mass until the object collapses into an ellipsoid. (None of the large objects of the Solar System are truly spherical. Many are spheroids, and several, such as the larger moons of Saturn and the dwarf planet , have been further distorted into ellipsoids by rapid rotation or tidal forces, but still in hydrostatic equilibrium.) However, there is no precise point at which an object can be said to have reached hydrostatic equilibrium. As Soter noted in his article, "how are we to quantify the degree of roundness that distinguishes a planet? Does gravity dominate such a body if its shape deviates from a spheroid by 10 percent or by 1 percent? Nature provides no unoccupied gap between round and non-round shapes, so any boundary would be an arbitrary choice." Furthermore, the point at which an object's mass compresses it into an ellipsoid varies depending on the chemical makeup of the object. Objects made of ices, such as Enceladus and Miranda, assume that state more easily than those made of rock, such as Vesta and Pallas. 
Heat energy, from gravitational collapse, impacts, tidal forces such as orbital resonances, or radioactive decay, also factors into whether an object will be ellipsoidal or not; Saturn's icy moon Mimas is ellipsoidal (though no longer in hydrostatic equilibrium), but Neptune's larger moon Proteus, which is similarly composed but colder because of its greater distance from the Sun, is irregular. In addition, the much larger Iapetus is ellipsoidal but does not have the dimensions expected for its current speed of rotation, indicating that it was once in hydrostatic equilibrium but no longer is, and the same is true for Earth's moon. Even Mercury, universally regarded as a planet, is not in hydrostatic equilibrium. Thus the IAU definition is not taken literally even by the IAU, as it includes Mercury as a planet; its requirement for hydrostatic equilibrium is in practice ignored in favour of a requirement for roundedness. Double planets and moons The definition specifically excludes satellites from the category of dwarf planet, though it does not directly define the term "satellite". In the original draft proposal, an exception was made for Pluto and its largest satellite, Charon, which possess a barycenter outside the volume of either body. The initial proposal classified Pluto–Charon as a double planet, with the two objects orbiting the Sun in tandem. However, the final draft made clear that, even though they are similar in relative size, only Pluto would currently be classified as a dwarf planet. However, some have suggested that the Moon nonetheless deserves to be called a planet. In 1975, Isaac Asimov noted that the timing of the Moon's orbit is in tandem with the Earth's own orbit around the Sun—looking down on the ecliptic, the Moon never actually loops back on itself, and in essence it orbits the Sun in its own right. Also many moons, even those that do not orbit the Sun directly, often exhibit features in common with true planets. 
There are 20 moons in the Solar System that are massive enough to have achieved hydrostatic equilibrium (the so-called planetary-mass moons); they would be considered planets if only the physical parameters are considered. Both Jupiter's moon Ganymede and Saturn's moon Titan are larger than Mercury, and Titan even has a substantial atmosphere, thicker than the Earth's. Moons such as Io and Triton demonstrate obvious and ongoing geological activity, and Ganymede has a magnetic field. Just as stars in orbit around other stars are still referred to as stars, some astronomers argue that objects in orbit around planets that share all their characteristics could also be called planets. Indeed, Mike Brown makes just such a claim in his dissection of the issue, saying: It is hard to make a consistent argument that a 400 km iceball should count as a planet because it might have interesting geology, while a 5000 km satellite with a massive atmosphere, methane lakes, and dramatic storms [Titan] shouldn't be put into the same category, whatever you call it. However, he goes on to say that, "For most people, considering round satellites (including our Moon) 'planets' violates the idea of what a planet is." Alan Stern has argued that location should not matter and that only geophysical attributes should be taken into account in the definition of a planet, and proposes the term satellite planet for planetary-mass moons. Extrasolar planets and brown dwarfs The discovery since 1992 of extrasolar planets, or planet-sized objects around other stars, has widened the debate on the nature of planethood in unexpected ways. Many of these planets are of considerable size, approaching the mass of small stars, while many newly discovered brown dwarfs are, conversely, small enough to be considered planets. 
The material difference between a low-mass star and a large gas giant is not clear-cut; apart from size and relative temperature, there is little to separate a gas giant like Jupiter from its host star. Both have similar overall compositions: hydrogen and helium, with trace levels of heavier elements in their atmospheres. The generally accepted difference is one of formation; stars are said to have formed from the "top down", out of the gases in a nebula as they underwent gravitational collapse, and thus would be composed almost entirely of hydrogen and helium, while planets are said to have formed from the "bottom up", from the accretion of dust and gas in orbit around the young star, and thus should have cores of silicates or ices. As yet it is uncertain whether gas giants possess such cores, though the Juno mission to Jupiter could resolve the issue. If it is indeed possible that a gas giant could form as a star does, then it raises the question of whether such an object should be considered an orbiting low-mass star rather than a planet. Traditionally, the defining characteristic for starhood has been an object's ability to fuse hydrogen in its core. However, stars such as brown dwarfs have always challenged that distinction. Too small to commence sustained hydrogen-1 fusion, they have been granted star status on their ability to fuse deuterium. However, due to the relative rarity of that isotope, this process lasts only a tiny fraction of the star's lifetime, and hence most brown dwarfs would have ceased fusion long before their discovery. Binary stars and other multiple-star formations are common, and many brown dwarfs orbit other stars. Therefore, since they do not produce energy through fusion, they could be described as planets. Indeed, astronomer Adam Burrows of the University of Arizona claims that "from the theoretical perspective, however different their modes of formation, extrasolar giant planets and brown dwarfs are essentially the same". 
Burrows also claims that such stellar remnants as white dwarfs should not be considered stars, a stance which would mean that an orbiting white dwarf, such as Sirius B, could be considered a planet. However, the current convention among astronomers is that any object massive enough to have possessed the capability to sustain atomic fusion during its lifetime and that is not a black hole should be considered a star. The confusion does not end with brown dwarfs. María Rosa Zapatero Osorio et al. have discovered many objects in young star clusters of masses below that required to sustain fusion of any sort (currently calculated to be roughly 13 Jupiter masses). These have been described as "free floating planets" because current theories of Solar System formation suggest that planets may be ejected from their star systems altogether if their orbits become unstable. However, it is also possible that these "free floating planets" could have formed in the same manner as stars. In 2003, a working group of the IAU released a position statement to establish a working definition as to what constitutes an extrasolar planet and what constitutes a brown dwarf. To date, it remains the only guidance offered by the IAU on this issue. The 2006 planet definition committee did not attempt to challenge it, or to incorporate it into their definition, claiming that the issue of defining a planet was already difficult to resolve without also considering extrasolar planets. This working definition was amended by the IAU's Commission F2: Exoplanets and the Solar System in August 2018. The official working definition of an exoplanet is now as follows: The IAU noted that this definition could be expected to evolve as knowledge improves. This definition makes location, rather than formation or composition, the determining characteristic for planethood. 
A free-floating object with a mass below 13 Jupiter masses is a "sub-brown dwarf", whereas such an object in orbit around a fusing star is a planet, even if, in all other respects, the two objects may be identical. Further, in 2010, a paper published by Burrows, David S. Spiegel and John A. Milsom called into question the 13-Jupiter-mass criterion, showing that a brown dwarf of three times solar metallicity could fuse deuterium at as low as 11 Jupiter masses. Also, the 13 Jupiter-mass cutoff does not have precise physical significance. Deuterium fusion can occur in some objects with mass below that cutoff. The amount of deuterium fused depends to some extent on the composition of the object. As of 2011 the Extrasolar Planets Encyclopaedia included objects up to 25 Jupiter masses, saying, "The fact that there is no special feature around in the observed mass spectrum reinforces the choice to forget this mass limit". As of 2016 this limit was increased to 60 Jupiter masses based on a study of mass–density relationships. The Exoplanet Data Explorer includes objects up to 24 Jupiter masses with the advisory: "The 13 Jupiter-mass distinction by the IAU Working Group is physically unmotivated for planets with rocky cores, and observationally problematic due to the sin i ambiguity." The NASA Exoplanet Archive includes objects with a mass (or minimum mass) equal to or less than 30 Jupiter masses. Another criterion for separating planets and brown dwarfs, rather than deuterium burning, formation process or location, is whether the core pressure is dominated by Coulomb pressure or electron degeneracy pressure. One study suggests that objects above formed through gravitational instability and not core accretion and therefore should not be thought of as planets. 
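The mass boundaries at issue in this debate can be summarized as a toy classifier. This is only a sketch of the conventional cutoffs discussed above (13 Jupiter masses for deuterium burning, roughly 80 for sustained hydrogen burning), both of which, as the text notes, are contested; the function and its thresholds are illustrative, not any official algorithm:

```python
# Toy classifier using the conventional (and contested) mass boundaries:
#   ~13 M_Jup: deuterium-burning limit (planet vs. brown dwarf)
#   ~80 M_Jup: sustained hydrogen-burning limit (brown dwarf vs. star)
# Below 13 M_Jup, the IAU working definition splits on location:
# in orbit around a star -> planet; free-floating -> sub-brown dwarf.

DEUTERIUM_LIMIT_MJUP = 13.0
HYDROGEN_LIMIT_MJUP = 80.0

def classify(mass_mjup, orbits_star):
    """Classify an object by mass (in Jupiter masses) and location."""
    if mass_mjup >= HYDROGEN_LIMIT_MJUP:
        return "star"
    if mass_mjup >= DEUTERIUM_LIMIT_MJUP:
        return "brown dwarf"
    return "planet" if orbits_star else "sub-brown dwarf"

print(classify(1.0, True))    # a Jupiter-mass companion -> planet
print(classify(8.0, False))   # a free-floating 8 M_Jup object -> sub-brown dwarf
print(classify(30.0, True))   # -> brown dwarf
print(classify(90.0, True))   # -> star
```

The sharp 13 M_Jup line is itself a simplification: as the Spiegel, Burrows and Milsom result shows, the actual deuterium-burning threshold shifts with composition.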
A 2016 study shows no noticeable difference between gas giants and brown dwarfs in mass–radius trends: from approximately one Saturn mass to about (the onset of hydrogen burning), radius stays roughly constant as mass increases, and no obvious difference occurs when passing . By this measure, brown dwarfs are more like planets than they are like stars. Planetary-mass stellar objects The ambiguity inherent in the IAU's definition was highlighted in December 2005, when the Spitzer Space Telescope observed Cha 110913-773444, only eight times Jupiter's mass with what appears to be the beginnings of its own planetary system. Were this object found in orbit around another star, it would have been termed a planet. In September 2006, the Hubble Space Telescope imaged CHXR 73 b, an object orbiting a young companion star at a distance of roughly 200 AU. At 12 Jovian masses, CHXR 73 b is just under the threshold for deuterium fusion, and thus technically a planet; however, its vast distance from its parent star suggests it could not have formed inside the small star's protoplanetary disc, and therefore must have formed, as stars do, from gravitational collapse. In 2012, Philippe Delorme, of the Institute of Planetology and Astrophysics of Grenoble in France announced the discovery of CFBDSIR 2149-0403; an independently moving 4–7 Jupiter-mass object that likely forms part of the AB Doradus moving group, less than 100 light years from Earth. Although it shares its spectrum with a spectral class T brown dwarf, Delorme speculates that it may be a planet. In October 2013, astronomers led by Dr. Michael Liu of the University of Hawaii discovered PSO J318.5-22, a solitary free-floating L dwarf estimated to possess only 6.5 times the mass of Jupiter, making it the least massive sub-brown dwarf yet discovered. 
In 2019, astronomers at the Calar Alto Observatory in Spain identified GJ3512b, a gas giant about half the mass of Jupiter orbiting around the red dwarf star GJ3512 in 204 days. Such a large gas giant around such a small star at such a wide orbit is highly unlikely to have formed via accretion, and is more likely to have formed by fragmentation of the disc, similar to a star. Semantics Finally, from a purely linguistic point of view, there is the dichotomy that the IAU created between 'planet' and 'dwarf planet'. The term 'dwarf planet' contains two words, a noun (planet) and an adjective (dwarf). Thus, the term could suggest that a dwarf planet is a type of planet, even though the IAU explicitly defines a dwarf planet as not being a planet. By this formulation, therefore, 'dwarf planet' and 'minor planet' are best considered compound nouns. Benjamin Zimmer of Language Log summarized the confusion: "The fact that the IAU would like us to think of dwarf planets as distinct from 'real' planets lumps the lexical item 'dwarf planet' in with such oddities as 'Welsh rabbit' (not really a rabbit) and 'Rocky Mountain oysters' (not really oysters)." As Dava Sobel, the historian and popular science writer who participated in the IAU's initial decision in October 2006, noted in an interview with National Public Radio, "A dwarf planet is not a planet, and in astronomy, there are dwarf stars, which are stars, and dwarf galaxies, which are galaxies, so it's a term no one can love, dwarf planet." Mike Brown noted in an interview with the Smithsonian that "Most of the people in the dynamical camp really did not want the word 'dwarf planet', but that was forced through by the pro-Pluto camp. So you're left with this ridiculous baggage of dwarf planets not being planets." Conversely, astronomer Robert Cumming of the Stockholm Observatory notes that, "The name 'minor planet' [has] been more or less synonymous with 'asteroid' for a very long time. 
So it seems to me pretty insane to complain about any ambiguity or risk for confusion with the introduction of 'dwarf planet'."
Physical sciences
Planetary science
Astronomy
900775
https://en.wikipedia.org/wiki/Taipei%20Metro
Taipei Metro
Taipei Metro (also known as Taipei Mass Rapid Transit (MRT) and branded as Metro Taipei) is a rapid transit system operated by the Taipei Rapid Transit Corporation serving the capital Taipei and New Taipei City in Taiwan. It was the first rapid transit system to be built on the island. The initial network was approved for construction in 1986 and work began two years later. It began operations on 28 March 1996, and by 2000, 62 stations were in service across three main lines. Over the next nine years, the number of passengers increased by 70%. Since 2008, the network has expanded to 131 stations and the passenger count has grown by another 96%. The system has been praised by locals for its effectiveness in relieving growing traffic congestion in Taipei and its surrounding satellite towns, with over two million trips made daily. History Proposal and construction The idea of constructing a rapid transit system on the island was first put forth at a press conference on 28 June 1968, where the Ministry of Transportation and Communications announced its plans to begin researching the possibility of constructing such a network in the Taipei metropolitan area; however, the plan was shelved due to financial concerns and the belief that such a system was not urgently needed at the time. With the increase of traffic congestion accompanying economic growth in the 1970s, the need for a rapid transit system became more pressing. In February 1977, the Institute of Transportation (IOT) of the Ministry of Transportation and Communications (MOTC) released a preliminary rapid transport system report, with the designs of five lines: U1, U2, U3, S1, and S2, to form a rough sketch of the planned corridors, some of which would be converted from single-tracked Taiwan Railways Administration (TRA) branch lines, resulting in the first rapid transit system plan for Taipei. 
In 1981, the IOT invited British Mass Transit Consultants (BMTC) to form a team and provide in-depth research on the preliminary report. In 1982, the Taipei City Government commissioned National Chiao Tung University to conduct a research and feasibility study on medium-capacity rapid transit systems. In January 1984, the university proposed an initial design for a medium-capacity rapid transit system in Taipei City, including plans for the Wenhu line and Tamsui–Xinyi line of the medium-capacity metro system. The pre-1985 plans would have retained the 3 ft 6 in gauge of the TRA lines, and the rolling stock design would have had to conform to TRA and Japanese narrow-gauge standards. On 1 March 1985, the Executive Yuan Council for Economic Planning and Development (CEPD) signed a contract with the Taipei Transit Council (TTC), composed of three American consultant firms, to conduct overall research on a rapid transit system in metropolitan Taipei. Apart from adjustments made to the initial proposal, such as the move to standard gauge track and wider and longer rolling stock for the high-capacity lines, the Wenhu line of the medium-capacity metro system was also included in the network. In 1986, the initial network design of the Taipei Metro by the CEPD was passed by the Executive Yuan, although the network corridors were not yet set. A budget of NT$441.7 billion was allocated for the project. On 27 June 1986, the Preparatory Office of Rapid Transit Systems was created; on 23 February 1987, it was formally established as the Department of Rapid Transit Systems (DORTS), tasked with the planning, design, and construction of the system. Apart from preparing for the construction of the metro system, DORTS also made small changes to the metro corridors. 
The 6 lines proposed in the initial network were: Tamsui line and Xindian line (Lines U1 and U2), Zhonghe Line (Line U3), Nangang Line and Banqiao Line (Line S1), and Muzha (now Wenhu) line (medium-capacity), totaling 79 stations, with sections of elevated rail, at ground level, and underground. The Neihu Line corridor was approved later, in 1990. On 27 June 1994, the Taipei Rapid Transit Corporation (TRTC) was formed to oversee the operation of the Taipei Metro system. The Executive Yuan approved the initial network plan for the system on 27 May 1986. Ground was broken and construction began on 15 December 1988. The growing traffic problems of the time, compounded by road closures due to TRTS construction, led to what became popularly known as the "dark age of Taipei traffic". The TRTS was the center of political controversy during its construction and shortly after the opening of its first line in 1996, due to incidents such as a computer malfunction during a thunderstorm, alleged structural problems in some elevated segments, budget overruns, and fare prices. Opening and initial network The system opened on 28 March 1996, with the elevated , a driverless, medium-capacity line with twelve stations running from to . The first high-capacity line, the , began service on 28 March 1997, running from to , then extended to at the end of the year. On 23 December 1998, the system passed the milestone of 100 million passengers. 1999–2006 expansions On 24 December 1999, a section of the was opened between and . This section became the first east–west line running through the city, connecting the two previously completed north–south lines. On 31 May 2006, the second stage of the Banqiao–Nangang section and the Tucheng section began operation. The service was then named Bannan after the districts that it connects (Banqiao and Nangang). Maokong Gondola On 4 July 2007, the Maokong Gondola, a new aerial lift/cable-car system, was opened to the public. 
The system connects the , , and Maokong. Service was suspended on 1 October 2008 due to erosion from mudslides under a support pillar following Typhoon Jangmi. The gondola officially resumed service on 31 March 2010, after relocation of the pillar and passing safety inspections. 2009–2014 expansions On 4 July 2009, with the opening of the Neihu segment of , the last of the six core segments was completed. Due to debate on whether to construct a medium-capacity or high-capacity line, construction of the line did not begin until 2002. was extended from to and in 2012. The Xinyi section of and Songshan section of were opened on 24 November 2013 and 15 November 2014 respectively. Prior to 2014, only physical lines had official names; services did not. In 2008, all full-run and short-turn services were referred to by their termini, while Bannan and Wenhu services were referred to by the physical lines on which they operated. Following the completion of the core sections of the system in 2014, the naming scheme for services was set, and 'lines' began to refer to services. Between 2014 and 2016, lines were given alternative number names based on the order in which they first opened. The Brown, Red, Green, Orange and Blue lines were named lines 1 to 5 respectively. The planned Circular, Wanda–Shulin and Minsheng–Xizhi lines were to be lines 6 to 8 respectively. In 2016, the number names were replaced by colour names. Today, on-board announcements in Chinese use full official names, whereas in English, colour names are used instead. In June 2023, due to an increasing number of South Korean tourists, the metro announced the addition of Korean announcements at stations with large numbers of tourists. On 3 April 2024, after a magnitude 7 earthquake hit the island, all active MRT trains were suspended so that safety checks could be conducted. All Taipei Metro routes resumed operations later that day. 
Timeline of services Size Lines The system is designed based on the spoke-hub distribution paradigm, with most rail lines running radially outward from central Taipei. The MRT system operates daily from 06:00 to 00:00 the following day (the last trains finish their runs by 01:00), with extended services during special events (such as New Year festivities). Trains operate at intervals of 90 seconds to 15 minutes depending on the line and time of day. Smoking is forbidden in the entire metro system, while eating, drinking, and chewing gum or betel nuts are forbidden within the paid area. Stations become extremely crowded during rush hours, especially at transfer stations such as , , and . Automated station announcements are recorded in Mandarin, English, Taiwanese, and Hakka, with Japanese at busy stations. Japanese coverage across the network was expanded on 24 August 2023. Select stations also received Korean announcements to accommodate the high influx of South Korean tourists to the capital. Subsequently, the announcement order was changed to Mandarin, English, Japanese, Korean, then Taiwanese and Hakka. Fares and tickets Fares range between –65 per trip as of 2018. RFID single-journey tokens and rechargeable IC cards (such as the EasyCard and the iPASS), as well as NFC-based mobile payments (only Google Wallet and Samsung Wallet), are used to collect fares for day-to-day use. Discounts and concessions A 20% discount was given to all IC card users, but it was cancelled at the start of February 2020. The discount program was replaced by an intensity-based scheme: the more often passengers take the MRT, the higher the discount they receive. For example, a 10% discount is given for 11–20 rides; a 20% discount for 31–40 rides; and the highest discount, 30%, for more than 50 rides. The discount is treated as a rebate and is deposited to the user's card on the first of each month for the previous month's rides. 
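The tiered rebate scheme described above can be sketched as a simple lookup. This is an illustrative sketch only: the function name is invented, and the bands the text does not quote (such as 21–30 or 41–50 rides) are deliberately left out rather than guessed.

```python
def monthly_rebate_rate(rides: int) -> float:
    """Return the rebate rate for a month's ride count, using only the
    bands quoted in the text; unquoted bands fall through to 0.0 here."""
    if rides > 50:           # highest quoted tier
        return 0.30
    if 31 <= rides <= 40:    # mid tier quoted in the text
        return 0.20
    if 11 <= rides <= 20:    # lowest quoted tier
        return 0.10
    return 0.0               # bands not quoted in the source are omitted

# The rebate would then be credited to the card at the start of the
# following month, e.g. monthly_rebate_rate(15) -> 0.10
```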
Those with welfare cards issued by local governments receive 60% off per ride. Children aged 6 or over pay adult fares. Other ticket types include passes, joint tickets with other services, tickets for groups, and discounts for YouBike rentals at Taipei Main Station. Ticketing system Turnstiles across the Taipei Metro are being replaced by the end of 2025 to enable contactless bank card and QR code payments. Infrastructure The Taipei Metro provides an obstacle-free environment within the entire system; all stations and trains are handicap accessible. Features include: handicap-capable restrooms, ramps and elevators for wheelchairs and strollers, tactile guide paths, extra-wide faregates, and trains with a designated wheelchair area. Beginning in September 2003, English station names were converted to use Hanyu Pinyin before the end of December, with Tongyong Pinyin names in brackets on signs shown at station entrances and exits. However, after the conversion, many stations were reported to have multiple conflicting English names caused by inconsistent conversions, even for stations built after enactment of the new naming policy. The information brochures (臺北市大眾捷運系統捷運站轉乘公車資訊手冊) printed in September 2004 still used Wade–Giles romanizations. To accommodate increasing passenger numbers, all metro stations have replaced turnstiles with speed gates since 2007, and single-journey magnetic cards have been replaced by RFID tokens. TRTS provides free mobile phone connections in all stations, trains, and tunnels and also provides Wi-Fi connections at several station hotspots. The world's first WiMAX-service metro trains were introduced on the in 2007, allowing passengers to access the internet and watch live broadcasts. Several stations are also equipped with mobile charging stations. Platforms Most underground stations have island platform configurations while a few have side platform configurations. 
Most elevated and at-grade stations have side platform configurations, while a few have island platform configurations. All high-capacity metro stations have a platform long enough to accommodate all six cars of a typical metro train (with the exception of ). The width of the platform and concourse depends on the volume of transit; the largest stations include Taipei Main Station, , and . Some other transfer stations, including , , and , also have wide platforms. Several stations have a cross-platform interchange: Chiang Kai-Shek Memorial Hall, Guting, Dongmen and Ximen. Both lines' tracks in one direction use the lower floor, while both lines' tracks in the other direction use the upper floor. Dongmen station is unique in that the directions of travel on each floor are reversed, so that there is a cross-platform interchange when travelling between the city center and the suburbs. Each station is equipped with LED displays and LCD TVs, both in the concourse and on the platforms, which display the time of arrival of the next train. At all stations, red lights on or above the automatic platform gates flash prior to a train's arrival to alert passengers, and an arrival melody plays (except on the and certain elevated and at-grade stations). Similarly, before platform screen doors were retrofitted, stations had lights on the edges of platforms which would flash upon a train's arrival. This can still be seen on other metro systems such as the Washington Metro. As of September 2018, all stations have automatic platform gates. Before 2018, all the stations on the Wenhu line and most stations on the , as well as a few stations on other lines, were equipped with platform screen doors. A Track Intrusion Detection System had also been installed to improve passenger safety at stations without platform doors. The system uses infrared and radio detectors to monitor unusual movement in the track area. 
Signalling When the Muzha Line opened in 1996, it was equipped with automatic train operation (ATO) and automatic train control (ATC), the latter comprising automatic train protection (ATP) and automatic train supervision (ATS); in particular, the ATP relied on transmission coils and wayside control units, whereas the ATO relied on dwell operation control units. The transmission coils were controlled by the Control Centre to ensure the safety of the line and were positioned on the guideway. These coils included the PD loop, safety frequency loop, stopping program loop, vehicle–station link and station–vehicle link; the loops were cross-arranged to produce electromagnetic induction, with the interval between two cross points being 0.3 seconds, to both monitor the train and control its speed. However, this fixed-block ATC system used on the Muzha Line was plagued with problems in its early years of operation and was replaced with the new moving-block Cityflo 650 CBTC supplied by Bombardier Transportation of Canada for the Neihu Line. The heavy-capacity lines, on the other hand, use a traditional fixed-block system design, initially supplied by General Railway Signal of Rochester, New York, for the Tamsui, Xindian, Zhonghe, and Bannan lines, and later by Alstom for the Tucheng, Xinzhuang, Luzhou, Xinyi and Songshan lines. Key components of the system include impedance bonds, 4-foot loops, marker coils, alignment antennae and two-aspect light signals for the wayside, as well as automatic train supervision, which utilises centralized traffic control. Public art In the initial network, important stations such as transfer stations, terminal stations, and stations with heavy passenger flow were chosen for the installation of public art. The principles behind the locations of public art were visual focus and non-interference with passenger circulation and construction schedules. 
The artworks included murals, children's mosaic collages, sculptures, hung forms, spatial art, interactive art, and window displays. The selection methods included open competitions, invitational competitions, direct assignments, and cooperation with children. Stations with public art displays include: , , , , , , , , , , , Songshan Airport, , , and . Stations with art galleries include , , , and . station contains a small archeological museum. Other facilities In addition to the rapid transit system itself, Taipei Metro operates several public facilities such as underground shopping malls, parks, and public squares in and around stations, including: Zhongshan Metro Mall: – – (815 m, 81 shops). Taipei main station underground mall: on floor B1 of the station. Taipei New World Shopping Center: Between the metro and TRA sections of Taipei Station. Station front metro mall: West of Taipei main station, beneath Zhongxiao W Road. Taipei City Mall: Northwest of Taipei main station, beneath Zhengzhou Rd and Civic Blvd. East Metro Mall: Between and (825 m, 35 shops). Ximen Underground Mall: north of (currently used as an office building and library). Longshan Temple Underground Mall: north and south sides. Global Mall: floors B1 to 2F. As of 2022 there are 229 shops within the stations themselves. Transit Transfers to city bus stations are available at all metro stations. In 2009, transfer volume between the metro and bus systems reached 444,100 transfers per day (counting only EasyCard users). Connections to Taiwan Railways Administration and Taiwan High Speed Rail trains are available at , and . Connections to Taipei Bus Station and Taipei City Hall Bus Station are available at and stations, respectively. The Maokong Gondola is accessible from . Taipei Songshan Airport is served by the station. A metro line connecting Taipei to Taoyuan International Airport has also been in operation since March 2017. 
Connections with the New Taipei Metro are also available, specifically with the Circular line and the Danhai LRT. Rolling stock All rolling stock on the Taipei Metro consists of electric multiple units, powered by a third rail at 750 volts direct current. Each train is equipped with automatic train operation (ATO) for partially or fully automatic piloting and driverless functions. Medium-capacity trains The medium-capacity trains of are broad-gauge rubber-tired trains with no onboard train operators; they are operated remotely by the medium-capacity system operation control center. The line initially used a fixed-block automatic train control (ATC) system. Each train consists of two 2-car electric multiple unit (EMU) sets, for a total of 4 cars. The Wenhu line is the only line on the system with no open-gangway carriages, meaning that passengers cannot move between carriages while the train is moving. The was initially operated with VAL 256 cars, where two VAL 256 cars in the same set share the same road number. As a result of this numbering scheme, the 102 cars of the VAL fleet have car numbers from 1 to 51. In June 2003, Bombardier was awarded a contract to supply the with 202 INNOVIA APM 256 train cars, to install the CITYFLO 650 moving-block communications-based train control (CBTC) system to replace the fixed-block ATC system, and to retrofit the existing 102 VAL 256 cars with the CITYFLO 650 CBTC system. Integration of Bombardier's trains with the existing fleet proved difficult in the beginning, with multiple system malfunctions and failures during the first three months of operation. Retrofitting the older trains also took longer than expected, as they had to undergo several hours of reliability tests during non-service hours. The VAL 256 trains resumed operations in December 2010. Heavy-capacity trains The heavy-capacity trains have steel wheels and are operated by an on-board train operator. The trains are computer-controlled. 
The operator, who is both driver and conductor, is responsible for opening and closing the doors and for making some (but not all) announcements. Most announcements are pre-recorded in Mandarin, English, Hokkien and Hakka, with Japanese and Korean at busy stations. The ATC provides the functions of ATP, ATO and ATS and controls all train movements, including braking, acceleration and speed control, but can be manually overridden by the operator in case of an emergency. Newer trains also use a Train Supervision Information System (TSIS) supplied by Mitsubishi Electric that allows the operator to monitor the condition of the train and identify any faults. Each train consists of two 3-car electric multiple unit (EMU) sets, for a total of 6 cars. Each 3-car EMU set is permanently coupled as DM–T–M, where DM is a motor car with a full-width cab, T is a trailer car and M is a motor car without a cab. Each motor car has four 3-phase AC traction motors. The configuration of a 6-car train is DM–T–M+M–T–DM; car types are not interchanged. Like many contemporary metro rolling stock designs, such as the Bombardier MOVIA, each train features open gangways, allowing passengers to move freely between cars. All carriages of the heavy-capacity trains are wide by high, and have a total capacity of 368 passengers, 60 of them seated. Their design maximum speed is , which is limited to in service. The first digit of a DM car's number is 1, while that of a T car is 2 and that of an M car is 3. This digit is followed by the three-digit set number. For example, C301 set 001/002 consists of carriages 1001-2001-3001+3002-2002-1002. A single set cannot enter revenue service, except for C371 single sets 397–399, whose M cars are in fact DM cars despite the first digit being 3. These single sets run exclusively on the Xinbeitou and Xiaobitan branch lines. 
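The car-numbering scheme above (a car-type digit of 1 for DM, 2 for T, 3 for M, followed by the three-digit set number) can be illustrated with a short sketch. The helper name is invented; the consist layout follows the example given in the text.

```python
def train_consist(set_a: int, set_b: int) -> list:
    """Build the 6-car DM-T-M + M-T-DM consist for a heavy-capacity train
    coupled from two 3-car sets, e.g. sets 1 and 2 of type C301."""
    # Car-type digits: 1 = DM (cab motor car), 2 = T (trailer), 3 = M (motor)
    first = ["%d%03d" % (d, set_a) for d in (1, 2, 3)]   # DM-T-M of first set
    second = ["%d%03d" % (d, set_b) for d in (3, 2, 1)]  # M-T-DM of second set
    return first + second

# Reproduces the example consist from the text:
# train_consist(1, 2) -> ['1001', '2001', '3001', '3002', '2002', '1002']
```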
Before the C371 single sets entered revenue service on 22 July 2006, the M cars of C301 sets 013/014 were converted to temporary cab cars to run the Xinbeitou branch. In 2010, the new C381 was built for Taipei Metro to cope with increasing ridership and the expansion of the network. Upon entering service on 7 October 2012, three C381 trainsets served the Beitou – Taipower Building segment of the Tamsui and Xindian lines, with the remaining fleet put into service on 20 October 2012. These trains provided a much-needed capacity increase when the Xinyi and Songshan extensions opened in late 2013. Since November 2014, the C381 trains have served both and . Whereas the earlier heavy-capacity train types have largely retained the same design, the C381 sets are more distinctive, with double blue stripes and the logo re-positioned from the driver's door to well below the passenger windows, right on the stripe. The C381 also features a sleeker cab and new advertising screens (as seen in newer Japanese commuter trains such as the E233 series) and improved energy efficiency, although it retains the same propulsion as the C371. Fleet roster Medium-capacity fleet Heavy-capacity fleet Engineering trains Taipei Metro also uses a fleet of specialised trains for maintenance-of-way purposes: Depots The system currently has 9 depots, with more under construction. Reception Taipei Metro is one of the most expensive rapid transit systems ever constructed, with phase one of the system costing US$18 billion and phase two estimated to have cost US$13.8 billion. Despite earlier controversy, by the time the first phase of construction was completed in 2000, it was generally agreed that the metro project was a success, and it has since become an essential part of life in Taipei. 
The system has been effective in reducing traffic congestion in the city and has spurred the revival of satellite towns (like Tamsui) and the development of new areas (like Nangang). The system has also helped to increase average vehicle speed for routes running from New Taipei into Taipei. Property prices along metro routes (both new and existing) tend to increase with the opening of more lines. Since the Taipei Metro joined the Nova International Railway Benchmarking Group and the Community of Metros (Nova/CoMET) in 2002, it has collected and analysed data on the 33 Key Performance Indicators set by Nova/CoMET in order to compare them with those of other metro systems around the world, as a reference for improving its operation. Taipei Metro has also drawn lessons from case studies on subjects such as safety, reliability, and incidents, and from the operational experiences of other metro systems. According to a study conducted by the Railway Technology Strategy Center at Imperial College London, using data gathered by Nova/CoMET, the Taipei Metro ranked number 1 in the world for four consecutive years in terms of reliability, safety, and quality standards (2004–2007). The most congested route sections handle over 38,000 commuters per hour during peak times. On New Year's Eve 2009 and New Year's Day 2010, the Metro system transported 2.17 million passengers in 42 consecutive hours. On 22 April 2010, after 14 years of service, the system achieved the milestone of 4 billion cumulative riders. On 29 December 2010, the system passed the benchmark of 500 million annual passengers for the first time. The record for single-day ridership hit 2.5 million passengers during the New Year's Eve celebrations on 31 December 2010. Following the opening of the Xinyi section of , the system reached another record of 2.75 million passengers on 31 December 2013. 
In May 2016, the Singaporean Transport Minister, Khaw Boon Wan, said that his country's rail operators, SBS Transit and SMRT, should emulate the example of the Taipei Metro. Speaking at a rail engineering forum, he cited the Taipei Metro's timely maintenance and replacement of assets, as well as its fast response to rail network problems. Khaw said the Singapore Land Transport Authority (LTA) was working with the TRTC to attach staff from SBS and SMRT to its metro workshops, so they could learn from its asset maintenance practices and engineering improvements. Future expansions Several lines are planned to be added to the network. Wanda–Zhonghe–Shulin line (Light Green line) Wanda–Zhonghe–Shulin is a metro line under construction. Phase 1 will run from to Juguang, Zhonghe, New Taipei. Phase 1 is expected to be completed in 2025. Phase 2 will connect Zhonghe Senior High School, the station before Juguang, to , making the section between Zhonghe Senior High School and Juguang a branch line. The entire line is expected to be fully completed around late 2028. Minsheng–Xizhi line (Sky Blue line) Minsheng–Xizhi is a planned metro line. As of February 2011, New Taipei has been pursuing the construction of the 17.52-km Minsheng–Xizhi line, though the most recent plan was rejected by the Ministry of Transportation and Communications, which cited the need for further evidence of the line's viability. The city plans to re-submit the proposal, and the project is estimated to cost NT$42.2 billion (US$1.44 billion). A possible 4.25-km extension of the line to connect with the planned Keelung light rail is also being considered. The line is planned to be built partially underground and partially elevated. It will begin from Dadaocheng Harbour beneath Minsheng West Road in Taipei, run along Minsheng East and West Roads, pass through Minsheng Community and travel under the Keelung River towards Neihu District. 
The line will then transition to an elevated alignment and reach its terminus at Xintai 5th Road in Xizhi District, New Taipei City. As of May 2018, the proposal for this line has been submitted to the Ministry of Transportation and Communications, but has yet to be approved. Network map Safety and security 2001 typhoon flooding On 17 September 2001, Typhoon Nari flooded all underground tracks as well as 16 stations, the heavy-capacity system operation control center, the administration building, and the Nangang Depot. The elevated was not seriously affected and resumed operations the next day. However, the heavy-capacity lines were not restored to full operational status until three months later. 2014 stabbing attack On 21 May 2014, 28 people were stabbed in a mass stabbing by a knife-wielding college student on the . The attack occurred on a train near , resulting in 4 deaths and 24 injuries. It was the first fatal attack on the metro system since it began operations in 1996. The suspect, 21-year-old Cheng Chieh (鄭捷), a student at Tunghai University, was arrested at immediately after the incident. On 6 March 2015, Cheng was found guilty on multiple counts of murder and attempted murder and was sentenced to death. He was executed on 10 May 2016. Controversies In early 2021, it was discovered that a pornographic film production company had created a series of sets that copied the design of MRT trains and stations. This caused a brief stir when the films were first released, as many were concerned that they had been shot on actual MRT trains and in actual stations. Taipei Metro nevertheless condemned the company for imitating its train carriages. On 30 December 2021, Taipei Metro rejected an Amnesty International advertisement featuring detained human rights activist Lee Ming-che.
Technology
Asia_2
null
900867
https://en.wikipedia.org/wiki/Drilling
Drilling
Drilling is a cutting process where a drill bit is spun to cut a hole of circular cross-section in solid materials. The drill bit is usually a rotary cutting tool, often multi-point. The bit is pressed against the workpiece and rotated at rates from hundreds to thousands of revolutions per minute. This forces the cutting edge against the workpiece, cutting off chips (swarf) from the hole as it is drilled. In rock drilling, the hole is usually not made through a circular cutting motion, though the bit is usually rotated. Instead, the hole is usually made by hammering a drill bit into the hole with quickly repeated short movements. The hammering action can be performed from outside the hole (top-hammer drill) or within the hole (down-the-hole drill, DTH). Drills used for horizontal drilling are called drifter drills. In rare cases, specially-shaped bits are used to cut holes of non-circular cross-section; a square cross-section is possible. Process Drilled holes are characterized by their sharp edge on the entrance side and the presence of burrs on the exit side (unless they have been removed). Also, the inside of the hole usually has helical feed marks. Drilling may affect the mechanical properties of the workpiece by creating low residual stresses around the hole opening and a very thin layer of highly stressed and disturbed material on the newly formed surface. This causes the workpiece to become more susceptible to corrosion and crack propagation at the stressed surface. A finish operation may be done to avoid these detrimental conditions. For fluted drill bits, any chips are removed via the flutes. Chips may form long spirals or small flakes, depending on the material and process parameters. The type of chips formed can be an indicator of the machinability of the material, with long chips suggesting good machinability. When possible, drilled holes should be located perpendicular to the workpiece surface. 
This minimizes the drill bit's tendency to "walk", that is, to be deflected from the intended center-line of the bore, causing the hole to be misplaced. The higher the length-to-diameter ratio of the drill bit, the greater the tendency to walk. The tendency to walk can also be preempted in various other ways, which include: establishing a centering mark or feature before drilling, such as by casting, molding, or forging a mark into the workpiece, center punching, spot drilling (i.e., center drilling), or spot facing (machining a certain area on a casting or forging to establish an accurately located face on an otherwise rough surface); and constraining the position of the drill bit using a drill jig with drill bushings. Surface finish produced by drilling may range from 32 to 500 microinches. Finish cuts will generate surfaces near 32 microinches, and roughing will be near 500 microinches. Cutting fluid is commonly used to cool the drill bit, increase tool life, increase speeds and feeds, improve the surface finish, and aid in ejecting chips. Application of these fluids is usually done by flooding the workpiece with coolant and lubricant or by applying a spray mist. In deciding which drill(s) to use, it is important to consider the task at hand and evaluate which drill would best accomplish the task. There are a variety of drill styles that each serve a different purpose. The subland drill is capable of drilling more than one diameter. The spade drill is used to drill larger hole sizes. The indexable drill is useful in managing chips. Spot drilling The purpose of spot drilling is to drill a hole that will act as a guide for drilling the final hole. The hole is only drilled part way into the workpiece because it is only used to guide the beginning of the next drilling process. 
Centre drilling A centre drill is a two-fluted tool consisting of a twist drill with a 60° countersink; it is used to drill countersunk center holes in a workpiece to be mounted between centers for turning or grinding. Deep hole drilling Deep hole drilling is defined as drilling a hole of depth greater than ten times the diameter of the hole. These types of holes require special equipment to maintain straightness and tolerances. Other considerations are roundness and surface finish. Deep hole drilling is generally achievable with a few tooling methods, usually gun drilling or BTA drilling. These are differentiated by the coolant entry method (internal or external) and chip removal method (internal or external). Using methods such as a rotating tool and counter-rotating workpiece is a common technique to achieve required straightness tolerances. Secondary tooling methods include trepanning, skiving and burnishing, pull boring, or bottle boring. Finally, a new kind of drilling technology is available to address this issue: vibration drilling. This technology breaks up the chips through a small, controlled axial vibration of the drill. The small chips are then easily removed by the flutes of the drill. A high-tech monitoring system is used to control force, torque, vibrations, and acoustic emission. Vibration is considered a major defect in deep hole drilling which can often cause the drill to break. A special coolant is usually used to aid in this type of drilling. Gun drilling Gun drilling was originally developed to drill out gun barrels and is commonly used for drilling smaller-diameter deep holes. The depth-to-diameter ratio can be even greater than 300:1. The key feature of gun drilling is that the bits are self-centering; this is what allows for such deep, accurate holes. The bits use a rotary motion similar to a twist drill; however, the bits are designed with bearing pads that slide along the surface of the hole, keeping the drill bit on center. 
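The depth-to-diameter thresholds mentioned above (deep hole drilling beyond 10:1, gun drilling reaching ratios past 300:1) can be expressed as a small classifier. This is purely illustrative; the function and label names are invented, and only the two thresholds actually stated in the text are used.

```python
def classify_hole(depth_mm: float, diameter_mm: float) -> str:
    """Classify a drilling operation by depth-to-diameter ratio,
    using only the thresholds stated in the text."""
    ratio = depth_mm / diameter_mm
    if ratio > 300:          # gun drilling can exceed 300:1
        return "extreme deep hole (gun drilling range)"
    if ratio > 10:           # deep hole drilling: depth > 10x diameter
        return "deep hole drilling"
    return "conventional drilling"

# e.g. a 50 mm deep, 2 mm diameter hole has a 25:1 ratio:
# classify_hole(50, 2) -> "deep hole drilling"
```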
Gun drilling is usually done at high speeds and low feed rates. Trepanning Trepanning is commonly used for creating larger diameter holes (up to ) where a standard drill bit is not feasible or economical. Trepanning produces the desired diameter by cutting out a solid disk, similar to the workings of a drafting compass. Trepanning is performed on flat products such as sheet metal, granite (curling stone), plates, or structural members like I-beams. Trepanning can also be useful to make grooves for inserting seals, such as O-rings. Microdrilling Microdrilling refers to the drilling of holes less than . Drilling holes at this small diameter presents greater problems since coolant-fed drills cannot be used and high spindle speeds are required. High spindle speeds that exceed 10,000 RPM also require the use of balanced tool holders. Vibration drilling The first studies into vibration drilling began in the 1950s (Pr. V.N. Poduraev, Moscow Bauman University). The main principle consists of adding axial vibrations or oscillations to the feed movement of the drill so that the chips break up and are then easily removed from the cutting zone. There are two main technologies of vibration drilling: self-maintained vibration systems and forced vibration systems. Most vibration drilling technologies are still at a research stage. In the case of self-maintained vibration drilling, the eigenfrequency of the tool is used in order to make it naturally vibrate while cutting; vibrations are self-maintained by a mass-spring system included in the tool holder. Other works use a piezoelectric system to generate and control the vibrations. These systems allow high vibration frequencies (up to 2 kHz) of small magnitude (about a few micrometers); they are particularly suitable for drilling small holes.
Finally, vibrations can be generated by mechanical systems: the frequency is given by the combination of the rotation speed and the number of oscillations per rotation (a few oscillations per rotation), with magnitude about 0.1 mm. This last technology is a fully industrial one (example: SineHoling® technology of MITIS). Vibration drilling is a preferred solution in situations like deep hole drilling, multi-material stack drilling (aeronautics) and dry drilling (without lubrication). Generally, it provides improved reliability and greater control of the drilling operation. Circle interpolating Circle interpolating, also known as orbital drilling, is a process for creating holes using machine cutters. Orbital drilling is based on rotating a cutting tool around its own axis and simultaneously about a centre axis which is offset from the axis of the cutting tool. The cutting tool can then be moved simultaneously in an axial direction to drill or machine a hole, and/or combined with an arbitrary sideways motion to machine an opening or cavity. By adjusting the offset, a cutting tool of a specific diameter can be used to drill holes of different diameters. This implies that the cutting tool inventory can be substantially reduced. The term orbital drilling comes from the fact that the cutting tool "orbits" around the hole center. The mechanically forced, dynamic offset in orbital drilling has several advantages compared to conventional drilling and drastically increases hole precision. The lower thrust force results in a burr-less hole when drilling in metals. When drilling in composite materials, the problem of delamination is eliminated. Material Drilling in metal Under normal usage, swarf is carried up and away from the tip of the drill bit by the fluting of the drill bit. The cutting edges produce more chips, which continue the movement of the chips outwards from the hole.
This is successful until the chips pack too tightly, either because of deeper than normal holes or insufficient backing off (removing the drill slightly or totally from the hole while drilling). Cutting fluid is sometimes used to ease this problem and to prolong the tool's life by cooling and lubricating the tip and chip flow. Coolant may be introduced via holes through the drill shank, which is common when using a gun drill. When cutting aluminum in particular, cutting fluid helps ensure a smooth and accurate hole while preventing the metal from grabbing the drill bit in the process of drilling the hole. When cutting brass and other soft metals that can grab the drill bit and cause "chatter", a face of approx. 1–2 millimeters can be ground on the cutting edge to create an obtuse angle of 91 to 93 degrees. This prevents "chatter", during which the drill tears rather than cuts the metal. However, with that shape of cutting edge, the drill pushes the metal away rather than grabbing it. This creates high friction and very hot swarf. For heavy feeds and comparatively deep holes, oil-hole drills are used, with a lubricant pumped to the drill head through a small hole in the bit and flowing out along the fluting. A conventional drill press arrangement can be used in oil-hole drilling, but it is more commonly seen in automatic drilling machinery in which it is the workpiece that rotates rather than the drill bit. In computer numerical control (CNC) machine tools a process called peck drilling, or interrupted cut drilling, is used to keep swarf from detrimentally building up when drilling deep holes (approximately when the depth of the hole is three times greater than the drill diameter). Peck drilling involves plunging the drill part way through the workpiece, no more than five times the diameter of the drill, and then retracting it to the surface. This is repeated until the hole is finished.
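The peck cycle just described (plunge no more than five drill diameters deeper, retract fully to clear swarf, repeat until at depth) can be sketched as a short routine. The function and its names are illustrative, not CNC controller syntax; real controllers express this as a canned cycle (e.g. a drilling cycle with a peck increment).

```python
def peck_depths(hole_depth: float, drill_diameter: float,
                diameters_per_peck: float = 5.0) -> list[float]:
    """Cumulative target depths (mm) for a peck drilling cycle.

    Each peck plunges at most `diameters_per_peck` times the drill
    diameter deeper than the last, after which the drill retracts to
    the surface to clear swarf; the cycle repeats until the full
    hole depth is reached.
    """
    step = diameters_per_peck * drill_diameter
    depths, depth = [], 0.0
    while depth < hole_depth:
        depth = min(depth + step, hole_depth)  # never overshoot the target
        depths.append(depth)
    return depths

# A 60 mm hole with a 5 mm drill: pecks at 25, 50, then the final 60 mm.
print(peck_depths(hole_depth=60.0, drill_diameter=5.0))  # [25.0, 50.0, 60.0]
```

The high-speed variant mentioned next in the text would retract only slightly between pecks instead of returning to the surface.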
A modified form of this process, called high speed peck drilling or chip breaking, only retracts the drill slightly. This process is faster, but is only used in moderately long holes; otherwise it will overheat the drill bit. It is also used when drilling stringy material to break the chips. When it is not possible to bring the material to the CNC machine, a magnetic base drilling machine may be used. The base allows drilling in a horizontal position and even on a ceiling. Usually, for these machines, it is better to use annular cutters, because they can drill much faster at lower spindle speeds. Cutter sizes vary from 12 mm to 200 mm in diameter and from 30 mm to 200 mm in depth of cut (DOC). These machines are widely used in the construction, fabrication, marine, and oil & gas industries. In the oil and gas industry, pneumatic magnetic drilling machines are used to avoid sparks, as well as special tube magnetic drilling machines that can be fixed on pipes of different sizes, even from the inside. Heavy-duty plate drilling machines provide high-quality solutions in the manufacturing of steel construction, bridge construction, shipyards, and various fields of the construction sector. Drilling in wood Wood being softer than most metals, drilling in wood is considerably easier and faster than drilling in metal. Cutting fluids are not used or needed. The main issues in drilling wood are ensuring clean entry and exit holes and preventing burning. Avoiding burning is a question of using sharp bits and the appropriate cutting speed. Drill bits can tear out chips of wood around the top and bottom of the hole, and this is undesirable in fine woodworking applications. The ubiquitous twist drill bits used in metalworking also work well in wood, but they tend to chip wood out at the entry and exit of the hole. In some cases, as in holes for rough carpentry, the quality of the hole does not matter, and a number of bits for fast cutting in wood exist, including spade bits and self-feeding auger bits.
Many types of specialised drill bits for boring clean holes in wood have been developed, including brad-point bits, Forstner bits and hole saws. Chipping on exit can be minimized by using a piece of wood as backing behind the work piece, and the same technique is sometimes used to keep the hole entry neat. Holes are easier to start in wood, as the drill bit can be accurately positioned by pushing it into the wood to create a dimple. The bit will thus have little tendency to wander. Others Some materials, such as plastics as well as other non-metals and some metals, have a tendency to heat up enough to expand, making the hole smaller than desired. Related processes The following are some related processes that often accompany drilling: Counterboring This process creates a stepped hole in which a larger diameter follows a smaller diameter partially into the hole. Countersinking This process is similar to counterboring, but the step in the hole is cone-shaped. Boring Boring precisely enlarges an already existing hole using a single-point cutter. Friction drilling This process creates holes using plastic deformation of the workpiece (under heat and pressure) instead of cutting. Reaming Reaming is designed to slightly enlarge a hole, leaving smooth sides. Spot facing This is similar to milling; it is used to provide a flat machined surface on the workpiece in a localized area.
Technology
Hand tools
null
901291
https://en.wikipedia.org/wiki/Sodium%20benzoate
Sodium benzoate
Sodium benzoate, also known as benzoate of soda, is the sodium salt of benzoic acid, widely used as a food preservative (with an E number of E211) and a pickling agent. It appears as a white crystalline chemical with the formula C6H5COONa. Production Sodium benzoate is commonly produced by the neutralization of sodium hydroxide (NaOH) with benzoic acid (C6H5COOH), which is itself produced commercially by partial oxidation of toluene with oxygen. Reactions Sodium benzoate can be decarboxylated with strong base and heat, yielding benzene: C6H5COONa + NaOH -> C6H6 + Na2CO3 Natural occurrence Sodium benzoate is not a naturally occurring substance. However, many foods are natural sources of benzoic acid, its salts, and its esters. Fruits and vegetables can be rich sources, particularly berries such as cranberry and bilberry. Other sources include seafood, such as prawns, and dairy products. Uses As a preservative Sodium benzoate can act as a food preservative. It is most widely used in acidic foods such as salad dressings (for example, acetic acid in vinegar), carbonated drinks (carbonic acid), jams and fruit juices (citric acid), pickles (acetic acid), condiments, and frozen yogurt toppings. It is also used as a preservative in medicines and cosmetics. Under these acidic conditions it is converted into benzoic acid (E210), which is bacteriostatic and fungistatic. Benzoic acid is generally not used directly due to its poor water solubility. Concentration as a food preservative is limited by the FDA in the U.S. to 0.1% by weight. Sodium benzoate is also allowed as an animal food additive at up to 0.1%, per the Association of American Feed Control Officials. Sodium benzoate has been replaced by potassium sorbate in the majority of soft drinks in the United Kingdom. In the early 20th century, sodium benzoate as a food ingredient was investigated by Harvey W. Wiley with his 'Poison Squad' as part of the US Department of Agriculture.
This led to the 1906 Pure Food and Drug Act, a key event in the early history of food regulation in the United States. In pharmaceuticals Sodium benzoate is used as a treatment for urea cycle disorders due to its ability to bind amino acids. This leads to excretion of these amino acids and a decrease in ammonia levels. Recent research shows that sodium benzoate may be beneficial as an add-on therapy (1 gram/day) in schizophrenia. Total Positive and Negative Syndrome Scale scores dropped by 21% compared to placebo. Sodium benzoate, along with phenylbutyrate, is used to treat hyperammonemia. Sodium benzoate, along with caffeine, is used to treat postdural puncture headache, respiratory depression associated with overdosage of narcotics, and with ergotamine to treat vascular headache. Other uses Sodium benzoate is also used in fireworks as a fuel in whistle mix, a powder that emits a whistling noise when compressed into a tube and ignited. Mechanism of food preservation The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH falls to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase decreases sharply, which inhibits the growth and survival of microorganisms that cause food spoilage. Health and safety In the United States, sodium benzoate is designated as generally recognized as safe (GRAS) by the Food and Drug Administration. The International Programme on Chemical Safety found no adverse effects in humans at doses of 647–825 mg/kg of body weight per day. Cats have a significantly lower tolerance against benzoic acid and its salts than rats and mice. The human body rapidly clears sodium benzoate by combining it with glycine to form hippuric acid which is then excreted. The metabolic pathway for this begins with the conversion of benzoate by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine N-acyltransferase into hippuric acid. 
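The decarboxylation shown in the Reactions section (C6H5COONa + NaOH -> C6H6 + Na2CO3) can be sanity-checked for atom balance in a few lines of Python; the element maps below are simply the molecular formulas written out, and `Counter` addition sums atom counts across each side.

```python
from collections import Counter

# Molecular formulas as element -> atom count maps
sodium_benzoate  = Counter({"C": 7, "H": 5, "O": 2, "Na": 1})  # C6H5COONa
sodium_hydroxide = Counter({"Na": 1, "O": 1, "H": 1})          # NaOH
benzene          = Counter({"C": 6, "H": 6})                   # C6H6
sodium_carbonate = Counter({"Na": 2, "C": 1, "O": 3})          # Na2CO3

# C6H5COONa + NaOH -> C6H6 + Na2CO3
reactants = sodium_benzoate + sodium_hydroxide
products  = benzene + sodium_carbonate
print(reactants == products)  # True: every element balances
```

Both sides come to C7 H6 O3 Na2, confirming the equation as written is balanced.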
Association with benzene in soft drinks and pepper sauces In combination with ascorbic acid (vitamin C, E300), sodium benzoate and potassium benzoate may form benzene. In 2006, the Food and Drug Administration tested 100 beverages available in the United States that contained both ascorbic acid and benzoate. Four had benzene levels above the 5 ppb Maximum Contaminant Level set by the Environmental Protection Agency for drinking water. Most of the beverages that tested above the limit have been reformulated and subsequently tested below the safety limit. Heat, light and shelf life can increase the rate at which benzene is formed. Hot peppers naturally contain vitamin C ("nearly as much as in one orange"), so the observation about beverages also applies to pepper sauces containing sodium benzoate, like Texas Pete. ADHD and hyperactivity Research published in 2007 for the UK's Food Standards Agency (FSA) suggests that certain artificial colors, when paired with sodium benzoate, may be linked to hyperactive behavior and other ADHD symptoms. The results were inconsistent regarding sodium benzoate, so the FSA recommended further study. The Food Standards Agency concluded that the observed increases in hyperactive behavior, if real, were more likely to be linked to the artificial colors than to sodium benzoate. The report's author, Jim Stevenson from Southampton University, said: "The results suggest that consumption of certain mixtures of artificial food colours and sodium benzoate preservative are associated with increases in hyperactive behaviour in children. ... Many other influences are at work but this at least is one a child can avoid." Compendial status British Pharmacopoeia European Pharmacopoeia Food Chemicals Codex Japanese Pharmacopoeia United States Pharmacopeia
Physical sciences
Organic salts
Chemistry
902027
https://en.wikipedia.org/wiki/Orion%20Arm
Orion Arm
The Orion Arm, also known as the Orion–Cygnus Arm, is a minor spiral arm within the Milky Way Galaxy spanning in width and extending roughly in length. This galactic structure encompasses the Solar System, including Earth. It is sometimes referred to by alternate names such as the Local Arm or Orion Bridge, and it was previously identified as the Local Spur or the Orion Spur. It should not be confused with the outer terminus of the Norma Arm, known as the Cygnus Arm. Naming and brightness The arm is named after the constellation Orion, one of the most prominent constellations of the Northern Hemisphere in winter (or the Southern Hemisphere in summer). Some of the brightest stars in the sky, as well as other well-known celestial objects of the constellation (e.g. Betelgeuse, Rigel, the three stars of Orion's Belt, and the Orion Nebula), are found within it. Location The Orion Arm is located between the Carina–Sagittarius Arm, the local portion of which projects toward the Galactic Center, and the local portion of the Perseus Arm, which forms the main outermost arm. Scientists once believed the Orion Arm to be a minor structure, namely a "spur" between the Carina–Sagittarius and Perseus arms, but evidence presented in 2013 suggests the Orion Arm is a branch of the Perseus Arm, or possibly an independent arm segment. The Solar System is close to its inner rim, about halfway along the arm's length, in a relative cavity in the arm's interstellar medium known as the Local Bubble. It is approximately from the Galactic Center. Composition Recently, the BeSSeL Survey (Bar and Spiral Structure Legacy Survey) analyzed the parallax and proper motion of more than 30 methanol (6.7-GHz) and water (22-GHz) masers in high-mass star-forming regions within a few kiloparsecs of the Sun. These measurements are accurate to better than ±10%, and in some cases to 3%.
The accurate locations of interstellar masers in HMSFRs (high-mass star-forming regions) suggest the Local Arm is an orphan segment of an arm between the Sagittarius and Perseus arms that wraps around less than a quarter of the Milky Way. The segment is ~20,000 ly in length and ~3,000 ly in width, with a pitch angle of 10.1° ± 2.7° to 11.6° ± 1.8°. These results suggest the Local Arm is larger than previously thought, and both its pitch angle and star formation rate are comparable to those of the Galaxy's major spiral arms. The Local Arm can thus reasonably be regarded as the fifth major spiral feature of the Milky Way. Form To understand the form of the Local Arm between the Sagittarius and Perseus arms, the stellar density of a specific population of stars about 1 Gyr in age between 90° ≤ l ≤ 270° has been mapped using Gaia DR2. The 1 Gyr population was employed because these stars are significantly more evolved objects than the gas in the HMSFRs tracing the Local Arm. Investigations have been carried out to compare both the stellar density and the gas distribution along the Local Arm. Researchers have found a marginally significant arm-like stellar overdensity close to the Local Arm identified with the HMSFRs, especially in the region of 90° ≤ l ≤ 190°. The researchers concluded that the Local Arm segment is not associated only with gas and star-forming clouds, but also shows a significant overdensity of stars. They also found that the pitch angle of the stellar arm is slightly larger than that of the gas-defined arm, and that there is an offset between the gas-defined and stellar arms. These differences in pitch angles and offsets between the stellar and HMSFR-defined spiral arms are consistent with the expectation that star formation lags behind gas compression in a spiral density wave that lasts longer than the typical star formation timescale of 10⁷–10⁸ years.
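The BeSSeL maser parallaxes discussed above translate into distances via the standard trigonometric-parallax relation d [pc] = 1/p [arcsec]. The sketch below applies it, using 1 pc ≈ 3.2616 ly; the example parallax value is illustrative, not a measurement from the survey.

```python
def parallax_distance_ly(parallax_mas: float) -> float:
    """Distance from a trigonometric parallax given in milliarcseconds.

    d [pc] = 1 / p [arcsec], then converted to light-years
    (1 pc ≈ 3.2616 ly). Maser parallaxes of a fraction of a
    milliarcsecond correspond to distances of a few kiloparsecs,
    and a 10% parallax error maps directly to a 10% distance error.
    """
    LY_PER_PC = 3.2616
    parallax_arcsec = parallax_mas / 1000.0
    return (1.0 / parallax_arcsec) * LY_PER_PC

# An illustrative 0.5 mas parallax corresponds to 2,000 pc (~6,523 ly).
print(round(parallax_distance_ly(0.5)))
```

This inverse relation is why the sub-milliarcsecond precision quoted for the survey is what makes kiloparsec-scale arm mapping possible at all.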
Messier objects The Orion Arm contains a number of Messier objects: The Butterfly Cluster (M6) The Ptolemy Cluster (M7) Open Cluster M23 Open Cluster M25 The Dumbbell Nebula (M27) Open Cluster M29 Open Cluster M34 Open Cluster M35 Open Cluster M39 Winnecke 4 (M40) Open Cluster M41 The Orion Nebula (M42) De Mairan's Nebula (M43) The Beehive Cluster (M44) The Pleiades (M45) Open Cluster M46 Open Cluster M47 Open Cluster M48 Open Cluster M50 The Ring Nebula (M57) Open Cluster M67 M73 The Little Dumbbell Nebula (M76) Diffuse Nebula M78 Open Cluster M93 The Owl Nebula (M97)
Physical sciences
Milky Way
Astronomy
902183
https://en.wikipedia.org/wiki/Double-decker%20bus
Double-decker bus
A double-decker bus or double-deck bus is a bus that has two storeys or decks. Double-deckers are used primarily for commuter transport, but open-top models are used as sightseeing buses for tourists, and double-decker coaches are used for long-distance travel. They appear in many places around the world but are presently most commonly used as mass transport in cities of Britain and Ireland, Hong Kong, Berlin and Singapore. The earliest double-decker horse-drawn omnibus appeared in Paris in 1853, and such vehicles were motorised in the 1900s. Double-decker buses were popularised in Great Britain at the start of the 20th century, and today the best-known example is the red London bus, namely the AEC Routemaster. Double-deckers in urban transport were also in common use in other places, such as major cities of India, but were mostly diminished or phased out by the end of the 20th century. However, they remain common in Britain, Ireland and Hong Kong, while in Singapore and Dhaka they were introduced and expanded into large numbers after the end of British colonial rule. Overview There are several types of double-decker buses. Early double-deckers put the driver in a separate cab. Passenger access was via an open platform at the rear, and a bus conductor collected fares. Modern double-deckers have a main entrance door at the front and the driver takes fares, thus halving the number of workers aboard but slowing the boarding process. The rear open platform, popular with passengers, was abandoned for safety reasons, as there was a risk of passengers falling when running and jumping onto the bus. By country Cities listed here have double-decker buses as part of their regular mass transit fleet. Cities with only tourist and sightseeing double-decker buses are excluded.
Europe In the European Union, the maximum height for any vehicle (motor vehicles in categories M2 and M3 and their trailers in category O, and motor vehicles in categories N2 and N3 and their trailers in categories O3 and O4, in national and international traffic) is 4 metres, according to Council Directive 96/53/EC of 25 July 1996, in continuity with Council Directive 85/3/EEC. The United Kingdom has three standard types of double-decker bus: the highbridge bus (urban Britain), the lowbridge bus (rural Britain), and the 4-metre-high coach, such as the Neoplan Skyliner, that can traverse Europe. Outside the British Isles, double-decker buses in Europe are most prominent in Skopje and Berlin. United Kingdom The first commercial horse-drawn double-decker omnibuses were introduced in England in 1847 by Adams & Co. of Fairfield, Bow; the design was then improved by John Greenwood, who introduced a new double-decker in 1852. William Gladstone, speaking of London's double-deck horse-drawn omnibuses, once observed that "...the best way to see London is from the top of a bus". Double-decker buses are in common use throughout the United Kingdom and have been favoured over articulated buses by many operators because of their shorter length and larger seating capacity; they may also be safer to operate through narrow streets and round tight corners. The majority of double-decker buses in the UK are between and long, the latter being more common since the mid-1990s, though there are three-axle models in service with some operators. Double-decker coaches in the UK have traditionally been in length, though many newer models are about . The red double-decker buses in London have become a national symbol of England. Most buses in London, as in the rest of the UK, are double-deckers. A particular example was the AEC Routemaster bus, which had been a staple of the public transport network in London for nearly half a century following its introduction in 1956.
The remaining Routemasters were finally retired from general service in 2005 because of the difficulty of accommodating disabled passengers. Transport for London kept these vintage buses in operation on heritage route 15H until 2020, when it was discontinued due to the COVID-19 pandemic; the contract expired in November 2020 and was not renewed, and in 2021 it was announced that the service would not continue. There was formerly a second heritage route (9H), but this ceased operation in 2014 due to low patronage and increased operating costs. In 2007, a hybrid-powered double-decker entered service on London Buses route 141. By late 2008, more hybrid double-deckers from three manufacturers had entered service in London. A New Routemaster was developed that year and entered service on 20 February 2012. In October 2015, London added five all-electric double-decker buses, the world's first, made by Chinese firm BYD. The maximum permissible length of a rigid double-decker bus and coach in the UK is with 3 axles and metres with two. However, the total maximum dimensions, including trailer or articulated section, in normal circumstances are: Coaches are normally built to high, while 'highbridge' buses are normally about taller. Articulated double-deckers are also allowed at a maximum length of . Channel Islands Double-deckers operate in Jersey. Gibraltar In the territory of Gibraltar, Calypso Transport operates double-deckers in red livery. Notably, this is the only British territory in Europe that drives on the right, and hence the buses are left-hand drive. Isle of Man Bus Vannin operates about 24 double-deckers on routes all across the island. Republic of Ireland In the Republic of Ireland, the majority of the buses operated in and around the Greater Dublin area are double-deckers, operated by Dublin Bus. There are 1,000 double-decker buses (the second-largest fleet in Europe after London's) in the company's fleet of 1,008 (October 2019).
The private operator Go-Ahead Ireland also operates a mixed fleet consisting of both double- and single-deck vehicles. Bus Éireann also utilises double-decker buses on some of its commuter routes, such as the Dublin to Wicklow service. Double-deckers are also common on some of the company's city routes in Cork, Galway and Limerick. More luxurious double-deckers are used on inter-city routes, such as the X1 Dublin–Belfast or X3/X4 Dublin–Derry routes. Austria Double-decker buses were in use on city services in Vienna between 1960 and 1991. They are used on services between Vienna and its airport, and are also operated by Ötztaler Verkehrsgesellschaft (ÖVG) under contract to ÖBB-Postbus on service 4420 between Innsbruck and Lienz. Czech Republic Since 2020, two Scania UNVI Urbis DD CNG buses have been running on public transport lines in Ostrava: on line 78 during working days, and on line 88 at weekends and holidays during the summer season. Denmark Since 1970, various operators of Copenhagen city transport have used double-deckers: originally Leyland, then MAN in the 1980s–1990s, and in the 2000s Volvo, derivatives of the model B7. Finland Double-decker buses are relatively rare in Finland, but there are known to be at least four Routemasters in Finland: one in Helsinki, one in Heinola, one in summer tourist charter in Espoo and one in summer tourist traffic in Kuopio. In the autumn of 2019, Public Transport of Turku, also known as Föli, became the first city operator to officially incorporate double-decker buses into local traffic. France The first French double-decker bus was brought into service in Paris in 1853; it was a horse-drawn omnibus. The upper floor was cheaper and often uncovered. The first double-decker motor bus in Paris, the Schneider Brillié P2, appeared in 1906. It was designed to carry more passengers and to replace the horse-drawn double-decker omnibus.
Like trams and omnibuses, double-decker motor buses included two classes of travel: first class inside the vehicle and second class outdoors on top. But this type of vehicle was withdrawn in 1911 after one of them overturned at the place de l'Étoile; following this incident the P2s lost their upper deck and were renamed P3s. It was not until 1966 that the RATP re-tried double-deckers, on two lines in Paris. A prototype built by Berliet (type E-PCMR) was put into service in 1966, with an order being placed for 25 vehicles. The first production vehicle entered service on 19 June 1968 on line 94, Gare Montparnasse – Levallois. On 17 February 1969, line 53, Opéra – Porte d'Asnières, was in turn equipped with this model. But traffic problems caused the RATP to definitively abandon this vehicle in 1977, because this type of bus was found to be poorly suited to the structure of the Paris network: the stops were too close to each other, leaving passengers little time to go upstairs. Consequently, there are no Parisian bus routes using double-deckers. SITAC operates a service 5 between Calais and Sangatte using a double-decker bus. Germany In Germany, double-decker buses in Berlin are operated by Berliner Verkehrsbetriebe (BVG). Berlin has the largest double-decker fleet in continental Europe, with 197 vehicles operating as of 2023 (compared to 484 single-deckers and 928 articulated buses). However, the number used to be higher: 1,000 in 1992, reduced to 450 in 2002. The city has had double-decker buses at least since the 1920s. The models in operation in 2004 were long and held around 95 passengers. The replacements, which are supplied by Neoman Bus, are longer. The new buses are able to hold 128 passengers. Italy During the 1960s and 1970s, major cities like Turin, Milan, Rome, Florence, Verona, Bologna, Rimini, Naples, Bari and Palermo adopted Fiat double-decker buses. The most common model was the Fiat 412 Aerfer, and in 1961 it was replaced by the Fiat 413 Viberti Monotral CV61.
Liechtenstein Liemobil operates four double decker MAN A39 buses on service 11 between Sargans, Switzerland and Feldkirch, Vorarlberg, Austria and on other services 12, 13 and 14 in the country. Netherlands It is only very recently that double-decker buses have started to be used in the Netherlands. On 10 December 2017 Connexxion put 18 three-axle double-deckers into service on route 346 between Haarlem and Amsterdam Zuid, a heavily used commuter route not served by rail. They are Futura FDD2s built by VDL Bus & Coach in Valkenswaard, are long, and carry 86 seated passengers. Their introduction was not entirely without issues since their route initially had to be diverted to avoid passing under a dangerously low tram overhead wire near the VU Medical Centre stop. Also in December 2017, Qbuzz introduced five double-deckers on its route 300 between Groningen and Emmen. These are Van Hool TDX27 Astromegas, also long and carrying 85 passengers. North Macedonia The Macedonian government bought 217 Yutong City Master double-decker city buses for local transport in Skopje, the capital, built in China's Zhengzhou Yutong factory. The buses were put into operation on 8 September 2011, coinciding with the day of Macedonian independence. This model of bus has capacity for 80 passengers. They represent most of the 312 buses currently in operation by the Skopje public transport company. Norway In June 2008 Boreal Transport on contract with Kolumbus introduced three double-decker buses to provide more seating for certain high-traffic departures in Stavanger. Poland PKS Szczecin since 2021. Portugal Double-decker buses were introduced in Portugal during the 1950s when buses in general started to be used in the main cities such as Lisbon, Porto, Coimbra and Setúbal. The types used were the AEC Regent and later the Daimler Fleetline and the Leyland Atlantean, with Portuguese-built bodies. There was also one Leyland Olympian as a demonstration vehicle in Lisbon. 
In Porto, there were double-decker trolleybuses, produced by Lancia and with Dalfa bodywork, in use from the mid-1960s until the mid-1990s. Double-decker buses were not in widespread use for normal service but were mainly used for sightseeing purposes. They were most commonly Portuguese-produced vehicles, including rebodies of regular service buses (for example, the Volvo B10R from Carristur), as well as some from former companies, such as the MAN SD202 from BVG Berlin, many of them still in circulation. The absence of double-decker buses from regular service lasted until 2011, when STCP acquired 15 double-decker buses of the type MAN A39 (as used in Berlin). They were introduced at a company event named "Duplex Tour" on 26 February 2011 and put into normal service on the 28th of that month. These buses can usually be seen on route 500. Russia Until 2011, double-decker buses operated in the city of Barnaul. The double-decker fleet consisted of seven second-hand MAN SD200 and MAN SD202 buses imported from Berlin. Those buses were used on routes 3, 10 and 17. In the mid-1990s, some double-deckers were operated briefly in Saint Petersburg. Spain Double-decker buses were introduced in 2014 in Bilbao by the city bus operator Bilbobus. They are not the first double-deck vehicles in the city, as ex-London Transport Q1 trolleybuses were sold to Bilbao after the end of London trolleybus operations in 1962 and were operated until the system's closure in 1978. Initially, six vehicles operated on Bilbobus route 56. They have a capacity of 132 passengers: 80 seated and 50 standing. Sweden In 1965, Sweden bought 50 Leyland Atlantean double-decker buses with Park Royal bodies. Leyland claimed they were the first double-decker buses capable of one-man operation. They had two staircases and two pairs of doors. The Atlanteans were not replaced at the end of their revenue service life in 1974.
However, in 2011 double-deckers returned to Sweden on revenue duties, with VDL Synergy buses on the SL 676 Stockholm Östra–Norrtälje line. Norrtälje is located around 70 km north of Stockholm. Switzerland In Switzerland, Postauto operates double-decker buses on the Engelburg–St Gallen–Heiden route, in the Obertoggenburg region, and in the regions of Rorschach and Goldach. Nineteen Alexander Dennis Enviro500s, which seat 80 passengers and can carry 48 standing, have been ordered to operate these services. Four double-deckers are also operated in Graubünden, which are due to be replaced within the next two years. Turkey In Turkey, the Istanbul public transit system (IETT) runs 89 double-decker buses on longer-distance routes, most notably commuter buses crossing the Bosphorus Bridge linking the European and Asiatic sides of the city. Double-decker buses are also used on routes from Taksim Square to far-flung western suburbs such as Büyükçekmece and Bahçeşehir. Africa Egypt Several cities in Egypt use double-decker buses as part of their public transportation systems, including Cairo. The MAN Lion's City buses, manufactured in Egypt in 2018, were introduced in Cairo to provide greater capacity on its bus network. Red double-decker buses are also a feature of Alexandria's bus network. Ethiopia In 2017, as part of a larger order of 850 new buses, the city of Addis Ababa purchased a fleet of 50 double-decker buses to operate routes on its public transportation system. Of these, 25 are operated by the Anbessa City Bus Service Enterprise and 25 are part of the Sheger bus company's fleet; both are government-owned. Kenya A fleet of double-decker buses operates in Mombasa, running a route aimed at tourists. The buses are open-top, manufactured by Yaxing Coach, and run on a hop-on hop-off sightseeing route around the city. Since 2014, a double-decker bus owned by the City Shuttle Bus Company has also provided public transportation in Nairobi.
Malawi In Malawi, multiple companies operate fleets of double-decker buses on intercity services. Modern Marcopolo buses run direct routes between the nation's two largest cities, Lilongwe and Blantyre. New double-decker buses are also in use on more regional routes, including those connecting cities like Mangochi, Mzimba, and Mzuzu. South Africa Double-decker buses are a feature of a number of transportation systems in South Africa. Johannesburg's public bus system, known as Metrobus and operated by the city, has 550 buses that run on 84 routes throughout the city. Of these, 150 are modern double-decker buses manufactured by Volvo, Scania, and Marcopolo. The City of Tshwane Metropolitan Municipality in Gauteng Province has 113 double-decker buses in its public transportation fleet. Golden Arrow Bus Services, the main operator of bus services in Cape Town, owns recently acquired double-decker MAN Lion's City buses. As in other countries across Southern Africa, double-decker buses are often used by private companies for intercity transport connecting major population centers, as well as linking South African cities with hubs in Botswana, Malawi, Mozambique, Namibia, and Zimbabwe. A number of tourism companies offer open-top double-decker bus sightseeing tours of major cities in South Africa. Zambia Double-decker buses are used by a number of private companies in Zambia for intercity services, both domestically connecting Lusaka with other cities across the country and internationally to cities like Johannesburg. East Asia Mainland China Several cities in mainland China have double-deckers in regular use on certain crowded lines, while others have a few double-deck buses on lines that also use single-deck vehicles, e.g. Nanning on line 704 in peak hours. Guilin is a leading city in regular double-decker operation on major routes; on its main street double-deckers prevail, running almost every minute.
Besides Guilin and Nanning, Beijing, Shanghai, Guangzhou, Shenzhen, Tianjin, Hangzhou, Wuhan, Dalian, Foshan and Kunming also have these buses in service, particularly on routes during rush hours. Larger towns in the developed coastal provinces, including Shaoxing, Zhejiang province, use double-decker buses. Hong Kong The former British colony of Hong Kong saw its first double-decker buses introduced in 1949 by Kowloon Motor Bus. They have become very popular since then, and they are found in large numbers in the fleets of the territory's major bus operators. By law, double-decker buses in Hong Kong are limited to a length of . The majority of buses running in Hong Kong are double-deckers, and all of them are air-conditioned. Kowloon Motor Bus, the largest bus operator in Hong Kong, operates a fleet of 3,752 double-deckers as of 2017, representing 96% of its total fleet. Citybus operates a total of 950 double-deckers, which is also almost all of its fleet. Hong Kong also has a double-decker tram system, the Hong Kong Tramways, one of only three in the world and the only fleet which is all double-deck. Macau In the former Portuguese territory of Macau, Fok Lei and its successor Transmac used second-hand double-deckers widely from the early 1970s until the late 1980s. Taiwan In the early 1990s two tri-axle Leyland Olympians were evaluated in Taipei and Taichung. The evaluation was unsuccessful and the buses were sold to Hong Kong for spares. Japan By Japanese law, vehicles are confined to a maximum height and length. Japanese double-decker buses are mainly used for inter-city highway buses (i.e., motor coaches), city tours, and charter buses. In 1960, Kinki Sharyo and Hino Motors manufactured the first original double-decker bus, the "Vista Coach", for Kinki Nippon Railway (Kintetsu). In 1979, Chuo Kotsu, a chartered bus operator in Osaka, imported the Neoplan Skyliner.
The Skyliner and the other imported buses – the Van Hool Astromega TD824, Drögmöller E440 Meteor, and a few MAN coaches – inspired Japanese bus manufacturers, who developed three domestic models in the mid-1980s: the "Nissan Diesel Space Dream", "Hino Grand View" and "Mitsubishi Fuso Aero King". They did not, however, sell very well, as the ceiling was only high. The Aero King was nevertheless sold for 22 years, but, being unable to meet exhaust emission and safety standards, production stopped in 2005. From 1982 to 2001, Toei Bus operated Neoplan Skyliners in Tokyo between Asakusa and Ueno. Joban Kotsu operated Skyliners on a trans-Fukushima route between Iwaki and Aizu-Wakamatsu via Koriyama from 1983 to 1996. Since the 1990s, JR Buses have used the Aero King for an overnight inter-city highway bus service named "Dream-go". The first Aero King in Dream-go operated on the "Fuku Fuku Tokyo" service between Tokyo and Shimonoseki, Yamaguchi, jointly with Sanden Kotsu; it was replaced with a "super high-decker" coach in the middle of the 1990s, and the service finally stopped in 2006. Japanese overnight highway buses are mainly equipped with a three-line, two-aisle (1+1+1) seat configuration with reclining seats. When this configuration is used on an ordinary coach, it has 28, 29 or 31 seats. When it is used on a double-decker bus, it has 36 or 40 seats: the vehicle's price and capacity increase while operating cost decreases. The JR Bus group mainly uses the Aero King, the Skyliner, and a few Jonckheere Monacos (equipped with Nissan Diesel engines) for inter-city highway bus operations between Kanto (near Tokyo) and Kansai (near Osaka), named "Dream-go" (overnight express) and "Hiru-tokkyu" (daytime express). Other bus operators, inspired by "Dream-go", increased use of the Aero King for overnight inter-city bus services. JR Bus Kanto imported four Neoplan Megaliner N128/4s, leasing two to an operating partner (from 2003 to 2006, Kanto Railway; since 2006, Nishinihon JR Bus).
The Megaliner is long, has 84 seats (in a 2+2 configuration), and was operated on an inter-city highway route between Tokyo and Tsukuba, Ibaraki from 2002 to 2005. The Megaliner has also been converted for a low-price overnight highway bus service between Tokyo and Osaka called "Seishun Mega Dream-go", with special authorisation. South Korea In 2015, a fleet of 20 double-decker buses was introduced as a pilot project for commuters travelling between the capital Seoul, its surrounding Gyeonggi Province and nearby Incheon. North Korea Double-deckers started running on Pyongyang streets in the latter half of the 2000s. South Asia Bangladesh The Bangladesh Road Transport Corporation (BRTC) operates a fleet of Ashok Leyland buses on the streets of Dhaka and Chittagong. The majority of buses running in Dhaka are double-deckers, numbering about 445 as of 2022. In 2002, 50 Volvo B10M/Alexander buses were procured to operate in Dhaka and quickly made up most of the fleet, but due to a lack of maintenance and unavailability of spare parts, they were slowly taken off the roads before being completely phased out in 2010. The BRTC has tried unsuccessfully to have them repaired and brought back into service. The company instead purchased 290 double-decker vehicles from Ashok Leyland of India as a replacement. BRTC had about 399 double-deckers as of 2012. In 2018, the BRTC acquired 300 buses from Ashok Leyland. In 2020, Indonesian bus manufacturer CV Laksana exported 10 luxury double-decker coaches, built atop a Scania chassis, to Bangladesh. India In India, cities such as Hyderabad, Bangalore, Lucknow, Delhi, and Kolkata had double-deckers for a while before discontinuing them. In Delhi, double-deckers were discontinued from service around 1986. Hyderabad operated them until 2003; the service was revived strictly for tourist purposes in 2023.
Chennai's Metropolitan Transport Corporation (MTC) had a small fleet of double-decker buses, mostly on high-density, longer-distance routes. They operated from 1975 before being wound up, were briefly reintroduced in 1997, and were operated until 2008. Mumbai has had double-decker buses since 1937 – the first Indian city to have them, during the British colonial period – and they have since become an iconic symbol of the city. About 900 double-deckers were in operation in (then called) Bombay during the 1960s, modelled on London's, but the numbers went into decline in the 1990s. The Brihanmumbai Electric Supply and Transport (BEST), which operates the city's buses, phased out the legacy double-deckers in 2023 in favour of modern ones. Kerala State Road Transport Corporation operates double-deckers, modelled on the London buses, in the cities of Thiruvananthapuram and Kochi. Ashok Leyland Titan double-decker buses are used in all cities. Articulated double-decker buses from Ashok Leyland were used until they were phased out in the early 1990s, as they were thought to be unsuitable for city traffic. In 2020, West Bengal Transport Corporation reintroduced double-decker buses in Kolkata after an absence of 30 years, on routes where wide road space was available, i.e. no overhead cables, low bridges or flyovers. In 2022, Ashok Leyland subsidiary Switch Mobility introduced its electric double-decker, the EiV22, in India. The bus was subsequently inducted into the fleets of BEST in Mumbai, TSRTC in Hyderabad, TMC in Tirupati and Mo Bus in Bhubaneswar. As of the end of 2023, BEST has 49 of these vehicles in its fleet in Mumbai. Pakistan Certain cities of Pakistan, including Karachi and Peshawar, had double-deckers in operation from the 1950s until the late 1970s. In Lahore these were Leyland buses operated by the Lahore Omnibus Service.
In November 2015, double-deckers were reintroduced to Pakistan, but only as a sightseeing service for tourists operated by the Tourism Development Corporation of the Punjab (TDCP) under the name Sightseeing Lahore. The vehicles were imported from China. TDCP started a sightseeing tour in Islamabad and Rawalpindi in 2020 using the same double-decker vehicles. Sri Lanka In the 1950s, double-decker buses of the South Western Bus Company plied the Galle Road in Colombo, Sri Lanka. These were taken over by the Ceylon Transport Board (CTB) when all bus services were nationalised in 1958. Beginning around 1959, large numbers of second-hand double-decker buses of the RT, RTL and RTW classes were imported by the CTB from London Transport, and ran in their original red livery with the oval CTB logo painted on the sides. These buses were phased out beginning in the mid-1970s, and none remain in service. Later, around 1985, 40 ex-London Routemasters entered service. One Routemaster bus is run by the Sirasa TV and radio station. Today's buses in Sri Lanka include the AEC Routemaster (currently phased out to make way for the Volvo B9TL/East Lancs Nordic and incoming First Western Dennis Trident 2/Plaxton President of 2001/02), the MCW Metrobus (including 12m parts), the Leyland Atlantean, and the Dennis Trident 2 (1999/2000), plus some Volvo B7TL/East Lancs Vyking and Volvo B9TL/East Lancs Nordic buses. Southeast Asia Indonesia Indonesia first operated double-decker buses in 1968, when Leyland Titan double-deckers entered service in Jakarta. The double-decker bus service linked Salemba in Central Jakarta with the Blok M area in South Jakarta from 1968 to 1982. Between 1984 and 1996, the Jakarta municipal bus service, Perusahaan Umum Pengangkutan Penumpang Djakarta (Perum PPD), operated a fleet of 180 Volvo B55 double-decker buses, connecting various corners of the city.
The double-decker bus service ceased to operate in 1996 due to the aging fleet and a lack of spare parts, and there were no plans to renew the double-decker fleet in Jakarta. The remnants of the double-decker bus bodies were sold and repurposed as a bus-themed clothing store in Blok M and a restaurant in the Senayan (now SCBD) area, though these establishments have since been demolished. By the early 2000s, PPD had shifted its fleet from European-built double-decker buses to cheaper second-hand Japanese buses and articulated buses imported from China. By that time, the double-decker had lost favour to the articulated bus, which provides more entry and exit points for faster boarding. In 2004 the TransJakarta bus rapid transit system began service in Jakarta, choosing conventional and articulated buses rather than double-deckers. Since February 2014, the Jakarta Government has provided free double-decker bus tours offering sightseeing in Central Jakarta. The buses' route covers tourist attractions such as Monas, Istiqlal Mosque, the Cathedral, the National Museum, Sarinah, and Plaza Indonesia, as well as the Grand Indonesia shopping centre. As of 2016 there were 18 double-decker buses in Jakarta, and the service was expanded to include Kota Tua and the Gelora Bung Karno Stadium in the Senayan area via Sudirman avenue. Other than the capital Jakarta, some cities in Indonesia have operated double-decker buses, mostly as city sightseeing tour services: Bandung, Semarang and Surakarta. The Bandros is a double-decker tourist bus operating in Bandung since 2014. After the completion of the Trans-Java highway section connecting Jakarta and Surabaya in 2018, some intercity bus services began operating fleets of double-decker buses, chosen for their larger capacity and the seat space available for a more comfortable journey across Java.
Previously, the problem with operating intercity double-decker buses was the steep and narrow roads of the mountainous interior of Java. The Trans-Java highway provides rather straight and even road terrain for smooth travel between major cities on the island. Malaysia Malaysia has historically seen the use of double-decker buses in mass transit to varying degrees, but their use was significantly limited by operational costs and the driving space such buses require. Early double-decker municipal buses primarily existed in Malaya within the Kuala Lumpur area of Selangor and George Town, Penang between the late 1940s and the early 1960s, when double-deckers were eventually withdrawn in favour of cheaper and more agile single-deck buses. The earliest recorded use of double-deckers by Malayan bus companies was in Selangor in 1948, when the Toong Fong Omnibus Company acquired two Park Royal-built Guy Arab IIIs at a cost of M$40,000 each; the General Transport Company (GTC) followed by acquiring Park Royal-built AEC Regent IIIs. While the buses saw service for over a decade, all of them were taken out of service for a variety of reasons and were never replaced with new double-deckers: the buses were often obstructed by narrow streets, trees, low bridges, and increasing overhead wires; passengers eventually favoured staying on the lower deck; and the cost of operating the buses was higher due to a local vehicle tax calculated on the number of seats of a taxed vehicle. One Toong Fong double-decker was burned in the late 1950s by communist insurgents, while the remaining double-deckers had fallen out of use by the mid-1960s due to age. The successor of the GTC, Sri Jaya, experimented with a reintroduction of double-deckers in 1989 by leasing a Singapore-assembled, 102-seat Leyland Olympian for use within Kuala Lumpur for six months, but found that street conditions were as problematic as before and discontinued use of the bus after the trial.
In George Town, Penang, five retired AEC C1-class double-decker trolleybuses were procured in 1956 by the George Town Municipal Tramways from London Transport as an experiment for the possible use of double-decker buses in George Town. Poor performance results and the advancing age of the vehicles, coupled with efforts to replace the entire trolleybus fleet with single-deck diesel-powered buses in the 1960s, led to the withdrawal of the only double-deck buses in early Penangite public transport. Following increasing public bus ridership, more open roadways and the feasibility of operating double-deck hop-on hop-off tourist buses within Kuala Lumpur, Prasarana Malaysia purchased 40 (revised from an earlier 111) Alexander Dennis Enviro500 double-decker buses in 2014 to serve high-volume Rapid KL Rapid Bus routes; each has a capacity of 108 passengers, double that of a contemporary single-deck bus in the fleet. The first five buses of the batch entered service in September 2015, with the rest of the fleet gradually added into service in the following months. In November 2019, Prasarana ordered a further 90 Gemilang Coachworks-bodied Volvo B8Ls as part of Rapid KL's bus fleet replacement programme, with the first batch of buses entering service in June 2020. Feasibility studies were also conducted by Prasarana in 2015 on the reintroduction of double-deckers in Penang through Rapid Penang's bus service. By August 2016, a fleet of three Rapid Penang Enviro500s was officially launched into service, with a total of 33 buses planned. Although plans were reaffirmed in 2017 to expand the double-decker fleet by 30, the existing three Enviro500s were quietly withdrawn from service by 2018 due to operational issues and were subsequently transferred to the Rapid KL fleet.
Beyond mass transit, double-deckers have seen widespread use as long-distance coaches since the late 2000s in response to growing demand for intercity travel, as expressways and outlying bus stops lack the physical obstacles that plagued urban bus services. Singapore There are currently over 2,000 double-deckers in operation in Singapore. In October 1953, a single AEC Regent III double-decker from the fleet of the Kuala Lumpur-based General Transport Company was sent to Singapore for demonstration. It was used in service by the Singapore Traction Company for two weeks, inspected by two Chinese-owned bus companies, and then sent back to Kuala Lumpur. No orders for double-deckers followed until the 1970s. Sentosa was already operating several ex-London double-decker buses, such as the AEC Regent III RT and Leyland Titan RTL from 1975 to the early 1980s, and subsequently Leyland Olympians from Bexleybus from 1991 to 2007. They were followed by open-top Volvo B7R double-deckers from 2006 to 2015, until they were modified and converted back to single-deck. The first bus route to Sentosa, service 123, also used double-decker buses from 30 July 2017. Conventional double-decker buses were reintroduced into Singapore's public bus system on 13 June 1977, when Singapore Bus Service (SBS, present-day SBS Transit) introduced 20 Leyland Atlantean AN68 buses on route 86, launched that day by then-Deputy Prime Minister and Minister of Communications Ong Teng Cheong. With their success, SBS went on to purchase 500 Leyland Atlantean AN68, 200 Mercedes-Benz O305 and 200 Leyland Olympian double-decker buses with the Alexander R-type body by 1986, while also trialling several demonstrators, such as the Volvo B55, Dennis Dominator, Scania BR112DH and Volvo B10MD Citybus, concurrently.
The next batch of double-deck buses was introduced from 1984 to 1986 and consisted of the Mercedes-Benz O305 and the Leyland Olympian, both bodied by Walter Alexander Coachbuilders with an R-type body. In 1993, the first air-conditioned (as opposed to the average length then) double-decker bus, the three-axle Leyland Olympian, was launched as the "Superbus". The original 200 "Superbuses" were followed by an additional 471 three-axle Volvo Olympian "Superbuses" and 100 non-air-conditioned Volvo Olympian buses from 1994 to 2000. The Volvo B10TL, the first stepless, ultra-low-floor "Superbus", was launched in 1999, while 20 low-floor Dennis Trident 3 buses followed in 2001. The first wheelchair-accessible double-decker buses, the Volvo B9TL, were introduced in 2006. SBS Transit's monopoly on double-decker buses in Singapore ended in 2014 when rival SMRT Buses ordered 201 Alexander Dennis Enviro500 buses, followed by 15 more. This was followed up with 16 MAN ND323F A95 buses in 2015, and as of 2023, 542 double-deckers are in service. These are ordered by the LTA, and newer units introduced from 2018 onwards are also Euro VI-compliant and equipped with USB charging ports and Visual Passenger Information Displays. Some of the existing double-deckers owned by SMRT Buses and SBS Transit were transferred to the new operators Tower Transit and Go-Ahead under the Bus Contracting Model. Subsequent orders of B9TL and A95 buses have been made by the Land Transport Authority, are painted in a lush green livery (rather than the operator-specific liveries of SMRT and SBS Transit), and are used by all operators. Double-deckers are prohibited on certain services due to height restrictions, narrow roads and width restrictions. In March 2017, the first three-door, two-staircase 12.8m long double-decker bus in Singapore was introduced by Tower Transit, with the registration plate SG5999Z, for a six-month trial period.
After the trial, the bus was transferred to SBS Transit. The trial was successful, and the Land Transport Authority purchased another 100 three-door double-decker buses of the Alexander Dennis Enviro500 and MAN A95 builds in April 2019. In April 2018, SBS Transit introduced into service a single Volvo B8L, registered as SG4003D. Fully electric Yutong E12DD double-decker buses were introduced in 2020. Outside public bus operations, open-top buses are operated by Big Bus Tours. The New Routemaster from London has also visited Singapore twice, once in 2014 and again in 2016. Philippines Presently, double-decker buses are used by the Mall of Asia Arena (Higer KLQ6119GSE3 B91H-series) and the Subic Bay Metropolitan Authority (King Long XMQ6110GS). Former operators were the Manila Motor Company (Matorco), which introduced such buses to the Philippines, and the Metro Manila Transit Corporation (Leyland Atlantean). The first double-decker bus in decades to serve the riding public in the capital region debuted in January 2016, serving the SM City North EDSA-Ayala Center route. It also sports PWD and elderly seating, a national first. These were technically Premium Point-to-Point buses. In the entirety of the Visayas and Mindanao regions, Pabama Transport, based in Bukidnon province in northern Mindanao, was the first bus line to deploy double-decker buses (Zhongtong LCK6148H Navigator), which started serving the riding public in June 2018. It was also the first in the country to field double-decker buses for provincial operations. Thailand Double-deckers are also commonly found in Thailand nowadays. Previously, Volvo B10Ms with Alexander bodies were in use. Vietnam The first two double-decker buses were used on route 06 in Ho Chi Minh City from 3 December 2005, painted green like many HCMC buses at that time.
There were rumors in May 2019 that these two buses would cease operation, but the head of the HCMC Department of Transportation confirmed that the buses would remain in service. West Asia Iraq Double-decker buses were first used in 1938 and continued until 2003. They became an icon of the city of Baghdad, and were brought back in 2012 with modern buses. Fares are capped at 500 Iraqi dinars, about 45 U.S. cents. The double-decker buses in Baghdad were the first to enter the Middle East and the Arab world. Iran The first double-decker bus in Iran was assembled in 1959. One of the most popular double-decker buses in Iran was the Leyland Atlantean. There are other examples, tourist double-decker buses being one, in metropolitan cities such as Tehran, Mashhad, Tabriz, the island of Kish, and more. Israel In Israel, Egged operated double-decker buses from 1984 to 2013. The first double-decker (built by Neoplan) arrived in Israel on 26 August 1984, and during the trial period it performed 434 test drives on four different lines, covered 195,583 kilometres, and gathered public attention and interest. Double-deckers were found commercially attractive, and in 1988 an agreement between Egged and Neoplan was signed, ordering an initial 20 buses, with an additional 30 buses ordered the following year. The double-deckers started to arrive in 1989, and they worked on the Tel Aviv—Jerusalem, Tel Aviv—Haifa and Tel Aviv—Be'er Sheba lines and on Eilat lines. However, both technical issues (in the connection between the engine and the transmission) and religious issues (Haredi Jews had issues with women sitting above and in front of them) led to a gradual decline in the use of these double-deckers. They were removed from the last major line operating them, the Tel Aviv—Jerusalem line (480), at the end of the 1990s, worked for some time on local lines in the Shfela region, and in the early 2000s were finally removed from service.
Some of the buses were sold to an auto dealer from Jordan, who in turn sold them to Iraq and other countries of the Persian Gulf. In August 2005, three double-deckers returned to service on line 99, the "Scenic route" in Jerusalem. These three buses were assembled from parts of seven double-deckers that were still in storage. The operational cost of the double-deckers gradually grew beyond the point at which repairs remained worthwhile, which led to their final removal from service in 2013. In June 2021, Egged began operating a double-decker bus on route 190 between Rishon Lezion and Tel Aviv. The bus is based on a MAN chassis with a body from the Spanish company UNVI, can transport about 90 passengers (73 seated), and is accessible via a ramp for the disabled. Egged purchased four buses of this model. Kuwait In the 1980s, KPTC took delivery of a number of Leyland Atlanteans. Since deregulation, another of the main bus operators, Kuwait Citybus, reintroduced 56 double-deckers built by King Long in 2017. United Arab Emirates 170 Neoplan double-deckers are in operation in Dubai. North America Canada In 2000, the cities of Victoria and Kelowna, British Columbia, placed an order for 10 Dennis Trident 3 buses imported from the United Kingdom, becoming the first cities in North America to use modern double-decker buses in their public transit systems. Several more orders have been placed since then, and as of 2017 BC Transit operates 69 double-decker buses, including Trident 3s and the newer Alexander Dennis Enviro500s, of which 62 operate on the Victoria Regional Transit System and the remaining 7 with the Kelowna Regional Transit System. In Victoria, the buses are mainly used on routes that go from downtown to the suburbs, and to the Swartz Bay Ferry Terminal near Sidney, B.C. They can also be found on routes that head to the University of Victoria and the Western Communities, and have proven to be very popular amongst both locals and tourists.
TransLink, the transit authority of Metro Vancouver, British Columbia, tested two Enviro500 buses on routes 301, 311, 351, 354, 555, 601 and 620 between November 2017 and March 2018. It was announced soon after that 32 double-deckers would be purchased, arriving in 2019. On 30 October 2019, TransLink's first double-decker bus made its first run along route 620 from Bridgeport station in Richmond to the Tsawwassen Ferry Terminal. From March 2009 to June 2012, three imported Alexander Dennis Enviro500 double-decker buses similar to those in Victoria were used on OC Transpo express routes on the Transitway in Ottawa, Ontario. Delivered in November 2008, these buses proved efficient in reducing costs, but their height prohibited their use on many routes. Consequently, these three buses were withdrawn and sold to Victoria in late 2012 after a new series of 75 Enviro500 buses with a lower height that met MTO regulations entered service earlier that year. As of 2018 OC Transpo has 133 of these buses. GO Transit, a regional transit system serving the Greater Toronto Area in Ontario, is the largest user of double-decker buses in Canada, with over 150 such vehicles in service as of 2017. Its fleet comprises Alexander Dennis Enviro500s in a single-door, commuter-type configuration similar to its fleet of highway coaches. The first 22 entered service between 2008 and 2009, with their roof height limiting usage to the Highway 407 corridor. 105 additional buses were delivered between 2012 and 2015 and feature a lower roof height of . The latest series of 253 buses, currently built in a local factory in Vaughan, has a roof height of and is expected to replace most of the single-decker coach fleet by 2020, at which point 75% of the active fleet is expected to be composed of double-deckers.
Strathcona County Transit of Strathcona County, Alberta, started a pilot project in September 2010 which explored using different high-capacity bus types to carry more passengers on high-demand commuter routes between Strathcona County and Edmonton. This involved a one-year lease of an Alexander Dennis Enviro500 from the manufacturer. After completing a year of testing between September 2010 and October 2011, a firm order of 14 Enviro500s was placed in 2013 for service between Sherwood Park and downtown Edmonton, with the first arriving in late August. Five more buses were ordered in 2016, bringing the fleet of double-deckers to 19 as of 2017; Strathcona County Transit currently has 24 Enviro500s in its fleet. Mexico The Mexico City Metrobús bus rapid transit system started operating a fleet of 90 Alexander Dennis Enviro500s on its new line 7, along the city's boulevard, Paseo de la Reforma, in February 2018. Panama Panama has at least one double-decker bus. United States With the exception of coaches, double-decker buses are uncommon in the United States. Several private operators, such as Megabus, run by Coach USA, employ double-decker buses on busier intercity routes. For publicly run transport, articulated buses are generally preferred. Nonetheless, a handful of municipal operators use double-decker buses, primarily on the West Coast. In Davis, California, Unitrans, the student-run bus company of the University of California, Davis, operates six double-decker buses imported from London. One of these buses has been converted to run on compressed natural gas (CNG). There was also the prototype GX-1 Scenicruiser of Greyhound Lines, which was entered from the first floor; the second floor contained the driver's compartment and more seats. Citizens Area Transit, the transit authority in the Las Vegas area, introduced a fleet of double-deckers to serve the Las Vegas Strip route in October 2005. The route is branded as "The Deuce". As of 2009 it served eight lines.
In Snohomish County, Washington, Community Transit operates 45 Alexander Dennis Enviro500 double-decker buses, which are used on commuter routes between Snohomish County and Seattle. An initial order of 23 buses went into service in 2011, and a second order of 17 went into service in 2015. Sound Transit, another operator in the Seattle area, bought five double-decker buses through a Community Transit order and began operating its own fleet in 2015. In 2016, a joint procurement between three transit agencies in Washington state ordered additional double-decker buses from Alexander Dennis. Community Transit would order 17 buses, with an option for 40; Sound Transit would receive 32, with an option for 43; and Kitsap Transit would buy 11 of its own. As of 1 January 2020, Community Transit owns 52 and Sound Transit owns 37. Community Transit purchased 23 in 2010 (10800-10822), 22 in 2015 (15800-15821), and 8 in 2019 (19850-19857). Sound Transit purchased 5 in 2015 (91501-91505) and another 32 in 2017 (91701-91732). One of the 10800s has been retired, bringing the total for Community Transit from 53 to 52. In San Luis Obispo, California, SLO Transit tested a double-decker bus in late 2008 to see if it would alleviate the overcrowding on Route 4. The borrowed bus was returned, and SLO Transit purchased one double-decker bus of its own using a combination of federal, state and local funding. The bus went into operation on 8 September 2010. In Los Angeles County, California, Foothill Transit uses double-decker battery electric buses as part of its commuter service to the Los Angeles area. Also in Los Angeles, SCRTD used Neoplan AN 122/3 Skyliner double-decker buses from the late 1970s until 1993. New York City phased out double-decker buses in 1960. They briefly returned from 1976 to 1978, although they only ran in Manhattan. In 2008 the Metropolitan Transportation Authority (MTA) briefly ran a Van Hool double-decker bus on several express routes. 
However, that year's financial crisis meant the end of the trial period. In 2018, the MTA tested another double-decker bus, an Alexander Dennis Enviro500 SuperLo, on the X17J express bus route between Manhattan and Staten Island. However, the MTA has no current plans to purchase double-decker buses. In San Francisco, California, the San Francisco Municipal Transportation Agency operated one Alexander Dennis double-decker bus as a demonstrator between 12 December 2007 and 8 January 2008. The bus ran on several high-capacity routes as a trial. In California, AC Transit began experimental use of a double-decker bus on the commuter route between Fremont, California, and Stanford University in 2015. On 3 December 2018, the company introduced double-deckers on its FS and J routes from Berkeley to San Francisco, and later added routes L and LA serving Richmond, El Sobrante, San Pablo and Albany. Oceania Australia Double-decker buses plied route services in Sydney from the 1920s until 1986. Popular makers included AEC, Albion and Leyland. Disputes over one-man operation of double-deckers led to the phasing-out of this configuration. Double-deckers were thereafter limited to charter and tourist services. Double-decker buses were reintroduced to the Sydney area in 2012, with Busways operating regular services from Blacktown to Rouse Hill, in Sydney's north-west. These were expanded in 2013 to traverse routes from Castle Hill and the Northern Beaches to Sydney's CBD. Forest Coach Lines and Hillsbus also purchased some. The B-Line service uses an exclusive fleet of double-decker buses. Double-decker buses were also reintroduced to Melbourne by CDC Melbourne to operate between Werribee, Wyndham Vale and Tarneit railway stations in 2015, using buses made in Melbourne. Double-deck coaches were built by Denning between 1988 and 1992, with AAT Kings, Australian Pacific Touring and Deluxe Coachlines as the main customers. The concept was revived in 2011 by Denning Manufacturing. 
Volgren fitted double-deck coach bodies to Volvo B10Ms in the 1980s for Greyhound. New Zealand Until 2013–14, double-decker buses were used only by tour operators and for long-distance coach services operated by Intercity Coachlines and Manabus. They were not used for public transport on urban routes. In the 1970s a number of former London double-decker buses were imported for museums, such as the Museum of Transport and Technology, which used AEC Regent Low Height (RLH) buses to connect museum sites and for charters. Sydney double-deckers and more London buses of various models (AEC Regent III RT, AEC Routemaster, etc.) were imported by charter and tourist operators and slowly became more commonplace. Bridge heights and shop verandas restricted the use of double-deckers around New Zealand until congestion and high public transport use required some innovative solutions. A single double-decker bus arrived in Auckland in early March 2013 for a trial, with more planned to arrive in 2014 if the trial proved successful. The Scania K320UD bus, operated by Ritchies Transport, began revenue service on 11 March 2013 on the well-patronised Northern Express services on the Northern Busway between Albany and Britomart in downtown Auckland. In addition, NZ Bus and Howick & Eastern investigated the use of double-decker buses on the Dominion Road, Mount Eden Road, and Botany to downtown routes. By May 2016, double-deckers were running on the busway and on many other Auckland urban routes, operated by several companies, with more to be introduced. Four 87-seat DesignLine double-deckers started on Waikato services in 2018. Since 15 July 2018, double-deckers, including some fully electric models, have been operating in Wellington. There are currently 51 diesel double-deckers running and 10 electric ones. 34 of the diesel double-deckers are Euro 6, while the remaining 17 are Euro 5. 
South America Argentina In Argentina, double-decker buses are the second most widely used means of transport for long-distance trips, surpassed only by aeroplanes. Double-decker buses are also used for city tours in Buenos Aires. Bolivia In Bolivia, double-deck buses are a common means of transportation for long-distance trips between large cities such as department capitals. These buses also connect Bolivia with different countries, travelling to Argentina, Brazil, Peru and Chile. The buses are equipped with toilets, and several companies offer buses with large seats called Leito (Bus Cama) that can be reclined into a bed. Brazil Double-decker buses, built mainly by Comil or Marcopolo, are common on interstate and international long-distance services connecting much of South America. VIP double-decker buses also connect cities in the same state, such as São Paulo City, São José do Rio Preto and Ribeirão Preto. On board, these buses offer amenities such as a TV for each passenger and boarding service. Open-top double-deckers are used for city tours (such as in Rio de Janeiro and Bahia). In São Paulo, double-deckers built by Thamco were trialled on urban services in the 1980s, but without success due to issues with the vehicles' height. These buses are built in Brazil and exported to many countries. Ecuador Double-decker buses are used for city tours in Quito, Cuenca and parts of the coast. They are very popular in Quito's historic district. Double-decker buses are common on long-distance interurban trips. Chile The first double-decker intercity buses arrived from Germany at the end of the 1970s to serve national long-distance routes and international services, mainly to Argentina, for the Varmontt, Flota L, Chile Bus and Tas Choapa lines. 
Since the end of the 1990s, they have become the standard for interurban transport, due to the advantageous cost-benefit ratio for transport companies. The 2000s and 2010s were the heyday of this means of transport, and practically all Chilean intercity bus companies now operate these buses, no longer of European origin but with bodies built in Brazil, Peru and Argentina, and some of Chinese origin. The safety and limited comfort of these vehicles have repeatedly been a subject of discussion; however, the increase in passenger capacity makes them quite profitable for interurban companies. On 9 March 2017, the British Embassy in Chile reached an agreement with the Ministry of Public Transport in Santiago to test double-decker buses for public transport. Starting that day, the first bus, an Alexander Dennis Enviro500, was tested on the streets of Santiago for six weeks. On 26 February 2019, it was announced that another double-decker bus, a Wrightbus StreetDeck, would enter service on a route in central Santiago; the first bus was tested on Santiago roads that day. On 17 August 2023, a first fleet of 10 BYD electric double-decker buses for public transport arrived to serve the streets of Santiago de Chile. They are the first electric double-deckers to operate in the Americas, and in October 2023 they were to enter service in the public transport network of the Chilean capital. Peru Double-decker buses are common on long-distance interurban trips to the main cities of the country. Open-top double-deckers are used on city tours in downtown Lima and in the tourist district of Miraflores. Pio Delgado Arguedas bought 300 Greyhound buses and was the distributor of the buses in South America and Mexico. He also created TEPSA, and owned it for years until he sold the company. 
Uruguay Since the 1990s, Uruguayan long-distance bus service operators have operated Brazilian double-deckers. Two AEC Routemasters were imported in the 1970s for urban tourism services – one is now held by the transport heritage group ERHITRAN. In the 2010s, Montevideo had an urban tourism circuit using open-roof Argentinian double-deckers (suspended with the COVID-19 lockdowns). Triple-decker buses There have been attempts to build a triple-decker bus. However, such vehicles are problematic in that the high centre of gravity leads to instability, and there is the risk of hitting trees or bridges. In almost all models the third level was a small compartment in the rear part of the bus, such as a triple-decker capable of carrying eighty-eight people from Rome to Tivoli in 1932 or the General American Aerocoach 3 Decker Bus of 1952. The only three-decker with a full-length third level ever built is the Knight Bus that John Richardson created for Harry Potter and the Prisoner of Azkaban by combining two AEC Regent III RTs. Although this vehicle was merely a film prop, a British-built, ECW-bodied Bristol VR double-decker bus imitation of the Knight Bus was a functioning bus, which even went on tour. Some buses in Pakistan have a partial bottom level that uses a separate cabin from the upper two levels. Comparison with articulated buses Operators worldwide must often decide between articulated and double-decker buses on popular routes. Articulated buses, entirely on one level, offer more room for disabled passengers, luggage and pushchairs; they may also be needed on routes going under low bridges or over weak bridges that cannot take high axle loads. Double-decker buses, however, have a smaller road footprint and as such disrupt traffic, or block turning lanes, less than articulated buses. 
Double-decker buses may be more popular with passengers because of the better view, and with cyclists, who may be at less risk than they are with the unpredictable swing of an articulated bus's tail. Articulated buses normally offer more standing room, while double-decker buses may sometimes (not always) offer more seats. Articulated buses have less dwell time because of the extra doors, and double-decker buses offer fewer chances for fare dodgers since there are fewer or no unmanned doors. Collisions with bridges There have been a significant number of incidents in which a double-decker bus has collided with a low bridge, often a railway bridge. This is often caused by the driver making a wrong turn, driving a route they are unfamiliar with, or being used to driving single-decker buses and forgetting to allow for their vehicle's extra height when driving a double-decker. A collision with a railroad bridge by a Megabus in September 2010 at Syracuse, New York, killed four passengers and injured 17. In recent years in the United Kingdom, six people suffered minor injuries after their bus hit a railway bridge at Stockport in July 2013. An empty bus had its roof removed after hitting a railway bridge in Birkenhead in December 2014. In March 2015, a bus carrying 76 children hit a bridge at Staines-upon-Thames. Eleven passengers were taken to hospital but none were seriously injured. In the same month, an empty bus had its roof removed after hitting a railway bridge in Isleworth, West London. A Stagecoach Highlands bus collided with a railway bridge at Balloch, Highland, Scotland in April 2015. There were no casualties, though one top-deck passenger narrowly escaped injury by throwing himself to the floor. A bus operated by Bluestar had its roof removed after colliding with a railway bridge at Romsey in May 2015. An incident in July 2015 in Norwood, London also resulted in the removal of the bus's roof; seven people were injured. 
Similar incidents occurred in September 2015 in Rochdale, Greater Manchester (seventeen were injured) and in Bournemouth in April 2016, with all thirty passengers escaping without injury. On 11 September 2020, a bus carrying 72 children hit a bridge, taking the complete roof off, in Winchester, Hampshire, on the way to school. Three children were seriously injured and required surgery, whilst a further 12 suffered minor injuries. The bus was operated by Stagecoach South. In May 2023, ten people were taken to hospital after a bus crashed into a railway bridge on Cook Street, Glasgow. The same bridge was involved in another crash in December 2024. Again, the roof of a bus was torn off, this time after the driver reportedly took a wrong turn into the one-way street. Numerous injuries were caused and eight people went to hospital. Days later, in a separate incident, another double-decker bus hit another railway bridge in Scotland, in Kilmarnock. In popular culture In the film Summer Holiday, Cliff Richard and friends drive a double-decker bus fitted out as a caravan across Europe. In The Mummy Returns, Rick O'Connell (Brendan Fraser), Evelyn O'Connell (Rachel Weisz), Ardeth Bay (Oded Fehr), Jonathan Carnahan (John Hannah) and Alex O'Connell (Freddie Boath) are chased by mummies whilst they ride on an AEC Regent III RT in London. In Live and Let Die, Roger Moore, as James Bond, drives one while being chased. The chase involving the double-decker bus was filmed with a former London bus adapted by having the top sliced off, then put back in place running on ball bearings to allow it to slide away from the undercarriage on impact with a low bridge. The stunts involving the bus were performed by Maurice Patchett, a London Transport bus driving instructor. A double-deck bus also featured at the end of the final episode of the sitcom The Young Ones. The British sitcom On the Buses featured double-deckers, driven by Stan Butler (portrayed by Reg Varney). 
In Harry Potter and the Prisoner of Azkaban, the aforementioned Knight Bus is a triple-decker bus which can fit under bridges due to magic. During the 2012 Summer Olympics, Czech artist David Černý presented his moving sculpture named London Booster, a full-sized "London double-decker bus" (actually an ex-Southern Vectis vehicle from the Isle of Wight) permanently doing push-ups with hydraulic-powered human-like arms. This was an accompanying installation outside the temporary Czech Olympic House in the London Borough of Islington. A double-decker bus also starred in the 1970s Saturday-morning TV series Here Come the Double Deckers. A double-decker bus was featured in the 2009 Doctor Who episode "Planet of the Dead", in which it transported its passengers through a wormhole to the alien planet of San Helios. The British television series Thomas & Friends features an anthropomorphic AEC Bridgemaster named Bulgy, a character infamous for his dislike of railways. He deems roads superior to rail traffic, and often tells lies or sabotages the railways to make the roads flourish. He always gets his comeuppance in the end, though he refuses to give up his beliefs. 
List of double-decker buses (including coaches and trolleybuses)
Adiputro Jetbus Super Double Decker, AEC 661T, AEC 663T, AEC 664T, AEC 691T, AEC 761T, AEC Bridgemaster, AEC K-type, AEC Q-type, AEC Regent I, AEC Regent II, AEC Regent III, AEC Regent III RT, AEC Regent IV, AEC Regent V, AEC Routemaster, Alexander ALX400, Alexander ALX500, Alexander Dennis Enviro400, Alexander Dennis Enviro400 City, Alexander Dennis Enviro400EV, Alexander Dennis Enviro400 MMC, Alexander Dennis Enviro500, Alexander Dennis Enviro500EV, Alexander Dennis Enviro500 MMC, Ashok Leyland Titan, Ayats Bravo, BCI CitiRider, BMMO D9, BMMO D10, Bristol K, Bristol Lodekka, Bristol VR, Beulas Jewel, Bombardier DD, Büssing D2U, Büssing D3, Büssing D38, Büssing DE 72 LVG, Bustech CDi, Daimler/Leyland Fleetline, Dennis Dragon, Dennis Trident 2, Dennis Trident 3, Do 54, Do 56, Duple Metsec DM5000, Gemilang Coachworks Double Decker, Hyundai Elec City Double Decker, Laksana Legacy SR eXtra Double Decker, Leyland Atlantean, Leyland-DAB Lion, Leyland Olympian, Leyland Titan, Leyland Titan (B15), MAN Lion's City A95 DD, MAN Lion's City A39 DD, Mávaut-Ikarus 556, MCV DD102, MCV DD103, MCV EvoSeti, MCW Metrobus, MCW Metroliner, MCW/Scania Metropolitan, Marcopolo S.A. Paradiso 1800 G7, Mitsubishi Fuso Aero King, Morodadi Prima Patriot Three Pointed Star Double Decker, Neoplan Centroliner Double Decker, Neoplan Jumbocruiser, Neoplan Megaliner, Neoplan Skyliner, New Routemaster, New Armada Evolander, New Armada Highlander, New Armada Skylander, Nusantara Gemilang Maxi Miracle Double Decker, Nusantara Gemilang Conqueror Double Decker, Opel Blitz Aero Strassenzepp Doppeldecker, Optare MetroDecker, Rahayu Santosa Jetliner Double Decker, Scania Citywide LFDD, Scania OmniCity DD, Scania OmniDekka, Scania KUD, Scania NUD, Setra S 228 DT, Setra S 328 DT, Setra S 431 DT, Stallion Bus Double Decker, Switch Mobility EiV 22, Thaco TB120SS, Thaco TB138SS, Tentrem Avante D2, Van Hool Astromega TD8/TD9/TDX series, VDL Citea DLF-114, VDL Futura FDD2, Volvo Ailsa B55, Volvo Citybus, Volvo B5TL, Volvo B5LH, Volvo B7TL, Volvo B8L, Volvo B8RLE, Volvo B9TL, Volvo Mexico, Volvo Olympian, Volvo Super Olympian, Wright SRM, Wright StreetDeck, Yutong City Master
Antenna array
An antenna array (or array antenna) is a set of multiple connected antennas which work together as a single antenna, to transmit or receive radio waves. The individual antennas (called elements) are usually connected to a single receiver or transmitter by feedlines that feed the power to the elements in a specific phase relationship. The radio waves radiated by each individual antenna combine and superpose, adding together (interfering constructively) to enhance the power radiated in desired directions, and cancelling (interfering destructively) to reduce the power radiated in other directions. Similarly, when used for receiving, the separate radio frequency currents from the individual antennas combine in the receiver with the correct phase relationship to enhance signals received from the desired directions and cancel signals from undesired directions. More sophisticated array antennas may have multiple transmitter or receiver modules, each connected to a separate antenna element or group of elements. An antenna array can achieve higher gain (directivity), that is, a narrower beam of radio waves, than could be achieved by a single element. In general, the larger the number of individual antenna elements used, the higher the gain and the narrower the beam. Some antenna arrays (such as military phased array radars) are composed of thousands of individual antennas. Arrays can be used to achieve higher gain, to give path diversity (also called MIMO), which increases communication reliability, to cancel interference from specific directions, to steer the radio beam electronically to point in different directions, and for radio direction finding (RDF). The term antenna array most commonly means a driven array consisting of multiple identical driven elements all connected to the receiver or transmitter. A parasitic array consists of a single driven element connected to the feedline, and other elements which are not, called parasitic elements. 
It is usually another name for a Yagi–Uda antenna. A phased array usually means an electronically scanned array; a driven array antenna in which each individual element is connected to the transmitter or receiver through a phase shifter controlled by a computer. The beam of radio waves can be steered electronically to point instantly in any direction over a wide angle, without moving the antennas. However the term "phased array" is sometimes used to mean an ordinary array antenna. Principle From the Rayleigh criterion, the directivity of an antenna, the angular width of the beam of radio waves it emits, is proportional to the wavelength of the radio waves divided by the width of the antenna. Small antennas around one wavelength in size, such as quarter-wave monopoles and half-wave dipoles, don't have much directivity (gain); they are omnidirectional antennas which radiate radio waves over a wide angle. To create a directional antenna (high gain antenna), which radiates radio waves in a narrow beam, two general techniques can be used: One technique is to use reflection by large metal surfaces such as parabolic reflectors or horns, or refraction by dielectric lenses to change the direction of the radio waves, to focus the radio waves from a single low gain antenna into a beam. This type is called an aperture antenna. A parabolic dish is an example of this type of antenna. A second technique is to use multiple antennas which are fed from the same transmitter or receiver; this is called an array antenna, or antenna array. For a transmitting antenna the electromagnetic wave received at any point is the vector sum of the electromagnetic waves from each of the antenna elements. If the currents are fed to the antennas with the proper phase, due to the phenomenon of interference the spherical waves from the individual antennas combine (superpose) in front of the array to create plane waves, a beam of radio waves traveling in a specific direction. 
In directions in which the waves from the individual antennas arrive in phase, the waves add together (constructive interference) to enhance the power radiated. In directions in which the individual waves arrive out of phase, with the peak of one wave coinciding with the valley of another, the waves cancel (destructive interference) reducing the power radiated in that direction. Similarly, when receiving, the oscillating currents received by the separate antennas from radio waves received from desired directions are in phase and when combined in the receiver reinforce each other, while currents from radio waves received from other directions are out of phase and when combined in the receiver cancel each other. The radiation pattern of such an antenna consists of a strong beam in one direction, the main lobe, plus a series of weaker beams at different angles called sidelobes, usually representing residual radiation in unwanted directions. The larger the width of the antenna and the greater the number of component antenna elements, the narrower the main lobe, and the higher the gain which can be achieved, and the smaller the sidelobes will be. Arrays in which the antenna elements are fed in phase are broadside arrays; the main lobe is emitted perpendicular to the plane of the elements. The largest array antennas are radio interferometers used in the field of radio astronomy, in which multiple radio telescopes consisting of large parabolic antennas are linked together into an antenna array, to achieve higher resolution. Using the technique called aperture synthesis such an array can have the resolution of an antenna with a diameter equal to the distance between the antennas. In the technique called Very Long Baseline Interferometry (VLBI) dishes on separate continents have been linked, creating "array antennas" thousands of miles in size. 
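The interference behaviour described above can be illustrated numerically. The following Python sketch (a hypothetical illustration, not taken from the source; element counts, the half-wave spacing, and the 30-degree steering angle are assumed values) computes the normalised array factor of a uniform linear array of isotropic elements. It shows that an in-phase feed produces a peak at broadside, that adding elements narrows the main lobe, and that progressive feed phases steer the beam electronically.

```python
import numpy as np

def array_factor(n_elements, spacing_wl, theta, phases=None):
    """Normalised array-factor magnitude of a uniform linear array.

    n_elements -- number of identical isotropic elements
    spacing_wl -- element spacing in wavelengths
    theta      -- angle(s) from broadside, in radians
    phases     -- per-element feed phases in radians (default: all in phase)
    """
    theta = np.asarray(theta, dtype=float)
    n = np.arange(n_elements)
    if phases is None:
        phases = np.zeros(n_elements)            # broadside (in-phase) feed
    # Path-length phase of element n toward angle theta, plus its feed phase.
    psi = 2 * np.pi * spacing_wl * np.sin(theta)
    field = np.exp(1j * (np.outer(psi, n) + phases)).sum(axis=1)
    return np.abs(field) / n_elements

angles = np.linspace(-np.pi / 2, np.pi / 2, 1801)
af4 = array_factor(4, 0.5, angles)    # 4 elements, half-wave spacing
af16 = array_factor(16, 0.5, angles)  # 16 elements: higher gain

# Both peak at broadside (theta = 0, grid index 900):
print(af4[900], af16[900])                     # both ~1.0
# More elements -> narrower main lobe (fewer angles above half power):
print((af4 > 0.5).sum() > (af16 > 0.5).sum())  # True

# Phased-array steering: progressive phases move the peak to 30 degrees
# without moving the antennas.
steer = np.deg2rad(30)
shift = -2 * np.pi * 0.5 * np.arange(16) * np.sin(steer)
af_steered = array_factor(16, 0.5, angles, phases=shift)
print(round(float(np.rad2deg(angles[af_steered.argmax()]))))  # 30
```

The same function covers both cases because a phased array differs from a fixed broadside array only in the feed phases applied to the elements.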
Types Most array antennas can be divided into two classes based on how the component antennas' axis relates to the radiation direction. A broadside array is a one- or two-dimensional array in which the direction of radiation (main lobe) of the radio waves is perpendicular to the plane of the antennas. To radiate perpendicularly, the antennas must be fed in phase. An endfire array is a linear array in which the direction of radiation is along the line of the antennas. The antennas must be fed with a progressive phase difference equal to the wavenumber times the separation of adjacent antennas. There are also arrays (such as phased arrays) which don't belong to either of these categories, in which the direction of radiation is at some other angle to the antenna axis. Array antennas can also be categorized by how the element antennas are arranged: Driven array – This is an array in which the individual component antennas are all "driven" – connected to the transmitter or receiver. The individual antennas, which are usually identical, often consist of single driven elements, such as half-wave dipoles, but may also be composite antennas such as Yagi antennas or turnstile antennas. Collinear array – a broadside array consisting of multiple identical dipole antennas oriented vertically in a line. This is a high-gain omnidirectional antenna, often used in the VHF band as broadcasting antennas for television stations and base station antennas for land mobile two-way radios. Superturnstile or Batwing array – specialized vertical antenna used for television broadcasting, consisting of multiple crossed-dipole antennas mounted collinearly on a mast. High-gain omnidirectional radiation pattern with wide bandwidth. Planar array – a flat two-dimensional array of antennas. 
Since an array of omnidirectional antennas radiates two beams 180° apart broadside from both sides of the antenna, it is usually either mounted in front of a flat reflector, or is composed of directive antennas such as Yagi or helical antennas, to give a unidirectional beam. Reflective array – a planar array of antennas, often half-wave dipoles fed in phase, in front of a flat reflector such as a metal plate or wire screen. This radiates a single beam of radio waves perpendicular (broadside) to the array. Used as UHF television antennas and radar antennas. Curtain array – an outdoor wire shortwave transmitting antenna consisting of a planar array of wire dipoles suspended in front of a vertical reflector made of a "curtain" of parallel wires. Used on HF band as long distance transmitting antenna for shortwave broadcasting stations. May be steered as phased array. Microstrip antenna – an array of patch antennas fabricated on a printed circuit board with copper foil on the reverse side functioning as a reflector. The elements are fed through striplines made of copper foil. Used as UHF and satellite television receiving antennas. Phased array or electronically scanned array – A planar array in which the beam can be steered electronically to point in any direction over a wide angle in front of the array, without physically moving the antenna. The current from the transmitter is fed to each component antenna through a phase shifter, controlled by a computer. By changing the relative phase of the feed currents, the beam can instantly be pointed in different directions. Widely used in military radars, this technique is rapidly spreading to civilian applications. Passive Electronically Scanned Array (PESA) – A phased array as described above, in which the antenna elements are fed from a single transmitter or receiver through phase shifters. 
Active Electronically Scanned Array (AESA) – A phased array in which each antenna element has its own transmitter and/or receiver module, controlled by a central computer. This second generation phased array technology can radiate multiple beams at multiple frequencies simultaneously, and is mostly used in sophisticated military radars. Conformal array – a two-dimensional phased array which is not flat, but conforms to some curved surface. The individual elements are driven by phase shifters which compensate for the varying path lengths, allowing the antenna to radiate a plane wave beam. Conformal antennas are often integrated into the curving skin of aircraft and missiles, to reduce aerodynamic drag. Smart antenna, reconfigurable antenna or adaptive array – a receiving array that estimates the direction of arrival of the radio waves and electronically optimizes the radiation pattern adaptively to receive it, synthesizing a main lobe in that direction. Like a phased array it consists of multiple identical elements with phase shifters in the feed lines, controlled by a computer. Log-periodic dipole array (LPDA) – an endfire array consisting of many dipole driven elements in a line, with gradually increasing length. It acts as a high gain broadband antenna. Used as television reception antennas and for shortwave communication. Parasitic array – This is an endfire array which consists of multiple antenna elements in a line, of which only one, the driven element, is connected to the transmitter or receiver, while the other elements, called parasitic elements, are not. The parasitic elements function as resonators, absorbing radio waves from the driven element and reradiating them with a different phase, to modify the radiation pattern of the antenna, increasing the power radiated in the desired direction. Since these have only one driven element they are often called "antennas" instead of "arrays". 
Yagi–Uda antenna or Yagi antenna – this endfire array consists of multiple half-wave dipole elements in a line. It consists of a single driven element with multiple "director" parasitic elements in the direction of radiation, and usually a single "reflector" parasitic element behind it. They are widely used on the HF, VHF, and UHF bands as television antennas, shortwave communication antennas, and in radar arrays. Quad antenna – This consists of multiple loop antennas in a line, with one driven loop and the others parasitic. Functions similarly to the Yagi antenna. Periodic Arrays Let us consider a linear array whose elements are arranged along the x-axis of an orthogonal Cartesian reference system. It is assumed that the radiators have the same orientation and the same polarization of the electric field. Based on this, the array factor can be written as

AF(\psi) = \sum_{n=1}^{N} a_n e^{j k x_n \psi}

where N is the number of antenna elements, k = 2\pi/\lambda is the wavenumber, and a_n and x_n (in meters) are the complex excitation coefficient and the position of the n-th radiator, respectively, and \psi = \sin\theta \cos\phi, with \theta and \phi being the zenith angle and azimuth angle, respectively. If the spacing between adjacent elements is constant, then it can be written that x_n = nd, and the array is said to be periodic. The array is periodic both spatially (physically) and in the variable \psi. For example, if d = \lambda/2, with \lambda being the wavelength, then the magnitude of the array factor has a period, in the domain of \psi, equal to 2. It is worth emphasising that \psi is an auxiliary variable. In fact, from a physical point of view, the values of \psi that are of interest for radiative purposes fall in the interval [-1, 1], which is associated with the values of \theta and \phi. In this case, the interval [-1, 1] is called the visible space. As shown further, if the definition of the variable \psi changes, the extent of the visible space also changes accordingly. Now, suppose that the excitation coefficients a_n are positive real variables. 
In this case, always in the domain of $u$, the array factor magnitude has a main lobe with maximum value at $u = 0$, called the mainlobe, several secondary lobes lower than the mainlobe, called sidelobes, and mainlobe replicas called grating lobes. Grating lobes are a source of disadvantages in both transmission and reception. In fact, in transmission, they can lead to radiation in unwanted directions, while, in reception, they can be a source of ambiguity since the desired signal entering the mainlobe region could be strongly disturbed by other signals (unwanted interfering signals) entering the regions of the various grating lobes. Therefore, in periodic arrays, to prevent the appearance of grating lobes in the visible space, the spacing between adjacent radiators must not exceed a specific value. For example, as seen previously, the first grating lobes for $d = \lambda/2$ occur at $u = \pm 2$. So, in this case, there are no problems since, in this way, the grating lobes are outside the interval $[-1, 1]$. Aperiodic Arrays As seen above, when the spacing between adjacent radiators is constant, the array factor is characterized by the presence of grating lobes. In the literature, it has been amply demonstrated that, to destroy the array factor's periodicity, the array's geometry itself must be made aperiodic. It is possible to act on the positions of the radiators so that these positions are not commensurable with each other. Several methods have been developed to synthesize arrays in which the positions also represent further degrees of freedom (unknowns). There are both deterministic and probabilistic methodologies. Since the probabilistic theory of aperiodic arrays is a sufficiently systematised theory, with a strong general methodological basis, let us first concentrate on describing its peculiarities. 
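The spacing condition can also be demonstrated numerically. In this sketch (my own illustration; the uniform excitation and the two spacings d = λ/2 and d = λ are assumed for concreteness), we count how many samples of |AF(u)| inside the visible space attain the maximum value N: with half-wavelength spacing only the mainlobe at u = 0 does, while with full-wavelength spacing the grating lobes at u = ±1 also enter the visible space.

```python
import numpy as np

N = 16
k = 2 * np.pi                   # wavenumber for wavelength = 1
u = np.linspace(-1, 1, 4001)    # visible space; includes u = -1, 0, +1 exactly

def peaks_in_visible_space(d):
    """Number of samples of |AF(u)|, u in [-1, 1], that reach the maximum N."""
    n = np.arange(N)
    af = np.exp(1j * k * d * np.outer(u, n)).sum(axis=1)  # uniform excitation a_n = 1
    return int(np.isclose(np.abs(af), N, rtol=0, atol=1e-6).sum())

print(peaks_in_visible_space(0.5))  # d = lambda/2: mainlobe only -> 1
print(peaks_in_visible_space(1.0))  # d = lambda: mainlobe plus grating lobes at u = ±1 -> 3
```

The replicas appear at u = 2πm/(kd) for integer m, so halving the spacing pushes them from u = ±1 out to u = ±2, beyond the visible space.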
Suppose that the radiator positions $x_n$ are independent and identically distributed random variables whose support coincides with the whole array aperture. Consequently, the array factor is a stochastic process, whose mean is $E[AF(u)] = \sum_{n=1}^{N} a_n E[e^{j k x_n u}] = \left(\sum_{n=1}^{N} a_n\right) \int f(x)\, e^{j k x u}\, dx$, where $f(x)$ is the common probability density function of the positions. Design of antenna arrays In an antenna array providing a fixed radiation pattern, we may consider that the feed network is a part of the antenna array. Thus, the antenna array has a single port. Narrow beams can be formed, provided the phasing of each element of the array is appropriate. If, in addition, the amplitude of the excitation received by each element (during emission) is also well chosen, it is possible to synthesize a single-port array having a radiation pattern that closely approximates a specified pattern. Many methods have been developed for array pattern synthesis. Additional issues to be considered are matching, radiation efficiency and bandwidth. The design of an electronically steerable antenna array is different, because the phasing of each element can be varied, and possibly also the relative amplitude for each element. Here, the antenna array has multiple ports, so that the subject matters of matching and efficiency are more involved than in the single-port case. Moreover, matching and efficiency depend on the excitation, except when the interactions between the antennas can be ignored. An antenna array used for spatial diversity and/or spatial multiplexing (which are different types of MIMO radio communication) always has multiple ports. It is intended to receive independent excitations during emission, and to deliver more or less independent signals during reception. Here also, the subject matters of matching and efficiency are involved, especially in the case of an antenna array of a mobile device (see chapter 10 of ), since, in this case, the surroundings of the antenna array influence its behavior, and vary over time. 
Suitable matching metrics and efficiency metrics take into account the worst possible excitations.
https://en.wikipedia.org/wiki/Halobates
Halobates
Halobates or sea skaters are a genus with over 40 species of water striders. Most Halobates species are coastal and typically found in sheltered marine habitats (a habitat where a few other genera of water striders also live), but five live on the surface of the open ocean and only occur near the coast when storms blow them ashore. These are the only known truly oceanic, offshore insects. They are found in tropical and subtropical marine habitats around the world, with a single species recorded in rivers a few kilometers upstream from the ocean. Halobates are generally very common. They were first collected by Johann Friedrich von Eschscholtz, a doctor who was part of a Russian expedition aboard the Rurik between 1815 and 1818. A fossil species H. ruffoi is known from 45 million year old deposits in Verona, Italy. Close relatives of the genus include Austrobates and Asclepios. Appearance They are small insects with a body that is up to long and broad, and a leg span up to at least . They lack wings, have long antennae, short front legs used for catching prey (and, in the male, for holding the female during mating), long middle legs used for propulsion, and somewhat shorter rear legs used for steering. The nymphs resemble miniature versions of the adult. The sexes are quite similar, except that males are thinner than females and have the rear part of the body modified into genitalia, and when gravid the females may have a notably plump abdomen. The various species closely resemble each other in general appearance. Range and abundance Halobates are found in tropical and subtropical marine habitats around the world. They generally prefer temperatures of , are infrequent below and only exceptionally recorded in waters less than . The coastal species are largely restricted to the Indo-Pacific region, with the exception of H. robustus from the Galápagos Islands. 
Some of these coastal species have very small ranges, often restricted to a single archipelago, while others are more widespread. They primarily occur near mangrove or other marine plants. A single species, H. acherontis, has been recorded in rivers a few kilometers upstream from the ocean. The absence of coastal species in the Atlantic region may in part be explained by Trochopus. That genus of veliid water striders inhabits coastal mangrove areas in the Atlantic region, the same niche inhabited by coastal Halobates in the Indian and Pacific oceans. The five offshore, pelagic species are H. micans, H. germanus, H. sericeus, H. splendens and H. sobrinus, of which the last four are found in the Indian and/or Pacific Oceans. H. micans has a circumglobal range, occurring offshore in warmer seas around the world from about 40° north to 40° south, and it is the only one found in the Atlantic Ocean, including the Caribbean. Their occurrences are generally patchy, but where found they can be very common. During scientific surveys with relatively fast-moving surface nets, they are caught in more than 60% of the tows (fewer in slow-moving tows, likely because of their ability to avoid them). Studies show that densities locally can be as high as 1 individual per in the oceanic species, and 120 individuals per m² (11 per sq ft) in breeding aggregations of the coastal species. Behavior and predators They are predators, with coastal species feeding mainly on land-living insects that have fallen into the water. Less is known about the feeding of the oceanic species, but they appear to mostly eat zooplankton, with other recorded items being floating insects, fish eggs and larvae, and dead jellyfish. Small prey is caught and eaten by a single Halobates, but larger prey such as small fish may be eaten by three or four Halobates at once. Adults may cannibalize their own nymphs, and old nymphs cannibalize young nymphs, but generally they do not eat their own age class. 
Some species prefer struggling prey over immobile prey, but in other species, it is the other way around. The feeding behavior of the newly-hatched nymphs is unknown, as aquarium-kept individuals refused to eat the various organisms that older captive nymphs and adults will eat (for example, dead fruit flies). This has resulted in speculations that the newly-hatched nymphs might feed on organic-rich surface film. Halobates may catch aquatic prey just below the surface with their front legs, but do not dive. They are very fast and can reach speeds of per second. The coastal species lay their eggs close to the water surface on rocks, plants, and other structures near the shore, while the oceanic species attach their egg masses to floating objects such as cuttlebone and feathers. Each female lays 1–20 whitish or translucent eggs that each measure about long and half that wide. They may hatch just above or just below the surface. In recent decades the oceanic species have been documented laying their eggs on floating plastic waste, which potentially may disrupt the marine food chain, as the Halobates (now with access to more surfaces for breeding) may become far more common than usual. In one extreme case, a plastic gallon jug was found to be covered by 15 layers of eggs, equalling about 70,000 in total. Some species of storm petrel actively feed on Halobates, sometimes splashing the water with their feet to attract or detect sea skaters. Other seabirds (especially noddies) and a range of surface-feeding fish will also eat them. Open research questions Apart from understanding how exactly Halobates came to be the only insect genus to live on the open ocean – in spite of insects making up the majority of all animal species – these insects offer unique research questions that could have applications in materials science. 
For example, it is still unknown how they can move on the water surface without slipping, yet their legs are capable of effortlessly detaching from the surface in order to jump. Incapable of diving or hiding, Halobates must protect themselves from ultraviolet radiation. Although it is known that the cuticle of Halobates sericeus filters more than 99.9998 percent of the UV radiation at the 280 nm wavelength, the chemical properties that confer this protection are still unknown.
https://en.wikipedia.org/wiki/Applied%20mathematics
Applied mathematics
Applied mathematics is the application of mathematical methods by different fields such as physics, engineering, medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics where abstract concepts are studied for their own sake. The activity of applied mathematics is thus intimately connected with research in pure mathematics. History Historically, applied mathematics consisted principally of applied analysis, most notably differential equations; approximation theory (broadly construed, to include representations, asymptotic methods, variational methods, and numerical analysis); and applied probability. These areas of mathematics related directly to the development of Newtonian physics, and in fact, the distinction between mathematicians and physicists was not sharply drawn before the mid-19th century. This history left a pedagogical legacy in the United States: until the early 20th century, subjects such as classical mechanics were often taught in applied mathematics departments at American universities rather than in physics departments, and fluid mechanics may still be taught in applied mathematics departments. Engineering and computer science departments have traditionally made use of applied mathematics. As time passed, Applied Mathematics grew alongside the advancement of science and technology. With the advent of modern times, the application of mathematics in fields such as science, economics, technology, and more became deeper and more timely. 
The development of computers and other technologies enabled a more detailed study and application of mathematical concepts in various fields. Today, Applied Mathematics continues to be crucial for societal and technological advancement. It guides the development of new technologies, economic progress, and addresses challenges in various scientific fields and industries. The history of Applied Mathematics continually demonstrates the importance of mathematics in human progress. Divisions Today, the term "applied mathematics" is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptography), though they are not generally considered to be part of the field of applied mathematics per se. There is no consensus as to what the various branches of applied mathematics are. Such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Many mathematicians distinguish between "applied mathematics", which is concerned with mathematical methods, and the "applications of mathematics" within science and engineering. A biologist using a population model and applying known mathematics would not be doing applied mathematics, but rather using it; however, mathematical biologists have posed problems that have stimulated the growth of pure mathematics. Mathematicians such as Poincaré and Arnold deny the existence of "applied mathematics" and claim that there are only "applications of mathematics." Similarly, non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called "industrial mathematics". 
The success of modern numerical mathematical methods and software has led to the emergence of computational mathematics, computational science, and computational engineering, which use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary. Applicable mathematics Sometimes, the term applicable mathematics is used to distinguish between the traditional applied mathematics that developed alongside physics and the many areas of mathematics that are applicable to real-world problems today, although there is no consensus as to a precise definition. Mathematicians often distinguish between "applied mathematics" on the one hand, and the "applications of mathematics" or "applicable mathematics" both within and outside of science and engineering, on the other. Some mathematicians emphasize the term applicable mathematics to separate or delineate the traditional applied areas from new applications arising from fields that were previously seen as pure mathematics. For example, from this viewpoint, an ecologist or geographer using population models and applying known mathematics would not be doing applied, but rather applicable, mathematics. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptography), though they are not generally considered to be part of the field of applied mathematics per se. Such descriptions can lead to applicable mathematics being seen as a collection of mathematical methods such as real analysis, linear algebra, mathematical modelling, optimisation, combinatorics, probability and statistics, which are useful in areas outside traditional mathematics and not specific to mathematical physics. Other authors prefer describing applicable mathematics as a union of "new" mathematical applications with the traditional fields of applied mathematics. 
With this outlook, the terms applied mathematics and applicable mathematics are thus interchangeable. Utility Historically, mathematics was most important in the natural sciences and engineering. However, since World War II, fields outside the physical sciences have spawned the creation of new areas of mathematics, such as game theory and social choice theory, which grew out of economic considerations. Further, the utilization and development of mathematical methods expanded into other areas leading to the creation of new fields such as mathematical finance and data science. The advent of the computer has enabled new applications: studying and using the new computer technology itself (computer science) to study problems arising in other areas of science (computational science) as well as the mathematics of computation (for example, theoretical computer science, computer algebra, numerical analysis). Statistics is probably the most widespread mathematical science used in the social sciences. Status in academic departments Academic institutions are not consistent in the way they group and label courses, programs, and degrees in applied mathematics. At some schools, there is a single mathematics department, whereas others have separate departments for Applied Mathematics and (Pure) Mathematics. It is very common for Statistics departments to be separated at schools with graduate programs, but many undergraduate-only institutions include statistics under the mathematics department. Many applied mathematics programs (as opposed to departments) consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside mathematics, while others require substantial coursework in a specific area of application. In some respects this difference reflects the distinction between "application of mathematics" and "applied mathematics". 
Some universities in the U.K. host departments of Applied Mathematics and Theoretical Physics, but it is now much less common to have separate departments of pure and applied mathematics. A notable exception to this is the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, housing the Lucasian Professor of Mathematics whose past holders include Isaac Newton, Charles Babbage, James Lighthill, Paul Dirac, and Stephen Hawking. Schools with separate applied mathematics departments range from Brown University, which has a large Division of Applied Mathematics that offers degrees through the doctorate, to Santa Clara University, which offers only the M.S. in applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT. Students in such programs also learn another skill (computer science, engineering, physics, pure math, etc.) to supplement their applied math skills. Associated mathematical sciences Applied mathematics is associated with the following mathematical sciences: Engineering and technological engineering With applications of applied geometry together with applied chemistry. Scientific computing Scientific computing includes applied mathematics (especially numerical analysis), computing science (especially high-performance computing), and mathematical modelling in a scientific discipline. Computer science Computer science relies on logic, algebra, discrete mathematics such as graph theory, and combinatorics. Operations research and management science Operations research and management science are often taught in faculties of engineering, business, and public policy. Statistics Applied mathematics has substantial overlap with the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions. 
Statistical theory relies on probability and decision theory, and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorial design. Applied mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities). Actuarial science Actuarial science applies probability, statistics, and economic theory to assess risk in insurance, finance and other industries and professions. Mathematical economics Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. The applied methods usually refer to nontrivial mathematical techniques or approaches. Mathematical economics is based on statistics, probability, mathematical programming (as well as other computational methods), operations research, game theory, and some methods from mathematical analysis. In this regard, it resembles (but is distinct from) financial mathematics, another part of applied mathematics. According to the Mathematics Subject Classification (MSC), mathematical economics falls into the Applied mathematics/other classification of category 91: Game theory, economics, social and behavioral sciences with MSC2010 classifications for 'Game theory' at codes 91Axx and for 'Mathematical economics' at codes 91Bxx. Other disciplines The line between applied mathematics and specific areas of application is often blurred. Many universities teach mathematical and statistical courses outside the respective departments, in departments and areas including business, engineering, physics, chemistry, psychology, biology, computer science, scientific computation, information theory, and mathematical physics. Applied Mathematics Societies The Society for Industrial and Applied Mathematics is an international applied mathematics organization. As of 2024, the society has 14,000 individual members. 
The American Mathematical Society has its Applied Mathematics Group.
https://en.wikipedia.org/wiki/Paddy%20field
Paddy field
A paddy field is a flooded field of arable land used for growing semiaquatic crops, most notably rice and taro. It originates from the Neolithic rice-farming cultures of the Yangtze River basin in southern China, associated with pre-Austronesian and Hmong-Mien cultures. It was spread in prehistoric times by the expansion of Austronesian peoples to Island Southeast Asia, Madagascar, Melanesia, Micronesia, and Polynesia. The technology was also acquired by other cultures in mainland Asia for rice farming, spreading to East Asia, Mainland Southeast Asia, and South Asia. Fields can be built into steep hillsides as terraces or adjacent to depressed or steeply sloped features such as rivers or marshes. They require a great deal of labor and materials to create and need large quantities of water for irrigation. Oxen and water buffalo, adapted for life in wetlands, are important working animals used extensively in paddy field farming. Paddy field farming remains the dominant form of growing rice in modern times. It is practiced extensively in Bangladesh, Cambodia, China, India, Indonesia, northern Iran, Japan, Laos, Malaysia, Mongolia, Myanmar, Nepal, North Korea, Pakistan, the Philippines, South Korea, Sri Lanka, Taiwan, Thailand, and Vietnam. It has also been introduced elsewhere since the colonial era, notably in northern Italy, the Camargue in France, and in Spain, particularly in the Albufera de València wetlands in the Valencian Community, the Ebro Delta in Catalonia and the Guadalquivir wetlands in Andalusia, as well as along the eastern coast of Brazil, the Artibonite Valley in Haiti, Sacramento Valley in California, and West Lothian in Scotland among other places. Paddy cultivation should not be confused with cultivation of deepwater rice, which is grown in flooded conditions with water more than 50 cm (20 in) deep for at least a month. Global paddies' emissions account for at least 10% of global methane emissions. 
Drip irrigation systems have been proposed as a possible environmental and commercial solution. Etymology The word "paddy" is derived from the Malay/Indonesian word padi, meaning "rice plant", which is itself derived from Proto-Austronesian *pajay ("rice in the field", "rice plant"). Cognates include Amis panay; Tagalog pálay; Kadazan Dusun paai; Javanese pari; and Chamorro fai, among others. History Neolithic southern China Genetic evidence shows that all forms of paddy rice, including both indica and japonica, spring from a domestication of the wild rice Oryza rufipogon by cultures associated with pre-Austronesian and Hmong-Mien-speakers. This occurred 13,500 to 8,200 years ago south of the Yangtze River in present-day China. There are two likely centers of domestication for rice as well as the development of the wet-field technology. The first is in the lower Yangtze River, believed to be the homelands of the pre-Austronesians and possibly also the Kra-Dai, and associated with the Kuahuqiao, Hemudu, Majiabang, Songze, Liangzhu, and Maquiao cultures. The second is in the middle Yangtze River, believed to be the homelands of the early Hmong-Mien speakers and associated with the Pengtoushan, Nanmuyuan, Liulinxi, Daxi, Qujialing, and Shijiahe cultures. Both of these regions were heavily populated and had regular trade contacts with each other, as well as with early Austroasiatic speakers to the west, and early Kra-Dai speakers to the south, facilitating the spread of rice cultivation throughout southern China. The earliest paddy field found dates to 4330 BC, based on carbon dating of grains of rice and soil organic matter found at the Chaodun site in Kunshan. At Caoxieshan, a site of the Neolithic Majiabang culture, archaeologists excavated paddy fields. Some archaeologists claim that Caoxieshan may date to 4000–3000 BC. 
There is archaeological evidence that unhusked rice was stored for the military and for burial with the deceased from the Neolithic period to the Han dynasty in China. By the late Neolithic (3500 to 2500 BC), population in the rice cultivating centers had increased rapidly, centered around the Qujialing-Shijiahe and Liangzhu cultures. There was also evidence of intensive rice cultivation in paddy fields as well as increasingly sophisticated material cultures in these two regions. The number of settlements among the Yangtze cultures and their sizes increased, leading some archeologists to characterize them as true states, with clearly advanced socio-political structures. However, it is unknown if they had centralized control. In the terminal Neolithic (2500 to 2000 BC), Shijiahe shrank in size, and Liangzhu disappeared altogether. This is largely believed to be the result of the southward expansion of the early Sino-Tibetan Longshan culture. Fortifications like walls (as well as extensive moats in Liangzhu cities) are common features in settlements during this period, indicating widespread conflict. This period also coincides with the southward movement of rice-farming cultures to the Lingnan and Fujian regions, as well as the southward migrations of the Austronesian, Kra-Dai, and Austroasiatic-speaking peoples to Mainland Southeast Asia and Island Southeast Asia. Austronesian expansion The spread of japonica rice cultivation and paddy field agriculture to Southeast Asia started with the migrations of the Austronesian Dapenkeng culture into Taiwan between 3500 and 2000 BC. The Nanguanli site in Taiwan, dated to ca. 2800 BC, has yielded numerous carbonized remains of both rice and millet in waterlogged conditions, indicating intensive wetland rice cultivation and dryland millet cultivation. 
From about 2000 to 1500 BC, the Austronesian expansion began, with settlers from Taiwan moving south to Luzon in the Philippines, bringing rice cultivation technologies with them. From Luzon, Austronesians rapidly colonized the rest of Maritime Southeast Asia, moving westwards to Borneo, the Malay Peninsula and Sumatra; and southwards to Sulawesi and Java. By 500 BC, there is evidence of intensive wetland rice agriculture already established in Java and Bali, especially near very fertile volcanic islands. Rice did not survive the Austronesian voyages into Micronesia and Polynesia; however, wet-field agriculture was transferred to the cultivation of other crops, most notably taro. The Austronesian Lapita culture also came into contact with the non-Austronesian (Papuan) early agriculturists of New Guinea and introduced wetland farming techniques to them. In turn, they assimilated their range of indigenous cultivated fruits and tubers before spreading further eastward to Island Melanesia and Polynesia. In Hawaii, the conditions of available taro pondfields (loʻi) as worked by native Hawaiians later proved feasible for rice cultivation by Chinese and Japanese migrant farmers in the late 19th to early 20th century; rice plots were often enlarged by dismantling bunds (kuāuna) that bordered smaller established loʻi. Rice and wet-field agriculture were also introduced to Madagascar, the Comoros, and the coast of East Africa around the 1st millennium AD by Austronesian settlers from the Greater Sunda Islands. 
A pit-house at the Daecheon-ni site yielded carbonized rice grains and radiocarbon dates, indicating that rice cultivation in dry-fields may have begun as early as the Middle Jeulmun pottery period (c. 3500–2000 BC) in the Korean Peninsula. Ancient paddy fields have been carefully unearthed in Korea by institutes such as Kyungnam University Museum (KUM) of Masan. They excavated paddy field features at the Geumcheon-ni Site near Miryang, South Gyeongsang Province. The paddy field feature was found next to a pit-house that is dated to the latter part of the Early Mumun pottery period (c. 1100–850 BC). KUM has conducted excavations that have revealed similarly dated paddy field features at Yaeum-dong and Okhyeon, in modern-day Ulsan. The earliest Mumun features were usually located in low-lying narrow gullies that were naturally swampy and fed by the local stream system. Some Mumun paddy fields in flat areas were made of a series of squares and rectangles, separated by bunds approximately 10 cm in height, while terraced paddy fields consisted of long irregular shapes that followed natural contours of the land at various levels. Mumun Period rice farmers used all of the elements that are present in today's paddy fields, such as terracing, bunds, canals, and small reservoirs. We can grasp some paddy-field farming techniques of the Middle Mumun (c. 850–550 BC) from the well-preserved wooden tools excavated from archaeological rice fields at the Majeon-ni Site. However, iron tools for paddy-field farming were not introduced until sometime after 200 BC. The spatial scale of paddy-fields increased with the regular use of iron tools in the Three Kingdoms of Korea Period (c. AD 300/400-668). Japan The first paddy fields in Japan date to the Early Yayoi period (300 BC – 250 AD). 
The Early Yayoi has been re-dated, and based on studies of early Japanese paddy formations in Kyushu it appears that wet-field rice agriculture in Japan was directly adopted from the Lower Yangtze river basin in Eastern China. Culture China Although China's agricultural output is the largest in the world, only about 15% of its total land area can be cultivated. About 75% of the cultivated area is used for food crops. Rice is China's most important crop, raised on about 25% of the cultivated area. Most rice is grown south of the Huai River, in the Yangtze valley, the Zhu Jiang delta, and in Yunnan, Guizhou, and Sichuan provinces. Rice appears to have been used by the Early Neolithic populations of Lijiacun and Yunchanyan in China. Evidence of possible rice cultivation from ca. 11,500 BC has been found; however, it is still questioned whether the rice was indeed being cultivated, or instead being gathered as wild rice. Bruce Smith, an archaeologist at the Smithsonian Institution in Washington, D.C., who has written on the origins of agriculture, says that evidence has been mounting that the Yangtze was probably the site of the earliest rice cultivation. In 1998, Crawford & Shen reported that the earliest of 14 AMS or radiocarbon dates on rice from at least nine Early to Middle Neolithic sites is no older than 7000 BC, that rice from the Hemudu and Luojiajiao sites indicates that rice domestication likely began before 5000 BC, but that most sites in China from which rice remains have been recovered are younger than 5000 BC. During the Spring and Autumn period (722–481 BC), two revolutionary improvements in farming technology took place. One was the use of cast iron tools and beasts of burden to pull plows, and the other was the large-scale harnessing of rivers and development of water conservation projects. 
Sunshu Ao of the 6th century BC and Ximen Bao of the 5th century BC are two of the earliest hydraulic engineers from China, and their works were focused upon improving irrigation systems. These developments spread widely during the ensuing Warring States period (403–221 BC), culminating in the enormous Du Jiang Yan Irrigation System engineered by Li Bing by 256 BC for the State of Qin in ancient Sichuan. During the Eastern Jin (317–420) and the Northern and Southern Dynasties (420–589), land use became more intensive and efficient: rice was grown twice a year, and cattle began to be used for plowing and fertilization. By about 750, 75% of China's population lived north of the Yangtze, but by 1250, 75% of China's population lived south of it. Such large-scale internal migration was possible due to the introduction of quick-ripening strains of rice from Vietnam suitable for multi-cropping. Famous rice paddies in China include the Longsheng Rice Terraces and the fields of Yuanyang County, Yunnan. India India has the largest paddy output in the world and, as of 2020, is also the world's largest exporter of rice. In India, West Bengal is the largest rice-producing state. Paddy fields are a common sight throughout India, both in the northern Gangetic Plains and the southern peninsular plateaus. Paddy is cultivated at least twice a year in most parts of India, the two seasons being known as Rabi and Kharif respectively. The former cultivation is dependent on irrigation, while the latter depends on the monsoon. Paddy cultivation plays a major role in the socio-cultural life of rural India. Many regional festivals celebrate the harvest, such as Onam, Bihu, Thai Pongal, Makar Sankranti, and Nabanna. The Kaveri delta region of Thanjavur is historically known as the rice bowl of Tamil Nadu, and Kuttanadu is called the rice bowl of Kerala. Gangavathi is known as the rice bowl of Karnataka. 
Indonesia Prime Javanese paddies yield roughly 6 metric tons of unmilled rice (2.5 metric tons of milled rice) per hectare. When irrigation is available, rice farmers typically plant Green Revolution rice varieties allowing three growing seasons per year. Since fertilizer and pesticide are relatively expensive inputs, farmers typically plant seeds in a very small plot. Three weeks following germination, the 15–20-centimetre (6–8 in) stalks are picked and replanted at greater separation, in a backbreaking manual procedure. Rice harvesting in Central Java is often performed not by owners or sharecroppers of paddies, but rather by itinerant middlemen, whose small firms specialize in the harvest, transport, milling, and distribution of rice. The fertile volcanic soil of much of the Indonesian archipelago—particularly the islands of Java and Bali—has made rice a central dietary staple. Steep terrain on Bali resulted in complex irrigation systems, locally called subak, to manage water storage and drainage for rice terraces. Italy Rice is grown in Northern Italy, especially in the valley of the Po River. The paddy fields are irrigated by fast-flowing streams descending from the Alps. In the 19th century and much of the 20th century, the paddy fields were farmed by the mondine, a subculture of seasonal rice paddy workers composed mostly of poor women. Japan The acidic soil conditions common in Japan due to volcanic eruptions have made the paddy field the most productive farming method. Paddy fields are represented by the kanji 田 (commonly read as ta or as den), which has had a strong influence on Japanese culture. In fact, the character 田, which originally meant 'field' in general, is used in Japan exclusively to refer to paddy fields. One of the oldest samples of writing in Japan is widely credited to the kanji found on pottery at the archaeological site of Matsutaka in Mie Prefecture that dates to the late 2nd century. 
Ta (田) is used as a part of many place names as well as in many family names. Most of these places are somehow related to the paddy field and, in many cases, are based on the history of a particular location. For example, where a river runs through a village, the place east of the river may be called Higashida (東田), literally "east paddy field." A place with a newly irrigated paddy field, especially one made during or after the Edo period, may be called Nitta or Shinden (both 新田), "new paddy field." In some places, lakes and marshes were likened to a paddy field and were named with ta, like Hakkōda (八甲田). Today, many family names have ta as a component, a practice which can be largely attributed to a government edict in the early Meiji Period which required all citizens to have a family name. Many chose a name based on some geographical feature associated with their residence or occupation, and as nearly three-fourths of the population were farmers, many made family names using ta. Some common examples are Tanaka (田中), literally meaning "in the paddy field;" Nakata (中田), "middle paddy field;" Kawada (川田), "river paddy field;" and Furuta (古田), "old paddy field." In recent years, rice consumption in Japan has fallen, and many rice farmers are increasingly elderly. The government has subsidized rice production since the 1970s and favors protectionist policies regarding cheaper imported rice. Korea Arable land in the small alluvial flats of most rural river valleys in South Korea is dedicated to paddy-field farming. Farmers assess paddy fields for any necessary repairs in February. Fields may be rebuilt, and bund breaches are repaired. This work is carried out until mid-March, when warmer spring weather allows the farmer to buy or grow rice seedlings. They are transplanted (usually by rice transplanter) from indoors into freshly flooded paddy fields in May. 
Farmers tend and weed their paddy fields through the summer until around the time of Chuseok, a traditional holiday held on the 15th day of the eighth month of the lunar calendar (around mid-September in the solar calendar). The harvest begins in October. Coordinating the harvest can be challenging because many Korean farmers have small paddy fields in a number of locations around their villages, and modern harvesting machines are sometimes shared between extended family members. Farmers usually dry the harvested grains in the sun before bringing them to market. The Hanja character for 'field', jeon (田), is found in some place names, especially small farming townships and villages. However, the specific Korean term for 'paddy' is a purely Korean word, "non" (논). Madagascar In Madagascar, the average annual consumption of rice is 130 kg per person, one of the highest in the world. According to a 1999 study by UPDRS/FAO, the majority of rice is grown under irrigation (1,054,381 ha), and the choice of cultivation method, which conditions yield, is determined by the variety and by the degree of water control. Tavy is the traditional cultivation of upland rice on burned clearings of natural rain forest (135,966 ha). Criticized as a cause of deforestation, tavy is still widely practiced by farmers in Madagascar, who find in it a good compromise between climate risks, availability of labour, and food security. By extension, cultivation on the tanety, which literally means "hill," also produces upland rice, grown on grassy slopes that have been deforested for the production of charcoal (139,337 ha). Among its many varieties, the rice of Madagascar includes: Vary lava, a translucent, long, large-grained rice considered a luxury rice; Vary Makalioka, a translucent, long, thin-grained rice; Vary Rojofotsy, a half-long grain rice; and Vary mena, or red rice, exclusive to Madagascar. 
Malaysia Paddy fields can be found in most states on the Malaysian Peninsula, with most of the fields located in the northern states of Kedah, Perlis, Perak, and Penang. Paddy fields can also be found on Malaysia's east coast, in Kelantan and Terengganu. The central state of Selangor also has its fair share of paddy fields, especially in the districts of Kuala Selangor and Sabak Bernam. Before Malaysia became heavily reliant on its industrial output, people were mainly involved in agriculture, especially the production of rice. It was for that reason that people usually built their houses next to paddy fields. The very spicy chili pepper that is often eaten in Malaysia, the bird's eye chili, is locally called cili padi, literally "paddy chili". Some research on rainfed lowland rice in Sarawak has also been reported. Myanmar Rice is grown in Myanmar primarily in three areas – the Irrawaddy Delta, the area along the Kaladan River and its delta, and the central plains around Mandalay – though there has been an increase in rice farming in Shan State and Kachin State in recent years. Up until the late 1960s, Myanmar was the world's main exporter of rice. Once termed the rice basket of Southeast Asia, Myanmar grows much of its rice without fertilizers and pesticides; thus, although "organic" in a sense, its production has been unable to keep pace with population growth and with rice economies that use fertilizers. Rice is now grown in all three seasons of Myanmar, though primarily in the monsoon season, from June to October. Rice grown in the delta areas relies heavily on river water and sediment minerals from the northern mountains, whilst rice grown in the central regions requires irrigation from the Irrawaddy River. The fields are tilled when the first rains arrive – traditionally measured at 40 days after Thingyan, the Burmese New Year – around the beginning of June. 
In modern times, tractors are used, but traditionally, buffaloes were employed. The rice plants are planted in nurseries and then transplanted by hand into the prepared fields. The rice is then harvested in late November – "when the rice bends with age". Most of the rice planting and harvesting is done by hand. The rice is then threshed and stored, ready for the mills. Nepal In Nepal, rice (Nepali: धान, Dhaan) is grown in the Terai and hilly regions, mainly during the summer monsoon. Philippines Paddy fields are a common sight in the Philippines. Several vast paddy fields exist in Ifugao, Nueva Ecija, Isabela, Cagayan, Bulacan, Quezon, and other provinces. Nueva Ecija is considered the main rice-growing province of the Philippines. The Banaue Rice Terraces are an example of paddy fields in the country. They are located in Banaue in Northern Luzon, Philippines, and were built by the Ifugaos 2,000 years ago. Streams and springs found in the mountains were tapped and channeled into irrigation canals that run downhill through the rice terraces. Other notable Philippine paddy fields are the Batad Rice Terraces, the Bangaan Rice Terraces, the Mayoyao Rice Terraces and the Hapao Rice Terraces. Located at Barangay Batad in Banaue, the Batad Rice Terraces are shaped like an amphitheatre, and can be reached by a 12-kilometer ride from Banaue Hotel and a 2-hour hike uphill through mountain trails. The Bangaan Rice Terraces portray the typical Ifugao community, where the livelihood activities are within the village and its surroundings. The Bangaan Rice Terraces are accessible by a one-hour ride from Poblacion, Banaue, then a 20-minute trek down to the village. They can best be viewed from the road to Mayoyao. The Mayoyao Rice Terraces are located at Mayoyao, 44 kilometers away from Poblacion, Banaue. The town of Mayoyao lies in the midst of these rice terraces. All dikes are tiered with flat stones. 
The Hapao Rice Terraces are within 55 kilometers of the capital town of Lagawe. Other Ifugao stone-walled rice terraces are located in the municipality of Hungduan. Sri Lanka Sri Lankan paddy cultivation dates back more than 2,000 years. Historical reports say that Sri Lanka was regarded as the "paddy store of the east" because it produced rice in abundance. Paddy cultivation can be found all over the island, and a considerable amount of land is allocated to it. Paddy is cultivated in wetlands in both the up country and the low country. The majority of paddy land is in the dry zone, where special irrigation systems are used for cultivation. Water-storage tanks called "wewa" supply water to paddy lands during the cultivation period. Agriculture in Sri Lanka depends mainly on rice production. Sri Lanka sometimes exports rice to its neighboring countries. Around 1.5 million hectares of land were cultivated for paddy in Sri Lanka in the 2008/2009 maha season: 64% of it cultivated during the dry season and 35% during the wet season. Around 879,000 farming families are engaged in paddy cultivation in Sri Lanka; they make up 20% of the country's population and 32% of its employment. Thailand Rice production in Thailand represents a significant portion of the Thai economy, using over half of the farmable land area and labor force in Thailand. Thailand has a strong tradition of rice production. It has the fifth-largest amount of land used for rice cultivation in the world and is the world's largest exporter of rice. Thailand has plans to further increase the land available for rice production, with a goal of adding 500,000 hectares to the 9.2 million hectares already cultivated for rice. The Thai Ministry of Agriculture expected rice production to yield around 30 million tons of rice for 2008. 
The most produced strain of rice in Thailand is jasmine rice, which has a significantly lower yield than other types of rice but normally fetches more than double the price of other strains on the global market. Vietnam Rice fields in Vietnam (ruộng or cánh đồng in Vietnamese) are the predominant land use in the valley of the Red River and the Mekong Delta. In the Red River Delta of northern Vietnam, control of seasonal river flooding is achieved by an extensive network of dykes which, built up over the centuries, totals some 3,000 km. In the Mekong Delta of southern Vietnam, an interlacing system of drainage and irrigation canals has become the symbol of the area. The canals additionally serve as transportation routes, allowing farmers to bring their produce to market. In northwestern Vietnam, Thai people built their "valley culture" on the cultivation of glutinous rice planted in upland fields, requiring terracing of the slopes. The primary festival related to the agrarian cycle is "lễ hạ điền" (literally "descent into the fields"), held at the start of the planting season in hope of a bountiful harvest. Traditionally, the event was officiated with much pomp: the monarch carried out the ritual plowing of the first furrow, while local dignitaries and farmers followed suit. Thổ địa (deities of the earth), thành hoàng làng (the village patron spirit), Thần Nông (the god of agriculture), and thần lúa (the god of rice plants) were all venerated with prayers and offerings. In colloquial Vietnamese, wealth is frequently associated with the vastness of an individual's land holdings: a common metaphor describes paddy fields so large that "storks fly with their wings outstretched" ("đồng lúa thẳng cánh cò bay"). In literary Vietnamese, wind-blown rice plants undulating across a paddy field are figuratively termed "waves of rice" ("sóng lúa"). 
Ecology Paddy fields are a major source of atmospheric methane, which contributes to global warming; they have been estimated to contribute in the range of 50 to 100 million tonnes of the gas per annum. Studies have shown that these emissions can be significantly reduced, while also boosting crop yield, by draining the paddies to allow the soil to aerate, which interrupts methane production. Studies have also shown variability in methane-emission estimates depending on whether local, regional, or global emission factors are used, and have called for better inventories based on micro-level data. Rice paddies are responsible for 10% of global methane emissions, roughly equal to the emissions of the aviation industry. Drip irrigation systems developed by Netafim and N-Drip have been introduced in several countries and, according to the Times of Israel, can reduce emissions by up to 85%.
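To put the quoted emission range (50 to 100 million tonnes of methane per year) in more familiar climate-accounting terms, it can be converted to CO2-equivalent. The sketch below assumes a 100-year global warming potential (GWP100) of about 28 for methane, an IPCC AR5 figure that is not given in the text above.

```python
# Rough CO2-equivalent of paddy-field methane emissions.
# Assumption (not from the article): GWP100 of CH4 ~ 28 (IPCC AR5).
GWP100_CH4 = 28  # tonnes CO2e per tonne CH4 over a 100-year horizon

def ch4_to_co2e(mt_ch4: float, gwp: float = GWP100_CH4) -> float:
    """Convert megatonnes of CH4 to megatonnes of CO2-equivalent."""
    return mt_ch4 * gwp

low, high = ch4_to_co2e(50), ch4_to_co2e(100)
print(f"{low:.0f}-{high:.0f} Mt CO2e per year")  # 1400-2800 Mt CO2e per year
```

Under these assumptions the paddy-field range corresponds to roughly 1,400–2,800 Mt CO2e per year; a different GWP horizon (e.g. GWP20) would give substantially larger figures.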
https://en.wikipedia.org/wiki/Microbiological%20culture
Microbiological culture
A microbiological culture, or microbial culture, is a method of multiplying microbial organisms by letting them reproduce in a predetermined culture medium under controlled laboratory conditions. Microbial cultures are a foundational diagnostic method and a basic research tool in molecular biology. The term culture can also refer to the microorganisms being grown. Microbial cultures are used to determine the type of organism, its abundance in the sample being tested, or both. It is one of the primary diagnostic methods of microbiology and is used to determine the cause of infectious disease by letting the agent multiply in a predetermined medium. For example, a throat culture is taken by scraping the lining of tissue in the back of the throat and blotting the sample onto a medium to screen for harmful microorganisms, such as Streptococcus pyogenes, the causative agent of strep throat. Furthermore, the term culture is more generally used informally to refer to "selectively growing" a specific kind of microorganism in the lab. It is often essential to isolate a pure culture of microorganisms. A pure (or axenic) culture is a population of cells or multicellular organisms growing in the absence of other species or types. A pure culture may originate from a single cell or single organism, in which case the cells are genetic clones of one another. To gel the culture medium, agar is commonly used; agar is a gelatinous substance derived from seaweed. A cheap substitute for agar is guar gum, which can be used for the isolation and maintenance of thermophiles. History The first culture medium was a liquid medium designed by Louis Pasteur in 1860. This was used in the laboratory until Robert Koch's development of solid media in 1881. Koch's flat plate for solid media was replaced by Julius Richard Petri's round dish in 1887. 
Since these foundational inventions, a diverse array of media and methods have evolved to help scientists grow, identify, and purify cultures of microorganisms. Types of microbial cultures Prokaryotic culture The culturing of prokaryotes typically involves bacteria, since archaea are difficult to culture in a laboratory setting. To obtain a pure prokaryotic culture, one must start the culture from a single cell or a single colony of the organism. Since a prokaryotic colony is the asexual offspring of a single cell, all of the cells are genetically identical and will result in a pure culture. Viral culture Virus and phage cultures require host cells in which the virus or phage multiply. For bacteriophages, cultures are grown by infecting bacterial cells. The phage can then be isolated from the resulting plaques in a lawn of bacteria on a plate. Viral cultures are obtained from their appropriate eukaryotic host cells. The streak plate method is a way to physically separate the microbial population, and is done by spreading the inoculum back and forth with an inoculating loop over the solid agar plate. Upon incubation, colonies will arise and single cells will have been isolated from the biomass. Once a microorganism has been isolated in pure culture, it is necessary to preserve it in a viable state for further study and use, in cultures called stock cultures. These cultures have to be maintained such that they do not lose their biological, immunological, and cultural characteristics. Eukaryotic cell culture Eukaryotic cell cultures provide a controlled environment for studying eukaryotic organisms. Single-celled eukaryotes, such as yeast, algae, and protozoans, can be cultured in similar ways to prokaryotic cultures. The same is true for multicellular microscopic eukaryotes, such as C. elegans. Although macroscopic eukaryotic organisms are too large to culture in a laboratory, cells taken from these organisms can be cultured. 
This allows researchers to study specific parts and processes of a macroscopic eukaryote in vitro. Culture methods Liquid cultures One method of microbiological culture is liquid culture, in which the desired organisms are suspended in a liquid nutrient medium, such as Luria broth, in an upright flask. This allows a scientist to grow large quantities of bacteria or other microorganisms for a variety of downstream applications. Liquid cultures are ideal for preparation of an antimicrobial assay, in which the liquid broth is inoculated with bacteria and left to grow overnight (a 'shaker' may be used to mechanically mix the broth, to encourage uniform growth). Subsequently, aliquots of the sample are taken to test for the antimicrobial activity of a specific drug or protein (antimicrobial peptides). Static liquid cultures may be used as an alternative. These cultures are not shaken, and they provide the microbes with an oxygen gradient. Agar plates Microbiological cultures can be grown in petri dishes of differing sizes that have a thin layer of agar-based growth medium. Once the growth medium in the petri dish is inoculated with the desired bacteria, the plates are incubated at the optimal temperature for the growth of the selected bacteria (for example, usually at 37 degrees Celsius, or human body temperature, for cultures from humans or animals, or lower for environmental cultures). After the desired level of growth is achieved, agar plates can be stored upside down in a refrigerator for an extended period of time to keep bacteria for future experiments. There are a variety of additives that can be added to agar before it is poured into a plate and allowed to solidify. Some types of bacteria can only grow in the presence of certain additives. This can also be used when creating engineered strains of bacteria that contain an antibiotic-resistance gene. 
When the selected antibiotic is added to the agar, only bacterial cells containing the gene insert conferring resistance will be able to grow. This allows the researcher to select only the colonies that were successfully transformed. Agar-based dipsticks Miniaturized versions of agar plates implemented in dipstick formats (e.g. Dip Slide, Digital Dipstick) show potential for use in point-of-care diagnosis. They have advantages over agar plates: they are cost-effective, and their operation requires neither expertise nor a laboratory environment. Selective and differential media Selective and differential media reveal characteristics about the microorganisms being cultured on them. This kind of media can be selective, differential, or both selective and differential. Growing a culture on multiple kinds of selective and differential media can purify mixed cultures and reveal to scientists the characteristics needed to identify unknown cultures. Selective media Selective media are used to distinguish organisms, allowing a specific kind of organism to grow while inhibiting the growth of others. For example, eosin methylene blue (EMB) may be used to select against Gram-positive bacteria, most of which have hindered growth on EMB, and select for Gram-negative bacteria, whose growth is not inhibited on EMB. Differential media Scientists use differential media when culturing microorganisms to reveal certain biochemical characteristics about the organisms. These revealed traits can then be compared to attributes of known microorganisms in an effort to identify unknown cultures. An example of this is MacConkey agar (MAC), which reveals lactose-fermenting bacteria through a pH indicator that changes color when acids are produced from fermentation. 
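Colony counts on agar plates like these also underpin a standard back-calculation of viable cell density in the original sample. The sketch below shows the usual dilution-plating arithmetic; the dilution factor, plated volume, and colony count are illustrative numbers, not values from the text.

```python
# Estimating viable cell density (CFU/mL) from a serial-dilution plate count.
# All numbers below are made up for illustration.

def cfu_per_ml(colonies: int, dilution_factor: float, volume_plated_ml: float) -> float:
    """CFU/mL of the original culture = colonies / (dilution factor x volume plated)."""
    return colonies / (dilution_factor * volume_plated_ml)

# 42 colonies on a plate spread with 0.1 mL of a 1e-6 dilution:
print(cfu_per_ml(42, 1e-6, 0.1))  # 4.2e8 CFU/mL in the original culture
```

In practice only plates with a countable number of colonies (conventionally on the order of 30–300) are used for this estimate, since crowded or sparse plates give unreliable counts.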
Multitarget panels On multitarget panels, bacteria isolated from a previously grown colony are distributed into each well, each of which contains growth medium as well as the ingredients for a biochemical test, which will change the absorbance of the well depending on the bacterial property for the tested target. The panel is incubated in a machine, which subsequently analyses each well with a light-based method such as colorimetry, turbidimetry, or fluorometry. The combined results are automatically compared to a database of known results for various bacterial species, in order to generate a diagnosis of which bacterial species is present in the current panel. Simultaneously, the machine performs antibiotic susceptibility testing. Stab cultures Stab cultures are similar to agar plates, but are formed by solid agar in a test tube. Bacteria are introduced via an inoculation needle or a pipette tip stabbed into the center of the agar. Bacteria grow in the punctured area. Stab cultures are most commonly used for short-term storage or shipment of cultures. Additionally, stab cultures can reveal characteristics about cultured microorganisms such as motility or oxygen requirements. Solid plate culture of thermophilic microorganisms For solid plate cultures of thermophilic microorganisms such as Bacillus acidocaldarius, Bacillus stearothermophilus, Thermus aquaticus, and Thermus thermophilus, growing at temperatures of 50 to 70 degrees C, low-acyl clarified gellan gum has proven to be the preferred gelling agent compared to agar for the counting and/or isolation of these thermophilic bacteria. Cell Culture Collections Microbial culture collections focus on the acquisition, authentication, production, preservation, cataloguing, and distribution of viable cultures of standard reference microorganisms, cell lines, and other materials for research in microbial systematics. Culture collections are also repositories of type strains.
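The database-comparison step used by multitarget panels can be sketched as a nearest-profile lookup: score each reference species by how many wells agree with the observed results and report the best match. The species names and test profiles below are invented purely for illustration, not real biochemical data.

```python
# Sketch of matching an observed biochemical test profile to a reference
# database, as in automated multitarget panels. All profiles are invented.

REFERENCE = {
    "Species A": (1, 0, 1, 1),  # hypothetical positive/negative results for four wells
    "Species B": (0, 0, 1, 0),
    "Species C": (1, 1, 0, 1),
}

def best_match(observed):
    """Return the reference species whose profile agrees with the most wells."""
    def agreement(profile):
        return sum(o == p for o, p in zip(observed, profile))
    return max(REFERENCE, key=lambda sp: agreement(REFERENCE[sp]))

print(best_match((1, 0, 1, 1)))  # Species A
```

Real systems weight tests by reliability and report a confidence score alongside the identification rather than a single hard match, but the core idea is this profile comparison.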
https://en.wikipedia.org/wiki/Anopheles
Anopheles
Anopheles is a genus of mosquito first described by the German entomologist J. W. Meigen in 1818; its members are known as nail mosquitoes and marsh mosquitoes. Many such mosquitoes are vectors of the parasite Plasmodium, a genus of protozoans that cause malaria in birds, reptiles, and mammals, including humans. The Anopheles gambiae mosquito is the best-known species of marsh mosquito that transmits Plasmodium falciparum, a malarial parasite deadly to human beings; no other mosquito genus is a vector of human malaria. The genus Anopheles diverged from other mosquitoes approximately 100 million years ago, and, like other mosquitoes, the eggs, larvae, and pupae are aquatic. The Anopheles larva has no respiratory siphon through which to breathe, so it breathes and feeds with its body horizontal to the surface of the water. The adult mosquito emerges at the surface and feeds on the nectar of flowers; the female mosquito also feeds on blood, a diet which allows it to carry and transmit parasites between hosts. The adult's feeding position is head-down, unlike the horizontal stance of the culicines. Anopheles are distributed almost worldwide, throughout the tropics, the subtropics, and the temperate regions. In hot weather, adult Anopheles aestivate, entering a state of dormancy that enables the mosquito to survive in hot, dry regions such as the Sahel. Evolution Fossil history Fossils of the genus Anopheles are rare; only two had been found by 2015. They are Anopheles (Nyssorhynchus) dominicanus Zavortink & Poinar, in Dominican Republic amber from the Late Eocene, and Anopheles rottensis Statz, in German amber from the Late Oligocene. Phylogeny The ancestors of all flies, including mosquitoes, appeared first; the culicine and Anopheles clades of mosquitoes diverged later, the Old and New World Anopheles species diverged subsequently, and Anopheles darlingi diverged from the African and Asian malaria vectors most recently. 
The cladogram is based on an analysis of mosquito genomes by Neafsey and colleagues in 2015: Taxonomy The genus name Anopheles was introduced by the German entomologist Johann Wilhelm Meigen in 1818. He described two species, A. bifurcatus and the type species, Anopheles maculipennis. He stated that the name meant "burdensome". The name comes from the Ancient Greek ἀνωφελής (anōphelḗs) 'useless', derived from ἀν- (an-) 'not', 'un-' and ὄφελος (óphelos) 'profit'. The taxonomy of the genus was greatly advanced in 1901 when the English entomologist Frederick Vincent Theobald described 39 Anopheles species in his 5-volume monograph on the Culicidae. He was provided with mosquito specimens sent in to the British Museum (Natural History) from around the world, on the 1898 instruction of the Secretary of State for the Colonies, Joseph Chamberlain. Anopheles (with a nearly worldwide distribution) belongs to the subfamily Anophelinae alongside two other genera: Bironella (restricted to Australia) and Chagasia (restricted to the Neotropics). The taxonomy remains incompletely settled. Classification into species is based on morphological characteristics – wing spots, head anatomy, larval and pupal anatomy, chromosome structure – and, more recently, on DNA sequences. In the taxonomy published by Harbach and Kitching in 2016, three species of Bironella (B. confusa, B. gracilis, and B. hollandi) were shown to be phylogenetically more similar to A. kyondawensis than to other Bironella species. That phylogeny argues that, based on genetic similarity, A. implexus is divergent from the common ancestor of Anopheles. Life cycle Like all mosquitoes, anophelines go through four stages in their life cycles: egg, larva, pupa, and adult. The first three stages are aquatic and together last 5–14 days, depending on the species and the ambient temperature. The adult stage is when the female Anopheles acts as a malaria vector. 
The adult females can live up to a month (or more in captivity), but most probably do not live more than two weeks in nature. Eggs Adult females lay 50–200 eggs per oviposition. The eggs are quite small. Eggs are laid singly and directly on water. They are unique in that they have floats on either side. Eggs are not resistant to drying and hatch within 2–3 days, although hatching may take up to 2–3 weeks in colder climates. Larvae The mosquito larva has a well-developed head with mouth brushes used for feeding, a large thorax, and a nine-segment abdomen. It has no legs. In contrast to other mosquitoes, the Anopheles larva lacks a respiratory siphon, so it positions itself so that its body is parallel to the surface of the water. In contrast, the feeding larvae of culicine mosquitoes attach themselves to the water surface with the posterior siphon, their bodies pointing downwards. Larvae breathe through spiracles located on the eighth abdominal segment and so must come to the surface frequently. The larvae spend most of their time feeding on algae, bacteria, and other microorganisms in the thin surface layer. They dive below the surface only when disturbed. Larvae swim either by jerky movements of the entire body or through propulsion with the mouth brushes. Larvae develop through four stages, or instars, after which they metamorphose into pupae. At the end of each instar, the larvae molt, shedding their exoskeletons, or skin, to allow for further growth. The larvae occur in a wide range of habitats, but most species prefer clean, unpolluted water. Larvae of Anopheles have been found in freshwater or saltwater marshes, mangrove swamps, rice fields, grassy ditches, the edges of streams and rivers, and small, temporary rain pools. Many species prefer habitats with vegetation. Others prefer habitats with none. Some breed in open, sun-lit pools, while others are found only in shaded breeding sites in forests. 
A few species breed in tree holes or the leaf axils of some plants. Pupae The pupa (also known as a tumbler) is comma-shaped when viewed from the side. The head and thorax are merged into a cephalothorax, with the abdomen curving around underneath it. As with the larvae, the pupa must come to the surface frequently to breathe, which it does through a pair of respiratory trumpets on its cephalothorax. After a few days as a pupa, the dorsal surface of the cephalothorax splits and the adult mosquito emerges. Adults Like all mosquitoes, adult Anopheles species have slender bodies with three sections: head, thorax and abdomen. The head is specialized for acquiring sensory information and for feeding. It contains the eyes and a pair of long, many-segmented antennae. The antennae are important for detecting host odours, as well as the odours of breeding sites where females lay eggs. Female mosquitoes carrying Plasmodium parasites, the causative agents of malaria, are significantly more attracted to human breath and odours than uninfected mosquitoes. The head has an elongated, forward-projecting proboscis used for feeding, and two maxillary palps. These palps carry the receptors for carbon dioxide, a major attractant that enables the mosquito to locate its host. The thorax is specialized for locomotion. Three pairs of legs and a pair of wings are attached to the thorax. The abdomen is specialized for food digestion and egg development. This segmented body part expands considerably when a female takes a blood meal. The blood is digested over time, serving as a source of protein for the production of eggs, which gradually fill the abdomen. Anopheles can be distinguished from other mosquitoes by the palps, which are as long as the proboscis, and by the presence of discrete blocks of black and white scales on the wings. Adults can further be identified by their typical resting position: both sexes rest with their abdomens pointing up, unlike culicine mosquitoes. 
Adult mosquitoes usually mate within a few days after emerging from the pupal stage. In most species, the males form large swarms, usually around dusk, and the females fly into the swarms to mate. The duration from egg to adult varies considerably among species, and is strongly influenced by ambient temperature. Mosquitoes can develop from egg to adult in as little as five days, but usually take 10–14 days in tropical conditions. Males live for about a week, feeding on nectar and other sources of sugar. Males cannot feed on blood: it appears to be toxic to them, killing them within a few days, about the same lifespan as on a water-only diet. Females feed on sugar sources for energy, but usually require a blood meal for the development of eggs. After obtaining a full blood meal, the female rests for a few days while the blood is digested and eggs are developed. This process depends on the temperature, but usually takes 2–3 days in tropical conditions. Once the eggs are fully developed, the female lays them and resumes host-seeking. The cycle repeats itself until the female dies. While females can live longer than a month in captivity, most do not live longer than one to two weeks in nature. Their lifespans depend on temperature, humidity, and their ability to successfully obtain a blood meal while avoiding host defenses. Ecology Distribution Anopheles species live both in tropical areas known for malaria such as sub-Saharan Africa, and in colder latitudes. Malaria outbreaks have in the past occurred in colder climates, for example during the construction of the Rideau Canal in Canada during the 1820s. Anopheles species that can transmit malaria are not limited to malaria-endemic areas, so areas where they have been eliminated are constantly at risk of reintroduction of the disease. Habitat Anopheles require bodies of water, which may be small and seasonal, for their aquatic larvae and pupae. 
Suitable habitats range from ponds to water tanks, swamps, ditches and puddles. The adults can, however, live in dry regions such as Africa's savanna and Sahel. They can travel far from water, and are sometimes blown hundreds of kilometres by suitable winds. Adults can aestivate for months at a time, becoming dormant in hot dry weather, allowing them to persist through the African dry season. Further, Anopheles have been documented travelling in baggage, such as on aircraft. Parasites Parasites of Anopheles include Microsporidia of the genera Amblyospora, Crepidulospora, Senoma and Parathelohania. Two distinct life cycles are found in the Microsporidia. In the first type, the parasite is transmitted by the oral route and is relatively nonspecific as to host species. In the second, while again the oral route is the usual route of infection, the parasite is ingested within an already infected intermediate host. Infection of the insect larval form is frequently tissue-specific, and commonly involves the fat body. Vertical (transovarial) transmission also occurs. The parasitic Wolbachia bacteria have been studied for use as control agents. Predators The jumping spider Evarcha culicivora indirectly feeds on vertebrate blood by preying on female Anopheles. Juvenile spiders choose Anopheles over all other prey, regardless of whether the mosquito is actually carrying blood. Juvenile spiders have adopted an Anopheles-specific prey-capture behavior, using the posture of Anopheles as a primary cue to identify them. Anopheles has a distinctive resting posture with its abdomen angled up. The spider approaches from behind the mosquito and beneath its abdomen, then attacks from below. Malaria vectors Preferred sources for blood meals Since the genus Anopheles is the sole vector for malaria, it has been studied intensively in the search for effective control methods. 
One important behavioral factor is the degree to which an Anopheles species prefers to feed on humans (anthropophily) or animals such as cattle or birds (zoophily). Anthropophilic Anopheles are more likely to transmit the malaria parasites from one person to another. Most Anopheles are not exclusively anthropophilic or zoophilic, including the primary malaria vector in the western United States, A. freeborni. However, the primary malaria vectors in Africa, A. gambiae and A. funestus, are strongly anthropophilic and are consequently major vectors of human malaria. Probability of transmitting malaria Once ingested by a mosquito, malaria parasites must undergo development within the mosquito before they are infectious to humans. The time required for the parasite to develop in the mosquito (the extrinsic incubation period) ranges from 10 to 21 days, depending on the parasite species and the temperature. If a mosquito does not survive long enough for the parasite to develop, then she transmits no parasites. It is not possible to measure directly the lifespans of mosquitoes in nature, but indirect estimates of daily survivorship have been made for several Anopheles species. Estimates of daily survivorship in Tanzania of A. gambiae, the vector of the dangerous Plasmodium falciparum parasite, ranged from 0.77 to 0.84, meaning that after one day, between 77% and 84% have survived. Assuming this survivorship is constant through the adult life of a mosquito, less than 10% of female A. gambiae would survive longer than a 14-day extrinsic incubation period. If daily survivorship increased to 0.9, over 20% of mosquitoes would survive longer than the same period. Control measures that rely on insecticides (e.g. indoor residual spraying) may actually impact malaria transmission more through their effect on adult longevity than through their effect on the population of adult mosquitoes. 
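The survivorship arithmetic above follows from a single simplifying assumption stated in the text: if daily survival probability p is constant through adult life, the fraction of mosquitoes alive after n days is p raised to the power n. A minimal Python sketch (the function name is illustrative, not from any cited study):

```python
# Fraction of adult mosquitoes surviving an n-day extrinsic incubation
# period, assuming constant daily survivorship (the simplifying assumption
# made in the text). 0.77-0.84 are the Tanzanian A. gambiae estimates;
# 0.90 is the hypothetical higher survivorship discussed above.

def fraction_surviving(daily_survival: float, days: int = 14) -> float:
    """P(still alive after `days` days) under constant daily survival."""
    return daily_survival ** days

for p in (0.77, 0.84, 0.90):
    print(f"daily survival {p}: {fraction_surviving(p):.1%} survive 14 days")
```

At p = 0.84 this gives roughly 8.7% surviving the 14-day extrinsic incubation period (under 10%, as stated), while p = 0.90 gives roughly 22.9% (over 20%), illustrating why adult longevity has such leverage on transmission.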
Patterns of feeding and resting Most Anopheles are crepuscular (active at dusk or dawn) or nocturnal (active at night). Some feed indoors (endophagic), while others feed outdoors (exophagic). After feeding, some blood-fed mosquitoes prefer to rest indoors (endophilic), while others prefer to rest outdoors (exophilic). Biting by nocturnal, endophagic Anopheles can be markedly reduced through the use of insecticide-treated bed nets or through improved housing construction to prevent mosquito entry (e.g. window screens). Endophilic mosquitoes are readily controlled by indoor spraying of residual insecticides. In contrast, exophagic/exophilic vectors are best controlled by destroying breeding sites, such as by filling in ponds. Gut flora Because transmission of disease by the mosquito requires ingestion of blood, the gut flora may have a bearing on the success of infection of the mosquito host. The larval and pupal gut is largely colonized by photosynthetic cyanobacteria, while in the adult, gram-negative bacteria in the Pseudomonadota and Bacteroidota phyla predominate. Blood meals drastically reduce the diversity of microorganisms in the gut, favouring bacteria. Control Insecticide control and resistance Insecticides have offered a first line of approach to ridding areas of malarial mosquitoes. However, mosquitoes, with a short generation time, may rapidly evolve resistance, as experienced during the Global Malaria Eradication Campaign of the 1950s. The use of insecticides in agriculture has also resulted in resistance in mosquito populations, implying that an effective control program must monitor for resistance and switch to other means if resistance is detected. Eradication In 2016, a CRISPR-Cas9 gene drive system was proposed to eradicate Anopheles gambiae by deleting the dsx gene, causing female sterility. Such a gene drive system has been shown to suppress an entire caged A. gambiae population within 7–11 generations, typically less than a year. 
This has raised concerns about both the efficiency of a gene drive system and the ethical and ecological impact of such an eradication program. There have therefore been efforts to use the gene drive system to more efficiently introduce genes for Plasmodium resistance into the species, such as by targeting and knocking out the FREP1 gene in Anopheles gambiae. Researchers in Burkina Faso have created a strain of the fungus Metarhizium pingshaense that is genetically engineered to produce the venom of an Australian funnel-web spider; exposure to the fungus caused populations of Anopheles to crash by 99% in a controlled trial.
Biology and health sciences
Flies (Diptera)
Animals
542107
https://en.wikipedia.org/wiki/Schizotypal%20personality%20disorder
Schizotypal personality disorder
Schizotypal personality disorder (StPD or SPD), also known as schizotypal disorder, is a cluster A personality disorder. The Diagnostic and Statistical Manual of Mental Disorders (DSM) describes the disorder specifically as a personality disorder characterized by thought disorder, paranoia, a characteristic form of social anxiety, derealization, transient psychosis, and unconventional beliefs. People with this disorder often feel pronounced discomfort in forming and maintaining social connections with other people, primarily due to the belief that other people harbor negative thoughts and views about them. Peculiar speech mannerisms and socially unexpected modes of dress are also characteristic. People with StPD may react oddly in conversations, not respond, or talk to themselves. They frequently interpret situations as being strange or having unusual meanings for them; paranormal and superstitious beliefs are common. People with StPD usually disagree with the suggestion that their thoughts and behaviors are a 'disorder' and seek medical attention for depression or anxiety instead. Schizotypal personality disorder occurs in approximately 3% of the general population and is more commonly diagnosed in males. Signs and symptoms Magical thinking Odd and magical thinking is common among people with StPD. They are more likely to believe in supernatural phenomena and entities. It is common for people with StPD to experience severe social anxiety and have paranoid ideation. Ideas of reference are common in people with StPD. They can feel as if expressing themselves is dangerous. They may also feel that others are more competent, and have deeply entrenched and pervasive insecurities. Strange thinking patterns may be a defense mechanism against these feelings. People with StPD usually have limited levels of self-awareness. They may believe others think of them more negatively than they actually do. 
Affect Patients with StPD can have difficulties in recognizing their own or others' emotions. This can extend to difficulties expressing emotion. They may have limited responses to others' emotions and can be ambivalent. It is common for people with StPD to derive limited joy from activities. People with StPD are typically more socially isolated and uninterested in social situations than most people, although they can be socially active on the internet. Depersonalization, derealization, boredom, and internal fantasies are common in patients with StPD. Abnormal facial expressions are also common in people with StPD, and they can have aberrant eye movements and difficulty responding to stimuli. They are more prone to substance abuse or suicidal ideation. An epidemiological study on suicidal behavior in StPD found that, even when accounting for sociodemographic factors, people with StPD were 1.51 times more likely to attempt suicide. Cognition People with StPD tend to have cognitive impairments. They can have abnormal perceptual and sensory experiences such as illusions. For example, someone with StPD may perceive colors as lighter or darker than others perceive them. Facial perception may also be difficult for people with the disorder. They can see others as deformed, may misrecognize them, or can feel as if they are alien to them. People with StPD can have difficulty processing information such as speech or language. They are more likely to speak slowly, with less fluctuation in pitch, and long pauses between speech. Patients with StPD may have a lower odor detection threshold, and can have impaired auditory or olfactory processing. It is also common for people with StPD to struggle with context processing, which causes them to form loose connections between events. 
In addition, people with StPD can have decreased capacities for multisensory integration or contrast sensitivity, either hyperreactive or impaired reactions to sensory input, slower response times, impaired attention, poorer postural control, and difficulties with decision-making. They can have difficulties in memory, and may have frequent intrusive memories of events. It is common for people with StPD to feel déjà vu or as if they can accurately predict future events due to abnormalities in the brain's memory storage. History StPD was introduced in 1980 in the DSM-III. Its inclusion provided a new classification for schizophrenia-spectrum disorders and for personality disorders that were previously unspecified. Its diagnosis was developed by differentiating it from borderline personality disorder, since some of the population diagnosed with that disorder demonstrated schizophrenia-spectrum traits. When the separation of borderline personality disorder and StPD was originally suggested by Spitzer and Endicott, Siever and Gunderson opposed the distinction, arguing that StPD was related to schizophrenia. Spitzer and Endicott stated "We believe, as do the authors, that the evidence for the genetic relationship between Schizotypal features and Chronic Schizophrenia is suggestive rather than proven". StPD was included in the DSM-IV and the DSM-5 and saw little change in its diagnosis. Epidemiology The reported prevalence of StPD in community studies ranges from 1.37% in a Norwegian sample to 4.6% in an American sample. A large American study found a lifetime prevalence of 3.9%, with somewhat higher rates among men (4.2%) than women (3.7%). It may be uncommon in clinical populations, with reported rates of up to 1.9%. It has been estimated to be prevalent among up to 5.2% of the general population. 
Together with other cluster A personality disorders, it is also very common among homeless people who show up at drop-in centers, according to a 2008 New York study. The study did not address homeless people who do not show up at drop-in centers. Schizotypal disorder may be overdiagnosed in Russia and other post-Soviet states. Prognosis People with StPD usually first showed symptoms of the disorder in childhood. Traits of StPD usually remain consistently present over time, although they can fluctuate greatly in severity and stability. The DSM characterizes StPD as having nine major symptoms: ideas of reference, odd/magical beliefs, social anxiety, not having close friends, odd or eccentric behavior, odd speech, unusual perceptions, suspiciousness, and constricted affect. There may be gender differences in the symptomology of men and women with StPD. Women with the disorder might be more likely to have less severe cognitive deficits, and more severe social anxiety and magical thinking. People with StPD are more likely to have only a high school education, to be unemployed, and to have significant functional impairment. The two traits of StPD which are least likely to change are paranoia and abnormal experiences. Compared to those without StPD, adolescents with StPD spend more time socialising on the Internet, such as on forums, chat rooms and cooperative computer games, and spend less time socialising in person. In people with obsessive–compulsive disorder (OCD) who are resistant to behavioral therapy and medication, odd or eccentric behaviors may indicate coexisting schizotypal disorder. Etiology Genetic Although environmental factors likely play an important role in the onset of the disorder, people who have relatives with schizotypy, mood disorders, or other disorders on the schizophrenia spectrum are at a higher likelihood of developing StPD. 
The COMT Val158Met polymorphism and its Val or Met allele are suspected to be associated with schizotypal personality disorder. These genes affect dopamine production in the brain, a neurochemical thought to be associated with schizotypal traits. The gene may also contribute to decreased levels of gray matter in the prefrontal cortex. This may lead to impaired capacities for decision-making, speech, cognitive flexibility, and altered perceptual experiences. The rs1006737 polymorphism of the CACNA1C gene is also believed to play a part in schizotypal symptoms. It may lead to a significantly increased physiological response to stress through the cortisol awakening response in the brain. It may also negatively affect reward processing in the brain and lead to anhedonia or depression in patients. These factors possibly lead to the development of schizotypal traits. The zinc-finger protein ZNF804A likely affects the levels of paranoia, anxiety, and ideas of reference in StPD. This gene is also thought to negatively impact attention in people with StPD. It may lead to an increased level of white matter volume in the frontal lobe. Another gene, NOTCH4, is thought to relate to schizophrenia spectrum disorders. It can lead to disruptions in the occipital cortex, and therefore symptoms of schizotypy. The GLRA1 and p250GAP genes are also potentially associated with StPD. They may lead to abnormally low levels of glutamic acid at NMDA receptors, which impairs memory and learning. StPD may stem from abnormalities in chromosome 22. Neurological Exposure to influenza during week 23 of gestation is associated with a higher likelihood of developing StPD. Poor nutrition in childhood may also contribute to the onset of StPD by altering the course of brain development. Numerous areas of the brain are thought to be associated with StPD. Higher levels of dopamine in the brain, possibly specifically at the D1 receptor, might contribute to the development of StPD. 
StPD is associated with heightened dopaminergic activity in the striatum. Their symptoms may also stem from higher presynaptic dopamine release. People with StPD may also have decreased volumes of grey or white matter in their caudate nucleus, which leads to difficulties in speech. People with StPD likely have a reduced volume in their temporal lobes, possibly specifically the left hemisphere. The reduced levels of gray matter in these areas may be linked to their negative symptoms. Reduced volumes of gray or white matter in the superior temporal gyrus or the transverse temporal gyrus are thought to lead to issues with speech, memory, and hallucinations. Deficits in the gray matter volume of the temporal lobe and prefrontal cortex are likely associated with impairments in cognitive function, sensory processing, speech, executive function, decision-making, and emotional processing present in people with StPD. StPD symptoms may also be influenced by a reduced internal capsule, which carries information to the cerebral cortex. People with StPD can also have impairments in the uncinate fasciculus, which connects parts of the limbic system. People with StPD have reduced levels of gray matter in their middle frontal gyrus and Brodmann area 10, although not as reduced as in patients with schizophrenia; this difference may help prevent them from developing schizophrenia. Increased gyrification in gyri near the cerebellum may lead to dysconnectivity in the brain, and therefore schizotypal symptoms. They may also have a hyporeactive or hyperreactive amygdala, as well as hyperactive pituitary glands and putamens. It is also possible that a lower capacity for prepulse inhibition plays a role in StPD. Research has suggested that people with StPD can have higher concentrations of homovanillic acid. Abnormalities in the cave of septum pellucidum may also be present. 
In people predisposed to the development of schizophrenia spectrum disorders, the consumption of cannabis can induce the onset of StPD or other disorders with psychotic symptoms. Environmental Unique environmental factors, which differ from shared sibling experiences, have been found to play a role in the development of StPD and its dimensions. There is evidence to suggest that parenting styles, early separation, childhood trauma, and childhood neglect can lead to the development of schizotypal traits. Neglect, abuse, stress, trauma, or family dysfunction during childhood may increase the risk of developing schizotypal personality disorder. There is also evidence indicating that disruptions in brain development during the prenatal period could affect the development of StPD. Over time, children learn to interpret social cues and respond appropriately, but for unknown reasons this process does not work well for people with this disorder. During childhood, people with StPD may have seen little emotional expression from their parents. Another possibility is that they were excessively criticized or felt like they were constantly under threat, potentially resulting in the onset of the social anxiety, strange thinking patterns, and blunted affect present in StPD. Their difficulties in social situations might eventually cause the individual to withdraw from most social interactions, thus leading to asociality. Children with schizotypal symptoms are more likely to indulge in internal fantasies and tend to be more anxious, socially isolated, and sensitive to criticism. People with the most severe cases of StPD usually have a combination of childhood trauma and a genetic basis for their condition. Diagnosis Formal diagnostic criteria StPD is characterized by 5 or more of the following: Ideas of reference (but not delusions of reference) Odd beliefs or magical thinking (e.g. 
the supernatural or a special connection or bond to an abuser) Unusual perceptual experiences (hearing a voice, dissociative experiences, illusions, etc.) Odd thought and speech (e.g. jumping from one topic to another) Eccentric behavior and/or appearance Paranoid ideation Moods and facial expressions that don't match each other or the situation Few to no close supports Excessive social anxiety that remains even with familiar people These symptoms must have begun by early adulthood. Differential diagnosis Differential diagnosis with the following disorders should also be considered: Other disorders with psychotic symptoms (e.g., schizophrenia, bipolar disorder, or depressive disorder with psychotic features) Paranoid, schizoid, or avoidant personality disorders Dissociative identity disorder (DID) Communication disorders Screening There are various methods of screening for schizotypal personality. The Schizotypal Personality Questionnaire (SPQ) measures nine traits of StPD using a self-report assessment. The nine traits referenced are Ideas of Reference, Excessive Social Anxiety, Odd Beliefs or Magical Thinking, Unusual Perceptual Experiences, Odd or Eccentric Behavior, No Close Friends, Odd Speech, Constricted Affect, and Suspiciousness. A study found that of the participants who scored in the top 10th percentile of all the SPQ scores, 55% were clinically diagnosed with StPD. It has been adapted into a computerized adaptive version, known as the SPQ-CAT. A method that measures the risk of developing psychosis through self-reports is the Wisconsin Schizotypy Scale (WSS). The WSS divides schizotypal personality traits into four scales for Perceptual Aberration, Magical Ideation, Revised Social Anhedonia, and Physical Anhedonia. A comparison of the SPQ and the WSS suggests that these measures should be used cautiously for screening of StPD. When screening for StPD, it is difficult to distinguish between schizotypal personality disorder and autism spectrum disorder. 
In order to develop better screening tools, researchers are looking into the importance of ipseity disturbance, which is characteristic of schizophrenia spectrum disorders such as StPD but not of autism. Millon's subtypes Theodore Millon proposes two subtypes of schizotypal personality. Any individual with schizotypal personality disorder may exhibit either one of the following somewhat different subtypes (note that Millon believes it is rare for a personality to show one pure variant, but rather a mixture of one major variant with one or more secondary variants): Millon's typology of personality disorders was influential in the development of the DSM-III, particularly with respect to distinguishing between schizoid, schizotypal and avoidant personality disorders. These had previously been considered different surface-level expressions of the same underlying personality structure, and some psychologists, particularly those working in psychoanalytic or psychodynamic traditions, still take these personality disorders to be essentially similar. Common comorbidities Antisocial personality disorder Avoidant personality disorder Bipolar disorder Borderline personality disorder Dysthymia Narcissistic personality disorder Obsessive–compulsive disorder Major depressive disorder Paranoid personality disorder Post-traumatic stress disorder Schizoid personality disorder Schizophrenia Substance use disorders Social anxiety disorder Dissociative identity disorder Treatment Medication StPD is rarely seen as the primary reason for treatment in a clinical setting, but it often occurs as a comorbid finding with other mental disorders. When patients with StPD are prescribed medication, it is usually antipsychotics; however, the use of neuroleptic drugs in the schizotypal population is in great doubt. The antipsychotics which show promise as treatments for StPD include olanzapine, risperidone, haloperidol, and thiothixene. 
The antidepressant fluoxetine may also be helpful. While people with schizotypal personality disorder and other attenuated psychotic-spectrum disorders may have a good outcome with neuroleptics in the short term, long-term follow-up suggests significant impairment in daily functioning compared to schizotypal and even schizophrenic people without antipsychotic drug exposure. Positive, negative, and depressive symptoms were shown to be improved by the use of olanzapine, an antipsychotic. Those with comorbid OCD and StPD were most positively affected by the use of olanzapine, and showed worse outcomes with the use of clomipramine, an antidepressant. Antidepressants are also sometimes prescribed, whether for StPD proper or for comorbid anxiety and depression. However, there is some ambiguity in the efficacy of antidepressants, as many studies have only tested people with StPD and comorbid obsessive-compulsive disorder or borderline personality disorder. They have shown little efficacy for treating the dysthymia and anhedonia related to StPD. Antipsychotics and antidepressants are the most frequently prescribed medications for StPD, though their use and efficacy should be evaluated case by case. The use of stimulants has also shown some efficacy, especially for those with worsened cognitive and attentional issues. Patients that suffer from concurrent psychosis should be monitored more closely if stimulants are used as part of their treatment. Other drugs which may be effective include pergolide, guanfacine, and dihydrexidine. Therapy According to Theodore Millon, schizotypal personality disorder is one of the easiest personality disorders to identify but one of the most difficult to treat with psychotherapy. Cognitive remediation therapy, metacognitive therapy, supportive psychotherapy, social skills training and cognitive-behavioral therapy can be effective treatments for the disorder. 
Increased social interaction with others may help limit symptoms of StPD. Support is especially important for schizotypal patients with predominant paranoid symptoms, because they may have difficulties even in highly structured groups. Persons with StPD usually consider themselves to be simply eccentric or nonconformist; the degree to which they consider their social nonconformity a problem differs from the degree to which it is considered a problem in psychiatry. It is difficult to gain rapport with people with StPD because increasing familiarity and intimacy often increase their level of anxiety and discomfort. Therapy for StPD must be flexible to face emergencies or unique challenges.
Biology and health sciences
Mental disorders
Health
542241
https://en.wikipedia.org/wiki/Tetra
Tetra
Tetra is the common name of many small freshwater characiform fishes. Tetras come from Africa, Central America, and South America, belonging to the biological family Characidae and to its former subfamilies Alestidae (the "African tetras") and Lebiasinidae. The Characidae are distinguished from other fish by the presence of a small adipose fin between the dorsal and caudal fins. Many of these, such as the neon tetra (Paracheirodon innesi), are brightly colored and easy to keep in captivity. Consequently, they are extremely popular for home aquaria. Tetra is no longer a taxonomic, phylogenetic term. It is short for Tetragonopterus, a genus name formerly applied to many of these fish, which is Greek for "square-finned" (literally, four-sided-wing). Because of the popularity of tetras in the fishkeeping hobby, many unrelated fish are commonly known as tetras, including species from different families. Even vastly different fish may be called tetras. For example, payara (Hydrolycus scomberoides) is occasionally known as the "sabretooth tetra" or "vampire tetra". Tetras generally have compressed (sometimes deep), fusiform bodies and are typically identifiable by their fins. They ordinarily possess a homocercal caudal fin (a twin-lobed, or forked, tail fin whose upper and lower lobes are of equal size) and a tall dorsal fin characterized by a short connection to the fish's body. Additionally, tetras possess a long anal fin stretching from a position just posterior of the dorsal fin and ending on the ventral caudal peduncle, and a small, fleshy adipose fin located dorsally between the dorsal and caudal fins. This adipose fin represents the fourth unpaired fin on the fish (the four unpaired fins are the caudal fin, dorsal fin, anal fin, and adipose fin), lending to the name tetra, which is Greek for four. While this adipose fin is generally considered the distinguishing feature, some tetras (such as the emperor tetras, Nematobrycon palmeri) lack this appendage. 
Ichthyologists debate the function of the adipose fin, doubting its role in swimming due to its small size and lack of stiffening rays or spines. Although the list below is sorted by common name, in a number of cases the common name is applied to different species. Since the aquarium trade may use a different name for the same species, advanced aquarists tend to use scientific names for the less-common tetras. The list below is incomplete.
Species
Tetra species:
A–D
Adonis tetra, Lepidarchus adonis
African long-finned tetra, Brycinus longipinnis
African moon tetra, Bathyaethiops caudomaculatus
Arnold's tetra, Arnoldichthys spilopterus
Banded tetra, Psalidodon fasciatus
Bandtail tetra, Moenkhausia dichroura
Barred glass tetra, Phenagoniates macrolepis
Beacon tetra, Hemigrammus ocellifer
Belgian flag tetra, Hyphessobrycon heterorhabdus
Black morpho tetra, Poecilocharax weitzmani
Black neon tetra, Hyphessobrycon herbertaxelrodi
Black phantom tetra, Hyphessobrycon megalopterus
Black tetra or butterfly tetra, Gymnocorymbus ternetzi
Black tetra, Gymnocorymbus thayeri
Black wedge tetra, Hemigrammus pulcher
Blackband tetra, Hyphessobrycon scholzei
Blackedge tetra, Tyttocharax madeirae
Black-flag tetra, Hyphessobrycon rosaceus
Black-jacket tetra, Moenkhausia takasei
Blackline tetra, Hyphessobrycon scholzei
Bleeding heart tetra, Hyphessobrycon erythrostigma
Blind tetra, Stygichthys typhlops
Bloodfin tetra, Aphyocharax anisitsi
Blue tetra, Boehlkea fredcochui
Blue tetra, Mimagoniates microlepis
Blue tetra, Tyttocharax madeirae
Bucktooth tetra, Exodon paradoxus
Buenos Aires tetra, Psalidodon anisitsi
Callistus tetra, Hyphessobrycon eques
Calypso tetra, Hyphessobrycon axelrodi
Candy cane tetra, Hyphessobrycon sp. HY511
Cardinal tetra, Paracheirodon axelrodi
Carlana tetra, Carlana eigenmanni
Cochu's blue tetra, Knodus borki
Colombian tetra, Hyphessobrycon columbianus
Central tetra, Astyanax aeneus
Coffee-bean tetra, Hyphessobrycon takasei
Colcibolca tetra, Astyanax nasutus
Congo tetra, Phenacogrammus interruptus
Copper tetra, Hasemania melanura
Costello tetra, Hemigrammus hyanuary
Creek tetra, Bryconamericus scleroparius
Creek tetra, Bryconamericus terrabensis
Croaking tetra, Mimagoniates inequalis
Croaking tetra, Mimagoniates lateralis
Croaking tetra, Mimagoniates microlepis
Dawn tetra, Aphyocharax paraguayensis
Dawn tetra, Hyphessobrycon eos
Diamond tetra, Moenkhausia pittieri
Discus tetra, Brachychalcinus orbicularis
Disk tetra, Myleus schomburgkii
Dragonfin tetra, Pseudocorynopoma doriae
E–Q
Ember tetra, Hyphessobrycon amandae
Emperor tetra, Nematobrycon palmeri
False black tetra, Gymnocorymbus thayeri
False rummynose tetra, Petitella georgiae
Featherfin tetra, Hemigrammus unilineatus
Firehead tetra, Petitella bleheri
Flag tetra, Hyphessobrycon heterorhabdus
Flame tail tetra, Aphyocharax erythrurus
Flame tetra, Hyphessobrycon flammeus
Garnet tetra, Hemigrammus pulcher
Glass tetra, Moenkhausia oligolepis
Glass bloodfin tetra, Prionobrama filigera
Glossy tetra, Moenkhausia oligolepis
Glowlight tetra, Hemigrammus erythrozonus
Gold tetra (aka golden tetra, or brass tetra), Hemigrammus rodwayi
Goldencrown tetra, Aphyocharax alburnus
Goldspotted tetra, Hyphessobrycon griemi
Gold-tailed tetra, Carlastyanax aurocaudatus
Green dwarf tetra, Odontocharacidium aphanes
Green neon tetra, Paracheirodon simulans
Griem's tetra, Hyphessobrycon griemi
Head-and-taillight tetra, Hemigrammus ocellifer
January tetra, Hemigrammus hyanuary
Jellybean tetra, Lepidarchus adonis
Jewel tetra, Hyphessobrycon eques
Jumping tetra, Hemibrycon tridens
Largespot tetra, Astyanax orthodus
Lemon tetra, Hyphessobrycon pulchripinnis
Longfin tetra, Brycinus longipinnis
Long-finned glass tetra, Xenagoniates bondi
Longjaw tetra, Bramocharax bransfordii
Loreto tetra, Hyphessobrycon loretoensis
Mayan tetra, Hyphessobrycon compressus
Mexican tetra, Astyanax mexicanus
Mimic scale-eating tetra, Deuterodon heterostomus
Mourning tetra, Brycon pesu
Naked tetra, Gymnocharacinus bergii
Neon tetra, Paracheirodon innesi
Niger tetra, Arnoldichthys spilopterus
Nurse tetra, Brycinus nurse
Oneline tetra, Nannaethiops unitaeniatus
One-line tetra, Hemigrammus unilineatus
Orangefin tetra, Bryconops affinis
Ornate tetra, Hyphessobrycon bentosi
Panama tetra, Hyphessobrycon panamensis
Penguin tetra, Thayeria boehlkei
Peruvian tetra, Hyphessobrycon peruvianus
Petticoat tetra, Gymnocorymbus ternetzi
Phantom tetra, Hyphessobrycon megalopterus
Pittier's tetra, Moenkhausia pittieri
Pretty tetra, Hemigrammus pulcher
Pristella tetra, Pristella maxillaris
Pygmy tetra, Odontostilbe dialeptura
R–Z
Rainbow tetra, Nematobrycon lacortei
Rainbow tetra, Nematobrycon palmeri
Red eye tetra, Moenkhausia sanctaefilomenae
Red phantom tetra, Hyphessobrycon sweglesi
Red tetra or rio tetra, Hyphessobrycon flammeus
Redspotted tetra, Copeina guttata
Rosy tetra, Hyphessobrycon rosaceus
Royal tetra, Inpaichthys kerri
Ruby tetra, Axelrodia riesei
Rummy-nose tetra, Petitella rhodostoma
Brilliant rummy-nose tetra, Petitella bleheri
Sailfin tetra, Crenuchus spilurus
Savage tetra, Hyphessobrycon savagei
Savanna tetra, Hyphessobrycon stegemanni
Semaphore tetra, Pterobrycon myrnae
Serpae tetra, Hyphessobrycon eques
Sharptooth tetra, Micralestes acutidens
Silver tetra, Ctenobrycon spilurus
Silver tetra, Gymnocorymbus thayeri
Silver tetra, Micralestes acutidens
Silvertip tetra, Hasemania melanura
Silvertip tetra, Hasemania nana
Splash tetra, Copella arnoldi
Spot-fin tetra, Hyphessobrycon socolofi
Spottail tetra, Moenkhausia dichroura
Spotted tetra, Copella nattereri
Swegles's tetra, Hyphessobrycon sweglesi
Tailspot tetra, Bryconops caudomaculatus
Three-lined African tetra, Neolebias trilineatus
Tietê tetra, Brycon insignis
Tortuguero tetra, Hyphessobrycon tortuguerae
Transparent tetra, Charax gibbosus
True big-scale tetra, Brycinus macrolepidotus
Uruguay tetra, Cheirodon interruptus
White spot tetra, Aphyocharax paraguayensis
X-ray tetra, Pristella maxillaris
Yellow tetra, Hyphessobrycon bifasciatus
Yellow-tailed African tetra, Alestopetersius caudalis
Biology and health sciences
Characiformes
Animals
542399
https://en.wikipedia.org/wiki/Binomial%20%28polynomial%29
Binomial (polynomial)
In algebra, a binomial is a polynomial that is the sum of two terms, each of which is a monomial. It is the simplest kind of sparse polynomial after the monomials. Definition A binomial is a polynomial which is the sum of two monomials. A binomial in a single indeterminate (also known as a univariate binomial) can be written in the form ax^m + bx^n, where a and b are numbers, m and n are distinct non-negative integers, and x is a symbol which is called an indeterminate or, for historical reasons, a variable. In the context of Laurent polynomials, a Laurent binomial, often simply called a binomial, is similarly defined, but the exponents m and n may be negative. More generally, a binomial may be written as a sum of two monomials in several indeterminates, a x_1^{n_1} ⋯ x_k^{n_k} + b x_1^{m_1} ⋯ x_k^{m_k}. Examples Simple binomials include x + y, 3x − 2x^2, and x^3 + 7. Operations on simple binomials The binomial x^2 − y^2, the difference of two squares, can be factored as the product of two other binomials: x^2 − y^2 = (x + y)(x − y). This is a special case of the more general formula x^{n+1} − y^{n+1} = (x − y)(x^n + x^{n−1}y + ⋯ + xy^{n−1} + y^n). When working over the complex numbers, this can also be extended to the sum of two squares: x^2 + y^2 = (x + iy)(x − iy). The product of a pair of linear binomials ax + b and cx + d is a trinomial: (ax + b)(cx + d) = acx^2 + (ad + bc)x + bd. A binomial raised to the nth power, represented as (x + y)^n, can be expanded by means of the binomial theorem or, equivalently, using Pascal's triangle. For example, the square of the binomial x + y is equal to the sum of the squares of the two terms and twice the product of the terms, that is: (x + y)^2 = x^2 + 2xy + y^2. The numbers (1, 2, 1) appearing as multipliers for the terms in this expansion are the binomial coefficients two rows down from the top of Pascal's triangle. The expansion of the nth power uses the numbers n rows down from the top of the triangle. An application of the above formula for the square of a binomial is the "(m, n)-formula" for generating Pythagorean triples: for m < n, let a = n^2 − m^2, b = 2mn, and c = n^2 + m^2; then a^2 + b^2 = c^2. Binomials that are sums or differences of cubes can be factored into smaller-degree polynomials as follows: x^3 + y^3 = (x + y)(x^2 − xy + y^2), and x^3 − y^3 = (x − y)(x^2 + xy + y^2).
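The expansion rule and the (m, n)-formula above can be checked computationally. The following Python sketch (function names are my own, for illustration) uses the standard library's math.comb for the binomial coefficients:

```python
from math import comb

def binomial_coefficients(n):
    """Row n of Pascal's triangle: the coefficients of (x + y)**n."""
    return [comb(n, k) for k in range(n + 1)]

def pythagorean_triple(m, n):
    """The (m, n)-formula: for 0 < m < n, (n^2 - m^2, 2mn, n^2 + m^2)
    is a Pythagorean triple."""
    a, b, c = n * n - m * m, 2 * m * n, n * n + m * m
    assert a * a + b * b == c * c  # a^2 + b^2 = c^2 by the square-of-a-binomial identity
    return a, b, c

print(binomial_coefficients(2))   # [1, 2, 1] — the square of a binomial
print(pythagorean_triple(1, 2))   # (3, 4, 5)
```

Expanding higher powers works the same way: binomial_coefficients(4) returns the row (1, 4, 6, 4, 1) used for (x + y)^4.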
Mathematics
Basics
null
542410
https://en.wikipedia.org/wiki/Thermus%20aquaticus
Thermus aquaticus
Thermus aquaticus is a species of bacteria that can tolerate high temperatures, one of several thermophilic bacteria that belong to the Deinococcota phylum. It is the source of the heat-resistant enzyme Taq DNA polymerase, one of the most important enzymes in molecular biology because of its use in the polymerase chain reaction (PCR) DNA amplification technique. History When studies of biological organisms in hot springs began in the 1960s, scientists thought that thermophilic bacteria could not survive at temperatures above about . Soon, however, it was discovered that many bacteria in different springs not only survived, but also thrived at higher temperatures. In 1969, Thomas D. Brock and Hudson Freeze of Indiana University reported a new species of thermophilic bacteria which they named Thermus aquaticus. The bacterium was first isolated from Mushroom Spring in the Lower Geyser Basin of Yellowstone National Park, which is near the major Great Fountain Geyser and White Dome Geyser, and has since been found in similar thermal habitats around the world. Decades later, this discovery had profound implications, including the invention of the polymerase chain reaction (PCR) by biochemist Kary Mullis, which revolutionized DNA research and earned Mullis the Nobel Prize in Chemistry in 1993. PCR facilitated advancements in medical diagnostics, genetics, and other fields. After Kary Mullis' discovery of PCR, Cetus awarded him $10,000. However, Cetus later sold the PCR patent to F. Hoffmann-La Roche (Roche) for $300 million. This transaction left Mullis feeling cheated throughout his life. Roche has since profited immensely from the PCR patent, with annual PCR-related sales reaching $5.4 billion in 2022. Despite these profits, neither the National Park Service, Yellowstone National Park, nor the state of Wyoming has received any share of these revenues.
Recognizing the scientific and financial potential of Yellowstone's extremophiles, biotechnology companies like Diversa signed agreements with the National Park Service for bioprospecting. This led to further scientific exploration and potential commercial applications, despite some environmental concerns. Overall, Brock's initial discovery in Yellowstone's hot springs paved the way for significant scientific breakthroughs, demonstrating the importance of basic research in driving innovation and technological advancements. Biology T. aquaticus shows best growth at , but can survive at temperatures of . It primarily scavenges for protein from its environment as is evidenced by the large number of extracellular and intracellular proteases and peptidases as well as transport proteins for amino acids and oligopeptides across its cell membrane. This bacterium is a chemotroph—it performs chemosynthesis to obtain food. However, since its range of temperature overlaps somewhat with that of the photosynthetic cyanobacteria that share its ideal environment, it is sometimes found living jointly with its neighbors, obtaining energy for growth from their photosynthesis. T. aquaticus normally respires aerobically but one of its strains, Thermus aquaticus Y51MC23, is able to be grown anaerobically. The genetic material of T. aquaticus consists of one chromosome and four plasmids, and its complete genome sequencing revealed CRISPR genes at numerous loci. Morphology Thermus aquaticus is generally of cylindrical shape with a diameter of 0.5 μm to 0.8 μm. The shorter rod shape has a length of 5 μm to 10 μm. The longer filament shape has a length that varies greatly and in some cases exceeds 200 μm. T. aquaticus has shown multiple possible morphologies in different cultures, rod-shaped or as short filaments. The rod-shaped bacteria have a tendency to aggregate. 
Associations of several individuals can lead to the formation of spherical bodies 10 μm to 20 μm in diameter, also called rotund bodies. These bodies are not composed of cell envelope or outer membrane components as previously thought, but are instead made from remodelled peptidoglycan cell wall. Their exact function in the survival of T. aquaticus remains unknown but has been theorised to include temporary food and nucleotide storage, or they may play a role in the attachment and organisation of colonies. Thermus aquaticus is a typical gram-negative bacterium, which indicates that its cell walls have considerably less peptidoglycan compared to gram-positive counterparts. In the presence of sunlight, Thermus can display hues ranging from yellow to pink or red, which are visible in hot springs. Additionally, Thermus aquaticus may possess flagella for motility or remain immotile. Enzymes from T. aquaticus T. aquaticus has become famous as a source of thermostable enzymes, particularly the Taq DNA polymerase, as described below. Aldolase Early studies of this extreme thermophile, which could be grown in cell culture, centered on attempts to understand how enzymes that are normally inactivated at high temperatures can function in thermophiles. In 1970, Freeze and Brock published an article describing a thermostable aldolase enzyme from T. aquaticus. RNA polymerase The first polymerase enzyme isolated from T. aquaticus in 1974 was a DNA-dependent RNA polymerase, used in the process of transcription. Taq I restriction enzyme Most molecular biologists probably became aware of T. aquaticus in the late 1970s or early 1980s because of the isolation of useful restriction endonucleases from this organism. Use of the term Taq to refer to Thermus aquaticus arose at this time from the convention of giving restriction enzymes short names, such as Sal and Hin, derived from the genus and species of the source organisms.
DNA polymerase ("Taq pol") DNA polymerase was first isolated from T. aquaticus in 1976. The first advantage found for this thermostable DNA polymerase (temperature optimum 72 °C; it does not denature even at 95 °C) was that it could be isolated in a purer form (free of other enzyme contaminants) than could the DNA polymerase from other sources. Later, Kary Mullis and other investigators at Cetus Corporation discovered that this enzyme could be used in the polymerase chain reaction (PCR) process for amplifying short segments of DNA, eliminating the need to add E. coli polymerase enzymes after every cycle of thermal denaturation of the DNA. The enzyme was also cloned, sequenced, modified (to produce the shorter 'Stoffel fragment'), and produced in large quantities for commercial sale. In 1989, Science magazine named Taq polymerase its first "Molecule of the Year". In 1993, Mullis was awarded the Nobel Prize in Chemistry for his work with PCR. Other enzymes The high temperature optimum of T. aquaticus enzymes allows researchers to study reactions under conditions at which other enzymes lose activity. Other enzymes isolated from this organism include DNA ligase, alkaline phosphatase, NADH oxidase, isocitrate dehydrogenase, amylomaltase, and fructose 1,6-bisphosphate-dependent L-lactate dehydrogenase. Controversy The commercial use of enzymes from T. aquaticus has not been without controversy. After Brock's studies, samples of the organism were deposited in the American Type Culture Collection, a public repository. Other scientists, including those at Cetus, obtained it from there. As the commercial potential of Taq polymerase became apparent in the 1990s, the National Park Service labeled its use the "Great Taq Rip-off". Researchers working in National Parks are now required to sign "benefits sharing" agreements that would send a portion of later profits back to the Park Service.
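The practical value of a polymerase that survives the 95 °C denaturation step can be illustrated with a toy amplification model. This sketch is purely illustrative and not from the article; the function name and the per-cycle efficiency parameter are assumptions:

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Toy model of PCR amplification. Each thermal cycle (denature at
    ~95 degrees C, anneal primers, extend at ~72 degrees C) at most doubles the
    number of template copies. Because Taq polymerase withstands the
    denaturation step, one enzyme aliquot serves every cycle; a
    heat-labile polymerase would have to be replenished each cycle.
    `efficiency` is the fraction of templates successfully copied per
    cycle (1.0 = ideal doubling)."""
    copies = initial_copies
    for _ in range(cycles):
        copies *= (1 + efficiency)
    return copies

print(pcr_copies(1, 30))  # ideal case: 2**30, roughly a billion copies
```

Even at sub-ideal efficiency the growth is exponential, which is why PCR can amplify a handful of template molecules into detectable quantities.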
Biology and health sciences
Gram-negative bacteria
Plants
542465
https://en.wikipedia.org/wiki/Locus%20%28mathematics%29
Locus (mathematics)
In geometry, a locus (plural: loci) (Latin word for "place", "location") is the set of all points (commonly forming a line, a line segment, a curve or a surface) whose location satisfies or is determined by one or more specified conditions. The set of the points that satisfy some property is often called the locus of a point satisfying this property. The use of the singular in this formulation is evidence that, until the end of the 19th century, mathematicians did not consider infinite sets. Instead of viewing lines and curves as sets of points, they viewed them as places where a point may be located or may move. History and philosophy Until the beginning of the 20th century, a geometrical shape (for example a curve) was not considered as an infinite set of points; rather, it was considered as an entity on which a point may be located or on which it moves. Thus a circle in the Euclidean plane was defined as the locus of a point that is at a given distance from a fixed point, the center of the circle. In modern mathematics, similar concepts are more frequently reformulated by describing shapes as sets; for instance, one says that the circle is the set of points that are at a given distance from the center. In contrast to the set-theoretic view, the old formulation avoids considering infinite collections, as avoiding the actual infinite was an important philosophical position of earlier mathematicians. Once set theory became the universal basis over which the whole of mathematics is built, the term locus became rather old-fashioned. Nevertheless, the word is still widely used, mainly for a concise formulation, for example: Critical locus, the set of the critical points of a differentiable function. Zero locus or vanishing locus, the set of points where a function vanishes, that is, where it takes the value zero. Singular locus, the set of the singular points of an algebraic variety.
Connectedness locus, the subset of the parameter set of a family of rational functions for which the Julia set of the function is connected. More recently, techniques such as the theory of schemes, and the use of category theory instead of set theory to give a foundation to mathematics, have returned to notions more like the original definition of a locus as an object in itself rather than as a set of points. Examples in plane geometry Examples from plane geometry include: The set of points equidistant from two points is the perpendicular bisector of the line segment connecting the two points. The set of points equidistant from two intersecting lines is the union of their two angle bisectors. All conic sections are loci: Circle: the set of points at constant distance (the radius) from a fixed point (the center). Parabola: the set of points equidistant from a fixed point (the focus) and a line (the directrix). Hyperbola: the set of points for each of which the absolute value of the difference between the distances to two given foci is a constant. Ellipse: the set of points for each of which the sum of the distances to two given foci is a constant. Other examples of loci appear in various areas of mathematics. For example, in complex dynamics, the Mandelbrot set is a subset of the complex plane that may be characterized as the connectedness locus of a family of polynomial maps. Proof of a locus To prove a geometric shape is the correct locus for a given set of conditions, one generally divides the proof into two stages: the proof that all the points that satisfy the conditions are on the given shape, and the proof that all the points on the given shape satisfy the conditions. Examples First example Find the locus of a point P that has a given ratio of distances k = d1/d2 to two given points. In this example k = 3, A(−1, 0) and B(0, 2) are chosen as the fixed points. P(x, y) is a point of the locus if and only if |PA| = 3|PB|, that is, |PA|^2 = 9|PB|^2, i.e. (x + 1)^2 + y^2 = 9(x^2 + (y − 2)^2), which simplifies to 8(x^2 + y^2) − 2x − 36y + 35 = 0, or equivalently (x − 1/8)^2 + (y − 9/4)^2 = 45/64. This equation represents a circle with center (1/8, 9/4) and radius (3/8)√5.
It is the circle of Apollonius defined by these values of k, A, and B. Second example A triangle ABC has a fixed side [AB] with length c. Determine the locus of the third vertex C such that the medians from A and C are orthogonal. Choose an orthonormal coordinate system such that A(−c/2, 0), B(c/2, 0). C(x, y) is the variable third vertex. The midpoint of [BC] is M((2x + c)/4, y/2). The median from C has slope y/x. The median AM has slope 2y/(2x + 3c). C(x, y) is a point of the locus if and only if the medians from A and C are orthogonal, that is, if and only if the product of their slopes is −1: (y/x) · (2y/(2x + 3c)) = −1, which simplifies to 2y^2 = −x(2x + 3c), i.e. x^2 + y^2 + (3c/2)x = 0, or equivalently (x + 3c/4)^2 + y^2 = (3c/4)^2. The locus of the vertex C is a circle with center (−3c/4, 0) and radius 3c/4. Third example A locus can also be defined by two associated curves depending on one common parameter. If the parameter varies, the intersection points of the associated curves describe the locus. In the figure, the points K and L are fixed points on a given line m. The line k is a variable line through K. The line l through L is perpendicular to k. The angle between k and m is the parameter. k and l are associated lines depending on the common parameter. The variable intersection point S of k and l describes a circle. This circle is the locus of the intersection point of the two associated lines. Fourth example A locus of points need not be one-dimensional (as a circle, line, etc.). For example, the locus of the points satisfying a strict linear inequality of the form ax + by + c < 0 (with b > 0) is two-dimensional: it is the open half-plane below the line ax + by + c = 0.
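The first example can be checked numerically: every point on the circle of Apollonius with center (1/8, 9/4) should satisfy d1/d2 = 3. The following Python sketch (the sampling approach and variable names are my own, not from the article) samples points on that circle; the radius works out to √45/8:

```python
from math import cos, sin, dist, pi, sqrt, isclose

# Locus from the first example: points P with |PA| / |PB| = 3,
# for the fixed points A = (-1, 0) and B = (0, 2).
A, B = (-1.0, 0.0), (0.0, 2.0)
centre, radius = (1 / 8, 9 / 4), sqrt(45) / 8

# Sample 16 points around the circle and check the distance ratio.
for t in [i * pi / 8 for i in range(16)]:
    P = (centre[0] + radius * cos(t), centre[1] + radius * sin(t))
    assert isclose(dist(P, A) / dist(P, B), 3.0)  # ratio k = 3 holds

print("all sampled points satisfy d1/d2 = 3")
```

This is the easy half of a locus proof (points on the shape satisfy the condition); the converse direction is what the algebraic derivation establishes.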
Mathematics
Other
null
543106
https://en.wikipedia.org/wiki/Enzyme%20unit
Enzyme unit
The enzyme unit, or international unit for enzyme (symbol U, sometimes also IU), is a unit of an enzyme's catalytic activity. 1 U (μmol/min) is defined as the amount of the enzyme that catalyzes the conversion of one micromole of substrate per minute under the specified conditions of the assay method. The specified conditions will usually be the optimum conditions, including but not limited to temperature, pH, and substrate concentration, that yield the maximal substrate conversion rate for that particular enzyme. In many assay methods, a temperature of 25 °C is conventionally used. The enzyme unit was adopted by the International Union of Biochemistry in 1964. Since the minute is not an SI base unit of time, the enzyme unit is discouraged in favor of the katal, the unit recommended by the General Conference on Weights and Measures in 1978 and officially adopted in 1999. One katal is the enzyme activity that converts one mole of substrate per second under specified assay conditions. Hence 1 U = 1 μmol/min = 1/60 μmol/s ≈ 16.67 nmol/s, and since 1 nkat = 1 nmol/s, it follows that 1 U ≈ 16.67 nkat. While the katal may be recommended, almost all scientific research today still uses the minute-based unit, for the simple reason that enzyme assays are measured in minutes, not seconds. The concept of the enzyme unit should not be confused with that of the international unit (IU). Although it is true that 1 U = 1 IU (because, for many enzymes, the existing U was adopted as the later IU), international units can be defined for the biologic activity of many other kinds of substance besides enzymes (for example, vitamins and hormones).
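The U-to-katal conversion above is a single scale factor, which a short helper makes explicit. This is a minimal Python sketch; the function and constant names are illustrative, not standard:

```python
# 1 U = 1 umol/min = 1000 nmol / 60 s, so approximately 16.67 nkat per U.
NKAT_PER_U = 1000.0 / 60.0

def u_to_nkat(units):
    """Convert enzyme units (umol of substrate per minute) to nanokatal (nmol/s)."""
    return units * NKAT_PER_U

def nkat_to_u(nkat):
    """Convert nanokatal (nmol/s) back to enzyme units (umol/min)."""
    return nkat / NKAT_PER_U

print(round(u_to_nkat(1), 2))  # 16.67, matching 1 U ~= 16.67 nkat
```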
Physical sciences
Catalytic activity
Basics and measurement
543301
https://en.wikipedia.org/wiki/Marten
Marten
A marten is a weasel-like mammal in the genus Martes within the subfamily Guloninae, in the family Mustelidae. They have bushy tails and large paws with partially retractile claws. The fur varies from yellowish to dark brown, depending on the species; it is valued by animal trappers for the fur trade. Martens are slender, agile animals, which are adapted to living in the taiga, and inhabit coniferous and northern deciduous forests across the Northern Hemisphere. Classification Results of DNA research indicate that the genus Martes is paraphyletic, with some studies placing Martes americana outside the genus and allying it with Eira and Gulo, to form a New World clade. The genus first evolved up to seven million years ago during the Miocene epoch. Fossils Several fossil martens have been described, including: †Martes campestris (Pliocene) †Martes wenzensis (Pliocene) †Martes vetus (Pleistocene) Another described fossil species, Martes nobilis from the Holocene, is now considered synonymous with the American marten. Etymology The Modern English "marten" comes from a Middle English word borrowed from Anglo-French and Old French, themselves from a Germanic source; cognate forms existed in Old English, Old Norse, Old High German, and Yiddish. Ecology and behaviour Martens are solitary animals, meeting only to breed in late spring or early summer. Litters of up to five blind and nearly hairless kits are born in early spring. They are weaned after around two months, and leave the mother to fend for themselves at about three to four months of age. They are omnivorous. Spatial niche segregation The stone marten and the pine marten segregate spatially where they occur in sympatry. This spatial niche segregation is due to differences in their food preferences, adaptability to cold climates, and avoidance of predators.
The spatial niche segregation between stone and pine martens is also influenced by each species' habitat preferences and resource availability within specific ecosystems. Studies in Belarus show that the pine marten is more densely distributed in clay-rich, biodiverse woodlands, whereas the stone marten is adapted to habitats with greater resource limitations, such as sandy soils, where it relies more on seasonally available resources such as berries and carrion to meet its dietary needs. In Ireland and Italy, the pine marten displays seasonal stability in home ranges within well-resourced habitats, suggesting that resource abundance can enhance spatial exclusivity and reduce direct competition between species. In human culture Canada The marten is common in the northern Ontario community of Big Trout Lake. During the fur trade driven by the Hudson's Bay Company in the 18th and 19th centuries, the marten pelt was typically fashioned into mittens. The marten is still traded locally. The locals place a high value on this pelt, typically trading it for consumable goods. Croatia In the Middle Ages, marten pelts were highly valued goods used as a form of payment in Slavonia, the Croatian Littoral, and Dalmatia. The marturina was a form of tax named after this. The banovac, a coin struck and used between 1235 and 1384, included the image of a marten. This is one of the reasons why the Croatian word for marten, kuna, was the name of the former Croatian currency. A marten is depicted on the obverse of the 1-, 2-, and 5-kuna coins, minted since 1993, and on the reverse of the 25-kuna commemorative coins. With the adoption of the euro as the national currency in 2023, a marten continues to be depicted on the obverse of the Croatian 1 euro coin. A running marten is shown on the coat of arms of Slavonia and subsequently on the modern design of the coat of arms of Croatia.
The official seal of the Croatian Parliament from 1497 until the late 18th century had a similar design. Finland The Finnish communications company Nokia derives its name, via the river Nokianvirta, from a type of marten locally known as the nokia. Greece In the Iliad, the fleet-footed spy Dolon wore a marten-pelt cap. Italy The Latin word for helmet, galea, originally meant "marten pelt", although it is unclear whether early Romans wore these helmets for symbolic reasons or for their fine fur.
Biology and health sciences
Carnivora
null
543466
https://en.wikipedia.org/wiki/Siberian%20tiger
Siberian tiger
The Siberian tiger or Amur tiger is a population of the tiger subspecies Panthera tigris tigris native to Northeast China, the Russian Far East, and possibly North Korea. It once ranged throughout the Korean Peninsula, but currently inhabits mainly the Sikhote-Alin mountain region in south-west Primorye Province in the Russian Far East. In 2005, there were 331–393 adult and subadult Siberian tigers in this region, with a breeding adult population of about 250 individuals. The population had been stable for more than a decade because of intensive conservation efforts, but partial surveys conducted after 2005 indicate that the Russian tiger population was declining. An initial census held in 2015 indicated that the Siberian tiger population had increased to 480–540 individuals in the Russian Far East, including 100 cubs. This was followed up by a more detailed census which revealed there was a total population of 562 wild Siberian tigers in Russia. As of 2014, about 35 individuals were estimated to range in the international border area between Russia and China. The Siberian tiger is genetically close to the now-extinct Caspian tiger. Results of a phylogeographic study comparing mitochondrial DNA from Caspian tigers and living tiger populations indicate that the common ancestor of the Siberian and Caspian tigers colonized Central Asia from eastern China, via the Gansu−Silk Road corridor, and then subsequently traversed Siberia eastward to establish the Siberian tiger population in the Russian Far East. The Caspian and Siberian tiger populations were the northernmost in mainland Asia. The Siberian tiger was also called "Amur tiger", "Manchurian tiger", "Korean tiger", and "Ussurian tiger", depending on the region where individuals were observed. Taxonomy Felis tigris was the scientific name proposed by Carl Linnaeus in 1758 for the tiger. 
In the 19th century, several tiger specimens were collected in East Asia and described: Felis tigris altaicus proposed by Coenraad Jacob Temminck in 1844 were tiger skins with long hairs and dense coats sold in Japan, which originated in Korea, most likely from animals killed in the Altai and Pisihan Mountains. Tigris longipilis proposed by Leopold Fitzinger in 1868 was based on a long-haired tiger skin in the Natural History Museum, Vienna. Felis tigris var. amurensis proposed by Charles Dode in 1871 was based on tiger skins from the Amur region. Felis tigris coreensis by Emil Brass in 1904 was a tiger skin from Korea. The validity of several tiger subspecies was questioned in 1999. Most putative subspecies described in the 19th and 20th centuries were distinguished on the basis of fur length and colouration, striping patterns and body size – characteristics that vary widely within populations. Morphologically, tigers from different regions vary little, and gene flow between populations in those regions is considered to have been possible during the Pleistocene. Therefore, it was proposed to recognize only two tiger subspecies as valid, namely Panthera tigris tigris in mainland Asia, and P. t. sondaica in the Greater Sunda Islands and possibly in Sundaland. In 2015, morphological, ecological and molecular traits of all putative tiger subspecies were analysed in a combined approach. Results support distinction of the two evolutionary groups: continental and Sunda tigers. The authors proposed recognition of only two subspecies: namely P. t. tigris comprising the Bengal, Malayan, Indochinese, South China, Siberian and Caspian tiger populations; and P. t. sondaica comprising the Javan, Bali and Sumatran tiger populations. In 2017, the Cat Specialist Group revised felid taxonomy and now recognizes all the tiger populations in mainland Asia as P. t. tigris. 
Phylogeny Several reports have been published since the 1990s on the genetic makeup of the Siberian tiger and its relationship to other populations. One of the most important outcomes has been the discovery of low genetic variability in the wild population, especially when it comes to maternal or mitochondrial DNA lineages. It seems that a single mtDNA haplotype almost completely dominates the maternal lineages of wild Siberian tigers. On the other hand, captive tigers appear to show higher mtDNA diversity. This may suggest that the subspecies has experienced a very recent genetic bottleneck caused by human pressure, with the founders of the captive population having been captured when genetic variability was higher in the wild. At the start of the 21st century, researchers from the University of Oxford, U.S. National Cancer Institute and Hebrew University of Jerusalem collected tissue samples from 20 of 23 Caspian tiger specimens kept in museums across Eurasia. They sequenced at least one segment of five mitochondrial genes and found a low amount of variability of the mitochondrial DNA in Caspian tigers as compared to other tiger subspecies. They re-assessed the phylogenetic relationships of tiger subspecies and observed a remarkable similarity between Caspian and Siberian tigers, indicating that the Siberian tiger is the genetically closest living relative of the Caspian tiger, which strongly implies a very recent common ancestry. Based on phylogeographic analysis, they suggested that the ancestor of Caspian and Siberian tigers colonized Central Asia less than 10,000 years ago via the Gansu−Silk Road region from eastern China, and subsequently traversed eastward to establish the Siberian tiger population in the Russian Far East. The events of the Industrial Revolution may have been the critical factor in the reciprocal isolation of Caspian and Siberian tigers from what was likely a single contiguous population. 
Samples of 95 wild Amur tigers were collected throughout their native range to investigate questions relative to population genetic structure and demographic history. Additionally, targeted individuals from the North American ex situ population were sampled to assess the genetic representation found in captivity. Population genetic and Bayesian structure analyses clearly identified two populations separated by a development corridor in Russia. Despite their well-documented 20th century decline, the researchers failed to find evidence of a recent population bottleneck, although genetic signatures of a historical contraction were detected. This disparity in signal may be due to several reasons, including historical paucity in population genetic variation associated with postglacial colonisation and potential gene flow from an extirpated Chinese population. The extent and distribution of genetic variation in captive and wild populations were similar, yet gene variants persisted ex situ that were lost in situ. Overall, their results indicate the need to secure ecological connectivity between the two Russian populations to minimize loss of genetic diversity and overall susceptibility to stochastic events, and support a previous study suggesting that the captive population may be a reservoir of gene variants lost in situ. In 2013, the whole genome of the Siberian tiger was sequenced and published. Tigers in mainland Asia fall into two clades: the northern clade comprises the Siberian and Caspian tiger populations, and the southern clade all remaining continental tiger populations. A study published in 2018 was based on 32 tiger specimens using a whole-genome sequencing for analysis. Results support six monophyletic tiger clades and indicate that the most recent common ancestor lived about 110,000 years ago. Characteristics The tiger is reddish-rusty, or rusty-yellow in colour, with narrow black transverse stripes. 
The body length is not less than , condylobasal length of skull , zygomatic width , and length of upper carnassial tooth over long. It has an extended supple body standing on rather short legs with a fairly long tail. Body size In the 1980s, the typical weight range of wild Siberian tigers was indicated as for males and for females. Exceptionally large individuals were targeted and shot by hunters. In 2005, a group of Russian, American and Indian zoologists published an analysis of historical and contemporary data on body weights of wild and captive tigers, both female and male across all subspecies. The data used include weights of tigers that were older than 35 months and measured in the presence of authors. Their comparison with historical data indicates that up to the first half of the 20th century both male and female Siberian tigers were on average heavier than post-1970 ones. The average historical wild male Siberian tiger weighed and the female ; the contemporary wild male Siberian tiger weighs on average with an asymptotic limit being ; a wild female weighs on average. Historical Siberian tigers and Bengal tigers were the largest ones, whereas contemporary Siberian tigers are on average lighter than Bengal tigers. The reduction of the body weight of today's Siberian tigers may be explained by concurrent causes, namely the reduced abundance of prey because of illegal hunting and that the individuals were usually sick or injured and captured in a conflict situation with people. Measurements taken by scientists of the Siberian Tiger Project in the Sikhote-Alin range from in head and body length measured in straight line, with an average of for males; and for females ranging from with an average of . The average tail measures in males and in females. The longest male measured in total length including a tail of and with a chest girth of . The longest female measured in total length including tail of and with a chest girth of . 
A male captured by members of the Siberian Tiger Project weighed , and the largest radio-collared male weighed . The Siberian tiger is often considered to be the largest tiger. A wild male, killed in Manchuria by the Sungari River in 1943, reportedly measured "over the curves", with a tail length of about . It weighed about . Dubious sources mention weights of and even . Skull The skull of the Siberian tiger is characterized by its large size. The facial region is very powerful and very broad in the region of the canines. The skull prominences, especially in the sagittal crest and crista occipitalis, are very high and strong in old males, and often much more massive than usually observed in the biggest skulls of Bengal tigers. The size variation in skulls of Siberian tigers ranges from in nine individuals measured. A female skull is always smaller and never as heavily built and robust as that of a male. The height of the sagittal crest in its middle part reaches as much as , and in its posterior part up to . Female skulls range from . The skulls of male Caspian tigers from Turkestan had a maximum length of , while that of females measured . A tiger killed on the Sumbar River in Kopet Dag in January 1954 had a greatest skull length of , which is considerably more than the known maximum for this population and slightly exceeds that of most Siberian tigers. However, its condylobasal length was only , smaller than those of the Siberian tigers, with a maximum recorded condylobasal length of . The biggest skull of a Siberian tiger from northeast China measured in length, which is about more than the maximum skull lengths of tigers from the Amur region and northern India, with the exception of a skull of a northern Indian tiger from the vicinity of Nagina, which measured "over the bone". Fur and coat The ground colour of Siberian tigers' pelage is often very pale, especially in winter coat. However, variations within populations may be considerable. 
Individual variation is also found in form, length, and partly in colour, of the dark stripes, which have been described as being dark brown rather than black. The fur of the Siberian tiger is moderately thick, coarse and sparse compared to that of other felids living in the former Soviet Union. Compared to the extinct westernmost populations, the Siberian tiger's summer and winter coats contrast sharply with other subspecies. Generally, the coat of western populations was brighter and more uniform than that of the Far Eastern populations. The summer coat is coarse, while the winter coat is denser, longer, softer, and silkier. The winter fur often appears quite shaggy on the trunk and is markedly longer on the head, almost covering the ears. Siberian and Caspian tigers had the thickest fur amongst tigers. The whiskers and hair on the back of the head and the top of the neck are also greatly elongated. The background colour of the winter coat is generally less bright and rusty compared to that of the summer coat. Because of the winter fur's greater length, the stripes appear broader with less defined outlines. The summer fur on the back is long, along the top of the neck, on the abdomen, and on the tail. The winter fur on the back is , on the top of the neck, on the throat, on the chest and on the abdomen. The whiskers are . Distribution and habitat The Siberian tiger once inhabited much of the Korean Peninsula, Manchuria and other parts of north-eastern China, the eastern part of Siberia and the Russian Far East, perhaps as far west as Mongolia and the area of Lake Baikal, where the Caspian tiger also reportedly occurred. During the late Pleistocene and Holocene, it was likely connected to the South China tiger population through corridors in the Yellow River basin, before humans interrupted gene flow. Today, its range stretches south to north for almost the length of Primorsky Krai and into southern Khabarovsk Krai east and south of the Amur River. 
It also occurs within the Greater Xing'an Range, which crosses into Russia from China at several places in southwest Primorye. This region represents a merger zone of the East Asian temperate broadleaf and mixed forest and the taiga, resulting in a mosaic of forest types that vary in elevation and topography. Key habitats of the Siberian tiger are Korean pine forests with a complex composition and structure. The faunal complex of the region is represented by a mixture of Asian and boreal life forms. The ungulate complex is represented by seven species, with Manchurian wapiti, Siberian roe deer, and wild boar being the most common throughout the Sikhote-Alin mountains but rare in higher altitude spruce-fir forests. Sika deer are restricted to the southern half of the Sikhote-Alin mountains. Siberian musk deer and Amur moose are associated with the conifer forests and are near the southern limits of their distribution in the central Sikhote-Alin mountains. In 2005, the number of Amur tigers in China was estimated at 18–22, and 331–393 in the Russian Far East, comprising a breeding adult population of about 250, fewer than 100 likely to be sub-adults, more than 20 likely to be less than 3 years of age. More than 90% of the population occurred in the Sikhote Alin mountain region. An unknown number of tigers survive in the reserve areas around Baekdu Mountain, on the border between China and North Korea, based on tracks and sightings. In August 2012, a Siberian tiger with four cubs was recorded for the first time in northeastern China's Hunchun National Nature Reserve located in the vicinity of the international borders with Russia and North Korea. Camera-trap surveys carried out in the spring seasons of 2013 and 2014 revealed between 27 and 34 tigers along the China-Russian border. In April 2014, World Wide Fund for Nature personnel captured a video of a tigress with cubs in inland China. 
The tiger population in the Changbai Mountains dispersed westwards between 2003 and 2016. Camera trap surveys between 2013 and 2018 revealed about 55 Siberian tigers in four forested landscapes in northeastern China: Laoyeling, Zhangguangcai Range, Wandashan and Lesser Khingan Mountains. Feces, urine and hair were used to genetically identify 30 tigers in this region. However, only Laoyeling is thought to support a breeding population. Ecology and behavior Siberian tigers are known to travel up to over ecologically unbroken country. In 1992 and 1993, the maximum total population density of the Sikhote-Alin tiger population was estimated at 0.62 tigers in . The maximum adult population estimated in 1993 reached 0.3 tigers in , with a sex ratio averaging 2.4 females per male. These density values were much lower than what had been reported for other subspecies at the time. In 2004, dramatic changes in land tenure, population density, and reproductive output in the core area of the Sikhote-Alin Zapovednik Siberian Tiger Project were detected, suggesting that when tigers are well protected from human-induced mortality for long periods, the adult female population density increases significantly. When more adult females survived, the mothers shared their home ranges with their daughters once the daughters reached maturity. By 2007, population density of tigers was estimated at 0.8±0.4 tigers in in the southern part of Sikhote-Alin Zapovednik, and 0.6±0.3 tigers in in the central part of the protected area. Siberian tigers share habitat with Amur leopards (P. pardus orientalis), but in the Changbai Mountains have been recorded more often in lower elevations than leopards. 
Hunting and diet Prey species of the tiger include ungulates such as Manchurian wapiti (Cervus canadensis xanthopygus), Siberian musk deer (Moschus moschiferus), long-tailed goral (Naemorhedus caudatus), moose (Alces alces), Siberian roe deer (Capreolus pygargus), sika deer (Cervus nippon) and wild boar (Sus scrofa), and sometimes even small Asiatic black bears (Ursus thibetanus) and brown bears (Ursus arctos). Siberian tigers also take smaller prey like hares, rabbits, pikas and even salmon. Scat was collected along the international border between Russia and China between November 2014 and April 2015; 115 scat samples of nine tigers contained mainly remains of wild boar, sika deer and roe deer. Between January 1992 and November 1994, 11 tigers were captured, fitted with radio-collars and monitored for more than 15 months in the eastern slopes of the Sikhote-Alin mountain range. Results of this study indicate that their distribution is closely associated with the distribution of Manchurian wapiti, while the distribution of wild boar was not such a strong predictor for tiger distribution. Although they prey on both Siberian roe deer and sika deer, overlap of these ungulates with tigers was low. Distribution of moose was poorly associated with tiger distribution. The distribution of preferred habitat of key prey species was an accurate predictor of tiger distribution. Results of a three-year study on Siberian tigers indicate that the mean interval between their kills and estimated prey consumption varied across seasons: during 2009 to 2012, three adult tigers killed prey every 7.4 days in summer and consumed a daily average of ; in winter they killed more large-bodied prey, made kills every 5.7 days and consumed a daily average of . Interspecific predatory relationships Following a decrease of ungulate populations from 1944 to 1959, 32 cases of Amur tigers attacking both Ussuri brown (Ursus arctos lasiotus) and Ussuri black bears (U. 
thibetanus ussuricus) were recorded in the Russian Far East, and bear hair was found in several tiger scat samples. Tigers attack black bears less often than brown bears, as the latter live in more open habitats and are not able to climb trees. In the same time period, four cases of brown bears killing female tigers and young cubs were reported, both in disputes over prey and in self-defense. Tigers mainly feed on the bear's fat deposits, such as the back, hams and groin. When Amur tigers prey on brown bears, they usually target young and sub-adult bears, besides small female adults taken outside their dens, generally when lethargic from hibernation. Predation by tigers on denned brown bears was not detected during a study carried out between 1993 and 2002. Ussuri brown bears, along with the smaller black bears, constitute 2.1% of the Siberian tiger's annual diet, of which 1.4% are brown bears. The effect the presence of tigers has on brown bear behavior seems to vary. In the winters of 1970–1973, Yudakov and Nikolaev recorded two cases of bears showing no fear of tigers and another case of a brown bear changing path upon crossing tiger tracks. Other researchers have observed bears following tiger tracks to scavenge tiger kills and to potentially prey on tigers. Despite the threat of predation, some brown bears actually benefit from the presence of tigers by appropriating tiger kills that the bears may not be able to successfully hunt themselves. Brown bears generally prefer to contest the much smaller female tigers. During telemetry research in the Sikhote-Alin Nature Reserve, 44 direct confrontations between bears and tigers were observed, in which bears in general were killed in 22 cases, and tigers in 12 cases. There are reports of brown bears specifically targeting Amur leopards and tigers to appropriate their prey. In the Sikhote-Alin reserve, 35% of tiger kills were stolen by bears, with tigers either departing entirely or leaving part of the kill for the bear. 
Some studies show that bears frequently track down tigers to usurp their kills, with occasional fatal outcomes for the tiger. A report from 1973 describes twelve known cases of brown bears killing tigers, including adult males; in all cases the tigers were subsequently eaten by the bears. The relationship between the Amur tiger and the Himalayan black bear has not been specifically studied; publications on these species are mainly episodic, and survey data on the issue have been collected by different authors in selected areas, which does not give a complete picture of their interactions. Tigers depress wolf (Canis lupus) numbers, either to the point of localized extinction or to such low numbers as to make them a functionally insignificant component of the ecosystem. Wolves appear capable of escaping competitive exclusion from tigers only when human pressure decreases tiger numbers. In areas where wolves and tigers share ranges, the two species typically display a great deal of dietary overlap, resulting in intense competition. Wolf and tiger interactions are well documented in Sikhote-Alin, where until the beginning of the 20th century, very few wolves were sighted. Wolf numbers may have increased in the region after tigers were largely eliminated during the Russian colonisation in the late 19th century and early 20th century. This is corroborated by native inhabitants of the region claiming that they had no memory of wolves inhabiting Sikhote-Alin until the 1930s, when tiger numbers decreased. Today, wolves are considered scarce in tiger habitat, being found in scattered pockets, and usually seen travelling as loners or in small groups. First-hand accounts of interactions between the two species indicate that tigers occasionally chase wolves from their kills, while wolves will scavenge from tiger kills. Tigers are not known to prey on wolves, though there are four records of tigers killing wolves without consuming them. Recently released tigers are also said to hunt wolves. 
This competitive exclusion of wolves by tigers has been used by Russian conservationists to convince hunters in the Far East to tolerate the big cats, as they limit ungulate populations less than wolves do, and are effective in controlling wolf numbers. Siberian tigers also compete with the Eurasian lynx (Lynx lynx) and occasionally kill and eat them. Eurasian lynx remains have been found in the stomach contents of Siberian tigers in Russia. In March 2014, a dead lynx discovered in Bastak Nature Reserve bore evidence of predation by a Siberian tiger. The tiger apparently ambushed, pursued, and killed the lynx but only consumed it partially. This incident marks one of the first documented cases of a tiger preying on a lynx, and indicates that the tiger might have been more intent on eliminating a competitor than on catching prey. Reproduction and life cycle Siberian tigers mate at any time of the year. A female signals her receptiveness by leaving urine deposits and scratch marks on trees. She will spend 5 or 6 days with the male, during which she is receptive for three days. Gestation lasts from 3 to 3½ months. Litter size is normally two or four cubs, but there can be as many as six. The cubs are born blind in a sheltered den and are left alone when the female leaves to hunt for food. Cubs are divided equally between sexes at birth. However, by adulthood there are usually two to four females for every male. The female cubs remain with their mothers longer, and later they establish territories close to their original ranges. Males, on the other hand, travel unaccompanied and range farther earlier in their lives, making them more vulnerable to poachers and other tigers. A Siberian tiger family comprising an adult male, a female and three cubs was recorded in 2015. At 35 months of age, tigers are sub-adults. Males reach sexual maturity at the age of 48 to 60 months. The average lifespan for Siberian tigers ranges from 16 to 18 years. 
Wild individuals tend to live between 10 and 15 years, while in captivity individuals may live up to 25 years. Threats Results of genetic analysis of 95 wild Siberian tiger samples from Russia revealed that genetic diversity is low, with only 27–35 individuals having contributed to the current gene pool. Further exacerbating the problem is that more than 90% of the population occurred in the Sikhote-Alin mountain region. Tigers rarely move across the development corridor, which separates this sub-population from the much smaller sub-population in southwest Primorye province. The winter of 2006–2007 was marked by heavy poaching. Poaching of tigers and their wild prey species is considered to be driving the decline, although heavy snows in the winter of 2009 could have biased the data. In northern China's Huang Ni He National Nature Reserve, poachers primarily set snare traps, but there are not enough personnel to patrol this area throughout the year. In Hunchun National Nature Reserve, poaching of ungulate species impedes recovery of the tiger population. In the past After the dissolution of the Soviet Union, illegal deforestation and bribery of park rangers facilitated poaching of Siberian tigers. Local hunters had access to a formerly sealed-off lucrative Chinese market, and this once again put the region's tiger population at risk of extinction. While improvement in the local economy has led to greater resources being invested in conservation efforts, an increase in economic activity has led to an increased rate of development and deforestation. The major obstacle in preserving the tiger is the enormous territory individual tigers require; up to is needed by a single female and more for a single male. The Siberian tiger was once common in the Korean Peninsula. It was eradicated during the period of Korea under Japanese rule between 1910 and 1945. Conservation Tigers are included on CITES Appendix I, banning international trade. 
All tiger range states and countries with consumer markets have banned domestic trade as well. At the 14th Conference of the Parties to CITES in 2007, stronger enforcement measures were called for, as well as an end to tiger farming. In 1992, the Siberian Tiger Project was founded, with the aim of providing a comprehensive picture of the ecology of the Amur tiger and the role of tigers in the Russian Far East through scientific studies. By capturing and outfitting tigers with radio collars, their social structure, land use patterns, food habits, reproduction, mortality patterns and their relation with other inhabitants of the ecosystem, including humans, are studied. These data compilations will hopefully contribute toward minimizing poaching threats arising from traditional hunting. The Siberian Tiger Project has been productive in increasing local capacity to address human-tiger conflict with a Tiger Response Team, part of the Russian government's Inspection Tiger, which responds to all tiger-human conflicts; by continuing to enhance the large database on tiger ecology and conservation with the goal of creating a comprehensive Siberian tiger conservation plan; and by training the next generation of Russian conservation biologists. In August 2010, China and Russia agreed to enhance conservation and cooperation in protected areas in a transboundary area for Amur tigers. China has undertaken a series of public awareness campaigns, including celebration of the first Global Tiger Day in July 2010, and the International Forum on Tiger Conservation and Tiger Culture and China 2010 Hunchun Amur Tiger Culture Festival in August 2010. Reintroduction Inspired by findings that the Amur tiger is the closest relative of the Caspian tiger, there has been discussion whether the Amur tiger could be an appropriate subspecies for reintroduction into a safe place in Central Asia. The Amu-Darya Delta was suggested as a potential site for such a project. 
A feasibility study was initiated to investigate whether the area is suitable and whether such an initiative would receive support from relevant decision makers. A viable tiger population of about 100 animals would require at least of large tracts of contiguous habitat with rich prey populations. Such habitat is not presently available in the delta and so cannot be provided in the short term. The proposed region is therefore unsuitable for the reintroduction, at least at this stage of development. A second possible introduction site in Kazakhstan is the Ili River delta at the southern edge of Lake Balkhash. The delta is situated between the Saryesik-Atyrau Desert and the Taukum Desert and forms a large wetland of about . Until 1948, the delta was a refuge of the extinct Caspian tiger. Reintroduction of the Siberian tiger to the delta has been proposed. Large populations of wild boar inhabit the swamps of the delta. The reintroduction of the Bukhara deer, which was once an important prey species, is under consideration. The Ili delta is therefore considered a suitable site for introduction. In 2010, Russia exchanged two captive Siberian tigers for Persian leopards with the Iranian government, as conservation groups of both countries agreed on reintroducing these animals into the wild within the next five years. This issue is controversial, since only 30% of such releases have been successful. In addition, the Siberian tiger is not genetically identical to the Caspian tiger. Another difference is climatic: temperatures are higher in Iran than in Siberia. Introducing exotic species into a new habitat could inflict irreversible and unknown damage. In December 2010, one of the exchanged tigers died in Eram Zoo in Tehran. Nevertheless, the project has its defenders, and Iran has successfully reintroduced the Persian onager and Caspian red deer. 
In 2005, re-introduction was planned as part of the rewilding project at Pleistocene Park in the Kolyma River basin in northern Yakutia, Russia, provided the herbivore population has reached a size warranting the introduction of large predators. In captivity In recent years, captive breeding of tigers in China has accelerated to the point where the captive population of several tiger subspecies exceeds 4,000 animals. Three thousand specimens are reportedly held by 10–20 "significant" facilities, with the remainder scattered among some 200 facilities. This makes China home to the second largest captive tiger population in the world, after the U.S., which in 2005 had an estimated 4,692 captive tigers. In a census conducted by the U.S.-based Feline Conservation Federation, 2,884 tigers were documented as residing in 468 American facilities. In 1986, the Chinese government established the world's largest Siberian tiger breeding base, the Harbin Siberian Tiger Park, which was meant to build up a Siberian tiger gene pool and ensure the genetic diversity of the species. The Park and its existing tiger population would be further divided into two parts, one as the protective species for genetic management and the other as the ornamental species. When the Heilongjiang Northeast Tiger Forest Park was founded, it had only eight tigers, but at the park's then-current breeding rate, its tiger population was expected to exceed 1,000 by late 2010. In 2011, South Korea expected to receive three tigers that Russia had pledged to donate in 2009. Attacks on humans The Siberian tiger very rarely becomes a man-eater. Numerous cases of attacks on humans were recorded in the 19th century, occurring usually in central Asia excluding Turkmenistan, Kazakhstan and the Far East. 
Tigers were historically rarely considered dangerous unless provoked, though in the lower reaches of the Syr-Darya, a tiger reportedly killed a woman collecting firewood and an unarmed military officer whilst passing through reed thickets. Attacks on shepherds were recorded in the lower reaches of Ili. In the Far East, during the middle and late 19th century, attacks on people were recorded. In 1867 on the Tsymukha River, tigers killed 21 men and injured 6 others. In China's Jilin Province, tigers reportedly attacked woodsmen and coachmen, and occasionally entered cabins and dragged out both adults and children. According to the Japanese Police Bureau in Korea, in 1928 a tiger killed one human, whereas leopards killed three, wild boars four and wolves killed 48. Six cases were recorded in 20th century Russia of unprovoked attacks leading to man-eating behaviour. Provoked attacks are however more common, usually the result of botched attempts at capturing them. In December 1997, an injured Amur tiger attacked, killed and consumed two people. Both attacks occurred in the Bikin River valley. The anti-poaching task force Inspection Tiger investigated both deaths, tracked down and killed the tiger. In January 2002, a man was attacked by a tiger on a remote mountain road near Hunchun in Jilin province, China, near the borders of Russia and North Korea. He suffered compound fractures but managed to survive. When he sought medical attention, his story raised suspicions as Siberian tigers seldom attack humans. An investigation of the attack scene revealed that raw venison carried by the man was left untouched by the tiger. Officials suspected the man to be a poacher who provoked the attack. The following morning, tiger sightings were reported by locals along the same road, and a local TV station did an on-site coverage. 
The group found tiger tracks and blood spoor in the snow at the attack scene and followed them for approximately 2,500 meters, hoping to catch a glimpse of the animal. Soon, the tiger was seen ambling slowly ahead of them. As the team tried to get closer for a better camera view, the tiger suddenly turned and charged, causing the four to flee in panic. About an hour after that encounter, the tiger attacked and killed a 26-year-old woman on the same road. Authorities retrieved the body with the help of a bulldozer. By then, the tiger was found lying 20 meters away, weak and barely alive. It was successfully tranquilized and taken for examination, which revealed that the tiger was anemic and gravely injured by a poacher's snare around its neck, with the steel wire cutting deeply down to the vertebrae, severing both trachea and esophagus. Despite extensive surgery by a team of veterinarians, the tiger died of wound infection. Subsequent investigation revealed that the first victim was a poacher who set multiple snares that caught both the tiger and a deer. The man was later charged for poaching and harming endangered species. He served two years in prison. After being released from prison, he worked in clearing the forest of old snares. In an incident at the San Francisco Zoo in December 2007, a tiger escaped and killed a visitor, and injured two others. The animal was shot by the police. The zoo was widely criticized for maintaining only a fence around the tiger enclosure, while the international standard is . The zoo subsequently erected a taller barrier topped by an electric fence. One of the victims admitted to taunting the animal. Zookeepers in Anhui province and the cities of Shanghai and Shenzhen were attacked and killed in 2010. In January 2011, a tiger attacked and killed a tour bus driver at a breeding park in Heilongjiang province. 
Park officials reported that the bus driver violated safety guidelines by leaving the vehicle to check on the condition of the bus. In September 2013, a tiger mauled a zookeeper to death at a zoo in western Germany after the worker forgot to lock a cage door during feeding time. In July 2020, a female tiger attacked and killed a 55-year-old zookeeper at the Zürich Zoo in Switzerland. In culture The English name 'Siberian tiger' was coined by James Cowles Prichard in the 1830s. The name 'Amur tiger' was used in 1933 for Siberian tigers killed by the Amur River for an exhibition in the American Museum of Natural History. The Tungusic peoples considered the tiger a near-deity and often referred to it as "Grandfather" or "Old man". The Udege and Nani people call it "Amba". The Manchu considered the Siberian tiger as Hu Lin, the king. Since the tiger has a mark on its forehead that looks like the Chinese character for 'King' (王), or a similar character meaning "Great Emperor", it is revered by the Udege and Chinese people. The Siberian tiger is used in heraldic symbols throughout the area where it is indigenous.
https://en.wikipedia.org/wiki/Lorentz%20covariance
Lorentz covariance
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. It has also been described as "the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space". Lorentz covariance, a related concept, is a property of the underlying spacetime manifold. Lorentz covariance has two distinct, but closely related meanings: A physical quantity is said to be Lorentz covariant if it transforms under a given representation of the Lorentz group. According to the representation theory of the Lorentz group, these quantities are built out of scalars, four-vectors, four-tensors, and spinors. In particular, a Lorentz covariant scalar (e.g., the space-time interval) remains the same under Lorentz transformations and is said to be a Lorentz invariant (i.e., they transform under the trivial representation). An equation is said to be Lorentz covariant if it can be written in terms of Lorentz covariant quantities (confusingly, some use the term invariant here). The key property of such equations is that if they hold in one inertial frame, then they hold in any inertial frame; this follows from the result that if all the components of a tensor vanish in one frame, they vanish in every frame. This condition is a requirement according to the principle of relativity; i.e., all non-gravitational laws must make the same predictions for identical experiments taking place at the same spacetime event in two different inertial frames of reference. On manifolds, the words covariant and contravariant refer to how objects transform under general coordinate transformations. Both covariant and contravariant four-vectors can be Lorentz covariant quantities. 
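As a concrete illustration of the first meaning, a Lorentz invariant scalar: the sketch below numerically boosts an event and checks that the spacetime interval is unchanged. It uses one spatial dimension and units with c = 1; the event coordinates and boost velocity are arbitrary choices for demonstration:

```python
# Minimal numeric illustration of Lorentz invariance of a scalar:
# the spacetime interval s^2 = (ct)^2 - x^2 (signature + - - -,
# one spatial dimension, units with c = 1) is unchanged by a boost.
import math

def boost(ct, x, v):
    """Apply a Lorentz boost with velocity v (|v| < 1, c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (ct - v * x), gamma * (x - v * ct)

def interval(ct, x):
    """The Lorentz invariant spacetime interval s^2."""
    return ct * ct - x * x

ct, x = 5.0, 3.0
ct2, x2 = boost(ct, x, 0.6)
print(interval(ct, x), interval(ct2, x2))  # equal up to rounding
```

The coordinates of the event change under the boost, but the interval, a Lorentz scalar, does not, which is exactly the sense in which it "transforms under the trivial representation".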
Local Lorentz covariance, which follows from general relativity, refers to Lorentz covariance applying only locally in an infinitesimal region of spacetime at every point. There is a generalization of this concept to cover Poincaré covariance and Poincaré invariance.

Examples

In general, the (transformational) nature of a Lorentz tensor can be identified by its tensor order, which is the number of free indices it has. No indices implies it is a scalar, one implies that it is a vector, etc. Some tensors with a physical interpretation are listed below. The sign convention of the Minkowski metric η = diag(+1, −1, −1, −1) is used throughout the article.

Scalars
- Spacetime interval
- Proper time (for timelike intervals)
- Proper distance (for spacelike intervals)
- Mass
- Electromagnetism invariants
- D'Alembertian/wave operator

Four-vectors
- 4-displacement
- 4-position
- 4-gradient, which is the 4D partial derivative ∂_μ
- 4-velocity U^μ = dX^μ/dτ
- 4-momentum P^μ = m U^μ = (E/c, p), where m is the rest mass
- 4-current J^μ = (ρc, j)
- 4-potential A^μ = (φ/c, A)

Four-tensors
- Kronecker delta
- Minkowski metric (the metric of flat spacetime)
- Electromagnetic field tensor (using a metric signature of + − − −)
- Dual electromagnetic field tensor

Lorentz violating models

In standard field theory, there are very strict and severe constraints on marginal and relevant Lorentz violating operators within both QED and the Standard Model. Irrelevant Lorentz violating operators may be suppressed by a high cutoff scale, but they typically induce marginal and relevant Lorentz violating operators via radiative corrections. So, we also have very strict and severe constraints on irrelevant Lorentz violating operators. Since some approaches to quantum gravity lead to violations of Lorentz invariance, these studies are part of phenomenological quantum gravity. Lorentz violations are allowed in string theory, supersymmetry and Hořava–Lifshitz gravity.
Lorentz violating models typically fall into four classes:

- The laws of physics are exactly Lorentz covariant but this symmetry is spontaneously broken. In special relativistic theories, this leads to phonons, which are the Goldstone bosons. The phonons travel at less than the speed of light.
- Similar to the approximate Lorentz symmetry of phonons in a lattice (where the speed of sound plays the role of the critical speed), the Lorentz symmetry of special relativity (with the speed of light as the critical speed in vacuum) is only a low-energy limit of the laws of physics, which involve new phenomena at some fundamental scale. Bare conventional "elementary" particles are not point-like field-theoretical objects at very small distance scales, and a nonzero fundamental length must be taken into account. Lorentz symmetry violation is governed by an energy-dependent parameter which tends to zero as momentum decreases. Such patterns require the existence of a privileged local inertial frame (the "vacuum rest frame"). They can be tested, at least partially, by ultra-high energy cosmic ray experiments like the Pierre Auger Observatory.
- The laws of physics are symmetric under a deformation of the Lorentz or, more generally, the Poincaré group, and this deformed symmetry is exact and unbroken. This deformed symmetry is also typically a quantum group symmetry, which is a generalization of a group symmetry. Deformed special relativity is an example of this class of models. The deformation is scale dependent, meaning that at length scales much larger than the Planck scale, the symmetry looks pretty much like the Poincaré group. Ultra-high energy cosmic ray experiments cannot test such models.
- Very special relativity forms a class of its own; if charge-parity (CP) were an exact symmetry, a subgroup of the Lorentz group would be sufficient to give us all the standard predictions. This is, however, not the case, since CP violation is observed.
Models belonging to the first two classes can be consistent with experiment if Lorentz breaking happens at the Planck scale or beyond it, or even before it in suitable preonic models, and if Lorentz symmetry violation is governed by a suitable energy-dependent parameter. One then has a class of models which deviate from Poincaré symmetry near the Planck scale but still flow towards an exact Poincaré group at very large length scales. This is also true for the third class, which is furthermore protected from radiative corrections as one still has an exact (quantum) symmetry. Even though there is no evidence of the violation of Lorentz invariance, several experimental searches for such violations have been performed during recent years. A detailed summary of the results of these searches is given in the Data Tables for Lorentz and CPT Violation. Lorentz invariance is also violated in QFT at non-zero temperature. There is also growing evidence of Lorentz violation in Weyl semimetals and Dirac semimetals.
Physical sciences
Theory of relativity
Physics
544776
https://en.wikipedia.org/wiki/Soil%20liquefaction
Soil liquefaction
Soil liquefaction occurs when a cohesionless saturated or partially saturated soil substantially loses strength and stiffness in response to an applied stress such as shaking during an earthquake or other sudden change in stress condition, in which material that is ordinarily a solid behaves like a liquid. In soil mechanics, the term "liquefied" was first used by Allen Hazen in reference to the 1918 failure of the Calaveras Dam in California, where he described the mechanism of flow liquefaction of the embankment dam.

The phenomenon is most often observed in saturated, loose (low density or uncompacted), sandy soils. This is because a loose sand has a tendency to compress when a load is applied. Dense sands, by contrast, tend to expand in volume or 'dilate'. If the soil is saturated by water, a condition that often exists when the soil is below the water table or sea level, then water fills the gaps between soil grains ('pore spaces'). In response to the soil compressing, the pore water pressure increases and the water attempts to flow out from the soil to zones of low pressure (usually upward towards the ground surface). However, if the loading is rapidly applied and large enough, or is repeated many times (e.g., earthquake shaking, storm wave loading) such that the water does not flow out before the next cycle of load is applied, the water pressure may build to the extent that it exceeds the force (contact stresses) between the grains of soil that keep them in contact. These contacts between grains are the means by which the weight from buildings and overlying soil layers is transferred from the ground surface to layers of soil or rock at greater depths. This loss of soil structure causes the soil to lose its strength (the ability to transfer shear stress), and it may be observed to flow like a liquid (hence 'liquefaction').
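The grain-contact stress described above is the effective stress of Terzaghi's principle, σ' = σ − u: liquefaction corresponds to the pore pressure u rising to meet the total stress σ. A minimal Python sketch (the unit weights and depth are illustrative textbook-style assumptions, not values from the text):

```python
# Terzaghi's effective stress principle: sigma' = sigma_total - pore_pressure.
# Illustrative values for a saturated sand layer; not from any specific site.

SAT_UNIT_WEIGHT = 19.0e3    # N/m^3, typical saturated sand (assumed)
WATER_UNIT_WEIGHT = 9.81e3  # N/m^3, unit weight of water

def effective_stress(depth_m, excess_pore_pressure=0.0):
    """Vertical effective stress (Pa) at depth, water table at the surface."""
    total = SAT_UNIT_WEIGHT * depth_m          # total vertical stress
    hydrostatic = WATER_UNIT_WEIGHT * depth_m  # pore pressure at rest
    return total - hydrostatic - excess_pore_pressure

depth = 5.0  # m
at_rest = effective_stress(depth)  # grain-contact stress before shaking
# Shaking generates excess pore pressure; when it reaches the at-rest
# effective stress, grain contact is lost and the soil liquefies:
assert abs(effective_stress(depth, excess_pore_pressure=at_rest)) < 1e-6
```

The dense-sand case works the other way: dilation on shearing tends to reduce pore pressure, which is why dense deposits resist liquefaction.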
Although the effects of soil liquefaction have been long understood, engineers took more notice after the 1964 Alaska earthquake and 1964 Niigata earthquake. It was a major cause of the destruction produced in San Francisco's Marina District during the 1989 Loma Prieta earthquake, and in the Port of Kobe during the 1995 Great Hanshin earthquake. More recently, soil liquefaction was largely responsible for extensive damage to residential properties in the eastern suburbs and satellite townships of Christchurch during the 2010 Canterbury earthquake, and more extensively again during the Christchurch earthquakes of early and mid-2011. On 28 September 2018, an earthquake of 7.5 magnitude hit the Central Sulawesi province of Indonesia. Resulting soil liquefaction buried the suburb of Balaroa and Petobo village deep in mud. The government of Indonesia is considering designating the two neighborhoods of Balaroa and Petobo, which have been totally buried under mud, as mass graves.

The building codes in many countries require engineers to consider the effects of soil liquefaction in the design of new buildings and infrastructure such as bridges, embankment dams and retaining structures.

Technical definitions

Soil liquefaction occurs when the effective stress (shear strength) of soil is reduced to essentially zero. This may be initiated by either monotonic loading (i.e., a single, sudden occurrence of a change in stress – examples include an increase in load on an embankment or sudden loss of toe support) or cyclic loading (i.e., repeated changes in stress condition – examples include wave loading or earthquake shaking). In both cases, a soil in a saturated loose state that can generate significant pore water pressure on a change in load is the most likely to liquefy.
This is because loose soil has the tendency to compress when sheared, generating large excess porewater pressure as load is transferred from the soil skeleton to adjacent pore water during undrained loading. As pore water pressure rises, a progressive loss of strength of the soil occurs as effective stress is reduced. Liquefaction is more likely to occur in sandy or non-plastic silty soils but may in rare cases occur in gravels and clays (see quick clay).

A 'flow failure' may initiate if the strength of the soil is reduced below the stresses required to maintain the equilibrium of a slope or footing of a structure. This can occur due to monotonic loading or cyclic loading and can be sudden and catastrophic. A historical example is the Aberfan disaster. Casagrande referred to this type of phenomenon as 'flow liquefaction', although a state of zero effective stress is not required for it to occur.

'Cyclic liquefaction' is the state of soil when large shear strains have accumulated in response to cyclic loading. A typical reference strain for the approximate occurrence of zero effective stress is 5% double amplitude shear strain. This is a soil test-based definition, usually performed via cyclic triaxial, cyclic direct simple shear, or cyclic torsional shear type apparatus. These tests are performed to determine a soil's resistance to liquefaction by observing the number of cycles of loading at a particular shear stress amplitude required to induce failure, where failure is defined by the aforementioned shear strain criterion.

The term 'cyclic mobility' refers to the mechanism of progressive reduction of effective stress due to cyclic loading. This may occur in all soil types, including dense soils. However, on reaching a state of zero effective stress such soils immediately dilate and regain strength; thus, shear strains are significantly less than in a true state of soil liquefaction.
Occurrence

Liquefaction is more likely to occur in loose to moderately saturated granular soils with poor drainage, such as silty sands or sands and gravels containing impermeable sediments. During wave loading, usually cyclic undrained loading, e.g. seismic loading, loose sands tend to decrease in volume, which produces an increase in their pore water pressures and consequently a decrease in shear strength, i.e. reduction in effective stress. Deposits most susceptible to liquefaction are young (Holocene-age, deposited within the last 10,000 years) sands and silts of similar grain size (well-sorted), in beds at least metres thick, and saturated with water. Such deposits are often found along stream beds, beaches, dunes, and areas where windblown silt (loess) and sand have accumulated. Examples of soil liquefaction include quicksand, quick clay, turbidity currents and earthquake-induced liquefaction.

Depending on the initial void ratio, the soil material can respond to loading in either a strain-softening or a strain-hardening manner. Strain-softened soils, e.g., loose sands, can be triggered to collapse, either monotonically or cyclically, if the static shear stress is greater than the ultimate or steady-state shear strength of the soil. In this case flow liquefaction occurs, where the soil deforms at a low constant residual shear stress. If the soil strain-hardens, e.g., moderately dense to dense sand, flow liquefaction will generally not occur. However, cyclic softening can occur due to cyclic undrained loading, e.g., earthquake loading. Deformation during cyclic loading depends on the density of the soil, the magnitude and duration of the cyclic loading, and the amount of shear stress reversal. If stress reversal occurs, the effective shear stress could reach zero, allowing cyclic liquefaction to take place. If stress reversal does not occur, zero effective stress cannot occur, and cyclic mobility takes place.
The resistance of the cohesionless soil to liquefaction will depend on the density of the soil, confining stresses, soil structure (fabric, age and cementation), the magnitude and duration of the cyclic loading, and the extent to which shear stress reversal occurs.

Liquefaction potential: simplified empirical analysis

Three parameters are needed to assess liquefaction potential using the simplified empirical method:

- A measure of soil resistance to liquefaction: Standard Penetration Resistance (SPT), Cone Penetration Resistance (CPT), or shear wave velocity (Vs)
- The earthquake load, measured as the cyclic stress ratio (CSR)
- The capacity of the soil to resist liquefaction, expressed in terms of the cyclic resistance ratio (CRR)

Liquefaction potential: advanced constitutive model

The interaction between the solid skeleton and pore fluid flow has been considered by many researchers to model the material softening associated with the liquefaction phenomenon. The dynamic performance of saturated porous media depends on the soil-pore fluid interaction. When the saturated porous media is subjected to strong ground shaking, pore fluid movement relative to the solid skeleton is induced. The transient movement of pore fluid can significantly affect the redistribution of pore water pressure, which is generally governed by the loading rate, soil permeability, pressure gradient, and boundary conditions. It is well known that for a sufficiently high seepage velocity, the governing flow law in porous media is nonlinear and does not follow Darcy's law. This fact has been recently considered in the studies of soil-pore fluid interaction for liquefaction modeling. A fully explicit dynamic finite element method has been developed for a turbulent flow law. The governing equations have been expressed for saturated porous media based on the extension of the Biot formulation.
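The simplified empirical method mentioned above is commonly implemented with the Seed–Idriss relation CSR = 0.65·(a_max/g)·(σ_v/σ'_v)·r_d, compared against a CRR obtained from SPT/CPT/Vs correlations. A hedged Python sketch; the depth-reduction fit and all input values below are illustrative assumptions, not data from a real site:

```python
# Simplified cyclic stress ratio (Seed-Idriss form) and factor of safety
# against liquefaction. All numbers are illustrative placeholders.

def depth_reduction(z_m):
    """Linear r_d approximation valid for shallow depths (z <= ~9 m) in the
    classic simplified procedure; deeper profiles need a different fit."""
    return 1.0 - 0.00765 * z_m

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, z_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * depth_reduction(z_m)

# Example: 5 m depth, peak ground acceleration 0.3 g,
# total stress 95 kPa, effective stress 46 kPa (assumed values).
csr = cyclic_stress_ratio(a_max_g=0.3, sigma_v=95.0, sigma_v_eff=46.0, z_m=5.0)
crr = 0.25  # cyclic resistance ratio from an SPT correlation (assumed)
factor_of_safety = crr / csr
print(f"CSR = {csr:.3f}, FS = {factor_of_safety:.2f}")
# FS < 1 indicates liquefaction is predicted for this loading.
```

In practice the CRR curve, magnitude scaling factors, and r_d expression all come from published correlations; this sketch only shows how the three parameters combine.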
The elastoplastic behavior of soil under earthquake loading has been simulated using a generalized plasticity theory that is composed of a yield surface along with a non-associated flow rule.

Earthquake liquefaction

Pressures generated during large earthquakes can force underground water and liquefied sand to the surface. This can be observed at the surface as effects known alternatively as "sand boils", "sand blows" or "sand volcanoes". Such earthquake ground deformations can be categorized as primary deformation if located on or close to the ruptured fault, or distributed deformation if located at considerable distance from the ruptured fault. The other common observation is land instability – cracking and movement of the ground down slope or towards unsupported margins of rivers, streams, or the coast. The failure of ground in this manner is called 'lateral spreading' and may occur on very shallow slopes with angles only 1 or 2 degrees from the horizontal.

One positive aspect of soil liquefaction is the tendency for the effects of earthquake shaking to be significantly damped (reduced) for the remainder of the earthquake. This is because liquids do not support a shear stress and so once the soil liquefies due to shaking, subsequent earthquake shaking (transferred through ground by shear waves) is not transferred to buildings at the ground surface.

Studies of liquefaction features left by prehistoric earthquakes, called paleoliquefaction or paleoseismology, can reveal information about earthquakes that occurred before records were kept or accurate measurements could be taken. Soil liquefaction induced by earthquake shaking is a major contributor to urban seismic risk.

Effects

The effects of soil liquefaction on the built environment can be extremely damaging.
Buildings whose foundations bear directly on sand which liquefies will experience a sudden loss of support, which will result in drastic and irregular settlement of the building, causing structural damage, including cracking of foundations and damage to the building structure, or leaving the structure unserviceable even without structural damage. Where a thin crust of non-liquefied soil exists between building foundation and liquefied soil, a 'punching shear' type foundation failure may occur. Irregular settlement may break underground utility lines. The upward pressure applied by the movement of liquefied soil through the crust layer can crack weak foundation slabs and enter buildings through service ducts, and may allow water to damage building contents and electrical services.

Bridges and large buildings constructed on pile foundations may lose support from the adjacent soil and buckle or come to rest at a tilt. Sloping ground and ground next to rivers and lakes may slide on a liquefied soil layer (termed 'lateral spreading'), opening large ground fissures, and can cause significant damage to buildings, bridges, roads and services such as water, natural gas, sewerage, power and telecommunications installed in the affected ground. Buried tanks and manholes may float in the liquefied soil due to buoyancy. Earth embankments such as flood levees and earth dams may lose stability or collapse if the material comprising the embankment or its foundation liquefies.

Over geological time, liquefaction of soil material due to earthquakes could provide a dense parent material in which a fragipan may develop through pedogenesis.

Mitigation methods

Mitigation methods have been devised by earthquake engineers and include various soil compaction techniques such as vibro compaction (compaction of the soil by depth vibrators), dynamic compaction, and vibro stone columns.
These methods densify soil and enable buildings to avoid soil liquefaction. Existing buildings can be mitigated by injecting grout into the soil to stabilize the layer of soil that is subject to liquefaction. Another method, induced partial saturation (IPS), in which the degree of saturation of the soil is decreased, is now practicable to apply at larger scale.

Quicksand

Quicksand forms when water saturates an area of loose sand and the sand is agitated. When the water trapped in the batch of sand cannot escape, it creates liquefied soil that can no longer resist force. Quicksand can be formed by standing or (upwards) flowing underground water (as from an underground spring), or by earthquakes. In the case of flowing underground water, the force of the water flow opposes the force of gravity, causing the granules of sand to be more buoyant. In the case of earthquakes, the shaking force can increase the pressure of shallow groundwater, liquefying sand and silt deposits. In both cases, the liquefied surface loses strength, causing buildings or other objects on that surface to sink or fall over.

The saturated sediment may appear quite solid until a change in pressure or a shock initiates the liquefaction, causing the sand to form a suspension with each grain surrounded by a thin film of water. This cushioning gives quicksand, and other liquefied sediments, a spongy, fluidlike texture. Objects in the liquefied sand sink to the level at which the weight of the object is equal to the weight of the displaced sand/water mix, and the object floats due to its buoyancy.

Quick clay

Quick clay, known as Leda Clay in Canada, is a water-saturated gel, which in its solid form resembles highly sensitive clay. This clay has a tendency to change from a relatively stiff condition to a liquid mass when it is disturbed. This gradual change in appearance from solid to liquid is a process known as spontaneous liquefaction.
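The flotation condition described for quicksand is Archimedes' principle applied to the liquefied sand/water suspension: an object sinks until the displaced mix weighs as much as the object itself. A small Python sketch (the densities are rough illustrative ballpark figures, not values from the text):

```python
# Archimedes' principle in a liquefied sand/water suspension: the submerged
# fraction of a floating object is rho_object / rho_mix.
# Densities (kg/m^3) are rough illustrative figures, not measured values.

RHO_QUICKSAND = 1900.0  # liquefied sand/water mix (assumed)
RHO_HUMAN = 1000.0      # roughly the density of water
RHO_ROCK = 2700.0       # e.g., a granite boulder

def submerged_fraction(rho_object, rho_mix=RHO_QUICKSAND):
    """Fraction of the object's volume below the surface at equilibrium.
    Returns 1.0 (fully sunk) when the object is denser than the mix."""
    return min(rho_object / rho_mix, 1.0)

print(submerged_fraction(RHO_HUMAN))  # a person floats roughly half-submerged
print(submerged_fraction(RHO_ROCK))   # a dense boulder sinks completely
```

Because the liquefied mix is roughly twice as dense as water, a person cannot be fully swallowed by quicksand; denser objects such as boulders, however, do sink.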
The clay retains a solid structure despite its high water content (up to 80% by volume), because surface tension holds water-coated flakes of clay together. When the structure is broken by a shock or sufficient shear, it enters a fluid state.

Quick clay is found only in northern regions such as Russia, Canada, Alaska in the U.S., Norway, Sweden and Finland, which were glaciated during the Pleistocene epoch. Quick clay has been the underlying cause of many deadly landslides. In Canada alone, it has been associated with more than 250 mapped landslides. Some of these are ancient, and may have been triggered by earthquakes.

Turbidity currents

Submarine landslides are turbidity currents and consist of water-saturated sediments flowing downslope. An example occurred during the 1929 Grand Banks earthquake that struck the continental slope off the coast of Newfoundland. Minutes later, transatlantic telephone cables began breaking sequentially, further and further downslope, away from the epicenter. Twelve cables were snapped in a total of 28 places. The exact times and locations were recorded for each break. Investigators suggested that a 60-mile-per-hour (100 km/h) submarine landslide or turbidity current of water-saturated sediments swept 400 miles (600 km) down the continental slope from the earthquake's epicenter, snapping the cables as it passed.
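The Grand Banks speed estimate comes from dividing the distance between successive cable breaks by the interval between their recorded break times. A sketch with hypothetical break records (the distances and times below are invented for illustration; the real 1929 data differ):

```python
# Estimating a turbidity current's speed from sequentially breaking cables:
# speed between two breaks = distance between them / time between them.
# The break records below are hypothetical, for illustration only.

breaks = [
    # (distance downslope from epicenter in km, minutes after earthquake)
    (100.0, 60.0),
    (200.0, 125.0),
    (300.0, 200.0),
]

for (d1, t1), (d2, t2) in zip(breaks, breaks[1:]):
    speed_kmh = (d2 - d1) / ((t2 - t1) / 60.0)
    print(f"{d1:.0f}-{d2:.0f} km: {speed_kmh:.0f} km/h")
```

Applied to the actual 1929 records, this kind of calculation is what led investigators to the roughly 100 km/h figure, and it also shows the current decelerating as it ran downslope.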
Physical sciences
Geophysics
Earth science
544934
https://en.wikipedia.org/wiki/Klebsiella%20pneumoniae
Klebsiella pneumoniae
Klebsiella pneumoniae is a Gram-negative, non-motile, encapsulated, lactose-fermenting, facultative anaerobic, rod-shaped bacterium. It appears as a mucoid lactose fermenter on MacConkey agar. Although found in the normal flora of the mouth, skin, and intestines, it can cause destructive changes to human and animal lungs if aspirated, specifically to the alveoli, resulting in bloody, brownish or yellow colored jelly-like sputum. In the clinical setting, it is the most significant member of the genus Klebsiella of the Enterobacteriaceae. K. oxytoca and K. rhinoscleromatis have also been demonstrated in human clinical specimens. In recent years, Klebsiella species have become important pathogens in nosocomial infections.

It naturally occurs in the soil, and about 30% of strains can fix nitrogen in anaerobic conditions. As a free-living diazotroph, its nitrogen-fixation system has been much-studied, and is of agricultural interest, as K. pneumoniae has been demonstrated to increase crop yields in agricultural conditions. It is closely related to K. oxytoca, from which it is distinguished by being indole-negative and by its ability to grow on melezitose but not 3-hydroxybutyrate.

History

The genus Klebsiella was named after the German microbiologist Edwin Klebs (1834–1913). It is also known as Friedländer's bacillus in honor of Carl Friedländer, a German pathologist, who proposed that this bacterium was the etiological factor for the pneumonia seen especially in immunocompromised individuals such as people with chronic diseases or alcoholics. Community-acquired pneumonia caused by Klebsiella pneumoniae may occasionally be called Friedländer's pneumonia.

Epidemiology

The illness most commonly affects middle-aged and older men with debilitating diseases; men are affected more often than women.
This patient population is believed to have impaired respiratory host defenses, and includes persons with diabetes, alcoholism, malignancy, liver disease, chronic obstructive pulmonary disease, glucocorticoid therapy, kidney failure, and certain occupational exposures (such as papermill workers). Many of these infections are obtained when a person is in the hospital for some other reason (a nosocomial infection). In addition to pneumonia, Klebsiella can also cause infections in the urinary tract, lower biliary tract, and surgical wound sites. The range of clinical diseases includes pneumonia, thrombophlebitis, urinary tract infection, cholecystitis, diarrhea, upper respiratory tract infection, wound infection, osteomyelitis, meningitis, bacteremia, and sepsis. For patients with an invasive device in their bodies, contamination of the device becomes a risk; neonatal ward devices, respiratory support equipment, and urinary catheters put patients at increased risk. Also, the use of antibiotics can be a factor that increases the risk of nosocomial infection with Klebsiella bacteria. Sepsis and septic shock can follow entry of the bacteria into the blood.

Research conducted at King's College London has implicated molecular mimicry between HLA-B27 and two Klebsiella surface molecules as a possible cause of ankylosing spondylitis.

Klebsiella ranks second to E. coli for urinary tract infections in older people. It is also an opportunistic pathogen for patients with chronic pulmonary disease, enteric pathogenicity, nasal mucosa atrophy, and rhinoscleroma. New antibiotic-resistant strains of K. pneumoniae are appearing.

Klebsiella pneumonia

The most common condition caused by Klebsiella bacteria outside the hospital is pneumonia, typically in the form of bronchopneumonia, and also bronchitis. These patients have an increased tendency to develop lung abscesses, cavitation, empyema, and pleural adhesions. The death rate is around 50%, even with antimicrobial therapy.
Pathophysiology

Klebsiella pneumonia is typically due to aspiration, and alcoholism may be a risk factor, though the organism is also commonly implicated in hospital-acquired urinary tract infections and in individuals with COPD (chronic obstructive pulmonary disease). In the pathophysiology of Klebsiella pneumonia, the neutrophil myeloperoxidase defense against K. pneumoniae plays a prominent role. Oxidative inactivation of elastase is involved, while LBP (lipopolysaccharide-binding protein) helps transfer bacterial cell wall elements to the cells.

Signs and symptoms

Individuals with Klebsiella pneumonia tend to cough up a characteristic sputum, as well as having fever, nausea, tachycardia, and vomiting. Klebsiella pneumonia tends to affect people with underlying conditions, such as alcoholism.

Diagnosis

The following can be done to determine whether an individual has a Klebsiella pneumoniae infection, with the addition of susceptibility testing to identify drug-resistant organisms:

- Blood culture
- CBC (complete blood count)
- Sputum culture
- Radiography (chest)
- CT scan

Treatment

Treatment for Klebsiella pneumoniae is by antibiotics such as aminoglycosides, piperacillin/tazobactam, and cephalosporins, the choice depending upon antibiotic susceptibility testing, the person's health condition, medical history and severity of the disease. Klebsiella possesses a beta-lactamase giving it resistance to ampicillin. Many strains have acquired an extended-spectrum beta-lactamase with additional resistance to carbenicillin, amoxicillin, and ceftazidime. The bacteria remain susceptible to aminoglycosides and some cephalosporins, and varying degrees of inhibition of the beta-lactamase with clavulanic acid have been reported. Infections due to multidrug-resistant Gram-negative pathogens in the ICU have invoked the re-emergence of colistin. However, colistin-resistant strains of K. pneumoniae have been reported in ICUs. In 2009, strains of K.
pneumoniae carrying the gene for New Delhi metallo-beta-lactamase (NDM-1), which confers resistance even to the intravenous carbapenem antibiotics, were discovered in India and Pakistan. Klebsiella cases in Taiwan have shown unusual virulence, causing liver abscesses in people with diabetes mellitus (DM); treatment consists of third-generation cephalosporins.

Hypervirulent Klebsiella pneumoniae

Hypervirulent K. pneumoniae (hvKp) is a rather recent variant that is significantly more virulent than classical K. pneumoniae (cKp). While cKp is an opportunistic pathogen responsible for nosocomial infections that usually affect immunocompromised patients, hvKp is clinically more concerning since it also causes disease in healthy individuals and can infect virtually every site of the body. The genetic traits that lead to this pathotype are included in a large virulence plasmid and potentially on additional conjugative elements. These newly identified strains were described to overproduce capsule components and siderophores for iron acquisition, among other factors. Although initial studies showed that hvKp is rather susceptible to antibiotic treatment, it has recently been shown that such strains can acquire resistance plasmids and become multiresistant to a variety of antibiotics. hvKp originated in Asia and carries a high mortality rate. It often spreads to the central nervous system and eye, causing endophthalmitis, nonhepatic abscesses, pneumonia, necrotizing fasciitis, and meningitis. One visual trait of these strains is a hypermucoviscous phenotype, and a string test can be used to help the diagnosis. Further examinations and treatments are made on a case-by-case basis, as there are currently no international guidelines.

Transmission

To get a K. pneumoniae infection, a person must be exposed to the bacteria. In other words, K. pneumoniae must enter the respiratory tract to cause pneumonia, or the blood to cause a bloodstream infection. In healthcare settings, K.
pneumoniae bacteria can be spread through person-to-person contact (for example, via the contaminated hands of healthcare personnel or of other patients) or, less commonly, by contamination of the environment; the role of transmission directly from the environment to patients is controversial and requires further investigation. However, the bacteria are not spread through the air. Patients in healthcare settings also may be exposed to K. pneumoniae when they are on ventilators, or have intravenous catheters or wounds. These medical tools and conditions may allow K. pneumoniae to enter the body and cause infection.

Resistant strains

Klebsiella organisms are often resistant to multiple antibiotics. Current evidence implicates plasmids as the primary source of the resistance genes. Klebsiella species with the ability to produce extended-spectrum beta-lactamases (ESBL) are resistant to virtually all beta-lactam antibiotics, except carbapenems. Other frequent resistance targets include aminoglycosides, fluoroquinolones, tetracyclines, chloramphenicol, and trimethoprim/sulfamethoxazole.

Infection with carbapenem-resistant Enterobacteriaceae (CRE) or carbapenemase-producing Enterobacteriaceae is emerging as an important challenge in health-care settings. One of many CREs is carbapenem-resistant Klebsiella pneumoniae (CRKP). Over the past 10 years, a progressive increase in CRKP has been seen worldwide; however, this new emerging nosocomial pathogen is probably best known for an outbreak in Israel that began around 2006 within the healthcare system there. In the US, it was first described in North Carolina in 1996; since then CRKP has been identified in 41 states and is routinely detected in certain hospitals in New York and New Jersey. It is now the most common CRE species encountered within the United States.
CRKP is resistant to almost all available antimicrobial agents, and infections with CRKP have caused high rates of morbidity and mortality, in particular among persons with prolonged hospitalization and those critically ill and exposed to invasive devices (e.g., ventilators or central venous catheters). The concern is that carbapenems are often used as drugs of last resort when battling resistant bacterial strains. New slight mutations could result in infections for which healthcare professionals can do very little, if anything, to treat patients with resistant organisms.

A number of mechanisms cause carbapenem resistance in the Enterobacteriaceae. These include hyperproduction of AmpC beta-lactamase with an outer membrane porin mutation, CTX-M extended-spectrum beta-lactamase with a porin mutation or drug efflux, and carbapenemase production. The most important mechanism of resistance in CRKP is the production of a carbapenemase enzyme, KPC, encoded by the blaKPC gene. The blaKPC gene is carried on a mobile piece of genetic material (a transposon; the specific transposon involved is called Tn4401), which increases the risk for dissemination. CRE can be difficult to detect because some strains that harbor blaKPC have minimum inhibitory concentrations that are elevated, but still within the susceptible range for carbapenems. Because these strains are susceptible to carbapenems, they are not identified as potential clinical or infection control risks using standard susceptibility testing guidelines. Patients with unrecognized CRKP colonization have been reservoirs for transmission during nosocomial outbreaks. The extent and prevalence of CRKP within the environment is currently unknown. The mortality rate is also unknown, but has been observed to be as high as 44%.
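The detection problem described above comes down to breakpoint interpretation: a carbapenemase producer whose MIC is elevated yet still at or below the susceptible breakpoint is reported "S" and never flagged. A toy Python sketch; the breakpoint value and isolate MICs are illustrative placeholders, not CLSI guidance:

```python
# Toy illustration of why MIC-based categorization can miss carbapenemase
# producers: an elevated-but-below-breakpoint MIC is still reported
# "susceptible". The breakpoint and MICs below are made-up placeholders.

SUSCEPTIBLE_BREAKPOINT = 1.0  # ug/mL (hypothetical carbapenem breakpoint)

isolates = {
    "wild-type":    {"mic": 0.03, "carbapenemase": False},
    "KPC-producer": {"mic": 1.0,  "carbapenemase": True},  # elevated, yet <= breakpoint
}

for name, data in isolates.items():
    category = "S" if data["mic"] <= SUSCEPTIBLE_BREAKPOINT else "non-S"
    print(f"{name}: MIC={data['mic']} ug/mL -> reported {category}")

# The KPC producer is reported "S" despite carrying blaKPC -- hence the need
# for dedicated carbapenemase tests rather than susceptibility results alone.
```

This is why the CDC guidance below asks laboratories to test for carbapenemase production directly, not merely for nonsusceptibility.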
The Centers for Disease Control and Prevention released guidance for aggressive infection control to combat CRKP:
- Place all patients colonized or infected with carbapenemase-producing Enterobacteriaceae on contact precautions.
- Acute-care facilities are to establish a protocol, in conjunction with the guidelines of the Clinical and Laboratory Standards Institute, to detect nonsusceptibility and carbapenemase production in Enterobacteriaceae, in particular Klebsiella spp. and Escherichia coli, and immediately alert epidemiology and infection-control staff members if such organisms are identified.
- All acute-care facilities are to review microbiology records for the preceding 6–12 months to ensure that there have not been previously unrecognized CRE cases. If previously unrecognized cases are identified, a point prevalence survey (a single round of active surveillance cultures) is needed in units with patients at high risk (e.g., intensive-care units, units where previous cases have been identified, and units where many patients are exposed to broad-spectrum antimicrobials) to identify any additional patients colonized with carbapenem-resistant or carbapenemase-producing Klebsiella spp. and E. coli.
- When a case of hospital-associated CRE is identified, facilities should conduct a round of active surveillance testing of patients with epidemiologic links to the CRE case (e.g., patients in the same unit or patients cared for by the same healthcare personnel).
In 2019, there were 192,530 deaths globally attributed to resistant strains of Klebsiella pneumoniae.

Local outbreaks

Israel, 2007–2008. A nationwide outbreak of CRE in Israel peaked in March 2007 at 55.5 cases per 100,000 patient-days and necessitated a nationwide treatment plan. The intervention entailed physical separation of all CRE carriers and appointment of a task force to oversee the efficacy of isolation by closely monitoring hospitals and intervening when necessary.
After the treatment plan (measured in May 2008), the number of cases per 100,000 patient-days decreased to 11.7. The plan was effective because of strict hospital compliance, wherein each hospital was required to keep detailed documentation of all CRE carriers: for each 10% increase in compliance, the incidence per 100,000 patient-days decreased by 0.6. Containment on a nationwide scale therefore requires nationwide intervention.

Nevada, 2016. In mid-August 2016, a resident of Washoe County was hospitalized in Reno due to a CRE (specifically Klebsiella pneumoniae) infection. In early September of the same year, she developed septic shock and died. On testing by the CDC, an isolate from the patient was found to be resistant to all 26 antibiotics available in the US, including the drug of last resort, colistin. It is believed she may have acquired the microbe while hospitalized in India for two years due to a broken right femur and subsequent femur and hip infections.

Antimicrobial resistance gene transfer

Klebsiella pneumoniae carries a large number of antimicrobial resistance (AMR) genes. These genes are transferred via plasmids to and from other human pathogens. One human pathogen that commonly acquires AMR genes from Klebsiella pneumoniae is Salmonella; knowledge of these shared resistance genes can inform the treatment of Salmonella infections. The majority of AMR genes in Klebsiella pneumoniae are plasmid-borne. An example of a niche for such transfer is soil, often considered a hotspot for gene transfer. The table shows the number of AMR genes and plasmids (per strain or subspecies) compared with other common bacterial species.

Prevention

To prevent spreading Klebsiella infections between patients, healthcare personnel must follow specific infection-control precautions, which may include strict adherence to hand hygiene (preferably using an alcohol-based hand rub (60–90%) or soap and water if hands are visibly soiled.
Alcohol-based hand rubs are effective against these Gram-negative bacilli) and wearing gowns and gloves when entering rooms where patients with Klebsiella-related illnesses are housed. Healthcare facilities must also follow strict cleaning procedures to prevent the spread of Klebsiella. To prevent the spread of infections, patients should also clean their hands often, including:
- Before preparing or eating food
- Before touching their eyes, nose, or mouth
- Before and after changing wound dressings or bandages
- After using the restroom
- After blowing their nose, coughing, or sneezing
- After touching hospital surfaces such as bed rails, bedside tables, doorknobs, remote controls, or the phone

Treatment

K. pneumoniae can be treated with antibiotics if the infections are not drug-resistant. Infections by K. pneumoniae can be difficult to treat because fewer antibiotics are effective against them. In such cases, a microbiology laboratory must run tests to determine which antibiotics will treat the infection. More specific treatments of Klebsiella pneumonia are given in its section above. For urinary tract infections with multidrug-resistant Klebsiella species, combination therapy with amikacin and meropenem has been suggested.

Research

Multiple drug-resistant K. pneumoniae strains have been killed in vivo by intraperitoneal, intravenous, or intranasal administration of phages in laboratory tests. Resistance to phages is unlikely to be as troublesome as resistance to antibiotics, since new infectious phages are likely to be available in environmental reservoirs. Phage therapy can be used in conjunction with antibiotics, to supplement their activity rather than replace it altogether.

Vaccine development

New data sources outlining the global burden of K. pneumoniae and its drug-resistant forms are expected to build momentum for prophylactic vaccine development. The 2022 IHME study showed that in 2019 K.
pneumoniae was responsible for 790,000 deaths [571,000–1,060,000] in all age groups across 11 infectious syndromes. Importantly, in sub-Saharan Africa K. pneumoniae was responsible for 124,000 [89,000–167,000] neonatal deaths due to bloodstream infections. Based on these and other data, a newly developed prophylactic vaccine would ideally be designed not only to prevent invasive K. pneumoniae disease in vulnerable persons but also to serve as a maternal vaccine preventing neonatal sepsis; global demand assessments have been published. As of June 2023, a single clinical development program for a K. pneumoniae vaccine [Kleb4V/GSK4429016A] was in a Phase 1/2 study in healthy adults aged 18–70 years (n=166) [Clinical trials identifier: NCT04959344]. The vaccine is an O-antigen-based conjugate; the specific O-antigens in the vaccine remain undisclosed [Michael Kowarik, LimmaTech Biologics, World Vaccine Congress EU, 2022], although only a limited number of O-serotypes can account for a high proportion of clinical isolates. A Q1 2024 GSK corporate R&D pipeline update showed that Kleb4V/GSK4429016A had been removed; the program's current status is subject to verification.
https://en.wikipedia.org/wiki/Entropy%20of%20activation
Entropy of activation
In chemical kinetics, the entropy of activation of a reaction is one of the two parameters (along with the enthalpy of activation) that are typically obtained from the temperature dependence of a reaction rate constant, when these data are analyzed using the Eyring equation of transition state theory. The standard entropy of activation is symbolized ΔS‡ and equals the change in entropy when the reactants change from their initial state to the activated complex or transition state (Δ = change, S = entropy, ‡ = activation).

Importance

The entropy of activation determines the preexponential factor A of the Arrhenius equation for the temperature dependence of reaction rates. The relationship depends on the molecularity of the reaction: for reactions in solution and unimolecular gas reactions A = (e·kB·T/h) exp(ΔS‡/R), while for bimolecular gas reactions A = (e²·kB·T/h)(RT/p) exp(ΔS‡/R). In these equations e is the base of natural logarithms, h is the Planck constant, kB is the Boltzmann constant, T is the absolute temperature, and R is the ideal gas constant. The factor RT/p is needed because of the pressure dependence of the reaction rate; at a pressure p it equals the molar volume of an ideal gas. The value of ΔS‡ provides clues about the molecularity of the rate-determining step in a reaction, i.e. the number of molecules that enter this step. Positive values suggest that entropy increases upon achieving the transition state, which often indicates a dissociative mechanism in which the activated complex is loosely bound and about to dissociate. Negative values for ΔS‡ indicate that entropy decreases on forming the transition state, which often indicates an associative mechanism in which two reaction partners form a single activated complex.

Derivation

It is possible to obtain the entropy of activation using the Eyring equation.
This equation is of the form

k = (κ·kB·T/h) exp(−ΔH‡/RT) exp(ΔS‡/R)

where:
k = reaction rate constant
T = absolute temperature
ΔH‡ = enthalpy of activation
R = gas constant
κ = transmission coefficient
kB = Boltzmann constant (kB = R/NA, NA = Avogadro constant)
h = Planck constant
ΔS‡ = entropy of activation

This equation can be rearranged into the form

ln(k/T) = −(ΔH‡/R)(1/T) + ln(κ·kB/h) + ΔS‡/R

The plot of ln(k/T) versus 1/T gives a straight line with slope −ΔH‡/R, from which the enthalpy of activation can be derived, and with intercept ln(κ·kB/h) + ΔS‡/R, from which the entropy of activation is derived.
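The linear fit described above can be sketched numerically. The following is a minimal illustration in Python; the rate constants are synthetic (generated from assumed activation parameters with κ = 1), not experimental data:

```python
import math
import numpy as np

# Eyring-plot analysis: fit ln(k/T) versus 1/T, then read the activation
# enthalpy from the slope and the activation entropy from the intercept.

R = 8.314          # gas constant, J mol^-1 K^-1
kB = 1.380649e-23  # Boltzmann constant, J K^-1
h = 6.62607015e-34 # Planck constant, J s

# Synthetic first-order rate constants (s^-1) generated from assumed
# dH = 80 kJ/mol and dS = -50 J/(mol K), transmission coefficient = 1.
dH_true, dS_true = 80_000.0, -50.0
T = np.array([290.0, 300.0, 310.0, 320.0, 330.0])
k = (kB * T / h) * np.exp(dS_true / R) * np.exp(-dH_true / (R * T))

# Linearized Eyring equation: ln(k/T) = -(dH/R)(1/T) + ln(kB/h) + dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)

dH = -slope * R                          # activation enthalpy, J/mol
dS = (intercept - math.log(kB / h)) * R  # activation entropy, J/(mol K)
print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

With exact synthetic data the fit recovers the assumed parameters; with real measurements, the scatter of the points about the line determines the uncertainty in ΔH‡ and ΔS‡.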
https://en.wikipedia.org/wiki/Amchoor
Amchoor
Amchoor or aamchur or amchur, also referred to as mango powder, is a fruity spice powder made from dried unripe green mangoes. A citrusy seasoning, it is mostly produced in India. In addition to its use as a seasoning, it adds the nutritional benefits of mangoes when the fresh fruit is out of season.

Preparation

To make amchoor, early-season mangoes are harvested while still green and unripe. Once harvested, the green mangoes are peeled, thinly sliced, and sun-dried. The dried slices, which are light brown and resemble strips of woody bark, can be purchased whole and ground at home, but the majority of slices processed in this way are ground into a fine powder and sold as ready-made amchoor.

Use

Amchoor is a tart, pale-beige-to-brownish powder with a honey-like fragrance and a sour, fruity flavour. It is a predominant flavouring agent in Indian dishes, used wherever a sour, tangy, fruity flavour or extra acidity is required without adding moisture. It flavours samosa and pakora fillings, stews and soups, fruit salads and pastries, curries, chutneys, pickles, and dals, and it tenderizes meats, poultry, and fish. It is added to marinades for meat and poultry as an enzymatic tenderizer and lends its sourness to chutneys and pickles. Amchoor is also a primary component of chaat masala, an Indian spice mix.
https://en.wikipedia.org/wiki/Agricultural%20chemistry
Agricultural chemistry
Agricultural chemistry is the study of chemistry, especially organic chemistry and biochemistry, as they relate to agriculture. It embraces the structures and chemical reactions relevant to the production, protection, and use of crops and livestock. Its applied science and technology aspects are directed towards increasing yields and improving quality, which comes with multiple advantages and disadvantages.

Agricultural and environmental chemistry

This aspect of agricultural chemistry deals with the role of molecular chemistry in agriculture as well as its negative consequences.

Plant biochemistry

Plant biochemistry encompasses the chemical reactions that occur within plants. In principle, knowledge at the molecular level informs technologies for providing food. Particular focus is on the biochemical differences between plants and other organisms, as well as differences within the plant kingdom, such as dicotyledons vs monocotyledons, gymnosperms vs angiosperms, C3 vs C4 carbon fixers, etc.

Pesticides

Chemical materials developed to assist in the production of food, feed, and fiber include herbicides, insecticides, fungicides, and other pesticides. Pesticides play an important role in increasing crop yield and mitigating crop losses: they keep insects and other pests away from crops, allowing them to grow undisturbed. Disadvantages of pesticides include contamination of ground and water (see persistent organic pollutants). They may be toxic to non-target species, including birds, fish, and pollinators, as well as to farmworkers themselves.

Soil chemistry

Agricultural chemistry often aims at preserving or increasing the fertility of soil, with the goals of maintaining or improving agricultural yield and improving the quality of the crop.
Soils are analyzed with attention to their inorganic matter (minerals), which comprises most of the mass of dry soil, and their organic matter, which consists of living organisms, their degradation products, humic acids, and fulvic acids. Fertilizers are a major consideration. While organic fertilizers are time-honored, their use has largely been displaced by chemicals produced from mining (phosphate rock) and the Haber-Bosch process. The use of these materials dramatically increased the rate at which crops are produced, which supports the growing human population. Common fertilizers include urea, ammonium sulphate, diammonium phosphate, and calcium ammonium phosphate.

Biofuels and bio-derived materials

Agricultural chemistry encompasses the science and technology of producing not only edible crops but also feedstocks for fuels ("biofuels") and materials. Ethanol fuel is obtained by fermentation of sugars. Biodiesel is derived from fats, both animal- and plant-derived. Methane can be recovered from manure and other agricultural wastes by microbial action. Lignocellulose is a promising precursor to new materials.

Biotechnology

Biocatalysis is used to produce a number of food products. Large quantities of high-fructose corn syrup are produced annually by the action of the immobilized enzyme glucose isomerase on corn-derived glucose. Emerging technologies are numerous, including enzymes for clarifying or debittering fruit juices. A variety of potentially useful chemicals are obtained from engineered plants. Bioremediation is a green route to biodegradation.

GMOs

Genetically modified organisms (GMOs) are plants or other living things that have been altered at the genomic level to improve their characteristics. These characteristics include providing new vaccines for humans, increasing nutrient supplies, and creating unique plastics. GMOs may also be able to grow in climates that are not suitable for the original organism.
Examples of GMOs include virus-resistant tobacco and squash, delayed-ripening tomatoes, and herbicide-resistant soybeans. GMOs came with an increased interest in using biotechnology to produce fertilizers and pesticides. Due to increased market interest in biotechnology in the 1970s, more technology and infrastructure were developed, costs decreased, and research advanced. Genetically modified crops have been incorporated into agriculture since the early 1980s. Increased biotechnological work calls for the union of biology and chemistry to produce improved crops, a main motivation being the increasing amount of food needed to feed a growing population. That said, concerns about GMOs include potential antibiotic resistance acquired from eating them, as well as uncertainty about long-term effects on the human body, since many GMOs were developed only recently. Much controversy surrounds GMOs. In the United States, all foods containing GMOs must be labeled as such.

Omics

Particularly relevant is proteomics, as protein (nutrition) guides much of agriculture.
https://en.wikipedia.org/wiki/Herpes
Herpes
Herpes simplex, often known simply as herpes, is a viral infection caused by the herpes simplex virus. Herpes infections are categorized by the area of the body that is infected. The two major types of herpes are oral herpes and genital herpes, though other forms also exist. Oral herpes involves the face or mouth. It may result in small blisters in groups, often called cold sores or fever blisters, or may just cause a sore throat. Genital herpes involves the genitalia. It may have minimal symptoms or form blisters that break open and result in small ulcers. These typically heal over two to four weeks. Tingling or shooting pains may occur before the blisters appear. Herpes cycles between periods of active disease followed by periods without symptoms. The first episode is often more severe and may be associated with fever, muscle pains, swollen lymph nodes and headaches. Over time, episodes of active disease decrease in frequency and severity. Herpetic whitlow typically involves the fingers or thumb, herpes simplex keratitis involves the eye, herpesviral encephalitis involves the brain, and neonatal herpes involves any part of the body of a newborn, among others. There are two types of herpes simplex virus, type 1 (HSV-1) and type 2 (HSV-2). HSV-1 more commonly causes infections around the mouth while HSV-2 more commonly causes genital infections. They are transmitted by direct contact with body fluids or lesions of an infected individual. Transmission may still occur when symptoms are not present. Genital herpes is classified as a sexually transmitted infection. It may be spread to an infant during childbirth. After infection, the viruses are transported along sensory nerves to the nerve cell bodies, where they reside lifelong. Causes of recurrence may include decreased immune function, stress, and sunlight exposure. Oral and genital herpes is usually diagnosed based on the presenting symptoms. 
The diagnosis may be confirmed by viral culture or by detecting herpes DNA in fluid from blisters. Testing the blood for antibodies against the virus can confirm a previous infection but will be negative in new infections. The most effective method of avoiding genital infections is avoiding vaginal, oral, manual, and anal sex. Condom use decreases the risk. Daily antiviral medication taken by someone who has the infection can also reduce spread. There is no available vaccine, and once infected, there is no cure. Paracetamol (acetaminophen) and topical lidocaine may be used to help with the symptoms. Treatment with antiviral medication such as aciclovir or valaciclovir can lessen the severity of symptomatic episodes. Worldwide rates of either HSV-1 or HSV-2 are between 60% and 95% in adults. HSV-1 is usually acquired during childhood. Since there is no cure for either HSV-1 or HSV-2, rates of both inherently increase as people age. Rates of HSV-1 are between 70% and 80% in populations of low socioeconomic status and 40% to 60% in populations of improved socioeconomic status. An estimated 536 million people worldwide (16% of the population) were infected with HSV-2 as of 2003, with greater rates among women and those in the developing world. Most people with HSV-2 do not realize that they are infected.

Etymology

The name is from the Greek herpēs, related to a verb meaning 'to creep', referring to the spreading blisters. The name does not refer to latency.

Signs and symptoms

HSV infection causes several distinct medical disorders. Common infection of the skin or mucosa may affect the face and mouth (orofacial herpes), genitalia (genital herpes), or hands (herpetic whitlow). More serious disorders occur when the virus infects and damages the eye (herpes keratitis) or invades the central nervous system, damaging the brain (herpes encephalitis).
People with immature or suppressed immune systems, such as newborns, transplant recipients, or people with AIDS, are prone to severe complications from HSV infections. HSV infection has also been associated with cognitive deficits of bipolar disorder and with Alzheimer's disease, although this is often dependent on the genetics of the infected person. In all cases, HSV is never removed from the body by the immune system. Following a primary infection, the virus enters the nerves at the site of primary infection, migrates to the cell body of the neuron, and becomes latent in the ganglion. As a result of primary infection, the body produces antibodies to the particular type of HSV involved, which can help reduce the odds of subsequent infection of that type at a different site. In HSV-1-infected individuals, seroconversion after an oral infection helps prevent additional HSV-1 infections such as whitlow, genital herpes, and herpes of the eye. Prior HSV-1 seroconversion seems to reduce the symptoms of a later HSV-2 infection, although HSV-2 can still be contracted. Many people infected with HSV-2 display no physical symptoms; individuals with no symptoms are described as asymptomatic or as having subclinical herpes. However, infection with herpes can be fatal.

Types of herpes

Other

Neonatal herpes simplex is an HSV infection in an infant. It is a rare but serious condition, usually caused by vertical transmission of HSV-1 or -2 from mother to newborn. During immunodeficiency, herpes simplex can cause unusual lesions in the skin. One of the most striking is the appearance of clean linear erosions in skin creases, with the appearance of a knife cut. Herpetic sycosis is a recurrent or initial herpes simplex infection affecting primarily the hair follicles. Eczema herpeticum, an infection with herpesvirus in patients with chronic atopic dermatitis, may result in the spread of herpes simplex throughout the eczematous areas.
Herpetic keratoconjunctivitis, a primary infection, typically presents as swelling of the conjunctiva and eyelids (blepharoconjunctivitis), accompanied by small white itchy lesions on the surface of the cornea.

Bell's palsy

Although the exact cause of Bell's palsy, a type of facial paralysis, is unknown, it may be related to reactivation of HSV-1. This theory has been contested, however, since HSV is detected in large numbers of individuals who have never experienced facial paralysis, and higher levels of antibodies for HSV are not found in HSV-infected individuals with Bell's palsy compared to those without. Antivirals may improve the condition slightly when used together with corticosteroids in those with severe disease.

Alzheimer's disease

HSV-1 has been proposed as a possible cause of Alzheimer's disease. In the presence of a certain gene variation (the APOE-epsilon4 allele), HSV-1 appears to be particularly damaging to the nervous system and increases one's risk of developing Alzheimer's disease. The virus interacts with the components and receptors of lipoproteins, which may contribute to the disease's development.

Pathophysiology

Herpes is contracted through direct contact with an active lesion or body fluid of an infected person. Herpes transmission occurs between discordant partners; a person with a history of infection (HSV seropositive) can pass the virus to an HSV-seronegative person. Herpes simplex virus 2 is typically contracted through direct skin-to-skin contact with an infected individual, but can also be contracted by exposure to infected saliva, semen, vaginal fluid, or the fluid from herpetic blisters. To infect a new individual, HSV travels through tiny breaks in the skin or mucous membranes in the mouth or genital areas. Even microscopic abrasions on mucous membranes are sufficient to allow viral entry.
HSV asymptomatic shedding occurs at some time in most individuals infected with herpes. It can occur more than a week before or after a symptomatic recurrence in 50% of cases. The virus enters susceptible cells via entry receptors such as nectin-1, HVEM, and 3-O-sulfated heparan sulfate. Infected people who show no visible symptoms may still shed and transmit the virus through their skin; asymptomatic shedding may represent the most common form of HSV-2 transmission. Asymptomatic shedding is more frequent within the first 12 months of acquiring HSV. Concurrent infection with HIV increases the frequency and duration of asymptomatic shedding. Some individuals may have much lower patterns of shedding, but evidence supporting this is not fully verified; no significant differences are seen in the frequency of asymptomatic shedding when comparing persons with one to 12 annual recurrences to those with no recurrences. Antibodies that develop following an initial infection with a type of HSV can reduce the odds of reinfection with the same virus type. In a monogamous couple, a seronegative female runs a greater than 30% per year risk of contracting an HSV infection from a seropositive male partner. If an oral HSV-1 infection is contracted first, seroconversion will have occurred after six weeks to provide protective antibodies against a future genital HSV-1 infection. Herpes simplex is a double-stranded DNA virus.

Diagnosis

Classification

Herpes simplex virus is divided into two types; however, each may cause infections in all areas. HSV-1 causes primarily mouth, throat, face, eye, and central nervous system infections. HSV-2 causes primarily anogenital infections.

Examination

Primary orofacial herpes is readily identified by examination of persons with no previous history of lesions and contact with an individual with known HSV infection. The sores typically present as multiple, round, superficial oral ulcers, accompanied by acute gingivitis.
Adults with atypical presentation are more difficult to diagnose. Prodromal symptoms that occur before the appearance of herpetic lesions help differentiate HSV symptoms from the similar symptoms of other disorders, such as allergic stomatitis. When lesions do not appear inside the mouth, primary orofacial herpes is sometimes mistaken for impetigo, a bacterial infection. Common mouth ulcers (aphthous ulcers) also resemble intraoral herpes but do not present a vesicular stage. Genital herpes can be more difficult to diagnose than oral herpes, since most people have none of the classical symptoms. Further confusing diagnosis, several other conditions resemble genital herpes, including fungal infection, lichen planus, atopic dermatitis, and urethritis.

Laboratory testing

Laboratory testing is often used to confirm a diagnosis of genital herpes. Laboratory tests include culture of the virus, direct fluorescent antibody (DFA) studies to detect virus, skin biopsy, and polymerase chain reaction to test for the presence of viral DNA. Although these procedures produce highly sensitive and specific diagnoses, their high costs and time constraints discourage their regular use in clinical practice. Until the 1980s, serological tests for antibodies to HSV were rarely useful for diagnosis and not routinely used in clinical practice. The older IgM serologic assay could not differentiate between antibodies generated in response to HSV-1 or HSV-2 infection. However, a glycoprotein G-specific (IgG) HSV test introduced in the 1980s is more than 98% specific at discriminating HSV-1 from HSV-2.

Differential diagnosis

Herpes simplex should not be confused with conditions caused by other viruses in the family Herpesviridae, such as herpes zoster (shingles), which is caused by the varicella zoster virus. The differential diagnosis includes hand, foot and mouth disease, due to similar lesions on the skin. Lymphangioma circumscriptum and dermatitis herpetiformis may also have a similar appearance.
Prevention

As with almost all sexually transmitted infections, women are more susceptible to acquiring genital HSV-2 than men. On an annual basis, without the use of antivirals or condoms, the transmission risk of HSV-2 from an infected male to a female is about 8–11%. This is believed to be due to the increased exposure of mucosal tissue to potential infection sites. Transmission risk from an infected female to a male is around 4–5% annually. Suppressive antiviral therapy reduces these risks by 50%. Antivirals also reduce by about 50% the chance that a newly infected partner develops symptomatic disease; the infected partner becomes seropositive but remains symptom-free. Condom use also reduces the transmission risk significantly, and is much more effective at preventing male-to-female transmission than vice versa. Previous HSV-1 infection may reduce the risk of acquiring HSV-2 infection among women by a factor of three, although the one study that states this had a small sample size of 14 transmissions out of 214 couples. However, asymptomatic carriers of the HSV-2 virus are still contagious. In many infections, the first sign people will have of their own infection is the horizontal transmission to a sexual partner or the vertical transmission of neonatal herpes to a newborn at term. Since most asymptomatic individuals are unaware of their infection, they are considered at high risk for spreading HSV. In October 2011, the anti-HIV drug tenofovir, when used topically in a microbicidal vaginal gel, was reported to reduce sexual transmission of the herpes virus by 51%.

Barrier methods

Condoms offer moderate protection against HSV-2 in both men and women, with consistent condom users having a 30%-lower risk of HSV-2 acquisition compared with those who never use condoms. A female condom can provide greater protection than the male condom, as it covers the labia.
The virus cannot pass through a synthetic condom, but a male condom's effectiveness is limited because herpes ulcers may appear on areas not covered by it. Neither type of condom prevents contact with the scrotum, anus, buttocks, or upper thighs, areas that may come in contact with ulcers or genital secretions during sexual activity. Protection against herpes simplex depends on the site of the ulcer; therefore, if ulcers appear on areas not covered by condoms, abstaining from sexual activity until the ulcers are fully healed is one way to limit the risk of transmission. The risk is not eliminated, however, as viral shedding capable of transmitting infection may still occur while the infected partner is asymptomatic. The use of condoms or dental dams also limits the transmission of herpes from the genitals of one partner to the mouth of the other (or vice versa) during oral sex. When one partner has a herpes simplex infection and the other does not, the use of antiviral medication, such as valaciclovir, in conjunction with a condom, further decreases the chances of transmission to the uninfected partner. Topical microbicides that contain chemicals that directly inactivate the virus and block viral entry are being investigated.

Antivirals

Antivirals may reduce asymptomatic shedding; asymptomatic genital HSV-2 viral shedding is believed to occur on 20% of days per year in patients not undergoing antiviral treatment, versus 10% of days while on antiviral therapy.

Pregnancy

The risk of transmission from mother to baby is highest if the mother becomes infected around the time of delivery (30% to 60%), since insufficient time will have passed for the generation and transfer of protective maternal antibodies before the birth of the child. In contrast, the risk falls to 3% if the infection is recurrent, is 1–3% if the woman is seropositive for both HSV-1 and HSV-2, and is less than 1% if no lesions are visible.
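As a rough illustration, the annual-risk figures quoted above can be combined multiplicatively. Treating each intervention as an independent relative-risk factor is a simplifying modeling assumption, not an epidemiological result:

```python
# Back-of-envelope combination of the quoted annual HSV-2 transmission
# figures. The independence of the risk-reduction factors is an
# illustrative assumption only.

baseline = 0.10          # male-to-female annual risk, midpoint of 8-11%
antiviral_factor = 0.50  # suppressive antiviral therapy: ~50% reduction
condom_factor = 0.70     # consistent condom use: ~30% lower acquisition risk

combined = baseline * antiviral_factor * condom_factor
print(f"approximate annual risk with both interventions: {combined:.1%}")
```

Under these assumptions the residual annual risk works out to about 3.5%; the real-world figure would depend on how the interventions interact.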
Women seropositive for only one type of HSV are only half as likely to transmit HSV as infected seronegative mothers. To prevent neonatal infections, seronegative women are recommended to avoid unprotected oral-genital contact with an HSV-1-seropositive partner and conventional sex with a partner having a genital infection during the last trimester of pregnancy. Mothers infected with HSV are advised to avoid procedures that would cause trauma to the infant during birth (e.g. fetal scalp electrodes, forceps, and vacuum extractors) and, should lesions be present, to elect caesarean section to reduce exposure of the child to infected secretions in the birth canal. The use of antiviral treatments, such as aciclovir, given from the 36th week of pregnancy, limits HSV recurrence and shedding during childbirth, thereby reducing the need for caesarean section. Aciclovir is the recommended antiviral for herpes suppressive therapy during the last months of pregnancy. The use of valaciclovir and famciclovir, while potentially improving compliance, has less well-determined safety in pregnancy.

Management

No method eradicates the herpes virus from the body, but antiviral medications can reduce the frequency, duration, and severity of outbreaks. Analgesics such as ibuprofen and paracetamol (acetaminophen) can reduce pain and fever. Topical anesthetic treatments such as prilocaine, lidocaine, benzocaine, or tetracaine can also relieve itching and pain.

Antiviral

Several antiviral drugs are effective for treating herpes, including aciclovir (acyclovir), valaciclovir, famciclovir, and penciclovir. Aciclovir was the first discovered and is now available as a generic. Valaciclovir is also available as a generic and is slightly more effective than aciclovir at reducing lesion healing time. Evidence supports the use of aciclovir and valaciclovir in the treatment of herpes labialis as well as herpes infections in people with cancer.
The evidence to support the use of aciclovir in primary herpetic gingivostomatitis is weaker. Topical A number of topical antivirals are effective for herpes labialis, including aciclovir, penciclovir, and docosanol. Alternative medicine Evidence is insufficient to support the use of many alternative medicine compounds, including echinacea, eleuthero, L-lysine, zinc, monolaurin, bee products, and aloe vera. While a number of small studies show possible benefit from monolaurin, L-lysine, aspirin, lemon balm, topical zinc, or licorice root cream in treatment, these preliminary studies have not been confirmed by higher-quality randomized controlled studies. Prognosis Following active infection, herpes viruses establish a latent infection in sensory and autonomic ganglia of the nervous system. The double-stranded DNA of the virus is incorporated into the cell physiology by infection of the nucleus of a nerve's cell body. HSV latency is static: no virus is produced, and it is controlled by a number of viral genes, including latency-associated transcript. Many HSV-infected people experience recurrence within the first year of infection. Prodrome precedes development of lesions. Prodromal symptoms include tingling (paresthesia), itching, and pain where lumbosacral nerves innervate the skin. Prodrome may occur as long as several days or as short as a few hours before lesions develop. Beginning antiviral treatment when prodrome is experienced can reduce the appearance and duration of lesions in some individuals. During recurrence, fewer lesions are likely to develop, and they are less painful and heal faster (within 5–10 days without antiviral treatment) than those occurring during the primary infection. Subsequent outbreaks tend to be periodic or episodic, occurring on average four or five times a year when not using antiviral therapy. The causes of reactivation are uncertain, but several potential triggers have been documented. 
A 2009 study showed the protein VP16 plays a key role in reactivation of the dormant virus. Changes in the immune system during menstruation may play a role in HSV-1 reactivation. Concurrent infections, such as viral upper respiratory tract infection or other febrile diseases, can cause outbreaks. Reactivation due to other infections is the likely source of the historic terms 'cold sore' and 'fever blister'. Other identified triggers include local injury to the face, lips, eyes, or mouth; trauma; surgery; radiotherapy; and exposure to wind, ultraviolet light, or sunlight. The frequency and severity of recurrent outbreaks vary greatly between people. Some individuals' outbreaks can be quite debilitating, with large, painful lesions persisting for several weeks, while others experience only minor itching or burning for a few days. Some evidence indicates genetics play a role in the frequency of cold sore outbreaks. An area of human chromosome 21 that includes six genes has been linked to frequent oral herpes outbreaks. Immunity to the virus builds over time: most infected individuals experience fewer outbreaks, and outbreak symptoms often become less severe. After several years, some people become perpetually asymptomatic and no longer experience outbreaks, though they may still be contagious to others. Immunocompromised individuals may experience longer, more frequent, and more severe episodes. Antiviral medication has been proven to reduce the frequency and duration of outbreaks. Outbreaks may occur at the original site of the infection or in proximity to nerve endings that reach out from the infected ganglia. In the case of a genital infection, sores can appear at the original site of infection or near the base of the spine, the buttocks, or the back of the thighs. HSV-2-infected individuals are at higher risk for acquiring HIV when practicing unprotected sex with HIV-positive persons, in particular during an outbreak with active lesions. 
Epidemiology Worldwide rates of infection with HSV-1 and/or HSV-2 are between 60% and 95% in adults. HSV-1 is more common than HSV-2, with rates of both increasing as people age. HSV-1 rates are between 70% and 80% in populations of low socioeconomic status and 40% to 60% in populations of improved socioeconomic status. An estimated 536 million people, or 16% of the population worldwide, were infected with HSV-2 as of 2003, with greater rates among women and in those in the developing world. Rates of infection are determined by the presence of antibodies against either viral species. In the US, 58% of the population is infected with HSV-1 and 16% is infected with HSV-2. Among those HSV-2-seropositive, only 19% were aware they were infected. During 2005–2008, the prevalence of HSV-2 was 39% in black people and 21% in women. The annual incidence in Canada of genital herpes due to HSV-1 and HSV-2 infection is not known (for a review of HSV-1/HSV-2 prevalence and incidence studies worldwide, see Smith and Robinson 2002). As many as one in seven Canadians aged 14 to 59 may be infected with herpes simplex type 2 virus, and more than 90 per cent of them may be unaware of their status, according to one study. In the United States, it is estimated that about 1,640,000 HSV-2 seroconversions occur yearly (730,000 men and 910,000 women, or 8.4 per 1,000 persons). In British Columbia in 1999, the seroprevalence of HSV-2 antibody in leftover serum submitted for antenatal testing revealed a prevalence of 17%, ranging from 7% in women 15–19 years old to 28% in those 40–44 years. In Norway, a study published in 2000 found that up to 70–90% of initial genital infections were due to HSV-1. In Nova Scotia, 58% of 1,790 HSV isolates from genital lesion cultures in women were HSV-1; in men, 37% of 468 isolates were HSV-1. History Herpes originated and evolved in Africa and could be the result of a cross-species transmission event from gibbons, orangutans, or gorillas. 
Herpes has been known for at least 2,000 years. Emperor Tiberius is said to have banned kissing in Rome for a time due to so many people having cold sores. In the 16th-century play Romeo and Juliet, blisters "o'er ladies' lips" are mentioned. In the 18th century, it was so common among prostitutes that it was called "a vocational disease of women". The term 'herpes simplex' appeared in Richard Boulton's A System of Rational and Practical Chirurgery in 1713, where the terms 'herpes miliaris' and 'herpes exedens' also appeared. Herpes was not identified as being caused by a virus until the 1940s. Herpes antiviral therapy began in the early 1960s with the experimental use of medications that interfered with viral replication, called deoxyribonucleic acid (DNA) inhibitors. The original use was against normally fatal or debilitating illnesses such as adult encephalitis, keratitis, infections in immunocompromised (transplant) patients, or disseminated herpes zoster (also known as disseminated shingles). The original compounds used were 5-iodo-2'-deoxyuridine (also known as idoxuridine, IUdR, or IDU) and 1-β-D-arabinofuranosylcytosine (ara-C), later marketed under the name Cytosar or cytarabine. The usage expanded to include topical treatment of herpes simplex, zoster, and varicella. Some trials combined different antivirals with differing results. The introduction of 9-β-D-arabinofuranosyladenine (ara-A, or vidarabine), considerably less toxic than ara-C, in the mid-1970s opened the way for regular neonatal antiviral treatment. Vidarabine was the first systemically administered antiviral medication with activity against HSV for which therapeutic efficacy outweighed toxicity for the management of life-threatening HSV disease. Intravenous vidarabine was licensed for use by the U.S. Food and Drug Administration in 1977. Other experimental antivirals of that period included: heparin, trifluorothymidine (TFT), ribavirin, interferon, Virazole, and 5-methoxymethyl-2'-deoxyuridine (MMUdR). 
The introduction of 9-(2-hydroxyethoxymethyl)guanine, better known as aciclovir, in the late 1970s marked a further advance in antiviral treatment and led to vidarabine vs. aciclovir trials in the late 1980s. Aciclovir's lower toxicity and ease of administration compared with vidarabine led to it becoming the drug of choice for herpes treatment after it was licensed by the FDA in 1998. Another advantage in the treatment of neonatal herpes was greater reductions in mortality and morbidity with increased dosages, which did not occur with increased dosages of vidarabine. However, aciclovir seems to inhibit antibody response, and newborns on aciclovir antiviral treatment experienced a slower rise in antibody titer than those on vidarabine. Society and culture Some people experience negative feelings related to the condition following diagnosis, in particular if they have acquired the genital form of the disease. Feelings can include depression, fear of rejection, feelings of isolation, fear of being found out, and self-destructive feelings. Herpes support groups have been formed in the United States and the United Kingdom, providing information about herpes and running message forums and dating websites for affected people. People with the herpes virus are often hesitant to divulge to other people, including friends and family, that they are infected. This is especially true of new or potential sexual partners whom they consider casual. In a 2007 study, 1,900 people (25% of whom had herpes) ranked genital herpes second for social stigma out of all sexually transmitted diseases (HIV took the top spot for STD stigma). Support groups United States A source of support is the National Herpes Resource Center, which arose from the work of the American Sexual Health Association (ASHA). The ASHA was created in 1914 in response to the increase in sexually transmitted diseases that had spread during World War I. 
During the 1970s, there was an increase in sexually transmitted diseases. One of the diseases that increased dramatically was genital herpes. In response, ASHA created the National Herpes Resource Center in 1979. The Herpes Resource Center (HRC) was designed to meet the growing need for education and awareness about the virus. One of the projects of the HRC was to create a network of local support (HELP) groups. The goal of these HELP groups was to provide a safe, confidential environment where participants could get accurate information and share experiences, fears, and feelings with others who are concerned about herpes. UK In the UK, the Herpes Association (now the Herpes Viruses Association) was started in 1982, becoming a registered charity with a Department of Health grant in 1985. The charity started as a string of local group meetings before acquiring an office and a national spread. Research Research has gone into vaccines for both prevention and treatment of herpes infections. As of October 2022, the U.S. FDA has not approved a vaccine for herpes. However, there are herpes vaccines currently in clinical trials, such as Moderna's mRNA-1608. Unsuccessful clinical trials have been conducted for some glycoprotein subunit vaccines. As of 2017, the future pipeline includes several promising replication-incompetent vaccine proposals, while two replication-competent (live-attenuated) HSV vaccines are undergoing human testing. A genomic study of the herpes simplex type 1 virus confirmed the human migration pattern theory known as the out-of-Africa hypothesis.
Ani (bird)
The anis are the three species of birds in the genus Crotophaga of the cuckoo family. They are essentially tropical New World birds, although the range of two species just reaches the United States. Unlike some cuckoos, the anis are not brood parasites, but nest communally, the cup nest being built by several pairs from 2–6 m high in a tree. A number of females lay their eggs in the nest and then share incubation and feeding. The anis are large black birds with a long tail and a deep ridged black bill. Their flight is weak and wobbly, but they run well, and usually feed on the ground. These are very gregarious species, always found in noisy groups. Anis feed on termites, large insects, and even lizards and frogs. The claim that they will remove ticks and other parasites from grazing animals has been disputed; while there is no doubt that anis follow grazing animals to catch disturbed insects and will occasionally eat fallen ticks, there is no proof that they remove ticks from the animals' bodies. Taxonomy The genus Crotophaga was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae to accommodate a single species, the smooth-billed ani (Crotophaga ani). The genus name combines the Ancient Greek krotōn meaning "tick" with -phagos meaning "-eating". Linnaeus cited the Irish physician Patrick Browne who in 1756 in his The Civil and Natural History of Jamaica had used the name Crotophaga and remarked that smooth-billed anis "live chiefly upon ticks and other small vermin; and may be frequently seen jumping about all cows and oxen in the fields". The name "Ani" was used in 1648 by German naturalist Georg Marcgrave in his Historia Naturalis Brasiliae. Marcgrave did not explain the origin of the word, but it is probably derived from the Tupi language. Species The genus contains three species.
Yellow-green algae
Yellow-green algae or the Xanthophyceae (xanthophytes) are an important group of heterokont algae. Most live in fresh water, but some are found in marine and soil habitats. They vary from single-celled flagellates to simple colonial and filamentous forms. Xanthophyte chloroplasts contain the photosynthetic pigments chlorophyll a, chlorophyll c, β-carotene, and the carotenoid diadinoxanthin. Unlike other Stramenopiles (heterokonts), their chloroplasts do not contain fucoxanthin, which accounts for their lighter colour. Their storage polysaccharide is chrysolaminarin. Xanthophyte cell walls are composed of cellulose and hemicellulose. They appear to be the closest relatives of the brown algae. Classifications The species now placed in the Xanthophyceae were formerly included in the Chlorophyceae. In 1899, Lüther created the group Heterokontae for green algae with unequal flagella. Pascher (1914) included the Heterokontae in the Chrysophyta. In 1930, Allorge renamed the group as Xanthophyceae. The monadoid (unicellular flagellates) and also sometimes the amoeboid species have been included by some authors in the Protozoa or Protista, as order Heterochloridina (e.g., Doflein and Reichenow, 1927-1929), as class Xanthomonadina, with orders Heterochloridea and Rhizochloridea (e.g., Deflandre, 1956), as order Heterochlorida (e.g., Hall, 1953, Honigberg et al., 1964), as order Heteromonadida (e.g., Leedale, 1983), or as subclass Heterochloridia (e.g., Puytorac et al., 1987). These groups are called ambiregnal protists, as names for these have been published under either or both of the ICZN and the ICN. AlgaeBase (2020) Xanthophyceae have been divided into the following five orders in some classification systems: Dictyosphaeriopsis Groenlandiella Halosphaeropsis Pelagocystis Polyedrium Pseudopleurochloris Raphidosphaera Sphaerochloris Tiresias Order Botrydiales Schaffner 1922 Family Botrydiaceae Rabenhorst 1863 e.g. 
Botrydium Order Mischococcales Fritsch 1927 Family Botrydiopsidaceae Hibberd 1980 e.g. Botrydiopsis Family Botryochloridaceae Pascher 1938 e.g. Ilsteria Family Centritractaceae Pascher 1937 e.g. Centritractus Family Characiopsidaceae Pascher 1938 e.g. Characiopsis, Chlorothecium Family Chloropediaceae Pascher 1931 e.g. Chloropedia Family Gloeobotrydaceae Pascher 1937 e.g. Gloeobotrys Family Gloeopodiaceae Pascher 1938 e.g. Gloeopodium Family Mischococcaceae Pascher 1912 e.g. Mischococcus Family Ophiocytiaceae Lemmermann 1899 e.g. Ophiocytium Family Pleurochloridaceae Pascher 1937 e.g. Meringosphaera, Pleurochloris Family Trypanochloridaceae Geitler ex Pascher 1938 e.g. Trypanochloris Order Rhizochloridales Pascher 1925 Family Myxochloridaceae Pascher 1937 e.g. Myxochloris Family Rhizochloridaceae Pascher 1925 e.g. Rhizochloris Family Stipitococcaceae Pascher ex Smith 1933 e.g. Stipitococcus Order Tribonematales Pascher 1939 Family Heterodendraceae Pascher 1939 e.g. Heterodendron Family Heteropediaceae Hibberd 1982 e.g. Heterococcus, Heteropedia Family Neonemataceae Ettl 1977 e.g. Neonema Family Tribonemataceae West 1904 e.g. Tribonema Family Xanthonemataceae Silva 1980 e.g. Xanthonema Order Vaucheriales Nägeli ex Bohlin 1901 Family Vaucheriaceae (Gray) Dumortier 1822 e.g. 
Vaucheria Lüther (1899) Classification according to Lüther (1899): Class Heterokontae Order Chloromonadales Order Confervales Pascher (1912) Classification according to Pascher (1912): Heterokontae Heterochloridales Heterocapsales Heterococcales Heterotrichales Heterosiphonales Fritsch (1935) Fritsch (1935) recognizes the following orders in the class Xanthophyceae: Order Heterochloridales Family Heterochloridaceae (e.g., Heterochloris) Family Heterocapsaceae (e.g., Chlorogloea) Family Mischococcaceae (e.g., Mischococcus) Family Heterorhizidaceae (e.g., Rhizolekane) Order Heterococcales Family Halosphaeraceae (e.g., Halosphaera) Family Myxochloridaceae (e.g., Myxochloris) Family Chlorobotrydaceae (e.g., Chlorobotrys) Family Chlorotheciaceae (e.g., Chlorothecium) Family Ophiocytiaceae (e.g., Ophiocytium) Order Heterotrichales Family Tribonemataceae (e.g., Tribonema) Family Heterocloniaceae (e.g., Heterodendron[?]) Order Heterosiphonales Family Botrydiaceae (e.g., Botrydium) Smith (1938) In the classification of Smith (1938), there are six orders in the class Xanthophyceae, placed in the division Chrysophyta: Order Heterochloridales (e.g., Chlorochromonas) Order Rhizochloridales (e.g., Chlorarachnion) Order Heterocapsales (e.g., Chlorosaccus) Order Heterotrichales (e.g., Tribonema) Order Heterococcales (e.g., Botrydiopsis) Order Heterosiphonales (e.g., Botrydium) Pascher (1939) Pascher (1939) recognizes 6 classes in Heterokontae: Class Heterochloridineae Class Rhizochloridineae Class Heterocapsineae Class Heterococcineae Class Heterotrichineae Class Heterosiphonineae Copeland (1956) Copeland (1956) treated the group as order Vaucheriacea: Kingdom Protoctista Phylum Phaeophyta Class Heterokonta Order Vaucheriacea Family Chlorosaccacea Family Mischococcacea Family Chlorotheciacea Family Botryococcacea Family Stipitococcacea Family Chloramoebacea Family Tribonematacea Family Phyllosiphonacea Ettl (1978), van den Hoek et al. 
(1995) In a classification presented by van den Hoek, Mann and Jahns (1995), based on the level of organization of the thallus, there are seven orders: Order Chloramoebales (e.g., Chloromeson) - flagellate organisms Order Rhizochloridales (e.g., Rhizochloris, Myxochloris) - ameboid organisms Order Heterogloeales (e.g., Gloeochloris) - palmelloid (tetrasporal) organisms Order Mischococcales (e.g., Chloridella, Botrydiopsis, Characiopsis, Ophiocytium) - coccoid organisms Order Tribonematales (e.g., Tribonema, Heterococcus, Heterodendron) - filamentous organization Order Botrydiales (e.g., Botrydium) - siphonous organization; sexual reproduction isogamous or anisogamous Order Vaucheriales (e.g., Vaucheria) - siphonous organization; sexual reproduction oogamous These are the same orders as in the classification of Ettl (1978), an updated version of the classic work by Pascher (1939). Ultrastructural and molecular studies show that the Mischococcales might be paraphyletic, and the Tribonematales and Botrydiales polyphyletic, and suggest that at most two orders be used until the relationships within the division are sorted out. Maistro et al. (2009) Informal groups, according to Maistro et al. (2009): Botrydiopsalean clade Chlorellidialean clade Tribonematalean clade Vaucherialean clade Unicellular flagellates, amoeboid and palmelloid taxa were not included in this study. Adl et al. (2005, 2012) According to Adl et al. (2005, 2012): Tribonematales (genera Botrydium, Bumilleriopsis, Characiopsis, Chloromeson, Heterococcus, Ophiocytium, Sphaerosorus, Tribonema, Xanthonema) Vaucheriales (genus Vaucheria)
Leptostraca
Leptostraca (from the Greek words for thin and shell) is an order of small, marine crustaceans. Its members, including the well-studied Nebalia, occur throughout the world's oceans and are usually considered to be filter-feeders. It is the only extant order in the subclass Phyllocarida. They are believed to represent the most primitive members of their class, the Malacostraca, and first appear in the fossil record during the Cambrian period. Description Leptostracans are usually small; the largest species (Nebaliopsis typica) can reach 4 cm, and the Silurian Ceratiocaris could grow to 75 cm. They are distinguished from all other members of their class in having seven abdominal segments, instead of six. Their head has stalked compound eyes, two pairs of antennae (one biramous, one uniramous), and a pair of mandibles but no maxillipeds. They are the only malacostracans with a carapace that comprises two valves. It covers the head and the thorax, including most of the thoracic appendages, and serves as a brood pouch for the developing embryos. Its anterior tip bears a movable rostrum. Also unique among malacostracans are their eight pairs of thoracic appendages, which are specialized into leaf-like filter-feeding organs and are not used for locomotion. The first six abdominal segments bear pleopods, while the seventh bears a pair of caudal furcae, which may be homologous to the uropods of other crustaceans. Leptostracans have gills on their thoracic limbs, but also breathe through a respiratory membrane on the inside of the carapace. The eggs hatch as a postlarval, or "manca", stage, which lacks a fully developed carapace, but otherwise resembles the adult. Classification It is now accepted that leptostracans belong to the Malacostraca, and the sister crown group to Leptostraca is Eumalacostraca. The order Leptostraca is divided into three families, with ten genera containing a total of around 40 validly described extant species:
Argulidae
The family Argulidae, whose members are commonly known as carp lice or fish lice, are parasitic crustaceans in the class Ichthyostraca. It is the only family in the monotypic subclass Branchiura and the order Arguloida, although a second family, Dipteropeltidae, has been proposed. Taxonomy Branchiurans were once thought to be copepods but are now recognised as a separate subclass in the superclass Oligostraca due to their distinct morphological characteristics. There are approximately 170 species in four genera recognised in the subclass Branchiura. The centres of diversity are the Afrotropical and Neotropical realms. Description Branchiurans have a flattened, oval body, which is almost entirely covered by a broad, oval carapace, four thoracic segments each with a pair of swimming legs, a pair of anterior compound eyes, and an unsegmented abdomen without appendages which ends in paired abdominal lobes separated by the medial anal cleft. They are compressed dorsoventrally and can vary in size from just a few millimetres upwards, with females usually somewhat larger than the males. The mandibles are generally toothed hooks in branchiurans. The maxillules provide sucking capability, and in the genera Argulus, Chonopeltis, and Dipteropeltis, the adults have a pair of suction cups formed from modified first maxillae. The genus Dolops retains the claw-like appendages of the larval stages into adulthood. It is still unknown whether the ancestral state of these organisms had suction discs or the hooked condition seen in Dolops, although it is thought that the specialized suction discs are a later product of evolution. Among the genera there are multiple distinctions between the sexes. For example, males in Argulus and Chonopeltis possess secondary sexual modifications on legs 2–4. Both sexes have their own reproductive organs on their abdomens: females have spermathecae, while males have a pair of testes. 
Their compound eyes are prominent, and the mouthparts and the first pair of antennae are modified to form a hooked, spiny proboscis armed with suckers, as an adaptation to parasitic life. They have four pairs of thoracic appendages, which are used to swim when not attached to the host. Not much research has been done on the respiratory system, which lacks gills, but respiratory areas on the carapace and gas exchange through the fleshy abdomen have been suggested. Distribution and habitat Branchiurans are widely distributed throughout the world. Most species are found in Africa and South America, and none are found in Antarctica. In North America, the genus Argulus is the only one known to be found in freshwater ecosystems. Behaviour and ecology Parasitism Branchiurans are obligate ectoparasites that are found primarily on marine and freshwater fish (only the genus Argulus occurs in marine environments), but can also be found on other aquatic organisms such as invertebrates, salamanders, tadpoles and alligators. Some species feed on the blood of their host, while others feed on mucus and extracellular material. Feeding is facilitated by distinct morphological adaptations (see Anatomy). Branchiurans are able to attach to hosts through two mechanisms: hooked maxillae (as seen in Dolops) or suction discs. After engorging themselves, the parasites typically wait two to three weeks before feeding again. Mitigation of these parasites has been studied through the use of plant-based treatments. From one such study, it is thought that tobacco leaf dust (containing nicotine) can safely and effectively eliminate adult branchiurans from fish, although this may be specific to Argulus bengalensis. Reproduction Only the life cycle of freshwater forms of the genus Argulus is well known. 
Branchiurans are not permanently attached to their hosts: they leave them for up to three weeks to mate and lay eggs, then reattach behind the fish's operculum, where they feed on mucus and sloughed-off scales, or pierce the skin and feed on the internal fluids. The eggs hatch into parasitic postnauplius larvae. Branchiurans mate while on their host. The female holds the eggs in the thorax, and in some species the eggs can be found inside the lobes of the carapace. The female's spermathecae store the sperm. In the genus Dolops, the males deposit a spermatophore onto the females. Once the eggs are fertilized, the females leave the host organism to lay their eggs in rows on the surfaces of plants, rocks, etc. Like the adults, the larvae are parasites on fish. They are opportunistic in selecting host species of fish, and females move about in search of egg-laying sites. Chonopeltis larvae appear to be less developed than those of the other genera. Members of one group of Argulus hatch as metanauplius larvae, followed by a juvenile stage. Another Argulus group, and all known species of Dolops, hatch as juveniles. Impact Fish lice occasionally reach high enough densities to cause fish kills in aquaculture operations, or more rarely in wild populations of fish. They can also become abundant in aquaria, sometimes resulting in the death of ornamental fish.
Navy bean
The navy bean, haricot bean, pearl haricot bean, Boston bean, white pea bean, or pea bean is a variety of the common bean (Phaseolus vulgaris) native to the Americas, where it was first domesticated. It is a dry white bean that is smaller than many other types of white beans, and has an oval, slightly flattened shape. It features in such dishes as baked beans, various soups such as Senate bean soup, and bean pies. The plants that produce navy beans may be either of the bush type or vining type, depending on the cultivar. History The name "Navy bean" is an American term coined because the US Navy has served the beans as a staple to its sailors since the mid-1800s. In Australia, navy bean production began during World War II when it became necessary to find an economical way of supplying a nutritious food to the many troops—especially American troops—based in Queensland. The United States military maintained a large base in Kingaroy and had many bases and camps throughout south-east Queensland. It actively encouraged the widespread planting of the beans. Kingaroy is known as the Baked Bean Capital of Australia. Another popular name for the bean during this time was "the Yankee bean". Cultivars Navy bean cultivars include: "Rainy River" "Robust", resistant to the bean common mosaic virus (BCMV), which is transmitted through seeds Michelite, descended from 'Robust', but with higher yields and better seed quality Sanilac, the first bush navy bean cultivar Nutritional value White beans are the most abundant plant-based source of phosphatidylserine (PS) currently known. They contain notably high levels of apigenin, which vary widely among legumes. Consumption of baked beans has been shown to lower total cholesterol levels and low-density lipoprotein cholesterol. This might be at least partly explained by the high saponin content of navy beans. Saponins also exhibit antibacterial and anti-fungal activity, and have been found to inhibit cancer cell growth. 
Furthermore, navy beans are the richest source of ferulic acid and p-coumaric acid among the common bean varieties. Storage and safety Dried and canned beans stay fresh longer when stored in a pantry or other cool, dark place. With normal seed storage, seeds should last from one to four years for replanting, while well-kept seeds remain usable for cooking far longer, nearly indefinitely. Beans that are discolored from the pure white color should be avoided, as they may have been poorly handled while they dried.
Micro combined heat and power
Micro combined heat and power, micro-CHP, μCHP or mCHP is an extension of the idea of cogeneration to the single/multi family home or small office building in the range of up to 50 kW. Usual technologies for the production of heat and power in one common process are internal combustion engines, micro gas turbines, Stirling engines and fuel cells. Local generation has the potential for a higher efficiency than traditional grid-level generators since it avoids the 8–10% energy losses from transporting electricity over long distances. It also avoids the 10–15% energy losses from heat transport in heating networks due to the difference between the thermal energy carrier (hot water) and the colder external environment. The most common systems use natural gas as their primary energy source and emit carbon dioxide; nevertheless, the effective efficiency of CHP heat production is much higher than that of a condensing boiler, reducing emissions and fuel costs. Overview A micro-CHP system usually contains a small heat engine as a prime mover used to rotate a generator which provides electric power, while simultaneously utilizing the waste heat from the prime mover for an individual building's space heating and the provision of domestic hot water. With fuel cells there is no rotating machinery, but the fuel cell's stack and, where applicable, the reformer provide useful heat. The stack generates DC power, which is converted by a DC/AC inverter into mains voltage. Micro-CHP is defined by the EU as less than 50 kW electrical power output; however, others have more restrictive definitions, all the way down to <5 kWe. A micro-CHP generator may primarily follow heat demand, delivering electricity as the by-product, or may follow electrical demand to generate electricity, with heat as the by-product. When used primarily for heating, micro-CHP systems may generate more electricity than is instantaneously being demanded; the surplus is then fed into the grid. 
The purpose of cogeneration is to make use of more of the chemical energy in the fuel. The case for CHP systems is that large thermal power plants which generate electric power by burning fuel reject between 40% and 60% of the fuel energy as low-temperature waste heat, a consequence of Carnot's theorem. The temperature of this waste heat (around 80–150 °C) allows it to be used for space heating, so district heating networks have been installed in some urban areas. Heat networks have a limited extent: it is not economical to transport heat long distances because of heat loss from the pipes, and extending them into areas of low population density drives down revenue per unit of capital expenditure. Where no district heating is possible, due to low heat demand density or because the local utility has not invested in costly heat networks, this thermal energy is usually wasted via cooling towers or discharged into rivers, lakes or the sea. Micro-CHP systems allow highly efficient cogeneration that uses the waste heat even when the served heat load is rather low. This allows cogeneration outside population centers, even where there is no district heating network. It is efficient to generate electricity near the place where the heat can also be used. Small power plants (μCHP) are located in individual buildings, where the heat supports the heating system and recharges the domestic hot water tank, saving heating oil or gas. CHP systems are able to increase the total energy utilization of primary energy sources. Thus CHP has been steadily gaining popularity in all sectors of the energy economy, due to the increased costs of electricity and fuel, particularly fossil fuels, and due to environmental concerns, particularly climate change.
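The Carnot limit behind those waste-heat fractions can be checked directly. The 550 °C / 30 °C temperatures and the 40% real-plant efficiency below are illustrative assumptions for a typical steam plant:

```python
def carnot_limit(t_hot_c, t_cold_c):
    """Maximum fraction of heat convertible to work between two
    reservoirs (Carnot's theorem); temperatures given in °C."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1 - t_cold / t_hot

# A steam plant with 550 °C steam rejecting heat at 30 °C:
eta_max = carnot_limit(550, 30)   # ideal limit, a bit over 60%
eta_real = 0.40                   # assumed real-world electrical efficiency
waste_fraction = 1 - eta_real     # the 40–60% rejected as low-grade heat
```

Even the ideal cycle must reject over a third of the fuel energy as heat; real plants, operating well below the Carnot limit, reject the 40–60% quoted in the text.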
In a traditional power plant delivering electricity to consumers, about 34.4% of the primary energy of the input fuel, such as coal, natural gas, uranium, petroleum, solar thermal, or biomass, reaches the consumer as electricity, although the efficiency can be as low as 20% for very old plants and as high as 45% for newer gas plants. In contrast, a CHP system converts 15%–42% of the primary energy to electricity, and most of the remaining heat is captured for hot water or space heating. In total, over 90% of the energy in the fuel (LHV basis) can be used when heat production does not exceed the thermal demand. Since the year 2000, micro-CHP has become cost-effective in many markets around the world, due to rising energy costs. Its development has also been facilitated by recent technological advances in small heat engines, including improved performance and cost-effectiveness of fuel cells, Stirling engines, steam engines, gas turbines, diesel engines and Otto engines. Combined heat and power (CHP) systems for homes or small commercial buildings are usually fueled by natural gas to produce electricity and heat. Where there is no access to the natural gas network, which is in general the cheapest option, LPG, LNG or heating fuel (diesel) may be an alternative. The PEMFC fuel cell mCHP operates at low temperatures (50 to 100 °C) and needs high-purity hydrogen; it is prone to contamination, so development efforts target higher operating temperatures and improved fuel reformers. The SOFC fuel cell mCHP operates at a high temperature (500 to 1,000 °C) and handles different fuel sources well, but the high temperature requires expensive materials, so development efforts target lower operating temperatures. Because of the higher temperature, the SOFC generally has a longer start-up time and needs continuous heat output even at times when there is no thermal demand.
CHP systems linked to absorption chillers can use waste heat for refrigeration. A 2013 UK report from Ecuity Consulting stated that mCHP is the most cost-effective method of using gas to generate energy at the domestic level. The Fuel Cell Industry Review stated in 2013 that fuel cell micro-CHP had overtaken conventional engine-based micro-CHP systems in 2012, accounting for 64% of global sales. Technologies Micro-CHP engine systems are currently based on several different technologies: Internal combustion engines Stirling engines Fuel cells Microturbines Steam engines/steam motors (using either traditional water or organic chemicals such as refrigerants) Fuels There are many types of fuels and sources of heat that may be considered for micro-CHP. The properties of these sources vary in terms of system cost, heat cost, environmental effects, convenience, ease of transportation and storage, system maintenance, and system life. Some of the heat sources and fuels being considered for use with micro-CHP include natural gas, LPG, biomass, vegetable oil (such as rapeseed oil), woodgas, solar thermal, and lately also hydrogen, as well as multi-fuel systems. The energy sources with the lowest emissions of particulates and net carbon dioxide include solar power, hydrogen, biomass (with two-stage gasification into biogas), and natural gas. Due to the high efficiency of the CHP process, cogeneration still has lower carbon emissions than energy conversion in fossil-fired boilers or thermal power plants. The majority of cogeneration systems use natural gas for fuel because it burns easily and cleanly, can be inexpensive, is available in most areas, and is easily transported through pipelines that already serve over 60 million homes. Engine types Reciprocating internal combustion engines are the most popular type of engine used in micro-CHP systems.
Reciprocating internal combustion engine based systems can be sized such that the engine operates at a single fixed speed, usually resulting in a higher electrical or total efficiency. However, since reciprocating internal combustion engines can modulate their power output by changing their operating speed and fuel input, micro-CHP systems based on them can vary their electrical and thermal output to meet changing demand. Natural gas is suitable for internal combustion engines, such as Otto engine and gas turbine systems. Gas turbines are used in many small systems due to their high efficiency, small size, clean combustion, durability and low maintenance requirements. Gas turbines designed with foil bearings and air-cooling operate without lubricating oil or coolants. The waste heat of gas turbines is mostly in the exhaust, whereas the waste heat of reciprocating internal combustion engines is split between the exhaust and the cooling system. External combustion engines can run on any high-temperature heat source. These engines include the Stirling engine, the hot gas turbocharger, and the steam engine. These range from 10% to 20% efficiency, and as of 2014, small quantities are in production for micro-CHP products. Other possibilities include the organic Rankine cycle, which operates at lower temperatures and pressures using low-grade heat sources. The primary advantage of this is that the equipment is essentially an air-conditioning or refrigeration unit operating as an engine, so the piping and other components need not be designed for extreme temperatures and pressures, reducing cost and complexity. Electrical efficiency suffers, but it is presumed that such a system would use waste heat or a heat source such as a wood stove or gas boiler that would exist anyway for space heating.
The future of combined heat and power, particularly for homes and small businesses, will continue to be affected by the price of fuel, including natural gas. As fuel prices continue to climb, the economics will become more favorable for energy conservation measures and more efficient energy use, including CHP and micro-CHP. Fuel cells Fuel cells generate electricity and heat as a by-product. The advantages of a stationary fuel cell application over Stirling-engine CHP are no moving parts, less maintenance, and quieter operation. Surplus electricity can be delivered back to the grid. PEMFC fuel cells fueled by natural gas or propane use a steam reformer to convert the methane in the gas supply into carbon dioxide and hydrogen; the hydrogen then reacts with oxygen in the fuel cell to produce electricity. A PEMFC-based micro-CHP has an electrical efficiency of 37% LHV (33% HHV) and a heat recovery efficiency of 52% LHV (47% HHV), with a service life of 40,000 hours or 4,000 start/stop cycles, equal to about ten years of use. An estimated 138,000 fuel cell CHP systems below 1 kW had been installed in Japan by the end of 2014. Most of these CHP systems are PEMFC based (85%) and the remainder are SOFC systems. As of 2013, lifetime is around 60,000 hours. For PEM fuel cell units, which shut down at night, this equates to an estimated lifetime of between ten and fifteen years. United States Department of Energy (DOE) technical targets cover 1–10 kW residential combined heat and power fuel cells operating on natural gas. Notes to those targets: (1) Standard utility natural gas delivered at typical residential distribution line pressures. (2) Regulated AC net/lower heating value of fuel. (3) Only heat available at 80 °C or higher is included in the CHP energy efficiency calculation. (4) Cost includes materials and labor costs to produce the stack, plus any balance of plant necessary for stack operation; cost is defined at 50,000 units/year production (250 MW in 5 kW modules).
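The paired LHV and HHV figures above are related by the fuel's heating-value ratio. For natural gas (mostly methane) the LHV is roughly 90% of the HHV; the 0.901 ratio used here is an approximation for pipeline gas, not a quoted figure:

```python
# Approximate LHV/HHV ratio for pipeline natural gas (assumption).
LHV_OVER_HHV = 0.901

def to_hhv(eta_lhv):
    """Convert an LHV-based efficiency to an HHV basis: the same
    useful output divided by a larger fuel-energy denominator."""
    return eta_lhv * LHV_OVER_HHV

elec_hhv = to_hhv(0.37)   # ~0.33, matching the 33% HHV electrical figure
heat_hhv = to_hhv(0.52)   # ~0.47, matching the 47% HHV heat recovery figure
```

This is why HHV-quoted efficiencies always look a few points worse than LHV-quoted ones for the same machine: the denominator includes the latent heat of the water vapor in the exhaust.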
(5) Based on an operating cycle to be released in 2010. (6) Time until >20% net power degradation. Thermoelectrics Thermoelectric generators operating on the Seebeck effect show promise due to their total absence of moving parts. Efficiency, however, is the major concern, as most thermoelectric devices fail to achieve 5% efficiency even with large temperature differences. Solar micro-CHP CPVT Solar micro-CHP can be achieved with a photovoltaic thermal hybrid solar collector. Another option is concentrated photovoltaics and thermal (CPVT), also sometimes called combined heat and power solar (CHAPS), a cogeneration technology used in concentrated photovoltaics that produces both electricity and heat in the same module. The heat may be employed in district heating, water heating and air conditioning, desalination or process heat. CPVT systems are currently in production in Europe, with Zenith Solar developing CPVT systems with a claimed efficiency of 72%. Sopogy produces a micro concentrated solar power (microCSP) system based on parabolic troughs which can be installed above buildings or homes; the heat can be used for water heating or solar air conditioning, and a steam turbine can also be installed to produce electricity. CHP+PV The recent development of small-scale CHP systems has provided the opportunity for in-house power backup of residential-scale photovoltaic (PV) arrays. The results of a recent study show that a PV+CHP hybrid system not only has the potential to radically reduce energy waste in the status quo electrical and heating systems, but it also enables the share of solar PV to be expanded by about a factor of five. In some regions, in order to reduce waste from excess heat, an absorption chiller has been proposed to utilize the CHP-produced thermal energy for cooling of the PV-CHP system. These trigeneration+PV systems have the potential to save even more energy.
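The sub-5% thermoelectric figure can be reproduced from the standard textbook efficiency formula for a generator with figure of merit ZT. The temperatures and ZT = 1 (a good commercial material) below are illustrative assumptions, and for simplicity ZT is treated as constant rather than evaluated at the mean temperature:

```python
from math import sqrt

def te_efficiency(t_hot_k, t_cold_k, zt):
    """Maximum efficiency of a thermoelectric generator with figure
    of merit ZT (standard textbook expression): the Carnot factor
    times a material factor that grows slowly with ZT."""
    carnot = 1 - t_cold_k / t_hot_k
    m = sqrt(1 + zt)
    return carnot * (m - 1) / (m + t_cold_k / t_hot_k)

# Even with ZT = 1 and a 100 K temperature difference, the *ideal*
# limit is already below 5%; real modules do worse still.
eta = te_efficiency(400.0, 300.0, 1.0)
```

The Carnot factor caps what any heat engine could do, and the material factor (only about 0.19 at ZT = 1 here) is what keeps thermoelectrics near the 5% figure quoted in the text.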
Net metering To date, micro-CHP systems achieve much of their savings, and thus attractiveness to consumers, from the value of the electricity displaced by self-generated power. A "generate-and-resell" or net metering model supports this: home-generated power exceeding the instantaneous in-home needs is sold back to the electrical utility. This system is efficient because the energy is distributed and consumed instantaneously over the electrical grid. The main losses are in transmission from source to consumer, which are typically smaller than the losses incurred by storing energy locally or generating power at less than the peak efficiency of the micro-CHP system. So, from a purely technical standpoint, dynamic demand management and net metering are very efficient. Another advantage of net metering is that it is fairly easy to configure. The user's electrical meter can easily record electrical energy exiting as well as entering the home or business. For a grid with relatively few micro-CHP users, no design changes to the electrical grid need be made. Additionally, in the United States, federal and now many state regulations require utility operators to compensate anyone adding power to the grid. From the standpoint of the grid operator, these points present operational and technical as well as administrative burdens. As a consequence, most grid operators compensate non-utility power contributors at less than or equal to the rate they charge their customers. While this compensation scheme may seem almost fair at first glance, it only reflects the consumer's cost savings of not purchasing utility power, not the micro-CHP operator's true cost of generation and operation. Thus from the standpoint of micro-CHP operators, net metering is not ideal.
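The compensation asymmetry described above amounts to simple settlement arithmetic. The monthly energy flows and the tariff rates below are illustrative assumptions:

```python
def monthly_bill(import_kwh, export_kwh, retail_rate, export_rate):
    """Settlement under a net-metering-like tariff where exports are
    credited at a (usually lower) export rate. All rates in $/kWh."""
    return import_kwh * retail_rate - export_kwh * export_rate

# A micro-CHP household importing 150 kWh and exporting 90 kWh a month:
full_credit = monthly_bill(150, 90, 0.30, 0.30)   # exports at retail rate
typical     = monthly_bill(150, 90, 0.30, 0.12)   # exports below retail
```

With exports credited at the full retail rate the bill is $18; with the more typical below-retail export rate it nearly doubles to $34.20, which is the micro-CHP operator's complaint in a nutshell.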
While net metering is a very efficient mechanism for using excess energy generated by a micro-CHP system, it does have disadvantages: while the main generating source on the electrical grid is a large commercial generator, net-metered generators "spill" power onto the grid in a haphazard and unpredictable fashion. However, the effect is negligible if only a small percentage of customers generate electricity and each generates a relatively small amount; when an oven or space heater is turned on, about the same amount of electricity is drawn from the grid as a home generator puts out. If the percentage of homes with generating systems becomes large, the effect on the grid may become significant, and coordination among the generating systems in homes and the rest of the grid may be necessary for reliable operation and to prevent damage to the grid. Market status Japan As of 2009, the largest deployment of micro-CHP was in Japan, with over 90,000 units in place, the vast majority being of Honda's "ECOWILL" type. Six Japanese energy companies launched the 300 W–1 kW PEMFC/SOFC ENE FARM product in 2009, with 3,000 installed units in 2008, a production target of 150,000 units for 2009–2010 and a target of 2,500,000 units in 2030. 20,000 units were sold in 2012 within the Ene Farm project, making an estimated total of 50,000 PEMFC and up to 5,000 SOFC installations. For 2013, a state subsidy for 50,000 units is in place. The ENE FARM project will pass 100,000 systems in 2014; 34,213 PEMFC and 2,224 SOFC units were installed in the period 2012–2014, 30,000 units on LNG and 6,000 on LPG. ECOWILL Sold by various gas companies and, as of 2013, installed in a total of 131,000 homes. Manufactured by Honda using their single-cylinder EXlink engine capable of burning natural gas or propane. Each unit produces 1 kW of electricity and 2.8 kW of hot water. PEMFC As of December 2012, Panasonic and Tokyo Gas Co., Ltd.
sold about 21,000 PEM Ene-Farm units in Japan for a price of $22,600 before installation. Toshiba and Osaka Gas Co., Ltd./Nichigas had installed 6,500 PEM ENE FARM units (manufactured by CHOFU SEISAKUSHO Co., Ltd.) as of November 2011. SOFC In the middle of 2012, JX Nippon Oil Co. & Sanyo and Seibu Gas Energy Co. sold around 4,000 SOFC Ene Farm units. Aisin Seiki, in combination with Osaka Gas, Kyocera, Toyota and Chofu Seisakusho, started sales of the SOFC ENE-FARM Type S in April 2012 for around $33,500 before installation. NGK is a manufacturer of 700 W–1 kW mCHP units; Miura Kogyo and Sumitomo Precision Products offer a 4.2 kW unit; Toto Ltd. is another manufacturer. South Korea In South Korea, subsidies will start at 80 percent of the cost of a domestic fuel cell. The Renewable Portfolio Standard program with renewable energy certificates runs from 2012 to 2022. Quota systems favor large, vertically integrated generators and multinational electric utilities, if only because certificates are generally denominated in units of one megawatt-hour. They are also more difficult to design and implement than a feed-in tariff. Around 350 residential mCHP units were installed in 2012. PEMFC by GS FuelCell, FuelCell Power, Hyundai Hysco (JV with Plug Power) and Hyosung; SOFC by KEPRI, LS Industrial Systems (from ClearEdge Power), and Samsung Everland (ClearEdge Power); MCFC by POSCO Energy (FuelCell Energy) and Doosan; PAFC by Doosan Fuel Cell America; AFC by AFC Energy. Europe The European public–private partnership Fuel Cells and Hydrogen Joint Undertaking Seventh Framework Programme project ene.field aims to deploy up to 1,000 residential fuel cell combined heat and power (micro-CHP) installations in 12 EU member states by 2017. The programme brings together 9 mature European micro FC-CHP manufacturers into a common analysis framework to deliver trials across all of the available fuel cell CHP technologies.
Fuel cell micro-CHP trials will be installed and actively monitored in dwellings across the range of European domestic heating markets, dwelling types and climatic zones, which will lead to an invaluable dataset on domestic energy consumption and micro-CHP applicability across Europe. The ene.field project also brings together over 30 utilities, housing providers and municipalities to bring the products to market and explore different business models for micro-CHP deployment. Sweden Powercell Sweden is a fuel cell company that develops environmentally friendly electric generators with fuel cell and reformer technology suitable for both existing and future fuels. Germany In Germany, about 50 MW of mCHP units of up to 50 kW had been installed as of 2015. The German government is offering large CHP incentives, including a market premium on electricity generated by CHP and an investment bonus for micro-CHP units. The German testing project Callux had 500 mCHP installations as of November 2014. North Rhine-Westphalia launched a 250 million subsidy program for units up to 50 kW lasting until 2017. PEMFC BDR Thermea/BAXI (Toshiba) Viessmann (Panasonic) Elcore, a 300 W add-on. Tropical Dantherm Power Riesaer Brennstoffzellentechnik GmbH (Inhouse Engineering) SOFC Center for Fuel Cell Technology (ZBT) (JX Nippon) Ceramic Fuel Cells is installing up to 100 SOFC units through 2014 under the SOFT-PACT project with E.ON in Germany and the UK. A factory in Heinsberg, Germany, for the production of SOFC-based micro-CHP units started in June 2009, aiming to produce 10,000 two-kilowatt units per year. Vaillant (Sunfire/Staxera) Buderus/Junkers – Bosch Thermotechnik (Aisin Seiki) SOFCpower/Ariston Itho-Daalderop (Ceres Power) Viessmann (HEXIS) UK It is estimated that about 1,000 micro-CHP systems were in operation in the UK as of 2002. These are primarily Whispergen units using Stirling engines, and Senertec Dachs reciprocating engines.
The market is supported by the government through regulatory work, and some government research money is expended through the Energy Saving Trust and Carbon Trust, which are public bodies supporting energy efficiency in the UK. Effective 7 April 2005, the UK government cut the VAT from 17.5% to 5% for micro-CHP systems, in order to support demand for this emerging technology at the expense of existing, less environmentally friendly technology. Of the 24 million households in the UK, as many as 14 to 18 million are thought to be suitable for micro-CHP units. PEMFC In early 2012, fewer than 1,000 of the 1 kWe Baxi-Innotech PEM micro-CHP units from BDR Thermea had been installed. IE-CHP SOFC A Ceres Power factory in Horsham, UK, for the production of SOFC-based micro-CHP units is expected to start low-volume production in the second half of 2009. Ceramic Fuel Cells Denmark The Danish mCHP project, running from 2007 to 2014 with 30 units, is on the island of Lolland and in the western town of Varde. Denmark is currently part of the ene.field project. EWII Fuel Cell Dantherm Power (Ballard Power) The Netherlands The micro-CHP subsidy was ended in 2012. To test the effects of mCHP on a smart grid, 45 natural gas SOFC units (each 1.5 kW) from Republiq Power (Ceramic Fuel Cells) will be placed on Ameland in 2013 to function as a virtual power plant. United States The federal government is offering a 10% tax credit for smaller CHP and micro-CHP commercial applications. In 2007, the United States company "Climate Energy" of Massachusetts introduced the "Freewatt", a micro-CHP system based on a Honda MCHP engine bundled with a gas furnace (for warm-air systems) or boiler (for hydronic or forced hot water heating systems). AFC Doosan Fuel Cell America PEMFC Plug Power (Ballard Power Systems) The Freewatt is no longer commercially available (since at least 2014). Through testing it was found to operate at 23.4% electrical efficiency and 51% waste heat recovery efficiency.
Marathon Engine Systems, a Wisconsin company, produces a variable electrical and thermal output micro-CHP system called the ecopower, with an electrical output of 2.2–4.7 kWe. The ecopower was independently measured to operate at 24.4% electrical efficiency and 70.1% waste heat recovery efficiency. Canada Hyteon PEM Through a pilot program scheduled for mid-2009 in the Canadian province of Ontario, the Freewatt system is being offered by home builder Eden Oak with support from ECR International, Enbridge Gas Distribution and National Grid. Research Testing is underway in Ameland, the Netherlands, in a three-year field trial running until 2010 of HCNG, where 20% hydrogen is added to the local CNG distribution net; the appliances involved are kitchen stoves, condensing boilers, and micro-CHP boilers. Micro-CHP Accelerator, a field trial performed between 2005 and 2008, studied the performance of 87 Stirling engine and internal combustion engine devices in residential houses in the UK. This study found that the devices resulted in average carbon savings of 9% for houses with heat demand over 54 GJ/year. An ASME (American Society of Mechanical Engineers) paper fully describes the performance and operating experience with two residential-sized combined heat and power units which were in operation from 1979 through 1995. Oregon State University, funded by the U.S. Department of Energy's Advanced Research Projects Agency–Energy (ARPA-E), tested state-of-the-art micro-CHP systems in the United States. The results showed that the nominally 1 kWe state-of-the-art micro-CHP system operated at an electrical and total efficiency (LHV based) of 23.4% and 74.4%, respectively. The nominally 5 kWe state-of-the-art system operated at an electrical and total efficiency (LHV based) of 24.4% and 94.5%, respectively. The most popular 7 kWe home backup generator (not CHP) operated at an electrical efficiency (LHV based) of 21.5%.
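The efficiency figures quoted in these tests are straightforward ratios of energy flows. The 100 kWh fuel input below is an arbitrary illustrative basis; the 24.4 kWh electricity and 70.1 kWh heat are chosen to reproduce the quoted percentages, not measured values:

```python
def efficiencies(fuel_lhv_kwh, elec_kwh, heat_kwh):
    """Electrical and total (LHV-basis) efficiency from measured
    energy flows over some test period."""
    eta_el = elec_kwh / fuel_lhv_kwh
    eta_total = (elec_kwh + heat_kwh) / fuel_lhv_kwh
    return eta_el, eta_total

# Flows consistent with the nominally 5 kWe system quoted in the text:
# 24.4% electrical, 94.5% total, i.e. 70.1% of fuel energy recovered as heat.
eta_el, eta_tot = efficiencies(100.0, 24.4, 70.1)
```

Note that the 94.5% total minus the 24.4% electrical leaves exactly the 70.1% heat recovery figure reported for the ecopower, so the two quoted results are mutually consistent.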
The price of the emergency backup generator was an order of magnitude lower than that of the 5 kWe generator, but its projected life span was over two orders of magnitude lower. These results show the trade-off between efficiency, cost, and durability. The U.S. Department of Energy's Advanced Research Projects Agency–Energy (ARPA-E) has provided $25 million towards mCHP research in the GENerators for Small Electrical and Thermal Systems (GENSETS) program. Twelve project teams have been selected to develop a 1 kWe mCHP technology that can achieve 40% electrical efficiency, have a 10-year system life, and cost under $3,000.
Technology
Power generation
1381306
https://en.wikipedia.org/wiki/Sweat%20gland
Sweat gland
Sweat glands, also known as sudoriferous or sudoriparous glands, are small tubular structures of the skin that produce sweat. Sweat glands are a type of exocrine gland, which are glands that produce and secrete substances onto an epithelial surface by way of a duct. There are two main types of sweat glands that differ in their structure, function, secretory product, mechanism of excretion, anatomic distribution, and distribution across species: Eccrine sweat glands are distributed almost all over the human body, in varying densities, with the highest density on the palms and soles, then on the head, but much less on the trunk and the extremities. Their water-based secretion represents a primary form of cooling in humans. Apocrine sweat glands are mostly limited to the axillae (armpits) and perineal area in humans. They are not significant for cooling in humans, but are the sole effective sweat glands in hoofed animals, such as camels, donkeys, horses, and cattle. Ceruminous glands (which produce ear wax), mammary glands (which produce milk), and ciliary glands in the eyelids are modified apocrine sweat glands. Structure Generally, sweat glands consist of a secretory unit that produces sweat, and a duct that carries the sweat away. The secretory coil, or base, is set deep in the lower dermis and hypodermis, and the entire gland is surrounded by adipose tissue. In both sweat gland types, the secretory coils are surrounded by contractile myoepithelial cells that facilitate excretion of the secretory product. The secretory activities of the gland cells and the contractions of the myoepithelial cells are controlled both by the autonomic nervous system and by circulating hormones. The distal or apical part of the duct that opens to the skin's surface is known as the acrosyringium. Each sweat gland receives several nerve fibers that branch out into bands of one or more axons and encircle the individual tubules of the secretory coil.
Capillaries are also interwoven among the sweat tubules. Distribution The number of active sweat glands varies greatly among different people, though comparisons between different areas (e.g., axillae vs. groin) show the same directional changes (certain areas always have more active sweat glands while others always have fewer). According to Henry Gray's estimates, the palm has around 370 sweat glands per cm2; the back of the hand has 200 per cm2; the forehead has 175 per cm2; the breast, abdomen, and forearm have 155 per cm2; and the back and legs have 60–80 per cm2. In the finger pads, sweat gland pores are somewhat irregularly spaced on the epidermal ridges. There are no pores between the ridges, though sweat tends to spill into them. The thick epidermis of the palms and soles causes the sweat glands there to become spirally coiled. Other animals Non-primate mammals have eccrine sweat glands only on the palms and soles. Apocrine glands cover the rest of the body, though they are not as effective as humans' in temperature regulation (with the exception of horses'). Prosimians have a 1:20 ratio of follicles with apocrine glands versus follicles without. They have eccrine glands between hairs over most of their body (while humans have them between the hairs on their scalp). The overall distribution of sweat glands varies among primates: the rhesus and patas monkeys have them on the chest; the squirrel monkey has them only on the palms and soles; and the stump-tailed macaque, Japanese monkey, and baboon have them over the entire body. Domestic animals have apocrine glands at the base of each hair follicle, but eccrine glands only in foot pads and snout. Their apocrine glands, like those in humans, produce an odorless, oily, milky secretion that evolved not to evaporate and cool, but rather to coat and stick to hair so that odor-causing bacteria can grow on it.
Eccrine glands on their foot pads, like those on the palms and soles of humans, did not evolve to cool either, but rather to increase friction and enhance grip. Dogs and cats have apocrine glands that are specialized in both structure and function, located at the eyelids (Moll's glands), ears (ceruminous glands), anal sac, clitoral hood, and circumanal area. History Eccrine sweat pores were first identified by the Italian physiologist Marcello Malpighi. Sweat glands themselves were first discovered by the Czech physiologist Johannes Purkinje in 1833. The differing densities of sweat glands in different body regions were first investigated in 1844 by the German anatomist Karl Krause. Sweat glands were first separated into kinds by the French histologist Louis-Antoine Ranvier, who in 1887 divided them by type of secretion into holocrine glands (sebaceous glands) and merocrine glands (sweat glands); the latter were divided in 1917 into apocrine and eccrine sweat glands. In 1987, apoeccrine glands were identified. Types Eccrine Eccrine sweat glands are found everywhere except the lips, ear canal, foreskin, glans penis, labia minora, clitoral hood, and clitoris. They are ten times smaller than apocrine sweat glands, do not extend as deeply into the dermis, and excrete directly onto the surface of the skin. The proportion of eccrine glands decreases with age. The clear secretion produced by eccrine sweat glands is termed sweat or sensible perspiration. Sweat is mostly water, but it does contain some electrolytes, since it is derived from blood plasma. The presence of sodium chloride gives sweat a salty taste. The total volume of sweat produced depends on the number of functional glands and the size of the surface opening. The degree of secretory activity is regulated by neural and hormonal mechanisms (men sweat more than women).
When all of the eccrine sweat glands are working at maximum capacity, the rate of perspiration for a human being may exceed three liters per hour, and dangerous losses of fluids and electrolytes can occur. Eccrine glands have three primary functions: Thermoregulation: sweat (through evaporation and evaporative heat loss) can cool the surface of the skin and reduce body temperature. Excretion: eccrine sweat gland secretion can also provide a significant excretory route for water and electrolytes. Protection: eccrine sweat gland secretion aids in preserving the skin's acid mantle, which helps protect the skin from colonization by bacteria and other pathogenic organisms. Apocrine Apocrine sweat glands are found in the armpit, areola (around the nipples), perineum (between the anus and genitals), ear, and eyelids. The secretory portion is larger than that of eccrine glands (making them larger overall). Rather than opening directly onto the surface of the skin, apocrine glands secrete sweat into the pilary canal of the hair follicle. Before puberty, the apocrine sweat glands are inactive; hormonal changes in puberty cause the glands to increase in size and begin functioning. The substance secreted is thicker than eccrine sweat and provides nutrients for bacteria on the skin: the bacteria's decomposition of sweat is what creates the acrid odor. Apocrine sweat glands are most active in times of stress and sexual excitement. In mammals (including humans), apocrine sweat contains pheromone-like compounds to attract other organisms within their species. Study of human sweat has revealed differences between men and women in apocrine secretions and bacteria. Apoeccrine Some human sweat glands cannot be classified as either apocrine or eccrine, having characteristics of both; such glands are termed apoeccrine. They are larger than eccrine glands, but smaller than apocrine glands.
Their secretory portion has a narrow section similar to the secretory coils of eccrine glands, as well as a wide section reminiscent of apocrine glands. Apoeccrine glands, found in the armpits and perianal region, have ducts opening onto the skin surface. They are presumed to develop at puberty from the eccrine glands, and can comprise up to 50% of all axillary glands. Apoeccrine glands secrete more sweat than both eccrine and apocrine glands, thus playing a large role in axillary sweating. Apoeccrine glands are sensitive to cholinergic activity, though they can also be activated via adrenergic stimulation. Like eccrine glands, they continuously secrete a thin, watery sweat. Others Specialized sweat glands, including the ceruminous glands, mammary glands, ciliary glands of the eyelids, and sweat glands of the nasal vestibulum, are modified apocrine glands. Ceruminous glands are near the ear canals, and produce cerumen (earwax) that mixes with the oil secreted from sebaceous glands. Mammary glands use apocrine secretion to produce milk. Sweat Sweat glands are used to regulate temperature and remove waste by secreting water, sodium salts, and nitrogenous waste (such as urea) onto the skin surface. The main electrolytes of sweat are sodium and chloride, though the amounts are small enough to make sweat hypotonic at the skin surface. Eccrine sweat is clear, odorless, and composed of 98–99% water; it also contains NaCl, fatty acids, lactic acid, citric acid, ascorbic acid, urea, and uric acid. Its pH ranges from 4 to 6.8. Apocrine sweat, by contrast, has a pH of 6 to 7.5; it contains water, proteins, carbohydrate waste material, lipids, and steroids. This sweat is oily, cloudy, viscous, and originally odorless; it gains odor upon decomposition by bacteria. Because both apocrine glands and sebaceous glands open into the hair follicle, apocrine sweat is mixed with sebum.
Mechanism Both apocrine and eccrine sweat glands use merocrine secretion, where vesicles in the gland release sweat via exocytosis, leaving the entire cell intact. It was originally thought that apocrine sweat glands use apocrine secretion because of histological artifacts resembling "blebs" on the cell surface; however, recent electron micrographs indicate that the cells use merocrine secretion. In both apocrine and eccrine sweat glands, the sweat is originally produced in the gland's coil, where it is isotonic with the blood plasma there. When the rate of sweating is low, salt is conserved and reabsorbed by the gland's duct; high sweat rates, on the other hand, lead to less salt reabsorption and allow more water to evaporate on the skin (via osmosis) to increase evaporative cooling. Secretion of sweat occurs when the myoepithelial cells surrounding the secretory glands contract. Eccrine sweat increases the rate of bacterial growth and volatilizes the odor compounds of apocrine sweat, strengthening the latter's acrid smell. Normally, only a certain number of sweat glands are actively producing sweat. When stimuli call for more sweating, more sweat glands are activated, with each then producing more sweat. Stimuli Thermal Both eccrine and apocrine sweat glands participate in thermoregulatory sweating, which is directly controlled by the hypothalamus. Thermal sweating is stimulated by a combination of internal body temperature and mean skin temperature. In eccrine sweat glands, stimulation occurs via activation by acetylcholine, which binds to the gland's muscarinic receptors. Emotional Emotional sweating is stimulated by stress, anxiety, fear, and pain; it is independent of ambient temperature. Acetylcholine acts on the eccrine glands, and adrenaline acts on both eccrine and apocrine glands, to produce sweat. Emotional sweating can occur anywhere, though it is most evident on the palms, soles of the feet, and axillary regions.
Sweating on the palms and soles is thought to have evolved as a fleeing reaction in mammals: it increases friction and prevents slipping when running or climbing in stressful situations. Gustatory Gustatory sweating refers to thermal sweating induced by the ingestion of food. The increase in metabolism caused by ingestion raises body temperature, leading to thermal sweating. Hot and spicy foods also lead to mild gustatory sweating in the face, scalp, and neck: capsaicin (the compound that makes spicy food taste "hot") binds to receptors in the mouth that detect warmth. The increased stimulation of such receptors induces a thermoregulatory response. Antiperspirant Unlike deodorant, which simply reduces axillary odor without affecting body functions, antiperspirant reduces both eccrine and apocrine sweating. Antiperspirants, which are classified as drugs, cause proteins to precipitate and mechanically block eccrine (and sometimes apocrine) sweat ducts. The metal salts found in antiperspirants alter the keratin fibrils in the ducts; the ducts then close and form a "horny plug". The main active ingredients in modern antiperspirants are aluminum chloride, aluminum chlorohydrate, aluminum zirconium chlorohydrate, and buffered aluminum sulfate. For apocrine glands, antiperspirants also contain antibacterial agents such as trichlorocarbanilide, hexamethylene tetramine, and zinc ricinoleate. The salts are dissolved in ethanol and mixed with essential oils high in eugenol and thymol (such as thyme and clove oils). Antiperspirants may also contain levomethamphetamine. Pathology Some diseases of the sweat glands include: Fox-Fordyce disease The apocrine sweat glands become inflamed, causing a persistent, itchy rash, usually in the axillae and pubic areas.
Frey's syndrome If the auriculotemporal nerve is damaged (most often as a result of a parotidectomy), excess sweat can be produced in the rear of the cheek area (just below the ear) in response to stimuli that cause salivation. Heatstroke Occurs when the eccrine glands become exhausted and unable to secrete sweat. Heatstroke can lead to fatal hyperpyrexia (an extreme rise in body temperature). Hidradenitis suppurativa Occurs when the skin and sweat glands become inflamed with swollen lumps. These are typically painful and break open, releasing fluid or pus. The most commonly affected areas are the underarms, under the breasts, and the groin. Hyperhidrosis (also known as polyhidrosis or sudorrhea) is pathological, excessive sweating that can be either generalized or localized (focal hyperhidrosis); focal hyperhidrosis occurs most often on the palms, soles, face, scalp, and axillae. Hyperhidrosis is usually brought on by emotional or thermal stress, but it can also occur with little to no stimulus. Local (or asymmetrical) hyperhidrosis is said to be caused by problems in the sympathetic nervous system: either lesions or nerve inflammation. Hyperhidrosis can also be caused by trench foot or encephalitis. Miliaria rubra Also called prickly heat. Miliaria rubra is the rupture of sweat glands and the migration of sweat to other tissues. In hot environments, the skin's horny layer can expand due to sweat retention, blocking the ducts of eccrine sweat glands. The glands, still stimulated by high temperatures, continue to secrete. Sweat builds up in the duct, causing enough pressure to rupture the duct where it meets the epidermis. Sweat also escapes the duct into adjacent tissues (a process called miliaria). Hypohidrosis then follows miliaria (postmiliarial hypohidrosis). Osmidrosis Often called bromhidrosis, especially in combination with hyperhidrosis. Osmidrosis is excessive odor from the apocrine sweat glands (which are overactive in the axillae).
Osmidrosis is thought to be caused by changes in the apocrine gland structure rather than changes in the bacteria that act on sweat. Tumors Sweat gland tumors include: Acrospiroma Aggressive digital papillary adenocarcinoma Apocrine gland carcinoma Ceruminoma Cutaneous myoepithelioma Cylindroma Eccrine carcinoma Hidradenoma papilliferum Hidrocystoma Microcystic adnexal carcinoma Mucinous carcinoma Papillary eccrine adenoma Poroma Porocarcinoma Syringadenoma papilliferum Syringofibroadenoma Syringoma Adenolipomas are lipomas associated with eccrine sweat glands. As signs in other illnesses Many diseases cause sweat gland dysfunction: Acromegaly, a result of excess growth hormone, causes the sweat glands to increase in size, which leads to thicker skin. Aquagenic wrinkling of the palms, in which white papules develop on the palms after exposure to water, is sometimes accompanied by abnormal aquaporin 5 in the sweat glands. Cystic fibrosis can be diagnosed by a sweat test, as the disease causes the sweat glands' ducts to reabsorb less chloride, leading to higher concentrations of chloride in the secreted sweat. Ectodermal dysplasia can present with a lack of sweat glands. Fabry disease, characterized by excess globotriaosylceramide (GL3), causes a decrease in sweat gland function due to GL3 deposits in the eccrine glands. GM1 gangliosidoses, characterized by abnormal lipid storage, lead to vacuolization in eccrine sweat gland cells. Hunter syndrome can include metachromatic granules and mucin in the cytoplasm of the eccrine sweat gland cells. Hypothyroidism's low levels of thyroid hormone lead to decreased secretions from sweat glands; the result is dry, coarse skin. Kearns–Sayre syndrome, a disease of the mitochondria, involves abnormal mitochondria in eccrine sweat glands. Lafora disease is a rare genetic disorder marked by the presence of abnormal polyglucosan deposits.
These "Lafora bodies" appear in the ducts of sweat glands, as well as the myoepithelial cells of apocrine glands. Lichen striatus, a self-limited eruption of small, slightly scaly papules, includes a lymphoid infiltrate around eccrine sweat glands. Metachromatic leukodystrophy, a lysosomal storage disease, leads to the accumulation of lipopigments and lysosomal residual bodies in the epithelial cells of sweat glands. Neuronal ceroid lipofuscinosis causes abnormal deposits of lipopigment in sweat gland epithelial cells (among other places). Neutral lipid storage disease includes abnormal lipid deposits in cells, including those of the sweat gland. Niemann-Pick disease type C, another lipid storage disease, includes abnormal lipid storage in sweat glands. In Schindler disease, eccrine sweat gland cells develop cytoplasmic vacuoles that appear empty or contain filamentous material. Small fiber peripheral neuropathy can damage the nerves that control the sweat glands. The sweat gland nerve fiber density test can diagnose this condition.
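The cystic fibrosis sweat test mentioned above works by comparing the chloride concentration of secreted sweat against diagnostic cutoffs. The sketch below uses commonly cited guideline values (roughly 60 mmol/L and above as consistent with the disease, 30–59 mmol/L as intermediate); these cutoffs are an assumption for illustration and are not taken from the article.

```python
# Illustrative interpretation of a sweat chloride test for cystic
# fibrosis. Cutoffs are assumed guideline values, for illustration only.
def interpret_sweat_chloride(mmol_per_liter):
    """Map a sweat chloride concentration to an interpretation string."""
    if mmol_per_liter >= 60:
        return "consistent with cystic fibrosis"
    if mmol_per_liter >= 30:
        return "intermediate; further testing needed"
    return "cystic fibrosis unlikely"

print(interpret_sweat_chloride(75))  # consistent with cystic fibrosis
print(interpret_sweat_chloride(15))  # cystic fibrosis unlikely
```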
https://en.wikipedia.org/wiki/Chlorocebus
Chlorocebus
Chlorocebus is a genus of medium-sized primates from the family of Old World monkeys. Six species are currently recognized, although some people classify them all as a single species with numerous subspecies. Either way, they make up the entirety of the genus Chlorocebus. Confusingly, the terms "vervet monkey" and "green monkey" are sometimes used to refer to the whole genus Chlorocebus, though they also refer more precisely to species Chlorocebus pygerythrus and Chlorocebus sabaeus, respectively, neither of which is the type species for Chlorocebus. This article uses the term Chlorocebus consistently for the genus and the common names only for the species. The native range of these monkeys is sub-Saharan Africa from Senegal and Ethiopia south to South Africa. However, in previous centuries, a number of them were taken as pets by early Caribbean settlers and slave traders, and were transported across the Atlantic Ocean to the Caribbean islands. The monkeys subsequently escaped or were released and became naturalized. Today, they are found on the West Indian islands of Barbados, Saint Kitts, Nevis, Anguilla, and Saint Martin. A colony also exists in Broward County, Florida. Taxonomy The classification of the Chlorocebus monkeys is undergoing change. They were previously lumped together with the medium-sized arboreal African monkeys of the guenon genus, Cercopithecus, where they were classified as a single species, Cercopithecus aethiops. More species and subspecies are expected to be identified as scientists study this genus further. The most basal member of the genus is thought to be the dryas monkey (C. dryas), which was previously classified in Cercopithecus and may potentially warrant its own genus. Physical description The dorsal fur of Chlorocebus monkeys varies by species from pale yellow through grey-green brown to dark brown, while the lower portion and the hair ring around the face is a whitish yellow. 
The face, hands, and feet are hairless and black, although their abdominal skin is bluish. Males have a blue scrotum and red penis. The monkeys are sexually dimorphic: adult males are larger and heavier than females. Behavior and ecology Unlike the closely related guenons, Chlorocebus species are not primarily forest dwellers. Rather, they are semi-arboreal and semi-terrestrial, spending most of the day on the ground feeding and then sleeping at night in the trees. However, they must drink each day and are dependent on water, so they are never far from rivers or lakes. Like most other Old World monkeys, they have cheek pouches for storing food. They are diurnal, and are particularly active in the early morning and in the late afternoon or early evening. Chlorocebus monkeys live in multiple male/multiple female groups, which can be as large as 76 individuals. The group hierarchy plays an important role: dominant males and females are given priority in the search for food, and are groomed by subordinate members of the group. They exhibit female philopatry, a social system whereby the females remain in the same home range where they were born, and males leave once sexually mature. These monkeys are territorial animals, and each group occupies a defined home range. They use a wide variety of vocalizations. They can warn members of other groups away from their territory, and they can also warn members of their own troop of dangers from predators, using different calls for different predators. Monkeys scream when they are disciplined by members of the troop. Facial expressions and body posturing serve as additional communication tools. Their social interactions are highly complex. Where alliances can be formed for benefit, deception is sometimes used. Physical affection is important between family members.
Chlorocebus monkeys are, along with chimpanzees and baboons, the most omnivorous of the primates. They will eat leaves, gum, seeds, nuts, grasses, fungi, fruit, berries, flowers, buds, shoots, invertebrates, bird eggs, birds, lizards, rodents, and other vertebrate prey. Their preferred foods are fruit and flowers, seasonal resources, so their diet varies with availability. On the island of Saint Kitts, they commonly steal brightly coloured alcoholic drinks left behind by tourists on the beach. Many tourists have also found that these monkeys will deliver a powerful bite if they are cornered or threatened. In Africa, documented attacks by these monkeys are extremely rare compared with dog attacks, despite the monkeys living in close proximity to humans and often being threatened by humans and their dogs. To signal mating readiness, the female presents her vulva to the male. Since groups contain several more females than males, each male mates with several females. Generally, the male displays a striking, light-blue scrotal pouch, which is most prevalent during the mating season. Males do not take part in raising the young, but other females of the group (the "aunties") share the burden. The dominance hierarchy also comes into play, as the offspring of the more dominant group members get preferential treatment. The gestation time is about 163–165 days, and births are typically of a single young. Births usually happen at the beginning of the rainy season, when sufficient food is available. The young are weaned at about six months of age and are fully mature in four to five years. The life expectancy of the green monkeys is 11–13 years in captivity, and about 10–12 years in the wild. Human interaction In the Caribbean islands, interactions between humans and monkeys are sometimes problematic. On the island of Barbados, farmers complain about the monkeys damaging their crops, and many try to find ways to keep them at bay.
On Halloween 2006, a monkey was suspected of causing an island-wide, eight-hour blackout. The monkey apparently climbed a light pole and tripped 11,000- and 24,000-volt power lines. In some African countries, many monkeys are killed by power lines, dogs, predatory animals (e.g., wild cats), vehicles, shooting, poisoning, and hunting for sport. In addition, desertification has increased and habitat has been lost to agriculture and urbanisation. As a result, troop sizes in urban areas are declining to an average of between 15 and 25 individuals, with many troops disappearing altogether. Use in scientific research and vaccine production The African green monkey has been the focus of much scientific research since the 1950s, and cell lines derived from its tissues are still used today to produce vaccines for polio and smallpox. Chlorocebus species are also important in studying high blood pressure and AIDS. Unlike most other nonhuman primates, they naturally develop high blood pressure. In Africa, the monkeys are widely infected with simian immunodeficiency virus (SIV), related to the ancestor of human immunodeficiency virus (HIV), and the virus is widespread throughout populations. Chlorocebus monkeys are a natural host of SIV and do not succumb to immunodeficiency upon infection; they are therefore an important model in AIDS studies for understanding protective mechanisms against AIDS. Monkeys infected with SIV and humans infected with HIV differ in their microbial responses to infection. Vero cells are a continuous cell line derived from epithelial cells of the African green monkey kidney, and are widely used for research in immunology and infectious disease. Similar cell lines include buffalo green monkey kidney and BS-C-1. Chlorocebus monkeys are an important model organism for studies of AIDS, the microbiome, development, neurobehavior, neurodegeneration, metabolism, and obesity.
A genome of a Chlorocebus monkey (Chlorocebus sabaeus) has been sequenced, and the genome reference with gene models is available in the genome browsers NCBI Chlorocebus_sabeus 1.1 and Ensembl Vervet-AGM (Chlorocebus sabaeus). This has facilitated genomic investigations in this monkey, including population genetics studies across Africa and the Caribbean, and characterization of gene expression regulation across development in brain and peripheral tissues, during prenatal development, and during reaction to psychosocial stress related to relocation and social isolation. An epigenetic clock based on CpG methylation in DNA, a complex biomarker of aging, was developed for Chlorocebus sabaeus in several variants: tissue-specific clocks for brain cortex, blood, and liver; a multi-tissue clock; and human-sabaeus monkey clocks.
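An epigenetic clock of the kind described above typically predicts age as a linear combination of methylation fractions at selected CpG sites, with weights fit on reference samples. The sketch below illustrates only the prediction step; the site names and weights are invented for illustration, and real clocks are trained with penalized regression on many samples.

```python
# Minimal sketch of epigenetic-clock prediction: age is estimated as a
# linear combination of methylation beta values at selected CpG sites.
# Site names and weights below are hypothetical, for illustration only.
INTERCEPT = 2.0
WEIGHTS = {"cg_site_a": 12.5, "cg_site_b": -4.0, "cg_site_c": 7.5}

def predict_age(methylation):
    """methylation: dict mapping CpG site name -> beta value in [0, 1]."""
    return INTERCEPT + sum(w * methylation[site] for site, w in WEIGHTS.items())

sample = {"cg_site_a": 0.6, "cg_site_b": 0.2, "cg_site_c": 0.4}
print(round(predict_age(sample), 2))
```

In practice some clocks apply a nonlinear transform to the linear score (for example, a log-linear mapping for young ages), but the weighted-sum core is the same.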