Dataset columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
24,810,047
https://en.wikipedia.org/wiki/Alcohol%20law
Alcohol laws are laws relating to manufacture, use, as being under the influence of and sale of alcohol (also known formally as ethanol) or alcoholic beverages. Common alcoholic beverages include beer, wine, (hard) cider, and distilled spirits (e.g., vodka, rum, gin). Definition of alcoholic beverage varies internationally, e.g., the United States defines an alcoholic beverage as "any beverage in liquid form which contains not less than one-half of one percent of alcohol by volume". Alcohol laws can restrict those who can produce alcohol, those who can buy it (often with minimum age restrictions and laws against selling to an already intoxicated person), when one can buy it (with hours of serving or days of selling set out), labelling and advertising, the types of alcoholic beverage that can be sold (e.g., some stores can only sell beer and wine), where one can consume it (e.g., drinking in public is not legal in many parts of the US), what activities are prohibited while intoxicated (e.g., drunk driving), and where one can buy it. In some cases, laws have even prohibited the use and sale of alcohol entirely. Temperance movement The temperance movement is a social movement against the consumption of alcoholic beverages. Participants in the movement typically criticize alcohol intoxication or promote complete abstinence (teetotalism), with leaders emphasizing alcohol's negative effects on health, personality, and family life. Typically the movement promotes alcohol education, as well as demands new laws against the selling of alcoholic drinks, or those regulating the availability of alcohol, or those completely prohibiting it. During the 19th and early 20th centuries, the Temperance Movement became prominent in many countries, particularly English-speaking and Scandinavian ones, and it led to Prohibition in the United States from 1920 to 1933. Alcohol laws by country Australia Germany Hong Kong India Turkey United States Alcohol licensing laws by country United Kingdom Ireland Prohibition Some countries forbid alcoholic beverages or have forbidden them in the past. People trying to get around prohibition turn to smuggling of alcohol – known as bootlegging or rum-running – or make moonshine, a distilled beverage in an unlicensed still. Canada Canada imposed prohibition at the beginning of the 20th century, but repealed it in the 1920s. India In India, manufacture, sale or consumption of alcohol is prohibited in the states of Bihar, Gujarat, Manipur and Nagaland, as well as the union territory of Lakshadweep. Prohibition has become controversial in Gujarat, following a July 2009 incident in which widespread poisoning resulted from alcohol that had been sold illegally. All Indian states observe dry days on major religious festivals/occasions depending on the popularity of the festival in that region. Dry days are specific days when the sale of alcohol is banned, although consumption is permitted. Dry days are also observed on voting days. Dry days are fixed by the respective state government. National holidays such as Republic Day (26 January), Independence Day (15 August) and Gandhi Jayanthi (2 October) are usually dry days throughout India. Nordic countries Two Nordic countries (Finland and Norway) had a period of alcohol prohibition in the early 20th century. In Sweden, prohibition was heavily discussed, but never introduced, replaced by strict rationing and later by more lax regulation, which included allowing alcohol to be sold on Saturdays. 
Following the end of prohibition, government alcohol monopolies were established with detailed restrictions and high taxes. Some of these restrictions have since been lifted. For example, supermarkets in Finland were long allowed to sell only fermented beverages with an alcohol content of up to 4.7% ABV, while Alko, the government monopoly, is allowed to sell wine and spirits. The alcohol law in Finland was changed in 2018, allowing grocery stores to sell beverages with an alcohol content of up to 5.5% ABV. This is also the case with the Norwegian Vinmonopolet and the Swedish Systembolaget (though in Sweden the ABV limit in supermarkets is 3.5%). Philippines Under the Omnibus Election Code, the Commission on Elections can impose a prohibition on the sale and purchase of alcoholic and intoxicating drinks on election day and the day before. Certain establishments catering to foreigners can obtain an exemption. United States In the United States, there was an attempt from 1919 to 1933 to eliminate the drinking of alcoholic beverages by means of a national prohibition of their manufacture and sale. This period became known as the Prohibition era. During this time, the 18th Amendment to the Constitution of the United States made the manufacture, sale, and transportation of alcoholic beverages illegal throughout the United States. Prohibition led to the unintended consequence of causing widespread disrespect for the law, as many people procured alcoholic beverages from illegal sources. In this way, a lucrative business was created for illegal producers and sellers of alcohol, which led to the development of organized crime. As a result, Prohibition became extremely unpopular, which ultimately led to the repeal of the 18th Amendment in 1933 via the adoption of the 21st Amendment to the Constitution. Prior to national Prohibition, beginning in the late 19th century, many states and localities had enacted Prohibition within their jurisdictions. After the repeal of the 18th Amendment, some localities (known as dry counties) continue to ban the sale of alcohol, but often not possession or consumption. Between 1832 and 1953, US federal law prohibited the sale of alcohol to Native Americans. The federal legislation was repealed in 1953, and within a few years, most tribes passed their own prohibition laws. As of 2007, 63% of the federally recognized tribes in the lower 48 states had legalized alcohol sales on their reservations. Majority-Muslim countries Some majority-Muslim countries, such as Saudi Arabia, Kuwait, Iran, Somalia, Libya, and Yemen, prohibit the production, sale, and consumption of alcoholic beverages either entirely or for their Muslim citizens, because alcohol is considered haram (forbidden) in Islam. Alcohol was illegal in Sudan, but it was legalized for non-Muslims in July 2020. In other Muslim-majority countries, alcohol is illegal only in certain regions or only for Muslims. Afghanistan Alcohol is completely illegal in Afghanistan. Alcohol, especially wine, was popular for thousands of years in the region now known as Afghanistan. The Taliban banned alcohol during its rule from 1996 to 2001, as well as after the Afghan government collapsed in 2021. Prior to the collapse of the Afghan government, alcohol licenses were given to journalists and tourists, and bringing in up to 2 liters (½ gallon) was legal. There does, however, remain a large black market for alcohol in Afghanistan, especially in Kabul and Herat. Algeria What is now known as Algeria has been known for its wine for thousands of years. 
In Algeria, it is illegal to drink alcohol in public. Alcohol can be drunk in restaurants, bars and hotels. Bangladesh In Bangladesh, alcohol is illegal for Muslims. It is legal for non-Muslims to drink with a permit. It is only legal for Muslims "under medical circumstances" with a doctor's permit. In 2022, the laws were revised to allow hotels, restaurants, and outlets that serve food as well as display and sell alcohol to apply for liquor sale licenses. Those over 21 can apply for a drinking permit, while Muslims must get a prescription from a doctor with at least an associate professor rank. Egypt Ancient Egypt was widely known for its beer. In Egypt, drinking alcohol in public or in shops is illegal, and sales to Muslims are banned during Ramadan. Alcohol is legal in bars, hotels and tourist facilities approved by the Minister of Tourism. Indonesia Alcohol is legal in Indonesia with the exception of Aceh. Iran Prior to the establishment of the Islamic republic, alcohol was accessible in Iran. Ancient Persia was known for its wine, and wine drinking was common even among the Saffarid and Samanid rulers. After the Iranian revolution in 1979, alcohol became completely illegal for Muslims; however, there is a major black market and underground scene for alcohol. A popular moonshine is Aragh sagi, distilled from raisins. Smuggling alcohol into Iran is highly illegal and is punishable by death. The only legal alcohol in Iran is home production for recognized non-Muslim minorities such as Armenians, Assyrians, and Zoroastrians. The Jewish community in Iran is also allowed to produce and drink its own wine for the Sabbath. Iraq The region now known as Iraq is one of the oldest producers of beer. Alcohol is bought mainly in larger cities, from shops owned by Christians. In parts ruled by the Islamic State of Iraq and the Levant, alcohol was completely banned, with violations punishable by death. In 2016, the Iraqi parliament passed a law banning alcohol, with a fine of 25 million Iraqi dinars; however, it is unclear how it can be enforced, and it could be struck down by the Supreme Court. The president at the time, Fuad Masum, called for the law to be revised. Jordan Alcohol is legal in Jordan; however, public drinking is illegal. Restaurants, bars, hotels, etc. serve alcohol legally. Malaysia Alcohol is mostly legal in Malaysia, with the exceptions of Kelantan and Terengganu, where it is banned for Muslims. Morocco Although alcohol is legal in Morocco, it is illegal to drink in public. Alcohol can be drunk in hotels, bars and licensed tourist areas. There is also a separate section in supermarkets for alcohol. Pakistan After its independence in 1947, Pakistan's liquor laws were fairly liberal. Major cities had a culture of drinking, and alcohol was readily available until the 1970s, when prohibition was introduced for Muslim citizens. However, it remains widely available in urban Pakistan through bootleggers and also through the diplomatic staff of some minor countries. Advertising alcohol isn't illegal, although cultural taboos often prevent people from talking about it in public. Foreigners and non-Muslims are less likely to be barred from buying alcohol, and some local producers with special licenses will even assist them with the purchase. Somalia Alcohol is illegal in both Somalia and the autonomous Somaliland. During the Italian Somalia period, rum was produced, and production continued until the fall of Siad Barre's government in 1991. Sudan Alcohol is illegal for Muslims in Sudan. 
In 2020, Sudan legalized private consumption of alcohol by non-Muslims. Syria Alcohol is completely legal in Syria; however, in parts ruled by the Islamic State of Iraq and the Levant, it was illegal, with a penalty of death. Tunisia Alcohol is completely legal in Tunisia; however, sales are banned on Fridays as well as during Ramadan. United Arab Emirates Prior to 2020, a license was required to handle alcohol, whether for drinking, selling or transporting, and it was previously illegal for Muslims to drink. Alcohol is completely illegal in the emirate of Sharjah. In 2020, the license requirement was removed for authorized areas. Yemen Alcohol is illegal in Yemen. Prior to the Yemeni Civil War, it was legal for tourists in hotels in the cities of Aden and Sana'a. Alcohol-related crime Alcohol-related crime refers to criminal activities that involve alcohol use, as well as violations of regulations covering the sale or use of alcohol; in other words, activities violating the alcohol laws. Some crimes are uniquely tied to alcohol, such as public intoxication or underage drinking, while others are simply more likely to occur together with alcohol consumption. Underage drinking and drunk driving are the most prevalent alcohol-specific offenses in the United States and a major problem in many, if not most, countries worldwide. Similarly, arrests for alcohol-related crimes constitute a high proportion of all arrests made by police in the U.S. and elsewhere. Taxation and regulation of production In most countries, the commercial production of alcoholic beverages requires a license from the government, which then levies a tax upon these beverages. In many countries, alcoholic beverages may be produced in the home for personal use without a license or tax. Taxation Alcoholic beverages are subject to excise taxes. Additionally, they fall under a different jurisdiction than other consumables in many countries, with highly specific regulations and licensing on alcohol content, methods of production, and retail and restaurant sales. Alcohol tax is an excise tax and, while a sin tax or demerit tax, is a significant source of revenue for governments. The U.S. government collected $5.8 billion in 2009. Historically, the Whiskey Rebellion was caused by the introduction of an alcohol tax to fund the newly formed U.S. federal government. Pigou listed alcohol taxes as an example of a Pigouvian tax. Health warnings Alcohol packaging warning messages are warning messages that appear on the packaging of alcoholic drinks concerning their health effects. They have been implemented in an effort to enhance the public's awareness of the harmful effects of consuming alcoholic beverages, especially with respect to fetal alcohol syndrome and alcohol's carcinogenic properties. In general, warnings used in different countries try to emphasize the same messages. Such warnings have been required in alcohol advertising for many years. For example, in the US, since 1989, all packaging of alcoholic products must contain a health warning from the Surgeon General. Denmark In Denmark, home production of wine and beer is not regulated. Home distillation of spirits is legal but not common because it is subject to the same tax as spirits sold commercially. Danish alcohol taxes are significantly lower than in Sweden and Norway, but higher than those of most other European countries. Singapore In Singapore, alcohol production is regulated by Singapore Customs. Up to of beer, wine, and cider per month can be produced at home without a license. 
Alcohol distillation is only allowed with a commercial license. United Kingdom In the United Kingdom, HM Revenue and Customs issues distilling licenses, but people may produce beer and wine for personal consumption without a license. United States The production of distilled beverages is regulated and taxed. The Bureau of Alcohol, Tobacco, Firearms, and Explosives and the Alcohol and Tobacco Tax and Trade Bureau (formerly a single organization called the Bureau of Alcohol, Tobacco and Firearms) enforce federal laws and regulations related to alcohol. In most of the American states, individuals may produce wine and beer for personal consumption (but not for sale) in amounts [usually] of up to 100 gallons per adult per year, but no more than 200 gallons per household per year. The illegal (i.e., unlicensed) production of liquor in the United States is commonly referred to as "bootlegging." Illegally produced liquor (popularly called "moonshine" or "white lightning") is not aged and contains a high percentage of alcohol. Restrictions on sale and possession Alcoholic drinks are available only from licensed shops in many countries, and in some countries, strong alcoholic drinks are sold only by a government-operated alcohol monopoly. Scotland The Alcohol (Minimum Pricing) (Scotland) Act 2012 is an Act of the Scottish Parliament, which introduces a statutory minimum price for alcohol, initially 50p per unit, as an element in the programme to counter alcohol problems. The government introduced the Act to discourage excessive drinking. As a price floor, the Act is expected to increase the cost of the lowest-cost alcoholic beverages. The Act was passed with the support of the Scottish National Party, the Conservatives, the Liberal Democrats and the Greens. The opposition, Scottish Labour, refused to support the legislation because the Act failed to claw back an estimated £125m windfall profit from alcohol retailers. In April 2019, it was reported that, despite the legislation, consumption of alcohol in Scotland had increased. Nordic countries In each of the Nordic countries, except Denmark, the government has a monopoly on the sale of liquor. The state-run vendor is called Systembolaget in Sweden, Vinmonopolet in Norway, Alko in Finland, Vínbúð in Iceland, and Rúsdrekkasøla Landsins in the Faroe Islands. The first such monopoly was in Falun in the 19th century. The governments of these countries claim that the purpose of these monopolies is to reduce the consumption of alcohol. These monopolies have had success in the past, but since joining the European Union it has been difficult to curb the importation of liquor, legal or illegal, from other EU countries. That has made the monopolies less effective in reducing excessive drinking. There is an ongoing debate over whether to retain these state-run monopolies. Norway In Norway, beers with an alcohol content of 4.74% by volume or less can be legally sold in grocery stores. Stronger beers, wines, and spirits can only be bought at government monopoly vendors. All alcoholic beverages can be bought at licensed bars and restaurants, but they must be consumed on the premises. At the local grocery store, alcohol can only be bought before 8 p.m. (6 p.m. on Saturdays, municipalities can set stricter regulations). And the government monopoly vendors close at 6 p.m. Monday–Friday and 4 p.m. on Saturdays. On Sundays, no alcohol can be bought, except in bars. Norway levies some of the heaviest taxes in the world on alcoholic beverages, particularly on spirits. 
These taxes are levied on top of a 25% VAT on all goods and services. For example, 700 mL of Absolut Vodka currently retails at 300+ NOK. Sweden In Sweden, beer with a low alcohol content (called folköl, 2.25% to 3.5% alcohol by weight) can be sold in regular stores to anyone aged 18 or over, but beverages with a high alcohol content can only be sold by government-run vendors to people aged 20 or older, or by licensed facilities such as restaurants and bars, where the age limit is 18. Alcoholic drinks bought at these licensed facilities must be consumed on the premises; nor is it allowed to bring and consume your own alcoholic beverages bought elsewhere. North America Canada In most Canadian provinces, there is a very tightly held government monopoly on the sale of alcohol. Two examples of this are the Liquor Control Board of Ontario, and the Liquor Distribution Branch of British Columbia. Government control and supervision of the sale of alcohol was a compromise devised in the 1920s between "drys" and "wets" for the purpose of ending Prohibition in Canada. Some provinces have moved away from government monopoly. In Alberta, privately owned liquor stores exist, and in Quebec a limited number of wines and liquors can be purchased at dépanneurs and grocery stores. Canada has some of the highest excise taxes on alcohol in the world. These taxes are a source of income for governments and are also meant to discourage drinking. (See Taxation in Canada.) The province of Quebec has the lowest overall prices of alcohol in Canada. Restrictions on the sale of alcohol vary from province to province. In Alberta, changes introduced in 2008 included a ban on "happy hour," minimum prices, and a limit on the number of drinks a person can buy in a bar or pub at one time after 1 a.m. United States In the United States, the sale of alcoholic beverages is controlled by the individual states, by the counties or parishes within each state, and by local jurisdictions. In many states, alcohol can only be sold by staff qualified to serve responsibly through alcohol server training. A county that prohibits the sale of alcohol is known as a dry county. In some states, liquor sales are prohibited on Sunday by a blue law. The places where alcohol may be sold or possessed, like all other alcohol restrictions, vary from state to state. Some states, like Louisiana, Missouri, and Connecticut, have very permissive alcohol laws, whereas other states, like Kansas and Oklahoma, have very strict alcohol laws. Many states require that liquor may be sold only in liquor stores. In Nevada, Missouri, and Louisiana, state law does not specify the locations where alcohol may be sold. In 18 alcoholic beverage control states, the state has a monopoly on the sale of liquor. For example, in most of North Carolina, beer and wine may be purchased in retail stores, but distilled spirits are only available at state ABC (Alcohol Beverage Control) stores. In Maryland, distilled spirits are available in liquor stores except in Montgomery County, where they are sold only by the county. Most states follow a three-tier system in which producers cannot sell directly to retailers, but must instead sell to distributors, who in turn sell to retailers. Exceptions often exist for brewpubs (pubs which brew their own beer) and wineries, which are allowed to sell their products directly to consumers. Most states also do not allow open containers of alcohol inside moving vehicles. 
The federal Transportation Equity Act for the 21st Century of 1999 mandates that, if a state does not prohibit open containers of alcohol inside moving vehicles, then a percentage of its federal highway funds will be transferred instead to alcohol education programs each year. As of December 2011, only one state (Mississippi) allows drivers to consume alcohol while driving (below the 0.08% limit), and only five states (Arkansas, Delaware, Mississippi, Missouri, and West Virginia) allow passengers to consume alcohol while the vehicle is in motion. Four U.S. states limit alcohol sales in grocery stores and gas stations to beer at or below 3.2% alcohol: Kansas, Minnesota, Oklahoma, and Utah. In these states, stronger beverage sales are restricted to liquor stores. In Oklahoma, liquor stores may not refrigerate any beverage containing more than 3.2% alcohol. Missouri also has provisions for 3.2% beer, but its permissive alcohol laws (when compared to other states) make this type of beer a rarity. Pennsylvania is starting to allow grocery stores and gas stations to sell alcohol. Wines and spirits are still sold at locations called "state stores", but wine kiosks are starting to be put in at grocery stores. The kiosks are connected to a database in Harrisburg, and purchasers must present valid ID, signature, and look into a camera for facial identification to purchase wine. Only after all of these measures are passed is the individual allowed to obtain one bottle of wine from the "vending machine". The kiosks are only open during the same hours as the state-run liquor stores and are not open on Sundays. Alcoholic drinks were banned or restricted on U.S. Indian reservations for much of the 19th and twentieth centuries, until federal legislation in 1953 permitted Native Americans to legislate alcohol sales and consumption. See also Alcohol tax Wine law Alcohol exclusion laws Alcohol advertising Drunk driving law by country Public intoxication References External links Drug control law
Alcohol law
[ "Chemistry" ]
4,563
[ "Drug control law", "Regulation of chemicals" ]
24,812,601
https://en.wikipedia.org/wiki/Magnetic%20braking%20%28astronomy%29
Magnetic braking is a theory explaining the loss of stellar angular momentum due to material getting captured by the stellar magnetic field and thrown out at great distance from the surface of the star. It plays an important role in the evolution of binary star systems. The problem The currently accepted theory of a planetary system's evolution states that the system originates from a contracting gas cloud. As the cloud contracts, the angular momentum must be conserved. Any small net rotation of the cloud will cause the spin to increase as the cloud collapses, forcing the material into a rotating disk. At the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse. As the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator. Thus the rotation rate must be braked during the first 100,000 years of the star's life to avoid this scenario. One possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind. In the case of the Solar System, when the planets' angular momenta are compared to the Sun's own, the Sun has less than 1% of its supposed angular momentum. In other words, the Sun has slowed down its spin while the planets have not. The idea behind magnetic braking Ionized material captured by the magnetic field lines will rotate with the Sun as if it were a solid body. As material escapes from the Sun due to the solar wind, the highly ionized material will be captured by the field lines and rotate with the same angular velocity as the Sun, even though it is carried far away from the Sun's surface, until it eventually escapes. This effect of carrying mass far from the centre of the Sun and throwing it away slows down the spin of the Sun. The same effect is used in slowing the spin of a rotating satellite; here two wires spool out weights to a distance, slowing the satellite's spin, then the wires are cut, letting the weights escape into space and permanently robbing the spacecraft of its angular momentum. Theory behind magnetic braking As ionized material follows the Sun's magnetic field lines, due to the effect of the field lines being frozen in the plasma, the charged particles feel a force of magnitude $\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}$, where $q$ is the charge, $\mathbf{v}$ is the velocity and $\mathbf{B}$ is the magnetic field vector. This bending action forces the particles to "corkscrew" around the magnetic field lines while held in place by a "magnetic pressure", or "energy density", $p_{\mathrm{mag}} = B^2/(2\mu_0)$, while rotating together with the Sun as a solid body. Since magnetic field strength decreases with the cube of the distance, there will be a place where the kinetic gas pressure of the ionized gas, $p_{\mathrm{kin}} = n m v^2$, is great enough to break away from the field lines, where $n$ is the number density of particles, $m$ is the mass of the individual particle and $v$ is the radial velocity away from the Sun, or the speed of the solar wind. Due to the high conductivity of the stellar wind, the magnetic field outside the Sun declines with radius like the mass density of the wind, i.e. as an inverse square law. The magnetic field is therefore given by $B(r) = B_0 (R_0/r)^2$, where $B_0$ is the magnetic field on the surface of the Sun and $R_0$ is its radius. The critical distance where the material will break away from the field lines can then be calculated as the distance where the kinetic pressure and the magnetic pressure are equal, i.e. where $n m v^2 = B_0^2 (R_0/r_c)^4 / (2\mu_0)$. 
If the solar mass loss is omni-directional, then the mass-loss rate is related to the wind by $\dot{M} = 4\pi r^2 n m v$; plugging this into the above equation and isolating the critical radius, it follows that $r_c = B_0 R_0^2 \sqrt{2\pi/(\mu_0 \dot{M} v)}$. Present day value Inserting current estimates of the Sun's mass-loss rate, the solar wind speed, the magnetic field on the surface, and the solar radius leads to a critical radius of roughly fifteen solar radii. This means that the ionized plasma will rotate together with the Sun as a solid body until it reaches a distance of nearly 15 times the radius of the Sun; from there the material will break off and stop affecting the Sun. The amount of solar mass needed to be thrown out along the field lines to make the Sun completely stop rotating can then be estimated from the specific angular momentum carried by material corotating out to the critical radius. It has been suggested that the Sun lost a comparable amount of material over the course of its lifetime. Weakened magnetic braking In 2016 scientists at Carnegie Observatories published research suggesting that stars at a similar stage of life to the Sun were spinning faster than magnetic braking theories predicted. To calculate this they pinpointed the dark spots on the surface of stars and tracked them as they moved with the stars' spin. While this method has been successful for measuring the spin of younger stars, the "weakened" magnetic braking in older stars proved harder to confirm, as the latter notoriously have fewer star spots. In a study published in Nature Astronomy in 2021, researchers at the University of Birmingham used a different approach, namely asteroseismology, to confirm that older stars do appear to rotate faster than expected. See also Kraft break Stellar magnetic field Stellar rotation References Conservation laws Physical phenomena Rotation Rotational symmetry
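As a rough numerical illustration of the critical-radius formula above, the following Python sketch plugs commonly quoted order-of-magnitude solar values into the balance between wind ram pressure and magnetic pressure. The specific numbers below are assumptions chosen for illustration, not figures taken from the article, so the result only agrees with the quoted ~15 solar radii to within a small factor.

```python
import math

# Assumed illustrative solar parameters (order-of-magnitude values, not the article's own figures)
MU_0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A
M_SUN = 1.99e30                    # solar mass, kg
R_SUN = 6.96e8                     # solar radius, m
YEAR = 3.156e7                     # seconds per year

B0 = 2e-4                          # assumed surface magnetic field, tesla (~2 gauss)
v_wind = 4.0e5                     # assumed solar wind speed, m/s
mdot = 2e-14 * M_SUN / YEAR        # assumed mass-loss rate, kg/s (~2e-14 solar masses per year)

# Critical radius where wind ram pressure n*m*v^2 equals magnetic pressure B(r)^2 / (2*mu_0),
# with B(r) = B0 * (R_sun / r)^2 and n*m = mdot / (4*pi*r^2*v):
#   r_c = B0 * R_sun^2 * sqrt(2*pi / (mu_0 * mdot * v))
r_c = B0 * R_SUN**2 * math.sqrt(2 * math.pi / (MU_0 * mdot * v_wind))

print(f"critical radius ~ {r_c:.2e} m ~ {r_c / R_SUN:.1f} solar radii")
```

With these assumed round numbers the script prints a critical radius of roughly fourteen solar radii, consistent with the order of magnitude stated in the text; the exact factor depends on the values adopted for the field, wind speed and mass-loss rate.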
Magnetic braking (astronomy)
[ "Physics" ]
1,027
[ "Physical phenomena", "Equations of physics", "Conservation laws", "Classical mechanics", "Rotation", "Motion (physics)", "Physics theorems", "Symmetry", "Rotational symmetry" ]
24,812,935
https://en.wikipedia.org/wiki/Technomimetics
Technomimetics are molecular systems that can mimic man-made devices. The term was first introduced in 1997. The current set of technomimetic molecules includes motors, rotors, gears, gyroscopes, tweezers, and other molecular devices. Technomimetics can be considered as the essential components of molecular machines and have the primary use in molecular nanotechnology. See also Molecular tweezers References Nanotechnology Molecular machines
Technomimetics
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
94
[ "Machines", "Materials science stubs", "Materials science", "Molecular machines", "Physical systems", "Nanotechnology stubs", "Nanotechnology" ]
24,813,120
https://en.wikipedia.org/wiki/Linear%20compressor
A linear compressor is a gas compressor where the piston moves along a linear track to minimize friction and reduce energy loss during conversion of motion. This technology has been successfully used in cryogenic applications which must be oil-less. The suspension spring can be flexure type or coil type. An oil-free valved linear compressor enables the design of compact heat exchangers. Linear compressors work similarly to a solenoid: by using a spring-loaded piston with an electromagnet connected to AC through a diode. The spring-loaded piston is the only moving part, and it is placed in the center of the electromagnet. During the positive cycle of the AC, the diode allows energy to pass through the electromagnet, generating a magnetic field that moves the piston backwards, compressing the spring, and generating suction. During the negative cycle of the AC, the diode blocks current flow to the electromagnet, letting the spring uncompress, moving the piston forward, and compressing the refrigerant. The compressed refrigerant is then released by a valve. History A number of patents for linear compressors powered by free-piston engines were issued in the 20th century, including: To Brown, Boveri & Cie, GB191215963, published 1913-10-08 To Hugo Junkers, CA245708, published 1924-12-30 To Raúl Pateras Pescara, US1615133, published 1927-01-18 The first market introduction of a linear compressor to compress refrigerant in a refrigerator was in 2001. Valved linear compressor The single piston linear compressor uses dynamic counterbalancing, where an auxiliary movable mass is flexibly attached to a movable piston assembly and to the stationary compressor casing using auxiliary mechanical springs with zero vibration export at minimum electrical power and current consumed by the motor. It is used in cryogenics. Linear compressors are used as they have fewer mechanical losses. Linear compressors are made by LG and used in LG and Kenmore refrigerators. Linear compressors were also announced by Embraco. Compressors of this type have less noise, and are more energy efficient than conventional refrigerator compressors. The Embraco linear compressors are also claimed to be oil-free. In the 2010s and 2020s, multiple lawsuits in the United States alleged that the LG compressors had a high rate of failure or lack of expected cooling. As of 2024, there were settlements and ongoing cases. See also Scroll compressor References External links Gas compressors Vacuum pumps Cooling technology Hydrogen technologies
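To make the solenoid analogy above concrete, here is a minimal sketch that integrates a damped spring–mass piston driven by a half-wave-rectified sinusoidal force, mimicking the diode letting current through only on the positive half-cycle of the AC. It is not a model of any real LG or Embraco unit; all parameters are made-up illustrative values.

```python
import math

# Illustrative, made-up parameters -- not taken from any real compressor datasheet.
m = 0.2        # piston mass, kg
k = 8.0e3      # spring stiffness, N/m
c = 4.0        # damping from gas load and friction, N*s/m
F0 = 60.0      # peak electromagnetic force, N
f_ac = 50.0    # AC mains frequency, Hz
dt = 1e-5      # integration time step, s

x, v = 0.0, 0.0          # piston position (m) and velocity (m/s)
max_stroke = 0.0

t = 0.0
while t < 0.5:           # simulate half a second of operation
    # Diode behaviour: the coil is energized only on the positive half-cycle.
    drive = F0 * max(math.sin(2 * math.pi * f_ac * t), 0.0)
    a = (drive - k * x - c * v) / m   # Newton's second law for the piston
    v += a * dt
    x += v * dt
    max_stroke = max(max_stroke, abs(x))
    t += dt

print(f"approximate peak piston displacement: {max_stroke * 1e3:.2f} mm")
```

The point of the sketch is only the drive logic: the spring returns the piston on the unpowered half-cycle, so the single moving part oscillates without a crank mechanism.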
Linear compressor
[ "Physics", "Chemistry", "Engineering" ]
540
[ "Turbomachinery", "Gas compressors", "Vacuum pumps", "Vacuum", "Vacuum systems", "Matter" ]
24,813,911
https://en.wikipedia.org/wiki/Guided-rotor%20compressor
The guided-rotor compressor (GRC) is a positive-displacement rotary gas compressor. The compression volume is defined by the trochoidally rotating rotor mounted on an eccentric drive shaft with a typical 80 to 85% adiabatic efficiency. History The development of the GRC started in 1990 to minimize the use of compressor valve plates and springs by using simple inlet/discharge ports. Uses The guided-rotor compressor is under research as a hydrogen compressor for hydrogen stations and hydrogen pipeline transport. See also References Gas compressors Gas technologies
Guided-rotor compressor
[ "Chemistry" ]
108
[ "Gas compressors", "Turbomachinery" ]
24,814,404
https://en.wikipedia.org/wiki/Pendetide
Pendetide (GYK-DTPA) is a chelating agent. It consists of pentetic acid (DTPA) linked to the tripeptide glycine (G) – L-tyrosine (Y) – L-lysine (K). Use The following monoclonal antibodies are linked to pendetide to chelate a radionuclide, indium-111. The antibodies selectively bind to certain tumour cells, and the radioactivity is then used for imaging of the tumours. Indium (111In) capromab pendetide (prostate cancer) Indium (111In) satumomab pendetide (other cancer types) References Chelating agents Chelating agents used as drugs Carboxylic acids Peptides
Pendetide
[ "Chemistry" ]
169
[ "Pharmacology", "Biomolecules by chemical classification", "Carboxylic acids", "Functional groups", "Medicinal chemistry stubs", "Molecular biology", "Chelating agents", "Pharmacology stubs", "Peptides", "Process chemicals" ]
21,831,985
https://en.wikipedia.org/wiki/Macromolecules%20%28journal%29
Macromolecules is a peer-reviewed scientific journal that has been published since 1968 by the American Chemical Society. Initially published bimonthly, it became monthly in 1983 and then, in 1990, biweekly. Macromolecules is abstracted and indexed in Scopus, EBSCOhost, PubMed, Web of Science, and SwetsWise. The editor-in-chief is Marc A. Hillmyer. Its first editor was Dr. Field H. Winslow. References External links American Chemical Society academic journals Bimonthly journals English-language journals Academic journals established in 1968 Polymer chemistry
Macromolecules (journal)
[ "Chemistry", "Materials_science", "Engineering" ]
127
[ "Materials science", "Polymer chemistry" ]
21,833,682
https://en.wikipedia.org/wiki/Non-contact%20ultrasound
Non-contact ultrasound (NCU) is a method of non-destructive testing where ultrasound is generated and used to test materials without the generating sensor making direct or indirect contact with the test material or test subject. Historically this has been difficult to do, as a typical transducer is very inefficient in air. Therefore, most conventional ultrasound methods require the use of some type of acoustic coupling medium in order to efficiently transmit the energy from the sensor to the test material. Couplant materials can range from gels or jets of water to direct solder bonds. However, in non-contact ultrasound, ambient air is the only acoustic coupling medium. An electromagnetic acoustic transducer (EMAT) is a type of non-contact ultrasound that generates an ultrasonic pulse which reflects off the sample and induces an electric current in the receiver. This is interpreted by software and provides clues about the internal structure of the sample, such as cracks or faults. Research is continuing to improve traditional transducers by applying different plastics, elastomers, and other materials. The sensitivity of these devices continues to improve; a newly developed piezoelectric transducer can produce frequencies in the MHz range that can easily propagate through even high-acoustic-impedance materials such as steel and dense ceramics. Non-contact ultrasound allows some materials to be inspected which otherwise cannot be inspected due to fear of contamination from couplants or water. In general, non-contact ultrasound would facilitate testing of materials or components that are continuously rolled on a production line, in extremely hot environments, coated, oxidized, or otherwise difficult to physically contact. Methods for potential medical use are also being investigated. Laser ultrasonics is another method of non-contact ultrasound. References Nondestructive testing
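The point about transducers being "very inefficient in air" follows from the acoustic impedance mismatch at an interface. The sketch below uses the standard normal-incidence intensity transmission coefficient with approximate, commonly quoted impedance values; the exact numbers are assumptions for illustration only.

```python
# Normal-incidence intensity transmission coefficient between two media:
#   T = 4*Z1*Z2 / (Z1 + Z2)^2
# Approximate characteristic acoustic impedances in rayl (Pa*s/m); illustrative values.
Z_AIR = 4.1e2
Z_WATER = 1.5e6
Z_STEEL = 4.6e7

def transmission(z1: float, z2: float) -> float:
    """Fraction of incident acoustic intensity transmitted across a z1/z2 interface."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

print(f"air   -> steel: {transmission(Z_AIR, Z_STEEL):.2e}")    # tiny: why couplants are normally needed
print(f"water -> steel: {transmission(Z_WATER, Z_STEEL):.2e}")  # orders of magnitude better
```

The air-to-steel figure comes out several orders of magnitude below the water-to-steel figure, which is the basic reason conventional ultrasound relies on couplants and why efficient air-coupled transducers are hard to build.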
Non-contact ultrasound
[ "Materials_science" ]
362
[ "Nondestructive testing", "Materials testing" ]
21,837,511
https://en.wikipedia.org/wiki/P-constrained%20group
In mathematics, a p-constrained group is a finite group resembling the centralizer of an element of prime order p in a group of Lie type over a finite field of characteristic p. They were introduced by Gorenstein and Walter in order to extend some of Thompson's results about odd-order groups to groups with dihedral Sylow 2-subgroups. Definition If a group has trivial p′-core Op′(G), then it is defined to be p-constrained if the p-core Op(G) contains its centralizer, or in other words if its generalized Fitting subgroup is a p-group. More generally, if Op′(G) is non-trivial, then G is called p-constrained if G/Op′(G) is p-constrained. All p-solvable groups are p-constrained. See also p-stable group The ZJ theorem has p-constraint as one of its conditions. References Finite groups Properties of groups
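For readers who prefer the condition written out, the definition above amounts to the following standard formulation (a LaTeX sketch; here $O_{p'}(G)$ denotes the largest normal $p'$-subgroup and $O_p(G)$ the largest normal $p$-subgroup of $G$):

```latex
% p-constraint, written out explicitly.
% Write \bar{G} = G / O_{p'}(G). Then G is p-constrained when the centralizer
% of the p-core of \bar{G} lies inside that p-core:
\[
  C_{\bar{G}}\!\left( O_p(\bar{G}) \right) \;\subseteq\; O_p(\bar{G}),
  \qquad \text{where } \bar{G} = G / O_{p'}(G).
\]
```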
P-constrained group
[ "Mathematics" ]
185
[ "Mathematical structures", "Algebraic structures", "Finite groups", "Properties of groups" ]
21,838,250
https://en.wikipedia.org/wiki/Hardy%27s%20paradox
Hardy's paradox is a thought experiment in quantum mechanics devised by Lucien Hardy in 1992–1993 in which a particle and its antiparticle may interact without annihilating each other. Experiments using the technique of weak measurement have studied an interaction of polarized photons, and these have demonstrated that the phenomenon does occur. However, the consequence of these experiments is only that past events can be inferred after their occurrence as a probabilistic wave collapse. These weak measurements are considered to be an observation themselves, and therefore part of the causation of wave collapse, making the objective results only a probabilistic function rather than a fixed reality. However, a careful analysis of the experiment shows that Hardy's paradox only proves that a local hidden-variable theory cannot exist, as there cannot be a theory that assumes that the system meets the states of reality regardless of the interaction with the measuring apparatus. This confirms that a quantum theory, to be consistent with the experiments, must be non-local (in the sense of Bell) and contextual. Setup description and the results The basic building block of Hardy's thought experiment are two Mach–Zehnder interferometers for quantum particles and antiparticles. We will describe the case using electrons and positrons. Each interferometer consists of bent paths and two beam splitters (labeled BS1 and BS2 in the accompanying diagram) and is tuned so that when operating individually, particles always exit to the same particle detector (the ones labeled c in the diagram; c is for "constructive interference" and d is for "destructive interference"). For example, for the right-hand side interferometer, when operating alone, entering electrons (labeled e−) become a quantum superposition of electrons taking the path v− and electrons taking path w− (in the diagram, the latter part of the w− path is labeled u−), but these constructively interfere and thus always exit in arm c−: Similarly, positrons (labeled e+) are always detected at c+. In the actual experiment the interferometers are arranged so that part of their paths overlap as shown in the diagram. If the amplitude for the particle in one arm, say w−, were to be obstructed by a second particle in w+ that collides with it, only the v amplitude would reach the second beam splitter and would split into arms c+ and d+ with equal amplitudes. The detection of a particle in d+ would thus indicate the presence of the obstructing particle, but without an annihilation taking place. For this reason, this scheme was named interaction-free measurement. If (classically speaking) both the electron and the positron take the w paths in their respective interferometers, they will annihilate to produce two gamma rays: . There is a 1 in 4 chance of this happening. We can express the state of the system, before the final beam splitters, as Since the detectors click for , and the detectors for , this becomes Since the probabilities are the squares of the absolute values of these amplitudes, this means a 9 in 16 chance of each particle being detected in its respective c detector; a 1 in 16 chance each for one particle being detected in its c detector and the other in its d detector, or for both being detected in their d detectors; and a 4 in 16 (1 in 4) chance that the electron and positron annihilate, so neither is detected. 
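The probabilities quoted in this paragraph can be reproduced with a short bookkeeping calculation. The LaTeX sketch below assumes a standard real beam-splitter convention, chosen so that a lone particle always interferes into its c detector; the particular signs are a convention of this sketch rather than necessarily those of the original presentation.

```latex
% State after the first beam splitters, with the overlapping w+ w- amplitude
% replaced by the annihilation channel |\gamma\gamma>:
\[
  |\psi\rangle \;=\; \tfrac{1}{2}\Bigl(
      |v^+\rangle|v^-\rangle + |v^+\rangle|w^-\rangle
    + |w^+\rangle|v^-\rangle + |\gamma\gamma\rangle \Bigr).
\]
% Final beam splitters (convention: a lone particle exits at c):
%   |v> -> (|c> + |d>)/\sqrt{2},   |w> -> (|c> - |d>)/\sqrt{2}.
% Substituting gives
\[
  |\psi\rangle \;=\; \tfrac{1}{4}\Bigl(
      3\,|c^+\rangle|c^-\rangle + |c^+\rangle|d^-\rangle
    + |d^+\rangle|c^-\rangle - |d^+\rangle|d^-\rangle \Bigr)
    \;+\; \tfrac{1}{2}\,|\gamma\gamma\rangle,
\]
% so the outcome probabilities are 9/16 for (c,c), 1/16 each for (c,d), (d,c)
% and (d,d), and 4/16 for annihilation, and the overlap with |d^+ d^-> has
% magnitude 1/4, matching the figures quoted in the text.
```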
Notice that a detection in both d detectors is represented by This is not orthogonal to the expression above for the state before the final beam splitters. The scalar product between them is 1/4, showing that there is a 1 in 16 chance of this happening, paradoxically. The situation can be analyzed in terms of two simultaneous interaction-free measurements: from the point of view of the interferometer on the left, a click at d+ implies the presence of the obstructing electron in u−. Similarly, for the interferometer on the right, a click at d− implies the presence of the positron in u+. Indeed, every time a click is recorded at d+ (or d−), the other particle is found in u− (or u+ respectively). If we assume the particles are independent (described by local hidden variables), we conclude that they can never emerge simultaneously in d+ and d−. This would imply that they were in u+ and u−, which cannot occur because of the annihilation process. A paradox then arises because sometimes the particles do emerge simultaneously at d+ and d− (with probability p = 1/16). Quantum mechanically, the term arises, in fact, from the nonmaximally entangled nature of the state just before the final beam splitters. An article by Yakir Aharonov and colleagues in 2001 pointed out that the number of electrons or positrons in each branch is theoretically observable and is 0 in the w branches and 1 in the v branches. And yet, the number of electron–positron pairs in any combination is also observable and is not given by the product of the single-particle values. So we find that the number of ww pairs (both particles in their w path) is 0, each wv pair is 1, and the number in the vv combination is −1! They proposed a way that this could be observed physically by temporarily trapping the electron and the positron in the v paths in boxes and noting the effect of their mutual electrostatic attraction. They stated that one would actually find a repulsion between the boxes. In 2009 Jeff Lundeen and Aephraim M. Steinberg published work in which they set up a "Hardy's paradox" system using photons. A 405 nm laser goes through a barium borate crystal to produce pairs of 810 nm photons with polarizations orthogonal to each other. These then hit a beam splitter, which sends photons back to the barium borate crystal with 50% probability. The 405 nm pumping beam also bounces from a mirror and comes back to the barium borate. If both the 810 nm photons come back to the crystal, they are annihilated by interaction with the returning pump beam. In any case, the beam of photons that make it through the crystal and the beam of photons that pass through the beam splitter are both separated into "vertically polarized" and "horizontally polarized" beams, which correspond to the "electrons" and the "positrons" of Hardy's scheme. The two "electron" beams (the photons with one kind of polarization) are united at a beam splitter and go to one or two detectors, and the same for the "positrons" (the other photons). Classically, no photons should be detected at what the authors call the "dark ports" because if they take both directions from the first beam splitter, they will interfere with themselves, whereas if they take only one path, then one cannot detect them both at the dark ports because of the paradox. 
By introducing a 20° rotation in polarization and using half-wave plates on certain beams, and then measuring coincidence rates at the detectors, they were able to make weak measurements that allowed them to calculate the "occupation" of different arms (paths) and combinations. As predicted by Aharonov and colleagues, they found a negative value for the combination in which both photons take the outer (no-annihilation) route. The results were not exactly as predicted, and they attribute this to imperfect switching (annihilation) and interaction-free measurements. See also Uncertainty principle Wave function collapse References External links Lecture by Aephraim Steinberg , 2012 Quantum measurement Paradoxes Thought experiments in quantum mechanics
Hardy's paradox
[ "Physics" ]
1,627
[ "Quantum measurement", "Quantum mechanics", "Thought experiments in quantum mechanics" ]
37,509,820
https://en.wikipedia.org/wiki/Processor%20%28computing%29
In computing and computer science, a processor or processing unit is an electrical component (digital circuit) that performs operations on an external data source, usually memory or some other data stream. It typically takes the form of a microprocessor, which can be implemented on a single or a few tightly integrated metal–oxide–semiconductor integrated circuit chips. In the past, processors were constructed using multiple individual vacuum tubes, multiple individual transistors, or multiple integrated circuits. The term is frequently used to refer to the central processing unit (CPU), the main processor in a system. However, it can also refer to other coprocessors, such as a graphics processing unit (GPU). Traditional processors are typically based on silicon; however, researchers have developed experimental processors based on alternative materials such as carbon nanotubes, graphene, diamond, and alloys made of elements from groups three and five of the periodic table. Transistors made of a single sheet of silicon atoms one atom tall and other 2D materials have been researched for use in processors. Quantum processors have been created; they use quantum superposition to represent bits (called qubits) instead of only an on or off state. Moore's law Moore's law, named after Gordon Moore, is the observation and projection via historical trend that the number of transistors in integrated circuits, and therefore processors by extension, doubles every two years. The progress of processors has followed Moore's law closely. Types Central processing units (CPUs) are the primary processors in most computers. They are designed to handle a wide variety of general computing tasks rather than only a few domain-specific tasks. If based on the von Neumann architecture, they contain at least a control unit (CU), an arithmetic logic unit (ALU), and processor registers. In practice, CPUs in personal computers are usually also connected, through the motherboard, to a main memory bank, hard drive or other permanent storage, and peripherals, such as a keyboard and mouse. Graphics processing units (GPUs) are present in many computers and designed to efficiently perform computer graphics operations, including linear algebra. They are highly parallel, and CPUs usually perform better on tasks requiring serial processing. Although GPUs were originally intended for use in graphics, over time their application domains have expanded, and they have become an important piece of hardware for machine learning. There are several forms of processors specialized for machine learning. These fall under the category of AI accelerators (also known as neural processing units, or NPUs) and include vision processing units (VPUs) and Google's Tensor Processing Unit (TPU). Sound chips and sound cards are used for generating and processing audio. Digital signal processors (DSPs) are designed for processing digital signals. Image signal processors are DSPs specialized for processing images in particular. Deep learning processors, such as neural processing units are designed for efficient deep learning computation. Physics processing units (PPUs) are built to efficiently make physics-related calculations, particularly in video games. Field-programmable gate arrays (FPGAs) are specialized circuits that can be reconfigured for different purposes, rather than being locked into a particular application domain during manufacturing. The Synergistic Processing Element or Unit (SPE or SPU) is a component in the Cell microprocessor. 
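As a tiny worked example of the doubling relation described in the Moore's law paragraph above, the following sketch projects a transistor count forward in time. The starting count and years are assumed round numbers for illustration, not data from the article.

```python
def moores_law_projection(n0: float, start_year: int, year: int, doubling_years: float = 2.0) -> float:
    """Project a transistor count assuming a doubling every `doubling_years` years."""
    return n0 * 2 ** ((year - start_year) / doubling_years)

# Assumed illustrative starting point: 1e9 transistors on a chip in 2010.
for y in (2010, 2020, 2030):
    print(y, f"{moores_law_projection(1e9, 2010, y):.2e}")
```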
Processors based on different circuit technology have been developed. One example is quantum processors, which use quantum physics to enable algorithms that are impossible on classical computers (those using traditional circuitry). Another example is photonic processors, which use light to make computations instead of semiconducting electronics. Processing is done by photodetectors sensing light produced by lasers inside the processor. See also Logic gate Processor design Microprocessor Multiprocessing Multiprocessor system architecture Multi-core processor Processor power dissipation Central processing unit Graphics processing unit Superscalar processor Hardware acceleration Von Neumann architecture References Electronic design
Processor (computing)
[ "Engineering" ]
807
[ "Electronic design", "Electronic engineering", "Design", "Digital electronics" ]
37,510,151
https://en.wikipedia.org/wiki/Beckley%20Furnace%20Industrial%20Monument
The Beckley Furnace Industrial Monument is a state-owned historic site preserving a 19th-century iron-making blast furnace on the north bank of the Blackberry River in the town of North Canaan, Connecticut. The site became a state park in 1946; it was added to the National Register of Historic Places in 1978. Description The Beckley Furnace stands in what is now a rural area of central North Canaan, on the south side of Lower Road just west of its junction with Furnace Hill Road. The site spans the Blackberry River, with the main blast furnace and its developed features on the north bank. The main furnace is a large stone structure, tall and per side at the base, gradually sloping to at the top. It is set near the road, which runs at a high elevation above a stone retaining wall. About upriver is the dam, a stone structure with a penstock providing access to a turbine chamber. Further downstream are the remnants of two more dams and furnaces, and there are large piles of slag mounded on the south side of the river. No longer extant are wood-frame buildings that would have been needed to support the operations of the furnace. History The furnace was built for the production of pig iron by John Adam Beckley in 1847 and continued in operation until 1919. It was the second of three working blast furnaces built at the site; a fourth furnace was under construction in the early years of the 20th century but was never put in operation. The works successfully adapted to changing conditions, but was unable to compete on scale, and closed in the early 1920s. The stack was restored by the state in 1999. The dam built on the Blackberry River to provide power for the furnace and other industrial operations was repaired by the state in 2010. Activities and amenities The state park offers picnicking and pond fishing. Tours of the furnace are offered periodically by Friends of Beckley Furnace. See also National Register of Historic Places listings in Litchfield County, Connecticut References External links Beckley Furnace Industrial Monument Connecticut Department of Energy and Environmental Protection Friends of Beckley Furnace State parks of Connecticut Parks in Litchfield County, Connecticut Protected areas established in 1946 Buildings and structures in Litchfield County, Connecticut Furnaces Industrial buildings completed in 1847 1847 establishments in Connecticut National Register of Historic Places in Litchfield County, Connecticut Industrial buildings and structures on the National Register of Historic Places in Connecticut North Canaan, Connecticut 1946 establishments in Connecticut
Beckley Furnace Industrial Monument
[ "Engineering" ]
487
[ "Furnaces", "Combustion engineering" ]
37,512,105
https://en.wikipedia.org/wiki/Simulated%20body%20fluid
A simulated body fluid (SBF) is a solution with an ion concentration close to that of human blood plasma, kept under mild conditions of pH and identical physiological temperature. SBF was first introduced by Kokubo et al. in order to evaluate the changes on a surface of a bioactive glass ceramic. Later, cell culture media (such as DMEM, MEM, α-MEM, etc.), in combination with some methodologies adopted in cell culture, were proposed as an alternative to conventional SBF in assessing the bioactivity of materials. Applications Surface modification of metallic implants For an artificial material to bond to living bone, the formation of bonelike apatite layer on the surface of an implant is of significant importance. The SBF can be used as an in vitro testing method to study the formation of apatite layer on the surface of implants so as to predict their in vivo bone bioactivity. The consumption of calcium and phosphate ions, present in the SBF solution, results in the spontaneous growth of bone-like apatite nuclei on the surface of biomaterials in vitro. Therefore, the apatite formation on the surface of biomaterials, soaked in the SBF solution, is considered a successful development of novel bioactive materials. The SBF technique for surface modification of metallic implants is usually a time-consuming process, and obtaining uniform apatite layers on substrates takes at least 7 days, with daily refreshing of the SBF solution. Another approach for decreasing the coating time is to concentrate the calcium and phosphate ions in the SBF solution. Enhanced concentration of calcium and phosphate ions in SBF solution accelerates the coating process and, in the meantime, eliminates the need for regular replenishment of the SBF solution. Gene delivery An attempt was made to investigate the application of SBF in gene delivery. Calcium phosphate nanoparticles, required for the delivery of plasmid DNA (pDNA) into the nucleus of the cells, were synthesized in a SBF solution and mixed with pDNA. The in vitro studies showed higher gene delivery efficiency for the calcium-phosphate/DNA complexes made of SBF solution than for the complexes prepared in pure water (as control). Formulation References Body fluids Gene delivery
Simulated body fluid
[ "Chemistry", "Biology" ]
466
[ "Genetics techniques", "Molecular biology techniques", "Gene delivery" ]
37,515,751
https://en.wikipedia.org/wiki/Ministry%20of%20Mining%20and%20Energy%20%28Serbia%29
The Ministry of Mining and Energy () is the ministry in the Government of Serbia which is in charge of mining and energy. The current minister is Dubravka Negre, in office since 26 October 2022. History The Ministry of Mining and Energy was established on 11 February 1991. The Ministry was abolished in 2011, when it was merged into the Ministries of Infrastructure (Energy department) and Environment (Mining department). In 2012, it was re-established when the Energy department was split from the Infrastructure Ministry and the Environment department was split from the reorganized former Environment Ministry. In 2014, the ministry was re-established in its original form, with both Mining and Energy departments. Subordinate institutions One agency operates within the scope of the Ministry: the Energy Resources Management Board. List of ministers Political Party: See also Minister of Natural Resources, Mining and Spatial Planning (Serbia) References External links Serbian Ministry of Energy, Development and Environmental Protection Serbia Energy Mining Market News Serbian ministries, etc – Rulers.org Mining and Energy 1991 establishments in Serbia Ministries established in 1991 Serbia Serbia
Ministry of Mining and Energy (Serbia)
[ "Engineering" ]
209
[ "Energy organizations", "Energy ministries" ]
44,655,902
https://en.wikipedia.org/wiki/Tasman%20Front
The Tasman Front is a relatively warm-water, east-flowing surface current and thermal boundary that separates the Coral Sea to the north from the Tasman Sea to the south. Naming The name was proposed by Denham and Crook in 1976 to describe a thermal front that extends from Australia to New Zealand between the Coral Sea and Tasman Sea. Geography Originating at the edge of the East Australian Current (EAC), the Tasman Front meanders eastward between longitudes 152° E and 164° E and latitudes 31° S and 37° S, then reattaches to the coastline at New Zealand, forming the East Auckland Current. Topography plays a dominant role in establishing the Tasman Front. Data on the Tasman Front show that the path of the front is influenced in part by the forcing of the flow over the major ridge systems. Meanders observed in the Tasman Front can be driven by meridional flows along ridges such as those observed at the New Caledonia Trough (166° E) and the Norfolk Ridge (167° E). Abyssal currents also drive meanders associated with the Lord Howe Rise (161° E) and Dampier Ridge (159° E). Oceanography There have been a number of observational and modeling studies of this front, in addition to a number of paleo-oceanographic studies of marine sediments. By contrast, there have been few biological observational studies, but those that have been conducted related the physical features of the front to properties of fish communities. Likewise, there are even fewer studies relating biogeochemical properties to physical processes of the Tasman Front. See also Lord Howe Marine Park References Physical oceanography Tasman Sea Currents of the Pacific Ocean
Tasman Front
[ "Physics" ]
347
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
44,663,137
https://en.wikipedia.org/wiki/Supercritical%20liquid%E2%80%93gas%20boundaries
Supercritical liquid–gas boundaries are lines in the pressure-temperature (pT) diagram that delimit more liquid-like and more gas-like states of a supercritical fluid. They comprise the Fisher–Widom line, the Widom line, and the Frenkel line. Overview According to textbook knowledge, it is possible to transform a liquid continuously into a gas, without undergoing a phase transition, by heating and compressing strongly enough to go around the critical point. However, different criteria still allow one to distinguish liquid-like and more gas-like states of a supercritical fluid. These criteria result in different boundaries in the pT plane. These lines emanate either from the critical point, or from the liquid–vapor boundary (boiling curve) somewhat below the critical point. They do not correspond to first or second order phase transitions, but to weaker singularities. The Fisher–Widom line is the boundary between monotonic and oscillating asymptotics of the pair correlation function g(r). The Widom line is a generalization thereof, apparently so named by H. Eugene Stanley. However, it was first measured experimentally in 1956 by Jones and Walker, and subsequently named the 'hypercritical line' by Bernal in 1964, who suggested a structural interpretation. A common criterion for the Widom line is a peak in the isobaric heat capacity. In the subcritical region, the phase transition is associated with an effective spike in the heat capacity (i.e., the latent heat). Approaching the critical point, the latent heat falls to zero, but this is accompanied by a gradual rise in heat capacity in the pure phases near the phase transition. At the critical point, the latent heat is zero but the heat capacity shows a diverging singularity. Beyond the critical point, there is no divergence, but rather a smooth peak in the heat capacity; the highest point of this peak identifies the Widom line. The Frenkel line is a boundary between "rigid" and "non-rigid" fluids characterized by the onset of transverse sound modes. One of the criteria for locating the Frenkel line is based on the velocity autocorrelation function (vacf): below the Frenkel line the vacf demonstrates oscillatory behaviour, while above it the vacf monotonically decays to zero. The second criterion is based on the fact that at moderate temperatures liquids can sustain transverse excitations, which disappear upon heating. One further criterion is based on isochoric heat capacity measurements. The isochoric heat capacity per particle of a monatomic liquid near the melting line is close to 3kB (where kB is the Boltzmann constant). The contribution to the heat capacity due to the potential part of the transverse excitations is kB. Therefore at the Frenkel line, where the transverse excitations vanish, the isochoric heat capacity per particle should be 2kB, a direct prediction from the phonon theory of liquid thermodynamics. Anisimov et al. (2004), without referring to Frenkel, Fisher, or Widom, reviewed thermodynamic derivatives (specific heat, expansion coefficient, compressibility) and transport coefficients (viscosity, speed of sound) in supercritical water, and found pronounced extrema as a function of pressure up to 100 K above the critical temperature. References Phases of matter Critical phenomena Phase transitions
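The velocity-autocorrelation criterion for the Frenkel line described above lends itself to a simple numerical check. The following Python sketch classifies a sampled vacf as oscillatory (rigid, liquid-like, below the Frenkel line) or monotonically decaying (gas-like, above it); the two synthetic example curves are illustrative assumptions, not simulation data from the article.

import numpy as np

def vacf_is_oscillatory(vacf, tol=0.0):
    """Return True if the normalized vacf crosses below zero (oscillatory,
    i.e. below the Frenkel line), False if it decays monotonically to zero."""
    vacf = np.asarray(vacf, dtype=float)
    vacf = vacf / vacf[0]              # normalize so vacf(0) = 1
    return bool(np.any(vacf < -tol))

# Illustrative (synthetic) examples, not real simulation data:
t = np.linspace(0.0, 5.0, 200)
liquid_like = np.exp(-t) * np.cos(4.0 * t)   # damped oscillation -> crosses zero
gas_like = np.exp(-t)                        # monotonic decay

print(vacf_is_oscillatory(liquid_like))  # True  -> rigid, below the Frenkel line
print(vacf_is_oscillatory(gas_like))     # False -> non-rigid, above the Frenkel line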
Supercritical liquid–gas boundaries
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
708
[ "Physical phenomena", "Phase transitions", "Critical phenomena", "Phases of matter", "Condensed matter physics", "Statistical mechanics", "Matter", "Dynamical systems" ]
44,663,150
https://en.wikipedia.org/wiki/Widom%20line
In the context of the pressure-temperature phase diagram of a substance and of the supercritical fluid state in particular, the Widom line is a line emanating from the critical point which in a way extends the liquid-vapor coexistence curve above the critical point. It corresponds to the maxima or minima of certain physical properties of the supercritical fluid, such as the speed of sound, isothermal compressibility, isochoric and isobaric heat capacities. A common criterion for locating the Widom line is indeed the maximum in the isobaric heat capacity. More generally, the Widom line is defined as the line in the pressure-temperature phase diagram of a fluid substance along which the correlation length has its maximum. It always emanates from a critical point. It has been investigated for various systems, including for example in the context of the hypothesized liquid–liquid critical point (or second critical point) of water. Similar boundary lines include the Fisher-Widom line and the Frenkel line, which also describe transitions between distinct fluid behaviors. Overview Named after theoretical physicist Benjamin Widom, the Widom line is a crucial concept in fluid thermodynamics and critical phenomena. The Widom line has been suggested to separate liquid-like behaviour and gas-like behaviour in supercritical fluids, where the traditional distinction between liquid and gas no longer exists. Specifically, on the low-pressure side of the line, the fluid exhibits a gas-like behavior, while on the high-pressure side, it behaves more like a liquid. This separation is not a sharp phase change but a continuous crossover in some of the properties of the fluid. It has been observed in laboratory experiments, for example on fluid methane. The concept of Widom line provides a useful framework for characterizing and predicting the properties of fluids, which are important for scientific research as well as various industrial processes. Such a concept is indeed relevant to the physical properties of any single-component fluid at sufficiently high pressures and temperatures, and its study is an active research area. See also Supercritical liquid–gas boundaries Phase diagram References Statistical mechanics
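As a rough illustration of the heat-capacity criterion mentioned above, the following Python sketch locates the maximum of the isobaric heat capacity along a supercritical isobar of a van der Waals fluid. The van der Waals model, the reduced units, and the chosen pressure are illustrative assumptions; they are not the specific systems discussed in the article.

import numpy as np

R = 1.0
A = 27.0 / 64.0   # van der Waals 'a' in reduced units (so Tc = pc = 1)
B = 1.0 / 8.0     # van der Waals 'b'

def vdw_volume(p, T):
    """Molar volume from the van der Waals equation p = RT/(v-b) - a/v^2,
    written as the cubic p v^3 - (p b + R T) v^2 + a v - a b = 0."""
    roots = np.roots([p, -(p * B + R * T), A, -A * B])
    real = roots[np.isreal(roots)].real
    return float(np.max(real[real > B]))   # supercritical branch: single root > b

def cp(p, T, cv=1.5 * R):
    """Isobaric heat capacity from cp = cv - T (dp/dT)_v^2 / (dp/dv)_T,
    with cv taken as the ideal monatomic value (exact for a vdW fluid)."""
    v = vdw_volume(p, T)
    dpdT = R / (v - B)
    dpdv = -R * T / (v - B) ** 2 + 2.0 * A / v ** 3
    return cv - T * dpdT ** 2 / dpdv

# Scan an isobar slightly above the critical pressure and locate the cp maximum.
p = 1.2                              # reduced pressure p/pc (illustrative)
Ts = np.linspace(1.01, 1.5, 2000)
cps = np.array([cp(p, T) for T in Ts])
print("Widom-line temperature at p/pc = 1.2:", Ts[np.argmax(cps)])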
Widom line
[ "Physics" ]
441
[ "Statistical mechanics" ]
44,933,795
https://en.wikipedia.org/wiki/Space%20cloth
Space cloth is a hypothetical infinite plane of conductive material having a resistance of η ohms per square, where η is the impedance of free space. η ≈ 376.7 ohms. If a transmission line composed of straight parallel perfect conductors in free space is terminated by space cloth that is normal to the transmission line then that transmission line is terminated by its characteristic impedance. The calculation of the characteristic impedance of a transmission line composed of straight, parallel good conductors may be replaced by the calculation of the D.C. resistance between electrodes placed on a two-dimensional resistive surface. This equivalence can be used in reverse to calculate the resistance between two conductors on a resistive sheet if the arrangement of the conductors is the same as the cross section of a transmission line of known impedance. For example, a pad surrounded by a guard ring on a printed circuit board (PCB) is similar to the cross section of a coaxial cable transmission line. Examples Calculating characteristic impedance from the surface resistance The figure to the right shows a coaxial cable terminated by space cloth. In the case of a closed structure like a coaxial cable, the space cloth may be trimmed to the boundary of the outer conductor. The computation of resistance between the conductors can be carried out with 2D electromagnetic field solver methods, including the relaxation method, and with analog methods using resistance paper. In the case of a coaxial cable, there is a closed-form solution. The resistive surface is considered to be a series of infinitesimal annular rings, each having a width of dρ and a resistance of (η/2πρ)dρ. The resistance between the inner electrode and the outer electrode is just the integral over all such rings, R = (η/2π) ln(ρouter/ρinner), where ρinner and ρouter are the radii of the inner and outer conductors. This is exactly the equation for the characteristic impedance of a coaxial cable in free space. Calculating surface resistance from characteristic impedance The characteristic impedance of a two parallel wire transmission line is given by Z0 = (η/π) cosh⁻¹(D/d), where d is the diameter of the wire and D is the center to center separation between the wires. If the second figure is taken to be two round pads on a printed circuit board that has surface contamination resulting in a surface resistivity of Rs (50 MΩ per square, for example) then the resistance between the two pads is given by: R = (Rs/π) cosh⁻¹(D/d). Multi-mode transmission line The figure shows the cross section of a three conductor transmission line. The structure has two transmission eigen-modes, which are the differential mode (conductors a and b driven with equal amplitude but opposite phase voltages with respect to conductor c) and the common mode (conductors a and b driven with the same voltages with respect to conductor c). In general, the eigen-modes have different characteristic impedances. If , , then the fields in regions IV and V can be ignored. The resistances of regions I–III are where η is the impedance of space cloth (unit: ohm per square) In the common mode, conductors a and b are at the same voltage so there is no effect from region I. The common mode characteristic impedance is the resistance of region II in parallel with region III. In the differential mode, the characteristic impedance is the resistance of region I in parallel with the series combination of regions II and III. See also Resistance paper Teledeltos Notes References Electromagnetic radiation Transmission lines
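A minimal numerical sketch of the two calculations above, in Python: the coaxial case sums the annular-ring resistances and compares the result with the closed-form characteristic impedance, and the two-wire case reuses the same expression with a surface resistivity in place of η. The radii, spacing, and contamination resistivity are illustrative assumptions.

import numpy as np

ETA = 376.73  # impedance of free space, ohms per square

def coax_resistance(a, b, n=100001):
    """Resistance of a space-cloth disc between inner radius a and outer radius b,
    obtained by summing annular rings of resistance (eta / (2 pi rho)) d rho."""
    rho = np.linspace(a, b, n)
    return np.trapz(ETA / (2.0 * np.pi * rho), rho)

a, b = 1.0e-3, 3.35e-3                                # illustrative radii (metres)
numeric = coax_resistance(a, b)
closed_form = ETA / (2.0 * np.pi) * np.log(b / a)     # coax characteristic impedance
print(numeric, closed_form)                           # both ~72.5 ohms here

def two_wire_impedance(D, d, sheet=ETA):
    """Two-wire line impedance (eta / pi) * arccosh(D / d); replacing eta by a
    surface resistivity gives the resistance between two round pads."""
    return sheet / np.pi * np.arccosh(D / d)

print(two_wire_impedance(D=10.0, d=1.0))              # free-space line, ohms
print(two_wire_impedance(D=10.0, d=1.0, sheet=50e6))  # 50 Mohm/sq contamination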
Space cloth
[ "Physics" ]
677
[ "Electromagnetic radiation", "Physical phenomena", "Radiation" ]
33,447,667
https://en.wikipedia.org/wiki/Covariance%20operator
In probability theory, for a probability measure P on a Hilbert space H with inner product ⟨·,·⟩, the covariance of P is the bilinear form Cov: H × H → R given by Cov(x, y) = ∫H ⟨x, z⟩⟨y, z⟩ dP(z) for all x and y in H. The covariance operator C is then defined by Cov(x, y) = ⟨Cx, y⟩ (from the Riesz representation theorem, such an operator exists if Cov is bounded). Since Cov is symmetric in its arguments, the covariance operator is self-adjoint. Even more generally, for a probability measure P on a Banach space B, the covariance of P is the bilinear form on the algebraic dual B#, defined by Cov(x, y) = ∫B ⟨x, z⟩⟨y, z⟩ dP(z), where ⟨x, z⟩ is now the value of the linear functional x on the element z. Quite similarly, the covariance function of a function-valued random element z (which in special cases is called a random process or random field) is Cov(x, y) = E[z(x) z(y)], where z(x) is now the value of the function z at the point x, i.e., the value of the linear functional evaluated at z. See also Further reading References Bilinear forms Covariance and correlation Probability theory Hilbert spaces
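A discretized illustration of these definitions, assuming H = L²([0, 1]) sampled on a grid and a toy random function: the empirical covariance kernel plays the role of the covariance operator, and the induced bilinear form is symmetric and positive semi-definite. The sample model and grid are assumptions of the sketch only.

import numpy as np

rng = np.random.default_rng(0)

# Discretize H = L^2([0, 1]); the inner product <f, g> becomes sum f*g*dx.
n_grid = 200
x = np.linspace(0.0, 1.0, n_grid)
dx = x[1] - x[0]

# Draw sample paths z_k of a toy mean-zero random function.
n_samples = 2000
coeffs = rng.standard_normal((n_samples, 2))
samples = coeffs[:, :1] * np.sin(np.pi * x) + 0.5 * coeffs[:, 1:] * np.sin(2 * np.pi * x)

# Empirical covariance kernel C(s, t) ~ E[z(s) z(t)]; the covariance operator is
# (C f)(s) = integral C(s, t) f(t) dt, so the bilinear form is Cov(f, g) = <C f, g>.
kernel = samples.T @ samples / n_samples

def apply_cov(f):
    return kernel @ f * dx                       # discretized integral operator

def cov_form(f, g):
    return float(np.sum(apply_cov(f) * g) * dx)  # bilinear form Cov(f, g) = <Cf, g>

f, g = np.sin(np.pi * x), np.cos(np.pi * x)
print(cov_form(f, g), cov_form(g, f))   # equal up to sampling error: symmetry
print(cov_form(f, f) >= 0.0)            # positive semi-definiteness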
Covariance operator
[ "Physics" ]
232
[ "Hilbert spaces", "Quantum mechanics" ]
33,447,811
https://en.wikipedia.org/wiki/Mechanical%20joint
A mechanical joint is a section of a machine which is used to connect one or more mechanical parts to another. Mechanical joints may be temporary or permanent; most types are designed to be disassembled. Most mechanical joints are designed to allow relative movement of these mechanical parts of the machine in one degree of freedom, and restrict movement in one or more others. Pin A pin joint, also called a revolute joint, is a one-degree-of-freedom kinematic pair. It constrains the motion of two bodies to pure rotation along a common axis. The joint doesn't allow translation, or sliding linear motion. This is usually done through a rotary bearing. It enforces a cylindrical contact area, which makes it a lower kinematic pair, also called a full joint. Prismatic A prismatic joint provides a linear sliding movement between two bodies, and is often called a slider, as in the slider-crank linkage. A prismatic pair is also called a sliding pair. A prismatic joint can be formed with a polygonal cross-section to resist rotation. The relative position of two bodies connected by a prismatic joint is defined by the amount of linear slide of one relative to the other. This one-parameter movement identifies this joint as a one degree of freedom kinematic pair. Prismatic joints provide single-axis sliding often found in hydraulic and pneumatic cylinders. Ball In an automobile, ball joints are spherical bearings that connect the control arms to the steering knuckles. They are used on virtually every automobile made and work similarly to the ball-and-socket design of the human hip joint. A ball joint consists of a bearing stud and socket enclosed in a casing; all these parts are made of steel. The bearing stud is tapered and threaded, and fits into a tapered hole in the steering knuckle. A protective encasing prevents dirt from getting into the joint assembly. Usually, this is a rubber-like boot that allows movement and expansion of lubricant. Motion-control ball joints tend to be retained with an internal spring, which helps to prevent vibration problems in the linkage. The "offset" ball joint provides a means of movement in systems where thermal expansion and contraction, shock, seismic motion, and torsional motions and forces are present. Cotterpin This is mainly used to rigidly connect two rods that transmit motion in the axial direction, without rotation. These joints may be subjected to tensile or compressive forces along the axes of the rods. A well-known example is the joining of the piston rod's extension to the connecting rod in the crosshead assembly. Advantages: Quick assembly and disassembly are possible It can take tensile as well as compressive force. Application: Joint between piston rod and cross head of a steam engine Joint between valve rod and its steam A steam engine connecting rod strap end Foundation bolt Bolted A bolted joint is a mechanical joint which is the most popular choice for connecting two members together. It is easy to design and easy to procure parts for, making it a very popular design choice for many applications. Advantage: Joints are easily assembled/disassembled by using a torque wrench or other fastener tooling. Clamped members can be axially tensioned at variable preloads. Disadvantage: Threaded components can fail from fatigue. Joints can come loose, requiring re-torquing.
Application: Pipe flanges Automotive engines Foundation bolts Screw Universal References Kinematics Rigid bodies Mechanical engineering Hardware (mechanical) Mechanical fasteners
Mechanical joint
[ "Physics", "Technology", "Engineering" ]
725
[ "Machines", "Kinematics", "Applied and interdisciplinary physics", "Physical phenomena", "Mechanical fasteners", "Classical mechanics", "Physical systems", "Construction", "Motion (physics)", "Mechanics", "Mechanical engineering", "Hardware (mechanical)" ]
21,839,925
https://en.wikipedia.org/wiki/Runge%E2%80%93Gross%20theorem
In quantum mechanics, specifically time-dependent density functional theory, the Runge–Gross theorem (RG theorem) shows that for a many-body system evolving from a given initial wavefunction, there exists a one-to-one mapping between the potential (or potentials) in which the system evolves and the density (or densities) of the system. The potentials under which the theorem holds are defined up to an additive purely time-dependent function: such functions only change the phase of the wavefunction and leave the density invariant. Most often the RG theorem is applied to molecular systems where the electronic density, ρ(r,t), changes in response to an external scalar potential, v(r,t), such as a time-varying electric field. The Runge–Gross theorem provides the formal foundation of time-dependent density functional theory. It shows that the density can be used as the fundamental variable in describing quantum many-body systems in place of the wavefunction, and that all properties of the system are functionals of the density. The theorem was published by Erich Runge and Eberhard K. U. Gross in 1984. As of September 2021, the original paper has been cited over 5,700 times. Overview The Runge–Gross theorem was originally derived for electrons moving in a scalar external field. Given such a field denoted by v and the number of electrons, N, which together determine a Hamiltonian Hv, and an initial condition on the wavefunction Ψ(t = t0) = Ψ0, the evolution of the wavefunction is determined by the Schrödinger equation (written in atomic units). At any given time, the N-electron wavefunction, which depends upon 3N spatial and N spin coordinates, determines the electronic density through integration. Two external potentials differing only by an additive time-dependent, spatially independent, function, c(t), give rise to wavefunctions differing only by a phase factor exp(-i α(t)), with dα(t)/dt = c(t), and therefore the same electronic density. These constructions provide a mapping from an external potential to the electronic density: The Runge–Gross theorem shows that this mapping is invertible, modulo c(t). Equivalently, that the density is a functional of the external potential and of the initial wavefunction on the space of potentials differing by more than the addition of c(t): Proof Given two scalar potentials denoted as v(r,t) and v'(r,t), which differ by more than an additive purely time-dependent term, the proof follows by showing that the densities corresponding to the two scalar potentials, obtained by solving the Schrödinger equation, differ. The proof relies heavily on the assumption that the external potential can be expanded in a Taylor series about the initial time. The proof also assumes that the density vanishes at infinity, making it valid only for finite systems. The Runge–Gross proof first shows that there is a one-to-one mapping between external potentials and current densities by invoking the Heisenberg equation of motion for the current density so as to relate time-derivatives of the current density to spatial derivatives of the external potential. Given this result, the continuity equation is used in a second step to relate time-derivatives of the electronic density to time-derivatives of the external potential. The assumption that the two potentials differ by more than an additive spatially independent term, and are expandable in a Taylor series, means that there exists an integer k ≥ 0 such that the kth time derivative of the difference of the two potentials at the initial time, uk(r), is not constant in space. This condition is used throughout the argument.
Step 1 From the Heisenberg equation of motion, the time evolution of the current density, j(r,t), under the external potential v(r,t) which determines the Hamiltonian Hv, is Introducing two potentials v and v', differing by more than an additive spatially constant term, and their corresponding current densities j and j', the Heisenberg equation implies The final line shows that if the two scalar potentials differ at the initial time by more than a spatially independent function, then the current densities that the potentials generate will differ infinitesimally after t0. If the two potentials do not differ at t0, but uk(r) ≠ 0 for some value of k, then repeated application of the Heisenberg equation shows that ensuring the current densities will differ from zero infinitesimally after t0. Step 2 The electronic density and current density are related by a continuity equation of the form Repeated application of the continuity equation to the difference of the densities ρ and ρ', and current densities j and j', yields The two densities will then differ if the right-hand side (RHS) is non-zero for some value of k. The non-vanishing of the RHS follows by a reductio ad absurdum argument. Assuming, contrary to our desired outcome, that integrate over all space and apply Green's theorem. The second term is a surface integral over an infinite sphere. Assuming that the density is zero at infinity (in finite systems, the density decays to zero exponentially) and that ∇uk2(r) increases slower than the density decays, the surface integral vanishes and, because of the non-negativity of the density, implying that uk is a constant, contradicting the original assumption and completing the proof. Extensions The Runge–Gross proof is valid for pure electronic states in the presence of a scalar field. The first extension of the RG theorem was to time-dependent ensembles, which employed the Liouville equation to relate the Hamiltonian and density matrix. A proof of the RG theorem for multicomponent systems—where more than one type of particle is treated within the full quantum theory—was introduced in 1986. Incorporation of magnetic effects requires the introduction of a vector potential (A(r)) which together with the scalar potential uniquely determine the current density. Time-dependent density functional theories of superconductivity were introduced in 1994 and 1995. Here, scalar, vector, and pairing (D(t)) potentials map between current and anomalous (ΔIP(r,t)) densities. References Density functional theory Theorems in quantum mechanics
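The equations referred to in the overview and proof are not reproduced in the text above; as a reference sketch, the standard forms (in atomic units, using the notation of the article) are:

i\,\partial_t \Psi(t) = \hat{H}_v(t)\,\Psi(t), \qquad \Psi(t_0) = \Psi_0

\rho(\mathbf{r},t) = N \sum_{\sigma_1,\ldots,\sigma_N} \int \mathrm{d}^3 r_2 \cdots \mathrm{d}^3 r_N \, \bigl|\Psi(\mathbf{r}\sigma_1, \mathbf{r}_2\sigma_2, \ldots, \mathbf{r}_N\sigma_N, t)\bigr|^2

v'(\mathbf{r},t) = v(\mathbf{r},t) + c(t) \;\Longrightarrow\; \Psi'(t) = e^{-i\alpha(t)}\,\Psi(t), \quad \frac{\mathrm{d}\alpha}{\mathrm{d}t} = c(t), \quad \rho'(\mathbf{r},t) = \rho(\mathbf{r},t)

u_k(\mathbf{r}) = \left.\frac{\partial^k}{\partial t^k}\bigl[v(\mathbf{r},t) - v'(\mathbf{r},t)\bigr]\right|_{t=t_0}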
Runge–Gross theorem
[ "Physics", "Chemistry", "Mathematics" ]
1,322
[ "Theorems in quantum mechanics", "Density functional theory", "Quantum chemistry", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
30,874,007
https://en.wikipedia.org/wiki/Brake-specific%20fuel%20consumption
Brake-specific fuel consumption (BSFC) is a measure of the fuel efficiency of any prime mover that burns fuel and produces rotational, or shaft, power. It is typically used for comparing the efficiency of internal combustion engines with a shaft output. It is the rate of fuel consumption divided by the power produced. In traditional units, it measures fuel consumption in pounds per hour divided by the brake horsepower, lb/(hp⋅h); in SI units, this corresponds to the inverse of the units of specific energy, kg/J = s²/m². It may also be thought of as power-specific fuel consumption, for this reason. BSFC allows the fuel efficiency of different engines to be directly compared. The term "brake" here, as in "brake horsepower", refers to a historical method of measuring torque (see Prony brake). Calculation The brake-specific fuel consumption is given by BSFC = r/P = r/(τω), where: r is the fuel consumption rate in grams per second (g/s), P = τω is the power produced in watts (W), ω is the engine speed in radians per second (rad/s), and τ is the engine torque in newton metres (N⋅m). The above values of r, ω, and τ may be readily measured by instrumentation with an engine mounted in a test stand and a load applied to the running engine. The resulting units of BSFC are grams per joule (g/J). Commonly BSFC is expressed in units of grams per kilowatt-hour (g/(kW⋅h)). The conversion factor is as follows: BSFC [g/(kW⋅h)] = BSFC [g/J] × (3.6 × 10⁶) The conversion between metric and imperial units is: BSFC [g/(kW⋅h)] = BSFC [lb/(hp⋅h)] × 608.277 BSFC [lb/(hp⋅h)] = BSFC [g/(kW⋅h)] × 0.001644 Relation to efficiency To calculate the actual efficiency of an engine requires the energy density of the fuel being used. Different fuels have different energy densities defined by the fuel's heating value. The lower heating value (LHV) is used for internal-combustion-engine-efficiency calculations because the latent heat of vaporization of the water in the combustion products cannot be put to use. Some examples of lower heating values for vehicle fuels are: Certification gasoline = 18,640 BTU/lb (0.01204 kW⋅h/g) Regular gasoline = 18,917 BTU/lb (0.0122222 kW⋅h/g) Diesel fuel = 18,500 BTU/lb (0.0119531 kW⋅h/g) Thus a diesel engine's efficiency = 1/(BSFC × 0.0119531) and a gasoline engine's efficiency = 1/(BSFC × 0.0122225) Operating values and as a cycle average statistic Any engine will have different BSFC values at different speeds and loads. For example, a reciprocating engine achieves maximum efficiency when the intake air is unthrottled and the engine is running near its peak torque. The efficiency often reported for a particular engine, however, is not its maximum efficiency but a fuel economy cycle statistical average. For example, the cycle average value of BSFC for a gasoline engine is 322 g/(kW⋅h), translating to an efficiency of 25% (1/(322 × 0.0122225) = 0.2540). Actual efficiency can be lower or higher than the engine's average due to varying operating conditions. In the case of a production gasoline engine, the most efficient BSFC is approximately 225 g/(kW⋅h), which is equivalent to a thermodynamic efficiency of 36%. An iso-BSFC map (fuel island plot) of a diesel engine is shown. The sweet spot at 206 g/(kW⋅h) BSFC has 40.6% efficiency. The x-axis is engine speed in rpm; the y-axis is BMEP in bar (BMEP is proportional to torque). Engine design and class BSFC numbers vary considerably with engine design, compression ratio, and power rating.
Engines of different classes like diesels and gasoline engines will have very different BSFC numbers, ranging from less than 200 g/(kW⋅h) (diesel at low speed and high torque) to more than 1,000 g/(kW⋅h) (turboprop at low power level). Examples for shaft engines The following table gives example values of the specific fuel consumption for several types of engines. For specific engines, values can and often do differ from the table values shown below. Energy efficiency is based on a lower heating value of 42.7 MJ/kg (equivalent to 84.3 g/(kW⋅h) at 100% efficiency) for diesel fuel and jet fuel, and 43.9 MJ/kg (82.0 g/(kW⋅h)) for gasoline. Turboprop efficiency is only good at high power; SFC increases dramatically for approach at low power (30% Pmax) and especially at idle (7% Pmax). See also Fuel economy in automobiles Energy-efficient driving Fuel management systems Marine fuel management Thrust specific fuel consumption References Further reading Reciprocating engine types HowStuffWorks: How Car Engines Work Reciprocating Engines at infoplease Piston Engines US Centennial of Flight Commission Effect of EGR on the exhaust gas temperature and exhaust opacity in compression ignition engines Heywood J B 1988 Pollutant formation and control. Internal combustion engine fundamentals Int. edn (New York: Mc-Graw Hill) pp 572–577 Well-to-Wheel Studies, Heating Values, and the Energy Conservation Principle Exemplary maps for commercial car engines collected by ecomodder forum users Fuel technology Energy efficiency Power (physics)
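The calculation and efficiency relations above can be condensed into a short Python sketch; the engine figures used in the example (fuel flow, torque, speed) are illustrative assumptions, while the lower-heating-value constant is the regular-gasoline figure quoted in the article.

def bsfc_g_per_kwh(fuel_rate_g_per_s, torque_nm, speed_rad_per_s):
    """Brake-specific fuel consumption: fuel mass flow divided by shaft power.
    Power P = torque * angular speed; result converted from g/J to g/(kW h)."""
    power_w = torque_nm * speed_rad_per_s
    bsfc_g_per_j = fuel_rate_g_per_s / power_w
    return bsfc_g_per_j * 3.6e6

def efficiency_from_bsfc(bsfc_g_per_kwh, lhv_kwh_per_g):
    """Thermal efficiency = 1 / (BSFC * lower heating value)."""
    return 1.0 / (bsfc_g_per_kwh * lhv_kwh_per_g)

# Illustrative figures only: ~7 g/s of gasoline at 250 N*m and 3000 rpm.
speed = 3000 * 2 * 3.14159265 / 60                       # rpm -> rad/s
bsfc = bsfc_g_per_kwh(7.0, 250.0, speed)
print(round(bsfc, 1), "g/(kW h)")                        # ~321 g/(kW h)
print(round(efficiency_from_bsfc(bsfc, 0.0122225), 3))   # ~0.25, as in the text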
Brake-specific fuel consumption
[ "Physics", "Mathematics" ]
1,205
[ "Force", "Physical quantities", "Quantity", "Power (physics)", "Energy (physics)", "Wikipedia categories named after physical quantities" ]
30,874,071
https://en.wikipedia.org/wiki/Passivity%20%28engineering%29
Passivity is a property of engineering systems, most commonly encountered in analog electronics and control systems. Typically, analog designers use passivity to refer to incrementally passive components and systems, which are incapable of power gain. In contrast, control systems engineers will use passivity to refer to thermodynamically passive ones, which consume, but do not produce, energy. As such, without context or a qualifier, the term passive is ambiguous. An electronic circuit consisting entirely of passive components is called a passive circuit, and has the same properties as a passive component. If a device is not passive, then it is an active device. Thermodynamic passivity In control systems and circuit network theory, a passive component or circuit is one that consumes energy, but does not produce energy. Under this methodology, voltage and current sources are considered active, while resistors, capacitors, inductors, transistors, tunnel diodes, metamaterials and other dissipative and energy-neutral components are considered passive. Circuit designers will sometimes refer to this class of components as dissipative, or thermodynamically passive. While many books give definitions for passivity, many of these contain subtle errors in how initial conditions are treated and, occasionally, the definitions do not generalize to all types of nonlinear time-varying systems with memory. Below is a correct, formal definition, taken from Wyatt et al., which also explains the problems with many other definitions. Given an n-port R with a state representation S, and initial state x, define available energy EA as: where the notation supx→T≥0 indicates that the supremum is taken over all T ≥ 0 and all admissible pairs {v(·), i(·)} with the fixed initial state x (e.g., all voltage–current trajectories for a given initial condition of the system). A system is considered passive if EA is finite for all initial states x. Otherwise, the system is considered active. Roughly speaking, the inner product is the instantaneous power (e.g., the product of voltage and current), and EA is the upper bound on the integral of the instantaneous power (i.e., energy). This upper bound (taken over all T ≥ 0) is the available energy in the system for the particular initial condition x. If, for all possible initial states of the system, the energy available is finite, then the system is called passive. If the available energy is finite, it is known to be non-negative, since any trajectory with voltage gives an integral equal to zero, and the available energy is the supremum over all possible trajectories. Moreover, by definition, for any trajectory {v(·), i(·)}, the following inequality holds: . The existence of a non-negative function EA that satisfies this inequality, known as a "storage function", is equivalent to passivity. For a given system with a known model, it is often easier to construct a storage function satisfying the differential inequality than directly computing the available energy, as taking the supremum on a collection of trajectories might require the use of calculus of variations. Incremental passivity In circuit design, informally, passive components refer to ones that are not capable of power gain; this means they cannot amplify signals. Under this definition, passive components include capacitors, inductors, resistors, diodes, transformers, voltage sources, and current sources. They exclude devices like transistors, vacuum tubes, relays, tunnel diodes, and glow tubes. 
To give other terminology, systems for which the small signal model is not passive are sometimes called locally active (e.g. transistors and tunnel diodes). Systems that can generate power about a time-variant unperturbed state are often called parametrically active (e.g. certain types of nonlinear capacitors). Formally, for a memoryless two-terminal element, this means that the current–voltage characteristic is monotonically increasing. For this reason, control systems and circuit network theorists refer to these devices as locally passive, incrementally passive, increasing, monotone increasing, or monotonic. It is not clear how this definition would be formalized for multiport devices with memory – as a practical matter, circuit designers use this term informally, so it may not be necessary to formalize it. Other definitions of passivity This term is used colloquially in a number of other contexts: A passive USB to PS/2 adapter consists of wires, and potentially resistors and similar passive (in both the incremental and thermodynamic sense) components. An active USB to PS/2 adapter consists of logic to translate signals (active in the incremental sense). A passive mixer consists of just resistors (incrementally passive), whereas an active mixer includes components capable of gain (active). In audio work one can also find both (incrementally) passive and active converters between balanced and unbalanced lines. A passive balun converter is generally just a transformer along with, of course, the requisite connectors, while an active one typically consists of a differential drive or an instrumentation amplifier. In some books, devices that exhibit gain or a rectifying function (e.g. diodes) are considered active. Only resistors, capacitors, inductors, transformers, and gyrators are considered passive. The United States Patent and Trademark Office is amongst the organisations classing diodes as active devices. This definition is somewhat informal, as diodes can be considered non-linear resistors, and virtually all real-world devices exhibit some non-linearity. Sales/product catalogs will often use different informal definitions of this term, as fits the particular hierarchy of products being sold. It is not uncommon, for example, to list all silicon devices under "active devices," even if some of those devices are technically passive. Stability Passivity, in most cases, can be used to demonstrate that passive circuits will be stable under specific criteria. This only works if only one of the above definitions of passivity is used – if components from the two are mixed, the systems may be unstable under any criterion. In addition, passive circuits will not necessarily be stable under all stability criteria. For instance, a resonant series LC circuit will have unbounded voltage output for a bounded voltage input, but will be stable in the sense of Lyapunov, and given bounded energy input will have bounded energy output. Passivity is frequently used in control systems to design stable control systems or to show stability in control systems. This is especially important in the design of large, complex control systems (e.g. stability of airplanes). Passivity is also used in some areas of circuit design, especially filter design. Passive filter A passive filter is a kind of electronic filter that is made only from passive components – in contrast to an active filter, it does not require an external power source (beyond the signal).
Since most filters are linear, in most cases, passive filters are composed of just the four basic linear elements – resistors, capacitors, inductors, and transformers. More complex passive filters may involve nonlinear elements, or more complex linear elements, such as transmission lines. A passive filter has several advantages over an active filter: Guaranteed stability Scale better to large signals (tens of amperes, hundreds of volts), where active devices are often expensive or impractical No power supply needed Often less expensive in discrete designs (unless large coils are required). Active filters tend to be less expensive in integrated designs. For linear filters, potentially greater linearity depending on components required (in many cases, active filters allow the use of more linear components; e.g. active components can permit the use of a polypropylene or NP0 ceramic capacitor, while a passive one might require an electrolytic). They are commonly used in speaker crossover design (due to the moderately large voltages and currents, and the lack of easy access to a power supply), filters in power distribution networks (due to the large voltages and currents), power supply bypassing (due to low cost, and in some cases, power requirements), as well as a variety of discrete and home brew circuits (for low-cost and simplicity). Passive filters are uncommon in monolithic integrated circuit design, where active devices are inexpensive compared to resistors and capacitors, and inductors are prohibitively expensive. Passive filters are still found, however, in hybrid integrated circuits. Indeed, it may be the desire to incorporate a passive filter that leads the designer to use the hybrid format. Energic and non-energic passive circuit elements Passive circuit elements may be divided into energic and non-energic kinds. When current passes through it, an energic passive circuit element converts some of the energy supplied to it into heat. It is dissipative. When current passes through it, a non-energic passive circuit element converts none of the energy supplied to it into heat. It is non-dissipative. Resistors are energic. Ideal capacitors, inductors, transformers, and gyrators are non-energic. Notes References Further reading —Very readable introductory discussion on passivity in control systems. —Good collection of passive stability theorems, but restricted to memoryless one-ports. Readable and formal. —Somewhat less readable than Chua, and more limited in scope and formality of theorems. —Gives a definition of passivity for multiports (in contrast to the above), but the overall discussion of passivity is quite limited. — A pair of memos that have good discussions of passivity. —A complete exposition of dissipative systems, with emphasis on the celebrated KYP Lemma, and on Willems' dissipativity and its use in Control. Engineering concepts
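As a numerical illustration of the thermodynamic (dissipative) notion of passivity defined earlier in the article, the Python sketch below simulates a series RC one-port and checks a dissipation inequality: the energy absorbed at the port never falls below the change of the storage function E = ½CvC². The component values, the drive waveform, and the forward-Euler integration are assumptions of the sketch, not part of the article.

import numpy as np

# Series RC one-port (thermodynamically passive): check that
#   integral of v*i dt  >=  E(T) - E(0),  with storage function E = 1/2 C v_C^2.
R, C = 100.0, 1e-6
dt, n = 1e-6, 20000
t = np.arange(n) * dt
v_port = 5.0 * np.sin(2 * np.pi * 1000 * t) + 2.0 * np.sin(2 * np.pi * 3300 * t)

v_c = np.zeros(n)
i = np.zeros(n)
for k in range(n - 1):
    i[k] = (v_port[k] - v_c[k]) / R          # port current through the resistor
    v_c[k + 1] = v_c[k] + dt * i[k] / C      # capacitor state update (forward Euler)
i[-1] = (v_port[-1] - v_c[-1]) / R

absorbed = np.cumsum(v_port * i) * dt        # energy delivered into the port
storage = 0.5 * C * v_c ** 2                 # storage function E(x)

# The network cannot return more energy than was supplied plus what was stored,
# so absorbed - (E - E(0)) equals the dissipated energy and stays non-negative.
print(np.min(absorbed - (storage - storage[0])) >= -1e-12)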
Passivity (engineering)
[ "Engineering" ]
2,087
[ "nan" ]
30,874,505
https://en.wikipedia.org/wiki/Scalar%20field%20dark%20matter
In astrophysics and cosmology, scalar field dark matter is a classical, minimally coupled, scalar field postulated to account for the inferred dark matter. Background The universe may be accelerating, fueled perhaps by a cosmological constant or some other field possessing long range 'repulsive' effects. A model must predict the correct form for the large scale clustering spectrum, account for cosmic microwave background anisotropies on large and intermediate angular scales, and provide agreement with the luminosity distance relation obtained from observations of high redshift supernovae. The modeled evolution of the universe includes a large amount of unknown matter and energy in order to agree with such observations. This energy density has two components: cold dark matter and dark energy. Each contributes to the theory of the origination of galaxies and the expansion of the universe. The universe must have a critical density, a density not explained by baryonic matter (ordinary matter) alone. Scalar field The dark matter can be modeled as a scalar field using two fitted parameters, mass and self-interaction. In this model the dark matter consists of an ultralight particle with a mass of ~10⁻²² eV when there is no self-interaction. If there is a self-interaction, a wider mass range is allowed. The uncertainty in position of a particle is larger than its Compton wavelength (a particle with mass 10⁻²² eV has a Compton wavelength of 1.3 light years), and for some reasonable estimates of particle mass and density of dark matter there is no point talking about the individual particles' positions and momenta. Dynamical measurements allow the mass density of the dark matter in a halo to be deduced. One can then calculate the de Broglie wavelength of these particles, λ = h/(mv), where m is the mass of the dark matter particle and v is the dispersion velocity of the halo. The average number of particles in a cubic volume with side equal to the de Broglie wavelength is N = (ρ/m)λ³, where ρ is the dark matter mass density. The occupation number of these particles is so huge that we can consider the wave nature of these particles in the classical description. Such an occupation number is not allowed for fermions by the Pauli exclusion principle, so the particles must be bosons, in particular spin-zero (scalar) particles; hence this ultra-light dark matter behaves more like a wave than a particle, and galactic halos are giant systems of condensed Bose liquid, possibly superfluid. The dark matter can be described as a Bose–Einstein condensate of the ultralight quanta of the field and as boson stars. The enormous Compton wavelength of these particles prevents structure formation on small, subgalactic scales, which is a major problem in traditional cold dark matter models. The collapse of initial over-densities is studied in the references. There are not many models in which dark matter is treated as a scalar field. An axion-like particle (ALP) in string theory can be considered a model of scalar field dark matter, as its mass density can account for the relic density of dark matter. The most common production mechanism of ALPs is the misalignment mechanism, which can yield a mass that matches the observed dark matter relic abundance. This dark matter model is also known as BEC dark matter or wave dark matter. Fuzzy dark matter and ultra-light axions are examples of scalar field dark matter. See also References External links Scaled-Up Darkness, Scientific American Astroparticle physics Dark matter Particle physics
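The order-of-magnitude argument above can be reproduced in a few lines of Python. The particle mass (10⁻²² eV) and the Compton wavelength of about 1.3 light years come from the article; the halo dispersion velocity (~100 km/s) and the local dark matter density (~0.3 GeV/cm³) are commonly quoted figures assumed here only for illustration.

import numpy as np

h = 6.626e-34          # Planck constant, J s
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # J
ly = 9.461e15          # m

m_eV = 1e-22                       # particle mass from the article, eV/c^2
m_kg = m_eV * eV / c**2

# Compton wavelength h/(m c): the article quotes ~1.3 light years for 1e-22 eV.
print(h / (m_kg * c) / ly)         # ~1.3

# De Broglie wavelength h/(m v) for an assumed halo dispersion velocity of
# ~100 km/s (illustrative value, not from the article).
v = 1e5
lam = h / (m_kg * v)

# Occupation number: particles per de Broglie volume, assuming a local dark
# matter density of ~0.3 GeV/cm^3 (a commonly quoted figure, assumed here).
rho = 0.3e9 * eV / c**2 / 1e-6     # kg/m^3
n = rho / m_kg                     # number density, 1/m^3
print(n * lam**3)                  # enormous occupation number (~1e95)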
Scalar field dark matter
[ "Physics", "Astronomy" ]
717
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Astroparticle physics", "Unsolved problems in physics", "Astrophysics", "Particle physics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
30,875,016
https://en.wikipedia.org/wiki/Equivalent%20%28chemistry%29
An equivalent (symbol: officially equiv; unofficially but often Eq) is the amount of a substance that reacts with (or is equivalent to) an arbitrary amount (typically one mole) of another substance in a given chemical reaction. It is an archaic quantity that was used in chemistry and the biological sciences. The mass of an equivalent is called its equivalent weight. Formula The formula for converting milligrams (mg) to milliequivalents (mEq) and back is as follows: mEq = mg × valence / molecular weight, and mg = mEq × molecular weight / valence. For elemental compounds: Common examples mEq to milligram Milligram to mEq Formal definition In a more formal definition, the equivalent is the amount of a substance needed to do one of the following: react with or supply one mole of hydrogen ions (H+) in an acid–base reaction react with or supply one mole of electrons in a redox reaction. The "hydrogen ion" and the "electron" in these examples are respectively called the "reaction units." By this definition, the number of equivalents of a given ion in a solution is equal to the number of moles of that ion multiplied by its valence. For example, consider a solution of 1 mole of NaCl and 1 mole of CaCl2. The solution has 1 mole or 1 equiv Na+, 1 mole or 2 equiv Ca2+, and 3 mole or 3 equiv Cl−. An earlier definition, used especially for chemical elements, holds that an equivalent is the amount of a substance that will react with 1.008 grams of hydrogen, 8.0 grams of oxygen, or 35.5 grams of chlorine—or that will displace any of the three. In medicine and biochemistry In biological systems, reactions often happen on small scales, involving small amounts of substances, so those substances are routinely described in terms of milliequivalents (symbol: officially mequiv; unofficially but often mEq or meq), the prefix milli- denoting a factor of one thousandth (10⁻³). Very often, the measure is used in terms of milliequivalents of solute per litre of solution (or milliNormal, where 1 mEq/L = 1 mN). This is especially common for measurement of compounds in biological fluids; for instance, the healthy level of potassium in the blood of a human is defined between 3.5 and 5.0 mEq/L. A certain amount of univalent ions provides the same amount of equivalents, while the same amount of divalent ions provides twice the amount of equivalents. For example, 1 mmol (0.001 mol) of Na+ is equal to 1 meq, while 1 mmol of Ca2+ is equal to 2 meq. References External links A dictionary of units of measurement Units of amount of substance Stoichiometry
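A small Python sketch of the conversion formula above; the atomic weights in the examples are approximate values assumed for illustration.

def mg_to_meq(mass_mg, valence, molecular_weight):
    """Milliequivalents from milligrams: mEq = mg * valence / molecular weight."""
    return mass_mg * valence / molecular_weight

def meq_to_mg(meq, valence, molecular_weight):
    """Milligrams from milliequivalents: mg = mEq * molecular weight / valence."""
    return meq * molecular_weight / valence

# Illustrative values; atomic weights are approximate.
print(mg_to_meq(1000.0, 1, 23.0))   # 1000 mg of Na+  -> ~43.5 mEq
print(mg_to_meq(1000.0, 2, 40.1))   # 1000 mg of Ca2+ -> ~49.9 mEq
print(meq_to_mg(1.0, 1, 39.1))      # 1 mEq of K+     -> ~39.1 mg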
Equivalent (chemistry)
[ "Chemistry", "Mathematics" ]
575
[ "Units of amount of substance", "Stoichiometry", "Chemical reaction engineering", "Quantity", "nan", "Units of measurement" ]
30,875,123
https://en.wikipedia.org/wiki/Stuttering%20equivalence
In theoretical computer science, stuttering equivalence, a relation written as , can be seen as a partitioning of paths and into blocks, so that states in the block of one path are labeled () the same as states in the block of the other path. Corresponding blocks may have different lengths. Formally, this can be expressed as two infinite paths and being stuttering equivalent () if there are two infinite sequences of integers and such that for every block holds . Stuttering equivalence is not the same as bisimulation, since bisimulation cannot capture the semantics of the 'eventually' (or 'finally') operator found in linear temporal/computation tree logic (branching time logic)(modal logic). So-called branching bisimulation has to be used. References Formal methods Logic in computer science
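For finite traces, the definition above reduces to comparing the sequences of maximal blocks of identically labeled consecutive states, ignoring block lengths. The following Python sketch is a simplified finite-trace illustration of that idea (the formal definition concerns infinite paths):

def collapse_stutter(path):
    """Collapse maximal blocks of identically labeled consecutive states."""
    blocks = []
    for label in path:
        if not blocks or blocks[-1] != label:
            blocks.append(label)
    return blocks

def stutter_equivalent(path_a, path_b):
    """Two finite label sequences are stutter equivalent iff their collapsed
    block sequences coincide (corresponding blocks may have different lengths)."""
    return collapse_stutter(path_a) == collapse_stutter(path_b)

print(stutter_equivalent(list("aabbbcc"), list("abc")))    # True
print(stutter_equivalent(list("aabbbcc"), list("abcb")))   # False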
Stuttering equivalence
[ "Mathematics", "Engineering" ]
164
[ "Software engineering", "Mathematical logic", "Logic in computer science", "Formal methods" ]
30,876,319
https://en.wikipedia.org/wiki/Parallel%20axis%20theorem
The parallel axis theorem, also known as Huygens–Steiner theorem, or just as Steiner's theorem, named after Christiaan Huygens and Jakob Steiner, can be used to determine the moment of inertia or the second moment of area of a rigid body about any axis, given the body's moment of inertia about a parallel axis through the object's center of gravity and the perpendicular distance between the axes. Mass moment of inertia Suppose a body of mass m is rotated about an axis z passing through the body's center of mass. The body has a moment of inertia Icm with respect to this axis. The parallel axis theorem states that if the body is made to rotate instead about a new axis z′, which is parallel to the first axis and displaced from it by a distance d, then the moment of inertia I with respect to the new axis is related to Icm by I = Icm + md². Explicitly, d is the perpendicular distance between the axes z and z′. The parallel axis theorem can be applied with the stretch rule and perpendicular axis theorem to find moments of inertia for a variety of shapes. Derivation We may assume, without loss of generality, that in a Cartesian coordinate system the perpendicular distance between the axes lies along the x-axis and that the center of mass lies at the origin. The moment of inertia relative to the z-axis is then Icm = ∫(x² + y²) dm. The moment of inertia relative to the axis z′, which is at a distance d from the center of mass along the x-axis, is I = ∫[(x − d)² + y²] dm. Expanding the brackets yields I = ∫(x² + y²) dm + d²∫dm − 2d∫x dm. The first term is Icm and the second term becomes md². The integral in the final term is a multiple of the x-coordinate of the center of mass—which is zero since the center of mass lies at the origin. So, the equation becomes: I = Icm + md². Tensor generalization The parallel axis theorem can be generalized to calculations involving the inertia tensor. Let Icm denote the inertia tensor of a body as calculated at the center of mass. Then the inertia tensor J as calculated relative to a new point is Jij = Icm,ij + m(|R|²δij − RiRj), where R is the displacement vector from the center of mass to the new point, and δij is the Kronecker delta. For diagonal elements (when i = j), displacements perpendicular to the axis of rotation result in the above simplified version of the parallel axis theorem. The generalized version of the parallel axis theorem can be expressed in the form of coordinate-free notation as J = Icm + m[(R·R)E3 − R ⊗ R], where E3 is the 3 × 3 identity matrix and ⊗ is the outer product. Further generalization of the parallel axis theorem gives the inertia tensor about any set of orthogonal axes parallel to the reference set of axes x, y and z, associated with the reference inertia tensor, whether or not they pass through the center of mass. In this generalization, the inertia tensor can be moved from being reckoned about any reference point to some final reference point via the relational matrix as: where is the vector from the initial reference point to the object's center of mass and is the vector from the initial reference point to the final reference point (). The relational matrix is given by Second moment of area The parallel axes rule also applies to the second moment of area (area moment of inertia) for a plane region D: Iz = Ix + Ar², where Iz is the area moment of inertia of D relative to the parallel axis, Ix is the area moment of inertia of D relative to its centroid, A is the area of the plane region D, and r is the distance from the new axis z to the centroid of the plane region D. The centroid of D coincides with the centre of gravity of a physical plate with the same shape that has uniform density.
Polar moment of inertia for planar dynamics The mass properties of a rigid body that is constrained to move parallel to a plane are defined by its center of mass R = (x, y) in this plane, and its polar moment of inertia IR around an axis through R that is perpendicular to the plane. The parallel axis theorem provides a convenient relationship between the moment of inertia IS around an arbitrary point S and the moment of inertia IR about the center of mass R. Recall that the center of mass R has the property where r is integrated over the volume V of the body. The polar moment of inertia of a body undergoing planar movement can be computed relative to any reference point S, where S is constant and r is integrated over the volume V. In order to obtain the moment of inertia IS in terms of the moment of inertia IR, introduce the vector d from S to the center of mass R, The first term is the moment of inertia IR, the second term is zero by definition of the center of mass, and the last term is the total mass of the body times the square magnitude of the vector d. Thus, which is known as the parallel axis theorem. Moment of inertia matrix The inertia matrix of a rigid system of particles depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass R and the inertia matrix relative to another point S. This relationship is called the parallel axis theorem. Consider the inertia matrix [IS] obtained for a rigid system of particles measured relative to a reference point S, given by where ri defines the position of particle Pi, i = 1, ..., n. Recall that [ri − S] is the skew-symmetric matrix that performs the cross product, for an arbitrary vector y. Let R be the center of mass of the rigid system, then where d is the vector from the reference point S to the center of mass R. Use this equation to compute the inertia matrix, Expand this equation to obtain The first term is the inertia matrix [IR] relative to the center of mass. The second and third terms are zero by definition of the center of mass R, And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix [d] constructed from d. The result is the parallel axis theorem, where d is the vector from the reference point S to the center of mass R. Identities for a skew-symmetric matrix In order to compare formulations of the parallel axis theorem using skew-symmetric matrices and the tensor formulation, the following identities are useful. Let [R] be the skew symmetric matrix associated with the position vector R = (x, y, z), then the product in the inertia matrix becomes This product can be computed using the matrix formed by the outer product [R RT] using the identity where [E3] is the 3 × 3 identity matrix. Also notice, that where tr denotes the sum of the diagonal elements of the outer product matrix, known as its trace. See also Christiaan Huygens Jakob Steiner Moment of inertia Perpendicular axis theorem Rigid body dynamics Stretch rule References External links Parallel axis theorem Moment of inertia tensor Video about the inertia tensor Mechanics Physics theorems Moment (physics) fr:Moment d'inertie#Théorème de transport (ou théorème d'Huygens-Steiner)
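Both the scalar statement I = Icm + md² and the tensor form quoted above are easy to check numerically. The following Python sketch does so for a solid sphere; the mass, radius, and displacement are illustrative assumptions.

import numpy as np

def parallel_axis_scalar(i_cm, mass, d):
    """Moment of inertia about an axis a perpendicular distance d from a
    parallel axis through the center of mass: I = I_cm + m d^2."""
    return i_cm + mass * d**2

def parallel_axis_tensor(i_cm, mass, d):
    """Tensor form: J_ij = I_ij + m (|d|^2 delta_ij - d_i d_j), with d the
    displacement vector from the center of mass to the new reference point."""
    d = np.asarray(d, dtype=float)
    return i_cm + mass * (np.dot(d, d) * np.eye(3) - np.outer(d, d))

# Solid sphere (illustrative numbers): I_cm = 2/5 m r^2 about any central axis.
m, r = 2.0, 0.1
i_cm = 0.4 * m * r**2
print(parallel_axis_scalar(i_cm, m, 0.5))               # axis shifted by 0.5 m

i_cm_tensor = np.eye(3) * i_cm
print(parallel_axis_tensor(i_cm_tensor, m, [0.5, 0.0, 0.0]))
# The yy and zz entries gain m d^2, while the xx entry, whose axis lies along
# the displacement direction, is unchanged.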
Parallel axis theorem
[ "Physics", "Mathematics", "Engineering" ]
1,457
[ "Equations of physics", "Physical quantities", "Quantity", "Mechanics", "Mechanical engineering", "Moment (physics)", "Physics theorems" ]
30,876,419
https://en.wikipedia.org/wiki/Quantum%20state
In quantum physics, a quantum state is a mathematical entity that embodies the knowledge of a quantum system. Quantum mechanics specifies the construction, evolution, and measurement of a quantum state. The result is a prediction for the system represented by the state. Knowledge of the quantum state, and the rules for the system's evolution in time, exhausts all that can be known about a quantum system. Quantum states may be defined differently for different kinds of systems or problems. Two broad categories are wave functions describing quantum systems using position or momentum variables and the more abstract vector quantum states. Historical, educational, and application-focused problems typically feature wave functions; modern professional physics uses the abstract vector states. In both categories, quantum states divide into pure versus mixed states, or into coherent states and incoherent states. Categories with special properties include stationary states for time independence and quantum vacuum states in quantum field theory. From the states of classical mechanics As a tool for physics, quantum states grew out of states in classical mechanics. A classical dynamical state consists of a set of dynamical variables with well-defined real values at each instant of time. For example, the state of a cannon ball would consist of its position and velocity. The state values evolve under equations of motion and thus remain strictly determined. If we know the position of a cannon and the exit velocity of its projectiles, then we can use equations containing the force of gravity to predict the trajectory of a cannon ball precisely. Similarly, quantum states consist of sets of dynamical variables that evolve under equations of motion. However, the values derived from quantum states are complex numbers, quantized, limited by uncertainty relations, and only provide a probability distribution for the outcomes for a system. These constraints alter the nature of quantum dynamic variables. For example, the quantum state of an electron in a double-slit experiment would consist of complex values over the detection region and, when squared, only predict the probability distribution of electron counts across the detector. Role in quantum mechanics The process of describing a quantum system with quantum mechanics begins with identifying a set of variables defining the quantum state of the system. The set will contain compatible and incompatible variables. Simultaneous measurement of a complete set of compatible variables prepares the system in a unique state. The state then evolves deterministically according to the equations of motion. Subsequent measurement of the state produces a sample from a probability distribution predicted by the quantum mechanical operator corresponding to the measurement. The fundamentally statistical or probabilistic nature of quantum measurements changes the role of quantum states in quantum mechanics compared to classical states in classical mechanics. In classical mechanics, the initial state of one or more bodies is measured; the state evolves according to the equations of motion; measurements of the final state are compared to predictions. In quantum mechanics, ensembles of identically prepared quantum states evolve according to the equations of motion and many repeated measurements are compared to predicted probability distributions. Measurements Measurements, macroscopic operations on quantum states, filter the state.
Whatever the input quantum state might be, repeated identical measurements give consistent values. For this reason, measurements 'prepare' quantum states for experiments, placing the system in a partially defined state. Subsequent measurements may either further prepare the system – these are compatible measurements – or it may alter the state, redefining it – these are called incompatible or complementary measurements. For example, we may measure the momentum of a state along the axis any number of times and get the same result, but if we measure the position after once measuring the momentum, subsequent measurements of momentum are changed. The quantum state appears unavoidably altered by incompatible measurements. This is known as the uncertainty principle. Eigenstates and pure states The quantum state after a measurement is in an eigenstate corresponding to that measurement and the value measured. Other aspects of the state may be unknown. Repeating the measurement will not alter the state. In some cases, compatible measurements can further refine the state, causing it to be an eigenstate corresponding to all these measurements. A full set of compatible measurements produces a pure state. Any state that is not pure is called a mixed state as discussed in more depth below. The eigenstate solutions to the Schrödinger equation can be formed into pure states. Experiments rarely produce pure states. Therefore statistical mixtures of solutions must be compared to experiments. Representations The same physical quantum state can be expressed mathematically in different ways called representations. The position wave function is one representation often seen first in introductions to quantum mechanics. The equivalent momentum wave function is another wave function based representation. Representations are analogous to coordinate systems or similar mathematical devices like parametric equations. Selecting a representation will make some aspects of a problem easier at the cost of making other things difficult. In formal quantum mechanics (see below) the theory develops in terms of abstract 'vector space', avoiding any particular representation. This allows many elegant concepts of quantum mechanics to be expressed and to be applied even in cases where no classical analog exists. Wave function representations Wave functions represent quantum states, particularly when they are functions of position or of momentum. Historically, definitions of quantum states used wavefunctions before the more formal methods were developed. The wave function is a complex-valued function of any complete set of commuting or compatible degrees of freedom. For example, one set could be the spatial coordinates of an electron. Preparing a system by measuring the complete set of compatible observables produces a pure quantum state. More common, incomplete preparation produces a mixed quantum state. Wave function solutions of Schrödinger's equations of motion for operators corresponding to measurements can readily be expressed as pure states; they must be combined with statistical weights matching experimental preparation to compute the expected probability distribution. Pure states of wave functions Numerical or analytic solutions in quantum mechanics can be expressed as pure states. These solution states, called eigenstates, are labeled with quantized values, typically quantum numbers. 
For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant pure states are identified by the principal quantum number , the angular momentum quantum number , the magnetic quantum number , and the spin z-component . For another example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results: up or down. A pure state here is represented by a two-dimensional complex vector , with a length of one; that is, with where and are the absolute values of and . The postulates of quantum mechanics state that pure states, at a given time , correspond to vectors in a separable complex Hilbert space, while each measurable physical quantity (such as the energy or momentum of a particle) is associated with a mathematical operator called the observable. The operator serves as a linear function that acts on the states of the system. The eigenvalues of the operator correspond to the possible values of the observable. For example, it is possible to observe a particle with a momentum of 1 kg⋅m/s if and only if one of the eigenvalues of the momentum operator is 1 kg⋅m/s. The corresponding eigenvector (which physicists call an eigenstate) with eigenvalue 1 kg⋅m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg⋅m/s, with no quantum uncertainty. If its momentum were measured, the result is guaranteed to be 1 kg⋅m/s. On the other hand, a pure state described as a superposition of multiple different eigenstates does in general have quantum uncertainty for the given observable. Using bra–ket notation, this linear combination of eigenstates can be represented as: The coefficient that corresponds to a particular state in the linear combination is a complex number, thus allowing interference effects between states. The coefficients are time dependent. How a quantum state changes in time is governed by the time evolution operator. Mixed states of wave functions A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. A mixture of quantum states is again a quantum state. A mixed state for electron spins, in the density-matrix formulation, has the structure of a matrix that is Hermitian and positive semi-definite, and has trace 1. A more complicated case is given (in bra–ket notation) by the singlet state, which exemplifies quantum entanglement: which involves superposition of joint spin states for two particles with spin 1/2. The singlet state satisfies the property that if the particles' spins are measured along the same direction then either the spin of the first particle is observed up and the spin of the second particle is observed down, or the first one is observed down and the second one is observed up, both possibilities occurring with equal probability. A pure quantum state can be represented by a ray in a projective Hilbert space over the complex numbers, while mixed states are represented by density matrices, which are positive semidefinite operators that act on Hilbert spaces. The Schrödinger–HJW theorem classifies the multitude of ways to write a given mixed state as a convex combination of pure states. 
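The two-outcome spin-measurement example above can be made concrete with a short numerical sketch. The following Python/NumPy snippet is illustrative only: the amplitudes alpha and beta are arbitrary values invented for the example, and the variable names are not taken from the article. It builds a normalized two-component pure state, applies the Born rule to obtain the probabilities of the "up" and "down" outcomes, and evaluates the expectation value of the spin-z observable (the Pauli matrix sigma_z, in units of the reduced Planck constant over two).

```python
import numpy as np

# Basis kets |up> and |down> as column vectors in C^2.
up = np.array([1.0 + 0j, 0.0 + 0j])
down = np.array([0.0 + 0j, 1.0 + 0j])

# A pure state (alpha, beta); the values below are arbitrary and then
# normalized so that |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1.0 + 1.0j, 2.0 - 0.5j
psi = alpha * up + beta * down
psi = psi / np.linalg.norm(psi)          # enforce unit length

# Born rule: probabilities of the two possible Stern-Gerlach outcomes.
p_up = abs(np.vdot(up, psi)) ** 2
p_down = abs(np.vdot(down, psi)) ** 2

# Expectation value of the spin-z observable (Pauli sigma_z, in units of hbar/2).
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
exp_sz = np.vdot(psi, sigma_z @ psi).real

print(f"P(up) = {p_up:.3f}, P(down) = {p_down:.3f}, sum = {p_up + p_down:.3f}")
print(f"<sigma_z> = {exp_sz:.3f}")
```

Because the state is normalized, the two outcome probabilities sum to one, and multiplying the whole vector by an overall phase changes neither the probabilities nor the expectation value, in line with the ray picture discussed later in the article.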
Before a particular measurement is performed on a quantum system, the theory gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the linear operators describing the measurement. Probability distributions for different measurements exhibit tradeoffs exemplified by the uncertainty principle: a state that implies a narrow spread of possible outcomes for one experiment necessarily implies a wide spread of possible outcomes for another. Statistical mixtures of states are a different type of linear combination. A statistical mixture of states is a statistical ensemble of independent systems. Statistical mixtures represent the degree of knowledge whilst the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities of different states . A number represents the probability of a randomly selected system being in the state . Unlike the linear combination case each system is in a definite eigenstate. The expectation value of an observable is a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories. There is no state that is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement and the momentum measurement (at the same time ) are known exactly; at least one of them will have a range of possible values. This is the content of the Heisenberg uncertainty relation. Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state. More precisely: After measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measure A twice in the same run of the experiment, the measurements being directly consecutive in time, then they will produce the same results. This has some strange consequences, however, as follows. Consider two incompatible observables, and , where corresponds to a measurement earlier in time than . Suppose that the system is in an eigenstate of at the experiment's beginning. If we measure only , all runs of the experiment will yield the same result. If we measure first and then in the same run of the experiment, the system will transfer to an eigenstate of after the first measurement, and we will generally notice that the results of are statistical. Thus: Quantum mechanical measurements influence one another, and the order in which they are performed is important. Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see Quantum entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow us to distinguish between quantum theory and alternative classical (non-quantum) models. Schrödinger picture vs. 
Heisenberg picture One can take the observables to be dependent on time, while the state was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. (This approach was taken in the later part of the discussion above, with time-varying observables , .) One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. (This approach was taken in the earlier part of the discussion above, with a time-varying state .) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention. Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. Compare with Dirac picture. Formalism in quantum physics Pure states as rays in a complex Hilbert space Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some finite- or infinite-dimensional Hilbert space. The pure states correspond to vectors of norm 1. Thus the set of all pure states corresponds to the unit sphere in the Hilbert space, because the unit sphere is defined as the set of all vectors with norm 1. Multiplying a pure state by a scalar is physically inconsequential (as long as the state is considered by itself). If a vector in a complex Hilbert space can be obtained from another vector by multiplying by some non-zero complex number, the two vectors are said to correspond to the same ray in the projective Hilbert space of . Note that although the word ray is used, properly speaking, a point in the projective Hilbert space corresponds to a line passing through the origin of the Hilbert space, rather than a half-line, or ray in the geometrical sense. Spin The angular momentum has the same dimension (M·L²·T⁻¹) as the Planck constant and, at quantum scale, behaves as a discrete degree of freedom of a quantum system. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described with spinors. In non-relativistic quantum mechanics the group representations of the Lie group SU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative number that, in units of the reduced Planck constant , is either an integer (0, 1, 2, ...) or a half-integer (1/2, 3/2, 5/2, ...). For a massive particle with spin , its spin quantum number always assumes one of the possible values in the set As a consequence, the quantum state of a particle with spin is described by a vector-valued wave function with values in C^(2S+1). Equivalently, it is represented by a complex-valued function of four variables: one discrete quantum number variable (for the spin) is added to the usual three continuous variables (for the position in space). Many-body states and particle statistics The quantum state of a system of N particles, each potentially with spin, is described by a complex-valued function with four variables per particle, corresponding to 3 spatial coordinates and spin, e.g. Here, the spin variables mν assume values from the set where is the spin of the νth particle. 
for a particle that does not exhibit spin. The treatment of identical particles is very different for bosons (particles with integer spin) versus fermions (particles with half-integer spin). The above N-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) with respect to the particle numbers. If not all N particles are identical, but some of them are, then the function must be (anti)symmetrized separately over the variables corresponding to each group of identical variables, according to its statistics (bosonic or fermionic). Electrons are fermions with spin 1/2, photons (quanta of light) are bosons with spin 1 (although in the vacuum they are massless and can't be described with Schrödinger mechanics). When symmetrization or anti-symmetrization is unnecessary, N-particle spaces of states can be obtained simply by tensor products of one-particle spaces, to which we will return later. Basis states of one-particle systems A state belonging to a separable complex Hilbert space can always be expressed uniquely as a linear combination of elements of an orthonormal basis of . Using bra–ket notation, this means any state can be written as with complex coefficients and basis elements . In this case, the normalization condition translates to In physical terms, has been expressed as a quantum superposition of the "basis states" , i.e., the eigenstates of an observable. In particular, if said observable is measured on the normalized state , then is the probability that the result of the measurement is . In general, the expression for probability always consists of a relation between the quantum state and a portion of the spectrum of the dynamical variable (i.e. random variable) being observed. For example, the situation above describes the discrete case as eigenvalues belong to the point spectrum. Likewise, the wave function is just the eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) ; the energy of the system. An example of the continuous case is given by the position operator. The probability measure for a system in state is given by: where is the probability density function for finding a particle at a given position. These examples emphasize the distinction in characteristics between the state and the observable. That is, whereas is a pure state belonging to , the (generalized) eigenvectors of the position operator do not. Pure states vs. bound states Though closely related, pure states are not the same as bound states belonging to the pure point spectrum of an observable with no quantum uncertainty. A particle is said to be in a bound state if it remains localized in a bounded region of space for all times. A pure state is called a bound state if and only if for every there is a compact set such that for all . The integral represents the probability that a particle is found in a bounded region at any time . If the probability remains arbitrarily close to then the particle is said to remain in . Superposition of pure states As mentioned above, quantum states may be superposed. If and are two kets corresponding to quantum states, the ket is also a quantum state of the same system. Both and can be complex numbers; their relative amplitude and relative phase will influence the resulting quantum state. Writing the superposed state using and defining the norm of the state as: and extracting the common factors gives: The overall phase factor in front has no physical effect. 
Only the relative phase affects the physical nature of the superposition. One example of superposition is the double-slit experiment, in which superposition leads to quantum interference. Another example of the importance of relative phase is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states. Mixed states A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see Quantum statistical mechanics). Mixed states arise in quantum mechanics in two different situations: first, when the preparation of the system is not fully known, and thus one must deal with a statistical ensemble of possible preparations; and second, when one wants to describe a physical system which is entangled with another, as its state cannot be described by a pure state. In the first case, there could theoretically be another person who knows the full history of the system, and therefore describe the same system as a pure state; in this case, the density matrix is simply used to represent the limited knowledge of a quantum state. In the second case, however, the existence of quantum entanglement theoretically prevents the existence of complete knowledge about the subsystem, and it's impossible for any person to describe the subsystem of an entangled pair as a pure state. Mixed states inevitably arise from pure states when, for a composite quantum system with an entangled state on it, the part is inaccessible to the observer. The state of the part is expressed then as the partial trace over . A mixed state cannot be described with a single ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted ρ. Density matrices can describe both mixed and pure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert space can be always represented as the partial trace of a pure quantum state (called a purification) on a larger bipartite system for a sufficiently large Hilbert space . The density matrix describing a mixed state is defined to be an operator of the form where is the fraction of the ensemble in each pure state The density matrix can be thought of as a way of using the one-particle formalism to describe the behavior of many similar particles by giving a probability distribution (or ensemble) of states that these particles can be found in. A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of is equal to 1 if the state is pure, and less than 1 if the state is mixed. Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state. The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observable is given by where and are eigenkets and eigenvalues, respectively, for the operator , and "" denotes trace. It is important to note that two types of averaging are occurring, one (over ) being the usual expected value of the observable when the quantum is in state , and the other (over ) being a statistical (said incoherent) average with the probabilities that the quantum is in those states. 
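The purity and entropy criteria described above can be checked numerically. In the sketch below (Python with NumPy), the helper functions and the particular spin states and weights are illustrative choices rather than anything prescribed by the article; the standard purity criterion is applied to the squared density matrix, i.e. Tr(ρ²) = 1 for a pure state and Tr(ρ²) < 1 for a mixed state, the von Neumann entropy is computed in nats, and the ensemble average of an observable A is obtained as Tr(ρA).

```python
import numpy as np

def density_matrix(states_and_weights):
    """Build rho = sum_k p_k |psi_k><psi_k| from (weight, ket) pairs."""
    dim = len(states_and_weights[0][1])
    rho = np.zeros((dim, dim), dtype=complex)
    for p, psi in states_and_weights:
        psi = psi / np.linalg.norm(psi)
        rho += p * np.outer(psi, psi.conj())
    return rho

def purity(rho):
    """Tr(rho^2): equal to 1 only for a pure state."""
    return np.trace(rho @ rho).real

def von_neumann_entropy(rho):
    """-Tr(rho ln rho), computed from the eigenvalues (in nats)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # drop zero eigenvalues (0*ln 0 = 0)
    return float(-np.sum(evals * np.log(evals)))

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# A pure state written as a density matrix ...
rho_pure = density_matrix([(1.0, (up + down) / np.sqrt(2))])
# ... and an equal-weight statistical mixture of spin-up and spin-down.
rho_mixed = density_matrix([(0.5, up), (0.5, down)])

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    print(name,
          "Tr(rho) =", round(np.trace(rho).real, 3),
          "Tr(rho^2) =", round(purity(rho), 3),
          "S_vN =", round(von_neumann_entropy(rho), 3),
          "<sigma_z> = Tr(rho sigma_z) =", round(np.trace(rho @ sigma_z).real, 3))
```

For the equal-weight mixture of spin-up and spin-down the density matrix is proportional to the identity, so its purity is 1/2 and its entropy is ln 2, while the pure superposition state has purity 1 and zero entropy, as the criteria above require.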
Mathematical generalizations States can be formulated in terms of observables, rather than as vectors in a vector space. These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See State on a C*-algebra and Gelfand–Naimark–Segal construction for more details. See also Atomic electron transition Bloch sphere Greenberger–Horne–Zeilinger state Ground state Introduction to quantum mechanics No-cloning theorem Orthonormal basis PBR theorem Quantum harmonic oscillator Quantum logic gate Stationary state Wave function collapse W state Notes References Further reading The concept of quantum states, in particular the content of the section Formalism in quantum physics above, is covered in most standard textbooks on quantum mechanics. For a discussion of conceptual aspects and a comparison with classical states, see: For a more detailed coverage of mathematical aspects, see: In particular, see Sec. 2.3. For a discussion of purifications of mixed quantum states, see Chapter 2 of John Preskill's lecture notes for Physics 219 at Caltech. For a discussion of geometric aspects see: , second, revised edition (2017)
Quantum state
[ "Physics" ]
5,012
[ "Quantum states", "Quantum mechanics" ]
30,876,740
https://en.wikipedia.org/wiki/Phototropism
In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light. Phototropism is one of the many plant tropisms, or movements, which respond to external stimuli. Growth towards a light source is called positive phototropism, while growth away from light is called negative phototropism. Negative phototropism is not to be confused with skototropism, which is defined as the growth towards darkness, whereas negative phototropism can refer to either the growth away from a light source or towards the darkness. Most plant shoots exhibit positive phototropism, and rearrange their chloroplasts in the leaves to maximize photosynthetic energy and promote growth. Some vine shoot tips exhibit negative phototropism, which allows them to grow towards dark, solid objects and climb them. The combination of phototropism and gravitropism allow plants to grow in the correct direction. Mechanism There are several signaling molecules that help the plant determine where the light source is coming from, and these activate several genes, which change the hormone gradients allowing the plant to grow towards the light. The very tip of the plant is known as the coleoptile, which is necessary in light sensing. The middle portion of the coleoptile is the area where the shoot curvature occurs. The Cholodny–Went hypothesis, developed in the early 20th century, predicts that in the presence of asymmetric light, auxin will move towards the shaded side and promote elongation of the cells on that side to cause the plant to curve towards the light source. Auxins activate proton pumps, decreasing the pH in the cells on the dark side of the plant. This acidification of the cell wall region activates enzymes known as expansins which disrupt hydrogen bonds in the cell wall structure, making the cell walls less rigid. In addition, increased proton pump activity leads to more solutes entering the plant cells on the dark side of the plant, which increases the osmotic gradient between the symplast and apoplast of these plant cells. Water then enters the cells along its osmotic gradient, leading to an increase in turgor pressure. The decrease in cell wall strength and increased turgor pressure above a yield threshold causes cells to swell, exerting the mechanical pressure that drives phototropic movement. Proteins encoded by a second group of genes, PIN genes, have been found to play a major role in phototropism. They are auxin transporters, and it is thought that they are responsible for the polarization of auxin location. Specifically PIN3 has been identified as the primary auxin carrier. It is possible that phototropins receive light and inhibit the activity of PINOID kinase (PID), which then promotes the activity of PIN3. This activation of PIN3 leads to asymmetric distribution of auxin, which then leads to asymmetric elongation of cells in the stem. pin3 mutants had shorter hypocotyls and roots than the wild-type, and the same phenotype was seen in plants grown with auxin efflux inhibitors. Using anti-PIN3 immunogold labeling, movement of the PIN3 protein was observed. 
PIN3 is normally localized to the surface of hypocotyl and stem, but is also internalized in the presence of Brefeldin A (BFA), an exocytosis inhibitor. This mechanism allows PIN3 to be repositioned in response to an environmental stimulus. PIN3 and PIN7 proteins were thought to play a role in pulse-induced phototropism. The curvature responses in the "pin3" mutant were reduced significantly, but only slightly reduced in "pin7" mutants. There is some redundancy among "PIN1", "PIN3", and "PIN7", but it is thought that PIN3 plays a greater role in pulse-induced phototropism. There are phototropins that are highly expressed in the upper region of coleoptiles. There are two main phototropins, phot1 and phot2. phot2 single mutants have phototropic responses like those of the wild-type, but phot1 phot2 double mutants do not show any phototropic responses. The amounts of PHOT1 and PHOT2 present are different depending on the age of the plant and the intensity of the light. There is a high amount of PHOT2 present in mature Arabidopsis leaves, and this was also seen in rice orthologs. The expression of PHOT1 and PHOT2 changes depending on the presence of blue or red light. There was a downregulation of PHOT1 mRNA in the presence of light, but upregulation of PHOT2 transcript. The levels of mRNA and protein present in the plant were dependent upon the age of the plant. This suggests that the phototropin expression levels change with the maturation of the leaves. Mature leaves contain chloroplasts that are essential in photosynthesis. Chloroplast rearrangement occurs in different light environments to maximize photosynthesis. There are several genes involved in plant phototropism, including the NPH1 and NPL1 genes. They are both involved in chloroplast rearrangement. The nph1 and npl1 double mutants were found to have reduced phototropic responses. In fact, the two genes are both redundant in determining the curvature of the stem. Recent studies reveal that multiple AGC kinases other than PHOT1 and PHOT2 are involved in plant phototropism. Firstly, PINOID, exhibiting a light-inducible expression pattern, determines the subcellular relocation of PIN3 during phototropic responses via direct phosphorylation. Secondly, D6PK and its D6PKL homologs modulate the auxin transport activity of PIN3, likely through phosphorylation as well. Thirdly, upstream of D6PK/D6PKLs, PDK1.1 and PDK1.2 act as essential activators of these AGC kinases. Interestingly, different AGC kinases might participate in different steps during the progression of a phototropic response. D6PK/D6PKLs exhibit an ability to phosphorylate more phosphosites than PINOID. Five models of auxin distribution in phototropism In 2012, Sakai and Haga outlined how different auxin concentrations could arise on the shaded and lighted sides of the stem, giving rise to a phototropic response. Five models with respect to stem phototropism have been proposed, using Arabidopsis thaliana as the study plant. First model In the first model, incoming light deactivates auxin on the light side of the plant, allowing the shaded part to continue growing and eventually bend the plant over towards the light. Second model In the second model, light inhibits auxin biosynthesis on the light side of the plant, thus decreasing the concentration of auxin relative to the unaffected side. Third model In the third model, there is a horizontal flow of auxin from both the light and dark sides of the plant. 
Incoming light causes more auxin to flow from the exposed side to the shaded side, increasing the concentration of auxin on the shaded side and thus producing more growth there. Fourth model In the fourth model, light received by the plant inhibits basipetal auxin flow down the exposed side, causing auxin to flow down only the shaded side. Fifth model Model five encompasses elements of both models 3 and 4. The main auxin flow in this model comes from the top of the plant vertically down towards the base of the plant, with some of the auxin travelling horizontally from the main auxin flow to both sides of the plant. Received light inhibits the horizontal auxin flow from the main vertical auxin flow to the irradiated, exposed side. According to the study by Sakai and Haga, the observed asymmetric auxin distribution and the subsequent phototropic response in hypocotyls seem most consistent with this fifth scenario. Effects of wavelength Phototropism in plants such as Arabidopsis thaliana is directed by blue light receptors called phototropins. Other photosensitive receptors in plants include phytochromes that sense red light and cryptochromes that sense blue light. Different organs of the plant may exhibit different phototropic reactions to different wavelengths of light. Stem tips exhibit positive phototropic reactions to blue light, while root tips exhibit negative phototropic reactions to blue light. Both root tips and most stem tips exhibit positive phototropism to red light. Cryptochromes are photoreceptors that absorb blue/UV-A light, and they help control the circadian rhythm in plants and the timing of flowering. Phytochromes are photoreceptors that sense red/far-red light, but they also absorb blue light; they can control flowering in adult plants and the germination of seeds, among other things. The combination of responses from phytochromes and cryptochromes allows the plant to respond to various kinds of light. Together, phytochromes and cryptochromes inhibit gravitropism in hypocotyls and contribute to phototropism. Gallery See also Scotobiology Cholodny–Went model References Bibliography External links Time lapse films, Plants-In-Motion Biology terminology Tropism Auxin action Articles containing video clips Light Sun
Phototropism
[ "Physics", "Biology" ]
2,061
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Light", "nan" ]
30,876,757
https://en.wikipedia.org/wiki/Survival%20function
The survival function is a function that gives the probability that a patient, device, or other object of interest will survive past a certain time. The survival function is also known as the survivor function or reliability function. The term reliability function is common in engineering while the term survival function is used in a broader range of applications, including human mortality. The survival function is the complementary cumulative distribution function of the lifetime. Sometimes complementary cumulative distribution functions are called survival functions in general. Definition Let the lifetime be a continuous random variable describing the time to failure. If has cumulative distribution function and probability density function on the interval , then the survival function or reliability function is: Examples of survival functions The graphs below show examples of hypothetical survival functions. The -axis is time. The -axis is the proportion of subjects surviving. The graphs show the probability that a subject will survive beyond time . For example, for survival function 1, the probability of surviving longer than months is . That is, 37% of subjects survive more than 2 months. For survival function 2, the probability of surviving longer than months is . That is, 97% of subjects survive more than 2 months. Median survival may be determined from the survival function: The median survival is the point where the survival function intersects the value . For example, for survival function 2, 50% of the subjects survive 3.72 months. Median survival is thus months. Median survival cannot always be determined from the graph alone. For example, in survival function 4, more than 50% of the subjects survive longer than the observation period of 10 months. The survival function is one of several ways to describe and display survival data. Another useful way to display data is a graph showing the distribution of survival times of subjects. Olkin, page 426, gives the following example of survival data. The number of hours between successive failures of an air-conditioning (AC) system were recorded. The time in hours, , between successive failures are 1, 3, 5, 7, 11, 11, 11, 12, 14, 14, 14, 16, 16, 20, 21, 23, 42, 47, 52, 62, 71, 71, 87, 90, 95, 120, 120, 225, 246 and 261. The mean time between failures is 59.6. The figure below shows the distribution of the time between failures. The blue tick marks beneath the graph are the actual hours between successive AC failures. In this example, a curve representing the exponential distribution overlays the distribution of AC failure times; the exponential distribution approximates the distribution of AC failure times. This particular exponential curve is specified by the parameter lambda, : . The distribution of failure times is the probability density function (PDF), since time can take any positive value. In equations, the PDF is specified as . If time can only take discrete values (such as 1 day, 2 days, and so on), the distribution of failure times is called the probability mass function. Most survival analysis methods assume that time can take any positive value, and is the PDF. If the time between observed AC failures is approximated using the exponential function, then the exponential curve gives the probability density function, , for AC failure times. Another useful way to display the survival data is a graph showing the cumulative failures up to each time point. 
These data may be displayed as either the cumulative number or the cumulative proportion of failures up to each time. The graph below shows the cumulative probability (or proportion) of failures at each time for the air conditioning system. The stairstep line in black shows the cumulative proportion of failures. For each step there is a blue tick at the bottom of the graph indicating an observed failure time. The smooth red line represents the exponential curve fitted to the observed data. A graph of the cumulative probability of failures up to each time point is called the cumulative distribution function (CDF). In survival analysis, the cumulative distribution function gives the probability that the survival time is less than or equal to a specific time, . Let be survival time, which is any positive number. A particular time is designated by the lower case letter . The cumulative distribution function of is the function where the right-hand side represents the probability that the random variable is less than or equal to . If time can take on any positive value, then the cumulative distribution function is the integral of the probability density function . For the air-conditioning example, the graph of the CDF below illustrates that the probability that the time to failure is less than or equal to 100 hours is , as estimated using the exponential curve fit to the data. An alternative to graphing the probability that the failure time is less than or equal to 100 hours is to graph the probability that the failure time is greater than 100 hours. The probability that the failure time is greater than 100 hours must be 1 minus the probability that the failure time is less than or equal to 100 hours, because total probability must sum to 1. This gives: This relationship generalizes to all failure times: This relationship is shown on the graphs below. The graph on the left is the cumulative distribution function, which is P(T ≤ t). The graph on the right is P(T > t) = 1 - P(T ≤ t). The graph on the right is the survival function, S(t). The fact that the S(t) = 1 – CDF is the reason that another name for the survival function is the complementary cumulative distribution function. Parametric survival functions In some cases, such as the air conditioner example, the distribution of survival times may be approximated well by a function such as the exponential distribution. Several distributions are commonly used in survival analysis, including the exponential, Weibull, gamma, normal, log-normal, and log-logistic. These distributions are defined by parameters. The normal (Gaussian) distribution, for example, is defined by the two parameters mean and standard deviation. Survival functions that are defined by parameters are said to be parametric. In the four survival function graphs shown above, the shape of the survival function is defined by a particular probability distribution: survival function 1 is defined by an exponential distribution, 2 is defined by a Weibull distribution, 3 is defined by a log-logistic distribution, and 4 is defined by another Weibull distribution. Exponential survival function For an exponential survival distribution, the probability of failure is the same in every time interval, no matter the age of the individual or device. This fact leads to the "memoryless" property of the exponential survival distribution: the age of a subject has no effect on the probability of failure in the next time interval. 
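The exponential fit to the air-conditioning failure data and the relation S(t) = 1 − F(t) discussed above can be reproduced in a few lines of plain Python. The helper functions below are written for this illustration only; the rate parameter is estimated as the reciprocal of the sample mean, which is the usual maximum-likelihood estimate for an exponential distribution, and the 100-hour threshold mirrors the example in the text.

```python
import math

# Hours between successive AC failures (the Olkin data quoted above).
times = [1, 3, 5, 7, 11, 11, 11, 12, 14, 14, 14, 16, 16, 20, 21, 23, 42, 47,
         52, 62, 71, 71, 87, 90, 95, 120, 120, 225, 246, 261]
mean_time = sum(times) / len(times)        # about 59.6 hours
lam = 1.0 / mean_time                      # rate parameter of the fitted exponential

def survival(t):
    """Exponential survival function S(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

def cdf(t):
    """Cumulative distribution function F(t) = 1 - S(t)."""
    return 1.0 - survival(t)

print(f"mean time between failures = {mean_time:.1f} h, lambda = {lam:.4f} per h")
print(f"P(T <= 100 h) = {cdf(100):.2f}")            # probability of failure within 100 h
print(f"P(T > 100 h)  = {survival(100):.2f}")       # complementary survival probability
print(f"median survival = ln(2)/lambda = {math.log(2) / lam:.1f} h")

# For the exponential distribution the integral of S(t) over all t equals
# 1/lambda, which reproduces the sample mean used for the fit.
print(f"integral of S(t) dt = 1/lambda = {1.0 / lam:.1f} h")
```

With these data the fitted rate is about 0.0168 per hour, giving a probability of roughly 0.81 that a failure occurs within 100 hours and a median time between failures of about 41 hours.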
The exponential may be a good model for the lifetime of a system where parts are replaced as they fail. It may also be useful for modeling survival of living organisms over short intervals. It is not likely to be a good model of the complete lifespan of a living organism. As Efron and Hastie (p. 134) note, "If human lifetimes were exponential there wouldn't be old or young people, just lucky or unlucky ones". Weibull survival function A key assumption of the exponential survival function is that the hazard rate is constant. In an example given above, the proportion of men dying each year was constant at 10%, meaning that the hazard rate was constant. The assumption of constant hazard may not be appropriate. For example, among most living organisms, the risk of death is greater in old age than in middle age – that is, the hazard rate increases with time. For some diseases, such as breast cancer, the risk of recurrence is lower after 5 years – that is, the hazard rate decreases with time. The Weibull distribution extends the exponential distribution to allow constant, increasing, or decreasing hazard rates. Other parametric survival functions There are several other parametric survival functions that may provide a better fit to a particular data set, including normal, lognormal, log-logistic, and gamma. The choice of parametric distribution for a particular application can be made using graphical methods or using formal tests of fit. These distributions and tests are described in textbooks on survival analysis. Lawless has extensive coverage of parametric models. Parametric survival functions are commonly used in manufacturing applications, in part because they enable estimation of the survival function beyond the observation period. However, appropriate use of parametric functions requires that data are well modeled by the chosen distribution. If an appropriate distribution is not available, or cannot be specified before a clinical trial or experiment, then non-parametric survival functions offer a useful alternative. Non-parametric survival functions A parametric model of survival may not be possible or desirable. In these situations, the most common method to model the survival function is the non-parametric Kaplan–Meier estimator. This estimator requires lifetime data. Periodic case (cohort) and death (and recovery) counts are statistically sufficient to make non-parametric maximum likelihood and least squares estimates of survival functions, without lifetime data. Properties Every survival function is monotonically decreasing, i.e. for all . It is a property of a random variable that maps a set of events, usually associated with mortality or failure of some system, onto time. The time, , represents some origin, typically the beginning of a study or the start of operation of some system. is commonly unity but can be less to represent the probability that the system fails immediately upon operation. Since the CDF is a right-continuous function, the survival function is also right-continuous. The survival function can be related to the probability density function and the hazard function So that The expected survival time The expected value of a random variable is defined as: where is the probability density function. Using the relation , the expected value formula may be modified: This may be further simplified by employing integration by parts: By definition, , meaning that the boundary terms are identically equal to zero. 
Therefore, we may conclude that the expected value is simply the integral of the survival function: See also Failure rate Frequency of exceedance Kaplan–Meier estimator Mean time to failure Residence time (statistics) Survivorship curve References Survival analysis Applied probability
Survival function
[ "Mathematics" ]
2,072
[ "Applied mathematics", "Applied probability" ]
30,876,867
https://en.wikipedia.org/wiki/Molecular%20cloning
Molecular cloning is a set of experimental methods in molecular biology that are used to assemble recombinant DNA molecules and to direct their replication within host organisms. The use of the word cloning refers to the fact that the method involves the replication of one molecule to produce a population of cells with identical DNA molecules. Molecular cloning generally uses DNA sequences from two different organisms: the species that is the source of the DNA to be cloned, and the species that will serve as the living host for replication of the recombinant DNA. Molecular cloning methods are central to many contemporary areas of modern biology and medicine. In a conventional molecular cloning experiment, the DNA to be cloned is obtained from an organism of interest, then treated with enzymes in the test tube to generate smaller DNA fragments. Subsequently, these fragments are then combined with vector DNA to generate recombinant DNA molecules. The recombinant DNA is then introduced into a host organism (typically an easy-to-grow, benign, laboratory strain of E. coli bacteria). This will generate a population of organisms in which recombinant DNA molecules are replicated along with the host DNA. Because they contain foreign DNA fragments, these are transgenic or genetically modified microorganisms (GMOs). This process takes advantage of the fact that a single bacterial cell can be induced to take up and replicate a single recombinant DNA molecule. This single cell can then be expanded exponentially to generate a large number of bacteria, each of which contains copies of the original recombinant molecule. Thus, both the resulting bacterial population, and the recombinant DNA molecule, are commonly referred to as "clones". Strictly speaking, recombinant DNA refers to DNA molecules, while molecular cloning refers to the experimental methods used to assemble them. The idea arose that different DNA sequences could be inserted into a plasmid and that these foreign sequences would be carried into bacteria and digested as part of the plasmid. That is, these plasmids could serve as cloning vectors to carry genes. Virtually any DNA sequence can be cloned and amplified, but there are some factors that might limit the success of the process. Examples of the DNA sequences that are difficult to clone are inverted repeats, origins of replication, centromeres and telomeres. There is also a lower chance of success when inserting large-sized DNA sequences. Inserts larger than 10kbp have very limited success, but bacteriophages such as bacteriophage λ can be modified to successfully insert a sequence up to 40 kbp. History Prior to the 1970s, the understanding of genetics and molecular biology was severely hampered by an inability to isolate and study individual genes from complex organisms. This changed dramatically with the advent of molecular cloning methods. Microbiologists, seeking to understand the molecular mechanisms through which bacteria restricted the growth of bacteriophage, isolated restriction endonucleases, enzymes that could cleave DNA molecules only when specific DNA sequences were encountered. They showed that restriction enzymes cleaved chromosome-length DNA molecules at specific locations, and that specific sections of the larger molecule could be purified by size fractionation. Using a second enzyme, DNA ligase, fragments generated by restriction enzymes could be joined in new combinations, termed recombinant DNA. 
By recombining DNA segments of interest with vector DNA, such as bacteriophage or plasmids, which naturally replicate inside bacteria, large quantities of purified recombinant DNA molecules could be produced in bacterial cultures. The first recombinant DNA molecules were generated and studied in 1972. Overview Molecular cloning takes advantage of the fact that the chemical structure of DNA is fundamentally the same in all living organisms. Therefore, if any segment of DNA from any organism is inserted into a DNA segment containing the molecular sequences required for DNA replication, and the resulting recombinant DNA is introduced into the organism from which the replication sequences were obtained, then the foreign DNA will be replicated along with the host cell's DNA in the transgenic organism. Molecular cloning is similar to PCR in that it permits the replication of a DNA sequence. The fundamental difference between the two methods is that molecular cloning involves replication of the DNA in a living microorganism, while PCR replicates DNA in an in vitro solution, free of living cells. In silico cloning and simulations Before actual cloning experiments are performed in the lab, most cloning experiments are planned in a computer, using specialized software. Although the detailed planning of the cloning can be done in any text editor, together with online utilities for e.g. PCR primer design, dedicated software exists for the purpose. Software for the purpose includes, for example, ApE (open source), DNAStrider (open source), Serial Cloner (gratis), Collagene (open source), and SnapGene (commercial). These programs allow users to simulate PCR reactions, restriction digests, ligations, etc., that is, all the steps described below. Steps In standard molecular cloning experiments, the cloning of any DNA fragment essentially involves seven steps: (1) Choice of host organism and cloning vector, (2) Preparation of vector DNA, (3) Preparation of DNA to be cloned, (4) Creation of recombinant DNA, (5) Introduction of recombinant DNA into host organism, (6) Selection of organisms containing recombinant DNA, (7) Screening for clones with desired DNA inserts and biological properties. Notably, the growing capacity and fidelity of DNA synthesis platforms allow for increasingly intricate designs in molecular engineering. These projects may include very long strands of novel DNA sequence and/or test entire libraries simultaneously, as opposed to individual sequences. These shifts introduce complexity that requires design to move away from the flat nucleotide-based representation and towards a higher level of abstraction. Examples of such tools are GenoCAD, Teselagen (free for academia) or GeneticConstructor (free for academics). Choice of host organism and cloning vector Although a very large number of host organisms and molecular cloning vectors are in use, the great majority of molecular cloning experiments begin with a laboratory strain of the bacterium E. coli (Escherichia coli) and a plasmid cloning vector. E. coli and plasmid vectors are in common use because they are technically sophisticated, versatile, widely available, and offer rapid growth of recombinant organisms with minimal equipment. If the DNA to be cloned is exceptionally large (hundreds of thousands to millions of base pairs), then a bacterial artificial chromosome or yeast artificial chromosome vector is often chosen. Specialized applications may call for specialized host-vector systems. 
For example, if the experimentalists wish to harvest a particular protein from the recombinant organism, then an expression vector is chosen that contains appropriate signals for transcription and translation in the desired host organism. Alternatively, if replication of the DNA in different species is desired (for example, transfer of DNA from bacteria to plants), then a multiple host range vector (also termed shuttle vector) may be selected. In practice, however, specialized molecular cloning experiments usually begin with cloning into a bacterial plasmid, followed by subcloning into a specialized vector. Whatever combination of host and vector are used, the vector almost always contains four DNA segments that are critically important to its function and experimental utility: DNA replication origin is necessary for the vector (and its linked recombinant sequences) to replicate inside the host organism one or more unique restriction endonuclease recognition sites to serve as sites where foreign DNA may be introduced a selectable genetic marker gene that can be used to enable the survival of cells that have taken up vector sequences a tag gene that can be used to screen for cells containing the foreign DNA Preparation of vector DNA The cloning vector is treated with a restriction endonuclease to cleave the DNA at the site where foreign DNA will be inserted. The restriction enzyme is chosen to generate a configuration at the cleavage site that is compatible with the ends of the foreign DNA (see DNA end). Typically, this is done by cleaving the vector DNA and foreign DNA with the same restriction enzyme or restriction endonuclease, for example EcoRI and this restriction enzyme was isolated from E.coli. Most modern vectors contain a variety of convenient cleavage sites that are unique within the vector molecule (so that the vector can only be cleaved at a single site) and are located within a gene (frequently beta-galactosidase) whose inactivation can be used to distinguish recombinant from non-recombinant organisms at a later step in the process. To improve the ratio of recombinant to non-recombinant organisms, the cleaved vector may be treated with an enzyme (alkaline phosphatase) that dephosphorylates the vector ends. Vector molecules with dephosphorylated ends are unable to replicate, and replication can only be restored if foreign DNA is integrated into the cleavage site. Preparation of DNA to be cloned For cloning of genomic DNA, the DNA to be cloned is extracted from the organism of interest. Virtually any tissue source can be used (even tissues from extinct animals), as long as the DNA is not extensively degraded. The DNA is then purified using simple methods to remove contaminating proteins (extraction with phenol), RNA (ribonuclease) and smaller molecules (precipitation and/or chromatography). Polymerase chain reaction (PCR) methods are often used for amplification of specific DNA or RNA (RT-PCR) sequences prior to molecular cloning. DNA for cloning experiments may also be obtained from RNA using reverse transcriptase (complementary DNA or cDNA cloning), or in the form of synthetic DNA (artificial gene synthesis). cDNA cloning is usually used to obtain clones representative of the mRNA population of the cells of interest, while synthetic DNA is used to obtain any precise sequence defined by the designer. 
Such a designed sequence may be required when moving genes across genetic codes (for example, from the mitochondria to the nucleus) or simply for increasing expression via codon optimization. The purified DNA is then treated with a restriction enzyme to generate fragments with ends capable of being linked to those of the vector. If necessary, short double-stranded segments of DNA (linkers) containing desired restriction sites may be added to create end structures that are compatible with the vector. Creation of recombinant DNA with DNA ligase The creation of recombinant DNA is in many ways the simplest step of the molecular cloning process. DNA prepared from the vector and foreign source are simply mixed together at appropriate concentrations and exposed to an enzyme (DNA ligase) that covalently links the ends together. This joining reaction is often termed ligation. The resulting DNA mixture containing randomly joined ends is then ready for introduction into the host organism. DNA ligase only recognizes and acts on the ends of linear DNA molecules, usually resulting in a complex mixture of DNA molecules with randomly joined ends. The desired products (vector DNA covalently linked to foreign DNA) will be present, but other sequences (e.g. foreign DNA linked to itself, vector DNA linked to itself and higher-order combinations of vector and foreign DNA) are also usually present. This complex mixture is sorted out in subsequent steps of the cloning process, after the DNA mixture is introduced into cells. Introduction of recombinant DNA into host organism The DNA mixture, previously manipulated in vitro, is moved back into a living cell, referred to as the host organism. The methods used to get DNA into cells are varied, and the name applied to this step in the molecular cloning process will often depend upon the experimental method that is chosen (e.g. transformation, transduction, transfection, electroporation). When microorganisms are able to take up and replicate DNA from their local environment, the process is termed transformation, and cells that are in a physiological state such that they can take up DNA are said to be competent. In mammalian cell culture, the analogous process of introducing DNA into cells is commonly termed transfection. Both transformation and transfection usually require preparation of the cells through a special growth regime and chemical treatment process that will vary with the specific species and cell types that are used. Electroporation uses high voltage electrical pulses to translocate DNA across the cell membrane (and cell wall, if present). In contrast, transduction involves the packaging of DNA into virus-derived particles, and using these virus-like particles to introduce the encapsulated DNA into the cell through a process resembling viral infection. Although electroporation and transduction are highly specialized methods, they may be the most efficient methods to move DNA into cells. Selection of organisms containing vector sequences Whichever method is used, the introduction of recombinant DNA into the chosen host organism is usually a low efficiency process; that is, only a small fraction of the cells will actually take up DNA. Experimental scientists deal with this issue through a step of artificial genetic selection, in which cells that have not taken up DNA are selectively killed, and only those cells that can actively replicate DNA containing the selectable marker gene encoded by the vector are able to survive. 
When bacterial cells are used as host organisms, the selectable marker is usually a gene that confers resistance to an antibiotic that would otherwise kill the cells, typically ampicillin. Cells harboring the plasmid will survive when exposed to the antibiotic, while those that have failed to take up plasmid sequences will die. When mammalian cells (e.g. human or mouse cells) are used, a similar strategy is used, except that the marker gene (in this case typically encoded as part of the kanMX cassette) confers resistance to the antibiotic Geneticin. Screening for clones with desired DNA inserts and biological properties Modern bacterial cloning vectors (e.g. pUC19 and later derivatives including the pGEM vectors) use the blue-white screening system to distinguish colonies (clones) of transgenic cells from those that contain the parental vector (i.e. vector DNA with no recombinant sequence inserted). In these vectors, foreign DNA is inserted into a sequence that encodes an essential part of beta-galactosidase, an enzyme whose activity results in formation of a blue-colored colony on the culture medium that is used for this work. Insertion of the foreign DNA into the beta-galactosidase coding sequence disables the function of the enzyme so that colonies containing transformed DNA remain colorless (white). Therefore, experimentalists are easily able to identify and conduct further studies on transgenic bacterial clones, while ignoring those that do not contain recombinant DNA. The total population of individual clones obtained in a molecular cloning experiment is often termed a DNA library. Libraries may be highly complex (as when cloning complete genomic DNA from an organism) or relatively simple (as when moving a previously cloned DNA fragment into a different plasmid), but it is almost always necessary to examine a number of different clones to be sure that the desired DNA construct is obtained. This may be accomplished through a very wide range of experimental methods, including the use of nucleic acid hybridizations, antibody probes, polymerase chain reaction, restriction fragment analysis and/or DNA sequencing. Applications Molecular cloning provides scientists with an essentially unlimited quantity of any individual DNA segments derived from any genome. This material can be used for a wide range of purposes, including those in both basic and applied biological science. A few of the more important applications are summarized here. Genome organization and gene expression Molecular cloning has led directly to the elucidation of the complete DNA sequence of the genomes of a very large number of species and to an exploration of genetic diversity within individual species, work that has been done mostly by determining the DNA sequence of large numbers of randomly cloned fragments of the genome, and assembling the overlapping sequences. At the level of individual genes, molecular clones are used to generate probes that are used for examining how genes are expressed, and how that expression is related to other processes in biology, including the metabolic environment, extracellular signals, development, learning, senescence and cell death. Cloned genes can also provide tools to examine the biological function and importance of individual genes, by allowing investigators to inactivate the genes, or make more subtle mutations using regional mutagenesis or site-directed mutagenesis. 
Genes cloned into expression vectors for functional cloning provide a means to screen for genes on the basis of the expressed protein's function. Production of recombinant proteins Obtaining the molecular clone of a gene can lead to the development of organisms that produce the protein product of the cloned genes, termed a recombinant protein. In practice, it is frequently more difficult to develop an organism that produces an active form of the recombinant protein in desirable quantities than it is to clone the gene. This is because the molecular signals for gene expression are complex and variable, and because protein folding, stability and transport can be very challenging. Many useful proteins are currently available as recombinant products. These include--(1) medically useful proteins whose administration can correct a defective or poorly expressed gene (e.g. recombinant factor VIII, a blood-clotting factor deficient in some forms of hemophilia, and recombinant insulin, used to treat some forms of diabetes), (2) proteins that can be administered to assist in a life-threatening emergency (e.g. tissue plasminogen activator, used to treat strokes), (3) recombinant subunit vaccines, in which a purified protein can be used to immunize patients against infectious diseases, without exposing them to the infectious agent itself (e.g. hepatitis B vaccine), and (4) recombinant proteins as standard material for diagnostic laboratory tests. Transgenic organisms Once characterized and manipulated to provide signals for appropriate expression, cloned genes may be inserted into organisms, generating transgenic organisms, also termed genetically modified organisms (GMOs). Although most GMOs are generated for purposes of basic biological research (see for example, transgenic mouse), a number of GMOs have been developed for commercial use, ranging from animals and plants that produce pharmaceuticals or other compounds (pharming), herbicide-resistant crop plants, and fluorescent tropical fish (GloFish) for home entertainment. Gene therapy Gene therapy involves supplying a functional gene to cells lacking that function, with the aim of correcting a genetic disorder or acquired disease. Gene therapy can be broadly divided into two categories. The first is alteration of germ cells, that is, sperm or eggs, which results in a permanent genetic change for the whole organism and subsequent generations. This "germ line gene therapy" is considered by many to be unethical in human beings. The second type of gene therapy, "somatic cell gene therapy", is analogous to an organ transplant. In this case, one or more specific tissues are targeted by direct treatment or by removal of the tissue, addition of the therapeutic gene or genes in the laboratory, and return of the treated cells to the patient. Clinical trials of somatic cell gene therapy began in the late 1990s, mostly for the treatment of cancers and blood, liver, and lung disorders. Despite a great deal of publicity and promises, the history of human gene therapy has been characterized by relatively limited success. The effect of introducing a gene into cells often promotes only partial and/or transient relief from the symptoms of the disease being treated. Some gene therapy trial patients have suffered adverse consequences of the treatment itself, including deaths. In some cases, the adverse effects result from disruption of essential genes within the patient's genome by insertional inactivation. 
In others, viral vectors used for gene therapy have been contaminated with infectious virus. Nevertheless, gene therapy is still held to be a promising future area of medicine, and is an area where there is a significant level of research and development activity. References Further reading External links Genetics techniques Molecular genetics Molecular biology techniques
Molecular cloning
[ "Chemistry", "Engineering", "Biology" ]
4,200
[ "Genetics techniques", "Genetic engineering", "Molecular genetics", "Molecular biology techniques", "Molecular biology" ]
29,439,514
https://en.wikipedia.org/wiki/Holometer
The Fermilab Holometer in Illinois is intended to be the world's most sensitive laser interferometer, surpassing the sensitivity of the GEO600 and LIGO systems, and theoretically able to detect holographic fluctuations in spacetime. According to the director of the project, the Holometer should be capable of detecting fluctuations in the light at the level of a single attometer, meeting or exceeding the sensitivity required to detect the smallest units in the universe, known as Planck units. Fermilab states: "Everyone is familiar these days with the blurry and pixelated images, or noisy sound transmission, associated with poor internet bandwidth. The Holometer seeks to detect the equivalent blurriness or noise in reality itself, associated with the ultimate frequency limit imposed by nature." Craig Hogan, a particle astrophysicist at Fermilab, states about the experiment, "What we’re looking for is when the lasers lose step with each other. We’re trying to detect the smallest unit in the universe. This is really great fun, a sort of old-fashioned physics experiment where you don’t know what the result will be." Experimental physicist Hartmut Grote of the Max Planck Institute in Germany states that although he is skeptical that the apparatus will successfully detect the holographic fluctuations, if the experiment is successful "it would be a very strong impact to one of the most open questions in fundamental physics. It would be the first proof that space-time, the fabric of the universe, is quantized." In 2014, the Holometer started collecting data that will help determine whether the universe fits the holographic principle. The hypothesis that holographic noise may be observed in this manner has been criticized on the grounds that the theoretical framework used to derive the noise violates Lorentz-invariance. Lorentz-invariance violation is, however, already very strongly constrained, an issue that the mathematical treatment addresses only unsatisfactorily. The Fermilab Holometer has also found uses beyond studying the holographic fluctuations of spacetime: it has placed constraints on the existence of high-frequency gravitational waves and primordial black holes. Experimental description The Holometer will consist of two 39 m arm-length power-recycled Michelson interferometers, similar to the LIGO instruments. The interferometers can be operated in two spatial configurations, termed "nested" and "back-to-back". According to Hogan's hypothesis, in the nested configuration the interferometers' beamsplitters should appear to wander in step with each other (that is, the wandering should be correlated); conversely, in the back-to-back configuration any wandering of the beamsplitters should be uncorrelated. The presence or absence of the correlated wandering effect in each configuration can be determined by cross-correlating the interferometers' outputs. The experiment started one year of data collection in August 2014. A paper about the project titled Now Broadcasting in Planck Definition by Craig Hogan ends with the statement "We don't know what we will find." A new result of the experiment released on December 3, 2015, after a year of data collection, has ruled out Hogan's theory of a pixelated universe to a high degree of statistical significance (4.6 sigma). The study found that space-time is not quantized at the scale being measured. 
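The detection principle described above, looking for a component that is correlated between the two interferometer outputs in the nested configuration but absent in the back-to-back configuration, can be illustrated with a toy numerical sketch. This is not the Holometer's actual analysis pipeline; the signal model, amplitude and sample count below are invented purely for illustration.

```python
import numpy as np

# Toy illustration of the cross-correlation idea: each detector sees its own
# instrumental noise; in the "nested" configuration both also see a small
# shared component, in "back-to-back" they do not.
rng = np.random.default_rng(0)
n_samples = 200_000        # hypothetical sample count
common_amplitude = 0.1     # hypothetical shared signal, in units of the noise

shared = common_amplitude * rng.standard_normal(n_samples)

def detector_pair(common):
    """Simulate two detector outputs, optionally containing a common component."""
    a = rng.standard_normal(n_samples)
    b = rng.standard_normal(n_samples)
    if common is not None:
        a, b = a + common, b + common
    return a, b

for label, common in [("nested (correlated)", shared),
                      ("back-to-back (uncorrelated)", None)]:
    x, y = detector_pair(common)
    # Normalised zero-lag cross-correlation; averaging over many samples beats
    # down the uncorrelated noise and exposes any shared component.
    rho = np.mean(x * y) / (np.std(x) * np.std(y))
    print(f"{label}: zero-lag correlation ~ {rho:.4f}")
```

With these made-up numbers the uncorrelated pair fluctuates around zero at roughly the 1/sqrt(N) ≈ 0.002 level, while the shared component stands out near 0.01, which is the sense in which long averaging times let a signal far below the single-sample noise floor be detected.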
References External links Fermilab Holometer Science and technology in the United States Experimental physics Experimental particle physics Fermilab 2014 inventions Fermilab experiments
Holometer
[ "Physics" ]
741
[ "Experimental physics", "Particle physics", "Experimental particle physics" ]
29,452,338
https://en.wikipedia.org/wiki/Orientational%20glass
In solid-state physics, an orientational glass is a molecular solid in which crystalline long-range order coexists with quenched disorder in some rotational degree of freedom. An orientational glass is either obtained by quenching a plastic crystal, (e.g. cyclohexane, levoglucosan), or it is a mixed crystal in which positional disorder causes additional disorder of molecular orientations, e.g. CN orientations in KCN:KBr. References Condensed matter physics Crystallography
Orientational glass
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
110
[ "Materials science stubs", "Phases of matter", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
29,453,694
https://en.wikipedia.org/wiki/Cobalt-chrome
Cobalt-chrome or cobalt-chromium (CoCr) is a metal alloy of cobalt and chromium. Cobalt-chrome has a very high specific strength and is commonly used in gas turbines, dental implants, and orthopedic implants. History Co-Cr alloy was first discovered by Elwood Haynes in the early 1900s by fusing cobalt and chromium. The alloy was first discovered with many other elements such as tungsten and molybdenum in it. Haynes reported his alloy was capable of resisting oxidation and corrosive fumes and exhibited no visible sign of tarnish even when subjecting the alloy to boiling nitric acid. Under the name Stellite, Co-Cr alloy has been used in various fields where high wear-resistance was needed including aerospace industry, cutlery, bearings, blades, etc. Co-Cr alloy started receiving more attention as its biomedical application was found. In the 20th century, the alloy was first used in medical tool manufacturing, and in 1960, the first Co-Cr prosthetic heart valve was implanted, which happened to last over 30 years showing its high wear-resistance. Recently, due to excellent resistant properties, biocompatibility, high melting points, and incredible strength at high temperatures, Co-Cr alloy is used for the manufacture of many artificial joints including hips and knees, dental partial bridge work, gas turbines, and many others. Synthesis The common Co-Cr alloy production requires the extraction of cobalt and chromium from cobalt oxide and chromium oxide ores. Both of the ores need to go through reduction process to obtain pure metals. Chromium usually goes through aluminothermic reduction technique, and pure cobalt can be achieved through many different ways depending on the characteristics of the specific ore. Pure metals are then fused together under vacuum either by electric arc or by induction melting. Due to the chemical reactivity of metals at high temperature, the process requires vacuum conditions or inert atmosphere to prevent oxygen uptake by the metal. ASTM F75, a Co-Cr-Mo alloy, is produced in an inert argon atmosphere by ejecting molten metals through a small nozzle that is immediately cooled to produce a fine powder of the alloy. However, synthesis of Co-Cr alloy through the method mentioned above is very expensive and difficult. Recently, in 2010, scientists at the University of Cambridge have produced the alloy through a novel electrochemical, solid-state reduction technique known as the FFC Cambridge Process which involves the reduction of an oxide precursor cathode in a molten chloride electrolyte. Properties Co-Cr alloys show high resistance to corrosion due to the spontaneous formation of a protective passive film composed of mostly Cr2O3, and minor amounts of cobalt and other metal oxides on the surface. CoCr has a melting point around . As its wide application in biomedical industry indicates, Co-Cr alloys are well known for their biocompatibility. Biocompatibility also depends on the film and how this oxidized surface interacts with physiological environment. Good mechanical properties that are similar to stainless steel are a result of a multiphase structure and precipitation of carbides, which increase the hardness of Co-Cr alloys tremendously. The hardness of Co-Cr alloys varies ranging 550-800 MPa, and tensile strength varies ranging 145-270 MPa. Moreover, tensile and fatigue strength increases radically as they are heat-treated. However, Co-Cr alloys tend to have low ductility, which can cause component fracture. 
This is a concern as the alloys are commonly used in hip replacements. In order to overcome the low ductility, nickel, carbon, and/or nitrogen are added. These elements stabilize the γ phase, which has better mechanical properties compared to other phases of Co-Cr alloys. Common types There are several Co-Cr alloys that are commonly produced and used in various fields. ASTM F75, ASTM F799, ASTM F1537 are Co-Cr-Mo alloys with very similar composition yet slightly different production processes, ASTM F90 is a Co-Cr-W-Ni alloy, and ASTM F562 is a Co-Ni-Cr-Mo-Ti alloy. Structure Depending on the percent composition of cobalt or chromium and the temperature, Co-Cr alloys show different structures. The σ phase, where the alloy contains approximately 60–75% chromium, tends to be brittle and subject to a fracture. FCC crystal structure is found in the γ phase, and the γ phase shows improved strength and ductility compared to the σ phase. FCC crystal structure is commonly found in cobalt rich alloys, while chromium rich alloys tend to have BCC crystal structure. The γ phase Co-Cr alloy can be converted into the ε phase at high pressures, which shows a HCP crystal structure. Uses Medical implants Co-Cr alloys are most commonly used to make artificial joints including knee and hip joints due to high wear-resistance and biocompatibility. Co-Cr alloys tend to be corrosion resistant, which reduces complication with the surrounding tissues when implanted, and chemically inert that they minimize the possibility of irritation, allergic reaction, and immune response. Co-Cr alloy has also been widely used in the manufacture of stent and other surgical implants as Co-Cr alloy demonstrates excellent biocompatibility with blood and soft tissues as well. The alloy composition used in orthopedic implants is described in industry standard ASTM-F75: mainly cobalt, with 27 to 30% chromium, 5 to 7% molybdenum, and upper limits on other important elements such as less than 1% each of manganese and silicon, less than 0.75% iron, less than 0.5% nickel, and very small amounts of carbon, nitrogen, tungsten, phosphorus, sulfur, boron, etc. Besides cobalt-chromium-molybdenum (CoCrMo), cobalt-nickel-chromium-molybdenum (CoNiCrMo) is also used for implants. The possible toxicity of released Ni ions from CoNiCr alloys and also their limited frictional properties are a matter of concern in using these alloys as articulating components. Thus, CoCrMo is usually the dominant alloy for total joint arthroplasty. Dental prosthetics Co-Cr alloy dentures and cast partial dentures have been commonly manufactured since 1929 due to lower cost and lower density compared to gold alloys; however, Co-Cr alloys tend to exhibit a higher modulus of elasticity and cyclic fatigue resistance, which are significant factors for dental prosthesis. The alloy is a commonly used as a metal framework for dental partials. A well known brand for this purpose is Vitallium. Industry Due to mechanical properties such as high resistance to corrosion and wear, Co-Cr alloys (e.g., Stellites) are used in making wind turbines, engine components, and many other industrial/mechanical components where high wear resistance is needed. Co-Cr alloy is also very commonly used in fashion industry to make jewellery, especially wedding bands. Hazards Metals released from Co-Cr alloy tools and prosthetics may cause allergic reactions and skin eczema. 
Prosthetics or any medical equipment with high nickel mass percentage Co-Cr alloy should be avoided due to low biocompatibility, as nickel is the most common metal sensitizer in the human body. See also Alacrite Hastelloy References Biomaterials Chromium alloys Cobalt alloys
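As a small illustration of the ASTM-F75 composition limits quoted in the Medical implants section above, the sketch below checks a candidate alloy composition against those ranges. The limit values are the ones given in the text; the example composition and the helper function are hypothetical.

```python
# Check of an alloy composition (wt%) against the ASTM F75 limits quoted above;
# cobalt is taken as the balance and is not checked explicitly.
F75_LIMITS = {
    "Cr": (27.0, 30.0),  # 27-30% chromium
    "Mo": (5.0, 7.0),    # 5-7% molybdenum
    "Mn": (0.0, 1.0),    # less than 1% manganese
    "Si": (0.0, 1.0),    # less than 1% silicon
    "Fe": (0.0, 0.75),   # less than 0.75% iron
    "Ni": (0.0, 0.5),    # less than 0.5% nickel
}

def check_f75(composition):
    """Return (element, value, within_range) for each limited element."""
    return [(el, composition.get(el, 0.0), lo <= composition.get(el, 0.0) <= hi)
            for el, (lo, hi) in F75_LIMITS.items()]

# Made-up cast lot for illustration only.
sample = {"Cr": 28.5, "Mo": 6.0, "Mn": 0.6, "Si": 0.7, "Fe": 0.4, "Ni": 0.2}
for element, value, ok in check_f75(sample):
    print(f"{element}: {value:.2f} wt% -> {'within' if ok else 'outside'} the F75 range")
```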
Cobalt-chrome
[ "Physics", "Chemistry", "Biology" ]
1,557
[ "Biomaterials", "Materials", "Alloys", "Medical technology", "Chromium alloys", "Matter", "Cobalt alloys" ]
23,327,939
https://en.wikipedia.org/wiki/Three-dimensional%20edge-matching%20puzzle
A three-dimensional edge-matching puzzle is a type of edge-matching puzzle or tiling puzzle involving tiling a three-dimensional area with (typically regular) polygonal pieces whose edges are distinguished with colors or patterns, in such a way that the edges of adjacent pieces match. Edge-matching puzzles are known to be NP-complete, and can be converted to and from equivalent jigsaw puzzles and polyomino packing puzzles. Three-dimensional edge-matching puzzles are not currently under direct U.S. patent protection, since the 1892 patent by E. L. Thurston has expired. Current examples of commercial three-dimensional edge-matching puzzles include the Dodek Duo, The Enigma, Mental Misery, and Kadon Enterprises' range of three-dimensional edge-matching puzzles. See also Edge-matching puzzle Domino tiling References External links Erich's 3-D Matching Puzzles Color- and Edge-Matching Polygons by Peter Esser Rob's puzzle page by Rob Stegmann More about edgematching Tiling puzzles NP-complete problems
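Because edge matching is NP-complete, brute-force search is the natural baseline, and a tiny instance makes the matching constraint concrete. The sketch below enumerates placements of four square tiles, each described by its four edge colours, in a 2×2 grid and keeps the arrangements in which all interior edges match; the tile set is invented for illustration, rotations are ignored for brevity, and the three-dimensional case simply adds more matching constraints per piece.

```python
from itertools import permutations

# Each square tile is (top, right, bottom, left) edge colours; hypothetical set.
TILES = [
    ("R", "G", "B", "Y"),
    ("B", "Y", "R", "G"),
    ("R", "G", "R", "G"),
    ("B", "G", "B", "Y"),
]

def interior_edges_match(grid):
    """grid is ((a, b), (c, d)); check the two vertical and two horizontal seams."""
    (a, b), (c, d) = grid
    return (a[1] == b[3] and c[1] == d[3]        # right edge of each left tile vs left edge of its neighbour
            and a[2] == c[0] and b[2] == d[0])   # bottom edge of each top tile vs top edge of the tile below

solutions = [((a, b), (c, d))
             for a, b, c, d in permutations(TILES, 4)
             if interior_edges_match(((a, b), (c, d)))]

print(f"{len(solutions)} matching arrangement(s) found")
for grid in solutions:
    print(grid)
```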
Three-dimensional edge-matching puzzle
[ "Physics", "Mathematics" ]
217
[ "Tessellation", "Recreational mathematics", "NP-complete problems", "Computational problems", "Tiling puzzles", "Mathematical problems", "Symmetry" ]
23,338,010
https://en.wikipedia.org/wiki/Butcher%20group
In mathematics, the Butcher group, named after the New Zealand mathematician John C. Butcher by , is an infinite-dimensional Lie group first introduced in numerical analysis to study solutions of non-linear ordinary differential equations by the Runge–Kutta method. It arose from an algebraic formalism involving rooted trees that provides formal power series solutions of the differential equation modeling the flow of a vector field. It was , prompted by the work of Sylvester on change of variables in differential calculus, who first noted that the derivatives of a composition of functions can be conveniently expressed in terms of rooted trees and their combinatorics. pointed out that the Butcher group is the group of characters of the Hopf algebra of rooted trees that had arisen independently in their own work on renormalization in quantum field theory and Connes' work with Moscovici on local index theorems. This Hopf algebra, often called the Connes–Kreimer algebra, is essentially equivalent to the Butcher group, since its dual can be identified with the universal enveloping algebra of the Lie algebra of the Butcher group. As they commented: Differentials and rooted trees A rooted tree is a graph with a distinguished node, called the root, in which every other node is connected to the root by a unique path. If the root of a tree t is removed and the nodes connected to the original node by a single bond are taken as new roots, the tree t breaks up into rooted trees t1, t2, ... Reversing this process a new tree t = [t1, t2, ...] can be constructed by joining the roots of the trees to a new common root. The number of nodes in a tree is denoted by |t|. A heap-ordering of a rooted tree t is an allocation of the numbers 1 through |t| to the nodes so that the numbers increase on any path going away from the root. Two heap orderings are equivalent, if there is an automorphism of rooted trees mapping one of them on the other. The number of equivalence classes of heap-orderings on a particular tree is denoted by α(t) and can be computed using the Butcher's formula: where St denotes the symmetry group of t and the tree factorial is defined recursively by with the tree factorial of an isolated root defined to be 1 The ordinary differential equation for the flow of a vector field on an open subset U of RN can be written where x(s) takes values in U, f is a smooth function from U to RN and x0 is the starting point of the flow at time s = 0. gave a method to compute the higher order derivatives x(m)(s) in terms of rooted trees. His formula can be conveniently expressed using the elementary differentials introduced by Butcher. These are defined inductively by With this notation giving the power series expansion As an example when N = 1, so that x and f are real-valued functions of a single real variable, the formula yields where the four terms correspond to the four rooted trees from left to right in Figure 3 above. In a single variable this formula is the same as Faà di Bruno's formula of 1855; however in several variables it has to be written more carefully in the form where the tree structure is crucial. Definition using Hopf algebra of rooted trees The Hopf algebra H of rooted trees was defined by in connection with Kreimer's previous work on renormalization in quantum field theory. It was later discovered that the Hopf algebra was the dual of a Hopf algebra defined earlier by in a different context. The characters of H, i.e. 
the homomorphisms of the underlying commutative algebra into R, form a group, called the Butcher group. It corresponds to the formal group structure discovered in numerical analysis by . The Hopf algebra of rooted trees H is defined to be the polynomial ring in the variables t, where t runs through rooted trees. Its comultiplication is defined by where the sum is over all proper rooted subtrees s of t; is the monomial given by the product the variables ti formed by the rooted trees that arise on erasing all the nodes of s and connected links from t. The number of such trees is denoted by n(t\s). Its counit is the homomorphism ε of H into R sending each variable t to zero. Its antipode S can be defined recursively by the formula The Butcher group is defined to be the set of algebra homomorphisms φ of H into R with group structure The inverse in the Butcher group is given by and the identity by the counit ε. Using complex coefficients in the construction of the Hopf algebra of rooted trees one obtains the complex Hopf algebra of rooted trees. Its C-valued characters form a group, called the complex Butcher group GC. The complex Butcher group GC is an infinite-dimensional complex Lie group which appears as a toy model in the of quantum field theories. Butcher series and Runge–Kutta method The non-linear ordinary differential equation can be solved approximately by the Runge–Kutta method. This iterative scheme requires an m x m matrix and a vector with m components. The scheme defines vectors xn by first finding a solution X1, ... , Xm of and then setting showed that the solution of the corresponding ordinary differential equations has the power series expansion where φj and φ are determined recursively by and The power series above are called B-series or Butcher series. The corresponding assignment φ is an element of the Butcher group. The homomorphism corresponding to the actual flow has Butcher showed that the Runge–Kutta method gives an nth order approximation of the actual flow provided that φ and Φ agree on all trees with n nodes or less. Moreover, showed that the homomorphisms defined by the Runge–Kutta method form a dense subgroup of the Butcher group: in fact he showed that, given a homomorphism φ', there is a Runge–Kutta homomorphism φ agreeing with φ' to order n; and that if given homomorphims φ and φ' corresponding to Runge–Kutta data (A, b) and (A' , b' ), the product homomorphism corresponds to the data proved that the Butcher group acts naturally on the functions f. Indeed, setting they proved that Lie algebra showed that associated with the Butcher group G is an infinite-dimensional Lie algebra. The existence of this Lie algebra is predicted by a theorem of : the commutativity and natural grading on H implies that the graded dual H* can be identified with the universal enveloping algebra of a Lie algebra . Connes and Kreimer explicitly identify with a space of derivations θ of H into R, i.e. linear maps such that the formal tangent space of G at the identity ε. This forms a Lie algebra with Lie bracket is generated by the derivations θt defined by for each rooted tree t. The infinite-dimensional Lie algebra from and the Lie algebra L(G) of the Butcher group as an infinite-dimensional Lie group are not the same. The Lie algebra L(G) can be identified with the Lie algebra of all derivations in the dual of H (i.e. the space of all linear maps from H to R), whereas is obtained from the graded dual. Hence turns out to be a (strictly smaller) Lie subalgebra of L(G). 
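Butcher's criterion quoted above, that a Runge–Kutta method is of order n exactly when its elementary weights agree with those of the flow on all rooted trees with at most n nodes, can be checked numerically for a concrete method. The sketch below does this for the classical fourth-order scheme; it hard-codes the eight elementary weights for the trees with up to four nodes instead of generating trees, so it illustrates only the order conditions, not the full Hopf-algebraic machinery.

```python
import numpy as np

# Classical 4th-order Runge-Kutta tableau (A, b), with c_i = sum_j A_ij.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = A.sum(axis=1)

# Elementary weight Phi(t) and target value 1/t! (reciprocal tree factorial)
# for each rooted tree with at most four nodes.
conditions = [
    ("t1 = single node",  b.sum(),            1.0),
    ("t2 = [t1]",         b @ c,              1 / 2),
    ("t3 = [t1, t1]",     b @ c**2,           1 / 3),
    ("t4 = [[t1]]",       b @ (A @ c),        1 / 6),
    ("t5 = [t1, t1, t1]", b @ c**3,           1 / 4),
    ("t6 = [t1, [t1]]",   b @ (c * (A @ c)),  1 / 8),
    ("t7 = [[t1, t1]]",   b @ (A @ c**2),     1 / 12),
    ("t8 = [[[t1]]]",     b @ (A @ (A @ c)),  1 / 24),
]

for tree, phi, target in conditions:
    status = "ok" if np.isclose(phi, target) else "FAIL"
    print(f"{tree:18s} Phi = {phi:.6f}  1/t! = {target:.6f}  {status}")
```

All eight conditions hold, which is precisely the statement that the Runge–Kutta homomorphism φ agrees with the flow homomorphism Φ on every tree with at most four nodes, i.e. that the classical method has order four.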
Renormalization provided a general context for using Hopf algebraic methods to give a simple mathematical formulation of renormalization in quantum field theory. Renormalization was interpreted as Birkhoff factorization of loops in the character group of the associated Hopf algebra. The models considered by had Hopf algebra H and character group G, the Butcher group. has given an account of this renormalization process in terms of Runge–Kutta data. In this simplified setting, a renormalizable model has two pieces of input data: a set of Feynman rules given by an algebra homomorphism Φ of H into the algebra V of Laurent series in z with poles of finite order; a renormalization scheme given by a linear operator R on V such that R satisfies the Rota–Baxter identity and the image of R – id lies in the algebra V+ of power series in z. Note that R satisfies the Rota–Baxter identity if and only if id – R does. An important example is the minimal subtraction scheme In addition there is a projection P of H onto the augmentation ideal ker ε given by To define the renormalized Feynman rules, note that the antipode S satisfies so that The renormalized Feynman rules are given by a homomorphism of H into V obtained by twisting the homomorphism Φ • S. The homomorphism is uniquely specified by Because of the precise form of Δ, this gives a recursive formula for . For the minimal subtraction scheme, this process can be interpreted in terms of Birkhoff factorization in the complex Butcher group. Φ can be regarded as a map γ of the unit circle into the complexification GC of G (maps into C instead of R). As such it has a Birkhoff factorization where γ+ is holomorphic on the interior of the closed unit disk and γ– is holomorphic on its complement in the Riemann sphere C with γ–(∞) = 1. The loop γ+ corresponds to the renormalized homomorphism. The evaluation at z = 0 of γ+ or the renormalized homomorphism gives the dimensionally regularized values for each rooted tree. In example, the Feynman rules depend on additional parameter μ, a "unit of mass". showed that so that γμ– is independent of μ. The complex Butcher group comes with a natural one-parameter group λw of automorphisms, dual to that on H for w ≠ 0 in C. The loops γμ and λw · γμ have the same negative part and, for t real, defines a one-parameter subgroup of the complex Butcher group GC called the renormalization group flow (RG). Its infinitesimal generator β is an element of the Lie algebra of GC and is defined by It is called the beta function of the model. In any given model, there is usually a finite-dimensional space of complex coupling constants. The complex Butcher group acts by diffeomorphisms on this space. In particular the renormalization group defines a flow on the space of coupling constants, with the beta function giving the corresponding vector field. More general models in quantum field theory require rooted trees to be replaced by Feynman diagrams with vertices decorated by symbols from a finite index set. Connes and Kreimer have also defined Hopf algebras in this setting and have shown how they can be used to systematize standard computations in renormalization theory. Example has given a "toy model" involving dimensional regularization for H and the algebra V. If c is a positive integer and qμ = q / μ is a dimensionless constant, Feynman rules can be defined recursively by where z = 1 – D/2 is the regularization parameter. 
These integrals can be computed explicitly in terms of the Gamma function using the formula In particular Taking the renormalization scheme R of minimal subtraction, the renormalized quantities are polynomials in when evaluated at z = 0. Notes References (also in Volume 3 of the Collected Works of Cayley, pages 242–246) , Chapter 14. John C. Butcher: "B-Series : Algebraic Analysis of Numerical Methods", Springer(SSCM, volume 55), ISBN 978-3030709556 (April, 2021). Combinatorics Numerical analysis Quantum field theory Renormalization group Hopf algebras
Butcher group
[ "Physics", "Mathematics" ]
2,462
[ "Quantum field theory", "Physical phenomena", "Discrete mathematics", "Critical phenomena", "Quantum mechanics", "Renormalization group", "Combinatorics", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Statistical mechanics", "Approximations" ]
23,338,454
https://en.wikipedia.org/wiki/Wet%20sulfuric%20acid%20process
The wet sulfuric acid process (WSA process) is a gas desulfurization process. Since the Danish company Haldor Topsoe introduced this technology in 1987, it has been recognized as a process for recovering sulfur from various process gases in the form of commercial quality sulfuric acid (H2SO4) with the simultaneous production of high-pressure steam. The WSA process can be applied in all industries where sulfur removal presents an issue. The wet catalysis process is used for processing sulfur-containing streams, such as: H2S gas from e.g. amine gas treating unit Off-gas from sour water stripper (SWS) gas Off-gas from Rectisol Spent acid from an alkylation unit Claus process tail gas Heavy residue or petcoke-fired utility boiler off-gas Boiler flue gases from various processes SNOX flue gas desulfurization Metallurgical process gas Production of sulfuric acid The process The main reactions in the WSA process Combustion: 2 H2S + 3 O2 → 2 H2O + 2 SO2 (-1036 kJ/mol) Oxidation: 2 SO2 + O2 → 2 SO3 (-198 kJ/mol) [in the presence of a vanadium (V) oxide catalyst] Hydration: SO3 + H2O → H2SO4 (g) (-101 kJ/mol) Condensation: H2SO4 (g) → H2SO4 (l) (-90 kJ/mol) The energy released by the above-mentioned reactions is used for steam production. Approximately 2–3 tons of high-pressure steam are produced per ton of acid. Industrial applications Industries where WSA process plants are installed: Refinery and petrochemical industry Metallurgy industry Coal-based industry (coking and gasification) Power industry Viscose industry Sulfuric acid industry WSA for gasifiers The acid gas coming from a Rectisol-, Selexol-, amine gas treating or similar unit installed after the gasifier contains H2S, COS and hydrocarbons in addition to CO2. These gases were previously vented to the atmosphere, but now the acid gas requires purification in order not to affect the environment with SO2 emission. The WSA process provides a high sulfur recovery and recovers heat for steam production. The heat recovery rate is high, and the cooling water consumption is low, which saves resources. Examples of WSA process for gasification Example 1: Feed-gas flow: 14,000 Nm3/h Composition [vol %]: 5.8% H2S, 1.2% COS, 9.7% HC and 77.4% CO2 SOx concentration [vol %]: 1.58% H2SO4 production: 106 MTPD Steam production: 53 ton/h Cooling water consumption: 8 m3/ton acid (delta T = 10 °C) Fuel consumption: 1,000 Nm3/h (LHV = 2,821 kcal/Nm3) Example 2: A sulfur plant in China will be built in connection with an ammonia plant, producing 500 kilotons/year of ammonia for fertilizer production Spent acid regeneration and production of sulfuric acid The WSA process can also be used for the production of sulfuric acid from sulfur burning or regeneration of the spent acid from e.g., alkylation plants. Wet catalysis processes differ from other contact sulfuric acid processes in that the feed gas contains excess moisture when it comes into contact with the catalyst. The sulfur trioxide formed by catalytic oxidation of the sulfur dioxide reacts instantly with the moisture to produce sulfuric acid in the vapor phase to an extent determined by the temperature. Liquid acid is subsequently formed by condensation of the sulfuric acid vapor and not by the absorption of the sulfur trioxide in concentrated sulfuric acid, as in contact processes based on dry gases. The concentration of the product acid depends on the H2O:SO3 ratio in the catalytically converted gases and on the condensation temperature. 
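The overall heat release per mole of product acid follows by adding the four step enthalpies listed above (taking half of the combustion and oxidation values, which are written per two moles of sulfur). The short sketch below does that sum and converts it into a rough steam figure; the 2.8 MJ/kg assumed for raising high-pressure steam from boiler feedwater, and the assumption that essentially all of the reaction heat is recovered, are illustrative round numbers rather than process data.

```python
# Step enthalpies from the reaction list above, expressed per mole of sulfur.
dH_combustion   = -1036 / 2   # 2 H2S + 3 O2 -> 2 H2O + 2 SO2
dH_oxidation    = -198 / 2    # 2 SO2 + O2 -> 2 SO3
dH_hydration    = -101        # SO3 + H2O -> H2SO4 (g)
dH_condensation = -90         # H2SO4 (g) -> H2SO4 (l)

heat_per_mol = -(dH_combustion + dH_oxidation + dH_hydration + dH_condensation)  # kJ released per mol H2SO4
molar_mass_h2so4 = 98.08                                                          # g/mol

moles_per_tonne = 1e6 / molar_mass_h2so4                  # mol H2SO4 per tonne of acid
heat_per_tonne = heat_per_mol * moles_per_tonne / 1000    # MJ per tonne of acid

steam_enthalpy = 2.8                                      # MJ/kg assumed to raise HP steam (illustrative)
steam_per_tonne = heat_per_tonne / steam_enthalpy / 1000  # tonnes of steam per tonne of acid

print(f"Heat released: ~{heat_per_mol:.0f} kJ/mol H2SO4, ~{heat_per_tonne:.0f} MJ per tonne of acid")
print(f"Steam raised (upper bound): ~{steam_per_tonne:.1f} t per tonne of acid")
```

The result, roughly 2.9 t of steam per tonne of acid, is consistent with the 2–3 t figure quoted above; a real plant recovers somewhat less because part of the heat leaves with the stack gas and the cooling water.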
The combustion gases are cooled to the converter inlet temperature of about 420–440 °C. Processing these wet gases in a conventional cold-gas contact process (DCDA) plant would necessitate cooling and drying of the gas to remove all moisture. Therefore, the WSA process is, in most cases, a more cost-efficient way of producing sulfuric acid. About 80% to 85% of the world’s sulfur production is used to manufacture sulfuric acid. 50% of the world’s sulfuric acid production is used in fertilizer production, mainly to convert phosphates to water-soluble forms, according to the Fertilizer Manual published jointly by the United Nations Industrial Development Organization (UNIDO) and the International Fertilizer Development Center. References Oil refining Desulfurization Air pollution control systems
Wet sulfuric acid process
[ "Chemistry" ]
1,009
[ "Desulfurization", "Petroleum technology", "Oil refining", "Separation processes" ]
23,340,612
https://en.wikipedia.org/wiki/N-ary%20group
In mathematics, and in particular universal algebra, the concept of an n-ary group (also called n-group or multiary group) is a generalization of the concept of a group to a set G with an n-ary operation instead of a binary operation. By an operation is meant any map f: Gn → G from the n-th Cartesian power of G to G. The axioms for an n-ary group are defined in such a way that they reduce to those of a group in the case n = 2. The earliest work on these structures was done in 1904 by Kasner and in 1928 by Dörnte; the first systematic account of (what were then called) polyadic groups was given in 1940 by Emil Leon Post in a famous 143-page paper in the Transactions of the American Mathematical Society. Axioms Associativity The easiest axiom to generalize is the associative law. Ternary associativity is the polynomial identity (abc)de = a(bcd)e = ab(cde), i.e. the equality of the three possible bracketings of the string abcde in which any three consecutive symbols are bracketed. (Here it is understood that the equations hold for all choices of elements a, b, c, d, e in G.) In general, n-ary associativity is the equality of the n possible bracketings of a string consisting of 2n − 1 distinct symbols with any n consecutive symbols bracketed. A set G which is closed under an associative n-ary operation is called an n-ary semigroup. A set G which is closed under any (not necessarily associative) n-ary operation is called an n-ary groupoid. Inverses / unique solutions The inverse axiom is generalized as follows: in the case of binary operations the existence of an inverse means ax = b has a unique solution for x, and likewise xa = b has a unique solution. In the ternary case we generalize this to xab = c, axb = c, and abx = c each having unique solutions, and the n-ary case follows a similar pattern of existence of unique solutions and we get an n-ary quasigroup. Definition of n-ary group An n-ary group is an n-ary semigroup which is also an n-ary quasigroup. Structure of n-ary groups Post gave a structure theorem for an n-ary group in terms of an associated group. Identity / neutral elements In the case n = 2, there can be zero or one identity elements: the empty set is a 2-ary group, since the empty set is both a semigroup and a quasigroup, and every inhabited 2-ary group is a group. In n-ary groups for n ≥ 3 there can be zero, one, or many identity elements. An n-ary groupoid (G, f) with f(a1, a2, ..., an) = a1 ◦ a2 ◦ ... ◦ an, where (G, ◦) is a group, is called reducible or derived from the group (G, ◦). In 1928 Dörnte published the first main results: An n-ary groupoid which is reducible is an n-ary group; however, for all n > 2 there exist inhabited n-ary groups which are not reducible. In some n-ary groups there exists an element e (called an identity or neutral element) such that any string of n elements consisting of all e's, apart from one place, is mapped to the element at that place. E.g., in a quaternary group with identity e, eeae = a for every a. An n-ary group containing a neutral element is reducible. Thus, an n-ary group that is not reducible does not contain such elements. There exist n-ary groups with more than one neutral element. If the set of all neutral elements of an n-ary group is non-empty it forms an n-ary subgroup. Some authors include an identity in the definition of an n-ary group but as mentioned above such n-ary operations are just repeated binary operations. Groups with intrinsically n-ary operations do not have an identity element. Weaker axioms The axioms of associativity and unique solutions in the definition of an n-ary group are stronger than they need to be. 
Under the assumption of associativity it suffices to postulate the existence of the solution of equations with the unknown at the start or end of the string, or at one place other than the ends; e.g., in the 6-ary case, xabcde = f and abcdex = f, or an expression like abxcde = f. Then it can be proved that the equation has a unique solution for x in any place in the string. The associativity axiom can also be given in a weaker form. Example The following is an example of a three element ternary group, one of four such groups (n,m)-group The concept of an n-ary group can be further generalized to that of an (n,m)-group, also known as a vector valued group, which is a set G with a map f: Gn → Gm where n > m, subject to similar axioms as for an n-ary group except that the result of the map is a word consisting of m letters instead of a single letter. So an (n,1)-group is an n-ary group. (n,m)-groups were introduced by G. Ĉupona in 1983. See also Universal algebra References Further reading S. A. Rusakov: Some applications of n-ary group theory, (Russian), Belaruskaya navuka, Minsk 1998. Algebraic structures
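A concrete way to see the axioms is to take the reducible ternary operation f(a, b, c) = a + b + c (mod 3) on the set {0, 1, 2}, derived from the group Z/3, and verify ternary associativity and the unique-solution property by brute force. The sketch below does exactly that; the construction and helper names are chosen for illustration only.

```python
from itertools import product

G = range(3)
f = lambda a, b, c: (a + b + c) % 3   # ternary operation derived from (Z/3, +)

# Ternary associativity: the three bracketings of a five-letter string agree.
associative = all(
    f(f(a, b, c), d, e) == f(a, f(b, c, d), e) == f(a, b, f(c, d, e))
    for a, b, c, d, e in product(G, repeat=5)
)

def unique_solutions(position):
    """Check that f(..., x, ...) = g has exactly one solution x in the given slot."""
    for u, v in product(G, repeat=2):
        for g in G:
            count = 0
            for x in G:
                args = [u, v]
                args.insert(position, x)
                if f(*args) == g:
                    count += 1
            if count != 1:
                return False
    return True

print("ternary associative:", associative)
print("unique solutions in every position:", all(unique_solutions(p) for p in range(3)))
```

Both checks print True, so (G, f) is a ternary group; since it is derived from a binary group it is reducible, in line with the discussion of reducible n-ary groups above.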
N-ary group
[ "Mathematics" ]
1,084
[ "Mathematical objects", "Mathematical structures", "Algebraic structures" ]
23,340,871
https://en.wikipedia.org/wiki/Anvis%20Group
Anvis (Antivibrationssystems) is a global business group that specialises in antivibration systems to decouple vibrating parts in motor vehicles. The company's head office is located in Steinau an der Straße, Germany. Company Anvis Group GmbH operates 13 business sites around the world. Its 2,500 employees generate annual turnover of more than €300 million. In 2013, the company was acquired by the Japanese Sumitomo Riko group from the Sumitomo Group. The product range of the Anvis Group was significantly expanded as a result of the simultaneous acquisition of Dytech, an Italian company that makes special fluid-handling products. Together, the business group generates turnover of nearly €3 billion. The customer structure comprises Original Equipment Manufacturer (OEM) like the Volkswagen AG, BMW, Daimler AG, Audi, Renault-Nissan, Mazda, Toyota and General Motors and first-tier companies (the direct suppliers to OEMs) like Continental AG. Anvis Industry SAS, a subsidiary of the Anvis Group, also supplies solutions and products to the railroad industry, among others. The AVS Holding 2 GmbH owns 100 percent of shares in Anvis Netherlands B.V., which holds all foreign subsidiaries. Products The Anvis Group's products include vibration control solutions for automotive and industrial applications. The foundation of the products is its processing of natural rubber, synthetic elastomers and plastics. The Anvis Group holds more than 160 patents in this area. History Kléber-Colombes, a company specialising in elastomers and expansion joints, was established in 1910. Nearly 50 years later, in 1956, the automotive supplier Woco Industrietechnik GmbH went into operation. In 1980, Woco entered the antivibration sector. Its work soon resulted in the first automotive parts that used a rubber-metal combination that could reduce vibration and driving noise in cars. The basis of these parts was innovations in the area of natural-rubber, plastic and metal adhesion technology. Michelin acquired the Kléber Group in 1982, established CMP Kléber Industry and also worked in the area of rubber processing for vehicles. At the turn of the millennium, the knowledge acquired in this work flowed into a Joint Venture set up by Woco and Michelin. Under the name Woco Michelin AVS, the company made its products at its base in Bad Soden-Salmünster and locations around the world. In 2007, Olaf Hahn joined with the financial investor Arques Industries AG to acquire the joint venture and gave the company the name that it uses today, Anvis Group GmbH. Olaf Hahn became managing director, a position that he still holds today. With the complete takeover by the Japanese Sumitomo Riko Company Limited, a strategic investor from the relevant industry acquired the company in 2013. References Companies based in Hesse Automotive companies of Germany Mechanical vibrations
Anvis Group
[ "Physics", "Engineering" ]
594
[ "Structural engineering", "Mechanics", "Mechanical vibrations" ]
46,323,202
https://en.wikipedia.org/wiki/Register%20%28air%20and%20heating%29
A register is a grille with moving parts, capable of being opened and closed and the air flow directed, which is part of a building's heating, ventilation, and air conditioning (HVAC) system. The placement and size of registers is critical to HVAC efficiency. Register dampers are also important, and can serve a safety function. Terminology A grille is a perforated cover for an air duct (used for heating, cooling, or ventilation, or a combination thereof). Grilles sometimes have louvers which allow the flow of air to be directed. A register differs from a grille in that a damper is included. However, in practice, the terms grille, register, and return are often used interchangeably, and care must be taken to determine the meaning of the term used. Register size and placement Placement of registers is key in creating an efficient HVAC system. Usually, a register is placed near a window or door, which is where the greatest heat/cooling loss occurs. In contrast, returns (grilled ducts which suck air back into the HVAC system for heating or cooling) are usually placed in the wall or ceiling nearest the center of the building. Generally, in rooms where it is critical to maintain a constant temperature two registers (one placed near the ceiling to deliver cold air, and one placed in the floor to deliver hot air) and two returns (one high, one low) will be used. HVAC systems generally have one register and one return per room. Registers vary in size with the heating and cooling requirements of the room. If a register is too small, the HVAC system will need to push air through the ducts at a faster rate in order to achieve the desired heating or cooling. This can create rushing sounds which can disturb occupants or interfere with conversation or work (such as sound recording). The velocity of air through a register is usually kept low enough so that it is masked by background noise. (Higher ambient levels of background noise, such as those in restaurants, allow higher air velocities.) On the other hand, air velocity must be high enough to achieve the desired temperature. Registers are a critical part of the HVAC system. If not properly installed and tightly connected to the ductwork, air will spill around the register and greatly reduce the HVAC system's efficiency. Ideally, a room will have both heating and cooling registers. In practice, cost considerations usually require that heating and cooling be provided by the same register. In such cases, heating most often takes precedence over cooling, and registers are usually found close to the floor. For heating purposes, a floor register is preferred. This is because hot air rises, and as it cools it falls. This creates good air circulation in a room, and helps to maintain a more even temperature as hot and cold air is mixed more thoroughly. Floor registers generally have a grille strong enough for a human being to walk on without damaging the grille. It is rare to find a floor register installed less than from the corner of a room. When a floor register is not practical or desired, a wall register is used. The correct placement of wall heating registers is critical. Generally, the heating register will be directly across from an exterior window. The hot air from the register will mix with the cold air coming off the window, cool, and drop to the floor—creating good air circulation. However, the hot air must be pushed from the register with enough force (or "throw") so that it will cross the room and reach the window. 
If there is too little throw, the hot air will stop moving partway across the room, the cold air from the window will not be heated (creating the feeling of a cool draft), and air circulation will suffer. Register dampers A register's damper provides a critical function. Primarily, the damper allows the amount of hot or cool air entering a room to be controlled, providing for more accurate control over room temperature. Dampers also allow air to be shut off in unused rooms, improving the efficiency of the HVAC system. Dampers can also help adjust a HVAC system for seasonal use. During winter months, for example, an air conditioning register can be closed to prevent cold air from being pulled from the room. This allows the hot air to mix more completely with the cold air in the room, improving the efficiency of the HVAC system. (The return should be efficient enough to draw off the cooler air.) Some registers, particularly those in commercial buildings or institutions which house large numbers of people (such as hotels or hospitals) have a fire damper attached to them. This damper automatically senses smoke or extreme heat, and shuts the register closed so that fire and smoke do not travel throughout the building via the HVAC system. References Bibliography Heating, ventilation, and air conditioning Mechanical engineering
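The sizing trade-off described above, a register small enough to keep face velocity (and hence throw) up but large enough to keep air noise down, reduces to the relation free area = airflow / face velocity. The sketch below applies it; the airflow figure and the velocity band are illustrative assumptions, not design standards.

```python
# Required register free area for a given airflow at a target face velocity.
# Illustrative numbers: 120 m^3/h of supply air, and a face-velocity band
# assumed here to be roughly 1.5-2.5 m/s (quiet, but with useful throw).
airflow_m3_per_h = 120.0
v_low, v_high = 1.5, 2.5              # m/s, assumed acceptable band

airflow_m3_per_s = airflow_m3_per_h / 3600.0
area_max_cm2 = airflow_m3_per_s / v_low * 1e4    # largest register that still keeps throw
area_min_cm2 = airflow_m3_per_s / v_high * 1e4   # smallest register that stays quiet

print(f"Free area between ~{area_min_cm2:.0f} and ~{area_max_cm2:.0f} cm^2")
# A nominal 10 cm x 20 cm register with about 70% free area (~140 cm^2) would fall in this band.
```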
Register (air and heating)
[ "Physics", "Engineering" ]
986
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
46,324,663
https://en.wikipedia.org/wiki/C23H21FN2O
The molecular formula C23H21FN2O (molar mass: 360.42 g/mol, exact mass: 360.1638 u) may refer to: FUBIMINA (also known as BIM-2201, BZ-2201 and FTHJ) THJ-2201 Molecular formulas
C23H21FN2O
[ "Physics", "Chemistry" ]
84
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
46,328,125
https://en.wikipedia.org/wiki/Ganoderma%20sessile
Ganoderma sessile is a species of polypore fungus in the Ganodermataceae family. There is taxonomic uncertainty with this fungus since its circumscription in 1902. This wood decay fungus is found commonly in Eastern North America, and is associated with declining or dead hardwoods. Taxonomy Murrill described 17 new Ganoderma species in his treatises of North American polypores, including for example, G. oregonense, G. sessile, G. tsugae, G. tuberculosum and G. zonatum. Most notably and controversial was the typification of Ganoderma sessile, which was described from various hardwoods only in the United States. The specific epithet "sessile" comes from the sessile (without typical stem) nature of this species when found growing in a natural setting. Ganoderma sessile was distinguished based on a sessile fruiting habit, common on hardwood substrates and occasionally having a reduced, eccentric or "wanting" stipe. In 1908, Atkinson considered G. tsugae and G. sessile as synonyms of G. lucidum, but erected the species G. subperforatum from a single collection in Ohio on the basis of having “smooth” spores. Although he did not recognize the genus Ganoderma, but rather kept taxa in the genus, Polyporus, Overholts considered G. sessile as a synonym of the European G. lucidum. In a 1920 report on Polyporaceae of North America, Murrill conceded that G. sessile was closely related to the European G. lucidum. Approximately a decade later, Haddow considered G. sessile a unique taxon, but suggested Atkinson's G. subperforatum was a synonym of G. sessile, on the basis of the "smooth" spores the original basis of G. subperforatum when earlier named by Atkinson in 1908. Until this point, all identifications of Ganoderma taxa were based on fruiting body morphology, geography, host, and spore characters. In 1948 and then amended in 1965, Nobles characterized the cultural characteristics of numerous wood-inhabiting hymenomycetes, including Ganoderma taxa. Her work laid the foundation for culture-based identifications in this group of fungi. Nobles recognized that there were differences in cultural characteristics between G. oregonense, G. sessile, and G. tsugae. Although Nobles recognized G. lucidum in her 1948 publication as a correct name for the taxon from North American isolates that produce numerous broadly ovoid to elongate chlamydospores (12–21 x 7.5–10.5 μm), she corrected this misnomer in 1968 by amending the name to G. sessile. Others agreed with Haddow's distinction between G. lucidum and G. sessile on the basis of smooth spores, but synonymized G. sessile with G. resinaceum, a previously described European taxon. Others demonstrated the similarity in culture morphology and that vegetative compatibility was successful between the North American taxon recognized as ‘G. lucidum’ and the European G. resinaceum. In the monograph of North American Polypores written in 1986, which is still the only comprehensive treatise on this group of fungi unique for North America, the authors did not recognize G. sessile, but rather the five species present in the U.S.: G. colossum (Fr.) C.F. Baker (current name: Tomophagus colossus (Fr.) Murrill), G. curtisii, G. lucidum, G. oregonense, and G. tsugae. Molecular taxonomy In a multilocus phylogeny, the authors revealed that the global diversity of the laccate Ganoderma species included three highly supported major lineages that separated G. oregonense/G. tsugae from G. zonatum and from G. curtisii/G. 
sessile, and these lineages were not correlated to geographical separation. These results agree with several of the earlier works focusing mostly on morphology, geography and host preference showing genetic affinity of G. resinaceum and G. sessile, but with statistical support separating the European and North American taxa. Also, Ganoderma curtisii and G. sessile were separated with high levels of statistical support, although there was not enough information to say they were from distinct lineages. Lastly, G. sessile was not sister to G. lucidum. The phylogeny supported G. tsugae and G. oregonense as sister taxa to the European taxon G. lucdium sensu stricto. Description Fruiting bodies annual and sessile (without a stipe) or pseudostipitate (very small stipe). Fruiting bodies found growing on trunks or root flares of living or dead hardwood trees. Mature fruiting bodies are laccate and reddish-brown, often with a wrinkled margin if dry. Fruiting bodies are shelf-like if on stumps or overlapping clusters of fan-shaped (flabelliform) fruiting bodies if growing from underground roots, and range in size of in diameter. Hymenium white, bruising brown, and poroid with irregular pores that can range in shape from circular to angular. The context tissue is cream colored and can be thin to thick and on average the same length as the tubes. Black resinous deposits are never found embedded in the context tissue, but concentric zones are often found. Spores appear smooth, or nearly so, due to the fine (thin) echinulations from the endosporium. The spores can be used to differentiate the species from other common Eastern North American species such as Ganoderma curtisii (Berk.) Murrill. Elliptical to obovate to obpyriform chlamydospores formed in vegetative mycelium, and are abundant in cultures. Distribution Very common taxon, being found in practically every state East of the Rocky Mountains within the United States. Uses For centuries, laccate (varnished or polished) Ganoderma species have been used in traditional Chinese medicine. These species are often sold as G. lucidum', although genetic testing has shown that traditional Chinese medicine uses multiple species, such as G. lingzhi, G. multipileum, and G. sichuanense. References External links Ganoderma sessile images at Mushroom Observer Fungi described in 1902 Fungi of North America Fungal plant pathogens and diseases sessile Fungus species
Ganoderma sessile
[ "Biology" ]
1,389
[ "Fungi", "Fungus species" ]
46,332,876
https://en.wikipedia.org/wiki/Interleukin-1%20receptor%20associated%20kinase
The interleukin-1 receptor (IL-1R) associated kinase (IRAK) family plays a crucial role in the protective response to pathogens introduced into the human body by inducing acute inflammation followed by additional adaptive immune responses. IRAKs are essential components of the Interleukin-1 receptor signaling pathway and some Toll-like receptor signaling pathways. Toll-like receptors (TLRs) detect microorganisms by recognizing specific pathogen-associated molecular patterns (PAMPs) and IL-1R family members respond to interleukin-1 (IL-1) family cytokines. These receptors initiate an intracellular signaling cascade through adaptor proteins, primarily MyD88. This is followed by the activation of IRAKs. TLRs and IL-1R members have a highly conserved amino acid sequence in their cytoplasmic domain called the Toll/Interleukin-1 (TIR) domain. The elicitation of different TLRs/IL-1Rs results in similar signaling cascades due to their homologous TIR motif leading to the activation of mitogen-activated protein kinases (MAPKs) and the IκB kinase (IKK) complex, which initiates a nuclear factor-κB (NF-κB) and AP-1-dependent transcriptional response of pro-inflammatory genes. Understanding the key players and their roles in the TLR/IL-1R pathway is important because mutations that cause abnormal regulation of Toll/IL-1R signaling lead to a variety of acute inflammatory and autoimmune diseases. IRAKs are membrane proximal putative serine-threonine kinases. Four IRAK family members have been described in humans: IRAK1, IRAK2, IRAKM, and IRAK4. Two are active kinases, IRAK-1 and IRAK-4, and two are inactive, IRAK-2 and IRAK-M, but all regulate the nuclear factor-κB (NF-κB) and mitogen-activated protein kinase (MAPK) pathways. Some special/significant features of each IRAK family member: There is some evidence that IRAK-1 functions in regulating other signaling cascades leading to NF-κB activation. One signaling pathway in particular, that of nerve growth factor (NGF), may depend on the function of IRAK-1 for its activation and for cell survival. IRAK-2 has 4 isoforms IRAK-2a, IRAK-2b, IRAK-2c, and IRAK-2d. The latter two have negative feedback in the TLR signaling pathways. IRAK-2a and IRAK-2b positively activate the NF-κB/TLR pathway upon LPS stimulation. IRAK-M is specific to monomyeloic cells (monocytes and macrophages) while the other IRAKs are ubiquitously expressed. IRAK-M negatively regulates TLR signaling by inhibiting the IRAK-4/IRAK-1 complex. The most recently described IRAK family member, IRAK-4, has been found to be critical for the recruitment of IRAK-1 and for its activation/degradation. IL-1 stimulation recruits IRAK-4 to the IL-1R complex, initiating the Toll/IL-1 receptor signaling cascade upstream of the other IRAKs, so the deletion of IRAK-1 does not abolish the activation of NF-κB and mitogen-activated protein kinase pathways. Discovery IRAKs were first identified in 1994 by Michael Martin and colleagues when they successfully co-precipitated a protein kinase with type I interleukin-1 receptors (IL-1RI) from human T cells. They speculated that this kinase was the link between the T cell's transmembrane IL-1 receptor and the cytosolic signalling pathway's downstream components. The name “IRAK” came from Zhaodan Cao and colleagues in 1995. The DNA sequence analysis of IRAK's domains revealed many conserved amino acids with the serine/threonine specific protein kinase Pelle in Drosophila, which functions downstream of a Toll receptor. 
Cao's lab confirmed the kinase's activity as necessarily associated with the IL-1 receptor by immunoprecipitating the IL-1 receptors from different cell types treated with or without IL-1. Even cells without over-expressed IL-1 receptors showed kinase activity when exposed to IL-1, and were able to co-precipitate a protein kinase with endogenous IL-1 receptors. Thus the human IL-1 receptor's accessory protein was named Interleukin-1 Receptor-Associated Kinase. In 1997, MyD88 was identified as the cytosolic protein that recruits IRAKs to the cytosolic domains of IL-1 receptors, mediating IL-1's signal transduction to the cytosolic signal cascade. Subsequent studies associated IRAKs with multiple signalling pathways triggered by interleukin, and specified multiple IRAK types. Structure Functional domains All IRAK family members are multidomain proteins consisting of a conserved N-terminal Death Domain (DD) and a central kinase domain (KD). The DD is a protein interaction motif that is important for interacting with other signaling molecules such as the adaptor protein MyD88 and other IRAK members. The KD is responsible for the kinase activity of IRAK proteins and consists of 12 subdomains. All IRAK KDs have an ATP binding pocket with an invariable lysine residue in subdomain II; however, only IRAK-1 and IRAK-4 have an aspartate residue in the catalytic site of subdomain VI, which is thought to be critical for kinase activity. It is thought that IRAK-2 and IRAK-M are catalytically inactive because they lack this aspartate residue in the KD. The C-terminal domain does not seem to show much similarity between IRAK family members. The C-terminal domain is important for the interaction with the signaling molecule TRAF6. IRAK-1 contains three TRAF6 interaction motifs, IRAK-2 contains two and IRAK-M contains one. IRAK-1 contains a region that is rich in serine, proline, and threonine (proST). It is thought that IRAK-1 undergoes hyperphosphorylation in this region. The proST region also contains two proline (P), glutamic acid (E), serine (S) and threonine (T)-rich (PEST) sequences that are thought to promote the degradation of IRAK-1. Role in immune signaling Interleukin-1 receptor signaling Interleukin-1 receptors (IL-1Rs) are cytokine receptors that transduce an intracellular signaling cascade in response to the binding of the inflammatory cytokine interleukin-1 (IL-1). This signaling cascade results in the initiation of transcription of certain genes involved in inflammation. Because IL-1Rs do not possess intrinsic kinase activity, they rely on the recruitment of adaptor molecules, such as IRAKs, to transduce their signals. IL-1 binding to the IL-1R complex triggers the recruitment of the adaptor molecule MyD88 through interactions with the TIR domain. MyD88 brings IRAK-4 to the receptor complex. Preformed complexes of the adaptor molecule Tollip and IRAK-1 are also recruited to the receptor complex, allowing IRAK-1 to bind MyD88. IRAK-1 binding to MyD88 brings it into close proximity with IRAK-4 so that IRAK-4 can phosphorylate and activate IRAK-1. Once phosphorylated, IRAK-1 recruits the adaptor protein TNF receptor associated factor 6 (TRAF6) and the IRAK-1-TRAF6 complex dissociates from the IL-1R complex. The IRAK-1-TRAF6 complex interacts with a pre-existing complex at the plasma membrane consisting of TGF-β activated kinase 1 (TAK1), and two TAK binding proteins, TAB1 and TAB2. TAK1 is a mitogen-activated protein kinase kinase kinase (MAPKKK). 
This interaction leads to the phosphorylation of TAB2 and TAK1, which then translocate to the cytosol with TRAF6 and TAB1. IRAK-1 remains at the membrane and is targeted for degradation by ubiquitination. Once the TAK1-TRAF6-TAB1-TAB2 complex is in the cytosol, ubiquitination of TRAF6 triggers the activation of TAK1 kinase activity. TAK1 can then activate two transcription pathways, the nuclear factor-κB (NF-κB) pathway and the mitogen-activated protein kinase (MAPK) pathway. To activate the NF-κB pathway, TAK1 phosphorylates the IκB kinase (IKK) complex, which subsequently phosphorylates the NF-κB inhibitor, IκB, targeting it for degradation by the proteasome. Once IκB is removed, the NF-κB proteins p65 and p50 are free to translocate into the nucleus and activate transcription of proinflammatory genes. To activate the MAPK pathway, TAK1 phosphorylates MAPK kinase (MKK) 3/4/6, which then phosphorylate members of the MAPK family, c-Jun N-terminal kinase (JNK) and p38. Phosphorylated JNK/p38 can then translocate into the nucleus and phosphorylate and activate transcription factors such as c-Fos and c-Jun. Toll-like receptor signaling Toll-like receptors (TLRs) are important innate immune receptors that recognize pathogen associated molecular patterns (PAMPs) and initiate the appropriate immune response to eliminate a particular pathogen. PAMPs are conserved motifs associated with microorganisms that are not found in host cells, such as bacterial lipopolysaccharide (LPS), viral double-stranded RNA, etc. TLRs are similar to IL-1Rs in that they do not possess intrinsic kinase activity and require adaptor molecules to relay their signals. Stimulation of TLRs can also result in NF-κB and MAPK mediated transcription, similar to the IL-1R signaling pathway. It has been shown that IRAK-1 is essential for TLR7 and TLR9 interferon (IFN) induction. TLR7 and TLR9 in plasmacytoid dendritic cells (pDCs) recognize viral nucleic acids and trigger the production of interferon-α (IFN-α), an important cytokine for inducing an antiviral state in host cells. TLR7 and TLR9 mediated IFN-α induction requires the formation of a complex consisting of MyD88, TRAF6 and the interferon regulatory factor 7 (IRF7). IRF7 is a transcription factor that translocates into the nucleus when activated and initiates transcription of IFN-α. IRAK-1 was shown to directly phosphorylate IRF7 in vitro and the kinase activity of IRAK-1 was shown to be essential for IRF7 transcriptional activation. It was subsequently shown that IRAK-1 is required for the activation of interferon regulatory factor 5 (IRF5). IRF5 is another transcription factor that induces IFN production following stimulation of TLR7, TLR8 and TLR9 by specific viruses. In order to be activated, IRF5 must be polyubiquitinated by TRAF6. It has been shown that TRAF6-mediated ubiquitination of IRF5 is dependent on the kinase activity of IRAK-1. IRAK-1 has also been shown to play a critical role in TLR4 interleukin-10 (IL-10) induction. TLR4 recognizes bacterial LPS and triggers the transcription of IL-10, a cytokine regulating the inflammatory response. IL-10 transcription is activated by signal transducer and activator of transcription 3 (STAT3). IRAK-1 forms a complex with STAT3 and the IL-10 promoter element in the nucleus and is required for STAT3 phosphorylation and activation of IL-10 transcription. IRAK-2 plays an important role in TLR-mediated NF-κB activation. Knocking down IRAK-2 has been shown to impair NF-κB activation by TLR3, TLR4 and TLR8. 
The mechanism of how IRAK-2 functions is still unknown; however, IRAK-2 has been shown to interact with a TIR adaptor protein that does not bind to IRAK-1, called Mal/TIRAP. Mal/TIRAP has been specifically implicated in TLR2 and TLR4 mediated NF-κB signaling. In addition, it has been shown that IRAK-2 is recruited to the TLR3 receptor. IRAK-2 is the only IRAK family member that is known to play a role in TLR3 signaling. One of the most distinct features of IRAK-M is that it is a negative regulator of TLR signaling to prevent excessive inflammation. It is thought that IRAK-M enhances the binding of MyD88 to IRAK-1 and IRAK-4, preventing IRAK-1 from dissociating from the receptor complex and inducing downstream NF-κB and MAPK signaling. It has also been shown that IRAK-M negatively regulates the alternative NF-κB pathway in TLR2 signaling. The alternative NF-κB pathway is predominantly triggered by CD40, the lymphotoxin β receptor (LTβR), and the B-cell activating receptor belonging to the TNF family (BAFF receptor). The alternative NF-κB pathway involves the activation of NF-κB-inducing kinase (NIK) and subsequent phosphorylation of the transcription factors p100/RelB in an IKKα-dependent mechanism. It was observed that IRAK-M knockout resulted in increased induction of the alternative NF-κB pathway but not the classical pathway. The mechanism by which IRAK-M inhibits NF-κB signaling is still unknown. IRAK-4 is an essential component of MyD88 mediated signaling pathways and is therefore critical for both IL-1R and TLR signaling. MyD88 acts as a scaffold protein for the interaction between IRAK-1 and IRAK-4, allowing IRAK-4 to phosphorylate IRAK-1, leading to autophosphorylation and activation of IRAK-1 [1,2]. IRAK-4 is critical for IL-1R and TLR NF-κB and MAPK signaling pathways as well as TLR7/9 MyD88-mediated interferon activation. Role in disease Interleukin 1 is a cytokine that acts locally and systemically in the innate immune system. IL-1α and IL-1β are known for causing inflammation, but can also cause induction of other proinflammatory cytokines, and fever. Because IRAKs are a crucial step in the IL-1 receptor signalling pathway, deficiencies or over-expression of IRAKs can cause suboptimal or overactive cellular responses to IL-1α and IL-1β. Thus Interleukin-1 Receptor Associated Kinases are promising therapeutic targets for autoimmune-, immunodeficiency-, and cancer-related disorders. Cancer Inflammation signalling is known to be a major factor in many cancer types, and an inflammatory microenvironment is a key aspect of human tumours. IL-1β, which activates the inflammatory signalling pathway containing IRAKs, is directly involved in tumour cell growth, angiogenesis, invasion, and metastasis. In tumour cells containing the L265P MyD88 mutant, protein-signalling complexes spontaneously assemble, activating IRAK-4's kinase activity and promoting inflammation and growth independent of Interleukin-1 signalling. IRAK-4 inhibiting drugs are thus a potential therapeutic treatment for lymphoid malignancies with the L265P MyD88 mutation, especially in Waldenström's Macroglobulinaemia, in which BTK and IRAK1/4 inhibitors have shown promising but unconfirmed results. In 2013, Garrett Rhyasen and his colleagues at the University of Cincinnati studied the contribution of active IRAK-1 and IRAK-4 in human myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML). They found that IRAK1 knockout induced apoptosis and impaired leukemic progenitor activity. 
They also established that IRAK4, while required for the proliferation of human hematologic malignancies, is not required for the pathogenesis of MDS/AML. Further testing of IRAK-inhibitory therapy could prove essential to cancer therapy development. Autoimmune Disorders Autoimmune disorders such as multiple sclerosis, rheumatoid arthritis, lupus and psoriasis are caused by dysregulation of the innate immune system that induces chronic inflammation. In most cases, IRAK-1 and IRAK-4 are suspected to be the most effective targets for inhibitory drugs, as their functions are integral to the cytokine pathways that induce chronic inflammation. Mutations in the gene for IRAK-M have been identified as contributors to early onset asthma. Compromised IRAK-M leads to overproduction of inflammatory cytokines in the lungs, eventually triggering T cell mediated allergic reactions and exacerbation of asthma symptoms. Researchers have proposed that increasing IRAK-M function in these individuals may moderate asthma symptoms. References EC 2.7.11 Immunology
Interleukin-1 receptor associated kinase
[ "Biology" ]
3,799
[ "Immunology" ]
41,688,269
https://en.wikipedia.org/wiki/List%20of%20dimensionless%20quantities
This is a list of well-known dimensionless quantities illustrating their variety of forms and applications. The tables also include pure numbers, dimensionless ratios, or dimensionless physical constants; these topics are discussed in the article. Biology and medicine Chemistry Physics Physical constants Fluids and heat transfer Solids Optics Other Mathematics and statistics Geography, geology and geophysics Sport Other fields References Bibliography
List of dimensionless quantities
[ "Physics", "Mathematics" ]
76
[ "Quantity", "Physical quantities", "Dimensionless quantities" ]
41,691,180
https://en.wikipedia.org/wiki/Floating%20reedbeds
Floating reedbeds are artificial or natural systems consisting of buoyancy and reeds. Plants including rice and wheat can be cultivated on floating reedbeds. The primary purpose of artificial floating reedbeds is to improve water quality through biofiltration, preventing algal blooms through denitrification and plant nutrient uptake, with a secondary benefit of habitat provision. Modern floating reedbeds are increasingly being used by local government and land managers to improve water quality at source, reducing pollutants in surface water bodies and providing biodiversity habitat. Examples include Gold Coast City Council in Australia. Artificial floating reedbeds are commonly anchored to the shoreline or bottom of a water body, to ensure the system does not float away in a storm event or create a hazard. Buoyancy in artificial floating reedbeds is commonly provided by polyethylene or polyurethane flotation foam, or polyethylene or PVC plastic containing air voids. Growth media includes coconut fibre, mats made of polyester or recycled PET bottles, synthetic geotechnical mat, open cell polyurethane foam, jute, soil and sand. Additional elements may be added such as activated carbon, zeolites, and materials that accumulate pollutants. References Environmental engineering Water treatment
Floating reedbeds
[ "Chemistry", "Engineering", "Environmental_science" ]
259
[ "Water treatment", "Chemical engineering", "Water pollution", "Civil engineering", "Environmental engineering", "Water technology" ]
47,803,999
https://en.wikipedia.org/wiki/Mean%20dimension
In mathematics, the mean (topological) dimension of a topological dynamical system is a non-negative extended real number that is a measure of the complexity of the system. Mean dimension was first introduced in 1999 by Gromov. Shortly afterwards it was developed and studied systematically by Lindenstrauss and Weiss. In particular they proved the following key fact: a system with finite topological entropy has zero mean dimension. For various topological dynamical systems with infinite topological entropy, the mean dimension can be calculated or at least bounded from below and above. This allows mean dimension to be used to distinguish between systems with infinite topological entropy. Mean dimension is also related to the problem of embedding topological dynamical systems in shift spaces (over Euclidean cubes). General definition A topological dynamical system $(X,f)$ consists of a compact Hausdorff topological space $X$ and a continuous self-map $f\colon X\to X$. Let $\mathcal{O}$ denote the collection of open finite covers of $X$. For $\alpha\in\mathcal{O}$ define its order by $\operatorname{ord}(\alpha)=\max_{x\in X}\sum_{U\in\alpha}1_U(x)-1$. An open finite cover $\beta$ refines $\alpha$, denoted $\beta\succ\alpha$, if for every $V\in\beta$, there is $U\in\alpha$ so that $V\subseteq U$. Let $D(\alpha)=\min_{\beta\succ\alpha}\operatorname{ord}(\beta)$. Note that in terms of this definition the Lebesgue covering dimension is defined by $\dim(X)=\sup_{\alpha\in\mathcal{O}}D(\alpha)$. Let $\alpha,\beta$ be open finite covers of $X$. The join of $\alpha$ and $\beta$ is the open finite cover $\alpha\vee\beta$ by all sets of the form $A\cap B$ where $A\in\alpha$, $B\in\beta$. Similarly one can define the join of any finite collection of open covers of $X$. The mean dimension is the non-negative extended real number: $\operatorname{mdim}(X,f)=\sup_{\alpha\in\mathcal{O}}\lim_{n\to\infty}\frac{D(\alpha^{n})}{n}$, where $\alpha^{n}=\alpha\vee f^{-1}\alpha\vee\cdots\vee f^{-(n-1)}\alpha$. Definition in the metric case If the compact Hausdorff topological space $X$ is metrizable and $d$ is a compatible metric, an equivalent definition can be given. For $\varepsilon>0$, let $\operatorname{Widim}_{\varepsilon}(X,d)$ be the minimal non-negative integer $n$, such that there exists an open finite cover of $X$ by sets of diameter less than $\varepsilon$ such that any $n+2$ distinct sets from this cover have empty intersection. Note that in terms of this definition the Lebesgue covering dimension is defined by $\dim(X)=\lim_{\varepsilon\to 0}\operatorname{Widim}_{\varepsilon}(X,d)$. Let $d_{n}(x,y)=\max_{0\le i\le n-1}d(f^{i}(x),f^{i}(y))$. The mean dimension is the non-negative extended real number: $\operatorname{mdim}(X,d)=\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\varepsilon}(X,d_{n})}{n}$. Properties Mean dimension is an invariant of topological dynamical systems taking values in $[0,\infty]$. If the Lebesgue covering dimension of the system is finite then its mean dimension vanishes, i.e. $\operatorname{mdim}(X,f)=0$. If the topological entropy of the system is finite then its mean dimension vanishes, i.e. $\operatorname{mdim}(X,f)=0$. Example Let $X=([0,1]^{d})^{\mathbb{Z}}$. Let $f\colon X\to X$ be the shift homeomorphism $(\ldots,x_{-1},x_{0},x_{1},\ldots)\mapsto(\ldots,x_{0},x_{1},x_{2},\ldots)$, then $\operatorname{mdim}(X,f)=d$. See also Dimension theory Topological entropy Universal spaces (in topology and topological dynamics) References External links What is Mean Dimension? Entropy and information Topological dynamics
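The notion of the order of a cover, on which the definitions above are built, can be made concrete with a small worked instance. The cover chosen here is an illustrative choice, not taken from the source:

```latex
% Order of a small open cover of the interval X = [0,1] (illustrative example)
\[
\alpha = \{\, [0,0.6),\ (0.4,1] \,\}, \qquad
\operatorname{ord}(\alpha) = \max_{x\in[0,1]} \sum_{U\in\alpha} 1_U(x) - 1 = 2 - 1 = 1 .
\]
\[
\text{Since } [0,1] \text{ is connected, any finite open cover refining } \alpha
\text{ contains two overlapping sets, so } D(\alpha) = 1,
\text{ consistent with } \dim([0,1]) = \sup_{\alpha} D(\alpha) = 1 .
\]
```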
Mean dimension
[ "Physics", "Mathematics" ]
479
[ "Physical quantities", "Entropy and information", "Entropy", "Topology", "Topological dynamics", "Dynamical systems" ]
47,805,407
https://en.wikipedia.org/wiki/Frequency-dependent%20negative%20resistor
A frequency-dependent negative resistor (FDNR) is a circuit element that exhibits a purely real negative resistance −1/(ω²kC) that decreases in magnitude at a rate of −40 dB per decade. The element is used in the implementation of low-pass active filters modeled from ladder filters. The element is usually implemented from a generalized impedance converter (GIC) or gyrator. The impedance of an FDNR is Z(s) = 1/(s²kC), or −1/(ω²kC) when s = jω. The definition and application of frequency-dependent negative resistors is discussed in Temes & LaPatra, Chen and Wait, Huelsman & Korn. The technique is attributed to L. T. Bruton. Application If all the impedances (including the source and load impedances) of a passive ladder filter are divided by sk, the transfer function is not changed. The effect of this division is to transform resistors into capacitors, inductors into resistors and capacitors into FDNRs. The purpose of this transformation is to eliminate inductors, which are often problematic components. This technique is especially useful when all the capacitors are grounded. If the technique is applied to capacitors that are not grounded, the resulting FDNRs are floating (neither end is grounded), which, in practice, can be difficult to stabilize. The resulting circuit has two problems. Practical FDNRs require a DC path to ground. The DC transfer function has a value of (R6)/(R1 + R6). The transformed ladder filter realizes the DC transfer gain as the ratio of two capacitors. In the ideal case this is valid, but in the practical case there is always some, usually unpredictable, finite resistance across the capacitors, so that the DC performance of the transformed ladder is unpredictable. Ra and Rb are added to the circuit to mitigate these problems. If Rb/(Ra + Rb + L3/k + L5/k) = (R6)/(R1 + R6), then the DC gain of the transformed circuit is the same as that of the predecessor circuit. Finally, if Ra and Rb are large with respect to the other resistors, there is little effect on the filter's transition band and high-frequency behavior. Implementation Wait gives the circuit shown to the right as suitable for a grounded FDNR. References Analog circuits Linear filters
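A small numerical sketch can make the element transformation and the resulting FDNR impedance concrete. The component values and the scaling constant k below are arbitrary illustrative choices, not taken from the article:

```python
import numpy as np

# Bruton transformation sketch: dividing every impedance of an RLC ladder by s*k
# maps a resistor R to a capacitor of value k/R, an inductor L to a resistor L/k,
# and a capacitor C to an FDNR whose impedance is 1/(s^2*k*C) = -1/(w^2*k*C) on the jw axis.
k = 1e-4                          # scaling constant (arbitrary choice for this example)
R, L, C = 1.0e3, 10e-3, 100e-9    # hypothetical ladder element values

C_from_R = k / R                  # farads
R_from_L = L / k                  # ohms
D_value = k * C                   # the FDNR "D" parameter, Z = 1/(s**2 * D)

f = np.logspace(2, 6, 5)          # 100 Hz ... 1 MHz
w = 2 * np.pi * f
Z_fdnr = -1.0 / (w**2 * D_value)  # purely real, negative, falling in magnitude with frequency
for fi, zi in zip(f, Z_fdnr):
    print(f"{fi:10.0f} Hz  {zi:14.3e} ohm")
```

With these values the printed impedance magnitude falls by a factor of 100 for every tenfold increase in frequency, which is the −40 dB per decade behaviour described above.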
Frequency-dependent negative resistor
[ "Engineering" ]
496
[ "Analog circuits", "Electronic engineering" ]
26,223,965
https://en.wikipedia.org/wiki/Impingement%20filter
An impingement filter can be used to purify a polluted fluid, be it gas or liquid. The impingement filter acts by inducing the fluid to change direction so that suspended particles adhere to the filter medium. In many cases this filter medium is designed to contain apertures of a specific size which capture the impurities in the fluid. The gas or liquid, freed of its impurities, is permitted free passage through the medium. Common examples of impingement filters are the air filters, fuel filters and oil filters used in cars, trucks, etc. References Measuring Smokes and Rating Efficiencies of Industrial Air Filters Gas cleaning and cooling Filters
Impingement filter
[ "Chemistry", "Engineering" ]
132
[ "Chemical equipment", "Filtration", "Filters" ]
26,226,561
https://en.wikipedia.org/wiki/Toxic%20equivalency%20factor
Toxic equivalency factor (TEF) expresses the toxicity of dioxins, furans and PCBs in terms of the most toxic form of dioxin, 2,3,7,8-TCDD. The toxicity of the individual congeners may vary by orders of magnitude. With the TEFs, the toxicity of a mixture of dioxins and dioxin-like compounds can be expressed in a single number – the toxic equivalency (TEQ). It is a single figure obtained by multiplying the concentration of each congener by its individual TEF value and summing the products. The TEF/TEQ concept has been developed to facilitate risk assessment and regulatory control. While the initial and current set of TEFs only apply to dioxins and dioxin-like chemicals (DLCs), the concept can theoretically be applied to any group of chemicals satisfying the extensive similarity criteria used with dioxins, primarily that the main mechanism of action is shared across the group. Thus far, only the DLCs have had such a high degree of evidence of toxicological similarity. Several systems have been in operation over the years, such as the International Toxic Equivalents for dioxins and furans only, represented as I-TEQDF, as well as several country-specific TEFs. The present World Health Organization scheme, represented as WHO-TEQDFP, which includes PCBs, is now universally accepted. Chemical mixtures and additivity Humans and wildlife are rarely exposed to solitary contaminants, but rather to complex mixtures of potentially harmful compounds. Dioxins and DLCs are no exception. This is important to consider when assessing toxicity because the effects of chemicals in a mixture are often different from when acting alone. These differences can take place on the chemical level, where the properties of the compounds themselves change due to the interaction, creating a new dose at the target tissue and a quantitatively different effect. They may also act together (simple similar action) or independently on the organism at the receptor during uptake, when transported throughout the body, or during metabolism, to produce a joint effect. Joint effects are described as being additive (using dose, response/risk, or measured effect), synergistic, or antagonistic. A dose-additive response occurs when the mixture effect is determined by the sum of the component chemical doses, each weighted by its relative toxic potency. A risk-additive response occurs when the mixture response is the sum of component risks, based on the probability law of independent events. An effect-additive mixture response occurs when the combined effect of exposure to a chemical mixture is equal to the sum of the separate component chemical effects, e.g., incremental changes in relative liver weight. Synergism occurs when the combined effect of chemicals together is greater than the additivity prediction based on their separate effects. Antagonism describes where the combined effect is less than the additive prediction. Clearly it is important to identify which kind of additivity is being used. These effects reflect the underlying modes of action and mechanisms of toxicity of the chemicals. Additivity is an important concept here because the TEF method operates under the assumption that the assessed contaminants are dose-additive in mixtures. Because dioxins and DLCs act similarly at the AhR, their individual quantities in a mixture can be added together as proportional values, i.e. TEQs, to assess the total potency. This notion is fairly well supported by research. 
Some interactions have been observed and some uncertainties remain, including the application of the approach to routes of exposure other than oral intake. TEF Exposure to environmental media containing 2,3,7,8-TCDD and other dioxins and dioxin-like compounds can be harmful to humans as well as to wildlife. These chemicals are resistant to metabolism and biomagnify up the food chain. Toxic and biological effects of these compounds are mediated through the aryl hydrocarbon receptor (AhR). Human activity often releases these chemicals into the environment as mixtures of DLCs. The TEF approach has also been used to assess the toxicity of other chemicals including PAHs and xenoestrogens. The TEF approach uses an underlying assumption of additivity associated with these chemicals that takes into account chemical structure and behavior. For each chemical the model uses comparative measures from individual toxicity assays, known as relative effect potency (REP), to assign a single scaling factor known as the TEF. TCDD 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is the reference chemical to which the toxicity of other dioxins and DLCs is compared. TCDD is the most toxic DLC known. Other dioxins and DLCs are assigned a scaling factor, or TEF, in comparison to TCDD. TCDD has a TEF of 1.0. Sometimes PCB 126 is also used as a reference chemical, with a TEF of 0.1. Determination of TEF TEFs are determined using a database of REPs that meet WHO established criteria, using different biological models or endpoints, and are considered estimates with an order of magnitude of uncertainty. The characteristics necessary for inclusion of a compound in the WHO's TEF approach include: Structural similarity to polychlorinated dibenzo-p-dioxins or polychlorinated dibenzofurans Capacity to bind to the aryl hydrocarbon receptor (AhR) Capacity to elicit AhR-mediated biochemical and toxic responses Persistence and accumulation in the food chain All viable REPs for a chemical are compiled into a distribution, and the TEF is selected based on half order of magnitude increments on a logarithmic scale. The TEF is typically selected from the 75th percentile of the REP distribution in order to be protective of health. In vivo and in vitro studies REP distributions are not weighted to give more importance to certain types of studies. The current focus of REPs is on in vivo rather than in vitro studies. This is because all types of in vivo studies (acute, subchronic, etc.) and different endpoints have been combined, and associated REP distributions are shown as a single box plot. TEQ Toxic Equivalents (TEQs) report the toxicity-weighted concentration of mixtures of PCDDs, PCDFs, and PCBs. The reported value provides toxicity information about the mixture of chemicals and is more meaningful to toxicologists than reporting the total concentration. To obtain TEQs, the concentration of each chemical in a mixture is multiplied by its TEF and is then summed with all other chemicals to report the total toxicity-weighted concentration. TEQs are then used for risk characterization and management purposes, such as prioritizing areas of cleanup. Calculation The toxic equivalency of a mixture is defined by the sum of the concentrations of individual compounds (Ci) multiplied by their relative toxicity (TEFi): TEQ = Σ[Ci × TEFi] Applications Risk assessment Risk assessment is the process by which one estimates the probability of some adverse effect, such as that of a contaminant in the environment. 
Environmental risk assessments are conducted to help protect human health and the environment and are often used to assist in meeting regulations such as those stipulated by CERCLA in the United States. Risk assessments may take place retroactively, i.e., when assessing the contamination hazard at a superfund site, or predictively, such as when planning waste discharges. The complex nature of chemical mixtures in the environment presents a challenge to risk assessment. The TEF approach was developed to help assess the toxicity of DLCs and other environmental contaminants with additive effects and is currently endorsed by the World Health Organization. Human health Human exposure to dioxins and DLCs is a cause for public and regulatory concern. Health concerns include endocrine, developmental, immune and carcinogenic effects. The route of exposure is primarily through the ingestion of animal products such as meat, dairy, fish, and human breast milk. However, humans are also exposed to high levels of "natural dioxins" in cooked foods and vegetables. The human diet accounts for over 95% of the total uptake of TEQ. Risks in humans are typically calculated from known ingestion of contaminants or from blood or adipose tissue samples. However, human intake data is limited, and calculations from blood and tissue are not well supported. This presents a limitation to the TEF application in risk assessment to humans. Fish and wildlife DLC exposure to wildlife results from various sources including the atmospheric deposition of emissions (e.g. waste incineration) over terrestrial and aquatic habitats and contamination from waste effluents. Contaminants then bioaccumulate up the food chain. The WHO has derived TEFs for fish, bird, and mammal species; however, differences among taxa for some compounds are orders of magnitude apart. Compared to mammals, fish are less responsive to mono-ortho PCBs. Limitations The TEF approach to DLC risk assessment operates under certain assumptions which attach varying degrees of uncertainty. These assumptions include: Individual compounds all act through the same biologic pathway Individual effects are dose-additive Dose-response curves are similarly shaped Individual compounds are similarly distributed throughout the body TEFs are assumed to be equivalent for all effects, all exposure scenarios and all species, although this may not be the reality. The TEF method only accounts for toxicity effects related to the AhR mechanism; however, some DLC toxicity may be mediated through other processes. Dose-additivity may not be applicable to all DLCs and exposure scenarios, particularly those involving low doses. Interactions with other chemicals that may induce antagonistic effects are not considered, and those may be species-specific. In terms of human health risk assessments, estimates of relative potency from animal studies are assumed to be predictive of toxicity in humans, although there are species-specific differences in the AhR. Nevertheless, in vivo mixture studies have shown that WHO 1998 TEF values predicted mixture toxicity within a factor of two or less. A probabilistic approach may provide an advantage in the determination of TEF because it will better describe the level of uncertainty present in a TEF value. The use of TEF values to assess abiotic matrices such as soil, sediment, and water is problematic because TEF values are primarily calculated from oral intake studies. 
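The additive TEQ calculation defined in the Calculation section above is straightforward to sketch in code. The congener names, concentrations and most TEF values used here are illustrative placeholders rather than an official WHO table; only the TEFs of 1.0 for 2,3,7,8-TCDD and 0.1 for PCB 126 are stated in the text above:

```python
# Hypothetical measured concentrations (pg/g) of a few congeners in one sample.
concentrations = {"2,3,7,8-TCDD": 1.2, "2,3,4,7,8-PeCDF": 4.0, "PCB 126": 5.0}

# Toxic equivalency factors: TCDD = 1.0 and PCB 126 = 0.1 as noted above;
# the PeCDF value is a made-up placeholder for illustration.
tef = {"2,3,7,8-TCDD": 1.0, "2,3,4,7,8-PeCDF": 0.3, "PCB 126": 0.1}

# TEQ = sum over congeners of (concentration_i * TEF_i)
teq = sum(conc * tef[name] for name, conc in concentrations.items())
print(f"TEQ = {teq:.2f} pg-TEQ/g")   # 1.2*1.0 + 4.0*0.3 + 5.0*0.1 = 2.90
```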
History and development Dating back to the 1980s there is a long history of developing TEFs and how to use them. New research being conducted influences guiding criteria for assigning TEFs as the science progresses. The World Health Organization has held expert panels to reach a global consensus on how to assign TEFs in conjunction with new data. Each individual country recommends their own TEF values, typically endorsing the WHO global consensus TEFs. Other compounds for potential inclusion Based on mechanistic considerations, PCB 37, PBDDs, PBDFs, PXCDDs, PXCDFs, PCNs, PBNs and PBBs can be included in the TEF concept. However, most of these compounds lack human exposure data. Thus, TEF values for these compounds are in the process of review See also Dioxins and dioxin-like compounds Sources TRI Dioxin and Dioxin-like Compounds Toxic Equivalency Reporting Rule – Proposed Rule (US EPA) Archived at webcitation on 2 October 2012. Concentration indicators Environmental toxicology Equivalent units
Toxic equivalency factor
[ "Mathematics", "Environmental_science" ]
2,367
[ "Equivalent quantities", "Toxicology", "Quantity", "Environmental toxicology", "Equivalent units", "Units of measurement" ]
26,236,727
https://en.wikipedia.org/wiki/Brooks%E2%80%93Iyengar%20algorithm
The Brooks–Iyengar algorithm or FuseCPA Algorithm or Brooks–Iyengar hybrid algorithm is a distributed algorithm that improves both the precision and accuracy of the interval measurements taken by a distributed sensor network, even in the presence of faulty sensors. The sensor network does this by exchanging the measured value and accuracy value at every node with every other node, and computes the accuracy range and a measured value for the whole network from all of the values collected. Even if some of the data from some of the sensors is faulty, the sensor network will not malfunction. The algorithm is fault-tolerant and distributed. It could also be used as a sensor fusion method. The precision and accuracy bound of this algorithm were proved in 2016. Background The Brooks–Iyengar hybrid algorithm for distributed control in the presence of noisy data combines Byzantine agreement with sensor fusion. It bridges the gap between sensor fusion and Byzantine fault tolerance. This seminal algorithm unified these disparate fields for the first time. Essentially, it combines Dolev's algorithm for approximate agreement with Mahaney and Schneider's fast convergence algorithm (FCA). The algorithm assumes N processing elements (PEs), t of which are faulty and can behave maliciously. It takes as input either real values with inherent inaccuracy or noise (which can be unknown), or a real value with a priori defined uncertainty, or an interval. The output of the algorithm is a real value with an explicitly specified accuracy. The algorithm runs in O(N log N), where N is the number of PEs. It is possible to modify this algorithm to correspond to Crusader's Convergence Algorithm (CCA); however, the bandwidth requirement will also increase. The algorithm has applications in distributed control, software reliability, high-performance computing, etc. Algorithm The Brooks–Iyengar algorithm is executed in every processing element (PE) of a distributed sensor network. Each PE exchanges its measured interval with all other PEs in the network. The "fused" measurement is a weighted average of the midpoints of the regions found. The concrete steps of the Brooks–Iyengar algorithm are shown in this section. Each PE performs the algorithm separately: Input: The measurement sent by PE k to PE i is a closed interval $[l_{k,i}, h_{k,i}]$, $1 \le k \le N$. Output: The output of PE i includes a point estimate and an interval estimate. PE i receives measurements from all the other PEs. Divide the union of collected measurements into mutually exclusive intervals based on the number of measurements that intersect, which is known as the weight of the interval. Remove intervals with weight less than $N - t$, where $t$ is the number of faulty PEs. If there are L intervals left, let $A_i$ denote the set of the remaining intervals. We have $A_i = \{(I_1, w_1), \ldots, (I_L, w_L)\}$, where interval $I_l = [l_l, h_l]$ and $w_l$ is the weight associated with interval $I_l$. We also assume $h_1 \le h_2 \le \cdots \le h_L$. Calculate the point estimate $v_i$ of PE i as $v_i = \frac{\sum_{l} w_l\,(l_l + h_l)/2}{\sum_{l} w_l}$ and the interval estimate is $[l_1, h_L]$. Example: Consider an example of 5 PEs, in which PE 5 is sending wrong values to other PEs and they all exchange the values. The values received by PE 1 are in the next Table. We draw a Weighted Region Diagram (WRD) of these intervals; then we can determine the result for PE 1 according to the Algorithm: it consists of intervals where at least 4 (= N − t = 5 − 1) measurements intersect. 
The output of PE 1 consists of the point estimate and the interval estimate computed from these regions; similarly, we could obtain all the inputs and results of the 5 PEs: Related algorithms 1982 Byzantine Problem: The Byzantine General Problem, as an extension of the Two Generals' Problem, could be viewed as a binary problem. 1983 Approximate Consensus: The method removes some values from a set consisting of scalars to tolerate faulty inputs. 1985 In-exact Consensus: The method also uses scalars as the input. 1996 Brooks–Iyengar Algorithm: The method is based on intervals. 2013 Byzantine Vector Consensus: The method uses vectors as the input. 2013 Multidimensional Agreement: The method also uses vectors as the input, while the measure of distance is different. We could use Approximate Consensus (scalar-based), the Brooks–Iyengar Algorithm (interval-based) and Byzantine Vector Consensus (vector-based) to deal with interval inputs, and the paper proved that the Brooks–Iyengar algorithm is the best here. Application The Brooks–Iyengar algorithm is a seminal work and a major milestone in distributed sensing, and could be used as a fault-tolerant solution for many redundancy scenarios. Also, it is easy to implement and embed in any networking system. In 1996, the algorithm was used in MINIX to provide more accuracy and precision, which led to the development of the first version of RT-Linux. In 2000, the algorithm was also central to the DARPA SensIT program's distributed tracking program. Acoustic, seismic and motion detection readings from multiple sensors are combined and fed into a distributed tracking system. Besides, it was used to combine heterogeneous sensor feeds in the application fielded by BBN Technologies, BAE Systems, Penn State Applied Research Lab (ARL), and USC/ISI. The Thales Group, a UK defense manufacturer, used this work in its Global Operational Analysis Laboratory. It is applied to Raytheon's programs where many systems need to extract reliable data from an unreliable sensor network; this avoids increased investment in improving sensor reliability. Also, the research in developing this algorithm resulted in the tools used by the US Navy in its maritime domain awareness software. In education, the Brooks–Iyengar algorithm has been widely taught in classes at institutions such as the University of Wisconsin, Purdue, Georgia Tech, Clemson University, the University of Maryland, etc. In addition to the area of sensor networks, other fields such as time-triggered architecture, safety of cyber-physical systems, data fusion, robot convergence, high-performance computing, software/hardware reliability, and ensemble learning in artificial intelligence systems could also benefit from the Brooks–Iyengar algorithm. Algorithm characteristics Faulty PEs tolerated < N/3 Maximum faulty PEs < 2N/3 Complexity = O(N log N) Order of network bandwidth = O(N) Convergence = 2t/N Accuracy = limited by input Iterates for precision = often Precision over accuracy = no Accuracy over precision = no See also Distributed sensor network Sensor fusion Fault tolerance Consensus (computer science) Chandra–Toueg consensus algorithm Paxos consensus protocol Raft consensus algorithm Marzullo's algorithm Intersection algorithm Two Generals' Problem Awards and recognition The inventors of the Brooks–Iyengar algorithm, Dr Brooks and Dr SS Iyengar, received the prestigious 25-year Test of Time Award for their pioneering research and the high impact of the Brooks–Iyengar algorithm, which has influenced numerous US government programs and commercial products. 
Dr SS Iyengar receiving award from Prof Steve Yau, IEEE Dr SS Iyengar with his student Dr Brooks References Distributed computing problems Fault tolerance Theory of computation
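The interval-fusion step described in the Algorithm section above can be sketched in a few lines of code. The sweep over endpoint-delimited regions, the N − t weight threshold, and the weighted-midpoint estimate follow the steps given earlier; the function name and the sample intervals are illustrative choices, not taken from the source:

```python
def brooks_iyengar_fuse(intervals, t):
    """Fuse interval measurements from N PEs, tolerating up to t faulty PEs.
    intervals -- list of (low, high) tuples, one measurement per PE
    Returns (point_estimate, (interval_low, interval_high))."""
    n = len(intervals)
    # Sweep the sorted endpoints to build mutually exclusive regions and count
    # how many of the received intervals cover each region (its weight).
    points = sorted({p for iv in intervals for p in iv})
    kept = []
    for lo, hi in zip(points, points[1:]):
        weight = sum(1 for a, b in intervals if a <= lo and hi <= b)
        if weight >= n - t:                      # keep regions crossed by at least N - t intervals
            kept.append((lo, hi, weight))
    if not kept:
        raise ValueError("no region is covered by N - t measurements")
    total_weight = sum(w for _, _, w in kept)
    point = sum(w * (lo + hi) / 2 for lo, hi, w in kept) / total_weight
    return point, (kept[0][0], kept[-1][1])

# Five PEs; the last one is faulty and reports an interval far from the others.
readings = [(2.7, 6.7), (0.0, 3.2), (1.5, 4.5), (0.8, 2.8), (6.0, 9.0)]
print(brooks_iyengar_fuse(readings, t=1))        # -> (2.75, (2.7, 2.8))
```

In this toy run the wildly wrong interval from the fifth PE never reaches the required weight of N − t = 4, so it has no influence on the fused point estimate.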
Brooks–Iyengar algorithm
[ "Mathematics", "Engineering" ]
1,397
[ "Distributed computing problems", "Reliability engineering", "Computational problems", "Fault tolerance", "Mathematical problems" ]
21,842,199
https://en.wikipedia.org/wiki/OpenMS
OpenMS is an open-source project for data analysis and processing in mass spectrometry and is released under the 3-clause BSD licence. It supports most common operating systems including Microsoft Windows, macOS and Linux. OpenMS has tools for analysis of proteomics data, providing algorithms for signal processing, feature finding (including de-isotoping), visualization in 1D (spectra or chromatogram level), 2D and 3D, map mapping and peptide identification. It supports label-free and isotopic-label based quantification (such as iTRAQ and TMT and SILAC). OpenMS also supports metabolomics workflows and targeted analysis of DIA/SWATH data. Furthermore, OpenMS provides tools for the analysis of cross-linking data, including protein-protein, protein-RNA and protein-DNA cross-linking. Lastly, OpenMS provides tools for analysis of RNA mass spectrometry data. History OpenMS was originally released in 2007 in version 1.0 and was described in two articles published in Bioinformatics in 2007 and 2008, and has since seen continuous releases. In 2009, the visualization tool TOPPView was published and in 2012, the workflow manager and editor TOPPAS was described. In 2013, a complete high-throughput label-free analysis pipeline using OpenMS 1.8 was described and compared with similar, proprietary software (such as MaxQuant and Progenesis QI). The authors conclude that "[...] all three software solutions produce adequate and largely comparable quantification results; all have some weaknesses, and none can outperform the other two in every aspect that we examined. However, the performance of OpenMS is on par with that of its two tested competitors [...]". The OpenMS 1.10 release contained several new analysis tools, including OpenSWATH (a tool for targeted DIA data analysis), a metabolomics feature finder and a TMT analysis tool. Furthermore, full support for TraML 1.0.0 and the search engine MyriMatch were added. The OpenMS 1.11 release was the first release to contain fully integrated bindings to the Python programming language (termed pyOpenMS). In addition, new tools were added to support QcML (for quality control) and for metabolomics accurate mass analysis. Multiple tools were significantly improved with regard to memory and CPU performance. With OpenMS 2.0, released in April 2015, the project provides a new version that has been completely cleared of GPL code and uses git (in combination with GitHub) for its version control and ticketing system. Other changes include support for mzIdentML, mzQuantML and mzTab, while improvements in the kernel allow for faster access to data stored in mzML and provide a novel API for accessing mass spectrometric data. In 2016, the new features of OpenMS 2.0 were described in an article in Nature Methods. In 2024, OpenMS 3.0 was released, providing support for a wide array of data analysis tasks in proteomics, metabolomics and MS-based transcriptomics. OpenMS is currently developed with contributions from the group of Knut Reinert at the Free University of Berlin, the group of Oliver Kohlbacher at the University of Tübingen and the group of Hannes Roest at the University of Toronto. Features OpenMS provides a set of over 100 different executable tools that can be chained together into pipelines for mass spectrometry data analysis (the TOPP Tools). It also provides visualization tools for spectra and chromatograms (1D), mass spectrometric heat maps (2D m/z vs RT) as well as a three-dimensional visualization of a mass spectrometry experiment. 
Finally, OpenMS also provides a C++ library (with bindings to Python available since version 1.11) for LC/MS data management and analysis, accessible to developers who want to create new tools and implement their own algorithms using the OpenMS library. OpenMS is free software available under the 3-clause BSD licence (previously under the LGPL). Among others, it provides algorithms for signal processing, feature finding (including de-isotoping), visualization, map mapping and peptide identification. It supports label-free and isotopic-label based quantification (such as iTRAQ and TMT and SILAC). The following graphical applications are part of an OpenMS release: TOPPView is a viewer that allows visualization of mass spectrometric data on MS1 and MS2 level as well as in 3D; additionally it also displays chromatographic data from SRM experiments (in version 1.10). OpenMS is compatible with current and upcoming Proteomics Standard Initiative (PSI) formats for mass spectrometric data. TOPPAS is a graphical application to build and execute data processing pipelines which consist of TOPP tools. Releases See also ProteoWizard Trans-Proteomic Pipeline Mass spectrometry software References External links OpenMS Project Homepage Free science software Bioinformatics software Mass spectrometry software Proteomics Software using the BSD license
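The pyOpenMS bindings mentioned above expose the same data structures to Python. A minimal sketch of loading an mzML file and iterating over its spectra might look as follows; the file name is a placeholder, and exact class and method names can vary between pyOpenMS versions:

```python
import pyopenms

# Load a (hypothetical) mzML file into an in-memory experiment object.
exp = pyopenms.MSExperiment()
pyopenms.MzMLFile().load("example.mzML", exp)

# Iterate over spectra and report retention time, MS level and peak count.
for spectrum in exp.getSpectra():
    mz, intensity = spectrum.get_peaks()
    print(spectrum.getRT(), spectrum.getMSLevel(), len(mz))
```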
OpenMS
[ "Physics", "Chemistry", "Biology" ]
1,089
[ "Spectrum (physical sciences)", "Chemistry software", "Bioinformatics software", "Bioinformatics", "Mass spectrometry software", "Mass spectrometry" ]
21,842,531
https://en.wikipedia.org/wiki/The%20OpenMS%20Proteomics%20Pipeline
The OpenMS Proteomics Pipeline (TOPP) is a set of computational tools that can be chained together to tailor problem-specific analysis pipelines for HPLC-MS data. It transforms most of the OpenMS functionality into small command line tools that are the building blocks for more complex analysis pipelines. The functionality of the tools ranges from data preprocessing (file format conversion, baseline reduction, noise reduction, peak picking, map alignment,...) over quantitation (isotope-labeled and label-free) to identification (wrapper tools for Mascot, Sequest, InsPecT and OMSSA). TOPP is developed in the groups of Prof. Knut Reinert at the Free University of Berlin and in the group of Prof. Kohlbacher at the University of Tübingen. For more detailed information about the TOPP tools, see the TOPP documentation of the latest release and the TOPP publication in the references. The OpenMS Proteomics Pipeline is free software released under the 3-clause BSD license. References Free science software Mass spectrometry software Proteomics
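Because the TOPP tools are ordinary command-line executables, a pipeline is simply a sequence of tool invocations whose output files feed the next step. The sketch below chains two preprocessing tools from Python; the tool names, the -in/-out parameter convention and the file names are assumptions about a typical TOPP release rather than a definitive recipe:

```python
import subprocess

# Hypothetical two-step pipeline: centroid the raw profile data, then detect features.
steps = [
    ["PeakPickerHiRes", "-in", "raw_profile.mzML", "-out", "centroided.mzML"],
    ["FeatureFinderCentroided", "-in", "centroided.mzML", "-out", "features.featureXML"],
]
for cmd in steps:
    subprocess.run(cmd, check=True)   # stop the pipeline if any tool reports an error
```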
The OpenMS Proteomics Pipeline
[ "Physics", "Chemistry" ]
227
[ "Chromatography", "Spectrum (physical sciences)", "Chemistry software", "Mass spectrometry software", "Mass spectrometry", "Chromatography software" ]
35,017,199
https://en.wikipedia.org/wiki/Differential%20dynamic%20microscopy
Differential dynamic microscopy (DDM) is an optical technique that allows performing light scattering experiments by means of a simple optical microscope. DDM is suitable for typical soft materials such as liquids or gels made of colloids, polymers and liquid crystals, but also for biological materials like bacteria and cells. Basic idea The typical DDM data is a time sequence of microscope images (movie) acquired at some height within the sample (typically at its mid-plane). If the image intensity is locally proportional to the concentration of particles or molecules to be studied (possibly convoluted with the microscope point spread function (PSF)), each movie can be analyzed in the Fourier space to obtain information about the dynamics of concentration Fourier modes, independently of whether the particles/molecules can be individually optically resolved or not. After suitable calibration, information about the Fourier amplitude of the concentration modes can also be extracted. Applicability and working principle The concentration-intensity proportionality is valid at least in two very important cases that distinguish two corresponding classes of DDM methods: scattering-based DDM: where the image is the result of the superposition of the strong transmitted beam with the weakly scattered light from the particles. Typical cases where this condition can be obtained are bright field, phase contrast, polarized microscopes. fluorescence-based DDM: where the image is the result of the incoherent addition of the intensity emitted by the particles (fluorescence, confocal microscopes). In both cases the convolution with the PSF in the real space corresponds to a simple product in the Fourier space, which guarantees that studying a given Fourier mode of the image intensity provides information about the corresponding Fourier mode of the concentration field. In contrast with particle tracking, there is no need of resolving the individual particles, which allows DDM to characterize the dynamics of particles or other moving entities whose size is much smaller than the wavelength of light. Still, the images are acquired in the real space, which provides several advantages with respect to traditional (far field) scattering methods. Data analysis DDM is based on an algorithm proposed in Croccolo et al. and Alaimo et al., which is conveniently named differential dynamic algorithm (DDA). DDA works by subtracting images acquired at different times and taking advantage of the fact that, as the delay $\Delta t$ between two subtracted images gets large, the energy content of the difference image increases correspondingly. A two-dimensional fast Fourier transform (FFT) analysis of the difference images allows quantifying the growth of the signal content for each wave vector $q$, and one can calculate the Fourier power spectrum of the difference images for different delays to obtain the so-called image structure function $D(q,\Delta t)$. Calculation shows that for both scattering- and fluorescence-based DDM $D(q,\Delta t) = T(q)\,I(q)\,[1 - f(q,\Delta t)] + B(q)$ (1), where $f(q,\Delta t)$ is the normalized intermediate scattering function that would be measured in a dynamic light scattering (DLS) experiment, $I(q)$ the sample scattering intensity that would be measured in a static light scattering (SLS) experiment, $B(q)$ a background term due to the noise along the detection chain and $T(q)$ a transfer function that depends on the microscope details. Equation (1) shows that DDM can be used for DLS experiments, provided that a model for the normalized intermediate scattering function is available. 
For instance, in the case of Brownian motion one has $f(q,\Delta t) = e^{-D_0 q^{2} \Delta t}$, where $D_0$ is the diffusion coefficient of the Brownian particles. If the transfer function $T(q)$ is determined by calibrating the microscope with a suitable sample, DDM can be employed also for SLS experiments. Alternative algorithms for data analysis have been suggested in the literature. Running DDM on a series of frames smaller than the full frame has been called multi-DDM. This is analogous to changing the scattering volume in a light scattering experiment, but is readily carried out by selecting sub-regions of the full-frame movie. The coherence lengthscale of the dynamics can be picked up from a multi-DDM analysis. Relationship with other imaging-based scattering methods Scattering-based DDM belongs to the so-called near-field (or deep Fresnel) scattering family, a recently introduced family of imaging-based scattering methods. Near field is used here in a similar way to what is used for near field speckles, i.e. as a particular case of the Fresnel region as opposed to the far field or Fraunhofer region. The near field scattering family also includes quantitative shadowgraphy and Schlieren imaging. Applications DDM was introduced in 2008 and was applied for characterizing the dynamics of colloidal particles in Brownian motion. More recently it has been successfully applied also to the study of aggregation processes of colloidal nanoparticles, of bacterial motion, of the dynamics of anisotropic colloids and of motile cilia. References Scientific techniques Microscopy Scattering, absorption and radiative transfer (optics) Biochemistry methods Physical chemistry Spectroscopy
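The image structure function introduced in the Data analysis section can be estimated directly from an image stack with a few lines of numerical code. The sketch below assumes a NumPy array named frames of shape (T, N, N) holding the movie and returns the (azimuthally unaveraged) structure function for a few delays; the variable names and the unit-pixel-size convention are illustrative choices, not a definitive implementation:

```python
import numpy as np

def image_structure_function(frames, lags):
    """Estimate D(q, dt) for the given integer frame delays.
    frames -- array of shape (T, N, N); lags -- iterable of delays in frames."""
    T, N, _ = frames.shape
    q = 2 * np.pi * np.fft.fftfreq(N)              # wave vectors, assuming unit pixel size
    qx, qy = np.meshgrid(q, q, indexing="ij")
    q_modulus = np.hypot(qx, qy)
    d = {}
    for lag in lags:
        diffs = frames[lag:] - frames[:-lag]        # all image differences at this delay
        spectra = np.abs(np.fft.fft2(diffs, axes=(1, 2))) ** 2
        d[lag] = spectra.mean(axis=0)               # average power spectrum of the differences
    return q_modulus, d

# Example with synthetic noise frames (real data would come from the microscope movie).
rng = np.random.default_rng(0)
frames = rng.normal(size=(50, 64, 64))
q_mod, d = image_structure_function(frames, lags=[1, 5, 10])
```

Fitting the q and delay dependence of the result to the model in equation (1), for instance with the Brownian form of the intermediate scattering function, then yields the diffusion coefficient together with the amplitude and background terms.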
Differential dynamic microscopy
[ "Physics", "Chemistry", "Biology" ]
990
[ "Biochemistry methods", " absorption and radiative transfer (optics)", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Molecular physics", "Instrumental analysis", "Spectroscopy", "Scattering", "nan", "Microscopy", "Biochemistry", "Physical chemistry" ]
35,017,259
https://en.wikipedia.org/wiki/Shelter
A shelter is an architectural structure or natural formation (or a combination of the two) providing protection from the local environment. A shelter can serve as a home or be provided by a residential institution. It can be understood as both a temporary and a permanent structure. In the American counterculture of the 1960s, the concept of "shelter" appears as one of the key concepts of the Whole Earth Catalog, and expresses an alternative to the modes of teaching architecture practiced in American academies. In the context of Maslow's hierarchy of needs, shelter holds a crucial position as one of the fundamental human necessities, complementing other physiological imperatives such as the need for air, water, food, rest, clothing, and reproduction. Types Forms Apartment Bivouac shelter Blast shelter Bunker Fallout shelter House Hut Lean-to Mia-mia, Indigenous Australian for a temporary shelter Quinzhee, a shelter made from a hollow mound of loose snow Ramada, a roof with no walls Rock shelter Tent Toguna, a shelter used by the Dogon people in Africa Townhouse Applications Air raid shelter Animal shelter Bothy, public supply shelter in the British Isles Bus stop Emergency shelter Homeless shelter Housing unit Mountain hut Refugee shelter Transitional shelter Women's shelter Gallery See also List of human habitation forms Right to housing References External links Buildings and structures by type
Shelter
[ "Engineering" ]
273
[ "Buildings and structures by type", "Architecture" ]
35,017,527
https://en.wikipedia.org/wiki/Salinosporamide
The salinosporamides are a group of closely related chemical compounds isolated from marine bacteria in the genus Salinispora. They possess a densely functionalized γ-lactam-β-lactone bicyclic core. Salinosporamide A has attracted interest for its potential use in treating various types of cancer. In addition, a variety of synthetic analogs have been prepared. Chemical structures References External links Natural products Lactams Lactones
Salinosporamide
[ "Chemistry" ]
95
[ "Natural products", "Medicinal chemistry" ]
35,017,966
https://en.wikipedia.org/wiki/Personal%20radio%20service
A personal radio service is any system that allows individuals to operate radio transmitters and receivers for personal purposes with minimal or no special license or individual authorization. Personal radio services exist around the world and typically use light-weight walkie talkie portable radios. The power output, antenna size, and technical characteristics of the equipment are set by regulations in each country. Many regions (for example, the European Union) have standardized personal radio service rules to allow travelers from one country to use their equipment in another country. Examples of standardized services include PMR446 and FM Citizens Band Radio (CB) in the EU and several other countries/regions. 26–27 MHz CB radio is the oldest personal radio service and is used in nearly every country worldwide, with many countries and regions copying the United States 40-channel frequency plan. In many countries, CB radio is less popular due to the availability of other personal radio services that offer shorter antennas and better protection from noise and interference. Because radio spectrum allocation varies around the world, a personal radio service device may not be usable outside its original area of purchase. For example, US-specification Family Radio Service radios operate on frequencies that in Europe are allocated to fire and emergency services. Operation of a personal radio device that causes interference to other services may result in prosecution. Some personal radio service frequency plans are regionally accepted, for example, the European PMR446 system is available in many countries, and the American FRS/GMRS system's channel plans have been adopted by Canada, Mexico and some countries in South America. Operating characteristics Specific details vary between the different national services, but many personal radio services operate in the VHF or UHF part of the radio spectrum, using frequency modulation and a maximum power of only a few watts. Operation is on predetermined channels. Unlike commercial business, marine, aviation, or emergency services radio, all users in an area share access to the available channels, requiring cooperation for effective communications. Unlike amateur radio, experimentation with different types of apparatus, and modes of modulation is not permitted, and equipment must be factory-built and approved. Many of these services require non-removable antennas or place restrictions on antenna size, height or gain. The high-VHF band (137–174 MHz) and UHF bands (325, 900 MHz) are the most popular aside from the 25–28 MHz "HF CB" bands. There are notable exceptions to this, including the 78 MHz and 245 MHz Thai "CB 78" and "CB 245" VHF-FM bands, the 68–71 MHz Finnish band, the 30–31 MHz Swedish "Hunting Radio" band, and the 43 MHz Italian "VHF CB" bands. The lower frequency allocations (especially the 30/31 and 43 MHz bands) often exhibit propagation and communication range characteristics similar to 27 MHz CB radio. Higher frequencies (especially above 200 MHz) are almost exclusively line-of-sight. These services are different from cellular mobile telephone systems in that no infrastructure (towers, base stations) is required; communications is point-to-point directly between users. However, this also means that communication range is usually limited to line-of-sight propagation, a few kilometres (miles) under the best of circumstances, and much less in heavily built up urban areas. 
Also unlike mobile telephones, operation is push-to-talk; a user must wait for the shared frequency to be clear before transmitting, and all stations on the frequency may hear the transmission. Since both stations are on the same frequency, the receiving station cannot interrupt the transmitter until it has finished. Generally only voice transmission is allowed under personal radio service regulations, although tone and digital selective calling features are allowed in some countries. Some services permit digital data transmission, either as part of digital "text messaging" and GPS location "sharing" with other nearby radios (such as FRS), or the services themselves involve digital voice and data transmissions (such as DPMR446/DMR446). Family Radio Service and derivatives United States In the United States, the Family Radio Service was authorized starting in 1996. Initially, it used half-watt hand-held FM UHF radios with 14 fixed channels near 462 and 467 MHz. For a time dual-standard FRS and General Mobile Radio Service (GMRS) radios were available, that could be operated without individual licensing on the FRS channels, but which required a license to operate on the GMRS frequencies at a power level above the FRS standard. In May 2017 the regulations were changed so that FRS service included operation at up to 2 watts on GMRS channels, and prohibiting the use of dual-standard radios in FRS service that would exceed the 2-watt limit. Canada American-standard FRS radios have been approved for use in Canada since April 2000. The revised technical standard RSS 210 has essentially the same technical requirements as in the United States. Since September 2004, low-power GMRS radios and dual-standard FRS/GMRS radios have also been approved for use in Canada, giving additional channels. In Canada, no license is required and no restrictions are imposed on the GMRS channels. Mexico Since tourists often bring their FRS radios with them, and since trade between the U.S., Canada, and Mexico is of great value to all three countries, the Mexican Secretary of Communication and Transportation has authorized use of the FRS frequencies and equipment similar to that in the US. Dual-mode FRS/GMRS equipment is not approved in Mexico, so caution should be exercised in operating hybrid FRS/GMRS devices purchased elsewhere. South America Dual-mode GMRS/FRS equipment is also approved in Brazil. Using other UHF and VHF frequencies Australia and New Zealand In Australia and New Zealand, the 77-channel (previously 40-channel) UHF CB citizen's band near 477 MHz is used for a similar purpose. In New Zealand hand-held transceivers are "class licensed" and require no individual registration. Repeaters may be used, but these require individual station licences. The Australian Communications and Media Authority (ACMA) also allocated a band near 434 MHz for low-powered devices with low potential for interference to other users of the band. Bangladesh The Bangladesh Telecommunications Regulatory Commission (BTRC) allows for use of the 26–27 MHz Citizen's Band (CB) allocation from 26.965 to 27.405 MHz (standard 40-channel allocation used in most of the world for CB). BTRC also permits usage of "Short Business Radio" (SBR) in the 245.000 to 246.000 MHz band. This is the same 245 MHz CB allocation as Thailand and uses the same channel plan (80 channels, 12.5 kHz steps). 
The BTRC considers both the 26–28 MHz CB and 245–246 MHz SBR allocations to be shared resources and all users must share the channels with other users of these two bands. China In China, Hong Kong and Macau, the Public Radio Service is a personal radio service similar to the American-style FRS. Twenty UHF channels near 409 MHz are allocated. Sometimes this is called "Citizen Band" or CB in China, but not to be confused with the Citizens Band radio within the 27 MHz band. FRS/GMRS and PMR446 radios are not approved for use in China. FRS band radios may be found in use in China illegally, starting before the Chinese government opened the 409 MHz band to the public. Legal action against such usage is rare, because of the low power and short range of FRS radios. List of China Public Radio Service Channels: Europe In Europe, PMR446 is a personal radio service with two sets of 16 channels and one set of 8 channels available for a total of 40 channels. The original PMR446 allocation for 16 channels from 446.00625 MHz to 446.19375 MHz with 12.5 kHz channel spacing steps has been complemented with two digital channel plans (sometimes called "DMR446" or "DPMR446"). Digital FDMA offers another 16 channels from 446.103125 MHz to 446.196875 MHz with 6.25 kHz channel spacing steps. 4-level FSK modulation at 3.6 kbit/s is used. Digital DMR Tier I TDMA PMR446 channels are also available, eight channels from 446.10625 MHz to 446.19375 MHz with 12.5 kHz channel spacing steps. 4-level FSK modulation at 3.6 kbit/s is used. All three of the PMR446 channel plans occupy the same European-harmonized 446.0 MHz to 446.2 MHz frequency band. One cannot legally use the FRS radio in Europe or PMR446 in the U.S. The 446 MHz band is allocated to amateur radio in the United States. In Great Britain, FRS frequencies are used for fire brigade communications and this sometimes causes problems when FRS equipment is imported from the U.S. and used without awareness of the consequences by members of the public. A similar situation exists with the German Freenet VHF CB allocation, the American MURS VHF CB allocation, and the Australian/New Zealand UHF CB allocation at 476–477 MHz. Use of Australian UHF CB equipment in the United States would cause severe interference to public safety communications, especially in larger metropolitan areas. European countries also have LPD devices operating in the 433 MHz band; these devices are restricted to 10 mW output power and are intended to provide an alternative to PMR446 over short distances. Additionally, most European countries allow use of the 26.965 – 27.405 MHz US FCC Citizen's Band Radio allocation (40 channels). Some countries allow more channels (for example, Germany has an extra 40-channel allocation from 26.565 to 27.955 MHz, making a total of 80 available CB channels) or other modes. Many countries allow the use of both AM and FM modulation, with some (including the UK) allowing the use of SSB as well. Finland Finland has a 26-channel mid-band (68–72 MHz) VHF-FM allocation called "RHA68" consisting of "common channels for hobby usage in general". No license, examination or fee is required to operate in the RHA68 band. The channels may be referred to by their sequential number or by alphanumeric designation per Finnish law. Power limit is 5 watts for hand-portable stations and 25 watts for mobile stations on all of group A channels and on group E channels 15, 16 and 18–21. Base stations may only be used on group A channels with maximum transmitting power of 15 watts. 
Channel group A Channel group E Germany In addition to license-free PMR446, CB and SRD/LPD433 radio, Germany has a VHF-FM allocation called Freenet that allows a maximum of 1 W (ERP) of power on six channels in the 149 MHz band. 149.0250 MHz 149.0375 MHz 149.0500 MHz 149.0875 MHz 149.1000 MHz 149.1125 MHz The Freenet allocation is a re-purposing of the old B-Netz mobile telephone service. It is similar in scope and purpose to the Multi-Use Radio Service (MURS) in the United States, and the “Jagtradio” (Hunting Radio) services in Norway and Sweden. India India has a 13-channel UHF-FM service known as "Short Range Radio" or "SRR" that operates in the 350.225–350.400 MHz band with a maximal output power of 2 watts. FM mode is used with 12.5 kHz channel spacing. 350.2250 MHz 350.2625 MHz 350.2750 MHz 350.2875 MHz 350.3000 MHz 350.3125 MHz 350.3250 MHz 350.3375 MHz 350.3500 MHz 350.3625 MHz 350.3750 MHz 350.3875 MHz 350.4000 MHz In August 2005, India deregulated the 26.957–27.283 MHz band for license-free CB radio usage with a maximum power output of 5 watts. The channel plan follows channels 1–27 from the standard 40 channel CB plan originally adopted by the United States (and most other countries worldwide). Channel 1 is 26.965 and channel 27 is 27.275 MHz. Use of frequencies below 26.965 or above 27.275 is not permitted in India. Multi-norm mobile CB radios are now being shipped with the 27 channel India CB frequency plan programmed in them. Indonesia Two frequency bands available: VHF-FM (142–143 MHz VHF CB) and UHF-FM (476–477 MHz UHF CB). Indonesia allows 40 channels from 476.425 MHz to 477.400 MHz at 25 kHz channel spacing. It is the same channel plan as the original 40 channel Australia/New Zealand UHF CB allocation. Indonesia also has a 60-channel VHF-FM service available from 142.050 MHz to 143.525 MHz (channels spaced every 25 kHz). Indonesia also permits usage of the standardized 26.965–27.405 MHz HF CB band with AM/SSB modes allowed. The new frequency allocations (142 & 476 MHz) were regulated in a decision by the Director General of Posts and Telecommunications in decree Number 92 Year 1994 on Implementing Regulation of the Inter-Citizens Radio Communications. Italy Italy allows use of the European standardized 40-channel 26/27 MHz CB band, plus a 34-channel allocation from 26.875 MHz to 27.245 MHz, giving Italian HF CB a total of 49 channels between to two CB bands (Band I1: 26.965–27.405, Band I2: 26.875–27.245). In addition to this, Italy has a "VHF CB" allocation at 43 MHz, usually called "Apparati a 43 MHz" or "CB 43 MHz". Italy, like many other countries, suffers from extremely lax enforcement of radio communications laws, and "freeband" modified equipment covering wider frequency ranges as well as amplifiers are widely available and openly advertised by communications equipment vendors. "Freebanding" occurs with both the 27 MHz area (often as low as 25 MHz and as high as 30 MHz) and the 43 MHz area (as 43 MHz CB equipment is often modified to cover down to 34 MHz and up to 47 MHz, using 12.5 kHz steps). There is evidence of these frequencies being used outside of Italy for illegal "CB-like" operations. Italian 43 MHz "VHF CB" or "43 MHz CB" allocation. 24 channels, FM mode, 12.5 kHz channel spacing. Each channel has a "recommended use" associated with it. Portable handheld (walkie-talkie), in-vehicle mobile and base station transceivers are available for this band. 
Channels are numbered in straight sequence, however many transceivers marketed for this band also include a frequency display. Due to the low-VHF band frequency characteristics of this band, it is often used as an adjunct to, or replacement for, the traditional 26–27 MHz CB allocations. 43.3000 MHz – Rescue, Road/Traffic Control, Forestry, Hunting, Fishing, Security 43.3125 MHz – Rescue, Road/Traffic Control, Forestry, Hunting, Fishing, Security 43.3250 MHz – Rescue, Road/Traffic Control, Forestry, Hunting, Fishing, Security 43.3375 MHz – Rescue, Road/Traffic Control, Forestry, Hunting, Fishing, Security 43.3500 MHz – Rescue, Road/Traffic Control, Forestry, Hunting, Fishing, Security 43.3625 MHz – Rescue, Road/Traffic Control, Forestry, Hunting, Fishing, Security 43.3750 MHz – Industrial, Commercial, Agricultural, Crafts 43.3875 MHz – Industrial, Commercial, Agricultural, Crafts 43.4000 MHz – Industrial, Commercial, Agricultural, Crafts 43.4125 MHz – Industrial, Commercial, Agricultural, Crafts 43.4250 MHz – Industrial, Commercial, Agricultural, Crafts 43.4375 MHz – Industrial, Commercial, Agricultural, Crafts 43.4500 MHz – For safety of life at sea, Marine Use (Ship-to-Ship/Ship-to-Shore), Marinas and Harbors 43.4625 MHz – For safety of life at sea, Marine Use (Ship-to-Ship/Ship-to-Shore), Marinas and Harbors 43.4750 MHz – For safety of life at sea, Marine Use (Ship-to-Ship/Ship-to-Shore), Marinas and Harbors 43.4875 MHz – For safety of life at sea, Marine Use (Ship-to-Ship/Ship-to-Shore), Marinas and Harbors 43.5000 MHz – To aid in the administration of sports and other competitive activities 43.5125 MHz – To aid in the administration of sports and other competitive activities 43.5250 MHz – To aid in the administration of sports and other competitive activities 43.5375 MHz – to aid in the administration of sports and other competitive activities 43.5500 MHz – For use by health professionals, doctors, hospitals, and activities related to them. 43.5625 MHz – For use by health professionals, doctors, hospitals, and activities related to them. 43.5750 MHz – For use by health professionals, doctors, hospitals, and activities related to them. 43.5875 MHz – For use by health professionals, doctors, hospitals, and activities related to them. Japan Japan has several services in the VHF and UHF bands: Japan's or SLPR service covers a variety of low-power uses, and does not require registration. Walkie-talkies are limited to 10 mW in the 420, 421, and 422 MHz bands. Simplex: 422.2000–422.3000 MHz (Leisure use), 10 mW, 9 channels, 12.5 kHz spacing Simplex: 422.0500–422.1750 MHz (Business use), 10 mW, 11 channels, 12.5 kHz spacing Duplex: 421.8125–421.9125 MHz (paired with 440.2625–440.3625 MHz) (Leisure use), 10 mW, 12.5 kHz spacing Duplex: 421.575-421.800 MHz (paired with 440.025-440.250 MHz) (Business use), 10 mW, 12.5 kHz spacing covers 351 MHz UHF digital service for leisure and business use. Radios must be registered and equipped with a built-in control ROM for automatic digital callsign identification. 351.16875–351.19375 MHz, digital voice and data, 1 watt, 5 channels, 6.25 kHz spacing 351.20000–351.38125 MHz, digital voice and data, 5 watts, 30 channels, 6.25 kHz spacing 142 & 146 MHz VHF-Digital "Personal Radio" service for Personal, Leisure and Family use. Radios must be equipped with a GPS and built-in control ROM for automatic digital callsign identification. 
142.934375–142.984375 MHz, 146.934375–146.984375 MHz, digital voice and GPS data, 0.5 watts, 18 channels, 6.25 kHz spacing Malaysia On 1 April 2010, the Malaysian Communications and Multimedia Commission (MCMC) introduced PMR446 (446.00625 MHz to 446.093750 MHz and 446.103125 MHz to 446.196875 MHz) in addition to 26.965 MHz to 27.405 MHz as a class assignment. Subsequently, the MCMC revoked the 477 MHz personal radio service as a class assignment on 1 January 2022. Norway Norway has a short-distance radio service (called "KDR444" to distinguish it from similar services such as PMR446) with six UHF FM channels between 444.600 and 444.975 MHz. Some dual-mode KDR/PMR radios are sold, but they are only usable in Sweden and Norway. Norway also has a 6-channel VHF FM Jaktradio (Hunting Radio) service, with a maximum power of 5 watts. Because both Norway and Sweden have high-band VHF FM hunting allocations (see the section on Sweden below), many hunting radios are marketed with both the 6 Norwegian channels and the 7 Swedish channels in one unit. 143.900 MHz 139.400 MHz 143.350 MHz 138.850 MHz 143.250 MHz 138.750 MHz Philippines The Philippines has a radio service for use by families and small businesses. This service is called SRRS or Short Range Radio Service. Repeaters are not permitted, and units are limited to 2.5 watts. This service has been allocated 40 channels at 325 MHz: Singapore Since 3 February 2004, the Infocomm Media Development Authority of Singapore (IMDA) has allocated the 446.0–446.1 MHz frequency band for low-powered walkie-talkies on a non-interference, non-protected and shared-use basis. As these walkie-talkies are low-powered devices which do not potentially cause interference to other licensed radio services, they need not be licensed for use in Singapore. However, the devices must be type approved by IMDA for local sale. These personal radios (or walkie-talkies in local parlance) are generally programmed with the first 8 channels of the PMR446 frequencies. South Africa South Africa is in the process of conforming to ITU Region 1 recommendations. Eight channels are currently allowed in the 446.0–446.1 MHz band, the same as the European PMR446 allocation. There are two HF AM/SSB CB bands available for use in South Africa, 27 MHz and 29 MHz. Some transceivers sold include both "27 Megs" and "29 Megs", while others include only one band or the other. South Africa also has a 23-channel allocation between 29.710 MHz and 29.985 MHz. This service is commonly referred to as "29 Megs" or "29 MHz CB". Channels are 12.5 kHz apart. This service is shared between land mobile users, marine users, and civil defense/rural firefighting users. AM mode may be used on any channel, while SSB may only be used on the specified channels 29.7475 MHz, 29.8225 MHz, 29.8475 MHz, 29.9475 MHz and 29.985 MHz. Maximum output power is 5 watts carrier power in AM mode and 12 watts PEP power in SSB mode. These frequencies are heavily used by the 4x4 and Land Rover communities as well as by safari companies. The 4-Wheel Drive Club of South Africa (4WDC) uses its own internal channel numbering scheme for these frequencies. Three of these channels (known as 1/A, 2/B and 3/C) are designated for use by marine operators. These frequencies offer excellent range over water (20 – 30 km or more depending on antenna installation) and are heavily used by ski boats and fishing clubs, often as an adjunct to the internationally allocated Marine VHF radio band. 
South Africa's use of 29 MHz for marine purposes is similar to Australia's use of 27 MHz for marine radio. Australia allows use of the standard 26.965–27.405 MHz AM/SSB 40 channel CB band for land mobile (and marine) communications plus 10 marine-only AM channels in the 27.68–27.98 MHz band. Many coast guard stations monitor 27 Meg channel 88 (27.880 MHz) in addition to VHF channel 16 (156.800 MHz) for distress, safety information and calling. Several 27 MHz marine transceivers sold in Australia are also available in South Africa (programmed for the South African 29 MHz frequencies). This allows for one transceiver to be sold in both Australia (27 MHz marine frequencies) and South Africa (29 MHz marine frequencies). 29.7100 MHz – Mobile (AM) 29.7225 MHz – Mobile (AM) 29.7350 MHz – Mobile (AM) 29.7475 MHz – Civil Defense – Base/Mobile (AM/SSB) 29.7600 MHz – Mobile (AM) – 4WDC Channel 4 29.7725 MHz – Marine "Channel B" or "Channel 2" and Mobile (AM only) – 4WDC Channel 5 29.7850 MHz – Mobile (AM) 29.7975 MHz – Mobile (AM) 29.8100 MHz – Mobile (AM) 29.8225 MHz – Civil Defense – Base/Mobile (AM/SSB) – 4WDC Channel 3 29.8350 MHz – Mobile (AM) 29.8475 MHz – Civil Defense (Primary) – Base/Mobile (AM/SSB) 29.8600 MHz – Mobile (AM) 29.8725 MHz – Mobile (AM) – 4WDC Channel 2 29.8850 MHz – Mobile (AM) 29.8975 MHz – Mobile (AM) – 4WDC Channel 1 29.9100 MHz – Mobile (AM) 29.9225 MHz – Mobile, Rural Fire Fighting Use (AM) 29.9350 MHz – Marine "Channel A" or "Channel 1" (AM) – Marine Calling Channel 29.9475 MHz – Civil Defense – Base/Mobile (AM/SSB) 29.9600 MHz – Mobile (AM) 29.9725 MHz – Marine "Channel C" or "Channel 3" and Mobile (AM only) 29.9850 MHz – Civil Defense – Base/Mobile (AM/SSB) South Africa allows use of nine 27 MHz CB frequencies between 27.185 MHz and 27.275 MHz (27 MHz CB channels 19–27). Maximal output power is 5 watts carrier power in AM mode and 12 watts PEP power in SSB mode. As with the 29 MHz allocation, each frequency is assigned to either AM only or SSB operation. Channels 19–22 (27.185–27.225 MHz) are designated for AM use and channels 23–27 (27.255–27.275 MHz) are designated for SSB use. South Korea UHF-FM 448.750–449.2625 MHz band allocation similar to the American FRS and European PMR446 services. 25 channel FM with 500 mW maximal power output. 12.5 kHz channel spacing with skips. As with FRS/GMRS and PMR446, the use of tone squelch systems such as CTCSS/DCS is encouraged. Like the PMR446, LPD433, Japan's 421–422 MHz SLPR service and KDR444 services, use of these frequencies in countries such as the United States is illegal without an amateur radio license as they fall within the 420–450 MHz 70 cm ham radio allocation. 448.7500 MHz 448.7625 MHz 448.7750 MHz 448.7875 MHz 448.8000 MHz 448.8125 MHz 448.8250 MHz 448.8375 MHz 448.8500 MHz 448.8625 MHz 448.8750 MHz 448.8875 MHz 448.9000 MHz 448.9125 MHz 448.9250 MHz 449.1500 MHz 449.1625 MHz 449.1750 MHz 449.1875 MHz 449.2000 MHz 449.2125 MHz 449.2250 MHz 449.2375 MHz 449.2500 MHz 449.2625 MHz Sweden Similar to Norway, Sweden also has Jaktradio (Hunting Radio) allocations, 6 channels in the 155.400–155.525 MHz band and 40 channels in the 30–31 MHz band. In addition to these, the general purpose channel 156.000 MHz is available. 156.000 MHz is sometimes included as Marine VHF radio "channel 0" or "M1" (private) channel, allowing communication between portables and radios mounted in boats. Modulation is FM, maximal power 4 watts for 30–31 MHz and 5 watts for 155 MHz. 
The 30/31 MHz band is sometimes referred to as "PR31" or "Jakt 31 MHz" to avoid confusion with the 155 MHz Jaktradio frequencies. Due to the proximity of this band in frequency to the 26–27 MHz CB band and the wide availability of modified CB or 10-meter amateur radio transceivers that cover up to 32 MHz, some operations on these channels involve the use of modified equipment. Handheld radios are available for these frequencies, with some models including both the 30/31 MHz band and the 155 MHz band. VHF-FM High Band 155 MHz Jaktradio: 155.400 MHz 155.425 MHz 155.450 MHz 155.475 MHz 155.500 MHz 155.525 MHz 156.000 MHz (general-purpose channel sometimes grouped with the other hunting channels) VHF-FM Low Band 31 MHz Jaktradio: 30.930 MHz 30.940 MHz 30.950 MHz 30.960 MHz 30.970 MHz 31.030 MHz 31.040 MHz 31.050 MHz 31.060 MHz 31.070 MHz 31.080 MHz 31.090 MHz 31.100 MHz 31.110 MHz 31.120 MHz 31.130 MHz 31.140 MHz 31.150 MHz 31.160 MHz 31.170 MHz 31.180 MHz 31.190 MHz 31.200 MHz 31.210 MHz 31.220 MHz 31.230 MHz 31.240 MHz 31.250 MHz 31.260 MHz 31.270 MHz 31.280 MHz 31.290 MHz 31.300 MHz 31.310 MHz 31.320 MHz 31.330 MHz 31.340 MHz 31.350 MHz 31.360 MHz 31.370 MHz Thailand Thailand has two 80 channel and 160 channel CB-style services, one in mid-band VHF 78 MHz band and another at high-band VHF 245 MHz band. Both services use FM mode only. Per Thai law, 78 MHz transceivers must have a yellow case. 245 MHz transceivers must have a red case. The HF (26 – 27 MHz band) CB service is not allowed in Thailand. 78 MHz takes the place of traditional 27 MHz CB for truckers, etc. The 78 MHz CB allocation is primarily used by mobile and base stations, although handheld radios for 78 MHz are available. The lower frequency allows for longer communication range in rural and suburban areas compared to the 245 MHz service. 78 MHz is popular with trucking companies, buses, taxi companies and other transportation users, often in conjunction with 245 MHz. This service is commonly referred to as "CB78", "VHF78", "CB 78 MHz" or simply "78 MHz". Frequency allocation is between 78.0000 and 78.9875 MHz. Channels are spaced 12.5 kHz apart for a total of 80 channels in straight numerical sequence (channel 1 is 78.0000 MHz, channel 80 is 78.9875 MHz). Units are allowed up to 10 watts PEP RF power. External high-gain antennas are permitted. Base station antennas are permitted and base stations are commonly found in this band. Use of selective calling and tone squelch systems such as DTMF, CTCSS and DCS are allowed. According to Thai law, transceivers operating on the 78 MHz band must have a yellow case. The 245 MHz CB allocation is more popular than the 78 MHz service, especially in urban areas. This service is commonly referred to as "CB245", "VHF245" or "VHF CB 245 MHz". Frequency allocation is between 245.0000 and 246.9875 MHz. Channels are spaced 12.5 kHz apart for a total of 160 channels in straight numerical sequence (channel 1 is 245.0000 MHz, channel 160 is 246.9875 MHz). Units are allowed up to 10 watts PEP RF power. External high-gain antennas are permitted. Base station antennas are permitted. Besides personal use, the equipment is used by search and rescue organizations, businesses, security guards, taxi companies and delivery services. In urban areas, simplex repeaters, usually mounted on the roofs of high-rise buildings, are used to increase communication range. CTCSS and DCS are often used due to heavy channel congestion in urban areas. 
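Because both Thai allocations use straight numerical channel sequences with 12.5 kHz spacing, the channel-to-frequency mapping is simple arithmetic. The short Python sketch below is only an illustration based on the channel plans described above; the function and variable names are the editor's own and do not come from any official source.

```python
# Channel-to-frequency conversion for the Thai 78 MHz and 245 MHz CB allocations,
# both of which use straight numerical sequences with 12.5 kHz channel spacing.

CHANNEL_STEP_MHZ = 0.0125  # 12.5 kHz

def thai_cb_frequency(channel, band):
    """Return the carrier frequency in MHz for a Thai CB channel.

    band: "78"  (80 channels,  78.0000-78.9875 MHz) or
          "245" (160 channels, 245.0000-246.9875 MHz).
    """
    plans = {"78": (78.0000, 80), "245": (245.0000, 160)}
    base, count = plans[band]
    if not 1 <= channel <= count:
        raise ValueError(f"channel must be 1-{count} for the {band} MHz band")
    return round(base + (channel - 1) * CHANNEL_STEP_MHZ, 4)

print(thai_cb_frequency(1, "78"))     # 78.0
print(thai_cb_frequency(80, "78"))    # 78.9875
print(thai_cb_frequency(160, "245"))  # 246.9875
```

Running the sketch reproduces the band edges quoted above (channel 80 at 78.9875 MHz and channel 160 at 246.9875 MHz), confirming the straight-sequence arithmetic.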
Operating rules are less restrictive than amateur radio service, with an initial license fee required. According to Thai law, transceivers operating on the 245 MHz band must have a red case. There are an estimated one million users of the 245 MHz VHF CB service, often in large cities. Taiwan Some manufacturers in Taiwan have radios that carry both American FRS and GMRS frequencies, and have additional channels 1 to 99. Channels 1 to 14 are well-known, while channels 15 to 99 are less popular. While radios designed for the Taiwan market have FRS/GMRS frequencies as part of their channel plan, it is still technically illegal to use equipment designed for the Taiwan market in the United States. FRS, 14 channels, 12.5 kHz spacing: 467.5125 MHz 467.5250 MHz 467.5375 MHz 467.5500 MHz – US GMRS Repeater Ch 15/23 Input 467.5625 MHz – US FRS Channel 8 467.5750 MHz – US GMRS Repeater Ch 16/24 Input 467.5875 MHz – US FRS Channel 9 467.6000 MHz – US GMRS Repeater Ch 17/25 Input 467.6125 MHz – US FRS Channel 10 467.6250 MHz – US GMRS Repeater Ch 18/26 Input 467.6375 MHz – US FRS Channel 11 467.6500 MHz – US GMRS Repeater Ch 19/27 Input 467.6625 MHz – US FRS Channel 12 467.6750 MHz – US GMRS Repeater Ch 20/28 Input United States In addition to the UHF FRS and GMRS allocation and the high-HF CB allocation, in 2000, the American FCC allocated five VHF channels to the Multi-Use Radio Service (MURS). Like CB, MURS frequencies may be used for business or personal/family communications. Two of these frequencies were re-allocated from the Business/Industrial Radio Pool (Business Radio Service). These two frequencies were often used illegally by businesses as they were/are part of the "color dot" frequencies that handheld "on-site" business radios come pre-programmed with. Channels 1–3 were not allocated as part of the "color dot" frequencies and therefore do not generally have "grandfathered" business users on them (see below). 151.82 MHz 151.88 MHz 151.94 MHz 154.57 MHz "Blue Dot" 154.6 MHz "Green Dot" All five channels now see significant use by businesses, as well as mobile-to-mobile users. Use of squelch systems such as CTCSS and DCS on the MURS frequencies is encouraged to facilitate frequency sharing. Voice and data transmissions are permitted on the MURS frequencies. AM and FM modes are permitted on the MURS frequencies for both data and voice transmission (see FCC rules Part 95.631). However, store-and-forward digital operations are not permitted and transmitters must not operate in continuous-carrier (constant transmit) mode. External antennas are permitted, transmitter output power is limited to 2 watts. MURS is often used for data transmissions as well as portable and mobile voice communications, due to the external high-gain antenna provision, MURS offers the possibility of greater range than FRS. As with CB, FRS and GMRS, there are reports of users using higher-than-legal power levels on the MURS frequencies. Using HF range Citizens Band radio is a family of services available in different countries and with different operating rules, generally using channels in the 27 MHz part of the radio spectrum. 26–27 MHz occupies the "boundary area" between HF (3–30 MHz) and VHF (30–300 MHz). This means that CB signals provide local coverage similar to low-band VHF during times of low sunspot activity. However, during the peak of the sunspot cycle, CB frequencies exhibit skywave propagation just like the lower parts of HF do, making communication hundreds or even thousands of miles (km) away possible. 
While some operators seek out long distance "DX" contacts on CB frequencies and on frequencies above channel 40 and below channel 1 (a practice referred to as "freebanding" or "outbanding"), interference from distant stations will often make local communication extremely difficult or impossible during band openings. CB was, and still is, designed for short-distance (local) communications needs. US FCC rules formerly prohibited communicating with any station more than 250 km (155.3 miles) away on CB frequencies; this distance rule was deleted by the FCC in September 2017. Like many rules regarding the HF CB services, the distance prohibition was largely ignored and unenforced while it was in force. Often as a result of channel overcrowding and interference, many HF CB users have turned to purchasing "export" or "10-meter" radios that operate in the legal CB band but also provide access to frequencies above and below the CB band. Other CB users purchase amplifiers to increase their output power and "punch through" interference caused by distant stations (or by local stations running amplifiers). This has created a tragedy of the commons situation in and near the HF CB spectrum (25–28 MHz). As more users purchase amplifiers and operate on out-of-band frequencies, more interference is produced, forcing others to acquire even more powerful equipment to punch through even more interference, and/or to acquire transceivers capable of accessing more frequencies so that a clear frequency may be found. Many CB users have moved to other personal radio services to avoid these issues. When first developed in the United States, CB operation required an individual license fee. After the surge in popularity in the mid-1970s, the licensing requirement was dropped. Other countries passed legislation to allow use of similar frequencies and operating modes. The 26.965–27.405 MHz 40-channel American channel plan serves as the frequency plan for many other countries' HF CB services, including Canada, Australia, Mexico, most of Central and South America and the European Union. FM may be used throughout Europe on the standard 40 channels – called the "mid band" or "CEPT" channels. Many countries also allow AM in addition to FM, or AM/FM/SSB, with various power limits. European standardization allows a maximum of 4 watts FM power or 1 watt AM carrier power. Others, such as Germany, Russia, and Brazil, allow more than 40 legal channels. Germany has an 80-channel allocation – the 40 CEPT/American channels plus 40 channels from 26.565 to 26.955 MHz in straight 10 kHz sequence. Germany only allows FM on channels 41–80. Brazil allows 80 channels from 26.965 to 27.855 MHz with AM/SSB permitted, and allows higher power levels than the US and most of Europe. New Zealand has two 40-channel HF CB bands available, the NZ-specific "NZ CB Band" 26.330–26.770 MHz (40 channels, AM and SSB allowed) and the standardized "mid band" 26.965–27.405 MHz (40 channels, AM and SSB allowed), for a total of 80 HF CB channels. In Russia, HF CB radio is extremely popular, especially with taxicab, trucking, delivery and general transportation users. Due to the sheer size of Russia and the remoteness of many Russian communities, CB radio is an important resource. Russian CB operators and clubs have installed several simplex repeaters on mountaintops or the roofs of high-rise apartment buildings to increase the communication range of low-powered mobile CB radios. 
Some of these repeaters feature CTCSS/PL tone protection, remote control via DTMF, linking via Internet gateways, simulcasting via several repeaters at once, and cross-band repeat connections to the UHF PMR446 service. Cities such as Moscow and St. Petersburg feature motorist emergency services that are directly accessible via a CB frequency monitored by police and other emergency services. Other services provided by cities in Russia include weather broadcasts and travel/traffic information and warnings via specific CB channels. Due to the rapid proliferation of "open-banded" export equipment (sold as "10-meter" or "multi-norm" radios) that cover wide frequency ranges, Russia allows for two overlapping sets of 3 bands of 40 channels, for a total of 240 channels. 26.515–26.955 MHz, 26.965–27.405 MHz and 27.415–27.855 MHz make up the first set. These are referred to as the "European channels" or the "fives" due to the frequency ending in 5. The second set, known as the "Polish channels" or the "zeros" is 26.510 – 26.950 MHz, 26.960–27.400 MHz and 27.410–27.850 MHz. Multi-norm radios sold in Europe and Asia designed for use in several different countries – with the end-user selecting the "mode" of the country they live in – now come with a "RU" (Russia) mode that opens the radio to full band coverage (usually 25.615–28.305 MHz or 25.615–30.105 MHz). It is important to note that only 26.515–27.855 / 26.510–27.850 MHz is legally permitted in Russia. Each channel is given an alphanumeric identifier, with the three bands being labeled B-C-D, then the channel number and the last digit of the frequency labeled "E" for the 5 and "P" for the 0. The final letter(s) indicate the mode (AM or FM). For example, 27.185 MHz AM (Channel 19 in the European/American frequency plan) would be designated "C19EA" or "C19EAM". 27.180 MHz FM (Channel 19 on the Polish assignment) would be designated "C19PF" or "C19PFM". The B-C-D (or "grid") designation comes from common export radio band labeling. Originally these radios would feature 5 bands labeled A-B-C-D-E, with coverage from 26.065 to 28.305 MHz, later these radios switched to a 6 band configuration A-B-D-C-E-F with coverage from 25.615 to 28.305 MHz, making 26.965–27.405 MHz band D instead of band C. Originally, Russia (and most other Eastern European/CIS countries) used the zero frequency offset in line with the Polish frequency plan (channel 1 being 26.960 MHz). However, in the past 4–5 years, most Russians have switched to the standardized European or American "fives" offset (channel 1 being 26.965 MHz). Due to the use of both frequency plans, many radios sold in the Russian/Eastern European market come with a -5 kHz or +5 kHz switch to quickly change from one channel plan to the other. See also Business band Unlicensed Personal Communications Services References Bandplans Radio technology
Personal radio service
[ "Technology", "Engineering" ]
8,951
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
35,021,032
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Szemer%C3%A9di%20theorem
In arithmetic combinatorics, the Erdős–Szemerédi theorem states that for every finite set A of integers, at least one of the sets A + A and A·A (the sets of pairwise sums and pairwise products, respectively) forms a significantly larger set. More precisely, the Erdős–Szemerédi theorem states that there exist positive constants c and ε such that, for any non-empty set A of integers, max(|A + A|, |A·A|) ≥ c|A|^(1+ε). It was proved by Paul Erdős and Endre Szemerédi in 1983. The notation |A| denotes the cardinality of the set A. The set of pairwise sums is A + A = {a + b : a, b ∈ A} and is called the sumset of A. The set of pairwise products is A·A = {ab : a, b ∈ A} and is called the product set of A; it is also written AA. The theorem is a version of the maxim that additive structure and multiplicative structure cannot coexist. It can also be viewed as an assertion that the real line does not contain any set resembling a finite subring or finite subfield; it is the first example of what is now known as the sum-product phenomenon, which is now known to hold in a wide variety of rings and fields, including finite fields. Sum-Product Conjecture The sum-product conjecture informally says that one of the sum set or the product set of any set must be nearly as large as possible. It was originally conjectured by Erdős in 1974 to hold whether A is a set of integers, reals, or complex numbers. More precisely, it proposes that, for any set A, one has max(|A + A|, |A·A|) ≥ |A|^(2−o(1)). The asymptotic parameter in the o(1) notation is |A|. Examples If A = {1, 2, 3, ..., N}, then |A + A| = 2|A| − 1 = O(|A|), using asymptotic notation with |A| as the asymptotic parameter. Informally, this says that the sum set of A does not grow. On the other hand, the product set of A satisfies a bound of the form |A·A| ≥ |A|^(2−ε) for all ε > 0 (for |A| sufficiently large). This is related to the Erdős multiplication table problem. The best lower bound on |A·A| for this set is due to Kevin Ford. This example is an instance of the Few Sums, Many Products version of the sum-product problem of György Elekes and Imre Z. Ruzsa. A consequence of their result is that any set with small additive doubling (such as an arithmetic progression) has the lower bound |A·A| ≫ |A|²/log|A| on the product set. Xu and Zhou proved a corresponding bound for any dense subset of an arithmetic progression in the integers, which is sharp up to the o(1) term in the exponent. Conversely, the set A = {1, 2, 4, ..., 2^(N−1)} satisfies |A·A| = 2|A| − 1 = O(|A|), but has many sums: |A + A| = |A|(|A| + 1)/2 = Θ(|A|²). This bound comes from considering the binary representation of a number. The set A is an example of a geometric progression. For a random set of N numbers, both the product set and the sumset have cardinality Θ(|A|²); that is, with high probability, neither the sumset nor the product set generates repeated elements. Sharpness of the conjecture Erdős and Szemerédi give an example of a sufficiently smooth set of integers for which both the sumset and the product set have size o(|A|²). This shows that the o(1) term in the conjecture is necessary. Extremal cases Often studied are the extreme cases of the hypothesis: few sums, many products (FSMP): if |A + A| = O(|A|), then |A·A| ≥ |A|^(2−o(1)), and few products, many sums (FPMS): if |A·A| = O(|A|), then |A + A| ≥ |A|^(2−o(1)). History and current results The following table summarizes progress on the sum-product problem over the reals. The exponents 1/4 of György Elekes and 1/3 of József Solymosi are considered milestone results within the citing literature. All improvements after 2009 are of the form 1/3 + c for small constants c > 0, and represent refinements of the arguments of Konyagin and Shkredov. Complex numbers Proof techniques involving only the Szemerédi–Trotter theorem extend automatically to the complex numbers, since the Szemerédi–Trotter theorem holds over the complex numbers by a theorem of Tóth. Konyagin and Rudnev matched the exponent of 4/3 over the complex numbers. The results with exponents exceeding 4/3 have not been matched over the complex numbers. 
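The contrast between the interval and the geometric progression in the Examples section above can be checked directly by brute force for small N. The following Python sketch is an illustration added by the editor, not part of the original article; it simply enumerates the sumset and product set of each example set.

```python
# Brute-force illustration of the sum-product phenomenon for two small sets:
# the interval {1,...,N} has a small sumset but a large product set, while the
# geometric progression {1,2,4,...,2^(N-1)} has a small product set but a large sumset.

def sumset_size(A):
    return len({a + b for a in A for b in A})

def product_set_size(A):
    return len({a * b for a in A for b in A})

N = 64
interval = range(1, N + 1)
powers_of_two = [2 ** i for i in range(N)]

print("interval:      |A+A| =", sumset_size(interval),
      " |A*A| =", product_set_size(interval))
print("powers of two: |A+A| =", sumset_size(powers_of_two),
      " |A*A| =", product_set_size(powers_of_two))
# Expected: |A+A| = 2N-1 = 127 for the interval and |A*A| = 2N-1 = 127 for the
# geometric progression, while the other set in each case grows roughly like N^2.
```

In both cases one of the two sets stays of linear size while the other grows nearly quadratically, which is exactly the dichotomy the sum-product conjecture asserts must always occur.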
Over finite fields The sum-product problem is particularly well-studied over finite fields. Motivated by the finite field Kakeya conjecture, Wolff conjectured that, for every subset A of F_p of size at most √p, where p is a (large) prime, max(|A + A|, |A·A|) ≥ c|A|^(1+ε) for an absolute constant ε > 0. This conjecture had also been formulated in the 1990s by Wigderson, motivated by randomness extractors. Note that the sum-product problem cannot hold in finite fields unconditionally, due to the following example: Example: Let F be a finite field and take A = F. Then, since F is closed under addition and multiplication, A + A = A·A = F, and so |A + A| = |A·A| = |A|. This pathological example extends to taking A to be any sub-field of the field in question. Qualitatively, the sum-product problem has been solved over finite fields: Theorem (Bourgain, Katz, Tao (2004)): Let p be prime and let A ⊂ F_p with |A| < p^(1−δ) for some δ > 0. Then max(|A + A|, |A·A|) ≥ c|A|^(1+ε) for some ε = ε(δ) > 0. Bourgain, Katz, and Tao extended this theorem to arbitrary fields. Informally, the following theorem says that if a sufficiently large set does not grow under either addition or multiplication, then it is mostly contained in a dilate of a sub-field. Theorem (Bourgain, Katz, Tao (2004)): Let A be a subset of a finite field F so that |A + A| + |A·A| ≤ K|A| for some K ≥ 1, and suppose that |A| is sufficiently large in terms of K. Then there exists a sub-field G with |G| ≤ K^C|A|, a non-zero element x, and a set X with |X| ≤ K^C so that A ⊆ xG ∪ X. They suggest that the constant C may be made absolute. Quantitative results towards the finite field sum-product problem in F typically fall into two categories: when A is small or large with respect to the characteristic of F. This is because different types of techniques are used in each setting. Small sets In this regime, let F be a field of characteristic p. Note that the field is not always finite. When this is the case, and the characteristic of F is zero, then the p-constraint is omitted. In fields with non-prime order, the p-constraint on A can be replaced with the assumption that A does not have too large an intersection with any subfield. The best work in this direction is due to Li and Roche-Newton. Large sets When A is large with respect to p, for p prime, the sum-product problem is considered resolved due to the following result of Garaev: Theorem (Garaev (2007)): Let A ⊆ F_p. Then max(|A + A|, |A·A|) ≫ min(√(p|A|), |A|²/√p). This is optimal in the range |A| ≥ p^(2/3). This result was extended to finite fields of non-prime order by Vinh in 2011. Variants and generalizations Other combinations of operators Bourgain and Chang proved unconditional growth for sets A of integers, as long as one considers enough sums or products: Theorem (Bourgain, Chang (2003)): Let b ≥ 1. Then there exists k = k(b) so that for all finite sets A of integers, one has |kA| + |A^(k)| ≥ |A|^b, where kA denotes the k-fold sumset and A^(k) the k-fold product set of A. In many works, addition and multiplication are combined in one expression. With the motto addition and multiplication cannot coexist, one expects that any non-trivial combination of addition and multiplication of a set should guarantee growth. Note that in finite settings, or in fields with non-trivial subfields, such a statement requires further constraints. Sets of interest include several such combinations (with results stated for real A), for which quantitative growth bounds have been established by Stevens and Warren (for two such expressions), by Murphy, Roche-Newton and Shkredov, and by Stevens and Rudnev. See also Sum-free set Restricted sumset Additive combinatorics Additive number theory Multiplicative number theory References External links Additive combinatorics Sumsets Theorems in discrete mathematics Theorems in number theory
Erdős–Szemerédi theorem
[ "Mathematics" ]
1,499
[ "Discrete mathematics", "Additive combinatorics", "Theorems in discrete mathematics", "Combinatorics", "Theorems in number theory", "Sumsets", "Mathematical problems", "Mathematical theorems", "Number theory" ]
43,198,110
https://en.wikipedia.org/wiki/Random%20algebra
In set theory, the random algebra or random real algebra is the Boolean algebra of Borel sets of the unit interval modulo the ideal of measure zero sets. It is used in random forcing to add random reals to a model of set theory. The random algebra was studied by John von Neumann in 1935 (in work published later), who showed that it is not isomorphic to the Cantor algebra of Borel sets modulo meager sets. Random forcing was introduced by Solovay in 1970. See also Random number References Boolean algebra Forcing (mathematics)
Random algebra
[ "Mathematics" ]
111
[ "Boolean algebra", "Forcing (mathematics)", "Fields of abstract algebra", "Mathematical logic" ]
43,203,128
https://en.wikipedia.org/wiki/Ultrafast%20electron%20diffraction
Ultrafast electron diffraction, also known as femtosecond electron diffraction, is a pump-probe experimental method based on the combination of optical pump-probe spectroscopy and electron diffraction. Ultrafast electron diffraction provides information on the dynamical changes of the structure of materials. It is very similar to time resolved crystallography, but instead of using X-rays as the probe, it uses electrons. In the ultrafast electron diffraction technique, a femtosecond (10⁻¹⁵ second) laser optical pulse excites (pumps) a sample into an excited, usually non-equilibrium, state. The pump pulse may induce chemical, electronic or structural transitions. After a finite time interval, a femtosecond electron pulse is incident upon the sample. The electron pulse undergoes diffraction as a result of interacting with the sample. The diffraction signal is subsequently detected by an electron counting instrument such as a charge-coupled device camera. Specifically, after the electron pulse diffracts from the sample, the scattered electrons form a diffraction pattern (image) on a charge-coupled device camera. This pattern contains structural information about the sample. By adjusting the time difference between the arrival (at the sample) of the pump and probe beams, one can obtain a series of diffraction patterns as a function of the various time differences. The diffraction data series can be concatenated in order to produce a motion picture of the changes that occurred in the sample. Ultrafast electron diffraction can provide a wealth of information on the dynamics of charge carriers, atoms, and molecules. History The design of early ultrafast electron diffraction instruments was based on X-ray streak cameras, with the first reported ultrafast electron diffraction experiment demonstrating an electron pulse length of 100 picoseconds (10⁻¹⁰ seconds). The temporal resolution of ultrafast electron diffraction has since been reduced to the attosecond (10⁻¹⁸ second) time scale to perform attosecond electron diffraction measurements which reveal electron motion dynamics. Electron Pulse Production The electron pulses are typically produced by the process of photoemission, in which a femtosecond optical pulse is directed toward a photocathode. If the incident laser pulse has an appropriate energy, electrons will be ejected from the photocathode through a process known as photoemission. The electrons are subsequently accelerated to high energies, ranging from tens of kiloelectron-volts to several megaelectron-volts, using an electron gun. Electron Pulse Compression Generally, two methods are used to compress electron pulses and thereby overcome the pulse-width expansion caused by Coulomb repulsion. Generating high-flux ultrashort electron beams has been relatively straightforward, but achieving pulse durations below a picosecond has proved extremely difficult due to space-charge effects. Space-charge interactions increase in severity with bunch charge and rapidly act to broaden the pulse duration, which has resulted in an apparently unavoidable trade-off between signal (bunch charge) and time resolution in ultrafast electron diffraction experiments. Radio-frequency (RF) compression has emerged as a leading method of reducing the pulse expansion in ultrafast electron diffraction experiments, achieving temporal resolution well below 50 femtoseconds. Shorter electron beams, below 10 femtoseconds, are ultimately required to probe the fastest dynamics in solid state materials and to observe gas-phase molecular reactions. 
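As a point of reference for the diffraction geometry, the electron wavelength at the accelerating voltages quoted above (tens of kiloelectron-volts to several megaelectron-volts) can be estimated from the relativistic de Broglie relation. The short Python sketch below is an illustrative calculation added by the editor, not taken from the article; it uses the standard formula λ = hc / √(E_k(E_k + 2 m_e c²)) with E_k the kinetic energy gained in the gun.

```python
import math

# Relativistic de Broglie wavelength of an electron accelerated through V volts.
# lambda = h*c / sqrt(E_k * (E_k + 2*m_e*c^2)), with E_k = e*V the kinetic energy.
HC_EV_NM = 1239.841984               # h*c in eV*nm
ELECTRON_REST_ENERGY_EV = 510998.95  # m_e*c^2 in eV

def electron_wavelength_pm(accelerating_voltage_volts):
    e_k = accelerating_voltage_volts                               # kinetic energy in eV
    pc = math.sqrt(e_k * (e_k + 2 * ELECTRON_REST_ENERGY_EV))      # momentum * c, in eV
    return HC_EV_NM / pc * 1000.0                                  # wavelength in picometres

for voltage in (30e3, 100e3, 3e6):   # 30 keV, 100 keV and 3 MeV beams
    print(f"{voltage / 1e3:8.0f} kV -> {electron_wavelength_pm(voltage):6.3f} pm")
# Roughly 7 pm at 30 kV, 3.7 pm at 100 kV, and well below 1 pm at MeV energies,
# i.e. much shorter than typical interatomic spacings, which is what makes
# diffraction from crystal lattices possible with these electron beams.
```

The wavelengths are orders of magnitude below interatomic distances, so the diffraction angles are small and the patterns recorded on the camera sample reciprocal space finely.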
Single shot For studying irreversible processes, a diffraction signal is obtained from a single electron bunch containing a large number of particles. Stroboscopic When studying reversible processes, especially weak signals caused by, e.g., thermal diffuse scattering, a diffraction pattern is accumulated from a very large number of electron bunches. Resolution The resolution of an ultrafast electron diffraction apparatus can be characterized both in space and in time. Spatial resolution comes in two distinct parts: real space and reciprocal space. Real space resolution is determined by the physical size of the electron probe on the sample. A smaller physical probe size can allow experiments on crystals that cannot feasibly be grown in large sizes. High reciprocal space resolution allows for the detection of Bragg diffraction spots that correspond to long-periodicity phenomena. It can be calculated with the following equation: Δs = ε_n / (λ_C σ_x), where Δs is the reciprocal space resolution, λ_C is the Compton wavelength of the electrons, ε_n is the normalized emittance of the electrons, and σ_x is the size of the probe on the sample. Temporal resolution is primarily a function of the bunch length of the electrons and the relative timing jitter between the pump and probe. See also Ahmed Zewail R. J. Dwayne Miller Time resolved crystallography References Sources Laser applications Diffraction
Ultrafast electron diffraction
[ "Physics", "Chemistry", "Materials_science" ]
980
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
43,203,961
https://en.wikipedia.org/wiki/Z-source%20inverter
A Z-source inverter is a type of power inverter, a circuit that converts direct current to alternating current. The circuit functions as a buck-boost inverter without making use of a DC-DC converter bridge, due to its topology. Impedance (Z) source networks efficiently convert power between source and load from DC to DC, DC to AC, and from AC to AC. The number of modifications and new Z-source topologies has grown rapidly since 2002. Improvements to the impedance networks by introducing coupled magnetics have also recently been proposed for achieving even higher voltage boosting while using a shorter shoot-through time. They include the Γ-source, T-source, trans-Z-source, TZ-source, LCCT-Z-source (which utilizes a high-frequency transformer connected in series with two DC-current-blocking capacitors), high-frequency transformer-isolated, and Y-source networks. Amongst them, the Y-source network is the most versatile and can be viewed as the generic network from which the Γ-source, T-source, and trans-Z-source networks are derived. The distinctive properties of this network open a new horizon for researchers and engineers to explore, expand, and modify the circuit for a wide range of power conversion applications. Types of inverters Inverters can be classified by their structure as follows: Single-phase inverter: This type of inverter consists of two legs or two poles. (A pole is a connection of two switches in which the source of one and the drain of the other are connected, with this common point brought out as a terminal.) Three-phase inverter: This type of inverter consists of three legs (poles), or four legs (three for the phases and one for neutral). Inverters are also classified based on the type of input source as follows: Voltage-source inverter (VSI): In this type of inverter, a constant voltage source acts as the input to the inverter bridge. The constant voltage source is obtained by connecting a large capacitor across the DC source. Current-source inverter (CSI): In this type of inverter, a constant current source acts as the input to the inverter bridge. The constant current source is obtained by connecting a large inductor in series with the DC source. Operation Normally, three-phase inverters have 8 vector states (6 active states and 2 zero states). There is an additional state known as the shoot-through state, during which the switches of one leg are short-circuited. In this state, energy is stored in the impedance network, and when the inverter is in its active state, the stored energy is transferred to the load, thus providing boost operation. This shoot-through state is prohibited in a VSI. Achieving the buck-boost facility in a ZSI requires pulse-width modulation. The normal sinusoidal pulse-width modulation (SPWM) is generated by comparing a triangular carrier wave with a reference sine wave. For shoot-through pulses, the carrier wave is compared with two complementary DC reference levels, and the resulting pulses are added to the SPWM. A ZSI therefore has two degrees of control freedom: the modulation index of the reference wave, which is the ratio of the amplitude of the reference wave to the amplitude of the carrier wave, and the shoot-through duty ratio, which can be controlled by the DC reference level. Advantages The advantages of the Z-source inverter are: The source can be either a voltage source or a current source. The DC source of a ZSI can be a battery, a diode rectifier or a thyristor converter, a fuel cell stack or a combination of these. The main circuit of a ZSI can either be the traditional VSI or the traditional CSI. It works as a buck-boost inverter. 
The load of a ZSI can be inductive or capacitive, or it can be another Z-source network. Disadvantages Typical inverters (VSI and CSI) have a few disadvantages: They operate only as a buck or as a boost converter, so the obtainable output voltage range is either smaller or greater than the input voltage, but not both. They are vulnerable to electromagnetic interference, and the devices can be damaged by open-circuit or short-circuit conditions. The combined system of a DC-DC boost converter and an inverter has lower reliability. The main switching devices of VSI and CSI are not interchangeable. Applications Renewable energy sources Electric vehicles Motor drives References Electrical circuits Inverters
Z-source inverter
[ "Engineering" ]
921
[ "Electrical engineering", "Electronic engineering", "Electrical circuits" ]
43,204,134
https://en.wikipedia.org/wiki/Cloud%20computing%20issues
Cloud computing enables users to access scalable and on-demand computing resources via the internet, utilizing hardware and software virtualization. It is a rapidly evolving technology capable of delivering extensible services efficiently, supporting a wide range of applications from personal storage solutions to enterprise-level systems. Despite its advantages, cloud computing also faces several challenges. Privacy concerns remain a primary issue, as users often lose direct control over their data once it is stored on servers owned and managed by cloud providers. This loss of control can create uncertainties regarding data privacy, unauthorized access, and compliance with regional regulations such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the California Consumer Privacy Act (CCPA). Service agreements and shared responsibility models define the boundaries of control and accountability between the cloud provider and the customer, but misunderstandings or mismanagement in these areas can still result in security breaches or accidental data loss. Cloud providers offer tools, such as AWS Artifact (compliance documentation and audits), Azure Compliance Manager (compliance assessments and risk analysis), and Google Assured Workloads (region-specific data compliance), to assist customers in managing compliance requirements. Security issues in cloud computing are generally categorized into two broad groups. The first involves risks faced by cloud service providers, including vulnerabilities in their infrastructure, software, or third-party dependencies. The second includes risks faced by cloud customers, such as misconfigurations, inadequate access controls, and accidental data exposure. These risks are often amplified by human error or a lack of understanding of the shared responsibility model. Security responsibilities also vary depending on the service model—whether Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS). In general, cloud providers are responsible for hardware security, physical infrastructure, and software updates, while customers are responsible for data encryption, identity and access management (IAM), and application-level security. Another significant concern is uncertainty regarding guaranteed Quality of Service (QoS), particularly in multi-tenant environments where resources are shared among customers. Major cloud providers address these concerns through Service Level Agreements (SLAs), which define performance and uptime guarantees and often offer compensation in the form of service credits when guarantees are unmet. Automated management and remediation processes, supported by tools such as AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite, help detect and respond to large-scale failures. Despite these tools, managing QoS in highly distributed and multi-tenant systems remains complex. For latency-sensitive workloads, cloud providers have introduced edge computing solutions, such as AWS Wavelength, Azure Edge Zones, and Google Distributed Cloud Edge, to minimize latency by processing data closer to the end-user. Jurisdictional and regulatory requirements regarding data residency and sovereignty introduce further complexity. Data stored in one region may fall under the legal jurisdiction of that region, creating potential conflicts for organizations operating across multiple geographies. 
Major cloud providers, such as AWS, Microsoft Azure, and Google Cloud, address these concerns by offering region-specific data centers and compliance management tools designed to align with regional regulations and legal frameworks. Factors Influencing Adoption and Suitability of Cloud Computing The decision to adopt cloud computing or maintain on-premises infrastructure depends on factors such as scalability, cost structure, latency requirements, regulatory constraints, and infrastructure customization. Organizations with variable or unpredictable workloads, limited capital for upfront investments, or a focus on rapid scalability benefit from cloud adoption. Startups, SaaS companies, and e-commerce platforms often prefer the pay-as-you-go operational expenditure (OpEx) model of cloud infrastructure. Additionally, companies prioritizing global accessibility, remote workforce enablement, disaster recovery, and leveraging advanced services such as AI/ML and analytics are well-suited for the cloud. In recent years, some cloud providers have started offering specialized services for high-performance computing and low-latency applications, addressing some use cases previously exclusive to on-premises setups. On the other hand, organizations with strict regulatory requirements, highly predictable workloads, or reliance on deeply integrated legacy systems may find cloud infrastructure less suitable. Businesses in industries like defense, government, or those handling highly sensitive data often favor on-premises setups for greater control and data sovereignty. Additionally, companies with ultra-low latency requirements, such as high-frequency trading (HFT) firms, rely on custom hardware (e.g., FPGAs) and physical proximity to exchanges, which most cloud providers cannot fully replicate despite recent advancements. Similarly, tech giants like Google, Meta, and Amazon build their own data centers due to economies of scale, predictable workloads, and the ability to customize hardware and network infrastructure for optimal efficiency. However, these companies also use cloud services selectively for certain workloads and applications where it aligns with their operational needs. In practice, many organizations are increasingly adopting hybrid cloud architectures, combining on-premises infrastructure with cloud services. This approach allows businesses to balance scalability, cost-effectiveness, and control, offering the benefits of both deployment models while mitigating their respective limitations. Cloud Migration Challenges According to the 2024 State of the Cloud Report by Flexera, approximately 50% of respondents identified the following top challenges when migrating workloads to public clouds: "Understanding application dependencies" "Comparing on-premise and cloud costs" "Assessing technical feasibility." Leaky Abstractions Cloud computing abstractions aim to simplify resource management, but leaky abstractions can expose underlying complexities. These variations in abstraction quality depend on the cloud vendor, service and architecture. Mitigating leaky abstractions requires users to understand the implementation details and limitations of the cloud services they utilize. Privacy The increased use of cloud computing services such as Gmail and Google Docs has pressed the issue of privacy concerns of cloud computing services to the utmost importance. The provider of such services lie in a position such that with the greater use of cloud computing services has given access to a plethora of data. 
This access carries the immense risk of data being disclosed either accidentally or deliberately. Company privacy can be compromised, as all of the information is sent to the cloud service provider. Privacy advocates have criticized the cloud model for giving hosting companies greater ease to control—and thus, to monitor at will—communication between host company and end user, and to access user data (with or without permission). Instances such as the secret NSA program that, working with AT&T and Verizon, recorded over 10 million telephone calls between American citizens cause uncertainty among privacy advocates, as do the greater powers such arrangements give telecommunication companies to monitor user activity. A cloud service provider (CSP) can complicate data privacy because of the extent of virtualization (virtual machines) and cloud storage used to implement cloud services. In CSP operations, customer or tenant data may not remain on the same system, in the same data center, or even within the same provider's cloud; this can lead to legal concerns over jurisdiction. While there have been efforts (such as US-EU Safe Harbor) to "harmonize" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "regions and availability zones". Cloud computing poses privacy concerns because the service provider can access the data that is on the cloud at any time. It could accidentally or deliberately alter or even delete information. This becomes a major concern as these service providers employ administrators, which leaves room for potential unwanted disclosure of information on the cloud. Technical issues Sometimes technical issues arise: servers may go down, making it difficult to access resources at any time and from any place; for example, non-availability of services can be caused by a denial-of-service attack. Cloud computing also requires a strong and reliable internet connection, without which the services cannot be used. A further issue is that heavy cloud use can consume significant power and resources on physical client devices such as smartphones. Sharing information without a warrant Many cloud providers can share information with third parties if necessary for purposes of law and order, even without a warrant. This is permitted in their privacy policies, which users have to agree to before they start using cloud services. There are life-threatening situations in which there is no time to wait for the police to issue a warrant. Many cloud providers can share information immediately with the police in such situations. Example of a privacy policy that allows this The Dropbox privacy policy states that We may share information as discussed below … Law & Order. We may disclose your information to third parties if we determine that such disclosure is reasonably necessary to (a) comply with the law; (b) protect any person from death or serious bodily injury; (c) prevent fraud or abuse of Dropbox or our users; or (d) protect Dropbox's property rights. Previous situation involving this The Sydney Morning Herald reported on the Mosman bomb hoax, which was a life-threatening situation: As to whether NSW Police needed a warrant to access the information it was likely to have, Byrne said it depended on the process taken. 
"Gmail does set out in their process in terms of their legal disclosure guidelines [that] it can be done by a search warrant ... but there are exceptions that can apply in different parts of the world and different service providers. For example, Facebook generally provides an exception for emergency life threatening situations that are signed off by law enforcement." Another computer forensic expert at iT4ensics, which works for large corporations dealing with matters like internal fraud, Scott Lasak, said that police "would just contact Google" and "being of a police or FBI background Google would assist them". "Whether or not they need to go through warrants or that sort of thing I'm not sure. But even for just an IP address they might not even need a warrant for something like that being of a police background. ... NSW Police would not comment on whether it had received help from Google. The search giant also declined to comment, instead offering a standard statement on how it cooperated with law enforcement. A spokesman for the online users' lobby group Electronic Frontiers Australia, Stephen Collins, said Google was likely to have handed over the need information on the basis of "probable cause or a warrant", which he said was "perfectly legitimate". He also said “It happens with relative frequency. … Such things are rarely used in Australia for trivial or malevolent purposes.” Privacy solutions Solutions to privacy in cloud computing include policy and legislation as well as end users' choices for how data is stored. The cloud service provider needs to establish clear and relevant policies that describe how the data of each cloud user will be accessed and used. Cloud service users can encrypt data that is processed or stored within the cloud to prevent unauthorized access. Cryptographic encryption mechanisms are certainly the best options. In addition, authentication and integrity protection mechanisms ensure that data only goes where the customer wants it to go and it is not modified in transit. Strong authentication is a mandatory requirement for any cloud deployment. User authentication is the primary basis for access control, and specially in the cloud environment, authentication and access control are more important than ever since the cloud and all of its data are publicly accessible. Biometric identification technologies linking users' biometrics information to their data are available. These technologies use searchable encryption techniques, and perform identification in an encrypted domain so that cloud providers or potential attackers do not gain access to sensitive data or even the contents of the individual queries. Compliance To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt community or hybrid deployment modes that are typically more expensive and may offer restricted benefits. This is how Google is able to "manage and meet additional government policy requirements beyond FISMA" and Rackspace Cloud or QubeSpace are able to claim PCI compliance. Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds that the hand-picked set of goals and standards determined by the auditor and the auditee are often not disclosed and can vary widely. Providers typically make this information available on request, under non-disclosure agreement. 
Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the EU regulations on export of personal data. A multitude of laws and regulations have forced specific compliance requirements onto many companies that collect, generate or store data. These laws may dictate a wide array of data storage policies, such as how long information must be retained, the process used for deleting data, and even certain recovery plans. Below are some examples of compliance laws or regulations. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires a contingency plan that includes data backups, data recovery, and data access during emergencies. The privacy laws of Switzerland demand that private data, including emails, be physically stored in Switzerland. In the United Kingdom, the Civil Contingencies Act of 2004 sets forth guidance for a business contingency plan that includes policies for data storage. In a virtualized cloud computing environment, customers may never know exactly where their data is stored. In fact, data may be stored across multiple data centers in an effort to improve reliability, increase performance, and provide redundancies. This geographic dispersion may make it more difficult to ascertain legal jurisdiction if disputes arise. FedRAMP U.S. federal agencies have been directed by the Office of Management and Budget to use a process called FedRAMP (Federal Risk and Authorization Management Program) to assess and authorize cloud products and services. Federal CIO Steven VanRoekel issued a memorandum to federal agency Chief Information Officers on December 8, 2011, defining how federal agencies should use FedRAMP. FedRAMP consists of a subset of NIST Special Publication 800-53 security controls specifically selected to provide protection in cloud environments. A subset has been defined for the FIPS 199 low categorization and the FIPS 199 moderate categorization. The FedRAMP program has also established a Joint Accreditation Board (JAB) consisting of Chief Information Officers from DoD, DHS, and GSA. The JAB is responsible for establishing accreditation standards for third-party organizations that perform the assessments of cloud solutions. The JAB also reviews authorization packages, and may grant provisional authorization (to operate). The federal agency consuming the service still retains final authority to operate. 
There must also be consideration for what happens when the provider-customer relationship ends. In most cases, this event will be addressed before an application is deployed to the cloud. However, in the case of provider insolvency or bankruptcy, the state of the data may become unclear. Vendor lock-in Because cloud computing is still relatively new, standards are still being developed. Many cloud platforms and services are proprietary, meaning that they are built on the specific standards, tools and protocols developed by a particular vendor for its particular cloud offering. This can make migrating off a proprietary cloud platform prohibitively complicated and expensive. Three types of vendor lock-in can occur with cloud computing: Platform lock-in: cloud services tend to be built on one of several possible virtualization platforms, for example VMware or Xen. Migrating from a cloud provider using one platform to a cloud provider using a different platform could be very complicated. Data lock-in: since the cloud is still new, standards of ownership, i.e. who actually owns the data once it lives on a cloud platform, are not yet developed, which could make it complicated if cloud computing users ever decide to move data off of a cloud vendor's platform. Tools lock-in: if tools built to manage a cloud environment are not compatible with different kinds of both virtual and physical infrastructure, those tools will only be able to manage data or apps that live in the vendor's particular cloud environment. Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor lock-in, and aligns with enterprise data centers that are operating hybrid cloud models. The absence of vendor lock-in lets cloud administrators select their choice of hypervisor for specific tasks, or deploy virtualized infrastructures to other enterprises without the need to consider the flavor of hypervisor in the other enterprise. A heterogeneous cloud is considered one that includes on-premises private clouds, public clouds and software-as-a-service clouds. Heterogeneous clouds can work with environments that are not virtualized, such as traditional data centers. Heterogeneous clouds also allow for the use of piece parts, such as hypervisors, servers, and storage, from multiple vendors. Cloud piece parts, such as cloud storage systems, offer APIs but they are often incompatible with each other. The result is complicated migration between back ends and difficulty integrating data spread across various locations. This has been described as a problem of vendor lock-in. The solution to this is for clouds to adopt common standards; a minimal adapter-layer sketch illustrating one practical workaround appears below. Heterogeneous cloud computing differs from homogeneous clouds, which have been described as those using consistent building blocks supplied by a single vendor. Intel General Manager of high-density computing, Jason Waxman, is quoted as saying that a homogeneous system of 15,000 servers would cost $6 million more in capital expenditure and use 1 megawatt of power. Service lock-in within the same vendor Service lock-in within the same vendor occurs when a customer becomes dependent on specific services within a cloud vendor, making it challenging to switch to alternative services within the same vendor when their needs change. Open source Open-source software has provided the foundation for many cloud computing implementations, prominent examples being the Hadoop framework and VMware's Cloud Foundry.
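One common way to soften the data and tools lock-in described above, whether the underlying components are open source or proprietary, is to isolate provider-specific storage APIs behind a thin, provider-neutral interface so that application code never depends on one vendor's SDK. The sketch below only illustrates that design choice; the class and method names are hypothetical, and the in-memory backend stands in for whatever vendor SDK a real deployment would wrap.

```python
# Minimal sketch of an adapter layer that hides provider-specific storage APIs
# behind one interface, as a way to soften data/tools lock-in. The backend
# class here is a stand-in, not a real cloud SDK.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral interface the rest of the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; a real deployment would wrap a vendor SDK instead."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application code depends only on the ObjectStore interface, so switching
    # providers means writing one new adapter rather than rewriting callers.
    store.put(f"reports/{name}", body)

if __name__ == "__main__":
    store = InMemoryStore()
    archive_report(store, "q1.txt", b"quarterly figures")
    print(store.get("reports/q1.txt"))
```

An adapter layer does not remove lock-in entirely (provider-specific features still leak through), but it keeps the switching cost proportional to one adapter rather than the whole codebase.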
In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 intended to close a perceived legal loophole associated with free software designed to run over a network. Open standards Most cloud providers expose APIs that are typically well documented (often under a Creative Commons license) but also unique to their implementation and thus not interoperable. Some vendors have adopted others' APIs, and there are a number of open standards under development, with a view to delivering interoperability and portability. As of November 2012, the open standard with the broadest industry support is probably OpenStack, founded in 2010 by NASA and Rackspace, and now governed by the OpenStack Foundation. OpenStack supporters include AMD, Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM, Yahoo, Huawei and now VMware. Security Security is generally a desired state of being free from harm (anything that compromises the state of an entity's well-being). As defined in information security, it is a condition in which an information asset is protected with respect to its confidentiality (the quality or state of being free from unauthorized or insecure disclosure, contrary to the defined access rights listed in the access control list or matrix), its integrity (the quality or state of being whole, as complete as the original and uncorrupted, as can be verified for example through hash values) and its availability (the state of being accessible to authorized parties, as listed in the access control list or matrix, in the desired form and at the right time). Security is an important domain where cloud computing is concerned; a number of issues must be addressed before the cloud can be considered perfectly secure, a condition that may never be fully achieved (Martin Muduva, 2015). As cloud computing is achieving increased popularity, concerns are being voiced about the security issues introduced through adoption of this new model. The effectiveness and efficiency of traditional protection mechanisms are being reconsidered as the characteristics of this innovative deployment model can differ widely from those of traditional architectures. An alternative perspective on the topic of cloud security is that this is but another, although quite broad, case of "applied security" and that similar security principles that apply in shared multi-user mainframe security models apply with cloud security. The relative security of cloud computing services is a contentious issue that may be delaying its adoption. Physical control of private cloud equipment is more secure than having the equipment off site and under someone else's control. Physical control and the ability to visually inspect data links and access ports are required in order to ensure data links are not compromised. Issues barring the adoption of cloud computing are due in large part to the private and public sectors' unease surrounding the external management of security-based services. It is the very nature of cloud computing-based services, private or public, that promotes external management of provided services. This delivers great incentive to cloud computing service providers to prioritize building and maintaining strong management of secure services. Security issues have been categorized into sensitive data access, data segregation, privacy, bug exploitation, recovery, accountability, malicious insiders, management console security, account control, and multi-tenancy issues.
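The integrity property defined above, being whole and uncorrupted as verifiable through hash values, can be illustrated with a small standard-library sketch: record a digest before data leaves the customer's control, then recompute and compare it after retrieval. This is an illustration only, assuming the customer keeps the recorded digest somewhere the provider cannot silently rewrite; it is not a complete integrity-protection scheme.

```python
# Minimal sketch of a hash-based integrity check for data stored with a
# third party. Uses only the Python standard library; the example values
# are illustrative and not tied to any particular provider.
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(retrieved: bytes, expected_digest: str) -> bool:
    """Return True only if the retrieved bytes match the recorded digest."""
    return digest(retrieved) == expected_digest

if __name__ == "__main__":
    original = b"ledger entry 42: amount=100"
    recorded = digest(original)              # stored separately from the data

    tampered = b"ledger entry 42: amount=900"
    print(verify_integrity(original, recorded))   # True
    print(verify_integrity(tampered, recorded))   # False
```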
Solutions to various cloud security issues vary, from cryptography, particularly public key infrastructure (PKI), to use of multiple cloud providers, standardization of APIs, and improving virtual machine support and legal support. Cloud computing offers many benefits, but is vulnerable to threats. As cloud computing use increases, it is likely that more criminals will find new ways to exploit system vulnerabilities. Many underlying challenges and risks in cloud computing increase the threat of data compromise. To mitigate the threat, cloud computing stakeholders should invest heavily in risk assessment to ensure that the system encrypts to protect data, establishes a trusted foundation to secure the platform and infrastructure, and builds higher assurance into auditing to strengthen compliance. Security concerns must be addressed to maintain trust in cloud computing technology. Data breaches are one of the biggest concerns in cloud computing. A compromised server could significantly harm the users as well as cloud providers. A variety of information could be stolen, including credit card and social security numbers, addresses, and personal messages. The U.S. now requires cloud providers to notify customers of breaches. Once notified, customers have to worry about identity theft and fraud, while providers have to deal with federal investigations, lawsuits and reputational damage. Customer lawsuits and settlements have resulted in over $1 billion in losses to cloud providers. Availability A cloud provider may shut down without warning. For instance, the Anki robot company suddenly went bankrupt in 2019, making 1.5 million robots unresponsive to voice commands. Sustainability Although cloud computing is often assumed to be a form of green computing, there is currently no way to measure how "green" computers are. The primary environmental problem associated with the cloud is energy use. Phil Radford of Greenpeace said "we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today." Greenpeace ranks the energy usage of the top ten big brands in cloud computing, and successfully urged several companies to switch to clean energy. On December 15, 2011, Greenpeace and Facebook announced together that Facebook would shift to use clean and renewable energy to power its own operations. Soon thereafter, Apple agreed to make all of its data centers 'coal free' by the end of 2013 and doubled the amount of solar energy powering its Maiden, NC data center. Following suit, Salesforce agreed to shift to 100% clean energy by 2020. As for the servers' contribution to the environmental effects of cloud computing, these effects are more moderate in areas where the climate favors natural cooling and renewable electricity is readily available. (The same holds true for "traditional" data centers.) Thus countries with favorable conditions, such as Finland, Sweden and Switzerland, are trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from energy-aware scheduling and server consolidation. However, in the case of distributed clouds spanning data centers with different sources of energy, including renewable energy, accepting some reduction in energy efficiency can still result in a significant reduction in carbon footprint. Abuse As with privately purchased hardware, customers can purchase the services of cloud computing for nefarious purposes.
This includes password cracking and launching attacks using the purchased services. In 2009, a banking trojan illegally used the popular Amazon service as a command and control channel that issued software updates and malicious instructions to PCs that were infected by the malware. IT governance The introduction of cloud computing requires an appropriate IT governance model to ensure a secured computing environment and to comply with all relevant organizational information technology policies. As such, organizations need a set of capabilities that are essential when effectively implementing and managing cloud services, including demand management, relationship management, data security management, application lifecycle management, and risk and compliance management. A danger lies in the explosion of companies joining the growth in cloud computing by becoming providers. However, many of the infrastructural and logistical concerns regarding the operation of cloud computing businesses are still unknown. This over-saturation may have ramifications for the industry as a whole. Consumer end storage The increased use of cloud computing could lead to a reduction in demand for high-storage-capacity consumer end devices, as cheaper low-storage devices that stream all content via the cloud become more popular. In a Wired article, Jake Gardner explains that while unregulated usage is beneficial for IT and tech moguls like Amazon, the anonymous nature of the cost of consumption of cloud usage makes it difficult for businesses to evaluate and incorporate it into their business plans. Ambiguity of terminology Outside of the information technology and software industry, the term "cloud" can be found to reference a wide range of services, some of which fall under the category of cloud computing, while others do not. The cloud is often used to refer to a product or service that is discovered, accessed and paid for over the Internet, but is not necessarily a computing resource. The term "cloud" retains the aura of something noumenal and numinous. Examples of services that are sometimes referred to as "the cloud" include, but are not limited to, crowdsourcing, cloud printing, crowdfunding and cloud manufacturing. Performance interference and noisy neighbors Due to its multi-tenant nature and resource sharing, cloud computing must also deal with the "noisy neighbor" effect. This effect in essence indicates that in a shared infrastructure, the activity of a virtual machine on a neighboring core on the same physical host may lead to increased performance degradation of the VMs on the same physical host, due to issues such as cache contamination. Because neighboring VMs may be activated or deactivated at arbitrary times, the result is increased variation in the actual performance of cloud resources. This effect seems to depend on the nature of the applications running inside the VMs, as well as on other factors such as scheduling parameters, and careful selection of these may lead to an optimized assignment that minimizes the phenomenon. This has also led to difficulties in comparing various cloud providers on cost and performance using traditional benchmarks for service and application performance, as the time period and location in which the benchmark is performed can result in widely varied results.
This observation has led in turn to research efforts to make cloud computing applications intrinsically aware of changes in the infrastructure so that the application can automatically adapt to avoid failure. Monopolies and privatization of cyberspace Philosopher Slavoj Žižek points out that, although cloud computing enhances content accessibility, this access is "increasingly grounded in the virtually monopolistic privatization of the cloud which provides this access". According to him, this access, necessarily mediated through a handful of companies, ensures a progressive privatization of global cyberspace. Žižek criticizes the argument put forward by supporters of cloud computing that this phenomenon is part of the "natural evolution" of the Internet, maintaining that the quasi-monopolies "set prices at will but also filter the software they provide to give its "universality" a particular twist depending on commercial and ideological interests." Limitations of Service Level Agreements Typically, cloud providers' Service Level Agreements (SLAs) do not encompass all forms of service interruptions. Exclusions typically include planned maintenance, downtime resulting from external factors such as network issues, human errors like misconfigurations, natural disasters, force majeure events, and security breaches. Typically, customers bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. Customers should be aware of how deviations from SLAs are calculated, as these parameters may vary by service. These requirements can place a considerable burden on customers. Additionally, SLA percentages and conditions can differ across various services within the same provider, with some services lacking any SLA altogether. In cases of service interruptions due to hardware failures at the cloud provider, the company typically does not offer monetary compensation. Instead, eligible users may receive credits as outlined in the corresponding SLA. Cloud cost overruns In a report by Gartner, a survey of 200 IT leaders revealed that 69% experienced budget overruns in their organizations' cloud expenditures during 2023. Conversely, 31% of IT leaders whose organizations stayed within budget attributed their success to accurate forecasting and budgeting, proactive monitoring of spending, and effective optimization. The 2024 Flexera State of Cloud Report identifies the top cloud challenges as managing cloud spend, followed by security concerns and lack of expertise. Public cloud expenditures exceeded budgeted amounts by an average of 15%. The report also reveals that cost savings is the top cloud initiative for 60% of respondents. Furthermore, 65% measure cloud progress through cost savings, while 42% prioritize shorter time-to-market, indicating that cloud's promise of accelerated deployment is often overshadowed by cost concerns. Implementation challenges Applications hosted in the cloud are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment. See also Cloud computing References Criticisms of software and websites
Cloud computing issues
[ "Technology", "Engineering" ]
6,385
[ "Cybersecurity engineering", "Criticisms of software and websites", "Cloud infrastructure attacks and failures" ]
43,207,115
https://en.wikipedia.org/wiki/Journal%20of%20Industrial%20Ecology
The Journal of Industrial Ecology is a bimonthly peer-reviewed academic journal covering industrial ecology. It is published by Wiley-Blackwell on behalf of the Yale School of the Environment and is an official journal of the International Society for Industrial Ecology. The editor-in-chief is Reid Lifset. According to the Journal Citation Reports, the journal had an impact factor of 6.946 in 2020. Abstracting and indexing The journal is abstracted and indexed in: References External links Environmental science journals Wiley-Blackwell academic journals Yale University academic journals Bimonthly journals Academic journals established in 1997 English-language journals Industrial ecology Engineering journals
Journal of Industrial Ecology
[ "Chemistry", "Engineering", "Environmental_science" ]
129
[ "Industrial engineering", "Environmental science journals", "Environmental engineering", "Environmental science journal stubs", "Industrial ecology" ]
43,207,142
https://en.wikipedia.org/wiki/Unilamellar%20liposome
A unilamellar liposome is a spherical liposome, a vesicle, bounded by a single bilayer of an amphiphilic lipid or a mixture of such lipids, containing aqueous solution inside the chamber. Unilamellar liposomes are used to study biological systems and to mimic cell membranes, and are classified into three groups based on their size: small unilamellar liposomes/vesicles (SUVs) with a size range of 20–100 nm, large unilamellar liposomes/vesicles (LUVs) with a size range of 100–1000 nm and giant unilamellar liposomes/vesicles (GUVs) with a size range of 1–200 μm. GUVs are mostly used as models for biological membranes in research work. Animal cells are 10–30 μm and plant cells are typically 10–100 μm. Even smaller cell organelles such as mitochondria are typically 1–2 μm. Therefore, a proper model should account for the size of the specimen being studied. In addition, the size of vesicles dictates their membrane curvature, which is an important factor in studying fusion proteins. SUVs have a higher membrane curvature, and vesicles with high membrane curvature can promote membrane fusion faster than vesicles with lower membrane curvature, such as GUVs. The composition and characteristics of the cell membrane vary in different cells (plant cells, mammalian cells, bacterial cells, etc.). In a membrane bilayer, the composition of the phospholipids often differs between the inner and outer leaflets. Phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, and sphingomyelin are some of the most common lipids in animal cell membranes. These lipids differ widely in charge, length, and saturation state. The presence of unsaturated bonds (double bonds) in lipids, for example, creates a kink in the acyl chains, which changes the lipid packing and results in looser packing. Therefore, the composition and sizes of the unilamellar liposomes must be chosen carefully based on the subject of the study. In general, each lipid bilayer structure is comparable to the lamellar phase lipid organization of biological membranes. In contrast, multilamellar liposomes (MLVs) consist of many concentric amphiphilic lipid bilayers analogous to onion layers, and MLVs may be of variable sizes up to several micrometers. Preparation Small unilamellar vesicles and large unilamellar vesicles There are several methods to prepare unilamellar liposomes, and the protocols differ based on the type of desired unilamellar vesicles. Different lipids can be bought either dissolved in chloroform or as lyophilized lipids. In the case of lyophilized lipids, they can be solubilized in chloroform. Lipids are then mixed at a desired molar ratio. The chloroform is then evaporated using a gentle stream of nitrogen (to avoid oxygen contact and oxidation of lipids) at room temperature. A rotary evaporator can be used to form a homogeneous layer of liposomes. This step removes the bulk of the chloroform. To remove the residues of trapped chloroform, lipids are placed under vacuum for several hours to overnight. The next step is re-hydration, where the dried lipids are re-suspended in the desired buffer. Lipids can be vortexed for several minutes to ensure that all the lipid residues get re-suspended. SUVs can be obtained via two methods: either by sonication (for instance with 1-second pulses in 3 Hz cycles at a power of 150 W) or by extrusion. In the extrusion method, the lipid mixture is passed through a membrane 10 or more times. Depending on the pore size of the membrane, either SUVs or LUVs can be obtained.
Keeping vesicles under argon and away from oxygen and light can extend their lifetime. Giant unilamellar vesicles Natural swelling: in this method, lipids dissolved in chloroform are pipetted onto a Teflon ring. The chloroform is allowed to evaporate and the ring is then placed under vacuum for several hours. Next, the aqueous buffer is added gently over the Teflon ring and the lipids are allowed to swell naturally overnight to form GUVs. The disadvantage of this method is that a large number of multilamellar vesicles and lipid debris are formed. Electroformation: in this method, lipids are placed on a conductive cover glass (indium tin oxide, or ITO, coated glass) or on Pt wires instead of a Teflon ring; after vacuuming, buffer is placed on the dried lipids, which are then sandwiched with a second conductive cover glass. Next, an electric field of a certain frequency and voltage is applied, which promotes the formation of GUVs. For polyunsaturated lipids, this technique can induce a significant oxidation effect on the vesicles. Nevertheless, it is a very common and reliable technique to generate GUVs. Modified approaches exist that employ gel-assisted swelling (agarose-assisted swelling or PVA-assisted swelling) for the formation of GUVs under more biologically relevant conditions. A variety of methods exist to encapsulate biological reactants within GUVs by using water-oil interfaces as a scaffold to assemble lipid layers. This allows the use of GUVs as cell-like membrane containers for the in vitro recreation (and investigation) of biological functions. These encapsulation methods include microfluidic methods, which allow for a high-yield production of vesicles with consistent sizes. Applications Phospholipid liposomes are used as targeted drug delivery systems. Hydrophilic drugs can be carried in solution inside the SUVs or MLVs, and hydrophobic drugs can be incorporated into the lipid bilayer of these liposomes. If injected into the circulation of a human or animal body, MLVs are preferentially taken up by phagocytic cells, and thus drugs can be targeted to these cells. For general or overall delivery, SUVs may be used. For topical applications on skin, specialized lipids like phospholipids and sphingolipids may be used to make drug-free liposomes as moisturizers, or loaded with drugs, for example for anti-ultraviolet radiation applications. In biomedical research, unilamellar liposomes are extremely useful for studying biological systems and mimicking cell functions. As a living cell is very complicated to study, unilamellar liposomes provide a simple tool to study membrane interaction events such as membrane fusion, protein localization in the plasma membrane, and ion channel behavior. See also Lipid polymorphism Liposome Lipid bilayer References Drug delivery devices Membrane biology Surfactants Colloidal chemistry
Unilamellar liposome
[ "Chemistry" ]
1,479
[ "Pharmacology", "Colloidal chemistry", "Membrane biology", "Drug delivery devices", "Surface science", "Colloids", "Molecular biology" ]
43,207,401
https://en.wikipedia.org/wiki/International%20Society%20for%20Industrial%20Ecology
The International Society for Industrial Ecology (ISIE) is an international professional association with the aim of promoting the development and application of industrial ecology. History The decision to found ISIE was made in January 2000 at the New York Academy of Sciences in a meeting devoted to industrial ecology attended by experts from diverse fields. The society formally opened its doors to membership in February 2001. Membership ISIE offers different types of membership that can be purchased from the Wiley-Blackwell website. Members get access to 6 issues of the Journal of Industrial Ecology published by Wiley-Blackwell on behalf of Yale University, as well as discounts for attending ISIE biennial conferences and discounts on books published by Wiley-Blackwell. Recent conferences of ISIE have been held at locations such as the University of Ulsan in South Korea, Melbourne, Australia, University of California, Berkeley, and Stockholm Environmental Institute. References External links Industrial ecology International professional associations International sustainability organizations Organizations established in 2001 Organizations based in New Haven, Connecticut
International Society for Industrial Ecology
[ "Chemistry", "Engineering" ]
197
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
36,084,530
https://en.wikipedia.org/wiki/Depth-graded%20multilayer%20coating
A depth-graded multilayer coating is a multi-layer coating optimised for broadband response by varying the thickness of the layers used. A multi-layer coating consisting of alternating layers with different optical properties and the same thickness will tend to have a narrow frequency response, getting narrower as more layers are added; for some applications, such as precise focussing of a monochromatic laser light source, this is exactly what is desired, but it is useless for astronomical optics, where it is often required to detect a whole range of frequencies emitted by some source of interest. The design of such coatings generally starts with an approximate analytical solution and then uses the simplex method of multi-variable optimisation to solve for optimal thicknesses of the layers. Typically the thin layers (to reflect high-energy X-rays) are on the inside, since low-energy X-rays are absorbed more readily. One model used is a power-law distribution of thicknesses, with the thickness of the i-th bilayer given by d_i = a/(b + i)^c for some optimised a, b, c; a small numerical illustration is given below. An optimum multilayer design depends on the graze angle, so ideally a different prescription would be used on each shell of a multi-shell X-ray Wolter mirror; in practice the same prescription is used for about ten shells. Characterising such coatings requires a synchrotron as a variable-wavelength X-ray source. The Danish Space Research Institute in Copenhagen is (as of 2012) the world centre of excellence for such coatings, though a good deal of the earlier research and development was done in Russia. References Christensen, FE; Craig, WW; Windt, DL; Jimenez-Garate, MA; Hailey, CJ; Harrison, FA; Mao, PH; Chakan, JM; Ziegler, E; Honkimaki, V; "Measured reflectance of graded multilayer mirrors designed for astronomical hard X-ray telescopes", Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, Volume 451, Issue 3, pages 572–581, 11 September 2000. Thin-film optics
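The power-law thickness model described above, d_i = a/(b + i)^c, is simple to evaluate directly. The short sketch below does only that; the parameter values are made up for illustration and are not a published coating prescription, and in a real design a, b and c would be tuned with a simplex-style optimiser as the article notes.

```python
# Minimal sketch of the power-law bilayer thickness model d_i = a / (b + i)**c.
# The numbers are illustrative placeholders, not a real coating prescription.
def bilayer_thicknesses(n_bilayers: int, a: float, b: float, c: float) -> list[float]:
    """Return the thickness of each bilayer, thickest (low-energy) layers first."""
    return [a / (b + i) ** c for i in range(1, n_bilayers + 1)]

if __name__ == "__main__":
    # Example: 10 bilayers with made-up parameters (thicknesses in nanometres).
    for i, d in enumerate(bilayer_thicknesses(10, a=12.0, b=0.9, c=0.25), start=1):
        print(f"bilayer {i:2d}: {d:.2f} nm")
```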
Depth-graded multilayer coating
[ "Materials_science", "Mathematics" ]
443
[ "Thin-film optics", "Planes (geometry)", "Thin films" ]
38,962,399
https://en.wikipedia.org/wiki/Stromatinia%20cepivora
Stromatinia cepivora is a fungus in the division Ascomycota. It is the teleomorph of Sclerotium cepivorum, the cause of white rot in onions, garlic, and leeks. The infective sclerotia remain viable in the soil for many years and are stimulated to germinate by the presence of a susceptible crop. Pathogenesis Sclerotium cepivorum is the asexual reproductive form of Stromatinia cepivora and is a plant pathogen, causing white rot in Allium species, particularly onions, leeks, and garlic. On a worldwide basis, white rot is probably the most serious threat to Allium crop production of any disease. This is a soil-borne fungus and affects susceptible crops planted in infected soil containing sclerotia. The sclerotia that develop in the life cycle can be spread to other fields by inadequate sanitation practices. The sclerotia can remain viable in the soil for years and germinate in the presence of a susceptible host to cause disease; therefore it is important to practice good sanitation. Where the disease has occurred, recropping with further Allium species should be avoided for many years. The risk of infection can be reduced as far as possible in clean land by using disease-free planting material and avoiding contamination from infected fields. Making sure to clean machinery, boots and equipment will help to stop the spread of disease from an infected field. Since infection occurs in cooler weather, planting the crops at the right time is also important to avoid establishing disease. Symptoms The first symptoms noted with S. cepivora are the foliar symptoms. Plants are stunted in growth with yellow and wilting foliage. The leaves eventually die and fall off, with the older leaves dying first and then the aerial leaves. Soil conditions and the environment determine the extent of damage to the plant. The pathogen grows in moist, cool conditions, so under the right conditions pathogenic activity increases as the root systems develop. The disease attacks at all stages of growth, and the plant turns yellow and wilts when fully developed because the roots are rotting. Mycelial growth is a sign that appears on the roots and spreads to the bulb, causing it to rot. This mycelial growth can be seen at the base of the stem when foliage is yellowing and the foliar symptoms are first appearing. Black globular sclerotia, which resemble poppy seeds, can also appear on the mycelium. These black sclerotia are about 0.5 mm (1/32 in) in diameter. These survival structures (sclerotia) can detach and persist for years in a dormant state, waiting for a susceptible host. Environment The white rot pathogen is dependent upon temperature. Environmental conditions influence germination, which favors cooler weather (50–70 °F). If there is high soil moisture present, germination and infection will be favored. However, sclerotia and fungal growth are inhibited above 70 °F. With the pathogen favoring cool wet summers, irrigation can also be a problem in spreading the disease from an infected field to a clean field. Therefore, this pathogen is of great concern to growers experiencing cool wet summers. Disease cycle Stromatinia cepivora is a soil-borne fungus. White rot is a monocyclic disease, meaning it has only one reproductive cycle per season. This is a unique fungus as it does not produce any spores of importance to a normal life cycle. It exists and overwinters as sclerotia (the survival stage).
These small black globular structures are resistant to adverse temperatures and can remain dormant in the soil for years even without a host. Sclerotia germinate in response to root exudates. Weather is also a factor in germination and hyphal growth. Mycelia grow through the soil and form appressoria once a host root is available. Appressoria are able to attach to and penetrate the host. Mycelia grow out from the roots and can spread to a neighboring plant, which creates rows of diseased plants. Even small amounts of sclerotia can cause disease and be difficult to control. Sclerotia infect the host and spread. They are formed on the decaying host tissue and then are left free in the soil. To control the disease there needs to be a reduction in the number of sclerotia in the soil so that fungal growth can be halted. Overall, multiple controls are necessary to produce an adequate yield in infected fields. Anything that moves the infested soil, such as wind, water, equipment, boots, etc., will move sclerotia and cause the disease to spread. Importance This is a serious disease for plants of the Allium genus. The soil-borne fungus can persist in the soil for many years. This disease is present in all Allium-producing regions, making it a worldwide threat to the Allium production industry. It has been found in the United States 10 times, with the first in 1918 in Oregon and the latest in 2014 in an onion field. Onions and garlic are economically important vegetables in the world. S. cepivora is one of the most destructive diseases of onion and garlic, causing heavy losses. Once land has been infested, it is considered not suitable for garlic or onion production for up to 40 or more years. Furthermore, an extremely small infection can be devastating: just one sclerotium in 20 pounds of soil will cause disease and result in measurable crop loss. White rot in the United States 1918: First found in La Grande, Oregon 1930s: San Francisco area 1940s: Gilroy and Tulelake, California; Klamath Falls, Oregon; Walla Walla, Washington 1950s: Salinas, Nevada; Willamette Valley, Oregon 1970s: Central Oregon, San Joaquin Valley 1989: Treasure Valley 2004: Marion County, Oregon 2008: Crook County, Oregon 2010: Home-grown garlic in the Palouse Falls region Management Cultural controls Knowing when to plant and harvest the crops is important to avoid the pathogen. As noted in the environment section, this pathogen thrives under cool temperatures and high soil moisture. Irrigation can spread disease; therefore, if disease is present, reviewing moisture problems and correcting irrigation practices can help to combat the pathogen and keep the disease from spreading beyond infected fields. Also, planting in the spring and harvesting in the fall can help to reduce the disease. Sanitation One way to control this pathogen is planting clean seed. By planting clean seed rather than infected seed, the spread of disease is stopped. The pathogen is transported and spread in contaminated soil, for example on tools or equipment. If the infected soil is moved, the sclerotia will be dispersed as well. The sclerotium is a survival structure in the life cycle of the pathogen that can stay viable in the soil for up to 30 years without a suitable host. With sanitation practices in place, the pathogen will not be spread.
An example of a sanitation practice is washing the equipment with water and making sure all remnants of soil are gone so that the pathogen cannot spread to a different allotment. Lastly, soil should not be spread by tools or boots, so these should be washed as well. Chemical controls As sclerotia are a survival structure in the life cycle of the pathogen, it is important to reduce and eliminate sclerotia in the soil. One effective approach is the use of sclerotia germination stimulants, which can reduce sclerotia by 90%. One way to do this is using diallyl disulfide (DADS), the chemical that triggers sclerotia to germinate. Upon using DADS in the soil, no Allium crops can be grown in that soil for a year to keep the treatment effective; if Allium crops are growing, the pathogen will be able to complete its life cycle and replenish sclerotia in the soil. Therefore, DADS is applied artificially to fields with no Allium species, so that the sclerotia germinate, fail to find a host, and die rather than lying dormant in the soil. This can also be done by applying a garlic extract or by the use of certain petroleum-based products. Dipping seed garlic in heated water is effective, but higher temperatures may kill the cloves. It is also important to use fungicides alongside the chemical DADS. There are three fungicides that are registered for white rot: tebuconazole, fludioxonil and boscalid, with tebuconazole being the most effective. All of these fungicides need to be applied right at planting, as later fungicide applications are not effective in controlling the disease. It is also important to note that once an infection is found there are no chemical controls to stop or reduce disease during that season. Use in biocontrol The three-cornered leek (Allium triquetrum) has been introduced into Australia where it has spread and become established in nutrient-deficient, damp habitats. The plant is now considered to be a noxious invasive species, as it is difficult to control or eradicate. S. cepivora is being investigated as a possible biological control agent for the plant. No naturally occurring members of the genus Allium occur in Australia, and in a trial, the fungus was found to be effective at killing all but one of the target samples on which it was tested. However, the researchers involved in the study acknowledged, "Releasing a virulent pathogen for cultivated Allium species into bushland or pasture is controversial and any field release would require safeguards against spread to areas suitable for the production of cultivated Allium species, such as onions, leeks and garlic, before S. cepivora could be introduced as a potential biological control agent." References Sclerotiniaceae Fungal plant pathogens and diseases Root vegetable diseases Fungus species Fungi described in 1841 Taxa named by Miles Joseph Berkeley
Stromatinia cepivora
[ "Biology" ]
2,096
[ "Fungi", "Fungus species" ]
38,966,423
https://en.wikipedia.org/wiki/JD%20Decompiler
JD (Java Decompiler) is a decompiler for the Java programming language. JD is provided as a GUI tool as well as in the form of plug-ins for the Eclipse (JD-Eclipse) and IntelliJ IDEA (JD-IntelliJ) integrated development environments. JD supports most versions of Java from 1.1.8 through 10.0.2, as well as JRockit 90_150, Jikes 1.2.2, the Eclipse Java Compiler and Apache Harmony, and is thus often used where the formerly popular JAD was once employed. Variants In 2011, Alex Kosinsky initiated a variant of JD-Eclipse which supports the alignment of decompiled code with the line numbers of the originals, which are often included in the original bytecode as debug information. In 2012, a branch of JDEclipse-Realign by Martin "Mchr3k" Robertson extended the functionality with manual decompilation control and support for Eclipse 4.2 (Juno). See also JAD (software) Mocha References External links Java decompilers
JD Decompiler
[ "Engineering" ]
225
[ "Reverse engineering", "Decompilers", "Java decompilers" ]
38,968,049
https://en.wikipedia.org/wiki/Zearalanone
Zearalanone (ZAN) is a mycoestrogen that is a derivative of zearalenone (ZEN). Zearalanone can be extracted from medicinal and edible herbs, together with aflatoxins, at the same time using a specific immunoaffinity column. See also α-Zearalenol β-Zearalenol Taleranol Zeranol References Estrogens Lactones Resorcinols
Zearalanone
[ "Chemistry", "Biology" ]
92
[ "Biotechnology stubs", "Biochemistry stubs", "Organic compounds", "Biochemistry", "Organic compound stubs", "Organic chemistry stubs" ]
38,970,302
https://en.wikipedia.org/wiki/Wet%20process%20engineering
Wet processing engineering is one of the major streams in textile engineering or textile manufacturing; it refers to the engineering of textile chemical processes and the associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are carried out in an aqueous stage. Hence, it is called the wet process, which usually covers pre-treatment, dyeing, printing, and finishing. The wet process is usually carried out on a manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness and adequate mechanical strength, giving it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric. All of these stages require an aqueous medium, which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on average, 50–100 liters of water are used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Not all water can be used in textile processes; it must meet certain requirements of quality, color and other attributes before it can be used. This is the reason why water is a prime concern in wet processing engineering. Water Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes, especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent for various dyes and chemicals, and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology, which may vary from mill to mill, as well as on material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead to extra water consumption, while process optimization and right-first-time production may save much water. Fresh water: Most water used in the textile industry is drawn from deep wells, around 800 ft below the surface. The main problem with using water in textile processes is water hardness, caused by the presence of soluble salts of metals including calcium and magnesium. Iron, aluminum, and copper salts may also contribute to the hardness, but their effects are much less. Using hard water in the wet process can cause problems such as the formation of scale in boilers, reactions with soap and detergents, reaction with dyes, and problems due to iron. Water hardness can be removed by the boiling process, liming process, soda-lime process, base exchange process, or synthetic ion exchange process. Recently, some companies have started harvesting rainwater for use in wet processes as it is less likely to cause the problems associated with water hardness. Wastewater: Textile mills, including carpet manufacturers, generate wastewater from a wide variety of processes, including wool cleaning and finishing, yarn manufacturing and fabric finishing (such as bleaching, dyeing, resin treatment, waterproofing and flame-proofing). Pollutants generated by textile mills include BOD, SS, oil and grease, sulfide, phenols, and chromium. Insecticide residues in fleeces are a particular problem in treating waters generated in wool processing.
Animal fats may be present in the wastewater, which, if not contaminated, can be recovered for the production of tallow or further rendering. Textile dyeing plants generate wastewater that contains synthetic (e.g., reactive dyes, acid dyes, basic dyes, disperse dyes, vat dyes, sulfur dyes, mordant dyes, direct dyes, ingrain dyes, solvent dyes, pigment dyes) and natural dyestuff, gum thickener (guar) and various wetting agents, pH buffers and dye retardants or accelerators. Following treatment with polymer-based flocculants and settling agents, typical monitoring parameters include BOD, COD, color (ADMI), sulfide, oil and grease, phenol, TSS and heavy metals (chromium, zinc, lead, copper). Pre-treatment Wet process engineering is the most significant division in textile preparation and processing. It is a major stream in textile engineering, falling under textile chemical processing and applied science. Textile manufacturing covers everything from fiber to apparel, including yarn, fabric, fabric dyeing, printing, finishing, and garment or apparel manufacturing. There are many variable processes available at the spinning and fabric-forming stages, coupled with the complexities of the finishing and coloration processes, in the production of a wide range of products. In the textile industry, wet process engineering plays a vital role in the area of pre-treatment, dyeing, printing, and finishing of both fabrics and apparel. Coloration at the fiber or yarn stage is also included in the wet processing division. All the processes of this stream are carried out in an aqueous state or aqueous medium. The main processes of this section include singeing, desizing, scouring, bleaching, mercerizing, dyeing, printing and finishing. Singeing The process of singeing is carried out for the purpose of removing the loose hairy fibers protruding from the surface of the cloth, thereby giving it a smooth, even and clean looking face. Singeing is an essential process for goods or textile material which will be subjected to mercerizing, dyeing and printing, to obtain the best results from these processes. The fabric passes over the brushes to raise the fibers, then passes over a plate heated by gas flames. When done to fabrics containing cotton, this results in increased water affinity, better dyeing characteristics, improved reflection, no "frosty" appearance, a smoother surface, better clarity in printing, improved visibility of the fabric structure, less pilling and decreased contamination through the removal of fluff and lint. Singeing machines can be of three types: plate singeing, roller singeing, or gas singeing. Gas singeing is widely used in the textile industry. In gas singeing, a flame comes into direct contact with the fabric and burns the protruding fibers. Here, flame height and fabric speed are the main concerns in minimizing fabric damage. Singeing is performed only on woven fabric; in the case of knit fabric, a similar process known as bio-polishing uses an enzyme to remove the protruding fibers. Desizing Desizing is the process of removing sizing materials from the fabric; size is applied to the yarn in order to increase its strength so that it can withstand the friction of the loom. Fabric which has not been desized is very stiff and is difficult to treat with the solutions used in subsequent processes. After the singeing operation, the sizing material is removed by making it water-soluble and washing it out with warm water.
Desizing can be done by either the hydrolytic method (rot steep, acid steep, enzymatic steep) or the oxidative method (chlorine, chloride, bromite, hydrogen peroxide). Depending on the sizing materials that have been used, the cloth may be steeped in a dilute acid and then rinsed, or enzymes may be used to break down the sizing material. Enzymes are applied in the desizing process if starch is used as the sizing material. Carboxymethyl cellulose (CMC) and polyvinyl alcohol (PVA) are often used as sizing materials. Scouring Scouring is a chemical washing process carried out on cotton fabric to remove natural wax and non-fibrous impurities (e.g. the remains of seed fragments) from the fibers and any added soiling or dirt. Scouring is usually carried out in iron vessels called kiers. The fabric is boiled in an alkali, which forms a soap with free fatty acids (saponification). A kier is usually enclosed, so the solution of sodium hydroxide can be boiled under pressure, excluding oxygen which would degrade the cellulose in the fiber. If the appropriate reagents are used, scouring will also remove size from the fabric, although desizing often precedes scouring and is considered to be a separate process known as fabric preparation. Preparation and scouring are prerequisites to most of the other finishing processes. Even the most naturally white cotton fibers are yellowish at this stage; thus the next process, bleaching, is required. The three main processes involved in scouring are saponification, emulsification and detergency. The main chemical reagent used in cotton scouring is sodium hydroxide, which converts saponifiable fats and oils into soaps, dissolves mineral matter and converts pectose and pectin into their soluble salts. Another scouring chemical is detergent, which is an emulsifying agent and removes dust and dirt particles from the fabric. Since sodium hydroxide can damage the cotton substrate, and in order to reduce the alkali content in the effluent, bio-scouring has been introduced into the scouring process, in which a biological agent such as an enzyme is used. Bleaching Bleaching improves whiteness by removing natural coloration and remaining trace impurities from the cotton; the degree of bleaching necessary is determined by the required whiteness and absorbency. Cotton, being a vegetable fiber, is bleached using an oxidizing agent, such as dilute sodium hypochlorite or dilute hydrogen peroxide. If the fabric is to be dyed a deep shade, then lower levels of bleaching are acceptable. However, for white bedsheets and medical applications, the highest levels of whiteness and absorbency are essential. Reductive bleaching is also carried out, using sodium hydrosulphite. Fibers like polyamide, polyacrylics and polyacetates can be bleached using reductive bleaching technology. After scouring and bleaching, optical brightening agents (OBAs) are applied to make the textile material appear whiter. These OBAs are available in different tints such as blue, violet and red. Mercerizing Mercerization is a treatment for cotton fabric and thread that gives fabric or yarns a lustrous appearance and strengthens them. The process is applied to cellulosic materials like cotton or hemp. A further possibility is mercerizing, during which the fabric is treated with a sodium hydroxide solution to cause swelling of the fibers. This results in improved luster, strength, and dye affinity.
Cotton is mercerized under tension, and all alkali must be washed out before the tension is released or shrinkage will take place. Mercerizing can take place directly on grey cloth, or after bleaching. Dyeing Dyeing is the application of dyes or pigments on textile materials such as fibers, yarns, and fabrics with the goal of achieving color with desired color fastness. Dyeing is normally done in a special solution containing dyes and particular chemical materials. Dye molecules are fixed to the fiber by absorption, diffusion, or bonding, with temperature and time being key controlling factors. The bond between dye molecule and fiber may be strong or weak, depending on the dye used. Dyeing and printing are different applications; in printing, color is applied to a localized area with desired patterns. In dyeing, it is applied to the entire textile. Solution dyeing Solution dyeing, also known as dope or spun dyeing, is the process of adding pigments or insoluble dyes to the spinning solution before the solution is extruded through the spinneret. Only manufactured fibers can be solution dyed. It is used for difficult-to-dye fibers such as olefin fibers, and for dyeing fibers for end uses that require excellent colorfastness properties. Because the color pigments become a part of the fiber, solution dyed materials have excellent colorfastness to light, washing, crocking (rubbing), perspiration, and bleach. Dyeing at the solution stage is more expensive since the equipment has to be cleaned thoroughly each time a different color is produced. Thus, the variety of colors and shades produced is limited. In addition, it is difficult to stock the inventory for each color. Decisions regarding color have to be made very early in the manufacturing process. Thus, this stage of dyeing is usually not used for apparel fabrics. Filament fibers that are produced using the wet spinning method can be dyed while the fibers are still in the coagulating bath. The dye penetration at this stage is high as the fibers are still soft. This method is known as gel dyeing. Fiber dyeing Stock dyeing, top dyeing, and tow dyeing are used to dye fibers at various stages of the manufacturing process prior to the fibers being spun into yarns. The names refer to the stage the fiber is at when it is dyed. All three are included under the broad category of fiber dyeing. Stock dyeing is dyeing raw fibers, also called stock, before they are aligned, blended, and spun into yarns. Top dyeing is dyeing worsted wool fibers after they have been combed to straighten them and remove the short fibers. The wool fiber at this stage is known as top. Top dyeing is preferred for worsted wools as the dye does not have to be wasted on the short fibers that are removed during the combing process. Tow dyeing is dyeing filament fibers before they are cut into short staple fibers. The filament fibers at this stage are known as tow. Dye penetration is excellent in fiber dyeing; therefore, the amount of dye used at this stage is also higher. Fiber dyeing is comparatively more costly than yarn, fabric, and product dyeing. The decision regarding the selection of colors has to be made early in the manufacturing process. Fiber dyeing is typically used to dye wool and other fibers that are used to produce yarns with two or more colors. Fibers for tweeds and fabrics with a "heather" look are often fiber dyed. Yarn dyeing Yarn dyeing adds color at the yarn stage. Skein, package, beam, and space dyeing methods are used to dye yarns.
In skein dyeing, the yarns are loosely wound into hanks or skeins and then dyed. The yarns have good dye penetration, but the process is slow and comparatively more expensive. In package dyeing, yarns that have been wound onto perforated spools are dyed in a pressurized tank. The process is comparatively faster, but the dye uniformity may not be as good as that of skein dyed yarn. In beam dyeing, a perforated warp beam is used instead of the spools used in package dyeing. Space dyeing is used to produce yarns with multiple colors. In general, yarn dyeing provides adequate color absorption and penetration for most materials. Thick and highly twisted yarns may not have good dye penetration. This process is typically used when different colored yarns are used in the construction of fabrics (e.g. plaids, checks, iridescent fabrics). Fabric dyeing Fabric dyeing, also known as piece dyeing, is dyeing fabric after it has been constructed. It is economical and the most common method of dyeing solid-colored fabrics. The decision regarding color can be made after the fabric has been manufactured. Thus, it is suitable for quick response orders. Dye penetration may not be good in thicker fabrics, so yarn dyeing is sometimes used to dye thick fabrics in solid colors. Various types of dyeing machines are used for piece dyeing. The selection of the equipment is based on factors such as dye and fabric characteristics, cost, and the intended end-use. Union dyeing Union dyeing is "a method of dyeing a fabric containing two or more types of fibers or yarns to the same shade so as to achieve the appearance of a solid-colored fabric". Fabrics can be dyed using a single- or multiple-step process. Union dyeing is used to dye solid colored blends and combination fabrics commonly used for apparel and home furnishings. Cross dyeing Cross dyeing is "a method of dyeing blend or combination fabrics to two or more shades by the use of dyes with different affinities for the different fibers". The cross dyeing process can be used to create heather effects, and plaid, check, or striped fabrics. Cross dyed fabrics may be mistaken for fiber or yarn-dyed materials as the fabric is not a solid color, a characteristic considered typical of piece-dyed fabrics. It is not possible to visually differentiate between cross-dyed fabrics and those dyed at the fiber or yarn stage. An example is cross dyeing a worsted wool fabric with polyester pinstripes blue: the wool yarns take up the blue dye, whereas the polyester yarns remain white. Cross dyeing is commonly used with piece or fabric dyed materials. However, the same concept is applicable to yarn and product dyeing. For example, silk fabric to be embroidered with white yarn can be embroidered prior to dyeing and then product dyed when an order is placed. Product dyeing Product dyeing, also known as garment dyeing, is the process of dyeing products such as hosiery, sweaters, and carpet after they have been produced. This stage of dyeing is suitable when all components dye the same shade (including threads). This method is used to dye sheer hosiery since it is knitted using tubular knitting machines and then stitched prior to dyeing. Tufted carpets, with the exception of carpets produced using solution dyed fibers, are often dyed after they have been tufted. This method is not suitable for apparel with many components such as lining, zippers, and sewing thread, as each component may dye differently. The exception is tinting jeans with pigments for a "vintage" look.
In tinting, color is used, whereas in other treatments such as acid-wash and stone-wash, chemical or mechanical processes are used. After garment construction, these products are given the "faded" or "used" look by finishing methods as opposed to dyeing. Dyeing at this stage is ideal for a quick response. Many t-shirts, sweaters, and other types of casual clothing are product dyed for maximum response to fashion's demand for certain popular colors. Thousands of garments are constructed from prepared-for-dye (PFD) fabric, and then dyed to colors that sell best. Dye types Acid dyes are water-soluble anionic dyes that are applied to fibers such as silk, wool, nylon, and modified acrylic fibers using neutral to acid dye baths. Attachment to the fiber is attributed, at least partly, to salt formation between anionic groups in the dyes and cationic groups in the fiber. Acid dyes are not substantive to cellulosic fibers. Basic dyes are water-soluble cationic dyes that are mainly applied to acrylic fibers but find some use for wool and silk. Usually acetic acid is added to the dyebath to help the uptake of the dye onto the fiber. Direct or substantive dyeing is normally carried out in a neutral or slightly alkaline dyebath, at or near boiling point, with the addition of either sodium chloride, sodium sulfate or sodium carbonate. Direct dyes are used on cotton, paper, leather, wool, silk, and nylon. Mordant dyes require a mordant, which improves the fastness of the dye against water, light and perspiration. The choice of mordant is very important as different mordants can change the final color significantly. Most natural dyes are mordant dyes and there is therefore a large literature base describing dyeing techniques. The most important mordant dyes are the synthetic mordant dyes, or chrome dyes, used for wool; these comprise some 30% of dyes used for wool and are especially useful for black and navy shades. The mordant, potassium dichromate, is applied as an after-treatment. Many mordants, particularly those in the heavy metal category, can be hazardous to health and extreme care must be taken in using them. Vat dyes are essentially insoluble in water and incapable of dyeing fibers directly. However, reduction in alkaline liquor produces the water-soluble alkali metal salt of the dye, which, in this leuco form, has an affinity for the textile fiber. Subsequent oxidation reforms the original insoluble dye. The color of denim is due to indigo, the original vat dye. Reactive dyes utilize a chromophore attached to a substituent that is capable of directly reacting with the fiber substrate. The covalent bonds that attach reactive dye to natural fibers make them among the most permanent of dyes. "Cold" reactive dyes, such as Procion MX, Cibacron F, and Drimarene K, are very easy to use because the dye can be applied at room temperature. Reactive dyes are by far the best choice for dyeing cotton and other cellulose fibers at home or in the art studio. Disperse dyes were originally developed for the dyeing of cellulose acetate, and are water-insoluble. The dyes are finely ground in the presence of a dispersing agent and sold as a paste, or spray-dried and sold as a powder. Their main use is to dye polyester but they can also be used to dye nylon, cellulose triacetate, and acrylic fibers. In some cases, a dyeing temperature of 130 °C is required, and a pressurized dyebath is used. The very fine particle size gives a large surface area that aids dissolution to allow uptake by the fiber. 
The dyeing rate can be significantly influenced by the choice of dispersing agent used during the grinding. Azoic dyeing is a technique in which an insoluble azo dye is produced directly onto or within the fiber. This is achieved by treating a fiber with both diazoic and coupling components. With suitable adjustment of dyebath conditions the two components react to produce the required insoluble azo dye. This technique of dyeing is unique, in that the final color is controlled by the choice of the diazoic and coupling components. This method of dyeing cotton is declining in importance due to the toxic nature of the chemicals used. Sulfur dyes are two-part "developed" dyes used to dye cotton with dark colors. The initial bath imparts a yellow or pale chartreuse color; this is then after-treated with a sulfur compound to produce the familiar dark black seen, for example, in socks. Sulfur Black 1 is the largest-selling dye by volume. Printing Textile printing is referred to as localized dyeing. It is the application of color in the form of a paste or ink to the surface of a fabric, in a predetermined pattern. Printing designs onto already dyed fabric is also possible. In properly printed fabrics, the color is bonded with the fiber, so as to resist washing and friction. Textile printing is related to dyeing but, whereas in dyeing proper the whole fabric is uniformly covered with one color, in printing one or more colors are applied to it in certain parts only, and in sharply defined patterns. In printing, wooden blocks, stencils, engraved plates, rollers, or silkscreens can be used to place colors on the fabric. Colorants used in printing contain dyes thickened to prevent the color from spreading by capillary attraction beyond the limits of the pattern or design. Finishing Textile finishing is the term used for a series of processes to which all bleached, dyed, printed, and certain grey fabrics are subjected before they are put on the market. The object of textile finishing is to render textile goods fit for their purpose or end-use and/or to improve the serviceability of the fabric. Finishing is carried out on fabric for both aesthetic and functional purposes to improve the quality and look of the fabric. Fabric may receive considerable added value by applying one or more finishing processes. Finishing processes include raising, calendering, crease resistance, filling, softening, stiffening, water repellency, moth proofing, mildew proofing, flame retardancy, anti-static treatment, and soil resistance. Calendering Calendering is an operation carried out on a fabric to improve its aesthetics. The fabric passes through a series of calender rollers by wrapping; the face in contact with a roller alternates from one roller to the next. An ordinary calender consists of a series of hard and soft (resilient) bowls (rollers) placed in a definite order. The soft bowl may be made of compressed cotton, wool paper, linen paper, or flax paper. The hard metal bowl is made of chilled iron, cast iron, or steel. The calender may consist of 3, 5, 6, 7, or 10 rollers, arranged so that no two hard rollers are in contact with each other. Pressure may be applied by compound levers and weights, or hydraulic pressure may be used as an alternative. The pressure and heat applied in calendering depend on the type of finish required.
The purposes of calendering are to upgrade the fabric hand and to impart a smooth, silky touch to the fabric, to compress the fabric and reduce its thickness, to improve the opacity of the fabric, to reduce the air permeability of the fabric by changing its porosity, to impart different degree of luster of the fabric, and to reduce the yarn slippage. Raising An important and oldest textile finishing is brushing or raising. Using this process a wide variety of fabrics including blankets, flannelettes, and industrial fabrics can be produced. The process of raising consists of lifting from the body of the fabric a layer of fibers which stands out from the surface which is termed as "pile". The formation of the pile on a fabric results in a "lofty" handle and may also subdue the weave or pattern and color of the cloth. There are two types of raising machines; the Teasel machine and the Card-wire machine. The speed of the card-wire raising machine varies from 12-15 yards per minute, which is 20-30% higher than that of teasel-raising. That is why the card-wire raising machine is widely used. Crease resistance Crease formation in woven or knitted fabric composed of cellulose during washing or folding is the main drawback of cotton fabrics. The molecular chains of the cotton fibers are attached to each other by weak hydrogen bonds. During washing or folding, the hydrogen bonds break easily, and after drying new hydrogen bonds form with the chains in their new position and the crease is stabilized. If crosslink between the polymer chains can be introduced by cross-linking chemicals, then it reinforces the cotton fibers and prevents the permanent displacement of the polymer chains when the fibers are stressed. It is therefore much more difficult for creases to form or for the fabric to shrink on washing. Crease-resist finishing of cotton includes the following steps: Padding the material with a solution containing a condensation polymer precursor and a suitable polymerization catalyst. Drying and curing in a stenter frame to form crosslink between the polymer chain and adjacent polymer chain. The catalyst allows the reaction to be carried out 130-180 degree temperature range usually employed in the textile industry and within the usual curing time (within 3 minutes, maximum). Mainly three classes of catalysts are commonly used now a day. Ammonium salts, e.g.Ammonium chloride, sulphate and nitrate. Metal salts e.g. Magnesium chloride, Zinc nitrate, Zinc chloride. Catalyst mixture e.g. magnesium chloride with added organic and inorganic acids or acid donors. The purpose of the additives is to offset or counterbalance partly or completely the adverse effect of the crosslinking agent. Thus softening and smoothing agents are applied not only to improve the handle but also to compensate as much as possible for losses in tear strength and abrasion resistance. Every resin finish recipe contains surfactants as emulsifiers, wetting agents and stabilizers. these surface-active substances are necessary to ensure that the fabric is wet rapidly and thoroughly during padding and the components are stable in the liquor. See also Cationization of cotton Dyeing Finishing (textiles) References Textile engineering
Wet process engineering
[ "Physics", "Engineering" ]
5,871
[ "Applied and interdisciplinary physics", "Textile engineering" ]
38,972,404
https://en.wikipedia.org/wiki/CARTaGENE%20biobank
CARTaGENE is a population-based cohort based on an ongoing and long-term health study of 40,000 men and women in Québec. It is a regional cohort member of the Canadian Partnership for Tomorrow's Health (CanPath). The project's core mandate is to identify the genetic and environmental causes of common chronic diseases affecting the Québecois population, and to develop personalized medicine and public policy initiatives targeting high-risk groups for the public. CARTaGENE is under the scientific direction of Sébastien Jacquemont, Ekaterini Kritikou, and Philippe Broët. Based in Montréal, Québec, Canada. It is operated under the infrastructure of the Sainte-Justine Children's Hospital University Health Center and has seen funding from Genome Canada, the Canadian Foundation for Innovation, Génome Québec and the Canadian Partnership Against Cancer (CPAC) since 2007, among other sources. The program was initially founded by Professors Claude Laberge and Bartha Knoppers, and developed through two phases of participant recruitment under the direction of Professor Philip Awadalla as Scientific Director of the cohort from 2009 to 2015, who is now the National Scientific Director of the Canadian Partnership for Tomorrow's Health (CanPath). Design The CARTaGENE cohort was set up to recruit men and women aged 40–69 years old from Québec representing an age range most at risk for developing chronic diseases, including cardiovascular disease, metabolic disorders like diabetes mellitus and cancer, among others. Taking place between August 2009 and October 2010, 20,007 participants were enrolled in its first phase of recruitment (Phase A) and between December 2012 and February 2015, a new wave of recruitment (Phase B) has enrolled an additional 20,000 participants. The participants were randomly selected and tracked based on their files in the governmental health administrative databases (RAMQ-Régie de l'Assurance Maladie du Québec). Participants were also selected to be representative of 1% of the metropolitan areas of Québec, specifically Montreal, Québec City, Sherbrooke and the Saguenay. Because of administrative linkage between the RAMQ and the CHU Sainte-Justine, participants can be passively followed for the next 50 years. Information packages about the project were first sent by mail and potential participants were contacted by telephone to enroll and schedule visits to one of the clinical assessment sites. CARTaGENE is part of a Canada-wide cohort collecting samples across the country whose methods were applied in the design of the five cohorts within the Canadian Partnership for Tomorrow Project (CPTP). Molecular profiling Detailed clinical chemistry and complete blood counts for each of the participants was collected. Detailed lipid profiles, Hba1c, bone density and creatinine were also collected. Blood collection was designed such that DNA and RNA can be extracted for future use, allowing for population level gene expression analysis and genotyping. Storage conditions are also optimized for proteomics and lipidomics. The CARTaGENE project has a Systems Genomics program to identify critical events associated with a number of cardiovascular related endophenotypes. It is developing integrative technologies and approaches to capture single-nucleotide polymorphisms (SNPs) associated with endophenotypes. Typical studies Typical studies include population-based longitudinal studies. 
Researchers may try to evaluate the contribution of a particular lifestyle, environmental and genetic factors and a chosen endophenotype. The use of endophenotypes facilitates a more realistic dataset on gene-environment interactions influencing particular endophenotypes. Development There was an initial pilot study done under the direction of Professor Bartha Knoppers (McGill University) and Professor Claude Laberge (Laval University) that involved 223 participants who responded to a questionnaire based on the P3G DataSHaPER model. The scales used in the questionnaires were developed and revised by more than 30 experts from various fields and are widely used. These included the Patient Health questionnaire, the General Anxiety Scale, the Job Content and International Physical Activity Questionnaire (IPAQ). Beginning in 2009, under the direction of Professor Philip Awadalla, a total of 12 assessment sites across the province were established for clinical and physical assessments. Following initial phone contact, participants were invited to come to the assessment site and sign a consent form. They were asked to complete a self-administered demographic and lifestyle questionnaire and an interviewer-administered health questionnaire. A genealogical questionnaire was also included for completion online. Non-invasive measurements were taken that included basic measurements such as weight, height, and blood pressure. Blood, saliva and urine were collected and preserved at the Biobanque Génome Québec and the affiliated University Hospital Center in Chicoutimi (Biobanque GQ-CAURC) for future use. Surveys about nutrition are also included and residential information, occupational history and food frequency data questionnaires are administered. Ending in February 2015, a total of over 40,000 participants were recruited to the CARTaGENE program and data is accessible through the Canadian Partnership for Tomorrow Program portal. Ethics and governance CARTaGENE complies with local, national, international laws and ethical norms. These include the Canadian Charter of Rights and Freedoms, the Charter of Human Rights and Freedoms, the Civil Code of Québec, the Declaration of Helsinki-World Medical Association (revised in 2008), the Universal Declaration on the Human Genome and Human Rights (1997) and the Universal Declaration on Bioethics and Human Rights: UNESCO (2005), among others. CARTaGENE also complies with recommendations by the "Plan d'action ministériel en éthique de la recherche et en intégrité scientifique" from the MSSS (1998), the "Guide d'élaboration de normes de gestion des banques de données" from the MSSS (2004) and the "Politique de la recherche avec des êtres humains" (2004), among others. Legal monitoring CARTaGENE is monitored by the research ethics Board of the Sainte-Justine University Health Center. It is also under the supervision of the Information Access Commission (the CAI). This organization authorizes the transfer of information from the RAMQ to the call center that contacts participants and all personal information held by CARTaGENE is subject to surveillance by the CAI. Access Participant medical history is maintained at a centralized governmental database (RAMQ), allowing researchers to track these individuals for the duration of the study and monitor all medical events, prescriptions of drugs and deaths. 
The personal information connecting medical records to the patient identification undergoes de-identification and is coded by CARTaGENE, but handled and managed by the RAMQ ensuring patient confidentiality. Researchers can request access to the CARTaGENE data through the Canadian Partnership for Tomorrow Project Portal. Researchers must submit an application and undergo evaluation by an independent Sample and Data Access Committee (SDAC). The dataset is available to researchers in industry and academic institutions. Applications detailing their project proposal are a requirement for review by an independent committee, the Sample and Data Access Committee (SDAC). The scientific management of CARTaGENE along with the SDAC determines if data or results should need to be returned to the project. Submission for access to the dataset is done directly online. Recruitment and reassessment Health reassessments will take place regularly, using web-based questionnaires in the coming years. Patients may be tracked for up to 50 years based on their linkage to governmental health administrative databases. Harmonization CARTaGENE has been designed such that its infrastructure including the collection of samples, measurements of biological variables and the storage procedures can be harmonized with other international large-scale cohorts via the Public Population Project in Genomics (P3G) platform. A nationwide effort is underway to collect samples from participants across Canada, with CaG representing one of five cohorts within the Canadian Partnership for Tomorrow (CPTP). CPTP has recruited over 300,000 participants to create a databank on Canadian health. Opinion and media The public was generally receptive to the creation of the CARTaGENE project and an independent study reported on the consultations held with members of the public. The main concerns raised were about safeguarding medical records and confidentiality, respect for individual transparency, the donor's right to feedback and governance. Print media Local and national media have reported on CARTaGENE. La Presse/La Presse Canadienne, Lia Levesque (15 January 2013) « Un ratio «inquiétant» de Québécois ont une maladie chronique à leur insu » Le Soleil, Jean François Cliche (15 January 2013) « Projet Cartagène: jusqu'à un Québécois sur deux malade sans le savoir » Le Droit/La Presse Canadienne, Lia Levesque (January 2013) « Des Québécois atteints de maladies chroniques sans le savoir » La Tribune/La Presse Canadienne, Lia Levesque (14 January 2013) « Des Québécois atteints de maladies chroniques sans le savoir » Le Devoir, Pauline Gravel (15 January 2013) « Cartagène commence à porter ses fruits » The Gazette, Charlie Fidelman (15 January 2013). Quebec's CARTaGene genetic study shows "huge portion" of population unaware of chronic diseases. The Ottawa Citizen, Charlie Fidelman (15 January 2013). Quebec's CARTaGene genetic study shows "huge portion" of population unaware of chronic diseases. Le Devoir, Pauline Gravel (18 January 2013) « Données génétiques - La biobanque Cartagène est hautement sécurisée » Television Radio-Canada TV, Catherine Kovacks (14 January 2013) « Cartagène à la recherche de volontaires » CBC News Montreal Late, Nancy Woods (14 January 2013) Montreal Late 12:28 References Biobanks
CARTaGENE biobank
[ "Biology" ]
2,054
[ "Bioinformatics", "Biobanks" ]
24,815,322
https://en.wikipedia.org/wiki/Pyruvate%20cycling
Pyruvate cycling commonly refers to an intracellular loop of spatial movements and chemical transformations involving pyruvate. The spatial movements occur between the mitochondria and the cytosol, and the chemical transformations create various Krebs cycle intermediates. In all variants, pyruvate is imported into the mitochondrion for processing through part of the Krebs cycle. In addition to pyruvate, alpha-ketoglutarate may also be imported. At various points, the intermediate product is exported to the cytosol for additional transformations and then re-imported. Three specific pyruvate cycles are generally considered, each named for the principal molecule exported from the mitochondrion: malate, citrate, and isocitrate. Other variants may exist, such as dissipative or "futile" pyruvate cycles. This cycle is usually studied in relation to glucose-stimulated insulin secretion (GSIS); there is thought to be a relationship between the insulin response and the NADPH produced by this cycle, but the specifics are not clear, and particular confusion exists about the role of the malic enzymes. It has been observed in various cell types including islet cells. The pyruvate-malate cycle was described in liver and kidney preparations as early as 1971. References Further reading External links Metabolism
Pyruvate cycling
[ "Chemistry", "Biology" ]
275
[ "Cellular processes", "Biochemistry", "Metabolism" ]
24,817,815
https://en.wikipedia.org/wiki/Oramir
Oramir Semiconductor Equipment Ltd. is an Israeli company that develops advanced laser cleaning technologies for semiconductor wafers, used during their manufacturing process. Oramir is located in Rehovot, Israel. History Oramir was founded in 1992 by Fairchild Corporation, Teuza Venture Capital Fund and Rafael Development Corporation of Israel. Oramir was named after Amir Sinai who was killed in service as an IDF special unit NCO in July 1984, during the war in Lebanon. Dan Sinai, Amir's father, was one of Oramir's founders. Oramir's notability derives from developing the advanced technology for cleaning silicon wafers in a one step dry process. Particles and other contaminants can be removed from a silicon substrate by a patented laser based technology. Applied Materials Inc. (NASDAQ: AMAT), a semiconductor equipment manufacturer, acquired Oramir for $21 million on June 27, 2001. See also Applied Materials Inc. Silicon Wadi References External links www.appliedmaterials.com Semiconductor device fabrication Equipment semiconductor companies Semiconductor companies of Israel Mergers and acquisitions of Israeli companies 2001 mergers and acquisitions
Oramir
[ "Materials_science", "Engineering" ]
232
[ "Equipment semiconductor companies", "Semiconductor device fabrication", "Semiconductor fabrication equipment", "Microtechnology" ]
24,818,800
https://en.wikipedia.org/wiki/Cytidine%20diphosphate%20glucose
Cytidine diphosphate glucose, often abbreviated CDP-glucose, is a nucleotide-linked sugar consisting of cytidine diphosphate and glucose. This nucleotide saccharide participates in the synthesis of deoxy sugars such as paratose and tyvelose. Metabolism CDP-glucose is produced from CTP and glucose-1-phosphate by the enzyme glucose-1-phosphate cytidylyltransferase. CDP-glucose is an important metabolite in certain bacteria, which synthesize O antigens from it. CDP-glucose can also be used as a substrate for glycogenin, alongside its native substrate, UDP-glucose. The same is true for TDP-glucose. References Biochemistry Nucleotides
Cytidine diphosphate glucose
[ "Chemistry", "Biology" ]
160
[ "Biochemistry", "nan" ]
24,820,253
https://en.wikipedia.org/wiki/Microwave%20heat%20distribution
Microwave heat distribution is the distribution (allocation) of heat release inside a microwave-absorptive material irradiated with high-intensity microwaves. The pattern of microwave heat distribution depends on many physical parameters, which may include the electromagnetic field, the specific absorption rate and structure of the processed material, the geometrical dimensions of the processing cavity, etc. Most industrial microwave heating applications need a uniform heat distribution. For example, the uniformity of microwave heat distribution is a key parameter in microwave food sterilization, because of the danger to human health if the food has not been heated evenly to the temperature needed to neutralize any bacterial population. There are many different methods for achieving a uniform heat distribution inside the irradiated material. They may involve computer simulation and mechanical mechanisms such as turntables and stirrers. A proper microwave energy pattern is necessary for attaining a uniform heat release. See also Susceptor Electromagnetic-Temperature Control & Optimization of Microwave Thermal Processing A hybrid technique for computing the power distribution generated in a lossy medium during microwave heating References Microwave processing of Materials, National Research Council, Publication NMAB-473, Washington, DC, 1994 J. Chang and M. Brodwin, "A new applicator for efficient uniform heating using a circular cylindrical geometry," J. Microwave Power & Electromagnetic Energy, vol. 28, pp. 32–40, March 1993 External links Microwave heat distribution visualization Electric and magnetic fields in matter
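The way the material's dielectric properties shape the heat pattern can be illustrated with a simple one-dimensional plane-wave estimate. The sketch below is illustrative only and is not drawn from the article or its references: the frequency and complex permittivity are hypothetical placeholder values, and a real process design would rely on measured dielectric data and full electromagnetic simulation of the applicator cavity.

```python
import numpy as np

# Hypothetical material properties (roughly a water-rich food at 2.45 GHz);
# real values must come from measurement, not from this sketch.
FREQ_HZ = 2.45e9          # magnetron frequency
EPS_REAL = 64.0           # relative dielectric constant (epsilon')
EPS_IMAG = 15.0           # relative loss factor (epsilon'')
C0 = 299_792_458.0        # speed of light in vacuum, m/s

def penetration_depth(freq_hz, eps_real, eps_imag):
    """Depth at which the dissipated power falls to 1/e of its surface value.

    Uses the general plane-wave expression; for low-loss materials it reduces
    to the familiar Dp ~ lambda0 * sqrt(eps') / (2 * pi * eps'')."""
    lam0 = C0 / freq_hz
    ratio = eps_imag / eps_real
    return lam0 / (2.0 * np.pi * np.sqrt(2.0 * eps_real)) / np.sqrt(np.sqrt(1.0 + ratio**2) - 1.0)

def relative_power(depth_m, dp_m):
    """Fraction of the surface power density still being dissipated at a given depth."""
    return np.exp(-depth_m / dp_m)

if __name__ == "__main__":
    dp = penetration_depth(FREQ_HZ, EPS_REAL, EPS_IMAG)
    print(f"Penetration depth: {dp * 100:.2f} cm")
    for z_cm in (0.5, 1.0, 2.0, 4.0):
        print(f"  power at {z_cm:4.1f} cm: {relative_power(z_cm / 100, dp):.2%} of surface value")
```

For water-rich loads at 2.45 GHz this kind of estimate gives a penetration depth on the order of a centimetre, which is one reason thick items heat unevenly and why turntables, stirrers, and careful cavity design matter in practice.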
Microwave heat distribution
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
302
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
24,822,937
https://en.wikipedia.org/wiki/Superslow%20process
Superslow processes are processes in which values change so little that detecting the change is very difficult, because it is small in comparison with the measurement error. Applications Most of the time, superslow processes lie beyond the scope of investigation simply because they are so slow. Gaps of this kind can easily be found in biology, astronomy, physics, mechanics, economics, linguistics, ecology, gerontology, and other fields. Biology: Traditional scientific research in this area has focused on describing certain brain reactions. Mathematics: When a fluid flows through long, thin tubes it forms stagnation zones where the flow becomes almost immobile. If the ratio of tube length to diameter is large, then the potential function and the stream function are almost constant over very extended regions. The situation seems uninteresting, but if we remember that these minor changes occur over extremely long intervals, we see a series of first-class problems that require the development of special mathematical methods. Mathematics: A priori information regarding the stagnation zones contributes to optimization of the computational process, since the unknown functions can be replaced by the corresponding constants in such zones. Sometimes this makes it possible to significantly reduce the amount of computation, for example in the approximate calculation of conformal mappings of strongly elongated rectangles. Economic geography: The results obtained are particularly useful for applications in economic geography. In a case where the function describes the intensity of commodity trade, a theorem about its stagnation zones gives (under appropriate restrictions on the selected model) estimates of the geometric dimensions of the stagnation zone of the world-economy (for more information about a stagnation zone of the world-economy, see Fernand Braudel, Les Jeux de L'echange). For example, if a subarc of a domain boundary has zero transparency, and the flux of the gradient vector field of the function through the rest of the boundary is small enough, then the domain is a stagnation zone for such a function. Stagnation-zone theorems are closely related to pre-Liouville theorems on estimating the oscillation of solutions, whose direct consequences are the various versions of the classical Liouville theorem stating that an entire doubly periodic function is identically constant. Identifying which parameters affect the sizes of stagnation zones opens up opportunities for practical recommendations on targeted changes in the configuration (reduction or enlargement) of such zones. References Applied sciences Mathematical analysis Dynamical systems
Superslow process
[ "Physics", "Mathematics" ]
505
[ "Mathematical analysis", "Mechanics", "Dynamical systems" ]
24,823,515
https://en.wikipedia.org/wiki/C20H19NO5
The molecular formula C20H19NO5 (molar mass: 353.37 g/mol, exact mass: 353.1263 u) may refer to: Chelidonine Lennoxamine LY-341,495 LY-344,545 Protopine Molecular formulas
C20H19NO5
[ "Physics", "Chemistry" ]
62
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,824,635
https://en.wikipedia.org/wiki/Electrochemical%20hydrogen%20compressor
An electrochemical hydrogen compressor is a hydrogen compressor in which hydrogen is supplied to the anode and compressed hydrogen is collected at the cathode, with an exergy efficiency up to and even beyond 80% for pressures up to 10,000 psi (700 bar). Principle A multi-stage electrochemical hydrogen compressor incorporates membrane electrode assemblies (MEAs), each built around a proton exchange membrane (PEM), connected in series to reach higher pressures. Hydrogen fed to the compressor is oxidized at the anode of each cell to form protons and electrons; when a current is passed through the MEA, the protons are electrochemically driven across the membrane to the cathode, where they recombine with the rerouted electrons to form hydrogen at a higher pressure. This type of compressor has no moving parts and is compact. With electrochemical compression of hydrogen a pressure of 14,500 psi (1,000 bar) has been achieved; this world record was set by HyET of the Netherlands in 2011. Water vapor partial pressure, current density, operating temperature, and hydrogen back-diffusion due to the pressure gradient all affect the maximum output pressure. Applications Electrochemical hydrogen compressors have been proposed for use in hydrogen refueling stations to pressurize hydrogen gas for storage. They have also been applied in novel refrigeration systems to pressurize hydrogen for absorption into metal hydrides, or to pressurize other working fluids (such as refrigerants), as demonstrated by Xergy Inc., a winner of GE's global Ecomagination awards for 2011. These electrochemical compressors are noiseless, scalable, modular, and highly efficient, and do not use CFCs. See also Guided rotor compressor Hydride compressor Ionic liquid piston compressor Gas diffusion electrode Linear compressor Timeline of hydrogen technologies Concentration cell Work (thermodynamics) References Gas compressors Hydrogen technologies
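Because the compression is driven electrochemically, the minimum (reversible) work follows directly from the Nernst equation. The sketch below is a back-of-the-envelope illustration, not taken from the article: it assumes ideal, isothermal, loss-free operation and hypothetical feed and delivery pressures, whereas a real cell needs extra voltage to overcome ohmic and mass-transport losses and loses some throughput to back-diffusion.

```python
import math

R = 8.314          # J/(mol*K), universal gas constant
F = 96_485.0       # C/mol, Faraday constant
N_ELECTRONS = 2    # electrons transferred per H2 molecule

def nernst_voltage(p_out_bar, p_in_bar, temp_k=323.0):
    """Ideal (loss-free) cell voltage needed to pump H2 from p_in to p_out isothermally."""
    return (R * temp_k) / (N_ELECTRONS * F) * math.log(p_out_bar / p_in_bar)

def ideal_work_kj_per_mol(p_out_bar, p_in_bar, temp_k=323.0):
    """Reversible electrical work per mole of H2: W = n*F*E = R*T*ln(p_out/p_in)."""
    return R * temp_k * math.log(p_out_bar / p_in_bar) / 1000.0

if __name__ == "__main__":
    # Assumed operating point: feed at 1 bar, delivery up to 700 bar, about 50 degrees C.
    for p_out in (10.0, 100.0, 700.0):
        e = nernst_voltage(p_out, 1.0)
        w = ideal_work_kj_per_mol(p_out, 1.0)
        print(f"1 -> {p_out:5.0f} bar: E = {e * 1000:6.1f} mV, ideal work = {w:5.2f} kJ/mol H2")
```

The logarithmic dependence of the ideal work on the pressure ratio is what makes single-stage electrochemical compression to very high pressures attractive in principle.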
Electrochemical hydrogen compressor
[ "Chemistry" ]
389
[ "Gas compressors", "Turbomachinery" ]
24,824,947
https://en.wikipedia.org/wiki/Discontinuity%20layout%20optimization
Discontinuity layout optimization (DLO) is an engineering analysis procedure which can be used to directly establish the amount of load that can be carried by a solid or structure prior to collapse. Using DLO the layout of failure planes, or 'discontinuities', in a collapsing solid or structure are identified using mathematical optimization methods (hence the name, 'discontinuity layout optimization'). It is assumed that failure occurs in a ductile or 'plastic' manner. How it works The DLO procedure involves a number of steps, as outlined below. The set of potential discontinuities can include discontinuities which crossover one another, allowing complex failure patterns to be identified (e.g. involving ‘fan’ mechanisms, where many discontinuities radiate from a point). DLO can be formulated in terms of equilibrium relations ('static' formulation) or in terms of displacements ('kinematic' formulation). In the latter case the objective of the mathematical optimization problem is to minimize the internal energy dissipated along discontinuities, subject to nodal compatibility constraints. This can be solved using efficient linear programming techniques and, when combined with an algorithm originally developed for truss layout optimization problems, it has been found that modern computer power can be used to directly search through very large numbers of different failure mechanism topologies (up to approx. 21,000,000,000 different topologies on current generation PCs). A full description of the application of DLO to plane strain problems has been provided by Smith and Gilbert, to masonry arch analysis by Gilbert et al, to slab problems by Gilbert et al, and to 3D problems by Hawksbee et al, and Zhang. DLO vs FEM Whereas with finite element analysis (FEM), a widely used alternative engineering analysis procedure, mathematical relations are formed for the underlying continuum mechanics problem, DLO involves analysis of a potentially much simpler discontinuum problem, with the problem being posed entirely in terms of the individual discontinuities which interconnect nodes laid out across the body under consideration. Additionally, when general purpose finite element programs are used to analyse the collapse state often relatively complex non-linear solvers are required, in contrast to the simpler linear programming solvers generally required in the case of DLO. Compared with non-linear FEM, DLO has the following advantages and disadvantages: Advantages The collapse state is analysed directly, without the need to iterate. This means that solutions can generally be obtained much more quickly. The output, in the form of animated failure mechanisms is generally easier to interpret. Problems involving singularities in the stress or displacement fields can be dealt with without difficulty. As DLO is much simpler than non-linear FEM, users require less training in order to use the method effectively. Disadvantages As with other limit analysis techniques, DLO provides no information about displacements (or stresses) prior to collapse. DLO is fundamentally based in modelling compatible mechanisms for soil collapse and is therefore an upper bound method. As a result, the method will always predict an unconservative collapse load. 
Although the discontinuity layout generation and linear programming optimization schemes used in DLO will usually ensure that a good approximation of the true collapse mechanism is found, there is no way of discerning by how much the predicted collapse load will exceed the true collapse load without comparison to an independent lower bound analysis. DLO is a relatively new technique so only a limited range of software tools are currently available. Applications DLO is perhaps most usefully applied to engineering problems where traditional hand calculations are difficult, or simplify the problem too much, but where recourse to more complex non-linear FEM is not justified. Applications include: Analysis of geotechnical engineering problems (e.g. slope stability, bearing capacity or retaining wall problems). Analysis of concrete slab problems. Analysis of metal forming or extrusion problems. Software using Discontinuity Layout Optimization MATLAB script (2009-) Provided by the CMD research group at the University of Sheffield, UK. LimitState:GEO (2008-) General purpose geotechnical software application. LimitState:SLAB (2015-) Slab analysis software application. References External links DLO teaching resources provided by the Geotechnical Engineering Research Group at the University of Sheffield, UK. Structural analysis Soil mechanics
Discontinuity layout optimization
[ "Physics", "Engineering" ]
896
[ "Structural engineering", "Applied and interdisciplinary physics", "Structural analysis", "Soil mechanics", "Mechanical engineering", "Aerospace engineering" ]
24,827,002
https://en.wikipedia.org/wiki/Logical%20depth
Logical depth is a measure of complexity for individual strings devised by Charles H. Bennett based on the computational complexity of an algorithm that can recreate a given piece of information. It differs from Kolmogorov complexity in that it considers the computation time of the algorithm with nearly minimal length, rather than the length of the minimal algorithm. Informally, the logical depth of a string x at a significance level s is the time required to compute x by a program no more than s bits longer than the shortest program that computes x. Formally, let p* be the shortest program that computes a string x on some universal computer U. Then the logical depth of x at the significance level s is given by $D_s(x) = \min\{\, T(p) : |p| \le |p^*| + s \text{ and } U(p) = x \,\}$, where $T(p)$ is the number of computation steps that p made on U to produce x and halt. See also Effective complexity Self-dissimilarity Forecasting complexity Sophistication (complexity theory) References Information theory Computational complexity theory Measures of complexity
Logical depth
[ "Mathematics", "Technology", "Engineering" ]
177
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
40,260,097
https://en.wikipedia.org/wiki/List%20of%20smallest%20known%20stars
This is a list of stars, neutron stars, white dwarfs and brown dwarfs which are the least voluminous known (the smallest stars by volume). List Notable small stars This is a list of small stars that are notable for characteristics that are not listed separately. Smallest stars by type Timeline of smallest red dwarf star recordholders Red dwarfs are considered the smallest known stars that sustain active fusion, and the smallest possible stars that are not brown dwarfs. Notes References
List of smallest known stars
[ "Physics", "Mathematics" ]
100
[ "Smallest things", "Quantity", "Physical quantities", "Size" ]
40,260,318
https://en.wikipedia.org/wiki/Scattered%20space
In mathematics, a scattered space is a topological space X that contains no nonempty dense-in-itself subset. Equivalently, every nonempty subset A of X contains a point isolated in A. A subset of a topological space is called a scattered set if it is a scattered space with the subspace topology. Examples Every discrete space is scattered. Every ordinal number with the order topology is scattered. Indeed, every nonempty subset A contains a minimum element, and that element is isolated in A. A space X with the particular point topology, in particular the Sierpinski space, is scattered. This is an example of a scattered space that is not a T1 space. The closure of a scattered set is not necessarily scattered. For example, in the Euclidean plane take a countably infinite discrete set A in the unit disk, with the points getting denser and denser as one approaches the boundary. For example, take the union of the vertices of a series of n-gons centered at the origin, with radius getting closer and closer to 1. Then the closure of A will contain the whole circle of radius 1, which is dense-in-itself. Properties In a topological space X the closure of a dense-in-itself subset is a perfect set. So X is scattered if and only if it does not contain any nonempty perfect set. Every subset of a scattered space is scattered. Being scattered is a hereditary property. Every scattered space X is a T0 space. (Proof: Given two distinct points x, y in X, at least one of them, say x, will be isolated in {x, y}. That means there is a neighborhood of x in X that does not contain y.) In a T0 space the union of two scattered sets is scattered. Note that the T0 assumption is necessary here. For example, if X = {a, b} is a two-point set with the indiscrete topology, the subsets {a} and {b} are both scattered, but their union, X, is not scattered as it has no isolated point. Every T1 scattered space is totally disconnected. (Proof: If C is a nonempty connected subset of X, it contains a point x isolated in C. So the singleton {x} is both open in C (because x is isolated) and closed in C (because of the T1 property). Because C is connected, it must be equal to {x}. This shows that every connected component of X has a single point.) Every second countable scattered space is countable. Every topological space X can be written in a unique way as the disjoint union of a perfect set and a scattered set. Every second countable space X can be written in a unique way as the disjoint union of a perfect set and a countable scattered open set. (Proof: Use the perfect + scattered decomposition and the fact above about second countable scattered spaces, together with the fact that a subset of a second countable space is second countable.) Furthermore, every closed subset of a second countable X can be written uniquely as the disjoint union of a perfect subset of X and a countable scattered subset of X. This holds in particular in any Polish space, which is the content of the Cantor–Bendixson theorem. Notes References Engelking, Ryszard, General Topology, Heldermann Verlag Berlin, 1989. Properties of topological spaces
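For reference, the defining property and the perfect-plus-scattered decomposition described above can be restated in symbols. The block below is a formal restatement in standard notation added for clarity; the letters chosen here (P for the perfect part, S for the scattered part) are not taken from the article.

```latex
% A space X is scattered iff every nonempty subset has a point isolated in it:
X \text{ is scattered} \iff
  \forall A \subseteq X \ \bigl( A \neq \emptyset \implies
  \exists x \in A \ \exists U \text{ open in } X :\ U \cap A = \{x\} \bigr).

% Unique decomposition of an arbitrary topological space
% into a perfect part and a scattered part:
X = P \cup S, \qquad P \cap S = \emptyset, \qquad
P \text{ perfect}, \quad S \text{ scattered}.
```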
Scattered space
[ "Mathematics" ]
685
[ "Properties of topological spaces", "Topological spaces", "Topology", "Space (mathematics)" ]
44,936,061
https://en.wikipedia.org/wiki/Online%20deliberation
Online deliberation is a broad term used to describe many forms of non-institutional, institutional and experimental online discussions. The term also describes the emerging field of practice and research related to the design, implementation and study of deliberative processes that rely on the use of electronic information and communications technologies (ICT). Although the Internet and social media have fostered discursive participation and deliberation online through computer-mediated communication, the academic study of online deliberation started in the early 2000s. Effective support for online deliberation A range of studies have suggested that group size, volume of communication, interactivity between participants, message characteristics, and social media characteristics can impact online deliberation. and that democratic deliberation varies across platforms. For example, news forums have been shown to have the highest degree of deliberation followed by news websites, and then Facebook. Differences in the effectiveness of platforms as supporting deliberation has been attributed based on numerous factors such as moderation, the availability of information, and focusing on a well defined topic. A limited number of studies have explored the extent to which online deliberation can produce similar results to traditional, face-to-face deliberation. A 2004 deliberative poll comparing face-to-face and online deliberation on U.S. foreign policy found similar results. A similar study in 2012 in France found that, compared to the offline process, online deliberation was more likely to increase women’s participation and to promote the justification of arguments by participants. Research on online deliberation suggests that there are five key design considerations that will affect the quality of dialogue: asynchronous communication vs synchronous communication, post hoc moderation vs pre-moderation, empowering spaces vs un-empowering spaces, asking discrete questions vs broad questions, and the quality of information. Other scholars have suggested that successful online deliberation follows four central rules: discussions must be inclusive, rational-critical, reciprocal and respectful. In general, online deliberation require participants to be able to work together comfortably in order to make the best possible deliberations which can often require rules and regulations that help members feel comfortable with one another. Challenges Researchers have questioned the utility of online deliberation as an extension of the public sphere, arguing the idea that online deliberation is no less beneficial than face-to-face interaction. Computer-mediated discourse is deemed impersonal, and is found to encourage online incivility. Furthermore, users who participate in online discussions about politics are found to make comments only in groups that agree with their own views, indicating the possibility that online deliberation mainly promotes motivated reasoning and reinforces preexisting attitudes. Related Disciplines Scholarly research into online deliberation is interdisciplinary and includes practices such as online consultation, e-participation, e-government, Citizen-to-Citizen (C2C), online deliberative polling, crowdsourcing, online facilitation, online research communities, interactive e-learning, civic dialogue in Internet forums and online chat, and group decision making that utilizes collaborative software and other forms of computer-mediated communication. 
Work in all these endeavors is tied together by the challenge of using electronic media in a way that deepens thinking and improves mutual understanding. See also Argument map Computer supported cooperative work Deliberative democracy E-democracy Web annotation Popular tools: Loomio DemocracyOS LiquidFeedback Pol.is References External links Online deliberation resources Online Deliberation: A Review of The Literature, Bang The Table (blog), 7 Aug. 2017. Online Deliberation by Participedia Decidim- Free Open-Source participatory democracy software Conferences Developing and Using Online Tools for Deliberative Democracy - the first conference on online deliberation, Carnegie Mellon University (June 2003) Online Deliberation 2005 / DIAC-2005 - the Second Conference on Online Deliberation: Design, Research, and Practice, Stanford University (May 2005) Tools for Participation: Collaboration, Deliberation, and Decision Support (DIAC-2008/OD2008) - the Third Conference on Online Deliberation, University of California, Berkeley (June 2008) The Fourth International Conference on Online Deliberation - University of Leeds (June 30-July 2, 2010) Group processes Groupware Media studies Virtual communities Politics and technology Group decision-making E-democracy
Online deliberation
[ "Technology" ]
904
[ "E-democracy", "Computing and society" ]
44,940,451
https://en.wikipedia.org/wiki/Blennorrhoea
Blennorrhoea, also known as blennorrhagia or myxorrhoea ('blenno' mucus, 'rrhoea' flow), is a medical term denoting an excessive discharge of watery mucus, especially from the urethra or the vagina. It is also used in ophthalmology for an abnormal discharge from the eye, but is now regarded as a synonym for conjunctivitis and is accordingly rarely used. Inclusion blennorrhoea, also known as chlamydial conjunctivitis or swimming pool conjunctivitis, is a condition affecting infants born to women infected with inclusion conjunctivitis of the urogenital tract, frequently caused by Chlamydia trachomatis, a sexually transmitted organism whose infection often goes unnoticed because it is mild. Such infants may develop acute neonatal conjunctivitis within a few days of birth, and smears from their eyes reveal the presence of characteristic inclusion bodies. If left untreated the infection may persist for 3–12 months before healing, but can result in permanent scarring of the conjunctiva. References Genitourinary system
Blennorrhoea
[ "Biology" ]
241
[ "Organ systems", "Genitourinary system" ]
33,452,725
https://en.wikipedia.org/wiki/Unscented%20transform
The unscented transform (UT) is a mathematical function used to estimate the result of applying a given nonlinear transformation to a probability distribution that is characterized only in terms of a finite set of statistics. The most common use of the unscented transform is in the nonlinear projection of mean and covariance estimates in the context of nonlinear extensions of the Kalman filter. Its creator Jeffrey Uhlmann explained that "unscented" was an arbitrary name that he adopted to avoid it being referred to as the "Uhlmann filter". Background Many filtering and control methods represent estimates of the state of a system in the form of a mean vector and an associated error covariance matrix. As an example, the estimated 2-dimensional position of an object of interest might be represented by a mean position vector, $\bar{x}$, with an uncertainty given in the form of a 2×2 covariance matrix $P$ giving the variance in the first coordinate, the variance in the second coordinate, and the cross covariance between the two. A covariance that is zero implies that there is no uncertainty or error and that the position of the object is exactly what is specified by the mean vector. The mean and covariance representation only gives the first two moments of an underlying, but otherwise unknown, probability distribution. In the case of a moving object, the unknown probability distribution might represent the uncertainty of the object's position at a given time. The mean and covariance representation of uncertainty is mathematically convenient because any linear transformation $A$ can be applied to a mean vector and covariance matrix as $\bar{x} \mapsto A\bar{x}$ and $P \mapsto A P A^{\mathsf T}$. This linearity property does not hold for moments beyond the first raw moment (the mean) and the second central moment (the covariance), so it is not generally possible to determine the mean and covariance resulting from a nonlinear transformation because the result depends on all the moments, and only the first two are given. Although the covariance matrix is often treated as being the expected squared error associated with the mean, in practice the matrix is maintained as an upper bound on the actual squared error. Specifically, a mean and covariance estimate $(\bar{x}, P)$ is conservatively maintained so that the covariance matrix $P$ is greater than or equal to the actual squared error associated with $\bar{x}$. Mathematically this means that the result of subtracting the expected squared error (which is not usually known) from $P$ is a semi-definite or positive-definite matrix. The reason for maintaining a conservative covariance estimate is that most filtering and control algorithms will tend to diverge (fail) if the covariance is underestimated. This is because a spuriously small covariance implies less uncertainty and leads the filter to place more weight (confidence) than is justified in the accuracy of the mean. Returning to the example above, when the covariance is zero it is trivial to determine the location of the object after it moves according to an arbitrary nonlinear function $f(\cdot)$: just apply the function to the mean vector. When the covariance is not zero the transformed mean will not generally be equal to $f(\bar{x})$ and it is not even possible to determine the mean of the transformed probability distribution from only its prior mean and covariance. Given this indeterminacy, the nonlinearly transformed mean and covariance can only be approximated. The earliest approximation was to linearize the nonlinear function and apply the resulting Jacobian matrix to the given mean and covariance.
This is the basis of the extended Kalman Filter (EKF), and although it was known to yield poor results in many circumstances, there was no practical alternative for many decades. Motivation for the unscented transform In 1994 Jeffrey Uhlmann noted that the EKF takes a nonlinear function and partial distribution information (in the form of a mean and covariance estimate) of the state of a system but applies an approximation to the known function rather than to the imprecisely-known probability distribution. He suggested that a better approach would be to use the exact nonlinear function applied to an approximating probability distribution. The motivation for this approach is given in his doctoral dissertation, where the term unscented transform was first defined: Consider the following intuition: With a fixed number of parameters it should be easier to approximate a given distribution than it is to approximate an arbitrary nonlinear function/transformation. Following this intuition, the goal is to find a parameterization that captures the mean and covariance information while at the same time permitting the direct propagation of the information through an arbitrary set of nonlinear equations. This can be accomplished by generating a discrete distribution having the same first and second (and possibly higher) moments, where each point in the discrete approximation can be directly transformed. The mean and covariance of the transformed ensemble can then be computed as the estimate of the nonlinear transformation of the original distribution. More generally, the application of a given nonlinear transformation to a discrete distribution of points, computed so as to capture a set of known statistics of an unknown distribution, is referred to as an unscented transformation. In other words, the given mean and covariance information can be exactly encoded in a set of points, referred to as sigma points, which if treated as elements of a discrete probability distribution has mean and covariance equal to the given mean and covariance. This distribution can be propagated exactly by applying the nonlinear function to each point. The mean and covariance of the transformed set of points then represents the desired transformed estimate. The principal advantage of the approach is that the nonlinear function is fully exploited, as opposed to the EKF which replaces it with a linear one. Eliminating the need for linearization also provides advantages independent of any improvement in estimation quality. One immediate advantage is that the UT can be applied with any given function whereas linearization may not be possible for functions that are not differentiable. A practical advantage is that the UT can be easier to implement because it avoids the need to derive and implement a linearizing Jacobian matrix. Sigma points To compute the unscented transform, one first has to choose a set of sigma points. Since the seminal work of Uhlmann, many different sets of sigma points have been proposed in the literature. A thorough review of these variants can be found in the work of Menegaz et al. In general, sigma points are necessary and sufficient to define a discrete distribution having a given mean and covariance in dimensions. A canonical set of sigma points is the symmetric set originally proposed by Uhlmann. Consider the vertices of an equilateral triangle centered on origin in two dimensions: It can be verified that the above set of points has mean and covariance (the identity matrix). 
Given any 2-dimensional mean and covariance, , the desired sigma points can be obtained by multiplying each point by the matrix square root of and adding . A similar canonical set of sigma points can be generated in any number of dimensions by taking the zero vector and the points comprising the rows of the identity matrix, computing the mean of the set of points, subtracting the mean from each point so that the resulting set has a mean of zero, then computing the covariance of the zero-mean set of points and applying the square root of its inverse to each point so that the covariance of the set will be equal to the identity. Uhlmann showed that it is possible to conveniently generate a symmetric set of sigma points from the columns of and the zero vector, where is the given covariance matrix, without having to compute a matrix inverse. It is computationally efficient and, because the points form a symmetric distribution, captures the third central moment (the skew) whenever the underlying distribution of the state estimate is known or can be assumed to be symmetric. He also showed that weights, including negative weights, can be used to affect the statistics of the set. Julier also developed and examined techniques for generating sigma points to capture the third moment (the skew) of an arbitrary distribution and the fourth moment (the kurtosis) of a symmetric distribution. Example The unscented transform is defined for the application of a given function to any partial characterization of an otherwise unknown distribution, but its most common use is for the case in which only the mean and covariance is given. A common example is the conversion from one coordinate system to another, such as from a Cartesian coordinate frame to polar coordinates. Suppose a 2-dimensional mean and covariance estimate, , is given in Cartesian coordinates with: and the transformation function to polar coordinates, , is: Multiplying each of the canonical simplex sigma points (given above) by and adding the mean, , gives: Applying the transformation function to each of the above points gives: The mean of these three transformed points, , is the UT estimate of the mean in polar coordinates: The UT estimate of the covariance is: where each squared term in the sum is a vector outer product. This gives: This can be compared to the linearized mean and covariance: The absolute difference between the UT and linearized estimates in this case is relatively small, but in filtering applications the cumulative effect of small errors can lead to unrecoverable divergence of the estimate. The effect of the errors are exacerbated when the covariance is underestimated because this causes the filter to be overconfident in the accuracy of the mean. In the above example it can be seen that the linearized covariance estimate is smaller than that of the UT estimate, suggesting that linearization has likely produced an underestimate of the actual error in its mean. In this example there is no way to determine the absolute accuracy of the UT and linearized estimates without ground truth in the form of the actual probability distribution associated with the original estimate and the mean and covariance of that distribution after application of the nonlinear transformation (e.g., as determined analytically or through numerical integration). 
Such analyses have been performed for coordinate transformations under the assumption of Gaussianity for the underlying distributions, and the UT estimates tend to be significantly more accurate than those obtained from linearization. Empirical analysis has shown that the use of the minimal simplex set of sigma points is significantly less accurate than the use of the symmetric set of points when the underlying distribution is Gaussian. This suggests that the use of the simplex set in the above example would not be the best choice if the underlying distribution associated with is symmetric. Even if the underlying distribution is not symmetric, the simplex set is still likely to be less accurate than the symmetric set because the asymmetry of the simplex set is not matched to the asymmetry of the actual distribution. Returning to the example, the minimal symmetric set of sigma points can be obtained from the covariance matrix simply as the mean vector, plus and minus the columns of : This construction guarantees that the mean and covariance of the above four sigma points is , which is directly verifiable. Applying the nonlinear function to each of the sigma points gives: The mean of these four transformed sigma points, , is the UT estimate of the mean in polar coordinates: The UT estimate of the covariance is: where the each squared term in the sum is a vector outer product. This gives: The difference between the UT and linearized mean estimates gives a measure of the effect of the nonlinearity of the transformation. When the transformation is linear, for instance, the UT and linearized estimates will be identical. This motivates the use of the square of this difference to be added to the UT covariance to guard against underestimating of the actual error in the mean. This approach does not improve the accuracy of the mean but can significantly improve the accuracy of a filter over time by reducing the likelihood that the covariance is underestimated. Optimality of the unscented transform Uhlmann noted that given only the mean and covariance of an otherwise unknown probability distribution, the transformation problem is ill-defined because there is an infinite number of possible underlying distributions with the same first two moments. Without any a priori information or assumptions about the characteristics of the underlying distribution, any choice of distribution used to compute the transformed mean and covariance is as reasonable as any other. In other words, there is no choice of distribution with a given mean and covariance that is superior to that provided by the set of sigma points, therefore the unscented transform is trivially optimal. This general statement of optimality is of course useless for making any quantitative statements about the performance of the UT, e.g., compared to linearization; consequently he, Julier and others have performed analyses under various assumptions about the characteristics of the distribution and/or the form of the nonlinear transformation function. For example, if the function is differentiable, which is essential for linearization, these analyses validate the expected and empirically-corroborated superiority of the unscented transform. Applications The unscented transform can be used to develop a non-linear generalization of the Kalman filter, known as the Unscented Kalman Filter (UKF). This filter has largely replaced the EKF in many nonlinear filtering and control applications, including for underwater, ground and air navigation, and spacecraft. 
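A hedged sketch of the minimal symmetric set and of the covariance guard described above, assuming NumPy and equal point weights:

```python
import numpy as np

def symmetric_sigma_points(mean, cov):
    """2n equally weighted points: mean +/- the columns of a square root of n*cov."""
    n = len(mean)
    L = np.linalg.cholesky(n * cov)   # any matrix square root of n*cov works
    offsets = np.hstack([L, -L]).T    # shape (2n, n)
    return mean + offsets

def unscented_transform(f, mean, cov, guard=True):
    pts = symmetric_sigma_points(mean, cov)
    fpts = np.array([f(p) for p in pts])
    ut_mean = fpts.mean(axis=0)
    d = fpts - ut_mean
    ut_cov = d.T @ d / len(fpts)
    if guard:
        # Add the squared difference between the UT mean and the linearized mean
        # to protect against underestimating the error in the mean.
        delta = ut_mean - f(mean)
        ut_cov = ut_cov + np.outer(delta, delta)
    return ut_mean, ut_cov
```

Here the linearized mean is taken as the function applied to the prior mean, which is what a first-order (EKF-style) propagation of the mean produces.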
The unscented transform has also been used as a computational framework for Riemann-Stieltjes optimal control. This computational approach is known as unscented optimal control. Unscented Kalman Filter Uhlmann and Simon Julier published several papers showing that the use of the unscented transformation in a Kalman filter, which is referred to as the unscented Kalman filter (UKF), provides significant performance improvements over the EKF in a variety of applications. Julier and Uhlmann published papers using a particular parameterized form of the unscented transform in the context of the UKF which used negative weights to capture assumed distribution information. That form of the UT is susceptible to a variety of numerical errors that the original formulations (the symmetric set originally proposed by Uhlmann) do not suffer. Julier has subsequently described parameterized forms which do not use negative weights and also are not subject to those issues. See also Kalman filter Covariance intersection Ensemble Kalman filter Extended Kalman filter Non-linear filter Unscented optimal control References Control theory Nonlinear filters Linear filters Signal estimation
Unscented transform
[ "Mathematics" ]
2,924
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
33,453,827
https://en.wikipedia.org/wiki/Microsoft%20Store
The Microsoft Store (formerly known as the Windows Store) is a digital distribution platform operated by Microsoft. It was created as an app store for Windows 8 as the primary means of distributing Universal Windows Platform apps. With Windows 10 1803, Microsoft merged its other distribution platforms (Windows Marketplace, Windows Phone Store, Xbox Music, Xbox Video, Xbox Store, and a web storefront also known as "Microsoft Store") into Microsoft Store, making it a unified distribution point for apps, console games, and digital videos. Digital music was included until the end of 2017, and E-books were included until 2019. As with other similar platforms, such as the Google Play and Mac App Store, Microsoft Store is curated, and apps must be certified for compatibility and content. In addition to the user-facing Microsoft Store client, the store has a developer portal with which developers can interact. Microsoft takes 5–15% of the sale price for apps and 30% on Xbox games. Prior to January 1, 2015, this cut was reduced to 20% after the developer's profits reached $25,000. In 2021, 669,000 apps were available in the store. Categories containing the largest number of apps are "Books and Reference", "Education", "Entertainment", and "Games". The majority of the app developers have one app. History The Web-based storefront Microsoft previously maintained a similar digital distribution system for software known as Windows Marketplace, which allowed customers to purchase software online. The marketplace tracked product keys and licenses, allowing users to retrieve their purchases when switching computers. Windows Marketplace was discontinued in November 2008. At this point, Microsoft opened a Web-based storefront called "Microsoft Store". Windows 8 Microsoft first announced Windows Store, a digital distribution service for Windows at its presentation during the Build developer conference on September 13, 2011. Further details announced during the conference revealed that the store would be able to hold listings for both certified traditional Windows apps, as well as what were called "Metro-style apps" at the time: tightly-sandboxed software based on Microsoft design guidelines that are constantly monitored for quality and compliance. For consumers, Windows Store is intended to be the only way to obtain Metro-style apps. While announced alongside the "Developer Preview" release of Windows 8, Windows Store itself did not become available until the "Consumer Preview", released in February 2012. Updates to apps published on the store after July 1, 2019, are no longer available to Windows 8 RTM users. Per Microsoft lifecycle policies, the RTM version of Windows 8 has been unsupported since January 12, 2016, excluding some Embedded editions, as well its server equivalent, Windows Server 2012. Windows 8.1 An updated version of Windows Store was introduced in Windows 8.1. Its home page was remodeled to display apps in focused categories (such as popular, recommended, top free and paid, and special offers) with expanded details, while the ability for apps to automatically update was also added. Windows 8.1 Update also introduced other notable presentation changes, including increasing the top app lists to return 1000 apps instead of 100 apps, a "picks for you" section, and changing the default sorting for reviews to be by "most popular". Updates to apps published on the Store after June 30, 2023, are no longer available to Windows 8.1. 
Per Microsoft lifecycle policies, the Windows 8.1 Update reached the end of its extended support on January 10, 2023, excluding some Embedded editions, as well its server equivalent, Windows Server 2012 R2. Windows 10 Windows 10 was released with an updated version of the Windows Store, which merged Microsoft's other distribution platforms (Windows Marketplace, Windows Phone Store, Xbox Video and Xbox Music) into a unified store front for Windows 10 on all platforms, offering apps, games, music, film, TV series, themes, and ebooks. In June 2017, Spotify became available in the Windows Store. In September 2017, Microsoft began to re-brand Windows Store as Microsoft Store, with a new icon carrying the Microsoft logo. Xbox Store was merged into this new version of the platform. This is in line with Microsoft's platform convergence strategy on all Windows 10-based operating systems. Web apps and traditional desktop software can be packaged for distribution on Windows Store. Desktop software distributed through Windows Store are packaged using the App-V system to allow sandboxing. In February 2018, Microsoft announced that Progressive Web Apps would begin to be available in the Microsoft Store, and Microsoft would automatically add selected quality progressive web apps through the Bing crawler or allow developers to submit Progressive Web Apps to the Microsoft Store. Starting from Windows 10 version 1803, fonts can be downloaded and installed from the Microsoft Store. Windows 11 In Windows 11, Microsoft Store received an updated user interface, and a new pop-up designed to handle installation links from websites. Microsoft also announced a number of changes to its policies for application submissions to improve flexibility and make the store more "open", including supporting "any kind of app, regardless of app framework and packaging technology", and the ability for developers to freely use first- or third-party payment platforms (in non-game software only) rather than those provided by Microsoft. Windows Server The Microsoft Store is not installed by default in Windows Server 2012 or later versions of Windows Server. Apps that would normally be available in the Store can be installed through sideloading. Store features Microsoft Store is the primary means of distributing Universal Windows Platform (UWP) apps to users. Sideloading apps from outside the store is supported on Windows 10 on an opt-in basis, but Windows 8 only allows sideloading to be enabled if the device is running the Enterprise edition of Windows 8 on a domain. Sideloading on Windows RT, Windows 8 Pro, and on Windows 8 Enterprise computers without a domain affiliation, requires the purchase of additional licenses through volume licensing. Individual developers are able to register for US$19 and companies for US$99. Initially, Microsoft took a 30% cut of app sales until it reached US$25,000 in revenue, after which the cut dropped to 20%. On January 1, 2015, the reduction in cut at $25,000 was removed, and Microsoft takes a 30% cut of all app purchases, regardless of overall sales. As of August 1, 2021, Microsoft only takes a 12% cut of app sales. Third-party transactions are also allowed, of which Microsoft does not take a cut. Windows apps and games In 2015, over 669,000 apps were available on the store, including apps for Windows NT, Windows Phone, and UWP apps, which work on both platforms. Categories containing the largest number of apps are "Games", "Entertainment", "Books and Reference", and "Education". 
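The historical tiered revenue share described above (a 30% cut until an app reached US$25,000 in sales, after which the cut dropped to 20%) amounts to a simple piecewise calculation. The following Python sketch is one illustrative interpretation of that scheme, not an official Microsoft formula:

```python
def developer_proceeds(gross_sales):
    """Developer share under the pre-2015 tiered scheme described above:
    a 30% store cut until lifetime sales reach $25,000, then a 20% cut.
    Illustrative interpretation; threshold handling is simplified."""
    threshold = 25_000.0
    if gross_sales <= threshold:
        return gross_sales * 0.70
    return threshold * 0.70 + (gross_sales - threshold) * 0.80

print(developer_proceeds(10_000))   # 7000.0
print(developer_proceeds(40_000))   # 29500.0
```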
The majority of the app developers have one app. Both free and paid apps can be distributed through Microsoft Store, with paid apps ranging in cost from US$0.99 to $999.99. Developers from 120 countries can submit apps to Microsoft Store. Apps may support any of 109 languages, as long as they support one of 12 app certification languages. From 2016 to 2019, most Microsoft Studios games ported to PC were distributed exclusively via Microsoft Store. Microsoft later abandoned this strategy in May 2019, amid criticism of limitations faced by UWP-based games, and a desire to also sell games on competing storefronts such as Steam. The new Xbox app subsequently became the main frontend for PC games available via Microsoft Store, and also integrates subscription service PC Game Pass. Movies and TV shows Movies and television shows are available for purchase or rental, depending on availability. Content can be played on the Microsoft Movies & TV app (available for Windows 10, Xbox One, Xbox 360 and Xbox X/S), or Xbox Video app (available for Windows 8/RT PCs and tablets, and Windows Phone 8). In the United States, a Microsoft account can be linked to the Movies Anywhere digital locker service (separate registration required), which allows purchased content to be played on other platforms (e.g. MacOS, Android, iOS). Microsoft Movies & TV is currently available in the following 21 countries: Australia, Austria, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, Mexico, Netherlands, New Zealand, Norway, Spain, Sweden, Switzerland, the United States, and the United Kingdom. The purchase of TV shows is not currently supported in Belgium. Former features Music On October 2, 2017, Microsoft announced that the sale of digital music on the Microsoft Store would cease on December 31 after the discontinuation of Groove Music Pass. Users were able to transfer their music to Spotify until January 31, 2018. Books Books bought from the Microsoft Store were formerly accessible on the EdgeHTML-based Microsoft Edge. The ability to open ePub e-books was removed during the shift to the Chromium-based Microsoft Edge. On April 2, 2019, Microsoft announced that the sale of e-books on the Microsoft Store had ceased. Due to DRM licenses that would not be renewed, all books became inaccessible by July 2019, and Microsoft automatically refunded all users that had purchased books via the service. Guidelines and developers Similar to Windows Phone Store, Microsoft Store is regulated by Microsoft. Applicants must obtain Microsoft's approval before their app becomes available on the store. These apps may not contain, support or approve, gratuitous profanity, obscenity, pornography, discrimination, defamation, or politically offensive content. They may also not contain contents that are forbidden by or offensive to the jurisdiction, religion or norms of the target market. They may also not encourage, facilitate or glamorize violence, drugs, tobacco, alcohol and weapons. Video game console emulators that are "primarily gaming experiences or target Xbox One" and third-party web browsers that use their own layout engines, are prohibited on Microsoft Store. Microsoft has indicated that it can remotely disable or remove apps from end-user systems for security or legal reasons; in the case of paid apps, refunds may be issued when this is done. Microsoft initially banned PEGI "18"-rated content from the store in Europe. 
However, critics noted that this made the content policies stricter than intended, as some PEGI 18-rated games are rated "Mature" on the U.S. ESRB system, which is one step below its highest rating, "Adults Only". The guidelines were amended in December 2012 to remove the discrepancy. On October 8, 2020, Microsoft announced a commitment to ten "principles" of fairness to developers in the operation of the Microsoft Store. These include transparency over its rules, practices, and Windows' "interoperability interfaces", not preventing competing application storefronts from running on Windows, charging developers "reasonable fees" and not "forc[ing]" them to include in-app purchases, allowing access to the store by any developer as long as their software meets "objective standards and requirements", not blocking apps based on their business model, how they deliver their services, or how they process payments, not impeding developers from "communicating directly with their users through their apps for legitimate business purposes", not using private data from the store to influence the development of competing software by Microsoft, and holding its own software to the same standards as others on the store. The announcement came in the wake of lawsuits against Apple, Inc. and Google LLC by Epic Games over alleged anticompetitive practices conducted by their own application stores. With the release of Windows 11, Microsoft announced that it would not require software (excluding games) distributed via Microsoft Store to use its own payment platforms, and that it would also allow third-party storefronts (such as Amazon Appstore, which will be used for its Android app support, and Epic Games Store) to offer their clients for download via Microsoft Store. Developer tools In addition to the user-facing Microsoft Store client, the store also has a developer portal with which developers can interact. The Windows developer portal has the following sections for each app: App Summary - An overview page of a given app, including a downloads chart, quality chart, financial summary, and a sales chart. App Adoption - A page that shows adoption of the app, including conversions, referrers, and downloads. App Ratings - A ratings breakdown, as well as the ability to filter reviews by region. App Quality - An overview page showcasing exceptions that have occurred in the app. App Finance - A page where a developer can download all transactions related to their app. Microsoft Store provides developer tools for tracking apps in the store. The dashboard also presents a detailed breakdown of users by market, age, and region, as well as charts on the number of downloads, purchases, and average time spent in an app. Reception Microsoft Store has received widely negative reviews since its inception. Unavailability of popular apps has been the leading reason for the cold reception of the store. Phil Spencer, head of Microsoft's gaming division, has also opined that Microsoft Store "sucks". As a result, Office was removed as an installable app from the store, and made to redirect to its website. Malware has also made its way into the store masquerading as popular games.
See also List of Microsoft software Mac App Store, equivalent platform on macOS References External links Microsoft website Products introduced in 2012 Windows components Software distribution platforms Universal Windows Platform apps Windows 8 Windows 10 Windows 11 Xbox One software Online content distribution Online-only retailers of video games Video on demand Mobile software distribution platforms Online retailers of the United States Xbox One Online marketplaces
Microsoft Store
[ "Technology" ]
2,794
[ "Mobile content", "Mobile software distribution platforms" ]
33,458,337
https://en.wikipedia.org/wiki/Thermal%20hydrolysis
Thermal hydrolysis is a process used for treating industrial waste, municipal solid waste and sewage sludge. Description Thermal hydrolysis is a two-stage process combining high-pressure boiling of waste or sludge followed by a rapid decompression. This combined action sterilizes the sludge and makes it more biodegradable, which improves digestion performance. Sterilization destroys pathogens in the sludge resulting in it exceeding the stringent requirements for land application (agriculture). In addition, the treatment adjusts the rheology to such an extent that loading rates to sludge anaerobic digesters can be doubled, and also dewaterability of the sludge is significantly improved. The first full-scale application of this process for sewage sludge was installed in Hamar, Norway in 1996. Since then, there have been over 30 additional installations globally. Commercial application at a sewage treatment plant Sewage treatment plants, such as Blue Plains in Washington, D.C., USA, have adopted thermal hydrolysis of sewage sludge in order to produce commercially valuable products (such as electricity and high quality biosolid fertilizers) out of the wastewater. The full-scale commercial application of thermal hydrolysis enables the plant to utilize the solids portion of the wastewater to make power and fine fertilizer directly from sewage waste. Municipal waste-to-fuel application The city of Oslo, Norway installed a system for converting domestic food waste to fuel in 2012. A thermal hydrolysis system produces biogas from the food waste, which provides fuel for the city bus system and is also used for agricultural fertilizer. See also List of waste-water treatment technologies References Further reading External links Biodegradable waste management Biofuels technology Chemical reactions Equilibrium chemistry Sewerage
Thermal hydrolysis
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
366
[ "Biofuels technology", "Biodegradable waste management", "Water pollution", "Sewerage", "Biodegradation", "Equilibrium chemistry", "nan", "Environmental engineering" ]
30,883,607
https://en.wikipedia.org/wiki/Photoelectrochemistry
Photoelectrochemistry is a subfield of study within physical chemistry concerned with the interaction of light with electrochemical systems. It is an active domain of investigation. One of the pioneers of this field of electrochemistry was the German electrochemist Heinz Gerischer. The interest in this domain is high in the context of development of renewable energy conversion and storage technology. Historical approach Photoelectrochemistry has been intensively studied in the 1970-80s because of the first peak oil crisis. Because fossil fuels are non-renewable, it is necessary to develop processes to obtain renewable resources and use clean energy. Artificial photosynthesis, photoelectrochemical water splitting and regenerative solar cells are of special interest in this context. The photovoltaic effect was discovered by Alexandre Edmond Becquerel. Heinz Gerischer, H. Tributsch, AJ. Nozik, AJ. Bard, A. Fujishima, K. Honda, PE. Laibinis, K. Rajeshwar, TJ Meyer, PV. Kamat, N.S. Lewis, R. Memming, John Bockris are researchers which have contributed a lot to the field of photoelectrochemistry. Semiconductor electrochemistry Introduction Semiconductor materials have energy band gaps, and will generate a pair of electron and hole for each absorbed photon if the energy of the photon is higher than the band gap energy of the semiconductor. This property of semiconductor materials has been successfully used to convert solar energy into electrical energy by photovoltaic devices. In photocatalysis the electron-hole pair is immediately used to drive a redox reaction. However, the electron-hole pairs suffer from fast recombination. In photoelectrocatalysis, a differential potential is applied to diminish the number of recombinations between the electrons and the holes. This allows an increase in the yield of light's conversion into chemical energy. Semiconductor-electrolyte interface When a semiconductor comes into contact with a liquid (redox species), to maintain electrostatic equilibrium, there will be a charge transfer between the semiconductor and liquid phase if formal redox potential of redox species lies inside semiconductor band gap. At thermodynamic equilibrium, the Fermi level of semiconductor and the formal redox potential of redox species are aligned at the interface between semiconductor and redox species. This introduces an upward band bending in a n-type semiconductor for n-type semiconductor/liquid junction (Figure 1(a)) and a downward band bending in a p-type semiconductor for a p-type semiconductor/liquid junction (Figure 1(b)). This characteristic of semiconductor/liquid junctions is similar to a rectifying semiconductor/metal junction or Schottky junction. Ideally to get a good rectifying characteristics at the semiconductor/liquid interface, the formal redox potential must be close to the valence band of the semiconductor for a n-type semiconductor and close to the conduction band of the semiconductor for a p-type semiconductor. The semiconductor/liquid junction has one advantage over the rectifying semiconductor/metal junction in that the light is able to travel through to the semiconductor surface without much reflection; whereas most of the light is reflected back from the metal surface at a semiconductor/metal junction. Therefore, semiconductor/liquid junctions can also be used as photovoltaic devices similar to solid state p–n junction devices. 
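The absorption condition described above (a photon is only absorbed if its energy exceeds the band gap) translates directly into a cutoff wavelength. A minimal Python sketch, using standard physical constants and the TiO2 band-gap values quoted later in this article (the 1.1 eV silicon figure is an assumption added for comparison), gives:

```python
# Photons are absorbed only if their energy h*c/lambda exceeds the band gap.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def cutoff_wavelength_nm(band_gap_ev):
    """Longest wavelength (in nm) that a semiconductor with the given band gap can absorb."""
    return H * C / (band_gap_ev * EV) * 1e9

# TiO2 (band gap about 3.0-3.2 eV depending on crystallinity) absorbs only in the UV:
print(cutoff_wavelength_nm(3.2))   # ~388 nm
print(cutoff_wavelength_nm(3.0))   # ~413 nm
# A ~1.1 eV gap (roughly crystalline silicon) reaches well into the visible and near-infrared:
print(cutoff_wavelength_nm(1.1))   # ~1127 nm
```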
Both n-type and p-type semiconductor/liquid junctions can be used as photovoltaic devices to convert solar energy into electrical energy and are called photoelectrochemical cells. In addition, a semiconductor/liquid junction could also be used to directly convert solar energy into chemical energy by virtue of photoelectrolysis at the semiconductor/liquid junction. Experimental setup Semiconductors are usually studied in a photoelectrochemical cell. Different configurations exist using a three-electrode device. The phenomenon under study takes place at the working electrode (WE), while the potential difference is applied between the WE and a reference electrode (RE) (saturated calomel, Ag/AgCl). The current is measured between the WE and the counter electrode (CE) (vitreous carbon, platinum gauze). The working electrode is the semiconductor material, and the electrolyte solution is composed of a solvent, a supporting electrolyte and a redox species. A UV-vis lamp is usually used to illuminate the working electrode. The photoelectrochemical cell is usually made with a quartz window because quartz does not absorb the light. A monochromator can be used to control the wavelength sent to the WE. Main absorbers used in photoelectrochemistry: Semiconductor IV: C (diamond), Si, Ge, SiC, SiGe. Semiconductor III-V: BN, BP, BAs, AlN, AlP, AlAs, GaN, GaP, GaAs, InN, InP, InAs... Semiconductor II-VI: CdS, CdSe, CdTe, ZnO, ZnS, ZnSe, ZnTe, MoS2, MoSe2, MoTe2, WS2, WSe2. Metal oxides: TiO2, Fe2O3, Cu2O. Organic dyes: methylene blue... Organometallic dyes. Perovskites: very recently, a scalable all-perovskite photoelectrochemical (PEC) system has been developed as a solar hydrogen panel with an area of more than 123 cm2. Applications Photoelectrochemical water splitting Photoelectrochemistry has been intensively studied in the field of hydrogen production from water and solar energy. The photoelectrochemical splitting of water was first demonstrated by Fujishima and Honda in 1972 on TiO2 electrodes. Recently, many materials have shown promising properties for splitting water efficiently, but TiO2 remains cheap, abundant and stable against photo-corrosion. The main problem of TiO2 is its bandgap, which is 3 or 3.2 eV depending on its crystallinity (anatase or rutile). These values are too high, and only wavelengths in the UV region can be absorbed. To improve the water-splitting performance of this material under solar wavelengths, it is necessary to sensitize the TiO2. Currently, quantum dot sensitization is very promising, but more research is needed to find new materials able to absorb the light efficiently. Photoelectrochemical reduction of carbon dioxide Photosynthesis is the natural process that converts CO2 using light to produce hydrocarbon compounds such as sugar. The depletion of fossil fuels encourages scientists to find alternatives to produce hydrocarbon compounds. Artificial photosynthesis is a promising method mimicking natural photosynthesis to produce such compounds. The photoelectrochemical reduction of CO2 is much studied because of its worldwide impact. Many researchers aim to find new semiconductors to develop stable and efficient photo-anodes and photo-cathodes. Regenerative cells or Dye-sensitized solar cell (Graetzel cell) Dye-sensitized solar cells or DSSCs use TiO2 and dyes to absorb the light. This absorption induces the formation of electron-hole pairs, which are used to oxidize and reduce the same redox couple, usually I−/I3−. Consequently, a differential potential is created, which induces a current.
References External links Complete review about semiconductor's photoelectrochemistry Review about semiconductor's photoelectrochemistry
Photoelectrochemistry
[ "Chemistry" ]
1,528
[ "Photoelectrochemistry", "Electrochemistry" ]
30,888,106
https://en.wikipedia.org/wiki/B%C3%B6ker
Böker () was one of the first companies to offer ceramic knives as a featured product line. History Böker traces its origin to the 17th century as a tool maker in Germany graduating to swords and blades by the 1800s. The company claims it was producing 2000 sabres a week by 1839 for use in various wars. By the 1860s the company had fractured with a branch of the family emigrating to North America and setting up plants in Canada, Mexico and The United States. The German and North American factories produced similar knives and used the "Tree Brand" trademark. This continued until World War II when the Solingen factory was destroyed and "Boker USA" took control of the trademark until the German factory was rebuilt in the 1950s. In the 1960s and 1970s the company changed hands several times, with the US facility (Hermann Boker & Co) shutting down in 1983. In 1986, Boker reacquired the rights to the American brand and Boker USA was started in Denver, Colorado for US production. Products The production is mainly of knives for leisure, hunting and collection, as well as those for sports and professional use for military and police bodies. There is also a production section dedicated to professional kitchen knives and shaving products. The production is divided under the brands: Böker Manufaktur Solingen is the brand offering handmade knives of the parent company Böker in Solingen, specialized in small series productions for collectors. Among the best known products there is the Speedlock switchblade and knives with damask blades, or unique pieces such as those made of steel obtained from the cannon of the Leopard tank or from the Tirpitz battleship. Böker Arbolito is the brand of handicraft products from the Buenos Aires factory. Böker Plus is the Böker brand for innovative and professional products conceived and developed in Solingen and manufactured overseas. Magnum by Böker is the brand for products conceived in Solingen and designed, developed and manufactured overseas. References External links Companies based in Solingen Manufacturing companies of Germany Mechanical hand tools Knife manufacturing companies German brands
Böker
[ "Physics" ]
420
[ "Mechanics", "Mechanical hand tools" ]
30,889,228
https://en.wikipedia.org/wiki/The%20Pollinator%20Pathway
The Pollinator Pathway is a participatory art, design and ecology social sculpture initiative founded by the artist and designer Sarah Bergmann. Its objective is to connect existing isolated green spaces and create a more hospitable urban environment for pollinators like bees with a system of ecological corridors of flowering plants by using existing urban infrastructure such as curb space and rooftops. Pathways The first pollinator pathway () is located on Seattle, Washington's east-west Columbia Street, and connects Seattle University's campus on 12th Avenue to Nora's Woods on 29th Avenue away, crossing one third of Seattle's width. A second long official pollinator pathway is slated for Seattle's north-south 11th Avenue, connecting Seattle University's campus to Volunteer Park. The first segment of the pathway on Columbia Street, which Bergmann received grants from the City of Seattle, Northwest Horticultural Society, and Awesome Foundation to create, replaced a long, grass strip between the street and sidewalk with plants that could attract pollinators. The pathways are composed of individual plots of perennial native plant species on city-owned property, tended by local volunteers. Bergmann had a related installation, Portal to The Pollinator Pathway, at Seattle Art Museum's Olympic Sculpture Park in 2012. In 2014, she made presentations on the project at Frye Art Museum and Seattle Tilth. Certification Since late 2013, Bergmann has offered a certification program for new pathways to use the trademarked Pollinator Pathway name. Other cities Cities other than Seattle have explored the idea of connecting landscapes for pollinators. In 2008, about the same time the Seattle project was getting under way, the Canadian Pollination Initiative wrote a paper on a "pollinator park" concept to include "...right-of-way passages, including highways, power lines, gas lines and other maintained corridors can be designed in such a way that they serve as pollinator habitats." In 2011, a New York author and artist Aaron Birk wrote an illustrated story, The Pollinator's Corridor, about a pathway connecting the city's landscape. City–citizen discussions Several cities have used official means to initiate citizen discussions on their own pollinator pathways following Seattle's model, including Redmond, Washington; the Niagara Falls, New York area; and Los Angeles, California via the mayor's blog. Awards In 2012, Bergmann received The Stranger'''s Genius Award and Seattle Art Museum's Betty Bowen Award for the project. In 2013, she was named one of Seattle's most influential people of the year by Seattle Magazine'', along with recipients of the award who had created other Seattle area pollinator conservation projects. Notes References Further reading Tracey Byrne (February 14, 2015) Pollinator Pathway® What Is It Really About? BeePeeking: online journal promoting environmental stewardship and the enhancement of urban ecosystems External links (2015) Geography of Seattle Pollination management Landscape architecture
The Pollinator Pathway
[ "Engineering" ]
596
[ "Landscape architecture", "Architecture" ]
30,890,448
https://en.wikipedia.org/wiki/Tron%20%28Scotland%29
A tron was a weighing beam in medieval Scotland, usually located in the marketplaces of burghs. There are various roads and buildings in several Scottish towns that are named after the tron. For example, Trongate in Glasgow and Tron Kirk in Edinburgh. Etymologically the word is derived from the Old French tronel or troneau, meaning "balance". Measurement of weight in medieval Scotland From the 12th century the city fathers of Scottish burghs needed to standardise weights and measures, partly to collect the correct taxation on goods, and partly to stop unscrupulous merchants shortchanging citizens. Trons were set up in marketplaces throughout Scotland. Each burgh had its own set of weights, which sometimes differed from those of other burghs. Some burghs had more than one tron; in Edinburgh a butter tron was located at the head of the West Bow, while a salt tron was located further down the Royal Mile. See also Obsolete Scottish units of measurement Tolbooth Tron-men References Obsolete Scottish units of measurement Weighing instruments
Tron (Scotland)
[ "Physics", "Technology", "Engineering" ]
226
[ "Weighing instruments", "Mass", "Matter", "Measuring instruments" ]
28,094,747
https://en.wikipedia.org/wiki/Febrifugine
Febrifugine is a quinazolinone alkaloid first isolated from the Chinese herb Dichroa febrifuga, but also found in the garden plant Hydrangea. Laboratory synthesis of febrifugine determined that the originally reported stereochemistry was incorrect. Febrifugine has antimalarial properties and the synthetic halogenated derivative halofuginone is used in veterinary medicine as a coccidiostat. Other synthetic febrifugine derivatives have been used against malaria, cancer, fibrosis, and inflammatory disease. References Quinazolinones Piperidine alkaloids
Febrifugine
[ "Chemistry" ]
133
[ "Alkaloids by chemical classification", "Piperidine alkaloids" ]
28,094,751
https://en.wikipedia.org/wiki/C16H19N3O3
The molecular formula C16H19N3O3 (molar mass: 301.34 g/mol, exact mass: 301.1426 u) may refer to: Febrifugine HIOC Prazitone (AGN-511) Molecular formulas
C16H19N3O3
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
28,096,212
https://en.wikipedia.org/wiki/PASTA%20domain
The PASTA domain is a small protein domain that can bind to the beta-lactam ring portion of various β-lactam antibiotics. The domain was initially discovered in 2002 by Yeats and colleagues as a region of sequence similarity in penicillin-binding proteins and PknB-like kinases of some bacteria. The name is an acronym derived from PBP and Serine/Threonine kinase Associated domain. Structure The PASTA domain adopts a structure composed of an alpha-helix followed by three beta strands. Recent structural studies show that the extracellular region of PknB (protein kinase B), which is composed of four PASTA domains, adopts a linear arrangement of the domains. Species distribution PASTA domains are found in a variety of bacterial species, including gram-positive Bacillota and Actinomycetota. References Protein domains
PASTA domain
[ "Biology" ]
173
[ "Protein domains", "Protein classification" ]
28,096,905
https://en.wikipedia.org/wiki/Fannes%E2%80%93Audenaert%20inequality
The Fannes–Audenaert inequality is a mathematical bound on the difference between the von Neumann entropies of two density matrices as a function of their trace distance. It was proved by Koenraad M. R. Audenaert in 2007 as an optimal refinement of Mark Fannes' original inequality, which was published in 1973. Mark Fannes is a Belgian physicist specialised in mathematical quantum mechanics, and he works at the KU Leuven. Koenraad M. R. Audenaert is a Belgian physicist and civil engineer. He currently works at the University of Ulm. Statement of inequality For any two density matrices ρ and σ of dimension d, |S(ρ) − S(σ)| ≤ T log(d − 1) + H[{T, 1 − T}], where H[{T, 1 − T}] = −T log T − (1 − T) log(1 − T) is the (Shannon) entropy of the probability distribution {T, 1 − T}, S(ρ) = −Tr(ρ log ρ) is the (von Neumann) entropy of a matrix, i.e. the Shannon entropy of its eigenvalues, and T = T(ρ, σ) = ½ Tr|ρ − σ| is the trace distance between the two matrices. Note that the base for the logarithm is arbitrary, so long as the same base is used on both sides of the inequality. Audenaert also proved that, given only the trace distance T and the dimension d, this is the optimal bound. He did this by directly exhibiting a pair of matrices which saturate the bound for any values of T and d. The matrices (which are diagonal in the same basis, i.e. they commute) are ρ = diag(1 − T, T/(d − 1), ..., T/(d − 1)) and σ = diag(1, 0, ..., 0). Fannes' inequality and Audenaert's refinement The original inequality proved by Fannes was |S(ρ) − S(σ)| ≤ 2T log d − 2T log 2T when T ≤ 1/(2e). He also proved the weaker inequality |S(ρ) − S(σ)| ≤ 2T log d + 1/(e ln 2) (with the logarithm taken to base 2), which can be used for larger T. Fannes proved this inequality as a means to prove the continuity of the von Neumann entropy, which did not require an optimal bound. The proof is very compact, and can be found in the textbook by Nielsen and Chuang. Audenaert's proof of the optimal inequality, on the other hand, is significantly more complicated, and can be found in his original paper. References Inequalities
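A short numerical check of the statement above, assuming NumPy, computes both sides of the bound (in base-2 logarithms) for the saturating pair of diagonal matrices:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, computed from the eigenvalues of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def trace_distance(rho, sigma):
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

def binary_entropy(t):
    if t in (0.0, 1.0):
        return 0.0
    return -t * np.log2(t) - (1 - t) * np.log2(1 - t)

# The commuting (diagonal) pair that saturates the bound for given T and d.
d, T = 4, 0.3
rho   = np.diag([1 - T] + [T / (d - 1)] * (d - 1))
sigma = np.diag([1.0] + [0.0] * (d - 1))

lhs = abs(von_neumann_entropy(rho) - von_neumann_entropy(sigma))
t = trace_distance(rho, sigma)
rhs = t * np.log2(d - 1) + binary_entropy(t)
print(lhs, rhs)   # the two sides agree up to floating-point error for this pair
```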
Fannes–Audenaert inequality
[ "Mathematics" ]
389
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
28,098,591
https://en.wikipedia.org/wiki/Fort%20de%20l%27Infernet
The Fort de l'Infernet is a fortification complex near Briançon in the French Alps. It was built as part of the Séré de Rivières system of fortifications in 1876–78 to defend France against invasion from Italy. It specifically overlooks the valley of the Durance behind and the Fort du Gondran, closer to Italy. Built at an elevation of , the fort was accessed by an aerial tramway, which connected to the older Fort du Randouillet at lower elevation. It was the last French fort to be built from cut stone masonry. The construction of the fort required that its mountaintop be leveled, a process that produced landslides. The 210-man garrison served an armament consisting of seven 138 mm guns, five 155 mm guns, two 220 mm mortars, two 150 mm mortars and six more 138 mm guns in a separate battery. Much of the armament was placed on a cavalier or gun platform on top of the masonry barracks. The garrison was accommodated in two barracks at somewhat lower elevation, La Cochette and La Seyte, with a portion of the total contingent rotated into the fort for duty. The aerial tramway was operated by mule power. In 1940 the fort was manned as a backup fortification to the Alpine Line fortifications of the Maginot Line program, and was bombarded on 21 and 23 June 1940 by mortars at Fort Chaberton. 280 mm field mortars placed at Infernet replied, silencing the Italian battery. References External links Fort de l'Infernet at Chemins de mémoire Fort de l'Infernet at fortiffsere.fr Buildings and structures in Alpes-de-Haute-Provence Fortifications of Briançon Séré de Rivières system Alpine Line Fortified Sector of Dauphine
Fort de l'Infernet
[ "Engineering" ]
363
[ "Séré de Rivières system", "Fortification lines" ]
28,098,613
https://en.wikipedia.org/wiki/Crystalline%20silicon
Crystalline silicon or (c-Si) is the crystalline forms of silicon, either polycrystalline silicon (poly-Si, consisting of small crystals), or monocrystalline silicon (mono-Si, a continuous crystal). Crystalline silicon is the dominant semiconducting material used in photovoltaic technology for the production of solar cells. These cells are assembled into solar panels as part of a photovoltaic system to generate solar power from sunlight. In electronics, crystalline silicon is typically the monocrystalline form of silicon, and is used for producing microchips. This silicon contains much lower impurity levels than those required for solar cells. Production of semiconductor grade silicon involves a chemical purification to produce hyper-pure polysilicon, followed by a recrystallization process to grow monocrystalline silicon. The cylindrical boules are then cut into wafers for further processing. Solar cells made of crystalline silicon are often called conventional, traditional, or first generation solar cells, as they were developed in the 1950s and remained the most common type up to the present time. Because they are produced from 160 to 190 μm thick solar wafers—slices from bulks of solar grade silicon—they are sometimes called wafer-based solar cells. Solar cells made from c-Si are single-junction cells and are generally more efficient than their rival technologies, which are the second-generation thin-film solar cells, the most important being CdTe, CIGS, and amorphous silicon (a-Si). Amorphous silicon is an allotropic variant of silicon, and amorphous means "without shape" to describe its non-crystalline form. Overview Classification The allotropic forms of silicon range from a single crystalline structure to a completely unordered amorphous structure with several intermediate varieties. In addition, each of these different forms can possess several names and even more abbreviations, and often cause confusion to non-experts, especially as some materials and their application as a PV technology are of minor significance, while other materials are of outstanding importance. PV industry In photovoltaic industry,materials are commonly grouped into the following two categories: Crystalline silicon (c-Si), used in conventional wafer-based solar cells. Monocrystalline silicon (mono-Si) Polycrystalline silicon (multi-Si) Ribbon silicon (ribbon-Si), has currently no market Other materials, not classified as crystalline silicon, used in thin-film and other solar-cell technologies. Amorphous silicon (a-Si) Nanocrystalline silicon (nc-Si) Protocrystalline silicon (pc-Si) Other established non-silicon materials, such as CdTe, CIGS Emerging photovoltaics Multi-junction solar cells (MJ) commonly used for solar panels on spacecraft for space-based solar power. They are also used in concentrator photovoltaics (CPV, HCPV), an emerging technology best suited for locations that receive much sunlight. Generations Alternatively, different types of solar cells and/or their semiconducting materials can be classified by generations: First generation solar cells are made of crystalline silicon, also called, conventional, traditional, wafer-based solar cells and include monocrystalline (mono-Si) and polycrystalline (multi-Si) semiconducting materials. Second generation solar cells or panels are based on thin-film technology and are of commercially significant importance. These include CdTe, CIGS and amorphous silicon. 
Third generation solar cells are often labeled as emerging technologies with little or no market significance and include a large range of substances, mostly organic, often using organometallic compounds. Arguably, multi-junction photovoltaic cells can be classified into neither of these generations. A typical triple-junction semiconductor is made of InGaP/(In)GaAs/Ge. Market share In 2013, conventional crystalline silicon technology dominated worldwide PV production, with multi-Si leading the market ahead of mono-Si, accounting for 54% and 36%, respectively. For the last ten years, the worldwide market share of thin-film technologies stagnated below 18% and currently stands at 9%. In the thin-film market, CdTe leads with an annual production of 2 GWp or 5%, followed by a-Si and CIGS, both around 2%. All-time deployed PV capacity of 139 gigawatts (cumulative as of 2013) splits into 121 GW of crystalline silicon (87%) and 18 GW of thin-film (13%) technology. Efficiency The conversion efficiency of PV devices describes the ratio of the outgoing electrical power to the incoming radiated light power. A single solar cell generally has a higher efficiency than an entire solar module. Additionally, lab efficiency is always far superior to that of commercially sold products. Lab cells In 2013, record lab cell efficiency was highest for crystalline silicon. However, multi-silicon is followed closely by cadmium telluride and copper indium gallium selenide solar cells: 25.6% for mono-Si cells, 20.4% for multi-Si cells, 21.7% for CIGS cells and 21.5% for CdTe cells. Both-sides-contacted silicon solar cells, as of 2021, reach 26% and possibly above. Modules The average commercial crystalline silicon module increased its efficiency from about 12% to 16% over the last ten years. In the same period, CdTe modules improved their efficiency from 9% to 16%. The modules performing best under lab conditions in 2014 were made of monocrystalline silicon. They were 7% above the efficiency of commercially produced modules (23% versus 16%), which indicated that conventional silicon technology still had potential to improve and therefore maintain its leading position. Energy costs of manufacture Crystalline silicon has a high cost in energy because silicon is produced by the reduction of high-grade quartz sand in an electric furnace. The electricity generated for this process may produce greenhouse gas emissions. This coke-fired smelting process occurs at high temperatures of more than 1,000 °C and is very energy intensive, using about 11 kilowatt-hours (kW⋅h) per kilogram of silicon. The energy requirements of this process per unit of silicon metal produced may be relatively inelastic. But major energy cost reductions per (photovoltaic) product have been made as silicon cells have become more efficient at converting sunlight, larger silicon metal ingots are cut with less waste into thinner wafers, silicon waste from manufacture is recycled, and material costs have reduced. Toxicity With the exception of amorphous silicon, most commercially established PV technologies use toxic heavy metals. CIGS often uses a CdS buffer layer, and the semiconductor material of CdTe technology itself contains the toxic cadmium (Cd). In the case of crystalline silicon modules, the solder material that joins the copper strings of the cells contains about 36% lead (Pb). Moreover, the paste used for screen printing front and back contacts contains traces of Pb and sometimes Cd as well.
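The efficiency definition above reduces to a one-line calculation. The following Python sketch assumes the conventional standard-test-condition irradiance of 1000 W/m2 and a hypothetical module, neither of which is taken from this article:

```python
def module_efficiency(power_out_w, area_m2, irradiance_w_per_m2=1000.0):
    """Conversion efficiency = electrical power out / radiant power in.
    1000 W/m^2 is the usual standard-test-condition irradiance (assumed here)."""
    return power_out_w / (irradiance_w_per_m2 * area_m2)

# A hypothetical 1.6 m^2 module delivering 256 W under standard test conditions:
print(module_efficiency(256.0, 1.6))   # 0.16, i.e. 16%
```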
It is estimated that about 1,000 metric tonnes of Pb have been used for 100 gigawatts of c-Si solar modules. However, there is no fundamental need for lead in the solder alloy. Cell technologies PERC solar cell Passivated emitter rear contact (PERC) solar cells consist of the addition of an extra layer to the rear-side of a solar cell. This dielectric passive layer acts to reflect unabsorbed light back to the solar cell for a second absorption attempt increasing the solar cell efficiency. A PERC is created through an additional film deposition and etching process. Etching can be done either by chemical or laser processing. About 80% of solar panels worldwide use the PERC design. Martin Green, Andrew Blakers, Jianhua Zhao and Aihua Wang won the Queen Elizabeth Prize for Engineering in 2023 for development of the PERC solar cell. HIT solar cell A HIT solar cell is composed of a mono thin crystalline silicon wafer surrounded by ultra-thin amorphous silicon layers. The acronym HIT stands for "heterojunction with intrinsic thin layer". HIT cells are produced by the Japanese multinational electronics corporation Panasonic (see also ). Panasonic and several other groups have reported several advantages of the HIT design over its traditional c-Si counterpart: An intrinsic a-Si layer can act as an effective surface passivation layer for c-Si wafer. The p+/n+ doped a-Si functions as an effective emitter/BSF for the cell. The a-Si layers are deposited at much lower temperature, compared to the processing temperatures for traditional diffused c-Si technology. The HIT cell has a lower temperature coefficient compared to c-Si cell technology. Owing to all these advantages, this new hetero-junction solar cell is a considered to be a promising low cost alternative to traditional c-Si based solar cells. Fabrication of HIT cells The details of the fabrication sequence vary from group to group. Typically in good quality, CZ/FZ grown c-Si wafer (with ~1 ms lifetimes) are used as the absorber layer of HIT cells. Using alkaline etchants, such as, NaOH or (CH3)4NOH the (100) surface of the wafer is textured to form the pyramids of 5–10 μm height. Next, the wafer is cleaned using peroxide and HF solutions. This is followed by deposition of intrinsic a-Si passivation layer, typically through PECVD or Hot-wire CVD. The silane (SiH4) gas diluted with H2 is used as a precursor. The deposition temperature and pressure is maintained at 200 °C and 0.1−1 Torr. Precise control over this step is essential to avoid the formation of defective epitaxial Si. Cycles of deposition and annealing and H2 plasma treatment are shown to have provided excellent surface passivation. Diborane or Trimethylboron gas mixed with SiH4 is used to deposit p-type a-Si layer, while, Phosphine gas mixed with SiH4 is used to deposit n-type a-Si layer. Direct deposition of doped a-Si layers on c-Si wafer is shown to have very poor passivation properties. This is most likely due to dopant induced defect generation in a-Si layers. Sputtered Indium Tin Oxide (ITO) is commonly used as a transparent conductive oxide (TCO) layer on top of the front and back a-Si layer in bi-facial design, as a-Si has high lateral resistance. It is generally deposited on the back side as well fully metallized cell to avoid diffusion of back metal and also for impedance matching for the reflected light. The silver/aluminum grid of 50-100μm thick is deposited through stencil printing for the front contact and back contact for bi-facial design. 
The detailed description of the fabrication process can be found in. Opto-electrical modeling and characterization of HIT cells The literature discusses several studies to interpret carrier transport bottlenecks in these cells. Traditional light and dark I–V are extensively studied and are observed to have several non-trivial features, which cannot be explained using the traditional solar cell diode theory. This is because of the presence of hetero-junction between the intrinsic a-Si layer and c-Si wafer which introduces additional complexities to current flow. In addition, there has been significant efforts to characterize this solar cell using C-V, impedance spectroscopy, surface photo-voltage, suns-Voc to produce complementary information. Further, a number of design improvements, such as, the use of new emitters, bifacial configuration, interdigitated back contact (IBC) configuration bifacial-tandem configuration are actively being pursued. Mono-silicon Monocrystalline silicon (mono c-Si) is a form in which the crystal structure is homogeneous throughout the material; the orientation, lattice parameter, and electronic properties are constant throughout the material. Dopant atoms such as phosphorus and boron are often incorporated into the film to make the silicon n-type or p-type respectively. Monocrystalline silicon is fabricated in the form of silicon wafers, usually by the Czochralski Growth method, and can be quite expensive depending on the radial size of the desired single crystal wafer (around $200 for a 300 mm Si wafer). This monocrystalline material, while useful, is one of the chief expenses associated with producing photovoltaics where approximately 40% of the final price of the product is attributable to the cost of the starting silicon wafer used in cell fabrication. Polycrystalline silicon Polycrystalline silicon is composed of many smaller silicon grains of varied crystallographic orientation, typically > 1 mm in size. This material can be synthesized easily by allowing liquid silicon to cool using a seed crystal of the desired crystal structure. Additionally, other methods for forming smaller-grained polycrystalline silicon (poly-Si) exist such as high temperature chemical vapor deposition (CVD). Not classified as Crystalline silicon These allotropic forms of silicon are not classified as crystalline silicon. They belong to the group of thin-film solar cells. Amorphous silicon Amorphous silicon (a-Si) has no long-range periodic order. The application of amorphous silicon to photovoltaics as a standalone material is somewhat limited by its inferior electronic properties. When paired with microcrystalline silicon in tandem and triple-junction solar cells, however, higher efficiency can be attained than with single-junction solar cells. This tandem assembly of solar cells allows one to obtain a thin-film material with a bandgap of around 1.12 eV (the same as single-crystal silicon) compared to the bandgap of amorphous silicon of bandgap. Tandem solar cells are then attractive since they can be fabricated with a bandgap similar to single-crystal silicon but with the ease of amorphous silicon. Nanocrystalline silicon Nanocrystalline silicon (nc-Si), sometimes also known as microcrystalline silicon (μc-Si), is a form of porous silicon. It is an allotropic form of silicon with paracrystalline structure—is similar to amorphous silicon (a-Si), in that it has an amorphous phase. 
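The "traditional solar cell diode theory" referred to above is usually summarized by the single-diode model. The following Python sketch (assuming NumPy, with illustrative parameter values rather than measured HIT-cell data) shows the corresponding current–voltage relation:

```python
import numpy as np

def diode_current(voltage, i_photo=8.0, i_sat=1e-10, n=1.2, temp_k=300.0):
    """Single-diode model: I = I_ph - I_0 * (exp(qV / (n k T)) - 1).
    Parameter values are illustrative, not measured device data."""
    k_b, q = 1.381e-23, 1.602e-19
    v_t = n * k_b * temp_k / q            # thermal voltage scaled by ideality factor
    return i_photo - i_sat * (np.exp(voltage / v_t) - 1.0)

voltages = np.linspace(0.0, 0.75, 16)
currents = diode_current(voltages)
power = voltages * currents
print(voltages[np.argmax(power)], power.max())   # rough maximum-power point
```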
Where they differ, however, is that nc-Si has small grains of crystalline silicon within the amorphous phase. This is in contrast to polycrystalline silicon (poly-Si) which consists solely of crystalline silicon grains, separated by grain boundaries. The difference comes solely from the grain size of the crystalline grains. Most materials with grains in the micrometre range are actually fine-grained polysilicon, so nanocrystalline silicon is a better term. The term 'nanocrystalline silicon' refers to a range of materials around the transition region from amorphous to microcrystalline phase in the silicon thin film. Protocrystalline silicon Protocrystalline silicon has a higher efficiency than amorphous silicon (a-Si) and it has also been shown to improve stability, but not eliminate it. A Protocrystalline phase is a distinct phase occurring during crystal growth which evolves into a microcrystalline form. Protocrystalline Si also has a relatively low absorption near the band gap owing to its more ordered crystalline structure. Thus, protocrystalline and amorphous silicon can be combined in a tandem solar cell where the top layer of thin protocrystalline silicon absorbs short-wavelength light whereas the longer wavelengths are absorbed by the underlying a-Si substrate. Transformation of amorphous into crystalline silicon Amorphous silicon can be transformed to crystalline silicon using well-understood and widely implemented high-temperature annealing processes. The typical method used in industry requires high-temperature compatible materials, such as special high temperature glass that is expensive to produce. However, there are many applications for which this is an inherently unattractive production method. Low temperature induced crystallization Flexible solar cells have been a topic of interest for less conspicuous-integrated power generation than solar power farms. These modules may be placed in areas where traditional cells would not be feasible, such as wrapped around a telephone pole or cell phone tower. In this application, a photovoltaic material may be applied to a flexible substrate, often a polymer. Such substrates cannot survive the high temperatures experienced during traditional annealing. Instead, novel methods of crystallizing the silicon without disturbing the underlying substrate have been studied extensively. Aluminum-induced crystallization (AIC) and local laser crystallization are common in the literature, however not extensively used in industry. In both of these methods, amorphous silicon is grown using traditional techniques such as plasma-enhanced chemical vapor deposition (PECVD). The crystallization methods diverge during post-deposition processing. In aluminum-induced crystallization, a thin layer of aluminum (50 nm or less) is deposited by physical vapor deposition onto the surface of the amorphous silicon. This stack of material is then annealed at a relatively low temperature between 140 °C and 200 °C in a vacuum. The aluminum that diffuses into the amorphous silicon is believed to weaken the hydrogen bonds present, allowing crystal nucleation and growth. Experiments have shown that polycrystalline silicon with grains on the order of 0.2–0.3 μm can be produced at temperatures as low as 150 °C. The volume fraction of the film that is crystallized is dependent on the length of the annealing process. 
Aluminum-induced crystallization produces polycrystalline silicon with suitable crystallographic and electronic properties that make it a candidate for producing polycrystalline thin films for photovoltaics. AIC can be used to generate crystalline silicon nanowires and other nano-scale structures. Another method of achieving the same result is the use of a laser to heat the silicon locally without heating the underlying substrate beyond some upper temperature limit. An excimer laser or, alternatively, a green laser such as a frequency-doubled Nd:YAG laser is used to heat the amorphous silicon, supplying the energy necessary to nucleate grain growth. The laser fluence must be carefully controlled in order to induce crystallization without causing widespread melting. Crystallization of the film occurs as a very small portion of the silicon film is melted and allowed to cool. Ideally, the laser should melt the silicon film through its entire thickness, but not damage the substrate. Toward this end, a layer of silicon dioxide is sometimes added to act as a thermal barrier. This allows the use of substrates that cannot be exposed to the high temperatures of standard annealing, such as polymers. Polymer-backed solar cells are of interest for seamlessly integrated power production schemes that involve placing photovoltaics on everyday surfaces. A third method for crystallizing amorphous silicon is the use of a thermal plasma jet. This strategy is an attempt to alleviate some of the problems associated with laser processing – namely the small region of crystallization and the high cost of the process on a production scale. The plasma torch is a simple piece of equipment that is used to anneal the amorphous silicon thermally. Compared to the laser method, this technique is simpler and more cost-effective. Plasma torch annealing is attractive because the process parameters and equipment dimensions can be changed easily to yield varying levels of performance. A high level of crystallization (~ 90%) can be obtained with this method. Disadvantages include difficulty achieving uniformity in the crystallization of the film. While this method is applied frequently to silicon on a glass substrate, processing temperatures may be too high for polymers. See also List of types of solar cells References Silicon, crystalline Silicon solar cells Allotropes of silicon
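As a rough orientation for why the laser fluence mentioned above must be controlled, the back-of-the-envelope sketch below estimates the energy per unit area needed to heat and fully melt a thin a-Si film. It ignores reflection losses and heat conduction into the substrate, and the film thicknesses and bulk silicon property values are assumptions for illustration, not figures taken from this article.

```python
# Rough fluence estimate for fully melting an a-Si film of thickness d.
# All property values are approximate bulk-silicon figures; reflection and
# heat loss to the substrate are ignored, so this is only a lower-bound sketch.
rho = 2330.0        # density, kg/m^3 (approximate)
c_p = 710.0         # specific heat, J/(kg*K) (approximate)
L_f = 1.79e6        # latent heat of fusion, J/kg (approximate)
T_melt = 1687.0     # melting point of silicon, K
T_room = 300.0      # starting temperature, K

def melt_fluence(d_nm):
    """Energy per area (J/m^2) to heat a film of thickness d_nm (nanometres)
    from room temperature to the melting point and melt it completely."""
    d = d_nm * 1e-9
    return rho * d * (c_p * (T_melt - T_room) + L_f)

for d_nm in (50, 100, 200):
    f = melt_fluence(d_nm)
    print(f"d = {d_nm:3d} nm  ->  ~{f:.0f} J/m^2  (~{f / 10.0:.0f} mJ/cm^2)")
```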
Crystalline silicon
[ "Chemistry" ]
4,146
[ "Allotropes of silicon", "Semiconductor materials", "Allotropes", "Group IV semiconductors" ]
29,459,180
https://en.wikipedia.org/wiki/Stellar%20collision
A stellar collision is the coming together of two stars caused by stellar dynamics within a star cluster, or by the orbital decay of a binary star due to stellar mass loss or gravitational radiation, or by other mechanisms not yet well understood. Any stars in the universe can collide, whether they are "alive", meaning fusion is still active in the star, or "dead", with fusion no longer taking place. White dwarf stars, neutron stars, black holes, main sequence stars, giant stars, and supergiants are very different in type, mass, temperature, and radius, and accordingly produce different types of collisions and remnants. Types of stellar collisions and mergers Binary star mergers About half of all the stars in the sky are part of binary systems, with two stars orbiting each other. Some binary stars orbit each other so closely that they share the same atmosphere, giving the system a peanut shape. While most such contact binary systems are stable, some do become unstable and either eject one partner or eventually merge. Astronomers predict that events of this type occur in the globular clusters of our galaxy about once every 10,000 years. On 2 September 2008, scientists first observed a stellar merger in Scorpius (named V1309 Scorpii), though it was not known to be the result of a stellar merger at the time. Type Ia supernovae White dwarfs are the remnants of low-mass stars which, if they form a binary system with another star, can cause large stellar explosions known as type Ia supernovae. The normal route by which this happens involves a white dwarf drawing material off a main sequence or red giant star to form an accretion disc. Much more rarely, a type Ia supernova occurs when two white dwarfs orbit each other closely. Emission of gravitational waves causes the pair to spiral inward. When they finally merge, if their combined mass approaches or exceeds the Chandrasekhar limit, carbon fusion is ignited, raising the temperature. Since a white dwarf consists of degenerate matter, there is no safe equilibrium between thermal pressure and the weight of overlying layers of the star. Because of this, runaway fusion reactions rapidly heat up the interior of the combined star and spread, causing a supernova explosion. In a matter of seconds, all of the white dwarf's mass is thrown into space. Neutron star mergers Neutron star mergers occur in a fashion similar to the rare type Ia supernovae resulting from merging white dwarfs. When two neutron stars orbit each other closely, they spiral inward as time passes due to gravitational radiation. When they meet, their merger leads to the formation of either a heavier neutron star or a black hole, depending on whether the mass of the remnant exceeds the Tolman–Oppenheimer–Volkoff limit. This creates a magnetic field that is trillions of times stronger than that of Earth, in a matter of one or two milliseconds. Astronomers believe that this type of event is what creates short gamma-ray bursts and kilonovae. A gravitational wave event that occurred on 17 August 2017, GW170817, was reported on 16 October 2017 to be associated with the merger of two neutron stars in a distant galaxy, the first such merger to be observed via gravitational radiation. Thorne–Żytkow objects If a neutron star collides with a red giant of sufficiently low mass and density, the merger is conjectured to produce a Thorne–Żytkow object, a hypothetical type of star in which a neutron star is enveloped by a red giant. 
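As an illustrative aid (not part of the source article), the sketch below encodes the remnant classification logic described above. The numerical thresholds are assumptions: roughly 1.4 solar masses for the Chandrasekhar limit and roughly 2.2 solar masses for the Tolman–Oppenheimer–Volkoff limit, the latter being uncertain in the literature; mass loss during the merger is ignored.

```python
# Very rough remnant classification for compact-binary mergers (masses in solar masses).
# Threshold values are hypothetical round numbers for illustration only.
CHANDRASEKHAR_MSUN = 1.4
TOV_MSUN = 2.2

def white_dwarf_merger(m1, m2):
    """Sketch outcome of a double white dwarf merger."""
    return "possible type Ia supernova" if m1 + m2 >= CHANDRASEKHAR_MSUN else "more massive white dwarf"

def neutron_star_merger(m1, m2):
    """Sketch outcome of a binary neutron star merger."""
    return "black hole" if m1 + m2 > TOV_MSUN else "heavier neutron star"

print(white_dwarf_merger(0.6, 0.9))   # -> possible type Ia supernova
print(neutron_star_merger(1.3, 1.4))  # -> black hole
```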
Formation of planets When two low-mass stars in a binary system merge, mass may be thrown off in the orbital plane of the merging stars, creating an excretion disk from which new planets can form. Discovery While the concept of stellar collision has been around for several generations of astronomers, only the development of new technology has made it possible for it to be more objectively studied. For example, in 1764, a cluster of stars known as Messier 30 was discovered by astronomer Charles Messier. In the twentieth century, astronomers concluded that the cluster was approximately 13 billion years old. The Hubble Space Telescope resolved the individual stars of Messier 30. With this new technology, astronomers discovered that some stars, known as blue stragglers, appeared younger than other stars in the cluster. Astronomers then hypothesized that stars may have "collided", or "merged", giving them more fuel so they continued fusion while fellow stars around them started going out. Stellar collisions and the Solar System While stellar collisions may occur very frequently in certain parts of the galaxy, the likelihood of a collision involving the Sun is very small. A probability calculation predicts that a stellar collision involving the Sun would be expected roughly once in 10²⁸ years. For comparison, the age of the universe is of the order of 10¹⁰ years. The likelihood of close encounters with the Sun is also small. The rate is estimated by the formula: N ≈ 4.2 · D² Myr⁻¹, where N is the number of encounters per million years that come within a radius D of the Sun in parsecs. For comparison, the mean radius of the Earth's orbit, 1 AU, is about 4.8 × 10⁻⁶ parsecs. Our star will likely not be directly affected by such an event because there are no stellar clusters close enough to cause such interactions. KIC 9832227 and binary star mergers An analysis of the eclipses of KIC 9832227 initially suggested that its orbital period was indeed shortening, and that the cores of the two stars would merge in 2022. However, subsequent reanalysis found that one of the datasets used in the initial prediction contained a 12-hour timing error, leading to a spurious apparent shortening of the stars' orbital period. The mechanism behind binary star mergers is not yet fully understood, and remains one of the main focuses of those researching KIC 9832227 and other contact binaries. References External links Stellar dynamics Stellar astronomy Impact events Concepts in stellar astronomy Articles containing video clips
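For a concrete sense of the encounter-rate formula above, the short sketch below (an illustrative aid, not part of the source article) evaluates N ≈ 4.2 · D² Myr⁻¹ for a few distances and performs the AU-to-parsec conversion used in the comparison.

```python
AU_PER_PARSEC = 206_264.8  # one parsec expressed in astronomical units

def encounters_per_myr(d_parsec):
    """Expected number of stellar encounters per million years that pass
    within a radius d_parsec (in parsecs) of the Sun, using N ~ 4.2 * D**2."""
    return 4.2 * d_parsec ** 2

print(f"1 AU = {1.0 / AU_PER_PARSEC:.2e} pc")
for d in (0.1, 1.0, 5.0):
    print(f"D = {d:4.1f} pc  ->  N ~ {encounters_per_myr(d):.3f} encounters per Myr")
```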
Stellar collision
[ "Physics", "Astronomy" ]
1,215
[ "Concepts in astrophysics", "Stellar astronomy", "Astronomical events", "Impact events", "Astrophysics", "Concepts in stellar astronomy", "Astronomical sub-disciplines", "Stellar dynamics" ]
29,462,176
https://en.wikipedia.org/wiki/Van%20Laar%20equation
The Van Laar equation is a thermodynamic activity model, which was developed by Johannes van Laar in 1910–1913 to describe phase equilibria of liquid mixtures. The equation was derived from the Van der Waals equation. The original van der Waals parameters did not give a good description of vapor–liquid phase equilibria, which forced the user to fit the parameters to experimental results. Because of this, the model lost its connection to molecular properties, and therefore it has to be regarded as an empirical model used to correlate experimental results. Equations Van Laar derived the excess enthalpy from the van der Waals equation: H^E = (x1·x2·b1·b2)/(x1·b1 + x2·b2) · (√a1/b1 − √a2/b2)². In here a1, a2 and b1, b2 are the van der Waals parameters for attraction and excluded volume of components 1 and 2. He used the conventional quadratic mixing rule for the energy parameter a and the linear mixing rule for the size parameter b. Since these parameters did not lead to a good description of phase equilibria, the model was reduced to the form: g^E/RT = (A12·A21·x1·x2)/(A12·x1 + A21·x2). In here A12 and A21 are the van Laar coefficients, which are obtained by regression of experimental vapor–liquid equilibrium data. The activity coefficient of component i is derived by differentiation with respect to xi. This yields: ln γ1 = A12·[A21·x2/(A12·x1 + A21·x2)]² and ln γ2 = A21·[A12·x1/(A12·x1 + A21·x2)]². This shows that the van Laar coefficients A12 and A21 are equal to the logarithmic limiting activity coefficients ln γ1^∞ and ln γ2^∞, respectively. The model gives increasing (A12 and A21 > 0) or only decreasing (A12 and A21 < 0) activity coefficients with decreasing concentration. The model cannot describe extrema in the activity coefficient along the concentration range. In the case A12 = A21 = A, which implies that the molecules are of equal size but different in polarity, the equations become: ln γ1 = A·x2² and ln γ2 = A·x1². In this case the activity coefficients mirror each other at x1 = 0.5. When A = 0, the activity coefficients are unity, thus describing an ideal mixture. Recommended values An extensive range of recommended values for the Van Laar coefficients can be found in the literature. References Thermodynamic equations
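A minimal numerical sketch of the two-coefficient form above is given below. It is not part of the source article, and the coefficient values in the example are arbitrary assumptions chosen only to show the shape of the model.

```python
import math

def van_laar_gammas(x1, A12, A21):
    """Activity coefficients (gamma1, gamma2) from the Van Laar model:
       ln gamma1 = A12 * (A21*x2 / (A12*x1 + A21*x2))**2
       ln gamma2 = A21 * (A12*x1 / (A12*x1 + A21*x2))**2
    """
    x2 = 1.0 - x1
    denom = A12 * x1 + A21 * x2
    ln_g1 = A12 * (A21 * x2 / denom) ** 2
    ln_g2 = A21 * (A12 * x1 / denom) ** 2
    return math.exp(ln_g1), math.exp(ln_g2)

# Hypothetical coefficients for a moderately non-ideal binary mixture
A12, A21 = 1.8, 1.2
for x1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    g1, g2 = van_laar_gammas(x1, A12, A21)
    print(f"x1 = {x1:.2f}:  gamma1 = {g1:.3f}  gamma2 = {g2:.3f}")
```

At x1 = 0 the printed gamma1 equals exp(A12), and at x1 = 1 gamma2 equals exp(A21), reproducing the limiting-activity-coefficient property stated above.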
Van Laar equation
[ "Physics", "Chemistry" ]
413
[ "Thermodynamic models", "Thermodynamics" ]
29,467,449
https://en.wikipedia.org/wiki/Protein%20function%20prediction
Protein function prediction methods are techniques that bioinformatics researchers use to assign biological or biochemical roles to proteins. These proteins are usually ones that are poorly studied or predicted based on genomic sequence data. These predictions are often driven by data-intensive computational procedures. Information may come from nucleic acid sequence homology, gene expression profiles, protein domain structures, text mining of publications, phylogenetic profiles, phenotypic profiles, and protein-protein interaction. Protein function is a broad term: the roles of proteins range from catalysis of biochemical reactions to transport to signal transduction, and a single protein may play a role in multiple processes or cellular pathways. Generally, function can be thought of as, "anything that happens to or through a protein". The Gene Ontology Consortium provides a useful classification of functions, based on a dictionary of well-defined terms divided into three main categories of molecular function, biological process and cellular component. Researchers can query this database with a protein name or accession number to retrieve associated Gene Ontology (GO) terms or annotations based on computational or experimental evidence. While techniques such as microarray analysis, RNA interference, and the yeast two-hybrid system can be used to experimentally demonstrate the function of a protein, advances in sequencing technologies have made the rate at which proteins can be experimentally characterized much slower than the rate at which new sequences become available. Thus, the annotation of new sequences is mostly by prediction through computational methods, as these types of annotation can often be done quickly and for many genes or proteins at once. The first such methods inferred function based on homologous proteins with known functions (homology-based function prediction). The development of context-based and structure based methods have expanded what information can be predicted, and a combination of methods can now be used to get a picture of complete cellular pathways based on sequence data. The importance and prevalence of computational prediction of gene function is underlined by an analysis of 'evidence codes' used by the GO database: as of 2010, 98% of annotations were listed under the code IEA (inferred from electronic annotation) while only 0.6% were based on experimental evidence. Homology-based methods Proteins of similar sequence are usually homologous and thus have a similar function. Hence proteins in a newly sequenced genome are routinely annotated using the sequences of similar proteins in related genomes. However, closely related proteins do not always share the same function. For example, the yeast Gal1 and Gal3 proteins are paralogs (73% identity and 92% similarity) that have evolved very different functions with Gal1 being a galactokinase and Gal3 being a transcriptional inducer. There is no hard sequence-similarity threshold for "safe" function prediction; many proteins of barely detectable sequence similarity have the same function while others (such as Gal1 and Gal3) are highly similar but have evolved different functions. As a rule of thumb, sequences that are more than 30-40% identical are usually considered as having the same or a very similar function. For enzymes, predictions of specific functions are especially difficult, as they only need a few key residues in their active site, hence very different sequences can have very similar activities. 
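As a concrete illustration of the sequence-identity rule of thumb mentioned above (not taken from the source), the sketch below computes percent identity over the aligned positions of two toy sequences; both sequences and the alignment are made up purely for demonstration.

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity over two aligned, gapped sequences of equal length.
    Positions where both sequences have a gap are ignored."""
    assert len(aligned_a) == len(aligned_b)
    compared = matches = 0
    for a, b in zip(aligned_a, aligned_b):
        if a == '-' and b == '-':
            continue
        compared += 1
        if a == b and a != '-':
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Toy alignment (hypothetical sequences, for illustration only)
seq_a = "MKT-AYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
seq_b = "MKTEAYIAKQRQISFVK-HFSRQQEERLGLIEVQ"
print(f"identity = {percent_identity(seq_a, seq_b):.1f}%")
```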
By contrast, even with sequence identity of 70% or greater, 10% of enzyme pairs have different substrates; and differences in the actual enzymatic reactions are not uncommon near 50% sequence identity. Sequence motif-based methods The development of protein domain databases such as Pfam (Protein Families Database) allows researchers to find known domains within a query sequence, providing evidence for likely functions. The dcGO website contains annotations to both individual domains and supra-domains (i.e., combinations of two or more successive domains), thus allowing, via dcGO Predictor, function prediction in a more realistic manner. Within protein domains, shorter signatures known as motifs are associated with particular functions, and motif databases such as PROSITE ('database of protein domains, families and functional sites') can be searched using a query sequence. Motifs can, for example, be used to predict the subcellular localization of a protein (where in the cell the protein is sent after synthesis). Short signal peptides direct certain proteins to a particular location such as the mitochondria, and various tools exist for the prediction of these signals in a protein sequence; one example is SignalP, which has been updated several times as methods have improved. Thus, aspects of a protein's function can be predicted without comparison to other full-length homologous protein sequences. Structure-based methods Because 3D protein structure is generally better conserved than protein sequence, structural similarity is a good indicator of similar function in two or more proteins. Many programs have been developed to screen a known protein structure against the Protein Data Bank and report similar structures (for example, FATCAT (Flexible structure AlignmenT by Chaining AFPs (Aligned Fragment Pairs) with Twists), CE (combinatorial extension), and DeepAlign (protein structure alignment beyond spatial proximity)). Similarly, the main protein databases, such as UniProt, have built-in tools to search a given protein sequence against structure databases and link to related proteins of known structure. Protein structure prediction To deal with the fact that many protein sequences have no solved structure, some function prediction servers, such as RaptorX, have been developed that first predict the 3D model of a sequence and then use structure-based methods to predict functions based on the predicted 3D model. In many cases, instead of the whole protein structure, the 3D structure of a particular motif representing an active site or binding site can be targeted. The Structurally Aligned Local Sites of Activity (SALSA) method, developed by Mary Jo Ondrechen and students, utilizes computed chemical properties of the individual amino acids to identify local biochemically active sites. Databases such as the Catalytic Site Atlas have been developed that can be searched using novel protein sequences to predict specific functional sites. Computational solvent mapping One of the challenges involved in protein function prediction is discovery of the active site. This is complicated by certain active sites not being formed – essentially not existing – until the protein undergoes conformational changes brought on by the binding of small molecules. Most protein structures have been determined by X-ray crystallography, which requires a purified protein crystal. 
As a result, existing structural models are generally of a purified protein and as such lack the conformational changes that are created when the protein interacts with small molecules. Computational solvent mapping utilizes probes (small organic molecules) that are computationally 'moved' over the surface of the protein, searching for sites where they tend to cluster. Multiple different probes are generally applied, with the goal being to obtain a large number of different protein–probe conformations. The generated clusters are then ranked based on the cluster's average free energy. After computationally mapping multiple probes, the site of the protein where relatively large numbers of clusters form typically corresponds to an active site on the protein. This technique is a computational adaptation of 'wet lab' work from 1996. It was discovered that ascertaining the structure of a protein while it is suspended in different solvents and then superimposing those structures on one another produces data where the organic solvent molecules (that the proteins were suspended in) typically cluster at the protein's active site. This work was carried out in response to the realization that water molecules are visible in the electron density maps produced by X-ray crystallography. The water molecules interact with the protein and tend to cluster at the protein's polar regions. This led to the idea of immersing the purified protein crystal in other solvents (e.g. ethanol, isopropanol, etc.) to determine where these molecules cluster on the protein. The solvents can be chosen based on what they approximate, that is, what molecule this protein may interact with (e.g. ethanol can probe for interactions with the amino acid serine, isopropanol for threonine, etc.). It is vital that the protein crystal maintains its tertiary structure in each solvent. This process is repeated for multiple solvents, and the data can then be used to try to determine potential active sites on the protein. Ten years later, this technique was developed into an algorithm by Clodfelter et al. Genome context-based methods Many of the newer methods for protein function prediction are not based on comparison of sequence or structure as above, but on some type of correlation between novel genes/proteins and those that already have annotations. Several methods have been developed to predict gene function based on the local genomic or phylogenomic context and the structure of genes: Phylogenetic profiling is based on the observation that two or more proteins with the same pattern of presence or absence across many different genomes most likely have a functional link. Whereas homology-based methods can often be used to identify molecular functions of a protein, context-based approaches can be used to predict cellular function, or the biological process in which a protein acts. For example, proteins involved in the same metabolic pathway are likely to be present in a genome together or absent altogether, suggesting that these genes work together in a functional context. Operons are clusters of genes that are transcribed together. Co-transcription data, together with the fact that the order of genes in operons is often conserved across many bacteria, indicate that these genes act together. Gene fusion occurs when two or more genes encode two or more proteins in one organism and have, through evolution, combined to become a single gene in another organism (or vice versa for gene fission). 
This concept has been used, for example, to search all E. coli protein sequences for homology in other genomes and find over 6000 pairs of sequences with shared homology to single proteins in another genome, indicating potential interaction between each of the pairs. Because the two sequences in each protein pair are non-homologous, these interactions could not be predicted using homology-based methods. Gene expression and location-based methods In prokaryotes, clusters of genes that are physically close together in the genome often conserve together through evolution, and tend to encode proteins that interact or are part of the same operon. Thus, chromosomal proximity also called the gene neighbour method can be used to predict functional similarity between proteins, at least in prokaryotes. Chromosomal proximity has also been seen to apply for some pathways in selected eukaryotic genomes, including Homo sapiens, and with further development gene neighbor methods may be valuable for studying protein interactions in eukaryotes. Genes involved in similar functions are also often co-transcribed, so that an unannotated protein can often be predicted to have a related function to proteins with which it co-expresses. The guilt by association algorithms developed based on this approach can be used to analyze large amounts of sequence data and identify genes with expression patterns similar to those of known genes. Often, a guilt by association study compares a group of candidate genes (unknown function) to a target group (for example, a group of genes known to be associated with a particular disease), and rank the candidate genes by their likelihood of belonging to the target group based on the data. Based on recent studies, however, it has been suggested that some problems exist with this type of analysis. For example, because many proteins are multifunctional, the genes encoding them may belong to several target groups. It is argued that such genes are more likely to be identified in guilt by association studies, and thus predictions are not specific. With the accumulation of RNA-seq data that are capable of estimating expression profiles for alternatively spliced isoforms, machine learning algorithms have also been developed for predicting and differentiating functions at the isoform level. This represents an emerging research area in function prediction, which integrates large-scale, heterogeneous genomic data to infer functions at the isoform level. Network-based methods Guilt by association type algorithms may be used to produce a functional association network for a given target group of genes or proteins. These networks serve as a representation of the evidence for shared/similar function within a group of genes, where nodes represent genes/proteins and are linked to each other by edges representing evidence of shared function. Integrated networks Several networks based on different data sources can be combined into a composite network, which can then be used by a prediction algorithm to annotate candidate genes or proteins. For example, the developers of the bioPIXIE system used a wide variety of Saccharomyces cerevisiae (yeast) genomic data to produce a composite functional network for that species. This resource allows the visualization of known networks representing biological processes, as well as the prediction of novel components of those networks. Many algorithms have been developed to predict function based on the integration of several data sources (e.g. 
genomic, proteomic, protein interaction, etc.), and testing on previously annotated genes indicates a high level of accuracy. Disadvantages of some function prediction algorithms have included a lack of accessibility and the time required for analysis. Faster, more accurate algorithms such as GeneMANIA (multiple association network integration algorithm) have, however, been developed in recent years and are publicly available on the web, indicating the future direction of function prediction. Tools and databases for protein function prediction STRING: web tool that integrates various data sources for function prediction. VisANT: Visual analysis of networks and integrative visual data-mining. Mantis: A consensus-driven function prediction tool that dynamically integrates multiple reference databases. See also Gene prediction Protein–protein interaction prediction Protein structure prediction Structural genomics Functional genomics References External links The dcGO database Protein Data Bank Catalytic Site Atlas RaptorX Server for model-assisted protein function prediction Blast2GO, high-throughput tool for protein function prediction and functional annotation (webpage). Bioinformatics Protein methods
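To make the guilt-by-association idea described above concrete, here is a minimal score-propagation sketch on a tiny functional association network. The network, the seed genes, and the damping parameter are all hypothetical and chosen only for illustration; this is not the GeneMANIA algorithm itself, merely a simple random-walk-with-restart style scheme.

```python
import numpy as np

# Hypothetical functional association network: genes and weighted edges
genes = ["geneA", "geneB", "geneC", "geneD", "geneE"]
W = np.array([
    [0.0, 0.8, 0.5, 0.0, 0.0],
    [0.8, 0.0, 0.6, 0.1, 0.0],
    [0.5, 0.6, 0.0, 0.0, 0.2],
    [0.0, 0.1, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.2, 0.9, 0.0],
])
seed = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # genes already known to have the function

P = W / W.sum(axis=0, keepdims=True)        # column-normalize edge weights
alpha = 0.7                                 # hypothetical damping parameter

# Iteratively spread the seed labels through the network
scores = seed.copy()
for _ in range(50):
    scores = alpha * (P @ scores) + (1 - alpha) * seed

for g, s in sorted(zip(genes, scores), key=lambda t: -t[1]):
    print(f"{g}: {s:.3f}")
```

Candidate genes that are tightly linked to the seed set (here, geneC) end up with higher scores than weakly connected ones, which is the ranking behaviour the guilt-by-association studies described above rely on.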
Protein function prediction
[ "Chemistry", "Engineering", "Biology" ]
2,872
[ "Biochemistry methods", "Biological engineering", "Protein methods", "Protein biochemistry", "Bioinformatics" ]
44,689,110
https://en.wikipedia.org/wiki/Experimental%20%26%20Molecular%20Medicine
Experimental & Molecular Medicine is a monthly peer-reviewed open access medical journal covering biochemistry and molecular biology. It was established in 1964 as the Korean Journal of Biochemistry or Taehan Saenghwa Hakhoe Chapchi and was published bi-annually. It was originally published in Korean, becoming an English-language journal in 1975. In 1994, the journal began publishing quarterly. It obtained its current name in 1996, at which time it also began publishing bi-monthly, switching to monthly in 2009. It is the official journal of the Korean Society for Biochemistry and Molecular Biology. The editor-in-chief is Dae-Myung Jue (Catholic University of Korea). It is published by the Nature Publishing Group. The full text of the journal from 2008 to the present is available at PubMed Central. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal had a 2022 impact factor of 12.8. In 2020, it ranked 14th out of 140 journals in the category "Medicine, Research & Experimental" and 34th out of 298 journals in the category "Biochemistry & Molecular Biology". References External links Korean Society for Medical Biochemistry and Molecular Biology Biochemistry journals English-language journals Nature Research academic journals Academic journals established in 1964 Monthly journals Academic journals associated with learned and professional societies
Experimental & Molecular Medicine
[ "Chemistry" ]
268
[ "Biochemistry journals", "Biochemistry literature" ]