| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
51,560,864 | https://en.wikipedia.org/wiki/Medlar%20bodies | Medlar bodies, also known as sclerotic or muriform cells, are thick-walled cells (5–12 microns) with multiple internal transverse septa or chambers that resemble copper pennies. When present in skin or subcutaneous tissue, the cells are indicative of chromoblastomycosis.
References
Mycology | Medlar bodies | [
"Biology"
] | 71 | [
"Mycology"
] |
62,943,310 | https://en.wikipedia.org/wiki/Green%20hydrogen | Green hydrogen (GH2) is hydrogen produced by the electrolysis of water, using renewable electricity. Production of green hydrogen causes significantly lower greenhouse gas emissions than production of grey hydrogen, which is derived from fossil fuels without carbon capture.
Green hydrogen's principal purpose is to help limit global warming to 1.5 °C, reduce fossil fuel dependence by replacing grey hydrogen, and provide for an expanded set of end-uses in specific economic sectors, sub-sectors and activities. These end-uses may be technically difficult to decarbonize through other means such as electrification with renewable power. Its main applications are likely to be in heavy industry (e.g. high temperature processes alongside electricity, feedstock for production of green ammonia and organic chemicals, and in direct reduction steelmaking), long-haul transport (e.g. shipping, aviation and to a lesser extent heavy goods vehicles), and long-term energy storage.
As of 2021, green hydrogen accounted for less than 0.04% of total hydrogen production. Its higher cost relative to hydrogen derived from fossil fuels is the main reason green hydrogen is in less demand. For example, hydrogen produced by electrolysis powered by solar power was about 25 times more expensive than that derived from hydrocarbons in 2018. By 2024, this cost disadvantage had narrowed to roughly a factor of three.
Definition
Most commonly, green hydrogen is defined as hydrogen produced by the electrolysis of water, using renewable electricity. In this article, the term green hydrogen is used with this meaning.
Precise definitions sometimes add other criteria. The global Green Hydrogen Standard defines green hydrogen as "hydrogen produced through the electrolysis of water with 100% or near 100% renewable energy with close to zero greenhouse gas emissions."
A broader, less-used definition of green hydrogen also includes hydrogen produced through various other methods that produce relatively low emissions and meet other sustainability criteria. For example, these production methods may involve nuclear energy or biomass feedstocks.
Electrolysis
Hydrogen can be produced from water by electrolysis. Electrolysis powered by renewable energy is carbon neutral. The business consortium Hydrogen Council said that, as of December 2023, manufacturers are preparing for a green hydrogen expansion by building out the electrolyzer pipeline by 35 percent to meet the needs of more than 1,400 announced projects.
Biochar-assisted
Biochar-assisted water electrolysis (BAWE) reduces energy consumption by replacing the oxygen evolution reaction (OER) with the biochar oxidation reaction (BOR). An electrolyte dissolves the biochar as the reaction proceeds. A 2024 study claimed that the reaction was six times more efficient than conventional electrolysis, operating at less than 1 V without oxygen production, using a current of ~250 mA per gram of catalyst at 100% Faradaic efficiency. The process could be driven by small-scale solar or wind power.
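For scale, the hydrogen output implied by the quoted current can be estimated from Faraday's law; this is an illustrative calculation using the figures above, not a result reported by the study. At 100% Faradaic efficiency each H2 molecule requires two electrons, so

$$\dot{n}_{\mathrm{H_2}} = \frac{I}{2F} = \frac{0.25\ \mathrm{A\,g^{-1}}}{2 \times 96{,}485\ \mathrm{C\,mol^{-1}}} \approx 1.3\ \mu\mathrm{mol\ s^{-1}\ g^{-1}} \approx 4.7\ \mathrm{mmol\ h^{-1}\ g^{-1}},$$

or roughly 9 mg of hydrogen per hour per gram of catalyst.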
Cow manure biochar operated at only 0.5 V, better than materials such as sugarcane husks, hemp waste, and paper waste. Almost 35% of the biochar and solar energy was converted into hydrogen. Biochar production (via pyrolysis) is not carbon neutral.
Uses
There is potential for green hydrogen to play a significant role in decarbonising energy systems where there are challenges and limitations to replacing fossil fuels with direct use of electricity.
Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon reducing agent replacing coal-derived coke.
Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol, and fuel cell technology. As an energy resource, hydrogen has a far higher energy density by mass (39.6 kWh/kg) than batteries (lithium battery: 0.15–0.25 kWh/kg). For light duty vehicles including passenger cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in the future.
Green hydrogen can also be used for long-duration grid energy storage, and for long-duration seasonal energy storage. It has been explored as an alternative to batteries for short-duration energy storage.
Green methanol
Green methanol is a liquid fuel produced by combining carbon dioxide and hydrogen (H2) under pressure and heat with catalysts. It is a way to reuse captured carbon dioxide. Methanol can store hydrogen economically at ambient temperature and pressure, whereas liquid hydrogen and ammonia require substantial energy to stay cold in their liquid state. In 2023 the Laura Maersk was the first container ship to run on methanol fuel. Ethanol plants in the US Midwest, which emit a relatively pure stream of carbon dioxide suitable for capture, are well placed to combine that CO2 with hydrogen to make green methanol, drawing on abundant wind and nuclear energy in Iowa, Minnesota, and Illinois. Mixing methanol with ethanol could make it a safer fuel, because methanol burns with no visible flame in daylight and emits no smoke, whereas ethanol has a visible light-yellow flame. Green hydrogen production at 70% efficiency, followed by methanol production from that hydrogen at 70% efficiency, gives an overall energy conversion efficiency of 49%.
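Treating the two steps as independent conversion stages, the quoted overall figure follows directly (a simple illustrative check of the arithmetic above):

$$\eta_{\mathrm{overall}} = \eta_{\mathrm{H_2}} \times \eta_{\mathrm{MeOH}} = 0.70 \times 0.70 = 0.49 \approx 49\%.$$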
Market
As of 2022, the global hydrogen market was valued at $155 billion and was expected to grow at a compound annual growth rate (CAGR) of 9.3% between 2023 and 2030.
Of this market, green hydrogen accounted for about $4.2 billion (2.7%).
Due to the higher cost of production, green hydrogen represents a smaller fraction of the hydrogen produced compared to its share of market value.
The majority of hydrogen produced in 2020 was derived from fossil fuels; 99% came from carbon-based sources. Electrolysis-driven production represented less than 0.1% of the total, of which only a part was powered by renewable electricity.
The current high cost of production is the main factor limiting the use of green hydrogen. A price of $2/kg is considered by many to be a potential tipping point that would make green hydrogen competitive against grey hydrogen. It is cheapest to produce green hydrogen with surplus renewable power that would otherwise be curtailed, which favours electrolysers capable of responding to low and variable power levels (such as proton exchange membrane electrolysers).
The cost of electrolysers fell by 60% from 2010 to 2022, and green hydrogen production costs are forecast to fall significantly out to 2030 and 2050, driving down the cost of green hydrogen alongside the falling cost of renewable power generation. A Goldman Sachs analysis observed in 2022, just prior to Russia's invasion of Ukraine, that the "unique dynamic in Europe with historically high gas and carbon prices is already leading to green H2 cost parity with grey across key parts of the region", and anticipated that green hydrogen would achieve cost parity with grey hydrogen globally by 2030, or earlier if a global carbon tax were placed on grey hydrogen.
As of 2021, the green hydrogen investment pipeline was estimated at 121 gigawatts of electrolyser capacity across 136 projects in planning and development phases, totaling over $500 billion. If all projects in the pipeline were built, they could account for 10% of hydrogen production by 2030.
The market could be worth over $1 trillion a year by 2050 according to Goldman Sachs.
An energy market analyst suggested in early 2021 that the price of green hydrogen would drop 70% by 2031 in countries that have cheap renewable energy.
Projects
Australia
In 2020, the Australian government fast-tracked approval for the world's largest planned renewable energy export facility in the Pilbara region. In 2021, energy companies announced plans to construct a "hydrogen valley" in New South Wales at a cost of $2 billion to replace the region's coal industry.
As of July 2022, the Australian Renewable Energy Agency (ARENA) had invested $88 million in 35 hydrogen projects ranging from university research and development to first-of-a-kind demonstrations. In 2022, ARENA was expected to close on two or three of Australia's first large-scale electrolyser deployments as part of its $100 million hydrogen deployment round.
In 2024 Andrew Forrest delayed or cancelled plans to manufacture 15 million tonnes of green hydrogen per year by 2030.
Brazil
Brazil's energy matrix is considered one of the cleanest in the world, and experts highlight the country's potential for producing green hydrogen. Research carried out in the country indicates that biomass (such as starches and waste from sewage treatment plants) can be processed and converted into green hydrogen (see: Bioenergy, Biohydrogen and Biological hydrogen production). The Australian company Fortescue Metals Group has plans to install a green hydrogen plant near the port of Pecém, in Ceará, with an initial forecast of starting operations in 2022. In the same year, the Federal University of Santa Catarina announced a partnership with the German Deutsche Gesellschaft für Internationale Zusammenarbeit for the production of green hydrogen (H2V). Unigel has plans to build a green hydrogen/green ammonia plant in Camaçari, Bahia, which is scheduled to come into operation in 2023. Initiatives in this area are also ongoing in the states of Minas Gerais, Paraná, Pernambuco, Piauí, Rio de Janeiro, Rio Grande do Norte, Rio Grande do Sul and São Paulo. Research by the University of Campinas and the Technical University of Munich has determined the space required for wind and solar parks for large-scale hydrogen production. According to this work, significantly less land would be required to produce green hydrogen from wind and photovoltaic energy than is currently used to grow sugarcane for fuel. In this study, author Herzog assumed a power requirement for the electrolysers of 120 gigawatts (GW). On November 20, 2023, Ursula von der Leyen, President of the European Commission, announced support for the production of 10 GW of hydrogen, and subsequently ammonia, in the state of Piauí. Ammonia will be exported from there.
Canada
World Energy GH2's Project Nujio'qonik aims to be Canada's first commercial green hydrogen / ammonia producer created from three gigawatts of wind energy on the west coast of Newfoundland and Labrador, Canada. Nujio'qonik is the Mi'kmaw name for Bay St. George, where the project is proposed. Since June 2022, the project has been undergoing environmental assessment according to regulatory guidelines issued by the Government of Newfoundland and Labrador.
Chile
Chile's goal to use only clean energy by the year 2050 includes the use of green hydrogen. The EU Latin America and Caribbean Investment Facility provided a €16.5 million grant and the EIB and KfW are in the process of providing up to €100 million each to finance green hydrogen projects.
China
In 2022 China was the leader of the global hydrogen market with an output of 33 million tons (a third of global production), mostly using fossil fuel.
As of 2021, several companies have formed alliances to increase production of the fuel fifty-fold in the next six years.
Sinopec aimed to generate 500,000 tonnes of green hydrogen by 2025. Hydrogen generated from wind energy could provide a cost-effective alternative for coal-dependent regions like Inner Mongolia. As part of preparations for the 2022 Winter Olympics, a hydrogen electrolyser described as the "world's largest" began operations to fuel vehicles used at the games. The electrolyser was powered by onshore wind.
Egypt
Egypt has opened the door to $40 billion of investment in green hydrogen and renewable technology by signing seven memoranda of understanding with international developers in these fields. The projects, located in the Suez Canal economic zone, will see an investment of around $12 billion in an initial pilot phase, followed by a further $29 billion, according to the country's Planning Minister, Hala Helmy el-Said.
Germany
Germany invested €9 billion to construct 5 GW of electrolyzer capacity by 2030.
India
Reliance Industries announced its plan to use about 3 gigawatts (GW) of solar energy to generate 400,000 tonnes of hydrogen. Gautam Adani, founder of the Adani Group announced plans to invest $70 billion to become the world's largest renewable energy company, and produce the cheapest hydrogen across the globe. The power ministry of India has stated that India intends to produce a cumulative 5 million tonnes of green hydrogen by 2030.
In April 2022, the public sector Oil India Limited (OIL), which is headquartered in eastern Assam's Duliajan, set up India's first 99.99% pure green hydrogen pilot plant in keeping with the goal of "making the country ready for the pilot-scale production of hydrogen and its use in various applications" while "research and development efforts are ongoing for a reduction in the cost of production, storage and the transportation" of hydrogen.
In January 2024, green hydrogen projects with a combined capacity of nearly 412,000 metric tons per year were awarded, with production due by the end of 2026.
Japan
In 2023, Japan announced plans to spend US$21 billion on subsidies for delivered clean hydrogen over a 15-year period.
Mauritania
Mauritania has launched two major green hydrogen projects. The NOUR Project would become one of the world's largest hydrogen projects, with 10 GW of capacity by 2030, in cooperation with the company Chariot. The second is the AMAN Project, which includes 12 GW of wind capacity and 18 GW of solar capacity to produce 1.7 million tons per annum of green hydrogen or 10 million tons per annum of green ammonia for local use and export, in cooperation with the Australian company CWP Renewables.
Namibia
Namibia has commissioned a green hydrogen production project with German support. The $10 billion project involves the construction of wind farms and photovoltaic plants with a total capacity of 7 gigawatts (GW). It aims to produce 2 million tonnes of green ammonia and hydrogen derivatives by 2030 and will create 15,000 jobs, of which 3,000 will be permanent.
Oman
An association of companies announced a $30 billion project in Oman, which would become one of the world's largest hydrogen facilities. Construction was to begin in 2028. By 2038 the project was to be powered by 25 GW of wind and solar energy.
Portugal
In April 2021, Portugal announced plans to construct the first solar-powered plant to produce hydrogen by 2023. The Lisbon-based energy company Galp Energia announced plans to construct an electrolyser to power its refinery by 2025.
Saudi Arabia
In 2021, Saudi Arabia, as a part of the NEOM project, announced an investment of $5bn to build a green hydrogen-based ammonia plant, which would start production in 2025.
Singapore
Singapore started the construction of a 600 MW hydrogen-ready powerplant that is expected to be ready by the first half of 2026.
Spain
In February 2021, thirty companies announced a pioneering project to provide hydrogen bases in Spain. The project intended to supply 93 GW of solar and 67 GW of electrolysis capacity by the end of the decade.
United Arab Emirates
In 2021, in collaboration with Expo 2020 Dubai, a pilot project was launched which is the first "industrial scale", solar-driven green hydrogen facility in the Middle East and North Africa.
United Kingdom
In August 2017, EMEC, based in Orkney, Scotland, produced hydrogen gas using electricity generated from tidal energy in Orkney. This was the first time that hydrogen had been created from tidal energy anywhere in the world.
In March 2021, a proposal emerged to use offshore wind in Scotland to power converted oil and gas rigs into a "green hydrogen hub" which would supply fuel to local distilleries.
In June 2021, Equinor announced plans to triple UK hydrogen production. In March 2022, National Grid announced a project to introduce green hydrogen into the grid, with a 200 m wind turbine powering an electrolyser to produce gas for about 300 homes.
In December 2023, the UK government announced that a £2 billion fund would be set up to back 11 separate projects. The then Energy Secretary, Claire Coutinho, announced the funding would be invested over a 15-year period. The first allocation round would be known as HAR1. Vattenfall planned to generate green hydrogen from a test offshore wind turbine near Aberdeen in 2025.
United States
The federal Infrastructure Investment and Jobs Act, which became law in November 2021, allocated $9.5 billion to green hydrogen initiatives. In 2021, the U.S. Department of Energy (DOE) was planning the first demonstration of a hydrogen network in Texas. The department had previously attempted a hydrogen project known as Hydrogen Energy California. Texas is considered a key part of green hydrogen projects in the country as the state is the largest domestic producer of hydrogen and has a hydrogen pipeline network. In 2020, SGH2 Energy Global announced plans to use plastic and paper via plasma gasification to produce green hydrogen near Los Angeles.
In 2021 then New York governor Andrew Cuomo announced a $290 million investment to construct a green hydrogen fuel production facility. State authorities backed plans for developing fuel cells to be used in trucks and research on blending hydrogen into the gas grid. In March 2022 the governors of Arkansas, Louisiana, and Oklahoma announced the creation of a hydrogen energy hub between the states. Woodside announced plans for a green hydrogen production site in Ardmore, Oklahoma. The Inflation Reduction Act of 2022 established a 10-year production tax credit, which includes a $3.00/kg subsidy for green hydrogen.
Public-private projects
In October 2023, Siemens announced that it had successfully performed the first test of an industrial turbine powered by 100 per cent green hydrogen generated by a 1 megawatt electrolyser. The turbine also operates on gas and any mixture of gas and hydrogen.
Government support
In 2020, the European Commission adopted a dedicated strategy on hydrogen. The "European Green Hydrogen Acceleration Center" is tasked with developing a €100 billion a year green hydrogen economy by 2025.
In December 2020, the United Nations together with RMI and several companies, launched Green Hydrogen Catapult, with a goal to reduce the cost of green hydrogen below US$2 per kilogram (equivalent to $50 per megawatt hour) by 2026.
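The stated equivalence can be checked against hydrogen's energy content per kilogram, using the roughly 39.6 kWh/kg figure cited earlier in this article (an illustrative conversion, not a claim from the initiative itself):

$$\frac{\$2\ \mathrm{per\ kg\ H_2}}{39.6\ \mathrm{kWh\ per\ kg}} \approx \$0.05\ \mathrm{per\ kWh} \approx \$50\ \mathrm{per\ MWh}.$$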
In 2021, with the support of the governments of Austria, China, Germany, and Italy, UN Industrial Development Organization (UNIDO) launched its Global Programme for Hydrogen in Industry. Its goal is to accelerate the deployment of GH2 in industry.
In 2021, the British government published its policy document, a "Ten Point Plan for a Green Industrial Revolution," which included investing to create 5 GW of low carbon hydrogen production capacity by 2030. The plan included working with industry to complete the necessary testing that would allow up to 20% blending of hydrogen into the gas distribution grid by 2023. A BEIS consultation in 2022 suggested that grid blending would only have a "limited and temporary" role due to an expected reduction in the use of natural gas.
The Japanese government planned to transform the nation into a "hydrogen society". Energy demand would require the government to import or produce 36 million tons of liquefied hydrogen. At the time, Japan's commercial imports were projected to reach only one-hundredth of this amount by 2030, when commercial use of the fuel was expected to commence. Japan published a preliminary road map that called for hydrogen and related fuels to supply 10% of the power for electricity generation as well as a significant portion of the energy for uses such as shipping and steel manufacture by 2050. Japan created a hydrogen highway consisting of 135 subsidized hydrogen fuel stations and planned to construct 1,000 by the end of the 2020s.
In October 2020, the South Korean government announced its plan to introduce the Clean Hydrogen Energy Portfolio Standards (CHPS) which emphasizes the use of clean hydrogen. During the introduction of the Hydrogen Energy Portfolio Standard (HPS), it was voted on by the 2nd Hydrogen Economy Committee. In March 2021, the 3rd Hydrogen Economy Committee was held to pass a plan to introduce a clean hydrogen certification system based on incentives and obligations for clean hydrogen.
Morocco, Tunisia, Egypt and Namibia have proposed plans to include green hydrogen as a part of their climate change agenda. Namibia is partnering with European countries such as Netherlands and Germany for feasibility studies and funding.
In July 2020, the European Union unveiled the Hydrogen Strategy for a Climate-Neutral Europe. A motion backing this strategy passed the European Parliament in 2021. The plan is divided into three phases. From 2020 to 2024, the program aims to decarbonize existing hydrogen production. From 2024 to 2030, green hydrogen would be integrated into the energy system. From 2030 to 2050, large-scale deployment of hydrogen would occur. Goldman Sachs estimated that hydrogen could account for up to 15% of the EU energy mix by 2050.
Six European Union member states (Germany, Austria, France, the Netherlands, Belgium and Luxembourg) requested that hydrogen funding be backed by legislation. Many member countries have created plans to import hydrogen from other nations, especially from North Africa. These plans would increase hydrogen production, but were criticised as exporting the changes needed within Europe. The European Union required that, starting in 2021, all new gas turbines made in the bloc must be ready to burn a hydrogen–natural gas blend.
In November 2020, Chile's president presented the "National Strategy for Green Hydrogen," stating he wanted Chile to become "the most efficient green hydrogen producer in the world by 2030". The plan includes HyEx, a project to make solar based hydrogen for use in the mining industry.
Regulations and standards
In the European Union, certified 'renewable' hydrogen, defined as produced from non-biological feedstocks, requires an emission reduction of at least 70% below the fossil fuel it is intended to replace. This is distinct in the EU from 'low carbon' hydrogen, which is defined as made using fossil fuel feedstocks. For it to be certified, low carbon hydrogen must achieve at least a 70% reduction in emissions compared with the grey hydrogen it replaces.
In the United Kingdom, just one standard is proposed, for 'low carbon' hydrogen. Its threshold GHG emissions intensity of 20 gCO2-equivalent per megajoule should be easily met by renewably powered electrolysis of water for green hydrogen production, but has been set at a level to allow for and encourage other 'low carbon' hydrogen production, principally blue hydrogen. Blue hydrogen is grey hydrogen with added carbon capture and storage, which to date has not been produced with carbon capture rates in excess of 60%. To meet the UK's threshold, its government has estimated that an 85% carbon capture rate would be necessary.
In the United States, planned tax credit incentives for green hydrogen production are to be tied to the emissions intensity of 'clean' hydrogen produced, with greater levels of support on offer for lower greenhouse gas intensities.
See also
Alternative fuel
Carbon-neutral fuel
Combined cycle hydrogen power plant
Fossil fuel phase-out
Hydrogen economy
Green methanol fuel
References
External links
Green hydrogen explainer video from Scottish Power
Emissions reduction | Green hydrogen | [
"Chemistry"
] | 4,812 | [
"Greenhouse gases",
"Emissions reduction"
] |
62,943,949 | https://en.wikipedia.org/wiki/Zehev%20Tadmor | Zehev Tadmor (born 1937) is a retired Israeli chemical engineer who has served as distinguished professor, president, and chairman of the Technion-Israel Institute of Technology. He is also chairman of the Samuel Neaman Institute for Advanced Studies in Science and Technology, a policy research center. His main research interest is polymer and plastics engineering and processing. He won the Emet Prize in 2005.
Biography
Tadmor received his B.Sc and M.Sc degrees in chemical engineering from the Technion-Israel Institute of Technology, and his doctorate in chemical engineering from the Stevens Institute of Technology in New Jersey.
Tadmor's main research interest is polymer and plastics engineering and processing. He has published three books and 75 papers in the field.
He worked for the Western Electric Company as a Senior Research Engineer, and then joined the Technion Faculty of Chemical Engineering in 1968. In 1975 Tadmor was appointed a Technion full professor, and in 1988 a Distinguished Technion Professor in the Department of Chemical Engineering. From 1984 to 1988 he served as Dean of the Department of Chemical Engineering.
Tadmor served as President of The Technion from 1990 to 1998. He is Chairman of the Technion-Israel Institute of Technology.
He is also Chairman of the Samuel Neaman Institute for Advanced Studies in Science and Technology, a policy research center.
Accolades
Tadmor was elected a member of the US National Academy of Engineering in 1991 for creative research and his influence on the practice of polymer processing. He is also an elected member of the Israel Academy of Sciences.
He was inducted into the Polymer Processing Hall of Fame in 1993, and received the Rotary Prize for "Outstanding Contributions to Higher Education in Israel". Tadmor was awarded an Honorary Doctorate in Industrial Chemistry from the University of Bologna in 1995, and received the Society of Plastics Engineers of the USA "Extrusion Division Distinguished Service Award" and "Outstanding Achievement Award in Plastics Engineering and Technology". He won the Emet Prize in 2005 in Exact Sciences in the field of chemical engineering "for his original and pioneering contribution to the field of polymer processing, transforming it into a new and important engineering discipline, and for his academic leadership as a pre-eminent mentor and researcher in chemical engineering in Israel."
References
Academic staff of Technion – Israel Institute of Technology
Technion – Israel Institute of Technology alumni
Stevens Institute of Technology alumni
Chemical engineering academics
Members of the Israel Academy of Sciences and Humanities
1937 births
Living people
20th-century Israeli engineers
Technion – Israel Institute of Technology presidents
Israeli chemical engineers
Foreign associates of the National Academy of Engineering
EMET Prize recipients in the Exact Sciences | Zehev Tadmor | [
"Chemistry"
] | 540 | [
"Chemical engineering academics",
"Chemical engineers"
] |
62,944,196 | https://en.wikipedia.org/wiki/Ethyl%20acetoxy%20butanoate | Ethyl acetoxy butanoate (EAB) is a volatile chemical compound found as a minor component of the odour profile of ripe pineapples, though in its pure form it has a smell more similar to sour yoghurt. It can be metabolized in humans into GHB, and thus can produce similar sedative effects.
It is synthesised by the reaction of gamma-butyrolactone and ethyl acetate with sodium ethoxide.
See also
1,4-Butanediol (1,4-BD)
1,6-Dioxecane-2,7-dione
Aceburic acid
gamma-Hydroxybutyraldehyde
References
Acetate esters
Butyrate esters
GABAB receptor agonists
GHB receptor agonists
Neurotransmitter precursors
Prodrugs
Sedatives | Ethyl acetoxy butanoate | [
"Chemistry"
] | 188 | [
"Chemicals in medicine",
"Prodrugs"
] |
62,944,905 | https://en.wikipedia.org/wiki/Somfy | Somfy is a French group of companies founded in 1969 and is among the largest manufacturers and suppliers of controllers and drives for entrance gates, garage doors, window blinds and awnings. It also produces other home automation products such as security devices. Somfy is a member of the home automation committees for Matter, Thread and the Connectivity Standards Alliance.
References
Home automation
Proprietary hardware
Home automation companies
Heating, ventilation, and air conditioning companies
French companies established in 1969 | Somfy | [
"Technology"
] | 94 | [
"Home automation",
"Home automation companies"
] |
62,945,037 | https://en.wikipedia.org/wiki/Teacup%20galaxy | The Teacup galaxy, also known as the Teacup AGN or SDSS J1430+1339 is a low redshift type 2 quasar, showing an extended loop of ionized gas resembling a handle of a teacup, which was discovered by volunteers of the Galaxy Zoo project and labeled as a Voorwerpje.
Galaxy
The Teacup galaxy is dominated by a bulge and has an asymmetric structure with a shell-like feature and a tidal tail. The shell and tail are signatures of a recent merger of two galaxies. Dust lanes in the system are interpreted as evidence of a gas-rich merger. Several candidate star clusters were identified in this galaxy with Hubble Space Telescope images. Observations with the Gran Telescopio Canarias showed that the Teacup galaxy has a giant reservoir of ionized gas extending up to 111 kpc. The optical/radio bubbles seem to be expanding across this intergalactic medium.
Active galactic nucleus
Early studies of the Teacup AGN suggested that it was fading, although there was no clear evidence. Observations with VLT/SINFONI showed a blueshifted nuclear outflow with a velocity of 1600–1800 km/s. Observations in X-rays with Swift, XMM-Newton and Chandra revealed a powerful, highly obscured active galactic nucleus. This result suggests that a fading AGN may not be required to explain the observations: the quasar has dimmed by only a factor of 25 or less over the past 100,000 years.
Bubbles
One bubble was discovered by Galaxy Zoo volunteers in SDSS images as a 5 kpc loop of ionized gas. The loop is dominated by emission lines, such as hydrogen alpha and doubly ionized oxygen, which gives the loop seen in SDSS images a purple color. The emission of [O II] is extremely strong in the Teacup AGN and the quasar 3C 48 shows a similar [O II]/Hβ ratio.
Follow-up observations with the Very Large Array showed two 10-12 kpc bubbles, one "eastern bubble", consistent with the loop in optical observations and a "western bubble", only visible in radio wavelengths. The study also found a bright emission towards the north-east of the AGN, which is consistent with high-velocity ionized gas (-740 km/s). The bubbles are either created by small-scale radio jets or by quasar winds. Observations with Chandra revealed a loop in x-ray emission, consistent with the "eastern bubble". The Chandra data also show evidence for hotter gas within the bubble, which may imply that a wind of material is blowing away from the black hole. Such a wind, which was driven by radiation from the quasar, may have created the bubbles found in the Teacup.
The bubbles were observed with VLT/MUSE, showing that the jet strongly perturbs the host interstellar medium (ISM). At the edge of the bubble the researchers found a young (≤100–150 Myr) population of stars, which indicates triggered star formation. Such so-called positive feedback is predicted by theoretical models. Observations with ALMA found that the radio jet is compressing and accelerating molecular gas, driving a lateral outflow perpendicular to the radio jet; this is based on observations of carbon monoxide (CO) gas.
See also
Extended emission-line region
IC 2497
Hanny's Voorwerp
Galaxy Zoo
Zooniverse
List of quasars
References
External links
Hubble spies the Teacup, and I spy Hubble blog post from the Galaxy Zoo website
Voorwerpjes in Space NASA Astronomy Picture of the Day
VLA Finds Unexpected Storm at Galaxy's Core press-release by NRAO
SDSS J1430+1339: Storm Rages in Cosmic Teacup photo album by the website of Chandra
1436754
F14281+1352
Quasars
Boötes | Teacup galaxy | [
"Astronomy"
] | 794 | [
"Boötes",
"Constellations"
] |
62,945,367 | https://en.wikipedia.org/wiki/List%20of%20pusher%20aircraft%20by%20configuration%20and%20date | A pusher aircraft is a type of aircraft using propellers placed behind the engines.
Pushers may be classified according to lifting surfaces layout (conventional or 3 surface, canard, joined wing, tailless and rotorcraft) as well as engine/propeller location and drive. For historical interest, pusher aircraft are also classified by date.
Some aircraft have a Push-pull configuration with both tractor and pusher engines. The list includes these even if the pusher engine is just added to a conventional layout (engines inside the wings or above the wing for example).
Conventional and three surface layouts
The conventional layout of an aircraft has wings ahead of the empennage.
Direct drive
Propeller ahead of tail
Between frames (Farman layout)
Voisin-Farman I 1907, 60 built
AEA June Bug 1908 experimental, 1 built
Cody British Army Aeroplane No 1 1908, 1 built
AEA Silver Dart 1909, first flight in Canada, 1 built
Curtiss No. 1 1909 Golden Flyer biplane, 1 built
Curtiss No. 2 1909 Reims racer biplane, 1 built
Cody Michelin Cup Biplane 1910, 1 built
Bristol Boxkite 1910 trainer, 78 built
Howard Wright 1910 Biplane 1910, 7 built
Royal Aircraft Factory F.E.1 1910 biplane, 1 built
Wright Model B 1910 biplane 2 seater, about 100 built
Cody Circuit of Britain biplane 1911, 1 built
Curtiss Model D 1911 biplane, 1 seat
Curtiss Model E 1911 biplane floatplane, 17+ built
Baldwin Red Devil 1911 aerobatic biplane, 6 built
Farman MF.7 1911 biplane, unk no. built
Cody V biplane 1912, 2 built
Cody VI biplane/floatplane 1913, 1 built
Short S.38 1912, 48 built
Farman HF.20 1913 military biplane, unk no. built
Farman MF.11 1913 biplane, unk no. built
Grahame-White Type X Charabanc 1913 transport, 1 built
Short S.80 Nile Pusher Biplane Seaplane 1913, 1 built
Grahame-White Type XV 1913 trainer, 135 built
Sopwith Bat Boat 1913, 6 built
Between frames or booms (1915 and later)
Breguet Bre.4 1914 2 seat military biplane, about 100 built
Grahame-White Type XI 1914 reconnaissance biplane, 1 built
Short S.81 1914, 1 built
Sopwith Gunbus 1914, 35 built (including floatplanes)
Vickers F.B.5 1914, 224 built
Voisin III 1914 bomber, about 3200 built
Wight Pusher Seaplane 1914, 11 built
Breguet Bre.5 1915 2 seat military biplane, unk no. built
AD Scout 1915 interceptor, 4 built
AGO C.II 1915 reconnaissance biplane, 15 built
Airco DH.1 1915 biplane, 2 seat, 100 built,
Airco DH.2 1915 biplane fighter, 453 built
Avro 508 1915, 1 built
Farman F.30 1915 military biplane, unk no. built
Farman F.40 1915 military biplane, unk no. built
Otto C.I 1915 reconnaissance biplane, unk no. built
Pemberton-Billing P.B.25 1915 scout, 20 built
Royal Aircraft Factory F.E.2 1915 military biplane, 1939 built
Royal Aircraft Factory F.E.8 1915 biplane fighter, 295 built
Voisin IV
Voisin V 1915 bomber, about 350 built
Breguet Bre.12 1916 2 seat military biplane, unk no. built
Friedrichshafen FF.34 1916 patrol seaplane, 1 built
Häfeli DH-1 1916 reconnaissance biplane, 6 built
Vickers F.B.12 1916 fighter, about 22 built
Voisin VII 1916 reconnaissance biplane, about 100 built
Voisin VIII 1916 bomber, about 1,100 built
Blackburn Triplane 1917 fighter, 1 built
Curtiss Autoplane 1917 (hops only) roadable aircraft, 1 built
Port Victoria P.V.4 1917 floatplane, 1 built.
Royal Aircraft Factory F.E.9 1917 2 seat fighter, 3 built
Royal Aircraft Factory N.E.1 1917 night fighter, 6 built
Savoia-Pomilio SP.3 1917 reconnaissance biplane about 350 built
Vickers F.B.26 Vampire 1917, 4 built
Voisin IX 1917 reconnaissance biplane, 1 built
Voisin X 1917 bomber, about 900 built
Vickers VIM 1920, 35 built
Between booms
Henderson H.S.F.1 1929 transport, 1 built
Campbell Model F 1930 private airplane, 1 built
Hanriot H.110 1933 fighter, 1 built
Stearman-Hammond Y-1 1934 safety airplane about 20 built
de Schelde Scheldemusch 1935 1 seat biplane trainer, 6 built
ITS-8 1936 motorglider monoplane, 2 built
SCAL FB.30 Avion Bassou 1936 2 seat light aircraft, 2 built
Abrams P-1 Explorer 1937, 1 built
SAIMAN LB.2 1937 2 seat monoplane, 1 built
Alliet-Larivière Allar 4, 1938 experimental 2 seat, 1 built
General Aircraft GAL.33 Cagnet 1939 trainer, 1 built
WNF Wn 16 1939, Austrian experimental aircraft
General Aircraft GAL.47 1940 observation, 1 built
de Schelde S.21 1940 fighter mockup (unflown)
Fane F.1/40 1941 observation monoplane, 1 built
Saab 21 1943 fighter, 298 built
Vultee XP-54 1943 fighter, 2 built
Skoda-Kauba V6 1944 1 seat, 1 built
1945 and later
Convair 106 Skycoach 1946 4 seater, one built
Fokker F.25 1946 4 seater, 20 built
SECAN Courlis 1946 transport, unk no. built
Anderson Greenwood AG-14 1947 2 seat experimental, 6 built
Heston JC.6/AOP 1947 2 seat reconnaissance, 2 built
Alaparma Baldo 1949 1 seat, about 35 built
SNCASO SO.8000 Narval 1949 naval fighter, 2 built
Anderson Greenwood AG-14 1950 2 seats, 6 built
SIAI-Marchetti FN.333 Riviera 1952 amphibious 4 seater, 29 built
Potez 75 1953 reconnaissance, 1 built
SIAI-Marchetti FN.333 Riviera 1962 4 seat amphibian, 29 built
Akaflieg Stuttgart FS-26 Moseppl 1970 1 seat powered sailplane, unk no. built
Cessna XMC 1971 research aircraft, 1 built
Akaflieg Stuttgart FS-28 Avispa 1972 2 seat transport, 1 built
Kortenbach & Rauh Kora 1973 Motor glider, 2 built
Lartin Skylark 1973 Utility Prototype
PZL M-17 1973 Trainer Prototype
Edgley Optica 1979 ducted fan observation aircraft 21 built
1980 and later
ADI Condor 1981 2 seat motorglider, unk no. built
Acapella 200 1982 homebuilt, 1 built
Applebay Zia 1982 1 seat ultralight motorglider, 4 built
Sadler Vampire 1982 Ultralight
Spectrum SA-550 1983 Utility prototype, 2 built
RTAF-5 1984 Trainer prototype
Aero Dynamics Sparrow Hawk Mk.II 1984 Experimental 2 seater
NPO Molniya 1993 transport 6 seater, 2 built
Yakovlev Yak-58 1993 Utility, 7 built
HFL Stratos 300 1996 1 seat ultralight motorglider
NPP Aerorik Dingo 1997 multi-role amphibian (air cushion), 6 built
Toucan PJ-1B 1998 experimental 1 seat, one built
Creative Flight Aerocat 2001 Transport Prototype
Airsport Song 2009 ultralight
Northrop Grumman Firebird 2010 Reconnaissance Prototype
Ion Aircraft Ion 2007 prototype 2 seater tandem, 1 built
Terrafugia Transition 2009 roadable airplane 2 seater, 2 built
WLT Sparrow 2010 Ultralight, 13 built
Synergy Aircraft Synergy 2011 Double boxtail demonstrator electric powered 1/4 scale model, in development
AHRLAC Holdings Ahrlac 2014 reconnaissance attack, 1 built
Commuter Craft Innovator 2016 prototype 2 seater, 1 built
ISA 180 Seeker 2019 prototype monoplane
Between booms / UAVs
IAI Scout 1977 UAV drone
AAI RQ-2 Pioneer 1986 UAV drone
EADS Harfang 2008 UAV drone
Between outboard tail booms
Blohm & Voss P208 1944 fighter project
Skoda-Kauba SK SL6 1944 research one seat project
Coaxially in rear fuselage
Royal Aircraft Factory F.E.3/A.E.1 1913 armoured biplane, 1 built
Royal Aircraft Factory F.E.6, 1914, 1 built
Gallaudet D-4 1918 seaplane, 2 built
Vickers Type 161 1931 fighter prototype (with structural frame), 1 built
Austria Krähe 1960 1 seat motorglider, unk no. built
Brditschka HB-3 1971 2 seat motorglider, unk no. built
Rhein Flugzeugbau RW 3 Multoplan 1955 27 built
Rhein Flugzeugbau Sirius I 1969, 2 seats
RFB/Grumman American Fanliner 1973, 2 seats, 2 built
RFB Fantrainer 1977, 2 seats, 47 built
Buselec 2, 2010 project, with electric motor
Otto Celera 500L 2018
Nacelle above fuselage
WW1 or Before
Curtiss Model F 1912 flying boat, 150+ built
Benoist XIV 1913 transport flying boat, 2 built
FBA Type A, B, C 1913 patrol flying boat, unk no. built
Lohner E 1913, about 40 built
Donnet-Denhaut flying boat 1915 patrol flying boat, about 1,085 built
FBA Type H 1915 patrol flying boat, ~2000 built
Grigorovich M-5 1915 patrol flying boat, about 300 built
Lohner L, R and S 1915, 100+ built
AD Flying Boat, Supermarine Channel & Sea Eagle 1916 patrol and airline flying boat, 27 built.
Grigorovich M-9 1916 patrol flying boat, about 500 built
Grigorovich M-11 1916 fighter flying boat, about 60 built
Hansa-Brandenburg CC 1916 flying boat fighter, 73 built
Macchi L.2 1916, reconnaissance flying boat, 17 built
Macchi M.3 1916, reconnaissance flying boat, 200 built
Norman Thompson N.T.4 1916 patrol flying boat, 72 built
Oeffag-Mickl G 1916 trimotor patrol flying boat, 12 built
Curtiss HS 1917 patrol flying boat, about 1,178 built
Grigorovich M-15 1917 patrol flying boat, unk no. built
Macchi M.5 1917, flying boat fighter, 244 built
Norman Thompson N.T.2B 1917 flying boat trainer, 100+ built
Tellier T.3 and Tc.6 1917 patrol flying boat, about 155 built
Hansa-Brandenburg W.20 1918 U-boat flying boat, 3 built
Macchi M.7 1918 flying boat fighter, 100+ built
Macchi M.9 1918 flying boat bomber, 30 built
Macchi M.12 1918 flying boat bomber, about 10 built
Royal Aircraft Factory C.E.1 1918 flying boat, 2 built
SIAI S.9 1918 flying boat, unk no. built
SIAI S.12 1918 flying boat, 1 built
Sperry Land and Sea Triplane 1918 patrol flying boat, 2 built
Supermarine Baby 1918 flying boat fighter, 1 built
1920s
Aeromarine 40 1919 flying boat trainer, 50 built
Aeromarine 50 1919 transport flying boat, unk no. built
Boeing B-1 1919 transport flying boat, 1 built
SIAI S.13 1919 reconnaissance flying boat, unk no. built
SIAI S.16 1919 flying boat, 100+ built
Supermarine Sea Lion I & II 1919 racing flying boats, 2 built
Vickers Viking, Vulture and Vanellus 1919 amphibious flying boats, 34 built.
Vought VE-10 Batboat 1919 navy flying boat, 1 built
Macchi M.18 1920 flying boat, 90+ built
Supermarine Commercial Amphibian 1920, 1 built
Supermarine Scarab 1923, 12 built
Supermarine Seal 1921, 4+ built
Supermarine Seagull 1921, 34 built
CAMS 30 1922 flying boat trainer, 31 built
CAMS 31 1922 flying boat fighter, 2 built
Fokker B.I & III 1922 biplane reconnaissance flying boat, 2 built
SIAI S.51 1922 racing flying boat, 1 built
CAMS 38 1923 racing flying boat, 1 built
FBA 17 1923 flying boat trainer, 300+ built
Savoia-Marchetti S.57 1923 reconnaissance flying boat, 20 built
Supermarine Sea Eagle 1923, 3 built
Canadian Vickers Vedette 1924 forestry patrol flying boat, 60 built
CANT 7 1924 flying boat trainer, 34 built
Ikarus ŠM 1924 flying boat trainer, 42 built
Macchi M.26 1924 flying boat fighter, 2 built
CANT 10 1925 flying boat airliner, 18 built
Rohrbach Ro VII Robbe 1925 flying boat, 3 built
Savoia-Marchetti S.59 1925 reconnaissance flying boat, 240+ built
CAMS 37 1926 reconnaissance flying boat, 332 built
CAMS 46 1926 flying boat trainer, unk. no built
CANT 18 1926 flying boat trainer, 29 built
Savoia-Marchetti S.62 1926 reconnaissance flying boat, 175+ built
CANT 25 1927 flying boat fighter, unk no. built
Canadian Vickers Vista 1927 1 seat monoplane flying boat, 1 built
Boeing Model 204 Thunderbird 1929 flying boat, 7 built
Macchi M.41 1927 flying boat fighter, 42 built
Supermarine Sheldrake 1927, 1 built
Fokker F.11/B.IV 1928 monoplane transport flying boat, 7 built
Rohrbach Ro X Romar 1928 flying boat, 3 built
Savoia-Marchetti S.64 1928 distance record monoplane, 2 built
1930s
FBA 310 1930 amphibious flying boat transport, 9 built
SIAI S.67 1930 flying boat fighter, 3 built
FBA 290 1931, amphibious flying boat trainer, 10 built
Fizir AF-2 1931 amphibious flying boat trainer, 1 built
Amiot 110-S 1931 patrol flying boat, 2 built
Loening XSL 1931 submarine airplane, 1 built
Beriev MBR-2 1931 flying boat, 1365 built
Savoia-Marchetti S.66 1931 airliner flying boat, 24 built
Tupolev MDR-2 1931 flying boat, 1 built
Aichi AB-4 1932 flying boat, 6 built
Boeing-Canada A-213 Totem 1932 flying boat, 1 built
Dornier Do 12 1932 amphibian, 4 seats, 1 built
Savoia-Marchetti SM.78 1932 patrol flying boat, 49 built
General Aviation PJ 1933 monoplane flying boat, 5 built
Loire 50 1933 training amphibian, 7 built
Savoia-Marchetti SM.80bis 1933 transport amphibian, 1+ built
Supermarine Seagull/Walrus 1933 military flying boat, 740 built
Aichi E10A 1934 reconnaissance flying boat, 15 built
Loire 130 1934 reconnaissance flying boat, 125 built
Beriev MBR-2 1935 flying boat, 1365 built
Curtiss-Wright CA-1 1935 amphibious flying boat, 3 built
Dornier Do 18 1935 monoplane flying boat, 170 built
Aichi E11A 1937 reconnaissance flying boat, 17 built
Kawanishi E11K 1937 monoplane flying boat, 2 built
SNCAO 30 1938 flying boat trainer, 2 built
Nikol A-2 1939 amphibious flying boat trainer, 1 built
Post War II
SCAN 20 1945 flying boat trainer, 24 built
Volmer VJ-21 Jaybird 1947 2 seat light aircraft, unk no. built
Volmer VJ-22 Sportsman 1958 2 seat homebuilt amphibian, (not all are pushers), 100+ built
Lake Buccaneer 1959 amphibian, 4 seats, 1000+ built
Aerosport Woody Pusher 1967 tandem 2 seater, parasol wing, 27 built
Taylor Coot 1969 2 seat homebuilt amphibian, 70 built
Aerosport Rail 1970 single seat ultralight, twin engine, 1 built
Osprey Osprey 2 1973 2 seat homebuilt, unk no. built
3I Sky Arrow (now marketed by Magnaghi Aeronautica) 1982 maiden flight, ULM/LSA/GA tandem two-seater high wing, some 50 built
RFB X-114 1977 ground-effect craft prototype 6/7 seat, 1 built
Freedom Master FM-2 flying boat homebuilt prototype 2 seat, 1 built
3I Sky Arrow (Magnaghi Aeronautica) 1982, ULM/LSA/GA 2 seater tandem, about 50 built
Tisserand Hydroplum and SMAN Pétrel 1983 homebuilt amphibian, about 63 built
Microleve Corsario 1988 ultralight amphibious homebuilt, unk no. built
Creative Flight Aerocat 2001 amphibious 4 seater prototype, 1 built
Airmax Sea Max 2005 2 seat biplane amphibian, unk no. built
CZAW Mermaid 2005 2 seat amphibious biplane, unk no. built
Below tail boom
Nelson Dragonfly 1947 motorglider, 7 built
AmEagle American Eaglet 1975 ultralight motorglider, 12 built
Jean St-Germain Raz-Mut 1976 1 seat ultralight, 7 built
Alpaero Sirius 1984 1 seat UL motorglider, 20 built
Taylor Tandem, unk no. built
Between up and down tail booms
Raab Krähe 1958 motorglider 1 seat, 30 built
Brditschka HB-3, HB-21, HB-23 1971- 1982 motorgliders 2 seater
HB Flugtechnik HB-204 Tornado 2013 prototype 2 seater
Above tailboom
Loening Model 23 Air Yacht 1921 transport flying boat, 16 built
Koolhoven F.K.30 Toerist 1927 2 seat monoplane, 1 built
Curtiss-Wright Junior 1930 2 seat ultralight, 270 built
Curtiss-Wright CW-3 Duckling 1931 ultralight amphibious flying boat, 3 built
British Aircraft Company Drone 1932 1 seat ultralight, 33 built
Siebel Si 201 1938 reconnaissance 2 built
Republic RC-3 Seabee 1945 4 seat amphibian, 1,060 built
Fokker F.25 Promotor 1946 transport, 20 built
Aerauto PL.5C 1949 1949 roadable aircraft, 1 built
Janowski Don Kichot/J-1 1970 1 seat homebuilt, unk no. built
Spencer Air Car 1970 4 seat homebuilt amphibian, 51 built
SZD-45 Ogar 1973 2 seat motorglider, 65 built
Neukom AN-20 1978 motorglider experimental 1 seat
Taylor Bird 1980 2 seat homebuilt, unk no. built
Strojnik S-2 1980 motorglider 1 seater, 8+ built.
Aérostructure Lutin 80 1983 1 seat ultralight motorglider, 2 built
Birdman Chinook 1982 ultralight homebuilt, 1100+ built
Alpha J-5 Marco 1983 1 seat ultralight motorglider, unk no. built
Quad City Challenger 1983 2 seat ultralight, 3,000+ built
Spectrum Beaver 1983 ultralight homebuilt, 2080+ built
Funk Fk6 1985 1 seat ultralight motorglider, unk no. built
Advanced Aeromarine Buccaneer 1988 2 seat amphibious biplane, unk no. built
D-8 Moby Dick 1988 2 seater, 37 built
Seabird Seeker 1989 observation aircraft 2 seater, 31 built
Technoflug Piccolo 1989 1 seat ultralight motorglider, unk no. built
Rans s-12 Airaile 1990 2 seater, 1100+ built
Aeroprakt A-20 Vista 1991 2 seater
Aviasud Engineering Albatros 1991 UL biplane
Partenair Mystere 1996 2 seater, 3 built
AAC SeaStar 1998 2 seat amphibious biplane, 91 built
Alpaero Exel 1998 kit-built motorglider monoplane, 9 built
Sea Storm Z2, 1998 seaplane biplane, 12 built
AAC SeaStar 2002 amphibious 2 seater, 91 built
Ekolot JK 01A Elf 2006 motorglider monoplane
Bagalini Bagaliante circa 2010 motorglider 1 seat
ICON Aircraft A5 2013 2 seat amphibious light sport, in production
Vickers Aircraft Wave 2 seat carbon fiber amphibious light sport aircraft, in final development
Propeller behind the tail
Pénaud Planophore 1871 first aerodynamically stable fixed-wing aeroplane, rubber powered model, 1 built
Convair 111 Air Car 1945 roadable airplane, 1 built
Prescott Pusher 1985 4 seat homebuilt, about 30 built
Air Quest Nova 21 1992 2 seat homebuilt, unk no. built
Eviation Alice 2019 transport electric plane prototype in development 10/11 seats
Lateral behind wing
Curtiss H-1 America 1914 transatlantic biplane, 2 built
Friedrichshafen G.I 1915 bomber, 1 built
LFG Roland G.I 1915 bomber, 1 built
Rumpler G.I, II and III 1915 bomber c.220 built
Schutte-Lanz G.I 1915 bomber 1 built (behind wing)
Airco DH.3 1916 bomber, 2 built
Avro 523 Pike 1916 bomber, 2 built
Friedrichshafen G.II 1916 bomber, 35 built
Gotha G.II 1916 bomber, 11 built
Gotha G.III 1916 bomber, 25 built
Gotha G.IV 1916 bomber, 230 built
Royal Aircraft Factory F.E.4 1916 bomber, 2 built
Friedrichshafen G.III 1917 bomber, 338 built
Gotha G.V 1917 bomber, 205 built
Boeing GA-1 1920 bomber 10 built
Udet U 11 Kondor 1926 airliner, 1 built
1930 and later
Praga E-210 and E-211 1936 transport, 2 built
Bell YFM-1 Airacuda 1937 interceptor, 13 built
Convair B-36 Peacemaker 1946 bomber, 384 built
Baumann Brigadier 1947 transport, 2 built
Nord 2100 Norazur 1947 transport, 1 built
Monsted-Vincent MV-1 Starflight 1948 airliner, 1 built
Piaggio P.136 1948 amphibious transport, 63 built
Dinfia IA 45 Querandi 1957 5/6 seater, 2 built
Piaggio P.166 1957 transport, 145 built
AAC Angel, 1984 transport, 4 built
Piaggio P.180 Avanti 1986 executive transport, 216+ built
McDonnell Douglas MD-80 1987 airliner, experimental propfan
EM-11 Orka 2003 4 seat transport, 5 built
Burevestnik-24 2004 ground-effect aircraft 24 seats, 6 built
OMA SUD Skycar 2007 transport, 1 built
Aeroprakt A-36 Vulcan 2011 2 seater
Lateral nacelles
Custer Channel Wing 1942 experimental aircraft, 4 built
Embraer/FMA CBA 123 Vector 1990 airliner, 2 built
NAL Saras 2004 airliner, 2 built
Remote drive
Propeller ahead of tail
Within airframe
Megone biplane 1913 2 seat, 1 built
Fischer Fibo-2a 1954 1 seat motorglider, 1 built
Rhein Flugzeugbau RW 3 Multoplan 1955 RFB Fantrainer prototype, 27 built
Kuffner WK-1 1970 motorglider 1 seat, 1 built
Rhein-Flugzeugbau Sirius II 1972 2 seat motorglider, unk no. built
Neukom AN-20C 1983 1 seat ultralight homebuilt motorglider, 1 built
PJ-II Dreamer 2016 jet fighter style 2 seater, 1 built
Behind wing
Burgess model I 1913 patrol floatplane, 1 built
Mann & Grimmer M.1 1915, 1 built
Carden-Baynes Bee 1937 2 seat tourer, 1 built
Raab Krähe 1958 1 seat motorglider, 30 built
Eipper Quicksilver 1974 1 seat ultralight
Theseus Aircraft 1996 NASA research aircraft, no pilot, 1 built
Inside tail
Bede XBD-2/BD-3 1961 ducted fan boundary layer control aircraft, 1 built
Mississippi State University XAZ-1 Marvelette 1962 experimental aircraft to test ideas XV-11 Marvel, 1 built
Mississippi State University XV-11 Marvel 1965 boundary layer control test aircraft, 1 built
Behind tail
Antoinette I, 1906, 2 seats experimental, project
Paulhan-Tatin Aéro-Torpille No.1 1911 monoplane, 1 built
Kasyanenko No. 5 1917 experimental biplane, 1 built
Göppingen Gö 9 1941 experimental propulsion aircraft, 1 built
Dornier Do 212 1942 experimental amphibian, 1 built
Douglas XB-42 Mixmaster 1944, bomber, 2 built
Douglas DC-8 (piston airliner) 1945, transport project, not built
Lockheed Big Dipper 1945 transport, 1 built
Douglas Cloudster II 1947 transport, 1 built
Waco Aristocraft 1947 transport, 1 built
Acme Sierra 1948 1 seat experimental, 1 built
Allenbaugh Grey Ghost, 1948 1 seat experimental, 1 built
Parks Alumni Racer, 1949 1 seat experimental, 1 built
Planet Satellite 1949 4 seat transport, 1 built
Taylor Aerocar 1949 2 seat roadable aircraft, 6 built
Pützer Bussard SR-57 1958 experimental 2 seater, 90 hp, 1 built
1960 and later
HMPAC Puffin 1961 human powered aircraft, 2 built
Lesher Nomad 1961 experimental 2 seater homebuilt, one built
Aerocar Aero-Plane 1964 four seater 1 built
Lesher Teal 1965 experimental 1 seat homebuilt, one built
HPA Toucan 1972 human powered aircraft, 1 built
Ryson STP-1 Swallow 1972 2 seat homebuilt motorglider, 1 built
Bede BD-5 1973 1 seat homebuilt, about 150 built
Aerocar Mini-IMP 1974 1 seat homebuilt, 250+ built
AmEagle American Eaglet 1975 1 seat self-launching ultralight sailplane, 12 built
Landray GL.02 1978 tandem layout (Pou du Ciel) 1 seat, 1 built
Grinvalds Orion 1981 4 seat homebuilt, about 17 built
LearAvia Lear Fan 1981 transport, 3 built
Cirrus VK-30 1988 5 seat homebuilt, about 13 built
Miller JM-2 and Pushy Galore 1989 racer, 3 built
SolarFlight Sunseeker I 1990 solar aircraft 1 seater, 1 built
Grob GF 200 1991 transport, 1 built
Myasishchev Mayal 1992 multi-purpose amphibian, 1 built
NASA Perseus 1994 research aircraft, 1 built
Vmax Probe 1997 homebuilt racer, 1 built
Ameur Aviation Balbuzard/Baljims/Altania 1995 2 seater prototypes, 5 built
Bede BD-12 1998 2 seat homebuilt, 1 built
Aceair AERIKS 200 2002 2 seat kitplane, 1 built
Chudzik CC-02 Rafale 2007 prototype three-surface 2 seater tandem, 1 built
LH Aviation LH-10 Ellipse 2007 2 seat homebuilt, 3 built
Propeller above fuselage or wing
Schleicher ASH 26 1995 1 seat glider with retractable propeller, 234 built
Airfish-3 WIG 1990 Wing In Ground Effect demonstrator one seat, 1 built
Airfish-8 WIG 2007 Wing In Ground Effect transport prototype 8/10 seats, 2 built
Canard and tandem layouts
A canard is an aircraft with a smaller wing ahead of the main wing. A tandem layout has both front and rear wings of similar dimensions.
Direct drive
Santos-Dumont 14-bis 1906 first public controlled sustained flight, 1 built
Fabre Hydravion 1910, first successful floatplane, 1 built
Paulhan biplane 1910, 3 built
Voisin Canard 1911 biplane, 10+ built
Gee Bee Model Q 1931 experimental, 1 built
Ambrosini SS.2 & 3 1935 experimental aircraft, 2 built
Ambrosini SS.4 1939 prototype fighter, 1 built
Curtiss-Wright XP-55 Ascender 1943 prototype fighter, 3 built
Miles M.35 Libellula 1942, experimental tandem wing carrier-based fighter, 1 built
Miles M.39B Libellula 1943, experimental (5/8 scale) tandem wing carrier-based bomber, 1 built
Skoda-Kauba V7 1944 1 seat, project
1945 and later
In this section, Rutan pushers account for more than 1,000 aircraft built.
Mikoyan-Gurevich MiG-8 Utka 1945 swept wing demonstrator prototype, 1 built
Lockspeiser LDA-01 1971 experimental scale development aircraft, 1 built
Rutan VariViggen 1972 homebuilt, about 20 built
Rutan VariEze 1975 2 seat homebuilt, about 400 built
Rutan Long-EZ 1979 2 seat homebuilt, about 800 built
Diehl Aeronautical XTC Hydrolight 1981 amphibian UL 1 seat
OMAC Laser 300 1981, transport, 3 built
Cozy III 1982 3 seater amateur built
Avtek 400 1984 transport, 1 built
Cozy Mk IV 1988 four seater amateur built, ~ 350 built
Beechcraft Starship 1989 airliner, 53 built
Berkut 360 1988 2 seater tandem, 31 built
AASI Jetcruzer 1989 transport, 3 built
Velocity SE 1995 4 seater, ~ 268 built
Steve Wright Stagger-Ez 2003 modified Cozy homebuilt, 1 built
RMT Bateleur 115 T 2007 2 seater
E-Go Aeroplanes e-Go 2013 ultralight and light-sport aircraft, 1 built
Cobalt Co50 Walkyrie 2015 prototype 4 seater, 1 built
Remote engine mounting
Langley Aerodrome Number 5 1896 experimental model
Wright Flyer 1903 experimental airplane, first recognized powered, sustained flight, 1 built
Wright Model A 1906 biplane, about 60 built
Deperdussin-de Feure model 2, 1910, experimental, 1 built
De Bruyere C1 1917 fighter prototype 1 seater, 1 built
Kyūshū J7W, prototype fighter, 1 seat, 2130 hp, 1945, 2 built
AeroVironment Gossamer Condor 1977 human powered aircraft won Kremer prize, 1 built
AeroVironment Gossamer Albatross 1979 human powered aircraft, 2 built
Dickey E-Racer 1986 homebuilt, unk no. built
British Aerospace P.1233-1 Saba 1988 anti-helicopter and close air support attack aircraft, project
Joined wings
A tandem (or three-surface) configuration whose wingtips are joined is a Closed wing.
Ben Brown SC 1932, experimental joined wing, 1 built
Ligeti Stratos 1985 1 seat homebuilt, 2 built
Airkraft Sunny 1989 2 seater, 250 built
Tailless aircraft, Flying wings
Tailless aircraft
Tailless aircraft lack a horizontal stabilizer.
Dunne D.4 1908, 1 built
Dunne D.5 1910, 1 built
Dunne D.6 & D.7 1911 monoplane, 2 built
Dunne D.8 1912, 5 built
Westland-Hill Pterodactyl series 1928, several built
Lippisch Delta 1 1931, experimental tailless monoplane, 1 built
Waterman Whatsit 1932 roadable aircraft, 1 built
Waterman Arrowplane 1935 roadable aircraft, 1 built
Waterman Arrowbile 1937 roadable aircraft, 5 built
Kayaba Ku-4 1941 (not flown) research aircraft, 1 built
Handley Page Manx 1943 experimental tailless aircraft, 1 built
Northrop XP-56 Black Bullet 1943 tailless fighter, 2 built
Sud-Est SE-2100, prototype tourer, 2 seats, 140 hp, 1945
M.L. Aviation Utility 1953 inflatable wing, 4 built
DINFIA IA 38 1960 transport, 1 built
Fauvel AV.45 1960 1 seat motor glider, unk. no. built
Rohr 2-175 1974 2 seat roadable aircraft, 1 built
Cascade Kasperwing I-80 1976 UL 1 seater
Pterodactyl Ascender 1979 1 seat ultralight, 1396 built
Mitchell U2 Superwing 1980 1 seat ultralight
Facet Opal, 1988, 1 seat, experimental flying wing, 1 built
Wingco Atlantica 2002 Blended wing-body demonstrator 5 seats, 1 built
Aériane Swift Light PAS 2007 monoplane
Horten Aircraft HX-2 2019 2 seat prototype
Tailless, fabric wing, no fuselage
Ultralight trike or Flexwing
Paramotor or Powered paraglider
Powered parachute
Flying wings
Flying wings lack a distinct fuselage, with crew, engines, and payload contained within the wing structure.
Horten V 1938 powered testbed, 3 built
Northrop N-1M 1940 experimental flying wing, 1 built
Northrop N-9M 1942 experimental flying wing, 4 built
Horten H.VII 1944 2 seat prototype
Northrop B-35 1946 bomber, 4 built
Davis Flying Wing 1987
Horten PUL-10 1992 2 seater
Push-pull aircraft
Sides of fuselage
Zeppelin-Staaken R.V 1917 bomber, 3 built
Bristol Braemar 1918 bomber, 2 built
Handley Page V/1500 1918 bomber, 63 built
Farman F.121 Jabiru 1923 airliner, 9 built
Dornier Do K 1929 airliner, 3 built
Fokker F.32 1929 airliner, 7 built
Farman F.220 1932 airliner and bomber, about 80 built
Above fuselage
Felixstowe Porte Baby 1915 patrol flying boat, 11 built
Curtiss NC 1918 patrol flying boat, 10 built
Johns Multiplane 1919 bomber, 1 built
Bristol Pullman 1920 airliner, 1 built
Naval Aircraft Factory TF 1920 fighter flying boat, 4 built
SIAI S.22 1921 racing flying boat, 1 built
Dornier Wal 1922 flying boat, about 300 built
CAMS 33 1923 patrol flying boat, 21 built
Macchi M.24 1924 flying boat, unk. no. built
Savoia-Marchetti S.55 1924 flying boat, 243+ built
Boeing XPB 1925 patrol flying boat, 1 built
Caproni Ca.73 1925 bomber unk. no. built
NVI F.K.33 1925 airliner, 1 built
CAMS 51 1926 flying boat, 3 built
Dornier Do R Superwal 1926 airliner flying boat, 19 built
Kawasaki Ka 87 1926 bomber, 28 built
Latécoère 21 1926 airliner flying boat, 7 built
Latécoère 23 1927 transport flying boat, 1 built
Latécoère 24 1927 mailplane flying boat, 1 built
Farman F.180 1927 airliner, 3 built
Savoia-Marchetti S.63 1927 flying boat, 1 built
CAMS 53 1928 transport flying boat, 30 built
CAMS 55 1928 patrol flying boat, 112 built
Latécoère 32 1928 mailplane flying boat, 8 built
Latham 47 1928 patrol flying boat, 16 built
Dornier X 1929 airliner flying boat, 3 built
Comte AC-3 1930 bomber, 1 built
Dornier Do P 1930 bomber, 1 built
Dornier Do S 1930 flying boat, 1 built
Hinkler Ibis 1930 2 seat monoplane, 1 built
Latécoère 340 1930 airliner flying boat, 1 built
Latécoère 380 1930 flying boat, 5 built
Blériot 125 1931 airliner, 1 built
Bratu 220 1932 airliner, 1 built
Latécoère 500 1932 transport flying boat, 2 built
Caproni Ca.90 1929 bomber, 1 built
Sikorsky XP2S 1932 patrol flying boat, 1 built
CAMS 58 1933 airliner flying boat, 4 built
Lioré et Olivier LeO H-27 1933 mailplane flying boat 1 built
Loire 70 1933 patrol flying boat, 8 built
Tupolev ANT-16 1933 bomber 1 built
Tupolev ANT-20 1934 transport, 2 built
Tupolev MTB-1 1934 patrol flying boat, 25 built
Dornier Do 18 1935 patrol flying boat, 170 built
Bartini DAR 1936 patrol flying boat, 1 built
Chyetverikov ARK-3 1936 flying boat, 7 built
Dornier Do 26 1939 push-pull flying boat, 6 built
Dornier Seastar 1984 push-pull amphibious 12 seats, 2 built
Extremities
Caproni Ca.60 1921 airliner flying boat, 1 built
Dornier Do 335 1943 push-pull fighter, 38 built
Moynet Jupiter 1963 push-pull transport, 2 built
Aero Design DG-1 1977 push-pull racer, 1 built
Rutan Defiant 1978 transport, 19+ built
Rutan Voyager 1984 endurance record aircraft, 1 built*
Star Kraft SK-700 1994 push-pull transport,
Aeronix Airelle 2002 tandem wing 2 seater, 5 built
On nose and between booms
Siemens-Schuckert DDr.I 1917 fighter, 1 built
Thomas-Morse MB-4 1920 mailplane, 2+ built
Bellanca TES 1929, distance record aircraft, 1 built
Savoia-Marchetti S.65 1929 racing floatplane 1 built
Tupolev I-12 1931 Fighter prototype
Fokker D.XXIII 1939 fighter, 1 built
Moskalyev SAM-13 1940 (unflown) push-pull fighter, 0 built
Marton X/V (RMI-8) 1944 (unflown) fighter, 1 destroyed before completion
Cessna Skymaster 1963 push-pull transport, 2993 built
Canaero Toucan 1986 ultralight, 16+ built
Schweizer RU-38 Twin Condor 1995 push-pull reconnaissance aircraft, 5 built
Adam A500 2002 push-pull transport, 7 built
On wings and between booms
Caproni Ca.1 1914 bomber, 162 built
Caproni Ca.2 1915 bomber, 9 built
AD Seaplane Type 1000 1916 bomber, 1 built
Anatra DE 1916 bomber, 1 built
Caproni Ca.3 1916 bomber, about 300 built
Caproni Ca.4 1917 triplane bomber, 44-53 built
Caproni Ca.5 1917 bomber, 662 built
Gotha G.VI 1918 bomber, 2 built
Grahame-White Ganymede 1919 bomber/airliner, 1 built
Rotorcraft
Bensen autogyros 1953
Fairey Jet Gyrodyne 1954, experimental gyrodyne
McDonnell XV-1 1954, experimental compound helicopter, 550 hp
Avian Gyroplane 1960, 2 seats, about 6 built
Wallis autogyros 1961
CarterCopter / Carter PAV 1998
Sikorsky X2 2008, experimental compound helicopter
Sikorsky S-97 Raider 2015, experimental compound helicopter
See also
List of pusher aircraft by configuration - in alphabetical order
Pusher configuration
Push-pull configuration
Tractor configuration
Bibliography
Extension-Shaft Pusher Type Aircraft, Sport aviation
References
Notes
Citations
Bibliography
Aircraft configurations | List of pusher aircraft by configuration and date | [
"Engineering"
] | 7,768 | [
"Aircraft configurations",
"Aerospace engineering"
] |
62,946,131 | https://en.wikipedia.org/wiki/UK%20Battery%20Industrialisation%20Centre | The UK Battery Industrialisation Centre (UK BIC) is a research centre in the United Kingdom that develops new batteries for the British automotive industry. UKBIC provides over £60 million worth of specialized manufacturing equipment, supporting manufacturers, entrepreneurs, researchers, and educators in battery technology development. It has accelerated low-carbon R&D, contributing to the UK's goal of reaching net zero emissions by 2050.
History
Funding for the UK Battery Industrialisation Centre (UKBIC) is supplied by UK Research and Innovation (UKRI); this financial support was announced on 29 November 2017. The facility was officially opened by the British Prime Minister, Boris Johnson, in July 2021, according to UKBIC's official website.
Location
The UKBIC facility is located outside Coventry, adjacent to Coventry airport and about half a mile east of the junction between the A46 and A45. This is just outside the city boundary, in the extreme north of Warwick District, Warwickshire.
References
External links
2020 establishments in England
Automotive industry in the United Kingdom
Engineering research institutes
Research institutes in Warwickshire
Warwick District | UK Battery Industrialisation Centre | [
"Engineering"
] | 219 | [
"Engineering research institutes"
] |
62,946,258 | https://en.wikipedia.org/wiki/Alfred%20Makower | Alfred Jacques Makower (9 May 1876 in London – 1 February 1941) was an electrical engineer and community activist. He was head of the Electrical Engineering Department of South-Western Polytechnic.
Alfred was the son of a German silk merchant. He attended University College School from 1884, the University College itself in 1894, then Trinity College, Cambridge, in 1895. Here he took the Mathematical Tripos, before moving on to the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), in 1898. Then in 1900 he was given a job by Union-Elektricitäts-Gesellschaft (UEG), a subsidiary of the Thomson-Houston Electric Company. He then returned to England to work for the British Thomson-Houston Company in 1902. In 1904 he was appointed head of the Electrical Engineering Department of South-Western Polytechnic. In 1913 he became a founding director of Mossay and Co., a company established by Paul Mossay, along with A. Berkeley and Alfred Mays-Smith.
Alfred was chair of the Professional Committee of the German Jewish Aid Committee, in which capacity he helped several German engineer refugees with financial support and help in finding employment amongst his contacts in the engineering sector. He was vice-president of the Jewish Board of Guardians whose General Relief Committee he also chaired.
He had a son, Ernest S. Makower.
References
1876 births
1941 deaths
Electrical engineers
Technische Universität Berlin alumni
People educated at University College School | Alfred Makower | [
"Engineering"
] | 301 | [
"Electrical engineering",
"Electrical engineers"
] |
62,947,198 | https://en.wikipedia.org/wiki/Blood%20compatibility%20testing | Blood compatibility testing is conducted in a medical laboratory to identify potential incompatibilities between blood group systems in blood transfusion. It is also used to diagnose and prevent some complications of pregnancy that can occur when the baby has a different blood group from the mother. Blood compatibility testing includes blood typing, which detects the antigens on red blood cells that determine a person's blood type; testing for unexpected antibodies against blood group antigens (antibody screening and identification); and, in the case of blood transfusions, mixing the recipient's plasma with the donor's red blood cells to detect incompatibilities (crossmatching). Routine blood typing involves determining the ABO and RhD (Rh factor) type, and involves both identification of ABO antigens on red blood cells (forward grouping) and identification of ABO antibodies in the plasma (reverse grouping). Other blood group antigens may be tested for in specific clinical situations.
Blood compatibility testing makes use of reactions between blood group antigens and antibodies—specifically the ability of antibodies to cause red blood cells to clump together when they bind to antigens on the cell surface, a phenomenon called agglutination. Techniques that rely on antigen-antibody reactions are termed serologic methods, and several such methods are available, ranging from manual testing using test tubes or slides to fully automated systems. Blood types can also be determined through genetic testing, which is used when conditions that interfere with serologic testing are present or when a high degree of accuracy in antigen identification is required.
Several conditions can cause false or inconclusive results in blood compatibility testing. When these issues affect ABO typing, they are called ABO discrepancies. ABO discrepancies must be investigated and resolved before the person's blood type is reported. Other sources of error include the "weak D" phenomenon, in which people who are positive for the RhD antigen show weak or negative reactions when tested for RhD, and the presence of immunoglobulin G antibodies on red blood cells, which can interfere with antibody screening, crossmatching, and typing for some blood group antigens.
Medical uses
Blood compatibility testing is routinely performed before a blood transfusion. The full compatibility testing process involves ABO and RhD (Rh factor) typing; screening for antibodies against other blood group systems; and crossmatching, which involves testing the recipient's blood plasma against the donor's red blood cells as a final check for incompatibility. If an unexpected blood group antibody is detected, further testing is warranted to identify the antibody and ensure that the donor blood is negative for the relevant antigen. Serologic crossmatching may be omitted if the recipient's antibody screen is negative, there is no history of clinically significant antibodies, and their ABO/Rh type has been confirmed against historical records or against a second blood sample; and in emergencies, blood may be transfused before any compatibility testing results are available.
Blood compatibility testing is often performed on pregnant women and on the cord blood from newborn babies, because incompatibility puts the baby at risk for developing hemolytic disease of the newborn. It is also used before hematopoietic stem cell transplantation, because blood group incompatibility can be responsible for some cases of acute graft-versus-host disease.
Principles
Blood types are defined according to the presence or absence of specific antigens on the surface of red blood cells. The most important of these in medicine are the ABO and RhD antigens but many other blood group systems exist and may be clinically relevant in some situations. As of 2021, 43 blood groups are officially recognized.
People who lack certain blood group antigens on their red cells can form antibodies against these antigens. For example, a person with type A blood will produce antibodies against the B antigen. The ABO blood group antibodies are naturally occurring, meaning that they are found in people who have not been exposed to incompatible blood. Antibodies to most other blood group antigens, including RhD, develop after people are exposed to the antigens through transfusion or pregnancy. Some of these antibodies can bind to incompatible red blood cells and cause them to be destroyed, resulting in transfusion reactions and other complications.
Serologic methods for blood compatibility testing make use of these antibody-antigen reactions. In blood typing, reagents containing blood group antibodies, called antisera, are added to suspensions of blood cells. If the relevant antigen is present, the antibodies in the reagent will cause the red blood cells to agglutinate (clump together), which can be identified visually. In antibody screening, the individual's plasma is tested against a set of red blood cells with known antigen profiles; if the plasma agglutinates one of the red blood cells in the panel, this indicates that the individual has an antibody against one of the antigens present on the cells. In crossmatching, a prospective transfusion recipient's plasma is added to the donor red blood cells and observed for agglutination (or hemolysis) to detect antibodies that could cause transfusion reactions.
Blood group antibodies occur in two major forms: immunoglobulin M (IgM) and immunoglobulin G (IgG). Antibodies that are predominantly IgM, such as the ABO antibodies, typically cause immediate agglutination of red blood cells at room temperature. Therefore, a person's ABO blood type can be determined by simply adding the red blood cells to the reagent and centrifuging or mixing the sample, and in crossmatching, incompatibility between ABO types can be detected immediately after centrifugation. RhD typing also typically uses IgM reagents although anti-RhD usually occurs as IgG in the body. Antibodies that are predominantly IgG, such as those directed towards antigens of the Duffy and Kidd systems, generally do not cause immediate agglutination because the small size of the IgG antibody prevents formation of a lattice structure. Therefore, blood typing using IgG antisera and detection of IgG antibodies requires use of the indirect antiglobulin test to demonstrate IgG bound to red blood cells.
In the indirect antiglobulin test, the mixture of antiserum or plasma and red blood cells is incubated at 37°C, the ideal temperature for reactivity of IgG antibodies. After incubation, the red blood cells are washed with saline to remove unbound antibodies, and anti-human globulin reagent is added. If IgG antibodies have bound to antigens on the cell surface, anti-human globulin will bind to those antibodies, causing the red blood cells to agglutinate after centrifugation. If the reaction is negative, "check cells"—reagent cells coated with IgG—are added to ensure that the test is working correctly. If the test result is indeed negative, the check cells should react with the unbound anti-human globulin and demonstrate agglutination.
Blood typing
ABO and Rh typing
In ABO and Rh typing, reagents containing antibodies against the A, B, and RhD antigens are added to suspensions of blood cells. If the relevant antigen is present, the red blood cells will demonstrate visible agglutination (clumping). In addition to identifying the ABO antigens, which is termed forward grouping, routine ABO blood typing also includes identification of the ABO antibodies in the person's plasma. This is called reverse grouping, and it is done to confirm the ABO blood type. In reverse grouping, the person's plasma is added to type A1 and type B red blood cells. The plasma should agglutinate the cells that express antigens that the person lacks, while failing to agglutinate cells that express the same antigens as the patient. For example, the plasma of someone with type A blood should react with type B red cells, but not with A1 cells. If the expected results do not occur, further testing is required. Agglutination is scored from 1+ to 4+ based on the strength of the reaction. In ABO typing, a score of 3+ or 4+ indicates a positive reaction, while a score of 1+ or 2+ is inconclusive and requires further investigation.
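The decision logic for comparing forward and reverse grouping can be sketched in a few lines of code. The following Python snippet is an illustrative sketch only, not a clinical tool; the reaction patterns it encodes are the standard ABO expectations described above, and the function and variable names are invented for the example.

# Illustrative sketch of ABO interpretation: forward grouping (reagent anti-A and
# anti-B against the patient's cells) must agree with reverse grouping (patient
# plasma against reagent A1 and B cells). True means agglutination was observed.
EXPECTED_PATTERNS = {
    "A":  ((True, False), (False, True)),
    "B":  ((False, True), (True, False)),
    "AB": ((True, True),  (False, False)),
    "O":  ((False, False), (True, True)),
}

def interpret_abo(anti_a, anti_b, a1_cells, b_cells):
    """Return the ABO type if forward and reverse grouping agree, otherwise None."""
    observed = ((anti_a, anti_b), (a1_cells, b_cells))
    for blood_type, expected in EXPECTED_PATTERNS.items():
        if observed == expected:
            return blood_type
    return None  # ABO discrepancy: must be investigated before reporting a type

print(interpret_abo(True, False, False, True))  # "A": forward and reverse agree
print(interpret_abo(True, False, True, True))   # None: e.g. an A2 patient with anti-A1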
Other blood group systems
Prior to receiving a blood transfusion, individuals are screened for the presence of antibodies against antigens of non-ABO blood group systems. Blood group antigens besides ABO and RhD that are significant in transfusion medicine include the RhC/c and E/e antigens and the antigens of the Duffy, Kell, Kidd, and MNS systems. If a clinically significant antibody is identified, the recipient must be transfused with blood that is negative for the corresponding antigen to prevent a transfusion reaction. This requires the donor units to be typed for the relevant antigen. The recipient may also be typed for the antigen to confirm the identity of the antibody, as only individuals who are negative for a blood group antigen should produce antibodies against it.
In Europe, females who require blood transfusions are often typed for the Kell and extended Rh antigens to prevent sensitization to these antigens, which could put them at risk for developing hemolytic disease of the newborn during pregnancy. The American Society of Hematology recommends that people with sickle cell disease have their blood typed for the RhC/c, RhE/e, Kell, Duffy, Kidd, and MNS antigens prior to transfusion, because they often require transfusions and may become sensitized to these antigens if transfused with mismatched blood. Extended red blood cell phenotyping is also recommended for people with beta-thalassemia. Blood group systems other than ABO and Rh have a relatively small risk of complications when blood is mixed, so in emergencies such as major hemorrhage, the urgency of transfusion can exceed the need for compatibility testing against other blood group systems (and potentially Rh as well).
Antibody screening and identification
Antibodies to most blood group antigens besides those of the ABO system develop after exposure to incompatible blood. Such "unexpected" blood group antibodies are only found in 0.8–2% of people; however, recipients of blood transfusions must be screened for these antibodies to prevent transfusion reactions. Antibody screening is also performed as part of prenatal care, because antibodies against RhD and other blood group antigens can cause hemolytic disease of the newborn, and because Rh-negative mothers who have developed an anti-RhD antibody are not eligible to receive Rho(D) immune globulin (Rhogam).
In the antibody screening procedure, an individual's plasma is added to a panel of two or three sets of red blood cells which have been chosen to express most clinically significant blood group antigens. Only group O cells are used in antibody screening, as otherwise the cells would react with the naturally occurring ABO blood group antibodies. The mixture of plasma and red cells is incubated at 37°C and tested via the indirect antiglobulin test. Some antibody screening and identification protocols incorporate a phase of testing after incubation at room temperature, but this is often omitted because most unexpected antibodies that react at room temperature are clinically insignificant.
Agglutination of the screening cells by the plasma, with or without the addition of anti-human globulin, indicates that an unexpected blood group antibody is present. If this occurs, further testing using more cells (usually 10–11) is necessary to identify the antibody. By examining the antigen profiles of the red blood cells the person's plasma reacts with, it is possible to determine the antibody's identity. An "autocontrol", in which the individual's plasma is tested against their own red cells, is included to determine whether the agglutination is due to an alloantibody (an antibody against a foreign antigen), an autoantibody (an antibody against one's own antigens), or another interfering substance.
The image above shows the interpretation of an antibody panel used in serology to detect antibodies towards the most relevant blood group antigens. Each row represents "reference" or "control" red blood cells of donors which have known antigen compositions and are ABO group O. The + symbol means that the antigen is present on the reference red blood cells, and 0 means it is absent; nt means "not tested". The "result" column to the right displays reactivity when mixing reference red blood cells with plasma from the patient in 3 different phases: room temperature, 37°C and AHG (with anti-human globulin, by the indirect antiglobulin test).
Step 1: Annotated in blue: starting to exclude antigens without reaction in all 3 phases; looking at the first reference cell row with no reaction (0 in column at right, in this case cell donor 2), and excluding (here marked by X) each present antigen where the other pair is either practically non-existent (such as for DT) or 0 (presence is homozygous, in this case homozygous c). When both pairs are + (heterozygous cases), they are both excluded (here marked by X), except for C/c, E/e, Duffy, Kidd and MNS antigens (where antibodies of the patient may still react towards blood cells with homozygous antigen expression, because homozygous expression results in a higher dosage of the antigen). Thus, in this case, E/e is not excluded in this row, while K/k is, as well as Jsb (regardless of what Jsa would have shown).
Step 2: Annotated in brown: Going to the next reference cell row with a negative reaction (in this case cell donor 4), and repeating for each antigen type that is not already excluded.
Step 3: Annotated in purple. Repeating the same for each reference cell row with negative reaction.
Step 4: Discounting antigens that were absent in all or almost all reactive cases (here marked with \). These are often antigens with low prevalence, and while there is a possibility of such antibodies being produced, they are generally not the type that is responsible for the reactivity at hand.
Step 5: Comparing the remaining possible antigens for a most likely culprit (in this case Fya), and selectively ruling out significant differential antigens, such as with the shown additional donor cell type that is known to not contain Fya but contains C and Jka.
In this case, the antibody panel shows that anti-Fya antibodies are present. This indicates that donor blood typed to be negative for the Fya antigen must be used. Still, if a subsequent cross-matching shows reactivity, additional testing should be done against previously discounted antigens (in this case potentially E, K, Kpa and/or Lua).
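The stepwise rule-out procedure illustrated above can also be expressed compactly in code. The following Python sketch is a simplified illustration under stated assumptions rather than laboratory software: each panel cell is represented as a pair of (antigen profile, patient reactivity), only the dosage-sensitive antigen pairs named in step 1 are treated as antithetical partners, and all names and data are invented for the example.

# Simplified sketch of the antibody exclusion ("rule-out") logic described above.
# A panel is a list of (profile, reactive) pairs: profile maps antigen name -> bool,
# reactive is True when the patient's plasma reacted with that reference cell.
ANTITHETICAL = {"C": "c", "c": "C", "E": "e", "e": "E",
                "Fya": "Fyb", "Fyb": "Fya", "Jka": "Jkb", "Jkb": "Jka",
                "M": "N", "N": "M", "S": "s", "s": "S"}

def candidate_antibodies(panel):
    antigens = set().union(*(profile.keys() for profile, _ in panel))
    excluded = set()
    for profile, reactive in panel:
        if reactive:
            continue                      # only non-reactive cells can rule antigens out
        for antigen in antigens:
            if not profile.get(antigen):
                continue                  # antigen absent on this cell: nothing to exclude
            partner = ANTITHETICAL.get(antigen)
            if partner is None or not profile.get(partner):
                excluded.add(antigen)     # homozygous expression (or no partner): exclude
    return antigens - excluded            # antigens still needing antisera or more cells

# Example with two reference cells; only the second (non-reactive) cell rules antigens out.
panel = [({"D": True, "C": True, "c": False, "Fya": True, "Fyb": True, "K": False}, True),
         ({"D": True, "C": False, "c": True, "Fya": False, "Fyb": True, "K": True}, False)]
print(candidate_antibodies(panel))        # e.g. {'C', 'Fya'} remain as candidates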
When multiple antibodies are present, or when an antibody is directed against a high-frequency antigen, the normal antibody panel procedure may not provide a conclusive identification. In these cases, hemagglutination inhibition can be used, wherein a neutralizing substance cancels out a specific antigen. Alternatively, the plasma may be incubated with cells of known antigen profiles in order to remove a specific antibody (a process termed adsorption); or the cells can be treated with enzymes such as ficain or papain which inhibit the reactivity of some blood group antibodies and enhance others. The effect of ficain and papain on major blood group systems is as follows:
Enhanced: ABO, Rh, Kidd, Lewis, P1, Ii
Destroyed: Duffy (Fya and Fyb), Lutheran, MNS
Unaffected: Kell
People who have tested positive for an unexpected blood group antibody in the past may not exhibit a positive reaction on subsequent testing; however, if the antibody is clinically significant, they must be transfused with antigen-negative blood regardless.
Crossmatching
Crossmatching, which is routinely performed before a blood transfusion, involves adding the recipient's blood plasma to a sample of the donor's red blood cells. If the blood is incompatible, the antibodies in the recipient's plasma will bind to antigens on the donor red blood cells. This antibody-antigen reaction can be detected through visible clumping or destruction of the red blood cells, or by reaction with anti-human globulin, after centrifugation.
If the transfusion recipient has a negative antibody screen and no history of antibodies, an "immediate spin" crossmatch is often performed: the red blood cells and plasma are centrifuged immediately after mixing as a final check for incompatibility between ABO blood types. If a clinically significant antibody is detected (or was in the past), or if the immediate spin crossmatch demonstrates incompatibility, a "full" or "IgG crossmatch" is performed, which uses the indirect antiglobulin test to detect blood group incompatibility caused by IgG antibodies. The IgG crossmatching procedure is more lengthy than the immediate spin crossmatch, and in some cases may take more than two hours.
Individuals who have a negative antibody screen and no history of antibodies may also undergo an "electronic crossmatch", provided that their ABO and Rh type has been determined from the current blood sample and that the results of another ABO/Rh type are on record. In this case, the recipient's blood type is simply compared against that of the donor blood, without any need for serologic testing. In emergencies, blood may be issued before crossmatching is complete.
Methods
Tube and slide methods
Blood typing can be performed using test tubes, microplates, or blood typing slides. The tube method involves mixing a suspension of red blood cells with antisera (or plasma, for reverse grouping) in a test tube. The mixture is centrifuged to separate the cells from the reagent, and then resuspended by gently agitating the tube. If the antigen of interest is present, the red blood cells agglutinate, forming a solid clump in the tube. If it is absent, the red blood cells go back into suspension when mixed. The microplate method is similar to the tube method, except rather than using individual test tubes, blood typing is carried out in a plate containing dozens of wells, allowing multiple tests to be performed at the same time. The agglutination reactions are read after the plate is centrifuged.
Antibody screening and identification can also be carried out by the tube method. In this procedure, the plasma and red cells are mixed together in a tube containing a medium that enhances agglutination reactions, such as low ionic strength saline (LISS). The tubes are incubated at body temperature for a defined period of time, then centrifuged and examined for agglutination or hemolysis; first immediately following the incubation period, and then after washing and addition of anti-human globulin reagent. Crossmatching, likewise, may be performed by the tube method; the reactions are read immediately after centrifugation in the immediate spin crossmatch, or after incubation and addition of AHG in the full crossmatching procedure.
The slide method for blood typing involves mixing a drop of blood with a drop of antisera on a slide. The slide is tilted to mix the cells and reagents together and then observed for agglutination, which indicates a positive result. This method is typically used in under-resourced areas or emergency situations; otherwise, alternative methods are preferred.
Column agglutination
Column agglutination techniques for blood compatibility testing (sometimes called the "gel test") use cards containing columns of dextran-polyacrylamide gel. Cards designed for blood typing contain pre-dispensed blood typing reagents for forward grouping, and wells containing only a buffer solution, to which reagent red blood cells and plasma are added, for reverse grouping. Antibody screening and crossmatching can also be carried out by column agglutination, in which case cards containing anti-human globulin reagent are used. The gel cards are centrifuged (sometimes after incubation, depending on the test), during which red blood cell agglutinates become trapped at the top of the column because they are too large to migrate through the gel. Cells that have not agglutinated collect on the bottom. Therefore, a line of red blood cells at the top of the column indicates a positive result. The strength of positive reactions is scored from 1+ to 4+ depending on how far the cells have travelled through the gel. The gel test has advantages over manual methods in that it eliminates the variability associated with manually re-suspending the cells and that the cards can be kept as a record of the test. The column agglutination method is used by some automated analyzers to perform blood typing automatically. These analyzers pipette red blood cells and plasma onto gel cards, centrifuge them, and scan and read the agglutination reactions to determine the blood type.
Solid-phase assay
Solid-phase assays (sometimes called the "antigen capture" method) use reagent antigens or antibodies affixed to a surface (usually a microplate). Microplate wells coated with anti-A, -B and -D reagents are used for forward grouping. The test sample is added and the microplate is centrifuged; in a positive reaction, the red blood cells adhere to the surface of the well. Some automated analyzers use solid phase assays for blood typing.
Genotyping
Genetic testing can be used to determine a person's blood type in certain situations where serologic testing is insufficient. For example, if a person has been transfused with large volumes of donor blood, the results of serologic testing will reflect the antigens on the donor cells and not the person's actual blood type. Individuals who produce antibodies against their own red blood cells or who are treated with certain drugs may show spurious agglutination reactions in serologic testing, so genotyping may be necessary to determine their blood type accurately. Genetic testing is required for typing red blood cell antigens for which no commercial antisera are available.
The AABB recommends RhD antigen genotyping for women with serologic weak D phenotypes who have the potential to bear children. This is because some people with weak D phenotypes can produce antibodies against the RhD antigen, which can cause hemolytic disease of the newborn, while others cannot. Genotyping can identify the specific type of weak D antigen, which determines the potential for the person to produce antibodies, thus avoiding unnecessary treatment with Rho(D) immune globulin. Genotyping is preferred to serologic testing for people with sickle cell disease, because it is more accurate for certain antigens and can identify antigens that cannot be detected by serologic methods.
Genotyping is also used in prenatal testing for hemolytic disease of the newborn. When a pregnant woman has a blood group antibody that can cause HDN, the fetus can be typed for the relevant antigen to determine if it is at risk of developing the disease. Because it is impractical to draw blood from the fetus, the blood type is determined using an amniocentesis sample or cell-free fetal DNA isolated from the mother's blood. The father may also be genotyped to predict the risk of hemolytic disease of the newborn, because if the father is homozygous for the relevant antigen (meaning having two copies of the gene) the baby will be positive for the antigen and thus at risk of developing the disease. If the father is heterozygous (having only one copy), the baby only has a 50% chance of being positive for the antigen.
Limitations
ABO discrepancies
In ABO typing, the results of the forward and reverse grouping should always correspond with each other. An unexpected difference between the two results is termed an ABO discrepancy, and must be resolved before the person's blood type is reported.
Forward grouping
Weak reactions in the forward grouping may occur in people who belong to certain ABO subgroups—variant blood types characterized by decreased expression of the A or B antigens or changes in their structure. Weakened expression of ABO antigens may also occur in leukemia and Hodgkin's lymphoma. Weak reactions in forward grouping can be strengthened by incubating the blood and reagent mixture at room temperature or 4°C, or by using certain enzymes to enhance the antigen-antibody reactions.
Occasionally, two populations of red blood cells are apparent after reaction with the blood typing antisera. Some of the red blood cells are agglutinated, while others are not, making it difficult to interpret the result. This is called a mixed field reaction, and it can occur if someone has recently received a blood transfusion with a different blood type (as in a type A patient receiving type O blood), if they have received a bone marrow or stem cell transplant from someone with a different blood type, or in patients with certain ABO subgroups, such as A3. Investigation of the person's medical history can clarify the cause of the mixed field reaction.
People with cold agglutinin disease produce antibodies against their own red blood cells that cause them to spontaneously agglutinate at room temperature, leading to false positive reactions in forward grouping. Cold agglutinins can usually be deactivated by warming the sample to 37°C and washing the red blood cells with saline. If this is not effective, dithiothreitol can be used to destroy the antibodies.
Cord blood samples may be contaminated with Wharton's jelly, a viscous substance that can cause red blood cells to stick together, mimicking agglutination. Wharton's jelly can be removed by thoroughly washing the red blood cells.
In a rare phenomenon known as "acquired B antigen", a patient whose true blood type is A may show a weak positive result for B in the forward grouping. This condition, which is associated with gastrointestinal diseases such as colon cancer and intestinal obstruction, results from conversion of the A antigen to a structure mimicking the B antigen by bacterial enzymes. Unlike the true B antigen, acquired B antigen does not react with reagents within a certain pH range.
Reverse grouping
Infants under 3 to 6 months of age exhibit missing or weak reactions in reverse grouping because they produce very low levels of ABO antibodies. Therefore, reverse grouping is generally not performed for this age group. Elderly people may also exhibit decreased antibody production, as may people with hypogammaglobulinemia. Weak reactions can be strengthened by allowing the plasma and red cells to incubate at room temperature for 15 to 30 minutes, and if this is not effective, they can be incubated at 4°C.
Approximately 20% of individuals with the blood type A or AB belong to a subgroup of A, termed A2, while the more common subgroup, encompassing approximately 80% of individuals, is termed A1. Because of small differences in the structure of the A1 and A2 antigens, some individuals in the A2 subgroup can produce an antibody against A1. Therefore, these individuals will type as A or AB in the forward grouping, but will exhibit an unexpected positive reaction with the type A1 red cells in the reverse grouping. The discrepancy can be resolved by testing the person's red blood cells with an anti-A1 reagent, which will give a negative result if the patient belongs to the A2 subgroup. Anti-A1 antibodies are considered clinically insignificant unless they react at 37°C. Other subgroups of A exist, as well as subgroups of B, but they are rarely encountered.
If high levels of protein are present in a person's plasma, a phenomenon known as rouleaux may occur when their plasma is added to the reagent cells. Rouleaux causes red blood cells to stack together, which can mimic agglutination, causing a false positive result in the reverse grouping. This can be avoided by removing the plasma, replacing it with saline, and re-centrifuging the tube. Rouleaux will disappear once the plasma is replaced with saline, but true agglutination will persist.
Antibodies to blood group antigens other than A and B may react with the reagent cells used in reverse grouping. If a cold-reacting autoantibody is present, the false positive result can be resolved by warming the sample to 37°C. If the result is caused by an alloantibody, an antibody screen can be performed to identify the antibody, and the reverse grouping can be performed using samples that lack the relevant antigen.
Weak D phenotype
Approximately 0.2 to 1% of people have a "weak D" phenotype, meaning that they are positive for the RhD antigen, but exhibit weak or negative reactions with some anti-RhD reagents due to decreased antigen expression or atypical variants of antigen structure. If routine serologic testing for RhD results in a score of 2+ or less, the antiglobulin test can be used to demonstrate the presence of RhD. Weak D testing is also performed on blood donors who initially type as RhD negative. Historically, blood donors with weak D were treated as Rh positive and patients with weak D were treated as Rh negative in order to avoid potential exposure to incompatible blood. Genotyping is increasingly used to determine the molecular basis of weak D phenotypes, as this determines whether or not individuals with weak D can produce antibodies against RhD or sensitize others to the RhD antigen.
Red cell antibody sensitization
The indirect antiglobulin test, which is used for weak D testing and typing of some red blood cell antigens, detects IgG bound to red blood cells. If IgG is bound to red blood cells in vivo, as may occur in autoimmune hemolytic anemia, hemolytic disease of the newborn and transfusion reactions, the indirect antiglobulin test will always give a positive result, regardless of the presence of the relevant antigen. A direct antiglobulin test can be performed to demonstrate that the positive reaction is due to sensitization of red cells.
Other pretransfusion testing
Some groups of people have specialized transfusion requirements. Fetuses, very low-birth-weight infants, and immunocompromised people are at risk for developing severe infection with cytomegalovirus (CMV)―an opportunistic pathogen for which approximately 50% of blood donors test positive―and may be transfused with CMV-negative blood to prevent infection. Those who are at risk of developing graft-versus-host disease, such as bone marrow transplant recipients, receive blood that has been irradiated to inactivate the T lymphocytes that are responsible for this reaction. People who have had serious allergic reactions to blood transfusions in the past may be transfused with blood that has been "washed" to remove plasma. The history of the patient is also examined to see if they have previously identified antibodies and any other serological anomalies.
A direct antiglobulin test (Coombs test) is also performed as part of the antibody investigation.
Donor blood is generally screened for transfusion-transmitted infections such as HIV. As of 2018, the World Health Organization reported that nearly 100% of blood donations in high- and upper-middle-income countries underwent infectious disease screening, but the figures for lower-middle-income and low-income countries were 82% and 80.3% respectively.
History
In 1901, Karl Landsteiner published the results of an experiment in which he mixed the serum and red blood cells of five different human donors. He observed that a person's serum never agglutinated their own red blood cells, but it could agglutinate others', and based on the agglutination reactions the red cells could be sorted into three groups: group A, group B, and group C. Group C, which consisted of red blood cells that did not react with any person's plasma, would later be known as group O. A fourth group, now known as AB, was described by Landsteiner's colleagues in 1902. This experiment was the first example of blood typing.
In 1945, Robin Coombs, A.E. Mourant and R.R. Race published a description of the antiglobulin test (also known as the Coombs test). Previous research on blood group antibodies had documented the presence of so-called "blocking" or "incomplete" antibodies: antibodies that occupied antigen sites, preventing other antibodies from binding, but did not cause red blood cells to agglutinate. Coombs and his colleagues devised a method to easily demonstrate the presence of these antibodies. They injected human immunoglobulins into rabbits, which caused them to produce an anti-human globulin antibody. The anti-human globulin could bind to antibodies already attached to red blood cells and cause them to agglutinate. The invention of the antiglobulin test led to the discovery of many more blood group antigens. By the early 1950s, companies had begun producing commercial antisera for special antigen testing.
Notes
References
Blood tests
Transfusion medicine
Immunologic tests | Blood compatibility testing | [
"Chemistry",
"Biology"
] | 7,006 | [
"Blood tests",
"Chemical pathology",
"Immunologic tests"
] |
62,948,318 | https://en.wikipedia.org/wiki/Uri%20Sivan | Uri Sivan (Hebrew: אורי סיון; born 1955) is an Israeli physicist who is the 17th president of the Technion – Israel Institute of Technology. He is also the holder of the Bertoldo Badler Chair in the Technion's Faculty of Physics.
Biography
Uri Sivan's parents immigrated to Mandatory Palestine from Poland in 1936. They studied at the Technion – Israel Institute of Technology after being banned from European universities because they were Jewish.
Sivan served as a pilot in the Israeli Air Force.
Sivan has a BSc in Physics and Mathematics, and an MSc and PhD in Physics from Tel Aviv University.
Sivan lives in Haifa, Israel. He is married and has three children.
Academic career
In 1991, after three years at IBM’s T. J. Watson Research Center in New York State, Sivan joined the Faculty of Physics at the Technion – Israel Institute of Technology, and became the holder of the Bertoldo Badler Chair.
Sivan set up and led the Russell Berrie Nanotechnology Research Institute at the Technion from 2005 to 2010, and in 2017 he established the National Advisory Committee for Quantum Science and Technology of the Council for Higher Education's Planning and Budgeting Committee. In 2022, Israel's second astronaut carried into space the nano-bible created by Sivan, a 0.5 square-millimeter silicon nanochip carrying 1.2 million letters.
In September 2019, Sivan became the 17th President of the Technion – Israel Institute of Technology, replacing Peretz Lavie.
Awards and recognition
Sivan was awarded the Israel Academy of Sciences Bergmann Prize, the Mifal Hapais Landau Prize for the Sciences and Research, the Rothschild Foundation Bruno Prize, the Technion's Hershel Rich Innovation Award, and the Taub Award for Excellence in Research.
References
Scientists from Haifa
20th-century Israeli physicists
Academic staff of Technion – Israel Institute of Technology
Tel Aviv University alumni
Jewish physicists
Living people
Israeli people of Polish-Jewish descent
Technion – Israel Institute of Technology presidents
Quantum physicists
Physics educators
Israeli Air Force personnel
21st-century Israeli physicists
1955 births | Uri Sivan | [
"Physics"
] | 453 | [
"Quantum physicists",
"Quantum mechanics"
] |
62,950,092 | https://en.wikipedia.org/wiki/Dental%20aerosol | A dental aerosol is an aerosol produced by dental instruments such as dental handpieces, three-way syringes, and other high-speed instruments. These aerosols may remain suspended in the clinical environment and can pose risks to the clinician, staff, and other patients. The heavier particles (e.g., >50 μm) contained within the aerosols are likely to remain suspended in the air for a relatively short period and settle quickly onto surfaces; the lighter particles, however, may remain suspended for longer periods and may travel some distance from the source. These smaller particles can become deposited in the lungs when inhaled and provide a route of disease transmission. Different dental instruments produce varying quantities of aerosol, and therefore are likely to pose differing risks of dispersing microbes from the mouth. Air turbine dental handpieces generally produce more aerosol, with electric micromotor handpieces producing less, although this depends on the configuration of water coolant used by the handpiece.
Composition
These dental aerosols are bioaerosols which may be contaminated with bacteria, fungi, and viruses from the oral cavity, skin, and the water used in dental units. Dental aerosols also have micro-particles from dental burs, and silica particles which are one of the components of dental filling materials like dental composite. Depending upon the procedure and site, the aerosol composition may change from patient to patient. Apart from microorganisms, these aerosols may consist of particles from saliva, gingival crevicular fluid, blood, dental plaque, calculus, tooth debris, oronasal secretions, oil from dental handpieces, and micro-particles from grinding of the teeth and dental materials. They may also consist of abrasive particles that are expelled during air abrasion and polishing methods.
Size
Dental aerosols contain a wide range of particles with the majority being less than 50 μm. The smaller particles with size between 0.5 and 10 μm are more likely to be inhaled and have the potential to transmit infection. Smaller particles are likely to remain suspended for longer periods of time, and may travel further from the source. Settling time of particles is described by Stokes' law in part as a function of their aerodynamic diameter.
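As a rough illustration of why the finer fraction stays airborne, the following Python sketch evaluates the terminal settling velocity predicted by Stokes' law for spherical particles in still air; the particle density, air viscosity and fall height used here are assumed example values, not figures from the cited studies.

# Illustrative only: Stokes-law terminal settling velocity of a small sphere in still air,
# v = (rho_particle - rho_air) * g * d**2 / (18 * mu). Assumes unit-density spheres and
# ignores evaporation, air currents and non-spherical shapes.
G = 9.81           # gravitational acceleration, m/s^2
MU_AIR = 1.8e-5    # dynamic viscosity of air at about 20 degrees C, Pa*s
RHO_AIR = 1.2      # density of air, kg/m^3

def settling_velocity(diameter_um, rho_particle=1000.0):
    d = diameter_um * 1e-6                                       # micrometres -> metres
    return (rho_particle - RHO_AIR) * G * d ** 2 / (18 * MU_AIR)

for diameter in (5, 10, 50):
    v = settling_velocity(diameter)
    minutes_to_fall_2m = 2.0 / v / 60
    print(f"{diameter} um: ~{v * 100:.3f} cm/s, ~{minutes_to_fall_2m:.1f} min to fall 2 m")

Under these assumptions a 50 μm particle settles within about a minute, while a 5 μm particle takes the better part of an hour, consistent with the behaviour described above.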
Potential hazards and mitigation
The water used in dental units may be contaminated with Legionella, and the aerosols produced by dental handpieces may contribute to spreading Legionella into the environment; there is therefore a risk of inhalation by the dentist, staff and patients. Dental unit water lines (DUWLs) may also be contaminated with other bacteria such as Mycobacterium spp. and Pseudomonas aeruginosa. Legionella species cause infections such as legionellosis and several pneumonia-like diseases; however, there is still no strong evidence that dentists are at greater occupational risk from Legionella. Tuberculosis can also be transmitted during aerosol-generating, cough-inducing procedures performed on patients with tuberculosis. Mycobacterium tuberculosis is transmitted in the form of droplet nuclei smaller than 5 μm, which stay suspended in the environment for long periods. The development of active tuberculosis is nonetheless less likely in dental health care workers (DHCWs) than in other health care workers (HCWs), and evidence that such transmission leads to active tuberculosis in DHCWs is lacking.
The virus that caused the COVID-19 pandemic was named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) by the International Committee on Taxonomy of Viruses (ICTV) on 11 February 2020. SARS-CoV-2 remains viable in aerosols for several hours and on surfaces for a few days, so transmission of SARS-CoV-2 through aerosols and fomites is feasible.
Dentists have previously been described as among the occupational groups at highest risk of exposure to SARS-CoV-2. Because dental health care workers work in close proximity to patients, aerosol-generating dental procedures are not advisable in patients who have tested positive for COVID-19, except for emergency dental treatment. On 16 March 2020, the American Dental Association (ADA) advised dentists to postpone all elective procedures, and it also developed specific guidance for dental services during the COVID-19 pandemic.
Calcium, aluminium, silica and phosphorus can also be found in the dental aerosols produced during procedures such as the debonding of orthodontic appliances. These particles may range from 2 to 30 μm in diameter and may be inhaled.
A number of methods have been proposed, and are widely used, to control dental aerosols and reduce risk of disease transmission. For example, dental aerosols can be controlled or reduced using dental suction, rubber dam, alternative handpieces, and local exhaust ventilation (extra-oral suction).
See also
Occupational hazards in dentistry
Dentistry
References
Further reading
External links
Aerosols
Dentistry
Occupational hazards
Occupational safety and health | Dental aerosol | [
"Chemistry"
] | 1,080 | [
"Aerosols",
"Colloids"
] |
62,951,851 | https://en.wikipedia.org/wiki/Random%20recursive%20tree | In probability theory, a random recursive tree is a rooted tree chosen uniformly at random from the recursive trees with a given number of vertices.
Definition and generation
In a recursive tree with n vertices, the vertices are labeled by the numbers from 1 to n, and the labels must decrease along any path to the root of the tree. These trees are unordered, in the sense that there is no distinguished ordering of the children of each vertex. In a random recursive tree, all such trees are equally likely.
Alternatively, a random recursive tree can be generated by starting from a single vertex, the root of the tree, labeled 1, and then, for each successive label from 2 to n, choosing a random vertex with a smaller label to be its parent. If each of the choices is uniform and independent of the other choices, the resulting tree will be a random recursive tree.
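As an illustration of this growth rule, the following Python sketch generates a random recursive tree as a parent array and measures vertex depths; it is a minimal example, and the function names are chosen only for this illustration.

import random

def random_recursive_tree(n):
    """Generate a random recursive tree on vertices 1..n; parent[k] is the parent of k."""
    parent = {1: None}                           # vertex 1 is the root
    for k in range(2, n + 1):
        parent[k] = random.randint(1, k - 1)     # uniform choice among earlier vertices
    return parent

def depth(parent, v):
    d = 0
    while parent[v] is not None:
        v = parent[v]
        d += 1
    return d

tree = random_recursive_tree(1000)
avg = sum(depth(tree, v) for v in tree) / len(tree)
print(avg)   # average root-to-vertex distance; grows logarithmically in the number of vertices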
Properties
With high probability, the longest path from the root to a leaf of an n-vertex random recursive tree has length approximately e log n (where log denotes the natural logarithm).
The maximum number of children of any vertex, i.e., the maximum degree in the tree, is, with high probability, approximately log2 n.
The expected distance of the kth vertex from the root is the kth harmonic number (approximately log k), from which it follows by linearity of expectation that the sum of all root-to-vertex path lengths is, with high probability, approximately n log n.
The expected number of leaves of the tree is n/2 with variance n/12, so with high probability the number of leaves is approximately n/2.
Applications
Random recursive trees have been used to model phenomena including disease spreading, pyramid schemes, the evolution of languages, and the growth of computer networks.
References
Trees (graph theory)
Random graphs | Random recursive tree | [
"Mathematics"
] | 339 | [
"Mathematical relations",
"Graph theory",
"Random graphs"
] |
62,952,157 | https://en.wikipedia.org/wiki/Gregory%20H.%20Robinson | Gregory H. Robinson FRSC is an American synthetic inorganic chemist and a Foundation Distinguished Professor of Chemistry at the University of Georgia. Robinson's research focuses on unusual bonding motifs and low oxidation state chemistry of molecules containing main group elements such as boron, gallium, germanium, phosphorus, magnesium, and silicon. He has published over 150 research articles, and was elected to the National Academy of Sciences in 2021.
Education
Robinson received his B.S. from Jacksonville State University (1980) and his Ph.D. from the University of Alabama (1984). He joined the faculty at the University of Georgia in 1995.
Discoveries
Robinson has made a number of seminal discoveries in the field of synthetic inorganic chemistry. Many of these discoveries have concerned unusual molecules involving the main group elements.
Aromatic molecules constitute a particularly important class of organic compounds. In general, aromatic molecules contain planar carbon-based cyclic ring systems. In addition, aromatic molecules also possess enhanced stability due to electron delocalization. The iconic aromatic molecule is benzene, C6H6. Inherent in the traditional concept of aromaticity was the assumption that metals are incapable of displaying traditional aromatic behavior. Robinson discovered that the main group metal gallium, if properly constrained, could exhibit aromatic behavior. Robinson's group prepared a compound that contained a three-membered ring of gallium atoms in a dianion, [R3Ga3]2- (R = large organic ligand). This [R3Ga3]2- dianion was found to be isoelectronic with the aromatic triphenylcyclopropenium cation, [Ph3C3]+. Thus, the concept of “metalloaromaticity”, the proposition that a metallic ring system could display traditional aromatic behavior historically restricted to carbon ring systems (i.e., benzene), was experimentally realized.
The chemistry of boron, the fifth element on the Periodic Table, is as rich as it is varied. However, boron had not been shown to engage in robust multiple bonding like its periodic neighbor carbon. Robinson utilized a class of organic bases known as carbenes (L:) to prepare the first neutral compound containing a boron-boron double bond, the first diborene, with the synthesis and molecular structure of L:(H)B=B(H):L. The chemistry of molecules containing boron-boron multiple bonds is now a thriving area of research.
Robinson utilized a similar technique to prepare a highly unusual compound containing a silicon-silicon double bond, with both silicon atoms residing in the formal oxidation state of zero, L:Si=Si:L. Essentially, this compound represented a means to stabilize the highly reactive diatomic allotrope of silicon at room temperature. Several other such molecules have since been prepared, including carbene-stabilized diphosphorus.
Publications
Robinson has published over 150 research articles, including:
Wang, Y.; Quillian, B.; Wei, P.; Wannere, C. S.; Xie, Y.; King, R. B.; Schaefer, H. F. III; Schleyer, P. V. R.; and Robinson, G. H., “A Stable Neutral Diborene Containing a B=B Double Bond”, Journal of the American Chemical Society 2007, 129, 12412–12413.
Wang, Y.; Xie, Y.; Wei, P.; King, R. B.; Schaefer, H. F. III; Schleyer, P. V. R.; and Robinson, G. H., “Carbene-Stabilized Diphosphorus”, Journal of the American Chemical Society 2008, 130, 14970–14971.
Wang, Y.; Xie, Y.; Wei, P.; King, R. B.; Schaefer, H. F. III; Schleyer, P. V. R.; and Robinson, G. H., “A Stable Silicon (0) Compound with a Si=Si Double Bond”, Science 2008, 321, 1069-1071.
Wang, Y.; Chen, M.; Xie, Y.; Wei, P.; Schaefer, H. F. III; Schleyer, P. V. R.; and Robinson, G. H., "Stabilization of Elusive Silicon Oxides", Nature Chemistry 2015, 7, 509–513.
Wang, Y.; Hickox, H. P.; Xie, Y.; Wei, P.; Blair, S. A.; Johnson, M. K.; Schaefer, H. F. III; and Robinson, G. H., “A Stable Anionic Dithiolene Radical”, Journal of the American Chemical Society 2017, 139, 6859-6862.
Wang, Y.; Xie, Y.; Wei, P.; Blair, S. A.; Cui, D.; Johnson, M. K.; Schaefer, H. F. III.; and Robinson, G. H., "Stable Boron Dithiolene Radicals", Angewandte Chemie, International Edition 2018, 57, 7865-7868.
Wang, Y.; Xie, Y.; Wei, P.; Schaefer, H. F. III.; and Robinson, G. H., “Redox Chemistry of an Anionic Dithiolene Radical”, Dalton Transactions 2019, 48, 3543-3546.
Wang, Y.; Tope, C. A.; Xie, Y.; Wei, P.; Urbauer, J. L.; Schaefer, H. F. III.; and Robinson, G. H., "Carbene-Stabilized Disilicon-Transfer Agent: Synthesis of a Diatonic Silicon Tris(dithiolene) Complex", Angewandte Chemie, International Edition 2020, 59, 8864-8867.
Wang, Y.; Xie, Y.; Wei, P.; Blair, S. A.; Cui, D.; Johnson, M. K.; Schaefer, H. F. III.; and Robinson, G. H., "A Stable Naked Dithiolene Radical Anion and Synergic THF Ring-Opening", Journal of the American Chemical Society 2020, 142, 17301-17305.
Wang, Y.; Tran, P. M.; Xie, Y.; Wei, P.; Glushka, J. N.; Schaefer, H. F. III.; and Robinson, G. H., "Carbene-Stabilized Dithiolene (L0) Zwitterions", Angewandte Chemie, International Edition 2021, 60, 22706-22710.
Awards
Sigma Xi Distinguished Lecturer (2004–2005)
Lamar Dodd Creative Research Award (2010)
Humboldt Research Award (2012)
F. Albert Cotton Award in Synthetic Inorganic Chemistry (2013)
SEC Faculty Achievement Award (2014)
Fellow of the Royal Society of Chemistry (2017)
Elected to the National Academy of Sciences (2021)
References
External links
The Robinson Lab
Year of birth missing (living people)
Living people
American chemists
American inorganic chemists
Members of the United States National Academy of Sciences | Gregory H. Robinson | [
"Chemistry"
] | 1,552 | [
"American inorganic chemists",
"Inorganic chemists"
] |
62,953,123 | https://en.wikipedia.org/wiki/XCO2 | {{DISPLAYTITLE:xCO2}}
XCO2 is the column-averaged dry-air mole fraction of carbon dioxide in the atmosphere, expressed in parts per million (ppm). Rather than taking a single observation at the surface, it represents an integration of atmospheric CO2 above a specific location. The 'X' refers to the observation taking place from a satellite platform. CO2-observing satellites cannot observe greenhouse gases at a single point directly; instead they retrieve an average over the entire atmospheric column of CO2. These satellite estimates need ground-truthing to ensure that XCO2 retrievals are accurate; the average accuracy from OCO-2 and GOSAT was 0.267 ± 1.56 ppm between September 2014 and December 2016.
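To illustrate the idea of a column average, the sketch below computes a simple pressure-weighted mean of CO2 mixing ratios over a handful of atmospheric layers. The pressure levels and CO2 values are hypothetical, and real XCO2 retrievals additionally involve averaging kernels and dry-air corrections.

```python
# Minimal sketch of a pressure-weighted column average (illustrative values only).
# Real XCO2 retrievals additionally apply averaging kernels and dry-air corrections.

# Hypothetical pressure levels (hPa) from the surface upward and CO2 mixing ratios (ppm)
pressure_hpa = [1000, 800, 600, 400, 200, 50]
co2_ppm      = [412,  410, 408, 406, 404, 402]

# Weight each layer by the pressure it spans (proportional to the mass of air in the layer)
layer_weights = [pressure_hpa[i] - pressure_hpa[i + 1] for i in range(len(pressure_hpa) - 1)]
layer_co2     = [(co2_ppm[i] + co2_ppm[i + 1]) / 2 for i in range(len(co2_ppm) - 1)]

xco2 = sum(w * c for w, c in zip(layer_weights, layer_co2)) / sum(layer_weights)
print(f"Column-averaged CO2 (XCO2): {xco2:.1f} ppm")
```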
The largest XCO2 value was recorded in May 2018 over the Northern Hemisphere, at approximately 410 ppm. These values have been increasing steadily over recent years. Space-based CO2 measurements are used for climate-level scientific studies, such as furthering understanding of the El Niño–Southern Oscillation.
See also
pCO2
Carbon dioxide in Earth's atmosphere
observing satellite
Orbiting Carbon Observatory 2
References
Carbon dioxide | XCO2 | [
"Chemistry"
] | 231 | [
"Greenhouse gases",
"Carbon dioxide"
] |
62,953,228 | https://en.wikipedia.org/wiki/Patoka%20Oil%20Terminal | Patoka Oil Terminal is a pipeline hub located near the towns of Patoka and Vernon, Illinois. It services five major pipelines in the second district of the Petroleum Administration for Defense Districts, including Dakota Access and the Keystone Pipeline.
Overview
The Patoka Oil Terminal Hub is located near the towns of Patoka and Vernon, Illinois. The Patoka Terminal is the second-largest pipeline terminal in the Midwest after the Cushing-Drumright Oil Field. It has 82 storage tanks that can store up to 19 million barrels of crude oil, servicing five major incoming as well as five major outgoing pipelines, and facilitates the transport of oil through pipelines to refineries in various parts of the United States.
Patoka Oil Terminal is part of District Two of the Petroleum Administration for Defense Districts. It was responsible for three-quarters of pipeline movements in that district in 2010 and processes approximately 2.2 million barrels of oil per day.
Patoka is the main oil terminal in the region, where oil was first discovered in 1938. Tax revenue from operations is collected and distributed by Marion County, Illinois. The Chicago Tribune reported that Dakota Access paid approximately $750,000 in tax revenue for its operations in Illinois.
Pipelines
The following pipelines are part of the Patoka Energy Terminal:
Dakota Access Pipeline
Keystone Pipeline
Southern Access Extension
Capline
Patoka West
See also
Pipeline transport
List of oil refineries
References
Oil refining
Petroleum infrastructure in the United States
Pipeline transport | Patoka Oil Terminal | [
"Chemistry"
] | 298 | [
"Petroleum technology",
"Oil refining"
] |
62,953,266 | https://en.wikipedia.org/wiki/Chinese%20Society%20of%20Astronautics | The Chinese Society of Astronautics (; abbreviated CSA) is a professional association of individuals with an interest in space. As of 2019, the society has 38 specialized committees and 179 working committees with more than 30,000 individual members.
History
The initial concept of the Chinese Society of Astronautics was proposed in 1977 and accepted by the China Association for Science and Technology (CAST). The Chinese Society of Astronautics was founded by Qian Xuesen, Ren Xinmin and Zhang Zhenhuan on October 23, 1979. In September 1980 it became a member of the International Astronautical Federation (IAF).
Scientific publishing
Journal of Astronautics
Advances in Aerospace Science and Technology
Space Exploration
List of presidents
References
External links
Space advocacy organizations
Scientific organizations established in 1979
Organizations based in Beijing
1979 establishments in China
1979 in Beijing | Chinese Society of Astronautics | [
"Astronomy"
] | 165 | [
"Space advocacy organizations",
"Astronomy organizations"
] |
62,953,519 | https://en.wikipedia.org/wiki/Sleep%20problems%20in%20women | Sleep problems in women can manifest at various stages of their life cycle, as supported by both subjective and objective data. Factors such as hormonal changes, aging, psycho-social aspects, physical and psychological conditions and the presence of sleeping disorders can disrupt women's sleep. Research supports the presence of disturbed sleep during the menstrual cycle, pregnancy, postpartum period, and menopausal transition. The relationship between sleep and women's psychological well-being suggests that the underlying causes of sleep disturbances are often multi-factorial throughout a woman's lifespan.
Sleep during menstrual cycle
Initial variations of sleep in women begin with the menstrual cycle. In subjective studies, women who report PMS or PMDD declare increases in poor sleep quality. However, most objective laboratory-based PSG measures of young healthy women do not confirm irregular sleep patterns across the menstrual cycle, in either sleep duration or sleep quality. One exception is a reduction of REM sleep and, markedly more so, an increase of Stage 2 sleep during the luteal phase of the menstrual cycle. Several studies attribute this to increased estrogen and progesterone concentrations. One actigraphy study reports a modest decline in total sleep time of 25 min in late-reproductive women during the premenstrual week. Subjectively reported sleep during the menstrual cycle differs from these objective findings. Seventy percent of women report a negative impact on their sleep, and they report a decrease in sleep quality on 2.5 days each month. Poor sleep quality, connected with poor mood and menstrual pain, is most likely to be reported during the premenstrual week. Psychological factors influencing sleep quality in women, such as mood disorders and sleep disorders (related to hormonal fluctuations), are often more prevalent after the onset of menarche.
Sleep during pregnancy
An estimated 46% of women experience subjectively poor sleep during pregnancy, and this percentage increases progressively up to approximately 78% in the late stages of pregnancy. Reasons vary according to the trimester, related to hormonal changes and physical discomfort: anatomic changes, sleep fragmentation, fragmentation of breathing, metabolic changes which might increase sleep disorders such as restless leg syndrome, gastroesophageal reflux, increase in overnight sodium excretion, changes in the musculoskeletal system, nocturnal uterine contractions, changes in iron and folate metabolism, and changes in the circadian and homeostatic regulation of sleep.
First trimester
Laboratory-based studies show that most women experience more disruption during night-time sleep. They sleep on average more during this time compared to pre-pregnancy sleep time. Total sleep time, however, decreases as the pregnancy progresses. Nocturia and musculoskeletal discomfort account for the physiological factors impacting sleep during the first trimester. Subjectively, women report an increase in night-time awakening and an increase in total sleep time. Pregnant women's main physiological complaints about the quality of sleep during the first trimester are related to nausea and vomiting, urinary frequency, backaches, and feeling uncomfortable and fatigued; as well as tender breasts, headache, vaginal discharge, flatulence, constipation, shortness of breath, and heartburn. Other contributing factors for sleep quality are age, parity, mood disorders, anxiety and primary sleep disorders.
Second trimester
Laboratory based measures during the second trimester show a further decrease in total sleep time, slow-wave sleep and sleep quality. No changes in REM sleep have been observed. Fetal movements, uterine contractions, musculoskeletal discomfort and rhinitis and nasal congestion account for the physiological factors influencing sleep. Self-reported total sleep time and quality decreases during the second trimester. Reported contributing factors are fetal movements, heartburn, cramps or tingling in the legs, breathing problems, and anxiety.
Third trimester
Objectively, slow-wave sleep, total sleep time, and general sleep quality decrease progressively during the third trimester. More night-time awakenings are common. Sleep onset latency problems and napping become more frequent. Physiological factors impacting sleep at this stage of the pregnancy are nocturia, fetal movement, uterine contractions, heartburn, orthopnea, leg cramps, rhinitis, nasal congestion, and sleeping position. Women in the third trimester report progressively reduced total sleep time and, similarly to the second trimester, being uncomfortable, feeling fetal movements, heartburn, frequent urination, cramps and respiratory difficulties. The last weeks before delivery influence sleep quality most markedly. It is, however, surprising that in spite of virtually all women experiencing poor sleep, only one third consider themselves to have actual sleep problems.
Postpartum
Total sleep time is objectively the lowest during the first month postpartum, though it steadily increases toward normal values. The main contributing factors influencing sleep during the postpartum period are infant behaviour, such as sleep and feeding patterns, bed-sharing and infant temperament. It appears that slow-wave sleep is preserved during the first weeks postpartum in spite of, and partly in response to, chronic sleep deprivation. Frequent napping occurs. Recent studies additionally suggest a myriad of further contributing factors influencing postpartum sleep. It has been found that multiparas' sleep remains relatively stable, while first-time mothers experience a decline in sleep efficiency. Furthermore, mothers of bottle-fed babies experience less night-time awakening than breast-feeding mothers. The general physical and psychological health of parents should be considered as well. By three months postpartum, mothers' and infants' sleep tends to stabilise and mothers' sleep becomes more regular.
Menopausal transition
Poor sleep quality, sleep fragmentation and increased awakenings are common complaints during the menopausal transition. Reportedly, 31% to 42% of women suffer from chronic insomnia during their menopausal transition. However, some objective PSG studies have not shown significant differences in sleep architecture in pre‐, peri‐, and postmenopausal women. Nonetheless, quantitative and qualitative studies report elevated beta activity, resulting objectively and subjectively in a consistent coupling of sleep disturbances such as sleep fragmentation, increased waking after sleep onset and poor sleep efficiency with vasomotor symptoms such as hot flashes. Besides vasomotor symptoms, contributing factors to menopause-associated sleep problems and insomnia include changes in hormone levels such as estrogen, affective disorders, stress and perceived health, urinary problems, obesity, gastrointestinal problems, endocrine problems, and cardiovascular problems. Sleep during the menopausal transition is furthermore influenced by pain disorders and specifically by comorbid physical and psychiatric conditions. Other proposed causes for sleep problems during menopause are an increased incidence of obstructive sleep apnea, increased sleep-disordered breathing, and inadequate sleep hygiene. In general, another important factor contributing to changed sleep patterns in ageing women is circadian disruption, with disturbed regulation of body temperature at sleep onset and early morning cortisol levels. Postmenopausal women tend to express a morning chronotype. These changes in chronotype compared to premenopausal women require a different sleep hygiene.
See also
Menopause
Sleep
References
Sleep disorders
Women's mental health | Sleep problems in women | [
"Biology"
] | 1,506 | [
"Behavior",
"Sleep",
"Sleep disorders"
] |
60,810,627 | https://en.wikipedia.org/wiki/Newton%E2%80%93Gauss%20line | In geometry, the Newton–Gauss line (or Gauss–Newton line) is the line joining the midpoints of the three diagonals of a complete quadrilateral.
The midpoints of the two diagonals of a convex quadrilateral with at most two parallel sides are distinct and thus determine a line, the Newton line. If the sides of such a quadrilateral are extended to form a complete quadrilateral, the diagonals of the quadrilateral remain diagonals of the complete quadrilateral, and the Newton line of the quadrilateral is the Newton–Gauss line of the complete quadrilateral.
Complete quadrilaterals
Any four lines in general position (no two lines are parallel, and no three are concurrent) form a complete quadrilateral. This configuration consists of a total of six points, the intersection points of the four lines, with three points on each line and precisely two lines through each point. These six points can be split into pairs so that the line segments determined by any pair do not intersect any of the given four lines except at the endpoints. These three line segments are called diagonals of the complete quadrilateral.
Existence of the Newton−Gauss line
It is a well-known theorem that the three midpoints of the diagonals of a complete quadrilateral are collinear.
There are several proofs of the result based on areas or wedge products or, as the following proof, on Menelaus's theorem, due to Hillyer and published in 1920.
Let the complete quadrilateral be labeled as in the diagram with diagonals and their respective midpoints . Let the midpoints of be respectively. Using similar triangles it is seen that intersects at , intersects at and intersects at . Again, similar triangles provide the following proportions,
However, the line intersects the sides of triangle , so by Menelaus's theorem the product of the terms on the right hand sides is −1. Thus, the product of the terms on the left hand sides is also −1 and again by Menelaus's theorem, the points are collinear on the sides of triangle .
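The theorem can also be checked numerically. The sketch below, with an arbitrarily chosen set of four lines in general position, computes the six intersection points, pairs them into the three diagonals, and verifies that the midpoints of those diagonals are collinear; the specific line coefficients are illustrative only.

```python
from itertools import combinations

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y = c."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Four lines in general position (no two parallel, no three concurrent) -- arbitrary example
lines = [(0, 1, 0), (1, 0, 0), (1, 1, 3), (1, 2, 4)]

# Six vertices, indexed by the pair of lines through each
vertex = {pair: intersect(lines[pair[0]], lines[pair[1]]) for pair in combinations(range(4), 2)}

# The three diagonals join vertices whose defining line-pairs are disjoint
diagonal_pairs = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
midpoints = [((vertex[p][0] + vertex[q][0]) / 2, (vertex[p][1] + vertex[q][1]) / 2)
             for p, q in diagonal_pairs]

# Collinear iff the triangle formed by the midpoints has (numerically) zero area
(x1, y1), (x2, y2), (x3, y3) = midpoints
area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
print("midpoints:", midpoints, "area:", area)   # area ~ 0  =>  the Newton–Gauss line exists
```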
Applications to cyclic quadrilaterals
The following are some results that use the Newton–Gauss line of complete quadrilaterals that are associated with cyclic quadrilaterals, based on the work of Barbu and Patrascu.
Equal angles
Given any cyclic quadrilateral , let point be the point of intersection between the two diagonals and . Extend the diagonals and until they meet at the point of intersection, . Let the midpoint of the segment be , and let the midpoint of the segment be (Figure 1).
Theorem
If the midpoint of the line segment is , the Newton–Gauss line of the complete quadrilateral and the line determine an angle equal to .
Proof
First show that the triangles are similar.
Since and , we know . Also,
In the cyclic quadrilateral , these equalities hold:
Therefore, .
Let be the radii of the circumcircles of respectively. Apply the law of sines to the triangles, to obtain:
Since and , this shows the equality The similarity of triangles follows, and .
Remark
If is the midpoint of the line segment , it follows by the same reasoning that .
Isogonal lines
Theorem
The line through parallel to the Newton–Gauss line of the complete quadrilateral and the line are isogonal lines of , that is, each line is a reflection of the other about the angle bisector. (Figure 2)
Proof
Triangles are similar by the above argument, so . Let be the point of intersection of and the line parallel to the Newton–Gauss line through .
Since and , and .
Therefore,
Two cyclic quadrilaterals sharing a Newton-Gauss line
Lemma
Let and be the orthogonal projections of the point on the lines and respectively.
The quadrilaterals and are cyclic quadrilaterals.
Proof
, as previously shown. The points and are the respective circumcenters of the right triangles . Thus, and .
Therefore,
Therefore, is a cyclic quadrilateral, and by the same reasoning, also lies on a circle.
Theorem
Extend the lines to intersect at respectively (Figure 4).
The complete quadrilaterals and have the same Newton–Gauss line.
Proof
The two complete quadrilaterals have a shared diagonal, . lies on the Newton–Gauss line of both quadrilaterals. is equidistant from and , since it is the circumcenter of the cyclic quadrilateral .
If triangles are congruent, and it will follow that lies on the perpendicular bisector of the line . Therefore, the line contains the midpoint of , and is the Newton–Gauss line of .
To show that the triangles are congruent, first observe that is a parallelogram, since the points are midpoints of respectively.
Therefore,
Also note that
Hence,
Therefore, and are congruent by SAS.
Remark
Due to being congruent triangles, their circumcircles are also congruent.
Relation with the Miquel point
The point at infinity along the Newton-Gauss line is the isogonal conjugate of the Miquel point.
Generalization
Dao Thanh Oai showed a generalization of the Newton-Gauss line.
For a triangle , let an arbitrary line and the Cevian triangle of an arbitrary point . intersects , and at and respectively. Then , and are collinear.
If is the centroid of the triangle , the line is Newton-Gauss line of the quadrilateral composed of and .
History
The Newton–Gauss line proof was developed by the two mathematicians it is named after: Sir Isaac Newton and Carl Friedrich Gauss. The initial framework for this theorem is from the work of Newton, in his previous theorem on the Newton line, in which Newton showed that the center of a conic inscribed in a quadrilateral lies on the Newton–Gauss line.
The theorem of Gauss and Bodenmiller states that the three circles whose diameters are the diagonals of a complete quadrilateral are coaxal.
Notes
References
External links
Geometry
Quadrilaterals | Newton–Gauss line | [
"Mathematics"
] | 1,294 | [
"Geometry"
] |
60,812,572 | https://en.wikipedia.org/wiki/Wi-Fi%207 | IEEE 802.11be, dubbed Extremely High Throughput (EHT), is a wireless networking standard in the IEEE 802.11 set of protocols which is designated by the Wi-Fi Alliance. It has built upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4, 5, and 6 GHz frequency bands.
Throughput is believed to reach a theoretical maximum of 46 Gbit/s, although actual results are much lower.
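The 46 Gbit/s figure can be reproduced with a back-of-the-envelope PHY-rate calculation. The sketch below assumes commonly quoted parameters (about 980 data subcarriers per 80 MHz, a 12.8 µs OFDM symbol with a 0.8 µs guard interval, 4096-QAM with rate-5/6 coding, and 16 spatial streams); these values are assumptions for illustration rather than figures taken from the standard text.

```python
# Back-of-the-envelope Wi-Fi 7 peak PHY rate (assumed, commonly quoted parameters)
data_subcarriers_per_80mhz = 980      # OFDMA data tones per 80 MHz channel
channel_width_mhz = 320               # maximum Wi-Fi 7 channel width
bits_per_symbol = 12                  # 4096-QAM carries 12 bits per subcarrier
coding_rate = 5 / 6                   # highest coding rate
spatial_streams = 16                  # maximum number of spatial streams
symbol_duration_s = 12.8e-6 + 0.8e-6  # OFDM symbol plus shortest guard interval

subcarriers = data_subcarriers_per_80mhz * channel_width_mhz // 80
rate_bps = subcarriers * bits_per_symbol * coding_rate * spatial_streams / symbol_duration_s
print(f"Peak PHY rate ~ {rate_bps / 1e9:.1f} Gbit/s")   # ~ 46 Gbit/s
```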
Development of the 802.11be amendment is ongoing, with an initial draft in March 2021, and a final version expected by the end of 2024. Despite this, numerous products were announced in 2022 based on draft standards, with retail availability in early 2023. On 8 January 2024, the Wi-Fi Alliance introduced its Wi-Fi Certified 7 program to certify Wi-Fi 7 devices. While final ratification is not expected until the end of 2024, the technical requirements are essentially complete, and there are already products labeled as Wi‑Fi 7.
The global Wi-Fi 7 market was estimated at US$1 billion in 2023, and is projected to reach US$24.2 billion by 2030.
Core features
The following are core features that have been approved as of Draft 3.0:
4096-QAM (4K-QAM) enables each symbol to carry 12 bits rather than 10 bits, resulting in 20% higher theoretical transmission rates than Wi-Fi 6's 1024-QAM.
Contiguous and non-contiguous 320/160+160 MHz and 240/160+80 MHz bandwidth
Multi-Link Operation (MLO), a feature that increases capacity by simultaneously sending and receiving data across different frequency bands and channels. (2.4 GHz, 5 GHz, 6 GHz)
16 spatial streams and Multiple Input Multiple Output (MIMO) protocol enhancements
Flexible Channel Utilization – Interference can otherwise render an entire Wi-Fi channel unusable. With preamble puncturing, the portion of the channel affected by interference can be blocked off while the rest of the channel continues to be used.
Candidate features
The main candidate features mentioned in the 802.11be Project Authorization Request (PAR) are:
Multi-Access Point (AP) Coordination (e.g. coordinated and joint transmission),
Enhanced link adaptation and retransmission protocol (e.g. Hybrid Automatic Repeat Request (HARQ)),
If needed, adaptation to regulatory rules specific to 6 GHz spectrum,
Integrating Time-Sensitive Networking (TSN) IEEE 802.1Q extensions for low-latency real-time traffic:
IEEE 802.1AS timing and synchronization
IEEE 802.11aa MAC Enhancements for Robust Audio Video Streaming (Stream Reservation Protocol over IEEE 802.11)
IEEE 802.11ak Enhancements for Transit Links Within Bridged Networks (802.11 links in 802.1Q networks)
Bounded latency: credit-based (IEEE 802.1Qav) and cyclic/time-aware traffic shaping (IEEE 802.1Qch/Qbv), asynchronous traffic scheduling (IEEE 802.1Qcr-2020)
IEEE 802.11ax Scheduled Operation extensions for reduced jitter/latency
Additional features
Apart from the features mentioned in the PAR, there are newly introduced features:
Newly introduced 4096-QAM (4K-QAM),
Contiguous and non-contiguous 320/160+160 MHz and 240/160+80 MHz bandwidth,
Frame formats with improved forward-compatibility,
Enhanced resource allocation in OFDMA,
Optimized channel sounding that requires less airtime,
Implicit channel sounding,
More flexible preamble puncturing scheme,
Support of direct links, managed by an access point.
Rate set
Comparison
802.11be Task Group
The 802.11be Task Group is led by individuals affiliated with Qualcomm, Intel, and Broadcom. Those affiliated with Huawei, Maxlinear, NXP, and Apple also have senior positions.
Commercial availability
Qualcomm announced its FastConnect 7800 series on 28 Feb 2022 using 14 nm chips. As of March 2023, the company claims 175 devices will be using their Wi-Fi 7 chips, including smartphones, routers, and access points.
Broadcom followed on 12 April 2022 with a series of 5 chips covering home, commercial, and enterprise uses. The company unveiled its second generation Wi-Fi 7 chips on 20 June 2023 featuring tri-band MLO support and lower costs.
The TP-Link Archer BE900 wireless router was available to consumers in April 2023. The company's Deco BE95 mesh networking system was also available that month. Asus, Eero, Linksys and Netgear had Wi-fi 7 wireless routers available by the end of 2023.
The ARRIS SURFboard G54 is a DOCSIS 3.1 cable gateway featuring Wi-Fi 7. It became available in October 2023.
Lumen's Quantum Fiber W1700K and W1701K are Wi-Fi 7 certified and provided with its 360 WiFi offering. They are the first devices made for a major telecommunications provider to be certified for Wi-Fi 7.
Client devices
Intel launched the BE200 and BE202 wireless adapters for desktop and laptop motherboards in September 2023.
The Asus ROG Strix Z790 E II motherboard is among the first with built-in Wi-Fi 7.
Software
Android 13 and higher provide support for Wi-Fi 7.
The Linux 6.2 kernel provides support for Wi-Fi 7 devices. The 6.4 kernel added Wi-Fi 7 mesh support. Linux 6.5 included significant driver support by Intel engineers, particularly support for MLO.
Support for Wi-Fi 7 was added to Windows 11, as of build 26063.1.
Notes
References
be
Networking standards
Wireless communication systems | Wi-Fi 7 | [
"Technology",
"Engineering"
] | 1,212 | [
"Computer standards",
"Wireless networking",
"Wi-Fi",
"Computer networks engineering",
"Wireless communication systems",
"Networking standards"
] |
60,814,827 | https://en.wikipedia.org/wiki/NGC%205979 | NGC 5979 is a planetary nebula in the constellation Triangulum Australe. It was discovered by John Herschel on April 24, 1835. The central star of the planetary nebula is an O-type star with a spectral type of O(H)3-4.
Gallery
References
External links
Triangulum Australe
Planetary nebulae
5979 | NGC 5979 | [
"Astronomy"
] | 75 | [
"Nebula stubs",
"Triangulum Australe",
"Astronomy stubs",
"Constellations"
] |
60,815,189 | https://en.wikipedia.org/wiki/Climate%20change%20art | Climate change art is art inspired by climate change and global warming, generally intended to overcome humans' hardwired tendency to value personal experience over data and to disengage from data-based representations by making the data "vivid and accessible". One of the goals of climate change art is to "raise awareness of the crisis", as well as to engage viewers politically and environmentally.
Some climate change art involves community involvement with the environment. Other approaches involve revealing socio-political concerns through their various artistic forms, such as painting, video, photography, sound and films. These works are intended to encourage viewers to reflect on their daily actions "in a socially responsible manner to preserve and protect the planet".
Climate change art is created both by scientists and by non-scientist artists. The field overlaps with data art.
History
The Guardian reported that in response to a backlash in the 1990s against fossil fuels and nuclear plants, major energy companies stepped up their philanthropic giving, including to arts organizations, "to a point where many major national institutions were on the payroll of the fossil fuel giants," effectively silencing many environmentally-focused artists.
In 2005 Bill McKibben wrote an article, What the Warming World Needs Now Is Art, Sweet Art that argued that "An intellectual understanding of the scientific facts was not enough – if we wanted to move forward and effect meaningful change, we needed to engage the other side of our brains. We needed to approach the problem with our imagination. And the people best suited to help us do that, he believed, were the artists." According to climate change in the arts organization The Arctic Cycle, "It took some time for artists to heed the call."
In 2009 The Guardian reported that the art world was "waking up to climate-change art." Reporting on the 2020 We Make Tomorrow conference on climate change and the arts in London, Artnet News commented that "instead of being seduced by sponsorships from deep-pocketed organizations invested in the fossil-fuel industry, institutions should look for new funding models."
Effects and influence
Representation and interpretation
According to Artnet News, climate change can be represented meaningfully through art because "Art has a way of getting ahead of the general discourse because it can convey information in novel ways." Climate change artworks differ in how they are interpreted by, and how they impact, the viewer. Laura Kim Sommer and Christian Andreas Klöckner (both from the Norwegian University of Science and Technology) surveyed attendees of the Parisian art festival ArtCOP21 in 2015 (held at the same time as the 2015 United Nations Climate Change Conference) about 37 artworks within the festival. The responses led Sommer and Klöckner to develop four characterizations of the works of art in terms of their content and the viewers' responses to them. The first categorization was labeled "the comforting utopia", meaning that the artwork evoked positive emotions but did not inspire people to take positive climate action. The second categorization was labeled "the challenging dystopia", meaning that the artwork evoked negative emotions and tended to discourage climate action. The third categorization was labeled "the mediocre mythology", meaning that the artwork evoked neutral emotions and did not inspire people to take positive climate action.
The final categorization was labeled "the awesome solution", meaning that the work of art evoked both positive and negative emotions but inspired people to take positive climate action. In 2019, Sommer and Klöckner categorized the collected data into different psychological characteristics and connected these to functions of the brain to see where the emotions triggered by observing the art originated. They concluded that works of art outside "the challenging dystopia" category were generally more likely to leave audiences open to positive climate action, with "the awesome solution" works of art being the most likely of all the categories to inspire positive climate action.
Journalist Betsy Mason wrote in Knowable that humans are visual creatures by nature, absorbing information in graphic form that would elude them in words, adding that bad visuals can impair public understanding of science. Similarly, Bang Wong, creative director of MIT's Broad Institute, stated that visualizations can reveal patterns, trends, and connections in data that are difficult or impossible to find any other way.
In particular, climate change art has been used both to make scientific data more accessible to non-scientists and to express people's fears. Some research indicates that climate change art is not particularly effective in changing peoples views, though art with a "hopeful" message gives people ideas for change. Projecting a positive message, climate scientist Ed Hawkins said that "infiltrating popular culture is a means of triggering a change of attitude that will lead to mass action".
Students who are taught to illustrate the concepts of global warming through art can show greater learning gains than students who learn the scientific basis alone. This was illustrated by a study conducted at a public high school in Portugal by Julia Bentz (a postgraduate researcher for the Centre for Ecology, Evolution, and Environmental Changes at the University of Lisbon in Portugal) in 2018 and 2019. In this study, 70 high school students between the ages of 16 and 18 undertook two separate projects relating to the arts and global warming. The first project involved the students finding a small but impactful change in their lives that leads to positive global warming change and sticking to it for 30 days; the data they collected was reflected upon in group discussions and individual writing and art projects. The second project involved the students reading global warming-focused short stories, discussing their takeaways in groups, and producing art projects focused on specific topics from those discussions. Bentz took first-hand observations of the various group and individual discussions and assignments and transmuted them into analytic memos, which suggested that such projects can engage students about global warming more effectively than a more fear-based approach.
It is thought that people who engage with climate change art feel a sense of belonging, a feeling of connection to a cause, and a sense of empowerment. Participatory climate change art, such as downloading warming stripes graphics for one's own locality or using a climate-related logo, provides an interactive element that gets people involved.
Lucia Pietroiusti, the curator of "general ecology" at the Serpentine Galleries, suggested "a radical redefinition of what constitutes an artwork...to include environmental campaigns," saying that "By calling something an artwork, you are allowing an institution to support it."
Expansion of formats
In recent years, the expansion of climate change art beyond purely visual representations has broadened the audiences able to appreciate and experience this art, specifically those who experience visual impairment. These musical forms of climate change art include pieces performed using environmental media to represent climate change and popular music whose lyrics address climate change topics. Climate change composer Daniel Crawford said that "climate scientists have a standard toolbox to communicate their data, and what we [climate change artists] are trying to do is to add to that another tool to that toolbox to people who might get more out of this than maps graphs and numbers". In the performing arts, there has been an increasing number of stage productions related to climate change, such as those performed by the global movement Climate Change Theatre Action.
A 2022 survey article published in Music and Science noted that music was already being written and performed to address the climate crisis, but said that music psychology research had not addressed that question directly. The article said that there is "strong evidence" for the power of music "to change listeners' and performers' emotions, moods, thoughts, levels of empathy, and beliefs", and urged further research.
Use of climate change art by non-governmental organizations
Various non-governmental organizations (NGOs) work to emphasize the capacity of climate change-inspired art to inspire positive climate action worldwide. In Australia, the NGO CLIMARTE aims not only to get accurate information out through works of art made jointly by artists and climate change-focused scientists, but also to inspire positive climate action; it has opened a gallery based on such works of art in the Richmond neighborhood of Melbourne. In the Netherlands, the NGO Fossil Free Culture works to sever the linkage between fine arts organizations and global petroleum corporations, and to see that works of art that are critical of climate change get a proper forum to inspire positive climate action. Based out of Yangon, Myanmar, but operating all over Southeast Asia, the NGO Kinnari Ecological Theatre Project (KETEP) stages regional folk performance art intended to confront a climate change issue chosen by the performers, in hopes of inspiring its audiences toward positive climate action. In the United Kingdom, the NGO Platform works to incorporate education into the mixture of science and fine arts by providing schools with curricula that teach climate change science through arts and literature-based projects.
Emphasis on solutions
A 2015 exhibition by Art Works For Change aimed to demonstrate the options available to reduce emissions and other climate change impacts, such as reducing carbon footprints, conserving energy, and making sustainable transportation choices, among others.
Reception
Malcolm Miles (professor of Cultural Theory at the University of Plymouth, U.K.) is among those who believe that art that is centered on global warming can potentially normalize climate inaction. Miles cites the Natural Reality art exhibition held in Aachen, Germany in 1999 as an example, which had a credo of needing to find original ideas for how to depict nature "'because the images of the visible nature it processed before have lost their validity'". Miles similarly mentions the 2006 art exhibition Climate Change and Cultural Change, held in both Newcastle and Gateshead in northern England, which tried to be more direct in its climate advocacy by commissioning works of art such as "a montage by [artist] Peter Kennard depicting the Earth attached to a petrol pump, choking on black oil" and Water Mist Wall (2005), a video installation by David Buckland that detailed his efforts to provide a carbon-free schooner ride to the Arctic to see first-hand the melting glaciers and icebergs caused by global warming. These intense visual displays had a numbing effect on audience members, leading not to positive climate action but to climate inaction.
Miles also argues that art that is centered on global warming might in truth be centered on advancing the artist's own self-representation rather than propagating concrete positive change about global warming, and that such works of art can at best spread awareness and nothing more. The history of 'found objects' as art, which started in the Dadaist movement of modern art in the early 20th century, has transitioned in more recent years into "the art [sculptures] of natural conservation of Andy Goldsworthy", which comments on how modern landscapes are focused less on the natural aspects of an environment and more on human interaction within an environment, such as "war memorials" and "country walking". Miles mentions that the majority of people who see Goldsworthy's work do not see it in person – and outdoors – but through photos found in books, websites, and gallery shows. Similarly, Miles cites the Groundworks art exhibition held in Pittsburgh, Pennsylvania in 2005, curated by "art historian Grant Kester", whom Miles quotes as saying, of an artist's relationship to nature, that "'the artist's brush can as easily resemble a dissecting scalpel as it can a lover's caress'"; Kester attributes this to an artist's need to be a part of the global market economy to sustain themselves.
Finally, Miles argues that art that is centered on global warming but is aesthetically boring or awful is more likely to lead to inaction than works of art that are aesthetically exciting or awe-inspiring. As an example, Miles points to reviews of Goldsworthy's sculptures by David Matless – a professor of Cultural Geography at the University of Nottingham, U.K. – and George Revill – a professor of Cultural Historical Geography at The Open University, U.K. – which focus not on the sculptures' aesthetic quality, on which they deliberately do not comment, but on their environmental advocacy.
Examples
Olafur Eliasson's "Ice Watch" piece is an example of climate change art.
Researchers analyzing artwork created between 2000 and 2016 found that climate change art production increased over the period.
In 1998, Matthew Burtner composed Sikuigvik (The Time of Ice Melting), which began as an ode to the "beauty of the Arctic", but over time has evolved into a frightening representation of the loss of the Arctic environment.
In 2002, Alan Sonfist created a series of wood sculptures sourced from the Roybal Fire in Santa Fe, New Mexico. The work included 22 pieces of salvaged wood standing vertically on concrete pedestals with tree seeds scattered on the surrounding floorspace, using natural elements to make ecological processes and concepts tangible. Along with the sculptures, he created a collection of paintings. Dian Parker writes in ArtNet, "In Burning Forest, a more recent series of paintings, Sonfist depicts trees selected from the majestic 19th-century landscapes of his “heroes,” the Hudson River School artists. Sonfist disrupts their visions of America’s pristine natural beauty, however, by setting the trees on fire to visually represent the climate crisis. He continues to work on this series to this day."
A group started in 2005 to create crochet versions of coral reefs grew by 2022 to over 20,000 contributors in what became the Crochet Coral Reef Project. Organized by Margaret and Christine Wertheim, the project promotes awareness of the effects of global warming. Project creations have been displayed in galleries and museums and seen by an estimated 2 million people. Many creations apply hyperbolic (curved) geometric shapes—distinguished from Euclidean (flat) geometry—to emulate natural structures.
In 2007, artist Eve Mosher used a sports-field chalk marker to draw a blue "high-water" line around Manhattan and Brooklyn, showing the areas that would be underwater if climate change predictions are realized. Her HighWaterLine Project has since drawn high-water lines around Bristol, Philadelphia, and two coastal cities in Florida.
In 2012, filmmaker Jeff Orlowski made Chasing Ice, documenting photographer James Balog's Extreme Ice Survey, which uses time-lapse photography to show the disappearance of glaciers over time.
In 2015, an online exhibition called 'Footing The Bill: Art and Our Ecological Footprint', was created by Art Works For Change to show a range of artist expressions (such as Sebastian Copeland and Fred Tomaselli) of climate change through their work.
Starting in 2017 The Tempestry Project encouraged fiber artists to create "tempestries", scarf-size banners showing temperature change over time. Each tempestry is knitted or crocheted, one row per day in a color representing that day's high temperature, for a year. Two or more tempestries for the same location, each representing different years, are displayed together to show daily-high temperature change over time.
In 2018 artist Xavier Cortada's project Underwater Home Owner's Association placed signs in front yards throughout Miami, Florida indicating each property's height above sea level, to illustrate how much sea level rise would flood that property.
In 2019, the Grantham Institute - Climate Change and the Environment, Imperial College London, launched its inaugural Grantham Art Prize, commissioning original works by six artists who collaborated with climate researchers.
See also
Craftivism
Environmentalism
Climate change
Environmental Art
Ecological Art
References
External links
— Survey of climate change visualizations
"Footing the Bill: Art and Our Ecological Footprint (2020)" Art Works For Change (archive)
Visual arts genres
Climate and weather statistics
Climate change in art
Climate communication
Data and information visualization
Visual arts | Climate change art | [
"Physics"
] | 3,367 | [
"Weather",
"Physical phenomena",
"Climate and weather statistics"
] |
60,815,736 | https://en.wikipedia.org/wiki/Kink%20%28materials%20science%29 | Kinks are deviations of a dislocation defect along its glide plane. In edge dislocations, the constant glide plane allows short regions of the dislocation to turn, converting into screw dislocations and producing kinks. Screw dislocations have rotatable glide planes, thus kinks that are generated along screw dislocations act as an anchor for the glide plane. Kinks differ from jogs in that kinks are strictly parallel to the glide plane, while jogs shift away from the glide plane.
Energy
Pure-edge and screw dislocations are conceptually straight in order to minimize their length and, through it, the strain energy of the system. Low-angle mixed dislocations, on the other hand, can be thought of as primarily edge dislocations with screw kinks in a staircase structure (or vice versa), switching between straight pure-edge and pure-screw dislocation segments. In reality, kinks are not sharp transitions. Both the total length of the dislocation and the kink angle are dependent on the free energy of the system. The primary dislocation regions lie in Peierls-Nabarro potential minima, while the kink requires additional energy in the form of an energy peak. To minimize free energy, the kink equilibrates at a certain length and angle. Large energy peaks create short but sharp kinks in order to minimize the dislocation length within the high-energy region, while small energy peaks create long and drawn-out kinks in order to minimize total dislocation length.
Kink movement
Kinks facilitate the movement of dislocations along their glide planes under shear stress, and are directly responsible for the plastic deformation of crystals. When a crystal undergoes shear force, e.g. when cut with scissors, the applied shear force causes dislocations to move through the material, displacing atoms and deforming the material. The entire dislocation does not move at once – rather, the dislocation produces a pair of kinks, which then propagate in opposite directions down the length of the dislocation, eventually shifting the entire dislocation by a Burgers vector. The velocity of dislocations through kink propagation is also clearly limited by the nucleation frequency of kinks, as a lack of kinks compromises the mechanism by which dislocations move.
As shear force approaches infinity, the velocity at which dislocations migrate is limited by the physical properties of the material, maximizing at the material's sound velocity. At lower shear stresses, the velocity of dislocations ends up relating exponentially to the applied shear force:
where
is applied shear force
and are experimentally found constants
The above equation gives the upper limit on dislocation velocity. The interaction of a moving dislocation with its environment, particularly with other defects such as jogs and precipitates, results in drag and slows down the dislocation:
where
is the drag parameter of the crystal
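The explicit expressions are not reproduced here. Purely as an illustration, the sketch below assumes a commonly used empirical form, v = v0·exp(−D/τ), caps the velocity at the material's sound velocity, and applies a crude drag reduction; the functional form, constants, and drag treatment are assumptions for demonstration rather than values from this article.

```python
import math

def dislocation_velocity(shear_stress, v0=1.0e3, D=50.0e6, sound_velocity=5.0e3, drag=0.0):
    """Illustrative dislocation velocity (m/s) vs. applied shear stress (Pa).

    Assumed empirical form v = v0 * exp(-D / tau), capped at the sound velocity
    and reduced by a simple drag factor; constants are placeholders, not measured values.
    """
    if shear_stress <= 0:
        return 0.0
    v = v0 * math.exp(-D / shear_stress)
    v = min(v, sound_velocity)          # upper limit set by the material's sound velocity
    return v / (1.0 + drag)             # crude drag correction from jogs, precipitates, etc.

for tau in (5e6, 20e6, 100e6, 1e9):     # increasing applied shear stress (Pa)
    print(f"tau = {tau:.0e} Pa -> v ~ {dislocation_velocity(tau, drag=0.2):.2f} m/s")
```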
Kink movement is strongly dependent on temperature as well. Higher thermal energy assists in the generation of kinks, as well as increasing atomic vibrations and promoting dislocation motion.
Kinks may also form under compressive stress due to the buckling of crystal planes into a cavity. At high compressive forces, masses of dislocations move at once. Kinks align with each other, forming walls of kinks that propagate all at once. At sufficient forces, the tensile force produced by the dislocation core exceeds the fracture stress of the material, combining kink boundaries into sharp kinks and delaminating the basal planes of the crystal.
References
Crystallographic defects | Kink (materials science) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 767 | [
"Crystallographic defects",
"Crystallography",
"Materials degradation",
"Materials science"
] |
60,816,692 | https://en.wikipedia.org/wiki/Milan%20Mrksich | Milan Mrksich (born 15 August 1968) is an American chemist. He is the Henry Wade Rogers Professor at Northwestern University with appointments in chemistry, biomedical engineering and cell & developmental biology.
He also served as both the founding director of the Center for Synthetic Biology and as an associate director of the Robert H. Lurie Comprehensive Cancer Center at Northwestern. Mrksich also served as the Vice President for Research of Northwestern University.
His research involves the chemistry and synthesis of surfaces that contact biological environments. His laboratory has pioneered several technologies, including strategies to integrate living cells with microelectronic devices, methods to enable high throughput assays for drug discovery, and approaches to making synthetic fusion proteins for applications as therapeutics. Most notably, he developed the SAMDI-MS biochip technology that allows for high-throughput quantification of surface-based biochemical assays using MALDI mass spectrometry. Through SAMDI-MS, Mrksich has become a leader in using label-free technology for drug discovery, founding the company SAMDI Tech in 2011 that primarily serves global pharmaceutical companies. His work has been described in over 240 publications (h-index 98), 500 invited talks, and 18 patents.
Early life and education
Milan Mrksich was born on August 15, 1968, to Serbian immigrants and raised in Justice, Illinois. He graduated from the University of Illinois at Urbana-Champaign in 1989 with a B.S. in chemistry, working in the laboratory of Steven Zimmerman on molecular tweezers. He completed his PhD in organic chemistry in 1994 at Caltech under chemist Peter B. Dervan. After graduate school, he was an American Chemical Society postdoctoral fellow at Harvard University under chemist George M. Whitesides before joining the faculty at the University of Chicago in 1996. He worked there for 15 years before joining the faculty at Northwestern University in 2011.
Research history
Early career
Early on as an independent investigator, Mrksich developed and executed the concept of dynamic substrates for cell culture. Here, self-assembled monolayers (SAMs) present cell adhesive ligands with perfect control over density and orientation against a non-adhesive, inert background, such as ethylene glycol. These monolayers can be further modified with electroactive groups that selectively release immobilized ligand when stimulated with an electric potential. Several strategies using this approach were studied in the context of cell signaling, migration, and co-culture. Subsequent cell-based work focused on developing methods to pattern cells on the aforementioned SAMs. The work has mostly utilized microcontact printing to confine adherent cells into defined positions, shapes, and sizes. Ultimately, his group's work has revealed examples of how cellular mechanics and cytoskeletal structure influence phenotype. A primary example of this involved investigating how cell shape exerts control over the differentiation of mesenchymal stem cells. Further work utilized these patterned monolayers to investigate the relationship between various cytoskeletal elements and to observe complex phenotypic differences in patient-derived neuroprogenitor cells. Recent work in the group investigating cell patterning has utilized photoactive adhesive peptides, allowing for local, spatiotemporal control of cell adhesion to study gap junction formation.
SAMDI-MS
While performing much of the early dynamic substrate and cell patterning work, Mrksich also pioneered an assay platform that utilizes SAMs of alkanethiolates on gold. The monolayers contain capture ligands (e.g. biotin or maleimide) that can selectively immobilize a peptide of interest. Subsequently, the monolayer can be treated with a specific enzyme or a complex mixture, such as cell lysate, that can modify the peptide through various biological processes (e.g. phosphorylation). For quality control, the monolayers present these peptides against a background of tri(ethylene glycol) groups to prevent the nonspecific adsorption of protein to the surface, which could obfuscate the reaction signal, thereby enabling quantitative and reproducible assays. Most significantly, the monolayers can be characterized with MALDI mass spectrometry in a technique known as SAMDI-MS, which provides the masses of the substituted alkanethiolates and, therefore, the mass change of the immobilized peptide that results from enzyme activity. The method is compatible with standard array formats and liquid handling robotics, allowing a throughput in the tens of thousands of reactions per day. Importantly, the matrix-assisted laser desorption time-of-flight mass spectrometry (MALDI-TOF) analysis provides a fast and quantitative mass-shift readout without the need for labels.
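Because the readout is a mass shift of the immobilized peptide, the analysis of such an assay amounts to comparing peak masses before and after the enzyme reaction. The sketch below is illustrative only; the peptide masses, modification shifts, and tolerance are hypothetical values, not figures taken from Mrksich's publications.

```python
# Illustrative SAMDI-style analysis: detect enzyme activity as a mass shift (Da)
# between substrate and product peaks. All values are hypothetical examples.

MODIFICATION_SHIFTS = {
    "phosphorylation": +79.97,   # +HPO3
    "acetylation":     +42.01,   # +C2H2O
    "demethylation":   -14.02,   # -CH2
}

def detect_modification(substrate_mass, product_mass, tolerance=0.3):
    """Return the modification whose expected shift matches the observed one."""
    observed = product_mass - substrate_mass
    for name, shift in MODIFICATION_SHIFTS.items():
        if abs(observed - shift) <= tolerance:
            return name, observed
    return None, observed

# Hypothetical alkanethiolate-peptide conjugate masses from a MALDI spectrum
name, shift = detect_modification(substrate_mass=1567.8, product_mass=1647.7)
print(f"Observed shift {shift:+.2f} Da -> {name or 'no modification detected'}")
```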
Megamolecules
Most recently, Mrksich's group has focused on developing a technique, known as megamolecules, for assembling large molecular structures with perfectly defined architectures and orientations. This is primarily done through the use of fusion proteins and irreversible inhibitor linkers that assemble stable intermediates. Structure-function relationships, including the synthesis of cyclic and antibody-mimic structures, have been investigated for potential therapeutic application.
Entrepreneurship
Mrksich has been an active entrepreneur over the past twenty years. He co-founded SAMDI Tech in 2011, which uses his label-free assay technology to perform high throughput screens for pharmaceutical companies. SAMDI Tech entered into a partnership with Charles River Laboratories in 2018 and was purchased by CRL in 2023. Mrksich also co-founded WMR Biomedical in 2008, with George Whitesides and Carmichael Roberts to develop resorbable stent materials; this company was renamed Lyra Therapeutics and had an IPO in 2020 (NASDAQ LYRA) and has drug-eluting stents in clinical trials for ear, nose and throat disease, including chronic rhinosinusitis. Mrksich has recently founded ModuMab Therapeutics, which applies his megamolecule technology to creating antibody mimics for a broad range of diseases.
Service
Mrksich has also been active in serving the scientific community in a number of roles. These include his current service as the Scientific Director of the Searle Scholars Program, as a member of the Board of Governors for Argonne National Laboratory, and as a member of the Board of Directors for the Camille & Henry Dreyfus Foundation. His past appointments include serving on and chairing DARPA's Defense Sciences Research Council and many program advisory committees.
Awards and honors
Personal life
Mrksich lives in Hinsdale, Illinois, with his two children.
References
American bioengineers
Scientists from Chicago
Northwestern University faculty
University of Illinois Urbana-Champaign alumni
California Institute of Technology alumni
American Chemical Society
University of Chicago faculty
People from Hinsdale, Illinois
Living people
1968 births
American people of Serbian descent | Milan Mrksich | [
"Physics",
"Chemistry"
] | 1,403 | [
"Monolayers",
"American Chemical Society",
"Atoms",
"Matter"
] |
60,817,325 | https://en.wikipedia.org/wiki/Diffusion-limited%20escape | Diffusion-limited escape occurs when the rate of atmospheric escape to space is limited by the upward diffusion of escaping gases through the upper atmosphere, and not by escape mechanisms at the top of the atmosphere (the exobase). The escape of any atmospheric gas can be diffusion-limited, but only diffusion-limited escape of hydrogen has been observed in our solar system, on Earth, Mars, Venus and Titan. Diffusion-limited hydrogen escape was likely important for the rise of oxygen in Earth's atmosphere (the Great Oxidation Event) and can be used to estimate the oxygen and hydrogen content of Earth's prebiotic atmosphere.
Diffusion-limited escape theory was first used by Donald Hunten in 1973 to describe hydrogen escape on one of Saturn's moons, Titan. The following year, in 1974, Hunten found that the diffusion-limited escape theory agreed with observations of hydrogen escape on Earth. Diffusion-limited escape theory is now used widely to model the composition of exoplanet atmospheres and Earth's ancient atmosphere.
Diffusion-Limited Escape of Hydrogen on Earth
Hydrogen escape on Earth occurs at ~500 km altitude at the exobase (the lower border of the exosphere), where gases are collisionless. Hydrogen atoms at the exobase that exceed the escape velocity escape to space without colliding with another gas particle.
For a hydrogen atom to escape from the exobase, it must first travel upward through the atmosphere from the troposphere. Near ground level, hydrogen in the form of H2O, H2, and CH4 travels upward in the homosphere through turbulent mixing, which dominates up to the homopause. At about 17 km altitude, the cold tropopause (known as the "cold trap") freezes out most of the H2O vapor that travels through it, preventing the upward mixing of some hydrogen. In the upper homosphere, hydrogen bearing molecules are split by ultraviolet photons leaving only H and H2 behind. The H and H2 diffuse upward through the heterosphere to the exobase where they escape the atmosphere by Jeans thermal escape and/or a number of suprathermal mechanisms. On Earth, the rate-limiting step or "bottleneck" for hydrogen escape is diffusion through the heterosphere. Therefore, hydrogen escape on Earth is diffusion-limited.
By considering one-dimensional molecular diffusion of H2 through a heavier background atmosphere, you can derive a formula for the upward diffusion-limited flux of hydrogen:

Φ_DL = C · f_T(H)

Here C is a constant for a particular background atmosphere and planet, and f_T(H) is the total hydrogen mixing ratio in all its forms above the tropopause. You can calculate f_T(H) by summing all hydrogen-bearing species weighted by the number of hydrogen atoms each species contains:

f_T(H) = 4·f(CH4) + 2·f(H2O) + 2·f(H2) + ...
For Earth's atmosphere, C ≈ 2.5 × 10^13 H atoms cm−2⋅s−1, and the concentration of hydrogen-bearing gases above the tropopause is 1.8 ppmv (parts per million by volume) CH4, 3 ppmv H2O, and 0.55 ppmv H2. Plugging these numbers into the formulas above gives a predicted diffusion-limited hydrogen escape rate of about 3.6 × 10^8 H atoms cm−2⋅s−1. This calculated hydrogen flux agrees with measurements of hydrogen escape.
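As a check on the arithmetic, the sketch below evaluates the formula with the mixing ratios quoted above. The proportionality constant used (about 2.5 × 10^13 H atoms cm−2⋅s−1) is the commonly cited literature value and is treated here as an assumption.

```python
# Diffusion-limited hydrogen escape flux for present-day Earth (sketch).
# The constant C is the commonly cited literature value, assumed here.

C = 2.5e13                 # H atoms cm^-2 s^-1 per unit total hydrogen mixing ratio (assumed)

mixing_ratios_ppmv = {"CH4": 1.8, "H2O": 3.0, "H2": 0.55}
h_atoms_per_molecule = {"CH4": 4, "H2O": 2, "H2": 2}

# Total hydrogen mixing ratio, weighted by hydrogen atoms per molecule
f_T = sum(mixing_ratios_ppmv[s] * 1e-6 * h_atoms_per_molecule[s] for s in mixing_ratios_ppmv)

flux = C * f_T             # H atoms cm^-2 s^-1
print(f"f_T(H) = {f_T:.2e}  ->  diffusion-limited flux ~ {flux:.1e} H atoms cm^-2 s^-1")
```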
Note that hydrogen is the only gas in Earth's atmosphere that escapes at the diffusion limit. Helium escape is not diffusion-limited and instead occurs by a suprathermal process known as the polar wind.
Derivation
Transport of gas molecules in the atmosphere occurs by two mechanisms: molecular and eddy diffusion. Molecular diffusion is the transport of molecules from an area of higher concentration to lower concentration due to thermal motion. Eddy diffusion is the transport of molecules by the turbulent mixing of a gas. The sum of the molecular and eddy diffusion fluxes gives the total flux of a gas through the atmosphere:

Φ_total = Φ_molecular + Φ_eddy

The vertical eddy diffusion flux is given by

Φ_eddy = −K n (df/dz)

where K is the eddy diffusion coefficient, n is the number density of the atmosphere (molecules cm−3), and f is the volume mixing ratio of the gas. The above formula for eddy diffusion is a simplification for how gases actually mix in the atmosphere. The eddy diffusion coefficient can only be empirically derived from atmospheric tracer studies.
The molecular diffusion flux, on the other hand, can be derived from theory. The general formula for the diffusion of gas 1 relative to gas 2 is given by
Each variable is defined in the table at right. The terms on the right hand side of the formula account for diffusion due to molecular concentration, pressure, temperature, and force gradients respectively. The expression above ultimately comes from the Boltzmann transport equation. We can simplify the above equation considerably with several assumptions. We will consider only vertical diffusion, and a neutral gas such that the accelerations are both equal to gravity (g), so the last term cancels. We are left with
We are interested in the diffusion of a lighter molecule (e.g. hydrogen) through a stationary heavier background gas (air). Therefore, we can take velocity of the heavy background gas to be zero: . We can also use the chain rule and the hydrostatic equation to rewrite the derivative in the second term.
The chain rule can also be used to simplify the derivative in the third term.
Making these substitutions gives
Note that we have also made the substitution . The flux of molecular diffusion is given by
By adding the molecular diffusion flux and the eddy diffusion flux, we get the total flux of molecule 1 through the background gas
Temperature gradients are fairly small in the heterosphere, so , which leaves us with
The maximum flux of gas 1 occurs when . Qualitatively, this is because must decrease with altitude in order to contribute to the upward flux of gas 1. If decreases with altitude, then must decrease rapidly with altitude (recall that ). Rapidly decreasing would require rapidly increasing in order to drive a constant upward flux of gas 1 (recall ). Rapidly increasing isn't physically possible. For a mathematical explanation for why , see Walker 1977, p. 160. The maximum flux of gas 1 relative to gas 2 (, which occurs when ) is therefore
Since ,
or
This is the diffusion-limited flux of a molecule. For any particular atmosphere, is a constant. For hydrogen (gas 1) diffusion through air (gas 2) in the heterosphere on Earth , m⋅s−2, and K. Both H and H2 diffuse through the heterosphere, so we will use a diffusion parameter that is the weighted sum of H and H2 number densities at the tropopause.
For molecules cm−3, molecules cm−3, cm−1⋅s−1, and cm−1⋅s−1, the binary diffusion parameter is . These numbers give molecules cm−2⋅s−1. In more detailed calculations the constant is molecules cm−2⋅s−1. The above formula can be used to calculate the diffusion-limited flux of gases other than hydrogen.
Diffusion-limited escape in the Solar System
Every rocky body in the solar system with a substantial atmosphere, including Earth, Mars, Venus, and Titan, loses hydrogen at the diffusion-limited rate.
For Mars, the constant governing diffusion-limited escape of hydrogen is molecules cm−2⋅s−1. Spectroscopic measurements of Mars' atmosphere suggest that . Multiplying these numbers together gives the diffusion-limited escape rate of hydrogen:
H atoms cm−2⋅s−1
Mariner 6 and 7 spacecraft indirectly observed hydrogen escape flux on Mars between and H atoms cm−2⋅s−1. These observations suggest that Mars' atmosphere is losing hydrogen at roughly the diffusion limited value.
Observations of hydrogen escape on Venus and Titan are also at the diffusion-limit. On Venus, hydrogen escape was measured to be about H atoms cm−2⋅s−1, while the calculated diffusion limited rate is about H atoms cm−2⋅s−1, which are in reasonable agreement. On Titan, hydrogen escape was measured by the Cassini spacecraft to be H atoms cm−2⋅s−1, and the calculated diffusion-limited rate is H atoms cm−2⋅s−1.
Applications to Earth's ancient atmosphere
Oxygen content of the prebiotic atmosphere
We can use diffusion-limited hydrogen escape to estimate the amount of O2 in the Earth's atmosphere before the rise of life (the prebiotic atmosphere). The O2 content of the prebiotic atmosphere was controlled by its sources and sinks. If the potential sinks of O2 greatly outweighed the sources, then the atmosphere would have been nearly devoid of O2.
In the prebiotic atmosphere, O2 was produced by the photolysis of CO2 and H2O in the atmosphere:
CO2 + hν → CO + O
H2O + hν → ½ O2 + 2 H
These reactions aren't necessarily a net source of O2. If the CO and O produced from CO2 photolysis remain in the atmosphere, then they will eventually recombine to make CO2. Likewise, if the H and O2 from H2O photolysis remain in the atmosphere, then they will eventually react to form H2O. The photolysis of H2O is a net source of O2 only if the hydrogen escapes to space.
If we assume that hydrogen escape occurred at the diffusion limit in the prebiotic atmosphere, then we can estimate the amount of H2 that escaped due to water photolysis. If the prebiotic atmosphere had a modern stratospheric H2O mixing ratio of 3 ppmv, which is equivalent to 6 ppmv of H after photolysis, then

Φ_DL ≈ (2.5 × 10^13) × (6 × 10^−6) ≈ 1.5 × 10^8 H atoms cm−2⋅s−1
Stoichiometry says that every mole of H that escaped produced 0.25 mol of O2 (i.e. 2 H2O → O2 + 4 H), so the abiotic net production of O2 from H2O photolysis was ≈3.8 × 10^7 O2 molecules cm−2⋅s−1. The main sinks of O2 would have been reactions with volcanic hydrogen. The modern volcanic H flux is about H atoms cm−2⋅s−1. If the prebiotic atmosphere had a similar volcanic hydrogen flux, then the potential O2 sink would have been a fourth of the hydrogen volcanism, or O2 molecules cm−2⋅s−1. These calculated values predict that potential O2 sinks were ~50 times greater than the abiotic source. Therefore, O2 must have been nearly absent in the prebiotic atmosphere. Photochemical models, which do more complicated versions of the calculations above, predict prebiotic O2 mixing ratios below 10−11, which is extremely low compared to the modern O2 mixing ratio of 0.21.
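The budget argument can be sketched numerically as below; the volcanic hydrogen flux used here is an assumed illustrative value (chosen so that the sink-to-source ratio matches the ~50-fold imbalance described above), so only the relative scale of the output is meaningful.

```python
# Sketch: prebiotic O2 budget -- abiotic O2 source (H2O photolysis + H escape)
# versus the O2 sink from volcanic hydrogen. Illustrative numbers only.

C = 2.5e13             # cm^-2 s^-1, assumed diffusion-limited constant (as above)
f_H_from_H2O = 6e-6    # 3 ppmv stratospheric H2O -> 6 ppmv of H after photolysis

H_escape = C * f_H_from_H2O        # ~1.5e8 H atoms cm^-2 s^-1
O2_source = H_escape / 4           # 2 H2O -> O2 + 4 H, so one O2 per four escaping H

volcanic_H = 7.5e9                 # H atoms cm^-2 s^-1 -- assumed illustrative value
O2_sink = volcanic_H / 4           # one O2 consumed per four volcanic H atoms

print(f"Abiotic O2 source ~ {O2_source:.1e} molecules cm^-2 s^-1")
print(f"Potential O2 sink ~ {O2_sink:.1e} molecules cm^-2 s^-1")
print(f"Sink/source ratio ~ {O2_sink / O2_source:.0f}")  # sinks dominate by ~50x
```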
Hydrogen content of the prebiotic atmosphere
H2 concentrations in the prebiotic atmosphere were also controlled by its sources and sinks. In the prebiotic atmosphere, the main source of H2 was volcanic outgassing, and the main sink of outgassing H2 would have been escape to space. Some outgassed H2 would have reacted with atmospheric O2 to form water, but this was very likely a negligible sink of H2 because of scarce O2 (see the previous section). This is not the case in the modern atmosphere where the main sink of volcanic H2 is its reaction with plentiful atmospheric O2 to form H2O.
If we assume that the prebiotic H2 concentration was at a steady-state, then the volcanic H2 flux was approximately equal to the escape flux of H2.
Additionally, if we assume that H2 was escaping at the diffusion-limited rate as it is on the modern Earth then
If the volcanic H2 flux was the modern value of H atoms cm−2⋅s−1, then we can estimate the total hydrogen content of the prebiotic atmosphere.
ppmv
By comparison, H2 concentration in the modern atmosphere is 0.55 ppmv, so prebiotic H2 was likely several hundred times higher than today's value.
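A corresponding steady-state sketch for hydrogen is shown below; the volcanic flux is again an assumed illustrative value, and the estimated mixing ratio scales linearly with it.

```python
# Sketch: steady-state prebiotic H2 -- volcanic outgassing balanced by
# diffusion-limited escape, so f_T(H) ~ (volcanic H flux) / C.

C = 2.5e13          # cm^-2 s^-1, assumed diffusion-limited constant (as above)
volcanic_H = 7.5e9  # H atoms cm^-2 s^-1 -- assumed illustrative value

f_T_H = volcanic_H / C     # total hydrogen mixing ratio (H-atom equivalents)
f_H2 = f_T_H / 2           # if essentially all of that hydrogen is carried as H2
modern_H2_ppmv = 0.55

print(f"Prebiotic H2 mixing ratio ~ {f_H2 * 1e6:.0f} ppmv")
print(f"Enrichment over modern H2 ~ {f_H2 * 1e6 / modern_H2_ppmv:.0f}x")
```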
This estimate should be considered as a lower bound on the actual prebiotic H2 concentration. There are several important factors that we neglected in this calculation. The Earth likely had higher rates of hydrogen outgassing because the interior of the Earth was much warmer ~4 billion years ago. Additionally, there is geologic evidence that the mantle was more reducing in the distant past, meaning that even more reduced gases (e.g. H2) would have been outgassed by volcanoes relative to oxidized volcanic gases. Other reduced volcanic gases, like CH4 and H2S, should also contribute to this calculation.
References
Atmosphere
Hydrogen
Meteorological hypotheses
Origin of life
Oxygen
Proterozoic | Diffusion-limited escape | [
"Biology"
] | 2,580 | [
"Biological hypotheses",
"Origin of life"
] |
60,819,045 | https://en.wikipedia.org/wiki/Galactic%20algorithm | A galactic algorithm is an algorithm with record-breaking theoretical (asymptotic) performance, but which isn't used due to practical constraints. Typical reasons are that the performance gains only appear for problems that are so large they never occur, or the algorithm's complexity outweighs a relatively small gain in performance. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth.
Possible use cases
Even if they are never used in practice, galactic algorithms may still contribute to computer science:
An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms.
Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical.
An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong, and hence advance the theory of algorithms. As Lipton states: Similarly, a hypothetical algorithm for the Boolean satisfiability problem with a large but polynomial time bound, such as , although unusable in practice, would settle the P versus NP problem, considered the most important open problem in computer science and one of the Millennium Prize Problems.
Examples
Integer multiplication
An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs O(n log n) bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits."
Primality testing
The AKS primality test is galactic. It is the most theoretically sound of any known algorithm that can take an arbitrary number and tell if it is prime. In particular, it is provably polynomial-time, deterministic, and unconditionally correct. All other known algorithms fall short on at least one of these criteria, but the shortcomings are minor and the calculations are much faster, so they are used instead. ECPP in practice runs much faster than AKS, but it has never been proven to be polynomial time. The Miller–Rabin test is also much faster than AKS, but produces only a probabilistic result. However, the probability of error can be driven down to arbitrarily small values, good enough for practical purposes. Finally, the Miller test, a deterministic variant of Miller–Rabin, runs in polynomial time over all inputs, but its correctness depends on the generalized Riemann hypothesis (which is widely believed, but not proven). The existence of these (much) faster alternatives means AKS is not used in practice.
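As an illustration of why the probabilistic alternatives dominate in practice, below is a compact, purely illustrative Python implementation of the Miller–Rabin test; each random round reduces the error probability by at least a factor of four, so a few dozen rounds make the error negligible.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test: probabilistic, error probability <= 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```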
Matrix multiplication
The first improvement over brute-force matrix multiplication (which needs n^3 multiplications) was the Strassen algorithm: a recursive algorithm that needs O(n^2.807) multiplications. This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, needing roughly O(n^2.37) multiplications. These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical."
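For contrast with the galactic variants, here is a minimal Python sketch of Strassen's recursion; it assumes square matrices whose dimension is a power of two and omits the cutoff to naive multiplication that practical implementations use.

```python
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Strassen multiplication for n x n matrices, n a power of two.
    Uses 7 recursive multiplications instead of 8 -> O(n^log2(7)) ~ O(n^2.807)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    split = lambda M: ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                       [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_sub(B12, B22))
    M4 = strassen(A22, mat_sub(B21, B11))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

# 2x2 check: [[1,2],[3,4]] @ [[5,6],[7,8]] == [[19,22],[43,50]]
print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```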
Communication channel capacity
Claude Shannon showed a simple but asymptotically optimal code that can reach the theoretical capacity of a communication channel. It requires assigning a random code word to every possible n-bit message, then decoding by finding the closest code word. If n is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel. Unfortunately, any n big enough to beat existing codes is also completely impractical. These codes, though never used, inspired decades of research into more practical algorithms that today can achieve rates arbitrarily close to channel capacity.
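A toy Python sketch of the random-coding construction on a binary symmetric channel follows; the parameters are arbitrary illustrative choices, and the brute-force nearest-codeword decoder is exactly what makes the scheme impractical at realistic block lengths.

```python
import random

def random_code_demo(k=4, n=24, flip_prob=0.1, trials=200, seed=0):
    """Shannon-style random coding on a binary symmetric channel:
    map each k-bit message to a random n-bit codeword, decode to the
    nearest codeword in Hamming distance. Brute-force and tiny on purpose."""
    rng = random.Random(seed)
    codebook = [[rng.randint(0, 1) for _ in range(n)] for _ in range(2 ** k)]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    errors = 0
    for _ in range(trials):
        msg = rng.randrange(2 ** k)
        received = [bit ^ (rng.random() < flip_prob) for bit in codebook[msg]]
        decoded = min(range(2 ** k), key=lambda m: hamming(codebook[m], received))
        errors += (decoded != msg)
    return errors / trials

print(f"Block error rate: {random_code_demo():.3f}")
```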
Sub-graphs
The problem of deciding whether a graph G contains a fixed graph H as a minor is NP-complete in general, but where H is fixed, it can be solved in polynomial time. The running time for testing whether H is a minor of G in this case is O(n^2), where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) in Knuth's up-arrow notation, where h is the number of vertices in H. Even the case of h = 4 cannot be reasonably computed as the constant is greater than 2 pentated by 4, or 2 tetrated by 65536, that is, a power tower of 65536 twos.
Cryptographic breaks
In cryptography jargon, a "break" is any attack faster in expectation than brute force – i.e., performing one trial decryption for each possible key. For many cryptographic systems, breaks are known, but are still practically infeasible with current technology. One example is the best attack known against 128-bit AES, which takes only about 2^126 operations. Despite being impractical, theoretical breaks can provide insight into vulnerability patterns, and sometimes lead to discovery of exploitable breaks.
Traveling salesman problem
For several decades, the best known approximation to the traveling salesman problem in a metric space was the very simple Christofides algorithm which produced a path at most 50% longer than the optimum. (Many other algorithms could usually do much better, but could not provably do so.) In 2020, a newer and much more complex algorithm was discovered that can beat this by an infinitesimally small fraction of a percent. Although no one will ever switch to this algorithm for its very slight worst-case improvement, it is still considered important because "this minuscule improvement breaks through both a theoretical logjam and a psychological one".
Hutter search
A single algorithm, "Hutter search", can solve any well-defined problem in an asymptotically optimal time, barring some caveats. It works by searching through all possible algorithms (by runtime), while simultaneously searching through all possible proofs (by length of proof), looking for a proof of correctness for each algorithm. Since the proof of correctness is of finite size, it "only" adds a constant and does not affect the asymptotic runtime. However, this constant is so big that the algorithm is entirely impractical. For example, if the shortest proof of correctness of a given algorithm is 1000 bits long, the search will examine at least 2999 other potential proofs first.
Hutter search is related to Solomonoff induction, which is a formalization of Bayesian inference. All computable theories (as implemented by programs) which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Again, the search over all possible explanations makes this procedure galactic.
Optimization
Simulated annealing, when used with a logarithmic cooling schedule, has been proven to find the global optimum of any optimization problem. However, such a cooling schedule results in entirely impractical runtimes, and is never used. However, knowing this ideal algorithm exists has led to practical variants that are able to find very good (though not provably optimal) solutions to complex optimization problems.
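The sketch below contrasts a logarithmic cooling schedule with the geometric schedule commonly used in practice on an arbitrary one-dimensional toy objective; it is illustrative only and not tuned.

```python
import math
import random

def anneal(schedule, steps=20000, seed=1):
    """Minimize a bumpy 1-D function with simulated annealing.
    `schedule(t)` returns the temperature at step t (t >= 1)."""
    f = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)  # toy objective
    rng = random.Random(seed)
    x, best = 0.0, float("inf")
    for t in range(1, steps + 1):
        T = schedule(t)
        cand = x + rng.gauss(0.0, 0.5)
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            x = cand
        best = min(best, f(x))
    return best

logarithmic = lambda t: 2.0 / math.log(t + 1)   # provably convergent, absurdly slow cooling
geometric   = lambda t: 2.0 * (0.999 ** t)      # practical schedule

print("log-cooling best:", anneal(logarithmic))
print("geometric best:  ", anneal(geometric))
```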
Minimum spanning trees
The expected linear time MST algorithm is able to discover the minimum spanning tree of a graph in expected time O(m + n), where m is the number of edges and n is the number of nodes of the graph. However, the constant factor that is hidden by the Big O notation is huge enough to make the algorithm impractical. An implementation is publicly available and given the experimentally estimated implementation constants, it would only be faster than Borůvka's algorithm for graphs in which .
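For reference, a compact Python sketch of Borůvka's algorithm, the practical baseline mentioned above, is given here; it assumes a connected graph and is written for clarity rather than speed.

```python
class DSU:
    """Disjoint-set union with path halving, used to track components."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def boruvka_mst(n, edges):
    """edges: list of (weight, u, v). Returns total MST weight for a connected graph."""
    dsu, mst_weight, components = DSU(n), 0.0, n
    while components > 1:
        cheapest = {}  # component root -> cheapest incident cross edge
        for w, u, v in edges:
            ru, rv = dsu.find(u), dsu.find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        if not cheapest:
            break  # graph is disconnected
        for w, u, v in cheapest.values():
            if dsu.union(u, v):
                mst_weight += w
                components -= 1
    return mst_weight

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(boruvka_mst(4, edges))  # 6 (edges of weight 1, 2 and 3)
```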
Hash tables
Researchers have found an algorithm that achieves the provably best-possible asymptotic performance in terms of time-space tradeoff. But it remains purely theoretical: "Despite the new hash table's unprecedented efficiency, no one is likely to try building it anytime soon. It's just too complicated to construct." and "in practice, constants really matter. In the real world, a factor of 10 is a game ender."
References
Mathematical notation
Asymptotic analysis
Analysis of algorithms | Galactic algorithm | [
"Mathematics"
] | 1,686 | [
"Mathematical analysis",
"Asymptotic analysis",
"nan"
] |
60,819,517 | https://en.wikipedia.org/wiki/Crabb%C3%A9%20reaction | The Crabbé reaction (or Crabbé allene synthesis, Crabbé–Ma allene synthesis) is an organic reaction that converts a terminal alkyne and aldehyde (or, sometimes, a ketone) into an allene in the presence of a soft Lewis acid catalyst (or stoichiometric promoter) and secondary amine. Given continued developments in scope and generality, it is a convenient and increasingly important method for the preparation of allenes, a class of compounds often viewed as exotic and synthetically challenging to access.
Overview and scope
The transformation was discovered in 1979 by Pierre Crabbé and coworkers at the Université Scientifique et Médicale (currently merged into Université Grenoble Alpes) in Grenoble, France. As initially discovered, the reaction was a one-carbon homologation reaction (the Crabbé homologation) of a terminal alkyne into a terminal allene using formaldehyde as the carbon source, with diisopropylamine as base and copper(I) bromide as catalyst.
Despite the excellent result for the substrate shown, yields were highly dependent on substrate structure and the scope of the process was narrow. The author noted that iron salts were completely ineffective, while cupric and cuprous chloride and bromide, as well as silver nitrate provided the desired product, but in lower yield under the standard conditions.
Shengming Ma (麻生明) and coworkers at the Shanghai Institute of Organic Chemistry (SIOC, Chinese Academy of Sciences) investigated the reaction in detail, including clarifying the critical role of the base, and developed conditions that exhibited superior functional-group compatibility and generally resulted in higher yields of the allene. One of the key changes was the use of dicyclohexylamine as the base. In another important advance, the Ma group found that the combination of zinc iodide and morpholine allowed aldehydes besides formaldehyde, including benzaldehyde derivatives and a more limited range of aliphatic aldehydes, to be used as coupling partners, furnishing 1,3-disubstituted allenes via an alkyne-aldehyde coupling method of substantial generality and utility. A separate protocol utilizing copper catalysis and a fine-tuned amine base was later developed to obtain better yields for aliphatic aldehydes.
The Crabbé reaction is applicable to a limited range of ketone substrates for the synthesis of trisubstituted allenes; however, a near stoichiometric quantity (0.8 equiv) of cadmium iodide (CdI2) is needed to promote the reaction. Alternatively, the use of cuprous bromide and zinc iodide sequentially as catalysts is also effective, provided the copper catalyst is filtered before zinc iodide is added.
Prevailing mechanism
The reaction mechanism was first investigated by Scott Searles and coworkers at the University of Missouri. Overall, the reaction can be thought of as a reductive coupling of the carbonyl compound and the terminal alkyne. In the Crabbé reaction, the secondary amine serves as the hydride donor, which results in the formation of the corresponding imine as the byproduct. Thus, remarkably, the secondary amine serves as Brønsted base, ligand for the metal ion, iminium-forming carbonyl activator, and the aforementioned two-electron reductant in the same reaction.
In broad strokes, the mechanism of the reaction is believed to first involve a Mannich-like addition of the species into the iminium ion formed by condensation of the aldehyde and the secondary amine. This first part of the process is a so-called A3 coupling reaction (A3 stands for aldehyde-alkyne-amine). In the second part, the α-amino alkyne then undergoes a formal retro-imino-ene reaction, an internal redox process, to deliver the desired allene and an imine as the oxidized byproduct of the secondary amine. These overall steps are supported by deuterium labeling and kinetic isotope effect studies. Density functional theory computations were performed to better understand the second part of the reaction. These computations indicate that the uncatalyzed process (either a concerted but highly asynchronous process or a stepwise process with a fleeting intermediate) involves a prohibitively high-energy barrier. The metal-catalyzed reaction, on the other hand, is energetically reasonable and probably occurs via a stepwise hydride transfer to the alkyne followed by C–N bond scission in a process similar to those proposed for formal [3,3]-sigmatropic rearrangements and hydride transfer reactions catalyzed by gold(I) complexes. A generic mechanism showing the main features of the reaction (under Crabbé's original conditions) is given below:(The copper catalyst is shown simply as "CuBr" or "Cu+", omitting any additional amine or halide ligands or the possibility of dinuclear interactions with other copper atoms. Condensation of formaldehyde and diisopropylamine to form the iminium ion and steps involving complexation and decomplexation of Cu+ are also omitted here for brevity.)
Since 2012, Ma has reported several catalytic enantioselective versions of the Crabbé reaction in which chiral PINAP (aza-BINAP) based ligands for copper are employed. The stepwise application of copper and zinc catalysis was required: the copper promotes the Mannich-type condensation, while subsequent one-step addition of zinc iodide catalyzes the imino-retro-ene reaction.
See also
Mannich reaction
Ene reaction
Coupling reaction
Alkynylation
References
Organic chemistry
Name reactions | Crabbé reaction | [
"Chemistry"
] | 1,224 | [
"Name reactions",
"nan",
"Organic reactions"
] |
60,819,688 | https://en.wikipedia.org/wiki/Skin%20temperature%20%28atmosphere%29 | The skin temperature of an atmosphere is the temperature of a hypothetical thin layer high in the atmosphere that is transparent to incident solar radiation and partially absorbing of infrared radiation from the planet. It provides an approximation for the temperature of the tropopause on terrestrial planets with greenhouse gases present in their atmospheres.
The skin temperature of an atmosphere should not be confused with the surface skin temperature, which is more readily measured by satellites, and depends on the thermal emission at the surface of a planet.
Background
The concept of a skin temperature builds on a radiative-transfer model of an atmosphere, in which the atmosphere of a planet is divided into an arbitrary number of layers. Each layer is transparent to the visible radiation from the Sun but acts as a blackbody in the infrared, fully absorbing and fully re-emitting infrared radiation originating from the planet's surface and from other atmospheric layers. Layers are warmer near the surface and colder at higher altitudes. If the planet's atmosphere is in radiative equilibrium, then the uppermost of these opaque layers should radiate infrared radiation upwards with a flux equal to the incident solar flux. The uppermost opaque layer (the emission level) will thus radiate as a blackbody at the planet's equilibrium temperature.
The skin layer of an atmosphere references a layer far above the emission level, at a height where the atmosphere is extremely diffuse. As a result, this thin layer is transparent to solar (visible) radiation and translucent to planetary/atmospheric (infrared) radiation. In other words, the skin layer acts as a graybody, because it is not a perfect absorber/emitter of infrared radiation. Instead, most of the infrared radiation coming from below (i.e. from the emission level) will pass through the skin layer, with only a small fraction being absorbed, resulting in a cold skin layer.
Derivation
Consider a thin layer of gas high in the atmosphere with some absorptivity (i.e. the fraction of incoming energy that is absorbed), ε. If the emission layer has some temperature Teq, the total flux reaching the skin layer from below is given by:

F_up = σTeq^4
assuming the emission layer of the atmosphere radiates like a blackbody according to the Stefan-Boltzmann law. σ is the Stefan-Boltzmann constant.
As a result, a flux εσTeq^4 is absorbed by the skin layer, while the remaining (1 − ε)σTeq^4 passes through the skin layer, radiating directly into space.
Assuming the skin layer is at some temperature Ts, and using Kirchhoff's law (absorptivity = emissivity), the total radiation flux produced by the skin layer is given by:

F_skin = 2εσTs^4
where the factor of 2 comes from the fact that the skin layer radiates in both the upwards and downwards directions.
If the skin layer remains at a constant temperature, the energy fluxes in and out of the skin layer should be equal, so that:

2εσTs^4 = εσTeq^4
Therefore, by rearranging the above equation, the skin temperature can be related to the equilibrium temperature of an atmosphere by:

Ts = Teq / 2^(1/4) ≈ 0.84 Teq
The skin temperature is thus independent of the absorptivity/emissivity of the skin layer.
Applications
A multi-layered model of a greenhouse atmosphere will produce predicted temperatures for the atmosphere that decrease with height, asymptotically approaching the skin temperature at high altitudes. The temperature profile of the Earth's atmosphere does not follow this type of trend at all altitudes, as it exhibits two temperature inversions, i.e. regions where the atmosphere gets warmer with increasing altitude. These inversions take place in the stratosphere and the thermosphere, due to absorption of solar ultraviolet (UV) radiation by ozone and absorption of solar extreme ultraviolet (XUV) radiation respectively. Although the reality of Earth's atmospheric temperature profile deviates from the many-layered model due to these inversions, the model is relatively accurate within Earth's troposphere. The skin temperature is a close approximation for the temperature of the tropopause on Earth. An equilibrium temperature of 255 K on Earth yields a skin temperature of 214 K, which compares with a tropopause temperature of 209 K.
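A short numerical check of the relation derived above, assuming nothing beyond that formula:

```python
# Skin temperature from the equilibrium temperature: Ts = Teq / 2**0.25
T_eq = 255.0                   # K, Earth's equilibrium temperature
T_skin = T_eq / 2 ** 0.25
print(f"Skin temperature ~ {T_skin:.0f} K")  # ~214 K, close to the ~209 K tropopause
```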
References
Temperature
Atmospheric radiation | Skin temperature (atmosphere) | [
"Physics",
"Chemistry"
] | 838 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
60,820,678 | https://en.wikipedia.org/wiki/Bolesatine | Bolesatine is a glycoprotein isolated from the Rubroboletus satanas (Boletus satanas Lenz) mushroom which has a lectin function that is specific to the sugar binding site of D-galactose. It is a monomeric protein with a compact globular structure and is thermostable. One tryptophan can be found in its primary sequence along with one disulfide bridge.
Bolesatine causes gastroenteritis in humans and, at high enough concentrations, inhibits protein synthesis. It does not inhibit protein synthesis directly. Instead, it acts as a phosphatase for nucleoside triphosphates, particularly GTP. At lower concentrations, it is a mitogen to human and rat T lymphocytes. Studies have shown that at low concentrations, protein kinases C (PKC) are activated in vitro and in Vero cells, leading to an increase in DNA synthesis activity.
Effects of bolesatine poisoning
In addition to the accumulation of the toxin in the human liver and other organs, bolesatine poisoning causes agglutination of human red blood cells and platelets at threshold concentrations. Symptoms such as hypertension and dizziness are expected in affected individuals. In severe cases, death may result.
References
Lectins
Glycoproteins
Mycotoxins | Bolesatine | [
"Chemistry",
"Biology"
] | 284 | [
"Biochemistry",
"Biotechnology stubs",
"Biochemistry stubs",
"Glycoproteins",
"Glycobiology"
] |
60,821,156 | https://en.wikipedia.org/wiki/International%20Consortium%20on%20Landslides | The International Consortium on Landslides is a non-governmental organization created in 2002 to promote landslide research, education, and risk evaluation and reduction. It is located in Kyoto, Japan. The organization has consultative status with UNESCO.
The ICL's journal is Landslides. It holds regular symposiums, including the World Landslide Forum, which is held every three years by its "International Programme on Landslides". It has various committees and programs, and was a co-founder of the "2006 Tokyo Action Plan", to carry out global cooperation on landslide monitoring and early warning; hazard mapping, vulnerability, and risk assessment; study of catastrophic landslides and landslides that threaten culturally important sites; and preparedness, mitigation, and recovery after landslides.
As of 2012, the organization had 52 member organizations, such as the United States Geological Survey, the China Geological Survey, the National Institute of Disaster Management in India, and the Disaster Prevention Research Institute at Kyoto University.
References
Landslides
Non-profit organizations based in Japan | International Consortium on Landslides | [
"Environmental_science"
] | 208 | [
"Landslides",
"Environmental soil science"
] |
60,823,132 | https://en.wikipedia.org/wiki/C21H26N4O3 | {{DISPLAYTITLE:C21H26N4O3}}
The molecular formula C21H26N4O3 may refer to:
N-Desethylisotonitazene, a designer drug with opioid effects
Metonitazene, an analgesic compound | C21H26N4O3 | [
"Chemistry"
] | 63 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
60,823,876 | https://en.wikipedia.org/wiki/Coral%20of%20life | The coral of life is a metaphor or a mathematical model useful to illustrate evolution of life or phylogeny at various levels of resolution, including individual organisms, populations, species and large taxonomic groups. Its use in biology resolves several practical and conceptual difficulties that are associated with the tree of life.
History of the concept
In biological context, the 'coral of life' as a metaphor is almost as old as the 'tree of life'. After returning from his voyage around the world, Darwin suggested in his notebooks that:
with obvious reference to branching corals whose dead colonies may form very thick deposits in the ocean (representing past life) with live animals occurring only on the top (recent life). This comment was illustrated by two simple diagrams, the first coral metaphors of evolution ever drawn in the history of biology. However, Darwin later abandoned his idea, and in the Origin of Species he referred to the tree of life as the most appropriate means to summarize affinities of living organisms, thanks most likely to obvious connotations of this metaphor with religion, ancient and folk art and mythology.
Darwin’s early musing was rediscovered by several authors more than a century later, graphical schemes as simple heuristics were drawn again early this century, and corals were raised to the level of mathematically defined objects even more recently.
Structure
The picture to the right explains the different parts of a coral. The vertical axis is time; the horizontal axis may be richness, morphological diversity, some other population measure, or even scaled arbitrarily. Each point x in the diagram corresponds to an individual, a population or a taxon. At a given point of time h, there is an equivalence partition of points into classes C. A class Cg1 is the ancestor of the entire branch above it, and Cr1 is the closest common ancestor class of two segments. A segment S is defined by two classes such that there is no branching between them. At top centre, a horizontal event is visualized, such as hybridization between members of different classes, leading to a new segment and thus to a fan coral.
Advantages of corals as metaphors of phylogeny
Botanical trees and (many) corals share only one fundamental property, namely branching, which makes both of them suitable to illustrate evolutionary divergence. Regarding other features, corals are superior to trees as metaphors of phylogeny because:
Only the uppermost sections of a coral are alive, whereas all parts of a tree, from its thinnest roots to the uppermost leaves, are living;
The coral starts its development from a small initial colony, grows upwards and ramifies later, while a tree grows from seed in two opposite directions, producing roots and a crown, which may be equally large and similarly complex in shape;
The diameter of a tree continuously decreases from the trunk to the twigs, a property without phylogenetic implications, while corals may be even wider above than below, better symbolizing temporal change of taxonomic richness or population size;
Normally, tree branches never fuse whereas coral branches may exhibit anastomoses, thereby representing horizontal evolutionary events.
Advantages of corals as mathematical models of phylogeny
Trees as graph theoretical constructs are composed of vertices (nodes) representing biological entities and connecting edges (links) corresponding to relations between entities. Being a special case of branching silhouette diagrams, corals may also be defined mathematically; these are geometric shapes embedded into a two- or three-dimensional space with time as one axis and some other meaningful property, such as taxon richness, as the other (one or two). Regarding their applicability to represent phylogenies, corals and trees compare in the following way:
Trees are tools of discrete mathematics, and are therefore inappropriate to demonstrate evolutionary continuum, whereas corals as geometric shapes may very well serve this purpose;
Trees, by definition, cannot have cycles and – contrary to networks – are thus inappropriate to reflect horizontal events, such as hybridization or endosymbiosis, while fan corals allow links between branches;
Trees are computationally feasible, and remain fundamental tools in revealing phylogenetic relationships; corals can be constructed by hand based on trees and additional information from various sources in systematics, paleontology, geology etc.
There is considerable freedom in the graphical visualization of both trees and corals, although the latter may involve more artistic and heuristic elements.
When nodes of trees and networks represent individuals, so that the graphs demonstrate parent-offspring relations for asexual and sexual populations, respectively, one may zoom out so that the minor details (nodes and edges) of the diagram disappear and the discrete graph is smoothed into a coral – which is often called a “tree”, unfortunately.
Visualizing the entirety of life on earth
While corals may be drawn for any particular taxonomic group, e.g., “coral of plants,” the term “coral of life” specifically refers to all cellular life, viruses excluded.
The figure on the right is a first attempt to display a coral of life.
The coral and the classification of life
The last, but not the least important, feature of the coral of life is that it requires a classification valid for the past and present life viewed together. To see how it is possible, we may refer again to Darwin, who warned that the system of Linnaean ranks works only thanks to our insufficient knowledge of the past life. It is due to the absence of extinct forms
Earlier, in the Origin of Species, he commented that groups that are clearly separable at present, based on many characters, have much fewer differences for their ancient members, which are therefore closer to each other in the past than are their descendants in the present. That is, gaps observed between recent taxa paradoxically disappear when we go back to the ancestors – questioning the meaningfulness of Linnaean ranks . Consistently with this, Darwin suggested further that the natural classification system
The system can be made genealogical if we abandon the rank system and consider coral branches as taxa, analogously to clades derived from tree representations of phylogeny. That is, every branch of the coral is a monophyletic group whose members are derived from the same equivalence class such that no other branches arise from that class.
Footnotes
History of biology
Tree of life (biology) | Coral of life | [
"Biology"
] | 1,281 | [
"Tree of life (biology)"
] |
60,825,035 | https://en.wikipedia.org/wiki/Milton%20Morrison | Milton Teófilo Morrison Ramírez(born August 14, 1975) is a Dominican electrical engineer, writer, businessperson, Dominican politician and a 2020 Dominican Republic presidential candidate.
Milton was born and raised in Santo Domingo, Dominican Republic to Mateo Morrison, an award-winning poet, and Cristobalina Ramírez, a librarian. He earned his Bachelor of Science degree in electrical engineering from the Santo Domingo Institute of Technology, and his MS from the University of Bradford. In 2017, he founded a political party in the Dominican Republic named País Posible, which has over 128.000 active members as of 2019.
Biography
Early life and education
Milton Teófilo Morrison Ramirez was born on August 14, 1975. His father is Mateo Morrison, an award-winning poet, whose parents were English-Jamaican migrants. His mother is Cristobalina Ramírez, a Dominican-American librarian. Morrison grew up in Los Tres Brazos, an impoverished area of the Dominican Republic, where he completed his secondary education at the Escuela Nueva y Mahatma Gandhi. In 1992, at only 19 years old, he earned his Bachelor of Science degree in electrical engineering, graduating with cum laude honors. In 1999, he migrated to England to pursue his Master of Science degree in Development and Project Planning at the University of Bradford. In 2002, he obtained a postbaccalaureate degree in International Business from the University of Florida.
Career
Morrison started his professional career in 1998; that same year, he and his brother Nelson Morrison founded the engineering company Morrison Ingenieros.
In 2000, Morrison joined the United Nations' Millennium Development Goals program. That same year, he was named director of renewable energies at the Ministerio de Industria y Comercio in the Dominican Republic. In 2001, he started his career as a professor at his alma mater, the Santo Domingo Institute of Technology, teaching electrical engineering. In 2002, he joined the Fifth Episcopal Conference of Latin America, representing the Dominican Republic in New Delhi, India. In 2006, he was named Executive VP at the Asociación Dominicana de la Industria Eléctrica (ADIE), where he served until 2017.
References
1975 births
Living people
People from Santo Domingo
Presidents of political parties in the Dominican Republic
Dominican Republic people of Cocolo descent
Santo Domingo Institute of Technology alumni
Alumni of the University of Bradford
Dominican Republic engineers
Electrical engineers | Milton Morrison | [
"Engineering"
] | 483 | [
"Electrical engineering",
"Electrical engineers"
] |
60,825,357 | https://en.wikipedia.org/wiki/Photon%20%28arcade%20cabinet%29 | Photon () is a Soviet arcade cabinet produced between the late 1980s and the early 1990s in Penza.
Production
The Photon arcade cabinet was produced in the late 1980s to early 1990s by the eponymous cooperative in Penza. Components were purchased from the plants of Voronezh, Saransk and Nizhny Novgorod. Machines were purchased from the manufacturer "Union" and the Ministry of Culture. The volume of production of automatic machines was up to 150 units per month.
Games
The Photon machine with a PC8000 card has three games loaded on it. The game variants of the machine differ by a sign with instructions, depending on the installed game.
Питон (Python)
Клад / Лабиринт (Klad / Labyrinth, Treasure / Labyrinth)
Тетрис (Tetris)
Versions of the games "Labyrinth" and "Treasure" also exist for the ordinary PC8000. The versions differ due to the lack of a coin acceptor and a time counter.
The following sets of games are known for the ZX Spectrum-compatible machine:
Бродяга (Brodjaga, Inspector Gadget and the Circus of Fear)
Чёрный корабль (Czernyj Korabl, Black Beard)
Повар / Собрать Буран / Агропром (Povar / Sobrat' Buran / Agroprom, Cookie / Jetpack / Pssst)
Technical specifications
The machine is based on a slightly modified Soviet PC8000 consumer computer. The modification entails replacing the ROM chips containing the BASIC interpreter with ROMs containing the game program. Also, there is no keyboard, and no piezo emitter is mounted on the computer board for audio playback; instead, the sound output is connected to a tape recorder. The joystick is connected to the standard joystick connector.
A later version of the machine is also known, the Foton-IK02, which uses a ZX Spectrum-compatible board. Compared to the PC8000 mainboard, its graphics capabilities are weaker, but it is faster.
Emulation
In 2009, enthusiasts dumped a ROM from one of the games for the machine (Tetris). Support for the machine with a PC8000 board and the first game for it was added to the MAME emulator in version 0.133u1 in August 2009. The next version (0.134, September of the same year) adds support for a machine with a ZX Spectrum-compatible card.
See also
TIA-MC-1
List of Soviet computer systems
References
Arcade system boards
Computing in the Soviet Union
Soviet brands
Goods manufactured in the Soviet Union
Arcade-only video games
Arcade video games | Photon (arcade cabinet) | [
"Technology"
] | 575 | [
"Computing in the Soviet Union",
"History of computing"
] |
67,261,093 | https://en.wikipedia.org/wiki/Catenulispora | Catenulispora is a Gram-positive, rod-shaped and aerobic genus of bacteria.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI)
See also
List of bacterial orders
List of bacteria genera
References
Actinomycetia
Bacteria genera
Taxa described in 2006 | Catenulispora | [
"Biology"
] | 83 | [
"Bacteria stubs",
"Bacteria"
] |
67,263,464 | https://en.wikipedia.org/wiki/Graham%20Ivan%20Clark | Graham Ivan Clark (born January 9, 2003) is an American computer hacker, cybercriminal and a convicted felon regarded as the mastermind behind the 2020 Twitter account hijacking.
Early life
Graham Ivan Clark grew up in Hillsborough County, Florida, with his mother, father, and older sister. His parents divorced when he was 7; as of 2020, his father lives in Indiana. During his teenage years, Clark used various aliases while participating in online communities, gaining notoriety as a scammer in the "hardcore factions" Minecraft community. In 2018, Graham joined OGUsers, a forum dedicated to selling, buying, and trading online accounts, and was banned after four days.
In 2019, at the age of 16, Clark was involved in stealing 164 bitcoins from Gregg Bennett, a Seattle-based angel investor, through a SIM swap attack. Clark sent two extortion notes under the alias "Scrim", stating, "We just want the remainder of the funds in the Bittrex", referring to the cryptocurrency exchange "Bittrex" that Bennett had used, and "We are always one step ahead and this is your easiest option." The United States Secret Service managed to recover only 100 bitcoins from the heist. In an interview, Bennett said he was told by a Secret Service agent that the person with the stolen bitcoins was not arrested because he was a minor.
Role in the 2020 Twitter account hijacking
Clark is widely regarded as the "mastermind" of the 2020 Twitter account hijacking, an event in which Clark worked with Mason Sheppard and Nima Fazeli to compromise 130 high-profile Twitter accounts to push a cryptocurrency scam involving bitcoin along with seizing "OG" (short for original) usernames to sell on OGUsers. At the time, Sheppard was 19, Fazeli was 22, and Clark was 17. Sheppard and Fazeli specialized in playing the role of brokers in selling the Twitter handles on OGUsers.
The Twitter hack began on June 14 when Sheppard and Fazeli assisted Clark in manipulating employees through social engineering. This involved calling multiple Twitter employees and posing as the help desk in Twitter's IT department responding to a reported problem with Twitter's internal VPN. From there, Clark directed the employee to a phishing site that was identical in appearance to Twitter's VPN log-in portal. When the employee entered their information into the phishing portal, the credentials were simultaneously entered onto the real log-in page. After one employee account was compromised, it was used to review instructions on Twitter's intranet on how to take over Twitter accounts.
Arrest
On July 31, 2020, Clark was arrested at his home in Northdale, Florida. He faced 30 criminal charges, including 17 counts of communication fraud, 11 counts of fraudulent use of personal information, one count of organized fraud for more than $5,000, and one count of accessing a computer or electronic device without authority. His bail was set at $725,000 and he pleaded not guilty. His hearing was held on March 16, 2021, via Zoom at Hillsborough County Jail. He was sentenced to three years in prison followed by three years of probation as part of a plea deal under Florida's Youthful Offender Act, which limits the penalties for convicted felons under the age of 21. According to the Tampa Bay Times, he was able to serve part of his time in a military-style boot camp.
The plea agreement stipulated that Clark could not "direct[ly] or indirect[ly] access" any electronic device without both the express permission of his probation officer and the notification of the Florida Department of Law Enforcement. He was also required to provide a list of "any and all electronic mail addresses, Interactive computer services, Internet domain names, commercial social networking websites, online or remote storage and computing devices, Internet identifiers and each Internet identifier's corresponding website [sic] homepage or application software name; home telephone numbers and cellular telephone numbers in his care custody or control." Additionally, he was ordered to disclose passwords, security codes, tokens, and key fobs.
Clark was released from Saint Petersburg Community Release Center on February 16, 2023. He is currently under probation until February 15, 2026.
References
2003 births
American cybercriminals
Hackers
Living people
People from Hillsborough County, Florida | Graham Ivan Clark | [
"Technology"
] | 917 | [
"Lists of people in STEM fields",
"Hackers"
] |
67,263,558 | https://en.wikipedia.org/wiki/Sperm%20Chromatin%20Structure%20Assay | Sperm Chromatin Structure Assay (SCSA) is a diagnostic approach that detects sperm abnormality with a large extent of DNA fragmentation. First described by Evenson in 1980, the assay is a flow cytometric test that detects the vulnerability of sperm DNA to acid-induced denaturation DNA in situ. SCSA measures sperm DNA fragmentation attributed to intrinsic and extrinsic factors and reports the degree of fragmentation in terms of DNA Fragmentation Index (DFI). The use of SCSA expands from evaluation of male infertility and subfertility, toxicology studies and evaluation of quality of laboratory semen samples. Notably, SCSA outcompetes other convention sperm DNA fragmentation (sDF) assays such as TUNEL and COMET in terms of efficiency, objectivity, and repeatability.
History
Before the development of SCSA, diagnosis or prognosis of male infertility/subfertility was based principally on the World Health Organisation (WHO) manual-based semen parameters, including semen concentration, motility, and morphology. Yet several reports of pregnancy failure involved parameters within the normal range, suggesting that none of these measurements reliably reflects a couple's chance of fertility. Furthermore, such parameters are often associated with high labour intensity and lack of statistical power.
In the late 1970s, Donald P. Evenson at Memorial Sloan Kettering Cancer Centre in the United States received an NIH Research Project Grant (RO1) for mammalian sperm chromatin structure study. Various techniques have since been adopted to gain access to sperm DNA integrity. In particular, transmission electron microscopy reflected a significant amount of sperm chromatin heterogeneity.
The heterogeneity was then confirmed through flow cytometry by contrasting AO staining results between human and mouse sperm nuclei. Homogeneous results were observed in the mouse sample while heterogeneous fluorescence intensity varied among the human sample. A hypothesis was proposed: "single-stranded/double-stranded DNA break-induced sperm DNA fragmentation is correlated with male infertility." In 1980, Evenson et al. published papers that synthesised this knowledge into clinical tests and established the SCSA.
Initially, utilization of thermal energy in buffer (100 °C, 5 min) was proposed and used for denaturation of DNA at sites of DNA damage. However, the heated sperm protocol was time-consuming and induced random loss of sperm sample. Therefore, acid-induced denaturation has replaced heat-induced denaturation due to the greater convenience of the low-pH technique and the similarity of the results.
Principle
SCSA is a widespread diagnostic tool for the detection of sperm samples with a high degree of DNA fragmentation and absence of histone-to-protamine exchange in sperm nuclei. SCSA defines sperm abnormality as an increased vulnerability of sperm DNA to in-situ heat/acid-induced denaturation. Theoretically, a completely mature and healthy sperm nucleus, which is rich in disulfide bonds (S-S), shall have its DNA preserved in double-stranded form. A low-pH treatment opens up defective sperm DNA at the sites of damage. Through acridine orange (AO) staining, AO molecules are intercalated into double-stranded DNA in intact sperm while aggregation of AO molecules occurs at single-stranded DNA in defective sperm. Undergoing flow cytometry (blue light), green (native DNA) and red (damaged DNA) fluorescence will be emitted from intact and defective sperm respectively. Signals will be analysed with software in the examination of both sperm DNA fragmentation (sDF) and atypical chromatin structure.
Causes for sperm DNA damage
The integrity of sperm DNA is in close correlation with the transfer of paternal DNA into the oocyte during fertilisation. The etiology of sperm DNA damage can be subdivided into intrinsic and extrinsic factors. The former is attributed to a series of pathophysiological phenomena during spermatogenesis; the latter is caused by postnatal exposure to endogenous sources of DNA breaks.
Intrinsic factors
Abnormality in recombination and chromatin restructuring: During spermatogenesis, crossing-over of chromatid segments between homologous chromosomes may have occurred abnormally. Specific nucleases are programmed to introduce DNA double-stranded breaks for the progress of crossing-over. To prevent undesirable alterations, DNA damage checkpoint is activated and progress of meiosis will be suspended when DNA is damaged. Incorrect activation or inactivation of the checkpoint is suspected to be the cause of fragmented DNA in ejaculated spermatozoa. However, such a conclusion is theoretically speculative and currently there is no direct confirmation of this hypothesis in humans. During spermiogenesis, DNA double-strand breaks are introduced to relieve torsional stress and to enable the substitution of nucleosome histone cores by transitional proteins. Alteration to such processes may be detrimental to the chromosomal integrity of sperm.
Abortive apoptosis: Apoptosis refers to a programmed cell death that removes abnormal cells and to prevent their over-proliferation. If apoptosis is not activated efficiently, overpopulation of germ cells or escape of defective germ cells will lead to sperm DNA damage.
Oxidative stress: Oxidative stress denotes the imbalance of activity between reactive oxygen species (ROS) and endogenous antioxidant agents. High levels of ROS cause DNA damage in terms of single-stranded breaks and double-stranded breaks frequently recorded in infertile men's sperms.
Extrinsic factors
Age: Although males produce sperm throughout their adulthood, older age is associated with increased number of DNA double-stranded breaks and decreased frequency of sperm apoptosis. Such observation is implicative of deterioration of sperm selection, quality, and integrity.
Heat stress: High temperatures cause adverse effects to sperm DNA and male fertility. Excessive heat is related to impaired sperm chromatin integrity, and testis overheating is associated with reduced fertility potential.
Smoking: Toxins in common tobaccos may increase the prevalence of fragmented DNA. Smoking is associated with significantly escalated levels of seminal ROS and oxidative stress. Increased ROS activity leads to apoptosis and increased fragmentation of DNA.
Procedure
Currently, only the SCSA protocol developed by Evenson et al. has received trademark protection in achievement of clinical relevance between different laboratories. The individual steps of SCSA are as follows:
Freezing/Thawing: After ejaculation, human sperm samples are subjected to a 30-minute semen liquefaction at 37 °C, followed by cryopreservation in an ultra-low temperature freezer (–70 to –110 °C) or placed directly into a liquid nitrogen cryovial. Frozen or fresh sperm samples are thawed in a 37 °C water bath and diluted with a TNE buffer to obtain a 200 μL suspension (sperm concentration: 1–2 × 10^6 sperm/mL).
Acid-induced denaturation: 400 μL of acidic solution (pH 1.2) containing 0.15 M sodium chloride, 0.08 M hydrochloric acid, and 0.1% Triton X-100 is added to the 200 μL sperm suspension, and the solution is mixed for exactly 30 seconds. Such a process enables denaturation of sperm nuclei with DNA damage.
Acridine orange (AO) staining: Next, 1.20 mL of AO staining solution with 6 μg AO/mL of staining buffer is added to the mixture. The small AO molecules penetrate the sperm chromatin, gaining access to double-stranded DNA in intact sperm nuclei and single-stranded DNA in defective sperm nuclei.
Flow cytometry (FCM): Using a flow cytometer, 500–1000 sperm cells can be examined within minutes on a 1024 × 1024 gradation scale through a dual parameter. Visualized under blue light at wavelengths of 450–490 nm, double-stranded DNA from intact sperm emits green fluorescence (488 nm) while aggregation of AO molecules on single-stranded DNA from defective sperm leads to a metachromatic shift to red fluorescence (>630 nm).
Data analysis: A scattergram (cytogram) will be generated from the flow cytometer reflecting DNA stainability from red (X-axis) and green (Y-axis) signals to single out the heterogeneity; with SCSAsoft® software, data from the scattergram are converted into frequency histograms for calculation of the DNA fragmentation index (DFI) / Cells Outside the Main Peak of αt (COMPαt), alpha t (αt), and the High DNA Stainable fraction (HDS).
Parameters
SCSA consists of a fixed flow cytometry protocol and a specific computing program, SCSAsoft ®. Measurements include DNA fragmentation index (DFI) and High DNA Stainable (HDS) fraction, which represent the percentage of sperm with DNA breaks/protamine defects and immature spermatozoa without full protamination respectively.
DNA fragmentation index (DFI)
Also known as Cells Outside the Main Peak of αt (COMPαt), DFI can be further sub-classified into mean DFI (X DFI) and standard deviation DFI (SD DFI). The index has been determined to be the most sensitive criterion for fertility assessment, as a reflection of sperm DNA integrity. A normal DFI implies no measurable value; a moderate-DFI sample implies normal sperm morphology; and high-DFI fractions exhibit elongated nuclei and signs of apoptosis. In general, the greater the DFI, the higher the chance of infertility or subfecundity.
Within a DFI of 0–20%, the occurrence of spontaneous pregnancy remains consistent; when DFI exceeds 20%, the rate of natural fertility gradually declines; when DFI exceeds 30%, the odds ratio for natural or intrauterine insemination (IUI) fertility is reduced 8–10-fold, suggesting a close-to-zero chance of pregnancy.
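The sketch below shows, with synthetic data and an assumed fixed threshold, how a DFI-like statistic can be computed from per-cell red and green fluorescence values; it is a schematic illustration, not the SCSAsoft procedure.

```python
import random

def dfi_from_fluorescence(red, green, dfi_threshold=0.25):
    """Compute per-cell alpha_t = red / (red + green) and the percentage of cells
    whose alpha_t exceeds a chosen threshold (a DFI-like statistic). Illustrative only."""
    alpha_t = [r / (r + g) for r, g in zip(red, green)]
    dfi_percent = 100.0 * sum(a > dfi_threshold for a in alpha_t) / len(alpha_t)
    return alpha_t, dfi_percent

# Synthetic example: 900 mostly intact sperm plus a damaged subpopulation of 100
rng = random.Random(0)
red   = [rng.gauss(80, 15) for _ in range(900)] + [rng.gauss(400, 60) for _ in range(100)]
green = [rng.gauss(600, 50) for _ in range(900)] + [rng.gauss(350, 60) for _ in range(100)]

_, dfi = dfi_from_fluorescence(red, green)
print(f"DFI ~ {dfi:.1f}% of cells with fragmented DNA")  # ~10% in this synthetic sample
```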
High DNA Stainable (HDS) fraction
The HDS sperm population has a remarkably high degree of DNA staining by AO molecules due to the presence of unprocessed P2 protamines. Determination of HDS value reflects structural chromatin abnormalities. A high HDS value is indicative of immature sperm morphology and hence pregnancy failure.
Applications
Diagnosis of male infertility or subfertility
Because the SCSA assesses sperm chromatin abnormality, it is a valid instrument for determining male infertility or subfertility.
Although the causes and events that trigger sperm DNA damage and fragmentation are not yet fully understood, sperm DNA fragmentation has been shown to be closely correlated with fertility and subfertility not only in humans, but also in bulls, boars, and stallions. These findings establish the DFI determined by SCSA as a strong independent predictor of in vivo pregnancy and a clinically useful technique.
Currently, a DFI of 25% is the established clinical threshold for classifying males with an increased statistical probability of: 1) longer time to natural pregnancy, 2) lower chance of intrauterine insemination (IUI) success, 3) miscarriage, or 4) infertility. High HDS values correlate positively with pregnancy failure.
In such cases, other assisted reproductive technologies (ART) may be performed, including intracytoplasmic sperm injection (ICSI) (for sperm samples with DFI >25%) or testicular sperm extraction (TESE) (for sperm samples with DFI >50%).
Toxicology studies
Sperm DNA damage can be attributed to exposure to chemotherapy, radiotherapy, or other environmental toxicants. SCSA is highly dose-responsive to sperm DNA fragmentation induced by chemical toxicants; therefore, SDαt is the most important variable for toxicology studies.
Evaluation of cool-stored semen
SCSA is also performed to assess the quality of laboratory sperm samples that have been stored for at least 24 hours. Semen samples that have been stored under appropriate conditions show essentially no change, while a greater change in DNA quality indicates improper handling.
Advantages
SCSA has numerous advantages when compared to other sperm DNA fragmentation (sDF) assays [TUNEL assay, COMET assay, and Sperm Chromatin Dispersion (SCD)], which include:
More time- and cost-efficient: 5000-10000 spermatozoa can be analysed in less than 5 minutes, an efficiency higher than that of any other existing sperm fragmentation protocol. Moreover, the requirements for equipment and reagents are relatively low; reagents cost only about 10 cents per test.
Higher objectivity and accuracy: Conventional sperm analysis uses sperm count, morphology and motility to determine infertility or subfertility. However, pregnancy failure has been reported in several cases where these parameters were within the normal range. In SCSA, machine-derived DFI and HDS values with an unbiased threshold are measured rather than relying on subjective human-eye evaluation, resulting in higher precision (coefficients of variation, CVs, of 1-3%).
Higher repeatability: Because sperm count, morphology and motility of semen samples fluctuate over short periods of time, the results of conventional analysis are less repeatable. SCSA has a repeatability of 0.98-0.99 in clinical settings; unless disrupted by lifestyle changes or medical intervention, results are reproducible.
Limitations
Despite the objective data and advantages offered, the efficacy of SCSA in fertility assessment remains clinically contested. Suggested limitations include:
Poor association between DFI and reproductive outcomes: Low odds ratios were observed between DFI and fertility outcomes in several meta-analyses. Furthermore, experimental results are mainly level 3 evidence (from retrospective cohort studies, case-control studies, and meta-analyses of level 3 studies) according to evidence-based medicine (EBM) criteria, suggesting low clinical value.
Unvalidated threshold: The DFI and HDS were established by measuring the vulnerability of sperm DNA to acid-induced denaturation rather than by direct measurement of sperm DNA integrity. AO staining of intact and defective sperm may not represent the actual degree of DNA fragmentation. Hence, the current threshold may not accurately reflect the actual fertility situation.
References
Semen
Human male reproductive system
Flow cytometry
DNA | Sperm Chromatin Structure Assay | [
"Chemistry",
"Biology"
] | 2,970 | [
"Flow cytometry"
] |
67,263,972 | https://en.wikipedia.org/wiki/Imidine | In chemistry imidines are a rare functional group, being the nitrogen analogues of anhydrides and imides. They were first reported by Adolf Pinner in 1883, but did not see significant investigation until the 1950s, when Patrick Linstead and John Arthur Elvidge developed a number of compounds.
Imidines may be prepared in a modified Pinner reaction, by passing hydrogen chloride into an alcoholic solution of their corresponding di-nitriles (i.e. succinonitrile, glutaronitrile, adiponitrile) to give imino ethers which then condense when treated with ammonia. As a result, most structures are cyclic.
The compounds are highly moisture sensitive and can be converted into imides upon exposure to water.
See also
Amidine
Guanidines
References
Functional groups | Imidine | [
"Chemistry"
] | 171 | [
"Functional groups"
] |
67,264,815 | https://en.wikipedia.org/wiki/Sexual%20anomalies | Sexual anomalies, also known as sexual abnormalities, are a set of clinical conditions due to chromosomal, gonadal and/or genitalia variation. Individuals with congenital (inborn) discrepancy between sex chromosome, gonadal, and their internal and external genitalia are categorised as individuals with a disorder of sex development (DSD). Afterwards, if the family or individual wishes, they can partake in different management and treatment options for their conditions (e.g. hormone therapy). Many intersex people are engaged in activism to stop such treatments, citing the extreme and harmful nature of many of the treatments, further arguing that many of the treatments serve no medical purpose.
Infants born with atypical genitalia often cause confusion and distress for the family. Psychosexual development is influenced by numerous factors that include, but are not limited to, gender differences in brain structure, genes associated with sexual development, prenatal androgen exposure, interactions with family, and cultural and societal factors. Because of the complex and multifaceted factors involved, communication and psychosexual support are both important.
A team of experts, or patient support groups, is usually recommended for cases related to sexual anomalies. This team of experts is usually drawn from a variety of disciplines, including pediatricians, neonatologists, pediatric urologists, pediatric general surgeons, endocrinologists, geneticists, radiologists, psychologists and social workers. These professionals are capable of providing first-line (prenatal) and second-line (postnatal) diagnostic tests to examine and diagnose sexual anomalies.
Overview
In the normal prenatal stages of fetal development, the fetus is exposed to testosterone, with male fetuses exposed to more of it than female ones. In the presence of the 5α-reductase enzyme, testosterone is converted to dihydrotestosterone (DHT). If DHT is present, the male external genitalia will develop.
Development of male external genitalia:
Genital tubercle forms the penis
Urethral folds form the penile raphe
Genital swellings form the scrotum
On the other hand, if maternal placental estrogen is present without DHT, the female external genitalia develop.
Development of female external genitalia (the vulva):
Genital tubercle forms the clitoris
Urethral folds form the labia minora
Genital swellings form the labia majora
However, in abnormal cases, sexual anomalies occur due to a variety of factors that lead to an excess of androgens in the fetus. The effects of excessive androgens differ in fetuses with XX chromosome (female) and XY chromosomes (male).
In XX chromosome fetuses, excess androgens result in ambiguous genitalia. This makes identification of external genitalia as male or female difficult. Additionally, the individual may have clitoromegaly, a shallow vagina, early and rapid growth of pubic hair in childhood, delayed puberty, hirsutism, virilisation, irregular menstrual cycle in adolescence and infertility due to anovulation.
In XY chromosome fetuses, excess androgens result in a functional and average-sized penis with extreme virilisation, but the inability for sperm production. Additionally, the individual will also experience early and rapid growth of pubic hair during childhood and precocious puberty stages.
Classification
Differences/disorders of sexual development (DSD) are classified into different categories: chromosomal variation, gonadal development disorders, abnormal genital development and others.
Chromosomal variation
DSDs caused by chromosomal variation generally do not present with genital ambiguity. This includes sex chromosome DSDs such as Klinefelter syndrome, Turner syndrome and 45,X or 46,XY gonadal dysgenesis.
Males with Klinefelter syndrome usually have a karyotype of 47,XXY as a result of having two or more X chromosomes. Affected patients generally have normal genital development, yet are infertile and have small, poorly functioning testes, breast growth and delayed puberty. The incidence of 47,XXY is 1 in 500 males, but rare, severe cases of Klinefelter syndrome present with three or more X chromosomes.
Turner syndrome is classified as aneuploidy or structural rearrangement of the X chromosome. Signs and symptoms vary among affected females and include low birth weight, low-set ears, short stature, a short neck and delayed puberty. The incidence is 1 in 2500 live-born females, although most affected fetuses do not survive to birth.
Gonadal development disorders
Gonadal development disorders form a wide spectrum, classified by their cytogenetic and histopathological features. However, unresolved diagnoses and the risk of malignancy still present difficulties in the sex determination of these patients. Such disorders include partial or complete gonadal dysgenesis, ovotesticular DSD, testicular DSD and sex reversal.
Abnormal genital development
Genital abnormality can occur in the penis, scrotum or testes in males, and the vagina and labia in females. Sometimes ambiguous genitalia occur, where a clear distinction of the external genitalia as male or female is absent. Hence, an examination (typically at birth) is carried out in which the sex of the patient is determined through imaging and blood tests. Abnormal genital development includes disorders of fetal origin, disorders of androgen synthesis or action, and disorders of anti-Müllerian hormone synthesis or action.
Others
In addition to the aforementioned sexual anomalies, there are other unclassified sexual anomalies. In males, this includes severe early-onset intrauterine growth restriction, isolated hypospadias, congenital hypogonadotropic hypogonadism, hypogonadism and cryptorchidism. In females, this includes Malformation syndromes, Müllerian agenesis/hypoplasia, uterine anomalies, vaginal atresia and labial adhesions.
Causes
Sexual anomalies often arise from genetic abnormalities caused by many factors, leading to differences in sexual development. These genetic abnormalities occur during the prenatal stage of an individual's fetal development, when mutations can result from endocrine disruptors in the mother's diet or from environmental factors. The general causes of sexual anomalies cannot be outlined uniformly because of the high variability between individual cases; the cause of each specific anomaly therefore has to be studied independently.
Sexual differentiation occurs through various processes during the prenatal development period of the fetus. These processes are initiated and regulated by biological molecules such as DNA, hormones and proteins. The initial steps of sexual differentiation begin with the development of the gonads and genitals. This process is identical in both sexes over the course of the first 6 weeks following conception, during which the reproductive structures remain undifferentiated. Differentiation of the gonads begins after the 6th week and is determined by the sex-determining region Y (SRY) gene on the Y chromosome.
The SRY gene plays an important role in developing the testes of a male individual. Following the development of the testes, hormones synthesized within them regulate the differentiation of both the internal and external parts of the genitals. The absence of the testes or of the hormones they synthesize may lead to irregular differentiation of the genitals. Genetic abnormalities or environmental factors that influence these processes may lead to incomplete development of the gonads and genitals. These malformations can occur at any time during the development or birth of the embryo, manifesting as ambiguous genitals or a dissonance between the genotypic and phenotypic sex of the individual, leading to a late onset of puberty, amenorrhea, a lack of or excess virilization, or, later in life, infertility or early menopause.
Diagnosis and symptoms
First line diagnostic tests (prenatal)
Family history
A family history of infertility, early menopause, amenorrhea or sudden infant death syndrome (SIDS) can be a warning sign; hence, an early check-up should be conducted.
Analysis of karyotype
Peripheral blood is collected for karyotyping. This helps classify the patient in one of the three main categories of DSD: chromosomal variation, gonadal development disorders and abnormal genital development.
Abdominal ultrasounds
The presence of gonads, uterus and vagina should be monitored. This can be done through abdominal ultrasounds. However, the absence of these sex organs will lead to difficulties in gender identification.
Second line diagnostic tests (postnatal)
Physical Examination
Inspection of the genitalia with care and palpation must be conducted with the following points in mind.
Determining the degree of virilization or masculinisation:
In a female fetus, the Prader scale should be used to assess the extent of virilisation if the karyotyping results are not yet available.
In males, the external masculinization score should be used.
Palpation of gonads from the labioscrotal fold to the abdomen (inguinal canal).
Hydration and blood pressure assessment should be conducted.
Additional dysmorphic features should be ruled out, because genital malformations can occur as part of multiple malformation syndromes.
Evaluation of hormones 48 hours after birth
17-Hydroxyprogesterone can be used to screen for congenital adrenal hyperplasia (CAH). This is commonly found in patients with 46, XX DSD.
Dehydroepiandrosterone (DHEA) in addition to progesterone allows for the diagnosis of more uncommon forms of CAH and other inherited disorders.
Baseline testosterone, follicle-stimulating hormone (FSH) and luteinising hormone (LH) levels serve as markers in individuals with 46,XX DSD. These tests are conducted from thirty hours after birth up to between fifteen and ninety days after birth; the data collected within this time frame can be used to gauge the infant's development when it reaches six months of age.
Basal cortisol levels and adrenocorticotropic hormone (ACTH) is essential in diagnosing panhypopituitarism and enzymatic disorders affecting adrenal steroidogenesis.
The anti-Müllerian hormone is used for evaluating the function of Sertoli cells.
A urinary steroid profile shows the ratio of precursor metabolites to their products in measured urine concentrations, indicating which enzyme defect is the cause of the anomaly. This is a more specific procedure for detecting the defect than analysing blood.
Treatment and management
The treatment and/or management of DSDs with atypical genitalia will vary from person to person. This may include gender affirmation surgery, medical treatment and surgical treatment.
Gender affirmation surgery
Gender affirmation plays a critical role in the management of sexual anomaly cases. Ultimately, the parents and a multidisciplinary team are responsible for assigning the sex that best affirms the gender of the person concerned. Current guidelines for gender affirmation consider the psychosocial outcomes reported in adults with the same etiological diagnosis, the potential for fertility, the surgical options available, and the need for hormone replacement therapy during puberty.
Other factors are also considered during this process, including cultural and religious factors as well as the implications for the individual in later life. The process is overseen by reference centers with teams specialised in managing cases of sexual anomalies.
Medical treatment
Hormonal treatment is an accepted and standardised approach to treating various congenital sexual anomalies. Patients who are deficient in hormones produced by the adrenal glands require immediate medical attention and are given hydrocortisone as a form of hormone replacement therapy; hormone replacement is also used with the objective of inducing puberty.
The use of sex steroids as hormonal therapy is considered controversial, with concerns over the timing of initiation, dosage and regimen. However, most clinicians agree that low doses of hormonal treatment should begin at around 11 to 12 years of age and be increased progressively.
Surgical treatment
Surgical procedures are an alternative to hormonal treatment available for patients to address genital anomalies and improve the body's sexual functions. However, a common dilemma in these procedures is that they are often derived from the patient's expectation of 'normal' genitals from an aesthetic and functional standpoint. Oftentimes, this leads to extensive surgical interventions.
In most cases, surgical procedures result in permanent changes to the appearance and function of the patient's body. Therefore, the decision to proceed must be a joint agreement between the family and the multidisciplinary team, and ideally the patient should be part of the decision-making process. Cases where surgical treatments were performed at an early age have, however, been condemned as mutilation of the body. Subsequently, it has become increasingly common to defer surgical treatments until the patient is old enough to be involved in the decision-making process.
Controversy and implications
Even though the term disorder of sex development (DSD) is widely accepted by the medical community, its suitability and adequacy to represent these individuals are criticised by many support and advocacy groups. Firstly, the word 'disorder' carries negative connotations. Secondly, under the current nomenclature, DSD is an overly generalised term that also covers conditions without differences in genital appearance or gender identity (e.g. Klinefelter syndrome and Turner syndrome). Thirdly, the term 'DSD' lacks specificity and clarity, and is therefore unhelpful in the diagnostic process. Hence, many support groups and advocates believe that the medical community should discontinue the use of 'DSD' as a designation.
Furthermore, people who live with conditions involving sexual abnormalities may encounter various mental and physical health problems. These may include traumatic experiences with their own bodies, dissatisfaction with body image, low self-esteem, anxiety, depression, bipolar disorders, eating disorders, personality disorders, schizophrenia spectrum disorders, and trauma- and stress-related disorders, among others.
See also
Intersex medical interventions
Intersex healthcare
Patient support group
References
Sexuality
Sex organs
Diseases and disorders | Sexual anomalies | [
"Biology"
] | 3,012 | [
"Behavior",
"Sexuality",
"Sex"
] |
67,265,144 | https://en.wikipedia.org/wiki/Safe%20Water%20System | The Safe Water System (SWS) is a series of inexpensive technologies that can be applied as water quality interventions in developing countries. It was developed in conjunction by the US Centers for Disease Control and Prevention and the Pan American Health Organization. As of 2014, SWS had been implemented in thirty-five countries.
Background
As of 2012, 780 million people lacked access to an improved water source and 2.5 billion people (half of all people in developing countries) lacked access to adequate sanitation. Inadequate water sanitation is a public health hazard, as it is a major source of diarrheal illnesses such as cholera. Diarrheal illnesses are a significant source of child mortality, killing more children than measles, malaria, and AIDS combined. For children under five, diarrheal disease is the second-leading cause of death worldwide.
History and methods
In 1992, the US Centers for Disease Control and Prevention (CDC) and the Pan American Health Organization collaborated to reduce waterborne diseases in developing countries. They called the new methodology the Safe Water System (SWS); it consisted of three components:
Water treatment at point of use with a locally made diluted bleach solution
Preventing recontamination of water by safely storing treated water in containers with narrow mouths, lids, and spigots
Education to improve the handling and sanitation of food and water
From 1994 to 1995, the CDC implemented the SWS in Bolivia in a pilot experiment, where it improved water quality and reduced diarrheal illness by 40%. Following the success of the program in Bolivia, the CDC received permission from the Zambian Ministry of Health to conduct field trials in 1998 in Kitwe, Zambia. Compared to the control group, the households that received the SWS and education on best hygiene practices experienced a 48% reduced risk of diarrheal disease. In response to marketing efforts by the CDC, the sanitizing solution, sold as Clorin, experienced a steep increase in demand in Zambia. In 1999, about 187,000 bottles of Clorin were sold; in 2004, over 1.8 million bottles were sold. Each bottle sanitizes enough water for one month for a family of six. Clorin is subsidized by the United States Agency for International Development (USAID); as of 2003, each bottle is sold for US$0.09, with USAID paying $0.33 per bottle (each bottle therefore has a net cost of $0.24 to USAID).
Household water treatment now encompasses other methods, such as use of flocculants that cause contaminants within water to sink to the bottom of a container or float at the top where they can be more easily removed. Methods like disinfectant powder, solar water disinfection, ceramic filtration, and slow sand filtration are also incorporated.
Impact
From 1998 to 2014, the CDC implemented the SWS program in thirty-five countries. During this period, they distributed enough sanitizing agents to treat over 137 billion liters of water. Products that the CDC has distributed as part of the Safe Water System include the three-component system initially piloted in Bolivia, as well as water treatment tablets. SWS has been implemented in the following countries:
Afghanistan
Angola
Benin
Botswana
Burkina Faso
Burma (Myanmar)
Burundi
Cambodia
Cameroon
Côte d'Ivoire
Democratic Republic of Congo
Dominican Republic
Eswatini
Ethiopia
Guinea
Haiti
India
Kenya
Liberia
Madagascar
Malawi
Mali
Mozambique
Namibia
Nepal
Nigeria
Pakistan
Papua New Guinea
Republic of Congo (Brazzaville)
Rwanda
Senegal
South Sudan
Swaziland
Tanzania
Uganda
Uzbekistan
Vietnam
Zambia
Zimbabwe
Because the goal of SWS interventions is to reduce the incidence of water-borne illness, SWS technologies do not mitigate other hazards in water, such as chemical contaminants. Studies of SWS interventions showed a reduction of diarrhea by 24% in Bangladesh, 25% in Guatemala, and 30% among people with HIV in rural Uganda.
References
Drinking water
Water treatment
Sanitation
Waterborne diseases | Safe Water System | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 815 | [
"Water treatment",
"Environmental engineering",
"Water technology",
"Water pollution"
] |
67,265,858 | https://en.wikipedia.org/wiki/Double-strand%20break%20repair%20model | A double-strand break repair model refers to the various models of pathways that cells undertake to repair double strand-breaks (DSB). DSB repair is an important cellular process, as the accumulation of unrepaired DSB could lead to chromosomal rearrangements, tumorigenesis or even cell death. In human cells, there are two main DSB repair mechanisms: Homologous recombination (HR) and non-homologous end joining (NHEJ). HR relies on undamaged template DNA as reference to repair the DSB, resulting in the restoration of the original sequence. NHEJ modifies and ligates the damaged ends regardless of homology. In terms of DSB repair pathway choice, most mammalian cells appear to favor NHEJ rather than HR. This is because the employment of HR may lead to gene deletion or amplification in cells which contains repetitive sequences. In terms of repair models in the cell cycle, HR is only possible during the S and G2 phases, while NHEJ can occur throughout whole process. These repair pathways are all regulated by the overarching DNA damage response mechanism. Besides HR and NHEJ, there are also other repair models which exists in cells. Some are categorized under HR, such as synthesis-dependent strain annealing, break-induced replication, and single-strand annealing; while others are an entirely alternate repair model, namely, the pathway microhomology-mediated end joining (MMEJ).
Causes
DSBs can occur naturally due to reactive species generated by metabolism, and due to various external factors (e.g. ionizing radiation or chemotherapeutic drugs).
In mammalian cells, numerous cellular processes induce DSBs. Firstly, DNA topological strain from topoisomerases during normal cell growth causes the majority of a cell's DSBs. Secondly, cellular processes such as meiosis and the maturation of antibodies cause nuclease-induced DSBs. Thirdly, the cleavage of DNA structures such as reversed or blocked DNA replication forks, R-loops and DNA interstrand crosslinks can also cause DSBs.
Different models
Homologous recombination
Homologous recombination involves the exchange of DNA material between homologous chromosomes. There are multiple HR pathways for repairing DSBs, which include double-strand break repair (DSBR), synthesis-dependent strand annealing (SDSA), break-induced replication (BIR), and single-strand annealing (SSA).
The regulation of HR in mammalian cells involves key HR proteins such as BRCA1 and BRCA2. As mentioned, HR can lead to aggressive chromosomal rearrangement and loss of genetic information that could contribute to cell death, which explains why it is strictly regulated.
Double-strand break repair
HR repairs DSBs by copying intact, homologous DNA molecules. The blunt ends of the DSB are processed into ssDNA with 3' extensions, to which RAD51 recombinase (the eukaryotic homologue of prokaryotic RecA) binds to form a nucleoprotein filament. The function of the filament is to locate the template DNA and form a joint heteroduplex molecule. Other proteins such as RP-A and RAD52 also coordinate heteroduplex formation: RP-A has to be removed for RAD51 to form the filament, whereas RAD52 is a key HR mediator. Afterwards, the 3' ssDNA invades the template DNA and displaces a DNA strand to form a D-loop. DNA polymerase and other accessory factors then replace the missing DNA via DNA synthesis. Ligase seals the DNA strand breaks, resulting in the formation of two Holliday junctions. The recombined DNA strands then undergo resolution by cleavage; the orientation of the cleavage determines whether the resolution results in crossover or noncrossover products. Lastly, the strands separate and revert to their original form.
The main pathway for resolution relies on the BTR (BLM helicase-Topoisomerase IIIα-RMI1-RMI2) complex, which induces the resolution of the two Holliday junctions; this pathway favors noncrossover products.
Synthesis-dependent strand annealing
Synthesis-dependent strand annealing is the preferred repair mechanism in somatic cells. The SDSA pathway is similar to DSBR up to just after D-loop formation. Instead of forming Holliday junctions after DNA synthesis, the nascent strand is dissociated by the RTEL1 helicase and anneals back to the other end of the resected strand, which explains why SDSA is a non-crossover pathway. The remaining gap is filled in and the nick is sealed by ligase.
Break-induced replication
Although there is little research regarding break-induced replication, it is known to be a one-ended recombination mechanism, in which only one of the two ends of a DSB is involved in strand invasion. This means that, unlike DSBR, BIR does not link back to the second DSB end after strand invasion and replication.
Single-strand annealing
Single-strand annealing involves homologous/repeated sequences flanking a DSB. The process starts with the key end-resection factor CtIP, which mediates end resection of the DSB, resulting in the formation of a 3' ssDNA extension. Mediated by RAD52, the flanking homologous sequences are annealed and form a synaptic intermediate. The nonhomologous 3' extensions are then removed by the ERCC1-XPF complex through endonucleolytic cleavage, with RAD52 increasing the efficiency of ERCC1-XPF activity. Only after removal of the 3' ssDNA does polymerase fill the missing gaps and ligase join the strands. Since SSA results in the deletion of repetitive sequences, it is potentially error-prone.
Single-strand annealing differs from SDSA and DSBR in several ways. For instance, the 3' extension produced by end resection in SSA anneals to the repeated/homologous sequence at the other end, whereas in the other pathways the strand invades a homologous DNA template. Moreover, SSA does not require RAD51, because it involves the annealing of homologous sequences rather than strand invasion.
Non-homologous end joining
Non-homologous end joining (NHEJ) is one of the major DSB repair pathways besides HR. The basic concept of NHEJ involves three steps: first, the ends of the DSB are captured by a group of enzymes; these enzymes then form a bridge connecting the DSB ends; and lastly the DNA strands are religated. To initiate the process, the Ku70/80 protein complex binds to the damaged ends of the DSB. This forms a preliminary scaffold that allows the recruitment of various NHEJ factors, such as the DNA-dependent protein kinase catalytic subunit (DNA-PKcs), DNA Ligase IV and X-ray cross complementing protein 4 (XRCC4), to form a bridge and bring both ends of the damaged DNA together. Any non-ligatable DNA termini are then processed by a group of proteins including Artemis, PNKP, APLF and Ku, before XRCC4 and DNA Ligase IV ligate the bridged DNA.
Microhomology-mediated end joining
Microhomology-mediated end joining (MMEJ), also known as alternative non-homologous end joining, is another pathway to repair DSBs. The process of MMEJ can be summarized in five steps: 5' to 3' resection of the DNA ends, annealing of microhomologies, removal of heterologous flaps, gap-filling DNA synthesis, and ligation. The choice between MMEJ and NHEJ was found to depend mainly on Ku levels and the phase of the cell cycle.
The regulation of double-strand break repair pathways
DNA damage response
DNA damage response (DDR) is the overarching mechanism that mediates the cell's detection of and response to DNA damage. This includes the detection of DSBs within the cell and the subsequent triggering and regulation of DSB repair pathways. Upstream detection of DNA damage via the DDR leads to the activation of downstream responses such as senescence, apoptosis, halted transcription and activation of DNA repair mechanisms. Proteins such as ATM, ATR and DNA-dependent protein kinase (DNA-PK) are vital for the detection of DSBs in the DDR and are recruited to the DSB site in the DNA. In particular, ATM has been identified as the protein kinase in charge of the global mediation of cellular responses to DSBs, including the various DSB repair pathways. Following their recruitment to DNA damage sites, these proteins in turn trigger cellular responses and repair pathways to mitigate and repair the damage. In short, these upstream proteins and downstream repair pathways together form the DDR, which plays a vital role in the regulation of DSB repair pathways.
Fanconi anemia complex in one DNA damage response pathway
The image in this section illustrates molecular steps in a DNA damage response pathway in which a Fanconi anemia complex is activated during repair of a double-strand break. ATM (ataxia telangiectasia mutated) is also a protein kinase that is recruited and activated by DNA double-strand breaks. DNA double-strand damage activates the Fanconi anemia core complex (FANCA/B/C/E/F/G/L/M). The FA core complex monoubiquitinates the downstream targets FANCD2 and FANCI. ATM activates (phosphorylates) CHEK2 and FANCD2, and CHEK2 phosphorylates BRCA1. Ubiquitinated FANCD2 forms a complex with BRCA1 and RAD51. The PALB2 protein acts as a hub, bringing together BRCA1, BRCA2 and RAD51 at the site of a DNA double-strand break, and also binds to RAD51C, a member of the RAD51 paralog complex RAD51B-RAD51C-RAD51D-XRCC2 (BCDX2). The BCDX2 complex is responsible for RAD51 recruitment or stabilization at damage sites. RAD51 plays a major role in homologous recombinational repair of DNA during double-strand break repair. In this process, an ATP-dependent DNA strand exchange takes place in which a template strand invades base-paired strands of homologous DNA molecules. RAD51 is involved in the search for homology and the strand-pairing stages of the process.
Double-strand break repair pathway choice
As cells have developed various DSB repair models, it is said that specific pathways are favoured for their ability to repair DSB depending on the cellular context. These conditions include the type of DSB involved, the species of cells involved, and the stage of the cell cycle.
In various types of DSB
Cells have evolved a multitude of DSB repair pathways in response to the various types of DSB, and different pathways are favoured in different situations. For instance, frank DSBs, which are DSBs induced by agents such as ionizing radiation and nucleases, can be repaired by both HR and NHEJ. On the other hand, DSBs due to replication fork collapse are repaired mainly by HR.
In higher eukaryotes and yeast cells
The favoured pathway in a particular situation is also largely dependent on the species, the cell type, and the cell cycle phase, all of which are modulated and triggered by different upstream regulatory proteins. Compared with higher eukaryotes, yeast cells have adopted HR as the main repair pathway for DSBs. Imprecise NHEJ, the primary NHEJ pathway for repairing the "dirty" ends produced by ionizing radiation, was found to be inefficient at repairing DSBs in yeast cells. This inefficiency relative to mammalian cells has been hypothesized to stem from the lack of three vital NHEJ proteins: DNA-PKcs, BRCA1, and Artemis. In contrast to yeasts, higher eukaryotes use NHEJ with much higher frequency and efficiency. Researchers hypothesize that this is due to the higher eukaryotes' larger genome size, which means that more NHEJ-related proteins are encoded for NHEJ repair pathways, and which also makes it more challenging to find a homologous template for HR.
In cell cycle
HR and NHEJ are favoured in different phases of the cell cycle for a number of reasons. Because the S and G2 phases of the cell cycle generate sister chromatids, the increased availability of templates for HR results in up-regulation of that pathway. This rise is further increased by the activation of CDK1 and the increase of RAD51 and RAD52 levels during the G1 phase. Despite this, NHEJ is not inactive during HR up-regulation; in fact, NHEJ has been shown to be active throughout all stages of the cell cycle and is favoured in G1 phase, when resection activity is low. This suggests competition between HR and NHEJ for DSB repair in cells. It should be noted, however, that there is a shift of preference from NHEJ to HR as the cell cycle progresses from G1 to the S/G2 phases in eukaryotic cells.
During meiosis
In diploid eukaryotic organisms, the events of meiosis can be viewed as occurring in three steps. (1) Haploid gametes undergo syngamy/fertilisation with the result that chromosome sets of different parental origin come together to share the same nucleus. (2) Homologous chromosomes originating from different cells (i.e. non-sister chromosomes) align in pairs and undergo recombination involving double-strand break repair. (3) Two successive cell divisions (without duplication of chromosomes) result in haploid gametes that can then repeat the meiotic cycle. During step (2), damages in DNA of the germline can be removed by double-strand break repair. In particular, double-strand breaks in one duplex DNA molecule can be accurately repaired using information from a homologous intact DNA molecule by the process of homologous recombination.
Defective DSB repair
Although there is no universal model to explain disease etiology caused by DNA repair deficiency, it is said that the accumulation of unrepaired DNA damage may lead to various diseases, including various metabolic syndromes and types of cancers. Some examples of diseases caused by defects of DSB repair mechanisms are listed below:
Fanconi Anemia (FA) and Hereditary breast and ovarian cancer (HBOC) syndrome are caused by defects in homologous recombination. Biallelic mutation of either BRCA1/2 gene results in the loss of homologous recombination activity.
Chordomas, a rare bone tumour, might suggest defects in homologous recombination and mutations affecting HR-related genes.
Defects in the NHEJ mechanism are related to the mutations in hRAD50 and/or hMRE11 genes in mismatch repair deficient tumors.
Aging
Women tend to live longer than men, and the gender gap in life expectancy suggests differences in the ageing process between the sexes. Sex-specific differences in DNA double-strand break repair have been studied in cycling human lymphocytes during aging; the repair of DNA double-strand breaks was found to change with age, and the changes are distinct in men and women.
Cancer
Activation of gene transcription during oncogenesis is often associated with the introduction of DNA double-strand breaks and their repair by a process employing RAD51. This transcription-coupled DNA repair tends to occur in specific regions of the DNA termed super-enhancers.
See also
DNA damage & repair
Homologous Recombination
Synthesis-dependent strand annealing
Non-homologous end joining
Microhomology-mediated end joining
Cell cycle
DNA synthesis
References
DNA repair
Biological models | Double-strand break repair model | [
"Biology"
] | 3,422 | [
"Cellular processes",
"Molecular genetics",
"Biological models",
"DNA repair"
] |
67,267,123 | https://en.wikipedia.org/wiki/Nicos%20Kouyialis | Nicos Kouyialis is a Greek Cypriot politician. He served as Minister of Agriculture, Rural Development and Environment of Cyprus between 2013 and 2018.
Biography
Nicos Kouyialis was born in Nicosia, Cyprus on March 30, 1967. He studied in the USA and holds a BSc and an MSc in Electrical Engineering from North Carolina State University (NCSU). He was Chairman of the Institution of Engineering and Technology (IET), UK, and an elected member of the IET World Council; he also served as chairman of the Institution of Electrical Engineers (IEE), UK. He was active in trade unionism as Assistant Secretary of the Professional Employees Union of the Electricity Authority of Cyprus (SEPAIK). He worked at the Electricity Authority of Cyprus as an electrical engineer, and previously worked for IBM at Research Triangle Park, NC, USA; ALCATEL Network Systems, NC, USA; Siemens Cyprus; and North Carolina State University as a lecturer in telecommunications.
Political career
He was a member of the European Party (Cyprus), of which he was also Vice-President. On March 1, 2013, Nicos Kouyialis was appointed Minister of Agriculture, Rural Development and Environment of Cyprus in the government of Nicos Anastasiades. In August 2013, Cyprus signed an agreement with Greece and Israel to link the three countries' electricity grids via an underwater cable and advance the Energy Triangle plan. On April 22, 2016, Nicos Kouyialis represented Cyprus at the United Nations and signed the Paris Agreement on climate change. He led government programs aimed at the reduction, reuse and recycling of waste, which contributed to a marked change in the public's perception of recycling. During his term he was actively involved in developing and applying EU policies to tackle water scarcity and climate change, and he led efforts to obtain Protected Designation of Origin status for several Cypriot agricultural products and foodstuffs, including Cyprus ‘Χαλλουμι’ (Halloumi)/‘Hellim’ cheese.
References
1967 births
Living people
Greek Cypriot politicians
Politicians from Nicosia
North Carolina State University alumni
Electrical engineers
Ministers of agriculture, natural resources and the environment of Cyprus | Nicos Kouyialis | [
"Engineering"
] | 451 | [
"Electrical engineering",
"Electrical engineers"
] |
67,267,891 | https://en.wikipedia.org/wiki/Kathrin%20Altwegg | Kathrin Altwegg is an astrophysicist, who is an Associate Professor in the Department of Space Research and Planetology, and former director of the (CSH) at the University of Bern. She is a member of the International Astronomical Union.
Early life
Kathrin Altwegg was born on 11 December 1951 in Balsthal. Between 1957 and 1970, she completed her primary education in Balsthal and obtained her Swiss high school diploma at the lycée in Solothurn.
Education and research career
In 1975, she graduated studying physics at University of Basel, where she was the only woman in her year. In 1980, she obtained her doctorate in experimental physics from the University of Basel and proceeded to undertake post-doctoral research in the physics-chemistry department of the University of Technology, Design and Architecture, in New York.
In 1982, she returned to Switzerland, where she gained a position at the University of Bern, in the space exploration and planetology department. In 1996, she passed her university accreditation (habilitation) in the field of solar system physics and became project manager of the ROSINA (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis) project concerning the comet 67P/Churyumov–Gerasimenko. Since 2001, she has held a professorship, with the title of associate professor, in the department of space research and planetology at the University of Bern.
Prizes and awards
In 2015, she received the prize of the Commerce and Industry Association of Bern for her work on the Rosetta mission as well as for her commitment to young people.
References
Living people
Women astronomers
University of Basel alumni
Academic staff of the University of Bern
1951 births | Kathrin Altwegg | [
"Astronomy"
] | 350 | [
"Women astronomers",
"Astronomers"
] |
67,268,051 | https://en.wikipedia.org/wiki/Leonidas%20Zervas | Leonidas Zervas (, ; 21 May 1902 – 10 July 1980) was a Greek organic chemist who made seminal contributions in peptide chemical synthesis. Together with his mentor Max Bergmann they laid the foundations for the field in 1932 with their major discovery, the Bergmann-Zervas carboxybenzoxy oligopeptide synthesis which remained unsurpassed in utility for the next two decades. The carboxybenzyl protecting group he discovered is often abbreviated Z in his honour.
Throughout his life Zervas also served in many important posts, including President of the Academy of Athens or briefly Minister of Industry of Greece. He received numerous awards and honours during his life and posthumously, such as Foreign Member of the USSR Academy of Sciences or the first Max Bergmann golden medal.
Biography
Early life and career abroad
Zervas was born in 1902 in the rural town of Megalopolis in Arcadia, southern Greece. He was the first of 7 children of lawyer and parliamentarian Theodoros Zervas with Vasiliki Zerva (née Gyftaki). After finishing secondary education at the local Gymnasion of Kalamata in 1918, he went to study Chemistry at the University of Athens. Before finishing his studies there, he moved to Berlin in 1921 where he graduated with a degree in chemistry from the University of Berlin in 1924.
Under the supervision of Max Bergmann, he finished his doctoral thesis on the reactions of amino acids with aldehydes and was awarded his Dr. rer. nat. from the University of Berlin in 1926. He proceeded to work with Bergmann in the Kaiser Wilhelm Institute for Leather Research in Dresden, of which Bergmann was the founder and director. From 1926 to 1929 Zervas was a research associate and eventually rose to head of the organic chemistry division and vice-director of the institute (1929–1934). It was at this period that the two men developed the Bergmann-Zervas oligopeptide synthesis which brought them international fame within academic circles.
Zervas, by that point a close personal friend of Bergmann, decided to follow the latter to the US in 1934 after Bergmann emigrated from Nazi Germany in 1933 under pressure due to his Jewish origin. In New York, Zervas spent 3 years as lecturer and researcher at the Rockefeller Institute for Medical Research.
In 1930, he married Hildegard Lange, and they remained together until his death.
Return to Greece
After his Berlin, Dresden and New York years, Zervas decided to return to Greece in 1937. He was immediately appointed full Professor of Organic Chemistry and Biochemistry at the Aristotle University of Thessaloniki in recognition of his distinguished international work. He stayed in this position until 1939, when he was invited to the Professorship of Organic Chemistry at the University of Athens and also appointed director of the Laboratory of Organic Chemistry of the same institution. He continued conducting research, despite the severe limitations he often faced from the lack of equipment and funding. Concurrent to research, Zervas taught organic chemistry, oversaw the laboratory and guided many generations of young chemists as doctoral advisor for the 29 years he held the post at the University of Athens.
During the Axis occupation of Greece Zervas played an active part in the Greek Resistance as a member of EDES; he was imprisoned twice, first by the Italian and then by the German occupying forces, and his laboratory was destroyed. Following the liberation of Greece, Zervas managed to secure a small part of the American postwar aid for repairs in the University of Athens and the Athens Polytechnic, and thus rebuilt his laboratory in 1948–1951.
In the following years, guided by a sense of personal and professional duty, Zervas voluntarily took on a variety of responsibilities within the Greek state. At his own insistence, he never got paid for these posts and kept receiving only his professorial salary. Some notable positions he held in chronological order until 1968 include:
Member of the State Committee on Vocational Education (1948–1951)
Member of numerous committees for the foundation of new industries in postwar Greece (throughout the 1950s)
First Vice-President of the National Hellenic Research Foundation (1958–1968), of which he was a key founder
Minister of Industry in the Paraskevopoulos technocratic caretaker government (1963–1964)
President of the Greek Atomic Energy Commission (1964–1965)
The democratic ideals of Zervas made him a target of the military junta established in 1967, which removed him from his position in the University of Athens in 1968 after almost three decades of dedicated research and teaching. In response, the Academy of Athens of which Zervas had been a member since 1956 elected him as its president in 1970. After his term as President of the Academy, Zervas retired in 1971.
Later years
With the restoration of democracy in 1974, Zervas was able to contribute once more to research and educational policy. As previously, refusing to take a salary for these positions, he served a second time as the President of the Greek Atomic Energy Commission (1974–1975) and then as the President of the National Hellenic Research Foundation (1975–1979).
Zervas had suffered from periodic issues with respiratory health throughout his adult life, but in his final years the situation deteriorated. The extended use of phosgene in his research has been implicated as the cause of this chronic pulmonary disease. He showed perseverance and a pleasant attitude despite his health issues, continuing to attend meetings of the Academy of Athens until the very end of his life. This came in the summer of 1980 after an acute pulmonary episode, which lasted three weeks before he died at the age of 78.
Contribution to Chemistry
The enduring contributions of Zervas were made together with Bergmann and involved the first successful synthesis of substantial length oligopeptides. They achieved this using the carboxybenzyl amine protecting group for the masking of the N-terminus of the growing oligopeptide chain to which amino acid residues are added in a serial manner. The carboxybenzyl group discovered by Zervas is introduced by reaction with benzyl chloroformate, originally in aqueous sodium carbonate solution at 0 °C:
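(The original reaction scheme is not reproduced here; the following is a schematic summary, with R–NH2 standing for the free α-amino group of an amino acid or peptide.)

\[
\mathrm{R{-}NH_2} \;+\; \mathrm{C_6H_5CH_2O{-}CO{-}Cl}
\;\xrightarrow{\;\mathrm{Na_2CO_3,\ H_2O,\ 0\,^{\circ}C}\;}\;
\mathrm{C_6H_5CH_2O{-}CO{-}NH{-}R} \;+\; \mathrm{HCl}
\]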
The protecting group is abbreviated Cbz or, in honour of Zervas, simply Z. The typical route for deprotection involves hydrogenolysis under mild conditions e.g. with hydrogen gas and a catalyst such as palladium on charcoal.
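Schematically (with the same convention for R as above), the hydrogenolytic removal of the Z group releases the free amine together with toluene and carbon dioxide:

\[
\mathrm{C_6H_5CH_2O{-}CO{-}NH{-}R} \;+\; \mathrm{H_2}
\;\xrightarrow{\;\mathrm{Pd/C}\;}\;
\mathrm{R{-}NH_2} \;+\; \mathrm{C_6H_5CH_3} \;+\; \mathrm{CO_2}
\]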
The discovery of the Bergmann-Zervas synthesis has been characterised as "epoch-making", as it allowed the advent of controlled synthetic peptide chemistry, completing the work started in the early 20th century by Bergmann's mentor Emil Fischer. Oligopeptides with highly specific sequences and reactive side chains, previously impossible to synthesise, were consequently produced in the 1930s by Bergmann and Zervas. The two chemists also noted the ability of Z-protection to prevent racemization of activated derivatives of the protected amino acids, and the importance thereof.
Indeed, their method remained the standard in the field for the following two decades, until further developments in the 1950s, including the mixed anhydride coupling method and new protecting groups (e.g. the Boc group).
Zervas continued his research on peptide synthesis in New York and later in Greece. The first topic of his research once in Greece was the synthesis of N- or O-phosphorylated amino acids, in which he demonstrated the utility of dibenzyl chlorophosphonate. He continued his efforts on the development of new methods within peptide chemistry, including the introduction of the o-nitrophenylsulfenyl (NPS) amino protecting group and peptide synthesis using N-tritylamino acids.
One of the major issues which occupied his interests was the chemical synthesis of insulin after its characterisation by Frederick Sanger (1951). The insulin peptide hormone features two protein chains cross-linked by disulfide bridges between cysteine thiols. For this reason, Zervas undertook a systematic study of asymmetric cysteine-containing peptides. In these attempts he introduced new mercaptan protecting groups (e.g. trityl, benzhydryl or benzoyl), which finally made it possible to form disulfide bridges in a controlled manner. This was a triumph for peptide chemistry in the lab, but could not possibly be scaled to industrial procedures. Building on this work, the first complete synthesis of insulin was achieved simultaneously in 1963 at RWTH Aachen University by Helmut Zahn and at the University of Pittsburgh by Panayotis Katsoyannis, a student of Zervas. Further work on asymmetrical cysteine polypeptides was also done in Athens by Iphigenia Photaki, another of his students.
Overall, the research work of Zervas spans across six decades (1925–1979) and amounts to 96 publications in international chemistry journals.
Honours and legacy
The scientific work of Leonidas Zervas had a global resonance and his contribution was recognised by multiple awards throughout his life. In 1960 he received an honorary doctorate from the University of Basel on the occasion of the university's 500th anniversary, upon recommendation of Hans Erlenmeyer and Nobel laureate Tadeusz Reichstein. In 1969 he was bestowed honorary membership of the American Society of Biological Chemists. In 1976 he was conferred the (1st class) by the Socialist Republic of Romania. In the same year Zervas was made Foreign Member of the USSR Academy of Sciences, an indication of the great respect for his work in the Eastern Block, too. The Max-Bergmann-Kreis company of German peptide chemists planned to present Zervas with the first Max Bergmann golden medal for peptide chemistry in 1980, but his sudden death necessitated a posthumous award ceremony.
In honour of Zervas, a commemorative bust has been unveiled in his birthtown Megalopolis in 1991 and the main conference hall of the National Hellenic Research Foundation is called the "Leonidas Zervas amphitheatre".
The European Peptide Society has established the Leonidas Zervas Award "in commemoration of his outstanding contributions to peptide science", awarded biennially since 1988. The award is given to the "scientist who has made the most outstanding contributions to the chemistry, biochemistry and/or biology of peptides in the five years preceding the date of selection".
References
Greek chemists
Organic chemists
1902 births
1980 deaths
20th-century Greek scientists
Humboldt University of Berlin alumni
Max Planck Society people
Rockefeller University faculty
Academic staff of the Aristotle University of Thessaloniki
Academic staff of the National and Kapodistrian University of Athens
National Republican Greek League members
Members of the Academy of Athens (modern)
Foreign members of the USSR Academy of Sciences
Government ministers of Greece
Scientists from Kalamata
People from Megalopoli, Greece | Leonidas Zervas | [
"Chemistry"
] | 2,205 | [
"Organic chemists"
] |
67,268,300 | https://en.wikipedia.org/wiki/Windows%20Server%202022 | Windows Server 2022 is the thirteenth major version of the Windows NT operating system produced by Microsoft to be released under the Windows Server brand name. It was announced at Microsoft's Ignite event from March 2–4, 2021. It was released on August 18, 2021, almost 3 years after Windows Server 2019, and a few months before the Windows 11 operating system.
Windows Server 2022 is based on the "Iron" codebase. It is similar to Windows 10 21H2, but its updates are incompatible with it. Like its predecessor, Windows Server 2019, it requires x64 processors.
It was succeeded by Windows Server 2025 on November 1, 2024.
History
Microsoft announced Windows Server 2022 on February 22, 2021, scheduled for March 2. On March 3, Microsoft started distributing preview builds on Windows Update. Windows Server 2022 reached general availability on August 18, 2021.
In June 2022, as a part of its monthly schedule for preview updates (also known as the "C updates"), Microsoft released KB5014665 to test upcoming fixes for Windows Server 2022. The update aimed to address connectivity issues with RDP, RRAS, SSTP VPN clients, and Wi-Fi hotspots.
Features
Windows Server 2022 has the following features:
Security
Enhanced boot-time security via TPM 2.0 and System Guard (a component of Microsoft Defender Antivirus)
Credential Guard
Hypervisor-protected Code Integrity (HVCI)
UEFI Secure Boot
Protection against malicious attacks via the DMA path
DNS over HTTPS
AES-256 encryption of SMB traffic
SMB over QUIC instead of TCP
Storage
Storage Migration Service (SMS)
Compression of SMB traffic
Cloud
Azure hybrid capabilities
Software
Microsoft Edge
Editions
Essentials
Only available through Microsoft OEM partners
Intended for small businesses
Supports a maximum of 25 users and 50 devices
Requires no client access licenses (CALs)
Standard
Intended for physical or minimally virtualized environments
Licensed for up to two virtual machines plus one Hyper-V host
Datacenter
Intended for highly virtualized data centers and cloud environments
Azure Datacenter
Designed for the Microsoft Azure platform
Hardware requirements
References
External links
2021 software
2022
X86-64 operating systems
Server 2022 | Windows Server 2022 | [
"Technology"
] | 454 | [
"Computing platforms",
"Microsoft Windows"
] |
67,268,699 | https://en.wikipedia.org/wiki/Constipatic%20acid | Constipatic acid is a fatty acid found in several lichen species. It was isolated, identified, and named by Douglas Chester and John Alan Elix in a 1979 publication. The compound was extracted from the Australian leafy lichen called Xanthoparmelia constipata (after which the compound is named), which was collected on schist boulders west of Springton, South Australia. The related compounds protoconstipatic acid and dehydroconstipatic acid were also reported concurrently. Syo Kurokawa and Rex Filson had previously detected the compounds using thin-layer chromatography when they formally described the lichen as a new species in 1975, but had not characterised them chemically.
After conversion of constipatic acid to methyl constipatate, a mass spectrum of the compound revealed four diagnostic peaks at mass-to-charge ratios (m/e) of 367, 338, 279 and 169. The peaks correspond to the cleavage of a methyl group, the 1-hydroxyethyl moiety, the methoxycarbonyl group (i.e. CH3-O-CO-) and 1-cleavage of the side chain. Additional analysis with proton nuclear magnetic resonance corroborated these results and confirmed the linear nature of the aliphatic chain.
In addition to Xanthoparmelia constipata, constipatic acid has been isolated from several other Xanthoparmelia species, including X. perezdepazii, X. filarskyana, X. flavescentireagens, X. lineola, and X. metaclystoides. It has been isolated from lichens in other genera as well. Examples include Parmelia xanthosorediata, Heterodermia appendiculata, Heterodermia japonica, Protoparmelia nebulosa, Hertelidea wankaensis, Lepraria coriensis, Punctelia negata, and Rhizoplaca melanophthalma.
Some sources consider the molecule to have an unusual or humorous name due to its similarity to the word "constipation".
See also
List of chemical compounds with unusual names
References
Fatty acids
Lichen products | Constipatic acid | [
"Chemistry"
] | 470 | [
"Natural products",
"Lichen products"
] |
67,268,828 | https://en.wikipedia.org/wiki/HK%20Tauri | HK Tauri is a young binary star system in the constellation of Taurus about 434 light-years away, belonging to the Taurus Molecular Cloud.
System
The two stars of the HK Tauri system are separated by , equivalent to at the distance of HK Tauri. The primary is a pre-main sequence star with a mass of , while the secondary has a mass of .
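The projected separation of a binary follows directly from its angular separation and distance. The sketch below shows that calculation using the 434 light-year distance quoted above; the angular separation value it uses is a hypothetical placeholder, since the figure is omitted in the text.

```python
# Minimal sketch: converting an angular separation on the sky into a
# projected separation at the distance of HK Tauri (~434 light-years).
# Only the distance comes from the text; the 2.3-arcsecond separation
# used here is a HYPOTHETICAL placeholder, since the figure is omitted above.

LY_PER_PARSEC = 3.2616                      # light-years per parsec

distance_ly = 434.0                         # distance quoted in the text
distance_pc = distance_ly / LY_PER_PARSEC   # ~133 pc

separation_arcsec = 2.3                     # hypothetical angular separation

# By the definition of the parsec, 1 arcsecond at 1 parsec subtends 1 AU,
# so projected separation in AU = angle (arcsec) x distance (pc).
projected_separation_au = separation_arcsec * distance_pc

print(f"Distance: {distance_pc:.0f} pc")
print(f"Projected separation: {projected_separation_au:.0f} AU")
```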
Properties
Both members of the binary are medium-mass objects still contracting towards the main sequence and accreting mass. Their ages are probably young (below 10 million years) but cannot be estimated with any accuracy because both stars are strongly obscured by the protoplanetary disks.
Protoplanetary system
The companion star HK Tauri B is surrounded by a protoplanetary disk visible nearly edge-on. It contains water and carbon dioxide ices, along with gaseous carbon monoxide. The disk is unusually flat, with an aspect ratio of 4.4, while most young stars host disks with aspect ratios of about 3. The disk also contains relatively few large dust particles compared to fine dust, with a size-distribution power-law slope of 4.2. The disk mass is relatively small, not larger than 0.0005, and the dust distribution is asymmetric. The plane of the disk is not aligned with the orbit of the binary.
Multiple planets embedded in the disk of HK Tauri B have been suspected since 1993, although none were detected by 2020.
References
Binary stars
T Tauri stars
Circumstellar disks
Taurus (constellation)
J04315056+2424180
Tauri, HK | HK Tauri | [
"Astronomy"
] | 324 | [
"Taurus (constellation)",
"Constellations"
] |
67,269,120 | https://en.wikipedia.org/wiki/Drugs%20and%20sexual%20desire | Drugs and sexual desire is about sexual desire being manipulated through drugs from various approaches. Sexual desire is generated under the effects from sex hormones and microcircuits from brain regions. Neurotransmitters play essential roles in stimulating and inhibiting the processes that lead to libido production in both men and women. For instance, a positive stimulation is modulated by dopamine from the medial preoptic area in the hypothalamus and norepinephrine. At the same time, inhibition occurs when prolactin and serotonin are released for action.
Drugs acting on the above neurotransmitters can be used to upregulate or downregulate sexual desire in diseased conditions. During the development of drugs specialized for women, the Female Sexual Function Index-Desire Domain (FSFI-D) provides a reference measurement for researchers to evaluate recipients' responses and results. FSFI values allow researchers to monitor changes in sexual desire with a more solid definition and, at the same time, establish records for the U.S. Food and Drug Administration (FDA) to process applications for drug approval. Similarly, the Male Desire Scale (MDS) is used for men.
After symptom severity is evaluated using these scales, patients are prescribed different types of drugs. Flibanserin and bremelanotide were developed to raise sexual desire in women, whereas similar conditions in men are treated using medications for sexual dysfunction. Down-regulation of libido, on the other hand, comes in two approaches: a direct or an indirect mechanism. Multiple drugs from each category have been proven effective.
Marketed drugs have met market demand and have also spurred the development of personalized medications aimed at a broader range of recipients. Still, dilemmas over disease definition and FDA drug approvals give rise to ethical concerns, posing obstacles to the field's development.
Drugs enhancing sexual desire
These drugs are expected to restore a normal libido in patients. Targeting acquired and generalized hypoactive sexual desire disorder (HSDD), they aim to improve sexual desire and alleviate psychological stress in order to relieve the correlated symptoms. However, the treatments cannot cope with medically or psychiatrically related conditions, nor with the effects of other medications.
Specialized to premenopausal women
Flibanserin
Flibanserin is the first pharmaceutical product from Sprout Pharmaceuticals for premenopausal women with HSDD. The drug was approved by the FDA in 2015. It combines serotonin 1A receptor agonism with serotonin 2A receptor antagonism and was originally developed as an antidepressant. It is also a weak partial agonist at postsynaptic dopamine D4 receptors and functions by modulating various neurotransmitters, including dopamine, norepinephrine, and serotonin.
Pharmacodynamics
Flibanserin contains centrally active piperazine/benzimidazole-derived molecules that aim to limit forskolin-stimulated cAMP production, thereby eliminating the phosphatidyl-inositol turnover that 5-HT typically stimulates in the brain cortex. However, the precise mechanism remains unclear. The drug is estimated to target brain regions, especially the medial prefrontal cortex, hypothalamus, limbic regions, and brainstem.
Efficacy
After its 2015 launch under the brand name Addyi, the drug experienced controversy and rejection, being acknowledged only after three reviews of the clinical trials. According to the eligible prescription criteria concluded from the trials, the patient should have a diagnosed psychological pathology, medical comorbidities, and the presence of personal relationship issues. Despite being the first drug of its kind, its efficacy was not significant for treatments shorter than four weeks. Treatment withdrawal is also practised if recipients do not experience improvements in symptoms after eight weeks. Nevertheless, around 18% of the gain was observed after 24 weeks of treatment.
Adverse effects
Due to a wide range of observed side-effects, flibanserin's safety has been called into question. Clinical trials reported adverse reactions including dizziness, nausea, fatigue, and insomnia. Hypotension and central nervous system depression (somnolence) leading to sedation and sleepiness symptoms were also observed. In order to lower the chances of occurrence, the drug is usually prescribed to be taken only once per day before bed.
On the other hand, the third stage of the trial suggests that risks are derived from any moderate or strong cytochrome P-450 3A4 (CYP3A4) inhibitors that are often present in antihypertensive drugs, antiretroviral drugs, antibiotics, or fluconazole. An alcohol-interaction study was also carried out. Instead of premenopausal women, the study was on male participants, with the conclusion that alcohol may pose a risk of systolic and diastolic blood pressure reduction to recipients.
Pharmacokinetics
Flibanserin is delivered through oral administration and has a half-life of 11 hours. A steady state can be achieved after three days of treatment. The metabolites of the drug are predominantly eliminated through urination and defecation.
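The quoted half-life and time to steady state are consistent with the usual first-order elimination model. The sketch below is a minimal illustration of that arithmetic; only the 11-hour half-life and the "steady state after about three days" claim come from the text, and the kinetics model is the standard textbook approximation rather than a statement about flibanserin specifically.

```python
# Minimal sketch, assuming simple first-order elimination: how close to
# steady state a drug with an 11-hour half-life is after one to three days
# of regular dosing.

import math

half_life_h = 11.0                       # elimination half-life from the text
k_el = math.log(2) / half_life_h         # first-order elimination rate constant

for days in (1, 2, 3):
    t_h = days * 24.0
    # Fraction of the eventual steady-state level reached after time t of
    # regular dosing; under first-order kinetics it depends only on t and k.
    fraction = 1.0 - math.exp(-k_el * t_h)
    print(f"After {days} day(s): {fraction:.1%} of steady state")
```

After three days (about 6.5 half-lives) the computed fraction is roughly 99%, matching the statement that a steady state is reached after three days.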
Bremelanotide
Bremelanotide was first developed by Palatin Technologies, then out-licensed to AMAG Pharmaceuticals Inc. after its approval by the FDA on 21 June 2019. Marketed as Vyleesi, it was primarily designed for both men and women in the form of an intranasal formulation, particularly for treating male erectile dysfunction. However, that research was discontinued and the focus shifted to female HSDD, with a subcutaneous injection used to increase bioavailability. Bremelanotide is usually injected a minimum of 45 minutes before sexual activity. Despite having prescription criteria identical to those of flibanserin, bremelanotide is not recommended for pregnant patients, as no trials were conducted on pregnant patients. Therefore, patients of childbearing age are recommended to use contraception continuously during treatment and should discontinue the drug once they become pregnant.
Pharmacodynamics
This melanocortin receptor agonist is proposed to activate multiple receptor subtypes nonselectively, with the highest affinity for MC1R, followed by MC4R, MC3R, MC5R, and MC2R receptors. As MC4R receptors are present on neurons in the central nervous system, a function in modulating brain pathways is suggested, albeit the precise mechanism remains unknown.
Efficacy
Sustained and significant improvement in FSFI scores throughout a 52-week extension was attained, indicating a high efficacy of the drug in treating HSDD, most notably given the challenge posed by psychological distress.
Adverse effects
Mild or moderate adverse events are common, with symptoms such as nausea, facial flushing, headache, and sunburn. Unlike flibanserin, bremelanotide induces fewer side effects and neither is affected by nor develops severe complications with alcohol. However, the drug slows gastric emptying, which can affect oral drug uptake and the subsequent drug effectiveness. As activation of MC1R gives rise to hyperpigmentation, treatment dosage is limited to a maximum of eight doses per month. In addition, to minimize the chance of cardiovascular complications, the prescribed dosage is at most one dose per day.
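The two dosing limits above (at most one dose per day and eight per month) are simple constraints that can be checked programmatically. The sketch below is only an illustration of that rule, not clinical software; the dose log and dates are hypothetical.

```python
# Minimal sketch of the dosing limits quoted above for bremelanotide:
# at most one dose per day and at most eight doses per month. The dose
# log below is a hypothetical example, not patient data.

from collections import Counter
from datetime import date

def dose_allowed(dose_log: list[date], proposed: date,
                 max_per_day: int = 1, max_per_month: int = 8) -> bool:
    """Check a proposed dose date against the per-day and per-month limits."""
    per_day = Counter(dose_log)[proposed]
    per_month = sum(1 for d in dose_log
                    if (d.year, d.month) == (proposed.year, proposed.month))
    return per_day < max_per_day and per_month < max_per_month

log = [date(2024, 5, d) for d in (2, 5, 9, 12, 16, 20, 23, 27)]  # 8 doses already
print(dose_allowed(log, date(2024, 5, 30)))  # False: monthly limit reached
print(dose_allowed(log, date(2024, 6, 1)))   # True: new month
```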
Pharmacokinetics
Bremelanotide is administered as a single subcutaneous injection. With a half-life of about an hour, the drug is excreted through urine and faeces.
Specialized to men
There are already 26 drugs available for men's sexual dysfunction. Since HSDD in men is not acknowledged as a distinct sexual dysfunction, there are currently no drugs specialized for men with conditions similar to those stated above.
Drugs suppressing sexual desire
Drugs down-regulating libido either are intended for libido suppression via direct down-regulating mechanisms or elicit such suppression as a side effect through unintended indirect mechanisms.
Direct mechanism
Two types of drugs are mainly prescribed to people suffering from overwhelming sexual desire: gonadotropin-releasing hormone agonists (GnRH agonists) and steroidal anti-androgens.
Gonadotropin-releasing hormone agonists
GnRH agonists are a group of drugs intended to activate GnRH receptors in the anterior pituitary gland. They are synthesized by replacing the sixth and tenth amino acids of the original gonadotropin-releasing peptide hormone. After the modification, they bind to the GnRH receptors more strongly and are less degradable by enzymes compared to natural GnRH, making them more biologically active. Commonly used GnRH agonists are leuprorelin, goserelin, and triptorelin, marketed as Lupron, Zoladex, and Decapeptyl, respectively. Nafarelin, marketed as Synarel, is also occasionally prescribed in the form of a nasal spray. These drugs are all approved by the US FDA, and their working principles apply to both sexes equally.
Pharmacodynamics
This class of drugs initially stimulates the anterior pituitary gland to secrete more gonadotropins, leading to a temporary surge in their circulating levels. Yet, because of the desensitization of the receptors upon continuous stimulation, in the long term their secretion will be inhibited if medication is continued. In particular, GnRH is essential for the release of gonadotropic hormones, such as luteinizing hormone (LH) and follicle-stimulating hormone (FSH), from the anterior pituitary. These hormones are responsible for the synthesis of steroid sex hormones (testosterone in men; progesterone and estrogen in women). Therefore, suppression of GnRH signalling by these drugs in turn reduces the secretion of steroid sex hormones from the sex organs, eventually leading to libido suppression.
Efficacy
Many studies have shown that leuprorelin, goserelin and triptorelin are effective in suppressing sexual desires and increasing control against sex. Patients who were prescribed with these drugs have sexual thoughts less frequently and strongly.
Adverse effects
Common adverse effects elicited by these drugs include hot flushes, bone loss, headache, unpredictable mood changes, depression, vaginal dryness, or even atrophy for females and penile atrophy for males. These adverse effects can be counteracted and treated by add-back therapy, also known as hormone replacement therapy. People treated with GnRH agonists are suggested to undergo this therapy simultaneously by taking adequate progestin, vitamin D, and calcium supplement pills daily.
Pharmacokinetics
Gonadotropin-releasing hormone agonists are largely administered parenterally, that is, via subcutaneous or intramuscular injection. Nafarelin is an exception in that it is prescribed as a nasal spray, and some agonists may be implanted into fatty tissue. In general, their half-lives are approximately two to four hours. Some agonists are mainly excreted via urine while others are mainly excreted via faeces.
Steroidal anti-androgens (Cyproterone acetate)
Steroidal anti-androgens are a class of steroid drugs that inhibit the actions of androgens. Cyproterone acetate, a 17-hydroxyprogesterone acetate derivative, is a very potent example and is widely used. Though it is not approved by the US FDA owing to its toxicity, it has been approved in Canada and many European countries under the brand name Androcur. This drug mainly targets men due to its mechanism of action.
Pharmacodynamics
Cyproterone acetate suppresses libido by directly reducing the level of active androgen, testosterone, in males. The suppression of testosterone level results from its inhibition of the release of luteinizing hormone (LH) from the anterior section of the pituitary gland, interfering with testosterone production from the testes as LH stimulates testosterone production. It also blocks the conversion of testosterone to dihydrotestosterone for action. In addition to this mechanism, it also competes for the androgen receptors against testosterone and dihydrotestosterone, causing interference with the androgen-receptor interaction on the reproductive organ, thereby lowering sexual desire.
Efficacy
Cyproterone acetate has been proven effective in restraining sexual drive and fantasies in patients with high libido. Its usage in treating hypersexuality has been advocated by the World Federation of Societies of Biological Psychiatry (WFSBP).
Adverse effects
Common adverse effects of cyproterone acetate include depression, hepatotoxicity, dyspnea, change in body weight, hot flushes, sweating, fatigue, and gynaecomastia.
Pharmacokinetics
Cyproterone acetate is mainly orally administered to the body, and it has a half-life of 1.8 days. After metabolism, its metabolites are predominantly excreted via faeces.
Indirect mechanism
There are many types of drugs that unintentionally lower sexual desire through indirect mechanisms; libido suppression is one of their side effects rather than an intended outcome. However, selective serotonin reuptake inhibitors (SSRIs), being one of them, are indeed often prescribed to people who have immense sexual desire. SSRIs reduce the reuptake of serotonin back into the neurons, leading to an increase in serotonin levels in the body. Because serotonin can interfere with other neurotransmitters and hormones, for instance sex hormones, SSRIs can therefore lower sexual desire.
Apart from SSRIs, other types of drugs that could lower sexual desire are not intended to suppress libido originally. Thus, sexually active people are suggested to avoid the usage of the following drugs:
Antihypertensive drugs
Anti-anxiety drugs (e.g. benzodiazepines)
Antipsychotic drugs
Anticonvulsants
Non-steroidal anti-inflammatory drugs
Antihistamines
Opioid
Medical marijuana
Recreational drugs
Future development
Postmenopausal women may also suffer from HSDD due to a decline in androgen production after menopause. One study proposed two combinations of drugs, each designed against a different cause of HSDD. One combination utilises sublingual testosterone with a 5-HT1A receptor agonist to raise motivation for sex by lifting the inhibition mechanisms in the brain's prefrontal regions. Testosterone is also proposed to be coupled with a PDE5 inhibitor, targeting an insensitive system for the production of sexual desire. As both clinical trials showed desirable results, this indicates the promise of further developing a single drug targeting HSDD for all women of different statuses, even though the FDA has not approved the two combinations.
Ethical concerns
In fact, HSDD was defined shortly before the release of flibanserin to the market and its approval by the US FDA. This has brought up a controversy about whether it is appropriate and ethical to create a medical condition for the benefit of the sales of pharmaceutical products.
Apart from this ethical issue, there is a dilemma over whether a medical doctor should prescribe drugs to sexual offenders whose libido is subpar. On the one hand, it is considered unethical for doctors not to treat them, as they are expected to treat patients indiscriminately. On the other hand, treating sexual offenders may impose a risk on society. Physicians have been struggling with this dilemma, and it remains difficult for them to make a choice.
References
Drugs
Sexuality | Drugs and sexual desire | [
"Chemistry",
"Biology"
] | 3,273 | [
"Pharmacology",
"Behavior",
"Sex",
"Products of chemical industry",
"Sexuality",
"Chemicals in medicine",
"Drugs"
] |
67,270,055 | https://en.wikipedia.org/wiki/Peripheral%20ulcerative%20keratitis | Peripheral Ulcerative Keratitis (PUK) is a group of destructive inflammatory diseases involving the peripheral cornea in human eyes. The symptoms of PUK include pain, redness of the eyeball, photophobia, and decreased vision accompanied by distinctive signs of crescent-shaped damage of the cornea. The causes of this disease are broad, ranging from injuries, contamination of contact lenses, to association with other systemic conditions. PUK is associated with different ocular and systemic diseases. Mooren's ulcer is a common form of PUK. The majority of PUK is mediated by local or systemic immunological processes, which can lead to inflammation and eventually tissue damage. Standard PUK diagnostic test involves reviewing the medical history and a completing physical examinations. Two major treatments are the use of medications such as corticosteroids or other immunosuppressive agents and surgical resection of the conjunctiva. The prognosis of PUK is unclear with one study providing potential complications. PUK is a rare condition with an estimated incidence of 3 per million annually.
Signs and symptoms
The most easily identifiable sign is a visible lesion of the cornea, usually presenting in a crescent shape. Common mechanisms of destruction are stromal degradation and epithelial defects driven by inflammatory cells. There may be a change in the conformation of the peripheral cornea, depending on the severity of corneal thinning; this process carries the risk of concealed perforation. The formation of an oval-shaped ulcer at the margin of the cornea is also a sign.
Symptoms of PUK include pain, redness, tearing, increased sensitivity to bright light, impaired or blurred vision, and the feeling of foreign objects trapped in the eyes.
Association
There are several associations of PUK with ocular and systemic diseases. Rheumatoid arthritis (RA), Wegener's granulomatosis (WG), and Polyarteritis Nodosa (PAN) are the most common systemic conditions.
Rheumatoid arthritis: Approximately 50% of PUK cases are related to collagen vascular diseases, of which RA is the most common category. Around 34-42% of PUK patients have RA.
Wegener's granulomatosis: WG is a rare autoimmune disease associated with PUK. It causes vasculitis of the lower and upper respiratory tracts, and it also affects multiple organs, including the eyes. Without timely initiation of systemic therapy, WG patients will develop conjunctival and scleral inflammation. The inflammation will eventually cause corneal thinning and worsen PUK.
Polyarteritis Nodosa: PAN is another autoimmune disease in which the body's immune system attacks small and medium-sized arteries of its own by mistake. PUK is one of the predominant ocular inflammatory manifestations of PAN.
Causes
There are three major causes of PUK. One possible cause is injury due to any kind of scratch by sharp or hard objects on the surface of the cornea. The scratched area forms an opening in the cornea, allowing microorganisms to access the cornea and lead to infection. Contamination of contact lenses is another cause, as fungi, bacteria and parasites, the microscopic parasite Acanthamoeba in particular, can inhabit the surface of the contact lens carrying case. When placing the contact lens on one's eyes, invisible microorganisms may contaminate the cornea, resulting in PUK. An extended period of wearing contact lenses could also damage the cornea surface, allowing the entry of microorganisms into the cornea. Other than contamination of contact lenses, contamination occurring in water could also cause PUK. Especially in places like the ocean, rivers, lakes and hot tubs, massive amounts of bacteria, fungi and parasites exist. When there is an injury on the cornea surface, contact with contaminated water could transfer unwanted microorganisms into the cornea, resulting in PUK. Viruses and bacteria are sources of infection of the cornea; the herpes virus and the bacteria that cause gonorrhea are some examples.
Anatomy and pathogenesis
The cornea has a central thickness of around 0.52 mm and thickens to about 0.65 mm towards the periphery; its epithelium consists of five to six layers of cells. The stroma, which accounts for 90% of the corneal thickness, is the middle layer between the epithelium and endothelium. At the peripheral cornea it acts as a transitional zone between the sclera and cornea. The limbal vasculature, deriving from capillaries that surround the peripheral cornea, supplies the stroma. Various molecules normally diffuse from these capillaries at the periphery towards the central cornea. Because this diffusion is limited, there is a higher concentration of IgM, factor C1 of the complement cascade, and Langerhans cells in the peripheral cornea.
Any inflammatory stimulus present in the peripheral cornea results in the recruitment of neutrophils and activation of both classical and alternative pathways of the immune response, namely the humoral and cell-mediated autoimmune responses. These responses lead to the formation of antigen-specific antibodies to combat foreign antigens. However, the antigen-antibody complexes formed may deposit in the vascular endothelium and activate complement, leading to severe local inflammation. Under this circumstance, inflammatory cells, such as macrophages and neutrophils, enter the peripheral cornea. These inflammatory cells release enzymes such as proteases and collagenases, causing potential disruption of the corneal stroma. The additional release of cytokines, for example interleukin-1, from these cells further accelerates the process of stromal destruction.
Mooren's ulcer and relevant classification
Mooren's ulcer is a common form of PUK. One classification of Mooren's ulcer, based on the clinical presentation, includes bilateral indolent Mooren's ulcer, bilateral aggressive Mooren's ulcer and unilateral Mooren's ulcer. Unilateral Mooren's ulcer, meaning an ulcer of one eye, mainly affects the elderly above 60 years of age. Rapid onset with redness and severe pain of the affected eye and either slow or extremely quick progression are some typical characteristics of unilateral Mooren's ulcer. Bilateral aggressive Mooren's ulcer is prevalent in Indian patients between the ages of 14 and 40. The common presentation includes the appearance of lesions in one eye, followed by the development of lesions in the other eye. Finally, bilateral indolent Mooren's ulcer is common in patients of at least 50 years of age. It usually progresses slowly and causes little or no pain.
Other classification methods also exist. The first one is classifying Mooren's ulcers based on clinical presentation and prognosis into two categories. The first type is usually presented unilaterally, accompanied by symptoms ranging from mild to moderate. Therefore, it has a more effective response to treatment. In contrast, type II appears in a bilateral manner, with severe symptoms and poor outcome of treatment. The second classification is based on severity. Grade I refers to corneal thinning, grade II describes impending corneal perforation, and grade III is corneal perforation with a diameter greater than 2mm.
Diagnosis
There are many investigative modalities available for diagnosing PUK, including history review and physical examination. A thorough history of ocular infections, contact lens usage, other medication, or surgery is necessary to identify possible presence of associated diseases. An ophthalmic examination helps identify whether it is due to local pathogenesis. Physical examinations allow more understanding of the underlying systemic process.
A standard testing procedure includes hematological investigations, immunological testing, followed by chest X-ray. Hematological investigations are blood tests estimating hemoglobin, platelet counts, total white blood cell counts, erythrocyte sedimentation rate and viscosity. Other common body checks include urinalysis and liver and renal function tests. The selection of immunological testing for various markers is based on numerous additional medical examinations and clinical history of the patient. Possible markers are antinuclear antibodies, anti-rheumatoid antibodies, and antibodies to cyclic citrullinated peptides. Finally, a chest X-ray helps distinguish whether there are complications, such as pulmonary diseases, due to systemic conditions associated with PUK.
One of the common causes of PUK is ocular infections by microorganisms such as bacteria, viruses, and fungi. To detect the causative microorganism, doctors usually collect samples before the commencement of therapy and send them to laboratories. Laboratory personnel then perform smear examination, inoculate the samples on culture media, and perform serological testing. Serological testing is an antibody test providing information on PUK etiology. The diagnosis of PUK due to systemic conditions requires a combination of serological and hematological testing, together with imaging techniques such as radiography and CT scanning.
Treatments
Various PUK therapies are of different objectives, for example, inflammation control, halting of disease progression, stroma repairment, avoidance of secondary complications, and vision restoration. A thorough understanding of PUK and different therapies is important. Medical and surgical treatments are two major approaches to manage PUK.
Medical therapy
As for medical therapy, there are several types of drugs available for PUK. Topical corticosteroids usually serve as therapy for milder unilateral cases of RA-associated PUK. Systemic corticosteroids in the form of an oral dose are the acute management of more severe cases. However, there are side effects with prolonged usage of oral corticosteroids. Immunosuppressive agents, such as azathioprine, cyclophosphamide, and methotrexate, have demonstrated efficacy in treating inflammatory eye diseases, including PUK. Combined therapy of systemic corticosteroids up to 100 mg/day and immunosuppressive agents is used for severe cases of PUK. Biological agents, such as anti-tumor necrosis factor (anti-TNF) agents, are a well-established treatment for systemic inflammatory diseases; infliximab and adalimumab are TNF blockers used to treat RA-associated PUK. However, the high cost and uncertainty of long-term side effects are possible drawbacks.
Surgical treatment
In terms of surgical treatment, conjunctival resection is a common procedure, which can temporarily remove local inflammatory mediators and collagenases and therefore slow down the disease progression. Other surgical management includes corneal gluing, or keratoplasty procedures. Corneal transplantation is a management option when there is severe corneal melting or perforation although one possible disadvantage is the risk of rejection.
Surgical treatment helps maintain the integrity of the globe, but it is usually complementary because it alone cannot influence the underlying immunological process. Therefore, medical and surgical treatments are commonly used in conjunction.
Choice of treatment
The choice of treatment may be different depending on the nature of PUK, infectious or noninfectious. Selection of the right targeted antimicrobial therapy for infectious PUK is based on clinical judgement and culture results. For example, the appropriate treatment for bacterial infections is antibiotics, such as fluoroquinolones. As for Mooren's ulcers, 56% of unilateral PUK and 50% of bilateral PUK in one eye showed recovery with intensive topical steroids. Only 18% of patients with bilateral ulcers occurring simultaneously in both eyes show improvements with topical steroids alone; therefore a combination of immunosuppressive agents and systemic steroids should be given in early courses of management. Corticosteroids are the first line of therapy, but side effects may arise from long-term usage. In addition, conjunctival resection can be performed to temporarily remove local inflammatory mediators, followed by the use of immunosuppressants.
Prognosis
Currently, there are limited studies regarding the prognosis of PUK. However, one study has pointed out possible complications surrounding PUK include moderate to severe vision loss, corneal perforation and increased risk of recurrence.
Epidemiology
PUK is a rare condition with an estimated incidence of 3 per million annually. Studies have reported that the largest group of patients with PUK is older than 60 years of age (32%). Among them, men have a higher occurrence rate (60%). Most patients live in rural areas (66%) and are in the lower socioeconomic groups. The age of those with PUK ranges from 5 to 89 years, with a mean age of 45.5 years.
The mortality rate after PUK diagnosis in an investigation of 34 patients with and without immunosuppressive medication is 53% and 5%, respectively. Another single-centre study involving 46 patients with RA reported a mortality rate of 15%. Reports have also shown a possibility of PUK occurrence after any ocular surgery. In a retrospective study of 771 eyes, 1.4% of participants reported developing late-onset PUK at an average of 3–6 months after surgery.
References
Immunology
Inflammations
Disorders of sclera and cornea | Peripheral ulcerative keratitis | [
"Biology"
] | 2,773 | [
"Immunology"
] |
67,271,552 | https://en.wikipedia.org/wiki/E-gree%20%28app%29 | E-gree is a legal app that became well known in 2020. It was the first app of its kind to protect users against a number of dating-related issues, including revenge porn.
Background
The app was co-founded by Araz Mamet, Keith Fraser and Ilya Flaks. The app focuses on privacy, with users being able to set up various contracts to protect themselves following a breakup, or while dating. This notably included signing an NDA when sexting. The app received investment from a number of notable people and companies, including Natalia Vodianova.
References
Application software | E-gree (app) | [
"Technology"
] | 121 | [
"Mobile software stubs",
"Mobile technology stubs"
] |
67,272,105 | https://en.wikipedia.org/wiki/Femoral%20nerve%20dysfunction | Femoral nerve dysfunction, also known as femoral neuropathy, is a rare type of peripheral nervous system disorder that arises from damage to nerves, specifically the femoral nerve. Given the location of the femoral nerve, indications of dysfunction are centered around the lack of mobility and sensation in lower parts of the legs. The causes of such neuropathy can stem from both direct and indirect injuries, pressures and diseases. Physical examinations are usually first carried out, depending on the high severity of the injury. In the cases of patients with hemorrhage, imaging techniques are used before any physical examination. Another diagnostic method, electrodiagnostic studies, are recognized as the gold standard that is used to confirm the injury of the femoral nerve. After diagnosis, different treatment methods are provided to the patients depending upon their symptoms in order to effectively target the underlying causes. Currently, femoral neuropathy is highly underdiagnosed and its precedent medical history is not well documented worldwide.
Femoral Nerve
The femoral nerve is the largest nerve of the lumbar plexus. It is located in the pelvis and travels down the front of the leg. The nerve has several branches along its course from the lumbar spine, through the pelvis and further into the lower limb. Anatomically, it is formed by the dorsal divisions of the ventral rami of spinal nerves L2-L4, specifically the posterior divisions of the lumbar plexus. The femoral nerve travels posterior to the inguinal ligament within the muscular lacuna, which contains the iliopsoas muscle. It travels along with the femoral artery, vein and lymphatics in the femoral triangle, which allows the supply of oxygenated blood to maintain its motor and sensory functions. For its motor function, the nerve controls the major hip flexor muscles as well as the knee extension muscles, allowing movement of the hips and straightening of the leg. As for its sensory function, it supplies the anterior and medial thigh as well as the medial leg down to the hallux, providing sensation to the front of the thigh and part of the lower leg.
Signs and symptoms
Those with femoral nerve dysfunction may present with problems of movement and a loss of sensation. In terms of motor function, the patient may have problems such as quadriceps wasting, loss of knee extension and, to a lesser extent, weakened hip flexion, given the femoral nerve's involvement of the iliacus and pectineus muscles. One may experience numbness and tingling in any part of the leg, typically in the front and the inside of the thighs and down to the feet. They may also experience a dull ache in the genital region, given that the genitofemoral nerve is divided into femoral and genital branches. Feelings of the leg and knees giving out may also be prevalent due to lower extremity muscle weakness and quadriceps weakness. In terms of sensory function, patients may observe a decrease in sensation over the front and medial sections of the thigh and the medial aspects of the lower legs and feet, due to the involvement of the anterior and medial cutaneous nerves of the thigh and the saphenous nerve, respectively.
Causes
The symptoms of femoral neuropathy are due either to damage specifically to the femoral nerve or to several damaged nerves. Local damage to just the femoral nerve is termed mononeuropathy. Although damage to the femoral nerve is uncommon due to its location, there are numerous risk factors, including injuries, prolonged pressure and damage from diseases, that can still lead to such neuropathy. These include:
A direct physical sharp trauma, which is the most common etiology
A tumor or other growth blocking or trapping part of the nerve
Intra abdominal, hip and other injuries and operations due to prolonged compression, retraction or stretching of the nerve, such as:
Pelvic fracture
Radiation to the pelvis
A catheter placed in the femoral artery
Proximal interlocking screw placement through femoral IM nailing
Growth of masses on the muscles in the thigh
Bleeding in the abdomen
Tumor or growth on the kidneys
Complex anterior and posterior spinal surgery
Hemorrhage
Diabetes: the most common cause of peripheral neuropathy, particularly in people who have had diabetes for more than 25 years
Diagnosis
The diagnosis of femoral neuropathy can be made through physical examinations, several imaging techniques and electrodiagnostic studies. Provided patients do not suffer from haemorrhage, physical examination is the first line of diagnosis. These examinations are carried out in order to evaluate whether nerves of the lower back, lower limbs and hips are functioning well. They can also help determine whether it is strictly an injury of the femoral nerve or a systemic disorder. Other than questioning about possible recent injuries, surgeries, and medical history, inspection of asymmetry or atrophy of the quadriceps muscles, muscle stretch reflexes, and sensory testing through pinpricks and light touches are conducted. By looking at asymmetry or atrophy of the quadriceps muscles, weaknesses in knee extension or hip flexion can be observed. Furthermore, physicians palpate over the inguinal ligament and inspect the anterior and medial leg, anterior thigh, and quadriceps reflex. In addition, comparison of quadriceps strength to adductor strength can help point towards femoral neuropathy. However, given that the diagnosis of femoral neuropathy through physical examination depends on how severe the injury is, additional testing such as computed tomography, magnetic resonance imaging, ultrasound, nerve conduction studies and electromyography is also done.
Imaging studies are strongly recommended in case of suspected haemorrhage. First, computed tomography or magnetic resonance imaging is carried out to confirm the presence of a haemorrhage. These scans also can be used to look for tumors, growths, or any masses surrounding the femoral nerve that could lead to compression. Then ultrasound scans can be conducted to localize the femoral nerve using sound waves to create images and identify any injury to the femoral nerve.
In general, electrodiagnostic studies are perceived as the gold standard for diagnosing femoral neuropathy. These studies include nerve conduction studies and electromyography. Nerve conduction studies measure the speed of electrical impulses and can localize the damaged femoral nerve, while electromyography can evaluate the muscles innervated by the femoral, tibial, obturator, and peroneal nerves.
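The speed measured in a nerve conduction study is conventionally derived by stimulating the nerve at two sites and dividing the distance between them by the difference in response latencies. The sketch below illustrates that calculation only; the numbers in it are hypothetical and do not come from the text.

```python
# Minimal sketch of the standard motor conduction-velocity calculation used
# in nerve conduction studies: velocity = distance between two stimulation
# sites divided by the difference in response latencies. All numbers below
# are hypothetical illustrations, not values from the text.

def conduction_velocity_m_per_s(distance_mm: float,
                                proximal_latency_ms: float,
                                distal_latency_ms: float) -> float:
    """Return conduction velocity in metres per second."""
    latency_diff_ms = proximal_latency_ms - distal_latency_ms
    if latency_diff_ms <= 0:
        raise ValueError("proximal latency must exceed distal latency")
    # mm/ms is numerically equal to m/s.
    return distance_mm / latency_diff_ms

# Hypothetical measurement: 200 mm between stimulation sites, latencies of
# 7.5 ms (proximal) and 3.5 ms (distal) -> 50 m/s.
print(conduction_velocity_m_per_s(200, 7.5, 3.5))
```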
Treatment
Treatment for femoral dysfunction comes in several ways depending on the symptoms of the patient. This includes dealing with the underlying causes, lifestyle remedies, medications, physical therapy and surgery. In order to relieve minor symptoms, patients are to deal with the underlying cause and make changes to their lifestyles. For example, if compression on the nerve is the underlying cause, it is important to avoid tight clothing, or activities that can put pressure on the femoral nerve for a long period of time in order to relieve the compression. If diabetes is the underlying condition, patients will need to lose weight or find ways to bring their blood sugar back to normal. However, if the condition still persists, treatments such as medication and physical therapy are required.
In addition to corticosteroid injections in the leg to reduce inflammation, pain medications are prescribed to alleviate pain. For such neuropathic pain, the most common prescriptions are gabapentin, pregabalin, or amitriptyline. Physical therapy, on the other hand, not only helps to build strength in leg muscles, but also helps to reduce pain and promote mobility. Rehabilitation focuses on areas such as hip abduction, hip rotation and the kneeling hip flexor stretch. Moreover, orthopaedic devices may also be given to patients to assist with mobilization. If the conservative treatments above are unsuccessful, surgery, which is more invasive, is the last resort. However, surgery for femoral neuropathy remains a tough challenge because there have been no cases of complete functional recovery despite developments in microsurgical equipment.
Epidemiology
Femoral nerve dysfunction is classified under peripheral neuropathy. Although the prevalence of peripheral neuropathy is known to increase with age, medical reports of peripheral neuropathy diagnoses are still not well documented and the condition is highly underdiagnosed. For this reason, there is no epidemiological study that can accurately estimate its global prevalence. The figures from epidemiological studies regarding peripheral neuropathy vary to a great extent depending on the literature source, as available data sources do not focus on the general population. However, as an estimate, roughly 2-7% of individuals worldwide are affected by peripheral neuropathy. It has also been found that peripheral neuropathy is more common in Western countries than in developing countries.
References
Neuroscience | Femoral nerve dysfunction | [
"Biology"
] | 1,811 | [
"Neuroscience"
] |
67,272,314 | https://en.wikipedia.org/wiki/Bell%27s%20mania | Bell's mania, also known as delirious mania, refers to an acute neurobehavioral syndrome. This is usually characterized by an expeditious onset of delirium, mania, psychosis, followed by grandiosity, emotional lability, altered consciousness, hyperthermia, and in extreme cases, death. It is sometimes misdiagnosed as excited delirium (EXD) or catatonia due to the presence of overlapping symptoms. Pathophysiology studies reveal elevated dopamine levels in the neural circuit as the underlying mechanism. Psychostimulant users as well as individuals experiencing severe manic episodes are more prone to the manifestation of this condition. Management solutions such as sedation and ketamine injections have been discussed for medical professionals and individuals with the condition. Bell's mania cases are commonly reported in countries like the United States and Canada and are commonly associated with psychostimulant use and abuse.
Clinical features
The majority of Bell's mania cases studied are triggered by psychostimulant drug usage or preexisting medical or neurological conditions, which impedes understanding of this syndrome. Hence, at present, there is still no scientific consensus on the clinical features of Bell's mania. Researchers are currently working on varying case studies to derive common clinical characteristics. Some frequent signs and symptoms include acute onset of delirium, mania or psychosis. Patients with Bell's mania have fluctuating severity of symptoms over time, with altered consciousness and emotional lability. They tend to be excited, agitated, paranoid, delusional and alarmed. They display impulsive, hostile and destructive behavior towards others that can last for hours to days, as well as unexpected physical strength. Catatonic symptoms such as grimacing, echopraxia, negativism, echolalia and stereotypy are often present. Impaired concentration, memory loss, disorientation, insomnia, and auditory and visual hallucinations are additional symptoms that follow. There are shifts from loud and disorganized speech to mutism. Some typical physiological signs include hyperthermia, tachycardia, hypertension, and hyperventilation.
Diagnosis
This condition is currently not recognized as a diagnosable issue by psychiatric manuals such as the Diagnostic and Statistical Manual of Mental Disorders-IV by the American Psychiatric Association or the tenth revision of the International Statistical Classification of Diseases and Related Health Problems by the World Health Organization (WHO).
Physical examination
When examined, patients with Bell's mania fail to recall names and recent experiences and are poorly oriented for location, date, and time. Moreover, their blood pressure and respiratory rate are increased. Additionally, a mental status examination using questionnaires and three diagnostic tests is taken, including drawing a clock-face, the Face-Hand test and hidden figures tests. Patients with Bell's mania tend to make obvious mistakes in these tests, for instance drawing a clock-face with incorrect numbering or missing clock hands.
Differential diagnosis
Upon acute onset of the symptoms, an instant investigation for a toxic or systemic cause is undertaken. Prominence of thought disorder, grandiosity and delusional ideation, and catatonic signs indicates the diagnosis of acute schizophrenia, bipolar disorder and catatonia respectively. Diagnostic complications arise as these signs are also often the notable feature of Bell's mania. With the cause undetermined, Bell's mania diagnosis is usually justified with the presence of both mania and delirium regardless of the catatonic symptoms.
Distinguishing between Bell's mania and catatonia
Bell's mania and catatonia are regarded as "overlapping syndromes", making differential diagnosis essential when catatonic signs are observed. Thus, researchers must distinguish between excited catatonia and Bell's mania, and among malignant catatonia, excited catatonia, and neuroleptic malignant syndrome (NMS). When catatonic features are prominent, it is diagnosed as excited catatonia and when absent or subtle, it is identified as Bell's mania. Alternatively, the presence of delirium is recognized as the discerning factor. A difference between the two is that catatonia is viewed from a movement aspect, whereas delirium from consciousness.
Nevertheless, a formal set of diagnostic criteria is required to distinguish between Bell's mania and catatonia. Failure to diagnose Bell's mania appearing as catatonia could lead to deleterious consequences and, in worse cases, death.
Pathophysiology
Dopamine is the primary neurotransmitter involved in the pathophysiology of Bell's mania. Elevated dopamine levels in the neural circuit concerned with neuropsychiatric disorders are postulated to be responsible for the manic and psychosis symptoms and other signs, including fluctuations in body temperatures and fear. Increased extracellular dopamine levels can be caused by low levels of dopaminergic transporters, sensitization of postsynaptic dopaminergic receptors, and dopamine transporters dysfunction.
Role of genetics and signaling pathways
Mania is a prominent symptom of both bipolar disorder and Bell's mania. Hence, studying bipolar patients can provide insight into the pathophysiology of Bell's mania. PET scans of manic patients illustrate anomalous activation of the dorsal anterior cingulate and right inferior frontal cortical regions. Manic symptoms exacerbate with increasing anterior cingulate activation, which is posited to be associated with escalating dopamine transmission in the nucleus accumbens.
Dopamine transporter regulates the reuptake of dopamine to keep the synaptic dopamine levels within normal range. Hence, the elevation of such transporter levels in the striatum decreases neurotransmission. Genetic studies have hypothesized a relationship between low transporter protein levels and the gene for dopamine transporter in bipolar affective patients.
Stimulant elicited responses
Sensitization of postsynaptic dopaminergic receptors
External sources contributing to the hypothesized hyperdopaminergic mechanism include psychostimulants like cocaine. These substances provoke behavioral changes similar to mania. In chronic users, drug sensitization occurs, which induces increased neurotransmission and modified protein expression within the mesolimbic dopamine neurons. Adaptations in dopamine transporters are further triggered, causing behavioral sensitization. This phenomenon is not unique to drug abuse but also occurs with other psychomotor stimuli such as stress.
Dopaminergic transporters dysfunction
Dopaminergic transporter dysfunction is caused by acute mania of bipolar disorder, psychostimulant use, and environmental stress. It is suggested to be a common mechanism in excited delirium (EXD). EXD is commonly observed in psychostimulant abusers, as these drugs directly impact the dopaminergic transporters, increasing extracellular dopamine levels.
Amplified excitation of the dopaminergic systems can induce extreme fear and magnify both approach and avoidance behaviors. The hyperdopaminergic state triggers aggression, agitation and psychomotor excitement. Additionally, CNS dopamine signaling is active in heart rate, respiration and body temperature regulation. Dopamine imbalance can hence result in hyperthermia, tachycardia, hyperventilation, hypertension and sleep disturbance symptoms.
Risk Factors
Given that a hyperdopaminergic state is postulated to be the underlying mechanism of Bell's mania, people prone to dopamine imbalance, sensitization and low levels of dopamine transporters are susceptible. Furthermore, this syndrome is usually precipitated by prevailing neurological and physiological conditions. Hence, those at risk include
Psychostimulant users and abusers
Patients who are in drug withdrawal states or are non-compliant with psychotropic drugs
Undiagnosed/ untreated psychiatric patients
People with severe manic episodes
Sleep deprivation which can trigger and even worsen acute manic episodes
People with medical history of neurological and physiological conditions
Treatment and Management
Whilst the scope of Bell's Mania is extensively studied, there remain some significant challenges that need to be solved with respect to treatment and management.
Recognition
Over the course of time, the significance of this syndrome has been increasingly recognized with respect to the manner of death, specifically because the anatomic cause of death is hard to define during autopsy. Recent studies have elicited neurochemical imbalances resulting in autonomic hyperactivity and an increase in dopamine levels in the victims. Emergency personnel need to recognize these symptoms promptly to prevent the individual from spiraling into metabolic acidosis, rhabdomyolysis, multiorgan failure and ultimately death. In light of the clinical findings, some treatments have been described, which include effective sedation, followed closely by external cooling, monitoring of medical complications and the administration of intravenous (IV) fluids.
Rapid Sedation
One management technique is rapid sedation, in view of the unpredictably aggressive nature of the patient with Bell's mania, especially if the symptoms that need to be handled are associated with causes like dopamine dysregulation. Turning off the catecholamine cascade and rapidly sedating the patient using sedatives like benzodiazepines or neuroleptics can help. Several studies also point to the increased effectiveness of a combination of two or more sedatives in the treatment of hyper-agitated patients.
Intramuscular Ketamine Injections
Patients with Bell's mania may not allow optimum time for sedatives to take effect. Due to this fact, electroconvulsive therapy and intramuscular ketamine injections are alternative solutions that have been proposed. With an onset time of 30 seconds to 4 minutes, ketamine proves to be more effective than benzodiazepines. Although adult data on the use of ketamine in these patients is not readily available, a study by Strayer et al. concluded that the use of ketamine for controlling the hyperactivity was reliable and can further facilitate other management techniques with fewer side effects.
Other Preventative Measures
Along with sedation techniques, a few other prevention and protection measures can decrease fatal outcomes, some of which are:
management of catecholamine cascade,
basic medical monitoring of bodily functions,
blood tests,
physical examination and
rapid cooling of the body temperature.
The urgency and medical severity of the condition need to be emphasized when handling patients with Bell's mania. Due to the homologous nature of this syndrome with malignant hyperthermia (MH) and neuroleptic malignant syndrome (NMS), dantrolene is also a probable treatment route owing to its swift correction of acidosis. Although more research is needed on the cause and consequences of this disease, the behavioral and physical symptoms need to be given importance to provide medical institutions as well as the constabulary with the necessary information to respond to Bell's mania appropriately.
Epidemiology
The first cases of Bell's mania were observed by medical examiners during the cocaine epidemic in countries like the United States of America and Canada, with some other cases being related to police brutality and restraint. The term Bell's mania was first coined to describe a clinical condition with a 75% mortality rate. The prevalence of this condition in society ranges from 15% to 25%, and it is not an infrequent occurrence.
History
First mention of Bell's Mania
Bell's mania is a syndrome with unexplained etiology which was first described by the American psychiatrist Luther Bell in the 1850s after observing institutionalized psychiatric patients. The first clinical reports and descriptions of people with acute exhaustive mania and delirium were provided by a few psychiatrists in the United States of America, France and the United Kingdom. The description of the symptoms seemed quite similar to that of patients with schizophrenia (hallucinations and delirium); however, additional hyperactivity, heightened arousal, and an altered sleep cycle were also reported in patients with Bell's mania.
First Appearance of Related Symptoms
The suggestive symptoms of this disorder were first observed in the 19th century, some of the most significant being the onset of aggression, bizarre behavior, violence, excessive shouting, panic, paranoia and an increase in body temperature. In 1934, Stauder described a series of acute onsets of psychomotor agitation in young people with no history of physical or psychological disturbances. He termed this condition lethal catatonia. Other reasons given for the manifestation of Bell's mania point to the use of stimulant drugs in excessive amounts and also psychiatric diseases like depression or schizophrenia.
In 1985, Bell's mania was first mentioned in a definitive manner using the term excited delirium (EXD). Prior to that year, most cases of death by cocaine intoxication during the cocaine epidemic happened in a sudden manner. This involved exposure to highly toxic amounts of the drug due to the bursting of cocaine packets being carried within the body by "body stuffers". In the same year, a series of observations were made by Welti and Fishbain regarding psychosis, cardiorespiratory arrest and sudden death in individuals with cocaine addiction. Since law enforcement were often called to contain the violent behavior exhibited by these individuals, it was speculated that police brutality might be the underlying cause of the deaths. However, medical review of the cases related to the use of batons, pepper sprays and restraint methods did not disclose any anatomic cause behind the deaths, although problems like cardiac disease and trauma were excluded by extensive evaluation.
See also
Excited Delirium
Stimulant Psychosis
Adrenergic storm
References
Further reading
Neuroscience
Neurology | Bell's mania | [
"Biology"
] | 2,837 | [
"Neuroscience"
] |
67,274,684 | https://en.wikipedia.org/wiki/Acer%20Chromebook%20Tab%2010 | The Acer Chromebook Tab 10 (D651N) is a tablet computer manufactured by Acer Inc. It was the first ChromeOS tablet to be released, and it received software updates until 2023. The tablet was announced in March 2018.
Specifications
The SoC is a Rockchip OP1. It has 4 GiB of RAM and 32 GiB of storage, which can be extended with a microSD card. It has a 9.7-inch display with a resolution of 2048×1536 pixels and a pixel density of 264 dpi. The code name of the device is scarlet.
It is primarily designed for education.
Reception
TechRadar noted the excellent screen. PCMag noted that ChromeOS without a keyboard poses some problems.
References
External links
Acer.com - Chromebook Tab 10
Tablet computers introduced in 2018
Chromebook | Acer Chromebook Tab 10 | [
"Technology"
] | 169 | [
"Mobile technology stubs"
] |
53,112,990 | https://en.wikipedia.org/wiki/Cheong%20%28food%29 | Cheong () is a name for various sweetened foods in the form of syrups, marmalades, and fruit preserves. In Korean cuisine, cheong is used as a tea base, as a honey-or-sugar-substitute in cooking, as a condiment, and also as an alternative medicine to treat the common cold and other minor illnesses.
Originally, the word cheong () was used to refer to honey in Korean royal court cuisine. The name jocheong (; "crafted honey") was given to mullyeot (liquid-form yeot) and other human-made honey-substitutes. Outside the royal court, honey has been called kkul (), which is the native (non-Sino-Korean) word.
Varieties
Jocheong (; "crafted honey") or mullyeot (; liquid yeot): rice syrup or more recently also corn syrup
Maesil-cheong or Maesilaek (; "plum syrup")
Mogwa-cheong (; quince preserve)
Mucheong (; radish syrup)
Mu-kkul-cheong (; radish and honey syrup)
Yuja-cheong (; yuja marmalade)
Saenggang-cheong (; ginger marmalade)
Gochu-cheong (; Korean green chili marmalade)
Maneul-cheong (; garlic pickle)
Yangpa-cheong (; onion marmalade)
Odi-cheong (; mulberry marmalade)
Omija-cheong (; magnolia berry marmalade)
Painaepeul-cheong (; pineapple marmalade)
Bae-cheong (; Korean pear marmalade)
Bae-doraji-cheong (; Korean pear and bellflower root marmalade)
Maesil-cheong
Maesil-cheong (, ), also called "plum syrup", is an anti-microbial syrup made by sugaring ripe plums (Prunus mume). In Korean cuisine, maesil-cheong is used as a condiment and sugar substitute. The infusion made by mixing water with maesil-cheong is called maesil-cha (plum tea).
It can be made by simply mixing plums and sugar together, and then leaving them for about 100 days. To make syrup, the ratio of sugar to plum should be at least 1:1 to prevent fermentation, by which the liquid may turn into maesil-ju (plum wine). The plums can be removed after 100 days, and the syrup can be consumed right away or left to mature for a year or more.
Mogwa-cheong
Mogwa-cheong ( ), also called "preserved quince", is a cheong made by sugaring Chinese quince (Pseudocydonia sinensis). Either sugar or honey can be used to make mogwa-cheong. Mogwa-cheong is used as a tea base for mogwa-cha (quince tea) and mogwa-hwachae (quince punch), or as an ingredient in sauces and salad dressings.
Yuja-cheong
Yuja-cheong (, ), also called "yuja marmalade", is a marmalade-like cheong made by sugaring peeled, depulped, and thinly sliced yuja (Citrus junos). It is used as a tea base for yuja-cha (yuja tea), as a honey-or-sugar-substitute in cooking, and as a condiment.
Gallery
See also
Fruit syrup
List of spreads
List of syrups
Korean tea
Yeot
References
External links
Condiments
Food ingredients
Food preservation
Honey
Jams and jellies
Korean condiments
Marmalade
Preserved fruit
Syrup
Citrus dishes | Cheong (food) | [
"Technology"
] | 800 | [
"Food ingredients",
"Components"
] |
53,114,129 | https://en.wikipedia.org/wiki/Cupania%20elegans | Cupania elegans is a horticultural name (a name that has never been validly published in scientific literature) for a plant in the family Sapindaceae.
References
elegans
Plants described in 1893
Nomina nuda
Unplaced names | Cupania elegans | [
"Biology"
] | 50 | [
"Biological hypotheses",
"Nomina nuda",
"Controversial taxa",
"Unplaced names"
] |
53,114,151 | https://en.wikipedia.org/wiki/Computational%20history | Computational History (not to be confused with computation history), sometimes also called Histoinformatics, is a multidisciplinary field that studies history through machine learning and other data-driven, computational approaches.
See also
References
Historiography | Computational history | [
"Technology"
] | 52 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
53,115,925 | https://en.wikipedia.org/wiki/Microflotation | Microflotation is a further development of standard dissolved air flotation (DAF). Microflotation is a water treatment technology operating with microbubbles of 10–80 μm in size instead of 80-300 μm like conventional DAF units.
The general operating method of microflotation is similar to standard recycled stream DAF units. The advancements of microflotation are lower pressure operation, smaller footprints and less energy consumption.
Process description
The method of Microflotation is comparable to recycled stream DAF.
A portion of the clarified effluent water leaving the Microflotation tank is pumped into a small pressure vessel into which compressed air is also introduced. This results in saturating the pressurized effluent water with air. The air-saturated water stream is recycled to the front of the Microflotation cell and flows through a pressure release valve just as it enters the front of the float tank, which results in the air being released in the form of tiny bubbles. Bubbles form at nucleation sites on the surface of the suspended particles, adhering to the particles. As more bubbles form, the lift from the bubbles eventually overcomes the force of gravity. This causes the suspended matter to float to the surface where it forms a froth layer which is then removed by a skimmer. The froth-free water exits the float tank as the clarified effluent from the Microflotation unit. A particular circular DAF design is called "zero speed": the water in the tank is kept nearly quiescent, which allows the highest separation performance; a typical example is the Easyfloat 2K DAF system.
Advantages
Microflotation is an enhanced method to float particles to the surface with the aid of adherent air bubbles.
The smaller the bubbles, the more easily and strongly suspended solids adhere to them. Because of the improved adherence capacity of small microbubbles, together with the saturation of the introduced air, microflotation achieves improved removal of suspended solids, a higher solids content in the float sludge and a more stable float sludge layer on the surface of the microflotation cell.
Microflotation must be distinguished from the dispersed flotation used in the mining industry for mineral separation, where the bubbles are larger (500–2000 μm) and the volume of air is many times the volume of water. Traditional dissolved air flotation (DAF) mainly operates with bubble sizes from 80 to 300 μm and a very inhomogeneous bubble size distribution.
A major difference between low-pressure dissolved air flotation and other flotation processes lies in the bubble volumes, the amount of air and the rise velocities. One macrobubble can be 1000 times larger in volume than one microbubble; conversely, about 1000 microbubbles contain the same volume of air as a single macrobubble.
Microflotation produces bubbles of 40–70 μm with rise rates of 3–10 m/h. The rise rate is slow enough not to destroy the fragile flocs (agglomerations of particles with weak mutual bonding) and high enough to separate the agglomerates within a reasonable time. As particles attach to bubbles, the size of the floc-bubble aggregates grows and their rise velocities increase accordingly. The accelerated separation leads to residence times for combined chemical precipitation and flotation of 10 to 60 minutes, allowing small plant footprints and lowering the cost of the treatment process.
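The volume and bubble-count relationships above follow directly from the cube of the diameter ratio. The short Python sketch below illustrates this and also estimates an idealized rise velocity from Stokes' law; the chosen diameters, the water properties and the use of Stokes' law are illustrative assumptions rather than values taken from the cited literature.

import math

def sphere_volume(d):
    # volume of a sphere of diameter d
    return math.pi * d**3 / 6

d_micro = 50e-6   # a typical microbubble diameter, 50 um (assumed)
d_macro = 500e-6  # a small macrobubble diameter, 500 um (assumed)

ratio = sphere_volume(d_macro) / sphere_volume(d_micro)
print(f"one {d_macro*1e6:.0f} um bubble holds as much air as {ratio:.0f} microbubbles")

# Idealized rise velocity of a single small bubble from Stokes' law
# (rigid sphere in still water at ~20 degC; real floc-bubble aggregates behave differently)
g, rho_water, mu_water = 9.81, 998.0, 1.0e-3   # SI units
v_rise = g * rho_water * d_micro**2 / (18 * mu_water)   # m/s, gas density neglected
print(f"Stokes rise velocity of a {d_micro*1e6:.0f} um bubble: {v_rise*3600:.1f} m/h")

For a 50 μm bubble this idealized estimate (about 5 m/h) falls within the 3–10 m/h range quoted above.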
A bubble size distribution between 20 and 50 microns is required for an optimum flotation result. Even a small number of bubbles with diameters above 100 microns can impair the separation process, because larger bubbles rise more quickly and cause turbulence, which destroys air-floc agglomerates that have already formed.
Applications
Microflotation is technically suitable and, above all, economical as a substitute for classic technologies such as sand filtration and sedimentation. In addition, there are several applications in which low-pressure microflotation is an alternative or a useful complement to membrane technology.
Microflotation can be used as:
Non-chemical/chemical industrial pretreatment (COD, BOD, F.O.G. and TSS reduction; heavy metal and color removal)
Primary treatment
Tertiary treatment
Replacement or protection of filtration units
Sludge thickening
Protection and performance improvement of MBR units, aerobic and anaerobic biologies
References
Flotation processes
Water treatment
Waste treatment technology | Microflotation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 931 | [
"Water treatment",
"Water pollution",
"Environmental engineering",
"Oil refining",
"Flotation processes",
"Water technology",
"Waste treatment technology"
] |
53,117,142 | https://en.wikipedia.org/wiki/Microbes%20and%20Man | Microbes and Man is a popularising book by the English microbiologist John Postgate FRS on the role of microorganisms in human society, first published in 1969, and still in print in 2017. Critics called it a "classic" and "a pleasure to read".
Book
Contents
The book is structured as follows:
1 Man and microbes
2 Microbiology
3 Microbes in society
4 Interlude: how to handle microbes
5 Microbes in nutrition
6 Microbes in production
7 Deterioration, decay and pollution
8 Disposal and cleaning-up
9 Second interlude: microbiologists and man
10 Microbes in evolution
11 Microbes in the future
Illustrations
The 4th edition has 32 illustrations, ranging from photographs of microscopic algae, protozoa, fungi, viruses and bacteria, to the macroscopic effects of microbes such as a sulphur-forming lake in Libya and fish killed by bacterial reduction of sulphate in water.
Editions
1st edition, Cambridge University Press, 1969
2nd edition, Cambridge University Press, 1986
3rd edition, Cambridge University Press, 1992
4th edition, Cambridge University Press, 2000
The book has been translated into nine languages: Arabic, Chinese, Czech, French, German, Japanese, Polish, Portuguese, and Spanish.
Reception
The Guardian described the book as "a passionate case for the importance of micro-organisms".
In his textbook Essential Microbiology, Stuart Hogg recommends the book to readers who want a general overview of microbes and their uses, stating "there can be no better starting point than John Postgate's classic".
New Scientist described the book as "a pleasure to read from first page to last. It is a literal statement. Start to read it and the first page, describing the astonishing dispersion of microbes, from the upper atmosphere to the depths of the sea, will provide any reader with enough wonder and excitement to take them through to the last page and the surface of Venus." The magazine commented that Postgate's "admirable, elegantly written and painlessly informative book" came close to losing its alliterative title, at the hands of "militant feminists" at Penguin Books editing the paperback version in 1986.
Dennis R. Schneider, reviewing the 3rd edition in 1992 for Cell, described the book as having "succinctly and carefully explained examples of how microorganisms affect our lives ... one of the classics of popular science", standing alongside classics like Rosebury's Life of Man and De Kruif's Microbe Hunters. Schneider wrote that the book's Britishness "'colours' the text", but Postgate's emphasis on the beneficial and not just the harmful effects of microbes was welcome and admirably explored. He noted few errors, but objected to Postgate's assertion that AIDS "originated by transmission from a primate", for which there was at that time no evidence. Schneider would have liked a "better and longer" account of molecular biology. His chief criticism, however, was that by the 1990s the book no longer had an audience, since "the Victorian ideal of the educated middle class has vanished into the wasteland of broken families, double digit unemployment and a damaged educational system". All the same, he found the book "of value and beauty (except perhaps to the publisher)".
Charles W. Kim, reviewing the 3rd edition for The Quarterly Review of Biology, stated that "If the author's intent was to present the impact of the ubiquitous microorganisms on the environment and humans, he has succeeded admirably", describing Postgate's style as "unique".
D. Roy Cullimore, in his Practical Atlas for Bacterial Identification, comments that all four editions were "easy reading", addressing the challenges that microbes presented to human society. He suggests that "ideally" all four books be read in sequence for an overview of the development of microbiology in half a century.
Notes
References
1969 non-fiction books
Microbiology
Popular science books | Microbes and Man | [
"Chemistry",
"Biology"
] | 816 | [
"Microbiology",
"Microscopy"
] |
53,117,147 | https://en.wikipedia.org/wiki/Tonio%20%28app%29 | Tonio is an audio decoding app for mobile devices that allows information to be sent inaudibly to smartphones via radio or TV audio signals. The app was developed by the Austrian company information.io GmbH and initially released in October 2014. Tonio has been awarded the Austrian Radio Award and the Austrian Media Future Award. The name Tonio is a neologism formed from "Tone with Information".
Usage
Information which the app receives inaudibly via audio signals may include URLs, background information, tickets, coupons and music downloads. The developers have mentioned subtitles for operas as another possible use case. Cinecom, the Austrian marketer of advertisements in movie theaters, offers its customers the option to add background information to their commercials via the Tonio technology. The four largest chains of movie theaters in Austria removed their ban on cellphones in response to the new technology. Radio Eins, a public radio station in Berlin, used Tonio to send links inaudibly for URLs that were mentioned on air. According to the Austrian business magazine trend, the Austrian public broadcaster ORF is working on a cooperation with Tonio for various programs, as is the station LoungeFM, which intends to complement its radio news with links to the largest Austrian news website, DerStandard.at. Tonio was also used for a campaign in movie theaters in April and May 2016 in which visitors received trivia questions about the movie shown.
Technology
Tonio decodes audio information that has been enriched with an inaudible code transmitted by a radio or TV station and translates the code into a URL, for example. Therefore, a certain modification of the audio information by the broadcaster is required. In contrast, an acoustic fingerprint is a digital code that characterizes sounds and audio recordings with specific acoustic characteristics (e.g. the app Shazam). Such fingerprints must be stored in databases to identify unknown sounds and acoustic signals (e.g. voice prints or songs), while Tonio codes and decodes additional information within the signal itself. Other apps using inaudible signals, which were used for surveillance in the US by sending information about users back to servers, have been flagged by the privacy protection organization Center for Democracy and Technology. However, according to its developers, Tonio decodes the information locally and not on the servers of the company, which makes tracking of consumer behaviour impossible.
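As a generic illustration of the underlying idea of data transfer over an audio channel (this sketch is not Tonio's actual, proprietary coding scheme; the sample rate, tone frequencies and symbol length are arbitrary assumptions), bits can be mapped to short bursts of two near-ultrasonic tones and recovered by correlating each burst against the two reference frequencies:

import numpy as np

FS = 44100              # sample rate in Hz (assumed)
F0, F1 = 18500, 19500   # tone frequencies for bit 0 / bit 1, near-ultrasonic (assumed)
SYMBOL = 2205           # samples per bit, i.e. 50 ms (assumed)

def encode(bits):
    # concatenate one tone burst per bit
    t = np.arange(SYMBOL) / FS
    tones = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[b] for b in bits])

def decode(signal):
    # for each burst, compare the correlation energy at the two reference frequencies
    t = np.arange(SYMBOL) / FS
    ref0 = np.exp(-2j * np.pi * F0 * t)
    ref1 = np.exp(-2j * np.pi * F1 * t)
    bits = []
    for i in range(0, len(signal) - SYMBOL + 1, SYMBOL):
        chunk = signal[i:i + SYMBOL]
        bits.append(0 if abs(chunk @ ref0) > abs(chunk @ ref1) else 1)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(payload)) == payload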
Awards
Österreichischer Medienzukunftspreis 2015 (Austrian Media Future Award), category "Zukunftsweisende Medien-Unternehmen und ihr Medium" (Trendsetting media companies and their medium)
Österreichischer Radiopreis (Austrian Radio Award) 2015, category "Best Innovation"
Futurezone-Award 2015, "Best Infotainment Startup", 2nd place
References
Android (operating system) software
IOS software
Radio technology | Tonio (app) | [
"Technology",
"Engineering"
] | 585 | [
"Information and communications technology",
"Telecommunications engineering",
"Radio technology"
] |
53,117,334 | https://en.wikipedia.org/wiki/Ranjan%20Mallik | Ranjan Kumar Mallik (born 1964) is an Indian electrical and communications engineer and a professor at the Department of Electrical Engineering of the Indian Institute of Technology, Delhi. He held the Jai Gupta Chair at IIT Delhi from 2007 to 2012 and the Brigadier Bhopinder Singh Chair from 2012 to 2017. He is known for his research on multiple-input multiple-output (MIMO) systems and is an elected fellow of all three major Indian science academies, viz. the Indian Academy of Sciences, the Indian National Science Academy, and The National Academy of Sciences, India. He is also an elected fellow of The World Academy of Sciences, the Indian National Academy of Engineering, and The Institute of Electrical and Electronics Engineers, Inc.
The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards for his contributions to Engineering Sciences in 2008.
Biography
R. K. Mallik, born in November 1964 in Kolkata, the capital city of the Indian state of West Bengal, to Radharaman Mallik and Tapati, earned a BTech in electrical engineering from the Indian Institute of Technology, Kanpur in 1987. He pursued his higher studies at the University of Southern California, where he completed an MS in electrical engineering in 1988 and followed it up with a PhD in 1992. Returning to India, he joined the Defence Electronics Research Laboratory, Hyderabad the same year as a Grade C Scientist and worked there for two years until his move to the Indian Institute of Technology, Kharagpur in 1994 as a lecturer. His stay at IIT Kharagpur lasted until 1996, during which period he also served as an assistant professor in 1995–96. He next moved to the Indian Institute of Technology, Guwahati as an assistant professor, and in 1998 he shifted his base to Delhi to join the Indian Institute of Technology, Delhi, where he serves as a professor in the Department of Electrical Engineering. In between, he held the IDRC Research Chair in Wireless Communications from 2009 to 2015, the Jai Gupta Chair from 2007 to 2012, and the Brigadier Bhopinder Singh Chair from 2012 to 2017. He is also associated with the Bharti School of Telecommunication Technology and Management of IIT Delhi as a faculty member.
Mallik is married to Sumona DasGupta and the couple has one child, Upamanyu.
Legacy
Mallik's research has been in the fields of communication theory and systems, difference equations, and linear algebra, and he has worked extensively on the performance analysis of multiple-input multiple-output systems, especially the characterization of fading channel statistics and error analysis under correlated fading conditions. He has documented his research in a number of articles; Google Scholar and ResearchGate, online repositories of scientific articles, list many of them. Besides, he has contributed chapters to books, including the 2011 edition of Issues in Telecommunications Research published by ScholarlyEditions.
Mallik is a member of the Program Advisory Committee on Electrical, Electronics and Computer Engineering of the Science and Engineering Research Board of the Department of Science and Technology. He is a former treasurer (2005 and 2006) of the Delhi Section of The Institute of Electrical and Electronics Engineers, Inc. (IEEE) and served as a member of the IEEE Communications Society's Awards Standing Committee during 2015-2017. He is a founder member of the Communication Systems and Networks Association (COMSNETS) and has contributed to the organization of several conferences. His invited and keynote speeches include the lecture on Performance Evaluation and Channel Characterization for Wireless Communication System at National Science Day (2009), the address on Optimal Signaling for Multi-Level Amplitude-Shift Keying with Single-Input Multiple-Output and Non-coherent Reception at CALCON (2014), the Alexander Graham Bell lecture of the IEEE (2015), and the keynote speech at the IEEE Patna 5G Summit (2016).
Awards and honors
Mallik became an elected fellow of the National Academy of Sciences, India in 2007. He received the Hari Om Ashram Prerit Dr. Vikram Sarabhai Research Award in electronics, telematics, informatics, and automation of the Physical Research Laboratory in 2008 and the Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards the same year. The Indian National Science Academy elected him as a fellow in 2011 and the Indian Academy of Sciences followed suit a year later. In addition, the years 2012 and 2013 brought him two elected fellowships; that of The World Academy of Sciences and The Institute of Electrical and Electronics Engineers, Inc., respectively. He is also an elected fellow of The Institution of Engineering and Technology and the Indian National Academy of Engineering.
Selected bibliography
See also
Arogyaswami Paulraj
Thomas Kailath
Gregory Raleigh
Gerard J. Foschini
MIMO-OFDM
Single-input single-output system
Notes
References
External links
Recipients of the Shanti Swarup Bhatnagar Award in Engineering Science
1964 births
Scientists from Kolkata
Bengali scientists
Indian electrical engineers
Fellows of the Indian Academy of Sciences
IIT Kanpur alumni
USC Viterbi School of Engineering alumni
Academic staff of IIT Kharagpur
Academic staff of IIT Delhi
Academic staff of the Indian Institute of Technology Guwahati
Fellows of the National Academy of Sciences, India
Fellows of the Indian National Science Academy
Fellows of the IEEE
Fellows of the Institution of Engineering and Technology
TWAS fellows
Living people
Fellows of the Indian National Academy of Engineering
Engineers from West Bengal | Ranjan Mallik | [
"Engineering"
] | 1,114 | [
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
53,117,514 | https://en.wikipedia.org/wiki/Intensive%20farming%20in%20Almer%C3%ADa | The intensive agriculture of the province of Almeria, Spain, is a model of the utilization of highly technical means to achieve maximum economic yield based on the rational use of water, use of plastic greenhouses, highly technical training and high levels of employment of inputs, applied to the special characteristics of a particular environment. The greenhouses (invernaderos) are located between Motril and Almeria. The area of El Ejido is well known to be agriculturally productive.
History
The first greenhouse was built in 1963 and the technique was extended by the Campo de Dalías or Poniente Almeriense and later by the Campo de Níjar, in the east. The use of polyethylene as a substitute for glass had already been tested in the Canary Islands and Catalonia. The plastic was spread over wooden posts or metal structures and secured by wire. The transparent plastic intensifies the heat and maintains the humidity. This allows crops to be harvested one month earlier than in an open field and ahead of other regions, with harvesting starting in December and the autumn–winter plantings continuing to grow until March. This allows the number of harvests to be doubled, and sometimes tripled.
Migrant agricultural workers
Beginning in the 2000s in the El Ejido region of Andalusia, African (including large numbers of Moroccan) immigrant greenhouse workers have been documented as being faced with severe social marginalization and racism while simultaneously exposed to extremely difficult working conditions with significant exposure to toxic pesticides. The El Ejido region has been described by environmentalists as a "sea of plastic" due to the expansive swaths of land covered by greenhouses, and has also been labeled "Europe's dirty little secret" due to the documented abuses of workers who help produce large quantities of Europe's food supply.
In these greenhouses, workers are allegedly required to work under "slave-like" conditions in temperatures as high as 50 degrees Celsius with nonexistent ventilation, while being denied basic rest facilities and earning extremely low wages, among other workplace abuses. As of 2015, out of 120,000 immigrant workers employed in the greenhouses, 80,000 are undocumented and not protected by Spanish labour legislation, according to Spitou Mendy of the Spanish Field Workers Syndicate (SOC). Workers have complained of ill health effects as a result of exposure to pesticides without proper protective equipment.
Groundwater pollution and drilling of water wells
Groundwater is being polluted with fertilisers and pesticides. Some 5200 tons of chemical waste is dumped into the area each year.
The local government has also banned drilling new water wells, but this is often ignored and new wells are drilled up to a depth of 2000 meters.
Plastic waste
Some 30,000 tons of plastic waste are created each year, and in places where the soil has become infertile the greenhouses are abandoned after the plastic has been shredded. The plastic waste from the greenhouses is reported to run off into the Mediterranean Sea.
Commercial evolution
In February 2010 a new certification regulation of the N brand of AENOR for fruits and vegetables for fresh consumption came into force. This regulation describes the control system of the ISO 155 standard. This mark guarantees to customers that the products comply with quality protocols that include good agricultural practices, respect for the environment, traceability and social measures. Compliance with the standard covers almost all the requirements that large European distributors demand of fruit and vegetable producers. These standards are homologated with the GLOBALGAP protocol.
According to data from EXTENDA (Andalusian Agency for Foreign Promotion), the value of exports of fruit and vegetables in 2012 amounted to 1,914.1 million euros, a growth of 9.7% compared to 2011. Fresh vegetables contributed 1,665.5 million euros. There were 359 exporting companies, 222 of them regular exporters. These sales accounted for 47.3% of the total of the autonomous community. Among the client countries are Germany, 29.7% of the total, France, 15%, the Netherlands, 13.1%, the United Kingdom, 11.3%, and Italy, 7.2%. They are followed by Poland, Belgium, Sweden, Denmark and Portugal. According to the same source, in the first six months of 2013 sales totaled 1,600 million euros, 14.6% more than the previous year.
Between January and October 2013, the province exported more than 12.8 million kilos of live plants and cut flowers, 18.4% more than in the same period of 2012. The turnover amounted to almost 18.7 million euros, 56% more. Exports of ornamental plants accounted for more than 17.8 million euros, an increase of 59% over the same period in 2012. The main buyers are France, with 59.6% of the plants, Germany, with a 14.2%, and the Netherlands, with 10.6%. They are followed by Belgium, Portugal, Italy, United Kingdom, United States and Morocco.
Farmers increasingly specialize in a single product, and the marketing of produce is increasingly concentrated in a few large firms. The largest companies, such as Agroponiente, Unica Group, CASI, Alhóndiga La Unión, Agroiris and Vicasol, accounted for 35% of the market share in 2015.
Sustainable practices
Some farms in El Ejido have begun to take up sustainable growing practices, i.e. practicing integrated pest management (IPM), using biological crop protection and residue-free growing. Some comply with very strict specifications, forcing them to develop a greenhouse management system based on the principles of integrated production with a minimum use of synthetic products for fertilization (meaning that no pesticides are used).
Integrated Pest Management (IPM)
In agriculture, integrated pest management (IPM) or integrated pest control (IPC) is understood as a strategy that uses a variety of complementary methods: physical, mechanical, chemical, biological, genetic, legal and cultural for control of pests. These methods are applied in three stages: prevention, observation and application. It is an ecological method that aims to reduce or eliminate the use of pesticides and minimize the impact on the environment. There is also talk about ecological pest management (EPM) and natural pest management.
As of 2015, 60% of the area devoted to horticultural crops in the province used biological pest control techniques. The percentages are higher in some key crops such as pepper (100%) and tomato (85%). In all, some 26,600 hectares of protected horticulture use these techniques, whereas in 2006 they were used on only about 129 hectares.
The Regional Ministry of Agriculture and Fisheries of the Andalusian Government is launching a plan (Compromiso Verde, or Green Commitment) in 2016 to expand the area to 100%. For the regional government, this is the model that should widen the gap with traditional cultivation and make definitively clear that Almería produces with more quality, more traceability and more food security than anywhere else:
...guarantees the quality and improves the positioning of our products in international markets, increases the profitability of farms, enhances respect for the environment and minimizes the presence of insect vectors of viruses and favors the correct management of pests." (Carmen Ortiz Rivas, Minister of Agriculture, Fisheries and Rural Development.)
Analyses of horticultural products indicate that only 0.6% of samples show pesticide residues, whereas the European average is 2.8% (almost five times as high).
Hydroponics
Some greenhouses are beginning to use computer-controlled hydroponics systems, using rock wool for rooting.
Aerial photographs and coordinates
Centre of Campo de Dalías - Google maps Dense concentration of plastic greenhouses
Eastern outskirts of Almería
Níjar valley
Carchuna
Valley north of Castell de Ferro
See also
Prostitution in Roquetas de Mar
References
Bibliography
Vázquez de Parga, Raúl. "Campo de Dalías, milagroso oasis de Almería" (Field of Dalías, miraculous oasis of Almería), Selecciones del Reader's Digest, tome LXXXIV, nº 504, November 1982, D.L.: M. 724-1958
Several authors. "Atlas Geográfico de la Provincia de Almería. El medio – La sociedad – Las actividades" (Geographic Atlas of the Province of Almería. The medium - The society - The activities), Ed. Instituto de Estudios Almerienses, Diputación de Almería, D.L. AL 818–2009,
Agriculture in Spain
Food industry
Intensive farming
Province of Almería | Intensive farming in Almería | [
"Chemistry"
] | 1,779 | [
"Eutrophication",
"Intensive farming"
] |
53,117,864 | https://en.wikipedia.org/wiki/Propane%20Education%20and%20Research%20Council | The Propane Education and Research Council (PERC) is a nonprofit that provides propane safety and training programs and invests in research and development of new propane-powered technologies. PERC is operated and funded by the propane industry. PERC programs benefit a variety of markets including transportation, agriculture, commercial landscaping, residential, and commercial building.
PERC was authorized by the U.S. Congress with the passage of the Propane Education and Research Act (PERA), signed into law on Oct. 11, 1996. PERC is governed by a 21-member board appointed by the National Propane Gas Association and the Gas Processors Association. Each association appoints nine Council members and they cooperate in the appointment of three public members.
PERC's operations and activities are funded by an assessment levied on each gallon of propane gas at the point it is odorized or imported into the United States.
There has been controversy over the council's provocative anti-electrification messaging using influencers, given that the money collected through fees on propane sales is supposed to be used for research and safety. In 2023, the organization planned to spend $13 million on its anti-electrification campaign, including $600,000 on "influencers".
References
External links
Propane
Trade associations based in the United States
Non-profit organizations based in Washington, D.C.
Natural gas organizations | Propane Education and Research Council | [
"Engineering"
] | 279 | [
"Natural gas organizations",
"Energy organizations"
] |
53,118,571 | https://en.wikipedia.org/wiki/Franz%20Knoop | Georg Franz Knoop (20 September 1875 in Shanghai – 2 August 1946 in Tübingen) was a German biochemist, best known for his discovery of the β-oxidation of fatty acids in 1905.
Alongside Hans Adolf Krebs and Carl Martius, he clarified the reaction sequence of the citric acid cycle in 1937. He determined the structure of histidine and demonstrated that amino acids can be synthesized not only in plants, but also in animals.
References
1875 births
1946 deaths
German biochemists
Citric acid cycle | Franz Knoop | [
"Chemistry"
] | 108 | [
"Carbohydrate metabolism",
"Biochemistry stubs",
"Biochemists",
"Biochemist stubs",
"Citric acid cycle"
] |
53,120,575 | https://en.wikipedia.org/wiki/Disodium%20helide | Disodium helide (Na2He) is a compound of helium and sodium that is stable at high pressures above about 113 GPa. It was first predicted using the USPEX crystal structure prediction algorithm and then synthesised in 2016.
Synthesis
Na2He was predicted to be thermodynamically stable over 160 GPa and dynamically stable over 100 GPa. This means it should be possible to form it at the higher pressure and then decompress it to 100 GPa, but below that it would decompose. Compared with binary compounds of other elements and helium, it was predicted to be stable at the lowest pressure of any such combination. This also means, for example, that a helium-potassium compound is predicted to require much higher pressures, of the order of terapascals.
The material was synthesized by putting tiny plates of sodium in a diamond anvil cell along with helium at 1600 bar and then compressing to 130 GPa and heating to 1,500 K with a laser. Disodium helide is predicted to be an insulator and transparent. At 200 GPa the sodium atoms have a Bader charge of +0.599, the helium charge is −0.174, and the two-electron spots are each near −0.511. This phase could be called disodium helium electride. Disodium helide melts at a high temperature near 1,500 K, much higher than the melting point of sodium. When decompressed, it can keep its form as low as 113 GPa. As pressure increases, the sodium is predicted to gain more positive charge, the helium to lose negative charge and the free electron density to increase. Energy is compensated by the relative shrinking of the helium atoms and the space for electrons.
Structure
Disodium helide has a cubic crystal structure, resembling that of fluorite. At 300 GPa the edge of a unit cell of the crystal has . Each unit cell contains four helium atoms on the centre of the cube faces and corners, and eight sodium atoms at coordinates halfway between the center and each corner. Electron pairs (2e−) are positioned on each edge and the centre of the unit cell. Each pair of electrons is spin paired. The presence of these isolated electrons makes this an electride. The helium atoms do not participate in any bonding; however, the electron pairs can be considered as an eight-centre two-electron bond.
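A minimal sketch of the fluorite-type arrangement described above, using only fractional coordinates (the standard fluorite positions are assumed as an illustration; no lattice parameter is needed):

from itertools import product

# Fluorite-type Na2He cell: He on the fcc sites (cube corners and face centres),
# Na on the eight sites halfway between the cell centre and each corner,
# i.e. the (1/4, 1/4, 1/4)-type positions. Fractional coordinates:
he_sites = [(0, 0, 0), (0, 0.5, 0.5), (0.5, 0, 0.5), (0.5, 0.5, 0)]
na_sites = list(product((0.25, 0.75), repeat=3))

# Electron pairs sit on the edge midpoints and the cell centre: 3 + 1 = 4 pairs,
# i.e. 8 electrons per cell, matching the 8 sodium atoms.
e_sites = [(0.5, 0, 0), (0, 0.5, 0), (0, 0, 0.5), (0.5, 0.5, 0.5)]

assert (len(he_sites), len(na_sites), len(e_sites)) == (4, 8, 4)   # Na8He4(2e-)4 per cell
print("Na positions:", na_sites)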
Footnotes
References
Sodium compounds
Helium compounds
Binary compounds
Substances discovered in the 2010s
Electrides | Disodium helide | [
"Chemistry"
] | 510 | [
"Electron",
"Electrides",
"Salts"
] |
53,121,641 | https://en.wikipedia.org/wiki/Nano-suction%20technology | Nano-suction is a technology that uses vacuum, negative fluid pressure and millions of nano-sized suction cups to securely adhere any object to a flat non-porous surface. When the nano-suction object is pressed against a flat surface, millions of miniature suction cups create a large vacuum, generating a strong suction force that can hold a tremendous amount of weight. The nature of the technology allows easy removal without residue, and makes it reusable.
Applications
There have been a wide range of applications of nano-suction technology, also known as "anti-gravity", ranging from hooks, frames, mirrors, notepad organisers, mobile phone cases and large houseware products.
See also
Synthetic setae
Suction cup
References
Nanotechnology
Tools
Vacuum
Joining | Nano-suction technology | [
"Physics",
"Materials_science",
"Engineering"
] | 159 | [
"Nanotechnology",
"Vacuum",
"Matter",
"Materials science"
] |
53,123,104 | https://en.wikipedia.org/wiki/Query%20understanding | Query understanding is the process of inferring the intent of a search engine user by extracting semantic meaning from the searcher’s keywords. Query understanding methods generally take place before the search engine retrieves and ranks results. It is related to natural language processing but specifically focused on the understanding of search queries.
Methods
Stemming and lemmatization
Many languages inflect words to reflect their role in the utterance they appear in. This variation between word forms is likely to be of little importance for the relatively coarse-grained model of meaning involved in a retrieval system, and for this reason conflating the various forms of a word is a potentially useful technique for increasing recall.
Stemming algorithms, also known as stemmers, typically use a collection of simple rules to remove suffixes intended to model the language’s inflection rules.
For some languages, there are simple lemmatisation methods to reduce a word in a query to its lemma, root form, or stem; for others, this operation involves non-trivial string processing and may require recognizing the word's part of speech or referencing a lexical database.
The effectiveness of stemming and lemmatization varies across languages.
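A minimal illustration of rule-based suffix stripping in Python (a toy sketch with an arbitrary suffix list, far cruder than production stemmers such as the Porter stemmer):

def toy_stem(word):
    # strip a few common English suffixes, longest match first
    for suffix in ("ation", "ing", "ly", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([toy_stem(w) for w in ["searching", "searched", "searches", "search"]])
# all four conflate to the same stem: ['search', 'search', 'search', 'search']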
Query Segmentation
Query segmentation is a key component of query understanding, aiming to divide a query into meaningful segments. Traditional approaches, such as the bag-of-words (BOW) model, treat individual words as independent units, which can limit interpretative accuracy. For languages like Chinese, where words are not separated by spaces, segmentation is essential, as individual characters often lack standalone meaning. Even in English, the BOW model may not capture the full meaning, as certain phrases—such as "New York"—carry significance as a whole rather than as isolated terms. By identifying phrases or entities within queries, query segmentation enhances interpretation, enabling search engines to apply proximity and ordering constraints, ultimately improving search accuracy and user satisfaction.
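A minimal sketch of dictionary-based query segmentation using greedy longest match (the phrase dictionary below is an illustrative assumption; real systems derive such units from query logs, web corpora or knowledge bases):

PHRASES = {"new york", "new york times", "san francisco"}   # known multi-word units

def segment(query, max_len=3):
    words, segments, i = query.lower().split(), [], 0
    while i < len(words):
        # try the longest candidate phrase starting at position i
        for n in range(min(max_len, len(words) - i), 0, -1):
            candidate = " ".join(words[i:i + n])
            if n == 1 or candidate in PHRASES:
                segments.append(candidate)
                i += n
                break
    return segments

print(segment("new york times square hotels"))
# ['new york times', 'square', 'hotels']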
Entity recognition
Entity recognition is the process of locating and classifying entities within a text string. Named-entity recognition specifically focuses on named entities, such as names of people, places, and organizations. In addition, entity recognition includes identifying concepts in queries that may be represented by multi-word phrases. Entity recognition systems typically use grammar-based linguistic techniques or statistical machine learning models.
Query rewriting
Query rewriting is the process of automatically reformulating a search query to more accurately capture its intent. Query expansion adds additional query terms, such as synonyms, in order to retrieve more documents and thereby increase recall. Query relaxation removes query terms to reduce the requirements for a document to match the query, thereby also increasing recall. Other forms of query rewriting, such as automatically converting consecutive query terms into phrases and restricting query terms to specific fields, aim to increase precision.
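A minimal sketch of synonym-based query expansion (the synonym table is an illustrative assumption; production systems typically mine such rewrites from query logs and lexical resources):

SYNONYMS = {"cheap": ["inexpensive", "affordable"], "laptop": ["notebook"]}

def expand(query):
    # OR each term with its known synonyms, keep other terms as-is
    clauses = []
    for term in query.lower().split():
        alternatives = [term] + SYNONYMS.get(term, [])
        clauses.append("(" + " OR ".join(alternatives) + ")")
    return " AND ".join(clauses)

print(expand("cheap laptop bag"))
# (cheap OR inexpensive OR affordable) AND (laptop OR notebook) AND (bag)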
Spelling Correction
Automatic spelling correction is a critical feature of modern search engines, designed to address common spelling errors in user queries. Such errors are especially frequent as users often search for unfamiliar topics. By correcting misspelled queries, search engines enhance their understanding of user intent, thereby improving the relevance and quality of search results and overall user experience.
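A minimal sketch of dictionary-based spelling correction using edit distance (the vocabulary and frequencies are illustrative assumptions; real search engines also exploit query logs and surrounding context):

VOCAB = {"weather": 900, "whether": 400, "wether": 5}   # term -> frequency

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(term, max_edits=2):
    # closest vocabulary entry, ties broken by higher frequency
    candidates = [(edit_distance(term, w), -freq, w) for w, freq in VOCAB.items()]
    dist, _, best = min(candidates)
    return best if dist <= max_edits else term

print(correct("wheater"))   # -> 'weather'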
External links
Proceedings of ACM SIGIR 2011 Workshop on Query Representation and Understanding
Query Understanding for Search Engines (Yi Chang and Hongbo Deng, Eds.)
References
Information retrieval techniques
Natural language processing | Query understanding | [
"Technology"
] | 682 | [
"Natural language processing",
"Natural language and computing"
] |
53,123,871 | https://en.wikipedia.org/wiki/Disordered%20Structure%20Refinement | The Disordered Structure Refinement program (DSR), written by Daniel Kratzert, is designed to simplify the modeling of molecular disorder in crystal structures using SHELXL by George M. Sheldrick. It has a database of approximately 120 standard solvent molecules and molecular moieties. These can be inserted into the crystal structure with little effort, while at the same time chemically meaningful binding and angular restraints are set. DSR was developed because the previous description of disorder in crystal structures with SHELXL was very lengthy and error-prone. Instead of editing large text files manually and defining restraints manually, this process is automated with DSR.
Application
DSR can be started in a command line. The call has the basic form:
dsr [option] (SHELXL file)
DSR is controlled with a special command in the corresponding SHELXL file. This has the following syntax:
REM DSR PUT/REPLACE "fragment" WITH (atoms) ON (atoms or q-peaks) PART 1 OCC -21 =
RESI DFIX
The DSR command must always start with REM so that SHELXL does not recognize this line as its own command. Which atom of the molecule fragment from the database corresponds to which atom or q-peak in the crystal structure is specified in the list following WITH and ON.
By running
dsr -r file.res
the fragment fit is performed and the restraints transferred.
Graphical user interface
Since 2016 ShelXle has a graphical interface to DSR. Most commands of the command line version can be executed there.
In order to transfer a fragment into a structure, three atoms or q-peaks have to be selected in ShelXle and three corresponding atoms in the DSR GUI to specify the position of the fragment. The 3D view of the fragment then shows a preview of the subsequent fragment fit.
Programming
DSR is written entirely in Python and therefore runs on any operating system that supports Python.
It is released under the free Beerware license and can be downloaded free of charge and modified as desired.
References
External links
Crystallography software
Python (programming language) software | Disordered Structure Refinement | [
"Chemistry",
"Materials_science"
] | 440 | [
"Crystallography",
"Crystallography software"
] |
53,124,143 | https://en.wikipedia.org/wiki/Bang%27s%20theorem%20on%20tetrahedra | In geometry, Bang's theorem on tetrahedra states that, if a sphere is inscribed within a tetrahedron, and segments are drawn from the points of tangency to each vertex on the same face of the tetrahedron, then all four points of tangency have the same triple of angles. In particular, it follows that the 12 triangles into which the segments subdivide the faces of the tetrahedron form congruent pairs across each edge of the tetrahedron. It is named after A. S. Bang, who posed it as a problem in 1897.
References
Theorems in geometry
Euclidean solid geometry
Tetrahedra | Bang's theorem on tetrahedra | [
"Physics",
"Mathematics"
] | 134 | [
"Mathematical theorems",
"Euclidean solid geometry",
"Space",
"Geometry",
"Theorems in geometry",
"Spacetime",
"Mathematical problems"
] |
53,125,965 | https://en.wikipedia.org/wiki/List%20of%20yttrium%20compounds | This list of yttrium compounds shows compounds of yttrium. Inclusion criteria: those that have applications, academic significance, single crystal structures or have their own Wikipedia articles.
References
Yttrium compounds
Yttrium compounds | List of yttrium compounds | [
"Chemistry"
] | 46 | [
"nan"
] |
53,127,729 | https://en.wikipedia.org/wiki/Precision%20Time%20Protocol%20Industry%20Profile | Industrial automation systems consisting of several distributed controllers need a precise synchronization for commands, events and process data.
For instance, motors for newspaper printing are synchronized within some 5 microseconds to ensure that the color pixels in the different cylinders come within 0.1 mm at a paper speed of some 20 m/s. Similar requirements exist in high-power semiconductors (e.g. for converting between AC and DC grids) and in drive-by-wire vehicles (e.g. cars with no mechanical steering wheel).
This synchronisation is provided by the communication network, in most cases Industrial Ethernet.
Many ad-hoc synchronization schemes exist, so IEEE published a standard Precision Time Protocol IEEE 1588 or "PTP", which allows sub-microsecond synchronization of clocks.
PTP is formulated generally, so concrete applications need a stricter profile. In particular, PTP does not specify how the clocks should operate when the network is duplicated for better resilience to failures.
The PTP Industrial Profile (PIP) is a standard of the IEC 62439-3 that specifies in its Annex C two Precision Time Protocol IEEE 1588 / IEC 61588 profiles, L3E2E and L2P2P, to synchronize network clocks with an accuracy of 1 μs and provide fault-tolerance against clock failures.
The IEC 62439-3 PTP profiles are applicable to most Industrial Ethernet networks, for synchronized drives, robotics, vehicular technology and other applications that require precise time distribution, not necessarily using redundant networks.
The IEC 62439-3 profile L2P2P has been adopted as IEC/IEEE 61850-9-3 by the power utility industry to support precise time stamping of voltage and current measurement for differential protection, wide area monitoring and protection, busbar protection and event recording.
The IEC 62439-3 PTP profiles can be used to ensure deterministic operation of critical functions in the automation system itself, for instance precise starting of tasks, resource reservation and deadline supervision.
The IEC 62439-3 Annexes belongs to the Parallel Redundancy Protocol and High-availability Seamless Redundancy standard suite for high availability automation networks. However, this specification also applies to networks that have no redundancy and do not use PRP or HSR.
Topology
The PIP relies on the IEEE 1588 topology, consisting of grandmaster clocks (GC), ordinary clocks (OC), boundary clocks (BC), transparent clocks (TC) and hybrid clocks (HC = TC&OC).
For redundancy, a PIP network contains several clocks that are master-capable. Normally, the best master clock algorithm ensures that only one grandmaster broadcasts the time.
In redundant networks, and especially in PRP, several masters can be active at the same time; the slave then chooses its master.
PIP Profiles and Annexes
IEC 62439-3 Annex A specifies how to attach clocks to duplicated networks paths and how to support simultaneously active redundant master clocks for all profiles.
IEC 62439-3 Annex B specifies the L2PTP profile for substation automation IEC/IEEE 61850-9-3. In contrast to IEC/IEEE 61850, double attachment by PRP or HSR is mandatory.
IEC 62439-3 Annex C specifies two profiles, L3E2E and L2P2P, that are subsets of IEEE Std 1588 Precision Time Protocol (PTP) when clocks are singly attached.
IEC 62439-3 Annex D is a tutorial for IEEE 1588 that concentrates only on PIP.
IEC 62439-3 Annex E contains the SNMP objects for managing the doubly-attached clocks.
Main features
IEC 62439-3 Annex C uses the following IEEE Std 1588 options:
uses the PTP timescale based on TAI International Atomic Time, also delivers UTC Coordinated Universal Time
transmits the clock correction indifferently with 1-step (preferred) or 2-step (can be mixed)
operates with the default best master clock algorithm, performed by master and by slave clocks
supports both options to measure the link delay:
L3E2E: End-to-end measurement (Delay_Req/Delay_Resp) over Layer 3 (Internet Protocol) to fulfill the requirements of ODVA;
L2P2P: Peer-to-peer measurement (Pdelay_Req/Pdelay_Resp) over Layer 2 Ethernet (IEEE 802.3) links.
Performance
IEC 62439-3 Annex C aims at an accuracy of better than 1 μs after crossing 15 bridges with transparent clocks.
It assumes that all network elements (bridges, routers, media converters, links) support PTP with a given performance:
Grandmaster (GC): 250 ns maximum inaccuracy
Transparent Clocks (TC): 50 ns maximum inaccuracy
Boundary Clocks (BC): 200 ns maximum inaccuracy
Media Converters: 50 ns maximum jitter and 25 ns maximum asymmetry
Link asymmetry: 25 ns maximum asymmetry
By relying on these guaranteed values, the network engineer can calculate the time inaccuracy at different nodes of the network and place the clocks, especially the grandmaster clocks, suitably.
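As a rough illustration of such a budget calculation (a simplified worst-case sum of the guaranteed values listed above; the standard's actual rules for combining error contributions may differ), the inaccuracy seen by a slave behind a chain of transparent clocks can be estimated as follows:

GC_NS, TC_NS, LINK_ASYM_NS = 250, 50, 25   # guaranteed worst-case contributions in ns

def worst_case_inaccuracy(n_transparent_clocks, count_link_asymmetry=False):
    # simple additive worst-case budget: grandmaster plus one contribution per bridge
    total = GC_NS + n_transparent_clocks * TC_NS
    if count_link_asymmetry:
        total += (n_transparent_clocks + 1) * LINK_ASYM_NS
    return total

print(worst_case_inaccuracy(15))        # 1000 ns -> the 1 us target after 15 bridges
print(worst_case_inaccuracy(15, True))  # 1400 ns if every link shows full asymmetry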
IEC TR 61850-90-4 (Network engineering guidelines) gives advice on the use of IEC/IEEE 61850-9-3 in substation automation networks.
IEEE 1588 settings
IEC 62439-3 Annex C restricts the parameters of IEEE Std 1588 to the following values:
domainNumber: 0 (default range)
Announce interval: (default range) 1 s (L2P2P) or 2 s (L3E2E)
Sync interval: 1 s (fixed)
Pdelay interval: 1 s (fixed)
Announce receipt time-out (number of Announce interval that has to pass without receipt of an Announce message before Announce timeout is issued): 3 (fixed)
priority1: 255 for slave-only
priority2: 255 for slave-only
transparent clock primary syntonization domain: 0 (default)
Additions to IEEE Std 1588
IEC 62439-3 Annex C specifies requirements in addition to IEEE 1588:
A clock shall accept both 1-step and 2-step corrections (improves plug-and-play)
All clocks can be doubly attached using the IEC 62439-3 protocol (PRP "Parallel Redundancy Protocol" or HSR "High-availability Seamless Redundancy")
Several master clocks can be active at the same time; the slave selects the best master.
Time-outs ensure that the clocks can detect the loss of PTP messages also on the unused path.
Identification of the peer node to check the topology of the network and ensure that all elements support the protocol.
In networks using store-and-forward media converters, and for L2P2P only, the master appends padding to Sync messages to ensure that Sync and Pdelay_Req/Pdelay_Resp messages have the same size (this will be specified in IEEE 1588:2017)
Network management by SNMP according to IEC 62439-3 Annex E
Standard owners
This protocol has been developed by the IEC SC65C WG15 in the framework of IEC 62439, which applies to all IEC industrial networks.
To avoid parallel standards in IEC and IEEE in the field of grid automation, the L2PTP profile specific to grid automation, previously IEC 62439-3 Annex B, has been placed under the umbrella of the IEC and IEEE joint development 61850-9-3.
Technical responsibility rests with IEC SC65C WG15, which is committed to keep the IEC 62439-3 profile L2P2P and IEC/IEEE 61850-9-3 aligned.
References
External links
IEC 61588:2009 Precision clock synchronization protocol for networked measurement and control systems
IEC/IEEE 61850-9-3, Communication networks and systems for power utility automation – Part 9-3: Precision time protocol profile for power utility automation
IEC TR 61850-90-4:2013 Communication networks and systems for power utility automation - Part 90-4: Network engineering guidelines
Tutorial on HSR
Tutorial on Parallel Redundancy Protocol (PRP)
Tutorial on the precision time protocol industrial profiles in IEC 62439-3
IEC 62439-3 Tissues (Technical issues) database for IEC 62439-3 / IEC/IEEE 61850-9-3
Networking standards
Network protocols | Precision Time Protocol Industry Profile | [
"Technology",
"Engineering"
] | 1,788 | [
"Networking standards",
"Computer standards",
"Computer networks engineering"
] |
70,157,641 | https://en.wikipedia.org/wiki/Mirror%20descent | In mathematics, mirror descent is an iterative optimization algorithm for finding a local minimum of a differentiable function.
It generalizes algorithms such as gradient descent and multiplicative weights.
History
Mirror descent was originally proposed by Nemirovski and Yudin in 1983.
Motivation
In gradient descent with a sequence of learning rates $(\gamma_n)_{n \geq 0}$ applied to a differentiable function $F$, one starts with a guess $\mathbf{x}_0$ for a local minimum of $F$ and considers the sequence $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots$ such that
$$\mathbf{x}_{n+1} = \mathbf{x}_n - \gamma_n \nabla F(\mathbf{x}_n), \quad n \geq 0.$$
This can be reformulated by noting that
$$\mathbf{x}_{n+1} = \arg\min_{\mathbf{x}} \left( F(\mathbf{x}_n) + \langle \nabla F(\mathbf{x}_n), \mathbf{x} - \mathbf{x}_n \rangle + \frac{1}{2\gamma_n} \|\mathbf{x} - \mathbf{x}_n\|^2 \right).$$
In other words, $\mathbf{x}_{n+1}$ minimizes the first-order approximation to $F$ at $\mathbf{x}_n$ with the added proximity term $\frac{1}{2\gamma_n}\|\mathbf{x} - \mathbf{x}_n\|^2$.
This squared Euclidean distance term is a particular example of a Bregman distance. Using other Bregman distances will yield other algorithms such as Hedge which may be more suited to optimization over particular geometries.
Formulation
We are given a convex function $f$ to optimize over a convex set $K \subset \mathbb{R}^n$, and given some norm $\|\cdot\|$ on $\mathbb{R}^n$.
We are also given a differentiable convex function $\Phi \colon \mathbb{R}^n \to \mathbb{R}$, $\alpha$-strongly convex with respect to the given norm. This is called the distance-generating function, and its gradient $\nabla \Phi \colon \mathbb{R}^n \to \mathbb{R}^n$ is known as the mirror map.
Starting from an initial $x_0 \in K$, in each iteration of Mirror Descent:
Map to the dual space: $\theta_t \leftarrow \nabla \Phi(x_t)$
Update in the dual space using a gradient step: $\theta_{t+1} \leftarrow \theta_t - \eta_t \nabla f(x_t)$
Map back to the primal space: $x'_{t+1} \leftarrow (\nabla \Phi)^{-1}(\theta_{t+1})$
Project back to the feasible region $K$: $x_{t+1} \leftarrow \arg\min_{x \in K} D_\Phi(x, x'_{t+1})$, where $D_\Phi$ is the Bregman divergence.
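A minimal Python sketch (not taken from the original references): with the negative-entropy distance-generating function $\Phi(x) = \sum_i x_i \log x_i$ on the probability simplex, the dual-space gradient step, the inverse mirror map and the Bregman projection collapse into the familiar exponentiated-gradient update.

import numpy as np

def entropic_mirror_descent(grad_f, x0, step_size, n_iters):
    # Mirror descent on the probability simplex with the negative-entropy mirror map.
    # The dual step followed by the Bregman (KL) projection reduces to a
    # multiplicative update plus renormalization.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_f(x)                     # gradient of f at the current iterate
        x = x * np.exp(-step_size * g)    # dual gradient step mapped back to the primal space
        x = x / x.sum()                   # Bregman projection onto the simplex
    return x

# Example: minimize the linear function f(x) = <c, x> over the simplex.
c = np.array([0.8, 0.3, 0.5])
x_star = entropic_mirror_descent(lambda x: c, np.ones(3) / 3, 0.5, 200)
print(x_star)   # mass concentrates on coordinate 1, where the cost 0.3 is smallest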
Extensions
Mirror descent in the online optimization setting is known as Online Mirror Descent (OMD).
See also
Gradient descent
Multiplicative weight update method
Hedge algorithm
Bregman divergence
References
Mathematical optimization
Optimization algorithms and methods
Gradient methods | Mirror descent | [
"Mathematics"
] | 311 | [
"Mathematical optimization",
"Mathematical analysis"
] |
70,158,242 | https://en.wikipedia.org/wiki/Stephan%20Ulamec | Stephan Ulamec (born January 27, 1966, in Salzburg) is an Austrian geophysicist with more than 100 articles in peer-reviewed journals who has participated in several space missions and payloads operated by various space agencies. He works at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR) in Cologne. He regularly gives lectures related to his work in aerospace engineering at the University of Applied Sciences FH Aachen. The main aspects of his work relate to the exploration of small bodies in the solar system (asteroids and comets).
Education
Ulamec studied Geophysics at the Karl-Franzens University in Graz (Austria) as student of Prof. Siegfried J. Bauer. He finished his PhD on “Acoustic and Electrical Methods for the Exploration of Atmospheres and Surfaces, with Application to Saturn's Moon Titan” in 1991.
Career
From 1991 till 1993 he worked as a research fellow at the European Space Agency (ESA), specifically at European Space Research and Technology Centre (ESTEC) in Noordwijk, in The Netherlands. Since 1994, he is at the Microgravity User Support Center (MUSC) which is part of the DLR Space Operations and Astronaut Training (SOAT). He has made several presentations at the International Astronautical Congress (IAC).
Involvement in space missions
Mission to 67P/Churyumov-Gerasimenko
Stephan Ulamec has been the project manager of the Rosetta lander Philae, which successfully landed on comet 67P/Churyumov-Gerasimenko in 2014.
Mission to (162173) Ryugu
He has also been Payload Manager of MASCOT, a lander made in common by the French space agency (CNES) and the DLR, that has been delivered by the JAXA Hayabusa2 spacecraft to asteroid (162173) Ryugu in 2018.
Mission to Phobos (Mars I)
He is one of two lead scientists (Co Principal Investigator) of the French-German MMX rover called IDEFIX©, together with Dr Patrick Michel. This rover is to be launched in 2024 by the Japanese Mars Moons eXploration (MMX), a JAXA (Japan Aerospace eXploration Agency) mission to the Mars natural satellite Phobos.
Mission to (65803) Didymos
He is also part of the Science Management Board for the ESA Hera mission, to be launched in 2024 on a SpaceX Falcon 9 rocket, which aims to perform a rendezvous with and characterise in detail the asteroid (65803) Didymos and its natural satellite Dimorphos, and to analyse the artificial impact created by the American space agency NASA's DART probe in September 2022.
Involvement in other projects and working groups
NEO-MAPP
He is involved in NEO-MAPP, a European Union Horizon 2020 project to study mitigation and characterisation techniques for potentially hazardous asteroids.
SSEWG and SSAC
From January 2020 to December 2023, he chaired the ESA Solar System and Exploration Working Group (SSEWG) and was a member of the Space Science Advisory Committee (SSAC).
Writings
Raumsonde Rosetta.
Handbuch der Raumfahrttechnik, chapter on Weltraumastronomie und Planetenmissionen.
Spacecraft Operations, chapter on Lander Operations.
Awards and honours
Member of International Academy of Astronautics (IAA).
Member of European Geosciences Union (EGU).
Asteroid (11818) Ulamec was named in his honour by the International Astronomical Union (IAU).
Sir Arthur Clark Award, 2014.
Wernher von Braun Ehrung (in German, honour) by the German Society for Aeronautics and Astronautics (DGLR), 2015.
Representative publications
References
Further reading
Related articles
Geophysics
Near Earth Objects
Rovers
Solar System
Official agencies external links
https://www.gov.uk/government/news/philae-finds-hard-ice-and-organic-molecules
https://www.esa.int/esatv/Videos/2014/09/Rosetta_Mission_Status/Stephan_Ulamec_Philae_Lander_Manager_DLR_ITV_in_German
https://www.esa.int/esatv/Videos/2014/11/Rosetta_wrap_up
https://www.esa.int/ESA_Multimedia/Videos/2016/02/Philae_facing_eternal_hibernation
https://www.eso.org/public/archives/capjournals/pdf/capj_0019.pdf (An Historic Encounter: Reviewing the Outreach around ESA’s Rosetta Mission: pages 44 to 47)
Media coverage
Comet lander's scientific harvest may be its last: Philae has fallen silent after fragmentary messages
The return of Philae: Revived after hibernation, comet lander awaits orders
(in French) https://www.sudouest.fr/sciences-et-technologie/mission-rosetta-il-est-temps-de-dire-au-revoir-a-philae-3689370.php
https://www.science.org/content/article/philae-s-scientific-harvest-may-be-its-last
https://www.scientificamerican.com/article/asteroid-ryugu-poses-landing-risks-for-japanese-mission/
https://www.theguardian.com/science/2015/apr/14/rise-and-shine-rosettas-philae-probe-could-be-awake-within-weeks
https://www.dailymotion.com/video/x30qpfs
https://www.irishtimes.com/news/science/philae-lander-goes-to-sleep-after-sending-data-to-earth-1.2002692
https://spacenews.com/esa-moves-two-missions-to-falcon-9/
https://www.dlr.de/en/blog/archive/2024/we-didnt-just-land-once-we-landed-twice
Geophysicists
Astronomers
21st-century Austrian physicists
Year of birth missing (living people)
Living people
21st-century Austrian geologists | Stephan Ulamec | [
"Astronomy"
] | 1,349 | [
"Astronomers",
"People associated with astronomy"
] |
70,161,362 | https://en.wikipedia.org/wiki/Freight%20train | A freight train, also called a goods train or cargo train, is a railway train that is used to carry cargo, as opposed to passengers. Freight trains are made up of one or more locomotives which provide propulsion, along with one or more railroad cars (also known as wagons) which carry freight. A wide variety of cargos are carried on trains, but the low friction inherent to rail transport means that freight trains are especially suited to carrying bulk and heavy loads over longer distances.
History
The earliest recorded use of rail transport for freight was in Babylon, circa 2,200 B.C.E. This use took the form of wagons pulled on wagonways by horses or even humans.
Locomotives
Freight trains are almost universally powered by locomotives. Historically, steam locomotives were predominant, but beginning in the 1920s diesel and electric locomotives displaced steam due to their greater reliability, cleaner emissions, and lower costs.
Freight cars
Freight trains carry cargo in freight cars, also known as goods wagons, which are unpowered and designed to carry various types of goods.
Different types of freight cars may be used by a train, such as:
Boxcar
Tank Car
Hopper Car
Covered Hopper Car
Centerbeam Car
Flatcar
Intermodal Well Car
Gondola Car
Autorack Car
As of April 2020, there were 1.6 million rail cars in North America.
Operations
Freight trains often operate between classification yards, which are hubs where incoming freight trains are received, and then broken up, with the cars then being assembled into new trains for other destinations. In contrast to this type of operation, which is known as wagonload (or carload) freight, there are also unit trains, which exclusively carry one type of cargo. They normally operate directly between origin and destination points, such as a coal mine and a power plant, without any changes to the makeup of the freight cars in between. This allows cargo to reach its destination faster, and increases utilization of freight cars, lowering operating costs.
Unlike passenger trains, freight trains often do not follow fixed schedules, but are run as needed. When sharing tracks with passenger trains, freight trains are scheduled to use lines during specific times to minimize their impact on passenger train operations, especially during the morning and evening rush hours.
See also
Intermodal freight transport where containerized cargo is changed between freight train, to truck, or ship
References
Bibliography
Rail freight transport
Trains | Freight train | [
"Technology"
] | 476 | [
"Trains",
"Transport systems"
] |
70,161,885 | https://en.wikipedia.org/wiki/2%2C4%2C6-Tri-tert-butylpyrimidine | 2,4,6-Tri-tert-butylpyrimidine is the organic compound with the formula HC(tBuC)2N2CtBu, where tBu = (CH3)3C. It is a substituted derivative of the heterocycle pyrimidine. Known also as TTBP, this compound is of interest as a base that is sufficiently bulky not to bind boron trifluoride but still able to bind protons. It is less expensive than the related bulky derivatives of pyridine such as 2,6-di-tert-butylpyridine, 2,4,6-tri-tert-butylpyridine, and 2,6-di-tert-butyl-4-methylpyridine.
References
Pyrimidines
Reagents for organic chemistry
Non-nucleophilic bases
Tert-butyl compounds | 2,4,6-Tri-tert-butylpyrimidine | [
"Chemistry"
] | 192 | [
"Non-nucleophilic bases",
"Bases (chemistry)",
"Reagents for organic chemistry"
] |
70,162,844 | https://en.wikipedia.org/wiki/Saltcellar%20with%20Portuguese%20Figures | The Saltcellar with Portuguese Figures is a salt cellar in carved ivory, made in the Kingdom of Benin in West Africa in the 16th century, for the European market. It is attributed to an unknown master or workshop who has been given the name Master of the Heraldic Ship by art historians. It depicts four Portuguese figures, two of higher class and two others who are possibly guards protecting them. In the 16th century, Portuguese visitors ordered ivory salt cellars and ivory spoons similar to this object. This Afro-Portuguese ivory salt cellar was carved in the style of a Benin court ivory, comparable to the famous Benin bronzes and Benin ivory masks.
These kinds of ivory artworks were commissioned and exported initially from Sierra Leone and later from Benin City, Nigeria. During the Age of Exploration, European powers expanded their trade and their efforts towards establishing trading posts in the New World, Africa, the Middle East and Asia. Portuguese sailors disembarked from their caravels to buy goods for trading, such as ivory and gold. These goods were taken from local markets to colonial outposts, then to Portugal, and then traded within European markets. During the 16th and 17th centuries, countries that participated in colonialism reaped the economic benefits of this international trade.
The salt cellar was probably carved for a Portuguese nobleman to put it on his dining table. It is one of four almost identical pieces, probably made as a set. The other three are now in European museums. Ivory salt cellars and ivory spoons like the Sapi-Portuguese Ivory Spoon, also in the Metropolitan Museum of Art, were common pieces of art that Portuguese sailors brought back from West African countries. There are no records of the order for this commission but it is believed that a Benin Ivory carver produced this in the Benin Kingdom, in modern day Nigeria.
Description
The figures, in high relief, form a circle around the shaft of the elephant tusk, supporting the bowl at the top used to hold the salt. The amount and type of decoration indicate that this piece was created in a Benin court. Two of the four male figures are clearly of a higher rank, probably from a higher class. They are more elaborately carved and shown frontally, while the other two have less ornament and are shown in profile. The men on the front and on the back are dressed in elaborate clothes with a cross necklace, showing that they are European Christians. In addition, they are wearing hats and holding spears in their left hands.
The style used to carve the ivory piece may be intended to be somewhat grotesque. In Afro-Portuguese ivories there are three African elements that are fundamental to calling a piece African art: a focus on the human figure, an enunciation of the parts, and a preference for pure geometric forms. Individuals are presented as the main subject in African art, usually depicting an important figure such as royalty or a deity; this is shown in the ivory salt cellar and other Benin bronzes. The face of each man, with its long beard and deep-set eyes, is larger in relation to the body, while the proportions are kept in check. The geometric patterning of the men's clothing and the sockets of the spears are further examples where this geometry is repeated.
Background
The kingdom of Benin existed in the southwestern region of Nigeria, in the modern Edo State. According to scholars, the kingdom of Benin (also known as the Edo Kingdom or the Benin Empire) originated around the year 900 under the Ogiso kings; between the eleventh and thirteenth centuries a member of the Oba dynasty took control of the state. This dynasty ruled until the British occupied the kingdom of Benin on February 9, 1897. The kingdom reached its peak during the rule of Ewuare the Great, who ruled from 1440 to 1473. King Ewuare expanded its borders and introduced wood and ivory carving to the kingdom. One of the first recorded visits to Benin City was made by the Portuguese explorer João Afonso de Aveiro in 1486. After contact with the Portuguese, the Benin Kingdom established a strong mercantile relationship with Portugal and later with other European states. They traded slaves and Beninese products such as ivory, pepper, gold and palm oil for European goods such as manillas, metals and guns. In addition, the two states established diplomatic relations in the late 15th century: the Oba sent an ambassador to Lisbon, and the king of Portugal sent Christian missionaries to Benin City in 1486.
References
Ivory works of art
Benin art
Salts | Saltcellar with Portuguese Figures | [
"Chemistry"
] | 889 | [
"Salts"
] |
70,163,331 | https://en.wikipedia.org/wiki/Hydraulic%20modular%20trailer | A hydraulic modular trailer (HMT) is a special platform trailer unit that features swing axles, hydraulic suspension, independently steerable axles, and two or more axle rows; two or more units can be joined longitudinally and laterally, and a power pack unit (PPU) is used to steer the axles and adjust the platform height. These trailer units are used to transport oversized loads that are difficult to disassemble and are overweight. The trailers are manufactured from high-tensile steel, which makes it possible to bear the weight of the load with the help of one or more ballast tractors that push and pull the units via a drawbar or gooseneck; this combination of tractor and trailer is also termed a heavy hauler.
Typical loads include oil rig modules, bridge sections, buildings, ship sections, and industrial machinery such as generators and turbines; many militaries also use HMTs for tank transportation. There is a limited number of manufacturers who produce these heavy-duty trailers, because oversized loads account for only a thin share of the overall transportation industry. There are also self-propelled units, called SPMTs, which are used when ballast tractors cannot be applied due to space constraints.
History
In 1957 the first ever hydraulic modular trailers were made by Willy Scheuerle, a Germany-based trailer specialist; these were four-axle, 32-wheeled modules built for Robert Wynn and Sons Ltd, a Shaftesbury-based, Guinness Book of Records-winning heavy haulage company. Wynns were also the first to use pneumatic tires for loads weighing more than 100 tons and the first to use hydraulic suspension trailers, which were manufactured by Cranes Trailers Limited of Dereham.
In 1962 Cranes Trailers Limited developed two four-axle, 32-wheel modules for Pickfords, a London-based heavy haulage company, with a combined payload capacity of 160 tons on a total of eight axles and 64 wheels. The modules incorporated hydraulic suspension, with each axle interlinked by a mechanical steering system, and the operational height varied from 2.9 to 3.11 ft. The modules had drawbar couplings that could be attached at either end, or at both ends for a push-pull combination.
In 1963 Goldhofer developed modular trailers in Europe for heavy haulers. In the same year, Cometto developed a 300-ton-capacity module in a 14-axle, seven-row configuration. Scheuerle also demonstrated its modules at events in 1967, and later King Truck Equipment Ltd signed an agreement with Scheuerle which gave it exclusive rights to manufacture Scheuerle trailers in the UK.
In 1971, King Truck Equipment Ltd demonstrated two units that were custom-built for Pickfords. A single unit was able to carry 150 tons on six axle rows and 48 wheels in total. Pickfords used them mostly with their Scammell ballast tractors via a drawbar coupling. These trailers had independent suspension and steering abilities via a Petter twin-cylinder diesel engine used as a PPU. In the 1970s, many manufacturers started to develop HMTs, as the industry believed that conventional low loaders had various limitations. To comply with new regulations and with safety in mind, the industry knew that it needed more axles to distribute the payload, and the ultimate solution for this demand was the HMT. Manufacturers opted for hydraulic suspension instead of mechanical leaf springs and air suspension due to its compact size and adjustable characteristics. Manufacturers chose high-tensile steel instead of aluminium because, when it comes to HMTs and oversized loads, minimizing the weight of the HMT is not a priority, since the trailers are rated by their own payload capacity excluding the ballast tractor. The only weak point of an HMT was its tires, which remain a significant weakness today; this is why SPMTs have solid tires. HMTs operate at a higher speed than SPMTs, which is why solid tires are not an option for HMTs.
Specifications
The number of axles on an HMT is not fixed; two-, three-, four-, five-, six-, and eight-axle units are manufactured. Multiple units can be coupled longitudinally and laterally to transport a heavier load; each axle has a lifting capacity ranging from 18 tons to 45 tons, with a steering angle of 50 to 60 degrees. Some combinations require a trailer operator who controls the steering and height adjustments of the trailer via a controller, which is modular and can be mounted at the front end or rear end of the trailer. Large combinations may also have a cabin for the operator, while typical combinations have a seat attached to the controller.
Hydraulic cylinders are used for the steering and suspension of the trailer. Each axle has an individual suspension cylinder and a steering rod connected to the main steering cylinder at the front end of the trailer, which makes all the axles steer at once in the same direction. One axle row consists of two turntables, two knees, two suspension cylinders, and four to eight wheels attached to a high-strength metal platform. The steering and suspension cylinders are operated by hydraulic fluid supplied through hoses from the hydraulic tank, which is located near the PPU. The PPU, which powers the steering and suspension by driving the to-and-fro flow of hydraulic fluid from the tank to the suspension and steering cylinders, puts out about 18 to 25 hp and is available in both diesel and petrol variants manufactured by brands such as Kohler, Yanmar and Hatz.
Multiple HMT units can be interconnected longitudinally by pins and interconnecting couplings mounted at the centre of the chassis at the front and rear; to interconnect them laterally, they are bolted together along the side walls of the chassis. HMTs cannot move by themselves, so there are two ways in which an HMT can be coupled to a tractor unit that pushes and pulls the trailer: the gooseneck and the drawbar.
The gooseneck is the most common coupling used in the industry. A swan-shaped coupling is attached to the trailer and to the tractor via the trailer pin and the tractor's fifth wheel. This coupling can be hydraulically adjusted to suit the tractor's height, and the steering controls are connected to the coupling. Goosenecks are easy to use and allow conventional tractors to be used, but the coupling has two major drawbacks: it cannot be applied in a two-file, side-by-side HMT configuration, which limits the payload, and it cannot be applied in a push-and-pull configuration. Goosenecks are manufactured by the trailer manufacturers themselves.
The drawbar is the most efficient and economical coupling; it consists of an A-shaped frame with an I-shaped loop that is coupled to the trailer and connected to a ballast tractor via the tractor's towing hitch. This coupling is widely used in developing countries because of its economical cost. Unlike the gooseneck, it can be applied to side-by-side and push-and-pull configurations, but it cannot be connected to a typical tractor; it requires a ballast tractor, which has a ballast box instead of a fifth wheel and tow hitches at the rear and front. Drawbars and tow hitches are manufactured by companies such as Jost and Ringfeder.
Since 2005, HMTs in the United States have had extra features and design changes, including widening axles and a halfway folding system. Due to differing road regulations between states, almost all manufacturers have adopted the US design and developed a product for the US market. These HMTs are named dual lane trailers, a name that comes from the widening characteristic of the trailer. Dual lane trailers have the capability to change their width from to wide, making transport of empty trailers easier and complying with state regulations when required.
Accessories
Gooseneck
Draw bar
Drop Deck
Vessel Bridge
Intermediate spacer
Excavator deck
Extendable spacer
Turntables (bolster)
Blade Lifter
Tower adapter
Girder frame
Trailer power assist
Manufacturers
Goldhofer
Scheuerle
Nicolas
Kamag
Tiiger
Tratec
Faymonville
Cometto
Drake Trailers
Kennedy Trailers
Trabosa
Taskers of Andover
DOLL Fahrzeugbau
TRT Australia
Broshuis
Tuff Trailers
Colombo
Capperi
Modern Transport Engineers
King Trailers Limited
Leonardo DRS
BEML
Operators
ALE
Sarens
Mammoet
Lampson International
Pickfords
Alstom
CLP Group
Omega morgan
United States Army
Indian Army
British Army
Republic of Korea Armed Forces
French Army
Italian Army
Turkish Armed Forces
Gallery
See also
Heavy hauler
Tractor unit
Ringfeder
Ballast tractor
Transporter Industry International
Faymonville Group
References
Trailers
Heavy haulage
Engineering vehicles
Modularity
Machines | Hydraulic modular trailer | [
"Physics",
"Technology",
"Engineering"
] | 1,712 | [
"Physical systems",
"Engineering vehicles",
"Machines",
"Mechanical engineering"
] |
70,163,790 | https://en.wikipedia.org/wiki/Mercedes%20V6%20hybrid%20Formula%20One%20power%20unit | The Mercedes V6 hybrid Formula One power unit is a series of 1.6-litre, hybrid turbocharged V6 racing engines which features both a kinetic energy recovery system (MGU-K) and a heat energy recovery system (MGU-H), developed and produced by Mercedes-AMG High Performance Powertrains for use in Formula One. The engines have been in use by the Mercedes works team since the 2014 season. Over years of development, engine power was increased from at 15,000 rpm, to at 15,000 rpm. Customer team engines were used by Williams, McLaren, Lotus, Manor Racing, Force India, Racing Point Force India, Racing Point and Aston Martin. Their most recent championship victories are in (Drivers') and (Constructors').
Enduring a successful run since the 2014 season, the Mercedes V6 hybrid engine has become one of the most successful Formula One engines of all time. It broke the record for most wins in a season in 2016 (a record since surpassed by Honda with Red Bull Racing in 2023) and set many other major constructor and driver F1 records. Notably, Lewis Hamilton won a record-breaking six drivers' championships and the Mercedes factory team won a record-breaking eight consecutive constructors' championships powered by Mercedes V6 hybrid engines.
List of Formula One engines
Statistics
The Formula One regulations in saw Mercedes produce a unique hybrid 1.6-liter turbocharged V6 engine that could produce a significant amount of power with lower fuel consumption than the Ferrari and Renault engines. It also featured the kinetic energy recovery system (MGU-K) and heat energy recovery system (MGU-H). The engine soon proved to have a clear advantage over other engines, as cars powered by the Mercedes engine scored the majority of points during the 2014 season. Since the introduction of this engine formula, Mercedes-powered cars have scored pole position in 133 and won 124 of 228 races (as of the 2024 Abu Dhabi Grand Prix), and won 7 drivers' championships and 9 constructors' championships.
Season statistics for Mercedes engines
Other applications
Mercedes-AMG One
The Mercedes-AMG One production hypercar features a powertrain similar to modern Formula One cars. The production version of the car features a modified version of the Mercedes-Benz PU106B Hybrid E-turbo V6 engine, used in the Mercedes F1 W06 Hybrid Formula One car. Modifications were done to the engine, resulting in a reduction in idle rpm and redline rpm among many other changes to make it road-legal. The modified internal combustion engine (ICE) produces a maximum power output of . Torque figures were unmeasurable due to the complex powertrain.
The internal combustion engine works in conjunction with four electric motors: an MGU-K coupled to the crankshaft, an MGU-H coupled to the turbocharger, and two electric motors in the front axle producing . The One has a total combined power output of . The MGU-K and MGU-H are similar to those used in Formula One cars and are responsible for recovering energy and improving efficiency during the operation of the car. More specifically, the MGU-K serves to generate electricity during braking, while the MGU-H serves to eliminate turbo lag and improve throttle response by keeping the turbine spinning at low engine speeds. Two electric motors drive the front wheels, creating an all-wheel-drive drivetrain; together, these four electric motors contribute of effective power to the total power output figure of the AMG One.
The head of Mercedes-AMG, Tobias Moers, claimed that the engine idles at 1,280 rpm and 11,000 rpm at its redline limit. However, the engine will only last for and the owners would have to return their cars for an engine refurbishment costing 850,000 euros. This Formula One inspired powertrain helps the car attain a top speed of . According to Mercedes-AMG, the car can accelerate from 0 to in 2.9 seconds, 0 to in 7.0 seconds and 0 to in 15.6 seconds.
References
Mercedes-Benz engines
Formula One engines
Engines by model
Gasoline engines by model
V6 engines | Mercedes V6 hybrid Formula One power unit | [
"Technology"
] | 845 | [
"Engines",
"Engines by model"
] |
70,164,151 | https://en.wikipedia.org/wiki/Bahram%20Kalhornia | Bahram Kalhornia (born 1952) is an Iranian drawing artist, sculptor, graphic designer, painter, researcher, and educator in the field of art.
Origin and Early Life
Kalhornia was born into a prominent family with a cultural background from the Kurdish Kalhor tribe in Kermanshah on March 30, 1952. Throughout his childhood and youth, living in the city and the countryside allowed him to experience the vibrant nature of the western region of Iran, and become familiar with and benefit from the world of plants and animals. Additionally, his exposure to the stories, beliefs, ethnic experiences, work methods, and lifestyles of the people, as well as hearing epic tales from Ferdowsi's Shahnameh and Kurdish Shahnameh, and blending these concepts with the lifestyle and aspirations of the tribal community, left a profound and lasting impact on him, greatly influencing his life path and the works he created. During his school days at Dr. Abdolhamid Zangeneh High School in Kermanshah, the presence of influential and humanistic teachers such as Saeednia, Etekal, Toulaii, Nazempour, Katanchi, Sadri, and others expanded his thinking, and attention to the value of “questioning”. In his childhood and adolescence, due to the cultural atmosphere of his family, guidance, and support from some relatives, the enthusiasm for reading took root in him, and it was during this period that he eagerly read various books in the fields of sciences, sociology, history, and especially classical literature with great interest.
Education
In 1972, by entering the University of Decorative Arts in Tehran, Kalhornia became acquainted with a variety of artistic practices and various fields of visual work. The formation of new social relationships between him and the professors and fellow students created years full of intellectual motivations for him. His years at the university provided a good opportunity for practice, work, effort, and discovery. During these years, his presence beside his professors such as Shiddel, Lilit Teryan, Hossein Kazemi, Mohammad Ebrahim Jafari, Gholamhossein Nami, Mirfenderski, Anvari, and Ahrari contributed to the flourishing of his existential values. Among them, Shiddel and Teryan, in closer relationships, influenced his emotional-intellectual life. More than two years of companionship and collaboration with Professor Shiddel during the execution of a large mural painting at the Agricultural Palace in Tehran led to development of his intellectual and practical abilities in artistic activities. During his years at the University of Decorative Arts, he carried out pioneering and cultural activities due to social motivations in cooperation with some professors and fellow students. Initiating the first student magazine, establishing a library, forming a theater group under the guidance of Professor Abolhasan Vandehvar, organizing poetry and literature nights, lectures, and dozens of other efforts all contributed to the creation of a comprehensive and dynamic cultural perspective, and the results of these capabilities proved beneficial in the subsequent years in the course of his educational experiences as an art instructor.
Activities and Profession
After completing his military service, Kalhornia collaborated with the University of Art for one year. Returning to Kermanshah during the Iran-Iraq war, with the intention of working, being active, and serving in his hometown, provided an opportunity to establish the first School of Visual Arts for Boys in Kermanshah and collaborate with art education centers and other organizations. This period also contributed to the formation of the foundations of his thoughts regarding the remnants of the traditional way of life and the mythological thinking of the Kurdish people in the region, which over time led to a more comprehensive understanding of national foundational myths.
The Kermanshah Visual Art School, which he established together with his wife Forough Dorafshan and Morteza Sharifi, created a chance for talented young people to find new opportunities for growth and development. From this art school emerged some valuable figures in Iran’s contemporary world of art and contemplation, such as Fereidoun Biglari (archaeologist), Ardeshir Pazhouheshi, Masoud Hemmati, Mohammad Reza Naderi, Kamran Sharifi, and others. After returning to Tehran, his collaboration with the Faculty of Art and Architecture of the Central Tehran Branch of the Islamic Azad University began in 1990. His long teaching career continued until 2022. During these years, many individuals in the fields of intellectual, scientific, and artistic activity flourished under his influence. The Department of Visual Design, under his management from 1996, had a brilliant and successful educational period.
Teaching various art disciplines at the Faculty of Cinema and Broadcasting and other educational centers, collaborating with the commission for authoring art education books of the University of Applied Science and Technology, and collaborating with governmental and non-governmental organizations, both nationally and internationally, as a consultant and advisor in research, art, and culture are among his activities.
His teaching method arises from the interweaving of artistic, spiritual, psychological, and philosophical values. He draws inspiration from ancient Iranian myths in creating his works, and the concept of good and evil in them is reflected extensively. Some critics consider him the William Blake of Iran, and he is also remembered as one of the prominent representatives of Iranian painting-poetry.
Throughout the years, due to his intellectual position and diverse thinking, he presented numerous lectures on various concepts in various centers.
Planning, consulting, and executing cultural projects, and participating in many art festivals as a member of the policy-making council, director, or judge, are among his activities. Various artistic-cultural commissions, especially in the Ministry of Culture and Islamic Guidance, IRIB, Ministry of Petroleum, Ministry of Energy, Khuzestan Sugar Cane Board, municipalities, Jahad University, various museums, and also civil institutions such as UNESCO and UNICEF, have benefited from his skills and way of thinking in multiple periods. Serving as an art director, both in economic projects and cultural institutions or with publishers, has been part of his efforts.
Alongside these activities, participation in group art exhibitions and holding several solo exhibitions are also notable in his portfolio.
Due to his analytical-critical thinking, extensive engagement in visual projects and educational studies, and dissemination of certain perspectives, Kalhornia has become known as an independent figure and a thinker. Over the years, various national organizations and institutions have held ceremonies to honor him. These include the Niavaran Cultural Center, Administration of Culture and Islamic Guidance of Kermanshah, the award of the Sarv-e Mordad medal by the Graphic Designers Association of Kermanshah in recognition of his founding of graphic education in Kermanshah (2011), the launch of the Cultural House in Sarpol-e Zahab in his name, and others.
Since the formation of the Iranian Graphic Designers Society (IGDS), Bahram Kalhornia, along with other graphic pioneers, has collaborated as a member and in some periods as a member of the board of directors with the association and has had an influential role in shaping its legal framework and drafting executive regulations. Since 2011, he has also been an honorary member of the Graphic Designers Association of Kermanshah.
During a period of activities at IRIB, while providing technical consultations to improve production quality in Channel 4, he produced programs such as "Simaye 4" and "Char Khand-e Kherad" in 2007-2008 with a cultural and social focus, presenting analytical and critical analyses.
In 2019, Bahram Kalhornia established "Vard Gallery," a cultural center to fulfill his artistic-cultural desires and aspirations, enabling him to continue his activities in a new role. This led to the organization of numerous exhibitions and numerous critical analysis and discussion sessions on artistic concepts. These days, alongside cultural-artistic-research activities, he is actively involved as the chairman of the board of directors of the Iranian Graphic Designers Society (IGDS) and a member of the Supreme Council of the Iranian Artists Forum in creating suitable conditions for the activities of artists and cultural figures.
In an interview about his artworks in the "Sahm-e Shak (Share of Doubt)” solo exhibition, he said: "In my works, I have placed mirrors in front of my viewers so that they may discover other aspects of their existence. If we set aside physical appearance, there is something else within us, within our beings. Inside me, riders and warriors have set an ambush. Inside another person, bandits lie in wait, and inside yet another, a mythical bird sings. Life has blocked the expression of these creatures. We do not see our hidden selves, and we have closed our ears to our storyteller's voice. I have awakened these things and shown the viewer that they also reside within. When several people like one drawing, I realize what interesting commonalities exist among them. I realize that the hidden entity and the archetypal existence can sensitize people to these paintings and uplift them. This is important to me.
The reality of the matter is that these things have not suddenly appeared; they have existed before me. The world is an eternal phenomenon, and I only have a short opportunity to experience a small section of existence. The eternal entity of existence flows within me. I live my life differently, whether in the awe of my cosmic existence or the awe of my personal life. I have read well, lived well, utilized the resources of the world well, and been enriched by the blessings of the universe. I believe that every person has multiple personas within themselves; beings that lie beneath their skin and are destined to awaken and take charge at any moment. If anyone observes their behavior from dawn to dusk, they can understand the changes within themselves. A person is constantly changing, and this change is connected to the breath of the entities within us.
In the thoughts of the wise scholars of Iranian contemplation, the diversity, the thousand faces, and the Simurgh-like nature of individuals are always discussed. I take this very seriously and believe that the history of each individual is embedded with thousands of people. Sometimes we do not perceive it, and sometimes it is close to us. Due to my way of life, I have become close to this world. Not only have I become close to it, but in returning to within, I have played, been childish, mischievous, and lazy. I have stood alongside heroes, criminals, outlaws, and... I have wandered in this world, slept, woken up, and fallen in love. I do not want to believe, but I want to accept that the destiny that humanity has found and has turned into a superficial and low-quality consumer is the authentic destiny of humanity. I believe that the existential roots of humans are much richer and stronger than what they, themselves think. We have a historical entity, and in this historical entity, the wonders of creation are found, and these wonders are other faces of our existence that have come in this form."
Hossein Norouzi, an art critic and painter, wrote about Kalhornia's works in the solo exhibition "Sahm-e Shak (Share of Doubt)”: "Personalities and special talents and creativity of some individuals seem to be overlooked among many, often remaining unseen or barely noticed. I'm referring to someone like Bahram Kalhornia. A man whose cloak of many knowledge and arts are fitting and dignified on him, and no cloak of Iranian knowledge and culture and art does not suit him.
A knowledgeable man and in the old sense, wise, artist, orator, teacher, and capable presenter in grand gatherings and mass media, possessing an eloquent expression, his words are like obedient servants of his sweet discourse and processing mind, and the owner of ideas and thoughts in the field of major management and influential contemporary art in Iran.
... But this time Bahram Kalhornia appears on the stage of his life in another garment and in a surprising guise; Kalhornia, a worthy and capable drawing artist. With magnificent works in dimensions not so common and ordinary. This time, our contemporary master sculptor exhibits a collection of unconventional drawings that neither I nor many others have seen and were unaware of their existence, at A Gallery in Tehran. The power of hand in drawing engages the viewer with extremely influential and stimulating compositions, and themes that attempt to convey a new concept among ancient myths and powerful contemporary figures in an artistically expressive manner.
Kalhornia is a sharp and highly multifaceted diamond. His personality and artistic character cannot be recognized all at once. An artist intensely layered like his Iranian predecessors and a man who is restless and unsettled. Although he shows us the passage of time with his white beard, his creative force, enthusiasm, and hope are equal to the younger generation. With his exemplary humility, calmness, and teaching manner, he brings attention to the shortcomings in the field of contemporary visual arts in Iran.
Drawing! In his works, he wholeheartedly reminds us of the solitude of graphite, the seclusion of blacks and grays, and the overlooked value of drawing as an Intersection and fundamental element of many visual arts."
Activities and roles of Bahram Kalhornia
Membership in the boards of various sessions of the Iranian Graphic Designers Society (IGDS)
Membership in the Supreme Council of the Iranian Artists Forum
Membership in the commission for sending artists to the Cité Internationale des Arts in Paris since 2012
Consulting and collaboration with UNICEF and UNESCO organizations
Membership in the supervisory council for the reconstruction of the Tehran Museum of Contemporary Art
High-level consultant for the Cinema Museum of Iran
Consultant of Iran Photo Museum
Membership in the Visual Arts Committee of the Iranian Artists Forum
Membership in the committee for determining the copyright of artists
Consulting and collaboration with the Children's Book Council
Consulting and collaboration with the World Children's Institute
Collaboration with various publishers as an art director
Art director of the "Exploration" journal of the Ministry of Petroleum
Membership in the specialized committee for the compilation of graphic design textbooks at the University of Applied Science and Technology
Membership in the color policy commission in Tehran's urban beautification department
General secretary of the "Blue Sky" Poster and Caricature Festival in Tehran in 1997
Jury member of the First "Ordibehesht" Research Awards in 2007
Advisor and member of the Commission for the Establishment of the Museum of the Islamic Consultative Assembly, 2007
Secretary of the Committee for Strategic Studies in Art Affairs of Tehran Municipality, 2008
Judge of the Eighth Annual Picture Festival, 2010
Secretary-General of the Second Festival of Urban Postcards of Tehran Municipality, 2010
Member of the Special Council for the Examination of Artistic Works Registration at the Ministry of Guidance, 2011-2013
Art Expert of the International Festival of Visual Arts for Children and Youth, 2012
Membership in the policymaking council of national and international biennials and numerous festivals such as Tehran International Graphic Design Biennale, Fajr visual festivals, Shamseh festivals, international and national youth visual arts festivals
Secretary-General, secretaryship, and judge for many visual festivals such as the Fajr Visual Festival, National Advertising Festivals, and National Television Festivals.
Art adviser for governmental organizations such as the Ministry of Petroleum, National Museum of Iran, Ministry of Energy, Ministry of Culture and Islamic Guidance, Ministry of Education, and various banks
Organizing art workshops
Holding analytical-scientific lectures and seminars
Graphic designer and art director for private-sector organizations
Cooperation in planning and education with different cultural-educational institutions in the private sector
Designing and constructing Mina statue for the first International Puppet Festival in Iran
Designing and constructing the first and second statue for the Sima Festival
Art adviser and graphic designer for the Sima Festival
Consultant and expert of the Deputy of Virtual Media of Sima
Publications
Kalhornia, B., 2006. "Mehrnegarineh", an album of calligraphy works by Mousavi Jazayeri in Kufic script.
Kalhornia, B., 2010. "Neshaneh be Sar: Mythical Bird of Graphic Design", the bird that gives identity wherever it flies, adds meaning. Sign sometimes adds a thousand sermons: Selected works of graphic designers from Kermanshah, Ferdabeh.
Mousavi Jazayeri, S.M.V., Ghlichkhani, H., Kalhornia, B., 2013. "From Kufic Inscriptions to Contemporary Typography, Steps Toward a Comprehensive and Deserving Persian Font."
Kalhornia, B., 2014. "Full Moon Mother, Full Sun Father", on the occasion of Children's Week, National Museum of Iran, Tehran.
Kalhornia, B., 2017. "Share of Doubt", Selected Works, A Gallery, Tehran.
Zadeyeh Khorshid, Scientific Editor
Art-Science Book (from UNESCO Publications), Art Editor
Kalhornia, B., 2012. "Baniasadi’s Birth Place and Background, Mohammed Ali Baniasadi: History and Innovation", Bookbird: A Journal of International Children's Literature
Articles and interviews published in various newspapers and magazines such as Jahate Etela (Specialized Journal of the Iranian Graphic Designers Society (IGDS)), Tandis, Paayaab, Donya-ye Sokhan, Adineh, Shargh, Ettela'at, Iran, Entekhab, Tamashegaraan, Manzar, Mash’al (Ministry of Petroleum Magazine) ...
Curating and Organizing Group Exhibitions
2020 - Group graphic exhibition "Woman+Man", Vard Gallery, Tehran
2020 - Group illustration exhibition "From Devil to Angel to Good to Evil to Black to White to Color In the Twilight of Imagination and Myth and Painting", Vard Gallery, Tehran
2020 - Group calligraphy exhibition "Three Thousand Times Writing", Vard Gallery, Tehran
2020 - Group illustration exhibition "Guardian of the City of Shahrazad", Vard Gallery, Tehran
2022 - Group drawing exhibition "His Highness Mr. Huntsman", Vard Gallery, Tehran
2022 - Group handprint exhibition "Bargardaan", Vard Gallery, Tehran
Solo exhibition
2005 - "Stone Song", Drawing Exhibition, Zangar Gallery, Tehran
2016 - "Share of Doubt", Drawing Exhibition, A Gallery, Tehran
Group exhibitions
1975 - Student Photography Group Exhibition, University of Decorative Arts, Tehran
? - Individual and Group Design Exhibition, Punch Gallery, Kermanshah
? - International Group Drawing Exhibition, Museum of Contemporary Art, Tehran
? - International Drawing Biennial, Museum of Contemporary Art, Tehran
? - Graphic Group Exhibition, Imam Ali Museum, Tehran
? - Graphic Group Exhibition, Iranian Artists Forum, Tehran
? - Image of the Year Group Exhibition, Iranian Artists Forum, Tehran
2016 - "Poetics: Passage through Dreams", Laleh Gallery, Tehran
2018 - "Drawing as Living", Group Drawing Exhibition, House of Artists, Tehran
2019 - "Woman+Man", Group Graphic Exhibition, Vard Gallery, Tehran
2019 - " From Devil to Angel to Good to Evil to Black to White to Color In the twilight of Imagination and Myth and painting ", Group Illustration Exhibition, Vard Gallery, Tehran
References
21st-century Iranian painters
Contemporary painters
Iranian graphic designers
Iranian art critics
Living people
Iranian contemporary artists
Iranian art writers
21st-century Iranian people
1952 births
People from Kermanshah
Iranian biographers
Draughtsmen | Bahram Kalhornia | [
"Engineering"
] | 3,962 | [
"Design engineering",
"Draughtsmen"
] |
70,165,141 | https://en.wikipedia.org/wiki/Biothesiometry | Biothesiometry is a noninvasive medical test used to quantify the perception of vibration by measuring its threshold. It is used in neurology and electrophysiology to diagnose a number of conditions, like diabetic neuropathy and erectile dysfunction, where the vibration perception threshold (VPT) would be higher than average. The numerical nature of the test can help stage the progression of disease or complications.
The test is done through a biothesiometer, which is composed of a handheld probe wired to a display unit. Both digital and analog types are commercially available, giving the reading on either a dial or a screen.
In a systematic review of screening methods for pediatric diabetic peripheral neuropathies, biothesiometry and fine microfilaments were shown to be the only diagnostic methods with high sensitivity and specificity.
A systematic review showed that there is a strong correlation between HbA1c values and the vibration perception test, and that VPT could be a predictor of foot complications following diabetic peripheral neuropathy.
In a systematic review of modern devices available for the assessment and screening of peripheral neuropathy, digital devices were evaluated for measuring tactile sensation, vibration perception, thermal perception and foot skin temperature.
References
Neurology
Electrophysiology
Medical equipment
Medical tests | Biothesiometry | [
"Biology"
] | 274 | [
"Medical equipment",
"Medical technology"
] |
70,165,938 | https://en.wikipedia.org/wiki/TU%20Ursae%20Majoris | TU Ursae Majoris is a variable star in the northern circumpolar constellation of Ursa Major. It is classified as a Bailey-type 'ab' RR Lyrae variable with a period of 0.557648 days that ranges in brightness from apparent visual magnitude of 9.26 down to 10.24. The distance to this star is approximately 2,090 light years based on parallax measurements. It is located near the north galactic pole at a distance that indicates this is a member of the galactic halo.
The periodic variability of this star was discovered by P. Guthnick and R. Prager in 1929. Its relative brightness has made this star the subject of regular observation since its discovery, both photographically and then photoelectrically starting in 1957. It was initially classed as a Bailey-type "a" RR Lyrae variable. The variations were found to be somewhat similar to RR Lyrae, with the periodicity of TU UMa differing by less than 1% of a day. However, no evidence of a long-period modulation, known as the Blazhko effect, was found in this star.
In 1990, A. Saha and R. E. White found variations in radial velocity over time that suggested this is a binary system. However, confirmation of this proved difficult because of the distance and the pulsational behavior of the variable. The system shows significant evidence of proper motion acceleration from a binary interaction. Analysis of long-term oscillatory variations suggests an orbital period of 23.3 years and an eccentricity of 0.79, with the secondary having at least 33% of the mass of the Sun.
References
Further reading
RR Lyrae variables
A-type giants
F-type giants
Am stars
Ursa Major
BD+30 2162
056088
Ursae Majoris, TU | TU Ursae Majoris | [
"Astronomy"
] | 384 | [
"Ursa Major",
"Constellations"
] |
70,166,344 | https://en.wikipedia.org/wiki/Louise%20Gray%20Young | Louise Gray Young (October 4, 1935 - March 2, 2018) was an American astronomer and researcher who specialised in molecular spectroscopy. She is best known for her spectroscopic analysis of the planetary atmospheres of Earth, Venus and Mars.
Early life and education
Louise Dillon was born October 4, 1935, in Los Angeles, California, to Ruth Davis and Frank Dillon. She studied at the University of California, Los Angeles, graduating with bachelor's (1958) and master's (1959) degrees in engineering. She was awarded her Ph.D. in engineering science at the California Institute of Technology. Her thesis, on the emission and transfer of radiation in gases, was written under the direction of Stanford S. Penner.
Research and career
In 1965, Young started working on the engineering faculty at the University of California, Los Angeles. In 1967, she became a research associate in astronomy at the University of Texas at Austin. Young then went on to work at NASA's Jet Propulsion Laboratory until 1974, after which she became a research scientist at Texas A&M University.
In 1976, Young became a fellow of the Optical Society of America. She was also a member of the American Astronomical Society, International Astronomical Union, and American Meteorological Society. Between 1969 and 1977, Young was an Associate Editor of the Journal of Quantitative Spectroscopy and Radiative Transfer.
Selected publications
R Schorn; L Gray Young; E Barker. (May 1970). "High-dispersion spectroscopic observations of Venus". Icarus. 12(3). 391–401. doi: 10.1016/0019-1035(70)90007-2
L Gray Young. (August 1971). "Calculation of the partition function for 14N216O". Journal of Quantitative Spectroscopy and Radiative Transfer. 11(8). 1265–1270. doi:10.1016/0022-4073(71)90099-9
L Gray Young. (November 1970). "Effective Pressure for Line Formation in the Atmosphere of Venus". Icarus. 13(3). 449–458. doi:10.1016/0019-1035(70)90092-8
L Gray Young. (July 1971). "Interpretation of high resolution spectra of Mars—II calculations of CO2 abundance, rotational temperature and surface pressure". Journal of Quantitative Spectroscopy and Radiative Transfer. 11(7). doi:10.1016/0022-4073(71)90127-0
Personal life
Louise Gray Young was married to Andrew T. Young. She had two children, Gregory and Elizabeth. She died aged 82 in San Diego, California on March 2, 2018.
References
1935 births
2018 deaths
Women astronomers
People from Los Angeles
Scientists from California
University of California alumni
California Institute of Technology alumni
20th-century American astronomers
20th-century American engineers
20th-century American women engineers | Louise Gray Young | [
"Astronomy"
] | 591 | [
"Women astronomers",
"Astronomers"
] |
70,166,417 | https://en.wikipedia.org/wiki/OnePlus%2010%20Pro | The OnePlus 10 Pro 5G is a high-end Android-based smartphone manufactured by OnePlus, unveiled on January 11, 2022. Succeeding the OnePlus 9 Pro, the phone also features upgraded cameras developed with Hasselblad.
References
External links
OnePlus mobile phones
Phablets
Mobile phones introduced in 2022
Mobile phones with multiple rear cameras
Mobile phones with 8K video recording | OnePlus 10 Pro | [
"Technology"
] | 84 | [
"Crossover devices",
"Phablets"
] |
70,168,481 | https://en.wikipedia.org/wiki/Commission%20de%20r%C3%A9gulation%20de%20l%27%C3%A9nergie | The Commission de régulation de l'énergie (CRE, or French Energy Regulatory Commission under its official English title) is an independent body that regulates the French electricity and gas markets. It is a member of the European Union organisation ACER and of the all-European CEER (Council of European Energy Regulators).
History
The commission, originally named the "Commission de régulation de l'électricité" (Electricity Regulatory Commission), was established by the law of February 10, 2000, on the modernization and development of the public electricity service, and by the law of January 3, 2003, concerning the gas and electricity markets and the public energy service. These laws transposed into French legislation the European directives of December 19, 1996, and June 22, 1998. The second law opened the gas market and extended to this sector the powers that the CRE already had over the electricity market.
These directives, making up the "energy package," organize the liberalization of the energy market at the European Community level by ensuring:
the free choice of supplier for consumers;
the freedom of establishment for producers;
and the right of access to the distribution and transport networks under objective, transparent, and non-discriminatory conditions for all users.
To ensure transparency and non-discrimination in access to public electricity networks, the commission decided on April 7, 2004, to set up a technical reference framework for the managers of public electricity networks.
Functions
According to the law of December 7, 2006, "the Energy Regulatory Commission contributes, for the benefit of final consumers, to the proper functioning of the electricity and natural gas markets. It ensures, in particular, that the conditions of access to electricity and natural gas transport and distribution networks do not hinder the development of competition. It monitors, for electricity and natural gas, the transactions carried out between suppliers, traders, and producers, the transactions carried out on organized markets, as well as exchanges across borders. It ensures the consistency of the offers from suppliers, traders, and producers with their economic and technical constraints."
Composition
At its inception, the CRE was composed of six members appointed for a non-renewable six-year term: three, including the president, appointed by decree—therefore by the government—and the other three appointed respectively by the president of the Senate, the president of the National Assembly, and the president of the Economic, Social, and Environmental Council. The law of January 3, 2003, increased this number to seven: two members, including the president, appointed by decree, two others appointed by the president of the National Assembly, two by the president of the Senate, and the last by the president of the Economic and Social Council.
The law of December 7, 2006, altered the board of commissioners by appointing two vice-presidents from among the commissioners designated by the presidents of the National Assembly and the Senate, and by adding two new commissioners representing consumers, appointed by decree, which then brought the number of members to nine.
The new board of the Energy Regulatory Commission (CRE), established by the law on the new organization of the electricity market (NOME law), now consists of five members, one president, and four commissioners who serve full-time:
The president of the CRE is appointed for a six-year term by decree of the President of the Republic after consultation with the Parliamentary committees competent in energy matters (the Economic Affairs Committee of the National Assembly and the Committee on the Economy, Sustainable Development, and Spatial Planning of the Senate);
One commissioner is appointed by the president of the Senate for four years;
One commissioner is appointed by the president of the National Assembly for four years;
Two commissioners are appointed by decree after consultation with the Parliamentary committees competent in energy matters for two years.
Thus, Emmanuelle Wargon was appointed president of the CRE by Emmanuel Macron on August 16, 2022, for a six-year term. She is accompanied on the board by four commissioners:
Anthony Cellier appointed on October 24, 2022, at the suggestion of the president of the National Assembly;
Ivan Faucheux appointed on August 5, 2019, by decree of the President of the Republic;
Valérie Plagnol appointed on November 2, 2021, at the suggestion of the president of the Senate;
Lova Rinel Rajaoarinela appointed on July 26, 2023, at the suggestion of the minister in charge of overseas territories;
The presidents of the CRE since its creation are:
2000-2006: Jean Syrota;
2006-2017: Philippe de Ladoucette, formerly CEO of Charbonnages de France (appointed in 2006 and reappointed in 2011);
2017-2022: Jean-François Carenco. He left his position in July 2022, after joining the government.
Since August 2022: Emmanuelle Wargon.
References
Electric power in France
Energy regulatory authorities
Organizations established in 2000
Energy in France
Energy organizations | Commission de régulation de l'énergie | [
"Engineering"
] | 987 | [
"Energy organizations"
] |
70,168,502 | https://en.wikipedia.org/wiki/Hart%20circle | In geometry, the Hart circle is derived from three given circles that cross pairwise to form eight circular triangles. For any one of these eight triangles, and its three neighboring triangles, there exists a Hart circle, tangent to the inscribed circles of these four circular triangles. Thus, the three given circles have eight Hart circles associated with them. The Hart circles are named after their discoverer, Andrew Searle Hart. They can be seen as analogous to the nine-point circle of straight-sided triangles.
References
External links
History of the Nine-Point Circle, Cambridge University
Discussion of Hart Circle in context of Feuerbach's theorem
On Centers and Central Lines of Triangles in the Elliptic Plane
CRC Concise Encyclopedia of Mathematics by Eric W. Weisstein
Geometry
Triangles
Circles
Triangle geometry
Polygons
Eponymous geometric shapes | Hart circle | [
"Mathematics"
] | 162 | [
"Circles",
"Pi",
"Geometry"
] |
70,169,832 | https://en.wikipedia.org/wiki/Hi-C%20%28genomic%20analysis%20technique%29 | Hi-C is a high-throughput genomic and epigenomic technique to capture chromatin conformation (3C). In general, Hi-C is considered as a derivative of a series of chromosome conformation capture technologies, including but not limited to 3C (chromosome conformation capture), 4C (chromosome conformation capture-on-chip/circular chromosome conformation capture), and 5C (chromosome conformation capture carbon copy). Hi-C comprehensively detects genome-wide chromatin interactions in the cell nucleus by combining 3C and next-generation sequencing (NGS) approaches and has been considered as a qualitative leap in C-technology (chromosome conformation capture-based technologies) development and the beginning of 3D genomics.
Similar to the classic 3C technique, Hi-C measures the frequency (as an average over a cell population) at which two DNA fragments physically associate in 3D space, linking chromosomal structure directly to the genomic sequence. The general procedure of Hi-C involves first crosslinking chromatin material using formaldehyde. Then, the chromatin is solubilized and fragmented, and interacting loci are re-ligated together to create a genomic library of chimeric DNA molecules. The relative abundance of these chimeras, or ligation products, is correlated to the probability that the respective chromatin fragments interact in 3D space across the cell population. While 3C focuses on the analysis of a set of predetermined genomic loci to offer “one-versus-some” investigations of the conformation of the chromosome regions of interest, Hi-C enables “all-versus-all” interaction profiling by labeling all fragmented chromatin with a biotinylated nucleotide before ligation. As a result, biotin-marked ligation junctions can be purified more efficiently by streptavidin-coated magnetic beads, and chromatin interaction data can be obtained by direct sequencing of the Hi-C library.
Analyses of Hi-C data not only reveal the overall genomic structure of mammalian chromosomes, but also offer insights into the biophysical properties of chromatin as well as more specific, long-range contacts between distant genomic elements (e.g. between genes and regulatory elements), including how these change over time in response to stimuli. In recent years, Hi-C has found its application in a wide variety of biological fields, including cell growth and division, transcription regulation, fate determination, development, autoimmune disease, and genome evolution. By combining Hi-C data with other datasets such as genome-wide maps of chromatin modifications and gene expression profiles, the functional roles of chromatin conformation in genome regulation and stability can also be delineated.
History
At its inception, Hi-C was a low-resolution, high-noise technology that was only capable of describing chromatin interaction regions within a bin size of 1 million base pairs (Mb). The Hi-C library also required several days to construct, and the datasets themselves were low in both output and reproducibility. Nevertheless, Hi-C data offered new insights for chromatin conformation as well as nuclear and genomic architectures, and these prospects motivated scientists to put efforts to modify the technique over the past decade.
Between 2012 and 2015, several modifications to the Hi-C protocol took place, using 4-cutter restriction digestion or deeper sequencing depth to obtain higher resolution. The use of restriction endonucleases that cut more frequently, or DNaseI and micrococcal nucleases, also significantly increased the resolution of the method. More recently (2017), Belaghzal et al. described a Hi-C 2.0 protocol that was able to achieve kilobase (kb) resolution. The key adaptation to the base protocol was the removal of the SDS solubilization step after digestion to preserve nuclear structure and prevent random ligation between fragmented chromatin by ligation within the intact nuclei, which formed the basis of in situ Hi-C. In 2021, Hi-C 3.0 was described by Lafontaine et al., with higher resolution achieved by enhancing crosslinking with formaldehyde followed by disuccinimidyl glutarate (DSG). While formaldehyde captures the amino and imino groups of both proteins and DNA, the NHS-esters in DSG react with primary amines on proteins and can capture amine-amine interactions. These updates to the base protocol allowed the scientists to look at more detailed conformational structures such as chromosomal compartments and topologically associating domains (TADs), as well as high-resolution conformational features such as DNA loops.
To date, a variety of derivatives of Hi-C have already emerged, including in situ Hi-C, low Hi-C, SAFE Hi-C, and Micro-C, with distinctive features related to different aspects of standard Hi-C, but the basic principle has remained the same.
Traditional Hi-C
The outline of the classical Hi-C workflow is as follows: cells are cross-linked with formaldehyde; chromatin is digested with a restriction enzyme that generates a 5’ overhang; the 5’ overhang is filled with biotinylated bases and the resulting blunt-ended DNA is ligated. The ligation products, with biotin at the junction, are selected for using streptavidin and further processed to prepare a library ready for subsequent sequencing efforts.
The pairwise interactions that Hi-C can capture across the genome are immense and so it is important to analyze an appropriately large sample size, in order to capture unique interactions that may only be observed in a minority of the general population. To obtain a high complexity library of ligation products that will ensure high resolution and depth of data, a sample of 20–25 million cells is required as input for Hi-C. Primary human samples, which may be available only in fewer cell numbers, could be used for standard Hi-C library preparation with as low as 1–5 million cells. However, using such a low input of cells may be associated with low library complexity which results in a high percentage of duplicate reads during library preparation.
Standard Hi-C gives data on pairwise interactions at the resolution of 1 to 10 Mb, requires high sequencing depth and the protocol takes around 7 days to complete.
Formaldehyde cross-linking
Cell and nuclear membranes are highly permeable to formaldehyde. Formaldehyde cross-linking is frequently employed for the detection and quantification of DNA-protein and protein-protein interactions. Of interest in the context of Hi-C, and all 3C-based methods, is the ability of formaldehyde to capture cis chromosomal interactions between distal segments of chromatin. It does so by forming covalent links between spatially adjacent chromatin segments. Formaldehyde can react with macromolecules in two steps: first it reacts with a nucleophilic group on a DNA base for example, and forms a methylol adduct, which is then converted to a Schiff base. In the second step, the Schiff base, which can decompose rapidly, forms a methylene bridge with another functional group on another molecule. It can also make this methylene bridge with a small molecule in solution such as glycine, which is used in excess to quench formaldehyde in Hi-C. Quenchers can typically exert an effect on formaldehyde from outside the cell. A key feature of this two-step formaldehyde crosslinking reaction is that all the reactions are reversible, which is vital for chromatin capture.
Crosslinking is a pivotal step of the chromatin capture workflow as the functional readout of the technique is the frequency at which two genomic regions are crosslinked to each other. Thus, the standardization of this step is important and for that, one must consider potential sources of variation. Presence of serum, which contains a high concentration of protein, in culture media can decrease the effective concentration of formaldehyde available for chromatin crosslinking, by sequestering it in the culture media. Therefore, in cases where serum is used in culture, it should be removed for the crosslinking step. The nature of cells, i.e., whether they are suspension or adherent, is also a pertinent consideration for the crosslinking step. Adherent cells bind to surfaces with the help of molecular mechanisms of cytoskeletons. It has been shown that there is a link between cytoskeleton-maintained nuclear and cellular morphology which, if altered, may negatively impact global nuclear organization. Adherent cells therefore, should be crosslinked while still attached to their culture surface.
Lysis, restriction digest and biotinylation
Cells are lysed on ice with cold hypotonic buffer containing sodium chloride, Tris-HCl at pH 8.0, and non-ionic detergent IGEPAL CA-630, supplemented with protease inhibitors. The protease inhibitors and incubation on ice help preserve the integrity of crosslinked chromatin complexes from endogenous proteases. The lysis step helps to release the nucleic material from the cells.
Following cell lysis, chromatin is solubilized with dilute SDS in order to remove proteins that have not been crosslinked and to open chromatin and make it more accessible for subsequent restriction endonuclease-mediated digestion. If the incubation with SDS exceeds the recommended 10 minutes, the formaldehyde crosslinks can be reversed and so the incubation with SDS must be immediately followed by an incubation on ice. A non-ionic detergent called Triton X-100 is used to quench SDS in order to prevent enzyme denaturation in the next step.
Any restriction enzyme that generates a 5’ overhang, such as HindIII can be used to digest the now accessible chromatin overnight. This 5’ overhang provides the template required by the Klenow fragment of DNA Polymerase I to add biotinylated CTP or ATP to the digested ends of chromatin. This step allows for the selection of Hi-C ligation products for library preparation.
Proximity ligation
A dilution ligation is performed on DNA fragments that are still crosslinked to one another in order to favor the intramolecular ligation of fragments within the same chromatin complex instead of ligation events between fragments across different complexes. Since this ligation step occurs between blunt-ended DNA fragments (since the sticky ends have been filled in with biotin-labeled bases), the reaction is allowed to go on for up to 4 hours to make up for its inherent inefficiency. As a result of proximity ligation, the terminal HindIII sites are lost and an NheI site is generated.
Biotin removal, DNA shearing, size selection and end repair
The biotin-labeled ligation products can be purified using phenol-chloroform DNA extraction. To remove any fragments with biotin-labeled ends that have not been ligated, T4 DNA Polymerase with 3’ to 5’ exonuclease activity is used to remove nucleotides from the ends of such fragments. This step ensures that none of these unligated fragments are selected for library preparation. The reaction is stopped with EDTA and the DNA is purified once again using phenol-chloroform DNA extraction.
The ideal size of DNA fragments for the sequencing library depends on the sequencing platform that will be used. DNA can first be sheared to fragments around 300–500 bp long using sonication. Fragments of this size are suitable for high-throughput sequencing. Following sonication, fragments can be size selected using AMPure XP beads from Beckman Coulter to obtain ligation products with a size distribution between 150 and 300 bp. This is the optimal fragment size window for HiSeq cluster formation.
DNA shearing causes asymmetric DNA breaks and must be repaired before biotin pulldown and sequencing adaptor ligation. This is achieved by using a combination of enzymes that fill in 5’ overhangs, and add 5’ phosphate groups and adenylate to the 3’ ends of fragments to allow for ligation of sequencing adaptors.
Biotin pull-down
Using an excess of streptavidin beads, such as the My-One C1 streptavidin bead solution from Dynabeads, biotinylated Hi-C ligation products can be pulled down and enriched. Ligation of the Illumina paired-end adapters is performed while the DNA fragments are bound to the streptavidin beads. Adsorption to the beads increases efficiency of the ligation of these blunt-ended DNA fragments to the adaptors, as it decreases their mobility.
Library preparation and sequencing
After the ligation of the adaptors is complete, PCR amplification of the library is performed. The PCR step can introduce a high number of duplicates in a low complexity Hi-C ligation product sample as a result of over-amplification. This results in very few interactions being captured, oftentimes because the input sample had a low number of cells. It is important to titrate the number of cycles required to get at least 50 ng of Hi-C library DNA for sequencing. The fewer the cycles, the better, so that there are no PCR artifacts (such as off-target amplicons, non-specificity, etc.). The ideal range of PCR cycles is 9–15, and it is preferable to pool multiple PCR reactions to get enough DNA for sequencing rather than to increase the number of cycles for one PCR reaction. The PCR products are purified again using AMPure beads to remove primer dimers and then quantified before being sequenced. Regions of chromatin that interact with each other are then identified by paired-end sequencing of the biotinylated, ligated products.
Any platform that can allow for the ligated fragments to be sequenced across the NheI junction (Roche 454) or by paired-end or mate-paired reads (Illumina GA and HiSeq platforms) would be suitable for Hi-C. Before high-throughput sequencing, the quality of the library should be verified using Sanger sequencing, wherein the long sequencing read will read through the biotin junction. Thirty-six or 50 bp reads are sufficient to identify most chromatin interacting pairs using Illumina paired-end sequencing. Since the average size of fragments in the library is 250 bp, 50bp paired-end reads have been found to be optimum for Hi-C library sequencing.
Quality control of Hi-C libraries
There are several pressure points throughout the workflow of Hi-C sample preparation that are well documented and reported. DNA at various stages can be run on 0.8% agarose gels to assay the size distribution of fragments. This is particularly important after shearing or size selection steps. Degradation of DNA can also be monitored as smears of low-molecular-weight products on gels. Degradation can occur due to insufficient protease inhibitors during lysis, endogenous nuclease activity, or thermal degradation due to incorrect icing. 3C PCR reactions can be performed to test for the formation of proximity ligation products.
Variants
Standard Hi-C has a high input cell number cost, requires deep sequencing, generates low-resolution data, and suffers from formation of redundant molecules that contribute to low complexity libraries when cell numbers are low. To combat these issues in order to be able to apply this technique in contexts where cell number is a limiting factor, for example, with primary human cell work, several Hi-C variants have been developed since the first conceptualization of Hi-C.
The four main classes into which Hi-C variants fall are: dilution ligation, in situ ligation, single cell, and low noise improvement systems. Standard Hi-C is a type of dilution ligation, and other dilution ligation methods include DNase Hi-C and Capture Hi-C. In contrast to standard and Capture Hi-C, DNase Hi-C requires only 2–5 million cells as input, uses DNaseI for chromatin fragmentation and employs an in-gel dilution proximity ligation. The use of DNaseI has been shown to greatly improve efficiency and resolution of Hi-C. Capture Hi-C is a genome-wide assaying technique to look at chromatin interactions of specific loci using a hybridization-based capture of targeted genomic regions. It was first developed by Mifsud et al. to map long-range promoter contacts in human cells by generating a biotinylated RNA bait library that targeted 21,841 promoter regions. These variants, in addition to others (described below), represent modifications to the foundational technique of standard Hi-C and address and alleviate one or more limitations of the original method.
In situ Hi-C
In situ Hi-C combines standard Hi-C with nuclear ligation assay, i.e., proximity ligation performed in intact nuclei. The protocol is similar to standard Hi-C in terms of the basic workflow outline but differs in other ways. In situ Hi-C requires 2 to 5 million cells compared to the ideal 20 to 25 million required for standard Hi-C and it requires only 3 days to complete the protocol versus 7 days for standard Hi-C. Furthermore, proximity ligation does not take place in solution like in standard Hi-C, decreasing the frequency of random, biologically irrelevant contacts and ligations, as indicated by the lower frequency of mitochondrial and nuclear DNA contacts in captured biotinylated DNA. This is achieved by leaving the nuclei intact for the ligation step. Cells are still lysed with a buffer containing Tris-HCl at pH 8.0, sodium chloride, and the detergent IGEPAL CA630 before ligation, but instead of homogenization of the cell lysate, cell nuclei are pelleted after initial lysis to degrade the cell membrane. After proximity ligation is complete, cell nuclei are incubated for at least 1.5 hours at 68 degrees Celsius to permeabilize the nuclear membrane and release its nuclear contents.
The resolution that can be achieved with in situ Hi-C can be up to 950 to 1000 bp compared to the 1 to 10 Mb resolution of standard Hi-C and the 100 kb resolution of DNase Hi-C. While standard Hi-C makes use of a 6-bp cutter such as HindIII for the restriction digest step, in situ Hi-C uses a 4-bp cutter such as MboI or its isoschizomer DpnII (which is not sensitive to CpG methylation) to increase efficiency and resolution (as the restriction sites of MboI and DpnII occur more frequently in the genome). Data between replicates for in situ Hi-C is consistent and highly reproducible, with very little background noise and clear chromatin interactions. It is, however, possible that some of the captured interactions may not be accurate intermolecular interactions, since the nucleus is densely packed with protein and DNA; performing proximity ligations in intact nuclei may therefore capture confounding contacts that arise only from the density of nuclear packaging rather than unique chromosomal interactions with functional impact. It also requires an extremely high sequencing depth of around 5 billion paired-end reads per sample to achieve the resolution of data described by Rao et al. Several techniques that have adapted the concept of in situ Hi-C exist, including Sis Hi-C, OCEAN-C and in situ capture Hi-C. Described below are two of the most prominent in situ Hi-C based techniques.
1. Low-C
Low-C is an in situ Hi-C protocol adapted for use on low cell numbers, which is particularly useful in contexts where cell number is a limiting factor, for example, in primary human cell culture. This method makes use of minor changes, including volumes and concentrations used and the timing and order of certain experimental steps, to allow for the generation of high-quality Hi-C libraries from cell numbers as low as 1000 cells. Despite the potential of generating usable and high resolution data with as few as 1000 cells, Diaz et al. still recommend using at least 1 to 2 million cells if feasible, or if not a minimum of 500 K cells. Library quality was first assessed on the Illumina MiSeq (2x84 bp paired-end reads) platform and, once it passed quality control criteria (including low PCR duplicates), the library was sequenced on Illumina NextSeq (2x80 bp paired-end). Overall, this technique circumvents the issue of requiring a high cell number input for Hi-C and the high sequencing depth required to obtain high resolution data, but can only achieve resolutions of up to 5 kb and may not always be reproducible due to the variable nature of sample sizes used and the data generated from it.
2. SAFE Hi-C
SAFE Hi-C, or simplified, fast, and economically efficient Hi-C, generates sufficient ligated fragments without amplification for high-throughput sequencing. In situ Hi-C data that has been published indicates that amplification (at the PCR step for library preparation) introduces distance-dependent amplification bias, which results in a higher noise to signal ratio against genomic distance. SAFE Hi-C was successfully used to generate an amplification-free, in situ Hi-C ligation library from as low as 250 thousand K562 cells. Ligation fragments are anywhere between 200 and 500 bp long, with an average at about 370 bp. All ligation product libraries were sequenced using the Illumina HiSeq platform (2x150 bp paired-end reads). Although SAFE Hi-C can be used for a cell input as low as 250 thousand, Niu et al. recommend using 1 to 2 million cells. Samples produce enough ligates to be sequenced on one-fourth of a lane. SAFE Hi-C has been demonstrated to increase library complexity due to the removal of PCR duplicates which lower the overall percentage of unique paired reads. Overall, SAFE Hi-C preserves the integrity of chromosomal interactions while also reducing the need to have high sequencing depth and saving overall cost and labor.
Micro-C
Micro-C is a version of Hi-C that includes a micrococcal nuclease (MNase) digestion step to look at interactions between pairs of nucleosomes, thus enabling resolution of sub-genomic TAD structures at the 1 to 100 nucleosome scale. It was first developed for use in yeast and was shown to conserve the structural data obtained from a standard Hi-C but with greater signal-to-noise ratio. When used with human embryonic stem cells and fibroblasts, 2.6 to 4.5 billion uniquely mapped reads were obtained per sample. Hsieh et al. analyzed 2.64 billion reads from mouse embryonic stem cells and demonstrated that there was increased power for detecting short-range interactions.
Single cell Hi-C
Hi-C has also been adapted for use with single cells but these techniques require high levels of expertise to perform and are plagued with issues such as low data quality, coverage, and resolution.
Adaptations for Ancient DNA: PaleoHi-C
PaleoHi-C is a specialized adaptation of the Hi-C genomic analysis technique designed to study the three-dimensional genome architecture in ancient DNA samples. It addresses the challenges posed by degraded and fragmented DNA, enabling researchers to reconstruct chromatin interactions in extinct species.
Methodology
PaleoHi-C modifies the traditional Hi-C protocol to account for the specific characteristics of ancient DNA:
Sample Preparation: DNA is extracted from well-preserved tissues, such as bones or skin, often found in cold or arid environments that minimize degradation.
Fragmentation and Ligation: Due to the inherent fragmentation of ancient DNA, PaleoHi-C utilizes optimized ligation protocols to capture chromatin interactions even in highly degraded samples.
Data Analysis: Advanced computational tools process the interaction data, reconstructing chromatin structures and identifying features like topologically associating domains (TADs) and chromatin compartments.
Applications
PaleoHi-C has opened new avenues in paleogenomics, including:
Genome Reconstruction: It has been used to map the three-dimensional genome architecture of extinct species, such as the 52,000-year-old woolly mammoth (Mammuthus primigenius), revealing similarities with modern relatives like the Asian elephant (Elephas maximus).
Epigenetic Insights: By identifying preserved chromatin interactions, PaleoHi-C provides a unique window into the regulation of genes in ancient organisms. Studies have demonstrated that chromatin organization, including Barr bodies representing inactive X chromosomes, can remain intact in ancient nuclei.
Evolutionary Studies: The technique aids in understanding how genome organization has evolved over time and across species.
Significance
The adaptation of Hi-C for ancient DNA has transformed the field of paleogenomics, allowing for detailed studies of extinct species at a molecular level. By preserving and analyzing chromatin interactions, PaleoHi-C sheds light on genome structure, evolution, and adaptation in ancient ecosystems.
Limitations
PaleoHi-C is constrained by the availability of well-preserved samples and the inherent challenges of working with highly degraded DNA. However, advances in sequencing technologies and computational methods continue to expand its potential applications.
Data analysis
The chimeric DNA ligation products generated by Hi-C represent pairwise chromatin interactions or physical 3D contacts within the nucleus, and can be analyzed by a variety of downstream approaches. Briefly, deep sequencing data is used to build unbiased genome-wide chromatin interaction maps. Then several different methods can be employed to analyze these maps to identify chromosomal structural patterns and their biological interpretations. Many of these data analysis approaches also apply to 3C-sequencing or other equivalent data.
Read mapping
Hi-C data produced by deep sequencing is in the form of a traditional FASTQ file, and the reads can be aligned to the genome of interest using sequence alignment software (e.g. Bowtie, bwa, etc.). Because Hi-C ligation products may span hundreds of megabases and may bridge loci on different chromosomes, Hi-C read alignment is often chimeric in the sense that different parts of a read may be aligned to loci far apart, possibly in different orientations. Long-read aligners (e.g. minimap2) often support chimeric alignment and can be directly applied to long-read Hi-C data. Short-read Hi-C alignment is more challenging.
Notably, Hi-C generates ligation junctions of varying sizes, but the exact position of the ligation site is not measured. To circumvent this problem, iterative mapping is used to avoid having to locate the junction site before splitting the reads in two and mapping them separately to identify the interaction pairs. The idea behind iterative mapping is to map as short a sequence as possible to ensure unique identification of interaction pairs before reaching the junction site. As a result, 25-bp long reads starting from the 5’ end are mapped to the genome at first, and reads that do not uniquely map to a single locus are extended by an additional 5 bp and then re-mapped. This process is repeated until all reads uniquely map, or until the reads are extended to their entirety. Only paired end reads with each side uniquely mapped to a single genomic locus are kept. All other paired end reads are discarded.
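The following is a minimal Python sketch of the iterative-mapping loop described above, not a wrapper around any particular aligner. The function `align_uniquely` is a hypothetical stand-in for a call to an external tool such as Bowtie; it is assumed to return a genomic position when a read prefix maps uniquely and None otherwise.

```python
# Sketch of iterative mapping: map 5' prefixes, extend unmapped reads and retry.
def iterative_map(reads, align_uniquely, start_len=25, step=5):
    positions = {}                                  # read index -> mapped position
    unmapped = {i: r for i, r in enumerate(reads)}
    length = start_len
    while unmapped and length <= max(len(r) for r in unmapped.values()):
        still_unmapped = {}
        for i, read in unmapped.items():
            hit = align_uniquely(read[:length])     # align only the 5' prefix
            if hit is not None:
                positions[i] = hit                  # uniquely mapped: record it
            else:
                still_unmapped[i] = read            # extend by `step` bp next round
        unmapped = still_unmapped
        length += step
    return positions                                # reads that never map uniquely are dropped
```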
Several variations of read mapping techniques are implemented in many bioinformatics pipelines, such as ICE, HiC-Pro, HIPPIE, HiCUP, and TADbit, to map two portions of a paired end read separately, in the case that the two portions match distinct genomic positions, thus addressing the challenge where reads span the ligation junctions.
With increased read length, more recent pipelines (e.g. Juicer and the 4D-Nucleosome Data Portal) often align short Hi-C reads with an alignment algorithm capable of chimeric alignment, such as bwa-mem, chromap and dragmap. This procedure calls alignment once and is simpler than iterative mapping.
Fragment assignment and filtering
The mapped reads are then each assigned a single genomic alignment location according to its 5’ mapped position in the genome. For each read pair, a location is assigned to only one of the restriction fragments; it should thus fall in close proximity to a restriction site, less than the maximum molecule length away. Reads mapped more than the maximum molecule length away from the closest restriction sites are the result of physical breakage of the chromatin or non-canonical nuclease activities. Because these reads also carry information on chromatin interactions, they are not discarded, but appropriate filtering must take place after assigning genomic locations to remove technical noise in the dataset.
Depending on whether the read pair falls within the same or different restriction fragments, different filtering criteria are applied. If the paired reads map to the same restriction fragment, they likely represent un-ligated dangling ends or circularized fragments that are uninformative, and are therefore removed from the dataset. These reads could also represent PCR artifacts, undigested chromatin fragments, or simply, reads with low alignment quality. Whatever their origin, reads mapped to the same fragment are considered “spurious signals” and are typically discarded before downstream processing.
The remaining paired reads mapped to distinct restriction fragments are also filtered to discard identical/redundant PCR products, and this is achieved by removing reads sharing the exact same sequence or 5’ alignment positions. Additional levels of filtering could also be applied to fit the experimental purpose. For example, potential undigested restriction sites could be specifically filtered out, rather than passively identified, by removing reads mapped to the same chromosomal strand with a small distance (user-defined, experience-based) in between.
Binning and bin-level filtering
Based on their midpoint coordinates, Hi-C restriction fragments are binned into fixed genomic intervals, with bin sizes ranging from 40 kb to 1 Mb. The rationale behind this approach is that by reducing the complexity of the data and lowering the number of candidate genome-wide interactions per bin, genomic bins allow for the construction of more robust and less noisy signals, in the form of contact frequencies, at the expense of resolution (though restriction fragment length still remains the ultimate physical limit to Hi-C resolution). Bin to bin interactions are aggregated by simply taking the sum, although more focused and informative methods have also been developed over the years to further enhance the signal. One such method described by Rao et al. aims to push the limit of bin size to smaller and smaller bins, eventually having > 80% of bins covered by 1000 reads each, which significantly increased the resolution of the final analysis results.
Bin-level filtering, just like fragment-level filtering, also takes place to shed experimental artifacts from the obtained data. Bins with high noise and low signals are removed as they typically represent highly repetitive genomic contents around the telomeres and centromeres. This is done by comparing the individual bin sums to the sum of all bins and removing the bottom 1% of bins, or by using the variance as a measure of noise. Low-coverage bins, or bins three standard deviations below the center of a log-normal distribution (which fits the total number of contacts per genomic bin), are removed using the MAD-max (maximum allowed median absolute deviation) filter. After binning, Hi-C data will be stored in a symmetrical matrix format.
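Below is a minimal Python sketch, using NumPy, of how fragment-level contacts might be aggregated into a fixed-bin symmetric matrix and how low-coverage bins could then be masked. The input format (pairs of fragment-midpoint coordinates on one chromosome) and the simple bottom-1% coverage cutoff are illustrative assumptions, not a prescribed pipeline.

```python
import numpy as np

def bin_contacts(midpoint_pairs, chrom_length, bin_size=40_000):
    """Aggregate fragment midpoint pairs into a symmetric binned contact matrix."""
    n_bins = chrom_length // bin_size + 1
    matrix = np.zeros((n_bins, n_bins))
    for a, b in midpoint_pairs:
        i, j = a // bin_size, b // bin_size
        matrix[i, j] += 1
        if i != j:
            matrix[j, i] += 1                 # keep the matrix symmetric
    return matrix

def mask_low_coverage(matrix, bottom_fraction=0.01):
    """Zero out bins whose total coverage falls in the lowest fraction."""
    coverage = matrix.sum(axis=0)
    cutoff = np.quantile(coverage[coverage > 0], bottom_fraction)
    keep = coverage >= cutoff
    filtered = matrix.copy()
    filtered[~keep, :] = 0
    filtered[:, ~keep] = 0
    return filtered, keep
```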
More recently, many approaches have been proposed to predetermine the optimal bin size for different Hi-C experiments. Li et al. in 2018 described deDoc, a method where bin size is selected as the one at which the structural entropy of the Hi-C matrix reaches a stable minimum. QuASAR, on the other hand, offers a bit more quality assessment, and compares replicate scores of the samples (given that replicates are indeed included for the experimental purpose) to find the maximum usable resolution. Some publications also tried to score interaction frequencies at the single-fragment level, where a higher coverage can be achieved even with a lower number of reads. HiCPlus, a tool developed by Zhang et al. in 2018, is able to impute Hi-C matrices similar to the original ones using only 1/16 of the original reads.
Balancing/normalization
Balancing refers to the process of bias correction of the obtained Hi-C data, and can be either explicit or implicit. Explicit balancing methods require the explicit definitions of biases known to be associated with Hi-C reads (or any high-throughput sequencing technique in general) including the read mappability, GC content, as well as individual fragment length. A correction factor is first computed for each of the considered biases, followed by each of their combination, and then applied to the read counts per genomic bin.
However, some biases can come from an unknown origin, in which case an implicit balancing approach is used instead. Implicit balancing relies on the assumption that each genomic locus should have “equal visibility”, which suggests that the interaction signal at each genomic locus in the Hi-C data should add up to the same total amount. One approach called iterative correction uses the Sinkhorn–Knopp balancing algorithm and attempts to balance the symmetrical matrix using the aforementioned assumption (by equalizing the sum of each and every row and column in the matrix). The algorithm iteratively alternates between two steps: 1) dividing each row by its mean, and 2) dividing each column by its mean, which are guaranteed to converge in the end and leave no obviously high rows or columns in the interaction matrix. Other computational methods also exist to normalize the biases inherent to Hi-C data, including sequential component normalization (SCN), the Knight-Ruiz matrix-balancing approach, and eigenvector decomposition (ICE) normalization. In the end, both the explicit and the implicit bias correction methods yield comparable results.
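A minimal sketch of the implicit "equal visibility" balancing described above is given below: rows and columns of the symmetric matrix are alternately rescaled until the marginals flatten out. This is only an illustration of the iterative-correction idea, assuming a non-empty matrix; published implementations add bin filtering and explicit convergence checks.

```python
import numpy as np

def iterative_correction(matrix, n_iter=50):
    """Sketch of iterative (Sinkhorn-Knopp-style) balancing of a symmetric contact matrix."""
    m = matrix.astype(float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        marginals = m.sum(axis=1)
        nonzero = marginals > 0
        marginals = marginals / marginals[nonzero].mean()   # relative "visibility" per bin
        marginals[~nonzero] = 1                              # leave empty bins untouched
        m /= np.outer(marginals, marginals)                  # rescale rows and columns together
        bias *= marginals
    return m, bias
```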
Analysis and data interpretation
With a binned, genome-wide interaction matrix, common interaction patterns observed in mammalian genomes can be identified and interpreted biologically, while more rare, less frequently observed patterns such as circular chromosomes and centromere clustering, may require additional specially-tailored methods to be identified.
1. Cis/trans interaction ratio
Cis/trans interactions are one of the two strongest interaction patterns observed in Hi-C maps. They are not locus-specific, and thus are considered as a genome-level pattern. Typically, a higher interaction frequency is observed, on average, for pairs of loci residing on the same chromosome (in cis) than pairs of loci residing on different chromosomes (in trans). In Hi-C interaction matrices, cis/trans interactions appear as square blocks centered along a diagonal, matching individual chromosomes at the same time. Because this pattern is relatively consistent across different species and cell types, it can be used to assess the quality of the data. A noisier experiment, due to random background ligation or any unknown factor, will result in a lower cis to trans interaction ratio (as the noise is expected to affect both cis and trans interactions to a similar extent), and high-quality experiments typically have a cis/trans interaction ratio between 40 and 60 for the human genome.
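As a simple illustration of this quality check, the sketch below computes the fraction of cis contacts from a list of contacts represented as pairs of chromosome labels; the input representation is an assumption made for the example.

```python
def cis_fraction(contact_chrom_pairs):
    """Fraction of contacts whose two ends lie on the same chromosome."""
    cis = sum(1 for c1, c2 in contact_chrom_pairs if c1 == c2)
    total = len(contact_chrom_pairs)
    return cis / total if total else 0.0

# Example: a noisier library would show a lower cis fraction.
pairs = [("chr1", "chr1"), ("chr1", "chr2"), ("chr3", "chr3"), ("chr2", "chr2")]
print(cis_fraction(pairs))  # 0.75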
2. Distance-dependent interaction frequency
This pattern refers to the distance-dependent decay of interaction frequencies on a genome level, and represents the second one of the two strongest Hi-C interaction patterns. As the interaction frequencies between cis-interacting loci decrease (as a result of further distance between them), a gradual decrease of interaction frequency can be observed moving away from the diagonal in the interaction matrix.
Various polymer models exist to statistically characterize the properties of loci pairs separated by a given distance, but discrete binning and fitting continuous functions are two common ways to analyze the distance-dependent interaction frequencies between datapoints. First, interaction frequencies can be binned based on their genomic distance, then a continuous function is fitted to the data using information of the average of each bin. The resulting decay function is plotted on a log-log plot so that a linear line can be used to represent the power-law decays predicted by polymer models. However, oftentimes a simple polymer model will not be sufficient to fully represent the distance-dependent interaction frequencies, at which point more complicated decay functions might result, which might affect the reproducibility of the data due to the presence of locus-specific rather than genome-wide patterns observed in the Hi-C matrix (which are not taken into consideration by polymer models).
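The sketch below illustrates, under simple assumptions, the discrete-binning approach to the distance-decay curve: contacts are averaged by their genomic separation (here, by off-diagonal of one chromosome's binned matrix) and a power law is fitted on log-log axes. The choice of binning and the straight-line fit are illustrative only.

```python
import numpy as np

def distance_decay(matrix):
    """Mean contact frequency as a function of bin separation."""
    n = matrix.shape[0]
    distances = np.arange(1, n)
    means = np.array([np.mean(np.diagonal(matrix, offset=d)) for d in distances])
    return distances, means

def fit_power_law(distances, means):
    """Fit a line in log-log space; the slope approximates the decay exponent."""
    keep = means > 0
    slope, intercept = np.polyfit(np.log(distances[keep]), np.log(means[keep]), 1)
    return slope, intercept
```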
3. Chromatin compartments
The strongest locus-specific pattern found in Hi-C maps is chromatin compartments, which takes the shape of a plaid or “checker-board”-like pattern on the interaction matrix, with alternating blocks that range between 1 and 10 Mb in size (which makes them easy to extract even in experiments with very low sampling) in the human genome. This pattern can be found at both high and low frequencies. Because chromosomes consist of two types of genomic regions that alternate along the length of individual chromosomes, the interaction frequencies between two regions of the same type and interaction frequencies between two regions of different types can be quite different.
The definition of the active (A) and inactive (B) chromatin compartments is based on principal component analysis, first established by Lieberman-Aiden et al. in 2009. Their approach calculated the correlation of the Hi-C matrix of observed vs. expected signal (obtained from a distance-normalized contact matrix) ratio, and used the sign of the first eigenvector to denote positive and negative parts of the resulting plot as A and B compartments, respectively. Many genomic studies have indicated that chromatin compartments are correlated with chromatin states, such as gene density, DNA accessibility, GC content, replication timing, and histone marks. Therefore, type A compartments are more specifically defined to represent the gene-dense regions of euchromatin, while type B compartments represent heterochromatic regions with less gene activities. Overall, chromatin compartments offer insights on the general organization principles of the genome of interest.
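A minimal sketch of the eigenvector-based compartment assignment described above is shown below: the observed matrix is divided by a distance-based expected matrix, a bin-by-bin correlation matrix is formed, and the sign of its leading eigenvector splits bins into two groups. Orienting the sign against GC content or gene density (to decide which group is labelled "A") is omitted here, and the use of the raw leading eigenvector is an approximation of the original principal component analysis.

```python
import numpy as np

def compartment_signs(matrix):
    """Return +1/-1 compartment labels from the leading eigenvector of the O/E correlation matrix."""
    n = matrix.shape[0]
    expected = np.zeros_like(matrix, dtype=float)
    for d in range(n):
        diag_mean = np.mean(np.diagonal(matrix, offset=d))
        idx = np.arange(n - d)
        expected[idx, idx + d] = diag_mean
        expected[idx + d, idx] = diag_mean
    with np.errstate(divide="ignore", invalid="ignore"):
        oe = np.where(expected > 0, matrix / expected, 0.0)   # observed / expected
    corr = np.nan_to_num(np.corrcoef(oe))                      # bin-by-bin correlation
    eigvals, eigvecs = np.linalg.eigh(corr)
    leading = eigvecs[:, np.argmax(eigvals)]                   # leading eigenvector
    return np.sign(leading)
```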
An increasing number of bioinformatics tools capable of performing compartment calling have been developed over the past decade, including HOMER, the HiTC R package, and CscoreTool. Although each has its own differences and optimizations relative to the original 2009 approach, their base protocols still rely on principal component analysis.
4. Topologically associating domains (TADs)
TADs are sub-Mb structures that may harbor gene-regulatory features, such as local promoter-enhancer interactions. More generally, TADs are considered as an emergent property of underlying biological mechanisms, which defines TADs as loop extrusions, compartmentalizations, or any dynamic genomic pattern rather than a static structural feature of the genome. Thus, TADs represent regulatory microenvironments and usually show up on a Hi-C map as blocks of highly self-interacting regions in which interaction frequencies within the region are significantly higher than interaction frequencies between two adjacent regions. In Hi-C interaction matrices, TADs are square blocks of elevated interaction frequencies centred along the diagonal. However, this is merely an oversimplified description, and identifying the actual pattern requires much more statistical processing and estimation.
One approach to identify TADs was described by Dixon et al., where they first calculated (within some genomic range) the difference between the average upstream interactions and the average downstream interactions of each bin in the matrix. This difference was then transformed into a chi-squared statistic based on the Hidden Markov Model, and any sharp change in this chi-squared value, called the directionality index, will define the boundaries of TADs. Alternatively, one could simply take the ratio between average upstream and downstream interactions to define TAD boundaries, as did Naumova et al.
Another approach is to calculate the average interaction frequencies crossing over each bin, again within some predetermined genomic range. The resulting value is referred to as the insulation score and can be thought of as the average of a square sliding along the diagonal of the matrix (Crane et al.). This value is expected to be lower at TAD boundaries; thus, one can use standard statistical techniques to find local minima (boundaries), and define regions between consecutive boundaries to be TADs.
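The sketch below illustrates the insulation-score idea just described: a square window slides along the diagonal, contacts crossing each bin are averaged, and local minima of the resulting profile are taken as candidate TAD boundaries. The window size and the simple local-minimum test are illustrative choices.

```python
import numpy as np

def insulation_score(matrix, window=10):
    """Mean contact frequency crossing each bin within a fixed window."""
    n = matrix.shape[0]
    scores = np.full(n, np.nan)
    for i in range(window, n - window):
        block = matrix[i - window:i, i + 1:i + 1 + window]   # contacts crossing bin i
        scores[i] = block.mean()
    return scores

def boundary_bins(scores):
    """Call local minima of the insulation profile as candidate boundaries."""
    bounds = []
    for i in range(1, len(scores) - 1):
        s_prev, s, s_next = scores[i - 1], scores[i], scores[i + 1]
        if not np.isnan(s_prev) and not np.isnan(s_next) and s < s_prev and s < s_next:
            bounds.append(i)
    return bounds
```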
However, as is increasingly recognized today, TADs represent a hierarchical series of structures that cannot be fully characterized by one-dimensional scores given by the previous methods. The increased resolution available in newer datasets can now explicitly address TADs with multiscale analysis approaches. As first introduced by Armatus, resolution specific domains can be identified and a consensus set of domains conserved across resolutions can be calculated, which transforms the problem of TAD calling into the optimization of scoring functions based on their local interaction densities. Variations of this approach with different objective functions, such as Lavaburst, MrTADFinder, 3DNetMod, and Matryoshka, are also developed to achieve better computing performance on higher resolution datasets.
5. Point interactions
Biologically, regulatory interactions usually occur at a much smaller scale than TADs, and two genomic elements can activate/inhibit the expression of a gene within as small a distance as 1 kb. Therefore, point interactions are important in interpreting Hi-C maps, and are expected to appear as local enrichments in contact probability. However, current methodologies for the identification of point interactions are all implicit in nature, in that they do not specify what a point interaction should look like. Instead, point interactions are identified as outliers with higher interaction frequencies than expected within the Hi-C matrix, given that the background model consists only of the strongest signals such as the distance-decay functions. The background model can be estimated and constructed using both local signal distributions and global approaches (i.e. chromosome-wide/genome-wide). Many of the aforementioned bioinformatics packages incorporate algorithms to identify point interactions. In short, the significance of each individual pairwise interaction is calculated, and significantly high outliers are corrected for multiple testing before they are recognized as truly informative point interactions. It is helpful to complement identified point interactions with additional evidence such as analysis of enrichment scores and biological replicates, to indicate that these interactions are indeed of biological significance.
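The following is a deliberately simplified sketch of the outlier-calling idea described above: pixels whose observed count exceeds a distance-decay expectation are flagged using a Poisson tail probability with a crude Bonferroni correction. Published loop callers use richer local background models; this only illustrates the principle, and the significance threshold is an arbitrary choice.

```python
import numpy as np
from scipy.stats import poisson

def call_point_interactions(matrix, alpha=0.05):
    """Flag bin pairs enriched over a distance-decay expectation (Bonferroni-corrected)."""
    n = matrix.shape[0]
    expected = np.zeros_like(matrix, dtype=float)
    for d in range(n):
        m = np.mean(np.diagonal(matrix, offset=d))
        idx = np.arange(n - d)
        expected[idx, idx + d] = m
        expected[idx + d, idx] = m
    hits = []
    tests = n * (n - 1) / 2
    for i in range(n):
        for j in range(i + 1, n):
            if expected[i, j] <= 0:
                continue
            p = poisson.sf(matrix[i, j] - 1, expected[i, j])   # P(X >= observed)
            if p * tests < alpha:                              # Bonferroni correction
                hits.append((i, j, p))
    return hits
```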
Uses
Development
1. Cell division
Hi-C can reveal chromatin conformation changes during cell division. In interphase, chromatin is generally loose and dynamic so that transcription regulation and other regulatory activities can take place. When entering mitosis and cell division, chromatin becomes compactly folded into dense cylindrical chromosomes. Within the past five years, the development of single-cell Hi-C has enabled the depiction of the entire 3D structural landscape of chromatin/chromosomes throughout the cell cycle, and many studies have discovered that these identified genomic domains remain unchanged in interphase, and are erased by silencing mechanisms when the cell enters mitosis. When mitotic division is completed and the cell re-enters interphase, chromatin 3D structures are observed to be re-established, and transcription regulation is restored.
2. Transcription regulation and fate determination
It has been suspected that the differentiation of embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) into various mature cell lineages is accompanied by global changes in chromosomal structures and consequently interaction dynamics to allow for the regulation of transcriptional activation/silencing. Standard Hi-C can be used to investigate this research question.
In 2015, Dixon et al. applied standard Hi-C to capture global 3D dynamics in human ESCs during their differentiation into high five cells. Due to the ability of Hi-C to depict dynamic interactions in differentiation-related TADs, the researchers discovered increases in the number of DHS sites, CTCF binding ability, active histone modifications, and target gene expressions within these TADs of interest, and found significant participation of major pluripotency factors such as OCT4, NANOG, and SOX2 in the interaction network during somatic cell reprogramming. Since then, Hi-C has been recognized as one of the standard methods to probe for transcriptional regulatory activities, and has confirmed that chromosome architecture is closely related to cell fate.
3. Growth and development
Mammalian somatic growth and development starts with the fertilization of the oocyte by sperm, followed by the zygote stage, the 2-cell, 4-cell, and 8-cell stages, the blastocyst stage, and finally the embryo stage. Hi-C has made it possible to explore the comprehensive genomic architecture during growth and development, as both sis-Hi-C and in situ Hi-C have reported that TADs and genomic A and B compartments are not obviously present and appear to be less well-structured in oocyte cells. These structural features of the chromatin are only gradually established after fertilization, becoming cleaner and more pronounced as developmental stages progress.
Genome evolution
As data on 3D genome structures has become increasingly prevalent in recent years, Hi-C has begun to be used as a means to track evolutionary structural features and changes. Genomic single nucleotide polymorphisms (SNPs) and TADs are typically conserved across species, along with the CTCF factor in chromatin domain evolution. Other factors, however, have been revealed by Hi-C techniques to undergo structural evolution in 3D architecture. These include codon usage frequency similarity (CUFS), paralog gene co-regulation, and spatially co-evolving orthologous modules (SCOMs). For large-scale domain evolution, chromosomal translocations, syntenic regions, as well as genomic rearrangement regions were all relatively conserved. These findings imply that Hi-C technologies are capable of providing an alternative point of view on the eukaryotic tree of life.
Cancer
Several studies have employed the use of Hi-C to describe and study chromatin architecture in different cancers and their impact on disease pathogenesis. Kloetgen et al. used in situ Hi-C to study T cell acute lymphoblastic leukemia (T-ALL) and found a TAD fusion event that removed a CTCF insulation site, allowing for the oncogene MYC’s promoter to directly interact with a distal super enhancer. Fang et al. have also shown how there are T-ALL specific gain or loss of chromatin insulation, which alters the strength of TAD architecture of the genome, using in situ Hi-C. Low-C has been used to map the chromatin structure of primary B cells of a diffuse large B-cell lymphoma patient and was used to find high chromosome structural variation between the patient and healthy B-cells. Overall, the application of Hi-C and its variants in cancer research provides unique insight into the molecular underpinnings of the driving factors of cell abnormality. It can help explain biological phenomena (high MYC expression in T-ALL) and help aid drug development to target mechanisms unique to cancerous cells.
References
Genomics techniques | Hi-C (genomic analysis technique) | [
"Chemistry",
"Biology"
] | 9,874 | [
"Genetics techniques",
"Genomics techniques",
"Molecular biology techniques"
] |
70,171,044 | https://en.wikipedia.org/wiki/Cramaucheniinae | Cramaucheniinae is a paraphyletic subfamily of macraucheniids that originated in the middle Eocene (Mustersan SALMA). The size range of the group ranged from small, basal forms to larger and more derived forms. During their evolution, the cramaucheniines undergone a trend from evolving from small basal forms such as Polymorphis into larger, more derived taxa such as Theosodon.
References
Macraucheniids
Paleogene mammals of South America
Neogene mammals of South America
Mammal subfamilies
Lutetian first appearances
Serravallian extinctions
Paraphyletic groups | Cramaucheniinae | [
"Biology"
] | 130 | [
"Phylogenetics",
"Paraphyletic groups"
] |
70,171,384 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Book%20Pro | Samsung Galaxy Book Pro is a notebook computer announced by Samsung Electronics in April 2021. It has a 13.3 inch display with 1080p resolution and a 720p webcam.
References
Book Pro
Computer-related introductions in 2021
Samsung laptops | Samsung Galaxy Book Pro | [
"Technology"
] | 50 | [
"Computing stubs"
] |
70,171,813 | https://en.wikipedia.org/wiki/Mobility%20transition | Mobility transition is a set of social, technological and political processes of converting traffic (including freight transport) and mobility to sustainable transport with renewable energy resources, and an integration of several different modes of private transport and local public transport. It also includes social change, a redistribution of public spaces, and different ways of financing and spending money in urban planning. The main motivation for mobility transition is the reduction of the harm and damage that traffic causes to people (mostly but not solely due to collisions) and the environment (which also often directly or indirectly affects people) in order to make (urban) society more livable, as well as solving various interconnected logistical, social, economic and energy issues and inefficiencies.
Motivation
Environmental damage
An important goal is the reduction of greenhouse gas emissions such as CO2. To achieve the goal set in the Paris Agreement, that is, to restrict global warming to clearly below 2 °C, the burning of fossil fuels is to be discontinued around 2040. Because the CO2 emissions of traffic practically need to be reduced to zero, the measures taken so far in the transport sector are not sufficient in order to achieve the climate change mitigation goals that have been set.
Air pollution
A mobility transition also serves health purposes in the metropolitan regions and large cities and is intended in particular to counteract the massive air pollution. For example, in Germany in 2015, traffic caused about 38% of human-related nitrogen oxide emissions. According to Lelieveld et al. (2015), air pollution from land traffic alone killed around 164,000 people in 2010; in Germany alone, it was over 6,900 people. A 2017 study by the same lead author concluded that air pollution from road traffic in Germany causes 11,000 deaths every year that could potentially be avoided. This figure is 3.5 times the number of fatalities from accidents.
To illustrate the scale of road traffic in Germany: according to the Federal Statistical Office of Germany, there were 58 passenger cars for every 100 inhabitants.
Accident fatalities, quality of life, aggressive behaviour
Further motives for the mobility transition are the desire for less noise, streets with quality of life and lower accident risks (see also Vision Zero). According to estimates by the European Environment Agency, 113 million people in Europe are affected by road noise at unhealthy levels. With increasing traffic and commuter numbers, many citizens also wished for more attractive places to spend time in public spaces. A mobility transition therefore also serves to increase the quality of life.
The mobility transition is also seen by some as a means of reducing aggressive behaviour in traffic (road rage) and in society. Studies indicate that people in large and expensive cars are more likely to behave more recklessly. According to the German Verkehrsklima 2020 (Traffic Mood 2020) study, women feel more insecure in traffic than men, and they want more controls and stricter laws. On the other hand, the "evil eye" design of vehicles is increasingly used by manufacturers to sell vehicles to drivers who want to feel strong and superior on the road. Accident reporting by the press and the police sometimes paints a distorted picture.
Traffic congestion
Traffic congestion has been increasing in streets and roads. Traditional traffic policy usually relies on expanding the roads to solve the congestion problem. From a global perspective, there are two important factors behind the increasing traffic jams: urbanisation and the purchase of more automobiles (often as status symbols) as prosperity increases. A return to more public and non-motorised transport is likely in the future.
Peak oil
Petroleum production is approaching its peak, or by some estimates the peak may already have been passed in the 2020s. The Earth's oil reserves are finite, and oil extraction will eventually become insufficient to power the current number of petroleum-fueled vehicles. Sooner or later in the 21st century, mobility must rely on other energy sources.
Mobility transition concept
Origins
There has been criticism of automotive cities and car dependency since at least the 1960s. In the Netherlands, Provo Luud Schimmelpennink's 1965 White Bicycle Plan was an early attempt to stop the rising death toll due to car-related traffic accidents, and to stimulate cycling as a safer and healthier alternative for short-distance travel in the city of Amsterdam. Although the plan itself was a complete failure, it drew widespread publicity and influenced urban planning ideas around the world – with the white bicycle becoming 'an almost mythical worldwide symbol for a better world'. It inspired the emergence of both strongly anti-car movements such as Kabouter (Gnome), Amsterdam Autovrij ("Amsterdam Car-Free") and De Lastige Amsterdammer ("The Troubled/Troublesome Amsterdammer"), as well as pro-cycling movements in Amsterdam and elsewhere in the Netherlands in the early 1970s. A prominent example was protest group Stop de Kindermoord ("Stop the Child Murder"), founded in 1972 (formalised in 1973) by a journalist from Eindhoven whose young daughter was killed in a traffic accident, and shortly thereafter another daughter of his was almost killed as well. The movement highlighted how lethally dangerous traffic had become for children in particular, and that the authorities had failed to acknowledge and address the problem. It mobilised parents, teachers, journalists, other citizens and politicians; even right-wing politicians, who had traditionally promoted automobile interests, were influenced by the campaign and became more willing to adopt preventive measures. In Autokind vs Mankind (1971) and On the Nature of Cities (1979), American author Kenneth R. Schneider vehemently criticised the excesses of automobile dependence and called for a struggle to halt and partially reverse negative developments in transportation, although he was largely ignored at the time.
An early theorist on mobility transitions was American cultural geographer Wilbur Zelinsky, whose 1971 paper "The Hypothesis of the Mobility Transition" formed the basis of what has become known as the Zelinsky Model. In 1975, Austrian civil engineer and transportation planner Hermann Knoflacher sought to promote cycling traffic in Vienna. He caricatured the enormous spatial demands of automobiles with his self-invented Gehzeug ("walking gear/vehicle").
Definitions and scope
The German dictionary Duden defines 'mobility transition' (German: Verkehrswende) as "fundamental conversion of public transport [especially with ecological objectives]" (German: „grundlegende Umstellung des öffentlichen Verkehrs [besonders mit ökologischen Zielvorstellungen]"). Adey et al. (2021) defined 'mobility transition' as 'the necessary and inevitable transformation from a world in which mobility is dominated by the use of fossil fuels, the production of greenhouse gases and the dominance of automobility to one in which mobility entails reduced or eliminated fossil fuels and GHG emissions and is less dependent on the automobile.'
According to a 2016 thesis paper by Agora Verkehrswende – a joint initiative of Stiftung Mercator and the European Climate Foundation – the goal of a traffic transition (Verkehrswende) in Germany is ensuring climate neutrality in transport by 2050. It must be based on two pillars:
Mobility transition (Mobilitätswende): The goal is a significant reduction of energy consumption. The mobility transition is intended to bring about a qualitative change in traffic behaviour (Verkehrsverhalten), in particular avoiding and relocating traffic. An efficient design of the traffic systems without restricting mobility should be achieved.
Energy transition in traffic (Energiewende im Verkehr, see also phase-out of fossil fuel vehicles): In order to decarbonise traffic, the conversion of the energy supply of traffic towards renewable energy is considered a necessity.
A mobility transition also includes a cultural change, in particular a re-evaluation of "the street". Currently, the primary purpose of streets is to direct traffic through the city with as little disruption as possible. In the future, the dominance of the car should give way to equal rights for all modes of transport.
In an expanded definition, the mobility transition is distinguished from a pure propulsion transition on the one hand to a fundamental mobility transition on the other:
Propulsion transition (Antriebswende): the gradual replacement of internal combustion engines by those powered by hydrogen, fuel cells or battery-electric power.
Traffic transition (Verkehrswende): private car traffic is reduced or replaced by other modes of transportation. In the large cities and metropolitan regions in particular, the focus is increasingly on establishing and spreading alternative means of transport - from the expansion of public transport to the promotion of so-called active transport (pedestrian and bicycle traffic), the approval of new electrified micro-vehicles such as e-scooters and the range of different mobility services (the so-called MaaS, "mobility as a service").
Mobility transition (Mobilitätswende): This perspective takes into account not only the distances travelled and the means of transport used for them, but also the socio-economic, cultural and spatial dynamics and constraints that cause the need to overcome distances. These include, for example, settlement and transport policies, housing and labour markets, social policy and migration. The need to quickly overcome distances is not understood as an invariant characteristic of people, but as part and prerequisite of the current, growth-oriented capitalist shape of society.
In some cases, a mobility transition is also presented as a paradigm shift in the 'understanding of ownership'. Collective use of means of transport makes it possible to use modes of transportation 'adapted to specific needs', such as carsharing, peer-to-peer carsharing and bicycle-sharing systems. It also enables different modes of transportation to be combined along a single route. Electric vehicles could better exploit their advantages when networked with other means of transport: adapted to the respective use, they can be small or large depending on the application and do not (always) have to be designed for long distances, although a suitable charging infrastructure is required. Under certain circumstances, in such an environment it would no longer be necessary to own a private vehicle.
In Germany, the mobility transition can be contrasted with the Bundesverkehrswegeplan 2030 ('Federal Transport Routes Plan 2030'). The mobility transition is based on avoiding traffic and shifting it to rail, whereas the Bundesverkehrswegeplan is based on the construction and expansion of trunk roads in Germany (including but not limited to the Autobahn). One transport scientist regards the transition as a "turning away from car subsidies through billions [of euros] in road network expansion", and sees a decisive change in the priorities of transport policy as a necessary condition for achieving it.
The Umweltbundesamt (German Environment Agency) announced that in 2018 the sum of all environmentally harmful subsidies in Germany was 65.4 billion euros, almost half of which related to traffic and transport. In the transport sector, such harmful subsidies actually increased between 2012 and 2018.
Changes in behaviour due to the COVID-19 pandemic
The COVID-19 pandemic made it clear that work and transport can be organised differently, even in a comparatively short time. An increased focus on working from home could save millions of tonnes of greenhouse gases.
Measures in passenger transport
Overview
Various measures have been proposed by different people and groups to achieve a mobility transition.
In a 2017 position paper, German think tank Agora Verkehrswende described how a climate-neutral conversion of transport would be possible by 2050 without sacrificing mobility. In addition to technological innovations, it considered new traffic concepts, regulatory measures, cultural change and multi-link transport chains (intermodal passenger transport). Amongst other things, there were also studies on this in November 2019 by the Verkehrsclub Deutschland (VCD, "Traffic Club Germany") and the Heinrich Böll Foundation.
Mobility transition
Various measures have been proposed to achieve the mobility transition – in particular a significant reduction in energy requirements and a change in traffic behaviour:
Major changes can succeed with the help of traffic avoidance and a shift towards sustainable transport in the form of pedestrian traffic, cycling, rail transport and local public transport. According to a 2010 report, each person in Germany in 2008 made an average of 3.4 trips a day, with an average length of 11.5 kilometres. On average, private cars were parked for around 22.5 hours a day, because they were used for only between 1 hour 19 minutes and 1 hour 28 minutes a day. Electric cars with a short range, bicycles, electric bicycles (e-bikes), pedelecs, cargo bikes, and more recently e-scooters, are usually well suited for a majority of these routes. The joint use of automobiles in carsharing could increase the utilisation of the vehicles and lead to fewer cars being needed overall. This could also reduce the land consumption of parking spaces and free up space for other uses. In 2002 and 2008, vehicles in Germany were occupied by an average of 1.5 people. One method of using passenger cars efficiently is the formation of carpools and the operation of ridesharing companies. Needs-based use of various sorts of low-emission vehicles can also serve to reduce fuel consumption. The latter measures would lead to an increase in energy and vehicle efficiency. Another component in the future mobility mix could be Neighborhood Electric Vehicles.
Numerous regulatory control measures are possible, for example congestion charges, aviation taxation and subsidies (such as a jet fuel tax and a departure tax), a reform of company car taxation, parking space management (for example through pay and display), or an extension of emissions trading to road traffic. The introduction of speed limits, or lowering existing speed limits, would also have an impact on greenhouse gas emissions such as CO2 (carbon dioxide) and NOx (nitric oxide and nitrogen dioxide). Passenger cars consume a disproportionately large amount of fuel at high speeds. A speed limit can also have secondary emissions-reducing effects, about which there is still considerable uncertainty: lower maximum speeds and longer travel times can contribute to a shift in traffic to rail and to the promotion of vehicles with lower engine power.
The externalities of traffic, namely the impact that air pollution caused by motor vehicles has on society and the environment, must also be taken into account here.
The nitrogen emissions crisis in the Netherlands, which indirectly caused the Dutch farmers' protests, convinced the government in November 2019 to lower the speed limit on motorways to 100 kilometres per hour during the day, from 6 am to 7 pm. In the evening and at night the old speed limits were maintained. Meanwhile, the State of the Netherlands v. Urgenda Foundation court case was decided in favour of its plaintiff Urgenda (initially in June 2015, upheld on appeal in October 2018, and finally confirmed by the Supreme Court of the Netherlands on 20 December 2019), which successfully forced the government to implement the measures necessary to reduce the Netherlands' CO2 emissions by 25% from 1990 levels by 2020. Although the government was free to choose which measures it would take to achieve this reduction, the plaintiff and other environmentalists had suggested throughout the legal process that lowering the speed limit was one of several effective options for doing so. Similar environmental arguments for speed limits have been proposed in Germany.
As one of several methods to mitigate the environmental impact of aviation, a shift from short-haul air traffic to other modes of transport, particularly high-speed trains, has been proposed. In several European countries, increasingly in the 2010s and early 2020s, some governments have even imposed short-haul flight bans on all airlines, while many governmental agencies, commercial companies, universities and NGOs have restricted or prohibited their employees from taking short-haul flights for journeys that can reasonably be made by train.
In the field of urban planning, there are concepts for walkability, the compact city (or 'city of short distances'), New Urbanism (or its variant New Pedestrianism), and car-free living. In research policy, there are demands to give more consideration to the consequences of motorised private transport in the form of practice- and solution-oriented research.
Further development of local public transport
According to a 2015 study by the Verkehrsclub Deutschland, local public transport in Germany was not customer-friendly enough. Cryptic route networks, opaque fare systems, ticket machines that are difficult to operate, draughty bus stops, and a lack of announcements about transfer and connection options were criticised. The club also called for better links between local public transport and other modes of transportation, including bike racks at bus stops, information on taking bikes on buses and trains, and options for switching to carsharing providers. Furthermore, the poor synchronisation of timetables was criticised, because it led to unnecessarily long waiting times for connecting buses or trains. As early as 2012, several local public transport companies in Bavaria and Saxony had reportedly been making efforts to improve the usability of ticket machines. Against this background, Federal Transport Minister Alexander Dobrindt called in 2017 for electronic tickets and a uniform tariff system for all transport associations to be established by 2019.
Since the 2010s, there have been frequent discussions on whether local public transport should be free of charge. The best-known example of free public transport is the Estonian capital Tallinn, where buses and trains have been free since 2013. By 2021, most counties in Estonia had also introduced free buses and trains. Public transport is also free throughout Luxembourg. In Germany, the cities of Monheim am Rhein and Langenfeld, Rhineland were testing free public transport as of September 2021.
Some cities have introduced mini electric buses, primarily in inner-city areas. The streets of the historic city centre of Aix-en-Provence, France are very narrow and closed to cars, taxis and normal bus traffic. In order to get people with restricted mobility to their destination, wheelchair-accessible electric minibuses operate there without a fixed timetable. Likewise, in the medieval old town of Regensburg, only electric minibuses still operate. Furthermore, two self-driving electric shuttles are in use in Regensburg's industrial park. Berlin and Göppingen also want to supplement their local public transport with electric, highly automated minibuses.
In some cities, cableways are built as part of local public transit. Such cableways can be found in places such as Medellín (see Metrocable (Medellín)), La Paz (see Mi Teleférico), New York (see Roosevelt Island Tramway), Portland (see Portland Aerial Tram), Algiers, Lisbon, Brest, Bozen, London (see Emirates Air Line (cable car)) and Ankara. Cable cars are electrically operated and have very low CO2 emissions compared to other modes of transport. At 50% capacity, a cable car causes 27 grams of CO2 per person-kilometre, a train with an electric locomotive 30 grams, a bus with a diesel engine 38.5 grams, and a car with a combustion engine as much as 248 grams. Furthermore, cable cars cause practically no noise pollution along the route, since the individual gondolas do not have their own drive but are moved by a central motor housed in the station. In Germany, cable cars have been built on the occasion of the Bundesgartenschau ('Federal Horticultural Show') in Berlin (see IGA Cable Car), Koblenz (see Koblenz cable car) and Cologne (see Cologne Cable Car). Compared to underground or suburban trains, cable cars are relatively cheap and can be built quickly. As of November 2021, there are projects to build more cable cars to supplement local public transit in Berlin, Bonn, Düsseldorf, Cologne, Munich, Stuttgart and Wuppertal.
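The per-passenger figures quoted above lend themselves to a quick back-of-the-envelope comparison. The following Python sketch is purely illustrative: the emission factors are the ones cited in the text, while the 10 km trip length is an assumed example value, not from the source.

```python
# Illustrative comparison of the per-passenger CO2 figures quoted above.
# The 10 km trip length is an assumed example value, not from the source.
EMISSION_FACTORS_G_PER_PKM = {          # grams CO2 per person-kilometre
    "cable car (50% capacity)": 27,
    "electric train": 30,
    "diesel bus": 38.5,
    "combustion-engine car": 248,
}

def trip_emissions(distance_km: float) -> dict:
    """Return grams of CO2 emitted per passenger for a trip of the given length."""
    return {mode: factor * distance_km
            for mode, factor in EMISSION_FACTORS_G_PER_PKM.items()}

if __name__ == "__main__":
    for mode, grams in trip_emissions(10).items():   # assumed 10 km commute
        print(f"{mode:>28}: {grams:7.1f} g CO2")
```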
Development is also affecting rural areas. Integrated public transport systems have come into play as a solution, playing an important role in the development of rural areas, especially in post-communist countries.
Propulsion and energy transition in transport
In order to achieve the energy transition in transport, it is considered necessary to refrain from burning petroleum-based fuel and to use more climate-friendly propulsion technologies or fuels. Electricity from renewable sources, or e-fuels or biofuels produced from green electricity, can serve as substitutes for petrol and diesel fuel.
Since the overall efficiency of e-fuels is far lower than direct electrification via electric cars, the German Advisory Council on the Environment has recommended restricting the use of electricity-based synthetic fuels to air and shipping traffic in particular, in order not to increase electricity consumption too much. For example, hydrogen-powered fuel cell vehicles (FCVs) require more than twice as much energy per kilometre as battery electric vehicles (BEVs), and vehicles with combustion engines powered by power-to-liquid fuels even need between four and six times as much. Battery vehicles therefore have significantly better energy efficiency than vehicles that are operated with e-fuels. In general, electric cars consume around 12 to 15 kWh of electrical energy per 100 km, while conventionally powered cars use the equivalent of around 50 kWh per 100 km. At the same time, the energy required for the production, transport and distribution of fuels such as petrol or diesel is also eliminated. In China in particular, the switch from internal combustion engines to electromobility is being promoted for health reasons (to avoid smog) in order to counteract the massive air pollution in the cities.
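The efficiency comparison above can be made concrete with a small sketch. Only the battery-electric figure (12–15 kWh/100 km) and the conventional-car figure (≈50 kWh/100 km equivalent) come from the text; the fuel-cell and e-fuel values below are derived simply by applying the quoted multipliers (more than 2× and 4–6×) to the battery-electric figure, so they are illustrative rather than measured values.

```python
# Rough energy-demand comparison per 100 km, based on the ratios quoted above.
# Only the BEV and conventional-car figures are taken from the text; the FCEV
# and e-fuel figures are derived here by applying the quoted multipliers to the
# BEV value, so treat them as illustrative.
BEV_KWH_PER_100KM = (12 + 15) / 2          # midpoint of the 12-15 kWh range

energy_per_100km = {
    "battery-electric vehicle": BEV_KWH_PER_100KM,
    "hydrogen fuel-cell vehicle (~2x BEV)": 2 * BEV_KWH_PER_100KM,
    "e-fuel combustion car (4-6x BEV)": 5 * BEV_KWH_PER_100KM,   # midpoint multiplier
    "conventional petrol/diesel car": 50,
}

for vehicle, kwh in energy_per_100km.items():
    print(f"{vehicle:>38}: {kwh:5.1f} kWh / 100 km")
```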
According to Canzler & Wittowsky (2016), the propulsion transition could also become the central building block of Germany's Energiewende, While the switch to renewable energies is already underway worldwide, the energy transition in transport is proving more difficult, especially with the switch from oil to sustainable energy sources. However, disruptive technologies (such as the development of more powerful and cheaper batteries or innovations in the field of autonomous driving) and new business models (especially in the field of digitalisation) can also lead to unpredictable, rapid and far-reaching changes in mobility.
New methods of getting around in urban traffic have also emerged:
Vienna
Vienna, the capital of Austria, has been consistently developing into a city that restructures public space and promotes local public transport. Viennese urban planner Hermann Knoflacher has stated: 'The money comes on foot or by bike.' Using urban space for parking is economically inefficient, whereas a car-free street increases the turnover of restaurants, clothing stores and retailers, which in turn creates new jobs.
The attractiveness of public transport can be stimulated by lowering the price of an annual pass: in Vienna one can use public transport with a subscription fee of 1 euro a day. Between 2012 and 2018 the number of annual ticket holders increased from 373,000 to 780,000. At the same time as the changeover, the city began to invest more heavily in local transport. In July 2018, some German cities announced that they would follow the Viennese model and lower the prices for annual tickets.
Luxembourg
Since 1 March 2020, local public transport across Luxembourg has been free of charge for everyone. The Grand Duchy thus became the first country in the world to introduce free local public transit. An exception to this is first class travel on the railways. A major reason for the overhaul was the increasingly problematic traffic jams on Luxembourg's roads.
Further examples
Several more significant examples of (potential) components and initiatives for mobility transition that have been proposed, studied, or put into practice include:
As an alternative to the Viennese model of the annual ticket, a citizen ticket is being discussed in some German municipalities as a new way of financing and using local public transport. It is to be financed by a levy for all citizens of a municipality and function as a kind of flat rate for buses and trains.
Phase-out of fossil fuel vehicles: In Germany, a resolution calling for a ban on the sale of new combustion-engine cars from 2030 was adopted by the Bundesrat in October 2016. Norway, on the other hand, wants no new cars with petrol or diesel engines to be registered from 2025, and ships and ferries to be registered only if fossil-free from 2030, and is therefore considered a leading nation in electromobility. The Netherlands is also planning a ban on the registration of cars with conventional drives from 2025. In China, all automotive groups are obliged to meet a quota for the production and sale of purely electric or plug-in hybrid vehicles.
There are numerous electromobility projects in Germany, such as the Modellregionen Elektromobilität and BeMobility. The German Association of Towns and Municipalities (DStGB) sees towns and municipalities as drivers and designers of the mobility transition and also supports a number of projects.
Critical Mass is a form of direct action for promoting more and safer cycling in cities around the world. When riding together through inner cities, cyclists draw attention to cycling as a form of individual transport, advocate for mobility transition and, in particular, more rights for cyclists, better cycling traffic networks and infrastructure, and more room for non-motorised traffic. The first Critical Mass action took place in September 1992 in San Francisco.
To improve air quality, efforts across Europe are being stepped up to introduce low-emission zones. A progressive approach is the French Crit'air scheme, which provides for different restrictions depending on air pollution. The applicable prohibitions can be viewed on the Internet or via a phone app. Electric or hydrogen-powered vehicles receive category 0 (green vignette) and can always drive anywhere. Similar emissions stickers were also issued in Germany.
Instead of a company car, individual companies offer their employees a mobility budget that can be used to pay for different means of transport for business purposes.
The city-state of Singapore has not allowed additional private cars since 1 February 2018 under its Vehicular Quota System. This is intended to promote the switch to other means of transport. It is the only country in the world which requires prospective vehicle owners to bid for a Certificate of Entitlement before they are allowed to own a vehicle for up to 10 years. The state only gives permission for a new car if another has been de-registered. Singapore was also the first country in the world to implement congestion pricing in 1975.
Since 2003, there has been a London congestion charge which drivers have to pay in Central London. From October 2017, older and more polluting cars and vans were additionally subject to a toxicity charge.
In many cities in Germany there are citizens' initiatives which, following the example of the Initiative Volksentscheid Fahrrad ("Cycling Referendum Initiative") in Berlin, advocate for mobility transition and "bicycle laws". In June 2018, the Berlin Mobility Act to promote cycling was passed in Berlin, also due to a successful application for a referendum.
Traffic lights are being tested in Karlsruhe as part of a pilot project which, in contrast to conventional pedestrian traffic lights, show a permanent green light for pedestrians and cyclists rather than for vehicles, interrupting it only when a vehicle approaches.
In Japan, it is generally illegal to park a car on the street; a car buyer must provide evidence of owning private parking space or renting a public parking space for the car. As of 2019, renting fees for public parking spaces in the more central districts of Tokyo cost about a month, while in residential areas on the outskirts of Tokyo they cost around a month. Only after the police have verified that the parking space exists and is large enough for the car the owner wants to buy does the car dealer approve the purchase and give the owner a parking sticker to put on the new car's front or rear window. The Japanese state has used regulations to discourage the sale of luxury cars and to encourage consumers to buy small, light-weight cars with small engines (see also: kei car) or to switch to local public transport.
In Spain, a general speed limit of 30 km/h in built-up areas was introduced in 2021. On narrow streets with only one lane (often found in historic city centres), the permitted speed was limited to a maximum of 20 km/h; for streets with more than one lane in each direction, the previously set speed limit of 50 km/h was maintained. A total of 509 people died in urban traffic accidents in Spain in 2019. The 2021 reduction of urban speed limits was intended to reduce the risk of pedestrians dying after being hit by a car by 80%.
With the educational motto Weniger Wagen wagen ("dare to have fewer cars"), the Roman Catholic Archdiocese of Cologne has sought to raise awareness, and has calculated: 'Due to mobility (journeys to work, committees, church services, etc.), around 16,370 tons of CO2 (as of 2012) are emitted annually in the Archdiocese of Cologne. This corresponds to a share of approx. 13 per cent of the archdiocese's total emissions.' In response, the Archdiocese stated it sought a 'strategic and practical reorientation of mobility', including stimulating cycling through the Pharr-Rad initiative (a pun on Pfarrer "priest" and Fahrrad "bicycle") and the BistumsTicket ("diocesan ticket"), which offers reduced fees for public transport travel by groups of 50 people or more to Catholic events organised within the archdiocese.
Short-haul flight ban
By July 2019, most political parties in Germany, including the Left Party, the Social Democrats, the Green Party and the Christian Democrats, had come to agree that all governmental institutions remaining in Bonn (the former capital of West Germany) should be moved to Berlin (the official capital since German reunification in 1990), because ministers and civil servants were flying between the two cities about 230,000 times a year, which was considered too impractical, expensive and environmentally damaging. The 500-kilometre distance between Bonn and Berlin could only be travelled by train in 5.5 hours, so either the train connections required upgrading, or Bonn had to be given up as the secondary capital.
Measures in freight transport
Sea freight
By far the largest part of the world's freight traffic is sea freight. In 2010, about 60,000 trillion tonne-kilometres were transported by sea, which was 85% of the world's total freight traffic. According to a 2015 forecast by Statista, by 2050 the volume of freight will have increased to four times the levels of 2010, while the share of sea freight will remain about the same.
Transporting goods by container ship is very efficient. Relatively little carbon dioxide (CO2) is emitted per tonne-kilometre compared to transport by truck (lorry). According to the Naturschutzbund Deutschland (NABU), trucks emit 50 grams of carbon dioxide per tonne-kilometre, while container ships emit only 15 grams. However, the mineral oil-based ship fuel used by container ships is particularly polluting; 90 per cent of all large ships run on heavy fuel oil (bunker fuel). Among other things, this means that emissions of toxic sulfur oxides are many times higher. To counteract this problem, the International Maritime Organization (IMO) lowered the limit value for sulfur in fuel from 3.5% to 0.5% in 2020.
Efficiency can be further increased and fuel consumption reduced by building the ships even larger.
There are innovations to harness wind power for sea transportation. These include cylindrical sails that can be retrofitted to cargo ships (making them "rotor ships" or "Flettner ships") and can reduce fuel consumption. Another option is a towing kite construction, which was originally developed in 2001 by the Hamburg-based company SkySails and is now being sold by AirSeas. The sail has an area of 1,000 square metres and was developed to reduce fuel consumption on cargo ships by up to 20%. As of 2019, the aviation group Airbus was testing this idea on four of its own freighters with the aim of saving up to 8,000 tonnes of carbon dioxide emissions.
Inland navigation
As inland navigation (also known as 'inland waterway transport' (IWT) or 'inland shipping') is a relatively environmentally friendly option for freight transport (similar to rail freight transport), researchers and policy makers have been aiming to shift the volume of cargo transported by more polluting means towards inland navigation (for example, as part of the 2019 European Green Deal). According to the Research Information System for Mobility and Traffic (FIS; an agency of the German Transport Ministry), deficits in the competitiveness of German inland navigation, especially in an international comparison, are responsible for its stagnating transport volume. A water infrastructure that is not optimally developed, with insufficient channel depths and bridge clearance heights, leads to low loading capacities and thus to high costs. A notable exception are the waterways of the Rhine area, which also have by far the highest transport volume. Furthermore, the German inland waterway fleet is quite old by international comparison (an average age of 45 years in 2013).
Inland navigation is closely related to seaport hinterland traffic. For example, in the modal split of hinterland traffic at the Dutch and Belgian seaports (Rotterdam, Amsterdam, Antwerp and Zeebrugge), inland shipping has a share of around 55%, while in Germany it usually remains below 10% of hinterland traffic. The reason for this is the better development of the Rhine waterways. Furthermore, the majority of the 250 important inland ports in Germany are owned by large companies that handle transport goods from third-party companies only to a small extent. Against this background, the FIS has called for the expansion and maintenance of German waterways. The number and carrying capacity of German inland waterway vessels have remained constant in the early 21st century, at around 2.61 million tonnes in 2015.
Various approaches to energy efficiency and air pollution reduction are being tested and researched in inland shipping. These include propulsion configurations such as the father–son concept, diesel-electric hybrid drives, hydrodynamic optimisations, fuel-water emulsion injection, SCR catalysts, diesel particulate filters, gas-to-liquid fuels (GTL) and liquefied natural gas (LNG), some of which can also be used in combination and are suitable for retrofitting existing vessels. With an engine funding programme, the German Transport Ministry supports inland navigation companies in the installation and retrofitting of low-emission engines or other emission-reducing technologies. The funding rate is up to 70%.
Road freight and modal share
In road freight transport, some transport companies are proposing new technologies such as trolleytrucks, electric trucks and electric cargo bikes. Package delivery services are experimenting with new concepts of smart logistics. Trolleytrucks with an auxiliary battery offer the possibility of lower-emission long-distance truck transport that is also more energy-efficient than battery-powered trucks. Equipping motorways with overhead lines for heavy goods vehicles (HGVs) has the advantage that HGVs would only have to carry small batteries, as only comparatively short distances would be covered in battery-only mode. At the same time, trolleytrucks would be a cost-effective way to make freight transport climate-friendly, as the electrification of motorways, at a cost of around 3 million euros per kilometre, does not represent too large a financial outlay.
Another option to reduce CO2 emissions and environmental problems is to shift truck traffic to freight rail and inland waterway transport. This process is also known as modal shift. The German Environment Agency gives the climate impact of transport by truck in the reference year 2020 as 126 grams of CO2 equivalents per tonne-kilometre on average (g/tkm). According to the Environment Agency, transport by freight train has a climate impact of 33 g/tkm and transport by inland waterway vessel has a climate impact of 43 g/tkm, making rail and ship significantly more climate-friendly.
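As a worked example of the modal-shift argument, the sketch below applies the Environment Agency's 2020 emission factors quoted above to a hypothetical freight flow; the tonnage and distance are assumed values chosen for illustration, not figures from the source.

```python
# Worked example of the modal-shift comparison, using the 2020 emission
# factors quoted above (g CO2-equivalent per tonne-kilometre). The freight
# volume and distance below are assumed example values.
FACTORS_G_PER_TKM = {"truck": 126, "freight train": 33, "inland vessel": 43}

def freight_emissions_tonnes_co2e(tonnes: float, km: float, mode: str) -> float:
    """CO2-equivalent emissions in tonnes for moving `tonnes` of goods over `km`."""
    grams = FACTORS_G_PER_TKM[mode] * tonnes * km
    return grams / 1e6                      # grams -> tonnes

if __name__ == "__main__":
    tonnes, km = 1_000, 500                 # assumed: 1,000 t moved 500 km
    truck = freight_emissions_tonnes_co2e(tonnes, km, "truck")
    rail = freight_emissions_tonnes_co2e(tonnes, km, "freight train")
    ship = freight_emissions_tonnes_co2e(tonnes, km, "inland vessel")
    print(f"truck: {truck:.1f} t CO2e, rail: {rail:.1f} t CO2e, ship: {ship:.1f} t CO2e")
    print(f"saved by shifting truck -> rail: {truck - rail:.1f} t CO2e")
```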
Although the European Union and its member states strongly promote the use of inland waterways and rail in combination with truck transport, in some cases financially, only road haulage developed positively in the 2010s, while shipping and rail stagnated or recorded declines. For 2016, the Federal Statistical Office of Germany reported a decline in transport performance of 3.7% for inland waterways, a decline of 0.5% for rail and growth of 2.8% for trucks. In 2015, with the transport volume growing by 1.1%, there was a plus of 1.9% for road, a minus of 1% for rail and a minus of 3.2% for inland waterways. Overall, trucks account for 71% of transport performance.
With growing containerization, however, a combination of different modes of transport (intermodal freight transport) becomes more efficient. In so-called multimodal or combined transport, the truck only has to cover the last mile between the port or rail terminal and the customer. Measures to promote combined transport include, for example:
The Port of Rotterdam has set a quota for the modal share of hinterland transport modes: the truck share is to drop from 47% to 35%, while rail is to provide 20% instead of 13% in the future, and the transport performance of inland waterways is to increase from 40% to 45%.
Instead of burdening trunk roads with the transport of heavy goods such as industrial plants or components for wind turbines, German transport companies have been required since 2010 to use the electronic portal Procedural Management of Large and Heavy Goods Transport (VEMAGS) to check whether alternative transport routes such as ship and rail are available, and if not, to explain this in their application for a permit to transport the goods by road.
With the promotion of handling facilities for combined transport, the German federal government supports the shift in traffic to inland waterways and freight trains.
The Lower Rhine Chamber of Commerce and Industry, the Schifferbörse and the Development Centre for Ship Technology and Transport Systems (DST) in Duisburg jointly offer an additional training course. Apprentice forwarding and logistics clerks are thus meant to learn about the advantages of the alternative modes of transport, rail and inland waterway, and to integrate them more easily into their everyday work. Frequently, the standard curriculum covers only road freight transport, supplemented by sea freight or air transport.
See also
Energy transition
Jet fuel tax
Phase-out of fossil fuel vehicles
Urban sprawl
References
Literature
Udo Becker: Grundwissen Verkehrsökologie: Grundlagen, Handlungsfelder und Maßnahmen für die Verkehrswende. München 2016, ISBN 978-3-86581-993-2.
Andrej Cacilo: Wege zu einer nachhaltigen Mobilität: Im Spannungsfeld kultureller Werte, ökonomischer Funktionslogik und diskursrationaler Wirtschafts- und Umweltethik. 2., durchges. Aufl., Metropolis, Marburg 2021, ISBN 978-3-7316-1473-9.
Weert Canzler, Andreas Knie: Schlaue Netze – Wie die Energie- und Verkehrswende gelingt. München 2013, ISBN 978-3-86581-440-1.
Weert Canzler, Andreas Knie, Lisa Ruhrort, Christian Scherf: Erloschene Liebe? Das Auto in der Verkehrswende. Soziologische Deutungen. transcript, Bielefeld 2018, ISBN 978-3-8376-4568-2.
Hermann Knoflacher: Zurück zur Mobilität! Anstöße zum Umdenken. Ueberreuter, Wien 2013, ISBN 978-3-8000-7557-7.
Markus Hesse: Verkehrswende. ökologisch-ökonomische Perspektiven für Stadt und Region. Marburg 1993, ISBN 978-3-926570-62-8.
Energy policy
Environmental policy
Sustainability
Transport and the environment
Urban planning | Mobility transition | [
"Physics",
"Engineering",
"Environmental_science"
] | 8,241 | [
"Transport and the environment",
"Energy policy",
"Physical systems",
"Transport",
"Urban planning",
"Environmental social science",
"Architecture"
] |
68,669,415 | https://en.wikipedia.org/wiki/Telosa | Telosa is a proposed utopian planned US city conceived by American billionaire Marc Lore and announced in September 2021. The project has a target population of 5 million people by 2050, with the first phase of construction expected to house 50,000. The location had initially not been chosen, with the project's planners intending the city to be built on cheap land in Appalachia or the American West desert.
The name Telosa is derived from the Ancient Greek word telos, in this case meaning "purpose".
Planning
Telosa was conceived by former Walmart U.S. eCommerce president and billionaire Marc Lore. In a statement announcing his resignation from Walmart, Lore expressed his desire to construct a "city of the future" based on a "reformed version of capitalism". Lore refers to his design philosophy for the city as "equitism", described as "a new model for society, where wealth is created in a fair way... It's not burdening the wealthy; it's not increasing taxes. It is simply giving back to the citizens and the people the wealth that they helped create".
Lore hired the architectural firm Bjarke Ingels Group, owned by Danish architect Bjarke Ingels, to handle the proposed city's master planning.
Features
Telosa is planned to be a 15-minute city, with workplaces, schools, and basic goods and services being within a 15-minute commute from residents' homes. Vehicles that are powered by fossil fuels will not be permitted within the city, with an emphasis instead being placed upon walkability and the use of scooters, bicycles, and autonomous electric vehicles.
A massive skyscraper, dubbed "Equitism Tower", is conceived to serve as a "beacon for the city". The skyscraper's projected features include space for water storage, aeroponic farms, and a photovoltaic roof.
The proposed land ownership in the city is based on Georgist principles, as advocated by political economist Henry George in his 1879 book Progress and Poverty. Under the proposed rules, anyone would be licensed to build, keep or sell a home, building or any other structure, and residents would share ownership of the land under a community endowment.
Possible locations
The project's planners intend the city to be built on cheap desert land in a location not yet decided, with Utah, Idaho, Nevada, Arizona, Texas, and Appalachia proposed as potential locations.
Reception
Writing in Timeout.com in September 2021, Ed Cunningham stated that "the blueprint designs are, depending on your taste, either dazzlingly utopian or unsettlingly dystopian. There’s plenty of innovative architecture on display, alongside futuristic visions of public transport and spaces filled with greenery and nature." It has been criticized as being an unrealistic vanity project which would be less sustainable than building upon existing urban areas.
See also
Neom
References
External links
Architecture related to utopias
Georgist communities
Proposed populated places in the United States
Utopian communities in the United States | Telosa | [
"Engineering"
] | 621 | [
"Architecture related to utopias",
"Architecture"
] |
68,670,913 | https://en.wikipedia.org/wiki/Musca%20depicta | Musca depicta ("painted fly" in Latin; plural: muscae depictae) is a depiction of a fly as a conspicuous element of various paintings. The feature was widespread in 15th- and 16th-century European paintings, and its presence has been subject to various interpretations by art historians.
Interpretations
James N. Hogue, writing in the Encyclopedia of Insects, lists the following reasons behind musca depicta: as a jest; to symbolize the worthiness of even minor "objects of creation"; as an exercise in artistic privilege; as an indication that the portrait is post mortem; and as an imitation of works of previous painters. Many art historians argue that the fly holds religious significance, carrying connotations of sin, corruption or mortality.
Another theory is that Renaissance artists strove to demonstrate their mastery in portraying nature, with André Chastel writing that musca depicta became an 'emblem of the avant-garde in painting' at the time. There exist several anecdotes from the biographies of various artists who, as apprentices, allegedly painted a fly with such skill as to fool their teacher into believing it was real. Well-known examples are those about Giotto as an apprentice of Cimabue and Andrea Mantegna and his master Francesco Squarcione. Kandice Rawlings argues that since these anecdotes were widespread, they contributed to the humorous interpretation of some trompe-l'œil flies.
Commenting on the Czech portrait of Francysk Skaryna, Ilya Lemeshkin brings attention to the fly painted on a corner of a page of Skaryna's Bible. He argues that the function of the fly is to secularize the image – in other words, to indicate that the depicted object is not a cult object to be venerated, but simply a painting.
Andor Pigler surmises that the painted fly served an apotropaic function, that is, as a type of magic intended to turn away harm or evil influences, such as deflecting misfortune or averting the evil eye. Kandice Rawlings challenges this notion, writing that Pigler fails to take into account other traditions associated with flies.
Trompe-l'œil fly
Both Konečný, writing about Dürer's Feast of the Rosary (copy), and Lemeshkin, writing about Skaryna's portrait, observe that the flies painted in each do not "sit" on the underlying painted objects, but rather on the surface of the painting itself. Based on this observation, as well as noting the disproportionately large relative size of the flies compared with the other depicted objects, Konečný argues that this was intended as a trompe-l'œil (illusion): the fly appears to sit on the painting rather than within the depicted scene. He also remarks that the fly in the Portrait of a Carthusian (pictured above) serves to intensify the illusion of the trompe-l'œil frame. The Portrait of a Carthusian, dated about 1446, is the earliest known example of a panel painting with a trompe-l'œil fly.
Trompe-l'œil flies are recognized in over twenty Netherlandish, German, and north Italian paintings dated between 1450 and the 1510s, and are analysed by André Chastel in a book eponymously dedicated to musca depicta. Of them, eight are portraits, thirteen are religious miniatures, and only two are large-size works. Chastel remarks that trompe-l'œil flies were a passing fad, with artists later having found other ways to demonstrate their skill.
In popular culture
The musca depicta is a recurring topic in the 2019 film, The Burnt Orange Heresy. The main character, an art dealer, explains to a woman he meets that it signifies corruption.
Gallery
References
Further reading
Kemp, Cornelia (2003) Fliege, In: Reallexikon zur deutschen Kunstgeschichte, Pt. 9 pp. 1196–1221
The section Trompe l'oeil of the book lists many paintings with musca depicta, classified into categories "Portrait painting", "Madonna", "Still Life", etc.
Weixlgärtner, Arpad, "Die Fliege auf dem Rosenkranzfest" (1928) - In: Mitteilungen der Gesellschaft für vervielfältigende Künste (1928) pp. 20-25
Konečný states that this text was the first to bring attention to the fly as a notable difference between the classical version of the Feast of the Rosary and its copies. According to Konečný, Weixlgärtner is the only author who gave a detailed description of the fly, identifying it as a blue bottle fly rather than a common housefly.
Insects in art
Flies
Visual motifs | Musca depicta | [
"Mathematics"
] | 1,014 | [
"Symbols",
"Visual motifs"
] |
68,671,395 | https://en.wikipedia.org/wiki/Chemiluminescent%20immunoassay | Chemiluminescent immunoassay (CLIA) is a type of immunoassay employing chemiluminescence.
See also
Enzyme-linked immunosorbent assay (ELISA)
References
Immunologic tests | Chemiluminescent immunoassay | [
"Biology"
] | 53 | [
"Immunologic tests"
] |
68,671,459 | https://en.wikipedia.org/wiki/Eta%20Octantis | Eta Octantis, Latinized from η Octantis, is a solitary star located in the southern circumpolar constellation Octans. It has an apparent magnitude of 6.19, making it faintly visible to the naked eye. The object is situated at a distance of 358 light years but is approaching the Solar System with a heliocentric radial velocity of .
Eta Octantis has a stellar classification of A1 Va, indicating that it is an ordinary A-type main sequence star. At present it has 2.37 times the Sun's mass and 2.6 times the Sun's radius. It shines with a luminosity of from its photosphere at an effective temperature of 9,500 K, giving a white hue. Eta Octantis is a rapidly rotating star, with a projected rotational velocity of , and is estimated to be 547 million years old, having completed 72% of its main sequence lifetime.
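The quoted radius and effective temperature determine the star's luminosity through the Stefan–Boltzmann law, L = 4πR²σT⁴. The short Python sketch below is a back-of-the-envelope estimate based only on those two figures; it is illustrative and not the catalogued luminosity value.

```python
import math

# Illustrative estimate of Eta Octantis' luminosity from the Stefan-Boltzmann
# law, L = 4*pi*R^2*sigma*T^4, using the radius and temperature quoted above.
# This is a back-of-the-envelope figure, not the catalogued luminosity.
SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8               # solar radius, m
L_SUN = 3.828e26              # solar luminosity, W

radius = 2.6 * R_SUN          # 2.6 solar radii (from the text)
t_eff = 9500.0                # effective temperature in kelvin (from the text)

luminosity = 4 * math.pi * radius**2 * SIGMA * t_eff**4
print(f"Estimated luminosity: {luminosity / L_SUN:.0f} L_sun")
```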
References
Octans
A-type main-sequence stars
096124
4312
053702
PD-83 00386
Octantis, 11
Octantis, Eta | Eta Octantis | [
"Astronomy"
] | 229 | [
"Octans",
"Constellations"
] |
68,672,890 | https://en.wikipedia.org/wiki/NGC%20839 | NGC 839 is a lenticular galaxy located in the constellation Cetus. It was discovered November 28, 1785 in a sky survey by Wilhelm Herschel. It is one of the galaxies that are part of the quadruplet family HGC 16, along with the unbarred lenticular galaxy NGC 838.
NGC 839 is a luminous infrared galaxy (LIRG) that shows signs of high amounts of star formation; therefore, it is also classified as a starburst galaxy. It is similar in appearance to Messier 82, suggesting a similar formation history.
References
Lenticular galaxies
Cetus
0839
Luminous infrared galaxies
Starburst galaxies
008254 | NGC 839 | [
"Astronomy"
] | 135 | [
"Cetus",
"Constellations"
] |
68,674,273 | https://en.wikipedia.org/wiki/Swedish%20Agency%20for%20Accessible%20Media | The Swedish Agency for Accessible Media (, MTM), formerly the Audiobook and Braille Library (, TPB), is a Swedish governmental administrative agency under the Ministry of Culture.
The agency's task is to work in collaboration with other libraries in the country to ensure that everyone has access to literature and social information based on their own abilities, regardless of reading ability or disability, and to make easy-read literature available. For example, the agency must ensure that people with reading and writing difficulties/dyslexia and visual impairments have access to literature in media adapted for them: audiobooks, Braille books, tactile picture books and e-books. All audiobooks are made in DAISY format. DAISY stands for Digital Accessible Information System and is an open, internationally established standard. In addition to cooperation with other area libraries on lending accessible media, the agency also has its own program to lend Braille books. It is also working on developing technology for media for people with reading disabilities.
The available books and newspapers are downloaded from the agency's digital library Legimus. In March 2016, there were over 100,000 audiobooks, more than 18,000 Braille titles, around 3,000 e-books and 150 books in sign language.
The (), and as of 1 August 2010, the (), are part of MTM.
The agency is located in Bylgiahuset in Malmö and has operated there since 1 January 2020.
History
A Braille library was established in Stockholm in 1892 by Amy Segerstedt, director of Tysta skolan (lit. 'the Silent School'), a private school for the deaf. It moved into the same building as the Swedish Association of the Blind () in 1895 and was taken over by the association in 1912.
The Swedish Association of the Blind began lending audiobooks in 1955. Library activities continued when the association changed its name to the in 1977.
The Audiobook and Braille Library became an authority in 1980. When the agency was established, all books were transferred from the Swedish Association of the Visually Impaired to the agency, which thus became the lending center for audiobooks and braille books.
On 1 January 2013, the Audiobook and Braille Library changed its name to ('the Agency for Accessible Media'). One of the reasons for the name change was that its assignment has been broadened from audiobooks and Braille books to include other accessible media.
Publications
Since 2015, MTM has taken over the state's responsibility for publishing and distributing easy-to-read literature and making easy-to-read news information available through the publication of the easy-to-read news magazine .
The agency publishes three free publications, , , and .
Nordic cooperation
MTM cooperates with similar agencies in the Nordic countries: the Norwegian Library of Talking Books and Braille, Nota in Denmark, in Finland; and the Icelandic . An agreement was signed in 2009 which allows accessible literature to be shared between these countries. The agreement increases user access and also eliminates unnecessary duplication of work in creating accessible versions.
MTM's awards
MTM has two awards: ('Reading Ambassador of the Year') and ('Reading Gold'). The award is presented to a reading ambassador or narrator (for recordings) who has made outstanding efforts to promote reading in the care sector. recognizes organizations or institutions that do an excellent job of enabling people with reading difficulties or disabilities to read on their own terms. Previously, the Amy Award () and the Best Easy-Reading Library Award () were awarded, now combined and known as .
Current awards
('Reading Gold') is MTM's accessibility award, presented to an individual or organization that has made an exciting or progressive contribution to accessible media during the year. Formerly known as the Amy Award, it is named after Amy Segerstedt, who founded the Association for Braille in 1892, a direct predecessor of MTM.
Recipients
2018 – The ('youth reading for the elderly') project by Helena Pennlöv Smedberg and Laven Fathi at Gottsunda Library in Uppsala
2019 – The Sustainable Poetry project in Trelleborg, project leader Maria Glawe
2020 – Eva Fridh and Martin von Knorring for a cookbook for the visually-impaired
The award ('Reading Ambassador of the Year Award') is presented to a reading ambassador or narrator for efforts to promote reading in care for disabled or elderly people.
Recipients
2012 – Marie Schelander, Härryda
2013 – Barbro Granberg and Helena Oskarsson, Piteå
2014 – Ann Erixson, Halmstad
2015 – Susanne Sandberg, Skövde
2016 – Ingrid Jonsson, Lidköping
2017 – Ingeborg Albrecht, Ystad
2018 – Bitte Sahlström, Östhammar
2019 – Agneta Json Granemalm, Ljungby
2020 – Sebastian Åkesson
Previous awards
Amy Award
The Amy Award was MTM's accessibility award, presented to an individual or organization that made an exciting or progressive contribution to accessible media during the year. In 2018, the Amy Award and the Best Easy-Reading Library Award were merged to form .
Recipients
2010 – Minabibliotek.se, six libraries in the Umeå region
2011 – Komvux Kärnan in Helsingborg
2012 – Heidi Carlsson Asplund, librarian and project manager
2013 – Anna Fahlbeck, librarian, Linköping library
2014 – Anne Ljungdahl, school library developer, Västerås
2015 – Jenny Edvardsson, teacher at Wendesgymnasiet, Kristianstad
2016 – Göteborg University Library's reading service
2017 – no award
Best Easy-Reading Library
The prize was awarded to a library that recognized the need for easy reading among several target groups and actively worked with marketing and well-planned information about easy reading.
Recipients
2009 – Norrköping Library
2010 – Sundbyberg Library
2011 – Strängnäs Library
2012 – Mjölby Library
2013 – Värnamo Library and Gävle Library
2014 – Halmstads Library
2015 – Linköpings Library
2016 – Tumba Library
2017 – no award
See also
Scandinavian Braille
References
External links
The Swedish Agency for Accessible Media
Library-related organizations
Accessibility
Government agencies of Sweden
Libraries for the blind
Deaf culture in Sweden | Swedish Agency for Accessible Media | [
"Engineering"
] | 1,289 | [
"Accessibility",
"Design"
] |
68,674,556 | https://en.wikipedia.org/wiki/Smart%20Living%20Lab | The Smart Living Lab is an academic research and development center dedicated to contribute to the future of the built environment. Located in Fribourg in the Bluefactory innovation district, it is affiliated with the Switzerland Innovation Park Network West EPFL. This living lab focuses its research activities on human comfort and well-being in indoor spaces, environmental performance of buildings, and the digital transformation of the architecture, engineering and construction (AEC) industry.
History
A collaboration between the EPFL, the School of Engineering and Architecture of Fribourg, and the University of Fribourg, the Smart Living Lab has conducted interdisciplinary research since 2014 on construction technologies, energy systems, building user behavior, and design processes. It was initiated when the associated EPFL Fribourg campus was created.
In 2017, the NeighborHub solar pavilion was designed at the Smart Living Lab through the association of its partner institutions, and the Geneva University of Art and Design. By demonstrating sustainable technologies and architecture, this project won the Solar Decathlon competition in the United States. Ever since, it has served as a community center in the Bluefactory innovation district and as a research prototype for the Smart Living Lab.
In 2018, the Smart Living Lab launched a parallel study mandate for the construction of an experimental building. In 2019, the Smart Living Lab published the results of preliminary research dedicated to its future building in the editorial project "Towards 2050" by Park Books. The building is intended to meet environmental goals aligned with the Energy Strategy 2050 of the Swiss Confederation 30 years ahead of schedule. In this context, the building will be studied to investigate ways to reduce power consumption and greenhouse gas emissions. The Smart Living Lab building, which is scheduled to open in 2024, will provide workspace for the 130 members of 11 research teams from the EPFL, the School of Engineering and Architecture of Fribourg, and the University of Fribourg.
In 2021, the Smart Living Lab joined the European network of living labs Enoll. Also in 2021, the Smart Living Lab demonstrated the recycling of concrete for the construction of a footbridge prototype.
Structure
The Smart Living Lab is composed of research groups from three universities in Fribourg:
The EPFL: Structural Xploration Lab (SXL), Laboratory of Construction and Architecture (FAR), Integrated Comfort Engineering (ICE), Human-Oriented Built Environment Lab (HOBEL), and Building2050 (BUILD)
The School of Engineering and Architecture of Fribourg: Institute of Applied Research in Energy Systems (ENERGY), Institute of Architecture: Heritage, Construction and Users (TRANSFORM), and Institute of Construction and Environmental Technologies (iTEC),
The University of Fribourg: international institute of management in technology (iimt), Human-IST Institute, and Institute for Swiss and International Construction Law
Publication
References
External links
Website of the Smart Living Lab
Website of the EPFL
Website of the EPFL Fribourg
Website of the School for engineering and architecture Fribourg
Website of the University of Fribourg
Website of the Switzerland Innovation Park Network West EPFL
University research institutes
Architecture organizations
Sustainability and environmental management | Smart Living Lab | [
"Engineering"
] | 643 | [
"Architecture organizations",
"Architecture"
] |
68,675,721 | https://en.wikipedia.org/wiki/Thermal%20laser%20epitaxy | Thermal laser epitaxy (TLE) is a physical vapor deposition technique that utilizes irradiation from continuous-wave lasers to heat sources locally for growing films on a substrate. This technique can be performed under ultra-high vacuum pressure or in the presence of a background atmosphere, such as ozone, to deposit oxide films.
TLE operates at power densities between 10⁴ and 10⁶ W/cm², which results in evaporation or sublimation of the source material, with no plasma or high-energy particle species being produced. Despite operating at comparatively low power densities, TLE is capable of depositing many materials with low vapor pressures, including refractory metals, a process that is challenging to perform with molecular beam epitaxy.
Physical process
TLE uses continuous-wave lasers (typically with a wavelength of around 1000 nm) located outside the vacuum chamber to heat sources of material in order to generate a flux of vapor via evaporation or sublimation. Owing to the localized nature of the heat induced by the laser, a portion of the source may be transformed into a liquid state while the rest remains solid, such that the source acts as its own crucible. The strong absorption of light causes the laser-induced heat to be highly localized via the small diameter of the laser beam, which can also have the effect of confining the heat to the axis of the source. The resulting absorption corresponds to a typical photon penetration depth on the order of 2 nm due to the high absorption coefficients of α ≈ 10⁵ cm⁻¹ of many materials. Heat loss via conduction and radiation further localizes the high-temperature region close to the irradiated surface of the source. The localized character of the heating enables many materials to be grown by TLE from freestanding sources without a crucible. Owing to the direct transfer of energy from the laser to the source, TLE is more efficient than other evaporation techniques such as evaporation and molecular beam epitaxy, which typically rely on wire-based Joule heaters to reach high temperatures.
By heating the source, a flux of vapor is produced, the pressure of which frequently has an approximately exponential relation to temperature. The vapor is then deposited onto a laser-heated substrate. The very high substrate temperatures achievable by laser heating allow the use of adsorption-controlled growth modes, similar to molecular beam epitaxy, ensuring precise control of the stoichiometry and temperature of the deposited film. This precise control is valuable for growing thin-film heterostructures of complex materials, such as high-Tc superconductors. By positioning all lasers outside of the evaporation chamber, contamination can be reduced compared to using in situ heaters, resulting in highly pure deposited films.
The deposition rate of the vapor impinging upon the substrate is controlled by adjusting the power of the incident source laser. The deposition rate frequently increases exponentially with source temperature, which in turn increases linearly with incident laser power. Stability in the deposition rate may be achieved by continuously moving the laser beam around the source, while compensating for any coating of the laser optics inside the TLE chamber.
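Because the vapor pressure of most sources rises roughly exponentially with temperature, small changes in laser power translate into large changes in deposition rate. The Python sketch below illustrates this qualitatively by feeding an Arrhenius-type vapor-pressure law into the Hertz–Knudsen expression for the evaporation flux; the material parameters are placeholder values chosen for illustration and do not describe any particular TLE source.

```python
import math

# Qualitative illustration of why the deposition rate in TLE responds so
# strongly to source temperature: an Arrhenius-type vapor-pressure law fed
# into the Hertz-Knudsen expression for the evaporation flux.
# The parameters below are placeholder values for illustration only; they do
# not describe any specific source material used in TLE.
K_B = 1.380649e-23            # Boltzmann constant, J/K
AMU = 1.66053907e-27          # atomic mass unit, kg

def vapor_pressure(t_kelvin: float, p0=1.0e11, activation_j=6.0e-19) -> float:
    """Very rough p(T) = p0 * exp(-Ea / kT) in pascals (placeholder constants)."""
    return p0 * math.exp(-activation_j / (K_B * t_kelvin))

def evaporation_flux(t_kelvin: float, mass_amu=50.0) -> float:
    """Hertz-Knudsen flux (atoms per m^2 per s) from a surface at temperature T."""
    m = mass_amu * AMU
    p = vapor_pressure(t_kelvin)
    return p / math.sqrt(2 * math.pi * m * K_B * t_kelvin)

for temperature in (2000, 2200, 2400):    # kelvin
    print(f"T = {temperature} K -> flux ~ {evaporation_flux(temperature):.2e} atoms m^-2 s^-1")
```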
The gas in the chamber can be incorporated into the deposited film. With the addition of an oxygen or ozone atmosphere, oxide films can readily be grown with TLE at pressures up to 10⁻² hPa.
History
Shortly after the invention of the laser by Theodore Maiman in 1960, it was quickly recognized that a laser could act as a point source to evaporate source material in a vacuum chamber for fabricating thin films. In 1965, Smith and Turner succeeded in depositing thin films using a ruby laser, after which Groh deposited thin films using a continuous-wave CO2 laser in 1968. Further work demonstrated that laser-induced evaporation is an effective way to deposit dielectric and semiconductor films. However, issues occurred with regard to stoichiometry and the uniformity of the deposited films, thus diminishing their quality compared to films deposited by other techniques. Experiments to investigate the deposition of thin films using a pulsed laser at high power densities laid the foundation for pulsed laser deposition, an extremely successful growth technique that is widely used today.
Experiments with continuous-wave lasers continued throughout the latter half of the twentieth century and highlighted several advantages of continuous-wave laser evaporation, including low power densities, which can reduce surface damage to sensitive films. However, congruent evaporation from compound sources proved challenging with continuous-wave lasers, and film deposition was typically limited to sources with high vapor pressures because of the low continuous-wave power densities available.
In 2019, the evaporation of sources using continuous-wave lasers was rediscovered at the Max Planck Institute for Solid State Research and dubbed "thermal laser epitaxy". This new technique uses elemental sources illuminated by high-power continuous-wave lasers (typically with output powers of around 1 kW at a wavelength of 1000 nm), allowing the deposition of low-vapor-pressure materials such as carbon and tungsten while avoiding issues with congruent evaporation from compound sources.
References
External links
Thermal Laser Epitaxy - Max Planck Institute for Solid State Research
Physical vapor deposition techniques
Thin film deposition
Semiconductor device fabrication
Crystallography
Methods of crystal growth | Thermal laser epitaxy | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,079 | [
"Thin film deposition",
"Microtechnology",
"Methods of crystal growth",
"Coatings",
"Thin films",
"Materials science",
"Semiconductor device fabrication",
"Crystallography",
"Condensed matter physics",
"Planes (geometry)",
"Solid state engineering"
] |
68,676,174 | https://en.wikipedia.org/wiki/Acoustic%20communication | Acoustic communication is communication by means of sound, such as:
Underwater acoustic communication
Acoustic communication in aquatic animals
Acoustic communication in fish
Auditory animal communication
Human speech
Bird vocalization
Acoustics | Acoustic communication | [
"Physics"
] | 36 | [
"Classical mechanics",
"Acoustics"
] |
68,678,155 | https://en.wikipedia.org/wiki/Hebb%E2%80%93Williams%20maze | The Hebb–Williams maze is a maze used in comparative psychology to assess the cognitive ability of small animals such as mice and rats. It was developed by Donald O. Hebb and his student Kenneth Williams in 1946, when both men were working at Queen's University at Kingston. A modified version, intended specifically to measure the intelligence of rats, was described in a 1951 paper by Hebb's students Rabinovitch and Rosvold. This modified version is the most commonly used in research where the aim is to measure animals' problem-solving abilities. In general, animals are tested in the Hebb–Williams maze's twelve separate mazes after acclimating to six practice mazes, though some studies have not used all twelve testing mazes. The two main procedures for the maze are the reward conditioning task and the water escape task. The maze has been used to investigate strain and sex differences in mice. A 2018 study argued that the maze is potentially useful for translational research in fragile X syndrome in humans.
References
Behavioral neuroscience
Animal testing mazes | Hebb–Williams maze | [
"Biology"
] | 218 | [
"Behavioural sciences",
"Behavior",
"Behavioral neuroscience"
] |
68,679,266 | https://en.wikipedia.org/wiki/Tarmac%20scam | The tarmac scam is a confidence trick in which criminals sell fake or shoddy tarmac (asphalt) and driveway resurfacing. It is particularly common in Europe but practiced worldwide. Other names include the paving scam, tarmacking, the asphalt scam, driveway fraud or similar variants. The scam also has local names in Italian, German, and French.
Method
A conman typically goes door-to-door, claiming to be a builder working on a contract who has some leftover tarmac, and offering to pave a driveway at a low cost.
The paving is in fact often simply gravel chippings covered with engine oil, or it lacks the depth and type of materials needed to form a lasting road surface. Milk has been used to make a fake sealant.
The conmen may target elderly, vulnerable residents, and claim to be official contractors working on roadworks to add credibility. Reported escalation has included increasing the cost, claiming that the job has required more material than expected, and making threats.
Criminals
Tarmac fraud is particularly associated with the Rathkeale Rovers and other gangs from the Irish traveller community. The organiser of the scheme may lead a gang of low-paid workers, or human trafficking victims. Cases have been reported since the 1980s.
Irish crime reporter Eamon Dillon, an expert on the gangs involved, interviewed a builder who worked with a gang who said that they had custom-built lorries which could never do a proper job: "a proper tarring lorry will have sixty jets, our tar lorries have eight". In another case, the equipment was rented in Romania and then never returned. Another gang used a lorry with Highways Agency branding.
The relative mundanity of tarmacking may have made it a low priority for law enforcement. Dillon has estimated that the scheme may earn up to $140 million a year and that in 2010 there were 20 gangs active in Italy alone, earning €2 million a week.
References
Confidence tricks
Asphalt
Road construction
Pavements | Tarmac scam | [
"Physics",
"Chemistry",
"Engineering"
] | 421 | [
"Unsolved problems in physics",
"Construction",
"Road construction",
"Chemical mixtures",
"Asphalt",
"Amorphous solids"
] |
68,679,499 | https://en.wikipedia.org/wiki/Ms2%20%28software%29 | ms2 is a non-commercial molecular simulation program. It comprises both molecular dynamics and Monte Carlo simulation algorithms. ms2 is designed for the calculation of thermodynamic properties of fluids. A large number of thermodynamic properties can be readily computed using ms2, e.g. phase equilibrium, transport and caloric properties. ms2 is limited to homogeneous state simulations.
Features
ms2 contains two molecular simulation techniques: molecular dynamics (MD) and Monte Carlo (MC). ms2 supports the calculation of vapor-liquid equilibria of pure components as well as multi-component mixtures. Different phase equilibrium calculation methods are implemented in ms2. Furthermore, ms2 is capable of sampling various classical ensembles such as NpT, NVE, NVT, and NpH. To evaluate the chemical potential, Widom's test molecule method and thermodynamic integration are implemented. Algorithms for sampling transport properties are also implemented in ms2. Transport properties are determined from equilibrium MD simulations following the Green-Kubo formalism and the Einstein formalism.
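As an illustration of the Green-Kubo route mentioned above (a generic sketch under assumptions, not ms2 source code, its file formats, or its API), the self-diffusion coefficient can be estimated by integrating the velocity autocorrelation function sampled during an equilibrium MD run; the function name, array layout, and units below are assumptions:

```python
import numpy as np

def self_diffusion_green_kubo(velocities: np.ndarray, dt: float) -> float:
    """
    Estimate the self-diffusion coefficient D from the velocity autocorrelation
    function (VACF) via the Green-Kubo relation
        D = (1/3) * integral_0^inf <v(0) . v(t)> dt.

    velocities : array of shape (n_steps, n_particles, 3), equilibrium MD trajectory
    dt         : time between stored frames (consistent units assumed)
    """
    n_steps = velocities.shape[0]
    max_lag = n_steps // 2
    vacf = np.empty(max_lag)
    for lag in range(max_lag):
        # <v(0) . v(t)> averaged over time origins and particles
        dots = np.sum(velocities[: n_steps - lag] * velocities[lag : n_steps], axis=-1)
        vacf[lag] = dots.mean()
    # Trapezoidal rule by hand, divided by 3 for the three Cartesian directions
    integral = dt * (vacf.sum() - 0.5 * (vacf[0] + vacf[-1]))
    return integral / 3.0
```

In practice the integral is truncated once the VACF has decayed to zero; this sketch simply integrates over half the trajectory length.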
Applications
ms2 has been frequently used for predicting thermophysical properties of fluids for chemical engineering applications as well as for scientific computing and soft matter physics. It has been used for modelling both model fluids and real substances. A large number of interaction potentials are implemented in ms2, e.g. the Lennard-Jones potential, the Mie potential, electrostatic interactions (point charges, point dipoles and point quadrupoles), and external forces. Force fields from databases such as the MolMod database can readily be used in ms2.
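For reference, the Lennard-Jones pair potential named above has the standard 12-6 form shown below; the Mie potential generalizes the fixed exponents 12 and 6 to adjustable repulsive and attractive exponents:

```latex
% 12-6 Lennard-Jones pair potential: \varepsilon is the well depth,
% \sigma the distance at which the potential crosses zero, r the pair distance.
u_{\mathrm{LJ}}(r) = 4\varepsilon \left[ \left(\frac{\sigma}{r}\right)^{12}
                                       - \left(\frac{\sigma}{r}\right)^{6} \right]
```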
See also
Comparison of software for molecular mechanics modeling
List of Monte Carlo simulation software
List of free and open-source software packages
References
External links
Molecular dynamics software
Computational chemistry
Molecular modelling software
Molecular dynamics
Force fields (chemistry) | Ms2 (software) | [
"Physics",
"Chemistry"
] | 380 | [
"Molecular dynamics software",
"Molecular modelling software",
"Molecular physics",
"Computational chemistry software",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Theoretical chemistry",
"Molecular modelling",
"Force fields (chemistry)"
] |