Dataset schema: id (int64, 39 to 79M), url (string, lengths 32 to 168), text (string, lengths 7 to 145k), source (string, lengths 2 to 105), categories (list, lengths 1 to 6), token_count (int64, 3 to 32.2k), subcategories (list, lengths 0 to 27)
64,768,088
https://en.wikipedia.org/wiki/International%20Mars%20Ice%20Mapper%20Mission
The International Mars Ice Mapper Mission (I-MIM) is a proposed Mars orbiter being developed by NASA, the Japan Aerospace Exploration Agency (JAXA), the Canadian Space Agency (CSA), and the Italian Space Agency (ASI). As the mission concept evolves, there may be opportunities for other space agencies and commercial partners to join the mission. The goal of the orbiter is to quantify the extent and volume of water ice in non-polar regions of Mars. The results are intended to support future Mars missions, especially with respect to the search for habitable environments and accessible in-situ resource utilization (ISRU) resources. The International Mars Ice Mapper is described as an "exploration precursor mission", comparable to the Lunar Reconnaissance Orbiter (LRO) mission. The mission was envisioned to be launched as early as 2026. However, in March 2022, the US government's fiscal year 2023 budget proposal revealed that NASA's financial support for the Mars Ice Mapper would be terminated, casting the project's future into uncertainty. Mission The mission is to search for ice deposits under the surface of Mars as a precursor for human missions there, by identifying locations where water ice may exist within 5-10 meters of the surface and thus could be accessed by crewed expeditions. The mission plans to scan specific locations on the Martian surface below a certain elevation (to enable entry, descent and landing). The target areas for radar scans are between 25° and 40° northern latitude and 25° and 40° southern latitude. The upper limit of 40° was chosen to provide favorable conditions for solar arrays. The lower bound of 25° is intended to maximize the likelihood of locating ground ice (since the availability of ground ice generally decreases toward the equator due to increased insolation). The ice-mapping mission could help the agency identify potential science objectives for initial human missions to Mars, which are expected to be designed for about 30 days of exploration on the surface. For example, identifying and characterizing accessible water ice could lead to human-tended science, such as ice coring to support the search for life. Mars Ice Mapper also could provide a map of water-ice resources for later human missions with longer surface expeditions, as well as help meet exploration engineering constraints, such as avoidance of rock and terrain hazards. Mapping shallow water ice could also support supplemental high-value science objectives related to Martian climatology and geology. Science Beyond promoting scientific observations while the orbiter completes its reconnaissance work, the agency partners will explore mission-enabling rideshare opportunities as part of their next phase of study. All science data from the mission would be made available to the international science community for both planetary science and Mars reconnaissance. This approach is similar to what NASA is doing at the Moon under the Artemis program, sending astronauts to the lunar South Pole, where ice is trapped in the permanently shadowed regions. Access to water ice would also be central to scientific investigations on the surface of Mars led by future human explorers. Such explorers may one day core, sample, and analyze the ice to better understand the record of climatic and geologic change on Mars and its astrobiological potential, which could be revealed through signs of preserved ancient microbial life or even the possibility of living organisms, if Mars ever harbored life. 
Ice is also a critical natural resource that could eventually supply hydrogen and oxygen for fuel. These elements could also provide resources for backup life support, civil engineering, mining, manufacturing, and, eventually, agriculture on Mars. Transporting water from Earth to deep space is extremely costly, so a local resource is essential to sustainable surface exploration. "In addition to supporting plans for future human missions to Mars, learning more about subsurface ice will bring significant opportunities for scientific discovery", said Eric Ianson, NASA Planetary Science Division Deputy Director and Mars Exploration Program Director. "Mapping near-surface water ice would reveal an as-yet hidden part of the Martian hydrosphere and the layering above it, which can help uncover the history of environmental change on Mars and lead to our ability to answer fundamental questions about whether Mars was ever home to microbial life or still might be today". Mars has been a primary target for robotic exploration and the search for ancient life in our Solar System. Mars Ice Mapper would complement surface missions on the planet, including the Perseverance rover, which landed on February 18, 2021, following a seven-month journey in space. NASA and the European Space Agency (ESA) also recently announced they are moving forward with the Mars sample-return mission. Spacecraft The CSA would provide the radar instrument, JAXA the spacecraft bus, and ASI the communications subsystem for the spacecraft. NASA would be responsible for overall mission management and for providing the launch of the spacecraft. The mission is expected to cost US$185 million. NASA included an illustration of Mars Ice Mapper communicating with three spacecraft in Mars orbit acting as communications relays back to Earth. The agency has previously discussed developing a communications satellite network at Mars, perhaps through public-private partnerships, to support Mars Ice Mapper. In March 2024, Thales Alenia Space signed a €22 million Phase B1 contract with the Italian Space Agency to develop the spacecraft's communications subsystems, following the Phase A contract previously awarded to the company in 2021. Instrument The mission concept plans to use a synthetic-aperture radar based on technology used by the Canadian RADARSAT satellite constellation. The radar has the following technical specifications: frequency 900 MHz; antenna: a parabolic antenna with a 6-meter diameter, deployed on orbit; power consumption: between 500 and 1,000 watts; polarization: hybrid (circular transmit, dual linear receive); two modes: SAR mode and Sounder mode; SAR mode: swath width 30 km, penetration depth 6 m; Sounder mode: vertical resolution 1 m, along-track resolution 30 m, across-track resolution 1.5 km. See also Mars sample-return mission References Space
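As a rough illustration of the sounder specification listed above, the sketch below estimates the free-space wavelength at the stated 900 MHz frequency and the signal bandwidth that a 1 m vertical resolution would imply. The regolith refractive index is an assumed placeholder, not a figure from the mission documents.

```python
# Back-of-the-envelope check of the sounder specification (illustrative only).
C = 3.0e8          # speed of light in vacuum, m/s
FREQ = 900e6       # stated radar frequency, Hz
N_REGOLITH = 1.8   # assumed refractive index of dry regolith (placeholder)

wavelength = C / FREQ  # free-space wavelength, m

# Range (vertical) resolution of a chirped radar in a medium: dr = c / (2 * B * n),
# so the bandwidth needed for a target resolution dr is B = c / (2 * n * dr).
target_resolution = 1.0  # stated sounder vertical resolution, m
bandwidth = C / (2 * N_REGOLITH * target_resolution)

print(f"free-space wavelength: {wavelength:.2f} m")               # ~0.33 m
print(f"bandwidth for 1 m resolution: {bandwidth / 1e6:.0f} MHz")  # ~83 MHz
```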
International Mars Ice Mapper Mission
[ "Physics", "Mathematics" ]
1,203
[ "Spacetime", "Space", "Geometry" ]
64,770,950
https://en.wikipedia.org/wiki/Private%20Fuel%20Storage
Private Fuel Storage LLC (PFS), based in La Crosse, Wisconsin, was a nuclear power industry consortium organized to manage spent nuclear fuel. The plan was to store the fuel above ground in dry casks on the Goshutes' Skull Valley Indian Reservation, Tooele County, Utah. It was withdrawn in 2012. Project history DOE ownership The Department of Energy took responsibility for the disposal of spent nuclear fuel (SNF) in the Nuclear Waste Policy Act of 1982 (NWPA). A temporary Monitored Retrievable Storage (MRS) program was expected to be used alongside a geological storage (permanent) program. Eventually the latter focused on the Yucca Mountain nuclear waste repository, and MRS was evaluated at many different sites. The Department of Energy (DOE) was required to take title to the SNF no later than 1998. By 1987 DOE planned to build the facility near Oak Ridge, Tennessee, on federal land, since existing government nuclear research projects meant skilled personnel and infrastructure were already in place. Opposition to the Oak Ridge location led to its prohibition in a 1987 amendment to the NWPA. Nuclear Waste Negotiator This 1987 amendment also created the Office of the United States Nuclear Waste Negotiator (ONWN), which was active in the early 1990s after its first negotiator, David Leroy, was appointed in 1991. Leroy described the government as "seek[ing] volunteers" among counties and tribes willing to host an MRS. After rounds of grants and applications to the counties and tribes, the pool of willing sites was reduced to Indian reservations, which had the tribal sovereignty to host spent fuel; counties were described by Leroy as having "angry mobs" opposed to it. This was considered "particularly reprehensible from an ethical standpoint because of the long and often destructive history of Indian involvement in U.S. nuclear programs." By March 1993 only four tribes remained interested in the project, and by August 1993 two were in negotiations: the Mescalero tribe's Mescalero Apache Indian Reservation in New Mexico and the Goshute tribe's Skull Valley Indian Reservation in Utah. The ONWN was defunded in late 1993 and expired in 1995. The Skull Valley band had applied for grants created by the 1987 program by 1990. The first round of grants, approximately $100,000, funded the band executive committee's travel to Sacramento, California's Rancho Seco nuclear plant, Washington state's Hanford Site, Florida Power & Light nuclear facilities, and Virginia's Surry Nuclear Power Plant. The second phase of grants, approximately $200,000, sent the committee to Japan's Fugen nuclear plant and Tōkai reprocessing facility, France's La Hague reprocessing facility, the UK's Sellafield power/reprocessing/storage facility, and Sweden's Clab storage facility. Private Fuel Storage, LLC creation After the ONWN wound down, MRS efforts were picked up by a private consortium, later organized as Private Fuel Storage, LLC, led by Xcel Energy (via Northern States Power Company's Jim Howard and the "locally visible and vocal" engineer Scott Northard) to store spent fuel on Mescalero land in New Mexico, an effort called the Mescalero Utility Fuel Storage Initiative. This was controversial and opposed by many in the tribe as well as by New Mexico's senators Jeff Bingaman and Pete V. Domenici. 
The Mescalero tribe voted against the $250 million deal in 1995; a second referendum followed, amid accusations of coercion and outside involvement, and supposedly ended in PFS's favor, leading Tom Udall (then the New Mexico Attorney General) to state "It appears to me that the tribal leadership has strong-armed members to get this result". Allegations included job losses for opponents, intimidation through the killing of a tribe member's horses and dogs, and assaults on children. The agreement gave tribal members a minority on its board (4 from the tribe, 5 from PFS), and allowed for the possible transfer of the SNF title to the tribe. The tribe broke off negotiations in 1996. Negotiations with the Goshute tribe were ongoing and became primary. Xcel/Northern claimed 33 companies were involved, but by 1995 there were only 10 companies, and when Private Fuel Storage was organized in 1996 the number was reduced to eight: Xcel/Northern, Southern Nuclear, Genoa FuelTech, Southern California Edison, Entergy (via ConEd), American Electric Power (via Indiana-Michigan Power), Florida Power and Light, and FirstEnergy. Boston Edison withdrew in 1997 after citizen objections, and Illinois Power sold its share to Florida Power in 2000. Wisconsin Electric and Pacific Gas and Electric Company (PG&E) also withdrew. General Public Utilities had merged with FirstEnergy. The Private Fuel Storage project would have stored spent fuel in 4,000 Holtec International dry casks on a parcel of Goshute land. The fuel would have come from over 100 power plants in a project worth approximately $3 billion, and the facility was technically classified as temporary storage. Some of the Goshutes favored the project for the economic boost, as did Danny Quintana, a non-Indian attorney representing the tribe in the project, who described the tribe as "very astute in terms of business deals" in 1995. It was opposed by others in the tribe and by many outside groups and officials, including Senators Orrin Hatch and Bob Bennett; Representatives Rob Bishop, Chris Cannon, James V. Hansen, and Jim Matheson; Utah governors Mike Leavitt (who famously said "over my dead body" to the proposal) and Jon M. Huntsman Jr.; and Salt Lake City mayor Rocky Anderson. The conflict and concerns reached the front page of the New York Times in 1998. Legal protection under the Price–Anderson Nuclear Industries Indemnity Act, even for private operation on Indian lands, was settled with 1999's El Paso Natural Gas Co. v. Neztsosie, which noted Price-Anderson's "unmistakable preference for a federal forum". Still, Price-Anderson left many unanswered questions and grey areas for the project. Leon Bear and tribal disputes Leon D. Bear, identified as chairman of the Goshutes by PFS, had pushed for the project, describing it as appropriate given the surrounding toxic sites that already existed. Mary Allen, a Goshute executive committee member, doubted the tribe could handle it, as did tribe member Margene Bullcreek, who said about $300,000 had been granted to the tribe by 1995. Bear shaped tribal decisions by attaching tribal money to them, in contrast to the previously equal distribution of funds. For instance, the tribe's 1998 Christmas bonuses were tied to acceptance of the PFS project and would result in $6,000 bonuses for supporters and $400 bonuses for those against PFS. 
In 2000 a tribe member stated "everyone who supports the facility has a new truck—if you don’t support Leon, you don’t have anything", illustrating how dissenting members lacked voice and access and how the tribe's identity was being shaped by a distorted narrative. In April 1999 the tribe passed a resolution stating that all tribal documents are confidential and proprietary to "protect the Band from outsiders". After this point, the committee attendance (sign-in) sheet became a legal agreement binding members to confidentiality; some members refused to sign it. In March 1999, eighteen tribe members, including Sammy Blackbear and Margene Bullcreek, sued the BIA over the legality of the 1997 lease agreement, which the BIA had approved (per the Indian Long-Term Leasing Act) in three days, an unusually prompt turnaround. The Blackbear/Bullcreek suit was dismissed in February 2000 without prejudice (a dismissal upheld on appeal in April 2000) as it had not met a ripeness threshold. They proceeded to appeal the BIA's approval of the lease agreement in September 2000 and were joined by other parties, including Ohngo Gaudedah Devia Awareness (OGDA or OGD) and the State of Utah. This was then filed as a lawsuit on 2 May 2001, over the legality of the 1997 lease agreement, stating that the "Bear regime" (Leon and his uncle Lawrence) had been recalled in 1994 over the spent fuel storage issue and that the Blackbears had been elected. The BIA continued to support the Bear leadership, which continued to support the PFS project. Bullcreek indicated Bear had begun to receive payments from PFS in 1996, then signed the lease and received BIA approval. Blackbear alleged that Leon Bear had made "extraordinary purchases" for personal use and also did not allocate PFS project money to the tribe. The following day, as part of an NRC ASLBP investigation, Leon Bear and John Donnell (a PFS project member working for Stone & Webster) were deposed. The transcript was then released on 17 May 2001 in redacted form after a protective order was granted. Bear noted there were 112 people enrolled in the tribe, and "about 15" lived on the reservation. No members of the tribe were employed at Tekoi, compared with several in 1995. Donnell indicated he believed Bear to be the chairman of the tribe but did not verify it. He recalled being at a General Council meeting of the tribe where Bullcreek "challenged Leon's role in leadership". Bear was the tribal secretary in 1990–1991; he was elected as chair in November 1995 and again in November 2000; his uncle, Lawrence Bear, was the previous chair. Bear presented the executed lease with PFS to the council. In response to Utah's and OGD's complaints surrounding environmental justice issues under President Clinton's Executive Order 12898, the licensing board in 2002 ordered the Skull Valley Band to account for lease revenues and their distribution to the Band, defining OGD as a minority subgroup. On behalf of the Band, attorney Tim Vollmann contended this violated tribal sovereignty and intervened in "internal tribal governmental matters", also noting that Utah's FOIA request to the DOI for the lease details had been fulfilled, with compensation amounts redacted as confidential proprietary information. While discussing the accusations of embezzlement, Vollmann noted the tribal leaders "are currently cooperating with a pending federal law enforcement investigation", but stated that this was not under the ASLBP's purview. Leon Bear stated that disclosing revenues, including from PFS, and the allocation to members, "would violate tribal law and custom." 
Leon Bear was criminally indicted by a federal grand jury in December 2003 on two counts of theft from Indian tribal organizations and one count of theft from programs receiving federal funds, for the conversion and embezzlement of Goshute funds: nearly $130,000 from an economic development office and over $25,000 from double-dipped travel stipends. He was also charged with three counts of falsifying tax returns (from 2000 to 2002), which required enforcing an IRS summons. In an unrelated case, Bear and tribal businesses Starlike Properties Inc. and Diversified Acquisition Star LLC were also under investigation for tax fraud stemming from a 1998 Japanese yen currency put option. In 2005, Bear pleaded guilty to lesser charges and was required to pay $31,000 to the tribal account and $13,000 in federal taxes. Sammy Blackbear, an attorney, and two other tribe members were charged with similar counts of theft after a soft coup in 2001, in which they withdrew over $45,000 in tribal funds, transferred over $400,000 to a falsified new tribal organization (with authorization from Henry Clayton, the non-recognized Nato Indian nation's self-described "residing judge of the First Federal District Court"), attempted to get $250,000 at a second branch, and attempted to withdraw $385,000 from another bank. In 2005, Sammy Blackbear pleaded guilty to the misuse of $1,000 in tribal funds. Following the disputes over Bear's leadership, the tribe filed for the record to be reopened in January 2004, but the NRC chose not to intervene, stating the "concerns are very serious, but they belong in another forum, not an NRC licensing proceeding." In the wake of the indictments, the Salt Lake Tribune described the tribe as being "in meltdown" by late 2006, with its Salt Lake City development office locked and mail piling up. Vice Chairman Lori Bear, Lawrence Bear's daughter, resigned in August, stating she was "tired of working with a 'king' and forced to sign blank checks", and the tribe voted to shut down the executive committee. The band failed to reach a quorum, which meant Leon Bear was still the leader, and he described himself as "chief for life at this point" to Reuters. Noting the lack of a functioning government, the BIA said it might step in. Further opposition Other groups that opposed the project included Public Citizen, which noted heavy lobbying by PFS through McClure, Gerard & Neuenschwander, led in part by former Idaho Senator James A. McClure, as well as lobbying by the PFS member corporations and other connected industry associations; Public Citizen identified $14.4 million in direct lobbying expenses and $22.5 million from related associations over an 18-month period starting in January 1999. It also identified nearly $5 million in campaign contributions from the groups during the same period. The Sierra Club also opposed the project. In October 2000, Bonnie Raitt and the Indigo Girls held a concert in Salt Lake City to raise awareness of the project. The Indigo Girls, Ani DiFranco, Winona LaDuke, James Cromwell, Rep. Dennis Kucinich, Public Interest Research Group's Navin Nayak, and Margene Bullcreek also held a press briefing in Washington, D.C., on July 25, 2005. Beyond the Goshutes, other Utah residents and the NRC acknowledged the disproportionate effects of US nuclear weapons testing on Utah residents (see Downwinders), especially the intense fallout from the Upshot-Knothole Harry test, later nicknamed "Dirty Harry". This fallout led to substantial increases in cancer rates among southern Utah residents and even affected a Hollywood film crew making The Conqueror near St. 
George, Utah. John Wayne's lung cancer in 1964 and his stomach cancer and death in 1979 are often linked to the Dirty Harry test; at least 91 of the 220 people on the set were diagnosed with or died from cancer, including Susan Hayward, Pedro Armendáriz, and Dick Powell. Residents also linked the Tooele County area to other waste projects and incidents, such as the Dugway sheep incident, in which the accidental release of VX gas in Skull Valley killed 6,000 sheep, which were buried on the Skull Valley Band's land under a financial settlement. Bayley Lopez of the Nuclear Age Peace Foundation called the proposals to store waste on Indian lands "a form of economic racism akin to bribery". Nuclear Regulatory Commission application The project application was initially submitted to the Nuclear Regulatory Commission in 1997 by PFS's Chairman of the Board John D. Parkyn. The initial application stated there were zero facilities within a ten-mile radius; by 2002, applications noted that the Goshute village, two ranches, and the Tekoi solid-fuel rocket testing facility were within five miles, though by then Tekoi was no longer in operation. Shaw Pittman represented the tribe to the NRC. The Department of the Interior (DOI), Bureau of Indian Affairs, and Bureau of Land Management blocked parts of the plan; for instance, DOI denied the right-of-way required for transportation to the project as being against the public interest. The Interior Department's objections were struck down in court as "arbitrary and capricious" in 2010. Utah laws and objections Six parties, including the State of Utah, filed initial objections to the plan in 1997; Utah filed 68 of the 160 total contentions. Utah contended that the NRC did not have jurisdiction under the Nuclear Waste Policy Act of 1982 (NWPA), since the proposal was for an offsite independent spent fuel storage installation (ISFSI), which the act did not explicitly discuss. This was rejected in 1998, the board siding with the NRC's and PFS's arguments that precise enumeration was not needed. Utah filed a similar complaint in 2002, which was rejected by the commission two months later. By 2000, however, Tooele County was on board with PFS; commissioners Teryl Hunsaker and Gary Griffith spoke about the economic boost to the county. Gov. Leavitt stated the county had been bought off. Griffith, however, lost his commission seat later that year to a critic of PFS, Gene White. Beginning in 2001, Utah also passed a series of laws requiring a $5 million nonrefundable application fee, restricting transportation of nuclear waste, adding a 75% tax on it, requiring $150 billion in upfront fees, and making similar maneuvers. Utah also sued PFS over concerns including the Yucca Mountain storage project, risks from stray bombs dropped at the nearby Utah Test and Training Range, the credible risk of an aviation crash, and the risk of accidents at the nearby Tekoi rocket facility. Of Utah's objections, Sue Martin of PFS said "it seems like this is a blatant attempt to divert the court's attention". Deseret News editorial page editor Jay Evensen wrote an editorial stating that, while Leavitt and 80% of Utahns stood against the project, the standoff might lead to a situation like the WTO riots in Seattle with "bullets and tear gas"; he called the laws passed by Utah "more like blackmail than a simple protest" and said that Utah does not want outside protesters coming in. Advocates, however, pointed to the 2000-2001 California rolling power blackouts as rationale for the continued need for nuclear power. 
By the end of 2002 it was clear that Enron's market manipulation had been a key factor in those blackouts, and Enron CEO Kenneth Lay was convicted in 2006 on multiple charges related to the events. In December 2001, after the 9/11 attacks, the state of Utah filed contention RR, "Suicide Mission Terrorism and Sabotage". The commissioners invited parties to comment on the issue in February 2002, specifically asking "What is an agency's responsibility under NEPA to consider intentional malevolent acts, such as those directed at the United States on September 11, 2001? The parties should cite all relevant cases, legislative history or regulatory analysis." Military aviation crashes In 2003 the NRC Atomic Safety and Licensing Board's administrative judges posted a 222-page "partial initial decision" regarding "credible accidents", primarily a military aircraft accident, discussing the probability of a hypothetical F-16 crash at the site. Topics such as nose angle, lookdown angle, and zoom climbs were evaluated. The Skull Valley corridor was used for approximately 7,000 sorties per year during training (day and night, at low altitudes above ground level). The NRC specifically did not evaluate intentional terrorist aircraft crashes, a new issue at the time, nor the damage that might occur to the nuclear casks; instead, it simply discussed the probability of a crash. The report remarked that it was the 55th decision related to the PFS application, that the transcript of the topic's 2002 hearings was 11,000 pages, that 475 exhibits were shown, and that the post-trial briefs covered another 2,200 pages. The metric adopted was a one in one million (1×10^-6) probability of an aviation accident occurring per year. PFS attempted to add a fifth factor to the standard NUREG-0800 3.5.1.6-3 four-factor airway calculation, further reducing the odds by the likelihood that a pilot could recognize and steer away from a dangerous crash site, initially discussed as being an 85.5% reduction. Ultimately the NRC calculated a higher probability, above four in one million (4.29×10^-6), compared with PFS's calculation of 2×10^-8. The possibility of a plane crash was therefore considered credible (needing to be evaluated) rather than incredible (so unlikely that it need not be evaluated), as PFS had claimed. NRC staff also calculated the probability of jettisoned ordnance (before or during an aircraft emergency) striking the site to be 2.11×10^-7 per year. Though below the metric on its own, it adds to the overall risk and so was considered worthy of consideration. The NRC ruled against PFS on this, though PFS's Scott Northard remained optimistic about the project. PFS and NRC staff appealed the ruling a few weeks later. The board reevaluated the crash likelihood, factoring in only crashes that would result in breaching a dry cask and delving into minutiae such as the ductility ratios of buildings versus casks. The NRC ruled in PFS's favor, and again during an appeal from Utah in 2005, though with one of the three commissioners, Gregory Jaczko, dissenting. The NRC accepted PFS's calculation of 0.74×10^-6 per year for a military plane crash resulting in a cask failure. Several objections noted Utah had "waived the right" to arguments because the state failed to bring them up in a timely manner (such as during the 15 previous hearings). Another ruling referred to Utah's continued objection ('Contention UU') as a "thinly-supported new contention". 
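The four-factor airway formula referenced above, from NUREG-0800 Section 3.5.1.6, is commonly written as P = C × N × A / w: an in-flight crash rate per mile, times the number of flights per year along the airway, times the facility's effective area, divided by the airway width. The sketch below is a minimal illustration of that calculation and of how an additional "pilot avoidance" factor of the kind PFS proposed would scale the result; all numeric inputs are placeholders, not the values used in the actual proceeding.

```python
# Illustrative NUREG-0800-style four-factor airway crash probability,
# plus the effect of an additional pilot-avoidance factor.
# All numeric inputs are placeholders, NOT values from the PFS docket.

def airway_crash_probability(crash_rate_per_mile: float,
                             flights_per_year: float,
                             effective_area_sq_mi: float,
                             airway_width_mi: float,
                             avoidance_reduction: float = 0.0) -> float:
    """Annual probability of an aircraft striking the facility.

    P = C * N * A / w, optionally reduced by a pilot-avoidance factor
    (e.g. 0.855 for the 85.5% reduction PFS initially discussed).
    """
    p = crash_rate_per_mile * flights_per_year * effective_area_sq_mi / airway_width_mi
    return p * (1.0 - avoidance_reduction)

# Placeholder inputs chosen only to show the arithmetic.
p_base = airway_crash_probability(4e-8, 7000, 0.15, 10.0)
p_avoid = airway_crash_probability(4e-8, 7000, 0.15, 10.0, avoidance_reduction=0.855)

print(f"baseline annual probability:       {p_base:.2e}")      # ~4.2e-06
print(f"with 85.5% avoidance credit:       {p_avoid:.2e}")     # ~6.1e-07
print(f"exceeds 1-in-a-million threshold?  {p_base > 1e-6}")   # True
```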
Final ruling on Utah concerns and NRC approval Ultimately, Utah's concerns (125 specific contentions) were struck down, with the rulings finding the state had overstated its case and deciding in PFS's favor. The state laws were struck down in 2002 on federal preemption grounds, a ruling upheld by the Tenth Circuit appeals court in 2004. By 2003 the NRC application process was still ongoing. Representative Rob Bishop, along with Cannon and Matheson, sponsored a successful amendment (Amendment 383) to the 2006 National Defense Authorization Act to create the Cedar Mountain Wilderness and a moratorium on Bureau of Land Management land use planning in the area. These actions were specifically intended to block a proposed rail spur that would have delivered casks to the PFS site and would have crossed or impacted eight historic sites: the Hastings Cutoff, US Route 40, the Victory Highway ("old" and "new"), a Western Union telegraph line, the Western Pacific Railroad, and two roads. This was part of a Gov. Leavitt strategy of putting a "land moat" around Skull Valley. Since the rail spur was blocked through law and through the BIA Record of Decision (ROD) signed by Chad Calvert, and the BLM was not allowed to sign an MOA due to the moratorium, the Advisory Council on Historic Preservation withdrew, as all concerns over the National Historic Preservation Act were rendered moot. Southern Company and Xcel Energy backed out of PFS by December 2005; Xcel had been the majority shareholder. Florida Power & Light backed out a week later, and the remaining utilities stopped funding PFS. Full approval for the PFS project nonetheless followed on February 21, 2006, as Materials License number SNM-2513, titled "License For Independent Storage of Spent Nuclear Fuel and High-Level Radioactive Waste", subject to DOI approval. DOI's James Cason formally rejected the project on September 6, 2006, in a record of decision, superseding the lower-level BIA review. The rail line spur was also denied by DOI. With the rejection of the lease, Orrin Hatch said "We just wanted to put a spike right through the heart of this project and this does it". The two RODs were described as "curious documents", clearly based more heavily in politics than in "reasoned decisionmaking". In July 2007, the Skull Valley Band and PFS filed a lawsuit against DOI under the Administrative Procedure Act for blocking the plan. The lawsuit indicated the PFS contract was worth $200,000 per year in the construction phase and $1 million per year after opening, plus profit sharing. Since DOI's objection caused the cancellation, the suit sought damages as well as the overturning of DOI's decision. The case was decided in July 2010, overturning both the Calvert and Cason decisions, with the court finding them "arbitrary and capricious" and an abuse of discretion. For instance, the environmental impact statement, filed well before 2001, did not include discussion of terrorist attacks like those seen on 9/11, leading the judicial opinion to state that "the DOI had an obligation to prepare an adequate [environmental impact statement]", especially since the information "generally appears to be readily obtainable". While the Tribe and PFS had sent multiple letters offering to furnish more information, to no avail, part of the denial was for lack of information. The ruling meant DOI was required to reconsider the application. Sen. 
Hatch and Utah's congressional delegation criticized the reopening, with Hatch calling it "a lawyer-employment plan funded by the last holdout member of PFS". In March 2006 PFS's Parkyn had celebrated the appeal, stating "Yes, there is hope for our future" to applause at an industry forum. TIME magazine stated the tribe was slated to get $100 million over 45 years from the project, but neither PFS's Sue Martin nor the band's Leon Bear would confirm that. Margene Bullcreek said she had still not seen the contract. Project cancellation The Blue Ribbon Commission on America's Nuclear Future was created by presidential memorandum in 2010, and a report was issued in December 2012 discussing nuclear waste, especially after the termination of the Yucca Mountain project. Changes were also made to the Waste Confidence Rule in 2010, requiring nuclear power plant operators and others to have confidence in the ability to dispose of spent nuclear fuel. Since title to the SNF at a private temporary storage site would not pass to the DOE until the fuel was taken to a permanent site, and with no permanent site even at the planning stage, the risk of a private facility was substantially higher. In 2010-2011 the status of the project was described as "uncertain". PFS withdrew its application on December 20, 2012, in a letter signed by PFS Chairman of the Board Robert M. Palmberg. It was estimated that $70 million had been spent on the project's application and legal costs by then. In an October 2013 letter acknowledging a Fiscal Year 2013 exemption from annual license fees on the unused storage license, Palmberg indicated that PFS would like to keep its license open if the 2014 fee exemption were allowed. A formal request to withdraw the termination was made in 2014, apparently after the exemptions were granted. PFS, pursuant to program applications in 1996, 2001, and 2006, was required to make biannual NRC Quality Assurance Program filings; those were made in 2017. Sovereignty, economic justice, and communication scholarship Decisions surrounding nuclear waste siting by the tribes, especially the Mescalero and Goshute decisions, brought up issues of tribal sovereignty, economic exploitation, the forcing of Western democracy on tribes, and cultural imperialism. Expecting a tribe to 'volunteer' to handle the waste was described as a "modern Hobson's choice". Further, the relative wealth of the Mescalero compared to the Skull Valley band raised the question of whether consent could be freely given by the band. Targeting Skull Valley for waste can be seen as part of an ongoing pattern of exploitation and a failure of environmental justice. On the other hand, the band spent years learning about the risks and the Mescaleros were able to decline, which could make second-guessing the band's decision itself a form of racism and paternalism. Opposition to the project "dealt heavily with the rhetoric of death", such as Leavitt's "over my dead body" comment and Orrin Hatch stating it was "dead on arrival". Other opposition tended to focus on "death" and "cancer" when discussing risks. In a journal article described elsewhere as "the first comprehensive synthesis of the narratives employed by proponents of a nuclear site", Jennifer A. Peeples described the communication dynamics in terms of agents, agency, and purpose. For instance, she explained the pro-PFS tribal group (Larry Bear) as fitting a narrative frame of self-determinism: "The Goshutes (agent) have made an educated decision (agency) about this facility and we feel it is in our best interest to go forward with the project. 
It benefits ourselves and the nation (purpose). Those who oppose us do so out of ignorance and prejudice." The PFS argument was given in a dispassionate, pragmatic, and scientific tone; even references to the fuel storage facility rarely mentioned the humans working there. Proponents in the community framed their arguments in terms of morality, equity, and economic justice (financial restitution through the PFS money) and in emotional attacks against opponents, constructing rationales for why opponents argued against the project, including accusing them of racism. Additionally, the perception and stigma of nuclear waste combine to reduce institutional trust and promote a NIMBY attitude, leading to the siting of locally unwanted land uses in minority communities with less time or fewer resources to organize against them. The values and costs assigned to materials are culturally dependent and open to interpretation. Understandings and beliefs about the dangers of radiation, for example, are likewise culturally dependent, with some tribes (such as the Paiutes) assigning a heavy spiritual cost to radioactivity. Grassroots activism in such communities is more similar to civil rights movements than to environmental movements. Peeples stated that the combination of these three disparate approaches was "particularly problematic" given the issue. While the Goshutes tried to establish trust in their decisionmaking abilities, the PFS argument excluded them from the narrative, and the community advocates eroded trust by referencing the downwinder damage and by impugning the motives of politicians and local opponents, while the opponents "won" through the use of repetition more than accuracy. Tracylee Clarke also described the intra-tribal dynamic that led to a lack of voice and access and the shaping of the tribe's identity through the distorted narrative of the Larry Bear group. Weiss also argues that the rhetorical strategies and polarization make social constructionism very applicable to the themes and tactics involved. The harsh environment shaped much of the Goshute tribal identity, as the land lacked sufficient resources to allow for powerful central rule or a strong sense of community. Argument, or claims-making, is used to advance a viewpoint and gain the moral high ground. Common tactics in the dispute were to frame and reframe an opposing point of view and react to that framing, and to vilify the opposition by questioning motives and exaggerating imperfections. Proponents gave rhetorical trust to scientists and tribal leadership, while distrusting the state of Utah and its actors (such as Gov. Leavitt). Expert proponents described their years of experience and awards won (such as six Nobel laureates who supported the project) to impress "with credentials rather than data". Proponents made arguments against Utah that were framed in terms of political motivation, such as suggesting that Gov. Leavitt's concern was with reelection, not the project itself. This was reinforced by ennobling Leon Bear as having the best interests of the tribe at heart. Aligning proponents with ennobled scientists legitimized their arguments. In contrast, opponents vilified Leon Bear, disputing the legitimacy of his leadership and claiming corruption, such as embezzlement and the bribing of tribe members. Opponents also used charges of racism, especially in terms of environmental justice and environmental racism. Proponents, especially Leon Bear, used these same charges to argue that the band was not being allowed to profit from the storage, perpetuating racism through paternalism. 
Proponents also argued that Leavitt was racist, which would taint any decision he made, allowing proponents to portray themselves as standing against racism. The framework of environmental justice and environmental racism is argued to be too simplistic, part of an "oversimplified dichotomy" in cases like this, which also involve procedural justice, restorative justice, tribal identity politics, various definitions of sovereignty, self-determination, a spectrum of assimilation versus traditionalism, and moral purity. Policy legacy The groundwork from PFS allowed the NRC to produce a generic environmental impact statement (GEIS) in 2014, NUREG-1751, on siting ISFSIs and dry cask transfer systems (DTS, which do not need a spent fuel pool), including environmental justice impacts. See also Iosepa, Utah Clive, Utah Uranium mining and the Navajo people Three Mile Island accident (operated by GPU) Prairie Island Nuclear Power Plant (Xcel facility adjacent to Prairie Island Indian Community reservation) Black Mesa Peabody Coal controversy (Coal mining on Hopi Indian reservation) Kayenta Mine (Coal mining on Hopi Indian reservation) Four Corners Generating Station (Coal power plant on Hopi Indian reservation) Church Rock uranium mill spill (contaminated Navajo Nation land) Navajo Generating Station Basel Convention References External links Archives of Private Fuel Storage website: 1999, 2006, 2013 'Skull Valley' documentary description and slideshow Native American Forum on Nuclear Issues (April 11, 2008) Utah Education Network: The Skull Valley Goshutes and the Nuclear Storage... KUED's We Shall Remain: The Goshute: chapter 4 (video) First Nation rights: Skull Valley nuclear storage (video) on Source Code, Free Speech TV Radioactive waste
Private Fuel Storage
[ "Chemistry", "Technology" ]
6,891
[ "Radioactive waste", "Environmental impact of nuclear power", "Radioactivity", "Hazardous waste" ]
64,771,464
https://en.wikipedia.org/wiki/Jack%20Throck%20Watson
Jack Throck Watson (May 2, 1939 – September 3, 2016) was an American biochemist who was a professor of biochemistry and chemistry at Michigan State University (MSU), where he was also director of the MSU Mass Spectrometry Facility. While at MIT, Watson developed a gas chromatography–mass spectrometry interface, known as the Watson–Biemann separator, that removes helium from the gas chromatograph column effluent, thereby allowing analysis of less volatile and more polar compounds. Watson later worked on methods for the structure elucidation of peptides and proteins using fast atom bombardment and matrix-assisted laser desorption ionization (MALDI) mass spectrometry. After retiring in 2006, he continued to work on his introductory mass spectrometry textbook and to teach short courses in mass spectrometry. Early life and education Jack Watson was born on May 2, 1939, in Casey, Iowa, to Jesse H. and Anne Watson. Jack grew up in Nora Springs, a town of about 1,000 residents in northern Iowa. His father was the area's school superintendent, and he had one brother. After graduating from Nora Springs High School in 1957, he went to Iowa State University, majoring in chemistry and taking part in the university's Air Force ROTC program for four years, which accounts for the four years he later spent on active duty in California and Texas. Before serving his Air Force obligation, after graduating from Iowa State with a degree in chemical technology in 1961, he went to graduate school at the Massachusetts Institute of Technology (MIT). At MIT, Watson was a PhD candidate in the laboratory of Klaus Biemann, one of the most notable experts in organic mass spectrometry at the time. As soon as he graduated from MIT, Watson reported for duty in the United States Air Force in the San Francisco Bay area. A friend of his from high school introduced Watson to Judith Sjoberg; not long after, they were married and moved to Brooks Air Force Base in San Antonio, Texas. After completing his tour of duty in the Air Force, Watson took a one-year postdoctoral position in Strasbourg, France, at the Institut de Chimie, Université de Strasbourg, under the direction of Robert Wolf. During this time, and through the licensing of the Watson–Biemann gas separator to Thomson-CSF for use in a gas chromatograph–mass spectrometer the company was manufacturing at the time, Watson made lasting ties to the French mass spectrometry community. Career After completing his postdoctoral fellowship in France, Watson returned to the United States in 1969 and took a position as an assistant professor in the Department of Pharmacology at Vanderbilt University in Nashville, Tennessee. Jack was promoted to associate professor with tenure in 1974. While at Vanderbilt, Watson published the first edition of Introduction to Mass Spectrometry: Biomedical, Environmental, and Forensic Applications in 1976. It was the first book to include journal titles as part of the cited literature. Harold G. (Harry) Walsh had just joined the ACS as director of the Short Course program. Walsh approached Watson and asked him to teach a course, and also asked that Watson select someone from the mass spectrometry industry to co-teach it. Watson had met O. David Sparkman, an American working in Paris for the French gas chromatography/mass spectrometry company Riber, a few months earlier. Watson asked Sparkman to contribute to the data systems part of the course. 
They taught the first session at the annual Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy in the spring of 1978. They taught the course two more times that year at the annual ACS meetings and continued teaching it into the first decade of the next millennium. In 1980 Watson accepted a joint appointment in the Departments of Biochemistry and Chemistry at Michigan State University in East Lansing, Michigan. He also became the director (principal investigator) of the National Institutes of Health (NIH) P41 Regional Resource in Mass Spectrometry at MSU. He remained director of the NIH facility until the NIH no longer funded such facilities, and he retired from his teaching position in 2006. The MS Facility continued to operate after funding stopped through the efforts of Watson and the Biochemistry and Chemistry Departments. Personal life Watson married Judith Sjoberg, to whom he had been introduced by a high school friend, and the couple moved to Brooks Air Force Base in San Antonio, Texas. They had two children in Nashville: Jennifer, born in 1970, and Brent, born in 1972. Legacy A fellowship in Watson's name has been established at Michigan State University, where recipients will be graduate students in the Department of Biochemistry and Molecular Biology. This is the "Jack Throck Watson Graduate Fellowship in Biochemistry Endowment." References 1939 births 2016 deaths American biochemists Mass spectrometrists Massachusetts Institute of Technology alumni
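The Watson–Biemann separator mentioned above is commonly described as an effusion (fritted-glass) separator: the GC effluent passes through a porous glass section under vacuum, and the light helium carrier gas effuses away through the pores much faster than the heavier analyte molecules, enriching the sample before it enters the mass spectrometer. The sketch below uses Graham's law (effusion rate proportional to 1/sqrt(molar mass)) to show the size of that effect; the analyte mass is a hypothetical example value, not data from the original device.

```python
# Illustrative Graham's-law estimate of carrier-gas removal in an effusion
# separator. The analyte molar mass is a hypothetical example value.
from math import sqrt

M_HELIUM = 4.0     # g/mol, carrier gas
M_ANALYTE = 300.0  # g/mol, hypothetical analyte eluting from the GC column

# Graham's law: effusion rate ~ 1/sqrt(M), so helium escapes through the
# porous wall roughly sqrt(M_analyte / M_helium) times faster than the analyte.
relative_loss = sqrt(M_ANALYTE / M_HELIUM)

print(f"helium effuses ~{relative_loss:.1f}x faster than a {M_ANALYTE:.0f} g/mol analyte")
# ~8.7x, which is why the analyte is enriched in the stream that continues
# on to the mass spectrometer ion source.
```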
Jack Throck Watson
[ "Physics", "Chemistry" ]
1,002
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
69,140,980
https://en.wikipedia.org/wiki/Self-healing%20concrete
Self-healing concrete is concrete with the capability to repair its own cracks autogenously or autonomously. It not only seals the cracks but also partially or entirely recovers the mechanical properties of the structural elements. This kind of concrete is also known as self-repairing concrete. Because concrete has poor tensile strength compared to other building materials, it often develops cracks in the surface. These cracks reduce the durability of the concrete because they facilitate the flow of liquids and gases that may contain harmful compounds. If microcracks expand and reach the reinforcement, not only will the concrete itself be susceptible to attack, but so will the reinforcing steel bars. Therefore, it is essential to limit crack width and repair cracks as quickly as feasible. Self-healing concrete would not only make the material more sustainable, but it would also contribute to an increase in the service life of concrete structures and make the material more durable and environmentally friendly. Self-healing is an old and well-known phenomenon for concrete, given that it has innate autogenous healing characteristics. Cracks may heal over time due to continued hydration of clinker minerals or carbonation of calcium hydroxide. Autogenous healing is difficult to control since it can only heal small cracks and is only effective when water is present; these limitations make it difficult to rely on. On the other hand, concrete may be altered to provide self-healing capabilities for cracks. There are many solutions for improving autogenous healing by adding admixtures, such as mineral additions, crystalline admixtures, and superabsorbent polymers. Further, concrete can be modified with built-in autonomous self-healing techniques. Capsule-based, vascular, and microbiological self-healing are the most common types of autonomous self-healing techniques. History The ancient Romans used a type of lime mortar that has been found to be self-healing. Strätlingite crystals form along the interfacial zones of Roman concrete, binding the aggregate and mortar together, and this process has continued even after 2,000 years, as discovered by the geologist Marie Jackson and her colleagues in 2014. In the early 1990s, Carolyn M. Dry created the first modern self-healing approach by developing a configuration that facilitates the release of repair chemicals from fibers embedded in a cementitious matrix. Since then, the research community has developed various techniques to incorporate self-healing properties into concrete. Among self-healing materials, research on self-healing concrete has grown rapidly in recent years, spurred by government-funded consortia such as the SARCOS COST Action, RM4L, ReSHEALience, and SMARTINCS. The worldwide market for self-healing concrete is anticipated to grow at a CAGR of 36.8% during the forecast period, with revenue increasing from US$34.10 billion in 2021 to US$562.97 billion in 2030. Rising investment in large-scale infrastructure projects and increasing collaboration among governments on long-term infrastructure goals are factors driving market expansion. Autogenous healing Autogenous healing of cementitious materials influences crack self-closure and, subsequently, the durability and physical-mechanical performance of composites. 
It is considered to be one of the main reasons for the substantial life extension of ancient structures and buildings. Autogenous self-healing in cement-based composites was first noted by the French Academy of Science in 1836, when cracks in pipes, water-retaining structures, and similar works were observed to self-heal. Significant theoretical and experimental research in the 1900s demonstrated that autogenous self-healing is mostly linked to physical, mechanical, and chemical processes inside the cementitious matrix. During the so-called "surface-controlled crystal development" that occurs when cracking is induced, calcium ions are immediately accessible from the fracture faces, and crystal growth is accelerated. After an initial layer of calcite is formed on the crack walls and the surrounding concrete matrix becomes less rich in calcium ions, the transition to so-called "diffusion-controlled crystal growth" occurs, which means that the Ca2+ ions must diffuse through the concrete and the CaCO3 layer in order to reach the crack surface and allow the precipitation of the healing products. Clearly, the second phase is much slower than the first. In the case of composite cements containing pozzolanic additions, a portion of the calcium hydroxide, which has been identified as a primary source of Ca2+ ions, is consumed in the pozzolanic reaction forming CSH. This results in delayed and weaker precipitation of calcium carbonate. Other minor mechanisms include the swelling of hydrated cement paste along the crack walls due to water absorption by calcium silicate hydrates, and mechanical crack blocking by debris and fine concrete particles, either direct results of the cracking process or impurities carried in by water entering the crack. Autogenous healing mechanisms are only effective for small cracks, and only in the presence of water, although reported maximum healable crack widths vary widely: 10–100 μm, sometimes up to 200 μm, but less than 300 μm. They are challenging to control and forecast because of their usually scattered outcomes and dependence on a number of factors; the most influential are 1) the age and composition of the concrete itself, 2) the presence of water, and 3) the width and shape of the crack. Stimulated autogenous healing When crack widths are constrained, autogenous healing is more successful. The presence of water is also a significant element, and the stimulation of continuous hydration or crystallization promotes self-healing as well. Therefore, methods that restrict crack width, provide water, or boost hydration or crystallization are categorized as promoting or enhancing autogenous healing. Use of mineral additions Most research on the effect of mineral additions on self-healing has been conducted on blast-furnace slag and fly ash. Continuous hydration promotes autogenous healing because large fractions of these additions remain unhydrated even at older ages. The pozzolanic reaction, which is specific to siliceous and/or aluminous additions (fly ash, blast-furnace slag, silica fume, calcined clay, etc.) in composite cement, can reinforce the continued hydration of cement grains in terms of long-term CSH development and, as a result, provide a certain degree of autogenous self-healing. 
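Because the carbonation of calcium hydroxide (portlandite) into calcium carbonate is one of the main autogenous mechanisms described above, a quick stoichiometric sketch gives a feel for the crack-filling potential of the reaction Ca(OH)2 + CO2 -> CaCO3 + H2O. The density values below are typical handbook figures used here as assumptions.

```python
# Rough stoichiometry of portlandite carbonation, the calcite-forming
# reaction behind much autogenous healing: Ca(OH)2 + CO2 -> CaCO3 + H2O
# Densities are approximate handbook values, used here as assumptions.

M_PORTLANDITE = 74.09    # g/mol, Ca(OH)2
M_CALCITE = 100.09       # g/mol, CaCO3
RHO_PORTLANDITE = 2.24   # g/cm3 (approximate)
RHO_CALCITE = 2.71       # g/cm3 (approximate)

# Mass of calcite produced per gram of portlandite carbonated (1:1 molar ratio)
mass_ratio = M_CALCITE / M_PORTLANDITE

# Solid volume change: molar volume of product vs. reactant
v_portlandite = M_PORTLANDITE / RHO_PORTLANDITE   # cm3/mol
v_calcite = M_CALCITE / RHO_CALCITE               # cm3/mol
volume_change = (v_calcite - v_portlandite) / v_portlandite

print(f"calcite formed per gram of portlandite: {mass_ratio:.2f} g")    # ~1.35 g
print(f"solid volume change on carbonation:     {volume_change:+.0%}")  # ~+12%
```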
Use of crystalline admixtures The phrase "crystalline admixtures" is a label that does not necessarily indicate functionality or molecular structure, since it is derived from commercially available products whose components are often not specified. Practically, commercial crystalline admixtures may be distinguished from supplementary cementitious materials (SCMs) by their dosage, generally 1% by cement weight for crystalline admixtures and more than 5% for SCMs. Crystalline admixtures (CAs) are categorized as a distinct type of permeability-reducing admixture; the category of permeability-reducing admixtures includes a diverse variety of materials, which may also be referred to by the general term "crystalline admixtures." Furthermore, most commercial products include proprietary ingredients, and their formulations are kept secret. However, in general, CAs are extremely hydrophilic products created from "active chemicals" that are often blended with cement and sand. In the presence of water, they react, creating water-insoluble pore- and crack-blocking precipitates that improve CSH density and resistance to water penetration. CAs have been demonstrated to enhance the mechanical qualities of concrete when used at 3%, 5%, and 7% of the cement content and exposed to moisture, although these dosages are fairly high for an admixture. Use of superabsorbent polymers Superabsorbent polymers (SAPs) are natural or synthetic 3D cross-linked homopolymers or copolymers with a high fluid absorption capacity. The swelling capacity varies according to the monomer type and the cross-linking density and may reach 1,000 g/g. The maximal swelling results from a balance between osmotic pressure, which is related to the presence of electrically charged groups, and the elastic retractive forces of the polymer matrix. Furthermore, since osmotic pressure is related to the concentration of ions in the aqueous solution, the ionic strength of the swollen medium substantially influences absorption behavior. Aside from the several application areas (e.g., the sanitary, biomedical, and agricultural sectors) where SAPs are currently used, more and more research is focusing on the use of SAPs in mortar and concrete. To limit self-desiccation shrinkage during hardening, SAPs have been added as an internal curing agent in cementitious systems with a low water-to-binder ratio. Aside from reducing autogenous shrinkage, SAPs may be added to cementitious materials to improve freeze-thaw resistance and induce self-sealing and self-healing properties. In terms of the latter, the inclusion of SAPs serves many purposes. First and foremost, SAPs, which absorb mixing water during concrete mixing and shrink when the matrix hardens, leave behind macropores. These macropores operate as weak matrix sites, attracting and encouraging multiple cracking. Both actions promote crack closure by allowing cracks to cross SAP macropores and generate narrower cracks. However, these macropores may be responsible for strength loss, though not always, since SAPs can also operate as an internal healing agent and drive further hydration, as previously mentioned. It all depends on the kind of SAP used, the particle size and shape, the SAP dosage, the w/c ratio of the mix, whether extra water is added to compensate for the loss in workability, and the mixing technique, among other things. Additives that promote self-healing on heat exposure Carbon nanotube reinforced concrete (CNT-RC) can heal after being subjected to fires and high temperatures. 
Research by Szeląg investigated the healing ability of CNT-RC after it was subjected to high temperatures. The study found that the addition of CNTs to cement paste improved the thermal stability of the material and allowed it to maintain its mechanical properties at elevated temperatures up to 800 °C. Additionally, after the material was exposed to high temperatures and subsequently cooled, it still maintained its healing ability and was able to repair cracks that formed during the thermal loading process. Autonomous self-healing Autonomous self-healing depends on integrating atypical engineering modifications into the matrix to provide a self-healing function. Encapsulation has long been the favored method for delivering healing agents directly to the cracks, allowing in-place repair. In encapsulating healing compounds, there are two approaches: discrete and continuous. The key distinction is the mechanism used to store the healing agent, which determines the extent of damage that may be treated, the repeatability of healing, and the recovery rate for each strategy. However, several elements must be addressed in the design of an encapsulation-based self-healing system, from capsule system creation through integration, mechanical characterization, triggering, and healing assessment. Microencapsulation Microencapsulation (diameter < 1 mm) remains a popular technology for manufacturing autonomous self-healing components for cementitious systems, inspired by the pioneering study of White et al. Microcapsules are directly incorporated into the matrix and, upon crack development, rupture and release their core contents into the crack volume. The discharged substance then reacts with a catalyst distributed in the matrix to heal the crack. The proof of concept for microcapsule-based healing in concrete has been demonstrated on several occasions. Recent capsule research has continued to emphasize the use of adhesive two-component systems necessitating the simultaneous embedding of a catalyst into the matrix for activation and hardening. Wang et al. recommended a catalyst-to-microcapsule ratio of 0.5, although others have suggested a ratio of 1.3 to guarantee activation of the encapsulated epoxy. However, the long-term stability of reacted organic healing agents in the extremely alkaline cementitious matrix and their long-term functioning remain uncertain. Emerging research, however, promotes compatibility and bonding with the mineral substrate of the cementitious matrix, moving toward capsules that can deliver such healing products; these include encapsulated bacterial spores and mineral cargos such as colloidal silica and sodium silicate. The former may increase carbonate precipitation, while the latter can convert calcium hydroxide to a more desirable CSH gel. Macroencapsulation Dry conducted some of the earliest research using macroencapsulation, proposing polypropylene and glass fibers with a mono- or multicomponent methyl methacrylate core for healing concrete cracks. The selection of the fibers was prompted by the combination of mechanical strengthening, crack sealing, and a cost-effective encapsulating technique. Moreover, this method was favored over embedded microcapsules because it gave the benefit of retaining a higher quantity of the healing agent and the possibility of multiple healing events. The ultimate objective was to avoid adhesive breakdown over time. 
The release of the healing agent was triggered by the creation of cracks, which led to the destruction of the implanted brittle fibers. Lower processing temperatures and the ability to integrate extrusion, filling, and sealing stages make polymeric capsules potentially simpler to manufacture. In the case of cylindrical capsules, the diameters range from 0.8 to 5 mm so that the attractive capillary force of the crack and the gravitational force on the fluid mass are sufficient to overcome the capillary resistive force of the cylindrical capsules and the negative pressure forces resulting from the sealed ends. In other words, the crack width of the matrix should be less than the capsules' inner diameter. Vascular healing The concept of vascular healing in concrete utilizes a biomimetic approach to self-healing. The human cardiovascular system, which conducts blood throughout the body, and the plant vascular tissue system, which transports food, water, and minerals via xylem and phloem networks, are examples of vascular network systems. Similarly, vascular networks in concrete may transport liquid healing chemicals to damaged areas. Theoretically, there is no limit to the quantity of damaged material that may be fixed when this healing substance is provided from an external source. Early work by Dry included embedding long, thin glass channels in concrete. This self-healing mechanism was eventually scaled up and used on a sample bridge deck. The difficulty of casting concrete with these very fragile materials was one obstacle preventing this technique's widespread use. The significant advantage of the vascular technique over the encapsulation method is that the healing agent may be administered continuously. Indeed, different healing agents may be used at different times to heal different kinds of concrete damage. Additionally, the healing agent may be delivered under pressure to guarantee that it reaches the desired damage zones, similar to the notion of injecting epoxy for fixing concrete fractures. In concrete, several types of vascular networks have been implemented. The simplest version consists of a 1D channel, both ends of which are accessible from the concrete surface. Complex two- and three-dimensional channel networks have been developed in concrete to give various and alternative routes for the transfer of healing agents to damaged areas. Such complex geometries have also made use of multi-flow junction nodes within the network. Self-healing bioconcrete The formation of calcium carbonate as a byproduct of microbial activity is an additional method for "engineering" the self-healing ability of concrete. It holds the potential for active and long-lasting crack repair while also being a potentially ecologically beneficial technique. Calcium carbonate (CaCO3), often known as limestone, has an effective bonding capability and is compatible with current concrete formulations. As a result of the carbonation of existing calcium hydroxide (portlandite) minerals, calcium carbonate may be included in the concrete mix design or chemically created inside the concrete matrix. Limestone generated inside the matrix of concrete may result in the densification of the matrix by pore filling and can help to self-heal cracks, reducing their (water) permeability and resulting in the recovery of lost strength. If circumstances are favorable, most bacteria can precipitate CaCO3 from solution.
However, the carbonatogenesity of bacteria following distinct metabolic routes for the precipitation of bacterial CaCO3 varies. Additionally, many extrinsic variables influence the precipitation efficiency and cause the same bacterial strain to produce varying amounts of carbonate. It is probable that in a wet-dry environment, healing happens more quickly. In addition, the regulation of crack width is crucial for achieving quicker and more effective healing through biological activity. See also References Concrete Biomaterials Smart materials
Self-healing concrete
[ "Physics", "Materials_science", "Engineering", "Biology" ]
3,485
[ "Biomaterials", "Structural engineering", "Materials science", "Materials", "Concrete", "Smart materials", "Matter", "Medical technology" ]
69,144,441
https://en.wikipedia.org/wiki/Matthias%20Scheffler
Matthias Scheffler (born June 25, 1951, in Berlin) is a German theoretical physicist whose research focuses on condensed matter theory, materials science, and artificial intelligence. He is particularly known for his contributions to density-functional theory and many-electron quantum mechanics and for his development of multiscale approaches. In the latter, he combines electronic-structure theory with thermodynamics and statistical mechanics, and also employs numerical methods from engineering. As summarized by his appeal "Get Real!" he introduced environmental factors (e. g. partial pressures, deposition rates, and temperature) into ab initio calculations. In recent years, he has increasingly focused on data-centric scientific concepts and methods (the 4th paradigm of materials science) and on the goal that materials-science data must become "Findable and Artificial Intelligence Ready". Academic career Matthias Scheffler studied physics at Technische Universität (TU) Berlin. He carried out his doctoral work in the field of theoretical solid-state physics at the Fritz Haber Institute of the Max Planck Society (FHI) and received his Ph.D. from the TU Berlin in 1978. He then moved to the Physikalisch-Technische Bundesanstalt in Braunschweig, where he was employed as a research associate from 1978 to 1987. From 1979 to 1980, he was also a visiting scientist at the IBM T.J. Watson Research Center, Yorktown Heights, USA. He received his habilitation in 1984 from the TU Berlin. In 1988, he was appointed as a scientific member of the Max Planck Society and founding director of the Theory Department of the Fritz Haber Institute of the Max Planck Society in Berlin. The following year he received an honorary professorship at the TU Berlin. This was followed by further honorary professorships at Freie Universität Berlin (2006), Humboldt-Universität zu Berlin (2016), and in Hokkaido, Japan (2016). He is also Distinguished Visiting Professor of Computational Materials Science and Engineering at the University of California, Santa Barbara since 2005. Since 2015, he heads the European Center of Excellence NOMAD (Novel Materials Discovery), since 2020 the NOMAD Laboratory at the FHI, and since 2021, he is Deputy Spokesperson of the FAIRmat project at the Humboldt-Universität zu Berlin . Research focus Since the beginning of his career, Matthias Scheffler has been working on fundamental aspects of the chemical and physical properties of surfaces, interfaces, clusters, and nanostructures. Current research activities include studies of heterogeneous catalysis, thermal conductivity, electrical conductivity, thermoelectric materials, defects in semiconductors, inorganic/organic hybrid materials, and biophysics. These are studies that combine quantum mechanics, ab initio calculations of the electron structure and molecular dynamics with methods from thermodynamics, statistical mechanics, and engineering. In this way, the understanding of meso- and macroscopic phenomena can be developed or deepened under realistic conditions (T, p). Scheffler is also working on the development of theoretical models for the calculation of excited states and electron correlations. The software package FHI-aims developed for this purpose by Scheffler, together with Volker Blum and many others, was specifically designed for large-scale calculations on high-performance computers. Matthias Scheffler has investigated many different classes of materials with high application relevance (e.g. 
compound semiconductors, metals, oxides, two-dimensional materials, organic materials, surfaces), as well as successfully investigating a wide range of phenomena with direct practical relevance (e.g. crystal structure and growth, electronic material properties, metastability of impurities in semiconductors, electrical and thermal conductivity, heterogeneous catalysis). More than 70 of his former employees now hold professorships or similar positions. Scheffler is one of the most highly cited scientists in his field. Data science and development of the NOMAD database Since 2003, Matthias Scheffler and his group have been developing artificial intelligence methods and are increasingly engaged in scientific data-sharing activities. Worldwide, vast amounts of scientific data are generated on materials, but only a fraction of these data are actually used and published. Often, data are not adequately characterized and described, and most data are not considered further because they are not useful for the ongoing, focused research project. However, they may contain valuable information for other topics ("recycle the waste!"). For computational materials science, Scheffler, together with Claudia Draxl, designed and set up a database where research data can be stored in a well-documented manner and where the research data are also available to other researchers. These activities, carried out together with international colleagues, resulted in the foundation of the NOMAD Center of Excellence (CoE). In the meantime, NOMAD has become the world's largest database of results from highly complex quantum mechanical calculations performed on state-of-the-art high-performance computers. Since 2020, the NOMAD CoE has increasingly focused on software development for exascale computers. In October 2021, the FAIRmat consortium (FAIR Data Infrastructure for Condensed-Matter Physics and the Chemical Physics of Solids), funded by the German government, was set up. Here, the original NOMAD concepts are advanced to the areas of materials synthesis and experimental research, and a corresponding metadata catalog, ontologies and workflows, as well as a federated infrastructure of data repositories (NOMAD Oasis), are being developed. With the detailed description and availability of data, artificial intelligence methods can be applied and materials with novel and advantageous properties can be identified. The previously often very lengthy value creation process in the development of new materials, from basic research to market-ready product, can thus be significantly shortened. Awards and honors 2001 Max Planck Research Award jointly awarded by the Alexander von Humboldt Foundation and the Max Planck Society 2003 Medard W.
Welch Award of the AVS (association for science and technology of materials, interfaces and processing) 2004 Max Born Medal and Prize jointly awarded by the British Institute of Physics (IOP) and the German Physical Society (DPG) 2007 Honorary doctorate from the Lund University, Sweden 2010 Rudolf Jaeckel Prize of the German Vacuum Society (DVG) Since 1998 Fellow of the American Physical Society Since 2002 Member of the Berlin-Brandenburg Academy of Sciences and Humanities Since 2017 Member of the German National Academy of Sciences Leopoldina Bibliography References External links Website of the NOMAD Laboratory at the Fritz Haber Institute of the Max Planck Society Publications by Matthias Scheffler at google scholar Website of the European Center of Excellence NOMAD (Novel Materials Discovery) Website of the NFDI consortium FAIRmat Website of the NOMAD database 1951 births Living people German theoretical physicists Members of the German National Academy of Sciences Leopoldina Fellows of the American Physical Society Max Planck Society people Technische Universität Berlin alumni German materials scientists University of California, Santa Barbara faculty Condensed matter physicists Scientists from Berlin 20th-century German physicists 21st-century German physicists Max Planck Institute directors
Matthias Scheffler
[ "Physics", "Materials_science" ]
1,465
[ "Condensed matter physicists", "Condensed matter physics" ]
69,144,535
https://en.wikipedia.org/wiki/DIMPL
DIMPL (Discovery of Intergenic Motifs PipeLine) is a bioinformatic pipeline that enables the extraction and selection of bacterial GC-rich intergenic regions (IGRs) that are enriched for structured non-coding RNAs (ncRNAs). The method of enriching bacterial IGRs for ncRNA motif discovery was first reported in the study "Genome-wide discovery of structured noncoding RNAs in bacteria". The DIMPL pipeline automates the process of total genome analysis by extracting IGRs, filtering them by length and nucleic acid composition, and collecting the data necessary to identify candidate motifs and assign their possible functions. The DIMPL pipeline provides reproducible techniques for identifying genomic regions enriched for ncRNA through support vector machine (SVM) classifiers. It can be used to look for nucleic acid and protein motifs, including riboswitch-like elements, upstream open reading frames (uORFs), short open reading frames (sORFs), ribosomal protein leader sequences, selfish genetic elements and other structured RNA motifs of unknown function. DIMPL uses various sequence analysis resources, including: Rfam database, as a reference of known RNA families BLASTX search tool, to eliminate unannotated protein coding regions INFERNAL package, to search the IGR sequences CMfinder, to look for possible RNA secondary structure features R-scape software and R2R drawing algorithm, to generate the consensus model RNAcode, to look for the presence of coding regions GenomeView, to visualize the genetic context of the RNA motif RNA motifs discovered using DIMPL include the HMP-PP riboswitch, icd-II ncRNA motif, carA ncRNA motif, ldh2 ncRNA motif, among others. References Bioinformatics Computational biology
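The extraction-and-filtering step described above can be illustrated with a short sketch. This is a hypothetical illustration of the general idea, not DIMPL's actual code: the gene coordinates, length window and GC threshold below are invented assumptions.

```python
# Illustrative sketch of IGR extraction and filtering (hypothetical example,
# not DIMPL's real implementation or parameters).

def gc_fraction(seq):
    """Fraction of G and C nucleotides in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

def extract_igrs(genome, gene_coords, min_len=50, max_len=600, min_gc=0.5):
    """Return intergenic regions between consecutive annotated genes that pass
    simple length and GC-content filters (thresholds are assumptions)."""
    igrs = []
    coords = sorted(gene_coords)              # [(start, end), ...], 0-based, end-exclusive
    for (_, end1), (start2, _) in zip(coords, coords[1:]):
        igr = genome[end1:start2]             # sequence between two consecutive genes
        if min_len <= len(igr) <= max_len and gc_fraction(igr) >= min_gc:
            igrs.append((end1, start2, igr))
    return igrs

# Toy usage with an invented genome and annotation
genome = "ATGC" * 500
genes = [(0, 300), (900, 1200), (1500, 1900)]
print(len(extract_igrs(genome, genes)))       # prints 2 for this toy input
```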
DIMPL
[ "Engineering", "Biology" ]
370
[ "Bioinformatics", "Biological engineering", "Computational biology" ]
72,122,427
https://en.wikipedia.org/wiki/GsMTx-4
Grammostola mechanotoxin #4 (GsMTx-4, GsMTx4, GsMTx-IV), also known as M-theraphotoxin-Gr1a (M-TRTX-Gr1a), is a neurotoxin isolated from the venom of the Chilean rose tarantula Grammostola spatulata (also known as Grammostola rosea). This amphiphilic peptide, which consists of 35 amino acids, belongs to the inhibitor cystine knot (ICK) peptide family. It reduces mechanical sensation by inhibiting mechanosensitive channels (MSCs). GsMTx-4 also serves as a cationic antimicrobial peptide against Gram-positive bacteria. Source GsMTx-4 was isolated from the venom of Grammostola spatulata. After a blocking effect of the spider venom on mechanosensitive channels was detected in 1996, GsMTx-4 was isolated and identified from the venom in 2000. Its concentration in the venom is ~2 mM. Chemistry Structure GsMTx-4 has a polypeptide chain of 35 amino acids with the sequence GCLEF-WWKCN-PNDDK-CCRPK-LKCSK-LFKLC-NFSF; the C-terminus is amidated. The toxin is an amphipathic peptide consisting of a large hydrophobic patch which is surrounded by a ring of six polar lysine residues. These lysine residues give the toxin an overall charge of +5. The toxin contains three intramolecular disulfide bonds that contribute to the formation of its inhibitor cystine knot (ICK). Homology GsMTx-4 shares less than 50% sequence homology with all other known peptide toxins. The highest percentage of sequence homology is shared with other tarantula toxins that block voltage-gated calcium channels and voltage-gated potassium channels. The ICK, as well as the residues F4, D13, and L20, are conserved in these tarantula toxins. Properties Like other peptides belonging to the ICK super-family, GsMTx-4 is amphipathic. Therefore, GsMTx-4 is able to interact with the hydrophobic side of the lipid bilayer. It can insert itself into the membrane by binding to anionic and cationic groups based on hydrophobic and electrostatic interactions. However, GsMTx-4 has a weak selectivity for the anionic phospholipids over the zwitterionic phospholipids of the lipid bilayer compared to other ICK peptides. For all ICK blocker peptides, the dominating aromatic residues in the hydrophobic face are widely considered to promote the binding and adsorption of the peptide to the lipid bilayer by positively contributing to its bilayer partitioning energy. Compared with other ICK peptides, GsMTx-4 has a relatively high content of lysine residues, which causes the peptide to be more positively charged. This is important for the orientation and depth of the peptide's penetration into the lipid bilayer. Target GsMTx-4 mainly targets mechanosensitive channels from the Piezo and TRP families, such as Piezo1 and TRPC6, which are generally sensitive to bilayer tension. This corresponds to the strong bilayer partitioning energy of GsMTx-4. It also targets a spectrum of voltage-dependent sodium channels (human Nav1.1–Nav1.7), human ERG channels (Kv11.1 and Kv11.2), and acetylcholine receptors. Mode of action The molecular mechanism of inhibiting mechanosensitive channels by GsMTx-4 is bilayer-dependent. Rather than directly binding to the gating structures as other ICK peptides do, GsMTx-4 makes the mechanosensitive channels less sensitive to mechanical tension of the bilayer membrane. By its tension-dependent insertion into the membrane, GsMTx-4 is thought to distort the distribution of tension near mechanosensitive channels, which makes the transfer of force from the bilayer to the channel less efficient.
Unlike other ICK peptides, the action of GsMTx-4 is not stereospecific, as both L- and D-GsMTx-4 can block MSCs. Binding affinity Published KD and IC50 values have been reported in the literature. Therapeutic use GsMTx-4 might play a role in the treatment of volume-activated arrhythmias or muscular dystrophy; it potentially has good therapeutic properties because it is well tolerated following injection in mice, it is non-immunogenic, biologically stable, does not directly interact with MSCs, and has a long pharmacokinetic lifetime. References Ion channel toxins Spider toxins Neurotoxins
GsMTx-4
[ "Chemistry" ]
1,037
[ "Neurochemistry", "Neurotoxins" ]
72,123,000
https://en.wikipedia.org/wiki/SPINA-GR
SPINA-GR is a calculated biomarker for insulin sensitivity. It represents insulin receptor gain. How to determine GR The index is derived from a mathematical model of insulin-glucose homeostasis. For diagnostic purposes, it is calculated from fasting insulin and glucose concentrations with: . [I](∞): Fasting insulin plasma concentration (μU/mL) [G](∞): Fasting blood glucose concentration (mg/dL) G1: Parameter for pharmacokinetics (154.93 s/L) DR: EC50 of insulin at its receptor (1.6 nmol/L) GE: Effector gain (50 s/mol) P(∞): Constitutive endogenous glucose production (150 μmol/s) Clinical significance Validity Compared to healthy volunteers, SPINA-GR is significantly reduced in persons with prediabetes and diabetes mellitus, and it correlates with the M value in glucose clamp studies, triceps skinfold, subscapular skinfold and (better than HOMA-IR and QUICKI) with the two-hour value in oral glucose tolerance testing (OGTT), glucose rise in OGTT, waist-to-hip ratio, body fat content (measured via DXA) and the HbA1c fraction. Clinical utility Both in the FAST study, an observational case-control sequencing study including 300 persons from Germany, and in a large sample from the NHANES study, SPINA-GR differed more clearly between subjects with and without diabetes than the corresponding HOMA-IR, HOMA-IS and QUICKI indices. Scientific implications and other uses Together with the secretory capacity of pancreatic beta cells (SPINA-GBeta), SPINA-GR provides the foundation for the definition of a fasting-based disposition index of insulin-glucose homeostasis (SPINA-DI). In combination with SPINA-GBeta and whole-exome sequencing, calculating SPINA-GR helped to identify a new form of monogenic diabetes (MODY) that is characterised by primary insulin resistance and results from a missense variant of the type 2 ryanodine receptor (RyR2) gene (p.N2291D). Pathophysiological implications In lean subjects, it is significantly higher than in obese persons. In several populations, SPINA-GR correlated with the area under the glucose curve and 2-hour concentrations of glucose, insulin and proinsulin in oral glucose tolerance testing, concentrations of free fatty acids, ghrelin and adiponectin, and the HbA1c fraction. Predictive aspects In a longitudinal evaluation of the NHANES study, a large sample of the general US population, over 10 years, reduced SPINA-DI, calculated as the product of SPINA-GBeta times SPINA-GR, significantly predicted all-cause mortality. See also SPINA-GBeta SPINA-GD SPINA-GT Homeostatic model assessment QUICKI Notes References External links Functions for R and S for calculating SPINA-GBeta and SPINA-GR. (Permanent DOI) Diabetes Endocrinology Human homeostasis Endocrine procedures Static endocrine function tests
SPINA-GR
[ "Biology" ]
675
[ "Human homeostasis", "Homeostasis" ]
72,123,015
https://en.wikipedia.org/wiki/SPINA-GBeta
SPINA-GBeta is a calculated biomarker for pancreatic beta cell function. It represents the maximum amount of insulin that beta cells can produce per unit of time (e.g. in one second). How to determine GBeta The index is derived from a mathematical model of insulin-glucose homeostasis. For diagnostic purposes, it is calculated from fasting insulin and glucose concentrations with: . [I](∞): Fasting insulin plasma concentration (μU/mL) [G](∞): Fasting blood glucose concentration (mg/dL) Dβ: EC50 for glucose at beta cells (7 mmol/L) G3: Parameter for pharmacokinetics (58.8 s/L) Clinical significance Validity SPINA-GBeta significantly correlates with the M value in glucose clamp studies and (better than HOMA-Beta) with the two-hour value in oral glucose tolerance testing (OGTT), glucose rise in OGTT, subscapular skinfold, truncal fat content and the HbA1c fraction. It has the additional advantage that it circumvents the HOMA-blind zone, which renders the calculation of HOMA-Beta impossible if the fasting glucose concentration is 3.5 mmol/L (63 mg/dL) or below. Unlike HOMA-Beta, SPINA-GBeta can be sensibly calculated in the whole range of measurements. Reliability In repeated measurements, SPINA-GBeta had higher retest reliability than HOMA-Beta, a measurement for beta cell function from the homeostasis model assessment. Clinical utility In the FAST study, an observational case-control sequencing study including 300 persons from Germany, SPINA-GBeta differed more clearly between subjects with and without diabetes than the corresponding HOMA-Beta index. Scientific implications and other uses Together with the reconstructed insulin receptor gain (SPINA-GR), SPINA-GBeta provides the foundation for the definition of a fasting-based disposition index of insulin-glucose homeostasis (SPINA-DI). In combination with SPINA-GR and whole-exome sequencing, calculating SPINA-GBeta helped to identify a new form of monogenic diabetes (MODY) that is characterised by primary insulin resistance and results from a missense variant of the type 2 ryanodine receptor (RyR2) gene (p.N2291D). Pathophysiological implications In several populations, SPINA-GBeta correlated with the area under the glucose curve and 2-hour concentrations of glucose, insulin and proinsulin in oral glucose tolerance testing, concentrations of free fatty acids, ghrelin and adiponectin, and the HbA1c fraction. Predictive aspects In a longitudinal evaluation of the NHANES study, a large sample of the general US population, over 10 years, reduced SPINA-GBeta significantly predicted all-cause mortality. See also SPINA-GR SPINA-GD SPINA-GT Homeostatic model assessment QUICKI Notes References External links Functions for R and S for calculating SPINA-GBeta and SPINA-GR. (Permanent DOI) Diabetes Endocrinology Human homeostasis Endocrine procedures Static endocrine function tests
SPINA-GBeta
[ "Biology" ]
679
[ "Human homeostasis", "Homeostasis" ]
72,123,426
https://en.wikipedia.org/wiki/Fionn%20Dunne
Fionn Patrick Edward Dunne is a Professor of Materials Science at Imperial College London and holds the Chair in Micromechanics and the Royal Academy of Engineering/Rolls-Royce Research Chair. Professor Dunne specialises in computational crystal plasticity and microstructure-sensitive nucleation and growth of short fatigue cracks in engineering materials, mainly nickel, titanium and zirconium alloys. Early life and education Dunne completed a Bachelor of Science and Master of Engineering degree from the Department of Mechanical Engineering, University of Bristol by 1989, and moved to the Department of Mechanical and Process Engineering, University of Sheffield, for a Doctor of Philosophy in Computer Aided Modelling of Creep-cyclic Plasticity Interaction in Engineering Materials and Structures. Research and career In 1994, Dunne was appointed as a Postdoctoral research associate in the Department of Mechanical Engineering, University of Manchester (UMIST), before being appointed to a Research Fellowship at Hertford College, Oxford and the Department of Engineering Science, University of Oxford from 1996 until 2012. He became the dean of the department but moved to Imperial College London in 2012. He is an Emeritus Fellow of Hertford College, Oxford. While in Oxford, Dunne was part of the Materials for fusion & fission power program. He led the Micro-mechanical modelling techniques for forming texture, non-proportionality and failure in auto materials program at the Department of Engineering Science, University of Oxford between October 2011 and June 2012, when he moved the grant with him to the Department of Materials, Imperial College London from June 2012 until it ended in March 2015. He also led the Heterogeneous Mechanics in Hexagonal Alloys across Length and Time Scales (HexMat) program, which was funded by the Engineering and Physical Sciences Research Council (EPSRC) at a value of £5 million between May 2013 and November 2018. Dunne was the director of the Rolls-Royce Nuclear University Technology Centre at Imperial College London. He is part of a £7.2 million program on Mechanistic understanding of Irradiation Damage in fuel Assemblies (MIDAS) that is funded by the Engineering and Physical Sciences Research Council until April 2024. As of November 2022, Dunne is a Professor of Materials Science at Imperial College London and holds the Chair in Micromechanics and the Royal Academy of Engineering (RAEng)/Rolls-Royce Research Chair. He is also a Rolls-Royce consultant, and an Honorary Professor and co-director of the Beijing Institute of Aeronautical Materials (BIAM). Dunne's research focuses on computational crystal plasticity, discrete dislocation plasticity, and microstructure-sensitive nucleation and growth of short fatigue cracks in engineering materials, mainly nickel, titanium, and zirconium alloys. Awards and honours In 2010, Dunne was elected a Fellow of the Royal Academy of Engineering (FREng). In 2016, he was awarded the Institute of Materials, Minerals and Mining (IoM3) Harvey Flower Titanium Prize. In 2017, Dunne's Engineering Alloys team shared the Imperial President's Award for Outstanding Research Team with Professor Chris Phillips's team. Selected publications References Fellows of the Royal Academy of Engineering Living people Academics of Imperial College London Fellows of the Institute of Materials, Minerals and Mining Year of birth missing (living people) Metallurgists
Fionn Dunne
[ "Chemistry", "Materials_science" ]
657
[ "Metallurgists", "Metallurgy" ]
72,127,382
https://en.wikipedia.org/wiki/Ninomiya%20Kiln%20ruins
The Ninomiya Kiln ruins are an archaeological site consisting of the remains of two Nara period kilns located in what is now the Takase neighborhood of the city of Mitoyo, Kagawa Prefecture on the island of Shikoku, Japan. It has been protected by the central government as a National Historic Site since 1932. Overview The use of tiled roofs, which was a symbol of continental culture and the advanced state of the central administration, spread during the Asuka and Nara periods to Buddhist temples and regional administrative centers. The Ninomiya kilns are located on the slope of a hill within the precincts of the Ōminakami Shrine, which is the ninomiya (second shrine) of Sanuki Province. The kiln ruins were discovered in 1925. This kiln site was built from the late Heian period to the Kamakura period, and consists of the ruins of two kilns, one of which has an elliptical body and a firing port facing north. It is a noborigama climbing kiln with several vein-shaped fire grooves on the bottom and a mounting base on which roof tiles are placed. Flat tiles used at the eaves with arabesque patterns were excavated from the inside. The other kiln is a square, flat kiln, with the firing port facing northeast and a built-in base with several fire grooves. Eaves tiles, earthenware, and ink-stones have been excavated from the inside. The site is about 20 minutes by car from the JR Shikoku Yosan Line Takase Station. See also List of Historic Sites of Japan (Kagawa) References External links Mitoyo City official site Mitoyo, Kagawa Japanese pottery kiln sites History of Kagawa Prefecture Historic Sites of Japan Sanuki Province
Ninomiya Kiln ruins
[ "Chemistry", "Engineering" ]
358
[ "Kilns", "Japanese pottery kiln sites" ]
72,131,781
https://en.wikipedia.org/wiki/Liquid%20phase%20exfoliation
First demonstrated in 2008, liquid-phase exfoliation (LPE) is a solution-processing method which is used to convert layered crystals into two-dimensional nanosheets in large quantities. It is currently one of the pillar methods for producing 2D nanosheets. According to IDTechEx, the family of exfoliation techniques which are directly or indirectly descended from LPE now makes up over 60% of global graphene production capacity. This method involves adding powdered layered crystals, for example of graphite, to appropriate solvents and adding energy, often by ultrasonication, although high-shear mixing is also commonly used. The addition of energy causes a combination of fragmentation and exfoliation resulting in the removal of small nanosheets from the layered crystals. In this way graphite can be converted into large quantities of graphene nanosheets. In general, these nanosheets tend to be a few monolayers thick and of lateral sizes ranging from tens of nanometers to many microns. These dispersed nanosheets form quasi-stable suspensions as long as the solvents used have surface energies similar to that of the nanosheets. Dispersed concentrations of order 1 gram per litre can be achieved. In addition to solvents, it is also possible to use molecular stabilizers, for example surfactants or polymers, to coat the nanosheets and stabilise them against reaggregation. This has the advantage that it allows nanosheets to be suspended in water. Although this method was first applied to exfoliate graphite to yield graphene nanosheets, it has since been used to produce a wide range of 2D materials including molybdenum disulfide, tungsten diselenide, boron nitride, nickel(II) hydroxide, germanium monosulfide, SnP3, and black phosphorus. The liquid suspensions produced by liquid phase exfoliation can be used to create a range of functional structures. For example, they can be printed into thin films and networks using standard techniques such as inkjet printing. Printed structures have been used in a range of applications in areas including printed electronics, sensors and nanocomposites. Related methods include exfoliation by wet ball milling, homogenization, microfluidization and wet jet milling. Liquid phase exfoliation is different from other liquid exfoliation methods, for example the production of graphene oxide, because it is much less destructive, leaving minimal defects in the basal planes of the nanosheets. It has recently emerged that LPE can also be used to convert non-layered crystals into quasi-2D nanoplatelets. Origins Liquid phase exfoliation was first described in detail in a paper by a research team in Ireland in 2008, although a very short description of a similar process was published by the Manchester group around the same time. While other papers had previously described methods to exfoliate layered crystals in liquids, these papers were the first to describe exfoliation in liquids without any previous ion intercalation or chemical treatment. Exfoliation methods LPE involves inserting layered crystals into appropriate stabilizing liquids and then adding energy to remove nanosheets from the layered crystals. A number of different methods have been used to supply energy to the liquid. The earliest and most common is ultrasonication. In order to scale up the process, high-shear mixing was introduced in 2014.
This method proved extremely useful and inspired a number of other methods of generating shear in the suspension, including wet ball milling, homogenization, microfluidization and wet jet milling. Stabilisers The simplest stabilizing liquids are solvents with surface energy close to that of the layered crystal being exfoliated. In practice, liquids with surface tensions close to 70 mJ/m2 are used. In addition, aqueous surfactant solutions are often used. Less common, but useful for certain applications, is using molecular or polymeric additives to stabilise the exfoliated nanosheets. LPE of 2D materials beyond graphene A very wide range of 2D materials have been produced by LPE. The first material to be exfoliated was graphene in 2008. This was followed in 2011 by the exfoliation of BN, MoS2 and WS2. Since then, a wide range of 2D materials have been exfoliated, including molybdenum diselenide, tungsten diselenide, gallium sulphide, molybdenum trioxide, nickel(II) hydroxide, germanium monosulfide, SnP3, and black phosphorus. LPE of non-layered materials Recent work has shown that liquid phase exfoliation can be used to produce 2D nanoplatelets from non-layered, strongly bonded 3D bulk materials. This is intuitively unexpected, as these 3D bulk crystals consist of strong bonds in all three directions. Nevertheless, many non-layered materials such as boron, silicon, germanium, iron disulfide, iron oxide, iron trifluoride, and manganese telluride have been converted to 2D nanoplatelets when sonicated in appropriate solvents. This raises many open questions on the mechanism of the liquid-phase exfoliation process. For layered materials, the energy required to break inter-plane (predominantly van der Waals) forces is small compared to that required to break in-plane ionic or covalent bonds. The exfoliation procedure thus results in the formation of 2D nanosheets. However, for non-layered, strongly bonded 3D materials, with minimal difference in bonding between different atomic planes, there is no "easily exfoliated" direction and sonication should yield quasi-spherical particles. Nevertheless, nearly isotropic materials such as silicon have been exfoliated to give high-aspect-ratio platelets. Therefore, developing an understanding of the mechanisms by which non-layered materials are exfoliated will be important, in particular because the application scope of such non-layered 2D nanoplatelets is broad, ranging from biomedical applications to energy storage to opto-electronics.
Liquid phase exfoliation
[ "Physics", "Chemistry" ]
1,268
[ "nan", "Applied and interdisciplinary physics", "Chemical physics" ]
62,503,788
https://en.wikipedia.org/wiki/Carleman%20linearization
In mathematics, Carleman linearization (or Carleman embedding) is a technique to transform a finite-dimensional nonlinear dynamical system into an infinite-dimensional linear system. It was introduced by the Swedish mathematician Torsten Carleman in 1932. Carleman linearization is related to the composition operator and has been widely used in the study of dynamical systems. It has also been used in many applied fields, such as in control theory and in quantum computing. Procedure Consider the following autonomous nonlinear system: where denotes the system state vector. Also, and 's are known analytic vector functions, and is the element of an unknown disturbance to the system. At the desired nominal point, the nonlinear functions in the above system can be approximated by Taylor expansion where is the partial derivative of with respect to at and denotes the Kronecker product. Without loss of generality, we assume that is at the origin. Applying Taylor approximation to the system, we obtain where and . Consequently, the following linear system for higher orders of the original states is obtained: where , and similarly . Employing the Kronecker product operator, the approximated system is presented in the following form where , and and matrices are defined in (Hashemian and Armaou 2015). See also Carleman matrix Composition operator References External links A lecture about Carleman linearization by Igor Mezić Dynamical systems Functions and mappings Functional analysis Eponyms in mathematics
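As a minimal illustration of the procedure (a standard textbook-style example, not taken from the sources cited above), consider the scalar nonlinear system dx/dt = -x + x^2. Introducing the auxiliary states y_k = x^k turns it into an infinite but linear system:

```latex
\dot{y}_k = k\,x^{k-1}\,\dot{x} = k\,x^{k-1}\left(-x + x^{2}\right) = -k\,y_k + k\,y_{k+1},
\qquad k = 1, 2, 3, \dots
```

Truncating at a finite order N, for instance by setting y_{N+1} approximately equal to zero, yields a finite-dimensional linear approximation of the original nonlinear dynamics.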
Carleman linearization
[ "Physics", "Mathematics" ]
287
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical objects", "Mechanics", "Mathematical relations", "Dynamical systems" ]
62,507,181
https://en.wikipedia.org/wiki/TectoRNA
TectoRNAs are modular RNA units able to self-assemble into larger nanostructures in a programmable fashion. They are generated by rational design through an approach called RNA architectonics, which makes use of RNA structural modules identified in natural (or sometimes artificial) RNA molecules to form pre-defined 3D structures spontaneously. RNA's capacity for catalysis and non-canonical base pairing makes it an attractive biomolecule for design. By combining computational modeling and biochemical characterization, RNA can be shaped into defined geometries and perform various functions. As such, tectoRNA can also carry functions to build large functional nanostructures which can be used for synthetic biology and nanotechnology applications. Overview Nadrian Seeman was the first to propose that DNA could be used as a material for generating nanoscopic self-assembling structures. This concept was extended to RNA by Jaeger and collaborators in 2000 by taking advantage of the concept of RNA tectonics initially proposed by Jaeger and Westhof and collaborators in 1996. To design a tectoRNA, deep knowledge of RNA tertiary structure is required. The rational design of tectoRNA is based on known X-ray and NMR structures. TectoRNAs can be seen as analogous to words, and, by using the natural syntax of RNA structural motifs, all kinds of thermodynamically stable shapes can be rationally designed and synthesized. Sequences specifying stable, recurrent, and modular structural motifs, e.g. GNRA tetraloops, kissing loops, kink turns, A-minor interactions, etc., can be encoded within tectoRNAs to control their geometry and self-assembly into nanostructures. However, tectoRNA can also incorporate flexible junctions and RNA modules (or RNA aptamers) responsive to ligands. Nowadays, extensive databases and powerful algorithms can be useful tools to design sequences of tectoRNAs. The folding of tectoRNAs is optimized by minimizing the free energy and maximizing their thermodynamic stability. The RNA sequences are mainly transcribed in vitro, and the folding conditions are also important: Mg2+ and other salts must be added to the solution, and their concentrations must be well controlled for the RNA to fold properly. The expected folding and self-assembly properties are characterized by a wide range of biochemical tools. Native poly-acrylamide gel electrophoresis (PAGE) is used to test the Kd of self-assembled tectoRNAs. Temperature gradient gel electrophoresis (TGGE) is applied to characterize the thermodynamic stability of nanostructures. Chemical probing, like DMS probing, allows the folding of an RNA structure to be probed indirectly. Atomic force microscopy (AFM), transmission electron microscopy (TEM), and cryo-EM are powerful techniques which give a direct view of what RNA nanostructures look like. So far, delicate structures such as squares or hearts have been successfully demonstrated in different studies. RNA architectonics or RNA modular origami TectoRNAs are the basic self-assembling units in RNA architectonics. In RNA architectonics, the sequence length of a tectoRNA is usually less than 200 nts. TectoRNAs typically originate from single-stranded RNA molecules and, once folded, act like LEGO bricks to build up higher order architectures. They can be synthesized, folded and self-assembled into multimeric nanostructures during transcription under isothermal conditions. As such, the RNA architectonics approach can be seen as RNA modular origami.
This approach was extended to the synthesis of larger self-assembling units of more than 400 nts. More recently, RNA origami was extended to the design of long single-stranded RNA sequences able to fold into large pre-defined nanostructures. Hence, RNA modular origami (originally called RNA architectonics), RNA origami and RNA single-stranded origami all originate from the same concept, in which RNA sequences can be designed to self-fold and assemble into predefined shapes. Note that conceptually, DNA single-stranded origami is more related to RNA origami than to DNA origami. Applications Though RNA nanotechnology is still a burgeoning field, tectoRNAs and the resulting nanostructures have already been shown to be useful in nanomedicine, nanotechnology, and synthetic biology. This includes the development of programmable nano-scaffolds and nano-particles for the delivery of RNA therapeutics. As such, RNA nanoparticles, like hexagonal nanorings, can be used as delivery vehicles carrying therapeutic RNA to target cells. It is also possible to incorporate modified nucleotides within tectoRNAs in order to increase their chemical stability and resistance to degradation. Yet, the full potential of tectoRNAs and the resulting nanostructures for recruiting proteins and ligands still remains largely unexplored. See also DNA nanotechnology DNA origami RNA origami References RNA Nanotechnology
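As a hedged illustration of the computational design step mentioned above (checking that a candidate sequence folds into a stable secondary structure), the sketch below uses the ViennaRNA Python bindings; the sequence is an arbitrary placeholder, and real tectoRNA design additionally requires validating tertiary modules and assembly, which this snippet does not attempt.

```python
# Sketch: predict the minimum-free-energy secondary structure of a candidate
# RNA sequence with ViennaRNA. The sequence below is a hypothetical placeholder.
import RNA

candidate = "GGGAAACGGUUCGCCGAAAGGCGAACCGUUUCCC"   # placeholder tectoRNA fragment
structure, mfe = RNA.fold(candidate)               # dot-bracket string, energy in kcal/mol

print(structure)
print(f"Minimum free energy: {mfe:.2f} kcal/mol")
```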
TectoRNA
[ "Materials_science", "Engineering" ]
1,060
[ "Nanotechnology", "Materials science" ]
62,510,434
https://en.wikipedia.org/wiki/GUIDE-Seq
GUIDE-Seq (Genome-wide, Unbiased Identification of DSBs Enabled by Sequencing) is a molecular biology technique that allows for the unbiased in vitro detection of off-target genome editing events in DNA caused by CRISPR/Cas9 as well as other RNA-guided nucleases in living cells. Similar to LAM-PCR, it employs multiple PCRs to amplify regions of interest that contain a specific insert that preferentially integrates into double-stranded breaks. As gene therapy is an emerging field, GUIDE-Seq has gained traction as a cheap method to detect the off-target effects of potential therapeutics without needing whole genome sequencing. Principles Conceived to work in concert with next-gen sequencing platforms such as Illumina dye sequencing, GUIDE-Seq relies on the integration of a blunt, double-stranded oligodeoxynucleotide (dsODN) that has been phosphothioated on two of the phosphate linkages on the 5' end of both strands. The dsODN cassette integrates into any site in the genome that contains a double-stranded break (DSB). This means that along with the target and off-target sites that may exist as a result of the activity of a nuclease, the dsODN cassette will also integrate into any spurious sites in the genome that have a DSB. This makes it critical to have a dsODN only condition that controls for errant and naturally occurring DSBs, and is required to use the GUIDE-seq bioinformatic pipeline. After integration of the dsODN cassette, genomic DNA (gDNA) is extracted from the cell culture and sheared to 500bp fragments via sonication. The resulting sheared gDNA undergoes end-repair and adapter ligation. From here, DNA specifically containing the dsODN insert is amplified via two rounds of polymerase chain reaction (PCR) that proceeds in a unidirectional manner starting from the primers that are complementary to the dsODN. This process allows for the reading of the adjacent sequences, both the sense and anti-sense strands, flanking the insert. The final product is a panoply of amplicons, describing the DSB distribution, containing indices for sample differentiation, p5 and p7 Illumina flow-cell adapters, and the sequences flanking the dsODN cassette. GUIDE-Seq is able to achieve detection of rare DSBs that occur with a 0.1% frequency, however this may be as a result of the limitations of next-generation sequencing platforms. The greater the depth of reads an instrument is able to achieve, the better it can detect rarer events. Additionally, GUIDE-Seq is able to detect sites not predicted by the "in silico" methods which often will predict sites based on sequence similarity and percent mismatch. There have been cases of GUIDE-Seq not detecting any off-targets for certain guide RNAs, suggesting that some RNA-guided nucleases may have no associated off-targets. GUIDE-Seq has been used to show that engineered variants of Cas9 can have reduced off-target effects. Caveats GUIDE-Seq has been shown to miss some off-targets, when compared to the genome-wide sequencing DIGENOME-Seq method, due to the nature of its targeting. Another caveat is that GUIDE-Seq has been observed to generate slightly different off-target sites depending on the cell line. This could be due to cell lines having different parental genetic origins, cell line specific mutations, or, in the case of some immortal cell lines such as K562s, having aneuploidy. This suggests that it would be pertinent for researchers to test multiple cell lines to validate efficacy and accuracy. GUIDE-Seq cannot be used to identify off-targets in vivo. 
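The readout principle described above, recovering the genomic sequence on either side of an integrated dsODN tag, can be sketched as a toy scan over sequencing reads. The tag and reads below are invented placeholders, and real GUIDE-Seq analysis relies on a dedicated bioinformatic pipeline rather than this simplification.

```python
# Toy sketch of the GUIDE-Seq readout idea: find reads containing the dsODN tag
# and report the flanking sequences. Tag and reads are hypothetical placeholders.

DSODN_TAG = "GTTTAATTGAGTTGTCATATGT"   # placeholder, not the published dsODN sequence

def flanking_sequences(reads, tag=DSODN_TAG):
    hits = []
    for read in reads:
        pos = read.find(tag)
        if pos != -1:
            hits.append((read[:pos], read[pos + len(tag):]))  # (upstream, downstream)
    return hits

reads = [
    "ACGTACGTAAGTTTAATTGAGTTGTCATATGTCCGGTTAACC",  # contains the tag
    "TTTTGGGGCCCCAAAA",                            # no tag, ignored
]
for upstream, downstream in flanking_sequences(reads):
    print("upstream:", upstream, "| downstream:", downstream)
```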
References Genome editing Molecular biology
GUIDE-Seq
[ "Chemistry", "Engineering", "Biology" ]
810
[ "Genetics techniques", "Genome editing", "Genetic engineering", "Molecular biology", "Biochemistry" ]
63,402,706
https://en.wikipedia.org/wiki/Pentaerythritol%20tetraacrylate
Pentaerythritol tetraacrylate (PETA, sometimes PETTA, PETRA) is an organic compound. It is a tetrafunctional acrylate ester used as a monomer in the manufacture of polymers. As it is a polymerizable acrylate monomer, it is nearly always supplied with an added polymerisation inhibitor, such as MEHQ (monomethyl ether hydroquinone). Uses PETA is part of a family of acrylates used in epoxy resin chemistry and ultraviolet curing of coatings. Similar monomers used are 1,6-hexanediol diacrylate and trimethylol propane triacrylate. It is a derivative of pentaerythritol. One of the key uses of the material is in polymer synthesis, where it can form micelles and block copolymers. The molecule's acrylate group functionality enables it to undergo the Michael reaction with amines. It is therefore sometimes used in epoxy chemistry, enabling a large reduction in cure time. As the molecule has four acrylate functional groups, it confers high cross-link density. Ethoxylation may be used to produce ethoxylated versions which find use in electron beam curing. The material also has pharmaceutical uses. See also 1,6-Hexanediol diacrylate Trimethylolpropane triacrylate Acrylic acid References External links Safety Data Sheet Acrylate esters Monomers
Pentaerythritol tetraacrylate
[ "Chemistry", "Materials_science" ]
312
[ "Monomers", "Polymer chemistry" ]
63,409,943
https://en.wikipedia.org/wiki/NAIL-MS
NAIL-MS (short for nucleic acid isotope labeling coupled mass spectrometry) is a technique based on mass spectrometry used for the investigation of nucleic acids and its modifications. It enables a variety of experiment designs to study the underlying mechanism of RNA biology in vivo. For example, the dynamic behaviour of nucleic acids in living cells, especially of RNA modifications, can be followed in more detail. Theory NAIL-MS is used to study RNA modification mechanisms. Therefore, cells in culture are first fed with stable isotope labeled nutrients and the cells incorporate these into their biomolecules. After purification of the nucleic acids, most often RNA, analysis is done by mass spectrometry. Mass spectrometry is an analytical technique that measures the mass-to-charge ratio of ions. Pairs of chemically identical nucleosides of different stable-isotope composition can be differentiated in a mass spectrometer due to their mass difference. Unlabeled nucleosides can therefore be distinguished from their stable isotope labeled isotopologues. For most NAIL-MS approaches it is crucial that the labeled nucleosides are more than 2 Da heavier than the unlabeled ones. This is because 1.1% of naturally occurring carbon atoms are 13C isotopes. In the case of nucleosides this leads to a mass increase of 1 Da in ~10% of the nucleosides. This signal would disturb the final evaluation of the measurement. NAIL-MS can be used to investigate RNA modification dynamics by changing the labeled nutrients of the corresponding growth medium during the experiment. Furthermore, cell populations can be compared directly with each other without effects of purification bias. Furthermore, it can be used for the production of biosynthetic isotopologues of most nucleosides which are needed for quantification by mass spectrometry and even for the discovery of yet unknown RNA modifications. General procedure In general, cells are cultivated in unlabeled or stable (non-radioactive) isotope labeled media. For example, the medium can contain glucose labeled with six carbon-13 atoms (13C) instead of the normal carbon-12 (12C). Cells growing in this medium, will, depending on model organism, incorporate the heavy glucose into all of their RNA molecules. Thereafter, all nucleotides are 5 Da heavier than their unlabeled isotopologues due to a complete carbon labeling of the ribose. After cultivation and appropriate labeling of the cells, they are generally harvested using phenol/chloroform/guanidinium isothiocyanate. Other extraction methods are possible and sometimes needed (e.g. for yeast). RNA is then isolated by Phenol-Chloroform extraction and iso-Propanol precipitation. Further purification of specific RNA species (e.g. rRNA, tRNA) is usually done by size-exclusion chromatography (SEC) but other approaches are available as well. For most applications the final product needs to be enzymatically digested to nucleosides before analysis by LC-MS. Therefore, digestion enzymes such as benzonase, NP1 and CIP are used. Typically, a triple quadrupole in MRM mode is used for the measurements. Labeling of cells How the labeling of RNA molecules is achieved depends on the model organism. For E.coli (bacteria) the minimum medium M9 can be used and supplemented with the stable isotope labeled variants of the needed salts. This enables labeling with 13C-carbon, 15N-nitrogen, 34S-sulfur and 2H-hydrogen. 
In S.cerevisiae (yeast) there are currently two possibilities: First, the use of commercially available complete growth medium, which enables labeling with 13C-carbon and/or 15N-nitrogen and second the use of minimal YNB medium which has to be supplemented with several amino acids and glucose which can be added as stable isotope labeled variants in order to achieve 13C-carbon, 15N-nitrogen and 2H-hydrogen labeling of RNA. While labeling in model organisms like E.coli and S.cerevisiae is fairly simple, stable isotope labeling in cell culture is much more challenging as the composition of the growth media is much more complex. Neither the supplementation of stable isotope labeled glucose nor the supplementation of stable isotope labeled variants of simple precursors of nucleoside biosynthesis such as glutamine and/or aspartate is sufficient for a defined mass increase higher than 2 Da. Instead, most cells kept in cell culture can be fed with stable isotope labeled methionine for labeling of methyl groups and with stable isotope labeled variants of adenin and uridine for labeling of the nucleoside's base body. Special care must be taken when supplementing the medium with FBS (fetal bovine serum), as it also contains small metabolites used for the biosynthesis of nucleosides. The use of dialyzed FBS is therefore advisable when defined labeling of all nucleosides is desired. Applications With NAIL-MS different experiment designs are possible. Production of SILIS NAIL-MS can be used to produce stable isotope labeled internal standards (ISTD). Therefore, cells are grown in medium which results in complete labeling of all nucleosides. The purified mix of nucleosides can then be used as ISTD which is needed for accurate absolute quantification of nucleosides by mass spectrometry. This mixture of labeled nucleosides is also referred to as SILIS (stable isotope labeled internal standard). The advantage of this approach is, that all modifications present in an organism can thereby be biosynthesized as labeled compounds. The production of SILIS was already done before the term NAIL-MS emerged. Comparative Experiments A comparative NAIL-MS experiment is quite similar to a SILAC experiment but for RNA instead of proteins. First, two populations of the respective cells are cultivated. One of the cell populations is fed with growth medium containing unlabeled nutrients, whereas the second population is fed with growth medium containing stable isotope labeled nutrients. The cells then incorporate the respective isotopologues into their RNA molecules. One of the cell populations serves as a control group whereas the other is subject to the associated research (e.g. KO strain, stress). Upon harvesting of the two cell populations they are mixed and co-processed together to exclude purification-bias. Due to the distinct masses of incorporated nutrients into the nucleosides a differentiation of the two cell populations is possible by mass spectrometry. Pulse-Chase Experiments Upon initiation of a pulse-chase experiment the medium is switched from medium(1) to medium(2). The two media must only differ in their isotope content. Thereby it is possible to distinguish between RNA molecules already existent before experiment initiation (= RNA molecules grown in medium(1)) and RNA molecules that are newly transcribed after experiment initiation (= RNA molecules grown in medium(2)). This allows the detailed study of modification dynamics in vivo. 
The supplementation of labeled methionine in either medium(1) or medium(2) allows the tracing of methylation processes. Other isotopically labeled metabolites potentially allow for further modification analysis. Altogether NAIL-MS enables the investigation of RNA modification dynamics by mass spectrometry. With this technique, enzymatic demethylation has been observed for several RNA damages inside living bacteria. Discovery of new RNA modifications For the discovery of uncharacterized modifications cells are grown in unlabeled or 13C‑labeled or 15N‑labeled or 2H‑labeled or 34S‑labeled medium. Unknown signals occurring during mass spectrometry are then inspected in all differentially labeled cultures. If retention times of unknown compounds with appropriately divergent m/z values overlap, a sum formula of the compound can be postulated by calculating the mass differences of the overlapping signal in the differentially labeled cultures. With this method several new RNA modifications could be discovered. This experimental design also was the initial idea that started the concept of NAIL-MS. Oligonucleotide NAIL-MS NAIL-MS can also be applied to oligonucleotide analysis by mass spectrometry. This is useful when the sequence information is to be retained. References External links https://www.cup.lmu.de/oc/kellner/research/ https://iimcb.genesilico.pl/modomics/ Biochemistry detection methods Biotechnology Epigenetics Genetics techniques RNA Isotopes Mass spectrometry
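The sum-formula reasoning described above can be made concrete with a small calculation: for a nucleoside with a known number of carbon and nitrogen atoms, the expected mass shifts under full 13C or 15N labeling follow directly from the isotopic mass differences. The snippet below is an illustrative calculation only and is not part of any published NAIL-MS software.

```python
# Expected mass shifts of a nucleoside under full 13C or 15N labeling,
# based on the isotopic mass differences (in Da).
DELTA_13C = 13.00335 - 12.00000   # ~1.00336 Da per carbon atom
DELTA_15N = 15.00011 - 14.00307   # ~0.99703 Da per nitrogen atom

def label_shifts(n_carbon, n_nitrogen):
    """Return the (13C shift, 15N shift) in Da for fully labeled molecules."""
    return n_carbon * DELTA_13C, n_nitrogen * DELTA_15N

# Example: adenosine (C10H13N5O4) has 10 carbon and 5 nitrogen atoms
shift_c, shift_n = label_shifts(n_carbon=10, n_nitrogen=5)
print(f"13C shift: {shift_c:.3f} Da, 15N shift: {shift_n:.3f} Da")
```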
NAIL-MS
[ "Physics", "Chemistry", "Engineering", "Biology" ]
1,789
[ "Biochemistry methods", "Genetics techniques", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Biochemistry detection methods", "Isotopes", "Chemical tests", "Genetic engineering", "Biotechnology", "Mass spectrometry", "nan", "Nuclear physics", "Matter" ]
63,411,286
https://en.wikipedia.org/wiki/Infeld%E2%80%93Van%20der%20Waerden%20symbols
The Infeld–Van der Waerden symbols, sometimes called simply Van der Waerden symbols, are an invariant symbol associated with the Lorentz group used in quantum field theory. They are named after Leopold Infeld and Bartel Leendert van der Waerden. The Infeld–Van der Waerden symbols are index notation for Clifford multiplication of covectors on left-handed spinors giving right-handed spinors or vice versa, i.e. they are the off-diagonal blocks of the gamma matrices. The symbols are typically denoted in Van der Waerden notation as and so have one Lorentz index (m), one left-handed (undotted Greek), and one right-handed (dotted Greek) Weyl spinor index. They satisfy They need not be constant, however, and can therefore be formulated on curved spacetime. Background The existence of this invariant symbol follows from a result in the representation theory of the Lorentz group, or more properly of its Lie algebra. Labeling irreducible representations by , the spinor and its complex conjugate representations are the left and right fundamental representations and while the tangent vectors live in the vector representation. The tensor product of one left and right fundamental representation is the vector representation. A dual statement is that the tensor product of the vector, left, and right fundamental representations contains the trivial representation, which is in fact generated by the construction of the Lie algebra representations through the Clifford algebra (see below). Representations of the Clifford algebra Consider the space of positive Weyl spinors of a Lorentzian vector space with dual . Then the negative Weyl spinors can be identified with the vector space of complex conjugate dual spinors. The Weyl spinors implement "two halves of a Clifford algebra representation", i.e. they come with a multiplication by covectors implemented as maps and which we will call Infeld–Van der Waerden maps. Note that in a natural way we can also think of the maps as a sesquilinear map associating a vector to a left- and right-handed spinor respectively. That the Infeld–Van der Waerden maps implement "two halves of a Clifford algebra representation" means that for covectors resp. , so that if we define then Therefore extends to a proper Clifford algebra representation . The Infeld–Van der Waerden maps are real (or hermitian) in the sense that the complex conjugate dual maps coincide (for a real covector ) : . Likewise we have . Now the Infeld–Van der Waerden symbols are the components of the maps and with respect to bases of and with induced bases on and . Concretely, if T is the tangent space at a point O with local coordinates () so that is a basis for and is a basis for , and () is a basis for , is a dual basis for with complex conjugate dual basis of , then Using local frames of the (co)tangent bundle and a Weyl spinor bundle, the construction carries over to a differentiable manifold with a spinor bundle. Applications The symbols are of fundamental importance for calculations in quantum field theory in curved spacetime, and in supersymmetry. In the presence of a tetrad for "soldering" local Lorentz indices to tangent indices, the contracted version can also be thought of as a soldering form for building a tangent vector out of a pair of left and right Weyl spinors. Conventions In flat Minkowski space, A standard component representation is in terms of the Pauli matrices, hence the notation.
In an orthonormal basis with a standard spin frame, the conventional components are the identity and Pauli matrices shown above. Note that these are the blocks of the gamma matrices in the Weyl (chiral) basis convention. There are, however, many conventions. Citations References Mathematical physics Representation theory of Lie groups Spinors
Infeld–Van der Waerden symbols
[ "Physics", "Mathematics" ]
787
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
56,442,223
https://en.wikipedia.org/wiki/Blocknots
Blocknots were books of random number sequences, organized by numbered rows and columns, that were used as additives in the reciphering of Soviet Union codes during World War II. A Blocknot consisted of fifty sheets of 5-figure random additive, with 100 additive groups to a sheet. No sheet was used more than once, so the blocknots were in effect a form of one-time pad. The Soviet Union's highest-grade ciphers used in the East were 5-figure codebooks enciphered with the Blocknot, and these were generally considered unbreakable. Technical Description Blocknots were distributed centrally from an office in Moscow. Every Blocknot contained 5-figure groups on a number of sheets, for the enciphering of 5-figure messages. The encipherment was effected by applying additives taken from the pad, of which 50-100 5-figure groups appeared. Each pad had a 5-figure number and each sheet had a 2-figure number running consecutively. There were 5 different types of Blocknots, in two different categories: the Individual, in which each table of random numbers was used only once, and the General, in which each page of the Blocknot was valid for one day. The security of the additive sequence rested on the choice of different starting points for each message. In 5-figure messages, the blocknot indicator was one of the first 10 groups in the message. Its position changed at long intervals, but was always easy to re-identify. The Russians differentiated between three types of blocks: The 3-block, DREIERBLOCK. I-block for Individual Block: 50 pages, additive read off in one direction only. The messages could be used and read only between 2 wireless telegraphy stations on one net. The 6-block, SECHSERBLOCK. Z-block for Circular Block: 30 pages, additive read off in either direction. The messages could be used and read between all W/T stations in a net. The 2-block, ZWEIERBLOCK. OS-block. Used only in traffic from lower to higher formations. Two other types were used in lower echelons: the Notblock, used in an emergency, and a Blocknot used for passing on traffic. The distribution of Blocknots was carried out centrally from Moscow to Army Groups and then to Armies. The Army was responsible for their distribution throughout the lower levels of the army down to company level. Independent units took their cipher material with them. Occasionally the same blocknot was distributed to two units on different parts of the front, which enabled Depth to be established. Records of all Blocknots used were kept in Berlin, and when a repeat was noticed a BLOCKNOT ANGEBOT message was sent out to all German Signals units to indicate that it might have been possible to break the code using it. There was no certainty in this. A cryptanalyst with the General der Nachrichtenaufklärung stated while being interrogated by TICOM: It seems that depths of up to 8 were established at the beginning of the Russian Campaign but that no 5-figure code was broken after May 1943. German cryptanalysts who were prisoners of war stated under interrogation that each of the figures 0 to 9 was placed en clair, usually within the first ten groups of the text or sometimes at the end. One indicator was the Blocknot number, which consisted of two random figures, a figure representing the type, and two figures giving the page of the Blocknot being used. In long messages, 000000 was placed in the message when the end of a page had been reached.
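The additive principle described above can be illustrated with a short sketch. The group values and the page below are invented for the example, and digit-wise, non-carrying modulo-10 addition is the mechanism usually assumed for such additive systems, not a quotation of the actual Soviet procedure:

# Hedged sketch of 5-figure additive encipherment with a blocknot page.
# All group values below are hypothetical; real sheets held 100 random groups.

def add_groups(code_group: str, additive: str) -> str:
    """Digit-wise addition modulo 10, with no carrying between digits."""
    return "".join(str((int(c) + int(a)) % 10) for c, a in zip(code_group, additive))

def strip_groups(cipher_group: str, additive: str) -> str:
    """Inverse operation performed by the receiving cipher clerk."""
    return "".join(str((int(c) - int(a)) % 10) for c, a in zip(cipher_group, additive))

blocknot_page = ["64205", "90214", "55873", "01942"]   # one hypothetical sheet of additive
codebook_groups = ["10537", "88210", "41999"]          # hypothetical 5-figure code groups

cipher = [add_groups(g, blocknot_page[i]) for i, g in enumerate(codebook_groups)]
plain = [strip_groups(g, blocknot_page[i]) for i, g in enumerate(cipher)]
assert plain == codebook_groups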
Chi number The Chi-number was the serial numbering of all 5-figure messages passing through the hands of the Cipher Officer, starting on the first of January and ending on the thirty-first of December of the current year. It always appeared as the last group in an intercepted message, e.g. 00001 on the 1st of January or when the unit was newly set up. The progression of Chi-numbers was carefully observed and recorded in the form of a graph. A Russian corps sent about 10 5-figure messages per day, an Army about 20-30, and a Front about 60-100. After only a relatively short time, the individual curves separated sharply and the type of formation could be recognized by the height of the Chi-number alone. Monitoring Blocknots were tracked in a card index that was maintained by the Signal Intelligence Evaluation Centre (NAAS). The functions of the NAAS included evaluation and traffic analysis, cryptanalysis, and the collation and dissemination of intelligence. This card index was one amongst several card indexes. A careful recording and study of blocks provided positive clues in the identification and tracking of formations using 5-figure ciphers. The index was subdivided into two files: the Search card index, which contained all blocknots and chi-numbers whether or not they were known, and the Unit card index, which contained only known Block and Chi-numbers. Inspector Berger, who was the chief cryptanalyst of NAAS 1, stated that the two files formed: The most important and surest instruments for identifying Russian radio nets, known to him. The Blocknots were also used in the Stationary Intercept Company (Feste), a type of military unit designed to work at a lower level than the NAAS, at Army level; these units were semi-motorized and closer to the front. The Feste used the Blocknot value, along with several other parameters, to build a network diagram. The network diagram was studied extensively as part of a 6-stage process that involved several departments within the Feste. The final outcome was a metric which determined the most interesting circuits for traffic monitoring, and the least interesting, where monitoring of traffic should cease. Analysis Johannes Marquart was a mathematician and cryptanalyst who initially worked for Inspectorate 7/VI and later led Referat Ia of Group IV of the General der Nachrichtenaufklärung. Marquart was assigned the study of the Soviet Union's Blocknot traffic. Marquart and his unit conducted extensive research in an attempt to discover the method by which the Blocknots were produced. All the counts which they made, however, failed to reveal any non-random characteristics in the design of the tables, and while they thought the Blocknots must have been generated by machine, they were never able to draw any concrete deductions as a result of their research. Example The Soviet 3rd Guards Tank Army transmits a 5-figure message with the Blocknot 37581 (one of the first 10 groups in the message). On the same day the Block 37582 was used by the same formation. The next day 37583 appeared. Thereafter, for a period, the Army was not heard by German wireless telegraphy intercept operators, as it was maintaining wireless silence. After a few days, an unidentified net with the Blocknot 37588 is picked up. This message net is claimed, because of the proximity of the blocks (88/83), to be the 3rd Guards Tank Army. The missing Blocknots 84-87 were presumably used in telegraphic, telephonic or courier communications. In most cases the Chi number provided confirmation of the initial assumption based on the proximity of blocknots.
Notes References Cryptography Ciphers
Blocknots
[ "Mathematics", "Engineering" ]
1,491
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
56,446,915
https://en.wikipedia.org/wiki/Lamb%20surface
In fluid dynamics, Lamb surfaces are smooth, connected, orientable two-dimensional surfaces which are simultaneously stream-surfaces and vortex surfaces, named after the physicist Horace Lamb. Lamb surfaces are everywhere orthogonal to the Lamb vector, the cross product of the vorticity and velocity fields. The necessary and sufficient condition for their existence is an integrability condition on the Lamb vector field. Flows with Lamb surfaces are neither irrotational nor Beltrami, but generalized Beltrami flows do have Lamb surfaces. See also Beltrami flow References Fluid dynamics
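In standard vector notation, the integrability (Frobenius) condition for a nowhere-vanishing Lamb vector field to admit a family of everywhere-orthogonal surfaces — stated here as the general surface-orthogonality criterion, which is presumably the condition meant above — reads:

\mathbf{l}\cdot(\nabla\times\mathbf{l}) = 0, \qquad \mathbf{l} = \boldsymbol{\omega}\times\mathbf{u} \neq \mathbf{0},

where \boldsymbol{\omega} = \nabla\times\mathbf{u} is the vorticity and \mathbf{u} the velocity field.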
Lamb surface
[ "Chemistry", "Engineering" ]
100
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
56,447,040
https://en.wikipedia.org/wiki/Lamb%20vector
In fluid dynamics, the Lamb vector is the cross product of the vorticity vector and the velocity vector of the flow field, named after the physicist Horace Lamb. The Lamb vector is thus defined as the cross product of the vorticity field and the velocity field of the flow. It appears in the Navier–Stokes equations through the material derivative term, specifically via the convective acceleration term. In irrotational flows the Lamb vector is zero, as it is in Beltrami flows. The concept of the Lamb vector is widely used in turbulent flows. The Lamb vector is analogous to the electric field when the Navier–Stokes equation is compared with Maxwell's equations. Gromeka–Lamb equation The Euler equations written in terms of the Lamb vector are referred to as the Gromeka–Lamb equation, named after Ippolit S. Gromeka and Horace Lamb. Properties of Lamb vector The divergence of the Lamb vector can be derived from vector identities. At the same time, the divergence can also be obtained from the Navier–Stokes equation by taking its divergence. In particular, for incompressible flow with body forces, the Lamb vector divergence reduces to a simpler form. In regions where the divergence takes one sign there is a tendency for accumulation, and vice versa. References Fluid dynamics Vector calculus
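In one common notation (a sketch of the standard formulas for incompressible, inviscid flow without body forces; sign and grouping conventions vary between authors), the definition and the Gromeka–Lamb form of the momentum equation read:

\mathbf{l} = \boldsymbol{\omega}\times\mathbf{u}, \qquad \boldsymbol{\omega} = \nabla\times\mathbf{u},

\frac{\partial\mathbf{u}}{\partial t} + \boldsymbol{\omega}\times\mathbf{u} = -\nabla\!\left(\frac{p}{\rho} + \frac{|\mathbf{u}|^{2}}{2}\right),

which follows from the Euler equations using the identity (\mathbf{u}\cdot\nabla)\mathbf{u} = \boldsymbol{\omega}\times\mathbf{u} + \nabla(|\mathbf{u}|^{2}/2).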
Lamb vector
[ "Chemistry", "Engineering" ]
269
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
56,449,950
https://en.wikipedia.org/wiki/Biomaterials%20Science
Biomaterials Science is a peer-reviewed scientific journal that explores the underlying science behind the function, interactions and design of biomaterials. It is published by the Royal Society of Chemistry. The current editor-in-chief is Jianjun Cheng (Westlake University, China), while the executive editor is Maria Southall. The journal was established in 2013 and since January 2018 has been the official journal of the European Society for Biomaterials. Since the start of 2016 the journal has been online only. It publishes primary research (Communications and full paper articles) and review-type articles (reviews and minireviews). Abstracting and indexing The journal is abstracted and indexed in: Science Citation Index Index Medicus/MEDLINE/PubMed Scopus See also List of scientific journals in chemistry Journal of Materials Chemistry B MedChemComm References External links Materials science journals Royal Society of Chemistry academic journals Biochemistry journals Academic journals established in 2013 Monthly journals English-language journals
Biomaterials Science
[ "Chemistry", "Materials_science", "Engineering" ]
203
[ "Biochemistry journals", "Biochemistry literature", "Materials science journals", "Materials science" ]
73,532,723
https://en.wikipedia.org/wiki/Ultrasound-triggered%20drug%20delivery%20using%20stimuli-responsive%20hydrogels
Ultrasound-triggered drug delivery using stimuli-responsive hydrogels refers to the process of using ultrasound energy to induce drug release from hydrogels that are sensitive to acoustic stimuli. This approach is one of many stimuli-responsive drug delivery systems that have gained traction in recent years because of the localization and specificity of disease treatment they allow. Although recent developments in this field highlight its potential in treating certain diseases such as COVID-19, many major challenges remain to be addressed and overcome before more of the related biomedical applications are clinically translated into standard of care. Types of Hydrogels Used in Drug Delivery Systems Traditional Hydrogels Hydrogels are three-dimensional structures consisting of hydrophilic macromolecules (e.g., polymers and colloids) that form networks through cross-linking processes. The macromolecules involved in the formation of hydrogels are able to absorb and retain large amounts of water and other aqueous substances. Since their discovery in 1960, hydrogels have become a crucial component in biomedical research and applications. A few examples of hydrogel use include organ regeneration, wound healing, and drug delivery. Hydrogels are generally classified based on the following characteristics: material, crosslinking mechanism, physical structure, electric charge, and response to stimuli. Hydrogels are synthesized from natural polymers, synthetic polymers, or combinations of the two. The main examples of natural polymers used to derive hydrogels include polysaccharides, polypeptides, and polynucleotides. Several known examples of synthetic polymeric constituents include poly(vinyl alcohol) (PVA), poly(acrylic acid) (PAA), and poly(2-hydroxyethyl methacrylate) (PHEMA). The crosslinking of the hydrophilic macromolecules can be either physical or chemical, resulting in physical- or chemical-type hydrogels. Physical hydrogels contain reversible matrices held together by hydrogen bonds and other non-covalent interactions, while chemical hydrogels are composed of irreversible matrices that are molecularly held together by covalent bonds. Used as another parameter in characterizing gels, electric charge (also referred to as ionic character) describes the ability of the macromolecules to drive swelling behavior. Hydrogels classified based on this property fall under three main categories: cationic, anionic, and amphoteric. Bawa et al. demonstrated that cationic gels swell in acidic environments but remain condensed in basic environments. Smart Hydrogel Polymers Since traditional hydrogels were able to encapsulate and carry materials, research into drug-loaded hydrogels began to expand in the field of drug delivery. Dubbed “smart hydrogels” or “stimuli-responsive hydrogels”, these gels are able to respond dynamically to external or internal stimuli in addition to possessing the swelling–deswelling properties of traditional hydrogels. Examples of external stimuli that have been used to control smart hydrogels in drug delivery systems include temperature, pH, light, ultrasound, and enzymes. Additional considerations in designing smart hydrogels involve a fundamental understanding of bond strength, molecular weight, degree of polymerization, polymer structure, and molecular assembly. The bond strength describes the cross-linking strength of the hydrogel, which is considered when designing the drug-release mechanisms of hydrogel-based platforms.
Scientific understanding of the molecular weight of gels is taken into account when loading drugs of increasing weight. As with conventional hydrogels, the polymeric chain (or backbone) of smart hydrogels is derived from polysaccharides, polypeptides, and polynucleotides. Examples of natural polymers include alginate, chitosan, cellulose, gelatin, fibrin, and collagen. Hydrogel size and type are the two main properties considered in designing hydrogels when seeking the optimal delivery route for drug administration. Examples of hydrogel type designs include nanoparticles, nanogels, and microgels. For example, El-Sherbiny et al. proposed gelatin-based hydrogel nanoparticles that were stimulated by magnetic forces. Other variables considered in hydrogel design include safety, biodegradability, drug loading capacity, and on-demand control of drug release. The main safety concerns in formulating hydrogels include bacterial infection and biocompatibility. The final parameter considered in developing hydrogels for drug delivery systems revolves around the payload embedded within the hydrogel. Cells, proteins, and therapeutic drugs are the main payloads used in hydrogel-based drug delivery platforms. In one example of payload use, Jiang et al. demonstrated the stimulated release of gallic acid from a chitin-based hydrogel via ultrasound induction. Use of Ultrasound for Drug Therapy General Overview of Ultrasound According to Moyano et al., ultrasound refers to vibrational mechanical waves with frequencies greater than 20 kilohertz (kHz). Ultrasound is traditionally used for imaging, monitoring, and diagnosing a broad range of conditions in the medical field. Examples of ultrasound modalities include Doppler ultrasound, focused ultrasound, and echocardiography. The key component of most ultrasound devices is a transducer that consists of an array of piezoelectric crystals. The crystals vibrate under electrical stimulation, converting electrical energy into mechanical energy in the form of high-frequency acoustic (ultrasonic) waves. When the sonicating transducer is directed at the human body, the resulting sound pressure waves pass through the dermal layer and reach the tissue, where the waves are reflected (or echoed) back to the transducer and converted back into electrical signals for image reconstruction. Tissue characteristics such as density affect the intensity of the reflected sound waves. Other parameters, such as beam frequency, equipment components, and imaging settings, contribute to the resolution of the ultrasound application. Ultrasound has also been used for therapeutic purposes because it is non-invasive, provides deep tissue penetration, and can safely localize the application of acoustic energy. While ultrasound modalities are generally considered safe, extreme levels of human exposure to ultrasound can increase injury risk. In the US, Food and Drug Administration (FDA) guidelines define the maximum allowed ultrasound exposure in terms of the following key parameters: mechanical index, thermal index, spatial-peak temporal-peak intensity, spatial-peak pulse-average intensity, and spatial-peak temporal-average intensity. The mechanical index (MI) is a unitless metric used to characterize the acoustic output of an ultrasound exposure. Since the MI is inversely proportional to the square root of the ultrasonic beam frequency, the MI is lower at higher frequencies.
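As a hedged illustration of how the mechanical index is commonly computed (MI is usually defined as the derated peak rarefactional pressure in MPa divided by the square root of the center frequency in MHz; the numerical values below are invented for the example):

# Hedged sketch of the usual mechanical-index formula; example values are hypothetical.
import math

def mechanical_index(peak_rarefactional_pressure_mpa: float, center_frequency_mhz: float) -> float:
    """MI = derated peak rarefactional pressure (MPa) / sqrt(center frequency (MHz))."""
    return peak_rarefactional_pressure_mpa / math.sqrt(center_frequency_mhz)

# A hypothetical 1 MHz focused-ultrasound pulse with a 1.5 MPa derated peak rarefactional pressure.
mi = mechanical_index(1.5, 1.0)
print(f"MI = {mi:.2f}")  # higher center frequencies give a lower MI for the same pressure
# For diagnostic imaging, an MI limit of 1.9 is the commonly cited FDA ceiling.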
The thermal index (TI) describes the risk of increasing the temperature of the tissue being sonicated by ultrasound. One way to decrease the TI is to reduce the time that the sonicating transducer is focused on the targeted area. The spatial-peak temporal-peak (SPTP) intensity refers to the highest intensity output of the ultrasound beam during implementation. The spatial-peak pulse-average (SPPA) intensity is the maximum intensity output averaged over the duration of a single pulse. The spatial-peak temporal-average (SPTA) intensity is the highest intensity output averaged over the repeating pulse sequence of the ultrasound beam over a period of time. Effects of Focused Ultrasound on Smart Hydrogels Because ultrasound can sonicate tissue and smart hydrogels can release drugs, there has been scientific interest in using ultrasound to control the release of the payload from hydrogels. Focusing and directing acoustic energy (which can convert to thermal or mechanical energy) towards smart hydrogels, at times implanted within tissue, induces a hydrogel response that results in the release of the embedded payload. Although hydrogels that are sensitive to mechanical pressure are generally used in ultrasound-triggered drug delivery platforms, hydrogels that respond to changes in temperature have also been used in these systems. For example, Makhmalzadeh et al. proposed an ultrasound-triggered drug delivery method involving thermo-responsive hydrogels loaded with silibinin, a cancer drug for treating melanoma. At low temperatures these thermo-responsive hydrogels exist in liquid form, but following ultrasonication they transition into a gel state. Although both ultrasound- and thermo-sensitive hydrogels are responsive to certain ultrasound modalities, they differ in how they respond to external stimuli. Ultrasound-responsive hydrogels can be stimulated by more than one type of force delivered by ultrasound. Conversely, thermo-responsive hydrogels, as the name specifies, respond only to the thermal effects induced by ultrasound. Despite this, thermo-responsive hydrogels have been widely used in cancer-based drug delivery systems. Of the existing ultrasound modalities, focused ultrasound has been used extensively in drug delivery research. High-intensity focused ultrasound (HIFU) and low-intensity focused ultrasound (LIFU) are the two main techniques used to induce drug release from smart hydrogels. Current HIFU applications include the ablation of tumors located at increased depths. Since HIFU can generate high temperatures, it has been used for cancer therapy by stimulating drug release from smart hydrogels via thermolysis mechanisms. With regard to the use of ultrasound- and thermo-responsive hydrogels for drug delivery, HIFU is able to stimulate both types of hydrogels. In one study related to cancer therapy, HIFU exhibited high efficiency in inducing nanovaccine release from hydrogel-based carriers. Although HIFU has been studied in various capacities, the technique can cause irreparable damage to healthy tissue. LIFU has therefore been the conventional method used in hydrogel-responsive drug delivery platforms. In other areas of the biomedical field, LIFU has been used for stimulation, such as promoting bone regeneration in tissue engineering applications. Due to its lower acoustic power output, LIFU is preferred over HIFU in biomedical applications involving neuromodulation and other brain-related procedures.
Studies have shown LIFU to be a cost-effective and non-invasive method for hydrogel-based drug delivery. The underlying drug-release mechanism induced by focused ultrasound in ultrasound-sensitive hydrogels is based on mechanical or thermal effects. Mechanical sonication mechanisms refer to the conversion of acoustic energy into mechanical energy of various types, including acoustic cavitation force and oscillation force. Generally, applying mechanical pressure to a responsive hydrogel loaded with drugs causes the hydrogel to deform. This deformation reduces the structural integrity of the hydrophobic core, allowing the release of the drug payload. Both ultrasound- and thermo-responsive hydrogels are capable of carrying various embedded carriers of drug payloads, including metal-organic frameworks, nanoparticles, and liposomes. Although many studies have demonstrated the irreversible compression of hydrogels under ultrasound, Goncalves et al. designed hydrogel-based nanoparticles that were capable of “self-healing”, meaning they were able to return to their original form following drug release from the depot. Acoustic cavitation forces, specifically, have been used in conjunction with ultrasound-responsive hydrogels for drug delivery. This type of mechanical force arises from the formation, growth, and collapse of bubbles, which results in the generation of acoustic energy. There are varying degrees of cavitation, which are divided into three groups: sonoporation, stable cavitation, and inertial cavitation. Sonoporation refers to the process of using ultrasound to open pores in (i.e., increase the permeability of) cellular membranes to allow substances of interest to enter the targeted cell. In cases where microbubbles are coated with hydrogels, these embedded carrier systems undergo stable cavitation and inertial cavitation. Stable cavitation characterizes vapor bubbles that oscillate about their own equilibrium, while inertial cavitation describes bubbles that grow with each expansion until they collapse violently. Severe cavitation increases the risk of tissue damage and drug degradation. Other ultrasound-generated effects used in several hydrogel-based platforms are hyperthermia and radiation force. These effects are generally created by HIFU, as it generates high levels of heat. Thus, guidelines established by the FDA help ensure the safe use of ultrasound in all biomedical applications, inclusive of drug delivery systems, based on the scientific understanding of these mechanical forces. Drug delivery applications and effects Tissue engineering In regard to tissues, ultrasound is generally used for imaging and monitoring tissue pathologies. Due to its ability to penetrate tissue easily, ultrasound has been widely studied and developed for drug delivery applications in the field of tissue engineering. In order for hydrogels to release drugs at the targeted location, they must be injected or implanted within the tissue. Injection of hydrogels is usually preferred over implantation due to its minimal invasiveness, reduced healing time following the procedure, and biocompatibility. In one study, Liu et al. proposed a novel design of injectable chemotaxis hydrogels to help promote the migration of bone marrow mesenchymal cells for cartilage repair. Other examples of using smart hydrogels and ultrasound in tissue engineering applications include cartilage repair, bone repair, and wound healing.
The design of these drug delivery platforms is specific to each tissue type and its intended use. Cancer treatment In the field of cancer, ultrasound is commonly used to help health care professionals detect and diagnose disease in affected patients. In the context of drug delivery, ultrasound has been used for a wide variety of therapeutic applications, which include but are not limited to melanoma, ovarian cancer, and breast cancer. Hydrogels are generally used in designing these drug delivery platforms because of their minimal invasiveness (if injected) and their ability to carry different cancer drugs. These hydrogel-based systems are also paired with chemotherapy treatments. Cancer drugs used in these drug delivery platforms include doxorubicin, mitoxantrone, paclitaxel, silibinin, and cisplatin. In a cancer therapy study, Baghbani et al. proposed a method of pairing ultrasound with doxorubicin-loaded alginate-stabilized perfluorohexane (PFH) nanodroplets. Gene therapy Although it is generally used in combination with cancer therapeutic treatments, gene therapy has become a topic of interest in the drug delivery field. Gene therapy refers to the insertion of genes into a biological system in an attempt to add or modify mutated genes for therapeutic benefit. In order to attain high transgene expression, the electrostatic interaction between the gene and the hydrogel polymer and the controlled release of the drug payload from the hydrogel are necessary. Gene therapy drugs used in hydrogel-based drug delivery systems include CRISPR/Cas9, siRNA, and other RNA-based drugs. In a gene therapy study, Han et al. proposed a focused ultrasound-responsive hydrogel-based system for delivering siRNA nanoparticles to the targeted tumor site. Challenges and future development The main challenge for future ultrasound-triggered, hydrogel-responsive delivery systems is to develop safer guidelines for using HIFU so that its benefits can be exploited; doing so would also lead to improvements in FDA guidelines for ultrasound use. Until then, the use of LIFU or lower acoustic intensity settings is suggested as the conventional method for decreasing injury risk, specifically damage to healthy tissue. Focused ultrasound continues to be the primary type of ultrasound technique used in drug delivery systems. Another challenge presented in using ultrasound to induce drug release from smart hydrogels in delivery platforms is inappropriate drug administration and unexpected complications. Currently, on-demand drug release from ultrasound-responsive hydrogels is still difficult to fully control using ultrasound alone. Yeingst et al. suggested that future hydrogel-based delivery platforms will be designed based on the drug payload to optimize the interaction between the ultrasound and the stimuli-responsive hydrogel. Future development of drug delivery systems will continue to incorporate ultrasound and smart hydrogel designs. References Drug delivery devices Ultrasound
Ultrasound-triggered drug delivery using stimuli-responsive hydrogels
[ "Chemistry" ]
3,360
[ "Pharmacology", "Drug delivery devices" ]
73,542,847
https://en.wikipedia.org/wiki/Chimeric%20small%20molecule%20therapeutics
Chimeric small molecule therapeutics are a class of drugs designed with multiple active domains to operate outside of the typical protein inhibition model. While most small molecule drugs inhibit target proteins by binding their active site, chimerics form protein-protein ternary structures to induce degradation or, less frequently, other protein modifications. Background Small molecule drugs, compounds typically <1 kDa in mass, comprise a large portion of the therapeutic market. These drugs usually operate by agonizing or antagonizing the active site on a disease-linked protein of interest, though allosteric regulation is possible. With an estimated 93% of the human proteome lacking druggable binding sites, methods have been developed to modulate protein activity through binding of any available site rather than only the active site. These drugs contain a target protein binding warhead in addition to a linker-separated active domain. This domain may recruit a second protein into proximity, induce proteasome-mediated degradation, or recruit a kinase for directed phosphorylation, among other functions. These drugs expand both the mechanism of action for small molecule therapeutics and the pool of potential protein targets. Proteolysis-targeting chimeras Proteolysis-targeting chimeras (PROTACs) were first reported by Kathleen Sakamoto, Craig Crews, and Raymond Deshaies in 2001. A chimeric molecule consisting of ovalicin (a MetAP-2 small molecule inhibitor) and IκBα phosphopeptide (a recruiter of the SCFβ-TRCP E3 ligase complex) separated by a linker was constructed and shown to induce MetAP-2 degradation in in vitro cell models. Further study confirmed that E3 ligase-mediated ubiquitination and subsequent proteasome degradation was responsible for reduced MetAP-2 levels. Continued work on this system by Craig Crews and others has expanded the potential pool of E3 ligases and degradation targets, with Arvinas Inc. founded in 2013 to bring PROTAC drugs to market. As of April 2023, Arvinas has one drug in Phase 3 clinical trials (ARV-471, an estrogen receptor degrader), and two drugs in Phase 2 clinical trials (androgen receptor degraders ARV-110 and ARV-766), for treatment of breast and prostate cancer, respectively. Arvinas released Phase 2 clinical trial results for ARV-471 in December 2022, reporting a clinical benefit rate of 40% in CDK4/6 inhibitor-pretreated patients and an absence of dose-limiting toxicities. Hydrophobic tag degradation Hydrophobic tag degraders contain a binding domain in addition to a linker-separated hydrophobic moiety, such as adamantyl, to induce protein degradation. An early example of a hydrophobically tagged degrader is fulvestrant, an estrogen receptor antagonist that contains a long hydrophobic side chain that induces the degradation of the estrogen receptor. Fulvestrant has inspired the development of additional selective estrogen receptor degraders (SERDs). As exposed hydrophobicity is characteristic of protein misfolding, the native cell proteasome may recognize and degrade proteins tagged with the hydrophobic moiety. Taavi Neklesa and Craig Crews first reported hydrophobic tag degradation in 2011 as a tool to probe protein function in conjunction with cognate HaloTag fusion proteins. This principle has also been further used to effectively degrade transcription factors (a traditionally difficult class to drug) and cancer-linked EZH2 in in vitro models. As yet, no drug candidates making use of this technology have been publicly identified.
Additional use cases Lysosome-targeting chimeras (LYTACs) have been developed, combining target-binding compounds or antibodies and glycopeptide ligands to stimulate the lysosomal degradation pathway. Unlike the proteasome pathway, this enables the targeted degradation of extracellular and membrane-bound proteins in addition to cytoplasmic ones. Autophagy-targeting chimeras (AUTACs) can be employed to degrade proteins as well as protein aggregates and organelles. AUTAC degradation tags are typically derived from guanine, though the particular mechanism of action is still unclear. Autophagosome-tethering compounds (ATTECs) mimic this strategy, directly appending a target protein to the autophagosome membrane for degradation without the use of a linker. Phosphorylation-inducing chimeric small molecules (PHICS) employ the warhead-linker-recruiter structure to direct phosphorylation of a given target by proximity to a desired kinase. This technique does not necessarily involve protein degradation and may instead be used to modulate protein function to direct or inhibit certain pathways. Further work in the Crews Lab has used chimeric oligonucleotides, the dCas9 protein, and chimeric small molecules to create the TRAFTAC system for generalizable transcription factor degradation. Advantages The ability to inhibit or modify enzyme function without a catalytic-pocket binding site greatly expands the potentially druggable portion of the proteome. Furthermore, most classes of chimeric small molecules can act on many targets over their life cycle, lowering the effective dose compared to traditional inhibitors that act only on one protein at a time. These therapeutics provide an alternative mechanism of action that may be useful as a combination therapy in diseases where drug resistance is a concern. Chimeric drug activity is also highly dependent on the distance between the targeted proteins, allowing the effect to be tuned through optimization of the linker structure. Challenges The existence of two or more binding domains increases the difficulty of synthesis for chimeric molecules. Each component must be discovered, optimized, and synthesized in such a way that they can be linked together, driving up cost relative to single-domain inhibitors. The large size of chimeric molecules (typically 700-1100 Da) makes effective delivery difficult and increases complexity in pharmacokinetic design. Care must be taken to ensure that the molecule is capable of passing through the cell membrane and subsisting long enough to have a therapeutic effect. Additionally, protein-protein ternary complexes are generally unstable, adding to the difficulty of chimeric drug design. References Medicinal chemistry
Chimeric small molecule therapeutics
[ "Chemistry", "Biology" ]
1,291
[ "Biochemistry", "nan", "Medicinal chemistry" ]
67,730,272
https://en.wikipedia.org/wiki/Institute%20of%20Solid%20State%20Chemistry%20and%20Mechanochemistry
Institute of Solid State Chemistry and Mechanochemistry of the Siberian Branch of the RAS () is a research institute in Novosibirsk, Russia. It was founded in 1944. History Institute of Solid State Chemistry and Mechanochemistry is one of the oldest scientific institutes in Siberia. It was founded in 1944 as the Chemical and Metallurgical Institute. Five years later, thanks to the institute, a ceramic pipe plant was launched in Dorogino. Later, the institute became part of the Siberian Branch of the USSR Academy of Sciences. In 1964, the scientific organization was renamed the Institute of Physicochemical Principles of Mineral Raw Materials Processing, and in 1980, it was renamed the Institute of Solid State Chemistry and Mineral Raw Materials Processing. In 1997, the institute was renamed the Institute of Solid State Chemistry and Mechanochemistry. Locations The institute is located in Tsentralny District (Frunze Street 13) and Akademgorodok. Branches Kemerovo Division of Institute of Solid State Chemistry and Mechanochemistry of the Siberian Branch of the RAS External links Институт химии твердого тела и механохимии СО РАН. ГПНТБ СО РАН. Механохимия нас связала: ИХТТМ отмечает 75-й день рождения. Наука в Сибири. Institute of Solid State Chemistry and Mechanochemistry of the Siberian Branch of the RAS. The SB RAS Portal. Research institutes in Novosibirsk Solid-state chemistry Research institutes established in 1944 1944 establishments in the Soviet Union Research institutes in the Soviet Union
Institute of Solid State Chemistry and Mechanochemistry
[ "Physics", "Chemistry", "Materials_science" ]
389
[ "Condensed matter physics", "nan", "Solid-state chemistry" ]
67,732,953
https://en.wikipedia.org/wiki/Post-Minkowskian%20expansion
In physics, more precisely in the general theory of relativity, post-Minkowskian expansions (PM) or post-Minkowskian approximations are mathematical methods used to find approximate solutions of Einstein's equations by means of a power series development of the metric tensor. Unlike post-Newtonian expansions (PN), in which the series development is based on a combination of powers of the velocity (which must be small compared to that of light) and the gravitational constant, in the post-Minkowskian case the developments are based only on the gravitational constant, allowing analysis even at velocities close to that of light (relativistic velocities). One of the earliest works on this method of resolution is that of Bruno Bertotti, published in Nuovo Cimento in 1956. References General relativity
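Schematically, and in one common bookkeeping (normalizations and the choice of field variable vary between authors, so this is an illustrative sketch), the post-Minkowskian expansion writes the metric as a formal power series in Newton's constant G around the flat Minkowski metric:

g_{\mu\nu} = \eta_{\mu\nu} + \sum_{n=1}^{\infty} G^{n}\, h^{(n)}_{\mu\nu},

where each coefficient h^{(n)}_{\mu\nu} is determined order by order from Einstein's equations; a post-Newtonian expansion would instead also organize the terms in powers of v/c.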
Post-Minkowskian expansion
[ "Physics" ]
168
[ "General relativity", "Theory of relativity" ]
54,978,275
https://en.wikipedia.org/wiki/C18H10O8
{{DISPLAYTITLE:C18H10O8}} The molecular formula C18H10O8 (molar mass: 354.27 g/mol, exact mass: 354.0376 u) may refer to: Cyclovariegatin Xerocomorubin Molecular formulas
C18H10O8
[ "Physics", "Chemistry" ]
66
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
54,978,619
https://en.wikipedia.org/wiki/Angiotensin%20%281-7%29
Angiotensin (1-7) (molecular weight 899.02 g/mol; sequence H-Asp-Arg-Val-Tyr-Ile-His-Pro-OH) is an active heptapeptide of the renin–angiotensin system (RAS). It is also known by the generic name talfirastide (development name TXA127). In 1988, Santos et al. demonstrated that angiotensin (1-7) was a main product of the incubation of angiotensin I with brain micropunch biopsies, and Schiavone et al. reported the first biological effect of this heptapeptide. Benter et al. were the first to report that Ang-(1-7) behaves in a way opposite to that of Ang II and that intravenous administration of Ang-(1-7) produces blood pressure-lowering effects by activating its own receptor. Angiotensin (1-7) is a vasodilator agent affecting cardiovascular organs, such as the heart, blood vessels and kidneys, with functions frequently opposed to those attributed to the major effector component of the RAS, angiotensin II (Ang II). Synthesis The polypeptide Ang I can be converted into Ang (1-7) by the actions of the neprilysin (NEP) and thimet oligopeptidase (TOP) enzymes. Also, Ang II can be hydrolyzed into Ang (1-7) through the action of angiotensin-converting enzyme 2 (ACE2). Ang (1-7) binds to and activates the G-protein-coupled Mas receptor, leading to effects opposite to those of Ang II. Possible pathways Action of neprilysin on angiotensin I or angiotensin II. Action of prolyl endopeptidase on angiotensin I. Action of ACE on angiotensin 1-9. Action of neprilysin on angiotensin 1-9. Action of ACE2 on angiotensin II. Effects Ang (1-7) has been shown to have anti-oxidant and anti-inflammatory effects. It helps protect cardiomyocytes of spontaneously hypertensive rats by increasing the expression of endothelial and neuronal nitric oxide synthase enzymes, augmenting the production of nitric oxide. Pharmacological interactions Ang (1-7) contributes to the beneficial effects of ACE inhibitors and angiotensin II receptor type 1 antagonists. Clinical trials Talfirastide has been tested in people with COVID-19 and stroke. References Peptides Angiology Endocrinology Hypertension
Angiotensin (1-7)
[ "Chemistry" ]
567
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
54,981,574
https://en.wikipedia.org/wiki/Configuration%20space%20%28mathematics%29
In mathematics, a configuration space is a construction closely related to state spaces or phase spaces in physics. In physics, these are used to describe the state of a whole system as a single point in a high-dimensional space. In mathematics, they are used to describe assignments of a collection of points to positions in a topological space. More specifically, configuration spaces in mathematics are particular examples of configuration spaces in physics in the case of several non-colliding particles. Definition For a topological space and a positive integer , let be the Cartesian product of copies of , equipped with the product topology. The nth (ordered) configuration space of is the set of n-tuples of pairwise distinct points in : This space is generally endowed with the subspace topology from the inclusion of into . It is also sometimes denoted , , or . There is a natural action of the symmetric group on the points in given by This action gives rise to the th unordered configuration space of , which is the orbit space of that action. The intuition is that this action "forgets the names of the points". The unordered configuration space is sometimes denoted , , or . The collection of unordered configuration spaces over all is the Ran space, and comes with a natural topology. Alternative formulations For a topological space and a finite set , the configuration space of with particles labeled by is For , define . Then the th configuration space of X is denoted simply . Examples The space of ordered configurations of two points in is homeomorphic to the product of the Euclidean 3-space with a circle, i.e. . More generally, the configuration space of two points in is homotopy equivalent to the sphere . The configuration space of points in is the classifying space of the th braid group (see below). Connection to braid groups The -strand braid group on a connected topological space is the fundamental group of the th unordered configuration space of . The -strand pure braid group on is The first studied braid groups were the Artin braid groups . While the above definition is not the one that Emil Artin gave, Adolf Hurwitz implicitly defined the Artin braid groups as fundamental groups of configuration spaces of the complex plane considerably before Artin's definition (in 1891). It follows from this definition and the fact that and are Eilenberg–MacLane spaces of type , that the unordered configuration space of the plane is a classifying space for the Artin braid group, and is a classifying space for the pure Artin braid group, when both are considered as discrete groups. Configuration spaces of manifolds If the original space is a manifold, its ordered configuration spaces are open subspaces of the powers of and are thus themselves manifolds. The configuration space of distinct unordered points is also a manifold, while the configuration space of not necessarily distinct unordered points is instead an orbifold. A configuration space is a type of classifying space or (fine) moduli space. In particular, there is a universal bundle which is a sub-bundle of the trivial bundle , and which has the property that the fiber over each point is the n-element subset of classified by p. Homotopy invariance The homotopy type of a configuration space is not a homotopy invariant of the underlying space. For example, the spaces are not homotopy equivalent for any two distinct values of : is empty for , is not connected for , is an Eilenberg–MacLane space of type , and is simply connected for .
It used to be an open question whether there were examples of compact manifolds which were homotopy equivalent but had non-homotopy equivalent configuration spaces: such an example was found only in 2005 by Riccardo Longoni and Paolo Salvatore. Their example consists of two three-dimensional lens spaces, and the configuration spaces of at least two points in them. That these configuration spaces are not homotopy equivalent was detected by Massey products in their respective universal covers. Homotopy invariance for configuration spaces of simply connected closed manifolds remains open in general, and has been proved to hold over the base field . Real homotopy invariance of simply connected compact manifolds with simply connected boundary of dimension at least 4 was also proved. Configuration spaces of graphs Some results are particular to configuration spaces of graphs. This problem can be related to robotics and motion planning: one can imagine placing several robots on tracks and trying to navigate them to different positions without collision. The tracks correspond to (the edges of) a graph, the robots correspond to particles, and successful navigation corresponds to a path in the configuration space of that graph. For any graph , is an Eilenberg–MacLane space of type and strong deformation retracts to a CW complex of dimension , where is the number of vertices of degree at least 3. Moreover, and deformation retract to non-positively curved cubical complexes of dimension at most . Configuration spaces of mechanical linkages One also defines the configuration space of a mechanical linkage with a graph as its underlying geometry. Such a graph is commonly assumed to be constructed as a concatenation of rigid rods and hinges. The configuration space of such a linkage is defined as the totality of all its admissible positions in the Euclidean space equipped with a proper metric. The configuration space of a generic linkage is a smooth manifold; for example, for the trivial planar linkage made of rigid rods connected with revolute joints, the configuration space is the n-torus . The simplest singularity point in such configuration spaces is a product of a cone on a homogeneous quadratic hypersurface by a Euclidean space. Such a singularity point emerges for linkages which can be divided into two sub-linkages such that their respective endpoints trace-paths intersect in a non-transverse manner, for example a linkage which can be aligned (i.e. completely folded into a line). Compactification The configuration space of distinct points is non-compact, having ends where the points tend to approach each other (become confluent). Many geometric applications require compact spaces, so one would like to compactify , i.e., embed it as an open subset of a compact space with suitable properties. Approaches to this problem have been given by Raoul Bott and Clifford Taubes, as well as William Fulton and Robert MacPherson. See also Configuration space (physics) State space (physics) References Manifolds Topology Algebraic topology
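For concreteness, the ordered and unordered configuration spaces discussed above are usually written, in one standard notation (symbols vary between authors), as:

\operatorname{Conf}_n(X) = \{(x_1,\dots,x_n)\in X^{n} : x_i \neq x_j \text{ for } i\neq j\},
\qquad
\operatorname{UConf}_n(X) = \operatorname{Conf}_n(X)/S_n,

where the symmetric group S_n acts by permuting the coordinates; the n-strand braid group of a connected space X is then \pi_1(\operatorname{UConf}_n(X)) and the pure braid group is \pi_1(\operatorname{Conf}_n(X)).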
Configuration space (mathematics)
[ "Physics", "Mathematics" ]
1,318
[ "Algebraic topology", "Space (mathematics)", "Topological spaces", "Fields of abstract algebra", "Topology", "Space", "Manifolds", "Geometry", "Spacetime" ]
54,985,664
https://en.wikipedia.org/wiki/EURO%20Journal%20on%20Transportation%20and%20Logistics
The EURO Journal on Transportation and Logistics (EJTL) is a peer-reviewed academic journal in operations research that was established in 2011 and is now published by Elsevier. It is an official journal of the Association of European Operational Research Societies, promoting the use of mathematics in general, and operations research in particular, in the context of transportation and logistics. The editor-in-chief is Dominique Feillet; Michel Bierlaire served as editor-in-chief from 2011 to 2019. Abstracting and indexing The journal is abstracted and indexed in the following databases: EBSCO Information Services Emerging Sources Citation Index Google Scholar International Abstracts in Operations Research OCLC Research Papers in Economics Scopus Summon by ProQuest Transportation Research International Documentation (TRID) of Transportation Research Board External links Operations research English-language journals Academic journals established in 2011 Transportation journals
EURO Journal on Transportation and Logistics
[ "Mathematics" ]
173
[ "Applied mathematics", "Operations research" ]
54,987,630
https://en.wikipedia.org/wiki/EF-Tu%20receptor
EF-Tu receptor, abbreviated as EFR, is a pattern-recognition receptor (PRR) that binds to the prokaryotic protein EF-Tu (elongation factor thermo unstable) in Arabidopsis thaliana (and other members of Brassicaceae). This receptor is an important part of the plant immune system as it allows the plant cells to recognize and bind to EF-Tu, preventing genetic transformation by and protein synthesis in pathogens such as Agrobacterium. Background The plant Arabidopsis thaliana has a genome with only around 135 megabase pairs (Mbp), making it small enough to fully synthesize. It also makes it relatively easy to study, leading to its use as a common model organism in the field of plant genetics. One important use of A. thaliana is in the study of plant immunity. Plant pathogens are able to travel through a plant's vascular system, but plants do not have specific immune cells that can travel this way. Plants also do not have an adaptive immune system, so other forms of immunity are required. One is the use of pattern-recognition receptors (PRR) to bind to pathogen-associated molecular patterns (PAMP), which are highly conserved structures on the outside of many invasive organisms. This form of immunity acts on intercellular pathogens, which are ones outside of individual plant cells. PRRs are transmembrane proteins, which have an anchor inside the cell and portions that extend beyond the membrane. They are part of the innate immune system and bind to and prevent the proliferation of pathogens with the PAMPs that they can bind. EF-Tu, a very common and highly conserved protein, is an example of a PAMP that can be found in numerous pathogens. Its function as an elongation factor means that it helps create new proteins during translation in the ribosome. When a protein is being formed, amino acids are connected in a long sequence, known as a protein's primary structure. Elongation factors help coordinate the movement of transfer RNAs (tRNA) and messenger RNAs (mRNA) so they stay aligned as the ribosome translocates along the mRNA chain. Due to its importance in ensuring the accuracy of translation and preventing mutations, EF-Tu is a good target of both immune systems and drug therapies designed to prevent infections and subsequent diseases. Biological function Synthesis EFR, like other proteins, undergoes translation in a cell's ribosomes. After the primary structure of the protein has been formed it must fold into its three dimensional tertiary structure to become functional. This occurs in the endoplasmic reticulum (ER). While in the ER, this primary polypeptide chain undergoes a regulatory process known as ER-quality control (ER-QC) to help ensure it folds into the correct 3-D structure. ER-QC process consists of a series of chaperone proteins that help guide the folding of the EFR polypeptide chains, preventing the aggregation of many polypeptide chains into one large group. Proteins that have not folded are kept in the ER until they have folded into their correct 3-D shape. If folding does not occur then the unfolded protein is eventually destroyed. One of the control mechanisms of EFR is the protein Arabidopsis stromal-derived factor-2 (SDF2). A genetic variant of the A. thaliana plant that did not have the gene to encode for this protein had a far lower production of functional EFR proteins. SDF2 also cannot be substituted for other enzymes in EFR production. 
Experimental analysis indicated that EFR is destroyed in the cell when it is produced without SDF2, though the mechanism of this action is unknown. Other proteins that are required for the proper synthesis of EFR include Arabidopsis CRT3 and UGGT, which are members of the EFR-QC and act as chaperones to help folding. Role in plant immunity EFR receptors have a high affinity for the EF-Tu PAMP. This has been proven analytically through competitive binding assays and SDS-PAGE analysis. When EFR binds to EF-Tu, the basal resistance is activated. This response happens after an infection has already been established and it is important to the plant immune system because it prevents the spread of the pathogen throughout the plant. Only bacteria that have a high amount of EF-Tu are effectively inhibited by EFR, such as Agrobacterium tumefaciens. Similarities to FLS2 Like EFR, FLS2 (flagellin-sensing 2) is a plant receptor-like kinase that acts as a PRR in the plant innate immune system. Instead of binding to EF-Tu, it binds to flagellin, another highly conserved structure present on many pathogens. Flagellin, like EF-Tu, is a good target for the plant immune system since it is so widespread. It also triggers an immune response in a larger variety of plants than EF-Tu. The immune response triggered by FLS2 is very similar to the one that is triggered by EFR and the enzymes that are activated by both receptors likely come from a common pool that is found in many cells. This indicates that the two receptor pathways converge, which has been shown to occur at the ion channels in the plasma membrane. By perceiving multiple PAMPs, a plant is able to respond to a pathogenic infection more quickly and efficiently, as well as respond to a wider array of pathogens. Applications EFR is found only in the plant family Brassicaceae, meaning it has a limited effect in nature. Experiments have demonstrated the ability to successfully transfer EFR to plants in other families, such as Nicotiana benthamiana, a relative of tobacco, and Solanum lycopersicum, the tomato plant. The ability to transfer PRRs between plants and have them retain their effectiveness broadens genetic engineering techniques to promote disease resistance in crops. It can also reduce chemical wastes associated with mass agriculture and enable the transfer of immunity rapidly and without traditional breeding. See also EF-Tu FLS2 Flagellin Arabidopsis thaliana References Receptors Immune system Plant anatomy
EF-Tu receptor
[ "Chemistry", "Biology" ]
1,292
[ "Organ systems", "Receptors", "Immune system", "Signal transduction" ]
59,814,243
https://en.wikipedia.org/wiki/Molecular%20demon
A molecular demon or biological molecular machine is a biological macromolecule that resembles and seems to have the same properties as Maxwell's demon. These macromolecules gather information in order to recognize their substrate or ligand within a myriad of other molecules floating in the intracellular or extracellular fluid. This molecular recognition represents an information gain which is equivalent to an energy gain or a decrease in entropy. When the demon is reset, i.e. when the ligand is released, the information is erased, energy is dissipated and entropy increases, obeying the second law of thermodynamics. The difference between biological molecular demons and the thought experiment of Maxwell's demon is the latter's apparent violation of the second law. Cycle The molecular demon switches mainly between two conformations. In the first, or basic, state, the protein recognizes and binds the ligand or substrate by induced fit and undergoes a change in conformation, which leads to the second, quasi-stable state: the protein-ligand complex. In order to reset the protein to its original, basic state, it needs ATP. When ATP is consumed or hydrolyzed, the ligand is released and the demon, reverting to its basic state, can acquire information again. The cycle may start again. Ratchet The second law of thermodynamics is a statistical law. Hence, occasionally, single molecules may not obey the law. All molecules are subject to the molecular storm, i.e. the random movement of molecules in the cytoplasm and the extracellular fluid. Molecular demons or molecular machines, whether biological or artificially constructed, are continuously pushed around by the random thermal motion in a direction that sometimes violates the law. When this happens and the macromolecule can be prevented from sliding back to its original state from the movement it has made or the conformational change it has undergone, as is the case with molecular demons, the molecule works as a ratchet; it is then possible to observe, for example, the creation of a gradient of ions or other molecules across the cell membrane, the movement of motor proteins along filament proteins, or the accumulation of products of an enzymatic reaction. Even some artificial molecular machines and experiments are capable of forming a ratchet apparently defying the second law of thermodynamics. All these molecular demons have to be reset to their original state by consuming external energy that is subsequently dissipated as heat. This final step, in which entropy increases, is therefore irreversible. If the demons were reversible, no work would be done. Artificial An example of artificial ratchets is the work by Serreli et al. (2007). Serreli et al. constructed a nanomachine, a rotaxane, consisting of a ring-shaped molecule that moves along a tiny molecular axle between two identical compartments, A and B. The normal, random movement of molecules sends the ring back and forth. Since the rings move freely, half of the rotaxanes have the ring on site B and the other half on site A. But the system used by Serreli et al. has a chemical gate on the rotaxane molecule, and the axle contains two sticky parts, one at either side of the gate. This gate opens when the ring is close by. The sticky part in B is close to the gate, and the rings pass more readily from B to A than from A to B. They obtained a deviation from equilibrium of 70:50 for A and B respectively, a bit like the demon of Maxwell.
But this system works only when light is shone on it and thus needs external energy, just like molecular demons. Energy and information Landauer stated that information is physical. His principle sets fundamental thermodynamic constraints for classical and quantum information processing. Much effort has been dedicated to incorporating information into thermodynamics and to measuring the entropic and energetic costs of manipulating information. Gaining information decreases entropy, which has an energy cost; this energy has to be collected from the environment. Landauer established the equivalence of one bit of information with an entropy of k ln 2, corresponding to an energy of kT ln 2, where k is the Boltzmann constant and T is the absolute temperature. This bound is called Landauer's limit. Erasing information, conversely, dissipates energy and increases entropy. Toyabe et al. (2010) were able to demonstrate experimentally that information can be converted into free energy. The experiment consists of a microscopic particle on a spiral-staircase-like potential. Each step has a height corresponding to kBT, where kB is the Boltzmann constant and T is the temperature. The particle jumps between steps due to random thermal motions. Since the downward jumps following the gradient are more frequent than the upward ones, the particle falls down the stairs on average. But when an upward jump is observed, a block is placed behind the particle to prevent it from falling back, just like in a ratchet; this way the particle climbs the stairs. Information is gained by measuring the particle's location, which is equivalent to a gain in energy, i.e. a decrease in entropy. They used a generalized form of the second law that contains a variable for information, ⟨W⟩ ≥ ΔF − kT·I, where ΔF is the free energy difference between states, W is the work done on the system, k is the Boltzmann constant, T is the temperature, and I is the mutual information content obtained by measurements. The brackets indicate that the work is an average. They could convert the equivalent of one bit of information into about 0.28 kT ln 2 of energy or, in other words, they could exploit more than a quarter of the information's energy content. Cognitive demons In his book Chance and Necessity, Jacques Monod described the functions of proteins and other molecules capable of recognizing, with 'elective discrimination', a substrate, ligand or other molecule. In describing these molecules he introduced the term 'cognitive' functions, the same cognitive functions that Maxwell attributed to his demon. Werner Loewenstein goes further and names these molecules 'molecular demons', or 'demons' for short. Naming the biological molecular machines in this way makes it easier to understand the similarities between these molecules and Maxwell's demon. Because of this real discriminative, if not 'cognitive', property, Jacques Monod attributed a teleonomic function to these biological complexes. Teleonomy implies the idea of an oriented, coherent and constructive activity. Proteins therefore must be considered essential molecular agents in the teleonomic performances of all living beings. See also Molecular machine Protein–ligand complex Protein Ligand Maxwell's demon Jacques Monod Teleonomy References Biophysics Entropy and information Molecular machines Cell biology
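The quantities discussed above are easy to evaluate numerically. The sketch below is illustrative only: the 300 K temperature is an assumed room-temperature value, and the 28% figure is the conversion efficiency reported by Toyabe et al. as quoted above.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

# Landauer's limit: minimum energy dissipated to erase one bit of information.
landauer_limit = k_B * T * math.log(2)  # joules per bit
print(f"Landauer limit at {T} K: {landauer_limit:.3e} J per bit")

# Toyabe et al. (2010) reported converting roughly 28% of the information's
# energy content (kT ln 2 per bit) into extracted free energy.
efficiency = 0.28
extracted = efficiency * landauer_limit
print(f"Extracted work per bit at 28% efficiency: {extracted:.3e} J")
```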
Molecular demon
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Technology", "Biology" ]
1,340
[ "Machines", "Applied and interdisciplinary physics", "Physical quantities", "Cell biology", "Entropy and information", "Physical systems", "Molecular machines", "Entropy", "Biophysics", "Nanotechnology", "Dynamical systems" ]
59,817,638
https://en.wikipedia.org/wiki/Redfish%20%28specification%29
The Redfish standard is a suite of specifications that deliver an industry standard protocol providing a RESTful interface for the management of servers, storage, networking, and converged infrastructure. History The Redfish standard was developed under the Scalable Platforms Management Forum (SPMF) at the DMTF beginning in 2014. The first specification with base models (1.0) was published in August 2015. In 2016, models for BIOS, disk drives, memory, storage, volume, endpoint, fabric, switch, PCIe device, zone, software/firmware inventory & update, multi-function NICs, host interface (KCS replacement) and privilege mapping were added. In 2017, models for composability and location were added, along with errata. There is work in progress for Ethernet switching, DCIM, and OCP. In August 2016, SNIA released a first model for network storage services (Swordfish), an extension of the Redfish specification. Industry adoption Redfish support on servers Advantech SKY Server BMC Dell iDRAC BMC with minimum iDRAC 7/8 FW 2.40.40.40, iDRAC9 FW 3.00.00.0 Fujitsu iRMCS5 BMC HPE iLO BMC with minimum iLO4 FW 2.30, iLO5 and more recent HPE Moonshot BMC with minimum FW 1.41 Lenovo XClarity Controller (XCC) BMC with minimum XCC FW 1.00 Supermicro X10 BMC with minimum FW 3.0 and X11 with minimum FW 1.0 IBM Power Systems BMC with minimum OpenPOWER (OP) firmware level OP940 IBM Power Systems Flexible Service Processor (FSP) with minimum firmware level FW860.20 Cisco Integrated Management Controller with minimum IMC SW Version 3.0 Redfish support on BMC Insyde Software Supervyse BMC OpenBMC, a Linux Foundation collaborative open-source BMC firmware stack American Megatrends MegaRAC Remote Management Firmware Vertiv Avocent Core Insight Embedded Management Systems Software using Redfish APIs The OpenStack Ironic bare metal deployment project has a Redfish driver. Ansible has multiple Redfish modules for remote management, including redfish_info, redfish_config, and redfish_command. ManageIQ Redfish libraries and tools DMTF libraries and tools GoLang gofish Mojo::Redfish::Client python-redfish Sushy Redfish is used both by proprietary software (such as HPE OneView) and by FLOSS software (such as OpenBMC). See also Intelligent Platform Management Interface (IPMI) Create, read, update and delete (CRUD) JSON OData – Protocol for REST APIs References Networking standards DMTF standards System administration Out-of-band management Computer hardware standards
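Because Redfish exposes a RESTful, JSON-based interface, a management client can be as simple as an HTTP GET against the service root. The sketch below (Python, using the requests library) assumes a hypothetical BMC hostname and credentials, and omits the TLS certificate handling a production client would need:

```python
import requests

# Hypothetical BMC address and credentials -- replace with real values.
BMC = "https://bmc.example.com"
AUTH = ("admin", "password")

# The Redfish service root is exposed at /redfish/v1/.
root = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False).json()
print("Redfish version:", root.get("RedfishVersion"))

# Follow the Systems collection and print each member's model and power state.
systems_uri = root["Systems"]["@odata.id"]
systems = requests.get(f"{BMC}{systems_uri}", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"), "-", system.get("PowerState"))
```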
Redfish (specification)
[ "Technology", "Engineering" ]
595
[ "Computer standards", "DMTF standards", "Computer networks engineering", "System administration", "Information systems", "Networking standards", "Computer hardware standards" ]
74,916,950
https://en.wikipedia.org/wiki/Diatomaceous%20earth%20filtration
Diatomaceous earth filtration is a special filtration process that removes particles from liquids as it passes through a layer of fossilized remains of microscopic water organism called diatoms. These diatoms are mined from diatomite deposits which are located along the Earth's surface as they have accumulated in sediment of open and moving bodies of water. Obtained diatomaceous earth is then purified using acid leaching or liquid-liquid extraction in order for it to be used in any form of application. The process of D.E. filtration is composed of three main stages: pre-coating, body feed, and cleaning. Due to the precision of diatomaceous earth filtration; being able to capture dangerous and microscopic particles while maintaining efficiency has allowed D.E. filters to be a highly popularized choice for aquariums, wastewater treatment, food and beverage filtration, and more. Function Swimming pools Diatomaceous earth filters has been generally accepted to be the top contender for removing pollutants while having a high efficiency rate. When applied to pool filtration, a DE filter has demonstrated its capability in capturing varying particle sizes to maintain water clarity. Recent studies show that diatomaceous earth filters have been able to remove particles, ranging from 1-6 micrometers (micron) in size, thus maximizing water quality. This degree of filtration allows small particles to be removed including bacteria, algae, viruses and other microscopic particles. Many of these particles come from bodily fluids, fecal matter, and other bacteria that can contaminate the water. Although there are coagulants, such as chlorine, that can be added to aid the filtration process by eliminating such particles; common pollutants that can not be efficiently removed by chlorine can include cryptosporidium, giardia duodenalis, pseudomonas aeruginosa and more. These parasites often have a high tolerance to chlorine and therefore are resistant to removal through conventional means such as coagulation. Despite this, outbreaks of cryptosporidium or giardia are seemingly low and in recent studies conducted in Atlanta, Georgia: out of 160 pools, 13 pool samples (18.1%) tested positive for at least one of both parasites. Other possible bacteria, viruses, parasites Source: Hepatitis A Norovirus E. coli Legionella Shigella Cercariae Campylobacter Staphylococcus Considering the nature of certain bacteria, viruses, and parasites filtration is a key component in ensuring the well-being of those that utilize swimming pools. If a diatomaceous earth filter is employed and properly designed, the application can prove to be extremely efficient in the removal and minimization of almost 100% of parasites. In order to achieve this, filtration media (diatomaceous earth) must be at least 4 micrometers to remove Cryptosporidium, and at least 7 micrometers to filter Giarda duodenalis (G. lamblia). Research has shown that DE filtration can provide a greater reduction in parasite oocyst concentrations that other methods including conventional and granular media filtration. DE filtration studies showed 6 logs of removal of parasitic oocysts in a full-scale water treatment simulation. (6 logs refers to the reduction of a microorganism by 99.9999% of one million) Because of DE filters low micron rating, it is able to trap the smallest pollutant particles present. 
DE filter usage in surface water treatment and recreational water treatment requires regular maintenance, and depending on the volume of water, maintenance and replacement may be required more frequently. Efficient filtration requires a continuous flow of water through the filter, with periodic pressure checks. Maintenance must be conducted regularly and the filter must be backwashed every four to six weeks, with fresh DE media added after every backwash. If maintenance is not conducted properly, a build-up of bacteria, viruses, and parasites may break through and the efficiency will be compromised. It is important to note that filtration systems do not guarantee full removal of possible contaminants; therefore a risk of bacteria, viruses, and parasites can still be present. Food & beverage industry Diatomaceous earth filtration can also be used in food and beverage applications to eliminate contaminants, including bacteria and microorganisms, which can often change the quality of the consumable item. If bacteria and fungi are not removed from certain consumable liquids, long-term contamination can result, which affects the preservation and quality of the product. Many products must meet filtration requirements; for example, brewers must meet certain requirements during the production of certain alcohols, including malt beverages (beer, ale, etc.). Beer filtration commonly must remove turbidity (yeast, hops resin, calcium oxalate), which can leave behind harmful microorganisms and affect the taste of the beverage. By conducting this filtration, microbes are eliminated, improving the taste and appearance of the beer while extending its preservation. While there are many ways to filter, diatomaceous earth filtration acts as a catcher that intercepts particles in beer, thus improving clarity. Diatomaceous earth has become a relatively simple choice for brewers, as it undergoes a natural process with no chemicals and the quantity of D.E. can be adjusted based on individual brewing needs. Environmental remediation Major components and process Diatomaceous earth (D.E.) filters can be modified based on the planned function of the filter, but all basic D.E. filters are composed of similar parts. The process begins with a direct pipeline to a raw water source, in which the water flow can be continuously controlled. Throughout the whole mechanism, it is recommended to use copper pipes as they are corrosion resistant. Adjoining water pipes Filtration of liquids must be supplied from a direct water source, which can vary with regard to location and water supply. Depending on the location or distribution of such fluids, the materials used to facilitate their flow must be rustproof and corrosion resistant. Among the popular choices of material, copper is the most commonly used, having long delivered safe drinking water thanks to its resistance to natural wear. In most industrial and non-commercial usage, copper piping used for fluids can also be insulated by sleeving or wrapping with polyphenylene ether pipe sleeves for additional protection. Alternatives to metal, depending on the intended use, include water pipes made of polyvinyl chloride (PVC), cross-linked polyethylene (PEX), and acrylonitrile butadiene styrene (ABS). This category of pipes uses plastic, as it is durable and easily formable while being able to withstand high pressures and resist rust or corrosion. During D.E.
filtration, the same material for piping must be utilized throughout the process to maintain the purity of water flow as it undergoes the filtration process. Precoat tank / body feed tank Fluids, commonly known as slurry, often consists of a mixture of particles varying in size which can not be efficiently filtered out by the main D.E. filter. Build up of such particles can increase pressure which results in reduced flow of liquid and a nonfunctional filter. To prevent this, the filtration process can include additional filter aid to distribute certain particles to prevent any problems that hinder the filtration process. Filter aid are solid particles that can improve the permeability and porosity to improve filtrate clarity by trapping specific sized particles while allowing continuous flow of liquids. When filter aid builds up it has a high porosity; although the volume may accumulate, approximately fifteen percent of the total volume is solid, which leaves the rest to be empty space. These filters can serve as a precoat that is applied prior to the filtration process. It is pumped through the filter press, simultaneously creating a porous filter cake on the specified filter cloths. Body feed is an additional filter aid which is often pumped throughout the whole filtration process to improve clarification and prevent build up of filter cake. Build up the filter cake can be detrimental as it becomes impermeable and can block the continuous flow of slurry. Usually, body feed is coarse and has a greater volume, which can assist filter cake build up while allowing particles to be efficiently removed. The pre-coat tank and body feed tank generally serve the same purpose which is to filter out larger particles that can impede the filtration process. Depending on the purity of the initial slurry, the quantity needed of either the body feed or pre-coat can vary. Septum The formation of filter cake does not occur spontaneously, and requires a membrane to support the accumulation of filter cake. This membrane is commonly known as the septum, which often is made up of plastic or metallic material that serves a similar function as mesh. The septum is porous and permeable with openings, allowing slurry to flow while diatomaceous earth accumulate and crowd the septum openings. Water pressure regulator Cycle times, improper maintenance, damaged septum, and an increase/decrease in flow can result in a change of pressure. Pressure is crucial to the efficiency in filtration: high pressure can damage the filter which can lead to unnecessary forces that push fluids to quickly through the septum. It is important to monitor the flow of filtrate as well as pre-coat and body feed to ensure that the proper flow is achieved with no hindrance. Manufactured diatomaceous filter types Pressure filter A method for eliminating particulates like iron, magnesium, mill scale, and other precipitates involves the usage of a pressure filter. This type of filter comprises a sturdy filter vessel designed to withstand internal pressure, along with a network of pipes for water distribution and collection, and can incorporate one or more types of filter media. Pressure filters find widespread application in municipal water systems, industrial settings, residential well water systems, and swimming pools. These DE filtration systems are rather simple and can be used in a vertical or horizontal setting and can be modified to allow the application of multimedia filters. 
Pressure filter systems have a water inlet and outlet, with the inlet situated at the top and the water outlet at the base of the filter. As the water flows through the water inlet, it encounters a grid assembly covered in synthetic cloth, which provides support for the diatomaceous earth cake. Gravity plays a part by forcing the flow of water to pass through the D.E. cake, which filters out any unwanted particles. As the flow of water continues, water that has been clarified at the base of the filtration tank exits through the water outlet to any designated vessel. These pressure filters serve a general purpose and are most applicable where the flow of fluids is consistent, thus requiring internal pressure monitoring of the filtered fluids. Vacuum filter References Additional reading Drinking water treatment processes for removal of Cryptosporidium and Giardia Recommended Standards for WaterWorks List of State-Specific Water Quality Standards for Turbidity Water Turbidity Benchmarks (CA) Precoat Filtration with Body-feed and Variable What is Precoat and Body Feed? Handbook of Water and Wastewater Treatment Plant Operations, 4th Edition Written Report on FILTRATION (Marciano et al., 2011) Advanced Physiochemical Treatment Processes Volume 4 (Kang et al. 2006) Slow sand and diatomaceous earth filtration of cysts and other particulates (Schuler et al. 2003) Wikipedia Student Program Filtration techniques
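Log-removal figures like the 6-log reduction quoted earlier translate directly into surviving fractions of organisms. A minimal sketch of that arithmetic, for illustration only:

```python
def surviving_fraction(log_removal: float) -> float:
    """Fraction of organisms remaining after a given log-removal value."""
    return 10 ** (-log_removal)

for logs in (2, 4, 6):
    remaining = surviving_fraction(logs)
    removed_pct = (1 - remaining) * 100
    print(f"{logs}-log removal: {removed_pct:.4f}% removed, "
          f"{remaining:.0e} fraction remaining")
# A 6-log reduction removes 99.9999% of organisms, i.e. one in a million remains.
```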
Diatomaceous earth filtration
[ "Chemistry" ]
2,388
[ "Filtration techniques", "Filtration" ]
74,917,010
https://en.wikipedia.org/wiki/Toulouse%20Aerospace
Toulouse Aerospace, formerly Montaudran Aerospace or Aerospace Campus, is a campus project linked to the aeronautics, space and embedded systems jobs and part of Aerospace Valley. Located in Toulouse in the Montaudran district, it will be built entirely by the Toulouse Métropole. Its surface area will be 40 hectares on the site of the former Toulouse-Montaudran airport which saw the beginnings of Aéropostale. Description It is the town planner David Mangin who will direct the entire project, construction of which began in the first quarter of 2011. The Institut Clément Ader, from the Federal University of Toulouse Midi-Pyrénées, has set up within the walls of the Espace Clément Ader in March 2014, and was inaugurated in October. Building B 612 of 26,140 m2 opened its doors on July 1, 2018. Mangin's project was preferred for the place it gave to the preservation of the heritage of Aéropostale: ten hectares should be dedicated to it and certain historic buildings will be preserved (map room, Château Petit Raynal, etc.), to make the L'Envol des pionniers museum. Project Like the Cancéropôle for oncology, this involves bringing together in the same place the main players in training and research in a field, in this case aeronautics and space: Bringing together the two Toulouse aeronautical grandes écoles of the GEA: ENAC and SUPAERO – as well as universities and university institutes located in the same geographical area: Toulouse III - Paul Sabatier University, INSA Toulouse and INPT. The Maison de la formation Jacqueline Auriol will thus bring together under the same roof all Toulouse training courses in mechanical and production engineering in the aeronautics and space sector (opening planned for January 2022). Group of 1000 researchers mainly from ONERA, CCR EADS, CNRS, and CNES. Creation of infrastructure necessary for the development of SMEs and provision of common services. This project is a continuation of the Rangueil scientific complex where ISAE, ENAC, INSA Toulouse, Paul-Sabatier University, LAAS-CNRS, CNES are already located... and close to important players such as Airbus, Airbus Defense and Space, Thales Alenia Space, Freescale, Latécoère, Siemens VDO Automotive, Thales. Toulouse Aerospace will therefore be the whole made up of this new area under development and the current Rangueil complex. It is part of the continuation of making Toulouse the international capital of aeronautics and space. References External links Website of Toulouse Aerospace High-technology business districts in France Aviation in France Aerospace engineering organizations Toulouse Midi-Pyrénées Companies based in Occitania (administrative region)
Toulouse Aerospace
[ "Engineering" ]
552
[ "Aeronautics organizations", "Aerospace engineering organizations", "Aerospace engineering" ]
74,917,376
https://en.wikipedia.org/wiki/Atmospheric%20methane%20removal
Atmospheric methane removal is a category of potential approaches being researched to accelerate the breakdown of methane that is in the atmosphere, for the purpose of mitigating some of the impacts of climate change. Atmospheric methane has increased since pre-industrial times from 0.7 ppm to 1.9 ppm. From 2010 to 2019, methane emissions caused 0.5 °C (about 30%) of observed global warming. Global methane emissions approached a record 600 Tg CH4 per year in 2017. Natural atmospheric methane sinks Methane has a limited atmospheric lifetime, about 10 years, due to substantial methane sinks. The primary methane sink is atmospheric oxidation, from hydroxyl radicals (~90% of the total sink) and chlorine radicals (0-5% of the total sink). The rest is consumed by methanotrophs and other methane-oxidizing bacteria and archaea in soils (~5%). Potential approaches Different methods to remove methane from the atmosphere include thermal-catalytic oxidation, photocatalytic oxidation, biological methanotrophic methane removal, concentration with zeolites or other porous solids, and separation by membranes. Potential methods can be categorized by the underlying catalytic process, or the potential deployment form. Enhanced atmospheric methane oxidation Enhanced Atmospheric Methane Oxidation is the concept of enhancing the overall oxidative methane sink in the atmosphere, through generating additional hydroxyl or chlorine atmospheric radicals. Iron salt aerosols Iron salt aerosols are one proposed method of enhanced atmospheric methane oxidation which involves lofting iron-based particles into the atmosphere (e.g. from planes or ships) to enhance atmospheric chlorine radicals, a natural methane sink. Winds over the Sahara raise dust into the troposphere and disperse it over the Atlantic. A 2023 study suggests that this has contributed to natural atmospheric methane oxidation. Iron salt aerosols are being studied for the potential of iron(III) chloride (FeCl3) to catalyze chlorine radical production. Chlorine atoms are produced by photolysis from the FeCl3 stemming from iron-containing airborne dust aerosol particles in the oceanic boundary layer. FeCl3 + hv → FeCl2 + oCl The chlorine atoms initiate methane oxidation: CH4 + oCl → HCl + oCH3 The resulting methyl radical is unstable and oxidises naturally to CO2 and water: 3.5O2 + 2oCH3 → 2CO2 + 3H2O Side effects of ferric chloride Fine particles dispersed in the atmosphere can serve as cloud condensation nuclei and thereby cause marine cloud brightening Eventually all FeCl3 particles are washed out of the air and fall on land or water, where they dissolve into iron compounds and salt. Iron salt aerosols may also therefore contribute to iron fertilization. Terrestrial methanotroph enhancement Soil bacteria and archaea account for approximately 5% of the natural methane sink. Early research is going into how the activity of these bacteria may be able to be enhanced, either through the use of soil amendments, or introduction of selected or engineered methane-oxidizing bacteria. Catalytic engineered systems Catalytic engineered systems are designed to pass air from the atmosphere, either passively or actively, through catalytic systems which leverage energy from the sun, an artificial light, or heat to oxidize methane. These catalysts include thermocatalysts, photocatalysts, and radicals produced artificially through photolysis (using light to break apart a molecule). References Climate engineering Greenhouse gases
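The emission and lifetime figures quoted above can be combined in a rough one-box steady-state estimate: the implied atmospheric burden is emissions multiplied by lifetime. The sketch below uses only the numbers given in this article and assumes, for illustration, that the hydroxyl, soil and chlorine sinks take roughly 90%, 5% and 5% of the total (chlorine is given above as 0–5%); it is not a climate model:

```python
# Rough one-box steady-state estimate for atmospheric methane.
emissions_tg_per_yr = 600.0   # global emissions (Tg CH4 / yr), figure quoted above
lifetime_yr = 10.0            # approximate atmospheric lifetime quoted above

steady_state_burden = emissions_tg_per_yr * lifetime_yr  # Tg CH4
print(f"Implied steady-state burden: {steady_state_burden:.0f} Tg CH4")

# Approximate split of the sink, using the shares discussed above.
sinks = {"hydroxyl radicals": 0.90, "soil methanotrophs": 0.05, "chlorine radicals": 0.05}
for name, share in sinks.items():
    print(f"  {name}: ~{share * emissions_tg_per_yr:.0f} Tg CH4 removed per year")
```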
Atmospheric methane removal
[ "Chemistry", "Engineering", "Environmental_science" ]
720
[ "Greenhouse gases", "Geoengineering", "Environmental chemistry", "Planetary engineering" ]
74,921,090
https://en.wikipedia.org/wiki/Magnesium%20permanganate
Magnesium permanganate is an inorganic compound with the chemical formula Mg(MnO4)2. It can be used as an oxidant. Preparation Magnesium permanganate hexahydrate was prepared by E. Mitscherlich and H. Aschoff by reacting barium permanganate with magnesium sulfate: Ba(MnO4)2 + MgSO4 → BaSO4↓ + Mg(MnO4)2 It can also be obtained by the reaction of magnesium chloride and silver permanganate: MgCl2 + 2 AgMnO4 → 2 AgCl↓ + Mg(MnO4)2 The hexahydrate Mg(MnO4)2·6H2O, which is slightly hygroscopic, can be crystallized from the solution. The anhydrous form can be obtained by decomposing the hexahydrate by heating it. Chemical properties Magnesium permanganate hexahydrate is a blue-black solid. It decomposes at 130 °C with the evolution of oxygen in an autocatalytic decomposition process. The tetrahydrate decomposes above 150 °C. The crystals are practically insoluble in carbon trichloride, carbon tetrachloride, benzene, toluene, nitrobenzene, ether, ligroin and carbon disulfide, but soluble in pyridine and glacial acetic acid. It dissolves in water and dissociates completely in dilute solutions. It oxidizes a range of organic compounds and reacts instantly (in some cases with fire) with common solvents such as tetrahydrofuran, ethanol, methanol, t-butanol, acetone and acetic acid. Applications Magnesium permanganate is used in various branches of industry and technology, such as: a wood impregnation agent. an additive in tobacco filters. as a catalyst in the air oxidation of toluene to benzoic acid and in proteome research. References Magnesium compounds Permanganates
Magnesium permanganate
[ "Chemistry" ]
385
[ "Oxidizing agents", "Permanganates" ]
58,175,832
https://en.wikipedia.org/wiki/Multitask%20optimization
Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously. The paradigm has been inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics. The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes, the search progress can be transferred to substantially accelerate the search on the other. The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks. In practice an attempt is to intentionally solve a more difficult task that may unintentionally solve several smaller problems. There is a direct relationship between multitask optimization and multi-objective optimization. Methods There are several common approaches for multi-task optimization: Bayesian optimization, evolutionary computation, and approaches based on Game theory. Multi-task Bayesian optimization Multi-task Bayesian optimization is a modern model-based approach that leverages the concept of knowledge transfer to speed up the automatic hyperparameter optimization process of machine learning algorithms. The method builds a multi-task Gaussian process model on the data originating from different searches progressing in tandem. The captured inter-task dependencies are thereafter utilized to better inform the subsequent sampling of candidate solutions in respective search spaces. Evolutionary multi-tasking Evolutionary multi-tasking has been explored as a means of exploiting the implicit parallelism of population-based search algorithms to simultaneously progress multiple distinct optimization tasks. By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer. This is induced when solutions associated with different tasks crossover. Recently, modes of knowledge transfer that are different from direct solution crossover have been explored. Game-theoretic optimization Game-theoretic approaches to multi-task optimization propose to view the optimization problem as a game, where each task is a player. All players compete through the reward matrix of the game, and try to reach a solution that satisfies all players (all tasks). This view provide insight about how to build efficient algorithms based on gradient descent optimization (GD), which is particularly important for training deep neural networks. In GD for MTL, the problem is that each task provides its own loss, and it is not clear how to combine all losses and create a single unified gradient, leading to several different aggregation strategies. This aggregation problem can be solved by defining a game matrix where the reward of each player is the agreement of its own gradient with the common gradient, and then setting the common gradient to be the Nash Cooperative bargaining of that system. Applications Algorithms for multi-task optimization span a wide array of real-world applications. Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner. 
In machine learning, the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models. In addition, the concept of multi-tasking has led to advances in automatic hyperparameter optimization of machine learning models and ensemble learning. Applications have also been reported in cloud computing, with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously. Recent work has additionally shown applications in chemistry. See also Multi-objective optimization Multi-task learning Multicriteria classification Multiple-criteria decision analysis References Machine learning
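As a concrete illustration of the gradient-aggregation problem described in the game-theoretic approach above, the sketch below trains a tiny shared parameter vector on two toy regression tasks and combines the per-task gradients by simple averaging. The data, model and averaging rule are hypothetical stand-ins for illustration, not the Nash-bargaining scheme the article refers to:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy regression tasks sharing one parameter vector w (hypothetical data).
X1, y1 = rng.normal(size=(50, 3)), rng.normal(size=50)
X2, y2 = rng.normal(size=(50, 3)), rng.normal(size=50)
w = np.zeros(3)

def grad(X, y, w):
    """Gradient of the mean squared error for one task."""
    return 2 * X.T @ (X @ w - y) / len(y)

lr = 0.05
for step in range(200):
    g1, g2 = grad(X1, y1, w), grad(X2, y2, w)
    # Naive aggregation: average the per-task gradients into one update.
    # Game-theoretic approaches replace this average with a bargaining solution.
    w -= lr * (g1 + g2) / 2

print("shared parameters:", np.round(w, 3))
```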
Multitask optimization
[ "Engineering" ]
715
[ "Artificial intelligence engineering", "Machine learning" ]
58,176,588
https://en.wikipedia.org/wiki/Joint%20Global%20Ocean%20Flux%20Study
The Joint Global Ocean Flux Study (JGOFS) was an international research programme on the fluxes of carbon between the atmosphere and ocean, and within the ocean interior. Initiated by the Scientific Committee on Oceanic Research (SCOR), the programme ran from 1987 through to 2003, and became one of the early core projects of the International Geosphere-Biosphere Programme (IGBP). The overarching goal of JGOFS was to advance the understanding of, as well as improve the measurement of, the biogeochemical processes underlying the exchange of carbon across the air—sea interface and within the ocean. The programme aimed to study these processes from regional to global spatial scales, and from seasonal to interannual temporal scales, and to establish their sensitivity to external drivers such as climate change. Early in the programme in 1988, two long-term time-series projects were established in the Atlantic and Pacific basins. These — Bermuda Atlantic Time-series Study (BATS) and Hawaii Ocean Time-series (HOT) — continue to make observations of ocean hydrography, chemistry and biology to the present-day. In 1989, JGOFS undertook the multinational North Atlantic Bloom Experiment (NABE) to investigate and characterise the annual spring bloom of phytoplankton, a key feature in the carbon cycle of the open ocean. An important aspect of JGOFS lay in its objective to develop an increased network of observations, made using routine procedures, and curated such that they were easily available to researchers. JGOFS also oversaw the development of models of the marine system based on understanding gained from its observational programme. See also Biological pump Geochemical Ocean Sections Study (GEOSECS) Global Ocean Data Analysis Project (GLODAP) Global Ocean Ecosystem Dynamics (GLOBEC) Solubility pump World Ocean Atlas (WOA) World Ocean Circulation Experiment (WOCE) References External links International Web Site of the Joint Global Ocean Flux Study , Woods Hole Oceanographic Institution Joint Global Ocean Flux Study CD-ROM National Oceanic and Atmospheric Administration Biological oceanography Carbon Chemical oceanography Oceanography Physical oceanography
Joint Global Ocean Flux Study
[ "Physics", "Chemistry", "Environmental_science" ]
428
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Chemical oceanography", "Physical oceanography" ]
58,179,215
https://en.wikipedia.org/wiki/Hot%20Particulate%20Ingestion%20Rig
The Hot Particulate Ingestion Rig (HPIR) is a gas burner that can shoot sand into a hot gas flow and onto a target material to test how that material's thermal barrier coating is impacted by the molten sand. It was developed by the U.S. Army Research Laboratory (ARL) to experiment with new coating materials for gas turbine engines used in military aircraft. Mechanism The HPIR uses standard military fuel and dry compressed air to produce combusted gas flows that can range from 400 °C to 1650 °C that travels as fast as 1060 meters per second or Mach 0.8. A LabVIEW interface is used to monitor and control all the operations of the HPIR parameters and pneumatic table. Monitoring is also performed by Williamson PRO series single/dual wavelength pyrometers, S-type thermocouples, and a FLIR SC6700 mid-wave infrared (IR) camera in order to determine the emissivity of each sample. Samples are placed in a steel holder in front of the rig at a 10 degree incident angle so that heats up the surface in a uniform manner. A pneumatic table moves the sample into the flame and an S-type thermocouple is used to monitor the flame's temperature. During testing, the sample is initially exposed to a hot gas flow at 0.28 Mach at a flame temperature of 815 °C until the pyrometer detects that the surface temperature of the target has reached 540 °C. Then, the sample goes through several cycles of heating and cooling as an initial survivability check before it can be exposed to even higher temperatures. Short-term durability testing consists of three of these cycles with the heating stage reaching engine-relevant temperatures and the cooling stage set at ambient conditions. In 2016, the HPIR was modified to ingest sand and salt into the combustion chamber at 1 to 200 grams per minute. Sandphobic coating technology In 2015, researchers at ARL were tasked with finding a way to prevent flying, micron-sized sand and dust particles from entering the gas turbine engines of military aircraft and damaging the internal machinery. While modern engines have particle separators that can filter out large particles, fine, powder-like sand particles that are smaller than 100 micrometers in size have consistently managed to pass through the engine's combustors and attach to the blades and vanes. As the rotor blades experienced cycles of heating and cooling during operation, the particles melted due to the extreme temperatures and then subsequently hardened onto the turbine blades. As a result, the micron-sized sand particles have frequently destroyed the engine's internal coating, which has led to severe sand glazing, blade tip wear, calcia-magnesia-alumina-silicate (SMAS) attack, oxidation, plugged cooling holes, and, ultimately, engine loss. This problem has recently worsened due to the fact that more recent, state-of-the-art turbine engines operate at much higher temperatures than past generation turbomachinery, ranging from 1400 °C to 1500 °C. According to ARL scientists, the damage caused by these tiny sand particles have reduced the lifespan of a typical T-700 engine from 6000 hours to 400 hours, and replacing the rotors can cost more than $30,000. They estimate that one third of fielded engines used by the military have been affected by this sand ingestion problem. As part of a collaborative research effort with the Aviation and Missile Research, Development, and Engineering Center (AMRDEC), the U.S. 
Navy Naval Air Systems Command (NAVAIR) and the National Aeronautics and Space Administration (NASA), ARL modified the HPIR so that it can model how sand particles adhere, melt, and glassify on thermal barrier coatings. According to ARL researchers, the HPIR is the first system to confirm how the sand particles damage the turbine blades at temperatures similar to that of a turbine engine out on the field. Using high-speed imaging technology, ARL scientists were able to film how sand particles experience a phase change from solid to liquid before being deposited onto turbine blade material targets and vaporizing. In 2018, the team used the HPIR to test different coating materials and develop what they call “sandphobic coatings,” which will be designed so that the sand particles flake off the rotor blades instead of attaching to them. References Military technology Gas turbines Turbines Test equipment Sand
Hot Particulate Ingestion Rig
[ "Chemistry", "Technology" ]
912
[ "Engines", "Turbines", "Turbomachinery", "Gas turbines" ]
58,183,003
https://en.wikipedia.org/wiki/Ko%C5%A1evski%20Potok
Koševski Potok is a river in Sarajevo, Bosnia and Herzegovina. The river is partially subterranean, as a significant portion of its course passing through long box culvert, covering the river in a man-made structure and diversion project, designed for gaining space for urban development since the late 1940s and early 1950s. Headwaters The Koševski Potok originates from a confluence of two smaller creeks in the region of Nahorevo neighbourhood, on the northern outskirts of Sarajevo, Nahorevski Potok and Grončavica creek (itself continuation of sinking creek called Grabovica which runs between plateaus of Crepoljsko and Biosko), draining from plateaus of Bukovik, Crepoljsko and Biosko, southeastern and southern slopes of Ozren mountain. Subterranean section The Koševski Potok enters the urban area of Sarajevo from the north, between Pionirska Dolina recreation park and neighbourhood of Koševo, and at that point is diverted underground. From this point Koševski Potok is underground and it runs through the urban area of Sarajevo all the way to Skenderija neighbourhood, where it meets the Miljacka river near the Sarajevo City Hall. The stream emerges from underground just below ZETRA Olympic hall, and runs through open space for about 100 meters, before enters culvert again. Some 100 meters before the confluence with the Miljacka, Koševski Potok emerges from it within a public park where the Sarajevo City Hall is situated, just a few meters below the Ali Pasha Mosque. It runs through the park and enters culvert once more for the last 30 or so meters, running under the one of city's main street before empties into the Miljacka. References External links Rivers of Bosnia and Herzegovina Subterranean rivers of Bosnia and Herzegovina Subterranean rivers of Sarajevo Geography of Sarajevo Miljacka Hydraulic engineering Water tunnels
Koševski Potok
[ "Physics", "Engineering", "Environmental_science" ]
392
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
66,240,342
https://en.wikipedia.org/wiki/Conbercept
Conbercept, sold under the commercial name Lumitin, is a novel vascular endothelial growth factor (VEGF) inhibitor used to treat neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME). The anti-VEGF was approved for the treatment of neovascular AMD by the China State FDA (CFDA) in December 2013. As of December 2020, conbercept is undergoing phase III clinical trials through the U.S. Food and Drug Administration’s PANDA-1 and PANDA-2 development programs. Conbercept was developed by Chengdu Kanghong Biotech Company in the People’s Republic of China and is marketed under the name Lumitin. Medical uses It is used for the treatment of neovascular age-related macular degeneration (nAMD), choroidal neovascularization secondary to pathologic myopia (), diabetic macular edema (DME). The medication is given through intravitreal injection (IVT). Contraindications Conbercept is contraindicated in patients with known hypersensitivity to the active ingredient, in patients with ocular or periocular infections, and in patients with active intraocular inflammation. Adverse effects Common adverse effects of the eye formulation include eye pain, transient intraocular pressure (IOP) increase and conjunctival hemorrhage. Mechanism of action Conbercept is a soluble receptor decoy that binds specifically to VEGF-B, placental growth factor (PlGF), and various isoforms of VEGF-A. Conbercept has a VEGF-R2 kinase insert domain receptor (KDR) Ig-like region 4 (KDRd4) which improves the three-dimensional structure and efficiency of dimer formation, thereby increasing the binding capacity of conbercept to VEGF. Composition Conbercept is a recombinant fusion protein composed of VEGFR-1 (second domain) and VEGFR-2 (third and fourth domains) regions fused to the Fc portion of human IgG1 immunoglobulin. History Chengdu Kanghong Pharmaceutical Group, a medical company based in Sichuan, started the development of conbercept in 2005. In 2012, the drug was included on the World Health Organization’s Drug Information 67th List of Recommended International Nonproprietary Names, which was the first Chinese innovator biotech drug to be recognized on the list. In November 2013, the Chinese Food and Drug Administration approved conbercept for the treatment of AMD. By 2014, conbercept was marketed for treatment of wAMD in China. In 2016, Phase III clinical trials of conbercept were authorized by the U.S. Food and Drug Administration. In 2017, Kanghong Pharmaceutical Group partnered with Syneos Health to process Phase III clinical trials simultaneously in more than 30 countries around the world with an investment of $228 million. In 2020, conbercept was approved for use in Mongolia. Clinical trials in China Conbercept is the only anti-VEGF drug confirmed by randomized controlled trials (RCT) to sustain visual improvements with 3+Q3M regimens (PHOENIX study) Conbercept significantly improves visual acuity and anatomical outcomes in patient with PCV (AURORA Study). Conbercept provides significantly visual acuity improvement in DME patients (SAILING study). Society and culture Legal Status In 2013, the CFDA approved conbercept for the treatment of neovascular age-related macular degeneration (nAMD) In 2017, the CFDA approved it for the treatment of pathologic myopia associated choroidal neovascularization () In 2019, the CFDA approved it for the treatment of diabetic macular edema (DME) Economic Conbercept has been shown to be a cost-effective wAMD treatment option in China. 
Compared to two similar anti-VEGF intravitreal drugs, ranibizumab and aflibercept, conbercept has been shown to be the most cost-effective option for treatment of wAMD in China. In 2017, the national basic medical insurance in China began covering conbercept. References External links Conbercept, Drug Information Portal. U.S. National Library of Medicine. Ophthalmology drugs Angiogenesis inhibitors Engineered proteins
Conbercept
[ "Biology" ]
937
[ "Angiogenesis", "Angiogenesis inhibitors" ]
66,241,271
https://en.wikipedia.org/wiki/Ergodicity%20economics
Ergodicity economics is a research programme that applies the concept of ergodicity to problems in economics and decision-making under uncertainty. The programme's main goal is to understand how traditional economic theory, framed in terms of the expectation values, changes when replacing expectation value with time averages. In particular, the programme is interested in understanding how behaviour is shaped by non-ergodic economic processes, that is processes where the expectation value of an observable does not equal its time average. Background Mean values and expected values are used extensively in economic theory, most commonly as a summary statistic, e.g. used for modelling agents’ decisions under uncertainty. Early economic theory was developed at a time when the expected value had been invented but its relation to the time average had not been studied. No clear distinction was made between the two mathematical objects, which can be interpreted as an implicit assumption of ergodicity. Ergodicity economics explores what aspects of economics can be informed by avoiding this implicit assumption. While one common critique of modelling decisions based on expected values is the sensitivity of the mean to outliers, ergodicity economics focuses on a different critique. It emphasizes the physical meaning of expected values as averages across a statistical ensemble of parallel systems. It insists on a physical justification when expected values are used. In essence, at least one of two conditions must hold: the average value of an observable across many real systems is relevant to the problem, and the sample of systems is large enough to be well approximated by a statistical ensemble; the average value of an observable in one real system over a long time is relevant to the problem, and the observable is well modelled as ergodic. In ergodicity economics, expected values are replaced, where necessary, by averages that account for the ergodicity or non-ergodicity of the observables involved. Non-ergodicity is closely related to the problems of irreversibility and path dependence that are common themes in economics. Relation to other sciences In mathematics and physics, the concept of ergodicity is used to characterise dynamical systems and stochastic processes. A system is said to be ergodic, if a point of a moving system will eventually visit all parts of the space that the system moves in, in a uniform and random sense. Ergodicity implies that the average behaviour along a single trajectory through time (time average) is equivalent to the average behaviour of a large ensemble at one point in time (ensemble average). For an infinitely large ensemble, the ensemble average of an observable is equivalent to the expected value. Ergodicity economics inherits from these ideas the probing of the ergodic properties of stochastic processes used as economic models. Historical Background Ergodicity economics questions whether expected value is a useful indicator of an economic observable's behaviour over time. In doing so it builds on existing critiques of the use of expected value in the modeling of economic decisions. Such critiques started soon after the introduction of expected value in 1654. For instance, expected-utility theory was proposed in 1738 by Daniel Bernoulli as a way of modeling behavior which is inconsistent with expected-value maximization. 
In 1956, John Kelly devised the Kelly criterion by optimizing the use of available information, and Leo Breiman later noted that this is equivalent to optimizing time-average performance, as opposed to expected value. The ergodicity economics research programme originates in two papers by Ole Peters in 2011, a theoretical physicist and current external professor at the Santa Fe Institute. The first studied the problem of optimal leverage in finance and how this may be achieved by considering the non-ergodic properties of geometric Brownian motion. The second paper applied principles of non-ergodicity to propose a possible solution for the St. Petersburg paradox. More recent work has suggested possible solutions for the equity premium puzzle, the insurance puzzle, gamble-selection, probability weighting, and has provided insights into the dynamics of income inequality. Decision theory Ergodicity economics emphasizes what happens to an agent's wealth x(t) over time. From this follows a possible decision theory where agents maximize the time-average growth rate of wealth. The functional form of the growth rate, g, depends on the wealth process x(t). In general, a growth rate takes the form g = Δv(x)/Δt, where the function v(x) linearizes x(t), such that growth rates evaluated at different times can be meaningfully compared. Growth processes generally violate ergodicity, but their growth rates may nonetheless be ergodic. In this case, the time-average growth rate, ḡ, can be computed as the rate of change of the expected value of v(x), i.e. ḡ = d⟨v(x)⟩/dt. (1) In this context, v(x) is called the ergodicity transformation. Relation to classic decision theory An influential class of models for economic decision-making is known as expected utility theory. The following specific model can be mapped to the growth-rate optimization highlighted by ergodicity economics. Here, agents evaluate monetary wealth x according to a utility function u(x), and it is postulated that decisions maximize the expected value of the change in utility, ⟨Δu(x)⟩. (2) This model was proposed as an improvement of expected-value maximization, where agents maximize ⟨Δx⟩. A non-linear utility function allows the encoding of behavioral patterns not represented in expected-value maximization. Specifically, expected-utility maximizing agents can have idiosyncratic risk preferences. An agent specified by a convex utility function is more risk-seeking than an expected wealth maximizer, and a concave utility function implies greater risk aversion. Comparing (2) to (1), we can identify the utility function u(x) with the linearization v(x), and make the two expressions identical by dividing (2) by Δt. Division by Δt simply implements a preference for faster utility growth in the expected-utility-theory decision protocol. This mapping shows that the two models will yield identical predictions if the utility function applied under expected-utility theory is the same as the ergodicity transformation needed to compute an ergodic growth rate. Ergodicity economics thus emphasizes the dynamic circumstances under which a decision is made, whereas expected-utility theory emphasizes idiosyncratic preferences to explain behavior. Different ergodicity transformations indicate different types of wealth dynamics, whereas different utility functions indicate different personal preferences. The mapping highlights the relationship between the two approaches, showing that differences in personal preferences can arise purely as a result of different dynamic contexts of decision makers.
Continuous example: Geometric Brownian motion A simple example for an agent's wealth process, x(t), is geometric Brownian motion (GBM), commonly used in mathematical finance and other fields. x(t) is said to follow GBM if it satisfies the stochastic differential equation dx = x(μ dt + σ dW), (3) where dW is the increment in a Wiener process, and μ ('drift') and σ ('volatility') are constants. Solving (3) gives x(t) = x(0) exp((μ − σ²/2)t + σW(t)). (4) In this case the ergodicity transformation is v(x) = ln x, as is easily verified: ln x(t) grows linearly in time. Following the recipe laid out above, this leads to the time-average growth rate ḡ = μ − σ²/2. (5) It follows that for geometric Brownian motion, maximizing the rate of change in the logarithmic utility function, u(x) = ln x, is equivalent to maximizing the time-average growth rate of wealth, i.e. what happens to the agent's wealth over time. Stochastic processes other than (3) possess different ergodicity transformations, where growth-optimal agents maximize the expected value of utility functions other than the logarithm. Trivially, replacing (3) with additive dynamics implies a linear ergodicity transformation, and many similar pairs of dynamics and transformations can be derived. Discrete example: multiplicative Coin Toss A popular illustration of non-ergodicity in economic processes is a repeated multiplicative coin toss, an instance of the binomial multiplicative process. It demonstrates how an expected-value analysis can indicate that a gamble is favorable although the gambler is guaranteed to lose over time. Definition In this thought experiment, a person participates in a simple game where they toss a fair coin. If the coin lands heads, the person gains 50% on their current wealth; if it lands tails, the person loses 40%. The game shows the difference between the expected value of an investment, or bet, and the time-average or real-world outcome of repeatedly engaging in that bet over time. Calculation of Expected Value Denoting current wealth by x(t), and the time when the payout is received by t + δt, we find that wealth after one round is given by the random variable x(t + δt), which takes the values 1.5 x(t) (for heads) and 0.6 x(t) (for tails), each with probability 1/2. The expected value of the gambler's wealth after one round is therefore ⟨x(t + δt)⟩ = (1/2)(1.5 x(t)) + (1/2)(0.6 x(t)) = 1.05 x(t). By induction, after T rounds expected wealth is ⟨x(t + Tδt)⟩ = 1.05^T x(t), increasing exponentially at 5% per round in the game. This calculation shows that the game is favorable in expectation—its expected value increases with each round played. Calculation of Time-Average The time-average performance indicates what happens to the wealth of a single gambler who plays repeatedly, reinvesting their entire wealth every round. Due to compounding, after T rounds the wealth will be x(t + Tδt) = x(t) r₁ r₂ … r_T, where we have written r_i to denote the realized random factor by which wealth is multiplied in the i-th round of the game (either 1.5, for heads; or 0.6, for tails). Averaged over time, wealth has grown per round by a factor ḡ_T = (r₁ r₂ … r_T)^(1/T). Introducing the notation n_h for the number of heads in a sequence of T coin tosses, we re-write this as ḡ_T = (1.5^(n_h) 0.6^(T − n_h))^(1/T). For any finite T, the time-average per-round growth factor, ḡ_T, is a random variable. The long-time limit, found by letting the number of rounds diverge (T → ∞), provides a characteristic scalar which can be compared with the per-round growth factor of the expected value.
The proportion of heads tossed then converges to the probability of heads (namely 1/2), and the time-average growth factor is Discussion The comparison between expected value and time-average performance illustrates an effect of broken ergodicity: over time, with probability one, wealth decreases by about 5% per round, in contrast to the increase by 5% per round of the expected value. How the mind is tricked when betting on a non-stationary system To explain the danger of betting in a non-stationary system, a simple game is used. We have two people sitting opposite each other separated by a black cloth, so that they cannot see each other. They are playing the following game: the person we will call A tosses a coin and the person we will call B tries to guess the state in which the coin is on the table. This game lasts an arbitrary interval of time and person A is free to choose how many tosses to make during the chosen interval of time, person B does not see the toss of the coin but can at any time, within the interval of time, make a bet. When he makes a bet, if he guesses the state in which the coin is at that moment, he wins. The game begins, A tosses only once (result: heads), while B bets twice on heads, winning both times. Question: What is the overall probability of the outcome? • B calculates a probability of 25% (0.5 × 0.5), considering bets as independent. • A calculates a probability of 50% since there was only one toss and not two separate events. The difference arises from the estimate of the conditional probability: • B estimates the conditional probability in this way P(E2 | E1) = P(E2) treating the events (bets) as completely independent. • A estimates the conditional probability in this other way P(E2 | E1) = 1 treating the events as completely dependent. E1=first bet E2=second bet The correct answer depends on information: only the person tossing the coin (A) knows the number of tosses and can correctly estimate the probability. Application to financial markets The game highlights how traders (player B) often treat their trades as independent, ignoring the non-ergodic structure of financial markets (player A). Markets are not ergodic because sequences of events cannot be simply represented by long-term statistical averages. In other words, returns do not follow independent and identically distributed (i.i.d.) processes, and historical conditions profoundly influence future outcomes. This error leads to an overestimation of predictive capabilities and excessive risk-taking . Coverage in the wider media In December 2020, Bloomberg news published an article titled "Everything We’ve Learned About Modern Economic Theory Is Wrong" discussing the implications of ergodicity in economics following the publication of a review of the subject in Nature Physics. Morningstar covered the story to discuss the investment case for stock diversification. In the book Skin in the Game, Nassim Nicholas Taleb suggests that the ergodicity problem requires a rethinking of how economists use probabilities. A summary of the arguments was published by Taleb in a Medium article in August 2017. In the book The End of Theory, Richard Bookstaber lists non-ergodicity as one of four characteristics of our economy that are part of financial crises, that conventional economics fails to adequately account for, and that any model of such crises needs to take adequate account of. The other three are: computational irreducibility, emergent phenomena, and radical uncertainty. 
In the book The Ergodic Investor and Entrepreneur, Boyd and Reardon tackle the practical implications of non-ergodic capital growth for investors and entrepreneurs, especially for those with a sustainability, circular economy, net positive, or regenerative focus. James White and Victor Haghani discuss the field of ergodicity economics in their book The Missing Billionaires. Criticisms It has been claimed that expected utility theory implicitly assumes ergodicity in the sense that it optimizes an expected value which is only relevant to the long-term benefit of the decision-maker if the relevant observable is ergodic. Doctor, Wakker, and Tang argue that this is wrong because such assumptions are “outside the scope of expected utility theory as a static theory”. They further argue that ergodicity economics overemphasizes the importance of long-term growth as “the primary factor that explains economic phenomena,” and downplays the importance of individual preferences. They also caution against optimizing long-term growth inappropriately. Doctor, Wakker, and Tang gives the example of a short-term decision between A) a great loss incurred with certainty and B) a gain enjoyed with almost-certainty paired with an even greater loss at negligible probability. In the example the long-term growth rate favors the certain loss and seems an inappropriate criterion for the short-term decision horizon. Finally, an experiment by Meder and colleagues claims to find that individual risk preferences change with dynamical conditions in ways predicted by ergodicity economics. Doctor, Wakker, and Tang criticize the experiment for being confounded by differences in ambiguity and the complexity of probability calculations. Further, they criticize the analysis for applying static expected utility theory models to a context where dynamic versions are more appropriate. In support of this, Goldstein claims to show that multi-period EUT predicts a similar change in risk preferences as observed in the experiment. See also Santa Fe Institute St. Petersburg paradox References Paradoxes in economics Behavioral finance Mathematical economics Coin flipping Economic theories Ergodic theory
Ergodicity economics
[ "Mathematics", "Biology" ]
3,191
[ "Behavior", "Applied mathematics", "Ergodic theory", "Behavioral finance", "Mathematical economics", "Human behavior", "Dynamical systems" ]
66,242,082
https://en.wikipedia.org/wiki/Magnetic%20resonance%20myelography
Magnetic resonance myelography (MR myelography or MRI myelography) is a noninvasive medical imaging technique that can provide anatomic information about the subarachnoid space. It is a type of MRI examination that uses a contrast medium and a magnetic resonance imaging scanner to detect pathology of the spinal cord, including the location of a spinal cord injury, cysts, tumors and other abnormalities. The procedure involves the injection of a gadolinium-based contrast medium into the cervical or lumbar spine, followed by the MRI scan. Procedure The radiologist will first numb the skin with a local anesthetic and then inject the gadolinium-based contrast medium into the subarachnoid space at the interspace between the third and fourth lumbar vertebrae (L3-L4). The patient will then be asked to roll on the table until the contrast is evenly distributed in the subarachnoid space and fills the nerve roots. The patient is then transferred to the MRI table and the scan is taken. Postprocedural care The patient should be adequately hydrated to remove contrast from the body. The patient should be observed following the examination for adverse effects of the contrast medium. The myelogram is performed on an outpatient basis, so the patient should be properly instructed regarding limitations following the procedure, such as driving. Instructions regarding postprocedural care, including warning signs of adverse reactions and the possibility of persistent headaches, should be given to the patient by a trained professional. A physician should be available to answer questions and provide patient management following the procedure. Indications Demonstration of the site of a cerebrospinal fluid leak (postlumbar puncture headache, postspinal surgery headache, rhinorrhea, or otorrhea) Surgical planning, especially in regard to the nerve roots. Radiation therapy planning. Diagnostic evaluation of spinal or basal cisternal disease. Nondiagnostic MRI studies of the spine or skull base. Poor correlation of physical findings with MRI. Contraindications Metallic implants, unless made of titanium. Pacemakers Advantages Major advantages of MR myelography over conventional radiographic myelography include its lack of ionizing radiation, noninvasive nature, and lack of need for intrathecal contrast material. See also Myelography MRI References Magnetic resonance imaging Spinal cord
Magnetic resonance myelography
[ "Chemistry" ]
472
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
56,454,187
https://en.wikipedia.org/wiki/Bundle%20of%20principal%20parts
In algebraic geometry, given a line bundle L on a smooth variety X, the bundle of n-th order principal parts of L is a vector bundle of rank (n + d choose d), where d = dim X, that, roughly, parametrizes n-th order Taylor expansions of sections of L. Precisely, let I be the ideal sheaf defining the diagonal embedding X → X × X, and let p, q : V(I^(n+1)) → X be the restrictions of the two projections X × X → X to the closed subscheme V(I^(n+1)), the n-th infinitesimal neighborhood of the diagonal. Then the bundle of n-th order principal parts is P^n(L) = p_*(q^*L). Then P^0(L) = L, and there is a natural exact sequence of vector bundles 0 → Sym^n(Ω_X) ⊗ L → P^n(L) → P^(n−1)(L) → 0, where Ω_X is the sheaf of differential one-forms on X. See also Linear system of divisors (bundles of principal parts can be used to study the osculating behaviors of a linear system.) Jet (mathematics) (a closely related notion) References Algebraic geometry
Bundle of principal parts
[ "Mathematics" ]
161
[ "Fields of abstract algebra", "Algebraic geometry" ]
56,456,637
https://en.wikipedia.org/wiki/Dirubidium
Dirubidium is a molecular substance containing two atoms of rubidium found in rubidium vapour. Dirubidium has two active valence electrons. It is studied both in theory and with experiment. The rubidium trimer has also been observed. Synthesis and properties Dirubidium is produced when rubidium vapour is chilled. The enthalpy of formation (ΔfH°) in the gas phase is 113.29 kJ/mol. In practice, an oven heated to 600 to 800 K with a nozzle can squirt out vapour that condenses into dimers. The proportion of Rb2 in rubidium vapour varies with its density, which depends on the temperature. At 200 °C the partial pressure of Rb2 is only 0.4%, at 400 °C it constitutes 1.6% of the pressure, and at 677 °C the dimer has 7.4% of the vapour pressure (13.8% by mass). The rubidium dimer has been formed on the surface of helium nanodroplets when two rubidium atoms combine to yield the dimer: Rb + Rb → Rb2. Rb2 has also been produced in a solid helium matrix under pressure. Ultracold rubidium atoms can be stored in a magneto-optic trap and then photoassociated to form molecules in an excited state, vibrating at a rate so high they barely hang together. In solid matrix traps, Rb2 can combine with the host atoms when excited to form exciplexes, for example Rb2(3Πu)He2 in a solid helium matrix. Ultracold rubidium dimers are being produced in order to observe quantum effects on well-defined molecules. It is possible to produce a set of molecules all rotating on the same axis in the lowest vibrational level. Spectrum Dirubidium has several excited states, and spectral bands occur for transitions between these levels, combined with vibration. It can be studied by its absorption lines, or by laser-induced fluorescence. Laser-induced fluorescence can reveal the lifetimes of excited states. In the absorption spectrum of rubidium vapour, Rb2 has a major effect. Single atoms of rubidium in the vapour cause lines in the spectrum, but the dimer causes wider bands to appear. The strongest absorption, between 640 and 730 nm, makes the vapour almost opaque from 670 to 700 nm, wiping out the far red end of the spectrum. This is the band due to the X→B transition. From 430 to 460 nm there is a shark-fin-shaped absorption feature due to X→E transitions. Another shark-fin-like feature around 475 nm is due to X→D transitions. There is also a small hump with peaks at 601, 603 and 605.5 nm due to 1→3 triplet transitions, connected to the diffuse series. There are a few more small absorption features in the near infrared. There is also a dirubidium cation, Rb2+, with different spectroscopic properties. Bands Molecular constants for excited states The following table has parameters for 85Rb85Rb, the most common isotopologue of the natural element. Related species The other alkali metals also form dimers: dilithium (Li2), Na2, K2, and Cs2. The rubidium trimer has also been observed on the surface of helium nanodroplets. The trimer, Rb3, has the shape of an equilateral triangle, a bond length of 5.52 Å and a binding energy of 929 cm−1. References Rubidium Homonuclear diatomic molecules Allotropes
Dirubidium
[ "Physics", "Chemistry" ]
735
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Materials", "Matter" ]
56,457,036
https://en.wikipedia.org/wiki/Sensors%20and%20Materials
Sensors and Materials is a monthly peer-reviewed open access scientific journal covering all aspects of sensor technology, including materials science as applied to sensors. It is published by Myu Scientific Publishing and the editor-in-chief is Makoto Ishida (Toyohashi University of Technology). The journal was established in 1988 by a group of Japanese academics to promote the publication of research by Asian authors in English. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.2. References External links English-language journals Materials science journals Monthly journals Open access journals Academic journals established in 1988
Sensors and Materials
[ "Materials_science", "Engineering" ]
138
[ "Materials science journals", "Materials science" ]
56,459,938
https://en.wikipedia.org/wiki/Sequence-defined%20polymer
Sequence-defined polymer (Syn. sequence-specific polymer, sequence-ordered polymer) is a uniform macromolecule with an exact chain-length and a perfectly defined sequence of monomers. In other words, each monomer unit is at a defined position in the chain e.g. peptides, proteins, oligonucleotides. Sequence-defined polymers constitute therefore a subclass of the field of sequence-controlled polymers. References Polymers
Sequence-defined polymer
[ "Chemistry", "Materials_science" ]
93
[ "Polymer stubs", "Polymers", "Polymer chemistry", "Organic chemistry stubs" ]
70,646,792
https://en.wikipedia.org/wiki/Direct%20detection%20of%20dark%20matter
Direct detection of dark matter is the science of attempting to directly measure dark matter collisions in Earth-based experiments. Modern astrophysical measurements, such as from the cosmic microwave background, strongly indicate that 85% of the matter content of the universe is unaccounted for. Although the existence of dark matter is widely believed, what form it takes and its precise properties have never been determined. There are three main avenues of research to detect dark matter: attempts to make dark matter in accelerators, indirect detection of dark matter annihilation, and direct detection of dark matter in terrestrial labs. The founding principle of direct dark matter detection is that since dark matter is known to exist in the local universe, as the Earth, Solar System, and the Milky Way Galaxy carve out a path through the universe they must intercept dark matter, regardless of what form it takes. Direct detection of dark matter faces several practical challenges. The theoretical bounds for the supposed mass of dark matter are immense, spanning some 90 orders of magnitude, from roughly a zepto-electronvolt to about that of a solar mass. The lower limit of the dark matter mass is constrained by the knowledge that dark matter exists in dwarf galaxies: any less massive dark matter would have a de Broglie wavelength too long to fit inside observed dwarf galaxies (a rough numerical illustration of this bound follows this passage). On the other end of the spectrum, the upper limit of the dark matter mass is constrained experimentally; gravitational microlensing with the Kepler telescope has been used to search for MACHOs (MAssive Compact Halo Objects). Null results of this search exclude any dark matter candidate more massive than about a solar mass. As a result of this extremely vast parameter space, there exist a wide variety of proposed types of dark matter, in addition to a broad assortment of proposed experiments and methods to detect them. The spectrum of proposed dark matter mass is split into three broad, loosely defined categories as follows: In the range of zepto-electronvolts (zeV) to 1 eV, theories predict a bosonic or field-like dark matter. The primary dark matter candidates in this range are axions, or axion-like particles. From about 1 eV to the Planck mass, dark matter is projected to be fermionic or particle-like. Favorites in this range include WIMPs, thermal relics, and sterile neutrinos. Finally, in the mass range from the Planck mass up to masses on the order of the solar mass, dark matter would be a composite particle. The leading candidates for composite dark matter are primordial black holes. Bosonic / field dark matter Any dark matter candidate with a mass less than approximately 1 eV and greater than 1 zeV is projected to be bosonic, or field-like, as opposed to a more traditional particle. Any lesser mass could not fit its de Broglie wavelength into dwarf galaxies. Axions Axions are theoretical, as yet undiscovered, subatomic particles originally proposed in 1977 to solve inconsistencies in the Standard Model, i.e. the strong CP problem. A consequence of this solution is an axion field, which would in turn imply a cosmological abundance of axions that depends on the mass of the axion. If the axion mass is heavier than 5 μeV/c2, then axions could account for all dark matter phenomena. One of the only experiments to detect axions as dark matter is the Axion Dark Matter Experiment (ADMX).
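The dwarf-galaxy argument for the lower end of this mass range can be illustrated with a short numerical sketch. The velocity (about 10 km/s, typical of dwarf-galaxy velocity dispersions) and the comparison size (about 1 kpc) used below are assumed round numbers for illustration, not values taken from this article.

```python
# Rough order-of-magnitude check of the dwarf-galaxy lower bound on bosonic dark matter mass.
# The assumed velocity (~10 km/s) and galaxy size (~1 kpc) are illustrative round numbers.
h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # J per eV
c = 2.998e8      # m/s
kpc = 3.086e19   # m

def de_broglie_wavelength_kpc(mass_eV, v=1e4):
    """de Broglie wavelength (in kpc) of a particle of given mass (eV/c^2) moving at v (m/s)."""
    m_kg = mass_eV * eV / c**2
    return h / (m_kg * v) / kpc

for m in (1e-22, 1e-21, 1e-19):
    print(f"m = {m:.0e} eV/c^2  ->  wavelength ≈ {de_broglie_wavelength_kpc(m):.2g} kpc")
# A particle much lighter than ~1e-21 eV has a wavelength larger than a ~1 kpc dwarf galaxy,
# which is the essence of the lower bound described above.
```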
Located at the University of Washington, ADMX uses a resonant microwave cavity in a strong magnetic field to convert dark matter into microwave photons by means of the Primakoff effect. Microwave cavities are simple electrical devices that are built to resonate at extremely precise frequencies to create standing microwaves inside of the cavity. ADMX uses this technology to tune their microwave cavity to the resonance of axions located in the Milky Way halo. The purpose of this is to increase the interaction of axions with the high strength eight Tesla magnetic field present to better facilitate the Primakoff effect. The Primakoff effect is an as yet un proven mechanism for the production of mesons from high energy interactions of photons with a nucleus. Axions qualify for this interaction, meaning that infamously undetectable dark matter could theoretically be converted into mundane photons. Although ADMX has yet to detect dark matter, its capabilities are promising. The experiment is capable of probing previously difficult to reach sections of the parameter space. The primary downside of the ADMX experiment is that the microwave cavity requires very fine tuning, meaning only a minuscule amount of the parameter space is probed at a time. Weakly Interacting slim particles (WISPs) Weakly Interacting Slim Particles (WISPs) are a broader category of particles with extremely small masses and interaction cross sections, of which axions are a member. Active neutrinos are the only WISP confirmed to exist, although they have been definitively ruled out as a dark matter candidate. In common usage, WISP is generally used to refer to any non axion ultra light dark matter particle. Leading theories suggest that such particles would interact with the standard model largely through coupling to photons, and would survive to the modern era after creation in the early universe. Fermionic / particle dark matter Dark matter masses between 1 eV and the Planck Mass are hypothesized to be fermionic particles. Weakly interacting massive particles Weakly Interacting Massive Particles (WIMPs) are a broad category of theoretical particles, that interact not at all or very weakly with all forces except gravity. WIMPs are a member of a broader category of particles called thermal relics, particles which were created thermally in the early universe, as opposed to being created non-thermally later during a phase transition. As with all dark matter candidates, interaction probability is extraordinarily low, leading to a variety of techniques to be developed. Experimental techniques Direct detection of dark matter is based upon the premise that since it is known that dark matter exists in some form, Earth must intercept some as it carves out a path through the universe. Direct detection experiments attempt to create highly sensitive systems capable of detecting these rare and weak events. Cryogenic crystal detectors Cryogenic Crystal Detectors use disks of germanium and silicon cooled to around 50 millikelvin. These disks are coated in either tungsten or aluminum. An interacting WIMP would in theory excite the crystal lattice, sending vibrations to the surface, which is held precisely at its superconductivity threshold. Due to this the coating material's resistivity is highly dependent on heat, enough so that the energy deposited by the vibration is detectable. One such detector is the Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) located at the Gran Sasso National Laboratory in Assergi, Italy. 
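To give a rough sense of the energy scale that cryogenic crystal detectors such as CRESST must resolve, the following sketch applies standard elastic two-body kinematics to an assumed 100 GeV/c2 WIMP scattering off a germanium nucleus at a typical galactic speed; all numbers are illustrative assumptions rather than figures from this article.

```python
# Illustrative maximum nuclear-recoil energy from elastic WIMP scattering,
# using standard two-body kinematics: E_max = 2 * mu^2 * v^2 / m_N,
# where mu is the WIMP-nucleus reduced mass. The WIMP mass and speed below
# are assumed example values.
def max_recoil_energy_keV(m_wimp_GeV, m_nucleus_GeV, v_over_c):
    mu = m_wimp_GeV * m_nucleus_GeV / (m_wimp_GeV + m_nucleus_GeV)  # reduced mass, GeV/c^2
    e_max_GeV = 2.0 * mu**2 * v_over_c**2 / m_nucleus_GeV
    return e_max_GeV * 1e6  # GeV -> keV

m_wimp = 100.0        # assumed WIMP mass, GeV/c^2
m_ge = 0.931 * 72.6   # approximate mass of a germanium nucleus, GeV/c^2
v = 230e3 / 2.998e8   # typical galactic halo speed as a fraction of c

print(f"E_max ≈ {max_recoil_energy_keV(m_wimp, m_ge, v):.0f} keV")  # a few tens of keV
```

Recoil energies of at most a few tens of keV are why such detectors are designed around millikelvin operating temperatures and superconducting thermometry.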
Operating in multiple generations since 2000, CRESST has continually been evolving and improving its sensitivity range, although it has not yet definitively detected dark matter. As a notable side achievement, CRESST was the first experiment to detect the alpha decay of tungsten-180. The most recent generation of CRESST has enhanced its capabilities to detect WIMP dark matter as light as 160 MeV/c2. Noble gas scintillators Noble gas scintillators use the property of certain materials to scintillate, whereby a material absorbs energy from a particle and re-emits that energy as light. Of particular interest for dark matter detection is the use of noble gases, even more specifically liquid xenon. The XENON series of experiments, also located at the Gran Sasso National Lab, is a forefront user of liquid xenon scintillators. Common across all generations of the experiment, the detector consists of a tank of liquid xenon with a gaseous layer on top. At the top and bottom of the detector is a layer of photomultiplier tubes (PMTs). When a dark matter particle collides with the liquid xenon, it rapidly releases a photon which is detected by the PMTs. To cross-reference this data point, an electric field is applied which is sufficiently large to prevent complete recombination of the electrons knocked loose by the interaction. These drift to the top of the detector and are also detected, creating two separate detections for each event. Measuring the time delay between these allows for a complete 3-D reconstruction of the interaction. The detector is also able to discriminate between electronic recoils and nuclear recoils, as the two types of events produce differing ratios of the photon energy and the released electron energy. The most recently completed version of the XENON experiment is XENON1T, which used 3.2 tons of liquid xenon. This experiment produced a then-record limit on the WIMP cross section at a mass of 30 GeV/c2. The most recent iteration in the XENON series is XENONnT, which is currently running with 8 tonnes of liquid xenon. This experiment is projected to be able to probe still smaller WIMP-nucleon cross sections for a 50 GeV/c2 WIMP mass. At this ultra-low cross section, interference from the background neutrino flux is predicted to be problematic. Crystal scintillators Crystal scintillator experiments are a middle ground between cryogenic crystal detectors and noble gas scintillators, using the crystals of the former and the scintillation properties of the latter. One such experiment that uses this technology is the DAMA/LIBRA experiment, once again located in the Gran Sasso National Laboratory in Italy. Uniquely among dark matter experiments, DAMA/LIBRA attempts to measure an annual variation of the flux of dark matter. This concept is born from the knowledge that as the Earth's orbital motion around the Sun comes into and out of alignment with the Sun's motion through the Milky Way, the relative motion of a terrestrial detector with respect to the dark matter halo changes, resulting in a varying flux of dark matter. DAMA/LIBRA has claimed to see such a modulation, although the scientific community as a whole has yet to accept these results as valid. Skeptics of this result claim that it is not due to a variation of the WIMP flux, but rather due to uncontrolled seasonal changes. To test this, other similar experiments, namely the Sodium-iodide with Active Background Rejection (SABRE) experiment, are being built in Gran Sasso, with another installation in Australia.
The purpose of spreading out the experiments across both hemispheres is that if the modulation for the locations is in sync then that would positively indicate a change in the dark matter flux, whereas if the measured variations are six months out of sync, then that would indicate unaccounted for seasonal variations. Bubble chambers Bubble chambers, originally invented in 1952, are largely phased out but still have some use in WIMP dark matter detection. Bubble chambers are filled with superheated liquid held close to its phase transition. When a particle interacts with the superheated liquid the energy it imparts is enough to trigger a phase transition, causing any charged particles to leave an ionization trail of bubbles, which are detected. One such experiment that uses a bubble chamber is PICO, at SNOLAB in Canada. PICO was formed in 2013 as a combination of two previous similar experiments, PICASSO and COUPP. PICO employs a more advanced form of a bubble chamber, using individual droplets of a superheated gas, namely Freon, that are suspended in a gel matrix. The advantage of this setup is that the individual droplets slow down the phase transition, allowing for longer periods of detector activity. PICO currently has a 2-liter and a 60-liter detector, with a new version with a mass in the range of 250-500 liters being planned. Although PICO like all bubble chambers has fantastically low background noise, they are still detecting anomalous background events inconsistent with assumed dark matter characteristics. Additionally PICO was capable of ruling out interactions with unwanted iodine as the cause of the previously mentioned DAMA/LIBRA experiment's claimed dark matter modulation. Sterile neutrinos A Sterile neutrino is a theoretical type of neutrino that interacts only via gravity. The weak force only interacts with particles with left chirality, or left-handed neutrinos. Sterile neutrinos are proposed to be right handed, meaning they would only interact with gravity. Sterile neutrinos are viable dark matter candidates because they only interact via gravity, as is predicted for dark matter. Unfortunately, most current theories predict cold dark matter, meaning dark matter candidates that are non-relativistic. Due to their mass and energy, sterile neutrinos would be likely relativistic and thus count as hot dark matter. Sterile neutrinos could still be a constituent of dark matter, but it is highly unlikely that they are the only component. Composite dark matter Dark matter mass between the Planck Mass and those on the order of the Solar Mass are hypothesized to be macroscopic composite objects. Masses much beyond the solar mass are ruled out observationally by the lack of gravitational microlensing events using the Kepler telescope. Primordial black hole Primordial black holes are black holes that formed very early in the universe, and without the collapse of a star. The theory behind primordial black holes is that in the extremely early universe, under one second, random fluctuations would cause local gravitational collapse into black holes. Since primordial black holes did not form from stellar collapse, they can have masses far below that of a solar mass, ranging from 10 micrograms to many solar masses. However, only primordial black holes with masses above 1011 kg would still exist today, as any less massive would have completely evaporated via Hawking radiation by the modern era. 
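The survival threshold quoted above can be checked with the standard Hawking evaporation estimate; the sketch below uses the textbook formula t ≈ 5120 π G² M³ / (ħ c⁴) for a non-rotating, uncharged black hole and ignores greybody and particle-species corrections.

```python
import math

# Evaporation time of a (non-rotating, uncharged) black hole from the standard
# Hawking estimate t = 5120 * pi * G^2 * M^3 / (hbar * c^4). Illustrates why only
# primordial black holes heavier than roughly 1e11 kg could survive to the present day.
G = 6.674e-11     # m^3 kg^-1 s^-2
hbar = 1.055e-34  # J s
c = 2.998e8       # m/s
age_universe_s = 13.8e9 * 3.156e7  # ~13.8 Gyr in seconds

def evaporation_time_s(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

for m in (1e11, 2e11, 1e12):
    t = evaporation_time_s(m)
    print(f"M = {m:.0e} kg: t ≈ {t / age_universe_s:.1f} times the age of the universe")
```

Masses near 1 to 2 × 10^11 kg come out with lifetimes comparable to the age of the universe, consistent with the rough 10^11 kg survival threshold mentioned above.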
Primordial black holes are plausible dark matter candidates, however arguments based upon their observed abundance cast doubt on their ability to be the only constituent of dark matter. Conversely, other research groups claim that gravitational waves detected by LIGO/VIRGO are consistent with primordial black holes making up 100% of dark matter, given if a relatively large amount of them were clustered within the halos of dwarf galaxies. An additional inconsistency with this claim is that the primordial black hole mass claimed could overlap with excluded mass range from Kepler micro-lensing. The GAIA spacecraft, launched by the European Space Agency is tasked with creating the largest and most detailed map of space and all objects within it ever created, including possible composite dark matter candidates. Although not specifically searching for dark matter, it is possible that dark matter scientists will be able to find dark matter among the 1 billion objects it will catalogue during its lifetime. References Dark matter Physics beyond the Standard Model Observational cosmology
Direct detection of dark matter
[ "Physics", "Astronomy" ]
3,033
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Unsolved problems in physics", "Particle physics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
70,650,257
https://en.wikipedia.org/wiki/Covalent%20adaptable%20network
Covalent adaptable networks (CANs) are a type of polymer material that closely resemble thermosetting polymers (thermosets). However, they are distinguished from thermosets by the incorporation of dynamic covalent chemistry into the polymer network. When a stimulus (for example heat, light, pH, ...) is applied to the material, these dynamic bonds become active and can be broken or exchanged with other pending functional groups, allowing the polymer network to change its topology. This introduces reshaping, (re)processing and recycling into thermoset-like materials. Background Historically, polymer materials have always been subdivided in two categories based on their thermomechanical behaviour. Thermoplastic polymer materials melt upon heating and become viscous liquids, whereas thermosetting polymer materials remain solid as a result of cross-linking. Thermoplastics consist of long polymer chains that are stiff at service temperatures but become softer with increasing temperature. At low temperatures, the molecular motion of the polymer chains is limited due to chain-entanglements, resulting in a hard and glassy material. Increasing the temperature will lead to a transition from a hard to a soft material at the glass transition temperature (Tg) yielding a visco-elastic liquid. In the case of (semi-)crystalline polymer materials, viscous flow is achieved when the melting point (Tm) is reached and the intermolecular forces in the ordered crystalline domain are overcome. Thermoplastics regain their solid properties upon cooling and can thus be reshaped by polymer processing methods such as extrusion and injection moulding and they can also be recycled. Examples of thermoplastic polymers are polystyrene, polycarbonate, polyethylene, nylon, Acrylonitrile butadiene styrene (ABS), etc. Thermosets, on the other hand, are three-dimensional networks that are formed through permanent chemical cross-linking of multifunctional compounds. This is an irreversible process that results in infusible and insoluble polymer networks with superior properties compared to most thermoplastics. When a thermoset is exposed to heat, it maintains its dimensional stability and thus cannot be reshaped. These polymer materials are generally used for demanding applications (e.g. wind turbines, aerospace, etc.) that require chemical resistance, dimensional stability and good mechanical properties. Typical thermosetting materials include epoxy resins, polyester resins, polyurethanes, etc. In the framework of sustainability, the combination of the mechanical properties of thermosets with the reprocessability of thermoplastics through the introduction of dynamic bonds has been the topic of numerous research studies. The use of non-covalent interactions such as hydrogen bonding, pi-stacking or crystallization that lead to physical cross-links between polymer chains is one way of introducing dynamic cross-linking. The thermoreversible nature of the physical cross-links results in polymer materials with improved mechanical properties without losing reprocessability. The properties of these physical networks are highly dependent on the used backbone and type of non-covalent interactions, but typically they are brittle at low temperature and become elastic or rubbery above Tg. Upon further heating, the physical cross-links disappear and the material behaves as a visco-elastic liquid, allowing it to be reprocessed. These materials are also known as thermoplastic elastomers. 
Covalent adaptable networks (CANs) instead use dynamic covalent bonds that are able to undergo exchange reactions upon application of an external stimulus, typically heat or light. In absence of a stimulus, these materials behave as thermosets, showing high chemical resistance and dimensional stability, but when the stimulus is applied, the dynamic bonds become activated, enabling the network to rearrange its topology on a molecular level. As a result, these materials are able to undergo permanent deformations, enabling reshaping, reprocessing, self-healing, etc. As such, CANs can be seen as an intermediate bridge between thermosets and thermoplastics. In 2011, the research group of French researcher Ludwik Leibler developed a specific class of CANs based on an associative exchange mechanism (see subsection Classification). By adding a suitable catalyst to epoxy/acid polyester based networks, they were able to prepare a permanent epoxy network that showed a gradual viscosity decrease upon heating. This type of behaviour is typical for vitreous silica and had never before been seen in organic polymer materials. Therefore, the authors introduced the name Vitrimers for these kind of materials. Recent advancements in the field of CANs have shown their potential to someday replace conventional non-recyclable thermosetting materials. The exponential growth of publications involving CANs seen in literature indicate the increasing interest from academia. Additionally, there's also a growing interest in CANs from industry with, for example, the first vitrimer start-up company Mallinda and multiple European Union funded research projects with collaborations between academic and industry partners (such as Vitrimat, PUReSmart and NIPU-EJD). Classification CANs are currently subdivided in two groups, dissociative CANs and associative CANs, based on the underlying mechanism of the bond exchange reactions (i.e. the order in which the bond forming and breaking occurs) and their resulting temperature dependence. Dissociative CANs The exchange mechanism of dissociative CANs requires a bond-breaking event prior to the formation of a new bond (i.e. an elimination/addition pathway). Upon application of a stimulus, the equilibrium shifts to the dissociated state, resulting in a temporarily decreased cross-link density in the network. When a sufficient amount of dynamic bonds dissociate due to the equilibrium being shifted below the gel point, the material will suffer a loss of dimensional stability and show a sudden and drastic viscosity decrease.  After removal of the stimulus, the bonds reform and, in the ideal case, the original cross-link density is restored. This temporary decrease in cross-link density enables very fast topology rearrangements in dissociative CANs, such as viscous flow and stress relaxation, which allows the reprocessing of covalently cross-linked polymer networks. Additionally, dissociative CANs can be solubilized in good solvents. Associative CANs In contrast to dissociative CANs, networks in associative CANs do not depolymerize upon application of a stimulus and maintain a near constant cross-link density. Here, the exchange mechanism relies on the formation of a new bond before fragmentation of another bond (i.e. an addition/elimination pathway). This means that bond exchange occurs via a temporarily more cross-linked intermediate state. However, in practice, this small increase will often be negligible, resulting in a practically constant cross-link density. 
As a result, associative CANs typically remain insoluble in inert solvents, even at elevated temperatures, although it has become apparent that some associative CANs can be dissolved in a good solvent. In the case of Vitrimers, associative exchange is triggered by heat and the viscosity of these materials is controlled by chemical exchange reactions, leading to a linear dependence of viscosity with inverse temperature according to the Arrhenius law. The decreased viscosity caused by rapid dynamic bond exchanges enables stress relaxation and network topology rearrangements in these materials. Applications Recycling of PU foams Polyurethane (PU) foams are highly versatile engineering materials used for a wide range of applications such as mattresses, insulation, automotive, footwear and construction materials. Conventional PU foams are cross-linked materials or thermosets. PU foams can either be mechanically recycled (where PU foams are grinded and used as fillers), or chemically recycled (where PU foams are downcycled into polyols or other monomeric components via chemical degradation). However, most PU foams end up on landfills. Currently, CANs are being investigated as a replacement for conventional foams, which would allow for easier recyclability of PU waste. For example, it was shown recently that the incorporation of disulfide bonds in PU foams led to their malleability and reprocessability into elastomers. Another possible solution is the addition of catalyst to post-consumer PU, which activates the exchange of urethane bonds and makes them reprocessable . Self-healing materials Polymer networks are susceptible to damage during their use. Self-healing is a promising tool to increase the lifetime and performance of the polymer, while simultaneously reducing plastic waste. Self-healing can operate via extrinsic or intrinsic mechanisms. Extrinsic systems rely on the incorporation of small capsules containing healing agents that get released during damage/cracking and heal the material, while intrinsic systems are inherently able to restore their integrity through, for example, incorporation of dynamic bonds into the polymer network. The most known example of intrinsic self-healing is thermally healable crosslinked networks with Diels-Alder adducts, but various other chemistries have also been investigated, including transesterification, olefin metathesis, and alkoxyamine chemistry. Another promising strategy involves light-activated systems, such as photothermal and photoreversible chemistry. For photothermal systems, the healing is triggered by heating, even if light is the transient stimulus that makes the healing possible. Dynamic exchange reactions are also often activated by direct infrared heating with the assistance of photothermal fillers (e.g. carbon black, graphene, and gold nanoparticles). Self-healing materials based on direct photoreversible chemistry in principle don't involve heating. Some examples of this include the systems based on photoreversible cycloaddition that require ultraviolet (UV) irradiation, as well as photo-triggered radical reshufflings of sulfur-based dynamic covalent bonds. Nanocomposites Thermosets are currently in high demand for high-performance composites that are heavily needed in lightweight engineering and ultrahigh-performance mechanical parts. 
Applications include: packaging, remediation, energy storage, electromagnetic absorption, sensing and actuation, transportation and safety, defense systems, thermal flow control, information industry, catalysts, cosmetics, sports, etc. Such materials consist of a “soft” polymer phase that is combined with nanoparticles dispersed in the polymer phase. The shape of these nanoparticles can vary wildly, from rods to spheres to platelets, to fibres, etc. The unique thermo-responsive properties of CANs, induced by bond exchange kinetics, open interesting possibilities for the introduction of property switches based on various external effects. For example, the addition of a resistive heater for electrothermal conversion (e.g. single walled carbon nanotubes) can allow for an on-demand mechanical property switch via an electric current. Alternatively, by adding a filler like graphene oxide, light irradiation can be used for an induced photo-thermal effect allowing for switching of the mechanical properties as a response to light-irradiation. Other interesting nanoparticles for the application in CANs include clay nanosheets, graphene and cellulose. 3D printing In recent years, 3D printing, or additive manufacturing (AM), saw rapid developments as the technique became more and more popular. Currently, plastics are the most common raw material used for 3D printing due to their wide availability, diversity and light weight. The versatility of AM and its significant development resulted in its use for many applications ranging from manufacturing and medical sectors to the custom art and design sector. With the market of 3D printing expected to grow even further in the coming years, the use of CANs as a resource for AM is under investigation as a replacement for traditional thermosets, which could make up 22% of the global market for AM by the end of 2029. By replacing traditional thermoset ink with CAN-based inks, complicated 3D geometries can still be printed that behave like traditional thermosets with excellent mechanical properties at service conditions, but can later also be recycled into new ink for the next round of 3D printing. One example involved the 3D printing of an epoxy ink which is able to undergo transesterification reactions after printing. During the printing cycle, the ink is first slightly cured before being printed at high temperature into the desired 3D structure, and followed by a second curing step in an oven after printing. The printed epoxy parts can then be recycled by dissolving in ethylene glycol at high temperature and reused as ink in a new printing cycle. Chemistries used in CANs Various dynamic chemistries have already been incorporated in CANs; some of the more notable ones include transesterification, Diels-Alder exchange, imine metathesis, disulfide exchange, transamination of vinylogous urethanes, transcarbamoylation of urethanes, olefin metathesis, and trans-N-alkylation of 1,2,3-triazolium salts. References Polymers
Covalent adaptable network
[ "Chemistry", "Materials_science" ]
2,765
[ "Polymers", "Polymer chemistry" ]
70,660,540
https://en.wikipedia.org/wiki/Alternative%20abiogenesis%20scenarios
A scenario is a set of related concepts pertinent to the origin of life (abiogenesis), such as the iron-sulfur world. Many alternative abiogenesis scenarios have been proposed by scientists in a variety of fields from the 1950s onwards in an attempt to explain how the complex mechanisms of life could have come into existence. These include hypothesized ancient environments that might have been favourable for the origin of life, and possible biochemical mechanisms. A scenario The biochemist Nick Lane has proposed a possible scenario for the origin of life that integrates much of the available evidence from biochemistry, geology, phylogeny, and experimentation: Environments Many environments have been proposed for the origin of life. Fluctuating salinity: dilute and dry-down Harold Blum noted in 1957 that if proto-nucleic acid chains spontaneously form duplex structures, then there is no way to dissociate them. The Oparin-Haldane hypothesis addresses the formation, but not the dissociation, of nucleic acid polymers and duplexes. However, nucleic acids are unusual because, in the absence of counterions (low salt) to neutralize the high charges on opposing phosphate groups, the nucleic acid duplex dissociates into single chains. Early tides, driven by a close moon, could have generated rapid cycles of dilution (high tide, low salt) and concentration (dry-down at low tide, high salt) that exclusively promoted the replication of nucleic acids through a process dubbed tidal chain reaction (TCR). This theory has been criticized on the grounds that early tides may not have been so rapid, although regression from current values requires an Earth–Moon juxtaposition at around two Ga, for which there is no evidence, and early tides may have been approximately every seven hours. Another critique is that only 2–3% of the Earth's crust may have been exposed above the sea until late in terrestrial evolution. The tidal chain reaction theory has mechanistic advantages over thermal association/dissociation at deep-sea vents because it requires that chain assembly (template-driven polymerization) takes place during the dry-down phase, when precursors are most concentrated, whereas thermal cycling needs polymerization to take place during the cold phase, when the rate of chain assembly is lowest and precursors are likely to be more dilute. Hot freshwater lakes Jack W. Szostak suggested that geothermal activity provides greater opportunities for the origination of life in open lakes where there is a buildup of minerals. In 2010, based on spectral analysis of sea and hot mineral water, Ignat Ignatov and Oleg Mosin demonstrated that life may have predominantly originated in hot mineral water. Hot mineral water that contains hydrogen carbonate and calcium ions has the most optimal range. This case is similar to the origin of life in hydrothermal vents, but with hydrogen carbonate and calcium ions in hot water. At a pH of 9–11, the reactions can take place in seawater. According to Melvin Calvin, certain reactions of condensation-dehydration of amino acids and nucleotides in individual blocks of peptides and nucleic acids can take place in the primary hydrosphere with pH 9–11 at a later evolutionary stage. Some of these compounds like hydrocyanic acid (HCN) have been proven in the experiments of Miller. This is the environment in which the stromatolites have been created. David Ward described the formation of stromatolites in hot mineral water at the Yellowstone National Park. 
In 2011, Tadashi Sugawara created a protocell in hot water. Geothermal springs Bruce Damer and David Deamer argue that cell membranes cannot be formed in salty seawater, and must therefore have originated in freshwater environments like pools replenished by a combination of geothermal springs and rainfall. Before the continents formed, the only dry land on Earth would be volcanic islands, where rainwater would form ponds where lipids could form the first stages towards cell membranes. During multiple wet-dry cycles, biopolymers would be synthesized and are encapsulated in vesicles after condensation. Zinc sulfide and manganese sulfide in these ponds would have catalyzed organic compounds by abiotic photosynthesis. Experimental research at geothermal springs successfully synthesized polymers and were encapsulated in vesicles after exposure to UV light and multiple wet-dry cycles. At temperatures of 60 to 80 °C at geothermal fields, biochemical reactions can occur. These predecessors of true cells are assumed to have behaved more like a superorganism rather than individual structures, where the porous membranes would house molecules which would leak out and enter other protocells. Only when true cells had evolved would they gradually adapt to saltier environments and enter the ocean. 6 of the 11 biochemical reactions of the rTCA cycle can occur in hot metal-rich acidic water which suggests metabolic reactions might have originated in this environment, this is consistent with the enhanced stability of RNA phosphodiester, aminoacyl-tRNA bonds, and peptides in acidic conditions. Cycling between supercritical and subcritical CO2 at tectonic fault zones might have led to peptides integrating with and stabilizing lipid membranes. This is suggested to have driven membrane protein evolution, as it shown that a selected peptide (H-Lys-Ser-Pro-Phe-Pro-Phe-Ala-Ala-OH) causes the increase of membrane permeability to water. David Deamer and Bruce Damer states that the prebiotic chemistry does not require ultraviolet irradiation as the chemistry could also have occurred under shaded areas that protected biomolecules from photolysis. Deep sea alkaline vents Nick Lane believes that no known life forms could have utilized zinc-sulfide based photosynthesis, lightning, volcanic pyrite synthesis, or UV radiation as a source of energy. Rather, he instead suggests that deep sea alkaline vents is more likely to have been a source energy for early cellular life. Serpentinization at alkaline hydrothermal vents produce methane and ammonia. Mineral particles that have similar properties to enzymes at deep sea vents would catalyze organic compounds out of dissolved CO2 within seawater. Porous rock might have promoted condensation reactions of biopolymers and act as a compartment of membranous structures, however it is unknown about how it could promote coding and metabolism. Acetyl phosphate, which is readily synthesized from thioacetate, can promote aggregation of adenosine monophosphate of up to 7 monomers which is considered energetically favored in water due to interactions between nucleobases. Acetyl phosphate can stabilize aggregation of nucleotides in the presence of Na+ and could possibly promote polymerization at mineral surfaces or lower water activity. An external proton gradient within a membrane would have been maintained between the acidic ocean and alkaline seawater. The descendants of the last universal common ancestor, bacteria and archaea, were probably methanogens and acetogens. 
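A back-of-the-envelope estimate shows how much chemiosmotic energy a natural pH difference of the kind described above could, in principle, supply; the pH values and temperature in the sketch are assumed, illustrative numbers, not measurements cited in this article.

```python
# Free energy available per mole of protons crossing a membrane that separates
# fluids of different pH (ignoring any electrical potential difference):
# dG = 2.303 * R * T * delta_pH. The pH values (acidic ocean ~6, alkaline vent
# fluid ~10) and the temperature below are assumed, illustrative numbers only.
R = 8.314       # J mol^-1 K^-1
T = 298.0       # K (assumed)
pH_ocean = 6.0  # assumed early-ocean value
pH_vent = 10.0  # assumed alkaline vent fluid value

delta_pH = pH_vent - pH_ocean
dG = 2.303 * R * T * delta_pH  # J per mole of protons
print(f"~{dG / 1000:.0f} kJ per mole of protons from a {delta_pH:.0f}-unit pH gradient")
# For comparison, ATP synthesis requires roughly 30-50 kJ/mol under cellular conditions.
```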
The earliest microfossils, dated to be 4.28 to 3.77 Ga, were found at hydrothermal vent precipitates. These microfossils suggest that early cellular life began at deep sea hydrothermal vents. Exergonic reactions at these environments could have provided free energy that promoted chemical reactions conducive to prebiotic biomolecules. Nonenzymatic reactions of glycolysis and the pentose phosphate pathway can occur in the presence of ferrous iron at 70 °C, the reactions produce erythrose 4-phosphate, an amino acid precursor and ribose 5-phosphate, a nucleotide precursor. Pyrimidines are shown to be synthesized from the reaction between aspartate and carbamoyl phosphate at 60 °C and in the presence of metals, it is suggested that purines could be synthesized from the catalysis of metals. Adenosine monophosphate are also shown to be synthesized from adenine, monopotassium phosphate or pyrophosphate, and ribose at silica at 70 °C. Reductive amination and transamination reactions catalyzed by alkaline hydrothermal vent mineral and metal ions produce amino acids. Long chain fatty acids can be derived from formic acid or oxalic acid during Fischer-Tropsch-type synthesis. Carbohydrates containing an isoprene skeleton can be synthesized from the formose reaction. Isoprenoids incorporated into fatty acid vesicles can stabilize the vesicles, which are suggested to have driven the divergence of bacterial and archaeal lipids. Volcanic ash in the ocean Geoffrey W. Hoffmann has argued that a complex nucleation event as the origin of life involving both polypeptides and nucleic acid is compatible with the time and space available in the primary oceans of Earth. Hoffmann suggests that volcanic ash may provide the many random shapes needed in the postulated complex nucleation event. This aspect of the theory can be tested experimentally. Gold's deep-hot biosphere In the 1970s, Thomas Gold proposed the theory that life first developed not on the surface of the Earth, but several kilometers below the surface. It is claimed that the discovery of microbial life below the surface of another body in our Solar System would lend significant credence to this theory. Radioactive beach hypothesis Zachary Adam claims that tidal processes that occurred during a time when the Moon was much closer may have concentrated grains of uranium and other radioactive elements at the high-water mark on primordial beaches, where they may have been responsible for generating life's building blocks. According to computer models, a deposit of such radioactive materials could show the same self-sustaining nuclear reaction as that found in the Oklo uranium ore seam in Gabon. Such radioactive beach sand might have provided sufficient energy to generate organic molecules, such as amino acids and sugars from acetonitrile in water. Radioactive monazite material also has released soluble phosphate into the regions between sand-grains, making it biologically "accessible." Thus amino acids, sugars, and soluble phosphates might have been produced simultaneously, according to Adam. Radioactive actinides, left behind in some concentration by the reaction, might have formed part of organometallic complexes. These complexes could have been important early catalysts to living processes. 
John Parnell has suggested that such a process could provide part of the "crucible of life" in the early stages of any early wet rocky planet, so long as the planet is large enough to have generated a system of plate tectonics which brings radioactive minerals to the surface. As the early Earth is thought to have had many smaller plates, it might have provided a suitable environment for such processes. The hypercycle In the early 1970s, Manfred Eigen and Peter Schuster examined the transient stages between the molecular chaos and a self-replicating hypercycle in a prebiotic soup. In a hypercycle, the information storing system (possibly RNA) produces an enzyme, which catalyzes the formation of another information system, in sequence until the product of the last aids in the formation of the first information system. Mathematically treated, hypercycles could create quasispecies, which through natural selection entered into a form of Darwinian evolution. A boost to hypercycle theory was the discovery of ribozymes capable of catalyzing their own chemical reactions. The hypercycle theory requires the existence of complex biochemicals, such as nucleotides, which do not form under the conditions proposed by the Miller–Urey experiment. Iron–sulfur world In the 1980s, Wächtershäuser and Karl Popper postulated the iron–sulfur world hypothesis for the evolution of pre-biotic chemical pathways. It traces today's biochemistry to primordial reactions which synthesize organic building blocks from gases. Wächtershäuser systems have a built-in source of energy: iron sulfides such as pyrite. The energy released by oxidising these metal sulfides can support synthesis of organic molecules. Such systems may have evolved into autocatalytic sets constituting self-replicating, metabolically active entities predating modern life forms. Experiments with sulfides in an aqueous environment at 100 °C produced a small yield of dipeptides (0.4% to 12.4%) and a smaller yield of tripeptides (0.10%). However, under the same conditions, dipeptides were quickly broken down. Several models postulate a primitive metabolism, allowing RNA replication to emerge later. The centrality of the Krebs cycle (citric acid cycle) to energy production in aerobic organisms, and in drawing in carbon dioxide and hydrogen ions in biosynthesis of complex organic chemicals, suggests that it was one of the first parts of the metabolism to evolve. Concordantly, geochemists Szostak and Kate Adamala demonstrated that non-enzymatic RNA replication in primitive protocells is only possible in the presence of weak cation chelators like citric acid. This provides further evidence for the central role of citric acid in primordial metabolism. Russell has proposed that "the purpose of life is to hydrogenate carbon dioxide" (as part of a "metabolism-first", rather than a "genetics-first", scenario). The physicist Jeremy England has argued from general thermodynamic considerations that life was inevitable. An early version of this idea was Oparin's 1924 proposal for self-replicating vesicles. In the 1980s and 1990s came Wächtershäuser's iron–sulfur world theory and Christian de Duve's thioester models. More abstract and theoretical arguments for metabolism without genes include Freeman Dyson's mathematical model and Stuart Kauffman's collectively autocatalytic sets in the 1980s. Kauffman's work has been criticized for ignoring the role of energy in driving biochemical reactions in cells. 
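The cyclic catalytic coupling at the heart of the hypercycle idea described above can be sketched numerically; in the minimal model below, each of three species is replicated at a rate proportional to the concentration of the previous member of the cycle, with a uniform dilution term. The rate constants, initial concentrations, and step size are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch of an elementary hypercycle: species x_i is replicated at a rate
# proportional to the concentration of the previous species in the cycle, with a common
# dilution term keeping the total concentration constant (replicator-style dynamics).
k = np.array([1.0, 1.2, 0.8])   # catalytic rate constants (arbitrary)
x = np.array([0.6, 0.3, 0.1])   # initial relative concentrations (sum to 1)
dt = 0.01

for step in range(20000):
    growth = k * x * np.roll(x, 1)  # species i is catalyzed by species i-1 (cyclically)
    phi = growth.sum()              # mean growth, removed uniformly to keep sum(x) = 1
    x = x + dt * (growth - phi * x)

print(np.round(x, 3))  # all three members persist at non-zero concentrations
```

In this small example all three members settle at non-zero concentrations, illustrating the cooperative coexistence that distinguishes a hypercycle from simple competitive replicators.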
A multistep biochemical pathway like the Krebs cycle did not just self-organize on the surface of a mineral; it must have been preceded by simpler pathways. The Wood–Ljungdahl pathway is compatible with self-organization on a metal sulfide surface. Its key enzyme unit, carbon monoxide dehydrogenase/acetyl-CoA synthase, contains mixed nickel-iron-sulfur clusters in its reaction centers and catalyzes the formation of acetyl-CoA. However, prebiotic thiolated and thioester compounds are thermodynamically and kinetically unlikely to accumulate in the presumed prebiotic conditions of hydrothermal vents. One possibility is that cysteine and homocysteine may have reacted with nitriles from the Strecker reaction, forming catalytic thiol-rich polypeptides. It has been suggested that the iron-sulfur world hypothesis and RNA world hypothesis are not mutually exclusive as modern cellular processes do involve both metabolites and genetic molecules. Zinc world Armen Mulkidjanian's zinc world (Zn-world) hypothesis extends Wächtershäuser's pyrite hypothesis. The Zn-world theory proposes that hydrothermal fluids rich in H2S interacting with cold primordial ocean (or Darwin's "warm little pond") water precipitated metal sulfide particles. Oceanic hydrothermal systems have a zonal structure reflected in ancient volcanogenic massive sulfide ore deposits. They reach many kilometers in diameter and date back to the Archean. Most abundant are pyrite (FeS2), chalcopyrite (CuFeS2), and sphalerite (ZnS), with additions of galena (PbS) and alabandite (MnS). ZnS and MnS have a unique ability to store radiation energy, e.g. from ultraviolet light. When replicating molecules were originating, the primordial atmospheric pressure was high enough (>100 bar) to precipitate near the Earth's surface, and ultraviolet irradiation was 10 to 100 times more intense than now; hence the photosynthetic properties mediated by ZnS provided the right energy conditions for the synthesis of informational and metabolic molecules and the selection of photostable nucleobases. The Zn-world theory has been filled out with evidence for the ionic constitution of the interior of the first protocells. In 1926, the Canadian biochemist Archibald Macallum noted the resemblance of body fluids such as blood and lymph to seawater; however, the inorganic composition of all cells differ from that of modern seawater, which led Mulkidjanian and colleagues to reconstruct the "hatcheries" of the first cells combining geochemical analysis with phylogenomic scrutiny of the inorganic ion requirements of modern cells. The authors conclude that ubiquitous, and by inference primordial, proteins and functional systems show affinity to and functional requirement for K+, Zn2+, Mn2+, and . Geochemical reconstruction shows that this ionic composition could not have existed in the ocean but is compatible with inland geothermal systems. In the oxygen-depleted, CO2-dominated primordial atmosphere, the chemistry of water condensates near geothermal fields would resemble the internal milieu of modern cells. Therefore, precellular evolution may have taken place in shallow "Darwin ponds" lined with porous silicate minerals mixed with metal sulfides and enriched in K+, Zn2+, and phosphorus compounds. Clay The clay hypothesis was proposed by Graham Cairns-Smith in 1985. It postulates that complex organic molecules arose gradually on pre-existing, non-organic replication surfaces of silicate crystals in contact with an aqueous solution. 
The clay mineral montmorillonite has been shown to catalyze the polymerization of RNA in aqueous solution from nucleotide monomers, and the formation of membranes from lipids. In 1998, Hyman Hartman proposed that "the first organisms were self-replicating iron-rich clays which fixed carbon dioxide into oxalic acid and other dicarboxylic acids. This system of replicating clays and their metabolic phenotype then evolved into the sulfide rich region of the hot spring acquiring the ability to fix nitrogen. Finally phosphate was incorporated into the evolving system which allowed the synthesis of nucleotides and phospholipids." Biochemistry Different forms of life with variable origin processes may have appeared quasi-simultaneously on the early Earth. The other forms may be extinct, having left distinctive fossils through their different biochemistry. Metabolism-like reactions could have occurred naturally in early oceans, before the first organisms evolved. Some of these reactions can produce RNA, and others resemble two essential reaction cascades of metabolism: glycolysis and the pentose phosphate pathway, which provide essential precursors for nucleic acids, amino acids and lipids. Fox proteinoids In trying to uncover the intermediate stages of abiogenesis mentioned by Bernal, Sidney Fox in the 1950s and 1960s studied the spontaneous formation of peptide structures under plausibly early Earth conditions. In one of his experiments, he allowed amino acids to dry out as if puddled in a warm, dry spot in prebiotic conditions. In an experiment to set suitable conditions for life to form, Fox collected volcanic material from a cinder cone in Hawaii. He discovered that the temperature was over 100 °C just beneath the surface of the cinder cone, and suggested that this might have been the environment in which life was created—molecules could have formed and then been washed through the loose volcanic ash into the sea. He placed lumps of lava over amino acids derived from methane, ammonia and water, sterilized all materials, and baked the lava over the amino acids for a few hours in a glass oven. A brown, sticky substance formed over the surface, and when the lava was drenched in sterilized water, a thick, brown liquid leached out. He found that, as they dried, the amino acids formed long, often cross-linked, thread-like, submicroscopic polypeptides. Protein amyloid An origin-of-life theory based on self-replicating beta-sheet structures was put forward by Maury in 2009. The theory suggests that self-replicating and self-assembling catalytic amyloids were the first informational polymers in a primitive pre-RNA world. The main arguments for the amyloid hypothesis are based on the structural stability, autocatalytic and catalytic properties, and evolvability of beta-sheet-based informational systems. Such systems are also error-correcting and chiroselective. First protein that condenses substrates during thermal cycling: thermosynthesis The thermosynthesis hypothesis considers chemiosmosis more basal than fermentation: the ATP synthase enzyme, which sustains chemiosmosis, is the currently extant enzyme most closely related to the first metabolic process. The thermosynthesis hypothesis does not even invoke a pathway: ATP synthase's binding change mechanism resembles a physical adsorption process that yields free energy. The result would be convection which would bring a continual supply of reactants to the protoenzyme. 
The described first protein may be simple in the sense that it requires only a short sequence of conserved amino acid residues, a sequence sufficient for the appropriate catalytic cleft. Pre-RNA world: The ribose issue and its bypass A different type of nucleic acid, such as peptide nucleic acid, threose nucleic acid or glycol nucleic acid, could have been the first to emerge as a self-reproducing molecule, later replaced by RNA. Larralde et al. say that "the generally accepted prebiotic synthesis of ribose, the formose reaction, yields numerous sugars without any selectivity". They conclude that "the backbone of the first genetic material could not have contained ribose or other sugars because of their instability", meaning that the ester linkage of ribose and phosphoric acid in RNA is prone to hydrolysis. Pyrimidine ribonucleosides and nucleotides have been synthesized by reactions which by-pass the free sugars, and are assembled stepwise using nitrogenous or oxygenous chemistries. Sutherland has demonstrated high-yielding routes to cytidine and uridine ribonucleotides from small two- and three-carbon fragments such as glycolaldehyde, glyceraldehyde or glyceraldehyde-3-phosphate, cyanamide and cyanoacetylene. A step in this sequence allows the isolation of enantiopure ribose aminooxazoline if the enantiomeric excess of glyceraldehyde is 60% or greater. This can be viewed as a prebiotic purification step. Ribose aminooxazoline can then react with cyanoacetylene to give alpha cytidine ribonucleotide. Photoanomerization with UV light allows for inversion about the 1' anomeric centre to give the correct beta stereochemistry. In 2009 they showed that the same simple building blocks allow access, via phosphate-controlled nucleobase elaboration, to 2',3'-cyclic pyrimidine nucleotides directly, which can polymerize into RNA. Similar photo-sanitization can create pyrimidine-2',3'-cyclic phosphates. Autocatalysis Autocatalysts are substances that catalyze the production of themselves and therefore are "molecular replicators." The simplest self-replicating chemical systems are autocatalytic, and typically contain three components: a product molecule and two precursor molecules. The product molecule joins the precursor molecules, which in turn produce more product molecules from more precursor molecules. The product molecule catalyzes the reaction by providing a complementary template that binds to the precursors, thus bringing them together (a minimal kinetic sketch of this behaviour is given below). Such systems have been demonstrated both in biological macromolecules and in small organic molecules. It has been proposed that life initially arose as autocatalytic chemical networks. Julius Rebek and colleagues combined amino adenosine and pentafluorophenyl esters with the autocatalyst amino adenosine triacid ester (AATE). One product was a variant of AATE which catalyzed its own synthesis. This demonstrated that autocatalysts could compete within a population of entities with heredity, a rudimentary form of natural selection. Synthesis based on hydrogen cyanide A research project completed in 2015 by John Sutherland and others found that a network of reactions beginning with hydrogen cyanide and hydrogen sulfide, in streams of water irradiated by UV light, could produce the chemical components of proteins and lipids, as well as those of RNA, while not producing a wide range of other compounds. The researchers used the term "cyanosulfidic" to describe this network of reactions. 
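To make the template-autocatalysis idea above more concrete, the following is a minimal kinetic sketch, not a model of any specific experimental system; the rate constant and starting concentrations are arbitrary illustrative values. It integrates the rate law for a reaction in which precursors A and B are joined on a template T to give a second copy of T, showing the characteristic slow start, roughly exponential growth, and saturation once the precursors are exhausted.

```python
# Minimal sketch of template autocatalysis: A + B --T--> 2T, with rate k[A][B][T].
# All numbers are arbitrary illustrative values, not measured constants.
k, dt = 1.0, 0.01
A, B, T = 1.0, 1.0, 1e-3          # initial concentrations
trajectory = []
for _ in range(5000):
    rate = k * A * B * T          # autocatalytic: the rate is proportional to the product itself
    A -= rate * dt
    B -= rate * dt
    T += rate * dt
    trajectory.append(T)

print(f"final template concentration ≈ {T:.3f}")  # approaches ~1.0 as A and B are consumed
```

Because the rate depends on T itself, the trajectory is sigmoidal: almost nothing happens while T is scarce, growth accelerates once a little product exists, and it levels off as the precursors run out.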
Simulated chemical pathways In 2020, chemists described possible chemical pathways from nonliving prebiotic chemicals to complex biochemicals that could give rise to living organisms, based on a new computer program named AllChemy. Viral origin Evidence for a "virus first" hypothesis, which may support theories of the RNA world, was suggested in 2015. One of the difficulties for the study of the origins of viruses is their high rate of mutation; this is particularly the case in RNA retroviruses like HIV. A 2015 study compared protein fold structures across different branches of the tree of life, where researchers can reconstruct the evolutionary histories of the folds and of the organisms whose genomes code for those folds. They argue that protein folds are better markers of ancient events as their three-dimensional structures can be maintained even as the sequences that code for those begin to change. Thus, the viral protein repertoire retain traces of ancient evolutionary history that can be recovered using advanced bioinformatics approaches. Those researchers think that "the prolonged pressure of genome and particle size reduction eventually reduced virocells into modern viruses (identified by the complete loss of cellular makeup), meanwhile other coexisting cellular lineages diversified into modern cells." The data suggest that viruses originated from ancient cells that co-existed with the ancestors of modern cells. These ancient cells likely contained segmented RNA genomes. A computational model (2015) has shown that virus capsids may have originated in the RNA world and served as a means of horizontal transfer between replicator communities. These communities could not survive if the number of gene parasites increased, with certain genes being responsible for the formation of these structures and those that favored the survival of self-replicating communities. The displacement of these ancestral genes between cellular organisms could favor the appearance of new viruses during evolution. Viruses retain a replication module inherited from the prebiotic stage since it is absent in cells. So this is evidence that viruses could originate from the RNA world and could also emerge several times in evolution through genetic escape in cells. Encapsulation without a membrane Polyester droplets Tony Jia and Kuhan Chandru have proposed spontaneously-forming membraneless polyester droplets in early cellularization before the innovation of lipid vesicles. Protein function within and RNA function in the presence of certain polyester droplets was shown to be preserved within the droplets. The droplets have scaffolding ability, by allowing lipids to assemble around them; this may have prevented leakage of genetic materials. Proteinoid microspheres Fox observed in the 1960s that proteinoids could form cell-like structures named "proteinoid microspheres". The amino acids had combined to form proteinoids, which formed small globules. These were not cells; their clumps and chains were reminiscent of cyanobacteria, but they contained no functional nucleic acids or other encoded information. Colin Pittendrigh stated in 1967 that "laboratories will be creating a living cell within ten years", a remark that reflected the typical contemporary naivety about the complexity of cell structures. Jeewanu protocell A further protocell model is the Jeewanu. 
First synthesized in 1963 from simple minerals and basic organics while exposed to sunlight, it is reported to have some metabolic capabilities, the presence of a semipermeable membrane, amino acids, phospholipids, carbohydrates and RNA-like molecules. However, the nature and properties of the Jeewanu remains to be clarified. Electrostatic interactions induced by short, positively charged, hydrophobic peptides containing 7 amino acids in length or fewer can attach RNA to a vesicle membrane, the basic cell membrane. RNA-DNA world In 2020, coevolution of a RNA-DNA mixture based on diamidophosphate was proposed. The mixture of RNA-DNA sequences, called chimeras, have weak affinity and form weaker duplex structures. This is advantageous in an abiotic scenario and these chimeras have been shown to replicate RNA and DNA – overcoming the "template-product" inhibition problem, where a pure RNA or pure DNA strand is unable to replicate non-enzymatically because it binds too strongly to its partners. This could lead to an abiotic cross-catalytic amplification of RNA and DNA. A continuous chemical reaction network in water and under high-energy radiation can generate precursors for early RNA. In 2022, evolution experiments of self-replicating RNA showed how RNA may have evolved to diverse complex molecules in RNA world conditions. The RNA evolved to a "replicator network comprising five types of RNAs with diverse interactions" such as cooperation for replication of other members (multiple coexisting host and parasite lineages). See also RNP world First universal common ancestor References Sources Origin of life
Alternative abiogenesis scenarios
[ "Biology" ]
6,143
[ "Biological hypotheses", "Origin of life" ]
64,776,609
https://en.wikipedia.org/wiki/Construction%20of%20Gothic%20cathedrals
The construction of Gothic cathedrals was an ambitious, expensive, and technically demanding aspect of life in the Late Middle Ages. From the late 11th century until the Renaissance, largely in Western Europe, Gothic cathedral construction required substantial funding, highly skilled workers, and engineering solutions for complex technical problems. Completion of a new cathedral often took at least half a century, yet many took longer or were rebuilt after fires or other damage. Because construction could take so long, many cathedrals were built in stages and reflect different aspects of the Gothic style. Motivation The 11th to 13th century brought unprecedented population growth and prosperity to northern Europe, particularly to the large cities, and particularly to those cities on trading routes. The old Romanesque cathedrals were too small for the population, and city leaders wanted visible symbols of their new wealth and prestige. The frequent fires in old cathedrals were also a reason for constructing a new building, as with Chartres Cathedral, Rouen Cathedral, Bourges Cathedral, and numerous others. Finance Bishops, like Maurice de Sully of Notre-Dame de Paris, usually contributed a substantial sum. Wealthy parishioners were invited to give a percentage of their income or estate in exchange for the right to be buried under the floor of the cathedral. In 1263 Pope Urban IV offered Papal indulgences, or the remission of the temporal effects of sin for one year, to wealthy donors who made large contributions. For less wealthy church members, contributions in kind, such as a few days' labour, the use of their oxen for transportation, or donations of materials were welcomed. The sacred relics of saints held by the cathedrals were displayed to attract pilgrims, who were also invited to make donations. Sometimes relics were taken in a procession to other towns to raise money. The guilds of the various professions in the town, such as the bakers, fur merchants and drapers, frequently made donations, and in exchange small panels of the stained glass windows in the new cathedral windows illustrated their activities. Master builders and masons The key figure in the construction of a cathedral was the master builder or master mason, who was the architect in charge of all aspects of the construction. One example was Gautier de Varinfroy, master builder of Évreux Cathedral. His contract, signed in 1253 with the master of the cathedral and Chapter of Évreux, paid him fifty pounds a year. He was required to live in Évreux, and to never be absent from the construction site for more than two months. Master masons were members of a particularly influential guild, the Corporation of Masons, the best-organized and most secretive of the medieval guilds. The names of the master masons of Early Gothic architecture are sometimes unknown, but later master masons, such as Godwin Gretysd, builder of Westminster Abbey for King Edward the Confessor, and Pierre de Montreuil, who worked on Notre-Dame de Paris and the Abbey of Saint-Denis, became very prominent. Eudes de Montreuil, the master mason for Louis IX of France, advised him on all architectural matters, and accompanied the king on his disastrous Seventh Crusade. The skill of masonry was frequently passed from father to son. A famous family of cathedral-builders was that of Peter Parler, born in 1325, who worked on Prague Cathedral, and was followed in his position by his son and grandson. 
The Parlers' work influenced European cathedrals as far away as Spain. Master masons frequently travelled to see other projects, and consulted with one another on technical issues. They also became wealthy. While the salary of an average mason or carpenter was the equivalent of twelve pounds a year, the master mason William Wynford received the equivalent of three thousand pounds a year. The master mason was responsible for all aspects of the building site, including preparing the plans, selecting the materials, coordinating the work of the craftsmen, and paying the labourers. He also needed a substantial knowledge of Christian theology, as he had to consult regularly with the bishop and canons about the religious functions of the building. The epitaph of the master mason Pierre de Montreuil of Notre-Dame de Paris described him appropriately as a "doctor of stones". The tomb of Hugues Libergier, master mason of Reims Cathedral, also depicts him in the robes of a doctor of theology. Plans Master masons usually first made a model of the building, either of papier-mâché, wood, plaster or stone, to present to the bishop and canons. The plans for certain parts were sometimes drawn or inscribed in full scale on the floor in the crypt or other portion of the worksite, where they could be easily consulted. The original plans of Prague Cathedral were rediscovered in the 19th century and were used to complete the building. Materials Building a cathedral required enormous quantities of stone, as well as timber for the scaffolding and iron for reinforcement. Stone Sometimes stone from earlier buildings was recycled, as at Beauvais Cathedral, but usually new stone had to be quarried, and in most cases the quarries were a considerable distance from the cathedral site. In a few cases, such as Lyon and Chartres cathedrals, the quarries were owned by the cathedral. In other cases, such as Tours Cathedral and Amiens Cathedral, the builders purchased the rights to extract all the stone needed from a quarry for a certain period of time. The stones were usually extracted and roughly trimmed at the quarry, and then taken by road, or preferably shipped, to the building site. For some early English cathedrals, some stone was shipped from Normandy, whose quarries produced an exceptionally fine pale-coloured stone – Caen stone. The preferred building stone in the Île-de-France was limestone. As soon as they were cut, the stones gradually developed a coating of calcination, which protected them. The stone was carved at the quarry so the calcination could develop before being shipped to the building site. When the facade of Notre-Dame de Paris was cleaned of soot and grime in the 1960s, the original white colour was exposed. On land, stones were often moved by oxen; some shipments required as many as twenty teams of two oxen each. The oxen were particularly important in the construction of Laon Cathedral, moving all the stones to the top of a steep hill. They were honoured for their work by statues of sixteen oxen placed on the towers of the cathedral. Each stone at the construction site carried three mason's marks, placed on the side which would not be visible in the finished cathedral. The first indicated its quarry of origin; the second indicated the position of the stone and the direction it should face; and the third was the signature mark of the stone carver, so the master mason could evaluate the quality. 
These marks allowed modern historians to trace the work of individual stone carvers from cathedral to cathedral. Wood An enormous amount of wood was used in Gothic construction for scaffolding, platforms, hoists, and beams. Durable hardwoods such as oak and walnut were often used, which led to a shortage of these trees and eventually to the practice of using softer pine for scaffolds and reusing old scaffolding from worksite to worksite. Iron Iron was also used in early Gothic for the reinforcement of walls, windows and vaults. Because the iron rusted and deteriorated, causing walls to fail, it was gradually replaced by other more durable forms of support, such as flying buttresses. The most visible use of iron was to reinforce the glass of rose windows and other large stained glass windows, making possible their enormous size (fourteen metres in diameter at Strasbourg Cathedral) and their intricate designs. Standardization As the period advanced, the materials gradually became more standardized, and certain parts, such as columns and cornices and building blocks, could be produced in series for use at multiple sites. Full-size templates were used to make complex structures such as vaults or the pieces of ribbed columns, which could be reused at different sites. The result of these efforts was a remarkable degree of precision. The stone columns of the triforium of the apse of Chartres Cathedral have a maximum variation of plus or minus . Excess materials and stone chips were not wasted. Instead of building walls of solid stone, walls were often built as two smooth stone faces with the interior filled with stone rubble. Construction site Cathedrals were traditionally built from east to west. If a new building was replacing an older cathedral, the choir at the east end of the old cathedral was demolished first, to begin construction of the new building, while the nave to the west was left standing for religious services. Once the new choir was completed and sanctified, the rest of the old cathedral was gradually torn down. The walls and pillars, timber scaffolding and roof were built first. Once the roof was in place, and the walls were reinforced with buttresses, the construction of the vaults could begin. One of the most complex steps was the construction of the rib vaults, which covered the nave and choir. Their slender ribs directed the weight of the vaults to thin columns leading down to the large support pillars below. The first step in building a vault was the construction of a wooden scaffold up to the level of the top of the supporting columns. Next, a precise wooden frame was constructed on top of the scaffold in the exact shape of the ribs. The stone segments of the ribs were then carefully laid into the frame and cemented. When the ribs were all in place, the keystone was placed at the apex where they converged. Once the keystone was in place, the ribs could stand alone. Workers then filled in the compartments between the ribs with a thin layer of small fitted pieces of brick or stone, and the wooden frame was removed. Once the compartments were finished, their interior surface, visible from below, was plastered and then painted, and the vault was complete. This process required a team of specialized workers: the hewers, who cut the stone; the posers, who set the stones in place; and the layers, who cemented the pieces together. These craftsmen worked alongside the carpenters who built the complex scaffolds and models. 
Work continued six days a week, from sunrise until sundown, except Sundays and religious holidays. The stones were finished at the site and put into place, following the drawings by the master mason displayed in his workshop on the site, or sometimes inscribed on the floor of the cathedral itself. Some of these plans can still be seen on the floor of Lyon Cathedral. Since the Gothic cathedrals were the tallest buildings built in Europe since the Roman Empire, new technologies were needed to lift the stones up to the highest levels. A variety of cranes were developed. These included the treadmill crane, a type of hoist powered by one or more men walking inside a large treadmill (). The wheel varied in size from , and allowed a single man to hoist a weight of up to . During the winter, the construction on the site was usually shut down. To prevent the rain or snow from damaging the unfinished masonry, it was usually covered with fertilizer. Only the sculptors and stone-cutters, in their workshops, were able to continue work. Workers The stone-cutters, mortar-makers, carpenters and other workers were highly skilled but usually illiterate. They were managed by foremen who reported to the master mason. The foremen used tools such as the compass to measure and enlarge the plans to full size, and levels using lead in glass tubes to assure that the blocks were level. The stone dressers used similar tools to make sure the surfaces were flat and the edges were at precise right angles. Their tools were frequently shown in the medieval miniature drawings of the period (see gallery). Crypt Crypts, with underground vaults, were usually part of the foundation of the building, and were built first. Many Gothic cathedrals, like Notre-Dame de Paris and Chartres, were built on the sites of Romanesque cathedrals, and often used the same foundations and crypt. In Romanesque times the crypt was used to keep sacred relics, and often had its own chapels and, as in the 11th-century crypt of the first Chartres Cathedral, a deep well. The Romanesque crypt of Chartres Cathedral was greatly enlarged in the 11th century; it is U-shaped and long. It survived the fire in the 12th century which destroyed the Romanesque cathedral, and was used as the foundation for the new Gothic cathedral. The walls of the crypt chapels were painted with Gothic murals. The names of the master masons were frequently inscribed on the walls of cathedral crypts. Windows and stained glass The stained glass windows were an essential element of the cathedral, filling the interior with coloured light. They grew larger and larger over the course of the Gothic period, until they filled the entire walls beneath the vaults. In the early Gothic period the windows were relatively small, and the glass was thick and densely coloured, giving the light a mysterious quality which contrasted strongly with the dark interiors. In the later period, the builders installed much larger windows, and frequently used grey-or white coloured glass, or grisaille, which made the interior much brighter. The windows themselves were made by two different groups of craftsmen, usually at different locations. The colored glass was made at workshops located near forests, because an enormous amount of firewood was needed to melt the glass. The molten glass was colored with metal oxides, and then blown into a bubble, which was cut and flattened into small sheets. The glass sheets were then transferred to the workshop of the window-maker, usually close to the cathedral site. 
There a full-size precise drawing of the window was made on a large table, with the colors indicated. The craftsmen cracked off small pieces of colored glass to fill in the design. When it was complete, the pieces of glass were fitted into the slots of thin lead strips, and then the strips were soldered together. The faces and other details were painted onto the glass in vitreous enamel colours, which were fired in a kiln to fuse the paint to the glass. The sections were then fixed into the stone mullions of the window and reinforced with iron bars. In the later Gothic periods, the windows were larger and were painted with more sophisticated techniques. They were sometimes covered with a thin layer of coloured glass, and this layer was carefully scratched to achieve finer shading, giving the images greater realism. This was called flashed glass. Gradually the windows came more and more to resemble paintings, but lost some of the vivid contrast and richness of color of early Gothic glass. Sculpture Sculpture was an essential element of the Gothic cathedral. Its purpose was to illustrate the personages, stories and messages of the Bible to the ordinary church-goers, of whom the great majority were illiterate. Figurative sculpture was common on the tympana of Romanesque churches, but in Gothic architecture it gradually spread across the entire facade and the transepts, and even the interior of the facade. The sculptors did not select the subject matter of their work. Church doctrine specified that the themes would be selected by the church fathers, not the artists. Nonetheless, the Gothic artists over time began to add figures with increasingly realistic features and expressive faces, and gradually the statues became more lifelike and separate from the walls. An early innovative feature used by Gothic sculptors was the statue-column. At Saint-Denis, twenty statues of the apostles supported the central portal, literally portraying them as "pillars of the church." The originals were destroyed during the French Revolution, and only fragments remain. The idea was adapted for the west porch of Chartres Cathedral (about 1145), possibly by the same sculptors. Traces of paint on the sculpture of Gothic cathedral portals indicate that it was likely originally painted in bright colors. This effect has been recreated at Amiens Cathedral using projected light. Towers and bells Towers were an important feature of a Gothic cathedral; they symbolized the aspiration toward heaven. The traditional Gothic arrangement, following Saint-Denis near Paris, was two towers of equal size on the west facade, flanking a porch with three portals. In Normandy and England, a central tower was often added over the meeting point of the transept and the main body of the church, as at Salisbury Cathedral. Since the towers were usually built last, sometimes long after other parts of the building, they were often built entirely or partly in different styles. The south tower of Chartres Cathedral is in large part still the original Romanesque tower, built in the 1140s. The north tower, built at the same time, was struck by lightning and was rebuilt in the 16th century in the Flamboyant style. Towers also had the practical purpose of serving as watchtowers and, more importantly, of housing bells, which chimed the hour and rang to celebrate important events, when the King was in attendance, or for funerals and periods of mourning. 
Notre-Dame de Paris was originally equipped with ten bells, eight in the north tower and two, the largest, in the south tower. The principal bell, or bourdon, called Emmanuel, was installed in the south tower in the 15th century, and is still in place. It required the strength of eleven men, pulling on ropes from a chamber below, to ring that single bell. The ringing of the bells was so loud that the bell-ringers were deafened for several hours afterwards. See also Gothic architecture Gothic cathedrals and churches Notes and citations Bibliography Construction Architectural history Architectural styles European architecture Architecture in England Medieval French architecture
Construction of Gothic cathedrals
[ "Engineering" ]
3,573
[ "Construction", "Architectural history", "Architecture" ]
64,777,898
https://en.wikipedia.org/wiki/Kersti%20Hermansson
Kersti Hermansson (born in 1951) is a Professor of Inorganic Chemistry at Uppsala University. Education and professional career She completed her PhD on "The Electron Distribution in the Bound Water Molecule" in 1984. From 1984 to 1986, she held a postdoctoral fellowship from the Swedish Research Council with Dr. E. Clementi at IBM-Kingston, USA. From 1986 to 1988, she was a högskolelektor (senior lecturer) in Inorganic Chemistry at Uppsala University. In 1988, she became a docent of Inorganic Chemistry at Uppsala University, and in 1996 a biträdande professor (associate professor). Since 2000, she has been a professor of Inorganic Chemistry at Uppsala University. During part of this time (2008–2013), she was also a part-time guest professor at KTH Stockholm. Research Her research focuses on condensed-matter chemistry, including the investigation of chemical bonding and the development of quantum chemical methods. Awards She has received several prizes and distinctions for her research: "Letterstedska priset" from the Swedish Royal Academy of Sciences (KVA) (1987) "Oskarspriset" from Uppsala University (1988) "Norblad-Ekstrand" medal in gold from the Swedish Chemical Society (2003) Member of Kungl. Vetenskapssamhället (Academia regia scientiarum Upsaliensis, KVSU), Uppsala (since 1988) Member of the Royal Society of Sciences (since 2002) Member of the Royal Swedish Academy of Sciences (since 2007) Adjunct professor at Kasetsart University, Bangkok (2005) Honorary guest professor at the Department of Ion Physics and Applied Physics, Innsbruck University (since June 2009) References Academic staff of Uppsala University Quantum chemistry Living people Swedish Royal Academies Kersti Hermansson IBM Fellows 1951 births
Kersti Hermansson
[ "Physics", "Chemistry" ]
350
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Atomic, molecular, and optical physics" ]
64,781,650
https://en.wikipedia.org/wiki/ELMo
ELMo (embeddings from language model) is a word embedding method for representing a sequence of words as a corresponding sequence of vectors. It was created by researchers at the Allen Institute for Artificial Intelligence and the University of Washington, and first released in February 2018. It is a bidirectional LSTM which takes character-level inputs and produces word-level embeddings, trained on a corpus of about 30 million sentences and 1 billion words. The architecture of ELMo accomplishes a contextual understanding of tokens. Deep contextualized word representation is useful for many natural language processing tasks, such as coreference resolution and polysemy resolution. ELMo was historically important as a pioneer of self-supervised generative pretraining followed by fine-tuning, where a large model is trained to reproduce a large corpus, then the large model is augmented with additional task-specific weights and fine-tuned on supervised task data. It was an instrumental step in the evolution towards transformer-based language modelling. Architecture ELMo is a multilayered bidirectional LSTM on top of a token embedding layer. The final representation of each token is formed by concatenating the outputs of the embedding layer and all of the LSTM layers. The input text sequence is first mapped by an embedding layer into a sequence of vectors. Then two parts are run in parallel over it. The forward part is a 2-layer LSTM with 4096 units and 512-dimensional projections, and a residual connection from the first to the second layer. The backward part has the same architecture, but processes the sequence back-to-front. The outputs from all five components (embedding layer, two forward LSTM layers, and two backward LSTM layers) are concatenated and multiplied by a linear matrix ("projection matrix") to produce a 512-dimensional representation per input token. ELMo was pretrained on a text corpus of 1 billion words. The forward part is trained by repeatedly predicting the next token, and the backward part is trained by repeatedly predicting the previous token. After the ELMo model is pretrained, its parameters are frozen, except for the projection matrix, which can be fine-tuned to minimize loss on specific language tasks. This is an early example of the pretrain-fine-tune paradigm. The original paper demonstrated this by improving state of the art on six benchmark NLP tasks. Contextual word representation The architecture of ELMo accomplishes a contextual understanding of tokens. For example, the first forward LSTM of ELMo would process each input token in the context of all previous tokens, and the first backward LSTM would process each token in the context of all subsequent tokens. The second forward and backward LSTM layers build on these first-layer outputs to further contextualize each token. Deep contextualized word representation is useful for many natural language processing tasks, such as coreference resolution and polysemy resolution. For example, consider the sentence "She went to the bank to withdraw money." In order to represent the token "bank", the model must resolve its polysemy in context. The first forward LSTM would process "bank" in the context of "She went to the", which would allow it to represent the word as a location that the subject is going towards. The first backward LSTM would process "bank" in the context of "to withdraw money", which would allow it to disambiguate the word as referring to a financial institution. 
The projection layer then combines the forward and backward representations from both layers, allowing the model to represent "bank" as a financial institution that the subject is going towards. Historical context ELMo is one link in a historical evolution of language modelling. Consider a simple problem of document classification, where we want to assign a label (e.g., "spam", "not spam", "politics", "sports") to a given piece of text. The simplest approach is the "bag of words" approach, where each word in the document is treated independently, and its frequency is used as a feature for classification. This was computationally cheap but ignored the order of words and their context within the sentence. GloVe and Word2Vec built upon this by learning fixed vector representations (embeddings) for words based on their co-occurrence patterns in large text corpora. Like BERT (but unlike static word embeddings such as Word2Vec and GloVe), ELMo word embeddings are context-sensitive, producing different representations for words that share the same spelling. It was trained on a corpus of about 30 million sentences and 1 billion words. Previously, bidirectional LSTMs had been used for contextualized word representation; ELMo applied the idea at a large scale, achieving state-of-the-art performance. After the 2017 publication of the Transformer architecture, the multilayered bidirectional LSTM of ELMo was replaced by a Transformer encoder, giving rise to BERT. BERT has the same pretrain-fine-tune workflow, but uses a Transformer for parallelizable training. References Machine learning Natural language processing Natural language processing software Computational linguistics
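As a rough illustration of the layered bidirectional-LSTM design described in the Architecture section above, the following PyTorch sketch builds a drastically simplified ELMo-like encoder. It is not the original implementation: the real model uses character-level convolutional inputs, 4096-unit LSTMs with 512-dimensional projections and a residual connection, and the class name, dimensions, and vocabulary size below are arbitrary placeholder choices.

```python
import torch
import torch.nn as nn

class TinyELMo(nn.Module):
    """Simplified sketch of an ELMo-style encoder (hypothetical, not the released model)."""
    def __init__(self, vocab_size=10000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)          # token embedding layer
        # Two stacked forward LSTM layers and two stacked backward LSTM layers.
        self.fwd1 = nn.LSTM(dim, dim, batch_first=True)
        self.fwd2 = nn.LSTM(dim, dim, batch_first=True)
        self.bwd1 = nn.LSTM(dim, dim, batch_first=True)
        self.bwd2 = nn.LSTM(dim, dim, batch_first=True)
        # Trainable projection mixing the five per-token representations.
        self.project = nn.Linear(5 * dim, dim)

    def forward(self, token_ids):                            # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                            # (batch, seq_len, dim)
        f1, _ = self.fwd1(x)                                 # left-to-right context, layer 1
        f2, _ = self.fwd2(f1)                                # left-to-right context, layer 2
        x_rev = torch.flip(x, dims=[1])                      # reverse the time axis
        b1, _ = self.bwd1(x_rev)                             # right-to-left context, layer 1
        b2, _ = self.bwd2(b1)                                # right-to-left context, layer 2
        b1 = torch.flip(b1, dims=[1])                        # re-align with forward order
        b2 = torch.flip(b2, dims=[1])
        stacked = torch.cat([x, f1, f2, b1, b2], dim=-1)     # (batch, seq_len, 5*dim)
        return self.project(stacked)                         # (batch, seq_len, dim)

model = TinyELMo()
tokens = torch.randint(0, 10000, (2, 7))                     # toy batch of token ids
print(model(tokens).shape)                                   # torch.Size([2, 7, 512])
```

The two points the sketch tries to capture are that the forward and backward stacks run independently over the sequence, and that the per-token outputs of the embedding layer and all four LSTM layers are concatenated before the trainable projection.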
ELMo
[ "Technology", "Engineering" ]
1,087
[ "Machine learning", "Computational linguistics", "Natural language processing", "Artificial intelligence engineering", "Natural language and computing" ]
64,783,551
https://en.wikipedia.org/wiki/Vadym%20Slyusar
Vadym Slyusar (born 15 October 1964 in the village of Kolotii, Reshetylivka Raion, Poltava region, Ukraine) is a Soviet and Ukrainian scientist, Professor, Doctor of Technical Sciences, Honored Scientist and Technician of Ukraine, and founder of the tensor-matrix theory of digital antenna arrays (DAAs), N-OFDM and other theories in the fields of radar systems, smart antennas for wireless communications and digital beamforming. Scientific results N-OFDM theory In 1992 Vadym Slyusar patented the first optimal demodulation method for N-OFDM signals after the Fast Fourier transform (FFT). The theory of N-OFDM signals began with this patent. In this regard, W. Kozek and A. F. Molisch wrote in 1998, about N-OFDM signals with the sub-carrier spacing , that "it is not possible to recover the information from the received signal, even in the case of an ideal channel." But in 2001 Vadym Slyusar proposed this non-orthogonal frequency digital modulation (N-OFDM) as an alternative to OFDM for communications systems. Slyusar's next publication on the method, in July 2002, predates the conference paper of I. Darwazeh and M.R.D. Rodrigues (September 2003) regarding SEFDM. His description of a method of optimal processing for N-OFDM signals without an FFT of the ADC samples was submitted for publication in October 2003. Slyusar's N-OFDM theory has inspired numerous investigations in this area by other scientists. Tensor-matrix theory of digital antenna arrays In 1996 Slyusar proposed the column-wise Khatri–Rao product to estimate four coordinates of signal sources at a digital antenna array. An alternative concept of the matrix product, which uses row-wise splitting of matrices with a given number of rows (the face-splitting product), was proposed by Slyusar in 1996 as well. From these results the tensor-matrix theory of digital antenna arrays and new matrix operations evolved (such as the block face-splitting product, the generalized face-splitting product, the matrix derivative of the face-splitting product, etc.), which are also used in artificial intelligence and machine learning systems to minimize convolution and tensor sketch operations, in popular natural language processing models, and in hypergraph models of similarity. The face-splitting product and its properties are also used for multidimensional smoothing with P-splines and in the generalized linear array model in statistics for two- and multidimensional approximations of data. Theory of odd-order I/Q demodulators The theory of odd-order I/Q demodulators, proposed by Slyusar in 2014, grew out of his investigations of a tandem scheme of two-stage signal processing for the design of an I/Q demodulator and of the multistage I/Q demodulator concept in 2012. As a result, Slyusar "presents a new class of I/Q demodulators with odd order derived from the even order I/Q demodulator which is characterized by linear phase-frequency relation for wideband signals". Results in other fields of research Slyusar has produced numerous theoretical works that were realized in several experimental radar stations with DAAs, which were successfully tested. He has investigated electrically small antennas and new designs of such antennas, developed the theory of metamaterials, and proposed new ideas for applying augmented reality and artificial intelligence to combat vehicles. Slyusar has 68 patents and 850 publications in the areas of digital antenna arrays for radars and wireless communications. 
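To make the two matrix products mentioned above concrete, here is a small NumPy sketch of their standard definitions (the function names are ad hoc illustrations, not code from Slyusar's publications): the column-wise Khatri–Rao product takes the Kronecker product of corresponding columns, while the face-splitting product takes the Kronecker product of corresponding rows.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri–Rao product: column j of the result is kron(A[:, j], B[:, j])."""
    assert A.shape[1] == B.shape[1], "A and B need the same number of columns"
    return np.einsum('ij,kj->ikj', A, B).reshape(-1, A.shape[1])

def face_splitting(A, B):
    """Face-splitting (row-wise Kronecker) product: row i of the result is kron(A[i, :], B[i, :])."""
    assert A.shape[0] == B.shape[0], "A and B need the same number of rows"
    return np.einsum('ij,ik->ijk', A, B).reshape(A.shape[0], -1)

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(khatri_rao(A, B))      # shape (4, 2): columns are kron of corresponding columns
print(face_splitting(A, B))  # shape (2, 4): rows are kron of corresponding rows
```

The row-wise structure of the face-splitting product is what makes it convenient for per-sample (per-antenna-snapshot) models, since each row of the result depends only on the corresponding rows of the factors.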
Life data 1981–1985 – student at the Orenburg Air Defense Higher Military School. His scientific career began in this period, with his first scientific report published in 1985. June 1992 – defended his dissertation for the candidate degree (Techn. Sci.) at the Council of the Military Academy of Air Defense of the Land Forces (Kyiv). A significant stage in the recognition of his scientific results was the defense of his dissertation for the doctoral degree (Techn. Sci.) in 2000. Professor since 2005; Honored Scientist and Technician of Ukraine in 2008. Since 1996 – works at the Central Scientific Research Institute of Armament and Military Equipment of the Armed Forces of Ukraine (Kyiv). Military rank: Colonel. Since 2003 – participates in Ukraine–NATO cooperation as head of national delegations, a point of contact, and national representative within expert groups of the NATO Conference of National Armaments Directors, and as a technical member of Research Task Groups (RTG) of the NATO Science and Technology Organisation (STO). Since 2009 – member of the editorial board of Izvestiya Vysshikh Uchebnykh Zavedenii. Radioelektronika. Selected awards Honored Scientist and Technician of Ukraine (2008) Soviet and Ukrainian military medals Gallery See also Digital antenna array N-OFDM Face-splitting product of matrices Tensor random projections References External links Personal Website Inventions of Vadym Slyusar – Ukrainian Patents Data Base «Науковці України – еліта держави» (Scientists of Ukraine – Elite of the State), Vol. VI, 2020, p. 216 Who's Who in the World 2013, p. 2233 Who's Who in the World 2014 People from Poltava Oblast Soviet systems scientists Soviet computer scientists Soviet Army officers Soviet Air Defence Force officers Ukrainian colonels Ukrainian electronics engineers Ukrainian mathematicians 20th-century Ukrainian scientists 1964 births Systems engineers Soviet inventors Soviet military engineers Ukrainian military officers Ukrainian inventors Ukrainian computer scientists 21st-century Ukrainian scientists Radar signal processing Living people
Vadym Slyusar
[ "Engineering" ]
1,201
[ "Systems engineers", "Systems engineering" ]
64,785,482
https://en.wikipedia.org/wiki/Blanet
A blanet is a member of a hypothetical class of exoplanets that directly orbit black holes. Blanets are fundamentally similar to other planets; they have enough mass to be rounded by their own gravity, but are not massive enough to start thermonuclear fusion and become stars. In 2019, a team of astronomers and exoplanetologists showed that there is a safe zone around a supermassive black hole that could harbor thousands of blanets in orbit around it. Etymology The team led by Keiichi Wada of Kagoshima University in Japan coined this name for black hole planets. The word is a portmanteau of black hole and planet. Formation Blanets are suspected to form in the accretion disk that orbits a sufficiently large black hole. Possible candidates The unconfirmed extragalactic planet M51-ULS-1b. The unconfirmed planet or brown dwarf, IGR J12580+0134 b, being disrupted by a supermassive black hole. In fiction In the two episodes "The Impossible Planet" and "The Satan Pit" (both 2006) of the British television series Doctor Who, the story takes place on the titular "impossible planet", a barren blanet called Krop Tor orbiting a black hole called K37 Gem 5. In Interstellar (2014), two of the three terrestrial planets orbiting the supermassive black hole Gargantua are blanets in the strict sense. The other one orbits a main-sequence star named Pantagruel. References Exoplanetology Hypothetical planet types Black holes 2020 neologisms
Blanet
[ "Physics", "Astronomy" ]
333
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
69,150,068
https://en.wikipedia.org/wiki/Vaccine%20equity
Vaccine equity means ensuring that everyone in the world has equal access to vaccines. The importance of vaccine equity has been emphasized by researchers and public health experts during the COVID-19 pandemic but is relevant to other illnesses and vaccines as well. Historically, world-wide immunization campaigns have led to the eradication of smallpox and significantly reduced polio, measles, tuberculosis, diphtheria, whooping cough, and tetanus. There are important reasons to establish mechanisms for global vaccine equity. Multiple factors support the emergence and spread of pandemics, not least the ability of people to travel long distances and widely transmit viruses. A virus that remains in circulation somewhere in the world is likely to spread and recur in other areas. The more widespread a virus is, and the larger and more varied the population it affects, the more likely it is to evolve more transmissible, more virulent, and more vaccine resistant variants. Vaccine equity can be essential to stop both the spread and the evolution of a disease. Ensuring that all populations receive access to vaccines is a pragmatic means towards achieving global public health. Failing to do so increases the likelihood of further waves of a disease. Infectious diseases are disproportionately likely to affect those in low and middle-income neighborhoods and countries (LMICs), making vaccine equity an issue for local and national public health and for foreign policy. Ethically and morally, access for all to essential medicines such as vaccines is fundamentally related to the human right to health, which is well founded in international law. Economically, vaccine inequity damages the global economy. Supply chains cross borders: areas with very high vaccination rates still depend on areas with lower vaccination rates for goods and services. Achieving vaccine equity requires addressing inequalities and roadblocks in the production, trade, and health care delivery of vaccines. Challenges include scaling-up of technology transfer and production, costs of production, safety profiles of vaccines, and anti vaccine disinformation and aggression. Patterns of vaccine inequality The wealthy generally have better access to vaccines than the poor, both between and within countries. Within countries, there may be lower rates of vaccination in racial and ethnic minority groups, in older adults, and among those living with disabilities or chronic conditions. The distribution and accessibility of vaccines show significant disparities between urban and rural areas especially in low- and middle-income countries. Some countries have programs to redress this inequality. Political, economic, social, and diplomatic factors can limit vaccine availability in some countries. Factors Achieving control of a disease (such as COVID-19) requires not only developing and licensing vaccines but also producing them at scale, pricing them so that they are globally affordable, allocating them to be available where and when they are needed, and deploying them to local communities. An effective global approach to achieving vaccine equity must address challenges in the dimensions of vaccine production, allocation, affordability, and deployment. 
Doctors Without Borders (MSF) lists five major obstacles to vaccine equity, taking into account that many of those to be vaccinated are children: Vaccine prices; new vaccines are on-patent and expensive (affordability) Getting vaccines to children; this is expensive and gets even more difficult in conflict zones and natural disasters (affordability, deployment) Five clinic visits in the first year of life is often too many; for people in remote areas with many children, it can be much more costly and difficult to get to a clinic. (deployment) Keeping vaccines cold; see cold chain. (deployment) Age-out; children who don't get vaccinated on schedule often have to pay for their shots. Disruption from natural disasters or conflict can mean that entire generations go unprotected. (affordability, deployment) Achieving vaccine equity depends on having a sufficient supply of affordable vaccines available for global use. Ideally, a vaccine that is suitable for global use will be based on established technology; have multiple available suppliers of the materials and equipment needed for production; be appropriate to the regions where it is to be produced or deployed, in terms of scalability of production and storage conditions; and be supported by local infrastructure for its production, delivery and regulation. Vaccine development Developing a new drug and gaining regulatory approval for it is a long and expensive process that can involve a variety of stakeholders. The time to develop a new drug can be 10 to 15 years, or longer. The average cost of developing at least one successful epidemic infectious disease vaccine from preclinical to the launch phase, taking into account the cost of failed attempts, has been estimated at between US$18.1 million and US$1 billion. Decisions about what drugs to develop reflect the priorities of the companies and countries where drug development occurs. As of 2021, the United States was the country launching the highest number of new drugs, and the country with the largest expenditure overall on pharmaceutical discovery, approximately 40% of the research done globally. The United States is also the country with the highest profits for pharmaceutical companies, and the highest drug costs for patients. Emerging and reemerging viruses substantially affect people in low- and middle-income countries (LMICs), a pattern that is likely to increase due to climate change. Pharmaceutical companies have few financial incentives to develop treatments for neglected tropical diseases in poor countries. International organizations such as the World Health Organization, Unicef and the Developing Countries Vaccine Manufacturers Network support development of treatments for diseases such as West Nile virus, dengue fever, Chikungunya, Middle East respiratory syndrome (MERS), severe acute respiratory syndrome (SARS), Ebola, enterovirus D68 and Zika virus. Vaccine affordability A major factor in the economics of vaccines is intellectual property law. IP currently operates by granting pharmaceutical monopolies lasting decades. The economics of monopoly power give the monopolist a strong financial incentive to use value-based pricing and set prices that many, often most, potential customers can't afford (a pricing strategy that charges what the market will bear, unlike traditional cost-plus pricing, which charges the cost of production plus a markup). 
Price discrimination attempts to charge each person the maximum they would be willing to pay, and charges every purchaser more than they would be charged in a fully competitive market. A vaccine monopolist has no incentive to let the rich actually subsidize the poor. Medical-product monopolists may claim that the high prices charged to the rich subsidize the lower prices charged to the poor when in fact both are being charged well over independent estimates of the cost of production (see, for instance, GeneXpert cartridges and pneumococcal vaccine). Amnesty International, Oxfam International, and Médecins Sans Frontières (MSF; Doctors without Borders) have criticized government support of some vaccine monopolies, on the grounds that the monopolies dramatically increase prices and impair vaccine equity. During the COVID-19 pandemic, there were calls for COVID-related IP to be suspended, using the TRIPS Waiver. The waiver had support from most countries, but opposition from within the EU (especially Germany), UK, Norway, and Switzerland, among others. Vaccine production Low- and middle-income countries tend to lack technological expertise and manufacturing capacity for the production of drugs and medical products. This leaves them dependent on diagnostics, treatments and vaccines from manufacturers in other countries and on availability in the global market. There are some exceptions such as China, Cuba, and India, which are actively producing pharmaceuticals to internationally accepted standards. The COVID-19 pandemic has led to recommendations to diversify pharmaceutical production and increase the productive capacity of LMICs. This could allow those countries to better ensure that their own production needs are being met, which would help to achieve global vaccine equity. For example, the African Union Commission and the Africa Centres for Disease Control and Prevention have called on countries and organizations to enable the production of at least 60% of the total vaccine doses required on the continent by 2040. Potential obstacles include: Availability of capital, technology and skills Adherence to quality standards Inconsistent or unsupportive national and international policy frameworks Size of markets, purchasing power, and variable demand for vaccines Lack of national or local infrastructure (e.g. reliable energy, electricity, transportation) Even when organizations are willing to share their information, knowledge transfer can create serious delays for the production of vaccines. This may be particularly true in the case of novel technologies. LMICs may be better situated to produce vaccines that are based on more established technologies, if those are available. Vaccine allocation In the absence of well-organized systems to develop and distribute vaccines, vaccine companies and high-income nations may monopolize available resources. Organizations such as GAVI, the Coalition for Epidemic Preparedness Innovations, and the World Health Organization have proposed multilateral initiatives such as Covax for the improvement of vaccine allocation. The intention with Covax was to collectively pool resources to ensure vaccine development and production. The resulting vaccine supplies could be fairly distributed to reach less wealthy countries and achieve vaccine equity. Foreign aid and resources from richer countries would cover the cost of distributing doses to lower-middle and low-income countries. 
As an allocation mechanism, Covax has succeeded in distributing COVID-19 vaccines, beginning with a shipment to Ghana on 24 February 2021. Over the next year Covax delivered 1.2 billion vaccine doses to 144 countries. Covax was not able to acquire doses directly from manufacturers at the levels it had hoped. An estimated 60% of the doses it distributed in 2021 (543 million out of 910 million) were donated doses from wealthy countries, beginning with the USA (41% of all donated doses). Covax is an unprecedented initiative, but it has not met the goal of achieving vaccine equity. Higher-income nations bypassed the proposed mechanism and negotiated directly with vaccine manufacturers, leaving Covax without the resources it needed to buy and distribute vaccines in a timely fashion. Smaller and poorer countries had to wait or negotiate for themselves, with varying success. Middle-income countries with finances to cover the cost of vaccines still had considerable difficulty in obtaining them. Ideally a global vaccine hub could have been developed by the international community before it was needed, rather than under the pressures of a pandemic. Improving it is important in preparation for future health crises. Analyses of Covax's institutional design and governance structures suggest that it lacked leverage to influence the behavior of donor states and pharmaceutical companies. It has been suggested that initiatives for vaccine allocation and vaccine equity could be improved by increasing the simplicity, transparency and accountability of their mechanisms. Others argue that such a body needs high-level leadership that is able to act at political and diplomatic levels to address issues of vaccine diplomacy, as well as to streamline its mechanisms. The allocation of vaccines and the issue of wastage are related. When high-income countries buy more than they use, doses go to waste. If higher-income countries donate near-expiration doses to lower-income countries, those doses may expire before they can be effectively reallocated and used. This type of closed vial wastage could be reduced through improved supply chain management within countries, internationally coordinated monitoring and tracking of vaccines, and well-organized systems for the timely donation and reallocation of surplus vaccines. Open vial wastage, which occurs when only part of a vial of vaccine is used, could also be reduced. Strategies include putting fewer doses in a single vial, and organizing appointments so that doses are more likely to be used, either by overbooking (since some people will not appear) or by leaving appointments unbooked (so that only those who do appear receive doses). Vaccine deployment Barriers to deployment may be both physical and mental. In addition to supply and demand, barriers to immunization can include systems barriers related to organization of the health care system; health care provider barriers relating to availability and education of health care staff; and patient barriers around a parent or patient's fears or beliefs about immunization. Cheap vaccines are often not administered due to a lack of infrastructure funding. Logistical difficulties are an obstacle to achieving global vaccine equity. Hot climates, remote regions, and low-resource settings need cheap, transportable, easy-to-use vaccines. To achieve vaccine equity, vaccine development needs to prioritize concerns about whether a vaccine can survive outside a fridge or be administered in a single shot. 
To reach communities and successfully deploy a vaccine and achieve vaccine equity, it is important to take a “human-centered” public health approach that can address and respond to the concerns of local individuals and organizations. For example, vaccines could be made available by going to where people live, and partnering with houses of worship and other community centers, rather than relying on people to travel to hospitals or doctor's offices. In Laos, measures taken included repairing roads to remote areas, buying vans with modern refrigeration to transport vaccines, and visiting residences, temples, and schools to discuss the importance of vaccination. As part of Laos' public health campaign, President Thongloun Sisoulith was publicly vaccinated, on television, to encourage others to follow his example. Working with leaders and trusted community members within communities who can present important information and publicly identify and counter misinformation can be very successful. This type of approach was used in India, which was certified as free of poliomyelitis in 2014. In that public health campaign, 98% of the “social mobilizers” involved were women, whose involvement was critical. Vaccine messaging Communicating about public health risks is more effective when a message involves three or four specific talking points, which are then backed up with evidence. An initial message may focus on what is happening, what to do, and how to do it, followed up by details and how to find more information. Part of effective communication is to avoid confusing or overwhelming people. A simple message can be followed by more complex ones. Messages should be clear about the limits of what is known: explicitly identifying the boundaries of evolving knowledge rather than speculating and sending out conflicting and confusing messages. Often, the most useful and effective communication comes from local officials and people with expertise who know their community and the issue involved well. It is important to be aware of and address issues such as medical disparities, abuse, neglect, and disinformation that may affect communities. Disinformation tends to thrive under conditions of confusion, distrust and disenfranchisement. Countering disinformation is not just a matter of presenting facts and figures. People need to feel heard and their concerns need to be considered. Geographical distributions Migrant populations Migrants and refugees arriving and living in Europe face various difficulties in getting vaccinated and many of them are not fully vaccinated. People arriving from Africa, Eastern Europe, the Eastern Mediterranean, and Asia are more likely to be under-vaccinated (partial or delayed vaccination). Also, recently arrived refugees, migrants and seekers of asylum were less likely to be fully vaccinated than other people from the same groups. Those with little contact to healthcare services, no citizenship and lower income are also more likely to be under-vaccinated. Vaccination barriers to migrants include language/literacy barriers, lack of understanding of the need for or their entitlement to vaccines, concerns about the side-effects, health professionals lack of knowledge of vaccination guidelines for migrants, and practical/legal issues, for example, having no fixed address. Vaccines uptake of migrants can be increased by customised communications, clear policies, community-guided interventions (such as vaccine advocates), and vaccine offers in local accessible settings. 
COVID-19 Earlier work on vaccines for other coronaviruses gave COVID-19 vaccine developers a head start, speeding up development and trials. COVID-19 vaccine development began in January 2020. On May 15, 2020, Operation Warp Speed was announced as a partnership between the United States Department of Health and Human Services and the Department of Defense. $18 billion was contracted out to eight different companies to develop COVID-19 vaccines intended for the US population; major companies included Moderna, Pfizer, and Johnson & Johnson. These three companies received the earliest emergency use authorizations from the FDA, and their vaccines therefore became the most widely administered in the United States. Vaccine inequality has been a major concern in the COVID-19 pandemic, with most vaccines being reserved by wealthy countries, including vaccines manufactured in developing countries. Globally, the problem has been distribution; supply is adequate. Not all countries have the ability to produce the vaccine. In low-income countries, vaccination rates long remained almost zero. This has caused sickness and death. Vaccine inequity during the COVID-19 pandemic showed the disparity between minority groups and countries. Based on income and rural or urban setting, vaccination rates were vastly disproportionate. As of 19 March 2022, 79% of people in high income countries had received one or more doses of a COVID-19 vaccine, compared with just 14% of people in low income countries. By April 25, 2022, 15.2% of people in low income countries had received at least one dose, while 65.1% of the global population had received at least one dose. COVID-19 vaccination records consistently show much lower rates for lower income groups than for middle and higher income groups. COVID-19 vaccination rates are higher in urban settings, and lower in rural settings. In a lower income country such as Nigeria, the national vaccination rate is under 11%. Because of persistent vaccine inequity, many countries still lack access to free or affordable COVID-19 vaccinations. Our World in Data provides up-to-date statistics on COVID-19 vaccine access across nations, socioeconomic groups, and more. In September 2021, it was estimated that the world would have manufactured enough vaccines to vaccinate everyone on the planet by January 2022. Vaccine hoarding, booster shots, a lack of funding for vaccination infrastructure, and other forms of inequality mean that many countries are still expected to have inadequate vaccination. On August 4, 2021, the United Nations called for a moratorium on booster doses in high-income countries, so that low-income countries could be vaccinated. The World Health Organization repeated these criticisms of booster shots on the 18th, saying "we're planning to hand out extra life-jackets to people who already have life-jackets while we're leaving other people to drown without a single life jacket". UNICEF supported a "Donate doses now" campaign. On 29 January 2022, Pope Francis denounced the "distortion of reality based on fear" that has ripped across the world during the COVID-19 pandemic. He urged journalists to help those misled by coronavirus-related misinformation and fake news to better understand the scientific facts. See also Economics of vaccines Vaccine resistance GAVI COVAX CEPI Developing Countries Vaccine Manufacturers Network References Vaccination Public health Health equity
Vaccine equity
[ "Biology" ]
3,935
[ "Vaccination" ]
69,151,671
https://en.wikipedia.org/wiki/Needle%20spiking
Needle spiking (also called injection spiking) is a phenomenon initially reported in the UK and Ireland where people have reported themselves subjected to surreptitious injection of unidentified sedative drugs, usually in a crowded environment such as the dancefloor of a nightclub, producing symptoms typical of date rape drugs. A Home Affairs Committee report noted a lack of motive in respect of needle spiking. In 692 incidents recorded in the last three months of 2021, there was only one claimed further offence of sexual offending or robbery. No verified toxicological results have been published showing the presence of known incapacitating agents in alleged victims; the prevalence of genuine cases is unknown and has been controversial, with experts expressing doubts as to how easily such injections could be carried out without it being immediately obvious to the victim and attributing the reports to hysteria. Dr Emmanuel Puskarczyk, head of the poison control centre at the , when noting the absence of objective proof has stated that the administration of a substance would require several seconds meaning that the recipient would likely notice at the time. Reports UK 1,032 reported claims of spiking by injection were recorded from the beginning of September to the end of December 2021. In Nottingham, where 15 reports of needle spiking were made in October, police identified only one case where a victim's injury "could be consistent with a needle". In November that year, there followed reports in Brighton and Eastbourne; and it was reported that two women alleged they had been spiked with needles inside a Yorkshire nightclub. In Northern Ireland, the PSNI began an investigation after a woman believed she was spiked with a needle in Omagh on 6 November 2021. In December 2021, Nottinghamshire Police Service had received 146 reports of suspected needle spiking. Nine arrests were made but no suspects were subsequently charged. VICE News were informed by the National Police Chiefs' Council (NPCC) of 274 reported cases between September-November 2021 in the UK. The NPCC said that no cases of injection of drugs had been confirmed, and that there was one confirmed case of "needle-sticking", involving someone being jabbed, but not necessarily injected, with a needle; investigations were continuing to determine whether the needle contained any spiking drugs. Despite the allegations, there has not been a single prosecution from needle spiking in the UK. Furthermore, experts from the scientific and academic community have claimed the likelihood of being spiked by injection is extremely low. Prof Adam Winstock, a trained consultant psychiatrist from the Global Drugs Survey, explained that “Needles have to be inserted with a level of care […] The idea these things can be randomly given through clothes in a club is just not that likely." France Since the summer of 2021, over 100 cases of needle spiking have been reported in French nightclubs. In May 2022, the Ministry of the Interior commented that it had found the majority of those reporting incidents had been injected with something; their spokesman said, "Too often the absence of traces detected cannot be interpreted as the absence of an injection, but as sampling too late." Ireland In Ireland, the Garda Síochána carried out multiple needle spiking investigations in October and November 2021. The first known report of claimed needle spiking in Ireland was on 27 October 2021, when a woman claimed she was spiked with a needle in a Dublin nightclub. 
Belgium There has been an incident of needle spiking of football supporters in May 2022 during a match between KV Mechelen and Racing Genk. Fourteen soccer fans from the same section of the stadium felt a prick and subsequently became unwell, although initial toxicological reports found nothing. Also in May 2022, in the city of Hasselt (Limburg), twenty-four youngsters became unwell at teen festival We R Young after what may have been needle spiking or mass hysteria. Germany In May 2022, Australian musician Zoè Zanias of Linea Aspera claimed she was attacked in a needle-spiking at the Berghain nightclub in Berlin, suffering from respiratory depression and an unwanted "psychedelic" experience as a result. Spain As of summer 2022, the Spanish police have registered 23 cases in Catalonia and 12 in the Basque Country. No traces of drugs were detected and there were no cases of related sexual violence. Switzerland On 13 August 2022 the Street Parade, a large open air rave event with hundreds of thousands of participants, took place in Zurich. A total of 8 female attendees contacted first aid services claiming needle spiking attacks. One of the victims, a 16 year old girl, was allegedly spiked 14 times. Australia On April 24, 2022, a woman claimed she was spiked at a Melbourne nightclub. Reactions Concerns have been raised by campaigners, politicians and student bodies. In October 2021 it was reported that British home secretary Priti Patel had requested police forces investigate the alleged incidents. In December that year, the Home Affairs Select Committee launched a new inquiry into spiking, including needle spiking, and the effectiveness of the police response to it. In Ireland, Young Fine Gael drafted a bill, which Fine Gael members introduced in Seanad Éireann in July 2023, "to provide for the specific offence of spiking characterised by the administration, injection, or causation of the taking orally of a substance, knowing that the person to whom the substance is administered, injected, or caused to be taken does not consent, or being reckless as to whether the person consents, and where the perpetrator intends to overpower or sedate the person, to engage in a sexual act, cause harm, make a gain or cause a loss, or otherwise commit an offence." Boycotts and tougher checks In response, a number of women from university cities decided to boycott nightclubs for "girls' nights in". Campaigners also called on nightclubs to impose tougher checks on entry; an online petition on the issue was considered by Parliament on 4 November 2021, where it was decided no changes to the law should be made. See also Drink spiking Needlestick injury Pin prick attack References Assault Crimes Incapacitating agents October 2021 crimes in Europe November 2021 crimes in Europe September 2021 crimes in Europe
Needle spiking
[ "Chemistry" ]
1,277
[ "Incapacitating agents", "Chemical weapons" ]
69,152,704
https://en.wikipedia.org/wiki/Digital%20Building%20Logbook
The Digital Building Logbook is a proposal aiming at establishing a common European approach that aggregates all relevant data about a building and ensures that authorised people can access accurate information about the building. See also Energy performance certificate Building information modeling References Building engineering Building information modeling
Digital Building Logbook
[ "Engineering" ]
53
[ "Building engineering", "Building information modeling", "Civil engineering", "Architecture" ]
69,154,444
https://en.wikipedia.org/wiki/Brellochs%20reaction
In organoboron chemistry, the Brellochs reaction provides a way to generate the monocarboranes. The use of acetylenes to insert two carbons into boron hydrides is well established. The Brellochs method uses formaldehyde to insert single carbon atoms into boron hydrides. Illustrative is the synthesis of CB9H14− from commercially available decaborane. B10H14 + CH2O + 2 OH− + H2O → CB9H14− + B(OH)4− + H2 Oxidation of the arachno anion gives nido-6-CB9H12−. Base degradation of the latter gives arachno-4-CB8H14. References   Organoboron compounds Cluster chemistry
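For readability, the carbon-insertion step quoted above can be typeset with explicit subscripts and charges; this is the same equation reformatted in LaTeX, not additional chemistry.

```latex
% The Brellochs carbon insertion from decaborane, as given in the text,
% written with explicit subscripts and charges.
\[
\mathrm{B_{10}H_{14} + CH_2O + 2\,OH^- + H_2O \longrightarrow CB_9H_{14}^- + B(OH)_4^- + H_2}
\]
```

Both sides balance: boron (10), carbon (1), hydrogen (20), oxygen (4), and overall charge (2−).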
Brellochs reaction
[ "Chemistry" ]
167
[ "Cluster chemistry", "Organometallic chemistry" ]
69,155,647
https://en.wikipedia.org/wiki/Antimony%28III%29%20oxide%20hydroxide%20nitrate
Antimony(III) oxide hydroxide nitrate is an inorganic compound with the chemical formula Sb4O4(OH)2(NO3)2. It is one of the very few nitrates of antimony. No evidence for a simple trinitrate has been reported. According to X-ray crystallography, its structure consists of cationic layers of antimony oxide/hydroxide with intercalated nitrate anions. This compound is produced by the reaction of antimony(III) oxide and nitric acid at 110 °C. References Antimony(III) compounds Nitrates
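The text names the reactants and the product but not the stoichiometry of the synthesis; one balanced equation consistent with the stated reactants and product (an assumption, not taken from the cited reference) is:

```latex
% One stoichiometry consistent with antimony(III) oxide + nitric acid giving
% Sb4O4(OH)2(NO3)2; each side carries Sb: 4, N: 2, H: 2, O: 12.
\[
\mathrm{2\,Sb_2O_3 + 2\,HNO_3 \longrightarrow Sb_4O_4(OH)_2(NO_3)_2}
\]
```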
Antimony(III) oxide hydroxide nitrate
[ "Chemistry" ]
122
[ "Oxidizing agents", "Nitrates", "Salts" ]
69,159,943
https://en.wikipedia.org/wiki/Organic%20azide
An organic azide is an organic compound that contains an azide (–N3) functional group. Because of the hazards associated with their use, few azides are used commercially, although they exhibit interesting reactivity for researchers. Low molecular weight azides are considered especially hazardous and are avoided. In the research laboratory, azides are precursors to amines. They are also popular for their participation in the "click reaction" between an azide and an alkyne and in Staudinger ligation. These two reactions are generally quite reliable, lending themselves to combinatorial chemistry. History Phenyl azide ("diazoamidobenzol") was prepared in 1864 by Peter Griess by the reaction of ammonia and phenyldiazonium. In the 1890s, Theodor Curtius, who had discovered hydrazoic acid (HN3), described the rearrangement of acyl azides to isocyanates, subsequently named the Curtius rearrangement. Rolf Huisgen described the eponymous 1,3-dipolar cycloaddition. The interest in azides among organic chemists was long relatively modest due to the reported instability of these compounds. The situation changed dramatically with the discovery by Sharpless et al. of Cu-catalysed (3+2)-cycloadditions between organic azides and terminal alkynes. The azido and alkyne groups are "bioorthogonal", which means they do not interact with living systems, and at the same time they undergo an impressively fast and selective coupling. This type of formal 1,3-dipolar cycloaddition became the most famous example of so-called "click chemistry" (perhaps the only one known to non-specialists), and the field of organic azides expanded rapidly. Preparation Myriad methods exist, most often using a preformed azide-containing reagent. Alkyl azides By halide displacement As a pseudohalide, the azide ion generally displaces many leaving groups, e.g. chloride, bromide, iodide, sulfonate, and others, to give the azido compound. The azide source is most often sodium azide (NaN3), although lithium azide (LiN3) has also been demonstrated. From alcohols Aliphatic alcohols give azides via a variant of the Mitsunobu reaction, with the use of hydrazoic acid. Hydrazines may also form azides by reaction with sodium nitrite. Alcohols can be converted into azides in one step using 2-azido-1,3-dimethylimidazolinium hexafluorophosphate (ADMP) or under Mitsunobu conditions with diphenylphosphoryl azide (DPPA). From epoxides and aziridines Trimethylsilyl azide (Me3SiN3) and tributyltin azide (Bu3SnN3) have both been used; enantioselective modifications of the reaction are also known. Ring opening of epoxides and aziridines by azide gives hydroxy azides and amino azides, respectively. From amines The azo-transfer compounds trifluoromethanesulfonyl azide and imidazole-1-sulfonyl azide react with amines to give the corresponding azides. Diazo transfer onto amines using trifluoromethanesulfonyl azide (CF3SO2N3) and tosyl azide (TsN3) has been reported. Hydroazidation Hydroazidation of alkenes has been demonstrated. Aryl azides Aryl azides may be prepared by displacement of the appropriate diazonium salt with sodium azide or trimethylsilyl azide. Nucleophilic aromatic substitution is also possible, even with chlorides. Anilines and aromatic hydrazines undergo diazotization, as do alkyl amines and hydrazines. Acyl azides Alkyl or aryl acyl chlorides react with sodium azide in aqueous solution to give acyl azides, which give isocyanates in the Curtius rearrangement. 
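Two of the transformations mentioned above, halide displacement with sodium azide and the reduction of the resulting azide to a primary amine via the Staudinger reaction, can be written schematically as follows (generic R groups; these are textbook equations rather than conditions taken from the cited literature):

```latex
% Preparation by S_N2 displacement of a halide, followed by Staudinger
% reduction of the azide to a primary amine via an iminophosphorane.
\begin{align*}
\mathrm{R{-}Br + NaN_3} &\longrightarrow \mathrm{R{-}N_3 + NaBr} \\
\mathrm{R{-}N_3 + PPh_3} &\longrightarrow \mathrm{R{-}N{=}PPh_3 + N_2} \\
\mathrm{R{-}N{=}PPh_3 + H_2O} &\longrightarrow \mathrm{R{-}NH_2 + O{=}PPh_3}
\end{align*}
```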
Dutt–Wormall reaction A classic method for the synthesis of azides is the Dutt–Wormall reaction in which a diazonium salt reacts with a sulfonamide first to a diazoaminosulfinate and then on hydrolysis the azide and a sulfinic acid. Reactions Organic azides engage in useful organic reactions. The terminal nitrogen is mildly nucleophilic. Generally, nucleophiles attack the azide at the terminal nitrogen Nγ, while electrophiles react at the internal atom Nα. Azides easily extrude diatomic nitrogen, a tendency that is exploited in many reactions such as the Staudinger ligation or the Curtius rearrangement. Azides may be reduced to amines by hydrogenolysis or with a phosphine (e.g., triphenylphosphine) in the Staudinger reaction. This reaction allows azides to serve as protected -NH2 synthons, as illustrated by the synthesis of 1,1,1-tris(aminomethyl)ethane: In the azide alkyne Huisgen cycloaddition, organic azides react as 1,3-dipoles, reacting with alkynes to give substituted 1,2,3-triazoles. Some azide reactions are shown in the following scheme. Probably the most famous is the reaction with phosphines, which leads to iminophosphoranes 22; these can be hydrolysed into primary amines 23 (the Staudinger reaction), react with carbonyl compounds to give imines 24 (the aza-Wittig reaction), or undergo other transformations. Thermal decomposition of azides gives nitrenes, which participate in a variety of reactions; vinyl azides 19 decompose into 2H-azirines 20. Alkyl azides with low nitrogen-content ((nC + nO) / nN ≥ 3) are relatively stable and decompose only above ca. 175 °C. Direct photochemical decomposition of alkyl azides leads almost exclusively to imines (e.g. 25 and 26). It is proposed that the azide group is promoted to the singlet excited state and then undergoes concerted rearrangement without the intermediacy of nitrenes. The presence of triplet sensitisers, however, may change the reaction mechanism and result in the formation of triplet nitrenes. The latter were observed directly by ESR spectroscopy at −269 °C as well as inferred in some photolyses. Triplet methyl nitrene is 31 kJ/mol more stable than its singlet form, and thus is most likely the ground state. The (3+2)-cycloaddition of azides to double or triple bonds is one of the most utilised cycloadditions in organic chemistry and affords triazolines (e.g. 17) or triazoles, respectively. The uncatalysed reaction is a concerted pericyclic process, in which the configuration of the alkene component is transferred to the triazoline product. The Woodward–Hoffmann denomination is [π4s+π2s] and the reaction is symmetry-allowed. According to Sustmann, this is a Type II cycloaddition, which means the two HOMOs and the two LUMOs have comparable energies, and thus both electron-withdrawing and electron-donating substituents may lead to an increase in the reaction rate. The reaction is generally free from significant solvent effects because both the reactants and the transition state (TS) are non-polar. Another azide regular is tosyl azide here in reaction with norbornadiene in a nitrogen insertion reaction: Applications Some azides are valuable as bioorthogonal chemical reporters, molecules that can be "clicked" to see the metabolic path it has taken inside a living system. The antiviral drug zidovudine (AZT) contains an azido group. Safety Some organic azides are classified as highly explosive and toxic. References Additional sources Wolff, H. Org. React. 1946, 3, 337–349. 
External links Synthesis of organic azides, recent methods Synthesizing, Purifying, and Handling Organic Azides Functional groups Leaving groups
Organic azide
[ "Chemistry" ]
1,749
[ "Functional groups", "Leaving groups" ]
62,525,480
https://en.wikipedia.org/wiki/ENUBET
The Enhanced NeUtrino BEams from kaon Tagging or ENUBET is an ERC funded project that aims at producing an artificial neutrino beam in which the flavor, flux and energy of the produced neutrinos are known with unprecedented precision. Interest in these types of high precision neutrino beams has grown significantly in the last ten years, especially after the start of the construction of the DUNE and Hyper-Kamiokande detectors. DUNE and Hyper-Kamiokande are aimed at discovering CP violation in neutrinos observing a small difference between the probability of a muon-neutrino to oscillate into an electron-neutrino and the probability of a muon-antineutrino to oscillate into an electron-antineutrino. This effect points toward a difference in the behavior of matter and antimatter. In quantum field theory, this effect is described by a violation of the CP symmetry in particle physics. The experiments that will measure CP violation need a very precise knowledge of the neutrino cross-sections, i.e. the probability for a neutrino to interact in the detector. This probability is measured counting the number of interacting neutrinos divided by the flux of incoming neutrinos. Current neutrino cross-section experiments are limited by large uncertainties in the neutrino flux. A new generation of cross-section experiment is therefore needed to overcome these limitations with new techniques or high precision beams, as ENUBET. In ENUBET, neutrinos are produced by focusing mesons in a narrow band beam towards an instrumented decay tunnel, where charged leptons produced in association with neutrinos by mesons' decay can be monitored at the single particle level. Beams like ENUBET are called monitored neutrino beams. Mesons (essentially pions and kaons) are produced in the interactions of accelerated protons with a Beryllium or Graphite target. The proposed facility is being studied taking into account the energies of currently available proton drivers: 400 GeV (CERN SPS), 120 GeV (FNAL Main Injector), 30 GeV (J-PARC Main Ring). Kaons and pions are momentum and charge selected in a short transfer line by means of dipole and quadrupole magnets and are focused in a collimated beam into an instrumented decay tube. Large angle muons and positrons from kaon decays (, , ) are measured by detectors on the tunnel walls, while muons from pion decays () are monitored after the hadron dump at the end of the tunnel. The decay region is kept short (40 m) in order to reduce the neutrino contamination from muon decays (). In this way, the neutrino flux is assessed in a direct way with a precision of 1%, without relying on complex simulations of the transfer line and on hadro-production data extrapolation that currently limits the knowledge of the flux to 5-10%. The ENUBET facility can be used to perform precision studies of the neutrino cross section and of sterile neutrinos or Non-Standard Interaction models. This method can also be extended to detect other leptons in order to have a complete monitored neutrino beam. The ENUBET project started in 2016. As of 2024, it involves 17 European institutions in 5 European countries and brings together about 80 scientists. ENUBET studies all technical and physics challenges to demonstrate the feasibility of a monitored neutrino beam: it has built a full-scale demonstrator of the instrumented decay tunnel (3 m length and partial azimuthal coverage) and assesses costs and physics reach of the proposed facility. 
The first end-to-end simulation of the ENUBET monitored neutrino beam was published in 2023. The ENUBET ERC project was completed in 2022. Since March 2019, ENUBET has been part of the CERN Neutrino Platform (NP06/ENUBET) for the development of a new generation of neutrino detectors and facilities. References Neutrinos Neutrino experiments Nuclear physics
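As a rough illustration of the flux-monitoring idea described above, the sketch below propagates counting statistics from monitored large-angle positrons to the electron-neutrino flux normalisation. The positron count and the combined efficiency are placeholder values chosen only to show the arithmetic, not figures from the ENUBET design reports.

```python
import math

def nue_flux_estimate(n_positrons, efficiency):
    """Estimate a nu_e flux normalisation from monitored positrons.

    In a monitored beam each detected large-angle positron tags one nu_e
    produced in the same kaon decay, so the flux normalisation follows
    from the positron count corrected for detection efficiency/acceptance.
    """
    flux = n_positrons / efficiency
    # Relative Poisson counting error on the monitored positron sample.
    rel_stat_uncertainty = math.sqrt(n_positrons) / n_positrons
    return flux, rel_stat_uncertainty

# Placeholder numbers: one million monitored positrons, 50% combined efficiency.
flux, rel_err = nue_flux_estimate(n_positrons=1_000_000, efficiency=0.5)
print(f"relative statistical uncertainty: {rel_err:.3%}")  # ~0.1%, below the 1% goal
```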
ENUBET
[ "Physics" ]
870
[ "Nuclear physics" ]
62,528,636
https://en.wikipedia.org/wiki/Epichlo%C3%AB%20sylvatica
Epichloë sylvatica is a haploid sexual species in the fungal genus Epichloë. A systemic and seed-transmissible grass symbiont first described in 1998, Epichloë sylvatica forms a clade within the Epichloë typhina complex. Epichloë sylvatica is found from Europe to Asia, where it has been identified in association with two grass species, Brachypodium sylvaticum and Hordelymus europaeus. Subspecies Epichloë sylvatica has one subspecies, Epichloë sylvatica subsp. pollinensis Leuchtm. & M. Oberhofer. Described in 2013, Epichloë sylvatica subsp. pollinensis has been found in Europe in the grass species Hordelymus europaeus. References sylvatica Fungi described in 1998 Fungi of Asia Fungi of Europe Fungus species
Epichloë sylvatica
[ "Biology" ]
179
[ "Fungi", "Fungus species" ]
62,528,650
https://en.wikipedia.org/wiki/Central%20configuration
In celestial mechanics, a central configuration is a system of point masses with the property that each mass is pulled by the combined gravitational force of the system directly towards the center of mass, with acceleration proportional to its distance from the center. Central configurations are studied in -body problems formulated in Euclidean spaces of any dimension, although only dimensions one, two, and three are directly relevant for celestial mechanics in physical space. Examples For equal masses, one possible central configuration places the masses at the vertices of a regular polygon (forming a Klemperer rosette), a Platonic solid, or a regular polytope in higher dimensions. The centrality of the configuration follows from its symmetry. It is also possible to place an additional point, of arbitrary mass, at the center of mass of the system without changing its centrality. Placing three masses in an equilateral triangle, four at the vertices of a regular tetrahedron, or more generally masses at the vertices of a regular simplex produces a central configuration even when the masses are not equal. This is the only central configuration for these masses that does not lie in a lower-dimensional subspace. Dynamics Under Newton's law of universal gravitation, bodies placed at rest in a central configuration will maintain the configuration as they collapse to a collision at their center of mass. Systems of bodies in a two-dimensional central configuration can orbit stably around their center of mass, maintaining their relative positions, with circular orbits around the center of mass or in elliptical orbits with the center of mass at a focus of the ellipse. These are the only possible stable orbits in three-dimensional space in which the system of particles always remains similar to its initial configuration. More generally, any system of particles moving under Newtonian gravitation that all collide at a single point in time and space will approximate a central configuration, in the limit as time tends to the collision time. Similarly, a system of particles that eventually all escape each other at exactly the escape velocity will approximate a central configuration in the limit as time tends to infinity. And any system of particles that move under Newtonian gravitation as if they are a rigid body must do so in a central configuration. Vortices in two-dimensional fluid dynamics, such as large storm systems on the Earth's oceans, also tend to arrange themselves in central configurations. Enumeration Two central configurations are considered to be equivalent if they are similar, that is, they can be transformed into each other by some combination of rotation, translation, and scaling. With this definition of equivalence, there is only one configuration of one or two points, and it is always central. In the case of three bodies, there are three one-dimensional central configurations, found by Leonhard Euler. The finiteness of the set of three-point central configurations was shown by Joseph-Louis Lagrange in his solution to the three-body problem; Lagrange showed that there is only one non-collinear central configuration, in which the three points form the vertices of an equilateral triangle. Four points in any dimension have only finitely many central configurations. The number of configurations in this case is at least 32 and at most 8472, depending on the masses of the points. The only convex central configuration of four equal masses is a square. 
The only central configuration of four masses that spans three dimensions is the configuration formed by the vertices of a regular tetrahedron. For arbitrarily many points in one dimension, there are again only finitely many solutions, one for each of the linear orderings (up to reversal of the ordering) of the points on a line. For every set of point masses, and every dimension less than , there exists at least one central configuration of that dimension. For almost all -tuples of masses there are finitely many "Dziobek" configurations that span exactly dimensions. It is an unsolved problem, posed by and , whether there is always a bounded number of central configurations for five or more masses in two or more dimensions. In 1998, Stephen Smale included this problem as the sixth in his list of "mathematical problems for the next century". As partial progress, for almost all 5-tuples of masses, there are only a bounded number of two-dimensional central configurations of five points. Special classes of configurations Stacked A central configuration is said to be stacked if a subset of three or more of its masses also form a central configuration. For example, this can be true for equal masses forming a square pyramid, with the four masses at the base of the pyramid also forming a central configuration, or for masses forming a triangular bipyramid, with the three masses in the central triangle of the bipyramid also forming a central configuration. Spiderweb A spiderweb central configuration is a configuration in which the masses lie at the intersection points of a collection of concentric circles with another collection of lines, meeting at the center of the circles with equal angles. The intersection points of the lines with a single circle should all be occupied by points of equal mass, but the masses may vary from circle to circle. An additional mass (which may be zero) is placed at the center of the system. For any desired number of lines, number of circles, and profile of the masses on each concentric circle of a spiderweb central configuration, it is possible to find a spiderweb central configuration matching those parameters. One can similarly obtain central configurations for families of nested Platonic solids, or more generally group-theoretic orbits of any finite subgroup of the orthogonal group. James Clerk Maxwell suggested that a special case of these configurations with one circle, a massive central body, and much lighter bodies at equally spaced points on the circle could be used to understand the motion of the rings of Saturn. used stable orbits generated from spiderweb central configurations with known mass distribution to test the accuracy of classical estimation methods for the mass distribution of galaxies. His results showed that these methods could be quite inaccurate, potentially showing that less dark matter is needed to predict galactic motion than standard theories predict. References Classical mechanics Orbits
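A quick numerical check of the defining property, using the equilateral-triangle example of three equal masses discussed above (an illustrative sketch in units with G = 1, not code from the cited literature):

```python
import numpy as np

def accelerations(positions, masses, G=1.0):
    """Newtonian gravitational acceleration acting on each point mass."""
    acc = np.zeros_like(positions)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = positions[j] - positions[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

# Three equal masses at the vertices of an equilateral triangle (circumradius 1).
masses = np.array([1.0, 1.0, 1.0])
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
positions = np.stack([np.cos(angles), np.sin(angles)], axis=1)

com = (masses[:, None] * positions).sum(axis=0) / masses.sum()
acc = accelerations(positions, masses)

# Central configuration: a_i = -lambda * (x_i - com) with one lambda for all bodies.
for a, x in zip(acc, positions):
    r = x - com
    print(-np.dot(a, r) / np.dot(r, r))  # the same value (~0.577) for every mass
```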
Central configuration
[ "Physics" ]
1,249
[ "Mechanics", "Classical mechanics" ]
72,142,668
https://en.wikipedia.org/wiki/AES50
AES50 is an Audio over Ethernet protocol for multichannel digital audio. It is defined in the AES50-2011 standard for High-resolution multi-channel audio interconnection (HRMAI). Origins AES50 is based on the SuperMAC protocol created by Sony Pro Audio Lab (now Oxford Digital). The preliminary standard was assigned the AES-X140 project designation in 2003, and was finally approved in 2005 as a royalty-free open standard. HyperMAC is an improved protocol based on Gigabit Ethernet physical layer, allowing more channels and lower audio latency. It was considered for an alternate physical layer in a future revision of AES50, but standardisation did not move forward. Sony licensed its proprietary software implementations of SuperMAC and HyperMAC to Midas Consoles for their Midas XL8 digital mixer. Midas parent Klark Teknik took over the SuperMAC and HyperMAC patents in 2007, then in 2009 Midas and Klark Teknik were acquired by Uli Behringer's Music Group. The AES50 protocol is implemented in digital mixing consoles by Midas and Behringer to transfer digital audio between a console and remote stage boxes. Specifications AES50 is a point-to-point interconnect which carries multiple channels of AES3, PCM or DSD bitstream formats, along with system clock and synchronisation signals, over Cat 5 cable using 100 Mbit/s Fast Ethernet physical layer. AES50 uses the four pairs of the Cat 5 cable in the 8P8C connector: Audio data transmit + Audio data transmit – Audio data receive + Sync signal transmit + Sync signal transmit – Audio data receive – Sync signal receive + Sync signal receive – Audio data is transmitted in bidirectional full-duplex mode over two differential pairs used by the 100BASE-TX standard, and word clock sync signal is transmitted over the remaining differential pairs not used by the Fast Ethernet layer. Using separate copper pairs for clock signal simplifies connection setup and allows phase-accurate low-jitter clock sync. AES50 only employs the Ethernet protocol's physical layer (layer 1), relying on Ethernet frames to continuously stream audio data. A proprietary link layer (layer 2) implements a point-to-point audio transmission protocol. It uses a cyclic redundancy check (CRC) for each Ethernet frame and a Hamming code scheme can recover from individual bit errors. The audio data is interleaved so that neighbouring bits belong to different samples, allowing the receiving end to correct burst errors. Specialised cross-point routers can convert multiple point-to-point AES50 links to a centralised star topology. The AES50 protocol supports 24-bit PCM audio and delta-sigma bistream formats (Direct Stream Digital), with sample rates that are a multiple of 44.1 or 48 kHz. The bandwidth of 100 Mbit/s allows 48 channels at 48 kHz sample rate, or 24 channels at 96 kHz sample rate. The latency is 6 samples at 96 kHz and 3 samples at 48 kHz, or 62.50 μs. In practical implementations of the SuperMAC and HyperMAC protocols, only 96 kHz PCM formats are supported. AES50 also supports packet-based auxiliary channel for control data over the same data link. The control channel is allocated a fixed bandwidth of 5 Mbit/s; control data are embedded in the same Ethernet frame as the audio data. HyperMAC The HyperMAC protocol is based on the Gigabit Ethernet physical layer for Cat 5e cable (up to 100 m) or OM2 multi-mode fibre (up to 500 m) with embedded clocking. 
It allows up to 192 bidirectional channels at 96 kHz and 384 channels at 48 kHz; the latency is 4 samples at 96 kHz or 2 samples at 48 kHz, or 41.66 μs. The bandwidth of the auxiliary data link is increased to 200 Mbit/s and control data is transmitted with separate control frames. Implementations Midas Heritage D HD96-24 digital mixer Midas PRO Series digital mixers Midas M32 digital mixer Midas XL8 digital mixer Midas DL Series digital stage boxes Midas DL4xx Series Audio System Signal Router Midas Neutron Audio System Signal Router Behringer X32 digital mixer Behringer WING digital mixer Behringer S Series digital stage boxes Klark Teknik DN9620 AES50 Extender Klark Teknik DN9630 USB Interface Klark Teknik DN9650 Network bridge References External links Audio network protocols Networking standards Audio engineering Audio Engineering Society standards
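The channel counts and latencies quoted in the Specifications section above can be checked with a few lines of arithmetic. The sketch below assumes the 24-bit PCM payload figures given in the text and ignores framing, error-correction and auxiliary-channel overhead.

```python
def payload_bandwidth_mbps(channels, sample_rate_hz, bits_per_sample=24):
    """Raw audio payload rate in Mbit/s, ignoring framing and FEC overhead."""
    return channels * sample_rate_hz * bits_per_sample / 1e6

def latency_us(samples, sample_rate_hz):
    """Latency of a fixed number of samples, in microseconds."""
    return samples / sample_rate_hz * 1e6

# 48 channels at 48 kHz and 24 channels at 96 kHz both fit within Fast Ethernet's 100 Mbit/s.
print(payload_bandwidth_mbps(48, 48_000))   # 55.296 Mbit/s
print(payload_bandwidth_mbps(24, 96_000))   # 55.296 Mbit/s
# 3 samples at 48 kHz and 6 samples at 96 kHz both correspond to 62.5 microseconds.
print(latency_us(3, 48_000), latency_us(6, 96_000))
```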
AES50
[ "Technology", "Engineering" ]
963
[ "Computer standards", "Computer networks engineering", "Audio Engineering Society standards", "Networking standards", "Electrical engineering", "Audio engineering" ]
72,149,189
https://en.wikipedia.org/wiki/Richmann%27s%20law
Richmann's law, sometimes referred to as Richmann's rule, Richmann's mixing rule, Richmann's rule of mixture or Richmann's law of mixture, is a physical law for calculating the mixing temperature when pooling multiple bodies. It is named after the Baltic German physicist Georg Wilhelm Richmann, who published the relationship in 1750, establishing the first general equation for calorimetric calculations. Origin Through experimental measurements, Wilhelm Richmann determined that the following relationship holds when water of different temperatures is mixed: m_1 (T_1 - T_m) = m_2 (T_m - T_2). It follows: T_m = (m_1 T_1 + m_2 T_2) / (m_1 + m_2). Here m_1 and m_2 are the masses of the two mixture components, T_1 and T_2 are their respective initial temperatures, and T_m is the mixture temperature. This observation is called Richmann's law in the narrower sense and applies in principle to all substances of the same state of aggregation. According to this, the mixing temperature is the weighted arithmetic mean of the temperatures of the two initial components. Richmann's rule of mixing can also be applied in reverse, for example, to the question of the ratio in which quantities of water of given temperatures must be mixed to obtain water of a desired temperature. Determining the quantities m_1 and m_2 required for this purpose, given a total quantity m = m_1 + m_2, is accomplished with the mixing cross. The corresponding formula, obtained from the above equation by rearrangement, is: m_1 = m (T_m - T_2) / (T_1 - T_2) or m_2 = m (T_1 - T_m) / (T_1 - T_2). For the mixing ratio, this gives: m_1 / m_2 = (T_m - T_2) / (T_1 - T_m). The physical background of the mixing rule is the fact that the heat energy of a substance is directly proportional to its mass and its absolute temperature. The proportionality factor is the specific heat capacity, which depends on the nature of the substance, but which was only described by Joseph Black some time after Richmann's discovery. Thus, the validity of the formula is limited to mixtures of the same substance, since it assumes a uniform specific heat capacity. Another condition is that both components be uniformly warm throughout and that there be no appreciable heat exchange with their surroundings. If one wants to mix two substances with different, but known, specific heat capacities, one can formulate the mixing rule more generally, as shown below. General formulation Under the condition that no change of aggregate state occurs and the system is closed, i.e., in particular, there is no heat exchange with the environment, the following holds: m_1 h_1(T_1) + m_2 h_2(T_2) = m_1 h_1(T_m) + m_2 h_2(T_m), where h_1 and h_2 represent the specific enthalpies of the respective components. If the specific heat capacities c_1 and c_2 can be assumed to be constant, this can be transformed to m_1 c_1 (T_1 - T_m) = m_2 c_2 (T_m - T_2). The formula resolved for the mixture temperature is then: T_m = (m_1 c_1 T_1 + m_2 c_2 T_2) / (m_1 c_1 + m_2 c_2). In a wider sense this equation is also referred to as Richmann's law because it simply extends Richmann's established relationship to include the specific heat capacity, thus allowing the calculation of the mixing temperature of different substances. If the heat capacities are not constant over the entire temperature range, the above formula can be used with an average heat capacity for component i, taken as the mean value of c_i(T) over the interval between T_m and T_i: c̄_i = (1 / (T_i - T_m)) ∫ c_i(T) dT. In this formula, c_i with i = 1 or 2 represents the specific heat capacity of the two components, which may be temperature dependent. Application of the formula may require an iterative procedure to determine the mixture temperature, since the average heat capacity is itself temperature dependent. References Scientific laws Calorimetry
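A short numerical illustration of the generalized formula (a sketch; the masses, temperatures and the use of water are arbitrary example values, not figures from the article):

```python
def mixing_temperature(m1, c1, t1, m2, c2, t2):
    """Richmann's rule generalized with specific heat capacities (assumed constant)."""
    return (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)

# Mixing 1 kg of water at 80 degrees C with 2 kg of water at 20 degrees C.
# Both components are the same substance, so the heat capacities cancel and
# the result is the mass-weighted mean temperature.
print(mixing_temperature(1.0, 4186, 80.0, 2.0, 4186, 20.0))  # 40.0
```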
Richmann's law
[ "Mathematics" ]
630
[ "Mathematical objects", "Scientific laws", "Equations" ]
72,150,315
https://en.wikipedia.org/wiki/List%20of%20electric%20truck%20makers
This is a list of electric truck makers that have produced medium- and heavy-duty commercial battery-powered all-electric trucks. Multiple-brand corporations The following truck brands are owned by corporations which hold multiple automobile and truck brands. Hyundai-Kia In 2020, Hyundai sold over 9,000 units of its Porter Electric truck in South Korea while Kia sold over 5,000 units of the Kia Bongo EV in the same market. Mercedes-Benz Group Mercedes-Benz Mercedes-Benz began delivering eActros units to 10 customers in September 2018 for a two-year real-world test. Customers include Dachser, Edeka, Hermes, Kraftverkehr Nagel, Ludwig Meyer, Pfenning Logistics, TBS Rhein-Neckar and Rigterink of Deutschland, and Camion Transport and Migros of Switzerland. In 2023, the eActros 600 with a 621 kWh battery and a range of 500 km was presented, with production starting in 2024. Daimler AG Freightliner Freightliner began delivering e-M2 trucks to Penske in December 2018, and will commercialize its larger e-Cascadia in 2019. Since 2023, Daimler has offered the MT50e electric step van with the same cargo capacity and dimensions as its diesel counterpart. The 2024 model offers level 2 home charging, which was absent on the 2023 model. Mitsubishi Fuso Mitsubishi Fuso began deliveries of the eCanter in 2017. Rizon Daimler launched the all-electric truck Rizon brand in the United States in 2023. Journalists questioned whether the Rizon trucks are rebranded Mitsubishi Fuso eCanter trucks, but Daimler did not address these questions. Paccar DAF DAF delivered its first CF semi-truck to Jumbo for testing in December 2018. It uses a VDL powertrain. The logistics company Tinie Manders Transport received a unit in February 2019, and Contargo in Germany received two units in May. Peterbilt In early 2018, Peterbilt unveiled a partnership with Meritor and TransPower, who will supply all-electric drivetrain systems for two Peterbilt vehicle platforms. They will produce twelve Class 8 579EV day cab tractors and three 520EV trash trucks that will be tested for about a year. In January 2019, Peterbilt unveiled its medium-duty 220EV, also made in partnership with Meritor and TransPower. Six units should be delivered to its major customer in 2019. The manufacturer expects to have a total of more than 30 electric trucks on the road by the end of 2019. Tata The Tata Ultra T.7 is India's first fully electric truck. The truck has a modern design and a zero-emission powertrain. It is designed for a payload of 3,692–4,935 kg, weighs 7,490 kg and is equipped with six wheels. Tata Motors also launched an electric version of the Tata Ace. It is a small commercial vehicle which is designed to be used in cities. The Ace EV is the first product featuring Tata Motors' EVOGEN powertrain. It is powered by a 27 kW (36 hp) motor with 130 N·m of peak torque, and offers a cargo volume of 208 cubic feet and a gradeability of 22%. Toyota Hino Hino Motors partnered with SEA Electric to provide Hino electric trucks using the SEA-Drive powertrain. The trucks are scheduled to become available in 2024 in the United States. SEA Electric has been installing its electric powertrains in medium and heavy-duty trucks and buses since 2017. Volkswagen AG MAN MAN began delivering a dozen units of various e-TGM trucks in September 2018 for testing purposes with different customers. Serial production was scheduled to begin in 2022. Volvo AB Mack Mack unveiled the LR refuse truck in May 2019. 
Its commercialization should begin in 2019. New York City Department of Sanitation will test one unit beginning in 2020. Renault Trucks Renault Trucks, part of Volvo, began selling an electric version of its Maxity small truck in 2010. Renault Trucks was the first to build heavy-duty trucks, with three prototypes of electric Renault Midlum and a later Renault D tested in real conditions by different customers (Carrefour, Nestlé, Guerlain) for a few years between 2012 and 2016. A prototype D truck was delivered to Delanchy in November 2017. After testing is completed, Renault will commercialize its D and D Wide trucks in 2019. They will be built in France alongside their Volvo counterparts. Renault Trucks has unveiled the models of its heavy-duty all-electric range in November 2022. The Renault Trucks E-Tech T and C, which are for regional distribution and construction, will be produced in series at the Bourg-en-Bresse factory from 2023. Volvo Volvo planned to launch their first mass-produced electric FE and FL trucks in early 2021, to be built in France alongside their Renault counterparts. An electric VNR semi-trailer truck was delivered to North American customers for testing in 2021. An updated VNR Electric is scheduled to begin production in 2022, in Dublin, Virginia. Single-brand corporations Alkè Some of the electric cars made by Alkè (for example the Alkè ATX 100 E) are used in soccer stadiums as open ambulances. The operator of London's cycle hire scheme uses a small number of Alkè electric utility vehicles (alongside other cars and vans) to tow trailers for distributing bicycles. Autocar trucks Autocar's E-ACTT is the fully-electric version of its leading ACTT terminal tractor model. Andrew Taitz, chairman of Autocar said, "The E-ACTT is the only original equipment manufacturer (OEM) terminal tractor with an OEM developed electric vehicle system, all Autocar". Autocar announced two alpha units of the E-ACX low cabover model began field testing in August 2022. BYD In China, BYD sold 7,969 all-electric/PHEV/hydrogen commercial vehicles in 2018, and 3,836 of them in 2019. These figures exclude buses. The manufacturer sells light-, medium- and heavy-duty electric trucks. The heaviest of them is the 8TT, which is a Class 8 semi-tractor equipped with a 435 kWh battery. The Chinese manufacturer gained a foothold in the US market: its customers include Anheuser-Busch, which deployed 21 electric semi-tractors from BYD in California. E-Force One In January 2014, COOP Switzerland began operating an 18-ton (16 metric ton) electric truck with a replaceable battery. 18 square meters of photovoltaic elements are positioned on its roof. The truck's battery has a capacity of 300 kWh. The solar panels along with regenerative braking provide 23 percent of the total energy. The range is 240 km per day. Energy consumption is 130 kWh per 100 km. Net of the solar/regenerative energy it consumes about 100 kWh per 100 km, about the energy needed by a comparable diesel engine. The truck weighs eight tons, with a gross vehicle weight of 18 tonnes and costs 380,000 Swiss francs. It is about twice as expensive as the diesel version. The truck is based on an Iveco Stralis chassis. The truck's operating price is 10 francs per 100 kilometres, much less than the diesel version at 50 francs per 100 kilometres. The truck has two LiFePO4 batteries with a capacity of 120 kWh with a weight of 1300 kg. The battery can be replaced within 10 minutes. 
Maintenance and the service life are not higher than a comparable diesel truck. Two trucks began operating in mid-2014 at Lidl in Switzerland and one at Feldschlösschen Beverages Ltd. In June 2015, Pistor began operating one. Shipping company Meyer Logistics uses refrigerated models in Berlin. EVage Motors EVage Motors is an Indian electric vehicle manufacturer focusing on the commercial electric vehicle sector. The company aims to enhance last-mile delivery solutions through its electric vehicles. EVage's flagship product, is a last-mile delivery van, the FR8. GGT Electric In 2011, GGT Electric, an automotive engineering, design and manufacturing company based in Milford, Michigan, introduced a new line of all-electric trucks for sale. GGT has developed LSV zero-emission electric vehicles for fleet markets, municipalities, universities, and state and federal government. The company offers 4-door electric pick-up trucks, electric passenger vans, and flatbed electric trucks with tilt and dump capability. Haul truck The company Lithium Storage GmbH is building together with the company Kuhn Switzerland AG a battery-powered haul truck. The vehicle is to go the end of 2016 in operation. The dump truck weighs 110 tons. The chassis is a Komatsu 605–7. The vehicles have an electric motor with 800 hp and can thus produce 5900 Nm. The battery is a 600 kWh lithium-ion battery. For comparison, diesel vehicles of this type consume approximately 50,000 to 100,000 liters of diesel per year. Motiv Power Systems Beginning in 2015 Motiv's delivery trucks have been in service with AmeriPride in linen delivery applications. Motiv electrified chassis were certified by the California Air Resources Board, enabling them to be sold through California state programs including HVIP. Motiv collaborates with existing truck body manufacturers to allow them to sell electric options using the electrified chassis as a drop in replacement on their existing manufacturing lines. An example of this type of manufacturing can be seen in the development of delivery vans with both Morgan Olson and Utilimaster. Nikola Motors Nikola Motors produces a battery-electric variant of their Nikola Tre truck. As of July, 2024 Nikola motor has produced more than 225 BEV & FCEV class 8 trucks. And currently 200+ Class 8 Trucks are on the roads in California and Canada. Orange EV Riverside, Missouri-based Orange EV began producing Class-8 all-electric terminal trucks for industrial use in 2012. It operates in 35 states in America and is the manufacturer in the country with the most zero emission trucks in operation. The company uses a turnkey approach that enables its clients to get end-to-end support. In June 2022, the 3rd generation 4x2 e-TRIEVER truck was announced. Its e-TRIEVER truck has a gross vehicle weight rating of 81,000 lb, a max speed of 25 mph, maximum lift height of 62 inches, and battery capacity of 100 or 180 kWh. After delivering over 950 Pure Electric Yard Dogs to the North American Market, In 2023, Orange EV introduced a second model, the HUSK-e. Designed to work in Port and Rail operations, it is capable of pulling up to 180k lb GCVW up to 32 mph and has a 243 kWh LFP battery pack. PMP PMP, an electric vehicle design, manufacturing and engineering company based in Saskatchewan, Canada introduced a full line of electric heavy-duty underground mining trucks ranging from utility vehicles to 12-passenger personnel transport vehicle. Rivian Rivian has produced Rivian EDV since 2019. 
The original batch of vehicles were produced for Amazon. Smith Electric Vehicles Launched in 2006, the Newton electric truck is an all-electric commercial vehicle from Smith Electric Vehicles. The Newton comes in three GVW configurations: , and . Each is available in short, medium or long wheelbase. The truck was launched with a 120 kilowatt electric induction motor from Enova Systems, driven by Lithium-Ion Iron Phosphate batteries supplied by Valence Technology. In 2012 Smith re-released the Newton with new driveline and battery systems that were developed in-house. Smith offers the battery pack in either 80 kWh or 120 kWh configurations. , the Newton is sold worldwide and available with three different payload capacities from . The lithium-ion battery pack is available in varying sizes that deliver a range from and a top speed of . Terberg The Dutch manufacturer, Terberg, has provided an electrically powered 40-ton truck for transporting material on public roads; it is reported to commute eight times a day between a logistics center and the Munich BMW plant. The truck battery takes three to four hours to charge. When fully charged, the vehicle has a range of up to 100 kilometres. Thus, the electric truck can theoretically complete a full production day without any additional recharging. Compared to a diesel engine truck, the electric truck will save 11.8 tons of CO2 annually. Tesla Tevva In September 2021, Tevva unveiled its Tevva Truck – the first British designed 7.5-tonne electric truck intended for mass production in the UK. The truck has a range of up to 160 miles (250 km) in pure battery electric vehicle (BEV) form or up to 310 miles (500 km) with its patented range extender technology (REX). The Tevva Truck can carry up to 16 euro pallets and over two tonnes payload at 7.5-tonnes Gross Vehicle Weight (GVW). The total cost of ownership is comparable to a diesel; parity is achieved at approximately 3,000 km or when 500 litres of diesel is consumed per month. VinFast At the 2024 Consumer Electronics Show, VinFast introduced their first all-electric mid-size pickup truck VF Wild with length of 209 inches (5324 mm) and a width of 79 inches (1997 mm). Volta The company has developed a truck that gives the driver a 220-degree view, similar to what one might see on a city bus. The driver's seat is in the centre of the cab. On the inside of the 16-ton truck, called Volta Zero, sits a single unit containing an electric motor, transmission and rear axle supplied by OEM supplier Meritor. The truck has a range of between 150 and 200 km per charge. Workhorse Workhorse manufactures the Workhorse W56. Xos Xos, Inc. manufactures and sells Class 5, 6, 7, and 8 commercial and heavy-duty battery electric trucks for "last-mile" and "back-to-base" routes. See also List of electric bus makers Electric van Electric vehicle conversion References Makers and models Electrical-engineering-related lists Truck-related lists Lists of manufacturers
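The E-Force One figures quoted earlier in this list (130 kWh per 100 km gross consumption, about 23% covered by the solar panels and regenerative braking, and an operating cost of roughly 10 francs per 100 km) are mutually consistent under a simple assumption about the electricity price. The sketch below makes that assumption explicit; the 0.10 CHF/kWh grid price is a placeholder, not a value from the article.

```python
# Consistency check of the E-Force One figures quoted above.
gross_kwh_per_100km = 130.0           # total energy consumption
solar_regen_fraction = 0.23           # share covered by solar panels + regenerative braking
electricity_price_chf_per_kwh = 0.10  # assumed grid price, not from the source

net_kwh_per_100km = gross_kwh_per_100km * (1 - solar_regen_fraction)
cost_chf_per_100km = net_kwh_per_100km * electricity_price_chf_per_kwh

print(round(net_kwh_per_100km))       # ~100 kWh per 100 km, as stated in the article
print(round(cost_chf_per_100km, 1))   # ~10 CHF per 100 km, matching the quoted operating cost
```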
List of electric truck makers
[ "Engineering" ]
2,913
[ "Electrical engineering", "Electrical-engineering-related lists" ]
59,833,659
https://en.wikipedia.org/wiki/Gapped%20Hamiltonian
In many-body physics, most commonly within condensed-matter physics, a gapped Hamiltonian is a Hamiltonian for an infinitely large many-body system where there is a finite energy gap separating the (possibly degenerate) ground space from the first excited states. A Hamiltonian that is not gapped is called gapless. The property of being gapped or gapless is formally defined through a sequence of Hamiltonians on finite lattices in the thermodynamic limit. An example is the BCS Hamiltonian in the theory of superconductivity. In quantum many-body systems, ground states of gapped Hamiltonians have exponential decay of correlations. In quantum field theory, a continuum limit of many-body physics, a gapped Hamiltonian induces a mass gap. References Quantum mechanics
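The article notes that the property is defined through a sequence of finite-lattice Hamiltonians in the thermodynamic limit; one standard way to write such a definition (a sketch of the usual convention, not a quotation from a specific reference) is:

```latex
% A family of Hamiltonians {H_L} on lattices of size L is gapped if the
% spectral gap above the ground space stays bounded away from zero
% uniformly in the system size.
\[
\exists\, \Delta > 0 \ \text{such that}\ \liminf_{L \to \infty}
\bigl( E_1(H_L) - E_0(H_L) \bigr) \ \geq\ \Delta ,
\]
% where E_0(H_L) is the (possibly degenerate) ground-state energy and
% E_1(H_L) is the lowest energy above the ground space.
```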
Gapped Hamiltonian
[ "Physics" ]
167
[ "Theoretical physics", "Quantum mechanics", "Quantum physics stubs" ]
67,740,110
https://en.wikipedia.org/wiki/Geometric%20and%20Topological%20Inference
Geometric and Topological Inference is a monograph in computational geometry, computational topology, geometry processing, and topological data analysis, on the problem of inferring properties of an unknown space from a finite point cloud of noisy samples from the space. It was written by Jean-Daniel Boissonnat, Frédéric Chazal, and Mariette Yvinec, and published in 2018 by the Cambridge University Press in their Cambridge Texts in Applied Mathematics book series. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. Topics The book is subdivided into four parts and 11 chapters. The first part covers basic tools from topology needed in the study, including simplicial complexes, Čech complexes and Vietoris–Rips complex, homotopy equivalence of topological spaces to their nerves, filtrations of complexes, and the data structures needed to represent these concepts efficiently in computer algorithms. A second introductory part concerns material of a more geometric nature, including Delaunay triangulations and Voronoi diagrams, convex polytopes, convex hulls and convex hull algorithms, lower envelopes, alpha shapes and alpha complexes, and witness complexes. With these preliminaries out of the way, the remaining two sections show how to use these tools for topological inference. The third section is on recovering the unknown space itself (or a topologically equivalent space, described using a complex) from sufficiently well-behaved samples. The fourth part shows how, with weaker assumptions about the samples, it is still possible to recover useful information about the space, such as its homology and persistent homology. Audience and reception Although the book is primarily aimed at specialists in these topics, it can also be used to introduce the area to non-specialists, and provides exercises suitable for an advanced course. Reviewer Michael Berg evaluates it as an "excellent book" aimed at a hot topic, inference from large data sets, and both Berg and Mark Hunacek note that it brings a surprising level of real-world applicability to formerly-pure topics in mathematics. References Mathematics books Computational geometry Computational topology Geometry processing 2018 non-fiction books Cambridge University Press books
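To make the inference setting concrete, the sketch below (illustrative only, not code from the book; the sample data, the scale parameter eps and the function name are assumptions made here) builds a Vietoris–Rips complex up to triangles from a noisy sample of a circle using plain NumPy.

```python
# Sketch: build a Vietoris-Rips complex (edges and triangles) from a point
# cloud. Two points are joined by an edge when they lie within distance eps;
# a triple forms a triangle whenever all three of its edges are present.
import itertools
import numpy as np

def vietoris_rips(points: np.ndarray, eps: float):
    """Return (edges, triangles) of the Vietoris-Rips complex at scale eps."""
    n = len(points)
    # Pairwise Euclidean distance matrix.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # 1-simplices: pairs of points closer than eps.
    edges = {(i, j) for i, j in itertools.combinations(range(n), 2)
             if dist[i, j] <= eps}
    # 2-simplices: triples whose three edges are all present (clique condition).
    triangles = [(i, j, k) for i, j, k in itertools.combinations(range(n), 3)
                 if {(i, j), (i, k), (j, k)} <= edges]
    return sorted(edges), triangles

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy samples from a circle: the "unknown space" is a 1-sphere.
    theta = rng.uniform(0, 2 * np.pi, 40)
    cloud = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (40, 2))
    edges, triangles = vietoris_rips(cloud, eps=0.5)
    print(len(edges), "edges,", len(triangles), "triangles")
```

At a suitable scale the resulting complex recovers the connectivity of the underlying circle, which is the kind of topological information whose rigorous inference is the subject of the book's later chapters.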
Geometric and Topological Inference
[ "Mathematics" ]
436
[ "Computational topology", "Topology", "Computational mathematics", "Computational geometry" ]
54,992,251
https://en.wikipedia.org/wiki/Botrytis%E2%80%93induced%20kinase%201
Botrytis–induced kinase 1 (BIK1) is a membrane-anchored enzyme in plants. It is a kinase that provides resistance to necrotrophic and biotrophic pathogens. As its name suggests, BIK1 is only active after being induced by Botrytis infection. When Botrytis cinerea is present, the BIK1 gene is transcribed so that the kinase is present to defend the cell. BIK1 functions to regulate the amount of salicylic acid (SA) present in the cell. When Botrytis cinerea or Alternaria brassicicola or any other necrotrophic pathogen is present, BIK1 is transcribed to regulate the pathogen response mechanisms. When BIK1 is present, SA levels decrease, allowing the necrotrophic response to take place. When necrotrophic pathogens are not present, BIK1 is not transcribed and SA levels increase, limiting the necrotrophic resistance pathway. Only the pathogenic defense response that is initiated by BIK1 is dependent on SA levels. Non-pathogenic cellular functions occur independently. In terms of non-pathogenic cellular functions, BIK1 is described as a critical component of ethylene (ET) signaling and PAMP-triggered immunity to pathogens. Functions of BIK1 For cellular processes that are not directly related to pathogen resistance or defense, BIK1 does not utilize traditional defense-mediating hormones such as SA, JA, or ACC, but instead utilizes an herbicide known as paraquat, which produces reactive oxygen intermediates (ROIs). It is believed that SA, JA, and ACC have no effect on BIK1 induction because they are likely located downstream from the BIK1 gene, or it is possible that BIK1 operates completely independently. However, it is believed that BIK1 does play a vital role in the ET signaling pathway. Based on the signaling function of BIK1 in ET responses, it is believed that Botrytis-induced kinase 1 accumulates response signals that it receives from upstream regulators and then integrates them into its own resistance mechanism. BIK1 is a receptor-like cytoplasmic kinase (RLCK) that associates with a cell-surface receptor, FLS2, and a co-receptor kinase, BAK1, to transduce signals when a PAMP is detected. In order for BIK1 to be activated, site-specific phosphorylation must occur. Effects of Phosphorylation on BIK1 Function Because BIK1 is a possible regulator of the FLS2-BAK1 complex, it is speculated that in vitro, BAK1 phosphorylates BIK1, which then phosphorylates both FLS2 and BAK1. However, in vivo, BIK1 is not phosphorylated until about 5-10 minutes after the addition of FLS2, and the peak phosphorylation occurs just after the phosphorylation of the FLS2-BAK1 complex. It is speculated that BIK1 activation might be enhanced through transphosphorylation by BAK1 rather than by FLS2, because FLS2 more likely serves as a scaffold protein for the arrangement of the BAK1-FLS2 complex. This hypothesis will require more testing in vivo. Research has shown that BIK1 and BAK1 are signaling partners for the flagellin receptor FLS2 and that the three together initiate the defense response. However, BIK1 and BAK1 phosphorylate different residues of the FLS2 receptor, with the exception of only a select few. This suggests that both BAK1 and BIK1 play unique roles in the defense response through a series of phosphorylation reactions with one another and the flagellin receptor FLS2. BIK1 effect on Plant Growth and Development Comparisons of root systems in plants with an expressed BIK1 gene and in plants with a loss-of-function mutant show that, without an expressed BIK1 gene, roots grow more laterally, in greater numbers, and with shorter primary roots.
With a functional BIK1 gene, roots grew downward into the soil and had fewer root hairs. Additionally, without a functional BIK1 gene, leaves showed serrated edges and considerable wrinkles, whereas leaves on plants with a functional BIK1 gene were stronger and smoother. Flowering plants that lack a functional BIK1 gene flower an average of six days before those with a functional BIK1 gene and show weaker stem strength, reduced fertility, and smaller siliques. The BIK1 protein contributes to overall stronger stems, broader leaves, and a healthy flowering timeline. Plants lacking a BIK1 protein, or that have a BIK1 protein whose functions are being inhibited, may exhibit a shorter flowering period and a smaller stature overall. This suggests that BIK1 plays a significant role in a plant's ability to grow properly as well as in its ability to maintain the rigidity and stem strength that contribute to overall plant health. Research Current research regarding Botrytis-induced kinase 1 aims to determine how BIK1 interacts with MAPK pathway proteins as well as with the OXI1 kinase. Also, studies are being conducted to determine the relationship between BIK1 and the phosphorylating homolog kinases PEPR1 and PEPR2. Though it is believed that PEPR1 and PEPR2 act as enzymes toward BIK1 and phosphorylate the kinase, research is still being done to examine the effects of the interaction on a broader scale. Previously published research suggests that PEPR1 and PEPR2 work with the ET signal pathway and Botrytis-induced kinase 1 in order to amplify the defense mechanism in the immune response. Additionally, future research may explore the mechanism that allows BIK1 and BAK1 to cooperate with the FLS2 receptor to initiate the defense response. While it is known that the three work together and that each is required for the process to occur efficiently, the exact relationship between the three remains unknown, and the specific binding residues for each component have yet to be determined in vivo. References EC 2.7 Immune system Protein kinases Signal transduction
Botrytis–induced kinase 1
[ "Chemistry", "Biology" ]
1,280
[ "Immune system", "Signal transduction", "Organ systems", "Biochemistry", "Neurochemistry" ]
54,993,244
https://en.wikipedia.org/wiki/Environmental%20personhood
Environmental personhood or juridic personhood is a legal concept which designates certain environmental entities the status of a legal person. This assigns to these entities, the rights, protections, privileges, responsibilities and legal liability of a legal personality. Because environmental entities such as rivers and plants can not represent themselves in court, a "guardian" can act on the entity's behalf to protect it. Environmental personhood emerged from the evolution of legal focus in pursuit of the protection of nature. Over time, focus has evolved from human interests in exploiting nature, to protecting nature for future human generations, to conceptions that allow for nature to be protected as intrinsically valuable. This concept can be used as a vehicle for recognising Indigenous peoples' relationships to natural entities, such as rivers. Environmental personhood, which assigns nature (or aspects of it) certain rights, concurrently provides a means to individuals or groups such as Indigenous peoples to fulfill their human rights. Background The United States Professor Christopher D. Stone first discussed the idea of attributing legal personality to natural objects in the 1970s, in his article "Should trees have standing? Towards legal rights for natural objects". A legal person cannot be owned; therefore, no ownership can be attributed to an environmental entity with established legal personality. Standing is directly related to legal personality. Entities with standing, or locus standi, have the right or capacity to bring action or appear in court. Environmental entities cannot themselves bring action or appear in court. However, this action or standing can be achieved on behalf of the entity by a representing legal guardian. Representation could increase protection of culturally significant aspects of the natural environment, or areas vulnerable to exploitation and pollution. Although there is no federal law in the United States implementing environmental personhood, the idea has been advocated for by a US Supreme Court Justice. In the decision of the 1972 US Supreme Court case Sierra Club v. Morton, Justice William Douglas wrote a dissenting opinion arguing that certain "environmental elements" should have locus standi, and that people with a meaningful relationship to that environmental element should be able to act on its behalf for its protection. As of June 2021, at least 53 initiatives in 12 countries have used the concept of 'person' in their legal text. The Sierra Club, an environmental advocacy group, brought this suit against then Secretary of the Interior of the United States, Roger C. B. Morton stating that the federal government, according to the Administrative Procedure Act, could not grant permits for developers to build infrastructure – specifically a highway, powerlines, and a ski resort – in the Mineral King Valley, part of the Sequoia National Forest. The Sierra Club aimed to protect this undeveloped land within the national forest, but the U.S. Court of Appeals for the Ninth Circuit had stated that because the members of the Sierra Club would not be directly affected they could not sue under the Administrative Procedure Act, which "provides standards for judicial review" for instances where a person is negatively impacted by an agency action, such as granting a permit. 
The Supreme Court agreed that the Sierra Club could not sue under the Administrative Procedure Act, as it could not show that the actions of the defendant caused or would cause injury to its members. This ruling led Supreme Court Justice William Douglas to write his dissenting opinion, arguing that people should be allowed to sue on behalf of non-living things writing, "[t]hose who have that intimate relation with the inanimate object about to be injured, polluted, or otherwise despoiled are its legitimate spokesmen." This opinion is shared by those who continue to argue for environmental personhood in the United States and around the world. Domestic rights of nature New Zealand In 2014, Te Urewera National Park was declared Te Urewera, an environmental legal entity. The area encompassed by Te Urewera ceased to be a government-owned national park and was transformed into freehold, inalienable land owned by itself. Following the same trend, New Zealand's Whanganui River was declared to be a legal person in 2017. This new legal entity was named Te Awa Tupua and is now recognised as "an indivisible and living whole from the mountains to the sea, incorporating the Whanganui River and all of its physical and metaphysical elements." The river would be represented by two guardians, one from the Whanganui iwi and the other from the Crown. Also in 2017, the New Zealand government signed an agreement granting similar legal personality to Mount Taranaki and pledging a name change for Egmont National Park, which surrounds the mountain. India The Ganges and Yamuna Rivers are now considered legal persons in an effort to combat pollution. The rivers are sacred to Hindu culture for their healing powers and attraction of pilgrims who bathe and scatter the ashes of their dead. The rivers have been heavily polluted by 1.5 billion litres of untreated sewage and 500 million litres of industrial waste entering the rivers daily. The High Court in the northern Indian state of Uttarakhand ordered in March 2017 that the Ganges and its main tributary, the Yamuna, be assigned the status of legal entities. The rivers would gain "all corresponding rights, duties and liabilities of a living person." This decision meant that polluting or damaging the rivers is equivalent to harming a person. The court cited the example of the New Zealand Whanganui River, which was also declared to possess full rights of a legal person. This development of environmental personhood has been met with scepticism as merely announcing that the Ganges and Yamuna are living entities will not save them from significant, ongoing pollution. There is a possible need to change long-held cultural attitudes towards the Ganges, which hold that the river has self-purifying properties. There is further criticism that the guardianship of the rivers was only granted to Uttarakhand, a region in northern India which houses a small part of the rivers' full extent. The Ganges flows for 2,525 km through Uttarakhand, Uttar Pradesh, Bihar, Jharkhand and West Bengal, with only a 96 km stretch running through Uttarakhand. Only a small section of the 1,376 km Yamuna tributary runs through Uttarakhand – which also crosses through the states of Haryana, Himachal Pradesh, Delhi and Uttar Pradesh. Regardless of scepticism surrounding the decision of the Uttarakhand High Court, proclaiming these vulnerable rivers as legal entities invokes a movement of change towards environmental and cultural rights protection. 
The decisions may be built upon as a foundation for future environmental legislative change. United States In 2006, the borough of Tamaqua, Pennsylvania, worked with a rights of nature group called the Community Environmental Legal Defense Fund (CELDF). Together, the groups drafted legislation to protect the community and its environment from the dumping of toxic sewage. Since 2006, CELDF has assisted over 30 communities in ten states across the United States in developing local laws codifying the rights of nature. CELDF also assisted in the drafting of Ecuador's 2008 constitution following a national referendum. Besides Tamaqua, several other towns throughout the United States have drafted legislation that would, in effect, give nature natural rights. In 2008, residents in a town by the name of Shapleigh, Maine, added new provisions to the town's legal code. The new sections granted rights to the natural environment and bodies of water that surrounded Shapleigh, and purported to strip the rights of corporations granted by the United States Constitution. What prompted the change to Shapleigh's legal code was a plan by the Nestle Corporation, which owns several bottled water brands such as Poland Spring, to pump truckloads of groundwater from Shapleigh to a water bottling facility. As of 2019, no lawsuits have been filed against Shapleigh, Maine, for the change in the town's legal code, and the Nestle Corporation has not chosen to challenge the code either. In this case the CELDF did not assist the residents of Shapleigh in drafting sections 99-11 and 99-12 of their legal code; they were instead assisted by lawyers from Vermont. In April 2013, the CELDF assisted officials in Mora County, New Mexico, in creating an ordinance that limited the ability of corporations to extract gas and oil, and gave rights to the natural ecosystems and bodies of water that resided within Mora County. This ordinance made Mora County the first place within the United States to officially ban the production of gas and oil within a defined area. A lawsuit was filed against Mora County on November 12, 2013, which asserted that Mora County's ordinance infringed on corporations' rights, especially those under the First, Fifth, and Fourteenth Amendments. In January 2015, Mora County's ordinance was overturned by U.S. District Judge James O. Browning, who viewed the ordinance as violating the First Amendment rights of corporations. In early 2014, Grant Township, Indiana County, Pennsylvania, enlisted the CELDF's help in drafting an ordinance that would give the natural bodies of water surrounding Grant Township natural rights. A company named Pennsylvania General Energy (PGE) had converted an old oil and gas well into a "wastewater injection well," and residents became concerned about what that could mean for the natural ecosystems surrounding their township. The water in a wastewater injection well is waste that is left over from a process called fracking. This water can contain harmful pollutants and chemicals that can poison groundwater. In Grant Township, most residents rely on the Little Mahoning Creek for their water needs. If the wastewater injection well were to leak, there is a possibility it could contaminate the Little Mahoning. The risk of contamination is what prompted Grant Township residents to ask the CELDF for assistance in drafting an ordinance. Grant Township's ordinance gave natural rights to the ecosystems and bodies of water that were within the borders of Grant Township.
Grant Township's ordinance also stripped corporations of their rights, deeming that corporations would not be seen as "persons" within the borders of Grant Township. In August 2014, PGE sued Grant Township, beginning a legal battle that would last almost five years. Grant Township lost the lawsuit against PGE in April 2019, and Judge Susan Baxter ordered Grant Township to pay PGE's legal expenses, which were over $100,000. In addition, Grant Township's ordinance was declared invalid. On 26 February 2019, voters in Toledo, Ohio, passed the Lake Erie Bill of Rights. The main point of the Lake Erie Bill of Rights is that Lake Erie has the right to "flourish." Residents of Toledo and surrounding areas have suffered periods when the tap water, which comes from Lake Erie, was not safe to drink or use due to pollution. Cases of unsafe water conditions, amongst other pollution problems, are what prompted residents of Toledo to ask the CELDF for help. On 27 February 2019, the day after the Lake Erie Bill of Rights was passed by voters, a lawsuit was filed by an Ohio farmer. On 27 February 2020, U.S. District Judge Jack Zouhary invalidated the bill, ruling it was "unconstitutionally vague" and beyond "the power of municipal government in Ohio." In the summer of 2019, the Yurok tribe in northern California gave the Klamath River personhood status. Ecuador The rights of nature "to exist, persist, maintain and regenerate its vital cycles" have been proclaimed under Ecuador's 2008 constitution. This occurred after a national referendum in 2008, allowing the Ecuadorian constitution to reflect rights for nature, a world first. Every person and community has the right to advocate on nature's behalf. The Constitution proclaims that the "State shall give incentives to natural persons and legal entities and to communities to protect nature and to promote respect for all the elements comprising an ecosystem." The first successful case implementing the rights of nature under Ecuadorian constitutional law was presented before the Provincial Court of Justice of Loja in 2011. This case involved the Vilcabamba River as the plaintiff, representing itself with its own rights to 'exist' and 'maintain itself', as it attempted to halt construction of a government highway project interfering with the natural health of the river. The case was brought before the court by two individuals, Richard Frederick Wheeler and Eleanor Geer Huddle, as legal guardians acting in favour of nature – specifically the Vilcabamba River. A constitutional injunction was granted in favour of the Vilcabamba River and against the provincial government of Loja, which was attempting to conduct the environmentally harmful project. The project was halted and the area was to be rehabilitated. Bolivia The constitutional change in Ecuador was followed legislatively by Bolivia in 2010, which passed the 'Law of the Rights of Mother Earth' (Ley de Derechos de la Madre Tierra). This legislation assigns Mother Earth the character of 'a collective subject of public interest', with inherent rights specified in the law. The Law of the Rights of Mother Earth gives aspects of legal personhood to the natural environment. Judicial action can be taken for infringements against individuals and groups as part of Mother Earth as 'a collective subject of public interest'.
The legislation states that "Mother Earth is the dynamic living system made up of the indivisible community of all living systems, living, interrelated, interdependent and complementary, sharing a common destiny." Colombia The Colombian Constitutional Court found in November 2016 that the Atrato River basin possesses rights to "protection, conservation, maintenance, and restoration." This ruling came about as a result of degradation of the river basin from mining, which was impacting nature and harming Indigenous peoples and their culture. The court referred to the New Zealand declaration of the Whanganui River as a legal person holding environmental personhood. The court ordered that joint guardianship would be undertaken in the representation of the Atrato River basin. Similarly to the New Zealand declaration, the representatives would come from the national government and the Indigenous people living in the basin. The court stated: "(I)t is the human populations that are interdependent of the natural world – and not the opposite – and that they must assume the consequences of their actions and omissions with the nature. It is a question of understanding this new sociopolitical reality with the aim of achieving a respectful transformation with the natural world and its environment, as has happened before with civil and political rights… Now is the time to begin taking the first steps to effectively protect the planet and its resources before it is too late..." In April 2018, the Supreme Court of Colombia issued a decision recognizing the Amazon River ecosystem as a subject of rights and a beneficiary of protection. Canada The Magpie River in the Côte-Nord region of Quebec was given a set of rights, including the right to take legal action, by the Innu Council of Ekanitshit and Minganie county. Representatives can be appointed by the regional municipality and the Innu to act on behalf of the river and take legal action to protect its rights, which they define as: "the right to flow; the right to respect for its cycles; the right for its natural evolution to be protected and preserved; the right to maintain its natural biodiversity; the right to fulfil its essential functions within its ecosystem; the right to maintain its integrity; the right to be safe from pollution; the right to regenerate and be restored; and finally, the right to sue." This aligns with the belief that the river is an independent, living entity separate from human activity. Spain In Spain, the law recognizes the environmental personhood of the Mar Menor. Arguments for and against The concept of environmental personhood is controversial, even among environmentalists. One can advocate for a legal framework that acknowledges the rights of nature but not believe that environmental personhood is the right way to implement it. Proponents of environmental personhood argue that it is valuable to be able to sue on behalf of the environment, because it would allow for environmental protection that does not rely on harm being done to human beings. Environmental personhood also better honors the significant relationships of Indigenous peoples to their environment. However, there are arguments against the concept of environmental personhood. One concern is that the status of legal personhood implies a right not only to sue but to be sued. Can a river be liable for damage it causes in a flood? Would the guardians of that river be asked to pay for damages caused by natural disasters?
Community Environmental Defense Fund lawyer Lindsey Schromen-Wawrin writes that this concern is "one of the things that could derail in my opinion the ability for rights in nature to be a check on destructive activities and instead could set up kind of like natural resource trustees for ecosystems where there's a flood and now the ecosystem has to pay out of the fund that would otherwise have gone to restoring habitat that had been destroyed." Another concern is that even with a legal right to sue on behalf of a natural entity, lawsuits are expensive. There are issues of environmental justice if the cost to exercise the right to sue is inaccessible. Other issues arise when environmental entities exist beyond the bounds of the jurisdiction that decided on environmental personhood, which was the case with a river which held rights as a legal person in Uttarakhand, India. According to reporting by National Public Radio, there are also cases where the rights of environmental entities may be at odds with the rights of human beings, "Many of the [environmental personhood] laws have also been met with resistance from industry, farmers and river communities, who argue that giving nature personhood infringes on their rights and livelihoods." Significance for cultural human rights The recognition of the Whanganui River as a legal entity in New Zealand (Te Awa Tupua) encompassed a vivid sense of cultural "inalienable connection" to the local iwi and hapu of the river. Māori culture considers natural features such as the Whanganui River as ancestors and iwi hold deep connections with them as living entities. This inalienable connection of indigenous culture to their natural surroundings is apparent in other parts of the world such as Colombia where a similar environmental personhood declaration was made for the Atrato River basin. The lead negotiator for the Whanganui iwi, Gerrard Albert, said "we consider the river an ancestor and always have...treating the river as a living entity is the correct way to approach it, as an indivisible whole, instead of the traditional model for the last 100 years of treating it from a perspective of ownership and management." James D K Morris and Jacinta Ruru suggest that giving "legal personality to rivers is one way in which the law could develop to provide a lasting commitment to reconciling with Maori." This was the longest-running legal dispute in New Zealand. The Whanganui iwi had been fighting to assert their rights in harmony with the river since the 1870s. Ecocide The concept of environmental protection on behalf of the environment is not new, and widespread harm to the environment has a name: ecocide. The Independent Expert Panel for the Legal Definition of Ecocide defines ecocide as "unlawful or wanton acts committed with knowledge that there is a substantial likelihood of severe and either widespread or long-term damage to the environment being caused by those acts." There are advocates of making ecocide an international crime, like the crimes dealt with by the Rome Statute of the International Criminal Court (ICC). This would place ecocide alongside currently recognized international crimes like genocide, war crimes, and crimes against humanity. If added, ecocide would be the only crime "in which human harm is not a prerequisite for prosecution." This protection of nature for nature's sake is central to the advocacy behind environmental personhood. Do human beings need to be harmed to warrant legal action? 
The concept of ecocide is not new, nor is the advocacy for adding it to the Rome Statute of the ICC. Extraterrestrial With increased interest in extraterrestrial spaceflight in the 2020s, planetary personhood has been discussed for Mars (including Martian meteorites), but particularly for the Moon, recognizing the Moon as having memory and agency, with its surface interacting, changing and remembering. See also Corporate personhood Legal person Personhood Rights of nature Te Urewera Whanganui River References External links Whanganui River Maori Trust Board Whanganui's Official Tourism Portal Te Urewera the Tuhoe Homeland CELDF Website 2008 Constitution of Ecuador Legal entities Rights Environmental law legal terminology Corporate personhood Personhood Environmental law Environmental ethics
Environmental personhood
[ "Environmental_science" ]
4,240
[ "Environmental personhood", "Environmental ethics" ]
54,993,625
https://en.wikipedia.org/wiki/Phosphirene
Phosphirene is the hypothetical organophosphorus compound with the formula C2H2PH. As the simplest cyclic, unsaturated organophosphorus compound, phosphirene is the prototype of a family of related compounds that have attracted attention from researchers. Phosphirenes, that is, substituted phosphirene compounds in which one or more of the H's are replaced by organic substituents, are far more commonly discussed than the parent phosphirene. The first example of a phosphirene, 1,2,3-triphenylphosphirene, was prepared via trapping of the phosphinidene complex Mo(CO)5PPh with diphenylacetylene. Placement of the double bond between the carbon atoms provides a 1H-phosphirene, in which the phosphorus center is bonded to two carbon atoms and a hydrogen atom. Alternatively, placement of the double bond between the phosphorus center and a carbon atom generates a 2H-phosphirene. The first 2H-phosphirene was synthesized as early as 1987 by the Regitz group. However, the chemistry of 2H-phosphirenes was relatively dormant until a series of reports by the Stephan group. References Phosphorus heterocycles Three-membered rings Hypothetical chemical compounds
Phosphirene
[ "Chemistry" ]
279
[ "Theoretical chemistry", "Hypothetical chemical compounds", "Hypotheses in chemistry" ]
55,000,798
https://en.wikipedia.org/wiki/Minimal%20algebra
Minimal algebra is an important concept in tame congruence theory, a theory that has been developed by Ralph McKenzie and David Hobby. Definition A minimal algebra is a finite algebra with more than one element, in which every non-constant unary polynomial is a permutation on its domain. In simpler terms, it is an algebraic structure in which the non-constant unary polynomial operations (those involving a single input) behave like permutations (bijective mappings). These algebras provide intriguing connections between mathematical concepts and are classified into different types, including unary, affine, Boolean, lattice, and semilattice types. Classification A polynomial of an algebra is a composition of its basic operations, constant (nullary) operations and the projections. Two algebras are called polynomially equivalent if they have the same universe and precisely the same polynomial operations. A minimal algebra M falls into one of the following types (P. P. Pálfy): M is of type 1, or unary type, iff its polynomials are exactly the constant operations together with the essentially unary operations obtained from a group G of permutations, where A denotes the universe of M and G is a subgroup of the symmetric group over A (equivalently, M is polynomially equivalent to a G-set); M is of type 2, or affine type, iff M is polynomially equivalent to a vector space; M is of type 3, or Boolean type, iff M is polynomially equivalent to a two-element Boolean algebra; M is of type 4, or lattice type, iff M is polynomially equivalent to a two-element lattice; M is of type 5, or semilattice type, iff M is polynomially equivalent to a two-element semilattice. References Abstract algebra
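As an illustration of the definition in the article above (not part of the original article), the sketch below enumerates the unary polynomials of a small finite algebra by closing the identity map and the constant maps under the basic operations, and then checks the minimality condition; the two-element meet-semilattice serves as the worked example, and the function names are chosen here only for illustration.

```python
# Sketch: check whether a finite algebra is minimal in the sense above, i.e.
# it has more than one element and every non-constant unary polynomial is a
# permutation. Unary polynomials are generated by closing the identity map and
# the constant maps under the basic operations.
from itertools import product

def unary_polynomials(universe, operations):
    """Return all unary polynomials as tuples (p(a) for a in universe)."""
    n = len(universe)
    polys = {tuple(universe)}                      # the identity map x -> x
    polys |= {tuple([c] * n) for c in universe}    # the constant maps
    changed = True
    while changed:                                 # close under basic operations
        changed = False
        for op, arity in operations:
            for args in product(list(polys), repeat=arity):
                new = tuple(op(*[p[i] for p in args]) for i in range(n))
                if new not in polys:
                    polys.add(new)
                    changed = True
    return polys

def is_minimal(universe, operations):
    if len(universe) <= 1:
        return False
    for p in unary_polynomials(universe, operations):
        # Non-constant unary polynomials must be permutations of the universe.
        if len(set(p)) > 1 and sorted(p) != sorted(universe):
            return False
    return True

# Example: the two-element meet-semilattice ({0, 1}, min). Its only
# non-constant unary polynomial is the identity, so the algebra is minimal.
print(is_minimal([0, 1], [(min, 2)]))              # expected: True
```

Applied to ({0, 1}, x XOR y), the same check also returns True; that algebra is polynomially equivalent to the one-dimensional vector space over GF(2) and so falls under the affine type in the classification above.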
Minimal algebra
[ "Mathematics" ]
319
[ "Abstract algebra", "Algebra" ]
55,002,259
https://en.wikipedia.org/wiki/Non-canonical%20base%20pairing
Non-canonical base pairs are planar hydrogen bonded pairs of nucleobases, having hydrogen bonding patterns which differ from the patterns observed in Watson-Crick base pairs, as in the classic double helical DNA. The structures of polynucleotide strands of both DNA and RNA molecules can be understood in terms of sugar-phosphate backbones consisting of phosphodiester-linked D-2′-deoxyribofuranose (D-ribofuranose in RNA) sugar moieties, with purine or pyrimidine nucleobases covalently linked to them. Here, the N9 atoms of the purines, guanine and adenine, and the N1 atoms of the pyrimidines, cytosine and thymine (uracil in RNA), respectively, form glycosidic linkages with the C1′ atom of the sugars. These nucleobases can be schematically represented as triangles with one of their vertices linked to the sugar, and the three sides accounting for three edges through which they can form hydrogen bonds with other moieties, including with other nucleobases. The side opposite to the sugar-linked vertex is traditionally called the Watson-Crick edge, since it is involved in forming the Watson-Crick base pairs which constitute the building blocks of double helical DNA. The two sides adjacent to the sugar-linked vertex are referred to, respectively, as the Sugar and Hoogsteen (C-H for pyrimidines) edges. Each of the four different nucleobases is characterized by a distinct edge-specific distribution pattern of its hydrogen bond donor and acceptor atoms, complementarity with which, in turn, defines the hydrogen bonding patterns involved in base pairing. The double helical structures of DNA or RNA are generally known to have base pairs between complementary bases, Adenine:Thymine (Adenine:Uracil in RNA) or Guanine:Cytosine. These involve specific hydrogen bonding patterns corresponding to their respective Watson-Crick edges, and are considered as Canonical Base Pairs. At the same time, the helically twisted backbones in double helical duplex DNA form two grooves, major and minor, through which the hydrogen bond donor and acceptor atoms corresponding respectively to the Hoogsteen and Sugar edges are accessible for additional potential molecular recognition events. Experimental evidence reveals that the nucleotide bases are also capable of forming a wide variety of pairings in various geometries, having hydrogen bonding patterns different from those observed in canonical base pairs. These base pairs, which are generally referred to as Non-Canonical Base Pairs, are held together by multiple hydrogen bonds, and are mostly planar and stable. Most of these play very important roles in shaping the structure and function of different functional RNA molecules. In addition to their occurrences in several double stranded stem regions, most of the loops and bulges that appear in single-stranded RNA secondary structures form recurrent 3D motifs, where non-canonical base pairs play a central role. Non-canonical base pairs also play crucial roles in mediating the tertiary contacts in RNA 3D structures. History Double helical structures of DNA as well as folded single stranded RNA are now known to be stabilized by Watson-Crick base pairing between the purines, adenine and guanine, and the pyrimidines, thymine (or uracil for RNA) and cytosine. In this scheme, the N1 atoms of the purine residues form hydrogen bonds with the N3 atoms of the pyrimidine residues in A:T and G:C complementarity.
The second hydrogen bond in A:T base pairs involves the N6 amino group of adenine and the O4 atom of thymine (or uracil in RNA). Similarly, the second hydrogen bond in G:C base pairs involves the O6 atom of guanine and the N4 amino group of cytosine. The G:C base pairs also have a third hydrogen bond involving the N2 amino group of guanine and the O2 atom of cytosine. However, even until about twenty years after this scheme was initially proposed by James D. Watson and Francis H.C. Crick, experimental evidence suggesting other forms of base-base interactions continued to draw the attention of researchers investigating the structure of DNA. The first high resolution structure of an adenine:thymine base pair, solved by Karst Hoogsteen by single crystal X-ray crystallography in 1959, revealed a structure whose geometry was very different from what was proposed by Watson and Crick. It had two hydrogen bonds involving the N7 and N6 atoms of adenine and the N3 and O4 (or O2) atoms of thymine. It may be noted that, due to the use of a thymine base with a methyl group representing the sugar, a symmetry axis appears passing through the N1 and C6 atoms, and the O2 and O4 atoms appear identical. In order to distinguish this alternate base pairing scheme from the Watson-Crick scheme, base pairs where a hydrogen bond involves the N7 atom of a purine residue have been referred to as Hoogsteen base pairs, and later, the purine base edge which includes its N7 atom came to be referred to as its Hoogsteen edge. The first high resolution structure of a guanine:cytosine pair, obtained by W. Guschelbauer, was also similar to the Hoogsteen base pair, although this structure required an unusual protonation of the N3 imino nitrogen of cytosine, which is possible only at significantly lower pH. Experimental evidence supporting Watson-Crick base pairing, including low resolution NMR studies as well as high resolution X-ray crystallographic studies, was obtained as late as the early 1970s. Almost a decade later, with the advent of efficient DNA synthesis methods, Richard Dickerson, followed by several other groups, solved structures of the physiological double helical B-DNA with a complete helical turn, based on crystals of synthetic DNA oligomers. The pairing geometries of the A:T (A:U in RNA) and G:C pairs in these structures confirmed the common or canonical form of base pairing as proposed by Watson and Crick, while base pairs with all other geometries and compositions are now referred to as non-canonical base pairs. It was noticed that even in double stranded DNA, where canonical Watson-Crick base pairs associate the two complementary anti-parallel strands together, there were occasional occurrences of Hoogsteen and other non-Watson-Crick base pairs. It was also proposed that within Watson-Crick base pair dominated DNA double helices, Hoogsteen base pair formation could be a transient phenomenon. While canonical Watson-Crick base pairs are most prevalent and are commonly observed in a majority of chromosomal DNA and in most functional RNAs, the presence of stable non-canonical base pairs is also extremely significant in DNA biology. An example of non-Watson-Crick, or non-canonical, base pairing can be found at the ends of chromosomal DNA. The 3'-ends of chromosomes contain single stranded overhangs with some conserved sequence motifs (such as TTAGGG in most vertebrates). The single stranded region adopts some definite three-dimensional structures, which have been solved by X-ray crystallography as well as by NMR spectroscopy.
The single strands containing the above sequence motifs are found to form interesting four stranded mini-helical structures stabilized by Hoogsteen base pairing between guanine residues. In these structures, four guanine residues form a near planar base quartet, referred to as G-quadruplex, where each guanine participates in base pairing with its neighboring guanine, involving their Watson-Crick and Hoogsteen edges in a cyclic manner. The four central carbonyl groups are often stabilized by potassium ions (K+). From the full genomic sequences of different organisms, it has been observed that telomere like sequences sometimes also interrupt double helical regions near transcription start site of some oncogenes, such as c-myc. It is possible that these sequence stretches form G-quadruplex like structures, which can suppress the expression of the related genes. The complementary cytosine rich sequences, on the other strand, may adopt another similar four stranded structure, the i-motif, stabilized by cytosine:cytosine non-canonical base pairs. While non-canonical base pairs are still relatively rare in DNA, in RNA molecules, where generally a single polymeric strand folds onto itself to form various secondary and tertiary structures, the occurrence of non-Watson-Crick base pairs turns out to be far more prevalent. As early as in the 1970s, analysis of the crystal structure of yeast tRNAPhe showed that RNA structures possess significant non-canonical variations in base pairing schemes. Subsequently, the structures of ribozymes, ribosome, riboswitches, etc. have highlighted their abundance, and hence the need for a comprehensive characterization of Non-Canonical Base Pairs. These three-dimensional RNA structures generally possess several secondary structural motifs, such as double helical stems, stems with hairpin loops, symmetric and asymmetric internal loops, kissing loops between two hairpin motifs, pseudoknots, continuous stacks between two segments of helices, multi helix junctions etc. along with single stranded regions. These secondary structural motifs, except for the single stranded motifs, are stabilized by hydrogen bonded base pairs and several of these are non-canonical base pairs, including G:U Wobble base pairs. It is notable in this context, that the Wobble hypothesis of Francis Crick predicted the possibility of G:U base pair, in place of the canonical G:C or A:U base pairs, also mediating the recognition between mRNA codons and tRNA anticodons, during protein synthesis. The G:U wobble base pair is the most numerously observed non-canonical base pair. While, because of its geometric similarity with the canonical base pairs, they frequently occur in the double helical stem regions of RNA structures, the geometric differences continue to draw the attention of nucleic acid researchers, providing new insights related to its structural significance. It may be noted that though the base pairs in the folded RNA structures, give rise to double helical stems, its two cleft regions – the major groove and minor groove, differ in their respective dimensions from those in DNA double helices. Unlike for those in DNA, the sequence discriminating major grooves in RNA double helices are very narrow and deep. On the other hand, the minor groove regions, though wide and shallow, do not carry much sequence specific information in terms of the hydrogen bonding donor-acceptor positioning of the corresponding base pair edges. 
The G:U wobble base pairs, along with the various other non-canonical base pairs, introduce variations in the structures of RNA double helices, thus enhancing the accessibility of the discriminating major groove edges of associated base pairs. This has been seen to be very important for molecular recognition steps during tRNA aminoacylation as well as in ribosome functions. Considering the immense importance of non-canonical base pairs in RNA structure, folding and function, researchers from multiple domains – biology, chemistry, physics, mathematics, computer science, etc. – have joined in the effort to understand their structure, dynamics, function and consequences. The complexities associated with experimental handling of RNA further underline the importance of diverse theoretical inputs towards addressing these issues. Types Two bases may approach each other in various ways, eventually leading to specific molecular recognition mediated by often non-canonical base pairing interactions, in addition to strong stacking interactions. These are essential for the process of RNA single strands folding into three-dimensional structures. Early studies on such unusual base pairs by Jiri Sponer, Pavel Hobza and their group were somewhat disadvantaged due to the unavailability of suitable unambiguous systematic naming schemes. While some of the observed base pairs were assigned names following the Saenger nomenclature scheme, others were arbitrarily assigned names by different researchers. It may be mentioned that some attempts were also made by Michael Levitt and coworkers to classify base-base association in terms of adjacency of bases, through either pairing or stacking interactions. There was clearly a need for a classification scheme for different types of non-canonical base pairs, which could comprehensively and unambiguously handle newer variants coming up due to the rapid increase in the sampling space. Different approaches which have evolved in response to this need are described below. Based on hydrogen bonding The nucleotide bases are nearly planar heterocyclic moieties, with conjugated pi-electron clouds, and with several hydrogen bonding donors and acceptors distributed around the edges, usually designated as W, H or S, based on whether the edges can respectively be involved in forming a Watson-Crick base pair or a Hoogsteen base pair, or whether the edge is adjacent to the C2′-OH group of the ribose sugar. Eric Westhof and Neocles Leontis used these edge designations to propose a currently widely accepted nomenclature scheme for base pairs. The hydrogen bonding donor and acceptor atoms could thus be classified in terms of their positioning along the three edges, namely the Watson-Crick or W edge, the Hoogsteen or H edge, and the Sugar or S edge. Since base pairs are mediated through hydrogen bonding interactions based on hydrogen bond donor-acceptor complementarity, this, in turn, provides a convenient bottom-up approach towards classifying base pair geometries in terms of the respective interacting edges of the participating bases. It may be noted that, unlike the Hoogsteen edge of purines, the corresponding edges of the pyrimidine bases do not have any polar hydrogen bond acceptor atom such as N7. However, these bases have C—H groups at their C6 and C5 atoms, which can act as weak hydrogen bond donors, as proposed by Gautam Desiraju.
The Hoogsteen edge, hence, is also called Hoogsteen/C-H edge in a unified scheme for designating equivalent positions of purines as well as pyrimidines. Thus, the total number of possible edge combinations involved in base pairing are 6, namely Watson-Crick/Watson-Crick (or W:W), Watson-Crick/Hoogsteen (or W:H), Watson-Crick/Sugar (or W:S), Hoogsteen/Hoogsteen (or H:H), Hoogsteen/Sugar (or H:S) and Sugar/Sugar (or S:S). In the canonical Watson-Crick base pairs, the glycosidic bonds attaching the N9 (of purine) and N1 (of pyrimidine) of the paired bases with their respective sugar moieties, are on the same side of the mean hydrogen bonding axis, and are hence called Cis Watson-Crick base pairs. However, the relative orientations of the two sugars may also be Trans with respect to the mean hydrogen bonding direction giving rise to a distinct Trans Watson-Crick geometric class, consisting of species which were earlier referred to as reverse Watson-Crick base pairs according to Saenger nomenclature. The possibility of both Cis and Trans glycosidic bond orientation for each of the 6 possible edge combinations, gives rise to 12 geometric families of base pairs (see table). According to the Leontis-Westhoff scheme, any base pair can be systematically and unambiguously named using the syntax <Base_1: Base_2><Edge_1: Edge_2><Glycosidic Bond Orientation> where Base_1 and Base_2 carry information on respective base identities and their nucleotide number. This nomenclature scheme also allows us to enumerate the total number of distinct possible base pair types. For a given glycosidic bond orientation, say Cis, the four naturally occurring bases each have three possible edges for formation of base pairs giving rise to 12 such possible base pairing edge identities, each of which can in principle form base pairing with any edge of another base, irrespective of complementarity. This gives rise to a 12x12 symmetric matrix displaying 144 pairwise permutations of base pairing edge identities, where, apart from the 12 diagonal entries, others include repeat combinations. Thus, there are 78 (= 12 + 132/2) unique entries corresponding to the cis glycosidic bond orientation.  Considering both cis and trans glycosidic bond orientations, the number of base pair types amounts to 156. Of course, this number 156 is only an indicator. It includes base-edge combinations where base pairs cannot be formed due to absence of hydrogen bond donor acceptor complementarities.  For example, potential pairing between two guanine residues utilizing their Watson-Crick edges in cis form (cWW) is not supported by hydrogen bonding donor-acceptor complementarity, and is not observed with consistent hydrogen bonding pattern. This method of enumerating the possible number of distinct base pair types also does not consider possibilities of multimodality or bifurcated base pairs, or even instances of base pairs involving modified bases, protonated bases and water or ion mediation in hydrogen bond formation. Two cytosine bases can form trans Watson-Crick/Watson-Crick (tWW) base pairing with their neutral as well as hemi protonated forms, possibly both, giving rise to the i-motif DNA. However, both C(+):C tWW and C:C tWW, are counted as one type among 156 possible types. 
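The combinatorics described above can be checked directly; the short script below is an illustrative sketch written for this purpose (its names are not taken from any published tool) and simply reproduces the counts of 12 geometric families, 78 unique base/edge combinations per glycosidic orientation, and 156 types overall, ignoring, as the text itself notes, donor-acceptor complementarity, multimodality, and protonated or modified bases.

```python
# Sketch: enumerate the Leontis-Westhof geometric families and count the
# possible base-pair types discussed above. A "base/edge identity" is a
# (base, edge) pair; unordered combinations of two such identities, for each
# glycosidic bond orientation, give 78 + 78 = 156 possible types.
from itertools import combinations_with_replacement

bases = ["A", "C", "G", "U"]
edges = ["W", "H", "S"]                      # Watson-Crick, Hoogsteen, Sugar
orientations = ["cis", "trans"]

# 12 geometric families: unordered edge pairs x glycosidic orientation.
edge_pairs = list(combinations_with_replacement(edges, 2))    # 6 combinations
families = [(e1, e2, o) for (e1, e2) in edge_pairs for o in orientations]
print(len(families), "geometric families")                    # expected: 12

# 12 base/edge identities; unordered pairs of identities per orientation.
identities = [(b, e) for b in bases for e in edges]            # 12 identities
pair_types = list(combinations_with_replacement(identities, 2))
print(len(pair_types), "unique pairs per orientation")         # 12 + 132/2 = 78
print(len(pair_types) * len(orientations), "types in total")   # expected: 156
```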
Based on isosteres Although significant differences are there between structures of non-canonical base pairs belonging to different geometric families, some base pairs within the same geometric family have been found to substitute each other without disrupting the overall structure. These base pairs are called isosteric base pairs. Isosteric base pairs always belong to same geometric families, but all the base pairs in a particular geometric family are not always isosteric. Two base pairs are called isosteric if they meet the following three criteria: (i) The C1′–C1′ distances should be similar; (ii) the paired bases should be related by the similar rotation in 3D space; and (iii) H-bonds formation should occur between equivalent base positions. A detailed approach towards quantifying isostericity, in terms of an IsoDiscrepancy Index (IDI), which can facilitate reliable prediction regarding which base pair substitutions can potentially occur in conserved motifs, was formulated by Neocles Leontis, Craig Zirbel and Eric Westhof. Based on IDI values and available base pair structural data, the group maintains a curated online base pair catalogue and an updated set of Isostericity Matrices (IM) corresponding to each of the 12 geometric families. Using this resource, one can comprehensively classify different types of canonical and non-canonical base pairs in terms of their positions in the Isostericity Matrices. This approach, for example, indicates that the four base pair types: A:U cWW, U:A cWW, G:C cWW and C:G cWW are isosteric to each other. Thus, as also confirmed by detailed sequence comparisons, double mutations altering A:U cWW to U:A cWW or even to G:C cWW may not disturb the structure, and, unless stability issues are involved, the function of the related RNA.  It was also found that the wobble G:U cWW base pair is not really isosteric to U:G cWW base pair, indicating that such double mutations may significantly affect the functioning of the corresponding RNA. On the other hand, some of the base pairs which are stabilized involving Sugar edge of the bases are mutually isosteric. Based on local strand direction It may be noted here that because of the geometric relationship of the bases with the sugar phosphate backbone, these 12 geometric families of base pairs are associated with two possible local strand orientations, namely parallel and antiparallel. For the 6 families with edge combinations involving Watson-Crick and Sugar edges, W:W, W:S and S:S, cis and trans families are respectively associated with antiparallel and parallel 5' to 3' local strand direction. Introduction of the Hoogsteen edge, as one of the partners in the combination, causes an inversion in the relationship. Thus, for W:H and H:S, cis and trans respectively correspond to parallel and antiparallel local strand orientation. As expected, when both the edges are H, a double inversion is observed, and H:H cis and trans correspond respectively to antiparallel and parallel local strand orientations. The annotation of local strand orientation in terms of parallel and antiparallel directions helps to understand which faces of the individual bases can be seen for a given base pair from the 5’- or the 3’ sides. This annotation also helps in classifying the 12 geometries into two groups of 6 each, where the geometries can potentially interconvert within each group, by in-plane relative rotation of the bases. 
However, one should note that the above theory is applicable only when the glycosidic torsion angles of both the nucleotide residues are anti. Notably, crystallographic observations and energetic considerations indicate that syn glycosidic torsions are also quite possible.  Hence the above classification of parallel or antiparallel nature of strand directions, by itself, does not always provide the complete understanding. Various functional RNA molecules are stabilized, in their specific folded pattern, by both canonical as well as non-canonical base pairs. Most tRNA molecules, for example, are known to have four short double helical segments, giving rise to a cloverleaf like two-dimensional structure. The three-dimensional structure of tRNA, however, takes an L-shape. This is mediated by several non-canonical base pairs and base triplets. The D-loop and TψC loop are held together by several such base pairs.  There is a variety of non-canonical base pair varieties, which can be browsed through different websites such as NDB, RNABPDB, RNABP COGEST, etc., to get a better understanding. It may be noted that the above scheme is valid for naturally occurring nucleotide bases. However, there are plenty of examples of post-transcriptional chemical modifications of the bases, many of which are seen in tRNAs or ribosomes. It may be important to understand their structural features also. Identification In case of double helical DNA, identification of base pairs is quite trivial using molecular visualizers such as VMD, RasMol, PyMOL etc. It is, however, not so simple for single stranded folded functional RNA molecules.  Several algorithms have been implemented in software tools for the automated detection of base pairs in RNA structures solved by X-ray crystallography, NMR or other methods. Essentially the programs detect hydrogen bonds between two bases, and ensure their (near) planar orientation, before reporting that they constitute a base pair. Since most of the structures of RNA, available in public domain, are solved by X-ray crystallography, the positions of hydrogen atoms are rarely reported. Hence, detection of hydrogen bond becomes a non-trivial job. The DSSR algorithm by Lu and Wilma K. Olson considers two bases to be paired when they detect one or more hydrogen bond(/s) between the bases, by actually modeling the positions of the hydrogen atoms, and by ensuring the perpendiculars to the two bases being nearly parallel to each other. The positions of the hydrogen atoms can be deduced by converting Internal Coordinates (bond length, bond angle and torsion angle) along with positions of precursor atoms, such as amino group nitrogen atoms and those bonded to the nitrogen or Z-matrix to external Cartesian Coordinates. The base pairs identified by this method are listed in NDB and FR3D databases. A unique way of identification of base pairs in RNA was incorporated in MC-Annotate by Francois Major. In this algorithm they make use of the positions of the hydrogen atoms as well as lone-pair electrons using suitable molecular mechanics/dynamics force-fields and derive hydrogen bond formation probabilities for them. The final identifications of base pairs are done based on these probabilities and approach of hydrogen atoms to lone-pairs electrons of nitrogen or oxygen. 
The MC-Annotate method also attempted to extend the base pair nomenclature with additional information about each interacting edge, such as Ws indicating the Sugar-edge corner of the Watson-Crick edge, Wh representing the Hoogsteen-edge corner of the Watson-Crick edge, and Bw indicating a bifurcated three-centre hydrogen bond in which both hydrogen atoms of an amino group form hydrogen bonds with a carbonyl oxygen through both of its lone pairs. As claimed by the authors, this nomenclature scheme adds some additional features to the Leontis-Westhof (LW) scheme and may be referred to as the LW+ scheme. A major advantage of this scheme lies in its ability to distinguish between alternative base pairing geometries where multimodality is observed within an LW family. This method, however, does not consider the possible participation of the 2'-OH group of the ribose sugars in base pair formation. Another algorithm, namely BPFIND by Dhananjay Bhattacharyya and coworkers, demands at least two hydrogen bonds, using two distinct sets of donor and acceptor atoms, between the bases. This hypothesis-driven algorithm considers the distances between two pairs of atoms (hydrogen bond donors D1 and D2 and acceptors A1 and A2) and four suitably chosen precursor atoms (PD1, PD2, PA1, PA2) corresponding to the donors and acceptors. Small values of these distances, in conjunction with large values of the angles θ1 (PD1—D1—A1), θ2 (D1—A1—PA1), θ3 (PD2—D2—A2) and θ4 (D2—A2—PA2), all close to 180° (π radians), ensure two structural features which characterize well defined base pairs: (i) the hydrogen bonds are strong and linear, and (ii) the two bases are co-planar. Notably, so long as one restricts the search to base pairs which are stabilized by at least two distinct hydrogen bonds, the above algorithms, by and large, yield the same set of base pairs in different RNA structures. Sometimes in the crystal structures it is observed that two closely spaced bases are oriented in such a way that, apart from the regular hydrogen bonds, two additional electronegative hydrogen bond acceptor atoms are very close to each other, which may cause electrostatic repulsion. The concept of protonated base pairing, implying a possible protonation of one of these electronegative, (potentially) hydrogen bond accepting atoms, thus converting it into a hydrogen bond donor, was introduced to explain the stability of such geometries. Some of the NMR-derived structures also support the protonation hypothesis, but possibly more rigorous studies using neutron diffraction or other techniques would be able to confirm it. Where the quality of the crystal structures permits, some algorithms have also attempted to detect water- or cation-mediated base pair formation. Stability The canonical Watson-Crick base pairs, G:C and A:T/U, as well as most of the non-canonical ones, are stabilized by two or more (e.g. 3 in the case of G:C cWW) hydrogen bonds. Justifiably, a significant amount of research on non-canonical base pairs has been carried out towards benchmarking their strengths (interaction energies) and (geometric) stability against those of the canonical base pairs. It may be noted here that base pair geometries, as observed in the crystal structures, are often influenced by several interactions present in the crystal environment, thus perturbing their intrinsically stable geometries arising out of the hydrogen bonding and related interactions between the two bases.
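A minimal Python sketch of the kind of test described for BPFIND is given below; the distance and angle cut-offs are placeholders rather than the actual BPFIND thresholds, and the coordinates would in practice come from a parsed structure file.

import numpy as np

def angle_deg(a, b, c):
    # Angle (degrees) at vertex b for the three points a-b-c.
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def hbond_ok(pd, d, a, pa, max_dist=3.8, min_angle=135.0):
    # One donor-acceptor contact: short D...A distance and large
    # PD-D-A and D-A-PA angles (placeholder cut-offs, "close to 180 degrees").
    return (np.linalg.norm(d - a) <= max_dist
            and angle_deg(pd, d, a) >= min_angle
            and angle_deg(d, a, pa) >= min_angle)

def bases_paired(contacts):
    # BPFIND-style criterion: at least two distinct hydrogen bonds,
    # each supplied as a (PD, D, A, PA) tuple of coordinate arrays.
    return sum(hbond_ok(*c) for c in contacts) >= 2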
Because of such environmental influences, it is possible in principle that the observed geometries in some cases are intrinsically unstable, and that they are stabilized by other interactions provided by the environment. Several groups have attempted to determine the interaction energies in these non-canonical base pairs using different quantum chemistry based approaches, such as density functional theory (DFT) or MP2 methods. These methods were applied on suitably truncated, hydrogen-added, and geometry optimized models of the base (or nucleoside) pairs extracted from PDB structures. Depending upon the optimization protocol, typically three types of interaction energies have been reported. In the first method, the base pair model geometries, isolated from their respective environments, are fully optimized without any constraints, thus providing the intrinsic geometries and interaction energies of the isolated models. This procedure, however, sometimes leads to optimized geometries of base pairs involving edges different from those of the initial crystal geometry. Abhijit Mitra and collaborators also used an additional second protocol, where the heavy atom (non-hydrogen) coordinates are retained as in the crystal geometries, optimizing only the positions of the added hydrogen atoms. In the third protocol, followed mostly by Jiri Sponer and his group, optimization was carried out with constraints on some angles and dihedrals.  Given that the models are extracted from their respective crystal structures, and are isolated from their crystal environments, the second and the third protocols provide two different approaches towards approximating the environmental effects, without explicit consideration of any specific environmental interactions.  This has further been addressed in some reports by considering specific environmental factors, such as coordination with magnesium ions, or even some covalent modifications to the bases. All three protocols are useful in their respective contexts. Further, a comparison of the model geometries obtained by the different protocols provides an idea of both the stability of the corresponding base pair geometries and the probable extent and nature of environmental influences. It was found that most non-canonical base pairs having two or more hydrogen bonds generally maintain the same hydrogen bonding pattern in the crystal geometry and in the fully optimized, isolated geometry, thus indicating their intrinsic geometric stability. Interaction energies calculated from these optimized models also indicated the energetic stability of the corresponding non-canonical base pairs.  The previous notion that non-canonical base pairs are weaker than the Watson-Crick base pairs was found to be incorrect. The interaction energies of several base pairs, such as G:G tWW, G:G cWH, A:U cHW, G:A cWW, G:U cWW, etc., are found to be larger than that of the canonical A:U cWW base pair. Of course, not all non-canonical base pairs are necessarily very strong or stable in terms of interaction energy.  Several base pairs have been detected on the basis of weak hydrogen bonds involving C—H...O/N atoms, where the interaction energies are rather small. Further, on geometry optimization, some of the observed base pairs, in particular (but not limited to) those involving weak hydrogen bonds or those stabilized by single hydrogen bonds, were found to adopt alternative geometries, thus indicating their intrinsic lack of geometric stability.
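As a rough illustration of how such a counterpoise-corrected interaction energy is obtained in practice, the sketch below uses the Psi4 quantum chemistry package (an assumption; the studies cited here used various codes, protocols and basis sets). A small water dimer is used as a stand-in geometry so that the example is self-contained; for an actual base pair, the hydrogen-added coordinates of the two bases extracted from a crystal structure would be supplied as the two fragments.

import psi4

psi4.set_memory("2 GB")
psi4.core.set_output_file("interaction_energy.out", False)

# Two hydrogen-bonded fragments separated by "--"; replace with the two
# (hydrogen-added) nucleobases of a base pair taken from a crystal structure.
dimer = psi4.geometry("""
0 1
O  -1.551007  -0.114520   0.000000
H  -1.934259   0.762503   0.000000
H  -0.599677   0.040712   0.000000
--
0 1
O   1.350625   0.111469   0.000000
H   1.680398  -0.373741  -0.758561
H   1.680398  -0.373741   0.758561
units angstrom
""")

# With bsse_type="cp", psi4.energy() returns the counterpoise-corrected
# interaction energy between the two fragments (in hartree).
e_int = psi4.energy("mp2/aug-cc-pvdz", bsse_type="cp", molecule=dimer)
print(f"Interaction energy: {e_int * 627.509:.2f} kcal/mol")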
Such alterations of the hydrogen bonding scheme, giving rise to changes in base pairing family upon free optimization, may have functional implications in RNA, for example as conformational switches. Accordingly, as mentioned above for Sponer's protocol, there have been some attempts to restrain the experimentally observed geometry while carrying out geometry optimization for interaction energy calculations. Interestingly, in several cases, interaction energies calculated for these 'away from intrinsically stable' geometries also indicate good energetic stability. Though the energetics and geometric stabilities of different non-canonical base pairs do not show any generalized correlation, analysis of several databases, such as RNABPDB and RNABP COGEST, which catalogue structural and energetic features of some of the observed base pairs and their stacks, reveals some interesting general trends. For example, geometry optimizations of several base pairs involving the 2'-OH group of the sugar residue resulted in significant alterations of their initial geometry. This is possibly due to the flexibility of the sugar puckers and glycosidic torsions. The significantly high interaction energies of protonated base pairs, despite the high energy cost of base protonation, also deserve a special mention in this context. This can mostly be attributed to the additional charge-induced dipole interactions which are associated with protonated base pairs. Structure Base pairing An estimated 60% of bases in structured RNA participate in canonical Watson-Crick base pairs. Base pairing occurs when two bases form hydrogen bonds with each other. These hydrogen bonds can be either polar or non-polar interactions. The polar hydrogen bonds are formed by N-H...O/N and/or O-H...O/N interactions. Non-polar hydrogen bonds are formed by C-H...O/N interactions. Edge interactions Each base has three potential edges through which it can interact with another base. The purine bases have three edges which are able to form hydrogen bonds, known as the Watson-Crick edge (WC), the Hoogsteen edge (H), and the Sugar edge (S). Pyrimidine bases also have three hydrogen-bonding edges. Like the purines, they have the Watson-Crick edge (WC) and the Sugar edge (S), but the third edge of the pyrimidine bases is referred to as the "C-H" edge (H). This C-H edge is sometimes also referred to as the Hoogsteen edge for simplicity. The various edges for the purine and pyrimidine bases are shown in Figure 2. Besides the three edges of interaction, base pairs can also vary in their cis/trans forms. The cis and trans forms depend on the orientation of the ribose sugars with respect to the hydrogen bonding interaction. These various orientations are shown in Figure 3. Therefore, with the cis/trans forms and the three hydrogen bonding edges, there are 12 basic types of base pairing geometries which can be found in RNA structures. Those 12 types are WC:WC (cis/trans), WC:H (cis/trans), WC:S (cis/trans), H:S (cis/trans), H:H (cis/trans), and S:S (cis/trans). Classification These 12 types can be further divided into more subgroups which depend on the directionality of the glycosidic bonds and steric extensions. With all of the various base combinations, there are 169 theoretically possible base pair combinations. The actual number of base pair combinations is lower because some combinations result in non-favorable interactions. This number of possible non-canonical base pairs is still being determined, as it depends strongly on the base pairing criteria.
Determining the configuration of a given base pair is similarly difficult, since the pairing depends on the base's surroundings. These surroundings can consist of adjacent base pairs, adjacent loops, or tertiary interactions (such as a base triple). The geometries of the interactions between bases are well defined because of their rigid and planar shape. The spatial relationship between the two bases can be described by 6 rigid-body parameters, or intra-base pair parameters (3 translational, 3 rotational), as shown in Figure 4. These parameters describe the base pair's three-dimensional conformation. The three translational arrangements are known as shear, stretch, and stagger. These three parameters are directly related to the proximity and direction of the hydrogen bonds. The rotational arrangements are buckle, propeller, and opening. Rotational arrangements relate to the non-planar conformation (as compared to the ideal coplanar geometry). Intra-base pair parameters are used to determine the structure and stability of non-canonical base pairs; they were originally defined for base pairs in DNA, but were found to also fit the non-canonical base pair models. Types The most common non-canonical base pairs are the trans A:G Hoogsteen/Sugar edge, A:U Hoogsteen/WC, and G:U Wobble pairs. Hoogsteen base pairs Hoogsteen base pairs occur between adenine (A) and thymine (T), and between guanine (G) and cytosine (C), similarly to Watson-Crick base pairs. However, the purine (A or G) takes on an alternative conformation with respect to the pyrimidine. In the A-U (or A-T) Hoogsteen base pair, the adenine is rotated 180° about the glycosidic bond, resulting in an alternative hydrogen bonding scheme which has one hydrogen bond in common with the Watson-Crick base pair (adenine N6 and thymine O4), while the other hydrogen bond, instead of occurring between adenine N1 and thymine N3 as in the Watson-Crick base pair, occurs between adenine N7 and thymine N3. The A-U base pair is shown in Figure 5. In the G-C Hoogsteen base pair, as in the A-T Hoogsteen base pair, the purine (guanine) is rotated 180° about the glycosidic bond while the pyrimidine (cytosine) remains in place. One hydrogen bond from the Watson-Crick base pair is maintained (guanine O6 and cytosine N4) and the other occurs between guanine N7 and a protonated cytosine N3 (note that the Hoogsteen G-C base pair has two hydrogen bonds, while the Watson-Crick G-C base pair has three). Wobble base pairs Wobble base pairing occurs between two nucleotides that do not form Watson-Crick base pairs; it was proposed by Francis Crick in 1966. The four main examples are guanine-uracil (G-U), hypoxanthine-uracil (I-U), hypoxanthine-adenine (I-A), and hypoxanthine-cytosine (I-C). These wobble base pairs are very important in tRNA. Most organisms have fewer than 45 tRNA species, even though 61 tRNA species would technically be necessary to pair canonically with all sense codons. Wobble base pairing allows the base at the 5' position of the anticodon to pair with more than one base at the third codon position. Examples of wobble base pairs are given in Figure 6. 3-D Structure The secondary and three-dimensional structures of RNA are formed and stabilized through non-canonical base pairs. Base pairs make up many of the secondary structural building blocks which aid the folding of RNA complexes and three-dimensional structures. The overall folded RNA is stabilized by base pairing within and between its secondary and tertiary structural elements. Due to the many possible non-canonical base pairs, an enormous variety of structures is possible, which allows for the diverse functions of RNA.
The arrangement of non-canonical base pairs also allows long-range RNA interactions, recognition of proteins and other molecules, and structurally stabilizing elements. Many of the common non-canonical base pairs can be added to a stacked RNA stem without disturbing its helical character. Secondary Basic secondary structural elements of RNA include bulges, double helices, hairpin loops, and internal loops. An example of an RNA hairpin loop is given in Figure 7. As shown in the figure, hairpin loops and internal loops require a sudden change in backbone direction. Non-canonical base pairing provides the increased flexibility required at junctions or turns in the secondary structure. Tertiary Three-dimensional structures are formed through long-range intra-molecular interactions between the secondary structural elements. This leads to the formation of pseudoknots, ribose zippers, kissing hairpin loops, or co-axial pseudocontinuous helices. The three-dimensional structures of RNA are primarily determined through molecular simulations or computationally guided measurements. An example of a pseudoknot is given in Figure 8. The structural features of a base pair, formed by two planar rigid units, can be quantified using six parameters, three translational and three rotational. The IUPAC-recommended parameters are Propeller, Buckle, Open Angle, Stagger, Shear and Stretch (Figure 8). There are several publicly available software packages, such as Curves by Richard Lavery, 3DNA by Olson, and NUPARM by Manju Bansal, which may be used to calculate these parameters. While the first two calculate the parameters of canonical and non-canonical base pairs relative to the standard canonical Watson-Crick base pair geometry, the NUPARM algorithm calculates them in absolute terms, using a base pairing edge-specific axis system. Hence, for most non-canonical base pairs, which involve non-Watson-Crick edges, some of the parameters (Open, Shear and Stretch) calculated by Curves or 3DNA are usually large even in their respective intrinsically most stable geometries.  On the other hand, the values provided by NUPARM indicate the quality of hydrogen bonding and the planarity of the two bases in a more realistic fashion. Thus, the NUPARM Stretch values, which indicate the separation of the two bases of a base pair and depend on optimal hydrogen bonding distances, are always around 3 Å. Some other general trends observed in the values of the above parameters may be of interest to note. Most of the cis base pairs are seen to have Propeller values around −10° and small values of Buckle and Stagger. The Open and Shear values often depend on the positions of the hydrogen bonding atoms. For example, G:U cWW wobble base pairs have a Shear value of around −2.2 Å, while G:C or A:U cWW base pairs have Shear values around zero. The Open values for most base pairs are close to zero, but the values are often rather large for those involving the 2'-OH group of the sugar in the NUPARM-derived parameter set. The trans base pairs, however, do not show any systematic trend in their Propeller values. Roles In RNA The structural hierarchy in RNA is usually described in terms of a stem-loop 2D secondary structure, which further folds to form its 3D tertiary structure, stabilized by what are referred to as long-range tertiary contacts. Most often, the non-canonical base pairs are involved in those tertiary contacts or in extra-stem base pairs.
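The 2D level of this hierarchy is commonly encoded as a dot-bracket string, in which matching brackets denote paired positions. The small Python sketch below (a generic illustration, not tied to any particular tool) recovers the pair list from such a string; purely nested brackets can only represent the stem-loop level, so pseudoknots and most tertiary, often non-canonical, contacts require additional bracket types or an explicit pair list.

def dot_bracket_pairs(structure):
    # Return the (i, j) base pairs encoded by a nested dot-bracket string.
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            pairs.append((stack.pop(), i))
    return sorted(pairs)

# Toy hairpin: a 4-base-pair stem closed by a four-residue loop.
print(dot_bracket_pairs("((((....))))"))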
In tRNA, for example, some of the non-canonical base pairs appear between the D-stem and the TψC loop (Figure 5), which are close together in the three-dimensional structure. Such base pairing interactions give stability to the L-shaped structure of tRNA. In this region, some base pairs are found to be additionally hydrogen bonded to a third base.  Thus, the 23rd residue is simultaneously paired to the 9th and 12th residues, together forming a base triple, the smallest member of the class of higher-order multiplets. Multiplets One base, in addition to forming a proper planar base pair with a second base, can often participate in base pair formation with a third base, forming a base triple. One classic example is the DNA triple helix, where the bases of two antiparallel strands form consecutive Watson-Crick base pairs in a double helix and the bases of a third strand form Hoogsteen base pairs with the purine bases of the Watson-Crick pairs. Many different types of base triples have been reported in the available RNA structures and have been elegantly classified in the literature. Multiplets are, however, not limited to triple formation. The formation of a base quartet by four bases is now well documented in the structure of the G-quadruplex characteristically found in telomeres. Here four guanine residues pair up among themselves in a cyclic form involving the Watson-Crick/Hoogsteen cis (cWH) base pairing scheme, such that each guanine base interacts with two other guanine bases. Three to four such guanine quartets stack on top of one another to form a four-stranded DNA structure. In addition to such a cyclic topology, several other topologies of base:base pairing are possible for higher-order multiplets such as quartets, pentets, etc. Double helical regions Non-canonical base pairs quite frequently appear within double helical regions of RNA. The G:U cWW non-canonical base pair is seen very frequently within double helical regions, as this base pair is nearly isosteric to the canonical ones. Owing to the complication of strand direction, as elaborated in the Classification section (Table 1), not all types of non-canonical base pairs can be accommodated within double helical regions with anti glycosidic torsion angles. However, many non-canonical base pairs, e.g. A:G tHS (trans Hoogsteen/Sugar edge), A:U tHW (trans Hoogsteen/Watson-Crick), A:G cWW, etc., are often seen within double helical regions, giving rise to symmetric internal-loop-like motifs. Attempts have been made to classify all such situations where two base pairs (canonical or non-canonical) stack in an antiparallel sense, possibly giving rise to double helical regions in RNA structures. These base pairs are quite stable, and they are able to maintain the helical character quite well. The backbone torsion angles around these residues are also generally within reasonable limits: C3'-endo sugar pucker with anti glycosidic torsion, α/γ torsion angles around −60°/60°, and β/ε torsion angles around 180°. Recurrent structural motifs Non-canonical base pairs, with their special hydrogen bonding features, often appear in different structural motifs, including pseudoknots. Structural features of these recurrent motifs have been archived in searchable databases, such as FR3D and RNA FRABASE. Also, several of these motifs can be identified in a given query PDB file by the NASSAM web-server.
Non-canonical base pairs are most frequently detected at the termini of double helical segments, acting as capping residues, often preceding hairpin loops. The most frequently found non-canonical base pair, namely G:A tSH, is an integral part of GNRA tetraloops, where N can be any nucleotide residue and R is a purine residue. This motif shows some flexibility and alteration of structural features depending on whether the guanine and adenine are paired or not. Several other types of tetraloop motifs, such as UNCG, YNMG, GNAC and CUYG (where Y stands for a pyrimidine and M is either adenine or cytosine), have been found in available RNA structures. However, these do not generally show involvement of non-canonical base pairing. In addition to these common hairpin motifs, where the loop residues largely remain unpaired, there are also a few motifs where the loop residues make extensive interactions among themselves or with other residues external to the loop. A common example is the C-loop motif, where the bulging loop residues form non-canonical base pairs with the bases of double helical regions (Figure 9). The extra base pairs in these cases provide additional stabilization to the composite motif-containing double helix. Non-canonical base pairs are also involved in receptor-loop interactions, such as in the T-loop motif. Another interesting example of the involvement of non-canonical base pairs in a recurrent context is the GAAA receptor motif, which consists of an A:A cHS base pair followed by a U:A tWH base pair, stacked on both sides by G:C cWW base pairs. This places successive non-canonical base pairs within an antiparallel RNA double helical domain.  Similarly, there is an A:A cSH base pair involving two consecutive residues in this motif. Such pairing between consecutive residues, which is also termed a dinucleotide platform motif, is quite commonly observed. Dinucleotide platforms appear in many RNA structures and the pairing can also be between other bases; they have been reported for A:A, A:G, A:U, G:A and G:U base pairs belonging to the cSH class, and also for A:A cHH base pairs. These motifs can alter the strand direction within a double helix by forming kinks. Such a dinucleotide platform, along with triple formation, is also an integral component of the sarcin-ricin motif. Modeling Prediction of biomolecular structure from sequence alone is a long-term goal of scientists working in the fields of bioinformatics, computational chemistry, statistical physics and computer science. Prediction of protein structures from amino acid sequences by methods such as homology modeling, comparative modeling and threading has been largely successful, owing to the availability of about 1200 unique protein folds. Inspired by the protein experience, there are now several approaches towards predicting RNA structures, albeit with varying degrees of success.  Most of these approaches are essentially limited to the prediction of the RNA 2D stem-loop structure, also referred to as RNA secondary structure. For example, prediction of the double helical regions of RNA sequences by computed minimum free energy, using base pairing and stacking energies essentially derived computationally from experimental thermodynamic data, was initially introduced by Ruth Nussinov and later by Michael Zuker. This, in turn, has inspired several related modified algorithms, including ones incorporating data on neighboring group interactions.
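The simplest member of this family of algorithms maximises the number of complementary pairs rather than computing a true free energy. A compact Python sketch of such a Nussinov-style dynamic programme is shown below; the minimum loop length and the pair-counting score are illustrative simplifications of the thermodynamic methods described above.

def nussinov_max_pairs(seq, min_loop=3):
    # Maximise the number of allowed pairs (G:U wobble included) over all
    # nested secondary structures; returns only the optimal pair count.
    allowed = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'),
               ('G', 'U'), ('U', 'G')}
    n = len(seq)
    best = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best[i][j] = max(
                best[i + 1][j],                      # i left unpaired
                best[i][j - 1],                      # j left unpaired
                (best[i + 1][j - 1] + 1) if (seq[i], seq[j]) in allowed else 0,
                max((best[i][k] + best[k + 1][j]     # bifurcation
                     for k in range(i + 1, j)), default=0),
            )
    return best[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))   # small hairpin-forming toy sequence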
Most of these approaches, however, mainly consider data on canonical base pairing, and only a few also consider thermodynamic data on Hoogsteen base pairs. Thus, in addition to the computational costs and complications associated with the identification of pseudoknots, all these methods also suffer from the drawback associated with the paucity of experimental data on non-canonical base pairs. However, there are also several approaches which attempt to predict the tertiary 3D structure corresponding to a given predicted 2D structure. There are also a few involving 3D fragment-based modeling, which are being further facilitated by the increasing availability of motif-wise curated RNA 3D structure data. It is also encouraging to note that some software tools and servers, such as MC-Fold, RNAPDBee and RNAWolfe, are now available for exploring non-canonical base pairing in RNA 3D structures. Some of these methods depend on structural databases of RNA, such as FRABASE, to obtain the 3D coordinates of motifs containing non-canonical base pairs, and stitch this information together with the 3D structures of double helices containing canonical base pairs. It may be relevant in this context to mention the approach towards 3D model building of double helical regions with both canonical and non-canonical base pairs used in 3DNA by Olson or in RNAHelix by Bhattacharyya and Bansal.  These software suites use base pair parameters to generate 3D coordinates of individual dinucleotide steps, which can be extended to model double helices of arbitrary length with canonical or non-canonical base pairs.  The above-mentioned methods attempt to model a single structure (2D or 3D) for a given RNA sequence. However, growing evidence indicates that a given RNA sequence can adopt an ensemble of structures and possibly interconvert between them.  These ensembles obviously adopt different base pairing patterns between different sets of residues. Thus, there are enough pointers to suggest that the focus on modeling single structures has been a bottleneck for accurate modeling of RNA structure. The theoretical prediction of RNA 2D structure, and consequently of 3D structure, can also be confirmed by different chemical probing methods. One of the latest such tools is SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension), and SHAPE-directed RNA secondary structure prediction appears to be most promising. Coupled with mutational profiling, ensembles of RNA structures, which often include non-canonical base pairing, can be experimentally studied using the SHAPE-MaP approach. One of the ways ahead today appears to be an integration of Zuker's minimum free energy approach with experimentally derived SHAPE data, including simulated SHAPE data, as outlined in Montaseri et al. (2016) and Spasic et al. (2017). See also Hoogsteen base pair Wobble base pair References Molecular genetics Nucleic acids
Non-canonical base pairing
[ "Chemistry", "Biology" ]
10,917
[ "Biomolecules by chemical classification", "Nucleic acids", "Molecular genetics", "Molecular biology" ]
55,002,968
https://en.wikipedia.org/wiki/Haldia%20Multi-Modal%20Terminal
The Haldia Multi-Modal Terminal is an inland waterway terminal in the port city of Haldia in the East Midnapore district of West Bengal, built to serve barges and small ships. The terminal is built near the Haldia Port. It was built as an inland river port on 61 acres of land by the Inland Waterways Authority of India, with the help of the Government of West Bengal and the Calcutta Port Trust. Cargo is handled through fly ash berths and multi-purpose berths located within the terminal's jetty. It has a maximum depth of and is able to handle large barges. According to the Inland Waterways Authority of India, the draft of the port is around with tidal support, which allows 3,000 DWT (deadweight tonnage) vessels to be accommodated at the terminal's jetty. Background To relieve the pressure on road and rail transport, the government of India decided to move more freight onto the waterways. For this, the government announced that goods would be transported between Haldia and Allahabad by inland waterway, and began developing the Hooghly and Ganges rivers for the movement of small ships and barges. The government stated that terminals would be constructed at Haldia, Sahebganj and Varanasi for shipping goods along the waterway. To this end, construction of the multi-modal terminal at Haldia began. Terminal details Harbour The terminal has a natural harbour, which is protected by the Balari sandbar. The water depth of the harbour basin is , which accommodates large barges. The approach channel forms a turning circle with a diameter of in the harbour, with a depth of , which is used to change the direction of a vessel as required before it is berthed at the jetty. Approach channel The approach channel connects the deep water body to the harbour and was constructed by dredging the riverbed from to deep. A 7-kilometer-long approach channel in the river is used for the movement of barges to the terminal's jetty. The approach channel has a depth of and a minimum width of 45 meters, allowing vessels with a draft of to arrive at and depart from the harbour without tidal assistance. However, the highest and lowest tides observed in the harbour area are and meters respectively, which can significantly increase the usable depth of the approach channel. At high tide the channel is more than deep with tidal support; vessels with a draft of or more are able to navigate during this period. Transport of products The Haldia multi-modal terminal is expected to become a major hub for the transport of goods in West Bengal and north-east India. The terminal has projected freight traffic of 5.92 MMPPA by 2018. The main products to be transported through this terminal include fly ash, banaspati oil and cement. See also National Waterway 1 Varanasi Multi-Modal Terminal Sahebganj Multimodal Terminal References Bibliography External links Ports and harbours of West Bengal Ports and harbours of India River ports of India Transport in Haldia Intermodal transport Proposed ports in India
Haldia Multi-Modal Terminal
[ "Physics" ]
626
[ "Physical systems", "Transport", "Intermodal transport" ]
55,008,417
https://en.wikipedia.org/wiki/Tissue%20nanotransfection
Tissue nanotransfection (TNT) is an electroporation-based technique capable of gene and drug cargo delivery or transfection at the nanoscale. Furthermore, TNT is a scaffold-less tissue engineering (TE) technique that can be considered cell-only or tissue inducing depending on cellular or tissue level applications. The transfection method makes use of nanochannels to deliver cargo to tissues topically. History Cargo delivery methods rely on carriers, for example nanoparticles, viral vectors, or physical approaches such as gene guns, microinjection, or electroporation. The various methods can be limited by size constraints or their ability to efficiently deliver cargo without damaging tissue. Electroporation is a physical method which harnesses an electric field to open pores in the normally semi-permeable cell membrane through which cargo can enter. In this process, the charges can be used to drive cargo in a specific direction. Bulk electroporation (BEP) is the most conventional electroporation method. Benefits come in the form of high throughput and minimal set-up times. The downside of BEP is that the cell membrane experiences an uneven distribution of the electric field and many membranes receive irreversible damage from which they can no longer close, thus leading to low cell viability. Attempts have been made to miniaturize electroporation such as microelectroporation (MEP) and nanochannel electroporation (NEP) which uses electroporation approached to deliver cargo through micro/nanochannels respectively. These techniques have shown to have higher efficiency of delivery, increased uniform transfection, and increased cell viability compared to BEP. Technique Tissue nanotransfection uses custom fabricated nanochannel arrays for nanoscale delivery of genetic cargo directly onto the surface of the skin. The postage stamp-sized chip is placed directly on the skin and an electric current is induced lasting for milliseconds to deliver the gene cargo with precise control. This approach delivers ample amounts of reprogramming factors to single-cells, creating potential for a powerful gene transfection and reprogramming method. The delivered cargo then transforms the affected cells into a desired cell type without first transforming them to stem cells. TNT is a novel technique and has been used on mice models to successfully transfect fibroblasts into neuron-like cells along with rescue of ischemia in mice models with induced vasculature and perfusion. Current methods require the fabricated TNT chip to be placed on the skin and the loading reservoir filled with a gene solution. An electrode (cathode) is placed into the well with a counter electrode (anode) placed under the chip intradermally (into the skin). The electric field generated delivers the genes. Initial TNT experiments showed that genes could be delivered to the skin of mice. Once this was confirmed, a cocktail of gene factors (ABM) used by Vierbuchen and collaborators to reprogram fibroblast into neurons was used. Delivery of these factors demonstrated successful reprogramming in-vivo and signals propagated from the epidermis to the dermis skin layers. This phenomenon is believed to be mediated by extracellular vesicles and potentially other factors [18]. Successful reprogramming was determined by performing histology and electrophysiological tests to confirm the tissue behaved as functional neurons. Beyond inducing neurons, Gallego-Perez et al. 
also set out to induce endothelial cells in an ischemic mouse limb that, without proper blood flow, becomes necrotic and decays. Using a patented cocktail of plasmids (Etv2, Fli1, Foxc2, or EFF), these factors were delivered to the tissue above the surgery site. Using various methods, including histology and laser speckle imaging, perfusion and the establishment of new vasculature was verified as early as 7 days post-treatment. The technique was developed to combat the limitations of current approaches, such as a shortage in donors to supply cell sources and the need to induce pluripotency. Reprogramming cells in vivo takes advantage of readily available cells, bypassing the need for pre-processing. Most reprogramming methods have a heavy reliance on viral transfection. TNT allows for implementation of a non-viral approach which is able to overcome issues of capsid size, increase safety, and increase deterministic reprogramming. Development The tissue nanotransfection technique was developed as a method to efficiently and benignly deliver cargo to living tissues. This technique builds on the high-throughput nanoelectroporation methods developed for cell reprogramming applications by Lee and Gallego-Perez of Ohio State's Chemical and Biomolecular Engineering department. Sen (Surgery/Regenerative Medicine) adapted this technology, in collaboration with Lee in Engineering, for in vivo tissue reprogramming applications with Gallego-Perez serving the role of a shared fellow between the two programs. Development was a joint effort between OSU's College of Engineering and College of Medicine led by Gallego-Perez (Ph.D), Lee (Ph.D), and Sen (Ph.D). This technology was fabricated using cleanroom techniques and photolithography and deep reactive ion etching (DRIE) of silicon wafers to create nanochannels with backside etching of a reservoir for loading desired factors as described in Gallego-Perez et al 2017. This chip is then connected to an electrical source capable of delivering an electrical field to drive the factors from the reservoir into the nanochannels, and onto the contacted tissue. Later, with support from Xuan, Sen developed the current version of the tissue nanotransfection chip. References External links Modification of genetic information Nanotechnology Cellular processes
Tissue nanotransfection
[ "Materials_science", "Biology" ]
1,205
[ "Modification of genetic information", "Nanomedicine", "Molecular genetics", "Cellular processes", "Nanotechnology" ]
55,012,592
https://en.wikipedia.org/wiki/Christian%20Hamel
Christian Hamel (4 October 1955 – 15 August 2017) was a French professor at the Institute for Neurosciences of Montpellier (INM), Hôpital Saint Eloi, research unit INSERM 583 of the university. He studied the transduction, integration and disorders of sensory and motor systems, with the ultimate goal of finding treatments for degeneration of the retina and optic nerve. In 1993, Hamel discovered and described the RPE65 protein. Retinal pigment epithelium-specific 65 kDa protein is an enzyme of the vertebrate visual cycle. The following year he mapped the RPE65 gene to human chromosome 1 (mouse chromosome 3) and refined its location to 1p31 by fluorescence in situ hybridization. His research interests were in finding the causes of inherited diseases of the retina and optic nerve. References 1955 births 2017 deaths French medical researchers Genetic engineering Engineering
Christian Hamel
[ "Chemistry", "Engineering", "Biology" ]
182
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
55,013,132
https://en.wikipedia.org/wiki/Safety%20of%20magnetic%20resonance%20imaging
Magnetic resonance imaging (MRI) is in general a safe technique, although injuries may occur as a result of failed safety procedures or human error. During the last 150 years, thousands of papers focusing on the effects or side effects of magnetic or radiofrequency fields have been published. They can be categorized as incidental and physiological. Contraindications to MRI include most cochlear implants and cardiac pacemakers, shrapnel and metallic foreign bodies in the eyes. The safety of MRI during the first trimester of pregnancy is uncertain, but it may be preferable to other options. Since MRI does not use any ionizing radiation, its use generally is favored in preference to CT when either modality could yield the same information. (In certain cases, MRI is not preferred as it may be more expensive, time-consuming and claustrophobia-exacerbating.) Structure and certification In an effort to standardize the roles and responsibilities of MRI professionals, an international consensus document, written and endorsed by major MRI and medical physics professional societies from around the globe, has been published formally. The document outlines specific responsibilities for the following positions: MR Medical Director / Research Director (MRMD) – This individual is the supervising physician who has oversight responsibility for the safe use of MRI services. MR Safety Officer (MRSO) – Roughly analogous to a radiation safety officer, the MRSO acts on behalf of, and on the instruction of, the MRMD to execute safety procedures and practices at the point of care. MR Safety Expert (MRSE) – This individual serves in a consulting role to both the MRMD and MRSO, assisting in the investigation of safety questions that may include the need for extrapolation, interpolation, or quantification to approximate the risk of a specific study. The American Board of Magnetic Resonance Safety (ABMRS) provides testing and board certification for each of the three positions, MRMD, MRSO, and MRSE. As most MRI accidents and injuries are directly attributable to decisions at the point of care, testing and certification of MRI professionals seeks to reduce the rates of MRI accidents and improve patient safety through the establishment of safety competency levels for MRI professionals. Implants All patients are reviewed for contraindications prior to MRI scanning. Medical devices and implants are categorized as MR Safe, MR Conditional or MR Unsafe: MR-Safe – The device or implant is completely non-magnetic, non-electrically conductive, and non-RF reactive, eliminating all of the primary potential threats during an MRI procedure. MR-Conditional – A device or implant that may contain magnetic, electrically conductive, or RF-reactive components that is safe for operations in proximity to the MRI, provided the conditions for safe operation are defined and observed (such as 'tested safe to 1.5 teslas' or 'safe in magnetic fields below 500 gauss in strength'). MR-Unsafe – Objects that are significantly ferromagnetic and pose a clear and direct threat to persons and equipment within the magnet room. The MRI environment may cause harm in patients with MR-Unsafe devices such as cochlear implants, aneurysm clips, and many permanent pacemakers. In November 1992, a patient with an undisclosed cerebral aneurysm clip was reported to have died shortly after an MRI exam. Several deaths have been reported in patients with pacemakers who have undergone MRI scanning without appropriate precautions. 
Increasingly, MR-conditional pacemakers are available for selected patients. Ferromagnetic foreign bodies such as shell fragments, or metallic implants such as surgical prostheses and ferromagnetic aneurysm clips, also are potential risks. Interaction of the magnetic and radio frequency fields with such objects may lead to heating or torque of the object during an MRI. MRI is contraindicated in those suspected of having a metallic foreign body in the eye. MRI may be considered if there is strong suspicion that the foreign body is non-metallic. Titanium and its alloys are safe from attraction and torque forces produced by the magnetic field, although there may be some risks associated with Lenz effect forces acting on titanium implants in sensitive areas within the subject, such as stapes implants in the inner ear. Intrauterine devices with copper are generally safe in MRI, but may become dislodged or even expelled, and it is therefore recommended to check the location of the IUD both before and after MRI. Other implants and devices that are contraindicated in MRI include magnetic dental implants, tissue expanders, artificial limbs, hearing aids, piercings, and catheters with metallic components such as the Swan-Ganz catheter. Tooth amalgam, however, is not contraindicated in MRI. Risk of implant heating under MRI Titanium and its alloys can be heated by the radiofrequency field, as well as by the switched gradient field (through Faraday's law of magnetic induction). The amount of heating depends on a number of contributing factors, and injuries caused by such heating of metallic implants have been reported. Projectile risk The very high strength of the magnetic field may cause projectile-effect (or "missile-effect") accidents, where ferromagnetic objects are attracted to the center of the magnet. Pennsylvania reported 27 cases of objects becoming projectiles in the MRI environment between 2004 and 2008. There have been incidents of injury and death. In one case, a six-year-old boy died in July 2001, during an MRI exam at the Westchester Medical Center, New York, after a metal oxygen tank was pulled across the room and crushed the child's head. To reduce the risk of projectile accidents, ferromagnetic objects and devices are typically prohibited near the MRI scanner, and patients undergoing MRI examinations must remove all metallic objects, often by changing into a gown or scrubs. Some radiology departments use ferromagnetic detection devices to ensure that no ferromagnetic objects enter the scanner room. MRI-EEG In research settings, structural MRI or functional MRI (fMRI) may be combined with EEG (electroencephalography) under the condition that the EEG equipment is MR-compatible. Although EEG equipment (electrodes, amplifiers, and peripherals) is approved for either research or clinical use, the same MR Safe, MR Conditional and MR Unsafe terminology applies. With the growth of the use of MR technology, the U.S. Food & Drug Administration (FDA) recognized the need for a consensus on standards of practice, and the FDA sought out ASTM International (ASTM) to develop them. Committee F04 of ASTM developed F2503, Standard Practice for Marking Medical Devices and Other Items for Safety in the Magnetic Resonance Environment. Genotoxic effects There is no proven risk of biological harm from any aspect of an MRI scan, including very powerful static magnetic fields, gradient magnetic fields, or radio frequency waves.
Some studies have suggested possible genotoxic (i.e., potentially carcinogenic) effects of MRI scanning through micronuclei induction and DNA double strand breaks in vivo and in vitro. However, in most if not all cases, others have been unable to repeat or validate the results of these studies, and the majority of research shows no genotoxic, or otherwise harmful, effects caused by any part of MRI. A recent study confirmed that MRI using some of the most potentially risky parameters tested to date (7-tesla static magnetic field, 70 mT/m gradient magnetic field, and maximum strength radio frequency waves) did not cause any DNA damage in vitro. Peripheral nerve stimulation The rapid switching on and off of the magnetic field gradients is capable of causing nerve stimulation. Volunteers report a twitching sensation when exposed to rapidly switched fields, particularly in their extremities. The reason the peripheral nerves are stimulated is that the changing field increases with distance from the center of the gradient coils (which more or less coincides with the center of the magnet). Although peripheral nerve stimulation (PNS) was not a problem for the slow, weak gradients used in the early days of MRI, the strong, rapidly switched gradients used in techniques such as EPI, fMRI and diffusion MRI are capable of inducing PNS. American and European regulatory agencies insist that manufacturers stay below specified dB/dt limits (dB/dt is the change in magnetic field strength per unit time), or else prove that no PNS is induced for any imaging sequence. As a result of the dB/dt limitation, commercial MRI systems cannot use the full rated power of their gradient amplifiers. Heating caused by absorption of radio waves Every MRI scanner has a powerful radio transmitter that generates the electromagnetic field that excites the spins. If the body absorbs the energy, heating occurs. For this reason, the rate at which the body absorbs energy from the transmitter must be limited (see specific absorption rate). It has been claimed that tattoos made with iron-containing dyes may lead to burns on the subject's body. Cosmetics and body lotions are considered unlikely to undergo significant heating, although their interaction with the radio waves is not well characterized. The best option for clothing is 100% cotton. Certain positions, such as crossing the arms or legs, are forbidden during the measurement, because the patient's body must not form loops of any kind that could couple to the RF field. Acoustic noise Switching of field gradients causes a change in the Lorentz force experienced by the gradient coils, producing minute expansions and contractions of the coil. As the switching typically is in the audible frequency range, the resulting vibration produces loud noises (clicking, banging or beeping). This behaviour, of sound being generated by the vibration of the conducting components, is described as a coupled acousto-magneto-mechanical system, solutions to which provide useful insight into the behaviour of the scanners. This is most marked with high-field machines and rapid-imaging techniques, in which sound pressure levels may reach 120 dB(A) (equivalent to a jet engine at take-off); appropriate ear protection is therefore essential for anyone inside the MRI scanner room during the examination. The radio frequency field itself does not cause audible noise (at least to human beings), since modern systems use frequencies of 8.5 MHz (for a 0.2 T system) or higher.
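The quantity dB/dt experienced at a point in the bore scales with the gradient slew rate and with the distance of that point from the gradient isocentre. The short Python sketch below illustrates this relationship; the slew rate, radius and limit used are illustrative placeholder values, not vendor specifications or regulatory limits.

def gradient_db_dt(slew_rate_T_per_m_per_s, radius_m):
    # Peak dB/dt (T/s) at a given distance from the gradient isocentre,
    # for a linear gradient ramped at the stated slew rate.
    return slew_rate_T_per_m_per_s * radius_m

slew = 200.0    # T/m/s, a typical order of magnitude for whole-body gradients
radius = 0.25   # m, roughly the edge of a 50 cm imaging volume
limit = 20.0    # T/s, placeholder operating limit used only for comparison
db_dt = gradient_db_dt(slew, radius)
print(f"dB/dt ~ {db_dt:.0f} T/s -> {'exceeds' if db_dt > limit else 'within'} the assumed limit")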
Cryogens As described in the Physics of magnetic resonance imaging article, many MRI scanners rely on cryogenic liquids to enable the superconducting capabilities of the electromagnetic coils within. Although the cryogenic liquids used are non-toxic, their physical properties present specific hazards. An unintentional shut-down of a superconducting electromagnet, an event known as "quench", involves the rapid boiling of liquid helium from the device. If the rapidly expanding helium cannot be dissipated through an external vent, sometimes referred to as a 'quench pipe', it may be released into the scanner room where it may cause displacement of the oxygen and present a risk of asphyxiation. Oxygen deficiency monitors usually are used as a safety precaution. Liquid helium, the most commonly used cryogen in MRI, undergoes near explosive expansion as it changes from a liquid to gaseous state. The use of an oxygen monitor is important to ensure that oxygen levels are safe for patients and physicians. Rooms built for superconducting MRI equipment should be equipped with pressure relief mechanisms and an exhaust fan, in addition to the required quench pipe. Because a quench results in rapid loss of cryogens from the magnet, recommissioning the magnet is expensive and time-consuming. Spontaneous quenches are uncommon, but a quench also may be triggered by an equipment malfunction, an improper cryogen fill technique, contaminants inside the cryostat, or extreme magnetic or vibrational disturbances. Pregnancy No effects of MRI on the fetus have been demonstrated. As opposed to many other forms of medical imaging in pregnancy, MRI avoids the use of ionizing radiation, to which the fetus is particularly sensitive. As a precaution, however, many guidelines recommend pregnant women only undergo MRI when essential, especially during the first trimester. The concerns in pregnancy are the same as for MRI in general, but the fetus may be more sensitive to the effects—particularly to heating and to noise. The use of gadolinium-based contrast media in pregnancy is an off-label indication and may be administered only in the lowest dose required to provide essential diagnostic information. Despite these concerns, MRI is rapidly growing in importance as a way of diagnosing and monitoring congenital defects of the fetus because it is able to provide more diagnostic information than ultrasound and it lacks the ionizing radiation of CT. MRI without contrast agents is the imaging mode of choice for pre-surgical, in-utero diagnosis and evaluation of fetal tumors, primarily teratomas, facilitating open fetal surgery, other fetal interventions, and planning for procedures (such as the EXIT procedure) to safely deliver and treat babies whose defects would otherwise be fatal. Claustrophobia and discomfort Although painless, MRI scans may be unpleasant for those who are claustrophobic or otherwise uncomfortable with the imaging device surrounding them. Older closed bore MRI systems have a fairly long tube or tunnel. The part of the body being imaged must lie at the center of the magnet, which is at the absolute center of the tunnel. Because scan times on these older scanners may be long (occasionally up to 40 minutes for the entire procedure), people with even mild claustrophobia are sometimes unable to tolerate an MRI scan without management. Some modern scanners have larger bores (up to 70 cm) and scan times are shorter. 
A 1.5 T wide short bore scanner increases the examination success rate in patients with claustrophobia and substantially reduces the need for anesthesia-assisted MRI examinations even when claustrophobia is severe. Alternative scanner designs, such as open or upright systems, may be helpful where these are available. Although open scanners have increased in popularity, they produce inferior scan quality because they operate at lower magnetic fields than closed scanners. Commercial 1.5-tesla open systems have become available recently, however, providing much better image quality than previous lower field strength open models. Mirror glasses may be used to help create the illusion of openness. The mirrors are angled at 45 degrees, allowing the patient to look down their body and out the end of the imaging area. The appearance is of an open tube pointing upward (as seen when lying in the imaging area). Even though one is able to see around the glasses and the proximity of the device is very evident, this illusion is quite persuasive and relieves the claustrophobic feeling. For young children who cannot hold still or would be frightened during the examination, chemical sedation or general anesthesia are the norm. Some hospitals encourage children to pretend the MRI machine is a spaceship or other adventure. Certain hospitals with Children's wards have decorated scanners for this purpose, such as that at the Boston Children's Hospital, which operates a scanner with a special casing designed to resemble a sandcastle. Obese patients and pregnant women may find the MRI machine a tight fit. Pregnant women in the third trimester also may have difficulty lying on their backs for an hour or more without moving. MRI versus CT MRI and computed tomography (CT) are complementary imaging technologies and each has advantages and limitations for particular applications. CT is more widely used than MRI in OECD countries with a mean of 132 vs. 46 exams per 1000 population performed respectively. A concern is the potential for CT to contribute to radiation-induced cancer and in 2007 it was estimated that 0.4% of current cancers in the United States were due to CTs performed in the past, and that in the future this figure may rise to 1.5–2% based on historical rates of CT usage. An Australian study found that one in every 1800 CT scans was associated with an excess cancer. An advantage of MRI is that no ionizing radiation is used and so it is recommended over CT when either approach could yield the same diagnostic information. Although the cost of MRI has fallen, making it more competitive with CT, there are not many common imaging scenarios in which MRI can simply replace CT, however, this substitution has been suggested for the imaging of liver disease. The effect of low doses of radiation on carcinogenesis also are disputed. Although MRI is associated with biological effects, these have not been proven to cause measurable harm. Iodinated contrast medium is routinely used in CT and the main adverse events are anaphylactoid reactions and nephrotoxicity. Commonly used MRI contrast agents have a good safety profile, but linear non-ionic agents in particular have been implicated in nephrogenic systemic fibrosis in patients with severely impaired renal function. MRI is contraindicated in the presence of MR-unsafe implants, and although these patients may be imaged with CT, beam hardening artefact from metallic devices, such as pacemakers and implantable cardioverter-defibrillators, also may affect image quality. 
MRI is a longer investigation than CT and an exam may take between 20 and 40 minutes depending on complexity. Guidance Safety issues, including the potential for biostimulation device interference, movement of ferromagnetic bodies, and incidental localized heating, have been addressed in the American College of Radiology's White Paper on MR Safety, which originally was published in 2002 and expanded in 2004. The ACR White Paper on MR Safety has been rewritten and was released early in 2007 under the new title ACR Guidance Document for Safe MR Practices. In December 2007, the Medicines and Healthcare products Regulatory Agency (MHRA), a UK healthcare regulatory body, issued their Safety Guidelines for Magnetic Resonance Imaging Equipment in Clinical Use. In February 2008, the Joint Commission, a U.S. healthcare accrediting organization, issued a Sentinel Event Alert #38, their highest patient safety advisory, on MRI safety issues. In July 2008, the United States Veterans Administration, a federal governmental agency serving the healthcare needs of former military personnel, issued a substantial revision to their MRI Design Guide, that includes physical and facility safety considerations. The European Directive on electromagnetic fields This Directive (2013/35/EU – electromagnetic fields) covers all known direct biophysical effects and indirect effects caused by electromagnetic fields within the EU and repealed the 2004/40/EC directive. The deadline for implementation of the new directive was 1 July 2016. Article 10 of the directive sets out the scope of the derogation for MRI, stating that the exposure limits may be exceeded during "the installation, testing, use, development, maintenance of or research related to magnetic resonance imaging (MRI) equipment for patients in the health sector, provided that certain conditions are met." Uncertainties remain regarding the scope and conditions of this derogation. References History of medical imaging Magnetic resonance imaging Magnetic resonance imaging
Safety of magnetic resonance imaging
[ "Chemistry" ]
3,875
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
74,926,555
https://en.wikipedia.org/wiki/Direct%20reduction
In the iron and steel industry, direct reduction is a set of processes for obtaining iron from iron ore, by reducing iron oxides without melting the metal. The resulting product is pre-reduced iron ore. Historically, direct reduction was used to obtain a mix of iron and slag called a bloom in a bloomery. At the beginning of the 20th century, this process was abandoned in favor of the blast furnace, which produces iron in two stages (reduction-melting to produce cast iron, followed by refining in a converter). However, various processes were developed in the course of the 20th century and, since the 1970s, the production of pre-reduced iron ore has undergone remarkable industrial development, notably with the rise of the electric arc furnace. Designed to replace the blast furnace, these processes have so far only proved profitable in certain economic contexts, which still limits this sector to less than 5% of world steel production. History Bloomery Historically, the reduction of iron ore without smelting is the oldest process for obtaining steel. Low-temperature furnaces, unable to reach the melting temperatures of iron alloys, produce a bloom, a heterogeneous agglomerate of metallic iron more or less impregnated with carbon, gangue, and charcoal. This process was gradually succeeded, from the 1st century in China and the 13th century in Europe, by the blast furnace, which simultaneously reduces and melts iron. More elaborate bloomery furnaces, such as the tatara or the Catalan forge, survived until the early 19th century. Compared with the indirect process (reduction-melting in the blast furnace, followed by cast-iron refining), these processes only survived when they enjoyed at least one of the following two advantages: ability to process ores that are incompatible with blast furnaces (such as iron sands that clog blast furnaces, or ores that generate slag that is too pasty to be drained); a more "reasonable" size than that of giant plants and their constraints (ore and capital requirements, output to be sold, etc.). Modern direct reduction More advanced direct reduction processes were developed at the beginning of the 20th century, when it became possible to smelt pre-reduced ores using the Martin-Siemens process or the electric arc furnace. Based on this technical and economic model, a number of processes were industrialized before World War II (the Krupp-Renn process adopted by the Shōwa Steel Works, the Chenot process, etc.). They remained marginal, however, and their profitability was generally debated. Modern direct reduction processes, based on the use of natural gas instead of coal, were studied intensively in the 1950s. On December 5, 1957, the Mexican company Hylsa started up the first industrial production unit of this type in Monterrey, with the pre-reduced ore obtained destined for smelting in an electric arc furnace. As the production of pre-reduced ore with natural gas was economically viable, several plants were built in the late 1960s. As a cheap supply of natural gas was essential to their profitability, most plants were located in countries with gas deposits, in Latin America (where many were developed) and in the Middle East. In 1970, worldwide production of pre-reduced iron ore reached 790,000 tonnes. The processes then in operation were the HYL process (680,000 tonnes produced), an SL/RN unit, a Purofer unit, and the first plant to use the Midrex process. 
Although profitable and innovative, the processes invented did not ultimately prove to be a technological revolution capable of supplanting the traditional blast furnace-based process. However, the quantity of steel produced from pre-reduced materials grew steadily, growing faster than world steel production as a whole. In 1975, NML played a significant role in developing a "Direct Reduction Technology" for producing sponge iron with solid fuels such as non-metallurgical coal; this formed the basis of the first commercial sponge iron plant in India. In 1976, installations in service totalled less than 5 Mt; in 1985, annual production was 11 Mt for an installed capacity of around 20 Mt, the difference being explained by fluctuations in energy costs; in 1991, production reached 20 Mt; and in 1995, worldwide production of pre-reduced iron passed the 30 Mt mark for the first time. In 2010, 70 Mt were produced, 14% from HYL processes and 60% from the Midrex process. The latter accounts for most of the growth in natural gas-fired production of pre-reduced products, although since 2005 coal-fired processes have been making a strong comeback, mainly in India. Packaging of pre-reduced iron ore is evenly divided between sponge iron and briquettes. Sponges are a highly porous metallic product, close to the original ore but highly pyrophoric, which limits their transport. They are therefore often subjected to hot compaction, which improves both product density and handling safety. In 2012, 45% of pre-reduced iron was transformed into briquettes in this way. Chemical reactions Iron oxide reduction Iron oxides are reduced in the following sequence: Fe2O3 (hematite) → Fe3O4 (magnetite) → FeO (wustite) → Fe (iron). Each transition from one oxide to the next is driven by two simultaneous high-temperature reduction reactions, by carbon monoxide (CO) or by hydrogen (H2): 3 Fe2O3 + CO → 2 Fe3O4 + CO2 and 3 Fe2O3 + H2 → 2 Fe3O4 + H2O; Fe3O4 + CO → 3 FeO + CO2 and Fe3O4 + H2 → 3 FeO + H2O; FeO + CO → Fe + CO2 and FeO + H2 → Fe + H2O. The temperatures at which these reactions proceed differ from those predicted by the Ellingham diagram. In reality, there is a coupling between reduction by carbon monoxide and reduction by hydrogen, so that these reactions work together, with hydrogen significantly improving the efficiency of the CO reduction. Reducing gas production Coal-fired processes In coal-fired processes, part of the fuel is first burnt to heat the charge. The product of this combustion is CO2. When the temperature reaches 1,000 °C, the CO2 reacts with the unburned carbon to create CO: CO2 + C ⇌ 2 CO (for T > 1,000 °C; the Boudouard reaction). The production of H2 cannot be achieved by the thermal decomposition of water, as the temperatures involved are too low. Hydrogen is in fact produced along with carbon monoxide by the reaction: H2O + C → H2 + CO (for T > 1,000 °C). These two reducing gas production reactions, which consume 172.45 and 131.4 kJ/mol respectively, are highly endothermic and therefore limit the heating of the charge. Natural gas processes The reducing atmosphere, rich in CO and H2, can be created from the high-temperature cracking of natural gas at around 1100-1150 °C, in the presence of oxidized gases (H2O and CO2) from the ore reduction reactors: CH4 + CO2 → 2 CO + 2 H2 and CH4 + H2O → CO + 3 H2. The system that generates the reducing gases is called a "reformer". In the Midrex process, it consists of tubes heated by the combustion of a portion (around a third) of the gas from the reactor. Procedures Plants for the production of pre-reduced iron ore are known as direct reduction plants. 
The principle involves exposing iron ore to the reducing action of a high-temperature gas (around 1000 °C). This gas is composed of carbon monoxide and dihydrogen, the proportions of which depend on the production process. Generally speaking, there are two main types of processes: processes where the reducing gas is obtained from natural gas. In this case, the ore is reduced in tanks; processes where the reducing gas is obtained from coal. The reactor is generally an inclined rotary kiln, similar to those used in cement plants, in which coal is mixed with limestone and ore, then heated. Another way of classifying processes is to distinguish between those where the reducing gases are produced in specific facilities separate from the reduction reactor - which characterizes most processes using natural gas - and those where the gases are produced inside the fusion reactor: coal-fired processes generally fall into this category. However, many "gas-fired" processes can be fed by gasification units producing a reducing gas from coal. In addition, since the melting stage is necessary to obtain alloys, reduction-melting processes have been developed which, like blast furnaces, produce a more or less carburized liquid metal. Finally, many more or less experimental processes have been developed. Tank processes In these processes, iron ore is brought into contact with reducing gases produced and heated by a separate plant in a closed enclosure. As a result, these processes are naturally suited to the use of natural gas. Cyclic processes In these processes, the ore is fed into a tank, where it remains until it is completely reduced. The vessel is then emptied of its pre-reduced ore, and filled with another charge of untreated ore. These processes can therefore be easily extrapolated from laboratory experiments. What's more, their principle, based on batch production, facilitates process control. Natural gas processes In natural gas cyclic processes, a unit produces hot reducing gas, which is injected into the reactor. To ensure continuous operation of the unit converting natural gas into reducing gas, several tanks are operated in parallel and with a time lag. The best-known of this type is HYL I and its improved variant, HYL II. This is the oldest industrial direct gas reduction process, developed in Mexico in 1957 by the Hylsa company. Retorts These are exclusively coal-fired processes, with the reducing gases generated inside the reduction vessel. The ore is charged with coal into a closed container. This is then heated until the oxygen present in the ore combines with the carbon before being discharged, mainly in the form of CO or CO2. This production of gas by heating a solid material means that the reactor belongs to the retort category. The principle is an ancient one: in northern China, the shortage of charcoal led to the development of processes using hard coal before the 4th century. To avoid any contact between iron and sulfur, the brittle element provided by coal, China developed a process that involved placing iron ore in batteries of elongated tubular crucibles and covering them with a mass of coal, which was then burned. This process survived into the 20th century. More recently, other historic processes have come to the fore, such as that of Adrien Chenot, operational in the 1850s in a number of plants in France and Spain. Successive improvements by Blair, Yutes, Renton, and Verdié are not significant. Among the processes developed is the HOGANAS process, perfected in 1908. 
Three small units are still operational (as of 2010). Not very productive, it is limited to the production of powdered iron, but as it is slow and operates in closed retorts, it easily achieves the purities required by powder metallurgy. Other retort processes were developed, such as KINGLOR-METOR, perfected in 1973. Two small units were built in 1978 (closed) and 1981 (probably closed). Continuous processes Based on the principle of counter-current piston flow, these processes are the closest to the blast furnace or, more accurately, the stückofen. Hot reducing gases are obtained from natural gas, in a separate unit from the shaft, and injected at the bottom of the shaft, while the ore is charged at the top. The pre-reduced materials are extracted hot, but in solid form, from the bottom of the shaft. This similarity to a blast furnace without its crucible made it one of the first processes explored by metallurgists, but the failures of the German Gurlt in 1857, and the French Eugène Chenot (son of Adrien) around 1862, led to the conclusion that "the reduction of iron ore [...] is therefore [not] possible in large quantities by gas alone". Developed in the 1970s, the Midrex process is the best example of a continuous direct reduction process. As much a technical success as a commercial one, since 1980 it has accounted for around two-thirds of the world's production of pre-reduced materials. Its similarity to the blast furnace means that it shares some of its advantages, such as high production capacity, and some disadvantages, such as the relative difficulty of controlling several simultaneous reactions in a single reactor (since the nature of the product changes considerably as it travels through the vessel). The strategy of selling turnkey units, combined with a cautious increase in production capacity, has given this process good financial and technical visibility... compared with the often dashed hopes of competing processes. Its direct competitor, the HYL III process, is the result of a research effort by the Tenova Group (de), heir to the Mexican Hylsa pioneers. Accounting for almost 20% of pre-reduced product production, it differs from the Midrex process in that it features an in-house reforming unit for the production of reducing gases. Other processes have been developed based on this continuous reactor principle. Some, like ULCORED, are still at the study stage. Most have only been developed in a single country, or by a single company. Others were failures, such as the NSC process, of which a single plant was built in 1984 and converted to HYL III in 1993, ARMCO (a single unit commissioned in 1963 and shut down in 1982) or PUROFER (3 units operational from 1970 to 1979, small-scale production resumed in 1988). Coal-fired processes are variants of natural gas processes, where the gas can be synthesized from coal in an additional unit. Among these variants, the MxCol, of which one commercial unit in Angul commissioned by Jindal Steel and Power has been operational since 2014, is a Midrex fed by a coal gasification unit. Technically mature but more complex, they are at a disadvantage compared with equivalent gas-fired processes, which require slightly less investment. Fluidized beds Given that direct reduction is a chemical exchange between gas and solid, the fluidization of ore by reducing gases is an attractive line of research. 
However, the changing nature of the constituents, combined with the high temperature and the difficulty of controlling the fluidization phenomenon, make its adoption singularly difficult. Many processes have been developed on this principle. Some have been technical failures, such as the HIB (a single plant commissioned in 1972, converted to the Midrex in 1981) or economic failures, such as the FIOR process (a single plant commissioned in 1976, mothballed since 2001, the forerunner of FINMET). Developed in 1991 from the FIOR process, the FINMET process seems more mature, but its expansion has not materialized (two plants were built, and only one was in operation as of 2014). The CIRCORED process, also recent, is similarly stagnant (just one plant built, commissioned in 1999, mothballed in 2012), despite its adaptability to coal (CIRCOFER process, no industrial production). Rotating furnace processes Rotation of the reduction furnace may be a design choice intended to circulate the ore through the furnace. It can also play an active part in the chemical reaction by ensuring mixing between the reactants present. Rotary hearth processes, where the ore rests on a fixed bed and travels through a tunnel, fall into the first category. Rotary kiln processes, where the ore is mixed with coal at high temperature, constitute the second category. Rotary hearth These processes consist of an annular furnace in which iron ore mixed with coal is circulated. Hot reducing gases flow over, and sometimes through, the charge. The ore is deposited on a tray, or carts, rotating slowly in the furnace. After one rotation, the ore is reduced; it is then discharged and replaced by oxidized ore. A number of processes have been developed based on this principle. In the 1970s-1980s, the INMETCO process demonstrated only the validity of the idea, with no industrial application. The MAUMEE (or DryIron) process came to fruition in the US with the construction of two small industrial units in the 1990s. Similarly, in Europe, a consortium of Benelux steelmakers developed the COMET process in the laboratory from 1996 to 1998. Despite the consortium's withdrawal from the research program in 1998, a single industrial demonstrator was extrapolated from it, the SIDCOMET, which was discontinued in 2002. RedIron, whose only operational unit was inaugurated in Italy in 2010, also benefits from this research. Japan has adopted the FASTMET process, with the commissioning of three units dedicated to the recovery of iron-rich powders, and is proposing an improved version, the ITmk3 process, with one unit in operation in the United States. This non-exhaustive list shows that, despite the keen interest shown by steelmakers in developed countries during the 1990s, none of these processes met with commercial success. Rotary drums These processes involve high-temperature blending of iron ore and coal powder, with a little limestone to reduce the acidity of the ore. Processes such as Carl Wilhelm Siemens', based on the use of a short drum, first appeared at the end of the 19th century. The tool used then evolved into a long tubular rotary kiln, inspired by those used in cement works, as in the Basset process, developed in the 1930s. A process of historic importance is the Krupp-Renn. Developed in the 1930s, there were as many as 38 furnaces in 1945 which, although they only had a capacity of 1 Mt/year at the time, were installed all over the world. 
This process was improved and inspired the German Krupp-CODIR furnaces and the Japanese Kawasaki and Koho processes. Both Japanese processes integrate a pelletizing unit for steel by-products upstream of the rotary furnaces. Two units of each process were built between 1968 (Kawasaki) and 1975 (Koho). The ACCAR process, developed in the late 1960s and used confidentially until 1987, uses a mixture of 80% coal and 20% oil or gas: the hydrocarbons, although more expensive, enrich the reducing gas with hydrogen. The German Krupp-CODIR process, operational since 1974, has had little more success: only three units have been commissioned. Finally, Indian steelmakers are behind the SIIL, Popurri, Jindal, TDR and OSIL processes, which are simply variants developed to meet specific technical and economic constraints. Other processes, built on the same principle, failed to develop, such as the Strategic-Udy, consisting of a single plant commissioned in 1963 and shut down in 1964. The SL/RN process, developed in 1964, dominated coal-fired processes in 2013. In 1997, it accounted for 45% of pre-reduced coal production. In 2012, however, production capacity for this process had fallen to just 1.8 Mt/year, out of a total of 17.06 Mt attributed to coal-fired processes. Reduction-melting processes As the smelting stage is necessary to obtain alloys and shape the product, direct reduction processes are frequently combined with downstream smelting facilities. Most pre-reduced iron ore is smelted in electric furnaces: in 2003, 49 of the 50 Mt produced went into electric furnaces. Process integration is generally highly advanced, to take advantage of the high temperature (over 600 °C) of the prereduct from the direct reduction reactor. One idea is to carry out the entire reduction-melting process in the arc furnace installed downstream of the reduction plant. Several plasma processes operating above 1530 °C have been devised and sometimes tested. Furnaces can be either non-transferred arc (Plasmasmelt, Plasmared) or transferred arc (ELRED, EPP, SSP, The Toronto System, falling plasma film reactor). All these processes share the electric furnace's advantage of low investment cost, and its disadvantage of using an expensive energy source. In the case of direct reduction, this disadvantage is outweighed by the fact that a great deal of heat is required, both for the reduction process and because of the gangue to be melted. An alternative to the electric furnace is to melt the pre-reduction with a fuel. The cupola furnace is ideally suited to this task, but since one reason for the existence of direct reduction processes is the non-use of coke, other melting furnaces have emerged. The COREX process, in operation since 1987, consists of a direct-reduction shaft reactor feeding a blast furnace crucible, in which the pre-reduced ore is brought to a liquid smelting state, consuming only coal. This process also produces a hot reducing gas, which can be valorized in a Midrex-type unit. An equivalent to COREX, based on the FINMET fluidized bed instead of the Midrex vessel, is the Korean FINEX process (a contraction of FINMET and COREX). Both processes are in industrial operation at several plants around the world. Last but not least, a number of reduction-melting furnaces in the same reactor have been studied, but have not yet led to industrial development. 
For example, the ISARNA process and its derivative HISARNA (a combination of the ISARNA and HISMELT processes), is a cyclonic reactor that performs melting before reduction. These processes have culminated in an industrial demonstrator tested in the Netherlands since 2011. Similarly, Japanese steelmakers joined forces in the 1990s to develop the DIOS process which, like many reduction-fusion processes, is similar to oxygen converters. The TECNORED process, studied in Brazil, also performs reduction-melting in the same vessel, but is more akin to a blast furnace modified to adapt to any type of solid fuel. Of all the processes of this type that have been developed, a single ISASMELT-type industrial unit built in Australia, with a capacity of 0.8 Mt/year, operated from 2005 to 2008 before being dismantled and shipped to China, where it was restarted in 2016. Economic importance Controlling capital and material requirements In the US, where the Midrex process was first developed, direct reduction was seen in the 1960s as a way of breathing new life into electric steelmaking. The techno-economic model of the mini-mill, based on flexibility and reduced plant size, was threatened by a shortage of scrap metal, and a consequent rise in its price. With the same shortage affecting metallurgical coke, a return to the blast furnace route did not seem an attractive solution. Direct reduction is theoretically well-suited to the use of ores that are less compatible with blast furnaces (such as fine ores that clog furnaces), which are less expensive. It also requires less capital, making it a viable alternative to the two tried-and-tested methods of electric furnaces and blast furnaces. The comparative table shows that the diversity of processes is also justified by the need for quality materials. The coking plant that feeds a battery of blast furnaces is just as expensive as the blast furnace and requires a specific quality of coal. Conversely, many direct-reduction processes are disadvantaged by the costly transformation of ore into pellets: these cost on average 70% more than raw ore. Finally, gas requirements can significantly increase investment costs: gas produced by a COREX is remarkably well-suited to feeding a Midrex unit, but the attraction of the low investment then fades. The benefits of direct fuel reduction Although gas handling and processing are far more economical than converting coal into coke (not to mention the associated constraints, such as bulk handling, high sensitivity of coking plants to production fluctuations, environmental impact, etc.), replacing coke with natural gas only makes direct reduction attractive to steelmakers with cheap gas resources. This point is essential, as European steelmakers pointed out in 1998:"There's no secret: to be competitive, direct reduction requires natural gas at $2 per gigajoule, half the European price." - L'Usine nouvelle, September 1998, La réduction directe passe au charbon.This explains the development of certain reduction-melting processes which, because of the high temperatures involved, have a surplus of reducing gas. Reduction-melting processes such as the COREX, capable of feeding an ancillary Midrex direct reduction unit, or the Tecnored, are justified by their ability to produce CO-rich gas despite their higher investment cost. 
In addition, coke oven gas is an essential co-product in the energy strategy of a steel complex: the absence of a coke oven must therefore be compensated for by higher natural gas consumption for downstream tools, notably hot rolling and annealing furnaces. The worldwide distribution of direct reduction plants is therefore directly correlated with the availability of natural gas and ore. In 2007, the breakdown was as follows: natural gas processes are concentrated in Latin America (where many have already been developed) and the Middle East; coal-fired processes are remarkably successful in India, maintaining the proportion of steel produced by direct reduction despite the strong development of the Chinese steel industry. China, a country with gigantic needs and a deficit of scrap metal, and Europe, lacking competitive ore and fuels, have never invested massively in these processes, remaining faithful to the blast furnace route. The United States, meanwhile, has always had a few units, but since 2012, the exploitation of shale gas has given a new impetus to natural gas processes. However, because direct reduction uses much more hydrogen as a reducing agent than blast furnaces (which is very clear for natural gas processes), it produces much less CO2, a greenhouse gas. This advantage has motivated the development of ULCOS processes in developed countries, such as HISARNA, ULCORED, and others. The emergence of mature gas treatment technologies, such as pressure swing adsorption or amine gas treating, has also rekindled the interest of researchers. In addition to reducing CO2 emissions, pure hydrogen processes such as Hybrit are being actively studied with a view to decarbonizing the steel industry. Notes References See also Bibliography Amit Chatterjee, Sponge Iron Production By Direct Reduction Of Iron Oxide, PHI Learning Private Limited, 2010, 353 p. (, read online archive) "Process technology followed for sponge iron" archive, Environment Compliance Assistance Centre (ECAC) "World direct reduction statistics" archive of August 29th, 2005, Midrex, 2001. "World direct reduction statistics " archive, Midrex, 2012. J. Feinman, "Direct Reduction and Smelting Processes " archive, The AISE Steel Foundation, 1999. "Direct Reduced Iron " archive, The Institute for Industrial Productivity. Related articles Loupe (sidérurgie) Krupp-Renn Process Direct reduced iron. Direct reduction (blast furnace) Histoire de la production de l'acier. Ore deposits Chemistry Iron Metallurgy Blast furnaces Metallurgical processes
Direct reduction
[ "Chemistry", "Materials_science", "Engineering" ]
5,485
[ "Metallurgical processes", "Metallurgy", "History of metallurgy", "Materials science", "Blast furnaces", "nan" ]
74,932,775
https://en.wikipedia.org/wiki/Direct%20reduction%20%28blast%20furnace%29
Direct reduction is the fraction of iron oxide reduction that occurs in a blast furnace due to the presence of coke carbon, while the remainder - indirect reduction - consists mainly of reduction by carbon monoxide from coke combustion. It should also be noted that many non-ferrous oxides are reduced by this type of reaction in a blast furnace. This reaction is therefore essential to the operation of historical processes for the production of non-ferrous metals by non-steel blast furnaces (i.e. blast furnaces dedicated to the production of ferromanganese, ferrosilicon, etc., which have now disappeared). Direct-reduction steelmaking processes that bring metal oxides into contact with carbon (typically those based on the use of hard coal or charcoal) also exploit this chemical reaction. In fact, at first glance, many of them seem to use only this reaction. Processes that historically competed with blast furnaces, such as the Catalan forge, have likewise been associated with this reaction. But modern direct reduction processes are often based on the exclusive use of reducing gases: in this case, their name takes on the exact opposite meaning to that of the chemical reaction. Definition For blast furnaces, direct reduction corresponds to the reduction of oxides by the carbon in the coke. However, in practice, direct reduction only plays a significant role in the final stage of iron reduction in a blast furnace, by helping to reduce wustite (FeO) to iron. In this case, the chemical reaction can be trivially described as follows: FeO + C → Fe + CO, consuming 155.15 kJ/mol. However, "in the solid state, there is virtually no reaction in the absence of gases, even between finely ground iron ore and coal powders. In other words, it seems certain that the reaction takes place via gases". This means that direct reduction most probably corresponds to the following chain of reactions: FeO + CO → Fe + CO2, producing 17.45 kJ/mol (reduction by CO); CO2 + C ⇌ 2 CO, consuming 172.45 kJ/mol (Boudouard reaction). Roles This reaction accounts for around half of the transformation of wustite FeO into iron, and removes 30% of the total oxygen supplied, mainly in the form of iron oxide Fe2O3. This mode of wustite reduction is highly endothermic, whereas the reduction of iron oxides by CO is slightly exothermic (+155.15 kJ/mol vs. -17.45 kJ/mol), so it is essential to keep it to a minimum. This reaction concerns all the iron oxides present in a blast furnace, but also manganese(II) oxide (MnO), silica (SiO2), chromium, vanadium and titanium, which are partially reduced in blast furnaces. These chemical reactions are described below: MnO + C → Mn + CO, consuming 282.4 kJ/mol at 1,400 °C (begins above 1,000 °C and involves half of the manganese present in the charge); SiO2 + 2 C → Si + 2 CO, consuming 655.5 kJ/mol (begins above 1,500 °C). Chromium and vanadium behave like manganese, titanium like silicon. As for the other iron oxides, their direct reduction is of negligible importance. This can be written as: 3 Fe2O3 + C → 2 Fe3O4 + CO, consuming 118.821 kJ/mol; Fe3O4 + C → 3 FeO + CO, consuming 209.256 kJ/mol. In non-steel blast furnaces, dedicated to the production of ferroalloys, direct reduction is fundamental. For example, for ferronickel production, both direct reduction reactions are used: NiO + C → Ni + CO above 445 °C; FeO + C → Fe + CO above 800 °C. So, although nickel reduces slightly more easily than iron, it cannot be reduced and cast independently of iron. Notes References Iron Blast furnaces Metallurgy Steelmaking
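The figure of roughly 155 kJ/mol for the overall direct reduction of wustite can be checked by adding the two steps of the chain quoted above (Hess's law); the small difference from the quoted 155.15 kJ/mol is rounding in the source values. This is only a worked combination of the numbers already given in the article, set in LaTeX for clarity:

```latex
% Hess's law check of the overall direct-reduction enthalpy,
% using the per-mole values quoted in this article.
\begin{align*}
\mathrm{FeO} + \mathrm{CO} &\rightarrow \mathrm{Fe} + \mathrm{CO_2}
    & \Delta H &= -17.45\ \mathrm{kJ/mol} \\
\mathrm{CO_2} + \mathrm{C} &\rightarrow 2\,\mathrm{CO}
    & \Delta H &= +172.45\ \mathrm{kJ/mol} \\
\cline{1-4}
\mathrm{FeO} + \mathrm{C} &\rightarrow \mathrm{Fe} + \mathrm{CO}
    & \Delta H &= +155.00\ \mathrm{kJ/mol}
\end{align*}
```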
Direct reduction (blast furnace)
[ "Chemistry", "Materials_science", "Engineering" ]
854
[ "Metallurgical processes", "Metallurgy", "Steelmaking", "History of metallurgy", "Materials science", "Blast furnaces", "nan" ]
77,908,009
https://en.wikipedia.org/wiki/Basquin%27s%20law
Basquin's law of fatigue states that the lifetime of the system has a power-law dependence on the external load amplitude, N_f ∝ σ_a^(−α), where the exponent α has a strong material dependence. It is useful in expressing S-N relationships. It is a fundamental principle in materials science that describes the relationship between the stress amplitude experienced by a material and its fatigue life under cyclic loading conditions. The law is named after American scientist O. H. Basquin, who introduced the law in 1910. The law provides a mathematical model to predict the number of cycles to failure (N_f) based on the applied stress amplitude σ_a. A High Cycle Fatigue Test is used to determine material behaviour under repetitive cyclic loads. This test aims to establish the stress-cycles-to-failure characteristics of materials, primarily utilising an identified stress range and load application frequency. It is usually performed using a standard fatigue testing machine where the test specimen is prepared in a specifically defined manner and then subjected to loads until failure takes place. Throughout the test, computer software is used to record various necessary parameters such as the number of cycles experienced and the exact point of failure. This testing protocol enables the development of an S-N curve (also known as a Wöhler curve), a graphical representation of stress amplitude (S) versus the number of cycles to failure (N). By plotting these curves for different materials, engineers can compare them and make informed decisions on the optimal material selection for specific engineering applications. The S-N relationship can generally be expressed by Basquin's law of fatigue, which is given by σ_a = σ_f′ (2N_f)^b, where σ_a is the stress amplitude, σ_f′ is the fatigue strength coefficient, N_f is the number of cycles to failure (so that 2N_f is the number of load reversals), and b is the fatigue strength exponent. Both σ_f′ and b are properties of the material. Basquin's law can also be expressed as Δσ · N_f^b = C, where Δσ is the change in stress, N_f is the number of cycles to failure, and both b and C are constants. References External links ADVANCED STATISTICAL EVALUATION OF FATIGUE DATA OBTAINED DURING THE MEASUREMENT OF CONCRETE MIXTURES WITH VARIOUS WATER-CEMENT RATIO Fatigue - ETH Zürich Mathematical modeling Materials science
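As an illustration of how the S-N relationship is used in practice, the sketch below fits Basquin's law to a handful of made-up stress/life pairs by linear regression in log-log space and then predicts the life at a new stress amplitude. The data values, variable names, and the NumPy-based approach are illustrative assumptions, not taken from the references above.

```python
import numpy as np

# Hypothetical S-N data: stress amplitude (MPa) vs. cycles to failure.
# These numbers are made up for illustration only.
stress = np.array([500.0, 400.0, 320.0, 250.0, 200.0])
cycles = np.array([1e4, 5e4, 2e5, 1e6, 5e6])

# Basquin's law: sigma_a = sigma_f' * (2*N_f)**b
# Taking logarithms gives a straight line:
#   log(sigma_a) = log(sigma_f') + b * log(2*N_f),
# so the coefficients can be estimated with an ordinary least-squares fit.
b, log_sigma_f = np.polyfit(np.log(2 * cycles), np.log(stress), 1)
sigma_f = np.exp(log_sigma_f)
print(f"fatigue strength coefficient sigma_f' ~ {sigma_f:.0f} MPa")
print(f"fatigue strength exponent b ~ {b:.3f}")

# Predict the life at a new stress amplitude by inverting the law:
#   N_f = 0.5 * (sigma_a / sigma_f')**(1/b)
sigma_new = 280.0  # MPa, illustrative
n_pred = 0.5 * (sigma_new / sigma_f) ** (1.0 / b)
print(f"predicted cycles to failure at {sigma_new} MPa: {n_pred:.2e}")
```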
Basquin's law
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
428
[ "Mathematical modeling", "Applied and interdisciplinary physics", "Applied mathematics", "Materials science", "nan" ]
77,912,374
https://en.wikipedia.org/wiki/Load%20modulation
Load Modulation is a method of conveying a signal from one device to another by means of modulating the load that the transmitting device imposes on a radio signal provided by the receiving device. This is used, for example, to allow ISO/IEC 14443 and ISO/IEC 15693 NFC cards to reply to the reading device without the need for the NFC card to contain a power source. Instead, it is powered by the radio signal provided by the reader and it conveys its data back to the reader by modulating the load that the NFC card imposes upon the reader's radio signal. References Signal transmission
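To make the mechanism concrete, here is a small toy simulation (not based on any particular standard's coding or timing parameters): switching the card's load slightly changes the amplitude of the reader's carrier, and the reader recovers the bits by comparing the carrier envelope against a threshold. The 13.56 MHz carrier is the frequency used by NFC; the modulation depth, bit rate, and thresholding scheme below are arbitrary illustrative choices.

```python
import numpy as np

# Toy model of load modulation: the tag switches a load on/off, which changes
# the effective coupling and therefore the amplitude of the reader's carrier.
fc = 13.56e6          # carrier frequency (Hz), as used by NFC
fs = 20 * fc          # sampling rate for the simulation
bit_rate = 100e3      # arbitrary illustrative bit rate
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

samples_per_bit = int(fs / bit_rate)
t = np.arange(len(bits) * samples_per_bit) / fs

# Carrier amplitude seen at the reader: slightly lower while the tag's load is switched in.
base_amp, mod_depth = 1.0, 0.05
load_state = np.repeat(bits, samples_per_bit)          # 1 = load connected
amplitude = base_amp - mod_depth * load_state
carrier = amplitude * np.sin(2 * np.pi * fc * t)

# Reader side: envelope detection (per-bit RMS) followed by a simple threshold.
rms = np.sqrt(np.mean(carrier.reshape(len(bits), samples_per_bit) ** 2, axis=1))
threshold = base_amp / np.sqrt(2) - mod_depth / (2 * np.sqrt(2))
recovered = (rms < threshold).astype(int)
print("sent:     ", bits.tolist())
print("recovered:", recovered.tolist())
```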
Load modulation
[ "Physics", "Technology", "Engineering" ]
127
[ "Physical phenomena", "Telecommunications engineering", "Waves", "Information technology", "Signal transmission" ]
77,919,001
https://en.wikipedia.org/wiki/HRAC%20classification
The Herbicide Resistance Action Committee (HRAC) classifies herbicides by their mode of action (MoA) to provide a uniform way for farmers and growers to identify the agents they use and better manage pesticide resistance around the world. It is run by CropLife International in conjunction with the Weed Science Society of America (WSSA). Resistance overview A weed that develops resistance to one herbicide typically has resistance to other herbicides with the same mode of action (MoA), so herbicides with different MoAs, or different resistance groups, are needed. Preventative weed resistance management rotates herbicide types to prevent selective breeding of resistance to the same mode of action. By rotating MoAs, successive generations gain no advantage from any resistant mutations of the last generation. Cross-resistant and multiply resistant weeds resist multiple MoAs, and are particularly difficult to control. There is limited evidence of resistance undoing other resistances. For example, prosulfocarb and trifluralin: their inverse mechanisms of resistance contradict, and so by evolving to one the weed loses resistance to the other, at least by metabolic resistance. Prosulfocarb requires a weed to metabolise it very slowly to survive; trifluralin on the other hand must be metabolised quickly before it can deal damage to the weed. Naming types The HRAC give a letter based class to each active constituent herbicide. The Australian HRAC code is separately assigned, though is often the same as the global code. In 2021, alternative numeric classes were added, to make codes globally more consistent. This set of classification changes also added or moved a few herbicides that had been misclassified, and reduced regional concerns that using the English alphabet could be an impediment for international growers. Herbicides that act through multiple modes have multiple classifications, corresponding to each MoA. For example, Quinmerac is classified as Group 4/29 (O/L) because it is both an Auxin mimic (Group 4 or O) and inhibits cellulose synthesis (Group 29 or L). Groups See also Insecticide Resistance Action Committee References Herbicides
HRAC classification
[ "Biology" ]
443
[ "Herbicides", "Biocides" ]
73,548,983
https://en.wikipedia.org/wiki/Transition%20metal%20complexes%20of%20pyridine-N-oxides
Transition metal complexes of pyridine-N-oxides encompass coordination complexes that contain pyridine-N-oxides as ligands. Particularly common are the octahedral homoleptic complexes of the type [M(ONC5H5)6]2+, where M = Mn(II), Fe(II), Co(II), Ni(II). Many variations of pyridine N-oxide are known, such as the dioxides of 2,2'- and 4,4'-bipyridine. Complexes derived from the trioxide of terpyridine have been crystallized as well. Structure and bonding Pyridine-N-oxides bind to metals through the oxygen. According to X-ray crystallography, the M-O-N angle is approximately 130° in many of these complexes. As reflected by the pKa of 0.79 for the conjugate acid [C5H5NOH]+, pyridine N-oxides are weakly basic ligands. Their complexes are generally high spin, hence they are kinetically labile. Applications Zinc pyrithione is a coordination complex of a sulfur-substituted pyridine-N-oxide. This zinc complex has useful fungistatic and bacteriostatic properties. References Amine oxides Pyridinium compounds
Transition metal complexes of pyridine-N-oxides
[ "Chemistry" ]
266
[ "Amine oxides", "Functional groups" ]
73,557,966
https://en.wikipedia.org/wiki/3-Methyl-3-sulfanylhexan-1-ol
3-Methyl-3-sulfanylhexan-1-ol is a primary alcohol that is hexan-1-ol which is substituted by a methyl group and a thiol group at position 3. It is the odor component of human axilla sweat and the major species at pH 7.3. See also Body odor References Primary alcohols Thiols
3-Methyl-3-sulfanylhexan-1-ol
[ "Chemistry", "Biology" ]
78
[ "Biotechnology stubs", "Thiols", "Organic compounds", "Biochemistry stubs", "Biochemistry" ]
73,558,859
https://en.wikipedia.org/wiki/Branch%20number
In cryptography, the branch number is a numerical value that characterizes the amount of diffusion introduced by a vectorial Boolean function F that maps an input vector a to an output vector F(a). For the (usual) case of a linear F, the value of the differential branch number is produced by: applying nonzero values of a (i.e., values that have at least one non-zero component of the vector) to the input of F; calculating for each input value the Hamming weight w (number of nonzero components), and adding the weights w(a) and w(F(a)) together; selecting the smallest combined weight across all nonzero input values: B(F) = min over a ≠ 0 of (w(a) + w(F(a))). If both a and F(a) have n components, the result is obviously limited on the high side by the value n + 1 (this "perfect" result is achieved when any single nonzero component in a makes all n components of F(a) non-zero). A high branch number suggests higher resistance to differential cryptanalysis: small variations of the input will produce large changes on the output, and in order to obtain small variations of the output, large changes of the input value will be required. The term was introduced by Daemen and Rijmen in the early 2000s and quickly became a typical tool to assess the diffusion properties of transformations. Mathematics The branch number concept is not limited to linear transformations; Daemen and Rijmen provided two general metrics: the differential branch number, where the minimum is obtained over inputs constructed by independently sweeping all values of two nonzero and unequal vectors a and b (⊕ is the component-by-component exclusive-or): B_d(F) = min over a ≠ b of (w(a ⊕ b) + w(F(a) ⊕ F(b))); the linear branch number, where the candidates α and β are independently swept; they should be nonzero and correlated with respect to F (the corresponding coefficient of the linear approximation table of F should be nonzero): B_l(F) = min over such α, β of (w(α) + w(β)). References Sources Cryptography
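As a concrete illustration, the brute-force sketch below computes the differential branch number of a small linear map, here taken to act on 4-bit vectors by a binary matrix, so that each "component" is a single bit and the weight is simply the number of nonzero bits. The matrix and the sizes are arbitrary toy choices, not a map from any real cipher.

```python
from itertools import product

# Toy linear map F(x) = M.x over GF(2), acting on 4-bit column vectors.
# Each component of the vector is one bit, so the Hamming weight below is
# simply the number of nonzero bits.
M = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]

def apply(M, x):
    """Multiply matrix M by bit-vector x over GF(2)."""
    return tuple(sum(M[i][j] & x[j] for j in range(len(x))) % 2 for i in range(len(M)))

def weight(v):
    """Number of nonzero components."""
    return sum(1 for c in v if c != 0)

def differential_branch_number(M, n):
    best = None
    for x in product((0, 1), repeat=n):
        if weight(x) == 0:
            continue                      # only nonzero inputs count
        w = weight(x) + weight(apply(M, x))
        best = w if best is None else min(best, w)
    return best

# Upper bound for a 4-component map is n + 1 = 5; this toy matrix gives 3.
print(differential_branch_number(M, 4))
```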
Branch number
[ "Mathematics", "Engineering" ]
364
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
73,561,616
https://en.wikipedia.org/wiki/Wind%20setup
Wind setup, also known as wind effect or storm effect, refers to the rise in water level in seas, lakes, or other large bodies of water caused by winds pushing the water in a specific direction. As the wind moves across the water’s surface, it applies shear stress to the water, generating a wind-driven current. When this current encounters a shoreline, the water level increases due to the accumulation of water, which creates a hydrostatic counterforce that balances the shear force applied by the wind. During storms, wind setup forms part of the overall storm surge. For example, in the Netherlands, wind setup during a storm surge can raise water levels by as much as 3 metres above normal tidal levels. In tropical regions, such as the Caribbean, wind setup during cyclones can elevate water levels by up to 5 metres. This phenomenon becomes especially significant when water is funnelled into shallow or narrow areas, leading to higher storm surges. Examples of the effects of wind setup include Hurricanes Gamma and Delta in 2020, during which wind setup was a major factor when strong winds and atmospheric pressure drops caused higher-than-expected coastal flooding across the Yucatán Peninsula in Mexico. Similarly, in California’s Suisun Marsh, wind setup has been shown to be a significant factor affecting local water levels, with strong winds pushing water against levees, contributing to frequent breaches and flooding. Observation In lakes, wind setup often leads to noticeable fluctuations in water levels. This effect is particularly clear in lakes with well-regulated water levels, such as the IJsselmeer, where the relationship between wind speed, water depth, and fetch length can be accurately measured and observed. At sea, however, wind setup is typically masked by other factors, such as tidal variations. To measure the wind setup effect in coastal areas, the (calculated) astronomical tide is subtracted from the observed water level. For instance, during the North Sea flood of 1953, the highest water level along the Dutch coast was recorded at 2.79 metres at the Vlissingen tidal station, while the highest wind setup—measuring 3.52 metres—was observed at Scheveningen. The highest wind setup ever recorded in the Netherlands, reaching 3.63 metres, occurred in Dintelsas, Steenbergen during the 1953 flood. However, globally, tropical regions like the Gulf of Mexico and the Caribbean often experience even higher wind setups during hurricane events, underscoring the importance of this phenomenon in coastal and flood management strategies. Calculation of wind setup Based on the equilibrium between the shear stress exerted by the wind on the water and the hydrostatic back pressure, the following differential equation is used: dh/dx = κ · u² · cos φ / (g · h), in which: h = water depth, x = distance, u = wind speed, κ = a dimensionless constant (Ippen suggests κ = 3.3×10⁻⁶), φ = angle of the wind relative to the coast, g = acceleration of gravity. The constant κ combines the wind drag coefficient cw with the ratio of air to water density; cw has a value between 0.8×10⁻³ and 3.0×10⁻³. Application at open coasts For an open coast with a roughly constant water depth over the fetch, integrating this equation gives: Δh = κ · u² · F · cos φ / (g · h), in which Δh = wind setup and F = fetch length, this is the distance the wind blows over the water. However, this formula is not always applicable, particularly when dealing with open coasts or varying water depths. In such cases, a more complex approach is needed, which involves solving the differential equation using a one- or two-dimensional grid. This method, combined with real-world data, is used in countries like the Netherlands to predict wind setup along the coast during potential storms. 
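The remark about solving the differential equation on a grid can be illustrated with a very small one-dimensional sketch: it steps dh/dx = κ·u²·cos φ/(g·depth) along the fetch for a bottom profile that shallows toward the coast. The constant, wind speed, and depth profile are illustrative assumptions, and the equation form is the reconstruction given above, not a quote from the Dutch operational model.

```python
import numpy as np

# One-dimensional integration of the wind-setup balance
#   dh/dx = kappa * u**2 * cos(phi) / (g * depth)
# along the fetch, for a bottom profile that shallows toward the coast.
# kappa, the wind speed, and the depth profile are illustrative values only.
g = 9.81          # m/s^2
kappa = 3.3e-6    # Ippen's suggested constant (dimensionless)
u = 25.0          # wind speed in m/s (storm-force, illustrative)
phi = 0.0         # wind blowing straight onshore

fetch = 100e3                                  # 100 km of open water
n = 10_000
dx = fetch / n
still_depth = np.linspace(30.0, 5.0, n + 1)    # bottom shallowing from 30 m to 5 m

setup = 0.0
for i in range(n):
    depth = still_depth[i] + setup             # total depth including setup so far
    setup += kappa * u**2 * np.cos(phi) / (g * depth) * dx

print(f"wind setup at the coast: {setup:.2f} m")
```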
Application at (shallow) lakes and confined small-fetch areas To calculate the wind setup in a lake, the following solution of the differential equation is used: Δh = κ · u² · F · cos φ / (g · d), where d is the average water depth of the lake. In 1966 the Delta Works Committee recommended using a value of 3.8×10⁻⁶ for κ under Dutch conditions. However, an analysis of measurement data from the IJsselmeer between 2002 and 2013 led to a more reliable value for κ, specifically κ = 2.2×10⁻⁶. This study also found that the formula underestimated wind setup at higher wind speeds. As a result, it has been suggested to increase the exponent of the wind speed from 2 to 3 and to further adjust κ to 1.7×10⁻⁷. This modified formula can predict the wind setup on the IJsselmeer with an accuracy of approximately 15 centimetres. For confined environments such as marshes or small fetches, a simplified empirical model for wind setup has been proposed by Algra et al. (2023). This model was designed to estimate wind setup in the Suisun Marsh, where fetch lengths are smaller and shallow water depth conditions apply. The equation is expressed as: S = C · u₁₀² · F · cos φ / (g · d), where: S = wind setup (water level rise), C = constant (typically derived empirically), u₁₀ = wind speed measured 10 metres above the water surface, g = gravitational constant, d = average water depth, F = fetch length, φ = angle between wind direction and the fetch. This equation assumes that the fetch is small and simplifies the wind setup process by making the wind setup directly proportional to the square of the wind speed. In their 2023 analysis of Van Sickle Island, Algra et al. found this model effective for environments with limited fetch and shallow depth, where the more complex approaches used for open coasts are unnecessary. Unlike the more detailed differential equation formulations used for larger open coasts or lakes, the Van Sickle model provides a practical approximation for confined areas where wind setup may still be significant but where spatial constraints simplify the overall water movement dynamics. Note Wind setup should not be mistaken for wave run-up, which refers to the height which a wave reaches on a slope, or wave setup which is the increase in water level caused by breaking waves. See also Storm surge Coastal flooding Coastal Engineering References Coastal engineering Civil engineering Hydraulic engineering Physical oceanography Water waves
Wind setup
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,172
[ "Physical phenomena", "Hydrology", "Applied and interdisciplinary physics", "Water waves", "Coastal engineering", "Physical systems", "Waves", "Construction", "Hydraulics", "Civil engineering", "Physical oceanography", "Hydraulic engineering", "Fluid dynamics" ]
58,190,388
https://en.wikipedia.org/wiki/List%20of%20works%20by%20Petr%20Van%C3%AD%C4%8Dek
This is the list of works by Petr Vaníček. Remarks: B = Book; TB = Textbook; LN = Lecture Notes; PR = Paper in a Refereed Journal; R = Research Paper; C = Critique, Reference Paper; IP = Invited Paper to a Meeting; NP = Paper Read at a Meeting; TH = Thesis; RT = Report (non-technical); RW = Review Paper (technical). List of works Sources Works about mathematics Geodesy Geophysics University of New Brunswick
List of works by Petr Vaníček
[ "Physics", "Mathematics" ]
83
[ "Applied mathematics", "Applied and interdisciplinary physics", "Geodesy", "Geophysics" ]
58,192,917
https://en.wikipedia.org/wiki/Schlichting%20jet
Schlichting jet is a steady, laminar, round jet, emerging into a stationary fluid of the same kind with very high Reynolds number. The problem was formulated and solved by Hermann Schlichting in 1933, who also formulated the corresponding planar Bickley jet problem in the same paper. The Landau-Squire jet from a point source is an exact solution of the Navier-Stokes equations, which is valid for all Reynolds numbers and reduces to the Schlichting jet solution at high Reynolds numbers, for distances far away from the jet origin. Flow description Consider an axisymmetric jet emerging from an orifice, located at the origin of a cylindrical polar coordinate system (x, r), with x being the jet axis and r being the radial distance from the axis of symmetry. Since the jet is at constant pressure, the momentum flux in the x direction is constant and equal to the momentum flux at the origin, J = 2πρ ∫₀^∞ u² r dr, where ρ is the constant density, (u, v) are the velocity components in the x and r directions, respectively, and J is the known momentum flux at the origin. The quantity K = J/ρ is called the kinematic momentum flux. The boundary layer equations are ∂u/∂x + (1/r) ∂(rv)/∂r = 0 and u ∂u/∂x + v ∂u/∂r = (ν/r) ∂/∂r (r ∂u/∂r), where ν is the kinematic viscosity. The boundary conditions are v = 0 and ∂u/∂r = 0 at r = 0, and u → 0 as r → ∞. The Reynolds number of the jet, Re = (1/ν)√(J/ρ), is a large number for the Schlichting jet. Self-similar solution A self-similar solution exists for the problem posed. The self-similar variables are η = r/x and ψ = ν x F(η), with u = (1/r) ∂ψ/∂r and v = −(1/r) ∂ψ/∂x. Then the boundary layer equation reduces to F F′/η = F′/η − F″ (equivalently η F″ = F′(1 − F)), with boundary conditions F(0) = F′(0) = 0. If F(η) is a solution, then F(γη) is also a solution. A particular solution which satisfies the conditions at η = 0 is given by F = η²/(1 + η²/4). The constant γ in the scaled solution F(γη) = (γη)²/(1 + (γη)²/4) can be evaluated from the momentum condition, giving γ² = 3J/(16πρν²) = 3K/(16πν²). Thus the solution is ψ = ν x (γη)²/(1 + (γη)²/4), with axial velocity u = (2γ²ν/x)/(1 + (γη)²/4)², so that the centerline velocity decays as u(x, 0) = 3K/(8πνx). Unlike the momentum flux, the volume flow rate in the x direction is not constant; it increases linearly with distance along the axis due to slow entrainment of the outer fluid by the jet. Schneider flow describes the flow induced by the jet due to the entrainment. Other variations Schlichting jet for the compressible fluid has been solved by M.Z. Krzywoblocki and D.C. Pack. Similarly, Schlichting jet with swirling motion is studied by H. Görtler. See also Landau-Squire jet Schneider flow Bickley jet References Flow regimes Fluid dynamics
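A small numerical illustration of the closed-form result above: the centerline velocity decays as u₀(x) = 3K/(8πνx), so doubling the distance halves the centerline speed. The fluid properties and momentum flux below are arbitrary illustrative numbers, and the formula used is the reconstruction given in the text.

```python
import numpy as np

# Centerline velocity decay of a laminar round (Schlichting) jet:
#   u_0(x) = 3*K / (8*pi*nu*x),   K = J/rho  (kinematic momentum flux)
# All numbers below are arbitrary illustrative choices.
nu = 1.0e-6            # kinematic viscosity of water, m^2/s
rho = 1000.0           # density, kg/m^3
J = 1.0e-4             # momentum flux of the jet, N
K = J / rho            # kinematic momentum flux, m^4/s^2

for x in (0.1, 0.2, 0.5, 1.0):             # axial distances in metres
    u0 = 3.0 * K / (8.0 * np.pi * nu * x)
    print(f"x = {x:4.1f} m   centerline velocity ~ {u0:8.4f} m/s")

# Reynolds number of the jet, Re = sqrt(J/rho)/nu, should be large for the
# boundary-layer (Schlichting) solution to apply.
print(f"jet Reynolds number ~ {np.sqrt(K) / nu:.0f}")
```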
Schlichting jet
[ "Chemistry", "Engineering" ]
451
[ "Piping", "Chemical engineering", "Flow regimes", "Fluid dynamics" ]
63,426,903
https://en.wikipedia.org/wiki/Guanine%20tetrad
In molecular biology, a guanine tetrad (also known as a G-tetrad or G-quartet) is a structure composed of four guanine bases in a square planar array. They most prominently contribute to the structure of G-quadruplexes, where their hydrogen bonding stabilizes the structure. Usually, there are at least two guanine tetrads in a G-quadruplex, and they often feature Hoogsteen-style hydrogen bonding. Guanine tetrads are formed by sequences rich in guanine, such as GGGGC. They may also play a role in the dimerization of non-endogenous RNAs to facilitate the replication of some viruses. Guanine tetrads dimerize through their 5' ends since it is more energetically favorable. They can be stabilized by central cations, such as lithium, sodium, potassium, rubidium, or caesium. However, they still form a variety of different structures. Guanine tetrads are not always stable, but the sugar-phosphate backbone of DNA can assist in stability of the guanine tetrads themselves. Guanine tetrads are more stable when stacked, as intermolecular forces between each layers help stabilize them. Guanine tetrads can also influence recombination, replication, and transcription. For instance, guanine tetrads are found in the promoter region of the Myc family of oncogenes. They also function in immunoglobulin class switching and may play a role in the genome of HIV. Guanine tetrads appear frequently in the telomeric regions of DNA. See also G-quadruplex Hoogsteen base pair Heterochromatin Regulation of gene expression Guanine Telomere References External links QGRS Mapper QuadBase2 Molecular biology Molecular genetics Cell biology DNA G-quadruplex
Guanine tetrad
[ "Chemistry", "Biology" ]
405
[ "Biochemistry", "Molecular genetics", "Cell biology", "Molecular biology" ]
63,427,012
https://en.wikipedia.org/wiki/Pablo%20Sinues
Pablo Sinues (also published as Pablo Martinez-Lozano Sinues) is an associate professor at the Department of Biomedical Engineering at the University of Basel (Basel, Switzerland) and lecturer at the Department of Chemistry and Applied Biosciences at ETH Zürich. He received his Ph.D. in Mechanical Engineering from the Charles III University of Madrid (Spain) and Habilitation in Analytical Chemistry at ETH Zürich. Sinues heads the Translational Breath Research group located at the University Children’s Hospital Basel. Academic activity Sinues has pioneered Secondary electrospray ionization with a focus in Breath gas analysis applications. He co-authored over 50 peer-reviewed articles covering fields ranging from engineering to medicine. He is President of the Society of Spanish Researchers in Switzerland (ACECH) and Vice-president of the Swiss Metabolomics Society (SMS). He also serves as an expert for InnoSuisse, the Swiss Innovation Agency Sinues is principal investigator of the Research Network Zurich Exhalomics, which is an initiative by scientists from the Zurich area with the goal to provide technical solutions for the rapid and sensitive on-line analysis of breath. He is co-inventor of five patents and winner of the 2020 SGMS award. He co-founded the start-up company 'Deep Breath Initiative (DBI)' to uncover the full potential of Molecular Breath Analysis to advance precision medicine and make it available for general health care. References Outreach Activities Martínez-Lozano et al., J Am Soc Mass Spectrom 2009, 20, 1060-63 Martinez-Lozano Sinues et al., PLoS ONE 2013, 8 BBC News ETH news (30/1/2016) Swiss national television (2/2/2016; in Italian) External links Deep Breath Initiative Sinueslab ACECH Zurich Exhalomics Year of birth missing (living people) Living people 21st-century Spanish scientists Mass spectrometrists
Pablo Sinues
[ "Physics", "Chemistry" ]
392
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
63,432,250
https://en.wikipedia.org/wiki/NGC%20901
NGC 901 is an elliptical galaxy in the constellation Aries. It is estimated to be 441 million light years from the Milky Way and has a diameter of approximately 50,000 ly. NGC 901 was discovered on September 5, 1864, by Albert Marth. See also List of NGC objects (1–1000) References External links Aries (constellation) 0901 Elliptical galaxies 212967
NGC 901
[ "Astronomy" ]
83
[ "Aries (constellation)", "Constellations" ]
76,476,481
https://en.wikipedia.org/wiki/Plogoff%20nuclear%20power%20plant%20project
The Plogoff nuclear power plant project was an EDF project to build a nuclear power plant in the commune of Plogoff in Finistère, Brittany. Popular mobilization against the project between 1978 and 1981 led to its abandonment. This anti-nuclear movement was part of a period marked by the birth of political ecology worldwide. Plogoff site Plogoff is a commune in Basse-Bretagne, near the Pointe du Raz in Finistère. The nuclear power plant would have been located on the edge of Audierne Bay. Chronology of events In response to the first oil crisis in October 1973, the first Pierre Messmer government accelerated the civil nuclear program, and on March 5, 1974, launched an ambitious program of 13 900-megawatt units over six years (at an estimated cost of 13 billion francs), with plans to build 200 power plants in France by 2000. In 1975, the General Councils and the Economic and Social Council agreed in principle to build a nuclear power plant in Brittany on 167 hectares of Breton moorland (4 generating units of 1,300 MW each, for a total capacity of 5,200 MW). Five sites have been identified for prospecting in Brittany: Beg an Fry in Guimaëc, Ploumoguer, Plogoff (near Pointe du Raz), Saint-Vio in Tréguennec and Erdeven. In June 1976, EDF engineers began drilling the first exploratory boreholes, triggering the first major reactions from the local population, which until then had been largely uninformed. The mobilization in Erdeven and Ploumoguer was such that these sites were quickly ruled out. A defense committee was formed on June 6, at the initiative of the mayor of Plogoff, Jean-Marie Kerloch. On June 8, Plogoffites set up barricades at the entrance to their commune for three days, as EDF geologists and technicians were forced to give way. On September 11, 1978, this committee decided to create a GFA (based on the Larzac model) to make expropriation procedures more difficult6. Despite the structuring of the anti-nuclear movement, notably through CLINs and CRINs, the Plogoff site was selected on September 12 and 25, 1978 by the Conseil économique et social de Bretagne and the Conseil général du Finistère. On November 29, 1978, the General Council of Finistère voted 28 to 17 in favor of building a nuclear power plant at Plogoff, marking the end of the period of the "strolling power plant". Citizen opposition continued unabated: in early May 1979, the defense committee decided to install the Feunteun-Aod alternative sheepfold on the GFA. On January 30, 1980, the files for the public utility inquiry were received at Plogoff town hall, where they were burnt that very afternoon. The prefectural authorities responded by hiring vans to act as "annex town halls" (protected by gendarmes) to gather the population's favorable opinions, so that the public utility inquiry could begin on January 31, 1980. During the public inquiry, a free radio station - Radio Plogoff - began broadcasting. It broadcast radio programs until the Socialist victory in 1981. After the public inquiry, demonstrations took place, leading to sometimes violent clashes with the CRS. On several occasions, demonstrators were arrested and put on trial for damaging public buildings and throwing projectiles, with the struggle now seen as a battle of "stones against guns". On March 16, 1980, 50,000 people demonstrated to mark the closure of the public utility inquiry. On May 24, 1980, 100 to 150,000 demonstrators celebrated the end of the procedure, with 50 to 60,000 staying on for the fest-noz that brought the festivities to a close. 
On April 9, 1981, at his rally in Brest, candidate François Mitterrand declared that Plogoff "does not figure, nor will it figure" in the nuclear plan he would implement if elected. In keeping with his campaign promise, the press release of June 3, 1981, issued after the Council of Ministers meeting of President Mitterrand's Mauroy government, confirmed the abandonment of the plans to extend the Larzac military camp and to build the Plogoff nuclear power plant. Instead, two coal-fired units, each rated at 600 MW, were added to the Cordemais power plant in 1983 and 1984. A 446 MW gas-fired combined-cycle power plant in the commune of Landivisiau, in the north of the Finistère département, went into production on March 31, 2022. The Plogoff women Women played a particularly important role in this mobilization. Many were seamen's wives and therefore often ran their households alone. Along with young people and pensioners, they were the most present in the village and were very active in the struggle. When the vans, protected by mobile guards, were set up to act as annex town halls, the women began a war of nerves with the gendarmes. During the six weeks of the public inquiry, they mobilized daily, spending hours and sometimes days in front of the guards, talking to them and sometimes discouraging the younger ones. At 5 p.m., as the women left the town halls, they were joined by men and anti-nuclear activists from the region. This "five o'clock mass" often took a more violent turn, involving the use of Molotov cocktails, with the gendarmes retaliating with offensive grenades. During the clashes, women were also in the front ranks, blocking the way of the gendarmes. They included Annie Carval, by then president of the defense committee (in place of Jean-Marie Kerloc'h, who was accused of allowing himself to be influenced by EDF), and Amélie Kerloc'h, first deputy mayor and later mayor of the commune. The latter encouraged residents to block access to Plogoff to make it "an island inaccessible to the police". A film, Plogoff, des pierres contre des fusils, recounts the events and shows the mobilization of the women, who harassed the young mobile guards every day, causing several of them to break down. At the head of the Plogoff mobilization were Amélie Kerloc'h, Plogoff's first deputy mayor, who is seen in the film urging outraged demonstrators to "make Plogoff an island", and Annie Carval, president of the defense committee. Plogoff, des pierres contre des fusils was directed by Nicole Le Garrec, with Félix Le Garrec as cinematographer and Jakez Bernard as sound technician. Released in November 1980 (before the project was abandoned), the film was restored in 2019 and selected for Cannes Classics 2019 (the Cannes Film Festival's selection of heritage films). It was re-released in cinemas on February 12. A book entitled Femmes de Plogoff (Women of Plogoff) was published by Renée Conan and Annie Laurent. In it, they recount how they learned to fight, the violence they had to face, the changes in their family lives during this time, the courage they showed, and the connections they forged with each other and with other struggles elsewhere. A show based on the book was staged by Laëtitia Rouxel (drawing), Brigitte Stanislas (reading), and Patrice Paichereau (music). Law enforcement Seven squadrons of mobile gendarmes were stationed in Pont-Croix and Loctudy and intervened in Plogoff. Gendarmes-parachutistes were deployed as reinforcements. The mobile gendarmes sometimes used wheeled armored vehicles at Plogoff. 
A helicopter watched over the demonstrators; it was there to protect the movement of the annex town halls. Military engineering vehicles from Angers were mobilized to breach the barricades. In terms of weaponry, incapacitating grenades were used in large numbers, including "instant tear-gas grenades" (tolite + CS gas); for example, 85 were used on Friday, March 14, the last day of the public inquiry. Ethyl bromoacetate, although banned, seems to have been used by urban police during riots in Quimper; they are said to have been disposing of old stocks. Consequences Following a major mobilization, the project was abandoned, a first in France. In 1977, a demonstration at Creys-Malville had had tragic consequences: one demonstrator was killed and many people were injured, including a mobile guard whose hand was torn off. Parallel to this action, demonstrations occurred at Le Pellerin, near Nantes, where another nuclear power plant was planned. That project was abandoned in 1983, as promised by Socialist candidate François Mitterrand, before being replaced by another project, the Le Carnet nuclear power plant, which was in turn abandoned in 1997. Posterity The music group Tri Yann, which opposed the project, wrote the eponymous song on the album An heol a zo glaz, released in 1981. Kan ar kann (Breton for "song of combat") describes the struggle of the people of Plogoff against the construction of a nuclear power plant. The Iroise Marine Natural Park was created by decree on September 28, 2007. Covering an area of 3,500 km², it is likely to be extended, with a proposal to the communes of Cap Sizun (of which Plogoff is a part) to involve them in decision-making and to take into account the ecological and socio-economic coherence of the professional and leisure activity basins. On April 22, 2021, France 3 broadcast the film Plogoff, les révoltés du nucléaire (2021, France, 55 minutes) by François Reinhardt. See also Timeline of history of environmentalism Brennilis Nuclear Power Plant Plogoff References Notes Bibliography Documentaries 1980: Plogoff, des pierres contre des fusils, Nicole Le Garrec (France, 112 minutes), theatrical feature documentary. The restored film was selected for the 2019 Cannes Film Festival (Cannes Classics). 1980: Le Dossier Plogoff, François Jacquemain (France, 50 minutes), restored in 2018 by Synaps. 2000: L'affaire Plogoff (France, 52 minutes), Brigitte Chevet. 2018: Plogoff mon amour, mémoire d'une lutte, Dominique Agniel (France, 60 minutes). 2021: Plogoff, les révoltés du nucléaire, François Reinhardt (France, 55 minutes). External links Plogoff, un moment d’écologie populaire Plogoff, 30 ans après Plogoff, chronique de la lutte Plogoff, mémoire d'une lutte | Lutte antinucléaire | Bretagne Hervé Thomas, 40 ans déjà! Nuclear power Nuclear power in France Nuclear power by country Nuclear power stations in France
Plogoff nuclear power plant project
[ "Physics" ]
2,288
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
76,478,062
https://en.wikipedia.org/wiki/Knuth%E2%80%93Plass%20line-breaking%20algorithm
The Knuth–Plass algorithm is a line-breaking algorithm designed for use in Donald Knuth's typesetting program TeX. It integrates the problems of text justification and hyphenation into a single algorithm by using a discrete dynamic programming method to minimize a loss function that attempts to quantify the aesthetic qualities desired in the finished output. The algorithm works by dividing the text into a stream of three kinds of objects: boxes, which are non-resizable chunks of content; glue, which consists of flexible, resizable elements; and penalties, which represent places where breaking is undesirable (or, if negative, desirable). The loss function, known as "badness", is defined in terms of the deformation of the glue elements and any extra penalties incurred through line breaking. Making hyphenation decisions follows naturally from the algorithm, but the choice of possible hyphenation points within words, and optionally their preference weighting, must be performed first, and that information inserted into the text stream in advance. Knuth and Plass' original algorithm does not include page breaking, but may be modified to interface with a pagination algorithm, such as the algorithm designed by Plass in his PhD thesis. Typically, the cost function for this technique should be modified so that it does not count the space left on the final line of a paragraph; this modification allows a paragraph to end in the middle of a line without penalty. The same technique can also be extended to take into account other factors such as the number of lines or costs for hyphenating long words. Computational complexity A naive brute-force exhaustive search for the minimum badness by trying every possible combination of breakpoints would take exponential time, which is impractical. The classic Knuth–Plass dynamic programming approach to solving the minimization problem is a worst-case O(n²) algorithm but usually runs much faster, in close to linear time. Solving for the Knuth–Plass optimum can be shown to be a special case of the convex least-weight subsequence problem, which can be solved in O(n) time. Methods to do this include the SMAWK algorithm. Simple example of minimum raggedness metric For the input text AAA BB CC DDDDD with line width 6, a greedy algorithm that puts as many words on a line as possible while preserving order before moving to the next line would produce: ------ Line width: 6 AAA BB Remaining space: 0 CC Remaining space: 4 DDDDD Remaining space: 1 The sum of squared space left over by this method is 0² + 4² + 1² = 17. However, the optimal solution achieves the smaller sum 3² + 1² + 1² = 11: ------ Line width: 6 AAA Remaining space: 3 BB CC Remaining space: 1 DDDDD Remaining space: 1 The difference here is that the first line is broken before BB instead of after it, yielding a better right margin and a lower cost of 11. References External links Breaking Paragraphs into Lines, the original paper by Knuth and Plass Algorithms Typography
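The minimum-raggedness example above can be reproduced with a short dynamic-programming sketch. The code below is only an illustration of the idea, not TeX's implementation: every inter-word gap is treated as a single fixed-width space, there is no glue stretching or shrinking, and there are no penalties or hyphenation points; the function and variable names are invented for this sketch.

```python
def break_lines(words, width):
    """Minimize the sum of squared trailing spaces over all lines."""
    n = len(words)
    INF = float("inf")
    # best[i] = minimal cost of typesetting words[i:];
    # breaks[i] = index of the word that starts the next line in that solution.
    best = [0.0] * (n + 1)
    breaks = [n] * (n + 1)
    for i in range(n - 1, -1, -1):
        best[i] = INF
        line_len = -1  # cancels the space added before the first word below
        for j in range(i + 1, n + 1):
            line_len += 1 + len(words[j - 1])  # one space plus the word
            if line_len > width:
                break
            slack = width - line_len
            cost = slack ** 2 + best[j]
            if cost < best[i]:
                best[i], breaks[i] = cost, j
    # Reconstruct the lines from the recorded break points.
    lines, i = [], 0
    while i < n:
        j = breaks[i]
        lines.append(" ".join(words[i:j]))
        i = j
    return best[0], lines


if __name__ == "__main__":
    cost, lines = break_lines("AAA BB CC DDDDD".split(), 6)
    print(cost)              # 11.0, i.e. 3^2 + 1^2 + 1^2
    for line in lines:
        print(f"{line:<6}|")  # AAA / BB CC / DDDDD
```

Running the sketch on the example text reproduces the optimal cost of 11 and the break before BB, rather than the greedy cost of 17.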
Knuth–Plass line-breaking algorithm
[ "Mathematics" ]
619
[ "Algorithms", "Mathematical logic", "Applied mathematics" ]
76,484,106
https://en.wikipedia.org/wiki/NGC%201947
NGC 1947 is a peculiar lenticular galaxy in the constellation Dorado. The galaxy lies about 50 million light years away from Earth, which means, given its apparent dimensions, that NGC 1947 is approximately 75,000 light years across. It was discovered by James Dunlop on November 5, 1826. Characteristics The galaxy is characterised by the presence of dust lanes across its minor axis, indicating that it is a polar-ring galaxy. It has one central dust lane, while three fainter lanes, which look like concentric rings, are also visible. Although it is categorised as a lenticular galaxy, it lacks a disk, having thus more in common with elliptical galaxies. Molecular gas has been detected around the nucleus of the galaxy with an estimated hydrogen mass of . The gas rotates about an axis perpendicular to that of the stars of the galaxy, but in its inner region it is warped. The kinematics suggest that the dust and gas have an external origin, probably accreted from a gas-rich galaxy; the lack of tidal tails argues against an origin in an unequal-mass merger with a disk galaxy. The nucleus of the galaxy has been found to be active, and it is categorised as a LINER. The most accepted theory for the energy source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. Nearby galaxies NGC 1947 is the brightest galaxy in the NGC 1947 group, which also includes the galaxies ESO 085–065, ESO 085–088, and ESO 086–010. This group lies close to the Dorado Group and is part of the Southern Supercluster. References External links Lenticular galaxies Polar-ring galaxies Peculiar galaxies Dorado NGC 1947 Group 1947 17296 Discoveries by James Dunlop Astronomical objects discovered in 1826 085-G87 05264-6347
NGC 1947
[ "Astronomy" ]
392
[ "Dorado", "Constellations" ]
76,494,117
https://en.wikipedia.org/wiki/Gustaaf%20Van%20Tendeloo
Gustaaf Van Tendeloo (born 1950), or Staf Van Tendeloo, is a Belgian physicist known for his contributions to electron microscopy, electron crystallography, and the physics of materials. In 2011, his group reported the first atomically resolved reconstruction of a nanoparticle in 3D. Van Tendeloo was born in Lier, Belgium. He obtained his licentiate in physics from the Vrije Universiteit Brussel (VUB; 'Free University of Brussels') in 1972, followed by his doctorate from the University of Antwerp in 1974 under the supervision of Severin Amelinckx. He received an aggregation from the VUB in 1981. Since 1972, Van Tendeloo has been associated with the University of Antwerp, where he is professor of solid-state physics and also serves as professor of the physics of materials. In 1986 he became a part-time professor at the VUB, and since 1994 he has been a full professor at the University of Antwerp. Since 2003, he has been the head of the electron microscopy laboratory EMAT (Electron Microscopy of Materials). Throughout his career, Van Tendeloo has undertaken significant research endeavors both domestically and internationally, including research stints at the University of California, Berkeley, the University of Illinois Urbana-Champaign, and the Université de Caen. He has been a member of the Royal Flemish Academy of Belgium for Science and the Arts since 2010. He received the Dr. De Leeuw-Damry-Bourlart Prize from the Research Foundation – Flanders (FWO) in 2015. In 2023, he received an honorary doctorate from the University of Zaragoza. Bibliography References Living people 1950 births People from Lier, Belgium Belgian physicists Materials scientists and engineers Vrije Universiteit Brussel alumni University of Antwerp alumni Academic staff of the University of Antwerp Academic staff of Vrije Universiteit Brussel Members of the Royal Flemish Academy of Belgium for Science and the Arts Crystallographers
Gustaaf Van Tendeloo
[ "Chemistry", "Materials_science", "Engineering" ]
412
[ "Crystallographers", "Crystallography", "Materials scientists and engineers", "Materials science" ]
76,498,336
https://en.wikipedia.org/wiki/Finnish%20Energy%20Authority
The Finnish Energy Authority () is an expert authority within the Ministry of Economic Affairs and Employment in Finland. It was initially named the Electricity Market Center (SMK) and, before the most recent name change, the Energy Market Authority (EMV). Background Electricity transmission in Finland is a natural monopoly, managed by regional transmission companies. The Energy Authority oversees these companies to ensure that they do not abuse their monopoly position and overcharge their customers. Functions The Energy Authority's tasks include monitoring pricing in the electricity transmission grid and the natural gas markets, and maintaining Finland's national emissions trading registry. The agency also provides information on electricity prices to consumers to support competition among electricity suppliers. Additionally, the Energy Authority oversees the implementation of electricity guarantees of origin, ensuring that energy sold to consumers as a particular type, such as wind energy, has actually been produced by that source. The Energy Authority also administers the renewable energy feed-in tariffs that came into effect in 2011. Regarding emissions trading, the Energy Authority grants and monitors emissions permits, oversees the implementation of emissions trading, approves emissions verifiers, and acts as the auctioneer of emission allowances in Finland. History In early 2014, the Energy Authority adopted its current name and assumed tasks related to energy efficiency and the promotion of renewable energy from the Ministry of Economic Affairs and Employment. At that time, the Energy Authority employed 70 people. Simo Nurmi has been the Director-General since April 2015. See also European Union Agency for the Cooperation of Energy Regulators References External links Energy Authority Government agencies of Finland Electric power Energy production
Finnish Energy Authority
[ "Physics", "Engineering" ]
318
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
69,164,293
https://en.wikipedia.org/wiki/Compliance%20constants
Compliance constants are the elements of an inverted Hessian matrix. The calculation of compliance constants provides an alternative description of chemical bonds in comparison with the widely used force constants, explicitly ruling out the dependency on the coordinate system. They provide a unique description of the mechanical strength of covalent and non-covalent bonds. While force constants (as energy second derivatives) are usually given in aJ/Å² or N/cm, compliance constants are given in Å²/aJ or Å/mdyn. History Recent publications reporting the detection or isolation of novel compounds with intriguing bonding character can still be provocative at times. The stir in such discoveries arises partly from the lack of a universally accepted bond descriptor. While bond dissociation energies (BDE) and rigid force constants have generally been regarded as the primary tools for such interpretation, they can yield a flawed definition of chemical bonds in certain scenarios, whether simple or controversial. These limitations prompted the search for an alternative approach that describes covalent and non-covalent interactions more rigorously. Jörg Grunenberg, a German chemist at the TU Braunschweig, and his Ph.D. student at the time, Kai Brandhorst, developed the program COMPLIANCE (freely available to the public), which harnesses compliance constants for tackling these tasks. The authors use an inverted matrix of force constants, i.e., the inverted Hessian matrix, originally introduced by W. T. Taylor and K. S. Pitzer. The insight behind choosing the inverted matrix comes from the realization that not all elements of the Hessian matrix are necessary for describing covalent and non-covalent interactions; the remaining elements are redundant. Such redundancy is common for many molecules and, more importantly, it introduces a dependence of the Hessian matrix elements on the choice of coordinate system. The authors therefore argued that force constants, albeit more widely used, are not an appropriate bond descriptor, whereas the non-redundant and coordinate-system-independent compliance constants are. Theory Force constants By Taylor series expansion, the potential energy V of any molecule can be expressed as V(x) = V(0) + gᵀx + ½ xᵀHx + … (eq. 1), where x is a column vector of arbitrary and fully determined displacement coordinates, and g and H are the corresponding gradient (first derivative of V) and Hessian (second derivative of V), respectively. The point of interest is a stationary point on the potential energy surface (PES), so g is treated as zero, and by considering the relative energy, V(0) becomes zero as well. By assuming a harmonic potential and regarding the third-derivative term and higher as negligible, the potential energy formula then simply becomes V(x) = ½ xᵀHx (eq. 2). Transitioning from Cartesian coordinates to internal coordinates q, which are more commonly used for the description of molecular geometries, gives rise to equation 3: V(q) = ½ qᵀFq (eq. 3), where F is the corresponding Hessian in internal coordinates (commonly referred to as the force constants), which is in principle determined by the frequencies of a sufficient set of isotopic molecules. Since the Hessian is the second derivative of the energy with respect to the displacements, and that is the same as the (negative) first derivative of the force, evaluation of this property, as shown in equation 4, is often used to describe chemical bonds: F_ij = ∂²V/∂q_i∂q_j = −∂f_i/∂q_j (eq. 4). 
Nevertheless, there are several issues with this method, as explained by Grunenberg, including the dependence of the force constants on the choice of internal coordinates and the presence of redundant Hessian elements, which have no physical meaning and consequently lead to an ill-defined description of bond strength. Compliance constants Rather than using internal displacement coordinates, an alternative approach, as explained by Decius, is to write the potential energy of a molecule as a quadratic form in terms of the generalized displacement forces (the negative gradient) f: V = ½ fᵀCf (eq. 5). This gradient is the first derivative of the potential energy with respect to the displacement coordinates and can be expressed as f = −∂V/∂q = −Fq (eq. 6). By substituting this expression for f from equation 6 into equation 5, equation 7 is obtained: V = ½ qᵀFᵀCFq (eq. 7). Thus, with the knowledge that F is positive definite, the only possible value of C, which is the compliance matrix, must be C = F⁻¹ (eq. 8). Equation 7 offers a surrogate formulation of the potential energy which proves to be significantly advantageous in defining chemical bonds. Specifically, this method is independent of the coordinate selection and also eliminates the issue of the redundant Hessian from which the common force-constant approach suffers. Intriguingly, compliance constants can be calculated regardless of the redundancy of the coordinates. Archetype of compliance constants calculation Cyclobutane: force constants calculations To illustrate how the choice of coordinate system for calculations of chemical bonds can immensely affect the results and consequently yield ill-defined bond descriptors, sample calculations for n-butane and cyclobutane are shown in this section. It is known that all four equivalent C-C bonds in cyclobutane are weaker than either of the two distinct C-C bonds in n-butane; therefore, comparing the strengths of the C-C bonds in these C4 systems exemplifies how force constants fail and how compliance constants do not. The tables immediately below are results calculated at the MP2/aug-cc-pVTZ level of theory using a typical force-constant calculation. Tables 1 and 2 display the force constant in N/cm between each pair of carbon atoms (diagonal) as well as the couplings (off-diagonal). Considering the natural internal coordinates on the left, the results make chemical sense. Firstly, the C-C bonds in n-butane are generally stronger than those in cyclobutane, which is in line with what is expected. Secondly, the C-C bonds in cyclobutane are equivalent, with force constant values of 4.173 N/cm. Lastly, there is little coupling between the force constants, as seen from the small coupling constants in the off-diagonal terms. However, when Z-matrix coordinates are used, the results differ from those obtained with natural internal coordinates and become erroneous. The four C-C bonds all have distinct values in cyclobutane, and the coupling becomes much more pronounced. Significantly, the force constants of the C-C bonds in cyclobutane here are also larger than those of n-butane, which is in conflict with chemical intuition. Clearly, for cyclobutane and numerous other molecules, using force constants gives rise to inaccurate bond descriptors because of their dependence on the coordinate system. Cyclobutane: compliance constants calculations A more accurate approach, as claimed by Grunenberg, is to exploit compliance constants as a means of describing chemical bonds, as shown below. 
All the calculated compliance constants above are given in cm/N, the reciprocal of the force-constant unit. For both n-butane and cyclobutane, the results are the same regardless of the choice of coordinate system. One respect in which compliance constants prove more powerful than force constants for cyclobutane is the weaker coupling: the compliance coupling constants are the off-diagonal elements of the inverted Hessian matrix and, together with the compliance constants, they physically describe the relaxed distortion of a molecule along a minimum-energy path. Moreover, the compliance constants yield the same value for all the C-C bonds of cyclobutane, and the values are lower than those obtained for n-butane. Compliance constants thus give results that are in accordance with what is generally known about the ring strain of cyclobutane. Applications to main group compounds Diboryne Diboryne, a compound with a boron-boron triple bond, was first isolated as an N-heterocyclic carbene-supported complex (NHC-BB-NHC) by the Braunschweig group, and its unique, peculiar bonding structure thereupon catalyzed new research to computationally assess the nature of this then-controversial triple bond. A few years later, Köppe and Schnöckel published an article arguing that the B-B bond should be defined as a 1.5 bond based on a thermodynamic view and rigid force-constant calculations. That same year, Grunenberg reassessed the B-B bond using generalized compliance constants, which he claimed are better suited as a bond-strength descriptor. The calculated relaxed force constants show a clear trend as the B-B bond order increases, which supports the existence of the triple bond in Braunschweig's compound. Digallium bonds Grunenberg and N. Goldberg probed the bond strength of a Ga-Ga triple bond by calculating the compliance constants of digallium complexes with a single bond, a double bond, or a triple bond. The results show that the Ga-Ga triple bond of a model Na[H-GaGa-H] compound in C symmetry, with a compliance constant value of 0.870 aJ/Å, is in fact weaker than a Ga-Ga double bond (1.201 aJ/Å). Watson-Crick base pairs Besides chemical bonds, compliance constants are also useful for characterizing non-covalent bonds, such as the H-bonds in Watson-Crick base pairs. Grunenberg calculated the compliance constant for each of the donor-H⋯acceptor linkages in AT and CG base pairs and found that the central N-H⋯N bond in the CG base pair is the strongest one, with a compliance constant of 2.284 Å/mdyn. (Note that this unit is the reverse of a force-constant unit.) In addition, one of the three hydrogen-bonding interactions in an AT base pair shows an extremely large compliance value of >20 Å/mdyn, indicative of a weak interaction. References Chemical bonding Intermolecular forces Chemistry
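As a minimal numerical illustration of equation 8 (C = F⁻¹), the sketch below inverts a small force-constant matrix with NumPy and reads the compliance constants off the diagonal of the result. The 2×2 matrix is invented purely for illustration; it is not output from the COMPLIANCE program and is unrelated to the n-butane and cyclobutane values quoted above.

```python
import numpy as np

# Toy force-constant (Hessian) matrix F for two coupled stretch coordinates,
# in aJ/Angstrom^2; diagonal terms are the stretches, the off-diagonal term
# is their coupling. Values are illustrative only.
F = np.array([[4.2, 0.3],
              [0.3, 5.1]])

C = np.linalg.inv(F)                 # compliance matrix (eq. 8), Angstrom^2/aJ
compliance_constants = np.diag(C)    # diagonal elements = compliance constants

print("Compliance matrix C:\n", C)
print("Compliance constants:", compliance_constants)

# Unlike 1/F[0, 0], the compliance constant C[0, 0] already accounts for the
# relaxation of the other coordinate, which is why the two differ whenever
# the coupling term is non-zero.
print("1/F[0,0] =", 1.0 / F[0, 0], "  C[0,0] =", C[0, 0])
```

The contrast printed on the last line is the practical point of the method: a compliance constant describes the stiffness of one coordinate while all others are allowed to relax, whereas a rigid force constant holds them fixed.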
Compliance constants
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,037
[ "Molecular physics", "Materials science", "Intermolecular forces", "Condensed matter physics", "nan", "Chemical bonding" ]
69,164,461
https://en.wikipedia.org/wiki/Tauc%E2%80%93Lorentz%20model
The Tauc–Lorentz model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit the complex refractive index of amorphous semiconductor materials at photon energies greater than their optical band gap. The dispersion relation bears the names of Jan Tauc and Hendrik Lorentz, whose previous works were combined by G. E. Jellison and F. A. Modine to create the model. The model was inspired, in part, by shortcomings of the Forouhi–Bloomer model, which is aphysical due to its incorrect asymptotic behavior and non-Hermitian character. Despite the inspiration, the Tauc–Lorentz model is itself aphysical due to being non-Hermitian and non-analytic in the upper half-plane. Later researchers have modified the model to address these shortcomings. Mathematical formulation The general form of the model is given by ε(E) = ε∞ + χ(E), where ε is the relative permittivity, E is the photon energy (related to the angular frequency by E = ħω), ε∞ is the value of the relative permittivity at infinite energy, and χ is related to the electric susceptibility. The imaginary component of ε is formed as the product of the imaginary component of the Lorentz oscillator model and a model developed by Jan Tauc for the imaginary component of the relative permittivity near the bandgap of a material. The real component of ε is obtained via the Kramers–Kronig transform of its imaginary component. Mathematically, they are given by ε₂(E) = A E₀ C (E − E_g)² / {[(E² − E₀²)² + C²E²] E} for E > E_g and ε₂(E) = 0 for E ≤ E_g, and ε₁(E) = ε∞ + (2/π) P ∫ from E_g to ∞ of ξ ε₂(ξ)/(ξ² − E²) dξ, where A is a fitting parameter related to the strength of the Lorentzian oscillator, C is a fitting parameter related to the broadening of the Lorentzian oscillator, E₀ is a fitting parameter related to the resonant frequency of the Lorentzian oscillator, and E_g is a fitting parameter related to the bandgap of the material. Computing the Kramers–Kronig transform gives a lengthy closed-form expression for ε₁(E) in terms of several auxiliary quantities, which is not reproduced here. See also Cauchy equation Sellmeier equation Lorentz oscillator model Forouhi–Bloomer model Brendel–Bormann oscillator model References Condensed matter physics Electric and magnetic fields in matter Optics
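As a rough numerical sketch of the expression reconstructed above, the function below evaluates the imaginary part ε₂(E). The parameter values in the example call are arbitrary illustrative numbers rather than a fit to any real material, and the real part ε₁(E), obtainable from the Kramers–Kronig transform (given in closed form by Jellison and Modine), is omitted.

```python
import numpy as np

def eps2_tauc_lorentz(E, A, E0, C, Eg):
    """Imaginary part of the Tauc-Lorentz dielectric function.

    E  : photon energies (eV), array-like
    A  : strength of the Lorentzian oscillator
    E0 : resonant (peak transition) energy
    C  : broadening of the Lorentzian oscillator
    Eg : optical band gap
    """
    E = np.atleast_1d(np.asarray(E, dtype=float))
    eps2 = np.zeros_like(E)
    above = E > Eg                      # eps2 is zero at and below the gap
    Ea = E[above]
    eps2[above] = (A * E0 * C * (Ea - Eg) ** 2) / (
        ((Ea ** 2 - E0 ** 2) ** 2 + (C * Ea) ** 2) * Ea
    )
    return eps2

if __name__ == "__main__":
    energies = np.linspace(0.5, 6.0, 6)   # eV
    print(eps2_tauc_lorentz(energies, A=100.0, E0=3.5, C=1.0, Eg=1.5))
```

The piecewise definition reproduces the defining feature of the model: the response vanishes below the band gap and peaks near the oscillator energy E₀ above it.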
Tauc–Lorentz model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
466
[ "Applied and interdisciplinary physics", "Optics", "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", " molecular", "Atomic", "Matter", " and optical physics" ]