Dataset schema: id (int64; 39 to 79M) · url (string; length 32 to 168) · text (string; length 7 to 145k) · source (string; length 2 to 105) · categories (list; length 1 to 6) · token_count (int64; 3 to 32.2k) · subcategories (list; length 0 to 27)
74,123,635
https://en.wikipedia.org/wiki/Kenneth%20E.%20Goodson
Kenneth Eugene Goodson (born August 1, 1967) is an American mechanical engineer and academic at Stanford University. He serves as the Davies Family Provostial Professor at the university, as well as Senior Associate Dean for Faculty and Academic Affairs within its School of Engineering. Early life According to Who's Who in the World, Goodson was born in Lafayette, Indiana on August 1, 1967. Education Goodson has received four academic degrees from the Massachusetts Institute of Technology (MIT): two Bachelor of Science degrees in 1989 (one in mechanical engineering and another in music), a Master of Science in 1991 (in mechanical engineering), and a Doctor of Philosophy in 1993 (also in mechanical engineering). Career From 1993 to 1994, he worked for Daimler-Benz AG in Germany as a visiting materials scientist. He has been employed at Stanford since 1994 as a professor in the mechanical engineering department. By courtesy, he also holds a professorship in the materials science & engineering department. Starting in 2008, he was the vice chair of mechanical engineering, and from 2013 to 2019, he held the Robert Bosch Chair of the department. Additionally, since 2014, he has held the Davies Family Professorship. He is the principal investigator of the Stanford NanoHeat Lab and an affiliated faculty member of Stanford Bio-X. Goodson is a fellow of the American Association for the Advancement of Science, American Society of Mechanical Engineers, Institute of Electrical and Electronics Engineers, American Physical Society, and National Academy of Inventors. He is also an elected member of the National Academy of Engineering (class of 2020). Recognition Goodson appears in the 35th through 38th editions of American Men and Women of Science. Personal life Goodson moonlights as a baritone soloist in oratorio; he has held voice fellowships from the Tanglewood Music Festival and received the Sudler Prize for Arts Achievement (conferred by MIT in 1989). He also posts about woodworking and cycling to his Instagram and Strava accounts, respectively. His wife, Laura Dahl, is a pianist who performs with the Stanford music faculty. Goodson has also been noted as a cellist. Notes References External links Kenneth Goodson on Stanford Profiles 1967 births People from Lafayette, Indiana Energy engineers American mechanical engineers Engineers from California Massachusetts Institute of Technology alumni Stanford University Department of Mechanical Engineering faculty Members of the United States National Academy of Engineering Fellows of the American Association for the Advancement of Science Fellows of the American Society of Mechanical Engineers Fellows of the IEEE Fellows of the American Physical Society Fellows of the National Academy of Inventors Living people
Kenneth E. Goodson
[ "Engineering" ]
514
[ "Energy engineering", "Energy engineers" ]
74,127,498
https://en.wikipedia.org/wiki/Arsene%20Tema%20Biwole
Arsene Tema Biwole is a Cameroonian nuclear engineer and plasma physicist at the Massachusetts Institute of Technology (MIT). Biography Early life and education Arsene Tema Biwole was born on June 15, 1992, at the "Camp Bamoun" (built during the German colonial period) in Bafoussam, western Cameroon. Born premature and often ill during childhood, he and his brothers were raised by a single mother of modest means. Arsene studied Newtonian physics from science books at home, which lacked electricity, by the light of a lamp. He studied nuclear engineering at the Polytechnic School of Turin, becoming the only Cameroonian enrolled in that program. In April 2017, with a grant from the United States Department of Energy, he continued his research for a Master's thesis at General Atomics in San Diego, California, working in the company's Fusion Theory Group. Scientific career In 2017, Arsene Tema Biwole participated with General Atomics in the 59th Meeting of the American Physical Society Division of Plasma Physics, becoming the first Cameroonian in history to join either General Atomics or the Division of Plasma Physics of the American Physical Society. He holds a doctorate in physics from the École Polytechnique Fédérale de Lausanne; his thesis is titled "Measuring the electron energy distribution in tokamak plasmas from polarized electron cyclotron radiation". In June 2023, Arsene Tema Biwole joined the Massachusetts Institute of Technology (MIT) to work on the SPARC tokamak, developed by Commonwealth Fusion Systems in collaboration with the MIT Plasma Science and Fusion Center (PSFC). Honors and distinctions Arsene Tema Biwole was cited by Jeune Afrique in 2018 as one of the most promising African scientists. In 2020, he won the Youth Excellence Prize in Cameroon and was designated Ambassador of the Youth Connekt Cameroon project. In a popular poll carried out by the online news platform Afrik-inform, he was voted the favorite Cameroonian personality in the diaspora for 2020. From January to February 2021, he toured high schools and universities in Cameroon to promote science and encourage scientific vocations among young people. On February 10, 2021, he was cited by Paul Biya, President of the Republic of Cameroon, as a role model for the youth. Later that February, during a public address, he received congratulations and encouragement from Maurice Kamto for his ambitions and projects for Africa and humanity. Arsene Tema Biwole was the guest of "Actualités Hebdo", a weekly news program on CRTV, on February 14, 2021, where he discussed nuclear prospects in Africa and the issue of electrification in Cameroon. In March 2023, he defended his doctoral thesis in physics at the École Polytechnique Fédérale de Lausanne; the jury unanimously proposed the thesis for the EPFL doctoral program thesis prize. Honors EPFL Doctoral Program Thesis Distinction, 2023, nominee. Excellence in Africa Ambassador of the École Polytechnique Fédérale de Lausanne. Knight of the Order of Cameroonian Merit by decree of August 31, 2021, signed by the President of the Republic of Cameroon. Banca Sella research award, 2016. EDISU Piemonte super merit student prize, 2012. Politecnico di Torino Distinguished academic achievement award, 2012.
Notes and references See also Henri Hogbe Nlend 1992 births Living people Nuclear engineers Plasma physicists Cameroonian engineers People from Bafoussam École Polytechnique Fédérale de Lausanne alumni Massachusetts Institute of Technology people
Arsene Tema Biwole
[ "Physics" ]
767
[ "Plasma physicists", "Plasma physics" ]
78,456,163
https://en.wikipedia.org/wiki/Loop%20extrusion
Loop extrusion is a major mechanism of nuclear organization. It is a dynamic process in which structural maintenance of chromosomes (SMC) protein complexes progressively grow loops of DNA or chromatin. In this process, SMC complexes, such as condensin or cohesin, bind to DNA/chromatin, use ATP-driven motor activity to reel in DNA, and as a result, extrude the collected DNA as a loop. Background The organization of DNA presents a remarkable biological challenge: the DNA in a single human cell can reach 2 meters in length, yet it is packed into a nucleus 5-20 µm in diameter. At the same time, critical cell processes, such as transcription, replication, recombination, DNA repair, and cell division, must operate on this highly compacted DNA. Loop extrusion is a key mechanism that organizes DNA into loops, enabling its efficient compaction and functional organization. For instance, in vitro experiments show that cohesin can compact DNA by 80%, while condensin achieves a remarkable 10,000-fold compaction of mitotic chromosomes, as evidenced by microscopy, Hi-C, and polymer simulations. Another challenge lies in establishing long-range genomic communication, which can span hundreds of thousands of base pairs. Physical encounters between genomic elements are intrinsically random and promiscuous without mechanisms to facilitate them. Loop extrusion has been proposed to provide an effective solution to regulate contacts by bringing target elements into proximity while limiting contact with unwanted loci. Key components The key components of the loop extrusion process are: a DNA molecule that serves as the substrate for the movement of the extruder; extruders, usually SMC complexes, that move along DNA in an ATP-dependent manner; and accessory factors, including loaders of the extruder, which facilitate loading of the extruder on DNA (NIPBL/MAU2 usually play the key role), unloaders of the extruder, which facilitate its detachment from DNA (for example, WAPL), and road-blocks located on DNA that hinder extruder movement and lead to stalling of the extrusion machinery. SMC proteins Loop extrusion is performed by the SMC family of protein complexes, which includes cohesin, condensin, and SMC5/6, each playing specialized roles depending on the organism, cell cycle phase, and biological context. Cohesin mediates chromatin loop formation and stabilization, particularly during interphase in vertebrates, where it facilitates transcriptional regulation by promoting distal enhancer-promoter interactions. During mitosis and meiosis, cohesin dissociates from chromosome arms, ceding its loop extrusion role to condensin. Loop extrusion by condensin mediates large-scale chromosome compaction, creating the compact, rod-like chromosome structures required for accurate segregation. Unlike cohesin and condensin, SMC5/6 is a loop-extruding factor that primarily functions in maintaining genome integrity during DNA damage repair and resolving replication stress. Despite their distinct roles, SMC complexes share a highly conserved ring-like structure. Two SMC proteins (SMC1 and SMC3 in the case of cohesin) are connected via a hinge region and linked at their heads by a kleisin subunit, forming a closed ring. These two SMC proteins have ATPase domains at their heads, which bind together and hydrolyze ATP. Cycles of ATP binding and hydrolysis mediate conformational changes in the ring structure, driving DNA translocation and stepwise loop extrusion.
ATP is essential for both initiating loop extrusion (e.g., loading SMC complexes onto DNA) and propagating it (growing loops by translocating along DNA). The tension within the DNA significantly influences extrusion efficiency: at low tension, SMC complexes can make larger loop-capture steps, while higher tension can lead to stalling or reversal of loop extrusion. Modifications and factors for loading/unloading The dynamic nature of loop extrusion is tightly controlled by accessory factors and post-translational modifications, especially in the case of cohesin. In vertebrates, NIPBL (with orthologs such as Scc2 in yeast, acting together with MAU2/Scc4) is crucial for loading SMC complexes onto DNA, initiating and maintaining active extrusion. PDS5 is thought to pause the extrusion process. The SMC complex can then either restart extruding or be unloaded by the additional binding of WAPL, which ensures proper recycling and turnover. Post-translational modifications also play a key role. Acetylation of cohesin by enzymes such as ESCO1 and ESCO2 stabilizes chromatin loops, particularly at CTCF-bound sites. Similarly, SUMOylation, mediated by the NSE2 subunit of the SMC5/6 complex, enhances the recruitment of SMC5/6 to sites of DNA damage, supporting its role in genomic stability. Roadblocks of loop extrusion Loop extruders can encounter various obstacles while extruding, many of which have been shown to directly interact with cohesin and are hypothesized to stop its movement on DNA. However, in vivo experiments demonstrate that cohesin can frequently bypass obstacles larger than its ring size. Other cohesin and condensin molecules: Extruding cohesins and condensins have been found to be obstacles to other extruders that they encounter on the way. As such, they present a fundamental road-block that can be randomly encountered along the DNA. CTCF: The C-terminal DNA-binding domain of CTCF has been shown to directly interact with the SA2 and SCC1 subunits of cohesin to stop extrusion and retain cohesin on DNA, with recent evidence suggesting a tension-dependence to the interaction. CTCF stalls cohesin in a highly directional manner: cohesin can bypass CTCF in one orientation but stalls when encountering it in the opposite orientation. This directionality allows for the creation of isolated domains on the genome called Topologically Associating Domains (TADs), which have been proposed to have a large role in gene regulation. Polymerase: Transcribing polymerases can serve as barriers to cohesin that not only stall extruders but can also act as motors pushing cohesin in the direction of polymerase movement. The size of a polymerase with an RNA transcript is usually larger than the size of the cohesin ring, and the stall force of cohesin is much smaller than that of polymerase, allowing polymerase to function as an effective barrier. Furthermore, it has been found that RNA can directly interact with cohesin subunits. Helicase: MCM helicase has been found to counteract the extrusion of cohesin on DNA. R-loops: Some evidence suggests that R-loops can also act as barriers to loop extrusion, and R-loops have been shown to interact with cohesin subunits. However, other evidence suggests that R-loops may instead act as cohesin loaders.
Molecular mechanism The molecular mechanisms of DNA-loop extrusion by SMC proteins are not yet fully understood, but recent structural studies have made significant progress in developing several working models, such as the scrunching model, the Brownian-ratchet model, the DNA-segment capture model/DNA-pumping model, the hold-and-feed model, and the swing-and-clamp model. Evidence for loop extrusion Evidence for loop extruding molecules and their properties The first direct evidence of loop extrusion came from in vitro imaging studies on fluorescently labeled DNA with condensin or cohesin. Extrusion was found to be ATP-dependent and proceeded at ~1-3 kb/s. The stall force was measured to be around 0.1-1 pN, which is small compared to other molecular motors. Evidence for the biological role of loop extrusion Most work on the biological role of loop extrusion relies on inhibiting loop extruders and observing the consequences. Depletion of cohesin leads to the disappearance of TADs and some genome-wide loss in transcription. In more specific settings, inhibition of cohesin has been found to impair neuronal maturation and the differentiation and function of dendritic cells. Depletion of either condensin I or condensin II at the entry into mitosis leads to abnormal chromosome formation and improper segregation of sister chromatids. Biological function Loop extrusion has been found across the tree of life with suggested roles in immune response, DNA repair, enhancer-promoter interactions, and mitosis. Mitosis in eukaryotes: In mitosis, loop extrusion by condensin is critical for the segregation of sister chromatids and for providing structural rigidity after separation. Condensin I has been found to modulate the size and arrangement of nested inner loops, while condensin II organizes the backbone from which loops emanate. Cell division in bacteria: In bacteria, SMC proteins have been found to maintain the juxtaposition of the chromosome arms by loading at the centromere and extruding toward the terminus. Topologically associating domains (TADs): During interphase, chromosomes are locally compacted at the sub-megabase scale into so-called TADs. Generally, they are bordered by motifs for CTCF and completely disappear if either cohesin or CTCF is degraded. V(D)J recombination: Loop extrusion by cohesin has been found to play a key role in V(D)J recombination, which generates diversity in antibodies and T-cell receptors, as depletion of cohesin inhibits V(D)J recombination. There are CTCF motifs throughout the recombination region, and inversions of their orientation or mutation of the motifs lead to changes in recombination probabilities consistent with those predicted by loop extrusion. Protocadherin promoter choice: Protocadherins are mammalian cell-adhesion proteins of neurons, encoded by multiple similar genes located in the protocadherin locus. Neurons usually express only a subset of the protocadherins, enabling variability in the interactions between neurons. The choice of protocadherins relies on cohesin, which bridges alternative protocadherin promoters with the enhancer in a CTCF-dependent manner. This process involves intricate regulation by CTCF and WAPL. Theoretical models of loop extrusion In mathematical models of loop extrusion, the two legs of a loop-extruding factor (LEF) are represented as points on a one-dimensional line, evolving according to different extrusion policies: LEF translocation: These policies dictate how LEFs move along the chromatin.
These include symmetric extrusion, in which both legs move in opposite directions, and one-sided extrusion, in which one leg remains stalled while the other moves. Cohesin is often modeled with symmetric extrusion, while condensin is thought to follow a one-sided extrusion mechanism. Stochastic binding and unbinding: LEFs bind to chromatin at a random time and position along the chain, and unbind after a characteristic time. LEF-LEF interactions: When LEFs encounter one another, different interaction policies can be implemented. LEFs may halt upon collision, or bypass each other, as observed in some contexts. Extrusion barriers: Bound proteins such as CTCF or RNA polymerase II can act as obstacles, stalling or halting LEF motion. Since the exact modalities of LEF dynamics remain uncertain, these models provide a flexible framework to explore different hypothetical behaviors of LEFs. In these models, the statistics of LEFs are characterized by two key physical parameters: Processivity (λ): the average size of a loop extruded by an unobstructed LEF before dissociating. This characteristic loop size depends on the extrusion speed and the residence time of the LEF on the chromatin. Separation (d): the average distance between LEFs on the chromatin fiber. It is determined by the total number of LEFs N and the length of the chromatin L, with d = L/N. A shorter separation results in denser packing of loops, while a larger separation leaves gaps between loops. The interplay of these two parameters, encapsulated by the dimensionless ratio λ/d, defines two states of chromatin organization: Sparse state (λ/d < 1): LEFs operate independently, forming isolated loops with large gaps between them. This state results in minimal compaction of the chromatin fiber. Dense state (λ/d > 1): LEFs are abundant enough to form a continuous, gapless array of loops. This leads to significant chromatin compaction, as seen during mitosis. References Nuclear organization
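To make the model class concrete, below is a minimal sketch of such a one-dimensional LEF simulation in Python. It is an illustration rather than a published implementation: it assumes symmetric extrusion, stochastic unbinding with immediate rebinding elsewhere, and LEFs halting on collision, and all parameter values (lattice size, LEF number, unbinding probability) are arbitrary choices for demonstration.

```python
# Minimal 1D loop-extrusion (LEF) sketch: symmetric two-leg extrusion,
# stochastic unbinding/rebinding, LEFs stalling on collision.
# All parameter values are illustrative, not measured quantities.
import random

L = 10_000        # chromatin lattice sites
N_LEFS = 40       # number of loop-extruding factors
UNBIND_P = 0.001  # per-step unbinding probability (sets residence time)
STEPS = 50_000

occupied = set()  # lattice sites currently blocked by a LEF leg
lefs = []         # each LEF is a [left_leg, right_leg] pair

def bind():
    """Load a LEF at a random free site as a minimal (i, i+1) loop."""
    i = random.randrange(L - 1)
    if i not in occupied and i + 1 not in occupied:
        occupied.update((i, i + 1))
        lefs.append([i, i + 1])

def step(lef):
    """Symmetric extrusion: each leg tries to move outward one site,
    stalling if the target site is occupied by another LEF (collision)."""
    left, right = lef
    if left - 1 >= 0 and left - 1 not in occupied:
        occupied.discard(left); occupied.add(left - 1); lef[0] = left - 1
    if right + 1 < L and right + 1 not in occupied:
        occupied.discard(right); occupied.add(right + 1); lef[1] = right + 1

for _ in range(STEPS):
    while len(lefs) < N_LEFS:   # rebind immediately to keep N_LEFS bound
        bind()
    for lef in list(lefs):
        if random.random() < UNBIND_P:      # stochastic unbinding
            occupied.difference_update(lef)
            lefs.remove(lef)
        else:
            step(lef)

loop_sizes = [r - l for l, r in lefs]
lam = 2 / UNBIND_P   # processivity: ~2 sites gained per step x mean lifetime
d = L / N_LEFS       # mean separation between LEFs
print(f"mean loop size: {sum(loop_sizes) / len(loop_sizes):.0f} sites")
print(f"lambda/d ~ {lam / d:.1f} (>1 suggests the dense, gapless regime)")
```

With these illustrative numbers the ratio λ/d comes out well above 1, so the simulation should settle into the dense, gapless-loop regime described above; lowering N_LEFS or raising UNBIND_P pushes it toward the sparse state.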
Loop extrusion
[ "Biology" ]
2,714
[ "Nuclear organization", "Cellular processes" ]
78,470,042
https://en.wikipedia.org/wiki/Joint%20Universities%20Accelerator%20School
The Joint Universities Accelerator School (JUAS) is an educational program that specializes in the field of particle accelerator science and technology. Established in 1994, JUAS is a collaborative initiative involving multiple universities, research centers, and institutions. It provides advanced training in accelerator physics and engineering for graduate students and professionals. The program is hosted annually in Archamps, France, near Geneva, Switzerland, under the auspices of the European Scientific Institute (ESI). Its location near the European Organization for Nuclear Research (CERN) and other major research facilities offers participants unique access to cutting-edge expertise and resources. JUAS originated in 1990 through a collaboration between CERN physicists, the European Synchrotron Radiation Facility (ESRF), and the Université Joseph Fourier in Grenoble, France, with support from Haute-Savoie local authorities. Since its inception, the school has trained more than 1,400 participants, creating a skilled workforce for addressing challenges in accelerator physics and technology. Alumni frequently secure positions at leading laboratories, including CERN, DESY, and Fermilab, or apply their skills in sectors such as medicine and energy. By 2024, the school had partnered with 14 universities across Europe. JUAS caters primarily to graduate students with a scientific background who are new to accelerator physics. The program offers two complementary five-week courses: Course 1 focuses on the theoretical foundations of particle accelerator science, and Course 2 covers the technology and applications of particle accelerators. Participants may enroll in one or both courses, which feature interactive teaching by experts from leading institutions and hands-on learning opportunities. Since August 2021, JUAS has been directed by Elias Métral (CERN). Many JUAS lecturers also teach at the CERN Accelerator School. List of former JUAS Directors References External links Official website Physics education Education in France CERN
Joint Universities Accelerator School
[ "Physics" ]
377
[ "Applied and interdisciplinary physics", "Physics education" ]
69,629,099
https://en.wikipedia.org/wiki/Katherine%20Lemos
Katherine Andrea Lemos is an American safety professional and the former chairperson and CEO of the U.S. Chemical Safety and Hazard Investigation Board (CSB). Early life Katherine Lemos was born to John Curtis and Laura Curtis. Her father was a United States Air Force and Air National Guard pilot and a commercial airline pilot. Lemos started flight lessons at the age of fourteen, at which time her father required her to read National Transportation Safety Board publications to learn about aviation safety and accidents. Lemos earned a B.B.A. in business management from Belmont University, an M.S. in behavioral counseling from California Lutheran University, and a Ph.D. in social psychology from the University of Iowa. She also worked as a postdoctoral researcher at the University of Iowa Operator Performance Laboratory and as a NASA Faculty Fellow at Langley Research Center. Lemos is a pilot and certified flight instructor. Career Prior to her appointment to CSB, Lemos worked at Northrop Grumman from 2014 to 2020, serving as the company's director of autonomy and director of programs for the aerospace sector. She had previously worked as a technical leader and program manager for aviation safety at the Federal Aviation Administration, and as an accident investigator and later as Special Assistant to the Vice Chairman of the National Transportation Safety Board. She had also held academic positions at the University of Maryland and Instituto Tecnológico de Aeronáutica. Lemos has specialized in system safety, accident investigation, and human factors. At the time she was nominated to CSB, she had no experience in chemical manufacturing or refinery operations, fields which fall under the purview of CSB investigation. Chemical Safety Board Katherine Lemos was nominated by President Donald Trump to be a member of CSB on June 13, 2019. On July 22, she was nominated by President Trump to serve concurrent five-year appointments as chairperson and CEO of CSB. At the time, the CSB's five-seat board had only three members, one of whom would leave in December 2019. The problem of vacancies on the CSB board was noted by a May 2019 Environmental Protection Agency Office of Inspector General report to be detrimental to CSB's ability to function effectively. A hearing on her nomination was held by the United States Senate Committee on Environment and Public Works in September 2019. Lemos received bipartisan support from committee members during her nomination. Her appointment was confirmed by the Senate by unanimous consent on March 23, 2020. Senator John Barrasso said "it was critical the Senate confirm Dr. Lemos to provide a working quorum to the board"; at the time of Lemos's confirmation, the CSB board had only one member, Kristen Kulinowski, and only eight investigators. She began her tenure on April 23, 2020. Four days after Lemos's term began, Kulinowski announced that she would resign from CSB on May 1, ending the CSB's brief quorum. At this time, CSB had ten unfilled investigator positions. Thereafter, Lemos declared that she could operate as a "quorum of one", citing a legal opinion from the CSB general counsel allowing her to unilaterally run the CSB. A July 2020 Environmental Protection Agency Office of Inspector General report concluded that it remained an open question whether a single CSB board member may constitute a quorum, as doing so would impair the segregation of duties mandated by the Government Accountability Office.
In May 2021, Public Employees for Environmental Responsibility criticized Lemos for accruing $33,000 in travel expenses and $20,000 in office renovations, and for hiring a senior advisor from Northrop Grumman for an undisclosed salary. In a September 2021 hearing before the United States House Energy and Commerce Subcommittee on Oversight and Investigations, Lemos testified that the CSB was "on an upward trend". She said that she intended to expand the staff of CSB to 61 people by September 2023. In September 2021, the Senate Committee on Environment and Public Works approved the nominations of three new board members of the CSB, and in December 2021, two members were confirmed, Steve Owens and Sylvia Johnson. Once seated on the board in February 2022, Owens and Johnson openly disagreed with changes Lemos approved to a board order, which resulted in an expansion of the chairperson's authority. They attempted a procedural vote to make further changes to the order, but Lemos tabled the vote for a public meeting, which ultimately did not occur due to her resignation. Lemos submitted her letter of resignation to the White House in June 2022, citing lost confidence in the board's focus on the agency's mission. Her resignation became effective on July 22, 2022. In June 2023, the EPA Inspector General released a report stating that Lemos had violated federal regulations in her use of board funds for travel, office refurbishment, and media training, but had not violated restrictions placed by a continuing resolution and had not violated regulations in the hiring of senior aides. Senator Chuck Grassley wrote a letter to Lemos requesting that she repay the money indicated as improperly spent in the report. References United States Chemical Safety and Hazard Investigation Board Belmont University alumni California Lutheran University alumni University of Iowa alumni First Trump administration personnel Living people 1967 births
Katherine Lemos
[ "Chemistry" ]
1,070
[ "United States Chemical Safety and Hazard Investigation Board" ]
69,632,821
https://en.wikipedia.org/wiki/YugabyteDB
YugabyteDB is a high-performance transactional distributed SQL database for cloud-native applications, developed by Yugabyte. History Yugabyte was founded by ex-Facebook engineers Kannan Muthukkaruppan, Karthik Ranganathan, and Mikhail Bautin. At Facebook, they were part of the team that built and operated Cassandra and HBase for workloads such as Facebook Messenger and Facebook's Operational Data Store. The founders came together in February 2016 to build YugabyteDB. YugabyteDB was initially available in two editions: community and enterprise. In July 2019, Yugabyte open-sourced previously commercial features and launched YugabyteDB as open-source under the Apache 2.0 license. Funding In October 2021, five years after the company's inception, Yugabyte closed a $188 million Series C funding round, becoming a unicorn start-up with a valuation of $1.3 billion. Architecture YugabyteDB is a distributed SQL database that aims to be strongly transactionally consistent across failure zones (i.e., ACID-compliant). YugabyteDB has never fully passed Jepsen testing, the de facto industry standard for verifying correctness, mainly due to race conditions during schema changes. In CAP theorem terms, YugabyteDB is a Consistent/Partition-Tolerant (CP) database. YugabyteDB has two layers: a storage engine known as DocDB and the Yugabyte Query Layer. DocDB The storage engine consists of a customized RocksDB combined with sharding and load-balancing algorithms for the data. In addition, the Raft consensus algorithm controls the replication of data between the nodes. There is also a distributed transaction manager and multiversion concurrency control (MVCC) to support distributed transactions. The engine also exploits a hybrid logical clock that combines coarsely-synchronized physical clocks with Lamport clocks to track causal relationships. The DocDB layer is not directly accessible by users. YugabyteDB Query Layer Yugabyte has a pluggable query layer that abstracts the query layer from the storage layer below. There are currently two APIs that can access the database: YSQL is a PostgreSQL code-compatible API based around v11.2. YSQL is accessed via standard PostgreSQL drivers using native protocols. It exploits the native PostgreSQL code for the query layer and replaces the storage engine with calls to the pluggable query layer. This re-use means that Yugabyte supports many PostgreSQL features, including triggers and stored procedures, PostgreSQL extensions that operate in the query layer, and native JSONB support. YCQL is a Cassandra-like API based around v3.10 and re-written in C++. YCQL is accessed via standard Cassandra drivers using the native protocol port of 9042. In addition to the 'vanilla' Cassandra components, YCQL is augmented with the following features: transactional consistency (unlike Cassandra, Yugabyte YCQL is transactional), natively supported JSON data types, and secondary indexes on tables. Currently, data written to either API is not accessible via the other API; however, YSQL can access YCQL using the PostgreSQL foreign data wrapper feature. The security model for accessing the system is inherited from the API, so access controls for YSQL look like PostgreSQL's, and access controls for YCQL look like Cassandra's. Cluster-to-cluster replication In addition to its core functionality of distributing a single database, YugabyteDB has the ability to replicate between database instances. The replication can be one-way or bi-directional and is asynchronous.
One-way replication is used either to create a read-only copy for workload off-loading or, in a read-write mode, to create an active-passive standby. Bi-directional replication is generally used in read-write settings such as active-active configurations and geo-distributed applications. Migration tooling Yugabyte also provides YugabyteDB Voyager, tooling to facilitate the migration of Oracle and other similar databases to YugabyteDB. This tool supports the migration of schemas, procedural code, and data from the source platform to YugabyteDB. See also Cloud database Distributed SQL Comparison of relational database management systems Comparison of object–relational database management systems Cloud native computing Database management system List of databases using MVCC List of relational database management systems CockroachDB TiDB References External links Slack community Cloud databases Database companies Bigtable implementations Database-related software for Linux NewSQL Distributed computing Computer systems Software companies of the United States Companies based in Silicon Valley
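As a quick illustration of the two client APIs described above, here is a hedged Python sketch. It assumes a local single-node cluster with commonly used defaults (YSQL on port 5433 with user and database "yugabyte"; YCQL on the Cassandra native protocol port 9042, as noted above); hosts, ports, and credentials will differ in a real deployment.

```python
# Hypothetical connection sketch for YugabyteDB's two APIs. Host, port,
# user, and database values are assumptions for a local test cluster.
# Requires: pip install psycopg2-binary cassandra-driver
import psycopg2                        # standard PostgreSQL driver works for YSQL
from cassandra.cluster import Cluster  # standard Cassandra driver works for YCQL

# YSQL: the PostgreSQL-compatible API (assumed default port 5433)
conn = psycopg2.connect(host="127.0.0.1", port=5433,
                        user="yugabyte", dbname="yugabyte")
with conn.cursor() as cur:
    cur.execute("SELECT version();")   # reports a PostgreSQL-compatible version
    print(cur.fetchone()[0])
conn.close()

# YCQL: the Cassandra-like API on the native protocol port 9042
cluster = Cluster(["127.0.0.1"], port=9042)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
cluster.shutdown()
```

Because each API is wire-compatible with its upstream, existing PostgreSQL and Cassandra client libraries can be reused largely unchanged; only the connection details differ.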
YugabyteDB
[ "Engineering" ]
987
[ "Reliability engineering", "Fault tolerance" ]
69,633,577
https://en.wikipedia.org/wiki/Crown%20distillery
A crown distillery was one of the state-operated distilleries in Sweden between 1775 and 1824. Establishment The network of crown distilleries was established in 1775, when King Gustav III, acting on the advice of his finance minister, declared a state monopoly over the production and sale of alcoholic spirits, with the twofold goal of raising extra revenues for the state while also reducing alcohol consumption and its accompanying health and social problems. It therefore became illegal to obtain spirits by any means other than from the new Crown Distilleries, and as such the importation of spirits from abroad was banned, as was the distillation of spirits by private individuals. Distilleries Some 50-60 Crown Distilleries were established across the kingdom, including the Strömsbro Crown Distillery in Gävle and two distilleries on Södermalm in Stockholm. Closure The state monopoly was hugely unpopular, especially among the common people, as it banned the longstanding Swedish tradition of husbehovsbränning (roughly translatable as 'distillation for household needs'), and many people flouted the restrictions and continued to distil spirits illegally. Moreover, the Crown Distilleries themselves failed to turn a profit. Gustav III was therefore forced to lift the monopoly at a subsequent session of the Riksdag (the Swedish parliament), and most of the Crown Distilleries were shut down over the next couple of years. However, a few of the more profitable ones remained in operation for some time thereafter, and the last did not close until as late as 1824. See also Gustav III of Sweden Systembolaget References Industrial history of Sweden Alcohol monopolies 18th century in Sweden Alcohol in Sweden Distilleries Alcohol law
Crown distillery
[ "Chemistry" ]
361
[ "Distilleries", "Distillation" ]
66,909,336
https://en.wikipedia.org/wiki/Lanthanum%28III%29%20nitrate
Lanthanum(III) nitrate is any inorganic compound with the chemical formula La(NO3)3·(H2O)n. It is used in the extraction and purification of lanthanum from its ores. The compound decomposes at 499 °C to lanthanum oxide, nitric oxide, and oxygen. Preparation Lanthanum nitrate is prepared by reacting lanthanum oxide with nitric acid, which produces lanthanum(III) nitrate and water. References Lanthanum compounds Nitrates
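A balanced equation for the preparation just described, written out from standard acid-oxide stoichiometry (the equation itself is not given in the source): La2O3 + 6 HNO3 → 2 La(NO3)3 + 3 H2O.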
Lanthanum(III) nitrate
[ "Chemistry" ]
93
[ "Inorganic compounds", "Oxidizing agents", "Salts", "Nitrates", "Inorganic compound stubs" ]
66,917,811
https://en.wikipedia.org/wiki/1-Hexyne
1-Hexyne is a hydrocarbon consisting of a straight six-carbon chain having a terminal alkyne. Its molecular formula is C6H10. A colorless liquid, it is one of three isomers of hexyne. It is used as a reagent in organic synthesis. Synthesis and reactions 1-Hexyne can be prepared by the reaction of monosodium acetylide with butyl bromide: HC≡CNa + C4H9Br → HC≡CC4H9 + NaBr. Its reactivity illustrates the behavior of terminal alkylacetylenes. The hexyl derivative is a common test substrate because it is conveniently volatile. It undergoes deprotonation at C-3 and C-1 with butyllithium: HC≡CC4H9 + 2 BuLi → LiC≡CCH(Li)C3H7 + 2 BuH. This reaction allows alkylation at the 3-position. Catecholborane adds to 1-hexyne to give the 1-hexenylborane. 1-Hexyne also reacts with diethyl fumarate. See also 2-Hexyne 3-Hexyne References Alkynes
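As an illustrative equation for the hydroboration mentioned above (a sketch based on the standard anti-Markovnikov syn addition of catecholborane to terminal alkynes; the source does not spell it out): C4H9C≡CH + HBcat → (E)-C4H9CH=CH-Bcat, where Bcat denotes the catecholboryl group.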
1-Hexyne
[ "Chemistry" ]
203
[ "Organic compounds", "Alkynes" ]
77,151,490
https://en.wikipedia.org/wiki/List%20of%20Tesla%20Autopilot%20crashes
Tesla Autopilot, a Level 2 advanced driver assistance system (ADAS), was released in October 2015, and the first fatal crashes involving the system occurred less than one year later. The fatal crashes attracted attention from news publications and United States government agencies, including the National Transportation Safety Board (NTSB) and National Highway Traffic Safety Administration (NHTSA), which has argued that the Tesla Autopilot death rate is higher than the reported estimates. In addition to fatal crashes, there have been many nonfatal ones. Causes behind the incidents include the ADAS failing to recognize other vehicles, insufficient driver engagement while Autopilot is active, and use outside the system's operational design domain. In total, there have been hundreds of nonfatal incidents involving Autopilot and fifty-one reported fatalities, forty-four of which were later verified by NHTSA investigations or expert testimony, and two of which NHTSA's Office of Defects Investigations verified as occurring while Full Self-Driving (FSD) was engaged. Collectively, these cases culminated in a general recall in December 2023 of all vehicles equipped with Autopilot, which Tesla claims it resolved by an over-the-air software update. Immediately after closing its investigation in April 2024, NHTSA opened a recall query to determine the effectiveness of the recall. Fatal crashes Handan, Hebei, China (January 20, 2016) On January 20, 2016, Gao Yaning, the driver of a Tesla Model S in Handan, Hebei, China, was killed when his car crashed into a stationary street-sweeping truck. The Tesla was following a car in the far left lane of a multi-lane highway; the car in front moved to the right lane to avoid the truck stopped on the left shoulder, and the Tesla, which the driver's father believes was in Autopilot mode, did not slow before colliding with it. According to footage captured by a dashboard camera, the stationary street sweeper on the left side of the expressway partially extended into the far left lane, and the driver did not appear to respond to the unexpected obstacle. Initially, Yaning was held responsible for the collision by local traffic police, and in July 2016 his family filed a lawsuit against the Tesla dealer who sold the car. The family's lawyer stated the suit was intended "to let the public know that self-driving technology has some defects. We are hoping Tesla when marketing its products, will be more cautious. Do not just use self-driving as a selling point for young people." Tesla released a statement which said they "have no way of knowing whether or not Autopilot was engaged at the time of the crash" since the car telemetry could not be retrieved remotely due to damage caused by the crash. In 2018, the lawsuit stalled because the telemetry had been recorded locally to an SD card and could not be handed over to Tesla, which provided a decoding key to a third party for independent review. Tesla stated that "while the third-party appraisal is not yet complete, we have no reason to believe that Autopilot on this vehicle ever functioned other than as designed." Chinese media later reported that the family sent the information from that card to Tesla, which admitted Autopilot was engaged two minutes before the crash. Tesla has since removed the term "Autopilot" from its Chinese website. Williston, Florida, USA (May 7, 2016) On May 7, 2016, Tesla driver Joshua Brown was killed in a crash with an 18-wheel tractor-trailer in Williston, Florida.
By late June 2016, the NHTSA had opened a formal investigation into the fatal accident, working with the Florida Highway Patrol. According to the NHTSA, preliminary reports indicated the crash occurred when the tractor-trailer made a left turn in front of the 2015 Tesla Model S at an intersection on a non-controlled access highway, and the car failed to apply the brakes. The car continued to travel after passing under the truck's trailer. The Tesla was eastbound in the rightmost lane of US 27, and the westbound tractor-trailer was turning left at the intersection with NE 140th Court, approximately west of Williston; the posted speed limit is . The diagnostic log of the Tesla indicated it was traveling at a speed of when it collided with and traveled under the trailer, which was not equipped with a side underrun protection system. A reconstruction of the accident estimated the driver would have had approximately 10.4 seconds to detect the truck and take evasive action. The underride collision sheared off the Tesla's greenhouse, destroying everything above the beltline, and caused fatal injuries to the driver. In the approximately nine seconds after colliding with the trailer, the Tesla traveled another and came to rest after colliding with two chain-link fences and a utility pole. The NHTSA's preliminary evaluation was opened to examine the design and performance of any automated driving systems in use at the time of the crash, which involved a population of an estimated 25,000 Model S cars. On July 8, 2016, the NHTSA requested that Tesla, Inc. hand over detailed information about the design, operation, and testing of its Autopilot technology. The agency also requested details of all design changes and updates to Autopilot since its introduction, and Tesla's planned updates scheduled for the next four months. According to Tesla, "neither autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." The car attempted to drive full speed under the trailer, "with the bottom of the trailer impacting the windshield of the Model S". Tesla also stated that this was Tesla's first known Autopilot-related death in over 130 million miles (208 million km) driven by its customers while Autopilot was activated. According to Tesla, there is a fatality every 94 million miles (150 million km) among all types of vehicles in the U.S. It is estimated that billions of miles will need to be traveled before Tesla Autopilot can claim to be safer than humans with statistical significance. Researchers say that Tesla and others need to release more data on the limitations and performance of automated driving systems if self-driving cars are to become safe and understood enough for mass-market use. The truck's driver told the Associated Press that he could hear a Harry Potter movie playing in the crashed car, and said the car was driving so quickly that "he went so fast through my trailer I didn't see him. [The film] was still playing when he died and snapped a telephone pole a quarter-mile down the road." The Florida Highway Patrol found an aftermarket portable DVD player in the wreckage. (It is not possible to watch videos on the Model S touchscreen display while the car is moving.) A laptop computer was recovered during the post-crash examination of the wreck, along with an adjustable vehicle laptop mount attached to the front passenger's seat frame.
The NHTSA concluded the laptop was probably mounted, and the driver may have been distracted at the time of the crash. In January 2017, the NHTSA Office of Defects Investigations (ODI) released a preliminary evaluation, finding that the driver in the crash had seven seconds to see the truck and identifying no defects in the Autopilot system; the ODI also found that the Tesla car crash rate dropped by 40 percent after Autosteer installation, but later clarified that it did not assess the effectiveness of this technology or whether it was engaged in its crash rate comparison. The NHTSA Special Crash Investigation team published its report in January 2018. According to the report, for the drive leading up to the crash, the driver engaged Autopilot for 37 minutes and 26 seconds, and the system provided 13 "hands not detected" alerts, to which the driver responded after an average delay of 16 seconds. The report concluded "Regardless of the operational status of the Tesla's ADAS technologies, the driver was still responsible for maintaining ultimate control of the vehicle. All evidence and data gathered concluded that the driver neglected to maintain complete control of the Tesla leading up to the crash." In July 2016, the NTSB announced it had opened a formal investigation into the fatal accident while Autopilot was engaged. The NTSB is an investigative body that only has the power to make policy recommendations. An agency spokesman said, "It's worth taking a look and seeing what we can learn from that event, so that as that automation is more widely introduced we can do it in the safest way possible." The NTSB annually opens about 25 to 30 highway investigations. In September 2017, the NTSB released its report, determining that "the probable cause of the Williston, Florida, crash was the truck driver's failure to yield the right of way to the car, combined with the car driver's inattention due to overreliance on vehicle automation, which resulted in the car driver's lack of reaction to the presence of the truck. Contributing to the car driver's overreliance on the vehicle automation was its operational design, which permitted his prolonged disengagement from the driving task and his use of the automation in ways inconsistent with guidance and warnings from the manufacturer." Mountain View, California, USA (March 23, 2018) On March 23, 2018, a second U.S. Autopilot fatality occurred in Mountain View, California. The crash occurred just before 9:30 a.m. Pacific Daylight Time on southbound US 101 at the carpool lane exit for southbound Highway 85, at a concrete barrier where the left-hand carpool lane offramp separates from 101. After the Model X crashed into the narrow concrete barrier, it was struck by two following vehicles, and then it caught on fire. The driver was Apple engineer Walter Huang. Both the NHTSA and NTSB began investigations into the March 2018 crash. In April 2018, another Model S driver demonstrated that Autopilot appeared to be confused by the road surface markings at the site. The gore ahead of the barrier is marked by diverging solid white lines (a vee shape), and the Autosteer feature of the Model S appeared to mistakenly use the left-side white line instead of the right-side white line as the lane marking for the far left lane, which would have led the Model S into the same concrete barrier had the driver not taken control. Ars Technica concluded that "as Autopilot gets better, drivers could become increasingly complacent and pay less and less attention to the road."
In a corporate blog post, Tesla noted the impact attenuator separating the offramp from US 101 had been previously crushed and not replaced prior to the Model X crash on March 23. The post also stated that Autopilot was engaged at the time of the crash, and the driver's hands had not been detected manipulating the steering wheel for six seconds before the crash. Vehicle data showed the driver had five seconds and an "unobstructed view of the concrete divider, ... but the vehicle logs show that no action was taken." The NTSB investigation had been focused on the damaged impact attenuator and the vehicle fire after the collision, but after it was reported the driver had complained about the Autopilot functionality, the NTSB announced it would also investigate "all aspects of this crash including the driver's previous concerns about the autopilot". An NTSB spokesman stated the organization "is unhappy with the release of investigative information by Tesla". Elon Musk dismissed the criticism, tweeting that NTSB was "an advisory body" and that "Tesla releases critical crash data affecting public safety immediately & always will. To do otherwise would be unsafe." In response, NTSB removed Tesla as a party to the investigation on April 11. NTSB released a preliminary report on June 7, 2018, which provided the recorded telemetry of the Model X and other factual details. Autopilot was engaged continuously for almost nineteen minutes prior to the crash. In the minute before the crash, the driver's hands were detected on the steering wheel for 34 seconds in total, but his hands were not detected for the six seconds immediately preceding the crash. Seven seconds before the crash, the Tesla began to steer to the left and was following a lead vehicle; four seconds before the crash, the Tesla was no longer following a lead vehicle; and during the three seconds before the crash, the Tesla's speed increased to . The driver was wearing a seatbelt and was pulled from the vehicle before it was engulfed in flames. The crash attenuator had been previously damaged on March 12 and had not been replaced at the time of the Tesla crash. The driver involved in the accident on March 12 collided with the crash attenuator at more than and was treated for minor injuries; in comparison, the driver of the Tesla collided with the collapsed attenuator at a slower speed and died from blunt force trauma. After the accident on March 12, the California Highway Patrol failed to report the collapsed attenuator to Caltrans as required. Caltrans was not aware of the damage until March 20, and the attenuator was not replaced until March 26 because a spare was not immediately available. This specific attenuator had required repair more often than any other crash attenuator in the Bay Area, and maintenance records indicated that repair of this attenuator was delayed by up to three months after being damaged. As a result, the NTSB released a Safety Recommendation Report on September 9, 2019, asking Caltrans to develop and implement a plan to guarantee timely repair of traffic safety hardware. At an NTSB meeting held on February 25, 2020, the board concluded the crash was caused by a combination of the limitations of the Tesla Autopilot system, the driver's over-reliance on Autopilot, and driver distraction, likely from playing a video game on his phone. The vehicle's ineffective monitoring of driver engagement was cited as a contributing factor, and the inoperability of the crash attenuator contributed to the driver's injuries.
As an advisory agency, NTSB does not have regulatory power; however, NTSB made several recommendations to two regulatory agencies. The NTSB recommendations to the NHTSA included: expanding the scope of the New Car Assessment Program to include testing of forward collision avoidance systems; determining if "the ability to operate [Tesla Autopilot-equipped vehicles] outside the intended operational design domain pose[s] an unreasonable risk to safety"; and developing driver monitoring system performance standards. The NTSB submitted recommendations to OSHA relating to distracted driving awareness and regulation. In addition, NTSB issued recommendations to manufacturers of portable electronic devices (to develop lock-out mechanisms to prevent driver-distracting functions) and to Apple (to ban the nonemergency use of portable electronic devices while driving). Several NTSB recommendations previously issued to NHTSA, DOT, and Tesla were reclassified to "Open—Unacceptable Response". These included H-17-41 (recommendation to Tesla to incorporate system safeguards that limit the use of automated vehicle control systems to design conditions) and H-17-42 (recommendation to Tesla to more effectively sense the driver's level of engagement). Tesla settled a lawsuit over the incident with the engineer's family in April 2024. Kanagawa, Japan (April 29, 2018) On April 29, 2018, a Tesla Model X operating on Autopilot struck and killed a pedestrian in Kanagawa, Japan, after the driver had fallen asleep. According to a lawsuit filed against Tesla in federal court (N.D. Cal.) in April 2020, the Tesla Model X accelerated from after the vehicle in front of it changed lanes; it then crashed into a van, motorcycles, and pedestrians in the far right lane of the expressway, killing a 44-year-old man who was directing traffic on the road. The original complaint claims the accident occurred due to flaws in Tesla's Autopilot system, such as inadequate monitoring to detect inattentive drivers and an inability to handle traffic situations "that drivers will almost always certainly encounter". In addition, the original complaint claimed this was the first pedestrian fatality to result from the use of Autopilot. According to vehicle data logs, the driver of the Tesla had engaged Autopilot at 2:11 p.m. Japan Standard Time, shortly after entering the Tōmei Expressway. The driver's hands were detected on the wheel at 2:22 p.m. At some point before 2:49 p.m., the driver began to doze off, and at approximately 2:49 p.m., the vehicle ahead of the Tesla signaled and moved one lane to the left to avoid the vehicles stopped in the far right lane of the expressway. While the Tesla was accelerating to resume its preset speed, it struck the man, killing him. He belonged to a motorcycle riding club that had stopped to render aid to a friend who had been involved in an earlier accident; he specifically had been standing apart from the main group while trying to redirect traffic away from that earlier accident. The driver of the Tesla was convicted in a Japanese court of criminal negligence and sentenced to three years in prison (suspended for five years). The suit against Tesla in California was dismissed for forum non conveniens by Judge Susan van Keulen in September 2020 after Tesla said it would accept a case brought in Japan. The plaintiffs appealed the dismissal to the Ninth Circuit Court of Appeals in February 2021, which upheld the lower court's dismissal. Delray Beach, Florida, USA (March 1, 2019) At approximately 6:17 a.m.
Eastern Standard Time on the morning of March 1, 2019, a Tesla Model 3 driving southbound on US 441/SR 7 in Delray Beach, Florida, struck a semi-trailer truck that was making a left-hand turn to northbound SR 7 out of a private driveway at Pero Family Farms; the Tesla underrode the trailer, and the force of the impact sheared off the greenhouse of the Model 3, resulting in the death of the Tesla driver. The driver of the Tesla had engaged Autopilot approximately 10 seconds before the collision, and preliminary telemetry showed the vehicle did not detect the driver's hands on the wheel for the eight seconds immediately preceding the collision. The driver of the semi-trailer truck was not cited. Both the NHTSA and NTSB dispatched investigators to the scene. According to telemetry recorded by the Tesla's restraint control module, the Tesla's cruise control was set 12.3 seconds prior to the collision and Autopilot was engaged 9.9 seconds prior to the collision; at the moment of impact, the vehicle speed was . After the crash and underride, the Tesla continued southbound on SR 7 for approximately before coming to rest in the median between the northbound and southbound lanes. The car sustained extensive damage to the roof, windshield, and other surfaces above , the clearance under the trailer. Although the airbags did not deploy following the collision, the Tesla's driver remained restrained by his seatbelt; upon arriving at the scene, emergency response personnel determined the driver's injuries were incompatible with life. In May 2019, the NTSB issued a preliminary report that determined that neither the driver of the Tesla nor the Autopilot system executed evasive maneuvers. The circumstances of this crash were similar to the fatal underride crash of a Tesla Model S in 2016 near Williston, Florida; in its 2017 report detailing the investigation of that earlier crash, NTSB recommended that Autopilot be used only on limited-access roads (i.e., freeways), which Tesla did not implement. The NTSB issued its final report in March 2020. The probable cause of the collision was the truck driver's failure to yield the right of way to the Tesla; however, the report also concluded that "[a]t no time before the crash did the car driver brake or initiate an evasive steering action. In addition, no driver-applied steering wheel torque was detected for 7.7 seconds before impact, indicating driver disengagement, likely due to overreliance on the Autopilot system." In addition, the NTSB concluded the operational design of the Tesla Autopilot system "permitted disengagement by the driver" and Tesla failed to "limit the use of the system to the conditions for which it was designed"; the NHTSA also failed to develop a method of verifying that manufacturers had safeguards in place to limit the use of ADAS to design conditions. Key Largo, Florida, USA (April 25, 2019) While driving on Card Sound Road, a 2019 Model S ran through a stop sign and a flashing red stop light at the T-intersection with County Road 905, then struck a parked Chevrolet Tahoe, which spun and hit two pedestrians, killing one. A New York Times article later confirmed Autopilot was engaged at the time of the accident. The driver of the Tesla, who was commuting to his home in Key Largo from his office in Boca Raton, dropped his phone while on a call to make flight reservations and bent down to pick it up, failing to stop at the intersection: "I looked down, and I ran the stop sign and hit the guy's car ...
When I popped up and I looked and saw a black truck — it happened so fast", later telling the responding police officers that Autopilot was "stupid cruise control". When the driver of the Tesla called authorities to respond, he spotted only one injured man, who was unconscious and bleeding from the mouth. He told police at the scene that he was driving in "cruise" and was allowed to leave without receiving a citation. Emergency medical personnel saw a woman's shoe under the Tahoe, prompting a search for the second victim, who was found approximately away from the scene, where she had been thrown by the impact. The decedent's family filed separate lawsuits against Tesla and the driver; the suit against the driver was settled out of court. The lawsuit against Tesla alleges the company marketed a vehicle with "defective and unsafe characteristics, such as the failure to adequately determine stationary objects in front of the vehicle, which resulted in the death of [the victim]". Fremont, California, USA (August 24, 2019) In Fremont, California on I-880, while driving north of Stevenson Boulevard, a Ford Explorer pickup was rear-ended by a Tesla Model 3 using Autopilot, causing the pickup's driver to lose control. The pickup overturned, and a 15-year-old passenger in the Ford, who was not wearing a seat belt, was ejected from the pickup and killed. The deceased's parents sued Tesla, claiming in their filing that "Autopilot contains defects and failed to react to traffic conditions." In response, a lawyer for Tesla noted the police had cited the driver of the Tesla for inattention and operating the car at an unsafe speed. The incident has not been investigated by the NHTSA. Cloverdale, Indiana, USA (December 29, 2019) An eastbound Tesla Model 3 rear-ended a fire truck parked along I-70 near mile marker 38 in Putnam County, Indiana, at approximately 8 a.m.; both the driver and passenger in the Tesla, a married couple, were injured and taken to Terre Haute Regional Hospital, where the passenger later died from her injuries. The driver stated he regularly used Autopilot mode but could not recall if it was engaged when the Tesla hit the fire truck. The NHTSA announced it was investigating the crash on January 9 and later confirmed the use of Autopilot at the time of the crash. The driver filed a civil lawsuit against Tesla in November 2021; it was moved to federal court in February 2022. Gardena, California, USA (December 29, 2019) Shortly before 12:39 a.m. on December 29, 2019, a westbound Tesla Model S exited the freeway section of SR 91, failed to stop for a red light, and crashed into the driver's side of a Honda Civic in Gardena, California, killing the driver and passenger in the Civic and injuring the driver and passenger in the Tesla. The freeway section of SR 91 ends just east of the intersection with Vermont Ave and continues as Artesia Blvd. The Tesla was proceeding west on Artesia against the red light when it struck the Civic, which was turning left from Vermont onto Artesia. The occupants of the Tesla were taken to the hospital with non-life-threatening injuries. The NHTSA initiated an investigation of the crash (a step considered unusual for a two-vehicle collision) and later confirmed in January 2022 that Autopilot was engaged during the crash. The driver of the Tesla was charged in October 2021 with vehicular manslaughter in Los Angeles County Superior Court.
The families of the two people killed have also filed separate civil lawsuits against the driver of the Tesla, for his negligence, and against Tesla, for selling defective vehicles. In May 2022, a preliminary court hearing was held to determine if there was probable cause to proceed with a trial; a Tesla engineer testified the driver of the Tesla had engaged the Autopilot system approximately 20 minutes prior to the crash, setting the speed at . The Tesla was traveling at when it collided with the Honda. The judge ordered the driver of the Tesla to stand trial on two counts of vehicular manslaughter. Telemetry data indicated the driver had a hand on the steering wheel, but no brake inputs were detected in the six minutes preceding the crash, despite multiple signs at the end of the freeway warning drivers to slow down. The driver of the Tesla pleaded not guilty in June. The trial, scheduled for November 15, was postponed to late February 2023. The driver changed his plea to no contest and was sentenced to two years of probation that June. Arendal, Norway (May 29, 2020) After being notified that some straps on his trailer had come loose, a solo truck driver parked a tractor-trailer at approximately 11:00 a.m. on May 29, 2020, on the hard shoulder of northbound E18, northeast of the Torsbuås tunnel exit just outside Arendal. Because of the restricted shoulder width, part of the truck protruded into the right lane of E18. While fixing the loose strap securing the load, the truck driver was struck and killed by a northbound Tesla Model S. The Tesla driver had engaged Autopilot approximately south of the accident site; as he exited the tunnel and approached the parked truck, he observed there were no warning lights on the truck or a warning triangle on the ground, and he assumed the truck was abandoned. He then "heard a loud bang, and the car's windscreen cracked"; after pulling over to the shoulder, he walked back towards the parked truck and saw the truck driver's body. The Tesla's driver was charged with negligent homicide. Early in the trial, an expert witness testified that the car's computer indicated Autopilot was engaged at the time of the incident. A forensic scientist said the victim was less visible because he was in the shadow of the trailer. The driver said he had both hands on the wheel and that he was vigilant. He was sentenced to three months' imprisonment in December 2020. The Accident Investigation Board Norway investigated the crash and published its report in June 2022. According to the investigation report, the truck driver had failed to report his stop to fix the strap to the Traffic Control Centre, and no passing motorists reported the parked truck; consequently, the driver of the Tesla was not notified that there was a truck parked outside the tunnel. The Tesla's driver believed there was sufficient room to pass the parked truck while remaining in the right lane. Because the truck driver was next to the trailer in the shadow cast by the truck, the Tesla driver's view of him may have been compromised. In addition, the company responsible for planning and constructing the road, Nye Veier AS, was faulted by the investigators. During the planning phase, Nye Veier proposed a narrower shoulder of rather than as originally designed; this variance was approved by the Norwegian Public Roads Administration contingent on Nye Veier implementing mitigations. Nye Veier did not implement the proposed mitigations. 
Marietta, Georgia, USA (September 17, 2020) On September 17, 2020, at approximately 5:24 a.m. EDT, the driver of a 2020 Tesla Model 3 crashed into an occupied CobbLinc bus shelter, demolishing it and killing the man waiting inside. The Tesla was driving north on South Cobb Drive near the intersection with Leader Road. Because the car's event data recorder showed it had reached a speed of prior to the crash and that area has a posted speed limit of , police charged the driver with first-degree vehicular homicide and reckless driving. At the time of the crash, it had not been determined whether Autopilot was engaged. In September 2022, data provided by Tesla to the NHTSA demonstrated that Autopilot was active at the time of the crash. Fontana, California, USA (May 5, 2021) At 2:35 a.m. PDT on May 5, 2021, a Tesla Model 3 crashed into an overturned tractor-trailer on the westbound Foothill Freeway (I-210) in Fontana, California. The driver of the Tesla was killed, and a man who had stopped to assist the driver of the truck was struck and injured by the Tesla. California Highway Patrol (CHP) officials announced on May 13 that Autopilot "was engaged" prior to the crash, but added a day later that "a final determination [has not been] made as to what driving mode the Tesla was in or if it was a contributing factor to the crash". The CHP and NHTSA are investigating the crash. Telemetry data indicate that an automated driving system was in use at the time of the crash. Queens, New York, USA (July 26, 2021) On July 26, 2021, just after midnight, a man was hit and killed by a driver in a Tesla Model Y SUV. The victim had parked his vehicle on the left shoulder of the westbound Long Island Expressway (I-495), just east of the College Point Boulevard exit in Flushing, Queens, New York, to change a flat tire. The NHTSA later determined Autopilot was active during the collision and sent a team to further investigate. Evergreen, Colorado, USA (May 16, 2022) In the evening of May 16, 2022, the driver of a Tesla Model 3 left Upper Bear Creek Road in Evergreen, Colorado, and collided with a tree. After the car caught fire, a passenger was able to exit, but the driver was unable to leave the car and died at the scene. A subsequent Colorado State Patrol (CSP) investigation determined the driver would have survived the crash itself, but died from smoke inhalation and thermal injuries. Law enforcement suspected that the Tesla was operating on Autopilot, but because of the remote location no data was uploaded, and the fire destroyed the onboard data, so the pre-crash telemetry could not be verified. The CSP investigation could not determine why the driver did not exit the vehicle. An autopsy of the driver determined their blood alcohol content was 0.264%, more than three times the legal limit. The crash occurred while the two occupants were returning from an outing to play golf. The surviving passenger recalled the driver had engaged FSD on the trip to the golf course, but was forced to make many manual steering corrections on the winding road. After the crash, the passenger told the 9-1-1 operator the "auto-drive feature on the Tesla" was being used and the vehicle "just ran straight off the road". The lead CSP investigator determined from the tire markings left at the scene that the driver never used the brakes and that the motors continued to power the wheels after impact, concluding that since "the vehicle drove off the road with no evidence of a sudden maneuver, that fits with the [driver-assistance] feature being engaged". 
In an article published by The Washington Post in 2024, based on the driver's history and interviews with the surviving passenger and the driver's spouse, the newspaper concluded this was likely the first fatality with FSD engaged. The Post article also used features enabled in the vehicle purchase order, vehicle records, and "a recent message from the company offering [the driver's] account the ability to 'Transfer Your Full Self-Driving Capability to a New Tesla'" to determine the car was equipped with FSD. Mission Viejo, California, USA (May 17, 2022) At 10:51 p.m. PDT on May 17, 2022, a pedestrian walking on southbound I-5 near Crown Valley Parkway in Mission Viejo, California, was struck and killed by a driver operating a Tesla Model 3. After the pedestrian was hit, the driver of the Tesla parked the car and exited it to stand on the right shoulder of the freeway; an impaired driver then crashed their car into the Tesla, and a third driver crashed into the two-car wreck, which was in a construction zone. Field report data confirmed the Tesla was operating on Autopilot when the pedestrian was killed. Gainesville, Florida, USA (July 6, 2022) At approximately 2:00 p.m. EDT on July 6, 2022, the driver of a Tesla Model S traveling southbound on I-75 exited at a rest area just south of Gainesville, Florida, near Paynes Prairie Preserve State Park, and crashed into the rear of a parked Walmart tractor-trailer. Both the driver and passenger of the Tesla, a married couple from Lompoc, California, were killed. A spokesperson for the Florida Highway Patrol noted: "[The vehicle] came off the exit ramp to the rest area, continued south for a short period, and turned into an easterly direction and that's at what time we had the collision where the Tesla struck the rear of the tractor-trailer." The NHTSA confirmed it had sent an investigation team to the site. Data reported by Tesla under NHTSA SGO-2021-01 indicate that Autopilot may have been engaged during the crash. However, in February 2023, a Florida Highway Patrol investigation concluded the crash was due to driver error: while exiting the freeway to the rest area, the driver pressed the accelerator pedal instead of the brake, and the Tesla hit a curb at , then collided with the parked truck. The family sued Tesla in March 2023, alleging the "defective and unreasonably dangerous" Tesla had "malfunctioned during reasonable and foreseeable use", adding the vehicle "was equipped with several crash avoidance and crash mitigation features and technologies". Riverside, California, USA (July 7, 2022) It was initially (and incorrectly) reported that at 4:47 a.m. PDT on July 7, 2022, a driver in a Tesla Model Y approached from behind, and then struck, a motorcyclist on a Yamaha V-Star. Both vehicles were traveling eastbound in the high-occupancy vehicle lane of SR 91, west of Magnolia Avenue in Riverside, California. The motorcyclist was ejected from his vehicle and died at the scene, while the driver of the Tesla was uninjured after the Model Y went off the road. The driver of the Tesla was not arrested. A subsequent CHP investigation showed the motorcyclist struck the dividing wall and fell off his motorcycle; the Tesla Model Y following behind struck the motorcycle (which was already lying on its side) but not the motorcyclist. Telemetry data from Tesla later confirmed the Model Y driver was using Autopilot. Data reported by Tesla under NHTSA SGO-2021-01 also confirmed that Autopilot was engaged during the crash. 
Draper, Utah, USA (July 24, 2022) A motorcycle rider was struck from behind by a driver using Autopilot in a Tesla Model 3 on southbound Interstate 15 near 15000 S in Draper, Utah, at 1:09 a.m. MDT on July 24, 2022. The collision threw the motorcycle rider from his Harley-Davidson to the ground, killing him. The driver told police he did not see the motorcyclist and that he was using Autopilot at the time of the crash. Telemetry data submitted to the NHTSA later confirmed his statements. Michael Brooks, the acting executive director of the Center for Auto Safety, commented: "It's pretty clear to me, and it should be to a lot of Tesla owners by now, this stuff isn't working properly and it's not going to live up to the expectations, and it is putting innocent people in danger on the roads ... Drivers are being lured into thinking this protects them and others on the roads, and it's just not working." Boca Raton, Florida, USA (August 26, 2022) On August 26, 2022, at 2:11 a.m. EDT, a motorcycle rider on a Kawasaki Vulcan was struck from behind by a driver in a Tesla Model 3 while both vehicles were traveling westbound on SW 18th Street approaching Boca Rio Road in Sandalfoot Cove, a census-designated place in unincorporated Palm Beach County just outside the city of Boca Raton, Florida. The motorcycle rider was thrown from her motorcycle into the windshield of the Tesla; the rider was transported to a hospital, where she later died from the injuries she sustained in the collision. The driver of the Tesla was suspected of driving under the influence of alcohol and/or prescription drugs. The Palm Beach County Sheriff's Office later confirmed the driver of the Tesla was using Autopilot. Data reported by Tesla under NHTSA SGO-2021-01 also confirm that Autopilot was engaged during the crash. There have been multiple fatal collisions in the United States during 2022 in which a Tesla operating with Autopilot struck a motorcycle from the rear; in each instance, the motorcyclist was killed. One theory is that because Tesla has shifted to exclusively visual sensors, the Autopilot logic that sets the gap between the Tesla and a leading vehicle estimates the distance to the vehicle in front as inversely proportional to the apparent spacing between that vehicle's taillights in the camera image. Because motorcycle taillights are close-set, Tesla Autopilot may incorrectly classify a nearby motorcycle as a distant car or truck; the sketch below illustrates the geometry. 
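The following Python sketch illustrates the pinhole-camera geometry behind that theory. It is an illustration of the hypothesis described above, not Tesla's actual perception code; the focal length and taillight spacings are assumed values chosen for the example.

```python
# Pinhole-camera model: apparent taillight separation (pixels) scales as
# s_px = f_px * W / d, so a width-based estimator inverts it:
# d_est = f_px * W_assumed / s_px.
F_PX = 1000.0        # assumed camera focal length, in pixels
W_CAR = 1.8          # assumed taillight spacing of a typical car (m)
W_MOTORCYCLE = 0.3   # assumed effective taillight width of a motorcycle (m)

def apparent_separation_px(true_width_m, true_distance_m):
    """Pixel separation of the taillights as seen by the camera."""
    return F_PX * true_width_m / true_distance_m

def estimated_distance_m(separation_px, assumed_width_m=W_CAR):
    """Distance inferred if the object is assumed to be car-width."""
    return F_PX * assumed_width_m / separation_px

# A motorcycle 20 m ahead subtends only a small taillight separation ...
s = apparent_separation_px(W_MOTORCYCLE, 20.0)   # 15 px
# ... so a car-width assumption misjudges it as six times farther away:
print(estimated_distance_m(s))                   # 120.0 m
```

Under these assumed numbers, the close-set lights make a motorcycle 20 m ahead look like a car 120 m ahead, which is consistent with the failure mode the theory describes.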
Walnut Creek, California, USA (February 18, 2023) The driver of a 2014 Tesla Model S was killed after the vehicle he was driving crashed into a Contra Costa County fire truck parked across several lanes of northbound I-680 south of the Treat Boulevard offramp in Walnut Creek, California, at 4 a.m. on February 18, 2023. The truck was parked with its lights on to protect the scene of an earlier accident that did not result in any injuries. The Tesla had to be cut open to extricate the passenger, who was taken to the hospital for treatment of their injuries; four firefighters in the fire truck were also injured and taken to the hospital. Initially, the California Highway Patrol stated it was not clear if the driver was intoxicated or operating the car with assistance features, but the NHTSA confirmed in March that they suspected an "automated driving system" was being used when the Tesla crashed into the fire truck, and had sent a special crash investigation team as part of a larger probe (EA 22-002) involving multiple incidents in which Teslas operating with Autopilot have crashed into stationary emergency response vehicles. Tesla confirmed in April the car was operating under Autopilot at the time of the crash. Telemetry data indicate that an automated driving system was in use at the time of the crash. Corona, California, USA (March 28, 2023) On March 28, 2023, at approximately 10:15 p.m., the driver of a Tesla Model Y died after the Tesla was struck by the driver of a Ford F-150 pickup truck, who had entered the intersection of Foothill Parkway and Rimpau Avenue in Corona, California, against a red light. The Tesla was proceeding through the intersection on a green light. Telemetry data indicate that an automated driving system was in use at the time of the crash. Central Point, Oregon, USA (June 5, 2023) The Oregon State Police responded to a single-vehicle accident reported at 3:30 a.m. (PDT) on June 5, 2023, in Jackson County, Oregon; a Tesla Model S was driving northbound on I-5 near milepost 33 when the car departed from the roadway, striking a fence and then a tree before catching fire. The driver was pronounced dead at the scene. Telemetry data indicate that an automated driving system was in use at the time of the crash. Brooklyn, New York, USA (June 7, 2023) On June 7, 2023, at approximately 9 p.m., the driver of a Tesla Model S traveling along Ocean Parkway in Midwood, Brooklyn, left the roadway, striking and killing a pedestrian waiting on the sidewalk to cross the street at the intersection with Avenue M. The driver then struck a light pole and collided with a park bench on the median, injuring a man who had been seated on it. The driver was arrested for leaving the scene of the crash. Telemetry data indicate that an automated driving system was in use at the time of the crash. Turlock, California, USA (June 20, 2023) At approximately 3:15 a.m. on June 20, 2023, a driver operating a white sedan the wrong way (south in the northbound lanes) on SR 99 near Lander Avenue in Turlock, California, collided with a northbound Tesla Model Y traveling at approximately . The driver of the wrong-way vehicle was killed, and the driver and passenger in the Tesla were injured. Alcohol appears to have been a factor. Telemetry data indicate that an automated driving system was in use at the time of the crash. South Lake Tahoe, California, USA (July 5, 2023) On July 5, 2023, at approximately 5:30 p.m. (PDT), the driver of a Subaru Impreza traveling north on Pioneer Trail at collided head-on with a Tesla Model 3 traveling south at . The collision happened just south of the intersection with Fair Meadow Trail. The driver of the Subaru was taken to Barton Memorial Hospital, where he died from his injuries. The five occupants of the Tesla were taken to UC Davis Medical Center, where one of them, a three-month-old infant, died. The NHTSA dispatched an investigation team to the scene of the crash. Telemetry data indicate that an automated driving system was in use at the time of the crash. Opal, Virginia, USA (July 19, 2023) While traveling north on the concurrent US 15/17/29 (James Madison Highway) at approximately 6:31 p.m. (EDT) on July 19, 2023, the driver of a Tesla Model Y collided with, and continued under, the side of the trailer of a combination truck pulling out of the Quarles Truck Stop fuel station near Opal, Virginia, south of Warrenton. The Tesla driver was killed, and the truck driver was cited for reckless driving. Two days later, the Fauquier County Sheriff's Office executed a search warrant for data from the Tesla, based on witness reports that the Tesla driver did not attempt to brake before the collision. 
The reckless driving charge against the truck driver was dropped in November, after the sheriff's office found the Tesla was traveling at prior to impact, exceeding the speed limit in that area. That August, the NHTSA sent a team to investigate the collision; the Tesla was suspected of being operated under Autopilot. Telemetry data indicate that an automated driving system was in use at the time of the crash. The sheriff's office concluded that Autopilot was in use, based on data obtained from the vehicle's event data recorder; the system had warned the driver to take control because it had detected an obstacle ahead, and the driver applied the brakes approximately one second before impact, but it was not clear if that was sufficient to disengage Autopilot. Had the vehicle been traveling at the speed limit, the sheriff's office determined, the driver would have had adequate time to avoid the collision. Snohomish County, Washington, USA (April 19, 2024) At approximately 3:54 p.m. on April 19, 2024, a motorcyclist was killed after a driver in a 2022 Tesla Model S crashed into the rear of the motorcycle. Both vehicles were traveling on eastbound Washington State Route 522 just west of its intersection with Fales Road, in unincorporated Snohomish County, Washington, close to Maltby. The motorcyclist had slowed down due to traffic conditions, but the Tesla driver did not. The Tesla driver reported he heard a bang as the car collided with the motorcycle and lurched forward; the motorcyclist was ejected and was pinned underneath the Tesla. A few days later, the Tesla driver was arrested for vehicular homicide due to distracted driving, based on his admission that he "had the Tesla on Autopilot while looking at his phone", and was released after posting bond. The motorcyclist was wearing a GoPro, which police collected as evidence. During the investigation, the Washington State Patrol determined the Tesla was operating in FSD, based on telemetry data from the car. It is the second fatal accident involving FSD. The NHTSA is gathering information about this crash from local law enforcement. Non-fatal crashes Culver City, California, USA (January 22, 2018) On January 22, 2018, a 2014 Tesla Model S crashed into a fire truck parked on the side of the I-405 freeway in Culver City, California, while traveling at a speed exceeding ; the driver survived with no injuries. The driver told the Culver City Fire Department that he was using Autopilot. The fire truck and a California Highway Patrol vehicle were parked diagonally across the left emergency lane and high-occupancy vehicle lane of the southbound I-405, blocking off the scene of an earlier accident, with emergency lights flashing. According to a post-accident interview, the driver stated he was drinking coffee, eating a bagel, and maintaining contact with the steering wheel while resting his hand on his knee. During the trip, which lasted 66 minutes, the Autopilot system was engaged for slightly more than 29 minutes; of those 29 minutes, hands were detected on the steering wheel for only 78 seconds in total. Hands were detected applying torque to the steering wheel for only 51 seconds over the nearly 14 minutes immediately preceding the crash. 
The Tesla had been following a lead vehicle in the high-occupancy vehicle lane at approximately ; when the lead vehicle moved to the right to avoid the fire truck, approximately three or four seconds prior to impact, the Tesla's traffic-aware cruise control system began to accelerate the Tesla toward its preset speed of . (A minimal sketch of this follow-versus-resume behavior appears at the end of this section.) When the impact occurred, the Tesla had accelerated to . The Autopilot system issued a forward collision warning half a second before the impact, but did not engage the automatic emergency braking (AEB) system, and the driver did not manually intervene by braking or steering. Because Autopilot requires agreement between the radar and the visual cameras to initiate AEB, the system was challenged by this specific scenario (in which a lead vehicle detours around a stationary object) and by the limited time available after the forward collision warning. Several news outlets began reporting that Autopilot may not detect stationary vehicles at highway speeds and that it cannot detect some objects. Raj Rajkumar, who studies autonomous driving systems at Carnegie Mellon University, believes the radars used for Autopilot are designed to detect moving objects, but are "not very good in detecting stationary objects". Both the NTSB and NHTSA dispatched teams to investigate the crash. Hod Lipson, director of Columbia University's Creative Machines Lab, faulted the design for inviting diffusion of responsibility: "If you give the same responsibility to two people, they each will feel safe to drop the ball. Nobody has to be 100%, and that's a dangerous thing." In August 2019, the NTSB released its accident brief for the crash. HAB-19-07 concluded the driver of the Tesla was at fault due to "inattention and overreliance on the vehicle's advanced driver assistance system", but added that the design of the Tesla Autopilot system "permitted the driver to disengage from the driving task". After the earlier crash in Williston, the NTSB had issued a safety recommendation to "[d]evelop applications to more effectively sense the driver's level of engagement and alert the driver when engagement is lacking while automated vehicle control systems are in use." Of the manufacturers to which the recommendation was issued, only Tesla failed to respond. 
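The follow-versus-resume logic described above can be summarized in a few lines of Python. This is a hedged illustration of how traffic-aware cruise control systems behave in general, not Tesla's implementation; the function and variable names are invented for the example.

```python
def tacc_target_speed(set_speed, lead_speed=None):
    """Return the cruise-control target: follow a slower tracked lead
    vehicle if one exists, otherwise resume the driver's preset speed."""
    if lead_speed is not None and lead_speed < set_speed:
        return lead_speed   # gap-keeping: match the tracked lead vehicle
    return set_speed        # no lead tracked: accelerate to the preset

# While following the lead car, the system holds a reduced speed:
print(tacc_target_speed(set_speed=80.0, lead_speed=34.0))  # 34.0
# When the lead changes lanes and tracking drops, the target jumps back up,
# even if a stationary vehicle now occupies the lane ahead:
print(tacc_target_speed(set_speed=80.0, lead_speed=None))  # 80.0
```

The second call mirrors the Culver City sequence: once the lead vehicle detoured around the fire truck, the system resumed its preset speed because the stationary truck was not registered as a followable lead vehicle.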
South Jordan, Utah, USA (May 11, 2018) In the evening of May 11, 2018, a 2016 Tesla Model S with Autopilot engaged crashed into the rear of a fire truck that was stopped in the southbound lane at a red light in South Jordan, Utah, at the intersection of SR-154 and SR-151. The Tesla was moving at an estimated and did not appear to brake or attempt to avoid the impact, according to witnesses. The driver of the Tesla, who survived the impact with a broken foot, admitted she was looking at her phone before the crash. The NHTSA dispatched investigators to South Jordan. According to telemetry data recovered after the crash, the driver repeatedly did not touch the wheel, including during the 80 seconds immediately preceding the crash, and only touched the brake pedal "fractions of a second" before the crash. The driver was cited by police for "failure to keep proper lookout". The Tesla had slowed to to match a vehicle ahead of it, and after that vehicle changed lanes, accelerated to in the 3.5 seconds preceding the crash. Tesla CEO Elon Musk criticized news coverage of the South Jordan crash, tweeting that "a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in [the] past year get almost no coverage", additionally pointing out that "[a]n impact at that speed usually results in severe injury or death", but later conceding that Autopilot "certainly needs to be better & we work to improve it every day". In September 2018, the driver of the Tesla sued the manufacturer, alleging the safety features designed to "ensure the vehicle would stop on its own in the event of an obstacle being present in the path ... failed to engage as advertised." According to the driver, the Tesla failed to provide an audible or visual warning before the crash. Moscow, Russia (August 10, 2019) On the night of August 10, 2019, a Tesla Model 3 driving in the left-hand lane of the Moscow Ring Road in Moscow, Russia, crashed into a parked tow truck with a corner protruding into the lane and subsequently burst into flames. According to the driver, the vehicle was traveling at the speed limit of with Autopilot activated; he also claimed his hands were on the wheel, but he was not paying attention at the time of the crash. All occupants were able to exit the vehicle before it caught fire and were transported to the hospital; injuries included a broken leg for the driver and bruises for his children. The force of the collision was enough to push the tow truck forward into the central dividing wall, as recorded by a surveillance camera. Passersby also captured several videos of the fire and explosions after the accident; these videos show that the tow truck the Tesla crashed into had already been moved, suggesting the explosions of the Model 3 happened some time after the collision. Chiayi, Taiwan (June 1, 2020) Traffic cameras captured the moment when a Tesla Model 3 slammed into an overturned cargo truck in Taiwan on June 1, 2020. The crash occurred at 6:40 a.m. National Standard Time on the southbound National Freeway 1 in Chiayi, Taiwan, at approximately the south 268.4 km marker. The truck had been involved in a traffic accident at 6:35 a.m. and overturned with its roof facing oncoming traffic; the driver of the truck got out to warn other cars away. The driver of the Tesla was uninjured and told emergency responders that the car was in Autopilot mode, traveling at . The driver told authorities that he saw the truck and thought the Tesla would brake automatically upon encountering an obstacle; when he realized it would not, he manually applied the brakes, although it was too late to avoid the crash, which is apparently indicated on the video by a puff of white smoke coming from the tires. Arlington Heights, Washington, USA (May 15, 2021) A Tesla Model S crashed into a stopped Snohomish County, Washington, sheriff's patrol car at 6:40 p.m. PDT on May 15, 2021, shortly after the deputy parked it while responding to an earlier crash which had broken a utility pole near the intersection of SR 530 and 103rd Ave NE in Arlington Heights, Washington. The patrol car was parked to partially block the roadway and protect the collision scene, and its overhead emergency lights were activated. Neither the deputy nor the driver of the Tesla was injured. The driver of the Tesla assumed his car would slow and move over on its own because it was in "Auto-Pilot mode". 
Brea, California, USA (November 3, 2021) The driver of a Tesla Model Y reported a crash to the NHTSA that occurred on November 3, 2021, while the car was operating in FSD Beta. The incident was described as a "severe" crash after "the car by itself took control and forced itself into the incorrect lane" during a left turn. This is likely the first complaint filed with the NHTSA alleging that FSD caused a crash; the NHTSA requested further information from Tesla, but other details of the crash, such as the driver's identity and the location of the crash, were not released. Armadale, Victoria, Australia (March 22, 2022) On March 22, 2022, at approximately 6:30 a.m., the driver of a Tesla Model 3 struck a woman boarding a city-bound tram on Wattletree Road in Armadale, an inner suburb of Melbourne in the Australian state of Victoria. After being struck, the victim was dragged for approximately . She was taken to the hospital with life-threatening injuries. The driver of the Tesla initially fled the scene, then turned herself in to police two hours later. According to the official report, the driver stated her Tesla Model 3 was on Autopilot when she struck the pedestrian. The driver pleaded not guilty to four charges in April 2023, including dangerous driving causing serious injury, and was ordered to stand trial after the magistrate heard testimony from five witnesses. The tram operator testified he saw a woman rise from a seat at the tram stop and start walking toward the tram before she was struck: "I hear a thud, a whoosh, a car went past". The chief safety officer of Yarra Trams testified that "once the tram has stopped... there are big flashing lights (at the rear of the vehicle), we call them school lights", adding the tram could not have opened its doors before the crash. The driver eventually pleaded guilty, admitting she was in full control the whole time and that Autopilot was not engaged. Maumee, Ohio, USA (November 18, 2022) On November 18, 2022, at 8:21 a.m., a Tesla Model 3 collided with the rear end of a stationary Ohio State Highway Patrol cruiser in the left lane of eastbound U.S. 24 near milepost 64, where it passes over Waterville–Monclova Road near Maumee, Ohio, a suburb of Toledo. The cruiser was parked with its emergency lights flashing to protect the vehicle involved in an earlier single-car accident at the scene. The OSHP officer and the driver from the earlier accident were sitting in the cruiser; both sustained minor injuries from the impact. In December, the NHTSA confirmed it was investigating the crash, which may have involved Autopilot. Telemetry data indicate that Autopilot was active. San Francisco–Oakland Bay Bridge, California, USA (November 24, 2022) The driver of a 2021 Tesla Model S told the California Highway Patrol that while driving eastbound in "Full Self-Driving" mode in the Yerba Buena Tunnel portion of the San Francisco–Oakland Bay Bridge near Treasure Island, at approximately noon on November 24, 2022, the vehicle cut across several lanes of traffic to the far left lane and abruptly slowed from , causing a chain-reaction collision involving eight vehicles. Nine people were treated for injuries, and two lanes of traffic were closed for 90 minutes. Surveillance footage acquired by The Intercept corroborated the vehicle's sudden movements. The NHTSA confirmed it would send a team to investigate the crash. Telemetry data indicate that an automated driving system was in use at the time of the crash. 
Halifax County, North Carolina, USA (March 15, 2023) On Wednesday, March 15, 2023, in Halifax County, North Carolina, a 17-year-old high school student attending the Haliwa-Saponi Tribal School was struck by a driver in a 2022 Tesla Model Y. The student had just exited a school bus and was crossing the road to his house when he was struck by the Tesla. The bus was stopped with flashing lights and its stop arm deployed; the North Carolina State Highway Patrol initially attributed the cause of the injury to "distracted driving". The student's father rendered first aid after witnessing the collision, which left the teenager with a broken neck and internal bleeding. He was flown to WakeMed and placed on a ventilator. It is unclear whether the car was in Autopilot during the accident, but the crash is being investigated by the State Highway Patrol. The NHTSA has dispatched a team to investigate. Telemetry data indicate that an automated driving system was in use at the time of the crash. Las Vegas, Nevada, USA (April 10, 2024) On April 10, 2024, the driver of a Tesla carrying a passenger for Uber collided with an SUV at an intersection near Las Vegas; the driver of the Tesla was using Full Self-Driving, but the vehicle did not slow after the SUV emerged from a blind spot. The driver of the SUV was cited by police for failing to yield the right of way. Fullerton, California, USA (June 13, 2024) On June 13, 2024, a driver in a Tesla struck a parked police vehicle at the intersection of West Orangethorpe and Courtney avenues in Fullerton, California; the police vehicle was parked to protect the scene of an earlier fatal collision, blocking traffic, with flares deployed and emergency lights operating. The driver of the Tesla admitted that he had engaged the "self-drive" system and was using his cell phone. References Advanced driver assistance systems Automotive technology tradenames Automotive accessories Automotive technologies Deaths caused by robots and artificial intelligence Autopilot
List of Tesla Autopilot crashes
[ "Engineering" ]
12,201
[ "Robotics engineering", "Deaths caused by robots and artificial intelligence" ]
77,154,566
https://en.wikipedia.org/wiki/Glassmakers%27%20symbol
The glassmaker's mark (rarely glassmaker's cross: ) is a symbol of glassmakers. It is a figure eight (infinity sign) over a sword or cross, illustrating a German glassmaker's saying: Es ist ein unendlich Kreuz, Glas zu machen ("it is an endless cross to make glass"). The symbol dates to 1948, when the magazine Glastechnische Berichte displayed it on the cover. The figure eight was said to symbolize the material glass in its two phases, as a molten liquid on the left and as a non-crystalline solid on the right. In the transition between the two states, a metal sword, which stood for the glassmaker's pipe, was a sign of man's dominion over nature. The German Technical Glass Society (Deutsche Glastechnische Gesellschaft) and the Czech Glass Society (Česká sklářská společnost) use the glassmaker's mark as the symbol of their associations. Sources Friedrich Holl: Symbol für Glas. In: Friedrich Holl (ed.): Die Poesie des Glases. Des Glases Lob – Der Arbeit Lied – Des Glases Geist, 3rd ed., Zwiesel, 1983; 1st ed. 1966; also: Werks-Kurznachrichten der Grazer Glasfabrik (subtitle: „Der Motzer“), No. 60, Graz 1963, pp. 1–56 Julius Broul: Das Symbol für Glas. In: Glass Science and Technology, Verlag der Deutschen glastechnischen Gesellschaft, Frankfurt am Main, 1999, vol. 72, no. 4, pp. N39–N40, Marita Haller: „Das unendliche Kreuz der Glasmacher“ unterstützt die Glasaktivitäten der Region, Pressglas Korrespondenz, 2010/3, p. 316 References Glass production Alchemical symbols
Glassmakers' symbol
[ "Materials_science", "Engineering" ]
432
[ "Glass engineering and science", "Glass production" ]
77,158,366
https://en.wikipedia.org/wiki/Department%20of%20Forest%20Biomaterials
The Department of Forest Biomaterials at North Carolina State University (NC State) is an academic department specializing in the study and development of forest-based materials, bioenergy, and sustainability. The department is part of the College of Natural Resources. Research Areas Bioenergy Sustainable Biomaterials Forest Products Environmental Sustainability Education The department offers undergraduate and graduate programs (ranked #1 in the United States in Wood Science & Wood Products/Pulp & Paper Technology in 2024), including bachelor's, master's, and Ph.D. degrees. Courses cover topics such as wood science, bioenergy, sustainable biomaterials, and environmental sustainability. Master of Science in Forest Biomaterials Master of Forest Biomaterials (non-thesis) Distance Master of Forest Biomaterials (online-only) Ph.D. in Forest Biomaterials Collaborations and Partnerships The Department of Forest Biomaterials collaborates with various institutions, organizations, and industries: University of British Columbia Aalto University Auburn University Georgetown University USDA Department of Energy WestRock Valmet Cascades Adidas Unilever Kimberly-Clark Rayonier Advanced Materials Procter & Gamble CMPC (company) Suzano Eastman Essity International Paper Andritz AG Nalco Water Faculty Dimitris S. Argyropoulos Ali Ayoub Medwick Byrd Jr Edward Funkhouser Ronalds W. Gonzalez Martin Hubbe Hasan Jameel John Kadla Bo Kasal Frederik Laleicke Kai Lan Nathalie Lavoine Lucian Lucia Marian McCord Lokendra Pal Sunkyu Park Melissa Pasquinelli Joel Pawlak Perry Peralta Ilona Peszlen Richard Phillips Rico Ruffino Daniel Saloni David Tilotta Richard Venditti Jingxin Wang Yuan Yao Faculty Emeritus Hou-min Chang John Heitmann Jr Stephen Kelley Adrianna Kirkman Michael Kocurek Philip Mitchell Elisabeth Wheeler Postdoctoral Scholars Nelson Barrios Sharmita Bera Seong-Min Cho Karthik Ananth Mani Raman Rao Md Imrul Reza Shishir Song Wang Saurabh K Kardam Staff Shelley Barry Brittany Hayes Olivia Lenahan Shannon Lora Ronald Marquez Beverly Miller Julie Paradiso Melissa Rabil Jessica Rogers Angela Rush Elisha Swartz Loi Tran External links Department of Forest Biomaterials at NC State References North Carolina State University Biomaterials Paper Bioenergy Sustainability
Department of Forest Biomaterials
[ "Physics", "Biology" ]
486
[ "Biomaterials", "Materials", "Matter", "Medical technology" ]
77,161,520
https://en.wikipedia.org/wiki/Interior%20light
An interior light is a type of light used to illuminate the cabin of a vehicle. Interior lighting setups vary greatly in both complexity and size: depending on a number of factors, a vehicle may use a simpler, more utilitarian lighting configuration or incorporate a grander system (known as ambient lighting). Application in Automobiles Courtesy, Overhead, and Dome Lights Many economy vehicles use more basic interior lighting systems, in keeping with their lower cost. These systems usually take the form of overhead, dome, or courtesy lights, which are actuated by the driver or passengers and can generally be toggled via a button placed where it is easy to reach while driving or parked (for example, in the vehicle's sun visor, or built directly into the roof). Dome lights are the brightest and largest of these simple lighting systems. They also tend to illuminate, at least partially, the entire vehicle rather than just the front or back seats, and are usually built into the car's ceiling, often between the two areas of the cabin to maximize their effect. Overhead lights are similarly bright, but are, for the most part, limited to the front seats. Courtesy lights are much smaller and are generally intended to illuminate small areas of the cabin (or other areas of the vehicle) such as the trunk or glove compartment. Ambient lighting systems Luxury vehicles, particularly newer models, tend to incorporate more complex systems that create a sense of ambiance while driving. Certain ultra-luxury manufacturers, such as Rolls-Royce, take this philosophy further, allowing buyers to completely customize the lighting system they desire before taking ownership of the car. The lighting system, dubbed "Starlight Headliner" by Rolls-Royce, is made completely by hand by workers at their factory in England. References Light Lighting Interior design
Interior light
[ "Physics" ]
403
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Light" ]
74,131,735
https://en.wikipedia.org/wiki/Miguel%20Modestino
Miguel A. Modestino is a Venezuelan-born chemical engineer and co-founder of Sunthetics, along with Myriam Sbeiti and Daniela Blanco. Sunthetics uses artificial intelligence to optimize chemical reactions by driving them with electrical pulses from renewable energy instead of just heating them. Modestino is part of the Joint Center for Artificial Photosynthesis, a group focused on reducing the need for fossil fuels by developing solar fuels as a direct alternative. Modestino also formed the Modestino Group, which specializes in developing state-of-the-art electrochemical devices to tackle the issues surrounding renewable energy at New York University (NYU), where he is the Donald F. Othmer Associate Professor of Chemical Engineering and the Director of the Sustainable Engineering Initiative. Education Miguel Modestino earned his bachelor's degree in chemical engineering in 2007 and his M.S. in chemical engineering in 2008 from the Massachusetts Institute of Technology (MIT), and a Ph.D. in chemical engineering from the University of California, Berkeley in 2013. During his time at MIT, Modestino was a research assistant from October 2003 to June 2007 under the supervision of Paula Hammond, working on layer-by-layer film assembly for biomedical purposes while completing his B.S. in chemical engineering. He remained at MIT to complete his M.S. in chemical engineering while serving as a teaching assistant for the Chemical Engineering Projects Lab from February 2008 to May 2008. Between his M.S. and the start of his Ph.D., Modestino interned at Novartis and BP in 2008 through the David H. Koch School of Chemical Engineering Practice. After obtaining his Ph.D., Modestino did postdoctoral research at École Polytechnique Fédérale de Lausanne (EPFL) from 2013 to 2016. Research and career Modestino is currently the Donald F. Othmer Associate Professor of Chemical Engineering and the Director of the Sustainable Engineering Initiative at NYU. At NYU, Modestino carries out his research into renewable energy and the production of eco-friendly electrochemical devices. Sunthetics Modestino co-founded Sunthetics along with NYU graduates Myriam Sbeiti and Daniela Blanco. Sunthetics is a startup company whose goal is to reduce the reliance on fossil fuels for heating chemical reactions, instead using electrical pulses to supply the energy for various chemical reactions to occur. The idea was originally conceived by Blanco as part of her Ph.D. thesis at NYU. The initial goal was to apply this to nylon; however, due to a lack of support from nylon manufacturing companies, the focus pivoted to using artificial intelligence to drive chemical reactions with renewable energy, with machine learning optimizing the technology so it can be applied across several industries. Modestino Group The Modestino Group focuses on the development of electrochemical devices, which are used in energy conversion technologies and chemical processes. Through these devices the group can address a wide range of issues such as carbon dioxide reduction, improving grid flexibility, characterizing multiphase flow in reactors, and developing sustainable clothing. The group has expertise in manufacturing, developing, processing, and characterizing composite materials, which they use to refine electrochemical reactors in industrial applications. 
The group has a number of projects under way, such as solar textiles, materials for electrochemical catalyst layers, advanced electrolysis devices, and multiphase-flow micro-electrochemical reactors. The group is led by Modestino and includes several Ph.D. students, M.S. students, B.S. students, and alumni. Joint Center for Artificial Photosynthesis While at Berkeley, as part of his Ph.D. program, Modestino participated in a project that harnessed solar energy to convert carbon dioxide from the air into fuel, similarly to how plants do; the products of this process are called solar fuels. At Berkeley, he joined a group called the Joint Center for Artificial Photosynthesis (JCAP). JCAP's goal is to research and develop these solar fuels so that they can be used and applied in many facets of the world while being cost-efficient enough to challenge fossil fuels or become the better alternative to them. Awards and recognition In 2015, Modestino won the Energy and Environmental Science Reader's Choice Lectureship Award for his publication Design and cost considerations for practical solar-hydrogen generators, which was among the most downloaded and read articles of 2014. In 2017, Modestino won the Global Change Award with Daniela Blanco and Myriam Sbeiti for their work on Sunthetics. In 2017, Modestino won the MIT Technology Review Innovators Under 35 Award for the Latin America region, for his work in developing and optimizing the chemical industry to become safer for the environment. In 2018, Modestino was awarded $110,000 through the Doctoral New Investigator Award by the American Chemical Society Petroleum Research Fund, recognizing his work with Ionic Liquid-Polymer Gel Electrolytes for Electrochemical Olefin Separations. In 2019, Modestino was awarded a National Science Foundation CAREER Award. In 2020, Modestino was included in MIT Technology Review magazine's "Innovators Under 35" list for his team's innovation of using artificial intelligence to make chemical reactions more efficient with electrical pulses instead of heating, while simultaneously adapting the approach to different chemicals. In 2020, Modestino was awarded the Goddard Junior Faculty Fellowship Award at New York University. Modestino earned the award as a tenure-track faculty member who successfully passed his three-year review; the award provides either a one-course teaching reduction to focus on research or scholarship, or $5,000 towards that scholarship. In 2021, Modestino was awarded the TED Idea Search Latin America. References Living people Chemical engineers New York University faculty Massachusetts Institute of Technology alumni University of California, Berkeley alumni Venezuelan engineers Year of birth missing (living people) Hispanic and Latino American scientists
Miguel Modestino
[ "Chemistry", "Engineering" ]
1,189
[ "Chemical engineering", "Chemical engineers" ]
74,133,841
https://en.wikipedia.org/wiki/Kahm
Kahm or Kahm yeast is a layer of wild yeast which tends to form on fermented foods such as sauerkraut. It is typically harmless, but the smell and appearance tend to spoil the food. The yeast genera which form these films include Debaryomyces, Mycoderma and Pichia. The word “kahm” traces back to the Middle High German “kan”, which in turn derives from the Vulgar Latin “cana” (a greyish layer of dirt on wine). See also Flor – a layer of yeast which forms on the surface of wine Pellicle (cooking) – a skin which develops on smoked foods References Yeasts
Kahm
[ "Biology" ]
135
[ "Yeasts", "Fungi" ]
74,136,754
https://en.wikipedia.org/wiki/Berkelium%28III%29%20bromide
Berkelium bromide is a bromide of berkelium, with the chemical formula BkBr3. Structure Berkelium bromide has a PuBr3 structure at low temperature and is in the orthorhombic crystal system, with lattice parameters a = 403 pm, b = 1271 pm and c = 912 pm. At high temperature, berkelium bromide has an AlCl3 structure and a monoclinic crystal system with lattice parameters a = 723 pm, b = 1253 pm, c = 683 pm and β = 110.6°. References External reading Berkelium compounds Bromides Actinide halides
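As a quick check on the lattice parameters quoted above, the unit-cell volumes of the two polymorphs can be computed directly from the standard crystallographic formulas: an orthorhombic cell has V = abc, while a monoclinic cell has V = abc·sin(β). The following Python sketch simply applies those formulas to the article's values.

```python
import math

# Low-temperature PuBr3-type (orthorhombic) cell: V = a * b * c
a, b, c = 403, 1271, 912                      # lattice parameters in pm
v_orthorhombic = a * b * c                    # ~4.67e8 pm^3

# High-temperature AlCl3-type (monoclinic) cell: V = a * b * c * sin(beta)
a2, b2, c2 = 723, 1253, 683                   # lattice parameters in pm
beta = math.radians(110.6)                    # monoclinic angle
v_monoclinic = a2 * b2 * c2 * math.sin(beta)  # ~5.79e8 pm^3

print(f"{v_orthorhombic:.3e} pm^3, {v_monoclinic:.3e} pm^3")
```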
Berkelium(III) bromide
[ "Chemistry" ]
136
[ "Bromides", "Salts" ]
74,138,985
https://en.wikipedia.org/wiki/Esmodafinil
Esmodafinil (also known as (S)-modafinil or (+)-modafinil; developmental code name CRL-40983) is the enantiopure (S)-(+)-enantiomer of modafinil. Unlike armodafinil ((R)-(–)-modafinil), esmodafinil has never been marketed on its own. Esmodafinil is suspected to be less clinically useful for treating the conditions that modafinil and armodafinil are marketed for, such as narcolepsy, shift work sleep disorder, and obstructive sleep apnea. Pharmacology Pharmacodynamics Esmodafinil has a 3-fold lower affinity for the dopamine transporter (DAT) compared to armodafinil. Both enantiomers of modafinil preferentially bind to the DAT in an inward-facing conformation that is associated with atypical dopamine reuptake inhibitor (DRI) profiles. Esmodafinil and armodafinil are said to have equipotent pharmacological effects but differing pharmacokinetics (see below). Pharmacokinetics Esmodafinil possesses a substantially shorter elimination half-life (3–5 hours) compared to armodafinil (10–17 hours). Chemistry Esmodafinil, or (S)-(+)-modafinil, is the enantiopure (S)-(+)-enantiomer of the racemic mixture modafinil, while armodafinil is the (R)-(–)-enantiomer. A number of analogues of esmodafinil are known, including adrafinil, flmodafinil, fladrafinil, and others. Preclinical research Esmodafinil has been researched for the treatment of cocaine addiction. Like armodafinil, esmodafinil attenuates the effects of cocaine by occupying the dopamine transporter. While doing so, esmodafinil increases dopamine levels in the nucleus accumbens to a lesser extent than cocaine. However, the short half-life of esmodafinil has been cited as a reason to investigate armodafinil as a cocaine addiction treatment instead. Analysis in biological samples Modafinil is considered a stimulant doping agent and as such is prohibited by the World Anti-Doping Agency in sports competitions. Modafinil enantiomers can be separately quantified in biological samples. References Acetamides Anticonvulsants Benzhydryl compounds CYP3A4 inducers Dopamine reuptake inhibitors Enantiopure drugs Stimulants Sulfoxides Wakefulness-promoting agents Modafinil analogues
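The practical consequence of the half-life difference noted above can be made concrete with the first-order elimination relation C(t) = C0 · 0.5^(t / t½). The sketch below assumes simple first-order kinetics and uses midpoint half-lives from the ranges quoted in the article; it is an illustration, not a pharmacokinetic model of either drug.

```python
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the initial dose remaining after t_hours,
    assuming simple first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# 12 hours after dosing (assumed midpoint half-lives: 4 h vs. 13 h):
print(fraction_remaining(12, 4))   # esmodafinil: ~0.125 (about 12% remains)
print(fraction_remaining(12, 13))  # armodafinil: ~0.53  (about 53% remains)
```

On these assumptions, most of an esmodafinil dose is eliminated within 12 hours while over half of an armodafinil dose remains, which is consistent with the suggestion that the (S)-enantiomer is less useful for all-day wakefulness indications.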
Esmodafinil
[ "Chemistry" ]
621
[ "Stereochemistry", "Enantiopure drugs" ]
74,143,959
https://en.wikipedia.org/wiki/Selective%20organ%20targeting
Selective organ targeting (SORT) is a novel approach in the field of targeted drug delivery that systematically engineers multiple classes of lipid nanoparticles (LNPs) to enable targeted delivery of therapeutics to specific organs in the body. The SORT molecule alters tissue tropism by adjusting the composition and physical characteristics of the nanoparticle. Adding a permanently cationic lipid, a permanently anionic lipid, or an ionizable amino lipid increases delivery to the lung, spleen, and liver, respectively. SORT LNPs utilize SORT molecules to accurately tune and mediate gene delivery and editing, resulting in predictable and manageable protein synthesis from mRNA in particular organ(s), which can potentially improve the efficacy of drugs while reducing side effects. Overview LNPs are non-viral synthetic nanoparticles that can carry and deliver different functional molecules to specific tissues. Traditionally, LNPs are composed of four indispensable lipid components: an ionizable amino lipid that aids both in escaping the endosomes and in binding nucleic acids to the particle, an amphipathic phospholipid that promotes fusion with the target cell and endosomes, cholesterol to enhance nanoparticle stability, and a polyethylene glycol lipid that improves colloidal stability and reduces clearance of the particle by the reticuloendothelial system. LNPs have demonstrated safety and effectiveness but have been limited to intramuscular administration and to intravenous administration targeting the liver. This limitation largely stems from LNPs' resemblance to very-low-density lipoprotein, leading to a propensity for adsorbing apolipoprotein E (ApoE) present in blood plasma. Consequently, LNPs accumulate in the liver by binding to the low-density lipoprotein receptor found on hepatocytes. SORT LNPs overcome this limitation by augmenting the LNP with an additional component (termed a SORT molecule), allowing delivery to targeted tissues beyond the liver. Mechanism Traditionally, LNPs utilize an optimal balance of ionizable amines and nanoparticle-stabilizing hydrophobicity to deliver functional molecules to cells effectively, but are limited to liver hepatocytes. In the SORT strategy, these nanoparticles are systematically engineered without altering the molar ratio of the core four components in LNPs, ensuring that the ability to encapsulate RNA and escape from endosomes remains intact. The addition of a SORT molecule alters the biodistribution and redirects the molecules, facilitating uptake in specific organs via endogenous targeting mechanisms or by influencing the binding affinity to specific serum proteins. Tissue tropism is determined by the distinct chemical functional groups present on the surface of the nanoparticle, which alter the physicochemical properties of the LNP. These properties encompass factors such as molarity, percentage added, and various other characteristics. The critical factor that governs tissue tropism is the modulation of the surface's acid dissociation constant (pKa), which corresponds to the pH at which the proportions of charged and uncharged ionizable lipids at the particle's surface are equal, and which depends on the type of ionizable lipids and charged helper lipids used in the nanoparticle formulation. The shift away from liver tissue is attributed to the alteration in the surface pKa induced by the addition of an anionic head group, which subsequently reduces the strength of interactions with ApoE. 
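The definition of surface pKa just given (the pH at which charged and uncharged ionizable lipid are present in equal proportion) follows the Henderson–Hasselbalch relation. The Python sketch below evaluates the charged fraction of a basic ionizable lipid as a function of pH; the pKa value used is an arbitrary example, not a measured SORT LNP property.

```python
def charged_fraction(pH: float, pKa: float) -> float:
    """Fraction of a basic ionizable lipid carrying a positive charge
    at a given pH (Henderson-Hasselbalch for a base: BH+ <-> B + H+)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

pKa = 6.5                          # example surface pKa, chosen arbitrarily
print(charged_fraction(6.5, pKa))  # 0.5  -> equal charged/uncharged, by definition
print(charged_fraction(7.4, pKa))  # ~0.11 -> mostly neutral at blood pH
print(charged_fraction(5.0, pKa))  # ~0.97 -> mostly charged in acidic endosomes
```

Shifting this curve up or down, which is effectively what adding a charged SORT lipid does, changes which plasma proteins adsorb to the particle surface and hence which organ the LNP is routed to.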
A change in surface pKa promotes the adsorption of plasma proteins such as β2-glycoprotein I (β2-GPI) instead of ApoE, resulting in an altered protein corona that mediates tissue-specific delivery towards the spleen and lung. Adding a cationic quaternary amino lipid, such as 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP), at increasing molar percentages shifts the distribution progressively from the liver to the spleen and then to the lung, with a threshold that allows for exclusive lung delivery. Negatively charged SORT lipids allow for direct delivery to the spleen. Synthesis of SORT LNPs To prepare self-assembled SORT LNPs, the lipids are mixed in ethanol to create a dissolved lipid mixture, ensuring that the initial relative molar ratios of the four fundamental components remain unaltered (the sketch below shows this renormalization). The mRNA is dissolved separately in citrate buffer. To ensure that uniform LNPs form, it is necessary to rapidly mix the two solutions: the lipid solution containing all lipids and the buffer solution containing the mRNA. High-speed mixing enhances the environmental polarity, facilitating the formation of homogeneous LNPs. Mixing methods include pipetting, vortexing, and microfluidic mixing. After mixing, the SORT LNPs are characterized to measure particle size and encapsulation efficiency before delivery into the organism. Delivery can be intrathecal, intravenous, intramuscular, or through nebulization. 
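The bookkeeping behind "adding a SORT molecule without altering the internal ratio of the four base components" is simple renormalization, sketched below in Python. The base molar ratio used here (50:10:38.5:1.5) is a commonly cited benchmark LNP composition assumed for illustration; the function name is invented, and this is not a published SORT protocol.

```python
def add_sort_lipid(base_ratios: dict, sort_mol_pct: float) -> dict:
    """Scale a four-component LNP formulation so that a SORT lipid occupies
    sort_mol_pct mol% while the four base components keep their internal
    molar ratio unchanged."""
    scale = (100.0 - sort_mol_pct) / sum(base_ratios.values())
    formulation = {lipid: ratio * scale for lipid, ratio in base_ratios.items()}
    formulation["SORT lipid (e.g., DOTAP)"] = sort_mol_pct
    return formulation

# Assumed benchmark base composition (mol%), not taken from the article:
base = {"ionizable lipid": 50.0, "phospholipid": 10.0,
        "cholesterol": 38.5, "PEG-lipid": 1.5}

for pct in (10, 30, 50):  # increasing DOTAP shifts tropism liver -> spleen -> lung
    print(pct, add_sort_lipid(base, pct))
```

For example, at 50 mol% DOTAP the four base lipids are each halved (25:5:19.25:0.75) yet keep their 50:10:38.5:1.5 ratio to one another, matching the design constraint described above.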
Applications SORT LNPs can mediate therapeutically relevant levels of protein production and safely deliver proteins to specific tissues and even particular cell populations. The tissue specificity occurs quickly and is not dependent on time. Further benefits of SORT LNPs include formulation stability and conservation of physicochemical properties over time, including maintained in vivo efficacy after storage at 4 degrees Celsius. LNPs in general are well tolerated in mice and humans, and no alterations in kidney and liver function or in serum proteins have been found in studies with murine models evaluating in vivo toxicity. SORT has the potential to revolutionize drug delivery by improving the efficacy and pharmacokinetics of drugs while reducing side effects. SORT molecules can reach deep tissues that were previously inaccessible for treatment, enhancing tissue penetration. This holds significant promise for a wide range of genetic disorders, enabling advancements in protein replacement therapy and gene editing, as this strategy allows for gene editing without local administration. The benefits of targeted delivery of protein products or gene-editing machinery to the liver are shown in genetic diseases affecting the liver or in which the altered gene product is produced in the liver, such as tyrosinemia and transthyretin amyloidosis, respectively, and the addition of a SORT molecule has been shown to further improve liver-targeting LNP systems. However, the SORT strategy could potentially extrapolate these benefits to other organs. One promising target for gene editing is cystic fibrosis, as a tailored therapy with an effective delivery system could significantly rescue CFTR expression. Other possible applications include restoration of gene expression in other organs, such as restoring dystrophin expression in muscle for Duchenne muscular dystrophy. Targeted approaches for bone marrow and brain tropism are currently in development. One of the most promising applications of SORT is cancer treatment. By targeting the cancerous cells in a specific organ, SORT may be able to deliver drugs or gene therapies directly to the cancerous cells while sparing the healthy cells in other organs. Selectivity for the spleen could also be applicable in treating cancer via chimeric antigen receptor (CAR) T-cell therapy and opens a new path for developing in vivo T-cell-targeted mRNA delivery systems able to induce robust and transient CAR expression. There are promising applications in combining SORT with delivery methods besides intravenous administration, such as nebulization and intrathecal or intramuscular administration, as these will deliver the SORT molecules directly to targeted organs and further reduce systemic exposure. Additionally, SORT technology is applicable to several classes of established four-component LNPs and to various non-lipid nanoparticle components. This broadens the spectrum of its applications and enables the delivery of diverse therapeutics, encompassing not only nucleic acids but also single or multiple proteins, and even entire genome editors. Limitations At present, the SORT strategy is capable of achieving targeted delivery only to specific organs such as the liver, lungs, and spleen. Establishing the SORT LNP formulation is a fine-tuning process, as some concentrations of SORT molecules may aid in delivery to other organs, whereas different concentrations completely select delivery to another organ. However, this fine-tuning mechanism is limited, as it can also alter the molecule's activity and render it ineffective. Moreover, it is difficult to accurately predict the biodistribution of LNPs based on their physicochemical parameters, and biodistribution alone cannot predict mRNA-induced protein expression in a specific tissue. There is no indication that a massive accumulation of LNPs in a given tissue will necessarily lead to a high degree of protein expression in the targeted cells. See also Personalized medicine Solid lipid nanoparticle Targeted drug delivery Targeted therapy References Nanotechnology Medical equipment
Selective organ targeting
[ "Materials_science", "Engineering", "Biology" ]
1,825
[ "Nanotechnology", "Materials science", "Medical equipment", "Medical technology" ]
68,342,104
https://en.wikipedia.org/wiki/International%20Axion%20Observatory
The International Axion Observatory (IAXO) is a next-generation axion helioscope for the search for solar axions and axion-like particles (ALPs). It is the follow-up of the CERN Axion Solar Telescope (CAST), which operated from 2003 to 2022. IAXO will be set up by implementing the helioscope concept, bringing it to a larger size and longer observation times. The IAXO collaboration The Letter of Intent for the International Axion Observatory was submitted to CERN in August 2013. IAXO was formally founded in July 2017 and received an advanced grant from the European Research Council in October 2018. The near-term goal of the collaboration is to build a precursor version of the experiment, called BabyIAXO, which will be located at DESY, Germany. The IAXO Collaboration is formed by 21 institutes from 7 different countries. Principle of operation The IAXO experiment is based on the helioscope principle. Axions can be produced in stars (like the Sun) via the Primakoff effect and other mechanisms. These axions would reach the helioscope and would be converted into soft X-ray photons in the presence of a magnetic field. These photons then travel through a focusing X-ray optic and appear as an excess signal in the detector when the magnet points to the Sun. The potential of the experiment can be estimated by means of the figure of merit (FOM), which can be defined as $f = f_M \, f_D \, f_O \, f_T$, with $f_M = B^2 L^2 A$, $f_D = \epsilon_d/\sqrt{b}$, $f_O = \epsilon_o/\sqrt{a}$ and $f_T = \sqrt{\epsilon_t \, t}$, where the first factor is related to the magnet and depends on the magnetic field (B), the length of the magnet (L) and the area of the bore (A). The second part depends on the efficiency ($\epsilon_d$) and background (b) of the detector. The third is related to the optics, more specifically the efficiency ($\epsilon_o$) and the area of the focused signal on the detector readout (a). The last term is related to the time (t) of operation and the fraction of time that the Sun is tracked ($\epsilon_t$). The objective is to maximise the value of the figure of merit in order to optimise the sensitivity of the experiment to axions.
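A minimal numeric sketch of how this figure of merit scales is shown below; the parameter values are purely illustrative placeholders, not official IAXO design numbers:

```python
import math

def figure_of_merit(B, L, A, eps_d, b, eps_o, a, eps_t, t):
    """Helioscope figure of merit f = f_M * f_D * f_O * f_T as defined above.
    B: field [T], L: magnet length [m], A: bore area [m^2],
    eps_d: detector efficiency, b: detector background rate,
    eps_o: optics efficiency, a: focused-spot area,
    eps_t: fraction of time tracking the Sun, t: exposure time."""
    f_M = B**2 * L**2 * A          # magnet term
    f_D = eps_d / math.sqrt(b)     # detector term
    f_O = eps_o / math.sqrt(a)     # optics term
    f_T = math.sqrt(eps_t * t)     # exposure term
    return f_M * f_D * f_O * f_T

# Doubling the magnet length quadruples the FOM, all else being equal:
base = figure_of_merit(B=2.5, L=10, A=1.0, eps_d=0.7, b=1e-7,
                       eps_o=0.5, a=0.2, eps_t=0.5, t=3e7)
longer = figure_of_merit(B=2.5, L=20, A=1.0, eps_d=0.7, b=1e-7,
                         eps_o=0.5, a=0.2, eps_t=0.5, t=3e7)
print(longer / base)  # -> 4.0
```

Because only ratios of the FOM matter for comparing designs, the absolute units of the inputs cancel in such comparisons.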
Sensitivity and physics potential IAXO will primarily be searching for solar axions, along with the potential to observe the quantum chromodynamics (QCD) axion in the mass range of 1 meV to 1 eV. It is also expected to be capable of discovering ALPs. Therefore, IAXO will have the potential to solve both the strong CP problem and the dark matter problem. It could also later be adapted to test models of hypothesized hidden photons or chameleons. Also, the magnet can be used as a haloscope to search for axion dark matter. IAXO will have a sensitivity to the axion-photon coupling 1–1.5 orders of magnitude higher than that achieved by previous detectors. Axion sources accessible to IAXO Any particle found by IAXO will be at least a sub-dominant component of the dark matter. The observatory would be capable of observing axions from the wide range of sources given below. Solar axions. QCD axions. Dark matter axions. Axions from astrophysical hints, such as white dwarf and neutron star anomalous cooling, globular clusters, and helium-burning supergiant stars. IAXO: The International Axion Observatory IAXO will be a next-generation enhanced helioscope, with a signal-to-noise ratio five orders of magnitude higher than current-day detectors. The cross-sectional area of the magnet, equipped with X-ray focusing optics, is meant to increase this signal-to-background ratio. When solar axions interact with the magnetic field, some of them may convert into photons through the Primakoff effect. These photons would then be detected by the X-ray detectors of the helioscope. The magnet will be a purpose-built large-scale superconductor with a length of 20 m and an average field strength of 2.5 tesla. The whole helioscope will feature 8 bores of 60 cm diameter. Each of the bores will be equipped with a focusing X-ray optic and a low-background X-ray detector. The helioscope will also be equipped with a mechanical system allowing it to follow the Sun consistently throughout half of the day. Tracking data will be taken during the day and background data will be taken during the night, which is the ideal split of data and background for properly estimating the event rate in each case and determining the axion signal. BabyIAXO BabyIAXO is an intermediate-scale version of the IAXO experiment with axion discovery potential and a FOM around 100 times larger than CAST. It will also serve as a technological prototype of all the subsystems of the helioscope, as a first step to explore further improvements for the final IAXO experiment. It will consist of a 10 m long magnet with 2 bores and 2 detection lines, each equipped with an X-ray optic and an ultra-low-background X-ray detector. BabyIAXO will be set up in the HERA South Hall at DESY in Hamburg (Germany) by the IAXO collaboration with the involvement of DESY and CERN. Data taking with BabyIAXO is scheduled to start in 2028. Design Magnet The superconducting magnet has a toroidal multibore configuration in order to generate a strong magnetic field over a large volume. It will be a 10 m long magnet consisting of two coils made out of 35 km of Rutherford cable. This configuration will generate a 2.5 tesla magnetic field within the two 70 cm diameter bores. The magnet subsystem is inspired by the ATLAS experiment. X-ray optics Since BabyIAXO will have two bores in the magnet, two X-ray optics are required to operate in parallel. Both are Wolter type I optics. One of the two BabyIAXO optics will be based on a mature technology developed for NASA's NuSTAR X-ray satellite. The signal from the 0.7 m diameter bore will be focused to a 0.2 cm² area. The second BabyIAXO optic will be one of the flight models of the XMM-Newton space mission, which belongs to ESA. Detectors IAXO and BabyIAXO will have multiple and diverse detectors working in parallel, mounted on the different magnet bores. Based upon the experience from CAST, the baseline detector technology will be a Time Projection Chamber (TPC) with a Micromegas readout. In addition, there are several other technologies under study: GridPix, Metallic Magnetic Calorimeters (MMC), Transition Edge Sensors (TES) and Silicon Drift Detectors (SDD). The detectors for this experiment need to meet certain technical requirements. They need a high detection efficiency in the region of interest (ROI, 1–10 keV) where the Primakoff axion signal is expected. They also need a very low radioactive background in the ROI (fewer than 3 counts per year of data). To reach this background level, the detector relies on: The use of passive shielding to block environmental gammas. The use of active veto systems to tag cosmic-ray-induced events. The intrinsic radiopurity of the construction materials. Advanced event discrimination strategies based on topological information, validated with simulations.
See also CERN Axion Solar Telescope References External links International Axion Observatory Website List of experiments for dark matter search CERN experiments Particle physics facilities Physics experiments CERN facilities CERN
International Axion Observatory
[ "Physics" ]
1,563
[ "Dark matter", "Physics experiments", "Unsolved problems in physics", "Experiments for dark matter search", "Experimental physics" ]
78,481,849
https://en.wikipedia.org/wiki/Schroeder%27s%20paradox
Schroeder's paradox refers to the phenomenon of certain polymers exhibiting more solvent uptake (observed as swelling) when exposed to a pure liquid than to its saturated vapor. It is named after the German chemist Paul von Schroeder, who first reported the phenomenon in 1903 while working on a sample of gelatin in contact with water. An equivalent observation has also been independently discovered and discussed within the biophysical community as the vapor pressure paradox. The phenomenon was recognized as notable due to its occurrence in the Nafion/water system, which is of technological importance because of its application in proton-exchange membrane fuel cells. Theories According to phase equilibrium theory, the activity of a chemical species is determined by its equilibrium partial vapor pressure, so both a saturated vapor and the pure liquid should exhibit the same equilibrium absorption into the polymer. For this reason, Schroeder's experimental results were immediately questioned, and the phenomenon has often been attributed to experimental error, such as failure to attain proper water saturation or isothermal conditions between the phases. However, even exact measurements support the existence of a systematic difference between sorption from saturated vapor and from pure liquid for certain systems. Additional surface effects along the polymer-liquid interface are required to explain the difference. A mechanism based on the action of Maxwell stresses due to the formation of an electrical double layer at the polymer's surface, present only where the polymer is submerged in liquid, has been proposed to explain this effect in the case of ion-exchange polymers, and a similar mechanism involving van der Waals and solvation forces has been proposed for the case of nonionogenic polymers. Mechanistic interpretations based on wetting of micropores in the polymer matrix have also been proposed. The difference in absorption can in either case be explained by a difference in surface stresses at the interface, which differs between immersion in pure liquid and saturated vapor, resolving the paradox without requiring a difference in activity between the two. Examples Schroeder's paradox has been reported for various polymer/solvent pairs, such as: gelatin/water (Schroeder, 1903) phospholipid multilayers/water (Rand & Parsegian, 1989) polyvinyl alcohol/water (Heintz & Stephan, 1994) polyvinyl alcohol/ethanol (Heintz & Stephan, 1994) Nafion/water (Gates, 2000) Nafion/methanol (Gates, 2000) sulfonated polyethylene/water (Freger, 2000) sulfonated polyimide/water (Cornet, 2001) polydimethylsiloxane/2-propanol (Valieres, 2005) kerogen/propane (Li, 2021) kerogen/n-butane (Li, 2021) kerogen/n-pentane (Li, 2021) References Polymer physics
Schroeder's paradox
[ "Chemistry", "Materials_science" ]
580
[ "Polymer physics", "Polymer chemistry" ]
75,532,995
https://en.wikipedia.org/wiki/Magnetic%20resonance%20fingerprinting
Magnetic resonance fingerprinting (MRF) is a methodology in quantitative magnetic resonance imaging (MRI) characterized by a pseudo-randomized acquisition strategy. It involves creating unique signal patterns, or 'fingerprints', for different materials or tissues, after which a pattern recognition algorithm matches these fingerprints with a predefined dictionary of expected signal patterns. This process translates the data into quantitative maps, revealing information about the magnetic properties being investigated. MRF has shown promise in providing reproducible and quantitative measurements, offering potential advantages in terms of objectivity in tissue diagnosis, comparability across different scans and locations, and the development of imaging biomarkers. The technology has been explored in various clinical applications, including brain, prostate, liver, cardiac, and musculoskeletal imaging, as well as the measurement of perfusion and microvascular properties through MR vascular fingerprinting. Motivation In practical magnetic resonance acquisitions, measurements are often qualitative or 'weighted', lacking inherent quantifiability. Factors like scanner type, setup, and detectors contribute to varying signal intensities for the same material across datasets. Current clinical MRI relies on terms like 'hyperintense' or 'hypointense', lacking quantitative severity indicators and global sensitivity. Although quantitative multiparametric acquisition has been a research goal, existing methods often focus on single parameters, demand substantial scan time, and are sensitive to system imperfections. Simultaneous multiparametric measurements are generally impractical due to time constraints and experimental conditions. Consequently, qualitative magnetic resonance measurements remain the prevalent standard, especially in clinical settings. MRF is connected to compressed sensing and shares its expected benefits. Initial findings suggest that MRF could provide fully quantitative results in a time similar to traditional qualitative MRI, with reduced sensitivity to measurement errors. Importantly, MRF has the potential to simultaneously quantify numerous MRI parameters given sufficient scan time, expanding capabilities compared to current MRI techniques. This opens possibilities for computer-aided multiparametric MRI analyses, analogous to genomics or proteomics, detecting complex changes across various parameters simultaneously. When paired with a suitable pattern recognition algorithm, MRF exhibits enhanced resilience to noise and acquisition errors, mitigating their impact. Working principle MRF involves a three-step process: data acquisition, pattern matching, and tissue property visualization. During data acquisition, MR system settings are intentionally varied in a pseudorandom manner to create unique signal evolutions, or "fingerprints", for each combination of tissue properties. Individual voxel fingerprints are compared with a simulated collection in a dictionary generated for the MRF sequence. The best match is selected through pattern matching, and the identified tissue properties are depicted as pixel-wise maps, providing quantitative and anatomical information. Originally designed for measurements of T1, T2, static magnetic field (B0) inhomogeneity, and proton density (M0), recent advancements have demonstrated the feasibility of measuring additional properties such as radio-frequency transmit field inhomogeneity (B1) and T2*.
Data acquisition Unlike conventional MRI, magnetic resonance fingerprinting dynamically varies acquisition parameters throughout the acquisition. Unlike traditional methods that repetitively use the same parameters until full k-space data are acquired, MRF's flexible approach involves adjusting the radiofrequency excitation angle (FA), phase, repetition time, and k-space sampling trajectory. This dynamic variation generates a unique signal time-course for each tissue, and proper sequence design is crucial for achieving useful, time-efficient, accurate, precise, and clinically relevant information. Despite significant under-sampling, the signal evolution across all data points allows accurate and repeatable quantitative mapping. Spatio-temporal incoherence of under-sampling artifacts is a key consideration in designing the sampling strategy. Spiral or radial trajectories are commonly used for their higher spatial incoherence and sampling efficiency. Echo-planar imaging (EPI) and Cartesian trajectories have also demonstrated utility in the MRF framework. The trajectory re-ordering can be sequential, uniformly rotated, or random, depending on the sequence type and application. MRF provides a flexible framework, theoretically allowing any sequence structure to be adopted for obtaining relevant tissue properties. The original MRF description was based on inversion-recovery-prepared balanced steady-state free precession (IR-bSSFP), sensitive to T1, T2, and static field (B0) inhomogeneity. Subsequent adaptations introduced various sequences, each addressing limitations, conferring advantages, or measuring additional tissue properties. Pattern matching and tissue property visualization Pattern matching in MRF involves comparing the patterns of signal evolutions from individual tissue voxels with entries in a precomputed dictionary of possible signal evolutions for the specific MRF sequence. This dictionary is generated using mathematical algorithms predicting spin behavior and signal evolution during the acquisition. Various models, such as Bloch equation simulations and the extended phase graph formalism, have been employed to create these databases. More complex models have been used in MR vascular fingerprinting (MRvF) and arterial spin labeling (MRF-ASL) perfusion to generate fingerprints for pattern matching. Pattern matching introduces a degree of error tolerance, as long as the errors are spatially and temporally incoherent. In the original MRF acquisition, template matching involved calculating the vector dot product of the acquired signal with each simulated fingerprint signal. The dictionary entry with the highest dot product was considered the best match, and the corresponding T1, T2, and B0 values were assigned to that voxel. The M0 value was determined as the multiplicative factor between the acquired and simulated fingerprints. This process proved time-efficient, accurate (showing good correlation with phantom values), precise, and insensitive to motion artifacts. The collection of fingerprints may be generated once for each sequence and applied universally, or individually for each patient, depending on the organ or physiological properties under evaluation.
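The dot-product template matching described above reduces to a normalized inner product followed by an argmax; a minimal NumPy sketch is given below. This is a schematic illustration with hypothetical array shapes and function names, not the original MRF implementation:

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """signals:    (n_voxels, n_timepoints) acquired signal evolutions
    dictionary: (n_entries, n_timepoints) simulated fingerprints
    params:     (n_entries, 3) e.g. (T1, T2, B0) used to simulate each entry
    Returns per-voxel best-match parameters and the M0 scaling factor."""
    # Normalize so the dot product measures pattern similarity, not amplitude.
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(s_norm @ d_norm.T), axis=1)  # highest inner product wins
    # M0 is the multiplicative factor between acquired and matched fingerprints.
    m0 = np.einsum('ij,ij->i', signals, dictionary[best]) / \
         np.einsum('ij,ij->i', dictionary[best], dictionary[best])
    return params[best], m0
```

An exhaustive search of this kind scales linearly with the number of dictionary entries, which is what motivates the acceleration methods discussed next.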
To enhance the speed, robustness, and accuracy of pattern matching and visualization, efforts have been directed toward speeding up the process. Compression methods in the time dimension and fast group-matching algorithms have been explored, resulting in a time reduction by a factor of 3–5 with less than a 2% decrease in the accuracy of tissue property estimation. Clinical applications Cardiac Cardiac MRF has focused on myocardial tissue property mapping, offering simultaneous estimation of T1, T2, and M0 values with good concordance with conventional mapping methods. Future developments aim to reduce scan time, achieve volumetric acquisition for whole-heart coverage, and optimize M0 values. Brain relaxometry In brain relaxometry studies, MRF has shown good correlation for T1 and T2 values of grey and white matter. Studies have demonstrated its ability to simultaneously estimate T1 and T2 values for different brain regions, providing fast and regional relaxometry with correlations to age and gender. MRF has been employed in characterizing and differentiating intra-axial brain tumors, offering a valuable tool for distinguishing gliomas and metastases. Abdomen Adopting MRF for abdominal imaging presents unique challenges, including the need for fast sequences, high spatial resolution, and compensation for B0 and B1 inhomogeneities. Approaches like measuring B1 variation through separate scans and incorporating it into the dictionary simulation have been proposed, enabling successful application in abdominal imaging, even in the presence of liver metastases. References Magnetic resonance imaging Fingerprinting algorithms
Magnetic resonance fingerprinting
[ "Chemistry" ]
1,525
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
75,539,112
https://en.wikipedia.org/wiki/Area%20sampling%20frame
An area sampling frame is an alternative to the most traditional type of sampling frame. A sampling frame is often defined as a list of the elements of the population we want to explore through a sample survey. A slightly more general concept considers a sampling frame to be a tool that allows the identification of, and access to, the elements of the population, even if an explicit list does not exist. Traditional sampling frames are sometimes referred to as list frames. In many cases, suitable lists are not available. This can happen for several reasons, for example: Existing lists, such as population censuses, are too old and no longer correspond to the current reality. We are targeting a population for which a list is not feasible, for example a wild animal species. The population is a continuous feature in a given geographic area and the definition of its elements is not straightforward. This often happens for sample surveys designed to produce environmental statistics. Area sampling frames are generally defined by two elements: The boundaries of a target region in a given cartographic projection. The type of geographic units to be sampled. We can mention three main types of units: Points. In principle, points are dimensionless, but, for practical reasons, we can attribute to them a certain size, such as 1 m × 1 m. The suitable size is linked to the accuracy of the tool used for locating the point. Possible tools are GPS devices, orthophotos or satellite images. Point sampling can be based on a two-stage scheme, sampling clusters in the first stage and sampling points in the second stage. Another option is a two-phase scheme of unclustered points: a large first-phase sample is selected, stratification is conducted only for the first-phase sample, and a stratified sample is chosen in the second phase. Transects. A transect is a piece of straight line of a given length. Transect sampling is useful to estimate the total length of linear landscape elements. Areal units defined by polygons. In the jargon of agricultural surveys, areal units are generally called "segments", even if a segment in geometry rather corresponds to the concept of transect used in area sampling frames. Segments can be delineated by photo-interpretation or generated automatically, usually on the basis of a regular grid. The optimal size of segments depends on the spatial autocorrelation of the monitored processes and the cost function that links the price of data collection with the size of the sample unit.
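As an illustration of the simplest of these designs, the sketch below draws a systematic grid of sample points with a random start over a rectangular region; the region bounds and grid spacing are hypothetical:

```python
import random

def systematic_point_sample(xmin, ymin, xmax, ymax, step):
    """Systematic area-frame sample: a regular grid of points with a
    random offset, covering the rectangle [xmin, xmax] x [ymin, ymax]."""
    x0 = xmin + random.uniform(0, step)  # random start keeps the design probabilistic
    y0 = ymin + random.uniform(0, step)
    points = []
    y = y0
    while y <= ymax:
        x = x0
        while x <= xmax:
            points.append((x, y))
            x += step
        y += step
    return points

# One point every 2 km over a 20 km x 10 km region (coordinates in km):
sample = systematic_point_sample(0, 0, 20, 10, step=2)
print(len(sample), sample[:3])
```

Estimating, say, a crop's area share then amounts to the fraction of sampled points observed to fall on that crop, weighted according to the design.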
Fields of application The oldest field of application of area sampling frames has probably been forest inventories, one of the fields with the most obvious geographic component, in which the traditional list frame approach cannot be applied. For the same reason, area frames appear as a natural tool for many environmental topics, such as soil surveys and other topics that require spatial statistics tools. Different area frame approaches have been widely discussed and compared for agricultural statistics. In the 1930s, the National Agricultural Statistical Service of the US Department of Agriculture introduced area sampling frames for the estimation of crop area and yield on the basis of a sample of areal units (segments). The French Teruti survey chose in the 1960s an approach based on a systematic sample of clusters of points. The Italian AGRIT survey has explored different approaches, comparing segment and point methods. The Joint Research Centre of the EC has conducted a large number of studies on area sampling frame methodology and area frame surveys for agricultural, forestry, environmental and human settlement studies. The soaring number of applications of satellite images has boosted interest in area sampling frames, not only because of the use of remote sensing for statistics and because the integration of satellite images has improved the quality of sampling frames and related estimators, but also because satellite images may themselves need to be sampled. Validation of thematic maps produced by satellite image analysis has become one of the main application fields of area sampling frames. References Sampling (statistics) Survey methodology Spatial analysis
Area sampling frame
[ "Physics" ]
789
[ "Spacetime", "Space", "Spatial analysis" ]
71,219,978
https://en.wikipedia.org/wiki/McVittie%20metric
In the general theory of relativity, the McVittie metric is the exact solution of Einstein's field equations that describes a black hole or massive object immersed in an expanding cosmological spacetime. The solution was first fully obtained by George McVittie in the 1930s, while investigating the effect of the then recently discovered expansion of the Universe on a mass particle. The simplest case of a spherically symmetric solution to the field equations of general relativity with a cosmological constant term, the Schwarzschild-de Sitter spacetime, arises as a specific case of the McVittie metric, with positive 3-space scalar curvature and constant Hubble parameter $H$. Metric In isotropic coordinates, the McVittie metric is given by $ds^2 = -\left(\frac{1-\frac{M}{2a(t)r}}{1+\frac{M}{2a(t)r}}\right)^2 dt^2 + \left(1+\frac{M}{2a(t)r}\right)^4 a(t)^2 \, \frac{dr^2 + r^2 d\Omega^2}{\left(1+\frac{1}{4}k r^2\right)^2} \quad (1)$, where $d\Omega^2 = d\theta^2 + \sin^2\theta \, d\varphi^2$ is the usual line element for the Euclidean sphere, $M$ is identified as the mass of the massive object, $a(t)$ is the usual scale factor found in the FLRW metric, which accounts for the expansion of the space-time; and $k$ is a curvature parameter related to the scalar curvature $R^{(3)}$ of the 3-space as $R^{(3)} = 6k/a(t)^2$, which is related to the curvature of the 3-space exactly as in the FLRW spacetime. It is generally assumed that $\dot{a}(t) > 0$, otherwise the Universe is undergoing a contraction. One can define the time-dependent mass parameter $\mu(t,r) \equiv M/(2\,a(t)\,r)$, which accounts for the mass density inside the expanding, comoving radius at time $t$, to write the metric in a more succinct way, $ds^2 = -\left(\frac{1-\mu}{1+\mu}\right)^2 dt^2 + (1+\mu)^4 a(t)^2 \, \frac{dr^2 + r^2 d\Omega^2}{\left(1+\frac{1}{4}k r^2\right)^2}$. Causal structure and singularities From here on, it is useful to define the Hubble parameter $H(t) \equiv \dot{a}(t)/a(t)$ and the areal radius $R$. For McVittie metrics with the properties of general expanding FLRW solutions, $H(t) > 0$ with a late-time limit $H_0 = \lim_{t\to\infty} H(t)$, the spacetime has the property of containing at least two singularities. One is a cosmological, null-like naked singularity at the smallest positive root of the equation $1 - 2M/R - H^2 R^2 = 0$. This is interpreted as the black hole event horizon in the case where $H_0 > 0$. For the $H_0 > 0$ case, there is an event horizon at this root, but no singularity, which is extinguished by the existence of an asymptotic Schwarzschild-de Sitter phase of the spacetime. The second singularity lies in the causal past of all events in the space-time, and is a space-time singularity where the scale factor vanishes, which, due to its causal-past nature, is interpreted as the usual Big-Bang-like singularity. There are also at least two event horizons: one at the largest solution of $1 - 2M/R - H^2 R^2 = 0$, which is space-like and protects the Big-Bang singularity at finite past time; and one at the smallest root of the same equation, also at finite time. The second event horizon becomes a black hole horizon for the $H_0 > 0$ case. Schwarzschild and FLRW limits One can obtain the Schwarzschild and Robertson-Walker metrics from the McVittie metric in the exact limits of $H = 0$ and $M = 0$, respectively. In trying to describe the behavior of a mass particle in an expanding Universe, the original paper of McVittie describes a black hole spacetime with decreasing Schwarzschild radius for an expanding surrounding cosmological spacetime. However, one can also interpret it, in the limit of a small mass parameter $\mu \ll 1$, as a perturbed FLRW spacetime, with $\Phi = -2\mu$ the Newtonian perturbation. Below we describe how to derive these analogies between the Schwarzschild and FLRW spacetimes from the McVittie metric. Schwarzschild In the case of a flat 3-space, with scalar curvature constant $k = 0$, the metric (1) becomes $ds^2 = -\left(\frac{1-\mu}{1+\mu}\right)^2 dt^2 + (1+\mu)^4 a(t)^2 \left(dr^2 + r^2 d\Omega^2\right)$, with $\mu = M/(2\,a(t)\,r)$, which, for each instant of cosmic time $t$, is the metric of the region outside of a Schwarzschild black hole in isotropic coordinates, with Schwarzschild radius $2M$. To make this equivalence more explicit, one can make the change of radial variables $R = a(t)\, r \,(1+\mu)^2$ to obtain the metric in Schwarzschild coordinates. The interesting feature of this form of the metric is that one can clearly see that the Schwarzschild radius, which dictates at which distance from the center of the massive body the event horizon is formed, shrinks in comoving coordinates as the Universe expands. For a comoving observer, which accompanies the Hubble flow, this effect is not perceptible, as its physical radial coordinate is given by $a(t)\,r$, such that, for the comoving observer, the physical radius of the horizon is constant, and the event horizon will remain static. FLRW In the case of a vanishing mass parameter $M = 0$, the McVittie metric becomes exactly the FLRW metric in spherical coordinates, $ds^2 = -dt^2 + a(t)^2 \, \frac{dr^2 + r^2 d\Omega^2}{\left(1+\frac{1}{4}k r^2\right)^2}$, which leads to the exact Friedmann equations for the evolution of the scale factor $a(t)$. If one takes the limit of a small mass parameter $\mu \ll 1$, the metric (1) becomes, to first order in $\mu$, $ds^2 = -\left(1+2\Phi\right) dt^2 + \left(1-2\Phi\right) a(t)^2 \left(dr^2 + r^2 d\Omega^2\right)$, which can be mapped to a perturbed FLRW spacetime in Newtonian gauge, with perturbation potential $\Phi = -2\mu = -M/(a(t)\,r)$; that is, one can understand the small mass of the central object as the perturbation in the FLRW metric.
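The first-order expansion behind this Newtonian-gauge identification is easy to verify symbolically; the following sketch only checks the series expansion of the metric functions in μ and assumes nothing beyond the formulas above:

```python
import sympy as sp

mu = sp.symbols('mu', positive=True)

# Lapse and spatial factors of the (flat) McVittie metric, with mu = M / (2 a(t) r):
g_tt = -((1 - mu) / (1 + mu))**2
g_rr = (1 + mu)**4

print(sp.series(g_tt, mu, 0, 2))  # -> -1 + 4*mu + O(mu**2)  =  -(1 + 2*Phi)
print(sp.series(g_rr, mu, 0, 2))  # ->  1 + 4*mu + O(mu**2)  =   1 - 2*Phi
# Both match the Newtonian-gauge form with Phi = -2*mu = -M/(a(t) r).
```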
See also Cosmology Singularity theorems References Exact solutions in general relativity Spacetime Mathematical methods in general relativity Lorentzian manifolds
McVittie metric
[ "Physics", "Mathematics" ]
997
[ "Exact solutions in general relativity", "Vector spaces", "Mathematical objects", "Equations", "Space (mathematics)", "Theory of relativity", "Spacetime" ]
71,227,625
https://en.wikipedia.org/wiki/Hamdy%20Doweidar
Hamdy Doweidar Taki El-Din Doweidar (PhD, DSc) was an Egyptian condensed matter physicist whose research topics included inorganic glasses, glass-ceramics, bioactive glasses, and structure-property correlations. He developed the Doweidar Model, which is used to correlate density, thermal expansion coefficient, molar refraction, and refractive index with the concentration of structural units in numerous types of glass. Doweidar also obtained a patent, with two other researchers, for the preparation of a biologically active glass ionomer cement as a dental filling, characterized by vital activity due to the presence of bioactive crystalline phases in the parent glass (such as apatite and fluorapatite), which react with a simulated body fluid (SBF) solution to precipitate layers of hydroxyapatite. These represent the basic crystalline phases in the formation of bones and teeth. He was a Professor Emeritus at Mansoura University. Education and career Doweidar graduated from Assiut University in 1964 with a bachelor's degree in physics and chemistry, received a master's degree in physical chemistry from Cairo University in 1969, and a PhD in applied physics from Bauhaus-Universität Weimar in 1974. Doweidar was a researcher at the National Research Centre from 1965 to 1975, then an associate professor at Mansoura University until 1986, when he became a Distinguished Professor. In 1977, Doweidar founded the Glass Research Laboratory at Mansoura University. He was a visiting professor at the École Normale Supérieure in Algeria from 1980 to 1984, and at Sanaa University in Yemen from 1990 to 1994. Recognition Doweidar received the Award of the Academy of Scientific Research and Technology (Promotional State Prize in Physics), Cairo, in 1999, the Mansoura University Award (Distinction Prize in Physics) in 2000, and the Scopus Award for contributions to materials science, presented by Elsevier and the Egyptian Ministry of Higher Education, in 2008. He published over 120 peer-reviewed papers in international journals and was named one of the world's top 2% most cited scientists by Stanford University in 2019, 2020, 2021, 2022, and 2023. References Academic staff of Mansoura University Cairo University alumni Bauhaus University, Weimar alumni Assiut University alumni Condensed matter physicists 20th-century physicists Egyptian physicists
Hamdy Doweidar
[ "Physics", "Materials_science" ]
492
[ "Condensed matter physicists", "Condensed matter physics" ]
72,705,930
https://en.wikipedia.org/wiki/Darkon%20%28unparticle%29
The darkon is a hypothetical scalar unparticle, introduced as a minimal extension of the Standard Model, that serves as a dark matter candidate. History A. Zee and V. Silveira were the first to consider the darkon field as dark matter, in 1985. This approach was then used by several other groups of physicists. Concept In addition to the Standard Model particles, the model contains the darkon, a real singlet scalar field. To play the role of dark matter, the darkon field must interact weakly with the Standard Model matter sector and must not decay rapidly into other particles. The simplest way of introducing the darkon is to demand that darkons can only be annihilated or created in pairs, which makes the darkon stable against decay. See also Lightest supersymmetric particle Physics beyond the Standard Model SUSY WIMPs References Further reading Dark matter Physics beyond the Standard Model Dark concepts in astrophysics
Darkon (unparticle)
[ "Physics", "Astronomy" ]
184
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Theoretical physics", "Unsolved problems in physics", "Astrophysics", "Exotic matter", "Dark concepts in astrophysics", "Particle physics", "Theoretical physics stubs", "Particle physics stubs", "Physics beyond the Stan...
72,709,760
https://en.wikipedia.org/wiki/Caesium%20telluride
Caesium telluride or caesium telluridocaesium is an inorganic salt with the chemical formula Cs2Te. Caesium telluride is used to make photocathodes; it is the photoemissive material used in many laser-driven radio frequency (RF) electron guns, such as at the TESLA Test Facility (TTF). References Caesium compounds Tellurides
Caesium telluride
[ "Chemistry" ]
90
[ "Inorganic compounds", "Inorganic compound stubs" ]
69,642,430
https://en.wikipedia.org/wiki/Ultrapolynomial
In mathematics, an ultrapolynomial is a power series in several variables whose coefficients are bounded in some specific sense. Definition Let $(M_p)_{p\in\mathbb{N}}$ be a sequence of positive numbers and let $\mathbb{K}$ be a field (typically $\mathbb{R}$ or $\mathbb{C}$) equipped with a norm $|\cdot|$ (typically the absolute value). Then a function of the form $P(x) = \sum_{\alpha\in\mathbb{N}^d} c_\alpha x^\alpha$, with coefficients $c_\alpha \in \mathbb{K}$, is called an ultrapolynomial of class $\{M_p\}$ (resp. of class $(M_p)$), if the coefficients satisfy $|c_\alpha| \le C L^{|\alpha|}/M_{|\alpha|}$ for all $\alpha$, for some $C, L > 0$ (resp. for every $L > 0$ and some $C_L > 0$). For example, with $M_p = p!$ and one variable, the exponential series $e^x = \sum_n x^n/n!$ satisfies the bound with $C = L = 1$ and is thus an ultrapolynomial of class $\{p!\}$, but not of class $(p!)$, since for $L < 1$ no constant $C_L$ makes $1 \le C_L L^n$ hold for all $n$. References Mathematical analysis
Ultrapolynomial
[ "Mathematics" ]
87
[ "Mathematical analysis" ]
69,646,023
https://en.wikipedia.org/wiki/Variable-buoyancy%20pressure%20vessel
A variable-buoyancy pressure vessel system is a type of rigid buoyancy control device for diving systems that retains a constant volume and varies its density by changing the weight (mass) of the contents, either by moving the ambient fluid into and out of a rigid pressure vessel, or by moving a stored liquid between internal and external variable-volume containers. A pressure vessel is used to withstand the hydrostatic pressure of the underwater environment. A variable-buoyancy pressure vessel can have an internal pressure greater or less than ambient pressure, and the pressure difference can vary from positive to negative within the operational depth range, or remain either positive or negative throughout the pressure range, depending on design choices. Variable buoyancy is a useful characteristic of any mobile underwater system that operates in mid-water without external support. Examples include submarines, submersibles, benthic landers, remotely operated and autonomous underwater vehicles, and underwater divers. Several applications only need one cycle from positive to negative and back, to get down to depth and return to the surface between deployments; others may need tens to hundreds of cycles over several months during a single deployment, or continual but very small adjustments in both directions to maintain a constant depth or neutral buoyancy at changing depths. Several mechanisms are available for this function; some are suitable for multiple cycles between positive and negative buoyancy, and others must be replenished between uses. Their suitability depends on the required characteristics for the specific application. Uses of variable buoyancy in diving systems Mobile underwater systems that operate in mid-water without external support need variable buoyancy, and as such these systems are a major research topic in the field of underwater vehicles. Examples include submarines, submersibles, benthic landers, remotely operated and autonomous underwater vehicles, and ambient-pressure and single-atmosphere underwater divers. A submarine can closely approach equilibrium when submerged but has no inherent stability in depth. The sealed pressure hull structure is usually slightly more compressible than water and will consequently lose buoyancy with increased depth. For precise and quick control of buoyancy and trim at depth, submarines use depth control tanks (DCT), also called hard tanks (due to their ability to withstand higher pressure) or trim tanks. These are variable-buoyancy pressure vessels. The amount of water in depth control tanks can be controlled to change the buoyancy of the vessel so that it moves up or down in the water column, or to maintain a constant depth as outside conditions (mainly water density) change, and water can be pumped between trim tanks to control longitudinal or transverse trim without affecting buoyancy. The operating depth of underwater vehicles can be controlled by controlling the buoyancy (by changing either the overall weight or the displaced volume) or by vectored thrust. Buoyancy can be controlled by changing the overall weight of the vehicle at constant volume, or by changing the displaced volume at a constant vehicle weight. The resulting buoyancy is used to control heave velocity and hovering depth, and in underwater gliders a positive or negative net buoyancy is used to drive forward motion.
The Avelo scuba system uses a variable-buoyancy pressure vessel, which is both the primary breathing gas cylinder and the scuba buoyancy compensator, with a rechargeable-battery-powered pump and dump valve unit which is demountable from the cylinder. Variable-buoyancy systems have also been considered for depth control of tethered ocean-current turbine electrical generation. The type of variable-buoyancy system best suited to an application depends on the precision of control required, the amount of change needed, and the number of cycles of buoyancy change necessary during a deployment. Types of variable-buoyancy systems Several types of variable-buoyancy systems have been used, and they are briefly described here. Some are based on a relatively incompressible pressure vessel and are nearly stable with variation of hydrostatic pressure. Ambient-pressure buoyancy/ballast tanks (unstable with depth change), such as the main ballast tanks on a submarine, or an inflatable diver's buoyancy compensator. These are not pressure vessels, as the contents are at ambient pressure. Weight-discharge variable-mass system. This is generally a system by which ballast of higher density than the surroundings is discharged, and once discharged the ballast is lost. The system is simple and appropriate for vehicles that only need to make a very limited number of buoyancy adjustments during a deployment. It is a common method of achieving positive buoyancy in an emergency, as it is simple to arrange a fail-safe discharge mechanism. An analogous system for releasing fixed low-density material is also possible. These are also not pressure vessels, as the weights or incompressible buoys are stored at ambient pressure. One-way tank-flood variable-mass system. This is simply an empty tank that can withstand the external working pressure and can be partly or completely flooded by a control valve. The tank can be drained again at the surface for subsequent dives, but not while under pressure during a dive. Pumped-oil constant-mass, variable-volume system. This method uses more power but is indefinitely repeatable while power lasts, as it does not discharge any consumables. A positive-displacement pump transfers oil stored in a variable-volume container inside a gas-filled pressure vessel to an external variable-volume container, incompressibly increasing the displaced volume of the vehicle. Return transfer may be by pressure difference controlled by a valve, or also pumped. Piston-driven oil-filled constant-mass, variable-volume system. This works very similarly to the pumped-oil system, but the internal storage is in a cylinder with a piston which decreases or increases its volume using a mechanical drive, typically powered by an electric motor. In effect the piston acts as a pump. Pumped-water variable-buoyancy system. Ambient water is moved into and out of the pressure vessel to change the overall density of the vessel, and thereby of the vehicle of which it is a component. In one direction this transfer may be possible by pressure difference, but in at least one direction it must be pumped. The process is repeatable while power lasts, as the ballast is drawn from the surroundings.
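For the pumped systems just listed, the dominant energy cost is moving ballast against ambient pressure; a back-of-the-envelope sketch follows, with illustrative numbers and function names only, not a specification of any particular system:

```python
RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def buoyancy_change_newtons(volume_m3):
    """Net buoyancy gained by expelling `volume_m3` of ballast water."""
    return RHO_SEAWATER * G * volume_m3

def discharge_energy_joules(volume_m3, depth_m, pump_efficiency=0.5):
    """Approximate pump energy to expel ballast water at depth, ignoring any
    (possibly helpful) internal gas pressure of the vessel."""
    ambient_pressure = RHO_SEAWATER * G * depth_m  # gauge pressure, Pa
    return ambient_pressure * volume_m3 / pump_efficiency

# Expelling 10 litres at 100 m depth:
print(buoyancy_change_newtons(0.010))        # ~100.6 N of extra lift
print(discharge_energy_joules(0.010, 100.0)) # ~20.1 kJ of pump energy
```

This scaling with depth is one reason designs that hold a high internal gas pressure, which helps push the water out, can trade pump work against vessel strength, as discussed next.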
Mechanism A buoyancy tank that is within the pressure hull of the vehicle, as in a submarine, will be exposed to the internal pressure of the vehicle, so external pressure loads on the tank may be relatively low. In this case the ballast water transfer into the tank may not require pumping, though a positive-displacement pump may still be useful to accurately control the volume of water admitted. Discharge of ballast water is against the external pressure, which will depend on depth, and will generally require significant work. If the buoyancy tank is directly exposed to the ambient hydrostatic pressure, the external load due to depth can be high, but if the internal gas pressure is high enough, the pressure difference will be lower, and the pressure vessel is not subjected to the high net external pressure loads that can cause buckling instability, which allows a lower structural weight. In the extreme case the internal pressure is high enough to rapidly eject the water ballast at maximum operational depth, as in the case of the Avelo integrated diving cylinder and buoyancy control device. A pump is used to move ambient water into the pressure vessel against the internal pressure, compressing the gas further in proportion to the volume decrease, so the entire internal volume is not available to hold ballast: although the gas will decrease in volume, there will always be some gas volume remaining. The water and air in the pressure vessel may be separated by a membrane or free piston to prevent pumping out air in some orientations, and to prevent the air from dissolving in the ballast water under high pressure. See also References Diver buoyancy control equipment Buoyancy devices Pressure vessels
Variable-buoyancy pressure vessel
[ "Physics", "Chemistry", "Engineering" ]
1,614
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
66,918,999
https://en.wikipedia.org/wiki/Nitrate%20selenite
The nitrate selenites are mixed anion compounds containing distinct nitrate (NO3−) and selenite (SeO32−) groups. The compounds are colourless unless coloured by cations. List References Nitrates Selenites Mixed anion compounds
Nitrate selenite
[ "Physics", "Chemistry" ]
53
[ "Matter", "Mixed anion compounds", "Nitrates", "Salts", "Oxidizing agents", "Ions" ]
66,922,545
https://en.wikipedia.org/wiki/Globo%20H
Globo H (globohexaosylceramide) is a globo-series glycosphingolipid antigen that is present on the outer membrane of some cancer cells. Globo H is not expressed in normal tissue cells, but is expressed in a number of types of cancers, including cancers of the breast, prostate, and pancreas. Globo H's exclusivity for cancer cells makes it a target of interest for cancer therapies. Structure Defined by the monoclonal antibody MBr1, Globo H has been isolated from the breast cancer cell line MCF-7, and its structure has been determined through several analyses, including NMR spectroscopy and methylation analysis. Globo H consists of a hexasaccharide of the structure Fucα(1-2)Galβ(1-3)GalNAcβ(1-3)Galα(1-4)Galβ(1-4)Glcβ(1) with a ceramide attached to its terminal glucose ring at the 1 position in a beta linkage. Synthesis Biosynthesis Globo H's biosynthetic pathway is shared with the synthesis pathways of other globo-series glycosphingolipid antigens that are also specific to cancer cells, including stage-specific embryonic antigen-3 (SSEA3) and stage-specific embryonic antigen-4 (SSEA4). The biosynthetic pathway of these antigens includes the enzyme β-1,3-galactosyltransferase V (β3GalT5). β3GalT5 catalyzes the galactosylation of globoside-4 (Gb4) to SSEA3. SSEA3 can then be converted to SSEA4 by a sialyltransferase adding a sialic acid group to its end, or it can be converted to Globo H by a fucosyltransferase adding a fucose ring to its end. Playing a part in the formation of three different cancer-specific antigens, β3GalT5 is of particular interest for its relevance to cancer treatment, and it has been shown to be critical for cancer cell survival. Chemical synthesis In order to study its potential as a cancer therapy target, Globo H has been synthesized in the laboratory. One synthesis is achieved by first building two trisaccharides from their component sugars and then linking them. The trisaccharides, with most of their functional groups protected to prevent side reactions, are linked by creating the GalNAcβ(1-3)Gal bond. A thioethyl group is added to the 1 position of one of the protected galactose rings, and, in the presence of methyl triflate, this reacts with the hydroxyl group at the 3 position of the other galactose to link the trisaccharides and form the hexasaccharide. The ceramide is added to the 1 position of the terminal glucose ring after hexasaccharide formation. Globo H as a therapeutic target As a tumor-associated carbohydrate antigen (TACA), Globo H is a promising clinical target for immunotherapy. While absent in normal tissues, the glycosphingolipid is overexpressed in a variety of epithelial cancer cell types, including human pancreatic, gastric, lung, colorectal, esophageal, and breast tumors. Globo H anticancer vaccines Globo H's TACA character allows for its utilization as an anticancer vaccine, inducing an antibody response against the epitope. The resulting humoral immunity could enable the selective eradication of Globo H-presenting tumors. The Taiwanese biopharma company OBI Pharma, Inc. was the first to develop Adagloxad Simolenin (OBI-822), a Globo H hexasaccharide conjugated with the immunostimulatory carrier protein keyhole limpet hemocyanin (KLH). The Phase III GLORIA study is underway, evaluating the carbohydrate-based immunogen's effects in high-risk triple-negative breast cancer (TNBC) patients, with an estimated completion date in 2027.
Alternative vaccine conjugates have been developed which avoid issues associated with the protein carrier KLH by substituting it with a lipid- or carbohydrate-based carrier. Examples include the use of lipid A derivatives or entirely carbohydrate vaccine conjugates such as Globo H-PS A1. Anti-Globo H antibodies Globo H-targeting antibodies are another strategy currently being evaluated in the cancer therapeutics space. OBI Pharma's OBI-888 is a humanized IgG1 antibody that selectively binds to the Globo H antigen among other Globo-series glycosphingolipids such as SSEA-3 and SSEA-4. Additionally, in vivo studies of OBI-888 in various Globo H-positive (GH+) xenograft models showed promising tumor growth inhibition results. OBI-888's human Phase I/II study for the treatment of metastatic and locally advanced solid tumors is estimated to finish in December 2022. Based on OBI-888, the first-in-class antibody-drug conjugate (ADC) OBI-999 was additionally developed, linking OBI-888 to monomethyl auristatin E, a synthetic antineoplastic agent. The ADC is currently undergoing a phase II trial in patients with advanced solid tumors, with an estimated completion date in December 2023. In December 2019 and January 2020, OBI-999 was granted two Orphan Drug Designations by the FDA for the treatment of pancreatic and gastric cancer. References Biomolecules Immune system Immune receptors Glycolipids Antigens Cancer immunotherapy Experimental cancer drugs
Globo H
[ "Chemistry", "Biology" ]
1,266
[ "Carbohydrates", "Natural products", "Antigens", "Immune system", "Glycolipids", "Organic compounds", "Organ systems", "Biomolecules", "Structural biology", "Biochemistry", "Glycobiology", "Molecular biology" ]
66,929,000
https://en.wikipedia.org/wiki/Thermal%20acoustic%20imaging
Thermal acoustic imaging (TAI) is a proprietary active thermographic inspection process developed by Pratt and Whitney (P&W) in 2005; TAI is a nondestructive testing (NDT) method to detect internal and external cracking of hollow-core turbofan engine fan blades. TAI is performed to inspect the PW4000 112-inch diameter fan blades in an enclosed air-conditioned room within P&W's overhaul and repair facility in East Hartford, Connecticut. Technical description In the TAI process, sound energy is applied to excite the fan blade. If a discontinuity exists in the metal, the excitation will cause the two sides of the contacting discontinuity to move against each other, resulting in frictional heating. The frictional heating is detected on the surface of the fan blade by a thermal imaging sensor. To examine a complete fan blade, the convex and concave surfaces of the fan blade airfoil are divided into zones, and the computer-controlled thermal sensor takes an image of each zone while sound energy is applied. After both sides of the fan blade have been completely scanned, the images are processed by a computer and then displayed on a monitor for evaluation by an inspector. The computer can enhance the image to assist the inspector in evaluating any indications. Some indeterminate indications may require reinspection, which in turn may require repainting of the fan blade and repeating the TAI process. If a fan blade has an indication the inspector is not able to evaluate conclusively, the inspector should forward the images along with the fan blade to a process engineer for further evaluation and possible application of alternative NDT methods such as ultrasonic and/or X-ray inspection.
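The evaluation step amounts to flagging zones whose thermal response during excitation exceeds some threshold; a schematic sketch of that idea is below. This is an illustration only: the threshold, array shapes and function names are hypothetical and not part of P&W's proprietary process.

```python
import numpy as np

def flag_hot_zones(zone_images, baseline, threshold_kelvin=0.5):
    """zone_images: dict zone_id -> 2-D array of temperatures during excitation.
    baseline:    dict zone_id -> 2-D array of temperatures before excitation.
    Returns the zone ids whose peak heating exceeds the threshold."""
    flagged = []
    for zone_id, frame in zone_images.items():
        rise = frame - baseline[zone_id]          # temperature rise per pixel
        if float(rise.max()) > threshold_kelvin:  # possible crack-face friction
            flagged.append(zone_id)
    return flagged
```

In the real process, flagged images are enhanced and judged by a trained inspector rather than accepted automatically.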
History In 2005, when TAI was initiated, P&W, following standard NDT industry practice, categorized TAI as a new and emerging technology, which allowed TAI to be performed without establishing a formal training program and certification requirements. In 2018, P&W continued to categorize TAI as a new and emerging technology, despite the manufacture and subsequent TAI inspection of over 9,000 fan blades. In the final report on the 2018 United Airlines Flight 1175 (UA1175) contained engine failure of its PW4000-112 series engine, in which the fractured fan blade was found to have had a rejectable indication at the previous TAI inspection that was not properly identified, the National Transportation Safety Board faulted P&W for this in its finding of the probable cause of the UA1175 incident. After this incident, P&W initiated an overinspection and reviewed the TAI inspection records for all 9,606 previously inspected PW4000 112-inch fan blades. During the overinspection, two fan blades that were in service at Korean Air and United Airlines had TAI indications that could not be resolved. Subsequent X-ray inspection of both revealed peening shot in the cavity in the area where the previous TAI indication had been reported. P&W also reported that between December 2004 and the time of the UA1175 incident in 2018, cracks had been detected in five PW4000 112-inch fan blades. One was identified visually and the other four were detected by TAI. On February 23, 2021, four days after a similar contained engine failure incident that occurred in another PW4000 engine on United Airlines Flight 328 (UA328), the U.S. Federal Aviation Administration (FAA) issued an Emergency Airworthiness Directive that required U.S. operators of airplanes equipped with Pratt & Whitney PW4000-112 engines to inspect these engines before further flight. After reviewing the available data and considering other safety factors, the FAA determined that operators must conduct a TAI inspection of the large titanium fan blades located at the front of each engine. The FAA noted that TAI technology can detect cracks on the interior surfaces of the hollow fan blades, or in areas that cannot be seen during a visual inspection. The previous inspection interval for this engine was 6,500 flight cycles. References 21st-century inventions Acoustics 2005 introductions Aircraft manufacturing Imaging Nondestructive testing
Thermal acoustic imaging
[ "Physics", "Materials_science", "Engineering" ]
822
[ "Aircraft manufacturing", "Classical mechanics", "Acoustics", "Nondestructive testing", "Materials testing", "Mechanical engineering by discipline", "Aerospace engineering" ]
68,352,514
https://en.wikipedia.org/wiki/John%20Edwin%20Field
John Edwin Field FRS (20 September 1936 – 21 October 2020) was a British experimental physicist whose research focused on the physics and chemistry of solids, particularly involving energetic phenomena and materials, covering a variety of topics from erosion to explosives. Following his undergraduate studies in physics at University College London, Field moved to the Cavendish Laboratory at the University of Cambridge, undertaking his PhD in the Physics and Chemistry of Solids Group. He remained at Cambridge for the rest of his career, which included terms as Head of the Physics and Chemistry of Solids Group and Deputy Head of the Cavendish Laboratory. He was also a Fellow of Magdalene College. His achievements were recognised by an OBE in 1987 and Fellowship of the Royal Society in 1994. References 1936 births 2020 deaths Officers of the Order of the British Empire Experimental physicists Alumni of University College London Alumni of the University of Cambridge Fellows of the Royal Society Fellows of Magdalene College, Cambridge
John Edwin Field
[ "Physics" ]
179
[ "Experimental physics", "Experimental physicists" ]
77,164,081
https://en.wikipedia.org/wiki/Tasmanite%20%28tektite%29
Tasmanites are tektites found in Tasmania, a regional form of australite, the most common type of tektite: glass of meteoritic origin, traditionally named for the geographic location of the find. Quite often, tasmanites appear in the literature under the name australites (from Tasmania), together with which they are included in the very broad category of tektites originating from the Australasian strewn field, the largest tektite strewn field on Earth. In the northern part of the strewn field, australites partially overlap and connect with part of the range of indochinites, and on the southern border they are present under the name tasmanites. In general, all of the listed regional tektites are included in the general class of indochinites-australites, sometimes referred to under the summary name Australasian tektites. Under normal lighting, tasmanites are most often opaque and have a dark brown, brownish-green or almost dull black color. Among the general part of tasmanites belonging to the southern branch of australites, there are varieties of regular "aerodynamic" shape (sometimes in the form of a small bowl), disc-shaped forms, or hollow balls. In some specimens, the regular shape has such a man-made appearance that it creates the impression of artificial origin. In addition to the regional part of the australite strewn field, tasmanites also include purely local tektites, characteristic exclusively of certain regions of Tasmania. Tasmanites were found in considerable quantities in an area of about 410 km² south of Queenstown in the vicinity of Mount Darwin and were called "Darwin glass". According to calculations (approximate quantity per unit of territory), the total weight of this variety of tasmanites should have been several thousand tons. History of the study Unlike other regional forms of impact tektites, Australian meteoritic glass has much more diverse forms, among which there are samples that even at first glance give the impression of man-made or high-tech objects. After the colonization of Australia, this circumstance attracted increased attention to them, making them an object of collecting and souvenir trade and, starting from the mid-19th century, of professional study. ...tektites were found in gold placers and other places on the Australian mainland, which amazed scientists with their unusual shape. Some of them resembled buttons, others were surprisingly similar to mushrooms, and others were like hourglasses. There were also hollow glass balls the size of an apple with a wall thickness of only 1 millimeter, as if some joker had blown some semblance of a soap bubble from natural glass! In January–February 1836, during his voyage around the world on the Beagle, Charles Darwin personally collected tektite samples during stops in South Australia and Tasmania. A hundred years later they were called australites and tasmanites. Darwin spent just over two weeks in Australia. On February 5, 1836, the Beagle arrived in Storm Bay and spent more than two days in the port of Hobart (in the southeast of the island). During his brief stay in Tasmania and South Australia, Darwin partly acquired and partly collected a small collection of local black glass. He can rightfully be considered a collector and systematizer of the most extensive field of tektites, the Australasian one. When the Beagle dropped anchor off the coast of Tasmania, Darwin, setting off on another excursion inland, unexpectedly discovered on the ground hollow balls of black glass, a little larger than a walnut.
After carefully examining them, he mistook them for volcanic bombs. However, there were no volcanoes nearby, as special geological surveys confirmed. It remained to assume that the balls had been brought there by nomadic natives. Darwin made an entry in his diary but later did not return to the question. The Tasmanian discovery of the black glass balls occurred in February 1836. Just at this time, the debate about the origin of moldavites, the only tektites known at that time, was in full swing in the Central European scientific world. However, Darwin knew nothing about this scientific discussion, nor about the various versions and assumptions of Czech and German scientists. It is for this reason that the newly discovered Tasmanian tektites did not attract much attention from the scientist. Considering the black glasses to be a purely geological object formed in the bowels of the earth, Darwin characterized them as a kind of "volcanic bomb" thrown out from the craters of volcanoes during an eruption. At the end of the 19th century, australites often became known as "obsidian bombs" or "blackfellows' buttons". In 1857, Charles Darwin also received several samples of natural black glass from the collection of Thomas Mitchell. Based on the similarity of the studied samples to obsidian, Darwin concluded that australites (tasmanites) are of volcanic origin. Important studies that sharply increased the interest of the scientific community in tektites were the generalizing theoretical works of the Austrian geologist Professor Eduard Suess. While studying moldavites, in 1900 he put forward the hypothesis of their meteoritic nature, and the term tektites came into scientific use. Using the example of the only European tektites, the Czech fossil glasses, Suess, without chemical analysis, came to the conclusion that they were of cosmic origin and were not related to the surrounding geological rocks. The basis for this conclusion was a visual comparison of several groups of samples of various minerals, including ferruginous meteorite fragments. One of the first scientists to study australites was Charles Fenner, who first encountered these tektites in 1907. He concluded that australites are of cosmic origin and are by nature fragments of glass meteorites. Almost all assumptions and scientific conclusions made regarding australite-indochinites are directly related to tasmanites, the main part of which belongs to a single Australo-Tasmanian dispersion arc; only the isolated southwestern tektites from the Darwin Crater are of purely local origin. Mineral source The Australasian tektite belt has an elongated "S" shape and is the largest on Earth both in terms of dispersion area and in the number of tektite samples found. In Australia alone, along with the island of Tasmania, several million particles of these glasses have been collected. They have been discovered and raised even from the bottom of the Indian and Pacific Oceans. Early versions of the origin of the australites were based on terrestrial explanations. The most common explanations for the appearance of the black fused glass were volcanoes, as well as forest fires, which are common in Australia. There was also a hypothesis of a fulgurite origin of australites, as the result of a lightning strike into sand or sandy (quartz) rocks. A new hypothesis about the origin of australites arose in the late 1960s based on data received from the American Surveyor 7 spacecraft, which landed in 1968 near the Tycho crater.
The samples of lunar soil it analyzed in this area turned out to be close in chemical composition to tektites, namely australites. This led to the emergence of a version of the lunar origin of the Australasian tektite field. The new theory, in particular, made it possible to explain the bizarre shape of the area within which the main finds of australites took place. A fairly clear S-shaped stripe of the dispersion field stretched from Madagascar through Australia and Indochina (indochinites) to the Philippines (philippinites). Professor D. Chapman of the Ames Research Center (Mountain View, California) was able to show, using computer modelling, that only a stream of glass splashes ejected from the Tycho crater at its formation, combined with the rotation of the Earth, could create a scattering streak of such an unusual shape. Moreover, part of the jet should have splashed out onto the surface of the Moon and left a bright trace on it, visible from Earth: the "Ross ray", which extends from the Tycho crater for thousands of kilometers and passes through the small Ross crater along the way. A separate problem for researchers was the estimated age of the australites. According to the surrounding layer of sedimentary rocks, in which australites are found en masse, the age of australites is no more than ten thousand years. By these data, the Australasian and African tektites are the youngest on Earth. Oral traditions of indigenous peoples in Australia and Ivory Coast indirectly supported these estimates: they traditionally endowed local tektites with magical properties and called them "moon stones", as if in legendary but still remembered times their ancestors had been contemporaries and witnesses of their "fall from the sky". However, potassium-argon dating gave completely different dates for this grandiose event. According to dozens of different samples, australites fell to Earth about 700 thousand years ago. The discrepancy between the two ages is two orders of magnitude. Regarding the radiometric measurements, selenologists could not reach a consensus for a long time; the chronological estimates looked implausibly inflated, as if the real picture were distorted by some factor unaccounted for or unknown to modern science. Therefore, in the 1970s, by default, the age of 10 thousand years for the australites was recognized as more reliable and tied to the supposed moment of formation of the lunar crater Tycho. However, after a decade and a half, all the contradictions resolved themselves, when the lunar theory of the origin of australites was refuted with the help of more detailed (chemical and radiochemical) analysis of lunar rocks. Although various versions of the origin of australites are still in circulation, most scientists are inclined to believe that australites were formed and scattered as a result of the collision of a large asteroid or comet with the surface of the Earth. As a result of a powerful explosion, many hot particles were thrown into the stratosphere, including glass particles containing a large amount of impurities: a mixture of planetary soil with meteoritic matter. Australites most likely received their streamlined aerodynamic shapes during the secondary re-entry of debris into the Earth's atmosphere, when the pieces of glass, still in a molten state, flew at high speed.
To clarify the hypothesis about the cosmic nature of tektites (australites), in 1962 a theoretical conjecture was put forward about a giant astrobleme at the final point of the Australasian tektite dispersion field. Half a century later, in 2006–2009, satellite surveys of Antarctica (Wilkes Land) actually revealed a huge crater hidden under a deep layer of ice, the trace of one of the largest meteorite collisions with the Earth. According to scientists, its diameter is about 240 kilometers. This huge crater lies at the end point of the Australasian-Tasmanian arc, which is the main area for tektites in the Southern Hemisphere. Trying to confirm or refute the cosmic hypothesis of the origin of australites, the American physicists Chapman and Larson conducted a series of experiments attempting to form tektites artificially using various forms of ablation. During the experiments, it proved possible to reproduce in the smallest detail almost all existing forms of australites, including the disk-shaped one, and to obtain the aerodynamic relief of the rings on the front surface. Based on the results of the first experiments, a positive conclusion was drawn about the extraterrestrial origin of the australites, but later, in repeated studies, a reservation was added that the source could only be near space, within the Earth-Moon system. The scattering field of most australites lies in southern Australia, rarely extending above 25 degrees (south) latitude. Judging by their similar age and composition, the australites belong to the southern margin of the largest known Australasian scatter field, extending from Indochina (indochinites) to Tasmania (tasmanites, Darwin glass). In turn, tasmanites represent the extreme southeastern point of the range of australites. Beyond Tasmania, searches for tektites could only be continued at the bottom of the Pacific Ocean. The Australasian tektite field is between 610 and 750 thousand years old and may be the result of a major catastrophe on Wilkes Land, as well as of a number of smaller regional catastrophes, for example on the Bolaven Plateau about 790 thousand years ago, which closed off the northern part of the distribution area of australites. Mineral properties Based on their appearance and chemical composition, tasmanites are divided into two clearly differentiated types. The first belong to the southern australites and most often have the following basic shapes: sphere, oval, boat, dumbbell and drop. The second, the Darwin glasses, differ both in shape, which is much more random and fragmentary, and in color, reaching green or even light green, completely uncharacteristic of australites. Most tektites known on Earth bear obvious traces of their passage through the atmosphere (ablation), both on the surface and in shape. This also applies to the australites-tasmanites. A number of studies of "flange"-shaped samples have shown that exactly such an aerodynamic profile can be obtained from an initial glass sphere entering the Earth's atmosphere at cosmic speed. When passing through the dense layers, the frontal part of the sphere melts, and the oncoming air flow flattens the sphere, turning it into something like a "button". Other forms of tektites can be explained in a similar way. The greatest difficulty in modeling the situation is the need to distinguish between the effects of terrestrial and cosmic factors, since the surface structure of tektites sometimes seems too complex.
References See also Australite Darwin glass Indochinite Billitonite Bediasite Libyan desert glass Moldavite Tektites Glass in nature Impact event minerals Oxide minerals Planetary science Rocks Meteorites
Tasmanite (tektite)
[ "Physics", "Astronomy" ]
2,849
[ "Matter", "Physical objects", "Planetary science", "Rocks", "Astronomical sub-disciplines" ]
78,491,645
https://en.wikipedia.org/wiki/Water%20jacket%20furnace%20%28metallurgy%29
A water jacket furnace is a type of blast furnace used to smelt non-ferrous metallic ores, most typically ores of copper, lead, or silver-lead. It takes its name from the water jacket arrangement used to cool the lower furnace casing and prolong the life of the furnace hearth. It is sometimes referred to as a water-jacketed blast furnace, copper blast furnace, or lead blast furnace. The water jacket furnace is now a virtually obsolete technology for copper smelting, having been almost entirely replaced by flash smelting of copper ore concentrates. It remains in use, in a modified form, for lead smelting. The terminology is also used for an indirect heating device used in the petroleum oil and gas industry, generally known as a water jacket heater or water bath heater, which should not be confused with the metallurgical water jacket furnace. History In the mid-19th century, most non-ferrous smelting was done using reverberatory furnaces. Blast furnaces were used to smelt sulphide copper ore in the Harz Mountains of Germany. The mines at Burra in South Australia tried to adopt the technology in 1847, but without success, because the German furnace design, using horse-powered bellows to provide the air blast, was not well suited to their carbonate copper ore. The 'water jacket' blast furnace design for non-ferrous smelting arose in North America during the 1870s, and an alternative name for it, in Australia, was 'American water jacket furnace'. The design evolved from earlier German cupola furnace designs, with the distinguishing innovation being a well-controlled cooling of the furnace shell. Water jacket furnaces began to be common in the later part of the century, from the 1880s, particularly for smelting sulphide ores. Unlike reverberatory furnaces, water jacket furnaces could be made in a factory and then assembled at site. Not all situations and ores were well suited to water jacket furnace operation. Some attempts to apply them were costly failures, such as at the North Lyell mine, at Crotty, Tasmania, and Lloyd's Mine at Burraga and the Overflow Mine at Bobadah, both in New South Wales. However, the furnaces were hugely successful when well applied, such as at the vast Anaconda Copper Mine in Butte, Montana, the Mt Lyell Mine in Tasmania, and at many other mines. Water jacket furnaces only ever partially displaced reverberatory furnaces in the copper industry, until both furnace types were displaced almost entirely by flash smelting, between around 1949 and 1980. Technology and application Smelting Lead and silver-lead ore smelting A water jacket furnace can be used to reduce non-ferrous oxide ores mixed with coke, to produce metal and slag. When smelting lead, the feedstock is lead oxide, coke and fluxes. When smelting lead sulphide ores, the ore is first sintered to form a lead oxide sinter. Lead and silver ores often occur in the same ore body. Separating silver metal from the crude lead produced by a furnace requires a second process of refining, such as the Parkes process. When smelting lead, there was the added complication that measures were necessary to protect workers from harmful lead vapours. Copper ore smelting The pyrometallurgical process of a water jacket furnace, when smelting copper sulphide ores, was fundamentally different from that of a conventional blast furnace used to make iron, or a water jacket furnace used to make lead. The conventional blast furnace process produces molten metal by reducing the ore and separating out the silica as slag.
Water jacket furnaces, when smelting sulphide copper ores, used an oxidation reaction that produces molten copper matte, which must be further treated in a convertor (similar in concept to a Bessemer convertor) or reverberatory furnace to produce copper metal. The product of that conversion process is known as blister copper. If a smelter did not have a convertor, the matte was poured into moulds and allowed to solidify. The smelting of sulphide copper ores in a water jacket furnace can be viewed as concentrating the non-ferrous metallic portion of the ore, as matte, and separating out some impurities, such as silica and iron, in the mainly iron silicate slag, and much of the sulphur, as sulphur dioxide in the off-gas. The molten slag and matte separate, with the denser molten matte accumulating at the bottom of the furnace, with a layer of molten slag immediately above it. Depending upon the composition of the ore being smelted, the choice of a suitable flux was particularly important. Fluxes used could be limestone, iron oxide, or silica (quartz), depending upon what was needed to create slag and to minimise the loss of copper with that slag. When both 'basic' (oxide or carbonate) ores and 'siliceous' sulphide ores were available, feeding the furnaces with a mixture of the two copper ore types reduced the amount of other fluxes needing to be added. Advantages and disadvantages Water jacket furnaces had some advantages over reverberatory furnaces. Fuel consumption was lower. Sulphide copper ores could be smelted without first roasting the ore. Production per furnace was generally higher. Low grade ore could be smelted, because the water jacket furnace could more readily discharge large amounts of molten slag. Because solidified slag is unsuitable for backfilling the voids (stopes) created by underground mining, disposal of large volumes of slag, from smelting of low grade ores, was a significant problem. Some mines treated molten slag with water to create granulated slag, which could be used to backfill stopes. Otherwise, the molten slag was dumped and large slag dumps accumulated near the smelter, becoming a lasting legacy of smelting operations. Another advantage of the water jacket furnace was that, while out of service, the bottom of the furnace, if so designed, could be 'dropped' for cleaning or repair. Over time, a significant amount of copper material would accumulate in the bottom of a reverberatory furnace, which could not be accessed without effectively demolishing the furnace. A disadvantage of the water jacket furnace was that it could not handle fine ore well and so was better suited to lump ore. Fines tended either to choke the furnace or to be blown into the flue by the air blast. Eventually, only the second problem would be solved, by capturing the flue dust and recycling it. An initial disadvantage of the water jacket furnace was its use of coke as fuel. It could not use the cheaper fuels, such as firewood or fine raw coal, that could be used to fire a reverberatory furnace. That disadvantage was offset by lower overall fuel consumption. In the first years of the 20th century, the perfection of a technique known as pyritic smelting greatly reduced coke consumption, when smelting suitable ores such as chalcopyrite, by optimizing the use of the sulphur and iron in the ore itself as a heat-generating fuel. The iron oxide so produced combined with molten silica to form an iron silicate slag.
Water jacket furnaces needed blowers and a cooling water supply, and were more complex to build and operate than the reverberatory furnaces. However, they were also more versatile, being a readily scalable technology; large or small furnaces could be made, and would operate effectively. Water jacket furnaces, like other blast furnaces, are best operated continuously, and smelters that used them had to work continuously too. However, this was in one way an advantage over reverberatory furnaces, which operated as a batch process with a typical duration of around 24 hours. The associated cycles of heating and cooling of a reverberatory furnace led, over time, to damage of its masonry and, as a result, to higher maintenance and downtime. In contrast, a well-operated water jacket furnace might achieve years of operation before needing new fire bricks. Differences from conventional blast furnace design The internal operating temperature of the water jacket furnace is lower than that of a blast furnace used to make iron, and the process does not depend upon the formation of a bosh shell, which is critical in the operation of a blast furnace making iron. Conventional blast furnaces used for smelting iron ore use a hot blast. Water jacket furnaces most commonly used a cold air blast, typically provided by a positive-displacement blower, such as a Roots blower. Preheating of the air blast was used on some water jacket furnaces, but preheating had no advantage when the furnace was being used for pyritic smelting of copper ore. The horizontal cross-section of water jacket furnaces was usually rectangular (although circular and oval cross-section furnaces did exist), whereas conventional blast furnaces making iron always have a circular horizontal cross-section. In some larger furnace designs, molten metal or matte and molten slag were tapped at the opposite narrow ends of the rectangular base. Water jacket furnaces typically have a higher number of smaller tuyeres than a conventional iron-making blast furnace. Typically, feedstock was fed into a water jacket furnace through a sliding door arrangement in the side of the upper furnace structure, but not via the top itself as in a blast furnace for iron. At the top of a water jacket furnace was a fixed flue. The off-gas from copper smelting was not suitable for recycling as a fuel, as is done in a blast furnace making iron (blast furnace gas). However, the sulphur dioxide in the flue gases, from those furnaces using the pyritic smelting process, was concentrated enough that it could be used to make sulphuric acid, which reduced the level of air pollution created by the smelter. If the flue gas needed to be dispersed to the atmosphere, a tall chimney was necessary to ensure that the noxious gases from smelting, largely sulphur dioxide, were carried far from the smelter and any nearby settlement. Dust carried in the flue gas was often collected, as it had a significant metallic content. A furnace smelting lead uses a reduction reaction; it generates an off-gas containing carbon monoxide, similar to blast furnace gas, which needs to be disposed of, usually by flaring. In contrast to a conventional blast furnace used to make iron, lead metal or copper matte and slag were run off more or less continuously. There was a separate slag spout to run off slag. The molten copper matte run from such a furnace, under this arrangement, still contained a proportion of slag.
The copper matte was run first into a vessel known as a 'settler', to allow any slag to accumulate on the surface, from where it could overflow from the settler. From tap holes at the bottom of the settler, the molten matte was run into ladles, which were used to transport the matte to a convertor, where it was processed to make blister copper. The 'settler' presented a hazard to furnace operators. Other applications A variant of the water jacket furnace was used to smelt lead-zinc ores using the Imperial Smelting Process. In that case, the furnace was completely sealed, to allow the zinc to be recovered, from the flue gases, in its vapour phase. Water jacket furnaces were used to reprocess copper smelter slag that still contained a significant amount of copper, especially slag from smelting high-grade copper ore in reverberatory furnaces. Although rarely done, small water jacket furnaces have been used to recover gold from quartz rock (particularly if the ore was very rich in gold, or if sulphide ores of other metals were also present) as an alternative to crushing the rock and extracting the gold by other methods. However, it was a very inefficient method of extracting gold. Where gold and silver were present in copper ores, the precious metals ended up in the copper matte produced by a water jacket furnace. The precious metals could later be separated from blister copper, using electrolytic copper refining, and delivered in the form of doré bullion. Demise (copper smelting) As the average ore grades of copper mines declined, direct smelting of ore became uneconomic. Smelters such as the Great Cobar mine were struggling to achieve economic operation as early as 1912, despite buoyant copper prices. Smelting of low-grade copper ore at the mine site was largely superseded by a process consisting of copper ore concentration, especially using froth flotation, followed by smelting of the ore concentrates. The water jacket furnace was less well suited to that new regime (especially the large furnaces that had been used to smelt low grade copper ores) and, after the introduction of flash smelting from around 1949, had fallen out of favour by 1980. Modern lead smelting Water jacket furnaces are now a largely forgotten technology for copper smelting, but remain in use, in a modified form, for lead smelting. Modern lead furnaces are more commonly referred to as lead blast furnaces, but retain most features of water jacket furnaces. Gallery See also Blast furnace Reverberatory furnace Flash smelting ISASMELT References External links Modern Copper Smelting - Lectures by Donald M. Levy, M.Sc., Assoc. R.S.M - University of Birmingham (1912) Metallurgical furnaces Blast furnaces
Water jacket furnace (metallurgy)
[ "Chemistry", "Materials_science" ]
2,892
[ "Metallurgy", "Blast furnaces", "History of metallurgy", "Metallurgical furnaces" ]
78,494,871
https://en.wikipedia.org/wiki/Hywind
Hywind was the first MW-class floating wind turbine concept, developed by StatoilHydro (now Equinor). It has a rated power of 2.3 megawatts (MW) and is mounted on a spar foundation derived from oil platforms. The basic development was done by Norsk Hydro, hence the name. The Hywind turbines are designed to be installed offshore in water depths of 120–700 metres. The first pilot Hywind turbine was installed and commissioned in the North Sea, south-west of Karmøy, south-west Norway, in September 2009. In 2019, the turbine was acquired by Unitech Offshore and renamed the Unitech Zefyros. It will be used for development and testing of new technologies, and as a hub to connect other turbines in the Marine Energy Test Centre (METCentre) test site for offshore wind turbines. Following the single turbine demonstration, the Hywind Scotland and Hywind Tampen wind farms have been constructed. Device concept The Hywind platform is a spar-buoy type floating foundation, a slender vertical cylinder that extends below the sea surface. This is anchored to the seabed with three cables, with slack moorings allowing the turbine to move sideways in surge and sway. The foundation is, however, designed with the centre of gravity below the sea surface, to prevent the turbine from pitching and rolling or from heaving up and down, any of which could mean the blades hitting the water. This type of foundation has been used for oil and gas platforms for many years. It is similar to that used in the Troll B & C platforms. The position of Hywind is monitored using GPS. A standard Siemens Wind Power offshore wind turbine is mounted on top of the foundation. The Hywind concept was originally developed by marine engineer Dagfinn Sveen in 2001, at Norsk Hydro's new energy department. The concept was patented and industrial relations were established with Siemens, among others. When Statoil (now Equinor) took over Norsk Hydro's oil division in 2008, Hywind was also transferred. Demo-project The Hywind demo project consists of the wind turbine, the floating foundation and anchors, as well as a connection cable to shore. The floating structure was developed, built, and installed by the French engineering company Technip, while the actual construction was carried out by the Finnish subsidiary Technip Pori. Of the total weight of approx. 5,300 tonnes, approximately 3,500 tonnes consists of ballast, mainly olivine with a density of 2.6 t/m3. The wind turbine is a standard Siemens 2.3 MW wind turbine with Statoil's proprietary control system. Nexans Norway supplied and installed the 13-kilometre cable that carries the power to the local grid company Haugaland Kraft. The cable comes ashore near Skudeneshavn on the southern tip of Karmøy. The investment to build and deploy amounted to almost NOK 400 million (around US$62 million), of which NOK 59 million was support from the Norwegian government through Enova. Statoil receives income from electricity production, but this is not the primary focus of the project. The main purpose is to gain experience of full-scale power production from floating wind turbines, one of several of Statoil's focus areas within renewable energy. Equinor estimates that floating wind turbines in the North Sea will deliver the equivalent of 4,000 full load hours, which corresponds to a capacity factor of about 46%. For a 2.3 MW wind turbine, as in the pilot project, this would mean an annual production of 9.2 GWh.
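The production estimate quoted above can be checked with a few lines of arithmetic. A minimal sketch in Python; the only inputs are the figures from the text and the 8,760 hours in a year:

rated_mw = 2.3
full_load_hours = 4000.0

annual_gwh = rated_mw * full_load_hours / 1000.0   # 2.3 MW x 4,000 h = 9.2 GWh
capacity_factor = full_load_hours / 8760.0         # about 0.457, i.e. roughly 46%
print(annual_gwh, "GWh,", round(capacity_factor * 100), "%")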
In order to obtain the best possible wind data, Statoil entered into a collaboration with the Norwegian Meteorological Institute and Kjeller Vindteknikk for measuring and forecasting wind and waves. The Norwegian Meteorological Institute set up special versions of its numerical weather models that include measurement data from Statoil's 100-metre-high wind measuring mast on Karmøy, as well as from the wave and current measuring buoy at Hywind. In addition, a LIDAR on Utsira will be used to assess the quality of the wind forecasts. As the wind turbine is floating, the wind and waves will cause movement in all six degrees of freedom. The movement leads to complicated dynamic loads on the wind turbine and tower, and is one of the most important test areas for the project, as such loads are difficult to calculate correctly with computer-aided design tools. In the autumn of 2005, model tests were carried out at Marintek (now SINTEF) in Trondheim with a 1:47 scale model. The First Turbine The floating foundation was built in Finland, then towed, floating horizontally, to Norway. In the Åmøyfjord near Stavanger, the spar foundation was rotated to vertical on 23 April 2009 and ballasted. The wind turbine was then mounted on top of the floating structure. On 6 June 2009, the entire structure was towed approximately 10 km south west of Karmøy, where it was anchored with three anchors at a depth of approximately 220 metres. This was initially for a two-year test deployment. The long submarine power transmission cable was installed in July 2009, and a system test including rotor blades and initial power transmission was conducted shortly thereafter. The turbine was connected to the grid in August, with the official inauguration on 8 September 2009. In 2010, the first full year of trial operation, the turbine delivered 7.3 GWh against the expected 3.5 GWh. The turbine was exposed to waves up to 11 m and proved more stable than expected. The floating installation does not place greater loads on the turbine than an onshore installation, and vibration loads are reduced compared to land-based turbines. By 2016, the turbine had produced 50 GWh, an overall capacity factor of 41%. The turbine survived wind speeds of 40 m/s and high waves. In 2019, the turbine was sold to Unitech Offshore, with the expectation of 10 more years of production and tests. In 2022, Unitech mounted a helicopter pad on the turbine, a first for a floating offshore wind turbine. Full-scale measurements The Hywind demo was extensively instrumented to see how it behaved compared to calculations made with time-domain simulation software. More than 200 sensors were used. Some of the conclusions are that, with a few exceptions, it is possible to calculate the movements well. The exceptions are related to the anchoring, where the response for long wave periods is much larger than in the analyses, while in the wave period range the measured response is smaller than what the analyses show. Furthermore, the statistical differences between simulated and measured wind turbine parameters are relatively large, but within acceptable limits given the uncertainties. Further development Due to the success of the project, Equinor moved to the next phase, where the focus was on cost reduction and on increasing the number of possible site options by reducing the minimum depth to 100 metres or less. The next phase was a farm with three to five turbines, with both Scotland and Maine considered as possible locations in 2015-16.
The Hywind 2 project in Maine was abandoned, with Statoil citing legislative changes as the reason. A new bill (LD 1472) would allow the University of Maine to bid in a second-round competitive auction for an offshore wind project in the state's waters. Instead, Statoil built five 6 MW floating turbines, deployed in the Hywind Scotland project in 2017 at 70 per cent lower cost. References Offshore wind farms in the North Sea Equinor Renewable energy in Norway Floating wind turbines
Hywind
[ "Engineering" ]
1,576
[ "Floating wind turbines", "Offshore engineering" ]
74,149,063
https://en.wikipedia.org/wiki/Wrinklon
A wrinklon is a type of quasiparticle introduced in the study of wrinkling behavior in thin sheet materials, such as graphene or fabric. It is a localized excitation corresponding to wrinkles in a constrained two-dimensional system. It represents a localized region where two wrinkles in the material merge into one, and forms part of the pattern seen when the material wrinkles. The term "wrinklon" is derived from "wrinkle" and the suffix "-on", the latter commonly used in physics to denote quasiparticles, such as the "phonon" or "polaron". The concept of wrinklons aids in understanding and describing the complex wrinkling patterns observed in a variety of materials. This understanding could prove useful in fields such as materials science and nanotechnology, particularly in the study and development of two-dimensional materials like graphene. Further studies have expanded the understanding of wrinklons, demonstrating that the behavior of these wrinkles in thin films, such as graphene, can differ depending on the substrate they are on. For instance, when graphene is on a compliant polymer substrate, the properties of the wrinklons change with the thickness of the graphene. This suggests that the characteristics of the substrate play a significant role in wrinklon formation and behavior, which is important to consider in various applications of thin film materials. See also Straintronics References Physical phenomena Condensed matter physics Quantum phases Quasiparticles Mesoscopic physics
Wrinklon
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
312
[ "Quantum phases", "Matter", "Physical phenomena", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Quasiparticles", "Mesoscopic physics", "Subatomic particles" ]
74,150,111
https://en.wikipedia.org/wiki/High-temperature%20oxidation
High-temperature oxidation refers to a scale-forming oxidation process involving a metallic object and atmospheric oxygen that produces corrosion at elevated temperatures. High-temperature oxidation is a kind of high-temperature corrosion. Other kinds of high-temperature corrosion include high-temperature sulfidation and carburization. High-temperature oxidation and other corrosion types are commonly modelled using the Deal-Grove model to account for diffusion and reaction processes. Mechanism of oxidation High-temperature oxidation generally occurs via the following chemical reaction between oxygen (O2) and a metal M: n M + (k/2) O2 → MnOk. According to Wagner's theory of oxidation, the oxidation rate is controlled by the partial ionic and electronic conductivities of the oxide and by their dependence on the chemical potential of the metal or oxygen in the oxide. References Redox
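In the Deal-Grove picture, the scale thickness x grows according to x^2 + A*x = B*(t + tau), combining a linear (reaction-limited) early regime with a parabolic (diffusion-limited) later regime. A minimal sketch of this growth law in Python; the constants A, B and tau below are illustrative assumptions, not measured values for any particular metal:

import math

def deal_grove_thickness(t, A, B, tau=0.0):
    # Solves x**2 + A*x = B*(t + tau) for the scale thickness x(t).
    return (A / 2.0) * (math.sqrt(1.0 + (t + tau) / (A * A / (4.0 * B))) - 1.0)

A, B = 0.165, 0.0117  # assumed rate constants (thickness in um, time in h)
for t in (1, 4, 16, 64):
    print(t, "h ->", round(deal_grove_thickness(t, A, B), 4), "um")

At small t the thickness grows roughly linearly (rate B/A), while at large t it approaches the parabolic law x ≈ sqrt(B*t), which is the diffusion-controlled behavior Wagner's theory addresses.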
High-temperature oxidation
[ "Chemistry" ]
161
[ "Electrochemistry", "Redox", "nan" ]
74,151,612
https://en.wikipedia.org/wiki/Stein-Rosenberg%20theorem
The Stein-Rosenberg theorem, proved in 1948, states that under certain premises, the Jacobi method and the Gauss-Seidel method are either both convergent, or both divergent. If they are convergent, then the Gauss-Seidel method is asymptotically faster than the Jacobi method. Statement Let A = (a_ij) be a real n × n matrix. Let ρ(M) denote the spectral radius of a matrix M. Let T_J and T_GS be the iteration matrices arising from the matrix splittings for the Jacobi method and the Gauss-Seidel method respectively. Theorem: If a_ij ≤ 0 for i ≠ j and a_ii > 0 for i = 1, …, n, then one and only one of the following mutually exclusive relations is valid: 1. ρ(T_J) = ρ(T_GS) = 0. 2. 0 < ρ(T_GS) < ρ(T_J) < 1. 3. ρ(T_J) = ρ(T_GS) = 1. 4. 1 < ρ(T_J) < ρ(T_GS). Proof and applications The proof uses the Perron-Frobenius theorem for non-negative matrices. Its proof can be found in Richard S. Varga's 1962 book Matrix Iterative Analysis. In the words of Richard Varga: the Stein-Rosenberg theorem gives us our first comparison theorem for two different iterative methods. Interpreted in a more practical way, not only is the point Gauss-Seidel iterative method computationally more convenient to use (because of storage requirements) than the point Jacobi iterative method, but it is also asymptotically faster when the Jacobi matrix is non-negative. Employing more hypotheses on the matrix A, one can even give quantitative results. For example, under certain conditions one can state that the Gauss-Seidel method is asymptotically twice as fast as the Jacobi iteration, in the sense that ρ(T_GS) = ρ(T_J)². References Theorems in linear algebra Numerical linear algebra Relaxation (iterative methods)
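The dichotomy can be observed numerically by computing the two spectral radii for a small matrix satisfying the sign conditions of the theorem. A minimal sketch in Python with NumPy; the example matrix is an arbitrary illustrative choice:

import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])   # a_ii > 0, a_ij <= 0 off the diagonal

D = np.diag(np.diag(A))
L = -np.tril(A, -1)                  # strictly lower triangular part of -A
U = -np.triu(A, 1)                   # strictly upper triangular part of -A

T_J = np.linalg.inv(D) @ (L + U)     # Jacobi iteration matrix
T_GS = np.linalg.inv(D - L) @ U      # Gauss-Seidel iteration matrix

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(rho(T_J), rho(T_GS))           # expect 0 < rho(T_GS) < rho(T_J) < 1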
Stein-Rosenberg theorem
[ "Mathematics" ]
317
[ "Theorems in algebra", "Theorems in linear algebra" ]
74,155,230
https://en.wikipedia.org/wiki/Tissue%20membrane
A tissue membrane is a thin layer or sheet of cells that covers the outside of the body (for example, skin), the organs (for example, pericardium), internal passageways that lead to the exterior of the body (for example, mucosa of stomach), and the lining of the moveable joint cavities. There are two basic types of tissue membranes: connective tissue and epithelial membranes. Connective tissue membrane The connective tissue membrane is formed solely from connective tissue. These membranes encapsulate organs, such as the kidneys, and line our movable joints. A synovial membrane is a type of connective tissue membrane that lines the cavity of a freely movable joint. For example, synovial membranes surround the joints of the shoulder, elbow, and knee. Fibroblasts in the inner layer of the synovial membrane release hyaluronan into the joint cavity. The hyaluronan effectively traps available water to form the synovial fluid, a natural lubricant that enables the bones of a joint to move freely against one another without much friction. This synovial fluid readily exchanges water and nutrients with blood, as do all body fluids. Epithelial membrane The epithelial membrane is composed of epithelium attached to a layer of connective tissue, for example, skin. The mucous membrane is also a composite of connective and epithelial tissues. Sometimes called mucosae, these epithelial membranes line the body cavities and hollow passageways that open to the external environment, and include the digestive, respiratory, excretory, and reproductive tracts. Mucus, produced by the epithelial exocrine glands, covers the epithelial layer. The underlying connective tissue, called the lamina propria (literally "own layer"), helps support the fragile epithelial layer. A serous membrane is an epithelial membrane composed of mesodermally derived epithelium called the mesothelium that is supported by connective tissue. These membranes line the coelomic cavities of the body, that is, those cavities that do not open to the outside, and they cover the organs located within those cavities. They are essentially membranous bags, with mesothelium lining the inside and connective tissue on the outside. Serous fluid secreted by the cells of the thin squamous mesothelium lubricates the membrane and reduces abrasion and friction between organs. Serous membranes are identified according to their locations. Three serous membranes line the thoracic cavity: the two pleurae that cover the lungs and the pericardium that covers the heart. A fourth, the peritoneum, is the serous membrane in the abdominal cavity that covers abdominal organs and forms double sheets of mesenteries that suspend many of the digestive organs. The skin is an epithelial membrane also called the cutaneous membrane. It is a stratified squamous epithelial membrane resting on top of connective tissue. The apical surface of this membrane is exposed to the external environment and is covered with dead, keratinized cells that help protect the body from desiccation and pathogens. References Membrane biology Tissues (biology)
Tissue membrane
[ "Chemistry" ]
713
[ "Membrane biology", "Molecular biology" ]
74,158,384
https://en.wikipedia.org/wiki/Short%20circuit%20ratio%20%28electrical%20grid%29
In an electrical grid, the short circuit ratio (or SCR) is the ratio of the short circuit apparent power (SCMVA), in the case of a line-line-line-ground (3LG) fault at the location in the grid where some generator is connected, to the power rating of the generator itself (GMW). Since the power that can be delivered by the grid varies by location, a location is frequently indicated, for example the point of interconnection (POI): SCR = SCMVA_POI / GMW. SCR is used to quantify the system strength of the grid (its ability to deal with changes in active and reactive power injection and consumption). On a simplified level, a high SCR indicates that the particular generator represents a small portion of the power available at the point of its connection to the grid, and therefore the generator's problems cannot affect the grid in a significant way. SCMVA is defined as the product of the voltage before the 3LG fault and the current that would flow after the fault (this worst-case combination will not happen in practice, but provides a useful estimation of the capacity of the circuit). SCMVA is also called the short circuit level (SCL), although sometimes the term SCL is used to designate just the short-circuit current. Grid strength The term grid strength (also system strength) is used to describe the resiliency of the grid to small changes in the vicinity of a grid location ("grid stiffness"). From the side of an electrical generator, the system strength is related to the changes of voltage the generator encounters on its terminals as the generator's current injection varies. Therefore, the quantification of the system strength can be done by finding the equivalent (Thévenin) electrical impedance of the system as observed from these terminals (the strength is inversely proportional to this impedance). SCR and its variations provide a convenient way to calculate this impedance under normal or contingency conditions (these estimates are not intended for the actual short-circuit state). Strong grids provide a reliable reference for power sources to synchronize to. In a very stiff system the voltage does not change with variations of the power injected by a particular generator, making its control simpler. In a traditional grid dominated by synchronous generators, a strong grid with SCR greater than 3.0 will have the desired voltage stability and active power reserves. A weak grid (with SCR values between 2.0 and 3.0) can exhibit voltage instability and control problems. A grid with SCR below 2.0 is very weak. Importance of overcurrent Grid strength is also important for its overcurrent capabilities, which are essential for power system operations. Lack of overcurrent capability (low SCR) in a weak grid creates a multitude of problems, including: transients during large load changes will cause large variations of the grid voltage, causing problems with the loads (e.g., some motors might not be able to start in the undervoltage condition); the grid protection devices are designed to be triggered by a sufficient level of overcurrent, and in a weak system the short circuit current might be hard to distinguish from a normal transient overcurrent encountered during load changes; during a black start operation after a power outage, a large inrush current might be needed to energize the system components. For example, if some loads in a weak system remain connected, an inverter-based resource might not be able to start.
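A minimal sketch of the SCR computation and the strength classification quoted above, in Python; the numeric inputs in the example are assumed values for illustration:

def short_circuit_ratio(sc_mva_poi: float, gen_mw: float) -> float:
    # SCR = short-circuit apparent power at the POI / generator rating.
    return sc_mva_poi / gen_mw

def classify(scr: float) -> str:
    # Thresholds as given in the text: > 3 strong, 2-3 weak, < 2 very weak.
    if scr > 3.0:
        return "strong"
    return "weak" if scr >= 2.0 else "very weak"

scr = short_circuit_ratio(sc_mva_poi=450.0, gen_mw=100.0)  # assumed figures
print(scr, classify(scr))  # 4.5 strong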
Presence of inverter-based resources Large penetration of inverter-based resources (IBRs) reduces the short circuit level: a typical synchronous generator can deliver a significant overcurrent, 2-5 p.u., for a relatively long time (minutes), while the component limitations of the IBRs result in overcurrent limits of less than 2 p.u. (usually 1.1-1.2 p.u.). The original SCR definition above was intended for a system with predominantly synchronous generation, so multiple alternative metrics, including weighted short circuit ratio (WSCR), composite short circuit ratio (CSCR), equivalent circuit short circuit ratio (ESCR), and short circuit ratio with interaction factors (SCRIF), have been proposed for grids with multiple adjacent IBRs to avoid an overestimation of the grid strength (an IBR relies on grid strength to synchronize its operation and does not have much overcurrent capacity). Henderson et al. argue that in the case of IBRs the SCR and system strength are in fact decoupled and propose a new metric, grid strength impedance. Integrating renewable energy sources often raises concerns about system strength, which measures the sensitivity of the system variables to disturbances; the ability of different components in a power system to perform effectively depends on it. The short circuit ratio (SCR) is an indicator of the strength of a network bus relative to the rated power of a device and is frequently used as a measure of system strength. A higher SCR value indicates a stronger system, meaning that the impact of disturbances on voltage and other variables will be smaller. A strong system is defined as having an SCR above three; weak systems have SCRs between two and three, and very weak systems below two. Power electronic applications often encounter issues related to SCR, particularly in renewable energy systems that use power converters to connect to power grids. When connecting HVDC/FACTS devices based on current source converters to weak AC systems (SCR of less than three), particular technologies must be employed. For HVDC, voltage-source-based converters or capacitor-commutated converters are utilized in applications with SCR near one. Failing to use these technologies will require special studies to determine the impact and measures to prevent or minimize the adverse effects, as low levels of SCR can cause problems such as high over-voltages, low-frequency resonances, and instability in control systems. Wind farms are commonly linked to less robust network sections away from the main power consumption areas. Problems with voltage stability that arise from incorporating large-scale wind power into vulnerable systems are crucial issues that require attention. Some wind turbines have specific minimum system strength criteria. GE indicates that the standard parameters of their wind turbine model are appropriate for systems with a short circuit ratio (SCR) of five or higher. However, when connecting to weaker systems, it is necessary to carry out further analysis to guarantee that the model parameters are adequately adjusted. Specifically designed control methods for wind turbines or dynamic reactive compensation devices, such as a STATCOM, are required to ensure optimal performance. Example An experience at ERCOT in the early 21st century provides a prime example of how wind turbine performance is affected by weak system strength.
The wind power plant (WPP), linked to the ERCOT grid through two 69 kV transmission lines, worked efficiently when the SCR was around 4 during normal operations. However, when one of the 69 kV lines was disconnected, the SCR dropped to 2 or less, leading to unfavorable, poorly damped, or undamped voltage oscillations that were documented by PMUs at the point of interconnection (POI) of the wind plant. After a thorough investigation, it was determined that the aggressive voltage control used by the WPP was not appropriate for a weak grid environment and was the primary cause of the oscillatory response. The oscillation occurred because of the low short circuit level seen by the wind generator voltage controller, combined with the high voltage control gain. Compared to a normal grid with high SCR, the closed-loop voltage control has a faster response under weak grid conditions. To replicate the oscillatory response, the event was simulated using a detailed dynamic model representing the WPP. Impact on grid The SCR can be calculated for each point on an electrical grid. A point on a grid whose machines have an SCR above a threshold (between 1 and 1.5) is less vulnerable to voltage instability; such a grid is known as a strong grid or power system. A power system (grid) with a lower SCR is more vulnerable to grid voltage instability; such a grid or system is known as a weak grid or a weak power system. Grid strength can be increased by installing synchronous condensers. References Sources Power engineering Electrical grid
Short circuit ratio (electrical grid)
[ "Engineering" ]
1,764
[ "Power engineering", "Electrical engineering", "Energy engineering" ]
74,158,460
https://en.wikipedia.org/wiki/Short%20circuit%20ratio%20%28synchronous%20generator%29
In a synchronous generator, the short circuit ratio is the ratio of the field current required to produce rated armature voltage at open circuit to the field current required to produce rated armature current at short circuit. This ratio can also be expressed as the inverse of the saturated direct-axis synchronous reactance (in p.u.): SCR = I_f(open circuit, rated voltage) / I_f(short circuit, rated current) = 1 / X_d(sat). Effects of SCR values Higher SCR requires lower reactance, which in practice means a larger air gap. Both high and low levels of SCR have their benefits: low SCR: in case of a short circuit, the current is proportional to SCR, therefore generators with low SCR require less protection and are thus cheaper; low SCR allows a shorter air gap and a lower excitation field, both decreasing the size (and cost) of the generator; with low SCR the amounts of iron and copper are reduced, lowering the cost; high SCR: a generator with high SCR provides more power when overloaded, improving system stability in case of a contingency; high SCR inherently provides lower voltage variations in case of an oscillatory load, thus contributing to system stability; a high-SCR generator has more synchronizing power, making it easier to operate generators in parallel. Therefore, in practice the design of a generator seeks an SCR that balances benefits and drawbacks for a particular application. SCR is a measure of the electrical stiffness (also known as the synchronizing coefficient) of the machine. The synchronizing coefficient is proportional to the SCR. A stiff machine has a higher SCR, is more loosely coupled to the network and is thus slower in following the grid. A less stiff machine with lower SCR (a typical situation for modern generators) will follow the grid faster. Stiffness is the ratio of the change in power output to the change in power angle. For example, if the system frequency decreases, the stiffer generator provides more power, thus contributing to system stability. Effects of construction The larger the SCR, the smaller the alternator reactance (Xd) and inductance Ld. This is the result of larger air gaps in the generator design (as in hydro generators or salient-pole machines). It results in a machine loosely coupled to the grid, whose response will be slow. This increases the machine's stability while operating on the grid, but simultaneously increases the short circuit current delivery capability of the machine (higher short circuit current) and consequently the machine size and cost. Typical values of SCR for hydro alternators may be in the range of 1 to 1.5. Conversely, the smaller the SCR, the larger the alternator's reactance (Xd) and Ld. This results from small air gaps in the machine design (as in turbo generators or cylindrical-rotor machines). Such machines are tightly coupled to the grid, and their response will be fast. This reduces the machine's stability while operating on the grid and reduces the short circuit current delivery capability (lower short circuit current), with a smaller machine size and lower cost. Typical values of SCR for turbo alternators may be in the range of 0.45 to 0.9. References Sources Electromechanical engineering
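Given the inverse relation above, the SCR follows directly from the saturated direct-axis reactance in per unit. A minimal sketch in Python; the reactance values are assumed examples, not data for specific machines:

def scr_from_xd(xd_sat_pu: float) -> float:
    # SCR = 1 / X_d(saturated), with X_d expressed in per unit.
    return 1.0 / xd_sat_pu

print(scr_from_xd(0.8))  # assumed salient-pole (hydro) machine -> SCR 1.25
print(scr_from_xd(1.8))  # assumed cylindrical-rotor (turbo) machine -> SCR ~0.56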
Short circuit ratio (synchronous generator)
[ "Engineering" ]
670
[ "Electrical engineering", "Electromechanical engineering", "Mechanical engineering by discipline" ]
72,719,782
https://en.wikipedia.org/wiki/Polynomial%20root-finding
Finding polynomial roots is a long-standing problem that has been the object of much research throughout history. A testament to this is that up until the 19th century, algebra meant essentially the theory of polynomial equations. Principles Finding the root of a linear polynomial (degree one) is easy and needs only one division: the general equation ax + b = 0 (with a ≠ 0) has the solution x = −b/a. For quadratic polynomials (degree two), the quadratic formula produces a solution, but its numerical evaluation may require some care for ensuring numerical stability. For degrees three and four, there are closed-form solutions in terms of radicals, which are generally not convenient for numerical evaluation, as they are too complicated and involve the computation of several nth roots whose computation is not easier than the direct computation of the roots of the polynomial (for example, the expression of the real roots of a cubic polynomial may involve cube roots of non-real numbers). For polynomials of degree five or higher, the Abel–Ruffini theorem asserts that there is, in general, no radical expression of the roots. So, except for very low degrees, root finding of polynomials consists of finding approximations of the roots. By the fundamental theorem of algebra, a polynomial of degree n has exactly n real or complex roots counting multiplicities. It follows that the problem of root finding for polynomials may be split into three different subproblems: finding one root; finding all roots; finding roots in a specific region of the complex plane, typically the real roots or the real roots in a given interval (for example, when the roots represent a physical quantity, only the real positive ones are interesting). For finding one root, Newton's method and other general iterative methods work generally well. For finding all the roots, arguably the most reliable method is the Francis QR algorithm computing the eigenvalues of the companion matrix corresponding to the polynomial, implemented as the standard method in MATLAB. The oldest method of finding all roots is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial x − r. The resulting polynomial contains the remaining roots, which can be found by iterating on this process. However, except for low degrees, this does not work well because of numerical instability: Wilkinson's polynomial shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of 10^20; this implies that an error of 10^−10 on the value of the root may produce a value of the polynomial at the approximate root that is of the order of 10^10. For avoiding these problems, methods have been elaborated that compute all roots simultaneously, to any desired accuracy. Presently the most efficient such method is the Aberth method. A free implementation is available under the name of MPSolve. This is a reference implementation, which can routinely find the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits. The methods for computing all roots may be used for computing real roots.
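The companion-matrix route mentioned above (the standard method behind MATLAB's roots, and likewise NumPy's numpy.roots) can be sketched in a few lines. A minimal sketch in Python; the example cubic is an arbitrary choice:

import numpy as np

def roots_via_companion(coeffs):
    # coeffs = [a_n, ..., a_1, a_0], highest degree first, a_n != 0.
    a = np.asarray(coeffs, dtype=complex)
    a = a / a[0]                   # make the polynomial monic
    n = len(a) - 1
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)     # ones on the subdiagonal
    C[:, -1] = -a[:0:-1]           # last column built from the coefficients
    return np.linalg.eigvals(C)    # eigenvalues of C are the roots

# Roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3):
print(np.sort(roots_via_companion([1, -6, 11, -6])))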
However, it may be difficult to decide whether a root with a small imaginary part is real or not. Moreover, as the number of real roots is, on average, proportional to the logarithm of the degree, it is a waste of computer resources to compute the non-real roots when one is interested only in the real roots. The oldest method for computing the number of real roots, and the number of roots in an interval, results from Sturm's theorem, but the methods based on Descartes' rule of signs and its extensions, Budan's and Vincent's theorems, are generally more efficient. For root finding, all proceed by reducing the size of the intervals in which roots are searched until getting intervals containing zero or one root. Then the intervals containing one root may be further reduced for getting a quadratic convergence of Newton's method to the isolated roots. The main computer algebra systems (Maple, Mathematica, SageMath, PARI/GP) each have a variant of this method as the default algorithm for the real roots of a polynomial. Another class of methods is based on converting the problem of finding polynomial roots to the problem of finding eigenvalues of the companion matrix of the polynomial and, in principle, can use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that employ the structure of the matrix, that is, that can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli's method for finding the root of greatest modulus. The inverse power method with shifts, which finds some smallest root first, is what drives the complex (cpoly) variant of the Jenkins–Traub algorithm and gives it its numerical stability. Additionally, it has fast convergence with order 1 + φ ≈ 2.6 (where φ is the golden ratio) even in the presence of clustered roots. This fast convergence comes at a cost of three polynomial evaluations per step, resulting in a residual of O(|f(x)|^(2+3φ)), that is, a slower convergence than with three steps of Newton's method. Finding one root The most widely used method for computing a root is Newton's method, which consists of the iterations x_{n+1} = x_n − p(x_n)/p′(x_n), starting from a well-chosen initial value x_0. If p is a polynomial, the computation is faster when using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration. Though the convergence is generally quadratic, the method may converge very slowly or even not converge at all. In particular, if the polynomial has no real root and x_0 is real, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root if x_0 is larger than it (there are easy ways of computing an upper bound on the roots; see Properties of polynomial roots). This is the starting point of Horner's method for computing the roots. When one root r has been found, one may use Euclidean division for removing the factor x − r from the polynomial. Computing a root of the resulting quotient, and repeating the process, provides, in principle, a way of computing all roots. However, this iterative scheme is numerically unstable: the approximation errors accumulate during the successive factorizations, so that the last roots are determined with a polynomial that deviates widely from a factor of the original polynomial.
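The combination of Newton's iteration with Horner evaluation of p and p' in a single pass can be sketched as follows. A minimal sketch in Python; the starting value and tolerance are illustrative assumptions:

def horner_with_derivative(coeffs, x):
    # coeffs = [a_n, ..., a_0]; returns (p(x), p'(x)) in one pass.
    p, dp = 0.0, 0.0
    for c in coeffs:
        dp = dp * x + p     # derivative accumulates before p is updated
        p = p * x + c
    return p, dp

def newton_poly(coeffs, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0:
            break           # stationary point; no further progress possible
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

# Largest root of x^3 - 2x - 5 (Newton's own example), starting above it:
print(newton_poly([1.0, 0.0, -2.0, -5.0], x0=3.0))  # ~2.0945514815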
To reduce this error, one may, for each root that is found, restart Newton's method with the original polynomial and this approximate root as the starting value. However, there is no warranty that this will allow finding all roots. In fact, the problem of finding the roots of a polynomial from its coefficients can be highly ill-conditioned. This is illustrated by Wilkinson's polynomial: the roots of this polynomial of degree 20 are the 20 first positive integers; changing the last bit of the 32-bit representation of one of its coefficients (equal to −210) produces a polynomial with only 10 real roots and 10 complex roots with imaginary parts larger than 0.6. Closely related to Newton's method are Halley's method and Laguerre's method. Both use the polynomial and its first two derivatives for an iterative process that has cubic convergence. Combining two consecutive steps of these methods into a single test, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step). When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's methods stay inside the real number line. One has to choose complex starting points to find complex roots. In contrast, the Laguerre method, with a square root in its evaluation, will leave the real axis of its own accord. Finding roots in pairs If the given polynomial only has real coefficients, one may wish to avoid computations with complex numbers. To that effect, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method. The real variant of the Jenkins–Traub algorithm is an improvement of this method. Finding all roots at once The simple Durand–Kerner and the slightly more complicated Aberth method simultaneously find all of the roots using only simple complex number arithmetic. Accelerated algorithms for multi-point evaluation and interpolation similar to the fast Fourier transform can help speed them up for large degrees of the polynomial. It is advisable to choose an asymmetric, but evenly distributed set of initial points. The implementation of this method in the free software MPSolve is a reference for its efficiency and its accuracy. Another method with this style is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas, one obtains easy approximations of the modulus of the roots, and with some more effort, of the roots themselves. Exclusion and enclosure methods Several fast tests exist that tell if a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly. All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable. For real roots, see the next sections.
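The Durand–Kerner simultaneous iteration described above is short enough to sketch directly. A minimal sketch in Python for a monic polynomial; the starting points are the customary asymmetric choice (powers of 0.4 + 0.9i), an assumption rather than a tuned value, and the fixed iteration count is illustrative:

def durand_kerner(coeffs, iterations=100):
    # coeffs = [1, a_{n-1}, ..., a_0]: a monic polynomial, highest degree first.
    n = len(coeffs) - 1
    p = lambda x: sum(c * x ** (n - i) for i, c in enumerate(coeffs))
    z = [(0.4 + 0.9j) ** k for k in range(n)]  # asymmetric starting points
    for _ in range(iterations):
        for i in range(n):
            denom = 1.0
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            z[i] = z[i] - p(z[i]) / denom      # Weierstrass correction
    return z

# Roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3):
print(sorted(durand_kerner([1, -6, 11, -6]), key=lambda w: w.real))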
Exclusion and enclosure methods Several fast tests exist that tell whether a segment of the real line or a region of the complex plane contains no roots. By bounding the moduli of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly. All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable. For real roots, see the next sections. The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm, uses a winding number computation for rectangular regions in the complex plane. The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting. Real-root isolation Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of the 19th century and is still an active domain of research. Most root-finding algorithms can find some real roots, but cannot certify having found all of the roots. Methods for finding all complex roots, such as the Aberth method, can provide the real roots. However, because of the numerical instability of polynomials (see Wilkinson's polynomial), they may need arbitrary-precision arithmetic to decide which roots are real. Moreover, they compute all complex roots even when only a few are real. It follows that the standard way of computing real roots is to first compute disjoint intervals, called isolating intervals, such that each one contains exactly one real root and together they contain all the roots. This computation is called real-root isolation. Given an isolating interval, one may use fast numerical methods, such as Newton's method, to improve the precision of the result. The oldest complete algorithm for real-root isolation results from Sturm's theorem. However, it appears to be much less efficient than the methods based on Descartes' rule of signs and Vincent's theorem. These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real). These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method). Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
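For illustration, here is a small Sturm-based isolation sketch in Python (floating-point arithmetic for brevity; certified implementations use exact rational arithmetic and many refinements):

```python
def eval_poly(p, x):               # Horner evaluation, coefficients high-to-low
    v = 0.0
    for c in p:
        v = v * x + c
    return v

def deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def poly_rem(a, b):                # remainder of polynomial division a / b
    a = list(a)
    while len(a) >= len(b) and any(a):
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)
    return a if any(a) else [0.0]

def sturm_chain(p):
    chain = [list(p), deriv(p)]
    while len(chain[-1]) > 1:
        r = [-c for c in poly_rem(chain[-2], chain[-1])]
        if not any(r):
            break
        chain.append(r)
    return chain

def sign_changes(chain, x):
    signs = [v for v in (eval_poly(q, x) for q in chain) if v != 0.0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def isolate(p, a, b):
    """Intervals (lo, hi] each containing exactly one distinct real root of p."""
    chain = sturm_chain(p)
    count = lambda lo, hi: sign_changes(chain, lo) - sign_changes(chain, hi)
    work, out = [(a, b)], []
    while work:
        lo, hi = work.pop()
        c = count(lo, hi)
        if c == 1:
            out.append((lo, hi))
        elif c > 1:
            mid = 0.5 * (lo + hi)
            work += [(lo, mid), (mid, hi)]
    return out

print(isolate([1.0, 0.0, -2.0], -2.0, 2.0))   # x^2 - 2: isolates the two roots
```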
Finding multiple roots of polynomials Numerical computation of multiple roots Multiple roots are highly sensitive: they are known to be ill-conditioned and inaccurate in numerical computation in general. A method by Zhonggang Zeng (2004), implemented as a MATLAB package, computes multiple roots and the corresponding multiplicities of a polynomial accurately even if the coefficients are inexact. The method can be summarized in two steps. Let $p$ be the given polynomial. The first step determines the multiplicity structure by applying square-free factorization with a numerical greatest common divisor algorithm. This allows writing $p$ as $p(x) = a\prod_{i=1}^{k}(x - z_i)^{m_i}$, where $m_1, \ldots, m_k$ are the multiplicities of the $k$ distinct roots $z_1, \ldots, z_k$. This equation is an overdetermined system having the $k$ variables $z_1, \ldots, z_k$ and $n$ equations obtained by matching coefficients, where $n$ is the degree of $p$ (the leading coefficient is not a variable). The least-squares solution is no longer ill-conditioned in most cases. The second step applies the Gauss–Newton algorithm to solve the overdetermined system for the distinct roots. The sensitivity of multiple roots can be regularized thanks to a geometric property of multiple roots discovered by William Kahan (1972), and the overdetermined system model maintains the multiplicities $m_1, \ldots, m_k$. Square-free factorization For polynomials whose coefficients are exactly given as integers or rational numbers, there is an efficient method to factorize them into factors that have only simple roots and whose coefficients are also exactly given. This method, called square-free factorization, is based on the fact that the multiple roots of a polynomial are the roots of the greatest common divisor of the polynomial and its derivative. The square-free factorization of a polynomial $p$ is a factorization $p = p_1 p_2^2 \cdots p_k^k$, where each $p_i$ is either 1 or a polynomial without multiple roots, and two different $p_i$ do not have any common root. An efficient method to compute this factorization is Yun's algorithm. References Polynomials
Polynomial root-finding
[ "Mathematics" ]
2,936
[ "Polynomials", "Algebra" ]
72,721,049
https://en.wikipedia.org/wiki/Jedi1
Jedi1 (IUPAC name: 2-methyl-5-phenylfuran-3-carboxylic acid) is a chemical compound which acts as an agonist for the mechanosensitive ion channel PIEZO1, and is used in research into the function of touch perception. See also Yoda1 and Jedi2 References Furans Carboxylic acids
Jedi1
[ "Chemistry" ]
81
[ "Pharmacology", "Carboxylic acids", "Functional groups", "Medicinal chemistry stubs", "Pharmacology stubs" ]
72,727,446
https://en.wikipedia.org/wiki/Water%20supply%20and%20sanitation%20in%20Angola
Water supply and sanitation is an ongoing challenge in the nation of Angola. Background Angola has historically had issues with corruption and instability hindering its water infrastructure development. Recent developments Despite being a relatively poor country, water access has improved in recent history. The percentage of Angolans with access to a stable water supply grew from 42% in 1990 to 54% in 2012. References Water supply Health in Angola
Water supply and sanitation in Angola
[ "Chemistry", "Engineering", "Environmental_science" ]
80
[ "Hydrology", "Water supply", "Environmental engineering" ]
66,936,728
https://en.wikipedia.org/wiki/Electromaterials
In physics, electrical engineering and materials science, electromaterials are the set of materials which store, controllably convert, exchange and conduct electrically charged particles. The term electromaterial can refer to any electronically or ionically active material. While this definition is quite broad, the term is typically used in the context of properties and/or applications in which atomic electronic transition is pertinent. The word electromaterials is a compound of the Ancient Greek term ἤλεκτρον (ēlektron, "amber") and the Latin term materia ("matter"). Properties Electromaterials enable the transport of charged species (electrons and/or ions) as well as facilitate the exchange of charge with other materials. For atomic and molecular systems, this is observed as atomic electronic transition between discrete orbitals, while for bulk semiconductor materials electronic bands determine which transitions may occur. Metals, in which the conduction band is permanently populated, may also be considered electromaterials, although they typically fall outside the category compared with materials exhibiting other conduction mechanisms, such as degenerate semiconductors (transparent conductive oxides) or polaron hopping (organic conductors). Materials which can be ionised (i.e., electrons either added or stripped away) may also be considered electronically active. Electromaterials broadly have a number of properties, including: Opto-electronic properties Photoelectric properties Exotic phenomena such as superconductive properties Partial charge transfer, adsorption of species leading to a change in the electronic properties of the material Ion conductivity Applications In the application of electromaterials, ions or electrons are used to carry out a specific function, for example, the oxidation or reduction (loss or gain of electrons, respectively) of another species. Materials such as metals, metal particles, conducting polymers, conducting carbon (e.g. CNTs, graphene, carbon fibres), electrodes, electrolytes, electrocatalysts, and light-harvesting materials (e.g. dyes for DSSCs) find applications in which electromaterials are critical to their functionality: Batteries Super-capacitors Fuel cells Photovoltaics Artificial muscles Chemical sensors LEDs Energy conversion/storage devices Systems that interact with living tissue and soft robotics (prosthetics) Characterisation Electromaterials can be explored by techniques such as (but not limited to) absorption spectroscopy, photoluminescence spectroscopy, electrochemistry, FTIR, Raman spectroscopy, or combinations of the above, such as Raman spectroelectrochemistry. See also Metal Electrolyte Electrical conductor Piezoelectricity References Electricity Materials science
Electromaterials
[ "Physics", "Materials_science", "Engineering" ]
544
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
66,946,224
https://en.wikipedia.org/wiki/Gamma%20ray%20cross%20section
A gamma ray cross section is a measure of the probability that a gamma ray interacts with matter. The total cross section of gamma ray interactions is composed of several independent processes: the photoelectric effect, Compton (incoherent) scattering, electron-positron pair production in the nucleus field, and electron-positron pair production in the electron field (triplet production). The cross section for each single process listed above is a part of the total gamma ray cross section. Other effects, like photonuclear absorption and Thomson or Rayleigh (coherent) scattering, can be omitted because of their insignificant contribution in the gamma ray range of energies. The detailed equations for the cross sections (barn/atom) of all the mentioned effects connected with gamma ray interaction with matter are listed below. Photoelectric effect cross section The photoelectric effect describes the interaction of a gamma photon with an electron located in the atomic structure. This results in the ejection of that electron from the atom. The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV. It is much less important at higher energies, but still needs to be taken into consideration. Usually, the cross section of the photoeffect can be approximated by a simplified equation in terms of k = Eγ/Ee, where Eγ = hν is the photon energy given in eV and Ee = mec² ≈ 5.11×10⁵ eV is the electron rest mass energy; Z is the atomic number of the absorber's element, α = e²/(ħc) ≈ 1/137 is the fine structure constant, and re² = e⁴/Ee² ≈ 0.07941 b is the square of the classical electron radius in barns. For higher precision, however, the Sauter equation is more appropriate; it involves the binding energy EB of the electron and the Thomson cross section ϕ0 = 8πe⁴/(3Ee²) ≈ 0.66526 barn. For higher energies (>0.5 MeV) the cross section of the photoelectric effect is very small because other effects (especially Compton scattering) dominate. However, for precise calculations of the photoeffect cross section in the high energy range, the Sauter equation should be replaced by the Pratt–Scofield equation, whose input parameters are presented in the table below. Compton scattering cross section Compton scattering (or the Compton effect) is an interaction in which an incident gamma photon causes the ejection of an atomic electron while the original photon is scattered with lower energy. The probability of Compton scattering decreases with increasing photon energy. Compton scattering is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range of 100 keV to 10 MeV. The cross section of the Compton effect is described by the Klein–Nishina equation for energies higher than 100 keV (k > 0.2). For lower energies, however, this equation should be replaced by another expression, which is proportional to the absorber's atomic number Z. An additional cross section connected with the Compton effect can be calculated for the energy transfer coefficient only, i.e., the absorption of the photon energy by the electron, which is often used in radiation protection calculations.
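The total Klein–Nishina cross section per electron has a well-known closed form, so the Compton term can be sketched numerically as follows (a minimal illustration using the constants quoted above; the per-atom value is obtained by multiplying by Z):

```python
import math

R_E2_BARN = 0.07941          # classical electron radius squared, in barns

def klein_nishina_total(k):
    """Total Klein–Nishina cross section per electron, in barns.
    k = E_gamma / (m_e c^2) is the reduced photon energy."""
    t = math.log(1.0 + 2.0 * k)
    return 2.0 * math.pi * R_E2_BARN * (
        (1.0 + k) / k**2 * (2.0 * (1.0 + k) / (1.0 + 2.0 * k) - t / k)
        + t / (2.0 * k)
        - (1.0 + 3.0 * k) / (1.0 + 2.0 * k) ** 2
    )

print(klein_nishina_total(0.01))  # k -> 0 approaches the Thomson value ~0.665 barn
print(klein_nishina_total(2.0))   # at about 1 MeV: roughly 0.21 barn
```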
Pair production (in nucleus field) cross section By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron (e−e+) pair. The cross section for the pair production effect is usually described by the Maximon equation for low energies (k < 4). For higher energies (k > 4) the Maximon equation takes a form involving the constant ζ(3) ≈ 1.2020569, where ζ is the Riemann zeta function. The energy threshold for the pair production effect is k = 2 (the positron and electron rest mass energy). Triplet production cross section The triplet production effect, in which a positron and electron are produced in the field of another electron, is similar to pair production, with the threshold at k = 4. This effect, however, is much less probable than pair production in the nucleus field. The most popular form of the triplet cross section is the Borsellino–Ghizzetti equation, with a = −2.4674 and b = −1.8031. This equation is quite long, so Haug proposed simpler analytical forms of the triplet cross section, with separate expressions for the lowest energies 4 < k < 4.6, for 4.6 < k < 6, and for 6 < k < 18. For k > 14 Haug proposed to use a shorter form of the Borsellino equation. Total cross section One can present the total cross section per atom as a simple sum of the individual effects. Next, using the Beer–Lambert–Bouguer law, one can calculate the linear attenuation coefficient for the photon interaction with an absorber of atomic density N, or the mass attenuation coefficient, where ρ is the mass density, u is the atomic mass unit, and A is the atomic mass of the absorber. This can be directly used in practice, e.g. in radiation protection. The analytical calculation of the cross section of each specific phenomenon is rather difficult because the appropriate equations are long and complicated. The total cross section of gamma ray interaction can thus be presented in one phenomenological equation formulated by Fornalski, which can be used instead; its ai,j parameters are presented in the table below. This formula is an approximation of the total cross section of gamma rays interacting with matter, for different energies (from 1 MeV to 10 GeV, namely 2 < k < 20,000) and absorber atomic numbers (from Z = 1 to 100). For the lower energy region (<1 MeV) the Fornalski equation is more complicated due to the larger variability of the function for different elements. Therefore, a modified equation is a good approximation for photon energies from 150 keV to 10 MeV, where the photon energy E is given in MeV and the ai,j parameters are presented in the table below with much better precision. Analogously, the equation is valid for all Z from 1 to 100. XCOM database of cross sections The US National Institute of Standards and Technology has published online a complete and detailed database of cross section values of X-ray and gamma ray interactions with different materials at different energies. The database, called XCOM, also contains linear and mass attenuation coefficients, which are useful for practical applications. See also Cross section (physics) Gamma ray Linear attenuation coefficient Mass attenuation coefficient Neutron cross section Nuclear cross section External links XCOM Database References Atomic physics Physical quantities Measurement Nuclear physics Gamma rays Radiation
Gamma ray cross section
[ "Physics", "Chemistry", "Mathematics" ]
1,397
[ "Transport phenomena", "Physical phenomena", "Spectrum (physical sciences)", "Physical quantities", "Nuclear physics", "Quantity", "Electromagnetic spectrum", "Measurement", "Size", "Quantum mechanics", "Waves", "Radiation", "Gamma rays", " molecular", "Atomic physics", "Atomic", "Ph...
75,547,310
https://en.wikipedia.org/wiki/Kirbee%20Kiln%20Site
The Kirbee Kiln Site is a 19th-century kiln ruin located in Montgomery County, Texas, where stoneware was manufactured by the Kirbee family. It is one of the largest groundhog kilns ever recorded in the American South. The exact location of the site is restricted. It was listed on the National Register of Historic Places in 1973. History The Kirbee Kiln was founded and operated by James Kirbee, who was originally from Edgefield, South Carolina, and had relatives and acquaintances who were also potters. One of his acquaintances might have been David Drake, a potter who was enslaved by Kirbee's associate Rev. John Landrum. By 1830, Kirbee and his family had relocated to Georgia, and by 1840 they had migrated to Montgomery County, Texas. The kiln itself was likely built around 1849, as it appeared in the 1850 Schedule of Industry and Manufacture. James was likely assisted by his sons M.J. and Louis. The annual value of the stoneware produced did not exceed $500, much lower than that of other local kilns. The kiln likely ceased operations in the 1860s. The site was one of several kilns surveyed by the Texas Historical Commission between 1973 and 1974. It was listed on the National Register of Historic Places on August 28, 1973, the first site in the area to be added to the NRHP. Architecture and pottery At the time of the archaeological surveys in the 1970s, the Texas Historical Commission named the Kirbee Kiln Site as the largest groundhog kiln that had then been excavated in Texas, and it remains one of the largest ever recorded in the American South. It measured across and 8 to 10 inches wide and was constructed of brick. The kiln was rectangular in shape, consisting of an opening at the very front for loading and firing, a depressed firebox, the loading shelf in the middle, and a fireplace-shaped chimney at the very back. A unique feature of this kiln was the presence of a second firing box located midway along the loading shelf; a side door would have provided access. The chimney is believed by the excavators to have decreased in width towards its top. The buttresses of the Kirbee Kiln were large and angled but also included several smaller ones, a rare feature that could have functioned to support its size, offer resistance against the sloped ground, and double as a retaining wall. The entire floor of the kiln was sandy soil. Kirbee's stoneware had similarities to techniques observed elsewhere in Georgia and South Carolina, particularly the alkaline glaze that was characteristic of contemporary Edgefield stoneware; the vessels were also comparable in features such as their handles and shape. This style of pottery is very similar to Catawba Valley pottery, which was developed in nearby North Carolina. The trademark on the Kirbee stoneware was a round stamp resembling the letter "O". References Kilns American pottery Industrial buildings and structures on the National Register of Historic Places in Texas National Register of Historic Places in Montgomery County, Texas 1849 establishments in Texas Industrial buildings completed in 1849
Kirbee Kiln Site
[ "Chemistry", "Engineering" ]
644
[ "Chemical equipment", "Kilns" ]
75,549,316
https://en.wikipedia.org/wiki/Alexander%20Mihailovich%20Zamorzaev
Alexander Mihailovich Zamorzaev (27 January 1927 – 1 November 1997) was a Soviet mathematician and crystallographer. In 1953 Zamorzaev was the first to derive the complete list of magnetic space groups (Shubnikov groups). In 1957 Zamorzaev founded the field of generalised antisymmetry by introducing the concept of more than one kind of two-valued antisymmetry operation. Life Career Zamorzaev was born on 23 January 1927 in Leningrad. In 1953 at the University of Leningrad, under the supervision of A.D. Aleksandrov, he gained the M.A. degree with the dissertation Generalization of Fedorov groups, in which he developed the general theory of antisymmetry. In this work he derived for the first time the 1651 antisymmetry space groups, and named them "Shubnikov groups" after A.V. Shubnikov, the pioneer of antisymmetry. In 1953 he became a mathematics lecturer at the newly opened University of Kishinev (Chișinău). Besides teaching the regular mathematics curriculum and supervising graduate students, Zamorzaev devised and taught new courses in the areas of discrete geometry, theoretical crystallography, and antisymmetry and its generalisations. In 1971 he gained his doctoral degree with a thesis entitled Theory of Antisymmetry and its Different Generalizations. The thesis was based on his new theories of geometry and mathematical crystallography: 1) multiple antisymmetry; 2) similarity and conformal symmetry; and 3) P-symmetry, including generalisations of A.V. Shubnikov's antisymmetry and N.V. Belov's color symmetry. In 1973 a department of higher geometry was established within the university and Zamorzaev was appointed as professor and head of the department. A history of the personnel and achievements of Zamorzaev's school of geometry is available online. Works The majority of Zamorzaev's works were published in Russian. Books published by Zamorzaev: Theory of simple and multiple antisymmetry (1976) Theory of discrete symmetry groups (1977) Color symmetry, its generalizations and applications (1978) P-symmetry and its further development (1986) Zamorzaev published 110 papers. Selected papers available in English: Similarity symmetric and antisymmetric groups (1964) Quasisymmetry (p-symmetry) groups (1968) Color-symmetry space groups (1969) Antisymmetry, its generalizations and geometrical applications (1980) Generalized antisymmetry (1988) Honours and awards E.S. Fedorov Prize of the Russian Academy of Sciences for his contributions to the theory of symmetry (1973) State Prize of the Moldovan SSR in Science and Technology for his contributions in the field of discrete geometry (1974) Honored Worker of Science of the Moldovan SSR for his achievements in science and education (1977) Elected Corresponding Member of the Academy of Sciences of Moldova (1989) References 1927 births 1997 deaths Soviet mathematicians Crystallographers Scientists from Saint Petersburg
Alexander Mihailovich Zamorzaev
[ "Chemistry", "Materials_science" ]
648
[ "Crystallographers", "Crystallography" ]
75,551,560
https://en.wikipedia.org/wiki/Velagliflozin
Velagliflozin, sold under the brand name Senvelgo, is an antidiabetic medication used for the treatment of cats. Velagliflozin is a sodium-glucose cotransporter 2 (SGLT2) inhibitor. It is taken by mouth. Medical uses Velagliflozin is indicated to improve glycemic control in otherwise healthy cats with diabetes not previously treated with insulin. References Cat medications SGLT2 inhibitors Nitriles Cyclopropyl compounds Sugar alcohols Veterinary drugs
Velagliflozin
[ "Chemistry" ]
111
[ "Carbohydrates", "Nitriles", "Sugar alcohols", "Functional groups" ]
75,558,170
https://en.wikipedia.org/wiki/Charge%20based%20boundary%20element%20fast%20multipole%20method
The charge-based formulation of the boundary element method (BEM) is a dimensionality-reduction numerical technique that is used to model quasistatic electromagnetic phenomena in highly complex conducting media (targeting, e.g., the human brain) with a very large (up to approximately 1 billion) number of unknowns. The charge-based BEM solves an integral equation of potential theory written in terms of the induced surface charge density. This formulation is naturally combined with fast multipole method (FMM) acceleration, and the entire method is known as charge-based BEM-FMM. The combination of BEM and FMM is a common technique in different areas of computational electromagnetics and, in the context of bioelectromagnetism, it provides improvements over the finite element method. Historical development Along with the more common electric potential-based BEM, the quasistatic charge-based BEM, derived in terms of the single-layer (charge) density, has been known in potential theory for a single-compartment medium since the beginning of the 20th century. For multi-compartment conducting media, the surface charge density formulation first appeared in discretized form (for faceted interfaces) in the 1964 paper by Gelernter and Swihart. A subsequent continuous form, including time-dependent and dielectric effects, appeared in the 1967 paper by Barnard, Duck, and Lynn. The charge-based BEM has also been formulated for conducting, dielectric, and magnetic media, and used in different applications. In 2009, Greengard et al. successfully applied the charge-based BEM with fast multipole acceleration to the molecular electrostatics of dielectrics. A similar approach to realistic modeling of the human brain with multiple conducting compartments was first described by Makarov et al. in 2018. Along with this, the BEM-based multilevel fast multipole method has been widely used in radar and antenna studies at microwave frequencies as well as in acoustics. Physical background - surface charges in biological media The charge-based BEM is based on the concept of an impressed (or primary) electric field and a secondary electric field. The impressed field is usually known a priori or is trivial to find. For the human brain, the impressed electric field can be classified as one of the following: A conservative field derived from an impressed density of EEG or MEG current sources in a homogeneous infinite medium with the conductivity of the source location; An instantaneous solenoidal field of an induction coil obtained from Faraday's law of induction in a homogeneous infinite medium (air), when transcranial magnetic stimulation (TMS) problems are concerned; A surface field derived from an impressed surface current density of current electrodes injecting electric current at the boundary of a conducting compartment, when transcranial direct-current stimulation (tDCS) or deep brain stimulation (DBS) are concerned; A conservative field of charges deposited on voltage electrodes for tDCS or DBS (this specific problem requires a coupled treatment, since these charges will depend on the environment); In application to multiscale modeling, a field obtained from any other macroscopic numerical solution in a small (mesoscale or microscale) spatial domain within the brain. For example, a constant field can be used.
When the impressed field is "turned on", free charges located within a conducting volume immediately begin to redistribute and accumulate at the boundaries (interfaces) of regions of different conductivity. A surface charge density appears on the conductivity interfaces. This charge density induces a secondary conservative electric field following Coulomb's law. One example is a human under a direct-current powerline with the known field directed down. The superior surface of the human's conducting body will be charged negatively while its inferior portion is charged positively. These surface charges create a secondary electric field that effectively cancels or blocks the primary field everywhere in the body, so that no current will flow within the body under DC steady-state conditions. Another example is a human head with electrodes attached. At any conductivity interface with a normal vector $\mathbf{n}$ pointing from an "inside" (−) compartment of conductivity $\sigma^-$ to an "outside" (+) compartment of conductivity $\sigma^+$, Kirchhoff's current law requires continuity of the normal component of the electric current density. This leads to the interfacial boundary condition $\sigma^+ E_n^+ = \sigma^- E_n^-$ for every facet at a triangulated interface. As long as $\sigma^+$ and $\sigma^-$ are different from each other, the two normal components of the electric field, $E_n^+$ and $E_n^-$, must also be different. Such a jump across the interface is only possible when a sheet of surface charge exists at that interface. Thus, if an electric current or voltage is applied, the surface charge density follows. The goal of the numerical analysis is to find the unknown surface charge distribution and thus the total electric field $\mathbf{E}$ (and the total electric potential if required) anywhere in space. System of equations for surface charges Below, a derivation is given based on Gauss's law and Coulomb's law. All conductivity interfaces are discretized into planar triangular facets with centers $\mathbf{r}_m$. Assume that the $m$-th facet, with normal vector $\mathbf{n}_m$ and area $A_m$, carries a uniform surface charge density $\rho_m$. If a volumetric tetrahedral mesh were present, the charged facets would belong to tetrahedra with different conductivity values. We first compute the electric field just outside facet $m$ at its center $\mathbf{r}_m$. This field contains three contributions: The continuous impressed electric field $\mathbf{E}^p$ itself; An electric field of the $m$-th charged facet itself. Very close to the facet, it can be approximated as the electric field of an infinite sheet of uniform surface charge $\rho_m$. By Gauss's law, it is given by $\rho_m \mathbf{n}_m/(2\varepsilon_0)$, where $\varepsilon_0$ is a background electrical permittivity; An electric field generated by all other facets $k \neq m$, which we approximate as point charges of charge $\rho_k A_k$ at each center $\mathbf{r}_k$. A similar treatment holds for the electric field just inside facet $m$, but the electric field of the flat sheet of charge changes its sign. Using Coulomb's law to calculate the contribution of the facets different from $m$, we find $$E_n^\pm(\mathbf{r}_m) = \mathbf{E}^p(\mathbf{r}_m)\cdot\mathbf{n}_m \pm \frac{\rho_m}{2\varepsilon_0} + \frac{1}{4\pi\varepsilon_0}\sum_{k\neq m}\rho_k A_k \frac{(\mathbf{r}_m - \mathbf{r}_k)\cdot\mathbf{n}_m}{|\mathbf{r}_m - \mathbf{r}_k|^3}.$$ From this equation, we see that the normal component of the electric field indeed undergoes a jump through the charged interface. This is equivalent to a jump relation of potential theory. As a second step, the two expressions for $E_n^\pm(\mathbf{r}_m)$ are substituted into the interfacial boundary condition $\sigma^+ E_n^+ = \sigma^- E_n^-$, applied to every facet $m$. This operation leads to a system of linear equations for the unknown charge densities which solves the problem: $$\rho_m = 2\varepsilon_0 K_m\left[\mathbf{E}^p(\mathbf{r}_m)\cdot\mathbf{n}_m + \underbrace{\frac{1}{4\pi\varepsilon_0}\sum_{k\neq m}\rho_k A_k \frac{(\mathbf{r}_m - \mathbf{r}_k)\cdot\mathbf{n}_m}{|\mathbf{r}_m - \mathbf{r}_k|^3}}_{\text{field of all other facets}}\right],\qquad K_m = \frac{\sigma_m^- - \sigma_m^+}{\sigma_m^- + \sigma_m^+},$$ where $K_m$ is the electric conductivity contrast at the $m$-th facet. The normalization constant $\varepsilon_0$ will cancel out after the solution is substituted into the expression for $\mathbf{E}$ and becomes redundant.
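A minimal numerical sketch of this system and its iterative solution (NumPy, with the charge density stored in units of ε0, i.e., ρ/ε0; the sum is written as a dense O(N²) matrix product, which is precisely the operation the fast multipole method replaces, as discussed next):

```python
import numpy as np

def solve_charges(centers, normals, areas, K, E0n, n_iter=50):
    """Jacobi-style iteration for the facet charge densities rho/eps0.
    centers (N,3), unit normals (N,3), areas (N,), contrasts K (N,),
    E0n (N,): normal component of the impressed field at each center."""
    d = centers[:, None, :] - centers[None, :, :]        # r_m - r_k, (N,N,3)
    r3 = np.linalg.norm(d, axis=-1) ** 3
    np.fill_diagonal(r3, np.inf)                         # exclude the self-term
    G = np.einsum('mkx,mx->mk', d, normals) / r3         # Coulomb kernel
    rho = 2.0 * K * E0n                                  # initial guess, sum ignored
    for _ in range(n_iter):
        rho = 2.0 * K * (E0n + (G @ (rho * areas)) / (4.0 * np.pi))
    return rho
```

In practice the plain Jacobi sweep above is replaced by GMRES, and the dense product G @ (rho * areas) by an FMM call, which removes both the O(N²) arithmetic and the O(N²) storage.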
Application of fast multipole method For modern characterizations of brain topologies with ever-increasing levels of complexity, the above system of equations for $\rho_m$ is very large; it is therefore solved iteratively. An initial guess for $\rho_m$ is the impressed-field term on the right-hand side, with the sum ignored. Next, the sum is computed and the initial guess is refined, etc. This solution employs the simple Jacobi iterative method. The more rigorous generalized minimal residual method (GMRES) yields a much faster convergence of the BEM-FMM. In either case, the major work is in computing the underbraced sum in the system of equations above for every $m$ at every iteration; this operation corresponds to a repetitive matrix-vector multiplication. However, one can recognize this sum as the electric field of $N$ point charges $\rho_k A_k$ to be computed at $N$ observation points $\mathbf{r}_m$. Such a computation is exactly the task of the fast multipole method, which performs fast matrix-by-vector multiplication in $O(N\log N)$ or even $O(N)$ operations instead of $O(N^2)$. The FMM3D library, realized in both Python and MATLAB, can be used for this purpose. It is therefore unnecessary to form or store the dense system matrix typical for the standard BEM. Continuous charge-based BEM. Near-field correction The system of equations formulated above is derived with the collocation method and is less accurate. The corresponding integral equation is obtained from the local jump relations of potential theory and the local interfacial boundary condition of normal electric current continuity. It is a Fredholm integral equation of the second kind, $$\rho(\mathbf{r}) = 2\varepsilon_0 K(\mathbf{r})\left[\mathbf{E}^p(\mathbf{r})\cdot\mathbf{n}(\mathbf{r}) + \frac{1}{4\pi\varepsilon_0}\int_{S}\rho(\mathbf{r}')\,\frac{(\mathbf{r} - \mathbf{r}')\cdot\mathbf{n}(\mathbf{r})}{|\mathbf{r} - \mathbf{r}'|^3}\,dS'\right],\qquad \mathbf{r}\in S.$$ Its derivation does not involve Green's identities (integrations by parts) and is applicable to non-nested geometries. When the Galerkin method is applied and the same zeroth-order basis functions (with a constant charge density for each facet) are still used on the triangulated interfaces, we obtain exactly the same discretization as before if we replace the double integrals of the kernel over the surfaces of triangles $m$ and $k$ by $A_m A_k$ times the kernel evaluated at the facet centers, where $A_m$ is the surface area of triangle $m$. This approximation is only valid when the distance between the facets is much larger than a typical facet size, i.e., in the "far field". Otherwise, semi-analytical formulae and Gaussian quadratures for triangles should be used. Typically, 4 to 32 such neighbor integrals per facet should be precomputed, stored, and then used at every iteration. This is an important correction to the plain fast multipole method in the "near field", which should also be used in the simple discrete formulation derived above. Such a correction makes it possible to obtain an unconstrained numerical (but not anatomical) resolution in the brain. Applications and limitations Applications of the charge-based BEM-FMM include modeling brain stimulation with near real-time accurate TMS computations as well as neurophysiological recordings. They also include modeling challenging mesoscale head topologies such as thin brain membranes (dura mater, arachnoid mater, and pia mater). This is particularly important for accurate transcranial direct-current stimulation and electroconvulsive therapy dosage predictions. The BEM-FMM allows for straightforward adaptive mesh refinement, including multiple extracerebral brain compartments. Another application is modeling electric field perturbations within a densely packed neuronal/axonal arbor. Such perturbations change the biophysical activating function. A charge-based BEM formulation is being developed for promising bi-domain biophysical modeling of axonal processes.
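As a sketch of the FMM-accelerated step at the heart of these applications, the sum above might be evaluated with the fmm3dpy bindings of the FMM3D library roughly as follows (the argument names and kernel conventions are assumptions to be checked against the installed version; FMM3D's Laplace kernel is taken here as 1/r, so the 1/(4π) factor is applied manually):

```python
import numpy as np
import fmm3dpy   # Python bindings of the FMM3D library mentioned above

def normal_field_sum(centers, normals, areas, rho, eps=1e-7):
    """FMM evaluation of the normal field of all facet charges rho_k * A_k."""
    q = rho * areas                                     # point charges rho_k A_k
    out = fmm3dpy.lfmm3d(eps=eps, sources=centers.T, charges=q, pg=2)
    E = -out.grad.T / (4.0 * np.pi)                     # E = -grad(potential)
    return np.einsum('mx,mx->m', E, normals)            # normal component per facet
```

The near-field correction described above would then subtract the inaccurate point-charge terms for the few dozen closest neighbors of each facet and add the precomputed integrals in their place.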
In its present form, the charge-based BEM-FMM is applicable to multi-compartment piecewise homogeneous media only; it cannot handle macroscopically anisotropic tissues. Additionally, the maximum number of facets (degrees of freedom) is limited to approximately 1 billion for the typical academic computer hardware resources in use as of 2023. See also Computational electromagnetics Boundary element method Fast multipole method Computational neuroscience Transcranial magnetic stimulation Transcranial direct-current stimulation Electroencephalography Magnetoencephalography External links A survey on integral equations for bioelectric modeling, preprint. Flatiron Institute - Simons Foundation FMM3D GitHub Project Site. References Computational electromagnetics Dimension reduction
Charge based boundary element fast multipole method
[ "Physics" ]
2,235
[ "Computational electromagnetics", "Computational physics" ]
71,238,169
https://en.wikipedia.org/wiki/Read%27s%20conjecture
Read's conjecture is a conjecture, first made by Ronald Read, about the unimodality of the coefficients of chromatic polynomials in the context of graph theory. In 1974, S. G. Hoggar tightened this to the conjecture that the coefficients must be strongly log-concave. Hoggar's version of the conjecture is called the Read–Hoggar conjecture. The Read–Hoggar conjecture had been unresolved for more than 40 years before June Huh proved it in 2009, during his PhD studies, using methods from algebraic geometry. References Conjectures that have been proved Graph theory
Read's conjecture
[ "Mathematics" ]
122
[ "Graph theory stubs", "Mathematical theorems", "Graph theory", "Mathematical relations", "Conjectures that have been proved", "Statements in graph theory", "Mathematical problems" ]
71,241,698
https://en.wikipedia.org/wiki/Regge%E2%80%93Wheeler%E2%80%93Zerilli%20equations
In general relativity, the Regge–Wheeler–Zerilli equations are a pair of equations that describe gravitational perturbations of a Schwarzschild black hole, named after Tullio Regge, John Archibald Wheeler and Frank J. Zerilli. The perturbations of a Schwarzschild metric are classified into two types, namely axial and polar perturbations, a terminology introduced by Subrahmanyan Chandrasekhar. Axial perturbations induce frame dragging by imparting rotation to the black hole and change sign when the azimuthal direction is reversed, whereas polar perturbations do not impart rotation and do not change sign under the reversal of the azimuthal direction. The equation for axial perturbations is called the Regge–Wheeler equation and the equation governing polar perturbations is called the Zerilli equation. The equations take the same form as the one-dimensional Schrödinger equation. The equations read $$\frac{d^2 Z^\pm}{dr_*^2} + \left(\omega^2 - V^\pm\right)Z^\pm = 0,$$ where $Z^+$ characterizes the polar perturbations and $Z^-$ the axial perturbations. Here $r_*$ is the tortoise coordinate (we set $G = c = 1$), $r$ belongs to the Schwarzschild coordinates $(t, r, \theta, \varphi)$, $r_s = 2M$ is the Schwarzschild radius, and $\omega$ represents the time frequency of the perturbations appearing in the form $e^{i\omega t}$. The Regge–Wheeler potential $V^-$ and Zerilli potential $V^+$ are respectively given by $$V^- = \left(1 - \frac{2M}{r}\right)\left(\frac{\ell(\ell+1)}{r^2} - \frac{6M}{r^3}\right), \qquad V^+ = \left(1 - \frac{2M}{r}\right)\frac{2n^2(n+1)r^3 + 6n^2 M r^2 + 18 n M^2 r + 18 M^3}{r^3\,(n r + 3M)^2},$$ where $n = (\ell-1)(\ell+2)/2$ and $\ell$ characterizes the eigenmode for the $\theta$ coordinate. For gravitational perturbations, the modes $\ell = 0$ and $\ell = 1$ are irrelevant because they do not evolve with time. Physically, a gravitational perturbation with the $\ell = 0$ (monopole) mode represents a change in the black hole mass, whereas the $\ell = 1$ (dipole) mode corresponds to a shift in the location and value of the black hole's angular momentum. The shapes of the above potentials are exhibited in the figure. Remember that in the tortoise coordinate, $r_* \to -\infty$ denotes the event horizon and $r_* \to +\infty$ is equivalent to $r \to \infty$, i.e., to distances far away from the black hole. The potentials are short-ranged, as they decay faster than $1/r_*$: as $r_* \to +\infty$ they fall off as $1/r_*^2$, and as $r_* \to -\infty$ they vanish exponentially. Consequently, the asymptotic behaviour of the solutions for $r_* \to \pm\infty$ is $Z^\pm \sim e^{\pm i\omega r_*}$. Relations between the two problems In 1975, Subrahmanyan Chandrasekhar and Steven Detweiler discovered a one-to-one mapping between the two equations, leading to the consequence that the spectra corresponding to both potentials are identical. The two potentials can also be written in a single unified form, and the relations between $V^\pm$ (and between $Z^\pm$) follow from it. Reflection and transmission coefficients Here $V^\pm$ is always positive, and the problem is one of reflection and transmission of waves incident from $r_* = +\infty$ towards $r_* = -\infty$. The problem is essentially the same as that of reflection and transmission by a potential barrier in quantum mechanics. Let the incident wave have unit amplitude; then the asymptotic behaviours of the solution are characterized by the reflection amplitude $R(\omega)$ and the transmission amplitude $T(\omega)$. In the condition at the horizon, we have imposed the physical requirement that no waves emerge from the event horizon. The reflection and transmission coefficients are thus defined as $|R|^2$ and $|T|^2$, subject to the condition $|R|^2 + |T|^2 = 1$. Because of the inherent connection between the two equations outlined in the previous section, it turns out that the transmission amplitudes of the two problems coincide and their reflection amplitudes differ only in their phases; consequently, both problems share the same reflection and transmission coefficients. It is clear from the figure for the reflection coefficient that small-frequency perturbations are readily reflected by the black hole, whereas large-frequency ones are absorbed by the black hole.
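A short numerical sketch of the two potentials and the tortoise coordinate used above (standard closed forms assumed, with G = c = 1 and n = (ℓ − 1)(ℓ + 2)/2):

```python
import numpy as np

def tortoise(r, M=1.0):
    """Tortoise coordinate r* = r + 2M ln(r/2M - 1), defined outside the horizon."""
    return r + 2.0 * M * np.log(r / (2.0 * M) - 1.0)

def V_regge_wheeler(r, ell, M=1.0):
    """Axial (Regge-Wheeler) potential for gravitational perturbations."""
    f = 1.0 - 2.0 * M / r
    return f * (ell * (ell + 1) / r**2 - 6.0 * M / r**3)

def V_zerilli(r, ell, M=1.0):
    """Polar (Zerilli) potential."""
    f = 1.0 - 2.0 * M / r
    n = 0.5 * (ell - 1.0) * (ell + 2.0)
    num = 2*n**2*(n + 1)*r**3 + 6*n**2*M*r**2 + 18*n*M**2*r + 18*M**3
    return f * num / (r**3 * (n * r + 3.0 * M) ** 2)

r = np.linspace(2.0 + 1e-3, 60.0, 2000)       # radii outside the horizon r = 2M
for ell in (2, 3):
    print(ell, V_regge_wheeler(r, ell).max(), V_zerilli(r, ell).max())
```

Plotting either potential against tortoise(r) reproduces the single-barrier shape referred to in the text.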
Quasi-normal modes Quasi-normal modes correspond to the pure tones of the black hole. They describe arbitrary but small perturbations, such as an object falling into the black hole, accretion of matter surrounding it, or the last stage of a slightly aspherical collapse. Unlike in the reflection and transmission coefficient problem, quasi-normal modes are characterised by complex-valued frequencies $\omega$, with the sign convention chosen so that the modes decay in time. The required boundary conditions indicate purely outgoing waves at spatial infinity and purely ingoing waves at the horizon. The problem becomes an eigenvalue problem. The quasi-normal modes are of damping type in time, although these waves diverge in space as $r_* \to \infty$ (this is due to the implicit assumption that the perturbation in quasi-normal modes is 'infinite' in the remote past). Again, because of the relation mentioned between the two problems, the spectra of $V^+$ and $V^-$ are identical, and thus it is enough to consider the spectrum of one of them. The problem is simplified by introducing an auxiliary function, which turns it into a nonlinear eigenvalue problem. The solution is found to exist only for a discrete set of values of $\omega$. This equation also implies an associated identity. See also Chandrasekhar–Page equations Teukolsky equations References Astrophysics Eponymous equations of physics General relativity Ordinary differential equations
Regge–Wheeler–Zerilli equations
[ "Physics", "Astronomy" ]
928
[ "Equations of physics", "Eponymous equations of physics", "Astrophysics", "General relativity", "Theory of relativity", "Astronomical sub-disciplines" ]
68,366,509
https://en.wikipedia.org/wiki/No.%20402%20Wing%20RAAF
No. 402 Wing was a Royal Australian Air Force wing. Formed in July 1996, it maintained and provided technical support for Tactical Fighter Group's aircraft. The wing was disbanded in October 1998, with most of its functions being transferred to No. 81 Wing. History No. 402 Wing was formed from No. 481 Wing on 1 July 1996. It was responsible for maintaining and providing technical support for the aircraft operated by the RAAF's Tactical Fighter Group. These mainly comprised the McDonnell Douglas F/A-18 Hornets operated by No. 81 Wing. The wing's establishment was the result of a review conducted to determine the best way of maintaining the Hornets. These tasks had previously been undertaken by No. 481 Wing, which No. 402 Wing replaced. The wing was made up of a headquarters, No. 481 Squadron, Weapons System Support Flight and Field Training Flight. No. 481 Squadron was established at the same time as No. 402 Wing was formed, and included the maintenance elements that had previously been organised into two squadrons in No. 481 Wing. No. 402 Wing was mainly located at RAAF Base Williamtown. No. 402 Wing was disbanded as a result of a decision by the Chief of Air Force that the RAAF's flying squadrons should undertake aircraft maintenance. The wing transferred most of its functions to No. 81 Wing and ceased operations on 31 July 1998. It and No. 481 Squadron were formally disbanded on 30 October 1998. Weapons System Support Flight and Field Training Flight were retained, with the former becoming part of the Tactical Fighter Group and the latter reporting to No. 81 Wing's headquarters. References Citations Works consulted 402 Wing Aircraft maintenance 1996 establishments in Australia 1998 disestablishments in Australia
No. 402 Wing RAAF
[ "Engineering" ]
356
[ "Aircraft maintenance", "Aerospace engineering" ]
68,367,262
https://en.wikipedia.org/wiki/Hurd%E2%80%93Mori%201%2C2%2C3-thiadiazole%20synthesis
The Hurd–Mori 1,2,3-thiadiazole synthesis is a name reaction in organic chemistry that generates 1,2,3-thiadiazoles by the reaction of hydrazone derivatives bearing an N-acyl or N-tosyl group with thionyl chloride. An analogous reaction gives 1,2,3-selenadiazoles by using selenium dioxide instead of thionyl chloride. References Carbon-heteroatom bond forming reactions Name reactions
Hurd–Mori 1,2,3-thiadiazole synthesis
[ "Chemistry" ]
109
[ "Organic reactions", "Name reactions", "Carbon-heteroatom bond forming reactions", "Chemical reaction stubs", "Ring forming reactions" ]
78,513,397
https://en.wikipedia.org/wiki/Thermoplasmatota
Thermoplasmatota is a phylum of Archaea. It is among six other phyla validly published according to the Bacteriological Code. These Archaea can live in acidic environments and have also been found in the South China Sea and in Mediterranean grassland soil. Phylogeny See also List of Archaea genera References External links Archaea Taxa described in 2024
Thermoplasmatota
[ "Biology" ]
79
[ "Archaea", "Microorganisms", "Prokaryotes" ]
78,515,669
https://en.wikipedia.org/wiki/Central%20Brain%20Tumor%20Registry%20of%20the%20United%20States
The Central Brain Tumor Registry of the United States (CBTRUS) is the primary national database of malignant and benign tumors of the brain, "other central nervous system (CNS), tumors of the pituitary and pineal glands, olfactory tumors of the nasal cavity, and brain lymphoma and leukemia." A non-profit, it was established in 1992. External links Official website References Diagnostic radiology Oncology Databases in the United States Health informatics Medical databases
Central Brain Tumor Registry of the United States
[ "Biology" ]
101
[ "Health informatics", "Medical technology" ]
78,515,890
https://en.wikipedia.org/wiki/Weierstrass%20Nullstellensatz
In mathematics, the Weierstrass Nullstellensatz is a version of the intermediate value theorem over a real closed field. It says: Given a polynomial $f(x)$ in one variable with coefficients in a real closed field $F$ and $a < b$ in $F$, if $f(a)f(b) < 0$, then there exists a $c$ in $F$ such that $a < c < b$ and $f(c) = 0$. Proof Since $F$ is real-closed, $F(i)$ is algebraically closed; hence $f(x)$ can be written as $u(x - \alpha_1)\cdots(x - \alpha_n)$, where $u$ is the leading coefficient and $\alpha_1, \ldots, \alpha_n$ are the roots of $f$ in $F(i)$. Since each nonreal root can be paired with its conjugate (which is also a root of $f$), we see that $f$ can be factored in $F[x]$ as a product of linear polynomials and polynomials of the form $(x - p)^2 + q^2$, $q \neq 0$. If $f$ changes sign between $a$ and $b$, one of these factors must change sign. But $(x - p)^2 + q^2$ is strictly positive for all $x$ in any formally real field; hence one of the linear factors $x - \alpha$, $\alpha \in F$, must change sign between $a$ and $b$; i.e., the root $\alpha$ of $f$ satisfies $a < \alpha < b$. References R. G. Swan, Tarski's Principle and the Elimination of Quantifiers at Richard G. Swan Real closed field Theory of continuous functions
Weierstrass Nullstellensatz
[ "Mathematics" ]
239
[ "Theory of continuous functions", "Algebra stubs", "Topology", "Algebra" ]
78,516,937
https://en.wikipedia.org/wiki/Elementary%20Theory%20of%20the%20Category%20of%20Sets
In mathematics, the Elementary Theory of the Category of Sets, or ETCS, is a set of axioms for set theory proposed by William Lawvere in 1964. Although it was originally stated in the language of category theory, as Leinster pointed out, the axioms can be stated without reference to category theory. ETCS is a basic example of structural set theory, an approach to set theory that emphasizes sets as abstract structures (as opposed to collections of elements). Axioms Informally, the axioms are as follows (here, set, function and composition of functions are primitives): Composition of functions is associative and has identities. There is a set with exactly one element. There is an empty set. A function is determined by its effect on elements. A Cartesian product exists for each pair of sets. Given sets $X$ and $Y$, there is a set of all functions from $X$ to $Y$. Given a function $f : X \to Y$ and an element $y$ of $Y$, the pre-image $f^{-1}(y)$ is defined. The subsets of a set $X$ correspond to the functions from $X$ to a fixed two-element set. The natural numbers form a set. (Weak axiom of choice) Every surjection has a right inverse (i.e., a section). The resulting theory is weaker than ZFC. If the axiom schema of replacement is added as another axiom, the resulting theory is equivalent to ZFC. References A post about the paper at the n-Category Café. Clive Newstead, An Elementary Theory of the Category of Sets at the n-Category Café Further reading ETCS in nLab ZFC and ETCS: Elementary Theory of the Category of Sets Tom Leinster, Axiomatic Set Theory 1: Introduction at the n-Category Café How would set theory research be affected by using ETCS instead of ZFC? Axioms of set theory
Elementary Theory of the Category of Sets
[ "Mathematics" ]
357
[ "Axioms of set theory", "Mathematical axioms" ]
78,516,998
https://en.wikipedia.org/wiki/Bazargan%20chronology
Bazargan chronology refers to an ordering of the chapters of the Quran according to their sequence of revelation. It is named after the Iranian scholar Mehdi Bazargan, who enunciated the chronology in 1976 with the publication of his landmark work Sayr-i taḥawwul-i Qurʾān. This chronology is based on a statistical procedure, and suggests that roughly half of the chapters, 55 out of a total of 114, consist of collections of proclamations from various time periods. Bazargan proposed that the length of verses tended to grow continuously over time, without reverting back, and based on this concept he reorganized the text into 'blocks'. In the Bazargan chronology, the 114 chapters of the Quran are divided into 194 blocks, some chapters kept as complete single blocks while others are split into two or more blocks. These blocks are then reorganized roughly according to increasing average verse length. This sequence is suggested to represent the chronological order, with the underlying assumption that the Quran's style, as indicated by verse length, evolved gradually. It is emphasized that this proposed chronology should not be seen as absolute, as it is based on statistical analysis, which offers strong conclusions about averages of collections rather than about individual elements. See also Geschichte des Qorāns Tanzil Citations References Quran Chronology
Bazargan chronology
[ "Physics" ]
258
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
77,175,112
https://en.wikipedia.org/wiki/Hizen%20Porcelain%20Kiln%20Sites
The Hizen Porcelain Kiln Sites refer to Edo period kilns located in the town of Arita and the cities of Takeo and Ureshino, Saga Prefecture, Japan, which were designated National Historic Sites in 1980 and were re-designated as a single collective National Historic Site in 1981. Overview The Hizen Porcelain Kiln Sites are located in former Hizen Province and are important kiln sites for understanding the transition of porcelain production in the development of early Arita ware and Imari ware. At the end of the Edo period, there were more than 100 climbing kilns of various sizes, but most of them have been lost today, and only about 66 kiln sites have been confirmed. The National Historic Site designation covers several of these sites. According to tradition, the Korean potter Yi Sam-pyeong (d. 1655), or Kanagae Sanbee, often considered the father of Arita ware porcelain, first discovered porcelain clay at this location in what is now part of the town of Arita. The first Arita ware kilns were built at Tengudani, in what is now the Shirakawadani neighborhood of the town of Arita, by Yi Sam-pyeong, due to its proximity to the Izumiyama quarry and the availability of water and firewood for fuel. The site was in use from about 1630 to the 1660s. When the kiln was first built, porcelain and earthenware were made together here, but later only porcelain was produced, and it is now Arita's oldest dedicated porcelain kiln. Tengudani was the first of the 66 kiln sites in Arita to be investigated by 20th-century archaeologists, and was excavated in 1965-1970 and again in 1999-2001. It was the first early modern ceramic kiln site to be excavated in Arita, and was a landmark for art history and geology. These excavations uncovered the remains of at least four climbing kilns. The heat of firing caused the chambers inside the kilns to gradually collapse, and replacement kilns were built in succession. The kiln that remains at this site is the best preserved of the four, and was therefore chosen for preservation and public display. Known as the "B" kiln because it was the second to be discovered, it is believed to have been in use between the 1640s and 1650s. At around 70 metres long and with 21 chambers, it was very large for its time. Managed by Saga Domain, the kiln used a lottery system to determine which chamber each porcelain producer would use. Because the kiln was fired from the bottom of the slope, the lower rooms were at risk of becoming too hot, while the higher rooms were often too cold and did not fire properly. Many shards of broken and discarded porcelain pieces have been found around the kiln ruins. A model based on the Tengudani Kiln's climbing kiln is on display at the Arita Town History and Folklore Museum. The Tengudani Kiln Site is about a 20-minute drive from Arita Station on the JR Kyushu Sasebo Line. The Yamabeda Kiln Site was located in the Kuromuta neighborhood, northwest of Arita. It was in operation from the 1590s to the 1660s and at its height had up to 30 workshops with around 300 craftsmen. Excavations conducted in 1972-1975 uncovered the remains of nine kilns on the hills next to the rice paddies, and porcelain shards trace the transition of porcelain from the Imari to the Kokutani styles. The oldest kiln, No. 4, mainly produced ceramic bowls and large plates with iron painting. The kiln had been in operation before porcelain production began. Kiln No. 7 likewise produced both pottery and porcelain, mostly large blue-and-white plates.
Around this time, Arita saw a major turning point in its ceramic industry: as the number of kilns and potters in Arita had increased, raising concerns about the indiscriminate felling of trees for firing the kilns, Saga Domain implemented a policy of consolidating kilns from 1637. While many of the kilns in western Arita that fired ceramics were forced to close, the Yamabeda Kiln is a rare example of a kiln that avoided closure and continued to operate. One of the reasons for this is its specialization in the production of large plates. From kilns No. 3, 6, and 9, blue-and-white porcelain products as well as colored enamel bases were excavated. Large plates were also found, indicating that from the 1640s onwards, large colored enamel plates were being produced in addition to blue-and-white porcelain. The later No. 1 and No. 2 kilns are thought to have produced items for overseas export. The Yamabeda Kiln closed around the same time, in the late 1650s to 1660s, and it is believed that this was due to the shift in the Imari ware production system and the change in the style required. The Haraake site consists of four kilns and a waste dump located in western Arita. It was excavated in 1974-1975 and 1993. Only pottery was found in the lowest layers, whereas a mixture of pottery and porcelain was found in the upper layers. The kiln was founded in the 1600s to 1630s, and is thought to be one of the earliest kilns in Arita to produce porcelain. The Hyakken Kiln was located in former Yamauchi town, now part of the city of Takeo. It was a stepped, multi-chambered climbing kiln that climbs from east to west up the western slope of the Itanokawauchi ridge, and the size of the firing chambers confirmed by excavation surveys was 3.6 meters wide and 1.6 meters deep. The fired products include porcelain such as white porcelain and celadon, but mainly blue-and-white ware, as well as inlaid and two-colored pottery. The kiln is known for its wide variety of products, including bowls, plates, jars, and water jars, and is thought to have been in operation in the first half of the 17th century. The Fudoyama Kiln was located in the Sarayadani neighborhood of the city of Ureshino. Dyed hand plates and celadon and white porcelain shards have been excavated there. See also List of Historic Sites of Japan (Saga) References External links Japan Heritage Toguri Museum of Art home page Historic Sites of Japan History of Saga Prefecture Edo-period sites Hizen Province Japanese pottery kiln sites Arita, Saga Takeo, Saga
Hizen Porcelain Kiln Sites
[ "Chemistry", "Engineering" ]
1,354
[ "Kilns", "Japanese pottery kiln sites" ]
77,175,631
https://en.wikipedia.org/wiki/Kompaneyets%20equation
Kompaneyets equation refers to a non-relativistic, Fokker–Planck type, kinetic equation for the photon number density of photons interacting with an electron gas via Compton scattering, first derived by Alexander Kompaneyets in 1949 and published in 1957 after declassification. The Kompaneyets equation describes how an initial photon distribution relaxes to the equilibrium Bose–Einstein distribution. Kompaneyets pointed out that the radiation field on its own cannot reach the equilibrium distribution, since Maxwell's equations are linear; it needs to exchange energy with the electron gas. The Kompaneyets equation has been used as a basis for the analysis of the Sunyaev–Zeldovich effect. Mathematical description Consider a non-relativistic electron bath at an equilibrium temperature $T_e$, i.e., $k_B T_e \ll m_e c^2$, where $m_e$ is the electron mass. Let there be a low-frequency radiation field that satisfies the soft-photon approximation, i.e., $\hbar\omega \ll m_e c^2$, where $\omega$ is the photon frequency. Then the energy exchange in any collision between a photon and an electron will be small. Assuming homogeneity and isotropy and expanding the collision integral of the Boltzmann equation in terms of the small energy exchange, one obtains the Kompaneyets equation. The Kompaneyets equation for the photon number density $n(\omega, t)$ reads $$\frac{\partial n}{\partial t} = \frac{\sigma_T n_e \hbar}{m_e c}\,\frac{1}{\omega^2}\frac{\partial}{\partial\omega}\left[\omega^4\left(\frac{k_B T_e}{\hbar}\frac{\partial n}{\partial\omega} + n + n^2\right)\right],$$ where $\sigma_T$ is the total Thomson cross-section and $n_e$ is the electron number density; $\lambda = 1/(\sigma_T n_e)$ is the Compton range or the scattering mean free path. As is evident, the equation can be written in the form of a continuity equation in frequency space. If we introduce the rescalings $$x = \frac{\hbar\omega}{k_B T_e}, \qquad y = \int \frac{k_B T_e}{m_e c^2}\,\sigma_T n_e c\,dt,$$ the equation can be brought to the form $$\frac{\partial n}{\partial y} = \frac{1}{x^2}\frac{\partial}{\partial x}\left[x^4\left(\frac{\partial n}{\partial x} + n + n^2\right)\right].$$ The Kompaneyets equation conserves the photon number $$N = \frac{V}{\pi^2 c^3}\int_0^\infty \omega^2\, n\, d\omega,$$ where $V$ is a sufficiently large volume, since the energy exchange between photon and electron is small. Furthermore, the equilibrium distribution of the Kompaneyets equation is the Bose–Einstein distribution for the photon gas, $$n_{\mathrm{eq}} = \frac{1}{e^{(\hbar\omega - \mu)/k_B T_e} - 1},$$ with a chemical potential $\mu$ fixed by the conserved photon number. References Physical cosmology Transport phenomena Partial differential equations Equations of physics
Kompaneyets equation
[ "Physics", "Chemistry", "Astronomy", "Mathematics", "Engineering" ]
407
[ "Transport phenomena", "Physical phenomena", "Astronomical sub-disciplines", "Equations of physics", "Chemical engineering", "Theoretical physics", "Mathematical objects", "Astrophysics", "Equations", "Physical cosmology" ]
77,178,021
https://en.wikipedia.org/wiki/Index%20of%20quality%20engineering%20articles
This is an alphabetical list of articles pertaining specifically to quality engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of engineers. A American Customer Satisfaction Index (ACSI) active listening affinity diagram Automotive Industry Action Group (AIAG) American Society for Quality (ASQ) Audit Appraisal B Bar chart Benchmarking C change management Code of Ethics Continuous quality improvement Cost of Poor Execution (COPE) Cost of quality (CoQ) Cost of poor quality (COPQ) Customer satisfaction research D Define, measure, analyze, improve and control (DMAIC) E European Foundation for Quality Management (EFQM) F Five whys G Groupthink Gantt chart H House of Quality Human reliability assessment (HRA) I Incrementalism J Joseph M. Juran K Kaizen L Lean manufacturing M Malcolm Baldrige National Quality Award Monitoring and evaluation Muda (Japanese term) N National Council on Physical Distribution Management (NCPDM) Next operation as customer (NOAC) Nine windows Nominal group technique O Organizational culture Overall equipment effectiveness (OEE) P Plan, Do, Check, Act (PDCA) Poka-yoke Process decision program chart (PDPC) Process improvement Q Quality assurance (QA) Quality by design (QbD) Quality function deployment (QFD) Quality improvement (QI) Quality management (QM) Quality storyboard R Risk management Root cause analysis S Suppliers, inputs, process, outputs and customers (SIPOC) Six Sigma T Tactical planning U Union of Japanese Scientists and Engineers (JUSE) V Voice of the customer W Warranty Waste X Y Z Zero Defects References Quality engineering Quality engineering Quality engineering topics
Index of quality engineering articles
[ "Engineering" ]
350
[ "Quality engineering" ]
77,180,584
https://en.wikipedia.org/wiki/John%20P.%20Fackler%20Jr.
John P. Fackler Jr. (July 31, 1934 – February 25, 2023) was an American inorganic chemist. John P. Fackler Jr. was born in Toledo, Ohio, on July 31, 1934, to parents John Fackler Sr. and Ruth Eleanor Moehring Fackler. He had two younger brothers. After graduating from DeVilbiss High School in 1952, Fackler enrolled at the Massachusetts Institute of Technology for one year, then transferred to Valparaiso University, where he completed a Bachelor of Arts degree in chemistry, physics, and mathematics. Fackler subsequently obtained a doctorate in inorganic chemistry at MIT in 1960. Fackler began his academic career at the University of California, Berkeley as an assistant professor. He moved to Case Western Reserve University in 1962, where he was named to a Teagle Professorship in 1978. He left Case Western in 1983 to serve as dean of the College of Science at Texas A&M University until 1992. Between 1987 and 2006, Fackler was a distinguished professor of chemistry at Texas A&M. He was granted emeritus status in 2008. For eleven years, Fackler served as editor-in-chief of the academic journal Comments on Inorganic Chemistry. Fackler was awarded a Guggenheim Fellowship in 1976. He received the American Chemical Society Award for Distinguished Service in the Advancement of Inorganic Chemistry in 2001, and was named an inaugural fellow of the ACS in 2009. Fackler was also a fellow of the American Association for the Advancement of Science (1990) and the American Institute of Chemists, as well as a member of the Royal Society of Chemistry and Sigma Xi, among other organizations. Fackler moved to The Woodlands, Texas in 2014, and died there on February 25, 2023, aged 88. References 1934 births 2023 deaths 20th-century American chemists 21st-century American chemists American inorganic chemists People from Toledo, Ohio Chemists from Ohio Massachusetts Institute of Technology alumni Valparaiso University alumni Case Western Reserve University faculty University of California, Berkeley faculty Texas A&M University faculty American university and college faculty deans Fellows of the American Chemical Society Fellows of the American Association for the Advancement of Science Chemistry journal editors
John P. Fackler Jr.
[ "Chemistry" ]
451
[ "American inorganic chemists", "Inorganic chemists" ]
77,181,850
https://en.wikipedia.org/wiki/Asakura%20Sue%20Ware%20Kiln%20Sites
The is a collective designation for a number of archaeological sites containing Kofun period kilns located in the town of Chikuzen, Asakura District, Fukuoka Prefecture, Japan. The sites were designated a National Historic Site of Japan in 2018. Overview The site consists of the Koguma, Yamakuma and Yatsunami kiln groups. Sue pottery is believed to have originated in the 5th or 6th century in the Kaya region of southern Korea, and was brought to Japan by immigrant craftsmen. The earliest centralized production of Sue ware was long believed to have been at the Suemura kilns in southern Osaka Prefecture; however, archaeological excavations at this site in northern Kyushu are challenging this theory. The Yamakuma kiln site is located on the southeastern slope of a hill called Jonyama (Hanatateyama) in former Miwa town, Asakura District, Fukuoka. Excavations were carried out by Kyushu University in 1989, and a total of four kiln sites were identified. Sue ware, cylindrical haniwa, hand-kneaded pottery, and other artifacts have been excavated from each kiln site. The Sue ware consists mainly of jars, vases, and high cups, and is of the "early sueki", or earliest, form of Sue ware. Cylindrical haniwa and Sue ware items that can be traced to these kilns also have been excavated from kofun burial mounds. The Yamakuma kiln site is thought to have been in operation in the first half of the 5th century, and operations appear to have ceased after a relatively short period of time. The Koguma and Yatsunami kiln sites are located nearby. The Koguma site consists of the remains of seven semi-underground kilns, one residence, two workshops, one unknown structure, and two earth pits. When the Yatsunami kiln sites were discovered in 1967, the cross sections of three kilns were exposed on the slope, but they were washed away by a landslide and cannot be seen at present. The excavated remains have revealed the actual situation of the transition from the production of early Kaya-style Sue ware in the first half of the middle Kofun period (first half of the 5th century) to the production of Suemura-style Sue ware in the latter half of the middle Kofun period (second half of the 5th century), and show that this was one of the earliest places to start producing Sue ware on an industrial scale. See also List of Historic Sites of Japan (Fukuoka) References History of Fukuoka Prefecture Chikuzen, Fukuoka Chikuzen Province Historic Sites of Japan Japanese pottery kiln sites
Asakura Sue Ware Kiln Sites
[ "Chemistry", "Engineering" ]
525
[ "Kilns", "Japanese pottery kiln sites" ]
77,182,528
https://en.wikipedia.org/wiki/Wood%20degradation
Wood degradation is a complex process influenced by various biological, chemical, and environmental factors. It significantly impacts the durability and longevity of wood products and structures, necessitating effective preservation and protection strategies. Biological degradation primarily involves fungi, bacteria, and insects. Fungi are the most significant agents, causing decay through the breakdown of wood's structural components, such as cellulose, hemicellulose, and lignin. Chemical degradation is likewise significant. Degradation of wood in a concrete matrix is mostly attributed to the effect of the alkaline environment and the hydrolysis of lignin and hemicellulose; elevated temperatures may accelerate the degradation of the cell walls. Prevention Applying preservatives, such as chromated copper arsenate (CCA) or borates, can protect wood from biological and chemical degradation. Coatings, such as paints, varnishes, and water repellents, provide a barrier against moisture and UV radiation. Advanced coatings containing UV stabilizers and biocides offer enhanced protection. References Wood Woodworking Materials degradation Fungi and humans Environmental chemistry Building materials Structural engineering
Wood degradation
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology", "Environmental_science" ]
228
[ "Structural engineering", "Fungi", "Building engineering", "Environmental chemistry", "Materials science", "Architecture", "Construction", "Fungi and humans", "Materials", "Civil engineering", "nan", "Materials degradation", "Humans and other species", "Matter", "Building materials" ]
77,183,860
https://en.wikipedia.org/wiki/Zeldovich%20regularization
Zeldovich regularization refers to a regularization method to calculate divergent integrals and divergent series, first introduced by Yakov Zeldovich in 1961. Zeldovich was originally interested in calculating the norm of the Gamow wave function, which is divergent since there is an outgoing spherical wave. Zeldovich regularization uses a Gaussian-type regularization and is defined, for divergent integrals, by $$\int_0^\infty f(x)\, dx := \lim_{\alpha \to 0^+} \int_0^\infty e^{-\alpha x^2} f(x)\, dx$$ and, for divergent series, by $$\sum_{n=1}^\infty a_n := \lim_{\alpha \to 0^+} \sum_{n=1}^\infty e^{-\alpha n^2} a_n.$$ See also Abel's theorem Borel summation References Summability methods Concepts in physics
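A small sketch of the series definition above (the choice of Grandi's series and the sweep of regulator values are illustrative, and the function name is hypothetical): the Gaussian regulator assigns 1 − 1 + 1 − ... the value 1/2, consistent with its Abel sum.

```python
import numpy as np

# Zeldovich regularization of a divergent series: S = lim_{a->0+} sum_n exp(-a n^2) a_n.
# Demonstrated on Grandi's series 1 - 1 + 1 - ... (a_n = (-1)^n, n = 0, 1, 2, ...).
def zeldovich_sum(terms, alpha):
    n = np.arange(len(terms))
    return np.sum(np.exp(-alpha * n ** 2) * terms)

terms = (-1.0) ** np.arange(200000)          # enough terms that the regulator cuts off first
for alpha in (1e-2, 1e-4, 1e-6, 1e-8):
    print("alpha = %.0e  sum = %.6f" % (alpha, zeldovich_sum(terms, alpha)))
```

As alpha shrinks, the partial results approach 0.5, illustrating how the Gaussian damping factor turns an oscillating divergent series into a convergent limit.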
Zeldovich regularization
[ "Physics", "Mathematics" ]
117
[ "Sequences and series", "Summability methods", "Mathematical structures", "nan" ]
66,950,084
https://en.wikipedia.org/wiki/Straintronics
Straintronics (from strain and electronics) is the study of how folds and mechanically induced stresses in a layer of two-dimensional material can change its electrical properties. It is distinct from twistronics in that the latter involves changes in the angle between two layers of 2D material. However, in such multi-layers, if strain is applied to only one of the layers (a situation called heterostrain), strain can have a similar effect to twist in changing electronic properties. It is also distinct from, but similar to, the piezoelectric effects which are created by bending, twisting, or squeezing of certain materials. References Superconductivity
Straintronics
[ "Physics", "Materials_science", "Engineering" ]
129
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
66,962,929
https://en.wikipedia.org/wiki/McMaster%20Manufacturing%20Research%20Institute
The McMaster Manufacturing Research Institute (MMRI) is a major manufacturing research facility affiliated with the Department of Mechanical Engineering at McMaster University in Hamilton, Ontario. The institute opened in 2001, and has an endowed research chair affiliated with it. It is a member of SONAMI (the Southern Ontario Network for Advanced Manufacturing Innovation) along with centers at Niagara College, Mohawk College, and Sheridan College. In 2020, after the impact of the COVID-19 pandemic became known, the MMRI collaborated with Hamilton Health Sciences to develop face shields and help with other types of PPE manufacturing. References Manufacturing companies of Canada Organizations based in Hamilton, Ontario McMaster University Manufacturing companies based in Ontario
McMaster Manufacturing Research Institute
[ "Engineering" ]
140
[ "Mechanical engineering stubs", "Mechanical engineering" ]
74,170,779
https://en.wikipedia.org/wiki/Toroidal%20solenoid
The toroidal solenoid was an early 1946 design for a fusion power device designed by George Paget Thomson and Moses Blackman of Imperial College London. It proposed to confine a deuterium fuel plasma to a toroidal (donut-shaped) chamber using magnets, and then heat it to fusion temperatures using radio frequency energy in the fashion of a microwave oven. It is notable for being the first such design to be patented, with a secret patent filed on 8 May 1946 and granted in 1948. A critique by Rudolf Peierls noted several problems with the concept. Over the next few years, Thomson continued to suggest starting an experimental effort to study these issues, but was repeatedly denied as the underlying theory of plasma diffusion was not well developed. When Peter Thonemann suggested similar concepts that included a more practical heating arrangement, John Cockcroft began to take the concept more seriously, establishing small study groups at Harwell. Thomson adopted Thonemann's concept, abandoning the radio frequency system. When the patent had still not been granted in early 1948, the Ministry of Supply inquired about Thomson's intentions. Thomson explained the problems he had getting a program started and that he did not want to hand off the rights until that was clarified. As the department directing the UK nuclear program, the Ministry quickly forced Harwell's hand to provide funding for Thomson's program. Thomson then released his rights to the patent, which was granted late that year. Cockcroft also funded Thonemann's work, and with that, the UK fusion program began in earnest. After the news furor over the Huemul Project in February 1951, significant funding was released and led to rapid growth of the program in the early 1950s, and ultimately to the ZETA reactor of 1958. Conceptual development The basic understanding of nuclear fusion was developed during the 1920s as physicists explored the new science of quantum mechanics. George Gamow's 1928 work on quantum tunnelling demonstrated that nuclear reactions could take place at lower energies than classical theory predicted. Using this theory, in 1929 Fritz Houtermans and Robert Atkinson demonstrated that expected reaction rates in the core of the Sun supported Arthur Eddington's 1920 suggestion that the Sun is powered by fusion. In 1934, Mark Oliphant, Paul Harteck and Ernest Rutherford were the first to achieve fusion on Earth, using a particle accelerator to shoot deuterium nuclei into a metal foil containing deuterium, lithium or other elements. This allowed them to measure the nuclear cross section of various fusion reactions, and they determined that the deuterium-deuterium reaction occurred at a lower energy than other reactions, peaking at about 100,000 electronvolts (100 keV). This energy corresponds to the average energy of particles in a gas heated to a billion Kelvin. Materials heated beyond a few tens of thousands of Kelvin dissociate into their electrons and nuclei, producing a gas-like state of matter known as plasma. In any gas the particles have a wide range of energies, normally following Maxwell–Boltzmann statistics. In such a mixture, a small number of particles will have much higher energy than the bulk. This leads to an interesting possibility: even at temperatures well below 100,000 eV, some particles will randomly have enough energy to undergo fusion. Those reactions release huge amounts of energy.
If that energy can be captured back into the plasma, it can heat other particles to that energy as well, making the reaction self-sustaining. In 1944, Enrico Fermi calculated this would occur at about 50,000,000 K. Confinement Taking advantage of this possibility requires the fuel plasma to be held together long enough that these random reactions have time to occur. Like any hot gas, the plasma has an internal pressure and thus tends to expand according to the ideal gas law. For a fusion reactor, the problem is keeping the plasma contained against this pressure; any known physical container would melt at temperatures in the thousands of Kelvin, far below the millions needed for fusion. A plasma is electrically conductive, and is subject to electric and magnetic fields. In a magnetic field, the electrons and nuclei orbit the magnetic field lines. A simple confinement system is a plasma-filled tube placed inside the open core of a solenoid. The plasma naturally wants to expand outwards to the walls of the tube, as well as move along it, towards the ends. The solenoid creates a magnetic field running down the centre of the tube, which the particles will orbit, preventing their motion towards the sides. Unfortunately, this arrangement does not confine the plasma along the length of the tube, and the plasma is free to flow out the ends. Initial design The obvious solution to this problem is to bend the tube, and solenoid, around to form a torus (a ring or doughnut shape). Motion towards the sides remains constrained as before, and while the particles remain free to move along the lines, in this case they will simply circulate around the long axis of the tube. But, as Fermi pointed out, when the solenoid is bent into a ring, the electrical windings of the solenoid would be closer together on the inside than the outside. This would lead to an uneven field across the tube, and the fuel would slowly drift out of the centre. Some additional force needs to counteract this drift, providing long-term confinement. Thomson began development of his concept in February 1946. He noted that this arrangement caused the positively charged fuel ions to drift outward more rapidly than the negatively charged electrons. This would result in a negatively charged region in the center of the chamber developing over a short period. This net negative charge would then produce an attractive force on the ions, keeping them from drifting too far from the center, and thus preventing them from drifting to the walls. It appeared this could provide long-term confinement. This leaves the issue of how to heat the fuel to the required temperatures. Thomson proposed injecting a cool plasma into the torus and then heating it with radio frequency signals beamed into the chamber. The electrons in the plasma would be "pumped" by this energy, transferring it to the ions through collisions. If the chamber held a plasma with densities on the order of 10¹⁴ to 10¹⁵ nuclei/cm³, it would take several minutes to reach the required temperatures. Filing a patent In early March, Thomson sent a copy of his proposal to Rudolf Peierls, then at the University of Birmingham. Peierls immediately pointed out a concern: both Peierls and Thomson had been to meetings at Los Alamos in 1944 where Edward Teller held several informal talks, including the one in which Fermi outlined the basic conditions needed for fusion. This was in the context of an H-bomb, or "the super" as it was then known.
Peierls noted that the US might claim priority on such information and consider it highly secret, which meant that while Thomson was privy to the information, it was unlikely others at Imperial were. Considering the problem, Thomson decided to attempt to file a patent on the concept. This would ensure the origins of the concepts would be recorded, and prove that the ideas were due to efforts in the UK and not his previous work on the atom bomb. At the time, Thomson was not concerned with establishing personal priority for the concept nor generating income from it. At his suggestion, on 26 March 1946 they met with Arthur Block of the Ministry of Supply (MoS), which led to B.L. Russel, the MoS' patent agent, beginning to write a patent application that would be owned entirely by the government. Peierls' concerns Peierls then followed up with a lengthy critique of the concept, noting three significant issues. The major concern was that the system as a whole used a toroidal field to confine the electrons, and the resulting electric field to confine the ions. Peierls pointed out that this "cross field" would cause the particles to be forced across the magnetic lines due to the right-hand rule, causing the electrons to orbit around the chamber in the poloidal direction, eliminating the area of increased electrons in the center, and thereby allowing the ions to drift to the walls. Using Thomson's own figures for the conditions in an operating reactor, Peierls demonstrated that the resulting neutralized region would extend essentially all the way to the walls, to within less than the orbital radius of the electrons in the field. There would be no confinement of the ions. He also included two additional concerns. One involved the issue of the deuterium fuel ions impacting the walls of the chamber and the effects that would have, and the other that having electrons leave the plasma would cause an ion to be forced out to maintain charge balance, which would quickly "clean up" all of the gas in the chamber. Pinch emerges Thomson was not terribly concerned about the two minor problems but accepted that the primary one about the crossed fields was a serious issue. Considering the issue, a week later he wrote back with a modified concept. In this version, the external magnets producing the toroidal field were removed, and confinement was instead provided by running a current through the plasma. He proposed inducing this current using radio signals injected through slots cut into the torus at spacings that would create a wave moving around the torus, similar to the system used in linear accelerators to accelerate electrons. A provisional patent was filed on 8 May 1946, updated to use the new confinement system. In the patent, Thomson noted that the primary problem would be overcoming energy losses through bremsstrahlung. He calculated that a plasma density of 10¹⁵ nuclei/cm³ would remain stable long enough for the energy of the pumped electrons to heat the D fuel to the required 100 keV over a period of several minutes. Although the term "pinch effect" is not mentioned, the description was, apart from the current-generation concept, similar to the pinch machines that would become widespread in the 1950s. Further criticism Thomson was then sent to New York City as part of the British delegation to the United Nations Atomic Energy Commission and did not return until late in the year.
After he returned, in January 1947, John Cockcroft called a meeting at Harwell to discuss his ideas with a group including Peierls, Moon and Sayers from Birmingham University, Tuck from the Clarendon Laboratory at Oxford University, and Skinner, Frisch, Fuchs, French and Bretscher from Harwell. Thomson described his concept, including several possible ways to drive the current. Peierls reiterated his earlier concerns, mentioning the observations by Mark Oliphant and Harrie Massey, who had worked with David Bohm on isotopic separation at Berkeley. Bohm had observed greatly increased rates of diffusion well beyond what classical diffusion would suggest, today known as Bohm diffusion. If this was inherent to such designs, Peierls suggested, there was no way the device would work. He then added a highly prescient statement that there might be further unknown instabilities that would ruin confinement. Peierls concluded by suggesting initial studies on the pinch effect be carried out by Moon in Birmingham, where Moon had some experience in these sorts of devices and especially because Sayers was already planning experiments with powerful spark discharges in deuterium. There is no record that this work was carried out, although theoretical studies on the behaviour of plasma in a pinch were undertaken. Early experiments The main outcome of the meeting was to introduce Thomson to the wirbelrohr, a new type of particle accelerator built in 1944 in Germany. The wirbelrohr used a cyclotron-like arrangement to accelerate the electrons in a plasma, which its designer, Max Steenbeck, believed would cause them to "break away" from the ions and accelerate to very high speeds. The parallels between this device and Thomson's concept were obvious, but Steenbeck's acceleration mechanism was novel and presented a potentially more efficient heating system. When he returned to London after the meeting, Thomson had two PhD students put on the project, with Alan Ware tasked with building a wirbelrohr and Stanley Cousins starting a mathematical study on diffusion of plasma in a magnetic field. Ware built a device using a 3 cm tube bent around into a 25 cm wide torus. Using a wide variety of gas pressures and currents up to 13,000 amps, Ware was able to show some evidence of the pinching of the plasma, but failed, as had the Germans, to find any evidence of the break-away electrons. With this limited success, Ware and Cousins built a second, 40 cm device operating at up to 27,000 amps. Once again, no evidence of electron break-away was seen, but this time a new high-speed rotating-mirror camera directly imaged the plasma during the discharge and conclusively showed that the plasma was indeed being pinched. Classification concerns While Cousins and Ware were beginning their work, in April 1947 Thomson filed a more complete patent application. This described a large, wide torus with many ports for injecting and removing gas and for injecting the radio frequency energy to drive the current. The entire system was then placed within a large magnet that produced a moderate 0.15 T vertical magnetic field across the entire torus, which kept the electrons confined. He predicted that a power input of 1.9 MW would be needed and calculated that the D-D and D-T reactions would generate 9 MW of fusion energy, of which 1.9 MW was in the form of neutrons.
He suggested that the neutrons could be used as a power source, but also that if the system was surrounded by natural uranium, mostly ²³⁸U, the neutrons would transmute it into plutonium-239, a major component of atomic bombs. It was this last part that raised new concerns. If, as Thomson described, one could make a relatively simple device that could produce plutonium, there was an obvious nuclear security concern and such work would need to be secret. Neither Thomson nor Harwell was happy performing secret work at the university. Considering the problem, Thomson suggested moving this work to RAF Aldermaston. Associated Electrical Industries (AEI) was outgrowing its existing labs in Rugby and Trafford Park, and had already suggested building a new secure lab at Aldermaston. AEI was looking to break into the emerging nuclear power field, and its director of research, Thomas Allibone, was a friend of Thomson's. Allibone strongly supported Thomson's suggestion, and further backing was received from Nobel winner James Chadwick. Cockcroft, on the other hand, believed it was too early to start the large program Thomson was suggesting, and continued to delay. Thonemann's concept Around the same time, Cockcroft learned of similar work carried out independently by Peter Thonemann at Clarendon, triggering a small theoretical program at Harwell to consider it. But all suggestions of a larger development program continued to be rejected. Thonemann's concept was to replace the radio frequency injection used by Thomson and arrange the reactor like a betatron, that is, wrapping the torus in a large magnet and using its field to induce a current in the torus in a fashion similar to an electrical transformer. Betatrons had a natural limitation: the number of electrons in them was limited by their mutual self-repulsion, known as the space charge limit. Some had suggested introducing a gas to the chamber; when ionized by the accelerated electrons, the leftover ions would produce a positive charge that would help neutralize the chamber as a whole. Experiments to this end instead showed that collisions between the electrons and ions scattered the electrons so rapidly that the number of electrons remaining was actually lower than before. This effect, however, was precisely what was desired in a fusion reactor, where the collisions would heat the deuterium ions. At a chance meeting at Clarendon, Thonemann ended up describing his idea to Thomson. Thonemann was not aware he was talking to Thomson, nor of Thomson's work on similar ideas. Thomson followed up with Skinner, who strongly supported Thonemann's concept over Thomson's. Skinner then wrote a paper on the topic, "Thermonuclear Reactions by Electrical Means", and presented it to the Atomic Energy Commission on 8 April 1948. He clearly pointed out where the unknowns were in the concepts, and especially the possibility of destructive instabilities that would ruin confinement. He concluded that it would be "useless to do much further planning" before further study on the instability issues. It was at this point that a curious bit of legality came into the events. In February 1948, Thomson's original patent filing had not been granted, as the Ministry of Supply was unsure of his intentions regarding assignment of the rights. Blackman was ill with malaria in South Africa, and the issue was put off for a time. It was raised again in May when he returned, resulting in a mid-July meeting.
Thomson complained that Harwell was not supporting their efforts, and that as none of this was classified, he wanted to remain open to turning to private funding. In that case, he was hesitant to assign the rights to the Ministry. The Ministry, which was in charge of the nuclear labs including Harwell, quickly arranged for Cockcroft to fund Thomson's development program. The program was approved in November, and the patent was assigned to the Ministry by the end of the year. Move to AEI The work on fusion at Harwell and Imperial remained relatively low-level until 1950–51, when two events occurred that changed the nature of the program significantly. The first was the January 1950 confession by Klaus Fuchs that he had been passing atomic information to the Soviets. His confession led to immediate and sweeping classification of almost anything nuclear-related. This included all fusion-related work, as the previous fears about the possibility of using fusion as a neutron source to produce plutonium now seemed like a serious issue. The earlier plans to move the team from Imperial were put into effect immediately, with the AEI labs being set up at the former RAF Aldermaston and opening in April. This lab soon became the Atomic Weapons Research Establishment. The second was the February 1951 announcement that Argentina had successfully produced fusion in its Huemul Project. Physicists around the world quickly dismissed it as impossible, which was revealed to be the case by 1952. However, it also made politicians aware of the concept of fusion and its potential as an energy source. Physicists working on the concept suddenly found themselves able to talk to high-ranking politicians, who proved rather receptive to increasing their budgets. Within weeks, programs in the US, UK and USSR were seeing dramatic expansion. By the summer of 1952, the UK fusion program was developing several machines based on Thonemann's overall design, and Thomson's original RF concept was put aside. Notes References Citations Bibliography Magnetic confinement fusion devices Nuclear power in the United Kingdom Nuclear technology in the United Kingdom Physics
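To make the Maxwell–Boltzmann tail argument in the Conceptual development section concrete, a short sketch (the temperatures and the 100 keV threshold are illustrative figures echoing the article, not data from it): for a threshold energy E0, the fraction of a Maxwellian gas above E0 is erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x), with x = E0/(kB*T).

```python
import math

# Fraction of particles in a Maxwell-Boltzmann gas with kinetic energy above E0:
# P(E > E0) = erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x), where x = E0 / (k_B T).
def fraction_above(E0_keV, T_keV):
    x = E0_keV / T_keV
    return math.erfc(math.sqrt(x)) + 2.0 * math.sqrt(x / math.pi) * math.exp(-x)

# Illustrative temperatures, expressed (as in the article) in energy units;
# 1 keV corresponds to roughly 11.6 million Kelvin.
for T in (1.0, 5.0, 10.0):
    print("T = %4.1f keV: fraction above 100 keV = %.3e" % (T, fraction_above(100.0, T)))
```

Even at several keV, only a tiny fraction of the fuel sits near the 100 keV peak of the D-D cross section, which is why both bulk heating and long confinement times are needed.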
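Similarly, the gyro-orbits described under Confinement can be sized with the one-line formula r = m*v_perp/(q*B). The sketch below (the 0.15 T field echoes the figure quoted for Thomson's 1947 design, while the 100 keV particle energy and the non-relativistic treatment are simplifying assumptions) shows that ions circle the field lines on far larger orbits than electrons.

```python
import math

# Larmor (gyro-)radius r = m * v_perp / (q * B) for particles orbiting field lines.
e   = 1.602e-19      # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
m_d = 3.344e-27      # deuteron mass, kg
B   = 0.15           # magnetic field, T (figure quoted for the 1947 design)
E_J = 100e3 * e      # 100 keV in joules (illustrative)

def larmor_radius(mass, charge, energy):
    # Non-relativistic speed from kinetic energy; at 100 keV an electron is mildly
    # relativistic, so its true radius is somewhat larger than this estimate.
    v_perp = math.sqrt(2.0 * energy / mass)
    return mass * v_perp / (charge * B)

print("deuteron gyroradius: %.3f m" % larmor_radius(m_d, e, E_J))
print("electron gyroradius: %.4f m" % larmor_radius(m_e, e, E_J))
```

The deuteron orbit comes out near half a metre, comparable to the machine itself, while the electron orbit is millimetres; this asymmetry underlies the charge-separation and drift arguments in the article.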
Toroidal solenoid
[ "Chemistry" ]
3,876
[ "Particle traps", "Magnetic confinement fusion devices" ]
74,173,318
https://en.wikipedia.org/wiki/Ambident%20%28chemistry%29
In chemistry, ambident describes a molecule or group that has two alternative and interacting reaction sites, to either of which a bond may be made during a reaction. Ambident dienophile Ambident dienophile 57 reacts with DAPC 54 at the cyclobutene π-bond to produce ligand 58; in contrast, the related ambident dienophile 59 reacts with DAPC 54 at the naphthoquinone π-center to produce adduct 60 (lack of shielding of the methylene protons supports the stereochemical assignment). Ambident nucleophile An ambident nucleophile is an anionic nucleophile that exhibits resonance delocalization of its negative charge over two unlike atoms, or over two like but non-equivalent atoms. Enolate ions are ambident nucleophiles. References Physical organic chemistry
Ambident (chemistry)
[ "Chemistry" ]
187
[ "Physical organic chemistry" ]
74,174,558
https://en.wikipedia.org/wiki/Fanzor
The Fanzor (Fz) protein is a eukaryotic, RNA-guided DNA endonuclease, which means it is a type of DNA-cutting enzyme that uses RNA to target genes of interest. It was recently discovered and has been explored in a number of studies. In bacteria, RNA-guided DNA endonuclease systems, such as the CRISPR/Cas system, serve as an immune system to prevent infection by cutting viral genetic material. Currently, CRISPR/Cas9-mediated DNA cleavage has extensive applications in biological research, and wide-reaching medical potential in human gene editing. Fanzor belongs to the OMEGA system. Evolutionarily, it shares a common ancestor, OMEGA TnpB, with the CRISPR/Cas12 system. Due to the shared ancestry between the OMEGA system and the CRISPR system, the protein structure and DNA cleavage function of Fanzor and Cas12 remain largely conserved. Combined with the widespread presence of Fanzor across the diverse genomes of different eukaryotic species, this raises the possibility of OMEGA Fanzor being an alternative to the CRISPR/Cas system with better efficiency and compatibility in other complex eukaryotic organisms, such as mammals. Fanzor as a potential human genome editor Due to its eukaryotic origin, the OMEGA Fanzor system may have some advantages over the better studied CRISPR/Cas gene editor in terms of human genome editing applications. In a CRISPR/Cas9 system, Cas9 proteins are guided by the guide RNA (gRNA) and protospacer adjacent motif (PAM) for DNA cleavage. Interestingly, Fanzor genes in the soil fungus S. punctatus also contain non-coding sequences called ωRNA. Similar to CRISPR/Cas9, the Fanzor protein has been shown to cleave DNA in test tubes under the guidance of ωRNA and a target-adjacent motif (TAM). In human cells, the Fanzor protein of Spizellomyces punctatus was successfully tested and shown to cleave DNA effectively. However, its efficiency is lower compared to the closely related CRISPR/Cas12a system. By modifying and tweaking the ωRNA and the amino acid sequence, a second version of the S. punctatus Fanzor protein with improved cleavage efficiency - comparable to that of the CRISPR/Cas12a system - was engineered. This shows that, with better modifications and more research, OMEGA Fanzor has the potential to match the CRISPR system in human genome editing in the future. Clinical and Biotechnological Significance Studies conclude that Fanzor has great potential for efficient human genome editing with a lower chance of being attacked by the immune system. For example, Fanzor could be used in personalized cancer treatments where the patient's own T-cells - important cells of the immune system that recognize and fight foreign pathogens - are edited in order to recognize and destroy cancer cells. In the field of regenerative medicine, it offers hope for an application in stem cell therapy to treat many diseases of genetic origin, such as type 1 diabetes or neurodegenerative diseases. Furthermore, Fanzor could potentially be used for genome editing in eggs and sperm for disease prevention and infertility treatment. However, the intervention in such cells' DNA comes with risks and requires strict ethical guidelines. One major advantage of Fanzor in comparison to the CRISPR/Cas9 system is its small size. Therefore, it can be delivered with viral vectors, modified viruses engineered to safely deliver genetic material, such as adenoviruses.
Adenoviruses are commonly used in medical applications such as gene delivery and vaccines. However, researchers caution that further research is necessary to improve the editing efficiency and precision. Beyond applications in human cells, Fanzor is a prospective tool for targeted genome editing in plants, owing to the aforementioned advantage of the protein's small size. The nutrient content, disease resistance, and affordability of crops could thereby be improved. Moreover, with regard to current and emerging challenges caused by climate change, crops could be adapted to better endure stress factors such as drought, salinity and rising temperatures. References Deoxyribonucleases Eukaryote proteins Genetic engineering Genome editing
Fanzor
[ "Chemistry", "Engineering", "Biology" ]
896
[ "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Molecular biology" ]
74,175,376
https://en.wikipedia.org/wiki/Type%20and%20cotype%20of%20a%20Banach%20space
In functional analysis, the type and cotype of a Banach space are a classification of Banach spaces through probability theory and a measure of how far a Banach space is from being a Hilbert space. The starting point is the Pythagorean identity for orthogonal vectors $(e_k)_{k=1}^{n}$ in Hilbert spaces, $\left\|\sum_{k=1}^{n} e_k\right\|^2 = \sum_{k=1}^{n} \|e_k\|^2$. This identity no longer holds in general Banach spaces; however, one can introduce a notion of orthogonality probabilistically with the help of Rademacher random variables, for which reason one also speaks of Rademacher type and Rademacher cotype. The notion of type and cotype was introduced by the French mathematician Jean-Pierre Kahane. Definition Let $X$ be a Banach space and $(\varepsilon_i)_{i \geq 1}$ be a sequence of independent Rademacher random variables, i.e. $P(\varepsilon_i = 1) = P(\varepsilon_i = -1) = 1/2$ and $E[\varepsilon_i \varepsilon_j] = 0$ for $i \neq j$ and $E[\varepsilon_i^2] = 1$. Type $X$ is of type $p$ for $1 \leq p \leq 2$ if there exists a finite constant $C \geq 0$ such that $\left(E\left\|\sum_{i=1}^{n} \varepsilon_i x_i\right\|^p\right)^{1/p} \leq C \left(\sum_{i=1}^{n} \|x_i\|^p\right)^{1/p}$ for all finite sequences $(x_i)_{i=1}^{n} \subset X$. The sharpest constant is called the type constant and denoted $T_p(X)$. Cotype $X$ is of cotype $q$ for $2 \leq q < \infty$ if there exists a finite constant $C \geq 0$ such that $\left(\sum_{i=1}^{n} \|x_i\|^q\right)^{1/q} \leq C \left(E\left\|\sum_{i=1}^{n} \varepsilon_i x_i\right\|^q\right)^{1/q}$, respectively, for $q = \infty$, $\max_{1 \leq i \leq n} \|x_i\| \leq C\, E\left\|\sum_{i=1}^{n} \varepsilon_i x_i\right\|$, for all finite sequences $(x_i)_{i=1}^{n} \subset X$. The sharpest constant is called the cotype constant and denoted $C_q(X)$. Remarks By taking the $p$-th resp. $q$-th root, one gets the inequalities expressed in the Bochner norm $\left\|\sum_i \varepsilon_i x_i\right\|_{L^p(\Omega;X)}$. Properties Every Banach space is of type $1$ (follows from the triangle inequality). A Banach space is of type $2$ and cotype $2$ if and only if the space is isomorphic to a Hilbert space. If a Banach space: is of type $p$ then it is also of type $p'$ for $1 \leq p' \leq p$. is of cotype $q$ then it is also of cotype $q'$ for $q \leq q' \leq \infty$. is of type $p$ for $1 < p \leq 2$, then its dual space $X^*$ is of cotype $q$ with $1/p + 1/q = 1$ (conjugate index). Further it holds that $C_q(X^*) \leq T_p(X)$. Examples The spaces $L^p$ for $1 \leq p \leq 2$ are of type $p$ and cotype $2$; this means $L^1$ is of type $1$, $L^2$ is of type $2$ and so on. The spaces $L^p$ for $2 \leq p < \infty$ are of type $2$ and cotype $p$. The space $L^\infty$ is of type $1$ and cotype $\infty$. Literature References Functional analysis Banach spaces
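A short worked derivation (a sketch using only the independence of the Rademacher signs and the inner product of a Hilbert space): it shows why a Hilbert space has type 2 and cotype 2 with constants equal to 1, the extreme case in the characterization above.

```latex
% In a Hilbert space H, expand the squared norm and use E[eps_i eps_j] = delta_ij:
\begin{aligned}
E\Big\|\sum_{i=1}^{n} \varepsilon_i x_i\Big\|^2
  &= \sum_{i,j=1}^{n} E[\varepsilon_i \varepsilon_j]\,\langle x_i, x_j\rangle
   = \sum_{i=1}^{n} \|x_i\|^2 .
\end{aligned}
% Hence the type-2 and cotype-2 inequalities both hold with equality (C = 1),
% recovering the probabilistic analogue of the Pythagorean identity.
```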
Type and cotype of a Banach space
[ "Mathematics" ]
389
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
74,176,783
https://en.wikipedia.org/wiki/Waterloopkundig%20Laboratorium
The (Hydraulic Research Laboratory) was an independent Dutch scientific institute specialising in hydraulics and hydraulic engineering. The laboratory was established in Delft from 1927, moving to a new location in the city in 1973. The institute later became known as WL | Delft Hydraulics. In 2008, the laboratory was incorporated into the international nonprofit Deltares institute. Purpose The Hydraulic Laboratory was classified by the Dutch Government as a and was tasked with acquiring, generating, and disseminating knowledge on hydraulics and hydraulic engineering. The laboratory conducted research into the causes of changes in the course of rivers, estuaries, and coasts, and the possible influences on them due to hydraulic engineering activities, along with a range of studies on topics such as dredging, wave action and coastal morphodynamics. The laboratory played a significant advisory role in the conception, design, and implementation of the Zuiderzee Works and the Delta Works, along with several international projects. History The laboratory was established in 1927 by Rijkswaterstaat, under the directorship of Professor ir. J.Th. Thijsse (1893-1984). It was initially located in the basement of the Civil Engineering Department building at Delft University of Technology. Thijsse's role on the Zuiderzee State Commission had introduced him to hydrodynamic model research, an innovative approach to understanding the dynamics of water. In 1927, both Rijkswaterstaat and Delft University of Technology began incorporating this research methodology, prompting the establishment of the laboratory. The impetus for the formation of the laboratory began in the 1920s, and lay in the design of the sluices for the Afsluitdijk, a significant project requiring extensive research and experimentation. The task was initially assigned to Professor Theodor Rehbock at the (river construction laboratory) at the Technical University of Karlsruhe, a major institute in the field of hydraulic engineering research at the time. The results of this investigation were documented in a report which was published in 1931. This report was subject to review by Thijsse, who advised the Dutch authorities on the need for additional research of this type, not just for the Zuiderzee Works, but also for other projects across the Netherlands. This recommendation precipitated the decision to establish a laboratory similar to that in Karlsruhe, to serve the Netherlands. Thijsse spearheaded the initial research at the newly formed laboratory and documented the findings in a follow-up report to Rehbock's original study. To facilitate third-party contract research, such as work for Rijkswaterstaat and international schemes, it was decided that the laboratory would operate independently from the Delft University of Technology, and be established as a financially autonomous foundation, with its board appointed from university staff, major consultants, and representatives from Rijkswaterstaat. Experiments into the behaviour of irregular waves had been undertaken in the Netherlands since 1920, with initial experiments on irregular wave behaviour in wind tunnels. This pioneering research, including investigations into wave run-up, led to the construction of a specialised wind wave flume at the laboratory in 1933. Unprecedented at the time of construction, the flume boasted dimensions of 25 metres in length, 4 metres in width, and a maximum water depth of 0.45 metres. 
Subsequently, in order to better satisfy the necessary conditions for wave height and period, the flume was extended to 50 metres in length, and fitted with a monochromatic wave generator. These enhancements enabled a wider variety of research projects, including studies on wave overtopping, the stability of rubble-mound breakwaters, wave impact forces, and the stability of floating structures. By the time of World War II, research had extended into model investigations of wave generation, with outcomes corroborating prototype data collected by Harald Sverdrup and Walter Munk. In 1969, new wave flumes with typical widths of 8 metres were installed in the laboratory in order to permit modelling and testing of breakwaters and dikes whilst simulating arbitrary angles of wave attack. The previously available flume widths of 4 metres had proved too small for this purpose, and the new flumes therefore provided the laboratory with the ability to model and test the performance of significant coastal and river engineering structures. In 1973, the laboratory moved from its location in the centre of Delft to a new location at the most southern end of the Delft Technological University campus, becoming known locally as the (Thijsse yard). Throughout its history, the laboratory undertook national and international research on numerous civil and hydraulic engineering subjects including dredging technology, density issues, pumps, and detailed structural studies on locks and weirs. International projects included the Belgian Port of Zeebrugge (1933–36), the cut-off of the Abidjan lagoon (1933–46), and flood prevention works in Nottingham (1946–51). The Waterloopkundig Laboratorium "de Voorst" From 1951 to 1996, a second location known as the (Hydraulic Laboratory "de Voorst") was located in Noordoostpolder, between Marknesse, Kraggenburg, and Vollenhove. The establishment of a second laboratory at de Voorst was prompted by the lack of space in Delft for large outdoor models. Utilising land on the outskirts of Delft was not feasible due to the damp peat soil, which made it difficult to construct large models without soil settlement. In an environment where water levels are measured on a millimetric scale, even minute settlements were unacceptable. Additional benefits of the de Voorst location included its location within a low-lying polder, eliminating the need for an additional pumping system, and its availability due to the heavy boulder clay composition of the soil making it unsuitable for farming. Since the land was government-owned, no financial acquisition was required. From 1951, the Waterloopkundig Laboratorium therefore operated two facilities: an indoor modelling laboratory in Delft, and an outdoor model facility in De Voorst. In the 1970s, indoor laboratory facilities were added to the de Voorst location. A significant advantage of the de Voorst location was the ability to construct large-scale models of estuaries and ports, enabling model tests to predict the influence of hydraulic works on the watercourses, making use of the large differences in water levels from the surrounding surface water. These models were pivotal during the planning and construction phase of the Delta Works in Zeeland, and also allowed research works to be undertaken for international projects such as the reconstruction of the Port of Lagos. 
Other international projects where research was carried out at the laboratory to inform the design and construction included the construction of the Eider Barrage, and works at the mouth of the Volta River in Ghana. The scale of the physical models in the laboratory was often substantial, with many being large enough to permit model ships which necessitated pilotage by helmsmen, an example being the model created for Jo Thijsse's design for the junction of the Amsterdam–Rhine Canal and the Lek, a large structure which came to be known as (Thijsse's eggs). From the 1980s, computer-assisted mathematical modelling began to be useful in mapping potential water flows, reducing the need for very large-scale physical water models. Consequently, the decision was made in 1995 to concentrate activities at the Delft location and close the de Voorst facilities. The site was purchased by Natuurmonumenten and renamed the Waterloopbos, where visitors can view the models and associated infrastructure via a walking route through the woods. Consolidation into Deltares By 2008, the Delft laboratory had become known by the English name WL | Delft Hydraulics, and in an effort to consolidate knowledge with similar institutes, it was merged with other research institutes and sections of Rijkswaterstaat to form the Deltares Institute. The laboratory continues to operate today as part of Deltares. Directors and notable figures The following people were directors of the laboratory from its foundation in 1927 until it merged with Deltares in 2008. The following personnel served as Heads of the de Voorst facility. Significant engineering figures who undertook research or served in senior positions with the Waterloopkundig Laboratorium included Eco Bijker (various roles including head of department, head of the de Voorst Laboratory, and deputy director), Pieter Abraham van de Velde, Frank Spaargaren (interim general director, 1995–1997), Krystian Pilarczyk (research engineer, 1966–1968), and PJ Wemelsfelder, who undertook research at the facility and served as head of the Hydrometric Department. The Waterbouwkundig Laboratorium (Belgium) A similar institution known as the (Hydraulic Engineering Research Laboratory) is located in Borgerhout, Belgium. It was established in 1933. Gallery See also Flood control in the Netherlands Zuiderzee Works Rijkswaterstaat Delta Works References Coastal engineering Civil engineering Hydraulic engineering Delta Works
Waterloopkundig Laboratorium
[ "Physics", "Engineering", "Environmental_science" ]
1,841
[ "Hydrology", "Coastal engineering", "Physical systems", "Construction", "Hydraulics", "Delta Works", "Civil engineering", "Hydraulic engineering" ]
68,370,135
https://en.wikipedia.org/wiki/Tetraphenylarsonium%20chloride
Tetraphenylarsonium chloride is the organoarsenic compound with the formula (C6H5)4AsCl. This white solid is the chloride salt of the tetraphenylarsonium cation, which is tetrahedral. Typical of related quat salts, it is soluble in polar organic solvents. It is often used as a hydrate. Synthesis and reactions It is prepared by neutralization of tetraphenylarsonium chloride hydrochloride, which is produced from triphenylarsine: (C6H5)3As + Br2 → (C6H5)3AsBr2 (C6H5)3AsBr2 + H2O → (C6H5)3AsO + 2 HBr (C6H5)3AsO + C6H5MgBr → (C6H5)4AsOMgBr (C6H5)4AsOMgBr + 3 HCl → (C6H5)4AsCl.HCl + MgBrCl (C6H5)4AsCl.HCl + NaOH → (C6H5)4AsCl + NaCl + H2O Like other quat salts, it is used to solubilize polyatomic anions in organic media. To this end, aqueous or methanolic solutions containing the anion of interest are treated with a solution of tetraphenylarsonium chloride, typically resulting in precipitation of the tetraphenylarsonium salt of that anion. Related compounds Tetraphenylphosphonium chloride Tetrabutylammonium chloride Tetraethylammonium chloride References Chlorides Phenyl compounds
Tetraphenylarsonium chloride
[ "Chemistry" ]
357
[ "Chlorides", "Inorganic compounds", "Salts" ]
68,373,977
https://en.wikipedia.org/wiki/Tellurite%20fluoride
A tellurite fluoride is a mixed anion compound containing tellurite and fluoride ions. Such compounds have also been called oxyfluorotellurates(IV), where IV is the oxidation state of tellurium in tellurite. Comparable compounds are sulfite fluorides or selenite fluorides. List References Fluorides Tellurites Mixed anion compounds
Tellurite fluoride
[ "Physics", "Chemistry" ]
85
[ "Matter", "Mixed anion compounds", "Salts", "Fluorides", "Ions" ]
68,377,408
https://en.wikipedia.org/wiki/1%2C2-Bis%28diphenylphosphino%29benzene
1,2-Bis(diphenylphosphino)benzene (dppbz) is an organophosphorus compound with the formula C6H4(PPh2)2 (Ph = C6H5). Classified as a diphosphine ligand, it is a common bidentate ligand in coordination chemistry. It is a white, air-stable solid. As a chelating ligand, dppbz is very similar to 1,2-bis(diphenylphosphino)ethylene. References Chelating agents Diphosphines Phenyl compounds
1,2-Bis(diphenylphosphino)benzene
[ "Chemistry" ]
128
[ "Chelating agents", "Process chemicals" ]
68,377,413
https://en.wikipedia.org/wiki/Near-field%20radiative%20heat%20transfer
Near-field radiative heat transfer (NFRHT) is a branch of radiative heat transfer which deals with situations in which the objects, and/or the distances separating them, are comparable in scale to or smaller than the dominant wavelength of the thermal radiation exchanging the thermal energy. In this regime, the assumptions of geometrical optics inherent to classical radiative heat transfer are not valid and the effects of diffraction, interference, and tunneling of electromagnetic waves can dominate the net heat transfer. These "near-field effects" can result in heat transfer rates exceeding the blackbody limit of classical radiative heat transfer. History The origin of the field of NFRHT is commonly traced to the work of Sergei M. Rytov in the Soviet Union. Rytov examined the case of a semi-infinite absorbing body separated by a vacuum gap from a near-perfect mirror at zero temperature. He treated the source of thermal radiation as randomly fluctuating electromagnetic fields. Later in the United States, various groups theoretically examined the effects of wave interference and evanescent wave tunneling. In 1971, Dirk Polder and Michel Van Hove published the first fully correct formulation of NFRHT between arbitrary non-magnetic media. They examined the case of two half-spaces separated by a small vacuum gap. Polder and Van Hove used the fluctuation-dissipation theorem to determine the statistical properties of the randomly fluctuating currents responsible for thermal emission and demonstrated definitively that evanescent waves were responsible for super-Planckian (exceeding the blackbody limit) heat transfer across small gaps. Since the work of Polder and Van Hove, significant progress has been made in predicting NFRHT. Theoretical formalisms involving trace formulas, fluctuating surface currents, and dyadic Green's functions have all been developed. Though identical in result, each formalism can be more or less convenient when applied to different situations. Exact solutions for NFRHT between two spheres, ensembles of spheres, a sphere and a half-space, and concentric cylinders have all been determined using these various formalisms. NFRHT in other geometries has been addressed primarily through finite element methods. Meshed surface and volume methods have been developed which handle arbitrary geometries. Alternatively, curved surfaces can be discretized into pairs of flat surfaces and approximated to exchange energy like two semi-infinite half spaces using a thermal proximity approximation (sometimes referred to as the Derjaguin approximation). In systems of small particles, the discrete dipole approximation can be applied. Theory Fundamentals Most modern works on NFRHT express results in the form of a Landauer formula. Specifically, the net heat power transferred from body 1 to body 2 is given by $$P_{1\to 2} = \frac{1}{2\pi}\int_0^\infty d\omega\, \hbar\omega\,\left[n(\omega, T_1) - n(\omega, T_2)\right]\sum_j \tau_j(\omega),$$ where $\hbar$ is the reduced Planck constant, $\omega$ is the angular frequency, $T$ is the thermodynamic temperature, $n(\omega, T) = \left(e^{\hbar\omega/k_B T} - 1\right)^{-1}$ is the Bose function, $k_B$ is the Boltzmann constant, and $n_i = n(\omega, T_i)$. The Landauer approach writes the transmission of heat in terms of discrete thermal radiation channels, $\tau_j(\omega)$. The individual channel probabilities, $\tau_j$, take values between 0 and 1. NFRHT is sometimes alternatively reported as a linearized conductance, given by $$G = \lim_{T_1, T_2 \to T} \frac{P_{1\to 2}}{T_1 - T_2} = \frac{1}{2\pi}\int_0^\infty d\omega\, \hbar\omega\,\frac{\partial n(\omega, T)}{\partial T}\sum_j \tau_j(\omega).$$ Two half-spaces For two half-spaces, the radiation channels, $\tau_\alpha(\omega, k_\rho)$, are the s- and p- linearly polarized waves, and the channel sum becomes, per unit surface area, $\sum_j \to \sum_{\alpha = s,p} \int_0^\infty \frac{k_\rho\, dk_\rho}{2\pi}$. The transmission probabilities are given by $$\tau_\alpha(\omega, k_\rho) = \begin{cases} \dfrac{\left(1 - |r^\alpha_{01}|^2\right)\left(1 - |r^\alpha_{02}|^2\right)}{\left|1 - r^\alpha_{01} r^\alpha_{02}\, e^{2 i k_{z0} d}\right|^2}, & k_\rho < \omega/c, \\[2ex] \dfrac{4\,\operatorname{Im}\!\left(r^\alpha_{01}\right)\operatorname{Im}\!\left(r^\alpha_{02}\right) e^{-2 |k_{z0}| d}}{\left|1 - r^\alpha_{01} r^\alpha_{02}\, e^{-2 |k_{z0}| d}\right|^2}, & k_\rho > \omega/c, \end{cases}$$ where $k_\rho$ is the component of the wavevector parallel to the surface of the half-space.
Further, $$k_{z0} = \sqrt{\frac{\omega^2}{c^2} - k_\rho^2},$$ where: $r^\alpha_{0j}$ ($j = 1, 2$) are the Fresnel reflection coefficients for $\alpha$-polarized waves between media 0 and $j$, $k_{z0}$ is the component of the wavevector in the region 0 perpendicular to the surface of the half-space, $d$ is the separation distance between the two half-spaces, and $c$ is the speed of light in vacuum. Contributions to heat transfer for which $k_\rho < \omega/c$ arise from propagating waves whereas contributions from $k_\rho > \omega/c$ arise from evanescent waves. Applications Thermophotovoltaic energy conversion Thermal rectification Localized cooling Heat-assisted magnetic recording References Heat transfer Mechanical engineering Electromagnetism Optics Light
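A numerical sketch of the two-half-space formulas above (the permittivity, gap size, and frequency are illustrative assumptions, not values from the article): it evaluates the transmission probability for p-polarized waves across a vacuum gap in both the propagating and evanescent sectors.

```python
import numpy as np

# Transmission probability tau_p between two identical half-spaces separated by a
# vacuum gap d (Polder-van Hove form). All material parameters are illustrative.
c = 3.0e8                        # speed of light, m/s
omega = 1.0e14                   # angular frequency, rad/s (assumed)
d = 100e-9                       # gap width, 100 nm (assumed)
eps = -2.0 + 0.5j                # assumed relative permittivity of both half-spaces

def kz(eps_r, k_rho):
    # Perpendicular wavevector component in a medium of relative permittivity eps_r.
    return np.sqrt(eps_r * (omega / c) ** 2 - k_rho ** 2 + 0j)

def r_p(k_rho):
    # Fresnel reflection coefficient (p polarization) between vacuum and the medium.
    kz0, kzm = kz(1.0, k_rho), kz(eps, k_rho)
    return (eps * kz0 - kzm) / (eps * kz0 + kzm)

def tau_p(k_rho):
    kz0 = kz(1.0, k_rho)
    r = r_p(k_rho)
    if k_rho < omega / c:                     # propagating sector: kz0 is real
        e = np.exp(2j * kz0 * d)
        return (1 - abs(r) ** 2) ** 2 / abs(1 - r * r * e) ** 2
    e = np.exp(-2 * abs(kz0.imag) * d)        # evanescent sector: kz0 purely imaginary
    return 4 * r.imag ** 2 * e / abs(1 - r * r * e) ** 2

for k_rho in np.array([0.5, 2.0, 5.0, 10.0]) * omega / c:
    print("k_rho = %5.1f w/c   tau_p = %.4f" % (k_rho / (omega / c), tau_p(k_rho)))
```

For identical half-spaces r01 = r02, so the propagating numerator reduces to (1 - |r|^2)^2; the evanescent contribution decays exponentially with k_rho*d but remains sizable at sub-wavelength gaps, which is the origin of the super-Planckian transfer discussed above.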
Near-field radiative heat transfer
[ "Physics", "Chemistry", "Engineering" ]
813
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Applied and interdisciplinary physics", "Optics", "Electromagnetism", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Light", "Thermodynamics", "Fundamental interactions", " molecular", "Atomic", "Mechan...
68,379,149
https://en.wikipedia.org/wiki/Perceiver
Perceiver is a variant of the Transformer architecture, adapted for processing arbitrary forms of data, such as images, sounds and video, and spatial data. Unlike previous notable Transformer systems such as BERT and GPT-3, which were designed for text processing, the Perceiver is designed as a general architecture that can learn from large amounts of heterogeneous data. It accomplishes this with an asymmetric attention mechanism to distill inputs into a latent bottleneck. Perceiver matches or outperforms specialized models on classification tasks. Perceiver was introduced in June 2021 by DeepMind. It was followed by Perceiver IO in August 2021. Design Perceiver is designed without modality-specific elements. For example, it does not have elements specialized to handle images, or text, or audio. Further, it can handle multiple correlated input streams of heterogeneous types. It uses a small set of latent units that forms an attention bottleneck through which the inputs must pass. One benefit is the elimination of the quadratic scaling problem found in early transformers. Earlier work used custom feature extractors for each modality; Perceiver instead associates position- and modality-specific features with every input element (e.g. every pixel, or audio sample). These features can be learned or constructed using high-fidelity Fourier features. Perceiver uses cross-attention to produce linear complexity layers and to detach network depth from input size. This decoupling allows deeper architectures. Components A cross-attention module maps a (larger) byte array (e.g., a pixel array) and a latent array (smaller) to another latent array, reducing dimensionality. A transformer tower maps one latent array to another latent array, which is used to query the input again. The two components alternate. Both components use query-key-value (QKV) attention. QKV attention applies query, key, and value networks, which are typically multilayer perceptrons, to each element of an input array, producing three arrays that preserve the index dimensionality (or sequence length) of their inputs. Perceiver IO Perceiver IO can flexibly query the model's latent space to produce outputs of arbitrary size and semantics. It achieves strong results on tasks with structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-tasking. Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation. Outputs are produced by attending to the latent array using a specific output query associated with that particular output. For example, to predict optical flow at one pixel, a query would attend using the pixel's xy coordinates plus an optical flow task embedding to produce a single flow vector. It is a variation on the encoder/decoder architecture used in other designs. Performance Perceiver's performance is comparable to ResNet-50 and ViT on ImageNet without 2D convolutions. It attends to 50,000 pixels. It is competitive in all modalities in AudioSet. See also Convolutional neural network Transformer (machine learning model) References External links , with the Fourier features explained in more detail Machine learning
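A minimal numpy sketch of the asymmetric cross-attention described above (the single head, random weights, and the absence of layer normalization and MLP blocks are simplifying assumptions, not details of the published model): a small latent array queries a much larger byte array, so the cost grows linearly with input length.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent, byte_array, d_k=64):
    # QKV attention: queries come from the small latent array, keys and values from
    # the large byte array, giving O(N_latent * N_input) cost instead of O(N_input^2).
    n_lat, d_lat = latent.shape
    n_in, d_in = byte_array.shape
    W_q = rng.normal(0, d_lat ** -0.5, (d_lat, d_k))
    W_k = rng.normal(0, d_in ** -0.5, (d_in, d_k))
    W_v = rng.normal(0, d_in ** -0.5, (d_in, d_lat))
    Q, K, V = latent @ W_q, byte_array @ W_k, byte_array @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))    # shape (n_lat, n_in)
    return latent + attn @ V                  # residual update of the latent array

latents = rng.normal(size=(128, 256))     # small latent array (would be learned)
inputs = rng.normal(size=(50_000, 32))    # e.g. 50,000 pixel features
out = cross_attention(latents, inputs)
print(out.shape)                          # (128, 256): independent of input length
```

Alternating such a cross-attention module with a latent-only transformer tower reproduces the bottleneck structure sketched in the Components section, with depth decoupled from input size.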
Perceiver
[ "Engineering" ]
682
[ "Artificial intelligence engineering", "Machine learning" ]
75,561,967
https://en.wikipedia.org/wiki/Solar%20facula
Solar faculae are bright spots in the photosphere that form in the canyons between solar granules, short-lived convection cells several thousand kilometers across that constantly form and dissipate over timescales of several minutes. Faculae are produced by concentrations of magnetic field lines. Strong concentrations of faculae appear during increased solar activity, with or without sunspots. Faculae and sunspots contribute noticeably to variations in the solar constant. The chromospheric counterpart of a facular region is called a plage. References Sun Solar phenomena
Solar facula
[ "Physics" ]
118
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
75,565,213
https://en.wikipedia.org/wiki/Mariagrazia%20Dotoli
Mariagrazia Dotoli (born 1971) is an Italian systems engineer and control theorist whose research involves the optimization of supply chain management and traffic control in smart cities, fuzzy control systems, and the use of Petri nets in modeling these applications as discrete event dynamic systems. She is Professor of Systems and Control Engineering in the Department of Electrical and Information Engineering at the Polytechnic University of Bari. Education Dotoli is the daughter of , an Italian scholar of French literature; she was born in 1971 in Bari. She was educated at the Liceo Scientifico Statale Arcangelo Scacchi and at the Polytechnic University of Bari, where she earned a laurea in 1995, after a year working with Bernadette Bouchon-Meunier at Pierre and Marie Curie University in Paris. She went on to earn a professional engineering qualification in 1996 and to complete a Ph.D. in 1999. Her doctoral dissertation, Recent Developments of the Fuzzy Control Methodology, was supervised by Bruno Maione; her doctoral research also included work with Jan Jantzen at the Technical University of Denmark. Career She remained at the Polytechnic University of Bari as an assistant professor beginning in 1999 and, despite winning a national qualification for full professorship in 2013, remained an assistant professor until 2015. She was an associate professor from 2015 to 2019, and has been a full professor since 2019. She also served the university as vice chancellor for research from 2012 to 2013. Since 2022 she has been the founder and coordinator of the Italian National PhD Program in Autonomous Systems, with its administrative seat at Politecnico di Bari and 25 affiliated Italian universities. In 2020 she founded the Interuniversity Italian PhD Program in Industry 4.0 of Politecnico di Bari and the University of Bari, Italy, which she coordinated from 2020 to 2022. She has been the founder and scientific head of the Decision and Control Laboratory at the Department of Electrical and Information Engineering of the Polytechnic University of Bari since 2012. Editorial Activities She is Senior Editor, for the term 2021-2025, of the international journal IEEE Transactions on Automation Science and Engineering; she has been an Associate Editor of the international journals IEEE Transactions on Systems Man and Cybernetics, IEEE Transactions on Control Systems Technology, and IEEE Robotics and Automation Letters. From 2016 to 2020 she was Editor-in-Chief of the international newsletter IEEE Systems Man and Cybernetics Society eNewsletter. She was General Chair of the 29th Mediterranean Conference on Control and Automation (Bari, June 2021), and holds the same role for the upcoming IEEE 20th International Conference on Automation Science and Engineering (CASE24). Publications She has authored more than 300 articles in international conference proceedings, journals, and book chapters. Together with Maria Pia Fanti, she is the author of a MATLAB manual for engineering applications. Recognitions Dotoli was named an IEEE Fellow in the 2024 class of fellows, "for contributions to control of logistics systems in smart cities". She is also a fellow of the Asia-Pacific Artificial Intelligence Association.
She received the IEEE Systems, Man, and Cybernetics Society (SMCS) 2021 Outstanding Contribution Award for her service as 2016-2020 Editor-in-Chief of the SMCS newsletter, and, as co-chair of the Technical Committee (TC) on Intelligent Systems for Human-Aware Sustainability, the SMCS 2021 Award for Most Active SMCS Technical Committee in Systems. Dotoli is listed in the world's top 2% of scientists, in both the career-long and single-year categories, in the "Industrial Engineering & Automation" and "Artificial Intelligence & Image Processing" fields. References External links Home page 1971 births Living people Systems engineers Women systems engineers Italian engineers Italian women engineers Control theorists Fellows of the IEEE
Mariagrazia Dotoli
[ "Engineering" ]
766
[ "Systems engineers", "Systems engineering", "Control engineering", "Control theorists" ]
75,567,104
https://en.wikipedia.org/wiki/Victoria%20Gray
Victoria Gray was the first patient ever to be treated with the gene-editing tool CRISPR for sickle-cell disease. This marked the initial indication that a cure is attainable for individuals born with sickle-cell disease and another severe blood disorder, beta-thalassemia. Procedure In 2019, Victoria Gray enrolled in a groundbreaking clinical trial. In an interview with National Public Radio, Gray mentioned that she had been contemplating a bone marrow transplantation when she learned about the trial. As the first patient with sickle-cell disease to undergo treatment with the gene-editing technology CRISPR, she was among the earliest individuals to receive any CRISPR-based intervention. Although CRISPR had been extensively discussed and lauded, its application had until then been largely confined to laboratory cell manipulation. When Gray received her experimental infusion, the outcome was uncertain: scientists were unsure whether it would eradicate her disease or pose unforeseen risks. However, the therapy exceeded expectations, and at the end of July 2019 Gray was announced as the first patient to be treated for sickle-cell disease using the CRISPR-Cas9 gene-editing technology. Thanks to her gene-edited cells, Gray has been cured of the disease and now lives a symptom-free life. In the trial, 29 of the 30 eligible patients (over 96%) went from experiencing multiple pain crises annually to none in the 12 months following treatment. References Living people People with sickle-cell disease Genome editing Year of birth missing (living people)
Victoria Gray
[ "Engineering", "Biology" ]
315
[ "Genetics techniques", "Genetic engineering", "Genome editing" ]
75,571,860
https://en.wikipedia.org/wiki/Carmen%20Venegas
Carmen Venegas (died 1991) was a noted Costa Rican electrical engineer and pilot. Also known as Carmen Venegas Campos, or Carmencita Zeledon Venegas, she was the first Latin American woman to earn a degree in engineering at Virginia Polytechnic Institute (now known as Virginia Tech), the first woman to obtain her pilot's license in Central America, and the first woman to drive an electric locomotive. Early life and education From an early age, Venegas was interested in mechanics and locomotives. Her father was a mechanic who owned his own shop, and it was in this shop that she learned to operate different locomotive machines. In 1930, at the age of 18, she drove a train from San José to Puntarenas, Costa Rica. Her communication with President Cleto González Víquez allowed her to gain employment working on railroads in Costa Rica. Having impressed the Costa Rican government with her work, she was awarded one of two scholarships given annually by her home country and enrolled at Virginia Tech in 1935. At Virginia Tech, she joined the American Institute of Electrical Engineers and was the only woman in the organization at that time. She also helped found the Short Wave Club, training other students in radio operations. Furthermore, she was known as a pilot. She proposed the creation of an Aeronautics Club at Virginia Tech. In 1937, she was invited by the National Intercollegiate Flying Club to watch air races in Miami, Florida. Venegas flew her own 40-horsepower airplane, which she kept in Lynchburg, to Washington, D.C., where she met the convoy to watch the races. She was the only Virginia-based pilot to join the flight. She graduated from Virginia Tech in 1938 with a Bachelor of Science in electrical engineering. Career At the end of her junior year of college, Venegas spent the summer back in Costa Rica working as an engineer at the Costa Rica Electric Light and Power Company. After graduating from Virginia Tech, she applied to work on the Panama Canal but was turned down because of her sex. Not to be deterred, she flew out to the Panama Canal Zone, where she was soon hired, becoming the first woman engineer to work on the canal. After working on the power transmission problems of the Panama Canal, she returned in 1942 to the United States, where she worked as an application engineer in the government department of the Westinghouse Electrical International Company. She was the company's first woman engineer. At Westinghouse, she helped supply the United Nations with electrical equipment, which assisted the Allied Powers during World War II. Venegas handled technical engineering problems that arose in supplying generators and other necessary machinery to the Allies. Venegas eventually pursued a career in performing and painting. Moving to Los Angeles, she attended the University of California, Los Angeles, studying music and art. She married Meade A. Livesay and began performing under the name "Carmen Lesay." References 1910s births 1991 deaths Electrical engineers Virginia Tech alumni 20th-century Costa Rican women American people of Costa Rican descent 20th-century Costa Rican people Costa Rican scientists 20th-century engineers 20th-century women engineers
Carmen Venegas
[ "Engineering" ]
630
[ "Electrical engineering", "Electrical engineers" ]
75,572,282
https://en.wikipedia.org/wiki/C11orf91
Chromosome 11 open reading frame 91, or C11orf91, is a protein which in humans is encoded by the C11orf91 gene. Gene The C11orf91 gene consists of 5159 nucleotides, with an mRNA of approximately 836 base pairs. There is one exon in the C11orf91 gene. mRNA The cytogenetic band location of C11orf91 is 11p13, and the gene is located on the minus strand of the DNA. Conceptual translation Annotated depiction of the C11orf91 mRNA and amino acid protein sequences. Protein The C11orf91 gene encodes a protein that is 193 amino acids in length. The C11orf91 protein contains a domain of unknown function, DUF5529, that spans nearly the entire protein. RBMX protein binding sites were found to be highly conserved in several structures of the human C11orf91 3' UTR and 5' UTR. C11orf91 is rich in serine and proline and poor in valine and asparagine. There is a proline-rich region in the middle of the protein. The human C11orf91 protein is approximately 20 kDa and has an isoelectric point around 9. Localization Human C11orf91 protein is predicted to be localized in vesicles. Structure C11orf91 has two helices located near the C-terminus and no beta sheets. Post-translational modifications C11orf91 has a predicted protein kinase C (PKC) phosphorylation site, a casein kinase 2 (CK2) phosphorylation site, an amidation site, and two predicted serine phosphorylation sites; see Conceptual translation for post-translational modification site locations. Evolution There are no paralogs of the human C11orf91 protein. The human C11orf91 protein has orthologs across eight categories of jawed vertebrates, including aves, testudines, alligators, reptiles, mammals, amphibians, lungfishes, and cartilaginous fishes. References Proteins
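Physicochemical predictions of the kind reported above (molecular weight, isoelectric point, residue composition) can be reproduced for any candidate sequence with Biopython's ProtParam module. The sketch below is illustrative only: the sequence shown is a hypothetical placeholder, not the actual C11orf91 sequence.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical placeholder sequence -- substitute the real 193-residue
# C11orf91 protein sequence (e.g. from a reference database) before
# drawing any conclusions.
seq = "MSPSPPLSRPPSPPQSLSRSPPPSPSLPRSPSPPLSPSRSPSPPLSPSR"

analysis = ProteinAnalysis(seq)
print(f"molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"isoelectric point: {analysis.isoelectric_point():.2f}")

# Composition checks of the kind described above (serine/proline rich):
comp = analysis.get_amino_acids_percent()
print(f"Ser: {comp['S']:.0%}, Pro: {comp['P']:.0%}")
```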
C11orf91
[ "Chemistry" ]
450
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
72,737,333
https://en.wikipedia.org/wiki/Oise%20amber
Oise amber () is a type of amber found near the Oise river near Creil in northern France. Oise amber is around 53 million years old, dating to the Early Eocene (Ypresian). Oise amber is softer than Baltic amber, although Oise amber is older and both types of amber have similar geographic origins. The formation is known for preserving a diverse fauna of invertebrates. History In the late 1990s, an amber deposit was discovered by a French entomologist near Creil at Le Quesnoy, close to the Oise river in France. The sediments containing the amber were found at the bottom of quarries used for sand and gravel extraction. The Oise amber deposit has yielded more than 20,000 arthropod inclusions to date. In 2000, pollen was extracted for the first time from Oise amber. Using nuclear magnetic resonance spectroscopy, it was then discovered that the amber contained a unique compound, quesnoin, which was similar to fresh resin from a modern tree found in the Amazon, Hymenaea oblongifolia, suggesting that the amber may have been produced by related trees. Geology The amber originates from the Argiles d'lignite du Soissonnais, which forms part of the stratigraphy of the Paris Basin. The strata form channels cutting into the underlying marine-deposited Late Paleocene (Thanetian) aged greensand. The main lithologies of the beds are lenticular bedded bodies consisting of clay-rich sand. These are divided into two subfacies, the first of which contains pyrite-rich lignite as well as amber; the other contains proportionally less lignite, as well as remains of terrestrial vertebrates. The deposit also contains the remains of many coprolites. Description Oise amber tends to be a very clear yellow, and pieces of Oise amber are usually a few centimetres long. In every flow of Oise amber, there is usually at least one inclusion. The amber is of angiosperm origin, with the source tree dubbed Aulacoxylon sparnacense, which is thought to be a member of Fabaceae. Diversity The amber shows a high diversity of invertebrate fauna. The most diverse groups of insects are Coleoptera (beetles) and Psocoptera, each representing 21% of collected insect specimens as of 2009, followed by Hymenoptera at 16%, Diptera (flies) at 12% and Hemiptera at 10%. However, as of 2010 Oise amber has fewer described species than Baltic, Dominican or New Jersey ambers. References External links Amber Oise basin
Oise amber
[ "Physics" ]
544
[ "Amorphous solids", "Unsolved problems in physics", "Amber" ]
72,738,124
https://en.wikipedia.org/wiki/IMGT
IMGT, or the international ImMunoGeneTics information system, is a collection of databases and resources for immunoinformatics, particularly the V, D, J, and C gene sequences, as well as other tools and data related to the adaptive immune system. IMGT/LIGM-DB, the first and still largest database hosted as part of IMGT, contains reference nucleotide sequences for the T-cell receptor and immunoglobulin molecules of 360 species, as of 2023. These genes encode the proteins which are the foundation of adaptive immunity, which allows highly specific recognition and memory of pathogens. History IMGT was founded in June 1989 by Marie-Paule Lefranc, an immunologist working at the University of Montpellier. The project was presented to the 10th Human Genome Mapping Workshop, and resulted in the recognition of V, D, J, and C regions as genes. The first resource created was IMGT/LIGM-DB, a reference for nucleotide sequences of T-cell receptors and immunoglobulins of humans, and later of other vertebrate species. IMGT was created under the auspices of the Laboratoire d'ImmunoGénétique Moléculaire at the University of Montpellier as well as the French National Centre for Scientific Research (CNRS). As both T-cell receptors and immunoglobulin molecules are built through a process of recombination of nucleotide sequences, the annotation of the building-block regions and their roles is unique within the genome. To standardize terminology and references, the IMGT-NC was created in 1992 and recognized by the International Union of Immunological Societies as a nomenclature subcommittee. Other tools include IMGT/Collier-de-Perles, a method for two-dimensional representation of receptor amino acid sequences, and IMGT/mAb-DB, a database of monoclonal antibodies. The IPD-IMGT/HLA Database, now maintained by the HLA Informatics Group and the primary reference for human HLA, originated in part with IMGT; it was merged with the Immuno Polymorphism Database in 2003 to form the current reference. Since 2015, IMGT has been headed by Sofia Kossida. See also Open science data Computational immunology Immunomics References Genetics databases Bioinformatics Biological databases
IMGT
[ "Engineering", "Biology" ]
481
[ "Bioinformatics", "Biological engineering", "Biological databases" ]
72,740,308
https://en.wikipedia.org/wiki/BedMachine%20Antarctica
BedMachine Antarctica is a project to map the sub-surface landmass below the ice of Antarctica using radar depth sounding, ice-shelf bathymetry, and computer analysis of those data based on the conservation of mass. The project uses data from 19 research institutes and is led by the University of California, Irvine. It has revealed that Antarctica holds the deepest natural location on land (or at least not under liquid water) worldwide: a trough beneath Denman Glacier whose bedrock lies about 3,500 m below sea level. References External links Science and technology in Antarctica Geophysics Research projects
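The conservation-of-mass idea behind BedMachine is simple to state: in steady state the ice flux H·u must balance surface accumulation, so thickness H can be inferred wherever the velocity u is known. Below is a minimal one-dimensional illustration with invented numbers; the real product solves a full two-dimensional optimization constrained by radar thickness measurements.

```python
import numpy as np

# 1-D steady-state mass conservation: d(H*u)/dx = a
#   =>  H(x) = (H0*u0 + integral of a dx) / u(x)
x = np.linspace(0.0, 50e3, 501)        # 50 km flowline, metres
u = 100.0 + 4e-3 * x                   # surface speed (m/yr), made up
a = np.full_like(x, 0.3)               # net accumulation (m/yr), made up
H0 = 1500.0                            # known thickness at x = 0 (m)

# Trapezoid-rule cumulative integral of accumulation along the flowline
flux = H0 * u[0] + np.concatenate(
    ([0.0], np.cumsum((a[1:] + a[:-1]) / 2 * np.diff(x))))
H = flux / u                           # inferred thickness everywhere
print(f"thickness at 50 km: {H[-1]:.0f} m")
# The bed elevation then follows as surface elevation minus H.
```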
BedMachine Antarctica
[ "Physics" ]
115
[ "Applied and interdisciplinary physics", "Geophysics" ]
72,743,532
https://en.wikipedia.org/wiki/Serratus%20%28virology%29
Serratus is a large-scale viroinformatics platform for uncovering the total genetic diversity of Earth's virome. Originating with the goal of uncovering novel coronaviruses that may have been incidentally sequenced by other researchers, the project expanded to encompass all RNA viruses, those which encode a viral RNA-dependent RNA polymerase (RdRp). By the end of 2020 there were approximately 15,000 distinct RNA virus sequences known from public databases, measured by the number of distinct RdRps (greater than 10% difference in amino acid sequence). Using a bioinformatics workflow optimized for large-scale cloud computing, the research team analyzed 5.7 million freely available sequencing datasets (20.4 petabytes of raw data) in the Sequence Read Archive (SRA) in only 11 days, at a computing cost of US$23,900. This analysis yielded 132,000 novel viral RdRps, representing nearly an order of magnitude increase in the known genetic diversity of RNA viruses. Within the database, RNA viruses are classified according to their RdRp palmprint, a type of molecular barcode. The palmprint can be used as a computationally efficient index identifying which SRA sequencing runs contain a particular RNA virus. Such an index allows for targeted analysis of raw sequencing datasets from which novel RNA viruses can be characterized. All Serratus data are freely available under the INDSC release policy. References External links palmID: RdRp sequence search tool Serratus code repository Bioinformatics Computational biology Virology Computational fields of study
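The palmprint-as-index idea can be illustrated with a toy sketch: an inverted index from palmprint identifiers to the SRA runs in which they were detected, queryable before touching any raw data. All identifiers below are invented for illustration and do not correspond to real Serratus records.

```python
from collections import defaultdict

# Toy inverted index: palmprint ID -> SRA run accessions (values invented).
detections = [
    ("u1337", "SRR0000001"),   # (palmprint, run) pairs as a scan might emit
    ("u1337", "SRR0000042"),
    ("u2718", "SRR0000042"),
]

index = defaultdict(set)
for palmprint, run in detections:
    index[palmprint].add(run)

# Which runs contain the virus with palmprint "u1337"?
print(sorted(index["u1337"]))   # ['SRR0000001', 'SRR0000042']
```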
Serratus (virology)
[ "Technology", "Engineering", "Biology" ]
327
[ "Biological engineering", "Computational fields of study", "Bioinformatics", "Computing and society", "Computational biology" ]
77,190,088
https://en.wikipedia.org/wiki/PG%201543%2B489
PG 1543+489, also known as QSO B1544+4855 and PGC 2325245, is a quasar located in the constellation of Boötes. At a redshift of 0.399, the object is located 4.5 billion light-years away from Earth. It was first discovered in 1983 by researchers who presented 114 objects in the Palomar-Green bright quasar survey, one of the best-studied samples of active galactic nuclei (AGN). Characteristics The quasar is also classified as a narrow-line Seyfert 1 galaxy, a type of AGN that shows all the properties of normal Type 1 Seyfert galaxies but has peculiar characteristics such as unusually narrow Balmer lines, with a full width at half-maximum (FWHM) of 1630 km s−1. Observations Researchers also found a peculiar feature in PG 1543+489. The quasar shows a blueshift of the [O III] 5007 Å line of 1150 km s−1 with respect to the systemic velocity of the galaxy, as well as a blue asymmetry of the line profile. Such large [O III] blueshifts, dubbed 'blue outliers' by researchers, are theoretically interpreted as the result of intense outflows whose receding parts are obscured by an optically thick accretion disc, or possibly of a scenario in which the narrow-line region clouds are entrained by decelerating winds, potentially associated with the high Eddington ratio typical of the 'blue outliers'. Absorption system Through observations from the Hubble Space Telescope, researchers were able to find an absorption-line system at z = 0.07489. They found the sightline passes within ρ = 66 kpc of an edge-on ≈2L* disk galaxy at a similar redshift, belonging with four other galaxies to a group within ρ = 160 kpc. In the absorption-line system, they detected H I [log N(H I/cm−2) = 19.12 ± 0.04] as well as N I, Mg II, Si II, and Si III, from which a gas-phase abundance of [N/H] = −1.0 ± 0.1 was measured. The photoionization models indicate that the nitrogen-to-silicon relative abundance is solar, yet magnesium is found to be underabundant by a factor of ≈2. By extracting the rotation curve and obtaining emission-line spectroscopy of the nearby galaxy, researchers found that its metallicity is ≈8× higher than the [N/H] measured in the absorber. Interestingly, the absorber velocities suggest that the gas at ρ = 66 kpc is corotating with the galaxy's stellar disk, possibly with an inflow component. Although this would indicate that the sub-damped Lyα absorber system arises in a cold accretion flow, the absorber abundance patterns are quite peculiar. Researchers hypothesized that the gas was probably ejected from its home galaxy, or is tidal debris from interactions between the group galaxies, with solar nitrogen abundance, subsequently mixed with the gas in the circumgalactic medium or group. If the gas is bound to the nearby galaxy, this system may be an example of the gas "recycling" predicted by theoretical galaxy simulations. References Quasars Boötes Luminous infrared galaxies Starburst galaxies Seyfert galaxies Spiral galaxies Principal Galaxies Catalogue objects 2MASS objects Black holes Active galaxies
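The quoted distance follows from the redshift only under a choice of cosmology. The sketch below assumes the Planck 2018 parameter set bundled with astropy; a different parameter set would shift the numbers slightly.

```python
from astropy.cosmology import Planck18
import astropy.units as u

z = 0.399  # redshift of PG 1543+489

# Light-travel time: multiplying by c gives the "distance" usually quoted
# in popular accounts ("about 4.5 billion light-years away").
print(Planck18.lookback_time(z))                  # in Gyr

# Luminosity distance, used to convert observed flux into luminosity.
print(Planck18.luminosity_distance(z).to(u.Mpc))
```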
PG 1543+489
[ "Physics", "Astronomy" ]
729
[ "Black holes", "Physical phenomena", "Physical quantities", "Boötes", "Unsolved problems in physics", "Astrophysics", "Constellations", "Density", "Stellar phenomena", "Astronomical objects" ]
77,190,312
https://en.wikipedia.org/wiki/Francesca%20E.%20DeMeo
Francesca E. DeMeo, or Francesca DeMeo, is an American astrophysicist, researcher and speaker specializing in the study of celestial bodies in the Solar System, including more particularly asteroids, comets and moons. With Schelte J. Bus, she created the modern system of taxonomic classification of asteroids. With Benoît Carry, she brought a new vision of the main asteroid belt. She has pursued entrepreneurial activities alongside astronomy. Education and career DeMeo studied physics and planetary sciences at the Massachusetts Institute of Technology in Cambridge, obtaining two undergraduate degrees in 2006: a B.S. in Physics and a B.S. in Earth, Atmospheric, and Planetary Sciences. In 2007, she received an M.S. in Planetary Science, covering small celestial bodies such as asteroids, comets and moons. She was awarded a Fulbright Scholarship in 2007 and 2008 to study planetary sciences within astronomy and astrophysics as a Fulbright Advanced Student, pursuing a PhD at the Paris Observatory in Meudon on the study of small bodies in the Solar System. In 2009, she received an Eiffel excellence scholarship to support her final year of doctoral studies. In 2009 she published the Bus-DeMeo asteroid taxonomic classification system, which has become the reference in the spectral classification of asteroids; the taxonomy paper is one of the most cited papers in all of asteroid science, cited 672 times as of June 23, 2024. She obtained her Ph.D. in Astronomy and Astrophysics in 2010 with a dissertation on small solar bodies titled "The Compositional Variation of Small Bodies across the Solar System". Her thesis co-supervisors were Maria Antonella Barucci and Richard P. Binzel, with whom she continued collaborating. Awards 2006 Christopher Goetze Prize 2012 Asteroid (8070) DeMeo named in her honor 2013 Hubble Fellowship 2018 Harold C. Urey Prize, awarded by the Division for Planetary Sciences of the American Astronomical Society, "in recognition of the broad foundational understanding of the study of solar system bodies using the modern system of asteroid classification that bears her name." Publication of research DeMeo has published papers on planetology in Nature and Icarus about Pluto, Triton, Charon, (52872) Okyrhoe, (90482) Orcus, (379) Huenna and other small Solar System bodies. Other activities In 2011, DeMeo served as co-founder and CIO of Cambridge Select Inc. The same year, she became a volunteer member of the board of directors of the Governor's Academy, remaining there until 2021. References External links Meanings of minor planet names: 8001–9000 Astrophysics Massachusetts Institute of Technology School of Science faculty American planetary scientists American women planetary scientists 21st-century American astronomers Scientists from Massachusetts Living people Year of birth missing (living people)
Francesca E. DeMeo
[ "Physics", "Astronomy" ]
657
[ "Astronomical sub-disciplines", "Astrophysics" ]
77,190,923
https://en.wikipedia.org/wiki/Sullivan%20vortex
In fluid dynamics, the Sullivan vortex is an exact solution of the Navier–Stokes equations describing a two-celled vortex in an axially strained flow, discovered by Roger D. Sullivan in 1959. At large radial distances, the Sullivan vortex resembles a Burgers vortex; however, it exhibits a two-cell structure near the center, creating a downdraft at the axis and an updraft at a finite radial location. Specifically, in the outer cell the fluid spirals inward and upward, and in the inner cell the fluid spirals down at the axis and spirals upwards at the boundary with the outer cell. Due to its multi-celled structure, the vortex is used to model tornadoes and large-scale complex vortex structures in turbulent flows. Flow description Consider the velocity components $(v_r, v_\theta, v_z)$ of an incompressible fluid in cylindrical coordinates $(r, \theta, z)$, where $\alpha > 0$ is the strain rate of the axisymmetric stagnation-point flow, $\Gamma$ is the circulation and $\nu$ is the kinematic viscosity. The Burgers vortex solution is simply given by $v_r = -\alpha r$, $v_z = 2\alpha z$ and $v_\theta = \frac{\Gamma}{2\pi r}\bigl(1 - e^{-\alpha r^2/2\nu}\bigr)$. Sullivan showed that there exists a non-trivial second solution of the Navier–Stokes equations that is not the Burgers vortex. The solution is given by $$v_r = -\alpha r + \frac{6\nu}{r}\Bigl(1 - e^{-\alpha r^2/2\nu}\Bigr), \qquad v_\theta = \frac{\Gamma}{2\pi r}\,\frac{H(\alpha r^2/2\nu)}{H(\infty)}, \qquad v_z = 2\alpha z\Bigl(1 - 3e^{-\alpha r^2/2\nu}\Bigr),$$ where $$H(\eta) = \int_0^\eta \exp\Bigl(-t + 3\int_0^t \frac{1 - e^{-s}}{s}\,ds\Bigr)\,dt.$$ The inner integral can be written as $\int_0^t (1 - e^{-s})s^{-1}\,ds = \ln t + \gamma_E + E_1(t)$, where $E_1$ is the exponential integral and $\gamma_E$ is the Euler–Mascheroni constant; for large values of $t$, we have $E_1(t) \to 0$, so the integral behaves like $\ln t + \gamma_E$. The boundary between the inner cell and the outer cell, where the radial velocity vanishes, is given by the non-zero root of $$\eta = 3\bigl(1 - e^{-\eta}\bigr), \qquad \eta = \frac{\alpha r^2}{2\nu},$$ namely $\eta \approx 2.821$, i.e. $r \approx 2.38\sqrt{\nu/\alpha}$. Within the inner cell, the transition between the downdraft and the updraft occurs where $v_z = 0$, i.e. at $e^{-\eta} = 1/3$, which gives $\eta = \ln 3$ and $r = \sqrt{(2\nu/\alpha)\ln 3} \approx 1.48\sqrt{\nu/\alpha}$. The vorticity components of the Sullivan vortex are given by $$\omega_r = 0, \qquad \omega_\theta = -\frac{6\alpha^2 r z}{\nu}\,e^{-\alpha r^2/2\nu}, \qquad \omega_z = \frac{\alpha\Gamma}{2\pi\nu\,H(\infty)}\exp\Bigl(-\eta + 3\int_0^\eta \frac{1 - e^{-s}}{s}\,ds\Bigr).$$ The pressure field with respect to its central value is given by $$p - p_0 = -\frac{\rho\alpha^2}{2}\bigl(r^2 + 4z^2\bigr) - \frac{\rho}{2}\bigl(v_r + \alpha r\bigr)^2 + \rho\int_0^r \frac{v_\theta^2(s)}{s}\,ds,$$ where $\rho$ is the fluid density. The first term on the right-hand side corresponds to the potential flow motion, i.e., $v_r = -\alpha r$, $v_\theta = 0$, $v_z = 2\alpha z$, whereas the remaining two terms originate from the motion associated with the Sullivan vortex. Sullivan vortex in cylindrical stagnation surfaces An explicit solution of the Navier–Stokes equations for the Sullivan vortex in stretched cylindrical stagnation surfaces was obtained by P. Rajamanickam and A. D. Weiss. In this generalisation the location of the stagnation cylindrical surface is no longer determined directly by the source-strength parameter, but instead involves the principal branch of the Lambert W function; the parameter should therefore be interpreted as a measure of the volumetric source strength and not as the location of the stagnation surface. Corresponding expressions for the vorticity components of this generalised Sullivan vortex are given in the same work. See also Kerr–Dold vortex References Vortices
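A quick numerical check of the two-cell geometry described above: the sketch below solves η = 3(1 − e^(−η)) for the cell boundary and uses η = ln 3 for the updraft–downdraft transition, then converts both to radii; the values of α and ν are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

alpha, nu = 1.0, 1.0e-2   # strain rate (1/s) and kinematic viscosity (m^2/s), illustrative

# Cell boundary: v_r = 0  =>  eta = 3*(1 - exp(-eta)), non-zero root
eta_b = brentq(lambda e: e - 3.0 * (1.0 - np.exp(-e)), 1.0, 10.0)

# Downdraft/updraft transition inside the inner cell: v_z = 0  =>  eta = ln 3
eta_z = np.log(3.0)

r = lambda eta: np.sqrt(2.0 * nu * eta / alpha)   # invert eta = alpha*r^2/(2*nu)
print(f"eta_b = {eta_b:.3f}, boundary radius   = {r(eta_b):.4f} m")
print(f"eta_z = {eta_z:.3f}, transition radius = {r(eta_z):.4f} m")
# eta_b ~ 2.821 and eta_z ~ 1.099, so the two radii differ by a factor ~1.6
```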
Sullivan vortex
[ "Chemistry", "Mathematics" ]
529
[ "Dynamical systems", "Vortices", "Fluid dynamics" ]
69,664,162
https://en.wikipedia.org/wiki/Empirical%20dynamic%20modeling
Empirical dynamic modeling (EDM) is a framework for analysis and prediction of nonlinear dynamical systems. Applications include population dynamics, ecosystem services, medicine, neuroscience, dynamical systems, geophysics, and human-computer interaction. EDM was originally developed by Robert May and George Sugihara. It can be considered a methodology for data modeling, predictive analytics, dynamical system analysis, machine learning and time series analysis. Description Mathematical models have tremendous power to describe observations of real-world systems. They are routinely used to test hypotheses, explain mechanisms and predict future outcomes. However, real-world systems are often nonlinear and multidimensional, in some instances rendering explicit equation-based modeling problematic. Empirical models, which infer patterns and associations from the data instead of using hypothesized equations, represent a natural and flexible framework for modeling complex dynamics. Donald DeAngelis and Simeon Yurek illustrated that canonical statistical models are ill-posed when applied to nonlinear dynamical systems. A hallmark of nonlinear dynamics is state-dependence: system states are related to previous states governing the transition from one state to another. EDM operates in this space, the multidimensional state-space of system dynamics, rather than on one-dimensional observational time series. EDM does not presume relationships among states, for example a functional dependence, but projects future states from localised, neighboring states. EDM is thus a state-space, nearest-neighbors paradigm where system dynamics are inferred from states derived from observational time series. This provides a model-free representation of the system naturally encompassing nonlinear dynamics. A cornerstone of EDM is the recognition that time series observed from a dynamical system can be transformed into higher-dimensional state-spaces by time-delay embedding with Takens's theorem. The state-space models are evaluated based on in-sample fidelity to observations, conventionally with Pearson correlation between predictions and observations. Methods EDM is continuing to evolve. As of 2022, the main algorithms are Simplex projection, Sequential locally weighted global linear maps (S-Map) projection, Multivariate embedding in Simplex or S-Map, Convergent cross mapping (CCM), and Multiview Embedding, described below. Nearest neighbors are ranked by their Euclidean distance to the prediction-origin vector in the state-space: $d_i = \lVert \mathbf{x}(t) - \mathbf{x}(n_i) \rVert$. Simplex Simplex projection is a nearest-neighbor projection. It locates the nearest neighbors to the location in the state-space from which a prediction is desired. To minimize the number of free parameters, the number of neighbors is typically set to $E+1$, defining an $E$-dimensional simplex in the state-space, where $E$ is the embedding dimension. The prediction is computed as the weighted average of the simplex points projected $T_p$ steps ahead, each neighbor weighted according to its distance to the projection origin vector in the state-space: find the $E+1$ nearest neighbors $\mathbf{x}(n_1), \ldots, \mathbf{x}(n_{E+1})$; define the distance scale $d_1 = \lVert \mathbf{x}(t) - \mathbf{x}(n_1) \rVert$; compute the weights $w_i = \exp(-d_i/d_1)$ for $i = 1, \ldots, E+1$; the prediction is the weighted average of the state-space simplex, $\hat{\mathbf{x}}(t+T_p) = \sum_i w_i\,\mathbf{x}(n_i+T_p) \big/ \sum_i w_i$. S-Map S-Map extends the state-space prediction in Simplex from an average of the nearest neighbors to a linear regression fit to all neighbors, but localised with an exponential decay kernel. The exponential localisation function is $F(\theta) = \exp(-\theta d/\bar{d})$, where $d$ is the neighbor distance and $\bar{d}$ the mean distance.
In this way, depending on the value of $\theta$, neighbors close to the prediction origin point have a higher weight than those further from it, such that a local linear approximation to the nonlinear system is reasonable. This localisation ability allows one to identify an optimal local scale, in effect quantifying the degree of state dependence, and hence the nonlinearity of the system. Another feature of S-Map is that, for a properly fit model, the regression coefficients between variables have been shown to approximate the gradient (directional derivative) of variables along the manifold. These Jacobians represent the time-varying interaction strengths between system variables. The S-Map algorithm proceeds as follows: find the neighbors (all points in the library); compute the mean distance $\bar{d}$; compute the weights $w_i = \exp(-\theta d_i/\bar{d})$; form the reweighting matrix $W = \mathrm{diag}(w_i)$; form the design matrix $A$ whose rows are the neighbor state vectors, and the weighted design matrix $WA$; form the response vector $b$ whose entries are the neighbor values $T_p$ steps ahead, and the weighted response vector $Wb$; obtain the least-squares solution $\hat{c} = (WA)^{+}Wb$ via singular value decomposition (SVD); the resulting local linear model evaluated at $\mathbf{x}(t)$ is the prediction. Multivariate Embedding Multivariate Embedding recognizes that time-delay embeddings are not the only valid state-space construction. In Simplex and S-Map one can generate a state-space from observational vectors, or time-delay embeddings of a single observational time series, or both. Convergent Cross Mapping Convergent cross mapping (CCM) leverages a corollary to the generalized Takens theorem: it should be possible to cross predict, or cross map, between variables observed from the same system. Suppose that in some dynamical system involving variables $X$ and $Y$, $X$ causes $Y$. Since $X$ and $Y$ belong to the same dynamical system, their reconstructions (via embeddings) $M_X$ and $M_Y$ also map to the same system. The causal variable $X$ leaves a signature on the affected variable $Y$, and consequently the reconstructed states based on $Y$ can be used to cross predict values of $X$. CCM leverages this property to infer causality by predicting $X$ using the library of points in $M_Y$ (or vice versa for the other direction of causality), while assessing improvements in cross-map predictability as larger and larger random samplings of $M_Y$ are used. If the prediction skill increases and saturates as the entire $M_Y$ is used, this provides evidence that $X$ is causally influencing $Y$. Multiview Embedding Multiview Embedding is a dimensionality reduction technique in which a large number of candidate state-space time series vectors are combinatorially assessed towards maximal model predictability. Extensions Extensions to EDM techniques include: Generalized Theorems for Nonlinear State Space Reconstruction Extended Convergent Cross Mapping Dynamic stability S-Map regularization Visual analytics with EDM Convergent Cross Sorting Expert system with EDM hybrid Sliding windows based on the extended convergent cross-mapping Empirical Mode Modeling Variable step sizes with bundle embedding Multiview distance regularised S-map See also System dynamics Complex dynamics Nonlinear dimensionality reduction References Further reading External links Animations Online books or lecture notes EDM Introduction. Introduction with video, examples and references. Geometrical theory of dynamical systems. Nils Berglund's lecture notes for a course at ETH at the advanced undergraduate level. Arxiv preprint server has daily submissions of (non-refereed) manuscripts in dynamical systems. Research groups Sugihara Lab, Scripps Institution of Oceanography, University of California San Diego. Nonlinear systems Data modeling Predictive analytics Machine learning Nonlinear time series analysis
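As an illustration of the simplex algorithm described above, here is a compact NumPy implementation for a univariate series. The embedding dimension, the logistic-map test signal, and the library/query split are arbitrary choices for demonstration, not part of the method's definition.

```python
import numpy as np

def time_delay_embed(x, E, tau=1):
    """Takens-style delay embedding: rows are [x(t), x(t-tau), ..., x(t-(E-1)tau)]."""
    n = len(x) - (E - 1) * tau
    return np.column_stack(
        [x[(E - 1 - j) * tau : (E - 1 - j) * tau + n] for j in range(E)])

def simplex_forecast(lib, target, query, E):
    """Predict one step ahead from the E+1 nearest library neighbors."""
    d = np.linalg.norm(lib - query, axis=1)
    nn = np.argsort(d)[: E + 1]
    w = np.exp(-d[nn] / max(d[nn][0], 1e-12))   # exponential weights, scale = nearest
    return np.sum(w * target[nn]) / np.sum(w)

# Demo on the chaotic logistic map
x = np.empty(500); x[0] = 0.23
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

E = 3
emb = time_delay_embed(x, E)
lib, target = emb[:-1], x[E:]                   # pairs (state, next value)
pred = simplex_forecast(lib[:300], target[:300], emb[350], E)
print(f"predicted {pred:.4f}, actual {x[E + 350]:.4f}")
```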
Empirical dynamic modeling
[ "Mathematics", "Engineering" ]
1,299
[ "Machine learning", "Data modeling", "Nonlinear systems", "Data engineering", "Artificial intelligence engineering", "Dynamical systems" ]
78,523,860
https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Watts%20theorem
In mathematics, specifically homological algebra, the Eilenberg–Watts theorem tells when a functor between categories of modules is given by tensoring with a module. Precisely, it says that an additive functor $F\colon \mathrm{Mod}\text{-}R \to \mathrm{Ab}$ is right-exact and preserves coproducts if and only if it is naturally isomorphic to $- \otimes_R B$ for some left $R$-module $B$; one may take $B = F(R)$. For a proof, see The theorems of Eilenberg & Watts (Part 1) References Charles E. Watts, Intrinsic characterizations of some additive functors, Proc. Amer. Math. Soc. 11, 1960, 5–8. Samuel Eilenberg, Abstract description of some basic functors, J. Indian Math. Soc. (N.S.) 24, 1960, 231–234 (1961). Further reading Eilenberg-Watts theorem in nLab Homological algebra
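A sketch of the usual construction behind the theorem, written as a short LaTeX note; this is the standard argument in abbreviated form, not a full proof.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For $r \in R$, right multiplication $\rho_r\colon R \to R$ is a map of right
$R$-modules, so $F(\rho_r)$ endows $B = F(R)$ with a left $R$-action.
Define a natural transformation
\[
  \eta_M\colon M \otimes_R F(R) \to F(M), \qquad
  m \otimes b \mapsto F(\varphi_m)(b),
\]
where $\varphi_m\colon R \to M$, $r \mapsto mr$, is a map of right
$R$-modules. Then $\eta_R$ is an isomorphism, both functors are right exact
and preserve coproducts, and any $M$ admits a free presentation
$R^{(J)} \to R^{(I)} \to M \to 0$, so a five-lemma argument shows that
$\eta_M$ is an isomorphism for every $M$.
\end{document}
```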
Eilenberg–Watts theorem
[ "Mathematics" ]
170
[ "Algebra stubs", "Mathematical structures", "Theorems in algebra", "Fields of abstract algebra", "Category theory", "Mathematical problems", "Mathematical theorems", "Algebra", "Homological algebra" ]
78,526,139
https://en.wikipedia.org/wiki/Haliovirgaceae
Haliovirgaceae is a family of bacteria in the order Fusobacteriales. The family contains one genus: Haliovirga. Bacteria in this family are Gram-negative, mesophilic, anaerobic, and sulfur-reducing. See also List of bacteria genera List of bacterial orders References Microbiology Bacteria described in 2023
Haliovirgaceae
[ "Chemistry", "Biology" ]
74
[ "Bacteria stubs", "Microbiology", "Bacteria", "Microscopy" ]
78,529,880
https://en.wikipedia.org/wiki/Benapenem
Benapenem is an experimental antibiotic drug for the treatment of Gram-negative bacterial infections, such as those caused by Enterobacter. References Antibiotics Pyrrolidines Sulfonamide antibiotics Carboxamides Carboxylic acids
Benapenem
[ "Chemistry", "Biology" ]
49
[ "Biotechnology products", "Carboxylic acids", "Functional groups", "Antibiotics", "Biocides" ]
78,536,772
https://en.wikipedia.org/wiki/Enzyme%20memory
Enzyme memory is a concept in enzyme kinetics based on the idea that the kinetic properties of an enzyme may vary according to conditions in its previous catalytic cycle. It can occur both in ternary-complex mechanisms and in substituted-enzyme ("ping-pong") mechanisms, with very different consequences. Ternary-complex mechanism A mnemonical mechanism for a reaction A + B → products that proceeds through a ternary complex EAB is shown in the illustration at the right. The essential characteristic that makes this different from any mechanism in which substrate binding is at or close to equilibrium is that it contains both slow and fast steps, with the fast step preventing the binding from reaching equilibrium, because release of products is too rapid to allow this. The enzyme exists in two forms: as a free enzyme it exists as E′, but the form released at the end of the catalytic cycle is E. E′ is the form that exists during the catalytic reaction at low concentrations of the first substrate A, because substrate binding is too slow to prevent equilibration between the two forms of free enzyme. However, at high concentrations of A, EA is formed much more rapidly, and can be swept away too fast to allow E′ to be produced. In consequence the kinetic behaviour can vary with the substrate concentration, and deviations from Michaelis–Menten kinetics can result: negative cooperativity in the case of wheat-germ hexokinase, the enzyme for which the model was proposed, and positive cooperativity for liver hexokinase D. However, the mnemonical model is not the only possible explanation of such behaviour, and other authors have preferred a slow-transition mechanism for similar experimental data. The differences in predictions made by these two models are very small, making it difficult or impossible to distinguish between them. The idea that kinetic mechanisms could lead to properties that would be impossible for processes at equilibrium, such as cooperativity in monomeric enzymes, originated in a suggestion that the kinetic behaviour of phosphofructokinase could be explained by a non-equilibrium mechanism in which the two substrates could bind in either order, and in a more general suggestion of how kinetic cooperativity could arise in a one-substrate reaction. However, the absence of any experimental cases that seemed to require such models resulted in their being regarded as theoretical hypotheses rather than as practical mechanisms until the development of the mnemonical model. Substituted-enzyme mechanism A substituted-enzyme mechanism consists of two half reactions. In the first, a group G in a substrate AG is transferred to the enzyme E, which becomes EG (the "substituted enzyme"): E + AG → EG + A In the second half reaction the group G is transferred to the second substrate B, producing BG and regenerating the free enzyme E: EG + B → E + BG The complete reaction is thus AG + B → A + BG with E left unchanged. The substituted enzyme EG is expected to be exactly the same regardless of which of several possible substrates (AG, A′G, A′′G, etc.) donated the group G. One would expect, therefore, that the kinetics with respect to B would be the same regardless of the identity of AG. That is not, however, what was observed with rhodanese, or with ascorbate oxidase and aspartate aminotransferase. The reaction catalysed by ascorbate oxidase follows a triple-displacement mechanism, with two different substituted-enzyme forms, but it follows the same principles of enzyme memory.
Jarabak and Westley interpreted the results of these experiments to mean that in the first half reaction the substrate left an "imprint" on the enzyme that caused it to "remember" what it had been exposed to. Subsequently, similar effects have been observed with other enzymes, such as nitrate reductase from E. coli. References Enzyme kinetics Catalysis
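The classical expectation that enzyme memory violates can be made quantitative. For a simple irreversible ping-pong scheme with bimolecular binding steps (all rate constants below are purely illustrative), the steady-state rate is v = V/(1 + K_A/[A] + K_B/[B]), and the specificity constant V/K_B depends only on the second half-reaction, so it should be identical for every donor substrate. The sketch checks this prediction for two hypothetical donors; the scheme and numbers are a minimal model, not data from the cited studies.

```python
# Classical ping-pong scheme (illustrative rate constants, arbitrary units):
#   E + AG -k1-> EA  -k2-> EG + A    (first half-reaction, donor-dependent)
#   EG + B -k3-> EGB -k4-> E + BG    (second half-reaction, donor-independent)
# Steady state: v = V / (1 + K_A/[A] + K_B/[B]) with
#   V = e0*k2*k4/(k2+k4),  K_A = V/(e0*k1),  K_B = V/(e0*k3)

def pingpong_params(k1, k2, k3, k4, e0=1.0):
    V = e0 * k2 * k4 / (k2 + k4)
    return V, V / (e0 * k1), V / (e0 * k3)   # V, K_A, K_B

k3, k4 = 50.0, 200.0                          # shared second half-reaction
for donor, (k1, k2) in {"AG": (10.0, 80.0), "A'G": (3.0, 500.0)}.items():
    V, KA, KB = pingpong_params(k1, k2, k3, k4)
    print(f"{donor}: V={V:.1f}, K_B={KB:.2f}, V/K_B={V / KB:.1f}")
# V and K_B both change with the donor, but V/K_B = e0*k3 is the same for
# both; enzyme memory shows up experimentally as a donor-dependent V/K_B.
```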
Enzyme memory
[ "Chemistry" ]
819
[ "Catalysis", "Chemical kinetics", "Enzyme kinetics" ]
74,192,384
https://en.wikipedia.org/wiki/11%CE%B2-Hydroxydihydrotestosterone
11β-Hydroxydihydrotestosterone (11OHDHT) is an endogenous steroid. Although it may not have significant androgenic activity, it may still be an important precursor to androgenic molecules. Biological role 11OHDHT, along with other carbon-11-oxygenated (C11-oxy) steroids, 11-ketodihydrotestosterone (11KDHT) and 11-ketotestosterone (11KT), are androgen receptor (AR) agonists. The interconversion of C11-oxy C19 steroids, which includes 11OHDHT, was found to be more efficient than that of C11-oxy C21 steroids. 11OHDHT was also found to exhibit antagonism towards the progesterone receptor B (PRB), although it is not a pregnane (C21) steroid, highlighting the intricate interplay between receptors and active as well as "inactive" C11-oxy steroids. See also Dihydrotestosterone Steroid 11β-hydroxylase 11-Ketodihydrotestosterone 11β-Hydroxytestosterone External links An entry in the LIPID MAPS database: https://www.lipidmaps.org/databases/lmsd/LMST02020136 References 5α-Reduced steroid metabolites Androstanes Cyclopentanols Ketones
11β-Hydroxydihydrotestosterone
[ "Chemistry" ]
320
[ "Ketones", "Functional groups" ]
74,199,747
https://en.wikipedia.org/wiki/Bristol%20perambulation
The Bristol perambulation was a civic ritual, usually performed annually, in Bristol, England, from the sixteenth to the nineteenth centuries. Also called 'beating the bounds', it usually involved a party of civic officers (headed by the mayor and sheriffs) walking or riding around the land boundary of the city and county of Bristol. On the way they inspected the 'shirestones' (boundary markers) to ensure all were visible and in good order. Origin The first perambulation took place on 30 September 1373, following the granting of a royal charter to Bristol on 8 August that established it as a county in its own right, with its own sheriffs and county court. The first action required following this was for the boundary of the town's existing lands to be accurately surveyed and agreed by notable people from Bristol, Gloucestershire and Somerset. This resulted in a long textual description of the route taken, describing landmarks and places along the way, such as ditches, embankments and existing stone boundary markers. On 20 December 1373 this survey was enshrined in a royal charter under the Great Seal of England, defining the territory of the new county in law. The original charter survives in Bristol Archives. The route and the shirestones must have been subject to regular checks by the mayor or his officers over the following centuries to ensure that boundary stones had not been moved or disturbed. In an age before accurate maps or surveys, this was vital to ensure, for example, that nobody moved a boundary stone as a way of stealing land. It was also important to be clear about the exact boundary because county law officers only had jurisdiction within their county. So they would only be able to arrest or prosecute a highwayman, for example, if they were in their county. Early civic perambulations Perambulating the county boundary as a civic ritual is only clearly documented from the late sixteenth century. The earliest identified reference is found in the Mayor's Audit Book for 1584, where expenses relating to the costs involved are recorded. These included breakfast for the mayor and sheriffs. Following the inspection of the shirestones, the party spent the afternoon drinking a gallon of Madeira wine. Similar expenses are recorded in the city records of the seventeenth century. A typical entry, which has been published in full, is that for September 1628: Item paide for the Charges goeing about the Shirestones viz for ale & cakes at Jacobs Well ij.s.: vj.d., for labourers to open the wayes vj.s., for butter, cheese plums sugar, duckes carrienge the provisions and other thinges as per William Loydes note xvij.s.vij.d., for wyne at Robert Shewardes xxxj.s., for bread & cakes xj.s., and for sweete meates & comfittes to the widowe Patch xvj,s,.: all is iiij.li, vis. j.d. In some years the perambulation had to be abandoned or curtailed. For example, during the Great Plague outbreak of 1665-66, the mayor decided not to ride out to the 'Receipt House' (alias Conduit House) near Baptist Mills. This was because doing so would have required the party to twice pass by the pesthouse that had been established on the Forlorn Hope Estate. Other towns and cities, such as Norwich, also developed or regularised the perambulation of their town or city lands during the sixteenth century.
In part this was a response to the English Reformation, which had resulted in many religious processions being abolished because they were associated with saints' days or incorporated rituals or practices that were seen as papist. The historian Matthew Woodcock argues that city and town governments saw Perambulation Day as a way 'to actively stage a reaffirmation and celebration of communal identity'. Although these perambulations were centred around the civic elite, in some instances much larger crowds joined them. In Bristol, the perambulation typically took place between the election of the new mayor and sheriffs on 15 September and the commencement of their office at Michaelmas (29 September). The mayor and sheriffs typically served for just one year. Doing the perambulation at this time thus allowed the outgoing and incoming officers to beat the bounds together, surveying the land and the boundaries that were being passed on from one set of civic officers to the next. Writing in the late 18th century, the antiquarian William Barrett noted that in his time 'the circumference of the whole within the liberties as appears by the perambulation round it, (which to preserve its true limits and boundaries, is made annually, at choosing a new mayor) consists of seven miles two quarters and fifty-five pearch.' To 'satisfy the curious and inquisitive' Barrett then provides a seven-page description of the 'Bounds'. Earlier similar published descriptions of the bounds testify to the longstanding interest of the wider public in the exact route of the perambulation. This interest outlasted the creation of accurate maps and surveys of the complete county, such as that of John Rocque in 1743, which was the first to mark the county boundary in its entirety, including the location of each shirestone. Victorian and Edwardian perambulations From 1835 the boundary of Bristol was successively expanded through Acts of Parliament. This made the route longer and, as a result, perambulations only took place every few years. However, when they did take place, they could be very large-scale events. For instance, the newspaper reports of a perambulation in 1874 indicate that the perambulation of the land boundary took two days and involved about five hundred people. This was followed by a perambulation of the county's water boundary. That included the lower part of the River Avon and the southern half of the Severn Estuary extending to Steep Holm and Flat Holm. In 1900 there was another perambulation of the much enlarged county boundary. On 11 September 1900 it was reported in the press that 'the area to be covered renders the task so arduous that five days have to be set apart for the task, which will not be completed until Saturday evening.' The party was to 'proceed from stone to stone, and see that the stones were properly marked and placed on the boundary in such a way that disputes would be averted and trouble with owners of adjoining property avoided, and also to see that the rights of the city on the line of the boundary were upheld. They would be marshalled in something like processional order'. Great pains were taken to follow the exact route, in some cases passing through private houses, going through one window by ladder and out the other, or traversing walls. A reporter noted on the first day that at one point in Horfield 'a considerable length of wall had to be traversed.
In negotiating this part of the journey Alderman Dix had the misfortune to make an abrupt and unexpected descent through the roof of a fowl house'. Twenty-first century civic perambulations Bristol's civic perambulations died out during the twentieth century. There was one in 2007, during the mayoralty of Royston Griffey, which involved a perambulation of the water boundary. A civic perambulation of the medieval land boundary, led by the Deputy Lord Mayor and Deputy Sheriff, took place on 30 September 2023. This was to commemorate the 650th anniversary of the original perambulation. Perambulating the medieval county boundary today In July 2023 historian Evan Jones from the University of Bristol produced a free online map to allow ordinary people to perambulate the city according to its original 1373 boundary. This includes the location of the shirestones recorded in the 1736 survey and a route along public roads and rights of way that sticks as closely as possible to the original boundary. Most of the 1373 route remains public roads or paths today. However, wholesale redevelopment of parts of the city, such as Kingsdown and part of Redcliffe, means that buildings now block some of the early route, requiring diversions. The development of the city docks in the nineteenth century, with the creation of the New Cut, also forces some diversions in the Redcliffe/Bedminster area. All of Jones' route can be done on foot and most of it by bicycle. In June 2024, 'Arts Matter' at the University of Bristol published a two-minute video, featuring Jones, to promote the route and explain its background. Route Since the perambulation is a circular walk, it can be started from any point and walked in either direction. The route identified by Evan Jones is based on the 1736 description of the perambulation route. This began at the River Avon at the bottom of Jacobs Wells Road and included a separate numbering of the shirestones on the Gloucestershire border and the Somerset border. The numbers allocated in 1736 were given on maps of Bristol produced by George Ashmead in 1828 and 1855. Since boundary stones were added in various places between 1373 and 1736, to make the line of the boundary clearer, not all the stones described in 1736 were extant in 1373. References History_of_Bristol British traditions Ceremonies Borders Boundary markers Landscape history
Bristol perambulation
[ "Physics" ]
1,923
[ "Spacetime", "Borders", "Space" ]
74,199,840
https://en.wikipedia.org/wiki/Project%20Adam
Project Adam was a proposed plan by the United States Army for a manned, suborbital rocket flight. It was developed in 1958, in parallel with the United States Air Force's Project Manhigh, and was initially called Project Man Very High. The twin aims were to gather scientific data on high-altitude flight and to enhance national prestige in the wake of the successful launch of the Soviet Union's Sputnik 1. A further goal was to investigate the possibility of troop transport by ballistic missile. History The plan involved using off-the-shelf hardware to send a passenger on a steep ballistic flight from Cape Canaveral, with a splashdown in the North Atlantic. The launch vehicle would have been a modified Redstone Jupiter-C. The astronaut would have been housed in a capsule modelled on the USAF's Manhigh gondola, modified for a water landing, with no provision for manual control. At the apogee of the flight the astronaut would have experienced six minutes of weightlessness. The first manned flight would have been preceded by a series of flights involving primates. Project Adam was devised by the Army Ballistic Missile Agency, and was proposed to the Advanced Research Projects Agency on 11 July 1958. However, although Secretary of the Army Wilber M. Brucker backed the project, largely as a psychological demonstration, Deputy Secretary of Defense Donald A. Quarles believed that it had "about the same technical value as the circus stunt of shooting a young lady from a cannon". The plan was not formally approved, although after the formation of NASA on 29 July 1958 elements of the hardware were folded into Project Mercury. References Space research Human subject research in the United States Military projects of the United States
Project Adam
[ "Engineering" ]
349
[ "Military projects of the United States", "Military projects" ]
74,201,328
https://en.wikipedia.org/wiki/Six-dimensional%20holomorphic%20Chern%E2%80%93Simons%20theory
In mathematical physics, six-dimensional holomorphic Chern–Simons theory, or sometimes simply holomorphic Chern–Simons theory, is a gauge theory on a three-dimensional complex manifold. It is a complex analogue of Chern–Simons theory, named after Shiing-Shen Chern and James Simons, who first studied Chern–Simons forms, which appear in the action of Chern–Simons theory. The theory is referred to as six-dimensional because the underlying manifold of the theory is three-dimensional as a complex manifold, hence six-dimensional as a real manifold. The theory has been used to study integrable systems through four-dimensional Chern–Simons theory, which can be viewed as a symmetry reduction of the six-dimensional theory. For this purpose, the underlying three-dimensional complex manifold is taken to be the three-dimensional complex projective space $\mathbb{CP}^3$, viewed as twistor space. Formulation The background manifold $X$ on which the theory is defined is a complex manifold which has three complex dimensions and therefore six real dimensions. The theory is a gauge theory with gauge group a complex, simple Lie group $G$ with Lie algebra $\mathfrak{g}$. The field content is a partial connection $\bar{\mathcal{A}} \in \Omega^{0,1}(X, \mathfrak{g})$. The action is $$S = \frac{1}{2\pi i}\int_X \Omega \wedge \mathrm{tr}\Bigl(\bar{\mathcal{A}} \wedge \bar\partial \bar{\mathcal{A}} + \frac{2}{3}\,\bar{\mathcal{A}} \wedge \bar{\mathcal{A}} \wedge \bar{\mathcal{A}}\Bigr),$$ where $\Omega$ is a holomorphic (3,0)-form and $\mathrm{tr}$ denotes a trace functional which, as a bilinear form, is proportional to the Killing form. On twistor space P3 Here $X$ is fixed to be $\mathbb{CP}^3$. For application to integrable theory, the three-form $\Omega$ must be chosen to be meromorphic. See also Chern–Simons theory Four-dimensional Chern-Simons theory External links Holomorphic Chern–Simons theory nLab References Gauge theories Integrable systems
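For reference, varying the action written above yields the expected holomorphy condition; the short LaTeX derivation below is the standard computation under the stated conventions, not a result specific to any one reference.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Varying $\bar{\mathcal{A}} \mapsto \bar{\mathcal{A}} + \delta\bar{\mathcal{A}}$
and integrating by parts (boundary terms vanish on a closed $X$),
\[
  \delta S \propto \int_X \Omega \wedge \operatorname{tr}\!\left(
    \delta\bar{\mathcal{A}} \wedge
    \bigl(\bar\partial\bar{\mathcal{A}}
      + \bar{\mathcal{A}} \wedge \bar{\mathcal{A}}\bigr)\right)
  = \int_X \Omega \wedge \operatorname{tr}\!\left(
    \delta\bar{\mathcal{A}} \wedge F^{0,2}\right),
\]
so the equation of motion is $\Omega \wedge F^{0,2}(\bar{\mathcal{A}}) = 0$:
the $(0,2)$-part of the curvature vanishes wherever $\Omega \neq 0$, i.e.\ the
partial connection defines a holomorphic structure there.
\end{document}
```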
Six-dimensional holomorphic Chern–Simons theory
[ "Physics" ]
346
[ "Integrable systems", "Theoretical physics" ]
74,202,786
https://en.wikipedia.org/wiki/Dislon
A dislon is a quantized field associated with the quantization of the lattice displacement in crystalline solids. It is a localized collective excitation of a crystal dislocation. Description Dislons are special quasiparticles that emerge from the quantization of the lattice displacement field around a dislocation in a crystal. They exhibit unique particle statistics depending on the dimension of quantization. In one-dimensional quantization, dislons behave as bosonic quasiparticles. However, in three-dimensional quantization, the topological constraint of the dislocation leads to a breakdown of the canonical commutation relation, resulting in the emergence of two independent bosonic fields known as the d-field and f-field. Interaction Dislons interact with other particles such as electrons and phonons. In the presence of multiple dislocations, the electron-dislon interaction can affect the electrical conductivity of the system. The distance-dependent interaction between electrons and dislocations leads to oscillations in the electron self-energy away from the dislocation core. Applications The study of dislons provides insights into various phenomena in materials science, including the variation of superconducting transition temperatures in dislocated crystals. Dislons play a role in understanding the interaction between dislocations and phonons, affecting thermal transport properties in the presence of dislocations. See also Quasiparticle Special theory of relativity List of particles List of quasiparticles Strong interaction References Physical phenomena Condensed matter physics Quantum phases Quasiparticles Mesoscopic physics
Dislon
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
334
[ "Quantum phases", "Matter", "Physical phenomena", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Quasiparticles", "Mesoscopic physics", "Subatomic particles" ]