id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
58,333,370 | https://en.wikipedia.org/wiki/Salinirubellus%20salinus | Salinirubellus salinus is a halophile archaeal species. It was first isolated from a marine solar saltern in Zhejiang Province in China. It is the only known species in the genus Salinirubellus.
See also
List of Archaea genera
References
Monotypic archaea genera
Euryarchaeota
Taxa described in 2018
Archaea described in 2018 | Salinirubellus salinus | [
"Biology"
] | 76 | [
"Archaea",
"Archaea stubs"
] |
58,334,579 | https://en.wikipedia.org/wiki/Third-party%20cookies | Third-party cookies are HTTP cookies which are used principally for web tracking as part of the web advertising ecosystem.
While HTTP cookies are normally sent only to the server setting them or a server in the same Internet domain, a web page may contain images or other components stored on servers in other domains. Third-party cookies are the cookies that are set during retrieval of these components.
A third-party cookie thus can belong to a domain different from the one shown in the address bar, yet can still potentially be correlated to the content of the main web page, allowing the tracking of user visits across multiple websites.
This sort of cookie typically appears when web pages feature content from external websites, such as banner advertisements. Although not originally intended for this purpose, the existence of third party cookies opened up the potential for web tracking of a user's browsing history and is used by advertisers to serve relevant advertisements to each user. Third-party cookies are widely viewed as a threat to the privacy and anonymity of web users.
By 2020, all major web browser vendors had plans to phase out third-party cookies. This decision was reversed for Google Chrome in July 2024.
Mechanism
As an example, suppose a user visits www.example.org. This website contains an advertisement from ad.foxytracking.com, which, when downloaded, sets a cookie belonging to the advertisement's domain (ad.foxytracking.com). Then, the user visits another website, www.foo.com, which also contains an advertisement from ad.foxytracking.com and sets a cookie belonging to that domain (ad.foxytracking.com). Eventually, both of these cookies will be sent to the advertiser when loading their advertisements or visiting their website. The advertiser can then use these cookies to build up a browsing history of the user across all the websites that have ads from this advertiser, through the use of the HTTP referer header field.
Some websites have been found to set cookies readable by over 100 third-party domains. On average, a single website was setting 10 cookies, with the maximum number of cookies (first- and third-party) reaching over 800.
The older standards for cookies, RFC 2109 and RFC 2965, recommend that browsers should protect user privacy and not allow sharing of cookies between servers by default. However, a newer standard, RFC 6265, released in April 2011, explicitly allows user agents to implement whichever third-party cookie policy they wish; since the late 1990s, allowing third-party cookies has been the default policy implemented by most major browser vendors.
Privacy law and cookie consent dialogs
While useful for advertisers, web tracking is widely seen as a threat to personal privacy. This prompted the creation of laws against tracking without user consent, the most notable of which is the European GDPR.
This led to the creation of "cookie consent" dialogs, which rapidly became a standard feature across advertising-funded (and many other) websites, and notable for their use of dark patterns to attempt to force users to allow tracking by making it hard for them to refuse to grant consent.
Some websites also responded by simply geoblocking users from countries with privacy-friendly laws.
Most modern web browsers contain privacy settings that can block third-party cookies, and some now block all third-party cookies by default; as of July 2020, such browsers included Apple Safari, Firefox, and Brave. Safari allows embedded sites to use the Storage Access API to request permission to access their first-party cookies when the user interacts with them. In May 2020, Google Chrome 83 introduced new features to block third-party cookies by default in its Incognito mode for private browsing, with blocking optional during normal browsing. The same update also added an option to block first-party cookies. Google planned to start blocking third-party cookies by default in late 2024, and in January 2024 began this process with a pilot scheme in which blocking was implemented for 1% of all Chrome users.
Replacements
Since third-party-cookie-based web tracking was an essential part of the existing web advertising ecosystem, multiple proposals are being implemented to try to replace it.
Google proposes the use of browser-based interest targeting, in which users' interests can be recorded locally by the browser, and then signalled to advertising servers without directly revealing the user's identity. Google's Privacy Sandbox is one such implementation.
Other approaches include the use of browser fingerprinting to track users across sites, which is generally viewed as being as bad a threat to privacy as third-party cookies. There are also concerns that interest-based tracking may itself be abused to fingerprint users.
Circumvention of blocking of third party cookies
A number of methods exist for circumventing the blocking of third-party cookies. One is for the operator of a website to point a DNS name within the site's own domain at an advertiser's server, in effect making cookies set by that server first-party cookies from the viewpoint of the browser while still giving a third party control over the cookie information.
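The DNS-delegation technique described above (often called "CNAME cloaking") can be illustrated with a hypothetical zone-file fragment; all names here are invented for the example.

```
; Hypothetical zone-file fragment for example.org (names illustrative).
; "metrics.example.org" looks first-party to the browser, but the CNAME
; delegates it -- and any cookies set under it -- to the tracking provider.
metrics.example.org.   IN  CNAME  collect.tracker-provider.net.
```

Because the browser only sees the `example.org` suffix, cookies set by `collect.tracker-provider.net` via this name are treated as first-party and escape third-party-cookie blocking.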
Another approach is for the website operator to proxy traffic from the client to the tracking service's servers. As this would easily allow the website operator to serve false information to the tracking service, this is unlikely to be widely adopted.
References
Hypertext Transfer Protocol headers
Internet privacy
Tracking | Third-party cookies | [
"Technology"
] | 1,111 | [
"Tracking",
"Wireless locating"
] |
58,336,233 | https://en.wikipedia.org/wiki/Roger%20Needham%20Award | The Roger Needham award is a prize given scientists who are recognised for important contributions made to computer science research The British Computer Society established an annual Roger Needham Award in honour of Roger Needham in 2004. It is a £5000 prize is presented to an individual for making "a distinguished research contribution in computer science by a UK-based researcher within ten years of their PhD." The award is funded by Microsoft Research. The winner of the prize has an opportunity to give a public lecture.
Laureates
Since 2004, laureates have included:
2004 Jane Hillston on Tuning Systems: From Composition to Performance
2005 Ian Horrocks on Ontologies and the Semantic Web
2006 Andrew Fitzgibbon on Computer Vision & the Geometry of Nature
2007 Mark Handley on Evolving the Internet: Challenges, Opportunities and Consequences
2008 Wenfei Fan on A Revival of Data Dependencies for Improving Data Quality
2009 Byron Cook on Proving that programs eventually do something good
2010 on Timing is Everything
2011 Maja Pantić on Machine Understanding of Human Behaviour
2012 on Memory Safety Proofs for the Masses
2013 on Theory and Practice: The Yin and Yang of Intelligent Information Systems
2014 on Mining Biological Networks
2015 on Linking Form and Function, Computationally
2016 Sharon Goldwater on Language Learning in Humans and Machines: Making Connections to Make Progress
2017 on Many-Core Programming: How to Go Really Fast Without Crashing
2018 Alexandra Silva
2019
2020 Jade Alglave
See also
List of computer science awards
References
Awards established in 2004
British Computer Society
British science and technology awards
2004 establishments in the United Kingdom
Computer science awards | Roger Needham Award | [
"Technology"
] | 310 | [
"Science and technology awards",
"Computer science",
"Computer science awards"
] |
58,337,152 | https://en.wikipedia.org/wiki/NGC%206055 | NGC 6055 is a barred lenticular galaxy located about 450 million light-years away in the constellation Hercules. The galaxy was discovered by astronomer Lewis Swift on June 8, 1886. It also a member of the Hercules Cluster and is a LINER galaxy.
See also
List of NGC objects (6001–7000)
References
External links
6055
57076
Hercules (constellation)
Hercules Cluster
Astronomical objects discovered in 1886
Barred lenticular galaxies
LINER galaxies
10191 | NGC 6055 | [
"Astronomy"
] | 93 | [
"Hercules (constellation)",
"Constellations"
] |
58,337,706 | https://en.wikipedia.org/wiki/Polyurethane%20urea%20elastomer | The polyurethane urea elastomer (PUU), or poly (urethane urea) elastomer, is a flexible polymeric material that is composed of linkages made out of polyurethane and polyurea compounds. Due to its hyperelastic properties, it is capable of bouncing back high-speed ballistic projectiles as if the material had “hardened” upon impact. PUUs were developed by researchers from the U.S. Army Research Laboratory (ARL) and the Army’s Institute for Soldier Nanotechnology at the Massachusetts Institute of Technology (MIT) to potentially replace polyethylene materials in body armor and other protective gear, such as combat helmets, face shields, and ballistic vests.
Composition
In general, PUUs are composed of both hard and soft segments that each play a role in the material’s physical properties. The soft segments consist of two types of chemical compounds, long-chain polyols and diisocyanates, that react and connect together with urethane linkages. On the other hand, the short-chain diamines react with the diisocyanates to form the hard segments that are held together with urea linkages. The mechanical properties of the PUU largely depend on the specific diisocyanates, long-chain polyols, and short-chain diamines in play, because how these components interact determines how well the soft and hard segments of the elastomers both crystallize and undergo microphase separation. As a result, variations in this molecular arrangement of chemical compounds have been shown to greatly affect the elastomer’s morphology and the macroscopic, mechanical properties that it exhibits.
Hyperelastic behavior
In 2017, researchers from the Army Research Laboratory and MIT reported that PUUs are capable of demonstrating hyperelastic properties, meaning that the material becomes extremely hardened upon being deformed within a very short time. As a result, the material may withstand ballistic impacts at exceptionally high speeds.
For the study, the researchers investigated the performance of different PUU variants where 4,4’-dicyclohexylmethane diisocyanate (HMDI) was chosen as the diisocyanate compound, diethyltoluenediamine (DETA) as the short-chain diamine, and poly(tetramethyleneoxide) (PTMO) as the long-chain polyol. Despite consisting of the same chemical compounds with the same stoichiometric ratio of 2:1:1 of [HMDI]:[DETA]:[PTMO], the samples differed in the molecular weight of their respective PTMO component, namely , , and , for the soft segments of the elastomers.
Each of the three samples were subjected to a laser-induced projectile impact test (LIPIT), which tested the dynamic response of the material by using a pulsed laser to shoot it with microparticles made of silica at speeds ranging from . The researchers found that the sample with the PTMO was the most rigid variant with the particle exhibiting a shallow penetration of about upon impact despite travelling at before rebounding at . In contrast, the sample with the PTMO displayed a deeper penetration of about , but had a slower particle rebound of , making it the most rubber-like among the PUU samples. The strain-rates associated with these impacts were on the order of 2.0 x 10^8/s for the former and 8.1 x 10^7/s for the latter.
However, all three PUU variants demonstrated rebound capabilities with no signs of post-mortem damage after impact from the microparticles. In contrast, when the LIPIT was performed on a ductile, glassy polycarbonate at similar speeds to that of the PTMO PUU variant, the polycarbonate displayed predominant deformation upon impact, despite its high fracture toughness and ballistic strength. According to the researchers, the effectiveness of the PUUs may come from how the molecules "resonate" similar to chain-mail upon impact, with oscillations at specific frequencies dissipating the absorbed energy. In comparison, the polycarbonate lacked the broad range of relaxation times, a characteristic that reflects how efficiently the molecules in the polymer chains respond to an external impulse, that PUUs are known to have. As a result, the researchers concluded that even the most rubber-like variant of the PUU, specifically the PTMO sample, demonstrated greater robustness and dynamic stiffening than the glassy polycarbonate.
ARL researchers have stated that the PUU’s primary benefit comes not from its extra strength but its fabric-like flexibility, which demonstrates its potential as a replacement material for the rigid ceramic and metal plates generally found in military battle armor. However, as of 2018, the PUU is still under development in the testing phase.
References
External links
PU Polyurethane Open Belts
Military technology
Polyurethanes
Plastics
Elastomers
Body armor | Polyurethane urea elastomer | [
"Physics",
"Chemistry"
] | 1,033 | [
"Synthetic materials",
"Unsolved problems in physics",
"Elastomers",
"Amorphous solids",
"Plastics"
] |
58,340,340 | https://en.wikipedia.org/wiki/Model%20V | The Model V was among the early electromechanical general purpose computers, designed by George Stibitz and built by Bell Telephone Laboratories, operational in 1946.
Only two machines were built: the first was installed at the National Advisory Committee for Aeronautics (NACA, later NASA), the second (1947) at the US Army’s Ballistic Research Laboratory (BRL).
Construction
Design was started in 1944. The tape-controlled (Harvard architecture) machine had two (design allowed for a total of six) processors ("computers") that could operate independently, an early form of multiprocessing.
The Model V weighed about .
Significance
Inspired Richard Hamming to investigate automatic error correction, which led to the invention of Hamming codes
One of the early electromechanical general purpose computers
First American machine and first George Stibitz design to use floating-point arithmetic
Had an early form of multiprocessing
Had a very primitive form of an operating system, albeit in hardware. A separate hardware control unit existed to direct the sequence of computer operations.
Model VI
Built and used internally by Bell Telephone Laboratories, operational in 1949.
Simplified version of the Model V (only one processor, about half the relays) but with several improvements, including one of the earliest uses of microcode.
Bibliography
pdf
Further reading
References
External links
Bell Labs
1940s computers
AT&T computers
Computer-related introductions in 1946
Electro-mechanical computers | Model V | [
"Technology"
] | 288 | [
"Computing stubs",
"Computer hardware stubs"
] |
58,340,400 | https://en.wikipedia.org/wiki/United%20States%20Lake%20Survey | The United States Lake Survey (USLS) was a hydrographic survey for the Great Lakes, New York Barge Canal, Lake Champlain and the Boundary Waters of the Canada–United States border between Minnesota and Ontario. The Survey's activities began on 31 March 1841, with the goal of surveying the Great Lakes. The Lake Survey was created within the United States Army Topographical Engineers (later the United States Army Corps of Engineers). Like the Commerce Department's United States Coast and Geodetic Survey, the Lake Survey had responsibility for the preparation and publication of nautical charts and other navigational aids. By 1882, the Survey had completed the original Congressional mandate, producing 76 charts, then disbanded. By 1901, the original survey and charting products required revision. The Lake Survey was reconstituted and its mission expanded. In addition to traditional survey, charting, and navigation information responsibilities, the Lake Survey was also responsible for studies on lake levels and associated river flow.
Early history (1841–56)
The United States Lake Survey was created on 31 March 1841 by an act of Congress, appropriating $15,000 for a United States Army Corps of Topographical Engineers led survey of the Great Lakes. William G. Williams was appointed the first commander of the survey. He was assisted by Howard Stansbury, James H. Simpson, Joseph E. Johnston, Thomas J. Cram and I. Carle Woodruff. They were headquartered at the mouth of the Buffalo River. In the first summer, a detailed topographical survey of Mackinac Island was completed, reconnaissance surveys in the northern part of Lake Michigan were made and a site for a baseline near the entrance to Green Bay was selected and partly cleared.
The first four years of the survey largely dealt with the baseline at Green Bay, and building triangulation stations. Surveying work was additionally done on Lakes Michigan, St. Clair, and Erie, and at the Straits of Mackinac. To conduct hydrographic surveys, in 1843, an iron steamer named the Abert (after John James Abert) was built for the survey. Flaws in the ship's design were soon discovered, and it was overhauled and renamed Surveyor in early 1845. By the end of 1845, all harbors besides those in Lake Superior had been surveyed.
James Kearney replaced Williams in 1845, relocating the survey to Detroit. Work was temporarily halted for most of the Mexican–American War. After the war, work resumed and the west end of Lake Erie was completed in 1849. John N. Macomb took command of the lake survey in 1851. It greatly increased in size and appropriations. It published its first charts in 1852, covering all of Lake Erie. During the 1852–55 seasons, the areas surveyed by the Lake Survey included the Straits of Mackinac and the approaches to either side of Mackinac Island, part of the north end of Lake Michigan, all of the St. Marys River, and a few harbors on Lake Superior. As a result of this work the Lake Survey published three new charts. A second steamer, the Jefferson Davis (named after Jefferson Davis), was launched in 1856, and soon renamed Search.
Meade years (1857–61)
On 20 May 1857, Kearney was replaced by George Meade. Meade completed the survey of Lake Huron during the 1857–59 seasons and completed the survey of Saginaw Bay as well. He surveyed the Fox and Manitou Islands, and Grand and Little Traverse Bays. The Lake Survey completed a few local harbor surveys on Lake Superior by 1859 and began a general survey of the western end of that lake in 1861. He oversaw a dramatic expansion in the survey, including the construction of an observatory in Detroit and the first systematic recording of lake water levels.
In 1859, a network of 19 meteorological stations around the Lakes were completed. From 1858 through 1861, the federal appropriations for the Lake Survey grew to $75,000 annually. Meade would later write that he considered his early work on the lakes survey as among the most important duties of his extensive career. Upon the outbreak of the American Civil War in 1861, he offered his services to the Union Army.
Completion of original survey
James D. Graham replaced Meade in 1861, and led the survey through much of the Civil War, during which it was the only active topographical field office still operating. During the war, the department still increased, with appropriations rising to $125,000 in 1865. Surveys were completed of Portage Entry on Keweenaw Bay, and the waters of the Keweenaw Waterway. In 1863, the survey formally became part of the United States Army Corps of Engineers. In 1864, William F. Reynolds replaced Graham. As a result of the work on Lake Superior, eight new charts of that lake were published between 1865 and 1873. The charts were given away for free to mariners. Between October 1861 and October 1865, 15,210 navigational charts had been distributed to Great Lakes mariners, bringing the number issued since 1852 to 30,120. In 1869, distribution was further expanded as the Lake Survey was authorized to sell surplus charts for the first time.
In the spring of 1867, a program of river flow measurement was created. Three years later, Reynolds was removed from his command. Cyrus B. Comstock replaced him. Comstock oversaw the completion of a larger observatory, the publishing of a first complete set of charts. The surveys of Lake Michigan were completed in 1874, Lake Superior in 1874, Lake Ontario in 1875 and Lake Erie in 1877. Lake St. Clair and Lake Champlain were completed in 1871. During the summer of 1873 and the following winter, a complete survey of the city of Detroit and the Detroit River was made. A survey of the St. Lawrence began during 1871 at the boundary line near St. Regis, New York, and ended at the head of the river on Lake Ontario in 1873. The survey work on the Mississippi, for which Congress appropriated $16,000 in 1876, began in Cairo, Illinois, and was completed at the mouth of the Arkansas River in 1879.
The survey was officially completed in 1882, when the original mandate was completed, with 76 charts produced. A historian at the time said:
Re-establishment
In the decade after the original survey was ended, it became clear that the charts were not sufficient; for example, since the deepest draft vessels used in the Great Lakes in the mid-late 1800s drew only of water, the Survey's charts only showed depths of or less. Though work on charts had continued in the time between the survey, the Lake Survey was officially re-formed on 9 January 1901, having been part of the Detroit office of the Army Corps of Engineers since 1889.
For the first time, the Survey published maps in color, and the Great Lakes Bulletin. The charts and bulletin were increasingly requested. The Survey collaborated with the United States Geological Survey, and resumed field surveys, resurveying Apostle Islands and vicinity on Lake Superior, the St. Lawrence River, and northern Lake Michigan and the Straits of Mackinac. Several more steamers were acquired to cope with increased workload. Areas that needed the most work were identified, being the east end of Lake Superior and the waters around Isle Royale; the southern end of Lake Michigan; the Straits of Mackinac; both ends of Lake Erie; and the east end of Lake Ontario including the head of the St. Lawrence River. In addition, along the shores of the Lakes in inadequately surveyed areas, sounding and sweeping were necessary. Specifically, these areas were the south shore of Lake Superior, Grand and Little Traverse Bays, the Keweenaw Peninsula, the west shore of Lake Michigan, and the south and west shores of Lake Huron. It worked on those projects for the next 30 years.
The Survey worked on several water diversion projects, including that undertaken by the Niagara Falls Hydraulic Power and Manufacturing Company (in 1906) and water diversion into the Chicago Sanitary and Ship Canal (1912), both times it was called upon to study the effect the diversion would have. On 4 March 1911, its jurisdiction was expanded to include the New York State Barge Canal System and the areas between Lake Superior and Lake of the Woods, which included the Boundary Waters. After several other expansions, in 1914, it became responsible for "an inland waterway system extending nearly halfway across the continental United States". During World War I, the Lake Survey printed recruiting posters, charts and maps for the areas outside the Great Lakes, and other items requested by the War Department. By the end of 1918, it had printed and distributed 573,000 charts of the Great Lakes. Mason Patrick was one of the commanders during this time.
Work continued throughout the 1920s. By 1922, the Lake Survey was distributing 123 different charts. The Superior Shoal was discovered. Funding shrunk at the onset of the Great Depression, but aerial photography was introduced. In 1936, a total of over 1 million charts was reached. To support new equipment and new surveys, funding for the Lake Survey was expanded in the 1930s. The project begun in 1907 was completed, and resurveying began in 1937.
World War II and later work
Upon the outbreak of World War II, the survey began working in various war capacities. It published a "Submarine Training Chart of Upper Lake Michigan" pamphlet. The Lake Survey, with its cartographic and lithographic specialists, directed a major portion of the military's mapping activity. It took over and consolidated the former WPA cartographic units in New York, Chicago, and Detroit on 1 June 1942. It published 370 tons of maps, producing 8,109 different charts and maps, printing and distributing 9,190,000 copies to the armed forces. It was also responsible for Mosaic Mapping Unit in Detroit and the Military Grid Unit at New York City, accounting for another 885 separate mosaic maps with more than 3,128,000 copies being printed. The survey received the Army-Navy "E" Award for its work.
After World War II, work on the survey continued. During the shipping season of 1948, it began work on an experimental "radar chart". The mapping unit also continued to do some work for the War Department, particularly during the Korean War. Electronic surveying methods were also tested. The Survey grew further in 1962 with the establishment of the Great Lakes Research Center. The Center conducted strong programs in coastal engineering and water resources. It was involved heavily in the building of the Saint Lawrence Seaway. Growth of the survey continued, reaching over two million dollars of appropriations in 1968.
By Reorganization Plan Number 4 of 1970, effective October 3, 1970, the Lake Survey Office was transferred to the newly established National Oceanic and Atmospheric Administration (NOAA), where it was redesignated the Lake Survey Center and assigned to the National Ocean Survey. Various activities of the Lake Survey Center were transferred to other NOAA organizations between 1974 and 1976, and the Lake Survey Center was abolished on June 30, 1976.
References
Sources
Hydrography
Great Lakes
Field surveys | United States Lake Survey | [
"Environmental_science"
] | 2,221 | [
"Hydrography",
"Hydrology"
] |
58,342,337 | https://en.wikipedia.org/wiki/Global%20Research%20Identifier%20Database | Global Research Identifier Database (GRID) is a database of educational and research organizations worldwide, created and maintained by Digital Science & Research Solutions Ltd., part of the technology company Digital Science. In 2021 public releases of the database were discontinued in favor of Research Organization Registry (ROR) as the leading open organization identifier.
Each organization is assigned a unique GRID ID and there is a corresponding web address and page for each ID in the database. The dataset contains the institution's type, geo-coordinates, official website, and Wikipedia page. Name variations of institutions are included, as well.
The first public release of GRID occurred on 22 September 2015, and it contained entries for institutes. The 30th public release of GRID was on 27 August 2018, and the database contained entries. It is available in the Resource Description Framework (RDF) specification as linked data, and can therefore be linked to other data. GRID models two types of relationships: a parent-child relationship that defines a subordinate association, and a related relationship that describes other associations.
In December 2016, Digital Science released GRID under a Creative Commons CC0 licence — without restriction under copyright or database law.
The database is available for download as a ZIP archive, which includes the entire database in JSON and CSV file formats.
From all the sources from which it draws information, including funding datasets, Digital Science claims that GRID covers 92% of institutions.
Data sources
Example
The GRID ID for NASA is grid.238252.c.
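Since the release ZIP ships the whole database as JSON, a lookup like the NASA example above can be sketched in a few lines. The field names ("institutes", "id", "name", "relationships") follow the public release format but should be treated as assumptions and checked against an actual download; the child ID shown is invented for the example.

```python
# Minimal sketch of indexing a GRID JSON release by GRID ID.
import json

# Tiny in-memory stand-in for the grid.json file from the release ZIP.
sample = json.loads("""
{
  "institutes": [
    {"id": "grid.238252.c",
     "name": "NASA",
     "relationships": [{"type": "child", "id": "grid.419075.e"}]}
  ]
}
""")

# Index every institute by its GRID ID for constant-time lookup.
by_id = {inst["id"]: inst for inst in sample["institutes"]}

record = by_id["grid.238252.c"]
assert record["name"] == "NASA"
# "child" entries model the subordinate (parent-child) relationships
# described above; "related" entries cover other associations.
children = [r["id"] for r in record["relationships"] if r["type"] == "child"]
```

For a real release, `sample` would instead be loaded from the extracted `grid.json`, and the same index can be built over the full list of institutes.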
References
External links
Creative Commons-licensed databases
Identifiers
Library cataloging and classification
Open data
Metadata
Semantic Web | Global Research Identifier Database | [
"Technology"
] | 331 | [
"Metadata",
"Data"
] |
58,343,144 | https://en.wikipedia.org/wiki/Nissan%20Deliatitz | Nissan ben Avraham Deliatitz () was a 19th-century Russian rabbi and mathematician.
He wrote Keneh Ḥokhmah, a manual of algebra in five parts, published in Vilna and Grodno in 1829. The work received approbations from Rabbi David, the av beit din of Novhardok, and Rabbi Avraham Abele ben Avraham Shlomo Poswoler, an eminent scholar who headed the Vilna beit din.
References
Mathematicians from the Russian Empire
Algebraists
19th-century rabbis from the Russian Empire
Jewish scientists | Nissan Deliatitz | [
"Mathematics"
] | 120 | [
"Algebra",
"Algebraists"
] |
66,338,875 | https://en.wikipedia.org/wiki/Churchwardens%27%20accounts | Churchwardens' accounts are a form of record maintained by the churchwardens of a parish church where expenses, activities, and events of the parish are recorded. Churchwardens' accounts are sometimes found in association with the parish register, which records ritual matters. These records have been extensively utilized to study European history, particularly during the medieval period and the English Reformation. England has the highest proportion of surviving churchwardens' accounts.
Description
The churchwarden, the oldest officer position within Christian parish churches, was generally elected by an urban congregation once a year at Easter. According to historian Beat Kümin, a churchwarden's role was analogous to that of a chief executive officer, with lay congregants comprising the parish's "shareholders" and the masters or feoffees comprising the parish's "board". Among their duties was managing the parish's accounts. The accounts recorded both the expenses and income of the parish, often indicating which parishioners were renting from the parish. Churchwardens were also responsible for annually certifying the accuracy of parish registers before they were submitted to the bishop. Churchwardens' accounts are sometimes found in association with parish registers.
Churchwardens' accounts appear in medieval and post-Reformation Europe, including both Catholic and Church of England parishes. On the British Isles, churchwardens' accounts are most prevalent in England, followed by Wales and Ireland, but are not readily found in Scotland. Overall, England has the highest proportion of churchwardens' accounts. The Borthwick Institute for Archives' collection of accounts dates from the late 14th century through to the 1980s. The 17th century produced an increasing number of those now held in that collection, with the majority coming from the 18th and 19th centuries. While some English accounts were made in Latin into the 18th century, the majority were written in the vernacular.
Historical significance
The role of churchwardens' accounts in the study of life within particular parishes has been significant. In the context of English history near the beginning of the 16th century, Kümin described them as "promis[ing] unrivalled insights into the public lives of the vast majority of the population", as the one thing most Englishmen had in common at the time was that "they were parishioners". Historian Clive Burgess criticised the usage of churchwardens' accounts, saying that historians with agendas engaged in uncritical acceptance of churchwardens' accounts and that the role of the churchwarden within parochial governance had become overstated. The value of churchwardens' accounts to genealogists is diminished relative to other parish records as the accounts were rarely indexed.
In England, the historical value of churchwardens' accounts has seen efforts to establish a database collecting them to improve accessibility and encourage their utilization. In 2012, researchers at the Warwick Network for Parish Research's annual conference called for the establishment of such a database.
Reprinted editions of churchwardens' accounts have been produced. Churchwardens' accounts have been used extensively by historian Eamon Duffy in his books The Stripping of the Altars (1992) and The Voices of Morebath. In the case of The Voices of Morebath, Duffy extensively relied upon the 16th-century accounts of Sir Christopher Trychay, the vicar of Morebath's parish, which had been reprinted. Patrick Collinson criticised the "misleading, if conventional" characterisation of Trychay's records as "churchwarden's accounts", as they were a broader set of records beyond those generally maintained by churchwardens.
References
Catholic canonical documents
Christian manuscripts
Genealogy
Manuscripts by type
Medieval manuscripts | Churchwardens' accounts | [
"Biology"
] | 744 | [
"Phylogenetics",
"Genealogy"
] |
66,341,566 | https://en.wikipedia.org/wiki/Adrian%20Constantin | Adrian Constantin (born 22 April 1970) is a Romanian-Austrian mathematician who does research in the field of nonlinear partial differential equations. He is a professor at the University of Vienna and has made groundbreaking contributions to the mathematics of wave propagation. He is listed as an ISI Highly Cited Researcher with more than 160 publications and 11,000 citations.
Life and career
Adrian Constantin was born in Timișoara, Romania, where he studied at the Nikolaus Lenau High School. He was later educated at the University of Nice Sophia Antipolis (BSc 1991, MSc 1992) and at New York University (NYU), where he got his PhD in 1996 under Henry McKean with the thesis "The Periodic Problem for the Camassa–Holm equation". He did post-doctoral work at the University of Basel and at the University of Zurich.
After a short period as a lecturer at the University of Newcastle upon Tyne, he became a professor at the University of Lund in 2000, and then was Erasmus Smith's Professor of Mathematics at Trinity College Dublin (TCD) from 2004 to 2008, and was made a fellow in 2005. Since then he has been university professor for partial differential equations at the University of Vienna, and also had a chair at King's College London during the period 2011-2014.
Constantin specializes in the role of mathematics in geophysics using nonlinear partial differential equations to mathematically model currents and waves in the oceans and in the atmosphere. These flows and waves play an important role in the El Niño climate phenomenon and in natural disasters such as tsunamis. His approach takes into account the fact that the surface of the earth is curved and the importance of the Coriolis force.
Awards and honours
2000: Highly Cited Researcher with more than 160 publications and 11,000 citations
2005: Göran Gustafsson Prize from the Royal Swedish Academy of Sciences
2007: Friedrich Wilhelm Bessel Research Prize from the Alexander von Humboldt Foundation
2009: Fluid Dynamics Research Prize from the Japan Society of Fluid Mechanics
2010: Advanced Grant from the European Research Council (ERC)
2012: Plenary lecture at the European Congress of Mathematicians (ECM) in Krakow
2020: Wittgenstein Award from The Austrian Ministry for Science
2022: Elected corresponding member of the Austrian Academy of Sciences, 22 April 2022
2022: Elected member of the German National Academy of Sciences Leopoldina, 16 March 2022
2022: Made an honorary citizen of the city of Timișoara
2024: Elected full member of the Austrian Academy of Sciences, 15 April 2024
Selected publications
papers
1998: Wave breaking for nonlinear nonlocal shallow water equations (with J. Escher), Acta Mathematica 181 229–243.
1999: A shallow water equation on the circle (with H. P. McKean), Comm. Pure Appl. Math. 52 949–982.
2000: Stability of peakons (with W. Strauss), Comm. Pure Appl. Math. 53 603–610.
2004: Exact steady periodic water waves with vorticity (with W. Strauss), Comm. Pure Appl. Math. 57 481–527.
2006: The trajectories of particles in Stokes waves, Invent. Math. 166 523–535.
2007: Global conservative solutions of the Camassa-Holm equation (with A. Bressan), Arch. Ration. Mech. Anal. 183 215–239.
2011: Analyticity of periodic traveling free surface water waves with vorticity (with J. Escher), Ann. of Math. 173 559–568.
2016: Global bifurcation of steady gravity water waves with critical layers (with W. Strauss and E. Varvaruca), Acta Mathematica 217 195–262.
2019: Equatorial wave-current interactions (with R. I. Ivanov), Comm. Math. Phys. 370 1–48.
2022: On the propagation of nonlinear waves in the atmosphere (with R. S. Johnson), Proceedings of the Royal Society A 478 (2260), 20210895
2022: Stratospheric planetary flows from the perspective of the Euler equation on a rotating sphere (with P. Germain), Arch. Ration. Mech. Anal. 245 587–644.
Books
2011: "Nonlinear Water Waves with Applications to Wave-Current Interactions and Tsunamis", Society for Industrial and Applied Mathematics, Philadelphia,
2016: "Fourier Analysis. Part 1. Theory", London Mathematical Society, Cambridge University Press,
2024: "Analysis I", Springer Spektrum, Berlin, Heidelberg,
References
External links
Adrian Constantin's homepage
Literature by and about Adrian Constantin in the catalog of the German National Library
1970 births
Living people
People from Timișoara
Austrian mathematicians
Romanian mathematicians
New York University alumni
Côte d'Azur University alumni
Romanian emigrants to Austria
Mathematical analysts
Partial differential equation theorists
Fluid dynamicists
Academics of Trinity College Dublin
Corresponding Members of the Austrian Academy of Sciences
Fellows of Trinity College Dublin
Professorships at King's College London
Academic staff of the University of Vienna | Adrian Constantin | [
"Chemistry",
"Mathematics"
] | 1,051 | [
"Mathematical analysis",
"Fluid dynamicists",
"Mathematical analysts",
"Fluid dynamics"
] |
66,341,767 | https://en.wikipedia.org/wiki/Task%20Force%20on%20Process%20Mining | The IEEE Task Force on Process Mining (TFPM) is a non-commercial association for process mining. The IEEE (Institute of Electrical and Electronics Engineers) Task Force on Process Mining was established in October 2009 as part of the IEEE Computational Intelligence Society at the Eindhoven University of Technology.
The task force is supported by over 80 organizations and has around 750 members. The main goal of the task force is to promote the research, development, education, and understanding of process mining.
About
In 2012, the IEEE World Congress on Computational Intelligence / IEEE Congress on Evolutionary Computation held a session on process mining. Process mining is a research area combining computational intelligence and data mining with process modeling and analysis.
Activities and organization
The Task Force on Process Mining has a Steering Committee and an Advisory Board. The Steering Committee, chaired by Wil van der Aalst from its inception in 2009, defined 15 action lines. These include the organization of the annual International Conference on Process Mining (ICPM) series, standardization efforts leading to the IEEE XES standard for storing and exchanging event data, and the Process Mining Manifesto, which was translated into 16 languages. The Task Force on Process Mining also publishes a newsletter, provides data sets, organizes workshops and competitions, and connects researchers and practitioners.
In 2016, the IEEE Standards Association published the IEEE Standard for Extensible Event Stream (XES), which is a widely accepted file format by the process mining community.
As of 2023, Boudewijn van Dongen serves as chair of the Steering Committee. Wil van der Aalst and Moe Wynn both serve as vice-chair of the Steering Committee.
See also
Process mining
Business process management
References
Further reading
Aalst, W. van der (2016). Process Mining: Data Science in Action. Springer Verlag, Berlin ().
Reinkemeyer, L. (2020). Process Mining in Action: Principles, Use Cases and Outlook. Springer Verlag, Berlin ().
Information science
Computer occupations
Computational fields of study
Data analysis | Task Force on Process Mining | [
"Technology"
] | 413 | [
"Computational fields of study",
"Computer occupations",
"Computing and society"
] |
66,342,149 | https://en.wikipedia.org/wiki/Convergence%20research | Convergence research aims to solve complex problems employing transdisciplinarity. While academic disciplines are useful for identifying and conveying coherent bodies of knowledge, some problems require collaboration among disciplines, including both enhanced understanding of scientific phenomena as well as resolving social issues. The two defining characteristics of convergence research include: 1) the nature of the problem, and 2) the collaboration among disciplines.
Definition
The foundational report "Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science" (Roco et al. 2002 and 2003) and the article "Coherence and Divergence of Megatrends in Science and Engineering" (Roco MC, 2002) were published in 2002, followed by the international report "Convergence of Knowledge, Technology and Society: Beyond Convergence of Nano-Bio-Info-Cognitive Technologies" (Roco et al. 2013) and "Principles and Methods that Facilitate Convergence" (Roco 2016).
In 2016, convergence research was identified by the National Science Foundation as one of 10 Big Ideas for future investments. As defined by NSF, convergence research has two primary characteristics, namely:
"Research driven by a specific and compelling problem. Convergence research is generally inspired by the need to address a specific challenge or opportunity, whether it arises from deep scientific questions or pressing societal needs.
Deep integration across disciplines. As experts from different disciplines pursue common research challenges, their knowledge, theories, methods, data, research communities and languages become increasingly intermingled or integrated. New frameworks, paradigms or even disciplines can form sustained interactions across multiple communities."
The National Research Council published a report on "Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond" in 2014.
An illustration of the implementation of convergence principles in the National Nanotechnology Initiative was published in 2013.
An illustration of the application of convergence to health, science and engineering research was published in 2016.
Examples of convergence research
Biomedicine
Advancing healthcare and promoting wellness to the point of providing personalized medicine will increase health and reduce costs for everyone. While recognizing the potential benefits of personalized medicine, critics cite the importance of maintaining investments in public health as highlighted by the approaches to combat the COVID-19 pandemic.
Cyber-physical systems
The internet of things allows all people, machines, and infrastructure to be monitored, maintained, and operated in real-time, everywhere. Because the United States Government is one of the largest users of "things", cybersecurity is critical to any effective system.
STEMpathy
Jobs that utilize skills in science, technology, engineering, and mathematics to provide care for human welfare through the use of empathy have been described as creating value with "hired hearts". Thomas Friedman coined the term "STEMpathy" to describe these jobs.
Sustainability
Beyond recycling, the goal of achieving zero waste means designing a closed loop of the material and energy necessary to operate the built environment. Individuals and organizations, including corporations and governments, increasingly are committing to achieving zero waste.
References
Biomedicine
Computer systems
Sustainability | Convergence research | [
"Technology",
"Engineering",
"Biology"
] | 627 | [
"Computer engineering",
"Biomedicine",
"Computer systems",
"Computer science",
"Computers"
] |
66,342,842 | https://en.wikipedia.org/wiki/Claude%20Langlois | Claude Langlois (c. 1700 – 1756) was a French maker of precision scientific instruments and the foremost among them in the period. His instruments included draughtsman's tools like an improved pantograph, measuring instruments, and six-foot quadrants for astronomical angle measurement. He was appointed official instrument maker for French astronomers Cassini II, Cassini de Thury, Le Monnier, Maupertuis, and the Abbé de Lacaille; and held the official position of ingénieur en instruments de mathématiques for the French Académie des Sciences in 1740.
Little is known of Langlois' life, but he was considered the most famous maker of scientific instruments between 1730 and 1756, and many of his instruments are known from his name on them. This was a period when English instrument makers were leading, with master instrument makers like Nicolas Bion and Michael Butterfield. Langlois' earliest known contract was for a six-foot wall quadrant for the Paris observatory, with markings that indicate that he worked at the Niveau on the Quai de l'Horloge. He also produced instruments for use in labs (including those of Lavoisier), by surveyors, navigators and astronomers. His improved pantograph design was sent to the Académie des Sciences for approval. His instruments were sent on geodesic expeditions to Peru and Lapland in 1733-35, which included measuring standards for the toise (the length standard then in use). In 1744 he was in charge of restoring a gnomon at the Church of St Sulpice, Paris. After his death, his nephew Jacques Canivet produced eighty copies of the toise. His position at the Academy was taken by a pupil of his, Lennel.
References
External links
Description et usage du pantographe, autrement appelé singe , changé & perfectionné par C. Langlois, ingénieur du Roi & de l'Académie royale des sciences pour les instrumens de mathématiques (1744)
A set of instruments in the History of Science Museum
French scientific instrument makers
1756 deaths
History of science and technology
Year of birth uncertain | Claude Langlois | [
"Technology"
] | 439 | [
"History of science and technology"
] |
66,342,900 | https://en.wikipedia.org/wiki/H-object | In mathematics, specifically homotopical algebra, an H-object is a categorical generalization of an H-space, which can be defined in any category with a product and an initial object . These are useful constructions because they help export some of the ideas from algebraic topology and homotopy theory into other domains, such as in commutative algebra and algebraic geometry.
Definition
In a category with a product and initial object , an H-object is an object together with an operation called multiplication and a two-sided identity. The structure of an H-object implies there are maps (a unit and a multiplication) which satisfy the commutation relations of a two-sided identity.
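The elided formulas can be sketched as follows; the symbols (X for the object, mu for the multiplication, e for the unit, * for the initial object) are editorial choices, since the original notation was lost in extraction.

```latex
% Sketch of the H-object structure maps (notation editorial):
\mu \colon X \times X \longrightarrow X, \qquad
e \colon * \longrightarrow X,
% with the two-sided identity expressed, under the canonical
% isomorphisms $* \times X \cong X \cong X \times *$, by
\mu \circ (e \times \mathrm{id}_X) \;=\; \mathrm{id}_X \;=\; \mu \circ (\mathrm{id}_X \times e).
```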
Examples
Magmas
All magmas with units are H-objects in the category .
H-spaces
Another example of H-objects are H-spaces in the homotopy category of topological spaces .
H-objects in homotopical algebra
In homotopical algebra, one class of H-objects was considered by Quillen while constructing André–Quillen cohomology for commutative rings. For this section, let all algebras be commutative, associative, and unital. If we let be a commutative ring, let be the undercategory of such algebras over (meaning -algebras), and set be the associated overcategory of objects in , then an H-object in this category is an algebra of the form where is a -module. These algebras have addition and multiplication operations. Note that the multiplication map given above gives the H-object structure . In addition, we have the other two structure maps given by , giving the full H-object structure. These objects have the following property: , giving an isomorphism between the -derivations of to and morphisms from to the H-object . In fact, this implies is an abelian group object in the category, since it gives a contravariant functor with values in abelian groups.
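The elided operations are presumably the standard square-zero extension formulas. Writing A for the base ring, B for the A-algebra, and M for the B-module (names editorial, not from the source), a plausible reconstruction is:

```latex
% Square-zero extension structure on B \oplus M (notation editorial):
(b, m) + (b', m') = (b + b',\; m + m'), \qquad
(b, m) \cdot (b', m') = (b\,b',\; b\,m' + b'\,m),
% and the stated property relating derivations to maps into the H-object:
\operatorname{Der}_A(B, M) \;\cong\; \operatorname{Hom}_{A\text{-}\mathrm{Alg}/B}\!\big(B,\; B \oplus M\big).
```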
See also
André–Quillen cohomology
Cotangent complex
H-space
References
Category theory
Homotopical algebra | H-object | [
"Mathematics"
] | 430 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
66,345,351 | https://en.wikipedia.org/wiki/H.%20P.%20Newsholme | Henry Pratt Newsholme (27 August 1885 – 21 December 1955) D.M., F.R.C.P., D.P.H. was a British physician and writer.
Newsholme was born at Secunderabad, India. He was the son of Rev. B. Pratt Newholme and was educated at Brighton Grammar School (1903) and Balliol College, Oxford (1907).
He obtained a B.Sc. with first-class honours in physiology from the natural science school in Oxford. He graduated B.M., B.Ch. from St Thomas's Hospital Medical School in 1910 and became a member of the Royal College of Physicians the same year. In 1911, he obtained the D.P.H. of the English Royal Colleges and took the D.M. in 1915. He was admitted F.R.C.P in 1927.
Newsholme was house-physician at St. Thomas's and clinical assistant at the Evelina Children's Hospital. He was assistant medical officer of health at Brighton Borough Fever Hospital. He was a captain in the R.A.M.C. and served in France and Italy (1915–1918). He was professor of hygiene and public health at Birmingham University (1937–1941) and was appointed medical officer of health of Birmingham. He held this position for twenty-three years, until 1950.
He authored medical works which stressed the importance of mind on the body. Newsholme held deep religious views which he promoted in several books, Evolution and Redemption (1933), Christian Ethics and Social Health (1937) and Matter, Man, and Miracle (1951). He was a theistic evolutionist.
Newsholme was received into the Catholic Church in 1939 with his wife, whom he had married in 1914. He was elected president of the Midland Catholic Medical Society in 1949. He had three sons and two daughters. He died aged 70 at his home in Harborne.
Selected publications
Preventive Medicine and the Healthy Mind (1926)
Health, Disease and Integration (1929)
Evolution and Redemption (1933)
Christian Ethics and Social Health (1937)
Matter, Man, and Miracle (1951)
References
1885 births
1955 deaths
20th-century British medical doctors
Alumni of Balliol College, Oxford
British medical writers
Fellows of the Royal College of Physicians
People from Secunderabad
Royal Army Medical Corps officers
Theistic evolutionists | H. P. Newsholme | [
"Biology"
] | 487 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
66,345,561 | https://en.wikipedia.org/wiki/Gap%20surface%20plasmon | A gap surface plasmon (or gap plasmon) is a guided electromagnetic wave which propagates in a transparent medium located between two extremely close metallic regions. Propagating in a gap between metals forces light to propagate partially inside the metallic regions, causing the gap plasmon to slow down.
The velocity of the gap-plasmon can be modulated by changing the thickness of the gap even by a few nanometers.
A gap plasmon is a guided mode, a solution of Maxwell's equations without a source. It is the form under which light propagates inside an extremely thin gap between two metals (of the same nature or not). As a gap plasmon, the electromagnetic wave can propagate four to five times slower than in vacuum. Such a guided mode only exists for magnetic fields parallel to the interfaces (p polarization). The distance between the metallic regions typically has to be smaller than 50 nm in order to noticeably slow the guided mode. The gap plasmon propagates partially inside the metal: its field penetrates the metal to a depth of typically 25 nm, called the skin depth. A slow guided mode presents a short effective wavelength and thus a very large wave vector (denoted kx when the wave propagates along an Ox axis). As the thickness of the dielectric region decreases, the gap plasmon is slowed by the metal and its effective index (as well as its wave vector) increases, while its effective wavelength shrinks.
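The slowdown described above can be illustrated numerically. The sketch below solves the standard textbook dispersion relation for the symmetric metal-insulator-metal gap mode by plain bisection; the permittivity values (a lossless metal with eps_m = -28, roughly silver in the near infrared, and a dielectric with eps_d = 2.25), the 800 nm wavelength, and the function name are illustrative assumptions, not values from the article.

```python
import math

def gap_plasmon_n_eff(gap_nm, lam0_nm=800.0, eps_d=2.25, eps_m=-28.0):
    """Effective index of the symmetric metal-insulator-metal gap mode.

    Solves  tanh(k_d t / 2) = -eps_d k_m / (eps_m k_d)  for the
    propagation constant beta, with k_i = sqrt(beta^2 - eps_i k0^2).
    Lossless permittivities are an illustrative simplification.
    """
    k0 = 2 * math.pi / (lam0_nm * 1e-9)
    t = gap_nm * 1e-9
    n_d = math.sqrt(eps_d)

    def f(beta):
        k_d = math.sqrt(beta**2 - eps_d * k0**2)
        k_m = math.sqrt(beta**2 - eps_m * k0**2)
        return math.tanh(k_d * t / 2) + eps_d * k_m / (eps_m * k_d)

    # f < 0 just above the dielectric light line, f > 0 at large beta,
    # so bisection brackets the single bound-mode root.
    lo, hi = 1.000001 * n_d * k0, 100 * k0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / k0   # n_eff = beta / k0
```

Shrinking the gap from 30 nm to 10 nm raises the effective index (and shortens the effective wavelength), as the article describes.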
Devices based on gap plasmons, such as resonators, have a typical size of the order of the effective wavelength. Gap-plasmon resonators are in general small compared to the wavelength of light in vacuum. Such miniaturization is particularly sought after in plasmonics.
Applications
Gap plasmon resonators:
They can be obtained by self-assembly of chemically synthesized nanocubes or by lithography. A gap plasmon resonator is a cavity for the guided mode: the wave is reflected back and forth inside the resonator.
Such structures (see picture) present a very small volume compared to the wavelength in vacuum (which allows a very large Purcell effect to be reached). Such resonators can then be used to design metasurfaces, fabricate reflection holograms, or for subwavelength color printing.
Example : chemically synthesized silver nanocubes on a gold layer, separated by polymer (see picture).
Electro-optical modulators:
Electro-optical modulators are designed to modulate a light signal, i.e. they act on the characteristics of a light beam (such as its wavelength, polarization state or intensity) to encode a signal. Gap-plasmon-based modulators are the smallest existing modulators. Losses are reduced thanks to this small size. They operate over a large frequency range; in fact, the upper frequency limit of such devices is currently beyond the reach of electronic measuring devices.
References
Electromagnetism
Metamaterials
Plasmonics | Gap surface plasmon | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 654 | [
"Plasmonics",
"Electromagnetism",
"Physical phenomena",
"Metamaterials",
"Materials science",
"Surface science",
"Fundamental interactions",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
66,345,698 | https://en.wikipedia.org/wiki/EyeHarp | The EyeHarp is an electronic musical instrument controlled by the player's eye or head movements. It combines eye tracking hardware and specially designed software, which has one component for defining chords and arpeggios, and another to change those definitions and play melodies. People with severely impaired motor function can use this instrument to play music or as an aid to learning or composition.
History
The idea for the EyeHarp was born in 2010 when a friend of musician and computer scientist Zacharias Vamvakousis was involved in a serious motorcycle accident which left him quadriplegic. Vamvakousis noticed a distinct lack of accessible musical instruments for people with disabilities, so he began designing the EyeHarp to create opportunities for people with physical disabilities to make music. The development of the EyeHarp started in 2011 in Barcelona under the auspices of Pompeu Fabra University.
In 2019, Vamvakousis founded the EyeHarp association, a non-profit organisation which works to give people with disabilities access to cheap musical education and assistive technology.
See also
Disability in the arts
References
External links
The EyeHarp as covered by ERT, Greece's public broadcaster
The EyeHarp Organisation
The EyeHarp at Pompeu Fabra University
Electronic musical instruments
Electronic
Audio engineering | EyeHarp | [
"Engineering"
] | 264 | [
"Electrical engineering",
"Audio engineering"
] |
66,345,891 | https://en.wikipedia.org/wiki/Causal%20pie%20model | In the field of epidemiology, the causal mechanisms responsible for diseases can be understood using the causal pie model.This conceptual model was introduced by Ken Rothman to communicate how constellations of component causes can lead to a sufficient cause to lead to a condition of interest and that reflection on these sets could improve epidemiological study design. A set of proposed causal mechanisms are represented as pie charts where each pie in the diagram represent a theoretical causal mechanism for a given disease, which is also called a sufficient cause. Each pie is made up of many component factors, otherwise known as component causes represented by sectors in the diagram. In this framework, each component cause represents an event or condition required for a given disease or outcome. A component cause that appears in every pie is called a necessary cause as the outcome cannot occur without it.
References
Causal diagrams
Epidemiology | Causal pie model | [
"Environmental_science"
] | 176 | [
"Epidemiology",
"Environmental social science"
] |
66,346,107 | https://en.wikipedia.org/wiki/Diamond%E2%80%93Dunmore%20system | The Diamond–Dunmore system was an early blind landing system developed by Harry Diamond and Francis Dunmore at the National Bureau of Standards in the late 1920s. It was similar to the beam landing systems being developed in the UK and Germany shortly thereafter, but had the added advantage that the directional signal was automatically decoded and displayed on a cockpit indicator, rather than requiring the attention of a radio operator. It also added an optional vertical guidance system to provide a glideslope indication. In spite of the advanced nature of the system, or perhaps because of it, the system does not appear to have been widely used. In contrast, the simpler Lorenz system was widely deployed in Europe.
Description
The lateral guidance system used two signals on a single carrier frequency around 330 kHz that was amplitude modulated at slightly different frequencies, 65 and 86.7 Hz. The two signals were broadcast from a crossed coil antenna placed at the far end of the active runway, with each coil receiving one of the two signals. The resulting broadcast pattern formed two large ellipses, slightly overlapping along the centerline of the runway. As the aircraft approached the airport, the aircraft's conventional voice radio would be tuned to the carrier frequency and begin to receive the mixed signal. The system was designed so that the signals would become usable at about range.
The output of the aircraft radio was sent into a low-pass filter that split out the navigation signal on the way to the user's headphones, so it was not audible. The low-frequency portion of the signal was then sent into a panel instrument containing two vibrating-reed frequency meters, one tuned to 65 and the other to 86.7 Hz. If the aircraft was properly aligned off the end of the runway, it would receive equal amounts of each signal, and the two reed meters would indicate equal strength. If the aircraft were to one side of the runway, the signal on that side would be stronger, and the reed meter would show a larger displacement. Visually they were indicated as two vertical white rectangles side by side on a black background. Stronger signals caused the white area to grow vertically, so the pilot could easily tell if they were properly aligned by comparing the size of the two white areas. If they were not aligned, turning towards the longer bar would put them back on course.
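The comparison the reed meters perform can be sketched numerically. The 65 and 86.7 Hz tones are from the article; everything else below (sample rate, the toy antenna model, all names) is an invented illustration, with a single-bin Fourier sum standing in for each vibrating reed.

```python
import math

F_A, F_B = 65.0, 86.7          # the system's two modulation tones (Hz)
FS, N = 2000, 2000             # 1 s of demodulated audio (assumption)

def received_signal(offset):
    """Demodulated audio for a given lateral offset.  Being off to one
    side strengthens that side's tone (a crude stand-in for the two
    overlapping elliptical antenna lobes)."""
    a_a = 1.0 + max(0.0, -offset)    # offset < 0: deeper in lobe A
    a_b = 1.0 + max(0.0, offset)     # offset > 0: deeper in lobe B
    return [a_a * math.sin(2 * math.pi * F_A * n / FS)
            + a_b * math.sin(2 * math.pi * F_B * n / FS)
            for n in range(N)]

def reed_reading(samples, freq):
    """Amplitude of one modulation tone via a single-bin Fourier sum,
    playing the role of one vibrating-reed meter."""
    re = sum(s * math.cos(2 * math.pi * freq * n / FS)
             for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * n / FS)
             for n, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / len(samples)

def indicator(samples):
    """The two 'white bars' the pilot compares."""
    return reed_reading(samples, F_A), reed_reading(samples, F_B)
```

On the centerline the two bars come out equal; any lateral offset makes the corresponding bar longer, reproducing the comparison described above.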
A second highly-directional transmitter was placed from the end of the active runway, the "boundary marker". This was a highly directional short-range signal on the same 330 kHz frequency but modulated at a higher frequency and aimed perpendicular to the runway midline. As the aircraft approached the runway, a brief signal would be received from this transmitter which would bypass the low-pass filter to cause a tone to play in the pilot's headphones, while also overriding the guidance signal and causing the reed indicators to jump. This indicated the aircraft should begin its approach.
The system also included an optional signal operating on 93.7 MHz that provided vertical guidance, a feature that competing systems like the Lorenz beam generally lacked. This was a single signal broadcast in a tightly focused beam placed at the far end of the runway that was tilted upward at an 8 degree angle and powered so it would become usable at about . This tight broadcast pattern was only possible because the carrier frequency was extremely high for the era. It required a separate specialized receiver in the aircraft, in contrast to the main signal, which could be used with any contemporary radio.
The vertical indicator was simpler than the lateral, consisting simply of an ammeter connected to the output of the radio. The pilot would approach the runway at 1000 feet altitude over the ground using the lateral signal and then listen for the boundary marker signal to indicate they were within range of the vertical signal. They would then adjust the ammeter so its needle was centered in the display. From then on, the approach of the aircraft towards the transmitter would cause the signal to grow in strength according to the inverse square law. This would naturally cause the needle to rise in the dial, which the pilot would correct by lowering their altitude. The result is a parabolic approach to the runway.
References
Aircraft landing systems
History of air traffic control
Navigational aids
Radio navigation
Runway safety | Diamond–Dunmore system | [
"Technology"
] | 854 | [
"Aircraft instruments",
"Aircraft landing systems"
] |
66,346,687 | https://en.wikipedia.org/wiki/Alexander%E2%80%93Hirschowitz%20theorem | The Alexander–Hirschowitz theorem shows that a specific collection of double points in the will impose independent types of conditions on homogenous polynomials and the hypersurface of with many known lists of exceptions. In which case, the classic polynomial interpolation that is located in several variables can be generalized to points that have larger multiplicities.
References
Mathematical theorems | Alexander–Hirschowitz theorem | [
"Mathematics"
] | 76 | [
"Mathematical theorems",
"Mathematical problems",
"nan"
] |
66,347,717 | https://en.wikipedia.org/wiki/Minister%20for%20Building | The Minister for Building is a minister in the Government of New South Wales with responsibility for building across New South Wales, Australia.
History
Building Materials
During World War II building controls had been exercised by the Commonwealth government. A secondary industries section had been established in the Premier's department in 1944 with responsibility for developing manufacturing industries, and was transferred in 1945 to the Department of Labour and Industry. The functions of the section were to keep the Department informed about development and decentralisation of secondary industries, and to provide information, advice and assistance to those contemplating the establishment of new industries or the expansion and technical development of existing industries in NSW. The section was responsible for the development and progressive implementation of various plans for industrial development, contact with overseas industries, negotiation for the establishment of factories in Australia, and movement towards the more rational and economic grouping of inter-related industries. The section worked co-operatively with Commonwealth and other NSW agencies concerned with the development and decentralisation of secondary industries, and maintained contact with manufacturers for the purposes of information exchange, fostering expansion and efficiency, and encouraging maximum employment.
With post-war reconstruction, control over building materials returned to the state governments. Controls continued to be necessary in the post-war environment to ensure that State planning priorities (including the demands of population growth) were achieved and scarce resources were allocated equitably. The controls introduced by the Building Operations and Building Materials Control Act 1946 included requiring consent for building operations except those exempted under the Act; preventing architects, builders, contractors and engineers from commencing buildings which were unauthorised, and requiring them to conform to any conditions placed on the building authorisation. Local Government powers to approve building applications were subject to the Act. Consent was required to use bricks except for purposes defined by the Act. Restrictions were placed on the supply of a range of other building materials. Inspectors could visit building sites and places where building materials were manufactured, stored, sold or distributed and require the production of relevant records. This was initially administered by the Building Materials Branch of the Department of Labour and Industry and timber distribution staff of the Forestry Commission. In June 1947 these staff were transferred to the new department of building materials. A technical branch was established to stimulate and develop the various activities allied to the building industry, and to ensure the training of skilled tradesmen to enable the State's housing program to be achieved. The branch also controlled all building materials such as bricks, cement products, and timber. The various branches which had combined to create the Department were operationally restricted to coastal districts while the new department's responsibilities covered the entire State. Bricks, for example, were not permitted to be used for the construction of fences or garages.
The principal responsibility of the Minister was the development, availability, production and standard of building materials, particularly bricks, tiles and baths. The portfolio was established in the second McGirr ministry in May 1947, carved out of the responsibilities of the Minister for Labour and Industry. Additional responsibility for the encouragement and regulation of manufacturing, referred to as secondary industries, was added in November 1947, and the title of the portfolio was amended to reflect the additional responsibilities in March 1948.
On 4 November 1947 the secondary industries division was transferred from the Premier's Department to the Department of Building Materials in order to achieve co-ordination between industrial and housing development. Despite the additional responsibilities, the portfolio remained named Building Materials until June 1950, when it was renamed Minister for Secondary Industries and Minister for Building Materials.
On 15 August 1952 William Dickson resigned from the ministry and was elected President of the Legislative Council. The portfolio was abolished, with responsibility for secondary industries returning to the Premier, while building materials returned to the responsibility of the Minister for Labour and Industry. Manufacturing was next represented at a portfolio level as Minister for Industrial Development and Decentralisation.
Infrastructure
Infrastructure was first represented at a portfolio level in the fourth Carr ministry, combined with Planning. The minister, Craig Knowles, also held the portfolio of Natural Resources and was responsible for the Department of Infrastructure, Planning and Natural Resources. The government's stated purpose in establishing a combined department was:
to form one department for the purpose of making integrated decisions about natural resource management and land use planning; that is to bring the social, economic and environmental agendas together to promote sustainability;
improve service delivery and provide clear, concise and co-ordinated information to customers;
to simplify policy and regulation to resolve confusion and duplication;
to reduce costs and redirect savings back to the community;
to link decisions about vital infrastructure with the broader plans for NSW; and
to devolve decision making to the communities that those decisions affect.
Infrastructure was established as a separate portfolio in the first Iemma ministry; however, it was responsible for neither a department nor legislation. The portfolio was combined with Planning in the O'Farrell ministry before being split into separate portfolios in the first Baird ministry. The portfolio was then combined with Transport in the second Baird ministry, before being abolished in the second Berejiklian ministry and subsumed into Transport.
The portfolio was recreated in the second Perrottet ministry. In that ministry, from December 2021 to March 2023, the minister was responsible for Barangaroo and Infrastructure NSW. It was one of six ministries in the transport sector, and the Minister (for Infrastructure, Cities and Active Transport) worked with the Minister for Transport, the Minister for Metropolitan Roads and the Minister for Regional Transport and Roads. Together they administered the portfolio through the Department of Transport (Transport for NSW) and a range of other government agencies that coordinate funding arrangements for transport operators, including hundreds of local and community transport operators.
List of ministers
See also
List of New South Wales government agencies
References
External links
Transport for New South Wales
Infrastructure
Building
Infrastructure ministers | Minister for Building | [
"Engineering"
] | 1,155 | [
"Construction",
"Building"
] |
67,857,634 | https://en.wikipedia.org/wiki/IC%201919 | IC 1919 is an elliptical galaxy in the constellation of Fornax. It is 61 million light years distant from Earth and it is a member of Fornax Cluster, a cluster of approximately 200 galaxies.
It was discovered by Lewis Swift on November 25, 1897. Its diameter, based on its distance and apparent size in the night sky, is 23,000 light years, which is only about a quarter, or probably less, of the diameter of the Milky Way Galaxy.
See also
IC 1913
NGC 1399, central galaxy of Fornax Cluster
NGC 1427A
References
Elliptical galaxies
1919
Fornax | IC 1919 | [
"Astronomy"
] | 115 | [
"Fornax",
"Constellations"
] |
67,858,685 | https://en.wikipedia.org/wiki/Cuneocytheridae | Cuneocytheridae is a family of ostracods belonging to the order Podocopida.
Genera:
Cuneocythere Lienenklaus, 1894
Dicrorygma Poag, 1962
References
Ostracods | Cuneocytheridae | [
"Biology"
] | 51 | [
"Animals",
"Animal stubs"
] |
67,858,994 | https://en.wikipedia.org/wiki/Iterative%20rational%20Krylov%20algorithm | The iterative rational Krylov algorithm (IRKA), is an iterative algorithm, useful for model order reduction (MOR) of single-input single-output (SISO) linear time-invariant dynamical systems. At each iteration, IRKA does an Hermite type interpolation of the original system transfer function. Each interpolation requires solving shifted pairs of linear systems, each of size ; where is the original system order, and is the desired reduced model order (usually ).
The algorithm was first introduced by Gugercin, Antoulas and Beattie in 2008. It is based on a first order necessary optimality condition, initially investigated by Meier and Luenberger in 1967. The first convergence proof of IRKA was given by Flagg, Beattie and Gugercin in 2012, for a particular kind of system.
MOR as an optimization problem
Consider a SISO linear time-invariant dynamical system, with input u(t) and output y(t):

E x′(t) = A x(t) + b u(t),  y(t) = c^T x(t).

Applying the Laplace transform, with zero initial conditions, we obtain the transfer function G(s) = c^T (sE − A)^{-1} b, which is a fraction of polynomials:

G(s) = p(s)/q(s).

Assume G is stable. Given r < n, MOR tries to approximate the transfer function G by a stable rational transfer function G_r of order r:

G_r(s) = c_r^T (sE_r − A_r)^{-1} b_r.

A possible approximation criterion is to minimize the absolute error in the H2 norm:

G_r = argmin ‖G − Ĝ‖_{H2}, over stable Ĝ of order r.

This is known as the H2 optimization problem. This problem has been studied extensively, and it is known to be non-convex, which implies that usually it will be difficult to find a global minimizer.
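Spelled out in display form (a reconstruction in the standard notation of the IRKA literature, since the article's inline formulas were lost in extraction), the H2 optimization problem reads:

```latex
\min_{\substack{G_r \ \text{stable} \\ \operatorname{deg} G_r = r}}
  \left\| G - G_r \right\|_{\mathcal{H}_2},
\qquad
\left\| G \right\|_{\mathcal{H}_2}
  := \left( \frac{1}{2\pi} \int_{-\infty}^{\infty}
     \left| G(\mathrm{i}\omega) \right|^2 \, \mathrm{d}\omega \right)^{1/2} .
```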
Meier–Luenberger conditions
The following first order necessary optimality condition for the H2 problem is of great importance for the IRKA algorithm.
Note that the poles of the reduced transfer function are the eigenvalues of the reduced system matrix.
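A hedged statement of the Meier–Luenberger conditions in standard notation (the symbol names are assumptions, as the original formulas did not survive extraction): if $G_r(s) = \sum_{i=1}^{r} \phi_i / (s - \hat{\lambda}_i)$, with simple poles $\hat{\lambda}_i$, is an $\mathcal{H}_2$-optimal approximation of $G$, then $G_r$ must be a Hermite interpolant of $G$ at the mirror images of its own poles:

```latex
G(-\hat{\lambda}_i) = G_r(-\hat{\lambda}_i),
\qquad
G'(-\hat{\lambda}_i) = G_r'(-\hat{\lambda}_i),
\qquad i = 1, \dots, r .
```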
Hermite interpolation
A Hermite interpolant G_r of the rational function G, through r distinct points σ_1, …, σ_r, has components:

E_r = W^T E V,  A_r = W^T A V,  b_r = W^T b,  c_r = V^T c,

where the matrices V = [v_1, …, v_r] and W = [w_1, …, w_r] may be found by solving dual pairs of linear systems, one for each shift σ_i [Theorem 1.1]:

(σ_i E − A) v_i = b,  (σ_i E − A)^T w_i = c,  i = 1, …, r.
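Restating the construction in display form (a sketch in standard projection notation; the symbols V, W, σ_i are assumed names where the text's own symbols were lost):

```latex
v_i = (\sigma_i E - A)^{-1} b,
\qquad
w_i = (\sigma_i E - A)^{-T} c,
\qquad i = 1, \dots, r .
```

With $V = [v_1, \dots, v_r]$ and $W = [w_1, \dots, w_r]$, the reduced model $E_r = W^T E V$, $A_r = W^T A V$, $b_r = W^T b$, $c_r = V^T c$ matches both $G(\sigma_i)$ and $G'(\sigma_i)$ at every shift.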
IRKA algorithm
As can be seen from the previous section, finding a Hermite interpolant of G through given points is relatively easy. The difficult part is to find the correct interpolation points. IRKA tries to iteratively approximate these "optimal" interpolation points.
For this, it starts with r arbitrary interpolation points σ_1, …, σ_r (closed under conjugation), and then, at each iteration m, it imposes the first order necessary optimality condition of the H2 problem:
1. find the Hermite interpolant G_r of G, through the current shift points: σ_1^m, …, σ_r^m.
2. update the shifts by using the poles of the new G_r: σ_i^{m+1} = −λ_i, where λ_1, …, λ_r are the poles of the new G_r.
The iteration is stopped when the relative change in the set of shifts of two successive iterations is less than a given tolerance. This condition may be stated as:

|σ_i^{m+1} − σ_i^m| / |σ_i^m| < tol,  i = 1, …, r.
As already mentioned, each Hermite interpolation requires solving r shifted pairs of linear systems, each of size n:

(σ_i E − A) v_i = b,  (σ_i E − A)^T w_i = c,  i = 1, …, r.

Also, updating the shifts requires finding the poles of the new interpolant G_r. That is, finding the eigenvalues of the reduced matrix E_r^{-1} A_r.
Pseudocode
The following is pseudocode for the IRKA algorithm [Algorithm 4.1].
algorithm IRKA
input: A, E, b, c, tol > 0, σ_1, …, σ_r closed under conjugation
v_i = (σ_i E − A)^{-1} b, i = 1, …, r % Solve primal systems
w_i = (σ_i E − A)^{-T} c, i = 1, …, r % Solve dual systems
while relative change in {σ_i} > tol
V = [v_1, …, v_r], W = [w_1, …, w_r]
E_r = W^T E V, A_r = W^T A V % Reduced order matrix
σ_i = −λ_i(E_r^{-1} A_r), i = 1, …, r % Update shifts, using poles of G_r
v_i = (σ_i E − A)^{-1} b, i = 1, …, r % Solve primal systems
w_i = (σ_i E − A)^{-T} c, i = 1, …, r % Solve dual systems
end while
E_r = W^T E V, A_r = W^T A V, b_r = W^T b, c_r = V^T c
return E_r, A_r, b_r, c_r % Reduced order model
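As an illustration of the loop above, here is a minimal numpy sketch of IRKA. It is not the authors' implementation: it assumes E = I, keeps complex arithmetic throughout rather than building real bases for conjugate shift pairs, and the starting shifts, tolerances and variable names are arbitrary choices.

```python
import numpy as np

def irka(A, b, c, r, sigma=None, tol=1e-8, maxit=200):
    """Sketch of IRKA for x'(t) = A x(t) + b u(t), y(t) = c^T x(t) (E = I).

    A is a stable (n, n) matrix, b and c are (n, 1) vectors, r is the
    reduced order. Returns (Ar, br, cr, sigma) with
    Gr(s) = cr^H (s I - Ar)^{-1} br.
    """
    n = A.shape[0]
    I = np.eye(n)
    if sigma is None:
        # Arbitrary starting shifts in the right half plane.
        sigma = np.linspace(0.1, 1.0, r).astype(complex)

    def bases(shifts):
        # Primal solves (sigma_i I - A) v_i = b and dual solves
        # (sigma_i I - A)^H w_i = c; orthonormalize for stability.
        V = np.column_stack([np.linalg.solve(s * I - A, b).ravel()
                             for s in shifts])
        W = np.column_stack([np.linalg.solve((s * I - A).conj().T, c).ravel()
                             for s in shifts])
        return np.linalg.qr(V)[0], np.linalg.qr(W)[0]

    for _ in range(maxit):
        V, W = bases(sigma)
        # Petrov-Galerkin reduced matrix A_r = (W^H V)^{-1} W^H A V.
        Ar = np.linalg.solve(W.conj().T @ V, W.conj().T @ A @ V)
        # New shifts: mirror images of the reduced poles.
        new = np.sort_complex(-np.linalg.eigvals(Ar))
        done = np.linalg.norm(new - sigma) < tol * np.linalg.norm(sigma)
        sigma = new
        if done:
            break

    # Build the final reduced model from the converged shifts.
    V, W = bases(sigma)
    M = W.conj().T @ V
    Ar = np.linalg.solve(M, W.conj().T @ A @ V)
    br = np.linalg.solve(M, W.conj().T @ b)
    cr = V.conj().T @ c
    return Ar, br, cr, sigma
```

On a symmetric (SSS-type) test system the iteration typically settles in a handful of passes; by construction the returned model Hermite-interpolates the full transfer function at the final shifts, whether or not the shifts have fully converged.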
Convergence
A SISO linear system is said to have symmetric state space (SSS) whenever E = E^T, A = A^T, and b = c. This type of system appears in many important applications, such as in the analysis of RC circuits and in inverse problems involving 3D Maxwell's equations. For SSS systems with distinct poles, the following convergence result has been proven: "IRKA is a locally convergent fixed point iteration to a local minimizer of the H2 optimization problem."
Although there is no convergence proof for the general case, numerous experiments have shown that IRKA often converges rapidly for different kinds of linear dynamical systems.
Extensions
The IRKA algorithm has been extended by the original authors to multiple-input multiple-output (MIMO) systems, and also to discrete-time and differential-algebraic systems [Remark 4.1].
See also
Model order reduction
References
External links
Model Order Reduction Wiki
Numerical analysis
Mathematical modeling | Iterative rational Krylov algorithm | [
"Mathematics"
] | 857 | [
"Mathematical modeling",
"Applied mathematics",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
67,859,351 | https://en.wikipedia.org/wiki/NGC%201276 | NGC 1276 is an optical double star system located in the constellation Perseus. The system was discovered by astronomer John Dreyer on December 12, 1876. The pair consists of two 15th magnitude stars known as Pul -3 270349 and Pul -3 270357 that are unrelated as they lie at different distances from each other. Pul -3 270349 lies at a distance of and Pul -3 270357 lies at a distance of .
The two stars are about the same size and luminosity as the Sun.
See also
List of NGC objects (1001–2000)
Double Star
References
External links
Double stars
Perseus (constellation)
1276
Astronomical objects discovered in 1876 | NGC 1276 | [
"Astronomy"
] | 146 | [
"Perseus (constellation)",
"Constellations"
] |
67,859,425 | https://en.wikipedia.org/wiki/Spotter%20%28maneuvering%29 | A spotter is a person used in vehicle maneuvers to assist a driver who may not have a clear view in their direction of travel. They are most commonly used in:
Off-road rock crawling
Reversing truck and trailer combinations, such as semitrailers, b-trains and road trains
Placing oversized freight using a forklift
Lifting loads using a vehicle-mounted crane (loads lifted using a fixed crane are supervised by a banksman)
Guiding military vehicles (also called ground guiding)
Dumping materials, such as from a dump truck
Guiding oversized loads.
The spotter's advantage is the ability to move around the load or vehicle to determine the best trajectory.
A spotter will either use a set of standard hand signals, or will agree hand signals before the maneuver with the driver or operator.
Technological solutions such as reversing cameras and proximity sensors have reduced drivers' reliance on spotters in some circumstances.
References
Driving techniques | Spotter (maneuvering) | [
"Physics"
] | 188 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
67,862,866 | https://en.wikipedia.org/wiki/Chiral%20analysis | Chiral analysis refers to the quantification of component enantiomers of racemic drug substances or pharmaceutical compounds. Other synonyms commonly used include enantiomer analysis, enantiomeric analysis, and enantioselective analysis. Chiral analysis includes all analytical procedures focused on the characterization of the properties of chiral drugs. Chiral analysis is usually performed with chiral separation methods where the enantiomers are separated on an analytical scale and simultaneously assayed for each enantiomer.
Many compounds of biological and pharmacological interest are chiral. Interest in the pharmacodynamic, pharmacokinetic, and toxicological properties of the enantiomers of racemic chiral drugs has expanded significantly and become a key issue for both the pharmaceutical industry and regulatory agencies. Typically one of the enantiomers is more active pharmacologically (the eutomer). In several cases, unwanted side effects or even toxic effects may occur with the inactive enantiomer (the distomer). Even if the side effects are not serious, the inactive enantiomer still has to be metabolized, which places an unnecessary burden on the patient's already stressed system. Large differences in activity between enantiomers make accurate assessment of the enantiomeric purity of pharmaceuticals, agrochemicals, and other chemical entities such as fragrances and flavors very important. Moreover, the moment a racemic therapeutic is placed in a biological system (a chiral environment), it is no longer 50:50, due to enantioselective absorption, distribution, metabolism, and elimination (ADME) processes. Hence, tracking the individual enantiomeric profile requires a chiral analysis tool.
Chiral technology is an active subject related to asymmetric synthesis and enantioselective analysis, particularly in the area of chiral chromatography. As a consequence of the advances in chiral technology, a number of pharmaceuticals currently marketed as racemic drugs are undergoing re-assessment as chiral-specific products, or "chiral switches". Regardless of whether a single enantiomer or a racemic drug is developed, in the current regulatory environment there will be a need for enantioselective investigations. This poses a big challenge to pharmaceutical analysts and chromatographers involved in the drug development process. In pharmaceutical research and development, stereochemical analytical methodology may be required to understand enantioselective drug action and disposition, assess chiral purity, study stereochemical stability during formulation and production, assess dosage forms, and conduct enantiospecific bioavailability and bioequivalence investigations of chiral drugs. Besides pharmaceutical applications, chiral analysis plays a major role in the study of biological and environmental samples and also in the forensic field. Chiral analysis methods and applications between 2010 and 2020 have been exhaustively reviewed recently. There are a number of articles, columns, and interviews in LCGC relating to emerging trends in chiral analysis and its application in the drug discovery and development process.
For chiral examination there is a need for the right chiral environment. This could be provided as plane-polarized light, as an additional chiral compound, or by exploiting the inherent chirality of nature. The chiral analytical strategies incorporate physical, biological, and separation science techniques. Recently an optical-based absolute chiral analysis has been reported. The most frequently employed techniques in enantioselective analysis involve separation science, in particular chiral chromatographic methods, or chiral chromatography. Today a wide range of CSPs is available commercially, based on various chiral selectors including polysaccharides, cyclodextrins, glycopeptide antibiotics, proteins, Pirkle-type selectors, and crown ethers, to achieve analysis of chiral molecules.
Chiral chromatography
This term has become very popular and is commonly used in practice, although the more appropriate expression is "enantioselective chromatography". Chiral chromatography has advanced to become the preferred technique for the determination of enantiomeric purity as well as for the separation of pure enantiomers, both on an analytical and a preparative scale. A chiral chromatographic assay is the first step in any study pertaining to enantioselective synthesis or separation. This includes the use of techniques such as gas chromatography (GC), high performance liquid chromatography (HPLC), chiral supercritical fluid chromatography (SFC), capillary electrophoresis (CE) and thin-layer chromatography (TLC). A literature survey identifies HPLC-based chiral assays as the dominant technology in use. An overview of the various analytical methods employed for chiral separation and analysis is listed in the table.
Principle - separation of enantiomers
In an isotropic/achiral environment, enantiomers exhibit identical physicochemical properties, and are therefore indistinguishable under these conditions. For the separation of chiral molecules the challenge is to construct the right chiral environment. In a chromatographic system there are three variables, namely the chiral analyte (CA), the mobile phase and the stationary phase, that can be manipulated to provide the crucial chiral environment. The strategy is to make these variables interact with a chiral auxiliary (chiral selector, CS), whereby a diastereomeric complex is formed which has different physicochemical properties, making it possible to separate the enantiomers. Based on the nature of the diastereomeric complex formed between the CS-CA species, enantiomer separation methodologies are categorized as indirect and direct enantiomer separation modes.
Indirect separation of enantiomer
Indirect enantiomer separation involves the interaction between the chiral analyte (CA) of interest and a suitable reactive CS (in this case an enantiopure chiral derivatizing agent, CDA), leading to the formation of a covalent diastereomeric complex that can be separated with an achiral chromatographic technique. Therapeutic agents often contain reactive functional groups (amino, hydroxyl, epoxy, carbonyl, carboxylic acid, etc.) in their structures. They are converted into covalently bonded diastereomeric derivatives using an enantiomerically pure chiral derivatizing agent. The diastereomers thus formed, unlike enantiomers, exhibit different physicochemical properties in an achiral environment and are eventually separated as a result of differential retention on a stationary phase. The success of this approach depends on the availability of a stable enantiopure chiral derivatizing agent (CDA) and on the presence of a suitable reactive functional group in the chiral drug molecule for covalent formation of the diastereomeric derivative. The reaction of a racemic (R,S)-drug with a chirally and chemically pure chiral derivatizing agent, (R')-CDA, will afford the diastereomeric products (R)-Drug-(R')-CDA + (S)-Drug-(R')-CDA. The chiral derivatization reaction scheme is illustrated in the box on the right hand side.
In contrast to enantiomers, diastereomers have different physicochemical properties that make them separable on regular achiral stationary phases. The major benefit of the indirect methodology is that conventional achiral stationary phase/mobile phase system may be used for the separation of the generated diastereomers. Thus, considerable flexibility in chromatographic conditions is available to achieve the desired separation and to eliminate interferences from metabolites and endogenous substances. Moreover, the sensitivity of the method can be enhanced by sensible choice of the CDA and the chromatographic detection system. But this indirect approach to enantiomeric analysis has some potential problems. These include availability of a suitable functional group on the enantiomer for derivatization, enantiomeric purity of the CDA, racemization of the CDA during derivatization, and racemization of the analyte during the derivatization. Currently, however, the application of indirect analytical approaches is in decline.
Direct separation of enantiomers
Direct enantiomer separation involves the formation of a transient rather than covalent diastereomeric complexation between the chiral selector/discriminator and the analyte (drug enantiomer). In this approach, the subtle energy differences between the reversibly formed noncovalent diastereomeric complexes are exploited for chiral recognition. The direct chromatographic enantiomer separation may be achieved in two different ways, the chiral mobile phase additive and chiral stationary phase mode.
Chiral mobile phase additive (CMPA)
In this approach, an enantiomerically pure compound, the chiral selector, is added to the mobile phase and separation happens on a conventional achiral column. When a mixture of enantiomers is introduced into the chromatographic system, the individual enantiomers form transient diastereomeric complexes with the chiral mobile phase additive. In the chiral mobile phase additive technique, two possible mechanisms may operate: one possibility is that CMPA and the enantiomers may form diastereomers in the mobile phase. Another is that the stationary phase may be coated with the CMPA, leading to diastereomeric interactions with the enantiomeric pairs during chromatographic separation process. It is observed that both the mechanisms may happen depending on the characteristic of the stationary phase and mobile phase employed. Of late this method finds limited application.
Chiral stationary phase (CSP)
In direct enantiomer separation the most popular approach is the use of chiral stationary phases. In this case the chiral selector is located on the stationary phase. The stationary phase consists of an inert solid support (usually silica microparticles) onto the surface of which a single enantiomer of a chiral molecule (the selector) is either coated/adsorbed or chemically linked; this forms the chiral stationary phase. Commonly used chiral selectors include polysaccharides, proteins, and cyclodextrins. An interesting review of chiral stationary phase development and application in chiral analysis appeared in LCGC magazine in 2011.
Chiral recognition
Chiral recognition refers to the ability of chiral stationary phases to interact differently with mirror-image molecules, leading to their separation. The mechanism of enantiomeric resolution using CSPs is generally attributed to the "three-point" interaction model (fig. 1) between the analyte and the chiral selector in the stationary phase, also known as the Dalgliesh model. Under this model, for chiral recognition, and hence enantiomeric resolution, to happen on a CSP, one of the enantiomers of the analyte must be involved in three simultaneous interactions. That is, one of the enantiomers is able to interact strongly with the complementary sites on the chiral selector attached to the CSP, while its mirror-image partner may only interact at two or fewer such sites. In the figure, enantiomer (a) has the correct configuration of the ligands (X, Y and Z) for three-point interaction with the complementary sites (X', Y' and Z') on the CSP, while its mirror image (b) can only interact at one site. The dotted lines (-----) indicate interaction with complementary sites.
The diastereomeric complexes thus formed will have different energies of interaction. The enantiomer forming the more stable complex will have less energy and stay longer in the stationary phase compared to the less stable complex with higher energy. The success of chiral separation basically depends on manipulating the subtle energy differences between the reversibly formed non-covalent transient diastereomeric complexes. The energy difference reflects the magnitude of enantioselectivity. The mobile phase has a major role in stabilizing the diastereomeric complex and thus in chiral separation. This simplified bimolecular interaction model is a treatment suitable for theoretical purposes. The mobile phase plays a key role in the chiral recognition mechanism. Components of the mobile phase (such as bulk solvents, modifiers, buffer salts and additives) not only influence the conformational flexibility of the CS and CA molecules but also their degree of ionization. The types of interaction involved in the analyte-selector interaction vary depending on the nature of the CSP used. These may include hydrogen bonding, dipole-dipole, π-π, electrostatic, hydrophobic or steric interactions, and inclusion complex formation.
Classical chiral selectors and CSPs
The intense research into the development of efficient chiral selectors has resulted in the synthesis of over 1400 CSPs, and over 200 CSPs have been commercialized and are available on the market. The most commonly employed chiral selectors are categorized and presented in the table.
Polysaccharide CSPs
Background
It is surprising to note that in 1980 there was not a single chiral stationary phase available on the market for performing chiral chromatography. However, in the late 1980s the subject of enantioselective chromatography attracted growing interest, particularly under the drive of the group of Okamoto in Japan, the teams of Pirkle and Armstrong in the US, Schurig and König in Germany, Lindner in Austria, and Francotte in Switzerland. The polysaccharides amylose and cellulose are the most abundant chiral polymers on earth. These naturally occurring polysaccharides form the basis for an important class of chiral selectors.
Chemistry
Amylose and cellulose cannot be used as such due to poor resolution and difficulty in handling, but the carbamate and benzoate derivatives of these polymers demonstrate excellent properties as chiral selectors for chromatographic separation. A large number of polysaccharide-based CSPs are commercially available for chiral separation. These CSPs have shown tremendous chiral recognition capability, resolving a wide range of chiral analytes. Many of these CSPs are marketed by Daicel Chemical Industries, Ltd., and some of the popular ones are listed in the table.
These CSPs are compatible with normal-phase (NP) and reversed-phase (RP) chromatography and with SFC, and are used for analytical, semi-preparative and preparative separations. Many screening studies conducted at different labs suggest that the four CSPs Chiralcel OD, Chiralcel OJ, Chiralpak AD, and Chiralpak AS are capable of resolving more than 80% of chiral separations, owing to their adaptability and high loading capacity. These four polysaccharide chiral stationary phases are referred to as the "golden four".
Polysaccharide CSPs are prepared with a high quality silica support, onto which the polymeric chiral selector (an amylose/cellulose derivative) is physically coated (coated CSP) or chemically immobilized (immobilized CSP). Separations can be done in normal-phase, reversed-phase, and polar organic modes. While working with a coated polysaccharide CSP, solvent selection should be done with caution. One should not use aggressive solvents such as dichloromethane, chloroform, toluene, ethyl acetate, THF, 1,4-dioxane, acetone, DMSO, etc. These so-called "non-standard" solvents will dissolve the coated selector and irreversibly destroy the stationary phase. The limited resistance of these coated phases to many solvents led to the development of immobilized polysaccharide CSPs. The table below presents some of the immobilized CSPs commercially available, with alternatives wherever applicable.
These immobilized CSPs are much more rugged, and the "non-standard" solvents can be employed, thus expanding the choice of co-solvent. The major strengths of immobilized CSPs are high solvent versatility in the selection of mobile phase composition, enhanced sample solubility, high selectivity, robustness and extended durability, excellent column efficiency, and a broad application domain in the resolution of enantiomers. Solvent is a key factor in HPLC method development: a wider choice of solvents means better sample solubility, improved resolution, and more effective chiral method development.
Mechanism
A number of chiral environments are created within the polymer. Cavities are formed between adjacent glucose units, and spaces/channels between polysaccharide chains. These chiral cavities or channels give polysaccharide CSPs their chiral discrimination capability. The mechanism of chiral discrimination is not well understood, but is believed to involve hydrogen bonding and dipole-dipole interaction between the analyte molecule and the ester or carbamate linkage of the CSP.
Application
Some of the applications of these CSPs include the direct chiral analysis of β-adrenergic blockers such as metoprolol and celiprolol, the calcium channel blocker, felodipine and the anticonvulsant agent, ethotoin.
Macrocyclic CSPs
An interesting way of achieving chiral distinction on a CSP is the use of selectors with a chiral cavity. These chiral selectors are attached to the stationary phase support material. In this category, there are basically three types of cavity chiral selectors, namely cyclodextrins, crown ethers and macrocyclic glycopeptide antibiotics. Among these, cyclodextrin-based CSPs are popular. In this type of CSP the enantioselective guest-host interaction governs the chiral distinction.
Cyclodextrin-type CSP
Cyclodextrins (CDs) are cyclic oligosaccharides of six, seven, or eight glucose units, designated as α, β, and γ cyclodextrins respectively, as depicted in the diagram below. Daniel Armstrong is considered the pioneer of micelle- and cyclodextrin-based separations. Cyclodextrins are covalently attached to silica by the Armstrong process and provide stable CSPs. The primary hydroxyl groups are used to anchor the CD molecules to the modified silica surface. CDs are chiral because of the innate chirality of their building blocks, the glucose units. In cyclodextrin the glucose units are α-(1,4)-connected. The shape of a CD resembles a truncated cone (see the sketch). The inner surface of the cone forms a moderately hydrophobic pocket. The width of the CD cavity depends on the number of glucose units present. In cyclodextrins, secondary hydroxyl groups (OH-2 and OH-3) line the upper rim of the cavity, and a primary 6-hydroxyl group is positioned at the lower rim. The hydroxyl groups offer chiral binding points, which appear to be fundamental for enantioselectivity. The apolar glycosidic oxygens make the cavity hydrophobic and guarantee inclusion complexing of the hydrophobic moiety of analytes. Interactions between the polar region of an analyte and the secondary hydroxyl groups at the mouth of the cavity, combined with the hydrophobic interactions inside the cavity, give a unique two-point fit and lead to enantioselectivity.
The selectivity of a cyclodextrin phase depends on two key factors, namely the size and structure of the analyte, since it is based on a simple fit/unfit geometric criterion. An aromatic ring or cycloalkyl ring should be attached near the stereogenic center of the analyte. Substituents at or near the analyte's chiral center must be able to interact with the hydroxyl groups at the entrance of the CD cavity through hydrogen bonding. α-Cyclodextrin holds small aromatic molecules, whereas β-cyclodextrin incorporates both naphthyl groups and substituted phenyl groups. The aqueous compatibility of CDs and their unique molecular structure make CD-bonded phases highly suitable for use in chiral HPLC analysis of drugs. A further benefit of CDs is that they are generally less expensive than the other CSPs. The major shortcomings of CD CSPs are that they are limited to compounds that can enter the CD cavity, that minor structural changes in the analyte can have an unpredictable effect on resolution, that efficiency is often poor, and that the elution order cannot be inverted.
Enantiomers of propranolol, metoprolol, chlorpheniramine, verapamil, hexobarbital, methadone and many other drugs have been separated using immobilized β-cyclodextrin.
Initially, natural CDs were used as the chiral selector. Later, modified cyclodextrin structures were prepared by derivatizing the secondary hydroxyl groups present on the CD molecule. Incorporation of these additional functional groups may improve the chiral recognition capability by modifying the chiral pocket and creating extra auxiliary interaction sites. This approach made it possible to expand the range of target chiral analytes that could be separated. A number of chiral pharmaceuticals have been resolved using derivatized CDs, including ibuprofen, suprofen and flurbiprofen from the NSAID category and β-blockers such as metoprolol and atenolol. A brief list of cyclodextrin-based chiral stationary phases available in the market is furnished in the table below.
Glycopeptide-type CSP
Armstrong introduced macrocyclic glycopeptides (also known as glycopeptide antibiotics) as a new class of chiral selector for liquid chromatography in 1994. At present, vancomycin, teicoplanin and ristocetin are available under the brand names Chirobiotic V, Chirobiotic T and Chirobiotic R respectively. These cyclic glycopeptides have multiple chiral centers and a cup-like inclusion area to which a floating sugar lid is attached. Similar to protein chiral selectors, the amphoteric cyclic glycopeptides consist of peptide and carbohydrate binding sites, leading to possibilities for different modes of interaction besides inclusion complexation. In this chiral selector the cavities are shallower than those of CDs, and hence the interactions are weaker; this allows more rapid solute exchange between phases and higher column efficiency. These CSPs operate in normal-phase, reversed-phase and polar organic modes.
The complex structure of the glycopeptide antibiotic class of CSPs has made the mechanism of chiral recognition difficult to understand at the molecular level. For instance, the vancomycin molecule has 18 stereogenic centers and offers a complex cyclodextrin-like chiral environment. In comparison to the single basket of a cyclodextrin, vancomycin consists of three baskets, resulting in a more complex inclusion of appropriate guest molecules. The attractive forces include π-π interactions, hydrogen bonding, ionic interactions, and dipole stacking. A carboxylic acid and a secondary amine group are located on the rim of the cup and can participate in ionic interactions. Vancomycin stationary phases operate in reversed-phase, normal-phase and polar organic modes.
A wide range of chiral analyses has been performed using Chirobiotic CSPs. The antihypertensive drugs oxprenolol, pindolol and propranolol have been separated using vancomycin and teicoplanin Chirobiotic CSPs. The NSAIDs ketoprofen and ibuprofen have been separated using a ristocetin CSP.
Crown ether-type CSP
Crown ethers, like cyclodextrin-type CSPs, contain a chiral cavity. They are immobilized on the silica surface to form the chiral stationary phase. Crown ethers contain oxygen atoms within the cavity, and the cyclic structure, with apolar ethylene groups between the oxygens, forms a hydrophobic inner cavity. Cram et al. introduced CSPs based on chiral crown ethers and accomplished the separation of amino acids. The crucial chiral recognition principle underlying crown ether-based enantiomer separation is the formation of multiple hydrogen bonds between the protonated primary amino group of the analyte and the ether oxygens of the crown structure. This structural requirement confines the application of crown ether-type CSPs to chiral compounds having primary amino groups adjoining the chiral centers, such as amino acids and amino acid derivatives. Progress in the field of crown ether-type CSPs has been reviewed.
Protein-type CSP
Proteins are complex, high-molecular-weight biopolymers. They are inherently chiral, being composed of L-amino acids, and possess an ordered 3D structure. They are known to bind and interact stereoselectively and reversibly with small molecules, making them extremely versatile CSPs for the chiral separation of drug molecules. Hermansson made use of this property to develop a number of CSPs by immobilizing proteins onto a silica surface. They operate under reversed-phase mode (phosphate buffer with organic modifiers).
The protein polymer remains in a twisted form because of various intramolecular bonds, which create different types of chiral loops and grooves in the protein molecule. The separation mechanism of proteins depends on a unique combination of hydrophobic and polar interactions by which the analytes are oriented to the chiral surfaces; H-bonding and charge transfer may also contribute to enantioselectivity. The mechanism of chiral distinction by proteins is mostly not well established due to their complex nature. Several protein-based CSPs have been employed for chiral drug analysis, including α1-acid glycoprotein (Enantiopac; Chiral-AGP), ovomucoid protein (Ultron ES-OVM) and human serum albumin (HSA). The AGP CSP (Chiral-AGP) has been employed for the quantification of atenolol enantiomers in biological matrices and for pharmacokinetic investigation of racemic metoprolol. The major weaknesses of protein-based CSPs are low loading capacity, high cost, extreme fragility and delicacy in handling, very low column efficiency, and inability to invert the elution order.
Pirkle-type CSP
Pirkle and co-workers pioneered the development of a variety of CSPs based on charge-transfer complexation with simultaneous hydrogen bonding. These phases are also referred to as brush-type CSPs. The Pirkle phases are based on an aromatic π-acidic (3,5-dinitrobenzoyl) ring or a π-basic (naphthalene) derivative. In addition to π-π interaction sites, they have hydrogen-bonding and dipole-dipole interaction sites provided by an amide, urea or ester functionality. Strong three-point interaction, according to Dalgliesh's model, enables enantioseparation. These phases are classified into π-electron-acceptor, π-electron-donor and π-electron acceptor-donor phases.
A number of Pirkle-type CSPs are commercially available. They are used most often in the normal-phase mode. The ionic form of the DNBPG (3,5-dinitrobenzoyl-phenylglycine) CSP has been successfully employed to separate racemic propranolol in biological fluids. Many compounds of pharmaceutical interest, including the enantiomers of naproxen and metoprolol, have been separated using Pirkle CSPs.
Novel chiral selectors and CSPs
During the last couple of years, CSPs based on novel chiral selectors have been developed for HPLC chiral separation, including chitosan derivatives, cyclofructan derivatives and chiral porous materials.
Chitosan derivatives based CSP
Cyclofructan derivatives based CSP
Chiral porous materials based CSP
See also
Chiral resolution
Chiral chromatography
Chiral drugs
Chiral switch
Enantiomer
Chirality
References
External links
LCGC.
Chromatography Today
Chromatography
Stereochemistry | Chiral analysis | [
"Physics",
"Chemistry"
] | 5,953 | [
"Chromatography",
"Separation processes",
"Stereochemistry",
"Space",
"nan",
"Spacetime"
] |
67,864,389 | https://en.wikipedia.org/wiki/Bolzano%20process | The Bolzano process is a means to reduce magnesium to metallic form. "Dolomite-ferrosilicon briquettes are stacked on a special charge support system through which internal electric heating is conducted to the charge. A complete reaction takes 20 to 24 hours at 1,200 °C."
In 2014, Brazilian operations produced 10-15 kilotons of Mg by this process.
Also in 2014, Nevada Clean Magnesium announced its Tami-Mosi plan to create an ASTM B-92 pilot plant. The mineral resource is estimated at 412 billion tons at a grade of 12.3% Mg. The company produced its first ingot from a pilot plant in December 2018.
References
Metallurgy
Metallurgical processes
Smelting
Magnesium processes | Bolzano process | [
"Chemistry",
"Materials_science",
"Engineering"
] | 153 | [
"Smelting",
"Metallurgical processes",
"Magnesium processes",
"Metallurgy",
"Materials science",
"nan"
] |
67,864,498 | https://en.wikipedia.org/wiki/Nullosetigeridae | Nullosetigeridae is a family of copepods belonging to the order Calanoida.
Genera:
Nullosetigera Soh, Ohtsuka, Imabayashi & Suh, 1999
References
Copepods | Nullosetigeridae | [
"Biology"
] | 47 | [
"Animals",
"Animal stubs"
] |
67,864,558 | https://en.wikipedia.org/wiki/Vaccine%20wastage | Vaccine wastage is the number of vaccines that have not been administered during vaccine deployment in an immunization program. The wastage can occur at multiple stages of the deployment process, and can take place in both unopened and opened vials, or in oral admission. It is an expected part of vaccination deployment and is factored into the manufacturing process.
Prevalence
A 2018 study into Cambodia's national immunization program found wastage rates of 0% to 60% depending on location and vaccination type.
A study from India which collected Universal Immunisation Programme data from two different locations (Kangra and Pune districts) between January 2016 to December 2017 found wastage rates that differed according to vaccine type, reuse type, vial size, transition from IPV (inactivated polio vaccine) dosage to fIPV (fractional inactivated polio vaccine) and according to the geographical location. In both districts wastage increased as vial size increased from 5 to 10 dose vials. In Kangra, wastage observed in oral polio vaccine was 50.8% while in Pune it was 14.3%. Wastage for a number of other vaccinations in the program was higher than what had been factored into the initial programme forecasting.
Parts of the United States have vaccine wastage tracking factored into the deployment process. Reasons for vaccine wastage are categorised as: broken vial or syringe; lost or unaccounted for; open but not all doses administered; or drawn into a syringe but not administered. Other reasons for wastage include contamination, expiration and temperature issues. Vaccine wastage in the United States during its 2021 COVID-19 vaccination program was less than 1%, and reported as low as 0.1%. In India, COVID-19 vaccine wastage was 6.5%, while in Scotland and Wales it was 1.8%.
Reduction
Improving requirement estimates, transportation and logistics, wastage reporting, optimal session sizes and the use of syringes and needles with low dead volume are important factors in reducing wastage. While manufacturing single-dose vials would considerably reduce vaccine wastage, it would increase the cost of the manufacturing process. However, there are cases when single-dose vials are optimal, such as when administering vaccines to a limited number of people or in single-person sessions.
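The requirement estimates mentioned above conventionally inflate the net dose need by a wastage multiplier of 1/(1 − wastage rate). A minimal sketch of that standard calculation; the population, schedule and wastage figures are illustrative assumptions, not values from the studies cited in this article:

```python
def doses_required(target_population, doses_per_person, wastage_rate):
    """Forecast the number of doses to procure, inflating the net need by
    the expected wastage. wastage_rate is a fraction (0.10 means 10%);
    the wastage multiplier is 1 / (1 - wastage_rate)."""
    net_need = target_population * doses_per_person
    return net_need / (1.0 - wastage_rate)

# Illustrative figures: 100,000 people on a 2-dose schedule with 10%
# expected wastage require about 222,222 doses rather than 200,000.
print(round(doses_required(100_000, 2, 0.10)))  # 222222
```

The same relation explains why the Indian study above found shortfalls: if actual wastage exceeds the rate assumed in the multiplier, the forecast under-procures.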
See also
Vaccine
Vaccinator
Vaccine cooler
Vaccination
Vaccine hesitancy
References
Vaccination | Vaccine wastage | [
"Biology"
] | 500 | [
"Vaccination"
] |
67,864,591 | https://en.wikipedia.org/wiki/NGC%201484 | NGC 1484 is a barred spiral galaxy approximately 50 million light-years away from Earth in the constellation of Fornax. It was discovered by astronomer John Herschel on November 28, 1837. NGC 1484 is a member of the Fornax cluster.
Its distance and apparent size on the night sky convert to an approximate physical size of 35,638 light-years, only a quarter to a third the size of the Milky Way Galaxy.
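The quoted figure follows from the small-angle approximation, physical size ≈ distance × angular size in radians. A sketch of the arithmetic, assuming an apparent diameter of about 2.45 arcminutes (a value implied by the numbers above, not stated in the article):

```python
import math

def physical_size_ly(distance_ly, angular_size_arcmin):
    """Small-angle approximation: size = distance * theta, theta in radians."""
    theta_rad = math.radians(angular_size_arcmin / 60.0)
    return distance_ly * theta_rad

# 50 million light-years and ~2.45 arcmin give roughly the quoted diameter.
size = physical_size_ly(50e6, 2.45)
print(f"{size:,.0f} light-years")  # roughly the 35,638 ly quoted above
```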
See also
List of NGC objects (1001–2000)
References
External links
Barred spiral galaxies
Fornax
1484
14071
Astronomical objects discovered in 1837
Discoveries by John Herschel
Fornax Cluster | NGC 1484 | [
"Astronomy"
] | 130 | [
"Fornax",
"Constellations"
] |
67,865,065 | https://en.wikipedia.org/wiki/IC%201574 | IC 1574 is an irregular galaxy 17 million light-years from Earth. It is a member of the Sculptor Group, a group of galaxies near the Local Group. It was discovered by DeLisle Stewart on 3 November 1898.
References
Cetus
1574
Irregular galaxies
Sculptor Group
Astronomical objects discovered in 1899
Discoveries by DeLisle Stewart
002578
474-018
-04-02-043
009 | IC 1574 | [
"Astronomy"
] | 86 | [
"Galaxy stubs",
"Astronomy stubs",
"Constellations",
"Cetus"
] |
67,865,410 | https://en.wikipedia.org/wiki/Blocking%20of%20Twitter%20in%20Nigeria | Twitter was blocked in Nigeria from 5 June 2021 to 13 January 2022. The government imposed a ban on the social network after it deleted tweets made by, and temporarily suspended, the Nigerian president Muhammadu Buhari, warning the southeastern people of Nigeria, predominantly Igbo people, of a potential repeat of the 1967 Nigerian Civil War due to the ongoing insurgency in Southeastern Nigeria. The Nigerian government claimed that the deletion of the president's tweets factored into their decision, but it was ultimately based on "a litany of problems with the social media platform in Nigeria, where misinformation and fake news spread through it have had real world violent consequences", citing the persistent use of the platform for activities that are capable of undermining Nigeria's corporate existence.
In January 2022, Nigeria lifted its blocking of Twitter after the platform agreed to establish a legal entity within the country sometime in the first quarter of 2022.
Background
On 1 June 2021, Nigerian President Muhammadu Buhari posted a tweet threatening a crackdown on regional separatists "in the language they understand". The next day, Twitter deleted the tweet, claiming it was in violation of Twitter rules, but gave no further details. Nigeria's Information Minister Lai Mohammed said that Twitter's actions were part of an unfair double standard, as Twitter had not banned incitement tweets from other groups. During the Nigerian Civil War a majority of deaths resulted from the blockade of Biafra which caused the deaths of millions of civilians from starvation, a fact that was not alluded to in the tweet.
The Nigerian government has long held concerns over the use of Twitter in the country. The local End SARS protest began on Twitter and was amplified in 2020, generating 48 million tweets in ten days. Buhari's government floated the idea of social media regulation on several occasions prior to banning Twitter. Past attempts to pass an anti-social-media bill failed largely due to massive outcry on Twitter. Days before the ban, the country's minister of information called Twitter's activities in Nigeria suspicious, citing its influence on the End SARS protests.
Aftermath
Three days after Twitter was suspended, it was reported that the move had cost the country over 6 billion naira and would also contribute to the worsening unemployment in the country. ExpressVPN reported an over 200 percent increase in web traffic and searches for VPN spiked across the country. In response, Nigeria's Minister of Justice and Attorney General of the Federation Abubakar Malami at first openly threatened to prosecute citizens who bypass the ban using a VPN but then denied saying so after a screenshot of a Twitter deactivation notification he shared on Facebook showed a VPN logo.
Nigeria's cultural minister Lai Mohammed stated the ban would be lifted once Twitter submitted to local licensing, registration and conditions. "It will be licensed by the broadcasting commission, and must agree not to allow its platform to be used by those who are promoting activities that are inimical to the corporate existence of Nigeria."
In late June 2021, Twitter announced it would enter talks with the Nigerian government over the platform's suspension. The talks began in July 2021.
On 15 September 2021, Mohammed said the Nigerian government will lift the ban on Twitter in a "few days." The Minister said Twitter gave a progress report of their talks with them, adding that it has been productive and quite respectful.
On 1 October 2021, President Muhammadu Buhari in his Independence Day broadcast said Twitter must meet the Nigerian government's five conditions before the suspension of the social media platform will be lifted. The conditions are: Respect for national security and cohesion; registration, physical presence and representation in Nigeria; fair taxation; dispute resolution; local content.
Reactions
The ban was condemned by Amnesty International, the British, Canadian and Swedish diplomatic missions to Nigeria, as well as the United States and the European Union in a joint statement. Two domestic organizations, the Socio-Economic Rights and Accountability Project (SERAP) and the Nigerian Bar Association, indicated intent to challenge the ban in court. Twitter itself called the ban "deeply concerning".
Former U.S. President Donald Trump, who was permanently suspended from Twitter following the United States Capitol attack in January, praised the ban, stating "Congratulations to the country of Nigeria, who just banned Twitter because they banned their President", and also called on other countries to ban Twitter and Facebook due to "not allowing free and open speech."
Lifting of the ban
On 12 January 2022, the Nigerian Government lifted the ban after Twitter agreed to pay an "applicable tax" and establish "a legal entity in Nigeria during the first quarter of 2022".
References
Internet in Nigeria
Social media
Internet censorship in Africa
Twitter controversies
Presidency of Muhammadu Buhari
2021 in Nigeria
2022 endings
2020s in Internet culture | Blocking of Twitter in Nigeria | [
"Technology"
] | 999 | [
"Computing and society",
"Social media"
] |
67,865,542 | https://en.wikipedia.org/wiki/2Blades | 2Blades is an agricultural phytopathology non-profit which performs research to improve durable genetic resistance in crops, and funds other researchers to do the same. 2Blades was co-founded by Dr. Roger Freedman and Dr. Diana Horvath in 2004.
Funding source
2Blades is partly funded by the Gatsby Charitable Foundation and does its research at The Sainsbury Laboratory, among other locations. One co-founder, chairman Roger Freedman, also works for Gatsby, which was founded by Lord David Sainsbury. Freedman had pitched an idea to Sainsbury's venture capital company to begin investing in plant genetic engineering technologies, and although the board did so, it found someone else to lead the effort. Freedman had wanted to run it but was told by Sainsbury that the role was not for him. Indeed, soon thereafter Sainsbury set up another early-stage investment company specifically for Freedman and a colleague, and a separate non-profit for Freedman to grant money, both for plant science. The non-profit was 2Blades.
Research activities
2Blades routinely works in partnership with other crop disease organizations like CIMMYT and BGRI. The foundation also conducts research in partnerships with the industry, including with Bayer CropScience and Monsanto. The organisation's End the Blight campaign has been joined by CIP (the International Potato Center) and Chairman of Joseph P. Kennedy Enterprises Christopher Kennedy. This campaign is advancing research and delivering cultivars specifically for Phytophthora infestans in Africa. Mr Kennedy is chairman of 2Blades African Potato Initiative which is funding the delivery of a Victoria-based cultivar to East African markets.
Crops and pathogens of research interest to the foundation include P. infestans on potato, rye, Phakopsora pachyrhizi on soybean, Puccinia graminis f. sp. tritici on wheat, and Fusarium oxysporum f.sp. cubense on Musa spp.
References
Bibliography of affiliated personnel
Oadi N. Matny, Mehran Patpour, Ming Luo, Liqiong Xie, Soma Chakraborty, Aihua Wang, James A. Kolmer, Terese Richardson, Dhara Bhatt, Mohammad Hoque, Chris Sorenson, Burkhard Steuernagel, Brande B. H. Wulff, Narayana Upadhyaya, Rohit Mago, Sam Periyannan, Evans Lagudah, Roger Freedman, Lynne Reuber, Brian J. Steffenson, and Michael Ayliffe, A Wheat Multi-Transgene Cassette Provides Stem and Leaf Rust Resistance in the Field. In Plant and Animal Genome XXVIII Conference (January 11-15, 2020). PAG.
External links
Official website:
Phytopathology
Wheat diseases
Agronomy
Plant breeding
Genetic engineering and agriculture | 2Blades | [
"Chemistry",
"Engineering",
"Biology"
] | 749 | [
"Plant breeding",
"Genetic engineering and agriculture",
"Genetic engineering",
"Molecular biology"
] |
67,865,568 | https://en.wikipedia.org/wiki/NGC%201428 | NGC 1428 is a peculiar galaxy of an uncertain morphology; either an elliptical or lenticular galaxy located approximately 65 million light-years away from Earth.
It was discovered by astronomer Julius Schmidt on January 19, 1865. It is a member of the Fornax Cluster.
40 known globular clusters have been observed surrounding NGC 1428 along with 23 observed planetary nebulae.
Physical characteristics
NGC 1428 is host to a nuclear star cluster with an estimated mass ranging from 1.4 × 10⁷ to 2.2 × 10⁷ M☉. This nuclear star cluster, which is surrounded by a nuclear stellar disk, is thought to have formed through multiple episodes of gas accretion and subsequent star formation. The presence of a counter-rotating population of stars suggests mergers that occurred in the direction opposite to the rotation of NGC 1428.
The galaxy has a supermassive black hole with an estimated mass of 4.1 × 10⁷ M☉.
See also
List of NGC objects (1001–2000)
References
External links
Lenticular galaxies
Peculiar galaxies
Elliptical galaxies
Fornax Cluster
Fornax
1428
13611
Discoveries by Johann Friedrich Julius Schmidt
Astronomical objects discovered in 1865 | NGC 1428 | [
"Astronomy"
] | 234 | [
"Fornax",
"Constellations"
] |
67,865,636 | https://en.wikipedia.org/wiki/NGC%201341 | NGC 1341 is a barred spiral galaxy in the constellation Fornax, 86 million light years away. It is one of the most distant members of the Fornax Cluster. Discovered by John Herschel on November 29, 1837, it is 30,000 light years in diameter and has a redshift of 1854 km/s.
See also
NGC 1399
NGC 1365
NGC 1350
NGC 1427A
References
Barred spiral galaxies
Fornax
Fornax Cluster
1341
012911 | NGC 1341 | [
"Astronomy"
] | 101 | [
"Fornax",
"Constellations"
] |
67,866,246 | https://en.wikipedia.org/wiki/Camille%20Dreyfus%20Teacher-Scholar%20Awards | The Camille Dreyfus Teacher-Scholar Awards are awards given to early-career researchers in chemistry by The Camille and Henry Dreyfus Foundation, Inc. "to support the research and teaching careers of talented young faculty in the chemical sciences." The Dreyfus Teacher-Scholar program began in 1970. In 1994, the program was divided into two parallel awards: The Camille Dreyfus Teacher-Scholar Awards Program, aimed at research universities, and the Henry Dreyfus Teacher-Scholar Awards Program, directed at primarily undergraduate institutions. This list compiles all the pre-1994 Teacher-Scholars, and the subsequent Camille Dreyfus Teacher-Scholars.
The annually presented awards consist of a monetary prize of $75,000, which was increased to $100,000 starting in 2019. Seven winners of the Camille Dreyfus Teacher-Scholar Awards have gone on to win the Nobel Prize in Chemistry, including Paul L. Modrich, Richard R. Schrock, Robert H. Grubbs, K. Barry Sharpless, Ahmed H. Zewail, Mario J. Molina and Yuan Tseh Lee.
Recipients
Source: Dreyfus Foundation
1970
Robert G. Bergman, California Institute of Technology
Bruce A. Cunningham, Rockefeller University
Richard D. Fink, Amherst College
Joseph N. Gayles, Jr., Morehouse College
O. Hayes Griffith, University of Oregon
Daniel S. Kemp, Massachusetts Institute of Technology
, Emory University
, The University of Chicago
John A. Osborn, Harvard University
Mitchel Shen, University of California, Berkeley
Barry M. Trost, University of Wisconsin–Madison
Richard A. Walton, Purdue University
F. Sheldon Wettack, Hope College
James T. Yardley, University of Illinois at Urbana-Champaign
1971
Jesse L. Beauchamp, California Institute of Technology
David A. Evans, University of California, Los Angeles
, University of California, Santa Barbara
Yuan T. Lee, The University of Chicago
Stephen J. Lippard, Columbia University
Kenneth G. Mann, University of Minnesota
J. David Puett, Vanderbilt University
Stanley I. Sandler, University of Delaware
Lothar Schäfer, University of Arkansas
, Massachusetts Institute of Technology
James Snyder, Yeshiva University
Leonard D. Spicer, The University of Utah
Leonard M. Stephenson, Stanford University
Edward I. Stiefel, Stony Brook University
John S. Swenton, Ohio State University
Claude H. Yoder, Franklin & Marshall College
1972
Jon Bordner, North Carolina State University
C. Hackett Bushweller, Worcester Polytechnic Institute
Jon Clardy, Iowa State University
Patricia A. Clark, Vassar College
Clark K. Colton, Massachusetts Institute of Technology
Karl F. Freed, The University of Chicago
Robert M. Gavin, Haverford College
James F. Harrison, Michigan State University
David N. Hendrickson, University of Illinois at Urbana-Champaign
Kendall N. Houk, Louisiana State University
Arnold J. Levine, Princeton University
J. Michael McBride, Yale University
William R. Moomaw, Williams College
William P. Reinhardt, Harvard University
Frederick S. Richardson, University of Virginia
John H. Seinfeld, California Institute of Technology
Frank A. Weinhold, Stanford University
1973
William H. Breckenridge, The University of Utah
Michael P. Doyle, Hope College
Irving R. Epstein, Brandeis University
Martin Feinberg, University of Rochester
Frederick D. Lewis, Northwestern University
Richard Losick, Harvard University
William Hughes Miller, University of California, Berkeley
David L. Nelson, University of Wisconsin-Madison
David F. Ollis, Princeton University
Michael R. Philpott, University of Oregon
Douglas Poland, Johns Hopkins University
David J. Prescott, Bryn Mawr College
Peter R. Rony, Virginia Polytechnic Institute and State University
Martin F. Semmelhack, Cornell University
K. Barry Sharpless, Massachusetts Institute of Technology
Robert W. Vaughan, California Institute of Technology
1974
, University of Washington
Jay Bailey, University of Houston
Robert D. Bereman, State University of New York at Buffalo
Michael Berry, University of Wisconsin-Madison
Robert G. Bryant, University of Minnesota
, University of Notre Dame
, Youngstown State University
Robert H. Grubbs, Michigan State University
Leroy E. Hood, California Institute of Technology
Bruce S. Hudson, Stanford University
John Katzenellenbogen, University of Illinois at Urbana-Champaign
Denis A. Kohl, University of Texas at Austin
Edward Penhoet, University of California, Berkeley
Herschel Rabitz, Princeton University
Robert F. Schleif, Brandeis University
Jeffrey Zink, University of California, Los Angeles
1975
Larry R. Dalton, Vanderbilt University
Victor W. Day, University of Nebraska-Lincoln
Robert Ditchfield, Dartmouth College
Elvera Ehrenfeld, The University of Utah
Thomas F. George, University of Rochester
William C. Harris, Furman University
Wayne L. Hubbell, University of California, Berkeley
Marc W. Kirschner, Princeton University
Lynn C. Klotz, Harvard University
L. Gary Leal, California Institute of Technology
W. Carl Lineberger, University of Colorado Boulder
Patrick S. Mariano, Texas A&M University
Tobin J. Marks, Northwestern University
James A. Spudich, University of California, San Francisco
Mark S. Wrighton, Massachusetts Institute of Technology
1976
Ronald W. Davis, Stanford University
William M. Gelbart, University of California, Los Angeles
George C. Levy, Florida State University
Roger K. Murray, Jr., University of Delaware
Jack R. Norton, Princeton University
Larry E. Overman, University of California, Irvine
Alexander Pines, University of California, Berkeley
Christopher A. Reed, University of Southern California
Robert G. Roeder, Washington University in St. Louis
William H. Scouten, Bucknell University
Barbara Ramsay Shaw, Duke University
John P. Simons, The University of Utah
Christopher T. Walsh, Massachusetts Institute of Technology
W. Henry Weinberg, California Institute of Technology
John R. Wiesenfeld, Cornell University
1977
John E. Bercaw, California Institute of Technology
Robert E. Cohen, Massachusetts Institute of Technology
Paul J. Dagdigian, Johns Hopkins University
David Dressler, Harvard University
John R. Eyler, University of Florida
Michael D. Fayer, Stanford University
Gregory L. Geoffroy, The Pennsylvania State University
Eric J. Heller, University of California, Los Angeles
Kenneth D. Jordan, Yale University
Harold L. Kohn, University of Houston
Paul L. Modrich, Duke University
Mario J. Molina, University of California, Irvine
John S. Olson, Rice University
Hong Yong Sohn, The University of Utah
George Stephanopoulos, University of Minnesota
Dwight A. Sweigart, Swarthmore College
1978
Peter B. Dervan, California Institute of Technology
David A. Dixon, University of Minnesota
James A. Dumesic, University of Wisconsin-Madison
William J. Evans, The University of Chicago
Bruce Ganem, Cornell University
William L. Jorgensen, Purdue University
Michael E. Jung, University of California, Los Angeles
Thomas F. Keyes, Yale University
Daniel A. Kleier, Williams College
Walter G. Klemperer, Columbia University
Nancy H. Kolodny, Wellesley College
F. Raymond Salemme, University of Arizona
Richard R. Schrock, Massachusetts Institute of Technology
John R. Shapley, University of Illinois at Urbana-Champaign
Amos B. Smith, III, University of Pennsylvania
K. Peter C. Vollhardt, University of California, Berkeley
1979
Thomas A. Albright, University of Houston
Douglas L. Brutlag, Stanford University
Jeremy K. Burdett, The University of Chicago
Malcolm H. Chisholm, Indiana University
Gary G. Christoph, Ohio State University
Christos Georgakis, Massachusetts Institute of Technology
Christopher G. Goff, Haverford College
David R. Herrick, University of Oregon
Philip M. Keehn, Brandeis University
Nancy E. Kleckner, Harvard University
George McLendon, University of Rochester
Horia Metiu, University of California, Santa Barbara
Kathlyn A. Parker, Brown University
Christian R. H. Raetz, University of Wisconsin-Madison
Gary B. Schuster, University of Illinois at Urbana-Champaign
Ahmed H. Zewail, California Institute of Technology
1980
Bruce S. Ault, University of Cincinnati
Steven G. Boxer, Stanford University
Harry G. Brittain, Seton Hall University
Chris K. Chang, Michigan State University
Marye Anne Fox, University of Texas at Austin
John A. Gladysz, University of California, Los Angeles
Paul L. Houston, Cornell University
Joseph N. Kushick, Amherst College
Elias Lazarides, California Institute of Technology
Martin Newcomb, Texas A&M University
Kyriacos C. Nicolaou, University of Pennsylvania
David W. Oxtoby, The University of Chicago
Mary Fedarko Roberts, Massachusetts Institute of Technology
Matthew V. Tirrell, III, University of Minnesota
Paul A. Wender, Harvard University
Myung-Hwan Whangbo, North Carolina State University
1981
Robert C. Aller, The University of Chicago
Alfons L. Baumstark, Georgia State University
Lewis C. Cantley, Harvard University
John H. Clark, University of California, Berkeley
Robert H. Crabtree, Yale University
Richard G. Finke, University of Oregon
Stephan S. Isied, Rutgers, The State University of New Jersey
Alan P. Kozikowski, University of Pittsburgh
Dennis Liotta, Emory University
Gary L. Miessler, St. Olaf College
Glenn D. Prestwich, Stony Brook University
Mary C. Rakowski DuBois, University of Colorado Boulder
James E. Rothman, Stanford University
George C. Schatz, Northwestern University
Neil E. Schore, University of California, Davis
Costas G. Vayenas, Massachusetts Institute of Technology
Keith R. Yamamoto, University of California, San Francisco
1982
Alan Campion, University of Texas at Austin
F. Fleming Crim, University of Wisconsin-Madison
G. William Daub, Harvey Mudd College
John H. Dawson, University of South Carolina
Glenn T. Evans, Oregon State University
Graham R. Fleming, The University of Chicago
Evan R. Kantrowitz, Boston College
J. Andrew McCammon, University of Houston
C. William McCurdy, Ohio State University
Cheuk-Yiu Ng, Iowa State University
Maria C. Pellegrini, University of Southern California
Kevin S. Peters, Harvard University
Thomas B. Rauchfuss, University of Illinois at Urbana-Champaign
Barry B. Snider, Brandeis University
Gregory Stephanopoulos, California Institute of Technology
1983
Robert A. Brown, Massachusetts Institute of Technology
Andrew E. DePristo, Iowa State University
Kenneth C. Janda, California Institute of Technology
Frederick W. King, University of Wisconsin-Eau Claire
Branka M. Ladanyi, Colorado State University
Shaul Mukamel, University of Rochester
Matthew S. Platz, Ohio State University
James P. Reilly, Indiana University
Mark H. Thiemens, University of California, San Diego
Craig A. Townsend, Johns Hopkins University
Veronica Vaida, Harvard University
David M. Walba, University of Colorado Boulder
R. Stanley Williams, University of California, Los Angeles
1984
Bruce E. Bursten, Ohio State University
Dennis A. Dougherty, California Institute of Technology
Barbara J. Garrison, The Pennsylvania State University
Miklos Kertesz, Georgetown University
Bruce H. Lipshutz, University of California, Santa Barbara
David G. Lynn, The University of Chicago
Alice C. Mignerey, University of Maryland, College Park
Peter J. Rossky, University of Texas at Austin
H. Bernard Schlegel, Wayne State University
Stuart L. Schreiber, Yale University
James L. Skinner, Columbia University
David S. Soane, University of California, Berkeley
1985
Krishnan Balasubramanian, Arizona State University
Gary W. Brudvig, Yale University
Terrence J. Collins, California Institute of Technology
Dennis P. Curran, University of Pittsburgh
Klavs F. Jensen, University of Minnesota
William D. Jones, University of Rochester
Nathan S. Lewis, Stanford University
Lanny S. Liebeskind, Emory University
David M. Ronis, Harvard University
Ian P. Rothwell, Purdue University
Ming-Daw Tsai, Ohio State University
Bonnie Ann Wallace, Columbia University
1986
Jacqueline K. Barton, Columbia University
John F. Brady, California Institute of Technology
Sylvia T. Ceyer, Massachusetts Institute of Technology
Michael M. Cox, University of Wisconsin-Madison
Richard A. Friesner, University of Texas at Austin
Jeffrey C. Kantor, University of Notre Dame
Marsha I. Lester, University of Pennsylvania
William J. McGinnis, Yale University
Geraldine L. Richmond, University of Oregon
Jasper Rine, University of California, Berkeley
Richard H. Scheller, Stanford University
Patricia A. Thiel, Iowa State University
1987
Peter B. Armentrout, The University of Utah
Anthony G. M. Barrett, Northwestern University
Peter F. Bernath, University of Arizona
George Christou, Indiana University
Bruce Demple, Harvard University
Francois N. Diederich, University of California, Los Angeles
Gary P. Drobny, University of Washington
Gregory S. Ezra, Cornell University
John W. Frost, Stanford University
Keith P. Johnston, University of Texas at Austin
Kevin K. Lehmann, Princeton University
Jeffrey A. Reimer, University of California, Berkeley
1988
Donald R. Bobbitt, University of Arkansas
Stephen L. Buchwald, Massachusetts Institute of Technology
Charles T. Campbell, Indiana University
Ken Feldman, The Pennsylvania State University
Paul L. Frattini, Carnegie Mellon University
Gregory S. Girolami, University of Illinois at Urbana-Champaign
Robert R. Lucchese, Texas A&M University
R. J. Dwayne Miller, University of Rochester
Jonathan L. Sessler, University of Texas at Austin
Michael E. Silver, Hope College
Angelica Stacy, University of California, Berkeley
Thomas D. Tullius, Johns Hopkins University
Daniel P. Weitekamp, California Institute of Technology
Kurt W. Zilm, Yale University
1989
Scott L. Anderson, Stony Brook University
Laurie J. Butler, The University of Chicago
Rob D. Coalson, University of Pittsburgh
Anthony W. Czarnik, Ohio State University
Hai-Lung Dai, University of Pennsylvania
Pablo G. Debenedetti, Princeton University
Andrew G. Ewing, The Pennsylvania State University
Alice P. Gast, Stanford University
Marie E. Krafft, Florida State University
Atsuo Kuki, Cornell University
Thomas E. Mallouk, University of Texas at Austin
John D. Simon, University of California, San Diego
Michael Trenary, University of Illinois at Chicago
Steven C. Zimmerman, University of Illinois at Urbana-Champaign
1990
Peter Chen, Harvard University
Kim R. Dunbar, Michigan State University
Juli F. Feigon, University of California, Los Angeles
Joseph S. Francisco, Wayne State University
Mark A. Johnson, Yale University
Michael Kahn, University of Illinois at Chicago
Charles M. Lieber, Columbia University
Andrew G. Myers, California Institute of Technology
Scott D. Rychnovsky, University of Minnesota
W. Mark Saltzman, Johns Hopkins University
Devarajan Thirumalai, University of Maryland, College Park
Nancy L. Thompson, The University of North Carolina at Chapel Hill
1991
Victoria Buch, University of Illinois at Chicago
Jeffrey A. Cina, The University of Chicago
Ariel Fernández, University of Miami
Glenn H. Fredrickson, University of California, Santa Barbara
David E. Hansen, Amherst College
Joseph T. Hupp, Northwestern University
Richard B. Kaner, University of California, Los Angeles
Peter T. Lansbury, Jr., Massachusetts Institute of Technology
Roger F. Loring, Cornell University
Daniel M. Neumark, University of California, Berkeley
Gerard Parkin, Columbia University
Andrzej T. Rajca, Kansas State University
1992
Patricia A. Bianconi, The Pennsylvania State University
Emily A. Carter, University of California, Los Angeles
Alan S. Goldman, Rutgers, The State University of New Jersey
Gerard S. Harbison, University of Nebraska-Lincoln
W. Dean Harman, University of Virginia
Joel M. Hawkins, University of California, Berkeley
Eric N. Jacobsen, University of Illinois at Urbana-Champaign
Anne B. Myers, University of Rochester
Gilbert M. Nathanson, University of Wisconsin-Madison
Athanassios Z. Panagiotopoulos, Cornell University
Gustavo E. Scuseria, Rice University
Gregory L. Verdine, Harvard University
Alec M. Wodtke, University of California, Santa Barbara
1993
Jean S. Baum, Rutgers, The State University of New Jersey
Brian E. Bent, Columbia University
Jennifer S. Brodbelt, University of Texas at Austin
Robert J. Cave, Harvey Mudd College
Christopher E. D. Chidsey, Stanford University
Bradley F. Chmelka, University of California, Santa Barbara
David W. Christianson, University of Pennsylvania
William S. Hammack, Carnegie Mellon University
Mark J. Hampden-Smith, University of New Mexico
Barbara Imperiali, California Institute of Technology
Mercouri G. Kanatzidis, Michigan State University
Eric T. Kool, University of Rochester
Jane E. G. Lipson, Dartmouth College
Thomas V. O'Halloran, Northwestern University
Thomas C. Pochapsky, Brandeis University
Alanna Schepartz, Yale University
Athan J. Shaka, University of California, Irvine
L. Keith Woo, Iowa State University
Matthew B. Zimmt, Brown University
1994
Eric V. Anslyn, University of Texas at Austin
Thomas P. Beebe, Jr., The University of Utah
Pamela J. Bjorkman, California Institute of Technology
Arup K. Chakraborty, University of California, Berkeley
James A. Cowan, Ohio State University
Amir H. Hoveyda, Boston College
Jeffery W. Kelly, Texas A&M University
Chi H. Mak, University of Southern California
Craig A. Merlic, University of California, Los Angeles
Jeffrey S. Moore, University of Illinois at Urbana-Champaign
Michael J. Sailor, University of California, San Diego
Eric S. G. Shaqfeh, Stanford University
Margaret A. Tolbert, University of Colorado Boulder
Patrick H. Vaccaro, Yale University
Gregory A. Voth, University of Pennsylvania
Theodore S. Widlanski, Indiana University
1995
Gary D. Glick, University of Michigan
Brent L. Iverson, University of Texas at Austin
Robert J. Levis, Wayne State University
Gaetano T. Montelione, Rutgers, The State University of New Jersey
Reginald M. Penner, University of California, Irvine
Lynne Regan, Yale University
Lawrence R. Sita, The University of Chicago
Timothy M. Swager, University of Pennsylvania
H. Holden Thorp, The University of North Carolina at Chapel Hill
William B. Tolman, University of Minnesota
Eric J. Toone, Duke University
Zhen-Gang Wang, California Institute of Technology
James R. Williamson, Massachusetts Institute of Technology
Peter Wipf, University of Pittsburgh
Sarah A. Woodson, University of Maryland, College Park
John Z. H. Zhang, New York University
1996
Guillermo C. Bazan, University of Rochester
D. Scott Bohle, University of Wyoming
Christopher N. Bowman, University of Colorado Boulder
Mark J. Burk, Duke University
Erick M. Carreira, California Institute of Technology
Robert E. Continetti, University of California, San Diego
Andrew D. Ellington, Indiana University
Lucio Frydman, University of Illinois at Chicago
John H. Griffin, Stanford University
Laura L. Kiessling, University of Wisconsin-Madison
Chad A. Mirkin, Northwestern University
Karin Musier-Forsyth, University of Minnesota
James S. Nowick, University of California, Irvine
Norbert F. Scherer, University of Pennsylvania
Jonathan V. Sweedler, University of Illinois at Urbana-Champaign
Susan C. Tucker, University of California, Davis
Jackie Y. Ying, Massachusetts Institute of Technology
1997
Eray S. Aydil, University of California, Santa Barbara
Juan J. de Pablo, University of Wisconsin-Madison
Peter K. Dorhout, Colorado State University
Gregory C. Fu, Massachusetts Institute of Technology
Konstantinos P. Giapis, California Institute of Technology
Richard A. Goldstein, University of Michigan
John F. Hartwig, Yale University
Nancy Makri, University of Illinois at Urbana-Champaign
Frank E. McDonald, Northwestern University
Dale F. Mierke, Clark University
Karl T. Mueller, The Pennsylvania State University
Todd M. Przybycien, Rensselaer Polytechnic Institute
Vincent M. Rotello, University of Massachusetts Amherst
Igal Szleifer, Purdue University
Michael J. Therien, University of Pennsylvania
Ziling (Ben) Xue, The University of Tennessee
1998
Nicholas L. Abbott, University of California, Davis
Nitash P. Balsara, Polytechnic University (New York)
Stacey F. Bent, New York University
Marcos Dantus, Michigan State University
Jeffery T. Davis, University of Maryland, College Park
P. Andrew Evans, University of Delaware
Ellen Fisher, Colorado State University
Clare P. Grey, Stony Brook University
Martin Gruebele, University of Illinois at Urbana-Champaign
Michael M. Haley, University of Oregon
Paul E. Laibinis, Massachusetts Institute of Technology
John Montgomery, Wayne State University
Catherine J. Murphy, University of South Carolina
Brooks Hart Pate, University of Virginia
David A. Shultz, North Carolina State University
Marc L. Snapper, Boston College
Michael Tsapatsis, University of Massachusetts Amherst
Keith A. Woerpel, University of California, Irvine
John L. Wood, Yale University
XuMu Zhang, The Pennsylvania State University
1999
Scott M. Auerbach, University of Massachusetts Amherst
Carolyn R. Bertozzi, University of California, Berkeley
David E. Clemmer, Indiana University
John T. Fourkas, Boston College
C. Daniel Frisbie, University of Minnesota
Randall L. Halcomb, University of Colorado Boulder
Sharon Hammes-Schiffer, University of Notre Dame
James E. Hutchison, University of Oregon
Thomas Lectka, Johns Hopkins University
Raul Lobo, University of Delaware
Yi Lu, University of Illinois at Urbana-Champaign
Dimitrios Maroudas, University of California, Santa Barbara
Anne B. McCoy, Ohio State University
Dominic V. McGrath, University of Arizona
Amy S. Mullin, Boston University
Andrew M. Rappe, University of Pennsylvania
Daniel Romo, Texas A&M University
Daniel K. Schwartz, Tulane University
Yian Shi, Colorado State University
Peng George Wang, Wayne State University
2000
Kristi S. Anseth, University of Colorado Boulder
Uwe H. F. Bunz, University of South Carolina
Geoffrey W. Coates, Cornell University
Timothy Deming, University of California, Santa Barbara
Deborah G. Evans, University of New Mexico
Michel R. Gagné, The University of North Carolina at Chapel Hill
Hilary A. Godwin, Northwestern University
Mark W. Grinstaff, Duke University
Marc A. Hillmyer, University of Minnesota
James L. Leighton, Columbia University
Jeffrey R. Long, University of California, Berkeley
Todd J. Martinez, University of Illinois at Urbana-Champaign
Scott J. Miller, Boston College
Milan Mrksich, The University of Chicago
John P. Toscano, Johns Hopkins University
, University of Pennsylvania
Thomas J. Wandless, Stanford University
James J. Watkins, University of Massachusetts Amherst
2001
Philip Bevilacqua, The Pennsylvania State University
Vicki Colvin, Rice University
Jan Genzer, North Carolina State University
David Y. Gin, University of Illinois at Urbana-Champaign
Richard Hsung, University of Minnesota
Wenbin Lin, Brandeis University
Mark Lonergan, University of Oregon
Benjamin Miller, University of Rochester
Paul Nealey, University of Wisconsin-Madison
John Peters, Utah State University
Amy Rosenzweig, Northwestern University
Benjamin Schwartz, University of California, Los Angeles
Matthew Shair, Harvard University
Erik Sorensen, The Scripps Research Institute
Ross Widenhoefer, Duke University
Olaf G. Wiest, University of Notre Dame
2002
Annelise E. Barron, Northwestern University
Peter A. Beal, The University of Utah
Jillian Buriak, Purdue University
Jeffrey D. Carbeck, Princeton University
Hongjie Dai, Stanford University
Michael W. Deem, University of California, Los Angeles
Robert M. Dickson, Georgia Institute of Technology
Theodore G. Goodson, Wayne State University
Jonas C. Peters, California Institute of Technology
David R. Reichman, Harvard University
Dalibor Sames, Columbia University
David S. Sholl, Carnegie Mellon University
Mark E. Tuckerman, New York University
Wilfred A. van der Donk, University of Illinois at Urbana-Champaign
Younan Xia, University of Washington
2003
Catalina Achim, Carnegie Mellon University
Jianshu Cao, Massachusetts Institute of Technology
Paul Cremer, Texas A&M University
Michael J. Krische, University of Texas at Austin
Kelvin H. Lee, Cornell University
Christopher J. Lee, University of California, Los Angeles
Louis A. Lyon, Georgia Institute of Technology
David MacMillan, California Institute of Technology
Vijay S. Pande, Stanford University
Hongkun Park, Harvard University
Floyd E. Romesberg, The Scripps Research Institute
Shannon S. Stahl, University of Wisconsin–Madison
Suzanne Walker, Princeton University
2004
Justin Du Bois, Stanford University
Pingyun Feng, University of California, Riverside
Neil L. Kelleher, University of Illinois at Urbana-Champaign
Sergey A. Kozmin, The University of Chicago
David R. Liu, Harvard University
Colin P. Nuckolls, Columbia University
Blake R. Peterson, The Pennsylvania State University
Andrei Sanov, University of Arizona
Stanislav Shvartsman, Princeton University
Matthew Sigman, The University of Utah
Jennifer A. Swift, Georgetown University
Nils G. Walter, University of Michigan
Peidong Yang, University of California, Berkeley
2005
Victor Batista, Yale University
Kristie Boering, University of California, Berkeley
Daniel Gamelin, University of Washington
Brian R. Gibney, Columbia University
Zhibin Guan, University of California, Irvine
Jason M. Haugh, North Carolina State University
Rustem F. Ismagilov, The University of Chicago
Christine D. Keating, The Pennsylvania State University
Shana O. Kelley, Boston College
Todd D. Krauss, University of Rochester
Yung-Ya Lin, University of California, Los Angeles
Janis Louie, The University of Utah
Daniel J. Mindiola, Indiana University
Brian Stoltz, California Institute of Technology
Marcus Weck, Georgia Institute of Technology
Xiaowei Zhuang, Harvard University
2006
Heather C. Allen, Ohio State University
Paul Chirik, Cornell University
Patrick S. Daugherty, University of California, Santa Barbara
David H. Gracias, Johns Hopkins University
Chuan He, The University of Chicago
Paul J. Hergenrother, University of Illinois at Urbana-Champaign
Yoshitaka Ishii, University of Illinois at Chicago
Jeffrey S. Johnson, The University of North Carolina at Chapel Hill
James T. Kindt, Emory University
Carsten Krebs, The Pennsylvania State University
Eric Meggers, University of Pennsylvania
Dong-Kyun Seo, Arizona State University
Alice Y. Ting, Massachusetts Institute of Technology
Orlin D. Velev, North Carolina State University
John P. Wolfe, University of Michigan
2007
Helen Blackwell, University of Wisconsin–Madison
Frank L. H. Brown, University of California, Santa Barbara
Jeffrey M. Davis , University of Massachusetts Amherst
Ivan J. Dmochowski, University of Pennsylvania
Justin P. Gallivan, Emory University
David S. Ginger, University of Washington
Bartosz A. Grzybowski, Northwestern University
Jeffrey D. Hartgerink, Rice University
Efrosini Kokkoli, University of Minnesota
Gavin MacBeath, Harvard University
David A. Mazziotti, The University of Chicago
Sergey Nizkorodov, University of California, Irvine
Oleg V. Ozerov, Brandeis University
Raymond Schaak, The Pennsylvania State University
Michael Strano, Massachusetts Institute of Technology
2008
Christopher Bielawski, University of Texas at Austin
Garnet K. Chan, Cornell University
Olafs Daugulis, University of Houston
Lincoln J. Lauhon, Northwestern University
Mohammad Movassaghi, Massachusetts Institute of Technology
Thuc-Quyen Nguyen, University of California, Santa Barbara
Garegin Papoian, The University of North Carolina at Chapel Hill
Theresa M. Reineke, Virginia Polytechnic Institute and State University
Justine P. Roth, Johns Hopkins University
Yi Tang, University of California, Los Angeles
Victor M. Ugaz, Texas A&M University
Qian Wang, University of South Carolina
M. Christina White, University of Illinois at Urbana-Champaign
Haw Yang, University of California, Berkeley
Dongping Zhong, Ohio State University
2009
Alán Aspuru-Guzik, Harvard University
Xi Chen, University of California, Davis
Katherine Franz, Duke University
Christy Haynes, University of Minnesota
Alan F. Heyduk, University of California, Irvine
So Hirata, University of Florida
Laura Kaufman, Columbia University
Suljo Linic, University of Michigan
Richmond Sarpong, University of California, Berkeley
Shu-ou Shan, California Institute of Technology
Jeremy M. Smith, New Mexico State University
Todd M. Squires, University of California, Santa Barbara
Abraham Stroock, Cornell University
Paul Ryan Thompson, University of South Carolina
2010
Kate Carroll, University of Michigan
Matthew Disney, University at Buffalo
Kevin Dorfman, University of Minnesota
Amar Flood, Indiana University
Jayne Garno, Louisiana State University
Song-i Han, University of California, Santa Barbara
Seogjoo Jang, Queens College, City University of New York
Benjamin McCall, University of Illinois at Urbana-Champaign
R. Mohan Sankaran, Case Western Reserve University
Rachel A. Segalman, University of California, Berkeley
Dmitri Talapin, The University of Chicago
Edward Valeev, Virginia Polytechnic Institute and State University
B. Jill Venton, University of Virginia
Tehshik Yoon, University of Wisconsin–Madison
2011
Christine Aikens, Kansas State University
Ruben L. Gonzalez, Jr., Columbia University
John Herbert, Ohio State University
George Huber, University of Massachusetts Amherst
Rongchao Jin, Carnegie Mellon University
Kevin Kubarych, University of Michigan
So-Jung Park, University of Pennsylvania
Nathan Price, University of Illinois at Urbana-Champaign
Tobias Ritter, Harvard University
Herman Sintim, University of Maryland, College Park
Charles H. Sykes, Tufts University
Ting Xu, University of California, Berkeley
Wei You, The University of North Carolina at Chapel Hill
2012
Adam Cohen, Harvard University
Greg Engel, The University of Chicago
Joshua S. Figueroa, University of California, San Diego
Seth B. Herzon, Yale University
Christopher Jaroniec, Ohio State University
Steven Little, University of Pittsburgh
Shih-Yuan Liu, University of Oregon
Christopher Love, Massachusetts Institute of Technology
Dustin Maly, University of Washington
Anne McNeil, University of Michigan
Valeria Molinero, The University of Utah
Celeste Nelson, Princeton University
William Noid, The Pennsylvania State University
Sarah Reisman, California Institute of Technology
2013
Theodore A. Betley, Harvard University
Michelle C. Chang, University of California, Berkeley
William Dichtel, Cornell University
Abigail Doyle, Princeton University
Neil K. Garg, University of California, Los Angeles
Thomas W. Hamann, Michigan State University
Mandë Holford, Hunter College of the City University of New York
Munira Khalil, University of Washington
Stephen Maldonado, University of Michigan
Thomas F. Miller, California Institute of Technology
Baron G. Peters, University of California, Santa Barbara
Charles M. Schroeder, University of Illinois at Urbana-Champaign
Corey R. J. Stephenson, Boston University
2014
Theodor Agapie, California Institute of Technology
Hal Alper, University of Texas at Austin
Paul Dauenhauer, University of Massachusetts Amherst
Nilay Hazari, Yale University
Ramesh Jasti, Boston University
Matthew Kanan, Stanford University
Elizabeth Nolan, Massachusetts Institute of Technology
Rodney Priestley, Princeton University
Khalid Salaita, Emory University
Jordan Schmidt, University of Wisconsin–Madison
Sara Skrabalak, Indiana University
Adam Wasserman, Purdue University
Emily Weiss, Northwestern University
Daniel Weix, University of Rochester
Michael Zdilla, Temple University
2015
Emily Balskus, Harvard University
Shannon W. Boettcher, University of Oregon
Jennifer Dionne, Stanford University
Joshua E. Goldberger, Ohio State University
André Hoelz, California Institute of Technology
Michael C. Jewett, Northwestern University
Wei Min, Columbia University
Douglas Mitchell, University of Illinois Urbana-Champaign
David A. Nicewicz, The University of North Carolina at Chapel Hill
Bradley D. Olsen, Massachusetts Institute of Technology
Gary J. Patti, Washington University in St. Louis
Jennifer A. Prescher, University of California, Irvine
Joseph E. Subotnik, University of Pennsylvania
2016
Andrew J. Boydston, University of Washington
Luis M. Campos, Columbia University
William C. Chueh, Stanford University
Neal K. Devaraj, University of California, San Diego
Mircea Dincă, Massachusetts Institute of Technology
Naomi Ginsberg, University of California, Berkeley
Aditya S. Khair, Carnegie Mellon University
Jared C. Lewis, The University of Chicago
Amanda J. Morris, Virginia Polytechnic Institute and State University
Eranda Nikolla, Wayne State University
Michael D. Pluth, University of Oregon
Nathaniel K. Szymczak, University of Michigan
Qiu Wang, Duke University
2017
Chase L. Beisel, North Carolina State University
Brandi Cossairt, University of Washington
Jason M. Crawford, Yale University
Aaron P. Esser-Kahn, University of California, Irvine
Alison R. Fout, University of Illinois Urbana-Champaign
Randall H. Goldsmith, University of Wisconsin–Madison
Robert R. Knowles, Princeton University
Julius B. Lucks, Northwestern University
Thomas E. Markland, Stanford University
Christian M. Metallo, University of California, San Diego
Michelle O'Malley, University of California, Santa Barbara
William A. Tisdale, Massachusetts Institute of Technology
Guihua Yu, University of Texas at Austin
2018
Alexander Barnes, Washington University in St. Louis
Amie K. Boal, The Pennsylvania State University
Abhishek Chatterjee, Boston College
Irene A. Chen, University of California, Santa Barbara
Francesco A. Evangelista, Emory University
Danna Freedman, Northwestern University
Catherine L. Grimes, University of Delaware
John B. Matson, Virginia Polytechnic Institute and State University
Kang-Kuen Ni, Harvard University
Corinna S. Schindler, University of Michigan
Mohammad R. Seyedsayamdost, Princeton University
Mikhail G. Shapiro, California Institute of Technology
Matthew D. Shoulders, Massachusetts Institute of Technology
2019
Tianning Diao, New York University
Bryan C. Dickinson, The University of Chicago
Keary M. Engle, The Scripps Research Institute
Renee R. Frontiera, University of Minnesota
Garret M. Miyake, Colorado State University
Timothy R. Newhouse, Yale University
Amish J. Patel, University of Pennsylvania
Dipali G. Sashital, Iowa State University
Natalia Shustova, University of South Carolina
Christopher Uyeda, Purdue University
Timothy A. Wencewicz, Washington University in St. Louis
Jenny Y. Yang, University of California, Irvine
2020
Ou Chen, Brown University
, Duke University
, The University of North Carolina at Chapel Hill
, University of Rochester
, University of California, Berkeley
Katherine Mirica, Dartmouth College
, Arizona State University
Alison R. H. Narayan, University of Michigan
Gabriela Schlau-Cohen, Massachusetts Institute of Technology
Alexander M. Spokoyny, University of California, Los Angeles
Steven D. Townsend, Vanderbilt University
, The University of Chicago
, Harvard University
2021
, The University of Chicago
, The University of Texas at Austin
, University of California, Santa Barbara
Osvaldo Gutierrez, University of Maryland, College Park
Julia Kalow, Northwestern University
Markita del Carpio Landry, University of California, Berkeley
Song Lin, Cornell University
, Yale University
, Massachusetts Institute of Technology
David Olson, University of California, Davis
, Brown University
, University of California, San Francisco
Luisa Whittaker-Brooks, University of Utah
, Lehigh University
, University of Massachusetts Amherst
, University of California, San Diego
2022
, University of California, Los Angeles
, University of Illinois at Urbana-Champaign
, Princeton University
, University of Oregon
, North Carolina State University
, University of Chicago
, Dartmouth College
, Harvard University
, Northeastern University
, California Institute of Technology
, University of Colorado, Boulder
, Massachusetts Institute of Technology
, Stanford University
, University of Washington
, Johns Hopkins University
, University of California, Davis
, The Pennsylvania State University
, Yale University
See also
List of chemistry awards
References
External links
Dreyfus Foundation Website
Chemistry awards
Awards established in 1970
Early career awards | Camille Dreyfus Teacher-Scholar Awards | [
"Technology"
] | 7,479 | [
"Science and technology awards",
"Chemistry awards"
] |
67,868,441 | https://en.wikipedia.org/wiki/Hotel%20Newhouse | The Hotel Newhouse was a 12-story, grand hotel in Salt Lake City, Utah.
History
In 1907, mining magnate Samuel Newhouse launched a building campaign in an attempt to move the city's commercial center away from Temple Square to Exchange Place, which is four blocks to the south on Main Street. Construction was completed in 1912.
The Hotel Newhouse was one of a number of buildings financed by Newhouse in the area, which also included the Boston and Newhouse Buildings, Utah's first true skyscrapers. The original design by Henry Ives Cobb, envisioned as one of the most opulent hotels in the West, was simplified during construction after Newhouse went bankrupt, and the completed structure stood without windows for a time, earning the satirical nickname "the best air-conditioned hotel in the West." For many years afterward, it stood as the "gentile" alternative to the Hotel Utah on the north side of downtown, and hosted many famous musicians and other noteworthy visitors. After its heyday, the building slowly fell into disrepair and was acquired by Earl Holding, the owner of the nearby rival Little America Hotel. Though its new ownership deemed it too costly to renovate, its 1977 nomination for the National Register of Historic Places states that the building was "in good condition and had only experienced minor deterioration of fabric." The hotel was demolished on June 26, 1983, in front of a large crowd, an event widely reported in national media; its longtime neighbor, the Terrace Ballroom, was demolished a few years later. Despite early plans to redevelop the block, the site eventually became part of a 10-acre parking lot that is now owned by City Creek Reserve.
See also
National Register of Historic Places listings in Salt Lake City
Exchange Place Historic District
Bigelow-Ben Lomond Hotel, another grand hotel in nearby Ogden, Utah that resembles the Hotel Newhouse
References
Demolished buildings and structures in Utah
Demolished hotels in the United States
1912 establishments in Utah
1983 disestablishments in Utah
Hotel buildings completed in 1912
Buildings and structures demolished in 1983
Buildings and structures demolished by controlled implosion | Hotel Newhouse | [
"Engineering"
] | 431 | [
"Buildings and structures demolished by controlled implosion",
"Architecture"
] |
67,868,524 | https://en.wikipedia.org/wiki/Plasminogen%20%28medication%29 | Plasminogen, sold under the brand name Ryplazim, is a biologic medication for the treatment of hypoplasminogenemia (plasminogen deficiency type 1). It is purified from human plasma and is administered intravenously.
The most common side effects include abdominal pain, bloating, nausea, bleeding, limb pain, fatigue, constipation, dry mouth, headache, dizziness, joint pain, and back pain.
Individuals with hypoplasminogenemia lack a protein called plasminogen, which is responsible for the ability of the body to break down fibrin clots. Plasminogen deficiency leads to an accumulation of fibrin, causing the development of growths (lesions) that can impair normal tissue and organ function and may lead to blindness when these lesions affect the eyes.
Plasminogen, human-tvmh was approved for medical use in the United States in June 2021. It is the first therapy for hypoplasminogenemia approved by the U.S. Food and Drug Administration (FDA).
Medical uses
Plasminogen, human-tvmh is indicated for the treatment of people with plasminogen deficiency type 1, also referred to as hypoplasminogenemia, a disorder that can impair normal tissue and organ function and may lead to blindness.
History
The effectiveness and safety of plasminogen are primarily based on a single-arm, open-label (unblinded) clinical trial enrolling 15 adult and pediatric participants with plasminogen deficiency type 1. All participants received plasminogen administered every two to four days for 48 weeks. The effectiveness of plasminogen was demonstrated by an improvement of at least 50% in lesions in all 11 participants who had lesions at baseline, and by the absence of recurrent or new lesions in any of the 15 participants through the 48 weeks of treatment.
The U.S. Food and Drug Administration (FDA) granted the application for plasminogen orphan drug designation, fast track designation, priority review, and a rare pediatric disease priority review voucher. The FDA granted approval of Ryplazim to ProMetic Biotherapeutics Inc.
References
External links
Biopharmaceuticals
Orphan drugs | Plasminogen (medication) | [
"Chemistry",
"Biology"
] | 471 | [
"Pharmacology",
"Biotechnology products",
"Biopharmaceuticals"
] |
67,868,790 | https://en.wikipedia.org/wiki/Dmitrii%20Treschev | Dmitrii (or Dmitry) Valerevich Treschev (Дмитрий Валерьевич Трещёв, born 25 October 1964 im Olenegorsk, Murmansk Oblast) is a Russian mathematician and mathematical physicist, specializing in dynamical systems of classical mechanics.
Education and career
Treschev completed his secondary education in 1981 at the Специализированный учебно-научный центр (СУНЦ) МГУ имени А.Н. Колмогорова (Specialized Educational and Scientific Center of Moscow State University, the A. N. Kolmogorov Physics and Mathematics Boarding School No. 18). He completed his undergraduate study in 1986 with a degree from the Faculty of Mechanics and Mathematics, Moscow State University. There in 1988 he received his Candidate of Sciences degree (PhD) with the thesis Геометрические методы исследования периодических траекторий динамических систем (Geometric methods of investigation of periodic trajectories of dynamical systems) under the supervision of Valerii Vasilievich Kozlov. In 1992 Treschev received his Russian Doctor of Sciences degree (habilitation) with the thesis Качественные методы исследования гамильтоновых систем, близких к интегрируемым (Qualitative methods for studying Hamiltonian systems close to integrable ones).
At the secondary school СУНЦ, Treschev taught as a professor in the Department of Mathematics from 1986 until his resignation. At Moscow State University, he is since 1993 a leading researcher, since 1998 a professor, and since 2006 head of the Department of Theoretical Mechanics. At the Steklov Institute he became in 2005 a chief researcher and the deputy director for research and is since 2017 the director for research. He is the author or coauthor of over 70 scientific publications. Together with V. V. Kozlov, he supervises the seminar Избранные задачи классической динамики (Selected problems of classical dynamics).
Treschev's research deals with integrability and non-integrability, dynamical stability, KAM theory, separatrix splitting, averaging in slow-fast systems, chaos in Hamiltonian dynamics, Arnold diffusion, statistical mechanics, and ergodic theory. He has served on the editorial boards of the journals Nonlinearity (first published in 1988), Chaos, Mathematical Notes, and Regular and Chaotic Dynamics (first published in 2007).
In 1995 Treschev was a Laureate of the State Prize of the Russian Federation for young scientists. In 2007 he was awarded the Lyapunov Prize. He was elected in 2003 a corresponding member and in 2016 a full member of the Russian Academy of Sciences. In 2002 he was an invited speaker with talk Continuous averaging in dynamical systems at the International Congress of Mathematicians in Beijing.
Selected publications
Articles
Books
References
External links
mathnet.ru
1964 births
Living people
Moscow State University alumni
Academic staff of Moscow State University
Academic staff of the Steklov Institute of Mathematics
20th-century Russian mathematicians
21st-century Russian mathematicians
Dynamical systems theorists
Mathematical physicists | Dmitrii Treschev | [
"Mathematics"
] | 786 | [
"Dynamical systems theorists",
"Dynamical systems"
] |
67,869,112 | https://en.wikipedia.org/wiki/Proportional-fair%20rule | In operations research and social choice, the proportional-fair (PF) rule is a rule saying that, among all possible alternatives, one should pick an alternative that cannot be improved, where "improvement" is measured by the sum of relative improvements possible for each individual agent. It aims to provide a compromise between the utilitarian rule - which emphasizes overall system efficiency, and the egalitarian rule - which emphasizes individual fairness.
The rule was first presented in the context of rate control in communication networks. However, it is a general social choice rule and can also be used, for example, in resource allocation.
Definition
Let X be a set of possible "states of the world" or "alternatives". Society wishes to choose a single state from X. For example, in a single-winner election, X may represent the set of candidates; in a resource allocation setting, X may represent all possible allocations of the resource.
Let I be a finite set, representing a collection of individuals. For each i in I, let u_i : X → ℝ be a utility function, describing the amount of happiness an individual i derives from each possible state.
A social choice rule is a mechanism which uses the data (u_i) for i in I to select some element(s) from X which are "best" for society. The question of what "best" means is the basic question of social choice theory. The proportional-fair rule selects an element x in X such that, for every other state y in X:

sum_{i in I} [u_i(y) − u_i(x)] / u_i(x) ≤ 0.

Note that the term inside the sum, [u_i(y) − u_i(x)] / u_i(x), represents the relative gain of agent i when switching from x to y. The PF rule prefers a state x over a state y if and only if the sum of relative gains when switching from x to y is not positive.
Comparison to other rules
The utilitarian rule selects an element x in X that maximizes the sum of individual utilities, that is, for every other state y in X:

sum_{i in I} u_i(x) ≥ sum_{i in I} u_i(y).

That rule ignores the current utility of the individuals. In particular, it might select a state in which the utilities of some individuals are zero, if the utilities of some other individuals are sufficiently large.
The egalitarian rule selects an element x in X that maximizes the smallest individual utility, that is, for every other state y in X:

min_{i in I} u_i(x) ≥ min_{i in I} u_i(y).

This rule ignores the total efficiency of the system. In particular, it might select a state in which the utilities of most individuals are very low, just to make the smallest utility slightly larger.
The proportional-fair rule aims to balance between these two extremes. On one hand, it considers a sum of utilities rather than just the smallest utility; on the other hand, inside the sum, it gives more weight to agents whose current utility is smaller. In particular, if the utility of some individual in x is 0, and there is another state y in which their utility is larger than 0, then the PF rule would prefer state y, as the relative improvement of that individual is infinite (the gain is divided by 0).
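The three rules can be contrasted on a small discrete example. The following Python sketch uses invented utility numbers (not from the article) and selects a state under each rule; the PF test checks that no other state offers a positive sum of relative gains.

```python
# Illustrative sketch: comparing the utilitarian, egalitarian, and
# proportional-fair (PF) rules on three invented states for two agents.

def pf_sum(x, y):
    """Sum of each agent's relative gain when switching from x to y."""
    return sum((yi - xi) / xi for xi, yi in zip(x, y))

# Each state is a tuple of utilities (u_1, u_2).
states = [(9, 1), (5, 5), (6, 3)]

utilitarian = max(states, key=sum)   # maximize total utility
egalitarian = max(states, key=min)   # maximize the worst-off agent
# A state x is PF if no other state y yields a positive sum of relative gains.
pf_states = [x for x in states if all(pf_sum(x, y) <= 0 for y in states)]

print(utilitarian)  # (9, 1) -- highest sum, but agent 2 is poorly off
print(egalitarian)  # (5, 5)
print(pf_states)    # [(5, 5)]
```

Here the PF choice coincides with the egalitarian one, but the rules diverge in general: switching from (5, 5) to (9, 1) has relative-gain sum 4/5 − 4/5 = 0, so (5, 5) passes the PF test, while (9, 1) fails it against (5, 5) with sum −4/9 + 4 > 0.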
Properties
When the utility sets are convex, a proportional-fair solution always exists. Moreover, it maximizes the product of utilities (also known as the Nash welfare).
When the utility sets are not convex, a proportional-fair solution is not guaranteed to exist. However, when it exists, it still maximizes the product of utilities.
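The connection to Nash welfare can be checked numerically on a simple convex example. The sketch below uses an invented setup (one divisible resource split between two agents with linear utilities, not from the article): the grid point maximizing the product of utilities also satisfies the PF inequality against every feasible alternative, while a non-optimal point fails it.

```python
# Illustrative sketch: on a convex utility set, the Nash-welfare maximizer
# (maximum product of utilities) satisfies the proportional-fairness condition.
# One divisible resource split as (t, 1-t); linear utilities are assumed.

def utilities(t):
    return (3.0 * t, 2.0 * (1.0 - t))  # u1 = 3t, u2 = 2(1-t)

def pf_sum(x, y):
    return sum((yi - xi) / xi for xi, yi in zip(x, y))

grid = [i / 1000 for i in range(1, 1000)]

# Maximize the product u1*u2 = 6t(1-t): the optimum is at t = 0.5.
t_star = max(grid, key=lambda t: utilities(t)[0] * utilities(t)[1])
x_nash = utilities(t_star)

# PF check: relative-gain sums from x_nash to every other point are <= 0
# (here they are exactly 0 along the linear frontier, up to rounding).
worst = max(pf_sum(x_nash, utilities(t)) for t in grid)

# A non-optimal point violates the condition against some alternative.
x_bad = utilities(0.25)
worst_bad = max(pf_sum(x_bad, utilities(t)) for t in grid)

print(t_star)             # 0.5
print(worst <= 1e-9)      # True
print(worst_bad > 0)      # True
```

The grid search is only a stand-in for proper convex optimization; it suffices to illustrate that the PF point and the Nash-welfare maximizer coincide.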
The PF rule in specific settings
Proportional fairness has been studied in various settings.
Network scheduling; see proportional-fair scheduling.
The fair subset sum problem.
Queueing.
References
Social choice theory
Mathematical optimization
Fairness criteria | Proportional-fair rule | [
"Mathematics"
] | 683 | [
"Mathematical optimization",
"Mathematical analysis"
] |
67,869,116 | https://en.wikipedia.org/wiki/Oliver%20Zahn | Oliver Zahn is US/German theoretical astrophysicist, data scientist, and entrepreneur, best known for developing algorithms for astrophysical data analysis and widely cited discoveries of phenomena in the history of the Universe. He is also known for his more recent work as founder and CEO of Climax Foods, a California-based biotechnology company modeling dairy and other animal products directly from plant ingredients. Prior to becoming an entrepreneur, Zahn directed UC Berkeley's Center for Cosmological Physics alongside George Smoot and Saul Perlmutter and was Head of Data Science at Google
Early life and education
Zahn was born in Munich and studied physics and philosophy at Ludwig Maximilian University of Munich, doing his Diploma thesis in theoretical astrophysics jointly at Max Planck Institute for Astrophysics and New York University, graduating summa cum laude. He went on to do his dissertation work in cosmology at Harvard University, before winning the inaugural prize fellowship at UC Berkeley's Center for Cosmological Physics, funded directly by the 2006 Nobel Prize in Physics.
Career
Industry career
In 2019, Oliver founded Climax Foods to replace animal foods with dairy and meat directly produced from plants, circumventing the animals' complex metabolisms and thereby reducing greenhouse gases and water use caused by animal agriculture. Climax aims to outcompete animal products by offering zero-compromise alternatives that are purely plant-based, yet indistinguishable in terms of taste and texture, and better than their animal-based competitors in terms of nutrition and price.
Academic career
Zahn has worked on a broad range of topics in theoretical, computational, and observational astrophysics and cosmology. Working with multiple multi-national collaborations, he has co-authored more than 100 peer-reviewed journal articles with more than 14,000 citations and an h-index of 68.
As an undergraduate at Max Planck Institute for Astrophysics, Zahn studied the early Universe and constrained deviations from the laws of gravity and electro-magnetism during the Big Bang.
While a doctoral student at Harvard University, Zahn and co-authors Smith and Dore detected, for the first time, gravitational lensing in the cosmic microwave background. The finding has since been confirmed by teams analyzing data from the Planck, Polarbear, and SPT telescopes.
In a separate series of papers Zahn introduced statistical measures to use redshifted 21 cm radiation to study otherwise inaccessible periods of the Universe's structure formation. He invented a novel simulation framework to study galaxy formation in the cosmic web, yielding orders of magnitude performance gains compared to previous ray tracing frameworks, enabling exploration of much larger parameter spaces.
While leading analyses for the University of Chicago and University of California, Berkeley, based South Pole Telescope collaboration, Zahn and his team showed that the first galaxies formed more explosively than previously thought.
References
American astrophysicists
German astrophysicists
Data scientists
Businesspeople from Munich
Observational astronomy
Harvard Graduate School of Arts and Sciences alumni
Year of birth missing (living people)
Living people | Oliver Zahn | [
"Astronomy"
] | 606 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
67,869,655 | https://en.wikipedia.org/wiki/Susanne%20Aalto | Susanne E. Aalto (born 28 November 1964) is a Swedish professor of radio astronomy geodesy at the Onsala Space Observatory in the department of Space, Earth and Environment at Chalmers University of Technology. She has been a professor of radio astronomy since 2013. Her research focuses on star formation, supermassive black holes and cold jets in galaxies. Between 1994 and 1999, she completed her post doctoral studies at the Steward Observatory, University of Arizona and at Caltech in the United States.
In 1999, Aalto was awarded the Albert Wallin Prize by the Royal Society for Science and Knowledge in Gothenburg, Sweden. She researches the evolution and motion of galaxies using radio telescopes and radiation from molecules.
In 2023 Aalto was elected as a fellow of the Royal Swedish Academy of Engineering Sciences.
Early life
Aalto was born on 28 November 1964 in Eskilstuna, Sweden. In 1994, aged 29, she became Sweden's first female doctor of radio astronomy with a dissertation on radiation from molecules as a way to study galaxies that form many stars simultaneously (starburst galaxies).
References
Living people
1964 births
Chalmers University of Technology alumni
Academic staff of the Chalmers University of Technology
Women astronomers
21st-century Swedish astronomers
Members of the Royal Swedish Academy of Sciences
Members of the Royal Swedish Academy of Engineering Sciences | Susanne Aalto | [
"Astronomy"
] | 266 | [
"Women astronomers",
"Astronomers"
] |
72,245,777 | https://en.wikipedia.org/wiki/Data%20diplomacy | Data diplomacy can be defined in two different ways: use of data as a means and tool to conduct national diplomacy, or the use of diplomatic actions and skills of various stakeholders to enable and facilitate data access, understanding, and use. Data can help and influence many aspects of the diplomatic process, such as information gathering, negotiations, consular services, humanitarian response and foreign policy development. The second kind of data diplomacy challenges traditional models of diplomacy and can be conducted without tracks and diplomats. Drivers of change in diplomacy are also emerging from industry, academia and directly from the public.
In recent years, as more governments have realized the value and capabilities of data, the influence and application of digital diplomacy have expanded gradually and become a new kind of soft diplomatic power.
Classification
Based on the definition of science diplomacy given by the American Association for the Advancement of Science the article of Andy Boyd, Jane Gatewood, Stuart Thorson and Timothy D. V. Dye identifies three subcategories of data diplomacy: data in diplomacy, diplomacy for data, and data for diplomacy. It is worth noting that the three classifications of data diplomacy are inevitably overlapping.
Data in diplomacy
Diplomatic data refers to the infusion of data and data expertise into relationships between nation-states or other entities that can affect policymaking and involves training in data understanding, use and management. Such types of diplomacy are not limited to diplomats but can also occur between institutions and the public. Actors at both the macro and micro levels drive data diplomacy and data diplomacy is often at the indistinguishable forefront of soft and hard power. Applications include direct aid, access to international credit, and international negotiations. However, the process of data diplomacy is also subject to unstructured and uncontrolled data, such as whistleblowing data disclosures.
Diplomacy for data
This is an important classification in data diplomacy and typically refers to stakeholder interactions to advance international practices of data sharing, data use, and data interpretation. Diplomacy for Data encompasses every stage of data - from generation to sharing, use and archiving or deletion. This helps standardize international data coding, facilitates international data sharing, increases public participation, protects user privacy and regulates environmental imbalances. Applications include the International Classification of Diseases coding system that the World Health Organization established and UN benchmark data sharing.
Data for diplomacy
Data for diplomacy refers to data experts worldwide collaborating to create relationships: diplomats need to understand the use of data to better facilitate the creation of such relationships, use diplomacy to access data, and leverage the value of data as soft power. Data can be used for diplomatic occasions in some cases. Applications include the UN Global Pulse, which uses big data in humanitarian volunteer programs.
Distinction
Data diplomacy and science diplomacy
Science diplomacy is the use of scientific and diplomatic cooperation among nations to address common problems such as climate change, food security, poverty, energy consumption, nuclear energy use, and epidemic diseases. Science diplomacy typically includes three main components:
Science in diplomacy - Diplomatic action for international scientific cooperation
Diplomacy for science - Science as a soft power to advance diplomatic goals
Science for diplomacy—Science directly supports the diplomatic process
Data is the foundation of science, and science without data is inappropriate. From this perspective, data diplomacy can be considered as a part or extension of science diplomacy and as a more thorough, diplomatic relationship built from raw data.
Data diplomacy and digital diplomacy
Digital diplomacy refers to the widespread use of information and communication technologies in diplomacy, particularly innovations on the Internet. This includes the use of social media, online virtual meetings, and other Internet tools by diplomats. The impact of the internet on diplomatic relations is manifested in three main ways: increasing the number of voices and interests involved in international decision-making, accelerating and releasing the dissemination of information about any issue or event and enabling traditional diplomatic services to be delivered more quickly and cost-effectively.
In contrast, data diplomacy focuses on data (including its creation, use, access, and interpretation) rather than on communication mechanisms or media. However, there is an interplay between data and digital communication mechanisms in the field of diplomacy; for example, data-driven artificial intelligence plays a key role in diplomacy.
Practical applications
Data diplomacy and science diplomacy 2.0
Scholars Simone Turchetti and Roberto Lalli developed the concept of science diplomacy 2.0 in their 2020 article "Envisioning a Science Diplomacy 2.0: On Data, Global Challenges and Multi-Layered Networks". The term "science diplomacy" broadly identifies the interaction between the scientific and foreign policy communities to promote international scientific exchange and provide scientific advice on issues that are relevant to more than one country, such as climate change, food security, poverty, energy consumption, nuclear energy use and epidemic diseases. However, the concept of science diplomacy in practice does not provide ready-made solutions to global problems and has been considered as not having the benign characteristics that are usually ascribed to it.
Therefore, science diplomacy faces a necessary transformation. The new science diplomacy program should be in line with the shift in focus of academic analysis from production to circulation, addressing data access and integration at the global and regional levels. With this aim in mind, Science Diplomacy 2.0 encompasses three levels of definition: to facilitate integrated data analysis based on interdisciplinary scientific work, to facilitate the infrastructure needed to complete these studies and to facilitate the implementation of programs prioritized in this interdisciplinary and integrated research. This encompasses a new analytic approach and infrastructure. The new analytic approach integrates data and metadata from different disciplines to improve understanding of which research should be prioritized in science diplomacy initiatives. The new infrastructure, the Innovation Observatory, will bring together data (and metadata) from different disciplines and integrate them through a linked data, multi-layered approach. "Science Diplomacy 2.0" is thus a new model of science diplomacy driven by data. Data and metadata play a key role in the reform of the science diplomacy model.
Data diplomacy and bioinformatics diplomacy
Data diplomacy has also been widely applied to diplomacy in the areas of bioinformatics and public health. Bioinformatics diplomacy works to address how to rapidly share critical biological information internationally to better manage deadly infectious disease outbreaks. The role of bioinformatics diplomacy was well represented during the COVID-19 pandemic. In the past, scientists investigating outbreaks relied heavily on bringing back samples of crisis pathogens from outbreak countries for physical examination. This process was often time-consuming and economically costly. In contrast, advances in information technology and bioinformatics sharing allow scientists to increasingly use digital sequence data of pathogens for rapid computation and analysis.
However, there are limitations to the data open to bioinformatics diplomacy, and social scientists who are not part of the community of sequencing professionals have little knowledge of the social processes involved in acquiring, exchanging and disseminating pathogen sequence data internationally.
Data diplomacy, public diplomacy and social media
Public diplomacy refers to the process and practice of expanding and strengthening relationships between people and governments and citizens in other parts of the world and engaging the global public in the service of national diplomatic interests. Public diplomacy is useful in dealing with international conflicts, community disputes, and interpersonal interactions.
Social media has become an important tool for public diplomacy and diplomats. In 2014, the U.S. Public Diplomacy Advisory Council released a report entitled "Data-Driven Public Diplomacy," highlighting the potential for incorporating data analytics into the public diplomacy workflow, planning and program design for public diplomacy programs, and analyzing the appeal of their programs to foreign audiences. In 2018, Hamilton Bean and Edward Comor, in their article "Data-Driven public diplomacy: A critical and reflexive assessment," asserted that data analytics techniques do not create useful innovations for public diplomacy but rather support governments' strategic interests and ambitions, leading officials to move away from the human capacity of public diplomacy.
Database diplomacy
The term "Database Diplomacy" was first used as the title of a 2000 article by Amy Helen Johnson about Metagon Technologies LLC's DQpowersuite, a system that linked disparate databases, provided users with a single view across them, and allowed end users to build reports and queries.
Today, the sharing of diplomatic databases has reached new heights. One representative database is the Freedom of Information Archive (FOIArchive), which is the first dataset to combine many machine-readable documents of intrastate communications for analysis. The database collects more than three million domestic diplomatic records on the United States, Great Britain and Brazil between 1620 and 2013, including data on private (i.e., classified) and public information and actions, diplomacy at multiple levels of analysis and communications on various topics. New data is added each time a newly declassified document is released. Publicly available information includes previously classified (or publicly unavailable) internal government documents and the original (often full-text) text of these documents. Scholars can use the online platform to search, view online, and download datasets and texts customized to their research needs. In addition, scholars can access many different types of metadata, including entities and topics identified using natural language processing and machine learning methods.
Cultural diplomacy
Data evidence helped Jordan successfully enter into a bilateral cultural property protection agreement with the United States, allowing Jordan to use diplomatic measures to curb the illicit flow of looted and stolen materials. On January 31, 2019, the U.S. Department of State published an official notice in the Federal Register requesting the Government of the Hashemite Kingdom of Jordan to propose a bilateral agreement to the Government of the United States of America to prevent the importation of illegally exported Jordanian cultural materials. Per the U.S. Cultural Property Convention Implementation Act of 1983 (CCPIA), any State Party to the 1970 UNESCO Convention on the Means of Prohibiting and Preventing the Illicit Import, Export and Transfer of Ownership of Cultural Property (1970 UNESCO Convention) that can demonstrate that its cultural landscapes and artifacts are at risk due to U.S. demand may request a bilateral agreement from the United States.
Jordan provided strong persuasive evidence to reach a bilateral agreement with the United States. The data came from the Jordan Antiquities Database and Information System (JADIS), MEGA-Jordan, Aerial Satellite Imagery and Landscapes of the Dead (LOD). The Jordan Antiquities Database and Information System (JADIS) is the first computerized database of antiquities in an Arab country, created with the help of the American Center for Oriental Research (ACOR) and the U.S. Agency for International Development (USAID). The database, whose data are available to all, is primarily used to monitor and respond quickly to threats from exploitation, looting and other negative impacts on monitored sites and enables the production of "risk maps" containing information on archaeological sites. In 2010, the new Jordan Middle East Antiquities Geodatabase (MEGA-Jordan), a collaborative database between DoA, the Getty Conservation Institute and the World Monuments Fund replaced the Jordan Antiquities Database and Information System (JADIS). Objectives are to standardize and centralize data throughout the Kingdom and report on threats, disturbances and legal violations involving illegal excavation of sites and removal of artifacts.
Data from the Landscapes of the Dead (LOD) combines spatial data from pedestrian surveys, drone surveys and oral histories with multiple people who encountered the site and its artifacts to provide a more nuanced picture of the site's continued destruction.
Data from databases, satellite imagery and Landscapes of the Dead (LOD) data were persuasive factors in determining that Jordan's cultural heritage was at risk and that U.S. import restrictions would help stop the severe looting situation. These data provide evidence of landscape change over time in a series of resolutions, all of which amply confirm the need for bilateral agreements. The data are having an impact on the shaping of foreign policy.
Risks and limitations
Data sensitivity and data security
One of the risks and impediments to data diplomacy is the issue of sensitive data and data security. Some data, such as pathogen sequence data and genetic sequence data are sensitive because they are located at the core of multiple global value production chains. The disclosure and sharing of such data can be subject to considerable political pressure. If such data is leaked, the consequences can be very serious.
Privacy, human rights protection and ethical standards
Since the public is a major contributor to data, it is critical to develop standards for ethics and human rights protection. Data collection needs to be ethical in its purpose and designed with the input of key stakeholders, including the public. However, confidentiality and ethical standards are complex issues, and absolute confidentiality is even more impossible to achieve. This can create tensions between groups. In some cases, however, individuals cannot make fair, free or informed choices about how their data is used, which leads to the marginalization of some groups. For example, if one does not agree to the data terms proposed by Google one cannot use its services.
Fairness and intellectual property
Data diplomacy can lead to issues of scientific fairness, intellectual property rights and future benefits. Scientists may not want to share their data quickly because this would give competitors easier access to the information behind the data, thus affecting their career status and development. For scientists in some LMICs, analyses of samples they have shared in the past have subsequently been submitted to international conferences and scientific meetings without proper prior notice, and without including those who shared the samples in authorship arrangements. Therefore, the equal treatment of scientists concerning intellectual property rights and benefits is a critical issue facing data diplomacy.
Data accuracy
Inaccurate data, data manipulation, and the associated loss of diplomatic capital pose serious challenges to data diplomacy. Resisting data manipulation and establishing a good reputation for data use is an important step to building healthy data diplomacy relationships. Currently, many countries around the world are developing strategies to reduce the probability of data inaccuracy and the harm it causes. Common strategies include increasing data transparency, pre-screening before sharing, and checking the credibility of data sources. However, opening all data is difficult to achieve. Given that states and other entities must classify and restrict intellectual property rights to certain data and must protect data from well-resourced users, some data cannot be made fully open and transparent.
References
Data
Diplomacy | Data diplomacy | [
"Technology"
] | 2,900 | [
"Information technology",
"Data"
] |
72,245,879 | https://en.wikipedia.org/wiki/Data%20ecosystem | A data ecosystem is the complex environment of co-dependent networks and actors that contribute to data collection, transfer and use. It can span multiple sectors – such as healthcare or finance, to inform one another's practices. A data ecosystem often consists of numerous data assemblages. Research into data ecosystems has developed in response to the rapid proliferation and availability of information through the web, which has contributed to the commodification of data.
Data
Data refers to digitized information that is compressed for efficient transmission. Data consists of binary values, expressed as 1 or 0, which allows complex thoughts, images, videos and more to be abstracted. The level of data production and exchange has exploded in recent decades, with government and public agencies freely publishing vast swaths of data, particularly in environmental, cultural, scientific and statistical fields. It has also led to a highly profitable industry for companies that collect, categorize and disseminate data as a tradable resource and operate within the newly defined data ecosystems.
Data ecosystems
The nature of an ecosystem denotes a symbiotic relationship between elements. Thus, when describing a data environment as an ecosystem, it describes a co-constitutive relationship. Their primary purpose is to create, manage and sustain the sharing of data across platforms and disciplines. Key to this initiative are data intermediaries, which facilitate access to the data, and are categorized into seven types, including data trusts, data exchanges and data platforms. A data ecosystem also comprises data providers and consumers, who as their titles denote, provide and consume the data through the intermediaries.
A common example of data ecosystem exists within the realm of web browser. A third-party tracking app on a website (referred to as cookies) acts as an intermediary by collecting and organizing data. The web browser becomes the data provider, as it shares a user's information as they navigate through different websites. The websites themselves become consumers as they utilize the tracking information to tailor content based on user behaviour.
As mentioned, data ecosystems can span multiple sectors, for example, a client's medical data is shared with an insurance company to calculate a premium. The point of an ecosystem is that all actors within the shared environment are contributing to a common resource or knowledge-base.
Mapping
Data ecosystems possess three major characteristics: network, platform, and co-evolution. Network loosely refers to the groups of data and technology developers, providers, and resellers. The platform, then, is the service, tool or platform that is collaboratively used by the network of actors. The platform provides the interface for the actors to produce their shared product or service. The final characteristic refers to how the different actors and platform enable one another to evolve or improve upon itself. The metaphorical use of the term "ecosystem" intrinsically demands that all parties involved are mutually benefited by their engagement. That would be the betterment or evolution of their own functioning, which leads to positive outcomes for the larger ecosystem. Again, to use the example of a web browser – the third-party tracking app collects data to help websites evolve their content strategies, which then provide more accurate user data to third-party trackers in an endless feedback loop.
Data assemblages
Within the broad landscape of a data ecosystem are numerous data assemblages. An assemblage is described as interconnected socio-technical systems that work in tandem with one another for a common purpose. These systems encompass the technological, political, financial and best practices that sustain the collection, transfer, and dispersion of data. The below table demonstrates the common elements of a data assemblage which facilitate and govern datafication.
A data ecosystem contains numerous data assemblages, as each actor within the system have their own sets of tangible and non-tangible elements for their operation. Web browsers as data providers have their own assemblages of hardware, software, servers, finances, infrastructure, practices, etc. Each website that consumes the data and the broader companies that they represent similarly present an assemblage of systems. And the intermediary tracking sites which collect and sell the data operate within their own assemblage. It is possible that different assemblages may share elements within the broader ecosystem, or have individual elements, such as opposing hardware or platforms, that come into conflict. For example, a web browser may include ad blockers which conflict with the third-party trackers that attempt to scrape a user's data.
Big data
The rise of data ecosystems is part and parcel with the development of big data. Big data is an emerging trend in science and technology that tracks and defines almost all human engagement. It is defined by the following five properties:
Volume
Big data consists of massive amounts of information, which could be terabytes or petabytes.
Velocity
Big data is produced rapidly, and exchanged in real-time.
Variety
Big Data are extremely diverse, constituting numerous fields of study, and with extensive practical applications.
Value
Big data has inherent value due to the potential application of the data and the political economy in which it operates.
Veracity
Big data must be considered accurate and of high-quality. This can be difficult, as information may be incomplete or wrong, but there should be a level of trust that the collection of the data was done with the intention of being truthful.
Concerns
The main concern or critique of data ecosystems relates to privacy. Who has access to the data, either implicitly or explicitly? How is that data secured? How is it being used, and perhaps monetized? The non-profit organization Cloud Secure Alliance (CSA) categorizes the security challenges of Big Data Ecosystems into four groups; infrastructure security, data privacy, data management, and integrity and relative security.
In the case of a web browser, website, and third-party tracking operation, there is a clear financial incentive for why data are collected and how they are used. But there is also a level of surveillance in this scenario that perhaps goes unnoticed. Rob Kitchin terms this 'dataveillance', a result of the datafication of everyday life which allows highly accurate and continuous tracking of our locations and activities. Who else, besides those trackers and websites, has access to the data being collected, and is it used for more nefarious purposes? In US states that have banned access to abortion, there is concern that these data ecosystems could be harnessed to penalize citizens who seek services out of state.
References
Data
Data collection
Information | Data ecosystem | ["Technology"] | 1,335 | ["Data collection", "Information technology", "Data"] |
72,246,040 | https://en.wikipedia.org/wiki/Fast%20probability%20integration | Fast probability integration (FPI) is a method of determining the probability of a class of events, particularly a failure event, that is faster to execute than Monte Carlo analysis. It is used where large numbers of time-variant variables contribute to the reliability of a system. The method was proposed by Wen and Chen in 1987.
For a simple failure analysis with one stress variable, there will be a time-variant failure barrier, , beyond which the system will fail. This simple case may have a deterministic solution, but for more complex systems, such as crack analysis of a large structure, there can be a very large number of variables, for instance, because of the large number of ways a crack can propagate. In many cases, it is infeasible to produce a deterministic solution even when the individual variables are all individually deterministic. In this case, one defines a probabilistic failure barrier surface, , over the vector space of the stress variables.
If failure barrier crossings are assumed to follow a Poisson counting process, an expression for the maximum probable failure can be developed for each stress variable. The overall probability of failure is obtained by averaging (that is, integrating) over the entire variable vector space. FPI is a method of approximating this integral. The input to FPI is a time-variant expression, but the output is time-invariant, allowing it to be solved by the first-order reliability method (FORM) or second-order reliability method (SORM).
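The contrast between a sampling approach and a FORM evaluation can be illustrated with a deliberately simple, hypothetical limit state: a single normally distributed stress variable against a fixed barrier. The function names and numbers below are illustrative assumptions, not taken from the literature cited here.

```python
# Illustrative sketch (hypothetical numbers, not from the literature):
# failure probability of a single normally distributed stress variable
# against a fixed barrier, via FORM versus crude Monte Carlo sampling.
import math
import random

def form_failure_probability(barrier, mu, sigma):
    """First-order reliability method (FORM): for a linear limit state
    with one normal variable, the reliability index is
    beta = (barrier - mu) / sigma and P_f = Phi(-beta) exactly."""
    beta = (barrier - mu) / sigma
    # Standard normal CDF evaluated at -beta, via the complementary error function
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def monte_carlo_failure_probability(barrier, mu, sigma, n=200_000, seed=1):
    """Brute-force sampling estimate of P(stress > barrier)."""
    rng = random.Random(seed)
    return sum(rng.gauss(mu, sigma) > barrier for _ in range(n)) / n
```

For a barrier of 5 and stress distributed N(3, 1), both approaches give a failure probability near 0.023, but FORM needs no sampling; avoiding the sampling cost once the time-variant problem has been reduced is the speed advantage FPI exploits.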
An FPI package is included as part of the core modules of the NASA-designed NESSUS software. It was initially used to analyse risks and uncertainties concerning the Space Shuttle main engine, but is now used much more widely in a variety of industries.
References
Bibliography
Beck, André T.; Melchers, Robert E., "Fatigue and fracture reliability analysis under random loading", pp. 2201–2204 in, Bathe, K.J (ed), Proceedings of the Second MIT Conference on Computational Fluid and Solid Mechanics June 17–20, 2003, Elsevier, 2003 .
Murthy, Pappu L.N.; Mital, Subodh K.; Shah, Ashwin R., "Design tool developed for probabilistic modeling of ceramic matrix composite strength", pp. 127–128 in, Research & Technology 1998, NASA Lewis Research Center, 1999.
Riha, David S.; Thacker, Ben H.; Huyse, Luc J.; Enright, Mike P.; Waldhart, Chris J.; Francis, W. Loren; Nicolella, Dniel P.; Hudak, Stephen J.; Liang, Wuwei; Fitch, Simeon H.K., "Applications of reliability assessment for aerospace, automotive, bioengineering, and weapons systems", ch. 1 in, Nikolaidis, Efstratios; Ghiocel, Dan M.; Singhal, Suren, Engineering Design Reliability Applications: For the Aerospace, Automotive and Ship Industries, CRC Press, 2007 .
Shah, A.R.; Shiao, M.C.; Nagpal, V.K.; Chamis, C.C., Probabilistic Evaluation of Uncertainties and Risks in Aerospace Components, NASA Technical Memorandum 105603, March 1992.
Wen, Y.K.; Chen, H.C., "On fast integration for time variant structural reliability", Probabilistic Engineering Mechanics, vol. 2, iss. 3, pp. 156–162, September 1987.
Probabilistic models
Reliability engineering | Fast probability integration | ["Engineering"] | 745 | ["Systems engineering", "Reliability engineering"] |
72,246,075 | https://en.wikipedia.org/wiki/2MASS%2019281982-2640123 | 2MASS 19281982-2640123 is a Sun-like star located in the area of the Sagittarius constellation where the Wow! Signal is most widely believed to have originated. The star was identified in a 2022 paper as the most similar to the Sun of the three solar analogs found inside that sky region. The star is 1,800 light years away, approximately 132 light years from the distance at which Claudio Maccone's solution to the Drake equation places the most likely location of the closest communicative civilization to Earth.
The star has a right ascension of 19h 28m 19.8s, a declination of -26° 40' 12.59", an estimated temperature of 5,783 K, a radius of 0.99 solar radii, and a luminosity 1.0007 times that of the Sun. The team used the Gaia Archive to identify another dozen candidate Sun-like stars, but estimates of their luminosity were unavailable.
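As a rough consistency check (not from the paper), the quoted radius and temperature can be run through the Stefan–Boltzmann scaling; assuming a solar effective temperature of 5,772 K, the result lands within about 1% of the quoted solar-like luminosity.

```python
# Rough consistency check of the quoted figures. Assumptions: simple
# Stefan–Boltzmann scaling and a solar effective temperature of 5,772 K.
T_SUN_K = 5772.0

def relative_luminosity(radius_solar, teff_k):
    """L/Lsun = (R/Rsun)^2 * (T/Tsun)^4."""
    return radius_solar ** 2 * (teff_k / T_SUN_K) ** 4
```

With R = 0.99 solar radii and T = 5,783 K, `relative_luminosity(0.99, 5783)` comes out near 0.99 solar luminosities, broadly consistent with the quoted value of 1.0007.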
Breakthrough Listen search
As a response to the discovery, on May 21, 2022, Breakthrough Listen conducted the first targeted search for the source of the Wow! Signal. It was also the first collaboration between the Green Bank Telescope and the Allen Telescope Array (ATA) of the SETI Institute.
Green Bank performed two 30-minute observations, the ATA performed six 5-minute observations with its new beamformer backend, and both observatories observed simultaneously for a total of 9 minutes and 40 seconds. The team used the turboSETI pipeline from 1–2 GHz to search for an artificial narrowband signal (2.79 Hz/1.91 Hz resolution) with a drift rate of ±4 Hz s−1. No technosignature candidates were found.
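The narrowband drift search can be pictured as a shift-and-sum over a spectrogram: for each trial drift rate, successive spectra are shifted so that a signal drifting at that rate adds coherently. The toy function below sketches the brute-force idea; it is a simplified illustration, not the turboSETI implementation.

```python
# Toy sketch of a brute-force de-Doppler search over a spectrogram
# (time x frequency grid of powers). For each trial drift rate, shift
# each successive spectrum and sum, so a narrowband signal drifting at
# that rate adds coherently. Not the actual turboSETI algorithm.
def dedoppler_search(spectrogram, drift_bins_per_step):
    """Return (best_drift, best_start_bin, best_summed_power)."""
    n_freq = len(spectrogram[0])
    best = (None, None, float("-inf"))
    for drift in drift_bins_per_step:
        for start in range(n_freq):
            power = 0.0
            for t, spectrum in enumerate(spectrogram):
                idx = start + round(drift * t)  # bin hit at time step t
                if 0 <= idx < n_freq:
                    power += spectrum[idx]
            if power > best[2]:
                best = (drift, start, power)
    return best
```

Injecting a signal that moves one bin per time step makes the search return that drift rate and starting bin, since only the matching trial accumulates the full signal power.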
References
Sagittarius (constellation) | 2MASS 19281982-2640123 | ["Astronomy"] | 383 | ["Sagittarius (constellation)", "Constellations"] |
72,246,603 | https://en.wikipedia.org/wiki/Data%20discourse | A data discourse is a discourse that works within the context of data and how data can fulfill particular purposes, agendas, and narratives. In relation to open data, the discourses about sharing, reuse, open access, open government, transparency, accountability, social entrepreneurship, and economies of scale are organized to form a discursive regime that promotes investment in open data. In relation to big data, the discourses of insight, wisdom, productivity, competitiveness, efficiency, effectiveness, utility, and value are deployed to promote its legitimization and usage in businesses and repositories.
Examples
Patrick Ferucci evaluates metajournalistic discourse in relation to big data by analyzing Metra journalism from 2000 to 2017.
At the Online Marketing Summit in San Diego, Cheemin Bo-Linn, president and interim CMO at Peritus Partners, discussed the growth of big data, noting that Facebook produces 10 terabytes of data per day. According to Bo-Linn, marketers can use such big data to examine customer practices and behavior, plan actionable campaigns, target consumers, and shape consumer habits.
Big data has also been used to analyze and understand environmental discourses in online hotel reviews.
Narratives
Data imaginaries and discourses are brought together to compose what Foucault termed a 'discursive regime': a coordination of overlapping arguments that promotes developments and legitimizes the actions taken in their name. The goal of discourses within a regime is to make messages and narratives appear logical, and to convince people and institutions to act according to the logics and norms of the regime. Data imaginaries and affordances are attained through the agglomeration of several data discourses.
The discourses and imaginaries are linked together to form data narratives, which make stories about data and their interconnected assemblages persuasive. Data do not represent themselves; for data to be represented and narrated, they must be placed in specific settings that give them shape and meaning. The elements of data narratives are data trajectories, data temporalities, and the cultural grounding of data narratives.
References
discourse | Data discourse | ["Technology"] | 445 | ["Information technology", "Data"] |
72,247,701 | https://en.wikipedia.org/wiki/Color%20reproduction | Color reproduction is an aspect of color science concerned with producing light spectra that evoke a desired color, either through additive (light emitting) or subtractive (surface color) models. It converts physical correlates of color perception (CIE 1931 XYZ color space tristimulus values and related quantities) into light spectra that can be experienced by observers. In this way, it is the opposite of colorimetry.
It is concerned with the faithful reproduction of a color from one medium in another, so it is a central concept in color management and relies heavily on color calibration. For example, food packaging must faithfully reproduce the colors of the foods inside in order to appeal to a customer. This involves proper color calibration of at least four devices:
Lighting, which must have a high color rendering index and not give a color cast to the object.
Camera, which measures the reflected spectrum of the object and converts to a trichromatic color space (e.g. RGB).
Screen, which reproduces color so a designer can proof the captured image and make color corrections as necessary.
Printer, which reproduces the final color on paper.
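As a minimal sketch of the kind of device-independent conversion the camera and screen steps rely on, the standard sRGB transfer function and D65 matrix map an 8-bit RGB triple to CIE 1931 XYZ coordinates:

```python
# Sketch of the standard sRGB -> CIE 1931 XYZ conversion (IEC 61966-2-1
# transfer function and D65 matrix), one step a calibrated pipeline uses.
def srgb_to_linear(c8):
    """Undo the sRGB gamma encoding of one 8-bit channel."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r8, g8, b8):
    """Map an 8-bit sRGB triple to CIE XYZ tristimulus values."""
    r, g, b = (srgb_to_linear(v) for v in (r8, g8, b8))
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z
```

Full white (255, 255, 255) maps to approximately (0.9505, 1.0000, 1.0890), the D65 white point with luminance Y normalized to 1, which is one quick sanity check used in calibration.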
References
Further reading
Image processing
Visual perception
Psychophysics
Color | Color reproduction | ["Physics"] | 258 | ["Psychophysics", "Applied and interdisciplinary physics"] |
72,248,435 | https://en.wikipedia.org/wiki/8%20Leonis%20Minoris | 8 Leonis Minoris (8 LMi) is a solitary, red-hued star located in the northern constellation Leo Minor. It has an apparent magnitude of 5.37, making it faintly visible to the naked eye. Based on parallax measurements from the Gaia satellite, the object is estimated to be 492 light years distant. It is receding with a heliocentric radial velocity of . At its current distance, 8 LMi is diminished by 0.12 magnitudes due to interstellar dust.
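The figures quoted above (apparent magnitude 5.37, distance about 492 light years, 0.12 magnitudes of extinction) imply an absolute visual magnitude via the distance modulus; the sketch below works it through as an illustration, not a catalogued value.

```python
# Sketch: absolute visual magnitude implied by the quoted figures
# (m_V = 5.37, d ~ 492 ly, A_V = 0.12 mag), via the distance modulus
# M = m - 5 * log10(d_pc / 10) - A.
import math

LY_PER_PC = 3.2616  # light years per parsec

def absolute_magnitude(m_apparent, distance_ly, extinction_mag=0.0):
    d_pc = distance_ly / LY_PER_PC
    return m_apparent - 5.0 * math.log10(d_pc / 10.0) - extinction_mag
```

For 8 LMi this works out to roughly M_V ≈ −0.6, as expected for a giant far more luminous than the Sun.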
This is an asymptotic giant branch star with a stellar classification of M1 IIIab. It has 1.59 times the mass of the Sun but has expanded to 48.5 times the Sun's girth. It radiates 417 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . 8 LMi has an iron abundance only half the Sun's, making it metal deficient.
8 LMi was first observed to be variable in 1930 by Joel Stebbins. However, Eggen (1967) instead listed it as an ordinary M-type giant and used the object for comparison. In 1978–79, 8 LMi was again listed as a variable star, but the listing provided no further detail. As of 2017, the star has not been confirmed to be variable.
References
M-type giants
Suspected variables
Asymptotic-giant-branch stars
Leo Minor
Leonis Minoris, 8
BD+35 02015
082198
046735
3769 | 8 Leonis Minoris | ["Astronomy"] | 316 | ["Leo Minor", "Constellations"] |
72,248,675 | https://en.wikipedia.org/wiki/Trauma-informed%20care | Trauma-informed care (TIC), or trauma- and violence-informed care (TVIC), is a framework for relating to and helping people who have experienced negative consequences after exposure to dangerous experiences. There is no single TIC framework or model, and some go by slightly different names, including trauma- and violence-informed care (TVIC). They incorporate a number of perspectives, principles, and skills. TIC frameworks can be applied in many contexts, including medicine, mental health, law, education, architecture, addiction, gender, culture, and interpersonal relationships, and they can be applied by individuals and organizations.
TIC principles emphasize the need to understand the scope of what constitutes danger and how resulting trauma impacts human health, thoughts, feelings, behaviors, communications, and relationships. People who have been exposed to life-altering danger need safety, choice, and support in healing relationships. Client-centered and capacity-building approaches are emphasized. Most frameworks incorporate a biopsychosocial perspective, attending to the integrated effects on biology (body and brain), psychology (mind), and sociology (relationship).
A basic view of trauma-informed care (TIC) involves developing a holistic appreciation of the potential effects of trauma with the goal of expanding the care-provider's empathy while creating a feeling of safety. Under this view, it is often stated that a trauma-informed approach asks not "What is wrong with you?" but rather "What happened to you?" A more expansive view includes developing an understanding of danger-response. In this view, danger is understood to be broad, include relationship dangers, and can be subjectively experienced. Danger exposure is understood to impact someone's past and present adaptive responses and information processing patterns.
History
Harris and Fallot first articulated the concept of trauma-informed care (TIC) in 2001. They described trauma-informed as a vital paradigm shift, from focusing on the apparently immediate presenting problem to first considering past experience of trauma and violence. They focused on three primary issues: instituting universal trauma screening and assessment, not causing re-traumatization through the delivery methods of professional services, and promoting an understanding of the biopsychosocial nature and effects of trauma.
Researchers and government agencies immediately began expanding on the concept. In the 2000s, the Substance Abuse and Mental Health Services Administration (SAMHSA) began to measure the effectiveness of TIC programs. The U.S. Congress created the National Child Traumatic Stress Network, which SAMHSA administers. SAMHSA commissioned a longitudinal study, the Women, Co-Occurring Disorders and Violence Study (WCDVS), to produce empirical knowledge on the development and effectiveness of a comprehensive approach to helping women with mental health, substance abuse, and trauma histories.
Several significant events happened in 2005. SAMHSA formed the National Center for Trauma-Informed Care. Elliott, Fallot and colleagues identified a consensus of 10 TIC concepts for working with individuals. They more finely parsed Harris and Fallot's earlier ideas, and included relational collaboration, strengths and resilience, cultural competence, and consumer input. They offered application examples, such as providing parenting support to create healing for parents and their children. Huntington and colleagues reviewed the WCDVS data, and working with a steering committee, they reached a consensus on a framework of four core principles for organizations to implement.
Organizations and services must be integrated to meet the needs of the relevant population.
Settings and services for this population must be trauma-informed.
Consumer/survivor/recovering persons must be integrated into the design and provision of services.
A comprehensive array of services must be made available.
In 2011 SAMHSA issued a policy statement that all mental health service systems should identify and apply TIC principles. The TIC concept expanded into specific disciplines such as education, child welfare agencies, homeless shelters, and domestic violence services. SAMHSA issued a more comprehensive statement about the TIC concept in 2014, described below.
The term TVIC was first used by Browne and colleagues in 2014, in the context of developing strategies for primary health care organizations. In 2016, the Canadian Department of Justice published "Trauma- (and violence-) informed approaches to supporting victims of violence: Policy and practice considerations". Wathen and Varcoe expanded and further detailed the TVIC concept in 2023.
In many ways TIC/TVIC concepts and models overlap or incorporate other models, and there is some debate about whether there is a difference. The confusion may be due to whether TIC is seen as a model instead of a framework or approach which brings in knowledge and techniques from other models. A client/person-centered approach is fundamental to Rogerian and humanistic models, and foundational in ethical codes for lawyers and medical professionals. Attachment-informed healing professionals conceptualize their essential role as being a transitional attachment figure (TAF), where they focus on providing protection from danger, safety, and appropriate comfort in the professional relationship. TIC proponents argue the concept promotes a deeper awareness of the many forms of danger and trauma, and the scope and lifetime effects exposure to danger can cause. The prolific use of TIC may be evidence it is a practical and useful framework, concept, model, or set of strategies for helping-professionals.
Types of trauma
Trauma can result from a wide range of experiences which expose humans to one or more physical, emotional, and/or relational dangers.
Physical: Physical injury, brain injury, assault, crime, natural disaster, war, pain, and situational harm like vehicle or industrial accidents.
Relational—adult: Interpersonal trauma, domestic violence, intimate partner violence, controlling behavior and coercive control, betrayal, gaslighting, DARVO, traumatic bonding, and intense emotional experiences such as shame and humiliation.
Relational—child: For children, it can also involve childhood trauma, adverse childhood experiences, separation distress, and negative attachment experience (controlling, dismissive, inconsistent, harsh, or harmful caregiving environments).
Social/structural: Social and political, structural violence, racism, historical, collective, national, poverty, religious, educational, the various forms of slavery, and cultural environments.
PTSD: Non-complex or complex post-traumatic stress disorder, and continuous traumatic stress.
Psychological and pharmacological: Psychological harm, mental disorders, drug addiction, isolation, and solitary confinement.
Secondary: Vicarious or secondary exposure to other's trauma.
Van der Kolk describes trauma as an experience and response to exposure to one or more overwhelming dangers, which causes harm to neurobiological functioning, and leaves a person with impaired ability to identify and manage dangers. This leaves them "constantly fighting unseen dangers".
Crittenden describes how relational dangers in childhood caregiving environments can cause chronic trauma: "Some parents are dangerous to their children. Stated more accurately, all parents harm their children more or less, just as all are more or less protective and comforting." Parenting, or caregiver, styles which are dismissive, inconsistent, harsh, abusive or expose children to other physical or relational dangers can cause a trauma which impairs neurodevelopment. Children adapt to achieve maximum caregiver protection, but the adaptation may be maladaptive if used in other relationships. The Dynamic-Maturational Model of Attachment and Adaptation (DMM) describes how children's repeated exposure to these dangers can result in lifespan impairments to information processing.
Adverse Childhood Experience (ACE) scores are a common measure for assessing trauma experienced by children and adults. A higher ACE score is associated with an increased chance of developing chronic diseases or mental health conditions, as well as an increased propensity for committing violent acts. Similarly, social determinants of health, such as economic insecurity, can also indicate increased risk of injury or development of trauma, contributing to a higher ACE score for individuals at high risk of re-injury or re-traumatization.
Because there are so many forms of danger to humans, trauma is extremely common, although the effects of negative and ongoing experience are less common. The effects are dimensional and can vary in scope and degree.
TIC frameworks
There are many TIC-related concepts, principles, approaches, frameworks, or models, some general and some more context-specific. Trauma- and violence-informed care (TVIC) is also described as trauma- (and violence-) informed care (T(V)IC). Other terms include trauma-informed, trauma-informed approach, trauma-informed perspective, trauma-focused, trauma-based, trauma-sensitive, trauma-informed care/practice (TIC/P), and trauma-informed practice (TIP).
The U.S. government's Substance Abuse and Mental Health Services Administration (SAMHSA) is an agency which has given significant attention to trauma-informed care. SAMHSA sought to develop a broad definition of the concept. It starts with "the three E's of trauma": Event(s), Experience of events, and Effect. SAMHSA offers four assumptions about a TIC approach with the four R's: Realizing the widespread impact of trauma, Recognizing the signs and symptoms, Responding with a trauma-informed approach, and Resisting re-traumatization. SAMHSA gives six key principles: safety; trustworthiness and transparency; peer support; collaboration and mutuality; empowerment, voice, and choice; and cultural, historical, and gender issues. They also list 10 implementation domains: governance and leadership; policy; physical environment; engagement and involvement; cross-sector collaboration; screening, assessment and treatment services; training and workforce development; progress monitoring and quality assurance; financing; and evaluation.
Researchers Casassa and colleagues interviewed sex trafficking survivors to search for how trauma bonds can be broken and healing can occur. The survivors identified three essential elements.
Education, or a framework, to understand trauma experience and trauma bonding.
Building a safe and trusted relationship, where brutal honesty can happen.
Cultivating self-love.
Researchers Wathen and colleagues describe four integrated principles evolved by key authors in this field.
Understand structural and interpersonal experiences of trauma and violence and their impacts on peoples' lives and behaviors.
Create emotionally, culturally, and physically safe spaces for service users and providers.
Foster opportunities for choice, collaboration, and connections.
Provide strengths-based and capacity building ways to support service users.
By comparison, Landini, a child and adolescent psychiatrist, describes five primary principles from DMM attachment theory for helping people better manage danger response.
Define problems in terms of response to danger.
The professional acts as a transitional attachment figure.
Explore the family's past and present responses to danger.
Work progressively and recursively with the family.
Practice reflective integration with the client as a form of teaching reflective integration.
Bowen and Murshid identified a framework of seven core TIC principles for social policy development.
Safety
Trustworthiness
Transparency
Collaboration
Empowerment
Choice
Intersectionality
Researchers Mitchell and colleagues searched for a consensus of TIC principles among early intervention specialists.
A trauma-informed early intervention psychosis service will work to protect the service user from ongoing abuse.
Staff within a trauma-informed early intervention psychosis service are trained to understand the link between trauma and psychosis and will be knowledgeable about trauma and its effects.
A trauma-informed early intervention psychosis service will:
Seek agreement and consent from the service user before beginning any intervention;
Build a trusting relationship with the service user;
Provide appropriate training on trauma-informed care for all staff;
Support staff in delivering safe assessment and treatments for the effects of trauma;
Adopt a person-centred approach;
Maintain a safe environment for service users;
Have a calm, compassionate and supportive ethos;
Be trustworthy;
Acknowledge the relevance of psychological therapies;
Be sensitive when discussing trauma;
Be empathetic and non-judgmental;
Provide supervision to staff;
Provide regular supervision to practitioners who are working directly with trauma.
General applications and techniques of TIC
SAMHSA's National Center for Trauma-Informed Care provides resources for developing a trauma-informed approach, including: (1) interventions; (2) national referral resources; and (3) information on how to shift from a paradigm that asks, "What's wrong with you?" to one that asks, "What has happened to you?"
Understand
Gaining knowledge about and understanding the effects of trauma may be the most complicated component of TIC, because it generally requires going beyond surface level explanations and using multiple explanatory theories and models or complex biopsychosocial models.
Trauma-related behaviors, thoughts, feelings, and current experiences can seem confusing, perplexing, dysfunctional, or dangerous. These are usually adaptations to survive extreme contexts, methods of coping in the current moment, or efforts to communicate pain. Whatever the cause and adaptation, the professional's response can cause more harm, or some measure of emotional co-regulation, lessening of distress, and opportunity for healing.
Safety
The opposite of danger is safety, and most or all TIC models emphasize the provision of safety. In attachment theory the focus is on protection from danger. Van der Kolk describes how the "Brain and body are [neurobiologically] programmed to run for home, where safety can be restored and stress hormones can come to rest."
Cultural safety involves ensuring Indigenous people feel their cultural identity is accepted, free from judgement, and not threatened or compromised when accessing health and wellbeing support.
Safety can be enhanced by anticipating danger. Leary and colleagues describe how interpersonal rejection may be one of the most common precursors to aggression. While boundary-holding is a key aspect of TIC, avoiding a sudden and dramatic devaluation in an interpersonal relationship can reduce the subjective experience of rejection and the risk of violent aggression.
Relationship
The nature and quality of the relationship between two people talking about trauma can have a significant impact on the outcome of the discussion.
Communication
Traumatic experiences, including childhood attachment trauma, can impact memory function and communication style in children and adults.
Katz describes some experiences working with her legal clients and how she adjusts her relational and communication approach to meet their needs. Some clients need information delivered in short pieces with extra time to process, and some need to not have unannounced phone calls and be informed by email prior to verbal discussions. TIC helped her shift from thinking about how to develop a "litigation strategy" for clients, to thinking about developing a "representation strategy", which is a major shift in thinking for many lawyers.
Nurses can use enhanced communication skills, such as mindful presence, enhanced listening skills including the use of mirroring and rephrasing statements, allowing short periods of silence as a strategy to facilitate safety, and minimizing the use of "no" statements to facilitate patients sense of safety.
Resilience and strength building
Building psychological resilience and leveraging a person's existing strengths is a common element in most or all TIC models.
Integration of principles
Safety and relationship are intertwined. Rogers's person-centered theory is founded on this basic principle. Attachment theory describes how a child's survival and well-being depend on a protective relationship with at least one primary caregiver. Badenoch's first principle of trauma-informed counseling is to use the practice of nonjudgmental and agendaless presence to create a foundation of safety and co-regulation. "Once the [client] sees (or feels) that the [professional] understands, then together they can begin the dangerous journey from where the [client] is, across the chasm, to safety."
Talking about trauma
Researchers and clinicians describe how to talk about trauma, particularly when people are reluctant to bring it up. Read and colleagues offer comprehensive details for mental health professionals navigating difficult discussions.
There are numerous barriers for professionals which can inhibit raising discussions about trauma with clients/patients. They include lack of time, being too risk-averse, lack of training and understanding of trauma, fear of discussing emotions and difficult situations, fear of upsetting clients, male or older clients, lack of opportunity to reflect on professional experiences, over-reliance on non trauma-informed care models (such as traditional psychology, and biomedical and biogenetic models of mental distress).
Sweeney and colleagues suggest trauma discussions may include the following techniques and principles.
Ask every client about trauma experience, especially in initial assessment of general psychosocial history.
To establish relational safety and trust, or rapport, approach people sensitively while attuning to their emotions, nonverbal expressions, what they are saying, and what they might be excluding from their narrative. Badenoch suggests a stance of "agendaless presence" helps professionals reduce judgmentalism.
Consider confidentiality needs. Some people may be hesitant to disclose some or all of their experience, and may wish to maintain control over to whom or in what context it is disclosed. Attorney-client privilege, so long as not waived and there is no mandatory reporting requirement, offers the strongest protection for chosen non-disclosure.
It may be difficult for clients to process trauma topics in the middle of crisis situations, although creating a measure of safety and trust within the relationship may help facilitate the discussion.
Clients may not be able or willing to admit traumatic experiences, but may display effects of traumatic experiences.
Prefacing trauma questions with brief normalizing statements, such as "That is a common reaction" might facilitate deeper discussions about trauma.
Asking for details about the experience may be traumatizing for the client. In situations where detail disclosure is necessary, such as law enforcement or litigation, certain approaches may be needed.
Specific questions rather than generalized questions may help if detail is needed, such as "Were you hit/pushed/spat on/held down?" as opposed to "Were you assaulted?" or "Was there domestic violence?"
Prior disclosures can be asked about, and if so, what the person's experience of that was.
Circumstances around intense emotions, such as shame and humiliation, may be difficult to explore.
Discussions may be paced according to the person's needs and abilities.
Giving choices may provide agency, including whether to talk about it or not, and what to do about it.
Working collaboratively, in partnership with the person to explore appropriate solutions may be acceptable to the client.
Professionals might reflect on their own understanding of current research about safety and danger.
The offer of relatively comprehensive support for trauma and safety plan options may ease and promote discussions. Particularly if the discussion about trauma is extensive, a lack of follow up support options may lead to re-traumatization.
Concluding questions about how the client is feeling may be useful.
Follow-up appointments and questions about what the client plans to do next may be useful.
A literature review of women and clinicians views on trauma discussions during pregnancy found that both groups thought discussions were valuable and worthwhile, as long as there was both adequate time to have the conversation and support available for those who need it. Women wanted to know in advance that the issue would be raised and to speak with a clinician they knew and trusted.
Specific applications and techniques of TIC
TIC principles are applied in child welfare services, child abuse, social work, psychology, medicine, oral health services, nursing, and correctional services. They have been applied in interpersonal abuse situations, including domestic violence and elder abuse.
Wathen and Varcoe offer specific suggestions for specific disciplines, such as primary health care clinics, emergency rooms, and for contexts involving interpersonal, structural, or any form of violence. One simple suggestion, in order to enhance the perception of care, safety and agency in the first phone call, is to provide calm phrasing and tone, minimize hold times, and offer brief explanations for delays.
Trauma- and violence-informed practices are applied or addressed in mindfulness programs, yoga, education, obstetrics and gynaecology, cancer treatment, psychological trauma in older adults, military sexual trauma, cybersex trafficking, sex trafficking and trafficking of children, child advocacy, decarceration efforts, and peer support. HDR, Inc. incorporates trauma-informed design principles in prison architecture.
Many therapy models utilize TIC principles, including psychodynamic theory, attachment-informed therapy, trauma focused cognitive behavioral therapy, trauma-informed feminist therapy, Trauma systems therapy which utilizes EMDR, trauma focused CBT, The Art of Yoga Project, the Wellness Recovery Action Plan, music therapy, internet-based treatments for trauma survivors, and in aging therapy.
Culturally-focused applications, often considering indigenous-specific traumas have been applied in minoritized communities, and Maori culture.
Domestic violence
Trauma- and violence-informed (TVIC) principles are widely used in domestic violence and intimate partner violence (IPV) situations. For working with survivors, TVIC has been combined with yoga, motivational interviewing, primary physician care in sexual assault cases, improving access to employment, cases involving HIV and IPV, and cases involving PTSD and IPV.
In 2015 Wilson and colleagues reviewed literature describing trauma-informed practices (TIP) used in the DV context. They found principles organized around six clusters. Promoting safety, giving choice and control, and building healthy relationships are particularly important TVIC concepts in this field.
Promote emotional safety: Consider design options of physical environment. Promote a staff-wide approach to nonjudgmental interactions with clients. Develop organizational policies and communicate them clearly.
Restore choice and control: Give choice and control broadly (it was taken from them previously). Allow clients to tell their stories in their own way and speed. Actively solicit client input on which services they want to utilize.
Facilitate healing connections: Professionals should develop enhanced listening and relationship skills, and use these to build a supporting and trusted relationship with the client. This is sometimes called a person-centered approach. Listening skills can involve active listening, expressing no judgment, listening with the intent to hear rather than with the intent to respond, and agendaless presence. Clients can be helped to develop healthy relationships at every level, including parent-child, and between survivors and their communities.
Support coping: Provide clients neurobiopsycho-education about the nature and effects of DV. Help clients gain an awareness of triggers, perhaps with a triggers checklist. Validate and help strengthen client coping, or self-protective strategies. Develop a company-wide holistic and multidimensional approach improving client well-being, which includes healthy eating and living, and managing stress hormone activation.
Respond to identity and context: Be mindful and responsive to gender, race, sexual orientation, ability, culture, immigration status, language, and social and historical contexts. These considerations can be reflected in informational materials. Gain awareness of assumptions based on identity and context. Organizations should be designed to be able to represent the diversity of their clients.
Build strengths: Professionals can develop skills to identify, affirmatively value, and focus on client strengths. Ask "What helped in the past?" Help develop client leadership skills.
Providing education or a framework for understanding is also an important element of healing.
Hospice care
In hospice situations, Feldman describes a multi-stage TIC process. In stage one practitioners alleviate distress by taking actions on behalf of clients. This is unlike many social work approaches which first work to empower clients to solve their own problems. Many hospice patients have little time or energy to take actions on their own. In stage two, the patient is offered tools, psychoeducation and support to cope with distress and trauma impacts. Stage three involves full-threshold PTSD treatment. The last stage is less common based on limited prognosis.
Ethical guidelines
Ethical guidelines and principles imply and support TIC-specific frameworks.
Rudolph describes how to conceptualize and apply TIC in health care settings using egalitarian, relational, narrative and principlist ethical frameworks. (The clinical case vignette in Rudolph's article is informative.)
Egalitarian-based ethics provide a foundation to think about how socioeconomic factors influence power and privilege to create and perpetuate loss of agency, oppression and trauma. Those factors include gender, race, education, income, and culture. One ethical approach is to provide people, especially those silenced and marginalized, the opportunity to have meaningful voice and choice.
Care ethics and its relational approach promotes awareness for the need and value of compassion and empathy, integrating both patient and provider perspectives, and promoting patient safety, agency, and therapeutic alliance. The relational approach also orients clinical treatment to consider subjective and objective decision making factors rather than merely abstract or academic norms.
Narrative ethics encourage providers to consider patient history and experience in a broader context such as a biopsychosocial approach to healing. A deliberate and explicit narrative approach promotes both fuller patient disclosure and provider empathy and efforts to reach a collaborative care alliance. This can lead to enhanced patient-centered moral judgments and care outcomes.
Principlist ethics offers four equal moral principles to balance in individual cases. These are the right of patients to make decisions (autonomy), promotion of patient welfare (beneficence), avoidance of patient harm (nonmaleficence), and justice through the fair allocation of scarce resources. These principles align with and support TIC frameworks and goals.
Vandervort and colleagues describe how child welfare workers can experience trauma when participating in legal proceedings, and how understanding professional ethics can reduce their trauma experiences.
Addressing social determinants of health as trauma-informed care
Many policies and programs have emerged from the field of trauma-informed care with the intention of preventing trauma at the source by improving social determinants of health. For example, the Nurse Family Partnership is a childhood home visitation program with the goal of helping new mothers learn about parenting to reduce child abuse and improve the living environment of children. The program's approach resulted in fewer Adverse Childhood Experiences, better pregnancy outcomes, and improved cognitive development of children. Other examples are federal benefit programs aimed at reducing poverty, increasing education, and improving employment, such as Earned Income Tax Credits and Child Tax Credits. These programs have evidence of reducing the risk of interpersonal violence and other forms of trauma. Communities that face a large burden of violence have also taken grassroots initiatives based on the approach of preventing trauma. The organization 365 Baltimore rebranded its violence prevention movement to one of peace creation in order to give power to community members, encourage institutions to take peace-making action, improve social determinants of health, and resist narratives that defined community members as inherently violent.
Organizational applications and techniques of TIC
TIC principles have been applied in organizations, including behavioral health services, and policy analysis.
The Connecticut Department of Children and Families (DCF) implemented wide-ranging TIC policies, which were analyzed over a five-year period by Connell and colleagues in a research study. TIC components included 1) workforce development, 2) trauma screening, 3) supports for secondary traumatic stress, 4) dissemination of trauma-focused evidence-based treatments (EBTs), and 5) development of trauma-informed policy and practice guides. The study found significant and enduring improvements in DCF's capacity to provide trauma-informed care. DCF employees became more aware of TIC services and policies, although there was less improvement in awareness of efforts to implement new practices. The Child Welfare Trauma Toolkit Training program was one program implemented.
Hospital-based intervention programs
Trauma-informed care can play a large role in both the treatment of trauma and prevention of violence. Survivors of violence have a re-injury rate ranging from 16% to 44%. Proponents argue that TIC is necessary to interrupt this broader cycle of violence, as studies show that medical treatment alone does not protect survivors from re-injury.
Hospital-based intervention programs (HVIPs) have gained popularity for intervening in the cycle of violence. HVIPs aim to intervene when a survivor comes in contact with the medical system. Many of these programs use peer-based case management as a form of trauma-informed care, in order to match survivors with resources in a culturally competent, trauma-informed way. Studies show that having case managers with lived experience can validate the experiences of clients and erode cultural stigmas that may come with seeking help in traditional case-working frameworks. More specifically, Jang et al. note that case managers being from the same community as clients created a sense of personal understanding and connection that was extremely important for the client's participation in the program. The same study suggests that the most successfully met client-reported needs by HVIPs included mental health, legal services, and financial/victim-of-crime assistance. For mental health in particular, the study noted that clients who had their mental health needs met were 6 times more likely to engage and complete their programs. Another study found that survivors who engaged in HVIP services were more likely to continue with medical follow-up visits, and return to work or school after their injury, compared to those who did not have access to these programs. Following positive results, some medical professionals have called for the implementation of HVIPs at all Level 1 trauma centers to deliver trauma-informed care addressing social determinants of health post-injury. Notably, HVIPs as a trauma-informed care model have struggled with meeting the long-term needs of clients, such as employment, education, and housing.
Organizations and people promoting TIC
Organizations which have or support TIC programs include the Substance Abuse and Mental Health Services Administration (SAMHSA), National Center for Trauma-informed care, the National Child Traumatic Stress Network, the Surgeon General of California, National Center for Victims of Crime, The Exodus Road, Stetson School, and the American Institutes for Research.
Psychologist Diana Fosha promotes the use of therapeutic models and approaches which integrate relevant neurobiological processes, including implicit memory, and cognitive, emotional and sensorimotor processing. Ricky Greenwald applies eye movement desensitization and reprocessing (EMDR) and founded the Trauma Institute & Child Trauma Institute. Lady Edwina Grosvenor promotes a trauma informed approach in women's prisons in the United Kingdom. Joy Hofmeister promotes trauma-informed instruction for educators in Oklahoma. Anna Baranowsky developed the Traumatology Institute and addresses secondary trauma and effective PTSD techniques.
Other notable people who have developed or promoted TIC programs include Tania Glyde, Carol Wick, Pat Frankish, Michael Huggins, Brad Lamm, Barbara Voss, and Cathy Malchiodi. Activists, journalists and artists supporting TIC awareness include Liz Mullinar, Omar Bah, Ruthie Bolton, Caoimhe Butterly, and Gang Badoy.
Effectiveness
Some efforts have been made to measure the effectiveness of TIC implementations.
Wathen and colleagues conducted a scoping review in 2020 and concluded that of the 13 measures they examined which assess TIC effectiveness, none fully assessed the effectiveness of interventions to implement TVIC (and TIC). The measures they examined mostly assessed for the TVIC principles of understanding and safety, and fewer looked at collaboration, choice, and strength-based and capacity-building approaches. They found several challenges to assessing the effectiveness of TVIC implementations or the existence of vicarious trauma. One challenge was an apparent lack of clarity on how TVIC theory related to each measure's development and validation, so it was not always clear precisely what was being investigated. Another was the broad range of topics within the TVIC framework. They found no assessment that measured for implicit bias in professionals. They also found conflation of "trauma focused" approaches, such as may be used in primary health care, policing and education, with "trauma informed" approaches, where trauma-specific services are routinely provided.
See also
Community accountability
References
Clinical psychology
Domestic violence
Counseling
Practice of law
Legal communication
Medical ethics
Violence | Trauma-informed care | [
"Biology"
] | 6,453 | [
"Behavior",
"Violence",
"Behavioural sciences",
"Aggression",
"Clinical psychology",
"Human behavior"
] |
72,249,115 | https://en.wikipedia.org/wiki/Data%20decolonization | Data decolonization is the process of divesting from colonial, hegemonic models and epistemological frameworks that guide the collection, usage, and dissemination of data related to Indigenous peoples and nations, instead prioritising and centering Indigenous paradigms, frameworks, values, and data practices. Data decolonization is guided by the belief that data pertaining to Indigenous people should be owned and controlled by Indigenous people, a concept that is closely linked to data sovereignty, as well as the decolonization of knowledge.
Data decolonization is linked to the decolonization movement that emerged in the mid-20th century.
History
In various colonial states, data was used to identify Indigenous peoples using Western classification systems, leading to erasure of Indigenous identities, and the origin of narratives that focus on disadvantages in Indigenous communities.
Indigenous knowledge systems were replaced with Western values and systems, devaluing Indigenous ways-of-knowing in the process. Indigenous data practices tend to be more holistic, value diverse, personal opinions, and centre on the person or community for their own benefit, rather than Western practices that are closely linked to categorising people as products, replicating colonial structures. Traditions such as oral history, using traditional knowledge, and other practices that were deemed "unscientific" were devalued and replaced with Western ways of knowing that were presented as universal and objective. Tools such as the census were used to control narratives about Indigenous peoples, counting Indigenous peoples as they were viewed by the Canadian government rather than how they viewed themselves.
Data decolonization seeks to counter the negative narratives that are reinforced by the colonial data practices that persist in a post-colonial era.
Principles
Self-identification
Indigenous peoples value the right to self-identify themselves and define their own identities in data collection. Indigenous peoples value the diversity in their communities and wish to see this diversity accounted for in data.
Self-determination
Indigenous peoples value the right to make decisions about their data. They value the right to control how data is collected about them, how their data is stored, who gets to own the data, and how the data is used.
In practice
Policies
United Nations Declaration on the Rights of Indigenous Peoples
The United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP) was first introduced to the General Assembly in 2007. UNDRIP outlines the comprehensive rights of Indigenous peoples, and serves as a guideline for countries seeking reconciliation with their Indigenous populations. Article 18 especially outlines Indigenous rights to have decision-making power in matters that affect their rights, and this affects their data rights as well. Four countries voted against UNDRIP when it was first proposed: Canada, United States, New Zealand, and Australia, although all four would later agree with the declaration.
Canada
The Canadian government began to endorse UNDRIP in 2010, and they began to fully implement it in 2021. In 2015, the Truth and Reconciliation Commission urged all levels of the Canadian government to adopt the UNDRIP.
United States
The United States has expressed support for the declaration as an aspirational document, but has not formally implemented the UNDRIP. In 2016, the Organization of American States ratified the American Declaration on the Rights of Indigenous Peoples, which is similar to the UNDRIP.
New Zealand
New Zealand announced its support for the UNDRIP in 2010, and is currently working with the Ministry of Māori Development to design and implement its Declaration plan.
Healthcare
Decolonizing data in healthcare involves reforming healthcare infrastructure and policies to prioritise Indigenous peoples. Current healthcare data structures collect, store, and use data about Indigenous peoples without necessarily consulting the input of Indigenous peoples, recreating power dynamics that have previously led to the harm of Indigenous peoples. Decolonizing such structures would put control over healthcare-related data and the use of that data into the hands of Indigenous peoples.
A Palestinian public health scholar outlined some principles to guide the creation of decolonized healthcare data systems:
Centering the community: Centering the concerns and opinions of Indigenous peoples at all levels.
Diversity: Ensuring that opinions, and decision-making are sourced from various Indigenous communities, rather than a few tokens.
Transparency: Building complete awareness in Indigenous communities of how their data is collected and aggregated.
Consent: Prioritising the informed consent of Indigenous peoples, promptly and accurately informing them of all actions that are taken with their data.
Concrete action: Focusing on action that produces real-world results for Indigenous peoples, rather than discourse for researchers.
See also
Indigenous decolonization
References
decolonization
Decolonization
Indigenous peoples
Human rights | Data decolonization | [
"Technology"
] | 907 | [
"Information technology",
"Data"
] |
72,249,188 | https://en.wikipedia.org/wiki/Kepler-1513 | Kepler-1513 is a main-sequence star about away in the constellation Lyra. It has a late-G or early-K spectral type, and it hosts at least one, and likely two, exoplanets.
Planetary system
Kepler-1513b (KOI-3678.01) was confirmed in 2016 as part of a study statistically validating hundreds of Kepler planets. In November 2022, an exomoon candidate was reported around Kepler-1513b based on transit-timing variations (TTVs). Unlike previous giant exomoon candidates in the Kepler-1625 and Kepler-1708 systems, this exomoon would have been terrestrial-mass, ranging from 0.76 Lunar masses to 0.34 Earth masses depending on the planet's mass and the moon's orbital period.
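To put the candidate moon's quoted mass range on a single scale, the lunar bound can be converted to Earth masses using the standard Moon/Earth mass ratio (~0.0123, a textbook constant not taken from this article); the sketch and function name below are illustrative only:

```python
# Illustrative conversion of the exomoon candidate's mass bounds.
# MOON_IN_EARTH_MASSES is the Moon's mass as a fraction of Earth's (~0.0123).
MOON_IN_EARTH_MASSES = 0.0123

def lunar_to_earth(m_lunar: float) -> float:
    """Convert a mass in lunar masses to Earth masses."""
    return m_lunar * MOON_IN_EARTH_MASSES

low = lunar_to_earth(0.76)   # lower bound from the text, in Earth masses
high = 0.34                  # upper bound from the text, already in Earth masses
print(round(low, 4))         # -> 0.0093
print(round(high / low, 1))  # -> 36.4  (the bounds span a factor of ~36)
```

The wide factor between the bounds reflects the text's caveat that the inferred moon mass depends on the assumed planet mass and orbital period.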
In October 2023, a follow-up study by the same team of astronomers using additional observations found that the observed TTVs cannot be explained by an exomoon, but can be explained by a second, outer planet, Kepler-1513c, with a mass comparable to Saturn.
See also
Kepler-90g
References
Lyra
G-type main-sequence stars
Planetary systems with one confirmed planet
3678
J19190999+3917070 | Kepler-1513 | [
"Astronomy"
] | 265 | [
"Lyra",
"Constellations"
] |
72,249,409 | https://en.wikipedia.org/wiki/Glossary%20of%20cellular%20and%20molecular%20biology%20%280%E2%80%93L%29 | This glossary of cellular and molecular biology is a list of definitions of terms and concepts commonly used in the study of cell biology, molecular biology, and related disciplines, including genetics, biochemistry, and microbiology. It is split across two articles:
This page, Glossary of cellular and molecular biology (0–L), lists terms beginning with numbers and with the letters A through L.
Glossary of cellular and molecular biology (M–Z) lists terms beginning with the letters M through Z.
This glossary is intended as introductory material for novices (for more specific and technical detail, see the article corresponding to each term). It has been designed as a companion to Glossary of genetics and evolutionary biology, which contains many overlapping and related terms; other related glossaries include Glossary of virology and Glossary of chemistry.
0–9
3'-end
One of two ends of a single linear strand of DNA or RNA, specifically the end at which the chain of nucleotides terminates at the third carbon atom in the furanose ring of deoxyribose or ribose (i.e. the terminus at which the 3' carbon is not attached to another nucleotide via a phosphodiester bond; instead, the 3' carbon is often still bonded to a hydroxyl group). By convention, sequences and structures positioned nearer to the 3'-end relative to others are referred to as downstream. Contrast 5'-end.
5' cap
A specially altered nucleotide attached to the 5'-end of some primary transcripts as part of the set of post-transcriptional modifications which convert raw transcripts into mature RNA products. The precise structure of the 5' cap varies widely by organism; in eukaryotes, the most basic cap consists of a 7-methylguanosine bonded to the triphosphate group that terminates the 5'-end of an RNA sequence. Among other functions, capping helps to regulate the export of mature RNAs from the nucleus, prevent their degradation by exonucleases, and promote translation in the cytoplasm. Mature mRNAs can also be decapped.
See .
5'-end
One of two ends of a single linear strand of DNA or RNA, specifically the end at which the chain of nucleotides terminates at the fifth carbon atom in the furanose ring of deoxyribose or ribose (i.e. the terminus at which the 5' carbon is not attached to another nucleotide via a phosphodiester bond; instead, the 5' carbon is often still bonded to a phosphate group). By convention, sequences and structures positioned nearer to the 5'-end relative to others are referred to as upstream. Contrast 3'-end.
See .
A
acentric
(of a linear chromosome or chromosome fragment) Having no centromere.
acetyl-CoA
A biochemical compound consisting of a coenzyme A molecule to which an acetyl group (CH3CO) is attached via a high-energy thioester bond. Acetylation of coenzyme A occurs as part of the catabolism of carbohydrates, fats (fatty acids), and proteins (amino acids), after which it participates as an energy carrier in several important metabolic pathways, notably the citric acid cycle, in which hydrolysis of the acetyl group releases energy which is ultimately captured in 11 ATP and one GTP.
acetylation
The covalent attachment of an acetyl group (CH3CO) to a chemical compound, protein, or other biomolecule via an esterification reaction with acetic acid, either spontaneously or by catalysis. Acetylation plays important roles in several metabolic processes and in the regulation of gene expression. Contrast deacetylation.
acetyltransferase
Any of a class of enzymes which catalyze the covalent bonding of an acetyl group (CH3CO) to another compound, protein, or biomolecule, a process known as acetylation.
acrocentric
(of a linear chromosome or chromosome fragment) Having a centromere positioned very close to one end of the chromosome, as opposed to metacentric or telocentric.
action potential
The local change in voltage that occurs when the membrane potential of a specific location along the plasma membrane of a cell rapidly depolarizes, such as when a nerve impulse is transmitted between neurons.
See .
activator
A type of transcription factor that increases the transcription of a gene or set of genes. Most activators work by binding to a specific DNA sequence located within or near an enhancer or promoter and facilitating the binding of RNA polymerase and other transcription machinery in the same region. See also coactivator; contrast repressor.
active site
The region of an enzyme to which one or more substrates bind, causing the substrate or another molecule to undergo a chemical reaction. This region usually consists of one or more amino acid residues (commonly three or four) which, when the enzyme is properly folded, are able to form temporary chemical bonds with the atoms of the substrate molecule; it may also include one or more additional residues which, by interacting with the substrate, are able to catalyze a specific reaction involving the substrate. Though the active site constitutes only a small fraction of all the residues comprising the enzyme, its specificity for particular substrates and reactions is responsible for the enzyme's biological function.
active transport
Transport of a substance (such as an ion or small molecule) across a cell membrane against a concentration gradient. Unlike passive transport, active transport requires an expenditure of energy.
adenine
A purine nucleobase used as one of the four standard nucleobases in both DNA and RNA molecules. Adenine forms a base pair with thymine in DNA and with uracil in RNA.
adenosine
One of the four standard nucleosides used in RNA molecules, consisting of an adenine with its N9 nitrogen bonded to the C1 carbon of a ribose sugar. Adenine bonded to deoxyribose is known as deoxyadenosine, which is the version used in DNA.
adenosine triphosphate (ATP)
An organic compound derived from adenosine that functions as the major source of energy for chemical reactions inside living cells. It is found in all forms of life and is often referred to as the "molecular currency" of intracellular energy transfer.
A-DNA
One of three main biologically active conformations of the DNA double helix, along with B-DNA and Z-DNA. The A-form helix has a right-handed twist with 11 base pairs per full turn, only slightly more compact than B-DNA, but its bases are sharply tilted with respect to the helical axis. It is often favored in dehydrated conditions and within sequences of consecutive nucleotides; it is also the primary conformation adopted by double-stranded RNA and DNA-RNA hybrid duplexes.
affected relative pair
Any pair of organisms which are related genetically and both affected by the same phenotype. For example, two cousins who both have blue eyes are an affected relative pair since they are both affected by the allele that codes for blue eyes.
alkaline lysis
A laboratory method used in molecular biology to extract and isolate extrachromosomal DNA such as the DNA of plasmids (as opposed to genomic DNA) from certain cell types, commonly bacterial cells.
allele
One of multiple alternative versions of an individual gene, each of which is a viable DNA sequence occupying a given position, or locus, on a chromosome. For example, in humans, one allele of the eye-color gene produces blue eyes and another allele of the same gene produces brown eyes.
allosome
Any chromosome that differs from an ordinary autosome in size, form, or behavior and which is responsible for determining the sex of an organism. In humans, the X chromosome and the Y chromosome are sex chromosomes.
alpha helix
A common structural motif in the secondary structures of proteins consisting of a right-handed helix conformation resulting from hydrogen bonding between amino acid residues which are not immediately adjacent to each other.
alternative splicing
A regulated phenomenon of eukaryotic gene expression in which specific exons or parts of exons from the same gene are variably included within or removed from the final, mature messenger RNA transcript. A class of post-transcriptional modification, alternative splicing allows a single gene to code for multiple protein isoforms and greatly increases the diversity of proteins that can be produced by an individual genome. See also splicing.
amber
One of three stop codons used in the standard genetic code; in messenger RNA, it is specified by the nucleotide triplet UAG. The other two stop codons are named ochre and opal.
amino acid
Any of a class of organic compounds whose basic structural formula includes a central carbon atom bonded to amine and carboxyl functional groups and to a variable side chain. Out of nearly 500 known amino acids, a set of 20 are coded for by the standard genetic code and incorporated into long polymeric chains as the building blocks of peptides and hence of polypeptides and proteins. The specific sequences of amino acids in the polypeptide chains that form a protein are ultimately responsible for determining the protein's structure and function.
See .
aminoacyl-tRNA synthetase
Any of a set of enzymes which catalyze the transesterification reaction that results in the attachment of a specific amino acid (or a precursor) to one of its cognate tRNA molecules, forming an aminoacyl-tRNA. Each of the 20 different amino acids used in the standard genetic code is recognized and attached by its own specific synthetase enzyme, and most synthetases are cognate to several different tRNAs according to their specific anticodons.
aminoacyl-tRNA
A tRNA to which a cognate amino acid is chemically bonded; i.e. the product of a transesterification reaction catalyzed by an aminoacyl-tRNA synthetase. Aminoacyl-tRNAs bind to the A site of the ribosome during translation.
amplicon
Any DNA or RNA sequence or fragment that is the source and/or product of an amplification reaction. The term is most frequently used to describe the numerous copied fragments that are the products of the polymerase chain reaction or ligase chain reaction, though it may also refer to sequences that are amplified naturally within a genome, e.g. by gene duplication.
amplification
The replication of a biomolecule, in particular the production of one or more copies of a DNA sequence, known as an amplicon, either naturally (e.g. by spontaneous gene duplication) or artificially (e.g. by the polymerase chain reaction), and especially implying many repeated replication events resulting in thousands, millions, or billions of copies of the target sequence, which is then said to be amplified.
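The "thousands, millions, or billions of copies" follows from doubling per replication cycle in the idealized case; the short sketch below (our own function name, and assuming perfect doubling, which real reactions do not achieve) illustrates the arithmetic:

```python
# Idealized amplification: each cycle doubles the number of copies,
# so n cycles from `start` templates yield start * 2**n amplicons.
def copies_after(cycles: int, start: int = 1) -> int:
    """Copies present after `cycles` perfect doublings of `start` templates."""
    return start * 2 ** cycles

# 30 cycles from a single template already exceed a billion copies.
print(copies_after(30))  # -> 1073741824
```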
anaphase
The stage of mitosis and meiosis that occurs after metaphase and before telophase, when the replicated chromosomes are segregated and each of the sister chromatids are moved to opposite sides of the cell.
The failure of one or more pairs of homologous chromosomes or sister chromatids to properly migrate to opposite sides of the cell during anaphase of mitosis or meiosis due to a defective spindle apparatus. Consequently, both daughter cells are aneuploid: one is missing one or more chromosomes (creating a monosomy) while the other has one or more extra copies of the same chromosomes (creating a trisomy).
(of a linear chromosome or chromosome fragment) Having an abnormal number of centromeres, i.e. more than one.
aneuploidy
The condition of a cell or organism having an abnormal number of one or more particular chromosomes (but excluding abnormal numbers of complete sets of chromosomes, which instead is known as polyploidy).
annealing
The hybridization of two single-stranded molecules containing complementary sequences, creating a molecule with double-stranded regions. The term is used in particular to describe steps in laboratory techniques such as the polymerase chain reaction, where double-stranded DNA molecules are repeatedly denatured into single strands by heating and then exposed to cooler temperatures, causing the strands to reassociate with each other or with complementary primers. The exact annealing temperature is strongly influenced by the length and specific sequence of the individual strands.
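How both length and base composition influence annealing can be illustrated with the Wallace rule, a standard rough estimate of a short oligonucleotide's melting temperature; the rule is textbook material rather than part of this glossary entry, and the function name is our own:

```python
# Wallace rule: for short DNA oligos (< ~14 nt), estimate the melting
# temperature as 2 C per A/T base plus 4 C per G/C base.
def wallace_tm(seq: str) -> int:
    """Rough melting temperature (degrees C) of a short DNA sequence."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# Two 10-mers of equal length: the GC-rich one anneals at a higher temperature.
print(wallace_tm("ATATATATAT"))  # -> 20
print(wallace_tm("GCGCGCGCGC"))  # -> 40
```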
anticodon
A consecutive sequence of three nucleotides within a transfer RNA molecule which complements the three nucleotides of a codon within a messenger RNA transcript. During translation, each tRNA recruited to the ribosome contains a single anticodon triplet that pairs with its complementary codon from the mRNA sequence, allowing each codon to specify a particular amino acid to be added to the growing peptide chain. Anticodons containing inosine in the first position are capable of pairing with more than one codon due to a phenomenon known as wobble base pairing.
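Ignoring wobble, the codon–anticodon pairing described above amounts to taking the reverse complement in RNA bases (the strands pair antiparallel, so the anticodon reads in the opposite direction); a minimal sketch, with the function name our own:

```python
# Watson-Crick pairing rules for RNA bases.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon: str) -> str:
    """Return the anticodon (written 5'-to-3') pairing with an mRNA codon."""
    # Complement each base, then reverse so both triplets read 5'-to-3'.
    return "".join(RNA_COMPLEMENT[b] for b in reversed(codon))

# The start codon AUG (methionine) pairs with the anticodon CAU.
print(anticodon("AUG"))  # -> CAU
```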
antimetabolite
Any molecule that functions as an antagonist to a metabolic process, limiting or inhibiting normal cellular metabolism; i.e. a metabolic poison.
antimitotic
Any compound that suppresses normal mitosis in a cell or population of cells.
anti-oncogene
A gene which helps to regulate cell growth and suppress tumors when functioning correctly, such that its absence or malfunction can result in uncontrolled cell growth and possibly cancer. Compare oncogene.
antiparallel
The contrasting orientations of the two strands of a double helix (and more generally any pair of biopolymers) which are parallel to each other but with opposite directionality. For example, the two strands of a DNA molecule run side-by-side but in opposite directions with respect to chemical numbering conventions, with one strand oriented 5'-to-3' and the other 3'-to-5'.
antiporter
A membrane transport protein which works by exchanging two different ions or small molecules across a lipid membrane in opposite directions, either at the same time or consecutively.
See .
antisense RNA
A molecule containing an RNA sequence that is complementary to a sense strand, such as a messenger RNA, with which it readily hybridizes, thereby inhibiting the sense strand's further activity (e.g. translation into protein). Many different classes of naturally occurring RNA such as microRNAs function by this principle, making them potent gene regulators in various gene silencing mechanisms. Synthetic antisense RNA has also found widespread use in gene knockdown studies, and in practical applications such as antisense therapy.
(of a cell or organism) Lacking a , i.e. a discrete, membrane-bound organelle enclosing the cell's , used especially of cells which normally have a nucleus but from which the nucleus (e.g. in artificial ), and also of specialized cell types that develop without nuclei despite that the cells of other tissues comprising the same organism ordinarily do have nuclei (e.g. mammalian erythrocytes).
The process by which contraction of the side of a cell (and often a corresponding expansion of the opposing side) causes the cell to assume a wedge-shaped morphology. The process is common during early development, where it is often coordinated across many adjacent cells of an layer simultaneously in order to generate bends or folds in developing .
A highly regulated form of that occurs in organisms.
Any artificial , , or molecule, or , which functions as a by binding selectively to one or more specific target molecules, usually other nucleic acids or , and often a family of such molecules. The term is used in particular to describe short nucleic acid fragments which have been randomly generated and then artificially selected by procedures such as SELEX. Aptamers are useful in the laboratory as , particularly in applications where conventional protein are not appropriate.
A set of laboratory methods used in the of a (or any other ) from free , i.e. without relying on an existing .
Any process by which chemical compounds containing biologically relevant elements (e.g. carbon, hydrogen, oxygen, nitrogen, phosphorus, sulfur, selenium, iron, cobalt, nickel, copper, zinc, molybdenum, etc.) are uptaken by microorganisms and incorporated into complex in order to synthesize various cellular components. In contrast, a uses the energy released by decomposing exogenous molecules to power the cell's and out of the cell, instead of reusing them to build new molecules.
In animal cells, a star-shaped system of non- that radiates from a or from either of the poles of the during the early stages of cell division.
The failure of to properly pair with each other during . Contrast and .
A single containing two or more physically attached copies of the normal as a result of either a natural internal or any of a variety of methods. The resulting compound chromosome effectively carries two or more doses of all genes and sequences included on the X, yet functions in all other respects as a single chromosome, meaning that haploid 'XX' (rather than the ordinary 'X' gametes) will be produced by and inherited by progeny. In mechanisms such as in which the sex of an organism is determined by the total dosage of X-linked genes, an abnormal 'XXY' , fertilized by one XX gamete and one Y gamete, will develop into a female.
The or digestion of a by its own ; or of a particular enzyme by another instance of the same enzyme. See also .
Any that is not an and hence is not involved in the determination of the sex of an organism. Unlike the sex chromosomes, the autosomes in a cell exist in pairs, with the members of each pair having the same structure, morphology, and genetic .
A cell or organism that is for a at which the two homologous are identical by descent, both having been derived from a single gene in a common ancestor. Contrast .
The growth of a multicellular organism due to an increase in the size of its cells rather than an increase in the number of cells.
Describing a in which only a single species, variety, or strain is present, and which is therefore entirely free of contaminating organisms including symbiotes and parasites.
B
B chromosome
Any supernumerary chromosome or chromosome-like molecule which is not a duplicate of nor homologous to any of the standard complement of normal "A" chromosomes comprising a genome. Typically very small and devoid of structural genes, B chromosomes are by definition not necessary for life. Though they occur naturally in many eukaryotic species, they are dispensable and thus vary in number even between closely related individuals.
back mutation
A mutation that reverses the effect of a previous mutation which had inactivated a gene, thus restoring wild-type function. See also reversion.
An abbreviation of and .
base pair (bp)
A pair of two nucleobases on opposite complementary DNA or RNA strands which are loosely attracted to each other via hydrogen bonding, a type of non-covalent electrostatic interaction between individual atoms in the purine or pyrimidine rings of the complementing bases. This phenomenon, known as base pairing, is the mechanism underlying the complementarity that commonly occurs between nucleic acid polymers, allowing two single-stranded molecules to combine into a more energetically stable double-stranded molecule, as well as enabling certain individual strands to fold back upon themselves. The ability of consecutive base pairs to stack one upon another contributes to the long-chain helical structures observed in both DNA and RNA molecules.
baseline expression
A measure of the level of expression of a gene or genes prior to a perturbation in an experiment, as in a negative control. Baseline expression may also refer to the expected or historical measure of expression for a gene.
Basic Local Alignment Search Tool (BLAST)
A computer algorithm widely used in bioinformatics for querying and comparing primary biological sequence information such as the nucleotide sequences of DNA or RNA or the amino acid sequences of proteins. BLAST programs enable scientists to quickly check for homology between two or more sequences by directly comparing the nucleotides or amino acids present at each position within each sequence; a common use is to search for matches between a specific query sequence and a digital sequence database such as GenBank, with the program returning a list of sequences from the database which resemble the query sequence above a specified threshold of similarity. Such comparisons can permit the identification of an organism from an unknown sample or the inference of evolutionary relationships between genes, proteins, or species.
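The idea of scoring a query against every sequence in a database can be illustrated with a deliberately naive sketch. Real BLAST relies on fast word-matching heuristics and statistical scoring rather than exhaustive comparison; the sequences, names, and scoring values below are made-up assumptions for illustration only:

```python
# Toy illustration of database searching by sequence similarity.
# This sketch simply scores every ungapped placement of the query
# against each database sequence and ranks the hits.

def best_ungapped_score(query, subject, match=1, mismatch=-1):
    """Best score over all ungapped placements of query within subject."""
    best = 0
    for offset in range(len(subject) - len(query) + 1):
        score = sum(match if q == s else mismatch
                    for q, s in zip(query, subject[offset:]))
        best = max(best, score)
    return best

# Hypothetical two-sequence "database":
database = {
    "seqA": "ATGGCGTACGTTAGC",
    "seqB": "TTTTTTTTTTTTTTT",
}
query = "GCGTACG"
hits = sorted(database, key=lambda name: -best_ungapped_score(query, database[name]))
print(hits[0])  # seqA (it contains an exact copy of the query)
```

Ranking by alignment score is the essential output of any such search; BLAST's contribution is making this tractable for databases of millions of sequences.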
B-DNA
The "standard" or classical double-helical conformation of the DNA molecule, thought to represent an average of the various distinct conformations assumed by very long DNA molecules under physiological conditions. The B-form double helix has a right-handed twist with a diameter of 23.7 ångströms and a pitch of 35.7 ångströms or about 10.5 base pairs per full turn, such that each nucleotide pair is rotated approximately 34° around the helical axis with respect to its neighboring pairs. See also A-DNA and Z-DNA.
bidirectional replication
A common mechanism of DNA replication in which two replication forks move in opposite directions away from the same origin of replication; this results in a replication bubble where the molecule is locally separated into two single strands.
binary fission
The separation of a single entity (e.g. a cell) into exactly two discrete entities closely resembling the original. The term refers in particular to a type of asexual reproduction used by prokaryotes such as bacteria, whereby a single cell divides evenly into two daughter cells which are genetically identical to each other and to the parent. Binary fission is preceded by replication of the parent cell's DNA, rapid growth of the cell, and various other processes which ensure even distribution of the cell's contents between the two progeny, but is generally a quicker and simpler process than the mitosis and cytokinesis that occur in eukaryotic cells.
bioassay
Any analytical method that measures or qualifies the presence, effect, or potency of a substance within or upon a biological system, either directly or indirectly, e.g. by quantifying the concentration of a particular chemical compound within a sample obtained from living organisms, cells, or tissues, and ideally under controlled conditions that compare a sample subjected to an experimental treatment or manipulation with an unmanipulated sample, so as to permit inferences about the effect of the treatment upon some measured variable.
biofilm
A community of symbiotic microorganisms, especially bacteria, where cells produce and embed themselves within a slimy, sticky extracellular matrix composed of various high-molecular-weight biopolymers, adhering to each other and sometimes also to a substrate, which may be a biotic or abiotic surface. Many bacteria can exist either as independent single cells or switch to a physiologically distinct biofilm phenotype; those that create biofilms often do so in order to shelter themselves from harmful environments. Cells residing within biofilms can easily share nutrients and genetic material, and subpopulations of cells may differentiate to perform specialized functions supporting the whole biofilm.
biomarker
A measurable indicator of some biological state, especially a compound or gene whose presence or absence in a biological system is a reliable sign of a normal or abnormal process, condition, or disease. Things that may serve as biomarkers include direct measurements of the concentration of a particular compound or molecule in a tissue or fluid sample, or any other characteristic physiological, histological, or radiographic signal (e.g. a change in heart rate, or a distinct morphology under a microscope). They are regularly used as predictive or diagnostic tools in clinical medicine and laboratory research.
biomolecular gradient
Any difference in the concentration of biomolecules between two spaces within a biological system, whether intracellular, extracellular, across a membrane (e.g. between the cytosol on one side of the membrane and the external environment on the other), or between different cells or different parts of a tissue or organ system. Gradients of one kind or another drive virtually all biochemical processes occurring within and between cells, as natural systems tend to move toward a thermodynamic equilibrium where concentrations are uniformly distributed in all spaces and no gradients exist. Gradients thus cause chemical reactions to occur in particular directions, which can be used by cells to accomplish essential biological functions, including signal transduction, energy production, and the movement of specific ions and solutes into and out of cells and organelles. It is often necessary for cells to continuously regenerate gradients, such as transmembrane ion gradients, in order to permit these processes to continue.
biomolecule
Any molecule or chemical compound involved in or essential to one or more biological processes within a biological system, especially large macromolecules such as proteins, nucleic acids, carbohydrates, and lipids, but also broadly inclusive of smaller molecules such as metabolites, vitamins, and hormones which are consumed or produced by biochemical reactions, often as part of metabolism. Most biomolecules are organic compounds; some are produced naturally within cells or organisms (endogenous compounds), while others can only be obtained from the organism's environment (exogenous compounds).
See .
blotting
Any of a variety of molecular biology methods by which electrophoretically or chromatographically separated DNA, RNA, or protein samples are transferred from a support medium such as a polyacrylamide or agarose gel onto an immobilizing carrier such as a nitrocellulose or PVDF membrane. Some methods involve the transfer of molecules by capillary action (e.g. Southern blotting and northern blotting), while others rely on the transport of charged molecules by electrophoresis (e.g. western blotting). The transferred molecules are then visualized by colorant staining, by autoradiography, or by hybridization with probes for specific sequences or with antibodies or ligands bound to chemiluminescent reporters.
blunt end
A term used to describe the end of a double-stranded DNA molecule where the terminal nucleobases on each strand are paired with each other, such that neither strand has a single-stranded "overhang" of unpaired bases, in contrast to a so-called "sticky end", where an overhang is created by one strand being one or more bases longer than the other. Blunt ends and sticky ends are relevant when joining multiple DNA molecules, e.g. in molecular cloning, because sticky-ended molecules will not readily anneal to each other unless they have matching overhangs; blunt-ended molecules do not anneal in this way, so special procedures must be used to ensure that fragments with blunt ends are joined in the correct places.
C
cadastral gene
A gene that restricts the expression of other genes to specific tissues or body parts in an organism, typically by producing transcription factors which variably inhibit or permit transcription of the other genes in different cell types. The term is used most commonly in plant genetics.
cadherin
Any of a class of cell adhesion proteins which are dependent on calcium ions (Ca2+) and whose extracellular domains function as mediators of cell–cell adhesion at adherens junctions in eukaryotic tissues.
callus
An unorganized mass of parenchymal cells that forms naturally at the site of wounds in plant tissues, and which is commonly artificially induced to form in plant tissue culture as a means of initiating somatic embryogenesis.
candidate gene
A gene whose location on a chromosome is associated with a particular phenotype (often a disease-related phenotype), and which is therefore suspected of causing or contributing to the phenotype. Candidate genes are often selected for study based on a priori knowledge or speculation about their functional relevance to the trait or disease being researched.
See .
carbohydrate
Any of a class of organic compounds having the generic chemical formula Cm(H2O)n, and one of several major classes of biomolecules found universally in biological systems. Carbohydrates include individual monosaccharides as well as larger oligosaccharides and polysaccharides, in which multiple monosaccharide monomers are joined by glycosidic bonds. Abundant and ubiquitous, these compounds are involved in numerous essential biochemical processes and metabolic pathways; they are widely used as an energy source for cellular metabolism, as a form of energy storage, as signaling molecules, and as modifiers of the activity of other molecules. Carbohydrates are often colloquially described as "sugars"; the prefix glyco- indicates a compound or process containing or involving carbohydrates, and the suffix -ose usually signifies that a compound is a carbohydrate or a derivative.
See .
carrier protein
1. A membrane transport protein that functions as a transporter, binding to a solute and facilitating its movement across the membrane by undergoing a series of conformational changes.
2. A protein to which a specific hapten or antigen has been conjugated and which thereby carries an epitope capable of eliciting an immune response.
3. A protein which is included in an assay at high concentrations in order to prevent non-specific interactions of the assay's reagents with vessel surfaces, sample components, or other reagents. For example, in many blotting techniques, albumin is intentionally allowed to bind non-specifically to the blotted membrane prior to immunostaining, so as to "block" potential off-target binding of the antibody to the membrane, which might otherwise cause background fluorescence that obscures genuine signal from the target.
cassette
A pre-existing nucleic acid sequence or construct, especially a DNA plasmid with an annotated sequence and precisely positioned restriction sites, into which one or more genes of interest can be readily ligated or recombined by various methods. Recombinant vectors containing reliable promoters, origins of replication, and antibiotic resistance genes are commercially manufactured as cassettes to allow scientists to easily swap genes into and out of an active "slot" or locus within the plasmid. See also vector.
CAAT box
A highly conserved DNA sequence located approximately 75 base pairs upstream (i.e. -75) of the transcription start site for many eukaryotic genes.
See .
cell
The basic structural and functional unit of life, of which all living organisms are composed: essentially a self-replicating ball of cytoplasm surrounded by a membrane which separates the interior from the external environment, thus providing a protected space in which the carefully controlled chemical reactions necessary to sustain biological processes can be carried out unperturbed. Unicellular organisms are composed of a single autonomous cell, whereas multicellular organisms consist of numerous cells cooperating together, with individual cells more or less specialized or differentiated to serve particular functions. Cells vary widely in size, shape, and substructure, particularly between prokaryotes and eukaryotes. The typical cell is microscopic, averaging 1 to 20 micrometres (μm) in diameter, though they may range in size from 0.1 μm to more than 20 centimetres in diameter for the eggs laid by some birds and reptiles, which are highly specialized single-celled ova.
cell biology
The branch of biology that studies the structures, functions, processes, and properties of biological cells, the self-contained units of life common to all living organisms.
cell compartmentalization
The subdivision of the interior of a eukaryotic cell into distinct, usually membrane-bound compartments, including the nucleus and cytoplasmic organelles (mitochondria, chloroplasts, endoplasmic reticulum, Golgi apparatus, etc.), a defining feature of the Eukarya.
cell cortex
A specialized layer of cytoplasmic proteins lining the inner face of the cell membrane in most eukaryotic cells, composed primarily of actin filaments and actin-binding proteins and usually 100–1000 nanometres thick, which functions as a modulator of membrane behavior and cell surface properties.
cell counting
The process of determining the number of cells within a biological sample or cell culture by any of a variety of methods. Counting cells is an important aspect of quantitative techniques used widely in research and clinical medicine. It is generally achieved by using a manual or digital cell counter to count the number of cells present in small fractions of a sample, and then extrapolating to estimate the total number present in the entire sample. The resulting quantification is typically expressed as a density or concentration, i.e. the number of cells per unit area or volume.
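The extrapolation step can be illustrated with a hemocytometer-style calculation; the chamber volume, dilution factor, and counts below are illustrative assumptions, not measurements:

```python
# Hypothetical hemocytometer-style calculation: extrapolate a cell
# concentration from counts made in a few small squares of known volume.
# Assumes each counted square holds 0.1 microlitre (1e-4 mL), as in a
# standard Neubauer chamber, and that the sample was diluted 2-fold.

def cells_per_ml(counts, square_volume_ml=1e-4, dilution_factor=2):
    """Average count per square, scaled to cells per mL of original sample."""
    mean_count = sum(counts) / len(counts)
    return mean_count / square_volume_ml * dilution_factor

# Four squares counted under the microscope (made-up numbers):
print(round(cells_per_ml([110, 95, 102, 93])))  # 2000000
```

Counting several squares and averaging, as above, reduces the effect of uneven cell distribution within the chamber.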
cell culture
The process by which living cells are grown and maintained, or "cultured", under carefully controlled conditions, generally outside of their natural environment. Optimal growth conditions vary widely for different cell types but usually consist of a suitable vessel (e.g. a dish or flask) containing a specifically formulated medium or broth that supplies all of the nutrients essential for life (amino acids, carbohydrates, vitamins, minerals, etc.) plus any desirable growth factors and hormones, permits gas exchange (if necessary), and regulates the environment by maintaining consistent physico-chemical properties (pH, osmotic pressure, temperature, etc.). Some cell types require a solid surface to which they can adhere in order to reproduce, whereas others can be grown while floating freely in a liquid or gelatinous medium. Most cells have a genetically determined reproduction limit, but immortalized cells will divide indefinitely if provided with optimal conditions.
cell division
The separation of an individual cell into two daughter cells by any process. Cell division generally occurs by a complex, carefully structured sequence of events involving the reorganization of the parent cell's internal contents, the physical cleaving of the cytoplasm and cell membrane, and the even distribution of contents between the two resulting cells, so that each ultimately contains approximately half of the original cell's starting material. It usually implies reproduction via the replication of the parent cell's genetic material prior to division, though cells may also divide without replicating their DNA. In prokaryotic cells, binary fission is the primary form of cell division. In eukaryotic cells, asexual division occurs by mitosis and cytokinesis, while specific lineages of cells reserved for sexual reproduction can additionally divide by meiosis.
cell fusion
The merging or coalescence of two or more cells into a single cell, as occurs in the fusion of gametes to form a zygote. Generally this occurs by the destabilization of each cell's plasma membrane and the formation of bridges between them which then expand until the two cytoplasms are completely mixed; intracellular structures or organelles such as nuclei may or may not fuse as well. Some cells can be artificially induced to fuse with each other by treating them with a fusogen such as polyethylene glycol or by passing an electric current through them.
cell membrane
The selectively permeable biological membrane surrounding all prokaryotic and eukaryotic cells, defining the outermost boundary of the cell and physically separating the cell interior from the environment. Like all membranes, the cell membrane is a flexible, fluid, sheet-like lipid bilayer with proteins, carbohydrates, and numerous other molecules embedded within or interacting with it from both sides. Embedded molecules often move laterally alongside the membrane's lipids. Though the cell membrane can be freely crossed by many ions, small organic molecules, and water, most other substances require transport through special pores or channels or by endocytosis or exocytosis in order to enter or exit the cell, especially very large or electrically charged molecules such as proteins and nucleic acids. Besides regulating the transport of substances into and out of the cell, the cell membrane creates an organized interior space in which to perform life-sustaining activities and plays fundamental roles in all of the cell's interactions with its environment, making it important in signaling, adhesion, defense, and motility, among numerous other processes.
cell physiology
The study of the various biological activities and biochemical processes which sustain life inside cells, particularly (but not necessarily limited to) those related to metabolism and energy transfer, growth and division, and the ordinary processes of the cell cycle.
cell polarity
The spatial variation within a cell, i.e. the existence of differences in shape, structure, or function between different parts of the same cell. Almost all cell types exhibit some form of polarity, often along an invisible axis which defines opposing sides or poles where the variation is most extreme. Having internal polarity permits cells to accomplish specialized functions such as secretion or to serve as epithelial cells which must perform different tasks on different sides, or facilitates directed migration or asymmetric cell division.
cell signaling
The diverse set of processes by which cells transmit information to and receive information from themselves, other cells, or their environment. Signaling occurs in all cell types, prokaryotic and eukaryotic, and is of critical importance to the cell's ability to navigate and survive its physical surroundings. Countless mechanisms of signaling have evolved in different organisms, which are often categorized according to the proximity between sender and recipient (autocrine, intracrine, juxtacrine, paracrine, or endocrine).
cell surface receptor
Any of a class of receptor proteins embedded within or attached to the external surface of the cell membrane, with one or more extracellular domains facing the environment and one or more intracellular domains that couple the binding of a particular ligand to an intracellular event or process. Cell surface receptors are a primary means by which environmental signals are received by the cell and transduced across the membrane into the cell interior. Some may also bind exogenous ligands and transport them into the cell in a process known as receptor-mediated endocytosis.
cell wall
A tough, variously flexible or rigid layer of polysaccharides or other structural polymers surrounding some cell types immediately outside of the cell membrane, including plant cells and most prokaryotes, which functions as an additional protective and selective barrier and gives the cell a definite shape and structural support. The chemical composition of the cell wall varies widely between taxonomic groups, and even between different stages of the life cycle: in land plants it consists primarily of cellulose, hemicellulose, and pectin, while algae make use of carrageenan and agar, fungi use chitin, and bacterial cell walls contain peptidoglycan.
cell-free
Any molecule that exists outside of a cell or organism, freely floating in an extracellular fluid such as blood plasma.
cellular
Of, relating to, consisting of, produced by, or resembling a cell or cells.
See .
cellular immunity
A class of immune response that does not rely on the production of antibodies but rather on the activation of specific immune cells such as phagocytes or cytotoxic T-lymphocytes, or the secretion of various cytokines from cells, in response to an antigen.
cellular noise
Any apparently random variability observed in quantities measured in cell biology, particularly those pertaining to gene expression levels.
cellular reprogramming
The conversion of a terminally differentiated cell from one tissue-specific cell type to another. This involves dedifferentiation to a pluripotent state; an example is the conversion of mouse fibroblasts to an undifferentiated embryonic state, which relies on the transcription factors Oct4, Sox2, Myc, and Klf4.
centimorgan (cM)
A unit for measuring genetic linkage defined as the distance between chromosomal loci for which the expected average number of intervening crossovers in a single generation is 0.01. Though not an actual measure of physical distance, it is used to infer the actual distance between two loci based on the apparent likelihood of a crossover occurring between them in any given meiotic division.
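Because 1 cM corresponds to a 1% recombination frequency for closely linked loci, a map distance can be estimated directly from progeny counts; a minimal sketch with made-up numbers:

```python
# Illustrative conversion from an observed recombination frequency to a
# map distance in centimorgans. For loci that are close together, the
# recombinant fraction approximates the crossover probability, so
# 1% recombinant progeny corresponds to roughly 1 cM.

def map_distance_cm(recombinant_offspring, total_offspring):
    """Recombination frequency expressed in centimorgans."""
    return recombinant_offspring / total_offspring * 100

# 90 recombinant progeny out of 1000 scored (hypothetical counts):
print(round(map_distance_cm(90, 1000), 1))  # 9.0
```

For widely separated loci this simple estimate breaks down, since multiple crossovers between the loci go undetected and the recombinant fraction underestimates the true map distance.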
central dogma of molecular biology
A generalized framework for understanding the flow of genetic information between macromolecules within biological systems. The central dogma outlines the fundamental principle that the sequence information encoded in the three major classes of biopolymer (DNA, RNA, and protein) can only be transferred between these three classes in certain ways, and not in others: specifically, information transfer between the two types of nucleic acid and from nucleic acid to protein is possible, but transfer from protein to protein, or from protein back to either type of nucleic acid, is impossible and does not occur naturally.
centriole
A cylindrical organelle composed of microtubules, present only in certain eukaryotes. A pair of centrioles migrate to and define the two opposite poles of a dividing cell where, as part of a centrosome, they initiate the growth of the spindle microtubules.
centromere
A specialized DNA sequence within a chromosome that links a pair of sister chromatids. The primary function of the centromere is to act as the site of assembly for kinetochores, protein complexes which direct the attachment of spindle microtubules to the centromere and facilitate segregation of the chromatids during mitosis or meiosis.
centromeric index
The proportion of the total length of a chromosome encompassed by its short arm, typically expressed as a percentage; e.g. a chromosome with a centromeric index of 15 is acrocentric, with a short arm comprising only 15% of its overall length.
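The index can be computed directly from measured arm lengths; a minimal sketch (all lengths hypothetical, in arbitrary units):

```python
# Centromeric index: the short arm's share of total chromosome length,
# expressed as a percentage.

def centromeric_index(short_arm, long_arm):
    """Short-arm length as a percentage of total chromosome length."""
    return short_arm / (short_arm + long_arm) * 100

# A chromosome with a 15-unit short arm and an 85-unit long arm:
print(round(centromeric_index(15, 85), 2))  # 15.0
```

A perfectly metacentric chromosome (arms of equal length) would give an index of 50, the maximum possible value.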
See .
channel protein
A type of membrane transport protein whose shape forms an aqueous pore in a lipid bilayer, permitting the passage of specific solutes, often small ions, across the membrane in either or both directions.
Chargaff's rules
A set of axioms which state that, in the DNA of any chromosome, species, or organism, the total number of adenine (A) residues will be approximately equal to the total number of thymine (T) residues, and the number of guanine (G) residues will be equal to the number of cytosine (C) residues; accordingly, the total number of purines (A + G) will equal the total number of pyrimidines (C + T). These observations illustrate the highly specific nature of the base pairing that occurs in all double-stranded DNA molecules: even though non-standard pairings are technically possible, they are exceptionally rare because the standard ones are strongly favored in most conditions. Still, the 1:1 equivalence is seldom exact, since at any given time nucleobase ratios are inevitably distorted to some small degree by mismatches, missing bases, and non-canonical bases. The presence of single-stranded polymers also alters the proportions, as an individual single strand may contain any number of any of the bases.
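The first parity rule can be checked on any double-stranded sequence by counting bases across both strands; a minimal sketch with a made-up duplex:

```python
# Checking Chargaff's first parity rule on a toy double-stranded DNA:
# counting bases across both strands of the duplex, A should equal T
# and G should equal C exactly (assuming perfect base pairing).
from collections import Counter

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

strand = "ATGGCGTATCCA"                       # one strand, 5'->3' (made up)
partner = "".join(COMPLEMENT[b] for b in reversed(strand))

counts = Counter(strand) + Counter(partner)   # totals over the whole duplex
print(counts["A"] == counts["T"], counts["G"] == counts["C"])  # True True
```

Note that the equalities hold for the duplex as a whole regardless of the composition of the individual strand, which is exactly the point of the rule.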
charged tRNA
A tRNA molecule to which an amino acid has been attached; i.e. an aminoacyl-tRNA. Uncharged tRNAs lack amino acids.
See .
chemokinesis
A non-directional, random change in the movement of a molecule, cell, or organism in response to a chemical stimulus, e.g. a change in speed resulting from exposure to a particular chemical compound.
chemotaxis
A directed, non-random change in the movement of a molecule, cell, or organism in response to a chemical stimulus, e.g. towards or away from an area with a high concentration of a particular chemical compound.
chiasma
A cross-shaped junction that forms the physical point of contact between two non-sister chromatids belonging to homologous chromosomes during meiosis. As well as ensuring proper segregation of the chromosomes, these junctions are also the sites at which crossing over may occur during meiotic prophase, which results in the reciprocal exchange of DNA between the synapsed chromatids.
chimerism
The presence of two or more populations of cells with distinct genotypes in an individual organism, known as a chimera, which has developed from the fusion of cells originating from separate zygotes; each population of cells retains its own genome, such that the organism as a whole is a mixture of genetically non-identical tissues. Genetic chimerism may be inherited (e.g. by the fusion of multiple embryos during pregnancy) or acquired after birth (e.g. by allogeneic transplantation of cells, tissues, or organs from a genetically non-identical donor); in plants, it can result from grafting or errors in cell division. It is similar to but distinct from mosaicism.
chloroplast
A type of small, lens-shaped plastid found in the cells of green algae and plants which contains light-sensitive photosynthetic pigments and in which the series of biochemical reactions that comprise photosynthesis takes place. Like mitochondria, chloroplasts are bound by a double membrane, contain their own genome from which they direct transcription of a unique set of genes, and replicate independently of the nuclear genome.
chloroplast DNA (cpDNA)
The set of DNA molecules contained within chloroplasts, a type of photosynthetic plastid located within the cells of some eukaryotes such as plants and algae, representing a semi-autonomous genome separate from that within the cell's nucleus. Like other types of plastid DNA, cpDNA usually exists in the form of small circular molecules.
chromosome complement
The complete set of chromosomes or of chromosomal DNA within a cell, tissue, organism, or species.
chromatid
One copy of a newly copied chromosome, which is joined to the original chromosome by a centromere. Paired copies of the same individual chromosome are known as sister chromatids.
chromatin
A complex of DNA, RNA, and protein found in eukaryotic cells that is the primary substance comprising chromosomes. Chromatin functions as a means of packaging very long DNA molecules into highly organized and densely compacted shapes, which prevents the strands from becoming tangled, reinforces the DNA during cell division, helps to prevent DNA damage, and plays an important role in regulating gene expression and DNA replication.
chromocenter
A central amorphous mass of heterochromatin found in the nuclei of cells of the salivary glands in Drosophila larvae and resulting from the fusion of regions surrounding the centromeres of the somatically paired polytene chromosomes, with the distal arms radiating outward.
chromomere
A region of a chromosome that has been locally compacted or condensed into heterochromatin, conspicuous under a microscope as a "bead", node, or dark-staining band, especially when contrasted with nearby uncompacted strings of DNA.
chromosomal DNA
DNA contained in chromosomes, as opposed to extrachromosomal DNA. The term is generally used synonymously with genomic DNA.
chromosomal mutation
The mutation of an entire chromosome, as opposed to a segment of a chromosome or an individual gene.
chromosome
A DNA molecule containing part or all of the genetic material of an organism. Chromosomes may be considered a sort of molecular "package" for carrying DNA within the nuclei of cells and, in most eukaryotes, are composed of long strands of DNA coiled with histones, packaging proteins which bind to and condense the strands to prevent them from becoming an unmanageable tangle. Chromosomes are most easily distinguished and studied in their completely condensed forms, which only occur during cell division. Some simple organisms have only one chromosome made of circular DNA, while most eukaryotes have multiple chromosomes made of linear DNA.
chromosome condensation
The process by which eukaryotic chromosomes become shorter, thicker, denser, and more conspicuous under a microscope during prophase due to systemic coiling and compaction of strands of chromatin in preparation for cell division.
chromosome segregation
The process by which homologous chromosomes or sister chromatids separate from each other and migrate to opposite sides of the dividing cell during mitosis or meiosis.
See .
cilium
A slender, thread-like, motile projection extending from the surface of a eukaryotic cell, longer than a microvillus but shorter than a flagellum. Most eukaryotic cells have at least one primary cilium serving sensory or signaling functions; some cells employ thousands of motile cilia covering their entire surface in order to achieve locomotion or to move extracellular material past the cell.
circular DNA
Any DNA molecule, single-stranded or double-stranded, which forms a continuous closed loop without ends; e.g. bacterial chromosomes, mitochondrial DNA and chloroplast DNA, as well as many other varieties of extrachromosomal DNA, including plasmids and some viral DNA. Contrast linear DNA.
circulating tumor DNA (ctDNA)
Any cell-free DNA fragments derived from tumor cells which are circulating freely in the bloodstream.
cis
On the same side; adjacent to; from the same molecule. Contrast trans.
cis-acting
Affecting a gene or sequence on the same nucleic acid molecule. A gene or sequence within a particular DNA molecule such as a chromosome is said to be cis-acting if it influences or acts upon other sequences located within short distances (i.e. physically nearby, usually but not necessarily adjacent) on the same molecule or chromosome; or, in the broadest sense, if it influences or acts upon other sequences located anywhere (not necessarily within a short distance) on the same chromosome of a homologous pair. Cis-acting factors are often involved in the regulation of gene expression by acting to inhibit or to facilitate transcription. Contrast trans-acting.
cis-dominant
A mutation occurring within a regulatory sequence (such as an operator) which alters the functioning of a nearby gene or genes on the same DNA molecule. Cis-dominant mutations affect the transcription of genes because they occur at sites that control transcription rather than within the genes themselves.
cis-regulatory element
Any sequence or region of non-coding DNA which regulates the transcription of nearby genes (e.g. a promoter, enhancer, silencer, or operator), typically by serving as a binding site for one or more transcription factors. Contrast trans-regulatory element.
cisterna
Any of a class of flattened, membrane-bound sacs or compartments of the rough and smooth endoplasmic reticulum and the Golgi apparatus. By traveling through one or more cisternae, each of which contains a distinct set of enzymes, newly created proteins and polysaccharides undergo chemical modifications such as glycosylation and phosphorylation, which are used as packaging signals to direct their transport to specific destinations within the cell.
classical genetics
The branch of genetics based solely on observation of the visible results of reproductive acts, as opposed to that made possible by the modern techniques and methodologies of molecular biology. Contrast molecular genetics.
cleavage furrow
A trough-like indentation in the surface of a dividing cell, often conspicuous when viewed through a microscope, that initiates the division of the cytoplasm (cytokinesis) as the contractile ring begins to narrow during cell division.
cloning
The process of producing, either naturally or artificially, individual organisms or cells which are genetically identical to each other. Clones are the result of all forms of asexual reproduction, and cells that undergo mitosis produce daughter cells that are clones of the parent cell and of each other. Cloning may also refer to biotechnology methods which artificially create copies of organisms or cells, or, in molecular cloning, copies of DNA fragments or other molecules.
See .
A type of that increases the of one or more genes by binding to an .
The strand of a double-stranded DNA molecule whose nucleotide sequence corresponds directly to that of the RNA transcript produced during (except that bases are substituted with bases in the RNA molecule). Though it is not itself transcribed, the coding strand is by convention the strand used when displaying a DNA sequence because of the direct analogy between its sequence and the of the RNA product. Contrast ; see also .
A series of three consecutive nucleotides in a coding region of a DNA or RNA sequence. Each of these triplets codes for a particular amino acid or stop signal during translation. DNA and RNA molecules are each written in a language using four "letters" (four different nucleobases), but the language used to construct proteins includes 20 "letters" (20 different amino acids). Codons provide the key that allows these two languages to be translated into each other. In general, each codon corresponds to a single amino acid (or stop signal). The full set of codons is called the genetic code.
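The codon-to-amino-acid mapping can be sketched in a few lines of Python. The table below is a small hand-picked subset of the standard genetic code (written as DNA codons), not the full 64-entry table:

```python
# Minimal sketch of codon-based translation. CODON_TABLE is a subset of the
# standard genetic code; '*' marks a stop codon.
CODON_TABLE = {
    "ATG": "M", "TGG": "W", "TTT": "F", "TTC": "F",
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "AAA": "K", "AAG": "K", "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate(seq: str) -> str:
    """Read a coding sequence codon by codon until a stop codon is reached."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE[seq[i:i + 3]]
        if aa == "*":  # stop signal: terminate translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGCCAAATGA"))  # ATG-GCC-AAA-TGA -> "MAK"
```

A real implementation would use the full 64-codon table and handle codons not present in the dictionary.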
The preferential use of a particular codon to code for a particular amino acid rather than alternative codons that are synonymous for the same amino acid, as evidenced by differences between organisms in the frequencies of the synonymous codons occurring in their coding DNA. Because the genetic code is degenerate, most amino acids can be specified by multiple codons. Nevertheless, certain codons tend to be overrepresented (and others underrepresented) in different species.
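Measuring codon usage starts with simple codon counting. A minimal sketch (the coding sequence below is invented for illustration; it contains five alanine codons):

```python
from collections import Counter

def codon_usage(seq: str) -> Counter:
    """Count how often each codon appears in an in-frame coding sequence."""
    return Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))

# Five synonymous alanine codons: GCT appears far more often than GCC or GCA,
# which is the kind of skew that codon usage bias describes.
usage = codon_usage("GCTGCTGCCGCTGCA")
print(usage)  # Counter({'GCT': 3, 'GCC': 1, 'GCA': 1})
```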
A mass of bounded by a and resulting from continuous cytoplasmic growth and repeated nuclear division without , found in some species of algae and fungi, e.g. Vaucheria and Physarum.
A relatively small, independent organic molecule which associates with a specific enzyme and participates in the reaction that the enzyme catalyzes, often by forming a covalent bond with the substrate. Examples include biotin, coenzyme A, and NAD.
Any non-protein organic compound capable of binding to or interacting with an enzyme. Cofactors are required for the initiation of catalysis.
A property of biopolymers whereby two polymeric chains or "strands" aligned antiparallel to each other will tend to form base pairs consisting of hydrogen bonds between the individual nucleobases comprising each chain, with each type of nucleobase pairing almost exclusively with one other type of nucleobase; e.g. in DNA molecules, adenine pairs only with thymine and cytosine pairs only with guanine. Strands that are paired in such a way, and the bases themselves, are said to be complementary. The degree of complementarity between two strands strongly influences the stability of the double-stranded molecule; certain sequences may also be internally complementary, which can result in a single strand folding back and pairing with itself. Complementarity is fundamental to the mechanisms governing DNA replication, transcription, and repair.
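The Watson-Crick pairing rules are simple enough to check programmatically. A minimal sketch, assuming DNA strands written so that aligned positions face each other:

```python
# Watson-Crick base-pairing rules for DNA.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def are_complementary(strand1: str, strand2: str) -> bool:
    """Check pairing between two aligned antiparallel DNA strands.

    strand1 is read 5'->3' and strand2 is read 3'->5', so characters at the
    same index are the ones that would face each other in the duplex.
    """
    return len(strand1) == len(strand2) and all(
        PAIRS[a] == b for a, b in zip(strand1, strand2)
    )

print(are_complementary("ATCG", "TAGC"))  # True: A-T, T-A, C-G, G-C
```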
DNA that is synthesized from a single-stranded template (typically mRNA or microRNA) in a reaction catalyzed by the enzyme reverse transcriptase. cDNA is produced both naturally by retroviruses and artificially in certain laboratory techniques, particularly RT-PCR. In bioinformatics, the term may also be used to refer to the sequence of an mRNA transcript expressed as its DNA counterpart (i.e. with thymine replacing uracil).
See .
The controlled, inducible of a , either or .
In , a measure of the proportion of the surface area of a that is covered by , commonly expressed as a percentage. A culture in which the entire surface is completely covered by a continuous , such that all cells are immediately adjacent to and in direct physical contact with other cells, with no gaps or voids, is said to be 100-percent confluent. Different cell lines exhibit differences in growth rate or depending on the degree of confluence. Because of , most show a significant reduction in the rate of as they approach complete confluence, though some may continue to divide, expanding vertically rather than horizontally by stacking themselves on top of the , until all available nutrients are depleted.
The three-dimensional spatial configuration of the atoms comprising a molecule or structure. The conformation of a is the physical shape into which its chains arrange themselves during , which is not necessarily rigid and may with the protein's particular chemical environment.
A change in the spatial or physical shape of a molecule or macromolecule such as a protein or nucleic acid, rarely spontaneously but more commonly as a result of some alteration in the molecule's chemical environment (e.g. temperature, pH, salt concentration, etc.) or an interaction with another molecule. Changes in the of proteins can affect whether or how strongly they bind or ; inducing these changes is a common means (both naturally and artificially) of activating, inactivating, or otherwise controlling the function of many enzymes and receptor proteins.
The movement of chromosomes to the metaphase plate during the prometaphase and metaphase stages of cell division.
A calculated order of the most frequent residues (of either nucleotides or amino acids) found at each position in a common sequence motif and obtained by comparing multiple closely related sequence alignments.
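The column-by-column "most frequent residue" computation can be sketched directly (the three aligned sequences below are invented for illustration; ties are broken arbitrarily):

```python
from collections import Counter

def consensus(aligned: list[str]) -> str:
    """Build a consensus from equal-length aligned sequences by taking the
    most frequent character in each column (ties broken arbitrarily)."""
    return "".join(
        Counter(col).most_common(1)[0][0] for col in zip(*aligned)
    )

print(consensus(["ATGC", "ATGA", "TTGC"]))  # "ATGC"
```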
A hypothetical mode of DNA replication in which the two parental strands of the original molecule ultimately remain hybridized to each other at the end of the replication process, with the two daughter strands forming their own separate molecule; hence one molecule is composed of both of the starting strands while the other is composed of the two newly synthesized strands. This is in contrast to semiconservative replication, in which each molecule is a hybrid of one old and one new strand. See also dispersive replication.
A gene or nucleotide or amino acid sequence that is highly similar or identical across many species or within a genome, indicating that it has remained relatively unchanged through a long period of evolutionary time.
1. The continuous transcription of a gene, as opposed to regulated expression, in which a gene is only transcribed as needed. A gene that is transcribed continuously is called a constitutive gene.
2. A gene whose expression depends only on the efficiency of its promoter in binding RNA polymerase, and not on any transcription factors or other regulatory sequences which might increase or decrease its transcription.
In , the phenomenon by which most normal eukaryotic cells cease to grow and upon reaching a critical cell density, usually as they approach full or come into physical contact with other cells. As a result, many types of cells cultured on plates or in will continue to proliferate until they cover the whole surface of the culture vessel, at which point the rate of cell division abruptly decreases or is arrested entirely, thus forming a confluent with minimal overlap between neighboring cells, even if the nutrient medium remains plentiful, rather than stacking themselves on top of each other. or cells tend not to respond to cell density in the same way and may continue to proliferate at high densities. This type of density-dependent inhibition of growth is similar to and may occur simultaneously with, but is nonetheless distinct from, the related phenomenon of contact inhibition of movement, whereby moving cells respond to physical contact by temporarily stopping and then reversing their direction of locomotion away from the point of contact.
A continuous sequence of DNA generated by assembling cloned or sequenced DNA fragments by means of their overlapping sequences.
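The overlap-based joining of fragments can be sketched as a greedy suffix/prefix merge (the two reads below are invented for illustration; real assemblers also handle sequencing errors and repeats):

```python
def merge_reads(left: str, right: str, min_overlap: int = 3) -> str:
    """Join two reads into a contig via their longest suffix/prefix overlap."""
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left.endswith(right[:k]):
            return left + right[k:]
    raise ValueError("no sufficient overlap between reads")

# The reads share the 4-base overlap "CCGG".
print(merge_reads("ATTGCCGG", "CCGGTTAA"))  # "ATTGCCGGTTAA"
```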
A phenomenon observed in some , , and which have multiple , whereby the binding of a to one or more sites apparently increases or decreases the affinity of one or more other binding sites for other ligands. This concept highlights the sensitive nature of the chemistry that governs interactions between biomolecules: the strength and specificity of interactions between protein and ligand are influenced, sometimes substantially, by nearby interactions (often ) and by the local chemical environment in general. Cooperativity is frequently invoked to account for the non-linearity of data resulting from attempts to measure the association/dissociation constants of particular .
See .
A mutation resulting from a mistake made during DNA replication.
A phenomenon in which sections of a genome are repeated and the number of repeats varies between individuals in the population, usually as a result of duplication or deletion events that affect entire genes or sections of chromosomes. Copy-number variations play an important role in generating genetic variation within a population.
A protein that works together with one or more transcription factors to regulate gene expression.
A type of transcriptional coregulator that reduces (represses) the transcription of one or more genes by binding to and activating a repressor.
See .
A region of a in which occur or with high frequency.
A sequence of DNA in which a cytosine nucleotide is immediately followed by a guanine nucleotide on the same strand in the 5'-to-3' direction; the "p" in CpG refers simply to the intervening phosphate group linking the two consecutive nucleotides.
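Locating CpG sites is a one-line scan for the dinucleotide "CG" along a single strand read 5'-to-3' (the example sequence is invented for illustration):

```python
def cpg_sites(seq: str) -> list[int]:
    """Return 0-based positions where a C is immediately followed by a G
    on the same strand, read 5'->3'."""
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

print(cpg_sites("ACGTCGGC"))  # [1, 4]
```

Note that "GC" at position 6 is not a CpG site: the order of the two bases along the strand matters.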
Any of numerous folds or in the inner membrane, which give this membrane its characteristic wrinkled shape and increase the surface area across which and supporting reactions can occur. Cristae are studded with proteins such as ATP synthase and various cytochromes.
See .
Any chemical bond or series of bonds, normal or abnormal, natural or artificial, that connects two or more molecules to each other, creating an even larger, often structurally rigid and mechanically durable complex. Crosslinks may consist of covalent, ionic, or intermolecular interactions, or even extensive physical entanglements of molecules, and may be reversible or irreversible; in polymer chemistry the term is often used to describe macrostructures that form predictably in the presence of a specific reagent or catalyst. In the usage generally implies abnormal bonding (whether naturally occurring or experimentally induced) between different (or different parts of the same biomolecule) which are ordinarily separate, especially and . Crosslinking of DNA may occur between on opposite of a molecule (interstrand), or between bases on the same strand (intrastrand), specifically via the formation of covalent bonds that are stronger than the hydrogen bonds of normal ; these are common targets of pathways. Proteins are also susceptible to becoming crosslinked to DNA or to other proteins through bonds to specific surface residues, a process which is artificially induced in many laboratory methods such as and which can be useful for studying in their . Crosslinks are generated by a variety of exogenous and endogenous agents, including chemical compounds and high-energy radiation, and tend to interfere with normal cellular processes such as and , meaning their persistence usually compromises cell health.
1. An abbreviation of .
2. An abbreviation of .
The end of a linear chain of amino acids (i.e. a polypeptide) that is terminated by the free carboxyl group (-COOH) of the last amino acid to be added to the chain during translation. This amino acid is said to be C-terminal. By convention, sequences, domains, active sites, or any other structure positioned nearer to the C-terminus of the polypeptide or the folded protein it forms relative to others are described as downstream. Contrast N-terminus.
The total amount of contained within a (e.g. a ) of a particular organism or species, expressed in number of or in units of mass (typically picograms); or, equivalently, one-half the amount in a . For simple diploid the term is often used interchangeably with , but in certain cases, e.g. in hybrid descended from parents of different species, the C-value may actually represent two or more distinct contained within the same nucleus. C-values apply only to , and notably exclude .
A term used to describe a diverse variety of questions regarding the immense variation in nuclear or among eukaryotic species, in particular the observation that genome size does not correlate with the perceived complexity of organisms, nor necessarily with the number of they possess; for example, many single-celled protists have genomes containing thousands of times more DNA than the human genome. This was considered paradoxical until the discovery that eukaryotic genomes consist mostly of , which lacks genes by definition. The focus of the enigma has since shifted to understanding why and how eukaryotic genomes came to be filled with so much non-coding DNA, and why some genomes have a higher gene content than others.
See .
One of the four standard nucleosides used in RNA molecules, consisting of a cytosine nucleobase with its N1 nitrogen bonded to the C1 carbon of a ribose sugar. Cytosine bonded to deoxyribose is known as deoxycytidine, which is the version used in DNA.
The branch of involving the detection and identification of various cellular structures and components, in particular their , using techniques of biochemical analysis and visualization such as chemical and , spectrophotometry and spectroscopy, radioautography, and electron microscopy.
The branch of genetics that studies how chromosomes influence and relate to cell behavior and function, particularly during mitosis and meiosis.
Any of a broad and loosely defined class of small and which have functions in (primarily , , and pathways), typically by interacting with specific .
The final stage of cell division in both mitosis and meiosis, usually immediately following the division of the nucleus, during which the cytoplasm of the parent cell is partitioned and divided approximately evenly between two daughter cells. In animal cells, this process occurs by the closing of a microfilament ring in the equatorial region of the dividing cell. Contrast karyokinesis.
The study of the morphology, processes, and life history of living cells, particularly by means of light and electron microscopy. The term is also sometimes used as a synonym for the broader field of cell biology.
See .
The interdisciplinary field that studies , , and at the level of an individual cell by making use of single-cell molecular techniques and advanced microscopy to visualize the interactions of cellular components .
All of the material contained within a excluding (in eukaryotes) the ; i.e. that part of the which is enclosed by the but separated from the by the , consisting of the fluid and the totality of its contents, including all of the cell's internal , , and substructures such as , , the , and , and a network of filamentous known as the . Some definitions of cytoplasm exclude certain organelles such as and . Composed of about 80 percent water, the numerous small molecules and macromolecular complexes dissolved or suspended within the cytoplasm give it characteristic viscoelastic and thixotropic properties, allowing it to behave variously as a gel or a liquid solution. Though continuous throughout the intracellular space, the cytoplasm can often be resolved into distinct phases of different density and composition, such as an and . Most of the metabolic and biosynthetic activities of the cell take place in the cytoplasm, including by . Despite their physical separation, the cytoplasm and the nucleus are mutually dependent upon each other, such that an isolated nucleus without cytoplasm is as incapable of surviving for long periods as is the .
The flow of the inside a cell, driven by forces exerted upon cytoplasmic fluids by the . This flow functions partly to speed up the transport of molecules and suspended in the cytoplasm to different parts of the cell, which would otherwise have to rely on passive diffusion for movement. It is most commonly observed in very large eukaryotic cells, for which there is a greater need for transport efficiency.
An eukaryotic cell; or all other cellular components besides the nucleus (i.e. the cell membrane, cytoplasm, organelles, etc.) considered collectively. The term is most often used in the context of experiments, during which the cytoplast can sometimes remain viable in the absence of a nucleus for up to 48 hours.
A pyrimidine used as one of the four standard nucleobases in both DNA and RNA molecules. Cytosine forms a base pair with guanine.
The soluble aqueous phase of the , in which small particles such as , , , and many other molecules are suspended or dissolved, excluding larger structures and such as , , , and the .
D
A cell resulting from the division of an initial progenitor, known as the parent cell or mother cell. Generally two daughter cells are produced per division.
An unsupervised algorithm that estimates an activity score for a pathway in a gene expression matrix, following a denoising step.
A spontaneous mutation in the genome of an individual organism that is new to that organism's lineage, having first appeared in a germ cell of one of the organism's parents or in the fertilized egg that develops into the organism; i.e. a mutation that was not present in either parent's genome.
The assembly of a synthetic nucleic acid from free nucleotides without relying on an existing template, i.e. de novo, by any of a variety of laboratory methods. De novo synthesis makes it theoretically possible to construct completely artificial sequences with no naturally occurring equivalent, and no restrictions on size or sequence. It is performed routinely in the commercial production of customized, made-to-order sequences such as oligonucleotide primers.
The removal of an acetyl group (-COCH3) from a chemical compound, protein, or other biomolecule via hydrolysis of the covalent ester bond adhering it, either spontaneously or by catalysis. Deacetylation is the opposite of acetylation.
The redundancy of the genetic code, exhibited as the multiplicity of different codons that specify the same amino acid. For example, in the standard genetic code, the amino acid serine is specified by six unique codons (UCU, UCC, UCA, UCG, AGU, and AGC). Codon degeneracy accounts for the existence of synonymous mutations.
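Degeneracy becomes visible when a codon table is inverted to group codons by the amino acid they encode. A sketch over a small subset of the standard RNA codon table (the serine, tryptophan, and methionine assignments below are standard):

```python
from collections import defaultdict

# Subset of the standard RNA codon table: all six serine codons, plus the
# single codons for tryptophan and methionine.
CODONS = {
    "UCU": "Ser", "UCC": "Ser", "UCA": "Ser", "UCG": "Ser",
    "AGU": "Ser", "AGC": "Ser", "UGG": "Trp", "AUG": "Met",
}

def codons_for(table: dict) -> dict:
    """Invert a codon table, grouping codons by the amino acid they encode."""
    grouped = defaultdict(list)
    for codon, aa in table.items():
        grouped[aa].append(codon)
    return dict(grouped)

groups = codons_for(CODONS)
print(len(groups["Ser"]))  # 6: serine is six-fold degenerate
print(len(groups["Trp"]))  # 1: tryptophan has a single codon
```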
The release of the contents of a secretory granule (usually antimicrobial or cytotoxic molecules) into an extracellular space by the fusion of the granule with the cell's plasma membrane.
A type of mutation in which one or more nucleotides are removed from a nucleotide sequence.
The removal of a methyl group (-CH3) from a chemical compound, protein, or other biomolecule, either spontaneously or by catalysis. Demethylation is the opposite of methylation; both reactions play important roles in numerous biochemical processes, including in epigenetic regulation, as the methylation state of particular residues within particular proteins or nucleic acids can affect their structural conformation in a way that alters their affinity for other molecules, making transcription at nearby genetic loci more or less likely.
The process by which proteins or nucleic acids lose their quaternary, tertiary, and/or secondary structure, either reversibly or irreversibly, through the application of some external chemical or mechanical stress, e.g. by heating, agitation, or exposure to a strong acid or base, all of which can disrupt intermolecular forces such as hydrogen bonding and thereby change or destroy chemical activity. Denatured proteins may be both a cause and a consequence of cell death. Denaturation may also be a normal process; the denaturation of DNA molecules, for example, which breaks the hydrogen bonds between base pairs and causes the separation of the duplex molecule into two single strands, is a necessary step in DNA replication and transcription and hence is routinely performed in vivo by enzymes such as helicases. The same mechanism is also fundamental to laboratory methods such as PCR.
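For short oligonucleotides, the temperature at which a duplex denatures ("melts") is often estimated with the Wallace rule, Tm ≈ 2·(A+T) + 4·(G+C) in degrees Celsius. This is a rough back-of-the-envelope formula valid only for short sequences, not a physical simulation:

```python
def wallace_tm(seq: str) -> int:
    """Estimate the melting temperature (degC) of a short DNA oligo using
    the Wallace rule: Tm = 2*(A+T) + 4*(G+C). G-C pairs contribute more
    because they form three hydrogen bonds versus two for A-T."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGC"))  # 4 A/T + 4 G/C -> 2*4 + 4*4 = 24
```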
One of the four standard nucleosides used in DNA molecules, consisting of an adenine nucleobase with its N9 nitrogen bonded to the C1 carbon of a deoxyribose sugar. Adenine bonded to ribose forms an alternate compound known simply as adenosine, which is used in RNA.
One of the four standard nucleosides used in DNA molecules, consisting of a cytosine nucleobase with its N1 nitrogen bonded to the C1 carbon of a deoxyribose sugar. Cytosine bonded to ribose forms an alternate compound known simply as cytidine, which is used in RNA.
One of the four standard nucleosides used in DNA molecules, consisting of a guanine nucleobase with its N9 nitrogen bonded to the C1 carbon of a deoxyribose sugar. Guanine bonded to ribose forms an alternate compound known simply as guanosine, which is used in RNA.
Any of a class of nuclease enzymes which catalyze the hydrolytic cleavage of phosphodiester bonds in DNA molecules, thereby severing strands of DNA and causing the degradation of DNA polymers into smaller components. Compare ribonuclease.
A nucleic acid molecule composed of a series of covalently linked deoxyribonucleotides, each of which incorporates one of four nucleobases: adenine (A), cytosine (C), guanine (G), and thymine (T). DNA is most often found in double-stranded form, which consists of two antiparallel, complementary nucleotide chains in which each of the nucleobases on each individual strand is base-paired via hydrogen bonding with one on the opposite strand; this structure commonly assumes the shape of a double helix. DNA can also exist in single-stranded form. By storing and encoding genetic information in the sequence of these nucleobases, DNA serves as the universal molecular basis of biological inheritance and the fundamental template from which all proteins, cells, and living organisms are constructed.
A nucleotide containing deoxyribose as its sugar component, and the monomer or subunit used to build deoxyribonucleic acid (DNA) molecules. Deoxyribonucleotides canonically incorporate any of four nucleobases: adenine (A), cytosine (C), guanine (G), and thymine (T). Compare ribonucleotide.
A pentose sugar derived from ribose by the replacement of the hydroxyl group attached to the C2 carbon with a single hydrogen atom. 2-deoxy-D-ribose, in its cyclic ring form, is one of three main functional groups of deoxyribonucleotides and hence of deoxyribonucleic acid (DNA) molecules.
See .
The removal of a phosphate group from a chemical compound, protein, or other biomolecule, either spontaneously or by catalysis. Dephosphorylation is the opposite of phosphorylation; both reactions are common molecular modifications involved in numerous biochemical pathways and processes, including in metabolism, where high-energy bonds to phosphate groups are used to transfer energy between molecules, and in the regulation of proteins, where the phosphorylation state of particular residues can affect the protein's affinity for other molecules or function as a molecular switch.
The spontaneous loss of one or more purine bases (either adenine or guanine) from a DNA or RNA molecule, either single-stranded or double-stranded, via the hydrolytic cleavage of the glycosidic bond linking base and sugar, releasing a free purine nucleobase and leaving an apurinic site. DNA molecules are especially prone to depurination. Loss of pyrimidine bases can also occur spontaneously but is far less common.
The artificial modification of a molecule or protein with the intent of altering its solubility or other chemical properties so as to enable analysis (e.g. by mass spectroscopy or chromatography), or of it by attaching a detectable chemical moiety (e.g. a fluorescent tag) to make it easier to identify and track . Molecules modified in this way are described as derivatives of their naturally occurring counterparts and are said to have been derivatized.
A specialized cell junction between neighboring epithelial cells in which the cells are held together by a network of keratin filaments and structural proteins bridging the gap between the plasma membranes.
The failure of homologous chromosomes that have synapsed normally during pachytene to remain paired during diplotene. Desynapsis is usually caused by the improper formation of chiasmata. Contrast asynapsis.
The branch of biology that studies the various processes and phenomena by which organisms (particularly multicellular but not necessarily excluding ) grow and develop into mature forms capable of reproduction. In the broadest sense the field may encompass topics such as sexual and asexual reproduction, and sporogenesis, , embryogenesis, the renewal and of into specialized cell types, birth or hatching, metamorphosis, and the regeneration of mature tissues.
In meiosis, the fifth and final substage of prophase I, following diplonema and preceding metaphase I. During diakinesis, the chromosomes are further condensed, the two centrosomes reach opposite poles of the cell, and the spindle begins to extend from the poles to the equator.
(of a linear chromosome or chromosome fragment) Having two centromeres instead of the normal one.
The process by which a eukaryotic cell changes from one to another, in particular from a non-specialized to a more specialized cell type which is then said to be differentiated. This usually occurs by a carefully regulated series of modifications which change the specific set of by the cell, turning certain genes off and others on, and (with few exceptions) almost never involves mutations to the nucleotide sequences themselves. These alterations to gene expression result in a cascade of changes which can dramatically change the cell's size, shape, , characteristics, and rate of , and therefore its functions, behaviors, and responsiveness to signals, permitting multicellular organisms to create a huge variety of functionally distinct cell types from a single . Differentiation occurs repeatedly during an organism's development from a single-celled into a complex multicellular system of and cell types, and continues to some extent after the organism reaches maturity in order to repair and replace damaged and dying cells. In most cases differentiation is irreversible, though some cells may also undergo in specific circumstances.
A molecular aggregate consisting of two monomers. The term is often used to describe a protein complex composed of two proteins, either the same protein (a homodimer) or different proteins (a heterodimer); or to an individual protein composed of two polypeptide subunits. Compare monomer, trimer, and tetramer.
A molecular compound consisting of exactly two covalently linked nucleotides; or any two nucleotides which are immediately adjacent to each other on the same strand of a longer polymer.
(of a cell or organism) Having two copies of each chromosome. Contrast haploid and polyploid.
In meiosis, the fourth of the five substages of prophase I, following pachynema and preceding diakinesis. During diplonema, the synaptonemal complex disassembles and the paired homologous chromosomes begin to separate from one another, though they remain tightly bound at the chiasmata where crossing over has occurred.
Any two or more identical copies of a specific nucleotide sequence occurring in the same orientation (i.e. in precisely the same order and not inverted) and on the same strand, either separated by intervening nucleotides or not. An example is the sequence TACGGATCCGTACG, in which TACG occurs twice, though separated by six nucleotides that are not part of the repeated sequence. A direct repeat in which the repeats are immediately adjacent to each other is known as a tandem repeat.
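Finding same-orientation repeats on a single strand reduces to indexing every k-mer and keeping those that occur more than once. A minimal sketch (the example sequence contains TACG twice, separated by six other nucleotides):

```python
def direct_repeats(seq: str, k: int) -> dict[str, list[int]]:
    """Find every k-mer that occurs two or more times in the same orientation
    on the same strand; returns each repeated k-mer with its 0-based starts."""
    positions: dict[str, list[int]] = {}
    for i in range(len(seq) - k + 1):
        positions.setdefault(seq[i:i + k], []).append(i)
    return {kmer: pos for kmer, pos in positions.items() if len(pos) > 1}

print(direct_repeats("TACGGATCCGTACG", 4))  # {'TACG': [0, 10]}
```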
The end-to-end chemical orientation of a single linear strand of a nucleic acid polymer or a polypeptide. The nomenclature used to indicate nucleic acid directionality is based on the chemical convention of identifying individual carbon atoms in the deoxyribose or ribose sugars of nucleotides, specifically the 5'-carbon and 3'-carbon of the ring. The sequence of nucleotides in a polymeric chain may be read or interpreted in the 5'-to-3' direction – i.e. starting from the terminal nucleotide in which the 5' carbon is not connected to another nucleotide, and proceeding to the other terminal nucleotide, in which the 3' carbon is not connected to another nucleotide – or in the opposite 3'-to-5' direction. Most types of nucleic acid synthesis, including DNA replication and transcription, work exclusively in the 5'-to-3' direction, because the polymerases involved can only catalyze the addition of free nucleotides to the open 3'-end of the previous nucleotide in the chain. Because of this, the convention when writing any nucleic acid sequence is to present it in the 5'-to-3' direction from left to right. In double-stranded nucleic acids, the two paired strands must be antiparallel in order to base-pair with each other. Polypeptide directionality is similarly based on labeling the functional groups comprising amino acids, specifically the amino group, which forms the N-terminus, and the carboxyl group, which forms the C-terminus; amino acid sequences are assembled in the N-to-C direction during translation, and by convention are written in the same direction.
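The 5'-to-3' writing convention is why the partner of a DNA strand is given as its reverse complement: complementing alone yields the partner read 3'-to-5', and reversing restores the conventional orientation. A minimal sketch:

```python
# Map each base to its Watson-Crick complement.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the complementary strand, rewritten in the 5'->3' direction.

    Because paired strands are antiparallel, complementing each base gives
    the partner strand read 3'->5'; reversing it restores 5'->3'.
    """
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGC"))  # "GCAT"
```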
A carbohydrate composed of two monosaccharides (either the same or different) joined by a covalent glycosidic bond. See also monosaccharide and polysaccharide.
Any exergonic process of microbial metabolism by which redox-active chemical species participate in oxidation-reduction reactions (exchange of electrons) to provide the cell with energy needed for sustaining activities. External substances are absorbed by the cell from its environment and then decomposed to release energy, with the products subsequently excreted out of the cell. This is in contrast to assimilatory metabolism, in which the atoms of the external substances are reused in the synthesis of biomolecules or the fabrication of cellular components.
Any quantity used to measure the dissimilarity between the expression levels of different genes or samples.
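Euclidean distance is one common choice of dissimilarity measure between two expression profiles (many others exist, e.g. correlation-based distances; the vectors below are invented for illustration):

```python
import math

def euclidean(u: list[float], v: list[float]) -> float:
    """Euclidean distance between two equal-length expression profiles:
    larger values indicate more dissimilar profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

print(euclidean([1.0, 2.0, 3.0], [1.0, 2.0, 7.0]))  # 4.0
```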
See .
A method of taxonomic identification in which short DNA sequences from one or more specific genes are isolated from unidentified samples and then compared with sequences from a reference library in order to uniquely identify the species or other taxon from which the samples originated. The sequences used in the comparison are chosen carefully from genes that are both widely conserved and that show greater variation between species than within species, e.g. the cytochrome c oxidase gene for eukaryotes or certain ribosomal RNA genes for prokaryotes. These genes are present in nearly all living organisms but tend to evolve different mutations in different species, such that a unique sequence variant can be linked to one particular species, effectively creating a unique identifier akin to a retail barcode. DNA barcoding allows unknown specimens to be identified from otherwise indistinct tissues or body parts, where identification by morphology would be difficult or impossible, and the library of organismal barcodes is now comprehensive enough that even organisms previously unknown to science can often be classified with confidence. The simultaneous identification of multiple different species from a mixed sample is known as metabarcoding.
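The comparison step can be sketched as a nearest-match search against a reference library. The species names and sequences below are invented toy data, and percent identity over a trivial alignment stands in for the more careful alignment and scoring real barcoding pipelines use:

```python
# Toy reference library; names and sequences are invented for illustration.
REFERENCE = {
    "species_A": "ATGCGTACGT",
    "species_B": "ATGCGTTTTT",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_match(query: str) -> tuple[str, float]:
    """Assign the query to the reference barcode with the highest identity."""
    return max(
        ((name, identity(query, ref)) for name, ref in REFERENCE.items()),
        key=lambda pair: pair[1],
    )

print(best_match("ATGCGTACTT"))  # ('species_A', 0.9)
```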
The process of compacting very long DNA molecules into densely packed, orderly configurations such as chromosomes, either in vitro or in vivo.
A technology used to measure expression levels of large numbers of transcripts or to detect certain changes in genomic DNA. It consists of an array of thousands of microscopic spots of DNA, called features, each containing picomoles of a specific DNA sequence. This can be a short section of a gene or any other DNA element, and is used as a probe to hybridize a cDNA, cRNA or genomic DNA sample (called a target) under high-stringency conditions. Probe-target hybridization is usually detected and quantified by fluorescence-based detection of fluorophore-labeled targets.
Any of a class of enzymes which synthesize DNA molecules from individual nucleotides. DNA polymerases are essential for DNA replication and usually work in pairs to create identical copies of the two strands of an original double-stranded molecule. They build long chains of DNA by adding nucleotides one at a time to the 3'-end of a DNA strand, usually relying on the template provided by the complementary strand to copy the nucleotide sequence faithfully.
The set of processes by which a cell identifies and corrects structural damage or lesions in the DNA molecules that encode its genome. The ability of a cell to repair its DNA is vital to the integrity of the genome and the normal functionality of the organism.
The process by which a DNA molecule copies itself, producing two identical copies of one original DNA molecule.
The process of determining, by any of a variety of different methods and technologies, the order of the nucleobases in the long chain of nucleotides that constitutes a molecule of DNA.
Any mechanism by which are exchanged non-reciprocally (e.g. via , , or ) that causes continual fluctuations in the of DNA during an organism's lifetime. Such mechanisms are often major drivers of speciation between populations.
A protein domain containing at least one structural motif capable of recognizing and interacting with the double or single strands of a DNA molecule. DNA-binding domains may bind to specific sequences or have a non-specific affinity for DNA. They are the primary functional components of DNA-binding proteins, including many transcription factors and regulatory proteins.
Any protein or protein complex containing one or more DNA-binding domains capable of interacting chemically with one or more parts of a DNA molecule, and consequently having a specific or general affinity for single- and/or double-stranded DNA. DNA-binding activity often depends on the presence and physical accessibility of a specific nucleobase sequence, and mostly occurs at the major groove, since it exposes more of the functional groups which uniquely identify the bases. Binding is also influenced by the spatial conformation of the DNA chain and the occupancy of other proteins near the binding site; many proteins cannot bind to DNA without first undergoing conformational changes induced by interactions with other molecules.
See .
A discrete, usually continuous region of a , or the corresponding sequence of a , which serves a particular function or is defined by particular physico-chemical properties (e.g. , polar, non-polar, , etc.), and especially one which assumes a unique, recognizable spatial as part of the protein's and which contributes to or defines its biological activity. Large proteins are generally composed of several domains linked together by short, intervening polypeptide sequences. Domains are commonly grouped into classes with similar properties or functions, e.g. . More broadly, the term may also be used to refer to a discrete structural entity within any biomolecule, including functionally or compositionally distinct subregions of and .
Any mechanism by which organisms neutralize the large difference in gene dosage caused by the presence of differing numbers of sex chromosomes in the different sexes, thereby equalizing the expression of sex-linked genes so that the members of each sex receive the same or similar amounts of the products of such genes. An example is X-inactivation in female mammals.
The shape most commonly assumed by double-stranded DNA molecules, resembling a ladder that has been twisted upon its long axis, with the rungs of the ladder consisting of base pairs. This is the most energetically stable conformation of the double-stranded forms of both DNA and RNA under most naturally occurring conditions, arising as a consequence of the geometry of the sugar-phosphate backbone and the stacking of the nucleobases bonded to it. In B-DNA, the most common DNA variant found in nature, the double helix has a right-handed twist with about 10 base pairs per full turn, and the molecular geometry results in an alternating pattern of "grooves" of differing widths (a major groove and a minor groove) between the parallel backbones.
The loss of continuity of the sugar-phosphate backbone in both strands of a DNA molecule, in particular when the two breaks occur at sites that are directly across from or very close to each other on the complementary strands. Contrast single-strand break.
Composed of two antiparallel, complementary molecules or strands (either DNA, RNA, or a hybrid of the two) which are held together by hydrogen bonds between the complementary nucleobases of each strand, known as base pairing. Compare single-stranded.
Any molecule that is composed of two , polymers, known as , which are bonded together by hydrogen bonds between the complementary . Though it is possible for DNA to exist as a , it is generally more stable and more common in double-stranded form. In most cases, the complementary causes the twin strands to coil around each other in the shape of a .
Any molecule that is composed of two , polymers, known as , which are bonded together by hydrogen bonds between the complementary . Though RNA usually occurs in , it is also capable of forming duplexes in the ; an example is an transcript pairing with an of the same transcript, which effectively the gene from which the mRNA was transcribed by preventing translation. As in dsDNA, the in dsRNA usually causes the twin strands to coil around each other in the shape of a .
Any process, natural or artificial, which decreases the level of of a certain . A gene which is observed to be expressed at relatively low levels (such as by detecting lower levels of its transcripts) in one sample compared to another sample is said to be downregulated. Contrast .
Towards or closer to the of a chain of nucleotides, or to the of a chain. Contrast .
See .
See .
See .
The production of a second copy of part or all of a or , either naturally or artificially, and the retention of both copies; especially when both the copy and the original sequence are retained within the same molecule, often but not necessarily to each other. See also , , and .
See .
The abnormal growth or development of a or organ; a change in the growth, behavior, or organization of cells within a tissue, or the presence of cells of an abnormal type, such that the tissue becomes disordered, an event which often precedes the development of cancer.
E
eat-me signal
A molecule exposed on the surface of a cell which effectively tags the cell for phagocytosis, inducing phagocytes to engulf or "eat" it. The presence of oxidized lipids or phosphatidylserine, or the absence of sialic acid from cell surface glycoproteins or glycolipids, are all commonly used as eat-me signals in certain cell types. See also find-me signal.
electrophoresis
The physical separation of molecules, e.g. nucleic acids or proteins, according to their movement through a fluid medium to which an electric field is applied, where the distance they travel is proportional to their size. Because of their negatively charged sugar-phosphate backbones, nucleic acids are repelled by the negative electrode at one end of the medium and attracted to the positive electrode at the other end, which causes them to be pulled toward the latter over time; proteins and even whole cells may migrate through the medium in a similar manner. The speed at which the molecules migrate depends on their net electric charge and is inversely proportional to their overall size (i.e. the number of atoms they contain), such that very small molecules tend to move faster through the medium than very large molecules. Thus electrophoretic techniques, particularly gel electrophoresis with agarose or polyacrylamide-based gels as the supporting medium, are widely used in molecular biology laboratories to quickly and conveniently isolate molecules of interest from heterogeneous mixtures and/or identify them based on their expected molecular weight. Reference markers containing molecules of known weight are commonly run alongside unknown samples to aid size-based identification. Electrophoresis is often combined with other techniques such as blotting.
electroporation
A molecular biology technique in which a strong electric field is applied to living cells in order to temporarily increase the permeability of their cell membranes, allowing exogenous nucleic acids, proteins, or chemical compounds to easily pass through the membrane and thereby enter the cells. It is a common method of achieving transformation and transfection.
elongation
The linear growth of a polymer by the sequential addition of individual monomers to a growing chain, e.g. during transcription or translation, especially when it occurs by complementary base pairing with a template strand. The term is often used to describe steps in certain laboratory techniques such as the polymerase chain reaction.
elongation factor
A protein which, by binding to a ribosome, promotes elongation of the polypeptide chain during translation.
embryo
The developing organism that represents the earliest stages of development in all sexually reproducing organisms, traditionally encompassing the period after fertilization of an egg cell and formation of the zygote but prior to birth, hatching, or metamorphosis. During this period, known as embryonic development, the single-cell zygote is transformed by repeated cell divisions and rearrangements into a series of increasingly complex multicellular structures. For humans, the term "embryo" is only used until the ninth week after conception, after which time the embryo is known as a foetus; for most other organisms, including plants, "embryo" can be used more broadly to describe any early stage of the life cycle.
emergenesis
The quality of genetic traits that results from a specific configuration of interacting genes, rather than simply their combination.
endocytosis
Any process by which a substance is uptaken by or brought inside of a cell, crossing the cell membrane from an extracellular space into an intracellular space, which includes the subclasses of pinocytosis, phagocytosis, and receptor-mediated processes. All of these involve surrounding an extracellular molecule, protein, or even another cell or organism with an extension or invagination of the cell membrane, which then "buds off" or separates from the rest of the membrane on the cytoplasmic side, forming a membrane-enclosed vesicle containing the ingested materials. By this mechanism the material can cross the lipid bilayer without being exposed to the hydrophobic space in between, instead remaining suspended in the fluid of the extracellular space. Many large, polar macromolecules which cannot simply diffuse across the membrane, such as proteins and nucleic acids, are transported into the cell by endocytosis. It is distinguished from alternative routes such as passing through membrane channels or being chaperoned by transport proteins. The reverse process is called exocytosis.
endomembrane
Any membrane surrounding an organelle or vesicle, e.g. that of the endoplasmic reticulum, Golgi apparatus, lysosome, nucleus (the nuclear envelope), etc.
endonuclease
Any enzyme whose activity is to cleave phosphodiester bonds within a chain of nucleotides, including those that cleave relatively nonspecifically (without regard to sequence) and those that cleave only at very specific sequences (so-called restriction enzymes). When recognition of a specific sequence is required, endonucleases make their cuts in the middle of the sequence. Contrast exonuclease.
enhancer
A region of DNA near a gene that can be bound by an activator to increase or by a repressor to decrease the gene's expression.
enhancer RNA (eRNA)
A subclass of long non-coding RNAs transcribed from regions of DNA containing enhancer sequences. The expression of a given eRNA generally correlates with the activity of the corresponding enhancer in enhancing transcription of its target genes, suggesting that eRNAs play an active role in gene regulation or transcription.
enucleate
To artificially remove the nucleus from a cell, e.g. by micromanipulation in the laboratory or by destroying it through irradiation with ultraviolet light, rendering the cell anucleate.
enzyme
A protein which acts as a catalyst for a biological process by accelerating a specific chemical reaction, typically by binding one or more substrate molecules and decreasing the activation energy necessary for the initiation of a particular reaction involving the substrate(s). Enzymatic catalysis often results in the chemical conversion of the substrate(s) into one or more products, which then inhibit or permit subsequent reactions. All metabolic pathways consist of a series of individual reactions which each depend upon one or more specific enzymes to drive them forward at rates fast enough to sustain life.
episome
1. Another name for a plasmid, especially one that is capable of integrating into a host chromosome.
2. In eukaryotes, any non-integrated circular DNA molecule that is stably maintained and replicated in the nucleus simultaneously with the rest of the host cell. Such molecules may include viral genomes, bacterial plasmids, and aberrant chromosomal fragments.
epistasis
The collective action of multiple genes interacting during gene expression. A form of gene action, epistasis can be either additive or multiplicative in its effects on specific phenotypes.
epitope
The specific site or region within an antigen such as a protein or polysaccharide which is recognized by B or T cells of the immune system, against which a specific antibody is produced, and with which the antibody's paratope specifically interacts or binds. In proteins, epitopes are typically sequences of 4–5 amino acid residues, sequential or discontiguous, which by virtue of the distinct spatial conformation they adopt upon protein folding are able to uniquely interact with a particular paratope. In this sense they may be considered binding sites, though they do not necessarily overlap with ligand binding sites and need not be in any way relevant to the protein's normal function. Very large molecules may have multiple epitopes, each of which is recognized by a different antibody.
euchromatin
A relatively open, lightly compacted form of chromatin in which DNA is only sporadically bound in nucleosomes and thus broadly accessible to binding and manipulation by transcription factors and other molecules. Euchromatic regions of a genome are often enriched in genes and actively undergoing transcription, in contrast to heterochromatin, which is relatively gene-poor, nucleosome-rich, and less accessible to transcription machinery.
euploidy
The condition of a cell or organism having an abnormal number of complete sets of chromosomes, possibly excluding the sex chromosomes. Euploidy differs from aneuploidy, in which a cell or organism has an abnormal number of one or more specific individual chromosomes.
evolution
The change in the characteristics of biological populations over successive generations. In the most traditional sense, it occurs by changes in the frequencies of alleles in a population's gene pool.
ex vivo
Occurring outside of a cell or organism, as with observations made or experiments performed in or on cells or tissues which have been isolated or removed from their natural context to an external environment (usually a carefully controlled environment with minimal alteration of natural conditions, such as a cell culture being grown in a laboratory). This is in contrast to in vivo observations, which are made in an entirely natural context.
excision
The enzymatic removal of a polynucleotide sequence from one or more strands of a nucleic acid, or of a polypeptide sequence from a protein, typically implying both the breaking of the polymeric molecule in two locations and the subsequent rejoining of the two breakpoints after the sequence between them has been removed. The term may be used to describe a wide variety of processes performed by distinct enzymes, including most DNA repair and splicing pathways.
exocytosis
Any process by which a substance is secreted from or transported out of a cell, crossing the cell membrane from the intracellular space into the extracellular space, especially that which occurs by the fusion of the membrane surrounding a secretory vesicle with the larger cell membrane. This fusion causes the intra-vesicular space to merge with the extracellular fluid, releasing the vesicle's contents on the exterior side of the cell without exposing them to the hydrophobic space between the lipid bilayer's leaflets. More narrowly the term may refer in particular to the bulk transport of a large amount of molecules out of the cell all at once, often proteins or other macromolecules which are too large and polar to passively diffuse across the membrane themselves. The reverse process, whereby materials are invaginated into the cell, is known as endocytosis.
exome
The entire set of exons within a particular genome, including the coding sequences of mature mRNAs as well as untranslated regions.
exon
Any part of a gene that encodes a part of the final mature RNA produced by that gene after introns have been removed by RNA splicing. The term refers to both the sequence as it exists within a DNA molecule and to the corresponding sequence in RNA transcripts.
exonuclease
Any enzyme whose activity is to cleave phosphodiester bonds within a chain of nucleotides, including those that cleave only upon recognition of a specific sequence (so-called restriction enzymes). Exonucleases make their cuts at either the 5'-end or the 3'-end of the sequence (rather than in the middle, as with endonucleases).
exosome
1. (protein complex) An intracellular multi-protein complex which serves the function of degrading various types of RNA molecules.
2. (vesicle) A type of membrane-bound extracellular vesicle produced in many eukaryotic cells by the inward budding of an endosome and the subsequent fusion of the endosome with the cell membrane, causing the release of the vesicle into various extracellular spaces, including biological fluids such as blood and saliva, where they may serve any of a wide variety of physiological functions, from waste management to intercellular signaling.
exosome complex
An intracellular multi-protein complex which serves the function of degrading various types of RNA molecules.
expression vector
A type of vector, usually a plasmid or viral vector, designed specifically for the expression of a transgene in a target cell, rather than for some other purpose such as cloning.
extein
Any part of a protein which is retained within a precursor protein, i.e. not excised by protein splicing, and is therefore present in the mature protein, analogous to the exons of RNA transcripts. Contrast intein.
extracellular
Outside the plasma membrane of a cell or cells; i.e. located or occurring externally to a cell. Contrast intracellular; see also extracellular space.
extracellular matrix (ECM)
The network of interacting macromolecules and minerals secreted by cells and existing outside of and between cells in structures such as tissues and organs, forming a hydrated, mesh-like, semi-solid suspension which not only holds the cells together in an organized fashion but also provides structural and biochemical support, acting as an elastic, compressible buffer against external stresses as well as both regulating and influencing numerous aspects of cell behavior, among them adhesion, migration, proliferation, differentiation, and apoptosis. The composition and properties of the ECM vary enormously between organisms and tissue types, but generally it takes the form of a gel in which various fibrous proteins (especially collagen and elastin), enzymes, and glycoproteins are embedded. Cells themselves both produce the matrix components and respond constantly to local matrix composition, a source of environmental feedback which is critical for homeostasis, tissue organization, and development.
extrachromosomal DNA
Any DNA that is not found in chromosomes or in the nucleus of a cell and hence is not chromosomal DNA. This may include the DNA contained in plasmids or organelles such as mitochondria or chloroplasts, or, in the broadest sense, DNA introduced by viral infection. Extrachromosomal DNA usually shows significant structural differences from nuclear DNA in the same organism.
F
facilitated diffusion
A type of passive transport by which substances are conveyed across biological membranes more quickly than would be possible by ordinary passive diffusion alone, generally because membrane proteins act as shuttles or pores, being arranged in such a way as to provide a polar environment that is favorable for the movement of small polar molecules, which would otherwise be repulsed by the hydrophobic interior of the lipid bilayer.
facultative expression
The transcription of a gene only as needed, as opposed to constitutive expression, in which a gene is transcribed continuously. A gene that is transcribed as needed is called a facultative gene.
find-me signal
A molecule exposed on the surface of a cell destined for apoptosis which is used to attract phagocytes to engulf and eliminate the cell by phagocytosis. See also eat-me signal.
fixation
1. (histology) The preservation of biological material by treating it with a chemical that prevents or delays the natural postmortem processes of decay (e.g. autolysis and putrefaction) which would otherwise eventually cause cells, tissues, and biomolecules to lose their characteristic structures and properties. Biological specimens are usually fixed with the broad objective of arresting or slowing biochemical reactions for long enough to study them in detail, essentially 'freezing' cellular processes in their natural state at a specific point in time, while minimizing disruption to existing structures and arrangements, all of which can improve subsequent staining and microscopy of the fixed samples. Though fixation tends to irreversibly terminate any ongoing reactions, thus killing the fixed cells, it makes it possible to study molecular details that occur too rapidly or transiently to observe in living samples. Common fixatives such as formaldehyde work by disabling enzymes, coagulating, insolubilizing, and/or denaturing macromolecules, creating crosslinks between them, and protecting specimens from decomposition by bacteria and fungi.
2. (population genetics) The process by which a single allele for a particular gene with multiple different alleles increases in frequency in a given population such that it becomes permanently established as the only allele at that locus within the population's gene pool.
fixative
Any chemical compound or solution that causes the fixation of cells, tissues, or other microscopic structures by any mechanism, thus preserving them for long-term, detailed study by methods such as embedding, sectioning, and microscopy. Common fixatives include dilute solutions of ethanol, acetic acid, formaldehyde, and osmium tetroxide, among others.
flagellate
(of a cell) Having one or more flagella.
flagellum
A long, thin, hair-like appendage protruding from the surface of some cells, which serves locomotory functions by undulating in a way that propels the cell through its environment or by effecting the movement of extracellular fluid and solutes past the cell surface. Many unicellular organisms, including some bacteria, protozoa, and algae, bear one or more flagella, and certain cell types in multicellular organisms, namely sperm cells, also have flagella. Eukaryotic flagella are essentially just longer versions of cilia, often up to 150 micrometres (μm) in length, while bacterial flagella are typically smaller and completely different in structure and mechanism of action.
fluorescence in situ hybridization (FISH)
A type of in situ hybridization where the probes are labeled with a fluorophore that is naturally fluorescent when exposed to light at particular wavelengths, making it possible to detect the locations of complementary sequences with fluorescence microscopy. FISH is commonly used to visualize the physical locations of specific genes on chromosomes.
forward genetics
An experimental approach in genetics in which a researcher starts with a specific known phenotype and attempts to determine the genetic basis of that phenotype by any of a variety of laboratory techniques, commonly by random mutagenesis in the organism's genome and then screening for changes in the phenotype of interest. Observed phenotypic changes are assumed to have resulted from the mutation(s) present in the screened sample, which can then be mapped to specific genomic loci and ultimately to one or more specific genes. This methodology contrasts with reverse genetics, in which a specific gene or its gene product is individually manipulated in order to identify the gene's function.
frameshift mutation
A type of mutation in a nucleic acid sequence caused by the insertion or deletion of a number of nucleotides that is not divisible by three. Because of the triplet nature by which nucleotides code for amino acids, a mutation of this sort causes a shift in the reading frame of the nucleotide sequence, resulting in the sequence of amino acids downstream of the mutation site being completely different from the original.
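The triplet arithmetic behind a frameshift can be illustrated with a short sketch (the sequence and insertion point below are invented for the demo, not taken from the glossary):

```python
# Hypothetical demo: an insertion whose length is not divisible by three
# shifts the reading frame, changing every downstream codon.

def codons(seq):
    """Split a nucleotide sequence into successive triplets (codons)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "ATGGCCTTTAAA"                      # reads ATG GCC TTT AAA
mutated = original[:4] + "G" + original[4:]    # insert one nucleotide

print(codons(original))  # ['ATG', 'GCC', 'TTT', 'AAA']
print(codons(mutated))   # ['ATG', 'GGC', 'CTT', 'TAA'] -- frame shifted
```

Note how every codon after the insertion point differs from the original, exactly as the definition describes; an insertion of three nucleotides would instead add one codon and leave the rest of the frame intact.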
Functional Genomics Data Society (FGED)
An organization that works with others "to develop standards for biological research data quality, annotation and exchange" as well as software tools that facilitate their use.
G
G-banding
A technique used in cytogenetics to produce a visible karyotype by staining the condensed chromosomes with Giemsa stain. The staining produces consistent and identifiable patterns of dark and light "bands" in regions of chromatin, which allows specific chromosomes to be easily distinguished.
gamete
A haploid cell that is the product of a germ cell progenitor and the final product of gametogenesis in sexually reproducing multicellular organisms. Gametes are the means by which an organism passes its genetic information to its offspring; during fertilization, two gametes (one from each parent) are fused into a single zygote.
gametogenesis
The process by which eukaryotic germ cells divide and differentiate into mature gametes. Depending on the organism, gametes may be generated from haploid germ cells by mitosis or diploid germ cells by meiosis.
gene
Any segment or set of segments of a nucleic acid molecule that contains the information necessary to produce a functional RNA transcript in a controlled manner. In living organisms, genes are often considered the fundamental units of heredity and are typically encoded in DNA. A particular gene can have multiple different versions, or alleles, and a single gene can result in a gene product that influences many different phenotypes.
gene dosage
The number of copies of a particular gene present in a genome. Gene dosage directly influences the amount of gene product a cell is able to express, though a variety of controls have evolved which tightly regulate gene expression. Changes in gene dosage caused by mutations include copy-number variations.
gene duplication
A type of duplication defined as any duplication of a region of DNA that contains a gene. Compare gene amplification.
gene expression
The set of processes by which the information encoded in a gene is used in the synthesis of a gene product, such as a protein or non-coding RNA, or otherwise made available to influence one or more phenotypes. Canonically, the first step is transcription, which produces an RNA molecule complementary to the DNA molecule in which the gene is encoded; for protein-coding genes, the second step is translation, in which the messenger RNA is read by the ribosome to produce a polypeptide. The information contained within a DNA sequence need not necessarily be transcribed and translated to exert an influence on molecular events, however; broader definitions encompass a huge variety of other ways in which genetic information can be expressed.
Gene Expression Omnibus (GEO)
A database of functional genomics and gene expression data derived from experimental microarray chips and RNA sequencing and managed by the National Center for Biotechnology Information.
gene fusion
The union, either by natural mutation or by laboratory techniques, of two or more previously independent genes that code for different gene products such that they become subject to control by the same regulatory systems. The resulting hybrid sequence is known as a fusion gene.
gene mapping
Any of a variety of methods used to precisely identify the locus of a particular gene within a DNA molecule (such as a chromosome) and/or the physical or genetic distances between it and other genes.
gene of interest
A gene being studied in a scientific experiment, especially one that is the focus of a technique such as genetic engineering.
gene product
Any of the biochemical material resulting from the expression of a gene, most commonly interpreted as the functional RNA produced by transcription of the gene or the fully constructed protein produced by translation of the transcript, though molecules such as non-coding RNAs may also be considered gene products. A measurement of the quantity of a given gene product that is detectable in a cell or tissue is sometimes used to infer how active the corresponding gene is.
gene regulation
The broad range of mechanisms used by cells to control the activity of their genes, especially to allow, prohibit, increase, or decrease the production or expression of specific gene products, such as proteins or RNAs. Gene regulation increases an organism's versatility and adaptability by allowing its cells to express different gene products when required by changes in its environment. In multicellular organisms, the regulation of gene expression also drives cellular differentiation and morphogenesis in the embryo, enabling the creation of a diverse array of cell types from the same genome.
gene silencing
Any mechanism of gene regulation which drastically reduces or completely prevents the expression of a particular gene. Gene silencing may occur naturally during either transcription or translation. Laboratory techniques often exploit natural silencing mechanisms to achieve gene knockdown.
gene therapy
The insertion of a functional allele or gene or part of a gene into an organism (especially a patient) with the intention of correcting a genetic disorder, either by direct substitution of the defective gene or by supplementation with a second, functional version.
gene trapping
A technology used to simultaneously inactivate, identify, and report the expression of a target gene in a mammalian genome by introducing an insertional mutation consisting of a reporter gene and/or a selectable marker flanked by an upstream splice acceptor site and a downstream transcriptional termination sequence.
generation
1. In any given organism, a single reproductive cycle, or the phase between two consecutive reproductive events, i.e. between an individual organism's reproduction and that of the progeny of that reproduction; or the actual or average length of time required to complete a single reproductive cycle, either for a particular individual or for a population or species as a whole.
2. In a given population, those individuals (often but not necessarily living contemporaneously) who are equally removed from a given common ancestor by virtue of the same number of reproductive events having occurred between them and the ancestor.
genetic code
A set of rules by which information encoded within genetic material is translated into proteins by living cells. These rules define how sequences of nucleotide triplets called codons specify which amino acid will be added next during protein synthesis. The vast majority of living organisms use the same genetic code (sometimes referred to as the standard genetic code) but variant codes do exist.
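As a concrete illustration of these rules, here is a minimal sketch of codon-by-codon translation; it uses only a five-codon excerpt of the standard table (the full table maps all 64 codons) and an invented input sequence:

```python
# Hypothetical sketch: translate a DNA coding sequence using a small
# excerpt of the standard genetic code.

CODON_TABLE = {
    "ATG": "M",  # methionine (start codon)
    "GCC": "A",  # alanine
    "AAA": "K",  # lysine
    "TGG": "W",  # tryptophan
    "TAA": "*",  # stop codon
}

def translate(seq):
    """Read triplets left to right, stopping at a stop codon."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE[seq[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGCCAAATGGTAA"))  # MAKW
```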
genetic disorder
Any illness, disease, or other health problem directly caused by one or more abnormalities in an organism's genome which are congenital (present at birth) and not acquired later in life. Causes may include a mutation to one or more genes, or a chromosomal abnormality such as an aneuploidy of a particular chromosome. The mutation responsible may occur spontaneously during embryonic development or may be inherited from one or both parents, in which case the genetic disorder is also classified as a hereditary disease. Though the abnormality itself is present before birth, the actual disease it causes may not develop until much later in life; some genetic disorders do not necessarily guarantee eventual disease but simply increase the risk of developing it.
genetic distance
A measure of the genetic divergence between species, populations within a species, or individuals, used especially in phylogenetics to express either the time elapsed since the existence of a common ancestor or the degree of differentiation in the alleles comprising the genomes of each population or individual.
genetic engineering
The direct, deliberate manipulation of an organism's genetic material using any of a variety of biotechnology methods, including the insertion or deletion of genes, the transfer of genes within and between species, the mutation of existing sequences, and the construction of novel sequences using artificial gene synthesis. Genetic engineering encompasses a broad set of technologies by which the genetic composition of individual cells, tissues, or entire organisms may be altered for various purposes, commonly in order to study the functions and expression of individual genes, to produce hormones, vaccines, and other drugs, and to create genetically modified organisms for use in research and agriculture.
genetic marker
A specific, easily identifiable, and usually highly polymorphic gene or other DNA sequence with a known location on a chromosome that can be used to identify the individual or species possessing it.
genetic recombination
Any reassortment or exchange of genetic material within an individual organism or between individuals of the same or different species, especially that which creates novel combinations of alleles. In the broadest sense, the term encompasses a diverse class of naturally occurring mechanisms by which nucleic acid sequences are copied or physically transferred into different genetic environments, including during chromosomal crossover or gene conversion or as a normal part of meiosis; events such as transformation, transduction, or horizontal gene transfer; or errors in DNA replication or cell division. Artificial recombination is central to many laboratory techniques which produce recombinant DNA.
genetic redundancy
The redundant encoding of two or more distinct genes that ultimately perform the same biochemical function. Mutations in one of these genes may have a smaller effect on fitness than might be expected, since the redundant genes often compensate for any loss of function and obviate any phenotypic consequences.
genetic regulatory network
A graph that represents the regulatory complexity of gene expression. The vertices (nodes) are represented by various regulatory elements and genes while the edges (links) are represented by their interactions. These network structures also represent functional relationships by approximating the rate at which genes are transcribed.
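One minimal way to encode such a network is as a signed directed graph; the gene and regulator names below are invented for illustration only:

```python
# Hypothetical sketch: a gene regulatory network as a signed directed
# graph, with +1 for activating and -1 for repressing interactions.

edges = {
    ("TF_A", "geneX"): +1,   # regulator TF_A activates geneX
    ("TF_A", "geneY"): -1,   # TF_A represses geneY
    ("geneX", "TF_B"): +1,   # geneX's product activates TF_B
}

def targets_of(regulator):
    """Genes directly regulated by `regulator`, with the sign of each edge."""
    return {tgt: sign for (src, tgt), sign in edges.items() if src == regulator}

print(targets_of("TF_A"))  # {'geneX': 1, 'geneY': -1}
```

Real network models layer quantitative rate functions on top of this topology to approximate transcription rates, as the definition notes.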
genetic testing
A broad class of various procedures used to identify features of an individual's particular chromosomes, genes, or proteins in order to determine parentage or ancestry, diagnose vulnerabilities to heritable diseases, or detect alleles associated with increased risks of developing genetic disorders. Genetic testing is widely used in human medicine, agriculture, and biological research.
genetically modified organism (GMO)
Any organism whose genetic material has been altered using genetic engineering techniques, particularly in a way that does not occur naturally by mating or by natural recombination.
genetics
The field of biology that studies genes, genetic variation, and heredity in living organisms.
genome
1. The entire complement of genetic material contained within the chromosomes of an organism, organelle, or virus.
2. The collective set of genes or genetic loci shared by every member of a population or species, regardless of the different alleles that may be present at these loci in different individuals.
genome size
The total amount of DNA contained within one copy of a genome, typically measured by mass (in picograms or daltons) or by the total number of nucleotide base pairs (in megabases or gigabases). For diploid organisms, genome size is often used interchangeably with C-value.
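The mass and length measures can be interconverted; a commonly cited rule of thumb (an approximation, not stated in the glossary) is that 1 picogram of double-stranded DNA corresponds to about 978 megabase pairs:

```python
# Sketch using the widely cited approximation 1 pg dsDNA ~ 978 Mbp.

MBP_PER_PG = 978  # megabase pairs per picogram (approximate)

def bp_to_pg(base_pairs):
    """Convert a genome length in base pairs to an approximate mass in pg."""
    return base_pairs / 1e6 / MBP_PER_PG

# Human haploid genome, roughly 3.1 billion bp:
print(round(bp_to_pg(3.1e9), 2))  # ~3.17
```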
The contained in , as opposed to the contained in separate structures such as or organelles such as or .
An phenomenon that causes to be in a manner dependent upon the particular parent from which the gene was inherited. It occurs when epigenetic marks such as or are established or "imprinted" in the of a parent organism and subsequently maintained through cell divisions in the of the organism's progeny; as a result, a gene in the progeny that was inherited from the father may be expressed differently than another copy of the same gene that was inherited from the mother.
A region of a that shows evidence of from another organism. The term is used especially in describing microbial genomes such as those of bacteria, where genomic islands having the same or similar sequences commonly occur in species or strains that are otherwise only distantly related, implying that they were not passed on through vertical descent from a common ancestor but through some form of lateral transfer such as conjugation. These islands often contain functional genes which confer adaptive traits such as antibiotic resistance.
An interdisciplinary field that studies the structure, function, evolution, mapping, and editing of entire , as opposed to individual .
The ability of certain chemical agents to cause damage to genetic material within a living cell (e.g. through single- or double-stranded breaks, , or ), which may or may not result in a permanent . Though all are genotoxic, not all genotoxic compounds are mutagenic.
The entire complement of present in a particular individual's , which gives rise to the individual's .
The process of determining differences in the of an individual by examining the in the individual's using and comparing them to another individual's sequences or a reference sequence.
Any that gives rise to the of a sexually reproducing organism. Germ cells are the vessels for the genetic material which will ultimately be passed on to the organism's descendants and are usually distinguished from , which are entirely separate from the .
1. In multicellular organisms, the subpopulation of cells which are capable of passing on their genetic material to the organism's progeny and are therefore (at least theoretically) distinct from , which cannot pass on their genetic material except to their own immediate daughter cells. Cells of the germ line are called .
2. The of germ cells, spanning many generations, that contains the genetic material which has been passed on to an individual from its ancestors.
A unit of length equal to one billion (1) in molecules or one billion in duplex molecules such as .
Any that can be converted into via , as opposed to the , which can be converted into ketone bodies. In humans, 18 of the 20 amino acids are glucogenic; only leucine and lysine are not. Five amino acids (phenylalanine, isoleucine, threonine, tryptophan, and tyrosine) are both glucogenic and ketogenic.
The chain of reactions that results in the generation of from some non-carbohydrate carbon substrates, including the . It is one of two primary used by most animals to maintain blood sugar levels (the other being ), especially during periods of fasting, starvation, and intense exercise.
A simple sugar with the molecular formula and the most abundant in nature, being the primary product of photosynthesis, where it is made in a sunlight-powered reaction of water with carbon dioxide. All living organisms are capable of metabolizing glucose via , an exergonic pathway which for most organisms is the primary means of obtaining chemical energy to power cellular activities. Metabolic glucose is usually stored in the form of large polymeric aggregates such as amylose in plants and in animals, and is released by the breakdown of these polymers via .
A branched composed of as many as 30,000 covalently bonded units of the which functions as the primary form of short-term energy storage in most animal cells. Glycogen reserves are especially abundant in muscle and liver cells, where they can be metabolized at-need into their component glucoses as a means of buffering blood sugar levels, a process known as .
A in which polymeric molecules are broken down into individual monomers by the sequential removal of glucose units via phosphorolysis, a reaction catalyzed by the enzyme glycogen phosphorylase. Glycogenolysis is one of two primary pathways used in animal tissues to generate free glucose for the maintenance of blood sugar levels, the other being .
Any of a subclass of consisting of a central polar molecule (most commonly glycerol or sphingosine) which is covalently attached to one or more or via , as well as to one or more long, non-polar chains. Glycolipids are one of three major types of comprising all biological membranes, along with and .
The in which sugars such as are broken down into simpler molecules, releasing chemical energy which can then be used for various cellular functions. In a series of ten enzyme-catalyzed reactions, each molecule of glucose is converted into two molecules of , with the free energy liberated in this process simultaneously being used to form high-energy bonds in two molecules of reduced (NADH) and two molecules of (ATP). In conditions pyruvate and NADH are further oxidized in the ; in conditions NADH itself subsequently reduces pyruvate to lactate.
A protein with one or more carbohydrate molecules, typically short oligosaccharide chains, covalently attached to one or more of its amino acid side chains. Proteins exposed on the outer surface of the cell membrane or secreted into the extracellular space are commonly modified in this way, after which they are said to be glycosylated.
Any of a class of enzymes capable of breaking one or more glycosidic bonds in carbohydrate molecules, commonly found in lysosomes.
Any chemical compound in which a sugar molecule is covalently bonded to another molecule containing a hydroxyl group (including other carbohydrates) via one or more glycosidic bonds. When both molecules are carbohydrates, the glycoside is a disaccharide or polysaccharide.
A covalent ether bond that connects a carbon atom within a carbohydrate molecule (e.g. a monosaccharide) or a carbohydrate derivative to another substituent or functional group, which may or may not be another carbohydrate; such bonds form as the result of a dehydration reaction between hydroxyl groups on each molecule, liberating a water molecule in the process. A substance containing a glycosidic bond is known as a glycoside.
The attachment of an oligosaccharide (e.g. a glycan) to an asparagine residue within a polypeptide or protein by covalent bonding, a process which takes place in or near the endoplasmic reticulum.
See .
See .
A purine nucleobase used as one of the four standard nucleobases in both DNA and RNA molecules. Guanine forms a base pair with cytosine.
The proportion of nitrogenous bases in a DNA or RNA molecule that are either guanine (G) or cytosine (C), typically expressed as a percentage. DNA and RNA molecules with higher GC-content are generally more thermostable than those with lower GC-content due to molecular interactions that occur during base stacking.
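GC-content as defined above is simple to compute. The following minimal Python sketch (the sequence strings are hypothetical examples) counts G and C bases and reports the percentage:

```python
def gc_content(seq: str) -> float:
    """Return the percentage of bases in `seq` that are G or C."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

# An AT-rich sequence is generally less thermostable than a GC-rich one.
print(gc_content("ATATGCAT"))  # 25.0
print(gc_content("GGGCCCGA"))  # 87.5
```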
One of the four standard nucleosides used in RNA molecules, consisting of a guanine base bonded via its N9 nitrogen to the C1 carbon of a ribose sugar. Guanine bonded to deoxyribose is known as deoxyguanosine, which is the version used in DNA.
A short non-coding RNA which complexes with Cas proteins and, by annealing to a specific complementary sequence in a DNA molecule, serves to "guide" these proteins to viral DNA introduced by foreign pathogens, which can then be digested and degraded as part of an adaptive immune defense employed by bacteria and archaea. Custom-made guide RNAs are designed by scientists to target specific genomic loci in CRISPR-Cas genome editing.
H
See .
(of a cell or organism) Having one copy of each chromosome, with each copy not being part of a pair. Contrast diploid and polyploid.
Any of a class of ATP-dependent enzymes that move directionally along the backbone of nucleic acid molecules and catalyze the separation of the two complementary strands of duplex DNA molecules, permitting a wide variety of vital processes to take place, e.g. replication, transcription, and DNA repair.
In a diploid organism, having just one allele at a given locus (where there would ordinarily be two). Hemizygosity may be observed when only one copy of a gene is present in a normally diploid cell or organism, or when a segment of a chromosome containing one copy of an allele is deleted, or when a gene is located on a sex chromosome in the heterogametic sex (in which the sex chromosomes do not exist in matching pairs); for example, in human males with normal chromosomes, almost all X-linked genes are said to be hemizygous because there is only one X chromosome and few of the same genes exist on the Y chromosome.
The storage, transfer, and expression of molecular information in biological organisms, as manifested by the passing on of traits from parents to their offspring, either through sexual or asexual reproduction. Offspring cells or organisms are said to inherit the genetic information of their parents.
See .
A macromolecular complex or protein composed of two different subunits which are paired in the quaternary structure of a multimeric complex. Contrast homodimer.
A cell containing multiple nuclei with different genotypes, resulting from the fusion of two or more genetically distinct cells, either naturally (e.g. in certain types of sexual reproduction) or artificially (e.g. in genetic engineering).
The expression of a foreign gene or any other foreign DNA sequence within a host organism which does not naturally contain the same gene. Insertion of foreign genes into heterologous hosts using recombinant DNA technology is a common biotechnology method for studying gene structure and function.
In a diploid organism, having two different alleles at a given locus. In genetics shorthand, heterozygous genotypes are represented by a pair of non-matching letters or symbols, often an uppercase letter (indicating a dominant allele) and a lowercase letter (indicating a recessive allele), such as "Aa" or "Bb". Contrast homozygous.
The study or analysis of the microscopic anatomy of biological tissues or of cells within tissues, particularly by making use of specialized techniques to distinguish structures and functions based on visual morphology and differential staining. In practice the term is sometimes used more broadly to include cytology.
Any of a class of highly alkaline proteins responsible for packaging DNA into structural units called nucleosomes in eukaryotic cells. Histones are the chief protein components of chromatin, where they associate into octamers which act as "spools" around which the linear DNA molecule winds. They play a major role in chromatin compaction and gene regulation.
The complex of eight histone proteins around which double-stranded DNA wraps within a nucleosome. The canonical histone octamer consists of two each of histones H2A, H2B, H3, and H4, which pair with each other symmetrically to form a ball-shaped cluster around which DNA winds through interactions with the histones' surface residues, though histone variants may replace their canonical analogues in certain contexts.
See .
(of a linear chromosome or chromosome fragment) Having no single centromere but rather multiple kinetochore assembly sites dispersed along the entire length of the chromosome. During cell division, the chromatids of holocentric chromosomes move apart in parallel and do not form the classical V-shaped structures typical of monocentric chromosomes.
Any of a class of DNA sequences approximately 180 base pairs in length occurring near the beginning of certain eukaryotic genes and encoding a 60-amino acid domain, known as a homeodomain, which is capable of binding DNA via a characteristic helix-turn-helix motif. Homeobox-containing genes are translated into homeodomain-containing proteins, which commonly regulate transcription or translation by binding to other genes or messenger RNAs containing their recognition sequences. The products of many homeotic genes, exemplified by the Hox genes, are of critical importance in developmental pathways.
Any DNA or RNA sequence that is specifically recognized and bound by the homeodomain of a homeodomain-containing protein.
A protein domain, typically 60 amino acids in length, found in certain eukaryotic proteins, characterized by a highly conserved helix-turn-helix motif that binds with strong affinity to the backbone of specific recognition sequences in DNA or RNA molecules. A protein may have one or more homeodomains, each of which is specific to a different recognition sequence. Many homeodomain-containing proteins function as transcription factors by binding to sequences within promoters and blocking or recruiting other proteins involved in transcription or translation. Homeodomains are the translated versions of homeoboxes, though the terms are often used interchangeably.
A macromolecular complex or protein composed of two identical subunits which are paired in the quaternary structure of a multimeric complex. Contrast heterodimer.
A set of two matching chromosomes, one maternal and one paternal, which pair up with each other inside the nucleus during meiosis. They have the same genes at the same loci, but may have different alleles.
A type of genetic recombination in which nucleotide sequences are exchanged between two similar or identical ("homologous") molecules of DNA, especially that which occurs between homologous chromosomes. The term may refer to the recombination that occurs as a part of any of a number of distinct cellular processes, most commonly the repair of double-strand breaks or crossing over during meiosis in eukaryotes and horizontal gene transfer in prokaryotes. Contrast non-homologous recombination.
In a diploid organism, having two identical alleles at a given locus. In genetics shorthand, homozygous genotypes are represented by a pair of matching letters or symbols, such as "AA" or "aa". Contrast heterozygous.
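The genotype shorthand used in the two entries above lends itself to a quick worked example. The Python sketch below (purely illustrative; the allele letters are hypothetical) enumerates the offspring genotypes of a monohybrid cross between two heterozygotes:

```python
from itertools import product

def punnett(parent1: str, parent2: str) -> dict:
    """Offspring genotype frequencies from a monohybrid cross.

    Each parent genotype is two allele letters, e.g. "Aa".
    Genotypes are normalized so "aA" and "Aa" are counted together.
    """
    counts = {}
    for a, b in product(parent1, parent2):
        geno = "".join(sorted(a + b))  # 'A' sorts before 'a', giving "Aa"
        counts[geno] = counts.get(geno, 0) + 1
    return counts

# The classic 1:2:1 ratio of an Aa x Aa cross.
print(punnett("Aa", "Aa"))  # {'AA': 1, 'Aa': 2, 'aa': 1}
```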
Any process by which genetic material is transferred between unicellular and/or multicellular organisms other than by vertical transmission from parent to offspring, e.g. bacterial conjugation.
Any gene that is transcribed at a relatively constant level across many or all known conditions and cell types. The products of housekeeping genes typically play critical roles in the maintenance of cellular integrity and basic metabolic function. It is generally assumed that their expression is unaffected by experimental or pathological conditions.
A subset of highly conserved homeobox-containing genes whose protein products function as transcription factors essential for the proper organization of the body plan in developing animal embryos, ensuring that the correct structures are formed in the correct places. Hox genes are usually arranged on a chromosome in ordered arrays and are sequentially expressed during development, with the sequence of gene activation corresponding to their physical arrangement within the genome and/or the physical layout of the tissues in which they are expressed along the organism's anterior–posterior axis.
A collaborative international scientific research project with the goal of sequencing all of the base pairs and identifying and mapping all of the genes within human cells, and ultimately of assembling a complete reference genome for the human species. The project was launched in 1990 by a consortium of federal agencies, universities, and research institutions and was declared complete in 2003. Because each individual human being has a unique genome, the finished reference genome is a mosaic of sequences obtained by sampling DNA from thousands of individuals across the world and does not represent any one individual.
See .
The offspring that results from combining the qualities of two organisms of different genera, species, breeds, or varieties through sexual reproduction. Hybrids may occur naturally or artificially, as during selective breeding of domesticated animals and plants. Reproductive barriers typically prevent hybridization between distantly related organisms, or at least ensure that hybrid offspring are sterile, but fertile hybrids may result in speciation.
1. The process by which a hybrid organism is produced from two organisms of different genera, species, breeds, or varieties.
2. The process by which two or more nucleic acid molecules with complementary nucleotide sequences bind with each other in solution, creating double-stranded DNA or RNA molecules via the formation of hydrogen bonds between the complementary nucleobases of each strand. In certain laboratory contexts, especially ones in which long strands hybridize with short oligonucleotides, hybridization is often referred to as annealing.
3. A step in some experimental assays in which a single-stranded DNA or RNA preparation is added to an array surface and anneals to a complementary probe.
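Sense 2 above (strand pairing by complementarity) can be illustrated computationally: two single strands hybridize when one is the reverse complement of the other. A minimal Python sketch, with hypothetical example sequences:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the strand that would hybridize (anneal) to `seq`."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def can_hybridize(a: str, b: str) -> bool:
    """True when strand `b` is the exact reverse complement of strand `a`."""
    return b == reverse_complement(a)

print(reverse_complement("GATTACA"))        # TGTAATC
print(can_hybridize("GATTACA", "TGTAATC"))  # True
```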
Soluble in or having an affinity for water or other polar compounds; describing a polar molecule, or a moiety or functional group within a molecule, which participates in intermolecular interactions such as hydrogen bonding with other polar molecules and therefore readily dissolves in polar solvents such as water or aqueous solutions. Unlike hydrophobic compounds, hydrophilic compounds can form energetically favorable contacts with the aqueous phase of biological fluids and so can often be suspended directly in the cytosol or exposed to extracellular spaces. Together, the contrasting properties of hydrophilicity and hydrophobicity play major roles in determining the structural conformations and functions of most biomolecules.
Having a low solubility in or affinity for water or other polar solvents; describing a non-polar molecule, or a moiety or functional group within a molecule, which cannot form energetically favorable interactions with polar compounds and which therefore tends to "avoid" or be repulsed by such compounds, instead clustering together with other hydrophobic molecules or arranging itself in a way that minimizes its exposure to its polar surroundings. This phenomenon is not so much due to the affinity of the hydrophobic molecules for each other as it is a consequence of the strong intermolecular forces that allow polar compounds such as water molecules to bond with each other; hydrophobic species are unable to form alternative bonds of equivalent strength with the polar compounds, hence they tend to be excluded from aqueous solutions by the tendency of the polar solvent to maximize interactions with itself. Hydrophobicity is a major determinant of countless chemical interactions in biological systems, including the spatial conformations assumed by macromolecules such as proteins and nucleic acids, the binding of substrates and ligands to proteins, and the structure and properties of lipid bilayers. Contrast hydrophilic.
Describing a solution containing a high concentration of dissolved solutes relative to another solution, i.e. having positive osmotic pressure, such that solvent will tend to move by osmosis across a semipermeable membrane from the solution of lower solute concentration to the solution of higher concentration until both solutions have equal concentrations. In a cell where the intracellular cytosol is hypertonic relative to the surrounding medium (which by definition is hypotonic relative to the cytosol), the solvent (water) will flow across the cell membrane into the cytosol, filling the cell with extra water and diluting its contents until both sides of the membrane are isotonic. Cells placed in severely hypotonic environments may be at risk of bursting due to the sudden inflow.
A mutant allele that permits a subnormal expression of the gene's normal phenotype, e.g. by encoding an unstable enzyme which degrades too quickly to fully serve its function but which nevertheless is functional in some limited capacity, being generated in quantities sufficient for its reaction to proceed slowly or at low levels.
Describing a solution containing a low concentration of dissolved solutes relative to another solution, i.e. having negative osmotic pressure, such that solvent will tend to move by osmosis across a semipermeable membrane from the solution of lower solute concentration to the solution of higher concentration until both solutions have equal concentrations. In a cell where the intracellular cytosol is hypotonic relative to the surrounding medium (which by definition is hypertonic relative to the cytosol), the solvent (water) will flow across the cell membrane out of the cytosol, causing the cell to lose water until both sides of the membrane are isotonic. Cells placed in severely hypertonic environments may be at risk of shriveling and desiccating due to the sudden outflow.
A naturally occurring non-canonical nucleobase that is used in some RNA molecules and pairs with standard nucleobases in a phenomenon known as wobble base pairing. Its nucleoside form is known as inosine, which is the reason it is commonly abbreviated with the letter I in sequence reads.
I
See .
A diagrammatic or schematic representation of the entire set of chromosomes within a cell or genome, in which annotated illustrations depict each chromosome in its most idealized form (e.g. with straight lines and obvious banding) so as to facilitate the easy identification of sequences, structural features, and physical distances, which may be less apparent in photomicrographs of the actual chromosomes.
See .
Capable of provoking or inducing an immune response, as with an antigen or a vaccine.
The use of an antibody conjugated to a label or dye to bind a specific epitope within a target substance (e.g. a protein) and thereby make the substance visible amidst a background of non-specific substances, allowing for detection of the target in a biological sample. The term originally referred to antibody-based staining of tissue sections with strong dyes or colorants, known as immunohistochemistry, but in modern usage encompasses a much broader range of laboratory methods united by their use of antibodies to label specific biomolecules with visually conspicuous compounds.
(of a scientific experiment or research) Conducted, produced, or analyzed by means of computer modeling or simulation, as opposed to a real-world trial.
(of a scientific experiment or biological process) Occurring or made to occur in a natural, uncontrolled setting, or in the natural or original position or place, as opposed to in a foreign cell or tissue type or in vitro.
A laboratory assay in which a labelled, single-stranded DNA or RNA molecule or oligonucleotide containing a sequence that is complementary to a particular DNA or RNA sequence is allowed to hybridize with its complement in situ, i.e. in its natural context, such as within cells or tissue sections (as opposed to within homogeneous samples extracted from cells or tissues, where cellular or histological structure has been lost in the process of obtaining the sample), in order to mark the physical location of the complementary sequence within this context. The label may be a radioactive compound, fluorophore, or antigen, permitting detection by a variety of visualization techniques. In situ hybridization is commonly used to identify the physical locations of specific DNA sequences such as genes and regulatory elements on chromosomes, which can provide insight into chromosomal structure and integrity; to determine the subcellular locations where various types of RNA accumulate and interact with other molecules; and to visualize the tissues and organs within an organism where specific genes are expressed at various developmental stages (by probing for the genes' RNA transcripts).
(of a scientific experiment or biological process) Occurring or made to occur in a laboratory vessel or other controlled artificial environment, e.g. in a test tube or a petri dish, as opposed to in vivo or in situ.
(of a scientific experiment or biological process) Occurring or made to occur inside the cells or tissues of a living organism; or, in the broadest sense, in any natural, unmanipulated setting. Contrast and .
A term referring to either an insertion or a deletion of one or more nucleotides in a nucleic acid sequence.
A protein that binds to a repressor (to disable it) or to an activator (to enable it).
A gene whose expression is either responsive to environmental change or dependent on its host cell's position within the cell cycle.
1. (of a gene or sequence) Read or transcribed in the same reading frame as another gene or sequence; not requiring a shift in reading frame to be intelligible or to result in a functional protein product.
2. (of a mutation) Not causing a frameshift.
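Reading frames can be illustrated with a short sketch: splitting the same sequence into codons at frame offsets 0 and 1 yields entirely different codons, which is why insertions or deletions of lengths not divisible by three cause frameshifts. A Python illustration, using a hypothetical mini-sequence:

```python
def codons(seq: str, frame: int = 0) -> list:
    """Split a nucleotide sequence into codons, starting at offset `frame` (0, 1, or 2)."""
    s = seq[frame:]
    return [s[i:i + 3] for i in range(0, len(s), 3)]

seq = "ATGGCCTGA"  # start codon, one Ala codon, stop codon
print(codons(seq, 0))  # ['ATG', 'GCC', 'TGA'] -- in frame
print(codons(seq, 1))  # ['TGG', 'CCT', 'GA']  -- frameshifted: different codons
```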
See .
See .
A type of mutation in which one or more nucleotides are added to a DNA sequence. Contrast deletion.
Any nucleotide sequence that is naturally or artificially inserted into another sequence. The term is used in particular to refer to the part of a transposable element that codes for those proteins directly involved in the transposition process, e.g. the transposase enzyme. The coding region in a transposable insertion sequence is usually flanked by short inverted repeats, and the structure of larger transposable elements may include a pair of flanking insertion sequences which are themselves inverted.
The alteration of a DNA sequence by the insertion of one or more nucleotides into the sequence, either naturally or artificially. Depending on the precise location of the insertion within the target sequence, insertions may partially or totally inactivate or even upregulate a gene or biochemical pathway, or they may be silent, leading to no substantive changes at all. Many genetic engineering techniques rely on the insertion of exogenous genetic material into host cells in order to study gene function and expression.
A specific DNA sequence that prevents a gene from being influenced by the activation or repression of nearby genes.
Any of a class of membrane proteins which are permanently embedded within or attached to the cell membrane (as opposed to those which are peripheral). Integral membrane proteins can be subclassified into transmembrane proteins, which span the entirety of the membrane, and integral monotopic proteins, which adhere only to one side.
Any of a class of integral membrane proteins which are permanently attached to one side of the cell membrane by any means but which do not completely span the membrane. Contrast transmembrane protein.
Any of a class of integral membrane proteins which span the entirety of the cell membrane, extending from the interior or cytosolic side of the membrane to the exterior or extracellular side. Transmembrane proteins typically have hydrophilic domains exposed to each side as well as one or more hydrophobic domains crossing the nonpolar space inside the lipid bilayer, by which they are further classified as single-pass or multipass membrane proteins. As such many transmembrane proteins function as channels or transporters to permit or prohibit the movement of specific molecules or ions across the membrane, often undergoing conformational changes in the process, or as receptors in signaling pathways. Contrast integral monotopic protein.
A consisting of a containing the gene for a , -specific recognition sites, and a that governs the expression of one or more genes conferring adaptive traits on the host cell. Integrons usually exist in the form of circular DNA fragments, through which they facilitate the rapid adaptation of bacteria by enabling of antibiotic resistance genes between different bacterial species.
Any sequence of one or more amino acids within a precursor protein that is excised by protein splicing during maturation and is therefore absent from the mature protein, analogous to the introns spliced out of RNA transcripts. Contrast extein.
Any chemical compound (e.g. ethidium bromide) that disrupts the alignment and base pairing in the complementary strands of a DNA molecule by intercalation.
The insertion, naturally or artificially, of chemical compounds between the planar nucleobases of a DNA molecule, which generally disrupts the hydrogen bonding necessary for base pairing.
Between two or more genes. Contrast intragenic; see also intergenic region.
Any DNA sequence that is located between the of one and the of the following gene in a transcription unit. See also .
A cross in which both the male and female parents are heterozygous at a particular locus.
Any sequence of DNA that is located between functional genes.
See .
The abbreviated pause in activities related to cell division that occurs during meiosis in some species, between the first and second meiotic divisions (i.e. meiosis I and meiosis II). No DNA replication occurs during interkinesis, unlike during the normal interphase that precedes meiosis I and mitosis.
A sequence present in some RNA transcripts that permits recognition by the ribosome and thus the initiation of translation even in the absence of a 5' cap, which in eukaryotes is otherwise required for assembly of the initiation complex. IRES elements are often located in the 5' untranslated region, but may also be found in other positions.
All stages of the cell cycle excluding cell division. A typical cell spends most of its life in interphase, during which it conducts everyday metabolic activities as well as the complete replication of its genome in preparation for mitosis or meiosis.
Within a cell or cells; i.e. inside the cell membrane. Contrast extracellular and intercellular.
See .
See .
Any nucleotide sequence within a functional gene that is removed by splicing during maturation of the primary transcript and is therefore absent from the final mature mRNA. The term refers to both the sequence as it exists within a DNA molecule and to the corresponding sequence in RNA transcripts. Contrast exon.
See .
A gene whose DNA sequence is nested within an intron of another gene and hence surrounded by non-coding intronic sequences.
The infolding of a membrane toward the interior of a cell or organelle, or of a sheet of cells toward the interior of a developing embryo, tissue, or organ, forming a distinct membrane-lined pocket. In the case of individual cells, the invaginated pocket may proceed to separate from the source membrane entirely, creating a membrane-bound vesicle within the cell, as in endocytosis.
A nucleotide sequence followed downstream on the same strand by its own reverse complement. The initial sequence and the reverse complement may be separated by any number of nucleotides, or may be immediately adjacent to each other; in the latter case, the composite sequence is also called a palindrome. Inverted repeats are self-complementary by definition, a property which involves them in many biological functions and dysfunctions. Contrast direct repeat.
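A nucleotide palindrome of the kind described above equals its own reverse complement, which is easy to test in code. The sketch below checks the EcoRI restriction site GAATTC, a classic example of an adjacent inverted repeat:

```python
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return "".join(COMP[base] for base in reversed(seq))

def is_palindromic(seq: str) -> bool:
    """A DNA palindrome is a sequence equal to its own reverse complement."""
    return seq == revcomp(seq)

print(is_palindromic("GAATTC"))   # True  -- EcoRI recognition site
print(is_palindromic("GATTACA"))  # False
```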
A type of transmembrane protein complex which forms a pore spanning the lipid bilayer of a membrane, through which specific inorganic, electrically charged ions can diffuse down their electrochemical gradients.
Any chemical compound or macromolecule that facilitates the movement of ions across biological membranes, or more specifically, any chemical species that reversibly binds electrically charged atoms or molecules. Many ionophores are lipid-soluble molecules that catalyze the transport of monovalent and divalent cations across the hydrophobic lipid bilayers surrounding cells and vesicles.
A large region of DNA with a relatively homogeneous composition of nucleotides, distinguished from other regions by the proportion of base pairs that are guanine-cytosine (GC) or adenine-thymine (AT). The genomes of most plants and vertebrates are composed of different classes of GC-rich and AT-rich isochores.
A type of abnormal chromosome in which the arms of the chromosome are mirror images of each other. Isochromosome formation is equivalent to simultaneous duplication and deletion events such that two copies of either the long arm or the short arm comprise the resulting chromosome.
Any of a class of enzymes which catalyze the conversion of a molecule from one isomer to another, such that the product of the reaction has the same molecular formula as the original substrate but differs in the connectivity or spatial arrangement of its atoms.
Two or more that are equivalent and redundant in the sense that, despite coding for distinct , they each result in the same when set within the same . If several isomeric genes are present in a single they may be either cumulative or non-cumulative in their contributions to the phenotype.
Describing a solution containing the same concentration of dissolved solutes as another solution, such that the two solutions have equal osmotic pressure. Isotonic solutions separated from each other by a semipermeable membrane (as with a cell, where the intracellular cytosol is separated from the extracellular fluid by the cell membrane) have no osmotic pressure difference and thus will not exchange solvent by osmosis. Contrast hypertonic and hypotonic.
J
See .
Any DNA sequence that appears to have no known biological function, or which acts in a way that has no positive or a net negative effect on the fitness of the in which it is located. The term was once more broadly used to refer to all , though much of this was later discovered to have a function; in modern usage it typically refers to broken or vestigial sequences and , including , , , and fragments of and retroviruses, which together constitute a large proportion of the genomes of most eukaryotes. Despite not contributing productively to the host organism, these sequences are able to persist indefinitely inside genomes because the disadvantages of continuing to copy them are too small to be acted upon by .
Any RNA-encoded sequence, especially a , that appears to have no known biological function, or whose function has no positive or a net negative effect on the fitness of the genome from which it is transcribed. Despite remaining , many still serve important functions, whereas junk RNAs are truly useless: often they are the product of accidental transcription of a sequence, or they may result from of , as with . Junk RNA is usually quickly degraded by and other cytoplasmic enzymes.
K
A chart which depicts the entire set of chromosomes in a cell or organism by using photomicrographs of the actual chromosomes as they appear (usually during metaphase, in their most condensed forms), as opposed to the idealized illustrations of chromosomes used in idiograms. The photomicrographs are often still arranged in pairs and by size for easier identification of particular chromosomes, whereas in the actual nucleus there is seldom any apparent organization.
See .
See .
See .
See .
See .
The fragmentation and degeneration of the nucleus of a dying cell, during which the nuclear envelope is destroyed and the contents of the nucleus, including chromatin, are dispersed throughout the cytoplasm and degraded by enzymes. Karyorrhexis is usually preceded by pyknosis and may occur as a result of apoptosis, necrosis, or cellular senescence.
A dense, organized bundle of chromatin which forms in the oocyte nucleus during oogenesis in some female eukaryotes.
The number and appearance of chromosomes within the nucleus of a eukaryotic cell, especially as depicted in an organized karyogram or idiogram (in pairs and arranged by size and by position of the centromere). The term is also used to refer to the complete set of chromosomes in a species or individual organism or to any test that detects this complement or measures the chromosome number.
The production of ketone bodies via the breakdown of fatty acids or ketogenic amino acids, an exergonic process which is used to supply energy to certain tissues during periods of carbohydrate and protein insufficiency.
Any amino acid that can be converted directly into acetyl-CoA, which can then be oxidized for energy or used as a precursor for many lipids. This is in contrast to the glucogenic amino acids, which can be converted into glucose. In humans, seven of the 20 amino acids are ketogenic, though only leucine and lysine are exclusively ketogenic; the other five (phenylalanine, isoleucine, threonine, tryptophan, and tyrosine) are both ketogenic and glucogenic.
The catabolism of ketone bodies, which releases energy that can be used to synthesize ATP.
A unit of length equal to one thousand nucleotides (1 kb) in single-stranded nucleic acid molecules or one thousand base pairs in duplex molecules such as double-stranded DNA.
Any of a class of enzymes which catalyze the transfer of phosphate groups from high-energy, phosphate-donating molecules such as ATP to one or more specific substrates, a process known as phosphorylation. The opposite process, known as dephosphorylation, is catalyzed by phosphatase enzymes.
A non-specific, non-directional movement or change in activity by a cell or a population of cells in response to a stimulus, such that the rate of the movement or activity is dependent on the intensity of the stimulus but not on the direction from which the stimulus occurs. Kinesis refers particularly to cellular locomotion without directional bias, in contrast to taxis and tropism.
A disc-shaped protein complex which assembles around the centromere of a chromosome during prometaphase of mitosis and meiosis, where it functions as the attachment point for microtubules of the spindle apparatus.
In cytogenetics, an enlarged, heavily staining region of chromatin that can be used as a visual marker, allowing specific chromosomes to be easily identified in the nucleus.
A method by which the normal rate of expression of one or more of an organism's genes is reduced or suppressed (though not necessarily completely turned off, as in a knockout), either through direct modification of a DNA sequence or through treatment with a reagent such as a short DNA or RNA oligonucleotide with a sequence complementary to either an mRNA transcript or a gene.
A method in which one or more novel genes are inserted into an organism's genome, particularly when targeted to a specific locus, or in which one or more existing genes are replaced with novel genes. This is in contrast to a knockout, in which a gene is deleted or completely inactivated.
A method in which one or more specific genes are inactivated or entirely deleted from an organism's genome, by any of a variety of mechanisms which disrupt their expression at some point in the pathway that produces their gene products, such that no functional gene products are produced. This allows researchers to study the function of a gene indirectly, by observing how the organism's phenotype changes when deprived of the gene's normal effects. A complete knockout permanently inactivates the gene; a conditional knockout allows the gene to be turned on or off at will, e.g. at specific times or in specific tissues, by linking the expression of the gene to some easily modifiable biochemical state or condition. In a heterozygous knockout, only one of a diploid organism's two alleles is knocked out; in a homozygous knockout, both copies are knocked out. Contrast knock-in.
A highly conserved sequence which functions as the recognition site for the initiation of translation in most eukaryotic mRNA transcripts, generally a sequence of 10 bases immediately surrounding and inclusive of the start codon: gccRccATGG, where R is a purine. As the ribosome scans the transcript, recognition of this sequence (or a close variant) causes the complex to commit to full assembly and the start of translation. The Kozak sequence is distinct from other recognition sequences relevant to translation such as the Shine-Dalgarno sequence and internal ribosome entry sites.
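As an illustration, a scan for the strict 10-base consensus gccRccATGG can be written as a regular expression. Note that real Kozak contexts tolerate variation at the weakly conserved positions, so this pattern is a deliberate simplification for demonstration, not a validated scanner, and the example transcript is hypothetical:

```python
import re

# Strict Kozak consensus: gccRccATGG, where R is a purine (A or G).
KOZAK = re.compile(r"GCC[AG]CCATGG")

mrna = "AAAGCCACCATGGCGTTT"
match = KOZAK.search(mrna)
print(match.start() if match else None)  # 3
```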
See .
L
The chemical attachment of a highly selective substance, known as a label, tag, or marker, to a particular cell, protein, amino acid, or other molecule of interest, either naturally or artificially, in vivo or in vitro. Natural labelling is a primary mechanism by which biomolecules specifically identify and interact with other biomolecules. Labelling is also a common laboratory technique, where the label is typically a reactive derivative of a naturally fluorescent compound (e.g. green fluorescent protein), dye, enzyme, antibody, radioactive molecule, or any other substance that makes its target distinguishable in some way. The labelled targets are thereby rendered distinct from their unlabelled surroundings, allowing them to be detected, identified, quantified, or isolated for further study.
In DNA replication, the nascent strand for which DNA polymerase's direction of synthesis is away from the replication fork, which necessitates a complex and discontinuous process in contrast to the streamlined, continuous synthesis of the other nascent strand, known as the leading strand, which occurs simultaneously. Because DNA polymerase works only in the 5' to 3' direction, but the lagging strand's overall direction of chain elongation must ultimately be the opposite (i.e. 3' to 5', toward the replication fork), elongation must occur by an indirect mechanism in which a primase enzyme synthesizes short RNA primers complementary to the template DNA, and DNA polymerase then extends the primed segments into short chains of nucleotides known as Okazaki fragments. The RNA primers are then removed and replaced with DNA, and the Okazaki fragments are joined by DNA ligase.
A transcriptionally active, highly de-condensed morphology assumed by certain chromosomes during the diplotene stage of prophase I of meiosis in the developing oocytes of female insects, amphibians, birds, and some other animals. Lampbrush chromosomes are conspicuous under the microscope because the homologous chromosomes, still attached at chiasmata, are gigantically elongated into large loops of unpackaged chromatin extending laterally from a series of chromomeres. Large numbers of messenger RNAs and other transcripts are transcribed from the lateral loops, generating a rich pool of gene products to be used in the immature oocyte and after fertilization, with functions in both oogenesis and early embryonic development. Because they allow individual transcription units to be directly visualized, lampbrush chromosomes are useful models for studying chromosome organization and genome structure and for constructing high-resolution chromosome maps.
See .
See .
In DNA replication, the nascent strand for which both the direction of synthesis by DNA polymerase and the direction of overall chain elongation are toward the replication fork; i.e. both occur in the 5' to 3' direction, resulting in a single, continuous elongation process with few or no interruptions. By contrast, the other nascent strand, known as the lagging strand, is assembled in a discontinuous process involving the ligation of short Okazaki fragments synthesized in the opposite direction, away from the replication fork.
The boundary between the left end (by convention, the ) of an and the right () end of an adjacent in a .
In meiosis, the first of five substages of prophase I, following premeiotic interphase and preceding zygonema. During leptonema, the replicated chromosomes condense from diffuse chromatin into long, thin strands that are much more visible within the nucleus.
Any mutation that results in the premature death of the organism carrying it. Recessive lethal mutations are fatal only to homozygotes, whereas dominant lethals are fatal even in heterozygotes.
A common structural motif in DNA-binding proteins and some other types of proteins, approximately 35 amino acids in length, characterized chiefly by the recurrence of the amino acid leucine every seven residues. When modeled in an idealized alpha-helical conformation, the leucine residues are positioned in such a way that they can interdigitate with the same or similar motifs in an alpha helix belonging to another similar polypeptide, facilitating dimerization and the formation of a complex resembling a zipper.
In biochemistry, any molecule that binds to or interacts with a binding site on a protein or other biomolecule, usually reversibly via intermolecular forces; or any substance that forms a complex with a biomolecule as part of a biological process. The binding of specific ligands to DNA or proteins is important in many biochemical processes; for example, protein–ligand binding may result in the protein undergoing a conformational change which alters its function or affinity for other molecules.
A class of enzymes which catalyze the synthesis of large molecules such as nucleic acids by forming one or more chemical bonds between them, typically carbon–oxygen, carbon–sulfur, carbon–nitrogen, or phosphodiester bonds via condensation reactions. An example is DNA ligase, which catalyzes the formation of phosphodiester bonds between adjacent nucleotides on the same strand of a DNA molecule, a reaction known as ligation.
The joining of consecutive nucleotides in the same strand of a nucleic acid molecule via the formation of a phosphodiester bond between the 3'-hydroxyl group of one nucleotide and the 5'-phosphate group of an adjacent nucleotide, a condensation reaction catalyzed by enzymes known as ligases. This reaction occurs in fundamentally the same way in all varieties of nucleic acid anabolism and catabolism, natural or artificial, whether the addition of individual nucleotides to a growing strand (as in DNA replication and transcription), or the repair of nicks and breaks in previously intact molecules, or the joining of separate nucleic acid fragments into a single molecule (as in DNA repair, recombination, retroviral integration, and all other forms of genetic recombination, as well as artificial molecular cloning techniques). Ligation is the opposite of the catabolic reaction wherein phosphodiester bonds are cleaved by nucleases. It also should not be confused with the non-covalent hybridization that can occur between complementary strands; ligation refers to the synthesis of the covalent sugar-phosphate backbone which defines an individual chain of nucleotides.
The tendency of DNA sequences which are physically near to each other on the same chromosome to be inherited together during meiosis. Because the physical distance between them is relatively small, the chance that any two nearby parts of a DNA sequence (often genes or genetic markers) will be separated on to different chromatids during chromosomal crossover is statistically very low; such loci are then said to be more linked than loci that are farther apart. Loci that exist on entirely different chromosomes are said to be perfectly unlinked. The standard unit for measuring genetic linkage is the centimorgan (cM).
1. A short, synthetic DNA duplex containing the recognition site for a particular restriction enzyme. In genetic engineering, linkers are often deliberately included in recombinant DNA molecules in order to make them easily modifiable by permitting cleavage and ligation of foreign sequences at precise locations. A segment of an engineered vector containing many such restriction sites is sometimes called a polylinker.
2. A section of chromosomal DNA connecting adjacent nucleosomes by binding to the histone H1.
The number of times that the two strands of a circular double-stranded DNA molecule cross each other, equivalent to the twist (which measures the torsion of the double helix) plus the writhe (which measures the degree of supercoiling). The linking number of a closed molecule cannot be changed without breaking and rejoining the strands. DNA molecules which are identical except for their linking numbers are known as topological isomers.
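This relationship between linking number, twist, and writhe is a standard identity (the Călugăreanu–White–Fuller theorem), usually written as:

```latex
\mathrm{Lk} = \mathrm{Tw} + \mathrm{Wr}
```

Here Lk is a topological invariant of the closed molecule, while the twist Tw and writhe Wr can interconvert (for example, under supercoiling) so long as their sum remains unchanged.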
Any of a heterogeneous class of organic compounds, including glycerides (fats), waxes, sterols, and some vitamins, united only by their amphipathic or hydrophobic nature and consequently their very low solubility in water. Some lipids such as phospholipids tend to form lamellar structures or micelles in aqueous environments, where they serve as the primary constituents of biological membranes. Others such as triglycerides can be metabolized for energy, have important functions in energy storage, or serve as signaling molecules. Colloquially, the term "lipids" is sometimes used as a synonym for fats, though fats are more correctly considered a subclass of lipids.
A lamellar structure composed of numerous amphipathic lipid molecules packed together in two back-to-back sheets or layers, with their hydrophobic "tails" directed inward and their hydrophilic "heads" exposed on the outer surface. This is the basic structural motif for all biological membranes, including the cell membrane surrounding all cells as well as the membranes surrounding organelles and vesicles. Though bilayers are sometimes colloquially described as phospholipid bilayers, phospholipids are just one of several classes of lipids which form bilayers; most membranes are actually a mixture of phospholipids, glycolipids, and sterols, interspersed and studded with various other molecules such as membrane proteins.
See .
See .
A specific, fixed position on a chromosome where a particular gene or genetic marker resides.
In condensed chromosomes where the positioning of the centromere creates two segments or "arms" of unequal length, the longer of the two arms of a chromatid. Contrast short arm.
Any of a large family of non-LTR retrotransposons which together comprise one of the most widespread classes of transposable elements in eukaryotic genomes. Each LINE is on average about 7,000 base pairs in length.
A class of non-coding RNA consisting of all RNA transcripts of more than 200 nucleotides in length that are not translated into protein. This limit distinguishes lncRNA from the numerous smaller non-coding RNAs such as microRNAs.
See .
The disruption and decomposition of the membrane surrounding a cell, or more generally of any membrane-bound compartment or organelle, especially by viral, enzymatic, or osmotic mechanisms, or by other chemical or mechanical processes which compromise the membrane's integrity and thereby cause the unobstructed interchange of the contents of intracellular and extracellular spaces. Lysis generally implies the complete and irreversible loss of intracellular organization as a result of the release of the cell's internal components and the dilution of the cytoplasm, and therefore the death of the cell. Such a cell is said to be lysed, and a fluid containing the contents of lysed cells (usually including proteins, nucleic acids, and many other organic molecules) is called a lysate. Lysis may occur both naturally and artificially, and is a normal part of the cellular life cycle.
See also
Introduction to genetics
Outline of genetics
Outline of cell biology
Glossary of biology
Glossary of chemistry
Glossary of evolutionary biology
References
Further reading
External links
National Human Genome Research Institute (NHGRI) Talking Glossary of Genomic and Genetic Terms
Interactive, labeled diagram of an animal cell from SwissBioPics
Glossary
Genetics
Molecular-biology-related lists
Wikipedia glossaries using description lists | Glossary of cellular and molecular biology (0–L) | [
"Chemistry",
"Biology"
] | 30,052 | [
"Glossaries of biology",
"Molecular-biology-related lists",
"Genetics",
"Molecular biology"
] |
72,251,171 | https://en.wikipedia.org/wiki/Curtis%20House%2C%20Rickmansworth | The Curtis House, Rickmansworth was the first solar house constructed in the United Kingdom. The house, in Rickmansworth, England, was built in 1956 by British architect Edward Curtis, for his own occupation.
References
External links
"Dream House" - British Pathe film on YouTube showing the house
Houses in Hertfordshire
Solar design | Curtis House, Rickmansworth | [
"Engineering"
] | 67 | [
"Solar design",
"Energy engineering"
] |
72,252,320 | https://en.wikipedia.org/wiki/List%20of%20lilioid%20families | The lilioid monocots are a group of 33 interrelated families of flowering plants. They generally have tepals (indistinguishable petals and sepals) similar to those on the true lilies (Lilium). Like other monocots they usually have a single embryonic leaf (cotyledon) in their seeds, scattered vascular systems, leaves with parallel veins, flower parts in multiples of three, and roots that can develop in more than one place along the stems.
The lilioids can be subdivided into five orders: Asparagales, Dioscoreales, Liliales, Pandanales and Petrosaviales. Asparagales is roughly tied with Poales for the most diverse monocot order and includes Orchidaceae, the largest flowering plant family, with more than 26,000 species. Plants in Dioscoreales, such as yams, usually have inflorescences with glandular hairs. In Liliales, plants often have elliptical leaves with up to seven primary veins, inflorescences at the tips of stems, and nectar-producing glands on the tepals. Pandanales includes fragile, non-herbaceous and drought-tolerant species, with leaves often arranged in three vertical rows. Petrosaviales includes species with spirally arranged leaves, nectar-producing glands, and racemes (unbranched inflorescences with short flower stalks).
Glossary
From the glossary of botanical terms:
annual: a plant species that completes its life cycle within a single year or growing season
basal: attached close to the base (of a plant or an evolutionary tree diagram)
climber: a vine that leans on, twines around or clings to other plants for vertical support
glandular hair: a hair tipped with a secretory structure
herbaceous: not woody; usually green and soft in texture
perennial: not an annual or biennial
woody: hard and lignified; not herbaceous
The APG IV system is the fourth in a series of plant taxonomies from the Angiosperm Phylogeny Group.
Families
See also
List of plant family names with etymologies
Notes
Citations
References
See the Creative Commons license.
See their terms-of-use license.
Systematic
Lilioid
Lilioid families
lilioid families | List of lilioid families | [
"Biology"
] | 473 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
72,252,379 | https://en.wikipedia.org/wiki/Blind%20polytope | In geometry, a Blind polytope is a convex polytope composed of regular polytope facets.
The category was named after the German couple Gerd and Roswitha Blind, who described them in a series of papers beginning in 1979.
It generalizes the set of semiregular polyhedra and Johnson solids to higher dimensions.
Uniform cases
The set of convex uniform 4-polytopes (also called semiregular 4-polytopes) is completely known, with nearly all grouped by their Wythoff constructions, sharing symmetries of the convex regular 4-polytopes and prismatic forms.
The sets of convex uniform 5-polytopes, uniform 6-polytopes, uniform 7-polytopes, etc. are largely enumerated as Wythoff constructions, but are not known to be complete.
Other cases
Pyramidal forms: (4D)
(Tetrahedral pyramid, ( ) ∨ {3,3}, a tetrahedron base, and 4 tetrahedral sides; a lower-symmetry name for the regular 5-cell.)
Octahedral pyramid, ( ) ∨ {3,4}, an octahedron base, and 8 tetrahedra sides meeting at an apex.
Icosahedral pyramid, ( ) ∨ {3,5}, an icosahedron base, and 20 tetrahedra sides.
Bipyramid forms: (4D)
Tetrahedral bipyramid, { } + {3,3}, a tetrahedron center, and 8 tetrahedral cells on two sides.
(Octahedral bipyramid, { } + {3,4}, an octahedron center, and 8 tetrahedral cells on two sides; a lower-symmetry name for the regular 16-cell.)
Icosahedral bipyramid, { } + {3,5}, an icosahedron center, and 40 tetrahedral cells on two sides.
Augmented forms: (4D)
Rectified 5-cell augmented with one octahedral pyramid, adding one vertex for 11 total. It retains 5 tetrahedral cells, reduced to 4 octahedral cells and adds 8 new tetrahedral cells.
Convex Regular-Faced Polytopes
Blind polytopes are a subset of convex regular-faced polytopes (CRF).
This much larger set allows CRF 4-polytopes to have Johnson solids as cells, as well as regular and semiregular polyhedral cells.
For example, a cubic bipyramid has 12 square pyramid cells.
References
External links
Blind polytope
Convex regular-faced polytopes
Polytopes | Blind polytope | [
"Mathematics"
] | 553 | [
"Geometry",
"Geometry stubs"
] |
72,252,501 | https://en.wikipedia.org/wiki/Data%20universalism | Data universalism is an epistemological framework that assumes a single universal narrative of any dataset without any consideration of geographical borders and social contexts. This assumption is enabled by a generalized approach in data collection. Data are used in universal endeavours across social, political, and physical sciences unrestricted from their local source and people. Data are gathered and transformed into a mutual understanding of knowing the world which forms theories of knowledge. One of many fields of critical data studies explores the geologies and histories of data by investigating data assemblages and tracing data lineage which unfolds data histories and geographies (p. 35). This reveals intersections of data politics, praxes, and powers at play which challenges data universalism as a misguided concept.
Theoretical framework
Data are mainly sampled in rich Western countries which are considered the leaders and voices of technological developments while ignoring cultures, communities, and geographies despite its application being widespread. As the data lifecycle grows, processed small data that are grounded within big data are compiled and formed from heterogeneous sources extracted from mainstream places, forming what has come to be understood as knowledge.
As of 2022, research has not shown the origin behind universalism as a practice due to a lack of controlled data. According to cultural psychologists, democracy and universalism have a positive correlation, but there are no studies that show how universalism is shaped by people's experiences and environments (p. 1). A push toward datafication has been spurred by democratically advanced Western voices and diffused across fragile democracies in the Global South with no consideration of the geopolitical context and power dynamics of the data landscape in countries outside the West. As mentioned by Cappelen et al., little is known about the epistemology of universalism, although it is argued that a lack of representative data is problematic for broad global analyses (p. 1).
Criticism
Data universalism has been critiqued by many scholars concerned about data privacy and data justice, claiming that it conceals cultural specifications and diversity. Datafication ought to be viewed through the lens of epistemic diversity and justice to achieve data obedience. So, people are encouraged to critically examine the impacts of datafication by reimagining people and places.
De-westernization
Milan and Treré have contended that datafication is a privileged practice, carried out by dominant Western democracies, that fails to see the richness of worldviews and meanings of the South. As promoted by Global South and Indigenous scholars, data universalism mistakenly assumes data to be universal when they ought to be treated differently (p. 323). If data are extracted without an ethical foundation grounded in attitude and method, the data become pervasive and incompetent. Moreover, a shift from a technocentric perspective that emphasizes the human agency behind data to a data method that stimulates discussion about the repercussions of datafication requires relocating that agency. This reinforces the notions of ethics and responsibilities around the data (p. 327). A push for bottom-up data practices shifts the focus of datafication to data justice, encouraging citizens of the South to participate in political agency and to resist an oppressive and unequal datafication process. Negligence in abstracting knowledge from others through diversifying social and historical contexts will result in biased sampling techniques and methods in data generation. This causes skewed data to be generated, leading to unequal datafication.
Also, an aggregate technique used for data processing misrepresents data by treating an aggregate value as representing an aggregated individual. This introduces bias by making the subject appear as a collective, reducing variance and limiting the space of contributions (p. 192). Using an aggregate technique submits to universal normativity, which positions what is considered universally right as a practice that takes no responsibility for the context of a specific situation or for the interpretation of norms, which is itself subject to contextual readings. While humans operate in unique cases where information can be incomplete, agency empowers them to assess the situation and make radical decisions in complex situations where information is obscure. Thus, assuming universal normativity will not only incapacitate one's ethical validity when making choices but may lead to questionable decisions (p. 284).
Global South
Historical processes of global capitalism and colonialism have majorly impacted the supply of knowledge from Western modernity and subordinated knowledge from the Global South. Colonialism, in this perspective, is understood as data colonialism which pressures but also exploits datafication on communities. Global South and Indigenous scholars claim that the decolonial lens which transcends to a Eurocentric perspective adds value to critical data studies by questioning the geopolitics of knowledge, the depth of knowledge regeneration, and the power constructions of past injustices. Notions of data politics and data justice are more interested in giving a voice to the underprivileged and acquiring decolonial practices rather than issues concerning the blueprints of the political and social contexts of liberal democracies and social orders. Still, achieving decolonial critical data studies comes with a unique set of challenges that confronts the knowledge that has been produced and the knowings of the world, which is at the center of epistemology. The wave of the big data revolution feeds on insights into the production of data, how knowledge is produced, and how it is conducted and governed while using new epistemologies to make sense of the world.
See also
Big data
Colonialism
Critical data studies
Data
Global South
References
universalism
Social epistemology
universalism
Big data
Colonialism | Data universalism | [
"Technology"
] | 1,145 | [
"Social epistemology",
"Science and technology studies",
"Information technology",
"Data",
"Big data"
] |
72,252,525 | https://en.wikipedia.org/wiki/S2S%20%28mathematics%29 | In mathematics, S2S is the monadic second order theory with two successors. It is one of the most expressive natural decidable theories known, with many decidable theories interpretable in S2S. Its decidability was proved by Rabin in 1969.
Basic properties
The first order objects of S2S are finite binary strings. The second order objects are arbitrary sets (or unary predicates) of finite binary strings. S2S has functions s→s0 and s→s1 on strings, and predicate s∈S (equivalently, S(s)) meaning string s belongs to set S.
Some properties and conventions:
By default, lowercase letters refer to first order objects, and uppercase to second order objects.
The inclusion of sets makes S2S second order, with "monadic" indicating absence of k-ary predicate variables for k>1.
Concatenation of strings s and t is denoted by st, and is not generally available in S2S, not even s→0s. The prefix relation between strings is definable.
Equality is primitive, or it can be defined as s = t ⇔ ∀S (S(s) ⇔ S(t)) and S = T ⇔ ∀s (S(s) ⇔ T(s)).
In place of strings, one can use (for example) natural numbers with n→2n+1 and n→2n+2 but no other operations.
The set of all binary strings is denoted by {0,1}*, using Kleene star.
Arbitrary subsets of {0,1}* are sometimes identified with trees, specifically as a {0,1}-labeled tree {0,1}*; {0,1}* forms a complete infinite binary tree.
For formula complexity, the prefix relation on strings is typically treated as first order. Without it, not all formulas would be equivalent to Δ12 formulas.
For properties expressible in S2S (viewing the set of all binary strings as a tree), for each node, only O(1) bits can be communicated between the left subtree and the right subtree and the rest (see communication complexity).
For a fixed k, a function from strings to k (i.e. natural numbers below k) can be encoded by a single set. Moreover, s,t ⇒ s01t where t doubles every character of t is injective, and s ⇒ {s01t: t∈{0,1}*} is S2S definable. By contrast, by a communication complexity argument, in S1S (below) a pair of sets is not encodable by a single set.
Weakenings of S2S: Weak S2S (WS2S) requires all sets to be finite (note that finiteness is expressible in S2S using Kőnig's lemma). S1S can be obtained by requiring that '1' does not appear in strings, and WS1S also requires finiteness. Even WS1S can interpret Presburger arithmetic with a predicate for powers of 2, as sets can be used to represent unbounded binary numbers with definable addition.
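The injectivity of the s01t̄ encoding above is easy to check by brute force. The sketch below (Python; the helper names are mine, not from the text) encodes a pair (s, t) as s + "01" + t-with-every-character-doubled and verifies that no two short pairs collide.

```python
from itertools import product

def encode_pair(s, t):
    """Encode the pair (s, t) of binary strings as s + "01" + t with every
    character of t doubled (the injective map described in the text)."""
    return s + "01" + "".join(c + c for c in t)

def all_strings(max_len):
    """All binary strings of length at most max_len."""
    for k in range(max_len + 1):
        for bits in product("01", repeat=k):
            yield "".join(bits)

# Brute-force injectivity check over all pairs of strings of length <= 3.
codes = {}
for s in all_strings(3):
    for t in all_strings(3):
        w = encode_pair(s, t)
        assert w not in codes, (w, codes[w], (s, t))
        codes[w] = (s, t)
```

The doubling makes the separator unambiguous: inside a doubled string, any occurrence of "01" sits at an odd offset, after which the remaining suffix has odd length and so cannot itself be a doubled string.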
Decision complexity
S2S is decidable, and each of S2S, S1S, WS2S, WS1S has a nonelementary decision complexity corresponding to a linearly growing stack of exponentials. For the lower bound, it suffices to consider Σ11 WS1S sentences. A single second order quantifier can be used to propose an arithmetic (or other) computation, which can be verified using first order quantifiers if we can test which numbers are equal. For this, if we appropriately encode numbers 1..m, we can encode a number with binary representation i1i2...im as i1 1 i2 2 ... im m, preceded by a guard. By merging testing of guards and reusing variable names, the number of bits is linear in the number of exponentials. For the upper bound, using the decision procedure (below), sentences with k-fold quantifier alternation can decided in time corresponding to k+O(1)-fold exponentiation of the sentence length (with uniform constants).
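As a toy illustration of the nonelementary growth rate, the k-fold exponential bound can be computed directly (a sketch; the function name is mine, not standard notation):

```python
def exp_tower(k, n):
    """k-fold iterated exponential with base 2:
    exp_tower(0, n) = n and exp_tower(k, n) = 2 ** exp_tower(k - 1, n)."""
    for _ in range(k):
        n = 2 ** n
    return n
```

A sentence with k quantifier alternations leads to a bound of roughly exp_tower(k + O(1), n) in the sentence length n, which for growing k exceeds any fixed stack of exponentials.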
Axiomatization
WS2S can be axiomatized through certain basic properties plus induction schema.
S2S can be partially axiomatized by:
(1) ∃!s ∀t ( t0≠s ∧ t1≠s) (empty string, denoted by ε; ∃!s means "there is unique s")
(2) ∀s,t ∀i∈{0,1} ∀j∈{0,1} (si=tj ⇒ s=t ∧ i=j) (the use of i and j is an abbreviation; for i=j, 0 does not equal 1)
(3) ∀S (S(ε) ∧ ∀s (S(s) ⇒ S(s0) ∧ S(s1))⇒ ∀s S(s)) (induction)
(4) ∃S ∀s (S(s) ⇔ φ(s)) (S not free in φ)
(4) is the comprehension schema over formulas φ, which always holds for second order logic. As usual, if φ has free variables not shown, we take the universal closure of the axiom. If equality is primitive for predicates, one also adds extensionality S=T ⇔ ∀s (S(s) ⇔ T(s)). Since we have comprehension, induction can be a single statement rather than a schema.
The analogous axiomatization of S1S is complete. However, for S2S, completeness is open (as of 2021). While S1S has uniformization, there is no S2S definable (even allowing parameters) choice function that given a non-empty set S returns an element of S, and comprehension schemas are commonly augmented with various forms of the axiom of choice. However, (1)-(4) is complete when extended with a determinacy schema for certain parity games.
S2S can also be axiomatized by Π13 sentences (using the prefix relation on strings as a primitive). However, it is not finitely axiomatizable, nor can it be axiomatized by Σ13 sentences even if we add induction schema and a finite set of other sentences (this follows from its connection to Π12-CA0).
Theories related to S2S
For every finite k, the monadic second order (MSO) theory of countable graphs with treewidth ≤k (and a corresponding tree decomposition) is interpretable in S2S (see Courcelle's theorem). For example, the MSO theory of trees (as graphs) or of series-parallel graphs is decidable. Here (i.e. for bounded tree width), we can also interpret the finiteness quantifier for a set of vertices (or edges), and also count vertices (or edges) in a set modulo a fixed integer. Allowing uncountable graphs does not change the theory. Also, for comparison, S1S can interpret connected graphs of bounded pathwidth.
By contrast, for every set of graphs of unbounded treewidth, its existential (i.e. Σ11) MSO theory is undecidable if we allow predicates on both vertices and edges. Thus, in a sense, decidability of S2S is the best possible. Graphs with unbounded treewidth have large grid minors, which can be used to simulate a Turing machine.
By reduction to S2S, the MSO theory of countable linear orders is decidable, as is the MSO theory of countable trees with their Kleene–Brouwer orders. However, the MSO theory of (ℝ, <) is undecidable. The MSO theory of ordinals <ω2 is decidable; decidability for ω2 is independent of ZFC (assuming Con(ZFC + weakly compact cardinal)). Also, an ordinal is definable using monadic second order logic on ordinals iff it can be obtained from definable regular cardinals by ordinal addition and multiplication.
S2S is useful for decidability of certain modal logics, with Kripke semantics naturally leading to trees.
S2S+U (or just S1S+U) is undecidable if U is the unbounding quantifier — UX Φ(X) iff Φ(X) holds for some arbitrarily large finite X. However, WS2S+U, even with quantification over infinite paths, is decidable, even with S2S subformulas that do not contain U.
Formula complexity
A set of binary strings is definable in S2S iff it is regular (i.e. forms a regular language). In S1S, a (unary) predicate on sets is (parameter-free) definable iff it is an ω-regular language. For S2S, for formulas that use their free variables only on strings not containing a 1, the expressiveness is the same as for S1S.
For every S2S formula φ(S1,...,Sk), (with k free variables) and finite tree of binary strings T, φ(S1∩T,...,Sk∩T) can be computed in time linear in |T| (see Courcelle's theorem), but as noted above, the overhead can be iterated exponential in the formula size (more precisely, the time is ).
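To illustrate the linear-time evaluation, the sketch below (Python; the tree encoding and names are assumptions made here for concreteness) runs a deterministic bottom-up tree automaton over a finite {0,1}-labeled binary tree. The example automaton decides whether the number of 1-labeled nodes is even, a (W)S2S-expressible property.

```python
def run_tree_automaton(leaf_state, delta, tree):
    """Deterministic bottom-up run: one table lookup per node, so the
    evaluation is linear in the size of the finite tree."""
    if tree is None:  # empty subtree
        return leaf_state
    label, left, right = tree
    return delta[(label,
                  run_tree_automaton(leaf_state, delta, left),
                  run_tree_automaton(leaf_state, delta, right))]

# Example automaton: states 0/1 track the parity of the 1-labeled nodes.
parity_delta = {(lab, l, r): (lab + l + r) % 2
                for lab in (0, 1) for l in (0, 1) for r in (0, 1)}

# A tree with three 1-labeled nodes, encoded as (label, left, right).
example_tree = (1, (1, None, None), (0, None, (1, None, None)))
```

The final state at the root determines acceptance, so the whole check costs one pass over the tree.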
For S1S, every formula is equivalent to a Δ11 formula, and to a boolean combination of Π02 arithmetic formulas. Moreover, every S1S formula is equivalent to acceptance by a corresponding ω-automaton of the parameters of the formula. The automaton can be a deterministic parity automaton: A parity automaton has an integer priority for each state, and accepts iff the highest priority seen infinitely often is odd (alternatively, even).
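On ultimately periodic words the parity condition is directly computable, which makes the acceptance criterion concrete. The sketch below (Python; the names and the accept-on-even convention are choices made here) runs a deterministic parity automaton on the word prefix·loopω.

```python
def parity_accepts(delta, priority, start, prefix, loop):
    """Run a deterministic parity automaton on the ultimately periodic
    omega-word prefix . loop^omega (loop must be nonempty); accept iff the
    highest priority among states visited infinitely often is even."""
    q = start
    for a in prefix:
        q = delta[(q, a)]
    # Read whole copies of `loop` until a state repeats at a loop boundary;
    # from then on the run is periodic.
    seen, boundary = {}, []
    while q not in seen:
        seen[q] = len(boundary)
        boundary.append(q)
        for a in loop:
            q = delta[(q, a)]
    # States visited infinitely often: everything passed through while
    # reading `loop` from the recurring boundary states.
    inf = set()
    for s in boundary[seen[q]:]:
        r = s
        for a in loop:
            r = delta[(r, a)]
            inf.add(r)
    return max(priority[s] for s in inf) % 2 == 0

# Example: a two-state automaton whose language is "infinitely many a's".
ex_delta = {('A', 'a'): 'A', ('A', 'b'): 'B',
            ('B', 'a'): 'A', ('B', 'b'): 'B'}
ex_priority = {'A': 2, 'B': 1}
```

With the opposite convention (accept on odd), the final comparison simply flips.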
For S2S, using tree automata (below), every formula is equivalent to a Δ12 formula. Moreover, every S2S formula is equivalent to a formula with just four quantifiers, ∃S∀T∃s∀t ... (assuming that our formalization has both the prefix relation and the successor functions). For S1S, three quantifiers (∃S∀s∃t) suffice, and for WS2S and WS1S, two quantifiers (∃S∀t) suffice; the prefix relation is not needed here for WS2S and WS1S.
However, with free second order variables, not every S2S formula can be expressed in second order arithmetic through just Π11 transfinite recursion (see reverse mathematics). RCA0 + (schema) {τ: τ is a true S2S sentence} is equivalent to (schema) {τ: τ is a Π13 sentence provable in Π12-CA0}. Over a base theory, the schemas are equivalent to (schema over k) ∀S⊆ω ∃α1<...<αk Lα1(S) ≺Σ1 ... ≺Σ1 Lαk(S) where L is the constructible universe (see also large countable ordinal). Due to limited induction, Π12-CA0 does not prove that all true (under the standard decision procedure) Π13 S2S statements are actually true, even though each such sentence is provable in Π12-CA0.
Moreover, given sets of binary strings S and T, the following are equivalent:
(1) T is S2S definable from some set of binary strings polynomial time computable from S.
(2) T can be computed from the set of winning positions for some game whose payoff is a finite boolean combination of Π02(S) sets.
(3) T can be defined from S in arithmetic μ-calculus (arithmetic formulas + least fixed-point logic)
(4) T is in the least β-model (i.e. an ω-model whose set-theoretic counterpart is transitive) containing S and satisfying all Π13 consequences of Π12-CA0.
Models of S1S and S2S
In addition to the standard model (which is the unique MSO model for S1S and S2S), there are other models for S1S and S2S, which use some rather than all subsets of the domain (see Henkin semantics).
For every S⊆ω, sets recursive in S form an elementary submodel of the standard S1S model, and same for every non-empty collection of subsets of ω closed under Turing join and Turing reducibility.
This follows from relative recursiveness of S1S definable sets plus uniformization:
- φ(s) (as a function of s) can be computed from the parameters of φ and the values of φ(s) for a finite set of s (with its size bounded by the number of states in a deterministic automaton for φ).
- A witness for ∃S φ(S) can be obtained by choosing k and a finite fragment of S, and repeatedly extending S such that the highest priority during each extension is k and that the extension can be completed into S satisfying φ without hitting priorities above k (these are permitted only for the initial S). Also, by using lexicographically least shortest choices, there is an S1S formula φ' such that φ'⇒φ and ∃S φ(S) ⇔∃!S φ'(S) (i.e. uniformization; φ may have free variables not shown; φ' depends only on the formula φ).
The minimal model of S2S consists of all regular languages on binary strings. It is an elementary submodel of the standard model, so if an S2S parameter-free definable set of trees is non-empty, then it includes a regular tree. A regular language can also be treated as a regular {0,1}-labeled complete infinite binary tree (identified with predicates on strings). A labeled tree is regular if it can be obtained by unrolling a vertex-labeled finite directed graph with an initial vertex; a (directed) cycle in the graph reachable from the initial vertex gives an infinite tree. With this interpretation and encoding of regular trees, every true S2S sentence may already be provable in elementary function arithmetic. It is non-regular trees that may require nonpredicative comprehension for determinacy (below). There are nonregular (i.e. containing nonregular languages) models of S1S (and presumably S2S) (both with and without standard first order part) with a computable satisfaction relation. However, the set of recursive sets of strings does not form a model of S2S due to failure of comprehension and determinacy.
Decidability of S2S
The proof of decidability is by showing that every formula is equivalent to acceptance by a nondeterministic tree automaton (see tree automaton and infinite-tree automaton). An infinite tree automaton starts at the root and moves up the tree, and accepts iff every tree branch accepts. A nondeterministic tree automaton accepts iff player 1 has a winning strategy, where player 1 chooses an allowed (for the current state and input) pair of new states (p0,p1), while player 2 chooses the branch, with the transition to p0 if 0 is chosen and p1 otherwise. For a co-nondeterministic automaton, all choices are by player 2, while for deterministic, (p0,p1) is fixed by the state and input; and for a game automaton, the two players play a finite game to set the branch and the state. Acceptance on a branch is based on states seen infinitely often on the branch; parity automata are sufficiently general here.
For converting the formulas to automata, the base case is easy, and nondeterminism gives closure under existential quantifiers, so we only need closure under complementation. Using positional determinacy of parity games (which is where we need impredicative comprehension), non-existence of player 1 winning strategy gives a player 2 winning strategy S, with a co-nondeterministic tree automaton verifying its soundness. The automaton can then be made deterministic (which is where we get an exponential increase in the number of states), and thus existence of S corresponds to acceptance by a non-deterministic automaton.
Determinacy: Provably in ZFC, Borel games are determined, and the determinacy proof for boolean combinations of Π02 formulas (with arbitrary real parameters) also gives a strategy here that depends only on the current state and the position in the tree. The proof is by induction on the number of priorities. Assume that there are k priorities, with the highest priority being k, and that k has the right parity for player 2. For each position (tree position + state) assign the least ordinal α (if any) such that player 1 has a winning strategy with all entered (after one or more steps) priority k positions (if any) having labels <α. Player 1 can win if the initial position is labeled: Each time a priority k state is reached, the ordinal is decreased, and moreover in between the decreases, player 1 can use a strategy for k-1 priorities. Player 2 can win if the position is unlabeled: By the determinacy for k-1 priorities, player 2 has a strategy that wins or enters an unlabeled priority k state, in which case player 2 can again use that strategy. To make the strategy positional (by induction on k), when playing the auxiliary game, if two chosen positional strategies lead to the same position, continue with the strategy with the lower α, or for the same α (or for player 2) lower initial position (so we can switch a strategy finitely many times).
Automata determinization: For determinization of co-nondeterministic tree automata, it suffices to consider ω-automata, treating branch choice as input, determinizing the automaton, and using it for the deterministic tree automaton. Note that this does not work for nondeterministic tree automata, as the determinization for going left (i.e. s→s0) can depend on the contents of the right branch; by contrast to nondeterminism, deterministic tree automata cannot even accept precisely the nonempty sets. To determinize a nondeterministic ω-automaton M (for a co-nondeterministic one, take the complement, noting that deterministic parity automata are closed under complements), we can use a Safra tree, with each node storing a set of possible states of M, and node creation and deletion based on reaching high priority states.
Decidability of acceptance: Acceptance by a nondeterministic parity automaton of the empty tree corresponds to a parity game on a finite graph G. Using the above positional (also called memoryless) determinacy, this can be simulated by a finite game that ends when we reach a loop, with the winning condition based on the highest priority state in the loop. A clever optimization gives a quasipolynomial time algorithm, which is polynomial time when the number of priorities is small enough (which occurs commonly in practice).
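The game-theoretic step can be made concrete. The sketch below solves a finite parity game with Zielonka's classical recursive algorithm (not the quasipolynomial method mentioned above; exponential in the worst case), under the illustrative encoding of a game graph as node sets and dictionaries. Player 0 wins a play iff the highest priority occurring infinitely often is even, and the computed winning sets admit positional strategies, matching the determinacy result used in the decidability proof:

```python
def attractor(nodes, edges, owner, target, player):
    """Subset of `nodes` from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succ = [w for w in edges[v] if w in nodes]
            if owner[v] == player:
                pull = any(w in attr for w in succ)       # one good move suffices
            else:
                pull = bool(succ) and all(w in attr for w in succ)  # all moves trapped
            if pull:
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, edges, owner, priority):
    """Winning sets (W0, W1) of a finite parity game; player 0 wins a play
    iff the highest priority occurring infinitely often is even."""
    if not nodes:
        return set(), set()
    p = max(priority[v] for v in nodes)
    player = p % 2                        # the player favoured by top priority p
    top = {v for v in nodes if priority[v] == p}
    a = attractor(nodes, edges, owner, top, player)
    w = list(zielonka(nodes - a, edges, owner, priority))
    if not w[1 - player]:
        w[player] = set(nodes)            # `player` wins from every position
        return w[0], w[1]
    b = attractor(nodes, edges, owner, w[1 - player], 1 - player)
    w2 = list(zielonka(nodes - b, edges, owner, priority))
    w2[1 - player] |= b
    return w2[0], w2[1]
```

For example, in a two-node game where each node has only a self-loop, each player wins exactly at the node whose priority has their parity.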
Theory of trees: For decidability of MSO logic on trees (i.e. graphs that are trees), even with finiteness and modular counting quantifiers for first order objects, we can embed countable trees into the complete binary tree and use the decidability of S2S. For example, for a node s, we can represent its children by s1, s01, s001, and so on. For uncountable trees, we can use the Shelah-Stup theorem (below). We can also add a predicate for a set of first order objects having cardinality ω1, a predicate for cardinality ω2, and so on for infinite regular cardinals. Graphs of bounded tree width are interpretable using trees, and without predicates over edges this also applies to graphs of bounded clique width.
Combining S2S with other decidable theories
Tree extensions of monadic theories: By Shelah-Stup theorem, if a monadic relational model M is decidable, then so is its tree counterpart. For example, (modulo choice of formalization) S2S is the tree counterpart of {0,1}. In the tree counterpart, the first order objects are finite sequences of elements of M ordered by extension, and an M-relation Pi is mapped to Pi'(vd1,...,vdk) ⇔ Pi(d1,...,dk) with Pi' false otherwise (dj∈M, and v is a (possibly empty) sequence of elements of M). The proof is similar to the S2S decidability proof. At each step, a (nondeterministic) automaton gets a tuple of M objects (possibly second order) as input, and an M formula determines which state transitions are permitted. Player 1 (as above) chooses a mapping child⇒state that is permitted by the formula (given the current state), and player 2 chooses the child (of the node) to continue. To witness rejection by a non-deterministic automaton, for each (node, state) pick a set of (child, state) pairs such that for every choice, at least one of the pairs is hit, and such that all the resulting paths lead to rejection.
Combining a monadic theory with a first order theory: The Feferman–Vaught theorem extends/applies as follows. If M is an MSO model and N is a first order model, then M remains decidable relative to a (Theory(M), Theory(N)) oracle even if M is augmented with all functions M→N, where M is identified with its first order objects, and for each s∈M we use a disjoint copy of N, with the language modified accordingly. For example, if N is (,0,+,⋅), we can state ∀(function f) ∀s ∃r∈Ns f(s) +Ns r = 0Ns. If M is S2S (or more generally, the tree counterpart of some monadic model), the automata can now use N-formulas, and thereby convert f:M→Nk into a tuple of M sets. Disjointness is necessary, as otherwise for every infinite N with equality, the extended S2S or just WS1S is undecidable. Also, for a (possibly incomplete) theory T, the theory TM of M-products of T is decidable relative to a (Theory(M), T) oracle, where a model of TM uses an arbitrary disjoint model Ns of T for each s∈M (as above, M is an MSO model; Theory(Ns) may depend on s). The proof is by induction on formula complexity. Let vs be the list of free Ns variables, including f(s) if function f is free. By induction, one shows that vs is only used through a finite set of N-formulas with |vs| free variables. Thus, we can quantify over all possible outcomes by using N (or T) to answer what is possible, and given a list of possibilities (or constraints), formulate a corresponding sentence in M.
Coding into extensions of S2S: Every decidable predicate on strings can be encoded (with linear time encoding and decoding) for decidability of S2S (even with the extensions above) together with the encoded predicate. Proof: Given a nondeterministic infinite tree automaton, we can partition the set of finite binary labeled trees (having labels over which the automaton can operate) into finitely many classes such that if a complete infinite binary tree can be composed of same-class trees, acceptance depends only on the class and the initial state (i.e. state the automaton enters the tree). (Note a rough similarity with the pumping lemma.) For example (for a parity automaton), assign trees to the same class if they have the same predicate that given initial_state and set Q of (state, highest_priority_reached) pairs returns whether player 1 (i.e. nondeterminism) can simultaneously force all branches to correspond to elements of Q. Now, for each k, pick a finite set of trees (suitable for coding) that belong to the same class for automata 1-k, with the choice of class consistent across k. To encode a predicate, encode some bits using k=1, then more bits using k=2, and so on.
References
Mathematical logic
Computability theory
Articles containing proofs | S2S (mathematics) | [
"Mathematics"
] | 5,391 | [
"Computability theory",
"Articles containing proofs",
"Mathematical logic"
] |
72,252,618 | https://en.wikipedia.org/wiki/Tetrahedral%20bipyramid | In 4-dimensional geometry, the tetrahedral bipyramid is the direct sum of a tetrahedron and a segment, {3,3} + { }. Each face of a central tetrahedron is attached with two tetrahedra, creating 8 tetrahedral cells, 16 triangular faces, 14 edges, and 6 vertices. A tetrahedral bipyramid can be seen as two tetrahedral pyramids augmented together at their base.
It is the dual of a tetrahedral prism, so it can also be given a Coxeter–Dynkin diagram, and both have Coxeter notation symmetry [2,3,3], order 48.
Being convex with all regular cells (tetrahedra) means that it is a Blind polytope.
This bipyramid exists as the cells of the dual of the uniform rectified 5-simplex, and rectified 5-cube or the dual of any uniform 5-polytope with a tetrahedral prism vertex figure. And, as well, it exists as the cells of the dual to the rectified 24-cell honeycomb.
See also
Triangular bipyramid - A lower dimensional analogy of the tetrahedral bipyramid.
Octahedral bipyramid - A lower symmetry form of the 16-cell.
Cubic bipyramid
Dodecahedral bipyramid
Icosahedral bipyramid
References
4-polytopes | Tetrahedral bipyramid | [
"Mathematics"
] | 306 | [
"Geometry",
"Geometry stubs"
] |
72,252,654 | https://en.wikipedia.org/wiki/Icosahedral%20bipyramid | In 4-dimensional geometry, the icosahedral bipyramid is the direct sum of an icosahedron and a segment, Each face of a central icosahedron is attached with two tetrahedra, creating 40 tetrahedral cells, 80 triangular faces, 54 edges, and 14 vertices. An icosahedral bipyramid can be seen as two icosahedral pyramids augmented together at their bases.
It is the dual of a dodecahedral prism, and can also be given a Coxeter–Dynkin diagram. Both have Coxeter notation symmetry [2,3,5], order 240.
Having all regular cells (tetrahedra), it is a Blind polytope.
See also
Pentagonal bipyramid - A lower dimensional analogy
Tetrahedral bipyramid
Octahedral bipyramid - A lower symmetry form of the 16-cell.
Cubic bipyramid
Dodecahedral bipyramid
References
External links
Icosahedral tegum
4-polytopes | Icosahedral bipyramid | [
"Mathematics"
] | 219 | [
"Geometry",
"Geometry stubs"
] |
72,252,700 | https://en.wikipedia.org/wiki/Audrey%20Stuckes | Audrey Doris Jones ( ; 15 September 192326 September 2006) was an English material scientist and a senior lecturer in the department of applied acoustics at the University of Salford. She made important contributions to the theory of the Johnsen–Rahbek effect, the electrical and thermal conductivity of semiconductors, and the thermal resistance of building insulation. She was the only daughter of Frederick Stuckes, the general manager of a shipbroking firm, and was educated at Colston's Girls' School in Bristol. In 1942, she won a scholarship to study the Natural Science Tripos at Newnham College in the University of Cambridge.
Stuckes graduated in 1946 with a BA degree and joined Metropolitan-Vickers, Trafford, as a graduate trainee in the research department. From 1953, she published a series of papers on the thermal and electrical conductivity of semiconductors. She proved the existence of the Johnsen–Rahbek effect and proposed an electric circuit model to explain the data. In December 1962, she was elected a Fellow of the Institute of Physics, and in the following year, she left Metropolitan-Vickers to work as a lecturer in the department of pure and applied physics at the Royal College of Advanced Technology, Salford, that became the University of Salford in 1967.
In 1975, Stuckes, together with John Edwin Parrott, published a well-received textbook that reviewed the theory and experimental data on thermal conductivity in solids and semiconductors. By 1979, she was a senior lecturer in the department of applied acoustics at Salford, and in the following year, she was in charge of the department's heat laboratory. The laboratory was supported by grants from, amongst others, the Science and Engineering Research Council and the Building Research Establishment. These grants funded studies to investigate the efficiency of insulating materials. She led a team to obtain experimental data that would allow builders to calculate a standard level of insulation. In 1982, she presented a television programme for the Open University that demonstrated the usefulness of these simple models of thermal conduction. She retired from the university in September 1988 and died after a long illness at a nursing home in Urmston, Trafford.
Early life
Stuckes was born on 15 September 1923 at Bristol, England, the only daughter of Frederick Stephen Stuckes and Beatrice May. They had married on 8 January 1916 at St John the Baptist, Bedminster, Bristol. Her father worked for Bethell, Gwyn, and Company (Bethell Gwyn), at 11 Baldwin Street, Bristol, a shipbroking firm dealing mainly with Australasian trade. During World War I, he volunteered as a sergeant in the 1/4th Battalion of the City of Bristol Rifles, and on 27 June 1917, he was commissioned a second lieutenant in the Royal Warwickshire Regiment. However, later that year, he was severely injured in a shell barrage, and subsequently, he relinquished his commission on 22 June 1918.
After the war, Stuckes' father returned to Bethell Gwyn, and in March 1953, he was elected president of the Bristol Steamship Owners' Association. He had been elected a fellow of the Institute of Chartered Shipbrokers in 1945, and in August 1953, he was elected chair of the Bristol section of the institute. In the 1950s, he was secretary to the Apsley Players, a musical quintet based in Bristol. In September 1956, he retired from Bethell Gwyn after forty-eight years of service.
Stuckes' elder brother, Jack Stephen, was educated at Merrywood Grammar School, Knowle, Bristol, and studied electrical engineering at the Merchant Venturers' Technical College, Bristol. During World War II, he was a corporal in No. 34 Service Flying Training School of the Royal Air Force, based at the Royal Canadian Air Force station Medicine Hat in Alberta, Canada. At the end of the war, he took a position as a cashier and wages clerk at Christy Brothers, an electricity engineering company based at Bower Ashton in south west Bristol.
Education
Stuckes was first educated at Merrywood primary school in Knowle, Bristol. In June 1934, she gained a foundational boarding scholarship to Red Maids' School, Westbury-on-Trym. She went on to study at Colston's Girls' School, Montpelier, Bristol, where, in July 1942, she passed her Higher School Certificate in natural sciences with a distinction in chemistry. She was offered a Pfeiffer scholarship at Bedford College, University of London, and awarded a Gamble scholarship by the school of £50 a year, tenable at the universities of Oxford, Cambridge, London, and at the Royal Free Hospital for Women.
However, instead of taking up the scholarship at Bedford, Stuckes entered Newnham College in the University of Cambridge, to study the Natural Science Tripos. In 1946, she graduated with a BA degree, and in the same year, she was elected a student member of the Physical Society of London. In 1950, she was elected an associate of the Institute of Physics and awarded an MA by Cambridge. In 1969, she returned to Cambridge to complete a PhD, and subsequently, she was elected to the senate of the university.
Career
After leaving Cambridge, Stuckes joined Metropolitan-Vickers, Trafford, Greater Manchester, as a graduate trainee in the research department. Metropolitan-Vickers was a British heavy industrial firm, known for manufacturing electrical equipment and generators, street lighting, and electronics. The company had a relatively favourable attitude to placing graduate women in professional electrical engineering positions. For example, when Stuckes joined the company, Beryl Dent led the computation section and supervised the laboratory team that investigated the physical properties of semiconductors. Stuckes collaborated with Dent on Stuckes' first published paper on the heating effects that occur when a current is passed through a semiconductor. Dent suggested methods to solve the equations and computed the numerical integrations.
From 1953, Stuckes published a series of papers on the thermal and electrical conductivity of semiconductors. In one such paper, she investigated the electrostatic force between polished plates of a semiconductor and a metal when placed in contact and a voltage applied. The force is caused by the free charge that accumulates between the semiconductor and metal surfaces. This force, or attraction, is known as the Johnsen–Rahbek effect, and is proportional to the square of the applied voltage. Stuckes constructed a clutch that consisted of a plate of magnesium orthotitanate, a hard, ceramic semiconductor, that rubbed against a highly polished steel plate. She found that abrasion at the contact surfaces caused the force to decrease as the number of operations increased and suggested the presence of an electron field emission effect at the contact boundary. She proved the existence of the effect, proposed an electric circuit model to explain the data, and noted that as the voltage increased, the area of field emission also increased, and consequently, this limited the field strength of the circuit.
On 4 December 1962, Stuckes was elected a Fellow of the Institute of Physics, and in the subsequent year, she left Metropolitan-Vickers to work as a lecturer in the department of pure and applied physics at the Royal College of Advanced Technology, Salford, that became the University of Salford in 1967. In 1975, Stuckes, together with John Edwin Parrott, published a textbook that reviewed the theory and experimental data on thermal conductivity in solids and semiconductors. Parrott was a scientist at the Aldermaston Court research laboratory of Associated Electrical Industries (the then holding company of Metropolitan-Vickers), and later, professor of physics at University of Wales, Cardiff. In 1956, he had obtained a PhD from the University of Reading on the thermal and thermoelectric properties of semiconductors. Paul Gustav Klemens, late professor of physics at the University of Connecticut, reviewed the book at the time of publication and stated that "[it] is most unique and valuable; the theoretical problem is very difficult, and nowhere is there such a good summary of the useful approximations and the salient results."
By 1979, Stuckes was a senior lecturer in the department of applied acoustics at Salford, and in 1980, she was in charge of the department's heat laboratory. The laboratory was supported by grants from the Polymer Engineering Directorate and the Building Subcommittee (she was a member of this committee in 1982) of the Science and Engineering Research Council, and the Building Research Establishment in the Department of the Environment. These grants funded a study to investigate how moisture in cavity walls affects the surrounding insulation. At the time, no one knew the exact efficiency of standard polymeric insulating materials as data was based on material in "dry" condition. Stuckes led a team to obtain experimental data that would allow builders to calculate a standard level of insulation. The study concluded that heat transfer in buildings can be modelled adequately by simple, one-dimensional, steady state models.
In 1981, Stuckes was interviewed by Alfred Bates about the most efficient way to heat and insulate homes. The interview was broadcast on 31 March 1981, as part of a BBC North West regional television documentary series entitled Towards Tomorrow. In the following year, she presented a television programme for the Open University that demonstrated the usefulness of simple models of thermal conduction. The programme was first broadcast on BBC One in the morning of 3 May 1982 and formed part of the Open University's unit on heat transfer. In the 1980s, she continued to publish research on the thermal properties of building materials. In one study with Anthony Simpson, they found that the shape of air inclusions within vermiculite concrete affected the thermal conductivity of the concrete. This finding was explored further with British Petroleum and resulted in a joint patent being granted on a thermally insulating filler. By May 1986, she was awarded Chartered Physicist status by the Institute of Physics, and by September 1988, she had retired from her position at the university.
Personal life and death
In retirement, Stuckes resided at 58 Carlton Road in Hale, Trafford. In October 1947, she had become engaged to Douglas Perrin Jones, the eldest son of William Henry Perrin Jones of Port Sunlight, Wirral. At the time of their engagement, Douglas was an electrical engineer in the research department at Metropolitan-Vickers. When they both worked there, he would assist in her research, including in 1953, an investigation into the heating effects that occur when a current is passed through a semiconductor. After their marriage at Bristol in 1949, she continued to use her maiden name in her academic life and scientific publications.
Douglas retired in 1984 after 46 years' service as a research engineer with the General Electric Company. He died in hospital on 15 July 1995, aged 75 years, and his funeral service was held on 21 July 1995 at Altrincham Crematorium, Dunham Massey. Stuckes died after a long illness on 26 September 2006, aged 83 years, at Faversham Nursing Home, Urmston, Trafford. Her funeral service was held at the same crematorium on 3 October 2006 and her ashes were later interred in the crematorium grounds.
Academic conferences
The following table lists academic conferences where Stuckes was known to have organised the conference and/or read a paper.
Selected publications
Books
Patents
Academic papers
Electrical and thermal conductivity of semiconductors
Thermal conductivity of building materials
See also
Footnotes
References
Bibliography
Further reading
External links
One-dimensional steady state heat transfer via the Wayback Machine. The link opens a video of the Open University television programme presented by Stuckes that was first broadcast on BBC One at 7:05 am on 3 May 1982. It constitutes programme twelve of thirty-two for the mathematical models and methods module (Open University module code MST204, unit 12).
Image of the thermal wall laboratory, taken in 1976, at the department of applied acoustics, University of Salford.
The Thermal Measurement Laboratory at the University of Salford.
1923 births
2006 deaths
20th-century British physicists
20th-century English educators
20th-century English people
20th-century English women educators
20th-century English women scientists
20th-century English women
Academics of the University of Salford
Alumni of Newnham College, Cambridge
British women academics
English physicists
English women educators
English women physicists
Fellows of the Institute of Physics
Metropolitan-Vickers people
People associated with the Open University
People educated at Montpelier High School, Bristol
People educated at The Red Maids' School
Scientists from Bristol
Scientists from Salford
Thermodynamicists | Audrey Stuckes | [
"Physics",
"Chemistry"
] | 2,533 | [
"Thermodynamics",
"Thermodynamicists"
] |
72,252,877 | https://en.wikipedia.org/wiki/Dodecahedral%20bipyramid | In 4-dimensional geometry, the dodecahedral bipyramid is the direct sum of a dodecahedron and a segment, Each face of a central dodecahedron is attached with two pentagonal pyramids, creating 24 pentagonal pyramidal cells, 72 isosceles triangular faces, 70 edges, and 22 vertices. A dodecahedral bipyramid can be seen as two dodecahedral pyramids augmented together at their base.
It is the dual of an icosahedral prism.
See also
Tetrahedral bipyramid
Cubic bipyramid
Icosahedral bipyramid
References
External links
Dodecahedral tegum
4-polytopes | Dodecahedral bipyramid | [
"Mathematics"
] | 137 | [
"Geometry",
"Geometry stubs"
] |
72,254,307 | https://en.wikipedia.org/wiki/Infant%20sleep | Infant sleep is an act of sleeping by an infant or a newborn. It differs significantly from sleep during adulthood. Unlike in adults, sleep early in infancy initially does not follow a circadian rhythm. Infant sleep also appears to have two main modes - active, associated with movement, and quiet, associated with stillness - exhibiting distinct neurological firing patterns. Sleep duration is also shorter. As the infant ages, sleep begins to follow a Circadian rhythm and sleep duration increases. Infants nap frequently. Infants are also particularly vulnerable during sleep; they are prone to suffocation and SIDS. As a result, "safe" sleep techniques have been the subject of several public health campaigns. Infant sleep practices vary widely between cultures and over history; historically infants would sleep on the ground with their parents. In many modern cultures, infants sleep in a variety of types of infant beds or share a bed with parents. Infant sleep disturbance is common, and even normal infant sleep patterns can cause considerable disruption to parents' sleep.
Normal infant sleep
In the first week of life, infants will sleep during both the day and night and will wake to feed. Sleep cycle duration is usually short, from 2–4 hours. Over the first two weeks, infants average 16–18 hours of sleep daily. A circadian rhythm has not yet been established and infants sleep during the night and day equally. In the first month of life, 95% of infants will wake during the night. At around 2 months, a day-night pattern begins to gradually develop. At around 3 months, the sleep cycle may increase to 3–6 hours, and the majority of infants will still wake in the night to feed. By 4 months, the average infant sleeps 14 hours a day (including naps), but this amount can vary considerably. By 8 months, most infants continue to wake during the night, though a majority are able to fall back asleep without parental involvement. At 9 months, only a third of infants sleep through the night without waking. Daytime sleeping (naps) generally does not cease until 3 to 5 years of age.
Infant sleep in the first year can be categorised into active sleep (AS) and quiet sleep (QS). Active sleep is similar to adult REM sleep in that it is characterised by eye and other kinds of movement; however, unlike adults in REM, infants tend to enter AS at the beginning of their sleep cycle, as opposed to the end of it like REM in adults. Infants spend about half their time in AS/REM and half in QS, a much higher proportion than adults, who only spend about a quarter of their time in REM. It is hypothesised that this difference in sleep pattern is related to the shorter sleep cycles and more frequent waking of infants. By 3 months, infants become more likely to enter quiet sleep (or NREM, non-REM sleep) at the beginning of their sleep cycle. After 3 months, infants tend to alternate between AS and QS in cycles of about 50 minutes, as opposed to the longer cycle in adult sleep (90 minutes). Sleep cycle duration starts to resemble adult sleep more at 6 months, but does not fully resemble adult sleep until around 3 years old, which is generally around the time napping ceases as well.
Frequent night waking and the short sleep cycle in infants is thought to be adaptive. Because infants have small stomachs and are undergoing rapid growth, they need to eat very frequently in order to get enough nutrition. Frequent night awakenings are also protective against SIDS.
References
Infancy
Sleep | Infant sleep | [
"Biology"
] | 720 | [
"Behavior",
"Sleep"
] |
72,255,509 | https://en.wikipedia.org/wiki/23%20Leonis%20Minoris | 23 Leonis Minoris (23 LMi) is a solitary, bluish-white hued star located in the northern constellation Leo Minor. It is positioned 7° south and 11" west from β Leonis Minoris. It is rarely called 7 H. Leonis Minoris, which is its Hevelius designation.
The object has an apparent magnitude of 5.49, allowing it to be faintly visible to the naked eye. Based on parallax measurements from the Gaia satellite, it is estimated to be 279 light years distant. 23 LMi is receding from the Solar System with a fairly constrained radial velocity of . At its current distance, the star's brightness is diminished by a tenth of a magnitude due to interstellar dust. 23 LMi's kinematics matches that of the Hyades moving group and it is considered a probable member.
23 LMi was catalogued as a chemically peculiar star with a stellar classification of A0 Vpn due to a lack of magnesium in its spectrum by Helmut Abt and Nidia Irene Morrell. However, A.P. Cowley and colleagues instead listed it as an ordinary A-type main-sequence star with nebulous absorption lines as a result of rapid rotation, with the class being A0 Vn. It has 2.55 times the mass of the Sun and is said to be 285 million years old, having completed 60.8% of its main sequence lifetime. It has double the radius of the Sun and shines with a luminosity 44.3 times that of the Sun from its photosphere at an effective temperature of . 23 LMi is currently spinning rapidly with a projected rotational velocity of .
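The quoted radius and luminosity are tied to the effective temperature by the Stefan–Boltzmann law, L ∝ R²T⁴. A rough illustrative calculation in solar units (the solar effective temperature of 5772 K is an assumed reference value, not given in this article):

```python
# Stefan-Boltzmann law in solar units: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
L = 44.3        # luminosity in solar units (from this article)
R = 2.0         # radius in solar units (from this article)
T_SUN = 5772.0  # assumed solar effective temperature in kelvin

# Solve for the implied effective temperature.
T = T_SUN * (L / R**2) ** 0.25
print(round(T), "K")  # roughly 10,500 K, typical of an A0-type star
```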
References
A-type main-sequence stars
Hyades Stream
Leo Minor
Leonis Minoris, 23
BD+30 01981
088960
050303
4024 | 23 Leonis Minoris | [
"Astronomy"
] | 384 | [
"Leo Minor",
"Constellations"
] |
72,255,622 | https://en.wikipedia.org/wiki/Eva%20Zurek | Eva Dagmara Zurek (born 1976) is a theoretical chemist, solid-state physicist and materials scientist. As a professor of chemistry at the University at Buffalo, Zurek studies the electronic structure, properties, and reactivity of a wide variety of materials using quantum mechanical calculations. She is interested in high pressure science, superhard, superconducting, quantum and planetary materials, catalysis, as well as solvated electrons and electrides. She develops algorithms to predict the structures of crystals, interfaces them with machine learning models, and applies them in materials discovery.
Early life and education
Zurek was born in 1976 in Poland. She completed her Bachelor of Science and master's degree at the University of Calgary, where she carried out research with Tom Ziegler. While at the University of Calgary, Zurek was a recipient of one of the Alberta Ingenuity grants. Zurek's PhD was carried out in the group of Ole Krogh Andersen at the Max Planck Institute for Solid State Research, and she received her degree from the University of Stuttgart in Germany. Following her PhD, she accepted a postdoctoral associate position at Cornell University under Roald Hoffmann.
Career
Upon completing her postdoctoral work at Cornell, Zurek joined the faculty at the University at Buffalo (UB) in 2009. In October 2009, Zurek co-authored a paper with Hoffmann and other colleagues in the Proceedings of the National Academy of Sciences of the United States of America predicting that LiH6 could form as a stable metal at a pressure of around 1 million atmospheres. As an assistant professor of chemistry, Zurek's research group wrote an algorithm called XtalOpt to predict hydrogen-rich compounds that may be superconducting metals under pressure.
By 2016, Zurek was promoted to associate professor where she continued her work on superconductors. Her research team used the algorithm XtalOpt to understand which combinations of phosphorus and hydrogen were stable at pressures of up to 200 gigapascals. Their results determined that phosphine's superconductivity under pressure likely arose due to the compound decomposing into other chemical products that contain phosphorus and hydrogen. In 2019, Zurek oversaw a research team which used computational techniques to identify 43 previously unknown forms of carbon that are thought to be stable and superhard.
In 2021, Zurek was named a Fellow of the American Physical Society (APS) for "the application of forefront computational electronic structure methods to reveal microscopic processes occurring in large molecules and nanostructures, for the design of hydride superconductors, and for related educational innovations in computational science." At the end of the 2020–21 academic year, Zurek was named a recipient of the 2021 SUNY Chancellor's Award for Excellence. She was also elected by the Division of Computational Physics of the APS as its Vice Chair for the 2022–23 year.
Awards and honors
2024 State University of New York (SUNY) Distinguished Professor
2023 WNYACS Schoellkopf Medal
2022 Fellow of the American Physical Society
2021 SUNY Chancellor's Award for Excellence in Scholarship
2014 The Minerals, Metals & Materials Society's Young Leader Professional Development Award
2014 Promising Young Scientist prize from the Centre de Mecanique Ondulatoire Appliquee
2013 Sloan Research Fellowship
References
External links
Living people
1976 births
Place of birth missing (living people)
Scientists from Alberta
21st-century Canadian women scientists
Theoretical chemists
University of Calgary alumni
University of Stuttgart alumni
Fellows of the American Physical Society | Eva Zurek | [
"Chemistry"
] | 719 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
72,257,559 | https://en.wikipedia.org/wiki/Internet%20intervention | Internet intervention, in medical context, refers to the delivery of health care-related treatments through Internet.
References
Therapy
Internet | Internet intervention | [
"Technology"
] | 25 | [
"Internet",
"Transport systems"
] |
72,258,037 | https://en.wikipedia.org/wiki/Intracellular%20space | Intracellular space is the interior space of the plasma membrane. It contains about two-thirds of TBW. Cellular rupture may occur if the intracellular space becomes dehydrated, or if the opposite happens, where it becomes too bloated. Thus it is important for the liquid to stay in optimal quantity.
See also
Extracellular space
References
Cell anatomy
Cell biology | Intracellular space | [
"Biology"
] | 78 | [
"Cell biology"
] |
72,258,523 | https://en.wikipedia.org/wiki/Adaptive%20deep%20brain%20stimulation | Adaptive deep brain stimulation (), also known as closed-loop deep brain stimulation (clDBS), is a neuro-modulatory technique currently under investigation for the treatment of neurodegenerative diseases.
Conventional DBS delivers constant electrical stimulation to regions of the brain that control movement through a surgically implanted wire, or lead, connected to an implantable pulse generator (IPG). Programming adjustments to the pulse generator are made frequently by the treating neurologist, based on the patient's activities and medication over time, to optimize symptom control. However, constant stimulation can lead to side effects. aDBS differs from conventional DBS systems (which provide constant stimulation) in that it can both sense brain activity and deliver the appropriate stimulation in real time. Of note, in the early days of deep brain stimulation, closed-loop applications were carried out by multiple pioneers, such as José Delgado, Robert Heath, Natalia Bechtereva and Carl Wilhelm Sem-Jacobsen, long before the advent of 'modern' DBS. Perhaps the earliest closed-loop experiment in an animal model was performed by Delgado and colleagues in 1969. In the modern era of DBS, following the introduction of the method by Alim Louis Benabid and a demonstration of the efficacy of aDBS in the macaque by the team of Hagai Bergman in 2011, the first in-human application of aDBS was carried out by the team of Peter Brown in 2013, followed by the team of Alberto Priori in the same year.
History in Parkinson's disease
First developed in the 1950s, with modern versions introduced by the team of Alim Louis Benabid in the 1980s, DBS gained recognition as a treatment method for tremor and later for Parkinson's disease, dystonia, obsessive–compulsive disorder and epilepsy. However, the working mechanism of conventional DBS involves continuous stimulation of the target structure, an approach that cannot adapt to patients' changing symptoms or functional status in real time.
To address these unwanted side effects of DBS, concepts capable of sensing brain activity and automatically adjusting the stimulation in response to fluctuating biomarkers were re-introduced by multiple teams, with a first peer-reviewed publication in modern times by the teams of Hagai Bergman (in primates) and Peter Brown (in humans), followed by the team of Alberto Priori in the same year. The study, followed by others testing more patients over longer time windows (up to 24 hours), supported the hypothesis that aDBS is effective in controlling PD symptoms while reducing the side effects of constant stimulation.
The device used in these studies was the external component of the AlphaDBS system developed by Newronika. While these advancements were ongoing, Medtronic published the architecture of an implantable aDBS device for application in humans. This design was embedded in Medtronic's Activa PC + S research device, allowing LFP sensing and recording while delivering targeted DBS therapy. This device was used in 2018 by a research team led by Philip A. Starr at the University of California, San Francisco, in a public-private partnership with Medtronic. The researchers implanted the device in two patients with Parkinson's disease who had traditional DBS but continued to experience dyskinesia after adjustment by a neurologist. They then compared the results of the adaptive stimulation system with traditional, manually set stimulation in the two patients, and found that the adaptive approach was as effective at controlling symptoms as constant stimulation. The AlphaDBS implantable system by Newronika was developed and CE-marked in 2021.
The AlphaDBS represents a new generation of commercially available implantable pulse generators (IPG) for DBS and sensing, with aDBS capabilities. A systematic multicentre international study, consisting of six investigational sites (in Italy, Poland and the Netherlands), was conducted to evaluate the safety and efficacy of aDBS vs cDBS using this new generation of DBS IPG in PD (AlphaDBS system by Newronika SpA, Milan, Italy).[10]
The Medtronic PC+S device was also developed into a commercial IPG allowing stimulation and sensing, the Percept PC, which is approved for aDBS delivery in Japan. Nobutaka Hattori and his group performed a research study focused on the case of a 51-year-old man with Parkinson's disease (PD) presenting with motor fluctuations, who received bilateral subthalamic deep brain stimulation (DBS) with the Percept PC device, showing the feasibility of the approach. While these new devices seem to have various applications in terms of facilitating condition-dependent stimulation and providing new insights into the pathophysiological mechanisms of PD, they are currently under investigation in larger clinical studies before their definitive use in clinical practice.
Mechanism of action
To adapt the stimulation parameters, adaptive DBS (aDBS) employs the local field potential (LFP) of the target structure, recorded through the same implanted electrodes that deliver the stimulation. The present application of the aDBS technique is primarily based on the detection of increased beta oscillations in the subthalamic nucleus (STN): the device changes the stimulation current depending on the strength of the beta-band oscillation, and can therefore overcome the limitations of conventional DBS (cDBS) therapy, including stimulation-induced long-term side effects such as dyskinesia or speech deterioration.
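As a rough illustration of the sensing loop described above — not a clinical algorithm — the sketch below estimates beta-band (13–30 Hz) power from an LFP segment and selects a stimulation current accordingly. The threshold and current values are illustrative assumptions, not device parameters:

```python
import numpy as np

BETA_BAND = (13.0, 30.0)  # beta oscillations, in Hz

def beta_band_power(lfp: np.ndarray, fs: float, band=BETA_BAND) -> float:
    """Estimate the power of an LFP segment within the beta band via the FFT."""
    freqs = np.fft.rfftfreq(lfp.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(lfp)) ** 2 / lfp.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[mask].sum())

def adapt_stimulation(power: float, threshold: float,
                      low_ma: float = 0.5, high_ma: float = 2.0) -> float:
    """Toy controller: raise the stimulation current when beta power is high."""
    return high_ma if power >= threshold else low_ma
```

In a real device the band-power estimate would be computed continuously on short windows and the current ramped smoothly rather than switched between two levels.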
Medical use
Adaptive deep brain stimulation (aDBS) is being studied as a treatment for multiple neuropsychiatric and movement disorders.
Parkinson's disease (PD)
Since 2015, several experiments have been carried out to assess the efficacy of aDBS using the beta-band power of the subthalamic local field potentials (LFPs) as a target signal for adapting DBS parameters to motor fluctuations. The results showed that aDBS is highly effective in controlling patients' PD symptoms alongside normal levodopa therapy, while reducing dyskinesias.
Tourette syndrome (TS)
Adaptive deep brain stimulation (aDBS) is being studied as a potential treatment for TS. A 2017 research study reviewed the available literature supporting the feasibility of an LFP-based aDBS approach in patients with TS. In addition, researchers have put forward several exploratory findings regarding LFP data recently acquired and analysed in patients with TS after DBS electrode implantation: at rest, during voluntary and involuntary movements (tics), and during ongoing DBS. It was found that LFPs recorded from DBS targets can be used to control new aDBS devices capable of adaptive stimulation responsive to the symptoms of TS.
Dystonia
The applications of aDBS in the treatment of dystonia have evolved significantly over the past few years. Low-frequency oscillations (LFO) detected in the internal globus pallidus of dystonia patients have been identified as a physiomarker for adaptive deep brain stimulation (aDBS).[22] Moreover, the characteristics of pallidal low-frequency and beta bursts can be helpful in implementing adaptive stimulation in the parkinsonian and dystonic internal globus pallidus.[23] Much of the scientific research to date on pathological oscillations in dystonia has focused on potential biomarkers that might be used as a feedback signal for controlling aDBS in patients with dystonia.[24]
Essential tremor (ET)
Adaptive deep brain stimulation (aDBS) may be an effective tool in the treatment of essential tremor (ET), one of the most common neurological movement disorders. aDBS for ET is, however, more focused on closed-loop technology based on external sensors. In a recent study, H. J. Chizeck presented the first translation-ready training procedure for a fully embedded aDBS control system for movement disorders, and one of the first examples of such a system in ET.
Comparison with conventional DBS (cDBS)
In a 2021 research study conducted by Alberto Priori, a comparative analysis was presented of the effects on motor symptoms of conventional deep brain stimulation (cDBS) and closed-loop adaptive deep brain stimulation (aDBS) in patients with Parkinson's disease. This work highlighted the safety and effectiveness of aDBS compared to cDBS in a daily session, both in terms of motor performance and of the total electrical energy delivered (TEED) to the patient. Simon Little has reported the aDBS approach to be superior to conventional DBS in PD, in primates using cortical neuronal spike triggering and in humans employing local field potential biomarkers. In a protocol for a pseudo-randomised clinical study of adaptive deep brain stimulation as a treatment for advanced Parkinson's disease, it was shown that aDBS does not induce dysarthria, in contrast to cDBS. It has also been suggested that aDBS and cDBS can improve patients' axial symptoms to a similar extent, but that, compared with cDBS, aDBS significantly improves the main symptom, bradykinesia.
References
Electrotherapy
Medical devices
Neurology procedures
Neurosurgical procedures
Neurotechnology | Adaptive deep brain stimulation | [
"Biology"
] | 1,945 | [
"Medical devices",
"Medical technology"
] |
65,015,098 | https://en.wikipedia.org/wiki/Codablock | Codablock is a family of stacked 1D barcodes (Codablock A,. Codablock F, Codablock 256) which was invented in Identcode Systeme GmbH in Germany in 1989 by Heinrich Oehlmann. Codablock barcodes are based on stacked Code 39 and Code 128 symbologies and have some advantages of 2D barcodes.
The barcodes were used mostly in the health care industry (HIBC); presently, Codablock codes have been fully replaced by Data Matrix.
History
The Codablock codes were developed from 1989 to 1995. Codablock A was invented in 1989 and standardized as an AIM standard in 1994. It was based on stacked Code 39 barcodes and wasn't widely used because of Code 39's restrictions.
The next variant, Codablock F, was based on the stacked Code 128 symbology and was standardized as an AIM standard in 1995. Today Codablock F is officially classed as a historical standard and isn't recommended for use in new applications.
Codablock 256 was invented as an internal ICS Identcode-Systeme standard and wasn't standardized. It was also based on the stacked Code 128 symbology. Codablock 256 could encode the full 256-symbol ISO 8859-1 charset with the FNC4 character, and each line had error correction. Because it had issues being read by Code 128 scanners, 8-bit charset encoding was added to the Codablock F standard instead, and Codablock 256 was almost never used.
Codablock also played an important pioneering role in the advance of 2D codes, because at the time it was the only such code that could be read reliably with just the slightest modification of the laser scanners then in use.
Codablock types
The Codablock symbologies were developed as stacked versions of the Code 39 and Code 128 barcodes and have some advantages of 2D barcodes. They utilize rectangular space more effectively than a 1D barcode and carry additional check characters to secure the content of the overall message.
Codablock can be compared with a line break in a text editor: as soon as one line is full, the next one is started. The line number is inserted into each line, and the number of lines is inserted into the finished block; the first line contains the row count. Each code line also contains an orientation indicator for readers, plus additional checksum values at the end of the last line.
Codablock A
Codablock A is based on the Code 39 barcode, consists of 2 to a maximum of 22 barcode lines of 1 to 61 data characters each, and can encode up to 1,340 characters. The checksum used for error checking is calculated modulo 43 over the entire code block.
Codablock F
Codablock F is based on the Code 128 barcode, consists of 2 to a maximum of 44 lines of 4 to 62 data characters each, and can encode up to 2,725 characters. Codablock F can encode the full ISO 8859-1 8-bit charset. The Codablock F start character must always be Start A (Code 128).
Codablock 256
Codablock 256 has the same structure as Codablock F, with the difference that each line has its own start character. Codablock 256, like Codablock F, can encode a maximum of 2,725 characters. Additionally, each code line has its own error correction, so small damage can be repaired. This version of Codablock was not standardized internationally and remained an internal Identcode Systeme GmbH development.
Codablock F structure
Codablock F consists of stacked Code 128 barcode lines and has the following features:
has from 2 to 44 lines;
each line has from 4 to 62 data characters;
can encode up to 2,725 characters;
can more effectively use rectangular space then any 1D barcode;
can use any code 128 reading equipment with slight modification or even without it;
when printing consists from standard Code 128 rows;
secured by two mod-86 check sums in addition to the row check sums;
can encode full ASCII character set. 8-bit character set is available using the ISO 8859-1 (Latin-1) character set (FNC4);
numeric compression: allows to encode blocks of numbers (minimum 4) using only the half of the usual space;
additionally, secured by a row checksum with modulo 103.
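The numeric compression above relies on Code 128's Code C mode, which packs two digits into a single symbol. A minimal sketch of the symbol-count saving (counting data symbols only and ignoring start/shift overhead, which is why a minimum run of digits is needed for the switch to pay off):

```python
def symbol_count(digits: str, use_code_c: bool) -> int:
    """Count the Code 128 data symbols needed for a digit string.

    In Code A/B each digit costs one symbol; Code C packs two digits
    per symbol (an odd-length string leaves one digit to encode alone).
    """
    if not use_code_c:
        return len(digits)
    pairs, leftover = divmod(len(digits), 2)
    return pairs + leftover
```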
Codablock F is constructed from Code 128 data rows that are enclosed between a Start A (Code 128) character and the Code 128 stop character. Every row carries its number in the second position, after the encoding-mode selector or a data character. The first row carries the row count in that position instead. The last row has two additional checksum characters.
MDX - encoding mode selector or data character if data can be encode in Code A mode.
Rows Count - count of rows in barcode, set in the first row.
RX - row number.
Row X Data - encode data in the code 128 row.
CSX - Code 128 checksum.
CBS1, CBS2 two Codablock F checksum characters.
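The per-row CSX character follows the standard Code 128 rule: the start code's value plus each data symbol's value weighted by its 1-based position, taken modulo 103. A minimal sketch operating on symbol values (not bar patterns):

```python
def code128_checksum(start_value: int, data_values: list[int]) -> int:
    """Code 128 check character: start code plus position-weighted
    data symbol values, modulo 103."""
    total = start_value
    for position, value in enumerate(data_values, start=1):
        total += position * value
    return total % 103
```

For example, a row starting with Start A (value 103) followed by the symbols for "A" (33) and "B" (34) yields 103 + 1×33 + 2×34 = 204, so the check character has value 204 mod 103 = 101.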
See also
Automated identification and data capture (AIDC)
Barcode
Code 128
Code 39
Health Industry Business Communications Council(HIBC)
References
External links
AIM ISS - CODABLOCK F - X-24
Codablock F generator
Codablock symbology description (in German)
Health Industry Business Communications Council
ICS International GmbH Identcode-Systeme (in German)
Automatic identification and data capture
1989 introductions
German inventions
Barcodes
Encodings | Codablock | [
"Technology"
] | 1,132 | [
"Data",
"Automatic identification and data capture"
] |
65,020,736 | https://en.wikipedia.org/wiki/15-minute%20city | The 15-minute city (FMC or 15mC) is an urban planning concept in which most daily necessities and services, such as work, shopping, education, healthcare, and leisure can be easily reached by a 15-minute walk, bike ride, or public transit ride from any point in the city. This approach aims to reduce car dependency, promote healthy and sustainable living, and improve wellbeing and quality of life for city dwellers.
Implementing the 15-minute city concept requires a multi-disciplinary approach, involving transportation planning, urban design, and policymaking, to create well-designed public spaces, pedestrian-friendly streets, and mixed-use development. This change in lifestyle may include remote working which reduces daily commuting and is supported by the recent widespread availability of information and communications technology. The concept has been described as a "return to a local way of life".
As people spend more time working from home or near their homes, there is less demand for large central office spaces and more need for flexible, local co-working spaces. The 15-minute city concept suggests a shift toward a decentralized network of workspaces within residential neighbourhoods, reducing the need for long commutes and promoting work-life balance.
The concept's roots can be traced to pre-modern urban planning traditions where walkability and community living were the primary focus before the advent of street networks and automobiles. In recent times, it builds upon similar pedestrian-centered principles found in New Urbanism, transit-oriented development, and other proposals that promote walkability, mixed-use developments, and compact, livable communities. Numerous models have been proposed about how the concept can be implemented, such as 15-minute cities being built from a series of smaller 5-minute neighborhoods, also known as complete communities or walkable neighborhoods.
The concept gained significant traction in recent years after Paris mayor Anne Hidalgo included a plan to implement the 15-minute city concept during her 2020 re-election campaign. Since then, a number of cities worldwide have adopted the same goal and many researchers have used the 15-minute model as a spatial analysis tool to evaluate accessibility levels within the urban fabric.
History
The 15-minute city concept is derived from historical ideas about proximity and walkability, such as Clarence Perry's neighborhood unit. As an inspiration for the 15-minute city, Carlos Moreno, an advisor to Anne Hidalgo, cited Jane Jacobs's model presented in The Death and Life of Great American Cities.
The ongoing climate crisis and global COVID-19 pandemic have prompted a heightened focus on the 15-minute city concept. In July 2020, the C40 Cities Climate Leadership Group published a framework for cities to "build back better" using the 15-minute concept, referring specifically to plans implemented in Milan, Madrid, Edinburgh, and Seattle after COVID-19 outbreaks. Their report highlights the importance of inclusive community engagement through mechanisms like participatory budgeting and adjusting city plans and infrastructure to encourage dense, complete, overall communities.
A manifesto published in Barcelona in April 2020 by architecture theorist Massimo Paolini proposed radical change in the organization of cities in the wake of COVID-19, and was signed by 160 academics and 300 architects. The proposal has four key elements: reorganization of mobility, (re)naturalization of the city, de-commodification of housing, and de-growth.
In early 2023, far-right conspiracy theories began to flourish that described 15-minute cities as instruments of government repression, claiming that they were a pretext for introducing restrictions on travel by car. In fact, '15-minute city' proposals do not involve any restrictions on travel by car; unrelated measures introduced to reduce traffic in some cities have been conflated with '15-minute cities'.
Research models
The 15-minute city is a proposal for developing a polycentric city, where density is made pleasant, proximity is vibrant, and social intensity (a large number of productive, intricately linked social ties) is real. The key element of the model has been described by Carlos Moreno as "chrono-urbanism", a refocus of interest on time value rather than time cost.
Moreno and the 15-minute city
Urbanist Carlos Moreno's 2021 article introduced the 15-minute city concept as a way to ensure that urban residents can fulfill six essential functions within a 15-minute walk or bike ride from their dwellings: living, working, commerce, healthcare, education and entertainment. The framework of this model has four components; density, proximity, diversity and digitalization.
Moreno cites the work of Nikos Salingaros, who theorizes that an optimal density for urban development exists which would encourage local solutions to local problems. The authors discuss proximity in terms of both space and time, arguing that a 15-minute city would reduce the space and time necessary for activity. Diversity in this 15-minute city model refers to mixed-use development and multicultural neighborhoods, both of which Moreno and others argue would improve the urban experience and boost community participation in the planning process. Digitalization is a key aspect of the 15-minute city derived from smart cities. Moreno and others argue that a Fourth Industrial Revolution has reduced the need for commuting because of access to technology like virtual communication and online shopping. They conclude by stating that these four components, when implemented at scale, would form an accessible city with a high quality of life.
Larson and the 20-minute city
Kent Larson described the concept of a 20-minute city in a 2012 TED talk, and his City Science Group at the MIT Media Lab has developed a neighborhood simulation platform to integrate the necessary design, technology, and policy interventions into "compact urban cells". In his "On Cities" masterclass for the Norman Foster Foundation, Larson proposed that the planet is becoming a network of cities, and that successful cities in the future will evolve into a network of high-performance, resilient, entrepreneurial communities.
D'Acci and the one-mile city
In 2013, Luca D'Acci (Associate Professor in Urban Studies at the Polytechnic University of Turin, Italy) proposed a city model "where each point can reach continuous natural areas, job locations, centralities, shops, amenities (recreational, medical, cultural), usual daily activities by 15/30 minute walking or within 15 minute biking". He called it a "one-mile green city", or "Isobenefit Urbanism". (The term "isobenefit" is a portmanteau word from "iso" meaning equal, and "benefit", which he defines as advantageous amenities, services, workplaces and green space.)
Weng and the 15-minute walkable neighborhood
In a 2019 article using Shanghai as a case study, Weng and his colleagues proposed the 15-minute walkable neighborhood with a focus on health, and specifically non-communicable diseases. The authors suggest that the 15-minute walkable neighborhood is a way to improve the health of residents, and they document existing disparities in walkability within Shanghai. They found that rural areas, on average, are significantly less walkable, and areas with low walkability tend to have a higher proportion of children. Compared to Moreno et al., the authors focused more on the health benefits of walking and differences in walkability and usage across age groups.
Da Silva and the 20-minute city
In their 2019 article, Da Silva et al. cite Tempe, Arizona, as a case study of an urban space where all needs could be met within 20 minutes by walking, biking, or transit. The authors found that Tempe is highly accessible, especially by bike, but that accessibility varies with geographic area. Compared to Moreno et al., the authors focused more on accessibility within the built environment.
Implementations
Asia
In 2019, Singapore's Land Transport Authority proposed a master plan that included the goals of "20-minute towns" and a "45-minute city" by 2040.
Israel has embraced the concept of a 15-minute city in new residential developments. According to Orli Ronen, the head of the Urban Innovation and Sustainability Lab at the Porter School for Environmental Studies at Tel Aviv University, Tel Aviv, Haifa, Beersheba, and central Jerusalem have been effective in delivering on the concept at least in part in new developments, but only Tel Aviv has been relatively successful.
Dubai launched the 20-minute city project in 2022, where residents are able to access daily needs & destinations within 20 minutes by foot or bicycle. The plan involves placing 55% of the residents within 800 meters of mass transit stations, allowing them to reach 80% of their daily needs and destinations.
In the Philippines' largest city, the government of Quezon City announced in 2023 its plans to implement the 15-minute city concept to establish a walkable, people-friendly, and sustainable community for its residents. Influenced by the city of Paris, the government aims to make urban development people-centered and to further the city's goal of reaching carbon neutrality by 2050.
China
The 2016 Master Plan for Shanghai called for "15-minute community life circles", where residents could complete all of their daily activities within 15 minutes of walking. The community life circle has been implemented in other Chinese cities, like Baoding and Guangzhou. Xiong'an is also being developed under the 15-minute life circle concept.
The Standard for urban residential area planning and design (GB 50180–2018), a national standard that came into effect in 2018, stipulates four levels of residential areas: 15-min pedestrian-scale neighborhood, 10-min pedestrian-scale neighborhood, 5-min pedestrian-scale neighborhood, and a neighborhood block. Among them, "15-min pedestrian-scale neighborhood" means "residential area divided according to the principle that residents can meet their material, living and cultural demand by walking for 15 minutes; usually surrounded by urban trunk roads or site boundaries, with a population of 50,000 to 100,000 people (about 17,000 to 32,000 households) and complete supporting facilities."
Chengdu, to combat urban sprawl, commissioned the "Great City" plan, where development on the edges of the city would be dense enough to support all necessary services within a 15-minute walk.
Europe
The mayor of Paris, Anne Hidalgo, introduced the 15-minute city concept in her 2020 re-election campaign and began implementing it during the COVID-19 pandemic. For example, school playgrounds were converted to parks after school hours, while the Place de la Bastille and other squares have been revamped with trees and bicycle lanes.
Cagliari, a city on the Italian island of Sardinia, began a strategic plan to revitalize the city and improve walkability. The city actively solicited public feedback through a participatory planning process, as described in the Moreno model. A unique aspect of the plan calls for re-purposing public spaces and buildings that were no longer being used, relating to the general model of urban intensification.
In Utrecht, the fourth-largest city in the Netherlands, 100 percent of residents can reach all city necessities in a 15-minute bike ride, and 94% in a 10-minute bike ride. The local municipality has plans to improve this further by 2040.
In September 2023, the UK Government announced plans to "protect drivers from over-zealous traffic enforcement", in what it says is "part of a long-term plan to back drivers". These included plans "to stop councils implementing so called '15-minute cities', by consulting on ways to prevent schemes which aggressively restrict where people can drive".
The Polish city of Pleszew is claimed to be a 15-minute city.
Copenhagen's Nordhavn neighbourhood was developed according to a five-minute city concept. This is based on all daily amenities being located at a distance of 400 m from the nearest public transit stop – a distance walkable within 5 minutes.
North America
In 2012, Portland, Oregon, developed a plan for complete neighborhoods within the city, which are aimed at supporting youth, providing affordable housing, and promoting community-driven development and commerce in historically under-served neighborhoods. Similar to the Weng et al. model, the Portland plan emphasizes walking and cycling as ways to increase overall health and stresses the importance of the availability of affordable healthy food. The Portland plan calls for a high degree of transparency and community engagement during the planning process, which is similar to the diversity component of the Moreno et al. model.
In 2015, Kirkland, Washington, developed a "10-Minute Neighborhood Analysis" tool to guide the city's 2035 Comprehensive Plan. This tool is intended to guide community discussion about how the 10-Minute Neighborhood concept can improve livability and explore the policy changes necessary to achieve that vision.
South America
In March 2021, Bogotá, Colombia, implemented 84 kilometers of bike lanes to encourage social distancing during the COVID-19 pandemic. This expansion complemented the Ciclovía practice that originated in Colombia in 1974, where bicycles are given primary control of the streets. The resulting bicycle lane network is the largest of its kind in the world.
Oceania
The city of Melbourne, Australia, developed Plan Melbourne 2017–2050 to accommodate growth and combat sprawl. The plan contains multiple elements of the 15-minute city concept, including new bike lanes and the construction of "20-minute neighborhoods".
Societal effects
The 15-minute city, with its emphasis on walkability and accessibility, has been put forward as a way to better serve groups of people that have historically been left out of planning, such as women, children, people with disabilities, people with lived experience of mental illness, and the elderly.
Social infrastructure is also emphasized to maximize urban functions such as schools, parks, and complementary activities for residents. There is also a large focus on access to green space, which may promote positive environmental impacts such as increasing urban biodiversity and helping to protect the city from invasive species. Studies have found that increased access to green spaces can also have a positive impact on the mental and physical health of a city's inhabitants, reducing stress and negative emotions, increasing happiness, improving sleep, and promoting positive social interactions. Urban residents living near green spaces have also been found to exercise more, improving their physical and mental health.
Limitations
Limitations of the 15-minute city concept include the difficulty or impracticality of implementing it in established urban areas, where land use patterns and infrastructure are already in place. Additionally, the concept may not be feasible in areas with low population density, such as those with extensive urban sprawl, or in areas that lower-income workers commute long distances to or from.
Noted exceptions include Chengdu, which used the 15-minute city concept to curb sprawl, and Melbourne, where Lord Mayor Sally Capp stressed the importance of public transit in expanding the radius of the 15-minute city.
In a paper published in the journal Sustainability, Georgia Pozoukidou and Zoi Chatziyiannaki write that the creation of dense, walkable urban cores often leads to gentrification or displacement of lower-income residents to outlying neighborhoods due to rising property values; to counteract this, the authors argue for affordable housing provisions to be integral with 15-minute city policies.
Furthermore, when the concept is applied as a literal spatial-analysis research tool, the 15-minute radius is expressed as an isochrone around an area considered local. Isochrones have a long history of use in transportation planning and are constructed primarily from two variables: time and speed. However, relying on population-wide conventions, such as average gait speed, to estimate the buffer zones of accessible areas may not accurately reflect the mobility capabilities of specific population groups, such as the elderly, introducing potential inaccuracies into research models.
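The time–speed arithmetic behind an isochrone can be sketched in a few lines. The gait-speed figures below (roughly 1.4 m/s for an average adult, 0.8 m/s for a slower elderly walker) are illustrative assumptions, and the crow-flies radius ignores the street network that real isochrones follow:

```python
def isochrone_radius_m(walking_speed_m_s: float, time_budget_min: float = 15.0) -> float:
    """Straight-line radius reachable at a constant walking speed.

    This is only an upper bound on the accessible area: real isochrones
    are traced along the street network, not drawn as circles.
    """
    return walking_speed_m_s * time_budget_min * 60.0

# The same 15-minute budget yields very different "local" areas:
adult_radius = isochrone_radius_m(1.4)    # ~1260 m for an average adult gait
elderly_radius = isochrone_radius_m(0.8)  # ~720 m for a slower elderly gait
```

The roughly halved radius for the slower walker illustrates the criticism above: a single population-wide gait speed can substantially overstate what counts as "local" for the elderly.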
Obstacles to implementation
In the United States, several factors make the implementation of 15-minute cities challenging. The biggest roadblock is strict zoning regulation, especially single-family zoning, which makes high-density housing construction illegal. NIMBYism is also an obstacle, as are parking requirements and the perceived low quality of urban schools, which leads couples with children to move from urban areas to suburban areas.
Conspiracy theories
In 2023, conspiracy theories about the 15-minute concept began to flourish, describing the model as an instrument of government oppression. These claims are often part of, or linked to, other conspiracy theories such as QAnon, anti-vaccine theories, or anti-5G misinformation asserting that Western governments seek to oppress their populations. Proponents of the 15-minute concept, including Carlos Moreno, have received death threats.
Some conspiracy theorists conflate the 15-minute concept with the British low-traffic neighbourhood approach, which includes license plate scanners in some implementations. This has led to assertions that the 15-minute model would fine residents for leaving their home districts, or that it would confine people in "open-air prisons". Conspiracy theorists believe the World Economic Forum (WEF) wants to lock people in their homes on the pretext of climate change. Such beliefs are part of a larger network of conspiracy theories surrounding the concept of a "Great Reset".
In a 2023 protest by some 2,000 demonstrators in Oxford, signs described 15-minute cities as "ghettos" and an instrument of "tyrannical control" by the WEF. Canadian media commentator Jordan Peterson has described 15-minute cities as a "perversion". QAnon supporters have claimed a February 2023 derailment of a train carrying hazardous chemicals in Ohio was part of a deliberate plot to force rural residents into 15-minute cities to restrict their personal freedom. Similar claims have been made about wildfires on the island of Maui in August 2023.
In 2023, the British Conservative government began to criticise the idea by name. In February 2023, the Conservative MP Nick Fletcher called 15-minute cities an "international socialist concept" during a debate in the UK Parliament, which was met with laughter. At the Conservative party conference in October 2023, Transport Secretary Mark Harper announced that he was "calling time on the misuse of so-called '15-minute cities'", criticising as "sinister" the idea "that local councils can decide how often you go to the shops, and that they can ration who uses the roads and when". No such powers have been proposed as part of the 15-minute city concept in the United Kingdom. Despite the claim being specifically debunked in a guide given to MPs by the Leader of the House of Commons, Health Minister Maria Caulfield included the fiction in a local election leaflet and reiterated it in a BBC interview.
See also
Fused grid – a type of urban planning design that aims to reconcile the urban grid and Radburn-design housing
Notes
References
Further reading
New Urbanism
Sustainable urban planning
Transportation planning
Sustainable transport
Mixed-use developments
Community building
Conspiracy theories | 15-minute city | [
"Physics"
] | 3,887 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
65,025,179 | https://en.wikipedia.org/wiki/Vaccine%20diplomacy | Vaccine diplomacy, a form of medical diplomacy, is the use of vaccines to improve a country's diplomatic relationships with, and influence over, other countries. Vaccine diplomacy also "means a set of diplomatic measures taken to ensure access to the best practices in the development of potential vaccines, to enhance bilateral and/or multilateral cooperation between countries in conducting joint R&D, and, in the case of the announcement of production, to ensure the signing of a contract for the purchase of the vaccine at the shortest term." Although primarily discussed in the context of the supply of COVID-19 vaccines, it also played a part in the distribution of the smallpox vaccine.
Early history of vaccine diplomacy
Commentators have identified vaccine diplomacy occurring as far back as the first vaccine, Edward Jenner's smallpox vaccine. It has also been identified in Soviet involvement with the Albert Sabin polio vaccine. The UN has also brokered ceasefires in order to conduct vaccination campaigns, such as with the Taliban in Afghanistan.
During the COVID-19 pandemic
Australia
Australia promised to ensure early access to a vaccine "for countries in our Pacific family, as well as regional partners in Southeast Asia" to help them fight the COVID-19 pandemic.
China
China's infection rates were sufficiently low, and its early handling of the COVID-19 pandemic sufficiently successful, that it could send vaccines abroad without domestic objections. By August 2021, China had donated 700 million vaccine doses abroad, more than all other countries combined. As academic Suisheng Zhao writes, "Just by showing up and helping plug the colossal gaps in the global supply, China gained ground." Moreover, the Center for Strategic and International Studies found that its vaccine diplomacy earned China goodwill and influence in several middle-income countries, many of which are also involved in the Belt and Road Initiative, indicating that such diplomacy could improve China's image and strengthen its relationships with countries that wished for, or already had, strong ties with China. However, because most Chinese-distributed vaccines went to such middle-income countries, many of the poorest countries were left highly vulnerable, undercutting China's attempts to present itself as a benevolent giver of needed goods and undermining Xi's claim that a Chinese-developed vaccine would be treated as a "global public good".
The Sinopharm BIBP vaccine is used for vaccinations by some countries in Asia, Africa, South America, and Europe. Sinopharm produced one billion doses of the BIBP vaccine in 2021, and had supplied 200 million doses by May.
CoronaVac is used for vaccinations by some countries in Asia, South America, North America, and Europe. Sinovac had a production capacity of 2 billion doses a year and had delivered 600 million total doses.
Convidecia is used for vaccination by some countries in Asia, Europe, and Latin America. Production capacity for Ad5-nCoV was expected to reach 500 million doses in 2021.
China pledged US$2 billion to support efforts by the WHO for programs against COVID-19, a US$1 billion loan to make its vaccine accessible to countries in Latin America and the Caribbean, and priority access to the vaccine for five Southeast Asian countries. The Sinopharm BIBP vaccine and CoronaVac were approved by the WHO as part of COVAX. By July 2021, GAVI had signed advance purchase agreements for 170 million doses of the Sinopharm BIBP vaccine, 350 million doses of CoronaVac, and 414 million doses of SCB-2019, another COVID-19 vaccine in Phase III trials.
All of these actions have been components of China's strategy of mask diplomacy, in which the state has distributed medical supplies, including COVID-19 vaccines, and financial support to other countries in an effort to restore China's historically maligned and recently ignominious image. At the same time, these actions have presented China as a pragmatic, self-driven problem solver, willing to establish alliances with other nations, in contrast to the United States' isolationist policies.
In addition, China used this opportunity to distance itself from, and even shift, the narrative regarding the start of COVID-19, as the virus was first identified in the nation. Although China gained some international sympathy, the country was also accosted by "accusations of fanning the pandemic by silencing early reports" and "dogged by international criticisms that trace the origins of the pandemic to a leak from a Wuhan lab." A 2020 Pew Research poll further suggests a negative narrative surrounding China: upon polling citizens in 14 economically advanced nations, including East Asian neighbors Japan and South Korea, a median of 61% said China did "a bad job dealing with the [COVID-19] outbreak" and 78% said they had no confidence in President Xi. Such a poor global perception suggests that China distributed vaccines partly as a means of repairing or strengthening its international image. Providing vaccine doses allowed China to combat negative narratives about its early handling of the crisis and recast itself as a provider of needed goods, while cultivating goodwill and showcasing the nation's technological strength. Through distributing its vaccines, China clearly took a stride toward appearing favorable in the eyes of the world, and perhaps toward reversing the criticism garnered at the early stages of the pandemic.
India
By late March 2021, India had produced 125 million doses of COVID-19 vaccines and had exported 55 million doses. 84 countries had received vaccines from India, either through COVAX, grants or regular purchases.
India sent millions of doses of COVID-19 vaccine to 95 countries including neighboring Bhutan, Afghanistan, Nepal, Bangladesh, Sri Lanka, Myanmar and the Maldives. India will also supply vaccines to Pakistan through the COVAX initiative.
During the second wave of the COVID-19 pandemic in India, the Vaccine Maitri program was put on hold until July 2021 due to the increased number of COVID-19 cases in India. As of 29 May 2021, India had exported 66.4 million doses, including 10.7 million doses provided as grants, to more than 95 nations.
India's health ministry said the country would resume COVID-19 vaccine exports under the COVAX and Vaccine Maitri initiatives by October, ahead of high-level talks on closing vaccine inequity gaps. World Health Organization chief Tedros Adhanom Ghebreyesus hailed India's decision to resume COVID-19 vaccine exports as an "important development" in support of the goal of reaching 40 per cent vaccination in all countries by the end of the year.
Mexico
Secretary of Foreign Affairs of Mexico Marcelo Ebrard announced agreements with CanSino Biologics and Walvax to conduct clinical trials for vaccines from China, with the possibility of manufacturing the vaccines in the country.
Marcelo Ebrard also announced agreements with Johnson & Johnson to trial its U.S. developed vaccine in Mexico.
Japan
In July 2020, Japan agreed to provide 11.6 billion yen (US$109million) to five countries along the Mekong River: Cambodia, Laos, Myanmar, Thailand and Vietnam over concerns with China's influence on vaccine production and distribution in Asia.
Russia
Russia, the first country to claim a COVID-19 vaccine, Sputnik V, says twenty countries "including Brazil, Indonesia and the United Arab Emirates" have requested access.
Turkey
Turkey has sent or donated CoronaVac vaccines to Azerbaijan, Bosnia and Herzegovina, Northern Cyprus, and North Macedonia.
United States
During the Trump administration, Secretary of Health and Human Services Alex Azar said the United States would share a vaccine with other countries only after the United States' needs had been met. The United States funded and placed multi-billion-dollar orders for hundreds of millions of vaccine doses from the United Kingdom's AstraZeneca and from Germany's BioNTech SE in collaboration with the American company Pfizer. The United States offered vaccine development to Indonesia in an August 2020 phone call between Mike Pompeo and Retno Marsudi.
The Biden administration has promised to finance vaccine manufacturing in various nations, announcing at the Quad Summit in March 2021 that it would provide up to one billion coronavirus vaccine doses across Asia by the end of 2022, together with India, Australia and Japan. The United States' vaccine export policies have been criticised as "Vaccine Apartheid" by The Independent.
European Union
UK-based AstraZeneca was accused of prioritizing the UK market when its EU vaccine production lagged behind the UK's, and the EU threatened to restrict vaccine exports. Diplomatic protests from the Irish and UK sides resolved the matter and the threat was withdrawn. In March 2021, the EU again planned to suspend vaccine exports in order to press the UK to export its domestic vaccine production.
Vaccine nationalism
The race for vaccines led to fears of vaccine nationalism, in which developed countries producing home-grown vaccines would benefit while poorer countries would not get access to vaccines as soon, ultimately prolonging the pandemic. A similar phenomenon was observed during the H1N1 flu and Ebola crises. During the pandemic, there was a "diplomatic race ... for potential vaccines."
Another concern has been that wealthier countries would gain prioritized access to vaccines based on their ability to pay. The COVAX program was established with the intention of counteracting this development. In 2021, an unequal distribution of vaccines based on the principle of vaccine nationalism was observed between high, middle, and low income countries. An August 2021 study concluded that this behavior has resulted in increased transmission of COVID-19, especially because it encourages the development of COVID-19 variants.
Possible collaboration among countries
In early August 2020, Malaysian Minister of Foreign Affairs Hishammuddin Hussein said on Twitter that he had spoken with both Chinese Foreign Minister Wang Yi and United States Secretary of State Mike Pompeo on methods to further collaboration on vaccines.
See also
Aid
Soft power
Science diplomacy
Science diplomacy and pandemics
Further reading
References
Vaccines
Medical diplomacy | Vaccine diplomacy | [
"Biology"
] | 2,067 | [
"Vaccination",
"Vaccines"
] |
65,028,571 | https://en.wikipedia.org/wiki/Verkada | Verkada Inc. is an American security technology company based in San Mateo, California. The company combines security equipment such as video cameras, access control systems and environmental sensors with cloud-based machine vision and artificial intelligence.
The company was founded in 2016. In 2021, it was the target of a data breach that accessed security camera footage and private data.
History
Verkada Inc. was founded in 2016 in Menlo Park, California by three Stanford University graduates: Filip Kaliszan, James Ren, and Benjamin Bercovitz, who were joined by Hans Robertson, co-founder and former COO of Meraki (now Cisco Meraki). Kaliszan, Ren, and Bercovitz had previously collaborated on CourseRank, a class data aggregation platform that was acquired by Chegg in 2010.
Verkada exited the beta development stage in September 2017, with a product offering of two camera models.
In 2019, Forbes included Verkada in its Next Billion Dollar Startups list, as well as that year's AI 50 list of most promising artificial intelligence companies. In April, the company announced a $40 million Series B funding round, which valued the company at $540 million.
In January 2020, the company raised $80 million in a Series C funding round led by Felicis Ventures, giving the company a $1.6 billion valuation. In spring 2020, the company launched its first access control device, the first move in a shift beyond cameras toward integrating security cameras and locks onto a single platform. In June, during the COVID-19 crisis, Verkada instituted a program offering free surveillance kits to businesses and healthcare institutions to remotely monitor high-risk locations. It also added features to let customers detect when crowds are forming and to identify high-traffic areas that might need more cleaning. In September, the company launched a line of integrated environmental sensors for facilities monitoring.
In April 2021, Bloomberg News reported allegations by former employees accusing the company of having a "bro" culture, with lax device security, excessive focus on profit, and parties during the COVID-19 pandemic. In the Bloomberg reporting, Verkada acknowledged an internal lapse in judgment and was reportedly working to create a more inclusive work environment, including reviewing gender pay equity and implementing better training. In September, the company began donating security cameras to Asian Pacific American business communities, starting with the Oakland, California, Chinatown Chamber of Commerce, to address growing anti-Asian threats and violence against its members.
In August 2022, the company announced a mailroom product to help companies keep track of mail packages and shipments coming into their facilities. In September, the company raised $205 million in Series D funding, bringing its valuation to $3.2 billion. Despite the data breach it suffered in 2021, the company continued its expansion: in October 2023, it closed a $305 million funding round with participation from Lightspeed, followed shortly by a $100 million injection from Alkeon Capital.
In September 2024, Verkada was sued by the United States Department of Justice for violating the CAN-SPAM Act.
Data breach
On March 8, 2021, Verkada was hacked by an international group including maia arson crimew and calling themselves the "APT69420 Arson Cats," which gained access to their network for about 36 hours and collected about 5 gigabytes of data.
Initially, it was reported that the scope of the incident included live and recorded security camera footage from more than 150,000 cameras. It was later reported that 95 customers' video and image data were accessed. Crimew told Bloomberg News that the hack "exposes just how broadly we're being surveilled".
In response to the data breach, in April 2021 it was reported that Verkada CEO Filip Kaliszan announced a series of measures, including red team/blue team exercises, a bug bounty program, mandatory two-factor authentication use by Verkada support staff, and the sharing of more audit logs with Verkada customers.
Controversies
In August 2021, Motorola Solutions filed a 52-page complaint against Verkada with the United States International Trade Commission, alleging that Verkada cameras and software infringe upon patents held by Motorola subsidiary Avigilon. Verkada subsequently filed a lawsuit against Motorola Solutions in the California Northern District Court in September 2021, arguing that Motorola has "sought to effectively shut Verkada’s business down." Later in September, the International Trade Commission initiated its investigation into Motorola's complaint, with Verkada stating in its response that it does not infringe upon any of Motorola's patents.
On October 24, 2022, the presiding administrative law judge ("ALJ") of the ITC issued a final initial determination ("FID") finding that a violation of section 337 had occurred and that Verkada's products infringed claims 6–11 of one of the three patents asserted by Motorola, but that there was no infringement with respect to the other asserted claims of that patent or the claims of the other two patents. Both Verkada and Motorola filed for ITC review of the FID. On April 4, 2023, the ITC issued a final determination and terminated the investigation, finding that Verkada's products did not infringe any of Motorola's three patents.
Verkada uses facial recognition in its cameras, including systems installed in public housing. Independent testing showed a 15–85% false positive rate in the matches. The use of surveillance cameras equipped with facial recognition technology affects the daily lives of residents in poor neighbourhoods and public housing areas, making them more difficult and complicated, and there is no evidence of its usefulness in reducing crime. Facial recognition is regulated by law in Illinois and Texas and has been under scrutiny by HUD.
References
External links
Video surveillance companies
Physical security
Companies based in San Mateo, California
Technology companies based in the San Francisco Bay Area
Computer companies of the United States
Computer companies established in 2016
Computer hardware companies
American companies established in 2016
2016 establishments in California | Verkada | [
"Technology"
] | 1,257 | [
"Computer hardware companies",
"Computers"
] |
65,028,789 | https://en.wikipedia.org/wiki/Jim%20Hall%20%28civil%20engineer%29 | James Hall (born May 6, 1968) is Professor of Climate and Environmental Risks in the Environmental Change Institute at the University of Oxford, where he leads the Oxford Programme for Sustainable Infrastructure Systems. He is Senior Research Fellow at the Department of Engineering Science and Fellow of Linacre College. Hall is a member of the UK Prime Minister's Council for Science and Technology, a commissioner of the National Infrastructure Commission, and President of the Institution of Civil Engineers for the year November 2024 to October 2025.
He was appointed as a Fellow of the Royal Academy of Engineering in 2010. He was a member of the Adaptation Sub-Committee of the UK Climate Change Committee from 2009 to 2019, and was chair of the Science and Advisory Committee of the International Institute for Applied Systems Analysis from 2020 to 2022.
Career
Hall was born in Sidcup, England, and studied civil engineering at the University of Bristol, with an exchange at the École Nationale des Ponts et Chaussées, before graduating in 1990. He was a civil engineer with Taylor Woodrow Construction from 1987 to 1990 and then served with VSO in Guyana from 1991 to 1993, working on flood protection and drainage projects. He worked with water specialist HR Wallingford from 1993 to 1995 before returning to Bristol University to undertake a PhD in engineering systems and uncertainty analysis, which he completed in 1999. He held a Royal Academy of Engineering post-doctoral research fellowship from 1999 to 2004 and became reader in civil engineering systems at the University of Bristol. In 2004, he was appointed as the inaugural Professor of Earth Systems Engineering at Newcastle University, where he served until 2011. He represented Newcastle University as a member of the Tyndall Centre Consortium, leading the centre's cities research programme, and became deputy director of the Tyndall Centre. He was appointed director of the Environmental Change Institute at Oxford University and was instrumental in establishing the Oxford Networks for the Environment (ONE), which bring together research in the University of Oxford on energy, climate, water, biodiversity and food. In 2018, he stood down as director of the Environmental Change Institute.
Research
He researches risk analysis and decision-making under uncertainty for water resource systems, flood and coastal risk management, infrastructure systems and adaptation to climate change.
Water Resources: Hall developed methods for planning water resources in the context of uncertain future climate changes. In 2018, Hall and his former doctoral student Edoardo Borgomeo were awarded the Prince Sultan Abdulaziz International Prize in the category of Water Management and Protection for developing a new risk-based framework to assess water security and plan water supply infrastructure in times of climate change.
His research has focused on the quantification of risks from water resource systems especially the risks of water shortages and harmful water quality for people and the environment. This has contributed to the concept and literature of water security although this approach has been criticised as reductionist. With Claudia Sadoff and David Grey he was co-chair of the Global water partnership/OECD Task Force on the Economics of Water Security and Sustainable Growth.
Hall's analysis of water risks in Britain provided evidence for the National Infrastructure Commission's 2018 report Preparing for a Drier Future, for the Environment Agency's National framework for Water Resources and for the UK water Regulators' Alliance for Progressing Infrastructure Development (RAPID). He is editor of the AGU journal Water Resources Research.
Flood risk: Hall developed the flood risk analysis for the first National Flood Risk Assessment (NaFRA) in England and Wales. The same research now also underpins the Environment Agency's Long Term Investment Strategy. He also developed the framework for uncertainty analysis in appraisal of options for protecting London from flooding over the 21st century, as part of the Environment Agency's 2012 Thames Estuary 2100 project. He was coordinating lead author in the Government Office of Science and Technology's Foresight Future Flooding project and was a member of the Scientific Advisory Group for Emergencies (SAGE) for the 2014 floods emergency. He was advisor during the 2016 floods and the subsequent National Flood Resilience Review.
Hall has published two books on flooding: Flood Risk Management in Europe: Innovation in Policy and Practice and Applied Uncertainty in Flood Risk Management.
Coastal Change: With Mike Walkden, Hall developed the SCAPE model, which can predict coastal cliff erosion decades into the future. SCAPE has been used to predict the impacts of climate change for coastal towns and nuclear sites. He was part of the team that developed the Tyndall Coastal Simulator which models the response of the East Anglian coast to climate change. Hall conceived the CoastalME modelling environment for simulating decadal to centennial morphological changes. He led the Committee on Climate Change's 2018 report Managing the coast in a changing climate.
Climate Change: Hall's research on adaptation to climate change has focused on climate change risk assessment and decision-making under uncertainty. He was a contributing author to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change which won a Nobel Peace Prize. He was an advisor to the Stern Review on the Economics of Climate Change and led the infrastructure background paper for the Global Commission on Adaptation. Between 2009 and 2019, Hall was a member of the UK Adaptation Committee which is part of the Independent Climate Change Committee established by the 2008 UK Climate Change Act.
He is chair of the Steering Group of the £18.7m UKRI Climate Resilience Programme and served on the Governance Board and the Peer Review Panel for the UK's national climate projections, UKCP18.
Infrastructure Systems: Jim Hall founded the UK Infrastructure Transitions Research Consortium which received two research Programme Grants from the Engineering and Physical Science Research Council. Hall led the development of the National Infrastructure Systems Model (NISMOD) which simulates the behaviour and interactions between energy, transport, digital, water and waste systems. NISMOD was used for the National Needs Assessment led by Sir John Armitt and for the UK's first National Infrastructure Assessment.
Hall now chairs the £8 million Data and Analytics Facility for National Infrastructure (DAFNI) at the Rutherford Appleton Laboratory. His book The Future of National Infrastructure sets out the challenges of sustainable infrastructure in the 21st century and provides a template for assessing long-term policy, planning and investments. NISMOD has been taken up by the UN Office for Project Services (UNOPS) in pursuit of the Sustainable Development Goals and has been used to inform infrastructure planning in Curaçao, St Lucia and Ghana. Hall developed several methods for analysing risks to infrastructure networks and prioritising actions to enhance network resilience. The work was used as part of the National Infrastructure Commission's 2020 study of infrastructure resilience. The work has twice been recognised with the award of the Lloyd's Science Prize and it has been applied to the analysis of infrastructure network resilience in Tanzania, Vietnam, Argentina and China and at global scale.
Uncertainty and decision analysis: Hall applied generalised theories of probability to civil engineering and environmental systems, including random set theory, the theory of imprecise probability and info-gap theory. He applied the theory of imprecise probabilities to analyse tipping points in the Earth System.
Mountaineering
Jim Hall has climbed new routes in Europe, North America, South America, the Himalayas and the Antarctic. He achieved the first ascent of South Face Thunder Mountain (Alaska) with Paul Ramsden and Nick Lewis, the first winter ascents of Cerro Poincenot and Aig Guillaumet and a winter ascent of Fitzroy Supercouloir (Patagonia) with Paul Ramsden, Nick Lewis and Andy Kirkpatrick, recounted in Kirkpatrick's book Psychovertical.
Honours
Prince Sultan Abdul Aziz International Prize for Water (2018).
Fellow of the Royal Academy of Engineering (2010).
Institution of Civil Engineers' Robert Alfred Carr Prize (2004).
Institution of Civil Engineers' George Stephenson Medal (2001).
Institution of Civil Engineers' Frederick Palmer Prize (2001).
References
British civil engineers
Environmental engineers
Alumni of the University of Bristol
Academics of Newcastle University
Academics of the University of Oxford
Fellows of the Royal Academy of Engineering
Living people
1968 births | Jim Hall (civil engineer) | [
"Chemistry",
"Engineering"
] | 1,634 | [
"Environmental engineers",
"Environmental engineering"
] |
65,031,039 | https://en.wikipedia.org/wiki/Stormtroopers%20Advance%20Under%20a%20Gas%20Attack | Stormtroopers Advance Under a Gas Attack (German: Sturmtruppe geht unter Gas vor) is an engraving in aquatint by Otto Dix representing German soldiers in combat during the First World War. It is the twelfth in the series of fifty engravings entitled The War, published in 1924. Copies are kept at the German Historical Museum in Berlin, at the Museum of Modern Art in New York, and at the Minneapolis Institute of Art, among other public collections.
Description
The engraving is almost monochrome and rectangular in format (19.3 × 28.8 cm for the engraving, 34.8 × 47.3 cm for the sheet). It represents five German stormtroopers, recognizable by their steel helmets and all wearing gas masks, as they advance into enemy lines under a gas attack.
References
Anti-war works
World War I in art
1920s prints
Otto Dix etchings | Stormtroopers Advance Under a Gas Attack | [
"Chemistry"
] | 188 | [
"Chemical war and weapons in popular culture",
"Chemical weapons"
] |
65,037,555 | https://en.wikipedia.org/wiki/EU%20Carbon%20Border%20Adjustment%20Mechanism | The EU Carbon Border Adjustment Mechanism (CBAM, pronounced Si-Bam) is a carbon tariff on carbon intensive products, such as steel, cement and some electricity, imported to the European Union. Legislated as part of the European Green Deal, it takes effect in 2026, with reporting starting in 2023. CBAM was passed by the European Parliament with 450 votes for, 115 against, and 55 abstentions and the Council of the EU with 24 countries in favour. It entered into force on 17 May 2023.
Contents
The price of CBAM certificates is linked to the price of EU allowances under the European Union Emissions Trading System introduced in 2005. The CBAM is designed to stem carbon leakage to countries without a carbon price, and will also permit the EU to stop giving free allowances to some carbon-intensive sectors within its borders. All this should hasten decarbonization.
After the political (provisional) agreement between the Council and the European Parliament was reached in December 2022, the CBAM began to apply on 1 October 2023 and is passing through several phases:
From October 2023 to the end of 2025 (transitional phase): importers of products in six carbon-intensive sectors highly exposed to international trade, namely aluminium, cement, electricity, fertilisers, hydrogen, and iron and steel, will need to report their emissions. During the transitional phase, regulators will assess whether other products, for example certain downstream products, can be added to the list.
From the beginning of 2026 importers of products included in these 6 sectors will begin to pay a border carbon tax for their products based on the price of allowances in the European Union Emissions Trading System.
By 2030 all sectors covered by the European Union Emissions Trading System will be covered by CBAM.
By 2034 free allowances in the relevant sectors in the European Union will be phased out as the fully implemented CBAM ensures a level playing field for European companies in comparison to importers.
To address the 'lose-lose' scenario of carbon leakage, characterised by a general loss of competitiveness of EU industries with no gain from the perspective of climate protection, the CBAM will require importers of the targeted goods to purchase a sufficient amount of 'CBAM certificates' to cover the emissions embedded in their products. Since the main purpose of the CBAM is to avoid carbon leakage, the mechanism tries to subject covered imports to the same carbon price imposed on internal producers under the EU ETS. In other words, the EU is trying to make importers bear a regulatory-cost burden equivalent to that borne by European producers.
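The certificate obligation described above amounts to simple arithmetic: one certificate per tonne of embedded CO2, priced at the EU ETS allowance price. The sketch below is illustrative only; the function name and all figures are invented, and the flat deduction of a carbon price already paid abroad is a simplification of the regulation's actual method.

```python
# Illustrative sketch of the certificate obligation: one CBAM certificate
# per tonne of embedded CO2, priced at the EU ETS allowance price. The
# flat deduction of a foreign carbon price is a simplification.
def cbam_certificates_cost(embedded_emissions_t: float,
                           eu_ets_price_eur: float,
                           foreign_carbon_price_eur: float = 0.0) -> float:
    effective_price = max(eu_ets_price_eur - foreign_carbon_price_eur, 0.0)
    return embedded_emissions_t * effective_price

# 1,000 t of steel at 1.9 t CO2/t embedded, ETS price EUR 80/t,
# EUR 20/t already paid abroad (all figures invented for illustration):
print(cbam_certificates_cost(1000 * 1.9, 80.0, 20.0))  # 114000.0
```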
Under article 6, importers must make a "CBAM declaration" with the quantity of goods, embedded emissions, and certificates for payment of the carbon import tax.
Annex I sets out the goods that attract the import tax, including cement, electricity, fertilisers (such as nitric acid, ammonia, potassium), iron and steel (including tanks, drums, containers), and aluminium.
Annex II specifies that the CBAM does not apply to the four non-EU member states that are included in the European Economic Area, namely Iceland, Liechtenstein, Norway and Switzerland.
Annex III sets out the methods for calculating embedded greenhouse gas emissions.
Exporters will be required to report their emissions, while importers must purchase CBAM certificates; the added compliance costs will reduce the profitability of carbon-intensive exports to the EU.
Debate
The implementation of the CBAM by the EU is a major step towards addressing the issue of carbon leakage and ensuring a level playing field for European businesses worldwide against cheaper goods from economies outside the EU lacking carbon taxation. The import partners most affected will be Russia, China, Turkey, Ukraine, the Balkans, as well as Mozambique, Zimbabwe, and Cameroon. This mechanism allows the EU to unilaterally impose a levy on imports from countries that do not meet the environmental standards set by the EU.
Compliance and monitoring
Since July 2024 the EU demands "real data" on how energy-intensive imported goods were produced, while estimated standard values are only allowed for some 20% of the emissions. A spokesperson of the Mechanical Engineering Industry Association (VDMA) complained in September 2024 that the required data are often not available, either because suppliers do not collect them in the first place or because they are unwilling to hand them over. Additionally, every importer can be held accountable for the data collected from their suppliers, but importers often lack the resources to verify all of them, or the influence to force suppliers to comply with the CBAM regulations. Furthermore, the national offices meant to help companies obtain accurate data were often not yet operational. The de minimis rule exempts imports of up to €150 from CBAM; VDMA representatives campaign to raise that threshold to €5,000.
WTO compatibility and non-discrimination
The EU should ensure that the CBAM is compatible with its international obligations under the World Trade Organization (WTO), according to two legal scholars at the University of Ottawa. This means that the mechanism should not discriminate against any particular country or violate the principles of free trade. The EU should also engage in constructive dialogue with its trading partners, including major emitters such as China and the United States, to ensure that the CBAM is consistent with global climate goals and does not create unnecessary tensions or trade disputes.
Incentivisation of carbon pricing in countries outside the EU
If countries outside the European Union adopt their own carbon pricing policies, "they will avoid the EU’s carbon border tax and keep the revenues for their own decarbonization projects". A similar UK CBAM will be implemented by 2027.
The carbon import fee is not yet proposed to apply to a wide range of other products or services, such as automobiles, clothing, food and animal products (including those that lead to deforestation), shipping, aviation, or the importation of gas, oil and coal.
It has been suggested that the mechanism will help reduce emissions not only by making companies reduce emissions but also by incentivising other countries (like the United States, which lacks federal carbon pricing) to create similar mechanisms. Some authors even argue that the CBAM constitutes the beginning of a climate club, as proposed by Nobel Memorial Prize winner William Nordhaus.
The carbon market in India and the Turkish Emissions Trading System aim to keep the revenues for their own budgets.
According to a report of the Asian Development Bank, the CBAM will reduce emissions only a little (which will be quickly offset by the rise in carbon intensive production), while harming import to the European Union. The report says "mechanisms to share emission reduction technology would be more effective".
Developing countries
According to one Amsterdam legal scholar, the EU should provide adequate support to the least developed countries (LDCs) to help them comply with the CBAM. This support could include technical assistance, capacity building, or financial incentives for investments in low-carbon technologies. By providing such support, the EU can ensure that businesses have the necessary resources and knowledge to transition to a low-carbon economy and avoid the risk of carbon leakage. Another author has suggested that the transition to a low-carbon economy requires technology and investment, which may require investment in countries in the Global South. Proposed solutions include technology transfer and green finance.
Exports
Border adjustments for imports but not for exports leads to reduced global competitiveness for domestic carbon-intensive products.
References
External links
Carbon Border Adjustment Mechanism proposal
Customs duties
Greenhouse gas emissions
Emissions trading
Climate change policy
European Green Deal
2023 in the European Union
2023 in economic history
2023 in law
European Union customs regulations | EU Carbon Border Adjustment Mechanism | [
"Chemistry"
] | 1,543 | [
"Greenhouse gases",
"Greenhouse gas emissions"
] |
65,041,721 | https://en.wikipedia.org/wiki/Hexachara | Hexachara is a genus of fossil charophyte (aquatic green alga) that is likely to have formed meadows within sheltered oligohaline reaches of lakes.
Etymology
Hexachara is derived from Greek "hex", meaning six, a reference to the hexaradial symmetry; chara, referring to membership of the Charales. The specific name setacea is derived from Latin "seta", a bristle, that is a reference to the thin pointed branchlet. The specific name riniensis is derived from the isiXhosa word Rhini, which is a traditional name for Grahamstown/Makhanda and the surrounding valley.
Description
The whorls of branches are arranged in a hexaradial symmetrical manner. In Hexachara each node produces a whorl of six laterals, and oogonia are produced on each lateral. The known species to date include Hexachara setacea and Hexachara riniensis, which have been recovered from a carbonaceous shale near the top of the Late Devonian (Famennian) Witpoort Formation (Witteberg Group) exposed in a road cutting south of Grahamstown (Waterloo Farm lagerstätte) in South Africa. Together with Octochara species from the same locality these represent the only reconstructable Devonian charophytes with in situ oogonia.
In H. setacea, internode parts of axis are outwardly uncorticated and about 3–4 mm long and 0.5 mm in diameter. The nodes in H. setacea bear whorls with diameters of about 6 mm and consisting of six radial branches and each branch encompasses a short stalk that gives rise to at least one slender pointed branchlet and carries an oogonium. The compact tips of H. setacea often formed small 'rollers', following fragmentation of axes. These are commonly enclosed by an organic 'halo', interpreted as a film of green algae utilizing the charophyte as an attachment substratum.
In H. riniensis the laterals divide at about one-half of their length into smaller whorls each of which bears six oogonia and the internodes are uncorticated, and about 0.3 mm in diameter. The nodes bear whorls, about 8 mm in diameter, that consist of six radial laterals. Each lateral divides at about one-half of its length to give rise to a secondary hexaradial whorl each branchlet of which carries an oogonium. H. riniensis has a very small oogonia attached to the ends of branchlets of secondary whorls and surrounded by extremely fine hair-like tertiary branchlets. Each oogonium is ovoid, about 1.8 mm long and 1.0 mm wide at widest point, and tapers slightly towards point of attachment.
References
Fossils
Fossil algae
Charophyta | Hexachara | [
"Biology"
] | 602 | [
"Fossil algae",
"Algae"
] |
65,042,180 | https://en.wikipedia.org/wiki/Geographic%20centre%20of%20Uganda | The geographic centre of Uganda is north of Lake Kyoga in Olyaka village, Olyaka parish in Namasale sub-county in Amolatar District, Northern Uganda.
The point is marked by the Amolatar Monument, also known as the Uganda Tribes Monument, which displays the names of all ethnic tribes in Uganda. The Amolatar peninsula offered refuge to people of different tribes during the Karimojong cattle rustling from the 1970s through the 1980s and early 1990s, most of whom ended up settling in the district. Once a year, in September, people from all tribes of the region gather at this place and pray.
The method by which the coordinates of this geographical centre were determined is not known. The centre point of a bounding box completely enclosing the area of Uganda results in another pair of coordinates (1.368153°N, 32.303236°E), which belongs to a point along the Kampala–Gulu Highway, west of Lake Kyoga.
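The bounding-box method mentioned above can be sketched in a few lines of Python. The extreme coordinates used here are rough approximations of Uganda's southernmost/northernmost latitudes and westernmost/easternmost longitudes, so the result only roughly matches the coordinates quoted above.

```python
def bounding_box_centre(lat_min, lat_max, lon_min, lon_max):
    """Centre of the smallest latitude/longitude box enclosing a region."""
    return ((lat_min + lat_max) / 2, (lon_min + lon_max) / 2)

# Approximate extreme coordinates of Uganda (rough figures, for
# illustration only):
lat, lon = bounding_box_centre(-1.48, 4.23, 29.57, 35.00)
print(round(lat, 3), round(lon, 3))  # roughly 1.375 32.285
```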
References
Uganda
Geography of Uganda | Geographic centre of Uganda | [
"Physics",
"Mathematics"
] | 202 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
65,043,467 | https://en.wikipedia.org/wiki/C4%20model | The C4 model is a lean graphical notation technique for modeling the architecture of software systems. It is based on a structural decomposition (a hierarchical tree structure) of a system into containers and components and relies on existing modelling techniques such as Unified Modeling Language (UML) or entity–relationship diagrams (ERDs) for the more detailed decomposition of the architectural building blocks.
History
The C4 model was created by the software architect Simon Brown between 2006 and 2011, building on the Unified Modelling Language (UML) and the 4+1 architectural view model. The launch of an official website under a Creative Commons license and an article published in 2018 popularised the emerging technique.
Overview
The C4 model documents the architecture of a software system, by showing multiple points of view that explain the decomposition of a system into containers and components, the relationship between these elements, and, where appropriate, the relation with its users.
The viewpoints are organized according to their hierarchical level:
Context diagrams (level 1): show the system in scope and its relationship with users and other systems;
Container diagrams (level 2): decompose a system into interrelated containers. A container represents an application or a data store;
Component diagrams (level 3): decompose containers into interrelated components, and relate the components to other containers or other systems;
Code diagrams (level 4): provide additional details about the design of the architectural elements that can be mapped to code. The C4 model relies at this level on existing notations such as Unified Modelling Language (UML), Entity Relation Diagrams (ERD) or diagrams generated by Integrated Development Environments (IDE).
For level 1 to 3, the C4 model uses 5 basic diagramming elements: persons, software systems, containers, components and relationships. The technique is not prescriptive for the layout, shape, colour and style of these elements. Instead, the C4 model recommends using simple diagrams based on nested boxes in order to facilitate interactive collaborative drawing. The technique also promotes good modelling practices such as providing a title and legend on every diagram, and clear unambiguous labelling in order to facilitate the understanding by the intended audience.
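The decomposition behind levels 1 to 3 can be sketched as a simple nested data structure: a software system contains containers, and containers contain components. All class and instance names below are illustrative, not part of the C4 model itself.

```python
from dataclasses import dataclass, field

# Minimal sketch of the C4 hierarchy for levels 1-3; names illustrative.
@dataclass
class Component:
    name: str

@dataclass
class Container:  # an application or a data store
    name: str
    components: list = field(default_factory=list)

@dataclass
class SoftwareSystem:
    name: str
    containers: list = field(default_factory=list)

shop = SoftwareSystem("Web Shop", containers=[
    Container("API application",
              components=[Component("OrderController")]),
    Container("Database"),
])
print(len(shop.containers))  # 2
```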
The C4 model facilitates collaborative visual architecting and evolutionary architecture in the context of agile teams where more formal documentation methods and up-front architectural design are not desired.
See also
Software architecture
References
External links
Official site
Architecture description language
Software architecture
Diagrams
Notation
Knowledge representation
Software modeling language
Modeling languages | C4 model | [
"Mathematics"
] | 500 | [
"Symbols",
"Notation"
] |
65,043,485 | https://en.wikipedia.org/wiki/Extraterrestrial%3A%20The%20First%20Sign%20of%20Intelligent%20Life%20Beyond%20Earth | Extraterrestrial: The First Sign of Intelligent Life Beyond Earth (also known as Extraterrestrial) is a popular science book written by American theoretical physicist and Harvard University astronomer Avi Loeb, published by Houghton Mifflin Harcourt on 26 January 2021.
Contents
The book describes the 2017 detection of Oumuamua, the first known interstellar object to pass through the Solar System. Loeb, an astronomer at Harvard University, speculates that the object might be an extraterrestrial artifact, a suggestion considered unlikely by the scientific community as a whole. Earlier, Loeb claimed to have demonstrated that the interstellar object was not an asteroid, and that it was moving too fast, in too unusual an orbit, and with no gas trail or debris in its path, to be a comet. Loeb believes, due to the observed acceleration of the object near the Sun, that Oumuamua may be a thin disk that acts as a solar sail. Further, Loeb and colleagues argued that the object is unlikely to be frozen hydrogen, as proposed by other researchers.
Elizabeth Kolbert of The New Yorker magazine summarized the reasoning used by Avi Loeb about Oumuamua as follows:
Besides Oumuamua, another interstellar object, the comet 2I/Borisov, has been detected passing through the Solar System. In comparison, 2I/Borisov has been found to be natural, whereas Oumuamua has not been so determined. The possibility that Oumuamua may be alien technology has not been ruled out, although such an explanation is considered unlikely by most scientists. Nonetheless, according to Loeb, "We should be open-minded and search for evidence rather than assume that everything we see in the sky must be rocks."
Reviews
Alan Lightman writes the book is "provocative and thrilling," and commends Loeb for suggesting that readers "think big and to expect the unexpected." Jeff Foust, editor and publisher of The Space Review, comments that Loeb "fails to close the case that the object must be artificial ... Just because something can’t be immediately explained by natural phenomena doesn't mean it’s not natural." Further, "Perhaps Oumuamua will turn out to be the first of many in a new class of interstellar objects with an unusual, but natural, origin. Or, maybe, it will be like the “Wow!” signal, which was never seen again and its source never identified; mysterious, but not necessarily alien". Dennis Overbye, science writer for The New York Times, notes that the book is, "part graceful memoir and part plea for keeping an open mind about the possibilities of what is out there in the universe — in particular, life. Otherwise, he says, we might miss something amazing, like the church officials in the 17th century who refused to look through Galileo’s telescope." Reviewing for The New Yorker, Elizabeth Kolbert writes, "It seems a good deal more likely that [the book] will be ranked with von Däniken's work than with Galileo's," but concedes "it's thrilling to imagine the possibilities."
On August 24, 2023, The New York Times published an article about Loeb, his search for signs of extraterrestrial life, and his related publications.
A followup book, entitled Interstellar: The Search for Extraterrestrial Life and Our Future in the Stars, was published on August 29, 2023.
References
External links
Official Book WebSite
Official Author WebSite
American non-fiction books
Astronomy books
Houghton Mifflin books
Popular science books
2021 non-fiction books
English-language non-fiction books | Extraterrestrial: The First Sign of Intelligent Life Beyond Earth | [
"Astronomy"
] | 746 | [
"Astronomy books",
"Works about astronomy"
] |
61,353,566 | https://en.wikipedia.org/wiki/Journal%20of%20Behavior%20Therapy%20and%20Experimental%20Psychiatry | The Journal of Behavior Therapy and Experimental Psychiatry is a quarterly peer-reviewed scientific journal covering research on psychopathology, primarily from an experimental psychology perspective. It was established in 1970 and is published by Elsevier. Its founding editor-in-chief was Joseph Wolpe, who served as editor-in-chief from the journal's founding until his death in 1997. The current editor-in-chief is Adam S. Radomsky (Concordia University). According to the Journal Citation Reports, the journal has a 2018 impact factor of 2.189.
References
External links
Psychopathology
Psychiatry journals
Behavior therapy
Experimental psychology journals
Academic journals established in 1970
Quarterly journals
English-language journals
Elsevier academic journals | Journal of Behavior Therapy and Experimental Psychiatry | [
"Biology"
] | 143 | [
"Behavior",
"Behavior therapy",
"Behaviorism"
] |
61,354,069 | https://en.wikipedia.org/wiki/NGC%203597 | NGC 3597 is a galaxy located approximately 150 million light-years away in the constellation of Crater. It was discovered by John Herschel on March 21, 1835.
Characteristics
NGC 3597 is thought to be the product of the collision of two large galaxies, and it appears to be slowly evolving to become an elliptical galaxy. Because of this, NGC 3597 is interesting to astronomers. Galaxies smashing together pool their available gas and dust, triggering new rounds of star birth. Some of this material ends up in dense pockets initially called proto-globular clusters, dozens of which festoon NGC 3597. These pockets will go on to collapse and form globular clusters, packed tightly full of millions of stars.
See also
Interacting galaxy
References
External links
Elliptical galaxies
Crater (constellation)
3597
034266 | NGC 3597 | [
"Astronomy"
] | 167 | [
"Crater (constellation)",
"Constellations"
] |
61,355,226 | https://en.wikipedia.org/wiki/NET%20Power%20Demonstration%20Facility | The NET Power Test Facility, located in La Porte, Texas, is an oxy-combustion, zero-emissions 50 MWth natural gas power plant owned and operated by NET Power. NET Power is owned by Constellation Energy Corporation, Occidental Petroleum Corporation (Oxy) Low Carbon Ventures, Baker Hughes Company and 8 Rivers Capital, the company holding the patents for the technology. The plant is a first-of-its-kind implementation of the Allam-Fetvedt Cycle and achieved first fire in May 2018. The Allam-Fetvedt cycle is designed to deliver lower-cost power while eliminating atmospheric emissions. The plant was featured in the Global CCS Institute's 2018 Status of CCS report. In recognition of the Allam-Fetvedt Cycle demonstration plant in La Porte, Texas, NET Power was awarded the 2018 International Excellence in Energy Breakthrough Technological Project of the Year at the Abu Dhabi International Petroleum Exhibition and Conference (ADIPEC).
References
Energy
Natural gas-fired power stations in Texas | NET Power Demonstration Facility | [
"Physics"
] | 201 | [
"Energy (physics)",
"Energy",
"Physical quantities"
] |
61,357,173 | https://en.wikipedia.org/wiki/CS%20Indic%20character%20set | The CS Indic character set, or the Classical Sanskrit Indic Character Set, is used by LaTeX to represent text used in the Romanization of Sanskrit. It is used in fonts, and is based on Code Page 437. Extended versions are the CSX Indic character set and the CSX+ Indic character set.
Code page layout
History
The CS and CSX character set was defined during an informal discussion over a beer between John Smith, Dominik Wujastyk and Ronald E. Emmerick during the World Sanskrit Conference in Vienna, 1990. A few months later they were endorsed by several other Indologists including Harry Falk, Richard Lariviere, G. Jan Meulenbeld, Hideaki Nakatani, Muneo Tokunaga, and Michio Yano.
References
Character encoding
Character sets | CS Indic character set | [
"Technology"
] | 172 | [
"Natural language and computing",
"Character encoding"
] |
61,357,290 | https://en.wikipedia.org/wiki/CSX%20Indic%20character%20set | The CSX Indic character set, or the Classical Sanskrit eXtended Indic Character Set, is used by LaTeX to represent text used in the Romanization of Sanskrit. It has no association with American railroad company CSX Transportation. It is an extension of the CS Indic character set, and is based on Code Page 437. An extended version is the CSX+ Indic character set. Michael Everson made a font in this character set for the Macintosh.
Code page layout
Note that some fonts have ā̃ (U+0101 LATIN SMALL LETTER A WITH MACRON, U+0303 COMBINING TILDE) at code point 171 (0xAB), ī̃ (U+012B LATIN SMALL LETTER I WITH MACRON, U+0303 COMBINING TILDE) at code point 172 (0xAC), and ū̃ (U+016B LATIN SMALL LETTER U WITH MACRON, U+0303 COMBINING TILDE) at code point 216 (0xD8).
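The variant mapping in the note above can be expressed as a tiny decoder. This is an illustrative sketch covering only the three code points mentioned in the note, not a complete CSX table; for simplicity, unmapped high bytes are passed through as Latin-1 rather than via the full code-page mapping.

```python
# Partial, illustrative decoder for the font variant described above.
# Only the three variant code points from the note are mapped; a real
# CSX table covers the whole upper half of the code page.
CSX_VARIANT = {
    171: "\u0101\u0303",  # a with macron + combining tilde
    172: "\u012B\u0303",  # i with macron + combining tilde
    216: "\u016B\u0303",  # u with macron + combining tilde
}

def decode_csx(data: bytes) -> str:
    """ASCII bytes pass through; mapped high bytes use the variant table."""
    return "".join(CSX_VARIANT.get(b, chr(b)) if b >= 0x80 else chr(b)
                   for b in data)

print(decode_csx(bytes([114, 171, 109, 97])))  # 'r' + a-macron-tilde + 'ma'
```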
History
See the shared history of the CS character set.
References
Character encoding
Character sets | CSX Indic character set | [
"Technology"
] | 223 | [
"Natural language and computing",
"Character encoding"
] |
61,357,438 | https://en.wikipedia.org/wiki/CSX%2B%20Indic%20character%20set | The CSX+ Indic character set, or the Classical Sanskrit eXtended Plus Indic Character Set, is used by LaTeX to represent text used in the Romanization of Sanskrit. It is an extension of the CSX Indic character set (but removes ÿ and the punctuation marks ¢, £, ¥, «, and »), which in turn is an extension of the CS Indic character set, and is based on Code Page 437. It fixes an issue with Windows programs, by moving á from code point 160 (0xA0) (which is problematic because it displays a regular space on Windows), to code point 158 (0x9E).
Code page layout
References
Character encoding
DOS code pages | CSX+ Indic character set | [
"Technology"
] | 150 | [
"Natural language and computing",
"Character encoding"
] |
61,357,933 | https://en.wikipedia.org/wiki/Pteridophyte%20Phylogeny%20Group | The Pteridophyte Phylogeny Group (PPG) is an informal international group of systematic botanists who collaborate to establish a classification of pteridophytes (lycophytes and ferns) that reflects knowledge about plant relationships discovered through phylogenetic studies. In 2016, the group published a classification for extant pteridophytes, termed "PPG I". The paper had 94 authors (26 principal and 68 additional).
The classification was presented as a consensus classification supported by the community of fern taxonomists, but the process has been criticized as partially exclusive and the classification is contested. Alternative classifications of ferns exist and are preferred by some more general taxonomists (see below).
PPG I
A first classification, PPG I, was produced in 2016, covering only extant (living) pteridophytes. The classification was rank-based, using the ranks of class, subclass, order, suborder, family, subfamily and genus.
Phylogeny
The classification was based on a consensus phylogeny, shown below to the level of order.
The very large order Polypodiales was divided into two suborders, as well as families not placed in a suborder:
Classification to subfamily level
To the level of subfamily, the PPG I classification is as follows.
Class Lycopodiopsida Bartl. (3 orders, 3 families, 18 genera)
Order Lycopodiales DC. ex Bercht. & J.Presl (1 family, 16 genera)
Family Lycopodiaceae P.Beauv. (16 genera)
Subfamily Lycopodielloideae W.H.Wagner & Beitel ex B.Øllg. (4 genera)
Subfamily Lycopodioideae W.H.Wagner & Beitel ex B. Øllg. (9 genera)
Subfamily Huperzioideae W.H.Wagner & Beitel ex B. Øllg. (3 genera)
Order Isoëtales Prantl (1 family, 1 genus)
Family Isoëtaceae Dumort. (1 genus)
Order Selaginellales Prantl (1 family, 1 genus)
Family Selaginellaceae Willk. (1 genus)
Class Polypodiopsida Cronquist, Takht. & W.Zimm. (11 orders, 48 families, 319 genera)
Subclass Equisetidae Warm. (1 order, 1 family, 1 genus)
Order Equisetales DC. ex Bercht. & J.Presl (1 family, 1 genus)
Family Equisetaceae Michx. ex DC. (1 genus)
Subclass Ophioglossidae Klinge (2 orders, 2 families, 12 genera)
Order Psilotales Prantl (1 family, 2 genera)
Family Psilotaceae J.W.Griff. & Henfr. (2 genera)
Order Ophioglossales Link (1 family, 10 genera)
Family Ophioglossaceae Martinov (10 genera)
Subfamily Helminthostachyoideae C.Presl (1 genus)
Subfamily Mankyuoideae J.R.Grant & B.Dauphin (1 genus)
Subfamily Ophioglossoideae C.Presl (4 genera)
Subfamily Botrychioideae C.Presl (4 genera)
Subclass Marattiidae Klinge (1 order, 1 family, 6 genera)
Order Marattiales Link (1 family, 6 genera)
Family Marattiaceae Kaulf. (6 genera)
Subclass Polypodiidae Cronquist, Takht. & W.Zimm. (7 orders, 44 families, 300 genera)
Order Osmundales Link (1 family, 6 genera)
Family Osmundaceae Martinov (6 genera)
Order Hymenophyllales A.B.Frank (1 family, 9 genera)
Family Hymenophyllaceae Mart. (9 genera)
Subfamily Trichomanoideae C.Presl (8 genera)
Subfamily Hymenophylloideae Burnett (1 genus)
Order Gleicheniales Schimp. (3 families, 10 genera)
Family Matoniaceae C.Presl (2 genera)
Family Dipteridaceae Seward & E.Dale (2 genera)
Family Gleicheniaceae C.Presl (6 genera)
Order Schizaeales Schimp. (3 families, 4 genera)
Family Lygodiaceae M.Roem. (1 genus)
Family Schizaeaceae Kaulf. (2 genera)
Family Anemiaceae Link (1 genus)
Order Salviniales Link (2 families, 5 genera)
Family Salviniaceae Martinov (2 genera)
Family Marsileaceae Mirb. (3 genera)
Order Cyatheales A.B.Frank (8 families, 13 genera)
Family Thyrsopteridaceae C.Presl (1 genus)
Family Loxsomataceae C.Presl (2 genera)
Family Culcitaceae Pic.Serm. (1 genus)
Family Plagiogyriaceae Bower (1 genus)
Family Cibotiaceae Korall (1 genus)
Family Metaxyaceae Pic.Serm. (1 genus)
Family Dicksoniaceae M.R.Schomb. (3 genera)
Family Cyatheaceae Kaulf. (3 genera)
Order Polypodiales Link (26 families, 253 genera)
Suborder Saccolomatineae Hovenkamp (1 family, 1 genus)
Family Saccolomataceae Doweld (1 genus)
Suborder Lindsaeineae Lehtonen & Tuomisto (3 families, 9 genera)
Family Cystodiaceae J.R.Croft (1 genus)
Family Lonchitidaceae Doweld (1 genus)
Family Lindsaeaceae C.Presl ex M.R.Schomb. (7 genera)
Suborder Pteridineae J.Prado & Schuettp. (1 family, 53 genera)
Family Pteridaceae E.D.M.Kirchn. (53 genera)
Subfamily Parkerioideae Burnett (2 genera)
Subfamily Cryptogrammoideae S.Lindsay (3 genera)
Subfamily Pteridoideae Link (13 genera)
Subfamily Vittarioideae Link (12 genera)
Subfamily Cheilanthoideae Horvat (23 genera)
Suborder Dennstaedtiineae Schwartsb. & Hovenkamp (1 family, 10 genera)
Family Dennstaedtiaceae Lotsy (10 genera)
Suborder Aspleniineae H.Schneid. & C.J.Rothf. (11 families, 72 genera)
Family Cystopteridaceae Shmakov (3 genera)
Family Rhachidosoraceae X.C.Zhang (1 genus)
Family Diplaziopsidaceae X.C.Zhang & Christenh. (2 genera)
Family Desmophlebiaceae Mynssen (1 genus)
Family Hemidictyaceae Christenh. & H.Schneid. (1 genus)
Family Aspleniaceae Newman (2 genera)
Family Woodsiaceae Herter (1 genus)
Family Onocleaceae Pic.Serm. (4 genera)
Family Blechnaceae Newman (24 genera)
Subfamily Stenochlaenoideae (Ching) J.P.Roux (3 genera)
Subfamily Woodwardioideae Gasper (3 genera)
Subfamily Blechnoideae Gasper, V.A.O.Dittrich & Salino (18 genera)
Family Athyriaceae Alston (3 genera)
Family Thelypteridaceae Ching ex Pic.Serm. (30 genera)
Subfamily Phegopteridoideae Salino, A.R.Sm. & T.E.Almeida (3 genera)
Subfamily Thelypteridoideae C.F.Reed (27 genera)
Suborder Polypodiineae Dumort. (9 families, 108 genera)
Family Didymochlaenaceae Ching ex Li Bing Zhang & Liang Zhang (1 genus)
Family Hypodematiaceae Ching (2 genera)
Family Dryopteridaceae Herter (26 genera)
Subfamily Polybotryoideae H.M.Liu & X.C.Zhang (7 genera)
Subfamily Elaphoglossoideae (Pic.Serm.) Crabbe, Jermy & Mickel (11 genera)
Subfamily Dryopteridoideae Link (6 genera)
2 genera not assigned to a subfamily
Family Nephrolepidaceae Pic.Serm. (1 genus)
Family Lomariopsidaceae Alston (4 genera)
Family Tectariaceae Panigrahi (7 genera)
Family Oleandraceae Ching ex Pic.Serm. (1 genus)
Family Davalliaceae M.R.Schomb. (1 genus)
Family Polypodiaceae J.Presl & C.Presl (65 genera)
Subfamily Loxogrammoideae H.Schneid. (2 genera)
Subfamily Platycerioideae B.K.Nayar (2 genera)
Subfamily Drynarioideae Crabbe, Jermy & Mickel (6 genera)
Subfamily Microsoroideae B.K.Nayar (12 genera)
Subfamily Polypodioideae Sweet (9 genera)
Subfamily Grammitidoideae Parris & Sundue (33 genera)
1 genus not assigned to a subfamily
Number of genera
The number of genera used in PPG I has proved controversial. PPG I uses 18 lycopod and 319 fern genera. The earlier system put forward by Smith et al. (2006) had suggested a range of 274 to 312 genera for ferns alone. By contrast, the system of Christenhusz and Chase (2014) used 5 lycopod and about 212 fern genera. The number of fern genera was further reduced to 207 in a subsequent publication.
The number of genera used in each of these two approaches has been defended by their proponents. Defending PPG I, Schuettpelz et al. (2018) argue that the larger number of genera is a result of "the gradual accumulation of new collections and new data" and hence "a greater appreciation of fern diversity and [..] an improved ability to distinguish taxa". They also argue that the number of species per genus in the PPG I system is already higher than in other groups of organisms (about 33 species per genus for ferns as opposed to about 22 species per genus for angiosperms) and that reducing the number of genera as Christenhusz and Chase propose yields the 'excessive' number of about 50 species per genus for ferns. In response, Christenhusz and Chase (2018) argue that the excessive splitting of genera destabilises the usage of names and will lead to greater instability in future, especially when nuclear DNA is employed, and that the highly split genera have few if any characters that can be used to recognize them, making identification difficult, even to generic level. They further argue that comparing numbers of species per genus in different groups is "fundamentally meaningless", because there is no limit to numbers of species per genus. They also argue that the new findings in phylogeny can easily be treated at subgeneric and subfamilial levels, so that the names used by non-specialists will remain unaltered.
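The species-per-genus ratios quoted above can be checked with rough arithmetic, assuming on the order of 10,500 extant fern species — an approximate figure that is not stated in the text itself:

```python
# Rough check of the species-per-genus ratios quoted in the debate,
# assuming about 10,500 extant fern species (an approximate figure
# not given in the text itself).
fern_species = 10_500

ppg1_genera = 319   # fern genera in PPG I
cc_genera = 207     # fern genera in Christenhusz & Chase (revised)

print(round(fern_species / ppg1_genera))  # 33
print(round(fern_species / cc_genera))    # 51, i.e. "about 50"
```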
See also
List of fern families
References
Botany organizations
Plant taxonomy
Taxonomy (biology) organizations
Systems of plant taxonomy | Pteridophyte Phylogeny Group | [
"Biology"
] | 2,403 | [
"Taxonomy (biology) organizations",
"Plant taxonomy",
"Taxonomy (biology)",
"Plants"
] |
61,358,217 | https://en.wikipedia.org/wiki/Dialdose | In chemistry, a dialdose is a monosaccharide containing two aldehyde groups. An example is the hexodialdose O=CH–(CHOH)4–CH=O, obtained by reducing glucuronic acid with sodium amalgam.
References
Monosaccharides | Dialdose | [
"Chemistry"
] | 68 | [
"Carbohydrates",
"Monosaccharides",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
61,358,363 | https://en.wikipedia.org/wiki/Bundesverband%20der%20Energie-%20und%20Wasserwirtschaft | The Bundesverband der Energie- und Wasserwirtschaft (BDEW) e.V. (German Association of Energy and Water Industries), headquartered in Berlin, is the German business organisation for the energy industry (power producers, grid operators, natural gas, electricity and district heating) and the water industry (water supply and sanitation). BDEW influences German energy policy and is a proponent of free-market designs. Depending on the source, BDEW represents over 1,800 or even 1,900 companies, including local and municipal utilities as well as regional and inter-regional suppliers. Among the big players in the association are E.ON, Vattenfall, EnBW and RWE.
References
External links
bdew.de - Home page
ec.europa.eu/transparencyregister/… - Identification number in the register: 20457441380-38
Business organisations based in Germany
Energy industry in Germany
Water industry | Bundesverband der Energie- und Wasserwirtschaft | [
"Environmental_science"
] | 193 | [
"Hydrology",
"Water industry"
] |
61,360,134 | https://en.wikipedia.org/wiki/Babai%27s%20problem | Babai's problem is a problem in algebraic graph theory first proposed in 1979 by László Babai.
Babai's problem
Let G be a finite group, let Irr(G) be the set of all irreducible characters of G, let Cay(G, S) be the Cayley graph (or directed Cayley graph) corresponding to a generating subset S of G \ {e}, and let ν be a positive integer. Is the set
M_ν(S) = { Σ_{s ∈ S} χ(s) : χ ∈ Irr(G), χ(1) = ν }
an invariant of the graph Cay(G, S)? In other words, does Cay(G, S) ≅ Cay(G, T) imply that M_ν(S) = M_ν(T)?
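For abelian groups every irreducible character is one-dimensional, and the character sums over a connection set are exactly the eigenvalues of the Cayley graph, so there the set in question is indeed a graph invariant. A toy check in Python for the cyclic group Z_5 (the helper name `char_sums` is ours, not from the literature); the connection sets {1, 4} and {2, 3} both give a 5-cycle, and their character-sum sets coincide:

```python
import cmath

def char_sums(n, S):
    """Character sums sum_{s in S} chi_j(s) for the cyclic group Z_n.

    chi_j(s) = exp(2*pi*i*j*s/n); for an inverse-closed S these sums
    are real and equal the eigenvalues of the Cayley graph Cay(Z_n, S).
    """
    w = cmath.exp(2j * cmath.pi / n)
    return sorted(round(sum(w ** (j * s) for s in S).real, 6)
                  for j in range(n))

# Cay(Z_5, {1,4}) and Cay(Z_5, {2,3}) are isomorphic (both 5-cycles),
# and their character-sum sets agree, as Babai's question asks:
print(char_sums(5, {1, 4}) == char_sums(5, {2, 3}))  # True
```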
BI-group
A finite group G is called a BI-group (Babai Invariant group) if Cay(G, S) ≅ Cay(G, T) for some inverse-closed subsets S and T of G \ {1} implies that M_ν(S) = M_ν(T) for all positive integers ν.
Open problem
Which finite groups are BI-groups?
See also
List of unsolved problems in mathematics
List of problems solved since 1995
References
Algebraic graph theory
Unsolved problems in graph theory | Babai's problem | [
"Mathematics"
] | 163 | [
"Unsolved problems in mathematics",
"Graph theory",
"Unsolved problems in graph theory",
"Mathematical relations",
"Mathematical problems",
"Algebra",
"Algebraic graph theory"
] |
61,363,750 | https://en.wikipedia.org/wiki/Aviation%20and%20Aerospace%20University%20Bangladesh | Aviation and Aerospace University Bangladesh is a public university in Bangladesh. It is funded by the government and is Bangladesh's first higher education institution on aerospace engineering. It is the 46th public university of Bangladesh. The permanent campus of AAUB is situated in Lalmonirhat, beside Lalmonirhat Airport. As per the admission circular, the Aeronautical Engineering program began in January 2020.
History
On July 11, 2018, the UGC drafted a law and submitted it to the Ministry of Education for setting up an aviation university in Bangladesh. At the Council of Ministers meeting on September 28, 2018, approval for establishing the university was granted, with three more universities receiving policy approval the same day. On February 28, 2019, education minister Dipu Moni placed a bill in the National Parliament regarding the inauguration of the institution, and it was passed. A temporary office was set up at the old airport at Tejgaon, Dhaka to handle the activities of the university. Air Vice-Marshal AHM Fazlul Haque was appointed founding Vice Chancellor of the institution on May 26, 2019. The current Vice Chancellor of the university is Air Vice-Marshal ASM Fakhrul Islam.
List of vice-chancellors
Air Vice-Marshal AHM Fazlul Haque (2019–2021)
Air Vice-Marshal Muhammad Nazrul Islam (2021–2022)
Air Vice-Marshal ASM Fakhrul Islam (2022–2024)
Air Vice-Marshal A K M Manirul Bahar (2024–present)
See also
Defence industry of Bangladesh
References
External link
Official Website
Technological institutes of Bangladesh
Public universities of Bangladesh
Public engineering universities of Bangladesh
Engineering universities of Bangladesh
Engineering universities and colleges in Bangladesh
Engineering universities and colleges
Educational institutions established in 2019
2019 establishments in Bangladesh | Aviation and Aerospace University Bangladesh | [
"Engineering"
] | 368 | [
"Engineering universities and colleges"
] |
63,544,559 | https://en.wikipedia.org/wiki/Negevirus | Negevirus is a taxon of non-segmented, positive-sense, single-stranded RNA viruses that have been isolated from mosquitoes and phlebotomine sand flies in Africa, the Americas, Asia and Europe. Under the electron microscope, the virions appear as spherical particles 45 to 55 nanometers in diameter.
Taxonomy
There are at least 91 viruses recognised in this taxon.
Genome
The viral genomes are between 7,039 and 10,054 nucleotides in length. There are three open reading frames (ORFs). The largest open reading frame lies toward the 5' end of the genome and encodes a polyprotein. This polyprotein has methyl transferase, viral helicase and RNA-dependent RNA polymerase domains. There are untranslated regions at the 5' and 3' ends of the genome; these vary in length, the 5' region being between 72 and 730 nucleotides and the 3' region between 121 and 442 nucleotides. The function of the two smaller open reading frames is not known, but they appear to encode envelope proteins.
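ORF boundaries like those described above are typically located computationally by scanning a sequence for start codons followed by an in-frame stop codon. A minimal sketch (toy sequence, not actual Negevirus data; real annotation pipelines are far more careful about frames, strands and overlaps):

```python
def find_orfs(seq, min_len=6):
    """Toy scan for forward-frame ORFs: ATG ... in-frame stop codon.

    Returns (start, end) index pairs, end exclusive, stop codon included.
    """
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        for i in range(frame, len(seq) - 2, 3):
            if seq[i:i + 3] != "ATG":
                continue
            for j in range(i + 3, len(seq) - 2, 3):
                if seq[j:j + 3] in stops:
                    if j + 3 - i >= min_len:
                        orfs.append((i, j + 3))
                    break
    return orfs

print(find_orfs("ATGAAATGAATGCCCTAA"))  # [(0, 9), (9, 18)]
```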
History
This taxon was first described in 2013.
References
RNA viruses
Unaccepted virus taxa | Negevirus | [
"Biology"
] | 238 | [
"Biological hypotheses",
"Unaccepted virus taxa",
"Controversial taxa"
] |
63,544,575 | https://en.wikipedia.org/wiki/INTLAB | INTLAB (INTerval LABoratory) is an interval arithmetic library for MATLAB and GNU Octave, available on Windows, Linux, and macOS. It was developed by S.M. Rump from Hamburg University of Technology. INTLAB was used to develop other MATLAB-based libraries such as VERSOFT and INTSOLVER, and it was used to solve some of the Hundred-dollar, Hundred-digit Challenge problems.
Version history
12/30/1998 Version 1
03/06/1999 Version 2
11/16/1999 Version 3
03/07/2002 Version 3.1
12/08/2002 Version 4
12/27/2002 Version 4.1
01/22/2003 Version 4.1.1
11/18/2003 Version 4.1.2
04/04/2004 Version 5
06/04/2005 Version 5.1
12/20/2005 Version 5.2
05/26/2006 Version 5.3
05/31/2007 Version 5.4
11/05/2008 Version 5.5
05/08/2009 Version 6
12/12/2012 Version 7
06/24/2013 Version 7.1
05/10/2014 Version 8
01/22/2015 Version 9
12/07/2016 Version 9.1
05/29/2017 Version 10
07/24/2017 Version 10.1
12/15/2017 Version 10.2
01/07/2019 Version 11
03/06/2020 Version 12
Functionality
INTLAB helps users solve a range of mathematical and numerical problems with rigorously verified results using interval arithmetic.
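INTLAB's core primitive is arithmetic on intervals [lo, hi] guaranteed to contain the true result. A minimal Python sketch of the idea (this is not INTLAB's API; it also omits the outward/directed rounding INTLAB performs to make the enclosures rigorous in floating point):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: min/max over all endpoint products handles sign changes.
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

x = Interval(1.0, 2.0)
y = Interval(-0.5, 0.5)
print(x + y)  # Interval(lo=0.5, hi=2.5)
print(x * y)  # Interval(lo=-1.0, hi=1.0)
```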
Works cited by INTLAB
INTLAB is based on the previous studies of the main author, including his works with co-authors.
External links
See also
List of numerical analysis software
Comparison of linear algebra libraries
References
Numerical analysis
Numerical software
Computational science
Computer arithmetic | INTLAB | [
"Mathematics"
] | 361 | [
"Applied mathematics",
"Computational mathematics",
"Computational science",
"Computer arithmetic",
"Arithmetic",
"Mathematical relations",
"Numerical software",
"Numerical analysis",
"Approximations",
"Mathematical software"
] |
63,544,717 | https://en.wikipedia.org/wiki/Solidarity%20trial | The Solidarity trial for treatments is a multinational Phase III-IV clinical trial organized by the World Health Organization (WHO) and partners to compare four untested treatments for hospitalized people with severe COVID-19 illness. The trial was announced 18 March 2020, and as of 6 August 2021, 12,000 patients in 30 countries had been recruited to participate in the trial.
In May, the WHO announced an international coalition for simultaneously developing several candidate vaccines to prevent COVID-19 disease, calling this effort the Solidarity trial for vaccines.
The treatments investigated were remdesivir, lopinavir/ritonavir combined, lopinavir/ritonavir combined with interferon-beta, and hydroxychloroquine or chloroquine. The hydroxychloroquine/chloroquine arm was discontinued in June 2020 after an interim analysis concluded that it provided no benefit.
Solidarity trial for treatment candidates
The trial intends to rapidly assess thousands of COVID-19 infected people for the potential efficacy of existing antiviral and anti-inflammatory agents not yet evaluated specifically for COVID-19 illness, a process called "repurposing" or "repositioning" an already-approved drug for a different disease.
The Solidarity project is designed to give rapid insights to key clinical questions:
Do any of the drugs reduce mortality?
Do any of the drugs reduce the time a patient is hospitalized?
Do the treatments affect the need for people with COVID-19-induced pneumonia to be ventilated or maintained in intensive care?
Could such drugs be used to minimize the illness of COVID-19 infection in healthcare staff and people at high risk of developing severe illness?
Enrollment of people with COVID-19 is simplified: informed consent is gathered and data are captured on an online clinical trial platform (Castor EDC). After the trial staff determine the drugs available at the hospital, the platform randomizes the hospitalized subject to one of the trial drugs or to the hospital's standard of care for treating COVID-19. The trial physician records and submits follow-up information about the subject's status and treatment, completing data input via the Castor EDC platform. The design of the Solidarity trial is not double-blind (normally the standard in a high-quality clinical trial) because WHO needed both speed and quality across many hospitals and countries. A global safety monitoring board of WHO physicians examines interim results to assist decisions on the safety and effectiveness of the trial drugs, and may alter the trial design or recommend an effective therapy. A similar web-based study, called "Discovery", was initiated in March across seven countries by INSERM (Paris, France).
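The site-dependent randomization described above can be sketched as follows. This is a toy illustration only: the actual allocation logic of the Castor EDC platform is not public, and the function and arm names here are our own:

```python
import random

TRIAL_ARMS = [
    "remdesivir",
    "lopinavir/ritonavir",
    "lopinavir/ritonavir + interferon-beta",
    "hydroxychloroquine",  # arm discontinued June 2020
]

def randomize_patient(available_at_site, rng=random):
    """Assign a hospitalized patient to one of the trial arms stocked
    at this hospital, or to the local standard of care."""
    choices = [arm for arm in TRIAL_ARMS if arm in available_at_site]
    choices.append("standard of care")
    return rng.choice(choices)

rng = random.Random(42)
print(randomize_patient(["remdesivir"], rng))  # "remdesivir" or "standard of care"
```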
The Solidarity trial seeks to implement coordination across hundreds of hospital sites in different countries – including those with poorly-developed infrastructure for clinical trials – yet needs to be conducted rapidly. According to John-Arne Røttingen, chief executive of the Research Council of Norway and chairman of the Solidarity trial international steering committee, the trial would be considered effective if therapies are determined to "reduce the proportion of patients that need ventilators by, say, 20%, that could have a huge impact on our national health-care systems."
Adaptive design
According to the WHO Director General, the aim of the trial is to "dramatically cut down the time needed to generate robust evidence about what drugs work", a process using an "adaptive design". The Solidarity and European Discovery trials apply adaptive design to rapidly alter trial parameters when results from the four experimental therapeutic strategies emerge.
Adaptive designs within ongoing Phase III-IV clinical trials – such as the Solidarity and Discovery projects – may shorten the trial duration and use fewer subjects, possibly expediting decisions for early termination to save costs if interim results are negative. If the Solidarity project shows early evidence of success, design changes across the project's international locations can be made rapidly to enhance overall outcomes of affected people and hasten use of the therapeutic drug.
Treatment candidates under study
The individual or combined drugs being studied in the Solidarity and Discovery projects are already approved for other diseases. They are:
Remdesivir
Lopinavir/ritonavir combined
Lopinavir/ritonavir combined with interferon-beta
Hydroxychloroquine or chloroquine (discontinued due to no benefit, June 2020)
Due to safety concerns and evidence of heart arrhythmias leading to higher death rates, the WHO suspended the hydroxychloroquine arm of the Solidarity trial in late May 2020, then reinstated it, then withdrew it again when an interim analysis in June showed that hydroxychloroquine provided no benefit to hospitalized people severely infected with COVID-19.
In October 2020, the World Health Organization Solidarity trial produced an interim report concluding that its "remdesivir, hydroxychloroquine, lopinavir and interferon regimens appeared to have little or no effect on hospitalized COVID-19, as indicated by overall mortality, initiation of ventilation and duration of hospital stay." Gilead – the manufacturer of remdesivir – criticized the Solidarity trial methodology after it showed no benefit of the treatments, claiming that the international nature of the Solidarity trial was a weakness, whereas many experts regard the multinational study as a strength. Purchase agreements between the EU and Gilead for remdesivir and granting of its Emergency Use Authorization by the US FDA during October were questioned by Solidarity trial scientists as not based on positive clinical trial data, when the interim analysis of the Solidarity trial had found remdesivir to be ineffective.
In January 2022, the Canadian component of the Solidarity trial reported that in-hospital people with COVID-19 treated with remdesivir had lower death rates (by about 4%) and reduced need for oxygen (less by 5%) and mechanical ventilation (less by 7%) compared to people receiving standard-of-care treatments.
Support and participation
During March, funding for the Solidarity trial came from 203,000 individual donations, charitable organizations and governments, with 45 countries involved in financing or trial management. As of 1 July 2020, nearly 5,500 patients in 21 of the 39 countries with approval to recruit had been enrolled in the trial. More than 100 countries in all 6 WHO regions have expressed interest in participating.
Solidarity trial for vaccine candidates
The WHO has developed a multinational coalition of vaccine scientists defining a Global Target Product Profile (TPP) for COVID-19, identifying favorable attributes of safe and effective vaccines under two broad categories: "vaccines for the long-term protection of people at higher risk of COVID-19, such as healthcare workers", and other vaccines to provide rapid-response immunity for new outbreaks. The international TPP team was formed to 1) assess the development of the most promising candidate vaccines; 2) map candidate vaccines and their clinical trials worldwide, publishing a frequently-updated "landscape" of vaccines in development; 3) rapidly evaluate and screen for the most promising candidate vaccines simultaneously before they are tested in humans; and 4) design and coordinate a multiple-site, international randomized controlled trial (the Solidarity trial for vaccines) to enable simultaneous evaluation of the benefits and risks of different vaccine candidates under clinical trials in countries where there are high rates of COVID-19 disease, ensuring fast interpretation and sharing of results around the world. The WHO vaccine coalition will prioritize which vaccines should go into Phase II and III clinical trials, and determine harmonized Phase III protocols for all vaccines achieving the pivotal trial stage.
Solidarity Plus Trial
The WHO announced in August 2021 that it would roll out the next phase of the Solidarity trial, under the name Solidarity PLUS, in 52 countries. The trial will enroll hospitalized patients to test three new drugs for potential treatment of COVID-19: artesunate, imatinib and infliximab. The selection of these therapies was made by an independent WHO expert panel. These drugs are already used for other indications: artesunate for malaria, imatinib for cancers, and infliximab, an anti-TNF agent, for Crohn's disease and rheumatoid arthritis. The drugs will be donated for the trial by their manufacturers.
See also
COVID-19 drug repurposing research
COVID-19 drug development#Phase III-IV trials
RECOVERY Trial
PANORAMIC trial
AGILE trial
References
External links
"'Solidarity' clinical trial for COVID-19 treatment by the World Health Organization
COVID-19 (Questions & Answers) by the World Health Organization
COVID-19 (Q&A) by the US Centers for Disease Control and Prevention (CDC)
Coronaviruses by US National Institute for Allergy and Infectious Diseases
COVID-19 (Q&A) by the European Centre for Disease Prevention and Control
COVID-19 by the China National Health Commission
Anti-influenza agents
Clinical research
Clinical trials related to COVID-19
Drugs
Medical responses to the COVID-19 pandemic
Clinical trials | Solidarity trial | [
"Chemistry"
] | 1,852 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
63,548,048 | https://en.wikipedia.org/wiki/Subrata%20Roy%20%28scientist%29 | Subrata Roy (Bengali: সুব্রত রায়) is an Indian-born inventor, educator, and scientist known for his work in plasma-based flow control and plasma-based self-sterilizing technology. He is a professor of Mechanical and Aerospace Engineering at the University of Florida and the founding director of the Applied Physics Research Group at the University of Florida.
He is also the President and the founder of SurfPlasma Inc., a biotechnology company in Gainesville, Florida.
Biography
Subrata Roy earned his Ph.D. in engineering science from the University of Tennessee in Knoxville, Tennessee, in 1994. Roy was a senior research scientist at Computational Mechanics Corporation in Knoxville, Tennessee, and then professor of mechanical engineering at Kettering University until 2006. In 2006, Roy joined the University of Florida as a faculty member of the Department of Mechanical and Aerospace Engineering. He is a professor of Mechanical and Aerospace Engineering and the founding director of the Applied Physics Research Group at the University of Florida. He has also worked as a visiting professor at the University of Manchester and the Indian Institute of Technology Bombay.
Scientific work
Subrata Roy's research and scientific work encompasses computational fluid dynamics (CFD), plasma physics, heat transfer, magnetohydrodynamics, electric propulsion, and micro/nanoscale flows. In 2003, Roy incorporated Knudsen's theory, which handles surface collisions of molecules by diffusive and specular reflections, into hydrodynamic models; this has been used in shale gas seepage studies. In 2006, Roy invented the Wingless Electromagnetic Air Vehicle (WEAV), which was featured in Scientific American in 2008 as the world's first wingless, electromagnetically driven air vehicle design. Roy is known for introducing various novel designs and configurations of plasma actuators for applications in mitigation of drag-related fuel consumption, noise reduction, and active film cooling of turbine blades and propulsion. These designs and configurations include serpentine geometry plasma actuators, fan geometry plasma actuators, micro-scale actuators, multibarrier plasma actuators, and plasma actuated channels of atmospheric plasma actuators.
Roy also led multidisciplinary research on innovating eco-friendly ways of microorganism decontamination using plasma reactors.
Roy served as the Technical Discipline Chair for the 36th AIAA Thermophysics Conference in 2003, the 48th Aerospace Sciences Meeting (for Thermophysics) in 2010, and the AIAA SciTech Plasma Dynamics and Lasers Conference in 2016, and served as the Forum Technical Chair for AIAA SciTech in 2018. Roy served (2005–2007) as an Associate Editor of the Journal of Fluids Engineering and served (2012–2017) as an Academic Editor of PLOS ONE. Roy serves as a nationally appointed member of the NATO Science and Technology Organisation working group on plasma actuator technologies; a member of the editorial board of Scientific Reports-Nature; and an Associate Editor of Frontiers in Physics, Frontiers in Astronomy and Space Sciences, and the Journal of Fluid Flow, Heat and Mass Transfer. Roy is an inducted Fellow of the National Academy of Inventors, a Distinguished Visiting Fellow of the Royal Academy of Engineering, a Fellow of the Royal Aeronautical Society, a lifetime member and Fellow of the American Society of Mechanical Engineers, and an Associate Fellow of the American Institute of Aeronautics and Astronautics.
Honors
Fellow, National Academy of Inventors
Distinguished Visiting Fellow, Royal Academy of Engineering
Fellow, Royal Aeronautical Society
Lifetime Fellow, American Society of Mechanical Engineers
Space Act Award 2016 NASA
References
External links
University of Florida faculty
Living people
American engineers
Plasma physicists
Computational fluid dynamicists
Scientists from Kolkata
Indian emigrants to the United States
Bengali scientists
American Hindus
20th-century Indian physicists
21st-century American inventors
University of Tennessee alumni
Jadavpur University alumni
Year of birth missing (living people)
American academics of Indian descent
Indian scholars | Subrata Roy (scientist) | [
"Physics"
] | 795 | [
"Plasma physicists",
"Plasma physics"
] |
63,548,621 | https://en.wikipedia.org/wiki/Interior%20Designers%20Association%20of%20Nigeria | The Interior Designers Association of Nigeria (IDAN) is the professional body of interior designers and suppliers in Nigeria. Founded in 2007 by Titi Ogufere, IDAN is the leading authority representing members who are mainly qualified interior designers, dealers of interior design products, finishing companies, and interior decorators in Nigeria. It has its headquarters in Lagos, with regional offices in Abuja and Port Harcourt.
History
At its inception in 2007, IDAN's founding board comprised Titi Ogufere, Ekua Abudu, Munirat Shonibare, Anselm Tabansi, Moni Fagbemi, Sarah Daniel and Mathew Eshalomi.
In 2017, the Interior Designers Association of Nigeria hosted the first edition of the African Culture and Design Festival in Africa, teaming up with the 2017 congress of the International Federation of Interior Architects/Designers (IFI), a body of professional interior architects and interior designers.
See also
Interior Design
American Society of Interior Designers
References
External links
Professional associations based in Nigeria
Design institutions
Arts organizations established in 2007
2007 establishments in Nigeria | Interior Designers Association of Nigeria | [
"Engineering"
] | 219 | [
"Design",
"Design institutions"
] |
63,550,672 | https://en.wikipedia.org/wiki/N%2CN-Dimethylaminomethylferrocene | N,N-Dimethylaminomethylferrocene is the dimethylaminomethyl derivative of ferrocene, (C5H5)Fe(C5H4CH2N(CH3)2). It is an air-stable, dark-orange syrup that is soluble in common organic solvents. The compound is prepared by the reaction of ferrocene with formaldehyde and dimethylamine:
(C5H5)2Fe + CH2O + HN(CH3)2 → (C5H5)Fe(C5H4CH2N(CH3)2) + H2O
It is a precursor to prototypes of ferrocene-containing redox sensors and diverse ligands.
The amine can be quaternized, which provides access to many derivatives.
References
Ferrocenes
Sandwich compounds
Cyclopentadienyl complexes | N,N-Dimethylaminomethylferrocene | [
"Chemistry"
] | 187 | [
"Organometallic chemistry",
"Cyclopentadienyl complexes",
"Sandwich compounds"
] |
63,551,128 | https://en.wikipedia.org/wiki/IDX-184 | IDX-184 is an antiviral drug which was developed as a treatment for hepatitis C, acting as an NS5B RNA polymerase inhibitor. While it showed reasonable effectiveness in early clinical trials, it did not progress past Phase IIb. However, research using this drug has continued, as it shows potentially useful activity against other emerging viral diseases such as Zika virus and coronaviruses, including MERS and SARS-CoV-2.
References
Anti–RNA virus drugs
Antiviral drugs
NS5B (polymerase) inhibitors | IDX-184 | [
"Biology"
] | 110 | [
"Antiviral drugs",
"Biocides"
] |
63,551,532 | https://en.wikipedia.org/wiki/AirTag | AirTag is a tracking device developed by Apple. AirTag is designed to act as a key finder, which helps people find personal objects such as keys, bags, apparel, small electronic devices and vehicles. To locate lost items, AirTags use Apple's crowdsourced Find My network, estimated in early 2021 to consist of approximately one billion devices worldwide that detect and anonymously report emitted Bluetooth signals. AirTags are compatible with any iPhone, iPad, or iPod Touch device capable of running iOS/iPadOS 14.5 or later, including iPhone 6S or later (including iPhone SE 1, 2 and 3). Using the built-in U1 chip on iPhone 11 or later (except iPhone SE models), users can more precisely locate items using ultra-wideband (UWB) technology. AirTag was announced on April 20, 2021, made available for pre-order on April 23, and released on April 30.
History
The product was rumored to be under development in April 2019. In February 2020, it was reported that Asahi Kasei was prepared to supply Apple with tens of millions of ultra-wideband (UWB) parts for the rumored AirTag in the second and third quarters of 2020, though the shipment was ultimately delayed. On April 2, 2020, a YouTube video on the Apple Support channel also confirmed the AirTag. In Apple's iOS 14.0 release, code was discovered that described the reusable and removable battery that would be used in the AirTag. In March 2021, Macworld stated that iOS 14.5 beta's Find My user interface included "Items" and "Accessories" features meant for AirTag support for a user's "backpack, luggage, headphones" and other objects. AppleInsider noted that the beta included safety warnings for "unauthorized AirTags" persistently in a user's vicinity.
In May 2024, Bloomberg reported that Apple was preparing a new version of the AirTag, codenamed B589.
Features
AirTags can be interacted with using the Find My app. Users may trigger the AirTag to play a sound from the app. iPhones equipped with the U1 chip can use "Precision Tracking" to provide direction to and precise distance from an AirTag. Precision Tracking utilizes ultra-wideband.
AirTags are not satellite navigation devices. AirTags are located on a map within the Find My app using Bluetooth signals detected and anonymously relayed by other iOS and iPadOS devices nearby. To help prevent unwanted tracking, an iOS/iPadOS device will alert its owner if someone else's AirTag seems to be travelling with them, rather than with the AirTag's owner, for too long. If an AirTag is out of range of its owner's device for more than 8 to 24 hours, it will begin to beep to alert people nearby that an AirTag may have been placed in their possessions.
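Apple's separation-alert firmware is proprietary; the randomized 8 to 24 hour delay described above can nonetheless be modeled as a toy policy (function names are ours, purely illustrative):

```python
import random

def separation_timer(rng=random):
    """Draw the randomized alert delay (in hours) when an AirTag
    loses contact with its paired owner device."""
    return rng.uniform(8, 24)

def should_beep(hours_since_owner_seen, alert_after_hours):
    """The tag starts beeping once the drawn delay has elapsed."""
    return hours_since_owner_seen >= alert_after_hours

timer = separation_timer(random.Random(1))
# Below 8 h no model ever beeps; past 24 h every model does:
print(should_beep(6, timer), should_beep(30, timer))  # False True
```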
Users can mark an AirTag as lost and provide a phone number and a message. Any iPhone user can see this phone number and message with the "Identify Found Item" feature within the Find My app, which utilizes near-field communication (NFC) technology. Additionally, Android and Windows 10 Mobile phones with NFC can identify an AirTag with a tap, which will redirect to a website containing the message and phone number.
AirTag requires an Apple ID and iOS or iPadOS 14.5 or later. It uses a replaceable CR2032 button cell with about one year of battery life (though some batteries with child-resistant bitterant coatings cannot be used due to the design of the AirTag battery terminal). The maximum range of Bluetooth tracking is estimated to be around 100 meters. An AirTag is rated IP67 for water and dust resistance and can withstand 30 minutes of water immersion in standard laboratory conditions. Each Apple ID is limited to 32 AirTags.
Firmware version history
Apple does not provide a way for users to force an AirTag to carry out a firmware update. Firmware updates may happen automatically whenever an AirTag is in Bluetooth range of the paired iPhone (running iOS 14.5 or later) and both devices have sufficient battery.
Applications
Tracking checked luggage
AirTags have become extremely popular among travelers for tracking checked luggage on flights, giving them leverage when luggage is lost by carriers. In response, Lufthansa stated that AirTags were not permissible in luggage checked with the carrier. The carrier backtracked after a risk assessment by the German authorities, following widespread criticism and accusations that it was seeking to avoid accountability. The Federal Aviation Administration has ruled that storing AirTags in checked luggage is permitted and not a safety hazard, despite their containing batteries.
Theft prevention and recovery
AirTags have been used to track stolen property and assist police in recovering it for return to its rightful owners. In February 2023, a North Carolina family discovered that their car had been stolen. In coordination with local police, they utilized an AirTag placed in the vehicle to locate the car and were able to recover their property. Police were reportedly elated at the ease with which they were able to arrest the criminals and recover the property thanks to the AirTags.
Criticism
Use by stalkers
Despite Apple's inclusion of technologies to help prevent unwanted tracking or stalking, The Washington Post found that it was "frighteningly easy" to bypass the systems put in place. The device has been described as "a gift to stalkers". Concerns included the built-in audible alarm taking three days to sound (since reduced to 8–24 hours), and the fact that most Americans had Android devices, which do not receive the alerts about nearby AirTags that iPhones receive. Although AirTags are not designed to have their components replaced, AirTags with their speakers forcibly removed have been found in use to track people. The AirTag cannot detect this modification, making it harder for people to discover that an AirTag has been tracking them. AirTags with their speakers removed have been found for sale on sites like eBay and Etsy. In January 2022, BBC News spoke to six women who stated that they found unregistered AirTags inside things such as cars and bags.
In late 2021, Apple released an app called Tracker Detect on the Google Play Store to help users of Android 9 or later to discover unknown AirTags near them in a "lost" state and potentially being used for malicious tracking purposes. However, the app does not run in the background.
In February 2022, Apple added a warning for users setting up their AirTag, notifying them that using the device to track people is illegal and the device is only meant for tracking personal belongings. It will take 8–24 hours for an AirTag to chirp if it has been separated from its owner.
Tracking cars
The National Post in Canada reported that AirTags were placed on vehicles at shopping malls and parking lots without the drivers' knowledge, in order to track them to their homes, where the vehicles would be stolen. In response, Apple announced just before WWDC 2021 that it had begun rolling out updates that would allow anyone with an NFC-capable phone to tap an unwanted AirTag for instructions on how to disable it, and that they had decreased the delay time for the audible alert that sounds after the AirTag is separated from its owner from three days to a random time between 8 and 24 hours.
Susceptibility to hacking
Users who set their AirTags to lost mode are prompted to provide a contact phone number for finders to call. In September 2021, security researcher Brian Krebs, citing fellow security researcher Bobby Rauch, reported that the phone number field will actually accept any type of input, including arbitrary computer code, opening up the potential use of AirTags as Trojan horse devices.
Similarity to Tile
Similar product manufacturer Tile criticized Apple for using similar technologies and designs to Tile's trackers. Spokespeople for Tile made a testimony to the United States Congress saying that Apple was supporting "anti-competitive practices", claiming that Apple had done this in the past, and that they think it is "entirely appropriate for Congress to take a closer look at Apple's business practices".
Difficulty attaching
AirTags do not have holes or other mechanical features that would allow them to be positively attached or affixed to the item being tracked; solutions include adhesives (glue, tape) and purpose-built accessories. The polyurethane AirTag Loop is the least expensive solution sold by Apple; it costs the same as a single AirTag and has been criticized as an "accessory tax".
See also
iBeacon
Galaxy SmartTag
List of UWB-enabled devices
References
External links
Apple Inc. hardware
Internet object tracking
Apple Inc. peripherals
IPhone accessories
Products introduced in 2021 | AirTag | [
"Technology"
] | 1,812 | [
"IPhone accessories",
"Components"
] |