63,206,774
https://en.wikipedia.org/wiki/Decentralized%20identifier
A decentralized identifier (DID) is a type of globally unique identifier that enables an entity to be identified in a manner that is verifiable, persistent (as long as the DID controller desires), and does not require the use of a centralized registry. DIDs enable a new model of decentralized digital identity that is often referred to as self-sovereign identity. They are an important component of decentralized web applications.

DID documents
A decentralized identifier resolves (points) to a DID document: a set of data describing the DID subject, including mechanisms, such as cryptographic public keys, that the DID subject or a DID delegate can use to authenticate itself and prove its association with the DID.

DID methods
Just as there are many different types of URIs, all of which conform to the URI standard, there are many different types of DID methods, all of which must conform to the DID standard. Each DID method specification must define:
- The name of the DID method (which must appear between the first and second colon, e.g. did:example:).
- The structure of the unique identifier that must follow the second colon.
- The technical specifications for how a DID resolver can apply the CRUD operations to create, read, update, and deactivate a DID document using that method.
The W3C DID Working Group maintains a registry of DID methods.

Usage of DIDs
A DID identifies any subject (e.g. a person, organization, thing, data model, or abstract entity) that the controller of the DID decides it identifies. DIDs are designed to enable the controller of a DID to prove control over it, and to be implemented independently of any centralized registry, identity provider, or certificate authority. DIDs are URIs that associate a DID subject with a DID document. Each DID document can express cryptographic material, verification methods, and service endpoints that enable trusted interactions associated with the DID subject.
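The three-part syntax described under "DID methods" (did:&lt;method-name&gt;:&lt;method-specific-id&gt;) can be illustrated with a minimal parser. This is a simplified sketch of my own, not the full ABNF from the W3C DID Core specification; the regex and function name are illustrative assumptions.

```python
import re

# Simplified pattern for did:<method-name>:<method-specific-id>;
# the real W3C grammar for method-specific identifiers is richer.
DID_PATTERN = re.compile(r"^did:([a-z0-9]+):(.+)$")

def parse_did(did: str):
    """Split a DID into its method name and method-specific identifier."""
    match = DID_PATTERN.match(did)
    if match is None:
        raise ValueError(f"not a valid DID: {did!r}")
    return match.group(1), match.group(2)

# The method name "example" here comes from the did:example: placeholder above.
method, ident = parse_did("did:example:123456789abcdefghi")
```

A DID resolver would use the extracted method name to select the method-specific driver that performs the CRUD operations on the DID document.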
A DID document might contain additional semantics about the subject that it identifies. A DID document might also contain the DID subject itself (e.g. a data model).

National efforts include the European Digital Identity (EUDI) Wallet, part of eIDAS 2.0 in the European Union, and the China Real-Name Decentralized Identifier System (China RealDID) under China's Ministry of Public Security. The AT Protocol and applications powered by it, such as Bluesky, use DIDs for their identity system in order to give users full control over their identity, including where their data is stored. The protocol uses its own DID method, did:plc.

Standardization efforts
The W3C DID Working Group developed a specification for decentralized identifiers to standardize the core architecture, data model, and representation of DIDs. The W3C approved the DID 1.0 specification as a W3C Recommendation on July 19, 2022. The Decentralized Identity Foundation (DIF) published a Dynamic Traveler Profile Generation Specification in June 2023, for use cases in the travel industry.

See also
Self-sovereign identity

References

External links
W3C Decentralized Identifier Working Group

Authentication protocols Authentication methods Identity management Digital technology Federated identity Computer access control Decentralization
Decentralized identifier
[ "Technology", "Engineering" ]
675
[ "Information and communications technology", "Cybersecurity engineering", "Computer access control", "Computer science stubs", "Computer science", "Digital technology", "Computing stubs" ]
63,208,443
https://en.wikipedia.org/wiki/System%20and%20Organization%20Controls
System and Organization Controls (SOC; also sometimes referred to as service organization controls), as defined by the American Institute of Certified Public Accountants (AICPA), is the name of a suite of reports produced during an audit. It is intended for use by service organizations (organizations that provide information systems as a service to other organizations) to issue validated reports on the internal controls over those information systems to the users of those services. The reports focus on controls grouped into five categories called Trust Service Criteria. The Trust Services Criteria were established by the AICPA through its Assurance Services Executive Committee (ASEC) in 2017 (2017 TSC). These control criteria are to be used by the practitioner/examiner (a Certified Public Accountant, CPA) in attestation or consulting engagements to evaluate and report on controls of information systems offered as a service. The engagements can be done on an entity-wide, subsidiary, division, operating-unit, product-line or functional-area basis. The Trust Services Criteria were modeled on the Committee of Sponsoring Organizations of the Treadway Commission (COSO) Internal Control - Integrated Framework (COSO Framework). In addition, the Trust Services Criteria can be mapped to NIST SP 800-53 criteria and to EU General Data Protection Regulation (GDPR) articles. The AICPA auditing standard Statement on Standards for Attestation Engagements no. 18 (SSAE 18), section 320, "Reporting on an Examination of Controls at a Service Organization Relevant to User Entities' Internal Control Over Financial Reporting", defines two levels of reporting, type 1 and type 2. Additional AICPA guidance materials specify three types of reporting: SOC 1, SOC 2, and SOC 3.
Trust Service Criteria
The Trust Services Criteria were designed to provide flexibility in application, so that they can better suit the unique controls an organization implements to address the unique risks and threats it faces. This is in contrast to other control frameworks that mandate specific controls whether applicable or not. Applying the Trust Services Criteria in actual situations requires judgement as to suitability. The Trust Services Criteria are used when "evaluating the suitability of the design and operating effectiveness of controls relevant to the security, availability, processing integrity, confidentiality or privacy of information and systems used to provide product or services" (AICPA, ASEC). The organization of the Trust Services Criteria is aligned to the COSO framework's 17 principles, with additional supplemental criteria organized into logical and physical access controls, system operations, change management and risk mitigation. Further, the supplemental criteria are divided into the Common Criteria (CC), which are shared among all the Trust Services Criteria, and additional specific criteria for availability, processing integrity, confidentiality and privacy. The common criteria are labeled as follows:
- Control environment (CC1.x)
- Information and communication (CC2.x)
- Risk assessment (CC3.x)
- Monitoring of controls (CC4.x)
- Control activities related to the design and implementation of controls (CC5.x)
The common criteria are suitable and complete for evaluating the security category. However, there are additional category-specific criteria for Availability (A.x), Processing integrity (PI.x), Confidentiality (C.x) and Privacy (P.x). Criteria for each trust services category addressed in an engagement are considered complete when all criteria associated with that category are addressed.
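The criterion labels above follow a regular prefix scheme (CC1.x through CC5.x for common criteria, plus A.x, PI.x, C.x and P.x for category-specific criteria), so they lend themselves to a simple lookup table. The sketch below is my own illustration for, say, tagging controls in an internal compliance tracker; it is not an official AICPA artifact, and the function name is an assumption.

```python
# Common criteria groups, keyed by label prefix (from the text above).
COMMON_CRITERIA = {
    "CC1": "Control environment",
    "CC2": "Information and communication",
    "CC3": "Risk assessment",
    "CC4": "Monitoring of controls",
    "CC5": "Control activities",
}

# Category-specific criteria prefixes (from the text above).
CATEGORY_CRITERIA = {
    "A": "Availability",
    "PI": "Processing integrity",
    "C": "Confidentiality",
    "P": "Privacy",
}

def criterion_group(code: str) -> str:
    """Resolve a criterion code such as 'CC3.2' or 'A.1' to its group name."""
    prefix = code.split(".")[0]
    if prefix in COMMON_CRITERIA:
        return COMMON_CRITERIA[prefix]
    return CATEGORY_CRITERIA.get(prefix, "Unknown")
```

Such a table makes it straightforward to verify the completeness rule stated above: an engagement covering a category is complete only when every criterion code under that category's prefixes has an associated control.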
SOC 2 reports focus on controls addressed by five semi-overlapping categories called Trust Service Criteria, which also support the CIA triad of information security:
- Security: information and systems are protected against unauthorized access and disclosure, and against damage to the system that could compromise the availability, confidentiality, integrity and privacy of the system. Example controls: firewalls, intrusion detection, multi-factor authentication.
- Availability: information and systems are available for operational use. Example controls: performance monitoring, disaster recovery, incident handling.
- Confidentiality: information is protected and available on a legitimate need-to-know basis. Applies to various types of sensitive information. Example controls: encryption, access controls, firewalls.
- Processing integrity: system processing is complete, valid, accurate, timely and authorized. Example controls: quality assurance, process monitoring, adherence to principle.
- Privacy: personal information is collected, used, retained, disclosed and disposed of according to policy. Privacy applies only to personal information. Example controls: access control, multi-factor authentication, encryption.

Reporting levels
There are two levels of SOC reports, which are also specified by SSAE 18:
- Type 1, which describes a service organization's systems and whether the design of specified controls meets the relevant trust principles. (Are the design and documentation likely to accomplish the goals defined in the report?)
- Type 2, which also addresses the operational effectiveness of the specified controls over a period of time (usually 9 to 12 months). (Is the implementation appropriate?)

Types
There are three types of SOC reports:
- SOC 1: Internal Control over Financial Reporting (ICFR)
- SOC 2: Trust Services Criteria
- SOC 3: Trust Services Criteria for General Use Report
Additionally, there are specialized SOC reports for Cybersecurity and Supply Chain.
SOC 1 and SOC 2 reports are intended for a limited audience, specifically users with an adequate understanding of the system in question. SOC 3 reports contain less specific information and can be distributed to the general public.

Audits
SOC 2 audits can be carried out only by a Certified Public Accountant (CPA) or a certified technical expert belonging to an audit firm licensed by the AICPA. A SOC 2 audit provides a detailed report on the organization's internal controls, made in compliance with the five Trust Service Criteria. It shows how well the organization safeguards customer data and assures customers that the organization provides services in a secure and reliable way. SOC 2 reports are therefore intended to be made available only to customers and other stakeholders.

See also
ISO/IEC 27001

References

External links
"Statement on Standards for Attestation Engagements 18, Attestation Standards: Clarification and Recodification", AICPA
"Professional Standards", section AT-C 320, AICPA

Auditing Auditing standards Sarbanes–Oxley Act Computer security standards
System and Organization Controls
[ "Technology", "Engineering" ]
1,205
[ "Computer security standards", "Computer standards", "Cybersecurity engineering" ]
63,209,229
https://en.wikipedia.org/wiki/Academic%20buoyancy
Academic buoyancy is a type of resilience relating specifically to academic attainment. It is defined as 'the ability of students to successfully deal with academic setbacks and challenges that are typical of the ordinary course of school life (e.g. poor grades, competing deadlines, exam pressure, difficult schoolwork)'. It is therefore related to traditional definitions of resilience, but allows a narrower focus in order to target interventions more precisely. The academic buoyancy model was first proposed by psychologists Andrew Martin and Herbert W. Marsh, following the identification of significant differences between classic resilience (the ability to thrive despite the experience of severe adversity) and the day-to-day setbacks experienced by students. It has recently been extended and adapted through the work and writings of British psychologist Marc Smith. More specifically, academic buoyancy is defined as 'the process of dealing with isolated poor grades and patches of poor performance, typical stress levels and daily pressures, threats to confidence due to poor grades, low-level stress and confidence, dips in motivation and engagement and the way in which learners deal with negative feedback on schoolwork'.

Basic theory
The model of academic buoyancy assumes that academic attainment is, in part, related to the ability to cope with school-based demands and to bounce back when setbacks are encountered. Smith likens the differences between resilience and academic buoyancy to those between major stressors and daily hassles. To this end, certain personal attributes have been found to be present in those students who are more likely to flourish in educational environments. These attributes (or predictors of academic buoyancy) are referred to as the 5Cs.

The 5Cs
Martin and Marsh identified five predictors of academic buoyancy, referred to as the 5Cs:
1. Confidence (self-efficacy): the belief in our ability to complete a given task. 5C confidence is task-specific.
2. Coordination (planning): the ability to set and pursue goals, plan, monitor and manage tasks within a specific timeframe (e.g. meeting deadlines and allocating study time to competing tasks).
3. Control (low uncertain control): the extent to which people feel they are in control of their own learning, including the manner in which they attribute the causes of success and failure.
4. Composure (low anxiety): the extent to which people can remain relatively calm in potentially anxiety-provoking situations (e.g. examination environments). Students prone to high levels of anxiety have been found to perform poorly in high-stakes exams and to have increased difficulty in coping with setbacks.
5. Commitment (persistence or conscientiousness): the ability to stay on task, resist distractions, act on feedback and recover from setbacks.

Academic buoyancy and attainment
The positive outcomes of academic buoyancy are linked to the 5Cs. Commitment is synonymous with Big Five conscientiousness (a personality trait), as well as newer constructs such as grit. Studies consistently find that conscientious students have a higher grade point average (GPA). Duckworth's studies have also found that grit is a trait present in a number of highly effective people, including West Point candidates and skilled spelling bee participants. Composure is a factor related to anxiety and the ability to regulate emotional reactions (trait neuroticism-emotional stability).

Resilience interventions in schools
Smith has been critical of current resilience interventions in schools, citing reviews that found methodological and practical flaws. Dray et al. found that resilience interventions are relatively messy, with mixed results, varying techniques, competing definitions and little in the way of defined outcomes. Leppin et al. had previously found a similar pattern of mixed results, along with a distinct lack of any agreed theoretical framework.
Smith has proposed that schools move away from the traditional view of resilience and adopt a view that is focussed wholly on academic buoyancy.

Criticisms
Professor Angie Hart of the University of Brighton, UK, has stated that an academic buoyancy approach can never do as much for children as 'a resilience perspective that addresses systems, and issues of social justice, will do'. Hart addresses the importance of systems and structures as well as the building of 'character' and 'grit'. In response to these criticisms, Smith stresses that there is no reason why buoyancy interventions cannot be used in unison or in parallel with those aimed at increasing wellbeing and reducing inequalities, leading Smith to propose the addition of a sixth C: Community.

References

Life skills Motivation Psychological adjustment
Academic buoyancy
[ "Biology" ]
949
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
63,209,416
https://en.wikipedia.org/wiki/N-Nitrosoglyphosate
N-Nitrosoglyphosate is a nitrosamine degradation product and synthetic impurity of the herbicide glyphosate. The US EPA limits the N-nitrosoglyphosate impurity to a maximum of 1 ppm in formulated glyphosate products. N-Nitrosoglyphosate can also form from the reaction of nitrates and glyphosate. Formation of N-nitrosoglyphosate has been observed in soils treated with sodium nitrite and glyphosate at elevated levels, though formation in soil is not expected under typical field conditions.

References

Herbicides Nitrosamines Acetic acids Phosphonic acids
N-Nitrosoglyphosate
[ "Biology" ]
147
[ "Herbicides", "Biocides" ]
63,209,698
https://en.wikipedia.org/wiki/Matrix%20factorization%20%28algebra%29
In homological algebra, a branch of mathematics, a matrix factorization is a tool used to study infinitely long resolutions, generally over commutative rings.

Motivation
One of the problems with non-smooth algebras, such as Artin algebras, is that their derived categories are poorly behaved due to infinite projective resolutions. For example, over the ring $R = k[x]/(x^2)$ the $R$-module $k$ has the infinite free resolution
\[
\cdots \xrightarrow{\;x\;} R \xrightarrow{\;x\;} R \xrightarrow{\;x\;} R \to k \to 0.
\]
Instead of looking only at the derived category of the module category, David Eisenbud studied such resolutions by looking at their periodicity. In general, such resolutions are periodic with period 2 after finitely many objects in the resolution.

Definition
For a commutative ring $S$ and an element $f \in S$, a matrix factorization of $f$ is a pair of $n$-by-$n$ matrices $(A, B)$ such that $AB = BA = f \cdot \mathrm{Id}_n$. This can be encoded more generally as a $\mathbb{Z}/2$-graded $S$-module $M$ with an endomorphism $d$ such that $d^2 = f \cdot \mathrm{Id}_M$.

Examples
(1) For $S = k[x]$ and $f = x^n$ there is a matrix factorization $(d_0, d_1) = (x^i,\, x^{n-i})$ for $0 \le i \le n$.
(2) If $S = \mathbb{C}[x,y,z]$ and $f = x^2 + y^2 + z^2$, then there is a matrix factorization $(d_0, d_1)$ where
\[
d_0 = d_1 = \begin{pmatrix} z & x - iy \\ x + iy & -z \end{pmatrix}.
\]

Periodicity
Main theorem
Given a regular local ring $A$ and an ideal $I \subset A$ generated by an $A$-sequence (a regular sequence), set $B = A/I$ and let $F_\bullet$ be a minimal $B$-free resolution of the ground field. Then $F_\bullet$ becomes periodic after finitely many steps.

Categorical structure

Support of matrix factorizations

See also
Derived noncommutative algebraic geometry
Derived category
Homological algebra
Triangulated category

References

Further reading
Homological Algebra on a Complete Intersection with an Application to Group Representations
Geometric Study of the Category of Matrix Factorizations
https://web.math.princeton.edu/~takumim/takumim_Spr13_JP.pdf
https://arxiv.org/abs/1110.2918

Homological algebra
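As a quick check of the defining identity $AB = BA = f \cdot \mathrm{Id}_n$, here is a worked $2 \times 2$ example of my own (the specific matrices are an illustration, not taken from the article) for $f = x^2 + y^2$ over $k[[x,y]]$:

```latex
A = \begin{pmatrix} x & -y \\ y & x \end{pmatrix}, \qquad
B = \begin{pmatrix} x & y \\ -y & x \end{pmatrix},
\qquad\text{so}\qquad
AB = BA = \begin{pmatrix} x^2 + y^2 & 0 \\ 0 & x^2 + y^2 \end{pmatrix}
       = (x^2 + y^2)\,\mathrm{Id}_2 .
```

Multiplying out, the off-diagonal entries are $xy - yx = 0$ because the entries commute, which is exactly why matrix factorizations are usually studied over commutative rings.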
Matrix factorization (algebra)
[ "Mathematics" ]
383
[ "Fields of abstract algebra", "Mathematical structures", "Category theory", "Homological algebra" ]
63,209,813
https://en.wikipedia.org/wiki/2020%20Canadian%20pipeline%20and%20railway%20protests
From January to March 2020, a series of civil disobedience protests were held in Canada over the construction of the Coastal GasLink Pipeline (CGL) through 190 kilometres of Wetʼsuwetʼen First Nation territory in British Columbia (BC), land that is unceded. Other concerns of the protesters were Indigenous land rights, the actions of police, land conservation, and the environmental impact of energy projects. Starting in 2010, the Wetʼsuwetʼen hereditary chiefs and their supporters made their opposition to the project known and set up a camp directly in the path of the Enbridge Northern Gateway Pipelines, a path similar to that which would later be proposed for the Coastal GasLink Pipeline. Northern Gateway was officially rejected in 2016, while the CGL project moved through planning, Indigenous consultations, environmental reviews and governmental reviews before being approved in 2015. However, the approval of all the Wetʼsuwetʼen hereditary chiefs was never granted. In 2018, the backers of the pipeline project gave the go-ahead and construction began. Access to the Coastal GasLink Pipeline construction camps in Wetʼsuwetʼen territory was blocked, and the Coastal GasLink project was granted an injunction in 2018 to remove the land defenders. In January 2019, the Royal Canadian Mounted Police (RCMP) of British Columbia removed the blockades, and CGL pre-construction work in the territory was completed. Subsequently, the blockades were rebuilt, and Coastal GasLink was granted a second injunction by the BC Supreme Court in December 2019 to allow construction. In February 2020, after the RCMP enforced the second court injunction, removing the Wetʼsuwetʼen blockades and arresting Wetʼsuwetʼen land defenders, solidarity protests sprang up across Canada.
Many were rail blockades, including one near Tyendinaga Mohawk Territory which halted traffic along a major Canadian National Railway (CNR) line between Toronto and Montreal and led to a shutdown of passenger rail service and rail freight operations in much of Canada. The Eastern Ontario blockade was itself removed by the Ontario Provincial Police. Blockades and protests continued through March in BC, Ontario and Quebec. Discussions between representatives of the Wetʼsuwetʼen and the governments of Canada and British Columbia led to a provisional agreement on Wetʼsuwetʼen land rights in the area.

Coastal GasLink pipeline project
The Coastal GasLink (CGL) pipeline is a natural gas pipeline designed to carry natural gas from gas fields in north-eastern British Columbia to a liquefaction plant at the port of Kitimat. The project is intended to supply natural gas to several Asian energy companies, who are partners in the project. The pipeline's route passes through unceded lands of several First Nations peoples, including about 190 kilometres of Wetʼsuwetʼen territory. Within the Wetʼsuwetʼen territory, the pipeline does not pass through reserves, only through traditional territory. The consortium developed its plans for the pipeline route in the early 2010s, securing the approval of several First Nations councils along the route, but did not secure the approval of the Office of the Wetʼsuwetʼen, the hereditary government of the Wetʼsuwetʼen people, although most of the elected band councils of the Wetʼsuwetʼen First Nations did enter into a benefits agreement with TC Energy, the owner of the pipeline project. In 2014, British Columbia authorities approved the environmental assessment of the project, then approved permits to construct it in 2015 and 2016. TC Energy was given final approval by its partners to begin construction in 2018, still without the consent of all of the Wetʼsuwetʼen hereditary chiefs.
Only one of the nine sitting house chiefs, Samooh (Herb Naziel), supports the project.

Environmental infractions
CGL Pipeline has taken many actions that contradict the company's own environmental guidelines, as well as those of the Canadian government. Over the course of the project's existence, the BC Environmental Assessment Office (EAO) has found multiple violations of the company's environmental management plan. In December 2020, the EAO found that CGL had failed to properly comply with erosion and sediment control measures, a violation which posed a major risk to the health of the waterways the pipeline passes through. This environmental infringement also posed a high risk to fish habitats, according to reporting from the CBC, with "sediment and turbid water from waterway construction [having] the potential to reduce the biological productivity of aquatic systems and suffocate fish eggs." Although CGL has several strategies for moving the pipeline through waterways safely, ranging from man-made trenches that funnel the pipeline through the waterway to "trenchless" crossings where the pipeline tunnels beneath the waterway without touching the stream or river itself, the EAO has found that the release of turbid water and other insufficient sediment control persists. The violations cited by the EAO have affected numerous waterways during construction, including 68 wetlands, and have even disrupted Fraser Lake with turbid water. In addition to issues relating to the preservation of waterways, CGL has missed deadlines to protect plants and wildlife from construction, and has left food in areas accessible to natural predators, creating further environmental risk for the Wetʼsuwetʼen and the stability of their land, as well as increased danger from predators in the area.
Threats from construction crews
The placement of CGL construction crews, including remote temporary lodgings for mostly male workers (known as man camps), also created more environmental issues, as well as safety issues for the Wet'suwet'en. The danger of man camps in increasing the risk of abduction and murder of Indigenous women was reported in the final report of the National Inquiry into Missing and Murdered Indigenous Women, and the Office of the Wetʼsuwetʼen used this as evidence in its opposition to the extension of CGL's environmental certificate before the EAO in October 2020. The EAO stressed the importance of the issue but did not place it into evidence for the cancellation of the certificate. CGL construction teams obstructed Wetʼsuwetʼen access to traplines, paths and walkways the nation uses to hunt and gather resources, creating strain on the Wetʼsuwetʼen way of life. In response to the infractions of crews blocking Wetʼsuwetʼen territory, CGL spokesperson Natasha Westover said, "CGL has an obligation to facilitate access for Indigenous peoples to their traditional territories; however, that access may be delayed where it is unsafe [to] provide access immediately."

Threats to construction crews
In the early hours of February 17, 2022, twenty masked attackers, some carrying axes, forced nine people to flee from a work site near Houston, British Columbia. They attacked and injured RCMP officers attempting to respond. Ellis Ross said, "There were workers inside a truck while attackers were trying to light it on fire." Damage is estimated to be "in the millions of dollars", according to Coastal GasLink.

Wetʼsuwetʼen opposition
Background
The Wetʼsuwetʼen are an Indigenous nation made up of five clans: the Gilseyhu (Big Frog), Laksilyu (Small Frog), Gitdumden (Wolf/Bear), Laksamshu (Fireweed) and Tsayu (Beaver Clan). These five clans' territory lies in the central western portion of British Columbia.
The language spoken by the Wetʼsuwetʼen people is Babine-Witsuwitʼen, one of the Athabaskan languages. Their traditional government, predating Confederation, is a system of chiefs representing each clan, called the hereditary chiefs. The chiefs have been represented by the non-profit Office of the Wetʼsuwetʼen since 1994; before that, they shared a joint office with the Gitxsan. The elected band councils were created by order of the Government of Canada, under the Indian Act, to govern the reserves put in place, of which the Wetʼsuwetʼen have several. According to hereditary chief Na’Moks (John Ridsdale), "it's the hereditary chiefs' duty to protect the territory". According to Na’Moks, the pipeline "is going along rivers, it will go over rivers and even in some instances, it will go under. One hundred and ninety kilometres of the proposed route will run through our territory. It threatens our water, our salmon, and our rights, our title, our jurisdiction". The pipeline would also go through areas of cultural significance to the Wetʼsuwetʼen. In 1997, the Supreme Court of Canada issued the Delgamuukw-Gisdayʼwa decision, which ruled that aboriginal title exists as an exclusive territorial right for Indigenous people. The ruling was made in an appeal of a Supreme Court of British Columbia decision, which had ruled against recognition of Wetʼsuwetʼen and Gitxsan land rights. The Supreme Court of Canada ruled that a new trial was warranted, but encouraged a negotiated settlement. The Wetʼsuwetʼen and Gitxsan then entered the treaty process with the BC government. However, the BC government's position that the Nations would receive only 4 to 6 per cent of their territory was unacceptable, and the nations walked away from the process. Hence, the boundaries of the Wetʼsuwetʼen and Gitxsan nations' traditional territories are not yet recognized in Canadian law.
In the absence of an agreement over aboriginal title and rights, the hereditary chiefs' position is that their full consent is required for any energy or resource projects within their territory, and CGL does not have their consent. The rights and title issue has also been the basis for several solidarity protests, which have also objected to the actions and presence of the RCMP within the Wetʼsuwetʼen traditional territory.

Blockades, injunctions and RCMP interventions
2010
Beginning in 2010, the Wetʼsuwetʼen hereditary chiefs and their supporters set up barricades and checkpoints along the Morice West Forest Service Road, which provides access to the construction of pipeline projects that threatened their territory: originally the Enbridge Northern Gateway Pipelines, and later also Coastal GasLink (planning for which began in 2012). The largest of those camps is Unistʼotʼen Camp, directly in the path of the pipeline, established in 2010 as a checkpoint; it has since added a healing centre.

2018
After TC Energy received its partners' go-ahead in November, it appealed to the Supreme Court of British Columbia to grant an injunction to stop the blockade of its intended route through Wetʼsuwetʼen territory. A temporary injunction was issued in December by BC Supreme Court Judge Marguerite Church to allow CGL pre-construction work.

2019
On January 7, the RCMP conducted a raid to enforce TC Energy's injunction, removing the barricades on the Morice Forest Service Road and arresting 14 of the Wetʼsuwetʼen land defenders. The RCMP faced criticism from protesters for the amount of force used in the raid, including police snipers, helicopters, and over a dozen police vehicles. The RCMP set up a continuous presence along the road, establishing a local detachment called the Community Industry Safety Office.
RCMP enforcing this injunction threatened any protesters attempting to enter with arrest, leading the Wetʼsuwetʼen to remain in place along the road. In December, TC Energy prepared to start construction in Wetʼsuwetʼen territory. It applied for an extension of the injunction order, as the land defenders had resumed blockading access after the pre-construction work was done. The injunction was extended by Judge Church of the BC Supreme Court on December 31. The extension included an order authorizing the RCMP to enforce the injunction. In her decision, Church stated: "There is a public interest in upholding the rule of law and restraining illegal behaviour and protecting of the right of the public, including the plaintiff, to access on Crown roads," and "the defendants may genuinely believe in their rights under indigenous law to prevent the plaintiff from entering Dark House territory, but the law does not recognize any right to blockade and obstruct the plaintiff from pursuing lawfully authorized activities." In a public statement, the Wetʼsuwetʼen chiefs rejected the decision.

2020
On January 1, after rejecting the injunction, the hereditary chiefs ordered the eviction of RCMP and Coastal GasLink personnel from Wetʼsuwetʼen territory. On January 30, the RCMP announced that they would stand down while the hereditary chiefs and the province met to discuss and try to come to an agreement. On February 3, the Office of the Wetʼsuwetʼen asked for a judicial review of the environmental approval for the pipeline. All parties issued statements on February 4 that the talks had broken down. On February 6, the RCMP began removing the blockades on Wetʼsuwetʼen territory, arresting 28 land defenders at camps along the route between February 6 and 9. All were released within two days. The RCMP also detained several reporters and were accused of interfering with the freedom of the press.
Union of British Columbia Indian Chiefs Grand Chief Stewart Phillip stated that "we are in absolute outrage and a state of painful anguish as we witness the Wetʼsuwetʼen people having their title and rights brutally trampled on and their right to self-determination denied." During the enforcement action, according to CBC News, the RCMP deployed a large amount of equipment and personnel, including heavily armed tactical teams, division liaison personnel, regular uniformed officers, canine units, helicopters, drones and snowmobiles. On February 11, the RCMP announced that the road to the construction site was cleared, and TC Energy announced that work would resume the following Monday. After the hereditary chiefs made it a condition for talks with government, the RCMP closed their local office and moved to their detachment in Houston on February 22. Throughout February and March, solidarity protests and blockades were held across the world. Most in-person actions were halted in mid-March due to the COVID-19 pandemic, but online solidarity rallies continued. On June 5, the BC Prosecution Service issued a statement saying that criminal contempt charges against 22 members of the Wetʼsuwetʼen Nation and their supporters would not be pursued. Additionally, Coastal GasLink issued a statement that it would not pursue civil contempt charges against the protesters.

Meetings and memorandum of understanding
Three days of meetings between the hereditary chiefs, Crown-Indigenous Relations Minister Carolyn Bennett and BC Indigenous Relations Minister Scott Fraser began on February 27 in Smithers, British Columbia. The RCMP agreed to stop all patrols on the Morice West Forest Service Road and to shut down their mobile detachment (CISO) during the meetings. In addition, Coastal GasLink agreed to suspend operations in the territory during the talks. RCMP patrols and CGL work resumed on the territory once the meetings were complete.
On March 1, Bennett, Fraser, and representatives of the Wetʼsuwetʼen, including hereditary chiefs and matriarchs, announced a proposed memorandum of understanding (MOU) to address the Wetʼsuwetʼen land rights, title and a protocol for addressing any future projects impacting their territory. Specific details of the agreement were not immediately released, because the MOU had to first be seen and ratified by the broader Wetʼsuwetʼen nation. However, all parties to the discussions made it clear that the agreement did not address the CGL Pipeline project. On March 10, Theresa Tait-Day, president of the Wetʼsuwetʼen Matrilineal Coalition (WMC), (who was stripped of the subchief name Wiʼhaliʼyte in the mid-2010s over her support of the pipeline and suspected conflict of interest) released a statement that the proposed MOU was not inclusive of the entire community, saying "the government has legitimized the meeting with the five [sic] hereditary chiefs and left out their entire community. We can not be dictated to by a group of five guys [sic]." According to Tait-Day, "over 80 per cent of the people in our community said they wanted LNG [First Nations LNG Alliance] to proceed." Individual Wetʼsuwetʼen clans held meetings to review the MOU throughout March. The MOU was ratified by the attendees of one meeting of the Laksilyu (Small Frog Clan). According to the hereditary chiefs, the Gilseyhu (Big Frog Clan) met once and endorsed the MOU, as did the Laksamshu (Fireweed and Owl Clan) and the Tsayu (Beaver Clan). The Gitdumden (Wolf and Bear Clan) met twice, but their third meeting was cancelled due to a death in the community. A planned all-clans meeting on March 19 was cancelled for a variety of reasons, including concerns over the spread of COVID-19. On April 30, the hereditary chiefs made a joint statement with the provincial and federal governments that all five clans had agreed to ratify the MOU.
However, the elected chiefs of five Wetʼsuwetʼen band governments (Nee Tahi Buhn Indian Band, Skin Tyee Nation, Ts'il Kaz Koh First Nation, Wetʼsuwetʼen First Nation, and Witset First Nation) released their own joint statement in response the following day, calling for the agreement to be withdrawn and saying they were not consulted properly. A further statement released on May 11 called once again for the agreement to be withdrawn so the elected governments could be consulted properly, and further called for Minister Bennett to resign. The May 11 statement was not signed by Chief Sandra George of Witset or Chief Cynthia Joseph of Hagwilget. The draft agreement was distributed to the elected band councils on May 7, to all other Wetʼsuwetʼen the following day, and finally published on the Office of the Wetʼsuwetʼen website on May 12. The MOU was signed by the hereditary chiefs, Minister Bennett, and Minister Fraser on May 14 in a virtual ceremony via Zoom. The memorandum does not address the CGL Pipeline project, nor does it alter Wetʼsuwetʼen rights and title. The MOU states that the Canadian and British Columbian governments recognize that those rights and title are held under the Wetʼsuwetʼen's own system of governance, and commits Canada and BC to a three-month process to craft a formal Affirmation Agreement that confirms Aboriginal title as a legal right. It also establishes a twelve-month timeline for negotiation on jurisdiction, including over land-use planning, resources, water, wildlife, fish, and child and family wellness. Further, it acknowledges that healing the rift between the hereditary leadership and the elected band councils is an essential part of the implementation of the MOU.

Further developments

BC Supreme Court hearing, October 2020

On October 1, the Office of the Wet’suwet’en began a hearing in the BC Supreme Court.
The Office of the Wet’suwet’en requested that the Court reject the province's decision to extend CGL's environmental certificate for five years. Lawyers for the Office of the Wet’suwet’en claimed the Environmental Assessment Office (EAO) did not meaningfully account for the final report on Missing and murdered Indigenous women and girls (MMIWG), published in June 2019, or for the pipeline company's long history of non-compliance with the EAO's own conditions and standards. The EAO's position was that there was no basis for judicial review of its decision. In a decision published on May 20, 2021, Justice Norell found that the assessment office had asked CGL to consider how indigenous nations would be involved in identifying and monitoring social impacts of the project, and deemed those comments to "not indicate a failure or refusal of the [assessment office] to consider the [MMIWG] inquiry report, but the opposite." As for the company's history of non-compliance, Justice Norell also disagreed that the EAO had not accounted for it, stating that "Both the frequency and nature of the non-compliances are addressed [in the statement of the environmental assessment certificate]" and, further, that "the Evaluation Report concludes that: the non-compliance which had occurred had been addressed by the enforcement process; and CGL was committed to compliance, and had either rectified or was in the process of rectifying any non-compliance."

2021–2022 Morice River conflict

The 2020 memorandum of understanding did not address the CGL pipeline, and construction met with continued opposition from the Gidimtʼen Access Point and Unistʼotʼen groups during 2021. On September 25, 2021, Cas Yikh house and Gidimtʼen clan members erected new blockades on the Morice West Forest Service Road to block CGL's attempts to drill under the Morice River. These blockades included many solar-panelled tiny home structures, some of which were furnished and stocked with kitchens.
Sleydoʼ (Molly Wickham), one of the leaders of Gidimtʼen Access Point, claimed that the work near the river would disrupt her people's livelihoods as well as the salmon population. She called on supporters to join the new blockades. A Gidimtʼen Access Point press release called the Morice River "sacred headwaters that nourish the Wetʼsuwetʼen Yintah [territory] and all those within its catchment area". Coastal GasLink president Tracy Robinson issued a statement about the drilling, saying "our crews will utilize a micro-tunnel method which is a type of trenchless crossing that is constructed well below the riverbed and does not disturb the stream or the bed and banks of the river". Robinson said that experts deemed micro-tunnelling to be the safest and most environmentally-responsible method. She also said that there was still an enforceable injunction against any opposition to CGL construction. In the days after the new blockades went up, the RCMP removed two of them, arresting at least one person. In a November 2021 interview, Sleydoʼ said that RCMP had used abusive force to remove a Wetʼsuwetʼen protester who locked himself underneath a bus being used as a blockade. She said the man was "receiving ongoing medical care and has [nerve damage] in his hands" after being lifted by the legs and repeatedly slammed against the ground while his hands were clipped under the vehicle. Wetʼsuwetʼen Chief Dstaʼhyl confronted construction crews in October, disabling one of their excavators following numerous warnings. The excavator was recovered by CGL that afternoon. On November 18 and 19, RCMP dismantled the blockades and arrested 29 people including Sleydoʼ and two journalists. In February 2022, masked assailants threatened CGL workers and destroyed millions of dollars worth of equipment. No arrests were made. In September 2022, drilling equipment was in place, and CGL was preparing to drill under the river. 
Members of the Gidimtʼen Clan and residents of Unistʼotʼen Camp said that they were under constant surveillance.

Solidarity protests

Protests on January 20 disrupted BC ferry service leaving from Swartz Bay, which is Victoria's main ferry link to the BC mainland. BC Ferries later obtained a preemptive injunction to prevent anticipated future demonstrations from blocking Vancouver–Victoria ferry service. Once the RCMP began to take down the Wetʼsuwetʼen blockades, protests sprang up across Canada in solidarity with the hereditary chiefs and the land defenders. On February 11, protesters surrounded the BC Legislature in Victoria, preventing the traditional ceremonies around the reading of the Throne Speech by the Lieutenant Governor. Members of the legislature had to have police assistance to enter or used alternate entrances. Other protests took place in Hamilton, Nelson, Calgary, Regina, Winnipeg, Toronto, Ottawa, Sherbrooke, and Halifax. Several major protests blocked access to the Port of Vancouver, Deltaport, and two other ports in Metro Vancouver for a number of days before the Metro Vancouver police began enforcing an injunction on the morning of February 10, arresting 47 protesters who refused to cease obstructing the port. On February 15, over 200 protesters in Toronto blocked Macmillan Yard, the second-largest rail classification yard in Canada. Protests on February 16 and 17 temporarily blocked the Rainbow Bridge in Niagara Falls, Ontario and the Thousand Islands Bridge in Ivy Lea, Ontario, two major border crossings between the United States and Canada. At the same time, Miꞌkmaq demonstrators partially blocked access to the Confederation Bridge, the sole road link to Prince Edward Island. On February 18, several activists were arrested for trespassing at BC Premier Horgan's residence. On February 24, demonstrators shut down a major junction in Hamilton, Ontario.
A nationwide student walkout occurred March 4, with university students across the country showing their support for the Wetʼsuwetʼen protesters. The protests led to the creation of several hashtags, used widely on social media in relation to coverage of the protests. These include #ShutDownCanada, #WetsuwetenStrong, #LandBack, and #AllEyesOnWetsuweten. By September 21, over 200 Facebook users had been blocked from posting or sending messages on the site. All the blocked accounts had shared information about an online rally held on May 7 in support of the ongoing struggle against the construction of the CGL pipeline. When asked by organizers why the accounts had been suspended, a spokesperson from Facebook said, "our systems mistakenly removed these accounts and content. They have since been restored and we’ve lifted any limits imposed on identified profiles."

Rail disruptions

Other First Nations, activists and other supporters of the Wetʼsuwetʼen hereditary chiefs targeted railway lines for their demonstrations of solidarity. Near Belleville, Ontario, members of the Mohawks of the Bay of Quinte First Nation began a blockade of the Canadian National Railway line just north of Tyendinaga Mohawk Territory on February 6, causing Via Rail to cancel trains on its Toronto–Montreal and Toronto–Ottawa routes. The line is critical to the CNR network in Eastern Canada, as CNR has no other east–west rail lines through Eastern Ontario. However, to mitigate major economic disruption, CNR brokered a "workaround" agreement with Canadian Pacific Railway (CPR) to share tracks and avoid the Mohawk protesters. Other protests blocking rail lines halted service on Via Rail's Prince Rupert and Prince George lines, running on CNR tracks. Protests on the CNR line west of Winnipeg additionally blocked the Canadian, the passenger rail route operated by Via Rail from Vancouver to Toronto.
Protests disrupted multiple GO Transit rail services in Toronto and Hamilton, as well as Exo's Candiac line in Montreal. CPR rail lines were also disrupted in downtown Toronto and south of Montreal. The Société du Chemin de fer de la Gaspésie (SCFG) freight railway between Gaspé and Matapedia was blockaded on February 10 by members of the Listuguj Miꞌgmaq First Nation. Starting on February 6, Via Rail announced passenger train cancellations on a day-to-day basis. Trains on the Toronto–Ottawa and Toronto–Montreal routes were cancelled first. Prince George–Prince Rupert service was suspended on February 11. Canadian National Railway (CNR) rail freight traffic was also halted along these lines. Other Canadian routes were intermittently disrupted as well. On February 13, CNR shut down its rail lines east of Toronto. On the same day Via Rail, which rents these lines for its passenger service, announced it would be shutting down its entire network, with the exception of the Sudbury–White River train and the Winnipeg–Churchill train between Churchill and The Pas, until further notice. Amtrak international service from New York City to Toronto and Montreal was not affected. Amtrak rail service between Seattle and Vancouver on BNSF Railway Company lines was intermittently blocked; Amtrak's bus operation over the same route was not affected. CNR obtained multiple injunctions against the protesters, including several separate injunctions against the Mohawk protesters near Belleville. The Ontario Provincial Police decided not to act immediately on the injunctions. The rail blockade of Prince Rupert was lifted on February 14. On February 18, Via announced partial restoration of passenger service starting February 20, between Ottawa and Quebec City. Via later announced it would resume some south-western Ontario routes. Trans-Canada passenger service was not restored.
On February 19, a group of about 20 protesters calling themselves "Cuzzins for Wetʼsuwetʼen" erected a blockade on a CN rail line in west Edmonton, Alberta. CN obtained a court injunction, and less than twelve hours after the blockade began, it was dismantled by counter-protesters after a CN legal representative arrived to serve the injunction. Also on February 19, activists set up a blockade on the Mont-Saint-Hilaire rail line in Saint-Lambert, Quebec, promising to stay until the RCMP left the disputed zone in Wetʼsuwetʼen territory. The blockade caused Via Rail to postpone resuming service between Montreal and Quebec City. On February 20, another blockade of CPR tracks sprang up between Kamloops and Chase in British Columbia. The protesters left voluntarily on February 21, after the RCMP offered to leave the Wetʼsuwetʼen land. The group vowed to return in four days if a dialogue was not started between the prime minister and the hereditary chiefs. This was followed by CPR writing an open letter to Prime Minister Trudeau, asking him to speak directly with the hereditary chiefs. The Mont-Saint-Hilaire rail line was cleared on February 21, after Quebec Police arrived to enforce a CNR injunction. On March 5, the rail blockades in Kahnawake and the Gaspé Peninsula were removed peacefully by the First Nations involved. In early March, Canada's medical officer had advised against gatherings as part of the country's response to the COVID-19 pandemic, and by the second week of March, most blockades had come down. Despite the widespread closures in response to the pandemic, CGL continued construction in the disputed territory. Pipeline opponents launched a letter-writing campaign on March 21 urging the company to stop. Businesses targeted by protesters, such as CN Rail, have filed lawsuits against protesters to recover damages.
Economic impact

The blockades led to the shutdown of CNR's Eastern Canadian network, causing a complete halt of freight traffic from Halifax west to Toronto. On February 19, Canadian Manufacturers and Exporters estimated that million in goods were being stranded each day of the shutdown. An executive of the Business Council of Canada called the shutdown "potentially a catastrophe for the economy" and said that rail "is the backbone of infrastructure in this country." Due to a poor growing season which resulted in an unusually late harvest just before Christmas, Canadian wheat and barley shipments were already in a backlog and were further impacted by the rail blockades. Spring farm supplies such as fertilizer were also delayed by the rail shutdown. Canadian grain farmers have previously advocated to have rail transport declared an essential service. Canadian Federation of Agriculture president Mary Robinson warned of "huge financial consequences" as farmers do not get paid until products are delivered to the market. Dennis Darby, president and CEO of the Canadian Manufacturers and Exporters Association, stated that Canadian manufacturers rely on 4,500 rail cars per day, which represent both supply chain and delivery of finished products. Many of these products are too large or bulky to be shipped by other means. The total value of these deliveries amounts to billion annually. Chemicals trade group Responsible Distribution Canada warned of shortages of chlorine to purify drinking water. Supply chains for chlorine, jet fuel and de-icing fluid all rely on rail transport: "You can't put it in a truck and send it down the 401" said an executive of the Chemistry Industry Association of Canada. Mining, which accounted for 20% of Canada's 2018 exports, also moves "most" of its output by rail. By February 21, four thousand containers reportedly sat on the docks of Montreal waiting for transport and no grain had arrived for shipment at the port.
In Halifax, the Atlantic Container Line diverted ships to New York and Baltimore. In Vancouver, goods waiting to be shipped east led to a backlog of 50 ships waiting to be unloaded. The disruption of propane rail shipments was expected to lead to shortages and rationing, during a time when many communities were experiencing extremely cold weather. In Atlantic Canada, at the end of the propane supply line, reserves fell to a five-day supply by February 14. Superior Propane, Canada's largest supplier, rationed distribution in Atlantic Canada. SCFG laid off five of its 30 employees on February 14. On February 18, CNR laid off 450 employees for reasons related to the pipeline disruptions; the company stated that as many as 6,000 of its 24,000 employees could be laid off. On February 19, Via Rail announced temporary layoffs of up to 1,000 people due to the blockades. By the first week of March, the majority of the laid-off Via Rail employees and all of the affected CNR employees had been recalled. Scrapped rail passenger services in the Montreal–Toronto–Ottawa triangle caused more than 42,000 Via Rail passengers, and more Trans-Canada passengers, to seek alternatives. On March 13, Parliamentary Budget Officer Yves Giroux released a report that the protests would leave "a minimal dent in the pace of economic growth", estimating that the blockades would reduce Canadian economic growth by 0.2% for the first quarter of 2020. For the whole year, the expectation was for the GDP to fall by about 0.01% of the total GDP, which Giroux referred to as "a blip", despite the warnings by businesses of shortages, referred to by the PBO as "overblown". The PBO said that the COVID-19 pandemic would likely have a greater impact on the economy.

Federal government response and reaction

Prime Minister Justin Trudeau said politicians should not be telling the police how to deal with protesters and that resolution should come through dialogue.
The Canadian government does not direct police operations; in any case, the police services involved are under provincial or municipal control. On February 12, Canada's Indigenous Services Minister Marc Miller began a dialogue with several indigenous leaders from different parts of Canada. On February 15, Miller met the Mohawks in a ceremonial encounter on the CNR train tracks to renew a 17th-century treaty between the Iroquois and the British Crown known as the Silver Covenant Chain. Miller then discussed the blockade with the leaders of the Mohawks of the Bay of Quinte First Nation, along with Kanenhariyo, one of the primary organizers of the protest near Tyendinaga. Miller asked for a temporary stand-down of the protest, but his request was refused after Wetʼsuwetʼen hereditary Chief Woos, who was on the phone, stated that the RCMP was still on his territory and "they are out there with guns, threatening us." Leaked audio of the meetings included a Mohawk resident telling the minister to "Get the red coats out first, get the blue coats out … then we can maybe have some common discussions". Miller returned to Ottawa and met with Prime Minister Trudeau and other members of the Cabinet called the "Incident Response Group". Trudeau had returned from a foreign relations trip to deal with the issue. On February 18, the House of Commons of Canada resumed after the winter break. Trudeau addressed the Commons asking Canadians for patience as the government sought a negotiated end. "On all sides, people are upset and frustrated. I get it. It's understandable because this is about things that matter—rights and livelihoods, the rule of law and our democracy." Opposition leader Andrew Scheer condemned the government's refusal to use the police to stop the illegal blockades, calling it "the weakest response to a national crisis in Canadian history. Will our country be one of the rule of the law, or will our country be one of the rule of the mob?"
Trudeau held a private meeting with the other opposition parties' leaders, barring Scheer after his comments. On February 18, the Assembly of First Nations (AFN) held a press conference in Ottawa. AFN National Chief Perry Bellegarde called for all parties to engage in dialogue. "It's on everybody. It's not on any one individual. I'm just calling on all the parties to come together, get this dialogue started in a constructive way." On February 20, according to a statement from Canadian Public Safety Minister Bill Blair, the RCMP agreed to move its personnel from Wetʼsuwetʼen territory to nearby Houston. The next day, Prime Minister Trudeau held a press conference to state "Canadians have been patient. Our government has been patient, but it has been two weeks and the barricades need to come down now. The government had made repeated overtures to the hereditary chiefs to hold meetings but had been ignored. You can't have dialogue when only one party is coming to the table. Our hand remains extended should someone want to reach for it. We have come to a moment where the onus is now on Indigenous leadership." Shortly after Trudeau's statement on February 21, the Wetʼsuwetʼen hereditary chiefs released a statement reaffirming that discussions would continue once all RCMP and CGL personnel vacate the Wetʼsuwetʼen territory. At the same time, the Mohawk of Tyendinaga asserted that their rail blockade would be removed as soon as Wetʼsuwetʼen legal observers confirm that the RCMP is off their land. On February 24, the day of the Mohawk blockade removal by the OPP, Indigenous Services Minister Miller repeated that the Liberal government was "still open for dialogue" and willing to negotiate. 
On February 24, in a statement signed and supported by over 200 Canadian lawyers and legal scholars, Beverly Jacobs and Sylvia McAdam of the University of Windsor, Alex Neve of Amnesty International, and Harsha Walia of the BC Civil Liberties Association responded to the calls for the "rule of law." In their opinion, it is the Canadian federal and provincial governments that are breaking international law, not the Wetʼsuwetʼen hereditary chiefs. They also pointed out that the requirements laid out in the UN Declaration on the Rights of Indigenous Peoples have continued to be ignored by Canadian courts, although Canadian governments have expressed a willingness to follow the UN resolution. They called for an end to the violation of indigenous persons' right to free, prior and informed consent. In early May, the elected chiefs of several Wetʼsuwetʼen band councils (primarily Nee-Tahi-Buhn, Skin Tyee, Tsʼil Kaz Koh, and Wetʼsuwetʼen First Nations) called on Minister Bennett to resign, as the Canadian and BC governments, along with the hereditary chiefs, pressed forward with the memorandum of understanding. In a statement on May 11, before the signing of the memorandum, the elected chiefs called on Minister Bennett to resign due to her "disregard for [their] special relationship". They repeated this demand in a statement on May 14, after the signing of the MOU, and added a call for Minister Marc Miller to speak up about his "intention to protect the programs and services the Wetʼsuwetʼen people depend on". On October 2, CBC News reported that information related to protests in February that they had requested from the Canadian Security Intelligence Service (CSIS) under the Access to Information Act had been withheld.
In withholding the information, CSIS cited section 15 of the act, which defines "subversive or hostile activities" as including sabotage, terrorism, actions directed at a "government change," activities that "threaten" Canadians or federal employees, and espionage. Documents obtained by CBC News in 2019 found that in the two years of their near constant presence on the Morice Forest Service Road, the RCMP had spent over on policing. Chief Naʼmoks compared that very high level of spending with the perceived inaction by the RCMP over violent attacks and harassment of Mi'kmaw fishers in Nova Scotia.
Genome skimming is a sequencing approach that uses low-pass, shallow sequencing of a genome (up to 5%) to generate fragments of DNA, known as genome skims. These genome skims contain information about the high-copy fraction of the genome. The high-copy fraction of the genome consists of the ribosomal DNA, plastid genome (plastome), mitochondrial genome (mitogenome), and nuclear repeats such as microsatellites and transposable elements. It employs high-throughput, next-generation sequencing technology to generate these skims. Although these skims are merely 'the tip of the genomic iceberg', phylogenomic analysis of them can still provide insights on evolutionary history and biodiversity at a lower cost and larger scale than traditional methods. Because genome skimming requires only a small amount of DNA, its methodology can also be applied in fields other than genomics, such as determining the traceability of products in the food industry, enforcing international regulations regarding biodiversity and biological resources, and forensics.

Current Uses

In addition to the assembly of the smaller organellar genomes, genome skimming can also be used to uncover conserved ortholog sequences for phylogenomic studies. In phylogenomic studies of multicellular pathogens, genome skimming can be used to find effector genes, discover endosymbionts and characterize genomic variation.

High-copy DNA

Ribosomal DNA

The internal transcribed spacers (ITS) are non-coding regions within the 18S-5.8S-28S rDNA of eukaryotes and are one feature of rDNA that has been used in genome skimming studies. ITS are used to detect different species within a genus, owing to their high inter-species variability. Their low individual variability, however, prevents the identification of distinct strains or individuals. They are also present in all eukaryotes, have a high evolution rate, and have been used in phylogenetic analysis between and across species.
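Because organellar and ribosomal DNA form the high-copy fraction, even a shallow skim yields deep coverage of these targets, and positions falling below a chosen depth cutoff can simply be masked. The following back-of-the-envelope sketch illustrates this; the function names and all numbers are hypothetical illustrations, not a published tool's API:

```python
def expected_depth(total_bases, target_fraction, target_size):
    """Expected per-base depth of a high-copy target in a low-pass skim.

    total_bases: total sequenced bases in the skim
    target_fraction: fraction of cellular DNA belonging to the target
    target_size: target genome length in bp
    """
    return total_bases * target_fraction / target_size


def mask_low_depth(sequence, depths, min_depth):
    """Replace bases sequenced below min_depth with 'N'."""
    return "".join(
        base if depth >= min_depth else "N"
        for base, depth in zip(sequence, depths)
    )


# Hypothetical budget: a 1 Gb skim of a plant whose ~150 kb plastome
# makes up ~4% of cellular DNA
plastome_depth = expected_depth(1_000_000_000, 0.04, 150_000)

# Mask positions of a toy sequence sequenced below a 20X cutoff
masked = mask_low_depth("ACGTACGT", [35, 30, 18, 40, 5, 25, 22, 19], min_depth=20)
```

Under these assumed numbers the plastome would be covered at roughly 267X even though the nuclear genome is only skimmed, which is why organelle genomes assemble well from genome skims.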
When targeting nuclear rDNA, it is suggested that a minimum final sequencing depth of 100X is achieved, and that sequences with less than 5X depth are masked.

Plastomes

The plastid genome, or plastome, has been used extensively in identification and evolutionary studies using genome skimming due to its high abundance within plants (~3-5% of cell DNA), small size, simple structure, and greater conservation of gene structure than nuclear or mitochondrial genes. Plastid studies have previously been limited by the number of regions that could be assessed with traditional approaches. Using genome skimming, the entire plastome can be sequenced at a fraction of the cost and time required for typical sequencing approaches like Sanger sequencing. Plastomes have been suggested as a method to replace traditional DNA barcodes in plants, such as the rbcL and matK barcode genes. Compared to the typical DNA barcode, genome skimming produces plastomes at a tenth of the cost per base. Recent uses of genome skims of plastomes have allowed greater resolution of phylogenies, higher differentiation of specific groups within taxa, and more accurate estimates of biodiversity. Additionally, the plastome has been used to compare species within a genus to look at evolutionary changes and diversity within a group. When targeting plastomes, it is suggested that a minimum final sequencing depth of 30X is achieved for single-copy regions to ensure high-quality assemblies. Single nucleotide polymorphisms (SNPs) with less than 20X depth should be masked.

Mitogenomes

The mitochondrial genome, or mitogenome, is used as a molecular marker in a great variety of studies because of its maternal inheritance, high copy-number in the cell, lack of recombination, and high mutation rate.
It is often used for phylogenetic studies as it is very uniform across metazoan groups: a circular, double-stranded DNA molecule of about 15 to 20 kilobases carrying 37 genes, namely 13 protein-coding genes, 22 transfer RNA genes, and 2 ribosomal RNA genes. Mitochondrial barcode sequences, such as COI, NADH2, 16S rRNA, and 12S rRNA, can also be used for taxonomic identification. The increased publishing of complete mitogenomes allows for inference of robust phylogenies across many taxonomic groups, and it can capture events such as gene rearrangements and positioning of mobile genetic elements. Using genome skimming to assemble complete mitogenomes, the phylogenetic history and biodiversity of many organisms can be resolved. When targeting mitogenomes, there are no specific suggestions for minimum final sequencing depth, as mitogenomes are more variable in size and complexity, particularly in plant species, increasing the difficulty of assembling repeated sequences. However, highly conserved coding sequences and nonrepetitive flanking regions can be assembled using reference-guided assembly. Sequences should be masked similarly to targeting plastomes and nuclear ribosomal DNA.

Nuclear repeats (satellites or transposable elements)

Nuclear repeats in the genome are an underused source of phylogenetic data. When the nuclear genome is sequenced at 5% of the genome, thousands of copies of the nuclear repeats will be present. Although the repeats sequenced will only be representative of those in the entire genome, it has been shown that these sequenced fractions accurately reflect genomic abundance. These repeats can be clustered de novo and their abundance estimated. The distribution and occurrence of these repeat types can be phylogenetically informative and provide information about the evolutionary history of various species.

Low-copy DNA

Low-copy DNA can prove useful for evolutionary, developmental, and phylogenetic studies.
It can be mined from high-copy fractions in a number of ways, such as developing primers from databases that contain conserved orthologous genes, single-copy conserved orthologous genes, and shared-copy genes. Another method is looking for novel probes that target low-copy genes using transcriptomics via Hyb-Seq. While nuclear genomes assembled using genome skims are extremely fragmented, some low-copy nuclear genes can be successfully assembled.

Low-quantity degraded DNA

Previous methods of trying to recover degraded DNA were based on Sanger sequencing, relied on large intact DNA templates, and were affected by contamination and the method of preservation. Genome skimming, on the other hand, can be used to extract genetic information from preserved specimens in herbariums and museums, where the DNA is often very degraded and very little remains. Studies in plants show that DNA as old as 80 years, in quantities as low as 500 pg of degraded DNA, can be used with genome skimming to infer genomic information. In herbaria, even with low-yield and low-quality DNA, one study was still able to produce "high-quality complete chloroplast and ribosomal DNA sequences" at a large scale for downstream analyses. In field studies, invertebrates are stored in ethanol, which is usually discarded during DNA-based studies. Genome skimming has been shown to detect the low quantity of DNA in this ethanol fraction and provide information about the biomass of the specimens in a fraction, the microbiota of outer tissue layers, and the gut contents (like prey) released by the vomit reflex. Thus, genome skimming can provide an additional method of understanding ecology via low-quantity DNA.

Workflow

DNA extraction

DNA extraction protocols will vary depending on the source of the sample (i.e. plants, animals, etc.).
The following DNA extraction protocols have been used in genome skimming: Plants Plant DNAzol Reagent Qiagen DNeasy Plant Mini kit Tiangen DNAsecure Plant kit Invitrogen ChargeSwitch gDNA Plant kit Other Quick-DNA Plus Extraction kit Cetyl Trimethylammonium Bromide (CTAB) method Qiagen DNeasy Tissue Extraction kit Qiagen DNeasy Blood and Tissue kit Library preparation Library preparation protocols will depend on a variety of factors: organism, tissue type, etc. In the case of preserved specimens, specific modifications to library preparation protocols may have to be made. The following library preparation protocols have been used in genome skimming: Sequencing Sequencing with short reads or long reads will depend on the target genome or genes. Microsatellites in nuclear repeats require longer reads. The following sequencing platforms have been used in genome skimming: The Illumina MiSeq platform has been chosen by some researchers for its long read lengths among short-read platforms. Assembly After genome skimming, high-copy organellar DNA can be assembled with a reference guide or assembled de novo. High-copy nuclear repeats can be clustered de novo. The assemblers chosen will depend on the target genome and on whether short or long reads are used. The following tools have been used to assemble genomes from genome skims: Plastomes Fast-Plast NOVOPlasty ORGanelle Mitogenomes Fast-Plast NOVOPlasty ORGanelle MITObim Other Annotation Annotation is used to identify genes in the genome assemblies. The annotation tool chosen will depend on the target genome and the target features of that genome. 
The following annotation tools have been used in genome skimming to annotate organellar genomes: Plastomes cpGAVAS Dual Organellar GenoMe Annotator (DOGMA) Mitogenomes MITOS MITOS2 Dual Organellar GenoMe Annotator (DOGMA) tRNAs ARWEN tRNAscan-SE rRNAs RNAmmer Other BLAST Geneious ORF Finder GeneWise TransDecoder EMBOSS Transeq Phylogenetic reconstruction The assembled sequences are globally aligned, and then phylogenetic trees are inferred using phylogenetic reconstruction software. The software chosen for phylogeny reconstruction will depend on whether a Maximum Likelihood (ML), Maximum Parsimony (MP), or Bayesian Inference (BI) method is appropriate. The following phylogenetic reconstruction programs have been used in genome skimming: Maximum Likelihood (ML) RAxML RAxML-HPC PhyML Geneious IQ-TREE Maximum Parsimony (MP) PAUPRat PAUP* Bayesian Inference (BI) MrBayes BEAST ExaBayes PhyloBayes Other MEGA4 MEGA6 MEGA7 Tools and Pipelines Various protocols, pipelines, and bioinformatic tools have been developed to help automate the downstream processes of genome skimming. Hyb-Seq Hyb-Seq is a new protocol for capturing low-copy nuclear genes that combines target enrichment and genome skimming. Target enrichment of the low-copy loci is achieved through enrichment probes designed for specific single-copy exons, but requires a nuclear draft genome and transcriptome of the targeted organism. The target-enriched libraries are then sequenced, and the resulting reads processed, assembled, and identified. Using off-target reads, rDNA cistrons and complete plastomes can also be assembled. Through this process, Hyb-Seq is able to produce genome-scale datasets for phylogenomics. GetOrganelle GetOrganelle is a toolkit that assembles organellar genomes using genome skimming reads. Organelle-associated reads are recruited using a modified "baiting and iterative mapping" approach. The reads aligning to the target genome, using Bowtie2, are referred to as "seed reads". 
The seed reads are used as "baits" to recruit more organelle-associated reads via multiple iterations of extension. The read extension algorithm uses a hashing approach, where the reads are cut into substrings of certain lengths, referred to as "words". At each extension iteration, these "words" are added to a hash table, referred to as a "baits pool", which dynamically increases in size with each iteration. Due to the low sequencing coverage of genome skims, non-target reads, even those with high sequence similarity to target reads, are largely not recruited. Using the final recruited organelle-associated reads, GetOrganelle conducts a de novo assembly using SPAdes. The assembly graph is filtered and untangled, producing all possible paths of the graph, and therefore all configurations of the circular organellar genomes. Skmer Skmer is an assembly-free and alignment-free tool to compute genomic distances between query and reference genome skims. Skmer uses a two-stage approach to compute these distances. First, it generates k-mer frequency profiles using a tool called JellyFish, and these k-mers are then converted into hashes. A random subset of these hashes is selected to form a so-called "sketch". In its second stage, Skmer uses Mash to estimate the Jaccard index of two of these sketches. The combination of these two stages is used to estimate the evolutionary distance. Geneious Geneious is an integrative software platform that allows users to perform various steps in bioinformatic analysis, such as assembly, alignment, and phylogenetics, by incorporating other tools within a GUI-based platform. PhyloHerb PhyloHerb is a bioinformatic pipeline written in Python. It uses a built-in database or a user-specified reference to extract orthologous sequences from plastid, mitochondrial, and nuclear ribosomal regions using a BLAST search. 
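The sketch-and-distance idea behind Mash, which Skmer builds on, can be illustrated with a minimal, self-contained example. The k-mer size, sketch size, and hash function below are arbitrary toy choices, not Skmer's or Mash's actual defaults:

```python
import hashlib
import math

def kmers(seq, k=5):
    """All k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def sketch(seq, k=5, s=8):
    """Hash every k-mer and keep the s smallest hashes (a bottom sketch)."""
    hashes = sorted(int(hashlib.sha1(km.encode()).hexdigest(), 16)
                    for km in kmers(seq, k))
    return set(hashes[:s])

def mash_distance(seq_a, seq_b, k=5, s=8):
    sa, sb = sketch(seq_a, k, s), sketch(seq_b, k, s)
    j = len(sa & sb) / len(sa | sb)  # approximate Jaccard index from the sketches
    if j == 0:
        return 1.0
    # Mash distance: D = -(1/k) * ln(2J / (1 + J))
    return max(0.0, -math.log(2 * j / (1 + j)) / k)

print(mash_distance("ACGTACGTACGT", "ACGTACGTACGT"))  # identical skims -> 0.0
```

Real tools use far larger sketches and correct the Jaccard estimate for sequencing error and coverage; the point here is only the sketch → Jaccard → distance pipeline.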
In silico Genome skimming Although genome skimming is usually chosen as a cost-effective method to sequence organellar genomes, genome skimming can be done in silico if (deep) whole-genome sequencing data have already been obtained. In silico genome skimming has been demonstrated to simplify organellar genome assembly by subsampling the reads of the nuclear genome. Since the organellar genomes will be high-copy in the cell, in silico genome skimming essentially filters out nuclear sequences, leaving a higher organellar-to-nuclear sequence ratio for assembly and reducing the complexity of the assembly problem. In silico genome skimming was first done as a proof of concept, optimizing the parameters for read type, read length, and sequencing coverage. Other Applications Other than the current uses listed above, genome skimming has also been applied to other tasks, such as quantifying pollen mixtures and the monitoring and conservation of certain populations. Genome skimming can also be used for variant calling, to examine single nucleotide polymorphisms across a species. Advantages Genome skimming is a cost-effective, rapid, and reliable method to generate large shallow datasets, since several datasets (plastid, mitochondrial, nuclear) are generated per run. It is very simple to implement, requires less lab work and optimization, and does not require a priori knowledge of the organism or its genome size. This provides a low-risk avenue for biological inquiry and hypothesis generation without a huge commitment of resources. Genome skimming is an especially advantageous approach in cases where the genomic DNA may be old and degraded from chemical treatments, such as specimens from herbarium and museum collections, a largely untapped genomic resource. Genome skimming allows for the molecular characterization of rare or extinct species. 
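The reason shallow skims still recover organellar genomes can be made concrete with a back-of-the-envelope coverage calculation; the genome size and plastid copy number below are purely illustrative assumptions, not measurements from any study:

```python
def expected_depths(total_bases, nuclear_size, organelle_size, copies):
    """Expected sequencing depth when reads sample the cellular DNA pool
    uniformly; `copies` is organelle copies per nuclear genome copy."""
    pool = nuclear_size + copies * organelle_size
    nuclear_depth = total_bases * (nuclear_size / pool) / nuclear_size
    organelle_depth = total_bases * (copies * organelle_size / pool) / organelle_size
    return nuclear_depth, organelle_depth

# Hypothetical plant: 1 Gb nuclear genome, 150 kb plastome at 1,000 copies.
nuc, org = expected_depths(total_bases=1e9, nuclear_size=1e9,
                           organelle_size=150e3, copies=1000)
print(f"nuclear ~{nuc:.2f}x, plastome ~{org:.0f}x")  # ~0.87x vs ~870x
```

Under these assumptions, less than 1× nuclear coverage still yields hundreds-fold plastome coverage, which is why subsampling reads (in silico skimming) barely hurts organellar assembly while greatly simplifying it.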
The preservation processes in ethanol often damage the genomic DNA, which hinders the success of standard PCR protocols and other amplicon-based approaches. Genome skimming thus presents an opportunity to sequence samples with very low DNA concentrations, without the need for DNA enrichment or amplification. Library preparation specific to genome skimming has been shown to work with as little as 37 ng of DNA (0.2 ng/µl), 135-fold less than recommended by Illumina. Although genome skimming is mostly used to extract high-copy plastomes and mitogenomes, it can also provide partial sequences of low-copy nuclear sequences. These sequences may not be sufficiently complete for phylogenomic analysis, but can be sufficient for designing PCR primers and probes for hybridization-based approaches. Genome skimming is not dependent on any specific primers and remains unaffected by gene rearrangements. Limitations Genome skimming scratches the surface of the genome, so it will not suffice for biological questions that require gene prediction and annotation. These downstream steps are required for deeper and more meaningful analyses. Although plastid genomic sequences are abundant in genome skims, the presence of mitochondrial and nuclear pseudogenes of plastid origin can potentially pose issues for plastome assemblies. A combination of sequencing depth and read type, as well as the genomic target (plastome, mitogenome, etc.), will influence the success of single-end and paired-end assemblies, so these parameters must be carefully chosen. Scalability Both the wet-lab and the bioinformatics parts of genome skimming have certain challenges with scalability. Although the cost of sequencing in genome skimming is affordable, at $80 for 1 Gb in 2016, the library preparation for sequencing is still very expensive, at least ~$200 per sample (as of 2016). Additionally, most library preparation protocols have not yet been fully automated with robotics. 
On the bioinformatics side, large complex databases and automated workflows need to be designed to handle the large amounts of data resulting from genome skimming. Automation of the following processes needs to be implemented: Assembly of the standard barcodes Assembly of organellar DNA (as well as nuclear ribosomal tandem repeats) Annotation of the different assembled fragments Removal of potential contaminant sequences Estimation of sequencing coverage for single-copy genes Extraction of reads corresponding to single-copy genes Identification of an unknown specimen from a small shotgun sequencing run or any DNA fragment Identification of the different organisms from shotgun sequencing of environmental DNA (metagenomics) Some of these scalability challenges have already been addressed, as shown above in the "Tools and Pipelines" section. See also References Genomics DNA sequencing methods
Genome skimming
[ "Biology" ]
3,696
[ "Genetics techniques", "DNA sequencing methods", "DNA sequencing" ]
63,210,872
https://en.wikipedia.org/wiki/MNase-seq
MNase-seq, short for micrococcal nuclease digestion with deep sequencing, is a molecular biological technique that was first pioneered in 2006 to measure nucleosome occupancy in the C. elegans genome, and was subsequently applied to the human genome in 2008. The term ‘MNase-seq’, however, was not coined until a year later, in 2009. Briefly, this technique relies on the use of the non-specific endo-exonuclease micrococcal nuclease, an enzyme derived from the bacterium Staphylococcus aureus, to bind and cleave protein-unbound regions of DNA on chromatin. DNA bound to histones or other chromatin-bound proteins (e.g. transcription factors) may remain undigested. The uncut DNA is then purified from the proteins and sequenced through one or more of the various Next-Generation sequencing methods. MNase-seq is one of four classes of methods used for assessing the status of the epigenome through analysis of chromatin accessibility. The other three techniques are DNase-seq, FAIRE-seq, and ATAC-seq. While MNase-seq is primarily used to sequence regions of DNA bound by histones or other chromatin-bound proteins, the other three are commonly used for: mapping Deoxyribonuclease I hypersensitive sites (DHSs), sequencing the DNA unbound by chromatin proteins, or sequencing regions of loosely packaged chromatin through transposition of markers, respectively. History Micrococcal nuclease (MNase) was first discovered in S. aureus in 1956, crystallized in 1966, and characterized in 1967. MNase digestion of chromatin was key to early studies of chromatin structure, being used to determine that each nucleosomal unit of chromatin is composed of approximately 200bp of DNA. This, alongside Olins’ and Olins’ “beads on a string” model, confirmed Kornberg’s ideas regarding the basic chromatin structure. Upon additional studies, it was found that MNase could not degrade histone-bound DNA shorter than ~140bp, whereas DNase I and II could degrade the bound DNA to as low as 10bp. 
This ultimately elucidated that ~146bp of DNA wrap around the nucleosome core, that ~50bp of linker DNA connects each nucleosome, and that 10 continuous base-pairs of DNA tightly bind to the core of the nucleosome at intervals. In addition to being used to study chromatin structure, micrococcal nuclease digestion had been used in oligonucleotide sequencing experiments since its characterization in 1967. MNase digestion was additionally used in several studies to analyze chromatin-free sequences, such as yeast (Saccharomyces cerevisiae) mitochondrial DNA as well as bacteriophage DNA, through its preferential digestion of adenine- and thymine-rich regions. In the early 1980s, MNase digestion was used to determine the nucleosomal phasing and associated DNA for chromosomes from mature SV40, fruit flies (Drosophila melanogaster), yeast, and monkeys, among others. The first study to use this digestion to examine the relevance of chromatin accessibility to gene expression in humans came in 1985. In this study, nuclease was used to find the association of certain oncogenic sequences with chromatin and nuclear proteins. Studies utilizing MNase digestion to determine nucleosome positioning without sequencing or array information continued into the early 2000s. With the advent of whole genome sequencing in the late 1990s and early 2000s, it became possible to compare purified DNA sequences to the eukaryotic genomes of S. cerevisiae, Caenorhabditis elegans, D. melanogaster, Arabidopsis thaliana, Mus musculus, and Homo sapiens. MNase digestion was first applied to genome-wide nucleosome occupancy studies in S. cerevisiae, accompanied by analyses through microarrays to determine which DNA regions were enriched with MNase-resistant nucleosomes. MNase-based microarray analyses were often utilized at genome-wide scales in yeast, and in limited genomic regions in humans, to determine nucleosome positioning, which could be used as an inference for transcriptional inactivation. 
In 2006, Next-Generation sequencing was first coupled with MNase digestion to explore nucleosome positioning and DNA sequence preferences in C. elegans. This was the first example of MNase-seq in any organism. It was not until 2008, around the time Next-Generation sequencing was becoming more widely available, that MNase digestion was combined with high-throughput sequencing, namely Solexa/Illumina sequencing, to study nucleosomal positioning at a genome-wide scale in humans. A year later, the terms “MNase-Seq” and “MNase-ChIP”, for micrococcal nuclease digestion with chromatin immunoprecipitation, were finally coined. Since its initial application in 2006, MNase-seq has been utilized to deep sequence DNA associated with nucleosome occupancy and epigenomics across eukaryotes. As of February 2020, MNase-seq is still used to assay chromatin accessibility. Description Chromatin is dynamic, and the positioning of nucleosomes on DNA changes through the activity of various transcription factors and remodeling complexes, approximately reflecting transcriptional activity at these sites. DNA wrapped around nucleosomes is generally inaccessible to transcription factors. Hence, MNase-seq can be used to indirectly determine which regions of DNA are transcriptionally inaccessible by directly determining which regions are bound to nucleosomes. In a typical MNase-seq experiment, eukaryotic cell nuclei are first isolated from a tissue of interest. Then, MNase-seq uses the endo-exonuclease micrococcal nuclease to bind and cleave protein-unbound regions of DNA of eukaryotic chromatin, first cleaving and resecting one strand, then cleaving the antiparallel strand as well. The chromatin can be optionally crosslinked with formaldehyde. MNase requires Ca2+ as a cofactor, typically at a final concentration of 1mM. If a region of DNA is bound by the nucleosome core (i.e. histones) or other chromatin-bound proteins (e.g. 
transcription factors), then MNase is unable to bind and cleave the DNA. Nucleosomes or the DNA-protein complexes can be purified from the sample, and the bound DNA can be subsequently purified via gel electrophoresis and extraction. The purified DNA is typically ~150bp if purified from nucleosomes, or shorter if from another protein (e.g. transcription factors). This makes short-read, high-throughput sequencing ideal for MNase-seq, as reads for these technologies are highly accurate but can only cover a couple hundred continuous base-pairs in length. Once sequenced, the reads can be aligned to a reference genome to determine which DNA regions are bound by nucleosomes or proteins of interest, with tools such as Bowtie. The nucleosome positions elucidated through MNase-seq can then be used to predict genomic expression and regulation at the time of digestion. Extended Techniques MNase-ChIP/CUT&RUN sequencing Recently, MNase-seq has also been implemented in determining where transcription factors bind on the DNA. Classical ChIP-seq displays issues with resolution quality, stringency in experimental protocol, and DNA fragmentation. Classical ChIP-seq typically uses sonication to fragment chromatin, which introduces bias in heterochromatic regions due to the condensed and tight binding of chromatin regions to each other. Unlike histones, transcription factors only transiently bind DNA. Other steps, such as sonication in ChIP-seq, require the use of increased temperatures and detergents, which can lead to the loss of the factor. CUT&RUN sequencing is a novel form of MNase-based immunoprecipitation. Briefly, it uses an MNase tagged with an antibody to specifically bind DNA-bound proteins that present the epitope recognized by that antibody. 
Digestion then specifically occurs at regions surrounding that transcription factor, allowing this complex to diffuse out of the nucleus and be obtained without significant background or the complications of sonication. The use of this technique does not require high temperatures or high concentrations of detergent. Furthermore, MNase improves chromatin digestion due to its exonuclease and endonuclease activity. Cells are lysed in an SDS/Triton X-100 solution. Then, the MNase-antibody complex is added. Finally, the protein-DNA complex can be isolated, with the DNA being subsequently purified and sequenced. The resulting soluble extract contains a 25-fold enrichment in fragments under 50bp. This increased enrichment results in cost-effective high-resolution data. Single-cell MNase-seq Single-cell micrococcal nuclease sequencing (scMNase-seq) is a novel technique that is used to analyze nucleosome positioning and to infer chromatin accessibility with the use of only a single-cell input. First, cells are sorted into single aliquots using fluorescence-activated cell sorting (FACS). The cells are then lysed and digested with micrococcal nuclease. The isolated DNA is subjected to PCR amplification, and then the desired sequence is isolated and analyzed. The use of MNase in single-cell assays results in increased detection of regions such as DNase I hypersensitive sites as well as transcription factor binding sites. Comparison to other Chromatin Accessibility Assays MNase-seq is one of four major methods (DNase-seq, MNase-seq, FAIRE-seq, and ATAC-seq) for more direct determination of chromatin accessibility and the subsequent consequences for gene expression. 
All four techniques are contrasted with ChIP-seq, which relies on the inference that certain marks on histone tails are indicative of gene activation or repression, not directly assessing nucleosome positioning, but instead being valuable for the assessment of histone modifier enzymatic function. DNase-seq As with MNase-seq, DNase-seq was developed by combining an existing DNA endonuclease with Next-Generation sequencing technology to assay chromatin accessibility. Both techniques have been used across several eukaryotes to ascertain information on nucleosome positioning in the respective organisms and both rely on the same principle of digesting open DNA to isolate ~140bp bands of DNA from nucleosomes or shorter bands if ascertaining transcription factor information. Both techniques have recently been optimized for single-cell sequencing, which corrects for one of the major disadvantages of both techniques; that being the requirement for high cell input. At sufficient concentrations, DNase I is capable of digesting nucleosome-bound DNA to 10bp, whereas micrococcal nuclease cannot. Additionally, DNase-seq is used to identify DHSs, which are regions of DNA that are hypersensitive to DNase treatment and are often indicative of regulatory regions (e.g. promoters or enhancers). An equivalent effect is not found with MNase. As a result of this distinction, DNase-seq is primarily utilized to directly identify regulatory regions, whereas MNase-seq is used to identify transcription factor and nucleosomal occupancy to indirectly infer effects on gene expression. FAIRE-seq FAIRE-seq differs more from MNase-seq than does DNase-seq. FAIRE-seq was developed in 2007 and combined with Next-Generation sequencing three years later to study DHSs. FAIRE-seq relies on the use of formaldehyde to crosslink target proteins with DNA and then subsequent sonication and phenol-chloroform extraction to separate non-crosslinked DNA and crosslinked DNA. 
The non-crosslinked DNA is sequenced and analyzed, allowing for direct observation of open chromatin. MNase-seq does not measure chromatin accessibility as directly as FAIRE-seq. However, unlike FAIRE-seq, it does not necessarily require crosslinking, nor does it rely on sonication, but it may require phenol and chloroform extraction. Two major disadvantages of FAIRE-seq, relative to the other three classes, are the minimum required input of 100,000 cells and the reliance on crosslinking. Crosslinking may bind other chromatin-bound proteins that transiently interact with DNA, hence limiting the amount of non-crosslinked DNA that can be recovered and assayed from the aqueous phase. Thus, the overall resolution obtained from FAIRE-seq can be relatively lower than that of DNase-seq or MNase-seq and with the 100,000 cell requirement, the single-cell equivalents of DNase-seq or MNase-seq make them far more appealing alternatives. ATAC-seq ATAC-seq is the most recently developed class of chromatin accessibility assays. ATAC-seq uses a hyperactive transposase to insert transposable markers with specific adapters, capable of binding primers for sequencing, into open regions of chromatin. PCR can then be used to amplify sequences adjacent to the inserted transposons, allowing for determination of open chromatin sequences without causing a shift in chromatin structure. ATAC-seq has been proven effective in humans, amongst other eukaryotes, including in frozen samples. As with DNase-seq and MNase-seq, a successful single-cell version of ATAC-seq has also been developed. ATAC-seq has several advantages over MNase-seq in assessing chromatin accessibility. ATAC-seq does not rely on the variable digestion of the micrococcal nuclease, nor crosslinking or phenol-chloroform extraction. It generally maintains chromatin structure, so results from ATAC-seq can be used to directly assess chromatin accessibility, rather than indirectly via MNase-seq. 
ATAC-seq can also be completed within a few hours, whereas the other three techniques typically require overnight incubation periods. The two major disadvantages to ATAC-seq, in comparison to MNase-seq, are the requirement for higher sequencing coverage and the prevalence of mitochondrial contamination due to non-specific insertion of DNA into both mitochondrial DNA and nuclear DNA. Despite these minor disadvantages, use of ATAC-seq over the alternatives is becoming more prevalent. References Molecular biology techniques
MNase-seq
[ "Chemistry", "Biology" ]
3,229
[ "Molecular biology techniques", "Molecular biology" ]
63,211,137
https://en.wikipedia.org/wiki/Emotion%20recognition%20in%20conversation
Emotion recognition in conversation (ERC) is a sub-field of emotion recognition that focuses on mining human emotions from conversations or dialogues involving two or more interlocutors. The datasets in this field are usually derived from social platforms that provide plentiful, freely available samples, often containing multimodal data (i.e., some combination of textual, visual, and acoustic data). Self- and inter-personal influences play a critical role in identifying some basic emotions, such as fear, anger, joy, and surprise. The more fine-grained the emotion labels are, the harder it is to detect the correct emotion. ERC poses a number of challenges, such as conversational-context modeling, speaker-state modeling, the presence of sarcasm in conversation, and emotion shifts across consecutive utterances of the same interlocutor. The task The task of ERC deals with detecting emotions expressed by the speakers in each utterance of the conversation. ERC depends on three primary factors – the conversational context, the interlocutors' mental states, and intent. Datasets IEMOCAP, SEMAINE, DailyDialogue, and MELD are the four widely used datasets in ERC. Among these four datasets, MELD contains multiparty dialogues. Methods Approaches to ERC consist of unsupervised, semi-supervised, and supervised methods. Popular supervised methods include using or combining pre-defined features, recurrent neural networks (DialogueRNN), graph convolutional networks (DialogueGCN), and attention-gated hierarchical memory networks. Most contemporary methods for ERC are deep learning based and rely on the idea of latent speaker-state modeling. Emotion Cause Recognition in Conversation Recently, a new subtask of ERC has emerged that focuses on recognising emotion causes in conversation. Methods to solve this task rely on a language-model-based question-answering mechanism. RECCON is one of the key datasets for this task. 
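The idea of speaker-state modeling can be caricatured in a few lines: each speaker's running emotion scores are carried into their next turn, so conversational context can label an otherwise neutral utterance. This is a toy sketch only — the lexicon, weights, and carry factor are invented, and no published ERC model works this simply:

```python
# Toy illustration of speaker-state modeling in ERC (not DialogueRNN;
# labels, lexicon, and weights are invented for this sketch).
LEXICON = {
    "great": ("joy", 1.0), "love": ("joy", 1.0),
    "hate": ("anger", 1.0), "terrible": ("anger", 1.0),
    "scared": ("fear", 1.0), "wow": ("surprise", 1.0),
}

def classify_dialogue(turns, carry=0.5):
    """turns: list of (speaker, utterance). Each speaker's previous
    emotion scores are carried forward with weight `carry`, so context
    can disambiguate otherwise neutral utterances."""
    states = {}   # speaker -> {emotion: score}
    labels = []
    for speaker, utterance in turns:
        # start from the speaker's decayed previous state
        scores = {e: carry * s for e, s in states.get(speaker, {}).items()}
        for token in utterance.lower().split():
            word = token.strip(".,!?")
            if word in LEXICON:
                emotion, weight = LEXICON[word]
                scores[emotion] = scores.get(emotion, 0.0) + weight
        states[speaker] = scores
        labels.append(max(scores, key=scores.get) if scores else "neutral")
    return labels

turns = [("A", "I love this!"), ("B", "I hate waiting."), ("A", "Really, still?")]
print(classify_dialogue(turns))  # ['joy', 'anger', 'joy']
```

Note how speaker A's third, lexically neutral utterance is labeled "joy" only because of A's carried-over state — a crude stand-in for the latent speaker-state tracking that deep ERC models learn.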
See also Emotion recognition Sentiment analysis References Emotion Applications of artificial intelligence
Emotion recognition in conversation
[ "Biology" ]
428
[ "Emotion", "Behavior", "Human behavior" ]
63,211,357
https://en.wikipedia.org/wiki/Thermoneutral%20voltage
In electrochemistry, a thermoneutral voltage is a voltage drop across an electrochemical cell which is sufficient not only to drive the cell reaction, but also to provide the heat necessary to maintain a constant temperature. For a cell reaction in which n electrons are transferred, the thermoneutral voltage is given by V_tn = ΔH/(nF), where ΔH is the change in enthalpy and F is the Faraday constant. Explanation For such a cell reaction, at constant temperature and pressure, the thermodynamic voltage (the minimum voltage required to drive the reaction) is given by E = ΔG/(nF), where ΔG is the Gibbs energy and F is the Faraday constant. The standard thermodynamic voltage (i.e. at standard temperature and pressure) is given by E° = ΔG°/(nF), and the Nernst equation can be used to calculate the potential at other conditions. The cell reaction is generally endothermic: i.e. it will extract heat from its environment. The Gibbs energy calculation generally assumes an infinite thermal reservoir to maintain a constant temperature, but in a practical case, the reaction will cool the electrode interface and slow the reaction occurring there. If the cell voltage is increased above the thermodynamic voltage, the product of that voltage and the current will generate heat, and if the voltage is such that the heat generated matches the heat required by the reaction to maintain a constant temperature, that voltage is called the "thermoneutral voltage". The rate of delivery of heat is equal to T dS/dt, where T is the temperature (the standard temperature, in this case) and dS/dt is the rate of entropy production in the cell. At the thermoneutral voltage, the net heat exchanged with the environment is zero, which indicates that the thermoneutral voltage may be calculated from the enthalpy. 
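These relations can be checked numerically for water electrolysis. The thermochemical values used below (ΔG° ≈ 237.1 kJ/mol, ΔH° ≈ 285.8 kJ/mol for liquid water) are standard reference data, not taken from this article:

```python
F = 96485.0   # Faraday constant, C/mol
n = 2         # electrons transferred per H2 formed

dG = 237.1e3  # standard Gibbs energy of formation of liquid water, J/mol
dH = 285.8e3  # standard enthalpy of formation of liquid water, J/mol

E_thermo = dG / (n * F)   # minimum (thermodynamic) voltage
V_tn = dH / (n * F)       # thermoneutral voltage
extra = V_tn - E_thermo   # overvoltage that supplies the reaction's heat

print(f"thermodynamic: {E_thermo:.3f} V, thermoneutral: {V_tn:.3f} V")
```

Running this reproduces the familiar 1.229 V and 1.481 V figures for water electrolysis; the ~0.25 V difference is exactly the TΔS heat term expressed as a voltage.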
An example For water at standard temperature (25 °C), the net cell reaction may be written: H2O(l) → H2(g) + 1/2 O2(g). Using Gibbs energies of formation (ΔG° = 237.1 kJ/mol), the thermodynamic voltage at standard conditions is 1.229 V (2 electrons are needed to form H2(g)). Just as the combustion of hydrogen and oxygen generates heat, the reverse reaction generating hydrogen and oxygen will absorb heat. The thermoneutral voltage (using ΔH° = 285.8 kJ/mol) is 1.481 V. References Physical chemistry Electrochemistry Electrochemical equations
Thermoneutral voltage
[ "Physics", "Chemistry", "Mathematics" ]
482
[ "Applied and interdisciplinary physics", "Mathematical objects", "Equations", "Electrochemistry", "nan", "Physical chemistry", "Electrochemical equations" ]
63,211,888
https://en.wikipedia.org/wiki/Transition%20metal%20phosphinimide%20complexes
Transition metal phosphinimide complexes are metal complexes that contain phosphinimide ligands of the general formula NPR3− (R = organic substituent). Several coordination modes have been observed, including terminal and various bridging geometries. In the terminal bonding mode the M-N=P core is usually linear, but some are quite bent. The preferred coordination type varies with the oxidation state and coligands on the metal and the steric and electronic properties of the R groups on phosphorus. Many transition metal phosphinimide complexes have been well-developed and, more recently, main group phosphinimide complexes have been synthesized. Complexes of Ti, Zr, V, Ta Phosphinimide complexes are generally prepared by two routes. For highly electrophilic metal chlorides, the silyl derivative is convenient since it generates volatile trimethylsilyl chloride: R3PNSiMe3 + LnMCl → R3PN-MLn + ClSiMe3 CpTi(NPR3)Cl2 is prepared by this route. More common are salt-elimination reactions: R3PNLi + LnMCl → R3PN-MLn + LiCl Phosphinimide polyethylene catalysts Phosphinimide ligands have shown promise in the area of ethylene polymerization. In terms of homogeneous catalysts, this field has been dominated by metallocene-based catalysts inspired by the Kaminsky catalyst in 1976. Phosphinimide ligands were initially suggested for polyethylene synthesis because they have steric and electronic properties similar to those of metallocene polyethylene catalysts. In most respects, the steric and electronic properties of phosphinimide and cyclopentadienyl ligands are comparable. Metal-bound t-Bu3PN− has a cone angle of 87° vs 83° for cyclopentadienyl. Compared to Cp, the bulky substituents of the phosphinimide ligand are more distant from the metal, which increases the exposure of the metal centre to substrate. The less sterically crowded metal centre appears to be particularly susceptible to deactivation, however. 
The precatalysts are prepared by alkylation or arylation of the phosphinimide complexes with alkyllithium or Grignard reagents, giving products such as CpTi(NPR3)Me2. The zirconium complexes (R3PN)2ZrCl2 can be alkylated or arylated through simple substitution. These organotitanium and organozirconium complexes are activated by treatment with MAO or B(C6F5)3 as a cocatalyst, which activates polymerization through methyl abstraction. The phosphinimide catalyst is thought to be homogeneous and single-sited. It therefore shows reactivity comparable to metallocene catalysts, which are also believed to be homogeneous, single-sited catalysts. The catalytic process is assumed to proceed in much the same way as for metallocene-based catalysts, as the chemistry is thought to occur primarily at the metal centre and not through the bulky ligands. References Coordination complexes Transition metals
Transition metal phosphinimide complexes
[ "Chemistry" ]
673
[ "Coordination chemistry", "Coordination complexes" ]
63,213,352
https://en.wikipedia.org/wiki/List%20of%20copper%20salts
Copper is a chemical element with the symbol Cu (from Latin: cuprum) and atomic number 29. It is easily recognisable due to its distinct red-orange color. Copper also forms a range of different organic and inorganic salts, with oxidation states ranging from (0,I) to (III). These salts (mostly the (II) salts) are often blue to green in color, rather than the orange color copper is known for. Despite being considered a semi-noble metal, copper is one of the most common salt-forming transition metals, along with iron. Copper(0,I) salts Copper(I) salts Copper(II) salts Copper(I, II) salts See also List of organic salts List of inorganic compounds Copper Copper(I) compounds Copper(II) compounds Copper complexes Copper salts
List of copper salts
[ "Chemistry" ]
173
[ "nan" ]
63,214,335
https://en.wikipedia.org/wiki/Jos%C3%A9%20G%C3%B3mez%20de%20Navia
José Gómez de Navia (1757, in San Ildefonso – after 1812, in Madrid) was a Spanish engraver and draftsman. Life and works He began his studies with Manuel Salvador Carmona at the Real Academia de Bellas Artes de San Fernando and won a prize for engraving in 1784. On numerous occasions, he collaborated on projects to illustrate the scientific publications of the Imprenta Real (Royal Printing Office), such as Elements of Theoretical and Experimental Physics, by the French physicist Joseph-Aignan Sigaud de Lafond (1787), The Ten Books of Architecture, by Vitruvius, translated by (1787), Physical-chemical Elements of General Water Analysis by Torbern Bergman (1794), and New Inquiries About Kneecap Fractures and the Diseases that are Related to it, by the Catalonian physician Leonardo Galli (1795). He tried several new methods of engraving, and introduced the technique known as "stippling", which he used in his Collection of Devout Heads, Taken from Paintings by Famous Artists (1794), and in his portrait of Diego Hurtado de Mendoza in the Portraits of Illustrious Spaniards. His culminating work is the Collection of Different Views of the Magnificent Temple and Royal Monastery of San Lorenzo de El Escorial, Factory of the Catholic and Prudent King Felipe II, built by the Distinguished Architects Juan Bautista de Toledo and Juan de Herrera his Disciple, which he undertook on his own initiative. In a letter addressed to the Academia in 1800, he noted that he was short of work and, pursuing his fondness for drawing, had spent the summer sketching at El Escorial. King Charles IV was so pleased with the sketches that he commissioned more, depicting Aranjuez, and provided an annual pension of 300 ducats. Possibly due to failing eyesight or other health issues, the actual engravings were executed by and . Similar projects followed, with the engravings done by Alegre, and Alonso García Sanz (c. 1781-c. 1819). 
His last known work was a series entitled Collection of the Best Views and Most Sumptuous Buildings in Madrid (1812). References Further reading Juan Carrete Parrondo, Diccionario de grabadores y litógrafos que trabajaron en España. Siglos XIV a XIX External links Digitalized works in the Biblioteca Digital Hispánica of the Biblioteca Nacional de España 1757 births 1810s deaths Draughtsmen Spanish engravers Spanish illustrators People from the Province of Segovia
José Gómez de Navia
[ "Engineering" ]
520
[ "Design engineering", "Draughtsmen" ]
63,214,506
https://en.wikipedia.org/wiki/Euler%27s%20Gem
Euler's Gem: The Polyhedron Formula and the Birth of Topology is a book on the formula for the Euler characteristic of convex polyhedra and its connections to the history of topology. It was written by David Richeson and published in 2008 by the Princeton University Press, with a paperback edition in 2012. It won the 2010 Euler Book Prize of the Mathematical Association of America. Topics The book is organized historically, and reviewer Robert Bradley divides the topics of the book into three parts. The first part discusses the earlier history of polyhedra, including the works of Pythagoras, Thales, Euclid, and Johannes Kepler, and the discovery by René Descartes of a polyhedral version of the Gauss–Bonnet theorem (later seen to be equivalent to Euler's formula). It surveys the life of Euler, his discovery in the early 1750s that the Euler characteristic (the number of vertices minus the number of edges plus the number of faces) is equal to 2 for all convex polyhedra, and his flawed attempts at a proof, and concludes with the first rigorous proof of this identity in 1794 by Adrien-Marie Legendre, based on Girard's theorem relating the angular excess of triangles in spherical trigonometry to their area. Although polyhedra are geometric objects, Euler's Gem argues that Euler discovered his formula by being the first to view them topologically (as abstract incidence patterns of vertices, faces, and edges), rather than through their geometric distances and angles. (However, this argument is undermined by the book's discussion of similar ideas in the earlier works of Kepler and Descartes.) The birth of topology is conventionally marked by an earlier contribution of Euler, his 1736 work on the Seven Bridges of Königsberg, and the middle part of the book connects these two works through the theory of graphs. 
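The identity at the heart of the book, that vertices minus edges plus faces equals 2 for every convex polyhedron, is easy to check numerically. A minimal Python sketch (the vertex, edge, and face counts of the Platonic solids are standard geometric data, not taken from the book):

```python
# Euler characteristic V - E + F for the five Platonic solids;
# Euler's observation is that it equals 2 for every convex polyhedron.
SOLIDS = {
    "tetrahedron":  (4, 6, 4),    # (vertices, edges, faces)
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

for name, (v, e, f) in SOLIDS.items():
    assert euler_characteristic(v, e, f) == 2, name
```

The same quantity, computed for a subdivision of an arbitrary closed surface, gives the surface-dependent Euler characteristic discussed later in the book.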
It proves Euler's formula in a topological rather than geometric form, for planar graphs, and discusses its uses in proving that these graphs have vertices of low degree, a key component in proofs of the four color theorem. It even makes connections to combinatorial game theory through the graph-based games of Sprouts and Brussels Sprouts and their analysis using Euler's formula. In the third part of the book, Bradley moves on from the topology of the plane and the sphere to arbitrary topological surfaces. For any surface, the Euler characteristics of all subdivisions of the surface are equal, but they depend on the surface rather than always being 2. Here, the book describes the work of Bernhard Riemann, Max Dehn, and Poul Heegaard on the classification of manifolds, in which it was shown that the two-dimensional compact topological surfaces can be completely described by their Euler characteristics and their orientability. Other topics discussed in this part include knot theory and the Euler characteristic of Seifert surfaces, the Poincaré–Hopf theorem, the Brouwer fixed point theorem, Betti numbers, and Grigori Perelman's proof of the Poincaré conjecture. An appendix includes instructions for creating paper and soap-bubble models of some of the examples from the book. Audience and reception Euler's Gem is aimed at a general audience interested in mathematical topics, with biographical sketches and portraits of the mathematicians it discusses, many diagrams and visual reasoning in place of rigorous proofs, and only a few simple equations. With no exercises, it is not a textbook. However, the later parts of the book may be heavy going for amateurs, requiring at least an undergraduate-level understanding of calculus and differential geometry. Reviewer Dustin L. Jones suggests that teachers would find its examples, intuitive explanations, and historical background material useful in the classroom. Although reviewer Jeremy L. 
Martin complains that "the book's generalizations about mathematical history and aesthetics are a bit simplistic or even one-sided", points out a significant mathematical error in the book's conflation of polar duality with Poincaré duality, and views the book's attitude towards computer-assisted proof as "unnecessarily dismissive", he nevertheless concludes that the book's mathematical content "outweighs these occasional flaws". Dustin Jones evaluates the book as "a unique blend of history and mathematics ... engaging and enjoyable", and reviewer Bruce Roth calls it "well written and full of interesting ideas". Reviewer Janine Daems writes, "It was a pleasure reading this book, and I recommend it to everyone who is not afraid of mathematical arguments". See also List of books about polyhedra References Polyhedral combinatorics Topological graph theory Books about the history of mathematics 2008 non-fiction books
Euler's Gem
[ "Mathematics" ]
976
[ "Graph theory", "Combinatorics", "Polyhedral combinatorics", "Topology", "Mathematical relations", "Topological graph theory" ]
63,215,116
https://en.wikipedia.org/wiki/Linda%20Zou
Linda Zou is a professor of Civil Infrastructure and Environmental Engineering at Khalifa University, Abu Dhabi, United Arab Emirates. Prof. Zou has received contributions to her work on nanotechnology to accelerate water condensation from the National University of Singapore and the University of Belgrade. Zou has developed a new aerosol material for use in cloud seeding: salt crystals coated in titanium dioxide nanoparticles. The technique developed by Prof. Zou was used in the January 2020 cloud seeding experiment in the UAE. References Living people Year of birth missing (living people) Environmental engineers Nanotechnologists Academic staff of Khalifa University
Linda Zou
[ "Chemistry", "Materials_science", "Engineering" ]
128
[ "Nanotechnology", "Nanotechnologists", "Environmental engineers", "Environmental engineering" ]
63,215,180
https://en.wikipedia.org/wiki/Andr%C3%A9e%20Marquet
Andrée Marquet (born 1934) is a French chemist specializing in organic chemistry and chemical biology, professor emeritus at the Pierre and Marie Curie University and a correspondent of the French Academy of Sciences since 1993. Biography Andrée Marquet studied engineering at the École nationale supérieure de chimie de Paris, then defended a thesis prepared at the Collège de France under the direction of Jean Jacques (1961), followed by a post-doctoral internship at ETH Zurich with Professor Duilio Arigoni. After a career at the CNRS, she was appointed professor at the Pierre and Marie Curie University (1978) and founded the organic biological chemistry laboratory there. She contributed, with a few others, to the development of this interface sub-discipline at the national level, which was still in its infancy, and created at UPMC adapted teaching courses where chemists and biochemists could meet. In addition to her work as a teacher-researcher, she has held various positions of general interest. Between 1984 and 1986, she chaired the organic chemistry division of the Société chimique de France, and from 1987 to 1991, the Société franco-japonaise de chimie fine et thérapeutique. She chaired section 20 of the CNRS National Committee (1991-1995) and was a member of the CNRS Scientific Council from 1992 to 1997. In 1998, she became Scientific Director of the Chemistry Department at the Research Department of the MENRT. Between 1999 and 2003, she was a member of the Board of Directors of the Palais de la Découverte, and between 2007 and 2008, she was a member of the Board of Directors of the MENRT. In 2011, she became a member of the Ethics Committee of the CNRS. In 2002, she founded the "Chemistry and Society" Commission within the Fondation de la Maison de la Chimie, of which she remained president until 2011. 
This commission seeks to analyse the origin of the misunderstanding that has developed between chemistry and society, and to contribute to the search for solutions by organising actions resolutely directed towards the general public. Research Andrée Marquet and her collaborators have been interested in reaction mechanisms in organic chemistry, in particular those involving carbanions (enolates, alpha anions of sulfoxides), and have used the results of these studies in synthesis, for example for the total synthesis of biotin. She then turned to mechanistic enzymology, applying the approach used in organic chemistry to the functioning of enzymes. The main areas covered are: steroid biochemistry: inhibition of the biosynthesis of aldosterone (among the various compounds synthesized and tested, 18-vinylprogesterone proved to be an excellent inhibitor of the cytochrome P450 involved in the last stage of this biosynthesis, making this molecule a potential hypotensor); the mechanism of action of vitamin K, an essential cofactor in the cascade of blood coagulation reactions; and the biotin biosynthesis pathway: the mechanism of several of the enzymes involved has been deciphered and various inhibitors have been designed and synthesized. A particularly difficult problem to which Andrée Marquet and her team have made a decisive contribution is the mechanism of biotin synthase, which catalyses the final step. They have shown that it belongs to the newly discovered family of (Fe-S) proteins dependent on S-adenosylmethionine, catalysing radical reactions. This family opens a new chapter in enzymology. Another field of activity of the laboratory, the result of a collaboration with the neurobiology laboratory of the Collège de France (Prof. Jacques Glowinski), concerns the activity of a family of peptide neurotransmitters, the tachykinins. Main publications A. Marquet, De l'arme chimique à l'agent thérapeutique. L'Actualité Chimique, 2014, N°391, XIII-XVIII. A. 
Marquet et Y. Jacquot. Faut-il avoir peur du Bisphenol A ? L'Actualité Chimique, 2013, N°378-379, 11-19. A. Marquet et B. Sillion, coordinateurs. Chimie et Société : Quel dialogue ? L'Actualité Chimique, 2011, N°355. A. Marquet, B. Tse Sum Bui, A. G. Smith, M. J. Warren. Iron-sulfur proteins as initiators of radical chemistry. Nat. Prod. Rep., 2007; 24: 1027-1040. M. Lotierzo, B. Tse Sum Bui, D. Florentin, F. Escalettes, A. Marquet. Biotin synthase mechanism: An overview. Biochemical Society Transactions, 2005, 33, 820-823. B. Rüdiger, B. Tse Sum Bui, V. Schünemann, D. Florentin, A. Marquet, A. S. Trautwein. Iron-sulfur clusters of biotin synthase in vivo: a Mössbauer study. Biochemistry, 2002, 41, 15000-15006. E. Davioud, A. Piffeteau, C. Delorme, S. Coustal, A. Marquet. 18-Vinyldeoxycorticosterone: a potent inhibitor of the bovine cytochrome P-45011b. Bioorganic and Medicinal Chemistry, 1998, 6, 1781-1788. A. Vidal-Cros, M. Gaudry, A. Marquet. Vitamin K dependent carboxylation. Mechanistic studies with 3-fluoroglutamate containing substrates. Biochemical Journal, 1990, 266, 749-755. G. Chassaing and A. Marquet. A 13C NMR study of the structure of sulfur-stabilized carbanions. Tetrahedron, 1978, 34, 1399. S. Lavielle, S. Bory, B. Moreau, M.J. Luche and A. Marquet. A total synthesis of biotin based on the stereoselective alkylation of sulfoxides. J. Am. Chem. Soc., 1978, 100, 1558. Honours and awards 1961: Eugène Schuëller Prize (ENSCP) 1971: Prize of the Organic Chemistry Division of the French Chemical Society 1986: La Caze Prize of the French Academy of Sciences and Berthelot Medal of the French Academy of Sciences 1988: CNRS silver medal 1993: Corresponding member of the French Academy of Sciences 1994: Achille Le Bel Grand Prize of the Chemical Society of France. 
2000: Officier of the Ordre national du Mérite 2012: Officier of the Ordre national de la Légion d'honneur 2018: Commandeur of the Ordre des Palmes Académiques References 1934 births 20th-century French chemists French women chemists Organic chemists French biochemists French National Centre for Scientific Research scientists Academic staff of Pierre and Marie Curie University Members of the French Academy of Sciences Officers of the Legion of Honour Living people Officers of the Ordre national du Mérite Commandeurs of the Ordre des Palmes Académiques 20th-century French women scientists 21st-century French chemists 21st-century French women scientists
Andrée Marquet
[ "Chemistry" ]
1,549
[ "Organic chemists", "French organic chemists" ]
63,216,140
https://en.wikipedia.org/wiki/Lamb%E2%80%93Chaplygin%20dipole
The Lamb–Chaplygin dipole model is a mathematical description for a particular inviscid and steady dipolar vortex flow. It is a non-trivial solution to the two-dimensional Euler equations. The model is named after Horace Lamb and Sergey Alexeyevich Chaplygin, who independently discovered this flow structure. This dipole is the two-dimensional analogue of Hill's spherical vortex. The model A two-dimensional (2D), solenoidal vector field v may be described by a scalar stream function ψ, via v = ∇ψ × ẑ, where ẑ is the right-handed unit vector perpendicular to the 2D plane. By definition, the stream function is related to the vorticity ω via a Poisson equation: ∇²ψ = −ω. The Lamb–Chaplygin model follows from demanding the following characteristics: The dipole has a circular atmosphere/separatrix with radius R: ψ(r = R) = 0. The dipole propagates through an otherwise irrotational fluid (ω = 0) at translation velocity U. The flow is steady in the co-moving frame of reference. Inside the atmosphere, there is a linear relation between the vorticity and the stream function: ω = k²ψ. The solution in cylindrical coordinates (r, θ), in the co-moving frame of reference, reads: ψ = −(2U / (k J0(kR))) J1(kr) sin θ for r ≤ R, and ψ = U (R²/r − r) sin θ for r > R, where J0 and J1 are the zeroth and first Bessel functions of the first kind, respectively. Further, the value of k is such that J1(kR) = 0, with kR ≈ 3.8317 the first non-trivial zero of the first Bessel function of the first kind. Usage and considerations Since the seminal work of P. Orlandi, the Lamb–Chaplygin vortex model has been a popular choice for numerical studies on vortex-environment interactions. The fact that it does not deform makes it a prime candidate for consistent flow initialization. A less favorable property is that the second derivative of the flow field at the dipole's edge is not continuous. Further, it serves as a framework for stability analysis of dipolar-vortex structures. References Fluid dynamics
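The first non-trivial zero of J1 mentioned above can be reproduced with standard-library Python only. A minimal sketch (power-series evaluation of J1 and a bisection root search, both standard numerical techniques rather than anything specific to this model):

```python
def j1(x):
    # Bessel function of the first kind of order one, from its power series:
    # J1(x) = sum_m (-1)^m / (m! (m+1)!) * (x/2)^(2m+1)
    term = x / 2.0
    total = 0.0
    for m in range(1, 40):
        total += term
        # ratio of consecutive series terms
        term *= -(x / 2.0) ** 2 / (m * (m + 1))
    return total

def first_j1_zero(lo=3.0, hi=4.5, tol=1e-12):
    # bisection: J1 is positive at 3.0 and negative at 4.5,
    # so the bracket contains the first non-trivial zero
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if j1(lo) * j1(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

k_R = first_j1_zero()   # the dipole condition: kR = first zero of J1, ~3.8317
```

For a unit-radius dipole (R = 1), this value is the wavenumber k that makes the interior solution match the irrotational exterior flow on the separatrix.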
Lamb–Chaplygin dipole
[ "Chemistry", "Engineering" ]
384
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
54,696,414
https://en.wikipedia.org/wiki/List%20of%20U.S.%20counties%20with%20longest%20life%20expectancy%20%282014%29
This list of U.S. counties with longest life expectancy includes 51 counties, and county equivalents, out of a grand total of 3,142 counties or county equivalents in the United States. Most of the counties where people live longest are either sparsely populated or well-to-do suburbs of large cities. Forty-seven of the counties listed have a population of which the largest racial component is non-Hispanic whites. Two have populations of which Hispanics are the majority. Asian Americans make up the largest component of two counties. Counties with the longest life expectancy are located in 21 states: Colorado (11); California and Iowa (5); Nebraska (4); North Dakota, Virginia, and Minnesota (3); Alaska, New York, and New Jersey (2); and Texas, New Mexico, Wyoming, Florida, Michigan, South Dakota, Idaho, Maryland, Utah, Wisconsin, and Oregon (one each). The residents of three adjacent counties in the high-elevation Rocky Mountains of Colorado have the longest life expectancy. Dynamics Among all the counties in the US, there is a wide range in life expectancy at birth. The residents of Summit County, Colorado, live the longest, with a life expectancy of 86.83 years. The residents of Oglala Lakota County (formerly Shannon County) of South Dakota live the shortest, with a life expectancy of 66.81 years—twenty years less. The gap between the counties with the longest life expectancy and the shortest is widening. US life expectancy increased by more than 5 years between 1980 and 2014. The life expectancy of most of the longest-lived counties equaled or exceeded that increase. The life expectancy of most of the shortest-lived counties increased less than 5 years—and in a few counties, especially in Kentucky, life expectancy decreased. A study published in the Journal of the American Medical Association in 2016 concluded that income was a major component of the difference in life expectancy among states, counties, races, and regions of the U.S. 
Men in the richest one percent of the population lived 15 years longer than men in the poorest one percent of the population and women in the richest one percent of the population lived 10 years longer. Top 51 counties in 2014 See also List of U.S. states and territories by life expectancy List of U.S. counties with shortest life expectancy List of U.S. states by changes in life expectancy, 1985–2010 List of U.S. congressional districts by life expectancy List of North American countries by life expectancy References Counties with longest life expectancy Life expectancy, longest Life expectancy, longest, counties United States
List of U.S. counties with longest life expectancy (2014)
[ "Biology" ]
551
[ "Senescence", "Life expectancy" ]
54,696,799
https://en.wikipedia.org/wiki/Gestadienol%20acetate
Gestadienol acetate (developmental code name CIBA-31458-Ba or CIBA-31458) is an orally active progestin which was described in the literature in 1967 and was never marketed. It has no androgenic or estrogenic effects. The effects of gestadienol acetate on the endometrium and its general pharmacology were studied in a clinical trial in women. It has also been studied in a clinical trial for benign prostatic hyperplasia in men, but was ineffective. Chemistry Gestadienol acetate, also known as norhydroxy-δ6-progesterone acetate, 6-dehydro-17α-hydroxy-19-norprogesterone 17α-acetate, or 17α-hydroxy-19-norpregna-4,6-diene-3,20-dione 17α-acetate, is a synthetic norpregnane steroid and a derivative of progesterone. It is specifically a combined derivative of 17α-hydroxyprogesterone and 19-norprogesterone, or of gestronol (17α-hydroxy-19-norprogesterone), with an acetate ester at the C17α position and a double bond between the C6 and C7 positions. Gestadienol acetate is the C17α acetate ester of gestadienol. Analogues of gestadienol acetate include algestone acetophenide (dihydroxyprogesterone acetophenide), demegestone, gestonorone caproate (norhydroxyprogesterone caproate), hydroxyprogesterone acetate, hydroxyprogesterone caproate, nomegestrol acetate, norgestomet, and segesterone acetate (nestorone). References Abandoned drugs Acetate esters Enones Norpregnanes Progestogen esters Progestogens
Gestadienol acetate
[ "Chemistry" ]
430
[ "Drug safety", "Abandoned drugs" ]
54,697,007
https://en.wikipedia.org/wiki/V830%20Tauri
V830 Tauri is a T Tauri star located away from the Sun in the constellation Taurus. This star is very young, with an age of only 2 million years, compared to the Sun's age of 4.6 billion years. As is typical for young stars, it exhibits strong flare activity, with three flares detected during a 91-day observation period in 2016. Characteristics V830 Tauri is an M-type star. The star has a mass of roughly 1 solar mass but a radius of 2 solar radii, because, owing to its youth, it has not yet fully contracted to become a main-sequence star. (It will likely spend about 10 billion years on the main-sequence portion of its lifetime, much like the Sun.) It has a surface temperature of . For comparison, the Sun's surface temperature is . V830 Tauri is a weak-lined T Tauri star, a pre-main-sequence star that has a surrounding disc producing emission lines in its spectrum. It is also classified as a BY Draconis variable: cool stars with starspots and chromospheric activity that vary in brightness as they rotate. The variable period of 2.74 days matches the rotation period. Planetary system On June 20, 2016, an exoplanet was found around V830 Tauri via radial velocity. It is one of the youngest, if not the youngest, exoplanets ever found, with an age of only about 2 million years. The exoplanet has a mass of about 0.77 Jupiter masses and is orbiting away from its host star with a period of and an inclination of . However, a 2020 study was unable to confirm this planet. V830 Tauri b orbits its parent star every 4.93 days at a distance of 0.057 AU. This is about seven times closer to the host star than the planet Mercury is to the Sun. Its mass is about 70% that of Jupiter, and, because it orbits very close to its parent star, it is classified as a hot Jupiter. 
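The "about 7x closer" comparison follows directly from the quoted semi-major axis; a quick check (Mercury's mean distance of 0.387 AU is standard data, not from this article):

```python
# Orbital distances in astronomical units
a_v830_tauri_b = 0.057   # quoted semi-major axis of V830 Tauri b
a_mercury = 0.387        # Mercury's semi-major axis (standard value)

ratio = a_mercury / a_v830_tauri_b
print(f"V830 Tauri b orbits about {ratio:.1f}x closer to its star than Mercury")
```

The ratio comes out near 6.8, consistent with the rounded figure in the text.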
Previously, before the discovery of V830 Tauri b (and a slightly older planet named K2-33b, with an age around 5-10 million years), TW Hya b was discovered and later disproven, and PTFO 8-8695 b / CVSO 30 b was discovered, with an equally young age and an even closer orbit; these objects are still pending confirmation. The discovery of V830 Tauri b, K2-33b and PTFO 8-8695 b / CVSO 30 b suggests that the formation and migration of close-in giant planets can occur on a timescale of only a few million years. The new discoveries support planet-disc interactions as the most likely mechanism for efficiently producing young hot Jupiters. Notes References Taurus (constellation) Pre-main-sequence stars BY Draconis variables T Tauri stars J04331003+2433433 IRAS catalogue objects M-type stars Hypothetical planetary systems Tauri, V830
V830 Tauri
[ "Astronomy" ]
642
[ "Taurus (constellation)", "Constellations" ]
54,697,349
https://en.wikipedia.org/wiki/Metacresol%20purple
Metacresol purple or m-cresol purple, also called m-cresolsulfonphthalein, is a triarylmethane dye and a pH indicator. It is used as a capnographic indicator for detecting end-tidal carbon dioxide to ensure successful tracheal intubation in an emergency. It can be used to measure the pH of saline or hypersaline media at subzero temperatures. In colorimetric capnography, the indicator is incorporated in an aqueous matrix whose pH is just above the indicator's colour-change range. When exposed to carbon dioxide (CO2), it undergoes a colour change from purple to yellow, because CO2 dissolving in the matrix forms carbonic acid. In chemistry, it has two useful indicator ranges: pH 1.2–2.8: red to yellow pH 7.4–9.0: yellow to purple See also Bromocresol purple References PH indicators Chemicals in medicine Triarylmethane dyes Phenol dyes Purple
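The two indicator ranges quoted above lend themselves to a small lookup; a hypothetical helper (the colour names and range endpoints come from the text, while the function itself is purely illustrative):

```python
def mcp_colour(ph):
    """Approximate colour of m-cresol purple at a given pH,
    based on its two indicator ranges (1.2-2.8 and 7.4-9.0)."""
    if ph < 1.2:
        return "red"
    if ph <= 2.8:
        return "red/yellow transition"
    if ph < 7.4:
        return "yellow"
    if ph <= 9.0:
        return "yellow/purple transition"
    return "purple"
```

Between the two ranges the indicator sits in its yellow form, which is why absorbed CO2 (an acid) turns the capnographic matrix from purple towards yellow.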
Metacresol purple
[ "Chemistry", "Materials_science" ]
222
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Organic compounds", "Equilibrium chemistry", "Medicinal chemistry", "Chemicals in medicine", "Organic compound stubs", "Organic chemistry stubs" ]
54,697,412
https://en.wikipedia.org/wiki/Simone%20Severini
Simone Severini is an Italian-born British computer scientist. He is currently Professor of Physics of Information at University College London, and Director of Quantum Computing at Amazon Web Services in Seattle. Work Severini worked in quantum information science and complex systems. Together with Adan Cabello and Andreas Winter, he defined a graph-theoretic framework for studying quantum contextuality, and together with Tomasz Konopka, Fotini Markopoulou, and Lee Smolin, he introduced a random graph model of spacetime called quantum graphity. In network theory, he co-introduced the Braunstein–Ghosh–Severini entropy, with applications to quantum gravity. He served as an editor of Philosophical Transactions of the Royal Society A. In 2015 he was the technical co-founder and one of the first scientific advisors of Cambridge Quantum Computing, with Béla Bollobás, Imre Leader, and Fernando Brandão. He co-founded Phasecraft in 2018 with Toby Cubitt, Ashley Montanaro, and John Morton. Publications References Year of birth missing (living people) Living people British physicists Quantum physicists Academics of University College London
Simone Severini
[ "Physics" ]
236
[ "Quantum physicists", "Quantum mechanics" ]
54,698,311
https://en.wikipedia.org/wiki/Video%20line%20selector
A video line selector is an electronic circuit or device for picking a line from an analog video signal. The input of the circuit is connected to an analog video source, and the output triggers an oscilloscope, so that the selected line is displayed on the oscilloscope or a similar device. Properties Video line selectors are built either as circuits or units inside other devices, fitted to the demands of that unit, or as separate devices for use in workshops, production, and laboratories. They contain analog and digital circuits and an internal or external DC power supply. There is a video signal input and sometimes a feed-through output to prevent reflections of the video signal, which would cause shadows in the video picture, as well as a trigger output. There is also an input or adjustment for the line number(s) to be picked out and, as an option, an automatic or manual setting to fit other video standards and non-interlaced video. Video line selectors do not need the whole picture signal; only the synchronisation signals are needed. Sometimes only inputs for H- and V-sync were installed. Setup The video signal input is 75 Ω terminated or connected to the video output for a monitor. The amplified video signal is connected to the inputs of the H- and V-sync detector circuits. The H-sync detector outputs the horizontal synchronisation pulse filtered from the video signal. This is the line synchronisation and makes the lines fit vertically. The V-sync detector filters out the vertical synchronisation and makes the picture fit the same position on the screen as the previous one. Both synchronisation pulses are fed to a digital synchronous counter. The V-sync resets the counter; the H-sync pulses are counted. On every frame, the counter is reset and the lines are counted. Most often interlaced video was used, splitting up a picture into the odd-numbered lines, followed by the even-numbered lines, in one half picture each (→ deinterlacing). 
Interlaced video requires a V-sync detector which detects the second scan of the interlaced frame. Some designs reset the counter and toggle an interlace bit; others ignore the sync after the odd-numbered lines and continue counting. Broadcast television systems around the world were based on a nearly identical monochrome video signal with only minor changes, whose number of lines can be covered by a 10-bit counter (2^9 < lines < 2^10, i.e. 512 < 576 < 1024). The digital comparator, fed by the line number preset and the counter, detects the logical equivalence of the two binary numbers as a match, which forms the output pulse of the video line selector. When fed to the trigger input of an oscilloscope whose test probe carries the video signal, this pulse makes the oscilloscope display the selected video line. A precision timer can then trigger on a single pixel or dot of the line. In order to simplify the digital part of the circuit, it is possible to load the preset line number into the counter and have it count down. When the counter reaches zero, the trigger output is set. A 10-input NOR gate is simpler than a 10-bit digital comparator, but evaluating several lines per picture is then no longer possible. By decreasing the line number by one, the carry bit of the counter can be used as the trigger output, replacing even the 10-input NOR gate. 
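The down-counting variant described above can be sketched in a few lines. A hypothetical simulation (the event encoding and function name are illustrative, not taken from any real device; it fires on counter-equals-zero rather than modelling the carry bit):

```python
def line_triggers(sync_events, preset):
    """Simulate the simplified video line selector: the preset line
    number is loaded into a down-counter on every V-sync, each H-sync
    decrements it, and reaching zero fires the trigger output."""
    triggers = []
    counter = None
    line = 0
    for event in sync_events:
        if event == "V":              # vertical sync: reload the counter
            counter = preset
            line = 0
        elif event == "H" and counter is not None:
            line += 1                 # horizontal sync: next line begins
            counter -= 1
            if counter == 0:          # match: fire the trigger for this line
                triggers.append(line)
    return triggers

# two 576-line fields with the selector set to line 23
events = (["V"] + ["H"] * 576) * 2
assert line_triggers(events, 23) == [23, 23]
```

As in the hardware description, one trigger pulse is produced per field, at the same selected line each time.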
Applications Video line selection was used in laboratories, production, and workshops, for example (selection only): focussing CCD sensors in cameras (across all areas of the picture); analyzing a television signal for quality and troubleshooting video devices. Monitoring television picture content: decoding teletext; decoding Channel Videodat, a former television service in Germany broadcasting software and data over television; restoring data, as with "ArVid", which used a videocassette recorder for data storage. For modifying television signals: writing teletext output to the screen; merging on-screen displays, logos, or text into the television picture. As a precise optical sensor: using a camera as an optical sensor for analyzing a captured picture in automation; using a camera as a line sensor; using a camera as a vertically selective line sensor. See also component video, HD-MAC, back porch References External links Video Line Selector Circuit and documentation at elm-chan.org 12 April 2002 (German) PAL Line-Selector at controller-designs.de Electronic circuits Electronic test equipment Television technology Signal processing
Video line selector
[ "Technology", "Engineering" ]
906
[ "Information and communications technology", "Telecommunications engineering", "Computer engineering", "Signal processing", "Television technology", "Electronic circuits", "Electronic test equipment", "Measuring instruments", "Electronic engineering" ]
54,699,202
https://en.wikipedia.org/wiki/NGC%207085
NGC 7085 is a spiral galaxy located about 365 million light-years away in the constellation of Pegasus. NGC 7085 was discovered by astronomer Albert Marth on August 3, 1864. See also List of NGC objects (7001–7840) References External links Spiral galaxies Pegasus (constellation) 7085 66926 Astronomical objects discovered in 1864
NGC 7085
[ "Astronomy" ]
71
[ "Pegasus (constellation)", "Constellations" ]
54,700,310
https://en.wikipedia.org/wiki/NGC%207087
NGC 7087 is a barred spiral galaxy located about 215 million light-years away in the constellation of Grus. NGC 7087 was discovered by astronomer John Herschel on September 4, 1834. NGC 7087 is a member of a group of galaxies known as the NGC 7087 group. See also NGC 1300 References External links Barred spiral galaxies Grus (constellation) 7087 66988 Astronomical objects discovered in 1834
NGC 7087
[ "Astronomy" ]
91
[ "Grus (constellation)", "Constellations" ]
54,700,471
https://en.wikipedia.org/wiki/Fink%20truss
The Fink truss is a commonly used truss in residential homes and bridge architecture. It originated as a bridge truss although its current use in bridges is rare. History The Fink Truss Bridge was patented by Albert Fink in 1854. Albert Fink designed his truss bridges for several American railroads, especially the Baltimore and Ohio and the Louisville and Nashville. The 1865 Annual Report of the President and Directors of the Louisville and Nashville Railroad Company lists 29 Fink Truss bridges out of a total of 66 bridges on the railroad. The first Fink Truss bridge was built by the Baltimore and Ohio Railroad in 1852 to span the Monongahela River at Fairmont, Virginia (now West Virginia). It consisted of three spans, each 205 feet long. It was the longest iron railroad bridge in the United States at the time. Several other Fink trusses held world records for their time, including the Green River Bridge (c. 1858) carrying the Louisville and Nashville Railroad over its namesake river near Munfordville, Kentucky, and the first bridge to span the Ohio River, which included a 396-foot span built between 1868 and 1870. Although the design is no longer used for major structures, it was widely used from 1854 through 1875. Design It is identified by the presence of multiple diagonal members projecting down from the top of the end posts at a variety of angles. These diagonal members extend to the bottom of each of the vertical members of the truss, with the longest diagonal extending to the center vertical member. Many Fink trusses do not include a lower chord (the lowest horizontal member). This gives the bridge an unfinished saw-toothed appearance when viewed from the side or below, and makes the design very easy to identify. If the bridge deck is carried along the bottom of the truss (called a through truss) or if a lightweight lower chord is present, identification is made solely by the multiple diagonal members emanating from the end post tops.
An Inverted Fink Truss has a bottom chord without a top chord. Notable examples Only two Fink Truss bridges remain intact in the United States. Neither bridge is in its original location. The Zoarville Station Bridge consists of one of the original three spans of a through truss of Fink design built in 1868 by Smith, Latrobe and Company of Baltimore, Maryland. It originally carried Factory Street over the Tuscarawas River in Tuscarawas County, Ohio. In 1905 one span of the structure was relocated to Conotton Creek, where it is now a pedestrian-only crossing. It is listed on the National Register of Historic Places, documented by the Historic American Engineering Record and carries the Zoar Valley Trail, the intrastate Buckeye Trail, and the interstate North Country Trail. A 56-foot-long single-span deck truss of Fink design was built in 1870 to carry trains of the Atlantic, Mississippi and Ohio Railroad (later Norfolk and Western Railway, now Norfolk Southern Railway). The original location of this structure is unknown. In 1893 it was relocated to carry Old Forest Road over the Norfolk and Western in Lynchburg, Virginia, and in 1985 the structure was again relocated to Riverside Park in the City of Lynchburg to preserve the historic structure for future generations. It now carries pedestrians only. A third bridge, the Fink-Type Truss Bridge, survived in Clinton Township, New Jersey until it was destroyed by a traffic accident in 1978. Current use Fink design trusses are used today for pedestrian bridges and as roof trusses in building construction in an inverted (upside down) form where the lower chord is present and a central upward projecting vertical member and attached diagonals provide the bases for roofing. References Bridge design Truss bridges
Fink truss
[ "Engineering" ]
743
[ "Structural engineering", "Bridge design", "Architecture" ]
54,702,492
https://en.wikipedia.org/wiki/Outer%20Solar%20System%20Origins%20Survey
The Outer Solar System Origins Survey (OSSOS) is an astronomical survey and observing program aimed at discovering and tracking trans-Neptunian objects located in the outermost regions of the Solar System beyond the orbit of Neptune. OSSOS is designed in such a way that observational biases can be characterized, allowing the numbers and orbits of detected objects to be compared using a survey simulator to the populations predicted in dynamical simulations of the emplacement of trans-Neptunian objects. Conducted at the Canada-France-Hawaii telescope at Mauna Kea Observatories in Hawaii, the survey has discovered 39 numbered objects as of 2018, with potentially hundreds more to follow. The survey's first numbered discovery was the object in 2013. Description OSSOS observed eight blocks of the sky over a period of five years from 2013–2017 using the MegaPrime camera of the 3.6-meter Canada-France-Hawaii Telescope. Images of these blocks were taken near opposition (when the block is nearly opposite the Sun), two months before, and two months after. This extended period of observation was designed to remove ephemeris bias, which can cause the loss of some objects due to inaccurate predictions of their future positions. Pointing directions, detection efficiencies, and tracking frequencies were determined to allow other observational biases to be identified. These identified biases are used by the survey simulator developed by the OSSOS group. This survey simulator can estimate the populations of detected objects, for example those in resonances, and set upper limits for the classes of objects not detected. The survey simulator can also predict the number of objects that would be detected by OSSOS given the output of dynamical models of the early Solar System, allowing the models to be statistically tested. OSSOS has detected 838 objects, bringing the total number of objects detected by well-characterized surveys to more than 1100.
Among these objects are a possible dwarf planet in a 9:2 resonance with Neptune, and two objects in a 9:1 resonance with Neptune. Other resonant objects have been detected and their populations estimated. A previously identified 'kernel' in the cold classical Kuiper belt has been confirmed and other cold classical objects beyond the 2:1 resonance with Neptune have been identified. OSSOS detected 3 potential members of the Haumea family, but none of these were faint, indicating that the family has a shallow size distribution. Analysis of the size distribution of the scattering population revealed a break in its slope. The inclination distribution of these scattering objects included more objects with inclinations greater than 45 degrees than predicted using simulations that included only the known planets and the influence of the galaxy, but also fewer with inclinations between 15 and 30 degrees than predicted when Planet Nine was added to the simulations. Extreme trans-Neptunian objects (eTNOs) have been found, including one with a semi-major axis of 730 AU, and seven other objects with semi-major axes greater than 150 AU and perihelia greater than 30 AU. After accounting for OSSOS's known biases the orbital elements of these objects are consistent with a uniformly distributed population. Four scattered disk objects with high perihelia have been detected with semi-major axes smaller than nearby resonances, consistent with their escape during a slow grainy migration of Neptune. Closer to the Sun, 20 centaurs were found, none of which were active. The number of centaurs detected and their inclination distribution were consistent with a model of the early Solar System that included a slow, long range migration of Neptune. 65 of the smaller objects discovered by OSSOS were later observed using the Subaru telescope to determine the variability of their brightness. Operating in conjunction with OSSOS is the Colours of the Outer Solar System Origins Survey (Col-OSSOS).
Col-OSSOS observes OSSOS objects with red magnitudes brighter than 23.5 simultaneously using the Gemini-North and Canada-France-Hawaii telescopes. The simultaneous observation allows the colors of these objects to be measured more accurately by removing variations in their brightness due to the rotation of the objects and changes in atmospheric conditions. These observations have revealed three surface types among the TNOs, and have identified numerous binaries including loosely bound neutrally colored 'blue binaries' that could have been pushed out into their current orbits during Neptune's migration. Among the dynamically excited populations the ratio of neutral to red objects has been estimated to be between 4:1 and 11:1. The inclination distributions were found to vary with color, with the red objects having lower inclinations. The Col-OSSOS team has also measured the color and light curve of ʻOumuamua. Team Core members The core members of the Outer Solar System Origin Survey are: Brett J. Gladman – co-principal investigator, orbit analysis John J. Kavelaars – co-principal investigator, data, discovery Jean-Marc Petit – co-principal investigator, orbit analysis, survey simulator Michele Bannister – data, discovery, telescope operations Stephen Gwyn – astrometric catalogue Kat Volk – orbit classification Ying-Tung (Charles) Chen – data analysis Mike Alexandersen – survey cadence & design Collaborators Collaborators of the Outer Solar System Origin Survey are: Andrew C. Becker Susan D. Benecchi (née Kern) Federica Bianco Steven Bickerton Ramon Brasser Audrey C.
Delsanti Wesley Fraser Mikael Granvik Will Grundy Aurelie Guilbert-Lepoutre Amanda Sickafoose Gulbis Daniel Hestroffer Wing Ip Marian Jakubik Lynne Jones Nathan Kaib Pavlo Korsun Simon Krughoff Irina Kulyk Pedro Lacerda Sam Lawler Matthew Lehner Edward Lin Tim Lister Patryk Lykawka Ruth Murray-Clay Keith Noll Alex Parker Nuno Peixinho Rosemary Pike Philippe Rousselot Megan Schwamb Cory Shankman Bruno Sicardy Scott Tremaine Pierre Vernazza Shiang-Yu Wang List of numbered minor planets discovered by OSSOS See also List of trans-Neptunian objects References External links Outer Solar System Origins Survey, website; SETI Talks, Michele Bannister Michele Bannister, at the Astronomy Research Centre (ARC) Astronomical surveys Astronomical discoveries by institution
Outer Solar System Origins Survey
[ "Astronomy" ]
1,289
[ "Astronomical surveys", "Astronomical objects", "Works about astronomy" ]
54,704,896
https://en.wikipedia.org/wiki/Current%20HIV/AIDS%20Reports
Current HIV/AIDS Reports is a quarterly peer-reviewed medical review journal covering HIV/AIDS. It was established in 2004 and is published by Springer Science+Business Media. The editor-in-chief is Paul Volberding (University of California, San Francisco). According to the Journal Citation Reports, the journal has a 2018 impact factor of 4.382. References External links HIV/AIDS journals Review journals Quarterly journals Academic journals established in 2004 Springer Science+Business Media academic journals English-language journals
Current HIV/AIDS Reports
[ "Biology" ]
103
[ "Virus stubs", "Viruses" ]
54,706,443
https://en.wikipedia.org/wiki/Gun%20Violence%20Archive
Gun Violence Archive (GVA) is an American nonprofit group with an accompanying website and social media delivery platforms which seeks to catalog every incident of gun violence in the United States. It was founded by Michael Klein and Mark Bryant. Klein is the founder of the Sunlight Foundation, and Bryant is a retired systems analyst. History GVA was established in 2013; its data collection began in 2014 and is ongoing. It provides gun violence data and statistics. Perceived gaps in both CDC and FBI data, as well as their lagging distribution, are among the reasons GVA offers independent data collection. GVA typically publishes incidents in its database within 3 days, whereas government agencies like the FBI may take months or even years. GVA maintains a database of known shootings in the United States, coming from law enforcement, media and government sources in all 50 states. See also Firearm death rates in the United States by state References External links Gun violence in the United States Internet properties established in 2013 Online databases Gun violence Gun politics Violence
Gun Violence Archive
[ "Biology" ]
205
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
54,706,548
https://en.wikipedia.org/wiki/NGC%207095
NGC 7095 is a barred spiral galaxy located about 115 million light-years away in the constellation of Octans. NGC 7095 was discovered by astronomer John Herschel on September 21, 1837. References External links Barred spiral galaxies Octans 7095 67546 Astronomical objects discovered in 1837
NGC 7095
[ "Astronomy" ]
62
[ "Octans", "Constellations" ]
54,709,463
https://en.wikipedia.org/wiki/Periodic%20counter-current%20chromatography
Periodic counter-current chromatography (PCC) is a method for running affinity chromatography in a quasi-continuous manner. Today, the process is mainly employed for the purification of antibodies in the biopharmaceutical industry as well as in research and development. When purifying antibodies, protein A is used as the affinity matrix. However, periodic counter-current processes can be applied to any affinity-type chromatography. Basic principle In conventional affinity chromatography, a single chromatography column is loaded with feed material up to the point at which the target material (product) can no longer be retained by the affinity material. The resin with the adsorbed product on it is then washed to remove impurities. Finally, the pure product is eluted with a different buffer. Notably, if too much feed material is loaded onto the column, the product can break through and is consequently lost. Therefore, it is very important to load the column only partially to maximize the yield. Periodic counter-current chromatography avoids this problem by utilizing more than one column. PCC processes can be run with any number of columns, starting from two. The following paragraph will explain a two-column version of PCC, but other protocols with more columns rely on the same principles (see below). A diagram depicting the individual process steps is shown on the right. In Step 1, the so-called sequential loading phase, columns 1 and 2 are interconnected. Column 1 is fully loaded with sample (red) while its breakthrough is captured on column 2. In Step 2, column 1 is washed, eluted, cleaned and re-equilibrated while loading separately continues on column 2. In Step 3, after regeneration of column 1, the columns are again inter-connected and column 2 is fully loaded while its breakthrough is captured on column 1. Finally, in Step 4 column 2 is washed, eluted, cleaned and re-equilibrated while loading continues independently on column 1.
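The four-step, two-column cycle described above can be sketched as a simple alternating schedule. This is an illustrative Python sketch only; the column names, tuple layout, and function name are assumptions made for the example, not part of any PCC control software.

```python
# Minimal sketch of the two-column PCC cycle (steps 1-4 described above).
# Each half-cycle, one column is fully loaded with its breakthrough caught
# on the other; then it is washed/eluted/cleaned/re-equilibrated while the
# other column continues loading alone. Roles then swap.

def pcc_cycle(columns=("col1", "col2"), n_cycles=2):
    """Return a list of (step, loading_column, breakthrough_capture, regenerating)."""
    schedule = []
    for _ in range(n_cycles):
        a, b = columns
        # Steps 1/3: interconnected loading, breakthrough of a captured on b
        schedule.append(("interconnected load", a, b, None))
        # Steps 2/4: a is washed, eluted, cleaned, re-equilibrated; b loads alone
        schedule.append(("single-column load", b, None, a))
        columns = (b, a)  # columns swap roles for the next half-cycle
    return schedule

for step in pcc_cycle():
    print(step)
```

The role swap at the end of each half-cycle is what makes the process quasi-continuous: feed is always being loaded onto some column while the other regenerates.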
This cyclic process is repeated in a continuous way. Several variations of periodic counter-current chromatography with more than two columns exist. In these cases, additional columns are either placed within the feed stream during loading, which has the same effect as using longer columns, or kept in an unoccupied stand-by mode during loading. The stand-by mode offers additional assurance that the main process is not influenced by washing and cleaning protocols, albeit in practice this is rarely required. On the other hand, the underutilized columns reduce the theoretical maximum productivity for such processes. Generally, the advantages and disadvantages of different multi-column protocols are the subject of debate. However, without a doubt, compared to single-column batch processes, periodic counter-current processes provide significantly increased productivity. Dynamic process control On the time scale of continuous chromatography runs, it is fairly common to observe changes in important process parameters, such as column health, buffer quality, feed titer (concentration) or feed composition. Such changes result in an altered maximum column capacity, relative to the amount of loaded feed material. In order to achieve a steady quality and yield for each process cycle, the timing of the individual process steps therefore has to be adjusted. Manual changes are in principle conceivable, but rather impractical. More commonly, dynamic process control algorithms monitor the process parameters and apply changes as needed automatically. There are two different operating modes for dynamic process controllers in use today (see Figure on the right). The first one, called DeltaUV, monitors the difference between two signals from detectors situated before and after the first column. During initial loading, there is a large difference between the two signals, but it diminishes as the impurities make their way through the column.
Once the column is fully saturated with impurities and only additional product is being held back, the difference between the signals reaches a constant value. As long as the product is being completely captured on the column, the difference between the signals will remain constant. As soon as some of the product breaks through the column (compare above), the difference diminishes. Thus, the timing and amount of product breakthrough can be determined. The second possibility, called AutomAb, requires only the signal of a single detector situated behind the first column. During initial loading, the signal increases, as more and more impurities make their way through the column. When the column is saturated with impurities and as long as the product is being completely captured on the column, the signal then remains constant. As soon as some of the product breaks through the column (compare above), the signal increases again. Thus, the timing and amount of product breakthrough can again be determined. Both iterations work equally well in theory. In practice, the requirement for two synced signals and the exposure of one detector to unpurified feed material make the DeltaUV approach less reliable than AutomAb. Commercial situation As of 2017, GE Healthcare holds patents around three-column periodic counter-current chromatography: this technology is used in their Äkta PCC instrument. Likewise, ChromaCon holds patents for an optimized two-column version (CaptureSMB). CaptureSMB is used in ChromaCon's Contichrom CUBE and under license in YMC's Ecoprime Twin systems. Additional manufacturers of systems capable of periodic counter-current chromatography include Novasep and Pall. References Chromatography
Periodic counter-current chromatography
[ "Chemistry" ]
1,126
[ "Chromatography", "Separation processes" ]
54,709,700
https://en.wikipedia.org/wiki/Oil%20purification
Oil purification (transformer, turbine, industrial, etc.) removes oil contaminants in order to prolong oil service life. Contaminants of industrial oils Contaminants and various impurities get into industrial oils during storage and operation. The most common contaminants are: water; solid particles (like soot and dirt); gases; asphalt-resinous paraffin deposits; acids; oil sludge; organometallic compounds; unsaturated hydrocarbons; polyaromatic hydrocarbons; additive remains; products of oil decomposition. Methods of oil purification Industrial oils are purified through sedimentation, filtration, centrifugation, vacuum treatment and adsorption purification. Sedimentation is precipitation of solid particles and water to the bottom of oil tanks under gravity. The main drawback of this process is the long time it takes. Filtration is a partial removal of solid particles through a filter medium. Oil filtration systems generally use a multistage filtration with coarse and fine filters. Centrifugation is separation of oil and water, or oil and solid particles, by centrifugal forces. Vacuum treatment degasses and dehydrates industrial oil. This method is well suited for removing dispersed and dissolved water, as well as dissolved gases. Adsorption purification, in contrast to the methods mentioned above, does not remove solid particles and gases, but it shows good results at removing water, oil sludge and aging products. This process uses adsorbents of natural or artificial origin: bleaching clays, synthetic aluminosilicates, silica gels, zeolites, etc. The difference between purification and regeneration of industrial oil Often the terms "oil purification" and "oil regeneration" are used synonymously, although in fact they are not the same. Oil purification removes contaminants from oil. It can be used independently or as a part of oil regeneration. Oil regeneration also removes aging products (with the help of adsorbents) and stabilizes oil with additives.
Regenerated oil is free of carcinogenic oil-aging products and stabilized with additives. References Oils Recycling
Oil purification
[ "Chemistry" ]
451
[ "Oils", "Carbohydrates" ]
54,711,583
https://en.wikipedia.org/wiki/Polaribacter
Polaribacter is a genus in the family Flavobacteriaceae. They are gram-negative, aerobic bacteria that can be heterotrophic, psychrophilic or mesophilic. Most species are non-motile and species range from ovoid to rod-shaped. Polaribacter forms yellow- to orange-pigmented colonies. They are mostly adapted to cool marine ecosystems, and their optimal growth range is at a temperature between 10 and 32 °C and at a pH of 7.0 to 8.0. They are oxidase- and catalase-positive and are able to grow using carbohydrates, amino acids, and organic acids. There is evidence of two life strategies for members of the genus Polaribacter. Some Polaribacter species are free-living, consume amino acids and carbohydrates, and possess proteorhodopsin, which enhances survival in oligotrophic seawaters. Other species of Polaribacter attach to substrates in search of protein polymers. In the context of climate change, algal blooms are becoming increasingly prevalent. Members of the genus Polaribacter decompose algal cells and thus may be important in biogeochemical cycling, as well as influence seawater chemistry and the composition of microbial communities as temperatures continue to rise. This may impact the efficiency of the biological pump in sequestering atmospheric carbon. Polaribacter is a genus that is being continuously researched and to date there are 25 species that have been validly published under the International Code of Nomenclature of Prokaryotes (ICNP): P. aquimarinus, P. atrinae, P. butkevichii, P. dokdonensis, P. filamentus, P. franzmannii, P. gangjinensis, P. glomeratus, P. haliotis, P. huanghezhanensis, P. insulae, P. irgensii, P. lacunae, P. litorisediminis, P. marinaquae, P. marinivivus, P. pacificus, P. porphyrae, P. reichenbachii, P. sejongensis, P. septentrionalilitoris, P. staleyi, P. tangerinus, P. undariae, P. vadi. The genus is sometimes incorrectly referred to as Polaribacer, Polarobacter, or Polaribacteria.
Phylogeny This phylogeny is based on rRNA gene sequencing. Distribution and abundance Members in the genus Polaribacter are abundant in polar oceans and are important in the export of dissolved organic matter (DOM). A small percentage of the bacterial community is responsible for the DOM uptake rate. In northern latitude waters, the fraction of cells using glucose (fraction of active cells) is higher in summer than winter, and high abundances may occur after phytoplankton blooms, although a study in southern high-latitude waters found lower abundances of Polaribacter after an in situ diatom bloom. Within the Arctic Ocean, there is no obvious pattern in the relative abundance between summer and winter. In the Chukchi Sea, the fraction of cells using leucine is higher in the winter than in summer. In the Beaufort Sea, the fraction of cells using leucine does not differ between seasons. In the coastal waters of Fildes Peninsula, Polaribacter dominated cells in the phylum Bacteroidetes. Habitat Microorganisms in the genus Polaribacter are widely distributed and various species are capable of living in a plethora of different environments. Some Polaribacter species have been isolated from brine pools in the Arctic Ocean. In addition to hypersaline environments, numerous Polaribacter species inhabit extreme environments ranging from -20 °C to 22 °C. In the past, it was thought that Polaribacter only flourished in cold waters, as the members of the species that were first discovered (P. irgensii, P. filamentus, and P. franzmannii) in the Arctic and Southern Oceans could only survive in water with temperatures ranging from -20 °C to 10 °C. Subsequently, members of the genus Polaribacter have been shown to be very versatile microorganisms and can survive in oligotrophic and in copiotrophic environments. Polaribacter have also been found in sediments.
For example, SM1202T, a strain phylogenetically close to Polaribacter, was isolated from marine sediment in Kongsfjorden, Svalbard. Polaribacter have also been experimentally isolated from red macroalgae (Porphyra yezoensis) and green macroalgae (Ulva fenestrata). Role in ecosystem Isolates of related Flavobacteria are able to degrade high-molecular-weight (HMW) DOM, and Polaribacter may be among the first organisms to degrade particulate organic matter and break down polymers into smaller particles that can be used by free-living bacterial heterotrophs. This suggests that they likely remineralize primary production matter within the food web. In the Southern Ocean The Antarctic Peninsula exhibits strong seasonal changes, which influences how bacteria respond to and live in these environmental conditions. The Antarctic spring is especially important as it brings about significant changes, including sea ice melting, thermal stratification due to warming surface waters, and increased dissolved organic matter (DOM) production. All these physical changes also result in phytoplankton blooms, which are important in supporting higher trophic levels. In the Southern Ocean, flavobacteria dominate bacterial activity, particularly flavobacteria in the genus Polaribacter. Typically, these bacteria are prevalent in sea ice; however, during seasonal melting in the summer, they dominate coastal waters as sea ice retreats. In the Southern Ocean, when phytoplankton blooms occur, Flavobacteria, and particularly members in the genus Polaribacter, are among the first bacterial taxa to respond, breaking down organic matter by direct attachment and the use of exoenzymes. Both particle-attached and free-living members of the family Rhodobacteraceae were also found in close association with phytoplankton blooms; however, bacteria in this family were found to use lower-molecular-weight substrates.
This suggests that they are secondary in the microbial succession of substrates, using the byproducts of degradation by flavobacteria, which also include members of the genus Polaribacter. The relative abundance of free-living bacteria belonging to the genus Polaribacter and in the family Rhodobacteraceae peaked at different points during phytoplankton blooms, suggesting a niche specialization contributing to successive degradation of phytoplankton-derived organic matter. Bacteria in the genus Polaribacter and family Rhodobacteraceae were found in clusters, with Polaribacter clusters forming earlier in the bloom, which further suggests a successive ecological interaction between various bacterial taxa. For both the Arctic Ocean and the North Sea, Polaribacter exhibited similar trends pertaining to phytoplankton blooms in the summertime as well as assuming particular niches for organic matter degradation. Metabolism Members of the genus Polaribacter are metabolically flexible depending on their physiology, lifestyle and the seasonality of the region they inhabit. Many research studies have found that Polaribacter can alternate between two lifestyles as a mechanism for adaptation in surface waters where nutrient concentrations are low and light exposure is high. Sequenced strains of the genus Polaribacter show a high prevalence of peptidase and glycoside hydrolase genes in comparison to other bacteria in the Flavobacteriaceae, indicating they contribute to degradation and uptake of external proteins and oligopeptides. In the pelagic water column, some species are well equipped to attach to particles and substrates to search for and degrade polymers. They are amongst the first organisms to degrade particulate organic matter and break down polymers into smaller particles. Studies have shown that they will colonize and attach to particles, glide to search for substrates, and degrade them for carbon and nutrients.
Once they have degraded these molecules, the bacteria may search for new particles to colonize, swimming freely through environments where nutrients and organic carbon are not easily available. CAZymes Genetic sequencing found that strains contain numerous genes which encode CAZymes involved in polysaccharide degradation. For example, strain DSW-5 (a strain genetically very similar to strain MED-152) contains 85 genes encoding CAZymes and 203 peptidases, which suggests its role as a free-living heterotroph. However, the ratio of peptidases to glycoside hydrolase genes varies depending on the environmental conditions the strain is subjected to. For example, Polaribacter sp. MED134 lives under extended starvation conditions and expresses twice as many peptidases as CAZymes. On the other hand, macroalgae-colonizing species that live in stable, eutrophic environments may express greater proportions of CAZymes than peptidases. Proteorhodopsin "Free-living" species have the proteorhodopsin gene, which allows them to complete inorganic-carbon fixation using light as an energy source. By utilizing their proteorhodopsin to use light energy, Polaribacter can grow in oligotrophic environmental conditions. Genome General genome characteristics The genomes of bacteria in the genus Polaribacter vary in size from 2.76 Mb (P. irgensii) to 4.10 Mb (P. reichenbachii), with the number of genes ranging from 2446 in P. irgensii to 3500 in P. reichenbachii, but have a fairly constant G+C content of approximately 30 mol%. Some notable features of the genome include genes for agar-, alginate-, and carrageenan-degrading enzymes in Polaribacter species which colonize the surface of macroalgae. Agar-degrading enzymes have also been found in strains of Polaribacter that colonize the gut of the comb pen shell.
Proteases are also commonly found in the genomes of species that preferentially grow on solid substrates and degrade protein instead of using free amino acids and living a pelagic lifestyle. Some members of the genus encode proteorhodopsin, which has been implicated in supporting their central metabolism through photophosphorylation. DNA sequencing of Polaribacter DNA sequencing has commonly been used to identify new strains of Polaribacter and help place species on a phylogenetic tree. DNA sequencing has also been used to help understand or predict a species' role in an environment due to the presence of certain genes. Members of the family Flavobacteriaceae can be identified through the specific quinone, Menaquinone 6, also known as Vitamin K2; however, differentiating species can be much more difficult. Species such as Polaribacter vadi and Polaribacter atrinae were identified as new species based on their similar but unique genomes when compared to other members of the genus Polaribacter. New species can be identified through DNA hybridization or through the sequencing and comparison of a common gene such as 16S rRNA. This has allowed scientists to create phylogenetic trees of the genus based on genomic similarity, as seen in the phylogeny section, as well as identify common features in the genome. Life strategies of Polaribacter based on genome analysis Genomic analysis has allowed scientists to examine the relationships between different species of Polaribacter. However, by combining genomic analysis with other analytical techniques, such as chemotaxonomic and biochemical analyses, scientists can theorize how a species might fit into an environment or how they believe a species is adapted to survive. A genomic analysis of the Polaribacter strain MED152 found a considerable number of genes that allow for surface or particle attachment, gliding motility and polymer degradation.
These genes fit with the current understanding of how marine bacteroidetes survive: by attaching to a surface and moving over it to look for nutrients. However, the researchers also noticed that the organism had a proteorhodopsin gene, as well as other genes that could be used to sense light, and found that under light the species increased carbon dioxide fixation. This led the researchers to theorize that Polaribacter strain MED152 has two different life strategies: one in which it acts like other marine bacteroidetes, attaching to surfaces and searching for nutrients, and another in which, in a well-lit, low-nutrient area of the ocean, it uses carbon fixation to synthesize intermediates of metabolic pathways. Another example of this comes from the Polaribacter strains Hel1_33_49 and Hel1_85. The strain Hel1_33_49 has a genome which contains proteorhodopsin, fewer polysaccharide utilization loci, and no mannitol dehydrogenase, which the researchers associate with a pelagic lifestyle. Hel1_85, on the other hand, has a genome which contains twice as many polysaccharide utilization loci, a mannitol dehydrogenase, and no proteorhodopsin, pointing to a lifestyle with lower oxygen availability, such as in a biofilm.

Species

Viral pathogens

Only two species of lytic phage are known to infect members of this genus, and both have double-stranded DNA with virions that include isometric heads and non-contractile tails (class Caudoviricetes, morphotype: siphoviruses). Viral lysis has been implicated as a major driver of changes in the genus-level composition of microbial communities.

Applications/uses

Cold-water enzymes contained in psychrophilic bacteria like Polaribacter are valuable for biotechnology applications since they do not require the high temperatures that many other enzyme systems do.

Psychrophilic enzymes

Polaribacter is a psychrophilic bacterium that lends itself to a variety of applications in both academic and industrial settings.
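The temperature advantage of cold-adapted enzymes can be illustrated with the Arrhenius relation: a lower activation energy means less activity is lost on cooling. A minimal sketch, where the two activation energies (30 vs 60 kJ/mol) are assumed illustrative values for a cold-adapted enzyme and a mesophilic counterpart, not measured data for Polaribacter:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def arrhenius_ratio(ea_j_mol, t_cold=278.15, t_warm=298.15):
    """Reaction rate at t_cold (5 °C) relative to the rate at t_warm (25 °C),
    for a given activation energy Ea, from k = A·exp(-Ea/RT)."""
    return math.exp(-ea_j_mol / R * (1 / t_cold - 1 / t_warm))

# Assumed activation energies for illustration only:
# 30 kJ/mol (cold-adapted enzyme) vs 60 kJ/mol (mesophilic counterpart).
print(arrhenius_ratio(30_000))  # cold-adapted enzyme keeps a larger fraction
print(arrhenius_ratio(60_000))  # of its 25 °C activity when cooled to 5 °C
```

Under these assumed values, the low-Ea enzyme retains roughly twice the fraction of its warm-temperature activity, which is the qualitative point made about flexible psychrophilic enzymes below.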
These cold-dwelling bacteria are an abundant source of psychrophilic enzymes, which have the notable ability to retain high catalytic activity at temperatures below 25 °C. This is due to the highly flexible nature of these enzymes, which allows for better binding between substrate and active site at colder temperatures. This is important because enzymes that operate at lower temperatures not only make industrial processes more efficient, but also minimize the chance of side reactions occurring: more of the substrate can be converted directly into the desired product, all while requiring less energy. Psychrophilic enzymes can also help in processing heat-labile or volatile compounds, allowing reactions to occur without significant product loss. Another unique feature of these enzymes is that they can be inactivated without the need for external reagents. Usually, chemical inhibitors are required to stop enzyme activity, which then necessitate subsequent purification steps; with psychrophilic enzymes, slight heating is enough to prevent any further reaction. Psychrophilic proteases derived from Polaribacter can be added to detergents, allowing fabric to be washed at room temperature. Another example is the enzyme carrageenase, which has been shown to have anti-tumor, antiviral, antioxidant, and immunomodulatory activities. However, carrageenase isolated from bacteria has historically had low enzyme activity and poor stability. Recently, researchers have isolated and cloned the carrageenase gene from Polaribacter sp. NJDZ03; the enzyme shows better thermostability and activity at lower temperatures, making it a better choice for industrial uses.

Exopolysaccharide

EPS is a secreted exopolysaccharide which protects the cells, stabilizes membranes, and serves as a carbon store. Most EPS is similar in composition, but in extremophiles the composition may be distinct. Specifically in Polaribacter sp.
SM1127, the EPS has antioxidant activity and has been shown to protect human fibroblast cells at lower temperatures. Studies by Sun et al. were done to determine whether this can be utilized to protect and repair damage caused by frostbite. They found that Polaribacter-derived EPS helps facilitate the movement of dermal fibroblast cells to the site of injury, promoting healing not only of frostbite injuries but of other cutaneous wounds as well.

References

Further reading

Flavobacteria Bacteria genera Psychrophiles Marine microorganisms
https://en.wikipedia.org/wiki/T2%20%28settlement%20system%29
T2 is a financial market infrastructure that provides real-time gross settlement (RTGS) of payments, mostly in euros. It is operated by the European Central Bank and is the critical payments infrastructure of the euro area. With turnover in the trillions of euros every day, it is one of the largest payment systems in the world. It is one of three so-called TARGET Services, together with TARGET2-Securities (T2S) for securities and TARGET Instant Payment Settlement (TIPS) for fast payments. The acronym TARGET stands for Trans-European Automated Real-time Gross-Settlement Express Transfer. T2 replaced its predecessor RTGS system, TARGET2 (itself introduced in 2007–2008), on .

Overview

Like other RTGS systems, T2 allows individual banks to submit payment orders and have them settled in central bank money, namely the euro. T2 settles payments between banks as well as those related to the Eurosystem's own operations. Member banks can connect to T2 either via SWIFT or via NEXI-Colt, a service of Nexi. In legal terms, the relationship is between the member bank and the relevant National Central Bank within the Eurosystem. In addition to payments in euros, T2 allows settlement in other currencies of the EU if the respective central bank opts for it. This is a new feature of T2 compared with TARGET2, as is the adoption of the ISO 20022 messaging standard. T2 also integrates a Central Liquidity Management (CLM) functionality which extends to T2S and TIPS. The transition to T2 also entailed the phasing out of national settlement systems that had been kept, e.g., for overnight deposit and intraday credit provision. T2 was developed jointly with T2S by four central banks of the Eurosystem: the Bank of France, Bank of Italy, Bank of Spain, and Deutsche Bundesbank. It is planned to be complemented by a new Eurosystem Collateral Management System (ECMS), which will be the single collateral management system for collateralising the Eurosystem's monetary policy operations.
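What "real-time gross settlement" means can be sketched with a toy example (hypothetical balances and orders, not T2's actual message flow): every order settles individually, immediately, and in full, so a bank needs liquidity for each gross payment rather than only for its net position.

```python
# Toy illustration of gross settlement; all figures are made up.
def settle_rtgs(balances, orders):
    """Settle each order one by one, in full, if and only if the paying
    bank has sufficient liquidity at that moment."""
    settled = []
    for payer, payee, amount in orders:
        if balances[payer] >= amount:
            balances[payer] -= amount
            balances[payee] += amount
            settled.append((payer, payee, amount))
    return settled

balances = {"A": 100, "B": 100}
orders = [("A", "B", 80), ("B", "A", 70), ("A", "B", 30)]
settled = settle_rtgs(balances, orders)

# Gross throughput is 180, even though A's *net* position is only -40;
# this is why RTGS participants need substantial intraday liquidity.
print(balances)  # {'A': 60, 'B': 140}
```

A deferred net settlement system would instead accumulate the orders and settle only the net amount (here, a single payment of 40 from A to B) at the end of the cycle, trading lower liquidity needs for settlement risk during the day.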
Like its predecessors TARGET and TARGET2, T2 is used for the end-of-day settlement of EURO1 (operated by the Euro Banking Association) and for payments in euros between CLS Bank and its members. On , the Eurosystem and Danmarks Nationalbank signed an agreement that provides for Denmark to join T2 (as well as TIPS) in March 2025, allowing T2 to settle transactions in Danish kroner as well as euros.

Statistics

In the course of 2023, TARGET2 (until 20 March) and T2 (after that date) settled 104 million transactions for a total turnover of €559 trillion, with daily turnover fluctuating between €1.4 trillion (29 May) and €4.7 trillion (20 March). That places TARGET2/T2 turnover below CLS and Fedwire but above BOJ-NET (Japan) and CHAPS (United Kingdom), as has been the case throughout the previous decade. The system suffered no outages during 2023; 90 percent of transactions were settled within 38 seconds, and 0.09 percent took more than five minutes. As of end-2023, T2 had 956 direct participants holding an RTGS account, opening access to T2 settlement to 5,368 correspondents worldwide. In total, T2 was accessible to nearly 40,000 participants, including branches and subsidiaries of direct participants and correspondents.

See also

Fedwire

References

Payment clearing systems Real-time gross settlement
https://en.wikipedia.org/wiki/28%20nm%20process
The "28 nm" lithography process is a half-node semiconductor manufacturing process based on a die shrink of the "32 nm" lithography process. It appeared in production in 2010. Since at least 1997, "process nodes" have been named purely on a marketing basis and have no direct relation to the dimensions on the integrated circuit; neither the gate length, the metal pitch, nor the gate pitch on a "28 nm" device is twenty-eight nanometers. Taiwan Semiconductor Manufacturing Company has offered "28 nm" production using high-K metal gate process technology. GlobalFoundries offers a "28 nm" foundry process called "28SLPe" ("28nm Super Low Power"), which uses high-K metal gate technology. According to a 2016 presentation by Sophie Wilson, 28 nm has the lowest cost per logic gate: cost per gate had decreased as processes shrank until reaching 28 nm, and has slowly risen since then.

Design

"28nm" requires twice the number of design rules for ensuring reliability in manufacturing as "80nm".

Shipped devices

AMD's Radeon HD 7970 uses a graphics processing unit manufactured using a "28nm" process. Some models of the PS3 use an RSX 'Reality Synthesizer' chip manufactured using a "28nm" process. FPGAs produced with "28 nm" process technology include models of the Xilinx Artix 7 FPGAs and Altera Cyclone V FPGAs.

References

Application-specific integrated circuits International Technology Roadmap for Semiconductors lithography nodes
https://en.wikipedia.org/wiki/High%20injury%20network
A high injury network (sometimes shortened to HIN) is a way of identifying parts of an urban street network with higher rates of traffic injuries or fatalities, typically with a goal of prioritizing these streets for safety interventions. High injury networks have been published by many cities in the US and Canada as part of their efforts to work toward Vision Zero. While data on fatalities and collisions have long been available in many municipalities, the first HIN per se was published by San Francisco in 2013, though work on similar efforts had begun there as early as 2011. Creating a HIN is a data-driven exercise, and the analytic methods and data sources used may vary widely. Most HINs are created at the scale of cities where detailed collision data is collected, though regional efforts at defining a more standardized approach also exist. References Road transport Road safety data sets Urban planning
https://en.wikipedia.org/wiki/SOLAR-C
SOLAR-C (official name "High-sensitivity Solar Ultraviolet Spectroscopic Satellite") is a planned Sun-observing satellite being developed by the Japan Aerospace Exploration Agency (JAXA), the National Astronomical Observatory of Japan (NAOJ), and international collaborators. It will be the follow-up to the Hinode (SOLAR-B) and Yohkoh (SOLAR-A) missions and will carry the EUV High-throughput Spectroscopic Telescope (EUVST) and the Solar Spectral Irradiance Monitor (SoSpIM). It is scheduled to launch in fiscal year 2028.

Objectives

The mission aims to study the Sun, its effects on Earth and the Solar System, and the mechanisms behind hot plasma formation. The satellite will also analyse the Sun's UV radiation spectrum.

References

External link

Official website (in Japanese)

Satellites of Japan Space telescopes 2028 in spaceflight
https://en.wikipedia.org/wiki/Commelina%20sp.%20Sandstone
Commelina sp. Sandstone is a herb in the family Commelinaceae endemic to the Northern Territory of Australia, occurring in both Litchfield and Kakadu National Parks. The perennial herb typically grows along the ground to a length of around 1.5 metres. It is found in open forest or woodland at the base of sandstone slopes or on deep sandy soils. It has been recorded blooming between March and April and in December, producing a blue, two-centimetre flower with three petals. Its fruit, flat, oval capsules, have been recorded in March and April.

References

Flora of the Northern Territory sp. Sandstone Undescribed plant species
https://en.wikipedia.org/wiki/NGC%201100
NGC 1100 is a spiral galaxy located around 235 million light-years away in the constellation Eridanus. NGC 1100 is situated close to the celestial equator. It was discovered on October 17, 1885, by Francis Preserved Leavenworth. NGC 1100 is not known to have much star formation, nor is it known to have an active galactic nucleus. One supernova has been observed in NGC 1100: SN 2024vcj (type Ia-91bg-like, mag. 19.36).

See also

List of NGC objects (1001–2000)

References

External links

Spiral galaxies Eridanus (constellation) 1100 010438 Astronomical objects discovered in 1885 Discoveries by Francis Leavenworth J02453607-1741201 -03-08-016 546-18 10438
https://en.wikipedia.org/wiki/Illinois%20Central%20382
Illinois Central No. 382, also known as "Ole' 382" or "The Cannonball", was a 4-6-0 "Ten Wheeler" bought new from the Rogers Locomotive Works in Paterson, New Jersey for the Illinois Central Railroad. Constructed in 1898, the locomotive was used for fast passenger service between Chicago, Illinois and New Orleans, Louisiana. On the night of April 30, 1900, engineer Casey Jones and fireman Simeon "Sim" Webb were traveling with the engine from Memphis, Tennessee to Canton, Mississippi. The train collided with the rear of a freight train stuck on the main line at Vaughan, Mississippi, the last station before Canton, killing Jones and injuring dozens more. After the accident, the locomotive was rebuilt in Water Valley, Mississippi, and returned to service. The locomotive was believed to be cursed after Jones' death, as it would suffer three more accidents in its career before being retired in July 1935 and scrapped. Today, a stand-in for No. 382, former Clinchfield Railroad No. 99, is on display at the Casey Jones Home & Railroad Museum in Jackson, Tennessee, painted up as Illinois Central No. 382.

History

No. 382 was bought new from the Rogers Locomotive Works of Paterson, New Jersey. The new 300 series of 4-6-0 locomotives were designed for fast passenger service on the Illinois Central between Chicago, Illinois, and New Orleans, Louisiana.

1900 Wreck

There are many accounts of Casey Jones' final journey that led up to his accident in Vaughan, Mississippi, but the agreed-upon facts are that Jones had taken up a double shift to cover for a sick engineer named Sam Tate on April 29. Jones and his fireman, Simeon Webb, had already traveled from Canton, Mississippi northbound to Memphis, Tennessee for their shift, taking the "New Orleans Special" with a sister locomotive of No. 382, No. 384. When Tate called in sick, Jones and Webb agreed to take Tate's "New Orleans Special" from Memphis, Tennessee to Canton, Mississippi.
When they departed with the southbound "New Orleans Special" passenger train, it was an hour and a half behind schedule; No. 382 had been the engine hauling the five-car train since its departure from Chicago. At 12:30 AM on the night of April 30, the train left Memphis and started its near non-stop journey to Canton, the only stop being in Goodman, Mississippi, to let another train pass. As Jones drove No. 382 down toward Canton, the station and sidings in Vaughan, Mississippi were occupied by three trains at the same time. The crucial one was a southbound doubleheader whose train was too long for the siding. As the "New Orleans Special" rounded an S-curve, fireman Simeon Webb spotted the doubleheader stuck on the tracks. After Webb shouted a warning, Jones applied the emergency brakes and threw No. 382 into reverse at the same time. Jones told Webb to jump out, and Webb did, getting knocked unconscious as he hit the ground. Jones' train crashed at 3:52 AM, smashing through a caboose, two separate flatcars (one full of hay, the other carrying corn), and halfway through a flatcar of lumber. Jones was the only fatality of that accident.

Post 1900

After the Vaughan wreck, No. 382 was moved to Water Valley, Mississippi for repairs, returning to service that summer. However, the engine had a string of other accidents throughout the rest of its career, resulting in six deaths in total, including Casey Jones. In 1903, saboteurs damaged the tracks and caused No. 382 to flip onto its side; engineer Harry A. Norton lost both of his legs and received third-degree burns, and his fireman was scalded and died of his injuries three days later. In 1905, the engine ran over a set of points, derailed, and flipped down an embankment in the Memphis South Yards in Tennessee. Norton was again the driver of No. 382 that day, but he survived this accident as well.
The locomotive was renumbered 212 in July 1900, then 2012 in July/August 1907, then 5012 in 1922. On January 22, 1912, No. 2012 crashed into the rear of a passenger train in Kinmundy, Illinois, resulting in four deaths, including that of the former president of the Illinois Central. This ended up being the engine's deadliest accident. In July 1935, No. 2012 was removed from service and scrapped.

Clinchfield No. 99

Carolina, Clinchfield & Ohio Railroad ("Clinchfield" for short) No. 99 is a 4-6-0 built by the Baldwin Locomotive Works in 1905 as South & Western Railway Company No. 1. In 1908, the South & Western became the Carolina, Clinchfield & Ohio Railway. In 1924, the road was incorporated, together with the Carolina, Clinchfield & Ohio of South Carolina and the Clinchfield & Northern Railway of Kentucky, into the new Clinchfield Railroad, and the engine was renumbered No. 99. In 1953, No. 99 was sold to the Black Mountain Railway in Burnsville, North Carolina, where it was renumbered No. 3. The company was bought by the Yancey Railroad in 1955. The engine was retired on the Yancey Railroad in 1956 and sold to the City of Jackson, Tennessee, which purchased No. 99 in order to put it on display at a new museum dedicated to Casey Jones' life, near his and Jeanie Brady's home. The engine was cosmetically restored as Illinois Central No. 382 and put on display at the Casey Jones Home & Railroad Museum, which opened later that same year. In 1980, the Casey Jones Village was established, and Jones' home and No. 382 were moved to the new plaza, with the museum reopening a year later, in 1981.

Current Disposition

No. 99, repainted as IC No. 382, is now on static display at the Casey Jones Home & Railroad Museum in Jackson, Tennessee.

Legacy

No. 382 has been featured and mentioned in several songs in connection with Casey Jones. No. 382 also served as the basis for the mock-up locomotives No.
29 & Constitution in the 2013 live-action Disney film The Lone Ranger.

See also

Illinois Central Railroad Illinois Railway Museum The Ballad of Casey Jones

References

Rogers locomotives Jackson, Tennessee Illinois Central locomotives Scrapped locomotives Train wreck ballads Casey Jones 4-6-0 locomotives Curses Illinois Central Railroad
https://en.wikipedia.org/wiki/Ippolit%20S.%20Gromeka
Ippolit Stepanovich Gromeka (or Hippolyte Stepanovich Gromeka) was a 19th-century Russian scientist who made significant contributions to the science of fluid mechanics.

Biography

Ippolit was born on 27 January 1851 in Berdychiv to Stepan Stepanovich Gromeka, a well-known publicist and a governor (1867–1875) of Siedlce, and Yekaterina Fyodorovna Shcherbatska. He grew up in Siedlce and earned a gold medal at the Siedlce high school. He completed his bachelor's degree at the Imperial Moscow University in 1873 and worked as a teacher at the university for two years. He then taught at a Moscow high school until 1879, and at the Belsk high school from 1879. In 1879, he also completed his master's degree with a dissertation on capillary phenomena. In 1880, he became an assistant professor at Kazan University. In 1881, he obtained his PhD with a dissertation entitled Some cases of the motion of an incompressible fluid. He became a professor in 1882. In the winter of 1888–1889, Gromeka fell from a sleigh while hunting, severely bruising his chest. As a result of this injury, he died on 13 October 1889 in Kutaisi, at the age of only 38. One of his brothers, Mikhail Stepanovich Gromeka, was a well-known literary critic, who died in 1883.

Research

During his short research career of just over 10 years, Gromeka made many important contributions to the field of fluid mechanics through 11 works, from his master's thesis on capillary phenomena to his last work, in 1889, on the effect of temperature distribution on sound waves. He provided an original and modern description of capillarity, settling for the first time the discrepancy that had existed between Young's and Laplace's theories. He pioneered the study of Beltrami flows in his PhD thesis of 1882, and because of this he is referred to as the father of helical flows. He also studied unsteady flows in tubes, wave motion in elastic tubes, and other topics.
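A Beltrami (helical) flow is a velocity field whose vorticity is everywhere parallel to the velocity, i.e. ∇×u = λu. This defining property can be checked symbolically; the sketch below uses the classic ABC (Arnold–Beltrami–Childress) flow, a standard later example of such a field rather than Gromeka's own construction:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A, B, C = sp.symbols('A B C')

# ABC flow: a well-known Beltrami field with lambda = 1.
u = sp.Matrix([A*sp.sin(z) + C*sp.cos(y),
               B*sp.sin(x) + A*sp.cos(z),
               C*sp.sin(y) + B*sp.cos(x)])

# Curl of u, component by component.
curl = sp.Matrix([sp.diff(u[2], y) - sp.diff(u[1], z),
                  sp.diff(u[0], z) - sp.diff(u[2], x),
                  sp.diff(u[1], x) - sp.diff(u[0], y)])

# curl u - u simplifies to the zero vector: vorticity is parallel to velocity.
print(sp.simplify(curl - u))
```

Because vorticity and velocity are aligned, fluid particles in such flows follow helical paths, which is the sense in which Gromeka's Beltrami flows are "helical".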
His scientific works were published in Russian in 1952. A special issue of the journal Fluids in honour of Gromeka was produced in 2024.

Published works

Gromeka's published works are:

Gromeka, I.S. Essay on the Theory of Capillary Phenomena. Theory of Surface Fluid Adhesion (Master's Thesis). Mat. Sb. 1879, 9, 435–500.
Gromeka, I.S. Some Cases of Incompressible Fluid Flow. Ph.D. Thesis, Kazan University, Kazan, Russia, 1882; pp. 1–107.
Gromeka, I.S. On the Theory of Fluid Motion in Narrow Cylindrical Tubes; Scientific Notes of Kazan University; Kazan University: Kazan, Russia, 1882; pp. 1–32.
Gromeka, I.S. On the Velocity of Propagation of Wave-Like Motion of Fluids in Elastic Tubes; Scientific Notes of Kazan University; Kazan University: Kazan, Russia, 1883; pp. 1–19.
Gromeka, I.S. On the Vortex Motions of a Liquid on a Sphere; Scientific Notes of Kazan University; Kazan University: Kazan, Russia, 1885; pp. 1–35.
Gromeka, I.S. On the motion of liquid drops. Bull. de la Société Mathématique de Kasan 1886, 5, 8–47.
Gromeka, I.S. Some cases of equilibrium of a perfect gas. Bull. de la Société Mathématique de Kasan 1886, 5, 66–82.
Gromeka, I.S. Lectures on the Mechanics of Liquid Bodies; Kazan University Press: Kazan, Russia, 1887; pp. 1–174.
Gromeka, I.S. On infinite values of integrals of second-order linear differential equations. Bull. de la Société Mathématique de Kasan 1887, 6, 14–40.
Gromeka, I.S. On the Effect of Temperature on Small Variations in Air Masses; Scientific Notes of Kazan University; Kazan University: Kazan, Russia, 1888; pp. 1–40.
Gromeka, I.S. Influence of the Uneven Distribution of the Temperature on the Propagation of Sound. Mat. Sb. 1889, 14, 283–302.

References

1851 births 1889 deaths People from Berdychiv Fluid dynamicists 19th-century mathematicians from the Russian Empire Inventors from the Russian Empire Russian physicists
https://en.wikipedia.org/wiki/Nicholas%20D.%20Kazarinoff
Nicholas Donat Kazarinoff (August 12, 1929, Ann Arbor, Michigan – November 21, 1991, Albuquerque, New Mexico) was an American mathematician, specializing in differential equations. In 1988 he was elected a Fellow of the American Association for the Advancement of Science (AAAS).

Education and career

Kazarinoff grew up in Ann Arbor, Michigan, and went to college in his hometown at the University of Michigan, where he graduated with a B.S. in 1950 and an M.S. in 1951. He received his Ph.D. in mathematics in 1954 from the University of Wisconsin–Madison; his thesis, Asymptotic Forms for the Whittaker Functions of Large Complex Order m, was supervised by Rudolf Ernest Langer. In the mathematics department of Purdue University, Kazarinoff was an instructor from 1953 to 1955 and an assistant professor from 1955 to 1956. At the University of Michigan, he was an assistant professor from 1956 to 1960, an associate professor from 1960 to 1964, and a full professor from 1964 to 1971. In 1971 he resigned from the University of Michigan to become the chair of the mathematics department at the University of Buffalo (also known as SUNY Buffalo or the State University of New York, Buffalo). There he was the Martin Professor of Mathematics from 1972 until his death in 1991. He died in Albuquerque while a visiting professor at the University of New Mexico, where he had also been a visiting professor in 1985. He also held visiting appointments at the University of Wisconsin–Madison's Army Mathematics Research Center (AMRC) (1958–1960), at Rome's Consiglio Nazionale delle Ricerche (CNR) (1978 and 1980), and at Beijing University of Technology (1987). At Moscow's Steklov Institute of Mathematics, he was an exchange professor for the academic year 1960–1961 and again in the spring semester of 1965. Kazarinoff's research focused mainly on differential equations. His speciality was partial differential equations applied to reaction-diffusion systems.
His research on differential equations included fluid dynamics and dynamical systems. He also did research on the geometry of convex sets, the geometry of theta series, and the iteration of real-valued and complex-valued maps. He was the author or co-author of more than 80 research articles and monographs. After his death, the University of Michigan established the Nicholas D. Kazarinoff Collegiate Professorship of Complex Systems, Mathematics, and Physics.

D. K. Kazarinoff's inequality for tetrahedra

Kazarinoff dedicated his book Geometric Analysis to his father, Donat Konstantinovich Kazarinoff (1892–1957), who taught mathematics and engineering at the University of Michigan for 35 years (with 37 years of affiliation and 2 years of academic leave).

Theorem: Let T be a tetrahedron whose circumcenter is not an exterior point, and let P be a point belonging to T. Let the distances from P to the vertices and to the faces of T be denoted by Ri and ri, respectively, for i = 1, 2, 3, 4. Then ΣRi / Σri > 2, and 2 is the greatest lower bound.

According to László Fejes Tóth, D. K. Kazarinoff stated the inequality but never published his proof, perhaps because he thought that his proof was not simple enough. However, shortly before his death, D. K. Kazarinoff provided a simple proof of the Erdős–Mordell inequality for triangles and gave a generalization to three dimensions. Nicholas D. Kazarinoff used the work of his father as a basis for a proof of D. K. Kazarinoff's inequality for tetrahedra.

Personal life

In July 1948, Kazarinoff married Margaret Louise Koning. They had five sons and a daughter. Upon his death in 1991 at age 62, he was survived by his widow, their six children, and eight grandchildren. He was an active member of the Unitarian Universalist Church of Buffalo and served on the church's finance committee.
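The tetrahedron inequality stated above can be checked numerically; a small sketch for the regular tetrahedron with P at its centroid, where the ratio works out to exactly 3 (since the circumradius of a regular tetrahedron is three times its inradius), comfortably above the bound of 2:

```python
import numpy as np

# Regular tetrahedron with circumcenter/centroid at the origin.
V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
P = V.mean(axis=0)  # the centroid, an interior point

# R_i: distances from P to the four vertices.
R = np.linalg.norm(V - P, axis=1)

# r_i: distances from P to the four faces (face i omits vertex i).
r = []
for i in range(4):
    a, b, c = V[[j for j in range(4) if j != i]]
    n = np.cross(b - a, c - a)
    n /= np.linalg.norm(n)          # unit normal of the face
    r.append(abs(np.dot(P - a, n)))  # perpendicular distance from P
r = np.array(r)

ratio = R.sum() / r.sum()
print(ratio)  # 3.0 (up to rounding), satisfying sum(R_i)/sum(r_i) > 2
```

Moving P toward a vertex drives the ratio upward, which is consistent with 2 being a greatest lower bound rather than an attained minimum.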
Selected publications

Articles

Books

References

1929 births 1991 deaths 20th-century American mathematicians Applied mathematicians Dynamical systems theorists Fluid dynamicists Geometers Partial differential equation theorists University of Michigan alumni University of Wisconsin–Madison alumni University of Michigan faculty University at Buffalo faculty American people of Russian descent People from Ann Arbor, Michigan Fellows of the American Association for the Advancement of Science
https://en.wikipedia.org/wiki/Cyclopentadienylcobalt%20dinitrosyl
Cyclopentadienylcobalt dinitrosyl is an organometallic molecule. It is a reactive intermediate in the formation of dinitrosoalkane cobalt complexes. While cyclopentadienylcobalt dinitrosyl has not been isolated and characterized, the preparation of this reactive intermediate in the presence of olefins results in isolable dinitrosoalkane cobalt complexes. The dinitrosyl intermediate is known for its alkene-binding capability, and the resulting dinitrosoalkane cobalt complexes are capable of stoichiometric and catalytic C-H bond functionalization.

Discovery

This nitrosyl cobalt complex was first discovered in 1967 by Henri Brunner at the Technical University of Munich. By reacting cyclopentadienylcobalt dicarbonyl with NO in hexane at room temperature, he obtained a dimer, (CpCoNO)2. The dimer was formed by bridging nitrosyl (NO) ligands, which connected the two CpCo units. Based on a dipole moment of 1.61 D observed in hexane, Brunner suggested that the nitrosyl ligands extended above the plane of the cobalt atoms. However, crystal structures of the dimer have shown that the nitrosyl ligands and the cobalt atoms lie in the same plane. Subsequently, in 1973, Brunner discovered that the dimer could react with NO and an olefin to form a monomeric cyclopentadienylcobalt dinitrosoalkane, in which the protons of the olefin assumed the endo position. However, the scope of such reactions was limited to norbornene-type olefins. Ethylene and cyclohexene also reacted, but the resulting products could not be purified for structure determination. Notably, this transformation was ligand-based rather than occurring at the metal center, which is unusual for an organometallic transformation.
Brunner's initial investigations of the cyclopentadienylcobalt nitrosyl dimer suggested that an olefin could be activated through interaction with the dimer, but the synthetic utility and the identification of the monomeric reactive intermediate, cyclopentadienylcobalt dinitrosyl, were yet to be explored. Furthermore, the origin of the stereoselectivity had not been determined. Brunner's findings were verified by a later crystallographic study conducted by Bernal and coworkers. They confirmed that the hydrogen atoms did occupy the endo positions of norbornene, and that the nitroso ligands occupied the exo positions. The crystallographic data of the Co(NO)2C2 fragment closely resemble data from molecules containing free nitroxyl radicals, which led Bernal to suggest that the fragment behaves like a nitroxyl free radical. Specifically, the Co-N-C bond angle of 118.2(2)° closely resembles that of 2,2,5,5-tetramethyl-3-carbamidopyrroline-1-oxyl and other molecules containing free nitroxyl radicals. The N-O bond distances, as well as the stretching frequencies, agree closely too.

Early Synthetic Applications

Olefin Binding

Whereas in Brunner's work the dinitrosoalkane was not further functionalized, Becker and Robert Bergman demonstrated in the 1980s that the functionalized organic fragment resulting from alkene activation could be released from the dinitrosoalkane complex, and that other unstrained olefins could be made to react with the dimer. They hypothesized that the reaction of the dimer [CpCoNO]2 with NO and olefin proceeded via formation of the cyclopentadienylcobalt dinitrosyl intermediate. Specifically, they suggested that simple olefins could not trap this proposed intermediate fast enough. This led them to perform the reaction with excess amounts of olefin so that the trapping rate would increase and correspondingly improve the yield of the reaction. Simple aliphatic olefins could thus be reacted with the dimer to produce the corresponding dinitrosoalkane.
Brunner had demonstrated that the reaction of the dimer with norbornene-type olefins resulted in the protons occupying the endo position. Bergman showed that the reaction was completely stereospecific for unstrained aliphatic olefins by reacting the dimer with (E)- and (Z)-3-methyl-2-pentene. These reactions gave isomerically pure products, which showed that the stereospecificity was not exclusive to ring-strained substrates like norbornene.

1,2-Diamination

Bergman attempted to release the dinitrosoalkane ligand from cobalt via ligand substitution with CO or phosphine, and via direct oxidation of the substrate. These unsuccessful experiments suggested the presence of π-backbonding from the cobalt to the nitrosyl ligands. With such π-backbonding, the ligands would resemble nitroxides or nitroxyl radicals rather than nitroso ligands, as suggested by Bernal's XRD study. Despite this setback, Bergman and Becker were successful when they reduced the dinitrosoalkane with LiAlH4. This reaction resulted in the net conversion of alkenes to 1,2-diamines and was applied to a variety of olefins with good yields. The amination reaction, however, could not be performed with a high degree of stereoselectivity, suggesting that epimerization occurs during reduction. This idea was supported by deuterium studies employing LiAlD4, in which a substantial amount of product had deuterium incorporated at the carbon alpha to the amine. Thus, Becker and Bergman's work demonstrated that the cyclopentadienylcobalt nitrosyl dimer could be used for the 1,2-diamination of alkenes. Moreover, they correctly suggested that alkene activation occurred via the cyclopentadienylcobalt dinitrosyl intermediate.

Cyclopentadienylcobalt Dinitrosyl Reactive Intermediate

Mechanistic Investigation

Becker and Bergman published a detailed mechanistic investigation of the reaction between CpCo(NO)2 and olefins shortly after the above study.
In this study they revealed that the cyclopentadienylcobalt dinitrosoalkane complex undergoes reversible exchange with alkenes. Kinetic and spectroscopic investigations allowed them to propose the mechanism of alkene activation and to identify the monomeric cyclopentadienylcobalt dinitrosyl reactive intermediate. Preparing a dinitrosoalkane complex with a simple unstrained olefin and introducing norbornene-type olefins, which form much more stable dinitrosoalkane complexes, led to exchange of the olefins. The observation of this exchange reaction led to a simple mechanistic proposal wherein the reactive intermediate could reversibly bind the acyclic olefin or irreversibly bind norbornene to form the more stable alkane complex. The corresponding rate expression was experimentally verified by varying the concentrations of norbornene and acyclic olefin. Bergman also showed that olefin exchange could occur photochemically.

When [CpCoNO]2 was placed in benzene it formed a dark green solution. However, upon introduction of NO gas, the solution turned a lighter, brighter green. Infrared spectroscopy revealed that this brighter solution has stretching frequencies of 1609 and 1690 cm−1, as opposed to the dimer's frequencies at 1540 and 1590 cm−1. The higher stretching frequencies supported the assignment of this species as the monomeric CpCo(NO)2 reactive intermediate. These IR stretches disappeared when the solution of the intermediate was treated with norbornene, further supporting the assignment. The CpCo(NO)2 intermediate could also be synthesized directly by treating Co((NO)2-μ-Cl)2 with lithium cyclopentadienide in dimethoxyethane. CpCo(NO)2 still decomposed slowly, but was relatively stable under dilute conditions, as evinced by the UV/Vis spectrum remaining unchanged over 20 minutes. This solution was treated directly with olefin and gave the dinitrosoalkane complex.
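The competition kinetics described above can be summarized in a brief sketch. The rate constants k1, k−1, and k2 are illustrative labels introduced here, not values from the original study: if the dinitrosoalkane complex A reversibly releases the intermediate I and the acyclic olefin, and norbornene N traps I irreversibly, a steady-state treatment of I gives

```latex
\begin{align*}
\mathrm{A} \;&\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; \mathrm{I} + \text{olefin} \\
\mathrm{I} + \mathrm{N} \;&\xrightarrow{\;k_2\;}\; \mathrm{A_N} \\
\text{rate} &= \frac{k_1 k_2\,[\mathrm{A}][\mathrm{N}]}{k_{-1}[\text{olefin}] + k_2[\mathrm{N}]}
\end{align*}
```

A rate law of this form is consistent with the observations reported: added acyclic olefin suppresses the exchange rate, while at high norbornene concentration the rate saturates toward k1[A].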
Additional studies with this reaction scheme showed that the rate was insensitive to the polarity of the solvent. Substitutions on the cyclopentadienyl ring did not affect the reactivity of the intermediate, but the permethylated ring reduced the thermal stability while ester substituents improved it. Overall, these studies suggested that the reaction between the [CpCoNO]2 dimer and olefin is bimolecular. Mechanistically, since tetrasubstituted olefins were found to react with the dimer, Becker and Bergman concluded that the reaction cannot occur via olefin coordination to the cobalt, and instead occurs directly between the nitrogen atoms of the nitrosyl groups and the π bond of the olefin in a concerted fashion. Moreover, since exchange of the alkene results in the stereospecific release of the starting alkene, olefin exchange was suggested to occur via concerted C-N bond cleavage to produce diradicals which quickly combine to form the alkene.

Molecular Orbital Theory Perspective

Bergman's work on CpCo(NO)2 suggested a likely mechanism of alkene binding, but it was still unusual that an organometallic transformation should proceed via ligand-based activation. Moreover, the structure of the reactive intermediate had not yet been determined. Roald Hoffmann provided a molecular orbital point of view to describe this reactivity and the possible structure. Hoffmann computed the wavefunction of the highest occupied molecular orbital (HOMO) and found that it is largely located on the NO π* orbitals, and that these π* orbitals are able to engage in π back-bonding with Co. He identified several plausible structures for CpCo(NO)2. One was a 20-electron species in which the NO ligands are linear and can be considered LX-type because of the donation of both the lone pair and the nitrogen-centered radical. Another was an 18-electron species in which one NO ligand behaves as the aforementioned LX ligand, while the other behaves as an X-type ligand and adopts a bent geometry.
Finally, if both NO ligands behave as X-type ligands and adopt the bent conformation, then the CpCo(NO)2 moiety has 16 electrons. The conformation of the NO ligand and the corresponding electronic behavior can be understood in the context of the nitric oxide molecular orbital diagram, which has a nitrogen-centered radical residing in the degenerate px/py-based molecular orbitals and a lone pair residing in the pz-based orbital. These orbitals are the HOMO and HOMO-1, respectively. When the NO ligand uses only the radical in bonding with the Co center and behaves as an X ligand, there is no need for Co-N-O to be linear, since the px/py orbitals are orthogonal to the N-O nuclear axis. When the NO ligand uses the pz-based lone pair in bonding, the nuclear axis of N-O must coincide with the Co-N nuclear axis.

The 18-electron species, while it satisfies the 18-electron rule, has less symmetry than the 16-electron species wherein both nitrosyl ligands are bent. Indeed, a Walsh diagram detailing the conformational change from linear to bent NO ligands shows that the LUMO and HOMO centered on the NO π* orbitals decrease in energy. Hoffmann attributed this observation to the loss of antibonding overlap between the metal and nitrosyl orbitals. A contour plot of the wavefunction of the CpCo(NO)2 HOMO and LUMO in the bent conformation shows that the HOMO is antisymmetric with respect to the plane perpendicular to the N-Co-N plane, while the LUMO is symmetric with respect to this plane. The HOMO and LUMO therefore have the appropriate symmetry to interact with an olefin: the π* orbital of the olefin can interact with the antisymmetric HOMO, and the π orbital can interact with the symmetric LUMO, for an overall stabilizing interaction. The HOMO of the resulting dinitrosoalkane then becomes the dxz orbital of Co, and the LUMO becomes the NO π* orbitals.
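The 20-, 18-, and 16-electron counts above follow from the neutral (covalent) electron-counting convention: Co contributes its 9 valence electrons, Cp donates 5, a linear NO donates 3, and a bent NO donates 1. A minimal sketch of this bookkeeping (the names are illustrative, not from the original papers):

```python
# Neutral (covalent) electron counting for CpCo(NO)2 conformers.
# Donor counts follow the standard covalent-model conventions:
# Co contributes 9 valence electrons, Cp 5, linear NO 3, bent NO 1.
CONTRIBUTION = {"Co": 9, "Cp": 5, "NO_linear": 3, "NO_bent": 1}

def electron_count(nitrosyls):
    """Total valence electron count for a CpCo fragment plus the given NO ligands."""
    return (CONTRIBUTION["Co"] + CONTRIBUTION["Cp"]
            + sum(CONTRIBUTION[no] for no in nitrosyls))

print(electron_count(["NO_linear", "NO_linear"]))  # both linear: 20
print(electron_count(["NO_linear", "NO_bent"]))    # one bent:    18
print(electron_count(["NO_bent", "NO_bent"]))      # both bent:   16
```

The same tallies are obtained in the ionic counting convention; only the partitioning of electrons between metal and ligand changes.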
This description of the LUMO explained why reduction of the dinitrosoalkane complex led to diamination: the LUMO was centered on the nitrosyl ligands, and thus they underwent reduction. Finally, while Hoffmann showed that the 16-electron doubly-bent nitrosyl conformation was more stable than the linear 20-electron complex, the 18-electron complex in which one nitrosyl is bent and one is linear was put forth as another probable conformation.

Molecular Orbital Theory of [CpCoNO]2

A similar study of electronic structure and molecular orbital theory was conducted on the dimer complex. Fenske concluded that the dimer's metal-ligand interactions are characterized by the metal orbitals interacting with both the 2π acceptor orbitals and the 5σ orbitals of the bridging nitrosyls. By comparison with other metal dimers bearing isoelectronic ligands such as CO, it was determined that the nature of the bridging ligands influences the electronic structure of the dimer more strongly than the identity of the metal. Moreover, because the electronic structure is determined by the ligands rather than the metal center, Fenske suggested that the ligands dictate the metal-metal separation of the dimer. Indeed, similar organometallic transformations were observed with rhodium analogues of CpCo(NO)2, supporting this observation.

C-H Functionalization

Addition of Michael Acceptors

The synthetic utility of cyclopentadienylcobalt dinitrosyl was limited to 1,2-diamination until Bergman and Toste expanded this methodology in 2008 by showing that C-C bonds could be formed between the dinitrosoalkane complex and Michael acceptors. The reversibility of alkene binding was already established from Bergman's earlier work, but here Bergman showed that the dinitrosoalkane complex could be functionalized prior to release, while leaving the nitrosyl ligands intact. The proposed procedure entailed preparing the dinitrosoalkane and then treating it with base to afford the nitro-nitroso intermediate.
This intermediate then adds to the Michael acceptor to give the functionalized complex. Finally, this complex undergoes a retrocycloaddition reaction with unfunctionalized olefin to afford the dinitrosoalkane and release the functionalized alkene. The reaction was developed by first reacting a silylated cobalt dinitrosoalkane with a Michael acceptor in the presence of a fluoride source. The fluoride source desilylated the substrate and produced the carbanion, which could serve as the nucleophile in the reaction with the Michael acceptor. Eventually, the reaction could be performed with a base (LHMDS) and a Lewis acid (Sc(OTf)3) promoter to produce the carbanion. The reaction scope included 2-cyclohexen-1-one and phenyl vinyl sulfone as Michael acceptors. In line with previous observations, ring-strained alkenes such as norbornene gave the greatest yield of both the functionalized dinitrosoalkane and the released functionalized alkene. Most functionalizations were highly selective for the diastereomer with the Michael acceptor in the endo position. Additionally, most reactions resulted in functionalization of only one side of the olefin, although phenyl vinyl sulfone was more prone to bind to both sides. Although the original olefin could be exchanged for the functionalized olefin, the process was not yet catalytic.

If there is an intramolecular Michael acceptor, then the Michael addition may proceed via a one-pot synthesis in which the olefin undergoes cyclization via addition to the Michael acceptor. This reaction has been made catalytic with 20 mol% CpCo(NO)2 and 10 mol% base. Whereas other reactions could not be made catalytic because of base-mediated decomposition of CpCo(NO)2 and/or dimerization of the intermediate, the cyclization reaction benefits from the fact that once the olefin binds to CpCo(NO)2 and the base generates the nitro-nitroso alkane, the nucleophile can quickly react with the Michael acceptor intramolecularly.
Enantioselective Addition of Michael Acceptors

Bergman and Toste had previously demonstrated that the use of base and a Lewis acid with CpCo(NO)2 could facilitate the addition of Michael acceptors. This addition was performed with relatively high diastereomeric ratios for norbornene, favoring addition to the endo position. Moreover, when using an enol as the acceptor, a norbornene with a ring substituent in the endo position gave 0% yield, whereas with the ring in the exo position they achieved 73% yield. However, the retrocycloaddition reaction which released the functionalized alkene could not yet be conducted in a manner that transferred the chirality preference of the dinitrosoalkane. These results, as well as Brunner's initial observation that norbornene tended to coordinate with the protons occupying the endo position, suggested that CpCo(NO)2 could mediate enantioselective transformations.

Bergman and Toste developed a method for the asymmetric functionalization of olefins. Their approach entailed using chiral N-benzylated ammonium chloride salts, which could serve a similar function to the Lewis acid promoter in their previous studies but with the added benefit of being chiral. Lower temperatures, the use of η5-(tBuMe2Si)C5H4 instead of Cp, and premixing of the base and salt led to an enantiomeric excess of 83% for the formation of the norbornene-enol complex. The premixing of the base and salt was thought to generate a chiral base, and the bulkier Cp ligand reinforced the enantioselectivity. Additionally, using salts bearing trifluoromethyl groups allowed for further optimization. Under these conditions the initial norbornene functionalization proceeded with quantitative diastereomeric selectivity, with the enol Michael acceptor occupying the endo position. The subsequent retrocycloaddition could then be performed to release the functionalized alkene with up to 85% enantiomeric excess.
This methodology was expanded to diene substrates such as norbornadiene. These ring-strained dienes allowed for sequential, stereoselective Michael additions. The double Michael addition proceeds as follows: the first Michael addition proceeds as expected, then an isomerization occurs in which the enol moves from the α-nitroso position to the γ-nitroso position on the opposite side of the norbornadiene. This isomerization thus prepares a new α-nitroso position for functionalization. The second Michael addition is then followed by retrocycloaddition with another norbornadiene to ultimately release the functionalized diene. The product distribution favored anti-addition of the enol, such that the product had C2 symmetry; syn-addition gave the C1 diene. The anti:syn ratio ranged from 3.7-11:1, and the enantiomeric excess of the anti product ranged from 90-96%, with the enols maintaining R stereochemistry at the β-position of the ketone. The enantioselectivity could be reversed (from R,R,R,R to S,S,S,S) by employing an N-benzylated ammonium chloride salt with the opposite stereochemistry at the alcohol-bearing carbon.

The origin of the stereoselectivity is proposed to be the loss of symmetry of the dinitrosoalkane complex upon reaction with base. As in the other reactions with Michael acceptors, the reaction with base produces the nitro-nitroso intermediate. The exo position of the CpCo(NO)2 moiety favors an approach of the enol such that the major diastereomer is the one in which the β-position of the ketone adopts R stereochemistry. This proposal, however, does not account for the role of the chiral salt and the resulting chiral base. This methodology ultimately allowed for the enantioselective synthesis of C1- and C2-symmetric dienes. It differs from the other C-H functionalization reactions in that it relies on chiral salts and allows for C-H functionalization of multiple sites via an isomerization reaction.
Annulation

Schomaker et al. demonstrated that CpCo(NO)2 could mediate the (3+2)-annulation of alkenes with α,β-unsaturated ketones and imines. The approach is similar to the reactions with Michael acceptors. The advance in this work is that the dinitrosoalkane could undergo sequential deprotonation, using two equivalents of base, to produce the vinyl dianion. In Schomaker's proposed mechanism, one vinyl anion attacks the -ene portion of an enone while the other attacks the -one portion to produce the alcohol of the enone in an overall (3+2) annulation. This annulation product could be released by retrocycloaddition with a ring-strained alkene such as norbornene, as demonstrated previously.
Beginning in 2023 and continuing into 2024, the video game industry has experienced mass layoffs. Over 10,500 jobs were lost in 2023, and an additional 14,600 jobs were lost in 2024. These layoffs had reverberating effects on both established game development studios and emerging companies, impacting employees, projects, and the overall landscape of the gaming industry. They included major job cuts at Embracer Group, Unity Technologies, Microsoft Gaming, Electronic Arts, Sony Interactive Entertainment, Epic Games, Take-Two Interactive, Ubisoft, Sega, and Riot Games.

The layoffs caused several video games to be canceled, video game studios to be shut down or divested from their parent companies, and thousands of employees to lose their jobs. Most of the job cuts occurred in North America and Europe, with the video game industry in the United States being the most affected, followed by Canada, the United Kingdom, and Poland. Over 30 video game development studios laid off their entire staff and shut down. Some of the most notable company closures include Arkane Austin, London Studio, Pixelopus, Riot Forge, Volition, Ready at Dawn, Firewalk Studios, and Game Informer.

A survey by the International Game Developers Association (IGDA), based on 2023 data, suggests a global unemployment rate of 4.8% within the game industry. Some industry experts believe that the rate in the United States could now be twice as high. Mat Piscatella, Executive Director of Circana (formerly The NPD Group), suggests that the most optimistic projection indicates a potential decrease of about 2% for the American video game industry in 2024; a more pessimistic perspective could see a decline of around 10%, with the possibility of an even greater downturn if conditions worsen significantly. According to a report by DDM Games, the industry is currently in a "reset phase," with companies restructuring their operations through closures, layoffs, and divestitures.
The pandemic-induced growth surge has subsided, leading to a need for recalibration. The video game industry layoffs are part of the broader tech industry layoffs that began in 2023; many such layoffs have been attributed to artificial intelligence, although increased interest rates, reduced consumer demand, and excessive hiring during the COVID-19 pandemic have also been cited as causes.

Causes

The layoffs were not a singular event but rather the culmination of several converging factors. The COVID-19 pandemic unexpectedly fueled a surge in video game demand. This led companies to make ambitious investments in acquisitions, mergers, and staff expansion, anticipating sustained growth. However, as the world reopened and the market returned to pre-pandemic trends, the rapid growth proved unsustainable, and companies found themselves with bloated operational costs, necessitating cutbacks.

Rising development costs

The cost of developing AAA games has steadily climbed in recent years due to several factors. The increasing complexity of game design, the adoption of advanced technologies to create "visually stunning" experiences, and rising player expectations for expansive and cinematic content all contributed to this cost inflation, putting immense pressure on company budgets. The global economic slowdown in 2024, coupled with rising interest rates, made it more challenging for companies to secure funding. This limited their ability to invest in new projects and maintain existing ones, further contributing to the need for workforce reductions.

According to a report cited by the Competition and Markets Authority (CMA), development budgets for AAA video games have surged in recent years. While AAA releases previously had budgets ranging from $50–150 million, games set for release in 2024 or 2025 are now seeing budgets of $200 million and higher.
Some franchises, like Call of Duty and Grand Theft Auto, have budgets exceeding $250 million and $300 million, respectively. Additionally, according to the CMA, one major publisher mentioned that a single AAA game could have development costs between $90–180 million and marketing budgets ranging from $50–150 million. For certain franchises cited by the CMA, combined development and marketing costs reached $660 million and almost $550 million, respectively. Activision noted the increasing need for multiple studios to meet the demands of annual Call of Duty releases, leading to greater reliance on outsourcing.

According to Bloomberg, video game executives anticipate a trend towards big-budget games that take fewer risks and rely on well-established intellectual properties (IP), especially as game development costs continue to rise. Martin Sibille, Vice President at Tencent Games and a former EA executive, highlighted the increasing difficulty of taking risks within the industry.

Rising development costs have prompted video game publishers to cancel or delay their games and lay off development teams. Embracer Group notably announced the cancellation of 29 titles. Microsoft Gaming canceled Odyssey, a game Blizzard Entertainment had worked on for over six years, and laid off some of the staff who had worked on it. Sony canceled live service games at Naughty Dog and London Studio, resulting in layoffs at both studios. Electronic Arts canceled an untitled Star Wars game by Respawn Entertainment, indicating a shift in focus away from licensed titles towards live service games and original IP. Ubisoft canceled three previously unannounced games in January 2023, citing dismal financial results from the previous quarter. Some newly founded AAA game development studios, such as Ridgeline Games and Deviation Games, closed down before even releasing their first video game.
Ridgeline Games, founded in 2021, shut down just three years later in 2024. It had been led by game director Marcus Lehto, who decided to leave Ridgeline Games; EA laid off the entire team on February 29, 2024. Deviation Games shut down on March 1, 2024, just four years after its establishment in 2020. The studio's co-founder, Jason Blundell, had left the company in 2022, and the studio canceled its new AAA live service game in 2023. Less than two years after the studio was opened, Prytania Media closed Crop Circle Games, citing "changing consumer tastes" and "economic conditions changing due to the pandemic." Smilegate Barcelona, a studio established in 2020 to develop an open-world AAA console title, shut down just four years after its establishment.

Consumer shift

The escalating expenses associated with video game development have prompted major gaming companies like Sony and Warner Bros. Games to pivot towards creating mobile and live service games. Layoffs and studio closures have also impacted successful live service game companies, such as Epic Games and Bungie. Several live service games launched in 2023 shut down within months, affecting developers and publishers alike. These games, which employ a substantial portion of the industry workforce and generate significant profits, have faced challenges including rising development costs, user fatigue with monetization, and revenue declines post-COVID-19. Additionally, trends like battle royale games are maturing, and expanding franchises to mobile platforms does not always yield the expected returns. Sony's entry into live service gaming has encountered significant challenges and delays, resulting in the postponement of several major live service titles. Although live service initiatives are becoming more popular, 68% of producers say their pipelines cannot support these kinds of projects. Furthermore, 53% of major studios expect difficulties in handling their technical debt.
88% of surveyed developers said they are looking into integrating new tools into their workflows due to the steep rise in game production expenses and complexity. The market is nearing saturation, leading to increased competition for player time and higher user acquisition costs.

Post-pandemic slowdown

The first few months of the COVID-19 pandemic brought about a sharp increase in revenue for the gaming sector worldwide as people looked for indoor entertainment. According to IDC, in 2020, revenue from mobile games climbed by 32.8% to $99.9 billion, while expenditure on digital PC and Mac games increased by 7.4% to $35.6 billion. The amount spent on home console games increased significantly as well, reaching $42.9 billion, up 33.9%. In the ensuing years, this growth pattern abruptly stopped. Mobile gaming revenue fell by 15% in 2021, and then fell further in 2022 and 2023, by 3.3% and 3.1%, respectively. Sales of PC and Mac games saw a brief rise of 8.7% in 2021, a drop of 1.4% in 2022, and a rebound of 2.1% in 2023. Similarly, after the surge in 2020, console game spending plateaued in 2021 with growth of 0.7%, followed by a decline of 3.4% in 2022, before returning to growth of 5.9% in 2023.

The metaverse, a recent trend in the video game industry, once led many investors and companies to believe it was the future of gaming, and companies like Meta and Microsoft made significant investments in this space. The metaverse has since encountered challenges impacting investor expectations. Meta reported significant operating losses of $13.72 billion in its metaverse division in 2022, raising concerns among investors. Meta's acknowledgment that the full realization of metaverse products may take another 10 to 15 years tests investors' patience with its long-term horizon. Inflation and economic uncertainties have affected consumer behavior, delaying the adoption of metaverse-related technologies like headsets.
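Reading the quoted IDC mobile-gaming figures as year-on-year changes, the implied revenue trajectory can be compounded from the 2020 base of $99.9 billion. The dollar values below are computed here for illustration, not quoted from IDC:

```python
def compound(base, rates):
    """Compound a starting value through a list of year-on-year growth rates."""
    values = [base]
    for r in rates:
        values.append(round(values[-1] * (1 + r), 1))
    return values

# Mobile game revenue, $ billions: 2020 base, then the quoted
# year-on-year changes for 2021-2023 (-15%, -3.3%, -3.1%).
mobile = compound(99.9, [-0.15, -0.033, -0.031])
print(mobile)  # [99.9, 84.9, 82.1, 79.6]
```

On this reading, mobile revenue would have ended 2023 roughly 20% below its 2020 peak, consistent with the article's framing of a post-pandemic slowdown.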
Meta revised its monthly active user targets downward from 500,000 by the end of 2022 to 280,000, disappointing investors with lower-than-expected engagement.

Mergers and acquisitions

One of the primary reasons for layoffs in the video game industry is mergers and acquisitions. Video game companies believed that the significant growth witnessed during the pandemic would continue afterward, leading many firms to explore mergers and acquisitions. Between 2020 and 2024, 16 of the 22 most expensive video game acquisitions in history occurred, with major players such as Microsoft, Sony, Embracer Group, Tencent, Take-Two Interactive, and Electronic Arts each making at least one acquisition.

After several acquisitions, Embracer Group announced that it would undergo a significant restructuring, including the closure of studios, layoffs of employees, and the cancellation of dozens of video game projects. Embracer Group faced a setback when a $2 billion deal with an anonymous partner fell through, later revealed to be Savvy Games Group. Savvy, owned by Saudi Arabia’s sovereign wealth fund, the Public Investment Fund, had already invested $1 billion in Embracer. Following the deal's collapse, Embracer announced a restructuring, including shutting down or selling studios and pausing game development. The reasons behind the deal's collapse remain undisclosed, but it had been intended to establish Savvy as a major player in the gaming industry. Embracer CEO Lars Wingefors had previously faced criticism for accepting investment from Savvy due to concerns about human rights violations by the Saudi government. After several restructuring programs, Embracer Group reduced its headcount by 7,761, closed or divested 44 internal and external studios, and decreased the number of game projects by 80.
Several studios and publishers under Embracer Group, Sega, and Microsoft Gaming have either opted to spin off from their parent companies or have been compelled to be sold off, resulting in mass layoffs. On February 29, 2024, Microsoft Gaming studio Toys for Bob revealed its decision to spin off from Activision and operate as an independent studio, while expressing openness to collaborating with both Activision and Microsoft on future projects. Embracer Group announced plans to divest Saber Interactive to a private firm for $500 million. On March 28, 2024, Take-Two Interactive announced its intent to acquire Gearbox Software from Embracer Group for $460 million. On the same day, Relic Entertainment was sold by Sega to an unspecified investor, and Thunderful Group sold Headup Games to Microcuts Holding. Headup Games had initially been acquired by Thunderful for €11 million in 2021.

Russian invasion of Ukraine

In February 2022, the Russian invasion of Ukraine caused an exodus of Russian studios and developers, many of which became established in Cyprus. By April, 42% of Russian developers had either already left the country or made plans to leave in the following months. Russian developers outside the country have reported difficulty in getting projects funded by publishers, as trust is low. The online games market in Russia suffered an 80% decline that year, and the market collapsed in both Russia and Belarus. Many Western video game companies ceased operating in Russia, and all major Russian video game trade shows, many of which had not been held since 2019 due to the pandemic, were discontinued. This included IgroMir and Comic-Con Russia, as well as several e-sports events. Obsidian, an organisation providing tracking data for the layoffs, uses 2022 as the starting year for the period and includes the immediate aftermath of the invasion.
Vladimir Putin made a series of edicts over the following two years with the aim of revitalising the Russian games industry; these were ridiculed by outside observers as ineffective and impossible to fulfill. Putin effectively legalized piracy, and ordered the creation of a "Russian Electronic Arts" and a game engine to compete with Unreal. In 2023 he also ordered the creation of a game console on par with the PlayStation 5 and Xbox in only three months. Kommersant reported that such a project would take a decade, and others have noted that restrictions on importing chips to Russia would make it even more challenging. Techdirt questioned how well Putin understands the game industry, given that he was 71 years old at the time of the console order.

Game development has continued within some Ukrainian studios during the war, though blackouts have disrupted operations. Nordcurrent's Dnipro office has continued development even after a bomb detonated fifty meters from the building and shattered the windows. Aurum Dust, a studio composed of a mixture of Ukrainians and Russians who are against the war, has continued working together despite the fighting.

Major layoffs

Embracer Group

Embracer Group made multiple layoffs, game cancellations, and studio closures between August 2023 and March 2024 after its $2 billion deal with the Saudi Public Investment Fund fell apart. The company reportedly reduced its headcount by 7,761, closed or divested 44 internal and external studios, and decreased the number of game projects by 80. The company later announced that it would be separated into three standalone companies by 2026.

Unity Technologies

The organization with the highest number of layoffs in the first year was Unity Technologies, with 2,900 jobs lost across several rounds; a significant proportion of the 16,000 losses sector-wide by January 2024.
On January 17, 2023, Unity Technologies laid off 284 employees as part of a reassessment of objectives, strategies, and priorities in response to current economic conditions. CEO John Riccitiello explained that the layoffs were meant to reduce overlap and shelve certain projects to ensure the company's future strength. On May 3, 2023, Unity announced plans to cut roughly 600 jobs, approximately 8% of its workforce. Additionally, Unity intended to reduce its global network of offices over the next few years from 58 to fewer than 30. On November 29, 2023, Unity announced an additional 265 layoffs, constituting 3.8% of its workforce, as part of a "company reset," according to Reuters. Most of the affected workers (256) were from the Wētā Digital division, which Unity had acquired in 2021 for $1.6 billion, along with several Wētā FX tools and 275 employees. The majority of the Unity layoffs occurred in the wake of a controversial pricing change termed the "runtime fee". The policy caused community backlash and a developer boycott. A number of studios announced that they were moving away from the engine permanently in the wake of the decision, and tools were developed to assist in porting existing projects away from Unity. The incident ultimately resulted in the resignation of Unity CEO John Riccitiello, as well as the leader of its engine division, Unity Create chief Marc Whitten.

Microsoft Gaming

On January 31, 2023, as part of broader Microsoft job cuts, 343 Industries laid off 95 employees following the "disappointing" launch of Halo Infinite's multiplayer mode. Bethesda Game Studios was also reportedly impacted by the layoffs. On January 25, 2024, Microsoft Gaming underwent significant restructuring, leading to 1,900 staff being laid off.
As part of this process, Blizzard Entertainment's president Mike Ybarra and co-founder Allen Adham departed from the company, while Blizzard's game Project Odyssey was canceled and major teams working on Overwatch 2 were affected. Microsoft Gaming studios, including Toys for Bob and Sledgehammer Games, saw staff reductions of over 30%, with most layoffs occurring at Activision Blizzard. On May 7, 2024, Microsoft Gaming closed three studios: Tango Gameworks, Arkane Austin, and Alpha Dog Games, and announced the merger of Roundhouse Studios into ZeniMax Online Studios. This move was part of a larger "reprioritization of titles and resources" to focus on high-impact games and new intellectual properties, resulting in the cessation of development on certain projects and the reassignment of teams within Bethesda and ZeniMax. However, Tango Gameworks was acquired by Krafton in August 2024, retaining about half of its developers and the Hi-Fi Rush property. On September 12, 2024, Microsoft Gaming CEO Phil Spencer announced that an additional 650 support and corporate roles would be eliminated.

Sony Interactive Entertainment

On October 31, 2023, Sony Interactive Entertainment announced additional layoffs affecting around 100 Bungie employees and disclosed delays for two upcoming titles: Marathon and the Destiny 2 expansion, The Final Shape. According to Bloomberg, the layoffs came weeks after executives revealed that Bungie's revenue was 45% lower than projected, which Bungie CEO Pete Parsons attributed to the underperformance of Lightfall. On February 27, 2024, Sony Interactive Entertainment announced the layoff of 900 employees across various studios, citing the need to restructure operations in response to the evolving economic landscape and changes in product development, distribution, and launch strategies. Layoff timelines will vary by location, and PlayStation's London Studio will be closed entirely.
On July 31, 2024, Sony announced further layoffs at Bungie, cutting 220 employees (17% of Bungie's workforce), while 155 employees were reassigned to other PlayStation Studios and around 40 moved to a new studio. Bungie CEO Pete Parsons acknowledged that the company had been overly ambitious and exceeded its financial safety margins, operating at a loss.

Electronic Arts

On March 29, 2023, Electronic Arts laid off 6 percent of its workforce as part of a strategic shift to reevaluate its investment strategy and reduce office space, according to a blog post by EA CEO Andrew Wilson. The layoffs were aimed at moving away from projects that did not contribute to EA's strategy, reviewing its real estate footprint, and restructuring some teams. While specific departments affected by the layoffs were not mentioned, efforts were made to provide opportunities for affected workers to transition onto other projects where possible. On February 28, 2024, EA announced the layoff of 670 staff members. Wilson outlined the company's focus on owned IP, sports, and massive online communities as part of its business advancement. Additionally, EA shut down Ridgeline Games and canceled a Star Wars single-player game developed by Respawn Entertainment. These cuts included 23 jobs at Respawn that were announced in March 2024.

Epic Games

On September 28, 2023, Epic Games announced a layoff affecting 16% of its workforce, or around 830 employees. The news was initially reported by Bloomberg before Epic Games published its internal memo online. CEO Tim Sweeney explained in an email to staff that the decision was due to the company's ongoing financial situation, stating that they had been spending more money than they were earning. Sweeney expressed optimism about navigating the transition without layoffs but acknowledged that, in retrospect, this was unrealistic.
Take-Two Interactive

On April 16, 2024, Take-Two Interactive announced plans to lay off 5% of its workforce and cancel several video game projects. The company cited a cost-reduction plan, anticipating total charges of $160 million to $200 million. These measures are expected to be largely implemented by December 31, 2024. Previously, Take-Two Interactive had stated that it was working on "significant cost reductions" but said it had no current plans for layoffs.

Riot Games

On January 22, 2024, Riot Games announced a significant restructuring, leading to the layoff of 530 employees, about 11% of the company's total workforce. The company also shut down its indie publishing label, Riot Forge. The decision was made as part of Riot's strategy to refocus on fewer, high-impact projects, aiming for a more sustainable future.

List of major layoffs

Canceled video games

Reactions

Media outlets

Some media outlets compared the 2023–2024 layoffs to the video game crash of 1983, when the US video game market collapsed due to an oversaturation of poorly made, low-quality games, causing the video game industry to enter a recession for two years. This has sparked discussions about a potential "second video game crash." Windows Central's article titled "Embracer Group is a prime example of bad consolidation" criticized Embracer Group for its frequent layoffs, studio closures, and personnel cuts. The closure of Volition, layoffs at Lost Boys Interactive, and the shutdown of Free Radical Design were highlighted as notable incidents.

Publishers

Both Microsoft and Sony have acknowledged that the current approach cannot continue and are exploring alternative business models. Microsoft Gaming CEO Phil Spencer addressed the stagnation in the gaming industry, recognizing its repercussions on job cuts and the challenging decisions faced by companies.
He underscored the importance of industry expansion for long-term sustainability, advocating a shift towards enlarging the player base rather than solely concentrating on extracting revenue from existing players. By prioritizing the growth of Xbox through attracting new players and nurturing creators, Spencer aims to ensure enduring strength and prosperity for the platform and the industry overall. When asked about the gaming layoffs, Spencer addressed both the broader industry trend and the unique aspects of Xbox's current business. He expressed concern over the lack of growth in the industry, highlighting the pressure on publicly traded companies to show growth to investors. This scrutiny often leads to cost-cutting measures when revenue growth is stagnant. Spencer emphasized the need for the industry to focus on regaining growth to ensure job security and career opportunities for professionals. Regarding Xbox's strategy, he discussed the importance of exclusivity and expanding the player base by making games available on multiple platforms. Spencer stated that every decision made by Xbox is aimed at strengthening the brand in the long run, even if not everyone agrees with those decisions. He also touched on the evolving nature of Xbox, stating that the brand is moving away from traditional exclusivity models to adapt to the preferences of younger audiences. Xbox, he said, aims to be a platform where players can find the games they want, regardless of the device they use, aligning with the accessibility and cross-platform trends seen among younger gamers. Sony Interactive Entertainment chairman Hiroki Totoki acknowledged the need to manage development costs better in PlayStation studios, recognizing industry-wide challenges like rising expenses and lengthy schedules.
Totoki emphasized sustainable profitability and transparently addressing challenges, while highlighting the significance of first-party titles achieving growth across platforms. Wes Keltner, CEO of Gun Interactive, expressed concern about the shrinking space for creative and innovative ideas from small game development teams. Keltner noted a lack of funding for indie projects, leading to promising ideas being abandoned at the prototype stage. He highlighted the trend of mergers and acquisitions (M&A) leading to larger studios but diminishing creative freedom, and emphasized the notion that risk is a driving force behind creativity in the gaming industry.

Game developers

In response to layoffs in the gaming industry, developers expressed a mixture of frustration, disillusionment, and concern about the future. Many felt blindsided by the layoffs, especially when they were told the reasons were related to underperforming games or unsustainable costs. Some developers pointed out the disconnect between management decisions and the realities of game development, such as overscoping projects or investing in risky technologies without clear strategies. There was also criticism of how layoffs were handled, with some developers feeling that companies prioritized executive salaries and unnecessary expenses over investing in game development. There were instances where studios spent extravagantly on events or office perks shortly before laying off a significant portion of their workforce, leading to feelings of betrayal among employees. Developers highlighted broader industry trends contributing to the instability, such as the increasing reliance on outside investors and shareholders who prioritize short-term profits over long-term sustainability. The pandemic exacerbated these issues but was not solely responsible for the ongoing wave of layoffs.
Overall, developers expressed deep concern about the future of the industry and the toll these layoffs were taking on morale and creativity. Many feared that the current instability could have long-lasting consequences for both individuals and the industry as a whole. At Game Developers Conference 2024, Epic Games staff organised a "GDScream", where a large number of developers gathered in a park to scream at the sky in "a moment of pure catharsis". The trade show more broadly featured many speeches from award winners about the state of the industry. Dinga Bakaba, the studio head of Arkane Lyon, publicly criticized Microsoft Gaming executives for their decision to close several studios. He emphasized the importance of taking care of the artists and entertainers whose work creates value for corporations in the video game industry.

The Game Awards

On December 12, 2024, The Game Awards 2024 introduced the inaugural Game Changers Award, which was presented to Amir Satvat. Satvat was recognized for his efforts in assisting individuals who have lost their jobs by helping them find new opportunities. The presentation of the award received a standing ovation, noted as one of the most significant moments of the event.

Future

Unionization

Unions are relatively rare in the video game industry, but after several public scandals involving abuse, sexism, layoffs, and overwork, some game workers have developed a keen interest in organizing in the last few years. After starting the process in April, employees at Sega of America's Irvine, California headquarters filed to become unionized with the Communications Workers of America on July 10, 2023. In July, the union election was won by the Allied Employees Guild Improving Sega (AEGIS), with 91 votes in favor and 26 against.
More than 200 positions in a range of areas, such as marketing, games as a service, localization, product development, and quality assurance, will be covered by the union. On October 6, 2023, over 100 developers at Avalanche Studio Group unionized. After experiencing layoffs, some workers at CD Projekt Red formed a union on October 9, 2023. According to the union, these layoffs caused significant stress and insecurity among workers, leading to the need for better protection and representation. The union aims to provide more security, transparency, and a stronger voice for workers in times of crisis, believing that mass layoffs pose a threat to the gaming industry and that unionizing is crucial for preserving its potential. The union said its priority was to give CD Projekt Red staff a voice in company decision-making, with a view to increasing employment stability; it also wants to help workers' voices be heard on working conditions "in the long run." On December 5, 2023, 300 quality assurance workers at ZeniMax Media announced that they were organizing a union. Additionally, a labor neutrality agreement was announced in June 2023 by Microsoft and the Communications Workers of America (CWA). Under this deal, Activision Blizzard employees were entitled to freely form a union, and Microsoft promised to acknowledge and support that union. On March 8, 2024, 600 workers from Activision's QA team joined the CWA, establishing the largest game developer union in North America.

Growth

Despite the layoffs, studio closures, and cancellations of video game projects, as well as high inflation, the video game market remains robust. Many investors and industry analysts believe that the video game industry will fully recover in 2025 with major releases like Grand Theft Auto VI, Monster Hunter Wilds, Ghost of Yōtei, Fable, Doom: The Dark Ages, Pokémon Legends: Z-A, and others.
Investors also expect Nintendo to release its new hardware, which will boost video game sales and revenue. Mat Piscatella, executive director of Circana (The NPD Group), stated that consumer demand remains strong, but consumers are under pressure due to economic challenges. Some parts of the industry, such as mobile, are already growing and in a healthy position, and Piscatella believes that other segments will follow suit in 2025. According to a 2024 PwC report, the global gaming industry is expected to reach a value of $321 billion by 2026. Deloitte predicts that the share of theatrical box office revenues from video game intellectual property (IP) will double by 2025. Additionally, most major video streaming platforms are expected to include shows based on popular games. Another report, by GlobalData, suggests that the video games market could become a $300 billion industry by 2025, with mobile gaming and innovative offerings among the factors contributing to this growth. Bain & Company predicts that global gaming revenue could surge by over 50% in the next five years.

Juniors and long-term effects

The layoffs affected junior staff in greater numbers than other skill levels, and in some cases juniors were specifically targeted. As the industry was recruiting too few juniors to begin with, there are long-term concerns for skills development, diversity, and the viability of the games industry as a career path for young developers. The layoffs were demoralising for juniors, and around a third of those who were laid off left the industry entirely. A level designer interviewed by PC Gamer commented: "As a Junior who put blood, sweat, and tears into obtaining my first role in the industry, I am now back again going in circles looking for roles that are junior level (which is non-existent as every job posting I see is either Senior, Principal, or Lead)." The number of junior positions available has been low for years, but fell dramatically during the period.
In 2022, 9.4% of available games jobs in the United Kingdom were at the junior level. By 2023 this had fallen to 2.9%, with only 34 junior positions nationwide over the year. The junior figure had partially recovered to 7% by 2024, but there were no apprenticeships sector-wide for the entire year. The few junior jobs available are fiercely competed for; XR Games advertised four junior positions in 2024 and received 18,000 applications. Grads in Games, a major route into the industry for juniors in the UK, was placed on hiatus in 2024 due to the lack of entry-level hiring. Industry support for the program had been in place for a decade but now "just doesn't exist". The industry's failure to hire and train new workers is exacerbating existing skills shortages at the senior level, as there are not enough staff progressing through the field and moving up to higher positions. A developer interviewed by Digiday remarked that by recruiting only existing senior-level talent, there may be "no new generation of seniors." In the UK in particular, the first wave of developers are now starting to retire, leaving fewer senior staff to train any new juniors, making the current skills structure in the UK games industry unsustainable. As juniors are more likely to be women or from minority groups such as LGBT demographics, this practice has also had a negative effect on diversity in the industry.

See also

Impact of the COVID-19 pandemic on the video game industry
Video games in the United States
Video game crash of 1983

References

Video game labor relations Video game development 2023 in video gaming 2024 in video gaming History of video games Termination of employment 2023 in labor relations 2024 in labor relations
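Bain & Company's projection in the Growth section above (global gaming revenue rising by over 50% within five years) implies a compound annual growth rate that is easy to back out. The sketch below is plain arithmetic; no figures beyond the quoted projection are used.

```python
def implied_cagr(total_growth: float, years: int) -> float:
    """Compound annual growth rate implied by cumulative growth over a period."""
    return (1 + total_growth) ** (1 / years) - 1

# 50% cumulative growth over five years:
print(f"{implied_cagr(0.50, 5):.1%} per year")  # roughly 8.4% per year
```

In other words, the "over 50% in five years" headline corresponds to sustained annual growth of around 8–9%.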
2023–2024 video game industry layoffs
https://en.wikipedia.org/wiki/Leratiomyces%20percevalii
Leratiomyces percevalii, commonly known as the mulch maid, is a medium-sized saprobic mushroom. Its cap is honey yellow to dingy olive in color and convex, becoming broadly bell-shaped. Its gills are adnexed to shortly decurrent and whitish to purplish gray or purple-black. It is common in woodchips, fields, and urban waste spaces.

References

Strophariaceae Fungus species Fungi described in 1879 Taxa named by Miles Joseph Berkeley Taxa named by Christopher Edmund Broome
Leratiomyces percevalii
https://en.wikipedia.org/wiki/International%20Panel%20on%20the%20Information%20Environment
The International Panel on the Information Environment (IPIE) is an international consortium of over 250 experts from 55 countries dedicated to providing actionable scientific knowledge on threats to the global information environment. The organization has been compared with the Intergovernmental Panel on Climate Change, as well as CERN and the IAEA, because it uses the model of scientific panels and neutral assessments to identify points of consensus or gaps in knowledge. The IPIE was legally registered as a charitable entity in the Canton of Zurich, Switzerland in 2023.

Panels

The first panel was a Scientific Panel on Global Standards on AI Auditing, chaired by Professor Wendy Chun and Professor Alondra Nelson. At the UN Summit of the Future in September 2024 the IPIE announced the formation of a Scientific Panel on Information Integrity about Climate Science, a Scientific Panel on Child Protection and Social Media, and a Scientific Panel on AI and Peacebuilding.

Origins

The concept was proposed in 2021 during the first Nobel Prize Summit organized by the US National Academy of Sciences and the Nobel Foundation, involving Dr. Sheldon Himelfarb, then head of PeaceTech Lab, and Professor Philip N. Howard, then a professor at Oxford University and director of the Oxford Internet Institute. In September 2022 thirty scientists met at Oxford University to develop a mission statement, organizational structure, and process for developing scientific consensus. This chartering group included researchers from the social, behavioral and computer sciences. Over time, similar calls to create this independent body have come from public science agencies, civil society, philanthropy, and the technology firms themselves.
Some proposals focused exclusively on AI, others on a host of technology-related harms, but there has been strong consensus that the body would need financial independence from technology firms and governments, could not be credibly managed by a steering committee of nation states, and would not function effectively within the UN system. A larger group of scientists convened in Costa Rica in February 2023 to continue planning. In May 2023 the IPIE was publicly launched during the Nobel Prize Summit in Washington, DC. A New York Times report on the Panel's launch described its initial plans to "issue regular reports, not fact-checking individual falsehoods but rather looking for deeper forces behind the spread of disinformation as a way to guide government policy."

Management

The CEO of the IPIE is Dr. Philip N. Howard, who is also the director of Oxford University's Programme on Democracy and Technology. Jenny Woods is the Executive Director and COO of the IPIE, which has a secretariat based in Zurich. The organization is governed by a small Board of Trustees, a system of permanent methodology, ethics and membership committees, and limited-term scientific panels on particular topics. Dr. Sheldon Himelfarb is co-founder and chair of the IPIE Board of Trustees. The organization is neutral and nonpartisan, but does seek better access to data from technology companies so as to better appraise the impact of new technologies like AI on public life.

References

Notes

Working groups Scientific organisations based in Switzerland Organisations based in Zurich International organizations based in Europe International research institutes Misinformation Disinformation Internet Deepfakes Generative artificial intelligence Media studies AI safety Artificial intelligence conferences Regulation of artificial intelligence Research institutes in Switzerland Science and technology in Europe Science diplomacy
International Panel on the Information Environment
https://en.wikipedia.org/wiki/Evelyna%20Bloem%20Souto
Evelyna Bloem Souto (1926 – 11 August 2017) was the only woman in the first class of the civil engineering course at the University of São Paulo in São Carlos, Brazil. She overcame considerable prejudice against women in engineering to build a successful academic career.

Early life

Evelyna Bloem Souto was born in 1926 in São Paulo. Her interest in civil engineering emerged during her childhood. Her father, Theodoreto de Arruda Souto, was the first director of the Escola de Engenharia de São Carlos da Universidade de São Paulo (EESC) (School of Engineering at the University of São Paulo), between 1952 and 1967. When her father met with friends, the young Evelyna would listen with interest to their conversations about engineering.

Education

Bloem Souto started her undergraduate studies at the Escola Politécnica da Universidade de São Paulo. In 1957, her third year of college, she transferred to the University of São Paulo at São Carlos. Whilst studying on a scholarship in France, she was made to dress as a man, wear galoshes, pin back her hair, and draw a beard and moustache on her face so that she would be allowed on the work site of a tunnel alongside 10 male students. She agreed to participate as she really wanted to inspect the project, expecting to work on tunnels back in Brazil.

Career

After graduation, Bloem Souto pursued and achieved a PhD. During her academic career she took part in more than 60 conferences around the world and received scholarships to develop research at other universities, including Harvard. During the creation of a Geology and Soil Mechanics department, which she played a significant role in developing, the chairman made her take on the role of librarian so that "nobody would know I was an engineer. But I managed to carve out my own space and it wasn't long before I became head of everything".
She acted as a guide to the department when the then President of the Republic of Brazil, Juscelino Kubitschek, and the then Governor of the State of São Paulo, Jânio Quadros, visited. Bloem Souto taught geotechnics at EESC and remained a professor in the department for the rest of her working life. She played an essential part in the School from its inception through its development and long-term management, contributing to the institution becoming a national reference in the field of engineering.

Death

Evelyna Bloem Souto died on 11 August 2017. A mass was held in her memory on 17 August 2017 at the church of Nossa Senhora do Perpétuo Socorro (Our Lady of Perpetual Help) in São Paulo.

References

1926 births 2017 deaths Civil engineers 20th-century Brazilian engineers Brazilian women academics Brazilian women engineers University of São Paulo alumni Academic staff of the University of São Paulo Geotechnical engineers 20th-century Brazilian women scientists 20th-century Brazilian women educators 20th-century Brazilian educators 20th-century women engineers
Evelyna Bloem Souto
https://en.wikipedia.org/wiki/Mid-Holocene%20hemlock%20decline
The mid-Holocene hemlock decline was an abrupt decrease in eastern hemlock (Tsuga canadensis) populations noticeable in fossil pollen records across the tree's range. It has been estimated to have occurred approximately 5,500 calibrated radiocarbon years before 1950 AD. The decline has been linked to insect activity and to climate factors. Post-decline pollen records indicate changes in other tree species' populations after the event and an eventual recovery of hemlock populations over a period of about 1,000–2,000 years at some sites.

Causes

Some earlier studies on this event link it to insect outbreaks (e.g. the hemlock looper), while more recent research has argued for climate changes as the driving factors in this decline. Evidence used to point towards an insect outbreak includes the sudden nature of the event and the debated assertion that similar trends were not shown in other species. Fossil evidence used to support the insect pathogen argument includes the presence of fossil hemlock looper and spruce budworm head capsules, and more prevalent than normal macrofossil hemlock needles with evidence of feeding by the hemlock looper. Arguments for climate changes as the driving factor of this event include linking the decline in hemlock fossil pollen to trends from other tree species and to lake-level reconstructions, from sediment cores and ground-penetrating radar, that indicate a change to drier conditions. These climate changes may have been associated with shifts in atmospheric and ocean circulation. While its causes have been debated, this event may be used to provide insight into how modern forests may respond to pathogen outbreaks or to anthropogenic climate change.

Post-decline dynamics

Increases in the fossil pollen of other tree species such as birch have been found at some sites following the decline in hemlock pollen.
In some areas, hemlock fossil pollen indicates a recovery of the population over a period of about 1,000–2,000 years after the decline, while in other areas, fossil pollen indicates that the hemlock population never fully recovered or that forest composition was permanently altered following the event. A continuation of drought conditions may have delayed hemlock recovery in some areas.

References

Paleoecology
Mid-Holocene hemlock decline
https://en.wikipedia.org/wiki/WD%200032%E2%88%92317
WD 0032−317 is a low-mass white dwarf star orbited by the brown dwarf WD 0032−317 b.

WD 0032−317

The white dwarf WD 0032−317 is located about 1,400 light years from Earth. WD 0032−317 formed about three billion years ago when a low-mass star (possibly of 1.3 solar masses) expanded into its red giant phase. The star then blew out its outer layers, leaving behind its helium-rich core, which is WD 0032−317.

WD 0032−317 b

The orbiting brown dwarf, WD 0032−317 b, was massive enough to survive its host's red giant phase. It is an extremely hot and very large (75–88 Jupiter masses) brown dwarf that orbits WD 0032−317, completing one orbit in only 2.5 hours. This object is tidally locked to its star, with a day side far hotter than its night side, making its day-side temperature equivalent to that of a planet orbiting close to a late-stage B-type star. The intense ultraviolet (UV) exposure can break down the molecules in WD 0032−317 b's atmosphere and vaporize materials from the surface of the brown dwarf.

References

White dwarfs Sagittarius (constellation) Brown dwarfs Planetary systems with one confirmed planet
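As a rough check on the figures above, Kepler's third law relates the 2.5-hour period to the orbital separation. The sketch below is illustrative only: the period and the 75–88 Jupiter-mass range come from the text, while the ~0.4 solar-mass value for the white dwarf is an assumed figure for a low-mass white dwarf, not stated in the article.

```python
import math

# Illustrative separation estimate from Kepler's third law:
# a^3 = G * M_total * P^2 / (4 * pi^2)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg
R_SUN = 6.957e8    # solar radius, m

P = 2.5 * 3600                       # 2.5-hour orbital period, in seconds
M_total = 0.4 * M_SUN + 80 * M_JUP   # assumed white dwarf + brown dwarf mass

a = (G * M_total * P**2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"separation ~ {a / R_SUN:.2f} solar radii")
```

Under these assumptions the two bodies are separated by well under one solar radius, consistent with the extreme irradiation and tidal locking described above.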
WD 0032−317
https://en.wikipedia.org/wiki/Iain%20Buchan
Iain Edward Buchan is a public health physician, data scientist and academic. He holds the W.H. Duncan Chair of Public Health Systems and is Associate Pro Vice Chancellor for Innovation at the University of Liverpool. Buchan's research focuses on health data science and informatics to enable better prevention, early intervention, and value of care for patients and populations. He has written 337 articles, and his work has been cited over 26,000 times according to Google Scholar. He is best known for leading the world's first evaluation of mass rapid antigen testing, and the first realistic risk-mitigated reopening of mass events, during the UK's response to the COVID-19 pandemic. He also developed the Civic Data Cooperative, which resulted in the Combined Intelligence for Population Health Action (CIPHA) system during the pandemic. He is the recipient of the HTN Health Tech Award, the Alwyn-Smith Medal, and the Florence Nightingale Award. Buchan is a Fellow of the Faculty of Public Health, the American College of Medical Informatics, the British Computer Society and the Faculty of Clinical Informatics. He has also been an advisor to UK, European and international health policy groups, AstraZeneca, and research organizations including UKRI, the Wellcome Trust and the UK National Institute for Health and Care Research (NIHR), for which he is a Senior Investigator.

Education and early career

In the 1980s, Buchan pursued medical training alongside studies in pharmacology and statistical software development. As an undergraduate, he published the first version of a statistical package called "StatsDirect." During the 1990s, as a junior doctor, he researched care pathways, health system dynamics, and care inequities. Later, he trained as a public health consultant while conducting research in medical informatics and pursuing doctoral studies in computational statistics.

Career

Buchan began his academic career in 1992 as an Honorary Clinical Lecturer at the University of Liverpool.
He then served as a Research Associate in Medical Informatics at the University of Cambridge in 1996 and Senior Research Fellow in Medical Informatics at Wolfson College, Cambridge in 1997, before training as a Consultant in Public Health. In 2003, he joined the University of Manchester as a Clinical Senior Lecturer in Public Health Intelligence and was promoted in 2008 to Clinical Professor in Public Health Informatics. There, from 2003 to 2017, he founded the Health eResearch Centre and co-directed the Farr Institute. In the E-Science movement of the early 2000s, he conceived e-Labs and Research Objects, leading to today's Trusted Research Environments and applications in healthcare. At Manchester, he also invented the FARSITE system, helping spin out NW eHealth, and started the #DataSavesLives movement and the Connected Health Cities project. Subsequently, Buchan served as Director of Healthcare Research at Microsoft Research Cambridge in 2017–2018, producing two patents and furthering the health avatar framework he had conceived eight years earlier. In 2018, Buchan returned to Liverpool as the University of Liverpool's first chair in Public Health and Clinical Informatics. From 2019 to 2022, he was the founding Executive Dean of the Institute of Population Health at Liverpool, whilst leading research responses to the COVID-19 pandemic. Since 2022, he has been developing multidisciplinary research partnerships, especially in health technology, as Associate Pro Vice Chancellor for Innovation. Research Buchan's research areas encompass public health, data science, clinical informatics, epidemiology, and biostatistics. In particular, he has published in areas related to public health challenges, such as inequalities, obesity, mental health and pandemic resilience, and in methodology, including machine learning in epidemiology, research objects in e-science, learning health systems, and the concept of a digital twin/health avatar for healthcare. 
COVID-19 response and data-intensive public health research Buchan led the world's first evaluation of voluntary mass testing for the SARS-CoV-2 antigen with lateral flow devices, working with the British Army, local and national government, public health agencies and the UK's National Health Service. This work provided quick proof that lateral flow devices worked as expected to detect people infected with the COVID-19 virus, whether or not they had symptoms. Responding to media debate over the reliability of lateral flow devices, he clarified the evidence regarding a public health test versus a clinical test for COVID-19. The impact of this testing was that COVID-19 hospital admissions fell by 43% initially and 25% overall. The BMJ asked him and colleagues for an accompanying methodology paper on the data analysis as a blueprint of best practice. The UK's universal access community testing policy was shaped by this work, including its demonstration of inequalities in testing uptake and barriers such as digital poverty. He had also formulated a test-to-release daily testing alternative to quarantine for close contacts of cases, which resulted in the Daily Contact Testing policy. He also researched COVID-19 and informed policies in other contexts including care homes, hospitals, schools, and vaccination. In Spring 2021, Buchan applied previous testing and other COVID-19 risk mitigation research to address the issue of young people being vaccinated last and missing out on social development opportunities due to the continued lockdown of significant cultural events. He therefore led a city-scale reopening (after COVID-19 lockdowns) of a cluster of business, nightclub and music festival events, resulting in minimal SARS-CoV-2 transmission, high levels of enjoyment, and low levels of fear over risks, and demonstrating the effectiveness of collaborative strategies for health security at mass cultural gatherings. 
Public health and data science Buchan's research has underscored the importance of trust in health data utilization, highlighting transparency, consent, and public involvement, with a specific focus on the role of national governments in the reuse of health data. Building on earlier work in civic data linkage and public health intelligence, he established the first Civic Data Cooperative in Liverpool in late 2019, and put a National Grid of Civic Data Cooperatives forward to the UK Government as a means of improving health system innovation and resilience. Buchan engaged machine learning researchers from Microsoft Research in the field of epidemiology, leading to discoveries pertaining to asthma and allergies. Most recently, he formed the Mental Health Research for Innovation Centre of the UK Government's Mental Health Mission. Buchan conducted research on other health data science directions, including Trusted/Trustworthy Research Environments with Research Objects and eLab networks, to improve research reproducibility and tackle the widespread problem of calibration drift in clinical prediction models. He drew attention to the problem of multimorbidity and the need for a unified modelling approach, not only for discovery science but also for personalized care via interactive Health Avatars. Some of Buchan's most highly cited papers arose from applications of his statistical software to public health problems. He has worked to make better use of routine health record data with combined biostatistics and machine learning approaches to predicting clinical outcomes. Buchan's data science research has focused on addressing public health challenges, including obesity, inequalities, mental health, and pandemics. He raised warnings about obesity among pre-school children using routinely collected data, then drew attention to the high burden of cancer attributable to obesity, and highlighted the challenges of using consumer technology data to understand weight control. 
He drew attention to the excess of premature deaths in the North of England compared with the South and the need for regional growth incentives. Awards and honors 2012 – Fellow, American College of Medical Informatics 2014 – Manchester Ambassador, Greater Manchester Combined Authority 2017 – Fellow, British Computer Society 2017 (renewed 2023) – Senior Investigator, National Institute for Health and Care Research 2021 – Best Use of Health Data, HTN Health Tech Awards 2022 – Healthcare Project of the Year, BioNoW 2022 – Alwyn-Smith Medal, The Faculty of Public Health 2023 – Florence Nightingale Award, Royal Statistical Society Selected articles References Public health researchers Health informatics Data scientists Computational statistics British public health doctors Alumni of the University of Liverpool Alumni of the University of Cambridge Academics of the University of Liverpool National Institutes of Health Year of birth missing (living people) Living people
Iain Buchan
[ "Mathematics", "Biology" ]
1,705
[ "Computational statistics", "Computational mathematics", "Health informatics", "Medical technology" ]
74,688,604
https://en.wikipedia.org/wiki/Sky%20crane%20%28landing%20system%29
Sky crane is a soft landing system used in the last part of the entry, descent and landing (EDL) sequence, developed by NASA's Jet Propulsion Laboratory for its two largest Mars rovers, Curiosity and Perseverance. While previous rovers used airbags for landing, both Curiosity and Perseverance were too heavy to be landed this way. Instead, a landing system that combines parachutes and a sky crane was developed. The sky crane is a platform with eight engines that lowers the rover on three nylon tethers until the soft landing. EDL begins when the spacecraft reaches the top of the Martian atmosphere. Engineers have referred to the time it takes to land on Mars as the "seven minutes of terror." Background The first NASA rover, Sojourner (on the Mars Pathfinder lander), and the twin rovers Spirit and Opportunity, used a combination of parachutes, retrorockets, and airbags for landing. Curiosity, launched in 2011, weighs nearly 900 kg and was too heavy to be landed this way, as the airbags needed for it would be too heavy to be launched on a rocket. Instead, a landing system that combined a protective aeroshell, supersonic parachutes, and a sky crane was developed by the Jet Propulsion Laboratory (JPL) under Adam Steltzner. The sky crane is "an eight-rocket jetpack attached to the rover". This system is also much more precise: while the Mars Exploration Rovers could have landed anywhere within their respective 93-mile by 12-mile (150 by 20 kilometer) landing ellipses, Mars Science Laboratory landed within a 12-mile (20-kilometer) ellipse. Mars 2020 has an even more precise system, with a landing ellipse of 7.7 by 6.6 km. The Curiosity team invented the sky crane system by studying the old Viking landing system—its engines are "an upgraded 'reinvention' of Viking’s throttleable engines"—and landing experience from previous rovers. The sky crane works much like a helicopter, and the team even consulted with Sikorsky Skycrane helicopter engineers and pilots. 
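The quoted landing-ellipse figures can be turned into a rough area comparison. This is an illustrative sketch only: the article gives the axis lengths, while treating Curiosity's 20-kilometer ellipse as circular is an assumption (its minor axis is not stated above).

```python
import math

def ellipse_area(major_km, minor_km):
    # Area of an ellipse from its full major and minor axis lengths.
    return math.pi * (major_km / 2) * (minor_km / 2)

# Landing ellipses quoted in the text:
mer = ellipse_area(150, 20)     # Mars Exploration Rovers: 150 x 20 km
msl = ellipse_area(20, 20)      # Curiosity: ~20 km ellipse, treated as circular here
m2020 = ellipse_area(7.7, 6.6)  # Perseverance: 7.7 x 6.6 km

print(round(mer), round(msl), round(m2020))  # 2356 314 40
```

On these assumptions, the target area shrinks from roughly 2,360 km² for the Mars Exploration Rovers to about 40 km² for Perseverance, which makes the precision gain concrete.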
Curiosity Curiosity was the first rover landed using the sky crane maneuver. Following the parachute braking, the rover and descent stage dropped out of the aeroshell. The descent stage is a platform above the rover with eight variable thrust monopropellant hydrazine rocket thrusters on arms extending around this platform to slow the descent. Each rocket thruster is called a Mars Lander Engine (MLE). A radar altimeter measured altitude and velocity, feeding data to the rover's flight computer. Meanwhile, the rover transformed from its stowed flight configuration to a landing configuration while being lowered beneath the descent stage by the sky crane system. This system consists of a bridle lowering the rover on three nylon tethers and an electrical cable carrying information and power between the descent stage and rover. As the support and data cables unreeled, the rover's six motorized wheels snapped into position. Once the rover was fully lowered below the descent stage, the sky crane system slowed to a halt and the rover touched down. After the rover touched down, it waited two seconds to confirm that it was on solid ground by detecting the weight on the wheels, then fired several pyrotechnic fasteners activating cable cutters on the bridle and umbilical cords to free itself from the descent stage. The descent stage then flew away to a crash landing at a safe distance. Perseverance The sky crane system was further updated for the Perseverance rover, which is heavier than its predecessor, weighing 1,025 kg. During the atmospheric entry, the spacecraft jettisoned the lower heat shield and deployed a parachute from the backshell to slow the descent to a controlled speed. This happened about 240 seconds after entry, at an altitude of about 7 miles (11 kilometers) and a velocity of about 940 mph (1,512 km/h). The EDL sequence gained new Terrain-Relative Navigation technology, which uses a special camera to quickly identify features on the surface. 
These features are then compared to an onboard map to determine exactly where the rover is heading. Mission team members have mapped in advance the safest areas of the landing zone. If Perseverance can tell that it is headed for more hazardous terrain, it picks the safest spot it can reach and gets ready for the next step. Once the craft had slowed and neared the surface, the rover and sky crane assembly detached from the backshell, and rockets on the sky crane controlled the remaining descent to the planet. As the descent stage levels out and slows to its final descent speed of about 1.7 miles per hour (2.7 kilometers per hour), it initiates the sky crane maneuver. With about 12 seconds before touchdown, at about 66 feet (20 meters) above the surface, the descent stage lowers the rover on a set of cables about 21 feet (6.4 meters) long until touchdown is confirmed, then detaches the cables and flies away to avoid damaging the rover. Meanwhile, the rover unstows its mobility system, locking its legs and wheels into landing position. Perseverance successfully landed on the surface of Mars on 18 February 2021 at 20:55 UTC. Ingenuity reported back to NASA via the communications systems on Perseverance the following day, confirming its status. NASA also confirmed that the on-board microphone on Perseverance had survived EDL, along with other high-end visual recording devices, and released the first audio recorded on the surface of Mars shortly after landing, capturing the sound of a Martian wind. References External links "The Martian Chroniclers" at The New Yorker Aerospace engineering Flight phases Mars Science Laboratory Mars 2020
Sky crane (landing system)
[ "Engineering" ]
1,170
[ "Aerospace engineering" ]
74,694,081
https://en.wikipedia.org/wiki/Maurice%20Henri%20L%C3%A9onard%20Pirenne
Maurice Henri Léonard Pirenne (30 May 1912, Verviers – 11 October 1978, Oxford) was a Belgian scientist known for his work in vision physiology. Early life and education Pirenne was born to Maria (née Duesberg) and artist Maurice Lucien Henri Joseph Marie Pirenne on 30 May 1912 in Verviers, Belgium. His uncles were medievalist historian, Henri Pirenne and anatomist and cytologist . Pirenne's lifelong interest in drawing and painting, nurtured by his artist father, underscored his fascination with the convergence of visual physiology and artistic expression. While still at school he read Brücke and Helmholtz on the optics of painting. Scientist After earning his Doctor of Science degree from Liège in 1937 and supported by a grant from the Belgian government, he engaged in a year of research in molecular physics under Peter Debye's mentorship, attending seminars led by Victor Henri in which he established connections with significant fellow students. A pivotal phase of his career was the next three years, 1938–40, spent at Columbia University in New York as a Fellow of the Belgian American Educational Foundation, where he collaborated with Selig Hecht to explore the biophysics of vision. With Hecht, Pirenne investigated iris contraction in the nocturnal long-eared owl in reaction to infrared radiation. This experience significantly influenced his future devotion to the biophysics of vision. Visual perception After experiments that they reported to the American Association for the Advancement of Science, attracting attention in the media, a joint paper authored by Hecht, Shlaer, and Pirenne in 1942 marked a turning point in the understanding of visual perception near the absolute threshold level by measuring the minimum number of photons the human eye can detect 60% of the time. 
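The 1942 threshold measurement rested on treating photon absorption as a Poisson process: a flash is "seen" when at least some criterion number of quanta is absorbed, so the fraction of flashes seen rises with flash intensity. A minimal sketch of such a frequency-of-seeing curve follows, using an illustrative criterion of 6 quanta rather than Hecht, Shlaer and Pirenne's actual fitted values:

```python
import math

def fraction_seen(mean_quanta, criterion):
    # P(X >= criterion) for X ~ Poisson(mean_quanta): the probability
    # that at least `criterion` quanta are absorbed from one flash.
    p_below = sum(math.exp(-mean_quanta) * mean_quanta**k / math.factorial(k)
                  for k in range(criterion))
    return 1.0 - p_below

# Hypothetical criterion of 6 absorbed quanta (historical estimates were
# in the range of roughly 5-8); the curve steepens as intensity grows:
for mean in (2, 4, 6, 8, 10):
    print(mean, round(fraction_seen(mean, 6), 2))
```

The shape of this curve against flash intensity is what allowed the criterion to be inferred from psychophysical data alone, since the physical fluctuations of the quanta dominate the variability.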
This paper highlighted that the perceived variability, previously attributed to biological causes, predominantly stemmed from physical fluctuations in the small quantity of light quanta absorbed by the visual photo-pigment. Pirenne's subsequent research revolved around the visual threshold and its correlation with visual acuity. England During the Second World War, from March 1941, he had to break with science and join the Belgian Forces marshalled in Canada as a reserve officer, and in June of that year he was in Great Britain as secretary-treasurer of the Central Welfare Committee of the Belgian Land Forces. On his return to England, Pirenne's intricate neurophysiological studies of 'on' and 'off' neuronal units and their interactions found practical application in screening military personnel for night blindness, which he carried out there until 1945. Pirenne employed his investigations of the senses in a physiological approach to the philosophical mind-body problem. He held academic positions in Cambridge and was appointed ICI research fellow at London University in 1945, during which time he published The diffraction of X-rays and electrons by free molecules (1946). He then moved to Aberdeen, where he lectured in physiology from 1948 to 1955 while continuing to write on his investigation of visual thresholds, before joining the University Laboratory of Physiology at Oxford in 1955. His appointment as a fellow of Wolfson College recognised his teaching methods, remembered for their hands-on demonstrations and pragmatic approach, based on his meticulous preparation. Engaged briefly to Margaret Billinghurst in 1946, Pirenne married, on 16 May 1947, Katherine ('Kathy') Alice Mary Clutton, born in Devonport, and they remained partners until the end of his life. In 1948 he was naturalised as a British citizen. Publications Pirenne published on the relation of optics to art, notably in the 1952 essay "The scientific basis of Leonardo da Vinci's theory of perspective." 
His 1970 work, Optics, Painting and Photography, investigated optical and perspective effects in trompe-l'oeil art and photography, analysed through imagery from a pinhole camera. In it he notably refutes Erwin Panofsky's claim that, due to the curvature of the retina, the geometrical construction of perspective (which provides an image on a plane) does not correspond to what is actually perceived and should also use curves, to which Pirenne responds: ...the fact that the retina, and perforce the retinal image, are curved [...] has led some authors to the idea that a truly 'physiological' perspective should consist of some kind of pseudo-development upon the picture plane of an image curved in shape like the retinal image, which allegedly would lead to systems of 'curvilinear perspective'. But, first, the retinal image is not what we see: what we see is the external world. Secondly, the geometrical construction of such a pseudo-development remains obscure--unless it leads back to central, 'rectilinear', perspective. It would be pointless to reiterate the argument that central perspective, in which straight lines are never projected as curves on a plane, is the only method which is capable of producing a retinal image having the same shape as the retinal image of the actual objects depicted. Pirenne's final publication in 1975, titled Vision and Art, continued his explorations of the relationship between visual perception and its artistic interpretation. Legacy Amongst his eighty publications, Pirenne's 1948 Vision and the Eye remained an authoritative and accessible introduction to the subject. His stature as an international authority in visual physiology was affirmed through recognition such as a Doctor of Science degree from Cambridge in 1972 and his appointment as a Foreign Member of the Royal Belgian Academy of Sciences. He died in Oxford on 11 October 1978. 
References Belgian scientists 1912 births 1978 deaths People from Verviers Vision scientists Belgian physicists Vision Belgian physiologists Biophysics
Maurice Henri Léonard Pirenne
[ "Physics", "Biology" ]
1,177
[ "Applied and interdisciplinary physics", "Biophysics" ]
74,695,125
https://en.wikipedia.org/wiki/Deltek
Deltek is an American multinational enterprise software and information solutions corporation headquartered in Herndon, Virginia. The company sells software to government contractors and to engineering, architectural, accounting, and consulting firms to manage customer information, financial and project accounting, project management, risk management, enterprise resource planning, invoicing, revenue, financial compliance, and expenses. Since 2016, its parent company has been Roper Technologies. Bob Hughes is Deltek’s president and CEO. History Deltek was founded in 1983 as Deltek Systems by father and son Donald and Kenneth E. deLaski (Deltek was short for deLaski Technologies). The company had a successful IPO in 1997 and was publicly traded until 2002. After the tech bubble burst in the early 2000s, the company went private again. Then, in 2005, the private equity firm New Mountain Capital bought 75% of Deltek’s shares. In 2006, the company had around 11,000 customers, including Bechtel Corp., Hellmuth, Obata & Kassabaum, and Verizon. Deltek went public for the second time in 2007, raising $162 million in its initial public offering (IPO). New Mountain reduced its stake in the company from 75% to around 59%. In July 2012, Deltek put itself up for sale. At the time, the company had a market value of around $800 million. In August, the company was bought by Thoma Bravo LLC for $1.1 billion and again went private. In 2013, Michael Corkery was named president and CEO, and Michael Krone was named the CFO. In July 2016, Deltek acquired the Nottingham, UK-based company Union Square Software. Also in 2016, Deltek became part of Roper Technologies’ portfolio of companies when Roper acquired Deltek for $2.8 billion. In 2020, the company had approximately 3,000 employees in over 12 locations around the world. In 2023, Deltek claimed coverage of 95% of public sector spending across the U.S. federal, state, local, and education markets, as well as public sector spending in Canada. 
In Europe, the company mainly works with consultants, architecture firms, and engineering firms. In 2022 and 2023, Deltek was named one of America's best mid-sized employers by Forbes and Statista Inc. In 2022, the Washington Post included Deltek as one of the Top Workplaces of 2022. Deltek is also rated TSIA Outstanding and has the J.D. Power Certified Assisted Technical Support certification. Acquisitions 2006 - Welcom Software Technology Corp., project management software provider 2008 - Planview’s MPM division, which makes project management software 2009 - mySBX, an online network for government contractors and professionals to find partners and opportunities 2010 - Input, market analysis for government contractors 2010 - Federal Sources, market analysis for government contractors 2010 - Maconomy, practice management software 2011 - Washington Management Group, GSA schedule consulting business 2013 - Centurion Research Solutions, market research and data analytics 2014 - Axium, a software company also known as XTS Software Corp. 2015 - HRsmart, talent management 2016 - Union Square Software, an engineering and architecture technology firm 2017 - Onvia, market intelligence 2017 - WorkBook, total agency management system 2021 - ArchiSnapper, an AEC mobile SaaS vendor 2022 - TIP Technologies 2023 - Replicon, a company that provides unified time tracking applications 2024 - ProPricer, a government proposal pricing software company Products Costpoint: Deltek's flagship product, an enterprise resource planning software package designed for Federal government contractors to meet the unique rules of Federal cost accounting. GovWin IQ: a market intelligence product used for information related to agency contract opportunities, spending, and budgets. 
References Companies based in Virginia 1983 establishments in Virginia Software companies established in 1983 Software project management Cloud platforms Cloud applications Software companies of the United States Business process management ERP software companies
Deltek
[ "Technology" ]
809
[ "Cloud platforms", "Computing platforms" ]
74,695,192
https://en.wikipedia.org/wiki/Chasing%20Shadows%3A%20My%20Life%20Tracking%20the%20Great%20White%20Shark
Chasing Shadows: My Life Tracking the Great White Shark is a memoir written by Greg Skomal that chronicles his decades-long career as an Atlantic shark researcher. It was published in July 2023 by William Morrow, an imprint of HarperCollins. Ret Talbot, a science writer and independent journalist, co-authored the book, collaborating with Skomal so that it would appeal to a general audience. Skomal's goal is to educate readers and share insights about the great white shark. Other books Close to Shore by Michael Capuzzo about the Jersey Shore shark attacks of 1916 Twelve Days of Terror by Richard Fernicola about the same events The Devil's Teeth by Canadian-born journalist Susan Casey. References External links Book excerpt Ret Talbot biography page. 2023 non-fiction books American memoirs Marine life in popular culture Marine biology American marine biologists Technology books Oceans HarperCollins books Biology books Cruelty to animals History books about the United States Books about sharks Science autobiographies
Chasing Shadows: My Life Tracking the Great White Shark
[ "Biology" ]
204
[ "Marine biology" ]
74,696,367
https://en.wikipedia.org/wiki/Postzegelcode
A postzegelcode is a hand-written method of franking in the Netherlands. It consists of a code of nine numbers and letters that customers can purchase online from PostNL and write directly on their piece of mail, within five days, as proof of payment in place of a postage stamp. For mail within the Netherlands, the nine letters and numbers are written in a three-by-three grid. For international mail there is an additional fourth row containing P, N, L. The system was started in 2013. Initially the postzegelcode was more expensive than a stamp because additional handling systems were required. Then for a while the postzegelcode was cheaper. Eventually the two rates were set to the same price. In December 2020, 590,000 people sent cards with postzegelcodes. Safety Since the codes are valid for only five days, the chance that someone could guess a recently purchased code is quite low. Assuming 26 letters and 9 digits (the zero is not used to avoid confusion with the letter O), there are 35⁹ (about 78.8 trillion) possibilities. Even if a postzegelcode were used for every mail item in the Netherlands, the probability that a randomly guessed code matches one sold in the past five days is about 1 in 2 million. References External links PostNL, postzegelcode Postal systems Postal markings
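The keyspace arithmetic above can be reproduced directly. The daily mail volume used here is an assumption chosen to illustrate the article's 1-in-2-million figure, not a number the article states:

```python
ALPHABET = 26 + 9   # 26 letters plus the digits 1-9 (0 is excluded)
CODE_LENGTH = 9

keyspace = ALPHABET ** CODE_LENGTH
print(f"{keyspace:,}")  # 78,815,638,671,875 -> about 78.8 trillion

# Hypothetical volume: ~8 million mail items per day, codes valid 5 days.
valid_codes = 8_000_000 * 5

# Chance that one random guess matches some currently valid code:
p_guess = valid_codes / keyspace
print(f"about 1 in {round(1 / p_guess):,}")  # roughly 1 in 2 million
```

With those assumptions the guess probability comes out near the article's quoted 1 in 2 million; a lower real mail volume would make guessing even less likely.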
Postzegelcode
[ "Technology" ]
273
[ "Transport systems", "Postal systems" ]
62,293,592
https://en.wikipedia.org/wiki/Roger%20Scantlebury
Roger Anthony Scantlebury (born August 1936) is a British computer scientist and Internet pioneer who worked at the National Physical Laboratory (NPL) and later at Logica. Scantlebury led the pioneering work to implement packet switching and associated communication protocols at the NPL in the late 1960s. He proposed the use of the technology in the ARPANET, the forerunner of the Internet, at the inaugural Symposium on Operating Systems Principles in 1967. During the 1970s, he was a major figure in the International Network Working Group, through which he was an early contributor to concepts used in the Transmission Control Program, which became part of the Internet protocol suite. Early life Roger Scantlebury was born in Ealing in 1936. Career National Physical Laboratory Scantlebury worked at the National Physical Laboratory in south-west London, in collaboration with the National Research Development Corporation (NRDC). His early work was on the Automatic Computing Engine and English Electric DEUCE computers. Following this he was tasked by Derek Barber to lead the implementation of Donald Davies' pioneering packet switching concepts for data communication. Scantlebury and Keith Bartlett were the first to describe the term protocol in a modern data-communications context, in an April 1967 memorandum entitled A Protocol for Use in the NPL Data Communications Network. In October 1967, he attended the Symposium on Operating Systems Principles in the United States, where he gave an exposition of packet switching as developed at NPL (and referenced the work of Paul Baran). Also attending the conference was Larry Roberts from ARPA; this was the first time Roberts had heard of packet switching. Scantlebury persuaded Roberts and other American engineers to incorporate the concept into the design for the ARPANET. 
Subsequently he led the development of the NPL Data Communications Network, publishing several research papers pioneering the development of packet-switched computer networks. Elements of the network became operational in early 1969, the first implementation of packet switching, and the NPL network was the first to use high-speed links. He was seconded to the Post Office Telecommunications in 1969, participating in a data communications study and supervising four data communications-related research contracts. This research team developed the alternating bit protocol (ABP). Along with Davies and Barber, he was a major figure in the International Network Working Group (INWG) from 1972, initially chaired by Vint Cerf. He attended the INWG meeting in New York in June 1973 that shaped the early direction of international network protocols, and was acknowledged by Bob Kahn and Vint Cerf in their seminal 1974 paper on internetworking, A Protocol for Packet Network Intercommunication. He co-authored the standard agreed by INWG in 1975, Proposal for an international end to end protocol. Scantlebury later reported directly to Davies at the NPL. As head of the data networks group within the Computer Science Division, he was responsible for the UK technical contribution to the European Informatics Network, a datagram network linking CERN, the French research centre INRIA and the UK’s National Physical Laboratory. Later career Scantlebury joined Logica in 1977 in their Communications Division, where he worked on the CCITT (ITU-T) X.25 protocol and with the formation of the Euronet, a pan-European virtual circuit network using X.25. He moved to the Finance Division in 1981. In the 2000s, he worked for Mercator Software, Integra SP and as a consultant. Subsequently, he worked for Kofax (now Tungsten Automation) and retired in 2020. Personal life Scantlebury married Christine Appleby in 1958 in Middlesex; they had two sons in 1961 and 1966, and a daughter in 1963. 
He lives in Esher. He was influential in persuading NPL to sponsor a gallery about "Technology of the Internet" at The National Museum of Computing, which opened in 2009. Publications Wilkinson, P.T.; Scantlebury, R.A. (1968). The control functions in a local data network. IFIP Congress (2) 1968: 734-738. Scantlebury, R. A.; Wilkinson, P.T.; Bartlett, K.A. (1968). The design of a message switching centre for a digital communication network. IFIP Congress (2) 1968: 723-727. Scantlebury, R. A. (1969). A model for the local area of a data communication network objectives and hardware organization. Symposium on Problems in the Optimization of Data Communications Systems 1969: 183-204 Bartlett, Keith A.; Scantlebury, Roger A.; Wilkinson, Peter T. (1969). A note on reliable full-duplex transmission over half-duplex links. Commun. ACM 12(5): 260-261. See also History of the Internet Internet in the United Kingdom § History List of Internet pioneers Protocol Wars References Further reading External links Internet Dreamers BBC interview with Vint Cerf, Bob Taylor, Larry Roberts and Roger Scantlebury, 2000 NPL, Packet Switching and the Internet Comments by David Rayner, Derek Barber, Roger Scantlebury, and Peter Wilkinson at the Symposium of the Institution of Analysts & Programmers, 2001 The Internet - Where it came from & where it is going, IET/BCS evening talk at the University of Cambridge, 2007 Celebrating 40 years of the net BBC News article quoting Roger Scantlebury, 2009 'Packet switching' system's first computer network BBC News interview with Roger Scantlebury, 2010 Alan Turing and the Ace computer, BBC News series on British computer pioneers, 2010 The Story of Packet Switching, Interview with Roger Scantlebury, Peter Wilkinson, Keith Bartlett, and Brian Aldous, 2011 Protocol Wars, Interview with Roger Scantlebury for the Computer History Museum, 2011 Internet pioneers airbrushed from history, Letter to the Guardian, 2013 The birth of the Internet in the UK, 
Google video featuring Vint Cerf, Roger Scantlebury, Peter Kirstein, Peter Wilkinson, 2013 The Joy of Data BBC Four program featuring an interview with Roger Scantlebury, 2016 How we nearly invented the internet in the UK Letter to the New Scientist, 2020 Fifty Years of the Internet Technology Event featuring Roger Scantlebury at The National Museum of Computing, 2020 1936 births Living people British computer scientists History of computing in the United Kingdom Internet pioneers Packets (information technology) People from Brentford People from Esher Scientists of the National Physical Laboratory (United Kingdom)
Roger Scantlebury
https://en.wikipedia.org/wiki/Prorenin
Prorenin is a protein that constitutes a precursor for renin, the hormone that activates the renin–angiotensin system, which serves to raise blood pressure. Prorenin is converted into renin by the juxtaglomerular cells, which are specialised smooth muscle cells present mainly in the afferent, but also the efferent, arterioles of the glomerular capillary bed. Prorenin is a relatively large molecule, weighing approximately 46 kDa. History Prorenin was discovered by Eugenie Lumbers in 1971. Synthesis In addition to juxtaglomerular cells, prorenin is also synthesised by other organs, such as the adrenal glands, the ovaries, the testis and the pituitary gland, which is why it is found in the plasma of anephric individuals. Concentration Blood concentration levels of prorenin are between 5 and 10 times higher than those of renin. There is evidence to suggest that, in diabetes mellitus, prorenin levels are even higher. One study using relatively newer technology found that blood concentration levels may be several orders of magnitude higher than previously believed, placing them at micrograms rather than nanograms per millilitre. Pregnancy Prorenin occurs in very high concentrations in amniotic fluid and amnion. It is secreted in large amounts from the placenta and womb, and from the ovaries. Conversion to renin Proprotein convertase 1 converts prorenin into renin, but proprotein convertase 2 does not. There is no evidence that prorenin can be converted into renin in the circulation. Therefore, the juxtaglomerular (JG) cells seem to be the only source of active renin. References External links RCSB PDB PDBe Proteins
Prorenin
https://en.wikipedia.org/wiki/Decamethylzirconocene%20dichloride
Decamethylzirconocene dichloride is an organozirconium compound with the formula Cp*2ZrCl2 (where Cp* is C5(CH3)5, derived from pentamethylcyclopentadiene). It is a pale yellow, moisture-sensitive solid that is soluble in nonpolar organic solvents. The complex has been the subject of extensive research. It is a precursor to many other complexes, including the dinitrogen complex [Cp*2Zr]2(N2)3. It is a precatalyst for the polymerization of ethylene and propylene. Further reading References Organozirconium compounds Metallocenes Chloro complexes Cyclopentadienyl complexes Zirconium(IV) compounds
Decamethylzirconocene dichloride
https://en.wikipedia.org/wiki/Transient%20Array%20Radio%20Telescope
The Transient Array Radio Telescope (TART) is a low-cost open-source array radio telescope consisting of 24 all-sky GNSS receivers operating at the L1-band (1.575 GHz). TART was designed as an all-sky survey instrument for detecting radio bursts, as well as providing a test-bed for the development of new synthesis imaging and calibration algorithms. All of the telescope hardware (including radio receivers and correlators) and operating software are open source. A TART-2 radio telescope can be built for approximately 1000 Euros, and the telescope antenna array requires a 4 m × 4 m area for deployment. The TART project is managed by the Electronics Research Foundation, a non-profit based in New Zealand. Design All of the components of TART, from the hardware and FPGA firmware to the operation and imaging software, are open source, released under the GPLv3 license. A TART radio telescope consists of four main sub-assemblies: the antenna array, the RF front end, the radio hub and the basestation. Antenna array The antenna array consists of 24 antennas arranged on four identical 'tiles' with 6 antennas each. Each tile is a 1 m × 1 m square. The antennas used are low-cost, widely available commercial GPS active antennas. More recent installations use multi-arm antenna arrays in either a three-arm 'Y' configuration, or a five-arm star configuration. RF front end The radio frequency (RF) front ends receive the radio signals from each antenna. The RF front ends take advantage of low-cost, widely available, and very sensitive integrated circuits developed for global positioning satellite receivers. The TART uses the MAX2769C Universal GNSS Receiver made by Maxim Integrated. This single integrated circuit includes all the elements required of a radio-telescope receiver: low-noise amplifier, local oscillator, mixer, filters and an ADC. Each RF front end generates a data-stream of digitized radio signals with 2.5 MHz bandwidth from the GPS L1 band (1.57542 GHz).
Radio Hub The TART contains four radio hubs. Each has six RF front end receivers and clock distribution circuitry. Each radio hub sends data to the basestation, and receives the master clock signal from the basestation, over two standard Cat 6 twisted-pair Ethernet cables. Basestation The basestation is a single PCB with an attached Raspberry Pi computer and a Papilio Pro FPGA daughter board. The basestation provides the 16.3767 MHz crystal oscillator signal, which is distributed to the four radio hubs to provide synchronous clocking to the RF front ends. The data is returned from the radios via each radio hub to the basestation, consisting of 24 parallel streams of 1-bit samples. An FPGA processes these samples, acting as a radio correlator. The 276 correlations are sent to the Raspberry Pi host via SPI, and made available over a RESTful API. Indicative Budget The component cost of a TART-2 telescope is approximately 1000 Euros. In addition, a mounting for the antenna array is required. These mountings can take several forms; recent TART-2 telescopes use a multi-arm layout which allows for easy adjustment of antenna positions, and can cost approximately a further 1000 Euros depending on where parts are sourced. Software The TART telescope operating software is open-source and written in Python. It consists of several modules: A hardware driver that reads data from the telescope, via an SPI bus from the FPGA on the basestation. A RESTful API server that makes this data available via HTTP. This runs on the Raspberry Pi computer attached to the basestation. Software that performs aperture synthesis imaging based on the measurements. Aperture synthesis imaging The TART telescope can perform aperture-synthesis imaging of the whole sky in real-time. To do this, the data from each of the 24 antennas is correlated with the data from every other antenna, forming a complex interferometric visibility. There are 276 unique pairs of antennas, and therefore 276 unique complex visibility measurements.
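The figure of 276 follows directly from the antenna count (24 choose 2). A minimal illustrative sketch, not TART's actual firmware (which performs this in the FPGA), counting baselines and correlating toy 1-bit sample streams in software:

```python
import random
from itertools import combinations

N_ANTENNAS = 24

# Unique antenna pairs (baselines): n * (n - 1) / 2 = 276 for n = 24.
baselines = list(combinations(range(N_ANTENNAS), 2))
print(len(baselines))  # 276

# Toy software correlator: each 1-bit sample is represented as +/-1,
# and the correlation of a pair is the sum of elementwise products.
random.seed(0)
samples = [[random.choice([-1, 1]) for _ in range(1000)]
           for _ in range(N_ANTENNAS)]

def correlate(a, b):
    """Zero-lag correlation of two +/-1 sample streams."""
    return sum(x * y for x, y in zip(a, b))

visibilities = {(i, j): correlate(samples[i], samples[j])
                for i, j in baselines}
```

The real instrument computes a complex visibility per baseline; this sketch only shows why the count of correlations matches the pair count.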
From these measurements, an image of the radio emission from the sky can be formed. This process is called aperture synthesis imaging. In the TART, the imaging is normally done using a browser-based imaging pipeline. Three different pipelines have been written to date: The browser-based control panel for the telescope, distributed as part of the TART archive, can perform basic imaging. A lightweight imaging-only pipeline written by Max Scheel. A research project from Stellenbosch University written by Jason Jackson. Development TART was developed by a team from the Department of Physics at the University of Otago, starting in 2013 with TART-1; as of July 2019, TART-3 is under development. TART-1 Development started in 2013 with TART-1, an M.Sc. project developing a 6-element proof-of-concept radio interferometer. TART-2/2.1 TART-1 was followed by TART-2, which was the focus of a Ph.D. research project. TART-2 consists of 24 elements and is capable of continuous all-sky imaging, with the 'first light' image being taken in August 2015. TART-2 was upgraded into TART-2.1 with reduced costs and improved clock stability. TART-2.1 started operation in 2018. TART-2 includes real-time correlation of the radio data from every pair of antennas. This correlation is carried out in the FPGA. There are 276 pairs of antennas, leading to 276 complex visibilities being calculated, which are used as inputs to the synthesis imaging process. These visibilities are made available via the RESTful API for live imaging, or for downloading for further analysis. TART-3 TART-3 started development in 2019. A TART-3 telescope will consist of 1-4 radio hubs, each with 24 receivers. The maximum number of receivers in a single telescope increases to 96.
TART-3 is designed to reduce construction costs and simplify installation. TART Installations There are currently four operational TART telescopes, with seven more planned over 2025 as part of an initiative sponsored by the South African Radio Astronomy Observatory. The operational telescopes are shown in the table below: References External links TART project website TART Project Github Organization TART-2 Github Repository TART VUER Source code Gitlab Repository for the improved telescope web interface Live images Mirror using the TART VUER interface. Telescopes
Transient Array Radio Telescope
https://en.wikipedia.org/wiki/Distributive%20polytope
In the geometry of convex polytopes, a distributive polytope is a convex polytope for which coordinatewise minima and maxima of pairs of points remain within the polytope. For example, this property is true of the unit cube, so the unit cube is a distributive polytope. It is called a distributive polytope because the coordinatewise minimum and coordinatewise maximum operations form the meet and join operations of a continuous distributive lattice on the points of the polytope. Every face of a distributive polytope is itself a distributive polytope. The distributive polytopes all of whose vertex coordinates are 0 or 1 are exactly the order polytopes. See also Stable matching polytope, a convex polytope that defines a distributive lattice on its points in a different way References Order theory Polytopes
Distributive polytope
https://en.wikipedia.org/wiki/Order%20polytope
In mathematics, the order polytope of a finite partially ordered set is a convex polytope defined from the set. The points of the order polytope are the monotonic functions from the given set to the unit interval, its vertices correspond to the upper sets of the partial order, and its dimension is the number of elements in the partial order. The order polytope is a distributive polytope, meaning that coordinatewise minima and maxima of pairs of its points remain within the polytope. The order polytope of a partial order should be distinguished from the linear ordering polytope, a polytope defined from a number n as the convex hull of indicator vectors of the sets of edges of n-vertex transitive tournaments. Definition and example A partially ordered set is a pair (S, ≤) where S is an arbitrary set and ≤ is a binary relation on pairs of elements of S that is reflexive (for all x, x ≤ x), antisymmetric (for all x ≠ y, at most one of x ≤ y and y ≤ x can be true), and transitive (for all x, y, and z, if x ≤ y and y ≤ z then x ≤ z). A partially ordered set (S, ≤) is said to be finite when S is a finite set. In this case, the collection of all functions f that map S to the real numbers forms a finite-dimensional vector space, with pointwise addition of functions as the vector sum operation. The dimension of the space is just the number of elements of S. The order polytope is defined to be the subset of this space consisting of functions f with the following two properties: For every x ∈ S, 0 ≤ f(x) ≤ 1. That is, f maps the elements of S to the unit interval. For every x, y ∈ S with x ≤ y, f(x) ≤ f(y). That is, f is a monotonic function. For example, for a partially ordered set consisting of two elements x and y, with x ≤ y in the partial order, the functions f from these points to real numbers can be identified with points (f(x), f(y)) in the Cartesian plane. For this example, the order polytope consists of all points in the plane with 0 ≤ f(x) ≤ f(y) ≤ 1. This is an isosceles right triangle with vertices at (0,0), (0,1), and (1,1). Vertices and facets The vertices of the order polytope consist of the monotonic functions from S to {0, 1}.
That is, the order polytope is an integral polytope; it has no vertices with fractional coordinates. These functions are exactly the indicator functions of upper sets of the partial order. Therefore, the number of vertices equals the number of upper sets. The facets of the order polytope are of three types: Inequalities 0 ≤ f(x) for each minimal element x of the partially ordered set, Inequalities f(x) ≤ 1 for each maximal element x of the partially ordered set, and Inequalities f(x) ≤ f(y) for each pair of distinct elements x ≤ y that do not have a third distinct element between them; that is, for each pair (x, y) in the covering relation of the partially ordered set. The facets can be considered in a more symmetric way by introducing special elements: an element ⊥ below all elements in the partial order and an element ⊤ above all elements, mapped by f to 0 and 1 respectively, and keeping only inequalities of the third type for the resulting augmented partially ordered set. More generally, with the same augmentation by ⊥ and ⊤, the faces of all dimensions of the order polytope correspond 1-to-1 with quotients of the partial order. Each face is congruent to the order polytope of the corresponding quotient partial order. Volume and Ehrhart polynomial The order polytope of a linear order is a special type of simplex called an order simplex or orthoscheme. Each point of the unit cube whose coordinates are all distinct lies in a unique one of these orthoschemes, the order simplex for the linear order of its coordinates. Because these order simplices are all congruent to each other and (for orders on n elements) there are n! different linear orders, the volume of each order simplex is 1/n!. More generally, an order polytope can be partitioned into order simplices in a canonical way, with one simplex for each linear extension of the corresponding partially ordered set. Therefore, the volume of any order polytope is 1/n! multiplied by the number of linear extensions of the corresponding partially ordered set.
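The vertex and volume statements can be checked by brute force for a small poset. A sketch in Python (the four-element "diamond" poset used here is a hypothetical example, not taken from the article):

```python
from fractions import Fraction
from itertools import permutations, product

# The "diamond" poset on elements 0..3, given as its covering/order
# pairs (a, b) meaning a <= b: element 0 below 1 and 2, both below 3.
relations = [(0, 1), (0, 2), (1, 3), (2, 3)]
n = 4

# Vertices of the order polytope: monotone functions S -> {0, 1},
# i.e. indicator functions of upper sets of the partial order.
vertices = [f for f in product((0, 1), repeat=n)
            if all(f[a] <= f[b] for a, b in relations)]

# Linear extensions: total orders compatible with the partial order.
extensions = [p for p in permutations(range(n))
              if all(p.index(a) < p.index(b) for a, b in relations)]

# Volume of the order polytope = (number of linear extensions) / n!
volume = Fraction(len(extensions), 24)
```

For the diamond there are 6 upper sets (hence 6 vertices), 2 linear extensions, and volume 2/4! = 1/12.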
This connection between the number of linear extensions and volume can be used to approximate the number of linear extensions of any partial order efficiently (despite the fact that computing this number exactly is #P-complete) by applying a randomized polynomial-time approximation scheme for polytope volume. The Ehrhart polynomial of the order polytope is a polynomial whose value at an integer argument t gives the number of integer points in a copy of the polytope scaled by a factor of t. For the order polytope, the Ehrhart polynomial equals (after a minor change of variables) the order polynomial of the corresponding partially ordered set. This polynomial encodes several pieces of information about the polytope, including its volume (the leading coefficient of the polynomial) and its number of vertices (the sum of its coefficients). Continuous lattice By Birkhoff's representation theorem for finite distributive lattices, the upper sets of any partially ordered set form a finite distributive lattice, and every finite distributive lattice can be represented in this way. The upper sets correspond to the vertices of the order polytope, so the mapping from upper sets to vertices provides a geometric representation of any finite distributive lattice. Under this representation, the edges of the polytope connect comparable elements of the lattice. If two functions f and g both belong to the order polytope of a partially ordered set (S, ≤), then the function f ∧ g that maps x to min(f(x), g(x)), and the function f ∨ g that maps x to max(f(x), g(x)), both also belong to the order polytope. The two operations ∧ and ∨ give the order polytope the structure of a continuous distributive lattice, within which the finite distributive lattice of Birkhoff's theorem is embedded. That is, every order polytope is a distributive polytope. The distributive polytopes with all vertex coordinates equal to 0 or 1 are exactly the order polytopes. References Order theory Polytopes
Order polytope
https://en.wikipedia.org/wiki/Soft%20Growing%20Robotics
Soft Growing Robotics is a subset of soft robotics concerned with designing and building robots that use robot body expansion to move and interact with the environment. Soft growing robots are built from compliant materials and attempt to mimic how vines, plant shoots, and other organisms reach new locations through growth. While other forms of robots use locomotion to achieve their objectives, soft growing robots elongate their body through addition of new material, or expansion of material. This gives them the ability to travel through constricted areas and form a wide range of useful 3-D formations. Currently there are two main soft growing robot designs: additive manufacturing and tip extension. Some goals of soft growing robotics development are the creation of robots that can explore constricted areas and improve surgical procedures. Additive manufacturing design One way of extending the robot body is through additive manufacturing. Additive manufacturing generally refers to 3-D printing, or the fabrication of three dimensional objects through the conjoining of many layers of material. Additive manufacturing design of a soft growing robot utilizes a modified 3-D printer at the tip of the robot to deposit thermoplastics (material that is rigid when cooled and flexible when heated) to extend the robot in the desired orientation. Design characteristics The body of the robot consists of: A base, where the power supply, circuit board, and spool of thermoplastic filament is stored. The tubular body of varying length created by additive manufacturing which extends outwards from the base. The tip where new material is deposited to lengthen the tubular body, and house sensors. The additive manufacturing process involves polylactic acid filament (a thermoplastic) being pulled through the tubular body of the robot by a motor in the tip. At the tip, the filament passes through a heating element, making it pliable. 
The filament is then turned perpendicular to the direction of robot growth and deposited onto the outer edge of a rotating disk facing the base of the robot. As the disk (known as the deposition head) rotates, new filament is deposited in spiraling layers. This filament solidifies in front of the previous layer of filament, pushing the tip of the robot forward. The interactions between the temperature of the heating element, the rotation of the deposition head, and the speed at which the filament is fed through the heating element are precisely controlled to ensure the robot grows in the desired manner. Movement control The speed of the robot is controlled by changing the temperature of the heating element, the speed at which filament is fed through the heating element, and the speed at which the deposition head is spun. The resulting growth speed is a function of the thickness of the deposited layer of filament and the angle of the helix in which the filament material is deposited. Controlling the direction of growth (and thus the direction of robot "movement") can be done in two ways: Changing the thickness of the filament deposited on one side of the deposition head (tilting the tip away from that side). Changing the number of layers of filament on one side of the deposition head by using partial rotation of the deposition disk to add extra material in that sector (tilting the tip away from the side with extra layers of filament). For example, the disk could normally rotate clockwise, rotate counter-clockwise for 1 radian, and then resume rotating clockwise. This would add two extra layers of material in the 1 radian section. Capabilities One of the major advantages of soft growing robots is that minimal friction exists between the outside environment and the robot. This is because only the robot tip moves relative to the environment.
Multiple robots using additive manufacturing for growth were designed for burrowing into the soil, as less friction with the environment reduces the energy required to move through it. Unsubmerged, one robot was able to grow at a speed of 1.8-4 mm/min, with a maximum bending speed of 1.28 degrees per minute and a growing force of up to 6 kg. Unsubmerged, a second prototype was able to grow at a speed of 3-4 mm/min, as well as passively turn 40 degrees with a 100% success rate and 50 degrees with a 60% success rate (where passively turning means the robot was grown into a slanted wall, and the properties of the thermoplastic filament were used to bend the robot in the desired direction). Tip extension design A second form of soft growing robot design is tip extension. This design is characterized by a tube of material (common materials include nylon fabric, low density polyethylene, and silicone-coated nylon) pressurized with air or water that is folded into itself. By letting out the folded material, the robot extends from the tip as the pressurized tube pushes out the inner folded material. Design characteristics In contrast with additive manufacturing, where new material is deposited behind the tip of the robot to push the tip forward, tip extension utilizes the internal pressure within the robot body to push out new material at the tip of the robot. Often, the tubing inside the robot body is stored on a reel to make it easier to control the release of tubing and thus robot growth. Multiple methods of turning a tip extension robot have been developed. They include: Pinching the inner tube of robot body material and securing the pinched material with latches. To turn the robot, a latch is opened, releasing more robot body material on one side of the robot. The internal pressure causes the extra material to inflate, making one side of the robot longer than the other, and turning the robot away from the longer side.
To grow the robot straight, none of the latches are released or all of the latches are released. The latches are controlled through their placement in a second set of inflatable tubing attached to the main robot body material. If a latch's tubing is uninflated, the latch can never open because the internal robot body pressure forces it closed. If a latch's tubing is inflated, and the latch is on a straight section of the robot body, the latch will not open due to the slant of the latch's angled, interlocking hooks. If a latch's tubing is inflated, and the latch is on the tip of the robot, the curve of the tip allows the interlocking hooks to slip past each other and open the latch. Adding a second set of inflatable tubing to the sides of the robot body. This tubing is pinched periodically along its length so that when inflated, the tubing will contract lengthwise. To turn the robot, one set of tubing is inflated, causing the tubing to contract along the length of the robot body and turn the robot body in the direction of the inflated tubing. Robots utilizing the tip extension design are retractable. Current designs use a wire attached to the tip of the robot that is used to pull the tip of the robot back into the robot body. Mathematical analysis The theoretical force the tip grows under can be modelled as F = PA, where F represents the force the tip grows under, P represents the internal pressure, and A represents the cross-sectional area of the robot tip. However, the experimental force the tip extends under has been found to be less than this, largely due to axial tension in the robot body. A model that approximates F more accurately is F = cA(P − P_yield) − F_v − F_L − F_c. Here, c is an experimentally determined constant and P_yield is the yield pressure below which no growth occurs. F_v, F_L, and F_c are force terms dependent on the velocity, length, and curvature of the robot respectively. Additionally, multiple mathematical models for various forms of turning, twisting, and retracting have been developed.
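The idealised driving-force model F = PA is straightforward to evaluate. A short sketch (the pressure and tube diameter below are hypothetical illustrative values, not measurements from any specific prototype):

```python
import math

def tip_force(pressure_pa, diameter_m):
    """Theoretical tip growth force F = P * A for a pressurised tube."""
    area = math.pi * (diameter_m / 2) ** 2  # cross-sectional area of the tip
    return pressure_pa * area

# e.g. a 50 mm diameter tube at 10 kPa gauge pressure
f = tip_force(10e3, 0.050)  # about 19.6 N
```

The experimentally observed force would be lower, per the corrected model, once the yield pressure and the velocity-, length-, and curvature-dependent loss terms are subtracted.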
Methods of robot operation Soft growing robots can be controlled in various ways depending on how well the objective and growth path are defined. Without a clearly defined goal or robot growth path, teleoperation is used. When a clearly defined goal exists (such as a light source), computer vision can be used to find a path to the goal and grow a robot along that path. If the desired path of robot growth is known before the robot is deployed, pre-planned turning positions can be used to control the robot. Teleoperation: a human operator controls robot growth, speed, and turning. This can be done either with the operator viewing the robot, or with the operator using an onboard camera. Computer vision: using a camera and software to detect a pre-defined goal and steer the robot towards the goal autonomously. Pre-determined turning positions: with the latch turning design, the latches can be made so they open at pre-planned times, making the robot grow in pre-planned shapes. Applications Possible applications of soft growing robots focus on their low friction and interaction with the environment, their simple method of growth, and their ability to grow through cramped environments. Coral reef exploration: soft growing robots potentially have the ability to grow within the passageways of reefs, carrying sensors (optical, distance, etc.) without damaging the reef. As the support structure for an antenna: a soft growing robot can grow into a helix configuration with an antenna attached to it, which is an optimal configuration for the operation of the antenna. Surgical procedures: minimally invasive surgery involves medical procedures within sensitive, constricted environments (the human body) which could be well suited to the flexibility and controllability of soft growing robots.
Burrowing into the ground: As friction is only experienced by the tip of the soft growing robot body when digging, soft growing robots may be more energy efficient than other methods of digging that involve the entire robot body moving relative to the environment. References Robotics Robot kinematics
Soft Growing Robotics
https://en.wikipedia.org/wiki/Journal%20of%20Spacecraft%20and%20Rockets
The Journal of Spacecraft and Rockets is a bi-monthly (six issues per year) peer-reviewed scientific journal published by the American Institute of Aeronautics and Astronautics. It covers the science and technology of spaceflight, satellite and mission design, missile design, and rockets. The editor-in-chief is Olivier de Weck (Massachusetts Institute of Technology). It was established in 1964. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.808. History The journal has been published bimonthly from the beginning. Prior editors have been: Hanspeter Schaub (2017–2021) Robert D. Braun (2014–2016) E. Vincent Zoby (1993–2014) Clark H. Lewis (1990–1993) Frank J. Redd (1987–1989) R.H. Woodward Waesche (1981–1987) Paul F. Holloway (1978–1981) Donald C. Fraser (1975–1978) Ralph R. Ragan (1972–1975) Gordon L. Dugger (1964–1971) - founding editor References External links Aerospace engineering journals English-language journals Bimonthly journals Academic journals established in 1964
Journal of Spacecraft and Rockets
https://en.wikipedia.org/wiki/Inverse%20consistency
In image registration, inverse consistency measures the consistency of mappings between images produced by a registration algorithm. The inverse consistency error, introduced by Christensen and Johnson in 2001, quantifies the distance between the composition of the mappings from each image to the other, produced by the registration procedure, and the identity function, and is used as a regularisation constraint in the loss function of many registration algorithms to enforce consistent mappings. Inverse consistency is necessary for good image registration but it is not sufficient, since a mapping can be perfectly consistent but not register the images at all. Definition Image registration is the process of establishing a common coordinate system between two images. Given two images A and B, registering a source image B to a target image A consists of determining a transformation T_AB that maps points from the target space to the source space. An ideal registration algorithm should not be sensitive to which image in the pair is used as source or target, and the registration operator should be antisymmetric, such that the mappings T_AB and T_BA produced when registering B to A and A to B respectively should be the inverse of each other, i.e. T_AB ∘ T_BA = Id and T_BA ∘ T_AB = Id or, equivalently, T_AB = T_BA⁻¹ and T_BA = T_AB⁻¹, where ∘ denotes the function composition operator. Real algorithms are not perfect, and when swapping the role of source and target image in a registration problem the transformations so obtained are not the inverse of each other. Inverse consistency can be enforced by adding to the loss function of the registration a symmetric regularisation term that penalises inconsistent transformations. Inverse consistency can be used as a quality metric to evaluate image registration results.
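The consistency of a pair of mappings can be checked numerically by composing them over a grid of sample points and measuring the deviation from the identity. A hedged NumPy sketch (the affine test transforms and grid are hypothetical, for illustration only):

```python
import numpy as np

def ice(t_ab, t_ba, points):
    """Mean and max inverse consistency error over sample points.

    t_ab, t_ba: callables mapping an (N, 2) array of points between
    the two image spaces. A perfectly consistent pair satisfies
    t_ba(t_ab(x)) == x for every x.
    """
    residual = t_ba(t_ab(points)) - points   # deviation from identity
    dist = np.linalg.norm(residual, axis=1)  # per-point error
    return dist.mean(), dist.max()

# Toy example: a scaling and a slightly inconsistent inverse.
t_ab = lambda p: 2.0 * p
t_ba = lambda p: p / 2.001

xs, ys = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
mean_err, max_err = ice(t_ab, t_ba, pts)
```

With an exact inverse (`p / 2.0`) both errors vanish; the small mismatch above produces a small but nonzero error, illustrating how the metric penalises inconsistent pairs.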
The inverse consistency error (ICE) measures the distance between the composition of the two transforms and the identity function, and it can be formulated in terms of either the average or the maximum over a region of interest Ω of the image: ICE_mean = (1/|Ω|) Σ_{x ∈ Ω} ‖T_BA(T_AB(x)) − x‖ and ICE_max = max_{x ∈ Ω} ‖T_BA(T_AB(x)) − x‖. While inverse consistency is a necessary property of good registration algorithms, the inverse consistency error alone is not a sufficient metric to evaluate the quality of image registration results, since a perfectly consistent mapping, with no other constraint, may be not even close to correctly registering a pair of images. References External links Inverse consistency error Computer vision
Inverse consistency
https://en.wikipedia.org/wiki/H3K36me3
H3K36me3 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the tri-methylation at the 36th lysine residue of the histone H3 protein, and it is often associated with gene bodies. There are diverse modifications at H3K36, and they are involved in many important biological processes. H3K36 has different acetylation and methylation states with no similarity to each other. Nomenclature H3K36me3 indicates trimethylation of lysine 36 on the histone H3 protein subunit. Lysine methylation This diagram shows the progressive methylation of a lysine residue. The tri-methylation (right) denotes the methylation present in H3K36me3. Understanding histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K36me3. Mechanism and function of modification Binding proteins H3K36me3 can bind chromodomain proteins such as MSL3, hMRG15 and scEaf3. It can bind PWWP-domain proteins such as BRPF1, DNMT3A and HDGF2, and Tudor-domain proteins such as PHF19 and PHF1. DNA repair H3K36me3 is required for homologous recombinational repair of DNA damage such as double-strand breaks. The trimethylation is catalyzed by the SETD2 methyltransferase. Other roles H3K36me3 acts as a mark for HDACs to bind and deacetylate the histone, which prevents run-away transcription. It is associated with both facultative and constitutive heterochromatin.
Relationship with other modifications H3K36me3 might define exons. Nucleosomes in the exons have more histone modifications such as H3K79, H4K20, and especially H3K36me3. Epigenetic implications The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, and an emphasis was placed on histone modification relevance. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped and enrichment was seen to localize in certain genomic regions. Five core histone modifications were found, with each respective one being linked to various cell functions: H3K4me3 (promoters), H3K4me1 (primed enhancers), H3K36me3 (gene bodies), H3K27me3 (polycomb repression), and H3K9me3 (heterochromatin). The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications.
Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Clinical significance This histone methylation is responsible for maintaining stable gene expression. It is important throughout aging and has an impact on longevity. Genes that change their expression during aging have much lower levels of H3K36me3 in their gene bodies. There are reduced levels of H3K36me3 and H3K79me2 at the upstream GAA region of the FXN gene, indicative of a defect in transcription elongation in Friedreich's ataxia. Methods The histone mark H3K36me3 can be detected in a variety of ways: 1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once it is bound to a targeted protein and immunoprecipitated. It is well optimized and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes show enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone methylation Histone methyltransferase Methyllysine References Epigenetics Post-translational modification
H3K36me3
[ "Chemistry" ]
1,326
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
62,306,576
https://en.wikipedia.org/wiki/Eduard%20Feireisl
Eduard Feireisl (born 16 December 1957 in Kladno) is a Czech mathematician. After studying from 1973 to 1977 at secondary school in Nové Strašecí, Feireisl studied mathematics at Charles University in Prague from 1977 and graduated there in 1982. He received his doctorate in 1986 from the Institute of Mathematics of the Czechoslovak Academy of Sciences with the thesis Critical points of non-differentiable functionals: existence of solutions to problems of mathematical elasticity theory, under the supervision of Vladimir Lovicar. During the 1980s he worked as an assistant professor at the Department of Mathematics of the Faculty of Mechanical Engineering, Czech Technical University in Prague (CTU). He then moved to the Institute of Mathematics of the Czechoslovak Academy of Sciences (as a member since 1988) and habilitated there in 1999. He became a lecturer at Charles University in 2009 and was appointed there to a full professorship in 2011. Feireisl spent half a year in Oxford in 1989, a sabbatical year at the Complutense University of Madrid in 1993/94, and half a year at the University of Franche-Comté in Besançon in 1998 and 1999. He was also a visiting scholar for 12 months between 2001 and 2013 at Henri Poincaré University in Nancy and for 3 months in 2000 at Ohio State University. He was at the TU Munich in 2004/05, a visiting professor at the Central European University in Budapest from 2008 to 2010, and at the Erwin Schrödinger Institute in Vienna in 2012. For 2018 to 2021 he was appointed an Einstein Visiting Fellow at TU Berlin. His research deals with partial differential equations, infinite-dimensional dynamical systems, and mathematical problems of hydrodynamics. He received the Prize of the Academy of Sciences of the Czech Republic in 2004 and 2009, the Neuron Award in 2015, and in 2017 the gold medal of Charles University, as well as the Bernard Bolzano Honorary Medal from the Czech Academy of Sciences. 
In 2012, he chaired the scientific committee of the European Congress of Mathematics in Kraków. He was an invited speaker in 2002 at the International Congress of Mathematicians in Beijing, and at the conference Dynamics, Equations and Applications in Kraków in 2019. In 2018 he was a member of the Fields Medal Selection Committee. In 2013 he received an Advanced Grant from the European Research Council (ERC) for the study of mathematical modeling of gas movement and heat exchange. Selected publications Articles Asymptotic analysis of the full Navier–Stokes–Fourier system: From compressible to incompressible fluid flows, Russian Mathematical Surveys, vol. 62, 2007, pp. 511–533 Dynamical systems approach to models in fluid mechanics, Russian Mathematical Surveys, vol. 69, 2014, pp. 331–357 Books Dynamics of viscous compressible fluids, Oxford UP 2004 as editor with Constantine Dafermos: Handbook of differential equations: Evolutionary equations, Elsevier 2004 with Dalibor Pražák: Asymptotic behavior of dynamical systems in fluid mechanics, American Institute of Mathematical Sciences 2010 with Trygve G. Karper, Milan Pokorný: Mathematical Theory of Compressible Viscous Fluids: Analysis and Numerics, Birkhäuser 2016 with John M. Ball, Felix Otto: Mathematical thermodynamics of complex fluids: Cetraro, Italy 2015, Lecture notes in mathematics 2200, Springer 2017 with Antonín Novotný: Singular Limits in Thermodynamics of Viscous Fluids, Birkhäuser 2017 with Dominic Breit, Martina Hofmanová: Stochastically forced compressible fluid flows, De Gruyter 2018 References 1957 births Living people Czech mathematicians Fluid dynamicists Partial differential equation theorists Charles University alumni Academic staff of Charles University
Eduard Feireisl
[ "Chemistry" ]
765
[ "Fluid dynamicists", "Fluid dynamics" ]
62,306,648
https://en.wikipedia.org/wiki/Ratl
A ratl (رطل ) is a medieval Middle Eastern unit of measurement found in several historic recipes. The term was used to measure both liquid volume and weight (around a pound and a pint in 10th-century Baghdad, but anywhere from 8 ounces to 8 pounds depending on the time period and region). While there were a variety of names for different shapes of cups and mugs in use at the time, the ratl seems to have had a position roughly equivalent to a British pint in that the name of the drinking-vessel also implied a standardized measurement as opposed to merely the object's shape, in both 10th-century Baghdad and 13th-century Andalusia. However, those standardized measures varied both by region and by purpose: the spice-measuring ratl, the flax-measuring ratl, the oil-measuring ratl, and the quicksilver-measuring ratl all differed from each other. The ratl was part of a sequence of measurements ranging from a grain of barley through the dirham (used as a common point of reference in both medieval European and Middle Eastern regions) on up to the Sa (Islamic measure). Measurement 1 mudd = 8/6 ratl. 1 sá = 4 mudd = 5+1/3 ratl. 1 ratl = 128+4/7 dirham, or 128 dirham, or 130 dirham. 1 uqiyyah = 40 dirham. 1 nashsh = 20 dirham. 7 mithqal = 10 dirham. 1 mithqal = 72 grains of average barley with both ends cut. 1 mithqal = 20 qirat قِيراط of Makkah = 21+3/7 qirat of Damascus. 1 dirham = 0.7 mithqal = 14 qirat of Makkah = 15 qirat of Damascus. 1 mil = 4000 zira. 1 wasq = 60 sá. In al-Warraq's tenth-century cookbook, different regions used some of the same terms to mean different units of measurement and different relationships between them. Some of those relationships are described above. References Customary units of measurement Units of measurement Cooking weights and measures
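Since every unit in the list above is defined as a ratio of the dirham, conversions reduce to exact-fraction arithmetic. A small sketch, taking the ratl as 128 dirhams (one of the several historical values given above):

```python
from fractions import Fraction

# Unit sizes expressed in dirhams, following the relationships above.
# The ratl is taken here as 128 dirhams -- one of several historical values.
DIRHAMS = {
    "dirham": Fraction(1),
    "nashsh": Fraction(20),
    "uqiyyah": Fraction(40),
    "mithqal": Fraction(10, 7),          # 7 mithqal = 10 dirham
    "ratl": Fraction(128),
    "mudd": Fraction(8, 6) * 128,        # 1 mudd = 8/6 ratl
    "sa": Fraction(16, 3) * 128,         # 1 sa = 5 1/3 ratl
}

def convert(amount, src, dst):
    """Convert between medieval units via their dirham equivalents."""
    return Fraction(amount) * DIRHAMS[src] / DIRHAMS[dst]

print(convert(1, "sa", "mudd"))        # 1 sá = 4 mudd, as in the list above
print(convert(1, "uqiyyah", "nashsh"))  # 1 uqiyyah = 2 nashsh
```

Using exact fractions avoids rounding drift when chaining several of the ratios together.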
Ratl
[ "Mathematics" ]
448
[ "Quantity", "Customary units of measurement", "Units of measurement" ]
62,307,595
https://en.wikipedia.org/wiki/Nano-interfaces%20in%20bone
Bones form the skeleton of our bodies. They allow us to move and to hold the body up against gravity. Bones are attachment points for muscles that help us perform many activities such as walking, jumping, kneeling, and grasping. Bones also protect organs from injury. Moreover, bone is responsible for blood cell production in the human body. The mechanical properties of bone greatly influence its functionality. For instance, deterioration in bone ductility due to diseases such as osteoporosis can adversely affect individuals' lives. Bone ductility reflects how much energy bone absorbs before fracture. In bone, the origin of this ductility lies at the nanoscale. The nano-interfaces in bone are the interfaces between individual collagen fibrils. The interface is filled with non-collagenous proteins, mainly osteopontin (OPN) and osteocalcin (OC). The osteopontin and osteocalcin form a sandwich structure with HAP minerals at the nanoscale. The nano-interfaces make up less than 2–3% of bone content by weight, yet they contribute more than 30% of the fracture toughness. Deformation mechanisms in nano-interfaces The current knowledge of the structure and deformation mechanisms in nano-interfaces is limited. For the first time, a study unraveled the complex synergistic deformation mechanism in the nano-interfaces in bone. A synergistic deformation mechanism of the proteins, through strong anchoring and the formation of dynamic binding sites on mineral nano-platelets, was observed. The nano-interface can sustain a ductility approaching 5000% and an outstanding specific energy to failure that is several times larger than that of the best-known tough natural materials, such as spider silk. References Nanotechnology Nanomedicine
Nano-interfaces in bone
[ "Materials_science", "Engineering" ]
359
[ "Nanomedicine", "Nanotechnology", "Materials science" ]
62,308,963
https://en.wikipedia.org/wiki/Split%20screen%20%28computing%29
Split screen is a display technique in computer graphics that consists of dividing graphics and/or text into non-overlapping adjacent parts, typically as two or four rectangular areas. This allows for the simultaneous presentation of (usually) related graphical and textual information on a computer display. TV sports adopted this presentation methodology in the 1960s for instant replay. Originally, non-dynamic split screens differed from windowing systems in that the latter allowed overlapping and freely movable parts of the screen (the "windows") to present both related and unrelated application data to the user. In contrast, the former were strictly limited to fixed positions. The split screen technique can also be used to run two instances of an application, potentially allowing another user to interact with the second instance. In video games The split screen feature is commonly used in non-networked, also known as couch co-op, video games with multiplayer options. In its most easily understood form, a split screen for a multiplayer video game is an audiovisual output device (usually a standard television for video game consoles) where the display has been divided into 2–4 equally sized areas (depending on the number of players) so that the players can explore different areas simultaneously without being close to each other. This has historically been remarkably popular on consoles, which until the 2000s did not have access to the Internet or any other network, and is less common today with modern support for networked console-to-console multiplayer. In competitive split-screen games, it is customarily considered cheating to look at another player's screen section to gain an advantage. History Split screen gaming dates back to at least the 1970s, with games such as Drag Race (1977) from Kee Games in the arcades being presented in this format. 
It has always been a common feature of two-or-more-player home console and computer games too, with notable titles being Kikstart II for 8-bit systems, a number of 16-bit racing games (such as Lotus Esprit Turbo Challenge and Road Rash II), and action/strategy games (such as ToeJam & Earl and Lemmings), all employing a vertical or horizontal screen split for two-player games. Xenophobe is notable as a three-way split screen arcade title, although on home platforms it was reduced to one or two screens. The addition of four controller ports on home consoles also ushered in more four-way split screen games, with Mario Kart 64 and GoldenEye 007 on the Nintendo 64 being two well-known examples. In arcades, machines tended to move towards having a whole screen for each player, or multiple connected machines, for multiplayer. On home machines, especially in the first- and third-person shooter genres, multiplayer is now more common over a network or the internet rather than locally with split screen. See also Multiplayer video game Screen tearing Split screen (video production) References Computer graphics User interface techniques Video game terminology
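The equal-area division described above amounts to partitioning the display rectangle into one viewport per player. A minimal sketch; the top/bottom split for two players and quadrants for three or four are one common convention among several used in practice:

```python
def split_viewports(width, height, players):
    """Return (x, y, w, h) viewport rectangles for 1-4 players.

    2 players get a horizontal split (top/bottom); 3-4 players get
    quadrants -- one common layout among several used by real games.
    """
    if players == 1:
        return [(0, 0, width, height)]
    if players == 2:
        h = height // 2
        return [(0, 0, width, h), (0, h, width, h)]
    if players in (3, 4):
        w, h = width // 2, height // 2
        quads = [(0, 0, w, h), (w, 0, w, h), (0, h, w, h), (w, h, w, h)]
        return quads[:players]
    raise ValueError("this sketch supports 1-4 players")

# Each rectangle would be handed to the renderer as a separate viewport,
# each drawn from its own player's camera.
for rect in split_viewports(1920, 1080, 4):
    print(rect)
```

Three-player games sometimes fill the unused fourth quadrant with a map or scoreboard instead of leaving it blank; that choice is independent of the partitioning itself.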
Split screen (computing)
[ "Technology" ]
590
[ "Computing terminology", "Video game terminology" ]
62,309,271
https://en.wikipedia.org/wiki/S-Methylcysteine
S-Methylcysteine is the amino acid with the nominal formula CH3SCH2CH(NH2)CO2H. It is the S-methylated derivative of cysteine. This amino acid occurs widely in plants, including many edible vegetables. Natural occurrence S-Methylcysteine is not genetically coded, but it arises by post-translational methylation of cysteine. One pathway involves methyl transfer from alkylated DNA by zinc-cysteinate-containing repair enzymes. S-Methylcysteine sulfoxide is an oxidized derivative of S-methylcysteine that is found in onions. Other chemical properties Beyond its biological context, S-methylcysteine has been examined as a chelating agent. References Biochemistry Sulfur amino acids Thioethers Alpha-Amino acids Amino acid derivatives
S-Methylcysteine
[ "Chemistry", "Biology" ]
171
[ "Biochemistry", "nan" ]
62,310,912
https://en.wikipedia.org/wiki/Scott%20Kirkpatrick
Scott Kirkpatrick is a computer scientist, and professor in the School of Engineering and Computer Science at the Hebrew University, Jerusalem. He has over 75,000 citations in the fields of information appliance design, statistical physics, and distributed computing. He initially worked at IBM's Thomas J. Watson Research Center with C. Daniel Gelatt and Mario Vecchi researching computer design optimization. They argued for "simulated annealing" via the Metropolis–Hastings algorithm, whereby one can improve on a fast cooling process by "defining appropriate temperatures and energies". Their research was published in Science and was an inflection point in heuristic algorithms. Selected research Havlin, Shlomo, et al. "Challenges in network science: Applications to infrastructures, climate, social systems and economics." The European Physical Journal Special Topics 214.1 (2012): 273–293. Schneider, Johannes, and Scott Kirkpatrick. Stochastic optimization. Springer Science & Business Media, 2007. Carmi, Shai, et al. "A model of Internet topology using k-shell decomposition." Proceedings of the National Academy of Sciences 104.27 (2007): 11150–11154. Kirkpatrick, Scott, C. Daniel Gelatt, and Mario P. Vecchi. "Optimization by simulated annealing." Science 220.4598 (1983): 671–680. Kirkpatrick, Scott. "Percolation and conduction." Reviews of Modern Physics 45.4 (1973): 574. See also List of fellows of the Association for Computing Machinery References Computer scientists Academic staff of the Hebrew University of Jerusalem Year of birth missing (living people) Living people
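The simulated-annealing idea credited to Kirkpatrick, Gelatt and Vecchi can be sketched in a few lines. This is a toy illustration with an invented energy function, neighborhood, and cooling schedule, not the setup of the original paper:

```python
import math
import random

def simulated_annealing(energy, neighbor, state, t0=10.0, cooling=0.995,
                        steps=20000, seed=0):
    """Metropolis-style annealing: accept worse states with probability
    exp(-dE/T), lowering the temperature T as the search proceeds."""
    rng = random.Random(seed)
    t = t0
    best = state
    for _ in range(steps):
        candidate = neighbor(state, rng)
        d_e = energy(candidate) - energy(state)
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            state = candidate
        if energy(state) < energy(best):
            best = state
        t *= cooling  # gradual schedule; "fast cooling" would skip this
    return best

# Toy problem: minimize a 1-D function with several local minima.
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x = simulated_annealing(f, step, state=0.0)
print(round(x, 2))
```

Setting the temperature to (near) zero from the start reduces this to plain downhill iterative improvement, i.e. the quench-like fast cooling that the annealing schedule is meant to improve upon.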
Scott Kirkpatrick
[ "Technology" ]
348
[ "Computer science", "Computer scientists" ]
59,613,436
https://en.wikipedia.org/wiki/IET%20Information%20Security
IET Information Security is a bimonthly peer-reviewed scientific journal covering information security and cryptography. It was established in 2005 as IEE Proceedings - Information Security, obtaining its current name in 2007. It is published by the Institution of Engineering and Technology and the editor-in-chief is Yvo Desmedt (University College London). Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2017 impact factor of 0.890. References External links Bimonthly journals Computer science in the United Kingdom Computer science journals English-language journals Institution of Engineering and Technology academic journals Academic journals established in 2005
IET Information Security
[ "Engineering" ]
138
[ "Institution of Engineering and Technology", "Institution of Engineering and Technology academic journals" ]
59,613,749
https://en.wikipedia.org/wiki/NGC%204636
NGC 4636 is an elliptical galaxy located in the constellation Virgo. It is a member of the NGC 4753 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. It is located at a distance of about 55 million light years from Earth, which, given its apparent dimensions, means that NGC 4636 is about 105,000 light years across. It was discovered by William Herschel on February 23, 1784. NGC 4636 lies one and a half degrees southwest of Delta Virginis. It can be viewed through a telescope at a ×23 magnification as a bright oval glow. It is part of the Herschel 400 Catalogue. Characteristics The central part of NGC 4636 is circular and is surrounded by an elongated fainter envelope, containing a large number of globular clusters. The galaxy has an active galactic nucleus (AGN) that has been categorised as a LINER or a type 1.9 Seyfert galaxy. The source of nuclear activity in galaxies is suggested to be a supermassive black hole that accretes material. NGC 4636 harbors a relatively small supermassive black hole with mass , as inferred from the bulge velocity dispersion. Molecular gas When imaged in CO(2–1), molecular clouds appear in NGC 4636. Cloud 1 is not associated with detectable optical emission and is out of the dust extinction map field of view, while cloud 2 is centered on a dust absorption knot and aligned with a ridge in the optical line emission map. The faint NGC 4636 ALMA continuum is in good agreement with the expected emission from cold dust, which would indicate that the dust content of NGC 4636 is fairly centrally located. The associated total molecular mass is . The ultraviolet emission of NGC 4636 exhibits O vi emission, which is a tracer of gas cooling. The measured emission indicates a cooling rate of 0.3 M⊙ yr−1. 
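The physical size quoted above (about 105,000 light years at a distance of about 55 million light years) follows from the small-angle approximation. A sketch; the ~6.6 arcminute angular diameter used here is back-derived for illustration rather than taken from a catalogue:

```python
import math

def linear_size(distance_ly, angular_diameter_arcmin):
    """Small-angle approximation: physical size = distance x angle (radians)."""
    theta = math.radians(angular_diameter_arcmin / 60.0)
    return distance_ly * theta

# NGC 4636: ~55 million light years away; an apparent diameter of roughly
# 6.6 arcminutes (illustrative value) reproduces the quoted ~105,000 ly.
size = linear_size(55e6, 6.6)
print(f"{size:,.0f} light years")
```

The same one-liner converts any angular extent at a known distance, which is how the ~1 kpc and ~400 pc scales later in the article translate between angles on the sky and physical lengths.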
Polycyclic aromatic hydrocarbon (PAH) emission was detected at 11.3 and 17 μm, as well as [Ne ii], [Ne iii], and [S iii] lines in the center of NGC 4636 (within re/8) using the Spitzer IRS. The far-infrared emission of the galaxy, as observed by the Infrared Space Observatory, is 50 times more than expected based on stellar emission alone. This strongly suggests that there is dust, probably accreted in a recent merger with a gas-rich galaxy. Hα observations reveal the presence of warm (T ~ 10⁴ K) ionized gas in the inner kpc of NGC 4636. Spectra of this gas indicate irregular motion, with a typical velocity of 150–200 km/s. Hα maps of the galaxy core show the presence of a cavity in the distribution of the ionized gas encircled by a dense shell located at a distance of ~400 pc from the center. Again, the most plausible explanation is gas expansion caused by AGN activity. In NGC 4636, the [C ii] emission extends to a radius of ~1 kpc and is centrally peaked. The velocities inferred from the [C ii] line are consistent with those measured for the Hα line. Finally, NGC 4636 has an excess of cold dust, approximately cospatial with the ionized and molecular gas. As above, this dust is expected to be embedded in cold gas, to be protected against rapid sputtering. The extended dust distribution originates from the ejection of cold gas by AGN activity 10 Myr ago. 
The color distribution of the globular clusters in the galaxy is bimodal, a distribution that has been observed in other galaxies too. The globular clusters are characterised based on their color as blue or red. The population of red clusters is higher. As with color, the metallicity distribution is bimodal, with two peaks at [Fe/H] = −1.23 (σ = 0.32) and −0.35 (σ = 0.19). The ages of the globular clusters in NGC 4636 vary from 2 to 15 billion years, with a bit more than a quarter of the clusters having ages less than 5 billion years. It has been suggested that the younger clusters were formed during the merging of smaller galaxies with the elliptical galaxy. The velocity dispersion of the clusters is km/s, with the velocity dispersion of the blue clusters being slightly larger. This velocity dispersion is similar to that of Messier 60, which is, however, a brighter galaxy. Comparing the velocity dispersion of the globular clusters with the stellar one, it is calculated that the mass-to-light ratio is not constant, but should increase as the galactocentric distance increases, indicating the existence of an extended dark matter halo in NGC 4636. 
The arms form the rim of two large ellipsoid bubbles of hot gas. One more bubble-like feature has been detected about 2 kpc south of the northeastern arm. A weak radio source, elongated in the NE–SW direction, connects the NE and SW bubbles. These large bubbles are likely the result of shocks generated by the AGN jets. It is possible that the bubbles have different ages, generated by different AGN outbursts, as indicated by the presence of radio-emitting plasma in one cavity, while the others are radio-quiet. NGC 4636 has an X-ray-bright core, having a radius of ~1 kpc. The core shows a central cavity surrounded by a bright edge. Interestingly, the small X-ray cavity surrounds the ~1 kpc radio jet detected at 1.4 GHz and is likely generated by the jet. Thus, the X-ray and radio observations point to a scenario in which gas may be currently outflowing in the central kpc of NGC 4636. There are 318 point X-ray sources in the field of NGC 4636. About 25% of them are identified as background sources. 77 of the sources match the location of globular clusters. No correlation was found between the X-ray luminosities of the matched point sources and the luminosity or color of the host GC candidates. The other point sources are low-mass X-ray binaries. Supernovae Two supernovae have been observed in NGC 4636. The first, designated SN 1939A, was discovered on 17 January 1939 by Fritz Zwicky. It was a type Ia supernova whose maximum magnitude was estimated at 11.9. On 12 January 2020, Kōichi Itagaki discovered another type Ia supernova, designated SN 2020ue. Nearby galaxies NGC 4636 is the foremost galaxy of the galaxy group known as the NGC 4636 group. Other members of the group include NGC 4457, NGC 4586, NGC 4587, NGC 4600, NGC 4665, and NGC 4688. These galaxies, along with NGC 4753, Messier 61 and their groups form the southern boundary of the Virgo cluster. 
It can be difficult to determine which galaxies belong to which group, especially around the southern edge of the Virgo cluster, where there is a confusion of galaxies at different distances. NGC 4636 has also been listed as a member of the Virgo Cluster. See also NGC 720 – another elliptical galaxy with X-ray halo References External links NGC 4636 on SIMBAD NGC 4636: Hot Galactic Arms Point To Vicious Cycle, X-ray image by Chandra X-ray Observatory Elliptical galaxies Virgo (constellation) 4636 07878 42734 Astronomical objects discovered in 1784 Discoveries by William Herschel
NGC 4636
[ "Astronomy" ]
1,895
[ "Virgo (constellation)", "Constellations" ]
59,614,173
https://en.wikipedia.org/wiki/Icerudivirus%20SIRV2
Sulfolobus islandicus rod-shaped virus 2, also referred to as SIRV2, is an archaeal virus whose only known host is the archaeon Sulfolobus islandicus. This virus belongs to the family Rudiviridae. Like other viruses in the family, it is common in geothermal environments. Biology and biochemistry SIRV2 has a linear double-stranded DNA genome. The viral DNA is replicated by 4 host DNA polymerases: Dpo1 through Dpo4. The virus has a rod-shaped morphology with a width of 23 nanometers (nm) and a length of 900 nm. Three terminal fibers, 28 nm in length, have been observed on both ends of the virus. The terminal fibers mediate attachment of the virus to type 4 pili abundantly present on the host cell surface. SIRV2 is able to survive additions of 6 molar (M) urea, absolute ethanol, octanol-2, and 0.1% Triton X-100 at neutral pH and 25 degrees Celsius. In vitro testing has shown that SIRV2 is still able to infect at 70–80 degrees Celsius and in a pH 3 solution. SIRV2gp19 was found to be a single-stranded DNA endonuclease in 2011. This was proven by mutating the SIRV2gp19 protein Motif II from the amino acid aspartate to alanine, which resulted in a loss of nuclease activity. This protein is functional within pH 7–10. Magnesium chloride was found to be a cofactor for this protein in 1971. Sodium chloride concentrations above 100 mM inhibit SIRV2gp19. Structure A three-dimensional reconstruction of the SIRV2 virion at ~4 angstrom resolution has been obtained by cryo-electron microscopy. The structure revealed a previously unknown form of virion organization, in which the alpha-helical major capsid protein of SIRV2 wraps around the DNA, making it inaccessible to solvent. The viral DNA was found to be entirely in the A-form, which suggested a common mechanism with bacterial spores for protecting DNA in the most adverse environments. References Archaeal viruses Ligamenvirales Rudiviridae
Icerudivirus SIRV2
[ "Biology" ]
462
[ "Archaea", "Archaeal viruses" ]
59,614,227
https://en.wikipedia.org/wiki/Azorudivirus%20SRV
Stygiolobus rod-shaped virus (SRV), scientific name Azorudivirus SRV, is an archaeal virus and the sole species in the genus Azorudivirus. Its only known host is Stygiolobus archaea. References Archaeal viruses Ligamenvirales
Azorudivirus SRV
[ "Biology" ]
65
[ "Virus stubs", "Viruses", "Archaea", "Archaeal viruses" ]
59,615,541
https://en.wikipedia.org/wiki/Institute%20of%20Acoustics%2C%20Chinese%20Academy%20of%20Sciences
The Institute of Acoustics (IOA, ) of the Chinese Academy of Sciences (CAS) was established in 1964 by the Chinese government in the context of China's national defense needs for acoustic research, under the auspices of Marshal Nie Rongzhen. By the end of 2017, the IOA counted more than 700 researchers focusing on the study of basic and applied acoustics, in the following fields: Underwater acoustics and underwater acoustical detection; Environmental acoustics and noise control technologies; Ultrasonics and acoustical micro-electromechanical system technologies; Communication acoustics, language/speech information processing; Integration of acoustics with digital systems, and network new media technologies. Seven academicians of the Chinese Academy of Sciences have been elected from the IOA; they were/are: Wang Dezhao, Ma Dayou, Ying Chongfu, , Hou Chaohuan, Li Qihu, Wang Chenghao. IOA is the de facto sponsor of the Acoustical Society of China (ASC), a nongovernmental organization officially affiliated with the China Association for Science and Technology. In 2012, the ASC co-hosted a joint meeting with the Acoustical Society of America in Hong Kong. In 2014, the IOA hosted the International Congress on Sound and Vibration in Beijing. In 2015, the IOA co-hosted the 9th International Conference on Auditorium Acoustics in Paris with the French Acoustics Society. According to the IOA official website, it will co-host an International Congress on Ultrasonics with the ASC in 2021. The IOA publishes 7 academic journals, among others Acta Acustica (incl. English version) and Applied Acoustics. References Research institutes of the Chinese Academy of Sciences Acoustics 1964 establishments in China Physics research institutes
Institute of Acoustics, Chinese Academy of Sciences
[ "Physics" ]
366
[ "Classical mechanics", "Acoustics" ]
59,616,019
https://en.wikipedia.org/wiki/Frank%E2%80%93Van%20der%20Merwe%20growth
Frank–Van der Merwe growth (FM growth) is one of the three primary modes by which thin films grow epitaxially at a crystal surface or interface. It is also known as 'layer-by-layer growth'. It is considered an ideal growth model, requiring perfect lattice matching between the substrate and the layer growing on to it, and it is usually limited to homoepitaxy. For FM growth to occur, the atoms that are to be deposited should be more attracted to the substrate than to each other, which is in contrast to the layer-plus-island growth model. FM growth is the preferred growth model for producing smooth films. It was first described by South African physicist Jan van der Merwe and British physicist Frederick Charles Frank in a series of four papers based on Van der Merwe's PhD research between 1947 and 1949. See also Epitaxy Thin films Molecular-beam epitaxy References Thin films
Frank–Van der Merwe growth
[ "Materials_science", "Mathematics", "Engineering" ]
191
[ "Nanotechnology", "Planes (geometry)", "Thin films", "Materials science" ]
59,616,443
https://en.wikipedia.org/wiki/NGC%207190
NGC 7190 is a barred lenticular galaxy registered in the New General Catalogue. It is located in the direction of the Pegasus constellation. It was discovered by the French astronomer Édouard Stephan on 28 September 1870 using an 80.01 cm (31.5 inch) reflector. See also New General Catalogue List of NGC objects (7001–7840) References External links Lenticular galaxies Pegasus (constellation) 7190 11885 067928 Astronomical objects discovered in 1870 Discoveries by Édouard Stephan
NGC 7190
[ "Astronomy" ]
100
[ "Pegasus (constellation)", "Constellations" ]
59,616,743
https://en.wikipedia.org/wiki/Mountain%20View%20train%20collision
The Mountain View train collision occurred on 8 January 2019 when two passenger trains collided at station, Pretoria, South Africa. Four people were killed and more than 600 others were injured. Collision The collision occurred at about 09:30 a.m. on 8 January 2019 when two passenger trains collided at station, Pretoria, South Africa. It is unclear whether the accident was a head-on collision or a rear-end collision. There were over 800 passengers on the two trains. Four people were killed and more than 620 were injured. Almost all of the injured sustained minor injuries, with some moderately injured. Two critically injured victims were airlifted to hospital. Many people were trapped in the wreckage, and fears were expressed that the casualty toll would grow as recovery operations took place. Investigation An investigation was opened into the accident. Vandalism and cable theft were suggested as causes for the collision. References 2019 in South Africa History of Pretoria January 2019 events in South Africa Railway accidents in 2019 Train collisions in South Africa 2019 disasters in South Africa Events in Pretoria
Mountain View train collision
[ "Technology" ]
214
[ "Railway accidents and incidents", "Rail accident stubs" ]
59,617,028
https://en.wikipedia.org/wiki/The%20spider%20and%20the%20fly%20problem
The spider and the fly problem is a recreational mathematics problem with an unintuitive solution, asking for a shortest path or geodesic between two points on the surface of a cuboid. It was originally posed by Henry Dudeney. Problem In the typical version of the puzzle, an otherwise empty cuboid room 30 feet long, 12 feet wide and 12 feet high contains a spider and a fly. The spider is 1 foot below the ceiling and horizontally centred on one 12′×12′ wall. The fly is 1 foot above the floor and horizontally centred on the opposite wall. The problem is to find the minimum distance the spider must crawl along the walls, ceiling and/or floor to reach the fly, which remains stationary. Solutions A naive solution is for the spider to remain horizontally centred, and crawl up to the ceiling, across it and down to the fly, giving a distance of 42 feet. Instead, the shortest path, 40 feet long, spirals around five of the six faces of the cuboid. Alternatively, it can be described by unfolding the cuboid into a net and finding a shortest path (a line segment) on the resulting unfolded system of six rectangles in the plane. Different nets produce different segments with different lengths, and the question becomes one of finding a net whose segment length is minimum. Another path, of intermediate length √1658 ≈ 40.7 feet, crosses diagonally through four faces instead of five. For a room of length l, width w and height h, with the spider a distance b below the ceiling and the fly a distance a above the floor, the length of the spiral path is √((a + b + l)² + (w + h)²), while the naive solution has length b + l + (h − a). Depending on the dimensions of the cuboid, and on the initial positions of the spider and fly, one or another of these paths, or of four other paths, may be the optimal solution. However, there is no rectangular cuboid, and two points on the cuboid, for which the shortest path passes through all six faces of the cuboid. 
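The unfolding comparison above can be sketched in a few lines of Python. The sketch below assumes the spider sits b below the ceiling and the fly a above the floor, both horizontally centred on opposite w-by-h walls; the four-face expression is derived here for those centred positions as an illustration, not quoted from the literature:

```python
from math import hypot

def candidate_paths(l, w, h, a, b):
    """Three candidate crawl lengths found by unfolding the room into a net.

    Spider is b below the ceiling, fly is a above the floor, each
    horizontally centred on opposite w-by-h end walls.
    """
    naive = b + l + (h - a)                        # up, straight over the ceiling, down
    four_face = hypot(b + l + w / 2, (h - a) + w / 2)  # diagonal across four faces
    spiral = hypot(a + b + l, w + h)               # spirals around five faces
    return naive, four_face, spiral

# Classic dimensions: 30 x 12 x 12 room, spider and fly 1 foot from ceiling/floor.
naive, four_face, spiral = candidate_paths(30, 12, 12, 1, 1)
# naive = 42 feet, four_face = sqrt(1658) ~ 40.7 feet, spiral = 40 feet
```

With Dudeney's dimensions the spiral unfolding is the shortest of the three, matching the 40-foot answer.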
A different lateral thinking solution, beyond the stated rules of the puzzle, involves the spider attaching dragline silk to the wall to lower itself to the floor, and crawling 30 feet across it and 1 foot up the opposite wall, giving a crawl distance of 31 feet. Similarly, it can climb to the ceiling, cross it, then attach the silk to lower itself 11 feet, also a 31-foot crawl. History The problem was originally posed by Henry Dudeney in the English newspaper Weekly Dispatch on 14 June 1903 and collected in The Canterbury Puzzles (1907). Martin Gardner calls it "Dudeney's best-known brain-teaser". A version of the problem was recorded by Adolf Hurwitz in his diary in 1908. Hurwitz stated that he heard it from L. Gustave du Pasquier, who in turn had heard it from Richard von Mises. References Recreational mathematics Geodesic (mathematics)
The spider and the fly problem
[ "Mathematics" ]
589
[ "Recreational mathematics" ]
59,617,340
https://en.wikipedia.org/wiki/Implant%20resistance%20welding
Implant resistance welding is a method used in welding to join thermoplastics and thermoplastic composites. Resistive heating of a conductive material implanted in the thermoplastic melts the thermoplastic while pressure is applied in order to fuse two parts together. The process settings such as current and weld time are important, because they affect the strength of the joint. The quality of a joint made using implant resistance welding is determined using destructive strength testing of specimens. Applications Implant resistance welding is used to join thermoplastic composite components in the aerospace industry. For example, PEEK and PEI laminate components for use in U.S. Air Force aircraft and a GF-PPS component on the Airbus A380 are joined using implant resistance welding. Electrofusion welding is a specific type of implant resistance welding used to join pipes. Process During the implant resistance welding process, current is applied to a heating element implanted in the joint. This current flowing through the implant produces heat through electrical resistance, which melts the matrix. Pressure is applied to push the parts together and molecular diffusion occurs at the melted surfaces of the parts, creating a joint. Implants Implants serve as the source of heat to melt the thermoplastic. The heat is created through resistive heating as a current is applied to the implant. Two common types of implants are carbon fiber and stainless-steel mesh. Carbon Fiber Carbon fiber implants can be further separated into unidirectional and fabric types. Unidirectional carbon fibers do not transfer heat across the fibers easily; therefore, carbon fiber fabric works better to heat the entire surface evenly. 
This difference affects the performance of the resulting weld: welded joints using carbon fiber fabric can have 69% higher shear strength and 179% more interlaminar fracture toughness compared to those using unidirectional carbon fibers. For carbon fiber reinforced thermoplastics, the carbon fiber heating element matches the reinforcing material, avoiding the introduction of a new material. Stainless Steel Mesh Welded joints with stainless steel mesh implants tend to have higher strength than welds using carbon fiber implants and trap less air in the joint. Stainless steel wire can be placed between two layers of resin to avoid leaving spaces in the holes of the mesh. However, there are reasons to avoid using stainless steel in favor of carbon fiber, including increased weight, contamination by the metal, possible stress concentrations, and possible corrosion. Energy Input The amount of energy input into the system (E) depends on the resistance of the heating elements (R), the current applied to the heating elements (I), and the amount of time the current is applied (t). Alternating current (AC) and direct current (DC) both work in this process. The energy produced is calculated using the following equation: E = I²Rt. Research has shown the input variable with the most impact on the performance of the resulting joint is the current. The same amount of energy can be input into the part by applying a low current for a long period of time or a high current for a short amount of time. In general, a higher shear strength of the joint is achieved using a higher current for a shorter time. Longer heating times at lower currents do not heat the joint surface as evenly, which can cause the fiber reinforcement to move within the melted matrix. If the current is too high, however, it can result in residual stresses and warpage. 
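The current-versus-time trade-off described above follows directly from the Joule-heating relation E = I²Rt; the sketch below uses hypothetical current, resistance, and time values chosen only to make the two schedules deliver equal energy:

```python
def weld_energy(current_a, resistance_ohm, time_s):
    """Energy delivered to the implant by resistive heating: E = I**2 * R * t (joules)."""
    return current_a ** 2 * resistance_ohm * time_s

# Hypothetical settings: both schedules deliver the same 50 kJ, but per the
# discussion above, the higher-current, shorter-time schedule tends to heat
# the joint surface more evenly and give higher shear strength.
low_slow = weld_energy(50, 0.5, 40)    # 50 A for 40 s
high_fast = weld_energy(100, 0.5, 10)  # 100 A for 10 s
```

Doubling the current quarters the time needed for the same energy, since E scales with the square of I but only linearly with t.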
For a given constant electrical power, the temperature of the material surrounding the implants depends directly on the weld time: a longer weld time yields a higher temperature. The lap shear strength and the weld time are also correlated. Initially, there is a positive correlation between weld time and strength; however, the strength peaks at a certain weld time, and beyond this optimal weld time the strength decreases. Pressure Pressure is applied to the joining surfaces to prevent deconsolidation, allow intermolecular diffusion, and push air out of the joint. The pressure can be applied using displacement or pressure control. Pressure also ensures good contact between the implant and the bulk material. The pressure on the implant must create good contact without being so high that it severs the implant. This is achieved with pressures of 4 to 20 MPa for carbon fiber and 2 MPa for stainless steel mesh heating elements. Strength Testing Lap shear strength (LSS) testing, in accordance with ASTM D 1002, is a method of destructive testing used to determine the strength of implant resistance welds of thermoplastic composite materials. For this test, two rectangular samples of the composite are lapped at the ends and joined at the lap interface using resistance implant welding. A tensile test is then performed on the welded sample, with the joint surface loaded in pure shear: a load frame pulls the sample until failure and measures the maximum load. The lap shear strength is this maximum tensile load divided by the lapped area. Failure Modes Interfacial failure, or tearing, occurs when the resin or laminate in immediate contact with the heating element on either side is pulled away, leaving the mesh or fabric heating element exposed. This type of failure is associated with low LSS of the sample and can occur as a result of inadequate heat input into the weld. 
Another failure mode associated with low LSS is cohesive failure, which is a failure of the welded material, either the melted base material or resin surrounding the mesh. Cohesive failure is observed in samples with too much heat input during welding, which deteriorates the thermoplastic. Samples with high LSS generally fail due to debonding of the reinforcing fiber-matrix surface or other base material failure, known as intralaminar failure. References Welding
Implant resistance welding
[ "Engineering" ]
1,222
[ "Welding", "Mechanical engineering" ]
59,617,495
https://en.wikipedia.org/wiki/Reoxidant
In chemistry, a reoxidant is a reagent that regenerates a catalyst by oxidation. In some cases they are used stoichiometrically; in other cases only small amounts are required. Applications OsO4-catalyzed dihydroxylations Reoxidants are commonly used in reactions catalyzed by osmium tetroxide, which is a primary oxidant converting alkenes to glycols. The spent catalyst is an osmium(VI) complex, which reacts with a reoxidant to regenerate Os(VIII). Typical reoxidants for this application include pyridine-N-oxide, ferricyanide/water, and N-methylmorpholine N-oxide. Vanadium(III)-based alkene polymerizations As catalysts for the polymerization of dienes, vanadium complexes are activated with alkylaluminium chlorides, e.g. diethylaluminium chloride. The organoaluminium reagent installs alkyl groups on the V(III) precatalyst. During catalysis or during catalyst activation, some vanadium(III) is reduced to inactive vanadium(II) derivatives. To correct for this reduction, reoxidants such as methyl trichloroacetate are added. The alkyl chloride functions as a source of a chlorine radical, which adds to the inactive V(II) species. In some cases, the reoxidants are called rejuvenators. Oxidations with TEMPO (2,2,6,6-Tetramethylpiperidin-1-yl)oxyl, commonly known as TEMPO, is an expensive but effective oxidant for converting alcohols to carbonyl compounds. With iodine as the reoxidant, TEMPO-H is oxidized back to TEMPO, which then functions catalytically: oxidation: RCH2OH + 2 TEMPO → RCHO + 2 TEMPO-H reoxidation: 2 TEMPO-H + I2 → 2 TEMPO + 2 HI References Reaction mechanisms Catalysis Oxidizing agents
Reoxidant
[ "Chemistry" ]
453
[ "Catalysis", "Reaction mechanisms", "Redox", "Oxidizing agents", "Physical organic chemistry", "Chemical kinetics" ]
59,618,203
https://en.wikipedia.org/wiki/Charles%20C.%20Copeland
Charles C. Copeland is an American infrastructure engineer who has helped preserve and maintain several well-known New York City buildings and has developed innovative energy-conservation initiatives. Among the more iconic buildings are the Empire State Building, Grand Central Terminal, and the Alexander Hamilton Customs House. The energy-conserving innovations include an early (1974) solar energy rooftop installation in Manhattan and a 2015 patent for a control sequence to reduce peak utility steam demand in Manhattan buildings. He is president and CEO of Goldman Copeland Consulting Engineers, which also works with many of the nation's largest commercial property owners. Early life and education Charles Copeland was born in New York and raised in Westchester County, graduating from Ardsley High School. He earned a bachelor's degree in mechanical engineering at Missouri University of Science and Technology and a master's degree in mechanical engineering at City College of New York. He is a licensed professional engineer. Career In 1970, Charles Copeland joined the consulting engineering firm Goldman & Sokolow, founded in 1968, which became Goldman Copeland in 1991. He has overseen the firm's work on such New York City landmarks as Carnegie Hall, Columbia University, the Empire State Building, Grand Central Terminal, the Guggenheim Museum, the National Museum of the American Indian, and New York University. He began addressing energy needs in 1974, designing an early, influential solar thermal collector installation for a homesteading group resurrecting an abandoned building on Manhattan's Lower East Side. A windmill on the roof occasionally created an excess of electric power, which led to a dispute with Con Edison, which at that time prohibited any connection to its electrical grid. The dispute reached the New York State Public Service Commission, where the homesteaders – represented pro bono by former US Attorney General Ramsey Clark – prevailed. 
The ruling was a crucial forerunner of the federal enactment in 1978 of the Public Utility Regulatory Policies Act, which was key to enabling safe connections to the electrical grid. In 1988, Copeland was responsible for managing engineering work under the New York City Energy Conservation Capital Program, which was the largest municipal energy conservation program of its kind in the United States. Awards Charles Copeland has received numerous awards and recognition. In 2018, he oversaw the preparation of a geothermal screening tool for every lot in New York City, which was honored with a Platinum Award by the Association of Consulting Engineers Council. Also in 2018, the New York Energy Consumers Council presented him with its leadership and innovation award for his legacy of creative energy solutions. In 2015, he was awarded a patent for a control sequence to reduce peak utility steam demand in New York City buildings by storing thermal energy in building hydronic systems. He was named Energy Engineer of the Year in 2006 by the Association of Energy Engineers, and was named a Fellow of the American Society of Heating, Refrigerating and Air Conditioning Engineers in 1991. In 2019, Copeland was named ENR New York's Legacy Award winner. In 2020, Copeland was named a recipient of the CCNY Townsend Harris Award as well as the ASHRAE Louise and Bill Holladay Distinguished Fellow Award. Articles Bylined articles by Charles Copeland have been published by leading industry publications, addressing major engineering challenges and opportunities such as "Mapping Geothermal Potential in NYC," "Lessons in Fire Protection from Notre-Dame," "Improving the Performance of Steam Turbine Chiller Plants," and "Developing Geothermal Screening Web Tools." Engineering News-Record published a profile of Copeland in 2020 titled "Sustainability is Engineer Charlie Copeland's Passion." 
Crain's New York Business published a profile titled "The Man Who Air-conditioned Grand Central Terminal Worries About Climate Change." References Year of birth missing (living people) Living people Missouri University of Science and Technology alumni Engineers from New York City City College of New York alumni 21st-century American engineers Fellows of ASHRAE
Charles C. Copeland
[ "Engineering" ]
771
[ "Building engineering", "Fellows of ASHRAE" ]
59,619,007
https://en.wikipedia.org/wiki/Stars-AO
Stars-AO, also known as Aoi, is an experimental CubeSat carrying a small camera. It uses amateur radio frequencies to communicate with the ground. Overview References External links Space telescopes
Stars-AO
[ "Astronomy" ]
38
[ "Space telescopes", "Astronomy stubs", "Spacecraft stubs" ]
59,620,357
https://en.wikipedia.org/wiki/Mengla%20dianlovirus
Mengla dianlovirus (MLAV, also written Měnglà virus) is a type of filovirus identified in a Rousettus bat in Mengla County, Yunnan Province, China, and was first reported in January 2019. It is classified in the same family as Ebolavirus and Marburgvirus. It is the only member of the genus Dianlovirus. The name derives from Diān (滇), the Chinese-language abbreviation for Yunnan, added to "filovirus", the common name for Filoviridae. Neither the species nor the genus is listed in the 2018 ICTV classification, as the virus was formally described after that report was released. A formal proposal was submitted for the taxa in January 2019. MLAV proteins, including VP35 and VP40, inhibit host immune responses by interfering with interferon signaling, contributing to immune evasion similar to other filoviruses. Like other filoviruses, Mengla virus utilizes the Niemann-Pick C1 (NPC1) receptor for cell entry, a trait that may facilitate cross-species transmission. References Filoviridae Bat virome
Mengla dianlovirus
[ "Biology" ]
231
[ "Virus stubs", "Viruses" ]
59,620,998
https://en.wikipedia.org/wiki/HSLuv
In colorimetry, the HSLuv color space is a human-friendly alternative to the HSL color space. It was formerly known as "husl". It is a variation of the CIE LCH(uv) color space, where the C (colorfulness) component is replaced by a "Saturation" (S) component representing the colorfulness percentage relative to the maximum sRGB can provide given the L and H values. The value has nothing to do with "saturation" in color theory. History The color spaces used widely for computer display, such as standard Red Green Blue (sRGB) (and the color models built on it, like HSL and HSV), are perceptually irregular: even though colors may be specified with evenly spaced hue values, the perceived differences between them are not uniform to the human eye. The CIELUV color space was designed for perceptual uniformity, based on human experiments, and was adopted in 1976 by the International Commission on Illumination (CIE) as a simple-to-compute transformation of the 1931 CIE XYZ color space. CIELUV has been extensively used for applications such as computer graphics which deal with colored lights. Although additive mixtures of different colored lights will fall on a line in CIELUV's uniform chromaticity diagram (dubbed the CIE 1976 UCS), such additive mixtures will not, contrary to popular belief, fall along a line in the CIELUV color space unless the mixtures are constant in lightness. When accessed by polar coordinates, CIELUV becomes functionally similar to the HSL color space, with the problem that its chroma component does not fit into a specific range. Even though the CIELUV and CIELAB color spaces are based on human perception, they are not intuitive to work with in code. By extending CIELUV with a new "saturation" component, HSLuv allows spanning all of the available chroma as a percentage. The HSLuv project is one of the more recent attempts at making these color spaces more intuitive. It allows the CIELUV color space to be used in the same dimensions as the HSL color model. 
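The polar reading of CIELUV mentioned above can be sketched with standard-library math only: chroma is the radial distance in the (u*, v*) plane and hue is the angle. The function name and argument order below are illustrative, not the HSLuv reference API; HSLuv then replaces the unbounded chroma C with a saturation percentage relative to the sRGB gamut limit.

```python
from math import atan2, degrees, hypot

def luv_to_lch(l, u, v):
    """Convert CIELUV (L*, u*, v*) to its cylindrical LCH(uv) form."""
    c = hypot(u, v)                  # chroma: unbounded, unlike HSL's 0-100% saturation
    h = degrees(atan2(v, u)) % 360   # hue angle in degrees, normalized to [0, 360)
    return l, c, h
```

Because C has no fixed upper bound, a color picker built directly on LCH(uv) cannot offer a simple slider; bounding it per (L, H) pair is exactly the problem HSLuv's saturation component solves.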
Referred to as a human-friendly HSL, the original code was written in the Haxe programming language, but the project is now implemented in most of the popular programming languages, including JavaScript. Implementation The reference implementation is written in Haxe and released under the MIT license. HSLuv has been ported to the following computer languages: C, C#, Elm, Emacs, GLSL, Haskell, Haxe, Go, Java, JavaScript, Lua, Objective-C, Perl, PHP, Python, Ruby, Rust, Sass, and Swift. See also CIELUV HSL and HSV References External links Explanation of differences between sRGB and HSLuv color spaces with comparisons and Javascript examples. Javascript implementation of HSLuv with math, examples, comparisons, and links to implementations in various programming languages. Color space
HSLuv
[ "Mathematics" ]
638
[ "Color space", "Space (mathematics)", "Metric spaces" ]
59,621,106
https://en.wikipedia.org/wiki/Ulrich%20M%C3%BCller
Ulrich Müller (born 6 July 1940 in Bogotá) is a German chemist known for his work on solid-state chemistry and the application of crystallographic group theory to crystal chemistry. He is the author of several textbooks on chemistry, solid-state chemistry, and crystallography. Life Müller studied chemistry at the University of Stuttgart from 1959 to 1963. He worked on his dissertation at Purdue University and the University of Stuttgart, finishing it in 1966 in the group of Kurt Dehnicke. From 1967 to 1970, he worked in the group of Hartmut Bärnighausen at the University of Marburg. In 1972, he finished his habilitation. From 1972 to 1975, Müller was a professor of inorganic chemistry at the University of Marburg. From 1975 to 1977, he was a guest professor at the University of Costa Rica. Several professorships of inorganic chemistry followed: University of Marburg from 1977 to 1992, University of Kassel from 1992 to 1999, and University of Marburg from 2000 to 2005. Since 2005, he has been an emeritus professor. Research His research focused on the following topics: application of crystallographic group theory in crystal chemistry to investigate structural relationships of crystalline solids and to predict possible structure types for inorganic compounds synthesis of thio, polysulfido, and polyselenido complexes structural analysis of crystalline solids with X-ray diffraction Awards He was awarded the Literaturpreis des Fonds der chemischen Industrie for his textbook "Anorganische Strukturchemie" (English: Inorganic Structural Chemistry). Publications References Living people 20th-century German chemists Crystallographers 1940 births Academic staff of the University of Marburg University of Stuttgart alumni Solid state chemists
Ulrich Müller
[ "Chemistry" ]
359
[ "Solid state chemists" ]
59,621,783
https://en.wikipedia.org/wiki/Human%20milk%20immunity
Human milk immunity is the protection provided to the immune system of an infant via the biologically active components in human milk. Human milk was previously thought to provide only passive immunity, primarily through secretory IgA, but advances in technology have led to the identification of various immune-modulating components. Human milk constituents provide nutrition and protect the immunologically naive infant, as well as regulate the infant's own immune development and growth. Immune factors and immune-modulating components in human milk include cytokines, growth factors, proteins, microbes, and human milk oligosaccharides. Immune factors in human milk are mainly anti-inflammatory, working primarily without inducing inflammation or activating the complement system. Immune factors Bio-active constituents of human milk that have been cataloged as possessing immune-modulating capabilities include immunoglobulins, lactoferrin, lysozyme, oligosaccharides, lipids, cytokines, hormones, and growth factors. Some of the roles of bio-actives in human milk are theorized based on their function in other parts of the body, but the mechanisms and functions of their activities remain to be discovered. IgA Immunoglobulin A is the best-known immune factor in human milk. In its secretory form, SIgA, it is the most plentiful antibody in human milk, constituting between 80 and 90% of all immunoglobulins present in milk. SIgA provides adaptive immunity by directly targeting specific pathogens that both infant and mother have been exposed to in their environments. Lactoferrin Lactoferrin is an immune protein with strong antimicrobial function in human milk. Lactoferrin protects the infant intestine by binding iron to prevent pathogens from utilizing it as a resource. It also modulates immunity by blocking inflammatory signaling cytokines. Cytokines Cytokines are pluripotent signaling molecules with the ability to bind to specific receptors. 
They can cross the intestinal barrier and mediate immune activity. Their presence in human milk may stimulate lymphocytes responsible for the development of the infant's specific immunity. Cytokines present in human milk include IL-1β, IL-6, IL-8, IL-10, TNFα, and IFN-γ. Origin and establishment Bio-active components are thought to enter human milk in several ways, including secretion by mammary gland epithelial cells and by milk cells. Maternal immune factors are transferred by lymphocytes traveling from the mother's gut to the mammary gland, where the secretory cells of the breast produce antibodies. The origin of the human milk microbiota, including microbes with immune-modulating functions, is not well established. However, several theories, including skin-to-skin contact, the entero-mammary pathway, and the retrograde back-flow hypothesis, have been put forth to explain the microbial composition of human milk. Known factors of influence Lactation stage Human milk immune composition is known to change over the course of lactation. Most notably, antibody levels are lower in mature milk than in colostrum, with SIgA reaching up to 12 grams per liter in colostrum and decreasing to 1 gram per liter in mature milk. Studies find time postpartum to be most influential on the presence of immune factors, including growth factors and lactoferrin. Human milk microbiome The exposure to microbiota through mother's milk is the primary stimulus for immune development in infants. Microbiota interact with the infant's immune system by stimulating the mucous layer, down-regulating the inflammatory response, producing antibodies, and helping initiate oral tolerance. The mucous layer's protection comes from its ability to limit pathogens from attaching to the infant intestinal tract. Human milk oligosaccharides Human milk oligosaccharides (HMOs) are carbohydrate components in human milk. 
They are mostly indigestible and work as a prebiotic to feed commensal bacteria in the infant gut. Studies show that HMOs also function as immune-modulators by blocking receptors that allow pathogenic bacteria to attach to the infant intestinal epithelium. Delivery mode There are observed differences in immune factor composition in the milk of mothers who delivered by cesarean versus vaginally. A study of 82 women saw increased levels of IgA in the colostrum of women who had cesarean births after experiencing labor, compared to women who delivered vaginally or had elective cesareans. Maternal characteristics Parity Milk immunity levels are observably lower in women with higher parity. A study among the Ariaal women of Kenya saw that milk IgA decreased drastically only in women who had given birth to eight or more children. Diet Human milk composition remains relatively stable despite maternal dietary changes, except in cases of extreme maternal depletion. Seasonal changes and malnutrition influence the concentration of immune factors. In addition, intervention studies have confirmed that both fish oil and fish consumption during pregnancy can alter immune-modulating components in human milk. Environmental factors Differences in the maternal environment, such as rural versus urban settings, exposure to farming, and exposure to pathogens, have been shown to affect human milk immune factor variation. Geographic location Geographic location is known to play a role in human milk variation, with country of residence specifically linked with immune factor variation. A study found variation in levels of growth factor in both mature milk and colostrum to be correlated with geographic location. However, a larger study found support for consistency in the presence of a small group of immunological factors in mature milk independent of geographic location. 
Impact on health Health outcomes for breastfed versus formula-fed infants Over the last century, breastfeeding has been consistently shown to reduce infant mortality and morbidity, particularly from infectious disease. Comparative research between human milk and formula has pointed towards the bio-active components in human milk as potential sources of its immunological protection. Studies have shown that breastfed infants respond better to vaccines and are better protected against diarrhea, otitis media, sepsis, necrotizing enterocolitis, celiac disease, obesity, and inflammatory bowel disease than formula-fed infants. Human breast milk is seen as particularly beneficial to infants born before full term and those underweight at birth, who are at a higher risk of infectious diseases such as sepsis and meningitis. Also, there is a lower chance of contamination with direct breastfeeding than with mixing formula with water or other animal milks, which may also help explain why human milk is more protective for the infant. Long term protection Because various components present in human breast milk stimulate the growth of the immune system, there is growing interest in whether breastfeeding provides a long-term protective effect against auto-immune and inflammatory diseases. Milk sharing and donor milk The WHO infant feeding guidelines advise the use of donor milk when the mother's milk is not available. With the understanding that breast milk provides immune protection that is absent in formula, mothers have turned to milk sharing in order to give formula alternatives to their infants. Milk sharing is defined as the donation of milk without monetary benefit. In addition, milk banks have emerged to regulate and pasteurize donated milk to be sold in the legal market. The main concern with bank milk is that many immune cells, commensal microbiota, and bio-active proteins are lost during the pasteurization process. 
Donor milk is in high demand for infants in the neonatal intensive care unit (NICU), who have been shown to benefit most from access to human milk. Immunological consequences or benefits of milk sharing are not well documented, but it has been speculated that allo-nursing, or nursing from multiple females, may provide infants with an immune boost. The reported risks associated with unregulated milk sharing include the possible transmission of drugs, toxins, pathogenic bacteria, HIV, and other viruses. However, some researchers believe that allo-nursing and milk sharing may have been part of our evolutionary past. Evidence of milk sharing history includes the wet nursing practices of the 20th century, milk kinship in Islamic tradition, and documentation of allo-nursing in primate species. Evolutionary implications There is evidence of a relationship between the microbes that have co-evolved with humans as their host and the human immune system. The transfer of microorganisms from mother to offspring is universal in animals. In humans, microbial exchange occurs primarily through placental transfer and breast milk. The presence of these complex microbial communities in the human body suggests that the immune system has been selected to remember and mediate the colonization of these microorganisms within the human host. Further, microbial dysbiosis in infants is strongly associated with immune-mediated diseases such as allergies and necrotizing enterocolitis. In early life, an infant's immune system is considered immature due to its lack of resources necessary for defense against infection. An infant is not able to produce specific cytokines or IgA and is limited to producing mostly IgM antibodies. The human infant is unable to adequately protect itself without the immune-stimulating and immune-modulating components present in human milk. 
This dynamic affirms the consensus among researchers that human milk evolved to provide not only nutritional but also immunological benefits to the infant. Some researchers have proposed that the mammary gland and milk production evolved as part of the human innate immune system, with their immunologically protective role predating their nutritional role. See also Human milk microbiome Human milk oligosaccharide References Breastfeeding Immunology
Human milk immunity
[ "Biology" ]
2,000
[ "Immunology" ]
59,623,695
https://en.wikipedia.org/wiki/Dermatotrophy
Dermatotrophy is a rare reproductive behaviour in which the young feed on the skin of their parents. It has been observed in several species of caecilian, including Boulengerula taitana, and is claimed to exist in the newly discovered but as yet undescribed species Dermophis donaldtrumpi. References Caecilians Amphibian anatomy Reproduction in animals
Dermatotrophy
[ "Biology" ]
76
[ "Reproduction in animals", "Behavior", "Reproduction" ]
59,624,054
https://en.wikipedia.org/wiki/Motoyashiki%20Pottery%20Kiln%20Site
The Motoyashiki Pottery Kiln Site is an archaeological site containing late Sengoku to early Edo period kilns located in the Izumi neighborhood of the city of Toki, Gifu in the Chūbu region of Japan. The ruins were designated a National Historic Site of Japan in 1967. Many of the pottery shards excavated from this site have been collectively designated as National Treasures or National Important Cultural Properties of Japan. Overview The Motoyashiki Pottery Kiln site is on a steep south slope facing the valley of the Tanigawa River north of Tokishi Station, and consists of one large and three smaller kilns. The large kiln is a noborigama with a total length of 24 meters. It was constructed in the Keichō era (1596–1615) by Katō Junpei, a potter from Mino Province who had apprenticed at the Karatsu ware kilns in Kyushu. The large kiln has 14 chambers, each with an average width of 2.2 meters and a depth from 0.55 to 1.3 meters, increasing with elevation. The floor is inclined at an angle of between 10 and 20 degrees. From the shards recovered at this site, this kiln was determined to be the origin of Oribe ware pottery. The Motoyashiki Higashi Kiln No. 1 was built in the latter half of the 16th century, and is about four meters wide. This kiln was used to produce Tenmoku tea bowls and other glazed pottery. It has been restored to its original appearance. The Motoyashiki Higashi Kiln No. 2 has a total length of 7.5 meters and a width of 3.9 meters, and was built next to the Higashi No. 1 kiln. It appears to have been used for experimentation with new designs such as Setoguro, Kiseto, and Haishino pottery. The Motoyashiki Higashi No. 3 kiln has a remaining length of 5.8 meters and a width of 2.9 meters, and was used for mass-producing Shino ware pottery. The kilns and surrounding area have been preserved as the Oribe-no-sato Park, and there is an exhibition room for relics excavated at the site. The site is about a 15-minute walk from Tokishi Station on the JR Central Chūō Main Line. 
Gallery See also List of Historic Sites of Japan (Gifu) References External links Gifu Prefecture home page Toki city home page History of Gifu Prefecture Toki, Gifu Historic Sites of Japan Japanese pottery kiln sites Mino Province
Motoyashiki Pottery Kiln Site
[ "Chemistry", "Engineering" ]
531
[ "Kilns", "Japanese pottery kiln sites" ]
59,624,215
https://en.wikipedia.org/wiki/Shell%20gold
In art history and the craft of gilding, shell gold is gold paint given its colour by very small pieces of real gold, normally obtained either from waste gold from goldsmithing and gilding, ground-up gold leaf, or fragments that have come off a gold-ground painting or other gilded object. The name comes from the medieval habit of using sea-shells to hold pigments and paints (of all colours) while painting. In painting it was usually used for details and highlights. A common source is the collecting and processing of flakes of elemental gold that have flaked away from a surface during the process of gilding it. Once the flakes of leftover gold (called "skewings") have been gathered, they are mixed with a small amount of honey and ground together with a mortar and pestle until they become a powder. The honey is then removed by placing this mixture in a bath of hot water, leaving the gold flakes to collect at the bottom. The upper layer of water is poured off and the process is repeated several times, the last few with deionized water. Following the final rinse, the flakes are left to dry. Once the water has nearly evaporated, a drop of concentrated gum arabic is added and mixed into the flakes, creating a basic paint with gold flakes/dust as pigment. The paint may be applied to a surface using either a brush or the tip of a finger, and can be "reactivated" by only the moisture in an exhaled breath of air. Shell gold and powdered gold are the two principal forms of gold used for making repairs in a surface which has been previously gilded but has been damaged. Shell gold does not require any sizing, whereas powdered gold does. References Gilding Painting Paints
Shell gold
[ "Chemistry" ]
365
[ "Paints", "Coatings" ]
59,624,291
https://en.wikipedia.org/wiki/NGC%207199
NGC 7199 is a barred spiral galaxy listed in the New General Catalogue. It is located in the constellation Indus. It was discovered by the English astronomer John Herschel in 1835 using a 47.5 cm (18.7 inch) reflector. See also List of Messier objects References 7199 Astronomical objects discovered in 1835 Indus (constellation) Barred spiral galaxies Discoveries by John Herschel
NGC 7199
[ "Astronomy" ]
84
[ "Indus (constellation)", "Constellations" ]
59,624,966
https://en.wikipedia.org/wiki/Changchun%20Institute%20of%20Optics%2C%20Fine%20Mechanics%20and%20Physics
The Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP; ), of the Chinese Academy of Sciences (CAS), is a state research institution in Changchun, Jilin, China. It was founded in 1952 as the Institute of Instrumentation of the CAS, by a group of scientists led by Wang Daheng. It was later renamed as the Changchun Institute of Optics and Fine Mechanics. The current name was adopted in 1999 when the institute was merged with the Changchun Institute of Physics, headed by Xu Xurong. Under the leadership of Wang Daheng, the institute played a crucial role in the development of China's strategic weapons, developing high-precision optics for missile guidance systems. It made major breakthroughs for the submarine-launched ballistic missile program. The institute focuses on luminescence, applied optics, optical engineering, and precision mechanics and instruments. It is involved in a number of technology ventures based out of the nearby CAS Changchun Optoelectronics Industrial Park with total assets worth US$403 million. The institute offers undergraduate, master’s and doctoral education programs. The institute developed the Bilibili Video Satellite, launched in September 2020. CGSTL The institute includes the Chang Guang Satellite Technology Corporation (Charming Globe or CGSTL), a commercial offshoot of the institute which manufactures remote sensing satellite buses and unmanned aerial vehicles (drones). Chang Guang Satellite Technology owns Jilin-1 satellite constellation. In September 2024, it launched six Jilin Kuanfu satellites. It already has 31 satellites in orbit and plans to have their constellation reach 138 satellites over the next 4 years. See also Jilin-1 References External links Research institutes of the Chinese Academy of Sciences Education in Changchun 1952 establishments in China Optics institutions Mechanics Physics research institutes Educational institutions established in 1952
Changchun Institute of Optics, Fine Mechanics and Physics
[ "Physics", "Engineering" ]
377
[ "Mechanics", "Mechanical engineering" ]
59,625,279
https://en.wikipedia.org/wiki/JPEG%20XS
JPEG XS (standardized as ISO/IEC 21122) is an interoperable, visually lossless, low-latency and lightweight image and video coding system used in professional applications. Target applications of the standard include streaming high-quality content for professional video over IP (SMPTE ST 2022 and ST 2110) in broadcast and other applications, virtual reality, drones, autonomous vehicles using cameras, and gaming. Although there is not an official acronym definition, XS was chosen to highlight the extra small and extra speed characteristics of the codec. Features Three main features are key to JPEG XS: Visually transparent compression: XS compressed content is indistinguishable from the original uncompressed content (passing ISO/IEC 29170-2 tests) for compression ratios between 2:1 and 10:1. Low latency: The total end-to-end latency introduced by the JPEG XS compression-decompression cycle is minimal. Depending on the configuration, XS typically imposes only between 1 and 32 lines of additional end-to-end latency, when compared to the same system using uncompressed video. Lightweight: JPEG XS is designed to have low computational and memory complexity, allowing for efficient low-power and low-resource implementations on various platforms such as CPU, GPU, FPGA and ASIC. Thanks to these key features, JPEG XS is suitable for any application where uncompressed content is the norm, while still allowing significant savings in the required bandwidth, preserving quality and low latency. Among the targeted use cases are video transport over professional video links (like SDI and professional video over IP), real-time video storage, memory buffers, omnidirectional video capture and rendering, and image sensor compression (for example in cameras and in the automotive industry). JPEG XS favors visually lossless quality in combination with low latency and low complexity, over data reduction through compression. 
It is not a direct competitor to alternative image codecs like JPEG 2000 and JPEG XL, or to video codecs like AV1, AVC/H.264 and HEVC/H.265, which tend to focus on compression efficiency. Other important features are: Exact bitrate allocation: JPEG XS allows the targeted bitrate to be set accurately so that it perfectly matches the available bandwidth (also referred to as constant bitrate or CBR). Multi-generation robustness: JPEG XS allows for at least 10 encoding-decoding cycles without significant quality degradation. This feature allows, for example, the transparent chaining of multiple devices that recompress the signal. Multi-platform interoperability: The algorithms used in JPEG XS allow for efficient implementations on different platforms, like CPU, GPU, FPGA and ASIC. Each of these platform architectures is best exploited when a specific degree of parallelism is available in the implementation. For instance, a multi-core CPU implementation will leverage a coarse-grained parallelism, while GPU or FPGA will work better with a fine-grained parallelism. Moreover, the choice of parallelism used in the implementation at the encoder will not affect that of the decoder. This means that real-time encoding and decoding between platforms is possible, without sacrificing the low complexity, low latency or high-quality properties. Support for mathematical lossless coding (MLS): JPEG XS is also capable of coding images in a mathematically lossless way, to achieve perfect reconstruction at the decoder side (a new profile supported by the 2nd edition). Support for High Dynamic Range (HDR) content: The current version of JPEG XS supports bit-depths of up to 16 bits per component, and it provides several parameterizable non-linear transforms (NLTs) to efficiently compress HDR content. 
Support for RAW Bayer/CFA compression: JPEG XS also has the capability to compress Color Filter Array (CFA) content, such as RAW Bayer content produced by digital cameras. A special color transform, called Star-Tetrix, allows for efficient and direct compression of the original RAW sample values, without the need for converting the Bayer samples to RGB samples first. Accurate flow control: A JPEG XS encoder continuously monitors the amount of bits sent out, and adjusts its rate allocation process to neither overflow nor underflow a normatively defined decoder input buffer. Application domains This section lists the main application domains where JPEG XS is actively used. New and other application domains are subject to be added in the future, for example, frame buffer compression or AR/VR applications. Transport over video links and IP networks Video bandwidth requirements are growing continuously, as video resolutions, frame rates, bit depths, and the amount of video streams are constantly increasing. Likewise, the capacities of video links and communication channels are also growing, yet at a slower pace than what is needed to address the huge video bandwidth growth. In addition, the investments to upgrade the capacity of links and channels are significant and need to be amortized over several years. Moreover, both the broadcast and pro-AV markets are shifting towards AV-over-IP-based infrastructure, with a preference going to 1 Gigabit Ethernet links for remote production or 10G Ethernet networks for in-house facilities. 1G, 2.5G, and 10G Ethernet are cheap and ubiquitous, while 25G or better links are usually not yet affordable. Given the available bandwidth and infrastructure cost, relying on uncompressed video is therefore no longer an option, as 4K, 8K, increased bit depths (for HDR), and higher framerates need to be supported. 
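The bandwidth arithmetic behind this argument is easy to check with a short sketch. The sample format (10-bit 4:2:2) and the 10:1 compression ratio below are illustrative values, not figures mandated by the standard:

```python
def raw_bitrate_bps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Uncompressed video bitrate in bits per second."""
    return width * height * fps * bits_per_sample * samples_per_pixel

# 4K60, 10-bit, 4:2:2 chroma subsampling (2 samples per pixel on average)
uncompressed = raw_bitrate_bps(3840, 2160, 60, 10, 2)
print(uncompressed / 1e9)       # ~9.95 Gb/s: does not fit 1G or 2.5G Ethernet
print(uncompressed / 10 / 1e6)  # ~995 Mb/s at 10:1: within 1 Gigabit Ethernet
```

Under these assumptions, 4K60 video only fits a 1 Gb/s link after roughly 10:1 compression, which matches the transport scenarios described here.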
JPEG XS is a light-weight compression that visually preserves the quality compared to an uncompressed stream, at a low cost, targeted at compression ratios of up to 10:1. With XS, it is, for example, possible to repurpose existing SDI cables to transport 4K60 over a single 3G-SDI (at 4:1), and even over a single HD-SDI (at 8:1). Similar scenarios can be used to transport 8K60 content over various SDI cable types (e.g. 6G-SDI and 12G-SDI). Alternatively, XS enables transporting 4K60 content over 1G Ethernet and 8K60 over 5G or 10G Ethernet, which would be impossible without compression. The following table shows some expected compression ranges for some typical use cases. Real-time video storage and playout Related to the transport of video streams is the storage and retrieval of high-resolution streams, where bandwidth limitations similarly apply. For instance, video cameras use internal storage like SSD drives or SD cards to hold large streams of images, yet the maximum data rates of such storage devices are limited and well below the uncompressed video throughput. Sensor compression As stated, JPEG XS has built-in support for the direct compression of RAW Bayer/CFA images using the Star-Tetrix Color Transform. This transform takes a RAW Bayer pattern image and decorrelates the samples into a 4-component image, with each component having only a quarter of the resolution. This means that the total amount of samples to further process and compress remains the same, yet the values are decorrelated similarly to a classical Multiple Component Transform. Avoiding such conversion prevents information loss and allows this processing step to be done outside of the camera. This is advantageous because it allows demosaicing of the Bayer content to be deferred from the moment of capture to the production phase, where choices regarding artistic intent and various settings can be better made. 
Recall that the demosaicing process is irreversible and requires certain choices, like the choice of interpolation algorithm or the level of noise reduction, to be made upfront. Moreover, the demosaicing process can be power-hungry and will also introduce extra latency and complexity. The ability to push this step out of the camera is possible with JPEG XS and allows more advanced algorithms to be used, resulting in better quality in the end. Standards JPEG XS (ISO/IEC 21122) The JPEG XS coding system is an ISO/IEC suite of standards that consists of the following parts: Part 1, formally designated as ISO/IEC 21122-1, describes the core coding system of JPEG XS. This standard defines the syntax and, similarly to other JPEG and MPEG image codecs, the decompression process to reconstruct a continuous-tone digital image from its encoded codestream. Part 1 does provide some guidelines for the inverse process that compresses a digital image into a compressed codestream, more simply called the encoding process, but leaves implementation-specific optimizations and choices to the implementers. Part 2 (ISO/IEC 21122-2) builds on top of Part 1 to segregate different applications and uses of JPEG XS into reduced coding tool subsets with tighter constraints. The definition of profiles, levels, and sublevels allows for reducing the complexity of implementations in particular application use cases, while also safeguarding interoperability. Recall that lower complexity typically means less power consumption, lower production costs, easier constraints, etc. Profiles represent interoperability subsets of the codestream syntax specified in Part 1. In addition, levels and sublevels provide limits to the maximum throughput in the encoded (codestream) and the decoded (spatial pixel) image domains, respectively. 
Part 2 furthermore also specifies a buffer model, consisting of a decoder model and a transmission channel model, to enable guaranteeing low latency requirements to a fraction of the frame size. Part 3 (ISO/IEC 21122-3) specifies transport and container formats for JPEG XS codestreams. It defines the carriage of important metadata, like color spaces, mastering display metadata (MDM), and EXIF, to facilitate transport, editing, and presentation. Furthermore, this part defines the XS-specific ISOBMFF boxes, an Internet Media Type registration, and additional syntax to allow embedding XS in formats like MP4, MPEG-2 TS, or the HEIF image file format. Part 4 (ISO/IEC 21122-4) is a supporting standard of JPEG XS that provides conformance testing and buffer model verification. This standard is crucial for implementers of XS and for conformance testing. Finally, Part 5 (ISO/IEC 21122-5) represents a reference software implementation (written in ISO C11) of the JPEG XS Part 1 decoder, conforming to the Part 2 profiles, levels and sublevels, as well as an exemplary encoder implementation. A second edition of all five parts is in preparation and will be published by the beginning of 2022. It provides additional coding tools, profiles and levels, and new reference software to add support for efficient compression of 4:2:0 content, RAW Bayer/CFA content, and mathematically lossless compression. RFC9134 - RTP Payload Format for ISO/IEC 21122 (JPEG XS) RFC 9134 describes a payload format for the Real-Time Transport Protocol (RTP, RFC 3550) to carry JPEG XS encoded video. In addition, the recommendation also registers the official Media Type Registration for JPEG XS video as , along with its mapping of all parameters into the Session Description Protocol (SDP). The RTP Payload Format for JPEG XS in turn enables using JPEG XS in SMPTE ST 2110 environments using SMPTE ST 2110-22 for CBR compressed video transport. 
MPEG-TS for JPEG XS ISO/IEC 13818-1:2022, known as MPEG-TS 8th edition, specifies carriage support for JPEG XS in MPEG Transport Streams. See also MPEG-2. Note that AMD1 (Carriage of LCEVC and other improvements) of ISO/IEC 13818-1:2022 contains some additional corrections, improvements, and clarifications regarding embedding JPEG XS in MPEG-TS. VSF TR-07 and TR-08 See VSF TR-07 and TR-08, published by the Video Services Forum. NMOS with JPEG XS A Networked Media Open Specifications (NMOS) document enables registration, discovery, and connection management of JPEG XS endpoints using the AMWA IS-04 and IS-05 NMOS Specifications. See AMWA BCP-006-01, published by the Advanced Media Workflow Association. JPEG XS in IPMX Internet Protocol Media Experience (IPMX) is a suite of open standards and specifications to enable the carriage of compressed and uncompressed video, audio, and data over IP networks for the pro AV market. JPEG XS is supported under IPMX via VSF TR-10-8 and TR-10-11. History The JPEG committee started the standardization activity in 2016 with an open call for a high-performance, low-complexity image coding standard. The best-performing candidate technologies came from intoPIX and Fraunhofer IIS and formed the basis for the new standard. First implementations were demonstrated in April 2018 at the NAB Show and later that year at the International Broadcasting Convention. XS was also presented at CES in 2019. JPEG XS was formally standardized as ISO/IEC 21122 by the Joint Photographic Experts Group, with the first edition published in 2019. A second edition was published in 2022, adding support for direct compression of raw CFA Bayer content, lossless compression, and support for 4:2:0 color subsampling. Today, the JPEG committee is still actively working on further improvements to XS, with the third edition published in 2024. This edition adds support for a temporal decorrelation technology in the wavelet domain, called Temporal Differential Coding (TDC). 
Technical overview Core coding The JPEG XS standard is a classical wavelet-based still-image codec without any frame buffer. While the standard defines JPEG XS based on a hypothetical reference coder, JPEG XS is easier to explain through the steps a typical encoder performs: Component up-scaling and optional component decorrelation: In the first step, the DC gain of the input data is removed and it is upscaled to a bit-precision of 20 bits. Optionally, a multi-component transformation, identical to the JPEG 2000 RCT, is applied. This transformation is a lossless approximation of an RGB to YUV conversion, generating one luma and two chroma channels. Wavelet transformation: Input data is spatially decorrelated by a 5/3 Daubechies wavelet filter. While a five-stage transformation is performed in the horizontal direction, only 0 to 2 transformations are run in the vertical direction. The reason for this asymmetrical filter is to minimize latency. Prequantization: The output of the wavelet filter is converted to a sign-magnitude representation and pre-quantized by a dead zone quantizer to 16-bit precision. Rate control and quantization: The encoder determines by a non-normative process the rate of each possible quantization setting and then quantizes data by either a dead zone quantizer or a data-dependent uniform quantizer. Entropy coding: JPEG XS uses minimalistic entropy coding for the quantized data, which proceeds in up to four passes over horizontal lines of quantized wavelet coefficients. The steps are: Significance coding: In the (optional) first pass, the significance of 32 consecutive wavelet coefficients is coded by a single bit. Bitplane count coding: In the second pass, the number of non-zero bitplanes of groups of four coefficients each, the so-called "bitplane count", is entropy coded through a Golomb-type code. 
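Two of these steps can be made concrete with a small sketch: one level of a reversible 5/3 (Le Gall) integer lifting wavelet, and the per-group bitplane counts that the second entropy-coding pass operates on. This is a simplified illustration for intuition only, not the normative JPEG XS algorithm; in particular, the symmetric boundary handling and the coefficient grouping are assumptions here:

```python
def _reflect(x, i):
    """Symmetric boundary extension for out-of-range indices."""
    n = len(x)
    if i < 0:
        i = -i
    if i >= n:
        i = 2 * (n - 1) - i
    return x[i]

def legall53_forward(x):
    """One level of a reversible 5/3 integer lifting wavelet (even-length input)."""
    half = len(x) // 2
    # predict step: high-pass (detail) coefficients from the odd samples
    d = [x[2 * i + 1] - (_reflect(x, 2 * i) + _reflect(x, 2 * i + 2)) // 2
         for i in range(half)]
    # update step: low-pass (approximation) coefficients from the even samples
    s = [x[2 * i] + (((d[i - 1] if i > 0 else d[0]) + d[i] + 2) >> 2)
         for i in range(half)]
    return s, d

def legall53_inverse(s, d):
    """Exact integer inverse: undo the update step, then the predict step."""
    half = len(s)
    even = [s[i] - (((d[i - 1] if i > 0 else d[0]) + d[i] + 2) >> 2)
            for i in range(half)]
    x = [0] * (2 * half)
    for i in range(half):
        x[2 * i] = even[i]
        nxt = even[i + 1] if i + 1 < half else even[half - 1]
        x[2 * i + 1] = d[i] + (even[i] + nxt) // 2
    return x

def bitplane_counts(coeffs):
    """Number of non-zero magnitude bitplanes per group of 4 coefficients."""
    return [max(abs(c) for c in coeffs[g:g + 4]).bit_length()
            for g in range(0, len(coeffs), 4)]

line = [10, 12, 14, 200, 15, 14, 13, 12]
low, high = legall53_forward(line)
assert legall53_inverse(low, high) == line  # perfect integer round trip
```

On smooth input, most detail coefficients are near zero, so their groups need few bitplanes; coding those small counts compactly is what the Golomb-type code exploits.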
This step may optionally use the bitplane counts of the preceding line as the source for prediction (Differential pulse-code modulation) and then encode only the prediction difference. Data coding: The third pass inserts the raw bitplane values into the codestream without further coding. Sign coding: In the last optional coding pass, the sign bits of all non-zero coefficients are inserted into the codestream. If this coding pass is not present, sign bits are included in the data coding pass for all coefficients. Codestream packing: All entropy-coded data are packed into a linear stream of bits (grouped in byte multiples) along with all of the required image metadata. This sequence of bytes is called the codestream, and its high-level syntax is based on the typical JPEG markers and marker segments syntax. Profiles, levels and sublevels JPEG XS defines profiles (in ISO/IEC 21122-2) that specify subsets of coding tools that conforming decoders shall support, by limiting the permitted parameter values and allowed markers. The following table represents an overview of all the profiles along with their most important properties. Please refer to the standard for a complete specification of each profile. In addition, JPEG XS defines levels to represent a lower bound on the required throughput that conforming decoders need to support in the decoded image domain (also called the spatial domain). The following table lists the levels as defined by JPEG XS. The maximums are given in the context of the sampling grid, so they refer to a per-pixel value where each pixel represents one or more component values. However, in the context of Bayer data, JPEG XS internally interprets the Bayer pattern as an interleaved grid of four components. This means that the number of sampling grid points required to represent a Bayer image is four times smaller than the total number of Bayer sample points. 
Each group of 2x2 (four) Bayer values gets interpreted as one sampling grid point with four components. Thus sensor resolutions should be divided by four to calculate the respective width, height and amount of sampling grid points. For this reason, all levels also bear double names. Please refer to the standard for a complete specification of each level. Similarly to the concept of levels, JPEG XS defines sublevels to represent a lower bound on the required throughput that conforming decoders need to support in the encoded image domain. Each sublevel is defined by a nominal bit-per-pixel (Nbpp) value that indicates the maximum amount of bits per pixel for an encoded image of the maximum permissible number of sampling grid points according to the selected conformance level. Thus, decoders conforming to a particular level and sublevel shall conform to the following constraints derived from Nbpp: The maximum codestream size in bytes (from SOC to EOC, including all markers) is . The maximum admissible encoded throughput in bits per second is . The following table lists the existing sublevels and their respective nominal bpp values. Please refer to the standard for a complete specification of each level. Patents and RAND JPEG XS contains patented technology which is made available for licensing via the JPEG XS Patent Portfolio License (JPEG XS PPL). This license pool covers essential patents owned by Licensors for implementing the ISO/IEC 21122 JPEG XS video coding standard and is available under RAND terms. References XS IEC standards ISO standards Lossy compression algorithms Image compression Raster graphics file formats
JPEG XS
[ "Technology" ]
4,018
[ "Computer standards", "IEC standards" ]
59,625,428
https://en.wikipedia.org/wiki/Redoute%20des%20Trois%20Communes
The Redoute des Trois Communes is a French fort located in the commune of Saorge, Alpes-Maritimes. Built in 1897 as part of the Séré de Rivières system, it was one of the first French forts to be constructed of reinforced concrete. Situated at an altitude of 2080 metres at the highest summit of the Authion massif, it was intended to defend the Franco-Italian border. During World War II, the redoubt was held by German troops of the 34th division. It saw combat during the French offensive at the Battle of Authion. On April 12, after artillery and aviation strikes, it was approached by five volunteers of the 1st Free French Division supported by a tank, who obtained the surrender of the 38-strong garrison. External links Redoute des 3 Communes, chemin de Mémoire, Ministry of the Armed Forces (France) La redoute ou blockhaus de la Pointe des Trois Communes et les baraquements de la tête de l'Authion, www.fortiffsere.fr Séré de Rivières system Redoubts Buildings and structures in Alpes-Maritimes
Redoute des Trois Communes
[ "Engineering" ]
241
[ "Séré de Rivières system", "Fortification lines" ]
59,625,475
https://en.wikipedia.org/wiki/John%20A.%20Osborn
John A. Osborn (1939–2000) was an inorganic chemist who made many contributions to organometallic chemistry. Osborn received his PhD under the mentorship of Geoffrey Wilkinson. During that degree, Osborn contributed to the development of Wilkinson's catalyst. His thesis studies ranged widely. In 1967, he took a faculty position at Harvard University. At Harvard, he supervised the PhD theses of Richard Schrock, John Shapley, and Jay Labinger. During this time, the chemistry of [M(diene)(PR3)2]+ was advanced (M = Rh, Ir), laying the foundation for many subsequent developments. In 1975, Osborn took a faculty position at the Université Louis-Pasteur in Strasbourg, France, where he further broadened his research. References 1939 births 2000 deaths Alumni of Imperial College London 20th-century English chemists Inorganic chemists
John A. Osborn
[ "Chemistry" ]
183
[ "British inorganic chemists", "Inorganic chemists" ]
70,442,791
https://en.wikipedia.org/wiki/HR%202131
HR 2131 (HD 41047) is a solitary star in the southern constellation Columba. It has an apparent magnitude of 5.52, allowing it to be faintly seen with the naked eye. The object is located at a distance of 670 light years but is receding with a heliocentric radial velocity of . HR 2131 has a stellar classification of K5 III, indicating that it is a red giant. It has 1.81 times the mass of the Sun and is 2.19 billion years old. The star's high luminosity of and a low effective temperature of 3,700 K causes it to have an enlarged radius 49 times that of the Sun. HR 2131's metallicity – elements heavier than helium – is around solar level; it spins with a projected rotational velocity of about . References External links Starview/HD 41047 K-type giants Columba (constellation) 041047 2131 028524 Columbae, 67
HR 2131
[ "Astronomy" ]
200
[ "Columba (constellation)", "Constellations" ]
70,443,760
https://en.wikipedia.org/wiki/Eva%20Ingersoll%20Wakefield
Eva Ingersoll Brown Wakefield (1892 – 1 April 1970) was a writer, poet, freethinker, and an authority on the life of Robert G. Ingersoll, her grandfather. Personal life Eva Ingersoll Brown Wakefield was born in Dobbs Ferry, New York in 1892, the daughter of Walston H. and Eva Ingersoll Brown. Her mother, Eva Ingersoll Brown, was a suffragist and activist. She was tutored as a child, and later graduated from Columbia University. In 1917, Brown married McNeal Swasey, but they later divorced. She married Sherman Day Wakefield, an author, editor, and bibliographer, in 1932. The wedding was performed by John Lovejoy Elliott of the New York Society for Ethical Culture, at the home of her aunt, Maud Ingersoll Probasco. Sherman Wakefield was on the editorial staff of The Humanist and also of Progressive World. Eva herself was a contributor to The Humanist, as well as writing poetry. One of her poems was included in an anthology compiled by Edwin Markham, with whom she studied. A passionate defender of her grandfather's legacy, Eva Ingersoll Wakefield published The Life and Letters of Robert G. Ingersoll in 1951, and later donated a significant amount of 'Ingersolliana' to the Library of Congress, the Abraham Lincoln Presidential Library and Museum, and other archives. As well as personal collections and copies of letters kept by her mother (Ingersoll's daughter) and aunt, Wakefield gathered correspondence from letters and journals, and from the collection of Harry Houdini. Activism Eva Ingersoll Brown Wakefield was one of the earliest members of the First Humanist Society of New York, founded in 1929, and later President of the New York Chapter of the American Humanist Association. During the 1930s, Wakefield was active in the Manhattan Branch of the Women's International League for Peace and Freedom. She was also director of the Vivisection Investigation League and a member of the National Society of Colonial Dames in the State of New York. 
In addition to editing The Life and Letters of Robert G. Ingersoll, Wakefield was secretary of the Robert G. Ingersoll Memorial Association, which maintained the Robert Ingersoll Birthplace in Dresden, N.Y., as a museum. Death She died on 1 April 1970 at the Carolton Hospital in Fairfield, Connecticut. At her memorial service, in lieu of flowers, contributions to the R.G. Ingersoll Memorial Association were requested. Sherman Day Wakefield died the following year. References External links The Life and Letters of Robert G. Ingersoll (English edition) at Internet Archive 1892 births 1970 deaths People from Dobbs Ferry, New York American writers American animal rights activists Members of the National Society of the Colonial Dames of America Women's International League for Peace and Freedom people American Humanist Association Vivisection activists American humanists
Eva Ingersoll Wakefield
[ "Chemistry" ]
596
[ "Vivisection activists", "Vivisection" ]
70,444,574
https://en.wikipedia.org/wiki/Aigialus%20grandis
Aigialus grandis is a species of fungus in the genus Aigialus. It occurs in tropical and subtropical environments. References Further reading Fungi described in 1986 Fungus species Pleosporales
Aigialus grandis
[ "Biology" ]
45
[ "Fungi", "Fungus species" ]
70,444,766
https://en.wikipedia.org/wiki/2%2C4%2C6-Trinitrobenzoic%20acid
2,4,6-Trinitrobenzoic acid (TNBA) is an organic compound with the formula (O2N)3C6H2CO2H. It is a high explosive nitrated derivative of benzoic acid. Preparation and reactions 2,4,6-Trinitrobenzoic acid is prepared by oxidation of 2,4,6-trinitrotoluene (TNT), using oxidants such as nitric acid, chlorate, or dichromate. Upon heating, 2,4,6-trinitrobenzoic acid undergoes decarboxylation to give 1,3,5-trinitrobenzene. Reduction with tin gives 2,4,6-triaminobenzoic acid, a precursor to phloroglucinol (1,3,5-trihydroxybenzene). References Explosive chemicals Benzoic acids Nitrobenzene derivatives
2,4,6-Trinitrobenzoic acid
[ "Chemistry" ]
204
[ "Explosive chemicals" ]