id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
63,473,122
https://en.wikipedia.org/wiki/Kerrine%20Bryan
Kerrine Bryan is an engineer and author from the United Kingdom. She is the founder of Butterfly Books and currently lives in New York with her husband and two daughters. Bryan is a volunteer mentor for the IET and a STEM Ambassador, with a passion for educating youngsters about careers for women in STEM. Early life and background Bryan is a third-generation Jamaican. Her grandparents moved to England in the 1950s as part of the Windrush Generation, and she was brought up in Birmingham, England. Bryan said: "Being one of very few black children in my school, my mother always told me that I would need to work twice as hard to get half the success of my friends. This thought has been with me throughout my entire life, and has pushed me to always give a little bit extra at school and at work, and to be the best I can be." Education and early career Bryan landed in engineering by accident. She was advised by teachers that accountancy was the best job for a woman who was adept at maths, but during her A-levels her maths teacher suggested she consider engineering. He encouraged her to apply for the engineering residential course at Glamorgan University (the Headstart Scheme, now run by the Engineering Trust). After a year of experience in the industry, she decided to pursue a degree in engineering. Bryan graduated from the University of Birmingham in 2005 after a four-year master's degree in Electronic Engineering with German, and then secured a place on a graduate scheme at a large oil and gas contractor. Advocacy for gender equality in STEM and publishing career Bryan decided to volunteer, giving talks about her job to children across the country. It was then that she got the idea to develop a range of children's books that could tackle some of the misconceptions about the profession, which she argues begin at a very early age. 
"Picture books and rhyme are a brilliant way of communicating to children a positive message about all kinds of professions, especially STEM careers that are suffering skill gaps and diversity issues," said Bryan. "It's important both children and parents understand that these jobs are available and accessible to them – no matter what gender they are or what background they come from – and that the opportunity is there for the taking if they apply themselves, work hard and want it enough. The world is their oyster." Bryan became a mother to a little girl in 2016 and then founded Butterfly Books with her brother Jason Bryan. The aim of these books was to create stories that would serve as a helpful teaching resource, enabling children to see the opportunities available to them and eventually helping to close skills gaps and reduce gender bias in professions. As of 2019, Bryan had published five books under Butterfly Books: My Mummy is an Engineer, My Mummy is a Plumber (2015), My Mummy is a Scientist (2016), My Mummy is a Farmer (2018) and, most recently, My Mummy is a Soldier (2019). Personal life Kerrine lives in New York with her husband and two daughters. She is a fellow of the Institution of Engineering and Technology. Awards In 2014 Bryan was listed by Management Today Magazine and The Sunday Times as one of the UK's top 35 women in business under the age of 35, which highlighted her work as a lead engineer on a large North Sea offshore oil platform project. Bryan won the 2015 PRECIOUS Award for outstanding woman in STEM. In 2016 she was a We Are The City Rising Star Award winner. In 2017 Bryan was listed by The Telegraph among the Top 50 Influential Women in Engineering. Bryan has also won multiple awards as an author. In the 2015 Wishing Shelf Book Awards she won a Bronze Medal in the "Pre-school picture books" category for My Mummy is an Engineer, and in 2016 she was a Shelf Unbound Notable 100 winner for the same book. 
Bryan was also listed in the 2017 Purdue University Engineering Gift Guide for My Mummy is an Engineer. References Living people Year of birth missing (living people) 21st-century British women engineers Alumni of the University of Birmingham Black British women writers English electrical engineers English people of Jamaican descent English women children's writers Fellows of the Institution of Engineering and Technology Writers from Birmingham, West Midlands
Kerrine Bryan
[ "Engineering" ]
851
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
63,474,575
https://en.wikipedia.org/wiki/NGC%20531
NGC 531 is a barred spiral galaxy in the constellation Andromeda with a visual magnitude of 10.51. It lies at a distance of 65.7 Mpc from the Sun. It is a member of the Hickson Compact Group HCG 10 and is interacting with the other members of the group. References External links NGC 531 on SIMBAD Barred spiral galaxies Andromeda (constellation) 0536 01012 005340 +06-04-020 J01261884
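The quoted visual magnitude and distance can be related through the distance modulus, m − M = 5 log10(d_pc) − 5, giving the galaxy's absolute magnitude. A minimal sketch in Python (the function name is illustrative, not from the article):

```python
import math

def absolute_magnitude(apparent_mag: float, distance_mpc: float) -> float:
    """Convert apparent to absolute magnitude via the distance modulus:
    m - M = 5 * log10(d_pc) - 5."""
    distance_pc = distance_mpc * 1e6  # megaparsecs -> parsecs
    return apparent_mag - (5 * math.log10(distance_pc) - 5)

# NGC 531: m = 10.51 at 65.7 Mpc
print(round(absolute_magnitude(10.51, 65.7), 2))  # -> -23.58
```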
NGC 531
[ "Astronomy" ]
103
[ "Andromeda (constellation)", "Constellations" ]
63,474,723
https://en.wikipedia.org/wiki/NGC%202800
NGC 2800, also known as PGC 26302, is an elliptical galaxy in the constellation Ursa Major. It was discovered on February 17, 1831, by William Herschel. References External links Discoveries by William Herschel Astronomical objects discovered in 1831 2800 26302 Ursa Major Elliptical galaxies
NGC 2800
[ "Astronomy" ]
61
[ "Ursa Major", "Constellations" ]
63,474,882
https://en.wikipedia.org/wiki/NGC%203301
NGC 3301, also known as NGC 3760, is a lenticular galaxy in the constellation Leo. Its apparent magnitude in the V-band is 11.1. It was first observed on March 12, 1784, by the astronomer William Herschel. It is a member of the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster. References External links 3301 Lenticular galaxies Leo (constellation) 031497
NGC 3301
[ "Astronomy" ]
101
[ "Leo (constellation)", "Constellations" ]
63,474,946
https://en.wikipedia.org/wiki/NGC%203003
NGC 3003 is a nearly edge-on barred spiral galaxy in the constellation of Leo Minor, discovered by William Herschel on December 7, 1785. It has an apparent visual magnitude of 11.78 and lies at a distance of 19.5 Mpc from the Sun, with a recessional velocity of 1474 km/s. Supernova One supernova has been observed in NGC 3003: SN 1961F (type II, mag. 13.1) was discovered by Paul Wild on 21 February 1961. References External links Astronomical objects discovered in 1785 Discoveries by William Herschel Galaxies discovered in 1785 3003 Barred spiral galaxies Leo Minor 028186
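The recessional velocity and distance quoted above imply a Hubble-constant estimate via Hubble's law, v = H0 · d. A quick sanity check in Python (variable names are illustrative):

```python
def hubble_constant(velocity_km_s: float, distance_mpc: float) -> float:
    """Estimate H0 (in km/s/Mpc) from Hubble's law, v = H0 * d."""
    return velocity_km_s / distance_mpc

# NGC 3003: v = 1474 km/s at d = 19.5 Mpc
print(round(hubble_constant(1474, 19.5), 1))  # -> 75.6
```

A single-galaxy estimate like this ignores the galaxy's peculiar velocity, so it only roughly agrees with measured values of H0 (around 70 km/s/Mpc).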
NGC 3003
[ "Astronomy" ]
134
[ "Leo Minor", "Constellations" ]
63,474,969
https://en.wikipedia.org/wiki/Polidin
Polidin is an immunomodulator vaccine invented in Romania at the Cantacuzino Institute, licensed in 1966 and produced until 2012. It is a polybacterial preparation, made up of a mixture of 13 Gram-positive and Gram-negative bacterial species (in the form of a suspension) that have been thermally inactivated. The preparation was intended for injectable administration, similarly to a vaccine. The research team that developed this formulation also included the head of production, Sylvia Hoișie (b. 1928, d. 24 May 2022). Studies have demonstrated its immunostimulant efficacy and the absence of teratogenic effects. There are plans to resume production: in 2018–19 an action plan was underway to determine the stages needed to resume influenza vaccine production and to reauthorize Polidin production to European standards; Polidin would again be produced at the Cantacuzino Institute in Iași, in a new factory, together with Cantastim. References Vaccines Romanian brands
Polidin
[ "Biology" ]
220
[ "Vaccination", "Vaccines" ]
63,475,215
https://en.wikipedia.org/wiki/Wheal%20Maid
Wheal Maid (also Wheal Maiden) is a former mine in the Camborne-Redruth-St Day Mining District, 1.5 km east of St Day. Between 1800 and 1840, profits are said to have been up to £200,000. In 1852, the mine was amalgamated with Poldice Mine and Carharrack Mine and worked as St Day United mine. Throughout the 1970s and 1980s, the mine site was turned into large lagoons and used as a tip for two other nearby mines: Mount Wellington and Wheal Jane. There were suggestions that the mine could be used as a landfill site for rubbish imported from New York, together with a power plant that would produce up to 40 megawatts of electricity; the concept was opposed by local residents and by Cornwall County Council, with Doris Ansari, the chair of the council's planning committee, saying that the idea "[did] not seem right for Cornwall". The site was bought from Carnon Enterprises by Gwennap District Council for a price of £1 in 2002. An investigation by the Environment Agency that concluded in 2007 found that soil near the mine had high levels of arsenic, copper and zinc contamination, and by 2012 the site was deemed too hazardous for human activity. The mine attracts attention during dry spells, when the lagoons dry up, leaving brightly coloured stains on the pit banks and bed. 2014 murder In 2014, a 72-year-old man from Falmouth died at the site after what was initially thought to be a cycling accident. It was later found that the man had been murdered. A 34-year-old was found guilty and sentenced to life, to serve at least 28 years. References Mines in Cornwall Pollution in the United Kingdom Soil contamination 2014 murders in the United Kingdom
Wheal Maid
[ "Chemistry", "Environmental_science" ]
361
[ "Environmental chemistry", "Soil contamination" ]
63,478,374
https://en.wikipedia.org/wiki/Judith%20Breuer
Judith Breuer is a British virologist who is professor of virology and director of the Pathogen Genomics Unit at University College London. She was elected a Fellow of the Academy of Medical Sciences in 2019. Breuer is part of the United Kingdom genome sequencing team that looks to map the spread of coronavirus disease 2019. Early life and education As a child, Breuer was inspired by Vera Brittain and Simone de Beauvoir. She eventually studied medicine at the Middlesex Hospital medical school. During her doctoral degree Breuer studied the genes of HIV-2 tissue culture isolates. Her medical career started in East London, where she noticed that there was a large population of adults with chickenpox. This is rare in countries like the United Kingdom, where children usually contract the disease. She undertook her specialist training in virology at St Mary's Hospital, London in the early nineties, and moved to St Bartholomew's Hospital in 1993. She was elected a Fellow of the Royal College of Pathologists in 1998. Research and career In 2005 Breuer joined University College London, where she serves as Chair of Molecular Virology. She simultaneously holds a clinical position at Great Ormond Street Hospital. In 2012 she was made co-Director of the Division of Infection & Immunity. Her research focuses on genome sequencing and phylogenetics. She also studies how viral evolution impacts public health practices and policy. Breuer demonstrated a methodology that enables the recovery of low-copy viral DNA from clinical samples, which can then be used for whole genome sequencing. She has primarily investigated the genetics of Varicella zoster virus, Herpes simplex virus and human parainfluenza viruses. Breuer has investigated norovirus, which causes pandemics on cycles of between two and five years. Using phylogenetic trees, Breuer showed that the pandemic strains of norovirus exist in the population long before the virus spreads around the world. 
She believes that changes in the immunity of a population create an environment that allows the pandemic to spread, and that the pandemic strains may exist in children before they emerge in the wider population. Alongside norovirus, Breuer has extensively studied the Varicella zoster virus, which causes chickenpox and shingles and is the smallest of all herpesviridae. For almost three decades it was unclear how the Varicella zoster virus retained its dormancy; Breuer was the first to identify a latency-associated genetic transcript, which can persist in the neurons of almost all adults. She demonstrated that the diversity of human cytomegalovirus (HCMV) in clinical samples is not caused by frequent mutation, as was previously thought, but is instead due to multi-strain infection. This finding demonstrates that HCMV does not mutate faster than other viruses, making it easier to develop a vaccine. In 2016 Breuer launched the Pathogen Genomics Unit at University College London, which allows the scientific community to better sequence pathogen genomes. She was elected a Fellow of the Academy of Medical Sciences in 2019. Her research includes the development of new tools and tests to protect people from antimicrobial resistance. Supported by the Department of Health and Social Care, Breuer looks to identify and treat antimicrobial-resistant diseases, ensure appropriate treatment pathways and prevent the spread of antibiotic-resistant diseases between people. This aspect of her research makes use of artificial intelligence to quickly interpret test results, collating information from electronic health records and learning how clinicians make use of test results in clinical care. To achieve this, Breuer is involved with the design of new diagnostic tools, comprehensive randomized controlled trials and clinical management mechanisms. 
In 2020 Breuer was appointed the London lead of a national response effort to sequence the genome and map the spread of the novel coronavirus disease. Selected publications References Living people Year of birth missing (living people) Alumni of the University of London Academics of University College London British virologists Fellows of the Academy of Medical Sciences (United Kingdom) Pathogen genomics Women virologists
Judith Breuer
[ "Biology" ]
849
[ "Molecular genetics", "DNA sequencing", "Pathogen genomics" ]
63,478,388
https://en.wikipedia.org/wiki/Impact%20of%20the%20COVID-19%20pandemic%20on%20science%20and%20technology
The COVID-19 pandemic has affected innumerable scientific and technical institutions globally, resulting in lower productivity in a number of fields and programs. However, the impact of the pandemic has also led to the opening of several new research funding lines for government agencies around the world. Science As a result of the COVID-19 pandemic, new and improved forms of scientific communication have evolved. One example is the amount of data being published on preprint servers and the way it has been reviewed on social media platforms before being formally peer reviewed. Scientists are reviewing, editing, analyzing, and publishing manuscripts and data speedily. This intense communication may have enabled an unusual level of collaboration and efficiency among scientists. Francis Collins notes that while he has never seen research move faster, the pace of research "can still feel slow" during a pandemic. The typical research model was considered too slow for the "urgency of the coronavirus threat". World Health Organization (WHO) On May 4, 2020, the World Health Organization (WHO) organized a telethon to raise billions of dollars from forty countries to support the rapid development of COVID-19 vaccines. WHO also announced the implementation of an international "solidarity trial" to simultaneously evaluate multiple vaccine candidates reaching phase II-III clinical trials. The "solidarity trial for treatments" is a multinational phase III-IV clinical trial, organized by WHO and its partners, to compare four unproven treatments for hospitalized people with severe cases of COVID-19. The trial was announced on March 18, 2020, and by April 21, 2020, over 100 countries had joined it. 
In addition, WHO is coordinating an international multisite randomized controlled trial, the "solidarity trial for vaccines", that will allow simultaneous assessment of the benefits and risks of different vaccine candidates being clinically tested in countries with high rates of COVID-19 disease. The WHO Vaccine Coalition prioritizes which vaccines to include in phase II and III clinical trials, and establishes harmonized phase III protocols for all vaccines that reach the pivotal testing phase. The Coalition for Epidemic Preparedness Innovations (CEPI), which has established a billion-dollar global fund for rapid investment in and development of vaccine candidates, indicated in April 2020 that a vaccine could be available under emergency-use protocols in less than 12 months, or by early 2021. UNESCO The seventh edition of the UNESCO Science Report, which monitors science policy and governance around the world, was in preparation as the COVID-19 pandemic began. As a result, the report documents some of the ways in which scientists, inventors, and governments used science to meet society's needs during the early stages of the pandemic. In the paper What the COVID-19 Pandemic Reveals About the Evolving Landscape of Scientific Advice, the authors present case studies of five countries (Uruguay, Sri Lanka, Jamaica, Ghana, and New Zealand). The authors conclude, "Effective and trusted scientific advice is not simply a function of linkages with the policy-maker. It also involves an effective conversation with stakeholders and the public." According to the World Health Organization, during the COVID-19 pandemic Africa contributed 13% of the world's new or adapted technologies, such as robotics, 3D printing, and mobile phone apps. Many countries have accelerated their approval processes for research project proposals. 
For example, the innovation agencies of Argentina, Brazil, and Uruguay issued calls for research proposals with an expedited approval process through early April 2020. Peru's two innovation agencies reduced their own response time to two weeks, as documented in the UNESCO Science Report (2021). The UNESCO study of publication trends in 193 countries on the topic of new or re-emerging viruses that can infect humans covered the period from 2011 to 2019 and now provides an overview of the state of research prior to the COVID-19 pandemic. Global output on this broad topic increased by only 2% per year between 2011 and 2019, slower than overall global scientific publications. Growth was much higher in individual countries that had to use science to address other viral outbreaks during this period, such as Liberia combating Ebola or Brazil combating Zika fever. It remains to be seen whether or not the scientific landscape will shift toward a more proactive approach to the health sciences after COVID-19. National and intergovernmental laboratories The United States Department of Energy's federal scientific laboratories, such as Oak Ridge National Laboratory, closed to all visitors and many employees; non-essential employees and scientists became remote workers. Contractors were also strongly advised to isolate their facilities and employees unless necessary. Overall, ORNL operations remained largely unaffected. Lawrence Livermore National Laboratory was tasked by the White House Coronavirus Task Force with using most of its supercomputing capacity to continue research on the virus's strains, possible mutations, and other factors, while other projects were temporarily scaled back or indefinitely postponed. The European Molecular Biology Laboratory (EMBL) closed all six of its sites in Europe (Barcelona, Grenoble, Hamburg, Heidelberg, Hinxton, and Rome). The governments of all EMBL host countries implemented strict controls in response to the coronavirus. 
EMBL staff have been instructed to follow the advice of local authorities. Several staff members have been given permission to work at the sites to provide essential services such as animal facility maintenance or data services. All other staff were instructed to stay at home. EMBL also cancelled all visits to the sites by groups outside the staff. This includes physical attendance at the Heidelberg course and conference program, EMBL-EBI training courses, and all other seminars, courses, and public visits at all sites. Meanwhile, the European Bioinformatics Institute established a European COVID-19 platform for data and information exchange. The goal is to collect and share readily available research data to enable synergy, cross-fertilization, and use of different data sets with varying degrees of aggregation, validation, and completeness. The platform is envisioned to consist of two interconnected components: the SARS-CoV-2 data hubs, which organize the flow of SARS-CoV-2 outbreak sequence data and enable comprehensive open data exchange for the European and global research community, and a more comprehensive COVID-19 portal. World Meteorological Organization The World Meteorological Organization (WMO) has expressed concern about the effects of the pandemic on its monitoring system. Observations from the Aircraft Meteorological Data Relay program, which uses in-flight measurements from the fleets of 43 airlines, have been reduced by 50 to 80 percent depending on the region. Data from other automated systems have been virtually unaffected, although WMO has expressed concern that repairs and maintenance may eventually be affected. Manual observations, mainly from developing countries, have also seen a significant decrease. 
Open science The need to accelerate open scientific research prompted several civil society organizations to create an Open COVID-19 Pledge asking different industries to release their intellectual property rights during the pandemic to help find a cure for the disease. Several tech giants have joined the pledge, which includes the release of an Open COVID license. Long-time open access advocates such as Creative Commons have launched a myriad of calls and actions to promote open access in science as a key component of combating the disease. These include a public call for open access policies and a call for scientists to adopt zero-embargo periods for their publications, applying a CC BY license to their articles and a CC0 waiver to research data. Other organizations have challenged the current scientific culture, calling for more open and public science. Many other online resources for studies and information on coronavirus that can contribute to citizen science through open science are available on open science and open access websites, including an e-book chapter hosted by the medical collective EMCrit and portals run by Cambridge University Press, the Europe branch of the Scholarly Publishing and Academic Resources Coalition, The Lancet, John Wiley and Sons, and Springer Nature. Medical research A JAMA Network Open study examined trends in oncology clinical trials initiated before and during the COVID-19 pandemic. It noted that pandemic-related declines in clinical trials raised concerns about the potential negative impact on the development of new cancer therapies, and about the extent to which these findings could apply to other diseases. 
Computing and machine learning research and citizen science In March 2020, the United States Department of Energy, National Science Foundation, NASA, industry, and nine universities pooled resources to access supercomputers from IBM, combined with cloud computing resources from Hewlett Packard Enterprise, Amazon, Microsoft, and Google, for drug discovery. The COVID-19 High-Performance Computing Consortium also aims to predict the spread of disease, model possible vaccines, and study thousands of chemical compounds to develop a COVID-19 vaccine or therapy. The Consortium has brought together 437 petaFLOPS of computing power. The C3.ai Digital Transformation Institute, another consortium of Microsoft, six universities (including the Massachusetts Institute of Technology, a member of the first consortium), and the National Center for Supercomputing Applications in Illinois, operating under the auspices of C3.ai, founded by Thomas Siebel, is pooling supercomputing resources for drug discovery, developing medical protocols, and improving public health strategies, and awarded large grants through May 2020 to researchers proposing to use AI for similar tasks. In March 2020, the Folding@home distributed computing project launched a program to support medical researchers around the world. The first wave of the project simulated potential target proteins of SARS-CoV-2 and the related SARS-CoV virus, which had already been studied. In March, the Rosetta@home distributed computing project also joined the effort. The project uses volunteers' computers to model the proteins of the SARS-CoV-2 virus to discover potential drug targets or develop new proteins to neutralize the virus. The researchers announced that using Rosetta@home, they were able to "accurately predict the atomic-scale structure of an important coronavirus protein weeks before it could be measured in the lab." 
In May 2020, the Open Pandemics – COVID-19 partnership was launched between Scripps Research and IBM's World Community Grid. The partnership is a distributed computing project that "will automatically run a simulated experiment in the background [of connected home PCs] that will help predict the efficacy of a particular chemical compound as a potential treatment for COVID-19." Resources for informatics and scientific crowdsourcing projects on COVID-19 can be found on the internet or as apps. Some examples of such projects are listed below: The Eterna OpenVaccine project allows video game players to "design an mRNA encoding a potential vaccine against the novel coronavirus." The EU-Citizen.Science project provides "a selection of resources related to the current COVID-19 pandemic. It contains links to citizen science and crowdsourcing projects." The COVID-19 Citizen Science project is "a new initiative by University of California, San Francisco physician-scientists" that "will allow anyone in the world age 18 or over to become a citizen scientist advancing understanding of the disease." The CoronaReport digital journalism project is "a citizen science project which democratizes the reporting on the Coronavirus and makes these reports accessible to other citizens." The COVID Symptom Tracker is a crowdsourced study of symptoms of the virus; it had been downloaded two million times as of April 2020. The COVID Near You epidemiology tool "uses crowdsourced data to visualize maps to help citizens and public health agencies identify current and potential hotspots for the recent pandemic coronavirus, COVID-19." The We-Care project is a novel initiative by University of California, Davis researchers that uses anonymity and crowdsourced information to alert infected users and slow the spread of COVID-19. The scientific community has held several machine learning competitions to identify false information related to the COVID-19 pandemic. 
Some examples are listed below: The First Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation, co-located with the Association for the Advancement of Artificial Intelligence conference (AAAI-2021), focused on detecting fake news in English related to COVID-19. The data sources were various social media platforms such as Twitter, Facebook, and Instagram. Given a social media post, the objective of the shared task was to classify it as fake or real news. The winner of the task presented an ensemble approach based on fine-tuning COVID-Twitter-BERT models. The Sixth Workshop on Noisy User-generated Text ran a shared task on identification of informative COVID-19 English tweets, which aimed to automatically identify whether a COVID-19-related English tweet is informative or not. The organizers provided the research community with a new dataset of tweets for identification. The selection of tweets included information about suspected, confirmed, recovered, and death cases, as well as the location or travel history of cases. The winning solution presented a neural network ensemble consisting of the COVID-Twitter-BERT and RoBERTa language models. Space NASA NASA announced the temporary closure of all visitor complexes at its field centers until further notice and asked all non-critical personnel to work from home if possible. Production and manufacturing of the Space Launch System at the Michoud Assembly Facility was halted, and further delays occurred for the James Webb Space Telescope, although work resumed on June 3, 2020. The majority of Johnson Space Center personnel transitioned to teleworking, and mission-critical personnel supporting the International Space Station were ordered to reside in the mission control room until further notice. Station operations were relatively unaffected, but astronauts on new expeditions are subject to longer, more stringent pre-flight quarantines. 
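The winning entries in both fake-news and informative-tweet tasks described earlier were ensembles of fine-tuned transformer models (COVID-Twitter-BERT, RoBERTa). Stripped of the models themselves, the ensembling step is typically a vote (or probability average) over the member classifiers. A minimal sketch with stand-in classifiers; the toy keyword rules and all names are illustrative, not taken from the competitions:

```python
from collections import Counter
from typing import Callable, List

Label = str  # "fake" or "real"

def ensemble_predict(classifiers: List[Callable[[str], Label]], text: str) -> Label:
    """Majority vote over the labels produced by the member classifiers."""
    votes = Counter(clf(text) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Stand-ins for fine-tuned language models: trivial keyword rules, for illustration only.
clf_a = lambda t: "fake" if "miracle cure" in t.lower() else "real"
clf_b = lambda t: "fake" if "5g" in t.lower() else "real"
clf_c = lambda t: "fake" if "cure" in t.lower() else "real"

print(ensemble_predict([clf_a, clf_b, clf_c], "Miracle cure for COVID found!"))  # -> fake
print(ensemble_predict([clf_a, clf_b, clf_c], "WHO announces new trial data"))   # -> real
```

In the actual systems each member would be a fine-tuned neural model and the vote might be weighted by validation accuracy, but the aggregation logic is the same.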
NASA's emergency response framework varied based on local virus cases around the agency's field centers. As of March 24, 2020, the following space centers had moved to Stage 4: Glenn Research Center in Ohio; Plum Brook Station in Ohio; Armstrong Flight Research Center in California; Wallops Flight Facility in Virginia; the Goddard Institute for Space Studies in New York; and Goddard Space Flight Center in Maryland, which also reported its first positive COVID-19 employee case. Two further facilities moved to Stage 4 after reporting new cases of coronavirus: the Michoud Assembly Facility reported its first employee to test positive for COVID-19, and Stennis Space Center recorded the second case of a NASA community member with the virus. Kennedy Space Center remained at Stage 3 after a workforce member tested positive; due to the mandatory remote work policy already in place, the individual had not been on-site for more than a week before the onset of symptoms. On May 18, the Michoud facility began resuming work on the SLS, but so far remains at Stage 3. At Stage 4, mandatory remote work is in effect for all personnel except the limited personnel required for mission-critical work and to ensure and maintain the safety and security of the facility. 
The affected spacecraft had stable orbits and long-duration missions, so turning off their science instruments and placing them into a largely unattended safe configuration for a certain period would have a negligible impact on their overall mission performance. Examples of such missions include: Cluster – a four-spacecraft mission launched in 2000, orbiting Earth to study the planet's magnetic environment and how it is forged by the solar wind, the stream of charged particles constantly released by the Sun; ExoMars Trace Gas Orbiter – launched in 2016, the spacecraft orbits Mars, where it studies the planet's atmosphere and provides data relay for landers on the surface; Mars Express – launched in 2003, the workhorse orbiter has been imaging the Martian surface and sampling the planet's atmosphere for more than a decade and a half; Solar Orbiter – ESA's newest science mission, launched in February 2020 and en route to its science operations orbit around the Sun. ESA Science Director Günther Hasinger said: "It was a difficult decision, but the right one to take. Our greatest responsibility is the safety of people, and I know all of us in the science community understand why this is necessary." The temporary reduction in on-site personnel will also allow the ESOC teams to focus on maintaining spacecraft safety for all other missions, especially the Mercury explorer BepiColombo, which is en route to the solar system's innermost planet and needed on-site support during its planned April 10, 2020, flyby of Earth. The difficult manoeuvre, which uses Earth's gravity to adjust BepiColombo's trajectory as it cruises towards Mercury, was performed by a very small number of engineers with due regard to social distancing and the other health and hygiene measures required by the situation. Commissioning and initial checkout operations of the recently launched Solar Orbiter were temporarily suspended. 
ESA planned to resume these operations in the near future, depending on the development of the coronavirus situation. In the meantime, Solar Orbiter continued its journey towards the Sun, with the first Venus flyby to take place in December.
JAXA
The space and science operations of the Japan Aerospace Exploration Agency (JAXA) were virtually unaffected. However, all visits to its many field centers were suspended until April 30, 2020, to reduce contamination.
Commercial aerospace
Bigelow Aerospace announced on March 23, 2020, that it was laying off all 88 of its employees. It said it would rehire the workers when pandemic restrictions were lifted. Tucson, Arizona-based World View announced on April 17, 2020, that it had terminated new business initiatives and laid off an unspecified number of employees to reduce cash outflows. The company also received rent deferrals from Pima County, Arizona. OneWeb filed for bankruptcy on March 27, 2020, following a cash crunch caused by difficulties in raising capital to complete construction and deployment of the remaining 90 percent of its network. The company had already laid off approximately 85 percent of its 531 employees, but said it would maintain operational satellite capabilities while it was restructured by the court and new owners for the constellation were sought. Rocket Lab temporarily closed its launch site in New Zealand, but operations continued at its Wallops Flight Facility launch complex. Major companies such as SpaceX and Boeing were not economically affected, except that they took extra precautions and security measures to limit the spread of the virus in their workplaces. As of April 16, 2020, Blue Origin said that it was continuing to hire staff, with about 20 more people added each week. ULA implemented an internal pandemic plan; although some aspects of launch-related outreach were scaled back, the company made clear its intention to maintain its launch schedule.
Telecommunications
The proportion of EU enterprises employing advanced digital technology in their operations rose sharply from 2019 to 2020 and then held relatively steady, reaching 61% in 2021, compared with 63% in 2020 and 58% in 2019. The pandemic placed a huge strain on internet traffic, with BT Group and Vodafone seeing a 60 and 50 percent increase in broadband usage, respectively. At the same time, Netflix, Disney+, Google, Amazon, and YouTube considered reducing the quality of their videos to avoid overload, and Sony began to slow down PlayStation game downloads in Europe and the United States to manage traffic levels. Cellular service providers in mainland China reported significant declines in subscribers, partly due to the inability of migrant workers to return to work as a result of the quarantine lockdowns; China Mobile saw a reduction of 8 million subscribers, while China Unicom had 7.8 million fewer subscribers and China Telecom lost 5.6 million users. Teleconferencing has been used to replace cancelled events as well as daily business meetings and social contacts. Teleconference companies such as Zoom Video Communications saw a sharp increase in usage, accompanied by technical issues such as bandwidth overcrowding and social problems such as Zoombombing. Teleconferencing has also contributed to the development of distance education, and thanks to this technology virtual happy hours for "quarantinis" (mixed drinks) and even virtual dance parties have been organised. A survey conducted in 2021 found that while the coronavirus outbreak boosted overall digitization, it also widened the digital divide, specifically across firms: leading businesses advanced digitization more frequently, while some enterprises fell behind and were less likely to convert digitally during the pandemic.
53% of surveyed firms in the European Union had previously implemented advanced digital technology and invested more in other digital technologies, while 34% of non-digital EU firms viewed the pandemic as a chance to begin investing in their digital transformation. According to the survey, 16% of EU enterprises regard access to digital infrastructure as a substantial barrier to investment. A growing digital divide is also emerging: in the United States, where non-digital enterprises were more dynamic than in the European Union, 48% of enterprises that were non-digital before the pandemic used the crisis to begin investing in digital technologies, compared with 64% of firms that had previously implemented advanced digital technology. Digital infrastructure is essential for digital transformation, and many EU areas have the potential to enable investment in the digital transformation of firms by expanding access to faster internet, which influences organizations' decisions to go digital. Across Europe, access to digital infrastructure is already increasing, with the great majority of homes now having broadband access, but more needs to be done to promote the spread of fast connections: across nations and regions, a large proportion of enterprises cite digital infrastructure as a key barrier to investment and development. One out of every five businesses in the Europe and Central Asia region launched or grew its online business or distribution of products and services, while one out of every four started or increased remote operations. The pandemic also hastened corporate transformation, with over 30% of companies altering or transforming their output as a result. Chemical manufacturers and wholesalers were the first to respond, with one in three expanding online business activity, beginning or boosting delivery of products and services, increasing remote employment, and changing manufacturing.
Across sub-regions, Russian companies reported the highest rate of digital transformation, with more than half of them beginning or growing online activity, product delivery, and remote work. Within Central, Eastern and Southeastern Europe, enterprises in Slovenia (48%) and Poland (44%) were the most innovative in 2022, while firms in Slovakia (14%) were the least innovative. 67% of enterprises in these regions deployed at least one sophisticated digital technology, roughly in line with the current EU average (69%).
See also
Impact of the COVID-19 pandemic on education
Impact of the COVID-19 pandemic on religion
Impact of the COVID-19 pandemic on politics
Impact of the COVID-19 pandemic on aviation
Impact of the COVID-19 pandemic on cinema
References
History of science and technology 2020 in science 2020 in technology
Impact of the COVID-19 pandemic on science and technology
[ "Technology" ]
4,824
[ "History of science and technology", "Impact of the COVID-19 pandemic on science and technology" ]
63,479,435
https://en.wikipedia.org/wiki/Floral%20isolation
Floral isolation is a form of reproductive isolation found in angiosperms. Reproductive isolation is the process by which species evolve mechanisms to prevent reproduction with other species. In plants, this is accomplished through the manipulation of the pollinator's behavior (ethological isolation) or through morphological characteristics of flowers that favor intraspecific pollen transfer (morphological isolation). Preventing interbreeding prevents hybridization and gene flow between species (introgression), and consequently protects the genetic integrity of each species. Reproductive isolation occurs in many organisms, and floral isolation is one form present in plants. Floral isolation occurs prior to pollination and is divided into two types: morphological isolation and ethological isolation. Floral isolation was championed by Verne Grant in the 1900s as an important mechanism of reproductive isolation in plants.
Morphological isolation
Mechanical or morphological isolation is a form of floral isolation in which the characteristics of the flower prevent reproduction between species. These morphological differences primarily affect the positioning of reproductive structures within flowers and control the placement of pollen on the pollinator's body to promote transfer within the same species. For example, flowers of Salvia mellifera have anthers and stigmas positioned to contact the dorsal surface of the bumblebee abdomen, while flowers of the co-occurring Salvia apiana place pollen on the bumblebee's flanks.
Ethological isolation
Ethological isolation is a form of floral isolation caused predominantly by the behavior of pollinators. Flowers can have morphological features which attract or reward specific types of pollinators. The relationship between floral signals and pollinators can promote floral constancy, where different pollinators preferentially visit one species over others.
The color or odor of flowers promotes this isolation as plants effectively manipulate the behavior of their animal pollinators. An example of this type of manipulation is found in orchids as they mimic female bees and wasps in order to attract male pollinators as a form of sexual deception referred to as pseudocopulation. References Botany Evolution of plants
Floral isolation
[ "Biology" ]
412
[ "Evolution of plants", "Plants", "Botany" ]
63,479,676
https://en.wikipedia.org/wiki/David%20Tom%C3%A1nek
David Tománek (born July 1954) is a U.S.-Swiss physicist of Czech origin and a researcher in nanoscience and nanotechnology. He is Emeritus Professor of Physics at Michigan State University. He is known for predicting the structure and calculating the properties of surfaces, atomic clusters including the C60 buckminsterfullerene, nanotubes, nanowires and nanohelices, graphene, and two-dimensional materials including phosphorene.
Academic career
Tománek earned a doctoral degree in Physics from the Freie Universität Berlin in 1983 under the supervision of Karl Heinz Bennemann and became Hochschulassistent there in 1984. Between 1985 and 1987 he worked as a postdoctoral researcher at Bell Labs under the supervision of Michael A. Schlüter and at the University of California, Berkeley under the supervision of Steven G. Louie. Since 1987, he has been Professor of Physics at Michigan State University, where he directs the Computational Nanotechnology Laboratory at the Department of Physics and Astronomy.
Research
Tománek and his research group have worked in many areas of nanoscience and nanotechnology. As a graduate student at FU Berlin, he studied structural and electronic properties of surfaces, including reconstruction and photoemission spectra. He was intrigued by the unusual structure and electronic properties of atomic clusters, including collective electronic excitations and superconductivity. His computational studies of the growth regimes of silicon and carbon clusters made use of the semi-quantitative Linear Combination of Atomic Orbitals (LCAO), or tight-binding, method. During his 1994 sabbatical stay in the laboratory of Richard E. Smalley, he turned his interest to the unique properties of nanotubes formed of carbon (CNTs) and other materials. He studied their morphology, formation, mechanical stiffness, ability to conduct heat and electrons, and field electron emission. After 2000, he became involved in studies of two-dimensional materials including phosphorene.
In the following years, he has continued identifying applications of carbon nanotubes and two-dimensional materials in fields including low-resistance contacts to nanostructures, nanomechanical energy storage, and purification and desalination of water.
Conferences
Tománek initiated a series of annual Nanotube (NT) conferences and a Gordon Research Conference on Two-dimensional electronics beyond graphene.
Honors and awards
In 2004 Tománek was elected a Fellow of the American Physical Society and in 2005 he received the prestigious Alexander-von-Humboldt Senior Scientist Award (Germany). In 2008 he received the Japan Carbon Award for Life-Time Achievement and was chosen by the American Physical Society as a member of the Outstanding Referees Program for excellence in peer review. In 2016 he received the Lee Hsun Research Award for Materials Science from the Chinese Academy of Sciences. His h-index is currently 85.
References
External links Computational Nanotechnology Laboratory at Michigan State University Google profile Living people Swiss physicists Tomanek, David 1954 births Czech physicists Carbon scientists Tomanek, David Theoretical physicists
David Tománek
[ "Physics" ]
621
[ "Theoretical physics", "Theoretical physicists" ]
63,479,813
https://en.wikipedia.org/wiki/Dapivirine%20Ring
Dapivirine (DPV) Ring is an antiretroviral vaginal ring pioneered by the International Partnership for Microbicides (IPM) and pending regulatory review. It is designed as a long-acting form of HIV prevention for at-risk women, particularly in developing regions such as sub-Saharan Africa. IPM has rights to both the medication and the medical device. A total of four rings with different drug diffusion systems and polymer compositions have been developed by IPM. The latest design, Ring-004, is a silicone polymer matrix-type system capable of delivering DPV intravaginally in a sustained manner. From 2009 to 2012, IPM conducted two Phase I and one Phase I/II safety trials of the DPV ring; results deemed the device to be well tolerated and safe. In 2012, two Phase III studies were launched sequentially: The Ring Study, sponsored by IPM, and ASPIRE, sponsored by the Microbicide Trials Network (MTN). Both studies indicated the effectiveness of the ring in reducing the risk of HIV transmission. In July 2016, following the successful Phase III results, two open-label studies, DREAM and HOPE, were launched, shedding light on how women actually used the ring. DPV Rings were given to former Phase III trial participants for a full year. HOPE ended in October 2018 and DREAM ended in January 2019. In 2019, the results of both studies were published, indicating up to 54% efficacy. The World Health Organization has since recommended that the dapivirine vaginal ring may be offered as an additional prevention choice for women at substantial risk. The risk of DPV resistance, the ring's potential negative impact on intimate relationships, and inaccurate rumours surrounding the device are drawbacks that may limit the overall implementation of the technology. Alternative long-acting rings with similar functionality to the DPV Ring are under development by IPM.
Product design
The DPV Ring is a discreet HIV microbicide tool developed by IPM for vulnerable female populations. In 2018, South Africa faced a high adult HIV prevalence (20.4%), with 7.7 million people living with HIV; more than 60% of those affected are female. IPM aims to alleviate HIV's disproportionate impact on women across South Africa through the provision of the ring. DPV, an HIV-1 non-nucleoside reverse transcriptase inhibitor (NNRTI), is slowly released from the ring via local intravaginal administration over the course of one month. The binding and inhibition of HIV reverse transcriptase by DPV prevent HIV genome replication within the host upon initial exposure to the virus. The ring must be replaced at the end of each month because the drug component dissipates. The antiretroviral is also owned by IPM. The device is one-size-fits-all. It has a whitish, opaque colour, an outer diameter of 56 mm, and a cross-sectional diameter of 7.7 mm. Injection moulding of silicone elastomers forms the shape of the ring. It is similar in polymeric composition, shape, and size to commercially available contraceptive or therapeutic hormone-replacement vaginal rings such as Estring and Femring. The silicone elastomer composition provides suitable biocompatibility and durability for the wearer. In-situ movement of the ring in the vagina is restricted by the muscular walls of the vagina and the cervix, which prevents the ring from entering the uterus or accidentally slipping out. Four prototypes were initially developed and tested in several clinical studies, using two diffusion systems: a reservoir type (Ring-001 & Ring-002) and a matrix type (Ring-003 & Ring-004). The two forms differ in their drug-delivery mechanism and composition.
Reservoir-type rings (Ring-001 & Ring-002)
Reservoir-type rings contain drug-reservoir cores that are dispersed within cured silicone elastomers.
These cores contain an antiretroviral that dissolves into the polymeric component via diffusion, establishing a concentration gradient between the core inside the ring and the vaginal space. Reservoir-type rings also possess a porous outer sheath that acts as a rate-controlling membrane. Together, the concentration gradient and the semi-permeable sheath allow the release of medication into the vaginal space, maintaining a constant drug release rate at the site of potential infection. The similarities and differences between the two prototypes of reservoir-type rings are tabulated below:
Matrix-type rings (Ring-003 & Ring-004)
In contrast to reservoir-type rings, matrix-type rings contain no drug-filled cores or outer polymeric sheaths. Instead, DPV is dispersed homogeneously in a cured silicone elastomer matrix. This diffusion system allows sustained release of the medication over time. A brief burst of the antiretroviral into vaginal fluids is characteristic of the initial insertion of matrix-type rings and is part of the normal therapeutic response. A prolonged, though not constant, gradual release of DPV into the vaginal tissues is then maintained for sufficient drug dosage. The similarities and differences between the two prototypes of matrix-type rings are tabulated below:
Clinical trials
Phase I
Ring-001, -002, and -003 were tested in Phase I trials. These prototypes demonstrated safety and acceptability to wearers over a 4-week period, supporting the ring's potential effectiveness in preventing HIV-1. Despite this testing, they were not developed further because of the improved polymerisation stability exhibited by Ring-004.
Phase I/II
Between 2010 and 2011, a safety and tolerability Phase I/II study (IPM-015) of Ring-004 was conducted in Kenya, Malawi, South Africa, and Tanzania. A sampling pool of 280 African women aged 18 to 40 was involved.
The device was reported to be safe and well accepted by wearers throughout the continuous 12-week utilisation period, although a few mild adverse effects attributed to the ring were reported, such as pelvic abnormalities, vaginal discharge, and intermenstrual bleeding. In terms of sexual acceptability, more than half of the participants stated that the DPV Ring did not affect vaginal intercourse with their male partner. Only 1-3% reported that their male partner could feel it, which raised doubts about continued adherence to the ring. Very high self-reported adherence to daily use was also cited, with 97% expressing support for continued use of the tool if it proved effective.
Phase III
IPM and MTN conducted two Phase III studies between 2012 and 2016 (The Ring Study and ASPIRE) to test the efficacy and long-term safety of the DPV Ring. ASPIRE (MTN-020) investigated the effect of extended use of the device on HIV prevention. Over 2,500 women aged 18 to 45 in Malawi, Uganda, and Zimbabwe participated in the study. The results of this trial confirmed a 27% reduction in new HIV infections overall and a 67% reduction for those over 25 years old, demonstrating a significant age-related dependency on microbicide ring efficacy. Moreover, even partial use of the tool resulted in a 45% reduction in HIV infection risk among women. The Ring Study (IPM-027), conducted from April 2012 to December 2016, enrolled around 2,000 women aged 18 to 45. It was converted into an open-label study in Uganda after early results were released in February 2016, on the recommendation of an independent safety and monitoring board. At the International AIDS Conference in July 2016, MTN revealed that participants who used the DPV ring consistently had a 56% lower risk of HIV. However, women younger than 21 years old were less likely to adhere to the intravaginal HIV microbicide regimen.
From 2016 to 2018, the Phase III/IIIb open-label clinical trials HOPE (MTN-025) and DREAM (IPM-032) were launched to focus on the safety and tolerability of the tool on a monthly basis. These trials included participants across South Africa. Initial results presented at the 2018 Conference on Retroviruses and Opportunistic Infections in Boston showed the ring to display high efficacy, at 54%. The annual HIV incidence observed in the HOPE and DREAM studies was 1.9% and 1.8%, respectively; both rates are substantially lower than the annual HIV incidence of the ASPIRE placebo ring (>4%).
Concerns
Drug resistance
HIV-1 Group M subtype C is the most dominant lineage of HIV in sub-Saharan Africa. There is evidence of frequent cross-resistance to DPV among HIV-1 Group M subtype C viruses from South Africans with resistance to first-generation NNRTIs such as nevirapine and efavirenz. This raises the possibility that DPV could prove ineffective in such cases.
Invasiveness on intimate relationships
Physical characteristics of the ring, such as size and hardness, were a topic of concern amongst participants; ease of insertion and lack of interference in daily life were perceived to be of great importance to the demographic. Most of the women in the study were in relationships, and statements on the impact of the device on relationship dynamics and the sexual experience of couples were revealed. Many partnered wearers harboured apprehensive feelings, fearing their partners' reaction to the ring or that their partners might feel the ring during sexual intercourse. Some even hid the fact that they wore it and maintained secrecy through abstinence or by refraining from certain sexual positions. This prompted discussion of the ring's negative impact on interpersonal relationships, causing psychosocial strain and disturbed sexual experience.
Negative rumours
The spread of negative rumours surrounding the ring was prevalent amongst locals during the ASPIRE study.
In 2018, follow-up studies revealed that the most prominent claims concerned its supposed carcinogenicity and its supposed ability to cause infertility. Rumours that the ring was a population-control method that spread disease were widespread as well, and there was a pervasive conflation of the device with witchcraft or Satanism, attributed to traditional Christian values and beliefs widely held across the South African region. Fear of the novel tool stemmed from the fact that it is a vaginally administered object, which incited claims among locals that it was unnatural. As a result, some participants were forced to remove the ring or drop out of the study due to concerns raised by their communities.
Development of vaginal ring alternatives
Since 2017, a three-month DPV Ring option has been under clinical trial. A number of alternative intravaginal rings made by IPM are under further testing or delayed in preclinical development due to insufficient funding, including:
Dapivirine-Contraceptive Ring: a three-month multipurpose ring containing DPV and the contraceptive hormone levonorgestrel.
DS003-Dapivirine Ring: a three-month combined antiretroviral ring containing DPV and DS003.
Darunavir Ring: a one-month combined antiretroviral ring containing darunavir and a second antiretroviral medication.
References
Medical devices HIV/AIDS
Dapivirine Ring
[ "Biology" ]
2,366
[ "Medical devices", "Medical technology" ]
63,479,880
https://en.wikipedia.org/wiki/Quyen%20T.%20Nguyen
Quyen T. Nguyen is an American surgeon-scientist and Professor in the Department of Surgery at UC San Diego School of Medicine and associate director of Education and Training at UC San Diego Moores Cancer Center. She is known for her work pioneering fluorescence-guided surgery and co-holds several patents with Nobel Laureate Roger Y. Tsien, PhD pertaining to their invention of peptides, imaging systems and methods to support fluorescence-guided cancer tumor resection and fluorescent labeling of nerves on the surgical bed.
Education
Nguyen received a bachelor's degree in Psychobiology from the University of Southern California, and an MD/PhD in Medicine and Neuroscience from Washington University School of Medicine in St. Louis, Missouri in the lab of Jeff W. Lichtman. While in Lichtman's lab, Nguyen developed an in-vivo fluorescence time-lapse imaging system to visualize motor nerve regeneration. She completed her General Surgery internship at Barnes-Jewish Hospital and a residency in Otolaryngology and Head and Neck Surgery at UC San Diego.
Career and awards
Nguyen is board-certified in Head and Neck Surgery and Neurotology/Skull Base Surgery. She serves as Director of the Facial Nerve Clinic at UC San Diego, which provides evaluation and surgical treatment for patients with varying facial nerve dysfunctions. In her clinical practice she also treats and operates on patients with diseases of the ear and skull base. She is Professor of Surgery in the UC San Diego School of Medicine, and serves as associate director of Education and Training at UC San Diego Moores Cancer Center where her focus is on providing equitable access to quality cancer education and training programs across all academic and faculty levels.
Nguyen and her research team have received a number of grants and awards, including support from the NIH and a Burroughs Wellcome Award in 2009, which have helped to support her research into fluorescence-guided surgery, hailed as a breakthrough in numerous news and scientific publications. In 2011, Nguyen presented a talk at a TEDMED conference titled "Color-Coded Surgery" that has been viewed over 1.2 million times on Ted.com. In 2014, Nguyen received the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Barack Obama for her pioneering work in fluorescence-guided surgery. In 2017, Nguyen founded Alume Biosciences, a biotechnology startup with the goal of translating the nerve-labeling agents developed in the lab to aid physicians in visualizing nerves in the operating room. References American surgeons 21st-century American women scientists Living people American technology chief executives Year of birth missing (living people) Women biotechnologists 21st-century American scientists 21st-century American physicians 21st-century American women physicians American women medical researchers American medical researchers University of Southern California alumni Washington University School of Medicine alumni University of California, San Diego faculty American women surgeons
Quyen T. Nguyen
[ "Biology" ]
566
[ "Biotechnologists", "Women biotechnologists" ]
63,480,925
https://en.wikipedia.org/wiki/NGC%20937
NGC 937 is a barred spiral galaxy located in the constellation Andromeda about 251 million light years from the Milky Way. It was discovered by the French astronomer Édouard Stephan on 12 December 1884. See also List of NGC objects (1–1000) References External links Barred spiral galaxies 0937 01961 +07-06-024 Andromeda (constellation) 009480 Astronomical objects discovered in 1884 Discoveries by Édouard Stephan
NGC 937
[ "Astronomy" ]
90
[ "Andromeda (constellation)", "Constellations" ]
63,480,943
https://en.wikipedia.org/wiki/NGC%20938
NGC 938 is an elliptical galaxy located in the constellation Aries, approximately 184 million light years from the Milky Way. It was discovered by the Prussian astronomer Heinrich d'Arrest in 1863. SN 2015ab, a type Ia supernova, occurred within NGC 938. See also List of NGC objects (1–1000) References External links Elliptical galaxies Aries (constellation) 0938 009423
NGC 938
[ "Astronomy" ]
84
[ "Aries (constellation)", "Constellations" ]
63,480,970
https://en.wikipedia.org/wiki/NGC%20939
NGC 939 is a lenticular or elliptical galaxy in the constellation Eridanus. It is estimated to be 241 million light-years from the Milky Way and has a diameter of approximately 80,000 ly. NGC 939 was discovered on October 18, 1835 by astronomer John Herschel. NGC 939 is better seen from the southern hemisphere because of its location south of the celestial equator. See also List of NGC objects (1–1000) References External links Elliptical galaxies Eridanus (constellation) 0939 009271
NGC 939
[ "Astronomy" ]
109
[ "Eridanus (constellation)", "Constellations" ]
63,481,031
https://en.wikipedia.org/wiki/NGC%20941
NGC 941 is an intermediate spiral galaxy in the constellation Cetus. It is an estimated 16.83 Mpc (55 million light-years) from the Milky Way and has a diameter of approximately 55,000 light years. The galaxies NGC 926, NGC 934, NGC 936, and NGC 955 are located in the same sky area. NGC 941 was discovered by the astronomer William Herschel on 6 January 1785. One supernova has been observed in NGC 941: SN 2005ad (type II, mag. 17.4). References External links Intermediate spiral galaxies 0941 Cetus 009414
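The quoted distance pair can be sanity-checked with a simple unit conversion, using the standard value 1 Mpc ≈ 3.2616 million light-years (the function name below is just for illustration):

```python
MLY_PER_MPC = 3.2616  # million light-years per megaparsec (1 pc ≈ 3.2616 ly)

def mpc_to_million_ly(d_mpc):
    """Convert a distance in megaparsecs to millions of light-years."""
    return d_mpc * MLY_PER_MPC

# 16.83 Mpc is indeed roughly 55 million light-years:
print(round(mpc_to_million_ly(16.83)))  # -> 55
```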
NGC 941
[ "Astronomy" ]
129
[ "Cetus", "Constellations" ]
63,481,723
https://en.wikipedia.org/wiki/Timeline%20of%20crystallography
This is a timeline of crystallography.
17th century
1669 - In his book De solido intra solidum naturaliter contento, Nicolas Steno asserted that, although the number and size of crystal faces may vary from one crystal to another, the angles between corresponding faces are always the same. This was the original statement of the first law of crystallography (Steno's law).
18th century
1723 - Moritz Anton Cappeller introduced the term crystallography in his book Prodromus Crystallographiae De Crystallis Improprie Sic Dictis Commentarium.
1766 - Pierre-Joseph Macquer, in his Dictionnaire de Chymie, promoted mechanisms of crystallization based on the idea that crystals are composed of polyhedral molecules (primitive integrantes).
1772 - Jean-Baptiste L. Romé de l'Isle developed geometrical ideas on crystal structure in his Essai de Cristallographie. He also described the twinning phenomenon in crystals.
1781 - Abbé René Just Haüy (often termed the "Father of Modern Crystallography") discovered that crystals always cleave along crystallographic planes. Based on this observation, and the fact that the inter-facial angles in each crystal species always have the same value, Haüy concluded that crystals must be periodic and composed of regularly arranged rows of tiny polyhedra (molécules intégrantes). This theory explained why all crystal planes are related by small rational numbers (the law of rational indices).
1783 - Jean-Baptiste L. Romé de l'Isle, in the second edition of his Cristallographie, used the contact goniometer to discover the law of constancy of interfacial angles: angles are constant and characteristic for crystals of the same chemical substance.
1784 - René Just Haüy published his law of decrements: a crystal is composed of molecules arranged periodically in three dimensions.
1795 - René Just Haüy lectured on his law of symmetry: "the manner in which Nature creates crystals is always obeying ...
the law of the greatest possible symmetry, in the sense that oppositely situated but corresponding parts are always equal in number, arrangement, and form of their faces".
19th century
1801 - René Just Haüy published his multi-volume Traité de Minéralogie in Paris. A second edition, under the title Traité de Cristallographie, was published in 1822.
1801 - Déodat de Dolomieu published his Sur la philosophie minéralogique et sur l'espèce minéralogique in Paris.
1815 - René Just Haüy published his law of symmetry.
1815 - Christian Samuel Weiss, founder of the dynamist school of crystallography, developed a geometric treatment of crystals in which crystallographic axes, rather than Haüy's polyhedral molecules, are the basis for the classification of crystals.
1819 - Eilhard Mitscherlich discovered crystallographic isomorphism.
1822 - Friedrich Mohs attempted to bring the molecular approach of Haüy and the geometric approach of Weiss into agreement.
1823 - Franz Ernst Neumann invented a system of crystal-face notation using the reciprocals of the intercepts with the crystal axes, which became the standard for the next 60 years.
1824 - Ludwig August Seeber conceived of the concept of using an array of discrete (molecular) points to represent a crystal.
1826 - Moritz Ludwig Frankenheim derived the 32 crystal classes by using the crystallographic restriction, consistent with Haüy's laws, that only 2-, 3-, 4- and 6-fold rotational axes are permitted.
1830 - Johann F. C. Hessel published an independent geometrical derivation of the 32 point groups (crystal classes).
1832 - Friedrich Wöhler and Justus von Liebig discovered polymorphism in molecular crystals, using the example of benzamide.
1839 - William Hallowes Miller invented zonal relations by projecting the faces of a crystal upon the surface of a circumscribed sphere. Miller indices were defined, forming a notation system in crystallography for planes in crystal (Bravais) lattices.
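Neumann's reciprocal-intercept notation and the Miller indices that grew out of it can be computed mechanically: take the reciprocals of a plane's intercepts with the crystal axes (a plane parallel to an axis counts as an infinite intercept, giving a reciprocal of zero) and clear the reciprocals to the smallest set of integers. A minimal sketch, with an illustrative function name and example intercepts (requires Python 3.9+ for variadic math.lcm/gcd):

```python
from fractions import Fraction
from math import gcd, lcm, inf

def miller_indices(intercepts):
    """Convert a plane's axis intercepts (in units of the lattice
    parameters) to Miller indices (h k l).  Use math.inf for a plane
    parallel to an axis, whose reciprocal intercept is zero."""
    reciprocals = [Fraction(0) if x == inf else 1 / Fraction(x) for x in intercepts]
    # Clear denominators to get integers, then divide out any common factor.
    m = lcm(*(r.denominator for r in reciprocals))
    ints = [int(r * m) for r in reciprocals]
    g = gcd(*ints) or 1
    return tuple(i // g for i in ints)

# A plane cutting the axes at 1a and 2b while running parallel to c is (210):
print(miller_indices((1, 2, inf)))  # -> (2, 1, 0)
```

The small-integer result for any real crystal face is exactly Haüy's law of rational indices in computational form.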
1840 - Gabriel Delafosse, independently of Seeber, represented crystal structure as an array of discrete points generated by defined translations.
1842 - Moritz Frankenheim derived 15 different theoretical networks of points in space not dependent on molecular shape.
1848 - Louis Pasteur discovered that sodium ammonium tartrate can crystallize in left- and right-handed forms and showed that the two forms can rotate polarized light in opposite directions. This was the first demonstration of molecular chirality, and also the first explanation of isomerism.
1850 - Auguste Bravais derived the 14 space lattices.
1869 - Axel Gadolin, independently of Hessel, derived the 32 crystal classes using stereographic projection.
1877 - Paul Heinrich von Groth founded the journal Zeitschrift für Krystallographie und Mineralogie, and served as its editor for 44 years.
1877 - Ernest-François Mallard, building on the work of Auguste Bravais, published a memoir on optically "anomalous" crystals (that is, crystals whose morphology seems to be of greater symmetry than their optics), in which crystal twinning and "pseudosymmetry" were used as explanatory concepts.
1879 - Leonhard Sohncke listed the 65 crystallographic point systems using rotations and screw rotations in addition to translations.
1888 - Friedrich Reinitzer discovered the existence of liquid crystals during investigations of cholesteryl benzoate.
1889 - Otto Lehmann, after receiving a letter from Friedrich Reinitzer, used polarized light to explain the phenomenon of liquid crystals.
1891 - Derivation of the 230 space groups (by adding mirror-image symmetry to Sohncke's work) by a collaborative effort of Evgraf Fedorov and Arthur Schoenflies.
1894 - William Barlow, using a sphere-packing approach, independently derived the 230 space groups.
1894 - Pierre Curie described what is now called Curie's principle for the symmetry properties of crystals.
1895 - Wilhelm Conrad Röntgen on 8 November 1895 produced and detected electromagnetic radiation in a wavelength range now known as X-rays or Röntgen rays, an achievement that earned him the first Nobel Prize in Physics in 1901. X-rays became the major mode of crystallographic research in the 20th century.
1899 - Hermanus Haga and Cornelis Wind observed X-ray diffuse broadening through a slit and deduced that the wavelength of X-rays is on the order of an angstrom.

20th century
1905 - Charles Glover Barkla discovered the X-ray polarization effect.
1908 - Bernhard Walter and Robert Wichard Pohl observed X-ray diffraction from a slit.
1912 - Max von Laue discovered diffraction patterns from crystals in an X-ray beam.
1912 - Bragg diffraction, expressed through Bragg's law, was first presented by Lawrence Bragg on 11 November 1912 to the Cambridge Philosophical Society.
1912 - Heinrich Baumhauer discovered and described polytypism in crystals of carborundum, or silicon carbide.
1913 - Lawrence Bragg published the first observation of X-ray diffraction by crystals. Similar observations were also published by Torahiko Terada in the same year.
1913 - Georges Friedel stated Friedel's law, a property of Fourier transforms of real functions. Friedel's law is used in X-ray diffraction, crystallography and scattering from a real potential within the Born approximation.
1914 - Max von Laue won the Nobel Prize in Physics "for his discovery of the diffraction of X-rays by crystals."
1915 - William and Lawrence Bragg published the book X rays and crystal structure and shared the Nobel Prize in Physics "for their services in the analysis of crystal structure by means of X-rays."
1916 - Peter Debye and Paul Scherrer discovered powder (polycrystalline) diffraction.
1916 - Paul Peter Ewald predicted the Pendellösung effect, which is a foundational aspect of the dynamical diffraction theory of X-rays.
1917 - Albert W. Hull independently discovered powder diffraction while researching the crystal structure of metals.
1920 - Reginald Oliver Herzog and Willi Jancke published the first systematic analysis of X-ray diffraction patterns of cellulose extracted from a variety of sources.
1921 - Paul Peter Ewald introduced a spherical construction for explaining the occurrence of diffraction spots, which is now called the Ewald sphere.
1922 - Charles Galton Darwin formulated the theory of X-ray diffraction from imperfect crystals and introduced the concept of mosaicity in crystallography.
1922 - Ralph Wyckoff published a book containing tables with the positional coordinates permitted by the symmetry elements. These positions are now known as Wyckoff positions. This book was the forerunner of the International tables for crystallography, which first appeared in 1935.
1923 - Roscoe Dickinson and Albert Raymond, and independently H.J. Gonell and Hermann Mark, first showed that an organic molecule, specifically hexamethylenetetramine, could be characterized by X-ray crystallography.
1923 - William H. Bragg and Reginald E. Gibbs elucidated the structure of quartz.
1923 - Paul Peter Ewald published his book Kristalle und Röntgenstrahlen (Crystals and X-rays).
1924 - Louis de Broglie in his PhD thesis Recherches sur la théorie des quanta introduced his theory of electron waves. This was the start of electron and neutron diffraction and crystallography.
1924 - J.D. Bernal established the structure of graphite.
1926 - Victor Goldschmidt distinguished between atomic and ionic radii and postulated some rules for atom substitution in crystal structures.
1927 - Frits Zernike and Jan Albert Prins proposed the pair distribution function for analyzing molecular structures in solution-phase diffraction.
1927 - Two groups demonstrated electron diffraction: the first in the Davisson–Germer experiment, the other by George Paget Thomson and Alexander Reid. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident.
1928 - Felix Machatschki, working with Goldschmidt, showed that silicon can be replaced by aluminium in feldspar structures.
1928 - Kathleen Lonsdale used X-rays to determine that the structure of benzene is a flat hexagonal ring.
1928 - Paul Niggli introduced reduced cells for simplifying structures, using a technique now known as Niggli reduction.
1928 - Hans Bethe published the first non-relativistic explanation of electron diffraction based upon Schrödinger's equation, which remains central to all further analysis.
1928 - Carl Hermann introduced, and Charles Mauguin modified, the international standard notation for crystallographic groups called Hermann–Mauguin notation.
1929 - Linus Pauling formulated a set of rules (later called Pauling's rules) to describe the structure of complex ionic crystals.
1929 - William Howard Barnes published the crystal structure of ice.
1930 - Lawrence Bragg assembled the first classification of silicates, describing their structure in terms of groupings of SiO4 tetrahedra.
1930 - Gas electron diffraction was developed by Herman Mark and Raymond Wierl.
1931 - Paul Ewald and Carl Hermann published the first volume of the Strukturbericht (Structure Report), which established the systematic classification of crystal structure prototypes, also known as the Strukturbericht designation.
1931 - Fritz Laves enumerated the Laves tilings for the first time.
1932 - W. H. Zachariasen published an article entitled The atomic arrangement in glass, which perhaps had more influence than any other published work on the science of glass.
1932 - Friedrich Rinne introduced the concept of paracrystallinity for liquid crystals and amorphous materials.
1932 - Vadim E. Lashkaryov and Ilya D. Usyskin determined the positions of hydrogen atoms in ammonium chloride crystals using electron diffraction.
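Bragg's law (1912) and Miller indices (1839) together give the standard relation used to index diffraction patterns: for a cubic crystal, d = a/√(h² + k² + l²), and nλ = 2d sin θ. A small Python sketch; the NaCl cell edge and Cu Kα wavelength below are assumed example values, not figures from this article:

```python
import math

def cubic_d_spacing(a, h, k, l):
    """Interplanar spacing d(hkl) for a cubic lattice with cell edge a."""
    return a / math.sqrt(h * h + k * k + l * l)

def bragg_angle_deg(wavelength, d, n=1):
    """Bragg angle theta in degrees, from n*lambda = 2*d*sin(theta)."""
    s = n * wavelength / (2 * d)
    if s > 1:
        raise ValueError("reflection not accessible: n*lambda > 2d")
    return math.degrees(math.asin(s))

# Assumed example values: NaCl cell edge a = 5.64 Å, Cu K-alpha = 1.5406 Å.
d_200 = cubic_d_spacing(5.64, 2, 0, 0)   # d(200) = 2.82 Å
theta = bragg_angle_deg(1.5406, d_200)   # about 15.9 degrees
```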
1934 - Arthur Patterson introduced the Patterson function, which uses diffraction intensities to determine the interatomic distances within a crystal, setting limits on the possible phase values for the reflected X-rays.
1934 - Martin Julian Buerger developed the equi-inclination Weissenberg X-ray camera. Buerger invented the precession camera in 1942.
1934 - C. Arnold Beevers and Henry Lipson invented the Beevers–Lipson strip as a calculation aid for Fourier methods in the determination of the crystal structure of CuSO4·5H2O.
1934 - Fritz Laves investigated the structures of intermetallic compounds of formula AB2. These structures were subsequently named Laves phases.
1935 - First publication of the International tables for the determination of crystal structures, edited by Carl Hermann. The successor volumes are currently published by the IUCr as the International tables for crystallography.
1935 - William Astbury established the structure of keratin using X-ray crystallography; this work provided the foundation for Linus Pauling's 1951 discovery of the α-helix.
1936 - Peter Debye won the Nobel Prize in Chemistry "for his contributions to our knowledge of molecular structure through his investigations on dipole moments and on the diffraction of X-rays and electrons in gases."
1936 - The electron microscope was shown to be usable as a micro-diffraction camera with an aperture—the birth of selected area electron diffraction.
1937 - Clinton Joseph Davisson and George Paget Thomson shared the Nobel Prize in Physics "for their experimental discovery of the diffraction of electrons by crystals."
1939 - Linus Pauling published the book The Nature of the Chemical Bond and the Structure of Molecules and Crystals.
1939 - André Guinier discovered small-angle X-ray scattering.
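Patterson's 1934 insight is that the Fourier transform of the measured intensities alone, with every phase set to zero, yields a map whose peaks sit at interatomic vectors rather than atomic positions. A minimal one-dimensional sketch with point atoms of unit scattering power (the positions and truncation are illustrative assumptions, not from the source):

```python
import cmath

def structure_factors(positions, hmax):
    """F(h) for point atoms of unit scattering power in a 1-D unit cell."""
    return {h: sum(cmath.exp(2j * cmath.pi * h * x) for x in positions)
            for h in range(-hmax, hmax + 1)}

def patterson(factors, u):
    """Patterson function P(u) = sum_h |F(h)|^2 * exp(-2*pi*i*h*u).

    Only the intensities |F(h)|^2 enter, so no phase information is
    needed; peaks appear at interatomic vectors, not atomic positions.
    """
    return sum(abs(F) ** 2 * cmath.exp(-2j * cmath.pi * h * u)
               for h, F in factors.items()).real

# Two atoms at fractional coordinates 0.1 and 0.4: their interatomic
# vector is 0.3, so P(u) peaks at u = 0 and u = ±0.3 but not elsewhere.
fs = structure_factors([0.1, 0.4], hmax=20)
```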
1939 - Walther Kossel and Gottfried Möllenstedt published the first work on convergent beam electron diffraction (CBED). It was extended by Peter Goodman and Gunter Lehmpfuhl, then mainly by the groups of John Steeds and Michiyoshi Tanaka, who showed how to use CBED patterns to determine point groups and space groups.
1941 - The International Centre for Diffraction Data was founded.
1945 - George W. Brindley and Keith Robinson solved the crystal structure of kaolinite.
1945 - The crystal structure of the perovskite BaTiO3 was first published by Helen Megaw based on barium titanate X-ray diffraction data.
1945 - A.F. Wells published the classic reference book Structural inorganic chemistry, which subsequently went through five editions.
1946 - Foundation of the International Union of Crystallography.
1946 - James Batcheller Sumner shared the Nobel Prize in Chemistry "for his discovery that enzymes can be crystallized".
1947 - Lewis Stephen Ramsdell systematically classified the polytypes of silicon carbide and introduced the Ramsdell notation.
1948 - The first congress and general assembly of the International Union of Crystallography was held at Harvard University.
1948 - Acta Crystallographica was founded by the International Union of Crystallography (IUCr) with P.P. Ewald as its first editor.
1948 - Ernest O. Wollan and Clifford Shull published the first series of neutron diffraction experiments for crystallography, performed at the Oak Ridge National Laboratory.
1948 - George Pake used solid-state NMR spectroscopy to determine hydrogen atom distances in a single crystal of gypsum.
1949 - Clifford Shull opened a new field of magnetic crystallography based on neutron diffraction.
1950 - Jerome Karle and Herbert A. Hauptman introduced formulae for phase determination known as direct methods.
1951 - Johannes Martin Bijvoet and his colleagues, using anomalous scattering, confirmed that Emil Fischer's arbitrary assignment of absolute configuration, in relation to the direction of optical rotation of polarized light, was correct in practice.
1951 - Linus Pauling determined the structure of the α-helix and the β-sheet in polypeptide chains.
1951 - Alexei Vasilievich Shubnikov published Symmetry and antisymmetry of finite figures, which opened up the field of antisymmetry in magnetic structures.
1952 - David Sayre suggested that the phase problem could be more easily solved by having at least one more intensity measurement beyond those of the Bragg peaks in each dimension. This concept is understood today as oversampling.
1952 - Geoffrey Wilkinson and Ernst Otto Fischer determined the structure of ferrocene, the first metallic sandwich compound, for which they won the 1973 Nobel Prize in Chemistry. The structure was soon refined by Jack Dunitz, Leslie Orgel, and Alexander Rich.
1953 - Arne Magnéli introduced the term homologous series to describe polytypes of transition metal oxides that exhibit crystallographic shear structures.
1953 - Determination of the structure of DNA by three British teams, for which James Watson, Francis Crick and Maurice Wilkins won the 1962 Nobel Prize in Physiology or Medicine (Rosalind Franklin's death in 1958 made her ineligible for the award).
1954 - Ukichiro Nakaya's book Snow Crystals: Natural and Artificial, dedicated to the modern study of snow crystals, was published.
1954 - Linus Pauling won the Nobel Prize in Chemistry "for his research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances".
1956 - Durward W. J. Cruickshank developed the theoretical framework for anisotropic displacement parameters, also known as the thermal ellipsoid.
1956 - James Menter published the first electron microscope images showing the lattice structure of a material.
1958 - William Burton Pearson published A Handbook of Lattice Spacings and Structures of Metals and Alloys, in which he introduced the Pearson symbols for crystal structure types.
1959 - Norio Kato and Andrew Richard Lang observed Pendellösung fringes in X-ray diffraction from silicon and quartz. The observation of similar fringes in neutron diffraction was made by Clifford Shull in 1968.
1960 - John Kendrew determined the structure of myoglobin, for which he shared the 1962 Nobel Prize in Chemistry.
1960 - After many years of research, Max Perutz determined the structure of haemoglobin, for which he shared the 1962 Nobel Prize in Chemistry.
1960 - Lester Germer and his coworkers at Bell Labs built the first modern low-energy electron diffraction camera, using a flat phosphor screen combined with ultra-high vacuum, the start of quantitative surface crystallography.
1962 - Alan Mackay demonstrated that there exists a close packing of spheres that yields icosahedral structures.
1962 - Michael Rossmann and David Blow laid the foundation for the molecular replacement approach, which provides phase information without requiring additional experimental effort.
1962 - Max Perutz and John Kendrew shared the Nobel Prize in Chemistry "for their studies of the structures of globular proteins", namely haemoglobin and myoglobin respectively.
1962 - James Watson, Francis Crick and Maurice Wilkins won the Nobel Prize in Physiology or Medicine "for their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material," specifically for their determination of the structure of DNA.
1963 - Isabella Karle developed the symbolic addition procedure in direct methods for inverting X-ray diffraction data.
1963 - Jürg Waser introduced the restrained least squares method, also known as regularized least squares, for crystallographic structure fitting.
1964 - Dorothy Hodgkin won the Nobel Prize in Chemistry "for her determinations by X-ray techniques of the structures of important biochemical substances." The substances included penicillin and vitamin B12.
1965 - David Chilton Phillips, Louise Johnson and their co-workers published the structure of lysozyme, the first enzyme to have its structure determined.
1965 - Olga Kennard established the Cambridge Structural Database.
1967 - Hugo Rietveld invented the Rietveld refinement method for refining crystal structures against powder diffraction data.
1968 - Erwin Félix Lewy-Bertaut introduced magnetic space groups to account for the spin ordering of magnetic structures observed in neutron crystallography.
1968 - Aaron Klug and David DeRosier used electron microscopy to visualise the structure of the tail of bacteriophage T4, a common virus, thus signalling a breakthrough in macromolecular structure determination.
1968 - Dorothy Hodgkin, after 35 years of work, finally deciphered the structure of insulin.
1969 - Benno P. Schoenborn conducted the first structural study of macromolecules (myoglobin) by neutron diffraction, at the Brookhaven National Laboratory.
1970 - Albert Crewe demonstrated imaging of single atoms in a scanning transmission electron microscope.
1971 - Establishment of the Protein Data Bank (PDB). At the PDB, Edgar Meyer developed the first general software tools for handling and visualizing protein structural data.
1971 - Gerd Rosenbaum, Kenneth Holmes, and Jean Witz first discussed the potential of synchrotron X-ray diffraction for biological applications.
1972 - The first quantitative matching of atomic-scale images and dynamical simulations was published by J. G. Allpress, E. A. Hewat, A. F. Moodie and J. V. Sanders.
1972 - Michael Glazer established the classification of octahedral tilting patterns in perovskite crystal structures, later also known as the Glazer tilts.
1973 - Alex Rich's group published the first report of a polynucleotide crystal structure, that of the yeast transfer RNA (tRNA) for phenylalanine.
1973 - Geoffrey Wilkinson and Ernst Fischer shared the Nobel Prize in Chemistry "for their pioneering work, performed independently, on the chemistry of the organometallic, so called sandwich compounds", specifically the structure of ferrocene.
1976 - Douglas L. Dorset and Herbert A. Hauptman used direct methods to solve crystal structures from electron diffraction data.
1976 - Boris Delaunay, building on his work in the 1930s, proved that the regularity of a system of points, an (r, R) system or Delone set, can be established by postulating the points' congruence within a sphere of a defined finite radius.
1976 - William Lipscomb won the Nobel Prize in Chemistry "for his studies on the structure of boranes illuminating problems of chemical bonding."
1978 - Stephen C. Harrison provided the first high-resolution structure of a virus: tomato bushy stunt virus, which is icosahedral in form.
1978 - Günter Bergerhoff and I. David Brown initiated the Inorganic Crystal Structure Database.
1979 - The first award of the Gregori Aminoff Prize for a contribution in the field of crystallography was made by the Royal Swedish Academy of Sciences to Paul Peter Ewald.
1979 - A team involving Alfred Y. Cho and others at Bell Labs made the first reconstruction of atomic structures at the materials interface between gallium arsenide and aluminium using X-ray diffraction.
1980 - Jerome Karle and Wayne Hendrickson developed multi-wavelength anomalous dispersion (MAD), a technique to facilitate the determination of the three-dimensional structure of biological macromolecules via a solution of the phase problem.
1982 - Aaron Klug won the Nobel Prize in Chemistry "for his development of crystallographic electron microscopy and his structural elucidation of biologically important nucleic acid-protein complexes."
1983 - John R. Helliwell promoted the use of synchrotron radiation in the crystallography of molecular biology.
1983 - Effectively simultaneously, Ian Robinson used surface X-ray diffraction (SXRD) to solve the structure of the gold 2x1 (110) surface, Laurence D. Marks used electron microscopy, and Gerd Binnig and Heinrich Rohrer used the scanning tunneling microscope.
1984 - A team led by Dan Shechtman, also involving Ilan Blech, Denis Gratias, and John W. Cahn, discovered quasicrystals in a metallic alloy. These structures have no unit cell and no periodic translational order but have long-range bond orientational order, which generates a defined diffraction pattern.
1984 - Aaron Klug and his colleagues provided an advance in determining the structure of protein–nucleic acid complexes when they solved the structure of the 206-kDa nucleosome core particle.
1985 - Jerome Karle shared the Nobel Prize in Chemistry with Herbert A. Hauptman "for their outstanding achievements in the development of direct methods for the determination of crystal structures". Karle developed the theoretical basis for multiple-wavelength anomalous diffraction (MAD).
1985 - Hartmut Michel and his colleagues reported the first high-resolution X-ray crystal structure of an integral membrane protein when they published the structure of a photosynthetic reaction centre.
1985 - Kunio Takanayagi led a team which solved the structure of the 7x7 reconstruction of the silicon (111) surface using Patterson function methods with ultra-high-vacuum electron diffraction. This surface structure had defeated many prior attempts.
1986 - Ernst Ruska shared the Nobel Prize in Physics "for his fundamental work in electron optics, and for the design of the first electron microscope".
1987 - John M. Cowley and Alexander F. Moodie shared the first IUCr Ewald Prize "for their outstanding achievements in electron diffraction and microscopy. They carried out pioneering work on the dynamical scattering of electrons and the direct imaging of crystal structures and structure defects by high-resolution electron microscopy. The physical optics approach used by Cowley and Moodie takes into account many hundreds of scattered beams, and represents a far-reaching extension of the dynamical theory for X-rays, first developed by P.P. Ewald".
1987 - Don Craig Wiley and Jack L. Strominger solved the structure of the soluble portion of a class I MHC molecule known as HLA-A2. This structure revealed the presence of a pocket which holds the antigenic peptide, which is recognized by the receptors of T cells only when firmly bound to the MHC product and presented at the surface of an infected cell. This structure strongly influenced the concept of T cell recognition in future work.
1988 - Johann Deisenhofer, Robert Huber and Hartmut Michel shared the Nobel Prize in Chemistry "for the determination of the three-dimensional structure of a photosynthetic reaction centre."
1989 - Gautam R. Desiraju defined crystal engineering as "the understanding of intermolecular interactions in the context of crystal packing and the utilization of such understanding in the design of new solids with desired physical and chemical properties."
1991 - Georg E. Schulz and colleagues reported the structure of a bacterial porin, a membrane protein with a cylindrical shape (a 'β-barrel').
1991 - The crystallographic information file (CIF) format was introduced by Sydney R. Hall, Frank H. Allen, and I. David Brown, based on the self-defining text archive and retrieval (STAR) file format developed by Sydney R. Hall.
1991 - Sumio Iijima used electron diffraction to determine the structure of carbon nanotubes.
1992 - The International Union of Crystallography changed the IUCr's definition of a crystal to "any solid having an essentially discrete diffraction pattern", thus formally recognizing quasicrystals.
1992 - First release of the CNS software package by Axel T. Brunger. CNS is an extension of X-PLOR (released in 1987) and is used for solving structures based on X-ray diffraction or solution NMR data.
1994 - Jan Pieter Abrahams et al. reported the structure of an F1-ATPase, which uses the proton-motive force across the inner mitochondrial membrane to facilitate the synthesis of adenosine triphosphate (ATP).
1994 - Roger Vincent and Paul Midgley invented the precession electron diffraction method for electron crystallography in a transmission electron microscope.
1994 - Bertram Brockhouse and Clifford Shull shared the Nobel Prize in Physics "for pioneering contributions to the development of neutron scattering techniques for studies of condensed matter": specifically, Brockhouse "for the development of neutron spectroscopy" and Shull "for the development of the neutron diffraction technique."
1994 - Philip Coppens led a team of researchers to uncover the transient structure of sodium nitroprusside, a first example of X-ray excited-state crystallography.
1995 - Douglas L. Dorset published Structural Electron Crystallography, a major text on electron crystallography.
1997 - The Bilbao Crystallographic Server was launched at the University of the Basque Country, led by Mois Ilia Aroyo and Juan Manuel Perez-Mato.
1997 - The X-ray crystal structure of bacteriorhodopsin marked the first time the lipidic cubic phase (LCP) was used to facilitate the crystallization of a membrane protein; LCP has since been used to obtain the structures of many unique membrane proteins, including G protein-coupled receptors (GPCRs).
1997 - Paul D. Boyer and John E. Walker shared one half of the Nobel Prize in Chemistry "for their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)". Walker determined the crystal structure of ATP synthase, and this structure confirmed a mechanism earlier proposed by Boyer, mainly on the basis of isotopic studies.
1997 - Nobuo Niimura led a team that first used a neutron image plate for structure determination of lysozyme at the Institut Laue–Langevin.
1998 - The structure of tubulin and the location of the taxol-binding site were first determined by Eva Nogales and her team using electron crystallography.
1998 - A group led by Jon Gjønnes combined three-dimensional electron diffraction with precession electron diffraction and direct methods to solve the structure of an intermetallic compound, combining this with dynamical refinements.
1999 - Jianwei Miao, Janos Kirz, David Sayre and co-workers performed the first experiment to extend crystallography to allow structural determination of non-crystalline specimens, which has become known as coherent diffraction imaging (CDI), lensless imaging, or computational microscopy.
1999 - A team led by Michael O'Keeffe and Omar Yaghi synthesized and determined the structure of MOF-5, the first metal-organic framework (MOF) compound. In the ensuing years, the duo and the mathematician Olaf Delgado-Friedrichs further developed the periodic net theory proposed by Alexander F. Wells to characterize MOFs.

21st century
2000 - Janos Hajdu, Richard Neutze, and colleagues calculated that they could use Sayre's ideas from the 1950s to implement a 'diffraction before destruction' concept, using an X-ray free-electron laser (XFEL).
2001 - Harry F. Noller's group published the 5.5-Å structure of the complete Thermus thermophilus 70S ribosome. This structure revealed that the major functional regions of the ribosome were based on RNA, establishing the primordial role of RNA in translation.
2001 - Roger Kornberg's group published the 2.8-Å structure of Saccharomyces cerevisiae RNA polymerase. The structure allowed both transcription initiation and elongation mechanisms to be deduced. Simultaneously, this group reported the structure of free RNA polymerase II, which contributed towards the eventual visualisation of the interaction between DNA, RNA, and the ribosome.
2003 - Raimond Ravelli et al. demonstrated the X-ray radiation damage-induced phasing method for structure determination.
2005 - The first X-ray free-electron laser in the soft X-ray regime, FLASH, became an operational user facility at DESY for X-ray diffraction experiments.
2007 - Ute Kolb and co-workers developed automated diffraction tomography for electron crystallography by combining diffraction and tomography within a transmission electron microscope.
2007 - Two X-ray crystal structures of a GPCR, the human β2 adrenergic receptor, were published. Because many drugs elicit their biological effect(s) by binding to a GPCR, the structures of these and other GPCRs may be used to develop efficacious drugs with few side effects.
2009 - The first hard X-ray free-electron laser, the Linac Coherent Light Source, became operational at the SLAC National Accelerator Laboratory.
2009 - Luca Bindi, Paul Steinhardt, Nan Yao, and Peter Lu identified the first naturally occurring quasicrystal using X-ray and electron crystallography.
2009 - Venkatraman Ramakrishnan, Thomas A. Steitz and Ada E. Yonath shared the Nobel Prize in Chemistry "for studies of the structure and function of the ribosome."
2009 - Judith Howard and her collaborators created the Olex2 crystallographic software package.
2011 - Gustaaf Van Tendeloo led a team, including Sandra Van Aert and Kees Joost Batenburg, that determined the 3D atomic positions of a silver nanoparticle using electron tomography.
2011 - Dan Shechtman received the Nobel Prize in Chemistry "for the discovery of quasicrystals."
2011 - Henry N. Chapman, Petra Fromme, John C. H. Spence and 85 co-workers used femtosecond pulses from an X-ray free-electron laser (XFEL) to examine the structure of nanocrystals of Photosystem I. By using very brief X-ray pulses, most radiation damage is mitigated, in a technique called serial femtosecond crystallography.
2012 - Jianwei Miao and his co-workers applied the coherent diffraction imaging (CDI) method in atomic electron tomography (AET).
2013 - Tamir Gonen and his co-workers demonstrated microcrystal electron diffraction (MicroED) for lysozyme microcrystals at the Janelia Farm Research Campus.
2014 - Carmelo Giacovazzo published Phasing in Crystallography: A Modern Perspective, a comprehensive opus on phasing methods in X-ray and electron crystallography.
2014 - The International Union of Crystallography and UNESCO named 2014 the International Year of Crystallography to commemorate a century of discovery since the first observation of X-ray diffraction.
2017 - Lukas Palatinus and co-workers used dynamical structure refinement to resolve hydrogen atom positions in nanocrystals using electron diffraction.
2017 - Jacques Dubochet, Joachim Frank and Richard Henderson shared the Nobel Prize in Chemistry "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution."
2019 - The Cambridge Structural Database reached the milestone of one million structures.
2020 - Two independent groups, led respectively by Holger Stark and Sjors Scheres, demonstrated that single-particle cryo-electron microscopy had reached atomic resolution.
2021 - Kenneth G. Libbrecht published the book Snow Crystals: A Case Study in Spontaneous Structure Formation, summarizing his decades of work on the subject, including engineering growth conditions for designer ice crystals.
2022 - Leonid Dubrovinsky, Igor A. Abrikosov, and Natalia Dubrovinskaia led a team that demonstrated high-pressure crystallography in the terapascal regime.
2024 - A team led by Anders Madsen developed a deep learning model, PhAI, to solve the crystallographic phase problem for small molecules.

Further reading
Crystallography before the 20th century:
Burke, John G. (1966), Origins of the Science of Crystals, University of California Press.
Lima-de-Faria, José (ed.) (1990), Historical Atlas of Crystallography, Springer Netherlands.
Crystallography in the 20th century and beyond:
Milestones in crystallography, Nature, August 2014.
History of X-ray crystallography:
Ewald, P. P. (ed.) (1962), 50 Years of X-ray Diffraction, IUCr, Oosthoek.
Authier, André (2013), Early Days of X-ray Crystallography, Oxford University Press.
https://en.wikipedia.org/wiki/Biosecurity%20Act%202015
The Biosecurity Act 2015 is an Act of the Parliament of Australia which manages biosecurity risks in Australia at the national border. It was enacted on 16 June 2015, after the Bill was passed with bipartisan support on 14 May 2015. It covers both agricultural and human medical biosecurity risks, including epidemics and pandemics, and is designed to contain and/or deal with any "diseases and pests that may cause harm to human, animal or plant health or the environment" in Australia. The application of the Act was particularly tested during the COVID-19 pandemic in Australia.

History
The Act replaced most of the Quarantine Act 1908, which was wholly repealed on 16 June 2016 by the Biosecurity (Consequential Amendments and Transitional Provisions) Act 2015. The new Act is a major reform of the Quarantine Act, in particular in its strengthening and modernising of the existing framework of regulations governing biosecurity legislation in Australia. As recommended by the Beale Review (One Biosecurity: A Working Partnership, Roger Beale et al., 2008) and the earlier Nairn Report, the Act adopts a risk-based approach, but includes several measures to manage unacceptable levels of biosecurity risk. New requirements include how the Department of Agriculture, Water and the Environment (at the time of enactment, the Department of Agriculture and Water Resources) would manage biosecurity risks associated with goods, people and vessels entering Australia. The Biosecurity Bill 2014 passed through parliament on 14 May 2015 with bipartisan support. The Act did not radically change operational functions, but they were described more clearly, with the aim of making the legislation easier to use and less complex to administer. The main change related to compliance and enforcement powers.
Amendments and related legislation The Act has been amended a number of times, the most recent amendment, as of 25 March 2020, being the Coronavirus Economic Response Package Omnibus Act 2020, which also affects other legislation. Also related to the Act are the Biosecurity Charges Imposition (General) Act 2015, Biosecurity Regulation 2016, Biosecurity (Human Health) Regulation 2016, and Biosecurity Charges Imposition (General) Regulation 2016. Description The Act is concerned with "managing diseases and pests that may cause harm to human, animal or plant health or the environment" in Australia. However, because biosecurity within Australia is a sovereign right of the states under the Australian Constitution, the Act is focused on preventing the entry of new pests and diseases and on managing Commonwealth land such as ports and defence force sites. There are chapters which cover management of risks in several areas, including: Risks to human health. Human diseases listed in a legislative instrument are the main focus, but there are also requirements relating to people entering or leaving Australian territory, as well as rules relating to managing deceased people. Goods that are brought into Australian territory from outside Australian territory, by air or by sea. It also provides for prohibition of, or conditions relating to, certain goods being brought or imported into Australian territory. Aircraft and vessels entering Australian territory from outside Australian territory, including controlling landing or mooring places and their movement while they are in Australian territory. Implementation of the Ballast Water Convention, which regulates the ballast water and sediment of certain vessels, in accordance with the United Nations Convention on the Law of the Sea. Other biosecurity risks which may be posed by diseases or pests in or on goods or premises in Australian territory. 
Arrangements to enable biosecurity officials to carry out biosecurity activities to manage biosecurity risks associated with goods, premises or other things. Provision for the Governor-General of Australia to declare biosecurity emergencies and human biosecurity emergencies; provision for special powers for the Agriculture Minister to deal with biosecurity emergencies (including the delegation of certain powers to national response agencies); and provision for special powers to be given to the Health Minister to deal with human biosecurity emergencies, including by effecting the recommendations of the World Health Organization (WHO). Provision for powers for biosecurity officers to ensure people are complying with the Act, to investigate non-compliance, and to enforce the Act through means such as civil penalties, infringement notices, enforceable undertakings and injunctions. Governance is described, including the functions and powers of the Director of Biosecurity, the Director of Human Biosecurity, and various types of biosecurity officers. Part 3 (Section 344) states that the Director of Human Biosecurity is the person acting in the capacity of Commonwealth Chief Medical Officer (CMO), and this person has certain powers conferred upon them by the Act. Part 4 deals with human biosecurity officers. The CMO may, under Section 563 of the Act, make members of the Australian Defence Force, as well as officers and employees of the federal and state departments of health, human biosecurity officers, if satisfied that they have appropriate clinical expertise. Biosecurity emergencies Chapter 8, Part 2 of the Act deals with "emergencies involving threats or harm to human health on a nationally significant scale". This chapter provides special powers in addition to those available under Chapter 2. 
"The Governor‑General may make a human biosecurity emergency declaration if the Health Minister is satisfied that the special powers in this Part are needed to deal with a human biosecurity emergency. The Health Minister may exercise special powers under this Part to deal with a human biosecurity emergency, subject to limits and protections. These powers may be exercised anywhere in Australian territory". Similar emergency powers are given to the Governor-General and the Agriculture Minister for other types of biosecurity emergencies, in Chapter 8, Part 1. In addition, the Agriculture Minister may "declare Commonwealth bodies, or parts of Commonwealth bodies, to be national response agencies for the purposes of dealing with biosecurity emergencies", and some of the Minister's powers may be delegated to staff in the national response agencies. Scope An article on the parliamentary website explains the scope of the Health Minister's powers under sections 477 and 478 of the Act. However, sub-section 478(5) places limits on interference with state and territory bodies and officials: "A direction must not be given under subsection (1) to an officer or employee of a State, Territory or State or Territory body unless the direction is in accordance with an agreement between the Commonwealth and the State, Territory or body". (For the COVID-19 pandemic, an intergovernmental agreement was signed.) Invocations 2020 invocation of state of emergency On 18 March 2020, a human biosecurity emergency was declared in Australia owing to the risks to human health posed by the COVID-19 pandemic in Australia, after the National Security Committee had met the previous day. The Act specifies that the Governor-General may declare such an emergency if the Health Minister (at the time, Greg Hunt; later Mark Butler) is satisfied that "a listed human disease is posing a severe and immediate threat, or is causing harm, to human health on a nationally significant scale". 
This gives the Minister sweeping powers, including imposing restrictions on or preventing the movement of people and goods between specified places, and ordering evacuations. The Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) Declaration 2020 was declared by the Governor-General, David Hurley, under Section 475 of the Act. The Act allows such a declaration to last only three months, but the period may be extended for a further three months if the Governor-General is satisfied that this is required. The Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Emergency Requirements) Determination 2020, made by the Health Minister on the same day under Section 477 of the Act, banned international cruise ships from entering Australian ports before 15 April 2020. On 25 March 2020, the Health Minister made a second determination, the Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Overseas Travel Ban Emergency Requirements) Determination 2020, which "forbids Australian citizens and permanent residents from leaving Australian territory by air or sea as a passenger". On 25 April 2020, the Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Emergency Requirements—Public Health Contact Information) Determination 2020, made under subsection 477(1) of the Act, was signed into law by the Health Minister. The purpose of the new legislation is "to make contact tracing faster and more effective by encouraging public acceptance and uptake of COVIDSafe", COVIDSafe being the new mobile app created for the purpose. The app records contact between any two people who both have it on their phones when they come within a certain distance of each other. The encrypted data would remain on the phone for 21 days, provided no contact with a person with confirmed COVID-19 was logged. 
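The retention behaviour described for COVIDSafe (contacts logged, encrypted, and kept for 21 days) can be sketched as a simple pruning rule. This is a hypothetical illustration; the record layout and function names below are not the app's actual implementation.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)  # retention window described for COVIDSafe

def prune_contacts(contacts, now):
    """Drop contact records older than the 21-day retention window.

    `contacts` is a list of (timestamp, encrypted_id) tuples; the field
    layout is illustrative, not the app's actual data model.
    """
    return [(ts, enc_id) for ts, enc_id in contacts if now - ts <= RETENTION]

now = datetime(2020, 5, 1)
log = [
    (datetime(2020, 4, 25), "enc-a"),  # 6 days old: kept
    (datetime(2020, 4, 1), "enc-b"),   # 30 days old: pruned
]
kept = prune_contacts(log, now)
print([e for _, e in kept])  # ['enc-a']
```

The real app additionally keeps records if a confirmed case was encountered, which a fuller sketch would model with a flag on each record.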
Investigation into breach by Ruby Princess On 5 April 2020, New South Wales Police launched a criminal investigation into whether the operator of Ruby Princess, Carnival Australia, broke the Biosecurity Act 2015 (Cwth), and possibly NSW state laws, by deliberately concealing COVID-19 cases. 2021 India flight ban Late on 30 April 2021, having some days earlier imposed a ban on all flights from India, which was experiencing a dramatic rise in cases in a second wave of the COVID-19 pandemic, the Federal Government announced a ban on Australian citizens and permanent residents in India entering Australia via any route. These measures came into effect on 3 May and would remain in force until 15 May. Prime Minister Scott Morrison announced that anyone, including Australian citizens and permanent residents, caught returning from India to Australia via any route would be subject to punishment under the Biosecurity Act, with penalties for breaches including up to five years' jail, a fine, or both. Foreign Minister Marise Payne reported that 57% of positive cases in quarantine had come from India in April, compared with 10% in March. The move was branded as racist by some critics, and a potential breach of international human rights law. On 3 May 2021 the government announced that it would review this decision earlier than originally intended, possibly within the same week. There were about 9,000 Australian citizens in India, of whom 650 were considered vulnerable. On 7 May 2021 Morrison announced that the flight ban would end on 15 May and that repatriation flights to the Northern Territory would start on this date. A legal challenge to the ban was due to be heard in the Federal Court in Sydney on 12 May 2021; part of the challenge was rejected on 10 May. 
See also References External links Acts of the Parliament of Australia 2015 in Australian law Biosecurity Quarantine Environment of Australia COVID-19 pandemic in Australia Pest legislation Environmental history of Australia
Biosecurity Act 2015
[ "Biology", "Environmental_science" ]
2,266
[ "Pest legislation", "Pests (organism)", "Biosecurity", "Toxicology" ]
63,481,932
https://en.wikipedia.org/wiki/Phenserine
{{Infobox drug | drug_name = Phenserine | type = | IUPAC_name = (3aS,8aR)-1,3a,8-Trimethyl-1H,2H,3H,3aH,8H,8aH-pyrrolo[2,3-b]indol-5-yl N-phenylcarbamate | image = Phenserine.svg | width = | alt = | caption = | pronounce = | tradename = | Drugs.com = | MedlinePlus = | licence_EU = | licence_US = | DailyMedID = | pregnancy_AU = | pregnancy_AU_comment = | pregnancy_US = | pregnancy_US_comment = | pregnancy_category = | legal_AU = | legal_AU_comment = | legal_BR = | legal_BR_comment = | legal_CA = | legal_CA_comment = | legal_DE = | legal_DE_comment = | legal_NZ = | legal_NZ_comment = | legal_UK = | legal_UK_comment = | legal_US = | legal_UN = | legal_US_comment = | legal_UN_comment = | legal_status = Investigational | dependency_liability = | addiction_liability = | routes_of_administration = By mouth | bioavailability = ~100% | protein_bound = | metabolism = liver | metabolites = (−)-N1-norphenserine, (−)-N8-norphenserine, (−)-N1,N8-bisnorphenserine | onset = | elimination_half-life = 12.6 minutes | duration_of_action = 8.25 hours | excretion = renal or hepatic clearance | CAS_number = 101246-66-6 | CAS_supplemental = | ATCvet = | ATC_prefix = | ATC_suffix = | ATC_supplemental = | PubChem = 192706 | PubChemSubstance = | IUPHAR_ligand = | DrugBank = DB04892 | ChemSpiderID = 167225 | UNII = SUE285UG3S | KEGG = | ChEBI = | ChEMBL = 74926 | NIAID_ChemDB = | PDB_ligand = | synonyms = (-)-Phenserine, (-)-Eseroline phenylcarbamate, N-phenylcarbamoyleseroline, N''-phenylcarbamoyl eseroline | chemical_formula = | C = 20 | H = 23 | N = 3 | O = 2 | charge = | SMILES = [H][C@]12N(C)CC[C@@]1(C)C1=C(C=CC(OC(=O)NC3=CC=CC=C3)=C1)N2C | StdInChI = InChI=1S/C20H23N3O2/c1-20-11-12-22(2)18(20)23(3)17-10-9-15(13-16(17)20)25-19(24)21-14-7-5-4-6-8-14/h4-10,13,18H,11-12H2,1-3H3,(H,21,24)/t18-,20+/m1/s1 | StdInChI_comment = | StdInChIKey = PBHFNBQPZCRWQP-QUCCMNQESA-N | density = | density_notes = | melting_point = 150 | melting_high = | melting_notes = | solubility = | specific_rotation = | 
INN = | licence_CA = | class = | Jmol = | sol_units = }}Phenserine (also known as (-)-phenserine or (-)-eseroline phenylcarbamate) is a synthetic drug which has been investigated as a medication to treat Alzheimer's disease (AD), as it exhibits neuroprotective and neurotrophic effects. Research on phenserine, initially patented by the National Institute on Aging (NIA), has been suspended since phase III clinical trials in 2006, conducted shortly after the drug licenses were issued. The abandonment of the clinical trials meant the drug was never approved by the FDA. A retrospective meta-analysis of the phenserine research proposed that its clinical failure arose from methodological issues that were not fully resolved before proceeding to the subsequent clinical phases. Phenserine was introduced as an inhibitor of acetylcholinesterase (AChE) and demonstrated significant alleviation of numerous neuropathological manifestations, improving cognitive functions of the brain. The ameliorative mechanism involves both cholinergic and non-cholinergic pathways. Clinically translatable doses of phenserine show relatively high tolerability and rarely produce severe adverse effects. With overdosing of the drug (20 mg/kg), a few cholinergic adverse effects were reported, including nausea and tremor, which are not life-threatening. An administration form of phenserine, (-)-phenserine tartrate, which exhibits high bioavailability and solubility, is taken by mouth. Phenserine and its metabolites readily access the brain, with high permeability across the blood-brain barrier, and act for a long duration despite a relatively short half-life. Posiphen ((+)-phenserine), the enantiomer of (-)-phenserine, is also a potential drug, by itself or synergistically with (-)-phenserine, to mitigate the progression of neurological diseases, mainly Alzheimer's disease. 
History Phenserine was first investigated as a substitute for physostigmine, which had failed to satisfy the clinical standards for treating Alzheimer's disease, and was developed into a more suitable remedy. It was initially invented by Nigel Greig, whose laboratory is affiliated with the National Institute on Aging (NIA) under the US National Institutes of Health (NIH), which subsequently released a patent on phenserine as an AChE inhibitor in 1995. During phase I in 2000, a supplementary patent regarding its inhibitory mechanism upon β-amyloid precursor protein (APP) synthesis was added. Following 6 years of phase I and II trials, Axonyx Corporation licensed phenserine to Daewoong Pharmaceutical and QR Pharma (which later adopted the name Annovis Bio) in 2006, which then planned to undertake a phase III trial and market the drug. However, clinical deficits were discovered, most notably in a double-blinded, placebo-controlled, 7-month phase III trial conducted on 377 mild to moderate Alzheimer's disease patients across Austria, Croatia, Spain, and the UK, and no significant drug efficacy was exhibited. This led to the abandonment of phenserine's development despite its market potential. Approval status Phenserine failed in phase III of its Alzheimer's disease clinical trials, and there has been no indication of trial resumption since 2006. Methodological problems in the trials are frequently cited as the principal reason for the failure to gain FDA approval, as well as for the scarcity of Alzheimer's disease drugs generally. The underlying complications were generated by inordinate variance in clinical outcomes and poor determination of optimal dosing. Intra- and inter-site variations arose from a lack of baseline evaluation and longitudinal assessment of placebo groups. This produced inadequate statistical power and, thus, insufficient statistical significance. 
In light of dose determination, the criteria for human subject enrolment were not meticulously established before dosing, and the effective dose range was not completely established in phases I and II, yet the trials still proceeded to phase III. Compared to other Alzheimer's disease drugs, such as donepezil, tacrine and metrifonate, the clinical development of phenserine involved comparably high compliance in outcome measures and protocol regimentation across methods and clinical phase transitions. Pharmacological benefits Phenserine was invented as an Alzheimer's disease-oriented treatment in particular, but has also been shown to have alleviative effects on other neurological disorders: Parkinson's disease, dementia and amyotrophic lateral sclerosis. Administration of phenserine within a short delay of disease onset was shown to diminish the severity of neurodegeneration and the accompanying cognitive impairments. Post-injury intervention at clinically translatable doses has been shown to significantly mitigate various neurodegenerative manifestations, preventing chronic deterioration in cognitive functions. The collective neuropathological cascades in the brain are either naturally occurring or provoked by mild or moderate traumatic brain injuries, such as concussion, diffuse axonal injury, and ischemic and hypoxic brain injuries. Traumatic brain injuries have been substantially examined and induced to form test groups in phenserine research. They are highly correlated with the onset of neurodegenerative disorders, precipitating cognitive and behavioral impairments. Phenserine was shown to mitigate the multiple cascades of neuropathology triggered by traumatic brain injuries via both cholinergic and non-cholinergic mechanisms. Pharmacodynamics (mechanism of action) Cholinergic mechanism Phenserine is an inhibitor that acts selectively on the acetylcholinesterase (AChE) enzyme. 
It prevents acetylcholine from being hydrolyzed by the enzyme, enabling the neurotransmitter to be retained longer at synaptic clefts. This mechanism promotes activation of cholinergic neuronal circuits and thereby enhances memory and cognition in Alzheimer's subjects. Non-cholinergic mechanism In clinical trials, phenserine was demonstrated to alleviate neurodegeneration, repressing programmed neuronal cell death and enhancing stem cell survival and differentiation. The alleviation is achieved by an increase in levels of the neurotrophic factor BDNF and the anti-apoptotic protein Bcl-2, which subsequently reduces the expression of the pro-apoptotic factors GFAP and activated caspase 3. The treatment also suppresses the levels of the Alzheimer's disease-inducing proteins β-amyloid precursor protein (APP) and Aβ peptide. The drug's interaction with the APP gene mediates the expression of both APP and its product, Aβ protein. This regulating action reverses glial cell-favored differentiation and increases neuronal cell output. Phenserine also attenuates neuroinflammation, which involves excessive activation of microglial cells to remove cellular wastes from injury lesions. The accumulation of activated glia near the site of brain injury is unnecessarily prolonged, stimulating oxidative stress. The inflammatory response was significantly weakened with the introduction of phenserine, as evidenced by reduced expression of the pro-inflammatory markers IBA 1 and TNF-α. Phenserine also restores the integrity of the blood-brain barrier, otherwise disrupted by the degrading enzyme MMP-9 with resulting neuroinflammation. Alpha-synucleins, the toxic aggregates resulting from protein misfolding, are widely observed in Parkinson's disease. The drug was shown to neutralize the toxicity of alpha-synucleins via regulation of protein translation, alleviating the symptoms of the disease. 
Dosage Clinically, the translatable dose of phenserine was primarily employed within a range of 1 to 5 mg/kg, with the unit calibration taking account of body surface area. This standard dose range was generally well tolerated in long-term trials in neuronal cell cultures, animal models and humans. Dosing increased to 10 mg/kg is still tolerated without instigating any physiological complications. Administration of phenserine up to a maximum of 15 mg/kg has been reported in rats. Overdose Doses of 20 mg/kg and above are considered overdoses, at which cholinergic adverse effects ensue. The symptoms of overdosing include: Nausea Vomiting Dizziness Tremors Bradycardia Mild symptoms were reported in clinical trials, but no other serious adverse effects were observed. Tremor was also noted as one of the dose-limiting effects. Chemistry Pharmacokinetics Oral bioavailability of phenserine was shown to be very high, up to 100%. Its bioavailability was tested by computing the drug's delivery rate across the rat blood-brain barrier. The drug concentration reached in the brain is 10-fold higher than plasma levels, verifying phenserine as a brain-permeable AChE inhibitor. Despite its short plasma half-life of 8 to 12 minutes, phenserine exhibits a long duration of action, with a half-life of 8.25 hours over which the inhibitory effect on AChE fades. With the administration of phenserine, 70% or higher AChE inhibition in the blood was observed in preclinical studies, and with systemic phenserine administration, the extracellular ACh level in the striatum increased up to three times. Through PET studies and microdialysis, the compound's brain permeability was further elucidated. 
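The contrast between the minutes-scale plasma half-life and the 8.25-hour duration of action can be made concrete with first-order decay arithmetic. This is an illustrative sketch only; the 10-minute figure below is a round value inside the 8 to 12 minute range cited above.

```python
def fraction_remaining(t_min, half_life_min):
    """Fraction of drug left in plasma after t_min minutes of first-order decay."""
    return 0.5 ** (t_min / half_life_min)

# With an assumed ~10-minute plasma half-life, under 0.2% of a dose
# remains in plasma after 90 minutes (nine half-lives) . . .
f90 = fraction_remaining(90, 10)
print(f"{f90:.4f}")  # 0.0020

# . . . yet the pharmacodynamic effect (AChE inhibition) is reported to
# persist for 8.25 hours, far outlasting plasma exposure.
```

The point of the arithmetic is that the drug's action is not limited by its plasma concentration: the inhibitory effect decays on a much slower clock than elimination.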
Enantiomer (posiphen) (-)-Phenserine, generally referred to simply as phenserine, is the active enantiomer for the inhibition of acetylcholinesterase (AChE), while posiphen, the other enantiomer, has been demonstrated to be a comparatively poor AChE inhibitor. Several companies have been involved in posiphen research. In 2005, an Investigational New Drug (IND) application for posiphen was filed with the FDA by TorreyPines Therapeutics, while its phase I trial on animal models had been implemented by Axonyx. Axonyx and TorreyPines Therapeutics officially signed a merger agreement in 2006 and licensed the drug to QR Pharma in 2008. Clinical trials of posiphen against Alzheimer's disease are still underway. Interactions Currently, 282 drugs have been reported to interact with phenserine. Current research A 5-year double-blinded, donepezil-controlled clinical study to validate Alzheimer's disease course modification using phenserine has been underway since 2018, involving 200 patients in the UK and US. The study aims to reduce variation in AD therapeutic response between patients via optimal dose formulation. References Experimental drugs for Alzheimer's disease Neuroprotective agents Neurotrophic factors
Phenserine
[ "Chemistry" ]
3,150
[ "Neurotrophic factors", "Neurochemistry", "Signal transduction" ]
63,483,604
https://en.wikipedia.org/wiki/NGC%20534
NGC 534 is a lenticular galaxy located in the constellation of Sculptor about 260 million light years from the Milky Way. It was discovered by the British astronomer John Herschel in 1835. See also List of NGC objects (1–1000) References 0534 Lenticular galaxies Sculptor (constellation) 005215
NGC 534
[ "Astronomy" ]
63
[ "Constellations", "Sculptor (constellation)" ]
63,483,648
https://en.wikipedia.org/wiki/NGC%20535
NGC 535 is a lenticular galaxy in the constellation Cetus. It is estimated to be 222 million light years from the Milky Way and has a diameter of approximately 65,000 light years. The supernova SN 1988ad was observed near these coordinates. NGC 535 was discovered on October 31, 1864, by astronomer Heinrich Ludwig d'Arrest. See also List of NGC objects (1–1000) References External links Lenticular galaxies Cetus 0535 005282
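The quoted diameter and distance imply an angular size on the sky via the small-angle approximation. The value below is derived purely from the article's figures, not an independently catalogued measurement.

```python
import math

DISTANCE_LY = 222e6   # distance from the article, light-years
DIAMETER_LY = 65e3    # diameter from the article, light-years

# Small-angle approximation: theta (radians) = physical size / distance
theta_rad = DIAMETER_LY / DISTANCE_LY
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"{theta_arcsec:.0f} arcsec")  # roughly an arcminute on the sky
```

The same relation works in reverse: a measured angular size plus a redshift-based distance yields the physical diameter quoted for galaxies like this one.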
NGC 535
[ "Astronomy" ]
98
[ "Cetus", "Constellations" ]
63,484,108
https://en.wikipedia.org/wiki/Mixed%20mating%20systems
A mixed mating system (in plants), also known as "variable inbreeding", is a characteristic of many hermaphroditic seed plants in which more than one means of mating is used. Mixed mating usually refers to the production of a mixture of self-fertilized (selfed) and outbred (outcrossed) seeds. Plant mating systems influence the distribution of genetic variation within and among populations by affecting the propensity of individuals to self-fertilize, cross-fertilize, or reproduce asexually. Mixed mating systems are generally characterized by the frequency of selfing vs. outcrossing, but may include the production of asexual seeds through agamospermy. The trade-offs for each strategy depend on ecological conditions, pollinator abundance, and herbivory and parasite load. Mating systems are not permanent within species; they can vary with environmental factors, and through domestication when plants are bred for commercial agriculture. Occurrence Although practiced by a minority of species, mixed mating systems are widespread. Examples of mixed mating systems in nature are found in jewelweeds, violets, morning glories, and bamboos, which are considered invasive in many regions. Mixed mating is common in many invasive species. Part of their ability to spread vigorously is sometimes attributed to changes in mating strategies, potentially caused by varying environmental factors, including pollinator service. Common commercial crops, including the peanut plant, avocados, sorghum, and cotton, also exhibit mixed mating systems. Evolutionary models of mixed mating Historically, Charles Darwin's experiments on selfing and out-crossing in many plant species caused him to question any adaptive value of self-fertilization. Early evolutionary models assumed inbreeding depression did not change, which increased the likelihood of stable mixed mating. 
Ronald Fisher (1941) presented the idea that selfing plants have a genetic transmission advantage over outcrossing plants, because selfed offspring inherit two copies of the seed parent's genome instead of just one. His models solidified the idea of automatic selection for increased selfing. David Lloyd (1979) developed phenotypic models showing that the conditions for automatic selection for selfing via pollinators were different from those for autonomous selfing, and predictive of stable mixed mating systems. Lande & Schemske (1985) introduced the idea that inbreeding depression is not constant but evolves through purging of genetic load due to selection associated with selfing. They predicted that outcrossing as a mating strategy would resist increases in selfing frequency due to inbreeding depression, but that once inbreeding depression was reduced, selection due to the genetic transmission advantage would result in the production of only selfed seeds. Their model predicted that most plants would be either outcrossing or selfing. The observation of large numbers of species with mixed mating contradicted this idea and motivated others to develop models to explain the prevalence of mixed mating systems, including ideas such as selective interference and pollen discounting. Mechanisms maintaining mixed mating systems Mass Action Model – Holsinger's "mass action" model assumes that the proportion of selfed and out-crossed seeds produced is a function of rates of pollen transfer among plants and plant density. This model predicts that mixed mating can be a stable strategy when plants receive mixtures of self and out-cross pollen. Selective Interference – The genetic process of selective interference may prevent purging of genetic load and counterbalance the automatic selection for selfing. 
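Fisher's transmission advantage and the Lande & Schemske inbreeding-depression threshold described above can be sketched with a toy gene-counting model. This is an illustrative simplification, not the cited models verbatim; the parameter names are my own.

```python
def selfing_spread(delta, pollen_discount=0.0):
    """Does a selfing variant spread under a simple transmission accounting?

    A selfer passes 2 genome copies per selfed seed, discounted by
    inbreeding depression `delta`, and still exports (1 - pollen_discount)
    copies via outcross pollen.  An outcrosser passes 1 copy via its own
    seed and 1 via exported pollen, for a total of 2.
    """
    selfer = 2 * (1 - delta) + (1 - pollen_discount)
    outcrosser = 2.0
    return selfer > outcrosser

print(selfing_spread(delta=0.3))  # True: mild inbreeding depression, selfing spreads
print(selfing_spread(delta=0.7))  # False: strong inbreeding depression resists selfing
```

With no pollen discounting, the tipping point falls at delta = 0.5, recovering the classic Lande & Schemske prediction that populations should evolve toward either pure outcrossing or pure selfing depending on which side of that threshold they sit.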
Cryptic Self-Incompatibility – A mechanism of reproductive assurance in which pollen competition favors out-cross pollen, resulting in complete out-crossing when pollinators are abundant but allowing self-fertilization when pollen is limited. Delayed Selfing – A mechanism providing reproductive assurance at a lower cost than autonomous selfing, in which the anthers or stigma change position as the flower ages, bringing them into close proximity and promoting self-pollination. Reproductive Compensation – The production of more ovules than can mature into seeds, together with the production of large numbers of seeds over the lifespan of a perennial plant, can contribute to the evolution of mixed mating systems. Rare selfed seedlings with higher fitness may decrease the fitness difference between selfed and out-crossed offspring. Cleistogamy – Most plants producing cleistogamous (closed, selfing) flowers also produce chasmogamous (open, outcrossing) flowers, and consequently will typically produce mixtures of selfed and out-crossed seeds. Components of the maintenance of mixed mating systems also include self-compatibility, especially autonomous self-pollination, which can become particularly beneficial in human-degraded habitats with fewer pollinators and increased pollen limitation. References Plant reproduction
Mixed mating systems
[ "Biology" ]
958
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
63,485,404
https://en.wikipedia.org/wiki/UZ%20Fornacis
UZ Fornacis (abbreviated as UZ For) is a binary star in the constellation of Fornax. It appears exceedingly faint, with a maximum apparent magnitude of 17.0. Its distance, as measured by Gaia using the parallax method, is about 780 light-years (240 parsecs). The system consists of two stars, a white dwarf and a red dwarf, in close orbit around each other. It is hypothesized that there are also two planets orbiting the central stars. Nomenclature The system is most commonly referred to as UZ Fornacis, which is its variable star designation. The General Catalogue of Variable Stars describes it as "E+XM", meaning it is an eclipsing binary system consisting of a low-mass star with an X-ray-emitting companion. In the past the system has also been referred to using the designation EXO 033319–2554.2, which refers to its coordinates on the celestial sphere, as well as the EXOSAT satellite that detected it. Overview UZ Fornacis is a cataclysmic variable. The two stars, a white dwarf and a red dwarf, orbit each other every 127 minutes. The stars' orbit is inclined about 81 degrees away from the plane of the sky, so the system eclipses. The eclipsing nature of this system was first discovered in 1987. At the time, it was the 14th AM Herculis star known and only the third such system known to eclipse. In systems like UZ Fornacis, matter is siphoned off the red dwarf and towards the white dwarf. However, unlike typical cataclysmic variables, where this matter forms an accretion disk, the white dwarf here has a strong magnetic field, which channels the matter into loops that eventually accrete onto the white dwarf. When this happens, the matter emits cyclotron radiation and soft X-rays. Due to the activity of the red dwarf, sometimes more mass gets transferred and X-ray flare-ups occur. Matter flows onto a spot on the white dwarf at rates of up to 1 gram per square centimeter per second. The white dwarf's magnetism also locks its rotation so that it matches the orbit. 
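The ~780-light-year distance quoted above follows from the standard inverse-parallax relation. A minimal sketch; note the parallax value here is back-calculated from the article's 240 pc figure as an assumption, not taken from the Gaia catalogue directly.

```python
LY_PER_PC = 3.2616  # light-years per parsec

def distance_pc(parallax_arcsec):
    """Distance in parsecs from an annual parallax in arcseconds (d = 1/p)."""
    return 1.0 / parallax_arcsec

# Assumed parallax of ~4.17 mas, implied by the quoted ~240 pc distance.
parallax_mas = 4.17
d_pc = distance_pc(parallax_mas / 1000.0)
d_ly = d_pc * LY_PER_PC
print(f"{d_pc:.0f} pc, {d_ly:.0f} ly")  # consistent with the quoted figures
```

The simple reciprocal is adequate here; for faint, distant stars with large fractional parallax errors, published distances usually come from Bayesian estimators rather than 1/p.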
Variability The brightness of UZ Fornacis varies rapidly and somewhat unpredictably. The two stars in the system eclipse each other regularly. The eclipses last for about , with the initial drop in brightness and return to maximum brightness each taking about . The eclipse light curves do not all have the same shape, some being more or less flat-bottomed while others show a smooth variation in brightness, and some are asymmetrical. The times of the eclipses vary, possibly due to substellar companions. Outside of the eclipses, the brightness varies during the orbit depending on the visibility of an accretion spot on the white dwarf. The brightness also varies over a period of years due to differences in the rate of accretion onto the white dwarf from the red dwarf. This can generally be seen as a bright state and a faint state, although the magnitudes of each state vary. For example, UZ Fornacis has been observed between magnitudes 15.9 and 16.75 at different times in the bright state. The system also shows rapid "flickering" on a timescale of minutes, common in cataclysmic variable systems. Possible planetary system Investigations in 2010 and 2011 found that the orbital period of the two stars in UZ Fornacis varied cyclically. Researchers attributed this to two possible gas giant sized planets around the two stars, perturbing their orbits and causing the orbital period to vary. As of 2019, there is not enough information to explain all of the period variations, since the planets would have to be in eccentric orbits to fit the data, and that would cause the orbits to be dynamically unstable. It is possible that there are even more planets causing additional perturbation, or some physical effect such as the Applegate mechanism is responsible for the eclipse timing variations. References Eclipsing binaries White dwarfs M-type main-sequence stars Fornax Polars (cataclysmic variable stars) 2 Hypothetical planetary systems Fornacis, UZ
UZ Fornacis
[ "Astronomy" ]
878
[ "Fornax", "Constellations" ]
63,486,013
https://en.wikipedia.org/wiki/Snout%E2%80%93vent%20length
Snout–vent length (SVL) is a morphometric measurement taken in herpetology from the tip of the snout to the most posterior opening of the cloacal slit (vent). It is the most common measurement taken in herpetology, being used for all amphibians, lepidosaurs, and crocodilians (for turtles, carapace length (CL) and plastral length (PL) are used instead). The SVL differs depending on whether the animal is struggling or relaxed (if alive), or various other factors if it is a preserved specimen. For fossils, an osteological correlate such as precaudal length must be used. When combined with weight and body condition, SVL can help deduce age and sex. Advantages Because tails are often missing or absent, especially in juveniles, SVL is seen as more invariant than total length. Even in the case of crocodiles, tail tips may be missing. Methods The measurements may be taken with dial calipers or digital calipers. Various devices are used to position the animal while the measurement is being taken, such as a snake tube, "Mander Masher", or a "Salamander Stick". References Further reading Herpetology Measurement
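One common way to combine SVL with mass, as the last sentence of the article suggests, is a residual body-condition index: regress log(mass) on log(SVL) across a sample and score each individual by its residual. A minimal sketch; the function name and sample values are illustrative, not a standard from the herpetological literature:

```python
import math

def condition_residuals(svl_mm, mass_g):
    """Residuals of an ordinary least-squares fit of log(mass) on log(SVL).

    A positive residual means the animal is heavier than expected
    for its length (better body condition).
    """
    x = [math.log(s) for s in svl_mm]
    y = [math.log(m) for m in mass_g]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# Illustrative measurements for five lizards (SVL in mm, mass in g)
res = condition_residuals([62, 70, 75, 81, 90], [4.1, 5.8, 6.9, 8.5, 11.2])
```

Because the fit includes an intercept, the residuals sum to zero by construction; individuals are compared by their sign and magnitude.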
Snout–vent length
[ "Physics", "Mathematics" ]
264
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
63,486,429
https://en.wikipedia.org/wiki/Behavioral%20Medicine%20%28journal%29
Behavioral Medicine is an interdisciplinary medical journal published by Taylor & Francis, addressing the interactions of the behavioral sciences with other fields of medicine. Before Spring 1988 (Vol. 14, No. 1), the journal's previous title was Journal of Human Stress (ISSN 2374-9741), which was published from March 1975 (Vol. 1) through Winter 1987 (Vol. 13). As of 2020, the editor is Perry N. Halkitis (Rutgers University). The journal is indexed in: Applied Social Science Index and Abstracts; Behavioral Medicine Abstracts; BIOMED; Current Contents/Clinical Medicine; Current Contents/Social & Behavioral Sciences; EMBASE/Excerpta Medica; Family Resources Database; Health Instrument File; Index Medicus / MEDLINE; International Bibliography of Book Reviews (IBR); International Bibliography of Periodical Literature; National AIDS Information Clearinghouse; NIOSHTIC; Psychological Abstracts / PsycINFO; Research Alert; Scisearch; and Social Sciences Citation Index / Journal Citation Reports. References Quarterly journals Academic journals established in 1975 Taylor & Francis academic journals English-language journals Behavioral medicine journals
Behavioral Medicine (journal)
[ "Biology" ]
227
[ "Behavioural sciences", "Behavior", "Behavioral medicine journals" ]
67,784,602
https://en.wikipedia.org/wiki/French%20Foundation%20for%20the%20Study%20of%20Human%20Problems
The French Foundation for the Study of Human Problems (), often referred to as the Alexis Carrel Foundation or the Carrel Foundation, was a eugenics organization created by Nobel laureate in Medicine Alexis Carrel under the Vichy regime in World War II France. Alexis Carrel spent most of his career at the Rockefeller Institute in New York and returned to France just before the outbreak of World War II. Carrel, who had worked with Philippe Pétain during the First World War, accepted an offer to establish and lead a foundation for the study of human problems. Its ambitious mission was to give an account of the "human element associating the soul and the body". Charged with "the comprehensive study of the most appropriate measures needed to safeguard, improve, and advance the French people in all their activities," the Foundation was created by decree of the Vichy regime in 1941, and Carrel was appointed as "regent". The Foundation initiated studies on demographics (Robert Gessain, Paul Vincent, Jean Bourgeois), nutrition (Jean Sutter), and housing (Jean Merlet), as well as the first opinion polls (Jean Stoetzel). The foundation employed 300 researchers from the summer of 1942 to the end of the autumn of 1944. Several of the Foundation's initiatives had lasting effects. It promoted the Act of 16 December 1942 which established the prenuptial certificate, required before marriage, which sought to ensure the good health of the spouses, in particular with regard to sexually transmitted diseases (STDs) and "life hygiene". The institute also established the , which could be used to record students' grades in French secondary schools, and thus classify and select them according to scholastic performance. Carrel was suspended after the liberation of Paris in August 1944 and died soon thereafter, thus avoiding the inevitable purge. 
The Foundation itself was "purged", but resurfaced soon after the war as the French Institute for Demographic Studies (INED). Most members of Carrel's team moved to INED, led by demographer Alfred Sauvy, who coined the expression "Third World". Others joined Robert Debré's , which later became INSERM. See also Collaboration with the Axis Powers during World War II Human enhancement International Eugenics Conference Nazi eugenics Philippe Pétain Révolution nationale References Notes Citations Works cited 1941 in France 1942 in France 1943 in France 1944 documents Bioethics Eugenics organizations France in World War II German occupation of France during World War II Government of France Pseudo-scholarship Technological utopianism Vichy France
French Foundation for the Study of Human Problems
[ "Technology" ]
521
[ "Bioethics", "Ethics of science and technology" ]
67,785,300
https://en.wikipedia.org/wiki/Mucoromycota
Mucoromycota is a division within the kingdom Fungi. It includes a diverse group of molds, including the common bread molds Mucor and Rhizopus. It is a sister phylum to Dikarya. Informally known as zygomycetes I, Mucoromycota includes Mucoromycotina, Mortierellomycotina, and Glomeromycotina, and consists mainly of mycorrhizal fungi, root endophytes, and plant decomposers. Mucoromycotina and Glomeromycotina can form mycorrhiza-like relationships with nonvascular plants. Mucoromycota contain multiple mycorrhizal lineages, root endophytes, and decomposers of plant-based carbon sources. Mucoromycotina species known as mycoparasites, or as putative parasites of arthropods, are likely saprobes. When Mucoromycota infect animals, they are seen as opportunistic pathogens. Mucoromycotina are fast-growing fungi and early colonizers of carbon-rich substrates. Mortierellomycotina are common soil fungi that occur as root endophytes of woody plants and are isolated as saprobes. Glomeromycotina live in soil, forming a network of hyphae, but depend on organic carbon from host plants. In exchange, the arbuscular mycorrhizal fungi provide nutrients to the plant. Reproduction Known reproductive states of Mucoromycota are zygospore production and asexual reproduction. Zygospores can have decorations on their surface and range up to several millimeters in diameter. Asexual reproduction typically involves the production of sporangiospores or chlamydospores. Multicellular sporocarps are present within Mucoromycotina and Mortierellomycotina, and as aggregations of spore-producing structures in species of Glomeromycotina. As shown in Mucorales, sexual reproduction is under the control of mating-type genes, sexP and sexM, which regulate the production of pheromones required for the maturation of hyphae into gametangia. The sexP gene is expressed during vegetative growth and mating, while the sexM gene is expressed during mating. 
Sexual reproduction in Glomeromycotina is unknown, although its occurrence is inferred from genomic studies. However, specialized hyphae produce chlamydospore-like spores asexually; these may be borne at terminal (apical) or lateral positions on the hyphae, or intercalary (formed within the hypha, between sub-apical cells). Species of Glomeromycotina produce coenocytic hyphae that can harbor bacterial endosymbionts. Mortierellomycotina reproduce asexually by sporangia that either lack or have a reduced columella, which supports the sporangium. Species of Mortierellomycotina generally form only microscopic colonies, but some make multicellular sporocarps. In Mucoromycotina, sexual reproduction is by prototypical zygospore formation, and asexual reproduction involves the large-scale production of sporangia. Morphology Mucoromycotina contain discoidal hemispherical spindle pole bodies. Although spindle pole bodies function as microtubule-organizing centers, they lack remnants of the centrioles' characteristic 9+2 microtubule arrangement. Species of Mucoromycotina and Mortierellomycotina produce large-diameter, coenocytic hyphae. Glomeromycotina also form coenocytic hyphae, with highly branched, narrow hyphal arbuscules in host cells. When septations occur in Mucoromycota, they are formed at the base of reproductive structures. Production of lipids, polyphosphates, and carotenoids Mucoromycota can metabolize many substrates drawn from various nitrogen and phosphorus sources to produce lipids, chitin, polyphosphates, and carotenoids. They have been found to co-produce metabolites, such as polyphosphates and lipids, in a single fermentation process. The overproduction of chitin by Mucoromycota fungi can be accomplished by limiting inorganic phosphorus. Mucoromycota are capable of accumulating high amounts of lipids in their cell biomass, which allows the fungi to produce polyunsaturated fatty acids and carotenoids. 
Crude total lipid extracts from these fungi have been found to show antimicrobial activity. The high lipid production of Mucoromycota has potential for use in biodiesel production. Gallery See also Mucor circinelloides References External links Zygomycota Fungus phyla Fungi by classification
Mucoromycota
[ "Biology" ]
1,027
[ "Fungi", "Eukaryotes by classification", "Fungi by classification" ]
67,785,692
https://en.wikipedia.org/wiki/Nana%20Astar%20Deviluke
is a fictional character in the manga series To Love Ru, created by Saki Hasemi and Kentaro Yabuki. In the series, Nana is an alien princess from the distant planet Deviluke who possesses the unique ability to communicate with different animal species. She is the younger sister of Lala Satalin Deviluke and the older twin sister of Momo Belia Deviluke. Of the entire female cast of To Love Ru, Nana appears to be the least attracted to the protagonist Rito Yuki, although she eventually develops romantic feelings for him as the series progresses. Appearances Nana Astar Deviluke has the appearance of a teenage girl with purple eyes and long pink hair, which she usually wears in pigtails on the sides of her head. However, she sometimes loosens her braids, such as after a shower or when she is wearing casual clothes. Like all members of the Devilukean alien species, Nana has a long black tail that, like her sisters', ends in a heart-shaped tip. She also has a sharp fang on the left side of her upper teeth. Regarding her clothing style, Nana loves to dress in a gothic Lolita style, with her outfits generally consisting of black and red colours. Nana's height is 151 cm, her weight is 43 kg, and her three sizes are B68-W54-H77. In To Love Ru As the daughter of King Gid and Queen Sephie of Deviluke, a planet far distant from Earth, Nana holds the title of "Second Princess of Deviluke" as a member of Deviluke's royal family, being the younger sister of Lala Satalin Deviluke and the older twin sister of Momo Belia Deviluke. 
At some point before the series' events, Nana travels across the galaxy and meets many animal species from different alien worlds, befriending all of them thanks to her ability to communicate with animals, and collecting them into an alien device called the D-Dial, which has the appearance of a mobile phone but is able to summon animals from a virtual space within it. Nana, alongside Momo, makes her first appearance in the 97th chapter of the To Love Ru manga, in which the twins arrive on Earth and transport essentially the entire cast of the manga into an RPG world inside Trouble Quest, a virtual reality game programmed by Nana and Momo, which they use to determine whether Rito Yuki is worthy of being engaged to Lala. In the end, after everyone exits Trouble Quest, the twins apologize to Lala for all the problems they caused before returning to Deviluke. A few chapters later, Nana and Momo flee from Deviluke in an attempt to avoid their studies. The twins escape to Earth, where they teleport naked into Rito's bathroom at the exact moment he is there, and Nana consequently beats him up for seeing her and her sister undressed. Zastin then shows up and, instructed by Nana and Momo's father to bring them home, chases them to a bridge where they use their D-Dials to summon various dangerous animals and plants to attack him. In the end, the twins defeat Zastin and convince their father to let them stay on Earth with Lala. During Nana and Momo's stay at the Yuki household, they create their own house in the attic by remodeling it using space distortion technology; they also take care of all their own necessities, to avoid disturbing Rito's younger sister, Mikan Yuki, with cooking, cleaning, and other needs. Unlike the other female characters in To Love Ru, who are all in love with Rito, Nana seems to be the least attracted to him. 
However, she unconsciously develops affectionate feelings for Rito throughout the series, although she consciously dislikes him and thinks of him as nothing more than a pervert. As a result of her suppression of emotions, Nana often dreams of Rito advancing sexually towards her, to which she reacts by shouting "You beast!" in her sleep. At the end of the series, Rito attempts to confess his love to his classmate and longtime crush Haruna Sairenji, but fails; instead, his confession is accidentally directed to Nana and three other girls: Yui Kotegawa, Run Elise Jewelria, and Ryouko Mikado. In To Love Ru Darkness In the manga continuation of To Love Ru, titled To Love Ru Darkness, Nana becomes a Sainan High student at Momo's urging. There, Nana manages to make friends with a girl in her school class named Mea Kurosaki. Eventually, the friendship between them, which strengthens over time, becomes one of the main focuses of the manga's plot. Ultimately, Mea exposes her true identity as an alien assassin and living weapon to Nana, which leaves her confused and petrified by the revelation of her friend's true form. Nana is then left heartbroken and traumatized by Mea's statement that their friendship has never been real and that she wants to stop "playing" being friends with Nana. Depressed by the end of the friendship with Mea, Nana says in a conversation with Rito that she still wants to help Mea and that, despite her efforts, she feels that Mea never saw her as a friend and was just playing with her. Embraced by Rito, Nana is assured by him that she and Mea can still be reconciled. Feeling confident, Nana finds Mea and confronts her. Mea, fixed on her identity as a living weapon, tries to keep Nana away by attacking her. A fight ensues between the two ex-friends, with Mea explaining to Nana in detail why they cannot be together and Nana refuting her reasoning, stating that regardless of her identity, only the feelings matter. 
Mea, beginning to understand Nana's words and feelings, ends the fight by accepting Nana's request to be friends again. Shortly thereafter, due to Rito's encouragement, Nana's romantic feelings for him continue to grow. One night, when she is unable to sleep, Nana remembers the warmth of Rito's embrace and decides to sneak into bed with him, only to find that Momo, as usual, is already there sleeping with Rito, and Nana, mistaking the situation, starts beating him up. In other media In the anime adaptations of To Love Ru, Nana is voiced by Kanae Itō in Japanese, while Allison Sumrall dubs her in the English version. In addition to both the manga and anime series, Nana also appears in three To Love Ru video games: To Love Ru: Darkness — Battle Ecstasy, To Love Ru: Darkness — Idol Revolution, and To Love Ru: Darkness — True Princess. Reception Popularity Ever since her introduction, the character of Nana has become a popular subject of cosplay, causing a trend in Japan where female fan readers of the To Love Ru series attempt to replicate her iconic look. In 2015, the June issue of Shueisha's Jump Square magazine included the results of its popularity poll for the heroines of To Love-Ru Darkness. In the various categories presented, Nana ranked 6th place as "which character would you want to be in your family (but not as a wife/girlfriend)?"; 7th place as "which character would you want to be your girlfriend (or wife)?", "which character would you want to be your friend?" and "which character would be your favorite if all the heroines were in an idol group?"; and 9th place as "which character would you want to switch bodies with for just one day?". 
In August 2015, Jump Square presented the results of another popularity poll for the female characters of To Love-Ru Darkness in the October 2014 issue, for which Nana ranked 1st place as "which character would you want to be your friend?"; 5th place as "which character would you want to be in your family?"; and 9th place as "which character would you want to be your girlfriend (or wife)?" and "which character would be your favorite if all the heroines were in an idol group?". Critical response In a review for Motto To Love Ru, Theron Martin of Anime News Network (ANN) stated, "Nana and Momo explain enough about themselves over the course of this series that viewers who have not seen the OVAs will eventually be able to piece together who the twins are and what they're about." Reviewing the anime adaptation of To Love Ru Darkness, Martin noted Nana's expanded role in the series. In a later review for the second season of To Love Ru Darkness, titled To Love Ru Darkness 2nd, Martin complimented Nana's development in the anime as "in-depth and satisfying", in addition to saying that "Nana efforts to win Mea over as a friend [are] a key part of the storyline." Merchandise A 1/4 scale figure of the character Nana wearing a green bunny girl suit was released in Japan in August 2019 for the price of 21,111 yen (excluding taxes). In May 2021, the Good Smile Company announced the release of a scale figure of Nana wearing a pink swimsuit. The product stands 180mm tall and is available for pre-order with a price tag of 3,900 yen and a release date scheduled for July 2021. 
See also List of To Love Ru characters Notes References External links Nana's anime bio Anime and manga characters with superhuman strength Anime and manga characters introduced in 2008 Extraterrestrial characters in comics Female characters in anime and manga Fictional characters who can communicate with animals Fictional characters with energy-manipulation abilities Fictional extraterrestrial characters Fictional princesses Fictional hybrids Teenage characters in anime and manga To Love Ru Twin characters in comics
Nana Astar Deviluke
[ "Biology" ]
2,033
[ "Fictional hybrids", "Hybrid organisms" ]
67,785,773
https://en.wikipedia.org/wiki/Semi-inclusive%20deep%20inelastic%20scattering
In high-energy particle physics, semi-inclusive deep inelastic scattering (SIDIS) is a method of nucleon–lepton scattering used to obtain information on the nucleon structure. It expands the traditional method of deep inelastic scattering (DIS). In DIS, only the scattered lepton is detected, while the remnants of the shattered nucleon are ignored (an inclusive experiment). In SIDIS, a high-momentum hadron, known as the leading hadron, is detected in addition to the scattered lepton. This gives access to additional details of the scattering kinematics. Usefulness The leading hadron results from the hadronization of the struck quark. The latter retains the information on its motion inside the nucleon, including its transverse momentum, which gives access to the transverse momentum distributions (TMDs) of partons. Likewise, by detecting the leading hadron, one essentially tags (i.e. identifies) the quark on which the scattering occurred. For example, if the leading hadron is a kaon, we know that the scattering occurred on one of the strange quarks of the nucleon's quark sea. In DIS the struck quark is not identified and the information is an indistinguishable sum over all the quark flavors. SIDIS makes it possible to disentangle this information. Experiments SIDIS measurements were pioneered at DESY by the HERMES experiment. They are currently (2021) being carried out at CERN by the COMPASS experiment and by several experiments at Jefferson Lab. SIDIS will be an important technique in the future Electron Ion Collider scientific program. References Quantum chromodynamics Nuclear physics Scattering
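The kinematic variables involved can be computed from the beam energy, the scattered lepton's energy and angle, and the leading hadron's energy, using the textbook fixed-target formulas; the event values below are illustrative:

```python
import math

M_PROTON = 0.938  # proton mass, GeV

def sidis_kinematics(E_beam, E_lepton, theta, E_hadron):
    """DIS/SIDIS variables for a fixed-target setup (energies in GeV).

    Q^2 = 4 E E' sin^2(theta/2)   four-momentum transfer squared
    nu  = E - E'                  energy transfer to the nucleon
    x   = Q^2 / (2 M nu)          Bjorken scaling variable
    y   = nu / E                  inelasticity
    z   = E_h / nu                energy fraction carried by the hadron
    """
    Q2 = 4.0 * E_beam * E_lepton * math.sin(theta / 2.0) ** 2
    nu = E_beam - E_lepton
    x = Q2 / (2.0 * M_PROTON * nu)
    y = nu / E_beam
    z = E_hadron / nu
    return Q2, nu, x, y, z

# e.g. a 160 GeV muon beam (COMPASS-like), illustrative event values
Q2, nu, x, y, z = sidis_kinematics(160.0, 140.0, 0.01, 10.0)
```

The variable z, the fraction of the transferred energy carried by the leading hadron, is the quantity SIDIS adds on top of the inclusive DIS variables x, y, and Q².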
Semi-inclusive deep inelastic scattering
[ "Physics", "Chemistry", "Materials_science" ]
354
[ "Matter", "Hadrons", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "Subatomic particles" ]
67,786,287
https://en.wikipedia.org/wiki/United%20Nations%20General%20Assembly%20Resolution%201%20%28I%29
United Nations General Assembly Resolution 1 was the first resolution passed by the United Nations General Assembly, on 24 January 1946. It created the United Nations Atomic Energy Commission to "deal with the problems raised by the discovery of atomic energy" and commissioned it to "make specific proposals... for the elimination from national armaments of atomic weapons and of all other major weapons adaptable to mass destruction", among other issues regarding nuclear technology. References 01/1 1946 in the United Nations Nuclear energy
United Nations General Assembly Resolution 1 (I)
[ "Physics", "Chemistry" ]
99
[ "Nuclear energy", "Radioactivity", "Nuclear physics" ]
67,787,441
https://en.wikipedia.org/wiki/Lisa%20Shaw%20%28broadcaster%29
Lisa Eve (; 19 June 1976 – 21 May 2021) was a British radio presenter and journalist based in Newcastle upon Tyne, who worked in both commercial radio and for the BBC. An established radio personality in the North East, she co-presented a breakfast show alongside Gary Philipson for Century Radio (later Real Radio and Heart North East), before joining BBC Radio Newcastle in 2016. Shaw's death at the age of 44, attributed to the Oxford–AstraZeneca COVID-19 vaccine, was the subject of widespread media coverage. Life and career Born and raised in County Durham, Shaw began her radio career at Newcastle's Metro Radio, which she joined as a journalist before going on to present for the station. In 2004, she moved to Century Radio (later Real Radio then Heart North East), where she twice co-presented Gary and Lisa at Breakfast alongside Gary Philipson, first in 2004 and again between 2010 and 2014. Shaw joined BBC Radio Newcastle in 2016, becoming part of the station's daytime presenting team. From 2020, she presented a weekday show for the station as part of a simplified schedule that was introduced by BBC radio during the COVID-19 pandemic, and aired between 10am and 2pm. In addition to her work on radio, Shaw was a compere and voiceover artist, and wrote a column for regional newspaper The Sunday Sun. In 2012, she received a Sony Gold award for Best Breakfast Show in Britain for the show she presented with Philipson on what was then Real Radio. Death Shaw presented her last programme for BBC Radio Newcastle on 7 May 2021. She died at Newcastle's Royal Victoria Infirmary on 21 May, aged 44. Her family told media reporters that, days after having received a first dose of the Oxford–AstraZeneca COVID-19 vaccine, Shaw, who was not known to have any underlying health problems, became seriously ill and was treated for blood clots and cerebral bleeding. On 27 May, it was reported that her death would be investigated in a coroner's inquest. 
Sky News reported that senior Newcastle coroner Karen Dilks had issued an interim fact-of-death certificate citing a "complication of AstraZeneca COVID-19 virus vaccination" as a consideration. BBC News reported in August 2021 that the coroner had concluded in her final judgment that it was "clearly established" Shaw's death was caused by an extremely rare "vaccine-induced thrombotic thrombocytopenia", a condition which leads to a brain haemorrhage. The National Institute for Health and Care Excellence (NICE) published medical recommendations for the condition in July 2021 matching the treatment Shaw was given. Shaw's funeral was held on 10 June at Durham Cathedral. Following the service her family announced plans to establish a charity to provide holidays and activities for bereaved children who have lost a parent. The charity, Lisa Shaw's Little'uns, is named after a feature on her BBC radio show. Personal life Shaw was married to Gareth Eve, and was the mother of one child. References 1976 births 2021 deaths British radio personalities British radio DJs BBC radio presenters British radio journalists British newspaper journalists People from County Durham Mass media people from Newcastle upon Tyne Vaccine controversies
Lisa Shaw (broadcaster)
[ "Chemistry", "Biology" ]
665
[ "Vaccination", "Drug safety", "Vaccine controversies" ]
67,789,038
https://en.wikipedia.org/wiki/Spaces%20of%20test%20functions%20and%20distributions
In mathematical analysis, the spaces of test functions and distributions are topological vector spaces (TVSs) that are used in the definition and application of distributions. Test functions are usually infinitely differentiable complex-valued (or sometimes real-valued) functions on a non-empty open subset that have compact support. The space of all test functions, denoted by is endowed with a certain topology, called the , that makes into a complete Hausdorff locally convex TVS. The strong dual space of is called and is denoted by where the "" subscript indicates that the continuous dual space of denoted by is endowed with the strong dual topology. There are other possible choices for the space of test functions, which lead to other spaces of distributions. If then the use of Schwartz functions as test functions gives rise to a certain subspace of whose elements are called . These are important because they allow the Fourier transform to be extended from "standard functions" to tempered distributions. The set of tempered distributions forms a vector subspace of the space of distributions and is thus one example of a space of distributions; there are many other spaces of distributions. There also exist other major classes of test functions that are subsets of such as spaces of analytic test functions, which produce very different classes of distributions. The theory of such distributions has a different character from the previous one because there are no analytic functions with non-empty compact support. Use of analytic test functions leads to Sato's theory of hyperfunctions. Notation The following notation will be used throughout this article: is a fixed positive integer and is a fixed non-empty open subset of Euclidean space denotes the natural numbers. 
will denote a non-negative integer or If is a function then will denote its domain and the of denoted by is defined to be the closure of the set in For two functions , the following notation defines a canonical pairing: A of size is an element in (given that is fixed, if the size of multi-indices is omitted then the size should be assumed to be ). The of a multi-index is defined as and denoted by Multi-indices are particularly useful when dealing with functions of several variables, in particular we introduce the following notations for a given multi-index : We also introduce a partial order of all multi-indices by if and only if for all When we define their multi-index binomial coefficient as: will denote a certain non-empty collection of compact subsets of (described in detail below). Definitions of test functions and distributions In this section, we will formally define real-valued distributions on . With minor modifications, one can also define complex-valued distributions, and one can replace with any (paracompact) smooth manifold. Note that for all and any compact subsets and of , we have: Distributions on are defined to be the continuous linear functionals on when this vector space is endowed with a particular topology called the . This topology is unfortunately not easy to define but it is nevertheless still possible to characterize distributions in a way so that no mention of the canonical LF-topology is made. 
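The multi-index conventions this passage describes are standard; since the inline formulas were lost in extraction, they are spelled out here explicitly:

```latex
% Standard multi-index notation for \alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n:
|\alpha| = \alpha_1 + \cdots + \alpha_n, \qquad
x^\alpha = x_1^{\alpha_1} \cdots x_n^{\alpha_n}, \qquad
\partial^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}},
% partial order and binomial coefficient of multi-indices:
\beta \le \alpha \iff \beta_i \le \alpha_i \text{ for all } i, \qquad
\binom{\alpha}{\beta} = \binom{\alpha_1}{\beta_1} \cdots \binom{\alpha_n}{\beta_n}.
```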
Proposition: If is a linear functional on then it is a distribution if and only if the following equivalent conditions are satisfied: (1) for every compact subset there exist constants and (dependent on ) such that for all ; (2) for every compact subset there exist constants and such that for all with support contained in ; (3) for any compact subset and any sequence in , if converges uniformly to zero on for all multi-indices , then . The above characterizations can be used to determine whether or not a linear functional is a distribution, but more advanced uses of distributions and test functions (such as applications to differential equations) are limited if no topologies are placed on and . To define the space of distributions we must first define the canonical LF-topology, which in turn requires that several other locally convex topological vector spaces (TVSs) be defined first. First, a (non-normable) topology on will be defined, then every will be endowed with the subspace topology induced on it by and finally the (non-metrizable) canonical LF-topology on will be defined. The space of distributions, being defined as the continuous dual space of , is then endowed with the (non-metrizable) strong dual topology induced by and the canonical LF-topology (this topology is a generalization of the usual operator-norm topology placed on the continuous dual spaces of normed spaces). This finally permits consideration of more advanced notions such as convergence of distributions (both sequences and nets), various (sub)spaces of distributions, and operations on distributions, including extending differential equations to distributions. 
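The boundedness condition in the proposition is conventionally written as follows (standard form, restated here because the inline formulas are missing from the text):

```latex
% T \in \mathcal{D}'(U) iff for every compact K \subseteq U there exist
% C > 0 and N \in \mathbb{N} such that for all f \in C_c^\infty(U) with
% \operatorname{supp} f \subseteq K:
|T(f)| \;\le\; C \sum_{|\alpha| \le N} \sup_{x \in K} \left| \partial^\alpha f(x) \right|.
```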
Choice of compact sets K Throughout, will be any collection of compact subsets of such that (1) and (2) for any compact there exists some such that The most common choices for are: The set of all compact subsets of or A set where and for all , and is a relatively compact non-empty open subset of (here, "relatively compact" means that the closure of in either or is compact). We make into a directed set by defining if and only if Note that although the definitions of the subsequently defined topologies explicitly reference in reality they do not depend on the choice of that is, if and are any two such collections of compact subsets of then the topologies defined on and by using in place of are the same as those defined by using in place of Topology on Ck(U) We now introduce the seminorms that will define the topology on Different authors sometimes use different families of seminorms, so we list the most common families below. However, the resulting topology is the same no matter which family is used. All of the functions above are non-negative -valued seminorms on As explained in this article, every set of seminorms on a vector space induces a locally convex vector topology. Each of the following sets of seminorms generates the same locally convex vector topology on (so for example, the topology generated by the seminorms in is equal to the topology generated by those in ). With this topology, becomes a locally convex Fréchet space that is not normable. 
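A representative family of seminorms generating this topology is, for each compact K ⊆ U and each integer j ≤ k (a standard choice; the families referred to as (A) through (D) in the text are equivalent variants of it):

```latex
% Seminorms defining the topology on C^k(U): for K \subseteq U compact
% and 0 \le j \le k,
p_{K,j}(f) \;=\; \sup_{x \in K} \; \max_{|\alpha| \le j} \left| \partial^\alpha f(x) \right|.
```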
Every element of is a continuous seminorm on Under this topology, a net in converges to if and only if for every multi-index with and every compact the net of partial derivatives converges uniformly to on For any , any (von Neumann) bounded subset of is a relatively compact subset of In particular, a subset of is bounded if and only if it is bounded in for all The space is a Montel space if and only if The topology on is the superior limit of the subspace topologies induced on by the TVSs as ranges over the non-negative integers. A subset of is open in this topology if and only if there exists such that is open when is endowed with the subspace topology induced on it by Metric defining the topology If the family of compact sets satisfies and for all then a complete translation-invariant metric on can be obtained by taking a suitable countable Fréchet combination of any one of the above defining families of seminorms (A through D). For example, using the seminorms results in the metric Often, it is easier to just consider seminorms (avoiding any metric) and use the tools of functional analysis. Topology on Ck(K) As before, fix Recall that if is any compact subset of then For any compact subset is a closed subspace of the Fréchet space and is thus also a Fréchet space. For all compact satisfying denote the inclusion map by Then this map is a linear embedding of TVSs (that is, it is a linear map that is also a topological embedding) whose image (or "range") is closed in its codomain; said differently, the topology on is identical to the subspace topology it inherits from and also is a closed subset of The interior of relative to is empty. If is finite then is a Banach space with a topology that can be defined by the norm . And when , is even a Hilbert space. The space is a distinguished Schwartz Montel space, so if then it is not normable and thus not a Banach space (although, like all other , it is a Fréchet space). 
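Two formulas lost from the passage above are standard and worth restating: the Fréchet-combination metric on C^k(U), built from an exhausting sequence of compact sets K₁ ⊆ K₂ ⊆ ⋯, and the norm that makes C^k(K) a Banach space for finite k:

```latex
% Complete translation-invariant metric on C^k(U), with K_1 \subseteq K_2 \subseteq \cdots
% exhausting U (seminorm orders capped at k when k is finite):
d(f, g) \;=\; \sum_{i=1}^{\infty} 2^{-i}\,
  \frac{p_{K_i,\, \min(i,k)}(f - g)}{1 + p_{K_i,\, \min(i,k)}(f - g)}.

% Norm making C^k(K) a Banach space when k < \infty:
\|f\|_{C^k(K)} \;=\; \max_{|\alpha| \le k} \; \sup_{x \in K} \left| \partial^\alpha f(x) \right|.
```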
Trivial extensions and independence of Ck(K)'s topology from U

The definition of $C^k(K)$ depends on $U$, so we will let $C^k(K; U)$ denote the topological space $C^k(K)$, which by definition is a topological subspace of $C^k(U)$. Suppose $V$ is an open subset of $\mathbb{R}^n$ containing $U$, and for any compact subset $K \subseteq V$, let $C^k(K; V)$ be the vector subspace of $C^k(V)$ consisting of maps with support contained in $K$. Given $f \in C^k_c(U)$, its trivial extension to $V$ is, by definition, the function $F : V \to \mathbb{C}$ defined by $F(x) = f(x)$ if $x \in U$ and $F(x) = 0$ otherwise, so that $F \in C^k_c(V)$. Let $I : C^k_c(U) \to C^k_c(V)$ denote the map that sends a function in $C^k_c(U)$ to its trivial extension on $V$. This map is a linear injection, and for every compact subset $K \subseteq U$ (where $K$ is also a compact subset of $V$ since $K \subseteq U \subseteq V$) we have $I(C^k(K; U)) = C^k(K; V)$. If $I$ is restricted to $C^k(K; U)$ then the following induced linear map is a homeomorphism (and thus a TVS-isomorphism): $C^k(K; U) \to C^k(K; V)$, and thus the next two maps (which like the previous map are defined by $f \mapsto I(f)$) are topological embeddings: $C^k(K; U) \to C^k(V)$ and $C^k(K; U) \to C^k_c(V)$ (the topology on $C^k_c(V)$ is the canonical LF topology, which is defined later). Using the injection $I$, the vector space $C^k_c(U)$ is canonically identified with its image in $C^k_c(V)$ (however, if $U \neq V$ then $I$ is not a topological embedding when these spaces are endowed with their canonical LF topologies, although it is continuous). Because $C^k(K; U) \subseteq C^k_c(U)$, through this identification $C^k(K; U)$ can also be considered as a subset of $C^k(V)$. Importantly, the subspace topology $C^k(K; U)$ inherits from $C^k(U)$ (when it is viewed as a subset of $C^k(U)$) is identical to the subspace topology that it inherits from $C^k(V)$ (when it is viewed instead as a subset of $C^k(V)$ via the identification). Thus the topology on $C^k(K)$ is independent of the open subset $U$ of $\mathbb{R}^n$ that contains $K$. This justifies the practice of writing $C^k(K)$ instead of $C^k(K; U)$.

Canonical LF topology

Recall that $C^k_c(U)$ denotes all those functions in $C^k(U)$ that have compact support in $U$, and note that $C^k_c(U)$ is the union of all $C^k(K)$ as $K$ ranges over $\mathcal{K}$. Moreover, for every $k$, $C^k_c(U)$ is a dense subset of $C^k(U)$. The special case when $k = \infty$ gives us the space of test functions. This section defines the canonical LF topology as a direct limit. It is also possible to define this topology in terms of its neighborhoods of the origin, which is described afterwards.
Topology defined by direct limits

For any two sets $K$ and $L$, we declare that $K \leq L$ if and only if $K \subseteq L$, which in particular makes the collection $\mathcal{K}$ of compact subsets of $U$ into a directed set (we say that such a collection is directed by subset inclusion). For all compact $K, L \subseteq U$ satisfying $K \subseteq L$, there are inclusion maps $\operatorname{In}_K^L : C^k(K) \to C^k(L)$ and $\operatorname{In}_K^U : C^k(K) \to C^k_c(U)$. Recall from above that the map $\operatorname{In}_K^L$ is a topological embedding. The collection of maps $\{\operatorname{In}_K^L : K, L \in \mathcal{K} \text{ and } K \subseteq L\}$ forms a direct system in the category of locally convex topological vector spaces that is directed by $\mathcal{K}$ (under subset inclusion). This system's direct limit (in the category of locally convex TVSs) is the pair $(C^k_c(U), \operatorname{In}^U)$, where $\operatorname{In}^U := (\operatorname{In}_K^U)_{K \in \mathcal{K}}$ are the natural inclusions and where $C^k_c(U)$ is now endowed with the (unique) strongest locally convex topology making all of the inclusion maps continuous.

Topology defined by neighborhoods of the origin

If $W$ is a convex subset of $C^k_c(U)$, then $W$ is a neighborhood of the origin in the canonical LF topology if and only if it satisfies the following condition: for every $K \in \mathcal{K}$, the set $W \cap C^k(K)$ is a neighborhood of the origin in $C^k(K)$. (*)

Note that any convex set satisfying this condition is necessarily absorbing in $C^k_c(U)$. Since the topology of any topological vector space is translation-invariant, any TVS-topology is completely determined by its set of neighborhoods of the origin. This means that one could actually define the canonical LF topology by declaring that a convex balanced subset $W$ is a neighborhood of the origin if and only if it satisfies condition (*).

Topology defined via differential operators

A linear differential operator in $U$ with smooth coefficients is a sum $P := \sum_\alpha c_\alpha \partial^\alpha$ where $c_\alpha \in C^\infty(U)$ and all but finitely many of the $c_\alpha$ are identically $0$.
The largest integer $|\alpha|$ with $c_\alpha \not\equiv 0$ is called the order of the differential operator $P$. If $P$ is a linear differential operator of order $k$ then it induces a canonical linear map $C^k(U) \to C^0(U)$ defined by $\phi \mapsto P\phi$, where we shall reuse notation and also denote this map by $P$. For any $1 \leq k \leq \infty$, the canonical LF topology on $C^k_c(U)$ is the weakest locally convex TVS topology making all linear differential operators in $U$ of order $\leq k$ into continuous maps from $C^k_c(U)$ into $C^0_c(U)$.

Properties of the canonical LF topology

Canonical LF topology's independence from $\mathcal{K}$

One benefit of defining the canonical LF topology as the direct limit of a direct system is that we may immediately use the universal property of direct limits. Another benefit is that we can use well-known results from category theory to deduce that the canonical LF topology is actually independent of the particular choice of the directed collection $\mathcal{K}$ of compact sets. And by considering different collections (in particular, those mentioned at the beginning of this article), we may deduce different properties of this topology. In particular, we may deduce that the canonical LF topology makes $C^k_c(U)$ into a Hausdorff locally convex strict LF-space (and also a strict LB-space if $k \neq \infty$), which of course is the reason why this topology is called "the canonical LF topology" (see this footnote for more details).

Universal property

From the universal property of direct limits, we know that if $u : C^k_c(U) \to Y$ is a linear map into a locally convex space $Y$ (not necessarily Hausdorff), then $u$ is continuous if and only if $u$ is bounded if and only if for every $K \in \mathcal{K}$, the restriction of $u$ to $C^k(K)$ is continuous (or bounded).

Dependence of the canonical LF topology on $U$

Suppose $V$ is an open subset of $\mathbb{R}^n$ containing $U$. Let $I : C^k_c(U) \to C^k_c(V)$ denote the map that sends a function in $C^k_c(U)$ to its trivial extension on $V$ (which was defined above). This map is a continuous linear map. If (and only if) $U = V$ is the image of $I$ a dense subset of $C^k_c(V)$ and $I$ a topological embedding. Consequently, if $U \neq V$ then the transpose of $I$ is neither one-to-one nor onto.
Bounded subsets

A subset $B$ of $C^k_c(U)$ is bounded in $C^k_c(U)$ if and only if there exists some $K \in \mathcal{K}$ such that $B \subseteq C^k(K)$ and $B$ is a bounded subset of $C^k(K)$. Moreover, if $K \subseteq U$ is compact and $S \subseteq C^k(K)$, then $S$ is bounded in $C^k(K)$ if and only if it is bounded in $C^k(U)$. For any $k$, any bounded subset of $C^{k+1}_c(U)$ (resp. $C^{k+1}(U)$) is a relatively compact subset of $C^k_c(U)$ (resp. $C^k(U)$), where $\infty + 1 = \infty$.

Non-metrizability

For all compact $K \subseteq U$, the interior of $C^k(K)$ in $C^k_c(U)$ is empty, so that $C^k_c(U)$ is of the first category in itself. It follows from Baire's theorem that $C^k_c(U)$ is not metrizable and thus also not normable (see this footnote for an explanation of how the non-metrizable space $C^k_c(U)$ can be complete even though it does not admit a metric). The fact that $C^\infty_c(U)$ is a nuclear Montel space makes up for the non-metrizability of $C^\infty_c(U)$ (see this footnote for a more detailed explanation).

Relationships between spaces

Using the universal property of direct limits and the fact that the natural inclusions $\operatorname{In}_K^L : C^k(K) \to C^k(L)$ are all topological embeddings, one may show that all of the maps $\operatorname{In}_K^U : C^k(K) \to C^k_c(U)$ are also topological embeddings. Said differently, the topology on $C^k(K)$ is identical to the subspace topology that it inherits from $C^k_c(U)$, where recall that $C^k(K)$'s topology was defined to be the subspace topology induced on it by $C^k(U)$. In particular, both $C^k_c(U)$ and $C^k(U)$ induce the same subspace topology on $C^k(K)$. However, this does not imply that the canonical LF topology on $C^k_c(U)$ is equal to the subspace topology induced on $C^k_c(U)$ by $C^k(U)$; these two topologies on $C^k_c(U)$ are in fact never equal to each other, since the canonical LF topology is not metrizable while the subspace topology induced on it by $C^k(U)$ is metrizable (since recall that $C^k(U)$ is metrizable). The canonical LF topology on $C^k_c(U)$ is actually strictly finer than the subspace topology that it inherits from $C^k(U)$ (thus the natural inclusion $C^k_c(U) \to C^k(U)$ is continuous but not a topological embedding). Indeed, the canonical LF topology is so fine that if $C^\infty_c(U) \to X$ denotes some linear map that is a "natural inclusion" (such as the inclusion into $C^k(U)$, or into $L^p(U)$, or other maps discussed below) then this map will typically be continuous, which (as is explained below) is ultimately the reason why locally integrable functions, Radon measures, etc.
all induce distributions (via the transpose of such a "natural inclusion"). Said differently, the reason why there are so many different ways of defining distributions from other spaces ultimately stems from how very fine the canonical LF topology is. Moreover, since distributions are just continuous linear functionals on $C^\infty_c(U)$, the fine nature of the canonical LF topology means that more linear functionals on $C^\infty_c(U)$ end up being continuous ("more" means as compared to a coarser topology that we could have placed on $C^\infty_c(U)$, such as, for instance, the subspace topology induced by some $C^k(U)$, which although it would have made $C^\infty_c(U)$ metrizable, would have also resulted in fewer linear functionals on $C^\infty_c(U)$ being continuous and thus there would have been fewer distributions; moreover, this particular coarser topology also has the disadvantage of not making $C^\infty_c(U)$ into a complete TVS).

Other properties

The differentiation map $C^\infty_c(U) \to C^\infty_c(U)$, $f \mapsto \partial^\alpha f$, is a continuous linear operator. The bilinear multiplication map $C^\infty(\mathbb{R}^m) \times C^\infty_c(\mathbb{R}^m) \to C^\infty_c(\mathbb{R}^m)$ given by $(f, g) \mapsto fg$ is not continuous; it is, however, hypocontinuous.

Distributions

As discussed earlier, continuous linear functionals on $C^\infty_c(U)$ (with its canonical LF topology) are known as distributions on $U$. Thus the set of all distributions on $U$ is the continuous dual space of $C^\infty_c(U)$, which when endowed with the strong dual topology is denoted by $\mathcal{D}'(U)$. We have the canonical duality pairing between a distribution $T$ on $U$ and a test function $\varphi$, which is denoted using angle brackets by $\langle T, \varphi \rangle := T(\varphi)$. One interprets this notation as the distribution $T$ acting on the test function $\varphi$ to give a scalar, or symmetrically as the test function $\varphi$ acting on the distribution $T$.

Characterizations of distributions

Proposition. If $T$ is a linear functional on $C^\infty_c(U)$, then the following are equivalent:

1. $T$ is a distribution;
2. $T$ is continuous;
3. $T$ is continuous at the origin;
4. $T$ is uniformly continuous;
5. $T$ is a bounded operator;
6. $T$ is sequentially continuous; explicitly, for every sequence $(\varphi_i)$ in $C^\infty_c(U)$ that converges in $C^\infty_c(U)$ to some $\varphi \in C^\infty_c(U)$, $\lim_{i \to \infty} T(\varphi_i) = T(\varphi)$;
7. $T$ is sequentially continuous at the origin; in other words, $T$ maps null sequences to null sequences;
explicitly, for every sequence $(\varphi_i)$ in $C^\infty_c(U)$ that converges in $C^\infty_c(U)$ to the origin (such a sequence is called a null sequence), $\lim_{i \to \infty} T(\varphi_i) = 0$; a null sequence is, by definition, a sequence that converges to the origin;

8. $T$ maps null sequences to bounded subsets; explicitly, for every sequence $(\varphi_i)$ in $C^\infty_c(U)$ that converges in $C^\infty_c(U)$ to the origin, the sequence $(T(\varphi_i))$ is bounded;
9. $T$ maps Mackey convergent null sequences to bounded subsets; explicitly, for every Mackey convergent null sequence $(\varphi_i)$ in $C^\infty_c(U)$, the sequence $(T(\varphi_i))$ is bounded; a sequence $(\varphi_i)$ is said to be Mackey convergent to the origin if there exists a divergent sequence $(r_i) \to \infty$ of positive real numbers such that the sequence $(r_i \varphi_i)$ is bounded; every sequence that is Mackey convergent to the origin necessarily converges to the origin (in the usual sense);
10. The kernel of $T$ is a closed subspace of $C^\infty_c(U)$;
11. The graph of $T$ is closed;
12. There exists a continuous seminorm $g$ on $C^\infty_c(U)$ such that $|T| \leq g$;
13. There exists a constant $C > 0$, a collection of continuous seminorms $\mathcal{P}$ that defines the canonical LF topology of $C^\infty_c(U)$, and a finite subset $\{g_1, \ldots, g_m\} \subseteq \mathcal{P}$ such that $|T| \leq C(g_1 + \cdots + g_m)$;
14. For every compact subset $K \subseteq U$ there exist constants $C > 0$ and $N \in \mathbb{N}$ such that for all $f \in C^\infty(K)$, $|T(f)| \leq C \sup \{ |\partial^\alpha f(x)| : x \in K, |\alpha| \leq N \}$;
15. For every compact subset $K \subseteq U$ there exist constants $C_K > 0$ and $N_K \in \mathbb{N}$ such that for all $f \in C^\infty_c(U)$ with support contained in $K$, $|T(f)| \leq C_K \sup \{ |\partial^\alpha f(x)| : x \in K, |\alpha| \leq N_K \}$;
16. For any compact subset $K \subseteq U$ and any sequence $(f_i)$ in $C^\infty(K)$, if $(\partial^p f_i)$ converges uniformly to zero for all multi-indices $p$, then $T(f_i) \to 0$;
17. Any of the statements immediately above (that is, statements 14, 15, and 16) but with the additional requirement that the compact set $K$ belongs to $\mathcal{K}$.

Topology on the space of distributions

The topology of uniform convergence on bounded subsets is also called the strong dual topology. This topology is chosen because it is with this topology that $\mathcal{D}'(U)$ becomes a nuclear Montel space and it is with this topology that the kernels theorem of Schwartz holds. No matter what dual topology is placed on $\mathcal{D}'(U)$, a sequence of distributions converges in this topology if and only if it converges pointwise (although this need not be true of a net). No matter which topology is chosen, $\mathcal{D}'(U)$ will be a non-metrizable, locally convex topological vector space.
The space $\mathcal{D}'(U)$ is separable and has the strong Pytkeev property, but it is neither a k-space nor a sequential space, which in particular implies that it is not metrizable and also that its topology cannot be defined using only sequences.

Topological properties

Topological vector space categories

The canonical LF topology makes $C^k_c(U)$ into a complete distinguished strict LF-space (and a strict LB-space if and only if $k \neq \infty$), which implies that $C^k_c(U)$ is a meager subset of itself. Furthermore, $C^k_c(U)$, as well as its strong dual space, is a complete Hausdorff locally convex barrelled bornological Mackey space. The strong dual of $C^k_c(U)$ is a Fréchet space if and only if $k \neq \infty$; so in particular, the strong dual of $C^\infty_c(U)$, which is the space $\mathcal{D}'(U)$ of distributions on $U$, is not metrizable (note that the weak-* topology on $\mathcal{D}'(U)$ also is not metrizable and, moreover, it further lacks almost all of the nice properties that the strong dual topology gives $\mathcal{D}'(U)$). The three spaces $C^\infty_c(U)$, $C^\infty(U)$, and the Schwartz space $\mathcal{S}(\mathbb{R}^n)$, as well as the strong duals of each of these three spaces, are complete nuclear Montel bornological spaces, which implies that all six of these locally convex spaces are also paracompact reflexive barrelled Mackey spaces. The spaces $C^\infty(U)$ and $\mathcal{S}(\mathbb{R}^n)$ are both distinguished Fréchet spaces. Moreover, both $C^\infty_c(U)$ and $\mathcal{S}(\mathbb{R}^n)$ are Schwartz TVSs.

Convergent sequences

Convergent sequences and their insufficiency to describe topologies

The strong dual spaces of $C^\infty(U)$ and $\mathcal{S}(\mathbb{R}^n)$ are sequential spaces but not Fréchet–Urysohn spaces. Moreover, neither the space of test functions $C^\infty_c(U)$ nor its strong dual $\mathcal{D}'(U)$ is a sequential space (not even an Ascoli space), which in particular implies that their topologies cannot be defined entirely in terms of convergent sequences.
A sequence $(f_i)$ in $C^k_c(U)$ converges in $C^k_c(U)$ if and only if there exists some $K \in \mathcal{K}$ such that $C^k(K)$ contains this sequence and this sequence converges in $C^k(K)$; equivalently, it converges if and only if the following two conditions hold: there is a compact set $K \subseteq U$ containing the supports of all $f_i$, and for each multi-index $\alpha$, the sequence of partial derivatives $(\partial^\alpha f_i)$ tends uniformly to $\partial^\alpha f$. Neither the space $C^\infty_c(U)$ nor its strong dual $\mathcal{D}'(U)$ is a sequential space, and consequently, their topologies cannot be defined entirely in terms of convergent sequences. For this reason, the above characterization of when a sequence converges is not enough to define the canonical LF topology on $C^\infty_c(U)$. The same can be said of the strong dual topology on $\mathcal{D}'(U)$.

What sequences do characterize

Nevertheless, sequences do characterize many important properties, as we now discuss. It is known that in the dual space of any Montel space, a sequence converges in the strong dual topology if and only if it converges in the weak* topology, which in particular is the reason why a sequence of distributions converges (in the strong dual topology) if and only if it converges pointwise (this leads many authors to use pointwise convergence to actually define the convergence of a sequence of distributions; this is fine for sequences but it does not extend to the convergence of nets of distributions, since a net may converge pointwise but fail to converge in the strong dual topology). Sequences characterize continuity of linear maps valued in locally convex spaces. Suppose $X$ is a locally convex bornological space (such as any of the six TVSs mentioned earlier). Then a linear map $F : X \to Y$ into a locally convex space $Y$ is continuous if and only if it maps null sequences in $X$ to bounded subsets of $Y$. More generally, such a linear map $F : X \to Y$ is continuous if and only if it maps Mackey convergent null sequences to bounded subsets of $Y$. So in particular, if a linear map $F : X \to Y$ into a locally convex space is sequentially continuous at the origin, then it is continuous.
However, this does not necessarily extend to non-linear maps and/or to maps valued in topological spaces that are not locally convex TVSs. For every $k \in \{0, 1, \ldots, \infty\}$, $C^\infty_c(U)$ is sequentially dense in $C^k_c(U)$. Furthermore, $\{D_\varphi : \varphi \in C^\infty_c(U)\}$ is a sequentially dense subset of $\mathcal{D}'(U)$ (with its strong dual topology) and also a sequentially dense subset of the strong dual space of $C^\infty(U)$.

Sequences of distributions

A sequence of distributions $(T_i)$ converges with respect to the weak-* topology on $\mathcal{D}'(U)$ to a distribution $T$ if and only if $\langle T_i, \varphi \rangle \to \langle T, \varphi \rangle$ for every test function $\varphi \in \mathcal{D}(U)$. For example, if $f_m$ is the function that equals $m$ on $[0, 1/m]$ and $0$ elsewhere, and $T_m$ is the distribution corresponding to $f_m$, then $\langle T_m, \varphi \rangle = m \int_0^{1/m} \varphi(x)\,dx \to \varphi(0) = \langle \delta, \varphi \rangle$ as $m \to \infty$, so $T_m \to \delta$ in $\mathcal{D}'(\mathbb{R})$. Thus, for large $m$, the function $f_m$ can be regarded as an approximation of the Dirac delta distribution.

Other properties

The strong dual space of $\mathcal{D}'(U)$ is TVS-isomorphic to $C^\infty_c(U)$ via the canonical TVS-isomorphism defined by sending $\varphi$ to its evaluation map (that is, to the linear functional on $\mathcal{D}'(U)$ defined by sending $T$ to $T(\varphi)$); on any bounded subset of $\mathcal{D}'(U)$, the weak and strong subspace topologies coincide; the same is true for $C^\infty_c(U)$; every weakly convergent sequence in $\mathcal{D}'(U)$ is strongly convergent (although this does not extend to nets).

Localization of distributions

Preliminaries: Transpose of a linear operator

Operations on distributions and spaces of distributions are often defined by means of the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions and also because its properties are well known in functional analysis. For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general, the transpose of a continuous linear map $A : X \to Y$ is the linear map ${}^t A : Y' \to X'$ defined by ${}^t A(y') := y' \circ A$, or equivalently, it is the unique map satisfying $\langle y', A(x) \rangle = \langle {}^t A(y'), x \rangle$ for all $x \in X$ and all $y' \in Y'$ (the prime symbol in $y'$ does not denote a derivative of any kind; it merely indicates that $y'$ is an element of the continuous dual space $Y'$).
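The δ-approximation described under "Sequences of distributions" above can be checked numerically. The sketch below is my own illustration in plain Python: the pairing $\langle T_m, \varphi \rangle = m \int_0^{1/m} \varphi(x)\,dx$ is approximated with a midpoint Riemann sum, and $\varphi = \cos$ is an arbitrary smooth choice (its lack of compact support does not matter on the shrinking interval of integration).

```python
import math

def pair_with_step(m, phi, samples=10001):
    """<T_m, phi> = m * integral_0^{1/m} phi(x) dx,
    approximated with a midpoint Riemann sum."""
    h = (1.0 / m) / samples
    return m * sum(phi((i + 0.5) * h) * h for i in range(samples))

phi = math.cos  # smooth test function; phi(0) = 1
for m in (1, 10, 100):
    print(m, pair_with_step(m, phi))
# The pairings approach phi(0) = 1 as m grows, i.e. T_m -> delta weakly-*.
```

This is exactly pointwise (weak-*) convergence of the sequence of distributions; per the text, for sequences of distributions this coincides with strong convergence.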
Since $A$ is continuous, the transpose ${}^t A : Y' \to X'$ is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details). In the context of distributions, the characterization of the transpose can be refined slightly. Let $A : \mathcal{D}(U) \to \mathcal{D}(U)$ be a continuous linear map. Then by definition, the transpose of $A$ is the unique linear operator ${}^t A : \mathcal{D}'(U) \to \mathcal{D}'(U)$ that satisfies: $\langle {}^t A(T), \varphi \rangle = \langle T, A(\varphi) \rangle$ for all $\varphi \in \mathcal{D}(U)$ and all $T \in \mathcal{D}'(U)$. Since $\mathcal{D}(U)$ is dense in $\mathcal{D}'(U)$ (here, $\mathcal{D}(U)$ actually refers to the set of distributions $\{D_\psi : \psi \in \mathcal{D}(U)\}$), it is sufficient that the defining equality hold for all distributions of the form $T = D_\psi$ where $\psi \in \mathcal{D}(U)$. Explicitly, this means that a continuous linear map $B : \mathcal{D}'(U) \to \mathcal{D}'(U)$ is equal to ${}^t A$ if and only if the condition below holds: $\langle B(D_\psi), \varphi \rangle = \langle {}^t A(D_\psi), \varphi \rangle$ for all $\varphi, \psi \in \mathcal{D}(U)$, where the right-hand side equals $\langle D_\psi, A(\varphi) \rangle$.

Extensions and restrictions to an open subset

Let $V \subseteq U$ be open subsets of $\mathbb{R}^n$. Every function $f \in \mathcal{D}(V)$ can be extended from its domain $V$ to a function on $U$ by setting it equal to $0$ on the complement $U \setminus V$. This extension is a smooth compactly supported function, called the trivial extension of $f$ to $U$, and it will be denoted by $E_{VU}(f)$. This assignment defines the operator $E_{VU} : \mathcal{D}(V) \to \mathcal{D}(U)$, which is a continuous injective linear map. It is used to canonically identify $\mathcal{D}(V)$ as a vector subspace of $\mathcal{D}(U)$ (although not as a topological subspace). Its transpose (explained here) $\rho_{VU} := {}^t E_{VU} : \mathcal{D}'(U) \to \mathcal{D}'(V)$ is called the restriction to $V$ of distributions in $U$, and as the name suggests, the image $\rho_{VU}(T)$ of a distribution $T \in \mathcal{D}'(U)$ under this map is a distribution on $V$ called the restriction of $T$ to $V$. The defining condition of the restriction is: $\langle \rho_{VU} T, \varphi \rangle = \langle T, E_{VU} \varphi \rangle$ for all $\varphi \in \mathcal{D}(V)$. If $V \neq U$, then the (continuous injective linear) trivial extension map $E_{VU}$ is not a topological embedding (in other words, if this linear injection were used to identify $\mathcal{D}(V)$ as a subset of $\mathcal{D}(U)$, then $\mathcal{D}(V)$'s topology would be strictly finer than the subspace topology that $\mathcal{D}(U)$ induces on it; importantly, it would not be a topological subspace, since that requires equality of topologies) and its range is also not dense in its codomain $\mathcal{D}(U)$. Consequently, if $V \neq U$, then the restriction mapping is neither injective nor surjective.
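The transpose machinery above is what defines, for example, the distributional derivative: $\langle T', \varphi \rangle := -\langle T, \varphi' \rangle$, the transpose (up to sign) of differentiation on test functions. The following sketch, my own illustration rather than anything from the text, checks numerically that the distributional derivative of the Heaviside step function $H$ pairs with a bump test function $\varphi$ to give $\varphi(0)$, i.e. $H' = \delta$.

```python
import math

def bump(x):
    """Smooth test function supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def bump_prime(x):
    # derivative of exp(-1/(1-x^2)) by the chain rule
    if abs(x) >= 1.0:
        return 0.0
    return bump(x) * (-2.0 * x) / (1.0 - x * x) ** 2

def heaviside_derivative_pairing(phi_prime, samples=20001):
    """<H', phi> = -<H, phi'> = -integral_0^infty phi'(x) dx,
    via a midpoint rule (phi' vanishes outside (-1, 1) here)."""
    h = 1.0 / samples
    return -sum(phi_prime((i + 0.5) * h) * h for i in range(samples))

print(heaviside_derivative_pairing(bump_prime))  # ~ bump(0) = e^{-1}
```

The pairing equals $\varphi(0) - \varphi(1) = \varphi(0) = e^{-1} \approx 0.3679$ by the fundamental theorem of calculus, matching $\langle \delta, \varphi \rangle$.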
A distribution $S \in \mathcal{D}'(V)$ is said to be extendible to $U$ if it belongs to the range of the transpose of $E_{VU}$, and it is called extendible if it is extendable to $\mathbb{R}^n$. Unless $U = V$, the restriction to $V$ is neither injective nor surjective.

Spaces of distributions

For all $0 < k < \infty$ and all $1 < p < \infty$, all of the following canonical injections are continuous and have an image/range that is a dense subset of their codomain: $C^\infty_c(U) \to C^k_c(U) \to C^0_c(U) \to L^p_c(U)$, where the topologies on the LB-spaces $L^p_c(U)$ are the canonical LF topologies as defined below (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in the codomain. Indeed, $C^\infty_c(U)$ is even sequentially dense in every $C^k_c(U)$. For every $1 \leq p \leq \infty$, the canonical inclusion $C^\infty_c(U) \to L^p(U)$ into the normed space $L^p(U)$ (here $L^p(U)$ has its usual norm topology) is a continuous linear injection, and the range of this injection is dense in its codomain if and only if $p \neq \infty$. Suppose that $X$ is one of the LF-spaces $C^k_c(U)$ (for $k \in \{0, 1, \ldots, \infty\}$) or LB-spaces $L^p_c(U)$ (for $1 \leq p \leq \infty$) or normed spaces $L^p(U)$ (for $1 \leq p < \infty$). Because the canonical injection $\operatorname{In}_X : C^\infty_c(U) \to X$ is a continuous injection whose image is dense in the codomain, this map's transpose ${}^t \operatorname{In}_X : X'_b \to \mathcal{D}'(U)$ is a continuous injection. This injective transpose map thus allows the continuous dual space $X'$ of $X$ to be identified with a certain vector subspace of the space $\mathcal{D}'(U)$ of all distributions (specifically, it is identified with the image of this transpose map). This continuous transpose map is not necessarily a TVS-embedding, so the topology that this map transfers from its domain to the image is finer than the subspace topology that this space inherits from $\mathcal{D}'(U)$. A linear subspace of $\mathcal{D}'(U)$ carrying a locally convex topology that is finer than the subspace topology induced by $\mathcal{D}'(U)$ is called a space of distributions. Almost all of the spaces of distributions mentioned in this article arise in this way (e.g. tempered distributions, restrictions, distributions of order some integer, distributions induced by a positive Radon measure, distributions induced by an $L^p$-function, etc.)
and any representation theorem about the dual space of $X$ may, through the transpose ${}^t \operatorname{In}_X$, be transferred directly to elements of the corresponding space of distributions.

Compactly supported Lp-spaces

Given $1 \leq p \leq \infty$, the vector space $L^p_c(U)$ of compactly supported $L^p$ functions on $U$ and its topology are defined as direct limits of the spaces $L^p(K)$, in a manner analogous to how the canonical LF-topologies on $C^k_c(U)$ were defined. For any compact $K \subseteq U$, let $L^p(K)$ denote the set of all elements in $L^p(U)$ (which recall are equivalence classes of Lebesgue measurable $L^p$ functions on $U$) having a representative $f$ whose support (which recall is the closure of $\{x \in U : f(x) \neq 0\}$ in $U$) is a subset of $K$ (such an $f$ is almost everywhere defined in $K$). The set $L^p(K)$ is a closed vector subspace of $L^p(U)$ and is thus a Banach space and, when $p = 2$, even a Hilbert space. Let $L^p_c(U)$ be the union of all $L^p(K)$ as $K$ ranges over all compact subsets of $U$. The set $L^p_c(U)$ is a vector subspace of $L^p(U)$ whose elements are the (equivalence classes of) compactly supported $L^p$ functions defined on $U$ (or almost everywhere on $U$). Endow $L^p_c(U)$ with the final topology (direct limit topology) induced by the inclusion maps $L^p(K) \to L^p_c(U)$ as $K$ ranges over all compact subsets of $U$. This topology is called the canonical LF topology, and it is equal to the final topology induced by any countable set of inclusion maps $L^p(K_1) \to L^p_c(U)$, $L^p(K_2) \to L^p_c(U)$, $\ldots$, where $K_1 \subseteq K_2 \subseteq \cdots$ are any compact sets with union equal to $U$. This topology makes $L^p_c(U)$ into an LB-space (and thus also an LF-space) with a topology that is strictly finer than the norm (subspace) topology that $L^p(U)$ induces on it.

Radon measures

The inclusion map $\operatorname{In} : C^\infty_c(U) \to C^0_c(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^t \operatorname{In} : (C^0_c(U))'_b \to \mathcal{D}'(U)$ is also a continuous injection. Note that the continuous dual space $(C^0_c(U))'_b$ can be identified as the space of Radon measures, where there is a one-to-one correspondence between the continuous linear functionals and integrals with respect to a Radon measure; that is, if $T \in (C^0_c(U))'$ then there exists a Radon measure $\mu$ on $U$ such that $T(f) = \int_U f \, d\mu$ for all $f \in C^0_c(U)$, and if $\mu$ is a Radon measure on $U$ then the linear functional on $C^0_c(U)$ defined by $f \mapsto \int_U f \, d\mu$ is continuous. Through the injection ${}^t \operatorname{In}$, every Radon measure becomes a distribution on $U$.
If $f$ is a locally integrable function on $U$, then the distribution $\varphi \mapsto \int_U f(x) \varphi(x) \, dx$ is a Radon measure; so Radon measures form a large and important space of distributions. The following is the theorem of the structure of distributions of Radon measures, which shows that every Radon measure can be written as a sum of derivatives of locally $L^\infty$ functions in $U$:

Positive Radon measures

A linear function $T$ on a space of functions is called positive if whenever a function $f$ that belongs to the domain of $T$ is non-negative (meaning that $f$ is real-valued and $f \geq 0$), then $T(f) \geq 0$. One may show that every positive linear functional on $C^0_c(U)$ is necessarily continuous (that is, necessarily a Radon measure). Lebesgue measure is an example of a positive Radon measure.

Locally integrable functions as distributions

One particularly important class of Radon measures are those that are induced by locally integrable functions. The function $f : U \to \mathbb{R}$ is called locally integrable if it is Lebesgue integrable over every compact subset of $U$. This is a large class of functions which includes all continuous functions and all $L^p$ functions. The topology on $\mathcal{D}(U)$ is defined in such a fashion that any locally integrable function $f$ yields a continuous linear functional on $\mathcal{D}(U)$ – that is, an element of $\mathcal{D}'(U)$ – denoted here by $T_f$, whose value on the test function $\varphi$ is given by the Lebesgue integral: $\langle T_f, \varphi \rangle := \int_U f \varphi \, dx.$ Conventionally, one abuses notation by identifying $T_f$ with $f$, provided no confusion can arise, and thus the pairing between $T_f$ and $\varphi$ is often written $\langle f, \varphi \rangle$. If $f$ and $g$ are two locally integrable functions, then the associated distributions $T_f$ and $T_g$ are equal to the same element of $\mathcal{D}'(U)$ if and only if $f$ and $g$ are equal almost everywhere.
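The pairing $\langle T_f, \varphi \rangle = \int f \varphi \, dx$ and the almost-everywhere equivalence can both be seen numerically. The sketch below is my own illustration in plain Python: $f$ is the (discontinuous but locally integrable) sign function, the integral is a midpoint Riemann sum, and modifying $f$ on a single point (a Lebesgue-null set) leaves the induced distribution unchanged.

```python
def pairing(f, phi, a=-1.0, b=1.0, samples=20000):
    """<T_f, phi> = integral_a^b f(x) phi(x) dx via midpoint rule;
    the test function is assumed supported in [a, b]."""
    h = (b - a) / samples
    return sum(f(a + (i + 0.5) * h) * phi(a + (i + 0.5) * h) * h
               for i in range(samples))

sign = lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
# modifying f on a null set (here: one point) yields the same T_f
sign2 = lambda x: 42.0 if x == 0.25 else sign(x)

phi = lambda x: x  # stand-in for a test function, for illustration only
print(pairing(sign, phi))   # integral_{-1}^{1} sign(x) * x dx = 1.0
print(pairing(sign2, phi))  # same value: T_sign = T_sign2 in D'(U)
```

The two pairings agree because the midpoint samples miss the modified point, mirroring the fact that sets of measure zero are invisible to the Lebesgue integral defining $T_f$.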
In a similar manner, every Radon measure $\mu$ on $U$ defines an element of $\mathcal{D}'(U)$ whose value on the test function $\varphi$ is $\int \varphi \, d\mu$. As above, it is conventional to abuse notation and write the pairing between a Radon measure $\mu$ and a test function $\varphi$ as $\langle \mu, \varphi \rangle$. Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure.

Test functions as distributions

The test functions are themselves locally integrable, and so define distributions. The space of test functions $C^\infty_c(U)$ is sequentially dense in $\mathcal{D}'(U)$ with respect to the strong topology on $\mathcal{D}'(U)$. This means that for any $T \in \mathcal{D}'(U)$, there is a sequence of test functions $(\varphi_i)$ that converges to $T$ (in its strong dual topology) when considered as a sequence of distributions; or equivalently, $\langle \varphi_i, \psi \rangle \to \langle T, \psi \rangle$ for all $\psi \in \mathcal{D}(U)$. Furthermore, $C^\infty_c(U)$ is also sequentially dense in the strong dual space of $C^\infty(U)$.

Distributions with compact support

The inclusion map $\operatorname{In} : C^\infty_c(U) \to C^\infty(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^t \operatorname{In} : (C^\infty(U))'_b \to \mathcal{D}'(U)$ is also a continuous injection. Thus the image of the transpose, denoted by $\mathcal{E}'(U)$, forms a space of distributions when it is endowed with the strong dual topology of $(C^\infty(U))'_b$ (transferred to it via the transpose map, so the topology of $\mathcal{E}'(U)$ is finer than the subspace topology that this set inherits from $\mathcal{D}'(U)$). The elements of $\mathcal{E}'(U)$ can be identified as the space of distributions with compact support.
Explicitly, if $T$ is a distribution on $U$, then the following are equivalent: (1) $T \in \mathcal{E}'(U)$; (2) the support of $T$ is compact; (3) the restriction of $T$ to $C^\infty_c(U)$, when that space is equipped with the subspace topology inherited from $C^\infty(U)$ (a coarser topology than the canonical LF topology), is continuous; (4) there is a compact subset $K$ of $U$ such that for every test function $\varphi$ whose support is completely outside of $K$, we have $T(\varphi) = 0$. Compactly supported distributions define continuous linear functionals on the space $C^\infty(U)$; recall that the topology on $C^\infty(U)$ is defined such that a sequence of test functions $(\varphi_k)$ converges to 0 if and only if all derivatives of $\varphi_k$ converge uniformly to 0 on every compact subset of $U$. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from $C^\infty_c(U)$ to $C^\infty(U)$.

Distributions of finite order

Let $k \in \mathbb{N}$. The inclusion map $\operatorname{In} : C^\infty_c(U) \to C^k_c(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^t \operatorname{In} : (C^k_c(U))'_b \to \mathcal{D}'(U)$ is also a continuous injection. Consequently, the image of ${}^t \operatorname{In}$, denoted by $\mathcal{D}'^k(U)$, forms a space of distributions when it is endowed with the strong dual topology of $(C^k_c(U))'_b$ (transferred to it via the transpose map, so $\mathcal{D}'^k(U)$'s topology is finer than the subspace topology that this set inherits from $\mathcal{D}'(U)$). The elements of $\mathcal{D}'^k(U)$ are the distributions of order $\leq k$. The distributions of order $\leq 0$, which are also called distributions of order 0, are exactly the distributions that are Radon measures (described above). For $0 \neq k \in \mathbb{N}$, a distribution of order $k$ is a distribution of order $\leq k$ that is not a distribution of order $\leq k - 1$. A distribution is said to be of finite order if there is some integer $k$ such that it is a distribution of order $\leq k$, and the set of distributions of finite order is denoted by $\mathcal{D}'^F(U)$. Note that if $k \leq l$ then $\mathcal{D}'^k(U) \subseteq \mathcal{D}'^l(U)$, so that $\mathcal{D}'^F(U)$ is a vector subspace of $\mathcal{D}'(U)$, and a distribution belongs to $\mathcal{D}'^F(U)$ if and only if it belongs to $\mathcal{D}'^k(U)$ for some $k$.

Structure of distributions of finite order

Every distribution with compact support in $U$ is a distribution of finite order.
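A standard example: the $k$-th derivative of the Dirac delta, $\langle \delta^{(k)}, \varphi \rangle = (-1)^k \varphi^{(k)}(0)$, is a compactly supported distribution of order $k$, and it satisfies the seminorm bound from the characterization above with $N = k$. The sketch below (my own illustration, plain Python, with derivatives approximated by central finite differences) evaluates $\langle \delta', \varphi \rangle = -\varphi'(0)$ and verifies the order-1 bound $|\langle \delta', \varphi \rangle| \leq \sup |\varphi'|$.

```python
import math

def delta_prime(phi, h=1e-5):
    """<delta', phi> = -phi'(0), via a central finite difference."""
    return -(phi(h) - phi(-h)) / (2 * h)

# a rapidly decaying smooth function standing in for a test function
phi = lambda x: math.sin(3 * x) * math.exp(-x * x)

val = delta_prime(phi)  # -phi'(0) = -3
sup_phi_prime = max(abs((phi(x + 1e-6) - phi(x - 1e-6)) / 2e-6)
                    for x in [i * 0.001 for i in range(-4000, 4001)])
print(val, sup_phi_prime)  # the order-1 bound |val| <= sup|phi'| holds
```

Because the bound needs the first derivative of $\varphi$ (and no sup of $\varphi$ alone suffices), $\delta'$ has order exactly 1, illustrating the strict inclusion $\mathcal{D}'^0(U) \subsetneq \mathcal{D}'^1(U)$.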
Indeed, every distribution in $U$ is locally a distribution of finite order, in the following sense: if $V$ is an open and relatively compact subset of $U$ and if $\rho_{VU}$ is the restriction mapping from $U$ to $V$, then the image of $\mathcal{D}'(U)$ under $\rho_{VU}$ is contained in $\mathcal{D}'^F(V)$. The following is the theorem of the structure of distributions of finite order, which shows that every distribution of finite order can be written as a sum of derivatives of Radon measures:

Example. (Distributions of infinite order) Let $U := (0, \infty)$ and for every test function $\varphi$, let $S(\varphi) := \sum_{m=1}^\infty (\partial^m \varphi)\left(\tfrac{1}{m}\right).$ Then $S$ is a distribution of infinite order on $U$. Moreover, $S$ cannot be extended to a distribution on $\mathbb{R}$; that is, there exists no distribution $T$ on $\mathbb{R}$ such that the restriction of $T$ to $U$ is equal to $S$.

Tempered distributions and Fourier transform

Defined below are the tempered distributions, which form a subspace of $\mathcal{D}'(\mathbb{R}^n)$, the space of distributions on $\mathbb{R}^n$. This is a proper subspace: while every tempered distribution is a distribution and an element of $\mathcal{D}'(\mathbb{R}^n)$, the converse is not true. Tempered distributions are useful if one studies the Fourier transform, since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in $\mathcal{D}'(\mathbb{R}^n)$.

Schwartz space

The Schwartz space $\mathcal{S}(\mathbb{R}^n)$ is the space of all smooth functions that are rapidly decreasing at infinity along with all partial derivatives. Thus $\varphi$ is in the Schwartz space provided that any derivative of $\varphi$, multiplied with any power of $|x|$, converges to 0 as $|x| \to \infty$. These functions form a complete TVS with a suitably defined family of seminorms. More precisely, for any multi-indices $\alpha$ and $\beta$, define: $p_{\alpha, \beta}(\varphi) := \sup_{x \in \mathbb{R}^n} \left| x^\alpha \partial^\beta \varphi(x) \right|.$ Then $\varphi$ is in the Schwartz space if all the values satisfy $p_{\alpha, \beta}(\varphi) < \infty$. The family of seminorms $p_{\alpha, \beta}$ defines a locally convex topology on the Schwartz space; these seminorms are, in fact, norms on the Schwartz space. One can also use the following family of seminorms to define the topology: $|f|_{m, k} := \sup_{|\beta| \leq m} \sup_{x \in \mathbb{R}^n} \left| (1 + |x|)^k (\partial^\beta f)(x) \right|, \qquad k, m \in \mathbb{N}.$ Otherwise, one can define a norm on the $k$-th stage of this filtration via $\|\varphi\|_k := \max_{|\alpha| + |\beta| \leq k} p_{\alpha, \beta}(\varphi), \qquad k \geq 1.$ The Schwartz space is a Fréchet space (i.e. a complete metrizable locally convex space).
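The seminorms $p_{\alpha, \beta}$ above can be evaluated numerically to distinguish Schwartz functions from merely smooth, decaying ones. The sketch below is my own illustration (plain Python, $n = 1$, $\beta = 0$, suprema approximated on a finite grid, so it is only suggestive of the true sup over all of $\mathbb{R}$): the Gaussian keeps $p_{\alpha, 0}$ bounded for every $\alpha$, while $1/(1+x^2)$ fails already at $\alpha = 3$, since $|x^3/(1+x^2)| \sim |x|$ is unbounded.

```python
import math

def schwartz_seminorm(f, alpha, xs):
    """Approximate p_{alpha,0}(f) = sup_x |x^alpha f(x)| on a finite grid."""
    return max(abs(x ** alpha * f(x)) for x in xs)

xs = [i * 0.01 for i in range(-5000, 5001)]  # grid on [-50, 50]

gauss = lambda x: math.exp(-x * x)        # a Schwartz function
lorentz = lambda x: 1.0 / (1.0 + x * x)   # smooth and bounded, but NOT Schwartz

for alpha in (0, 1, 2, 3):
    print(alpha,
          schwartz_seminorm(gauss, alpha, xs),    # stays small for all alpha
          schwartz_seminorm(lorentz, alpha, xs))  # grows with the grid once alpha >= 3
```

Extending the grid leaves the Gaussian column essentially unchanged but drives the Lorentzian column to infinity for $\alpha \geq 3$, which is exactly the failure of $p_{3,0} < \infty$.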
Because the Fourier transform changes differentiation into multiplication by the coordinate and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function. A sequence $\{f_i\}$ in $\mathcal{S}(\mathbb{R}^n)$ converges to 0 in $\mathcal{S}(\mathbb{R}^n)$ if and only if the functions $(1 + |x|)^k (\partial^p f_i)(x)$ converge to 0 uniformly in the whole of $\mathbb{R}^n$, which implies that such a sequence must converge to zero in $C^\infty(\mathbb{R}^n)$. $\mathcal{D}(\mathbb{R}^n)$ is dense in $\mathcal{S}(\mathbb{R}^n)$. The subset of all analytic Schwartz functions is dense in $\mathcal{S}(\mathbb{R}^n)$ as well. The Schwartz space is nuclear, and the tensor product of two maps induces a canonical surjective TVS-isomorphism $\mathcal{S}(\mathbb{R}^m) \,\widehat{\otimes}\, \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^{m+n}),$ where $\widehat{\otimes}$ represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product).

Tempered distributions

The inclusion map $\operatorname{In} : \mathcal{D}(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^n)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^t \operatorname{In} : (\mathcal{S}(\mathbb{R}^n))'_b \to \mathcal{D}'(\mathbb{R}^n)$ is also a continuous injection. Thus, the image of the transpose map, denoted by $\mathcal{S}'(\mathbb{R}^n)$, forms a space of distributions when it is endowed with the strong dual topology of $(\mathcal{S}(\mathbb{R}^n))'_b$ (transferred to it via the transpose map, so the topology of $\mathcal{S}'(\mathbb{R}^n)$ is finer than the subspace topology that this set inherits from $\mathcal{D}'(\mathbb{R}^n)$). The space $\mathcal{S}'(\mathbb{R}^n)$ is called the space of tempered distributions. It is the continuous dual space of the Schwartz space. Equivalently, a distribution $T$ is a tempered distribution if and only if it extends to a continuous linear functional on $\mathcal{S}(\mathbb{R}^n)$. The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of $L^p(\mathbb{R}^n)$ for $p \geq 1$ are tempered distributions. The tempered distributions can also be characterized as slowly growing, meaning that each derivative of $T$ grows at most as fast as some polynomial.
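The claim above that the Fourier transform maps Schwartz functions to Schwartz functions can be illustrated numerically with the Gaussian, which the Fourier transform maps to itself. The sketch below is my own illustration in plain Python; the convention $(F\varphi)(\xi) = \int \varphi(x) e^{-2\pi i x \xi}\,dx$ and the truncation of the integral to $[-8, 8]$ are assumptions of the sketch (the Gaussian's tails beyond that range are negligible).

```python
import cmath
import math

def fourier(phi, xi, a=-8.0, b=8.0, samples=40000):
    """(F phi)(xi) = integral phi(x) e^{-2 pi i x xi} dx, midpoint rule.
    The 2*pi-in-the-exponent convention is an assumption of this sketch."""
    h = (b - a) / samples
    return sum(phi(a + (i + 0.5) * h)
               * cmath.exp(-2j * cmath.pi * (a + (i + 0.5) * h) * xi) * h
               for i in range(samples))

gauss = lambda x: math.exp(-math.pi * x * x)

# With this convention, F maps this Gaussian to itself: (F gauss)(xi) = exp(-pi xi^2)
for xi in (0.0, 0.5, 1.0):
    print(xi, fourier(gauss, xi).real, math.exp(-math.pi * xi * xi))
```

The value at $\xi = 0$ is $\int e^{-\pi x^2}\,dx = 1$; the match at every $\xi$ reflects the fixed-point property of this Gaussian, consistent with $F : \mathcal{S} \to \mathcal{S}$ being an automorphism (used below to define $F$ on tempered distributions by transposition).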
This characterization is dual to the behaviour of the derivatives of a function in the Schwartz space, where each derivative of $\varphi$ decays faster than every inverse power of $|x|$. An example of a rapidly falling function is $|x|^n e^{-\lambda |x|^\beta}$ for any positive $n$, $\lambda$, $\beta$.

Fourier transform

To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform $F : \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^n)$ is a TVS-automorphism of the Schwartz space, and the Fourier transform of tempered distributions is defined to be its transpose ${}^t F : \mathcal{S}'(\mathbb{R}^n) \to \mathcal{S}'(\mathbb{R}^n)$, which (abusing notation) will again be denoted by $F$. So the Fourier transform of the tempered distribution $T$ is defined by $(FT)(\psi) = T(F\psi)$ for every Schwartz function $\psi$. $FT$ is thus again a tempered distribution. The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself. This operation is compatible with differentiation, in the sense that $F\dfrac{dT}{dx} = ixFT$, and also with convolution: if $T$ is a tempered distribution and $\psi$ is a slowly increasing smooth function on $\mathbb{R}^n$, then $\psi T$ is again a tempered distribution and $F(\psi T) = F\psi * FT$ is the convolution of $FT$ and $F\psi$. In particular, the Fourier transform of the constant function equal to 1 is the $\delta$ distribution.

Expressing tempered distributions as sums of derivatives

If $T \in \mathcal{S}'(\mathbb{R}^n)$ is a tempered distribution, then there exists a constant $C > 0$ and positive integers $M$ and $N$ such that for all Schwartz functions $\varphi \in \mathcal{S}(\mathbb{R}^n)$: $$|\langle T, \varphi \rangle| \leq C \sum_{|\alpha| \leq N,\, |\beta| \leq M} \sup_{x \in \mathbb{R}^n} \left| x^\beta \partial^\alpha \varphi(x) \right|.$$ This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function $f$ and a multi-index $\alpha$ such that $T = \partial^\alpha f$.

Restriction of distributions to compact sets

If $T \in \mathcal{D}'(\mathbb{R}^n)$, then for any compact set $K \subseteq \mathbb{R}^n$, there exists a continuous function $f$ compactly supported in $\mathbb{R}^n$ (possibly on a larger set than $K$ itself) and a multi-index $\alpha$ such that $T = \partial^\alpha f$ on $K$.

Tensor product of distributions

Let $U \subseteq \mathbb{R}^m$ and $V \subseteq \mathbb{R}^n$ be open sets. Assume all vector spaces to be over the field $\mathbb{F}$, where $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$. For $f \in C(U \times V)$, define for every $u \in U$ and every $v \in V$ the functions $f_u : V \to \mathbb{F}$, $y \mapsto f(u, y)$, and $f^v : U \to \mathbb{F}$, $x \mapsto f(x, v)$. Given $S \in \mathcal{D}'(U)$ and $T \in \mathcal{D}'(V)$, define the functions $\langle S, f^{\bullet} \rangle : V \to \mathbb{F}$, $v \mapsto \langle S, f^v \rangle$, and $\langle T, f_{\bullet} \rangle : U \to \mathbb{F}$, $u \mapsto \langle T, f_u \rangle$. These definitions associate every $S \in \mathcal{D}'(U)$ and $T \in \mathcal{D}'(V)$ with the (respective) continuous linear maps $C^\infty(U \times V) \to C^\infty(V)$, $f \mapsto \langle S, f^{\bullet} \rangle$, and $C^\infty(U \times V) \to C^\infty(U)$, $f \mapsto \langle T, f_{\bullet} \rangle$. Moreover, if either $S$ (resp.
) has compact support then it also induces a continuous linear map of (resp. denoted by or is the distribution in defined by: Schwartz kernel theorem The tensor product defines a bilinear map the span of the range of this map is a dense subspace of its codomain. Furthermore, Moreover induces continuous bilinear maps: where denotes the space of distributions with compact support and is the Schwartz space of rapidly decreasing functions. This result does not hold for Hilbert spaces such as and its dual space. Why does such a result hold for the space of distributions and test functions but not for other "nice" spaces like the Hilbert space ? This question led Alexander Grothendieck to discover nuclear spaces, nuclear maps, and the injective tensor product. He ultimately showed that it is precisely because is a nuclear space that the Schwartz kernel theorem holds. Like Hilbert spaces, nuclear spaces may be thought as of generalizations of finite dimensional Euclidean space. Using holomorphic functions as test functions The success of the theory led to investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example Feynman integrals. See also Notes References Bibliography . . . . . . . . . . . . Further reading M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals) V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis. . . . . . Functional analysis Generalized functions Generalizations of the derivative Smooth functions Topological vector spaces Schwartz distributions
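A quick numerical illustration of the facts above (a sketch added here, not part of the original article): the Gaussian $e^{-\pi x^2}$ is a Schwartz function, and under the convention $\widehat{f}(\xi) = \int f(x)\, e^{-2\pi i x\xi}\, dx$ it is its own Fourier transform; a discrete approximation with NumPy's FFT makes this visible:

```python
import numpy as np

# Sample the Gaussian exp(-pi x^2) on a centered grid; with the convention
# F(xi) = integral of f(x) exp(-2*pi*i*x*xi) dx, its Fourier transform is itself.
n, L = 4096, 40.0                      # grid points, domain half-width (arbitrary choices)
dx = 2 * L / n
x = (np.arange(n) - n // 2) * dx
f = np.exp(-np.pi * x**2)

# Approximate the continuous transform by a Riemann sum via the FFT;
# fftshift/ifftshift line up x = 0 and xi = 0 with the DFT's origin.
xi = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) * dx

# The transform matches the same Gaussian to near machine precision.
err = np.max(np.abs(np.abs(F) - np.exp(-np.pi * xi**2)))
print("max deviation from exp(-pi xi^2):", err)
```

The grid size and half-width are arbitrary; any sufficiently fine and wide grid reproduces the Gaussian to near machine precision.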
Spaces of test functions and distributions
[ "Mathematics" ]
8,579
[ "Functions and mappings", "Functional analysis", "Vector spaces", "Mathematical objects", "Space (mathematics)", "Topological vector spaces", "Mathematical relations" ]
67,790,275
https://en.wikipedia.org/wiki/Anna%20Romanowska
Anna B. Romanowska is a Polish mathematician specializing in abstract algebra. She is professor emeritus of algebra and combinatorics at the Warsaw University of Technology, and was the first convenor of European Women in Mathematics. Education and career Romanowska earned her Ph.D. in 1973 at the Warsaw University of Technology. Her dissertation, Toward an Algebraic Study of the Tone System, was supervised by . She became the first convenor of European Women in Mathematics, for 1993–1994. Books Romanowska is the coauthor of three books on abstract algebra with Jonathan D. H. Smith: Modal theory: an algebraic approach to order, geometry, and convexity (Heldermann, 1985) Post-modern algebra (Wiley, 1999) Modes (World Scientific, 2002) References External links Year of birth missing (living people) Living people Polish women mathematicians 20th-century Polish mathematicians 21st-century Polish mathematicians Algebraists Warsaw University of Technology alumni Academic staff of the Warsaw University of Technology
Anna Romanowska
[ "Mathematics" ]
202
[ "Algebra", "Algebraists" ]
67,792,931
https://en.wikipedia.org/wiki/FM%20extended%20band%20in%20Brazil
In Brazil, the FM extended band (), abbreviated eFM, refers to the extension of the FM broadcast band to between 76.1 and 87.3 MHz, below the conventional band of 87.5 to 108 MHz. The reclaimed spectrum was previously used to broadcast analog television channels 5 and 6 before the country's digital television transition. The first eFM stations began broadcasting on May 7, 2021, and the spectrum is being used as part of a plan to migrate AM stations to the FM band. History The idea of converting the former channels 5 and 6 for sound broadcasting use had first been floated in Brazil in 2013, as a method to support AM stations by migrating them to FM; that year, President Dilma Rousseff signed a law that started the AM–FM migration process in Brazil. Since then, 1,720 of the country's 1,781 AM outlets have requested migration, including in areas where no further FM stations could be added. Jovem Pan News in São Paulo was allowed by the Ministry of Communications to conduct tests on 84.7 MHz in 2014. In 2017, a decree was issued that required all new radios produced in the Free Economic Zone of Manaus beginning on January 1, 2019, to support tuning the extended band. By 2019, some makers of new automobiles, including Ford and Hyundai, and stereo manufacturer Pioneer Corporation were producing radios that supported the new band. Necessary regulatory changes by the National Telecommunications Agency (ANATEL) came into effect on November 3, 2020. The new frequencies will support AM–FM migration in parts of Brazil where there is insufficient room to migrate stations on the standard band alone, which is the case in 14 states. However, they will not be accessible on all radio receivers, including smartphones, if these cannot be updated or replaced. On May 7, 2021, the first ten stations began broadcasting on the extended band. Five, all on 87.1 MHz, are owned by the public broadcaster Brazil Communication Company (EBC). 
Four of those five are being used to rebroadcast Rádio Nacional's AM service, while the fifth has been designated to Rádio MEC in Brasília, which already had Rádio Nacional AM on the FM band. The other five stations are existing AM stations. It is planned that future highway advisory radio services use an eFM channel; only one such service exists in Brazil, . References Radio in Brazil Bandplans Broadcast engineering 2013 establishments in Brazil 2019 establishments in Brazil 2021 establishments in Brazil
FM extended band in Brazil
[ "Engineering" ]
508
[ "Broadcast engineering", "Electronic engineering" ]
67,793,103
https://en.wikipedia.org/wiki/Emmonsiosis
Emmonsiosis, also known as emergomycosis, is a systemic fungal infection that can affect the lungs, almost always affects the skin, and can become widespread. The lesions in the skin look like small red bumps and patches with a dip, ulcer and dead tissue in the centre. It is caused by the Emergomyces species, a novel dimorphic fungus, previously classified under the genus Emmonsia. These fungi are found in soil and transmitted by breathing in their spores from the air. Inside the body it converts to yeast-like cells which then cause disease and invade beyond the lungs. Diagnosis is by skin biopsy and its appearance under the microscope. It is difficult to distinguish from histoplasmosis. Treatment is usually with amphotericin B. Emmonsiosis can be fatal. The disseminated type is more prevalent in South Africa, particularly in people with HIV. Signs and symptoms Generally, all cases have involvement of the skin. The lesions look like small red bumps and patches with a dip, ulcer and dead tissue in the centre. There may be several lesions and their distribution can be widespread. The lungs may be affected. Cause It is caused by the Emergomyces species, a novel dimorphic fungus, previously classified under the genus Emmonsia. Following a revised taxonomy in 2017 based on DNA sequence analyses, five of these Emmonsia-like fungi have been placed under the separate genus Emergomyces. These include Emergomyces pasteurianus, Emergomyces africanus, Emergomyces canadensis, Emergomyces orientalis and Emergomyces europaeus. Emergomyces africanus was previously known as Emmonsia africanus, which has similar features to Histoplasma spp. and the family Ajellomycetaceae. The disease has been observed among people who have a weakened immune system, and risk factors include HIV, organ transplant and steroid use. Mechanism The fungus is found in soil and is released in the air. Transmission is by breathing in fungal spores from the air. 
Inside the body it converts to yeast-like cells which then cause disease and invade beyond the lungs. In people with HIV, Emmonsiosis has been associated with Immune reconstitution inflammatory syndrome following initiating antiretroviral treatment. Diagnosis Diagnosis is by skin biopsy and its appearance under the microscope. Differential diagnosis Generally, it is difficult to distinguish from histoplasmosis. Other conditions that appear similar include tuberculosis, blastomycosis, sporotrichosis, chicken pox, Kaposi's sarcoma and drug reactions. Treatment Treatment usually includes amphotericin B. Prognosis It can be fatal. Epidemiology The disseminated type is more prevalent in South Africa, particularly in people with HIV. History The disease was thought to be a rare condition of the lung. Early cases may have been misdiagnosed as histoplasmosis. Other animals The genus Emmonsia can cause adiaspiromycosis, a lung disease in wild animals. References Mycosis-related cutaneous conditions Rare diseases Rare infectious diseases Fungal diseases
Emmonsiosis
[ "Biology" ]
663
[ "Fungi", "Fungal diseases" ]
59,882,447
https://en.wikipedia.org/wiki/List%20of%20protein%20tandem%20repeat%20annotation%20software
Computational methods use different properties of protein sequences and structures to find, characterize and annotate protein tandem repeats. Sequence-based annotation methods Structure-based annotation methods References
List of protein tandem repeat annotation software
[ "Biology" ]
40
[ "Protein tandem repeats", "Protein classification" ]
59,882,661
https://en.wikipedia.org/wiki/The%20Science%20of%20Managing%20Our%20Digital%20Stuff
The Science of Managing Our Digital Stuff is a book about personal information management (PIM) written by Ofer Bergman and Steve Whittaker. It was published in 2016 by MIT Press. The book examines why and how individuals organize their personal digital information, as well as how new PIM systems can make this process more efficient. It has three parts: Personal Information Management: The Curation Perspective, Hierarchical Folders and Their Alternatives, and The User-Subjective Approach to PIM Systems Design. In his review for the Journal of the Association for Information Science and Technology, William Jones found the book to be a comprehensive overview of Bergman and Whittaker's research work, but felt it was missing key details that would provide readers with a broader and more nuanced understanding of PIM. Dorothy Waugh's review, published in the American Archivist, praised the book as an "excellent introduction" to PIM, but noted that the studies cited by the authors were somewhat dated. References 2016 non-fiction books Information management MIT Press books
The Science of Managing Our Digital Stuff
[ "Technology" ]
214
[ "Information systems", "Information management" ]
59,885,583
https://en.wikipedia.org/wiki/Journal%20of%20Alloys%20and%20Compounds
The Journal of Alloys and Compounds is a peer-reviewed scientific journal covering experimental and theoretical approaches to materials problems that involve compounds and alloys. It is published by Elsevier and the editors-in-chief are Hongge Pan and Livio Battezzati. It was the first journal established to focus specifically on a group of inorganic elements. History The journal was established by William Hume-Rothery in 1958 as the Journal of the Less-Common Metals, focussing on the chemical elements of the actinide and lanthanide series of the periodic table. The lanthanides are sometimes referred to as the rare earths. The journal was not strictly limited to articles about those specific elements: it also included papers about the preparation and use of other elements and alloys. The journal developed out of an international symposium on metals and alloys above 1200 °C which Hume-Rothery organized at Oxford University on September 17–18, 1958. The conference included more than 100 participants from several countries. The papers presented at the symposium "The study of metals and alloys above 1200°C" were published as volume 1 of the journal. It was the first journal dealing specifically with a category of inorganic elements. The title of "Less-Common Metals" was something of a misnomer, since these metals are actually found fairly commonly, but in small amounts. The journal obtained its current name in 1991 and is considered a particularly rich source of information on hydrogen-metal systems. Retractions In 2017, Elsevier was reported to be retracting 3 papers from the journal, which was one of several to be affected by falsified reviews, which led to a broader discussion of the processes for reviewing journal articles. 
Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Current Contents/Physical, Chemical & Earth Sciences Science Citation Index Scopus According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.371. References External links Materials science journals Academic journals established in 1958 Rare earth elements English-language journals Elsevier academic journals
Journal of Alloys and Compounds
[ "Materials_science", "Engineering" ]
420
[ "Materials science journals", "Materials science" ]
59,885,652
https://en.wikipedia.org/wiki/C2orf16
C2orf16 is a protein that in humans is encoded by the C2orf16 gene. Isoform 2 of this protein (NCBI ID: CAH18189.1 henceforth referred to as C2orf16) is 1,984 amino acids long. The gene contains 1 exon and is located at 2p23.3. Aliases for C2orf16 include Open Reading Frame 16 on Chromosome 2 and P-S-E-R-S-H-H-S Repeats Containing Sequence. 68 orthologs are known for this gene, including in mice and sheep, but no paralogs have been found. Gene The C2orf16 isoform 2 is a 6.2 kb, 1 exon gene at locus 2p23.3, and contains P-S-E-R-S-H-H-S repeats on the C-terminal side of the gene from amino acid 1,559 to 1,903. These repeats appear to have arisen from a transposable element. Primates show more P-S-E-R-S-H-H-S repeats than other mammalian orthologs do. Expression C2orf16 is found to be highly expressed in the testes and a retinoic acid and mitogen-treated human embryonic stem cell line, but is not known to be expressed differently in age or disease phenotypes. C2orf16 is also seen to have high expression in the pre-implantation embryo from the 4-cell embryo stage to the blastocyst stage. C2orf16 is not seen to have rapamycin sensitive expression. C2orf16 is also seen to significantly increase expression in c-MYC knockdown breast cancer cells. mRNA Isoforms Two isoforms exist of C2orf16. Isoform 1 is 5,388 amino acids long encoded in 5 exons over 16,401 base pairs. Isoform 2 uses an alternate start site of transcription and is considerably shorter at 1,984 amino acids long encoded in 1 exon over 6,200 base pairs. Expression Regulation One miRNA is predicted to bind to the 3'UTR of C2orf16, accession number MI0005564. Protein C2orf16 has a predicted molecular weight of 224kD and a predicted isoelectric point of 10.08, values that are relatively constant between orthologs. The protein includes higher than average composition of serine, histidine, and arginine and a lower than average composition of alanine. 
Compositional Features A positive charge cluster is found from amino acid residues 1,274 to 1,302. An arginine rich region is found from amino acids 1,545 to 1,933, a serine rich region is found from amino acids 1,568 to 1,934, and a histidine rich region is found from amino acids 1,630 to 1,853. A dot matrix analysis reveals a heavily repeated region from approximately residue 1,500 to 1,984, this being the P-S-E-R-S-H-H-S repeat. A small band of dots at approximately amino acid 1,200 denotes a half repeat of the P-S-E-R-S-H-H-S sequence. C2orf16 isoform 2 has no transmembrane domains, and is predicted to be localized to the nucleus after translation due to two nuclear localization sequences predicted at residues 1,233 and 1,281. No nuclear export sequence is conserved amongst orthologs, suggesting C2orf16 is not meant to leave the nucleus after import. No N- or C- terminal modifications were predicted. Sub-cellular Localization C2orf16 is predicted to be localized to the nucleus after translation. Structure The 3D structure of C2orf16 is predicted to have three major domains. Domain 1 is from amino acids 1 to 662, domain 2 is from amino acids 674 to 1,487, and domain 3 is from amino acids 1,488 to 1,984. Domains 1 and 2 are predicted to be connected via a stretch of 12 amino acids not otherwise organized into a secondary structure, allowing flexibility between domains 1 and 2. Domain 2 is predicted to have protein interacting domains for transcription factors. Domain 3 is predicted to follow a "balls on a string" structure and has many sites for possible phosphorylation. Protein Interactions C2orf16 has been shown to have a physical interaction with proto-oncogene Myc by tandem affinity purification. Ortholog Phylogeny 68 orthologs are known for C2orf16. The protein seems to have appeared in the mammalian evolutionary history 320 million years ago, around the divergence of mammals from reptiles. 
This history would explain why orthologs do not exist in amphibians, reptiles, birds, or other more distantly related species. Any orthologs from species more distant from humans than other mammals are likely not related in function; however, the P-S-E-R-S-H-H-S repeat is present in bony fishes, crustaceans, stramenopiles including potato blight, plantae, and prokaryotes. The transposon repeat may have been reintroduced to mammals by a viral vector. Repeat Sequence The P-S-E-R-S-H-H-S repeat sequence is seen to be conserved in orthologs for C2orf16, and is conserved in organisms as distantly related as oomycete slime mold and plants including the chloroplasts of Ashby's Wattle. The S-P-S-E-R portion of the repeat is seen to be the most important for conservation, as seen by alignment with these orthologs and by creation of a sequence logo. The conservation analysis of the repeat shows the initial S-P-S is highly conserved, possibly for phosphorylation (S) and structure (P), and the R is almost completely conserved, mutating to a lysine in some orthologs, implying the positive charge is necessary for the purpose of the repeat. The 3D shape of the repeat sequence is unclear as it has been predicted to be either balls-on-a-string or an antiparallel beta-sheet structure. Function C2orf16 isoform 2 is predicted to have a possible function in mitosis regulation through its nuclear localization, predicted transcription factor binding site, physical association with Myc, and increased expression in c-MYC knockdown breast cancer cells. Clinical Significance There are four patents on record for C2orf16, one each involving: cancerous PPP2RIA and ARID1A mutations, Alzheimer's predisposition, viral vaccine diversity, and copy number variation relation to common variable immunodeficiency. 
C2orf16 is also shown to have increased expression in some breast cancer lines, as well as being involved with Myc which is a common oncogene, making C2orf16 a possible oncogene to target in cancer treatments. References Proteins
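As a side note on the P-S-E-R-S-H-H-S repeat discussed above, tandem copies of such a unit are easy to locate programmatically. The sketch below runs on an invented toy sequence, not the real C2orf16 sequence:

```python
import re

# Unit of the tandem repeat described in the article.
repeat = "PSERSHHS"

# Toy sequence (invented for illustration): three tandem copies of the unit,
# plus a partial S-P-S-E-R fragment that should NOT count as a full copy.
sequence = "MKT" + repeat * 3 + "AQL" + "SPSER" + "GG"

# Find maximal runs of back-to-back full repeat units.
for m in re.finditer(f"(?:{repeat})+", sequence):
    n_units = (m.end() - m.start()) // len(repeat)
    print(f"{n_units} tandem copies at residues {m.start() + 1}-{m.end()}")
    # -> 3 tandem copies at residues 4-27
```

A regex run like this only finds exact tandem copies; the degenerate, diverged repeats that dot-matrix analysis reveals would need an alignment-based tool instead.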
C2orf16
[ "Chemistry" ]
1,460
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
59,885,692
https://en.wikipedia.org/wiki/Lists%20of%20telescopes
This is a list of lists of telescopes. List of astronomical interferometers at visible and infrared wavelengths List of astronomical observatories List of highest astronomical observatories List of large optical telescopes List of largest infrared telescopes List of largest optical telescopes historically List of largest optical telescopes in the 18th century List of largest optical telescopes in the 19th century List of largest optical telescopes in the 20th century List of largest optical reflecting telescopes List of largest optical refracting telescopes List of optical telescopes List of proposed space observatories List of radio telescopes List of solar telescopes List of space telescopes List of telescopes of Australia List of largest optical telescopes in the British Isles List of telescope parts and construction List of telescope types List of the largest optical telescopes in North America List of X-ray space telescopes See also Lists of astronauts Lists of astronomical objects List of government space agencies List of planetariums Lists of space scientists Lists of spacecraft External references Telescope History , NASA Official Website, accessed 02/09/2019 History of the Telescope, accessed 02/09/2019 List of astronomical observatories and telescopes, Encyclopedia Britannica, 02/09/2019 Major Space Telescopes, Space.com, By Andrea Thompson, 05/18/2009 A list of space telescopes, PHYSICS4ME, accessed 02/29/2019 The Biggest Telescopes In The World, World Atlas, accessed 02/29/2019
Lists of telescopes
[ "Astronomy" ]
280
[ "Astronomy-related lists", "Lists of telescopes" ]
59,886,546
https://en.wikipedia.org/wiki/Popov%20criterion
In nonlinear control and stability theory, the Popov criterion is a stability criterion discovered by Vasile M. Popov for the absolute stability of a class of nonlinear systems whose nonlinearity must satisfy an open-sector condition. While the circle criterion can be applied to nonlinear time-varying systems, the Popov criterion is applicable only to autonomous (that is, time invariant) systems. System description The sub-class of Lur'e systems studied by Popov is described by: $\dot{x} = Ax + bu, \qquad \dot{\xi} = u, \qquad y = cx + d\xi, \qquad u = -\Phi(y),$ where x ∈ R^n; ξ, u, y are scalars; and A, b, c and d have commensurate dimensions. The nonlinear element Φ: R → R is a time-invariant nonlinearity belonging to the open sector (0, ∞), that is, Φ(0) = 0 and yΦ(y) > 0 for all y not equal to 0. Note that the system studied by Popov has a pole at the origin and there is no direct pass-through from input to output, and the transfer function from u to y is given by $G(s) = \frac{d}{s} + c(sI - A)^{-1}b.$ Criterion Consider the system described above and suppose A is Hurwitz, (A,b) is controllable, (A,c) is observable, d > 0 and Φ ∈ (0,∞); then the system is globally asymptotically stable if there exists a number r > 0 such that $\inf_{\omega \in \mathbb{R}} \operatorname{Re}\left[(1 + j\omega r)\,G(j\omega)\right] > 0.$ See also Circle criterion References Nonlinear control Stability theory
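The frequency-domain condition Re[(1 + jωr)G(jω)] > 0 can be checked numerically for a concrete system. The sketch below uses an invented example (A Hurwitz; not a system from the article) and sweeps a frequency grid looking for an r that makes the Popov expression positive everywhere:

```python
import numpy as np

# Hypothetical Lur'e-type example (invented for illustration): A is Hurwitz,
# and G(s) = d/s + c (sI - A)^{-1} b is the transfer function from u to y.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
b = np.array([[1.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = 1.0

def G(w):
    """Frequency response G(jw) of the system above."""
    s = 1j * w
    return d / s + (c @ np.linalg.solve(s * np.eye(2) - A, b)).item()

# Popov test: look for r > 0 with Re[(1 + jwr) G(jw)] > 0 on a frequency grid.
w = np.logspace(-3, 3, 2000)
Gw = np.array([G(wi) for wi in w])
for r in (0.1, 0.5, 1.0, 2.0):
    if np.all(np.real((1 + 1j * w * r) * Gw) > 0):
        print(f"Popov condition satisfied with r = {r}")
        break
```

A grid sweep like this is only suggestive, not a proof: the infimum over all ω must be established analytically (or with interval methods) before concluding global asymptotic stability.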
Popov criterion
[ "Mathematics" ]
288
[ "Stability theory", "Dynamical systems" ]
59,886,688
https://en.wikipedia.org/wiki/Tiziana%20Di%20Matteo%20%28econophysicist%29
Tiziana Di Matteo is a Professor of Econophysics at King's College London. She studies complex systems, such as financial markets, and complex materials (such as superconductors). She serves on the council of the Complex Systems Society. Education and early career Di Matteo graduated cum laude from the University of Salerno in 1994. She was an Erasmus student at Queen Mary University of London. She remained at the University of Salerno for her graduate studies, completing her PhD on Josephson junctions networks in 1999. After her PhD, she became interested in the data sets of real financial markets. Selected publications Awards and honours Di Matteo was a QEII Fellow at the Australian National University. She joined the Department of Mathematics at King's College London in 2009. She has used the generalised Hurst approach to study the foreign exchange market and stock markets. In 2014 she was made a Professor of Econophysics at King's College London. Econophysics uses the statistical methods of physics to analyse financial markets. She was appointed to the Council of the Complex Systems Society in 2018. Di Matteo is the editor-in-chief of the Journal of Network theory in Finance. She also serves as editor for the European Physical Journal B. She was elected to the Academia Europaea in 2024. References External links Home page Year of birth missing (living people) Living people Applied physicists Academic journal editors Members of Academia Europaea Alumni of Queen Mary University of London University of Salerno alumni Academics of King's College London Women physicists
Tiziana Di Matteo (econophysicist)
[ "Physics" ]
313
[ "Applied and interdisciplinary physics", "Applied physicists" ]
59,888,094
https://en.wikipedia.org/wiki/Alfred%20Maddock
Alfred Gavin Maddock (1917–2009) was an English inorganic chemist, radiochemist and spectroscopist who worked on the Tube Alloys Project and the Manhattan Project during World War II. Those projects resulted in the development of the atomic bomb. He may be best known for, during World War II, spilling Canada's entire supply of plutonium (10 milligrams) onto a wooden laboratory bench, and for then recovering 9.5 milligrams of it. He also had a distinguished, though less eventful, post-war academic career. Biography Maddock was born in Bedford Park, a garden suburb of London, and was educated at Latymer Upper School. He won a state scholarship to study chemistry at the Royal College of Science (RCS), then a constituent part of Imperial College London. After his undergraduate education, he continued on to postgraduate studies at RCS under the supervision of inorganic chemist Professor H. J. Emeléus. Those studies related to silicon hydrides, and he was awarded his PhD in 1942. During the early years of World War II, his other studies included methods of protection against arsine, which had been proposed as a chemical warfare agent. He also studied the toxicity of volatile compounds of fluorine, which resulted in his suffering an acute case of poisoning. He and Lord Rothschild devised a device based on mercuric chloride which was used by Allied parachutists in France. In 1941, he got to know several French nuclear physicists from the Curie Institute in Paris who had escaped the Nazi invasion. He initially worked with them at the Cavendish Laboratory in Cambridge, and later moved with them and others to Ottawa, Canada, where he helped build a heavy water reactor, as part of what was first known as the Tube Alloys Project and later as the Manhattan Project. It was during that time that he spilled Canada's supply of plutonium (about 10 mg) onto a wooden laboratory bench. 
He pragmatically sawed it into pieces, ashed them, and recovered the plutonium by wet chemistry. After the War, he returned to England; was appointed lecturer in the Department of Chemistry at the University of Cambridge, where Emeléus now occupied the chair of inorganic chemistry; and was elected Fellow of St Catharine's College. He had a broad range of scientific interests, which included: the chemistry of the actinide elements, in particular plutonium and protactinium; the chemistry associated with nuclear transformation; solvent extraction; radiation of inorganic solids; the chemistry of positronium ions; and Mössbauer spectroscopy, in which he was a pioneer. He was consultant to the International Atomic Energy Agency, and to atomic energy projects in various countries. He published more than 300 scientific papers. Honours and awards These include:
1960 – Awarded DSc and a personal readership by the University of Cambridge
Awarded DSc honoris causa by the University of Louvain, Belgium
Elected to the Brazilian Academy of Sciences, which in 1995 awarded him its Grand Cross of the Order of Merit in Science
From 1981 – President of St Catharine's College, Cambridge
1996 – Awarded the Becquerel Medal of the Royal Society of Chemistry
Notes References Further reading and external links The Papers of Alfred Maddock held at the Churchill Archives Centre ; from the Periodic Videos series, dated 5 September 2008; in which Professor Sir Martyn Poliakoff, who studied under Maddock, recounts the laboratory bench anecdote. 1917 births 2009 deaths People from the London Borough of Ealing People educated at Latymer Upper School Alumni of Imperial College London Alumni of the Royal College of Science Fellows of St Catharine's College, Cambridge Members of the Brazilian Academy of Sciences 20th-century English chemists Inorganic chemists Spectroscopists
Alfred Maddock
[ "Physics", "Chemistry" ]
788
[ "British inorganic chemists", "Spectrum (physical sciences)", "Physical chemists", "Analytical chemists", "Inorganic chemists", "Spectroscopists", "Spectroscopy" ]
59,888,206
https://en.wikipedia.org/wiki/Trans%20International%20Airlines%20Flight%20863
Trans International Airlines Flight 863 was a ferry flight from John F. Kennedy International Airport in New York City to Washington Dulles International Airport. On September 8, 1970, the Douglas DC-8 (registration N4863T) crashed during take-off from JFK's runway 13R. None of the 11 occupants, who were all crew members, survived. The probable cause of the accident was an asphalt-covered object lodged in between the right elevator and the right horizontal stabilizer, that jammed the elevator and caused the loss of pitch control. Aircraft and crew The aircraft involved was a Douglas DC-8-63CF, built in 1968. The aircraft was powered by four Pratt and Whitney JT3D-7 engines. The aircraft had 7,878 hours at the time of the accident. The captain was 49-year-old Joseph John May, who had 22,300 flight hours, including 7,100 hours on the DC-8. Other TIA pilots referred to him as "Ron". The first officer was 47-year-old John Donald Loeffler, who had 15,775 flight hours, with 4,750 of them on the DC-8. The flight engineer was 42-year-old Donald Kenneth Neely, who had 10,000 flight hours, including 3,500 hours on the DC-8. Eight flight attendants also were on board. Accident At 16:04 (EST), the aircraft was cleared to take off from JFK Airport runway 13R. The take-off roll commenced one minute later. The takeoff was unusually slow, with rotation occurring down the runway. Due to the slow rotation, a tailstrike occurred and the tail skidded on the runway for . The cockpit voice recorder (CVR) recorded the sound of the tailstrike. At 16:05:35, Captain May said, "let's take it off," with First Officer Loeffler replying, "can't control this thing, Ron." The aircraft became airborne at down the runway. About 2 seconds after take off, the stick-shaker activated, warning the flight crew that the aircraft was in danger of stalling. The aircraft pitched 60–90° nose-up, rising only above the ground. 
The aircraft then rolled 20° to the right, then sharply to the left, and stalled in a nose-down position. The aircraft crashed into the ground at 16:05:52. The aircraft exploded and burst into flames on impact, killing the crew. Investigation The National Transportation Safety Board (NTSB) investigated the crash. The accident was labeled as "nonsurvivable." While examining the wreckage, investigators discovered a foreign object lodged in between the right elevator and the right horizontal stabilizer. The NTSB determined that this jammed the elevator and caused the loss of pitch control, but could not determine how the object got lodged in between the two surfaces, though one scenario stated that the object was blown in by wake turbulence from the aircraft that took off before Flight 863. The NTSB published its final report on August 18, 1971, with the "probable cause" section stating: The Board determines that the probable cause of this accident was a loss of pitch control caused by the entrapment of a pointed, asphalt-covered object between the leading edge of the right elevator and the right horizontal spar web access door in the aft part of the stabilizer. The restriction to elevator movement, caused by a highly unusual and unknown condition, was not detected by the crew in time to reject the take off successfully; however, an apparent lack of crew responsiveness to a highly unusual emergency situation, coupled with the captain's failure to monitor adequately the take off, contributed to the failure to reject the take off. Aftermath After the accident, the Federal Aviation Administration instituted new time minima between aircraft in line-up for take off. 
See also Emery Worldwide Airlines Flight 17 – another accident involving a DC-8 freighter also involving problems with the right elevator References External links Aviation accidents and incidents in the United States in 1970 1970 in New York City 1970s in Queens Airliner accidents and incidents in New York City Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents caused by pilot error Accidents and incidents involving the Douglas DC-8 September 1970 events in the United States Accidents and incidents involving cargo aircraft
Trans International Airlines Flight 863
[ "Materials_science" ]
886
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
59,888,223
https://en.wikipedia.org/wiki/Intelligent%20transformation
Intelligent transformation is the process of deriving better business and societal outcomes by leveraging smart devices, big data, artificial intelligence, and cloud technologies. Intelligent transformation can help firms gain recognition from external investors, thereby enhancing their market image and attracting larger customers who are more eager to collaborate. It can also foster the development of more interactive and multidimensional value-creation models while optimizing the conventional organizational model. Process Intelligent transformation takes place where devices and data center infrastructure work together to create end-to-end solutions. It addresses the needs of a customer and improves the performance of individual vertical industries. This takes place by leveraging big data analytics, machine learning, cloud computing, edge computing, and artificial intelligence. Intelligent transformation typically involves three capabilities. On the front end, there need to be smart devices, or sensors and modules in the field, to generate the information to be analyzed, a process called Smart Internet of Things (SIoT). On the back end, data center infrastructure processes the information and, through algorithms, generates patterns and insights. This is called smart infrastructure. The final capability, called smart vertical, takes place once the data is applied to the specific use cases to be addressed. The more use cases a vendor is able to address, the more quickly it will likely become the leading provider of intelligent solutions to a particular industry. Use cases and industry recognition There are multiple use cases for smart manufacturing. In the healthcare industry, intelligent transformation can help to develop the next generation of radiology tools and help surgeons create more precise analytics for pathology images. 
For example, advanced machine learning methods can achieve more accurate demand forecasts in certain scenarios. Predix Asset Performance Management from General Electric is designed to optimize the performance of assets; its goal is to increase reliability and availability while minimizing costs. Microsoft incorporates intelligent transformation in its Surface Hub 2 digital whiteboard for the smart office by integrating hardware and software solutions. Features developed through intelligent transformation include a 4K camera for Skype, four-screen tiling, and incorporation of collaboration tools such as Windows, Office, and Skype. Intelligent transformation is used by Lenovo in various products such as smart speakers, smart watches and smart displays, which use various AI technologies. Smart devices include the Yoga S940 smart PC by Lenovo, which uses AI technologies to detect user attention and protect work privacy by automatically adjusting the display background. Smart infrastructure includes ThinkAgile Software Defined Infrastructure, which is optimized for a variety of workloads and designed to provide more efficient resource allocation to support business growth. An example of a vertical use case is DaystAR for remote monitoring of the airline maintenance process, which has been applied to manufacturing and aviation. Intelligent transformation is also used by LiveTiles to create employee- and customer-facing chatbots powered by Microsoft's natural language technology. Amazon Go is another example; it uses computer vision, sensor fusion and deep learning to detect when products are taken off the shelf and then places them in a "virtual shopping cart" for checkout. See also Intelligent maintenance system References External links Global Tech Forum 2018: Intelligent Transformation - Fortune Technology forecasting Data management
Intelligent transformation
[ "Technology" ]
628
[ "Data management", "Data" ]
59,889,536
https://en.wikipedia.org/wiki/Katharina%20Heinroth
Katharina Bertha Charlotte Heinroth, née Berger (4 February 1897 – 20 October 1989), was a German zoologist and a director of the Berlin Zoo, succeeding her husband Oskar Heinroth, from 1945 to 1956. She was born in Breslau and died in Berlin. Life and work Katharina Berger was born among four siblings in Breslau. As a child she grew up in the village of Wohwitz west of Breslau, where she kept frogs and other animals at home and observed the growth of butterflies. She later noted that this was significant in deciding her future interests and career. She went to the secondary lyceum in Breslau, followed by studies in zoology, botany and geology at Munich. She graduated in 1923 summa cum laude with work on hearing in reptiles under Otto Köhler. She moved to Munich in 1925, where she lived with Gustav Adolf Rösch, an assistant to Karl von Frisch. She too worked with Frisch on bees. She married Rösch in 1928 but they divorced in a few years. She then moved to Halle and worked at the Leopoldina Academy library. She then moved to Berlin, where she married Oskar Heinroth in 1933. When he died in 1945, and with the flight of the previous Nazi-era director, Lutz Heck (whose father Ludwig Heck had also worked at the Berlin zoo), she was put in charge as scientific director of the Berlin zoo and helped restore it from the damage of the war. Of the zoo's 4000 animals, only 91 remained at the end of the war. She earned the nickname of "Katharina die Einzige" ("the one and only Katharina"). She specialized in animal behaviour and was especially skilled in raising birds. From 1953, she also lectured on zoology at Technische Universität Berlin. She travelled widely on work that involved adding animals to the zoo. She raised private donations for acquiring new animals from around the world. On a trip to Uganda, she noted that African marabous did not fly off when approached by humans, owing to the protection given to them. 
Together with Oskar, she studied pigeon behaviour and navigation. A major work on the birds of central Europe, Mitteleuropäische Vögel (1962), was written by Katharina Heinroth along with J. Steinbacher, with art by Franz Murr. Apart from numerous scientific and popular writings, she also wrote a biography of Oskar in 1971 and an autobiography in 1979. She retired as director of Berlin zoo in 1956 and was succeeded by Heinz-Georg Klös. Honors and tributes In 1989, Katharina Heinroth received the Urania-Medal conferred by Urania (Berlin). The elementary school "Katharina-Heinroth-Grundschule" ("Katharina Heinroth Elementary School") in Berlin was named after her in 2000. Every year, the Berlin Society of Friends of Natural Science awards students from Berlin's universities the "Katharina-Heinroth-Preis" ("Katharina Heinroth Award"). The award is conferred for outstanding bachelor's and master's theses or independent research projects in the field of the life sciences. Recipients of this honor receive €300 alongside a two-year membership in the society. References 20th-century German women scientists 1989 deaths 1897 births Officers Crosses of the Order of Merit of the Federal Republic of Germany Recipients of the Order of Merit of Berlin German ornithologists Zookeepers Ethologists 20th-century German zoologists
Katharina Heinroth
[ "Biology" ]
738
[ "Ethology", "Behavior", "Ethologists" ]
59,889,616
https://en.wikipedia.org/wiki/Qiu%20Dahong
Qiu Dahong (; 6 April 1930 – 11 January 2025) was a Chinese coastal and offshore engineer. He served as chief engineer of the Dalian Fishing Port, the New Dalian Port, the Qinhuangdao Petroleum Port, and many other projects. He was a professor at the Dalian University of Technology and directed the State Key Laboratory of Coastal and Offshore Engineering. He was elected an academician of the Chinese Academy of Sciences in 1991. Life and career Qiu was born on 6 April 1930 in Shanghai, Republic of China, with his ancestral home in Huzhou, Zhejiang. After graduating from the Department of Civil Engineering of Tsinghua University in 1951, he joined the Dalian University of Technology (DUT), where he worked under Professor Qian Lingxi and helped create China's first port and harbour engineering program. In 1958, Qiu was appointed chief engineer for the construction of Dalian Fishing Port at the age of 28. The port, designed to occupy an expanse of open water with docks for 300 fishing boats, was unprecedented in China in both scale and difficulty. When completed in 1966, it was Asia's largest fishing port. In 1987, Qiu again served as chief engineer for the port's expansion project, which was completed in 1989. In 1973, Qiu became chief engineer of the New Dalian Port, the first port in China capable of handling oil tankers with a displacement of 100,000 tons. It was opened in 1976, and won a national gold medal for its design. Qiu later led or participated in the design of the Qinhuangdao Petroleum Port, the Lianyungang Container Port, the Shenzhen Chiwan Port, the Hainan Petroleum Port, the Yamen Shipping Channel of the Pearl River estuary, and the Yangshan Port of Shanghai. Qiu served as Director of the State Key Laboratory of Coastal and Offshore Engineering at DUT and was elected an academician of the Chinese Academy of Sciences in 1991. In 1992, he was elected a Central Committee member of the Jiusan Society. On 11 January 2025, Qiu died in Dalian at the age of 94. 
Publications Qiu published more than 100 scientific papers and multiple monographs and textbooks. In addition to coastal and offshore engineering, he researched wave theory and conducted experiments to calculate the forces that sea waves exert on engineering structures. In 2011, China Ocean Press published The Collective Writings of Qiu Dahong (), which includes 93 scientific papers, 15 published articles on port construction projects, and 20 other articles. References 1930 births 2025 deaths Chinese civil engineers Coastal engineering Academic staff of Dalian University of Technology Engineers from Shanghai Members of the Chinese Academy of Sciences Members of the Jiusan Society Offshore engineering Nanyang Model High School alumni Tsinghua University alumni
Qiu Dahong
[ "Engineering" ]
568
[ "Construction", "Coastal engineering", "Civil engineering", "Offshore engineering" ]
59,890,641
https://en.wikipedia.org/wiki/Neuro-Information-Systems
Neuro-Information-Systems (NeuroIS) is a subfield of the information systems (IS) discipline, which relies on neuroscience and neurophysiological knowledge and tools to better understand the development, use, and impact of information and communication technologies. The field was formally established at the International Conference on Information Systems (ICIS) in 2007. Aims and scope Research evidence supports the idea that human behavior is influenced by individual factors (e.g., genetic predisposition) and environmental factors. These influences affect the brain (e.g., its structure and processing mechanisms), which subsequently impacts the way in which information is processed. By acknowledging this relationship of individual characteristics (e.g., experiences with e-commerce platforms that have led to changes in the brain due to learning processes), environmental influences (e.g., characteristics of an IT artifact such as the usability of an e-commerce platform) and human behavior (e.g., purchasing behavior in an e-commerce context), NeuroIS seeks to understand the internal processes that are involved in the formation of human behavior related to information systems. 
By applying theories and tools from neuroscience and related fields, NeuroIS strives to make a number of important contributions, including but not limited to: Inform the design of IT artifacts and IS investigations in general Introduce a biological level of analysis as mediator between IT artifact and IT behavior Shed light on theoretical mechanisms underlying the influence of the IT artifact on IT behavior Offer additional avenues for IT artifact evaluation (e.g., using brain activity) Enable the measurement of constructs that cannot be reliably measured using self-report techniques (e.g., questionnaires, interviews) Offer additional predictive power for certain outcome variables (e.g., user health) Enable investigations into how physiology (e.g., brain structure) is affected by the use of IT artifacts Offer additional input for adaptive systems (e.g., based on real-time assessments of physiological well-being) Offer additional input for users to reflect on their behavior (e.g., biofeedback) Offer additional input for human-computer interaction (e.g., brain-computer interfaces for physically impaired individuals) In applying theories and tools from neuroscience, NeuroIS also draws on other reference disciplines and shares a close connection with sister disciplines that have likewise added these theories and instruments to their set of methods. Reference disciplines and sister disciplines for NeuroIS include, but are not limited to: Neuropsychology and Cognitive Neuroscience Neuroeconomics, Decision Neuroscience and Social Neuroscience Neuromarketing and Consumer Neuroscience Neuroergonomics Affective Computing and Brain-Computer Interaction Data collection methods Two commonly used types of neurophysiological data collection methods are applied in NeuroIS research: psychophysiological tools, which capture activity related to the autonomic nervous system, and brain imaging tools, which capture activity in the central nervous system. 
Psychophysiological tools The most commonly used psychophysiological tools in NeuroIS include the measurement of eye gaze behavior and pupil dilation (eye tracking), the measurement of electrodermal activity (skin conductance response), the measurement of muscular activity (electromyography) and the measurement of heart-related activity (electrocardiogram). Brain imaging tools The main brain imaging tools used in NeuroIS include functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Conferences and groups Since 2009, an annual conference has taken place in Austria to support NeuroIS research. From 2009 to 2017 this conference was called the Gmunden Retreat on NeuroIS and took place in Gmunden, Austria. Since 2018, it has been called the NeuroIS Retreat and takes place in Vienna, Austria. In 2018, the NeuroIS Society was founded in Austria to further support the growth of the field and collaboration among NeuroIS researchers. References Information systems
Neuro-Information-Systems
[ "Technology" ]
827
[ "Information systems", "Information technology" ]
59,891,255
https://en.wikipedia.org/wiki/Dezful%20%28missile%29
Dezful () is a medium-range ballistic missile (MRBM) developed by Iran and unveiled in February 2019 in an underground missile factory. The Iranian armed forces said that the missile has a range of over 1,000 kilometers (620 miles). It carries a 600 or 700 kg warhead and has a circular error probable (CEP) of 5 meters. The missile can reach a speed of Mach 7 (8,643 km/h). Brigadier General Amir Ali Hajizadeh said this is an upgrade of the older Zolfaghar model, which had a range of 700 kilometers. See also Fateh-110 Zolfaghar (missile) Haj Qasem (missile) List of military equipment manufactured in Iran Science and technology in Iran Raad-500 (missile) Khorramshahr (missile) References Ballistic missiles of Iran Medium-range ballistic missiles of Iran Surface-to-surface missiles of Iran Guided missiles of Iran Theatre ballistic missiles
Dezful (missile)
[ "Astronomy" ]
200
[ "Rocketry stubs", "Astronomy stubs" ]
59,891,279
https://en.wikipedia.org/wiki/Thyrotroph%20Thyroid%20Hormone%20Sensitivity%20Index
The Thyrotroph Thyroid Hormone Sensitivity Index (abbreviated TTSI, also referred to as Thyrotroph T4 Resistance Index or TT4RI) is a calculated structure parameter of thyroid homeostasis. It was originally developed to provide a method for fast screening for resistance to thyroid hormone. Today it is also used to estimate the set point of thyroid homeostasis, especially to assess the dynamic thyrotropic adaptation of the anterior pituitary gland, including in non-thyroidal illness. How to determine TTSI Universal form The TTSI is calculated as TTSI = 100 · TSH · FT4 / lu from equilibrium serum or plasma concentrations of thyrotropin (TSH) and free T4 (FT4), where lu denotes the assay-specific upper limit of the reference interval for FT4 concentration. Reference ranges Short form Some publications use a simpler form of this equation that doesn't correct for the reference range of free T4. It is calculated as TT4RI = TSH · FT4. The disadvantage of this uncorrected version is that its numeric results are highly dependent on the assays used and their units of measurement. Biochemical associations In case of resistance to thyroid hormone, the magnitude of the TTSI depends on which nucleotide in the THRB gene is mutated, but also on the genotype of coactivators. A systematic investigation in mice demonstrated a strong association of the TT4RI with the genotypes of THRB and the steroid receptor coactivator (SRC-1) gene. Clinical significance The TTSI is used as a screening parameter for resistance to thyroid hormone due to mutations in the THRB gene, where it is elevated. It is also beneficial for assessing the severity of already confirmed thyroid hormone resistance, even on replacement therapy with L-T4, and for monitoring the pituitary response to substitution therapy with thyromimetics (e.g. TRIAC) in RTH Beta. In autoimmune thyroiditis the TTSI is moderately elevated. A large cohort study demonstrated the TTSI to be strongly influenced by genetic factors. 
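A minimal sketch of the calculation in Python, assuming the commonly published forms TTSI = 100 · TSH · FT4 / lu and TT4RI = TSH · FT4 (the function names and example values are illustrative, not from the original publications):

```python
def ttsi(tsh, ft4, ft4_upper_limit):
    """Universal form: corrected for the assay-specific upper FT4 reference limit (lu).

    tsh: thyrotropin concentration (e.g. mIU/L)
    ft4: free T4 concentration (e.g. pmol/L)
    ft4_upper_limit: upper limit of the assay's FT4 reference interval (same unit as ft4)
    """
    return 100.0 * tsh * ft4 / ft4_upper_limit

def tt4ri(tsh, ft4):
    """Short form: uncorrected, so results depend heavily on the assay and units used."""
    return tsh * ft4

# Illustrative values: TSH 2.0 mIU/L, FT4 15 pmol/L, assay upper limit 20 pmol/L
print(ttsi(2.0, 15.0, 20.0))  # 150.0
print(tt4ri(2.0, 15.0))       # 30.0
```

Because the short form omits the division by lu, two laboratories using different FT4 assays can report very different TT4RI values for the same patient, which is the disadvantage noted above.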
A variant of the TTSI that is not corrected for the upper limit of the FT4 reference range was shown to be significantly increased in offspring of long-lived siblings compared to their partners. Conversely, an elevated set point of thyroid homeostasis, as quantified by the TT4RI, is associated with a higher prevalence of metabolic syndrome and with several of the harmonized criteria of the International Diabetes Federation, including triglyceride and HDL concentrations and blood pressure. In certain phenotypes of non-thyroidal illness syndrome, especially in cases with concomitant sepsis, the TTSI is reduced. This reflects a reduced set point of thyroid homeostasis, as also experimentally predicted in rodent models of inflammation and sepsis. A negative correlation of the TTSI with the urinary excretion of certain phthalates suggests that endocrine disruptors may affect the central set point of thyroid homeostasis. See also Thyroid function tests Thyroid's secretory capacity Sum activity of peripheral deiodinases Jostel's TSH index Thyroid Feedback Quantile-based Index References Chemical pathology Blood tests Endocrine procedures Thyroidological methods Thyroid homeostasis Structure parameters of thyroid function Static endocrine function tests
Thyrotroph Thyroid Hormone Sensitivity Index
[ "Chemistry", "Biology" ]
666
[ "Biochemistry", "Blood tests", "Chemical pathology", "Structure parameters of thyroid function" ]
59,892,172
https://en.wikipedia.org/wiki/Neural%20style%20transfer
Neural style transfer (NST) refers to a class of software algorithms that manipulate digital images, or videos, in order to adopt the appearance or visual style of another image. NST algorithms are characterized by their use of deep neural networks for the sake of image transformation. Common uses for NST are the creation of artificial artwork from photographs, for example by transferring the appearance of famous paintings to user-supplied photographs. Several notable mobile apps use NST techniques for this purpose, including DeepArt and Prisma. This method has been used by artists and designers around the globe to develop new artwork based on existing style(s). History NST is an example of image stylization, a problem studied for over two decades within the field of non-photorealistic rendering. The first two example-based style transfer algorithms were image analogies and image quilting. Both of these methods were based on patch-based texture synthesis algorithms. Given a training pair of images–a photo and an artwork depicting that photo–a transformation could be learned and then applied to create new artwork from a new photo, by analogy. If no training photo was available, it would need to be produced by processing the input artwork; image quilting did not require this processing step, though it was demonstrated on only one style. NST was first published in the paper "A Neural Algorithm of Artistic Style" by Leon Gatys et al., originally released on arXiv in 2015, and subsequently accepted by the peer-reviewed CVPR conference in 2016. The original paper used a VGG-19 architecture that had been pre-trained to perform object recognition using the ImageNet dataset. In 2017, Google AI introduced a method that allows a single deep convolutional style transfer network to learn multiple styles at the same time. This algorithm permits style interpolation in real-time, even when done on video media. Mathematics This section closely follows the original paper. 
Overview The idea of Neural Style Transfer (NST) is to take two images—a content image p and a style image a—and generate a third image x that minimizes a weighted combination of two loss functions: a content loss L_content(p, x) and a style loss L_style(a, x). The total loss is a linear sum of the two: L_total(x) = α L_content(p, x) + β L_style(a, x). By jointly minimizing the content and style losses, NST generates an image that blends the content of the content image with the style of the style image. Both the content loss and the style loss measure the similarity of two images. The content similarity is the weighted sum of squared differences between the neural activations of a single convolutional neural network (CNN) on two images. The style similarity is the weighted sum of squared differences between the Gram matrices within each layer (see below for details). The original paper used a VGG-19 CNN, but the method works for any CNN. Symbols Let x be an image input to a CNN. Let F^l(x) be the N_l × M_l matrix of filter responses in layer l to the image x, where: N_l is the number of filters in layer l; M_l is the height times the width (i.e. number of pixels) of each filter map in layer l; F^l_ij(x) is the activation of the i-th filter at position j in layer l. A given input image is encoded in each layer of the CNN by the filter responses to that image, with higher layers encoding more global features, but losing details on local features. Content loss Let p be an original image. Let x be an image that is generated to match the content of p. Let P^l = F^l(p) be the matrix of filter responses in layer l to the image p. The content loss is defined as the squared-error loss between the feature representations of the generated image and the content image at a chosen layer l of the CNN: L_content(p, x, l) = (1/2) Σ_ij (F^l_ij(x) − P^l_ij)², where F^l_ij(x) and P^l_ij are the activations of the i-th filter at position j in layer l for the generated and content images, respectively. Minimizing this loss encourages the generated image to have similar content to the content image, as captured by the feature activations in the chosen layer. 
The total content loss is a linear sum of the content losses of each layer: L_content(p, x) = Σ_l v_l L_content(p, x, l), where the v_l are positive real numbers chosen as hyperparameters. Style loss The style loss is based on the Gram matrices of the generated and style images, which capture the correlations between different filter responses at different layers of the CNN: L_style(a, x) = Σ_l w_l E_l, where E_l = 1/(4 N_l² M_l²) Σ_ij (G^l_ij(x) − G^l_ij(a))². Here, G^l_ij(x) and G^l_ij(a) are the entries of the Gram matrices for the generated and style images at layer l. Explicitly, G^l_ij(x) = Σ_k F^l_ik(x) F^l_jk(x). Minimizing this loss encourages the generated image to have similar style characteristics to the style image, as captured by the correlations between feature responses in each layer. The idea is that activation pattern correlations between filters in a single layer capture the "style" on the order of the receptive fields at that layer. Similarly to the previous case, the w_l are positive real numbers chosen as hyperparameters. Hyperparameters In the original paper, a particular choice of hyperparameters was used. The style loss is computed with w_l = 1/5 for the outputs of layers conv1_1, conv2_1, conv3_1, conv4_1, conv5_1 in the VGG-19 network, and zero otherwise. The content loss is computed with weight 1 for conv4_2, and zero otherwise. The ratio α/β was on the order of 10⁻³ to 10⁻⁴. Training The image x is initially approximated by adding a small amount of white noise to the input image p and feeding it through the CNN. Then we successively backpropagate this loss through the network with the CNN weights fixed in order to update the pixels of x. After several thousand epochs of training, an x (hopefully) emerges that matches the style of a and the content of p. When implemented on a GPU, it takes a few minutes to converge. 
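The per-layer content and style losses described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the original implementation: in practice the filter-response matrices would come from a pre-trained CNN such as VGG-19, and the function names here are our own.

```python
import numpy as np

def gram_matrix(F):
    """Gram matrix of filter responses F (shape N_l x M_l): G[i, j] = sum_k F[i, k] * F[j, k]."""
    return F @ F.T

def content_loss(F, P):
    """Squared-error content loss between generated (F) and content (P) responses at one layer."""
    return 0.5 * np.sum((F - P) ** 2)

def style_layer_loss(F, A):
    """One layer's style contribution E_l, comparing Gram matrices of generated (F) and style (A) responses."""
    N, M = F.shape
    return np.sum((gram_matrix(F) - gram_matrix(A)) ** 2) / (4 * N**2 * M**2)
```

In the full algorithm these per-layer terms are summed with the layer weights and minimized over the pixels of the generated image by gradient descent, with the CNN weights held fixed.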
Subsequent work improved the speed of NST for images by using special-purpose normalizations. A paper by Fei-Fei Li et al. adopted a different regularized loss metric and an accelerated training method to produce results in real time (three orders of magnitude faster than Gatys's method). Their idea was to use not the pixel-based loss defined above but rather a "perceptual loss" measuring the differences between higher-level layers within the CNN. They used a symmetric convolution-deconvolution CNN. Training uses a similar loss function to the basic NST method but also regularizes the output for smoothness using a total variation (TV) loss. Once trained, the network may be used to transform an image into the style used during training, using a single feed-forward pass of the network. However, the network is restricted to the single style in which it has been trained. In a work by Chen Dongdong et al., the fusion of optical flow information into feedforward networks was explored in order to improve the temporal coherence of the output. Most recently, feature transform based NST methods have been explored for fast stylization that are not coupled to a single specific style and enable user-controllable blending of styles, for example the whitening and coloring transform (WCT). References Algorithms Deep learning
Neural style transfer
[ "Mathematics" ]
1,477
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
59,893,956
https://en.wikipedia.org/wiki/Net%20ecosystem%20production
Net ecosystem production (NEP) in ecology, limnology, and oceanography, is the difference between gross primary production (GPP) and net ecosystem respiration. Net ecosystem production represents all the carbon produced by plants in water through photosynthesis that does not get respired by animals, other heterotrophs, or the plants themselves. Overview Net ecosystem production describes the total carbon in an ecosystem that can be stored, exported, or oxidized back into carbon dioxide gas. NEP is written in units of mass of carbon per unit area per time, for example, grams carbon per square meter per year (g C m−2 yr−1). In a given ecosystem, carbon quantified as net ecosystem production can eventually end up: oxidized by fire or ultraviolet radiation, accumulated as biomass, exported as organic carbon to another system, or accumulated in sediments or soils. Carbon classified as NEP can be in the form of particles in the particulate organic carbon (POC) pool such as phytoplankton cells (living) and detritus (non-living), or it can be in the form of dissolved substances that have not yet been decomposed in the dissolved organic carbon (DOC) pool. In any form, if the carbon gets respired or decomposed by any living organism (plant, animal, bacteria, or other microscopic organism) to release carbon dioxide, that carbon no longer counts as NEP. NEP = GPP - respiration [by plants] - respiration [by animals and other heterotrophs] Net ecosystem production vs. net primary production Net ecosystem production is all the carbon not respired, including respiration by plants and heterotrophic organisms such as animals and microbes. In contrast, net primary production (NPP) is all the carbon taken up by plants (autotrophs) minus the carbon that the plants themselves respire through cellular respiration. 
NPP = GPP - respiration [by plants] Net community production Net community production (NCP) is the difference between net primary production and respiration by animals and heterotrophs only. Net community production is equal to net ecosystem production; it is merely calculated differently. NCP = NPP - respiration [by animals and other heterotrophs] Annual net community production (ANCP) is this carbon pool estimated per year. For example, annual net community production in the tropical South Pacific Ocean can be very close to zero, meaning that basically all carbon produced is respired by heterotrophs. In the rest of the Pacific Ocean, annual net community production can range from 2.0 to 2.4 mol C m−2 yr−1, meaning that carbon produced by phytoplankton (minus what the phytoplankton respire themselves) is greater during a given year than what gets respired by heterotrophs. See also Primary production Ecosystem respiration Biological pump Oceanic carbon cycle Lake metabolism f-ratio Biomass (ecology) Biogeochemical cycle References Plants Ecology Biological oceanography Marine biology Limnology
Net ecosystem production
[ "Biology" ]
635
[ "Ecology", "Plants", "Marine biology" ]
59,894,017
https://en.wikipedia.org/wiki/Malonoben
Malonoben (also known as tyrphostin A9, SF-6847, GCP5126, and AG-17) is an uncoupling agent/protonophore. As of 1974 when it was discovered, it was considered the most powerful agent of this type, with a potency over 1800 times that of 2,4-dinitrophenol - the prototypical uncoupling agent - and about 3 times the effectiveness of 5-chloro-3-tert-butyl-2'-chloro-4'-nitrosalicylanilide. References Uncouplers Ionophores Nitriles Tert-butyl compounds Phenols
Malonoben
[ "Chemistry" ]
150
[ "Cellular respiration", "Functional groups", "Organic compounds", "Nitriles", "Organic compound stubs", "Organic chemistry stubs", "Uncouplers" ]
59,895,168
https://en.wikipedia.org/wiki/Memory%20T%20cell%20inflation
Memory T cell inflation is the phenomenon of the formation and maintenance of a large population of specific CD8+ T cells in reaction to cytomegalovirus (CMV) infection. Cytomegalovirus (CMV) CMV is a worldwide virus which affects 60–80% of the human population in developed countries. The virus is spread through saliva or urine and in healthy individuals can survive under immune system control without any visible symptoms. The CMV life strategy is to integrate its DNA into the genome of the host cells and escape the mechanisms of natural immunity. Infection The immune response against CMV is primarily provided by CD8+ T cells, which recognize viral fragments in the MHC class I complex on the surface of infected cells and destroy these cells. Specific CD8+ T cells are generated in secondary lymphoid organs, where naïve T cells encounter cytomegalovirus antigen on antigen-presenting cells. This results in a population of migrating effector CD8+ T lymphocytes and a second, small population called central memory T cells that remains in the secondary lymphoid organs and the bone marrow. These cells are capable of responding and proliferating immediately after repeated pathogen recognition. The number of memory cells generated as a response to cytomegalovirus is approximately 9.1%–10.2% of all circulating CD4+ and CD8+ memory cells. Memory CD8+ T lymphocyte characteristics Generally, these cells express low levels of the lymph node localization markers CD62L and CCR7 and occur in peripheral organs. They retain their standard functions such as cytokine production and cytotoxicity. They do not express costimulatory molecules (CD28) or the inhibitory receptor PD-1 on the surface, but they do express the inhibitory molecules KLRG1 and CD85. Immunosenescence Remodeling of the immune response and a reduced ability to protect individuals from infectious diseases are observed with age. 
Especially in the elderly, long-term CMV infection leads to a rapid increase in the number of CMV-specific T cells. The CMV-specific memory CD8+ T lymphocytes then predominate, and the total number of available naïve T lymphocytes decreases. CD8+ T cells form up to 50% of all peripheral blood memory cells in CMV-positive elderly individuals. The same effect on the immune system has been described for other herpesviruses and for parvoviruses. Use in therapy A potential therapeutic use of memory cells is vaccination based on the induction of memory T cells in the periphery that will be capable of attacking the pathogen effectively and immediately. References Betaherpesvirinae Immunology T cells
Memory T cell inflation
[ "Biology" ]
552
[ "Immunology" ]
59,895,747
https://en.wikipedia.org/wiki/Vandi%20Verma
Vandana "Vandi" Verma is a space roboticist and chief engineer at NASA's Jet Propulsion Laboratory, known for driving the Mars rovers, notably Curiosity and Perseverance, using software including the PLEXIL programming technology that she co-wrote and developed. Biography Verma was born and grew up partly in Halwara, India; her father was a pilot in the Indian Air Force. She gained her first qualification, a bachelor's degree in electrical engineering, at Punjab Engineering College in Chandigarh, India. She went on to gain a master's in robotics from Carnegie Mellon University (CMU), followed by a PhD in robotics from Carnegie Mellon in 2005, with a thesis entitled Tractable Particle Filters for Robot Fault Diagnosis. At CMU, she developed an interest in robotics in unknown environments. She was involved in a 3-year astrobiology experimental station in the Atacama Desert. The desert was chosen because of the similarities between its hostile environment and the surface of Mars. She won a competition to create a robot to navigate a maze and collect balloons. She tested robotic technologies in the Arctic and Antarctic. Between studies, she gained her pilot's license. Her first post-graduate job was at Ames Research Center as a research scientist. In 2006, Verma co-wrote PLEXIL, an open-source programming language now used in automation technologies such as the NASA K10 rover, the Mars Curiosity rover's percussion drill, the International Space Station, the Deep Space Habitat and Habitat Demonstration Unit, the Edison Demonstration of Smallsat Networks, LADEE, and the Autonomy Operating System (AOS). In 2007 Verma joined NASA's Jet Propulsion Laboratory (JPL) with a special interest in robotics and flight software, and became part of the Mars rover team in 2008. As of 2019, she leads JPL's Autonomous Systems, Mobility and Robotic Systems group.
Verma has written academic papers in her field on subjects such as the AEGIS (Autonomous Exploration for Gathering Increased Science) targeting system, NASA Lunar rover operation, and robot fault detection, an area in which she has worked consistently. Verma helped develop flight and flight simulation software systems used by the Mars 2020 rover. Verma frequently participates in JPL's open house events at the lab and online as a science communicator to encourage children (and particularly girls) into STEM careers. Mars robotics Verma has worked on NASA's Mars Exploration Rover projects since 2008 and has operated all three rovers: MER-A Spirit; MER-B Opportunity; and Mars Science Laboratory's Curiosity. Verma explains that in order to operate robotic spacecraft efficiently, the team must adjust to the sol, or Martian day, which is 24 hours, 39 minutes, and 35.244 seconds long, by beginning each working day about 40 minutes later. This kind of shift work involves covering the windows at home and work. As of 2018, there have been approximately 12 rover drivers. She explains how driving the rover is an extremely slow operation, since commands can take from 4 to 20 minutes to reach the device, so commands are usually performed first as a simulation, and multiple commands are uploaded at a time via NASA's Deep Space Network, relaying signals using the Mars Odyssey orbiter. Operating the rover involves a large team effort, with scientists performing experiments across different fields. A typical set of commands involves evaluating previous 3D images, developing a plan and route to maximize exploratory potential without risking the rover's safety (including using Curiosity's 2-meter robotic arm), choreographing and simulating moves, and then integrating each step of the sequence into a detailed set of instructions.
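The Mars-time scheduling described above can be sketched with simple date arithmetic. This is an illustrative calculation only: the 09:00 start time is a hypothetical assumption, while the sol length is the figure quoted in the text.

```python
from datetime import timedelta

# Length of a Martian sol, as quoted above: 24 h 39 min 35.244 s
SOL = timedelta(hours=24, minutes=39, seconds=35.244)
EARTH_DAY = timedelta(hours=24)

# Each Mars-time work "day" starts this much later on the Earth clock
shift = SOL - EARTH_DAY  # ~39 min 35 s, i.e. roughly 40 minutes

# Earth-clock start times for the first few sols, assuming a
# hypothetical 09:00 start on sol 0
start = timedelta(hours=9)
for sol in range(4):
    print(f"sol {sol}: shift starts at {start + sol * shift} (Earth clock)")
```

After about 36 sols the cumulative shift wraps all the way around the Earth clock, which is why Mars-time workers end up covering their windows to sleep during daylight.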
Awards Verma has received numerous awards for her teamwork, including: 2008 NASA Earth Science team award for the Intelligent Autonomy Technology Transition Team 2010 NASA Honors award to the MER Electro-mechanical Failure Mitigation Team 2013 NASA Honors Award to the MSL Motor Control Team 2013 NASA Honors Award to the MSL Surface Sampling and Science Systems Team 2013 NASA Honors Award to the MSL Testbed and Simulation Support Equipment Team 2014 NASA Software of the Year Award, awarded to the Mars Science Laboratory Flight Software Team 2016 MSL AEGIS Team Award 2017 MSL CHIMRA (Collection and Handling for In-Situ Martian Rock Analysis) Award for Tunnel Anomaly Recovery Other media In 2011 Verma appeared in and directed an episode of Nova ScienceNow called Can We Make It to Mars? Verma appears in the US Air Force documentary Science in the Extremes, series 3, episode 6, by Seeker, explaining her 2020 work on Mars' surface. In 2018 Finnish director Minna Långström made a documentary about Verma and her work with the Mars rover Curiosity titled The Other Side of Mars (original Finnish title Mars kuvien takaa). The film focuses on the way images are made, and their manipulation and use, which shapes our understanding of space and technology. In 2022 Verma appeared in Good Night Oppy, a full-length documentary film telling the story of Spirit and Opportunity and their 15-year mission. See also Adam Steltzner Anita Sengupta Bobak Ferdowsi List of missions to Mars References External links The Other Side of Mars film site with trailer "Can We Make It to Mars?"
Nova episode Verma explains NASA's vision for Mars 2020 mission (video on MSN) Discussing AI and Robotics Jet Propulsion Laboratory Mars Exploration Rover mission Mars Science Laboratory Mars 2020 American women computer scientists American computer scientists Indian emigrants to the United States Women in space Carnegie Mellon University alumni Robotics software Punjab Engineering College alumni Women from Punjab, India Ames Research Center Year of birth missing (living people) Living people
Vandi Verma
[ "Engineering" ]
1,149
[ "Robotics software", "Robotics engineering" ]
62,565,944
https://en.wikipedia.org/wiki/Polymerisation%20inhibitor
In polymer chemistry, polymerisation inhibitors (US: polymerization inhibitors) are chemical compounds added to monomers to prevent their self-polymerisation. Unsaturated monomers such as acrylates, vinyl chloride, butadiene and styrene require inhibitors for both processing and safe transport and storage. Many monomers are purified industrially by distillation, which can lead to thermally initiated polymerisation. Styrene, for example, is distilled at temperatures above 100 °C, whereupon it undergoes thermal polymerisation at a rate of ~2% per hour. This polymerisation is undesirable, as it can foul the fractionating tower; it is also typically exothermic, which can lead to a runaway reaction and potential explosion if left unchecked. Once initiated, polymerisation is typically radical in mechanism, and as such many polymerisation inhibitors act as radical scavengers. Inhibitors vs retarders The term 'inhibitor' is often used in a general sense to describe any compound used to prevent unwanted polymerisation; however, these compounds are often divided into 'retarders' and 'true inhibitors'. A true inhibitor has a well-defined induction period during which no noticeable polymerisation takes place. Inhibitors are consumed during this period, and once they are exhausted polymerisation occurs as normal. Retarders display no induction period but provide a permanent decrease in the rate of polymerisation, while themselves being degraded only slowly. Attempts have been made to define the difference quantitatively in terms of reaction rate. In an industrial setting, compounds from both classes will usually be used together, with the true inhibitor providing optimal plant performance and the retarder acting as a failsafe. Inhibitors for processing True inhibitors Radical polymerisation of unsaturated monomers is generally propagated by C-radicals. These can be effectively terminated by combining with other radicals to form neutral species, and many true inhibitors operate through this mechanism.
In the simplest example, oxygen can be used, as it exists naturally in its triplet state (i.e. it is a diradical). This is referred to as air inhibition and is a diffusion-controlled reaction, with rates typically on the order of 10⁷–10⁹ mol⁻¹ s⁻¹; the resulting peroxy radicals (ROO•) are less reactive towards polymerisation. However, air stabilisation is not suitable for monomers with which oxygen can form explosive peroxides, such as vinyl chloride. Other stable radicals include TEMPO and TEMPOL, which are exceedingly effective radical scavengers. Certain compounds marketed as true inhibitors, such as p-phenylenediamines, phenothiazine and hydroxylamines like HPHA and DEHA, are also thought to react through the intermediary of aminoxyl radicals. Not all inhibitors are radicals, however, with quinones and quinone methides being important examples. Retarders Certain hydroxylamines and p-phenylenediamines may act as retarders. For styrene, nitrophenol compounds such as dinitro-ortho-cresol and di-nitro-sec-butylphenol (DNBP) have long been important; however, they are coming under regulatory pressure due to their high toxicity. Inhibitors for transport & storage Purified monomers stored at ambient temperatures are at less risk of polymerising, and as such the most highly reactive inhibitors are rarely used at this stage. In general, compounds are chosen which can be easily removed immediately prior to industrial polymerisation to make plastics. Compounds bearing a hydroxy group, which can be removed by an alkali wash, tend to dominate. Examples include 4-tert-butylcatechol (TBC), 4-methoxyphenol (MEHQ), butylated hydroxytoluene (BHT), and hydroquinone (HQ). See also Anti-skinning agent - These agents prevent polymerisation in paints and varnishes by binding to, and thus inhibiting, the action of oil drying agents Tubulin polymerisation inhibitors - chemotherapy drugs that interfere with the tubulin system References Monomers
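The practical consequence of the ~2% per hour figure quoted above can be illustrated numerically. The sketch below treats that rate as a constant fractional conversion per hour, a simplifying assumption rather than a real kinetic model:

```python
# Fraction of styrene monomer remaining after t hours of uninhibited
# thermal polymerisation, treating the ~2%/hour figure quoted above as
# a constant fractional conversion per hour (a simplifying assumption).
RATE = 0.02  # fraction of remaining monomer converted per hour

def monomer_remaining(hours: float) -> float:
    return (1 - RATE) ** hours

for t in (1, 12, 24):
    print(f"after {t:2d} h: {monomer_remaining(t):.1%} of monomer remains")
```

Even under this crude model, a 24-hour run loses roughly 38% of the monomer to polymer, which illustrates why inhibitors are essential during distillation.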
Polymerisation inhibitor
[ "Chemistry", "Materials_science" ]
853
[ "Monomers", "Polymer chemistry" ]
62,566,206
https://en.wikipedia.org/wiki/AIOps
Artificial Intelligence for IT Operations (AIOps) is a practice that uses artificial intelligence and machine learning to enhance and automate various aspects of IT operations. It is designed to optimize IT environments by analyzing large volumes of data generated by complex IT systems, including system logs, performance metrics, and network data. AIOps aims to streamline IT workflows, predict potential issues, automate incident response, and ultimately improve the performance and efficiency of enterprise IT environments. Definition The term refers to the multi-layered, complex technology platforms which enhance and automate IT operations by using machine learning and analytics to analyze the large amounts of data collected from various ITOps devices and tools, automatically identifying and responding to issues in real time. AIOps requires a shift from isolated IT data to aggregated observational data (e.g., job logs and monitoring systems) and interaction data (such as ticketing, events, or incident records) within a big data platform. AIOps then applies machine learning and analytics to this data. The result is continuous visibility, which, combined with the implementation of automation, can lead to ongoing improvements. AIOps connects three IT disciplines—automation, service management, and performance management—to achieve continuous visibility and improvement. This approach to modern, accelerated, and hyperscaled IT environments leverages advances in machine learning and big data to overcome previous limitations. Keys AI can optimize IT operations in five key ways: First, intelligent monitoring powered by AI helps identify potential issues before they cause outages, improving metrics like Mean Time to Detect (MTTD) by 15-20%. Second, performance data analysis and insights enable quick decision-making by ingesting and analyzing large data sets in real time.
Third, AI-driven automated infrastructure optimization efficiently allocates resources and reduces cloud costs. Fourth, enhanced IT service management reduces critical incidents by over 50% through AI-driven end-to-end service management. Lastly, intelligent task automation accelerates problem resolution and automates remedial actions with minimal human intervention. AIOps vs. MLOps AIOps tools use big data analytics, machine learning algorithms, and predictive analytics to detect anomalies, correlate events, and provide proactive insights. This automation reduces the burden on IT teams, allowing them to focus on strategic tasks rather than routine operational issues. AIOps is widely used by IT operations teams, DevOps, network administrators, and IT service management (ITSM) teams to enhance visibility and enable quicker incident resolution in hybrid cloud environments, data centers, and other IT infrastructures. In contrast to MLOps (Machine Learning Operations), which focuses on the lifecycle management and operational aspects of machine learning models, AIOps focuses on optimizing IT operations using a variety of analytics and AI-driven techniques. While both disciplines rely on AI and data-driven methods, AIOps primarily targets IT operations, whereas MLOps is concerned with the deployment, monitoring, and maintenance of ML models. References Artificial intelligence Machine learning Artificial intelligence engineering
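As a toy illustration of the anomaly-detection step described above, the sketch below flags metric values that deviate strongly from the series mean. Real AIOps platforms use far more sophisticated models; the latency values and the z-score threshold here are hypothetical:

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return indices of points deviating from the mean by more than
    `threshold` standard deviations -- a toy stand-in for the anomaly
    detection performed by an AIOps pipeline."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * stdev]

# Hypothetical request-latency metric (ms) with one obvious spike
latency = [102, 98, 101, 99, 103, 100, 97, 450, 101, 99]
print(zscore_anomalies(latency))  # flags the spike at index 7
```

A single extreme outlier inflates the standard deviation and can mask smaller anomalies, which is one reason production systems favour rolling windows and robust statistics over a global z-score.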
AIOps
[ "Engineering" ]
618
[ "Artificial intelligence engineering", "Software engineering", "Machine learning" ]
62,567,443
https://en.wikipedia.org/wiki/List%20of%20Tesla%20factories
Tesla, Inc. operates plants worldwide for the manufacture of its products, including electric vehicles, lithium-ion batteries, solar shingles, chargers, automobile parts, manufacturing equipment and tools for its own factories, as well as a lithium ore refinery. The following is a list of current, future and former facilities. Current production facilities Future production facilities Former production facilities Note: Maxwell Technologies was acquired by Tesla in 2019 for their battery technology. Maxwell continued to operate as a subsidiary until 2021. Due to the short holding time and the absence of known products produced under Tesla, their production facilities are not listed above. Notes References External links Manufacturing official website, "Here's Where Tesla Produces Its Electric Cars Around the World", Newsweek, 2021-08-03 Tesla Tesla, Inc. Tesla Battery manufacturers Tesla, Inc.-related lists
List of Tesla factories
[ "Engineering" ]
164
[ "Electrical engineering", "Electrical-engineering-related lists" ]
62,567,637
https://en.wikipedia.org/wiki/Radio%20Spectrum%20Policy%20Group
The Radio Spectrum Policy Group (RSPG) is an advisory group, founded on 26 July 2002, that advises the European Commission on matters related to the radio spectrum. The group is made up of representatives from the European Commission and the member states of the European Union. The group focuses on dealing with the radio spectrum with regard to telecommunications, health and transportation. The group was re-formed on 11 June 2019 under the same name. References External links European Commission Radio spectrum
Radio Spectrum Policy Group
[ "Physics" ]
88
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
62,567,781
https://en.wikipedia.org/wiki/Radio%20Spectrum%20Policy%20Programme
The Radio Spectrum Policy Programme (RSPP) was a five-year programme which set out regulatory requirements, goals and priorities of the European Union relating to the radio spectrum. It was first adopted on 14 March 2012. It attempted to standardise the frequencies that different types of communication could use, and also set goals for when this standardisation should be complete. However, some member states did not meet certain goals laid out in the programme. A legislative review recommended implementing an adapted programme as legislation in a regulation, and so a modified version was incorporated into a proposed regulation. The legislation was supported by the European Parliament, but was subsequently removed after criticism from member states in the European Council. In 2015, the Radio Spectrum Policy Group said the programme had mostly met its goals. The modified version was then used as a basis for the section on the radio spectrum in the European Electronic Communications Code. History The programme was adopted by the European Council and the European Parliament on 14 March 2012. It was managed and created by the European Commission. The first version laid out goals and their timescales, which aimed to standardise the assignment of the radio spectrum across the EU. It also stipulated that the commission had to produce a report on what the programme had achieved by April 2014. Several member states failed to meet certain goals for a variety of reasons, which meant that the programme did not achieve standardisation in all member states early on. Several member states missed the target for the assignment of the 800 MHz band. In the year after its introduction, the European Commission initiated three different legislative reviews of the programme, with the third review proposing that the commission adopt the programme into a regulation. This was because not all of the member states met the goals on time.
Adapting it into a regulation would mean that, once adopted, it would be enforceable in member states without national legislation, ensuring that every member state met the goals on time. In 2013, the commission modified the programme and added the new version as legislation to a regulatory proposal. The European Parliament supported the legislation; however, member states in the European Council did not agree with the legislation due to its "intrusiveness into national prerogatives". The legislation was then removed by the council from the regulatory proposal. In 2015, the Radio Spectrum Policy Group said in their annual report that the programme's objectives had been mostly achieved. In 2016, the European Electronic Communications Code was created, which incorporated a section on the radio spectrum; this section was mostly based on the modified 2013 programme. The code, along with the section, was implemented in 2018. Aims of the programme Some of the goals of the programme included switching from analogue to digital broadcasting, the assignment of certain frequencies to mobile broadband throughout the EU, and making use of the freed radio spectrum space for wireless communication. The proposed legislation in 2013 aimed to phase out national differences in the allocation of the radio spectrum. References Works cited European Union law 2012 in law 2012 in the European Union Radio spectrum
Radio Spectrum Policy Programme
[ "Physics" ]
601
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
62,568,533
https://en.wikipedia.org/wiki/Malwina%20Luczak
Malwina J. Luczak is a mathematician specializing in probability theory and the theory of random graphs. She is Professor of Applied Probability and Leverhulme International Professor at the Department of Mathematics at the University of Manchester. Education and research Luczak grew up in Poland, and began her university studies at age 16 at the Nicolaus Copernicus University in Toruń, where she studied English philology. However, after her second year studying philology at Keele University in the UK, she decided to switch to mathematics, and enrolled at St Catherine's College, Oxford. After her first year's examinations, she was able to obtain scholarship support, continue her studies and remain at Oxford for doctoral work. She completed her D.Phil. in 2001 with a dissertation, Probability, algorithms and telecommunication systems, supervised by Colin McDiarmid and Dominic Welsh. She became an assistant lecturer at the Statistical Laboratory at the University of Cambridge and then a reader in mathematics at the London School of Economics. However, in 2010, after failing to receive an expected promotion to professor, she took a professorial chair position at the University of Sheffield and a five-year Engineering and Physical Sciences Research Council Leadership Fellowship. She moved again to Queen Mary University of London before taking a professorship in Melbourne in 2017. Most recently, in 2023 she joined the University of Manchester. Research Luczak's publications include research on the supermarket model in queueing theory, cores of random graphs, the giant component in random graphs with specified degree distributions, and the Glauber dynamics of the Ising model.
References External links Department of Mathematics, University of Manchester Year of birth missing (living people) Living people Polish women mathematicians 20th-century Polish mathematicians 21st-century Polish mathematicians Australian mathematicians Women mathematicians Probability theorists Graph theorists Alumni of St Catherine's College, Oxford Academics of the University of Cambridge Academics of the London School of Economics Academics of the University of Sheffield Academics of Queen Mary University of London Academic staff of the University of Melbourne Academics of the University of Manchester
Malwina Luczak
[ "Mathematics" ]
413
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
62,568,641
https://en.wikipedia.org/wiki/H4K5ac
H4K5ac is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates acetylation at the 5th lysine residue of the histone H4 protein. H4K5 is the closest lysine residue to the N-terminal tail of histone H4. It is enriched at the transcription start site (TSS) and along gene bodies. Acetylation of histones at H4K5 (H4K5ac) and H4K12 (H4K12ac) is enriched at centromeres. Nomenclature H4K5ac indicates acetylation of lysine 5 on the histone H4 protein subunit: Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as the one seen in H4K5ac. H4 histone H4 modifications are not as well characterised as those of H3, and H4 has fewer variants, which might explain their important function. H4K5ac H4K5 is acetylated by the TIP60 and CBP/p300 proteins. CBP/p300 opens transcription start site chromatin by acetylating histones. H4K5ac has also been implicated in epigenetic bookmarking, which allows gene expression patterns to be faithfully passed to daughter cells through mitosis. Important cell-type-specific genes are marked in some way that prevents them from being compacted during mitosis and ensures their rapid transcription. H4K5ac appears to prime activity-dependent genes expressed during learning. Lysine acetylation and deacetylation Proteins are typically acetylated on lysine residues, and this reaction relies on acetyl-coenzyme A as the acetyl group donor.
In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling. Epigenetic implications The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding.
Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. Analysis of the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence underscores the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Methods The histone mark acetylation can be detected in a variety of ways: 1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is used to identify nucleosome positioning. Well-positioned nucleosomes show enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation. Clinical significance H4K5ac is implicated in inflammatory bowel disease and Crohn's disease. See also Histone acetylation References Epigenetics Post-translational modification
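The "DNA enrichment" that ChIP-seq measures, as described in point 1 above, is essentially a comparison of read counts between the immunoprecipitated sample and an input control. The sketch below computes per-bin fold enrichment; the read counts are hypothetical, and real pipelines additionally correct for sequencing depth and use statistical peak callers:

```python
def fold_enrichment(chip_counts, input_counts, pseudocount=1.0):
    """Depth-normalised fold enrichment of ChIP reads over input-control
    reads, per genomic bin. The pseudocount avoids division by zero in
    empty bins."""
    chip_total = sum(chip_counts)
    input_total = sum(input_counts)
    return [
        ((c + pseudocount) / chip_total) / ((i + pseudocount) / input_total)
        for c, i in zip(chip_counts, input_counts)
    ]

# Hypothetical read counts in five genomic bins; bin 2 looks like a peak
chip = [12, 15, 140, 18, 10]
ctrl = [11, 14, 16, 15, 12]
print([round(x, 2) for x in fold_enrichment(chip, ctrl)])
```

Bins with fold enrichment well above 1 are candidate binding sites for the immunoprecipitated protein or histone mark; bins near or below 1 look like background.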
H4K5ac
[ "Chemistry" ]
1,325
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
62,571,269
https://en.wikipedia.org/wiki/Single-entity%20electrochemistry
Single-entity electrochemistry (SEE) refers to the electroanalysis of an individual unit of interest. A unique feature of SEE is that it unifies multiple different branches of electrochemistry. Single-entity electrochemistry pushes the bounds of the field, as it can measure entities on scales from 100 microns down to angstroms. Single-entity electrochemistry is important because it gives the ability to view how a single molecule, cell, or other entity affects the bulk response, revealing chemistry that might otherwise have gone unnoticed. The ability to monitor the movement of one electron or ion from one unit to another is valuable, as many vital reactions and mechanisms involve this process. Electrochemistry is well suited for this measurement due to its exceptional sensitivity. Single-entity electrochemistry can be used to investigate nanoparticles, wires, vesicles, nanobubbles, nanotubes, cells, viruses, and other small molecules and ions. Single-entity electrochemistry has been successfully used to determine the size distribution of particles as well as the number of particles present inside a vesicle or other similar structures. Early history Coulter Counter The Coulter counter was created by Wallace H. Coulter in 1949. The Coulter counter consists of two electrolyte reservoirs that are connected by a small channel, through which a current of ions flows. Each particle drawn through the channel causes a brief change to the electrical resistance of the liquid. The change in the electrical resistance causes a disturbance in the electric field. The counter detects these changes in electrical resistance; the size of a particle in the field is proportional to the magnitude of the disturbance in the electric field. Patch-Clamp Electrophysiology Patch-clamp electrophysiology was developed by Neher and Sakmann in 1976. This technique allowed measurement of the currents through individual ion channel proteins.
A glass pipette was fixed to the cell membrane, and the ion currents through the ion channels were measured. The patch-clamp method increased the sensitivity of detection by three orders of magnitude over previous methods, and the time resolution of the measurements was decreased to nearly 10 microseconds. The success of this method was a result of the ability to create a high-resistance seal between the glass micropipette and the cell membrane, isolating the system chemically and electrically. Single-Cell Electrochemistry While it is useful to study bulk cell entities, there is an underlying need to study an individual or single cell, as this provides a better understanding of how it contributes to the entity as a whole. It was found that electrochemical techniques could analyze cells without interrupting cellular activity, as well as provide highly resolved measurements. This analysis method was first completed by Wightman in 1982. In this method of analysis, a carbon microfiber electrode is placed near the studied cell; this electrode can monitor the cell via voltammetry or amperometry. Before the measurement can be taken, the cell must be stimulated by an ejection pipette to cause a cellular release. This cellular release can then be measured via the aforementioned methods. From this method, it was seen that instrumental advances were needed in order to perform quality SEE measurements. Single-Molecule Redox Cycling Single-molecule electrochemistry is an electrochemical technique used to study the faradaic response of redox molecules in electrochemical environments. The ability to study singular molecules gives rise to the potential of developing ultra-sensitive sensors, which are necessary in SEE. Since the work of Bard and Fan, this technique has seen large advances with the use of redox cycling. Redox cycling amplifies a charge transfer by reducing and oxidizing a molecule multiple times as it diffuses between electrodes.
Specifically, in this technique an insulated nano-electrode tip is placed near a substrate electrode to form an ultra-small electrochemical chamber. Molecules become trapped in this chamber, where the redox cycling and charge amplification occur, allowing for detection of single molecules. The charge amplification of redox reactions provided by this technique helped improve SEE measurements: it lowered detection limits, which must be extremely low for SEE.

Applications

Single-Cell Electrochemistry

With the advance of nanoscale electrodes, the resolution of SEE has progressed from detecting single cells to detecting single molecules within cells. Nanoscale electrodes are small enough that they can be inserted into the synapses between neurons, where they can be used to detect neurotransmitter concentrations. If the electrode is thin enough, it can be inserted directly into a cell and used to detect concentrations of intracellular molecules, such as metabolites or even DNA.

Optoelectrochemical Imaging

Plasmonic nanoparticles can be individually analyzed through optoelectrochemical imaging, in which electrochemical processes are measured by optical means. When electrochemistry is performed on a nanoparticle, the refractive index of its environment changes, resulting in a shift of the localized surface plasmon resonance. This spectral shift can be measured through characterization techniques such as darkfield microscopy to monitor electrochemical reactions at the surface of plasmonic nanoparticles. Plasmonics-based electrochemical current microscopy (PECM) measures the contrast arising from the interference of light scattered by localized surface plasmons with reflected light, which, as above, is sensitive to changes in the refractive index. This can be used to quantify the electrocatalytic reactions occurring at Pt nanoparticles.
Since nanoparticles are inherently heterogeneous (which affects catalytic activity), SEE methods can provide more information than traditional methods that measure the average of an ensemble of nanoparticles.

Single-Enzyme Electron Transfer

At present, single-entity electrochemistry is not sensitive enough to quantify the turnover of a single enzyme.

References

Electrochemistry
Single-entity electrochemistry
[ "Chemistry" ]
1,202
[ "Electrochemistry" ]
62,571,409
https://en.wikipedia.org/wiki/Dihydrocaffeic%20acid
Dihydrocaffeic acid (DHCA; systematic name 3-(3,4-dihydroxyphenyl)propionic acid) is a phytochemical found in grapes and other plants. DHCA is known to lower IL-6 production through downregulation of DNMT1 expression and inhibition of DNA methylation of the IL-6 gene in mice. DHCA in combination with malvidin-3′-O-glucoside (Mal-gluc) is effective in promoting resilience against stress by modulating brain synaptic plasticity and peripheral inflammation. DHCA/Mal-gluc also significantly lowered depression-like phenotypes in mice that had increased peripheral inflammation caused by transplantation of hematopoietic progenitor cells from other, more stress-susceptible mice.

References

Phenylpropanoids Phenolic acids
Dihydrocaffeic acid
[ "Chemistry" ]
187
[ "Biomolecules by chemical classification", "Phenylpropanoids", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
62,571,696
https://en.wikipedia.org/wiki/Golodirsen
Golodirsen, sold under the brand name Vyondys 53, is a medication used for the treatment of Duchenne muscular dystrophy. It is an antisense oligonucleotide medication of phosphorodiamidate morpholino oligomer (PMO) chemistry. The most common side effects include headache, fever, fall, cough, vomiting, abdominal pain, cold symptoms (nasopharyngitis) and nausea.

Medical uses

Golodirsen is indicated for the treatment of Duchenne muscular dystrophy in people who have a confirmed mutation of the dystrophin gene that is amenable to exon 53 skipping.

Mechanism of action

Approximately 8% of all people with Duchenne muscular dystrophy have mutations amenable to exon 53 skipping, for which golodirsen has been provisionally approved. It works by inducing exon skipping in the dystrophin gene and thereby increasing the amount of dystrophin protein available to muscle fibers.

Adverse effects

The most common side effects include headache, fever, fall, cough, vomiting, abdominal pain, cold symptoms (nasopharyngitis) and nausea. In animal studies, no significant changes were seen in the male reproductive system of monkeys and mice following weekly subcutaneous administration. According to reports from the clinical trials, pain at the site of intravenous administration, back pain, oropharyngeal pain, ligament sprains, diarrhea, dizziness, contusion, flu, ear infection, rhinitis, skin abrasion, tachycardia, and constipation occurred at an elevated frequency in the treatment group, as compared to their placebo counterparts. Hypersensitivity reactions, including rash, fever, itching, hives, skin irritation (dermatitis) and skin peeling (exfoliation), have occurred in people who were treated with golodirsen. Renal toxicity was observed in animals that received golodirsen. Although renal toxicity was not observed in the clinical studies with golodirsen, potentially fatal glomerulonephritis has been observed after administration of some antisense oligonucleotides.
Renal function should be monitored in those taking golodirsen.

Pharmacology

Pharmacokinetics

Following single or multiple intravenous infusions, the majority of drug elimination occurs within 24 hours of administration. The elimination half-life of golodirsen, comparable to that of eteplirsen, was 3 to 6 hours.

Clinical benefits

As a first-generation medication, golodirsen is far from curative; clinical trial outcomes have demonstrated it to have a marginal effect on ameliorating Duchenne muscular dystrophy pathology. As of December 2019, golodirsen is approved for therapeutic use in the United States, as well as in countries that automatically recognize the decisions of the US Food and Drug Administration, under the condition that its benefit will be demonstrated in a confirmatory clinical trial.

Society and culture

Golodirsen is one of the very few FDA-approved exon-skipping therapies for Duchenne muscular dystrophy, although the clinical benefits of the medication are yet to be established. While the development of golodirsen required substantial financing, it is only applicable to a small subset of people with Duchenne muscular dystrophy. Sarepta Therapeutics has announced that golodirsen will be priced in parity with eteplirsen, another medication of a similar kind, which may be as high as per year. Also, while the accelerated approval of golodirsen has paved the way for people to have early access to the medication, at the same time it is shrouded in controversy over a number of issues. A double-blind placebo-controlled confirmatory trial (NCT02500381) is ongoing to resolve these issues.

History

Golodirsen was developed by collaborative research led by Prof. Steve Wilton and Prof. Sue Fletcher in the Perron Institute and licensed to Sarepta Therapeutics by the University of Western Australia.
In the clinical trial of golodirsen, dystrophin levels increased, on average, from 0.10% of normal at baseline to 1.02% of normal after 48 weeks or more of treatment with the drug. The change was a surrogate endpoint, and the trial did not establish clinical benefit of the drug, including changes to the subjects' motor function. The pharmacological assessment of golodirsen did not include special population groups, e.g., pregnant and lactating women, the elderly, and people with concurrent disease states. As DMD predominantly affects male children and young adults, and golodirsen is indicated primarily for the treatment of children rather than adult women, the elderly, or people with comorbidities, it was not evaluated in these groups. The US Food and Drug Administration (FDA) approved golodirsen in December 2019, under the accelerated approval pathway. The application for golodirsen was granted fast track, priority review, and orphan drug designations, and a rare pediatric disease priority review voucher.

References

Antisense RNA Muscular dystrophy Orphan drugs Therapeutic gene modulation Muscle protectors
Golodirsen
[ "Biology" ]
1,122
[ "Therapeutic gene modulation" ]
62,574,090
https://en.wikipedia.org/wiki/One-off%20vehicle
A one-off vehicle is a vehicle designed for normal road use but produced essentially as a single unit, following specific instructions from the customer. Generally the model is derived from a small- or large-series design, with significant technical, functional and aesthetic variations in each unit. Among vehicles authorized to drive on public roads, the production of unique vehicles is in practice limited to a single unit in each case. The easiest cases to analyze are those of cars and motorcycles. Unique automobiles and unique motorcycles are usually kept and displayed in museums. Aeronautical vehicles, with notable exceptions, are not preserved in the same way (due to accidents and disappearances). Naval vehicles feature a myriad of unique models; a non-exclusive base of examples could be the field of sailing yachts and motorboats. Some cases of non-legalized vehicles may be included in this article if they have particularly noticeable characteristics, such as engine, chassis, and body types.

Origin of one-off

A one-off is something made or occurring only once, independently of any particular pattern. First used in 1934, this term is employed to differentiate singular items from those in a series: e.g. "the Lincoln Futura was a one-off". It has been suggested that it is a misspelling of "one-of", but this etymology is not supported by sources such as the Oxford English Dictionary.

Cars

Before mass production, automobiles were handcrafted: first the whole car, and later only the body, on a factory-provided chassis. Many unique specimens survive from that time, as many customers had their cars custom-made. Of that mass of unique cars it is only worthwhile to present a few: those that, besides being in a museum, have some remarkable features.

Alfa Romeo 40-60 HP Castagna

The firm A.L.F.A. manufactured a street car designated 40/60. Designed by engineer Giuseppe Merosi, it had a 4-cylinder in-line engine (with an overhead-valve cylinder head) and provided 70 hp (52 kW) with a top speed of 125 km/h.
In the racing version, the power reached 73 hp (54 kW) and a speed of 137 km/h. In 1914 the Milanese count Marco Ricotti commissioned an aerodynamic body from the specialized firm Castagna that allowed a speed of 140 km/h. This unique model was officially called the "aerodynamic" and popularly the "Siluro Ricotti".

1924. Hispano-Suiza "Tulipwood"

André Dubonnet, heir to the apéritif maker Dubonnet, commissioned a racing car from Hispano-Suiza based on the Boulogne model. The bodywork was commissioned from Nieuport, an aircraft manufacturer.
Engine: 6 cylinders in line, 8 liters, 200 hp
Chassis:
Bodywork: strips of Virginia tulipwood fastened with brass nails (some sources speak of rivets) over a wooden framework. There are doubts about some technical details: some sources speak of structural elements of fir, while other authors indicate an aluminum sub-body. The total weight would be 78 kg. Despite its luxurious appearance it was a race car: it finished sixth in the Targa Florio and fifth in the Coppa Florio.

1927. Bugatti Type 41 Royale

1931. Hispano-Suiza J12

This luxury car was sold bare, with only the chassis and engine; all J12s were thus unique. The engine was a V12 at 60 degrees. The engine block was machined from a 313 kg casting. The crankshaft rotated on seven bearings. Each cylinder had two valves operated by rockers from a central camshaft. According to the designer, Marc Birkigt, this solution (apparently less sophisticated than camshafts in the cylinder head) was chosen as less noisy. The Hispano-Suiza J12 appears in the film Borsalino & Co.

1938. Hispano-Suiza H6B Dubonnet Xenia

1939. Lagonda Rapide V12 Tulipwood Tourer

2006. Ferrari P4/5 by Pininfarina

The American James Glickenhaus commissioned Pininfarina to create a special body in the style of the P3 racing cars. The mechanical base was the Ferrari Enzo. The result was designated Ferrari P4/5 by Pininfarina, as authorized by the Ferrari house.
Rolls-Royce Sweptail

It was the most expensive new car in the world at its debut in 2017.

References

Vehicles
One-off vehicle
[ "Physics" ]
917
[ "Vehicles", "Transport", "Physical systems" ]
62,576,396
https://en.wikipedia.org/wiki/National%20Union%20of%20Scalemakers
The National Union of Scalemakers was a trade union representing workers involved in making weighing scales in the United Kingdom and Ireland.

History

In 1909, a strike occurred among scalemakers at Messrs Hodgson and Stead, in Manchester. Following the strike, many employees decided to found a union, the Amalgamated Society of Scale Beam and Weighing Machine Makers. Initially very small, the union expanded steadily, opening branches in Liverpool and Sheffield in 1910, and expanding into Wales in 1911, Scotland in 1912, and Ireland in 1918. That year, membership reached 600, and in 1920 it peaked at 1,000. Wage reductions in the industry and poor organisation led to financial difficulties, which culminated in 1923 with the London branch splitting away. The London branch claimed to represent the continuation of the union, and it was moderately successful, reaching 150 members by 1927. The remainder of the union struggled to survive, making its general and financial secretary post part-time, and renaming itself as the Society of Scale Beam and Weighing Machinists. It registered as a trade union in 1924 and affiliated to the Trades Union Congress (TUC), but declined to only 150 members. The TUC was concerned about the conflict between the two unions, and brokered a merger, which took place at the start of 1928, although the merged union still had a membership of only 282. A ballot saw the union's headquarters move to London, and membership began increasing rapidly. In 1939, it was able to make the general and financial secretary position full-time again, and by 1949 it had a membership of 2,500. The union affiliated with the Scottish Trades Union Congress in 1935, with the Irish Trades Union Congress in 1945, and with the Confederation of Shipbuilding and Engineering Unions in 1948. In 1938, it began describing itself as an industrial union, representing all workers connected with the scalemaking trade, and the first woman joined the union in 1941.
The union repeatedly considered merging into the Amalgamated Engineering Union, but feared that its members' interests would be neglected by the much larger union. In 1993, the union merged into Manufacturing, Science and Finance.

Leadership

General and Financial Secretary
1909: J. Cope
1915: J. P. Wadsworth
1924: G. Hatfield
1928: Harry Bending
1963: S. W. Parfitt
1980: A. F. Smith

President
1909: Andrew Leslie
1913: T. Richardson
1914: D. Donaldson
1918: Harry Walker
1920: J. A. Hodson
1921: J. Maxwell
1922: J. C. Turnbull
1925: J. Maxwell
1926: Andrew Leslie Jr
1928: Thomas Knight
1937: Albert Jackson

References

Business organisations based in the United Kingdom 1909 establishments in the United Kingdom Engineering trade unions Trade unions established in 1909 Trade unions disestablished in 1993 Weighing instruments Trade unions based in the West Midlands (county)
National Union of Scalemakers
[ "Physics", "Technology", "Engineering" ]
565
[ "Weighing instruments", "Mass", "Matter", "Measuring instruments" ]
62,577,114
https://en.wikipedia.org/wiki/Polymers%20of%20intrinsic%20microporosity
Polymers of intrinsic microporosity (PIMs) are a unique class of microporous material developed by research efforts led by Neil McKeown, Peter Budd, et al. PIMs contain a continuous network of interconnected intermolecular voids less than 2 nm in width. Classified as porous organic polymers, PIMs derive their porosity from rigid and contorted macromolecular chains that cannot pack efficiently in the solid state. PIMs are composed of fused ring sequences interrupted by spiro-centers or other sites of contortion along the backbone. Because of their fused ring structure, PIMs cannot rotate freely along the polymer backbone, so the conformation of the macromolecular components cannot rearrange and the highly contorted shape is fixed during synthesis.

Synthesis

PIMs require that the non-network macromolecular structure be rigid and non-linear. In order to maintain permanent microporosity, rotation along the polymer chain must be prohibited through the use of a fused ring structure, or strongly hindered by steric inhibition, to avoid conformational changes that would allow the polymer to pack efficiently. This calls for a conformationally locked monomer and a polymerization reaction that provides a linkage about which rotation is prohibited. Three main types of polymerization reactions have been successfully used to prepare PIMs of sufficient mass to form self-standing films: a polymerization based on a double aromatic nucleophilic substitution mechanism to form the dibenzodioxin linkage, a polymerization using Tröger's base formation, and the formation of amide linkages between monomeric units. It is also possible to modify the structure of PIMs by post-synthesis reactions; however, this can result in a reduction in intrinsic microporosity due to the additional interchain cohesive interactions.
Applications

Due to their intrinsic microporosity, these polymers have high free volume, high internal surface area, and a high affinity for gases. A novel property of PIMs is that they do not possess a network structure and are often freely soluble in organic solvents. This allows PIMs to be precipitated or cast from solution to give microporous powders or self-standing films that are useful for a variety of applications. For example, the first commercial application of PIMs was in a sensor developed by 3M. Additionally, due to PIMs' affinity for small gases and their ability to form self-standing films, they are actively being investigated as membrane materials and adsorbents for industrial separation processes such as gas separation and carbon dioxide capture. PIM membranes are also heavily investigated for their contribution to the revision of Robeson's 2008 upper bounds of performance, an important benchmark in membrane gas separation stating that permeability must be sacrificed for selectivity. Specific active areas of PIM membrane research include enhancing permeability, decreasing aging, and tailoring selectivity. PIMs are also used to create mixed-matrix membranes with a variety of materials such as inorganic materials, metal-organic frameworks, and carbons.

References

Porous media Polymers
Polymers of intrinsic microporosity
[ "Chemistry", "Materials_science", "Engineering" ]
636
[ "Polymers", "Porous media", "Polymer chemistry", "Materials science" ]
58,229,069
https://en.wikipedia.org/wiki/United%20Nations%20Framework%20Classification%20for%20Resources
United Nations Framework Classification for Resources (UNFC) is an international scheme for the classification, management and reporting of energy, mineral, and raw material resources. The United Nations Economic Commission for Europe's (UNECE) Expert Group on Resource Management (EGRM) is responsible for the promotion and further development of UNFC.

Development

Natural resources such as minerals and petroleum are classified and managed using differing schemes. In 1997, UNECE published the United Nations Framework Classification for Reserves and Resources of Solid Fuels and Mineral Commodities (UNFC-1997) as a unifying international system for classifying solid minerals and fuels. In 2004, the Classification was revised to include petroleum (oil and natural gas) and uranium, and renamed the UNFC for Fossil Energy and Mineral Resources 2004 (UNFC-2004). In 2009, a simplified United Nations Framework Classification for Fossil Energy and Mineral Reserves and Resources 2009 (UNFC-2009) was published. In response to the application of UNFC being extended to renewable energy, injection projects for geological storage and anthropogenic resources, the name was changed in 2017 to the United Nations Framework Classification for Resources (UNFC). An updated version of UNFC, with improved terminology, was released in 2019.

Application

The UNFC system is used for:
Policy formulation in energy and raw material studies
National resources management functions
Corporate business processes
Financial reporting

UNFC currently applies to minerals, petroleum, renewable energy, nuclear fuel resources, injection projects for geological storage, and anthropogenic resources. Application of UNFC to groundwater resources is being evaluated.

Implementation

UNFC has been adopted as the basis of national resource classification in many countries including China, India, Mexico, Poland and Ukraine.
The African Union Commission has developed a UNFC-based African Mineral and Energy Resources Classification and Management System (AMREC) as a unifying system for Africa. AMREC includes the Pan African Resource Reporting Code (PARC 2023). The European Commission uses UNFC to classify and report the raw material resources of Europe and mandated the same in the Critical Raw Materials Act.

References

See also
United Nations Resource Management System

External links
UNFC on UNECE website

Minerals Natural resource management Petroleum Renewable energy Resource economics Resource extraction
United Nations Framework Classification for Resources
[ "Chemistry" ]
446
[ "Petroleum", "Chemical mixtures" ]
58,229,324
https://en.wikipedia.org/wiki/Neutron%20resonance%20spin%20echo
Neutron resonance spin echo is a quasielastic neutron scattering technique developed by Gähler and Golub. In its classic form it is used analogously to conventional neutron spin echo (NSE) spectrometry for quasielastic scattering, where tiny energy changes imparted by the sample to the neutron have to be resolved. In contrast to NSE, each of the large magnetic solenoids is replaced by a pair of resonant flippers. This allows for variants in combination with triple-axis spectrometers to resolve the narrow linewidths of excitations, or MIEZE (Modulation of IntEnsity with Zero Effort) for depolarizing conditions and incoherent scattering, which are not possible with conventional NSE. Neutron spin echo techniques achieve very high energy resolution in combination with very high neutron intensity by decoupling the energy resolution of the instrument from the wavelength spread of the neutrons. The energy transfer of the neutrons is encoded in their polarization and not in the change of the wavelength of the scattered neutrons. The final neutron polarization provides the (normalized) intermediate scattering function S(Q,τ), giving direct information on relaxation processes, activation energies, and the amplitudes of dynamic processes in the samples under investigation.

How it works

The classical NSE technique (Figure 1 a)) relies upon the Larmor precession the neutron spin undergoes while flying through static magnetic fields. Several other NSE schemes exist, however, which employ resonant spin flips in a magnetic RF field to achieve the same effect on the neutron, such as neutron resonant spin echo (NRSE) and modulation of intensity with zero effort (MIEZE). In NRSE, the static magnetic fields produced by large DC coils in NSE are replaced by two resonant flipper coils, each producing a static magnetic field B0 and a radio-frequency (RF) field of frequency ωRF perpendicular to it (Figure 1 b)).
A neutron entering the first resonant flipper undergoes a resonant π-flip induced by the static field B0, precessing with a frequency ωL (the Larmor frequency) equal to ωRF and performing Rabi oscillations due to the RF field. In classical NRSE the path between the two flippers is kept free of any magnetic field and the spin phase is not changed. In the second resonant flipper coil the neutron undergoes another resonant π-flip. The effect these two flippers have on the neutron spin is identical to the action of an effective static magnetic field as utilized in NSE.

Longitudinal resonance spin echo

The original NRSE setup was designed in a transverse configuration (T-NRSE, Figure 1 b)), where the field B0 lies transverse to the spin direction. In this form the energy resolution of the setup is limited by the production accuracy of the B0 coils to a few nanoseconds. The space between the transverse NRSE coils needs to be free of field, and is therefore shielded by a mu-metal housing. These drawbacks led to the development of the longitudinal NRSE (L-NRSE, Figure 1 d)) design, which combines the advantages of classical NSE and T-NRSE. In contrast to the conventional transverse NRSE technique, the cylindrically symmetric longitudinal NRSE configuration allows the use of guide fields through the whole spectrometer, reducing the effort needed to maintain the neutron polarization. This makes the mu-metal shielding required for transverse NRSE obsolete and facilitates maintaining the polarization of neutrons with large wavelengths λ. These neutrons are particularly important for NSE techniques, as their resolution increases with λ³. Using a longitudinal field geometry, no field corrections are required for a non-divergent neutron beam, while the corrections for divergent neutron trajectories are at least a factor of 10 smaller than in conventional NSE.
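The λ³ scaling of the resolution can be made explicit. For classical NSE the spin-echo (Fourier) time follows from the Larmor phase φ = γBL/v with v = h/(mλ), giving τ = γm²BLλ³/(2πh²), where γ is the neutron gyromagnetic ratio, m the neutron mass and BL the field integral (in NRSE the flipper pairs produce an effective field integral of the same form). The sketch below evaluates this for an assumed, illustrative field integral of 0.5 T·m, not the parameters of any particular spectrometer:

```python
# Spin-echo (Fourier) time for classical NSE:
#   tau = gamma * m^2 * B*L * lambda^3 / (2 * pi * h^2)
# The field integral B*L and the wavelengths below are illustrative values.
import math

GAMMA_N = 1.832e8   # neutron gyromagnetic ratio, rad s^-1 T^-1
M_N = 1.675e-27     # neutron mass, kg
H = 6.626e-34       # Planck constant, J s

def fourier_time(field_integral, wavelength):
    """Spin-echo time in seconds for a field integral (T*m) and wavelength (m)."""
    return GAMMA_N * M_N**2 * field_integral * wavelength**3 / (2 * math.pi * H**2)

BL = 0.5  # T*m, assumed
for lam_angstrom in (4, 8, 16):
    tau = fourier_time(BL, lam_angstrom * 1e-10)
    print(f"lambda = {lam_angstrom:2d} A  ->  tau = {tau * 1e9:8.2f} ns")
```

Doubling the wavelength increases τ eightfold, which is why long-wavelength neutrons, whose polarization the longitudinal guide fields help preserve, matter so much for these techniques.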
In combination with TAS

The RF flipper coils utilized in NRSE are much smaller than the DC coils used in classical NSE, leading to a large reduction in stray fields around the coils. This makes it possible to tilt the RF flipper coils and perform NRSE in a triple-axis spectrometer configuration. The tilting of the coils makes spin-echo focusing possible, where the entire energy dispersion of an excitation can be measured with very high resolution (as low as 1 μeV) over the entire Brillouin zone. This technique therefore allows the investigation of the linewidths of dispersing excitations, including both phonons and magnons, over the entire Brillouin zone.

MIEZE

One disadvantage of classical NSE and NRSE is that depolarization of the neutron beam leads to a complete loss of signal, making it impossible to measure under depolarizing conditions, such as very large magnetic fields. Furthermore, it is not possible to measure samples that depolarize the neutron beam, such as ferromagnets and superconductors. Owing to the dominant incoherent scattering, materials containing large amounts of hydrogen are also difficult to measure using conventional NSE as well as NRSE. To circumvent these drawbacks the MIEZE (Modulation of IntEnsity with Zero Effort) method was introduced, in transverse as well as longitudinal configuration (Figure 1 c) and e)). In the MIEZE configuration the first two RF spin flippers are operated at different frequencies (as opposed to traditional NRSE, where they operate at the same frequency), leading to a sinusoidal time modulation of the measured signal, which is detected by a time- and position-sensitive detector. This setup allows all spin-manipulating devices (including the analyser) to be placed upstream of the sample, making it possible to measure (depolarizing) samples under depolarizing conditions.
Following the same nomenclature as NRSE, transverse MIEZE refers to a configuration where the field B0 lies transverse to the neutron beam, while for longitudinal MIEZE the field B0 points along the neutron beam.

Dedicated instruments

The list below provides an extensive overview of neutron spin echo instruments currently in use (or in planning). Most of these instruments are operated at continuous neutron sources using cold neutrons; the few exceptions are indicated below.

NSE
SNS-NSE at the Oak Ridge National Laboratory in Tennessee, USA (spallation source)
NSE at the National Institute of Standards and Technology in Maryland, US
IN11 at the Institut Laue–Langevin in France
IN15 at the Institut Laue–Langevin in France
Wide angle Spin Echo – WASP at the Institut Laue–Langevin in France (under construction)
J-NSE PHOENIX at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Germany

NRSE
RESEDA at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Germany
BL06 at the Japan Proton Accelerator Research Complex in Japan (using a spallation source)

MIEZE
RESEDA at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Germany
BL06 at the Japan Proton Accelerator Research Complex in Japan (using a spallation source)
MIRA at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Germany
MUSES at Laboratoire Léon Brillouin in France

Triple axis spectrometer – NRSE
IN22/ZETA at the Institut Laue–Langevin in France
TRISP at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Germany (using thermal neutrons)
FLEXX at the Helmholtz-Zentrum Berlin in Germany

References

Neutron scattering
Neutron resonance spin echo
[ "Chemistry" ]
1,607
[ "Scattering", "Neutron scattering" ]
58,229,662
https://en.wikipedia.org/wiki/Paessler%20PRTG
PRTG (Paessler Router Traffic Grapher) is network monitoring software developed by Paessler GmbH. It monitors system conditions like bandwidth usage or uptime and collects statistics from miscellaneous hosts such as switches, routers, servers, and other devices and applications. It was initially released on May 29, 2003 by the German company Paessler GmbH, which was founded by Dirk Paessler in 2001. The software is available in three versions: a classic standalone solution (PRTG Network Monitor), one for large and distributed networks (PRTG Enterprise Monitor) and a SaaS version (PRTG Hosted Monitor).

Specifications

The software has an auto-discovery mode that scans predefined areas of an enterprise network and creates a device list from this data. Further information on the detected devices can be retrieved using communication protocols like ICMP, SNMP, WMI, NetFlow, jFlow, sFlow, DCOM or the RESTful API.

Sensors

The software is based on sensors, e.g. HTTP and SMTP/POP3 (e-mail) application sensors and hardware-specific sensors for switches, routers and servers. The software has over 200 different predefined sensors that retrieve statistics from the monitored instances, e.g. response times, processor and memory usage, database information, temperature or system status.

Web interface and desktop client

The software can be operated via an AJAX-based web interface, which is suitable both for real-time troubleshooting and for data exchange with non-technical staff via maps (dashboards) and user-defined reports. An additional administration interface in the form of a desktop application for Windows, Linux, and macOS is available.

Notifications, reports and price model

In addition to the usual communication channels such as email and SMS, notification is also provided via an app for iOS or Android. The software also provides customizable reports, and its price model is based on sensors.
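Data retrieval through the RESTful API mentioned above can be scripted. The sketch below only builds a query URL for a JSON sensor table; the endpoint path, parameter names and the passhash authentication scheme follow commonly documented PRTG usage but should be treated as assumptions to verify against the API documentation of the installed version, and the host and credentials are placeholders:

```python
# Minimal sketch of constructing a PRTG HTTP API query for sensor status.
# Endpoint and parameter names are assumptions based on common PRTG usage;
# verify them against your installation's API documentation.
from urllib.parse import urlencode

def build_sensor_query(host, username, passhash, columns=("objid", "sensor", "status")):
    """Return the URL for a JSON table of sensors (hypothetical helper)."""
    params = {
        "content": "sensors",           # which table to return
        "columns": ",".join(columns),   # fields per sensor
        "output": "json",
        "username": username,
        "passhash": passhash,           # PRTG's hashed-password auth parameter
    }
    return f"https://{host}/api/table.json?{urlencode(params)}"

url = build_sensor_query("prtg.example.com", "monitor", "0000000000")
print(url)
```

The resulting JSON could then be fetched with `urllib.request` or a third-party HTTP client and filtered, for example for sensors whose status is not "Up".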
See also

Comparison of network monitoring systems

References

Literature

Andrés, Steven, Brian Kenyon, and Erik Pack Birkholz. Security Sage's Guide to Hardening the Network Infrastructure. Elsevier, 2004.
Elsayed, Abdellatief, and Nashwa Abdelbaki. "Performance evaluation and comparison of the top market virtualization hypervisors." Computer Engineering & Systems (ICCES), 2013 8th International Conference on. IEEE, 2013.

System administration Network management Port scanners Network analyzers Windows software
Paessler PRTG
[ "Technology", "Engineering" ]
502
[ "Information systems", "Computer networks engineering", "Network management", "System administration" ]
58,230,115
https://en.wikipedia.org/wiki/Ragulator-Rag%20complex
The Ragulator-Rag complex is a regulator of lysosomal signalling and trafficking in eukaryotic cells, which plays an important role in regulating cell metabolism and growth in response to nutrient availability in the cell. The Ragulator-Rag Complex is composed of five LAMTOR subunits, which work to regulate MAPK and mTOR complex 1. The LAMTOR subunits form a complex with Rag GTPase and v-ATPase, which sits on the cell’s lysosomes and detects the availability of amino acids. If the Ragulator complex receives signals indicating a low amino acid level, it initiates catabolic processes such as autophagy. If there is an abundance of amino acids available to the cell, the Ragulator complex will signal that the cell can continue to grow. The Rag proteins come in two forms, RagA/RagB and RagC/RagD, which pair to form heterodimers with one another. History mTORC1 is a complex within the lysosome membrane that initiates growth when promoted by a stimulus, such as growth factors. GTPases are key components in cell signaling, and by 2010 four Rag GTPases had been identified within the lysosomes of cells. In 2008, it was thought that these RAG complexes would slow down autophagy and activate cell growth by interacting with mTORC1. However, in 2010, the Ragulator was discovered. Researchers determined that the function of this Ragulator was to interact with the RAG A, B, C, and D complexes to promote cell growth. This discovery also led to the first use of the term “Rag-Ragulator” complex, because of the interaction between these two. The amino acid level, cell growth, and other important factors are influenced by the mTOR Complex 1 pathway. On the lysosomal surface, the amino acids signal the activation of the four Rag proteins (RagA, RagB, RagC, and RagD) to translocate mTORC1 to the site of activation. A 2014 study noted that AMPK (AMP-activated protein kinase) and mTOR play important roles in managing different metabolic programs. 
It was also found that the protein complex v-ATPase-Ragulator was essential for activation of mTOR and AMPK. The v-ATPase-Ragulator complex is also used as an initiating sensor for energy stress, and serves as an endosomal docking site for LKB1-mediated AMPK activation by forming the v-ATPase-Ragulator-AXIN/LKB1-AMPK complex. This allows a switch between catabolism and anabolism. In 2016, it was established that RagA and Lamtor4 were key to microglia functioning and biogenesis regulation within the lysosome. Further studies indicate that the Ragulator-Rag complex interacts with proteins other than mTORC1, including an interaction with v-ATPase, which facilitates lysosomal functions within microglia. In 2017, the Ragulator was thought to regulate the position of the lysosome, and interact with BORC, a multi-subunit complex located on the surface of the lysosomal membrane. Both BORC and mTORC1 work together in activating the GTPases to change the position of the lysosome. It was concluded that BORC and GTPases compete for a binding site in the LAMTOR 2 protein to reposition the lysosome. Function While the intricate functions of the Ragulator-Rag Complex are not fully understood, it is known that the Ragulator-Rag Complex associates with the lysosome and plays a key role in mTOR (mammalian target of rapamycin) signaling regulation. mTOR signaling is sensitive to amino acid concentrations in the cytoplasm of the cell, and the Ragulator complex works to detect amino acid concentration and transmit signals that activate, or inhibit, mTORC1. The Ragulator, along with the Rag GTPases and v-ATPases, are part of an amino acid identifying pathway, and are necessary for the localization of the mTORC1 to the lysosome surface. The Ragulator and v-ATPases reside on the lysosomal surface. 
The Rag GTPases cannot be directly bound to the lysosome because they lack the proteins necessary to bind to its lipid bilayer, so Rag GTPases must instead be anchored to the Ragulator. The Ragulator is bound to the surface via the V-ATPase. The Ragulator, whose crystal structure has been determined, is composed of five different subunits: LAMTOR 1, LAMTOR 2, LAMTOR 3, LAMTOR 4, and LAMTOR 5. There are two obligate heterodimers in the complex: LAMTOR 2/3, which sits directly above LAMTOR 4/5. The LAMTOR 1 subunit does not have the same structure as the other subunits; it surrounds most of the two heterodimers, providing structural support and keeping the heterodimers in place. When amino acids are present, the subunits are folded and positioned in such a way that allows the Rag-GTPases to be anchored to their primary docking site, LAMTOR 2/3, on the Ragulator. The Rag-GTPases consist of two sets of heterodimers: RagA/B and RagC/D. Before Rag-GTPases can bind to the Ragulator, RagA/B must be loaded with GTP via guanine nucleotide exchange factors (GEFs), and RagC/D must be loaded with GDP. Once Rag-GTPases are bound to the Ragulator complex, mTORC1 can be translocated to the surface of the lysosome. At the lysosomal surface, mTORC1 will then bind to Rheb, but only if Rheb was first loaded with GTP via GEFs. If the amount of nutrients and the concentration of amino acids are sufficient, mTORC1 will be activated. Activation of mTORC1 The lysosomal membrane is the main area in which mTORC1 is activated. However, some activation can occur in the Golgi apparatus and the peroxisome. In mammalian cells, GTPase RagA and RagB are heterodimers with RagC and RagD, respectively. When enough amino acids are present, RagA/B GTPase becomes activated, which leads to the translocation of mTORC1 from the cytoplasm to the lysosome surface, via the Raptor. 
This process brings mTORC1 in close enough proximity to Rheb for Rheb to either (1) cause a conformational change to mTORC1, leading to an increase in substrate turnover, or (2) induce kinase activity of mTORC1. Rags do not contain membrane-targeting sequences, and as a result, depend on the entire Ragulator-Rag Complex to bind to the lysosome, activating mTORC1. While most amino acids indirectly activate mTORC1 in mammals, leucine has the ability to directly activate mTORC1 in cells that are depleted of amino acids. Yeast contain LRS (leucyl-tRNA synthetase), which is a molecule that can interact with Rags, directly activating the molecule. Structure The complex consists of five subunits, named LAMTOR 1-5 (Late endosomal/lysosomal adaptor, MAPK and mTOR activator 1–5); however, several have alternative names. LAMTOR1 LAMTOR2 LAMTOR3 (MAP2K1IP1) LAMTOR4 LAMTOR5 (HBXIP) References Cell biology
Ragulator-Rag complex
[ "Biology" ]
1,616
[ "Cell biology" ]
58,230,624
https://en.wikipedia.org/wiki/Celebrity%20influence%20in%20politics
Celebrity influence in politics, also referred to as "celebrity politics" or "political star power," is the act of a prominent person using their fame as a platform to influence others on political issues or ideology. According to Anthony Elliott, celebrity is a central structuring point in self and social identification, performing as it does an increasingly important role in self-framings, self-imaginings, self-revisions and self-reflection. The influential people considered celebrities can be anyone with a major following such as professional athletes, actors/actresses, television personalities or musicians. Celebrities have two kinds of specific power: the abilities to shed light on issues and to persuade audiences. Social media is one of the most common areas for celebrities to discuss specific issues or current events that are being politicized; the individuals may also speak out in public forums such as television talk shows, events, or during their own widely attended performances. In the United States, most celebrities tend to hold liberal political beliefs, for reasons that are debated by social psychologists. History The use of media The adoption of wide-reaching mediums, beginning with television, has made it easier for celebrities to exert influence in politics. According to John Street, celebrity influence in politics began with television, as the medium is intimate and focused more on the human side of people, including political candidates, than any other medium. He further states that celebrities were able to use television to reach a wide audience and that their influence affected others' understandings of certain topics. As different mediums emerged, such as social media, where celebrities could voice their opinions on various topics, their influence had a greater effect. 
Mark Wheeler has opined that this led to one of the main critiques of celebrity involvement in politics: that celebrities could be viewed as a manufactured product, one fabricated through media exposure. Examples of celebrities in politics Celebrities such as movie stars, professional athletes, musicians, and reality television stars have campaigned for and against political parties, candidates, and on political issues. Examples include Oprah Winfrey and George Clooney endorsing Barack Obama's presidential campaign in 2008 and a song written by American musician Hank Williams Jr. endorsing Senator John McCain's campaign in the same election. Prior to winning the 2016 United States presidential election, President Donald Trump was a businessman and television personality who actively appeared on Fox News to discuss politics and endorsed political candidates. Michael Higgins concludes that Trump's media-centered politics amounts to a "pseudo presidency", confounding orthodox forms of political accountability. According to John Street, this is evident in how he is represented, how he performs and how his 'fans' respond to him. It is also symptomatic of wider changes in the conduct and form of the contemporary, mediatised political realm. In India, Amitabh Bachchan and Smriti Irani have campaigned for various political parties and positions. Selena Gomez wrote an essay expressing her opinion on immigration, detailing her family's past immigration background. Gomez addresses the struggles of immigration to the United States and talks about how it's more than just a political issue. Reality TV star, entrepreneur, and influencer Kim Kardashian West has also used her celebrity influence to get the opportunity to talk with President Donald Trump and advocate for him to grant clemency for Alice Marie Johnson, who was sentenced to life in prison for a first-time drug offense. 
Since then she has helped with the release of Momolu Stewart, Crystal Munos, Judith Negron, and Tynice Hall, to name a few. She has also helped establish a partnership between a prison reform initiative, #cut50, and Lyft to help released inmates get reliable transportation to job interviews. Her legal journey can be seen in the documentary The Justice Project, in which she says, "I want to help elevate these cases to a national level to effect change, and this documentary is an honest depiction of me learning about the system and helping bring tangible results to justice reform." There are multiple cases of celebrities who have performed well in the polls and have become important political figures in their respective countries. Examples include Austrian-American bodybuilder and movie star Arnold Schwarzenegger, Italian comedian Beppe Grillo, Israeli television host Yair Lapid, Brazilian singer and composer Gilberto Gil, and Panamanian salsa singer and actor Rubén Blades. Ideology In the United States, celebrities tend to be liberal. Some scientists have argued that artistic individuals tend to hold liberal leanings, because liberalism and creativity go hand in hand, whereas conservatives tend to work in business and the military. The Friends of Abe was a support group for conservatives in Hollywood. Celebrities within ambassadorial programs and organizations Various organizations and programs employ the celebrity limelight in order to interact and engage with their fanbases. UNICEF is a policy organization branching from the United Nations. According to the UNICEF website, it utilizes its celebrity ambassadorial program in order to gain public recognition. Film, television, sports, and social media stars collaborate to "raise awareness" and "fundraise, advocate, and educate on behalf of UNICEF". 
Alyssa Milano, P!nk, Gigi Hadid, and other influential persons all claim positions within this program and assist its mission of providing special protection for the most disadvantaged children. Although this organization is the most politically present, there are various other examples of organizations and programs utilizing this very format in order to achieve their political goals. Examples of politicians in televised networks In the fifth episode ("2016") of season three of the Comedy Central show Broad City, the main characters encounter and openly support and gush over the 2016 presidential candidate, Hillary Clinton. The episode was written by Chris Kelly and directed by Todd Biermann. Although it was publicly specified as an episode that was not meant to be political, the show's characters displayed clear support for said candidate. The show was originally created and scripted by Ilana Glazer and Abbi Jacobson, two comedians who regularly utilize their social media platforms to speak on political topics. This episode aired on March 16, 2016, in the midst of the presidential campaign. The twenty-second episode ("Moving Up") of the sixth season of Parks and Recreation featured a brief cameo by the first lady, Michelle Obama. This episode aired on April 24, 2014, well within President Barack Obama’s time in office. Written by Aisha Muharrar & Alan Yang and directed by Michael Schur, the episode featured the first lady convincing the main character, Leslie Knope, to move forward with her decision to work in Chicago, Illinois. Leslie Knope is portrayed by Amy Poehler, a comedian who also starred on SNL and in several films. There are several other cases of politicians utilizing televised networks and celebrity affiliation to boost polling numbers and public interest. Social media celebrities Social media has risen as a platform for stardom utilized by average individuals. 
Many teenagers, children, and adults have amassed a public following on platforms such as TikTok and YouTube. YouTube, for example, is believed to have accumulated over 2 billion users globally in 2025. This statistic alone shows how many people such an app reaches, making it an easy target for those who aim to grow a platform, create content, and/or exert a message for viewers. These platforms have even given names to such stars who have received a large amount of media attention. A name that has become increasingly prominent across platforms (Facebook and Amazon Prime Video included) is MrBeast, a multimillionaire who rose to fame on YouTube and who publicly bid to buy TikTok during the platform's 2025 ban in America. Although his attempt was unsuccessful, it is clear that social media platforms are built on an astounding number of users who are influenced by every video they consume. These video sharing applications allow for content to be uploaded and enjoyed by the public, even reposted on multiple apps. This consistent exposure to the public has allowed some social media niche stars to become public celebrities, starring in films and shows; for example, Addison Rae starred in the Netflix movie 'He's All That'. Social media platforms and influence Social media stars, also known as "Micro-celebrities" or "Internet Celebrities", are a relatively new form of influencer who have arisen from the various social media content creation platforms: namely YouTube, Vine, TikTok. These individuals spawn from ordinary users who upload their own personal content and gather a following. These followers become fans and eventually a micro-celebrity is born. Although the name originally referred to individuals with a niche platform and audience, such figures have grown with the popularity of social media applications and websites. 
Social Media stars like Lilly Singh, Cody Ko, and Philip DeFranco have all utilized platforms to star in television and web series, opening up their fanbases. Additionally, some Social Media stars have even gone on to sign media contracts with television networks such as MTV. MTV has produced various shows starring social media stars like Tana Mongeau and Jake Paul. Since the late 2010s, YouTube influencers in the UK have increasingly appeared on mainstream television, including Ash Sarkar, who engineered her own "rapid rise to becoming an important ‘alternative’ voice" by appearing on Sky News, Newsnight (BBC) and Good Morning Britain (ITV). YouTubers YouTubers is a name given to those who have reached high numbers of viewers and followers. They become celebrities of the application and continue to manufacture content that their viewers find appealing. These individuals are often idolized by their viewers and fans. Many of these celebrities also use their platform to collaborate with businesses and organizations to advertise their products and services. Additionally, the platform is not only a consistent source of income; YouTube has always been a platform for ideas and the spread of political knowledge. Many of these users display clear bias towards a particular side. TikTokers TikTok became a social media craze in the 2020s, allowing for videos, stories, and "stitches" that respond to and include other videos in one's posts. Charli D'Amelio, the 18-year-old leading TikToker, accumulated over 75 million followers in the span of a year. She was publicly recognized as the most followed user in the application and has used her platform to stand with political and social events in relation to the Black Lives Matter movement. As a result, her video on the subject accumulated over 25 million views and over 8 million "likes" within the application. 
There have additionally been a variety of other TikTokers utilizing their platform in support of the same movement. This, in turn, educates the young viewers and encourages them to be socially and politically aware and active. Many of these users and content creators have acknowledged the murder of George Floyd, and stand in solidarity. Not only can individuals use their voice on such platforms; in recent years, organizations have taken a liking to the app, with news sources across many Western countries competing to spread information about politics, politicians and global engagements (The Daily Times, The Daily Mail, The Washington Post, Fox News). On this occasion, they are competing for engagement and clicks rather than for people to buy their papers, reaching a much greater number of users digitally than their print editions could ever achieve. This form of journalism is particularly known for promoting political agendas associated with right or left-wing thinking, something not so obviously depicted on the platform previously. This shift in content has sparked many debates regarding social media platforms promoting political content that is biased to certain parties, particularly more conservative content. Blockout 2024 In May 2024, the social media accounts of numerous celebrities became a target of the Blockout 2024 social media campaign related to the Gaza–Israel conflict, after the Met Gala brought up concerns about a celebrity culture out of touch with the political climate. Impact on the public General influence Through their activism on the world stage, a self-selected cast of celebrities have begun to have a significant impact on policy, shaping the agenda on a range of global humanitarian issues. Celebrities play a growing role as part of an emerging strategy for political advocacy. 
Spawned by the difficulty most groups have making news, and made possible by the evolution of technology and the public sphere, this new celebrity advocacy strategy represents one aspect of the broader shift in American politics being ushered in by the digital age. Celebrities have the ability to generate parasocial relationships (feelings of personal connectedness despite the lack of direct contact). According to Brian Loader et al., young citizens are generally cautiously positive about both politicians and celebrities using social media but feel that they should learn to use it appropriately if they are to rebuild trust and credibility. In the United States and Canada, there is empirical confirmation of celebrities having a positive effect on the willingness of young people to support specific causes. According to Anubhav and Abhinav Mishra, the credibility of the endorser will likely translate to that of the political group that celebrity is endorsing. Politics in Branding Political sway could be heavily influenced by public media fans. As more individuals become involved with the opinions of their idols, they could potentially be swayed toward a specific political viewpoint. Influence is a pivotal part of politics. Monetary donations are an extremely influential way to mobilize the public. When a campaign is in the position of having more monetary resources, it is able to purchase more air-time on television, radio, and web ads. In 2019 alone, over $700 million was donated to political campaigns in support of presidential candidates. Many of these donations originate from people in socially powerful positions: celebrities. A majority of these socially powerful individuals utilize their platform and fanbase to sway public opinion towards a more socially aware option. There are many examples of televised propaganda utilized by celebrity platforms in order to publicly favor a specific political candidate. 
Identification Identification is the process by which individuals are thought to develop a deep connection with celebrities. This increases the likelihood that the viewer will perform the behaviors advocated by the celebrity as well as adopt similar attitudes and beliefs. Individuals go through the process of identification where they start to believe in the values, convictions and behaviors portrayed by the celebrity endorser and eventually adopt them as their own. Because it is easier to identify with people who have a connection or relationship, individuals are more likely to identify with celebrities that are closer to them in age. Scholars consider identification a significant component in the persuasion process through which celebrities influence audience behavior. Critiques Various concerns have been raised over celebrity involvement in politics. According to John Street, one of the main criticisms of celebrity involvement in politics is celebrities taking office. In his article, Street cites as an example Arnold Schwarzenegger becoming the Governor of California, and identifies the main concern surrounding celebrities holding office as a lack of the qualities necessary to be a representative of the people. Street also argues that reliance on intimate mediums, such as television, shifts the criteria by which politicians are judged from leadership skills to populist empathy. In Mark Wheeler's book, Celebrity Politics, he notes that, outside of celebrities holding public office, their advocacy on political issues can cause a skewed understanding of a particular topic. Wheeler suggests these critiques of celebrity involvement in politics reflect the values of the Frankfurt School. The school's critical theorists contended that media has become an expression of dominant ideology, which celebrities advocate for. References Celebrity Political activism Mass media issues Influence of mass media Social impact Social influence Public opinion
Celebrity influence in politics
[ "Technology" ]
3,162
[ "Computing and society", "Social media" ]
58,232,142
https://en.wikipedia.org/wiki/The%20monkey%20and%20the%20coconuts
The monkey and the coconuts is a mathematical puzzle in the field of Diophantine analysis that originated in a short story involving five sailors and a monkey on a desert island who divide up a pile of coconuts; the problem is to find the number of coconuts in the original pile (fractional coconuts not allowed). The problem is notorious for its confounding difficulty to unsophisticated puzzle solvers, though with the proper mathematical approach, the solution is trivial. The problem has become a staple in recreational mathematics collections. General description The problem can be expressed as: There is a pile of coconuts, owned by five men. One man divides the pile into five equal piles, giving the one leftover coconut to a passing monkey, and takes away his own share. The second man then repeats the procedure, dividing the remaining pile into five and taking away his share, as do the third, fourth, and fifth, each of them finding one coconut left over when dividing the pile by five, and giving it to the monkey. Finally, the group divide the remaining coconuts into five equal piles: this time no coconuts are left over. How many coconuts were there in the original pile? The monkey and the coconuts is the best known representative of a class of puzzle problems requiring integer solutions structured as recursive division or fractionating of some discretely divisible quantity, with or without remainders, and a final division into some number of equal parts, possibly with a remainder. The problem is so well known that the entire class is often referred to broadly as "monkey and coconut type problems", though most are not closely related to the problem. Another example: "I have a whole number of pounds of cement, I know not how many, but after addition of a ninth and an eleventh, it was partitioned into 3 sacks, each with a whole number of pounds. How many pounds of cement did I have?" Problems ask for either the initial or terminal quantity. 
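The recursive-division structure described above is easy to state as a checking procedure. The sketch below is an illustration (the function name and parameterization are ours, not part of the original story): it tests whether a candidate pile size satisfies the conditions of the five-sailor version in which the final morning division comes out even.

```python
def satisfies_williams(n, sailors=5):
    """Check whether a pile of n coconuts fits the story: each sailor in
    turn finds a remainder of 1 (given to the monkey), removes his fifth,
    and the final morning division into five leaves no remainder."""
    pile = n
    for _ in range(sailors):
        if pile % sailors != 1:
            return False
        # Remove the monkey's coconut, then one sailor's share of the rest.
        pile = (pile - 1) // sailors * (sailors - 1)
    return pile % sailors == 0
```

Searching upward from 1 with this predicate finds the smallest pile satisfying the conditions, which is the brute-force counterpart of the analytical methods discussed later in the article.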
Stated or implied is the smallest positive number that could be a solution. There are two unknowns in such problems, the initial number and the terminal number, but only one equation, which is an algebraic reduction of an expression for the relation between them. Common to the class is the nature of the resulting equation, which is a linear Diophantine equation in two unknowns. Most members of the class are determinate, but some are not (the monkey and the coconuts is one of the latter). Familiar algebraic methods are unavailing for solving such equations. History The origin of the class of such problems has been attributed to the Indian mathematician Mahāvīra in chapter VI, § 131, 132 of his Ganita-sara-sangraha (“Compendium of the Essence of Mathematics”), circa 850 CE, which dealt with serial division of fruit and flowers with specified remainders. That would make progenitor problems over 1,000 years old before their resurgence in the modern era. Problems involving division which invoke the Chinese remainder theorem appeared in Chinese literature as early as the first century CE. Sun Tzu asked: Find a number which leaves the remainders 2, 3 and 2 when divided by 3, 5 and 7, respectively. Diophantus of Alexandria first studied problems requiring integer solutions in the 3rd century CE. The Euclidean algorithm for the greatest common divisor, which underlies the solution of such problems, was discovered by the Greek geometer Euclid and published in his Elements in 300 BC. Prof. David Singmaster, a historian of puzzles, traces a series of less plausibly related problems through the Middle Ages, with a few references as far back as the Babylonian empire circa 1700 BC. They involve the general theme of adding or subtracting fractions of a pile or specific numbers of discrete objects and asking how many there could have been in the beginning. The next reference to a similar problem is in Jacques Ozanam's Récréations mathématiques et physiques, 1725. 
In the realm of pure mathematics, Lagrange in 1770 expounded his continued fraction theorem and applied it to solution of Diophantine equations. The first description of the problem in close to its modern wording appears in Lewis Carroll's diaries in 1888: it involves a pile of nuts on a table serially divided by four brothers, each time with remainder of one given to a monkey, and the final division coming out even. The problem never appeared in any of Carroll's published works, though from other references it appears the problem was in circulation in 1888. An almost identical problem appeared in W.W. Rouse Ball's Elementary Algebra (1890). The problem was mentioned in works of period mathematicians, with solutions, mostly wrong, indicating that the problem was new and unfamiliar at the time. The problem became notorious when American novelist and short story writer Ben Ames Williams modified an older problem and included it in a story, "Coconuts", in the October 9, 1926, issue of the Saturday Evening Post. Here is how the problem was stated by Williams (condensed and paraphrased): Five men and a monkey were shipwrecked on an island. They spent the first day gathering coconuts for food. During the night, one man woke up, and decided to take his share early. So he divided the coconuts in five piles. He had one coconut left over, and he gave that to the monkey. Then he hid his pile and put the rest back together. By and by each of the five men woke up and did the same thing, one after the other: each one taking a fifth of the coconuts that were in the pile when he woke up, and having one left over for the monkey. In the morning they divided what coconuts were left, and they came out in five equal shares. Of course each one must have known there were coconuts missing; but each one was as guilty as the others, so they didn't say anything. How many coconuts were there in the original pile? Williams had not included an answer in the story. 
The magazine was inundated by more than 2,000 letters pleading for an answer to the problem. The Post editor, George Horace Lorimer, famously fired off a telegram to Williams saying: "FOR THE LOVE OF MIKE, HOW MANY COCONUTS? HELL POPPING AROUND HERE". Williams continued to get letters asking for a solution or proposing new ones for the next twenty years. Martin Gardner featured the problem in his April 1958 Mathematical Games column in Scientific American. According to Gardner, Williams had modified an older problem to make it more confounding. In the older version there is a coconut for the monkey on the final division; in Williams's version the final division in the morning comes out even. But the available historical evidence does not indicate which versions Williams had access to. Gardner once told his son Jim that it was his favorite problem. He said that the Monkey and the Coconuts is "probably the most worked on and least often solved" Diophantine puzzle. Since that time the Williams version of the problem has become a staple of recreational mathematics. The original story containing the problem was reprinted in full in Clifton Fadiman's 1962 anthology The Mathematical Magpie, a book that the Mathematical Association of America recommends for acquisition by undergraduate mathematics libraries. Numerous variants which vary the number of sailors, monkeys, or coconuts have appeared in the literature. Solutions Numerous solutions starting as early as 1928 have been published both for the original problem and Williams modification. Before entering upon a solution to the problem, a couple of things may be noted. If there were no remainders, given there are 6 divisions of 5, 5⁶ = 15,625 coconuts must be in the pile; on the 6th and last division, each sailor receives 1024 coconuts. No smaller positive number will result in all 6 divisions coming out even. 
That means that in the problem as stated, any multiple of 15,625 may be added to the pile, and it will satisfy the problem conditions. That also means that the number of coconuts in the original pile is smaller than 15,625, else subtracting 15,625 will yield a smaller solution. But the number in the original pile is not trivially small, like 5 or 10 (that is why this is a hard problem) – it may be in the hundreds or thousands. Unlike trial and error in the case of guessing a polynomial root, trial and error for a Diophantine root will not result in any obvious convergence. There is no simple way of estimating what the solution will be. The original version Martin Gardner's 1958 Mathematical Games column begins its analysis by solving the original problem (with one coconut also remaining in the morning) because it is easier than Williams's version. Let F be the number of coconuts received by each sailor after the final division into 5 equal shares in the morning. Then the number of coconuts left before the morning division is 5F + 1; the number present when the fifth sailor awoke was (5/4)(5F + 1) + 1; the number present when the fourth sailor awoke was (5/4)[(5/4)(5F + 1) + 1] + 1; and so on. We find that the size N of the original pile satisfies the Diophantine equation 1024N = 15625F + 11529. Gardner points out that this equation is "much too difficult to solve by trial and error," but presents a solution he credits to J. H. C. Whitehead (via Paul Dirac): The equation also has solutions in negative integers. Trying out a few small negative numbers it turns out N = −4, F = −1 is a solution. We add 15625 to N and 1024 to F to get the smallest positive solution: N = 15,621, F = 1,023. Williams version Trial and error fails to solve Williams's version, so a more systematic approach is needed. Using a sieve The search space can be reduced by a series of increasingly larger factors by observing the structure of the problem so that a bit of trial and error finds the solution. 
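The algebra for the original version reduces to the linear Diophantine equation 1024N = 15625F + 11529 (as reconstructed from Gardner's derivation). A short arithmetic check confirms both the negative solution and the smallest positive one obtained by shifting it:

```python
# Check the Diophantine relation for the original (Gardner) version:
# 1024*N == 15625*F + 11529, with the negative solution N = -4, F = -1
# and the shifted positive solution N = 15621, F = 1023.
def holds(n, f):
    return 1024 * n == 15625 * f + 11529

print(holds(-4, -1), holds(-4 + 15625, -1 + 1024))  # True True
```

Because the coefficients 1024 and 15625 are coprime, solutions in N differ by exactly 15625, so 15,621 is indeed the smallest positive solution.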
The search space is much smaller if one starts with the number of coconuts received by each man in the morning division, because that number is much smaller than the number in the original pile. If F is the number of coconuts each sailor receives in the final division in the morning, the pile in the morning is 5F, which must also be divisible by 4, since the last sailor in the night combined 4 piles for the morning division. So the morning pile, call the number n, is a multiple of 20. The pile before the last sailor woke up must have been 5n/4 + 1. If only one sailor woke up in the night, then 5(20)/4 + 1 = 26 works for the minimum number of coconuts in the original pile. But if two sailors woke up, 26 is not divisible by 4, so the morning pile must be some multiple of 20 that yields a pile divisible by 4 before the last sailor wakes up. It so happens that 3·20 = 60 works for two sailors: applying the recursion formula for n twice yields 96 as the smallest number of coconuts in the original pile. 96 is divisible by 4 once more, so for 3 sailors awakening, the pile could have been 121 coconuts. But 121 is not divisible by 4, so for 4 sailors awakening, one needs to make another leap. At this point, the arithmetic becomes tedious: to accommodate 4 sailors awakening, the morning pile must be some multiple of 60; if one is persistent, it may be discovered that 17·60 = 1020 does the trick, and the minimum number in the original pile would be 2496. A last iteration on 2496 for 5 sailors awakening, i.e. 5(2496)/4 + 1, brings the original pile to 3121 coconuts. Blue coconuts Another device is to use extra objects to clarify the division process. Suppose that in the evening we add four blue coconuts to the pile. Then the first sailor to wake up will find the pile to be evenly divisible by five, instead of having one coconut left over.
The sailor divides the pile into fifths such that each blue coconut is in a different fifth; then he takes the fifth with no blue coconut, gives one of his coconuts to the monkey, and puts the other four fifths (including all four blue coconuts) back together. Each sailor does the same. During the final division in the morning, the blue coconuts are left on the side, belonging to no one. Since the whole pile was evenly divided 5 times in the night, it must have contained 5^5 = 3125 coconuts: 4 blue coconuts and 3121 ordinary coconuts. The device of using additional objects to aid in conceptualizing a division appeared as far back as 1912 in a solution due to Norman H. Anning. A related device appears in the 17-animal inheritance puzzle: A man wills 17 horses to his three sons, specifying that the eldest son gets half, the next son one-third, and the youngest son, one-ninth of the animals. The sons are confounded, so they consult a wise horse trader. He says, "here, borrow my horse." The sons duly divide the horses, discovering that all the divisions come out even, with one horse left over, which they return to the trader. Base 5 numbering A simple solution appears when the divisions and subtractions are performed in base 5. Consider the subtraction when the first sailor takes his share (and the monkey's). Let n0, n1, ... represent the digits of N, the number of coconuts in the original pile, and s0, s1, ... represent the digits of the sailor's share S, both base 5. After the monkey's share, the least significant digit of N must now be 0; after the subtraction, the least significant digit of N' left by the first sailor must be 1, hence the following (the actual number of digits in N as well as S is unknown, but they are irrelevant just now):

  n5 n4 n3 n2 n1 0   (N base 5)
−    s4 s3 s2 s1 s0  (S base 5)
                  1  (N' base 5)

The digit subtracted from 0 base 5 to yield 1 is 4, so s0 = 4. But since S is (N – 1)/5, and dividing by 5 in base 5 is just shifting the number right one position, n1 = s0 = 4.
So now the subtraction looks like:

  n5 n4 n3 n2 4 0
−    s4 s3 s2 s1 4
                 1

Since the next sailor is going to do the same thing on N', the least significant digit of N' becomes 0 after tossing one to the monkey, and the LSD of S' must be 4 for the same reason; the next digit of N' must also be 4. So now it looks like:

  n5 n4 n3 n2 4 0
−    s4 s3 s2 s1 4
               4 1

Borrowing 1 from n1 (which is now 4) leaves 3, so s1 must be 4, and therefore n2 as well. So now it looks like:

  n5 n4 n3 4 4 0
−    s4 s3 s2 4 4
               4 1

But the same reasoning again applies to N' as applied to N, so the next digit of N' is 4, so s2 and n3 are also 4, etc. There are 5 divisions; the first four must leave a pile ending in digit 1 (base 5) for the next division, but the last division must leave a pile ending in digit 0 (base 5) so the morning division will come out even (in 5s). So there are four 4s in N following an LSD of 1: N = 44441 base 5 = 3121 base 10. A numerical approach A straightforward numeric analysis goes like this: If N is the initial number, each of 5 sailors transitions the original pile thus: N => 4(N–1)/5 or equivalently, N => 4(N+4)/5 – 4. Repeating this transition 5 times gives the number left in the morning: N => 4(N+4)/5 – 4    => 16(N+4)/25 – 4    => 64(N+4)/125 – 4    => 256(N+4)/625 – 4    => 1024(N+4)/3125 – 4 Since that number must be an integer and 1024 is relatively prime to 3125, N+4 must be a multiple of 3125. The smallest such multiple is 3125·1, so N = 3125 – 4 = 3121; the number left in the morning comes to 1020, which is evenly divisible by 5 as required. Modulo congruence A simple succinct solution can be obtained by directly utilizing the recursive structure of the problem: There were five divisions of the coconuts into fifths, each time with one left over (putting aside the final division in the morning). The pile remaining after each division must contain an integral number of coconuts. If there were only one such division, then it is readily apparent that 5·1 + 1 = 6 is a solution.
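As a quick aside, the base-5 and numeric results above can both be confirmed in a couple of lines of Python (an illustrative check, not part of the original argument):

```python
# Base-5 digit argument: N is 44441 in base 5.
assert int("44441", 5) == 3121
# Numeric argument: N + 4 must be a multiple of 3125, so N = 3121,
# and the pile left in the morning is 1024*(N + 4)/3125 - 4.
morning = 1024 * (3121 + 4) // 3125 - 4
print(morning)  # 1020
```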
In fact any multiple of five plus one is a solution, and since a multiple of 5 plus 1 is also a multiple of 5 minus 4, a possible general formula is 5k – 4. So 11, 16, etc. also work for one division. If two divisions are done, a multiple of 5^2 = 25 rather than 5 must be used, because 25 can be divided by 5 twice. So the number of coconuts that could be in the pile is 25k – 4; k = 1, yielding 21, is the smallest positive number that can be successively divided by 5 twice with remainder 1. If there are 5 divisions, then multiples of 5^5 = 3125 are required; the smallest such number is 3125 – 4 = 3121. After 5 divisions, there are 1020 coconuts left over, a number divisible by 5 as required by the problem. In fact, after n divisions, it can be proven that the remaining pile is divisible by n (at least when n is prime, by Fermat's little theorem), a property made convenient use of by the creator of the problem. A formal way of stating the above argument is: The original pile of coconuts will be divided by 5 a total of 5 times with a remainder of 1, not considering the last division in the morning. Let N = number of coconuts in the original pile. Each division must leave the number of nuts in the same congruence class (mod 5). So, N ≡ 1 ≡ –4 (mod 5) (the –1 is the nut tossed to the monkey), and if N ≡ –4 (mod 5^k), then the pile left behind, 4(N – 1)/5, satisfies 4(N – 1)/5 ≡ –4 (mod 5^(k–1)) (–4 is the congruence class). So if we began in modulo class –4 nuts then we will remain in modulo class –4. Since ultimately we have to divide the pile 5 times, the original pile must satisfy N ≡ –4 (mod 5^5), so the smallest positive pile is 5^5 – 4 = 3121 coconuts. The remainder of 1020 coconuts conveniently divides evenly by 5 in the morning. This solution essentially reverses how the problem was (probably) constructed. The Diophantine equation and forms of solution The equivalent Diophantine equation for this version is: 1024N = 15625F + 8404    (1) where N is the original number of coconuts, and F is the number received by each sailor on the final division in the morning. This is only trivially different than the equation above for the predecessor problem, and solvability is guaranteed by the same reasoning.
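The Diophantine equation (1), 1024N = 15625F + 8404, can be sanity-checked by scanning values of F for which the right-hand side is divisible by 1024 (an illustrative Python sketch):

```python
# Collect (F, N) pairs with 1024*N = 15625*F + 8404 and small F >= 0.
solutions = [(F, (15625 * F + 8404) // 1024)
             for F in range(2000)
             if (15625 * F + 8404) % 1024 == 0]
print(solutions[:2])  # [(204, 3121), (1228, 18746)]
```

Successive solutions differ by 1024 in F and 15625 in N, as the periodic form (3) below predicts.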
Reordering, 1024N – 15625F = 8404    (2) This Diophantine equation has a solution which follows directly from the Euclidean algorithm; in fact, it has infinitely many periodic solutions positive and negative. If (x0, y0) is a solution of 1024x – 15625y = 1, then N0 = 8404·x0, F0 = 8404·y0 is a solution of (2), which means any solution must have the form N = N0 + 15625t, F = F0 + 1024t    (3) where t is an arbitrary parameter that can have any integral value. A reductionist approach One can take both sides of (1) above modulo 1024, so 15625F + 8404 ≡ 0 (mod 1024) Another way of thinking about it is that in order for N = (15625F + 8404)/1024 to be an integer, the RHS of the equation must be an integral multiple of 1024; that property will be unaltered by factoring out as many multiples of 1024 as possible from the RHS. Reducing both sides by multiples of 1024 and subtracting, 15625F + 8404 – 15·1024F – 8·1024 = 265F + 212, and factoring, 265F + 212 = 53(5F + 4) The RHS must still be a multiple of 1024; since 53 is relatively prime to 1024, 5F + 4 must be a multiple of 1024. The smallest such multiple is 1·1024, so 5F + 4 = 1024 and F = 204. Substituting into (1), N = (15625·204 + 8404)/1024 = 3121 Euclidean algorithm The Euclidean algorithm is quite tedious, but it is a general methodology for solving rational equations ax + by = c requiring integral answers.
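In code, the back-substitution carried out next is the extended Euclidean algorithm; a standard recursive version (a sketch of mine, not from the source) reproduces the Bézout coefficients and the final answer:

```python
from math import ceil

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(1024, 15625)
print(g, x, y)  # 1 -4776 313, i.e. 1024*(-4776) - 15625*(-313) = 1

# Scale by 8404 and shift by the smallest t that makes N positive:
t = ceil(4776 * 8404 / 15625)
N = -4776 * 8404 + 15625 * t
F = -313 * 8404 + 1024 * t
print(t, N, F)  # 2569 3121 204
```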
From (2) above, it is evident that 1024 (2^10) and 15625 (5^6) are relatively prime and therefore their GCD is 1, but we need the reduction equations for back substitution to obtain N and F in terms of these two quantities: First, obtain successive remainders until GCD remains:

15625 = 15·1024 + 265 (a)
1024 = 3·265 + 229 (b)
265 = 1·229 + 36 (c)
229 = 6·36 + 13 (d)
36 = 2·13 + 10 (e)
13 = 1·10 + 3 (f)
10 = 3·3 + 1 (g) (remainder 1 is GCD of 15625 and 1024)

1 = 10 – 3(13 – 1·10) = 4·10 – 3·13 (reorder (g), substitute for 3 from (f) and combine)
1 = 4·(36 – 2·13) – 3·13 = 4·36 – 11·13 (substitute for 10 from (e) and combine)
1 = 4·36 – 11·(229 – 6·36) = –11·229 + 70·36 (substitute for 13 from (d) and combine)
1 = –11·229 + 70·(265 – 1·229) = –81·229 + 70·265 (substitute for 36 from (c) and combine)
1 = –81·(1024 – 3·265) + 70·265 = –81·1024 + 313·265 (substitute for 229 from (b) and combine)
1 = –81·1024 + 313·(15625 – 15·1024) = 313·15625 – 4776·1024 (substitute for 265 from (a) and combine)

So (x0, y0) = (–4776, –313), and the pair (N0, F0) = (–4776·8404, –313·8404); the smallest t (see (3) in the previous subsection) that will make both N and F positive is 2569, so: N = –4776·8404 + 15625·2569 = 3121 and F = –313·8404 + 1024·2569 = 204. Continued fraction Alternately, one may use a continued fraction, whose construction is based on the Euclidean algorithm. The continued fraction for 1024/15625 (0.065536 exactly) is [0; 15, 3, 1, 6, 2, 1, 3, 3]; the convergent obtained by dropping its final term is 313/4776, giving us x0 = –4776 and y0 = –313. The least value of t for which both N and F are non-negative is 2569, so N = 3121 and F = 204. This is the smallest positive number that satisfies the conditions of the problem. A generalized solution When the number of sailors is a parameter, let it be m, rather than a computational value, careful algebraic reduction of the relation between the number of coconuts in the original pile and the number allotted to each sailor in the morning yields an analogous Diophantine relation whose coefficients are expressions in m.
The first step is to obtain an algebraic expansion of the recurrence relation corresponding to each sailor's transformation of the pile, n_i being the number left by the i-th sailor:

n_i = ((m – 1)/m)(n_(i–1) – 1)

where n_0 = N, the number originally gathered, and n_m the number left in the morning. Expanding the recurrence by substituting m times yields:

n_m = ((m – 1)/m)^m · N – ((m – 1)/m)·[((m – 1)/m)^(m–1) + ((m – 1)/m)^(m–2) + ⋯ + 1]

Factoring the latter term: the power series polynomial in brackets, of the form 1 + r + r^2 + ⋯ + r^(m–1), sums to (r^m – 1)/(r – 1), so with r = (m – 1)/m,

n_m = ((m – 1)/m)^m · N – ((m – 1)/m)·(((m – 1)/m)^m – 1)/((m – 1)/m – 1)

which simplifies to:

n_m = ((m – 1)^m/m^m)(N + m – 1) – (m – 1)

But n_m is the number left in the morning, which is a multiple of m (i.e. mF, F being the number allotted to each sailor in the morning):

mF = ((m – 1)^m/m^m)(N + m – 1) – (m – 1)

Solving for N (= n_0),

N = m^m(mF + m – 1)/(m – 1)^m – (m – 1)

The equation is a linear Diophantine equation in two variables, N and F. m is a parameter that can be any integer. The nature of the equation and the method of its solution do not depend on m. Number theoretic considerations now apply. For N to be an integer, it is sufficient that (mF + m – 1)/(m – 1)^m be an integer, so let it be r:

mF + m – 1 = r(m – 1)^m

The equation must be transformed into the form ax – by = ±c, whose solutions are formulaic. Hence:

mF – (m – 1)^m · r = –(m – 1), where a = m, b = (m – 1)^m, and c = m – 1

Because m and (m – 1)^m are relatively prime, there exist integer solutions by Bézout's identity. This equation can be restated as:

(m – 1)^m · r ≡ (m – 1) ≡ –1 (mod m)

But (m – 1)^m is a polynomial Zm – 1 if m is odd and Zm + 1 if m is even, where Z is a polynomial with monomial basis in m. Therefore r0 = 1 if m is odd and r0 = –1 if m is even is a solution. Bézout's identity gives the periodic solution r = km + r0, so substituting for r in the Diophantine equation and rearranging:

N = m^m(km + r0) – (m – 1)

where r0 = 1 for m odd and r0 = –1 for m even and k is any integer. For a given m, the smallest positive k will be chosen such that N satisfies the constraints of the problem statement. In the Williams version of the problem, m is 5 sailors, so r0 is 1, and k may be taken to be zero to obtain the lowest positive answer, so N = 1·5^5 – 4 = 3121 for the number of coconuts in the original pile.
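The closed form just derived, N = m^m(km + r0) – (m – 1) with r0 = 1 for odd m and –1 for even m, is easy to implement, and the result can be verified by direct simulation (a Python sketch; the function names are mine):

```python
def coconuts(m, k=0):
    """Closed-form solution N = m**m * (k*m + r0) - (m - 1)."""
    r0 = 1 if m % 2 else -1
    return m ** m * (k * m + r0) - (m - 1)

def works(m, N):
    """Simulate the night to confirm N satisfies Williams's rules."""
    for _ in range(m):
        if N % m != 1:        # each sailor must find remainder 1
            return False
        N = (m - 1) * (N - 1) // m
    return N % m == 0         # morning division comes out even

print(coconuts(5))      # 3121 (Williams's five sailors, k = 0)
print(coconuts(5, -1))  # -12504 (next sequential solution)
assert works(5, coconuts(5)) and works(4, coconuts(4, k=1))
```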
(It may be noted that the next sequential solution of the equation, for k = –1, is –12504, so trial and error around zero will not solve the Williams version of the problem, unlike the original version whose equation, fortuitously, had a small magnitude negative solution). Here is a table of the positive solutions for the first few m (k is any non-negative integer):

m = 2: N = 8k + 3 (smallest: 3)
m = 3: N = 81k + 25 (smallest: 25)
m = 4: N = 1024k + 765 (smallest: 765)
m = 5: N = 15625k + 3121 (smallest: 3121)
m = 6: N = 279936k + 233275 (smallest: 233275)

Other variants and general solutions Other variants, including the putative predecessor problem, have related general solutions for an arbitrary number of sailors. When the morning division also has a remainder of one, the solution is:

N = k·m^(m+1) – (m – 1)

For k = 1 and m = 5 this yields 15,621 as the smallest positive number of coconuts for the pre-Williams version of the problem. In some earlier alternate forms of the problem, the divisions came out even, and nuts (or items) were allocated from the remaining pile after division. In these forms, the recursion relation is:

n_i = ((m – 1)/m)·n_(i–1) – 1

The alternate form also had two endings, when the morning division comes out even, and when there is one nut left over for the monkey. When the morning division comes out even, the general solution reduces via a similar derivation to:

N = k·m^(m+1) – m

For example, when m = 4 and k = 1, the original pile has 1020 coconuts, and after four successive even divisions in the night, with a coconut allocated to the monkey after each division, 320 coconuts are left over in the morning, so the final division comes out even (80 coconuts each) with no coconut left over. When the morning division results in a nut left over, the general solution is:

N = k·m^(m+1) + r0·m^m – m

where r0 = –1 if m is odd, and r0 = 1 if m is even (note the signs of r0 are reversed relative to the Williams version). For example, when m = 3 and k = 1, the original pile has 51 coconuts, and after three successive divisions in the night with a coconut allocated to the monkey after each division, there are 13 coconuts left over in the morning, so the final division has a coconut left over for the monkey. Other post-Williams variants which specify different remainders, including positive ones (i.e. the monkey adds coconuts to the pile), have been treated in the literature.
The solution is:

N = k·m^(m+1) + c(r0·m^m – (m – 1))

where r0 = 1 for m odd and r0 = –1 for m even, c is the remainder after each division (or the number of monkeys) and k is any integer (c is negative if the monkeys add coconuts to the pile). Other variants in which the number of men or the remainders vary between divisions are generally outside the class of problems associated with the monkey and the coconuts, though these similarly reduce to linear Diophantine equations in two variables. Their solutions yield to the same techniques and present no new difficulties. See also Archimedes's cattle problem, a substantially more difficult Diophantine problem Fermat's Last Theorem, possibly the most famous Diophantine equation of all Cannonball problem References Sources Antonick, Gary (2013). Martin Gardner's The Monkey and the Coconuts in Numberplay, The New York Times, October 7, 2013 Pleacher, David (2005). Problem of the Week: The Monkey and the Coconuts, May 16, 2005 Pappas, Theoni (1993). The Joy of Mathematics: Discovering Mathematics All Around, Wide World Publishing, January 23, 1993 Wolfram MathWorld: Monkey and Coconut Problem Kirchner, R. B. "The Generalized Coconut Problem." Amer. Math. Monthly 67, 516–519, 1960. Fadiman, Clifton (1962). The Mathematical Magpie, Simon & Schuster Bogomolny, Alexander (1996) Negative Coconuts at cut-the-knot External links Monkeys and Coconuts – Numberphile video Coconuts, a copy of the story as it appeared in the Saturday Evening Post The Monkey and the Coconuts: An Introduction to the Extended Euclidean Algorithm Recreational mathematics Puzzles Mathematical problems Diophantine equations Coconuts
The monkey and the coconuts
[ "Mathematics" ]
6,071
[ "Recreational mathematics", "Mathematical objects", "Equations", "Diophantine equations", "Mathematical problems", "Number theory" ]
58,233,805
https://en.wikipedia.org/wiki/Swamp%20Works
The Swamp Works is a lean-development, rapid innovation environment at NASA's Kennedy Space Center. It was founded in 2012, when four laboratories in the Surface Systems Office were merged into an enlarged facility with a modified philosophy for rapid technology development. Those laboratories are the Granular Mechanics and Regolith Operations Lab, the Electrostatics and Surface Physics Lab, the Applied Chemistry Lab, and the Life Support and Habitation Systems (LSHS) team. The first two of these are located inside the main Swamp Works building, while the other two use the facility although their primary work is located elsewhere. The team developed the Swamp Works operating philosophy from Kelly Johnson's Skunk Works, including the "14 Rules of Management", from the NASA development shops of Wernher von Braun, and from the innovation culture of Silicon Valley. The team prototypes space technologies rapidly to learn early in the process how to write better requirements, enabling them to build better products, rapidly, and at reduced cost. It was named the Swamp Works for similarity with the Skunk Works and the Phantom Works, and branded for the widespread marshes (swamps) on the Cape Canaveral and Merritt Island property of the Kennedy Space Center. The Swamp Works was co-founded by NASA engineers and scientists Jack Fox, Rob Mueller, and Philip Metzger. The logo, a robotic alligator, was designed by Rosie Mueller, a professional designer and the spouse of Rob Mueller. Swamp Works Facility The Swamp Works' main facility is the high bay in KSC's Engineering Development Lab, which was formerly the Astronaut Training Building during the NASA Apollo program. The building is where Apollo astronauts practiced working with the Lunar Module for lunar landings and extravehicular activities. During the Space Shuttle era it was used as a destination for bus tours from the KSC Visitor Complex.
After the visitor complex decided it no longer needed the facility, it was handed back to NASA and was renovated for Swamp Works. The high bay was furnished with a lunar soil testing facility, the "Big Bin", which is thought to be the world's largest indoor, climate-controlled lunar regolith chamber and contains 120 tons of BP-1 simulated lunar soil. The simulated soil is a finely crushed basalt from Black Point, Arizona, that has mechanical properties matching lunar soil. The facility has four 3D printers and an adjacent machine shop with lathes, drill presses, a CNC router, and other equipment to enable rapid iterative prototyping. The facility also includes an Innovation Space where employees are encouraged to work informally in the upstairs loft. Granular Mechanics and Regolith Operations Lab The Granular Mechanics and Regolith Operations (GMRO) Lab combines theoretical and experimental granular mechanics with applied robotics to operate with the soil on other planetary bodies, known as regolith. GMRO develops technologies to mine, convey, extract resources from, manufacture with, and construct infrastructure, like buildings and rocket landing pads, from regolith. GMRO also develops self-cleaning connectors for the dusty lunar and martian environments, performs research into rocket blast effects for landing or launching on the surfaces of the Moon, Mars or asteroids, and has developed a miniature space mining robot called the Regolith Advanced Surface Systems Operations Robot (RASSOR). RASSOR has fore-and-aft counter-rotating bucket drums to dig in soil in almost zero gravity. The GMRO Lab is involved in organizing and judging the NASA Robotic Mining Competition, held at the Kennedy Space Center each May, and the Swarmathon University Challenge for swarming robots. GMRO also built the KSC Hazard Field at the north end of the Space Shuttle's runway, which is a field of simulated craters and boulders in sandy regolith.
The hazard field was used by the Morpheus Lander project for flight tests in 2013-2014. The GMRO Lab has a large industrial robot arm used for printing buildings from lunar or martian (simulated) regolith mixed with recycled plastic. Electrostatics and Surface Physics Lab The Electrostatics and Surface Physics Lab (ESPL) develops technologies related to the unique physics that occurs at material surfaces, leveraging it for applications in space. It developed an Electrodynamic Dust Shield that uses electrostatic forces that shift locations to sweep lunar or Martian dust from spacecraft surfaces. It created sensors that can be mounted into the wheels of planetary rovers to measure the spectrometry of tribocharging, as an identification tool for the minerals the rover is driving over. It is also working on graphene as an energy storage medium. The ESPL and GMRO Lab worked together to develop a Mars entry heat shield made out of regolith bonded by high temperature polymer. It could be made on the Martian moon Phobos and then attached to a spacecraft from Earth to land on Mars, resulting in a cost savings for Mars missions. Applied Chemistry Lab The Applied Chemistry Lab develops technologies to support launch activities on the Kennedy Space Center and for use on the surfaces of the Moon, Mars or asteroids. Technologies for terrestrial ground operations include toxic vapor detection and environmental remediation. Technologies for use in space include chemical extraction of resources from lunar or Martian soil, recycling packing materials from space launch to create methane and other needed gases, and development of payload instruments for prospecting lunar ice. Life Support and Habitation Systems Team The Life Support and Habitation Systems team develops technologies in four main areas. The first is recovery and recycling of water onboard spacecraft. The second is controlling the trace amounts of chemicals such as ammonia that could build up in a spacecraft's enclosed atmosphere.
The third is characterizing the microbial content of solid waste during space missions. The fourth is producing food through plant growth. The lab developed and operates the payload VEGGIE on board the International Space Station, which uses LED lighting at specific frequencies to cause plant growth with minimum energy. See also Breakthrough Propulsion Physics Program JPL NASA Eagleworks References External links Swamp Works website Laboratories in the United States Space technology Space technology research institutes Kennedy Space Center NASA groups, organizations, and centers
Swamp Works
[ "Astronomy" ]
1,225
[ "Space technology", "Outer space" ]
58,234,137
https://en.wikipedia.org/wiki/UKM%20Medical%20Molecular%20Biology%20Institute
The UKM Medical Molecular Biology Institute, usually referred to as UMBI, is a biomedicine and cancer research institute located in Bandar Tun Razak, Kuala Lumpur, Malaysia. The institute is one of the research institutes of the National University of Malaysia (UKM). UMBI was established in 2003. The institute was recognized as a Higher Institution Centre of Excellence (HICoE) in 2009 by the Prime Minister of Malaysia. History The UKM Medical Molecular Biology Institute (UMBI) was founded as one of the Centres of Excellence in UKM after approval from the National University of Malaysia senate meeting. UMBI was officially established in July 2003 with an operating budget of RM 25 thousand allocated to the new institute. Professor Datuk Dr. A Rahman A Jamal was appointed founding director of UMBI in 2003 and served until his tenure ended in 2017. Director Professor Datuk Dr A Rahman A Jamal 2003 - 2017 Professor Dr Shamsul Azhar Shah 2018 - 2019 Dr Nor Azian Abdul Murad 2019 - current References External links Official website of UKM Medical Molecular Biology Institute Biotechnology Genetics or genomics research institutions National University of Malaysia Organisations based in Malaysia Research institutes in Malaysia
UKM Medical Molecular Biology Institute
[ "Biology" ]
239
[ "nan", "Biotechnology" ]
58,238,950
https://en.wikipedia.org/wiki/Aflavinine
Aflavinine is an anti-insectan chemical compound produced by some plants and fungi. References Indole alkaloids
Aflavinine
[ "Chemistry" ]
26
[ "Alkaloids by chemical classification", "Indole alkaloids" ]
58,241,500
https://en.wikipedia.org/wiki/Biomarkers%20of%20multiple%20sclerosis
Several biomarkers for diagnosis of multiple sclerosis, disease evolution and response to medication (current or expected) are under research. While most of them are still under research, some are already well established: oligoclonal bands: These represent proteins present in the CNS or in blood. Those present in the CNS but not in blood suggest a diagnosis of MS. MRZ-Reaction: A polyspecific antiviral immune response against the viruses of measles, rubella and zoster, first described in 1992. In some reports the MRZR showed a lower sensitivity than OCB (70% vs. 100%), but a higher specificity (92% vs. 69%) for MS. free light chains (FLC): Several authors have reported that they are comparable or even better than oligoclonal bands. Biomarkers can be of several types, such as body fluid biomarkers, imaging biomarkers or genetic biomarkers, and they are expected to play an important role in the near future of MS. Classification Biomarkers can be classified according to several criteria. It is common to classify them according to their source (imaging biomarkers, body fluid biomarkers and genetic biomarkers) or their utility (diagnosis, evolution and response to medication). Among the imaging biomarkers in MS the best known is MRI, by two methods, gadolinium contrast and T2-hyperintense lesions, but PET and OCT are also important. Among the body fluid biomarkers the best known are oligoclonal bands in CSF, but several others are under research. Genetic biomarkers are under study, but nothing conclusive has been established yet. Addressing the classification by utility, there are diagnosis biomarkers, evolution biomarkers and response-to-medication biomarkers. Biomarkers for diagnosis Apart from its possible involvement in disease pathogenesis, vitamin D has been proposed as a biomarker of disease evolution. Diagnosis of MS has always been made by clinical examination, supported by MRI or CSF tests.
According to both the pure autoimmune hypothesis and the immune-mediated hypothesis, researchers expect to find biomarkers able to yield a better diagnosis, and able to predict the response to the different available treatments. As of 2016 no specific biomarker for MS had been found, but several studies are trying to find one. Some researchers are also focusing on specific diagnosis for each of the clinical courses. Some focus on blood tests, given their easy availability for diagnosis. Among the studies for blood tests, the highest sensitivity and specificity reported to date is testing circulating erythrocytes (s=98.3%, e=89.5%). A good result was also obtained using methylation patterns of circulating cell debris, which are specific for a number of conditions, including RRMS. There are ongoing efforts to be able to diagnose MS by analysing myelin debris in the blood stream. As of 2014, the only fully specific biomarkers found were four proteins in the CSF: CRTAC-IB (cartilage acidic protein), tetranectin (a plasminogen-binding protein), SPARC-like protein (a calcium-binding cell-signalling glycoprotein), and autotaxin-T (a phosphodiesterase). This list was expanded in 2016 with three CSF proteins (immunoglobulins) reported specific for MS: Ig γ-1 (chain C region), Ig heavy chain V-III (region BRO) and Ig-κ-chain (C region). For existing damage and disease evolution During a clinical trial for one of the main MS drugs, a catheter was inserted into the brain's ventricles of the patients. Existing damage was evaluated and correlated with body fluids. Thanks to the courage of these volunteers, we now know that in PPMS, the neurofilament light chain (NF-L) level, in CSF and serum, is a sensitive and specific marker for white matter axonal injury. Among biomarkers from MRI images, radial diffusivity has been suggested as a biomarker associated with the level of myelination in MS lesions.
However, it is also affected by tissue destruction, which may lead to exaggeration of diffusivity measures. Diffusivity can be more accurate: distinct patterns of diffusivity in MS lesions suggest that axonal loss dominates in the T1 hypointense core and that the effects of de/remyelination may be better detected in the "T2-rim", where there is relative preservation of structural integrity. Glial fibrillary acidic protein (GFAP) has been indicated as a possible biomarker for the progression of MS. The blood level of GFAP increases when astrocytes are damaged or activated, and elevated levels of the protein's cellular component correlate with severity of MS symptoms. Treatments and response to therapy Currently the only clear biomarker that predicts a response to therapy is the presence of anti-MOG autoantibodies in blood. Anti-MOG seropositive patients do not respond to approved MS medications. In fact, it seems that MS patients with anti-MOG positivity could be considered to have a different disease in the near future. Comparative Effectiveness Research (CER) is an emerging field in multiple sclerosis treatment. The response of the disease to the different available medications cannot currently be predicted, and such prediction would be desirable. But the ideal target is to find subtypes of the disease that respond better to a specific treatment. A good example could be the discovery that the presence of a gene called SLC9A9 appears in people who fail to respond to interferon β therapy, or that the dysregulation of some transcription factors defines molecular subtypes of the disease. Another good example could be the Hellberg-Eklund score for predicting the response to natalizumab. Though biomarkers are normally assumed to be chemical compounds in body fluids, imaging can also be considered a biomarker. As an example of research in this area, it has been found that fingolimod is especially suitable for patients with frequently relapsing spinal cord lesions with open-ring enhancement.
In any case, patients with spinal cord lesions could have different T-helper cell patterns than those with brain lesions. Biomarkers are also important for the expected response to therapy. As an example of current research, in 2000 it was noticed that patients with pattern II lesions were dramatically responsive to plasmapheresis, and in February 2016 the first patent was granted to test the lesion pattern of a patient without biopsy. Other examples could be the proposal of the protein SLC9A9 (gene: solute carrier family 9) as a biomarker for the response to interferon beta, as also happens for serum cytokine profiles; the same has been proposed for MxA protein mRNA. The presence of anti-MOG, even with a CDMS diagnosis, can be considered a biomarker against MS disease-modifying therapies like fingolimod. As of 2014 no biomarker with perfect correlation had been found, but some have shown a special behavior, like IgG and IgM oligoclonal bands in the cerebrospinal fluid, autoantibodies against neurotropic viruses (MRZ reaction) and the potassium channel Kir4.1. A biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes or pharmacological responses to a therapeutic intervention. Type 0 biomarkers are those related to the course of a pathogenic process and type 1 are those that show the effects of the therapeutic intervention.
As of 2014, the only fully specific biomarkers found to date are four proteins in the CSF: CRTAC-IB (cartilage acidic protein), tetranectin (a plasminogen-binding protein), SPARC-like protein (a calcium-binding cell-signalling glycoprotein), and autotaxin-T (a phosphodiesterase). Nevertheless, abnormal concentrations of non-specific proteins, like chitinases, can also help in the diagnosis. The list was expanded in 2016, with three CSF proteins (immunoglobulins) reported as specific for MS: Ig γ-1 (chain C region), Ig heavy chain V-III (region BRO) and Ig-κ-chain (C region). Biomarkers are also important for the expected response to therapy. The protein SLC9A9 (gene solute carrier family 9) has currently been proposed as a biomarker for the response to interferon beta. Molecular biomarkers in blood Blood serum of MS patients shows abnormalities. Endothelin-1 shows perhaps the most striking discordance between patients and controls, being 224% higher in patients than in controls. Creatine and uric acid levels are lower than normal, at least in women. Ex vivo CD4(+) T cells isolated from the circulation show aberrant TIM-3 (immunoregulation) behavior, and relapses are associated with CD8(+) T cells. There is a set of differentially expressed genes in peripheral blood T cells between clinically active MS patients and healthy subjects. There are also differences between acute relapses and complete remissions. Platelet levels are known to be abnormally high. MS patients are also known to be CD46 defective, which leads to interleukin-10 (IL-10) deficiency, involved in inflammatory reactions. Levels of IL-2, IL-10, and GM-CSF are lower than normal in MS females; IL-6, in contrast, is higher. These findings do not apply to men. This IL-10 could be related to the mechanism of action of methylprednisolone, together with CCL2. 
Interleukin IL-12 is also known to be associated with relapses, but this is unlikely to be related to the response to steroids. Kallikreins are found in serum and are associated with the secondary progressive stage. Related to this, it has been found that B1 receptors, part of the kallikrein-kinin system, are involved in BBB breakdown. There is evidence of apoptosis-related molecules in blood, and they are related to disease activity. B cells appear in CSF and correlate with early brain inflammation. There is also an overexpression of IgG-free kappa light chain protein in both CIS and RR-MS patients compared with control subjects, together with an increased expression of an isoform of apolipoprotein E in RR-MS. Expression of some specific proteins in circulating CD4+ T cells is a risk factor for conversion from CIS to clinically defined multiple sclerosis. Recently, unique autoantibody patterns that distinguish RRMS, secondary progressive (SPMS), and primary progressive (PPMS) have been found, based on up- and down-regulation of CNS antigens, tested by microarrays. In particular, RRMS is characterized by autoantibodies to heat shock proteins that were not observed in PPMS or SPMS. These antibody patterns can be used to monitor disease progression. Finally, a promising biomarker under study is an antibody against the potassium channel protein KIR4.1. This biomarker has been reported to be present in around half of MS patients, but in nearly none of the controls. Micro-RNA in blood Micro-RNAs are non-coding RNAs of around 22 nucleotides in length. They are present in blood and in CSF. Several studies have found specific micro-RNA signatures for MS. They have been proposed as biomarkers for the presence of the disease and its evolution, and some of them, like miR-150, are under study, especially for patients with lipid-specific oligoclonal IgM bands. Circulating microRNAs have been proposed as biomarkers. 
There is current evidence that at least 60 circulating miRNAs are dysregulated in MS patients' blood, and profiling results are continuously emerging. Circulating miRNAs are highly stable in blood and easy to collect, and the quantification method, if standardized, can be accurate and cheap. They are putative biomarkers to diagnose MS but could also serve to differentiate MS subtypes, anticipate relapses and propose a customized treatment. MiRNA has even been proposed as a primary cause of MS and its damaged white matter areas. Genetic biomarkers for MS type By RNA profile The RNA type of the MS patient can also be found in blood serum. Two types have been proposed, classifying the patients as MSA or MSB, allegedly predicting future inflammatory events. By transcription factor The autoimmune disease-associated transcription factors EOMES and TBX21 are dysregulated in multiple sclerosis and define a molecular subtype of the disease. The importance of this discovery is that the expression of these genes appears in blood and can be measured by a simple blood analysis. NR1H3 mutation Some PPMS patients have been found to have a special genetic variant named rapidly progressive multiple sclerosis. In these cases MS is due to a mutation inside the gene NR1H3, an arginine-to-glutamine mutation at position p.Arg415Gln, in an area that encodes the protein LXRA. In blood vessel tissue Endothelial dysfunction has been reported in MS and could be used as a biomarker via biopsy. Blood circulation is slower in MS patients and can be measured using contrast or by MRI. Interleukin-12p40 has been reported to separate RRMS and CIS from other neurological diseases. In cerebrospinal fluid The most specific laboratory marker of MS reported to date, as of 2016, is the intrathecal MRZ (measles, rubella and varicella zoster) reaction, showing 78% sensitivity and 97% specificity. 
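The quoted sensitivity and specificity can be converted into likelihood ratios, which express how strongly a positive or negative MRZ result shifts the odds of MS. A minimal arithmetic sketch, using only the figures quoted above:

```python
# Likelihood ratios derived from the quoted MRZ-reaction performance.
sensitivity = 0.78  # P(positive MRZ reaction | MS)
specificity = 0.97  # P(negative MRZ reaction | no MS)

lr_positive = sensitivity / (1 - specificity)   # odds multiplier after a positive result
lr_negative = (1 - sensitivity) / specificity   # odds multiplier after a negative result

# A positive MRZ reaction multiplies the pre-test odds of MS roughly 26-fold,
# while a negative result only lowers them to about 0.23 of their prior value.
print(round(lr_positive), round(lr_negative, 2))
```

The asymmetry illustrates why a highly specific marker like the MRZ reaction is better at ruling MS in than at ruling it out.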
It has been known for quite some time that glutamate is present at higher levels in CSF during relapses, maybe because of IL-17 dysregulation, and at higher levels in MS patients before relapses compared to healthy subjects. This observation has been linked to the activity of the infiltrating leukocytes and activated microglia, and to damage to the axons and to the oligodendrocytes, which are supposed to be the main cleaning agents for glutamate. A specific MS protein, chromogranin A, has also been found in CSF, possibly related to axonal degeneration. It appears together with clusterin and complement C3, markers of complement-mediated inflammatory reactions. Fibroblast growth factor-2 also appears higher in CSF. Varicella-zoster virus particles have been found in the CSF of patients during relapses, but these particles are virtually absent during remissions. Plasma cells in the cerebrospinal fluid of MS patients could also be used for diagnosis, because they have been found to produce myelin-specific antibodies. As of 2011, a recently discovered myelin protein, TPPP/p25, has been found in the CSF of MS patients. A study found that quantification of several immune cell subsets, both in blood and CSF, showed differences between intrathecal (from the spine) and systemic immunity, and between CSF cell subtypes in the inflammatory and noninflammatory groups (basically RRMS/SPMS compared to PPMS). This showed that some patients diagnosed with PPMS shared an inflammatory profile with RRMS and SPMS, while others did not. 
Another study, using a proteomic analysis of the CSF, found that the peak intensity of the signals corresponding to secretogranin II and protein 7B2 was significantly upregulated in RRMS patients compared to PrMS (p<0.05), whereas the signals of fibrinogen and fibrinopeptide A were significantly downregulated in CIS compared to PrMS patients. As of 2014 the CSF signature of MS is considered to be a combination of cytokines. CSF lactate has been found to correlate with disease progression. Three proteins in CSF have been found to be specific for MS. They are the following immunoglobulins: Ig γ-1 (chain C region), Ig heavy chain V-III (region BRO) and Ig-κ-chain (C region). Other interesting byproducts of the MS attack are the neurofilaments, remnants of the neural damage, and the immunoglobulin heavy chains. Oligoclonal bands CSF also shows oligoclonal bands (OCB) in the majority (around 95%) of patients. Several studies have reported differences between patients with and without OCB with regard to clinical parameters such as age, gender, disease duration, clinical severity and several MRI characteristics, together with a varying lesion load. CSF oligoclonal bands may or may not be reflected in serum, which points to a heterogeneous origin. Though early theories assumed that the OCBs were somehow pathogenic autoantigens, recent research has shown that the immunoglobulins present in them are antibodies against debris, and therefore OCBs seem to be just a secondary effect of MS. Given that OCBs are not pathogenic, their remaining importance is to demonstrate the production of intrathecal immunoglobulins (IgGs) against debris, but this can be shown by other methods. Especially interesting are the free light chains (FLCs), especially the kappa-FLCs (kFLCs). Free kappa chains in CSF have been proposed as a marker for MS evolution. Biomarkers in brain cells and biopsies Abnormal sodium distribution has been reported in living MS brains. 
In early-stage RRMS patients, sodium MRI revealed abnormally high concentrations of sodium in the brainstem, cerebellum and temporal pole. In advanced-stage RRMS patients, abnormally high sodium accumulation was widespread throughout the whole brain, including normal-appearing brain tissue. It is currently unknown whether post-mortem brains are consistent with this observation. The pre-active lesions are clusters of microglia driven by the HspB5 protein, thought to be produced by stressed oligodendrocytes. The presence of HspB5 in biopsies can be a marker for lesion development. Retinal cells are considered part of the CNS and present a characteristic thickness loss that can separate MS from NMO. Biomarkers for the clinical course Currently it is possible to distinguish between the three main clinical courses (RRMS, SPMS and PPMS) using a combination of four blood protein tests with an accuracy of around 80%. Currently the best predictor for clinical multiple sclerosis is the number of T2 lesions visualized by MRI during the CIS, but it has been proposed to complement this with MRI measures of BBB permeability. It is normal to evaluate diagnostic criteria against the "time to conversion to definite". Imaging biomarkers: MRI, PET and OCT Magnetic resonance imaging (MRI) and positron emission tomography (PET) are two techniques currently used in MS research. While the first is routinely used in clinical practice, the second is also helping to understand the nature of the disease. In MRI, some post-processing techniques have improved the image. SWI-adjusted magnetic resonance has given results close to 100% specificity and sensitivity with respect to McDonald CDMS status, and magnetization transfer MRI has shown that NAWM evolves during the disease, reducing its magnetization transfer coefficient. PET is able to show the activation status of microglia, which are macrophage-like cells of the CNS whose activation is thought to be related to the development of the lesions. 
Microglial activation is shown using tracers for the 18 kDa translocator protein (TSPO), like the radioligand PK11195. Biomarkers for MS pathological subtype Differences have been found between the proteins expressed by patients and healthy subjects, and between attacks and remissions. Using DNA microarray technology, groups of molecular biomarkers can be established. For example, it is known that anti-lipid oligoclonal IgM bands (OCMB) distinguish MS patients with an early aggressive course, and that these patients show a favourable response to immunomodulatory treatment. It seems that Fas and MIF are candidate biomarkers of progressive neurodegeneration. Upregulated levels of sFas (the soluble form of the Fas molecule) were found in MS patients with hypointense lesions with progressive neurodegeneration, and levels of MIF also appeared to be higher in progressive than in non-progressing patients. Serum TNF-α and CCL2 seem to reflect the presence of inflammatory responses in primary progressive MS. As previously reported, there is an antibody against the potassium channel protein KIR4.1 which is present in around half of MS patients, but in nearly none of the controls, pointing towards a heterogeneous etiology in MS. The same happens with B cells. DRB3*02:02 patients Especially interesting is the case of DRB3*02:02 patients (HLA-DRB3*–positive patients), who seem to have a clear autoimmune reaction against a protein called GDP-L-fucose synthase. Biomarkers for response to therapy Response to therapy is heterogeneous in MS. Serum cytokine profiles have been proposed as biomarkers for response to Betaseron, and the same was proposed for MxA mRNA. References Biomarkers Multiple sclerosis
Biomarkers of multiple sclerosis
[ "Biology" ]
4,646
[ "Biomarkers" ]
58,246,357
https://en.wikipedia.org/wiki/Henry%20Schlinger
Henry David Schlinger Jr. is an American psychologist known for his work in applied behavior analysis. He is a professor of psychology at California State University, Los Angeles, where he was formerly the director of the M.S. Program in Applied Behavior Analysis. He also holds a part-time position as an associate professor in the Chicago School of Professional Psychology's Applied Behavior Analysis program. He is a former editor-in-chief of both the Analysis of Verbal Behavior and the Behavior Analyst. He is a member of the Association for Behavior Analysis International and the board of trustees of the Cambridge Center for Behavioral Studies. References Behaviourist psychologists Living people Western Michigan University alumni Southern Methodist University alumni California State University, Los Angeles faculty American academic journal editors 21st-century American psychologists Year of birth missing (living people)
Henry Schlinger
[ "Biology" ]
163
[ "Behaviourist psychologists", "Behavior", "Behaviorism" ]
58,246,920
https://en.wikipedia.org/wiki/Equal%20Earth%20projection
The Equal Earth map projection is an equal-area pseudocylindrical global map projection, invented by Bojan Šavrič, Bernhard Jenny, and Tom Patterson in 2018. It is inspired by the widely used Robinson projection, but unlike the Robinson projection, retains the relative size of areas. The projection equations are simple to implement and fast to evaluate. The features of the Equal Earth projection include: The curved sides of the projection suggest the spherical form of Earth. Straight parallels make it easy to compare how far north or south places are from the equator. Meridians are evenly spaced along any line of latitude. Software for implementing the projection is easy to write and executes efficiently. According to the creators, the projection was created in response to the decision of the Boston public schools to adopt the Gall-Peters projection for world maps in March 2017, to accurately show the relative sizes of equatorial and non-equatorial regions. The decision generated controversy in the world of cartography due to this projection's extreme distortion in the polar regions. At that time Šavrič, Jenny, and Patterson sought alternative equal-area map projections for world maps, but could not find any that met their aesthetic criteria. Therefore, they created a new projection that had more visual appeal compared to existing equal-area projections. As with the earlier Natural Earth projection (2012) introduced by Patterson, a visual method was used to choose the parameters of the projection. A combination of the Putniņš P4ʹ and Eckert IV projections was used as the basis. Mathematical formulae for the projection were derived from a polynomial used to define the spacing of parallels. Formulation The projection is formulated as the equations x = 2√3 λ cos θ / (3(9 A4 θ^8 + 7 A3 θ^6 + 3 A2 θ^2 + A1)) and y = A4 θ^9 + A3 θ^7 + A2 θ^3 + A1 θ, where the parametric latitude θ is given by sin θ = (√3/2) sin φ, φ refers to latitude and λ to longitude, and the coefficients are A1 = 1.340264, A2 = −0.081106, A3 = 0.000893 and A4 = 0.003796. 
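Using the published coefficient values (A1 = 1.340264, A2 = −0.081106, A3 = 0.000893, A4 = 0.003796), the projection can be sketched in a few lines of Python. Note that the x equation divides by the derivative of the y polynomial, which is what makes the projection equal-area:

```python
import math

# Polynomial coefficients from Šavrič, Patterson & Jenny (2018).
A1, A2, A3, A4 = 1.340264, -0.081106, 0.000893, 0.003796

def equal_earth(lon_deg, lat_deg):
    """Project geographic coordinates (degrees) to Equal Earth x, y."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    # Parametric latitude: sin(theta) = (sqrt(3)/2) * sin(phi)
    theta = math.asin(math.sqrt(3) / 2 * math.sin(phi))
    # y is an odd 9th-degree polynomial in theta; x divides by its derivative.
    dy_dtheta = A1 + 3 * A2 * theta**2 + 7 * A3 * theta**6 + 9 * A4 * theta**8
    x = 2 * math.sqrt(3) * lam * math.cos(theta) / (3 * dy_dtheta)
    y = A1 * theta + A2 * theta**3 + A3 * theta**7 + A4 * theta**9
    return x, y
```

For example, equal_earth(0, 0) maps to the origin, and west and east longitudes map to mirror-image x values, reflecting the projection's symmetry about the central meridian.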
Use The first known thematic map published using the Equal Earth projection is a map of the global mean temperature anomaly for July 2018, produced by NASA's Goddard Institute for Space Studies. References External links Map projections
Equal Earth projection
[ "Mathematics" ]
386
[ "Map projections", "Coordinate systems" ]
76,536,009
https://en.wikipedia.org/wiki/Thorium%20trichloride
Thorium trichloride is a binary inorganic compound of thorium metal and chlorine with the chemical formula ThCl3. Synthesis The compound can be prepared by reducing thorium tetrachloride at 800 °C, or by direct reaction of the elements; other routes are also known. Physical properties The compound forms crystals of the uranium trichloride structure type. Chemical properties Above 630 °C thorium trichloride dissociates into the dichloride and tetrachloride. Uses Thorium trichloride has been proposed for use as reactor fuel in a dual fluid reactor. References Thorium compounds Nuclear materials Chlorides Actinide halides
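The preparation and dissociation routes described above can be written as balanced equations. This is a sketch: the stoichiometries are inferred from the oxidation states involved and are an assumption, not taken from the source:

```latex
% Reduction of the tetrachloride by thorium metal at 800 °C (assumed stoichiometry):
\mathrm{Th} + 3\,\mathrm{ThCl_4} \longrightarrow 4\,\mathrm{ThCl_3}
% Direct combination of the elements (assumed stoichiometry):
2\,\mathrm{Th} + 3\,\mathrm{Cl_2} \longrightarrow 2\,\mathrm{ThCl_3}
% Disproportionation above 630 °C into the dichloride and tetrachloride:
2\,\mathrm{ThCl_3} \longrightarrow \mathrm{ThCl_2} + \mathrm{ThCl_4}
```

Each equation conserves both thorium and chlorine atoms, consistent with the Th(IV)/Th(III)/Th(II) oxidation states involved.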
Thorium trichloride
[ "Physics", "Chemistry" ]
137
[ "Chlorides", "Inorganic compounds", "Salts", "Materials", "Nuclear materials", "Matter" ]
76,536,104
https://en.wikipedia.org/wiki/Thomas%20Vogt
Thomas Vogt (born 1958) is a German chemist and material scientist. He is an Educational Foundation Distinguished Professor in the Department of Chemistry and Biochemistry at the University of South Carolina. Vogt is most known for his work in structural chemistry, chemical synthesis, and structure-property correlations of metal oxides based on diffraction techniques using electrons, x-rays, and neutrons. He has authored and co-authored over 300 peer-reviewed journal articles and several books such as Solid State Materials Chemistry and Modelling Nanoscale Imaging in Electron Microscopy. He is the recipient of the 1996 R&D 100 award from R&D Magazine, the 2002 Design and Engineering Award of Popular Mechanics, the 2018 Carolina Trustee Professorship Award, and the 2019 USC Educational Foundation Award in Science, Mathematics and Engineering. Vogt is a Fellow of the American Physical Society, the American Association for the Advancement of Science, the Neutron Scattering Society of America, as well as of the Institute of Advanced Study at Durham University and was a Founding Member of the editorial board for Physical Review Applied. Education Vogt earned a Diploma in Chemistry in 1985, followed by a PhD in 1987, both from the University of Tübingen. Career After working at a European and US national laboratory (Institute Laue Langevin and Brookhaven National Laboratory), Vogt began an academic career at the Department of Philosophy at the University of South Carolina. He teaches The History and Philosophy of Chemistry in the South Carolina Honors College. Later he became a professor in the Department of Chemistry and Biochemistry at the University of South Carolina, where he has been the Educational Foundation Distinguished Professor since 2010. 
From 2005 to 2023, Vogt served as Director of the NanoCenter at the University of South Carolina and was Associate Vice President for Research from 2011 to 2013, and a member of the Board of Directors of the USC Research Foundation from 2008 to 2012. He was the co-chair of the Search Committee for Provost and Chief Academic Officer in 2019 and later a Pearce Faculty Fellow in the South Carolina Honors College from 2020 to 2022. Before joining the University of South Carolina, Vogt worked as a Scientist at the Institute Laue-Langevin, France until 1992, then joined Brookhaven National Laboratory (BNL) as an Associate Physicist, was promoted to Physicist in 1995, and by 2000 led the Powder Diffraction Group in BNL's Physics Department. From 2003 to 2005, he held various roles at BNL, including Head of the Materials Synthesis and Characterization Group, Cluster Leader of Materials Synthesis in the Center for Functional Nanomaterials (CFN), and Technical Coordinator for scientific equipment in the CFN building project. Moreover, he led three startups, Nanosource, LUMINOF and Sens4, as Chief Technology Officer. He is a limited partner of TEXXMO mobile solutions, a wearable computer company and IoT button manufacturer. Research Vogt has conducted basic research using neutron, x-ray, and electron diffraction techniques to study structure-property relationships in materials, while also exploring philosophical and ethical implications of science and technology, particularly concerning the emergence of the periodic table of chemical elements. He holds 11 US patents, covering developments such as the multidimensional integrated detection and analysis system (MIDAS) and neutron scintillating materials. Scanning transmission electron microscopy (STEM) Vogt investigated complex material structures using aberration-corrected scanning transmission electron microscopy (STEM). 
He helped develop new image simulation and modeling methodologies, such as super-resolution techniques, specialized de-noising methods, mathematical and statistical learning theories, and applications of compressed sensing, outlined in the book Modelling Nanoscale Imaging in Electron Microscopy. In a review for Physics Today, Les J. Allen commented, "In six chapters, the editors tackle the ambitious challenge of bridging the gap between high-level applied mathematics and experimental electron microscopy. They have met the challenge admirably... That work is also applicable to the new generation of x-ray free-electron lasers, which have similar prospective applications, and illustrates nicely the importance of applied mathematics in the physical sciences." Vogt and collaborators, using aberration-corrected STEM imaging, imaged the M1 phase, a MoVNbTe oxide partial oxidation catalyst, highlighting the technique's potential in complex materials structure analysis. He also used annular dark-field STEM to analyze nanoscale domains of complex oxide phases in disordered solids. Furthermore, he and Douglas Blom employed parallel computing to analyze compositional disorder in a Mo, V-oxide bronze, highlighting discrepancies between experimental and simulated V content along metal-oxygen atomic columns, validated by HAADF-STEM imaging. Crystallography Vogt used high-resolution neutron diffraction techniques to investigate structural changes in molecules. Alongside Andrew N. Fitch and Jeremy K. Cockcroft, he revealed the low-temperature crystal structure of rhenium heptafluoride (ReF7), confirming its molecular configuration as a distorted pentagonal bipyramid with Cs (m) symmetry. In another joint study published in Science, he observed negative thermal expansion in ZrW2O8, using diffraction to analyze its cubic structure. Using high-resolution neutron powder diffraction, Czjzek and Vogt located the hydrogen positions in zeolite Y. 
Subsequently, with Yongjae Lee, he examined structural changes in zeolites at high pressures, showing a pronounced rearrangement of non-framework metal ions and pressure-induced hydration/superhydration. Solid-state chemistry Vogt's work on solid-state chemistry has focused on the temperature- and pressure-dependent structural arrangements of materials. In 2021, he co-authored a textbook, Solid State Materials Chemistry, with Patrick M. Woodward, Pavel Karen and John S.O. Evans, covering structure, defects, bonding, and properties of solid state materials. He reported a spin ordering transition in oxygen-deficient YBaCoO, accompanied by structural changes and spin state alterations, marking the first observation of this phenomenon induced by long-range orbital and charge ordering. He collaborated on the characterization of a new solid electrolyte, BiLa[(GeO)]O, identifying oxide ion interstitials as key to its ionic conductivity using advanced dark-field electron microscopy. Furthermore, he investigated the cubic structure of CaCu3Ti4O12, a material with a large optical conductivity, ruling out ferroelectricity in favor of relaxor-like dynamics responsible for its giant dielectric effect. In a paper published in Nature Chemistry, Vogt and collaborators demonstrated the irreversible insertion and trapping of xenon in Ag-natrolite under moderate conditions, a possible explanation for the xenon deficiency in terrestrial and Martian atmospheres. He also observed water insertion into kaolinite at 2.7 GPa and 200 °C, shedding light on water release in subduction zones and its effects on seismicity and volcanic activity. Furthermore, his research showcased a pressure-driven metathesis reaction resulting in the formation of a water-free pollucite phase, CsAlSi2O6, with potential applications in nuclear waste remediation. 
Vogt and colleagues used advanced laser techniques to observe sub nanosecond structural dynamics of iron, revealing intricate wave patterns during compression and shock decay. He also examined the structural phase transitions in silicon 2D-nanosheets under high pressure, revealing size and shape-dependent behavior and the formation of 1D nanowires with reduced thermal conductivity. Phosphor materials for lighting Vogt contributed to the development of white phosphors for fluorescent lighting. Together with Sangmoon Park, he developed a family of self-activating and doped UV phosphors for fluorescent white-light production. They also developed up-conversion phosphors emitting shorter-wavelength light in an ordered oxyfluoride compound. Awards and honors 1996 – R&D 100 Award, R&D Magazine 2002 – Design and Engineering Award, Popular Mechanics 2018 – Carolina Trustee Professorship Award, USC 2019 – Educational Foundation Award in Science, Mathematics and Engineering, USC Bibliography Books Modelling Nanoscale Imaging in Electron Microscopy (2012) Solid State Materials Chemistry (2021) Complex Oxides: An Introduction (2019) Selected articles Evans, J. S. O., Mary, T. A., Vogt, T., Subramanian, M. A., & Sleight, A. W. (1996). Negative thermal expansion in ZrW2O8 and HfW2O8. Chemistry of materials, 8(12), 2809–2823. Mary, T. A., Evans, J. S. O., Vogt, T., & Sleight, A. W. (1996). Negative thermal expansion from 0.3 to 1050 Kelvin in ZrW2O8. Science, 272(5258), 90–92. Ramirez, A. P., Subramanian, M. A., Gardel, M., Blumberg, G., Li, D., Vogt, T., & Shapiro, S. M. (2000). Giant dielectric constant response in a copper-titanate. Solid state communications, 115(5), 217–220. Homes, C. C., Vogt, T., Shapiro, S. M., Wakimoto, S., & Ramirez, A. P. (2001). Optical response of high-dielectric-constant perovskite-related oxide. science, 293(5530), 673–676. Petkov, V., Trikalitis, P. N., Bozin, E. S., Billinge, S. J., Vogt, T., & Kanatzidis, M. G. (2002). 
Structure of V2O5·nH2O Xerogel Solved by the Atomic Pair Distribution Function Technique. Journal of the American Chemical Society, 124(34), 10157–10162. Murphy, G. L., Zhang, Z., Maynard-Casely, H. E., Stackhouse, J., Kowalski, P. M., Vogt, T., ... & Kennedy, B. J. (2023). Pressure induced reduction in SrUO4 – A topotactic pathway to accessing extreme incompressibility. Acta Materialia, 243, 118508. References Living people 21st-century German chemists Materials scientists and engineers Fellows of the American Physical Society Fellows of the American Association for the Advancement of Science University of South Carolina faculty University of Tübingen alumni Fellows of the Institute of Advanced Study (Durham) 1958 births
Thomas Vogt
[ "Materials_science", "Engineering" ]
2,201
[ "Materials scientists and engineers", "Materials science" ]
76,537,020
https://en.wikipedia.org/wiki/List%20of%20Australian%20states%20by%20life%20expectancy
This is a list of Australian states and territories by estimated life expectancy at birth. Australian Bureau of Statistics (2022) Data source: Australian Bureau of Statistics. The next release is expected 8 November 2024. Global Data Lab (2021) Data source: Global Data Lab Charts See also List of countries by life expectancy List of Oceanian countries by life expectancy States and territories of Australia Demographics of Australia References Life expectancy Demographics of Australia Australia, life expectancy Australia States by life expectancy Life expectancy
List of Australian states by life expectancy
[ "Biology" ]
105
[ "Senescence", "Life expectancy" ]
76,537,354
https://en.wikipedia.org/wiki/Mountain%20soap
Mountain soap (, , ), also called rock-soap or bolus, is a partially outdated trivial name for a large group of clay minerals, similar in properties, from the group of hydrous layered aluminum silicates of variable composition. Minerals from the mountain soap group contain primarily silicates (44-46%), alumina (17-26%), iron oxides (6-10%) and water (13-25%). The mountain soap group included at different times up to two dozen mineral species and varieties. In different cases, this name could mean different minerals, most often halloysite (from a proper name), saponite (soapstone), bentonite or montmorillonite (from a toponym). The last of these is itself a large group, each member of which could be called mountain soap. All these minerals received their common colloquial name, which gradually penetrated into mineralogy, for their ability to lather and serve as a detergent for various purposes. Mountain soap has some common properties that are characteristic of all minerals included in the group. All of them have very low hardness (from 1 to 2 on the Mohs scale) and are a typical weathering product of aluminosilicates. In the form of its main component (bentonite, which is formed during the weathering of volcanic rocks such as tuffs and ash), mountain soap is one of the main minerals in many types of soils, and it is also found in many sedimentary rocks. Thanks to its layered "packet" structure, mountain soap has the ability to absorb water and swell strongly, and it has pronounced sorption properties. Name The name "mountain soap" is not so much a metaphor, of the kind quite often found in mineralogy, as a simple statement of the soapy properties of the mineral, as well as of the systematic use that it has had for thousands of years. This mineral is named mountain soap on account of its greasiness, sectility, and softness. 
On the one hand, this type of clay received its trivial name from its property of softening in water and forming with it a viscous and greasy substance to the touch, similar to thickly diluted soap. On the other hand, for a very long time, the population that had access to deposits of such clays systematically used it as a detergent for various purposes: from waulking cloth, canvas and wool to washing hands, face and hair in the bathhouse. From the combination of two factors inherent in all these varieties, other synonymous names also follow under which washing clays are found. In the literature of the 18th and 19th centuries, mountain soap can be found under the names: soapy earth, soapy clay, soapstone or bolus. The conventional scientific use of the collective term "mountain soap" generally ended by the last quarter of the 19th century, giving way to individual names of the minerals included in this conventional group. Currently, the term is considered completely outdated and does not appear as such even in the latest mineralogical dictionaries. Properties Mountain soap has the same properties as one of its main varieties: bentonite or montmorillonite. This is a typical clay mineral belonging to the subclass of layered silicates, due to its structure it has the ability to swell strongly. In addition, mountain soap has pronounced sorption properties. The modern use of bentonites in industrial production, construction and even as a food additive is based on their high thermal stability, binding ability, as well as catalytic and adsorption activity. Old authors of the 18th-19th centuries, describing the properties of mountain soap, unanimously note that it is opaque, not sticky, convenient for writing, and also soft when wetted, lathers well and sticks strongly to the tongue. It feels greasy and light. In composition (according to Buchholz's analysis) it contains 44% silica, 26.5% alumina, about 20.5% water, 8% iron oxides and 0.5-1% calcium compounds (lime). 
Naturally, the results of chemical analysis relate exclusively to the specific variety on which the laboratory operations were carried out. In all other cases, both the percentage composition of the substances that make up the clay and the set of impurities may differ significantly from the proposed ratio. Nevertheless, this analysis is quite indicative of the mineral group as a whole that can be described as "mountain soaps". Lying in damp soil or being moistened, mountain soap becomes completely soft and similar to butter, but it hardens when it dries in the air in dry weather or when heated. The chemical stability and even a certain inertness of mountain soap are also noted: even in strong acids, for example sulfuric, hydrochloric or nitric, no visible change or any other sign of a chemical reaction occurs. The authors differ far more regarding the color of the clays. While Johann Gotthelf von Waldheim unequivocally states that mountain soap "has a light pitch-black color", Pavel Goryaninov practically confirms his opinion, assuring that it is "black-brown in color", and Robert Jameson corroborates this information, other authors are much less categorical. The Mining Journal for 1926 writes that "this soap has different colors, but more greenish and yellow", and the Mining Dictionary of 1843 summarizes: "its color is generally white, turning into dark brown, bluish and yellowish". Clearly, the specific characteristics attributed to mountain soap depend, first of all, on the breadth of regional forms and varieties that a particular author included in his review. In particular, Spassky separately mentions that the external properties of the mountain soap used in the Kazan and Irkutsk governorates, and especially in the Crimea under the name kil, are generally similar to those described here, though the chemical composition of each of these varieties had not yet been precisely determined. 
Pavel Ogorodnikov writes about exactly the same thing in his memoir "Essays on Persia", of course without mentioning exact names or chemical terms. Recounting his travels through northern Iran, he notes that various types of fatty clay, which replace soap for the poor population when washing clothes, are found in abundance in all the nearby mountains. For example, a grayish clay, sold at the local market for 1.80 rub. and more per halvar, is mined mainly between the villages of Tash and Mujen, while a cheaper, yellowish one (90 kopecks per halvar) comes from the mountains surrounding the village of Taal-Saab-abad. Asterabad residents wash their clothes mainly with Gilaser clay, which is abundant in the Asterabad mountains. In addition, the Shahrud people also use Semnon clay, which comes from the city of Semnon on the way from Shahrud to Tehran. This particularly expensive and probably high-quality clay sells for 9 rub. per halvar and is used exclusively by women when washing their hair in the bathhouse. Gallery References See also Mountain (disambiguation) Soap (disambiguation) Kaolinite Clay Set index articles on minerals Aluminium minerals Silicate minerals Food additives Rocks Cleaning products Monoclinic minerals Mining terminology Mineral groups
Mountain soap
[ "Physics", "Chemistry" ]
1,506
[ "Products of chemical industry", "Cleaning products", "Physical objects", "Rocks", "Matter" ]
76,538,532
https://en.wikipedia.org/wiki/Henryk%20Minc
Henryk Minc (November 12, 1919 – July 15, 2013) was a Polish-born, British-educated, American professor of mathematics. He is known for his 1963 conjecture of what is now called the Bregman–Minc inequality (or Bregman's theorem), proved in 1973 by Lev M. Bregman. Biography Henryk Minc was born in Łódź, Poland, to a Jewish family and had two brothers. In 1937 he graduated from secondary school in Poland. In 1938 he matriculated at Belgium's University of Liège. He returned to Łódź in August 1939 during summer vacation, but the outbreak of World War II caught his family by surprise. In November 1939 Minc escaped from Poland to Belgium. He and his two brothers survived the war, but their parents died in Auschwitz in August 1944. In May 1940 Minc escaped from Belgium to France, where he joined the Polish army. At the end of June 1940 he, as a member of a Polish engineer company, was evacuated to England and then stationed in Scotland until the Allied invasion of Normandy in 1944. Minc was sent in May 1941 to an officer engineer school in Dundee, Scotland, and in 1944 was commissioned as a second lieutenant in the British Army. His Polish army unit was stationed in Tayport, Scotland. In April 1943 he married Catherine Taylor Duncan; the couple became the parents of three sons. In 1944 he was mainly involved in dismantling minefields. After eight years of military service, he matriculated at the University of Edinburgh, where he graduated with an M.A. in 1955 and a doctorate in mathematics in 1959. His doctoral dissertation, Logarithmetics, Index Polynomials, and Bifurcating Root Trees, was supervised by Ivor Etherington. From 1956 to 1958, Minc taught at Morgan Academy in Dundee, Scotland. From 1957 to 1958 he was a lecturer at Dundee Technology College (now renamed Abertay University). At Canada's University of British Columbia, he was a lecturer from 1958 to 1959 and an assistant professor from 1959 to 1960. In 1960 he and his family immigrated to the United States. 
At the University of Florida, Gainesville, he was an associate professor from 1960 to 1963. At the University of California, Santa Barbara (UCSB), he was a full professor from 1963 to 1990, when he retired as professor emeritus. During the 1970s he was a visiting professor at the Technion – Israel Institute of Technology. In the early part of his mathematical career, Minc was interested in non-associative algebras and the intuitionist foundations of mathematics. He was one of the important mathematicians, along with Robert Charles Thompson and Ky Fan, recruited by Marvin Marcus for the linear algebra and matrix theory school in UCSB's mathematics department. Minc and Marcus collaborated extensively. Minc was a leading expert on permanents and nonnegative matrices. At UCSB he was an outstanding member of the semiautonomous Institute for Interdisciplinary Applications of Algebra and Combinatorics and served on the editorial staff of Linear Algebra and Its Applications. He was the author or co-author of 10 mathematical textbooks and numerous research publications. In 1966, Marvin Marcus and Henryk Minc received the Mathematical Association of America's Lester R. Ford Award for their 1965 article Permanents. Hobbies Minc published articles on Biblical archaeology and collected ancient Jewish coins and other antiquities. He spoke five languages and was able to read and understand ancient Hebrew and ancient Greek. For almost every day over a forty-year period, he swam about a mile (1.6 km). He played the harpsichord and the recorder, as well as bagpipes. During his retirement, he was an active participant in the Santa Barbara Scottish Society and, with his wife Catherine, visited Scotland many times. He developed a love for the poetry of Robert Burns and collected many books and manuscripts written by Burns. Minc was delighted when he was appointed the honorary president of the Robert Burns World Federation. 
Family Henryk Minc was predeceased by his wife Catherine (1922–2008). Upon his death in 2013 in Santa Barbara, he was survived by his three sons, Robert (1944–2019), Ralph, and Raymond, and a grandson Jeffrey. Selected publications Articles Books References External links 1919 births 2013 deaths 20th-century American mathematicians 21st-century American mathematicians Combinatorialists Linear algebraists British Army personnel of World War II Alumni of the University of Edinburgh University of California, Santa Barbara faculty American people of Polish-Jewish descent Naturalized citizens of the United States Polish emigrants to the United States People from Łódź
Henryk Minc
[ "Mathematics" ]
952
[ "Combinatorialists", "Combinatorics" ]
76,540,168
https://en.wikipedia.org/wiki/Parasites%20of%20phytoplankton
Phytoplankton are characterized as organisms that are unable to swim against a current and that produce their own organic carbon via photosynthesis. They are responsible for producing approximately 50 percent of the Earth's primary productivity and are therefore crucial both in maintaining marine ecosystems and in adding a significant amount of oxygen to the atmosphere. However, as with other organisms, phytoplankton are hosts to many diverse forms of parasites, including, but not limited to, fungal and non-fungal zoosporic parasites, dinoflagellates, cercozoans, and viruses. Parasites use nutrients from their hosts, at that organism's expense, and display diverse methods of infection. Parasites can play integral roles in the dynamics and interactions of phytoplankton communities, such as controlling population abundance, distribution and biodiversity. Methods of parasitism Similar to parasites of other organisms, parasites of phytoplankton have various methods of infecting and feeding on their hosts. Many have evolved special attachment structures to bind and penetrate host membranes. Following attachment, parasites may release enzymes into the host cytoplasm or begin feeding. In some cases, host cells engulf parasitic cells via endocytosis, passively allowing access to their intracellular compartments. Some parasites directly absorb nutrients, such as amino acids, carbohydrates and lipids, through the host's cytoplasm once they have entered the host. Other parasites, like viruses, inject their genetic material into host cells and use host-cell material to reproduce and carry on the viral life cycle. Parasites will also use their hosts for more effective dispersal throughout the ocean. 
By infecting semi-mobile hosts, such as phytoplankton that drift in the ocean, and reproducing within them, parasites can be released into new regions by lysing host cells or through the release of spores, to then continue their life cycle in new hosts. Fungal parasites of phytoplankton Fungal parasites of phytoplankton occur in most pelagic aquatic systems, but have historically been more recognized in freshwater lakes with high recorded mortality rates of phytoplankton; their prevalence in marine systems is, however, increasingly documented. The overall importance of fungal parasites in aquatic systems is currently not well understood, with a large range in mortality estimates and limited understanding of the fundamental biology behind their interactions with phytoplankton. Chytrid parasites The diversity of fungal parasites of phytoplankton is not well characterized, and work has largely emphasized chytrids (phylum Chytridiomycota) as key phytoplankton parasites. Oomycetes are fungus-like parasites of phytoplankton, but are not true fungi. The chytrids are a group of mostly unicellular fungi producing flagellated zoospores, and are quite small, with a diameter of 2-6 μm. The chytrids are quite diverse, including saprotrophs, pathogens and parasites, and have been shown to be quite important in aquatic food webs, especially in freshwater systems, where fungal parasites have been shown to infect up to 90% of diatom populations. In marine systems, chytrids are also the primary group of fungal parasites, and the interactions between fungi and diatoms are the most well studied examples, with some reports showing up to 93% of a diatom population infected with chytrid parasites. The diversity of chytrid parasites is poorly characterized in marine systems, as few studies on chytrids have been conducted in marine settings, as is true for most marine fungi. 
However, recent advances in molecular genomics, with large-scale DNA-based surveys in the marine environment, have begun to show the large diversity and abundance of chytrid parasites in the ocean. Hosts for chytrid parasites Chytrids are usually quite host-specific, but their host ranges are poorly characterized: some are quite specialist, infecting only specific strains, while others have been shown to be more generalist. Problems arise in defining host ranges because many chytrid-phytoplankton interactions have mostly been characterized under the microscope, where the morphological similarity of the zoospores makes it difficult to distinguish species from each other. Chytrids have been shown to infect phytoplankton of all size ranges, from small cyanobacteria to large diatoms. There tends to be a preference for chytrids to infect blooming species, likely due to greater encounter rates and the release of phytoplankton-derived organic matter, which serves as a target for chytrid chemotaxis. A preference for larger phytoplankton, or large phytoplankton aggregates, also seems to be a trend, due to the higher encounter rates and greater potential nutrients available for the parasites to replicate. Implications of chytrids on food webs - the mycoloop Fungal parasites are grazed by zooplankton in aquatic systems, where they tend to be easier to consume than large phytoplankton such as diatoms or filamentous cyanobacterial colonies. Large phytoplankton can often be inedible for many zooplankton, especially smaller grazers, so parasitism and the successive release of easier-to-consume zoospores and fragmented phytoplankton particles can greatly enhance the prey available to grazers. An in vitro study conducted with the zooplankton Daphnia showed a near doubling in grazing rates on the filamentous cyanobacterium Planktothrix in the presence of chytrid parasites. 
However, another study conducted on diatoms showed that infection by chytrid parasites caused more aggregation of the diatoms, leading to less accessibility for Daphnia grazing. This indicates that the understanding of fungal parasites and their effects on food webs is complicated and requires more attention from marine scientists. There are also effects of fungal parasitism on other microbial processes in aquatic systems. The release of phytoplankton-derived organic matter after fungal infection also feeds dissolved organic matter back into the microbial loop, further complicating the understanding of the food web. The prevalence of a trophic link between phytoplankton, zooplankton, heterotrophic bacteria and fungal parasites is becoming more evident, and the inclusion of fungi in food web models would improve the understanding and modeling of aquatic ecosystem dynamics. Phytoplankton viruses Phytoplankton viruses are a type of marine virus. There is substantial genetic variation in phytoplankton viruses, just as there is substantial variation in the types of phytoplankton they infect. However, many of the viruses infecting phytoplankton, in particular eukaryotic algae, belong to the family Phycodnaviridae, a diverse family of large icosahedral viruses of clear importance in their respective aquatic environments. Additionally, many of these viruses may be classified as giant viruses. Another key group of phytoplankton viruses are cyanophages, phages that infect cyanobacteria. The first cyanophage isolated, LPP-1, in the family Podoviridae, is small compared to the large size characterizing many members of the family Phycodnaviridae. Other types of phytoplankton viruses include, but are not limited to, algal viruses, coccolithoviruses, dinoflagellate viruses, cyanophages and diatom viruses. 
Phycovirus history Phycoviruses, the overarching category to which Phycodnaviridae belongs, are viruses that target eukaryotic phytoplankton; they were first identified in 1963 in Indiana. Despite this early discovery, research progress in this field was slow compared to viruses affecting humans, animals or crops. However, these viruses have a significant impact on the growth and breakdown of algal communities, which has attracted more recent research interest. One key focus has been on how they contribute to the regulation of algal populations, preventing them from growing out of control. Interestingly, many of the first phycoviruses found were those targeting blue-green algae, often found in places like waste settling ponds. Cyanophage history Studies on cyanophages were initially focused on fresh water, from which the first isolation of a cyanophage occurred in 1963. From 1970 to 1990, there was extensive research into the genetics of cyanophages, allowing for a better understanding of their biology. After the 1990s, the research focus shifted to phage-host interactions and determining the diversity of these marine viruses. Ecological significance of phytoplankton viruses Phytoplankton viruses play an important ecological role in controlling phytoplankton blooms and are often one of the primary agents of their regulation and termination during the spring bloom, through infection and lysis of phytoplankton. Additionally, phytoplankton viruses have been shown to influence genetic diversity and strain succession within host populations when standing stocks are maintained. Lastly, lysis of phytoplankton by viruses impacts the microbial food web, as this process makes increased nutrients and dissolved organic matter available to the lower components of the food chain. Overall, this results in changes to the carbon budget of the ocean. 
However, the nature of phytoplankton-virus interactions has been shown to vary with a series of abiotic conditions such as temperature, salinity, nutrients and light, which impact infection processes at various points in the viral life cycle. These factors influence the coexistence of phytoplankton and their viruses through variation in viral life history traits and modulation of viral life cycles. Economic value of phytoplankton viruses Phytoplankton viruses have a significant economic role, specifically in the management of phytoplankton blooms. This is because such blooms can produce harmful toxins that not only damage the aquatic ecosystem but may have further impacts on economies and human health. Bacterial parasites of phytoplankton The diversity of bacteria-phytoplankton relationships spans widely from mutualistic to parasitic. In this context, understanding the intricacies of bacteria-phytoplankton relationships becomes crucial for comprehending the dynamics of the global carbon cycle and nutrient cycling in marine ecosystems. Studies aiming to comprehend the impact of bacterial parasites suggest that the presence of potentially parasitic bacteria in sea water can be both positive and negative. This section explores such bacteria-phytoplankton interactions and their impact on community composition, revealing patterns of cooperative and competitive dynamics between the two. From mutualism to parasitism During upwelling processes, the microalgae-bacteria relationship becomes very complex, switching from initial mutualism to parasitism. This shift depends on the physiological state of the microalga involved. The interactions between microalgae and microorganisms offer potential benefits, particularly in aquaculture, where they can enhance algal biomass with valuable compounds like lipids and carbohydrates. 
Phaeobacter gallaeciensis BS107 is a member of the roseobacter clade, a significant group of marine α-proteobacteria making up to a quarter of bacteria in coastal communities. It interacts with Emiliania huxleyi, a widely distributed microalga crucial for seasonal algal blooms and oxygen production. The relationship between P. gallaeciensis BS107 and E. huxleyi is dynamic: the bacterium alternates between mutualistic and parasitic phases. In the mutualistic phase, both benefit, with the algal host providing a surface for biofilm formation and nutrients, and the bacterium offering protection and growth promotion. However, when the algal host begins to senesce, it releases compounds that trigger the bacterium to secrete selective algaecides against the host, leading to a parasitic phase in which the bacterium benefits from the host's demise. This fluidity highlights the dynamic nature of microalgae-bacteria interactions, which can oscillate between cooperation and exploitation depending on environmental and biological conditions. Algicidal bacteria and their impact on algal blooms Extensive research on algicidal bacteria has been done to mitigate harmful algal blooms (HABs) and excessive algae growth. Although the cause of HABs is not yet perfectly understood, the presence of algicidal bacteria often coincides with the decline of algal blooms, hinting at a potential role of algicidal bacteria in shaping algal bloom dynamics. Parasitic bacteria Vampirovibrio chlorellavorus is an example of an algicidal bacterium. The predatory bacterium Vampirovibrio poses a significant threat to Chlorella cultures, leading to rapid deterioration and collapse of (micro)algal populations. 
This bacterium, although playing a crucial ecological role in maintaining an optimal population of Chlorella in natural settings, can have devastating effects on large-scale algal ponds, which are vital for aquaculture and other valuable products. Exposing Chlorella cultures to pH 3.5 for 15 minutes in the presence of acetate, which has a bactericidal effect that significantly reduces aerobic bacterial counts, effectively prevents culture crashes and doubles the productive longevity of the cultures. Another possible measure to prevent Vampirovibrio infection of Chlorella is inducing the production of small bioactive peptides and glycosides by Chlorella under iron limitation. Factors influencing prevalence of bacterial parasites There are apparent seasonal differences in prevalence among many parasite groups. Bacterial parasites such as Spirobacillus and Pasteuria reach their highest prevalence in summer. Waterbody morphology can explain the varied host exposure levels that lead to the parasitic stages. For example, Spirobacillus requires turbulence after its parasitic stage, or host decomposition, in order to reattach to the next living host. Turbulence is affected by depth, surface area and basin shape, and shallow habitats are more subject to wind-induced turbulence, leading to a higher overall prevalence of bacterial parasites. Ecological importance Regulating population dynamics Parasites can have major influences on regulating the populations of phytoplankton communities. By infecting and killing their specific hosts, parasites can directly reduce population numbers, contributing to maintaining the biodiversity of an ecosystem. They thereby ensure that no single phytoplankton species dominates a community for long, helping the ecosystem remain stable. 
Influencing carbon sequestration Because phytoplankton are the primary producers of their environments, they generate the organic carbon for their ecosystems. As planktonic parasites control the abundance of their hosts, they can also indirectly influence the rate of carbon sequestration in their environment, which shifts the export of carbon to the deep ocean and affects the global carbon cycle. Nutrient cycling Parasitic infection also contributes to stable nutrient circulation throughout the water column. When parasites infect their hosts, they can lyse (disintegrate) and kill the host, releasing the organic matter and nutrients, such as carbon and nitrogen, stored within host cells back into the surrounding water. This process is called the viral shunt, and it helps ensure that nutrients are readily available to other microorganisms filling the lower trophic levels of food webs. With more nutrients available to these microorganisms, higher trophic levels can be sustained, maintaining a balanced ecosystem. Maintaining diversity and evolution Organisms are continuously evolving and adapting to changing environments, and because many parasites are host-specific, parasites must constantly evolve alongside their hosts. Often, host organisms evolve defense mechanisms against their parasites, leading parasites to coevolve strategies by which they can continue to infect their hosts. Evolution is a constant battle against selective pressures and environmental changes, and parasite-host coevolution generates more genetic diversity, promoting species survival. References Wikipedia Student Program Planktology Parasitism
Parasites of phytoplankton
[ "Biology" ]
3,399
[ "Parasitism", "Symbiosis" ]
76,540,369
https://en.wikipedia.org/wiki/Timekeeping%20on%20the%20Moon
Timekeeping on the Moon is an issue of synchronizing human activity on the Moon and contact with such activity. The two main differences from timekeeping on Earth are the length of a day on the Moon, being the lunar day or lunar month, observable from Earth as the lunar phases, and the rate at which time progresses: over 24 hours, a clock on the Moon runs on average 58.7 microseconds (0.0000587 seconds) faster, a result of gravitational time dilation arising from the different gravitational environments of the Moon and Earth. History The technology used for the timekeeping devices deployed to the Moon has varied over the decades. Several Omega Speedmasters have been on the Moon, synched to Central Standard Time (CST). The Apollo Guidance Computer (AGC) kept a triple-precision count of time in a real-time clock cuing from a quartz oscillator; a standby option (although never used) would allow it to update this count every 1.28 seconds (~0.78 hertz), and more often when not standing by. In addition to maintaining the clock cycle, computer timekeeping allowed the AGC to display the capsule's vertical and horizontal movements relative to the Moon's surface, in units of feet per second. Coordinated Lunar Time Coordinated Lunar Time (LTC) is a proposed primary lunar time standard for the Moon. In early April 2024, the White House asked NASA to work alongside US and international agencies to establish a unified standard time for the Moon and other celestial bodies by 2026. The White House's request, led by the Office of Science and Technology Policy (OSTP), called for a "Coordinated Lunar Time", which was first proposed by the European Space Agency in early 2023. , there is no lunar time standard. As a result, activities on the Moon are coordinated using the time zone where a mission's headquarters is based. For example, the Apollo missions used the Central Time Zone, as the missions were controlled from Houston, Texas. Likewise, Chinese activities on the Moon run on China Standard Time. 
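The quoted rate of 58.7 microseconds per Earth day implies a cumulative divergence between lunar and terrestrial clocks that can be sketched in a few lines of Python. This is a simplified linear model for illustration only; the real offset also carries periodic variations, as the OSTP notes.

```python
# Average rate at which a lunar clock gains on an Earth clock (OSTP figure).
MICROSECONDS_GAINED_PER_EARTH_DAY = 58.7

def lunar_clock_drift_seconds(earth_days: float) -> float:
    """Cumulative offset (in seconds) accumulated by a Moon-based clock
    relative to an Earth-based clock after the given number of Earth days,
    using a purely linear model of the average rate."""
    return earth_days * MICROSECONDS_GAINED_PER_EARTH_DAY * 1e-6

# Over one 365-day Earth year the two clocks diverge by about 21 milliseconds.
print(f"{lunar_clock_drift_seconds(365) * 1000:.2f} ms")
```

Even a drift of this size matters for lunar navigation, where positioning by radio signals requires nanosecond-level clock agreement.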
As more countries become active on the Moon and interact with each other, a different, unified system will be needed. As part of an ongoing global billionaire space race and a wider international space race between the United States and China, a universal timekeeping benchmark is needed so that lunar spacecraft and satellites are able to fulfill their respective missions with precision and accuracy. Due to differences in gravitational force and other factors, time passes fractionally faster on the Moon when observed from Earth. Under the Artemis program, and supported by the Commercial Lunar Payload Services missions, crewed missions and a proposed scientific moonbase are envisioned on and around the lunar surface from the 2020s onwards. The proposed standard would therefore solve a timekeeping issue. According to OSTP Chief Arati Prabhakar, time would "appear to lose on average 58.7 microseconds per Earth-day and come with other periodic variations that would further drift Moon time from Earth time". The development of the standard is set to be a collaborative effort, initially among members of the Artemis Accords, but is meant to apply globally. The initial proposal of the standard calls for four key features: traceability back to Coordinated Universal Time, accuracy sufficient for navigation and science, resilience to disruptions, and scalability to potential environments beyond cislunar space. LunaNet, an upcoming lunar communications and navigation service under development with the European Space Agency, calls for a Lunar Time System Standard, which LTC is meant to address. In August 2024, the US National Institute of Standards and Technology furthered development of the proposal by releasing a draft standard focused on defining the framework and mathematical model. The draft takes into account the gravitational differences on the Moon and was published in The Astronomical Journal. See also References timekeeping moon
Timekeeping on the Moon
[ "Physics" ]
791
[ "Spacetime", "Timekeeping", "Physical quantities", "Time" ]
76,540,481
https://en.wikipedia.org/wiki/NGC%201218
NGC 1218 is a lenticular galaxy in Cetus that hosts the radio source 3C 78. It was discovered in 1886 by American astronomer Lewis A. Swift. It is located at galactic longitude 174.86° and galactic latitude −44.51° in the galactic coordinate system. History Discovered by Lewis Swift on September 6, 1886, NGC 1218 was one of the original objects included in the New General Catalogue. 3C 78 was discovered and subsequently included in the Third Cambridge Catalogue of Radio Sources (3C). In 1982, it was found that the nucleus of NGC 1218 emits a radio jet. A follow-up study in 1986 corroborated the presence of the jet and found evidence of a possible weak counter-jet. The Hubble Space Telescope observed NGC 1218 on August 17, 1994; an optical jet of synchrotron radiation similar to that of Messier 87 was subsequently found. On September 6, 2000, a type Ia supernova was detected in NGC 1218. A 2002 study found that the previously identified radio jet was the cause. In 2023, the proper motion of 3C 78 was determined using observations from the Very Large Array (VLA), as well as a single observation from the Atacama Large Millimeter/submillimeter Array (ALMA). Composition and structure NGC 1218 is a lenticular (S0/a) radio galaxy, with a radio halo roughly equivalent in size to the optical halo's extent. The observable synchrotron jet has a total length of 1.37 arcseconds (0.75 kpc), and expands substantially at 0.5 arcseconds from the nucleus. Only an upper limit has been placed on NGC 1218's hydrogen mass. 3C 78 3C 78 is an astronomical radio source with an angular extent of approximately 80 × 55 arcseconds squared. According to Tabara and Inoue (1980), 3C 78 has a rotation measure of 8.7 ± 1.9 rad m−2 and an intrinsic position angle of 87° ± 4°, although Simard-Normandin, Kronberg, and Button (1981) claim a rotation measure of 14 ± 2 rad m−2 and an intrinsic position angle of 85° ± 3°. 
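The arcsecond-to-kiloparsec figures quoted for the jet follow from the small-angle approximation: projected size ≈ distance × angle (in radians). A minimal Python sketch, assuming a distance to NGC 1218 of roughly 113 Mpc (a hypothetical value chosen here only because it reproduces the quoted 1.37-arcsecond ≈ 0.75 kpc scale; it is not stated in the article):

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)  # one arcsecond in radians

def projected_size_kpc(angular_size_arcsec: float, distance_mpc: float) -> float:
    """Physical size (kpc) subtended by a small angle at a given distance,
    via the small-angle approximation s = D * theta."""
    return distance_mpc * 1000 * angular_size_arcsec * ARCSEC_TO_RAD

# With the assumed ~113 Mpc distance, the 1.37-arcsecond synchrotron jet
# corresponds to roughly 0.75 kpc, matching the scale quoted in the text.
print(round(projected_size_kpc(1.37, 113), 2))
```

The same formula converts any of the angular measurements in this article to projected physical sizes, given an adopted distance.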
It possesses a radio jet approximately one arcsecond (0.58 kpc) in length, with three bright, compact inhomogeneities (or "knots"), of which the second and third are the most prominent. The second knot has a longitudinal motion of approximately 0.51 ± 0.14c at roughly 200 pc, and the third knot had an apparent superluminal backwards motion of −2.6 ± 2c prior to 2000, followed by a forward motion of 0.5 ± 2c, both at roughly 300 pc. Notes References External links SIMBAD listing NED listing VizieR listing (VizieR also has listings for alternate identifiers) Lenticular galaxies Radio galaxies Cetus Discoveries by Lewis Swift Galaxies discovered in 1886 1218 078 02555 11749 11749 +01-09-001
NGC 1218
[ "Astronomy" ]
636
[ "Cetus", "Constellations" ]
76,540,553
https://en.wikipedia.org/wiki/Color%20and%20Symmetry
Color and Symmetry is a book by Arthur L. Loeb published by Wiley-Interscience in 1971. The author adopts an unconventional algorithmic approach to generating the line and plane groups, based on the concept of the "rotocenter" (the invariant point of a rotation). He then introduces the concept of two or more colors to derive all of the plane dichromatic symmetry groups and some of the polychromatic symmetry groups. Structure and topics The book is divided into three parts. In the first part, chapters 1–7, the author introduces his "algorismic" (algorithmic) method based on "rotocenters" and "rotosimplexes" (sets of congruent rotocenters). He then derives the 7 frieze groups and the 17 wallpaper groups. In the second part, chapters 8–10, dichromatic (black-and-white, two-colored) patterns are introduced and the 17 dichromatic line groups and the 46 black-and-white dichromatic plane groups are derived. In the third part, chapters 11–22, polychromatic patterns (3 or more colors), polychromatic line groups, and polychromatic plane groups are derived and illustrated. Loeb's synthetic approach does not enable a comparison with the colour symmetry concepts and definitions of other authors, and it is therefore not surprising that the number of polychromatic patterns he identifies differs from that published elsewhere. Audience Unusually, the author does not state the target audience for his book; his publisher, in their dust jacket blurb, says: "Color and Symmetry will be of primary interest on the one hand to crystallographers, chemists, material scientists, and mathematicians. On the other hand, this volume will serve the interests of those active in the fields of design, visual and environmental studies and architecture." Only a school-level mathematical background is required to follow the author's logical development of his argument. 
Group theory is not used in the book, which is beneficial to readers without this specific mathematical background, but it makes some of the material more long-winded than it would be if it had been developed using standard group theory. Michael Holt in his review for Leonardo said: "In this erudite and handsomely presented monograph, then, designers should find a rich source of explicit rules for pattern-making and mathematicians and crystallographers a welcome and novel slant on symmetry operations with colours." Reception The book had a generally positive reception from contemporary reviewers. W.E. Klee in a review for Acta Crystallographica wrote: "Color and Symmetry will surely stimulate new interest in colour symmetries and will be of special interest to crystallographers. People active in design may also profit from this book." D.M. Brink in a review for Physics Bulletin published by the Institute of Physics said: "The book will be useful to workers with a technical interest in periodic structures and also to more general readers who are fascinated by symmetrical patterns. The illustrations encourage the reader to understand the mathematical structure underlying the patterns." J.D.H. Donney in a review for Physics Today said: "This book should prove useful to physicists, chemists, crystallographers (of course), but also to decorators and designers, from textiles to ceramics. It will be enjoyed, not only by mathematicians, but by all lovers of orderliness, logic and beauty." David Harker in a review for Science said: "It may well be that this work will become a classic essay on planar color symmetry." Criticism The author's idiosyncratic approach was not adopted by researchers in the field, and later assessments of Loeb's contribution to color symmetry were more critical of his work than earlier reviewers had been. 
Marjorie Senechal said that Loeb's work on polychromatic patterns, whilst not wrong, imposed artificial restrictions which meant that some valid colored patterns with three or more colors were excluded from his lists. R.L.E. Schwarzenberger in 1980 said: "The study of colour symmetry has been bedevilled by a lack of precise definitions when the number of colours is greater than two ... it is unfortunate that this paper was apparently ignored by Shubnikov and Loeb whose books give incomplete and unsystematic listings." In a 1984 review paper Schwarzenberger remarks: "... these authors [including Loeb] confine themselves to a restricted class of colour group ... for N > 2 the effect is to dramatically limit the number of colour groups considered." Branko Grünbaum and G.C. Shephard in their book Tilings and Patterns gave an assessment of previous work in the field. Commenting on Color and Symmetry they said: "Loeb gives an original, interesting and satisfactory account of the 2-color groups ... unfortunately when discussing multicolor patterns, Loeb restricts the admissible color changes so severely that he obtains a total of only 54 periodic k-color configurations with k ≥ 3." Later authors determined that the total number of k-color configurations with 3 ≤ k ≤ 12 is 751. References Mathematics books 1971 non-fiction books Symmetry
Color and Symmetry
[ "Physics", "Mathematics" ]
1,077
[ "Geometry", "Symmetry" ]
76,540,675
https://en.wikipedia.org/wiki/Annette%20Mills%20%28academic%29
Annette Marie Mills is a Jamaican–New Zealand academic, and is Professor of Information Systems in the business school at the University of Canterbury, specialising in the impacts of new technologies, especially relating to privacy. Academic career Mills completed her undergraduate training at the University of Technology and the University of the West Indies. Mills completed a PhD titled Investigating the determinants of user sophistication: a perspective from social cognitive theory at the University of Waikato in 1996. She also has a postgraduate certificate in tertiary teaching from the University of Otago. Mills is on the faculty of the University of Canterbury, where she is Professor of Information Systems. Mills is interested in how people and societies adapt to new and emerging technologies, and in the positive and negative impacts of those technologies, including privacy concerns. She covers topics such as pervasive data collection, digital surveillance, automated decision-making, wearable technologies, and biometric data. She has also examined how the increase in home-working has affected enterprise security. Mills is on the committee of the Privacy Foundation New Zealand, and convenes its working group on children's privacy. Mills is an editor for several journals, including IT & People, the Journal of Global Information Management, and the Australian Journal of Information Systems. In 2020 Mills was elected a Fellow of the Association for Information Systems. In 1993 she was awarded a Commonwealth Scholarship to New Zealand.
Selected works References External links Privacy in the wake of large-language models, webinar by Marcin Betkier and Annette Mills, 8 May 2023, via YouTube Profiling, AI driven decision making and the privacy concerns of Kiwis, by Annette Mills and Nidhi Tewari, 10 May 2023, via YouTube Living people New Zealand academics New Zealand women academics University of the West Indies alumni University of Waikato alumni Information systems researchers University of Otago alumni Jamaican emigrants to New Zealand Academic staff of the University of Canterbury Year of birth missing (living people)
Annette Mills (academic)
[ "Technology" ]
396
[ "Information systems", "Information systems researchers" ]
76,541,100
https://en.wikipedia.org/wiki/Novozymes%20Prize
The Novozymes Prize is an annual scientific award. The prize aims to recognise outstanding contributions to the advancement of science within the fields of biotechnology and bioinnovation. Background The Novozymes Prize is sponsored by the Novo Nordisk Foundation, and acknowledges exceptional European research or technological achievements that contribute to innovative and sustainable solutions in biotechnology, benefiting both humanity and the environment. The prize is named after the Danish biotechnology company Novozymes, a global leader in biotechnological solutions and enzyme production; the foundation promotes initiatives that drive progress in science and sustainability. Nomination The prize is awarded to individuals or research groups whose work has demonstrated significant impact and innovation in areas such as enzyme technology, industrial biotechnology, and sustainable solutions. Recipients of the Novozymes Prize are selected through a nomination and evaluation process by a committee of experts in the relevant scientific disciplines. Award The Novozymes Prize consists of a monetary award and a commemorative medal. The recipients are invited to deliver a lecture or presentation on their research during a ceremony held in conjunction with the award announcement. Recipients List of recipients of the Novozymes Prize over the years: References External links Official website Academic awards International awards Science and technology awards
Novozymes Prize
[ "Technology" ]
257
[ "Science and technology awards", "International science and technology awards" ]
76,541,820
https://en.wikipedia.org/wiki/Female%20fertility%20agents
Female fertility agents are medications that improve a woman's ability to conceive. These agents are prescribed for infertile women who fail to conceive after one year of regular, unprotected sexual intercourse. The following covers the development of female fertility agents and the major causes of female infertility. It then discusses common female fertility agents in terms of their mechanism of action, side effects, fetal considerations and clinical application, and concludes with an introduction to supplements and herbal medicines for female infertility. Causes Female infertility can be caused by multiple factors, such as ovulatory disorders, structural abnormalities in reproductive organs and aging. 1. Ovulatory disorders Ovulatory disorders result in infrequent ovulation (oligoovulation) or absent ovulation (anovulation), which causes infertility. The World Health Organisation (WHO) has classified anovulation into three main classes: hypogonadotropic hypogonadal anovulation (Class 1), normogonadotropic normoestrogenic anovulation (Class 2), and hypergonadotropic hypoestrogenic anovulation (Class 3). Apart from these three classes, hyperprolactinemic anovulation is also identified as one of the etiologies. 2. Structural abnormalities See also: Tubal factor infertility Structural abnormalities in the female reproductive organs can lead to infertility. Abnormalities of the Fallopian tubes, whether blockage or injury, prevent fertilization and/or implantation. Anomalies of the uterus, commonly caused by Müllerian anomalies, cause failure of implantation. 3. Aging See also: Age and female fertility Female fertility declines with aging due to the decreased quantity and quality of oocytes. The number of follicles decreases rapidly after the age of 37, approaching menopause, resulting in a natural decline in fertility. Medication 1.
Gonadotropin therapy Drug action Female gonadotropins include follicle-stimulating hormone (FSH), luteinizing hormone (LH), and human chorionic gonadotropin (hCG). Gonadotropin therapy refers to the administration of exogenous gonadotropins to treat anovulation, and is used in intrauterine insemination (IUI) and in vitro fertilization (IVF) cycles. FSH promotes the recruitment and maturation of early antral follicles, while LH supports follicular development and maturation. Both FSH and LH regulate the length and order of the menstrual cycle. hCG aids final follicular maturation and the growth of the immature oocyte in meiosis and in the luteal phase. Side effects Gonadotropin treatment may induce ovarian hyperstimulation syndrome due to the development of several enlarged follicles. The enlarged ovaries can lead to severe abdominal pain, vomiting, clotting in the lungs and legs, and fluid imbalance. Fetal considerations Gonadotropin therapy has a higher tendency to cause multiple pregnancy than clomiphene and aromatase inhibitors. Multiple pregnancy increases the risks of preterm birth and decreased gestational age, which are associated with fetal complications and higher infant mortality. Clinical use Gonadotropin therapy can be given as second-line treatment to women with ovulatory failure or irregular ovulation. It can also be given to women who ovulate normally but fail to conceive, to boost the production of follicles in the ovaries during in vitro fertilization (IVF). For patients with polycystic ovary syndrome (PCOS), gonadotropin is not an initial treatment choice. Before gonadotropin therapy begins, blood testing and pelvic ultrasound are performed to confirm the absence of large ovarian cysts; treatment initiation and dosage are then adjusted based on the results. During the therapy, regular blood testing and pelvic ultrasound are required.
Discontinuation of the therapy is required if more than three large follicles are detected on pelvic ultrasound. 2. Clomiphene Drug action Clomiphene acts on the hypothalamus. By occupying intracellular estrogen receptors (ERs), it interferes with receptor recycling, inhibiting hypothalamic ERs and interrupting the normal estrogenic negative feedback. As a result, gonadotropin-releasing hormone pulsation induces the release of pituitary gonadotropins, boosting follicular growth and triggering ovulation. Side effects Common side effects include ovarian enlargement, hot flashes, abdominal distention, breast discomfort, and hyperlipidemia. Rare adverse effects are ovarian hyperstimulation syndrome and visual abnormalities. Long-term use may raise the risk of ovarian cancer, so long-term therapy (more than 6 cycles) is not recommended. Fetal considerations Compared to the general population, clomiphene does not increase the risk of miscarriage or harmful fetal abnormalities. Regarding breastfeeding, one study found that clomiphene effectively suppresses lactation, which can be explained by prolactin inhibition. Clomiphene is present in breast milk; the maximum recorded concentration was 582.5 ng/mL, which is still considered acceptable since the relative infant dose is less than 10%. Nevertheless, breastfeeding during clomiphene therapy remains debatable due to a lack of clinical evidence. Clinical use Clomiphene is initiated in patients with polycystic ovary syndrome, psychogenic amenorrhea, post-oral contraceptive amenorrhea, and secondary amenorrhea of unidentified cause. Serum estrogen should be measured prior to therapy to rule out primary pituitary or ovarian failure, endometrial carcinoma, and hyperprolactinaemia. 3. Aromatase inhibitors: Letrozole Drug action Among the aromatase inhibitors, letrozole is the one commonly used to improve female fertility.
It works by inhibiting aromatase, the enzyme that catalyses the conversion of androstenedione and testosterone to estrogen by hydroxylation; letrozole therefore inhibits the synthesis of estrogen. The resulting hypoestrogenic state boosts the release of gonadotropin-releasing hormone and raises the synthesis of FSH in the pituitary gland. Side effects Common side effects include bone fracture due to loss of bone mineral density, ischemic cardiovascular events such as angina pectoris and acute myocardial infarction, and musculoskeletal effects such as arthralgia and tenosynovitis. Fetal considerations Letrozole raises concern regarding its teratogenicity. It is potentially teratogenic if administered unintentionally during the early stages of pregnancy, as animal experiments indicate it can interfere with normal aromatase activity in embryonic development. However, no increased trend of teratogenicity has been observed compared to clomiphene. In addition, a systematic review and meta-analysis found that letrozole is not associated with congenital fetal malformation or miscarriage compared to clomiphene, gonadotropins or natural conception. Clinical use Besides being a first-line medication for hormone receptor-positive breast cancer, letrozole has in recent years also been used off-label for ovulation induction in patients with polycystic ovary syndrome and anovulatory infertility. A randomized trial and a meta-analysis with large samples of anovulatory women found that letrozole therapy leads to greater birth rates than clomiphene. 4. Metformin Drug action Metformin lowers the synthesis of glucose in the liver and the absorption of glucose from the intestines. It also has an antilipolytic effect that reduces free fatty acid concentration. Many trials have shown that metformin can normalize menstrual function and raise the chance of ovulation.
Side effects Common side effects are gastrointestinal discomfort, for example flatulence, indigestion, nausea and vomiting, which are reversible by dosage adjustment or discontinuation. Fetal considerations Metformin reduces the intestinal absorption of vitamin B12 and lowers serum vitamin B12 concentration, so patients are recommended to monitor for B12 deficiency with a complete blood count. Regarding breastfeeding, metformin shows no obvious association with adverse outcomes. In addition, metformin is not associated with a raised chance of major birth abnormalities in women with PCOS. Clinical use Metformin is often used as second-line treatment, especially in PCOS patients for whom combined estrogen-progestogen oral contraceptives (COCs) are contraindicated. Numerous trials have shown that metformin can restore ovulatory menses in PCOS. In a meta-analysis of 13 trials, a fourfold increase in ovulation was found when clomiphene was combined with metformin compared to clomiphene alone. In addition, metformin can be administered as IVF pretreatment: although the number of retrieved oocytes is greatly lowered, ovarian hyperstimulation syndrome (OHSS) is prevented. 5. Dopamine agonists: Cabergoline and Bromocriptine Drug action The dopamine agonists cabergoline and bromocriptine bind to specific dopamine receptors to block the secretion pathway of prolactin and shrink the size of a tumor (prolactinoma), thereby treating infertility caused by an elevated level of prolactin. (See also: Hyperprolactinaemia) Bromocriptine possesses both dopamine 2 receptor agonistic and dopamine 1 receptor antagonistic properties. It is an ergot derivative, which directly binds to the postsynaptic dopamine 2 receptors of anterior pituitary cells and inhibits the secretion of prolactin. Cabergoline is a dopamine 2 receptor agonist and also an ergot derivative. It functions similarly to bromocriptine but with higher selectivity and affinity.
Side effects The major side effects of dopamine agonists include nausea, vomiting, arrhythmia and postural hypotension. Other adverse effects, for instance impulse-control disorder and valvular heart disease, occur less frequently. Fetal considerations Dopamine agonists are usually discontinued once the patient is pregnant. Based on existing data and studies, exposure of the fetus to dopamine agonists during the early stage of pregnancy does not harm the fetus. Both bromocriptine and cabergoline are considered safe, with no identifiable risk of inducing congenital deformity, miscarriage or premature birth. Clinical use Dopamine agonists for infertility treatment are commonly administered to patients with hyperprolactinemic anovulation. Both bromocriptine and cabergoline are first-line dopamine agonists in hyperprolactinemia treatment. Cabergoline is currently preferred over bromocriptine due to its higher efficacy and fewer side effects such as nausea. Supplements Coenzyme Q10 Coenzyme Q10 is a natural antioxidant. Randomized trials have reported that its supplementation increases the number of oocytes, contributing to a greater fertilization rate and improved embryonic development in women with suboptimal ovarian reserve parameters. Myoinositol Myoinositol is a naturally occurring substance involved in both insulin and gonadotropin signaling, which is associated with follicle maturation. Multiple studies have indicated that myoinositol supplementation in poor ovarian responders can improve the fertilization rate and the ovarian sensitivity index (OSI). Herbal medicines Chasteberry Extracts of chasteberry (Vitex agnus castus) improve premenstrual symptoms, especially premenstrual mastodynia (breast pain), which is caused by hyperprolactinemia, itself a cause of female infertility. In addition, a low progesterone level also contributes to female infertility.
Combining VAC extracts with medication can normalize the levels of prolactin and progesterone; chasteberry can therefore alleviate infertility. Red clover Red clover (Trifolium pratense) extracts contain several phytoestrogenic compounds, which stimulate the production of female hormones and bind to the beta estrogen receptor, mitigating menopausal symptoms. Due to the presence of phytoestrogenic compounds, red clover extract can raise the estrogen level and thus trigger ovulation and aid fertility. History of female fertility agents In 1931, the first commercially available human chorionic gonadotropin (hCG) was launched, marking the first emergence of female fertility agents. Subsequently, clomiphene citrate was discovered in 1951 and approved by the Food and Drug Administration (FDA) in 1967. In 1978, bromocriptine gained FDA approval for hyperprolactinemia and later proved effective in treating prolactinemia-related infertility. As a breakthrough, hMG/hCG protocols for pre-IVF treatment were introduced. The first recombinant human follicle-stimulating hormone (r-hFSH) received EU approval in 1995, followed by the approval of recombinant human luteinizing hormone (rhLH) and recombinant human chorionic gonadotropin (rhCG) in 2000. In the 2000s, clinical trials explored the effectiveness of the aromatase inhibitor letrozole for infertility treatment, and epidemiological studies have examined the phenotypic differences associated with prenatal exposure to metformin. There is some evidence that letrozole can improve the live birth rate and the pregnancy rate for people with anovulatory PCOS, and this treatment may be more effective than SERMs. There is high-quality evidence that the rate of ovarian hyperstimulation syndrome is similar between SERMs and letrozole.
There is also high-quality evidence that there is no difference in the rate of pregnancy loss or the rate of multiple pregnancy when comparing letrozole and SERMs. References Drugs by target organ system Fertility
Female fertility agents
[ "Biology" ]
3,107
[ "Organ systems", "Drugs by target organ system" ]
76,542,190
https://en.wikipedia.org/wiki/Dalpiciclib
Dalpiciclib is a drug for the treatment of various forms of cancer. In China, dalpiciclib is approved for use in combination with fulvestrant for treatment of HR-positive, HER2-negative recurrent or metastatic breast cancer in patients who have progressed after previous endocrine therapy. Dalpiciclib is a CDK inhibitor that targets the CDK4 and CDK6 isoforms. References CDK inhibitors Piperidines 2-Aminopyridines Cyclopentanes Pyridopyrimidines Ketones Lactams
Dalpiciclib
[ "Chemistry" ]
119
[ "Ketones", "Functional groups" ]
76,543,563
https://en.wikipedia.org/wiki/Ushikubi%20Sue%20Ware%20Kiln%20Site
The Ushikubi Sue Ware Kiln Site is an archaeological site containing a group of Nara period kilns located in the Ushikubi and Kamidairi neighborhoods of the city of Ōnojō, Fukuoka Prefecture, Japan, extending across the border into parts of Dazaifu and Kasuga. The site was designated a National Historic Site of Japan in 2009. Overview The Ushikubi Sue ware kilns were located in the southeastern part of the Fukuoka Plain, and operated from the mid-6th century to the mid-9th century. It is the largest group of Sue ware kilns in western Japan outside of the Kinki region. As a result of archaeological excavations, it was estimated that the kiln ruins were distributed in an area of about 4 km from east to west and about 4.8 km from north to south, and that there were originally about 500 kilns. The kilns were anagama-style underground kilns; they became larger from the middle to the end of the 6th century, with many having a total length of over ten meters from the end of the 6th century to the first half of the 7th century, and from then on they became smaller. During the period when kilns were larger, many were perforated flue kilns, which were unique to the Ushikubi site and had multiple flues deep inside the firing section. Sue ware shards bearing the inscription Wadō 6 (713) were excavated from the remains of a pit dwelling and a building with buried pillar foundations, suggesting that there was also a village for craftsmen. The types of fired vessels from the Kofun period to the first half of the Nara period were diverse, including small saucers, bottles, and jars, but by the middle of the Nara period, it seems that the kilns specialized in small ware such as saucers and plates. The distribution range of pottery from this site was limited to the area around the Fukuoka Plain during the Kofun period, but in the Nara period and early Heian period it spread across the entire northern Kyushu region. More than 300 kiln traces have been excavated to date.
The site is approximately 2.2 kilometers by car from Mizuki Station on the JR Kyushu Kagoshima Main Line, or 4.6 kilometers west of the Dazaifu ruins. See also List of Historic Sites of Japan (Fukuoka) References External links Ōnojō city home page Fukuoka Tourism Web Cultural Properties in Fukuoka Prefecture History of Fukuoka Prefecture Chikuzen Province Ōnojō Historic Sites of Japan Japanese pottery kiln sites
Ushikubi Sue Ware Kiln Site
[ "Chemistry", "Engineering" ]
520
[ "Kilns", "Japanese pottery kiln sites" ]
76,543,945
https://en.wikipedia.org/wiki/Ruby%20Payne-Scott%20Medal%20and%20Lecture
The Ruby Payne-Scott Medal and Lecture for women in science is a distinguished career award that acknowledges outstanding Australian women researchers in the biological or physical sciences. It is conferred by the Australian Academy of Science and is awarded to researchers who are usually resident in, and conduct their research predominantly in, Australia. This award, established in 2021, honours the contributions of Ruby Payne-Scott, particularly in the fields of radiophysics and radio astronomy. Recipients References Australian science and technology awards Science awards honoring women Awards established in 2021 Australian Academy of Science Awards
Ruby Payne-Scott Medal and Lecture
[ "Technology" ]
108
[ "Science and technology awards", "Science awards honoring women" ]
76,545,564
https://en.wikipedia.org/wiki/Donafenib
Donafenib, sold under the brand name Zepsun, is a pharmaceutical drug for the treatment of cancer. In China, donafenib is approved for the treatment of unresectable hepatocellular carcinoma in patients who have not previously received systemic treatment. Donafenib is a kinase inhibitor that targets Raf kinase and various receptor tyrosine kinases. It is a deuterated derivative of sorafenib with improved pharmacokinetic properties. References Receptor tyrosine kinase inhibitors Chloroarenes Trifluoromethyl compounds Ureas Phenol ethers Pyridines Benzamides Diaryl ethers Deuterated compounds
Donafenib
[ "Chemistry" ]
141
[ "Organic compounds", "Ureas" ]
76,547,002
https://en.wikipedia.org/wiki/Steven%20Gao
Steven Shichang Gao (born 1972) is a British Chinese electronic engineer and Professor of Electronic Engineering. His research mainly includes antennas, MIMO, intelligent antennas and phased arrays for mobile and satellite communications, navigation and sensing. He obtained a doctorate at Shanghai University in 1999, completed post-doctoral research at the National University of Singapore, and moved to the United Kingdom in 2001 to work as a Research Fellow at the University of Birmingham. The following year, he began teaching at Northumbria University as a Senior Lecturer and was promoted to Reader in 2006. In 2007, he moved to the Surrey Space Centre at the University of Surrey, and from 2013 to 2022 he worked at the University of Kent as Professor and Chair of RF and Microwave Engineering. In September 2022, he joined the Chinese University of Hong Kong as a Professor and the Director of the Center for Intelligent Electromagnetic Systems. Since 2023, Gao has served as the Editor-in-Chief of the journal IEEE Antennas and Wireless Propagation Letters. Gao is a Fellow of the Institute of Electrical and Electronics Engineers, the Royal Aeronautical Society, and the Institution of Engineering and Technology. References Chinese electrical engineers Chinese expatriates in the United Kingdom Academic staff of the Chinese University of Hong Kong Academics of the University of Surrey Academics of the University of Kent Academics of Northumbria University Fellows of the IEEE Fellows of the Royal Aeronautical Society Fellows of the Institution of Engineering and Technology Scientific journal editors Living people 1972 births Chinese expatriates in Singapore Shanghai University alumni
Steven Gao
[ "Engineering" ]
313
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
56,516,958
https://en.wikipedia.org/wiki/Emergency%20locator%20beacon
An emergency locator beacon is a radio beacon, a portable battery powered radio transmitter, used to locate airplanes, vessels, and persons in distress and in need of immediate rescue. Various types of emergency locator beacons are carried by aircraft, ships, vehicles, hikers and cross-country skiers. In case of an emergency, such as the aircraft crashing, the ship sinking, or a hiker becoming lost, the transmitter is deployed and begins to transmit a continuous radio signal, which is used by search and rescue teams to quickly find the emergency and render aid. The purpose of all emergency locator beacons is to help rescuers find survivors within the so-called "golden day", the first 24 hours following a traumatic event, during which the majority of survivors can usually be saved. Beacon types COSPAS-SARSAT 406 MHz Distress Beacons Cospas-Sarsat is an international humanitarian consortium of governmental and private agencies which acts as a worldwide dispatcher for search and rescue operations. It operates a network of about 47 satellites carrying radio receivers, which detect distress signals from emergency locator beacons anywhere on Earth transmitting on the international Cospas distress frequency of 406 MHz. The satellites calculate the geographic location of the beacon within 2 km by measuring the Doppler frequency shift of the radio waves due to the relative motion of the transmitter and the satellite, and quickly transmit the information to the appropriate local first responder organizations, which perform the search and rescue. Defined officially as emergency position-indicating radiobeacon stations in the ITU Radio Regulations (Section IV. Radio Stations and Systems – Article 1.93), these transmit a coded data burst once every 50 seconds, conforming to the C/S T.001 Specification for Cospas-Sarsat 406 MHz Distress Beacons, compatible with the Cospas-Sarsat satellite receivers. 
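The Doppler location principle described above can be sketched with a toy calculation: the 406 MHz carrier received by a passing low-Earth-orbit satellite is shifted in proportion to the radial velocity between satellite and beacon, and the shift crosses zero at the point of closest approach. The function name and example velocities below are illustrative assumptions, not part of any Cospas-Sarsat specification:

```python
# Constants: speed of light and the Cospas-Sarsat beacon carrier frequency.
C = 299_792_458.0   # m/s
F0 = 406.0e6        # Hz

def received_frequency(radial_velocity_mps):
    """Doppler-shifted frequency seen by the satellite (non-relativistic).
    Negative radial velocity means the satellite is approaching the beacon."""
    return F0 * (1.0 - radial_velocity_mps / C)

# A LEO satellite moves at roughly 7.5 km/s; as it passes, the radial
# component sweeps from negative (approaching) through zero (closest
# approach) to positive (receding):
for v in (-7000.0, 0.0, 7000.0):
    shift_hz = received_frequency(v) - F0
    print(f"radial v = {v:+8.0f} m/s -> Doppler shift = {shift_hz:+9.1f} Hz")
```

The zero-crossing time and the slope of the measured shift-versus-time curve, combined with the satellite's known orbit, are what constrain the beacon's position on the ground.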
The different types include: ELTs (emergency locator transmitters) signal aircraft distress EPIRBs (emergency position-indicating radio beacons) signal maritime distress SEPIRBs (submarine emergency position-indicating radio beacons) are EPIRBs designed only for use on submarines SSASes (ship security alert system) are used to indicate possible piracy or terrorism attacks on sea-going vessels PLBs (personal locator beacons) are for personal use and are intended to indicate a person in distress who is away from normal emergency services; e.g., 9-1-1. They are also used for crew-saving applications in shipping and in lifeboats. In New South Wales, Australia, some police stations and the National Parks and Wildlife Service provide personal locator beacons to hikers for no charge. Auxiliary maritime beacons ENOS Rescue-System A rescue beacon system designed for use by divers who have drifted away from their dive boats. Search and rescue transponder (SART) A specialized radar beacon (RACON) that emits a string of 12 dots (replaced by arcs and circles when closer) for display on an X-band radar screen when scanned. Man-overboard beacons MSLDs (maritime survivor locating devices) are man-overboard locator beacons, first standardized in 2016. In the U.S., rules were established in 2016 in 47 C.F.R. Part 95. An MSLD may transmit on 121.500 MHz, or one of these: 156.525 MHz, 156.750 MHz, 156.800 MHz, 156.850 MHz, 161.975 MHz, 162.025 MHz (some of which are Canadian-required frequencies).
SEND—Satellite Emergency Notification Devices SPOT inReach Spidertracks Yellowbrick Somewear Global Hotspot Avalanche beacons RECCO Avalanche transceiver Other beacons Mountain Locator Unit Automatic Packet Reporting System Crash position indicator Transponder (aeronautics) Can be used as an emergency beacon of sorts by setting it to squawk 7700, the distress code See also References Aircraft emergency systems Emergency communication Beacons Radio geopositioning
Emergency locator beacon
[ "Technology" ]
879
[ "Radio geopositioning", "Wireless locating" ]
56,518,900
https://en.wikipedia.org/wiki/Why%20Is%20It%20So%3F
Why Is It So? is an educational science series produced in Australia by ABC Television from 1963 to 1986. The series was hosted by American scientist Julius Sumner Miller, who demonstrated experiments in the world of physics. The series was also screened in the United States, Canada, New Zealand and in Europe. This program was based on his 1959 series Why Is It So? in the United States on KNXT (now KCBS-TV) Channel 2 in Los Angeles. Several segments from the program have been uploaded to the ABC Science YouTube channel. References External links 1960s Australian documentary television series 1970s Australian documentary television series 1980s Australian documentary television series Science education television series Physics education Australian Broadcasting Corporation original programming 1963 Australian television series debuts 1986 Australian television series endings
Why Is It So?
[ "Physics" ]
148
[ "Applied and interdisciplinary physics", "Physics education" ]