id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
54,903,105 | https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20Asia | India and China are the two largest producers of genetically modified products in Asia. India currently grows only GM cotton, while China produces GM varieties of cotton, poplar, petunia, tomato, papaya and sweet pepper. The cost of enforcing regulations in India is generally higher, possibly because farmers and small seed firms have greater influence over policy makers, while enforcement of regulations has been more effective in China. Other Asian countries that grew GM crops in 2011 were Pakistan, the Philippines and Myanmar. GM crops were approved for commercialisation in Bangladesh in 2013 and in Vietnam and Indonesia in 2014.
China
GM crops in China go through three phases of field trials (pilot field testing, environmental release testing, and preproduction testing) before they are submitted to the Office of Agricultural Genetic Engineering Biosafety Administration (OAGEBA) for assessment. Producers must apply to OAGEBA at each stage of the field tests. The Chinese Ministry of Science and Technology developed the first biosafety regulations for GM products in 1993, and they were updated in 2001. The 75-member National Biosafety Committee evaluates all applications, although OAGEBA has the final decision. Most members of the National Biosafety Committee are involved in biotechnology, leading to criticism that they do not represent a wide enough range of public concerns.
India
The release of transgenic crops in India is governed by the Indian Environment Protection Act, enacted in 1986. The Institutional Biosafety Committee (IBSC), the Review Committee on Genetic Manipulation (RCGM) and the Genetic Engineering Approval Committee (GEAC) all review any genetically modified organism to be released, and transgenic crops also need permission from the Ministry of Agriculture. Indian regulators cleared Bt brinjal, a genetically modified eggplant, for commercialisation in October 2009. Following opposition from some scientists, farmers and environmental groups, a moratorium was imposed on its release in February 2010.
Official Reports on GMO
As of August 2013, there had been four official reports on GMOs in India:
The ‘Jairam Ramesh Report’ - February 2010, imposing an indefinite moratorium on Bt Brinjal
The Sopory Committee Report - August 2012
The Parliamentary Standing Committee (PSC) Report on GM crops - August 2012
Final Report of the Technical Expert Committee established by the Supreme Court - July 2013
Japan
Two laws regulate food safety and food quality in Japan, the Food Sanitation Law passed in 1947 and the Law Concerning Standardization and Proper Labeling of Agricultural and Forestry Products passed in 1950. The Food Sanitation Law has been amended and updated many times; an amendment dealing with pre-market approval and labeling of GMOs was passed in 2000 and came into effect in 2001. Japan passed laws to implement the Cartagena Protocol on Biosafety in September 2003 which came into effect in February 2004 - the Law Concerning the Conservation and Sustainable Use of Biological Diversity through Regulations on the Use of Living Modified Organisms (Law No. 97 of 2003).
Authority to approve the various uses of genetically modified organisms is divided in Japan. The Ministry of the Environment has final approval for all uses of GMOs, but crops for commercial use and live vaccines for animals first go through the Ministry of Agriculture, Forestry and Fisheries; viruses for gene therapy and other medical applications first go through the Ministry of Health, Labor and Welfare; field trials of GM crops and recombinant DNA used in biotechnology research first go through the Ministry of Education, Culture, Sports, Science and Technology; and uses in the production of industrial enzymes and the like go through the Ministry of Economy, Trade and Industry.
Japan has not approved any commodity GM crops for cultivation in Japan, but it does allow the import of agricultural products made from GM crops and food made with imported GM ingredients. Japan does, however, allow the cultivation of GM flowers (e.g. blue roses).
GM foods must undergo a safety assessment prior to being awarded certification for distribution to the domestic market. The Food Safety Commission (FSC) performs food and feed safety risk assessments.
Certain GM foods must be labeled, but the requirement is limited to designated genetically modified agricultural products (soybean, corn, potato, rapeseed, cottonseed, alfalfa and beet) and to 32 processed foods containing soybean, corn, potato, alfalfa or beet in which recombinant DNA, or the protein it encodes, still exists after processing. Processed foods in which the recombinant DNA or protein is broken down or removed during processing, such as soy sauce, soybean oil, corn flakes, millet jelly, corn oil, rapeseed oil and cottonseed oil, do not have to be labeled.
Japan does not require traceability, and allows negative labeling ("GMO-free" and the like).
Philippines
In December 2015 the Philippines banned all GMOs, overturning existing Department of Agriculture regulations. A petition filed on May 17, 2013 by the environmental group Greenpeace Southeast Asia and the farmer-scientist coalition Masipag (Magsasaka at Siyentipiko sa Pagpapaunlad ng Agrikultura) asked the appellate court to stop the planting of Bt eggplant in test fields, arguing that the impacts of such an undertaking on the environment, native crops and human health were still unknown. The Court of Appeals granted the petition, citing the precautionary principle: "when human activities may lead to threats of serious and irreversible damage to the environment that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish the threat." Respondents filed a motion for reconsideration in June 2013, and on September 20, 2013 the Court of Appeals upheld its May decision, saying the Bt talong field trials violated the people's constitutional right to a "balanced and healthful ecology." On December 8, 2015 the Supreme Court permanently stopped field testing of Bt (Bacillus thuringiensis) talong (eggplant), upholding the Court of Appeals decision. The Supreme Court also took the unprecedented step of invalidating the Department of Agriculture administrative order allowing the field testing, propagation, commercialization and importation of GMOs.
References
Genetic engineering by country | Genetically modified food in Asia | [
"Engineering",
"Biology"
] | 1,282 | [
"Genetic engineering",
"Genetic engineering by country",
"Biotechnology by country"
] |
54,903,954 | https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20North%20America | Genetically modified food in North America covers genetic engineering activities in the United States, Canada, and Mexico.
None of the three countries requires labeling of genetically modified foods.
Canada
Canada is one of the world's largest producers of GM canola and also grows GM maize, soybean and sugar beet. Health Canada, under the Food and Drugs Act, and the Canadian Food Inspection Agency (CFIA) are responsible for evaluating the safety and nutritional value of genetically modified foods. Environmental assessments of biotechnology-derived plants are carried out by the CFIA's Plant Biosafety Office (PBO). The Canadian regulatory system is based on whether a product has novel features, regardless of method of origin: a product is regulated as GM if it carries some trait not previously found in the species, whether it was generated using traditional breeding methods (e.g. selective breeding, cell fusion, mutation breeding) or genetic engineering. Canadian law requires manufacturers and importers to submit detailed scientific data to Health Canada for safety assessment before approval. This data includes: information on how the GM plant was developed; nucleic acid data characterizing the genetic change; composition and nutritional data of the novel food compared to the original non-modified food; potential for new toxins; and potential for being an allergen. A decision is then made whether to approve the product for release, along with any restrictions or requirements. Labeling of foods as products of genetic engineering, or as not products of genetic engineering, is voluntary. The Canadian regulations were reviewed by the Canadian Biotechnology Advisory Committee between 1999 and 2003, which concluded that the existing level of regulation was satisfactory. The committee was accused by environmental and citizen groups of not representing the full spectrum of public interests, since only one of its 20 board members represented non-governmental organisations, and of being too closely aligned with industry groups.
Mexico
In February 2005, after consulting the Mexican Academy of Sciences, Mexico's senate passed a law allowing the planting and sale of genetically modified cotton and soybean. The law requires all genetically modified products to be labelled according to guidelines issued by the Mexican Ministry of Health. In 2009, the government enacted statutory provisions for the regulation of genetically modified maize. Mexico is the center of diversity for maize, and concerns had been raised about the impact genetically modified maize could have on local strains.
In 2013, a federal judge ordered Mexico's SAGARPA (Secretaría de Agricultura, Ganadería, Desarrollo Rural, Pesca, y Alimentación), which is Mexico's Secretary of Agriculture, and SEMARNAT (Secretaría de Medio Ambiente y Recursos Naturales), equivalent of the EPA, to temporarily halt any new GMO corn permits, accepting a lawsuit brought by opponents of the crop.
United States
Federal regulation
The USA is the largest commercial grower of genetically modified crops in the world.
United States regulatory policy is governed by the Coordinated Framework for Regulation of Biotechnology, a framework developed under the presidency of Ronald Reagan to ensure public safety while allowing the continued development of the fledgling biotechnology industry without overly burdensome regulation. The policy as it developed had three tenets: "(1) U.S. policy would focus on the product of genetic modification (GM) techniques, not the process itself, (2) only regulation grounded in verifiable scientific risks would be tolerated, and (3) GM products are on a continuum with existing products and, therefore, existing statutes are sufficient to review the products." In 2015 the Obama administration announced that it would update the way the government regulated genetically modified crops.
For a genetically modified organism to be approved for release, it must be assessed under the Plant Protection Act by the Animal and Plant Health Inspection Service (APHIS) within the US Department of Agriculture (USDA), and it may also be assessed by the Food and Drug Administration (FDA) and the Environmental Protection Agency (EPA), depending on the intended use of the organism. The USDA evaluates the plant's potential to become a weed. The FDA has a voluntary consultation process with the developers of genetically engineered plants; the Federal Food, Drug, and Cosmetic Act, which outlines the FDA's responsibilities, does not require pre-market clearance of food, including genetically modified food plants. The EPA regulates genetically modified plants with pesticide properties, as well as agrochemical residues. Most genetically modified plants are reviewed by at least two of the agencies, and many are subject to all three. Within the FDA, departments that regulate different areas of GM food include the Center for Food Safety and Applied Nutrition (CFSAN) and the Center for Biologics Evaluation and Research (CBER). As of 2008, all developers of genetically modified crops in the US had made use of the voluntary consultation process. Final approval can still be denied by individual counties within each state. In 2004, Mendocino County, California became the first county to impose a ban on the "Propagation, Cultivation, Raising, and Growing of Genetically Modified Organisms", the measure passing with a 57% majority. In May 2014, Jackson and Josephine Counties in southern Oregon passed initiatives similar to Mendocino County's, both by 2-to-1 margins.
Several laws govern the US regulatory agencies; these are the statutes the agencies apply when determining the safety of a particular GM food. They include:
The Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) (EPA);
The Toxic Substances Control Act (TSCA) (EPA);
The Federal Food, Drug, and Cosmetic Act (FFDCA) (FDA and EPA);
The Plant Protection Act (PPA) (USDA);
The Virus-Serum-Toxin Act (VSTA) (USDA);
The Public Health Service Act (PHSA) (FDA);
The Dietary Supplement Health and Education Act (DSHEA) (FDA);
The Meat Inspection Act (MIA) (USDA);
The Poultry Products Inspection Act (PPIA) (USDA);
The Egg Products Inspection Act (EPIA) (USDA); and
The National Environmental Policy Act (NEPA).
State regulation
Several states have passed regulations concerning the labelling of GM food. Connecticut passed a GMO labeling bill in May 2013, but the bill will only take effect after four other states enact similar legislation. On January 9, 2014, Maine's governor signed a bill requiring labeling for foods made with GMOs, with a triggering mechanism similar to Connecticut's. In May 2014, Vermont passed a law requiring labeling of food containing ingredients derived from genetically modified organisms. Separately, a federal judge ruled Maui's GMO ban invalid.
References
Genetic engineering by country
North American cuisine | Genetically modified food in North America | [
"Engineering",
"Biology"
] | 1,362 | [
"Genetic engineering",
"Genetic engineering by country",
"Biotechnology by country"
] |
54,904,025 | https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20South%20America | Brazil and Argentina are the second- and third-largest producers of genetically modified food, behind the United States.
The Argentine government was one of the first to accept genetically modified food. Assessment of genetically modified products for release is provided by the National Agricultural Biotechnology Advisory Committee (environmental impact), the National Service of Health and Agrifood Quality (food safety) and the National Agribusiness Direction (effect on trade), with the final decision made by the Secretariat of Agriculture, Livestock, Fishery and Food. The government is looking to tighten the current law, which allows farmers to keep seed without paying royalties, in a bid to encourage more private investment.
In Brazil the National Biosafety Technical Commission is responsible for assessing environmental and food safety and prepares guidelines for transport, importation and field experiments involving GM products. The Council of Ministers evaluates the commercial and economic issues of release. The National Biosafety Technical Commission has 27 members, comprising 12 scientists, 9 ministerial representatives and 6 other specialists.
Honduras, Costa Rica, Colombia, Bolivia, Paraguay, Chile, and Uruguay also allow GM crops to be grown.
Venezuela banned genetically modified seeds in 2004. Ecuador prohibited genetically engineered crops and seeds in its 2008 Constitution, approved by 64% of voters in a referendum (although Ecuadorian President Rafael Correa said in 2012 that this was "a mistake"). Peru has banned transgenic crops.
References
Genetic engineering by country
South American cuisine | Genetically modified food in South America | [
"Engineering",
"Biology"
] | 292 | [
"Genetic engineering",
"Genetic engineering by country",
"Biotechnology by country"
] |
54,904,141 | https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20Oceania | Since the 1980s New Zealand and Australia have used genetic engineering for different purposes, including the production of food. Each country has faced controversy in this area and used a variety of legal measures to allay concerns and move toward the safe implementation of the technology. As of 2024 many issues requiring ongoing review remain in Oceania, in line with European data that showed "questions of consumer confidence and trust" and negative perceptions of genetically modified food as unhealthy and the technology as a process likely to damage the environment. Australian and New Zealand both require labeling so consumers can exercise choice between foods that have genetically modified, conventional, or organic origins.
Australia
Genetic engineering in Australia was originally overseen (from 1987) by the Genetic Manipulation Advisory Committee, before the Office of the Gene Technology Regulator (OGTR) and Food Standards Australia New Zealand took over in 2001. The OGTR is a Commonwealth Government authority within the Department of Health and Ageing; it was established under the Gene Technology Act 2000, operates according to the Gene Technology Regulations 2001, reports directly to Parliament through a Ministerial Council on Gene Technology, and has legislative powers.
The OGTR decides on license applications for the release of all genetically modified organisms, while regulation is provided by the Therapeutic Goods Administration for GM medicines or Food Standards Australia New Zealand for GM food. The individual state governments are then able to assess the impact of release on markets and trade and apply further legislation to control approved genetically modified products.
Genetically modified cotton, canola and carnations are grown in Australia. GM cotton has been grown commercially in New South Wales and Queensland since 1996. GM canola was approved in 2003, first grown in 2008, and first approved in Western Australia in 2010.
In 2011 genetically modified plants were grown in all states except South Australia and Tasmania, which had extended their moratoria until 2019 and 2014 respectively. The Queensland and Northern Territory governments have not implemented any legislation beyond the national level, but several other states placed bans on planting certain GM crops. In 2007 the New South Wales government extended a blanket moratorium on GM food crops until 2011, but allowed groups to apply for exemptions; it approved GM canola for commercial cultivation in 2008, while the Victorian government let its moratorium on GM canola expire in 2007. Western Australia passed the Genetically Modified Crops Free Areas Act in 2003 and was declared a GM-free area in 2004. In 2008 an exception was made for commercial cultivation of GM cotton in the Ord River Irrigation Areas. Trials of GM canola were carried out in 2003, and in 2010 the Western Australian government allowed the commercialisation of GM canola.
New Zealand
Early management
By the early 1980s technologies involving recombinant bacteria were being applied in New Zealand laboratories, mostly for biological and medical research. There had already been a move at governmental level to manage genetic modification in the country: in 1978 the government placed a moratorium on field releases and established the Advisory Committee on Novel Genetic Techniques (ACNGT). The setting up of this body reflected concerns at the time among some scientists and the New Zealand community "about the possible ecological and public health consequences of release of a GMO into the environment...[being concerned with]...laboratory experiments involving novel genetic techniques and [giving] advice to researchers on contained laboratory and glasshouse experiments." Research in this field advanced rapidly, and by 1986, when there was more demand for field trials, the New Zealand government set up the Field Release Working Party (FRWP) to monitor the regulation of field testing and release of GMOs. In 1987 the FRWP recommended the establishment of a committee to assess all proposals to field test or release GMOs, and the Minister for the Environment established the GMO Interim Assessment Group (IAG) under Section 33 of the Environment Act 1986; at this point the moratorium on field release was lifted. Researchers, funded either by the private sector or by the government, were required to submit proposals to the IAG for assessment; after meeting the required guidelines, these were publicly posted in several New Zealand newspapers and the public had thirty days to make submissions.
Hazardous Substances and New Organisms Act (1996)
Neither the IAG nor the ACNGT had any legislative authority, and as the public became more aware of field testing of genetically modified organisms, there were calls for legislation to monitor the process. The Environment Minister at the time, Simon Upton, who was also Minister for Crown Research Institutes, sponsored what was to become the Hazardous Substances and New Organisms Act (HSNO Act), passed on 16 April 1996. Upton saw the debate about genetically modified crops as being as significant as the controversy when New Zealand became nuclear-free in the 1980s. The Act defined a genetically modified organism as "any organism in which any of the genes or other genetic material have been modified by in vitro techniques; or are inherited or otherwise derived, through any number of replications, from any genes or other genetic material which has been modified by in vitro techniques". It also noted that regulations could be made "(a) specifying the procedures and methods for assessing the probability that an adverse effect will occur from genetic modification of an organism: (b) specifying the probability that adverse effects will occur from specified development procedures: (c) specifying the circumstances in which genetic modification of an organism is a low risk genetic modification". Other sections covered assessments of projects for low-risk genetic modification, and the importation of genetically modified organisms, requiring "any person importing any organism to declare, by statutory declaration, that the organism is not a genetically modified organism". Under the HSNO Act the Environmental Risk Management Authority (ERMA) was also established as the body responsible for managing the development or importation of genetically modified organisms (GMOs) into New Zealand.
Growing public concerns
By the late 1990s the potential risks of GE technology had emerged as an issue of public concern in New Zealand, "fuelled by ethical concerns and health risks from inserting human genes into cattle, international concerns about the health effects of GM foods, and the potential environmental impacts of GM crops and other field uses." Greenpeace called for a moratorium on the release of GE organisms into the environment in 1996, and in 1997 established a consumer network that urged supermarkets not to sell GE foods. The government responded by setting up the Independent Biotechnology Advisory Group (IBAC) to focus on human biotechnology and "provide independent advice to Government on its environmental, economic, ethical, social and health implications".
Royal Commission into Genetic Modification (2000)
The New Zealand Government was made fully aware of the concerns about genetic modification when, in October 1999, a petition signed by 92,000 New Zealanders was presented to Parliament by the Green Party. Accordingly, on 21 December 1999, the government announced in its Speech from the Throne at the Opening of Parliament: "It is recognised that one area of research and development has led to significant public concerns. That is the area of genetic modification. A Royal Commission into Genetic Modification will be established. Until it has reported, a moratorium will be imposed on the commercial planting of genetically modified crops. Very strict conditions will apply to the consideration of any application for field trials until such time as the Commission reports on the wider issues."
Australian science broadcaster and writer Peter Pockley claimed that while the New Zealand Prime Minister at the time, Helen Clark, was justified in saying the Commission's "scope and processes had been unique", some of the Commission's recommendations would require further resourcing: to "consolidate the commission's good work [the] New Zealand government will need to legislate with determination". Pockley commented further on some of the Commission's key recommendations, concluding that while the government was not bound by them, Clark and Environment Minister Marian Hobbs welcomed the Commission's work and set a deadline for when plans to enact the recommendations would be in place. A conclusion of the Commission was that while New Zealand should not rule out genetic modification, it should proceed very cautiously.
Response to the Commission's Report
Government
The report of the Royal Commission into Genetic Modification made forty-nine recommendations to the New Zealand Government. Marian Hobbs, the Minister for the Environment, noted that she was delighted the Commission had inquired "into and report on the strategic options available to enable New Zealand to address genetic modification now and in the future...[and]...the concerns of New Zealanders [were] heard and evaluated". The same press release contained an overview and summary of the Commission's report, including details of the recommendations and a timeline for the government to respond. The main conclusion of the report was that New Zealand should keep its options open with regard to genetic engineering and proceed carefully in order to minimise and manage any risks, [adopting] "a strategy of preserving opportunities and proceeding to use genetic modification selectively and with appropriate care...[rejecting]...the idea of a New Zealand free of all genetically modified material at one extreme and the option of unrestricted use of genetic modification at the other". In its formal acknowledgement of the report in October 2001, the New Zealand Government committed to responding to the recommendations within three months. It noted the Commission's expectation that the food industry "must be subject to rigorous standards, properly enforced and carefully managed", with better publicity about the rules and a suggested introduction of a voluntary labelling scheme to allow identification of foods that are "free of genetically modified content or processes". The government confirmed on 31 October 2001 that it would legislate to stop genetically modified organisms being released into the country for the next two years, but that it would lift a 16-month ban on field trials of the organisms, which Helen Clark said "would go hand-in-hand with new rules to ensure materials used in the research were later destroyed or locked away".
The Prime Minister stated in the same press release that there was not unanimous support for the policy from all members of her centre-left Labour Party, including some Māori MPs who "objected to genetic modification for religious and cultural reasons".
Early in 2002, following public consultation, the New Zealand government drafted a series of proposals for legislative changes, notably amendments to the Hazardous Substances and New Organisms Act. These were intended not only to provide practical frameworks for managing new organisms but were also an important part of the government's response to the Commission's report. Key elements included addressing changes in line with increased scientific knowledge, streamlining the process for contained research and for approval of new organisms that are medicines, ensuring effective compliance and enforcement, and "extending the Minister's ability to call in specific applications to include significant cultural, ethical and spiritual effects". These changes were presented as the New Organisms and Other Matters Bill. In the first reading of the Bill, Marian Hobbs concluded: "The changes proposed in this Bill will ensure that the Hazardous Substances and New Organisms Act is pitched at the right level to allow genetic modification developments to proceed cautiously while preserving opportunities."
In 2003, as part of its response to the Commission's report, the government commissioned Business and Economic Research Limited (BERL) to investigate the possible impact of the release of genetically modified organisms on the New Zealand economy. The research methodology used "the modelling of four hypothetical scenarios and a snapshot consumer survey" with the aim of providing an "economic analysis of the risks and opportunities that may arise from the use of genetic modification and non-genetic modification technologies". The research identified that New Zealand's 'clean, green image' overseas could be affected by the introduction of GMOs with the risk of changing the intentions of potential foreign purchasers of New Zealand's goods and services, but did note that some of the scenarios showed genetic modification had the potential "to create entirely new products and sectors of economic activity".
No genetically modified food was grown in New Zealand, and no medicines containing live genetically modified organisms had been approved for use. However, medicines manufactured using genetically modified organisms that did not contain live organisms had been approved for sale, and imported foods with genetically modified components were sold. Food Standards Australia New Zealand (FSANZ) was required to approve any food produced from GM crops, or made using genetically engineered enzymes, before it could be marketed in Australia or New Zealand; FSANZ made a list of such approvals available on its website.
Published reviews
An independent review analysed the level of implementation of the Commission's recommendations by the New Zealand Government, mainly during the period 2001-2003. The findings showed that 41% of all recommendations had been fully implemented, mostly in the area of 'Research', while 'Te Tiriti o Waitangi' and 'Preserving Opportunities' were among the 59% partially or not implemented. Daniel Pollak, an Ian Axford Fellow in Public Policy, wrote a paper in 2003 that assessed the effectiveness of the Royal Commission's report both in facilitating public participation and in shaping policies developed to identify and manage the environmental risks and crop co-existence issues arising from GMOs within New Zealand. The paper concluded that while the Royal Commission used adequate consultation tools, more processes could have been considered to empower people to be involved in collaborative decision making. It was suggested that the Environmental Risk Management Authority (ERMA) would find it difficult to satisfy everyone because it was confronted with a "lack of universally agreed methods for doing environmental risk assessment" and conflicting demands to avoid being risk-averse while taking "more intangible considerations beyond science into account". The author contended that crop co-existence and zero tolerance of GM contamination could be costly, with uncertainties about the legal status of GMOs that might be introduced to New Zealand inadvertently through seed imports, concluding that "dealing with GMOs will continue to be controversial, and as long as that is the case there will be many trade-offs and balances to manage. New Zealand will have to try to avoid both over-regulating GMOs and under-regulating them".
The New Zealand Society for Risk Management, a group of professionals from the private, government and academic sectors established in 2000, responded to the report of the Commission on 3 August 2001. The Society questioned the degree to which the Commission had been constrained by the Terms of Order in Council provided by the Executive Council of New Zealand in specifying the principles of risk management and concluded there were still issues to be resolved. These included more thorough identification, assessment and prioritising of the risks posed by different forms and uses of genetic technology and further consultation with the community to discuss and inform on the risks and how they could be managed.
In a 2003 report, The Sustainability Council of New Zealand noted that many markets rejected GM food, that contamination during harvesting and transport was a problem, and that there were risks to the environment from GM agriculture. The report concluded that the Government needed to act to protect the country's interests, declare New Zealand a GM-free food producer for five years, and make appropriate changes to the law to allow for possible releases of GM in the future.
Public protests
The findings of the Royal Commission and the response of the government led to several protest activities by the public of New Zealand. The GE-Free Day of Action, organised by a coalition of GE-Free groups, resulted in an estimated 15,000 people gathering in cities around the country on 6 October 2000, and in September 2001 the Auckland GE-Free Coalition organised a rally in Aotea Square, Auckland, in which around 10,000 people participated. Jeanette Fitzsimons said the rally demonstrated that the issue of a GE-free future was now a "mainstream concern...[and]...this march was not a negative protest, but a positive celebration of our unique GE-free status and a demonstration of the determination of many New Zealanders to keep it this way".
On 1 November 2001 over two hundred people arrived at Parliament in Wellington after travelling from Northland to protest genetically modified tamarillo field trials carried out by HortResearch in Kerikeri. Sue Bradford welcomed the protesters and noted [that] "many indigenous people in this country, as in other parts of the world, are deeply uncomfortable with the prospect of genetically engineered contamination of the natural world". Another march, which began on 22 August 2003 and ended with hundreds of protesters gathering at Parliament on 23 October, called for a complete ban on GM in New Zealand and presented a petition that read: "We, the undersigned, request that Parliament ensure that genetic engineering research takes place only in contained laboratories, that genetically modified organisms are not released into the environment or food chain, and that the moratorium continues for at least five more years until 2008". The hīkoi became known as the 'Seed Carriers', because the participants collected seeds on the march and later presented these to the government "in protest at the harm GM could cause to New Zealand's seed varieties, including native plants".
As part of a nation-wide initiative, an estimated 9000 people marched through the centre of Auckland on 12 October 2003 to protest against the lifting of the moratorium on the commercial release of genetically engineered organisms. Elvira Dommisse, a New Zealand scientist who resigned from her position at the state-owned New Zealand Institute for Crop and Food Research in 1993 because she disagreed with the direction being taken, spoke at one of the rallies. She claimed that there were scientists within New Zealand research institutes and universities who felt that there was too much emphasis on GM research, and she knew some [who were] "totally opposed to the lifting of the moratorium and others who are unhappy about GE but see it as inevitable".
Ongoing issues
Prior to the 2002 general election, the book Seeds of Distrust was published, highlighting possible contamination of imported corn seed with GMO seeds. During the election campaign the book caused considerable friction between the Labour and Green Parties, in an episode referred to as "Corngate" in the media.
In 2004, the New Zealand Government announced it was going to ratify the Cartagena Protocol on Biosafety. Australian diplomat Alan Oxley quoted Marian Hobbs as saying that New Zealand was doing this to be "a good international citizen" but claimed ratification would undermine the role of the World Trade Organization in protecting New Zealand because the Cartagena Protocol "unequivocally gives parties the right to ban an import of a living modified organism 'without scientific justification'". Alexander Gillespie, however, said that "no such language existed in the protocol...what it does allow is for the implementation of the precautionary approach." He further suggested that "as a signatory to the protocol New Zealand will at least be given the chance to change aspects of the international law that it might not like".
A field trial was established in 2003 for Scion, a Crown Research Institute, to examine the impact of genetically altered Pinus radiata trees on the environment. In January 2008 environmental activists breached the security at the site and damaged 19 of the trees. The media noted that GE Free New Zealand national spokesman Jon Carapiet cautioned that the action could have "caused the spread of contaminated material and harmed New Zealand's green image." Scion's acting chief executive, Elspeth MacRae, said it was "confident no genetically modified material was moved outside the site during the break-in...[and]...there [were] no concerns surrounding contamination". No organisation claimed responsibility, but a spade left on the site had a 'GE-free New Zealand' sticker attached to it.
GE-Free New Zealand asked in January 2009 for the withdrawal of the ten-year consent for field trials on broccoli, cabbage, cauliflower and forage kale because it was claimed there had been "two environmental control breaches at the site by not preventing open flowering and not killing living brassica material after the trial finished...[and a]...Biosecurity NZ spokeswoman said yesterday the incident was regarded as serious non-compliance and it was reviewing a range of enforcements".
New Zealand citizens took part in the first global protest against genetically modified food in May 2013. The protest, known as the 'March against Monsanto', was in opposition to the genetically modified food produced by Monsanto. Held across the country, the marches drew attention to Monsanto selling genetically-engineered seeds which were claimed to "resist insecticides and herbicides, add nutritional benefits or otherwise improve crop yields and increase the global food supply".
An Environment Court decision in December 2013 empowered local councils to put policies in place around GMOs in land-based activities, allowing communities to make submissions and raise concerns about the issue in all New Zealand local authorities.
When a group of Nobel Prize winners sent an open letter to Greenpeace in 2016 urging it to "end its opposition to genetically-modified food, in particular a new rice which has the potential to reduce disease in third-world countries", several New Zealand scientists came out in support of the letter. Peter Dearden, Professor and Associate Dean (Research) at the University of Otago, held that "it is time for us to stop believing that all GM is bad and to see that the benefits can far outweigh the risks," and Barry Scott, Professor of Molecular Genetics at Massey University, suggested the report showed how the extreme view of Greenpeace could be challenged by "new technologies associated with gene and genome editing...given changes can now be made to the genome that are similar to those made by non-GM methods such as radiation treatment".
A 2017 paper by a researcher at Lincoln University explored the implications of the controversy around genetic modification for research practices and risk management policies. The point was made that although outdoor research on contained GMOs was deemed safe under the HSNO Act, public opposition to it resulted in most research being conducted indoors. This required a change of focus to identifying and managing the significant risks associated with GMO research in indoor containment facilities. The author held that because of the overt focus on risks associated with outdoor research, little attention was paid to those in indoor facilities, and wondered whether people understood this when debating the issue of GE research. The conclusion was that [it]..."is likely that the complexities of this situation will remain hidden as long as the use of outdoor containment facilities remains central to the controversy over GMO research in New Zealand".
Calls for review
In 2018 a paper in the journal Frontiers in Plant Science made the case that New Zealand's economy was led by the export of plant-based commodities and that well-managed integration of gene editing into plant breeding programmes could have potential benefits, [suggesting]..."gene editing offer[ed] the potential to produce a step change in NZ primary industry productivity, biosecurity and speed of innovation". The authors acknowledged why the New Zealand Government took a cautious approach to regulating gene editing but suggested that it "prevents rapid implementation of non-transgenic gene editing", which may be an effective innovation solution necessary to keep the country competitive in international markets. The paper concluded: "The three largest importers of NZ primary products, China, Australia and USA all currently grow GM crops and Australia and China seem likely to follow the lead of USA in not regulating gene edited crops."
Peter Gluckman, a former science advisor to the New Zealand Government, suggested in 2018 that a reconsideration of genetic engineering was "long overdue...[and]...the issue needs re-addressing because there have been significant developments over the last 15 years". In the same piece a range of views were shared. Jon Carapiet, the national spokesman for GE-Free New Zealand, urged caution and suggested that while industries can respond "creatively" to regulation, others are "driven by deliberate ignorance to practical proven alternatives such as climate-smart agri-ecology". Bruno Chambers, chair of the Hastings District Council, which had a 10-year moratorium in place on genetically modified crops, said he had an open mind about possible benefits from the technology but believed the New Zealand brand in the market would be compromised if the country's crops were not GM-free. Andrew Allan, professor of plant biology at Auckland University, cautioned against a lost opportunity, concluding "without the ability to use gene-editing, New Zealand will be prevented from growing food that is better for the environment and our industries will fall behind our trading partner and competitors". AgResearch principal plant biotechnology scientist Greg Bryan held that the ryegrass that had been developed "could transform farming by reducing its environmental footprint and improving animal productivity". The Prime Minister's chief science advisor, Professor Juliet Gerrard, said in July 2019 that the legal and regulatory frameworks around genetic modification did not take new technologies into account, and Environment Minister David Parker agreed that some of the new GMO techniques might not be covered by the HSNO Act and said he was seeking advice before making changes. Agriculture Minister Damien O'Connor said at the time that New Zealand needed to have a "sensible, mature conversation" about the opportunities genetic engineering could bring to the country.
Gene editing from a Māori perspective was explored in an article published by Frontiers Media in 2019, at a time when the New Zealand Government was convening a public consultation process to consider possible changes to the regulations around gene editing technologies. The study showed Māori held strong views about genetically modified organisms and were informed on the effects of biotechnology by cultural values such as whakapapa, mauri (life force), mana, and kaitiakitanga, which provide a "cultural scaffold for considering the philosophical, moral, ethical and technical dimensions relevant to the use of gene editing technologies". The paper cautioned that gene-editing technologies can "prioritize commercial interests over community benefit", resulting in "societal sensitivities about inequities", but concluded that "the participants in this study wanted to engage in a constructive discussion to create a robust regulatory framework that addresses gene editing on a case-by-case basis and utilizes Māori values within the decision-making process".
In 2019 a group of New Zealand scientists called for a full review of the country's laws related to genetically modified organisms claiming there was now "scientific consensus about the safety of GM crops". Work done by the Royal Society Te Apārangi in the same year also took the position that there needed to be an urgent review of gene editing in New Zealand. The panel co-chair David Penman said that New Zealand [needed] "to have its own perspective given our unique cultural heritage and environment, the special challenges we face in maintaining our biodiversity and a viable and productive primary industry, and our unique regulatory environment". The Society produced a series of scenarios that identified and assessed the possible risks and benefits of gene editing technologies in the primary sector in New Zealand. The case studies considered how the technology could be used "within and outside the human food chain...[and with]...agricultural plants and animals". A further paper, Gene Editing Legal and Regulatory Implications, presented six recommendations if New Zealand was to have a regulatory framework that recognized the principles of Te Tiriti o Waitangi, met its ethical obligation as a global citizen, and reflected strong relationships between New Zealand industries, research communities and local and central government.
By 2020 there were calls for the New Zealand Government to appreciate the role of biotechnology, including some genetic engineering, in addressing the issues of climate change, in particular the reduction of emissions. One of a series of recommendations in a report by Biotech New Zealand was to "increase public discussion and understanding of genetic modification and its various methods, their safety and practical application...[and]...undertake review of regulations relating to biotechnology and genetics". The document also noted that forty-eight percent of the country's greenhouse gas emissions were from agriculture, but that there were important developments in the use of biotechnology tools that could mitigate some of this. The report identified research by AgResearch to develop a genetically modified ryegrass that "strikes a balance between reductions in greenhouse gas emissions, greater tolerance to drought and farm productivity", and work by the same organisation on white clover using genetic transformation to "increase the levels of condensed tannins (CT) as they are highly desirable in forage as they sequester dietary protein and reduce bloat and methane emissions in ruminants". Modelling from the study also predicted that less nitrogen would be excreted into the environment by animals feeding on the ryegrass, and therefore less leaching of nitrate and lower emissions of nitrous oxide. This work by AgResearch had been discussed in 2019 in an article claiming that it reflected the progress New Zealand was making in the area of genetically modified crops that were likely to reduce emissions. A principal scientist working on the ryegrass project said it would "produce 23% less methane from dairy livestock it feeds". An Auckland academic took the position that the possibility of genetic modification helping New Zealand respond to climate change was a key driver in shifting the debate about the issue. The contention was that some genetically modified plants could be better suited to rising temperatures, and that grasses produced using this technology reduced emissions from animals that eat them.
The author concluded: "the climate crisis is here, and we and our primary industries will need all the science-based tools we have to fight against, and survive, rising temperatures".
Political responses
Labour government
The New Zealand Government authorised the New Zealand Productivity Commission to investigate factors that could be inhibiting or detracting from the productivity of firms in the country, and the Commission's report in April 2021 identified a full review of the regulation of genetic modification as key to enhancing innovation. Specifically, the report noted [that] "timely access to new plant genetic material is critical for New Zealand's primary sector to retain and build its competitive advantage in international markets". In his response to the report, Stuart Nash, the Economic Development Minister, did not mention food directly, but acknowledged the importance of [supporting] "internationally-focussed growth and innovation...[and retaining]...links to global research, science and technology". The release of the Commission's report prompted debate in the New Zealand media. Radio New Zealand detailed the recommendations of the report, particularly those aimed at removing constraints to innovation in line with new techniques reflecting the precision of gene editing. The article noted that the Ministry for the Environment had also stated the "regulatory settings were quickly becoming outdated and hard to enforce in 2018, referring to a 2014 court decision that adopted a strict definition of the not-GM regulations", and that there was a move amongst countries toward less regulation based on scientific risk assessments. Another commentator claimed that despite the Commission's report there was little public interest in changing the regulations, although there were still two sides to the argument, from defending the legislation because it controlled the spread of GMOs to modifying it to help cut agricultural emissions. An item in Newshub on 25 June 2022 quoted a scientist who said he was frustrated that it was not possible to use new genetic technology to explore options with plants.
The article also put the position of those who claim that genetic modification would not be effective in lowering emissions, citing Steve Abel from Greenpeace who said: "the time frames that it takes to develop these technologies and test them and prove them are not the timeframes we have. We need to act now on what we know will address the problem of climate change".
Emily King, a former environmental lawyer and author of Re-food, urged a continuation of the cautious approach New Zealand had taken to regulation of gene modification and editing. In her book, King advocates for a "food systems approach...to consider the full process of getting food from farm to table" as a context for reducing emissions during food production, noting that "while farmers and growers create emissions, manufacturers and consumers do too through food waste".
Research by John Caradus in 2022 assessed the risks, opportunities and impacts of using GM crops, concluding: "GM crops provide considerable benefits and are a valuable option that needs to be employed to solve many of the current challenges facing mankind and as a result improve not simply economic outcomes but also the environment. GM technologies like many non-GM technologies can bring risks, but these can be monitored and quantified and allow decisions to be made about commercial, societal and environmental benefits versus real risks."
A spokesperson for one pro-GM New Zealand organisation, BioTechNZ, said public attitudes toward GM might have shifted, but GE-Free NZ questioned the motives for a review and feared it could lead to deregulation, putting people and nature at risk. Andrew Hoggard from Federated Farmers suggested the agriculture sector needed to partake in research being done elsewhere in the world because there were "big gains for them and the environment." The Environment Minister David Parker clarified that the debate would be "restricted to just medicines", saying there was still a suspicion around genetically-modified food.
Parker confirmed with Newshub on 26 June 2022 that the government was going to review genetic modification regulations to see if they aligned with new biomedical and laboratory research. He said the goal was to make research easier but there was no intention to change regulations around the release of genetically modified organisms into the environment. In February 2023 Science New Zealand stressed the importance of New Zealand having an informed debate about genetic technologies to ensure that any regulations to control it are developed from an informed approach to both the risks and benefits. The article suggested that modern gene-editing tools can potentially more quickly develop "new varieties of plants providing sustainable and nutritious food or organisms that grow the materials needed for a sustainable low-carbon bioeconomy", but the current legislation makes research expensive for CRIs. The paper acknowledged that the Prime Minister’s Chief Science Adviser has noted that there is a need for different approaches to a "spectrum of genetic modification technologies", and concluded: "In the view of the Crown Research Institutes, it is time to...consider how New Zealand may appropriately take advantage of new knowledge to advance the wellbeing of the people and the country."
Tony Conner, New Zealand biochemist and geneticist, made the case for a review of the regulations to allow field testing and the release of GM crops. He claimed that there were significant benefits from industries in the country growing genetically modified crops, including more nutritious food with a longer storage life, less waste from fruit and vegetables with "fewer blemishes from pest and disease damage", higher yields for producers and the development of plants that could reduce greenhouse gas emissions and be "better adapted to the forthcoming constraints of climate change (e.g. winter chilling for fruit production)".
On 27 June 2023 the New Zealand Government announced a consultation process to get feedback on proposed changes to the regulations around genetically modified organisms. David Parker said the changes would remove barriers to laboratory research but would not change rules for field trials or regulations governing the release of GMOs such as plants or animals into the environment. Ten proposed changes were identified for consideration during the consultation, which began on 3 July 2023 and closed on 25 August 2023. The response to the proposed changes was generally positive, but one scientist did question the rationale for allowing the use of genetic modification for medical purposes but not to "make genetically-modified foods, or even give genetically-modified grass to cows".
As New Zealand approached a general election scheduled for October 2023, gene editing and genetic modification became a political issue in the media. Denise Conroy, a programme leader at Plant & Food Research told Kathryn Ryan on Radio New Zealand that focus groups her organisation were running showed that people in the country wanted to get clear information about gene editing and genetic modification as the debate had only recently become a discussion in the media, and they "didn't have the tools or knowledge to make informed decisions". Conroy said people understood the technology could be beneficial, but needed reassurance that the benefits were "much more convincing than the perceived risks". Surveys conducted by Plant & Food Research showed that consumers in Australia and New Zealand had similar levels of acceptance of food produced using genetic engineering, with "about 43 percent willing to purchase this type of produce".
Coalition government
On 13 August 2024 Minister of Science, Innovation and Technology Judith Collins announced that the National-led coalition government would end the ban on gene technology outside of the laboratory, removing "restrictive rules and time-consuming processes...[bringing]...New Zealand up to global best practice and ensure we can capitalise on the benefits". Prime Minister Christopher Luxon said legislation and "a dedicated new regulator [would] oversee the new technology to ensure it is used safely". One media commentator explained a range of activities that the changes would enable, noted public responses from an earlier poll and summarised counter-arguments. Political responses were mixed. Speaking on behalf of the Act Party, Parmjeet Parmar claimed the move would mean the "brightest scientific minds will be freer to make advancements that will lift human flourishing, improve environmental outcomes, and create major commercial opportunities", while in the same piece Labour's Deborah Russell urged the government to be transparent as the change was "new territory". Steve Abel put the Green Party's position "that a wide-ranging and robust public discussion is required about scientific developments in gene-editing and related technology before any changes can be considered to the regulatory framework in the Hazardous Substances and New Organism Act". A report presenting research findings into how New Zealanders perceived genetic technologies for environmental and conservation purposes was published in July 2024. Following the announcement of the Government's plan, the authors backgrounded the research, noting it had been carried out in two streams: the Māori Biodiversity Network engaged with Māori, while social scientists engaged with the general public and interest groups.
The purpose of the research was to gain insights into what "safe and responsible environmental genetic innovation [meant] for New Zealanders", and the researchers concluded that conversations with a diverse community were complex because "discussions about gene technology bring strong reactions based on people’s values and beliefs...[being]...especially pointed when talking about the use of these technologies in conservation, environmental protection and food". All participants discussed the risks and potential of the new technologies and the need for "high levels of regulation and oversight...and continuous research, particularly in contained environments, to monitor and evaluate the impacts of genetic technology". Issues of trust and who might control the technologies were raised in both groups.
In November 2024, Organics Aotearoa New Zealand (OANZ) raised concerns about the proposed reforms to regulate genetic engineering. OANZ chief executive Tiffany Tompkins said that the proposed changes were not likely to be adopted by New Zealand's major trading partners, [making the country] "an international outlier, risking environmental and economic consequences", and claimed that after thirty years of research, "the supposed benefits of GE have not been realised, and its risks remain unresolved". Mark Patterson, in his capacity as Minister for Rural Communities, agreed that OANZ should have been engaged with earlier because of the importance of organic farmers and growers in New Zealand's primary sector, and Judith Collins explained that the legislative changes were consistent with regulations in other countries. In the same piece, it was noted that the Ministry of Business, Innovation and Employment (MBIE) had provided assurance to the organic sector that there would be a full risk assessment and public consultation before licences were confirmed. OANZ, however, questioned the government's "economic benefits data", remaining concerned that any rewards might not outweigh the risks.
References
External links
GE-Free New Zealand: Media articles
Genetic engineering by country
Genetic engineering in New Zealand
Natural bundle

In differential geometry, a field in mathematics, a natural bundle is any fiber bundle associated to the s-frame bundle F^s(M) for some s. It turns out that its transition functions depend functionally on local changes of coordinates in the base manifold together with their partial derivatives up to order at most s.
The concept of a natural bundle was introduced by Albert Nijenhuis as a modern reformulation of the classical concept of an arbitrary bundle of geometric objects.
Definition
Let Mf denote the category of smooth manifolds and smooth maps and Mf_m the category of smooth m-dimensional manifolds and local diffeomorphisms. Consider also the category FM of fibred manifolds and bundle morphisms, and the base functor B : FM → Mf associating to any fibred manifold its base manifold.
A natural bundle (or bundle functor) is a functor F : Mf_m → FM satisfying the following three properties:
B ∘ F = id, i.e. F(M) is a fibred manifold over M, with projection denoted by p_M : F(M) → M;
if U ⊆ M is an open submanifold, with inclusion map i_U : U → M, then F(U) coincides with p_M^{-1}(U), and F(i_U) is the inclusion p_M^{-1}(U) → F(M);
for any smooth map f : P × M → N such that f_p := f(p, ·) : M → N is a local diffeomorphism for every p ∈ P, then the function P × F(M) → F(N), (p, x) ↦ F(f_p)(x), is smooth.
As a consequence of the first condition, one has a natural transformation p : F ⇒ id given by the projections p_M.
Finite order natural bundles
A natural bundle F is called of finite order r if, for every local diffeomorphism f : M → N and every point x ∈ M, the map F(f)|_{F_x(M)} : F_x(M) → F_{f(x)}(N) depends only on the jet j^r_x f. Equivalently, for every pair of local diffeomorphisms f, g : M → N and every point x ∈ M, one has

j^r_x f = j^r_x g  ⟹  F(f)|_{F_x(M)} = F(g)|_{F_x(M)}.

Natural bundles of order r coincide with the fibre bundles associated to the r-th order frame bundles F^r(M).
A classical result by Epstein and Thurston shows that all natural bundles have finite order.
Examples
An example of a natural bundle (of first order) is the tangent bundle TM of a manifold M.
Other examples include the cotangent bundles, the bundles of metrics of signature (r, s) and the bundle of linear connections.
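To illustrate why the tangent bundle has order 1 (a standard computation added here for clarity, not taken from the original text): the action of a local diffeomorphism on tangent vectors is given by its Jacobian, so it depends only on first partial derivatives.

```latex
% For a local diffeomorphism f : M -> N, the tangent functor T acts on fibres
% by the differential
T f \big|_{T_x M} \;=\; d_x f \;:\; T_x M \longrightarrow T_{f(x)} N ,
% which in local coordinates (x^1, \dots, x^m) reads
(T f)(x, v) \;=\; \Bigl( f(x), \; \tfrac{\partial f^i}{\partial x^j}(x)\, v^j \Bigr).
% This expression involves only the first partial derivatives of f, i.e. only
% the 1-jet j^1_x f, so T is a natural bundle of (finite) order 1.
```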
Notes
References
Differential geometry
Manifolds
Fiber bundles
NGC 3228

NGC 3228 is an open cluster in Vela. It was discovered by Nicolas Louis de Lacaille in 1751–1752 while he was in South Africa, and he catalogued it as Lac II.7. It is small but bright and can be observed easily with binoculars in sufficiently dark skies.
It is a cluster of Trumpler type I1p or II3p, with few members, a large brightness range and a slight concentration toward its center. Kharchenko et al. mention 53 possible members within the angular diameter of the cluster. The tidal radius of the cluster is 1.4–5.5 parsecs (4.5–18 light years) and represents the average outer limit of NGC 3228, beyond which a star is unlikely to remain gravitationally bound to the cluster core. The brightest member is of mag. 7.9 and the hottest star is of spectral type B9. One member, HD 89856 (mag. 9.04, spectral type B9), is a variable star with a period of 4.556 days.
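For background (a standard approximation, not taken from the studies cited above), a cluster's tidal radius can be estimated by treating the Galaxy as a point mass M_G at the cluster's galactocentric distance R_G, which gives the Jacobi/Hill form:

```latex
r_t \;\approx\; R_G \left( \frac{M_c}{3\,M_G} \right)^{1/3}
```

where M_c is the cluster mass. A star wandering beyond r_t is pulled more strongly by the Galaxy than by the cluster, which is why r_t marks the effective outer boundary of the cluster.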
References
External links
3228
Vela (constellation)
Open clusters
Kepler-124b

Kepler-124b is an extrasolar planet discovered in 2014. It orbits the unclassified star Kepler-124 in the constellation Cygnus. Within the Kepler-124 system (KOI-241) there are three known planets, Kepler-124b being both the smallest and closest to its parent star.
Characteristics
Kepler-124b orbits the star Kepler-124. Both Kepler-124b and its host star are smaller than our own planet and star, respectively; Kepler-124b is estimated to have a radius of 0.729±0.045 Earth radii (0.065±0.004 Jupiter radii), and its parent star Kepler-124 is estimated to have 68.7% of the mass of the Sun and a radius of approximately 0.636±0.030 solar radii.
It is the smallest discovered planet in the Kepler-124 system, and has the closest orbit of the three known planets. Kepler-124b orbits 96% closer to its star than Earth (approximately 3 Earth days), which in the Kepler-124 system is inside the inner limit of the star's habitable zone.
Discovery
Like many exoplanets discovered by the Kepler telescope, Kepler-124b was found using the transit method, in which Kepler's sensitive photometer detects slight, periodic dips in the brightness of an observed star. These dips can indicate the presence of a planet and allow certain of its parameters to be determined. Kepler-124b was initially only a planet candidate but was later confirmed as an exoplanet; a statistical analysis by a team at NASA Ames Research Center validated the existence of Kepler-124b with 99% confidence, along with Kepler-124c and Kepler-124d. Although scientists are very confident about some of Kepler-124b's parameters, many are still unknown.
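The principle behind the transit method comes down to simple arithmetic: during a transit, the fractional dip in the star's flux is approximately the square of the planet-to-star radius ratio. A minimal sketch using the radii quoted above (the function and constants here are illustrative, not part of the actual Kepler pipeline):

```python
# Transit-depth arithmetic behind the transit method (illustrative sketch).
# The fractional dip in flux during a transit is roughly (Rp / Rs)**2.

EARTH_RADIUS_KM = 6371.0
SUN_RADIUS_KM = 695_700.0

def transit_depth(planet_radius_earth: float, star_radius_solar: float) -> float:
    """Fractional flux dip for a planet crossing its star's disk."""
    rp_km = planet_radius_earth * EARTH_RADIUS_KM
    rs_km = star_radius_solar * SUN_RADIUS_KM
    return (rp_km / rs_km) ** 2

# Values quoted for Kepler-124b: 0.729 Earth radii around a 0.636 solar-radius star.
depth = transit_depth(0.729, 0.636)
print(f"Expected transit depth: {depth:.2e}")  # on the order of 1e-4, i.e. ~0.01%
```

A dip this small is why space-based photometry was needed: ground-based telescopes struggle to measure a 0.01% change in brightness reliably.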
References
Exoplanets discovered by the Kepler space telescope
Cygnus (constellation)
Exoplanets discovered in 2014
Transiting exoplanets

Joanna Maria Vandenberg
https://en.wikipedia.org/wiki/Joanna%20Maria%20Vandenberg

Joanna (Joka) Maria Vandenberg (born 1938) is a Dutch solid state chemist and crystallographer who immigrated to the United States in 1968. At Bell Telephone Laboratories, she made a major contribution to the success of the Internet: she invented, developed, and applied the X-ray scanning tool for the quality control essential to manufacturing indium gallium arsenide phosphide-based multi-quantum-well lasers. These lasers amplify and modulate the light that travels through the optical fibers at the heart of today's Internet.
Early life
Joanna Vandenberg was born January 24, 1938, in Heemstede, a small town near Amsterdam, where she was the youngest of a family of five and the first to go to college. Her family was in the tulip business. In 1956 she graduated cum laude from gymnasium-β and went to the State University of Leiden in the Netherlands, where she received a B.S. in Physical Sciences and Mathematics (1959) and an M.S. in Inorganic and Solid State Chemistry with A. E. van Arkel, as well as Theoretical Chemistry (1962). She studied with van Arkel in Leiden and Caroline H. MacGillavry in Amsterdam for a Ph.D. thesis on X-ray diffraction analysis of metal–metal bonding in inorganic compounds (1964).
Career
She worked for four years (1964–1968) at the Royal Dutch Shell laboratory in Amsterdam, where she joined the research group on catalytic properties of transition-metal layered chalcogenides. In 1968 she moved to Bell Laboratories, where she continued work on structural and magnetic properties of transition-metal chalcogenides. Her career was interrupted when she was laid off seven months into her first pregnancy. She was rehired in 1972 after the AT&T operators won a historic class-action lawsuit for being fired when pregnant. With Bernd Matthias of UCSD, she started to work on metal cluster formation in superconducting ternary transition-metal compounds. Her extensive knowledge of structural inorganic chemistry enabled her to predict inorganic crystal structures and led to the discovery of the superconducting rare earth ternary borides.
In 1980 she changed direction and began research on contact metallization on InGaAsP/InP multi-quantum-well layers used in high-speed digital lasers for the Internet. She designed a temperature-dependent in-situ annealing X-ray diffractometer. This technique made it possible to optimize the electrical behavior of the gold metallization contacts and became a standard reference in the semiconductor industry.
In 1986 Vandenberg turned her attention to quality control of the crystal growth of InGaAsP multi-quantum-well (MQW) layers, used as laser light sources and optical modulators designed to work in the 1.3 to 1.55 μm wavelength range. Advancing the design, performance, and manufacturability of these devices had been the focus of all the leading optical component suppliers for decades. The devices are manufactured using organometallic vapor phase epitaxy, a complex process involving multiple sources subject to drift. Manufacture of early devices suffered from unacceptably low (much less than 1%) end-to-end yields, and dramatic improvement was needed to produce the high-performance components used to transport the massive amounts of data in today's Internet. In many cases monolayer thickness control is required, along with bandgap variations of less than 0.5%. This high level of quality control must be achieved using complex crystal growth machines which can fail in hundreds of ways. To ensure that these multiple failure modes do not impact the final device, Vandenberg designed a one-room (later bench-top) non-destructive high-resolution X-ray diffractometer to provide immediate on-line feedback into the MQW growth process. She constructed robust algorithms linking X-ray features to the layer thickness and strain information essential to crystal growth control and optoelectronic device performance. Her X-ray diffraction technique is used to scan every laser wafer many times during manufacture. All Internet lasers are now manufactured using her X-ray diffraction tool, and their operational lifetime exceeds 25 years.
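The kind of link between X-ray features and layer thickness that such algorithms encode can be illustrated with the standard superlattice relation: satellite peaks around a Bragg reflection are spaced by Δθ ≈ λ / (2Λ cos θ), so the MQW period Λ follows directly from the measured peak spacing. The sketch below uses this textbook relation with illustrative numbers; it is not Vandenberg's actual proprietary algorithm:

```python
import math

# Superlattice period from X-ray satellite-peak spacing (textbook
# high-resolution XRD relation, with illustrative inputs):
#   delta_theta ~ wavelength / (2 * period * cos(theta))
# so period ~ wavelength / (2 * delta_theta * cos(theta)).

CU_K_ALPHA_NM = 0.15406  # Cu K-alpha1 X-ray wavelength, in nanometres

def mqw_period_nm(bragg_angle_deg: float, satellite_spacing_deg: float) -> float:
    """Estimate the multi-quantum-well period from adjacent satellite spacing."""
    theta = math.radians(bragg_angle_deg)
    dtheta = math.radians(satellite_spacing_deg)
    return CU_K_ALPHA_NM / (2.0 * dtheta * math.cos(theta))

# Illustrative numbers: satellites 0.25 degrees apart around a 31.7 degree reflection
# give a period of roughly 20 nm.
period = mqw_period_nm(31.7, 0.25)
print(f"MQW period of roughly {period:.1f} nm")
```

Because the satellite spacing shifts measurably when the growth rate drifts by even a fraction of a monolayer per period, a relation like this gives the fast, non-destructive feedback signal the text describes.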
Awards
Vandenberg received the Optoelectronics Award in 1995 and 1997 in recognition of her contributions to the development of characterization and process-control routines for the manufacture of Lucent's world-class semiconductor lasers.
She is a fellow of the American Physical Society and a corresponding member of the Royal Netherlands Academy of Arts and Sciences.
Selected publications
References
20th-century Dutch chemists
Dutch women chemists
Leiden University alumni
Shell plc people
Bell Labs
Crystallographers
Fellows of the American Physical Society
Members of the Royal Netherlands Academy of Arts and Sciences
20th-century Dutch women scientists
1938 births
Rare earth scientists
Living people
Women inventors

Blind (app)
https://en.wikipedia.org/wiki/Blind%20%28app%29

Blind is an app that provides an anonymous forum and community where verified employees can discuss workplace issues. Users on Blind are grouped by topic, company, and broader industry. The app verifies through work email addresses that registered users actually work at the company, and claims to keep user identities untraceable. That claim has been disputed, however, on the grounds that Blind is closed source and has ties to South Korea, whose controversial defamation laws criminalise defamation even by factual information under Article 307(1).
Blind was founded in 2013 by Sunguk Moon and Kyum Kim. In 2014, it initially launched in South Korea, followed by the U.S. in 2015. The company is based in San Francisco, California.
The app has been in the news in multiple cases, notably when its anonymous surveys reveal the frank opinions of employees across industries. However, it is also used for more mundane discussions about everyday topics such as salaries.
According to its app pages on the iOS App Store and Google Play, it has registered employees from over 83,000 companies. According to Forbes, the app is being used worldwide and is influencing corporate decisions by giving executives information about employees' concerns.
Employees from various companies have provided their input on situations at their workplace through the app's surveys and chats, including the Korean Air vice-president "nut rage" sacking incident, the Uber sexual harassment claims, the Google memo, and Amazon employment conditions and problems.
References
Further reading
External links
Software companies of South Korea
Android (operating system) software
iOS software
Anonymous social media
Business chat software
Mobile instant messaging clients

NGC 5281
https://en.wikipedia.org/wiki/NGC%205281

NGC 5281 is an open cluster in the constellation Centaurus. It was discovered by Nicolas Louis de Lacaille in 1751–1752 from South Africa, who catalogued it as Lacaille I.7. NGC 5281 is located three and a quarter degrees southwest of Beta Centauri. Under dark skies it is bright enough to be spotted with the naked eye, appearing as a 6th-magnitude star.
Characteristics
The four bright stars of the cluster form a striking line as seen from Earth; however, the cluster is sparsely populated. The brightest member of the cluster is of magnitude 6.61. The next two brightest stars have evolved away from the main sequence. The turn-off mass of the cluster is estimated to be 5.6 solar masses. Based on the colour–magnitude diagram, the age of the cluster is estimated to be 45 million years. The tidal radius of the cluster is 5.5–8.4 parsecs (18–27 light years) and represents the average outer limit of NGC 5281, beyond which a star is unlikely to remain gravitationally bound to the cluster core. The radius of the core of the cluster is about 4.3 light years, nearly the same as the distance between the Sun and the closest star system, Alpha Centauri. Within the angular radius of the cluster there are 371 probable members.
One of the members of the cluster is HD 119682 (mag. 7.97, spectral type B0.5V), a Be star notable for its X-ray emission. It has been categorised by Moffat & Vogt (1973), Mermilliod (1982), and Safi-Harb (2007) as a blue straggler, and it has also been categorised as a gamma Cassiopeiae analog. HD 119682 has been identified as the visual counterpart of the X-ray source 1WGA J1346.5-6255, found within the radio lobes of the supernova remnant G309.2-00.6, located 4 ± 2 kpc away, to which it is unrelated. The light curve of the star in X-rays shows significant brightness variations within hours; however, the spectral distribution appears rather stable. The spectrum obtained by the High Energy Transmission Gratings on board the Chandra X-ray Observatory seems to lack strong emission lines, including Fe Kα fluorescence.
References
External links
5281
Centaurus
Open clusters

Laurie Brokenshire
https://en.wikipedia.org/wiki/Laurie%20Brokenshire

Commodore Laurence Phillip Brokenshire CBE (20 October 1952 – 4 August 2017) was a Royal Navy officer, magician, and world-class puzzle solver. He and his wife also fostered more than 70 children over 22 years.
History
Laurie Brokenshire was born on 20 October 1952 at 40 Amherst Road, Plymouth to Martin Brokenshire (1926–97) and his wife Pansy Jeanne (née Hewitt; 1930-2007). He had a younger sister, Lynnette, and a younger brother, Adrian. His early hobby interests included chess, puzzles and magic. In 1964, he joined Devonport High School for Boys.
In 1966, following the completion of his father's naval career, the family moved to Slough, where he joined Slough Grammar School, now called Upton Court Grammar School. He played junior hockey for Buckinghamshire. In later years, he managed the school chess club and, jointly, the school bridge club (which notably beat Eton College on one occasion).
After school, he went to the University of Exeter (1971–75) where he took a BSc (Hons) degree in Mathematics, graduating in 1974, and a PGCE (Maths) in 1975. He played hockey and table tennis for University teams, and turned down the offer of a place in the University bridge team. During this time, he beat his Head of Department, Professor David Rees, at both chess and, at Rees' insistence, Go.
Royal Navy
In 1975, Brokenshire joined the Royal Navy as an Instructor - his father's career and service branch. He spent some time as Maths Instructor at HMS Fisgard, an Artificer apprentice training establishment, Torpoint, E. Cornwall. After training at Royal Naval College, Greenwich, and subsequent postings to Dartmouth, Westminster, Plymouth, Portsmouth and Faslane, his career developed as a submariner and later as a senior Royal Navy officer. His success on the Greenwich course encouraged him to take a second degree, this time an Open University BA degree in Science. As 2007 Royal Navy chess champion and President of the Combined Services Chess Association, he represented the RN at the NATO Chess championship several years running, creating for himself an international standing in military chess.
In later years, he commanded two shore establishments: firstly Northwood (1992–93) and later, as a Commodore, HMS Raleigh (2000–03) - the Navy's main Torpoint training centre - the family lived at nearby Trevol House. In 2003, on the occasion of his retirement, the family moved back to their house in Stubbington, Hampshire, and he was awarded a CBE for services rendered.
Sea Cadets
Following the end of his Royal Navy career, Brokenshire was appointed as Commodore of the UK Sea Cadet Corps. As such, he toured and inspected as many local associations as he could. On one such visit to Essex, he met his 6th cousin and fellow Exeter-graduate, the local MP James Brokenshire (1968-2021), and remained in regular contact.
Magic
Brokenshire was accepted into the Inner Magic Circle, and became an occasional professional / semi-professional magic performer. He was regularly used by his charities as a high-profile magic performer, in particular, performing table magic for members of the British Royal Family at various charitable occasions. He was always able to find a suitable magic trick for any occasion, particularly for young children, and carried his "magic" bag with him at all times.
Puzzling
In his spare time, Brokenshire became a world-class puzzling expert. Specialising in combinatorial and mechanical puzzles, he was in regular contact with puzzle researchers, designers, makers, enthusiasts and other specialists around the world. He introduced some novel solutions to existing problems, and was exceptionally quick to solve new problems. He was retained by a number of major puzzles companies as a consultant to offer an assessment on the viability of proposed puzzles. His personal puzzle collection was considered among the largest in the UK. He organised and held G4G Celebration of Mind meetings at his house.
Historically, the famous puzzlist Henry Dudeney (1857–1930) posed a particularly difficult chessboard (checkerboard) dissection puzzle in one of his first puzzle books and asserted that it had a "unique", or single, solution. Numerous people tried and failed to solve this puzzle, and it became famously known as the "Dudeney Problem". Some years later, true to his word, Dudeney gave his "unique" solution in one of his last puzzle books. Later still, Dudeney's former collaborator, Sam Loyd (1841–1911), asserted his own abilities, and disproved Dudeney's claim of uniqueness, by giving a second "unique" solution in one of his puzzle books. The same problem then became known as the "Dudeney-Loyd problem", and is today classified as the "Dudeney-Loyd-Brokenshire problem", because Brokenshire found the third "unique" solution after a gap of about 100 years. Substantial further analysis has shown that there were only ever three solutions, despite Dudeney's original claim. For this solution and others, Laurie Brokenshire is present in the puzzling record books, and is in illustrious company.
International Puzzle Parties
Brokenshire and his wife, Ethel, camped and bicycled the length of several continents to reach successive invitation-only annual International Puzzle Parties (IPP). Taking two bicycles, two panniers and a magic bag, they cycled and either wild-camped or stayed with friends along the Eastern coast of Australia, around the North Island of New Zealand, Japan, Europe and Scandinavia, and various routes across the US from Alaska to Washington, D.C. On one trip, Ethel automatically swatted a bear, which was nuzzling the side of their tent in the early hours of the morning, without suffering any ill effects. On another, he contracted viral encephalitis from an infected tick bite.
He organised and hosted the 2014 IPP34 puzzle party, based at a hotel near Heathrow.
Fostering
From 1994, Brokenshire and his wife, Ethel, undertook fostering in Hampshire. They successfully fostered over 70 children in 22 years.
Religion
Brokenshire was a member of the Navy Christian Fellowship and was a pillar of his local Church.
Sea swimming
In 1986, Brokenshire swam the English Channel. Subsequently, his son Matthew has also swum the Channel - making them one of the few father and son pairs who have achieved this feat. In later years, Brokenshire enjoyed sea swimming throughout the year with his local "Shack Sharks" club, and represented his locality at cold water swimming competitions up and down the country.
Personal life
Brokenshire married Ethel Isobel McMahon (born 1954), a member of the WRNS, on 29 March 1980 at Clonallan Parish Church; they had four children and several grandchildren.
He was a member of Mensa.
Illness, death and legacy
In early 2016, Brokenshire was diagnosed with terminal brain cancer, the disease that had claimed his father some 20 years earlier. His response was immediate and typically selfless: his family undertook a 30-mile crowd-sponsored sea swim off Plymouth in aid of various cancer charities, raising the targeted £30,000 in under three weeks and in excess of £45,000 overall.
He had a protracted 18-month fight against his cancer, enabling him to see and interact with his new grandchildren. On 4 August 2017, he died at home surrounded by all his family. On 18 August, following a Thanksgiving service, attended by around 1000 people, at Crofton Church, his body was interred at nearby Crofton Cemetery.
At Upton Court Grammar School, an OPA Memorial Prize for Yr 13 Mathematics is given annually in Laurie Brokenshire's name.
References
External links
1952 births
2017 deaths
Commanders of the Order of the British Empire
English Channel swimmers
People from Slough
Graduates of the Royal Naval College, Greenwich
Royal Navy commodores
Puzzle designers
Recreational mathematicians
English magicians
People educated at Devonport High School for Boys
People educated at Upton Court Grammar School
Alumni of the University of Exeter
Alumni of the Open University
Mensans
English chess players
Burials in Hampshire
Military personnel from Plymouth, Devon
Deaths from brain cancer in England
20th-century chess players

Norman R. Legge
https://en.wikipedia.org/wiki/Norman%20R.%20Legge

Norman Reginald Legge (20 April 1919 – 28 March 2004) was a Canadian researcher for the Shell Oil Company and a pioneer of thermoplastic elastomers, Kraton in particular.
Personal
Legge was born on 20 April 1919 in Edmonton, Alberta, Canada. He died on 28 March 2004 in Livermore, California.
Education
1942 BSC, Chemistry, University of Alberta
1943 MSC, Chemistry, University of Alberta
1945 Ph.D., McGill University, explosives research during World War II.
Career
Legge worked as a research chemist for Polymer Corporation (later Polysar Corp.) in Sarnia, Ontario (1945–1951). He then moved to Kentucky Synthetic Rubber Corporation in Louisville as Director of Research, and later to Shell Chemical, where he remained until his retirement.
He was a Fellow of the American Association for the Advancement of Science and a member of the American Chemical Society, Rubber Division.
Awards and Recognitions
1987 - Charles Goodyear Medal from the ACS Rubber Division
1992 - IISRP Technical Award from the International Institute of Synthetic Rubber Producers
External links
Audio interview with Norman Legge.
References
1919 births
2004 deaths
Polymer scientists and engineers
Scientists from Edmonton
University of Alberta alumni
McGill University alumni

Topological Hochschild homology
https://en.wikipedia.org/wiki/Topological%20Hochschild%20homology

In mathematics, topological Hochschild homology is a topological refinement of Hochschild homology which rectifies some technical issues with computations in characteristic $p$. For instance, if we consider the $\mathbb{Z}$-algebra $\mathbb{F}_p$, then $HH_*(\mathbb{F}_p/\mathbb{Z})$ is a divided power algebra on a generator $x$ in degree 2, but this ring structure presents a significant technical issue: if we set $x^{[1]} = x$, so that $x^{[i]}\,x^{[j]} = \binom{i+j}{i}\,x^{[i+j]}$, and so on, we have $x^p = p!\,x^{[p]} = 0$ from the resolution of $\mathbb{F}_p$ as an algebra over $\mathbb{F}_p \otimes^{\mathbf{L}}_{\mathbb{Z}} \mathbb{F}_p$. This calculation is further elaborated on the Hochschild homology page, but the key point is the pathological behavior of the ring structure on the Hochschild homology of $\mathbb{F}_p$. In contrast, the topological Hochschild homology ring has the isomorphism $THH_*(\mathbb{F}_p) \cong \mathbb{F}_p[x]$ with $|x| = 2$, giving a less pathological theory. Moreover, this calculation forms the basis of many other THH calculations, such as for smooth algebras over $\mathbb{F}_p$.
Construction
Recall that the Eilenberg–MacLane spectrum functor embeds ring objects in the derived category of the integers into ring spectra over the sphere spectrum (the ring spectrum of the stable homotopy groups of spheres). This makes it possible to take a commutative ring $A$ and construct a complex analogous to the Hochschild complex using the monoidal product in ring spectra, which acts formally like the derived tensor product over the integers. We define the topological Hochschild complex of $A$ (which could be a commutative differential graded algebra, or just a commutative algebra) as the simplicial spectrum given by the bar construction, with the $(n+1)$-fold smash power $HA^{\wedge(n+1)}$ in simplicial degree $n$. Because simplicial objects in spectra have a realization as a spectrum, we form the spectrum $THH(A)$ as this realization, whose homotopy groups define the topological Hochschild homology of the ring object $A$.
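Concretely, the simplicial spectrum described above is the cyclic bar construction; the original display was lost to formatting, so the standard presentation from the literature is reproduced here:

```latex
% Cyclic bar construction: in simplicial degree n the spectrum is the
% (n+1)-fold smash power of HA, with face maps multiplying adjacent
% factors (cyclically for the last one) and degeneracies inserting units.
\[
  THH(A) \;=\; \bigl|\, [n] \mapsto (HA)^{\wedge (n+1)} \bigr|,
  \qquad
  d_i(a_0 \wedge \cdots \wedge a_n) \;=\;
  \begin{cases}
    a_0 \wedge \cdots \wedge a_i a_{i+1} \wedge \cdots \wedge a_n, & 0 \le i \le n-1,\\[2pt]
    a_n a_0 \wedge a_1 \wedge \cdots \wedge a_{n-1}, & i = n.
  \end{cases}
\]
\[
  THH_*(A) \;=\; \pi_*\, THH(A).
\]
```

Replacing the smash product $\wedge$ by the derived tensor product $\otimes^{\mathbf{L}}_{\mathbb{Z}}$ in the same diagram recovers the ordinary Hochschild complex, which is exactly the sense in which THH refines HH.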
See also
Revisiting THH(F_p)
Topological cyclic homology of the integers
Homological algebra
Algebraic topology

STAC-9
https://en.wikipedia.org/wiki/STAC-9

STAC-9 is an experimental drug that was developed by GlaxoSmithKline as a small-molecule activator of the sirtuin subtype SIRT1, with potential applications in the treatment of diabetes.
See also
SRT-1460
SRT-1720
SRT-2104
SRT-2183
SRT-3025
References
Trifluoromethyl compounds
4-Pyridyl compounds
Amides
Pyrrolopyridines
Carboxamides

GosNIIOKhT
https://en.wikipedia.org/wiki/GosNIIOKhT

The State Research Institute of Organic Chemistry and Technology (GosNIIOKhT) is a Russian research institute engaged in the development of chemical technologies for use in the national economy and the production of relevant goods and products.
History
GosNIIOKhT was founded in 1924, during the time of the Soviet Union, to conduct research work in organic synthesis and to be for the Soviet state the umbrella organization for it, below which were arrayed a number of satellite institutes.
From the early 1930s, the research institute was engaged in the development of chemical weapons. Significant numbers of scientists were also assigned to develop anti-crop and anti-animal agents.
GosNIIOKhT employed approximately 6,000 people by the dissolution of the Soviet Union. Its employees worked on nerve agent production in Novocheboksarsk and Volgograd, on blister agent production in Dzerzhinsk, and on testing in Shikhany and in Nukus, Uzbekistan.
The Yeltsin government alarmed the international community by stating that it could not afford to keep the GosNIIOKhT facilities open or their personnel employed, raising the prospect that impoverished scientists would have an incentive to work for nefarious organizations.
By December 1999 the International Science and Technology Center had borne small fruit. In the opinion of one writer, "permitting the ISTC and the other grant programs to sponsor projects that work with Western commercial companies to retool some equipment and kick off the manufacturing of consumer products at these facilities. An advantage to lifting the congressional ban on defense conversion is that the Western commercial partners would have a frequent presence on site—an arrangement likely to foil efforts to produce warfare agents covertly at these facilities. Such an outcome is far preferable to allowing the skilled labor at these facilities to become increasingly destitute and even desperate... Entire segments of poison gas experts have no contact with the [ISTC] grant programs, especially those within the design bureaus that have specialized skills in the aerosolization of agents and their weaponization."
Currently, its activities include the production of chemical weapons and other hazardous materials. Other areas of work include the development and production of drugs, toxicological research, preclinical testing, chemical technology, and environmental safety.
The Navalny affair
On 15 October 2020, European Union sanctions were imposed on the institute in connection with the poisoning of politician Alexei Navalny. The Council of the European Union's grounds for designation states
US sanctions
On 21 March 2021, invoking its authorities under the Countering America’s Adversaries Through Sanctions Act (CAATSA) Section 231, the United States Department of State added GosNIIOKhT to its List of Specified Persons as persons that are part of, or operate for or on behalf of, the defense or intelligence sectors of the Government of the Russian Federation. The Department describes GosNIIOKhT as "a Russian institute with a longstanding role in researching and developing chemical weapons, and GosNIIOKhT developed Russia’s Novichok chemical weapons. Since 2016, GosNIIOKhT has expanded its research, development, testing, and evaluation capabilities."
In addition, GosNIIOKhT was designated under the authority of the International Emergency Economic Powers Act and Executive Order 13382, "Blocking Property of Weapons of Mass Destruction Proliferators and Their Supporters."
References
Soviet chemical weapons program
Novichok agents
Research institutes in the Soviet Union
Chemical research institutes
Research institutes established in 1924
1924 establishments in the Soviet Union

Diplomacy of the Caspian littoral states
https://en.wikipedia.org/wiki/Diplomacy%20of%20the%20Caspian%20littoral%20states

Several states in the Caspian region, including the five littoral states of the Caspian Sea, namely the Islamic Republic of Iran, Turkmenistan, Kazakhstan, the Russian Federation, and the Republic of Azerbaijan, use ad hoc diplomatic relations to build trust and goodwill as well as to boost the bargaining power of their governments.
Generally, the focus is to establish, design, and amend regional and international rules about the world system and the littoral states.
Mechanisms include hard power, soft power, and smart power to achieve hard security, soft security, and smart security.
Summits
Diplomatic processes include repeated summits over the centuries:
Treaty of Turkmenchay, 1828 - The first diplomatic effort, the treaty barred Iran from deploying military ships in the Caspian Sea.
Ashgabat, 2002 - The first Caspian Sea Summit was held in Ashgabat, Turkmenistan, in April 2002. Items included combating pollution and protecting the waters of the Caspian Sea.
Tehran, 2007 - The next summit was held in Tehran and ended with the signing of the Caspian Environment Convention. This was the first international law document related to the Caspian Sea, approved and implemented by the legislatures of the five littoral states. The meeting agreed on general principles of fisheries, protection of the Caspian Sea, and shipping from littoral countries. A memorandum of understanding on security and military cooperation aimed at fighting terrorism and extremism was also negotiated.
Baku, 2010 - On November 18, 2010, the group met in Baku. It achieved the signing of the Security and Military Cooperation Agreement.
Astrakhan, 2014 - The fourth summit was held in 2014 in Astrakhan, in the Russian Federation. It resulted in three agreements:
Protection and optimal use of water resources
Hydrometeorology
Prevention and response to emergencies.
Nur-Sultan, 2017 - The Astana Summit was held at the foreign minister level in 2017. Although the foreign ministers had sufficient authority to coordinate some parts of the Caspian Sea Convention, disagreement on some key issues led to adjournment to the following year. Issues included the division of the Caspian Sea into national sections, construction of pipelines across the seabed, and the navigational situation.
Aktau, 2018 - The meeting in 2018 in Aktau, Kazakhstan reached an agreement on the forms of exploitation of the Caspian Sea, and the water and coastal boundaries between the five states. The agreement had 24 articles. Among the more important were:
Article 1: A warship belonging to peacekeeping forces must have the insignia of one of the five states and be under the command of an officer formally appointed by that government.
Article 2: Sovereignty, sovereign rights, monopoly and jurisdiction will be exercised in the Caspian Sea. The Convention set out the rights and obligations of the parties, including the waters, water bed, subsoil, natural resources, and airspace above the sea.
Article 5: The Caspian Sea water area is divided into inland waters, territorial waters, fishing areas, and common sea areas.
Political cooperation
Energy
The Caspian Sea's oil resources are second only to those of the Persian Gulf. Security concerns in the Middle East and the consequent reluctance to invest there have increased the focus on the Caspian region and its energy resources.
The Caspian Sea basin's reserves include an estimated 48 billion barrels of oil and 292 trillion cubic feet of natural gas, the world's fourth-largest gas reserves.
Pipelines
Baku–Tbilisi–Ceyhan pipeline
South Caucasus Pipeline
Turkmenistan–Afghanistan–Pakistan–India Pipeline
The five states have not yet agreed on rules for exploiting these resources; the main source of dispute is how to transfer and export them.
Sources
Caspian Sea
Oil reserves
Admiralty law
International borders
Treaties of Iran
Treaties of Turkmenistan
Treaties of Kazakhstan
Treaties of Azerbaijan
Azerbaijan–Iran relations
Pipelines
Azerbaijan–Russia relations
Azerbaijan–Turkmenistan relations
Multilateral relations of Russia
Multilateral relations of Kazakhstan
International law
Environmental mitigation
Jurisdiction
Sovereignty
Diplomatic conferences

GSK-4112
https://en.wikipedia.org/wiki/GSK-4112

GSK-4112 is an experimental drug that was developed by GlaxoSmithKline as an agonist of Rev-ErbAα. It is used for studying regulation of the circadian rhythm and its influence on diverse processes such as adipogenesis, regulation of bone density, and inflammation.
See also
SR8278
SR9009
SR9011
References
Thiophenes
Tert-butyl compounds
Nitroarenes
4-Chlorophenyl compounds
Amines
Esters

SR8278
https://en.wikipedia.org/wiki/SR8278

SR-8278 is an experimental drug that was developed as an antagonist of Rev-ErbAα. It has been used to demonstrate potential applications of Rev-ErbAα antagonists in the treatment of conditions such as Duchenne muscular dystrophy and Alzheimer's disease.
See also
GSK4112
SR9009
SR9011
References
Isoquinolines | SR8278 | [
"Chemistry"
] | 79 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
66,199,649 | https://en.wikipedia.org/wiki/Thiothionyl%20fluoride | Thiothionyl fluoride is a chemical compound of fluorine and sulfur, with the chemical formula . It is an isomer of disulfur difluoride (difluorodisulfane) .
Preparation
Thiothionyl fluoride can be obtained from the reaction of disulfur dichloride with potassium fluoride at about 150 °C, or with mercury(II) fluoride at 20 °C.
Another possible preparation is by the reaction of nitrogen trifluoride with sulfur.
It also forms from disulfur difluoride when in contact with alkali metal fluorides.
It can also be synthesized by the reaction of potassium fluorosulfite and disulfur dichloride:
Properties
Thiothionyl fluoride is a colorless gas. At high temperatures and pressures, it decomposes into sulfur tetrafluoride and sulfur.
With hydrogen fluoride, it forms sulfur tetrafluoride and hydrogen sulfide.
It condenses with sulfur difluoride at low temperatures to yield 1,3-difluoro-trisulfane-1,1-difluoride.
References
Sulfur compounds
Fluorides | Thiothionyl fluoride | [
"Chemistry"
] | 253 | [
"Fluorides",
"Salts"
] |
66,200,335 | https://en.wikipedia.org/wiki/Torin-1 | Torin-1 is a drug which was one of the first non-rapalog derived inhibitors of the mechanistic target of rapamycin (mTOR) subtypes mTORC1 and mTORC2. In animal studies it has anti-inflammatory, anti-cancer, and anti-aging properties, and shows activity against neuropathic pain.
References
Enzyme inhibitors | Torin-1 | [
"Chemistry"
] | 80 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
66,200,541 | https://en.wikipedia.org/wiki/Omzotirome | Omzotirome (INN), formerly codenamed TRC-150094, is a thyromimetic drug which acts as a metabolic modulator which restores metabolic flexibility. It has been shown to improve insulin resistance and hyperglycemia, and is in Phase III human clinical trials for the treatment of Cardiometabolic-Based Chronic Disease (CMBCD) by improving dysglycemia, dyslipidemia and hypertension.
References
Small-molecule drugs
Thyroid
Indenes
Experimental diabetes drugs
Pyrazoles
Carboxylic acids | Omzotirome | [
"Chemistry"
] | 118 | [
"Pharmacology",
"Carboxylic acids",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
66,201,497 | https://en.wikipedia.org/wiki/Micius%20Quantum%20Prize | The Micius Quantum Prize has been awarded every year since 2018 "for promoting the quantum information science and technology research". The recipients are awarded one million Chinese yuan (about 150,000 US dollars) and a gold medal. The prize is awarded by the Micius quantum foundation, which was established with donations totalling 100 million Chinese yuan from private entrepreneurs. The chair of the selection committee is Chunli Bai, president of the Chinese Academy of Sciences.
The prize is named after Mozi, an ancient Chinese philosopher (c. 400 BC) who founded the school of Mohism during the Hundred Schools of Thought period.
Laureates
References
Physics awards
Awards established in 2018 | Micius Quantum Prize | [
"Technology"
] | 139 | [
"Science and technology awards",
"Physics awards"
] |
58,106,874 | https://en.wikipedia.org/wiki/Catalina%20Curceanu | Cătălina Oana Curceanu is a Romanian physicist and lead researcher at the Istituto Nazionale di Fisica Nucleare. She researches low energy quantum chromodynamics.
Early life and education
Curceanu was born in Transylvania. She became interested in science as a child, and applied to the Mathematics and Physics Lyceum at Magurele in Bucharest. She attributes her passion for physics to her very skilled teachers. She studied physics at the University of Bucharest and graduated as valedictorian. She carried out her doctoral research using the Low Energy Antiproton Ring at CERN on the OBELIX experiment. She earned her PhD from the Horia Hulubei National Institute of Physics and Nuclear Engineering.
Research and career
In 1992 Curceanu joined the Istituto Nazionale di Fisica Nucleare. She uses the DAFNE (DAΦNE) collider at Frascati. She is part of the VIP2 experiment (Violation of the Pauli Principle) in the Laboratori Nazionali del Gran Sasso. In 2010 she was awarded Personality of the Year by the Romanian Academy in Rome. She works at CERN on the OBELIX experiment, looking for exotic mesons, and on DIRAC, looking for exotic pionium.
She published the popular science book Dai Buchi Neri all'adroterapia. Un Viaggio nella Fisica Moderna in 2013 with Springer. The book covers concepts of modern physics, including the Standard Model, black holes and neutrinos. In 2015 she was awarded an $85,000 grant from FQXi and the John Templeton Foundation for her quantum physics research. Her proposal considered collapse models and the measurement problem. She used an ultrapure germanium detector to test the radiation it emits. Her recent work involves the SIDDHARTA experiment, looking at the strong interaction and strangeness.
Curceanu was the Australian Institute of Physics Women in Physics lecturer in 2016. In her lectures she asked "Quo Vadis the Universe". She has spoken about quantum computers at TEDx Brașov and TEDx Cluj-Napoca. She won the 2017 European Physical Society Emmy Noether Distinction for Women in Physics for her contributions to low-energy QCD. She won a Visiting International Scholar Award from the University of Wollongong in 2017, researching detector systems for high-precision spectroscopy in fundamental physics. She is involved with several outreach and education activities.
References
Romanian physicists
Romanian women physicists
University of Bucharest alumni
Particle physicists
Living people
Year of birth missing (living people)
People associated with CERN | Catalina Curceanu | [
"Physics"
] | 538 | [
"Particle physicists",
"Particle physics"
] |
58,107,182 | https://en.wikipedia.org/wiki/NGC%203884 | NGC 3884 is a spiral galaxy located about 330 million light-years away in the constellation Leo. The galaxy was discovered by astronomer William Herschel on April 27, 1785 and is a member of the Leo Cluster.
Although it is classified as a LINER galaxy, NGC 3884 is also classified as a type 1 Seyfert galaxy.
On February 23, 2018, a type Ic supernova designated as SN 2018yn was discovered in NGC 3884.
References
External links
3884
36706
Leo (constellation)
Leo Cluster
Astronomical objects discovered in 1785
Unbarred spiral galaxies
Seyfert galaxies
LINER galaxies
6746 | NGC 3884 | [
"Astronomy"
] | 131 | [
"Leo (constellation)",
"Constellations"
] |
58,107,796 | https://en.wikipedia.org/wiki/Missouri%20water%20resource%20region | The Missouri water resource region is one of 21 major geographic areas, or regions, in the first level of classification used by the United States Geological Survey to divide and sub-divide the United States into successively smaller hydrologic units. These geographic areas contain either the drainage area of a major river, or the combined drainage areas of a series of rivers.
The Missouri region, which is listed with a 2-digit hydrologic unit code (HUC) of 10, has an approximate size of , and consists of 30 sub-regions, which are listed with the 4-digit HUCs 1001 through 1030.
This region includes the drainage within the United States of: (a) the Missouri River Basin, (b) the Saskatchewan River Basin, and (c) several small closed basins. It includes all of Nebraska and parts of Colorado, Iowa, Kansas, Minnesota, Missouri, Montana, North Dakota, South Dakota, and Wyoming.
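The hierarchical HUC numbering described above (each 4-digit subregion code extends its parent's 2-digit region code) can be sketched in a few lines. The helper functions below are illustrative, not part of any USGS tooling:

```python
# Minimal sketch of the hierarchical hydrologic unit code (HUC) scheme:
# the Missouri region has the 2-digit HUC "10", and its 30 subregions
# carry the 4-digit HUCs "1001" through "1030".

MISSOURI_REGION = "10"

def subregion_codes(region, count):
    """Enumerate the 4-digit subregion HUCs under a 2-digit region code."""
    return [f"{region}{i:02d}" for i in range(1, count + 1)]

def belongs_to_region(huc, region):
    """A hydrologic unit lies within a region iff its HUC extends the region code."""
    return huc.startswith(region)

subs = subregion_codes(MISSOURI_REGION, 30)
print(subs[0], subs[-1])  # 1001 1030
```

The same prefix rule continues down the hierarchy: finer hydrologic units append further digit pairs to their parent's code, so membership checks at any level remain simple prefix tests.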
List of water resource subregions
See also
List of rivers in the United States
Water resource region
References
Lists of drainage basins
Drainage basins
Watersheds of the United States
Regions of the United States
Resource
Water resource regions | Missouri water resource region | [
"Environmental_science"
] | 229 | [
"Hydrology",
"Drainage basins"
] |
58,108,923 | https://en.wikipedia.org/wiki/Seema%20Bhatnagar | Seema Bhatnagar (born: Seema Srivastava) is an Indian scientist, working in the field of anticancer drug discovery. She primarily works on synthetic chemistry approaches for targeted delivery of anticancer drugs in breast cancer.
Education
Seema Bhatnagar completed her B.Sc. in Chemistry, Biology and Zoology (1992), followed by M.Sc. in Organic Chemistry (1994) from Isabella Thoburn College, Lucknow.
She earned her Ph.D. in Chemistry (1999) at the Central Drug Research Institute, Lucknow, with a thesis executed under her doctoral advisor, Amiya Prasad Bhaduri.
Career
Bhatnagar has been associated with research and development in the field of drug discovery and worked with various government and non-government organizations before her current assignment as Assistant Director at the Amity Institute of Biotechnology, Amity University, Noida. She joined Amity University as a lecturer around the time of its inception in 2005. Prior to that she worked as a Project Associate in the Department of Cell Biology, National Institute of Immunology, India (07/01 to 04/04) and as a Project Associate in the Department of Immunopharmacology at the National Institute of Immunology, India (05/00 to 05/01). Bhatnagar started her career as a Senior Research Fellow (Extended) in the Medicinal Chemistry Division, working on "Lactam Acetals in Organic Synthesis" under the supervision of Nitya Anand, a project fully sponsored by the Council of Scientific and Industrial Research and executed at New Drug Discovery Research (NDDR), Ranbaxy Laboratories (6/99-12/99).
Bhatnagar is currently working as Assistant Director at the Amity Institute of Biotechnology, Amity University, Noida, and has been associated with the university since its inception. She began her research career with a project sponsored by the Department of Science and Technology (India) (DST). She is currently pursuing a project sponsored by the Indian Council of Medical Research (ICMR), along with her collaborator Bhudev Chandra Das. Apart from this, she has developed active collaborations with Bhyravabhotla Jayaram at the Indian Institute of Technology Delhi, Thankayyan Retnabai Santhosh Kumar at the Rajiv Gandhi Centre for Biotechnology (RGCB), Thiruvananthapuram, and the Drug Discovery Unit at the University of Dundee to strengthen her research work. She was selected from her institute to attend the Wellcome Trust Advanced Course in Small Molecule Drug Discovery. Prof. Bhatnagar's research credentials include several patents and publications.
At Amity University, alongside her research, Prof. Bhatnagar also leads various initiatives, including collaborations between foreign universities and scientific research organizations and Amity University, as well as the Study Abroad Program and the Three Continent Program. During her postdoctoral work she was associated with some of the top research institutes and scientists in India, including the National Institute of Immunology, Ranbaxy Laboratories, and the Central Drug Research Institute.
Professional training, fellowships, awards and achievements
Small Molecule Drug Discovery, organized by the Wellcome Trust at the Wellcome Trust Genome Campus, Cambridge, UK, 1-6 June 2014
Fellowships/Awards
Listed among the world's top 2% most influential scientists in the 2023 Stanford University list.
Awarded Fellow of The Indian Society of Agricultural Biochemists, 2023.
Awarded Senior Research Fellowship (Extended) by the Council of Scientific and Industrial Research, New Delhi. (06/99-12/99)
Nominated member of the Jury Panel (level 1) for the 'Dupont India Challenge-2002 Science Paper Contest' (07/02)
Personal life
Bhatnagar was born in the city of Lucknow, Uttar Pradesh, India, and is the eldest daughter of Ram Chandra Srivastava (1940-1999), who was an Associate Director with the Defence Research and Development Organisation, and Meera Srivastava. Her younger brother heads IT operations at a multinational company, and her youngest sister is a physiotherapist.
Seema is married to an IT consultant, and together they have one son and one daughter.
References
21st-century women scientists
21st-century Indian women scientists
21st-century Indian scientists
Lists of Indian scientists
Indian women chemists
Indian organic chemists
Scientists from Lucknow
Isabella Thoburn College alumni
Chaudhary Charan Singh University alumni
Living people
1971 births
Index of women scientists articles | Seema Bhatnagar | [
"Chemistry"
] | 900 | [
"Organic chemists",
"Indian organic chemists"
] |
58,112,175 | https://en.wikipedia.org/wiki/Kids%20Code%20Jeunesse | Kids Code Jeunesse (KCJ) is a Canadian not-for-profit organization based in Montreal, Quebec, which helps children in Canada have an opportunity to learn computational thinking through code. The organization was founded in 2013.
Projects
Code Club
In 2016, in partnership with Code Club U.K., KCJ licensed the rights to Code Club Canada, which runs volunteer-led Code Clubs for free across Canada. There are now over 750 Code Clubs registered throughout every province and territory. These clubs are run for children aged 7–12 in schools, libraries, and community centers for 8 weeks.
Code Create Teach
In 2016, Kids Code Jeunesse, in partnership with Lighthouse Labs, embarked on a national campaign to inspire teachers to incorporate the basics of coding and computational thinking into their classrooms. From April to December 2018, KCJ hosted free, full-day Code Create Teach workshops which provided K-12 educators with the tools to help them teach their students how to experiment with technology. Two workshops were planned for each Canadian province and territory, allowing KCJ to reach over 1500 teachers. During the workshops, KCJ instructors provided tips and guidelines for bringing coding into the classroom, and combined unplugged activities with hands-on coding activities. These methods gave attendees the opportunity to connect with other teachers in a purposeful and learner-driven way. Following the Code Create Teach workshops, each teacher was given a free classroom kit of micro:bits, pocket-sized programmable microcontrollers designed to make learning and teaching code easy and fun.
Code MTL
In 2017, KCJ was contracted for CodeMTL, a project to deliver coding workshops to over 65 schools in the Commission Scolaire de Montréal, Quebec's largest school board.
CanCode
In January 2018, the Honourable Navdeep Bains, Canada's Minister of Innovation, Science and Economic Development announced that Kids Code Jeunesse was one of the recipients of the inaugural CanCode program. This program is part of the Canadian Government's Innovation and Skills Plan which has the stated intention to invest $50 million by March 2019 to increase the opportunities for children and teachers to master digital skills. With the funds received, Kids Code Jeunesse has been able to extend the training it provides to Canada's youth and aims to support over 70,000 children and 2000 teachers.
References
Organizations established in 2013
2013 establishments in Quebec
Non-profit organizations based in Quebec
Educational organizations based in Quebec
Computer science education | Kids Code Jeunesse | [
"Technology"
] | 503 | [
"Computer science education",
"Computer science"
] |
58,114,516 | https://en.wikipedia.org/wiki/NGC%203886 | NGC 3886 is a lenticular galaxy located about 280 million light-years away in the constellation Leo. It was discovered by astronomer Heinrich d'Arrest on May 9, 1864. The galaxy is a member of the Leo Cluster.
See also
List of NGC objects (3001–4000)
References
External links
3886
36756
Leo (constellation)
Leo Cluster
Astronomical objects discovered in 1864
Lenticular galaxies
6760 | NGC 3886 | [
"Astronomy"
] | 84 | [
"Leo (constellation)",
"Constellations"
] |
58,114,992 | https://en.wikipedia.org/wiki/Patisiran | Patisiran, sold under the brand name Onpattro, is a medication used for the treatment of polyneuropathy in people with hereditary transthyretin-mediated amyloidosis, a fatal rare disease that is estimated to affect 50,000 people worldwide.
It is the first small interfering RNA-based drug approved by the U.S. Food and Drug Administration (FDA) and the first drug approved by the FDA to treat this condition. It is a gene silencing drug that interferes with the production of an abnormal form of transthyretin. Patisiran utilizes a novel approach to target and reduce production of the TTR protein in the liver via the RNAi pathway.
Patisiran was developed and is marketed by Alnylam. The FDA considers it to be a first-in-class medication.
History
Patisiran was granted orphan drug status, fast track designation, priority review and breakthrough therapy designation due to its novel mechanism and the rarity of the condition it treats. It was approved for medical use in the United States and in the European Union in August 2018. The per-patient cost is between and per year, depending on the number of vials needed.
Formulation
The siRNA active component of Patisiran is formulated into lipid nanoparticles, which protect the RNA and facilitate its delivery to target tissues. The lipid nanoparticle formulation includes buffer components, as well as the lipid components DLin-MC3-DMA, Distearoylphosphatidylcholine, cholesterol, and the PEGylated lipid DMG-PEG 2000.
Society and culture
Economics
As of 2020, there were 1050 people globally receiving patisiran, generating $65.5M in net-revenues for Alnylam Pharmaceuticals.
References
Biopharmaceuticals
Orphan drugs
small interfering RNA | Patisiran | [
"Chemistry",
"Biology"
] | 383 | [
"Pharmacology",
"Biotechnology products",
"Biopharmaceuticals"
] |
58,115,927 | https://en.wikipedia.org/wiki/Jiji%20Weir | The Jiji Weir () is a weir located in Nantou County, Taiwan. The weir is located at the border of three townships in the county, which are Jiji Township, Lugu Township and Zhushan Township.
History
The construction of the weir started in July 1990 and was completed in December 2001.
Architecture
The weir features the Taiwan Water Museum () within Jiji Township border.
Transportation
The weir is accessible southwest of Jiji station of Taiwan Railways.
See also
List of dams and reservoirs in Taiwan
References
2001 establishments in Taiwan
Buildings and structures in Nantou County
Dams completed in 2001
Weirs | Jiji Weir | [
"Environmental_science"
] | 120 | [
"Hydrology",
"Weirs"
] |
58,115,956 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Home | The Samsung Galaxy Home is a smart speaker developed by Samsung Electronics. It was officially announced alongside the Galaxy Note 9 and Galaxy Watch on 9 August 2018, but as of the start of 2024, has yet to be commercially released.
History
The Wall Street Journal reported in July 2017 that a Bixby smart-speaker, codenamed Vega, was under development. It was later confirmed by Dong-Jin Koh, CEO of Samsung Electronics, in August 2017.
The Galaxy Home was revealed at the Samsung Unpacked event on 9 August 2018, with more information promised during the Samsung Developer Conference in November.
At the Samsung Developer Conference 2019, held in San Jose, California, they showed the Samsung Galaxy Home Mini, which launched on 12 February 2020.
Specifications
Hardware
The Galaxy Home has a vase shape and features black cloth material with a mesh design, supported by 3 metal tripod legs. The top surface has a glass touch interface with music and volume controls, and also has an illuminated ring and AKG logo. There are 3 mid-range and high-range speakers and a subwoofer, as well as 8 far-field microphones for voice commands.
Software
The speaker features the Bixby voice assistant and can be activated by saying “Hi Bixby”. Its functionality is similar to that found on mobile devices such as the Note 9. The Galaxy Home can adjust its sound to adapt to its environment and also features Sound Steer, a Bixby voice command that allows the device to identify the location of the user in the room and better direct sound.
The speaker features SmartThings Hub integration, allowing it to control other smart home appliances compatible with the Samsung SmartThings platform. Spotify is the default music player, and can be controlled via voice. Audio playback can be also switched between Samsung home appliances.
References
External links
Samsung Galaxy
Smart speakers
Products introduced in 2018
Vaporware | Samsung Galaxy Home | [
"Technology"
] | 383 | [
"Computer industry",
"Vaporware"
] |
58,116,271 | https://en.wikipedia.org/wiki/Emergency%20Cell%20Broadcast%20System | Emergency Cell Broadcast System (ECBS) is an alert broadcast system in the Philippines, designed to disseminate emergency alerts and warnings to mobile devices via cell broadcast services (CBS).
Telecommunications companies and the National Disaster Risk Reduction and Management Council (NDRRMC) are required by law to send free mobile alerts before disasters happen.
Background
The alert broadcast system was implemented in compliance with the Republic Act 10639, also known as the Free Mobile Disaster Alerts Act. The legislation was signed on June 20, 2014 and its implementing rules and regulations (IRR) were released on July 21, 2015. Initially only SMS or text messages were used to alert the public regarding emergencies and disasters.
The Emergency Cell Broadcast System (ECBS) was launched on March 13, 2017 by the National Disaster Risk Reduction and Management Council and Smart Communications.
Information transmission capabilities
Critical information that affected communities can use to prepare for and respond to disasters
Contact information of authorities and responders in affected areas
Information on evacuation centers, relief sites, and pick-up points
Up-to-date information provided by state weather bureau PAGASA, the Philippine Institute of Volcanology and Seismology (PhiVolcs), and NDRRMC
Mechanism
Emergency alerts disseminated through this system are crafted by the National Disaster Risk Reduction and Management Council with input from other government agencies. The NDRRMC is limited in the number of characters it can use for each emergency alert message. A computer program made for the system is used to create and send the messages.
The system is location-specific, meaning a message is sent by designating an area within which mobile phones shall receive the emergency alert. In contrast, the SMS-based emergency alert system sent messages to devices through their mobile phone numbers, which meant that the NDRRMC had to send emergency alert messages through telecommunications service providers. The process of the SMS-based system could take hours.
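As a rough illustration of the design difference described above, the sketch below models area-targeted cell broadcast: an alert addresses a geographic area, and every handset currently inside that area receives it, with no per-number addressing required. All class and function names (and the sample numbers) are hypothetical, not a real carrier API.

```python
# Toy model of area-targeted dissemination (cell broadcast), in contrast
# to the older SMS-based system that must address each phone number
# individually. Names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class Handset:
    number: str
    area: str  # the cell/area the phone is currently located in

def cell_broadcast(alert, target_area, handsets):
    """Deliver the alert to every handset currently inside the target area."""
    return {h.number for h in handsets if h.area == target_area}

handsets = [
    Handset("0917-000-0001", "Albay"),
    Handset("0917-000-0002", "Albay"),
    Handset("0918-000-0003", "Cebu"),  # outside the designated area
]

# Broadcast reaches whoever is *in* the area; no subscriber list is needed.
reached = cell_broadcast("Typhoon warning: evacuate low-lying areas", "Albay", handsets)
print(sorted(reached))
```

The contrast with the SMS path is that an SMS campaign would need the current list of subscriber numbers from each carrier before any message could go out, which is what made the older process slow.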
References
Emergency population warning systems
Emergency management in the Philippines
2017 introductions | Emergency Cell Broadcast System | [
"Technology"
] | 406 | [
"Warning systems",
"Emergency population warning systems"
] |
73,488,171 | https://en.wikipedia.org/wiki/Interleaving%20distance | In topological data analysis, the interleaving distance is a measure of similarity between persistence modules, a common object of study in topological data analysis and persistent homology. The interleaving distance was first introduced by Frédéric Chazal et al. in 2009. Since then, it and its generalizations have been a central consideration in the study of applied algebraic topology and topological data analysis.
Definition
A persistence module $M$ is a collection of vector spaces $M_t$ indexed over the real line, along with a collection of linear maps $\varphi_s^t : M_s \to M_t$ for $s \leq t$, such that $\varphi_t^t$ is always an isomorphism, and the relation $\varphi_s^u = \varphi_t^u \circ \varphi_s^t$ is satisfied for every $s \leq t \leq u$. The case of $\mathbb{R}$-indexing is presented here for simplicity, though the interleaving distance can be readily adapted to more general settings, including multi-dimensional persistence modules.
Let $M$ and $N$ be persistence modules. Then for any $\delta \geq 0$, a $\delta$-shift is a collection of linear maps $\phi_t : M_t \to N_{t+\delta}$ between the persistence modules that commute with the internal maps of $M$ and $N$.
The persistence modules $M$ and $N$ are said to be $\delta$-interleaved if there are $\delta$-shifts $\phi : M \to N$ and $\psi : N \to M$ whose compositions recover the internal maps, i.e., $\psi_{t+\delta} \circ \phi_t$ equals the internal map $M_t \to M_{t+2\delta}$ and $\phi_{t+\delta} \circ \psi_t$ equals the internal map $N_t \to N_{t+2\delta}$, for all $t$.
It follows from the definition that if $M$ and $N$ are $\delta$-interleaved for some $\delta$, then they are also $(\delta + \varepsilon)$-interleaved for any positive $\varepsilon$. Therefore, in order to find the closest interleaving between the two modules, we must take the infimum across all possible interleavings.
The interleaving distance between two persistence modules $M$ and $N$ is defined as $d_I(M, N) = \inf \{ \delta \geq 0 \mid M \text{ and } N \text{ are } \delta\text{-interleaved} \}$.
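As a concrete special case (assumed here, not stated above): for two interval modules the interleaving distance reduces to a closed form, the minimum of the cost of shifting one interval directly onto the other and the cost of interleaving each interval with the zero module. A minimal Python sketch:

```python
def interleaving_distance_intervals(interval1, interval2):
    """Interleaving distance between two interval persistence modules
    [a, b] and [c, d] (equivalently, the bottleneck distance between
    their one-point persistence diagrams)."""
    a, b = interval1
    c, d = interval2
    # Cost of matching the intervals directly: delta-shifts in both
    # directions exist once delta covers both endpoint differences.
    direct = max(abs(a - c), abs(b - d))
    # Cost of interleaving each interval with the zero module instead:
    # an interval of length L is (L/2)-interleaved with the zero module.
    via_zero = max((b - a) / 2, (d - c) / 2)
    return min(direct, via_zero)

print(interleaving_distance_intervals((0, 10), (1, 11)))    # 1
print(interleaving_distance_intervals((0, 2), (100, 101)))  # 1.0 (via the zero module)
```

The second case shows why the zero-module route matters: two short, far-apart intervals are closer to the trivial module than to each other, so their interleaving distance stays small even though a direct shift would be large.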
Properties
Metric properties
It can be shown that the interleaving distance satisfies the triangle inequality. Namely, given three persistence modules $M$, $N$, and $P$, the inequality $d_I(M, P) \leq d_I(M, N) + d_I(N, P)$ is satisfied.
On the other hand, there are examples of persistence modules that are not isomorphic but that have interleaving distance zero. Furthermore, if no suitable $\delta$ exists, then two persistence modules are said to have infinite interleaving distance. These two properties make the interleaving distance an extended pseudometric, which means non-identical objects are allowed to have distance zero, and objects are allowed to have infinite distance, but the other properties of a proper metric are satisfied.
Further metric properties of the interleaving distance and its variants were investigated by Luis Scoccola in 2020.
Computational complexity
Computing the interleaving distance between two single-parameter persistence modules can be accomplished in polynomial time. On the other hand, it was shown in 2018 that computing the interleaving distance between two multi-dimensional persistence modules is NP-hard.
References
Computational topology
Data analysis | Interleaving distance | [
"Mathematics"
] | 489 | [
"Topology",
"Computational topology",
"Computational mathematics"
] |
73,488,529 | https://en.wikipedia.org/wiki/History%20of%20phagocytosis | The history of phagocytosis is an account of the discoveries of cells, known as phagocytes, that are capable of eating other cells or particles, and of how that eventually established the science of immunology. Phagocytosis is broadly used in two ways in different organisms: for feeding in unicellular organisms (protists) and for the immune response that protects the body against infections in metazoans. Although it is found in a variety of organisms with different functions, its fundamental process is the cellular ingestion of foreign (external) material, and it is thus considered an evolutionarily conserved process.
The biological theory and concept, the experimental observations, and the name phagocyte () were introduced by the Ukrainian zoologist Élie Metchnikoff in 1883, a moment regarded as the foundation or birth of immunology. The discovery of phagocytes and the process of innate immunity earned Metchnikoff the 1908 Nobel Prize in Physiology or Medicine, and the epithet "father of natural immunity".
However, the cellular process was known before Metchnikoff's work, albeit with inconclusive descriptions. The first scientific description was from Albert von Kölliker, who in 1849 reported an alga eating a microbe. In 1862, Ernst Haeckel experimentally showed that some blood cells in a slug could ingest external particles. By then evidence was mounting that leucocytes could perform cell eating just like protists, but it was not until Metchnikoff showed that specific leukocytes (in his case macrophages) eat cells that the role of phagocytosis in immunity was realised.
Discovery of cell feeding
Phagocytosis was first observed as a process by which unicellular organisms eat their food, usually smaller organisms like protists and bacteria. The earliest definitive account was given by the Swiss scientist Albert von Kölliker in 1849. As he reported in the journal , Kölliker described the feeding process of an amoeba-like alga, Actinophrys sol (a heliozoan). Under the microscope, he noticed that the protist engulfed and swallowed (the process now called endocytosis) a small organism, which he named infusoria (a generic name for microbes at the time). A modern translation of his description reads:
The creature [infusoria] which is destined for food [i.e., trapped by the spines], gradually reaches the surface of the animal [i.e., Actinophrys], in particular, the thread that caught it is shortened to nothing, or, as it often happens, once trapped in the thread space, the thread unwinds from around the prey when close together and at the surface of the cell body... The place on the cell surface where the caught animal is, gradually becomes a deeper and deeper pit into which the prey, which is attached everywhere to the cell surface, comes to rest. Now, by continuing to draw in the body wall, the pit gets deeper, and the prey which was previously on the edge of the Actinophrys, disappears completely, and at the same time the catching threads, which still lay with their points against each other, cancel each other out and extend again. Finally, the edges "choke" the pit, so that it is flask-shaped (flaschenformig) all sides increasingly merging together, so that the pit completely closes and the prey is completely within the cortical cytoplasm.
The general process given by Kölliker correlates with the modern understanding of phagocytosis as a feeding method. The thread and thread space are pseudopodia, the gradually deepening pit is the endocytosis, and the flaschenformig structure is the phagosome.
Discovery of phagocytic immune cells
Eosinophils
The first demonstration of phagocytosis as a property of leukocytes, the immune cells, was from the German zoologist Ernst Haeckel. In 1846, the English physician Thomas Wharton Jones had discovered that a group of leucocytes, which he called "granule-cells" (later renamed and identified as eosinophils), could change shape, a phenomenon later called amoeboid movement. Jones studied the blood of different animals, from invertebrates to mammals, and noticed that the blood of a marine fish (a skate) had cells that could move by themselves, remarking that "the granule-cells at first presented most remarkable changes of shape." Other scientists confirmed his findings; however, among them, the German physician Johann Nathanael Lieberkühn concluded in 1854 that the movement was not for ingesting food or particles.
Disproving Lieberkühn's conclusion, Haeckel discovered that such cells could indeed ingest particles, even experimentally introduced ones. In 1862, Haeckel injected Indian ink (or indigo) into a sea slug, Tethys, and observed how the colour was taken up by the tissues. When he extracted the blood, he found that the colour particles had accumulated in the cytoplasm of some blood cells. It was direct evidence of phagocytosis by immune cells. Haeckel reported his experiment in the monograph Die Radiolarien (Rhizopoda Radiaria): Eine Monographie.
In 1869, Joseph Gibbon Richardson at the Pennsylvania Hospital observed amoeboid leukocytes from his own saliva, from the urine of an individual hospitalised for a kidney and bladder problem, and from the urine of a cystitis case. He noticed from the pus sample that one cell had moving "molecules" inside; the cell gradually enlarged and ultimately ruptured like "that of swarm of bees from a hive". He hypothesised: "[It] seems not improbably that the white corpuscles, either in the capillaries or lymphatic glands, collect during their amoebaform [sic] movements, those germs of bacteria, which my own experiments indicate always exist in the blood to a greater or less amount." Although generally overlooked in the study of phagocytosis, the report, originally published in the Pennsylvania Hospital Report, was reproduced in other journals.
Epithelial cells
In 1869, the Russian physician Kranid Slavjansky published his research on the injection of guinea pigs and rabbits with indigo and cinnabar in (later renamed Virchows Archiv). Slavjansky found that leukocytes easily take up the indigo and cinnabar, as do the cells of the respiratory tract (alveoli). He noticed that the alveolar cells behaved like the leukocytes as they became distributed in the alveoli and the bronchial mucus, an observation which led him to suggest that the tissue cells were the source of particle uptake in the lungs. He concluded: [As those cells contain cinnabar, it is natural to suppose them to be white blood cells migrating out of the vessels and finding no free pigment in the pulmonary alveoli, as is the case in the experiments in which cinnabar is introduced into the blood after introducing indigo into the lungs two days before cinnabar cells appear... either they are migrated white blood cells which have undergone mucus metamorphosis and have thus become mucus corpuscles, or they can come from the metamorphosed columnar epithelium of the bronchial mucosa.]
The Canadian physician William Osler at McGill College reported "On the pathology of miner's lung" in the Canada Medical and Surgical Journal in 1875. Osler had examined cases of black lung disease (pneumoconiosis) in two miners. From an autopsy of one who died from the disease, he found leukocytes and lung cells (alveolar cells) that contained the coal (carbon) particles. For the blood cells, he was not convinced that the coal particles had been taken up by the cells, instead suggesting that "they must be regarded as the original cell elements of the alveoli", and conceding that he lacked "the necessary knowledge to decide." But on the lung cells, his observation was clear, remarking:Inside all of these [lung cells] the carbon particles exist in extraordinary numbers, filling the cells in different degrees. Some are so densely crowded that not a trace of cell substance can be detected, more commonly a rim of protoplasm remains free, or at a spot near the circumference, the nucleus, which in these cells is almost always eccentric, is seen uncovered... One most curious specimen was observed: on an elongated piece of carbon three cells were attached, one at either end, and a third in the middle; so that the whole had a striking resemblance to a dumbbell. I could hardly credit this at first, until, by touching top-cover with a needle and causing the whole to roll over, I quite satisfied myself that the ends of the rod were completely imbedded in the corpuscles, and the middle portion entirely surrounded by another.Osler's report continued with his experimental observations. He injected Indian ink into the axillae and lungs of kittens. On autopsy of a two-day-old kitten, he noticed leukocytes and large tissue cells, which showed amoeboid movements, containing the ink. However, he could not work out how the ink spread inside the cells, as he accidentally dropped and broke his slide.
From a four-week-old kitten, he found that the ink also accumulated in almost all the blood and lung cells, and such cells were so crowded that under a microscope "hardly anything could be seen". He was convinced that there was a cellular process of taking up particles ("irritating materials", as he called them), which he considered an "intravasation" or "ingestion", as he concluded:Here we have to do with an intravasation, or rather an ingestion of the coloured corpuscles within others. Many deny this, but as far as my observation goes there can be no doubt of the fact. In these corpuscles as many as six to ten were seen, in others again the outlines of the red corpuscles could not be detected, as if the cells had absorbed only the colouring matter.
Discovery of macrophage
Groundwork
The phagocytic property of the macrophage, a specialised leukocyte, and its role in immunity were discovered by the Ukrainian zoologist Élie Metchnikoff. However, he did not discover phagocytes or phagocytosis, as is often depicted in books. Metchnikoff had been working as professor of zoology and comparative anatomy at the University of Odessa, Ukraine (then Russian Empire), since 1870. In 1880, he had a nervous breakdown, partly due to his wife Olga Belokopytova's terminal typhoid fever, and attempted suicide by injecting himself with blood from an individual with relapsing fever. By then he had a keen interest in Charles Darwin's theory of natural selection, and had been investigating the origin of metazoans.
Based on the knowledge of cell eating in primitive metazoans, Metchnikoff believed that the common ancestor of metazoans must have been a simple cell-eating organism. His initial experimental observations in 1880 in Naples, Italy, showed that such intracellular digestion does occur in the parenchyma (tissue cells) of coelenterates, and he became convinced that the original metazoan must have been like that. He called this hypothetical metazoan ancestor parenchymella (later commonly known as phagocytella; the term parenchymella was adopted as the name for the larvae of demosponges). This was a direct contradiction of the hypothesis of Ernst Haeckel, a German zoologist and staunch supporter of Darwin's theory. In 1872, Haeckel had formulated a theory (as part of his evolutionary theory called the biogenetic law) that the metazoan ancestor must have been like a gastrula, an embryonic stage undergoing invagination as seen in chordates. He named this hypothetical ancestor gastrea.
Experimental discovery
To strengthen his parenchymella theory, Metchnikoff thought about several ways to look for cell eating as a fundamental process in metazoans. In the summer of 1880, he resigned from the University of Odessa and moved to Messina, a seashore city in Sicily, where he could conduct private research. His initial study on sponges indicated that the mesodermal and endodermal (body tissue wall) cells performed amoeboid movements and cell eating. His earlier experiments on planarian worms had already shown that the endoderm is formed by migrating cells, and not by invagination. His critical study came from the larvae (bipinnaria) of a starfish, Astropecten pentacanthus (later reclassified as Astropecten irregularis).
Metchnikoff observed that the body covering of the transparent starfish larva consisted of outer (ectoderm) and internal (endoderm) layers, and that the space between the layers is filled with moving endodermal cells. When he injected carmine stain (a red dye) into the starfish, he found that the stain was taken up (eaten) by the amoeboid cells, which turned red in colour. He remarked: "I found it an easy matter to demonstrate that these elements seized foreign bodies of very varied nature by means of their living processes, and certain of these bodies underwent a true digestion within the amoeboid cells." He then conceived a novel idea: if the cells could eat external particles, they must also be responsible for eating harmful materials and pathogens such as bacteria to protect the body – the key process of immunity.
It was one afternoon in December 1880, while he stayed home alone as his family went to a circus show, that he realised his idea could be put to the test by piercing live starfish larvae. He collected fresh specimens from the seashore and a few rose thorns on the way home. He discovered what he had hypothesised: the amoeboid cells gathered round the rose thorn, as if to eat it, when it pierced the skin, and he predicted that the same would be true in humans as a form of body defence. Recapitulating the experiment, he said:I hypothesized that if my presumption was correct, a thorn introduced into the body of a starfish larva, devoid of blood vessels and nervous system, would have to be rapidly encircled by the motile cells, similarly to what happens to a human finger with a splinter. No sooner said than done. In the shrubbery of our home, the same shrubbery where we had just a few days before assembled a 'Christmas tree' for the children on a mandarin bush, I picked up some rose thorns to introduce them right away under the skin of the superb starfish larva, as transparent as water. I was so excited I couldn't fall asleep all night in trepidation of the result of my experiment, and the next morning, at a very early hour, I observed with immense joy that the experiment was a perfect success! This experiment formed the basis for the theory of phagocytosis, to whose elaboration I devoted the next 25 years of my life.
Thus, it was in Messina that the turning point in my scientific life took place.
References
History of medicine
Immunology
Cellular processes | History of phagocytosis | [
"Biology"
] | 3,200 | [
"Immunology",
"Cellular processes"
] |
73,489,092 | https://en.wikipedia.org/wiki/False%20bottom%20%28sea%20ice%29 | False bottom is a form of sea ice that forms at the interface between meltwater and seawater via the process of double-diffusive convection of heat and salt.
Characteristics
False bottoms have been observed under drifting Arctic sea ice, under land-fast ice in Greenland, and at Ward Hunt Ice Shelf. Being located under ice, false bottoms are not easy to investigate, and the current observations are quite variable. For example, the areal coverage of false bottoms was 50% at the drifting station Charlie in 1959, 15% during SHEBA expedition in 1998 and 20% during MOSAiC expedition in 2020. Both physical modelling and in situ observations suggest that false bottoms may decrease sea ice melt up to 8%. Meanwhile, measurements from manual ice thickness gauges in Fram Strait in the summer of 2020 showed a nearly 50% reduction in bottom ice melt due to false bottoms. The salinity and temperature of under-ice meltwater and false bottoms are controlled by both ice melt and desalination. The salinity of false bottoms was 1.0 during the ARCTIC 91 expedition, 0.4 during SHEBA and 2.3 during MOSAiC. The average thickness of false bottoms was 20 cm during the ARCTIC 91 expedition, 15 cm during SHEBA, and 8 cm during MOSAiC. The presence of false bottoms can increase the rates of sea ice desalination.
Formation
During Arctic summer, snow and ice melt results in the accumulation of low-salinity meltwater. Most of this meltwater is transferred to the ocean, while some of it migrates to the surface melt ponds, the sea ice matrix, and under-ice meltwater layers. False bottoms form due to a substantial difference in freezing temperatures of water with different salinities. Their formation in summer was first documented by Fridtjof Nansen in 1897. During MOSAiC expedition, false bottoms occurred in areas of thin and ponded sea ice encircled by thicker sea ice ridges and were formed at the same time when surface melt ponds drained. False bottoms are formed at the upper part of the interface of meltwater and seawater. The ice crystals initially grow downwards towards seawater, and further grow horizontally until a formation of a horizontal ice layer. After the formation of this horizontal layer, false bottoms constantly migrate upwards due to conductive heat flux, supported by the temperature difference between meltwater and seawater, and the rate of such migration is mostly defined by its thickness. The growth and melt of false bottoms are controlled by the physical parameters of the ocean. False bottoms are often observed in areas of thin ice covered by surface melt ponds and encircled by thicker pressure ridges, with ridge draft limiting the depth of under-ice meltwater layers.
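The driving mechanism can be illustrated numerically. The sketch below uses the common linear freezing-point approximation (about −0.054 °C per g/kg of salinity) with illustrative salinity values; neither the coefficient nor the salinities are taken from this article.

```python
# Sketch: why meltwater can freeze at the meltwater/seawater interface.
# Uses the common linear approximation T_f ≈ -0.054 * S (T_f in °C, S in g/kg);
# the coefficient and salinities below are illustrative assumptions.

def freezing_point(salinity):
    """Approximate freezing temperature (°C) of water with salinity in g/kg."""
    return -0.054 * salinity

meltwater_salinity = 1.0   # low-salinity under-ice meltwater (illustrative)
seawater_salinity = 32.0   # typical Arctic surface seawater (illustrative)

t_melt = freezing_point(meltwater_salinity)
t_sea = freezing_point(seawater_salinity)

# Seawater near its own freezing point is colder than the freezing point of
# the fresher meltwater above it, so ice can nucleate at the interface.
print(f"meltwater freezes at {t_melt:.3f} °C")
print(f"seawater freezes at {t_sea:.3f} °C")
assert t_sea < t_melt
```

The substantial gap between the two freezing temperatures is what allows ice crystals to grow at the interface even while both layers remain liquid within themselves.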
Under-ice meltwater layer
The false bottom formation is directly linked to the appearance of under-ice meltwater layers. The appearance of such meltwater layers often happens after surface melt pond drainage during the melt season. The depth of under-ice meltwater layers is usually limited by the draft of thicker and usually deformed ice, surrounding thinner ice with under-ice meltwater. The salinity of under-ice meltwater depends on the sources of meltwater including snow and ice, on the desalination of the ice above under-ice meltwater layers, and on the presence of false bottoms. During the MOSAiC expedition in Fram Strait, the average thickness of meltwater layers was 0.46 m under first-year ice and 0.26 m under second-year ice. The thickness of meltwater layers under multiyear ice during the SHEBA expedition in the Beaufort Sea was 0.35–0.47 m. Observations for fast multiyear ice in the Wandel Sea in North Greenland showed under-ice meltwater layers with 1.1–1.2 m thickness, later transformed into thick platelet ice layer with 0.01 m thick false bottoms under it.
Observation techniques
False bottoms may create errors in estimates of sea ice thickness from its draft measurements. They can be investigated manually using ice coring and drilling, hotwire thickness gauges or remotely using underwater sonars. Ground-based upward-looking sonar cannot distinguish "normal" or parental sea ice from false bottoms. Similarly, drifting buoys measuring sea-ice temperature (ice mass balance buoys) cannot accurately detect false bottoms but can identify thicker under-ice meltwater layers.
References
Sea ice
Cryosphere | False bottom (sea ice) | [
"Physics",
"Environmental_science"
] | 885 | [
"Physical phenomena",
"Earth phenomena",
"Hydrology",
"Sea ice",
"Cryosphere"
] |
73,492,903 | https://en.wikipedia.org/wiki/Persistent%20Betti%20number | In persistent homology, a persistent Betti number is a multiscale analog of a Betti number that tracks the number of topological features that persist over multiple scale parameters in a filtration. Whereas the classical Betti number equals the rank of the homology group, the persistent Betti number is the rank of the persistent homology group. The concept of a persistent Betti number was introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in the 2002 paper Topological Persistence and Simplification, one of the seminal papers in the field of persistent homology and topological data analysis. Applications of the persistent Betti number appear in a variety of fields including data analysis, machine learning, and physics.
Definition
Let K be a simplicial complex, and let f : K → ℝ be a monotonic, i.e., non-decreasing function. Requiring monotonicity guarantees that the sublevel set K(a) = f⁻¹((−∞, a]) is a subcomplex of K for all a ∈ ℝ. Letting the parameter a vary, we can arrange these subcomplexes into a nested sequence ∅ = K_0 ⊆ K_1 ⊆ ... ⊆ K_n = K for some natural number n. This sequence defines a filtration on the complex K.
Persistent homology concerns itself with the evolution of topological features across a filtration. To that end, by taking the homology group of every complex in the filtration we obtain a sequence of homology groups that are connected by homomorphisms induced by the inclusion maps in the filtration. When applying homology over a field, we get a sequence of vector spaces and linear maps commonly known as a persistence module.
In order to track the evolution of homological features as opposed to the static topological information at each individual index, one needs to count only the number of nontrivial homology classes that persist in the filtration, i.e., that remain nontrivial across multiple scale parameters.
For each i ≤ j, let f_p^{i,j} : H_p(K_i) → H_p(K_j) denote the induced homomorphism. Then the persistent homology groups are defined to be the images of each induced map. Namely, H_p^{i,j} = im f_p^{i,j} for all 1 ≤ i ≤ j ≤ n.
In parallel to the classical Betti number, the persistent Betti numbers are precisely the ranks of the persistent homology groups, given by the definition β_p^{i,j} = rank H_p^{i,j}.
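As an illustration of these definitions in dimension zero, where the persistent Betti number counts connected components that survive across scales, the following sketch computes the 0-dimensional intervals of a small made-up filtration with a union-find; the filtration values and the `zeroth_barcode` helper are illustrative, not from the source.

```python
# Minimal sketch: 0-dimensional persistent homology of a tiny filtration
# via union-find with the Elder Rule (the older component absorbs the younger).
# β₀ between two parameters equals the number of intervals spanning both.

def zeroth_barcode(vertices, edges):
    """vertices: {name: birth value}; edges: iterable of (u, v, value).
    Returns (birth, death) intervals; death is None if infinite."""
    parent = {v: v for v in vertices}
    birth = dict(vertices)              # birth value of each component's root

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    intervals = []
    for u, v, value in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                     # edge creates a 1-cycle, not a merge
        # Elder Rule: the younger component (larger birth value) dies here.
        elder, younger = (ru, rv) if birth[ru] <= birth[rv] else (rv, ru)
        intervals.append((birth[younger], value))
        parent[younger] = elder
    # Surviving components persist forever.
    roots = {find(v) for v in vertices}
    intervals.extend((birth[r], None) for r in roots)
    return sorted(intervals, key=lambda i: i[0])

# Three vertices appearing at values 0, 1, 2; edges merge them later.
bars = zeroth_barcode({"a": 0, "b": 1, "c": 2},
                      [("a", "b", 3), ("b", "c", 4)])
print(bars)   # [(0, None), (1, 3), (2, 4)]
```

Counting the intervals that contain both endpoints of a parameter range recovers the persistent Betti number for that range: between values 1 and 2, for example, two intervals are alive.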
References
Computational topology
Data analysis
Homology theory
Algebraic topology | Persistent Betti number | [
"Mathematics"
] | 450 | [
"Computational topology",
"Computational mathematics",
"Algebraic topology",
"Fields of abstract algebra",
"Topology"
] |
73,493,450 | https://en.wikipedia.org/wiki/Active%20circulator | In electrical engineering, an active circulator is an active non-reciprocal three-port device that couples a microwave or radio-frequency signal only to an adjacent port in the direction of circulation. Other (external) circuitry connects to the circulator ports via transmission lines. An ideal three-port active circulator has the following scattering matrix:
An active circulator can be constructed using one of several different technologies. One early technology is the use of transistors as the active devices to perform the non-reciprocal function. Varactor circuits are another technology, relying on a time-varying transmission line structure, driven by a separate pump signal. A third technology utilizes spatiotemporally-modulated rings of coupled resonators. Another design approach relies on staggered commutation and integrated circuit techniques.
Compared to passive (ferrite) circulators, active circulators have the advantages of small size, low mass, and simple integration with other circuitry. System designers must weigh these factors with the disadvantages of active circulators: they require DC power and sometimes a separate pump or clock signal, they can be nonlinear, and can introduce significant noise into the signal path.
References
Electrical components | Active circulator | [
"Technology",
"Engineering"
] | 252 | [
"Electrical engineering",
"Electrical components",
"Components"
] |
73,493,574 | https://en.wikipedia.org/wiki/OS/4 | OS/4 is a discontinued operating system, introduced in 1972, from UNIVAC for their 9400, 9480, and 9700 computer systems. It is an enhanced version of UNIVAC's 9400 Disc Operating System. OS/4 is a disc-resident system requiring 64 KB of main memory, two disc drives, a punched-card reader and a printer. The resident memory footprint is approximately 24 KB.
UNIVAC intended to replace OS/4 with a new system known as OS/7; however, OS/7 development was discontinued in 1975 when the 9700 was made part of the new UNIVAC Series 90 line as the 90/70.
References
Discontinued operating systems
UNIVAC mainframe computers | OS/4 | [
"Technology"
] | 149 | [
"Computing stubs"
] |
73,493,845 | https://en.wikipedia.org/wiki/Persistent%20homology%20group | In persistent homology, a persistent homology group is a multiscale analog of a homology group that captures information about the evolution of topological features across a filtration of spaces. While the ordinary homology group represents nontrivial homology classes of an individual topological space, the persistent homology group tracks only those classes that remain nontrivial across multiple parameters in the underlying filtration. Analogous to the ordinary Betti number, the ranks of the persistent homology groups are known as the persistent Betti numbers. Persistent homology groups were first introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in a 2002 paper Topological Persistence and Simplification, one of the foundational papers in the fields of persistent homology and topological data analysis, based largely on the persistence barcodes and the persistence algorithm, that were first described by Serguei Barannikov in the 1994 paper. Since then, the study of persistent homology groups has led to applications in data science, machine learning, materials science, biology, and economics.
Definition
Let K be a simplicial complex, and let f : K → ℝ be a real-valued monotonic function. Then for some values a_1 < a_2 < ... < a_n the sublevel-sets K_i = f⁻¹((−∞, a_i]) yield a sequence of nested subcomplexes K_1 ⊆ K_2 ⊆ ... ⊆ K_n = K known as a filtration of K.
Applying homology to each complex yields a sequence of homology groups connected by homomorphisms induced by the inclusion maps of the underlying filtration. When homology is taken over a field, we get a sequence of vector spaces and linear maps known as a persistence module.
Let f_p^{i,j} : H_p(K_i) → H_p(K_j) be the homomorphism induced by the inclusion K_i ↪ K_j. Then the persistent homology groups are defined as the images H_p^{i,j} = im f_p^{i,j} for all 1 ≤ i ≤ j ≤ n. In particular, the persistent homology group H_p^{i,i} = H_p(K_i).
More precisely, the persistent homology group can be defined as H_p^{i,j} = Z_p(K_i) / (B_p(K_j) ∩ Z_p(K_i)), where Z_p and B_p are the standard p-cycle and p-boundary groups, respectively.
Birth and death of homology classes
Sometimes the elements of H_p^{i,j} are described as the homology classes that are "born" at or before K_i and that have not yet "died" entering K_j. These notions can be made precise as follows. A homology class γ ∈ H_p(K_i) is said to be born at K_i if it is not contained in the image of the previous persistent homology group, i.e., γ ∉ H_p^{i−1,i}. Conversely, γ is said to die entering K_j if γ is subsumed by (i.e., merges with) another older class as the sequence proceeds from K_{j−1} to K_j. That is to say, f_p^{i,j−1}(γ) ∉ H_p^{i−1,j−1} but f_p^{i,j}(γ) ∈ H_p^{i−1,j}. The determination that an older class persists if it merges with a younger class, instead of the other way around, is sometimes known as the Elder Rule.
The indices i and j at which a homology class γ is born and dies entering are known as the birth and death indices of γ. The difference j − i is known as the index persistence of γ, while the corresponding difference a_j − a_i in function values is known as the persistence of γ. If there exists no index at which γ dies, it is assigned an infinite death index. Thus, the persistence of each class can be represented as an interval in the extended real line of either the form [a_i, a_j) or [a_i, ∞). Since, over an infinite field, infinitely many classes can share the same persistence, the collection of such intervals over all classes does not by itself give meaningful multiplicities for a multiset of intervals. Instead, such multiplicities and a multiset of intervals in the extended real line are given by the structure theorem of persistent homology. This multiset is known as the persistence barcode.
Canonical form
Concretely, the structure theorem states that for any filtered complex over a field F, there exists a linear transformation that preserves the filtration and converts the filtered complex into so-called canonical form, a canonically defined direct sum of filtered complexes of two types: two-dimensional complexes with trivial homology, d(e_{a_j}) = e_{a_i}, and one-dimensional complexes with trivial differential, d(e_{a_i}) = 0.
Persistence diagram
Geometrically, a barcode can be plotted as a multiset of points (with possibly infinite coordinates) in the extended plane (ℝ ∪ {±∞})². By the above definitions, each point will lie above the diagonal, and the distance from a point to the diagonal is exactly equal to the persistence of the corresponding class times 1/√2. This construction is known as the persistence diagram, and it provides a way of visualizing the structure of the persistence of homology classes in the sequence of persistent homology groups.
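The relation between a diagram point's persistence and its distance to the diagonal can be checked with a short sketch; the point (1, 3) is an arbitrary illustrative example.

```python
# Sketch: persistence of a diagram point (birth, death) versus its
# perpendicular Euclidean distance to the diagonal y = x.
import math

def persistence(birth, death):
    """Lifetime of the class represented by the point (birth, death)."""
    return death - birth

def distance_to_diagonal(birth, death):
    """Perpendicular distance from (birth, death) to the line y = x."""
    return abs(death - birth) / math.sqrt(2)

b, d = 1.0, 3.0
print(persistence(b, d))            # 2.0
print(distance_to_diagonal(b, d))   # 2.0 / sqrt(2) ≈ 1.414
```

Long-lived (high-persistence) features thus sit far from the diagonal, while points hugging the diagonal are often treated as topological noise.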
References
Computational topology
Data analysis
Homology theory
Algebraic topology | Persistent homology group | [
"Mathematics"
] | 884 | [
"Computational topology",
"Computational mathematics",
"Algebraic topology",
"Fields of abstract algebra",
"Topology"
] |
73,494,538 | https://en.wikipedia.org/wiki/Clusteroid | A clusteroid is a method of producing 3D cell cultures that was first developed in 2019. Clusteroids are grown as not true spheroids but as dense clusters of cells in an aqueous two-phase system of water-in-water Pickering emulsion. The cells are incapsulated by mixing two aqueous solutions containing the incompatible polymers: Polyethylene oxide (PEO) solution as a continuous phase and dextran solution (DEX) as a dispersed phase, using whey protein as a stabiliser. Clusteoids as an in vitro model are more accurate to the complexities of in vivo, and aren't as susceptible to some of the problems 2D cultures present, for example; A large problem in culturing cells as a 2D monolayer is confluence as most cell lines used in research tend to decline in growth and health above 80% due to competition between cells for nutrients and oxygen in their growth media. A unique problem in non-vascularised clusteroids is necrotic core formation; as nutrients and oxygen cannot diffuse into the centre of the clusteroid without other cells taking it up, the cells within become starved and subsequently die. This necrotic core formation is similar to that of poorly-vascularised solid tumours.
Co-culture clusteroids are clusteroids that are composed of multiple different cell lines or types.
Method of Creation
The DEX phase containing the cells and the PEO phase containing whey protein particles are emulsified, encapsulating the cells in a DEX-PEO water-in-water emulsion template stabilised by whey protein particles. Additional PEO is then added to efflux the water from the DEX phase, causing osmotic shrinking of the cell-rich dextran droplets. The interfacial tension between the two phases packs the cells into tissue clusteroids. The emulsion is then broken by dilution with culture media, and the gelling agent sodium alginate and a calcium chloride solution are added to form a hydrogel. The clusteroids are then incubated in the gel for up to 7 days to form tissues or organoids. Finally, the gel is reverted to liquid form by alginate lyase.
Use in Research
Co-culture clusteroids have been used in research into angiogenesis; vascularised clusteroids do not develop a necrotic core because the vessels created allow perfusion of nutrients and oxygen into the centre of the clusteroid, preventing cell death. Angiogenesis is initiated by pro-angiogenic growth factors such as vascular endothelial growth factor (VEGF), which is either exogenous or secreted by cells within the clusteroid, depending on the cells cultured. Clusteroids demonstrate tube and vessel formation in vitro. Use of co-culture clusteroids, as opposed to culturing the cell lines separately, leads to increased expression of angiogenesis-related genes, demonstrating an impact of heterogeneity on cancer.
References
Cell culture techniques
2019 beginnings | Clusteroid | [
"Chemistry",
"Biology"
] | 607 | [
"Biochemistry methods",
"Cell culture techniques"
] |
73,497,059 | https://en.wikipedia.org/wiki/Louise%20Willingale | Louise Willingale is a laser physicist at the University of Michigan and associate director of the National Science Foundation (NSF) ZEUS facility.
Education
Willingale completed her undergraduate Physics degree (MSci) from Imperial College London in 2003 and stayed on to complete her PhD in 2007 with her thesis titled Ion acceleration from high intensity laser plasma interactions: Measurements and applications. She was then a research assistant before moving to the University of Michigan to carry out postdoc studies.
Career
Willingale is interested in experiments and numerical modeling of high intensity laser plasma interactions and laser-driven ion acceleration. She has made use of advancements in laser technology, mainly chirped pulse amplification which was developed by Gérard Mourou who shared the 2018 Nobel Prize in Physics.
Willingale has been successful at winning a range of funding as principal investigator and is a member of the Institute of Physics, American Physical Society, and IEEE.
In 2016–17 Willingale was a senior lecturer at Lancaster University, before returning to the University of Michigan.
As of 2022, she is Associate Professor at the University of Michigan in the Department of Electrical Engineering and Computer Science and associate director and co-principal investigator of the NSF Zetawatt-Equivalent Ultrashort pulse laser System (ZEUS) facility, which will be the highest peak power laser in the US and one of the most powerful in the world. ZEUS is designed to have a maximum peak power of 3 petawatts but can simulate much higher powers by firing it at a high-energy electron beam travelling in the opposite direction. In 2022 she also became a Fellow of the American Physical Society.
Awards and honours
2008 – Institute of Physics Culham Thesis Prize
2008 – European Physical Society Plasma Physics Division PhD Research Award
2018 – National Science Foundation CAREER award
2022 – Fellow of the American Physical Society for "significant contributions to the experimental understanding of ion acceleration, electron acceleration and magnetic field dynamics resulting from relativistic laser plasma interactions."
2022 – National Academy of Sciences Kavli Fellow
2023 – University of Michigan EECS Outstanding Achievement Award
Selected publications
References
External links
Louise Willingale – Home of Louise Willingale
Living people
Fellows of the American Physical Society
Alumni of Imperial College London
University of Michigan faculty
Laser researchers
Plasma physicists
Women physicists
Year of birth missing (living people) | Louise Willingale | [
"Physics"
] | 466 | [
"Plasma physicists",
"Plasma physics"
] |
73,498,718 | https://en.wikipedia.org/wiki/Dichromatic%20symmetry | Dichromatic symmetry, also referred to as antisymmetry, black-and-white symmetry, magnetic symmetry, counterchange symmetry or dichroic symmetry, is a symmetry operation which reverses an object to its opposite. A more precise definition is "operations of antisymmetry transform objects possessing two possible values of a given property from one value to the other." Dichromatic symmetry refers specifically to two-coloured symmetry; this can be extended to three or more colours in which case it is termed polychromatic symmetry. A general term for dichromatic and polychromatic symmetry is simply colour symmetry. Dichromatic symmetry is used to describe magnetic crystals and in other areas of physics, such as time reversal, which require two-valued symmetry operations.
Examples
A simple example is to take a white object, such as a triangle, and apply a colour change resulting in a black triangle. Applying the colour change once more yields the original white triangle.
The colour change, here termed an anti-identity operation (1'), yields the identity operation (1) if performed twice.
Another example is to construct an anti-mirror reflection (m') from a mirror reflection (m) and an anti-identity operation (1') executed in either order.
The m' operation can then be used to construct the antisymmetry point group 3m' of a dichromatic triangle.
There are no mirror reflection (m) operations for the dichromatic triangle, as there would be if all the smaller component triangles were coloured white. However, by introducing the anti-mirror reflection (m') operation the full dihedral D3 symmetry is restored. The six operations making up the dichromatic D3 (3m') point group are:
identity (1)
rotation by 120° (3)
rotation by 240° (3²)
anti-mirror reflection (m')
combination of the anti-mirror reflection (m') with the rotation by 120°
combination of the anti-mirror reflection (m') with the rotation by 240°.
Note that the vertex numbers do not form part of the triangle being operated on - they are shown to keep track of where the vertices end up after each operation.
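The six operations can be modelled as a small group in code. The sketch below pairs each dihedral element with a colour-flip flag (the tuple encoding is an illustrative choice, not notation from the source) and verifies that m′ · m′ = 1 and that the set is closed under composition.

```python
# Sketch: the six operations of the dichromatic point group 3m' modelled as
# (rotation k, reflected?, colour flip?) triples, with a colour flip attached
# to every reflection. Composition follows standard dihedral-group rules.

def compose(op1, op2):
    """Apply op2 first, then op1. An op is (k, reflected, colour_flip),
    acting on positions as x -> k + (-1)^reflected * x (mod 3)."""
    k1, r1, c1 = op1
    k2, r2, c2 = op2
    # A reflection in op1 reverses the rotation contributed by op2.
    k = (k1 + (-k2 if r1 else k2)) % 3
    return (k, r1 != r2, c1 != c2)

identity = (0, False, False)
rot120 = (1, False, False)
rot240 = (2, False, False)
anti_mirrors = [(k, True, True) for k in range(3)]   # the three m' operations

group = [identity, rot120, rot240] + anti_mirrors

# Every anti-mirror reflection is its own inverse: m' . m' = 1
for m in anti_mirrors:
    assert compose(m, m) == identity
# Closure: composing any two operations stays inside the group.
assert all(compose(a, b) in group for a in group for b in group)
print("3m' closes with", len(group), "operations")   # 6 operations
```

Because every reflection carries the colour flip, composing two anti-reflections cancels both the reflection and the colour change, landing back in the rotation subgroup, exactly as in the dichromatic triangle example.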
History
In 1930 Heinrich Heesch was the first person to formally postulate an antisymmetry operation in the context of examining the 3D space groups in 4D. Heesch's work was influenced by Weber's 1929 paper on black-and-white colouring of 2D bands.
In 1935–1936 H.J. Woods published a series of four papers with the title The geometrical basis of pattern design. The last of these was devoted to counterchange symmetry, and in it the 46 dichromatic 2D point groups were derived for the first time.
The work of Heesch and Woods was not influential at the time, and the subject of dichromatic symmetry did not start to become important until the publication of A.V. Shubnikov's book Symmetry and antisymmetry of finite figures in 1951. Thereafter the subject developed rapidly, initially in Russia but subsequently in many other countries, because of its importance in magnetic structures and other physical fields.
1951 Landau and Lifshitz reinterpret black and white colours to correspond to time reversal symmetry
1953 Zamorzaev derives the 1651 3D antisymmetric space groups for the first time
1956 Tavger and Zaitsev use the concept of vector reversal of magnetic moments to derive point groups for magnetic crystals
1957 Belov and his colleagues independently derive the 2D and 3D antisymmetric groups
1957 Zamorzaev and Sokolov begin the generalization of antisymmetry by introducing the concept of more than one kind of two-valued antisymmetry operation
1957 Mackay publishes the first review of the Russian work in English. Subsequent reviews were published by Holser (1961), Koptsik (1968), Schwarzenberger (1984), in Grünbaum and Shephard's Tilings and patterns (1987), and Brückler and Stilinović (2024)
Late 1950s M.C. Escher's artworks based on dichromatic and polychromatic patterns popularise colour symmetry amongst scientists
1961 Clear definition by van der Waerden and Burckhardt of colour symmetry in terms of group theory, regardless of the number of colours or dimensions involved
1964 First publication of Shubnikov and Belov's Colored Symmetry in English translation
1965 Wladyslaw Opechowski and Rosalia Guccione provide a complete derivation and enumeration of the dichromatic 3D space groups
1966 Publication by Koptsik of the complete atlas of dichromatic 3D space groups (in Russian)
1971 Derivation by Loeb of 2D colour symmetry configurations using rotocenters
1974 Publication of Symmetry in Science and Art by Shubnikov and Koptsik with extensive coverage of dichromatic symmetry in 1D, 2D and 3D
1988 Washburn and Crowe apply colour symmetry analysis to cultural patterns and objects
2008 Conway, Burgiel and Goodman-Strauss publish The Symmetries of Things which describes the colour-preserving symmetries of coloured objects using a new notation based on Orbifolds
Dimensional counts
The table below gives the number of ordinary and dichromatic groups by dimension. The Bohm symbol G_{ol}^k is used to denote the number of groups, where o = overall dimension, l = lattice dimension and k = number of antisymmetry operation types; k = 1 for dichromatic groups with a single antisymmetry operation e'.
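The group-theoretic definition of dichromatic symmetry lends itself to machine enumeration: the non-trivial black-and-white groups derived from an ordinary symmetry group correspond to its subgroups of index 2 (the elements of the subgroup preserve colour, the rest swap it). A minimal brute-force sketch, using the dihedral group D4 as an example (the (rotation, reflection) encoding is invented here for illustration, not the notation used in the literature):

```python
from itertools import combinations

# Dihedral group D4: element (a, b) represents r^a f^b
# (r = quarter-turn rotation, f = reflection).
ELEMENTS = [(a, b) for a in range(4) for b in range(2)]

def mul(x, y):
    # Dihedral relation: (r^a f^b)(r^c f^d) = r^(a + (-1)^b c) f^(b + d)
    (a, b), (c, d) = x, y
    return ((a + (c if b == 0 else -c)) % 4, (b + d) % 2)

def is_subgroup(subset):
    """A finite subset is a subgroup iff it contains the identity
    and is closed under the group operation."""
    s = set(subset)
    return (0, 0) in s and all(mul(x, y) in s for x in s for y in s)

# Index-2 subgroups of D4 (order 8) are the subgroups of order 4.
index2 = [s for s in combinations(ELEMENTS, 4) if is_subgroup(s)]
print(len(index2))  # 3: the cyclic C4 and two Klein four-groups
```

Each of the three index-2 subgroups yields one distinct dichromatic group built on D4, illustrating on a tiny scale the kind of counting behind the tabulated group numbers.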
References
External links
Crowe, D.W. (1986). The mosaic patterns of H.J. Woods, Comput. Math. Applic., 12B(1/2), 407-411
Schattschneider, D. (1986). In black and white: how to create perfectly colored symmetric patterns, Comput. Math. Applic., 12B(1/2), 673-695,
Senechal, M. (1988). Color symmetry, Comput. Math. Applic., 16(5-8), 545-553,
Radovic, L. and Jablan, S. (2001). Antisymmetry and modularity in ornamental art, Proceedings of Bridges: Mathematical Connections in Art, Music, and Science, Kansas, 55–66
Symmetry | Dichromatic symmetry | [
"Physics",
"Mathematics"
] | 1,270 | [
"Geometry",
"Symmetry"
] |
73,499,563 | https://en.wikipedia.org/wiki/Alina%20Chertock | Alina Chertock is a mathematician whose research involves numerical methods for partial differential equations, especially those modeling fluid dynamics, gas dynamics, and chemotaxis. Educated in the Soviet Union and Israel, she works in the US as LeRoy B. Martin, Jr. Distinguished Professor of Mathematics and head of the Department of Mathematics at North Carolina State University.
Education and career
Chertock earned a master's degree in applied mathematics from Moscow State University in 1989, and a Ph.D. in applied mathematics from Tel Aviv University in 1999. Her doctoral dissertation, Strict stability of high-order compact implicit finite-difference schemes: the role of boundary conditions for hyperbolic PDEs, was supervised by Saul Abarbanel.
After postdoctoral research at the University of California, Berkeley and Lawrence Berkeley National Laboratory, she joined North Carolina State University as an assistant professor of mathematics in 2002. She was promoted to associate professor in 2007 and full professor in 2013. She has been department head there since 2015.
Recognition
Chertock was named as the LeRoy B. Martin, Jr. Distinguished Professor in 2021.
She was named as a Fellow of the Society for Industrial and Applied Mathematics (SIAM) in 2023, "for significant contributions to numerical methods for hyperbolic systems of conservation laws and important service to the applied mathematics community".
References
External links
Home page
Year of birth missing (living people)
Living people
Soviet emigrants to the United States
Applied mathematicians
Women mathematicians
Moscow State University alumni
Tel Aviv University alumni
North Carolina State University faculty
Fellows of the Society for Industrial and Applied Mathematics | Alina Chertock | [
"Mathematics"
] | 311 | [
"Applied mathematics",
"Applied mathematicians"
] |
73,500,095 | https://en.wikipedia.org/wiki/Carving%20width | In graph theory, the carving width of a graph is a number, defined from the graph, that describes the number of edges separating the clusters in a hierarchical clustering of the graph vertices.
Definition and examples
The carving width is defined in terms of hierarchical clusterings of the vertices of a given graph, called "carvings". A carving can be described as an unrooted binary tree whose leaves are labeled with the vertices of the given graph. Removing any edge from this tree partitions the tree into two subtrees, and correspondingly partitions the vertices of the tree into two clusters. The vertex clusters, formed in this way, constitute a laminar set family: any two vertex clusters (not just the two complementary clusters formed by removing the same edge) are either disjoint, or one is contained in the other. The width of a carving, defined in this way, is the maximum number of edges that connect two complementary clusters. The carving width of the graph is the minimum width of any hierarchical clustering.
The graphs of carving width one are exactly the matchings. The graphs of carving width two are exactly those formed from disjoint unions of path graphs and cycle graphs. The graphs of carving width three are the subcubic partial 2-trees. This means that their maximum degree is three and that they are subgraphs of series-parallel graphs. All other graphs have carving width at least four.
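The definition above can be checked directly on small graphs: minimising over all carvings is equivalent to minimising over all ways of recursively bipartitioning the vertex set, where each cluster is charged the number of edges it cuts against the rest of the graph. A brute-force sketch (exponential time, illustration only; the function name is invented for this example):

```python
from functools import lru_cache
from itertools import combinations

def carving_width(vertices, edges):
    """Brute-force carving width: minimise, over all recursive
    bipartitions of the vertex set, the largest cut any cluster
    makes against the rest of the graph. Tiny graphs only."""
    all_v = frozenset(vertices)

    def cut(cluster):
        # Edges with exactly one endpoint inside the cluster.
        return sum(1 for u, v in edges if (u in cluster) != (v in cluster))

    @lru_cache(maxsize=None)
    def best(cluster):
        if len(cluster) == 1:
            return cut(cluster)  # a leaf edge separates a single vertex
        inner = min(
            max(best(frozenset(a)), best(cluster - frozenset(a)))
            for k in range(1, len(cluster) // 2 + 1)
            for a in combinations(sorted(cluster), k)
        )
        # The cluster itself is also separated by a tree edge.
        return max(cut(cluster), inner)

    return best(all_v)
```

Consistent with the examples above, this returns 1 for a perfect matching, 2 for a cycle, and 4 for the complete graph K4 (which is subcubic in neither degree nor treewidth terms).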
Computational complexity
Carving width is NP-hard in general, but may be computed in polynomial time in planar graphs. It may be approximated to within a constant factor of the same approximation ratio as balanced cuts, for which the current best approximation ratio is O(√(log n)). It is also fixed-parameter tractable: for any fixed k, testing whether the carving width is at most k, and if so finding a hierarchical clustering that realizes that width, can be performed in linear time. In general, computing the carving width exactly, on a multigraph with vertices and edges, may be done in time .
Related parameters
The carving width is only one of several graph width parameters that measure how tree-like a given graph is. Others include the treewidth and branchwidth. The branchwidth of a graph is defined similarly to carving width, using hierarchical clusterings, but of the edges of a graph rather than of its vertices; these are called branch-decompositions.
A carving of a graph can be converted into a branch decomposition by attaching each graph edge to one of its two endpoints, and expanding each leaf of a carving into a subtree representing its attached edges. Using this construction, it can be shown that for any graph, the carving width is greater than or equal to half the branch width, and is less than or equal to the degree times the branchwidth. Because treewidth and branchwidth are always within constant factors of each other, similar bounds can be used to relate carving width to treewidth.
Another width parameter, defined by the numbers of edges spanning cuts in a graph, is its cutwidth, defined using a linear ordering on the vertices of a graph and the system of partitions separating earlier from later vertices in this ordering. Unlike carving width, this system of partitions does not include a partition separating each vertex from the remaining vertices, so (despite using a more restricted class of families of cuts) the cutwidth can be smaller than the carving width. However, the carving width is always at most the maximum of the cutwidth and the maximum degree of a graph.
References
Graph invariants | Carving width | [
"Mathematics"
] | 730 | [
"Graph invariants",
"Mathematical relations",
"Graph theory"
] |
74,853,623 | https://en.wikipedia.org/wiki/Great%20British%20Insulation%20Scheme | The Great British Insulation Scheme (GBIS) is an initiative launched by the UK government to enhance efficient energy use in residential properties. The scheme initially consulted on by the Department for Energy Security and Net Zero labelled as ECO+, reflects the UK's efforts towards environmental sustainability and the reduction of household energy costs.
The GBIS programme sets clear goals for medium and large energy suppliers in the UK. Their mission is to help households reduce their energy bills by implementing specific insulation measures to improve the energy efficiency of British housing. This not only aids in cutting down costs but also plays a significant role in reducing the carbon footprint of homes across Great Britain.
The scheme's timeline begins on 25 July 2023 and concludes on 31 March 2026. The responsibility of overseeing its operations and ensuring compliance with its guidelines is entrusted to Ofgem, the primary energy regulatory authority in the UK.
Background
"Energy poverty occurs when household bills cost so much that, once paid, a household’s leftover income is below the official poverty line. At one point in 2022, it was estimated that rising energy prices would push over 8 million people into energy poverty. In the UK, energy poverty has particular geographies. It is intricately linked to the vulnerability of particular social groups, such as the elderly or those with disabilities or chronic health issues."
The scheme is a response to the need for energy efficiency in the UK, building on the legacy of similar initiatives like the Energy Company Obligation (ECO), Carbon Emission Reduction Target (CERT), and Community Energy Saving Programme (CESP).
It addresses several government objectives, including reducing energy demand to secure the UK’s energy independence. The residential sector accounted for around 31% of the UK's final energy consumption in 2021.
Objectives of the Scheme
The primary goal is to deliver greater energy efficiency for hundreds of thousands of households through insulation, aligning with the UK's Net Zero ambitions.
It aims to help households maintain warmer homes, reduce their energy bills, and cut carbon emissions.
Group 1 – General Group:
Must be living in a home in council tax bands A-D in England and A-E in Wales and Scotland
Property must have an Energy Performance Certificate (EPC) of D and below
If eligible, the householder may receive a single insulation measure
A contribution may be required towards the insulation
Group 2 – Low-Income Eligibility Group:
Must be receiving certain means-tested benefits
Property must have an Energy Performance Certificate (EPC) of D and below
If eligible, one may receive a single insulation measure – plus the possibility of heating controls
The scheme continues the focus of ECO4 on enhancing the energy efficiency of the least energy-efficient homes, but is predominantly a single measure scheme focusing on insulation.
Key Features
The scheme promotes various insulation measures, including:
cavity wall insulation
flat or pitched roof insulation
loft insulation
solid wall insulation (internal or external)
park home insulation
room-in-roof insulation
solid floor insulation
underfloor insulation
It will run alongside the existing Energy Company Obligation (ECO4) scheme.
The scheme utilises established ECO supply chains, ensuring that the appropriate measures are selected and correctly installed.
Implementation
Approximately 2,000 installers per year are expected to deliver the scheme in its final two years across Great Britain.
The scheme broadens the eligibility criteria from ECO4 by introducing two eligibility groups, with the low-income group being for those referred by local authorities or energy suppliers.
Funding and Budget
The Great British Insulation Scheme has been allocated a substantial budget to ensure its effective implementation and to achieve its objectives:
Overall Budget: The scheme has a total budget of £1 billion over its three-year duration, adjusted for inflation to equate to 2022 prices. This ensures that the scheme's financial resources are in line with its ambitious goals.
Annual Budgets: The budget is distributed annually over the three phases of the scheme:
Phase A: Commencing from the start of the ECO4A order on 25 July 2023 and concluding on 31 March 2024.
Phase B: Spanning from 1 April 2024 to 31 March 2025.
Phase C: Starting on 1 April 2025 and ending on 31 March 2026.
Household Contributions: In addition to the government's allocation, there's an expectation of household contributions towards the scheme. The exact contribution is a matter for the customer and installer to agree upon, linked to the specific work to be done.
Administrative Costs: Based on historical data from the Household Energy Efficiency Statistics of March 2023, administrative costs have been observed to account for approximately 5.9% of the total costs. This percentage has been applied to the £1.08 billion overall spend under the GBIS, which includes £80 million of household contributions.
Utilisation of Funds: The scheme's budget will be utilised to promote energy efficiency measures, support households in reducing their energy bills, and contribute to the UK's Net Zero ambitions. This includes the installation of insulation, renewable energy sources, and the upgrade of inefficient heating systems.
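The administrative-cost figure quoted above follows directly from the stated totals (a simple arithmetic check; the percentages and amounts are those given in the scheme documents):

```python
# Figures stated above: £1bn government budget plus £80m expected
# household contributions, with admin costs at ~5.9% of total spend.
government_budget = 1_000_000_000
household_contributions = 80_000_000
total_spend = government_budget + household_contributions  # £1.08bn
admin_costs = 0.059 * total_spend

print(f"Total spend: £{total_spend / 1e9:.2f}bn")
print(f"Admin costs: £{admin_costs / 1e6:.1f}m")  # about £63.7m
```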
Impact and Outcomes
The scheme is expected to treat 315,000 homes, with around 17% of these homes being in fuel poverty.
Households benefiting from the scheme are expected to see an average reduction in their energy bills of around £300–£400 per year.
The scheme aims to achieve non-traded greenhouse gas emissions savings of 0.65 MtCO2e (Million Tonnes of carbon dioxide equivalent).
Criticisms and Controversies
The most pronounced criticism of the Great British Insulation Scheme, ECO4, and other government insulation initiatives is the marked reduction in implementation. The number of insulated homes plummeted from assisting half a million homes in 2013 to a mere 60,000 in 2022.
The Great British Insulation Scheme has faced criticism from the Local Government Association (LGA) and local councils. The reduction in installations within the last decade has cost households £2 billion in potential savings. There are fears that 2.4 million fuel-poor homes will be unsupported by 2030, delaying insulation goals and the net zero target. The LGA recommends devolving energy schemes to councils for a targeted approach.
The National Energy Action (NEA) charity expressed concerns about the scheme's pace and targeting and emphasised the scheme's misalignment with fuel-poor households' needs. ″While in recent years the UK Government has made some steady progress to expanding state funding for increasing the energy efficiency of fuel poor households, a big funding gap still exists, and throughout the crisis there has only been one policy introduced to try and reduce energy demand (ECO+)...″
According to the March 2023 report ″Infrastructure Progress Review 2023″ by the National Infrastructure Commission, the government's energy efficiency schemes are underperforming, with installations needing a significant boost to meet the EPC C target by 2035.
Protesters from the group Insulate Britain demand that the government improve the insulation of all social housing in the UK by 2025 and retrofit all homes with improved insulation by 2030.
In its ″Cost of NOT zero in 2022″ report, the non-profit organisation Energy and Climate Intelligence Unit stated, ″The UK was on an upward trajectory with home insulation until 2013 when government support schemes were cut. Since then, insulation rates have been 90% lower. Britain has the least efficient homes in western Europe.″
Comparison with Similar Schemes
The Great British Insulation Scheme builds upon the legacy of previous initiatives:
Energy Company Obligation (ECO): In its 4th iteration, ECO4, the scheme emphasises improving the energy efficiency of the most energy-deficient homes. Yet, it distinguishes itself as a scheme that primarily targets multiple measures for low-income households.
Carbon Emission Reduction Target (CERT) was a UK government scheme that ran from 2008 to 2012. It obligated certain suppliers to make savings on carbon emissions in households, primarily through the promotion of energy efficiency improvements, such as insulation and energy-efficient lighting.
Community Energy Saving Programme (CESP) ran from 2009 to 2012 targeting specific low-income areas in the UK. It obligated larger energy suppliers, electricity generators, and electricity distributors to deliver energy-saving measures to domestic consumers in these areas.
While both CERT and CESP aimed at promoting energy efficiency and reducing carbon emissions, CESP was specifically designed to target low-income areas and address fuel poverty.
References
External links
Design of the Energy Company Obligation (ECO): 2023-2026 gov.uk
Apply for support from the Great British Insulation Scheme gov.uk
Companies you can apply for GBIS (Great Britain insulation scheme)
Climate change policy in the United Kingdom
Emissions reduction
Energy in the United Kingdom
Government programs | Great British Insulation Scheme | [
"Chemistry"
] | 1,752 | [
"Greenhouse gases",
"Emissions reduction"
] |
74,853,969 | https://en.wikipedia.org/wiki/34%20North%20118%20West | 34 North 118 West by Jeff Knowlton, Naomi Spellman, and Jeremy Hight is one of the first locative hypertexts. Published in 2003, the work connected Global Positioning System (GPS) data with a fictional narrative on an early tablet PC connected to Global Positioning devices to deliver a real-time story to a user.
Plot and structure
Setting
The work is set in a freight depot and warehouse in downtown Los Angeles. Its timeframe spans from the early 1900s, when the new technologies were the telegraph and radio, to the time of the work's creation, when the innovations are the internet and GPS. Astrid Ensslin and Alice Bell examine 34 North 118 West and explain that it unfolds in the city streets of Los Angeles. As readers follow an interactive map through the city, they access fragments of the story. Bell and Ensslin explain that the work asks "listeners to imagine fictional stories alongside their current physical location in the actual world."
Maps and GPS
Historical maps were based on Sanborn Fire Insurance maps from the historical time period. A contemporary journal, American Cultural Resources Association Newsletter (February 2004) calls this a "real-space museum" and explained that walking the actual current streets with this work allowed readers to experience the past in innovative ways.
Critical reception and literary significance
In his 2006 critical study, Hypertext 3.0, George Landow analyzed this work as a first example of "narrative archaeology" and used this to analyze the role of narrative in augmented reality. "Hight wants to use his augmented reality to create something radically different by making the augmentation occur in the same place and time as the everyday physical world."
This work was shown at the LA Freewaves Festival and the Art in Motion Festival, according to the original website.
This seminal work helped pave the way for locative fiction works and software. The work is one of the first to use GPS to serve content to readers. Users walk the city and listen to portions of the story that are delivered based on their GPS positions. NOEMA, a journal that reviews electronic work, described this work as "a marriage of high-tech and story telling that uses a GPS device, tablet PC and custom software to determine the viewers and deliver story components based on the users’ location." GPS Museum noted that this early locative work is one of the first that used walking around in a physical setting and experiencing a digital work.
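The delivery mechanism described above — playing a story fragment when the reader's GPS fix comes within range of a mapped location — can be sketched in a few lines. The coordinates, clip names, and trigger radius below are invented for illustration; the original work used custom software on a tablet PC:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical story points near 34°N 118°W: (lat, lon, audio clip).
STORY_POINTS = [
    (34.0407, -118.2468, "freight_depot.mp3"),
    (34.0430, -118.2510, "warehouse.mp3"),
]

def clips_in_range(lat, lon, radius_m=30):
    """Return the audio clips triggered by the current GPS position."""
    return [clip for plat, plon, clip in STORY_POINTS
            if haversine_m(lat, lon, plat, plon) <= radius_m]
```

Standing at the first hypothetical point, `clips_in_range(34.0407, -118.2468)` would trigger only the depot clip; the second point is several hundred metres away.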
Scott Rettberg explains that this early hyperfiction paved the way for locative works and the programming prefigured software tools to create further works that merged physical locations with digital stories. In a 2020 interview with Molly Hankwitz, Jeremy Hight explained that this technology could "let places and history speak and potentially skin the world with stories: things not possible on paper".
In an analysis of the poetic possibilities in digital media, Markku Eskelinen uses this work as an example of ergodic texts as defined by Espen Aarseth. Eskelinen goes on to note that these works require users to use their bodies in ways that interact with the text, thus demanding more of a reader than merely interpreting text and writing.
See also
Hypertext fiction
Location-based game
Locative media
Urban informatics
References
External links
The work is archived in the Electronic Literature Lab. See the original work at https://34n118w.net/
Location-based software
2000s electronic literature works
Art in California
Internet-based works
American electronic literature works | 34 North 118 West | [
"Technology"
] | 747 | [
"Multimedia",
"Internet-based works"
] |
74,854,001 | https://en.wikipedia.org/wiki/Work%20as%20play | Work as play is the concept of a qualitative change in human work activity. An idea does not have a single author, but is present in studies and culture.
Work is usually perceived as an external obligation and play as an internal compulsion. Consequently, turning work into play is seen as the solution to the alienation of labor. Nowadays, play is increasingly integrated into human labor activities. This approach is called gamification as applied to work.
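Applied to work, gamification typically overlays game mechanics — points, levels, streaks — on ordinary tasks. A toy sketch of the idea (all task names, point values, and level thresholds are invented for illustration):

```python
class GamifiedTasks:
    """Award points for completed work tasks; level up every 100 points."""

    def __init__(self):
        self.points = 0

    @property
    def level(self):
        return self.points // 100 + 1

    def complete(self, task, points):
        self.points += points
        return f"{task} done (+{points} points, level {self.level})"

g = GamifiedTasks()
g.complete("write report", 40)
g.complete("review code", 70)
print(g.level)  # 2 (110 points in total)
```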
Anarchism
American anarchist Bob Black, in his essay The Abolition of Work called for the complete abolition of labor. The method of achieving this goal is "turning work into play".
Psychology
According to Mihaly Csikszentmihalyi, a broad understanding of what constitutes a game can include work. In addition, the factors for achieving a flow state, desirable in work activity, are obviously characteristic of a game situation.
A 2019 study showed that those who viewed content creation as work had the highest levels of activity and income. At the same time, those who associated it with play earned more than those who regarded their content creation equally as play and work.
See also
Critique of work
Playbor
Post-work society
Serious game
Serious play
Work–life interface
References
Criticism of work
Motivation
Gamification | Work as play | [
"Biology"
] | 256 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
74,854,900 | https://en.wikipedia.org/wiki/List%20of%20heritage%20registers%20in%20Bosnia%20and%20Herzegovina | National Monuments of Bosnia and Herzegovina are declared and maintained through the Commission to preserve national monuments of Bosnia and Herzegovina or KONS.
State level
Commission to preserve national monuments of Bosnia and Herzegovina
Central Register of Monuments
Also, a Bosnia and Herzegovina state commission for cooperation with the UNESCO is established:
State Commission of Bosnia and Herzegovina for UNESCO
Local level
The local level includes entity registers, the Brčko District register, and cantonal and regional registers:
Institute for the Protection of Monuments of the Federation of Bosnia and Herzegovina [Zavod za zaštitu spomenika Federacija Bosne i Hercegovine]
Republic Institute for Protection of Cultural and Natural Heritage of Republic of Srpska
Institute for the Protection of Monuments District Brčko [Zavod za zaštitu spomenika District Brčko] (Služba za turizam Vlade Brčko distrikta Bosne i Hercegovine)
Cantonal Institute for the Protection of Cultural–Historical and Natural Heritage Sarajevo [Kantonalni zavod za zaštitu kulturno–historijskog i prirodnog naslijeđa Sarajevo]
Public Institution Institute for the Protection and Use of Cultural–Historical and Natural Heritage of Tuzla Canton [JU Zavod za zaštitu i korištenje kulturno–historijskog i prirodnog naslijeđa Tuzlanskog kantona]
Cantonal Institute for Urbanism, Spatial Planning and Protection of the Cultural and Historical Heritage of the Central Bosnian Canton [Kantonalni zavod za urbanizam, prostorno planiranje i zaštitu kulturno–historijskog naslijeđa Srednjobosanskog Kantona]
Institute for the Protection of Cultural and Historical Heritage of Herzegovina–Neretva Canton [Zavod za zaštitu kulturno–historijske baštine Hercegovačko–Neretvanskog Kantona]
Public Institution Institute for the Protection of Cultural Heritage Bihać – Una-Sana Canton [JU Zavod za zaštitu kulturnog naslijeđa Bihać – Unsko–Sanski Kanton]
Institute for the Protection of Cultural Heritage of the Zenica–Doboj Canton [Zavod za zaštitu kulturne baštine Zeničko–dobojskog kantona]
Public Institution Agency for cultural–historical and natural heritage and development of the tourist potential of the city of Jajce [JU Agencija za kulturno–povijesnu i prirodnu baštinu i razvoj turističkih potencijala grada Jajca]
See also
Cultural heritage
National heritage site
World Heritage Site
List of heritage registers
List of National Monuments of Bosnia and Herzegovina
List of Intangible Cultural Heritage of Bosnia and Herzegovina
List of World Heritage Sites in Bosnia and Herzegovina
List of fortifications in Bosnia and Herzegovina
List of bridges in Bosnia and Herzegovina
List of World War II monuments and memorials in Bosnia and Herzegovina
List of People's Heroes of Yugoslavia monuments in Bosnia and Herzegovina
List of museums in Bosnia and Herzegovina
References
External links
Commission to preserve national monuments
Commission to preserve national monuments (old website in use as an archive)
Bosnia and Herzegovina
Heritage registers in Bosnia and Herzegovina
| List of heritage registers in Bosnia and Herzegovina | [
"Engineering"
] | 709 | [
"Heritage listed buildings and structures by country",
"Architecture"
] |
74,855,025 | https://en.wikipedia.org/wiki/C14H17ClN2O2 | {{DISPLAYTITLE:C14H17ClN2O2}}
The molecular formula C14H17ClN2O2 may refer to:
LY305
TIK-301 | C14H17ClN2O2 | [
"Chemistry"
] | 43 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
74,855,045 | https://en.wikipedia.org/wiki/C25H34O4 | {{DISPLAYTITLE:C25H34O4}}
The molecular formula C25H34O4 may refer to:
Cannabidiol diacetate
Metynodiol diacetate | C25H34O4 | [
"Chemistry"
] | 43 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
74,858,966 | https://en.wikipedia.org/wiki/Injury%20in%20plants | Injury in plants is damage caused by other organisms or by the non-living (abiotic) environment to plants. Animals that commonly cause injury to plants include insects, mites, nematodes, and herbivorous mammals; damage may also be caused by plant pathogens including fungi, bacteria, and viruses. Abiotic factors that can damage plants include heat, freezing, flooding, lightning, ozone gas, and pollutant chemicals.
Plants respond to injury by signalling that damage has occurred, by secreting materials to seal off the damaged area, by producing antimicrobial chemicals, and in woody plants by regrowing over wounds.
Factors
Biotic
Animals that commonly cause injury to plants include pests such as insects, mites, and nematodes. These variously bite or abrade plant parts such as leaves, stems, and roots, or, as is common among the true bugs, pierce the plant's surface and suck plant juices. The resulting injuries may admit plant pathogens such as bacteria and fungi, which may extend the injury. The caterpillars of agricultural pests such as cabbage white butterflies (Pieridae) can completely defoliate Brassica crops.
Molluscs such as snails graze on plants including grasses and forbs, abrading them with their rasp-like radula; they can inflict substantial damage to crops.
Grazing mammals including livestock such as cattle, too, bite off or break parts of plants including grasses, forbs, and forest trees, causing injury, and again, potentially admitting pathogens.
Abiotic
Abiotic factors that can damage plants include heat, freezing, flooding, lightning strikes, ozone gas, and pollutant chemicals.
Heat can kill any plant, given a sufficient temperature. Alpine plants tend to die at around 47 °C; temperate plants at around 51 °C; and tropical plants at nearly 58 °C: but there is some overlap depending on species. Similarly among cereal crops, temperate barley and oat die at around 49 °C, but tropical maize at 55 °C.
Freezing affects plants variously, according to each species' ability to resist frost damage. Many forbs, including many garden flowers, are tender with little tolerance to frost, and die or are seriously damaged when frozen. Many woody plants are able to supercool, with tough buds and stems containing molecules that lower the freezing point or help to prevent the nucleation of ice crystals, and cell walls that mechanically protect cells against freezing.
Flooding of soil quickly kills or injures many plants. The leaves become yellow (chlorosis) and die, progressively up the stem, within about five days after the roots are flooded. The roots lose the ability to absorb water and nutrients.
Lightning strikes kill or injure plants, from root crops like beet and potato, which are instantly cooked in the ground, to trees such as coconut, through effects such as sudden heat and pressure shock waves created when water inside the plant flashes to steam. This can rupture stems and scorch any plant parts.
Ozone, a gas, causes injury to leaves at concentrations from as little as 0.1 part per million in the atmosphere, such as may be found in or near large cities.
It is one of many pollutant chemicals that can damage plants.
Plant responses
Plants respond to injury by signalling that damage has occurred, by secreting materials to seal off the damaged area, by producing antimicrobials to limit the spread of pathogens, and in some woody plants by regrowing over the wound.
Signalling
Plants produce chemicals at the injury site that signal the presence of damage and may help to reduce further damage. The chemicals involved depend to some extent on the plant species, though several of them are shared among species; and the signals given depend on the cause of the injury. Plants injured by spider mites release volatile chemicals that attract predatory mites, serving to reduce the attack on the plants. As another example, maize plants damaged by the caterpillars of noctuid moths release a mixture of terpenoid substances which attract the parasitoid wasp Cotesia marginiventris, which kills caterpillars. Many plants give off such herbivory-induced signals.
Wound occlusion
Plants secrete a variety of chemicals to help seal off damaged areas. For example, the grape vine Vitis vinifera is able to block the xylem water-transport tubes in its stems using the chemical tylose in summertime, and gels in wintertime when the plant is dormant. Tylose helps to prevent pathogens such as wood-rotting fungi and the bacterium Xylella fastidiosa from spreading through the plant: the chemical is produced as a response both to the bacterium and to mechanical damage such as viticultural pruning.
Chemical defence
Many woody plants produce resins and antimicrobial chemicals to limit the spread of pathogens after an injury.
Wound healing
Many woody plants regrow around injuries, such as those caused by pruning. In time, such regrowth often completely covers the damaged area as the cambium growth layer produces new tissues. Well-pruned trees with undamaged branch collars often recover well, where poorly-pruned trees rot below the wound.
See also
Injury in animals
References
Plant physiology
Herbivory
Chemical ecology | Injury in plants | [
"Chemistry",
"Biology"
] | 1,098 | [
"Plant physiology",
"Chemical ecology",
"Plants",
"Herbivory",
"Biochemistry",
"Eating behaviors"
] |
74,859,016 | https://en.wikipedia.org/wiki/Coulomb%20drag | In condensed matter physics, Coulomb drag (also called electron drag or current drag) refers a transport phenomenon between two spatially isolated electrical conductors, where passing a steady electric current through one of them induces a voltage difference in the other. It is named after the Coulomb interaction between charge carriers (usually electrons) responsible for the effect.
The effect was first predicted by Soviet physicist M. B. Pogrebinsky in 1977. The first experimental verification of the phenomenon was carried out between 1991 and 1992 in two-dimensional electron gases by the group of James P. Eisenstein, working with gallium arsenide (GaAs) double quantum wells.
In the presence of magnetic fields it leads to analogous phenomena, like the Hall drag or the magneto-Coulomb drag. When spin-polarized currents are involved, it is termed spin Coulomb drag.
Description
The phenomenon considers two spatially isolated layers; between the two layers there can be vacuum or an insulator. When a direct electric current is driven in the active layer, it drags carriers in the passive layer via the Coulomb interaction, and this charge imbalance induces a drag voltage VD in the passive layer. For ballistic conduction, the drag resistance RD is expected to be proportional to the temperature squared, RD ∝ T². In a realistic system, the temperature dependence of the resistance deviates from this regime due to the presence of phonons (at low temperatures compared to the Fermi temperature TF), plasmons (at high temperatures, of the order of TF), disorder ( behaviour) and magnetic fields.
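The quadratic temperature dependence in the ideal regime can be made concrete: on a log–log plot, RD ∝ T² is a straight line of slope 2. A short numerical sketch (the prefactor and temperatures are arbitrary illustrative values):

```python
import math

def drag_resistance(t, prefactor=1e-3):
    """Ideal ballistic-regime drag resistance, R_D = A * T^2 (arbitrary units)."""
    return prefactor * t ** 2

# The slope of log R_D versus log T recovers the power-law exponent.
t1, t2 = 2.0, 10.0
slope = (math.log(drag_resistance(t2)) - math.log(drag_resistance(t1))) / \
        (math.log(t2) - math.log(t1))
print(slope)
```

Phonon, plasmon, and disorder contributions would show up in real data as deviations of this fitted slope from 2 in the corresponding temperature ranges.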
References
Mesoscopic physics | Coulomb drag | [
"Physics",
"Materials_science"
] | 332 | [
"Quantum mechanics",
"Mesoscopic physics",
"Condensed matter physics"
] |
74,860,173 | https://en.wikipedia.org/wiki/KIAA2012 | KIAA2012 is a protein which, in humans, is encoded by the KIAA2012 gene. KIAA2012 is expressed at very low levels throughout the body, but it is primarily expressed in the ovary, lungs, and brain.
Gene
KIAA2012 is located on the positive sense strand at position 2q33.1. KIAA2012 has 24 exons, and it spans 131,934 bases including introns. No aliases or common names are used in addition to KIAA2012.
Gene level regulation
Within the promoter region of KIAA2012, there is a highly conserved transcription factor binding site that has no common SNPs. The RFX transcription factors, more specifically RFX1-6, bind to this highly conserved region and regulate cellular specialization and differentiation. The image below shows the promoter region of KIAA2012 with the highly conserved RFX1-6 binding site.
mRNA
KIAA2012 is expressed differentially in the body at low levels. Of this overall low expression, KIAA2012 is expressed most highly in the brain, lungs, and ovary. KIAA2012 is expressed at lower levels in the liver, trachea, and testes.
Protein
Unmodified KIAA2012 is 1,181 amino acids in length, has a molecular weight of 136 kDa, and has an isoelectric point around pH 8.
Internal features
KIAA2012 is rich in glutamic acid and glutamine, and it is poor in valine. There is also one mixed-charge cluster between amino acids 951–1118. There is one Domain of Unknown Function (DUF 4670) within KIAA2012, spanning from amino acid 635 to amino acid 1137. Unlike KIAA2012 as a whole, DUF 4670 is also rich in arginine and poor in glycine and phenylalanine.
Structure
The secondary structure of KIAA2012 consists primarily of alpha helices. On the left, a high confidence prediction of the secondary structure is shown. On the right, the entire 3-D structure is shown, showing how the alpha helices fold to form the entire KIAA2012 protein.
Post-translational modification
KIAA2012 has a highly conserved cGMP-dependent protein kinase binding domain. These cGMP-dependent protein kinases (PRKG) are a part of the NO/cGMP signaling pathway, and they are important factors in many signal transduction processes. Additionally, there are many potential sites for phosphorylation, SUMOylation, and myristoylation. In instances where KIAA2012 is post-translationally modified in these ways, the resulting charge, structure, function, and sub-cellular localization can be altered.
Sub-cellular Localization
Proteins tagged with localization signals will be transported to various regions of the cell. KIAA2012 contains nuclear localization signal sequences, which are short stretches of amino acids that moderate transportation of nuclear proteins to the nucleus. Shown in the table below, human KIAA2012 and two orthologs are listed with confidence values of where in the cell KIAA2012 is localized.
Function
KIAA2012 has predicted protein interactions with STAG2 and SMC1A. STAG2 encodes a subunit of the cohesin complex, which regulates sister chromatid separation during cell division. SMC1A is an important part of functional kinetochores due to its role in the multiprotein cohesin complex required for sister chromatid cohesion. Because KIAA2012 is localized in the nucleus and interacts with STAG2 and SMC1A, its role as a protein likely involves DNA organization or cell division.
Homology and evolution
Twenty organisms with a KIAA2012 ortholog are shown below, and they are sorted by date of divergence and sequence identity. There were no orthologs found in birds, but ortholog versions of KIAA2012 exist in mammals, reptiles, amphibians, and fish. An unrooted phylogenetic tree showing each taxonomic group and their divergence patterns can be found below the ortholog table.
Clinical significance
There are several genome-wide association studies that report traits associated with variations in KIAA2012. The reported traits with the highest number of associations are heel bone mineral density, taste liking measurement, educational attainment, lung function, and height. Additionally, KIAA2012 is downregulated in women with polycystic ovary syndrome (PCOS) compared to women without PCOS.
References
Proteins | KIAA2012 | [
"Chemistry"
] | 949 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
74,861,325 | https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Panczakiewicz | Stanisław Panczakiewicz was a pioneering Polish car body designer and engineer.
Career
Panczakiewicz attended Staszic junior high school in Warsaw. After the outbreak of World War I, he interrupted his studies in 1916 to join the Polish Legions in the Austro-Hungarian Empire. He served in the 5th Infantry Regiment of the 3rd Brigade of the Polish Legions. In 1917, due to the Oath crisis, he was interned together with his regiment in Zegrze near Warsaw. Thanks to help from his family, he regained his freedom, but on condition that he joined the Central Committee of the Army as a one-year volunteer. Since his father was from the Austrian partition, Stanisław was granted Austro-Hungarian citizenship. In 1918 Panczakiewicz was sent to the infantry officer school in Opava. Before that, he filled the gap in his education by obtaining a secondary school leaving certificate in Kraków. He left officer school with the rank of ensign. After Poland declared independence, Stanisław joined the 5th Zaslaw Uhlan Regiment, with which he took part in the relief of Lviv and the fights against the Ukrainians near Kovel. During the Polish–Soviet War, Panczakiewicz was already a cadet officer and deputy commander of a motor column at the disposal of the 5th Army under General Władysław Sikorski.
After the end of the war in 1922, he worked briefly as a draftsman in his father's architectural studio, but soon he went to study in Paris, where in 1926 he graduated from the Higher School of Aviation and Mechanical Structures (Ecole Supérieure d'Aéronautique et de Constructions Mécaniques) and the School of Engineering (École d'Ingénieurs Civiles) and several months of economic courses at the Higher School of Commerce (École des Hautes Études Commerciales). During his studies in 1924, he was a quality controller of the aviation equipment ordered by the army at the Polish Military Mission in Paris.
In 1927, Panczakiewicz started working at the Central Automotive Workshops (CWS), immediately as the head of the body shop, as its youngest employee. He designed the body of the first serially-built Polish passenger car, the CWS T-1. He created several body styles for the T-1, including the torpedo, carriage, berlina and faux-cabriolet body variants, as well as the development version of the CWS T-8 and the smaller T-2, as well as an ambulance, mail truck, and semi-truck based on the T-1 and the T-8.
In the years 1932–1933 he traveled around Western European countries, where he became acquainted with advancements in the field of coachwork construction. From 1934, he worked at the National Engineering Institute (PZInż), where he headed the bodywork department. Before the outbreak of World War II, he designed, among others, a tourist bus body on the Polski Fiat 621R chassis, the PZInż Zawrat, a streamline body for the PZInż 403 Lux-Sport, cabs for the PZInż 342 and PZInż 343 wheeled artillery tractors and trucks, including the driver's cabin of the 3.5-ton PZInż 713 truck. He cooperated in creating the body architecture of all types of Sokół motorcycles. He also developed the body of the CWS M111/Sokół 1000 sidecar.
After the outbreak of World War II, during the Invasion of Poland, he was evacuated with the crew and resources of PZInż to the eastern areas of the country. After the end of hostilities, he returned to the capital. During the German occupation, Stanisław ran a paper warehouse, thus avoiding work in the automotive industry for the Germans. He was active in the underground and was a soldier of the Kedyw with the rank of lieutenant under the pseudonym Bończa. During the Warsaw Uprising he was cut off from his parent unit and instead smuggled weapons to a local unit and also engaged in combat in Mokotów. After the fall of the uprising, he was held in a prisoner of war camp, but managed to escape from captivity.
On January 18, 1945, Panczakiewicz returned to Warsaw and co-organized the launch of the Hipolit Wawelberg and Stanisław Rotwand School of Machine Construction and Electrical Engineering (later part of the Warsaw University of Technology), where he also inaugurated the first series of lectures. After a short period of work in state institutions, at the beginning of 1947 he took up the position of head of the bodywork department at the Central Technical Bureau of the Automotive Industry (CBTPM), later renamed Centralne Biuro Konstrukcyjne No. 5 (CBK 5) and then - Bureau of Design of the Automotive Industry (BKPMot.). Stanisław held this position until 1968.
Panczakiewicz was the co-creator of the first post-war truck, the Star 20. The team of designers of the 3.5-ton truck was composed mostly of former employees of the PZInż Study Office, who in the 1930s participated in the work on the PZInż 703, 713 and 723 series of trucks. The author of the general concept, frame and suspension design was Mieczysław Dębicki. The drive transmission was created under the supervision of Jerzy Werner and the engine was created under the supervision of Jan Werner. The dyno and road tests were organized and directed by Aleksander Rummel. Panczakiewicz designed the N20 cabin and cargo box. The team of designers received the State Science Prize in 1950 for developing the car.
In the years 1947–1948, the WSK Mielec plant produced a Leyland LOPS3/1 bus based on the Leyland Motors frame and engine according to Panczakiewicz's design. From 1950, the Sanok Wagon Factory "Sanowag" also assembled a Fiat 666RN bus based on an Italian frame and engine, with a body produced on site. It was adapted by Panczakiewicz from the original design to the factory's capabilities. At the end of 1951, a prototype of the FSC Star N50 bus to his design was built on a lowered and extended version of the Star 20 chassis. A year later, the production of the Star N52 bus began in "Sanowag", created as a result of refining the prototype version.
In 1954, Panczakiewicz joined the team led by Karol Pionnier, head of the Chief Designer Department at the Passenger Car Factory (FSO), which was to design a popular car. Early in the project that led to the FSO Syrena, two competing pre-prototypes with different styling and body structures were built. This resulted from a conflict within the team between Panczakiewicz, who had extensive pre-war experience in metal and wood structures, and a young engineer from FSO, Stanisław Łukaszewicz, versed in the design of the Warsaw car, who called for an all-metal body. The so-called Syrena II, prepared in 1954 by Łukaszewicz, used body elements from the Warsaw and was technologically developed for large-scale production. Panczakiewicz's competing car had a body based on a wooden frame, covered with fiberboard panels, and used fewer components from the Warsaw, but was stylistically better. Pionnier reconciled engineers of two generations by choosing Panczakiewicz's styling and commissioning Łukaszewicz to develop its design using more modern technology.
The designer was also the author of the styling of the K26 cab of the Star 25 truck prototype (1956), which, despite its modern styling, did not enter mass production, and of the body of the 48-seat Odra A81 bus (1957), based on the elongated and lowered frame and drive of the Żubr A80 truck, which remained a prototype. He presented drawings of two of his own proposals for modernizing car bodies, in opposition to those by the Italian company Carrozzeria Ghia. In articles in the automotive press, he expressed surprise that the task of designing the new body of the Warsaw was not entrusted to domestic designers, suggesting that he could take it on.
Relatives and death
Stanisław was the son of Ludwik Panczakiewicz (1873-1935), a Warsaw architect and construction entrepreneur. He died suddenly on July 8, 1982, in Warsaw, and was buried at the Powązki Cemetery (plot 100-6-20/21).
References
1900 births
1982 deaths
Polish automobile designers
Automotive engineers
Polish legionnaires (World War I)
People interned during World War I
People interned during World War II
Academic staff of the Warsaw University of Technology | Stanisław Panczakiewicz | [
"Engineering"
] | 1,817 | [
"Automotive engineering",
"Automotive engineers"
] |
74,861,460 | https://en.wikipedia.org/wiki/DESeq2 | DESeq2 is a software package in the field of bioinformatics and computational biology for the statistical programming language R. It is primarily employed for the analysis of high-throughput RNA sequencing (RNA-seq) data to identify differentially expressed genes between different experimental conditions. DESeq2 employs statistical methods to normalize and analyze RNA-seq data, making it a valuable tool for researchers studying gene expression patterns and regulation. It is available through the Bioconductor repository.
It was first presented in 2014. As of September 2023, the paper describing it had been cited over 30,000 times.
Features
One of the key steps in the analysis of RNA-seq data is data normalization. DESeq2 employs the "size factor" normalization method, which adjusts for differences in sequencing depth between samples. This normalization ensures that the expression values of genes are comparable across samples, allowing for accurate identification of differentially expressed genes. In addition to size factor normalization, DESeq2 also employs a variance-stabilizing transformation, which further enhances the quality of the data by stabilizing the variance across different expression levels. This combination of normalization techniques minimizes bias and improves the accuracy of differential expression analysis.
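The "size factor" normalization described above is a median-of-ratios estimator. The sketch below is a minimal Python illustration of that idea (the actual implementation lives in the R/Bioconductor package; the toy count matrix is invented for illustration):

```python
# Median-of-ratios size factors, the idea behind DESeq2's depth normalization.
# Genes with a zero count in any sample are excluded from the reference.
import numpy as np

def size_factors(counts):
    """counts: genes x samples integer matrix; returns one factor per sample."""
    log_counts = np.log(counts.astype(float))      # -inf wherever a count is 0
    log_geomean = log_counts.mean(axis=1)          # per-gene log geometric mean
    usable = np.isfinite(log_geomean)              # keep genes with no zeros
    ratios = log_counts[usable] - log_geomean[usable, None]
    return np.exp(np.median(ratios, axis=0))       # median ratio per sample

# Toy data: sample 2 was sequenced exactly twice as deeply as sample 1.
counts = np.array([[100, 200],
                   [ 50, 100],
                   [ 30,  60]])
sf = size_factors(counts)          # ~ [0.71, 1.41]
normalized = counts / sf           # depth-corrected expression values
```

After dividing by the size factors, the two samples' expression values agree, which is exactly what makes genes comparable across samples of different sequencing depth.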
DESeq2 uses negative binomial distribution models to account for the overdispersion commonly observed in RNA-seq data. This modeling approach captures variability that is not adequately explained by a simple Poisson distribution. By incorporating the negative binomial distribution, DESeq2 accurately models the dispersion of gene expression counts and provides more reliable estimates of differential expression.
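The overdispersion that motivates the negative binomial can be seen in a small simulation. The gamma-Poisson mixture below is a standard way to generate negative binomial counts with mean mu and dispersion alpha; the parameter values are arbitrary illustrations, not DESeq2 defaults:

```python
# Negative binomial counts as a gamma-Poisson mixture:
# lambda ~ Gamma(shape=1/alpha, scale=alpha*mu), count ~ Poisson(lambda)
# gives E[count] = mu and Var[count] = mu + alpha*mu**2 > mu (overdispersion).
import numpy as np

rng = np.random.default_rng(0)
mu, alpha, n = 100.0, 0.1, 200_000

lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=n)
counts = rng.poisson(lam)

emp_mean = counts.mean()   # ~ 100
emp_var = counts.var()     # ~ 1100 = mu + alpha * mu**2; Poisson would give ~100
```

The empirical variance is roughly eleven times the mean, which a Poisson model (variance equal to the mean) cannot represent; the extra alpha*mu**2 term is the dispersion DESeq2 estimates per gene.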
DESeq2 also offers an adaptive shrinkage procedure, known as the "apeglm" method, which is particularly useful when dealing with small sample sizes. This technique effectively shrinks the log-fold changes of gene expression estimates, reducing the impact of extreme values and improving the stability of results. This is especially valuable for researchers working with limited biological replicates, as it helps to mitigate the problem of low statistical power.
Further, DESeq2 allows users to incorporate relevant covariates into their analyses. This feature enables researchers to account for potential confounding factors, such as batch effects or experimental conditions, that can influence gene expression. By including covariates in the analysis, DESeq2 offers a more accurate assessment of the true differential expression patterns in the data.
Use
DESeq2 is used through R and distributed via the Bioconductor repository, which provides comprehensive documentation and tutorials, making it accessible to a wide range of researchers.
References
Applied statistical analysis
Software using the GNU Lesser General Public License
R scientific libraries
RNA sequencing
Cross-platform free software
Free software for Linux
Free software for Windows
Free software for macOS
Bioinformatics software | DESeq2 | [
"Chemistry",
"Biology"
] | 580 | [
"Genetics techniques",
"Bioinformatics software",
"Bioinformatics",
"RNA sequencing",
"Molecular biology techniques"
] |
74,862,497 | https://en.wikipedia.org/wiki/Phaeocystis%20globosa%20virus%20virophage | Phaeocystis globosa virus virophage, or PgVV, or Preplasmiviricota sp. Gezel-14T, is a polinton-like virus, which are small DNA viruses that are found integrated in protist genomes. Similar to virophages, PgVV requires a helper virus to replicate. Phaeocystis globosa virus virophage has a parasitic relationship with its helper virus species Phaeocystis globosa virus (PgV). They are a species of giant virus that infect algae of the genus Phaeocystis.
References
Viruses
Virophages
DNA viruses
Unaccepted virus taxa | Phaeocystis globosa virus virophage | [
"Biology"
] | 151 | [
"Viruses",
"Tree of life (biology)",
"Controversial taxa",
"Unaccepted virus taxa",
"Biological hypotheses",
"Microorganisms",
"DNA viruses"
] |
74,862,853 | https://en.wikipedia.org/wiki/Dammam%20No.%207 | Dammam No. 7 also known as "Prosperity Well," is an oil well located in Dammam, Saudi Arabia, notable for being the site where commercial quantities of oil were first discovered in the country on March 4, 1938. This discovery marked the beginning of Saudi Arabia's transformation into one of the world's leading oil producers. The well, which operated until its closure in 1982, is now a historic landmark and part of an Aramco museum.
Discovery and Early Development
When drilling commenced in the 1930s, the existence of oil in Saudi Arabia was uncertain. The discovery of oil in neighboring Bahrain in 1932, however, spurred Saudi Arabia to initiate its own oil exploration efforts.
The California Arabian Standard Oil Company (CASOC), later known as Saudi Aramco, which had secured a concession agreement with the Saudi government in 1933, began drilling in Dammam in the Eastern Province. Prior to this, the region consisted of small fishing villages. CASOC faced immense logistical challenges, requiring the construction of infrastructure from scratch. Despite initial setbacks with the first six wells (Dammam No. 1–6), which failed to yield significant commercial quantities of oil, drilling continued at Dammam No. 7. By November 1937, all other drilling operations in the kingdom were halted, and efforts were concentrated on this well, which had already reached twice the depth of the "Bahrain Zone," where oil had been discovered in Bahrain.
Oil discovery
In a project led by American geologist Max Steineke and assisted by Saudi Bedouin Khamis Bin Rimthan, drilling was pushed ever deeper into the well. On March 4, 1938, commercial volumes of oil began gushing out of the well at a depth of approximately . On that day, 1,585 barrels of oil were extracted from the well, and six days later this daily output had increased to 3,810 barrels. By October 1938, the Dammam field was confirmed as a viable commercial oil source. The discovery validated the persistence of chief geologist Max Steineke, who advocated drilling deeper despite earlier setbacks.
Production volume
From 1938 until its closure in 1982, the well produced more than 32 million barrels of oil with a daily average of 1,600 barrels.
Legacy and cultural impact
Crown Prince Abdullah officially named Dammam No. 7 the 'Prosperity Well' in 1999.
In 2021, Saudi Aramco built a supercomputer called Dammam 7, named after the well; it is ranked the tenth-most powerful supercomputer in the world. In August 2023, it was announced that an upcoming film titled Sands of Fortune would feature the story of Dammam No. 7 while chronicling the early history of the Saudi oil industry. In the present day, the oil well still stands and is integrated into an Aramco museum, where visitors frequently have their photographs taken in front of the historic landmark.
See also
History of the oil industry in Saudi Arabia
Further reading
"Well No. 7 Leaflet in English" published by King Saud University
References
Oil wells
Petroleum in Saudi Arabia | Dammam No. 7 | [
"Chemistry"
] | 628 | [
"Petroleum technology",
"Oil wells"
] |
74,863,825 | https://en.wikipedia.org/wiki/C19orf38 | Highly Expressed In Immature Dendritic Cell Transcript 1 (HIDE1) is a protein encoded by chromosome 19 open reading frame 38 (C19orf38) gene in humans. There are no other aliases used for the gene. C19orf38 is only expressed in white blood cells, of the innate immune system. HIDE1 protein has been found to play a role in immune escape of tumors and diet induced obesity.
Gene
Risk-associated variants
There are five risk-associated variants found within the C19orf38 gene: three lead to a significant increase in low-density lipoprotein cholesterol, one is associated with the prevalence of coronary artery disease, and the fifth is associated with increased reporting of idiopathic knee osteoarthritis.
mRNA Transcripts
Isoforms
C19orf38 can be alternatively spliced to form three distinct mRNA products. Isoforms 1 and 2 differ only in the 5' UTR. Isoform 3 would yield a different protein product, as its mRNA transcript does not contain exon 2 or exon 3; however, isoform 3 is not expressed in humans.
Tissue Localization
C19orf38 transcript is found at the highest level in bone marrow, with less than a fifth of that amount in the spleen, testis, appendix, and lymph nodes, and little to no transcript in other tissue types. Tissues with the transcript have a high leukocyte presence. It is exclusively present in the following cell types: monocytes, peripheral blood mononuclear cells, eosinophils, and basophils, so any expression in tissues comes from innate immune cells or granulocytes. The transcript is not present in neutrophils. C19orf38 transcript is also not found in macrophages, despite classical monocyte expression.
Regulation of Transcription
The promoter region of C19orf38 contains two transcription factor binding domains that are particularly important for innate immune system development: Spi-C Transcription Factor (SPIC) and E74 Like ETS Transcription Factor 3 (ELF3). Both transcription factors are present only in leukocytes and negatively regulate the transcription of genes for the development of macrophages, which coincides with the cellular localization of C19orf38.
Protein
Structure
HIDE1 is a 230-amino-acid transmembrane protein, anchored via an α-helical transmembrane region. F-box only protein 2 (FBXO2) binds in an extracellular region to glycosylated asparagine residues found at positions 48 and 97. The extracellular region also contains a highly conserved signal peptide sequence, which leads the protein to the membrane space. Additionally, the HIDE1 protein contains a disordered region in its intracellular portion. TNPO3 and XPO-4 are known to interact with HIDE1.
Sub-cellular localization
Human HIDE1 protein is largely confirmed to be a signal protein existing either embedded within the cellular membrane or in a secreted form. Deeploc signal analysis predicts a signal peptide region at the start of its translation. Furthermore, PSORT2 k-NN prediction finds the protein to be localized extracellularly 34.8% of the time, 30.4% in the plasma membrane, 21.7% in the endoplasmic reticulum, and 13.0% in the golgi bodies.
Binding motifs
HIDE1 protein contains an ig-like domain and signal peptide in its extracellular region as well as multiple lipidification sites to assist with membrane association. Additionally, N-linked glycosylation sites can be found in the luminal side. The intracellular/cytoplasmic region contains multiple phosphorylation sites and calpain cleavage locations.
Homology
Orthologs
Orthologs are found in the following taxon classes: Mammalia, Reptilia, Aves, and Amphibia. There are no orthologs found in either class Insecta or Actinopterygii. C19orf38 is only present in jawed vertebrates which coincides with the divergence of adaptive immune systems 550 MYA between jawed and jawless vertebrates.
Evolutionary rate
The C19orf38 mutation rate is found to be lower than that of fibrinogen alpha, but it is high in comparison to other human proteins, especially immune proteins, which are highly conserved in jawed vertebrates.
Clinical significance
HIDE1 shows no significant association with any cancer.
References
Proteins | C19orf38 | [
"Chemistry"
] | 932 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
62,461,655 | https://en.wikipedia.org/wiki/Polynomial%20method%20in%20combinatorics | In mathematics, the polynomial method is an algebraic approach to combinatorics problems that involves capturing some combinatorial structure using polynomials and proceeding to argue about their algebraic properties. Recently, the polynomial method has led to the development of remarkably simple solutions to several long-standing open problems. The polynomial method encompasses a wide range of specific techniques for using polynomials and ideas from areas such as algebraic geometry to solve combinatorics problems. While a few techniques that follow the framework of the polynomial method, such as Alon's Combinatorial Nullstellensatz, have been known since the 1990s, it was not until around 2010 that a broader framework for the polynomial method was developed.
Mathematical overview
Many uses of the polynomial method follow the same high-level approach. The approach is as follows:
Embed some combinatorial problem into a vector space.
Capture the hypotheses of the problem by constructing a low-degree polynomial that vanishes on a certain set.
After constructing the polynomial, argue about its algebraic properties to deduce that the original configuration must satisfy the desired properties.
Example
As an example, we outline Dvir's proof of the Finite Field Kakeya Conjecture using the polynomial method.
Finite Field Kakeya Conjecture: Let $\mathbb{F}_q$ be a finite field with $q$ elements. Let $K \subseteq \mathbb{F}_q^n$ be a Kakeya set, i.e. for each vector $y \in \mathbb{F}_q^n$ there exists $x \in \mathbb{F}_q^n$ such that $K$ contains the line $\{x + ty : t \in \mathbb{F}_q\}$. Then the set $K$ has size at least $c_n q^n$, where $c_n$ is a constant that depends only on $n$.
Proof: The proof we give will show that $K$ has size at least $\binom{q+n-2}{n-1}$. The bound of $c_n q^n$ can be obtained using the same method with a little additional work.
Assume we have a Kakeya set $K$ with $|K| < \binom{q+n-2}{n-1}$.
Consider the set of monomials of the form $x_1^{d_1} x_2^{d_2} \cdots x_n^{d_n}$ of degree exactly $q-1$. There are exactly $\binom{q+n-2}{n-1}$ such monomials. Thus, there exists a nonzero homogeneous polynomial $P$ of degree $q-1$ that vanishes on all points in $K$. Note this is because finding such a polynomial reduces to solving a homogeneous system of $|K|$ linear equations for the $\binom{q+n-2}{n-1}$ coefficients, and a homogeneous system with fewer equations than unknowns has a nonzero solution.
Now we will use the property that $K$ is a Kakeya set to show that $P$ must vanish on all of $\mathbb{F}_q^n$. Clearly $P(0) = 0$, since $P$ is homogeneous of positive degree. Next, for $y \neq 0$, there is an $x$ such that the line $\{x + ty : t \in \mathbb{F}_q\}$ is contained in $K$. Since $P$ is homogeneous, if $P(z) = 0$ for some $z \neq 0$ then $P(\lambda z) = 0$ for any $\lambda$. In particular $P(t^{-1}x + y) = 0$
for all nonzero $t \in \mathbb{F}_q$. However, $P(sx + y)$ is a polynomial of degree at most $q-2$ in $s$ (the coefficient of $s^{q-1}$ is $P(x)$, which vanishes because $x \in K$), but it has at least $q-1$ roots, corresponding to the nonzero elements of $\mathbb{F}_q$, so it must be identically zero. In particular, plugging in $s = 0$ we deduce $P(y) = 0$.
We have shown that $P(y) = 0$ for all $y \in \mathbb{F}_q^n$, but $P$ has degree less than $q$ in each of the variables, so this is impossible by the Schwartz–Zippel lemma. We deduce that we must actually have $|K| \ge \binom{q+n-2}{n-1}$.
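The linear-algebra step of the proof (a nonzero low-degree polynomial vanishing on a too-small set always exists, because there are more coefficients than constraints) can be checked by brute force over a tiny field. The point set S below is an arbitrary illustration, not a Kakeya set:

```python
# Over F_3 with n = 2 there are C(q+n-2, n-1) = 3 monomials of degree q-1 = 2,
# so any set of fewer than 3 points admits a nonzero homogeneous degree-2
# polynomial vanishing on it (more unknowns than homogeneous linear equations).
from itertools import product

q = 3
monomials = [(2, 0), (1, 1), (0, 2)]        # x^2, x*y, y^2

def evaluate(coeffs, point):
    x, y = point
    return sum(c * x**a * y**b for c, (a, b) in zip(coeffs, monomials)) % q

S = [(1, 2), (0, 1)]                         # |S| = 2 < 3 points
# brute-force a nonzero coefficient vector whose polynomial vanishes on S
coeffs = next(c for c in product(range(q), repeat=len(monomials))
              if any(c) and all(evaluate(c, p) == 0 for p in S))
```

The `next(...)` search is guaranteed to succeed by the dimension count; in the actual proof one would instead argue abstractly that the kernel of the evaluation map is nontrivial.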
Polynomial partitioning
A variation of the polynomial method, often called polynomial partitioning, was introduced by Guth and Katz in their solution to the Erdős distinct distances problem. Polynomial partitioning involves using polynomials to divide the underlying space into regions and arguing about the geometric structure of the partition. These arguments rely on results from algebraic geometry bounding the number of incidences between various algebraic curves. The technique of polynomial partitioning has been used to give a new proof of the Szemerédi–Trotter theorem via the polynomial ham sandwich theorem and has been applied to a variety of problems in incidence geometry.
Applications
A few examples of longstanding open problems that have been solved using the polynomial method are:
The finite field Kakeya conjecture by Dvir
The cap set problem by Ellenberg and Gijswijt, with the original framework developed by Croot, Lev and Pach on the analogous problem over $\mathbb{Z}_4^n$
The Erdős distinct distances problem by Guth and Katz
The Joints Problem in 3D by Guth and Katz. Their argument was later simplified by Elekes, Kaplan and Sharir
See also
Combinatorial Nullstellensatz
References
External links
Survey on the Polynomial Method by Terence Tao
Survey on the Polynomial Method by Larry Guth
Combinatorics | Polynomial method in combinatorics | [
"Mathematics"
] | 789 | [
"Discrete mathematics",
"Combinatorics"
] |
62,461,775 | https://en.wikipedia.org/wiki/LB-1 | LB-1 is a binary star system in the constellation Gemini. In 2019, a paper in Nature proposed that the system contained an unusually massive stellar black hole outside of ordinary single stellar evolution parameters. However, analyses in 2020 found the original 2019 conclusion to be incorrect. Some researchers now believe the system consists of a stripped B-type star and a massive rapidly rotating Be star.
Star
The optically observed star, LB-1 A, or , is a B-type star nine times the mass of the Sun and located at least from Earth. It was found to exhibit radial velocity variations by Chinese astronomers using the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) and the radial-velocity method to search for such wobbly stars.
The astronomers observed the star orbiting an unseen companion every 78.9 days, in what researchers described as a "surprisingly circular" orbit. Follow-up observations using the Gran Telescopio Canarias in Spain and the W. M. Keck Observatory in the United States better defined the findings.
The parallax to LB-1 has been published in Gaia Data Release 2, implying a distance around . The observed spectral properties of the star are inconsistent with those expected for an ordinary main sequence B-type star at this distance.
A separate spectroscopic analysis of the star suggests that instead of a B-type main sequence star as had been indicated, LB-1 A is more likely a stripped helium star (whose spectrum is very similar) with only ~, if at the distance determined by the Gaia satellite.
An additional spectroscopic analysis utilised multi-epoch spectroscopy and disentangling techniques and found that LB-1 comprises two non-degenerate stars: a rapidly rotating B-type star with a disk (a Be star) and a slowly rotating stripped helium star.
Unseen companion
The unseen companion was discovered by measuring the periodic radial velocity shifts it induces in the visible star. If it is a black hole, this would mark the first time a stellar black hole was discovered without observation of its X-ray emission.
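Inferring an unseen companion from radial velocities rests on the binary mass function, f(M) = P K^3 / (2 pi G), which is a strict lower bound on the companion's mass. The sketch below uses the 78.9-day period quoted in the text, but the velocity semi-amplitude K is an invented placeholder, since the measured value is not given here:

```python
# Binary mass function: a lower bound on the unseen companion's mass,
# computed from the visible star's orbital period P and RV semi-amplitude K.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

def mass_function(P_days, K_kms):
    """f(M) = P*K^3 / (2*pi*G), returned in solar masses."""
    P = P_days * 86400.0               # period in seconds
    K = K_kms * 1000.0                 # semi-amplitude in m/s
    return P * K**3 / (2.0 * math.pi * G) / M_SUN

# 78.9-day period from the LB-1 observations; K = 50 km/s is an assumed value.
f_M = mass_function(78.9, 50.0)        # ~ 1.0 solar-mass lower bound
```

Because f(M) = M2^3 sin^3(i) / (M1 + M2)^2, turning this lower bound into an actual companion mass requires an estimate of the visible star's mass and the orbital inclination, which is exactly why the disputed mass of LB-1 A propagates into the disputed mass of its companion.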
If the distance from parallax is ignored, and the star is assumed to be an ordinary main sequence B-type star, the unseen companion LB-1 B or LB-1 *, could be hypothesized to be a black hole with a mass of about 70 solar masses, more than twice the maximum predicted by most current theories of stellar evolution. It would be in the stellar-mass black hole range, below the size of intermediate-mass black holes; however, it would fall in the pair-instability gap of black hole sizes, whereby sufficiently massive black hole progenitor stars undergo pair-instability supernovae and completely disintegrate, leaving no remnant behind. LB-1 would be the first black hole discovered in the mass gap range. The companion mass would be high enough that anything other than a black hole would be expected to be easily detected. According to one of the researchers, "This discovery forces us to re-examine our models of how stellar-mass black holes form [...] This remarkable result, along with the LIGO-Virgo detections of binary black hole collisions during the past four years, really points towards a renaissance in our understanding of black hole astrophysics."
Alternatively, the evidence for the star to be a stripped helium star reduces the mass estimate of the compact object to as little as ~ and raises the possibility of a neutron star.
A revised multiepoch spectroscopic study of LB-1 has revealed that LB-1 does not contain a black hole at all. Instead, it comprises a rapidly rotating Be star and a slowly-rotating helium star. The system was proposed to have formed through a past mass-transfer event. In this framework, the stripped helium star was originally the more massive star and has therefore evolved faster than its companion. After leaving the main sequence, the progenitor star transferred mass to its companion, which became the massive rapidly rotating Be star we see today.
See also
HR 6819
List of black holes
List of nearest black holes
References
External links
Black Holes: Gravity's Relentless Pull – Interactive multimedia website about the physics and astronomy of black holes from the Space Telescope Science Institute
Frequently Asked Questions (FAQs) on Black Holes
Astronomical objects discovered in 2019
Astronomical radio sources
Binary stars
B-type subgiants
Gemini (constellation)
Variable stars
Be stars | LB-1 | [
"Astronomy"
] | 903 | [
"Gemini (constellation)",
"Astronomical radio sources",
"Astronomical events",
"Constellations",
"Astronomical objects"
] |
62,461,799 | https://en.wikipedia.org/wiki/Clawson%20point | In Euclidean geometry, the Clawson point is a special point in a triangle defined by the trilinear coordinates , where are the interior angles at the triangle vertices . It is named after John Wentworth Clawson, who published it in 1925 in the American Mathematical Monthly. It is denoted X(19) in Clark Kimberling's Encyclopedia of Triangle Centers.
Geometrical constructions
There are at least two ways to construct the Clawson point, which could also serve as coordinate-free definitions of the point. In both cases there are two triangles such that the three lines connecting their corresponding vertices meet in a common point, which is the Clawson point.
Construction 1
For a given triangle , let be its orthic triangle and the triangle formed by the outer tangents to its three excircles. These two triangles are similar, and the Clawson point is their center of similarity; therefore the three lines connecting their corresponding vertices meet in a common point, which is the Clawson point.
Construction 2
For a triangle , its circumcircle intersects each of its three excircles in two points. The three lines through those points of intersection form a triangle. This triangle and are perspective triangles, with the Clawson point as their perspective center. Hence the three lines meet in the Clawson point.
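The point can also be located numerically. The sketch below assumes the trilinear coordinates tan A : tan B : tan C given for X(19) in the Encyclopedia of Triangle Centers and converts them to Cartesian coordinates through barycentric weights; `clawson_point` is an illustrative helper, not an established API.

```python
import math

def clawson_point(A, B, C):
    """Locate the Clawson point X(19) of triangle ABC in Cartesian
    coordinates, assuming the trilinears tan A : tan B : tan C."""
    # Side lengths opposite each vertex.
    a = math.dist(B, C)
    b = math.dist(C, A)
    c = math.dist(A, B)
    # Interior angles from the law of cosines.
    alpha = math.acos((b*b + c*c - a*a) / (2*b*c))
    beta = math.acos((a*a + c*c - b*b) / (2*a*c))
    gamma = math.pi - alpha - beta
    # Trilinears (x:y:z) correspond to barycentrics (a*x : b*y : c*z);
    # the point is the weighted average of the vertices.
    w = [a*math.tan(alpha), b*math.tan(beta), c*math.tan(gamma)]
    s = sum(w)
    return tuple(sum(wi*p[i] for wi, p in zip(w, (A, B, C))) / s
                 for i in (0, 1))

# In an equilateral triangle every triangle center is the centroid.
P = clawson_point((0, 0), (1, 0), (0.5, math.sqrt(3)/2))
```

For the equilateral triangle above the result agrees with the centroid (0.5, √3/6), as expected for any triangle center.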
History
The point is now named after J. W. Clawson, who published its trilinear coordinates in 1925 in the American Mathematical Monthly as problem 3132, where he asked for a geometrical construction of that point. However, the French mathematician Émile Lemoine had already examined the point in 1886. Later the point was independently rediscovered by R. Lyness and G. R. Veldkamp in 1983, who called it the crucial point after the Canadian math journal Crux Mathematicorum in which it was published as problem 682.
References
External links
X(19) = CLAWSON POINT and CLAWSON POINT at the Encyclopedia of Triangle Centers (ETC)
The Clawson point via the orthic and extangent triangles (in French)
Clawson Point: Orthic Triangle, Extangents Triangle, Homothecy or Homothety
Triangle centers | Clawson point | [
"Physics",
"Mathematics"
] | 431 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
62,464,330 | https://en.wikipedia.org/wiki/Carolyn%20Hansson | Carolyn M. Hansson (née Russell; born March 15, 1941) is a Canadian materials engineer. She was the first female student to attend the Royal School of Mines at Imperial College, London, and the first woman to graduate with a PhD in metallurgy from there. Hansson was honoured for pioneering a monitoring system for evaluating the integrity of concrete structures.
Early life and education
Hansson was born on March 15, 1941, in Hazel Grove, Cheshire, England. Growing up, she attended an all-girls school in England and applied for metallurgy at Imperial College. Upon being accepted, she was the first female student to attend the Royal School of Mines at Imperial College, London, and the first woman to graduate with a PhD in metallurgy from there. She was also only one of two women in the United Kingdom with a PhD in metallurgy.
Career
In 1976, Hansson joined AT&T Bell Labs where she stayed four years before spending the following nine as a research scientist, and eventually as head of the Research Department, at the Danish Corrosion Centre. When her husband was extended a position in Maryland, Hansson accepted an appointment within the Martin Marietta's Institute for Advanced Studies. She was awarded a Guggenheim Fellowship in 1977 for research on physical metallurgy, carrying out these studies at the University of Cambridge. She was awarded the 1980 Society of Women Engineers Achievement Award.
In 1990, she became a professor and head of the Materials and Metallurgical Engineering Department at Queen's University and then joined the University of Waterloo in 1996 as Vice President of University Research. The following year, she was elected a Fellow of The Minerals, Metals & Materials Society. Hansson was eventually replaced as VP by Paul Guild in 2001 after a five-year term.
Hansson's research focus is on the corrosion of steel inside concrete. She has identified techniques for measuring the amount of corrosion and also studies rust-resistant reinforcing materials. Hansson has worked as a consultant to the Ministry of Transportation Ontario and Alberta Transportation in corrosion monitoring of bridge structures. In 2005, Hansson resigned from Hydrogenics Corporation upon their acquisition of Stuart Energy. A few years later, she was elected a Fellow of the Royal Society of Canada for her contributions in the basic science of corrosion and metallurgical processes and applied engineering. Hansson also received the 2009 Acta Materialia, Inc. Materials & Society Award.
In 2014, she was appointed Executive Secretary and Cooperating Society Governor of Acta Materialia Inc. The next year, Hansson was appointed a member of the Order of Canada for "pioneering a monitoring system for evaluating the integrity of concrete structures." She has also been appointed a Fellow of the Danish Academy of Technical Sciences, the UK Institution of Materials, Minerals and Mining, and the American Concrete Institute. Two years later, she joined the Board of Directors at Electrovaya Inc. That same year, she was appointed head of Electrovaya's Disclosure Committee after the company was fined $250,000 by the Ontario Securities Commission.
Personal life
As of 1980, she lived in Murray Hill, New Jersey.
References
External links
Living people
1941 births
Alumni of Imperial College London
Academic staff of the University of Waterloo
Academic staff of Queen's University at Kingston
Fellows of the Royal Society of Canada
Canadian women academics
Women materials scientists and engineers
Members of the Order of Canada
People from Hazel Grove
People from Union County, New Jersey
Engineers from New Jersey
Fellows of the Minerals, Metals & Materials Society | Carolyn Hansson | [
"Materials_science",
"Technology"
] | 702 | [
"Women materials scientists and engineers",
"Materials scientists and engineers",
"Women in science and technology"
] |
62,464,531 | https://en.wikipedia.org/wiki/Phylogenetic%20classification%20of%20bony%20fishes | The phylogenetic classification of bony fishes is a classification scheme for bony fishes based on phylogenies inferred using molecular and genomic data for nearly 2,000 fishes. The first version was published in 2013 and resolved 66 orders. The latest version (version 4) was published in 2017 and recognised 72 orders and 79 suborders.
Phylogeny
The following cladograms show the phylogeny of the Osteichthyes down to order level, with the number of families in parentheses.
The 43 orders of spiny-rayed fishes are related as follows:
References
External links
www.deepfin.org - Phylogeny of all Fishes (redirects to https://sites.google.com/site/guilleorti/home)
Phylogenetics
Bony fish | Phylogenetic classification of bony fishes | [
"Biology"
] | 166 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
62,467,440 | https://en.wikipedia.org/wiki/Mathematica%20Applicanda | Mathematica Applicanda is a peer-reviewed scientific journal covering applied mathematics. It was established in 1973 by the Polish Mathematical Society as Series III of the Annales Societatis Mathematicae Polonae, under the name Matematyka Stosowana (ISSN 0137-2890). The first editor-in-chief was Marceli Stark. In 1999 the journal was renamed Matematyka Stosowana-Matematyka dla Społeczeństwa (ISSN 1730-2668). Since 2012 it has been published primarily in electronic form under the name Mathematica Applicanda (ISSN 2299-4009).
Former Editors-in-chief
Marceli Stark (volume I[1973])
Robert Bartoszyński (volumes II[1974] - XXIX[1987])
Andrzej Kiełbasiński (volumes XXX[1987] - XLI[1999])
Witold Kosiński (volumes XLII[2000] - XLIV[2011])
Krzysztof J. Szajowski (volumes XL[2012] - XLVII[2019])
Krzysztof Burnecki (volume LXVIII[2020] )
Jacek Miękisz (volume XLIX[2021] - L[2022])
Agnieszka Wyłomańska (volume LI[2023- ] )
Abstracting and indexing
The journal is abstracted and indexed in
MathSciNet
Zentralblatt MATH
CEON The Library of Science (Biblioteka Nauki)
BazTech
Scopus
Index Copernicus
See also
List of mathematical physics journals
List of probability journals
List of statistics journals
References
External links
Applied mathematics journals
Academic journals established in 1973
English-language journals
Biannual journals | Mathematica Applicanda | [
"Mathematics"
] | 373 | [
"Applied mathematics",
"Applied mathematics journals"
] |
62,468,130 | https://en.wikipedia.org/wiki/HD%20128429 | HD 128429 is a binary star system located at a distance of 88 light years from the Sun in the southern zodiac constellation of Libra. It has a yellow-white hue and is just barely visible to the naked eye with an apparent visual magnitude of 6.20. The system is drifting closer to the Sun with a radial velocity of −66 km/s and has a high proper motion, traversing the celestial sphere at the rate of per year. It is a well-known high-velocity star system with a net heliocentric velocity of 158.8 km/s. The system is orbiting through the galaxy with a high eccentricity of 0.62, which carries it from as close as 4.1 out to away from the Galactic Center.
Binary system
This star was found to be a binary system based on variations in radial velocity data collected from the Hipparcos satellite. The pair have an orbital period of with photometric data yielding an angular separation of . Observations from the Gaia DR2 provide an estimated linear semimajor axis of . The eccentricity of the orbit is unknown, but has been assumed to be near zero.
The visible member of this system, designated component Aa, has a stellar classification of F6V. Superficially, it resembles a 2–3 billion-year-old F-type main-sequence star that is generating energy through core hydrogen fusion. However, the star displays anomalies that are a challenge to explain through the normal star formation process. The first is the high velocity orbit of the star through the Milky Way, which would be very difficult for a young population I star to accomplish. The second is an abnormally low iron-to-magnesium [Fe/Mg] abundance ratio. This strongly suggests it is an ancient population II star that was formed during the early starburst phase of the galaxy about 12 billion years ago – a period when high levels of magnesium were released during supernova explosions of massive stars. Both anomalies can be explained by a mass transfer that converted a much older star into a blue straggler.
Evidence suggests that the companion, Ab, is a white dwarf star that evolved from an F- or G-type main-sequence star with a similar mass to the current primary. As component Ab became a red giant, it overflowed its Roche lobe and mass transfer took place. The white dwarf now has less than half the mass of the Sun, having transferred a substantial fraction of its mass to the current primary. The interaction would have circularized the orbit of the pair.
Properties
The current primary has 1.32 times the mass of the Sun and 1.39 times the Sun's radius. It has a low metallicity and is completely lacking in lithium. The star is spinning with a projected rotational velocity of 16.2 km/s. It is radiating 2.75 times the luminosity of the Sun from its photosphere at an effective temperature of 6,341 K. The system is a source of X-ray emission.
References
F-type main-sequence stars
Blue stragglers
White dwarfs
Binary stars
Libra (constellation)
Durchmusterung objects
111395
62523 | HD 128429 | [
"Astronomy"
] | 645 | [
"Libra (constellation)",
"Constellations"
] |
62,469,111 | https://en.wikipedia.org/wiki/Homomorphism%20density | In the mathematical field of extremal graph theory, homomorphism density with respect to a graph is a parameter that is associated to each graph in the following manner:
.
Above, is the set of graph homomorphisms, or adjacency preserving maps, from to . Density can also be interpreted as the probability that a map from the vertices of to the vertices of chosen uniformly at random is a graph homomorphism. There is a connection between homomorphism densities and subgraph densities, which is elaborated on below.
Examples
The edge density of a graph is given by .
The number of walks with steps is given by .
where is the adjacency matrix of .
The proportion of colorings using colors that are proper is given by .
Other important properties such as the number of stable sets or the maximum cut can be expressed or estimated in terms of homomorphism numbers or densities.
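These homomorphism numbers and densities can be computed by brute force on small graphs by enumerating all vertex maps. The sketch below uses illustrative helper names and checks the edge-density and walk-count interpretations on the 4-cycle.

```python
from itertools import product

def hom(H_edges, H_n, G_adj):
    """Count homomorphisms from H (vertices 0..H_n-1, edge list
    H_edges) to G given by a 0/1 adjacency matrix G_adj."""
    n = len(G_adj)
    return sum(
        all(G_adj[f[u]][f[v]] for u, v in H_edges)
        for f in product(range(n), repeat=H_n)
    )

def hom_density(H_edges, H_n, G_adj):
    # Probability that a uniformly random vertex map is a homomorphism.
    return hom(H_edges, H_n, G_adj) / len(G_adj) ** H_n

# Sanity checks with G = C4, the 4-cycle.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
edge = [(0, 1)]          # K2: edge density of C4 is 8/16 = 1/2
P2 = [(0, 1), (1, 2)]    # path with 2 edges: counts 2-step walks
assert hom_density(edge, 2, C4) == 0.5
assert hom(P2, 3, C4) == 16   # each vertex starts 4 two-step walks
```

The enumeration is exponential in the number of vertices of G, so this is only practical for very small host graphs.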
Subgraph densities
We define the (labeled) subgraph density of in to be
.
Note that it might be slightly dubious to call this a density, as we are not quite dividing through by the total number of labeled subgraphs on vertices of , but our definition is asymptotically equivalent and simpler to analyze for our purposes. Observe that any labeled copy of in corresponds to a homomorphism of into . However, not every homomorphism corresponds to a labeled copy − there are some degenerate cases, in which multiple vertices of are sent to the same vertex of . That said, the number of such degenerate homomorphisms is only , so we have . For instance, we see that for graphs with constant homomorphism density, the labeled subgraph density and homomorphism density are asymptotically equivalent. For being a complete graph , the homomorphism density and subgraph density are in fact equal (for without self-loops), as the edges of force all images under a graph homomorphism to be distinct.
Generalization to graphons
The notion of homomorphism density can be generalized to the case where instead of a graph , we have a graphon ,
Note that the integrand is a product that runs over the edges in the subgraph , whereas the differential is a product running over the vertices in . Intuitively, each vertex in is represented by the variable
For example, the triangle density in a graphon is given by
.
This definition of homomorphism density is indeed a generalization, because for every graph and its associated step graphon , .
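As a numerical sanity check, the triple integral defining the triangle density can be approximated on a grid; for a constant graphon W ≡ p the density equals p³ exactly. The helper below is an illustrative sketch using a midpoint rule.

```python
def triangle_density(W, m=40):
    """Approximate t(K3, W) = ∭ W(x,y) W(y,z) W(z,x) dx dy dz by a
    midpoint rule on an m x m x m grid over [0,1]^3."""
    pts = [(i + 0.5) / m for i in range(m)]
    total = 0.0
    for x in pts:
        for y in pts:
            wxy = W(x, y)
            if wxy:  # skip the inner loop when the factor vanishes
                for z in pts:
                    total += wxy * W(y, z) * W(z, x)
    return total / m**3

# Constant graphon W(x, y) = 1/2: triangle density is (1/2)**3 = 0.125,
# and the midpoint rule reproduces it exactly.
t3 = triangle_density(lambda x, y: 0.5)
```

For non-constant graphons the grid approximation converges as m grows, mirroring the fact that graphons arise as limits of graph sequences.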
The definition can be further extended to all symmetric, measurable functions . The following example demonstrates the benefit of this further generalization. Relative to the function , the density of in is the number of Eulerian cycles in .
This notion is helpful in understanding asymptotic behavior of homomorphism densities of graphs which satisfy certain property, since a graphon is a limit of a sequence of graphs.
Inequalities
Many results in extremal graph theory can be described by inequalities involving homomorphism densities associated to a graph. The following are a sequence of examples relating the density of triangles to the density of edges.
Turán's Theorem
A classic example is Turán's Theorem, which states that if , then . A special case of this is Mantel's Theorem, which states that if , then .
Goodman's Theorem
An extension of Mantel's Theorem provides an explicit lower bound on triangle densities in terms of edge densities.Theorem (Goodman).
Kruskal-Katona Theorem
A converse inequality to Goodman's Theorem is a special case of Kruskal–Katona theorem, which states that . It turns out that both of these inequalities are tight for specific edge densities.
Proof. It is sufficient to prove this inequality for any graph . Say is a graph on vertices, and are the eigenvalues of its adjacency matrix . By spectral graph theory, we know
, and .
The conclusion then comes from the following inequality:
.
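The spectral identities used in this proof can be checked numerically: the trace of A³ equals Σλᵢ³ and counts the labeled triangle homomorphisms. The sketch below, in pure Python with illustrative names, assumes the usual form of Goodman's bound, t(K3, G) ≥ t(K2, G)(2 t(K2, G) − 1).

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def goodman_check(A):
    """Return (triangle density, edge density, bound holds?) using
    hom(K3, G) = tr(A^3) for adjacency matrix A."""
    n = len(A)
    A3 = matmul(matmul(A, A), A)
    t_k3 = sum(A3[i][i] for i in range(n)) / n**3  # triangle density
    t_k2 = sum(map(sum, A)) / n**2                 # edge density
    return t_k3, t_k2, t_k3 >= t_k2 * (2 * t_k2 - 1) - 1e-12

# Complete graph K4: tr(A^3) = 24, so t(K3) = 24/64 = 3/8, while the
# edge density is 3/4; Goodman's bound holds with equality here.
A = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
t3, t2, ok = goodman_check(A)
```

Complete graphs are exactly the tight cases for the lower bound, which matches the equality observed above.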
Description of triangle vs edge density
A more complete description of the relationship between and was proven by Razborov. His work from 2008 completes the understanding of a homomorphism inequality problem, the description of , which is the region of feasible (edge density, triangle density) pairs in a graphon. The upper boundary of the region is tight and given by the Kruskal–Katona theorem. The lower boundary is the main result of Razborov's work, which provides a complete description.
Useful tools
Cauchy-Schwarz
One particularly useful inequality to analyze homomorphism densities is the Cauchy–Schwarz inequality. The effect of applying the Cauchy-Schwarz inequality is "folding" the graph over a line of symmetry to relate it to a smaller graph. This allows for the reduction of densities of large but symmetric graphs to that of smaller graphs. As an example, we prove that the cycle of length 4 is Sidorenko. If the vertices are labelled 1,2,3,4 in that order, the diagonal through vertices 1 and 3 is a line of symmetry. Folding over this line relates to the complete bipartite graph . Mathematically, this is formalized as
where we applied Cauchy-Schwarz to "fold" vertex 2 onto vertex 4. The same technique can be used to show , which combined with the above verifies that is a Sidorenko graph.
The generalization Hölder's inequality can also be used in a similar manner to fold graphs multiple times with a single step. It is also possible to apply the more general form of Cauchy-Schwarz to fold graphs in the case that certain edges lie on the line of symmetry.
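The inequality obtained by these two foldings, t(C4, G) ≥ t(K2, G)⁴, can be verified by brute force on small host graphs. The helper below is an illustrative sketch.

```python
from itertools import product

def t(H_edges, H_n, A):
    """Homomorphism density of H (edge list on vertices 0..H_n-1)
    in the graph with 0/1 adjacency matrix A."""
    n = len(A)
    good = sum(all(A[f[u]][f[v]] for u, v in H_edges)
               for f in product(range(n), repeat=H_n))
    return good / n**H_n

# Host graph: the path on 3 vertices.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
K2 = [(0, 1)]
# t(C4) = 8/81 and t(K2)^4 = (4/9)^4 ≈ 0.039, so the bound holds.
assert t(C4, 4, A) >= t(K2, 2, A) ** 4
```

Exhaustive checks like this only confirm single instances, of course; the Cauchy–Schwarz folding argument is what proves the inequality for every graph.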
Lagrangian
The Lagrangian can be useful in analyzing extremal problems. The quantity is defined to be
.
One useful fact is that a maximizing vector is equally supported on the vertices of a clique in . The following is an application of analyzing this quantity.
According to Hamed Hatami and Sergei Norine, one can convert any algebraic inequality between homomorphism densities to a linear inequality. In some situations, deciding whether such an inequality is true or not can be simplified, as is the case in the following theorem. Theorem (Bollobás). Let be real constants. Then, the inequality
holds for every graph if and only if it holds for every complete graph .
However, we get a much harder problem, in fact an undecidable one, when we have homomorphism inequalities on a more general set of graphs : Theorem (Hatami, Norine). Let be real constants, and graphs. Then, it is an undecidable problem to determine whether the homomorphism density inequality
holds for every graph . A recent observation proves that any linear homomorphism density inequality is a consequence of the positive semi-definiteness of a certain infinite matrix, or of the positivity of a quantum graph; in other words, any such inequality would follow from applications of the Cauchy–Schwarz inequality.
See also
Common graph
Sidorenko's conjecture
References
Extremal graph theory | Homomorphism density | [
"Mathematics"
] | 1,444 | [
"Mathematical relations",
"Graph theory",
"Extremal graph theory"
] |
62,469,606 | https://en.wikipedia.org/wiki/Darmois%E2%80%93Skitovich%20theorem | In mathematical statistics, the Darmois–Skitovich theorem characterizes the normal distribution (the Gaussian distribution) by the independence of two linear forms from independent random variables. This theorem was proved independently by G. Darmois and V. P. Skitovich in 1953.
Formulation
Let be independent random variables. Let be nonzero constants. If the linear forms and are independent then all random variables have normal distributions (Gaussian distributions).
History
The Darmois–Skitovich theorem is a generalization of the Kac–Bernstein theorem in which the normal distribution (the Gaussian distribution) is characterized by the independence of the sum and the difference of two independent random variables. For a history of proving the theorem by V. P. Skitovich, see the article
References
Mathematical theorems | Darmois–Skitovich theorem | [
"Mathematics"
] | 170 | [
"Mathematical problems",
"Mathematical theorems",
"Theorems in statistics"
] |
62,469,857 | https://en.wikipedia.org/wiki/Kac%E2%80%93Bernstein%20theorem | The Kac–Bernstein theorem is one of the first characterization theorems of mathematical statistics. It is easy to see that if the random variables and are independent and normally distributed with the same variance, then their sum and difference are also independent. The Kac–Bernstein theorem states that the independence of the sum and difference of two independent random variables characterizes the normal distribution (the Gauss distribution). This theorem was proved independently by Polish-American mathematician Mark Kac and Soviet mathematician Sergei Bernstein.
Formulation
Let and be independent random variables. If and are independent then and have normal distributions (the Gaussian distribution).
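The "easy" direction of the theorem can be illustrated numerically: for independent normal X and Y with equal variances, Cov(X+Y, X−Y) = Var X − Var Y = 0, and since (X+Y, X−Y) is jointly normal, zero covariance implies independence. A stdlib-only simulation, with the seed fixed for reproducibility:

```python
import random

random.seed(0)
N = 20000
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [random.gauss(0, 1) for _ in range(N)]
s = [x + y for x, y in zip(xs, ys)]
d = [x - y for x, y in zip(xs, ys)]
ms, md = sum(s) / N, sum(d) / N
cov = sum((a - ms) * (b - md) for a, b in zip(s, d)) / N
# Cov(X+Y, X-Y) = Var X - Var Y = 0 here; the sample covariance
# should be near zero (standard error roughly sqrt(4/N) ≈ 0.014).
assert abs(cov) < 0.08
```

The theorem states the much deeper converse: if the sum and difference of two independent variables are independent, the variables must be normal.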
Generalization
A generalization of the Kac–Bernstein theorem is the Darmois–Skitovich theorem, in which instead of sum and difference linear forms from n independent random variables are considered.
References
Kac M. "On a characterization of the normal distribution," American Journal of Mathematics. 1939. 61. pp. 726—728.
Bernstein S. N. "On a property which characterizes a Gaussian distribution," Proceedings of the Leningrad Polytechnic Institute. 1941. V. 217, No 3. pp. 21—22.
Theorems in statistics
Normal distribution | Kac–Bernstein theorem | [
"Mathematics"
] | 245 | [
"Mathematical theorems",
"Mathematical problems",
"Theorems in statistics"
] |
62,471,004 | https://en.wikipedia.org/wiki/ArviZ | ArviZ ( ) is a Python package for exploratory analysis of Bayesian models. It is specifically designed to work with the output of probabilistic programming libraries like PyMC, Stan, and others by providing a set of tools for summarizing and visualizing the results of Bayesian inference in a convenient and informative way. ArviZ also provides a common data structure for manipulating and storing data commonly arising in Bayesian analysis, like posterior samples or observed data.
ArviZ is an open source project, developed by the community, and an affiliated project of NumFOCUS. It has been used to help interpret inference problems in several scientific domains, including astronomy, neuroscience, physics and statistics.
Etymology
The ArviZ name is derived from reading "rvs" (the short form of random variates) as a word instead of spelling it and also using the particle "viz" usually used to abbreviate visualization.
Exploratory analysis of Bayesian models
When working with Bayesian models there are a series of related tasks that need to be addressed besides inference itself:
Diagnoses of the quality of the inference, this is needed when using numerical methods such as Markov chain Monte Carlo techniques
Model criticism, including evaluations of both model assumptions and model predictions
Comparison of models, including model selection or model averaging
Preparation of the results for a particular audience
All these tasks are part of the Exploratory analysis of Bayesian models approach, and successfully performing them is central to the iterative and interactive modeling process. These tasks require both numerical and visual summaries.
Library features
InferenceData object for Bayesian data manipulation. This object is based on xarray
Plots using two alternative backends matplotlib or bokeh
Numerical summaries and diagnostics for Markov chain Monte Carlo methods.
Integration with established probabilistic programming languages including PyStan (the Python interface of Stan), PyMC, Edward, and Pyro, and easy integration with novel or bespoke Bayesian analyses. ArviZ is also available in Julia, using the ArviZ.jl interface
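ArviZ itself ships these diagnostics (for instance `arviz.summary` and `arviz.rhat`). As a library-free illustration of the idea, the classic Gelman–Rubin convergence statistic can be sketched in pure Python; this is the older formula, not ArviZ's rank-normalized split-R̂.

```python
import random

def gelman_rubin(chains):
    """Classic potential scale reduction factor (R-hat) for a list of
    equal-length MCMC chains. Values near 1 suggest convergence."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)  # between-chain
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m              # within-chain
    var_hat = (n - 1) / n * W + B / n  # pooled variance estimate
    return (var_hat / W) ** 0.5

random.seed(1)
# Four well-mixed "chains" drawn from the same distribution: R-hat ≈ 1.
mixed = [[random.gauss(0.0, 1.0) for _ in range(1000)] for _ in range(4)]
r = gelman_rubin(mixed)
```

In practice one would pass real sampler output to ArviZ rather than reimplementing the diagnostic.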
See also
Bambi is a high-level Bayesian model-building interface based on PyMC
PyMC a probabilistic programming language written in Python
Stan is a probabilistic programming language for statistical inference written in C++
References
External links
ArviZ web site
Computational statistics
Free Bayesian statistics software
Monte Carlo software
Numerical programming languages
Probabilistic software | ArviZ | [
"Mathematics"
] | 518 | [
"Probabilistic software",
"Computational statistics",
"Computational mathematics",
"Mathematical software"
] |
62,471,938 | https://en.wikipedia.org/wiki/Counting%20lemma | The counting lemmas this article discusses are statements in combinatorics and graph theory. The first one extracts information from -regular pairs of subsets of vertices in a graph , in order to guarantee patterns in the entire graph; more explicitly, these patterns correspond to the count of copies of a certain graph in . The second counting lemma provides a similar yet more general notion on the space of graphons, in which a multiple of the cut distance between two graphons bounds the difference between their homomorphism densities with respect to .
Graph embedding version of counting lemma
Whenever we have an -regular pair of subsets of vertices in a graph , we can interpret this in the following way: the bipartite graph, , which has density , is close to being a random bipartite graph in which every edge appears with probability , with some error.
In a setting where we have several clusters of vertices, some of the pairs between these clusters being -regular, we would expect the count of small, or local patterns, to be roughly equal to the count of such patterns in a random graph. These small patterns can be, for instance, the number of graph embeddings of some in , or more specifically, the number of copies of in formed by taking one vertex in each vertex cluster.
The above intuition works, yet there are several important conditions that must be satisfied in order to have a complete statement of the theorem; for instance, the pairwise densities are at least , the cluster sizes are at least , and . Being more careful of these details, the statement of the graph counting lemma is as follows: Statement of the theorem
If is a graph with vertices and edges, and is a graph with (not necessarily disjoint) vertex subsets , such that for all and for every edge of the pair is -regular with density and , then contains at least many copies of with the copy of vertex in .
This theorem is a generalization of the triangle counting lemma, which states the above but with : Triangle counting Lemma
Let be a graph on vertices, and let be subsets of which are pairwise -regular, and suppose the edge densities are all at least . Then the number of triples such that form a triangle in is at least
Proof of triangle counting lemma:
Since is a regular pair, less than of the vertices in have fewer than neighbors in ; otherwise, this set of vertices from along with its neighbors in would witness irregularity of , a contradiction. Intuitively, we are saying that not too many vertices in can have a small degree in .
By an analogous argument in the pair , less than of the vertices in have fewer than neighbors in . Combining these two subsets of and taking their complement, we obtain a subset of size at least such that every vertex has at least neighbors in and at least neighbors in .
We also know that , and that is an -regular pair; therefore, the density between the neighborhood of in and the neighborhood of in is at least , because by regularity it is -close to the actual density between and .
Summing up, for each of these at least vertices , there are at least choices of edges between the neighborhood of in and the neighborhood of in . From there we can conclude this proof.
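The quantity estimated by the lemma, the number of triples in X × Y × Z that span a triangle, can be counted directly on small graphs. The sketch below uses illustrative names and an adjacency-set representation.

```python
def count_triangles(adj, X, Y, Z):
    """Count triples (x, y, z) in X x Y x Z spanning a triangle in the
    graph given by the adjacency-set dict adj."""
    return sum(1
               for x in X for y in Y if y in adj[x]
               for z in Z if z in adj[x] and z in adj[y])

# Complete tripartite example: every cross pair is an edge, so every
# one of the 2*2*2 triples forms a triangle.
X, Y, Z = [0, 1], [2, 3], [4, 5]
adj = {v: set() for v in range(6)}
for P, Q in ((X, Y), (Y, Z), (X, Z)):
    for u in P:
        for w in Q:
            adj[u].add(w)
            adj[w].add(u)
assert count_triangles(adj, X, Y, Z) == 8
```

The complete tripartite case attains the maximum possible count |X||Y||Z|; the triangle counting lemma guarantees a constant fraction of this maximum under the regularity and density hypotheses.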
Idea of proof of graph counting lemma:The general proof of the graph counting lemma extends this argument through a greedy embedding strategy; namely, vertices of are embedded in the graph one by one, by using the regularity condition so as to be able to keep a sufficiently large set of vertices in which we could embed the next vertex.
Graphon version of counting lemma
The space of graphons is given the structure of a metric space where the metric is the cut distance . The following lemma is an important step in order to prove that is a compact metric space. Intuitively, it says that for a graph , the homomorphism densities of two graphons with respect to this graph have to be close (this bound depending on the number of edges ) if the graphons are close in terms of cut distance.
Definition (cut norm).
The cut norm of is defined as , where and are measurable sets.
Definition (cut distance).
The cut distance is defined as , where represents for a measure-preserving bijection . Graphon Counting Lemma
For graphons and graph , we have , where denotes the number of edges of graph .
Proof of the graphon counting lemma:
It suffices to prove Indeed, by considering the above, with the right hand side expression having a factor instead of , and taking the infimum of the over all measure-preserving bijections , we obtain the desired result.
Step 1: Reformulation. We prove a reformulation of the cut norm, which is by definition the left hand side of the following equality. The supremum in the right hand side is taken among measurable functions and :
Here's the reason for the above to hold: By taking and , we note that the left hand side is less than or equal than the right hand side. The right hand side is less than or equal than the left hand side by bilinearity of the integrand in , and by the fact that the extrema are attained for taking values at or .
Step 2: Proof for . In the case that , we observe that
By Step 1, we have that for a fixed thatTherefore, when integrating over all we get that Using this bound on each of the three summands, we get that the whole sum is bounded by .
Step 3: General case. For a general graph , we need the following lemma to make everything more convenient: Lemma.
The following expression holds:
The above lemma follows from a straightforward expansion of the right hand side. Then, by the triangle inequality of norm, we have the following
Here, each absolute value term in the sum is bounded by the cut norm if we fix all the variables except for and for each -th term, altogether implying that . This finishes the proof.
See also
Graph removal lemma
References
Lemmas
Combinatorics
Graph theory | Counting lemma | [
"Mathematics"
] | 1,251 | [
"Discrete mathematics",
"Mathematical theorems",
"Theorems in discrete mathematics",
"Combinatorics",
"Mathematical problems",
"Theorems in graph theory",
"Lemmas"
] |
62,472,235 | https://en.wikipedia.org/wiki/Pl%C3%BCnnecke%E2%80%93Ruzsa%20inequality | In additive combinatorics, the Plünnecke–Ruzsa inequality is an inequality that bounds the size of various sumsets of a set , given that there is another set so that is not much larger than . A slightly weaker version of this inequality was originally proven and published by Helmut Plünnecke (1970).
Imre Ruzsa (1989) later published a simpler proof of the current, more general, version of the inequality.
The inequality forms a crucial step in the proof of Freiman's theorem.
Statement
The following sumset notation is standard in additive combinatorics. For subsets and of an abelian group and a natural number , the following are defined:
The set is known as the sumset of and .
Plünnecke-Ruzsa inequality
The most commonly cited version of the statement of the Plünnecke–Ruzsa inequality is the following.
This is often used when , in which case the constant is known as the doubling constant of . In this case, the Plünnecke–Ruzsa inequality states that sumsets formed from a set with small doubling constant must also be small.
Plünnecke's inequality
The version of this inequality that was originally proven by Plünnecke (1970) is slightly weaker.
Proof
Ruzsa triangle inequality
The Ruzsa triangle inequality is an important tool which is used to generalize Plünnecke's inequality to the Plünnecke–Ruzsa inequality. Its statement is:
Proof of Plünnecke–Ruzsa inequality
The following simple proof of the Plünnecke–Ruzsa inequality is due to Petridis (2014).
Lemma: Let and be finite subsets of an abelian group . If is a nonempty subset that minimizes the value of , then for all finite subsets ,
Proof: This is demonstrated by induction on the size of . For the base case of , note that is simply a translation of for any , so
For the inductive step, assume the inequality holds for all with for some positive integer . Let be a subset of with , and let for some . (In particular, the inequality holds for .) Finally, let . The definition of implies that . Thus, by the definition of these sets,
Hence, considering the sizes of the sets,
The definition of implies that , so by the definition of , . Thus, applying the inductive hypothesis on and using the definition of ,
To bound the right side of this inequality, let . Suppose and , then there exists such that . Thus, by definition, , so . Hence, the sets and are disjoint. The definitions of and thus imply that
Again by definition, , so . Hence,
Putting the above two inequalities together gives
This completes the proof of the lemma.
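Petridis's lemma can be checked by brute force on tiny sets: choose a nonempty X ⊆ A minimizing |X + B|/|X|, set K = |X + B|/|X|, and verify |X + B + C| ≤ K|X + C| (a sketch; the example sets are arbitrary, and the exhaustive subset search is only feasible for very small A):

```python
from itertools import chain, combinations

def sumset(*sets):
    """Sum of any number of finite integer sets."""
    out = {0}
    for S in sets:
        out = {a + b for a in out for b in S}
    return out

def nonempty_subsets(A):
    """All nonempty subsets of A, as tuples."""
    A = sorted(A)
    return chain.from_iterable(combinations(A, r) for r in range(1, len(A) + 1))

def petridis_lemma_holds(A, B, C):
    # X ⊆ A nonempty, minimizing the ratio |X + B| / |X|
    X = set(min(nonempty_subsets(A),
                key=lambda t: len(sumset(set(t), B)) / len(t)))
    K = len(sumset(X, B)) / len(X)
    return len(sumset(X, B, C)) <= K * len(sumset(X, C)) + 1e-9

assert petridis_lemma_holds({0, 1, 4}, {0, 2}, {0, 3, 7})
assert petridis_lemma_holds({0, 1, 2, 5}, {0, 1, 10}, {0, 4})
```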
To prove the Plünnecke–Ruzsa inequality, take and as in the statement of the lemma. It is first necessary to show that
This can be proved by induction. For the base case, the definitions of and imply that . Thus, the definition of implies that . For inductive step, suppose this is true for . Applying the lemma with and the inductive hypothesis gives
This completes the induction. Finally, the Ruzsa triangle inequality gives
Because , it must be the case that . Therefore,
This completes the proof of the Plünnecke–Ruzsa inequality.
Plünnecke graphs
Both Plünnecke's proof of Plünnecke's inequality and Ruzsa's original proof of the Plünnecke–Ruzsa inequality use the method of Plünnecke graphs. Plünnecke graphs are a way to capture the additive structure of the sets in a graph-theoretic manner.
To define a Plünnecke graph we first define commutative graphs and layered graphs:
Definition. A directed graph is called semicommutative if, whenever there exist distinct such that and are edges in for each , then there also exist distinct so that and are edges in for each .
is called commutative if it is semicommutative and the graph formed by reversing all its edges is also semicommutative.
Definition. A layered graph is a (directed) graph whose vertex set can be partitioned so that all edges in are from to , for some .
Definition. A Plünnecke graph is a layered graph which is commutative.
The canonical example of a Plünnecke graph is the following, which shows how the structure of the sets form a Plünnecke graph.
Example. Let be subsets of an abelian group. Then, let be the layered graph so that each layer is a copy of , so that , , ..., . Create the edge (where and ) whenever there exists such that . (In particular, if , then by definition, so every vertex has outdegree equal to the size of .)
Then is a Plünnecke graph. For example, to check that is semicommutative, if and are edges in for each , then . Then, let , so that and . Thus, is semicommutative. It can be similarly checked that the graph formed by reversing all edges of is also semicommutative, so is a Plünnecke graph.
In a Plünnecke graph, the image of a set in , written , is defined to be the set of vertices in which can be reached by a path starting from some vertex in . In particular, in the aforementioned example, is just .
The magnification ratio between and , denoted , is then defined as the minimum factor by which the image of a set must exceed the size of the original set. Formally,
Plünnecke's theorem is the following statement about Plünnecke graphs.
The proof of Plünnecke's theorem involves a technique known as the "tensor product trick", in addition to an application of Menger's theorem.
The Plünnecke–Ruzsa inequality is a fairly direct consequence of Plünnecke's theorem and the Ruzsa triangle inequality. Applying Plünnecke's theorem to the graph given in the example, at and , yields that if , then there exists so that . Applying this result once again with instead of , there exists so that . Then, by Ruzsa's triangle inequality (on ),
thus proving the Plünnecke–Ruzsa inequality.
See also
Additive combinatorics
Freiman's theorem
Ruzsa triangle inequality
References
Additive combinatorics | Plünnecke–Ruzsa inequality | [
"Mathematics"
] | 1,358 | [
"Additive combinatorics",
"Combinatorics"
] |
62,472,742 | https://en.wikipedia.org/wiki/Ruzsa%20triangle%20inequality | In additive combinatorics, the Ruzsa triangle inequality, also known as the Ruzsa difference triangle inequality to differentiate it from some of its variants, bounds the size of the difference of two sets in terms of the sizes of both their differences with a third set. It was proven by Imre Ruzsa (1996), and is so named for its resemblance to the triangle inequality. It is an important lemma in the proof of the Plünnecke-Ruzsa inequality.
Statement
If and are subsets of a group, then the sumset notation is used to denote . Similarly, denotes . Then, the Ruzsa triangle inequality states the following.
An alternate formulation involves the notion of the Ruzsa distance.
Definition. If and are finite subsets of a group, then the Ruzsa distance between these two sets, denoted , is defined to be
Then, the Ruzsa triangle inequality has the following equivalent formulation:
This formulation resembles the triangle inequality for a metric space; however, the Ruzsa distance does not define a metric space since is not always zero.
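Numerically, with the standard definition d(A, B) = log(|A − B| / √(|A||B|)), both the triangle inequality and the failure of d(A, A) = 0 can be checked by brute force (a sketch; the random sets are arbitrary examples):

```python
import math
import random

def difference_set(A, B):
    """A - B = {a - b : a in A, b in B}."""
    return {a - b for a in A for b in B}

def ruzsa_distance(A, B):
    """d(A, B) = log(|A - B| / sqrt(|A| * |B|))."""
    return math.log(len(difference_set(A, B)) / math.sqrt(len(A) * len(B)))

# d(A, A) > 0 even for an arithmetic progression, so d is not a metric:
A = {0, 1, 2, 3, 4}
print(ruzsa_distance(A, A) > 0)  # True: |A - A| = 9 > sqrt(|A| * |A|) = 5

# Randomized check of the triangle inequality d(A, C) <= d(A, B) + d(B, C):
random.seed(0)
for _ in range(200):
    A, B, C = (set(random.sample(range(40), 6)) for _ in range(3))
    assert ruzsa_distance(A, C) <= ruzsa_distance(A, B) + ruzsa_distance(B, C) + 1e-12
```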
Proof
To prove the statement, it suffices to construct an injection from the set to the set . Define a function as follows. For each choose a and a such that . By the definition of , this can always be done. Let be the function that sends to . For every point in the set , it must be the case that and . Hence, maps every point in to a distinct point in and is thus an injection. In particular, there must be at least as many points in as in . Therefore,
completing the proof.
Variants of the Ruzsa triangle inequality
The Ruzsa sum triangle inequality is a corollary of the Plünnecke-Ruzsa inequality (which is in turn proved using the ordinary Ruzsa triangle inequality).
Proof. The proof uses the following lemma from the proof of the Plünnecke-Ruzsa inequality.
Lemma. Let and be finite subsets of an abelian group . If is a nonempty subset that minimizes the value of , then for all finite subsets
If is the empty set, then the left side of the inequality becomes , so the inequality is true. Otherwise, let be a subset of that minimizes . Let . The definition of implies that . Because , applying the above lemma gives
Rearranging gives the Ruzsa sum triangle inequality.
By replacing and in the Ruzsa triangle inequality and the Ruzsa sum triangle inequality with and as needed, a more general result can be obtained: If , , and are finite subsets of an abelian group then
where all eight possible configurations of signs hold. These results are also sometimes known collectively as the Ruzsa triangle inequalities.
References
Additive combinatorics | Ruzsa triangle inequality | [
"Mathematics"
] | 576 | [
"Additive combinatorics",
"Combinatorics"
] |
62,474,444 | https://en.wikipedia.org/wiki/FACOM%20128 | The FACOM 128 was a relay-based electromechanical computer built by Fujitsu. Two models were made, namely the FACOM 128A, built in 1956, and the FACOM 128B, built in 1959. A fully working FACOM 128B is still maintained in operating condition by Fujitsu staff at a facility in Numazu in Shizuoka Prefecture.
The FACOM 128B processes numbers using a bi-quinary coded decimal representation.
See also
FACOM 100
FACOM
References
External links
Electro-mechanical computers
1950s computers | FACOM 128 | [
"Technology"
] | 113 | [
"Computing stubs"
] |
62,475,236 | https://en.wikipedia.org/wiki/Haplogroup%20M8 | In human mitochondrial genetics, Haplogroup M8 is a human mitochondrial DNA (mtDNA) haplogroup.
Origin
Haplogroup M8 is a descendant of haplogroup M. Haplogroup M8 is divided into subclades M8a, C and Z.
Distribution
It is an East Asian haplogroup. Today, haplogroup M8 is found at its highest frequency in indigenous populations of East Siberia such as the Evenks and Yukaghirs, and it is among the most common mtDNA haplogroups of the Yakuts and Tuvans. Haplogroup C, the largest of the three subclades, is widespread among Amerindians and the Indigenous peoples of East Siberia. Haplogroup Z, another of the three subclades, is common among Evens from Kamchatka (8/39 Z1a2a, 3/39 Z1a3, 11/39 = 28.2% Z total). Haplogroup M8a, the least studied of the three subclades, is relatively common among northern Han Chinese from Liaoning (16/317 = 5.0%).
Table of Frequencies by ethnic group
Tree
This phylogenetic tree of haplogroup M8 subclades is based on the paper by Mannis van Oven and Manfred Kayser Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation and subsequent published research.
M8
M8a
M8a1 - Ulch
M8a1a - Japanese
M8a2'3
M8a2 - Japanese, Han Chinese
M8a2-a* - Japanese, Russia
M8a2a'b (T152C!) - Japanese
M8a2a - Han Chinese
M8a2a1 - Japanese, Han Chinese(Hunan)
M8a2a1a1
M8a2a1b
M8a2a1c - Japanese
M8a2b - Japanese, Han Chinese (Shandong)
M8a2b1
M8a2b2 - Russia
M8a2c - Japanese, Han Chinese
M8a2d - Han Chinese
M8a2e - Ami(Taiwan Aborigines), Han Chinese (Taiwan)
M8a3 - Japanese, Han Chinese
M8a3a - Han Chinese
M8a3a1 - Han Chinese
CZ
C
C1
C1a - Ulch, Swedish
C1b - Amerindian
C1b1
C1b2
C1b3
C1b4
C1b5
C1b5a
C1b5b
C1b6
C1b7'10 (T16311C!)
C1b7
C1b7a
C1b10
C1b8
C1b9
C1b11
C1b12
C1b13
C1b13a
C1b13a1
C1b13b
C1b13c
C1b13c1
C1b13d
C1b13e
C1b14
C1c - Amerindian
C1c1
C1c1a
C1c1b
C1c2
C1c3
C1c4
C1c5
C1c6'7
C1c6
C1c7
C1c8
C1d - Amerindian
C1d1
C1d1a
C1d1a1
C1d1b
C1d1b1
C1d1c
C1d1c1
C1d1d
C1d2
C1d2a
C1d3
C1e - Amerindian
C1f - Amerindian
C4 - Siberian, Mongolian, Han Chinese
C4a'b'c
C4a
C4a1
C4a1a
C4a1a1
C4a1a1a
C4a1a2'3'4
C4a1a2
C4a1a2a
C4a1a3
C4a1a3a
C4a1a3a1
C4a1a3b
C4a1a3c
C4a1a3d
C4a1a4
C4a1a4a
C4a1a5
C4a1a6
C4a2
C4a2a
C4a2a1
C4a2a1a
C4a2a1b
C4a2b
C4a2b1
C4a2b2
C4a2b2a
C4a2c
C4a2c1
C4a2c2
C4a2c2a
C4b
C4b1
C4b1a
C4b1b
C4b2
C4b2a
C4b3
C4b3a
C4b3a1
C4b3b
C4b5
C4b6
C4b7
C4b8
C4b8a
C4c
C4c1
C4c1a
C4c1b
C4c2
C4d'e
C4d
C4e
C5 - Siberian, Mongolian, Han Chinese
C5a
C5a1
C5a2
C5a2a
C5a2b
C5a2b1
C5b
C5b1
C5b1a
C5b1a1
C5b1b
C5b1b1
C5c'd
C5c
C5c1
C5c1a
C5d
C5d1
C5d2
C7 - Han Chinese, Indo-China Peninsulan
C7a
C7a1
C7a1a
C7a1a1
C7a1a2
C7a1c
C7a1d
C7a2
C7a2a
C7b
Z
Z1'2'3'4'7
Z1 - Tofalar
Z1a - Tubalar
Z1a1
Z1a1a - Saami, Kets
Z1a1b - Nganasan, Estonian
Z1a2 - Ulch
Z1a2a - Nivkh
Z1a3 - Yakuts, Estonian
Z2 - Japanese
Z3 - Japanese
Z3a
Z3a1
Z3a1a - Han Chinese, Indian
Z3a2 - Indian
Z3b - Indian
Z3c - Persian(Iranian), Japanese
Z3d - Han Chinese, Taiwanese
Z4 - Han Chinese
Z4a - Japanese
Z4a1 - Han Chinese
Z4a1a
Z4a1a1 - Japanese
Z7 - Indian
Z5 - Japanese
Popular culture
The American figure skater Kristi Yamaguchi is a member of haplogroup M8a.
See also
Genealogical DNA test
Genetic Genealogy
Human mitochondrial genetics
Population Genetics
Human mitochondrial DNA haplogroups
Indigenous American genetic studies
References
Bibliography
External links
General
Mannis van Oven's Phylotree
Haplogroup M8
Ian Logan's Mitochondrial DNA Site: Haplogroup M8
M8
Bioinformatics | Haplogroup M8 | [
"Engineering",
"Biology"
] | 1,514 | [
"Bioinformatics",
"Biological engineering"
] |
62,475,939 | https://en.wikipedia.org/wiki/Equal-area%20projection | In cartography, an equivalent, authalic, or equal-area projection is a map projection that preserves relative area measure between any and all map regions. Equivalent projections are widely used for thematic maps showing scenario distribution such as population, farmland distribution, forested areas, and so forth, because an equal-area map does not change apparent density of the phenomenon being mapped.
By Gauss's Theorema Egregium, an equal-area projection cannot be conformal. This implies that an equal-area projection inevitably distorts shapes. Even though a point or points or a path or paths on a map might have no distortion, the greater the area of the region being mapped, the greater and more obvious the distortion of shapes inevitably becomes.
Description
In order for a map projection of the sphere to be equal-area, its generating formulae must meet this Cauchy-Riemann-like condition:
where is constant throughout the map. Here, represents latitude; represents longitude; and and are the projected (planar) coordinates for a given coordinate pair.
For example, the sinusoidal projection is a very simple equal-area projection. Its generating formulae are:
where is the radius of the globe. Computing the partial derivatives,
and so
with taking the value of the constant .
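The sinusoidal formulae, and the equal-area condition they satisfy, can be checked numerically (a sketch; the unit radius R = 1 is an arbitrary choice, and any R works with s = R²):

```python
import math

R = 1.0  # unit-radius globe; any R works, with s = R**2

def sinusoidal(lat, lon):
    """Sinusoidal projection: x = R * lon * cos(lat), y = R * lat (radians)."""
    return R * lon * math.cos(lat), R * lat

def area_scale(lat, lon, h=1e-6):
    """Numerical Jacobian determinant of (x, y) with respect to (lon, lat)."""
    x_lon = (sinusoidal(lat, lon + h)[0] - sinusoidal(lat, lon - h)[0]) / (2 * h)
    y_lon = (sinusoidal(lat, lon + h)[1] - sinusoidal(lat, lon - h)[1]) / (2 * h)
    x_lat = (sinusoidal(lat + h, lon)[0] - sinusoidal(lat - h, lon)[0]) / (2 * h)
    y_lat = (sinusoidal(lat + h, lon)[1] - sinusoidal(lat - h, lon)[1]) / (2 * h)
    return x_lon * y_lat - x_lat * y_lon

# Equal-area condition: Jacobian / cos(lat) is the constant s = R**2 everywhere.
for lat in (0.0, 0.4, 0.9, 1.3):
    for lon in (-2.0, 0.3, 1.5):
        assert abs(area_scale(lat, lon) / math.cos(lat) - R**2) < 1e-6
```

Because the Jacobian divided by cos(latitude) is constant over the whole map, every region's projected area is the same fixed multiple of its area on the globe.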
For an equal-area map of the ellipsoid, the corresponding differential condition that must be met is:
where is the eccentricity of the ellipsoid of revolution.
Statistical grid
The term "statistical grid" refers to a discrete grid (global or local) of an equal-area surface representation, used for data visualization, geocode and statistical spatial analysis.
List of equal-area projections
These are some projections that preserve area:
Azimuthal
Lambert azimuthal equal-area
Wiechel (pseudoazimuthal)
Conic
Albers
Lambert equal-area conic projection
Pseudoconical
Bonne
Bottomley
Werner
Cylindrical (with latitude of no distortion)
Lambert cylindrical equal-area (0°)
Behrmann (30°)
Hobo–Dyer (37°30′)
Gall–Peters (45°)
Pseudocylindrical
Boggs eumorphic
Collignon
Eckert II, IV and VI
Equal Earth
Goode's homolosine
Mollweide
Sinusoidal
Tobler hyperelliptical
Other
Eckert-Greifendorff
McBryde-Thomas Flat-Polar Quartic Projection
Hammer
Strebe 1995
Snyder equal-area projection, used for geodesic grids.
See also
Authalic latitude
Authalic radius
Equiareal map (mathematics)
Measure-preserving dynamical system
Geodesic polygon area
References
Map projections | Equal-area projection | [
"Mathematics"
] | 549 | [
"Map projections",
"Coordinate systems"
] |
69,097,976 | https://en.wikipedia.org/wiki/Madeleine%20Akrich | Madeleine Akrich (born 4 March 1959) is a French sociologist of technology. She served as the director of the Center for the Sociology of Innovation at Mines ParisTech from 2003 to 2013. She is known for developing actor–network theory (ANT) with Bruno Latour, Michel Callon, John Law and others.
Research
Akrich's work concerns the sociology of technology and has been influential in Science and technology studies (STS). She developed actor–network theory, a theoretical approach to social analysis, alongside Michel Callon, Bruno Latour, John Law, and others.
Akrich primarily studies users' relationships with various technologies, with a focus on technologies of obstetric medicine and, in recent collaboration with Cécile Méadel, online health discussion forums.
Script analysis is another STS methodology developed by Akrich. The term "script" is "a metaphor for the 'instruction manual' she claims is inscribed in an artifact". This is related to Don Norman's concept of affordances, but is more comprehensive; it has been applied both in STS and in adjacent disciplines such as design, internet research and management.
In 2016, Akrich received the CNRS Silver Medal.
Notable publications
Madeleine Akrich, Cécile Méadel and Vololona Rabeharisoa, Se mobiliser pour la santé. Des associations s'expriment, Paris, Presses des mines, 2009.
Madeleine Akrich & Cécile Méadel, "De l'interaction à l'engagement: les collectifs électroniques, nouveaux militants dans le champ de la santé," Hermès, n°47, 2007.
Madeleine Akrich, Bruno Latour, & Michel Callon (ed.), Sociologie de la traduction : textes fondateurs, Paris, Mines Paris, les Presses, "Sciences sociales," 2006.
Madeleine Akrich, Vololona Rabeharisoa, P. Jamet, Cécile Méadel & F. Vincent (ed.), La Griffe de l'ours. Débats et controverses en environnement, Paris, Presses de l'École des Mines, 2002.
Madeleine Akrich & Françoise Laborie, De la contraception à l'enfantement. L'offre technologique en question, Paris; Montréal (Québec), l'Harmattan, 1999.
Madeleine Akrich & Bernike Pasveer, Comment la naissance vient aux femmes. Les techniques de l'accouchement en France et aux Pays-Bas, Le Plessis-Robinson, Synthélabo, "Les Empêcheurs de penser en rond," 1996.
Madeleine Akrich, L. Bibard, Michel Callon et al. (ed.), Ces réseaux que la raison ignore, Paris, l'Harmattan, "Logiques sociales," 1992.
Madeleine Akrich, "The De-Scription of Technical Objects" in Shaping Technology / Building Society: Studies in Sociotechnical Change, 1992.
References
External links
Research page on CSI
French women sociologists
Science and technology studies scholars
Actor-network theory
Sociologists of science
French philosophers of technology
1959 births
People from Boulogne-Billancourt
Living people
Academic staff of Mines Paris - PSL | Madeleine Akrich | [
"Technology"
] | 680 | [
"Actor-network theory",
"Science and technology studies",
"Science and technology studies scholars"
] |
69,103,599 | https://en.wikipedia.org/wiki/Khyber%20Pakhtunkhwa%20Information%20Technology%20Board | The Khyber Pakhtunkhwa Information Technology Board (KPITB) is an autonomous public-sector organization of the Khyber Pakhtunkhwa government, established in May 2011.
References
External links
Departments of Government of Khyber Pakhtunkhwa
Information technology in Pakistan
Information technology organizations
Government agencies of Khyber Pakhtunkhwa | Khyber Pakhtunkhwa Information Technology Board | [
"Technology"
] | 60 | [
"Information technology",
"Information technology organizations"
] |
69,103,946 | https://en.wikipedia.org/wiki/Warburg%E2%80%93Christian%20method | The Warburg–Christian method is an ultraviolet spectroscopic protein and nucleic acid assay method based on the absorbance of UV light at 260 nm and 280 nm wavelengths. Proteins generally absorb light at 280 nanometers due to the presence of tryptophan and tyrosine. Nucleic acids absorb more strongly at 260 nm, primarily due to their purine and pyrimidine bases. The Warburg–Christian method combines measurements at these wavelengths to estimate the amounts of protein and nucleic acid present. The original description of the method appeared in 1941.
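The combination of the two readings is often expressed by the classical Warburg–Christian estimate, protein (mg/mL) ≈ 1.55·A280 − 0.76·A260. The coefficients here are the commonly quoted values for a 1 cm path length and should be treated as an assumption of this sketch, since published variants differ slightly:

```python
def warburg_christian_protein(a280, a260):
    """Estimate protein concentration (mg/mL) from UV absorbance readings.

    Uses the commonly quoted Warburg–Christian coefficients
    (protein ≈ 1.55 * A280 - 0.76 * A260); assumes a clear, dilute
    sample in a 1 cm path-length cuvette.
    """
    return 1.55 * a280 - 0.76 * a260

# Example reading: A280 = 0.60, A260 = 0.40
print(round(warburg_christian_protein(0.60, 0.40), 3))  # 0.626 (mg/mL)
```

The subtraction term corrects for nucleic acid contamination, which raises A260 relative to A280.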
The method is named for its creators, the German cancer researcher Otto Heinrich Warburg, Nobel Prize winner, and his employee Walter Christian of the Kaiser Wilhelm Institute for Biology in Berlin.
References
Protein methods
Analytical chemistry
Chemical tests | Warburg–Christian method | [
"Chemistry",
"Biology"
] | 157 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Chemical tests",
"nan",
"Analytical chemistry stubs"
] |
69,105,066 | https://en.wikipedia.org/wiki/Problem%20of%20the%20Nile | The problem of the Nile is a mathematical problem related to equal partitions of measures. The problem was first presented by Ronald Fisher in 1936–1938. It is presented by Dubins and Spanier in the following words: "Each year, the Nile would flood, thereby irrigating or perhaps devastating parts of the agricultural land of a predynastic Egyptian village. The value of different portions of the land would depend upon the height of the flood. In question was the possibility of giving to each of the k residents a piece of land whose value would be 1/k of the total land value, no matter what the height of the flood."
Formally, for each height h, there is a nonatomic measure vh on the land, which represents the land values when the height of the Nile is h.
In general, there can be infinitely many different heights, and hence, infinitely many different measures. William Feller showed in 1938 that a solution for the general case might not exist.
When the number of different heights (= measures) is finite, a solution always exists. This was first noted by Jerzy Neyman in 1946, and proved as a corollary of the Dubins–Spanier theorems in 1961. The problem in this case is called the exact division or consensus division problem.
Related problems
A related problem is the problem of similar regions studied by Neyman and Pearson. Here, instead of partitioning the land into k subsets, one only looks for a single subset, whose value for each measure vh is r times the total value (where r is a given constant in [0,1]). From existence perspective, the problem is equivalent to the problem of the Nile, as noted by Georges Darmois. However, they differ in the number of required cuts. The optimal number of required cuts for any r is described in the Stromquist–Woodall theorem.
References
Fair division | Problem of the Nile | [
"Mathematics"
] | 392 | [
"Recreational mathematics",
"Game theory",
"Fair division"
] |
69,105,220 | https://en.wikipedia.org/wiki/Jason%20Barnes%20%28drummer%29 | Jason Barnes (born 1989 in Guam) is an American amputee drummer with a robotic arm.
Barnes started his career playing in a music band. In 2012, he lost one of his arms in an accident. After the amputation, Barnes built his own prosthetic arm in an attempt to keep playing the drums, and he was accepted into the drumming program at the Atlanta Institute of Music.
Later on he started working with Gil Weinberg at the Georgia Institute of Technology Institute for Robotics and Intelligent Machines to develop a cyborg arm that enables him to play his drum kit.
In 2015 he took part in the Geek Picnic festival in Moscow. He also took part at Robotronica in Brisbane, Australia.
The experience gained from working with Barnes is now helping to create future technology for people with disabilities.
References
External links
Jason Barnes – website
Leading change
21st-century Guamanian people
American drummers
American amputees
Amputee musicians
Cyborgs
1990 births
Living people
American musicians with disabilities | Jason Barnes (drummer) | [
"Biology"
] | 198 | [
"Cyborgs"
] |
69,106,538 | https://en.wikipedia.org/wiki/Cashierless%20store | A cashierless store (also called a till-less store, checkout-free store or just walk out store) is a store which allows customers to shop their products and leave without having to wait in line and pay at a checkout. Cashierless stores can currently be found in the United States, Asia, Europe, the Middle East, and Africa.
Process
The process of shopping in a cashierless store can be broken down into four phases: the before-purchase phase, the check-in phase, the product selection phase, and the check-out phase. In the before-purchase phase, an app may need to be downloaded. In the check-in phase, a bar code from the store’s app may need to be scanned in order to enter the store. In the product selection phase, products can usually be selected without taking any preceding actions, but some stores require customers to scan a bar-code on the product or tap a screen to select products. In the check-out phase, stores utilize sensor fusion and deep learning for computer vision to allow customers to walk out with their products without waiting in line at a register.
Technology
Sensor fusion
In cashierless stores, most systems mark each customer with defining features and use cameras and pressure sensors together to keep track of where each customer goes and takes from the shelves. Sensor fusion gathers information from different sensors and compiles it to create an accurate representation of what is happening and the relative positions of objects in an area at a specific time. Sensor fusion is often more accurate than single sensors since the separate measurements can be used to double-check and narrow the margin of error.
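As an illustration of why combining sensors narrows the margin of error, here is a minimal inverse-variance weighting sketch (the sensor readings, variances, and scenario are invented for the example and do not describe any particular store's system):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent noisy estimates by inverse-variance weighting."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused, fused_var

# A camera-based weight estimate (noisier) and a shelf load cell (more precise)
# both measure the same product's weight in grams:
estimate, variance = fuse(498.0, 400.0, 503.0, 100.0)
print(round(estimate, 6), round(variance, 6))  # 502.0 80.0
```

The fused variance (80) is smaller than either sensor's alone (400 and 100), which is the "double-check and narrow the margin of error" effect described above.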
Deep learning for computer vision
Deep learning for computer vision is used in cashierless stores to track customers and products. Computer vision is the computer’s ability to interpret real-life images and video feeds. Computer vision uses deep learning to make interpretations of the real world. Deep learning allows computers to learn from large amounts of data similarly to how the human brain would. Deep learning for computer vision tracks customers and products using object detection, multitarget tracking, and pose estimation. Object detection is the process of identifying objects within an image. Object detection is useful in identifying instances when a customer picks up and puts down an object, or for identifying the products on each of the shelves. Multitarget tracking approximately locates a moving person within consecutive frames of images. Multitarget tracking allows stores to keep track of each customer and their actions, like what products they picked up and put back and when they entered and exited the store. Pose estimation is the process of using an image to track a person using the positions of their body parts, like their head, hands, and wrists. Similar to multitarget tracking, pose estimation allows stores to keep track of customers providing information for the computer to determine which customer interacted with the store, like when a product is grabbed.
Human assistance
Inadequacies in Amazon's automatic tracking required over a thousand employees in India to label videos to allocate purchases to shoppers. As a result, the company in 2024 announced it would remove its "Just Walk Out" technology from its Amazon Fresh stores.
Regions
United States
In 2016, Amazon announced the opening of its first cashierless store, Amazon Go, which opened in 2018. To shop, customers were required to have the Amazon app (formerly a specific Amazon Go app) so that they could be billed for their purchases afterward through their accounts. Amazon introduced its cashierless technology in two Whole Foods stores located in Washington, D.C., and Sherman Oaks, California in 2022.
In January 2021, the Hudson Group, a travel retailer, announced that it would be implementing Amazon's Just Walk Out technology in select airport convenience stores, branded as Hudson Nonstop.
In June 2021, San Diego’s first fully automated and cashierless store, Valet Market, was opened to the public. Valet Market is powered by the technology company Accel Robotics, which utilizes an advanced machine-learning AI platform that optimizes efficiency in local markets. There are three ways of shopping at Valet Market: Hub Store, Satellite Store, and Last Step Delivery.
Cashierless stores have also been established in sporting and event arenas. In March 2021, Delaware North opened two cashierless MRKT convenience stores at TD Garden in Boston (only open during arena events). With the opening of Seattle's Climate Pledge Arena in October 2021, four stores were equipped with Amazon's cashierless technology. In April 2022, Daikin Park in Houston, Texas became the first Major League Baseball stadium to incorporate cashierless stores, installing Amazon's technology at two of its concession stands. Minor league Polar Park in Worcester, Massachusetts launched similar technology from startup Standard AI in April 2022.
Asia
Japan introduced cashierless shopping to their country by implementing New Zealand start-up Imagr's scalable autonomous checkout technology. However, Imagr's technology in Japan is not fully cashierless. Customers have to go to a checkout operator to accommodate Japanese customers' shopping behavior of paying with cash and having a cashier. Cashierless stores can also be found at a subway station in Tokyo. In addition, other convenience store chains are implementing their cashierless stores. Stores like 7-Eleven and Lawson are working towards creating cashierless stores.
In Singapore, the first cashierless convenience store named Cheers launched in 2017, where it saves 180 man hours per week through their autonomous format. Cheers is the first convenience store that allows customers to pay with Nets by QR code.
In China, the competition with the US in the adoption and growth of cashierless stores led to rapid developments. Taobao launched the "Tao Cafe" pop-up cashierless store in Hangzhou in July 2017, and Alibaba opened a cashierless experience store in January 2018. Other Chinese retail giants such as Jingdong and WeChat also opened cashierless flash stores. However, by 2018, many of these stores faced closures and bankruptcies. The online retailer JD.com had announced 5,000 virtual shelves in July 2018 but retracted from this engagement 6 months later. One reason for the rapid downfall was the failure of the pioneering stores to create an authentic "just-walk-out" experience, which diminished the convenience of shopping there. The focus on technology over customer experience contributed to the decline.
Europe
In 2019, Sainsbury's opened the first cashierless store in the United Kingdom. However, it closed a few months later due to customer dissatisfaction with the lack of additional payment options. In September 2021, Aldi UK announced its first cashierless store in Greenwich. In October 2021, Tesco introduced its cashierless store called the GetGo store in central London, following a small trial of a similar store at the Tesco head office in Welwyn Garden City.
A Finnish company called Korttelikauppa opened a cashierless store in Helsinki in 2020, followed by six more stores in 2021 in Helsinki, Espoo, and Vantaa. Also in 2021, another Finnish convenience retailer, R-kioski, opened a cashierless store called R-kioski Go! in Helsinki.
In the Netherlands, Aldi Nord opened the first cashierless store in Utrecht as a 12-month trial in 2022.
In November 2023, Carrefour launched a pilot store in Paris, France, called Flash 10/10 ("10 seconds to shop and 10 seconds to pay") using AiFi’s technology. It stocks 900 SKUs in 50 m². Carrefour Flash was previously tested at the head office in Massy over more than a year, during which the Innovation team was able to refine the technology and adapt the concept based on feedback from the employees using it on a daily basis.
Middle East
In September 2021, the French retailer Carrefour, through its partner Majid Al Futtaim Group, opened the first cashierless store in the Middle East, called Carrefour City+, located in the Mall of the Emirates in Dubai.
In 2019, Israel opened its first cashierless store, Nowpet, a cashierless pet shop, which uses technology developed by the startup Cyb-Org. Cyb-Org’s sensor technology differs from Amazon Go’s computer-vision approach by detecting the weight on the shelf whenever customers grab products. Cyb-Org has also collaborated with Rami Levy, a popular grocery chain in Israel, to implement cashierless fingerprint scanning as a method of purchase, which costs much less than Amazon Go’s checkout mechanism on its shopping carts.
Africa
Checkers, a grocery chain owned by Shoprite Holdings, is testing South Africa’s first cashierless food store, which operates without checkout counters using technology from Shoprite Holdings.
See also
Amazon Go
Automated convenience store
Automated retail
Cashless society
Technological unemployment
References
Payment methods in retailing
Retail formats | Cashierless store | [
"Technology"
] | 1,827 | [
"Information systems",
"Self-service"
] |
69,109,827 | https://en.wikipedia.org/wiki/2-%28Trimethylsilyl%29ethoxymethyl%20chloride | 2-(Trimethylsilyl)ethoxymethyl chloride (SEM-Cl) is an organochlorine compound with the formula C6H15ClOSi, which was developed by Bruce H. Lipshutz during his work on the synthesis of N-methylmaysenine. It is used to protect hydroxyl groups; the resulting SEM ethers can be cleaved selectively under mild conditions with fluoride sources in organic solvents. Tetrabutylammonium fluoride and caesium fluoride are typically used as deprotection reagents. Alternatives such as magnesium bromide, lithium tetrafluoroborate and boron trifluoride etherate have also been developed to remove the SEM group.
References
Further reading
Organochlorides
Trimethylsilyl compounds | 2-(Trimethylsilyl)ethoxymethyl chloride | [
"Chemistry"
] | 172 | [
"Functional groups",
"Trimethylsilyl compounds"
] |
69,109,885 | https://en.wikipedia.org/wiki/Fiat%20S.76%20engine | The engine used in the Fiat S.76 land speed record vehicle is a large-displacement, four-cylinder engine, designed and developed by Fiat, in 1910.
Overview
The S.76 uses a four-cylinder engine with a bore and stroke of 190 mm × 250 mm (7.48 in × 9.84 in), producing its rated power at 1,400 rpm. It has four valves per cylinder (three on the airship version), trembler-coil starting, two spark plugs per cylinder (three on the airship version), ignition by a Bosch DR4/4 high-voltage magneto, and water cooling.
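The total swept volume follows from the bore and stroke figures above; a back-of-the-envelope check (purely geometric, for illustration):

```python
import math

# Swept volume of the S.76 engine from its quoted bore and stroke
# (190 mm x 250 mm, four cylinders).
bore_mm, stroke_mm, cylinders = 190.0, 250.0, 4

swept_per_cyl_cc = math.pi / 4 * (bore_mm / 10) ** 2 * (stroke_mm / 10)
total_litres = cylinders * swept_per_cyl_cc / 1000

print(f"{total_litres:.1f} L")  # roughly 28.4 L
```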
After the two car engines were built in 1910 and 1911, Fiat built a similar engine for an airship, changing to three valves per cylinder (two exhaust and one intake) and three spark plugs per cylinder (the car engine had two). That engine was built from 1912 to 1913, and was used on Forlanini airships.
In November 2014, Pittaway and a team of enthusiasts, including Leonardo E. M. Sordi, an Italian Air Force consultant and expert in historic mechanics and magnetos, returned the S76 engine to working order. The work included rebuilding a full ignition system (including spark plugs), making a full set of shell engine bearings with white metal, and reworking the original crankcase No. 2 to realign the bearing supports, deformed over more than 100 years of history; more work was still needed before the car was fully operational. This was completed in 2015, and the "Beast of Turin" was displayed and driven for the first time in almost a century at the Goodwood Festival of Speed between 23 and 26 June 2015.
Applications
Fiat S76 Record
References
Engines by model
Fiat engines
Gasoline engines by model
Straight-four engines | Fiat S.76 engine | [
"Technology"
] | 361 | [
"Engines",
"Engines by model"
] |
56,406,451 | https://en.wikipedia.org/wiki/NGC%206043 | NGC 6043 is a lenticular galaxy located about 444 million light-years away in the constellation Hercules. NGC 6043 was discovered by astronomer Lewis Swift on June 27, 1886. The galaxy is a member of the Hercules Cluster.
See also
List of NGC objects (6001–7000)
References
External links
Hercules (constellation)
Lenticular galaxies
6043
57019
Astronomical objects discovered in 1886
Hercules Cluster
Discoveries by Lewis Swift | NGC 6043 | [
"Astronomy"
] | 87 | [
"Hercules (constellation)",
"Constellations"
] |
56,406,996 | https://en.wikipedia.org/wiki/Variable%20refresh%20rate | Variable refresh rate (VRR) refers to a dynamic display that can continuously and seamlessly change its refresh rate without user input. A display supporting variable refresh rate usually supports a specific range of refresh rates (e.g. 30 Hz through 144 Hz), called the VRR range, within which the refresh rate can vary continuously and seamlessly.
Purpose
On displays with a fixed refresh rate, a frame can only be shown on the screen at specific, evenly spaced intervals. If a new frame is not ready when an interval arrives, the old frame is held on screen until the next interval (stutter), or a mixture of the old frame and the completed part of the new frame is shown (tearing). Conversely, if the frame is ready before the interval arrives, it is not shown until that interval arrives, adding latency.
Variable refresh rates eliminate these issues by keeping the display's refresh rate in sync with the frame rate of the video input, making motion appear smoother. Although VRR is strongly associated with video games, whose unpredictable, discontinuous frame rates benefit most from the technology, it is also useful for media whose frame rate is fixed and known in advance, such as film and video: syncing the refresh rate to industry-standard frame rates (24, 30, and 60 fps) likewise helps eliminate screen tearing. VRR also has uses in power management, temporarily lowering the refresh rate of a display when there is little movement on the screen to save power.
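The scheduling difference can be sketched in a few lines (illustrative Python; the timings and the 30–144 Hz VRR range are example values, not any particular panel's behavior):

```python
# Compare when frames appear on a fixed-rate display (with v-sync) versus a
# VRR display, for a GPU producing frames at uneven intervals. Times in ms.

def fixed_rate_present(frame_ready_times, refresh_hz=60.0):
    """On a fixed-rate panel with v-sync, a frame waits for the next evenly
    spaced refresh, and at most one frame is shown per refresh."""
    period = 1000.0 / refresh_hz
    out, last = [], 0.0
    for t in frame_ready_times:
        slot = -(-t // period) * period      # next refresh at or after t
        slot = max(slot, last + period)      # one frame per refresh
        out.append(slot)
        last = slot
    return out

def vrr_present(frame_ready_times, min_hz=30.0, max_hz=144.0):
    """A VRR panel refreshes when the frame is ready, as long as the implied
    rate stays inside the panel's VRR range."""
    min_gap, max_gap = 1000.0 / max_hz, 1000.0 / min_hz
    out, last = [], 0.0
    for t in frame_ready_times:
        gap = min(max(t - last, min_gap), max_gap)
        last = last + gap
        out.append(last)
    return out

ready = [10.0, 35.0, 42.0, 80.0]     # ms at which frames finish rendering
print(fixed_rate_present(ready))     # quantized to ~16.7 ms multiples
print(vrr_present(ready))            # tracks the frame times closely
```

Note how the fixed-rate schedule pushes the third frame a full refresh later (stutter), while the VRR schedule presents it almost immediately.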
History
Vector displays had a variable refresh rate on their cathode-ray tube (CRT), depending on the number of vectors on the screen, since more vectors took more time to draw on their screen.
Since the 2010s, raster displays gained several industry standards for variable refresh rates. Historically, there was only a limited selection of fixed refresh rates for common display modes.
Implementations
Variable refresh rate display technologies include several industry standards and proprietary standards:
AMD FreeSync
Nvidia G-Sync
DisplayPort 1.2a's optional Adaptive-Sync feature
HDMI 2.1 Variable Refresh Rate (VRR)
Apple ProMotion
Qualcomm Q-Sync
References
External links
TestUFO Animation: Variable Refresh Rate Simulation
Graphics hardware
Temporal rates | Variable refresh rate | [
"Physics"
] | 491 | [
"Temporal quantities",
"Temporal rates",
"Physical quantities"
] |
56,407,123 | https://en.wikipedia.org/wiki/National%20Oil%20and%20Gas%20Authority | The National Oil and Gas Authority (NOGA) was the governmental body in Bahrain responsible for developing and implementing the government policy for exploiting the country's oil and gas resources.
Nogaholding was established in August 2007 as a subsidiary unit of NOGA. Nogaholding was intended to concentrate and refocus NOGA's oil, gas, and petrochemical development activities.
NOGA was abolished in September 2021, and all its activities, responsibilities, and personnel transferred to Bahrain's Ministry of Oil. Nogaholding became a semi-independent agency within the Ministry of Oil.
References
External links
Official website (Ministry of Oil and Gas)
Government ministries of Bahrain
Regulation in Bahrain
Energy regulatory authorities | National Oil and Gas Authority | [
"Engineering"
] | 141 | [
"Energy organizations",
"Energy ministries"
] |
56,407,447 | https://en.wikipedia.org/wiki/Runtime%20application%20self-protection | Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software. The technology differs from perimeter-based protections such as firewalls, which can only detect and block attacks using network information, without contextual awareness. RASP technology is said to improve the security of software by monitoring its inputs and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering. RASP-protected applications rely less on external devices like firewalls to provide runtime security protection. When a threat is detected, RASP can prevent exploitation and possibly take other actions, including terminating a user's session, shutting the application down, alerting security personnel and sending a warning to the user. RASP aims to close the gap left by application security testing and network perimeter controls, neither of which has enough insight into real-time data and event flows to prevent vulnerabilities from slipping through the review process or to block new threats that were unforeseen during development.
Implementation
RASP can be integrated as a framework or module that runs in conjunction with a program's code, libraries and system calls. The technology can also be implemented as a virtualization layer. RASP is similar to interactive application security testing (IAST); the key difference is that IAST focuses on identifying vulnerabilities within applications, while RASP focuses on protecting against cybersecurity attacks that may take advantage of those vulnerabilities or other attack vectors.
Deployment options
RASP solutions can be deployed in two different ways: monitor or protection mode. In monitor mode, the RASP solution reports on web application attacks but does not block any attack. In protection mode, the RASP solution reports and blocks web application attacks.
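The two modes can be illustrated with a toy instrumentation hook (hypothetical code, not any vendor's API; the injection pattern is deliberately simplistic):

```python
import re

# Toy RASP-style hook: the same instrumentation point either reports a
# suspected attack (monitor mode) or reports and blocks it (protect mode).
# The SQL-injection pattern below is intentionally naive, for illustration.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

class RaspHook:
    def __init__(self, mode="monitor"):       # "monitor" or "protect"
        self.mode = mode
        self.alerts = []

    def guard(self, func):
        """Wrap a sink (e.g. a query builder) with input inspection."""
        def wrapped(user_input):
            if SQLI_PATTERN.search(user_input):
                self.alerts.append(f"possible SQLi in {user_input!r}")
                if self.mode == "protect":
                    raise PermissionError("request blocked by RASP")
            return func(user_input)
        return wrapped

rasp = RaspHook(mode="protect")

@rasp.guard
def lookup(name):
    return f"SELECT * FROM users WHERE name = '{name}'"

try:
    lookup("alice' OR 1=1 --")
except PermissionError as e:
    print(e)                 # protect mode: the call is blocked
print(len(rasp.alerts))      # the attempt is still reported
```

In monitor mode the same hook would only append to `alerts` and let the call through, which is why monitor mode is often used during initial rollout to tune rules before switching to protection mode.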
Future research
Pursue "integrated" approaches that support both development-time and runtime
Explore decentralized coordination, planning, and optimization approaches
Explore quantitative and qualitative approaches to assess overall security posture
See also
Runtime verification
Runtime error detection
Dynamic program analysis
References
Cybersecurity engineering | Runtime application self-protection | [
"Technology",
"Engineering"
] | 431 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer engineering"
] |
56,407,685 | https://en.wikipedia.org/wiki/America%20%28Cattelan%29 | America is a sculpture created in 2016 by the Italian artist Maurizio Cattelan. An example of satirical participatory art, it is a fully functioning toilet made of 18-karat solid gold. It was stolen in 2019 from Blenheim Palace, where it was exhibited on loan from the permanent collection of the Solomon R. Guggenheim Museum.
Exhibitions
Solomon R. Guggenheim Museum
Cattelan created the toilet in 2016 for the Solomon R. Guggenheim Museum in New York City. It was made in a foundry in Florence, cast in several parts that were welded together. Made to look like the museum's other Kohler toilets, it was installed in one of the museum's bathrooms for visitors to use. A special cleaning routine was put in place. The museum stated that the work was paid for with private funds.
According to the museum, over 100,000 people waited in line to use America, and a security guard was posted outside the bathroom. According to Cattelan, the work was made of an amount of gold that, as bullion, was valued at more than four million dollars in September 2019; as an artwork, it has been estimated at as much as six million.
In September 2017, when the museum declined a White House request to loan its 1888 Van Gogh painting Landscape with Snow for then President Donald Trump's private rooms, curator Nancy Spector offered to loan America instead. No reply from the White House was reported.
Blenheim Palace
In September 2019, America was installed at Blenheim Palace in the United Kingdom, where it was available for use as part of an exhibition of Cattelan's works. It was placed in a water closet formerly used by Winston Churchill.
Theft
On 14 September 2019, the sculpture was stolen from Blenheim Palace. A representative of the palace previously said that because America was plumbed in, and potential thieves would be aware of its use, security was not much of an issue. Because it had been connected to the building's water pipes, the theft caused structural damage and flooding to the World Heritage Site. Two men were arrested and released in connection with the incident. Cattelan commented: "I always liked heist movies and finally I'm in one of them."
Blenheim's insurance company has stated that up to approximately $124,000 can be paid in reward for the return of the toilet. In mid-October, three new arrests were made in connection to the theft. As of August 2023, the total number of arrests was six, all of whom have been released without charge. In November 2023, the Crown Prosecution Service charged four men with the theft of the toilet. The men have pleaded not guilty and a trial is scheduled for February 2025.
Speculation about the fate of the toilet includes that it was melted down, that it has been hidden fairly close to Blenheim, and that the theft is a prank by Cattelan. Local imitations of the work have been made, including one that was itself stolen.
Interpretation
The Guggenheim museum linked the meaning of the sculpture to the career of Donald Trump, writing in September 2016 that "the aesthetics of this 'throne' recall nothing so much as the gilded excess of Trump's real-estate ventures and private residences". Cattelan himself declined to give an interpretation of his work, which he conceived of before Trump's presidential candidacy. He said that the connection to Trump is "another layer, but it shouldn’t be the only one."
The work has also been described as an interpretation of Marcel Duchamp's 1917 sculpture Fountain. Art critic Calvin Tomkins called it Cattelan's most beautiful artwork, and said "for viewers who crave a one-to-one relationship with art, this piece cannot be topped." Art critic Jonathan Jones, using the work at Blenheim Palace, opined that it felt "Much like peeing on porcelain. But here, among all the photos of young Winston, it also feels like pissing on British history." He also found the sculpture reminiscent of then prime minister Boris Johnson's hair.
Other gold toilets
In 2002, a Hong Kong businessman included two gold toilets in what he called a shrine to Lenin. He referred to a comment by Lenin about the use of gold after the victory of socialism.
In 2019 the Hong Kong jewellery firm Coronet displayed a gold toilet in Shanghai. This toilet had a bulletproof seat containing more than 40,000 small diamonds.
Cattelan said that he made three gold toilets.
See also
List of heists in the United Kingdom
References
Notes
External links
Maurizio Cattelan: America at guggenheim.org
2016 sculptures
Sculptures by Maurizio Cattelan
Gold sculptures in the United States
Lost sculptures
Sculptures in New York City
Sculptures in the United Kingdom
Stolen works of art
Toilets
Works about Donald Trump
Donald Trump in popular culture
Robberies in the United Kingdom | America (Cattelan) | [
"Biology"
] | 994 | [
"Excretion",
"Toilets"
] |
56,408,134 | https://en.wikipedia.org/wiki/Sheaf%20on%20an%20algebraic%20stack | In algebraic geometry, a quasi-coherent sheaf on an algebraic stack is a generalization of a quasi-coherent sheaf on a scheme. The most concrete description is that it is the data consisting of, for each scheme S in the base category and each object of the stack over S, a quasi-coherent sheaf on S, together with maps implementing the compatibility conditions among these sheaves.
For a Deligne–Mumford stack, there is a simpler description in terms of a presentation U → X: a quasi-coherent sheaf on the stack is one obtained by descending a quasi-coherent sheaf on U. A quasi-coherent sheaf on a Deligne–Mumford stack generalizes an orbibundle (in a sense).
Constructible sheaves (e.g., as ℓ-adic sheaves) can also be defined on an algebraic stack and they appear as coefficients of cohomology of a stack.
Definition
The following definition is standard.
Let X be a category fibered in groupoids over the category of schemes of finite type over a field, with structure functor p. Then a quasi-coherent sheaf on X is the data consisting of:
for each object ξ of X lying over a scheme T, a quasi-coherent sheaf F_ξ on T,
for each morphism f: ξ′ → ξ in X lying over f: T′ → T in the base category, an isomorphism
φ_f : f*F_ξ ≅ F_ξ′,
satisfying the cocycle condition: for each pair of composable morphisms g: ξ″ → ξ′ and f: ξ′ → ξ, φ_{f∘g} equals φ_g ∘ g*φ_f.
(cf. equivariant sheaf.)
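Written out in the conventional symbols (these are the standard choices, assumed here rather than taken from the text: 𝒳 the fibered category, ξ, ξ′, ξ″ objects over schemes, ℱ_ξ the attached sheaf), the descent data and cocycle condition read:

```latex
% Conventional notation assumed: \mathcal{X} is the fibered category,
% \xi, \xi', \xi'' are objects lying over schemes, and \mathcal{F}_\xi
% is the quasi-coherent sheaf attached to \xi.
\begin{align*}
&\text{for each object } \xi \text{ over a scheme } T:
  && \mathcal{F}_\xi \in \operatorname{QCoh}(T), \\
&\text{for each } f\colon \xi' \to \xi \text{ over } f\colon T' \to T:
  && \varphi_f\colon f^{*}\mathcal{F}_{\xi} \xrightarrow{\;\sim\;} \mathcal{F}_{\xi'}, \\
&\text{cocycle condition, for } \xi'' \xrightarrow{\,g\,} \xi' \xrightarrow{\,f\,} \xi:
  && \varphi_{f\circ g} = \varphi_{g}\circ g^{*}\varphi_{f}.
\end{align*}
```

Both sides of the cocycle identity are maps (f∘g)*ℱ_ξ = g*f*ℱ_ξ → ℱ_ξ″, so the condition says the compatibility isomorphisms compose coherently.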
Examples
The Hodge bundle on the moduli stack of algebraic curves of fixed genus.
ℓ-adic formalism
The ℓ-adic formalism (theory of ℓ-adic sheaves) extends to algebraic stacks.
See also
Hopf algebroid - encodes the data of quasi-coherent sheaves on a prestack presentable as a groupoid internal to affine schemes (or projective schemes using graded Hopf algebroids)
Notes
References
Editorial note: This paper corrects a mistake in Laumon and Moret-Bailly's Champs algébriques.
External links
https://mathoverflow.net/questions/69035/the-category-of-l-adic-sheaves
http://math.stanford.edu/~conrad/Weil2seminar/Notes/L16.pdf – The Adic Formalism, Part 2, Brian Lawrence, March 1, 2017
Sheaf theory
Algebraic geometry | Sheaf on an algebraic stack | [
"Mathematics"
] | 479 | [
"Mathematical structures",
"Fields of abstract algebra",
"Sheaf theory",
"Topology",
"Category theory",
"Algebraic geometry"
] |
56,409,474 | https://en.wikipedia.org/wiki/Bicyclohexyl | Bicyclohexyl, also known as dicyclohexyl or bicyclohexane, is an organic chemical with the formula C12H22 and a molecular mass of 166.303 g mol−1. It is a nonvolatile liquid at room temperature. Its structure consists of two cyclohexane rings joined by a single carbon–carbon bond.
Production
Carbazole can be denitrogenated by hydrogen to yield bicyclohexyl as the main product.
When cyclohexane is exposed to radiation, bicyclohexyl is produced among other hydrocarbons.
Properties
The molecule is not completely flat, and the two rings are twisted relative to each other. Liquid bicyclohexyl contains a mixture of conformers with C2 and C2h symmetry, termed ee-anti and ee-gauche. The carbon–carbon (pivot) bond between the rings is 1.55 Å long, the carbon–carbon bonds within the rings are 1.535 Å, and the carbon–hydrogen bonds are 1.102 Å. The torsion angle between the rings is 74.9°. The C–C–C bond angle is about 111° and the C–C–H angle is 109°.
The speed of sound in bicyclohexyl is 1441.51 m/s, higher than in many other hydrocarbons. The density is 882.73 kg·m−3. The isothermal compressibility is 674 TPa−1 and the isobaric expansivity is 819 × 10−6 K−1.
When bicyclohexyl is heated strongly, it slowly decomposes to cyclohexane and cyclohexene, as the pivot bond joining the two rings is the longest and weakest one.
Its heat of combustion is 1814.8 kcal/mol.
Use
Bicyclohexyl has uses in organic synthesis as a building block and structural motif, in studying the chemistry of liquid interfaces, and as a solvent in the surface modification of metal oxides.
See also
Biphenyl
References
Cyclohexyl compounds
Hydrocarbons | Bicyclohexyl | [
"Chemistry"
] | 447 | [
"Organic compounds",
"Hydrocarbons"
] |
56,410,801 | https://en.wikipedia.org/wiki/Yui%20%28behavior%29 | Yui (Japanese/Okinawan: 結, ゆい) is a system of collaborative work in small settlements and autonomous units. It consists of mutual aid and cooperation among a village's residents for tasks that require a great deal of time, money, and effort.
Though the loanword has crept into standard Japanese, the cultural concept is more particular to Okinawan life. Nevertheless, traditional informal fire brigades in other parts of Japan have been considered a type of yui providing labor on demand, in addition to the more ubiquitous agricultural collectives.
Yui Maaru 「ゆいまーる」 and Ii Maaru 「いーまーる」 mean "mutual assistance", given equally and in turn, with no reward expected. The practice is rooted in Okinawan culture and is not limited to mutual farm work; it also extends to the construction of houses and graveyards, and so is not purely agricultural.
Such informal groups are called Yui-gumi 「結い組」. They consist of relatives, friends, neighborhood residents, and so on. As modernization progresses and agriculture declines, the practice is becoming less common and more monetized. However, unlike Mexico's volunteer rescue brigades known as topos ("moles"), such as the Topos de Tlatelolco, who rescue trapped people in emergencies like earthquakes, these informal groups are generalists rather than specialists in a particular task.
It may be compared/contrasted to other societies, such as pumasi (품앗이) culture of Korea.
Modern associations
Notably, the local railway in Okinawa is named "Yui Rail" after the concept and practice.
Organizational behavior
Okinawan culture | Yui (behavior) | [
"Biology"
] | 313 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
56,412,019 | https://en.wikipedia.org/wiki/Plants%20of%20the%20World%20Online | Plants of the World Online (POWO) is an online database published by the Royal Botanic Gardens, Kew.
History
After the Convention on Biological Diversity, Plants of the World Online was launched by the Royal Botanic Gardens, Kew in March 2017 to create an exhaustive online database of seed-bearing plants worldwide. The initial focus was on tropical African flora, particularly the Flora Zambesiaca and the floras of West and East Tropical Africa.
Since March 2024, the website displays AI-generated predictions of the extinction risk for each plant.
Description
The database uses the same taxonomic source as Kew's World Checklist of Selected Plant Families – the International Plant Names Index – and the World Checklist of Vascular Plants (WCVP).
The database contains information of the world's flora that was gathered in the past 250 years of botanical research. It aims to make data available from projects that no longer have an online presence or were never externally available. POWO has information on taxonomy, identification, distribution, traits, threat status and use of plants worldwide. It also contains many images.
POWO contains 1,433,000 global plant names, 531,800 detailed descriptions, and 400,900 images.
See also
Australian Plant Name Index
Convention on Biological Diversity
eMonocot
International Plant Names Index
Tropicos
World Flora Online
References
External links
Online botany databases
Online taxonomy databases
Plant taxonomy
Royal Botanic Gardens, Kew
Databases in the United Kingdom | Plants of the World Online | [
"Biology"
] | 297 | [
"Botanical nomenclature",
"Plants",
"Botanical terminology",
"Biological nomenclature",
"Plant taxonomy"
] |
56,412,457 | https://en.wikipedia.org/wiki/Vaginal%20anomalies | Vaginal anomalies are abnormal structures that are formed (or not formed) during the prenatal development of the female reproductive system and are rare congenital defects that result in an abnormal or absent vagina.
When present, they are often found with uterine, skeletal and urinary abnormalities, because these structures, like the vagina, are most susceptible to disruption during crucial times of organogenesis. Many of these defects are classified under the broader term Müllerian duct anomalies, which are caused by a disturbance during the embryonic period of genitourinary development.
Other vaginal anomalies occur in isolation with no apparent cause, though oftentimes vaginal anomalies are part of a cluster of defects or syndromes. In addition, inheritance can play a part, as can prenatal exposure to some teratogens. Many vaginal anomalies are not detected at birth because the external genitalia appear to be normal. Other organs of the reproductive system may not be affected by an abnormality of the vagina: the uterus, fallopian tubes and ovaries can be functional despite the presence of a defect of the vagina and external genitalia.
A vaginal anomaly does not necessarily affect fertility; depending on the extent of the defect, conception may still be possible. Where a functional ovary exists, IVF may be successful: functioning ovaries in a woman with a vaginal defect allow a fertilized ovum to be implanted into the uterus of an unaffected gestational carrier, and a successful pregnancy can result. Vaginal length varies from 6.5 to 12.5 cm. Since this is slightly shorter than older descriptions, it may affect the diagnosis of women with vaginal agenesis or hypoplasia, who may unnecessarily be encouraged to undergo treatment to increase the size of the vagina.
Vaginal anomalies may cause difficulties with urination, conception, and pregnancy, and can impair sexual intercourse. Psychosocial effects can also occur.
Signs and symptoms
Isolated anomalies
Some anomalies are found upon examination shortly after birth or when the development of sexual characteristics does not progress as expected. Defects that prevent menstrual flow become obvious when amenorrhea occurs.
Syndromes
Syndromes may take longer to identify since they are rare and often involve errors in metabolism. Many syndromes share the same signs and symptoms.
Associated uterine defects
Uterine defects can accompany vaginal abnormalities:
Müllerian agenesis (absent uterus). Uterus is not present, vagina only rudimentary or absent.
Uterus didelphys, also uterus didelphis (double uterus). transverse vaginal septum
Septated uterus (uterine septum or partition). With a complete vaginal septum.
Rudimentary uterus is a uterine remnant not connected to cervix and vagina.
Women with uterine abnormalities may have associated renal abnormalities including unilateral renal agenesis.
Anomalies associated with syndromes
Some congenital syndromes present with vaginal anomalies in association with other serious conditions. These include Fraser syndrome, WNT4 deficiency, and Bardet–Biedl syndrome. Isolated incidents of vaginal anomalies can occur with no apparent cause, and in other instances these anomalies are part of a syndrome or cluster of other abnormalities. The origin of many vaginal anomalies is a disturbance during the embryonic stage of genitourinary development. Inheritance can play a part, as can prenatal exposure to hormones and teratogens. Though the presence of a vaginal anomaly does not necessarily prevent conception and a successful pregnancy when a functional uterus and ovaries are present, vaginal anomalies increase the risk of miscarriage.
Prenatal exposure to some hormones can cause vaginal anomalies as can the lack of necessary hormones needed for normal development. Diethylstilbestrol (DES), also known formerly (and inappropriately) as stilboestrol, is a synthetic nonsteroidal estrogen and teratogen that can cause vaginal abnormalities in the developing embryo.
Cause
The cause of isolated cases of vaginal anomalies can not always be identified, though disruption of the embryonic development of the vagina likely plays a significant role.
Diagnosis
Imaging studies are usually the most useful in diagnosing vaginal anomalies including retrograde contrast studies. An anomaly scan can be helpful, especially detecting the presence of a urogenital syndrome. Genetic and metabolic defects require further testing to support a diagnosis.
Treatment
Vaginal anomalies are treated surgically. A 'neo-vagina' can be constructed for those girls and women who do not have a vagina. Vaginal septa are treated surgically.
The most common vaginal anomaly is an imperforate hymen. This anomaly occurs often enough that it can be detected by some pediatricians shortly after birth. It can be corrected through a minor surgery, which may be delayed until puberty. The hymen can be unusually thick, partially obstructed by fibrous bands of tissue, perforated only by microperforations, displaced from its expected location, or accompanied by septa. Uncommonly, a double hymen is present. An imperforate hymen is treated by excision and drainage; sometimes a small border of hymenal tissue is left around the opening of the vagina.
Congenital adrenal hyperplasia can cause the abnormal development of the vagina. Vaginal adenosis is the abnormal presence of cervical and uterine tissue within the wall of the vagina; ten percent of women have this condition and remain asymptomatic, and it rarely develops into a malignancy. Cloacal exstrophy is a condition in which two vaginas are present. Vaginal agenesis, the complete absence of the vagina, affects 1 out of 5,000 women. A hemivagina is an abnormal partial vagina attached to the wall of the functioning vagina; it does not open into the normal vagina and is attached to an abnormal, second uterus. Vaginal hypoplasia is the under-development of the vagina and is found in instances of complete androgen insensitivity syndrome. Vaginal septa are structures consisting of fibrous tissue that block the vagina: a septum can extend transversely, blocking or partially blocking the vaginal canal, or longitudinally, essentially creating two vaginas that connect to a normal uterus. Septa can prevent menstrual flow and result in painful intercourse, though some women have no symptoms. Many vaginal anomalies are not detected at birth because the external genitalia can appear to be normal.
Epidemiology
The occurrence of vaginal defects varies widely and some are only known from case studies. The prevalence of an imperforate hymen is 1 in 1000.
History
Notable is the mention of vaginal anomalies and pelvic organ prolapse in older cultures and locations. In 1500 B.C. Egyptians wrote about the "falling of the womb". In 400 B.C. a Greek physician documented his observations and treatments:
"After the patient had been tied to a ladder-like frame, she was tipped upward so that her head was toward the bottom of the frame. The frame was then moved upward and downward more or less rapidly for approximately 3–5 min. As the patient was in an inverted position, it was thought that the prolapsing organs of the genital tract would be returned to their normal position by the force of gravity and the shaking motion."
Hippocrates also described the prolapse of other organs out through the vagina. In 1521, Berengario da Carpi performed the first surgical treatment for prolapse. This was to tie a rope around the prolapse, tighten it for two days until it was no longer viable and cut it off. Wine, aloe, and honey were then applied to the stump.
References regarding the existence of vaginal anomalies related to müllerian defects have been traced back to 300 BC when a historian described a case of vaginal agenesis.
In 1823, other physicians proposed that vaginoplasty may provide treatment for pelvic organ prolapse. In 1830, the first dissection of the vagina was performed on a living woman. Other vaginal repairs were described in 1834, and treatment sometimes involved suturing the edges of a vaginal defect. In 1859, a solution to vaginal elongation was to remove the cervix. In 1866, methods that resembled those used today came into practice. Surgery on the anterior vaginal wall at this time did not have to involve full-thickness repairs to be successful. Sims subsequently developed another procedure that did not require the full-thickness dissection of the vaginal wall. Shortly after this time it was proposed that reattaching the vagina to support structures was more successful and resulted in less recurrence. The same approach was proposed again in 1976, but further studies indicated that the recurrence rate was not better. Further advances came in 1961, when surgeons started to reattach the anterior vaginal wall to Cooper's ligament.
In 1955, surgical mesh began to be used to strengthen pelvic tissue. In 1970, tissue from pigs began to be used to strengthen the anterior vaginal wall in surgery. Beginning in 1976, suturing techniques improved, and surgical removal of the vagina came into use to treat prolapse of the bladder. In 1991, assumptions about the detailed anatomy of the pelvic support structures began to be questioned, regarding the existence of some pelvic structures and the non-existence of others. More recently, stem cells and robot-assisted laparoscopic surgery have been used during vaginectomy and vaginoplasty.
See also
Genetic counseling
Genetic diagnosis of intersex
List of obstetric topics
Obstetric ultrasonography
Prenatal testing
Progestin-induced virilisation
References
External links
Vagina, Anatomical Atlases, an Anatomical Digital Library (2018)
Congenital disorders of female genital organs
Vagina
Pediatric gynecology
Syndromes in females
Syndromes affecting female reproductive system
Women's health
Gynaecology
Embryology of urogenital system
Human development
Pathology of pregnancy, childbirth and the puerperium
Theriogenology | Vaginal anomalies | [
"Biology"
] | 2,232 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
56,413,297 | https://en.wikipedia.org/wiki/TranSMART | tranSMART is an open-source data warehouse designed to store large amounts of clinical data from clinical trials, as well as data from basic research, so that it can be interrogated together for translational research. It is also designed to be used by many people, across organizations. It was developed by Johnson & Johnson, in partnership with Recombinant Data Corporation. The platform was released in Jan 2012 and has been governed by the tranSMART Foundation since its initiation in 2013. In May 2017, the tranSMART Foundation merged with the i2b2 Foundation to create an organization with the key mission to advance the field of precision medicine.
The tranSMART platform has been adopted and evaluated by numerous pharmaceutical companies, not-for-profits and patient advocacy groups, academics, governmental organisations and service providers. At the Bio-IT World industry conference both the Innovative Medicines Initiative's U-BIOPRED project and The Michael J. Fox Foundation were awarded a Best Practices Award for their application of the platform.
tranSMART is built on top of the i2b2 clinical data warehouse and leverages the i2b2 star schema for modelling clinical and low-dimensional data. High-dimensional omics data is stored in dedicated tables where each of the data types (e.g., gene expression, SNP or metabolomics) retains its specific data structure. Both the Oracle and PostgreSQL database management systems are supported for its data storage.
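The star-schema organization described above can be sketched in a few lines of Python. The table and column names (observation_fact, concept_dimension, patient_num, concept_cd, nval_num) follow the i2b2 conventions, but the rows and the cohort query below are invented illustrations, not tranSMART code:

```python
# Illustrative i2b2-style star schema: a central fact table of observations
# keyed to dimension tables for concepts and patients. All data is made up.

concept_dimension = {
    "ICD10:I10": "Essential hypertension",
    "LAB:GLUC": "Blood glucose",
}

patient_dimension = {1: {"sex": "F", "age": 54}}

observation_fact = [
    {"patient_num": 1, "concept_cd": "ICD10:I10", "nval_num": None},
    {"patient_num": 1, "concept_cd": "LAB:GLUC", "nval_num": 5.6},
]

# Cohort-style query: which patients have a blood-glucose observation?
cohort = {row["patient_num"] for row in observation_fact
          if row["concept_cd"] == "LAB:GLUC"}
print(cohort)  # {1}
```

The point of the star schema is that low-dimensional clinical facts of any type share one fact table, while each concept is described once in a dimension table; tranSMART adds separate dedicated tables for high-dimensional omics data.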
tranSMART 17.1
Development project
Researchers reported missing functionalities in the earlier versions of tranSMART (version 16.2 and before), which restricted the capabilities of the tool and the opportunities for research. In response, the tranSMART Foundation brought four leading pharmaceutical companies – Pfizer, Sanofi, AbbVie and Roche – together in October 2016 to help sponsor a joint project to develop the new functionality in a coherent way and improve tranSMART. The Hyve BV was the IT company responsible for the execution of the tranSMART 17.1 development project.
Improvements of tranSMART version 17.1
The focus of the sponsors was to add three main functionalities:
Cross-study and ontology term support.
Support for modeling time series and sample data, to allow the storage of longitudinal and EHR data.
Creating the connection between tranSMART and Arvados, to support large data storage and analysis.
In addition, another goal of the project was to improve the quality of the tranSMART back-end to make it ready for the future. The back-end improvements implemented in the development project, delivered early in 2017, had a large impact on the capabilities of tranSMART, as well as on its quality today.
Functional improvements of the 17.1 version include the support for time series, samples, and cross-study concepts. This is accomplished by re-alignment with the i2b2 data model, on top of which tranSMART was built, extended with the features which make tranSMART unique: the organization in studies, the support for modelling clinical trial event grouping and high dimensional data support. Technical improvements include automated-test coverage for all Core API and REST API calls and documentation of those calls and the full database schema.
The interface
The new functionalities developed in the tranSMART 17.1 project, however, were not supported by the existing tranSMART user interface. Therefore, The Hyve BV started the development of a new, modern user interface for tranSMART 17.1 that would accommodate the added functionalities. In October 2018, Glowing Bear, the user interface built on tranSMART version 17.1 for cohort selection and exploratory analyses, was released. This user interface has been adopted by leading medical and research institutions such as the Princess Máxima Center for Pediatric Oncology, the Netherlands Twin Registry, and Leiden University Medical Center.
Challenges
The 17.1 project failed to meet its objectives for compatibility with the i2b2 data model, and still lacks the functionality of the full tranSMART interface. The 17.1 code is only server-side code and was only released by the Foundation as a 'developer release' and never met the requirements for a full release. A new release of tranSMART (v19) is now in beta test, and not only meets the compatibility requirements with i2b2, but has an enhanced user interface with additional analytical workflows. This release will be supported by the Foundation, and will be the sole branch of the codebase to continue with support from the i2b2 tranSMART Foundation.
See also
Data sharing
JANUS clinical trial data repository
Registry of Research Data Repositories
References
Further reading
Data warehousing products
Translational medicine
"Biology"
] | 946 | [
"Translational medicine"
] |
Ruth Baker
Ruth Elizabeth Baker is a British applied mathematician and mathematical biologist at the University of Oxford whose research interests include pattern formation, morphogenesis, and the mathematical modeling of cell biology and developmental biology.
Education and career
Baker read mathematics at Wadham College, Oxford, and earned a doctorate (D.Phil.) at the University of Oxford in 2005. Her dissertation, Periodic Pattern Formation in Developmental Biology: A Study of the Mechanisms Involved in Somite Formation, was jointly supervised by biologist Santiago Schnell and mathematician Philip Maini, who was also the doctoral supervisor of Schnell.
After postdoctoral research in Germany, the US, and Australia, funded by a UK Research Council Junior Research Fellowship, she returned to a permanent position at Oxford. Since 2010 she has been a professor of applied mathematics at the Mathematical Institute of the University of Oxford and a tutorial fellow in mathematics at St Hugh's College, Oxford.
Recognition
Baker was a 2014 winner of the Whitehead Prize of the London Mathematical Society "for her outstanding contributions to the field of Mathematical Biology". She was awarded a Leverhulme Research Fellowship for her work in "efficient computational methods for testing biological hypotheses" in 2017.
References
External links
Home page
Year of birth missing (living people)
Living people
British mathematicians
British women mathematicians
British applied mathematicians
Theoretical biologists
Alumni of Wadham College, Oxford
Alumni of the University of Oxford
Academics of the University of Oxford
Fellows of St Hugh's College, Oxford
"Biology"
] | 294 | [
"Bioinformatics",
"Theoretical biologists"
] |
River plume
A river plume is a freshened water mass that is formed in the sea as a result of mixing of river discharge and saline seawater. River plumes form in coastal sea areas in many regions of the world. River plumes generally occupy wide-but-shallow sea surface layers bounded by sharp density gradients. The area of a river plume is 3-5 orders of magnitude greater than its depth; therefore, even small rivers with discharge rates of ~1–10 m³/s form river plumes with horizontal spatial extents of ~10–100 m. The areas of river plumes formed by the largest rivers are ~100–1000 km². Despite the relatively small volume of total freshwater runoff to the World Ocean, river plumes occupy up to 21% of the shelf areas of the ocean, i.e., several million square kilometers.
River plumes are sometimes spoken of as regions of fresh water influence (ROFIs), although it is preferable to reserve this term for regions in which multiple sources add to the fresh water input of the zone, or for shallow, frictional shelves. ROFIs and river plumes differ in their variation at temporal and spatial scales. The river plume can be identified as a buoyant water mass that emerges due to river discharge into the coastal ocean and varies over diurnal to synoptic timescales. At the edges of this water mass, mixing takes place, creating a region adjacent to the river plume which is diluted and fresher compared to the open ocean but does not have a clear boundary. This extended region is called the region of freshwater influence (ROFI). Due to the indirect influence of freshwater discharge, ROFIs incorporate the dynamics and spatial extent of the river plumes but are typically assessed on seasonal, annual, and decadal timescales.
Processes
River plumes play an important role in global and regional land-ocean interactions. River discharges provide large fluxes of buoyancy, heat, terrigenous sediments, nutrients, and anthropogenic pollutants to the ocean. River plumes strongly influence many physical, biological, and geochemical processes in the coastal and shelf sea areas including stratification of seawater, coastal currents, carbon and biogeochemical cycles, primary production, and seabed morphology.
A river plume is a dynamical system influenced by processes with a wide range of temporal and spatial scales, which depend on the size and shape of the estuary as well as on the type and variation of the forcing from the estuary and the ocean. Feedback mechanisms between sediment deposited by the plume at the submarine delta and the geometry of the delta make for a complex system. Due to this complexity there is not (yet) a general, simple theory that offers quantitative predictability for the motion of particles and the structure of river plumes; however, some theories incorporating simplified assumptions have helped in understanding the important aspects of buoyancy-influenced coastal flows. As is commonly used in fluid dynamics, the description of these complex flows is aided by scaling analysis to determine the relevant processes. The primary parameters which define the structure and scale of an individual river plume are freshwater discharge, tidal energy, coastline bathymetry/geometry, ambient ocean currents, wind, and the rotation of the Earth.
Structure
The balance between the important processes varies over the position in the plume. The following regions can be distinguished: the source region, the liftoff point, the front, and the near field region. Beyond the plume itself but within its area of influence are the mid-field region and the far field region.
Source region
In the source or estuarine region, the buoyancy and momentum of the freshwater inflow from the estuary are the dominant properties that determine the initiation of the river plume. The competition between river-induced stratification and tidal mixing sets the river plume's characteristic properties. This competition can be captured in the (dimensionless) estuarine Richardson number, which is defined as
Ri_E = g' Q_f / (W u_t^3)

where
g' is the reduced gravity, i.e., the gravitational acceleration adjusted for the density difference between fresh river water and saline ocean water,
Q_f is the river discharge,
W is the estuary width, and
u_t is the tidal velocity.
A large estuarine Richardson number (Ri_E >> 1) indicates that freshwater processes are dominant compared to the tidal influence, and one can expect the development of a river plume.
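The estuarine Richardson number can be computed directly from the quantities defined above. The sketch below follows those definitions; the numerical inputs are hypothetical, not taken from any real estuary:

```python
# Minimal sketch of the estuarine Richardson number
# Ri_E = g' Q_f / (W u_t^3). All input values are hypothetical.

def reduced_gravity(rho_river, rho_ocean, g=9.81):
    """g' = g * (rho_ocean - rho_river) / rho_ocean, in m/s^2."""
    return g * (rho_ocean - rho_river) / rho_ocean

def estuarine_richardson(g_prime, discharge, width, tidal_velocity):
    """Ri_E = g' Q_f / (W u_t^3): river buoyancy input vs. tidal mixing."""
    return g_prime * discharge / (width * tidal_velocity ** 3)

g_prime = reduced_gravity(rho_river=1000.0, rho_ocean=1025.0)  # ~0.24 m/s^2
ri = estuarine_richardson(g_prime, discharge=5000.0, width=2000.0,
                          tidal_velocity=0.5)
print(f"Ri_E = {ri:.1f}")  # prints Ri_E = 4.8
```

With these (invented) inputs Ri_E comes out well above one, the regime in which river-induced stratification dominates tidal mixing and a plume is expected to develop.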
Liftoff point
In case of strong riverine forcing, often with a large estuarine Richardson number, the front of the plume separates from the bottom. The position at which this flow separation occurs is called the liftoff point and sets the landward edge of the near-field. This point is important in surface-advected river plumes.
Near-field region
In the near-field the momentum of the plume is larger than its buoyancy. This balance is represented in the (dimensionless) Froude number, which is larger than one in the near-field, indicating supercritical flow. Both the liftoff point and the outer boundary of the near-field, the plume front, are characterized by critical flow conditions (a Froude number of one), and the flow in the near-field region shows features similar to a jet. The momentum balance is dominated by barotropic and baroclinic pressure gradients, turbulent shear stresses, and flow acceleration. Flow deceleration is mainly caused by the shear stresses on the interface of the plume with the ambient ocean. In some cases a near-field region will not exist. This is for example the case if the width of the river mouth is large relative to the Rossby radius of deformation, in which case the fresh water inflow leaves the river mouth as a far-field plume. When tides are large, the near-field plume is also known as the tidal plume.
Mid-field region
The area at which the near-field inertial jet transfers into a flow in which geostrophic or wind-driven processes are dominant is the midfield-area. The momentum balance of the mid-field is dominated by the rotation of the Earth (Coriolis effect), cross-stream internal pressure gradients, and sometimes centripetal acceleration. The initial momentum of the outflow from the source is lost and the wind forcing (or rotation of the Earth in case of small wind forcing) gradually takes over as the most important parameter. As a result, the flow changes its speed, direction, and spreading pattern. When the influence of wind forcing is small, outflows can sometimes form a recirculating bulge; however, evidence of such a feature in field observations is scant.
Far-field region
Even further away from the source region is the far-field, where the plume has lost all memory of the outflow momentum. The momentum balance of the far-field is dominated by the rotation of the Earth (Coriolis effect), buoyancy, wind forcing, and bottom stress. The far-field can cover large areas, up to hundreds of kilometers from its source. Diurnal and semi-diurnal variability of the far-field region is generally governed by tides, synoptic variability by wind forcing, and seasonal variability by river discharge. In the absence of strong wind forcing and strong currents, the far-field plume can behave as a current of relatively fresh water in the direction of a propagating Kelvin wave. Examples of this can be observed in the Rhine ROFI, where the river plume can be traced all along the Dutch coast. The character of this coastal current is different in the case of shallow seas, when the current occupies the whole water column and its motion is affected by bottom friction, and in the case of a surface-advected plume whose vertical size is less than the water depth.
Advection
At the most basic and idealized level, river plumes can be classified as either surface-advected or bottom-advected. A plume is considered to be bottom-advected when it occupies the whole water column from the surface to the seabed. In this case its stratification is mainly horizontal as a result of strong advection over the whole water column, especially near the bed. A surface-advected plume does not interact with the bottom because its vertical size is less than the water depth. In this case a plume is mainly vertically stratified. Differentiation between these two (idealized) types of river plumes can be made by evaluating a set of parameters, as set up by Yankovsky and Chapman in their paper from 1997. The distance up to which the fresh water river plume is transported across-shelf by processes at the surface is determined by:
the inflow velocity from the source region and the near-field jet,
the Coriolis parameter,
the buoyancy (reduced gravity), and
the depth of the water column at the mouth of the river/estuary.
Up to the liftoff point, the plume still "feels" the bottom and one speaks of bottom-advected plumes, and the relevant processes involving bottom dynamics must be accounted for. Vertical scales of river plumes formed by the largest rivers across the world are 10-20 m, while the vertical scale of the majority of river plumes is less than several meters. As a result, the majority of river plumes in the world are surface-advected; that is, the bottom-advected part near the estuary before the liftoff point at these plumes is much smaller than the surface-advected part. River plumes with large bottom-advected parts are formed mainly by large rivers that flow into shallow sea areas, such as the Volga plume in the northern part of the Caspian Sea.
Bottom-advected plumes
Bottom-advected plumes are often characterized by large discharge conditions and are generally less sensitive to wind forcing and the corresponding advection and mixing. This type of advection is driven by bottom Ekman transport, which carries the fresh or brackish river outflow from the estuary to the frontal zone across the shelf. This is indicated in the figure to the right. When the frontal zone is far enough from the shore, thermal wind dynamics can transport the complete volume flux away from the estuary. The across-shore position of the frontal zone, which denotes the width of the coastal current, and the equilibrium depth at which the plume separates from the bottom can be calculated for equilibrium conditions with a given bottom slope.
Note that this is only valid when the equilibrium depth exceeds the inflow depth. When it does not, the bottom Ekman layer cannot transport the river outflow offshore and another process governs the propagation. In that case, only a surface-advected plume is found.
Surface-advected plumes
Surface-advected plumes occur when the bottom-advected regime described above does not apply. A surface-advected plume has the typical structure of a river plume as described in the section on river plume structure. In the region near the mouth, the initial momentum of the river outflow is the dominant mechanism, after which other processes such as wind forcing and the Coriolis effect take over. In a surface-advected plume, processes regarding interaction with the bottom, such as the development of a bottom Ekman layer, are not relevant. The corresponding bottom-related parameters can therefore be ignored in this approach, as they have no physical basis here.
Intermediate plumes
In the case that the inflow depth is smaller than the equilibrium depth, and the distance up to which the bottom Ekman layer transports the river discharge is smaller than the distance up to which the surface processes transport the river outflow, one can find an intermediate plume. In an intermediate plume both regimes can be found. Naturally, the bottom-advected section is found closer to the estuary mouth and the surface-advected section further offshore. The liftoff point separates the regions.
The approach can be further generalized by non-dimensionalizing the parameters. Non-dimensional parameters have the benefit of simplifying the dynamics of the relevant processes by evaluating the magnitude of the different terms. In the case of river plumes, this gives further direction to the basic classification and the different dynamics. The two most relevant non-dimensional numbers are the Burger number, which expresses the relative importance of buoyancy, and the Rossby number, which expresses the relative importance of advection. Regrouping leads to non-dimensional forms of the cross-shore distances discussed above.
The same regimes as discussed above hold for the non-dimensional parameters. Bottom-advected plumes in general have small Burger numbers, and therefore buoyancy is relatively unimportant. Surface-advected plumes in general have large Burger numbers, and therefore buoyancy is important. Furthermore, the Rossby number indicates whether the plume is classified as a surface-advected plume or an intermediate plume. A relatively large Rossby number compared to the Burger number indicates that advection is important compared to buoyancy and will allow at least partial bottom advection to occur, so that one can expect an intermediate plume.
Note that the scheme described above was developed for idealized cases: that is, for river plumes in absence of external forcing which flow into a sea with idealized bathymetry and shoreline.
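The role of the two non-dimensional numbers can be illustrated with a short sketch. The standard forms S = sqrt(g' h0) / (f L) for the Burger number and Ro = v_i / (f L) for the Rossby number are used here; the choice of the length scale L (taken as the inflow width), the comparison rule, and all numerical inputs are illustrative assumptions rather than exact criteria from the literature:

```python
import math

# Hedged sketch of the Burger/Rossby comparison discussed above.
# S  = sqrt(g' h0) / (f L): relative importance of buoyancy
# Ro = v_i / (f L):         relative importance of advection

def burger_number(g_prime, h0, f, L):
    return math.sqrt(g_prime * h0) / (f * L)

def rossby_number(v_i, f, L):
    return v_i / (f * L)

def sketch_classification(S, Ro):
    # Crude illustration of the qualitative rule in the text, not an
    # exact published criterion.
    if S > Ro:
        return "surface-advected (buoyancy dominates)"
    if Ro > S:
        return "intermediate to bottom-advected (advection matters)"
    return "borderline"

# Hypothetical mid-latitude plume: g' = 0.05 m/s^2, h0 = 5 m,
# f = 1e-4 1/s, L = 10 km, inflow velocity 0.2 m/s.
S = burger_number(0.05, 5.0, 1e-4, 1e4)   # sqrt(0.25) / 1 = 0.5
Ro = rossby_number(0.2, 1e-4, 1e4)        # 0.2
print(S, Ro, sketch_classification(S, Ro))
```

For these invented values S > Ro, so buoyancy outweighs advection and the sketch labels the plume surface-advected, consistent with the qualitative reasoning above.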
Tidal variation
River plumes vary over diurnal to synoptic temporal scales. In this range of temporal scales, the most important periodic variation lies within the tidal cycle, in which a tidal cycle (daily) and a spring-neap cycle (two-weekly) can be distinguished. This barotropic variation in tidal velocity magnitude and direction gives rise to variability in the strength and stability of the river plume. This is already clear from the competition between river discharge and tidal mixing, captured in the (dimensionless) estuarine Richardson number, which is used to assess in a general fashion whether a river plume can develop in a certain system. The tidal dynamics lead to the following general dynamics of river plumes.
Tidal cycle
A tidal cycle consists of a flood period or landward flow, and an ebb period or seaward flow. For constant river discharge one can find a stable stratification during ebb conditions and an unstable stratification during flood conditions. This is schematically portrayed in the figure to the right. The mixing that occurs during flood conditions due to the unstable stratification weakens the stratification and efficient river plume advection and occurs in situations with low estuarine Richardson numbers.
During ebb conditions the stratification is enhanced. This leads to stable conditions and strong advection at the surface. Due to mass conservation, this situation requires enhanced landward flows near the bottom. This process is called tidal straining. In the case of an open coast, two-dimensional effects start playing a role. Baroclinic Ekman transport causes upwelling during ebb flows and downwelling during flood flows. Therefore, these baroclinic upwelling effects can cause ebb flows to transport nutrients and sediment towards the coast.
Spring-neap cycle
Over a spring-neap cycle the baroclinic effects over a tidal cycle amplify and favor either increased tidal straining or tidal mixing. Spring tides are characterized by relatively large tidal amplitudes and tidal flow velocities. This leads to increased tidal mixing over the complete tidal cycle and weakened stratification. In some areas the stratification vanishes completely, resulting in a well-mixed system, and these systems can only incorporate river plumes some of the time. In open-coast systems, spring tide conditions generally lead to increased downwelling effects from the buoyant river plume, causing increased seaward transport of sediment and nutrients.
Neap tides are characterized by relatively low tidal amplitudes and tidal flow velocities. This situation favors the tidal straining effect as observed during ebb tides, due to decreased tidal mixing and increased differential flow over a tidal cycle. Because of the stronger tidal straining effect, neap tide conditions are generally characterized by increased landward flow near the bottom and associated increased coastal upwelling effects. In extreme cases this can lead to large depositions on the beach, such as the mass beaching of starfish on the coast near Scheveningen on January 30, 2019.
Natural examples
Fraser River
An example of a surface-advected plume is the Fraser River plume. The Fraser River plume contains all dynamical regions, clearly visible from space. The initial jet-like structure gradually transfers into a far-field plume further offshore, which is deflected to the right as would be expected on the Northern Hemisphere due to the Coriolis effect. Other similar river plumes are those of the Columbia River, the Niagara River, and the Hudson River.
Amazon River
The Amazon River plume is an example of a river plume in which the Earth's rotation does not play a role. Due to the high discharge, the corresponding momentum of the outflow, and the equatorial latitude, the dynamics of the plume are mainly characterized by the internal Froude number. Ambient currents transport the plume away from the mouth. Similar plumes can be found elsewhere along the Equator.
Mersey River
The dynamics of the Mersey River plume at the mouth of Liverpool Bay show high resemblance to a bottom-advected plume. This is due to strong influence of the bottom and bottom friction on the flow, and this controls the cross-shore spreading and length-scale. This type of plume can often be found at marginal seas and shelf seas, such as in the North Sea at the mouth of the Rhine.
See also
Plume (fluid dynamics)
Region of freshwater influence
References
External links
Coastal geography
Physical oceanography
Aquatic ecology
Estuaries
Limnology
Coastal engineering
"Physics",
"Engineering",
"Biology"
] | 3,644 | [
"Applied and interdisciplinary physics",
"Coastal engineering",
"Civil engineering",
"Physical oceanography",
"Ecosystems",
"Aquatic ecology"
] |
Use of fetal tissue in vaccine development
The use of fetal tissue in vaccine development is the practice of researching, developing, and producing vaccines through growing viruses in cultured (laboratory-grown) cells that were originally derived from human fetal tissue. Since the cell strains in use originate from abortions, there has been opposition to the practice and the resulting vaccines on religious and moral grounds.
The vaccines do not contain any of the original fetal tissue or cells, nor any cells derived from fetal materials. Although the vaccine materials are purified from cell debris, traces of human DNA fragments remain. The cell lines continue to replicate on their own and no further sources of fetal cells are needed.
The Catholic Church has encouraged its members to use alternative vaccines, produced without human cell lines, if possible. However, the Vatican has clarified that "all vaccinations recognized as clinically safe and effective can be used in good conscience, with the certain knowledge that the use of such vaccines does not constitute formal cooperation with the abortion".
Background
Immortalised cell lines are an important research tool offering a stable medium for experiments. These are derived either from tumors, which have developed resistance to cellular senescence, or from stem cells originally taken from aborted fetuses. Fetal cell lines have been used in the manufacture of vaccines since the 1930s. One of the first medical applications of cell lines derived from fetal tissues was their use in the production of the first polio vaccines. For example, in the 1950s, scientists at the Karolinska Institute in Sweden propagated a polio virus in fetal cell lines to make a polio vaccine. The resulting vaccine was given to about 2,000 children.
Many other vaccines, including those for chicken pox and rubella, are made using cell lines originally derived from fetal tissue from two pregnancies terminated in the 1960s, for reasons unrelated to vaccine development. Descendants of the fibroblast cells from these fetuses have been growing in labs ever since, as the WI-38 and MRC-5 cell lines. They are still used to grow vaccine viruses today. As of March 2017, billions of vaccines have been given that were made using the WI-38 line alone.
Applications
Vaccines that have been or are made using cell lines originally derived from fetal tissue include:
Adenovirus
Chicken pox
Ebola
Polio
Rabies
Rubella
Shingles
Of these, the vaccines approved for use in the United States include some of those against rabies (Imovax), rubella, chicken pox, shingles, and adenovirus (as of January 2017).
Rubella
One historical cell line used in rubella vaccines was originally obtained from a fetus aborted due to infection with rubella. Rubella during pregnancy can lead to miscarriage (spontaneous abortion), and if it does not, there is a risk of severe disability due to congenital rubella syndrome. By one estimate, rubella vaccination may prevent up to 5,000 miscarriages per year in the United States.
COVID-19
Several of the vaccines in use or in advanced development for COVID-19 use the cell lines HEK-293 or PER.C6 for production. In other cases, notably the Pfizer, Sputnik V, and Moderna vaccines, HEK-293 was used during the testing phase. PER.C6, a retinal cell line isolated from an aborted fetus in 1985, was used by Janssen in the development of its COVID-19 vaccine.
Alternatives
COS-1 cells are of monkey origin and there are xenogeneic differences between monkey and human proteins.
Position of the Catholic Church
The Catholic Church is opposed to abortion. Nevertheless, the Pontifical Academy for Life concluded in 2005 that parents may allow their children to receive vaccines made from fetal tissue if no alternative exists and there is a grave health risk. Consumers were urged to "oppose by all means (in writing, through the various associations, mass media, etc.) the vaccines which do not yet have morally acceptable alternatives, creating pressure so that alternative vaccines are prepared, which are not connected with the abortion of a human fetus". The academy also called for the development of new vaccines that can be made by other means. In 2017, the Pontifical Academy for Life stated that "clinically recommended vaccinations can be used with a clear conscience and that the use of such vaccines does not signify some sort of cooperation with voluntary abortion".
On December 21, 2020, the Vatican's doctrinal office, the Congregation for the Doctrine of the Faith, further clarified that it is "morally acceptable" for Catholics to receive vaccines derived from fetal cell lines or in which such lines were used in testing or development, including the COVID-19 vaccines, because "passive material cooperation in the procured abortion from which these cell lines originate is, on the part of those making use of the resulting vaccines, remote. The moral duty to avoid such passive material cooperation is not obligatory if there is a grave danger," such as during the COVID-19 pandemic, and that "in such a case, all vaccinations recognized as clinically safe and effective can be used in good conscience" and "does not and should not in any way imply that there is a moral endorsement of the use of cell lines proceeding from aborted fetuses". Moreover,
[F]rom the ethical point of view, the morality of vaccination depends not only on the duty to protect one's own health, but also on the duty to pursue the common good. In the absence of other means to stop or even prevent the epidemic, the common good may recommend vaccination, especially to protect the weakest and most exposed. Those who, however, for reasons of conscience, refuse vaccines produced with cell lines from aborted fetuses, must do their utmost to avoid, by other prophylactic means and appropriate behavior, becoming vehicles for the transmission of the infectious agent. In particular, they must avoid any risk to the health of those who cannot be vaccinated for medical or other reasons, and who are the most vulnerable.
References
Vaccine controversies
Abortion debate
"Chemistry",
"Biology"
] | 1,249 | [
"Vaccination",
"Drug safety",
"Vaccine controversies"
] |
Phase precession
Phase precession is a neurophysiological process in which the time of firing of action potentials by individual neurons occurs progressively earlier in relation to the phase of the local field potential oscillation with each successive cycle. In place cells, a type of neuron found in the hippocampal region of the brain, phase precession is believed to play a major role in the neural coding of information. John O'Keefe, who later shared the 2014 Nobel Prize in Physiology or Medicine for his discovery that place cells help form a "map" of the body's position in space, co-discovered phase precession with Michael Recce in 1993.
Place cells
Pyramidal cells in the hippocampus called place cells play a significant role in self-location during movement over short distances. As a rat moves along a path, individual place cells fire action potentials at an increased rate at particular positions along the path, termed "place fields". Each place cell's maximum firing rate, with action potentials occurring in rapid bursts, occurs at the position encoded by that cell, and that cell fires only occasionally when the animal is at other locations. Within a relatively small path, the same cells are repeatedly activated as the animal returns to the same position.
Although simple rate coding (the coding of information based on whether neurons fire more rapidly or more slowly) resulting from these changes in firing rates may account for some of the neural coding of position, there is also a prominent role for the timing of the action potentials of a single place cell, relative to the firing of nearby cells in the local population. As the larger population of cells fire occasionally when the rat is outside of the cells' individual place fields, the firing patterns are organized to occur synchronously, forming wavelike voltage oscillations. These oscillations are measurable in local field potentials and electroencephalography (EEG). In the CA1 region of the hippocampus, where the place cells are located, these firing patterns give rise to theta waves. Theta oscillations have classically been described in rats, but evidence is emerging that they also occur in humans.
In 1993, O'Keefe and Recce discovered a relationship between the theta wave and the firing patterns of individual place cells. Although the occasional action potentials of cells when rats were outside of the place fields occurred in phase with (at the peaks of) the theta waves, the bursts of more rapid spikes elicited when the rats reached the place fields were out of synchrony with the oscillation. As a rat approached the place field, the corresponding place cell would fire slightly in advance of the theta wave peak. As the rat moved closer and closer, each successive action potential occurred earlier and earlier within the wave cycle. At the center of the place field, when the cell would fire at its maximal rate, the firing had been advanced sufficiently to be anti-phase to the theta potential (at the bottom, rather than at the peak, of the theta waveform). Then, as the rat continued to move on past the place field and the cell firing slowed, the action potentials continued to occur progressively earlier relative to the theta wave, until they again became synchronous with the wave, aligned now with one wave peak earlier than before. O'Keefe and Recce termed this advancement relative to the wave phase "phase precession". Subsequent studies showed that each time a rat entered a completely different area and the place fields would be remapped, place cells would again become phase-locked to the theta rhythm. It is now widely accepted that the anti-phase cell firing that results from phase precession is an important component of information coding about place.
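The advance described above, roughly one full theta cycle of phase shift across the place field, can be sketched with a toy linear model. All numbers here are illustrative assumptions (including the 8 Hz theta frequency), not fitted to any recorded data:

```python
import numpy as np

THETA_FREQ = 8.0  # Hz, a typical rodent theta frequency (assumed value)

def spike_phase_deg(frac_through_field):
    """Toy linear model of phase precession: spikes advance from the theta
    peak (360 deg) at field entry, through the trough (180 deg, anti-phase)
    at the field centre, to the previous cycle's peak (0 deg) at field exit."""
    frac = np.clip(frac_through_field, 0.0, 1.0)
    return 360.0 * (1.0 - frac)

positions = np.linspace(0.0, 1.0, 5)             # normalised traversal of the field
phases = spike_phase_deg(positions)              # 360, 270, 180, 90, 0 degrees
offsets_ms = phases / 360.0 / THETA_FREQ * 1e3   # time until the next theta peak, ms
```

Under this sketch a spike at field entry sits a full cycle (125 ms at 8 Hz) before a theta peak, while a spike at field exit coincides with a peak one cycle earlier, matching the qualitative description above.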
Other systems
There have been conflicting theories of how neurons in and around the hippocampus give rise to theta waves and consequently give rise to phase precession. As these mechanisms became better understood, the existence of phase precession was increasingly accepted by researchers. This, in turn, gave rise to the question of whether phase precession could be observed in any other regions of the brain, with other kinds of cell circuits, or whether phase precession was a peculiar property of hippocampal tissue. The finding that theta wave phase precession is also a property of grid cells in the entorhinal cortex demonstrated that the phenomenon exists in other parts of the brain that also mediate information about movement.
Theta wave phase precession in the hippocampus also plays a role in some brain functions that are unrelated to spatial location. When rats were trained to jump up to the rim of a box, place cells displayed phase precession much as they do during movement along a path, but a subset of the place cells showed phase precession that was related to initiating the jump, independently of spatial location, and not related to the position during the jump.
Phase precession in the entorhinal cortex has been hypothesized to arise from an attractor network process, so that two sequential neural representations within a single cycle of the theta oscillation can be temporally linked to each other downstream in the hippocampus, as episodic memories.
References
Neural coding
Neural circuitry
Hippocampus (brain)
Animal locomotion
Electrophysiology | Phase precession | [
"Physics",
"Biology"
] | 1,096 | [
"Animal locomotion",
"Physical phenomena",
"Animals",
"Behavior",
"Motion (physics)",
"Ethology"
] |
56,415,007 | https://en.wikipedia.org/wiki/Shaping%20processes%20in%20crystal%20growth | Shaping processes in crystal growth are a collection of techniques for growing bulk crystals of a defined shape from a melt, usually by constraining the shape of the liquid meniscus by means of a mechanical shaper. Crystals are commonly grown as fibers, solid cylinders, hollow cylinders (or tubes), and sheets (or plates). More complex shapes, such as tubes with a complex cross section and domes, have also been produced. Using a shaping process can produce a near-net-shape crystal and reduce the manufacturing cost for crystals composed of very expensive or difficult-to-machine materials.
List of shaping processes
Horizontal Ribbon Growth (HRG, 1959)
Edge-defined Film-fed Growth (EFG, 1960)
Low Angle Silicon Sheet (LASS, 1981)
Micro-pulling-down (μ-PD)
Stepanov technique
String ribbon
Edge-defined film-fed growth
Edge-defined film-fed growth or EFG was developed for sapphire growth in the late 1960s by Harold LaBelle and A. Mlavsky at Tyco Industries.
A shaper (also referred to as a die) having dimensions approximately equal to the crystal to be grown rests above the surface of the melt which is contained in a crucible. Capillary action feeds liquid material to a slit at the center of the shaper. When a seed crystal is touched to the liquid film and raised upwards, a single crystal forms at the interface between the solid seed and the liquid film. By continuing to pull the seed upwards, the crystal expands as a liquid film forms between the crystal and the top surface of the shaper. When the film reaches the edges of the shaper, the final crystal shape matches that of the shaper.
The exact dimensions of the crystal will deviate from the dimensions of the shaper because every material has a characteristic growth angle, the angle formed at the triple interface between the solid crystal, liquid film, and the atmosphere. Because of the growth angle, varying the height of the meniscus (i.e. the thickness of the liquid film) will change the dimensions of the crystal. The meniscus height is affected by pulling speed and crystallization rate. The crystallization rate depends on the temperature gradient above the shaper, which is determined by the configuration of the hot-zone of the crystal growth furnace, and the power applied to the heating elements during growth. The difference in thermal expansion coefficients between the shaper material and the crystal material can also cause appreciable size differences between the shaper and the crystal at room temperature for crystals grown at high temperatures.
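The growth-angle effect described above can be illustrated with a simplified geometric sketch. This is not the EFG design equation from the article: it assumes each crystal edge is drawn inward from the die edge by roughly the meniscus height times the tangent of the growth angle, and the 17-degree growth angle used for illustration is an assumed figure, not a value given here:

```python
import math

def crystal_width_mm(die_width_mm, meniscus_height_mm, growth_angle_deg):
    """Simplified geometric estimate (assumption, not the article's model):
    if the liquid film of height h meets the crystal edge at the material's
    characteristic growth angle eps (measured from the pulling direction),
    each edge is displaced inward by roughly h * tan(eps)."""
    eps = math.radians(growth_angle_deg)
    return die_width_mm - 2.0 * meniscus_height_mm * math.tan(eps)

# Illustrative numbers only: a 10 mm die, 0.2 mm meniscus, 17 deg growth angle.
w = crystal_width_mm(die_width_mm=10.0, meniscus_height_mm=0.2, growth_angle_deg=17.0)
```

This makes the article's point concrete: raising the meniscus (larger h) narrows the crystal, so pulling speed and heater power, which set the meniscus height, directly control the final dimensions.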
The shaper material should be non-reactive with both the melt and growth atmosphere, and should be wet by the melt.
It is possible to grow multiple crystals from a single crucible using the EFG technique, for example by growing many parallel sheets.
Applications
Sapphire: EFG is used to grow large plates of sapphire, primarily for use as robust infrared windows for defense and other applications. Windows about 7 mm thick × 300 mm wide × 500 mm long are produced. The shaper is typically made from molybdenum.
Silicon: EFG was used in the 2000s by Schott Solar to produce silicon sheets for solar photovoltaic panels, by pulling a thin-walled (~250–300 μm) octagon with faces 12.5 cm on a side and a diameter of about 38 cm, about 5–6 m long. The shaper is typically made from graphite.
Other oxides: Many high melting-point oxides have been grown by EFG, among them Ga2O3, LiNbO3, and Nd3+:(LuxGd1−x)3Ga5O12 (Nd:LGGG).
Often an iridium shaper is used.
Horizontal ribbon growth
Horizontal ribbon growth or HRG is a method developed and patented by William Shockley in 1959 for silicon growth. By this method a thin crystalline sheet is pulled horizontally from the top of a crucible. The melt level must be constantly replenished in order to keep the surface of the melt at the same height as the edge of the crucible from which the sheet is being pulled. By blowing a cooling gas at the surface of the growing sheet, very high growth rates (>400 mm/min) can be achieved. The method relies on the solid crystal floating on the surface of the melt, which works because solid silicon is less dense than liquid silicon.
Micro-pulling-down
The micro-pulling-down or μ-PD technique uses a small round opening in the bottom of the crucible to pull a crystalline fiber downward. Hundreds of different crystalline materials have been grown by this technique.
A variation called pendant drop growth or PDG uses a slot in the bottom of the crucible to produce crystalline sheets in a similar manner.
Stepanov technique
The Stepanov technique was developed by A.V. Stepanov in the Soviet Union after 1950. The method involves pulling a crystal vertically through a shaper located at the surface of the melt. The shaper is not necessarily fed by a capillary channel as in EFG. The shaper material may be wetted or non-wetted by the melt, as opposed to EFG where the shaper material is wetted. The technique has been used to grow metal, semiconductor, and oxide crystals.
Czochralski growth using a floating shaper known as a "coracle" was done for some III-V semiconductors prior to the development of advanced control-systems for diameter control.
String ribbon
The string ribbon method, also known as dendritic web or edge-supported pulling, has been used to grow semiconductor sheets including indium antimonide, gallium arsenide, germanium, and silicon.
A seed crystal with the width and thickness matching the sheet to be grown is dipped into the top surface of the melt. Strings of a suitable material are fixed to the vertical edges of the seed and extend down through holes in the bottom of the crucible to a spool. As the seed is raised, string is continuously fed through the melt and a liquid film forms between the seed, the strings, and the melt. The film crystallizes to the seed, forming a sheet or ribbon.
References
Semiconductor growth
Industrial processes
Crystals | Shaping processes in crystal growth | [
"Chemistry",
"Materials_science"
] | 1,281 | [
"Crystallography",
"Crystals"
] |
56,415,312 | https://en.wikipedia.org/wiki/CubeSail%20%28UltraSail%29 | CubeSail was a 2018 low-cost spacecraft propulsion demonstration mission using two identical 1.5U CubeSat satellites to deploy a long solar sail ribbon between them. This mission was the first in a series of increasingly complex planned demonstrations leading up to a full-scale UltraSail heliogyro by the University of Illinois and CU Aerospace.
Background: Heliogyro
UltraSail is a proposed type of robotic spacecraft that uses radiation pressure exerted by sunlight for propulsion. It builds upon the "heliogyro" concept by Richard H. MacNeal, published in 1971, and consists of multiple rotating blades attached to a central hub.
The Heliogyro spacecraft's attitude (orientation), and therefore thrust direction, would be controlled by changing the cyclic and collective blade pitch similar to a helicopter.
Although the Heliogyro design has no mass advantage over a square sail, it remains attractive because the method of deploying large sail blades is simpler than a strut-based design. Blade stiffness is achieved by spinning the spacecraft (centrifugal force) with its rotational axis generally pointing at the Sun.
CubeSail spacecraft
Overview
The University of Illinois together with CU Aerospace designed this mission to demonstrate deployment and to measure the thrust on a 7.7 cm × 250 m membrane (about 20 m2) made of aluminized mylar. The membrane is deployed between two 1.5U CubeSats that separate from each other in low Earth orbit. It is intended as a first step towards the development of the larger solar sail concept called UltraSail.
Re-orientation of the CubeSats will cause the sail to undergo aerodynamic drag in the upper atmosphere for its disposal.
Selection
The spacecraft was selected in 2012 by NASA to be launched as part of the ELaNa program.
Launch
CubeSail was launched on an Electron launch vehicle on 16 December 2018 from New Zealand.
While "satellite beacons at the correct frequency were observed post-launch once on 18 Dec. 2018", there was not "sufficient signal to noise ratio to demodulate the call sign in the beacons", and "no further communications were received from CubeSail".
Follow-on
I-sail
The proposed second mission of the project is called I-Sail, proposed to be launched in 2022, and would consist of a spacecraft with bilateral blades with a total sail area of 2,500 m2. It will demonstrate thrust levels many times those of ion thrusters used for deep space missions and perform an Earth gravity escape. Several science objectives are being assessed as secondary objectives. The project is being funded by NASA's Small Business Innovation Research (SBIR) program.
UltraSail
CubeSail and I-Sail are intended as steps towards the development of a larger (1,600 kg) solar sail concept called UltraSail for interplanetary and interstellar missions. The latter consists of multiple CubeSail-like structures that extend kilometre-long film blades attached to a central hub to ultimately form a heliogyro. The UltraSail blade material, the body of the solar sail, is mounted on multiple reels, each with a width of 5–10 m, and deployed to kilometre-scale blade lengths for a total of 100,000 m2 of sail area. The spacecraft spins around the central hub to flatten the blades by centrifugal force, supported by tip-CubeSats. For the kilometre-long blades' stability, this requires a rotational period of 1–2 hours, so that the centrifugal force exceeds the solar pressure force by a factor of 3 to 5. Each blade is a thin polyimide film coated with ripstop.
For UltraSail, blade control (and hence the spacecraft's attitude control) is initiated by small controllable mini-satellites (tipsats) at the tip of each blade. The tipsat mass provides a stabilizing centrifugal force on the blade while in rotation. Each tipsat would be a 5-meter-long carbon-fiber structure with a total mass of 50 kg, including avionics and 20 kg of propellant (catalyzed nitrous oxide, N2O, and cold gas). Alternatively, the tipsats could be propelled with electric microthrusters to control blade pitch.
The maximum expected thrust force due to solar pressure is equivalent to 400 kW ion thruster systems used for comparable deep space missions.
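The scale of that thrust can be sanity-checked from the 100,000 m2 sail area given above. A flat sail facing the Sun intercepts the photon momentum flux of sunlight, doubled for a perfect reflector. A minimal sketch, where the solar constant, reflectivity, and the assumption of a flat, Sun-facing sail are all illustrative choices rather than figures from the article:

```python
SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU (assumed standard value)
C = 2.998e8               # speed of light, m/s

def sail_thrust_n(area_m2, reflectivity=1.0):
    """Thrust from solar radiation pressure on a flat sail normal to the Sun.
    An absorbed photon transfers S/c of momentum flux per unit area; a
    reflected photon transfers it twice, hence the (1 + reflectivity) factor."""
    return (1.0 + reflectivity) * SOLAR_CONSTANT / C * area_m2

thrust = sail_thrust_n(100_000.0)   # full UltraSail sail area from the article
```

This gives on the order of 1 N of continuous thrust for a perfect reflector at 1 AU, which is the kind of force the article compares against large ion-thruster systems.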
See also
IKAROS, a Japanese solar sail, launched in May 2010
NanoSail-D2, the successor to NanoSail-D, launched in November 2010
LightSail, a controlled solar sail CubeSat launched in July 2019
Near-Earth Asteroid Scout, a solar sail CubeSat currently planned to launch in 2020
Sunjammer, a solar sail that was cancelled before launch in 2014
References
CubeSats
Solar sail spacecraft
Aerospace engineering
Spacecraft launched in 2018
University of Illinois System
NASA satellites
Spacecraft launched by Electron rockets | CubeSail (UltraSail) | [
"Engineering"
] | 994 | [
"Aerospace engineering"
] |