**Klout**
Klout:
Klout was a website and mobile app that used social media analytics to rate its users according to online social influence via the "Klout Score", a numerical value between 1 and 100. In determining the user score, Klout measured the size of a user's social media network and correlated the content created to measure how other users interacted with that content. Klout launched in 2008. Lithium Technologies, which acquired the site in March 2014, closed the service on May 25, 2018. Klout used Bing, Facebook, Foursquare, Google+, Instagram, LinkedIn (individuals' pages, not corporate/business), Twitter, Wikipedia, and YouTube data to create Klout user profiles that were assigned a "Klout Score". Klout scores ranged from 1 to 100, with higher scores corresponding to a higher ranking of the breadth and strength of one's online social influence. While all Twitter users were assigned a score, users who registered at Klout could link multiple social networks, whose data was then aggregated to influence the user's Klout Score.
Methodology:
Klout measured influence by using data points from Twitter, such as following count, follower count, retweets, list memberships, how many spam/dead accounts were following a user, how influential the people who retweeted the user were, and unique mentions. This information was combined with data from a number of other social networks and interactions to produce the Klout Score. Other accounts such as Flickr, Blogger, Tumblr, Last.fm, and WordPress could also be linked by users, but they did not weigh into the Klout Score. Microsoft announced a strategic investment in Klout in September 2012 whereby Bing would have access to Klout influence technology, and Klout would have access to Bing search data for its scoring algorithm. Klout scores were supplemented with three nominally more specific measures, which Klout called "true reach", "amplification", and "network impact". True reach was based on the size of the audience that actively engaged with the user's messages. Amplification related to the likelihood that one's messages would generate actions such as retweets, mentions, likes, and comments. Network impact reflected the computed influence value of a person's engaged audience.
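Klout's actual weighting scheme was proprietary and never published in full; the following Python sketch is purely illustrative of how engagement signals like those listed above might be folded into a bounded 1-100 score. The Profile fields, weights, and clamping are invented for the example and are not Klout's algorithm.

```python
import math
from dataclasses import dataclass

@dataclass
class Profile:
    followers: int          # total followers
    retweets: int           # retweets received in the measurement period
    mentions: int           # unique mentions received
    list_memberships: int   # Twitter lists the account appears on
    spam_followers: int     # estimated spam/dead accounts among followers

def toy_influence_score(p: Profile) -> float:
    """Combine a few engagement signals into a 1-100 score (illustrative only)."""
    genuine_followers = max(p.followers - p.spam_followers, 0)
    # "True reach" proxy: audience size with diminishing returns.
    reach = math.log10(1 + genuine_followers)
    # "Amplification" proxy: actions generated per follower.
    amplification = (p.retweets + p.mentions) / (1 + genuine_followers)
    # "Network impact" proxy: curation by others, e.g. list memberships.
    network = math.log10(1 + p.list_memberships)
    raw = 10 * reach + 40 * min(amplification, 1.0) + 5 * network
    return max(1.0, min(100.0, raw))

if __name__ == "__main__":
    example = Profile(followers=5200, retweets=180, mentions=95,
                      list_memberships=40, spam_followers=150)
    print(round(toy_influence_score(example), 1))
```

The logarithmic terms mirror the intuition that audience size has diminishing returns, while the amplification term rewards engagement relative to audience size.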
History:
In 2007, Joe Fernandez underwent a surgery that required his jaw to be wired shut. Because he could not speak for three months, he turned to Facebook and Twitter for social interaction. During this period, he became obsessed with the idea that "word of mouth was measurable." Pulling data from Twitter's API, he created a prototype that would assign users a score out of 100 to measure their influence. Midway into 2008, he showed the prototype to some friends, who told him it was "the dumbest thing ever." In May 2018, Klout announced that it would cease operations on May 25, 2018. The closure had been planned for some time and was accelerated by the entry into force of the General Data Protection Regulation.
Business model:
Perks
The primary business model for Klout involved companies paying Klout for Perks campaigns, in which a company offered free services or products to Klout users who matched a pre-defined set of criteria including their scores, topics, and geographic locations. While Klout users who had received Perks were under no obligation to write about them, the hope was that they would effectively advertise the products on social media. Klout offered the Perks program beginning in 2010. According to Klout CEO Joe Fernandez, about 50 partnerships had been established as of November 2011. In May 2013, Klout announced that its users had claimed more than 1 million Perks across over 400 campaigns.
Business model:
Klout for business
In March 2013, Klout announced its intention to begin displaying business analytics aimed at helping business and brand users learn about their online audiences.
Content page
In September 2012, Klout announced an information-sharing partnership with the Bing search engine, showing Klout scores in Bing searches and allowing Klout users to post items selected by Bing to social media.
Criticism:
Several objections to Klout's methodology were raised regarding both the process by which scores were generated and the overall societal effect. Critics pointed out that Klout scores were not representative of the influence a person really has, highlighted by Barack Obama, then President of the United States, having a lower influence score than a number of bloggers. Other social critics argued that the Klout score devalued authentic online communication and promoted social ranking and stratification by trying to quantify human interaction. Klout attempted to address some of these criticisms and updated its algorithms so that Barack Obama's importance was better reflected. The site was criticized for violating the privacy of minors and for exploiting users for its own profit. John Scalzi described the principle behind Klout's operation as "socially evil" in its exploitation of its users' status anxiety. Charles Stross described the service as "the Internet equivalent of herpes," blogging that his analysis of Klout's terms and conditions revealed that the company's business model was illegal in the United Kingdom, where it conflicted with the Data Protection Act 1998; Stross advised readers to delete their Klout accounts and opt out of Klout services. Ben Rothke concluded that "Klout has its work cut out, and it seems like they need to be in beta a while longer. Klout can and should be applauded for trying to measure this monstrosity called social influence; but their results of influence should, in truth, carry very little influence." Klout was also criticized for the opacity of its methodology. While it was claimed that advanced machine learning techniques were used, leveraging network theory, Sean Golliher analysed Klout scores of Twitter users and found that the simple logarithm of the number of followers was sufficient to explain 95% of the variance. In November 2015 Klout released an academic paper discussing its methodology at the IEEE BigData 2015 Conference. In spite of the controversy, some employers made hiring decisions based on Klout scores. As reported in an article for Wired, a man recruited for a VP position with fifteen years of experience consulting for companies including America Online, Ford and Kraft was eliminated as a candidate specifically because of his Klout score, which at the time was 34, in favour of a candidate with a score of 67.
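The follower-count analysis attributed to Sean Golliher can be illustrated with a simple one-variable regression. The Python sketch below, using entirely made-up observations, fits a Klout-style score against log10(followers) and reports the variance explained; real conclusions would require the actual data Golliher collected.

```python
# Illustrative single-variable regression: score vs. log10(follower count).
# The (followers, score) pairs below are invented for demonstration.
import math

observations = [
    (50, 18), (300, 27), (1_000, 35), (5_000, 44),
    (20_000, 52), (100_000, 63), (1_000_000, 78),
]

xs = [math.log10(f) for f, _ in observations]
ys = [s for _, s in observations]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
sxx = sum((x - mean_x) ** 2 for x in xs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

predictions = [intercept + slope * x for x in xs]
ss_res = sum((y - p) ** 2 for y, p in zip(ys, predictions))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(f"score = {intercept:.1f} + {slope:.1f} * log10(followers), R^2 = {r_squared:.3f}")
```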
Notable events:
September 2011: Klout integrated with Google+.
October 2011: Klout changed its scoring algorithm, lowering many scores and creating complaints.
November 2011: Klout partnered with Wahooly for their beta launch.
January 2012: Klout was able to raise an estimated $30 million from a host of venture capital firms.
February 2012: Klout acquired local and mobile neighborhood app Blockboard.
May 2012: Klout announced growth of 2000 new partners over a one-year period.
August 14, 2012: Klout changed its algorithm again.
September 2012: Microsoft announced a strategic investment in Klout for an undisclosed sum.
March 28, 2013: Klout announced inclusion of Instagram analytics in factoring Klout scores.
May 13, 2013: Klout users had claimed more than 1 million Perks across over 400 campaigns.
March 27, 2014: Lithium Technologies acquired Klout.
September 14, 2015: Engagement on YouTube content was factored into the Klout Score.
October 29, 2015: Klout exposed the inner workings of the Klout Score.
May 10, 2018: Lithium announced that they would be ending the service on May 25, 2018.
**Climate Data Record**
Climate Data Record:
A Climate Data Record (CDR) is a specific definition of a climate data series, developed by the Committee on Climate Data Records from NOAA Operational Satellites of the National Research Council at the request of NOAA in the context of satellite records. It is defined as "a time series of measurements of sufficient length, consistency, and continuity to determine climate variability and climate change." Such measurements provide an objective basis for the understanding and prediction of climate and its variability, such as global warming.
Interim Climate Data Record (ICDR):
An Interim Climate Data Record (ICDR) is a dataset that has been forward processed, using the baselined CDR algorithm and processing environment but whose consistency and continuity have not been verified. Eventually it will be necessary to perform a new reprocessing of the CDR and ICDR parts together to guarantee consistency, and the new reprocessed data record will replace the old CDR.
Fundamental Climate Data Record (FCDR):
A Fundamental Climate Data Record is a long-term data record of calibrated and quality-controlled data designed to allow the generation of homogeneous products that are accurate and stable enough for climate monitoring.
Examples of CDRs:
AVHRR Pathfinder Sea Surface Temperature
GHRSST-PP Reanalysis Project, on the website for GHRSST-PP
Snow and Ice
NOAA's Climate Data Records homepage
**2-Phospho-L-lactate guanylyltransferase**
2-Phospho-L-lactate guanylyltransferase:
2-Phospho-L-lactate guanylyltransferase (EC 2.7.7.68, CofC, MJ0887) is an enzyme with the systematic name GTP:2-phospho-L-lactate guanylyltransferase. This enzyme catalyses the following chemical reaction:
(2S)-2-phospholactate + GTP ⇌ (2S)-lactyl-2-diphospho-5'-guanosine + diphosphate
This enzyme is involved in the biosynthesis of coenzyme F420.
**Knotenschiefer**
Knotenschiefer:
Knotenschiefer is a variety of spotted slate characterized by conspicuous subspherical or polyhedral clots that are often individual minerals such as cordierite, biotite, chlorite, andalusite and others. Like fleckschiefer, fruchtschiefer and garbenschiefer, knotenschiefer is a variety of contact-metamorphic slate. It is formed at temperatures of around 400 °C and its dark coloration is caused by graphite; fruchtschiefer forms at around 500 °C. Knotenschiefer is characterized by small nodules, up to one centimetre in size, and nodular deposits of mica resulting from the growth in grain size during metamorphism. The nodules consist of iron minerals, carbonaceous matter and mica; as the metamorphic temperature rises, minerals such as andalusite or chiastolite increasingly occur.
**GEN2PHEN**
GEN2PHEN:
Genotype to Phenotype Databases: a Holistic Approach (GEN2PHEN) is a European project aiming to develop a knowledge web portal, the Knowledge Centre, that integrates information from genotype to phenotype in a unifying resource.
Summary and Objectives:
The GEN2PHEN project aims to unify human and model organism genetic variation databases towards increasingly holistic views into Genotype-To-Phenotype (G2P) data, and to link this system into other biomedical knowledge sources via genome browser functionality. By the project's end, it will establish the technological building blocks needed for the evolution of today's diverse G2P databases into a future seamless G2P biomedical knowledge environment. This will consist of a European-centred but globally networked hierarchy of bioinformatics GRID-linked databases, tools and standards, all tied into the Ensembl genome browser. The project has the following specific objectives:
To analyse the G2P field and thus determine emerging needs and practices
To develop key standards for the G2P database field
To create generic database components, services and integration infrastructures for the G2P database domain
To create search modalities and data presentation solutions for G2P knowledge
To facilitate the process of populating G2P databases
To build a major G2P internet portal
To deploy GEN2PHEN solutions to the community
To address system durability and long-term financing
To undertake a whole-system utility and validation pilot study
The GEN2PHEN Consortium members have been selected from a talented pool of European research groups and companies that are interested in the G2P database challenge. Additionally, a few non-EU participants have been included to bring extra capabilities to the initiative. The final constellation is characterised by broad and proven competence, a network of established working relationships, and high-level roles/connections within other significant projects in this domain...
Background and Concept:
By providing a complete Homo sapiens ‘parts list’ (the gene sequences) and a powerful ‘toolkit’ (technologies), the Human Genome Project has revolutionised mankind’s ability to explore how genes cause disease and other phenotypes. Studies in this domain are proceeding at a rapid and ever-increasing pace, generating unprecedented amounts of raw and processed data. It is now imperative that the scientific community finds ways to effectively manage and exploit this flood of information for knowledge creation and practical benefit to society. This fundamental goal lies at the heart of the “Genotype-To-Phenotype Databases: A Holistic Solution (GEN2PHEN)” project.
Background and Concept:
Previous genetics studies have shown that inter-individual genome variation plays a major role in differential normal development and disease processes. However, the details of how these relationships work are far from clear, even in the case of most Mendelian disorders where single genetic alterations are fully penetrant (essentially causative, rather than risk modifying). Background genetic effects (modifier genes), epistasis, somatic variation, and environmental factors all complicate the situation. This is particularly the case in complex, multi-factorial disorders (e.g., cancer, heart disease, diabetes, dementia) that will affect most of us at some stage in our lifetime. Strategies do, however, now exist to study the genetics of these disorders, and such investigations are a major focus of research throughout Europe and beyond. A common thread in these studies is the need to create ever-larger datasets and integrate these more effectively.
Related Projects and Applications:
GWAS Central
Leiden Open Variation Database
Locus Reference Genomic (LRG)
Partners:
University of Leicester, UK
European Molecular Biology Laboratory, Germany
Fundació IMIM, Spain
Leiden University Medical Center, Netherlands
Institut National de la Santé et de la Recherche Médicale, France
Karolinska Institutet, Sweden
Foundation for Research and Technology – Hellas, Greece
Commissariat à l’Energie Atomique, France
Erasmus University Medical Center, Netherlands
Institute for Molecular Medicine Finland, University of Helsinki, Finland
University of Aveiro – IEETA, Portugal
University of Western Cape, South Africa
Council of Scientific and Industrial Research, India
Swiss Institute of Bioinformatics, Switzerland
University of Manchester, UK
BioBase GmbH, Germany
deCODE genetics ehf, Iceland
Biocomputing Platforms Ltd Oy, Finland
University of Patras, Greece
University Medical Center Groningen (UMCG), Netherlands (From March 2012)
University of Lund (ULUND), Sweden (From March 2012)
Synapse Research Management Partners, Spain (From March 2012)
**Tap water**
Tap water:
Tap water (also known as faucet water, running water, or municipal water) is water supplied through a tap, a water dispenser valve. In many countries, tap water usually has the quality of drinking water. Tap water is commonly used for drinking, cooking, washing, and toilet flushing. Indoor tap water is distributed through indoor plumbing, which has existed since antiquity but was available to very few people until the second half of the 19th century when it began to spread in popularity in what are now developed countries. Tap water became common in many regions during the 20th century, and is now lacking mainly among people in poverty, especially in developing countries.
Tap water:
Governmental agencies commonly regulate tap water quality. Household water purification methods such as water filters, boiling, or distillation can be used to treat tap water's microbial contamination to improve its potability. The application of technologies (such as water treatment plants) involved in providing clean water to homes, businesses, and public buildings is a major subfield of sanitary engineering. Calling a water supply "tap water" distinguishes it from the other main types of fresh water which may be available; these include water from rainwater-collecting cisterns, water from village pumps or town pumps, water from wells, or water carried from streams, rivers, or lakes (whose potability may vary).
Background:
Providing tap water to large urban or suburban populations requires a complex and carefully designed system of collection, storage, treatment and distribution, and is commonly the responsibility of a government agency. Publicly available treated water has historically been associated with major increases in life expectancy and improved public health. Water disinfection can greatly reduce the risks of waterborne diseases such as typhoid and cholera. There is a great need around the world to disinfect drinking water. Chlorination is currently the most widely used water disinfection method, although chlorine compounds can react with substances in water and produce disinfection by-products (DBP) that pose problems to human health. Local geological conditions affecting groundwater are determining factors for the presence of various metal ions, often rendering the water "soft" or "hard". Tap water remains susceptible to biological or chemical contamination. Water contamination remains a serious health issue around the world, and diseases resulting from consuming contaminated water cause the death of 1.6 million children each year. In the event of contamination deemed dangerous to public health, government officials typically issue an advisory regarding water consumption. In the case of biological contamination, residents are usually advised to boil their water before consumption or to use bottled water as an alternative. In the case of chemical contamination, residents may be advised to refrain from consuming tap water entirely until the matter is resolved.
Background:
In many areas, a low concentration of fluoride (< 1.0 ppm F) is intentionally added to tap water to improve dental health, although in some communities "fluoridation" remains a controversial issue (see water fluoridation controversy). However, long-term consumption of water with a high fluoride concentration (> 1.5 ppm F) can have serious undesirable consequences such as dental fluorosis (mottled enamel) and skeletal fluorosis (bone deformities in children). Fluorosis severity depends on how much fluoride is present in the water, as well as on people's diet and physical activity. Defluoridation methods include membrane-based methods, precipitation, adsorption, and electrocoagulation.
Fixtures and appliances:
Everything in a building that uses water falls under one of two categories: fixture or appliance. As these consumption points perform their function, most produce waste/sewage components that will require removal by the waste/sewage side of the system. The minimum protection against backflow is an air gap. See cross connection control & backflow prevention for an overview of backflow prevention methods and devices currently in use, based on both mechanical and physical principles. Fixtures are devices that use water without an additional source of power.
Fixtures and appliances:
Fittings and valves
Potable water supply systems are composed of pipes, fittings and valves.
Fixtures and appliances:
Materials
The installation of water pipes can be done using the following plastic and metal materials:
Plastic:
polybutylene (PB)
high-density cross-linked polyethylene (PE-X)
block copolymer of polypropylene (PP-B)
polypropylene homopolymer (PP-H)
random copolymer of polypropylene (PP-R)
multilayer: cross-linked polyethylene, aluminum, high-density polyethylene (PE-X/Al/PE-HD)
multilayer: cross-linked polyethylene, aluminum, cross-linked polyethylene (PE-X/Al/PE-X)
multilayer: random copolymer of polypropylene, aluminum, random copolymer of polypropylene (PP-R/Al/PP-R)
chlorinated polyvinyl chloride (PVC-C)
unplasticized polyvinyl chloride (PVC-U, cold water only)
Metals:
ordinary galvanized carbon steel
corrosion-resistant steel
deoxidized high-phosphorus copper (Cu-DHP)
lead (no longer used for new installations due to its toxicity)
Other materials may also be used if pipes made from them have been approved for circulation and are in widespread use in the construction of water supply systems.
Fixtures and appliances:
Lead pipes
For many centuries, water pipes were made of lead because of its ease of processing and durability. The use of lead pipes caused health problems, owing to ignorance of the dangers of lead to the human body, which include miscarriages and high death rates among newborns. Lead pipes, most of which were installed in the late 1800s in the US, are still common today, largely in the Northeast and the Midwest. Their impact is relatively small because fouling and mineral deposits inside the pipes reduce the amount of lead that dissolves into the water; however, lead pipes are still detrimental. Most of the lead pipes that exist today are being removed and replaced with more common materials such as copper or some type of plastic.
Fixtures and appliances:
A linguistic remnant of lead piping survives in several languages: the names of the experts involved in the installation, repair, and maintenance of water supply systems derive from the Latin word for lead, as in the English word 'plumber' and the French word 'plombier'.
Potable water supply:
Potable water is water that is drinkable and does not pose a risk to health. This supply may come from several possible sources.
Potable water supply:
Municipal water supply
Water wells
Processed water from creeks, streams, rivers, lakes, rainwater, etc.
Domestic water systems have been evolving since people first located their homes near a running water supply, such as a stream or river. The water flow also allowed sending wastewater away from the residences. Modern plumbing delivers clean, safe, potable water to each service point in the water distribution system, including taps. It is important that the clean water not be contaminated by the wastewater (disposal) side of the process system. Historically, this contamination of drinking water has been one of the largest killers of humans. Most of the mandates for enforcing drinking water quality standards apply not to the distribution system but to the treatment plant. Even though the water distribution system is supposed to deliver the treated water to consumers' taps without water quality degradation, complicated physical, chemical, and biological factors within the system can cause contamination of tap water. There is a huge gap in potable water supply between the developed and developing world. In general, Africa, especially Sub-Saharan Africa, has the poorest water supply system in the world because of insufficient access and the low quality of the water in the region, while Finland has the best tap water quality in the world, according to reports by UNICEF and UNESCO. Tap water can sometimes appear cloudy, which is often mistaken for mineral impurities in the water. The cloudiness is usually caused by air bubbles coming out of solution due to a change in temperature or pressure. Because cold water holds more dissolved gas than warm water, water that is heated or depressurized can hold less dissolved gas and releases it as small bubbles. The harmless cloudiness disappears quickly as the gas is released from the water.
Potable water supply:
Hot water supply Domestic hot water is provided by means of water heater appliances, or through district heating. The hot water from these units is then piped to the various fixtures and appliances that require hot water, such as lavatories, sinks, bathtubs, showers, washing machines, and dishwashers.
Water flow reduction
Water flow through a tap can be reduced by inexpensive small plastic flow reducers. These restrict flow by between 15 and 50%, aiding water conservation and reducing the burden on both water supply and treatment facilities.
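As a rough worked example of what a 15-50% restriction means in practice, the short Python snippet below computes the water saved per day for an assumed unrestricted flow rate and daily usage; both figures are assumptions for illustration only.

```python
# Worked example of tap flow-reducer savings, using the 15-50% range above.
# The base flow rate and minutes of daily use are assumed, not measured values.
def reduced_flow(base_lpm: float, restriction: float) -> float:
    """Flow rate (litres/minute) after a reducer that cuts flow by `restriction` (0-1)."""
    return base_lpm * (1.0 - restriction)

base = 12.0             # assumed unrestricted tap flow, litres per minute
minutes_per_day = 20.0  # assumed daily tap use

for restriction in (0.15, 0.50):
    saved_per_day = (base - reduced_flow(base, restriction)) * minutes_per_day
    print(f"{restriction:.0%} restriction saves about {saved_per_day:.0f} L/day")
```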
Comparison to bottled water:
United States
Contaminant levels found in tap water vary between households and plumbing systems. While the majority of US households have access to high-quality tap water, demand for bottled water continues to increase. In 2002, the Gallup Public Opinion Poll revealed that the perceived health risk associated with tap water consumption is one of the main reasons American consumers prefer bottled water over tap water. The level of trust in tap water depends on various criteria, including existing governmental regulations on water quality and their enforcement. In 1993, the cryptosporidium outbreak in Milwaukee, Wisconsin, sickened more than 400,000 residents and was considered the largest such outbreak in US history. Severe violations of tap water standards contribute to a decline in public trust. The difference in water quality between bottled and tap water is debatable. In 1999, the Natural Resources Defense Council (NRDC) released controversial findings from a 4-year study on bottled water. The study claimed that one-third of the tested waters were contaminated with synthetic organic chemicals, bacteria, and arsenic. At least one sample exceeded state guidelines for contamination levels in bottled water. In the United States, some municipalities make an effort to use tap water over bottled water on governmental properties and at events. Voters in Washington State repealed a bottled water tax via citizen initiative.
Regulation and compliance:
United States
The US Environmental Protection Agency (EPA) regulates the allowable levels of some contaminants in public water systems. There may also be numerous contaminants in tap water that are not regulated by the EPA and yet are potentially harmful to human health. Community water systems—those systems that serve the same people throughout the year—must provide an annual "Consumer Confidence Report" to customers. The report identifies contaminants, if any, in the water system and explains the potential health impacts. After the Flint lead crisis (2014), researchers have paid special attention to studying quality trends in drinking water across the US. Unsafe levels of lead were found in tap water in different cities, such as Sebring, Ohio, in August 2015, and Washington, DC, in 2001. Several studies show that a Safe Drinking Water Act (SDWA) health violation occurs in around 7-8% of community water systems (CWS) in an average year. Around 16 million cases of acute gastroenteritis occur each year in the US due to contaminants in drinking water.
Regulation and compliance:
USGS has tested tap water from 716 locations across the United States, finding PFAS exceeding EPA advisories in approximately 75% of samples from urban areas and approximately 25% of samples from rural areas. Before a water supply system is constructed or modified, the designer and contractor are required to consult the local plumbing code and obtain a building permit prior to construction. Replacing an existing water heater may require a permit and inspection of the work. The US national standard for potable water piping materials is NSF/ANSI 61 certification. NSF/ANSI also sets standards for certifying polytanks, though the Food and Drug Administration (FDA) approves the materials.
Regulation and compliance:
Japan
To improve water quality, Japan's Ministry of Health revised its water quality standards, which were implemented in April 2004. Numerous professionals developed the drinking water standards and determined ways to manage a high-quality water system. In 2008, the regulations were further revised to improve water quality and reduce the risk of water contamination.
**Calcium 2-aminoethylphosphate**
Calcium 2-aminoethylphosphate:
Calcium 2-aminoethylphosphate (Ca-AEP or Ca-2AEP) is a compound discovered by the biochemist Erwin Chargaff in 1941. It is the calcium salt of phosphorylethanolamine. It was patented by Hans Alfred Nieper and Franz Kohler.
Terminology and glossary:
Calcium 2-amino ethyl phosphoric acid (Ca-AEP or Ca-2AEP) is also called calcium ethylamino-phosphate (calcium EAP), calcium colamine phosphate, calcium 2-aminoethyl ester of phosphoric acid, and calcium 2-amino ethanol phosphate. 2-AEP plays a role as a component of the cell membrane and at the same time has the property of forming complexes with minerals. This mineral transporter moves into the outer layer of the outer cell membrane, where it releases its associated mineral and is itself metabolized into the structure of the cell membrane.
History, treatments, uses, and risks:
Ca-AEP was discovered by Erwin Chargaff in 1953. According to the U.S. National Multiple Sclerosis Society, Calcium EAP is often promoted as a cure or therapy for multiple sclerosis and many other diseases. However, the society states that it is not recommended by its medical advisory board, and also notes that the Food and Drug Administration has classified it as unsafe and unapproved for use. Calcium 2-AEP is manufactured by numerous nutraceutical companies and is sold online and in health food stores.
**Host microbe interactions in Caenorhabditis elegans**
Host microbe interactions in Caenorhabditis elegans:
Caenorhabditis elegans-microbe interactions are defined as any interaction encompassing the association with microbes that temporarily or permanently live in or on the nematode C. elegans. The microbes can engage in commensal, mutualistic or pathogenic interactions with the host. These include bacterial, viral, unicellular eukaryotic, and fungal interactions. In nature C. elegans harbours a diverse set of microbes. In contrast, C. elegans strains that are cultivated in laboratories for research purposes have lost the naturally associated microbial communities and are commonly maintained on a single bacterial strain, Escherichia coli OP50. However, E. coli OP50 does not allow for reverse genetic screens because RNAi libraries have only been generated in strain HT115. This limits the ability to study bacterial effects on host phenotypes. The host-microbe interactions of C. elegans are closely studied because many of the genes involved have orthologs in humans; the better these interactions are understood in C. elegans, the better the corresponding interactions within the human body can be understood.
Natural ecology:
C. elegans is a well-established model organism in different research fields, yet its ecology is only poorly understood. It has a short development cycle of only three days, with a total life span of about two weeks. C. elegans was previously considered a soil-living nematode, but in the last 10 years it was shown that its natural habitats are microbe-rich, such as compost heaps, rotten plant material, and rotten fruits. Most studies on C. elegans are based on the N2 strain, which has adapted to laboratory conditions. Only in the last few years has the natural ecology of C. elegans been studied in more detail, and one current research focus is its interaction with microbes. As C. elegans feeds on bacteria (microbivory), the intestine of worms isolated from the wild is usually filled with a large number of bacteria. In contrast to the very high diversity of bacteria in the natural habitat of C. elegans, the lab strains are only fed with one bacterial strain, the Escherichia coli derivative OP50. OP50 was not co-isolated with C. elegans from nature, but was rather used because of its high convenience for laboratory maintenance. Bleaching is a common method in the laboratory to clean C. elegans of contaminations and to synchronize a population of worms. During bleaching the worms are treated with 5N NaOH and household bleach, leading to the death of all worms and survival of only the nematode eggs. The larvae hatching from these eggs lack any microbes, as none of the currently known C. elegans-associated microbes can be transferred vertically. Since most laboratory strains are kept under these gnotobiotic conditions, nothing is known about the composition of the C. elegans microbiota. The ecology of C. elegans can only be fully understood in the light of the multiple interactions with the microorganisms which it encounters in the wild. The effect of microbes on C. elegans can vary from beneficial to lethal.
Beneficial microbes:
In its natural habitat C. elegans is constantly confronted with a variety of bacteria that could have both negative and positive effects on its fitness. To date, most research on C. elegans-microbe interactions focused on interactions with pathogens. Only recently, some studies addressed the role of commensal and mutualistic bacteria on C. elegans fitness. In these studies, C. elegans was exposed to various soil bacteria, either isolated in a different context or from C. elegans lab strains transferred to soil. These bacteria can affect C. elegans either directly through specific metabolites, or they can cause a change in the environmental conditions and thus induce a physiological response in the host.
Beneficial microbes:
Beneficial bacteria can have a positive effect on the lifespan, generate certain pathogen resistances, or influence the development of C. elegans.
Beneficial microbes:
Lifespan extension
The lifespan of C. elegans is prolonged when grown on plates with Pseudomonas sp. or Bacillus megaterium compared to individuals living on E. coli. The lifespan extension mediated by B. megaterium is greater than that caused by Pseudomonas sp. As determined by microarray analysis (a method which allows the identification of C. elegans genes that are differentially expressed in response to different bacteria), 14 immune defence genes were up-regulated when C. elegans was grown on B. megaterium, while only two were up-regulated when fed with Pseudomonas sp. In addition to immune defence genes, other upregulated genes are involved in the synthesis of collagen and other cuticle components, indicating that the cuticle might play an important role in the interaction with microbes. Although some of the genes are known to be important for C. elegans lifespan extension, the precise underlying mechanisms still remain unclear.
Beneficial microbes:
Protection against microbes
The microbial communities residing inside the host body have now been recognized to be important for effective immune responses, yet the molecular mechanisms underlying this protection are largely unknown. Bacteria can help the host fight against pathogens either by directly stimulating the immune response or by competing with the pathogenic bacteria for available resources. In C. elegans, some associated bacteria seem to generate protection against pathogens. For example, when C. elegans is grown on Bacillus megaterium or Pseudomonas mendocina, worms are more resistant to infection with the pathogenic bacterium Pseudomonas aeruginosa, which is a common bacterium in C. elegans' natural environment and therefore a potential natural pathogen. This protection is characterized by prolonged survival on P. aeruginosa in combination with a delayed colonization of C. elegans by the pathogen. Due to its comparatively large size, B. megaterium is not an optimal food source for C. elegans, resulting in delayed development and a reduced reproductive rate. The ability of B. megaterium to enhance resistance against infection with P. aeruginosa seems to be linked to the decrease in reproductive rate. However, the protection against P. aeruginosa infection provided by P. mendocina is reproduction-independent and depends on the p38 mitogen-activated protein kinase pathway. P. mendocina is able to activate the p38 MAPK pathway and thus to stimulate the immune response of C. elegans against the pathogen. A common way for an organism to protect itself against microbes is to increase fecundity, raising the number of surviving offspring in the face of an attack. This defense against parasites is genetically linked to stress response pathways and dependent on the innate immune system.
Beneficial microbes:
Effects on development
Under natural conditions it might be advantageous for C. elegans to develop as fast as possible to be able to reproduce rapidly. The bacterium Comamonas DA1877 accelerates the development of C. elegans. Neither TOR (target of rapamycin) nor insulin signalling seems to mediate this effect on accelerated development. It is thus possible that secreted metabolites of Comamonas, which might be sensed by C. elegans, lead to faster development. Worms that were fed with Comamonas DA1877 also showed a reduced number of offspring and a reduced lifespan. Another microbe that accelerates C. elegans' growth is L. sphaericus. This bacterium significantly increased the growth rate of C. elegans when compared to the normal diet of E. coli OP50. C. elegans is mostly grown and observed in a controlled laboratory with a controlled diet, and may therefore show different growth rates with naturally occurring microbes.
Pathogenic microbes:
In its natural environment C. elegans is confronted with a variety of different potential pathogens. C. elegans has been used intensively as a model organism for studying host-pathogen interactions and the immune system. These studies revealed that C. elegans has well-functioning innate immune defenses. The first line of defense is the extremely tough cuticle, which provides an external barrier against pathogen invasion. In addition, several conserved signaling pathways contribute to defense, including the DAF-2/DAF-16 insulin-like receptor pathway and several MAP kinase pathways, which activate physiological immune responses. Finally, pathogen avoidance behavior represents another line of C. elegans immune defense. All these defense mechanisms do not work independently, but jointly to ensure an optimal defense response against pathogens. Many microorganisms were found to be pathogenic for C. elegans under laboratory conditions. To identify potential C. elegans pathogens, worms in the L4 larval stage are transferred to a medium that contains the organism of interest, which is a bacterium in most cases. Pathogenicity of the organism can be inferred by measuring the lifespan of worms. There are several known human pathogens that have a negative effect on C. elegans survival. Pathogenic bacteria can also form biofilms, whose sticky exopolymer matrix can impede C. elegans motility and cloak bacterial quorum sensing chemoattractants from predator detection. However, only very few natural C. elegans pathogens are currently known.
Pathogenic microbes:
Eukaryotic microbes
One of the best studied natural pathogens of C. elegans is the microsporidium Nematocida parisii, which was directly isolated from wild-caught C. elegans. N. parisii is an intracellular parasite that is exclusively transmitted horizontally from one animal to another. The microsporidian spores are likely to exit the cells by disrupting a conserved cytoskeletal structure in the intestine called the terminal web. It seems that none of the known immune pathways of C. elegans is involved in mediating resistance against N. parisii. Microsporidia were found in several nematodes isolated from different locations, indicating that microsporidia are common natural parasites of C. elegans. The N. parisii-C. elegans system represents a very useful tool to study infection mechanisms of intracellular parasites. Additionally, a new species of microsporidia was recently found in a wild caught C. elegans that genome sequencing places in the same genus Nematocida as prior microsporidia seen in these nematodes. This new species was named Nematocida displodere, after a phenotype seen in late infected worms that explode at the vulva to release infectious spores. N. displodere was shown to infect a broad range of tissues and cell types in C. elegans, including the epidermis, muscle, neurons, intestine, seam cells, and coelomocytes. Strangely, the majority of intestinal infection fails to grow to later parasite stages, while the muscle and epidermal infection thrives. This is in stark contrast to N. parisii which infects and completes its entire life cycle in the C. elegans intestine. These related Nematocida species are being used to study the host and pathogen mechanisms responsible for allowing or blocking eukaryotic parasite growth in different tissue niches. Another eukaryotic pathogen is the fungus Drechmeria coniospora, which has not been directly co-isolated with C. elegans from nature, but is still considered to be a natural pathogen of C. elegans. D. coniospora attaches to the cuticle of the worm at the vulva, mouth, and anus and its hyphae penetrate the cuticle. In this way D. coniospora infects the worm from the outside, while the majority of bacterial pathogens infect the worm from the intestinal lumen.
Pathogenic microbes:
Viral pathogens
In 2011 the first naturally associated virus was isolated from a C. elegans found outside of a laboratory. The Orsay virus is an RNA virus that is closely related to nodaviruses. The virus is not stably integrated into the host genome. It is transmitted horizontally under laboratory conditions. An antiviral RNAi pathway is essential for C. elegans resistance against Orsay virus infection. Before this discovery, no virus, other intracellular pathogen, or multicellular parasite had been known to naturally affect the nematode, which precluded the use of C. elegans as an experimental system for such interactions. In 2005, two reports showed that vesicular stomatitis virus (VSV), an arbovirus with a broad invertebrate and vertebrate host range, could replicate in primary cells derived from C. elegans embryos.
Pathogenic microbes:
Bacterial pathogens
Two bacterial strains of the genus Leucobacter were co-isolated from nature with the two Caenorhabditis species C. briggsae and C. n. spp 11, and named Verde 1 and Verde 2. These two Leucobacter strains showed contrasting pathogenic effects in C. elegans. Worms that were infected with Verde 2 produced a deformed anal region (“Dar” phenotype), while infections with Verde 1 resulted in slower growth due to coating of the cuticle with the bacterial strain. In liquid culture, Verde 1-infected worms stuck together by their tails and formed so-called “worm stars”. The trapped worms cannot free themselves and eventually die, after which C. elegans is used as a food source by the bacteria. Only larvae in the L4 stage seem to be able to escape, by autotomy: they split their bodies in half, so that the anterior half can escape. The “half-worms” remain viable for several days. The Gram-positive bacterium Bacillus thuringiensis is likely associated with C. elegans in nature. B. thuringiensis is a soil bacterium that is often used in infection experiments with C. elegans. It is a spore-forming bacterium that produces crystal (Cry) toxins, which are associated with the spores. These are jointly taken up by C. elegans orally. Inside the host, the toxins bind to the surface of intestinal cells, where they induce the formation of pores, causing the cells' destruction. The resulting change in milieu in the gut leads to germination of the spores, which subsequently proliferate in the worm body. One aspect of the C. elegans-B. thuringiensis system is the high variability in pathogenicity between different strains: there are highly pathogenic strains, but also strains that are less pathogenic or even non-pathogenic.
**Kamikaze 1NT**
Kamikaze 1NT:
Kamikaze 1NT is a preemptive 1NT opening in the game of contract bridge that in common practice shows a balanced hand with 10-12 high-card points (HCP), also known as the mini-notrump range. It is used in first or second seat in the hope of making 1NT opposite an average hand of about 10 HCP.
John Kierein originally developed it as part of a bidding system to indicate 9-12 HCP, but he modified the point range to 10-13 HCP because American Contract Bridge League (ACBL) rules on conventions did not allow the use of Stayman on opening notrump bids with a lower limit below 10 HCP.
**Bug bash**
Bug bash:
In software development, a bug bash is a procedure where all the developers, testers, program managers, usability researchers, designers, documentation folks, and even sometimes marketing people, put aside their regular day-to-day duties and "pound on the product"—that is, each exercises the product in every way they can think of. Because each person will use the product in slightly different (or very different) ways, and the product is getting a great deal of use in a short amount of time, this approach may reveal bugs relatively quickly. The use of bug-bashing sessions is one possible tool in the testing methodology TMap (test management approach). Bug-bashing sessions are usually announced to the organization some days or weeks ahead of time. The test management team may specify that only some parts of the product need testing. It may give detailed instructions to each participant about how to test, and how to record bugs found.
Bug bash:
In some organizations, a bug-bashing session is followed by a party and a prize to the person who finds the worst bug, and/or the person who finds the greatest total of bugs.
Bug Bash is a collaboration event; a step-by-step procedure is given in the article 'Bug Bash—A Collaboration Episode'.
**Urmetazoan**
Urmetazoan:
The Urmetazoan is the hypothetical last common ancestor of all animals, or metazoans. It is universally accepted to be a multicellular heterotroph — with the novelties of a germline and oogamy, an extracellular matrix (ECM) and basement membrane, cell-cell and cell-ECM adhesions and signaling pathways, collagen IV and fibrillar collagen, different cell types (as well as expanded gene and protein families), spatial regulation and a complex developmental plan, and relegated unicellular stages.
Choanoflagellates:
All animals are posited to have evolved from a flagellated eukaryote. Their closest known living relatives are the choanoflagellates, collared flagellates whose cell morphology is similar to the choanocyte cells of certain sponges.
Molecular studies place animals in a supergroup called the opisthokonts, which also includes the choanoflagellates, fungi, and a few small parasitic protists. The name comes from the posterior location of the flagellum in motile cells, such as most animal spermatozoa, whereas other eukaryotes tend to have anterior flagella.
Hypotheses:
Several different hypotheses for the animals' last common ancestor have been suggested.
Hypotheses:
The placula hypothesis, proposed by Otto Bütschli, holds that the last common ancestor of animals was an amorphous blob with no symmetry or axis. The center of this blob rose slightly above the silt, forming a hollow that aided feeding on the sea floor underneath. As the cavity grew deeper and deeper, the organism came to resemble a thimble, with an inside and an outside. This body shape is found in sponges and cnidaria. This explanation leads to the formation of the bilaterian body plan; the urbilaterian would develop its symmetry when one end of the placula became adapted for forward movement, resulting in left-right symmetry.
The planula hypothesis, proposed by Otto Bütschli, suggests that metazoa are derived from a planula; that is, the larva of certain cnidaria, or the adult form of the placozoans. Under this hypothesis, the larva became sexually mature through paedomorphosis and could reproduce without passing through a sessile phase.
The gastraea hypothesis was proposed by Ernst Haeckel in 1874, shortly after his work on the calcareous sponges. He proposed that this group of sponges is monophyletic with all eumetazoans, including the bilaterians. This suggests that gastrulation and the gastrula stage are universal for eumetazoans. It has been perceived as problematic that gastrulation by invagination is by no means universal among eumetazoans. Only recently has an invagination been confirmed in a Calcarea sponge, albeit too early to form a remaining inner space (archenteron).
The bilaterogastraea hypothesis was developed by Gösta Jägersten as an adaptation of Ernst Haeckel's gastraea hypothesis. He proposed that the bilaterogastraea had a two-stage life cycle, with a pelagic juvenile and a benthic adult stage. He saw the invagination of the original gastrula stage as bilaterally symmetric rather than radially symmetric.
The phagocytella hypothesis was proposed by Élie Metchnikoff.
**Mechanical aptitude**
Mechanical aptitude:
According to Paul Muchinsky in his textbook Psychology Applied to Work, "mechanical aptitude tests require a person to recognize which mechanical principle is suggested by a test item." The underlying concepts measured by these items include sound and heat conduction, velocity, gravity, and force.
A number of tests of mechanical comprehension and mechanical aptitude have been developed and are predictive of performance in manufacturing/production and technical type jobs, for instance.
Background information:
Military information
Aptitude tests have been used for military purposes since World War I to screen recruits for military service. The Army Alpha and Army Beta tests were developed in 1917-1918 so that commanders could measure the ability of personnel. The Army Alpha was a test that assessed verbal ability, numerical ability, ability to follow directions, and general knowledge of specific information. The Army Beta was its non-verbal counterpart, used to evaluate the aptitude of illiterate, unschooled, or non-English-speaking draftees or volunteers.
Background information:
During World War II, the Army Alpha and Beta tests were replaced by the Army General Classification Test (AGCT) and the Navy General Classification Test (NGCT). The AGCT was described as a test of general learning ability and was used by the Army and Marine Corps to assign recruits to military jobs; about 12 million recruits were tested using the AGCT during World War II. The NGCT was used by the Navy to assign sailors to military jobs. Additional classification tests were developed early in World War II to supplement the AGCT and the NGCT. These included:
Specialized aptitude tests related to the technical fields (mechanical, electrical, and later, electronics)
Clerical and administrative tests
Radio code operational tests
Language tests
Driver selection tests
Background information:
Mechanical aptitude and spatial relations
Mechanical aptitude tests are often coupled together with spatial relations tests. Mechanical aptitude is a complex function and is the sum of several different capacities, one of which is the ability to perceive spatial relations. Some research has shown that spatial ability is the most important part of mechanical aptitude for certain jobs. Because of this, spatial relations tests are often given separately, or in part with mechanical aptitude tests.
Background information:
Gender differences
There is no evidence of a general intelligence difference between men and women. In recent years, another mechanical aptitude test was created with the main purpose of giving women a fair chance to perform higher than or at the same level as men. Males still perform at a much higher level than females on average, but the scores of men and women have drawn closer together. Little research has been devoted to why men perform so much higher than women on these tests. However, studies have found that those with lower spatial ability usually do worse on mechanical reasoning, and this might be tied to women's lower performance on mechanical tasks. Studies have also found that pre-natal androgens such as testosterone positively affect performance in both spatial and mechanical abilities.
Uses:
The major uses for mechanical aptitude testing are:
Identify candidates with good spatial perception and mechanical reasoning ability
Assess a candidate's working knowledge of basic mechanical operations and physical laws
Recognize an aptitude for learning mechanical processes and tasks
Predict employee success and appropriately align your workforce
These tests are used mostly for industries involving:
Manufacturing/Production
Energy/Utilities
The major occupations that these tests are relevant to are:
Automotive and Aircraft Mechanics
Engineers
Installation/Maintenance/Repairpersons
Industrial/Technical (Non-Retail) Sales Representatives
Skilled Tradespersons such as Electricians, Welders, and Carpenters
Transportation Trades/Equipment Operators such as Truck Drivers and Heavy Equipment Operators
Types of tests:
US Department of Defense Test of Mechanical Aptitude
The mechanical comprehension subtest of the Armed Services Vocational Aptitude Battery (ASVAB) is one of the most widely used mechanical aptitude tests in the world. The ASVAB consists of ten subject-specific tests that measure knowledge of and ability to perform in different areas, and it provides an indication of the test-taker's level of academic ability. The military asks all recruits to take this exam to help place them in the correct job while enrolled in the military. During World War I, the U.S. Army developed the Army Alpha and Beta Tests, which grouped the draftees and recruits for military service. The Army Alpha test measured recruits' knowledge, verbal and numerical ability, and ability to follow directions using 212 multiple-choice questions.
Types of tests:
However, during World War II, the U.S. Army replaced these tests with a newer and improved one, the Army General Classification Test. The test went through many versions before it was refined enough to be used regularly. The current tests consist of three different versions, two of which are paper-and-pencil and one of which is taken on a computer. The scores from the different versions are linked together, so each score has the same meaning no matter which exam is taken. Some people find that they score higher on the computer version than on the other two versions; one explanation is that the computer-based exam is tailored to the test-taker's demonstrated ability level. These tests are beneficial because they help measure potential and give a good indication of where a person's talents lie. By reviewing the scores, candidates can make informed career decisions; the higher the score, the more job opportunities are available.
Types of tests:
Wiesen Test of Mechanical Aptitude
The Wiesen Test of Mechanical Aptitude is a measure of a person's mechanical aptitude, referred to as the ability to use machinery properly and maintain equipment in good working order. The test takes 30 minutes and has 60 items that can help predict performance in specific occupations involving the operation, maintenance, and servicing of tools, equipment, and machinery. Occupations in these areas require and are facilitated by mechanical aptitude. The Wiesen Test of Mechanical Aptitude was designed as an evolution of earlier mechanical aptitude tests, such as the Bennett Test of Mechanical Comprehension, intended to address their shortcomings. The test was reorganized in order to lessen certain gender and racial biases. The reading level required for the Wiesen Test of Mechanical Aptitude has been estimated to be at a sixth-grade level, and it is also available in a Spanish-language version for Spanish-speaking mechanical workers. Overall, this mechanical aptitude test has been shown to have less adverse impact than previous mechanical aptitude tests.
Types of tests:
There are two scores given to each individual taking the test, a raw score and a percentile ranking. The raw score is a measure of how many questions (out of the 60 total) the individual answered correctly, and the percentile ranking is a relative performance score that indicates how the individual's score rates in relation to the scores of other people who have taken this particular mechanical aptitude test.
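As an illustration of the relationship between the two scores, the following Python sketch converts a raw score into a percentile ranking against a norm sample; the norm scores here are invented, since the real WTMA norms come from the publisher's standardization sample.

```python
# Hypothetical conversion of a raw score (out of 60) to a percentile ranking.
# The norm_sample values are invented for illustration only.
from bisect import bisect_right

def percentile_rank(raw_score: int, norm_scores: list[int]) -> float:
    """Percentage of the norm sample scoring at or below raw_score."""
    ordered = sorted(norm_scores)
    at_or_below = bisect_right(ordered, raw_score)
    return 100.0 * at_or_below / len(ordered)

if __name__ == "__main__":
    norm_sample = [28, 33, 35, 38, 40, 41, 43, 44, 46, 47,
                   48, 50, 51, 52, 54, 55, 56, 57, 58, 59]
    # e.g. 45 questions answered correctly -> percentile ranking in the sample
    print(percentile_rank(45, norm_sample))
```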
Types of tests:
Average test scores for the Wiesen Test of Mechanical Aptitude were determined by giving the test to a sample of 1,817 workers aged 18 and older employed in the industrial occupations mentioned previously. Based on this sample, the Wiesen Test of Mechanical Aptitude was found to have very high reliability (.97) in determining mechanical aptitude in relation to performance in mechanical occupations.
Types of tests:
Bennett Test of Mechanical Comprehension The Bennett Mechanical Comprehension Test (BMCT) is an assessment tool for measuring a candidate's ability to perceive and understand the relationship of physical forces and mechanical elements in practical situations. This aptitude is important in jobs and training programs that require the understanding and application of mechanical principles. The current BMCT Forms, S and T, have been used to predict performance in a variety of vocational and technical training settings and have been popular selection tools for mechanical, technical, engineering, and similar occupations for many years.
Types of tests:
The BMCT consists of 68 items administered with a 30-minute time limit; the items are illustrations of simple, commonly encountered mechanisms drawn from many different settings. It is not considered a speeded test but a timed power test, and cut scores can be set to match different employers' job requirements. The reading level required for this test is at or below a sixth-grade level.
Types of tests:
Recent studies of internal consistency reliability, compared with estimates from previous studies, report values ranging from .84 to .92, indicating that the BMCT is a highly reliable measure. Muchinsky (1993) evaluated the relationships between the BMCT, a general mental ability test, an aptitude classification test focused on mechanics, and supervisory ratings of overall performance for 192 manufacturing employees. Of the three tests, he found the BMCT to be the best single predictor of job performance (r = .38, p < .01). He also found that the incremental gain in predictability from the other tests was not significant.
Types of tests:
Employers today typically use cognitive ability tests, aptitude tests, personality tests, and similar instruments. The BMCT has been used for electrical and mechanical positions, and companies also use it for computer operators and operators in manufacturing. The test can help employers determine which applicants may need further training and instruction; it highlights who has already mastered the trade they are applying for and which applicants still have some "catching up" to do.
Types of tests:
Stenquist Test of Mechanical Aptitude The Stenquist Test consists of a series of problems presented as pictures, in which each respondent tries to determine which picture best goes with another group of pictures. The pictures are mostly common mechanical objects that have no affiliation with a particular trade or profession, nor do the visuals require any prior experience or knowledge. Other variations of the test examine a person's perception of mechanical objects and ability to reason through a mechanical problem. For example, the Stenquist Mechanical Assembling Test Series III, which was created for young males, consisted of physical mechanical parts from which the boys individually constructed items.
**Minimax theorem**
Minimax theorem:
In the mathematical area of game theory, a minimax theorem is a theorem providing conditions that guarantee that the max–min inequality is also an equality. The first theorem in this sense is von Neumann's minimax theorem about zero-sum games published in 1928, which was considered the starting point of game theory. Von Neumann is quoted as saying "As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved".
Minimax theorem:
Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature. Formally, von Neumann's minimax theorem states: Let X ⊂ R^n and Y ⊂ R^m be compact convex sets. If f : X × Y → R is a continuous function that is concave-convex, i.e.
f(⋅,y) : X → R is concave for fixed y, and f(x,⋅) : Y → R is convex for fixed x, then
$$\max_{x \in X} \min_{y \in Y} f(x,y) = \min_{y \in Y} \max_{x \in X} f(x,y).$$
Special case: Bilinear function:
The theorem holds in particular if f(x,y) is a linear function in both of its arguments (and therefore is bilinear), since a linear function is both concave and convex. Thus, if f(x,y) = xᵀAy for a finite matrix A ∈ R^{n×m}, we have:
$$\max_{x \in X} \min_{y \in Y} x^{\mathsf{T}} A y = \min_{y \in Y} \max_{x \in X} x^{\mathsf{T}} A y.$$
The bilinear special case is particularly important for zero-sum games, when the strategy set of each player consists of lotteries over actions (mixed strategies), and payoffs are induced by expected value. In the above formulation, A is the payoff matrix.
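The equality of the max-min and min-max values can be checked numerically for a small bilinear example. The sketch below uses the matching-pennies payoff matrix as an assumed illustrative choice (it is not taken from the text) and evaluates f(x, y) = xᵀAy over a grid of mixed strategies; both orders of optimization give the game value 0.

```python
import numpy as np

# Payoff matrix A of a two-action zero-sum game (matching pennies, an assumed
# illustrative choice): the row player receives x^T A y.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Mixed strategies over two actions are parametrised by p, q in [0, 1]:
# x = (p, 1-p), y = (q, 1-q), so f(p, q) = x^T A y is bilinear in (p, q).
grid = np.linspace(0.0, 1.0, 201)
P, Q = np.meshgrid(grid, grid, indexing="ij")
X = np.stack([P, 1.0 - P], axis=-1)
Y = np.stack([Q, 1.0 - Q], axis=-1)
F = np.einsum("...i,ij,...j->...", X, A, Y)   # F[i, j] = f(grid[i], grid[j])

max_min = F.min(axis=1).max()   # max over x of (min over y)
min_max = F.max(axis=0).min()   # min over y of (max over x)
print(max_min, min_max)         # both equal the game value 0 on this grid
```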
**Allopoiesis**
Allopoiesis:
Allopoiesis is the process whereby a system produces something other than the system itself. One example of this is an assembly line, where the final product (such as a car) is distinct from the machines doing the producing. This is in contrast with autopoiesis. Allopoiesis is a compound word formed from allo- (Greek prefix meaning other or different) and -poiesis (Greek suffix meaning production, creation or formation).
**ISO/IEC 12207**
ISO/IEC 12207:
ISO/IEC/IEEE 12207 Systems and software engineering – Software life cycle processes is an international standard for software lifecycle processes. First introduced in 1995, it aims to be a primary standard that defines all the processes required for developing and maintaining software systems, including the outcomes and/or activities of each process.
Revision history:
ISO/IEC/IEEE 12207:2017 is the newest version, published in November 2017. The IEEE Computer Society joined directly with ISO/IEC JTC 1/SC 7/WG 7 in the editing process for this version. A significant change is that it adopts a process model identical to the ISO/IEC/IEEE 15288:2015 process model (there is one name change, the 15288 "System Requirements Definition" process is renamed to the "System/Software Requirements Definition" process). This harmonization of the two standards led to the removal of separate software development and software reuse processes, bringing the total number of 43 processes from 12207 down to the 30 processes defined in 15288. It also caused changes to the quality management and quality assurance process activities and outcomes. Additionally, the definition of "audit" and related audit activities were updated. Annex I of ISO/IEC/IEEE 12207:2017 provides a process mapping between the 2017 version and the previous version, including the primary process alignments between the two versions; this is intended to enable traceability and ease transition for users of the previous version.
Revision history:
Prior versions include:
- ISO/IEC 12207:2008, published in February 2008
- ISO/IEC 12207:1995/Amd 2:2004, an amended version of the prior, published in November 2004
- ISO/IEC 12207:1995/Amd 1:2002, an amended version of the prior, published in May 2002
- ISO/IEC 12207:1995, the first iteration, published in July 1995; originally divided into five primary processes (acquisition, supply, development, operation, and maintenance), with eight supporting and four organizational life cycle processes
IEEE versions: Prior to the IEEE Computer Society formally joining the editing process (becoming a major stakeholder) for the 2017 release, the IEEE maintained its own versions of ISO/IEC 12207, initially with modifications made jointly with the Electronic Industries Alliance (EIA). With the 2008 update came a "shared strategy of ISO/IEC JTC 1/SC 7 and the IEEE to harmonize their respective collections of standards," resulting in identical standards thereon, but with slightly different names. Those IEEE versions included:
- IEEE Std. 12207-2008: "integrates ISO/IEC 12207:1995 with its two amendments and was coordinated with the parallel revision of ISO/IEC 15288:2002 (System life cycle processes) to align structure, terms, and corresponding organizational and project processes"; superseded by ISO/IEC/IEEE 12207:2017
- IEEE/EIA 12207.2-1997: "provides implementation consideration guidance for the normative clauses of IEEE/EIA 12207.0"; superseded/made obsolete by IEEE Std. 12207-2008, which was then superseded by ISO/IEC/IEEE 12207:2017
- IEEE/EIA 12207.1-1997: "provides guidance for recording life cycle data resulting from the life cycle processes of IEEE/EIA 12207.0"; superseded by ISO/IEC/IEEE 15289:2011, which was then superseded by ISO/IEC/IEEE 15289:2017
- IEEE/EIA 12207.0-1996: "consists of the clarifications, additions, and changes [to ISO/IEC 12207:1995 for industry implementation] accepted by the Institute of Electrical and Electronics Engineers (IEEE) and the Electronic Industries Alliance (EIA) as formulated by a joint project of the two organizations"; superseded by IEEE Std. 12207-2008, which was then superseded by ISO/IEC/IEEE 12207:2017
It is also worth noting that IEEE/EIA 12207 officially replaced MIL-STD-498 (released in December 1994) for the development of DoD software systems on May 27, 1998.
Processes not stages:
The standard establishes a set of processes for managing the lifecycle of software. The standard "does not prescribe a specific software life cycle model, development methodology, method, modelling approach, or technique." Instead, the standard (as well as ISO/IEC/IEEE 15288) distinguishes between a "stage" and a "process" as follows: stage: "period within the life cycle of an entity that relates to the state of its description or realization". A stage is typically a period of time and ends with a "primary decision gate".
Processes not stages:
process: "set of interrelated or interacting activities that transforms inputs into outputs". The same process often recurs within different stages.Stages (aka phases) are not the same as processes, and this standard only defines specific processes - it does not define any particular stages. Instead, the standard acknowledges that software life cycles vary, and may be divided into stages (also called phases) that represent major life cycle periods and give rise to primary decision gates. No particular set of stages is normative, but it does mention two examples: The system life cycle stages from ISO/IEC TS 24748-1 could be used (concept, development, production, utilization, support, and retirement).
Processes not stages:
It also notes that a common set of stages for software is concept exploration, development, sustainment, and retirement. The life cycle processes the standard defines are not aligned to any specific stage in a software life cycle. Indeed, the life cycle processes that involve planning, performance, and evaluation "should be considered for use at every stage". In practice, processes occur whenever they are needed within any stage.
Processes:
ISO/IEC/IEEE 12207:2017 divides software life cycle processes into four main process groups: agreement, organizational project-enabling, technical management, and technical processes. Under each of those four process groups are a variety of sub-categories, including the primary activities of acquisition and supply (agreement); configuration (technical management); and operation, maintenance, and disposal (technical).
Processes:
Agreement processes Here ISO/IEC/IEEE 12207:2017 includes the acquisition and supply processes, which are activities related to establishing an agreement between a supplier and acquirer. Acquisition covers all the activities involved in initiating a project. The acquisition phase can be divided into different activities and deliverables that are completed chronologically. During the supply phase a project management plan is developed. This plan contains information about the project such as different milestones that need to be reached.
Processes:
Organizational project-enabling processes Detailed here are life cycle model management, infrastructure management, portfolio management, human resource management, quality management, and knowledge management processes. These processes help a business or organization enable, control, and support the system life cycle and related projects. Life cycle model management helps ensure acquisition and supply efforts are supported, while infrastructure and portfolio management supports business and project-specific initiatives during the entire system life cycle. The rest ensure the necessary resources and quality controls are in place to support the business' project and system endeavors. If an organization does not have an appropriate set of organizational processes, a project executed by the organization may apply those processes directly to the project instead.
Processes:
Technical management processes ISO/IEC/IEEE 12207:2017 places eight different processes here:
- Project planning
- Project assessment and control
- Decision management
- Risk management
- Configuration management
- Information management
- Measurement
- Quality assurance
These processes deal with planning, assessment, and control of software and other projects during the life cycle, ensuring quality along the way.
Processes:
Technical processes The technical processes of ISO/IEC/IEEE 12207:2017 encompass 14 different processes, some of which came from the old software-specific processes that were phased out of the 2008 version. The full list includes:
- Business or mission analysis
- Stakeholder needs and requirements definition
- System/Software requirements definition
- Architecture definition
- Design definition
- System analysis
- Implementation
- Integration
- Verification
- Transition
- Validation
- Operation
- Maintenance
- Disposal
These processes involve technical activities and personnel (information technology, troubleshooters, software specialists, etc.) before, during, and after operation. The analysis and definition processes early on set the stage for how software and projects are implemented. Additional processes of integration, verification, transition, and validation help ensure quality and readiness. The operation and maintenance phases occur simultaneously, with the operation phase consisting of activities such as assisting users in working with the implemented software product, and the maintenance phase consisting of maintenance tasks to keep the product up and running. The disposal process describes how the system/project will be retired and cleaned up, if necessary.
Conformance:
Clause 4 describes the document's intended use and conformance requirements. It is expected that particular projects "may not need to use all of the processes provided by this document." In practice, conforming to this standard normally involves selecting and declaring the set of suitable processes. This can be done through either "full conformance" or "tailored conformance".
"Full conformance" can be claimed in one of two ways. "Full conformance to tasks" can be claimed if all requirements of the declared processes' activities and tasks are met. "Full conformance to outcomes" can be claimed if all required outcomes of the declared processes are met. The latter permits more variation.
"Tailored conformance" may be declared when specific clauses are selected or modified through the tailoring process also defined in the document.
**Clearing factor**
Clearing factor:
In centrifugation the clearing factor or k factor represents the relative pelleting efficiency of a given centrifuge rotor at maximum rotation speed. It can be used to estimate the time t (in hours) required for sedimentation of a fraction with a known sedimentation coefficient s (in svedbergs):
$$t = \frac{k}{s}$$
The value of the clearing factor depends on the maximum angular velocity ω of a centrifuge (in rad/s) and the minimum and maximum radius r of the rotor:
$$k = \frac{\ln(r_{\max}/r_{\min})}{\omega^{2}} \cdot \frac{10^{13}}{3600}$$
As the rotational speed of a centrifuge is usually specified in RPM, the following formula is often used for convenience:
$$k = \frac{2.53 \cdot 10^{5} \cdot \ln(r_{\max}/r_{\min})}{(\mathrm{RPM}/1000)^{2}}$$
Centrifuge manufacturers usually specify the minimum, maximum and average radius of a rotor, as well as the k factor of a centrifuge-rotor combination.
Clearing factor:
For runs with a rotational speed lower than the maximum rotor speed, the k factor has to be adjusted:
$$k_{\mathrm{adj}} = k \left(\frac{\text{maximum rotor speed}}{\text{actual rotor speed}}\right)^{2}$$
The k factor is related to the sedimentation coefficient S by the formula
$$T = \frac{k}{S}$$
where T is the time to pellet a certain particle, in hours. Since S is a constant for a certain particle, this relationship can be used to interconvert between different rotors.
Clearing factor:
$$\frac{T_{1}}{k_{1}} = \frac{T_{2}}{k_{2}}$$
where T1 is the time to pellet in one rotor and k1 is the k factor of that rotor. Given k2, the k factor of the other rotor, T2, the time to pellet in the other rotor, can be calculated. In this manner, one does not need access to the exact rotor cited in a protocol, as long as the k factor can be calculated. Many online calculators are available to perform the calculations for common rotors.
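A minimal Python sketch of these relations is shown below; the rotor figures (50,000 RPM maximum, r_min = 48 mm, r_max = 108 mm) are hypothetical, and since only the ratio of the radii enters the formula, the unit of length does not matter.

```python
import math

def k_factor(rpm_max, r_min, r_max):
    """k factor of a rotor from its maximum speed (RPM) and min/max radii.

    Only the ratio r_max / r_min enters, so any consistent length unit works.
    """
    return 2.53e5 * math.log(r_max / r_min) / (rpm_max / 1000.0) ** 2

def pelleting_time_hours(k, s_svedberg):
    """t = k / s, with the sedimentation coefficient s in svedbergs."""
    return k / s_svedberg

def adjusted_k(k, rpm_max, rpm_actual):
    """Adjust k for a run below the rotor's maximum speed."""
    return k * (rpm_max / rpm_actual) ** 2

# Hypothetical rotor: 50,000 RPM maximum, r_min = 48 mm, r_max = 108 mm.
k = k_factor(50_000, 48, 108)
print(round(k, 1))                              # k factor of this rotor (~82)
print(round(pelleting_time_hours(k, 80), 2))    # hours to pellet an 80 S particle
print(round(adjusted_k(k, 50_000, 35_000), 1))  # k at a reduced run speed
```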
**Hentriacontylic acid**
Hentriacontylic acid:
Hentriacontylic acid (also hentriacontanoic acid, henatriacontylic acid, or henatriacontanoic acid) is a carboxylic saturated fatty acid.
Sources:
Hentriacontylic acid can be derived from peat wax and montan wax.
The olefin triacontene-1 can be reacted to yield linear n-henatriacontanoic acid.
**Triolein**
Triolein:
Triolein is a symmetrical triglyceride derived from glycerol and three units of the unsaturated fatty acid oleic acid. Most triglycerides are unsymmetrical, being derived from mixtures of fatty acids. Triolein represents 4–30% of olive oil. Triolein is also known as glyceryl trioleate and is one of the two components of Lorenzo's oil. The oxidation of triolein proceeds according to the formula
C57H104O6 + 80 O2 → 57 CO2 + 52 H2O
This gives a respiratory quotient of 57/80, or 0.7125. The heat of combustion is 8,389 kcal (35,100 kJ) per mole or 9.474 kcal (39.64 kJ) per gram. Per mole of oxygen it is 104.9 kcal (439 kJ).
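The quotient and the per-gram and per-mole-of-oxygen figures can be reproduced with a few lines of arithmetic; the molar mass used below is the approximate value for C57H104O6 and is an assumption not quoted in the text.

```python
# Reproduce the respiratory quotient and heat figures quoted above.
co2_produced = 57        # mol CO2 per mol triolein oxidised
o2_consumed = 80         # mol O2 per mol triolein oxidised
heat_per_mol_kcal = 8389
molar_mass_g = 885.4     # approximate molar mass of C57H104O6 (assumed, not from the text)

print(co2_produced / o2_consumed)                    # 0.7125 (respiratory quotient)
print(round(heat_per_mol_kcal / molar_mass_g, 3))    # ~9.47 kcal per gram
print(round(heat_per_mol_kcal / o2_consumed, 1))     # ~104.9 kcal per mole of O2
```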
**Maximum satisfiability problem**
Maximum satisfiability problem:
In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
Example:
The conjunctive normal form formula (x0∨x1)∧(x0∨¬x1)∧(¬x0∨x1)∧(¬x0∨¬x1) is not satisfiable: no matter which truth values are assigned to its two variables, at least one of its four clauses will be false.
However, it is possible to assign truth values in such a way as to make three out of four clauses true; indeed, every truth assignment will do this.
Therefore, if this formula is given as an instance of the MAX-SAT problem, the solution to the problem is the number three.
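This optimum can be confirmed by exhaustive search, since the formula has only two variables. The clause encoding below (pairs of variable index and sign) is just one convenient representation.

```python
from itertools import product

# The example formula in CNF, encoded as lists of (variable index, is_positive).
clauses = [
    [(0, True),  (1, True)],    # (x0 OR x1)
    [(0, True),  (1, False)],   # (x0 OR NOT x1)
    [(0, False), (1, True)],    # (NOT x0 OR x1)
    [(0, False), (1, False)],   # (NOT x0 OR NOT x1)
]

def satisfied(clauses, assignment):
    """Number of clauses made true by a truth assignment (sequence of bools)."""
    return sum(
        any(assignment[var] == sign for var, sign in clause)
        for clause in clauses
    )

# Exhaustive search over all four assignments confirms the optimum is three.
print(max(satisfied(clauses, a) for a in product([False, True], repeat=2)))  # 3
```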
Hardness:
The MAX-SAT problem is OptP-complete, and thus NP-hard, since its solution easily leads to the solution of the boolean satisfiability problem, which is NP-complete.
It is also difficult to find an approximate solution of the problem, that satisfies a number of clauses within a guaranteed approximation ratio of the optimal solution. More precisely, the problem is APX-complete, and thus does not admit a polynomial-time approximation scheme unless P = NP.
Weighted MAX-SAT:
More generally, one can define a weighted version of MAX-SAT as follows: given a conjunctive normal form formula with non-negative weights assigned to each clause, find truth values for its variables that maximize the combined weight of the satisfied clauses. The MAX-SAT problem is an instance of weighted MAX-SAT where all weights are 1.
Approximation algorithms
1/2-approximation: Randomly assigning each variable to be true with probability 1/2 gives an expected 2-approximation. More precisely, if each clause has at least k variables, then this yields a (1 − 2−k)-approximation. This algorithm can be derandomized using the method of conditional probabilities.
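A minimal sketch of this randomized algorithm is given below. Rather than derandomizing with conditional probabilities, it simply keeps the best of several independent random assignments, which is a common practical variant; the clause encoding matches the brute-force example above.

```python
import random

def random_assignment_maxsat(clauses, n_vars, weights=None, trials=1000, seed=0):
    """Random-assignment approximation for (weighted) MAX-SAT.

    Each variable is set to True with probability 1/2; a clause with k distinct
    literals is then satisfied with probability 1 - 2**(-k), so the expected
    satisfied weight is at least half of the total (and hence of the optimum).
    Instead of derandomizing, this sketch keeps the best of several trials.
    """
    rng = random.Random(seed)
    weights = weights if weights is not None else [1] * len(clauses)
    best_weight, best_assignment = -1, None
    for _ in range(trials):
        assignment = [rng.random() < 0.5 for _ in range(n_vars)]
        weight = sum(
            w for clause, w in zip(clauses, weights)
            if any(assignment[var] == sign for var, sign in clause)
        )
        if weight > best_weight:
            best_weight, best_assignment = weight, assignment
    return best_weight, best_assignment

# Same clause encoding as the brute-force example: (variable index, is_positive).
clauses = [[(0, True), (1, True)], [(0, True), (1, False)],
           [(0, False), (1, True)], [(0, False), (1, False)]]
print(random_assignment_maxsat(clauses, n_vars=2)[0])   # 3
```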
Weighted MAX-SAT:
(1-1/e)-approximation: MAX-SAT can also be expressed using an integer linear program (ILP). Fix a conjunctive normal form formula F with variables x1, x2, ..., xn, and let C denote the clauses of F. For each clause c in C, let S+c and S−c denote the sets of variables which are not negated in c, and those that are negated in c, respectively. The variables yx of the ILP will correspond to the variables of the formula F, whereas the variables zc will correspond to the clauses. The ILP maximizes the total weight of satisfied clauses, Σ_{c∈C} wc·zc, subject to zc ≤ Σ_{x∈S+c} yx + Σ_{x∈S−c} (1 − yx) for every clause c, with yx ∈ {0, 1} and zc ∈ {0, 1}. The program can be relaxed to a linear program L by replacing the integrality constraints with 0 ≤ yx ≤ 1 and 0 ≤ zc ≤ 1. The following algorithm using that relaxation is an expected (1-1/e)-approximation: Solve the linear program L and obtain a solution O; set each variable x to be true with probability yx, where yx is the value given in O. This algorithm can also be derandomized using the method of conditional probabilities.
Weighted MAX-SAT:
3/4-approximation: The 1/2-approximation algorithm does better when clauses are large, whereas the (1-1/e)-approximation does better when clauses are small. They can be combined as follows: Run the (derandomized) 1/2-approximation algorithm to get a truth assignment X.
Run the (derandomized) (1-1/e)-approximation to get a truth assignment Y.
Output whichever of X or Y maximizes the weight of the satisfied clauses.This is a deterministic factor (3/4)-approximation.
Example On the formula where ϵ>0 , the (1-1/e)-approximation will set each variable to True with probability 1/2, and so will behave identically to the 1/2-approximation. Assuming that the assignment of x is chosen first during derandomization, the derandomized algorithms will pick a solution with total weight 3+ϵ , whereas the optimal solution has weight 4+ϵ
Solvers:
Many exact solvers for MAX-SAT have been developed during recent years, and many of them were presented in the well-known conference on the boolean satisfiability problem and related problems, the SAT Conference. In 2006 the SAT Conference hosted the first MAX-SAT evaluation comparing performance of practical solvers for MAX-SAT, as it has done in the past for the pseudo-boolean satisfiability problem and the quantified boolean formula problem.
Solvers:
Because of its NP-hardness, large-size MAX-SAT instances cannot in general be solved exactly, and one must often resort to approximation algorithms and heuristics. There are several solvers submitted to the last Max-SAT Evaluations: Branch and Bound based: Clone, MaxSatz (based on Satz), IncMaxSatz, IUT_MaxSatz, WBO, GIDSHSat.
Satisfiability based: SAT4J, QMaxSat.
Unsatisfiability based: msuncore, WPM1, PM2.
Special cases:
MAX-SAT is one of the optimization extensions of the boolean satisfiability problem, which is the problem of determining whether the variables of a given Boolean formula can be assigned in such a way as to make the formula evaluate to TRUE. If the clauses are restricted to have at most 2 literals, as in 2-satisfiability, we get the MAX-2SAT problem. If they are restricted to at most 3 literals per clause, as in 3-satisfiability, we get the MAX-3SAT problem.
Related problems:
There are many problems related to the satisfiability of conjunctive normal form Boolean formulas.
Related problems:
Decision problems: 2SAT, 3SAT. Optimization problems, where the goal is to maximize the number of clauses satisfied: MAX-SAT, and its corresponding weighted version, Weighted MAX-SAT; MAX-kSAT, where each clause has exactly k variables: MAX-2SAT, MAX-3SAT, MAXEkSAT. The partial maximum satisfiability problem (PMAX-SAT) asks for the maximum number of clauses which can be satisfied by any assignment of a given subset of clauses. The rest of the clauses must be satisfied.
Related problems:
The soft satisfiability problem (soft-SAT), given a set of SAT problems, asks for the maximum number of those problems which can be satisfied by any assignment.
The minimum satisfiability problem.
The MAX-SAT problem can be extended to the case where the variables of the constraint satisfaction problem belong to the set of reals. The problem amounts to finding the smallest q such that the q-relaxed intersection of the constraints is not empty.
**Amplification factor**
Amplification factor:
In general an amplification factor is the numerical multiplicative factor by which some quantity is increased.
In structural engineering the amplification factor is the ratio of second order to first order deflections.
In electronics the amplification factor, or gain, is the ratio of the output to the input of an amplifier, sometimes represented by the symbol AF.
In numerical analysis the amplification factor is a number derived using Von Neumann stability analysis to determine stability of a numerical scheme for a partial differential equation.
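As a concrete illustration of the numerical-analysis usage, the sketch below computes the Von Neumann amplification factor of the forward-time centred-space (FTCS) scheme for the 1-D heat equation, a standard textbook case assumed here rather than taken from the text, and checks for which mesh ratios every Fourier mode satisfies |G| ≤ 1.

```python
import numpy as np

def ftcs_amplification_factor(r, theta):
    """Von Neumann amplification factor G for FTCS applied to u_t = a * u_xx.

    For a Fourier mode u_j^n = G**n * exp(i*j*theta) with mesh ratio
    r = a*dt/dx**2, substituting into the scheme gives
    G = 1 - 4*r*sin(theta/2)**2.
    """
    return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

theta = np.linspace(0.0, np.pi, 500)        # angles of the resolvable modes
for r in (0.25, 0.5, 0.75):
    stable = bool(np.all(np.abs(ftcs_amplification_factor(r, theta)) <= 1.0))
    print(f"r = {r}: |G| <= 1 for every mode -> {stable}")
# Stability (|G| <= 1 for all modes) holds exactly when r <= 1/2.
```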
**Non-photo blue**
Non-photo blue:
Non-photo blue (or non-repro blue) is a common tool in the graphic design and print industry, being a particular shade of blue that cannot be detected by graphic arts camera film. This allows layout editors to write notes to the printer on the print flat (the image that is to be photographed and sent to print) which will not show in the final form. It also allows artists to lay down sketch lines without the need to erase after inking.
Change in function:
More recently, with digital scanning and image manipulation, non-photo blue fulfills its function in a different way. The artist can do their sketch and inking in the traditional manner and scan the page. Most scanners will detect the light blue lines. However, shifting to greyscale and increasing the contrast and brightness of the scanned image causes the blue to disappear. Another common approach involves replacing the blue channel with another channel – typically the red channel. The exact processes may differ depending on the scanner, settings and image-editing software, but the concept remains the same.
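A rough Pillow-based sketch of this idea is given below; it keeps only the red channel and then applies a threshold in the spirit of the value discussed in the next section. The file names and the exact threshold are illustrative assumptions, and real scans may need different settings.

```python
from PIL import Image  # Pillow

def drop_non_photo_blue(path_in, path_out, threshold=140):
    """Keep inked lines and drop non-photo blue guide lines from an RGB scan.

    Light blue pencil lines stay bright in the red channel while black ink is
    dark in every channel, so using only the red channel and thresholding it
    approximates the channel-replacement trick described above. The file names
    and the ~140 threshold are illustrative assumptions.
    """
    rgb = Image.open(path_in).convert("RGB")
    red, _green, _blue = rgb.split()             # red channel as a greyscale image
    bw = red.point(lambda v: 255 if v > threshold else 0, mode="1")
    bw.save(path_out)

# drop_non_photo_blue("pencil_and_ink_scan.png", "ink_only.png")
```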
Black ink:
The difference between the non-photo blue and black ink is great enough that digital image manipulation can separate the two easily. If a black-and-white bitmap setting is scanned in, the exposure or threshold number can be set high enough to detect the black ink or dark images being scanned, but low enough to leave out the non-photo blue. On a threshold scale of 0–255, this number would be approximately 140. Only with a considerably low threshold setting will the blue be detected; this however may greatly distort black lines and add a lot of noise and black speckles, making the image potentially almost unrecognizable. Scanning in black-and-white makes it possible for the non-photo blue still to serve its original purpose, as notes and rough sketching lines can be placed throughout the image being scanned and remain undetected by the scan head.
**Higher-order function**
Higher-order function:
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following: takes one or more functions as arguments (i.e. a procedural parameter, which is a parameter of a procedure that is itself a procedure), or returns a function as its result. All other functions are first-order functions. In mathematics higher-order functions are also termed operators or functionals. The differential operator in calculus is a common example, since it maps a function to its derivative, which is also a function. Higher-order functions should not be confused with other uses of the word "functor" throughout mathematics; see Functor (disambiguation).
Higher-order function:
In the untyped lambda calculus, all functions are higher-order; in a typed lambda calculus, from which most functional programming languages are derived, higher-order functions that take one function as argument are values with types of the form (τ1→τ2)→τ3
General examples:
map function, found in many functional programming languages, is one example of a higher-order function. It takes as arguments a function f and a collection of elements, and as the result, returns a new collection with f applied to each element from the collection.
Sorting functions, which take a comparison function as a parameter, allowing the programmer to separate the sorting algorithm from the comparisons of the items being sorted. The C standard function qsort is an example of this.
- filter
- fold
- apply
- Function composition
- Integration
- Callback
- Tree traversal
- Montague grammar, a semantic theory of natural language, uses higher-order functions
Support in programming languages:
Direct support. The examples are not intended to compare and contrast programming languages, but to serve as examples of higher-order function syntax. In the following examples, the higher-order function twice takes a function, and applies the function to some value twice. If twice has to be applied several times for the same f, it preferably should return a function rather than a value. This is in line with the "don't repeat yourself" principle.
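Since the per-language snippets are not reproduced here, a small Python version of twice (with an assumed add-three function as f) may help make the idea concrete:

```python
def twice(f):
    """Higher-order function: return a new function that applies f two times."""
    def result(x):
        return f(f(x))
    return result

def plus_three(x):          # an assumed example function to pass in
    return x + 3

g = twice(plus_three)       # build the composed function once...
print(g(7), g(10))          # ...then reuse it: 13 16
```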
Support in programming languages:
APL Or in a tacit manner: C++ Using std::function in C++11: Or, with generic lambdas provided by C++14: C# Using just delegates: Or equivalently, with static methods: Clojure ColdFusion Markup Language (CFML) Common Lisp D Dart Elixir In Elixir, you can mix module definitions and anonymous functions Alternatively, we can also compose using pure anonymous functions.
Support in programming languages:
Erlang In this Erlang example, the higher-order function or_else/2 takes a list of functions (Fs) and argument (X). It evaluates the function F with the argument X as argument. If the function F returns false then the next function in Fs will be evaluated. If the function F returns {false, Y} then the next function in Fs with argument Y will be evaluated. If the function F returns R the higher-order function or_else/2 will return R. Note that X, Y, and R can be functions. The example returns false.
Support in programming languages:
F# Go Notice a function literal can be defined either with an identifier (twice) or anonymously (assigned to variable plusThree).
Support in programming languages:
Haskell J Explicitly, or tacitly, Java (1.8+) Using just functional interfaces: Or equivalently, with static methods: JavaScript With arrow functions: Or with classical syntax: Julia Kotlin Lua MATLAB OCaml PHP or with all functions in variables: Note that arrow functions implicitly capture any variables that come from the parent scope, whereas anonymous functions require the use keyword to do the same.
Support in programming languages:
Perl or with all functions in variables: Python Python decorator syntax is often used to replace a function with the result of passing that function through a higher-order function. E.g., the function g could be implemented equivalently: R Raku In Raku, all code objects are closures and therefore can reference inner "lexical" variables from an outer scope because the lexical variable is "closed" inside of the function. Raku also supports "pointy block" syntax for lambda expressions which can be assigned to a variable or invoked anonymously.
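To illustrate the remark about Python decorator syntax, the following sketch (with hypothetical function names) shows that decorating a function with a higher-order function is equivalent to reassigning the function to the wrapped result:

```python
def twice(f):
    """Wrap f so that it is applied two times."""
    def wrapper(x):
        return f(f(x))
    return wrapper

@twice
def g(x):                   # decorator syntax ...
    return x + 3

def h(x):
    return x + 3
h = twice(h)                # ... is equivalent to explicit reassignment

print(g(7), h(7))           # 13 13
```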
Support in programming languages:
Ruby Rust Scala Scheme Swift Tcl Tcl uses apply command to apply an anonymous function (since 8.6).
XACML The XACML standard defines higher-order functions in the standard to apply a function to multiple values of attribute bags.
The list of higher-order functions in XACML can be found here.
XQuery Alternatives Function pointers Function pointers in languages such as C, C++, and Pascal allow programmers to pass around references to functions. The following C code computes an approximation of the integral of an arbitrary function: The qsort function from the C standard library uses a function pointer to emulate the behavior of a higher-order function.
Macros Macros can also be used to achieve some of the effects of higher-order functions. However, macros cannot easily avoid the problem of variable capture; they may also result in large amounts of duplicated code, which can be more difficult for a compiler to optimize. Macros are generally not strongly typed, although they may produce strongly typed code.
Support in programming languages:
Dynamic code evaluation In other imperative programming languages, it is possible to achieve some of the same algorithmic results as are obtained via higher-order functions by dynamically executing code (sometimes called Eval or Execute operations) in the scope of evaluation. There can be significant drawbacks to this approach: The argument code to be executed is usually not statically typed; these languages generally rely on dynamic typing to determine the well-formedness and safety of the code to be executed.
Support in programming languages:
The argument is usually provided as a string, the value of which may not be known until run-time. This string must either be compiled during program execution (using just-in-time compilation) or evaluated by interpretation, causing some added overhead at run-time, and usually generating less efficient code.
Support in programming languages:
Objects In object-oriented programming languages that do not support higher-order functions, objects can be an effective substitute. An object's methods act in essence like functions, and a method may accept objects as parameters and produce objects as return values. Objects often carry added run-time overhead compared to pure functions, however, and added boilerplate code for defining and instantiating an object and its method(s). Languages that permit stack-based (versus heap-based) objects or structs can provide more flexibility with this method.
Support in programming languages:
An example of using a simple stack based record in Free Pascal with a function that returns a function: The function a() takes a Txy record as input and returns the integer value of the sum of the record's x and y fields (3 + 7).
Defunctionalization Defunctionalization can be used to implement higher-order functions in languages that lack first-class functions: In this case, different types are used to trigger different functions via function overloading. The overloaded function in this example has the signature auto apply.
**Drawing room play**
Drawing room play:
A drawing room play is a type of play, developed during the Victorian period in the United Kingdom, in which the actions take place in a drawing room or which is designed to be reenacted in the drawing room of a home. The common practice of entertaining a guest in the home led to the creation of this category of plays and while the drawing room itself has fallen out of favour, the play format has continued to provide a source of entertainment. While there is no date or authority directly ascribed to the term drawing room play, there is evidence that the term was derived from the English habit of putting on these short works for guests while entertaining in the drawing room of the home. In French usage the room and the social gathering it contained are equally the salon.
Types:
Beginning with the early forms of drama, the drawing room play has evolved to encompass comedy as well as the dramatic monologue. The play format itself has also grown out of the traditional drawing room performance and back into mainstream theatre and film. Drawing room comedy is also sometimes called the "comedy of manners." Many drawing room plays adopted some form of social criticism in the transition from the Victorian period into the Modern era. The genre is comedy.
Examples:
The Elder Statesman (1959) was the last of T. S. Eliot's drawing room works.
Oscar Wilde's The Importance of Being Earnest is one of the most widely known examples of the drawing room play.
Several of the collected works of Noël Coward are also considered typical of the form.
Paul Rudnick's Regrets Only is a contemporary drawing room comedy released in 2006.
Additional authors include Clement Scott, Walter Besant, Grace Luce Irwin and Arnold Bennett.
Sources:
Nicholas Cooper, Houses of the Gentry 1480-1680 (English Heritage), 1999: "Parlours and withdrawing rooms", pp. 289-93.
**Lamination (geology)**
Lamination (geology):
In geology, lamination (from Latin lāmina 'thin layer') is a small-scale sequence of fine layers (PL: laminae; SG: lamina) that occurs in sedimentary rocks. Laminae are normally smaller and less pronounced than bedding. Lamination is often regarded as planar structures one centimetre or less in thickness, whereas bedding layers are greater than one centimetre. However, structures from several millimetres to many centimetres have been described as laminae. A single sedimentary rock can have both laminae and beds.
Description:
Lamination consists of small differences in the type of sediment that occur throughout the rock. They are caused by cyclic changes in the supply of sediment. These changes can occur in grain size, clay percentage, microfossil content, organic material content or mineral content and often result in pronounced differences in colour between the laminae. Weathering can make the differences even more clear.
Description:
Lamination can occur as parallel structures (parallel lamination) or in different sets that make an angle with each other (cross-lamination). It can occur in many different types of sedimentary rock, from coarse sandstone to fine shales, mudstones or in evaporites.
Because lamination is a small structure, it is easily destroyed by bioturbation (the activity of burrowing organisms) shortly after deposition. Lamination therefore survives better under anoxic circumstances, or when the sedimentation rate was high and the sediment was buried before bioturbation could occur.
Origin:
Lamination develops in fine grained sediment when fine grained particles settle, which can only happen in quiet water. Examples of sedimentary environments are deep marine (at the seafloor), lacustrine (at the bottom of a lake), or mudflats, where the tide creates cyclic differences in sediment supply. Laminae formed in glaciolacustrine environments (in glacier lakes) are a special case. They are called varves. Quaternary varves are used in stratigraphy and palaeoclimatology to reconstruct climate changes during the last few hundred thousand years.
Origin:
Lamination in sandstone is often formed in a coastal environment, where wave energy causes a separation between grains of different sizes.
**European College of Veterinary Anaesthesia and Analgesia**
European College of Veterinary Anaesthesia and Analgesia:
The European College of Veterinary Anesthesia and Analgesia (ECVAA) is one of 25 veterinary specialist colleges recognized by the European Board of Veterinary Specialization, comprising more than 35 distinct specialties.
History:
ECVA was inaugurated on 1 January 1995 and was formally registered in the Netherlands on May 22, 1997 as a non-profit organization. Application for de facto recognition was possible until 31 December 1997. During this period 44 de facto specialists were appointed. The first exam took place in 1997.
In 2003 ECVA acquired full recognition-status by the European Board of Veterinary Specialization (EBVS) and has continued to grow since then. The name of the College was changed to the European College of Veterinary Anesthesia and Analgesia (ECVAA) in 2007.
Membership:
In order to become an EBVS European Specialist in Veterinary Anesthesia and Analgesia, veterinarians need to fulfil the following requirements: (1) have worked as a veterinarian in general practice for two years or have completed a rotating internship covering different specialties for at least one year, (2) have successfully completed a three-year specialized postgraduate training programme in anesthesia, analgesia and intensive care coordinated by an ECVAA Diplomate, (3) have published two peer-reviewed articles in internationally recognized scientific journals and submitted a case log and two case reports, and (4) have successfully passed the written and practical/oral parts of the qualifying examinations. Diplomates of the European Colleges have to pass a re-validation process every five years to retain the European Specialist title. As of March 2020, 227 members had been recognized as ECVAA Diplomates. The current president is Matthew Gurney.
Function:
ECVAA and the Association of Veterinary Anesthetists, of which all Diplomates of ECVAA are members, are the main scientific organizations consulted by EU and national authorities for their expert opinion on matters related to veterinary anesthesia and analgesia, the protection of animals used for experimental and other scientific purposes, and similar topics. ECVAA contributes substantially to animal welfare, not only by alleviating pain and stress in animals, but also by assisting animal welfare associations in various ways. Diplomates have also been consulted to provide specialist opinion during the registration process of veterinary drugs in national medicines agencies and the European Medicines Agency.
**Chalkstick fracture**
Chalkstick fracture:
Chalkstick fractures are fractures, typically of long bones, in which the fracture is transverse to the long axis of the bone, like a broken stick of chalk. A healthy long bone typically breaks like a hard woody stick as the collagen in the matrix adds remarkable flexibility to the mineral and the energy can run up and down the growth rings of bone. The bones of children will even follow a greenstick fracture pattern.
Chalkstick fracture:
Chalkstick fractures are particularly common in Paget's disease of bone, and osteopetrosis. It is also seen in cases of fused spine as in a patient with ankylosing spondylitis.
**Jewellery cleaning**
Jewellery cleaning:
Jewelry cleaning is the practice of removing dirt or tarnish from jewelry to improve its appearance.
Methods and risks:
Some kinds of jewelry can be cleaned at home, while others should be cleaned by a professional. Jewelry made from gold and sterling silver are examples of pieces that can be cleaned at home, while platinum should not be, because of how easily platinum scratches. Jewelry with gemstones such as diamonds or sapphires can also be cleaned at home using mild soap and warm water, but gemstones such as opals and pearls should be cleaned professionally. Another issue is the age of jewelry, as certain materials or construction methods of older jewelry (such as from the Georgian era) may impose restrictions, such as not being able to get wet without damage. Keeping jewelry clean helps to ensure that the gemstones keep a good appearance and prevents dirt and grease (among other things) from loosening them. Dirty jewelry may also cause skin irritation. A professional cleaning may take anywhere from a few minutes to a few days depending on the circumstances. The cleaner first inspects the jewelry to ensure that the gemstones are accounted for and secured. Materials that can tolerate it are often placed in an ultrasonic bath with a cleaning solution and later put through a steam cleaner, while more sensitive materials undergo light brushing in soapy water. Following this, the pieces are rinsed, dried, and inspected. One jeweler may provide customers with sudsy ammonia cleaning kits, while another may sell small ultrasonic cleaners. Some gemstones, such as white topaz, have a coating applied to produce certain colors; ultrasonic cleaning can remove this coating.
Ultrasonic jewellery cleaning:
Ultrasonic cleaners are useful for jewelry cleaning and removing tarnish. They use ultrasound waves and chemicals combined to create bubbles that "cling" to the foreign particles such as dirt, oil, and unknown substances. The high frequency waves are sent out and pull the contaminants off the object. The bubbles collapse after they attach to the contaminants and move to the surface of the chemical solution creating what appears to be a boiling solution.
Cleanliness of gems:
Colored dye or smudges can affect the perceived color of a gem. Historically, some jewelers' diamonds were mis-graded due to smudges on the girdle, or dye on the culet. Current practice is to thoroughly clean a gem before grading its color as well as clarity. How a gem can be safely cleaned depends upon its individual characteristics and therefore its susceptibility to damage.
**Nucleoside-diphosphatase**
Nucleoside-diphosphatase:
In enzymology, a nucleoside-diphosphatase (EC 3.6.1.6) is an enzyme that catalyzes the chemical reaction
a nucleoside diphosphate + H2O ⇌ a nucleotide + phosphate
Thus, the two substrates of this enzyme are nucleoside diphosphate and H2O, whereas its two products are nucleotide and phosphate.
Nucleoside-diphosphatase:
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is nucleoside-diphosphate phosphohydrolase. Other names in common use include thiamine pyrophosphatase, UDPase, inosine diphosphatase, adenosine diphosphatase, IDPase, ADPase, adenosinepyrophosphatase, guanosine diphosphatase, guanosine 5'-diphosphatase, inosine 5'-diphosphatase, uridine diphosphatase, uridine 5'-diphosphatase, nucleoside diphosphate phosphatase, type B nucleoside diphosphatase, GDPase, CDPase, nucleoside 5'-diphosphatase, type L nucleoside diphosphatase, NDPase, and nucleoside diphosphate phosphohydrolase. This enzyme participates in purine metabolism and pyrimidine metabolism.
Structural studies:
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2H2N and 2H2U.
**Panda Cloud Antivirus**
Panda Cloud Antivirus:
Panda Cloud Antivirus is antivirus software developed by Panda Security; a free and a paid version are available. It is cloud-based in the sense that files are scanned on a remote server without using the processing power of the user's machine. The cloud technology is based on Panda's Collective Intelligence. It can run constantly, providing protection against viruses and malicious websites while slowing the system to some extent, or it can perform a system scan.
Features:
According to Panda Security, Panda Cloud Antivirus is able to detect viruses, trojans, worms, spyware, dialers, hacking tools, and other security risks. Panda Cloud Antivirus relies on its "Collective Intelligence" and the cloud for up-to-date information. It normally uses an Internet connection to access up-to-date information; if the Internet cannot be accessed, it will use a local cache of "the most common threats in circulation".
Reviews:
An April 2009 review found Panda Cloud Antivirus 1.0 to be clean, fast, simple, easy to use, and with good detection rates. The same review scored Panda 100.00% in malware detection and 100.0% in malicious URL detection. Its overall score was 100%, a strong protection factor considering it is software. When version 1.0 was released on November 10, 2009, PC Magazine reviewed Panda Cloud Antivirus and gave it an Editor's Choice Award for Best AV. TechRadar's review states: "We think that Panda Cloud Antivirus is best viewed as a defense tool rather than a utility for cleaning up a system that's already riddled with infection."
License:
The free edition of Panda Cloud Antivirus is released under a license that restricts its use to private households, state schools, and non-governmental and non-profit organizations.
**Haloperidol decanoate**
Haloperidol decanoate:
Haloperidol decanoate, sold under the brand name Haldol Decanoate among others, is a typical antipsychotic which is used in the treatment of schizophrenia. It is administered by injection into muscle at a dose of 100 to 200 mg once every 4 weeks or monthly. The dorsogluteal site is recommended. A 3.75-cm (1.5-inch), 21-gauge needle is generally used, but obese individuals may require a 6.5-cm (2.5-inch) needle to ensure that the drug is indeed injected intramuscularly and not subcutaneously. Haloperidol decanoate is provided in the form of 50 or 100 mg/mL oil solution of sesame oil and benzyl alcohol in ampoules or pre-filled syringes. Its elimination half-life after multiple doses is 21 days. The medication is marketed in many countries throughout the world.
**Coloplast**
Coloplast:
Coloplast A/S is a Danish multinational company that develops, manufactures and markets medical devices and services related to ostomy, urology, continence, and wound care.
History:
Coloplast was founded in 1957 by Aage Louis-Hansen. His son Niels Peter Louis-Hansen owns 20% of the company and is deputy chairman. It employs more than 12,000 people and operates around the world, with sales activities in 53 countries and production in Denmark, Hungary, France, China and the US. It has its global headquarters in Humlebæk, Denmark. Its United States operations and North American headquarters are based in Minneapolis, Minnesota. The company manufactures and supplies products to hospitals and institutions, as well as wholesalers and retailers. In selected markets, Coloplast is a direct supplier to consumers.
History:
The company had revenues of DKK 18.554 billion in 2020/2021. Europe constitutes the biggest market with 63% of sales, while 22% of sales are in North America and 15% in the rest of the world. Coloplast is listed on the Danish Stock Exchange, and has for a number of years been represented among the 20 most traded shares in the country. In 2016, Coloplast was listed as the 22nd most innovative company in the world by Forbes. In 2015, Coloplast was listed as the 33rd most innovative company in the world by Forbes Magazine. The company has been featured in The Star Tribune's annual list of top employers in Minnesota. In November 2021, Coloplast announced the acquisition, for 2.5 billion dollars, of Atos Medical, a company specialized in laryngectomy care which belonged to PAI Partners.
Acquisitions:
In 2010, the company acquired Mpathy Medical Devices. In November 2016, Coloplast acquired Comfort Medical for $160 million. Comfort Medical was a direct-to-consumer medical supplier based in Coral Springs, Florida. Prior to the acquisition by Coloplast, Comfort Medical had acquired Medical Direct Club as well as Liberty Medical's urological business area in 2015. In 2020, Coloplast announced the acquisitions of Hope Medical and Rocky Medical Supply, and integrated the organizations into Comfort Medical. In November 2021, Coloplast announced it would acquire Atos Medical and its laryngectomy technology for 2.16 billion euros ($2.49 billion).
Transvaginal surgical mesh lawsuits:
In 2019, the U.S. Food and Drug Administration ordered Coloplast and Boston Scientific to halt the sale and distribution of transvaginal surgical mesh implants for failure to prove their mesh products were safe and effective for the repair of pelvic organ prolapse. Coloplast was one of several companies that had previously settled multimillion-dollar lawsuits for damages caused by transvaginal mesh implants.
**Gold heptafluoride**
Gold heptafluoride:
Gold heptafluoride is a gold(V) compound with the empirical formula AuF7. The synthesis of this compound was first reported in 1986. However, current calculations suggest that the structure of the synthesized molecule was actually a difluorine ligand on a gold pentafluoride core, AuF5·F2. That would make it the first difluorine complex and the first compound containing a fluorine atom with an oxidation state of zero. The gold(V)–difluorine complex is calculated to be 205 kJ/mol more stable than gold(VII) fluoride. The vibrational frequency at 734 cm−1 is the hallmark of the end-on coordinated difluorine molecule.
**Associative browsing**
Associative browsing:
Associative browsing is the professional name for several methods of browsing the web. These methods are usually assisted by some sort of a discovery tool and are considered to be more intuitive.
The tools that serve associative browsing are similarity and relevancy tools. They use different algorithms to analyze the content and the user in order to offer or direct the user to the next link in what is considered their associative chain.
One of the more familiar sites to use the associative browsing method was Pandora.com; the site used a relevancy engine to bring users music they would be likely to enjoy based on the music they are currently listening to. The site's algorithm tried to forecast the next link (in this case a song) in the user's associative chain.
Associative browsing:
Associative browsing can be done without discovery tools, but due to the Internet's overload of information this can only be done at a very shallow level. For example, when you are looking at shoes on the Nike website you can type Puma.com and continue surfing associatively, but your intuitive recall of shoe websites is limited, and at some point you will be forced to perform a search.
Associative browsing:
The difference between searching and associative browsing is in the "flow" of events. While in associative browsing you surf from site to site following your associative chain, in search mode you are querying for information which will help you develop an associative chain.
The implementation of associative browsing can be on different levels and on different subjects. The discovery tools can try to guess the next link in one's associative chain, the next song one will hear, the next site one will visit or the next product one will shop for.
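One minimal way a discovery tool might guess the next link is nearest-neighbour search over item features; the sketch below uses cosine similarity over made-up song features, purely as an assumed illustration of the general idea rather than any particular site's algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Made-up item features (say, tempo, energy, acousticness for songs).
catalogue = {
    "song_a": [0.9, 0.3, 0.1],
    "song_b": [0.2, 0.3, 0.9],
    "song_c": [0.85, 0.75, 0.2],
}
now_playing = [0.88, 0.79, 0.15]

# Suggest the catalogue item most similar to what the user is engaging with now.
next_link = max(catalogue, key=lambda name: cosine(catalogue[name], now_playing))
print(next_link)   # song_c
```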
**Orthonormality**
Orthonormality:
In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal (or perpendicular along a line) unit vectors. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.
Intuitive overview:
The construction of orthogonality of vectors is motivated by a desire to extend the intuitive notion of perpendicular vectors to higher-dimensional spaces. In the Cartesian plane, two vectors are said to be perpendicular if the angle between them is 90° (i.e. if they form a right angle). This definition can be formalized in Cartesian space by defining the dot product and specifying that two vectors in the plane are orthogonal if their dot product is zero.
Intuitive overview:
Similarly, the construction of the norm of a vector is motivated by a desire to extend the intuitive notion of the length of a vector to higher-dimensional spaces. In Cartesian space, the norm of a vector is the square root of the vector dotted with itself. That is, $\|x\| = \sqrt{x \cdot x}$. Many important results in linear algebra deal with collections of two or more orthogonal vectors. But often, it is easier to deal with vectors of unit length. That is, it often simplifies things to only consider vectors whose norm equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be given a special name. Two vectors which are orthogonal and of length 1 are said to be orthonormal.
Intuitive overview:
Simple example What does a pair of orthonormal vectors in 2-D Euclidean space look like? Let u = (x1, y1) and v = (x2, y2).
Consider the restrictions on x1, x2, y1, y2 required to make u and v form an orthonormal pair.
From the orthogonality restriction, u • v = 0.
From the unit length restriction on u, ||u|| = 1.
From the unit length restriction on v, ||v|| = 1. Expanding these terms gives three equations: $x_1 x_2 + y_1 y_2 = 0$, $x_1^2 + y_1^2 = 1$, and $x_2^2 + y_2^2 = 1$. Converting from Cartesian to polar coordinates, and considering Equation (2) and Equation (3), immediately gives the result $r_1 = r_2 = 1$. In other words, requiring the vectors be of unit length restricts the vectors to lie on the unit circle.
After substitution, Equation (1) becomes $\cos\theta_1 \cos\theta_2 + \sin\theta_1 \sin\theta_2 = 0$. Rearranging gives $\tan\theta_1 = -\cot\theta_2$. Using a trigonometric identity to convert the cotangent term gives $\tan\theta_1 = \tan\left(\theta_2 + \tfrac{\pi}{2}\right) \Rightarrow \theta_1 = \theta_2 + \tfrac{\pi}{2}$. It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.
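This conclusion is easy to check numerically, for example with numpy; the angle below is arbitrary:

```python
import numpy as np

theta = 0.7  # any angle works
v = np.array([np.cos(theta), np.sin(theta)])
u = np.array([np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)])

print(np.dot(u, v))                          # ~0: the pair is orthogonal
print(np.linalg.norm(u), np.linalg.norm(v))  # 1.0 1.0: both of unit length
```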
Definition:
Let $V$ be an inner-product space. A set of vectors $\{u_1, u_2, \ldots, u_n, \ldots\} \subset V$ is called orthonormal if and only if $\forall i, j: \langle u_i, u_j \rangle = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta and $\langle \cdot, \cdot \rangle$ is the inner product defined over $V$.
Significance:
Orthonormal sets are not especially significant on their own. However, they display certain features that make them fundamental in exploring the notion of diagonalizability of certain operators on vector spaces.
Properties Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.
Theorem. If $\{e_1, e_2, \ldots, e_n\}$ is an orthonormal list of vectors, then $\|a_1 e_1 + a_2 e_2 + \cdots + a_n e_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2$ for all scalars $a_1, \ldots, a_n$. Theorem. Every orthonormal list of vectors is linearly independent.
Existence Gram-Schmidt theorem. If $\{v_1, v_2, \ldots, v_n\}$ is a linearly independent list of vectors in an inner-product space $V$, then there exists an orthonormal list $\{e_1, e_2, \ldots, e_n\}$ of vectors in $V$ such that span($e_1, e_2, \ldots, e_n$) = span($v_1, v_2, \ldots, v_n$). The proof of the Gram-Schmidt theorem is constructive, and is discussed at length elsewhere. The Gram-Schmidt theorem, together with the axiom of choice, guarantees that every vector space admits an orthonormal basis. This is possibly the most significant use of orthonormality, as this fact permits operators on inner-product spaces to be discussed in terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized by the Spectral Theorem.
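The constructive proof referred to above is the Gram-Schmidt process itself: from each vector, subtract its components along the vectors already processed, then normalize. A minimal numpy sketch, assuming the inputs are linearly independent:

```python
import numpy as np

def gram_schmidt(vectors):
    """Convert a linearly independent list of vectors into an orthonormal
    list with the same span (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for e in basis:
            w = w - np.dot(w, e) * e           # remove the component along e
        basis.append(w / np.linalg.norm(w))    # rescale to unit length
    return basis

e1, e2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
print(np.dot(e1, e2))                          # ~0: orthogonal
print(np.linalg.norm(e1), np.linalg.norm(e2))  # 1.0 1.0: unit length
```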
Examples:
Standard basis The standard basis for the coordinate space $F^n$ is $\{e_1, e_2, \ldots, e_n\}$, where $e_i$ is the vector with a 1 in the $i$-th coordinate and 0 elsewhere. Any two vectors $e_i$, $e_j$ with $i \neq j$ are orthogonal, and all vectors are clearly of unit length. So $\{e_1, e_2, \ldots, e_n\}$ forms an orthonormal basis.
Real-valued functions When referring to real-valued functions, usually the L² inner product is assumed unless otherwise stated. Two functions $\phi(x)$ and $\psi(x)$ are orthonormal over the interval $[a, b]$ if (1) $\langle \phi(x), \psi(x) \rangle = \int_a^b \phi(x)\psi(x)\,dx = 0$, and (2) $\|\phi(x)\|_2 = \|\psi(x)\|_2 = 1$, where $\|f(x)\|_2 = \left(\int_a^b |f(x)|^2\,dx\right)^{1/2}$.
Fourier series The Fourier series is a method of expressing a periodic function in terms of sinusoidal basis functions.
Taking $C[-\pi, \pi]$ to be the space of all real-valued functions continuous on the interval $[-\pi, \pi]$ and taking the inner product to be $\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x)\,dx$, it can be shown that $\left\{ \frac{1}{\sqrt{2\pi}},\ \frac{\sin(x)}{\sqrt{\pi}},\ \frac{\sin(2x)}{\sqrt{\pi}},\ \ldots,\ \frac{\sin(nx)}{\sqrt{\pi}},\ \frac{\cos(x)}{\sqrt{\pi}},\ \frac{\cos(2x)}{\sqrt{\pi}},\ \ldots,\ \frac{\cos(nx)}{\sqrt{\pi}} \right\}, \quad n \in \mathbb{N}$ forms an orthonormal set.
However, this is of little consequence, because C[−π,π] is infinite-dimensional, and a finite set of vectors cannot span it. But, removing the restriction that n be finite makes the set dense in C[−π,π] and therefore an orthonormal basis of C[−π,π].
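These orthonormality claims can be spot-checked numerically under the stated inner product; a sketch using a simple Riemann-sum approximation of the integral over [−π, π]:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200_000)
dx = x[1] - x[0]

def inner(f, g):
    """Approximate <f, g> = integral of f(x) g(x) over [-pi, pi]."""
    return np.sum(f(x) * g(x)) * dx

s2 = lambda t: np.sin(2 * t) / np.sqrt(np.pi)  # sin(2x)/sqrt(pi)
c3 = lambda t: np.cos(3 * t) / np.sqrt(np.pi)  # cos(3x)/sqrt(pi)

print(inner(s2, s2))  # ~1: unit norm
print(inner(c3, c3))  # ~1: unit norm
print(inner(s2, c3))  # ~0: mutually orthogonal
```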
**Ankyrin repeat domain 11**
Ankyrin repeat domain 11:
Ankyrin repeat domain 11 is a protein that in humans is encoded by the ANKRD11 gene.
Function:
This locus encodes an ankyrin repeat domain-containing protein. The encoded protein inhibits ligand-dependent activation of transcription. Mutations in this gene have been associated with KBG syndrome, which is characterized by macrodontia, distinctive craniofacial features, short stature, skeletal anomalies, global developmental delay, seizures and intellectual disability. Alternatively spliced transcript variants have been described. Related pseudogenes exist on chromosomes 2 and X.
**4-Oxo-2-nonenal**
4-Oxo-2-nonenal:
4-Oxo-2-nonenal is a lipid peroxidation product that can structurally alter proteins and induce α-synuclein oligomers.
**Freight terminal**
Freight terminal:
A freight terminal is a processing node for freight. Freight terminals may include airports, seaports, container ports, goods stations, railroad terminals, and trucking terminals. As most freight terminals are located at ports, many cargo containers can be seen around these facilities.
**Vancomycin-resistant Staphylococcus aureus**
Vancomycin-resistant Staphylococcus aureus:
Vancomycin-resistant Staphylococcus aureus (VRSA) are strains of Staphylococcus aureus that have acquired resistance to the glycopeptide antibiotic vancomycin. Resistance in VRSA is conferred by the plasmid-mediated vanA gene and operon. Although VRSA infections are limited, VRSA is still a potential threat to public health because treatment options are limited: the bacterium is resistant to many of the standard drugs used to treat S. aureus infections. Furthermore, resistance can be transferred from one bacterium to another.
Mechanism of acquired resistance:
Vancomycin-resistant Staphylococcus aureus was first reported in the United States in 2002. To date, documented cases of VRSA have acquired resistance through uptake of a vancomycin resistance gene cluster from Enterococcus (i.e. VRE). The acquired mechanism is typically the vanA gene and operon from a plasmid in Enterococcus faecium or Enterococcus faecalis. This mechanism differs from strains of vancomycin-intermediate Staphylococcus aureus (VISA), which appear to develop elevated MICs to vancomycin through sequential mutations resulting in a thicker cell wall and the synthesis of excess amounts of D-ala-D-ala residues.
Diagnosis:
The diagnosis of vancomycin-resistant Staphylococcus aureus (VRSA) is made by susceptibility testing of a single S. aureus isolate against vancomycin. This is accomplished by first assessing the isolate's minimum inhibitory concentration (MIC) using standard laboratory methods, including disc diffusion, gradient strip diffusion, and automated antimicrobial susceptibility testing systems. Once the MIC is known, resistance is determined by comparing the MIC with established breakpoints. Resistant, or "R", designations are assigned based on these agreed-upon values. Breakpoints are published by standards development organizations such as the U.S. Clinical and Laboratory Standards Institute (CLSI), the British Society for Antimicrobial Chemotherapy (BSAC) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST).
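As an illustration of the breakpoint-comparison step only, here is a toy sketch. The cutoffs below follow the commonly cited CLSI values for vancomycin against S. aureus (susceptible ≤ 2 µg/mL, intermediate 4–8 µg/mL, resistant ≥ 16 µg/mL), but this is a reading aid, not clinical software; always verify against the current published standard:

```python
def vancomycin_category(mic_ug_per_ml: float) -> str:
    """Compare a measured MIC with breakpoints to assign an S/I/R category.
    Cutoffs follow commonly cited CLSI values for vancomycin vs. S. aureus;
    verify against the current published standard before any real use."""
    if mic_ug_per_ml <= 2:
        return "S"  # susceptible
    if mic_ug_per_ml <= 8:
        return "I"  # intermediate (the VISA range)
    return "R"      # resistant (VRSA)

for mic in (1.0, 4.0, 32.0):
    print(f"MIC {mic} ug/mL -> {vancomycin_category(mic)}")
```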
Treatment of infection:
For isolates with a vancomycin minimum inhibitory concentration (MIC) > 2 µg/mL, an alternative to vancomycin should be used. The approach is to treat with at least one agent to which VISA/VRSA is known to be susceptible by in vitro testing. The agents that are used include daptomycin, linezolid, telavancin, ceftaroline, and quinupristin–dalfopristin. For people with methicillin-resistant Staphylococcus aureus (MRSA) bacteremia in the setting of vancomycin failure, the IDSA recommends high-dose daptomycin, if the isolate is susceptible, in combination with another agent (e.g. gentamicin, rifampin, linezolid, TMP-SMX, or a beta-lactam antibiotic).
History:
Three classes of vancomycin-resistant S. aureus have emerged that differ in vancomycin susceptibilities: vancomycin-intermediate S. aureus (VISA), heterogeneous vancomycin-intermediate S. aureus (hVISA), and high-level vancomycin-resistant S. aureus (VRSA).
Vancomycin-intermediate S. aureus (VISA) Vancomycin-intermediate S. aureus (VISA) was first identified in Japan in 1996 and has since been found in hospitals elsewhere in Asia, as well as in the United Kingdom, France, the U.S., and Brazil. It is also termed GISA (glycopeptide-intermediate Staphylococcus aureus), indicating resistance to all glycopeptide antibiotics. These bacterial strains present a thickening of the cell wall, which is believed to reduce the ability of vancomycin to diffuse into the division septum of the cell required for effective vancomycin treatment.
Vancomycin-resistant S. aureus (VRSA) High-level vancomycin resistance in S. aureus has been rarely reported. In vitro and in vivo experiments reported in 1992 demonstrated that vancomycin resistance genes from Enterococcus faecalis could be transferred by gene transfer to S. aureus, conferring high-level vancomycin resistance to S. aureus. Until 2002 such a genetic transfer was not reported for wild S. aureus strains. In 2002, a VRSA strain was isolated from a patient in Michigan. The isolate contained the mecA gene for methicillin resistance. Vancomycin MICs of the VRSA isolate were consistent with the VanA phenotype of Enterococcus species, and the presence of the vanA gene was confirmed by polymerase chain reaction. The DNA sequence of the VRSA vanA gene was identical to that of a vancomycin-resistant strain of Enterococcus faecalis recovered from the same catheter tip. The vanA gene was later found to be encoded within a transposon located on a plasmid carried by the VRSA isolate. This transposon, Tn1546, confers vanA-type vancomycin resistance in enterococci. As of 2019, 52 VRSA strains have been identified in the United States, India, Iran, Pakistan, Brazil, and Portugal.
Heterogeneous vancomycin-intermediate S. aureus (hVISA) The definition of hVISA according to Hiramatsu et al. is a strain of Staphylococcus aureus that gives resistance to vancomycin at a frequency of 10⁻⁶ colonies or even higher.
**Paste up**
Paste up:
Paste up is a method of creating or laying out publication pages that predates the use of the now-standard computerized page design desktop publishing programs. Completed, or camera-ready, pages are known as mechanicals or mechanical art. In the offset lithography process, the mechanicals would be photographed with a stat camera to create a same-size film negative for each printing plate required.
Paste up relied on phototypesetting, a process that would generate "cold type" on photographic paper that usually took the form of long columns of text. These printouts were often a single column in a scroll of narrow (3-inch or 4-inch) paper that was as deep as the length of the story.
A professional known variously as a paste-up artist, layout artist, mechanical artist, production artist, or compositor would cut the type into sections and arrange it carefully across multiple columns. For example, a 15-inch strip could be cut into three 5-inch sections. Headlines and other typographic elements were often created and supplied separately by the typesetter, leaving it to the paste up artist to determine their final position on the page.
Adhesive was then applied to the back side of these strips, either by applying rubber cement with a brush or passing them through a machine that would apply a wax adhesive. The adhesives were intentionally made semi-permanent, allowing the strips to be removed and moved around the layout if it needed to be changed. The strips would be adhered to a board, usually a stiff white paper on which the artist would draw the publication's margins and columns, either lightly in pencil or in non-photographic blue ink, a light cyan color that would be ignored by the orthochromatic film used to make printing plates in offset lithography. For magazines, newspapers, and other recurring projects, often the boards would be pre-printed in this color.
Other camera-ready materials like photostats and line art would also be prepared with adhesive and attached to the boards. Continuous-tone photographs would need halftoning, which would require black paper or red film (which photo-imaged the same as black) to be trimmed and placed on the board in place of the image; in the process of creating the negative film for the printing plates, the solid black area would create a clear spot on the negative, called a window. The photographs would be converted to halftone film separately and then positioned in this window to complete the page (although this process was typically performed by a different worker, known as a negative "stripper").
Once a page was complete, the board would be attached to an easel and photographed in order to create a negative, which was then used to make a printing plate.
Paste up was preceded by hot type and cold type technologies. Starting in the 1990s, many newspapers started doing away with paste up, switching to desktop publishing software that allows pages to be designed completely on a computer. Such software includes QuarkXPress, PageMaker and InDesign.
**Distorted thread locknut**
Distorted thread locknut:
A distorted thread locknut is a type of locknut that uses a deformed section of thread to keep the nut from loosening from vibrations or rotation of the clamped item. They are broken down into four types: elliptical offset nuts, centerlock nuts, toplock nuts, and partially depitched (Philidas) nuts.
High temperature use:
Because these nuts are solid metal, they are effective in elevated temperature settings, unlike nyloc nuts. High grade nuts can withstand temperatures up to 1,400 °F (760 °C).
Safety factors:
High strength distorted thread nuts cannot be used with low strength fasteners because the hard nut will act like a die and destroy the threads on the fastener.
Elliptical offset nuts:
Elliptical offset nuts are a catch-all category that encompasses designs known as oval locknuts or non-slotted hex locknuts. The salient feature is that the thread form has been deformed at one end so that the threads are no longer perfectly circular. The deformed end is usually shaped into an ellipse or obround triangle. These are known as one-way nuts, as the nut may be easily started on the male fastener from the bottom, non-deformed portion but is practically impossible to start from the deformed end. As the male fastener reaches the deformed section it stretches the threads of the nut elastically back into a circle. This action greatly increases the friction between the nut and the fastener and creates the locking action. Due to the elastic nature of the deformation, the nuts can be reused indefinitely.
Centerlock nuts:
Center lock nuts are similar to elliptical offset nuts, except that they are distorted in the middle of the nut. This allows the nut to be started from either side.
Toplock nuts:
Toplock nuts are also similar to elliptical offset nuts, except that the whole thread on one end is not distorted. Instead only three small sections of the thread are deformed on one end.
Partially depitched nuts:
Partially depitched nuts are commonly called Philidas nuts, after their originator and current manufacturer, and differ from the above three nut types insofar as a portion of the thread is displaced axially, this being facilitated by one or more slots perpendicular to the axis.
**Quantum cognition**
Quantum cognition:
Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception. The field clearly distinguishes itself from the quantum mind as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm or generalized quantum paradigm or quantum structure paradigm that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory.
Quantum cognition uses the mathematical formalism of quantum theory to inspire and formalize models of cognition that aim to be an advance over models based on probability theory. The field focuses on modeling phenomena in cognitive science that have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory), and modeling preferences in decision theory that seem paradoxical from a traditional rational point of view (e.g., preference reversals). Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.
Main subjects of research:
Quantum-like models of information processing ("quantum-like brain") The brain is definitely a macroscopic physical system operating on scales of time, space and temperature that, from the mainstream view, differ crucially from the corresponding quantum scales. Macroscopic quantum-physical phenomena, such as the Bose-Einstein condensate, are also characterized by special conditions that are definitely not fulfilled in the brain. In particular, the brain's temperature is simply too high to be able to perform real quantum information processing, i.e., to use quantum carriers of information such as photons, ions or electrons. As is commonly accepted in brain science, the basic unit of information processing is a neuron. It is clear that a neuron cannot be in the superposition of two states: firing and non-firing. Hence, it cannot produce superposition playing the basic role in the quantum information processing. Superpositions of mental states are created by complex networks of neurons (classical neural networks). The quantum cognition community states that the activity of such neural networks can produce effects formally described as interference (of probabilities) and entanglement. In principle, however, the community does not try to create concrete models of "quantum-like" representation of information in the brain. The quantum cognition project is based on the observation that various cognitive phenomena are more adequately described by quantum information theory and quantum probability than by the corresponding classical theories (see examples below). Thus, the quantum formalism is considered an operational formalism that describes non-classical processing of probabilistic data. Recent derivations of the complete quantum formalism from simple operational principles for representation of information support the foundations of quantum cognition.
Although, at the moment, we cannot present the concrete neurophysiological mechanisms of creation of the quantum-like representation of information in the brain, we can present general informational considerations supporting the idea that information processing in the brain matches with quantum information and probability. Here, contextuality is the key word (see the monograph of Khrennikov for detailed representation of this viewpoint). Quantum mechanics is fundamentally contextual. Quantum systems do not have objective properties which can be defined independently of measurement context. As has been pointed out by Niels Bohr, the whole experimental arrangement must be taken into account. Contextuality implies existence of incompatible mental variables, violation of the classical law of total probability, and constructive or destructive interference effects. Thus, the quantum cognition approach can be considered an attempt to formalize contextuality of mental processes, by using the mathematical apparatus of quantum mechanics.
Decision making Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results: When subjects believe they won the first round, the majority of subjects choose to play again on the second round.
When subjects believe they lost the first round, the majority of subjects choose to play again on the second round. Given these two separate choices, according to the sure thing principle of rational decision theory, they should also play the second round even if they don't know or think about the outcome of the first round. But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round.
This finding violates the law of total probability, yet it can be explained as a quantum interference effect in a manner similar to the explanation for the results from the double-slit experiment in quantum physics. Similar violations of the sure-thing principle are seen in empirical studies of the Prisoner's Dilemma and have likewise been modeled in terms of quantum interference. The above deviations from classical rational expectations in agents' decisions under uncertainty produce well-known paradoxes in behavioral economics, that is, the Allais, Ellsberg and Machina paradoxes. These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a neither predictable nor controllable way. A decision process is thus an intrinsically contextual process, hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the employment of quantum probability models in decision theory. More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuine quantum aspects, namely, superposition, interference, contextuality and incompatibility. Considering automated decision making, quantum decision trees have different structure compared to classical decision trees. Data can be analyzed to see if a quantum decision tree model fits the data better.
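A minimal sketch of how the quantum account produces such a violation: when the first-round outcome is unknown, amplitudes rather than probabilities are summed over the two possibilities, and the resulting cross term (the interference) can push the total below the classical mixture. All magnitudes and the phase below are purely illustrative, not fitted parameters of any published model:

```python
import numpy as np

# Amplitudes for believing the first round was won or lost (equal weight) ...
a_win, a_lose = np.sqrt(0.5), np.sqrt(0.5)
# ... and for choosing to play again given each belief; |b|^2 is the
# conditional probability of playing, and the relative phase is illustrative.
b_win = np.sqrt(0.69)
b_lose = np.sqrt(0.59) * np.exp(1j * 2.4)

# Outcome known: the classical law of total probability applies.
p_classical = abs(a_win)**2 * abs(b_win)**2 + abs(a_lose)**2 * abs(b_lose)**2

# Outcome unknown: amplitudes superpose first, then are squared.
p_quantum = abs(a_win * b_win + a_lose * b_lose) ** 2

print(p_classical)  # 0.64: a majority plays under either known outcome
print(p_quantum)    # ~0.17 here: interference suppresses playing
```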
Human probability judgments Quantum probability provides a new way to explain human probability judgment errors including the conjunction and disjunction errors. A conjunction error occurs when a person judges the probability of the conjunction of a likely event L and an unlikely event U to be greater than the probability of the unlikely event U alone; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the disjunction of the likely event L and an unlikely event U. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classic Kolmogorov axioms. The quantum model introduces a new fundamental concept to cognition: the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings such as order effects on probability judgments. The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-called liar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.
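The compatibility concept can be sketched directly: represent two yes/no questions as projectors on a small Hilbert space. When the projectors do not commute (the questions are incompatible), the probability of answering yes to both depends on the order in which they are asked, which is the quantum account of order effects. The state and angles here are illustrative only:

```python
import numpy as np

def yes_projector(angle: float) -> np.ndarray:
    """Rank-1 projector onto the 'yes' axis of a question, tilted by `angle`."""
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

state = np.array([1.0, 0.0])     # the person's initial belief state
A = yes_projector(np.pi / 5)     # question A's 'yes' subspace
B = yes_projector(np.pi / 2.5)   # question B's 'yes' subspace; A @ B != B @ A

p_yes_A_then_B = np.linalg.norm(B @ A @ state) ** 2
p_yes_B_then_A = np.linalg.norm(A @ B @ state) ** 2

print(p_yes_A_then_B)  # ~0.43
print(p_yes_B_then_A)  # ~0.06: same questions, different order, different answer
```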
Knowledge representation Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding. Cognitive psychology has researched different approaches for understanding concepts including exemplars, prototypes, and neural networks, and different fundamental problems have been identified, such as the experimentally tested non-classical behavior for the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect, and the overextension and underextension of typicality and membership weight for conjunction and disjunction. By and large, quantum cognition has drawn on quantum theory in three ways to model concepts.
Exploit the contextuality of quantum theory to account for the contextuality of concepts in cognition and language and for the phenomenon of emergent properties when concepts combine; use quantum entanglement to model the semantics of concept combinations in a non-decompositional way, and to account for the emergent properties/associates/inferences in relation to concept combinations; and use quantum superposition to account for the emergence of a new concept when concepts are combined, and as a consequence put forward an explanatory model for the Pet-Fish problem situation, and the overextension and underextension of membership weights for the conjunction and disjunction of concepts. The large amount of data collected by Hampton on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space, where the observed deviations from classical set (fuzzy set) theory, the above-mentioned over- and under-extension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence. Moreover, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.
Semantic analysis and information retrieval The research in (iv) had a deep impact on the understanding and initial development of a formalism to obtain semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum of natural language processing (NLP) and information retrieval (IR) on the web – and databases in general – can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) K. Van Rijsbergen introduced a quantum structure approach to IR, (b) Widdows and Peters utilised a quantum logical negation for a concrete search system, and (c) Aerts and Czachor identified quantum structure in semantic space theories, such as latent semantic analysis. Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory – Hilbert space, quantum logic and probability, non-commutative algebras, etc. – in fields such as IR and NLP, has produced significant results.
Gestalt perception There are apparent similarities between Gestalt perception and quantum theory. In an article discussing the application of Gestalt to chemistry, Anton Amann writes: "Quantum mechanics does not explain Gestalt perception, of course, but in quantum mechanics and Gestalt psychology there exist almost isomorphic conceptions and problems: Similarly as with the Gestalt concept, the shape of a quantum object does not a priori exist but it depends on the interaction of this quantum object with the environment (for example: an observer or a measurement apparatus).
Quantum mechanics and Gestalt perception are organized in a holistic way. Subentities do not necessarily exist in a distinct, individual sense.
In quantum mechanics and Gestalt perception objects have to be created by elimination of holistic correlations with the 'rest of the world'." Each of the points mentioned in the quoted text can be put in a simplified manner (the explanations below correspond respectively to the points above): just as an object in quantum physics does not have any shape until and unless it interacts with its environment, objects in the Gestalt perspective do not hold much meaning individually, as they do when there is a "group" of them or when they are present in an environment.
Both in quantum mechanics and Gestalt perception, the objects must be studied as a whole rather than finding properties of individual components and interpolating the whole object.
In the Gestalt concept, creation of a new object from another previously existing object means that the previously existing object now becomes a subentity of the new object, and hence "elimination of holistic correlations" occurs. Similarly, a new quantum object made from a previously existing object means that the previously existing object loses its holistic view. Amann comments: "The structural similarities between Gestalt perception and quantum mechanics are on a level of a parable, but even parables can teach us something, for example, that quantum mechanics is more than just production of numerical results or that the Gestalt concept is more than just a silly idea, incompatible with atomistic conceptions."
History:
Ideas for applying the formalisms of quantum theory to cognition first appeared in the 1990s by Diederik Aerts and his collaborators Jan Broekaert, Sonja Smets and Liane Gabora, by Harald Atmanspacher, Robert Bordley, and Andrei Khrennikov. A special issue on Quantum Cognition and Decision appeared in the Journal of Mathematical Psychology (2009, vol 53.), which planted a flag for the field. A few books related to quantum cognition have been published including those by Khrennikov (2004, 2010), Ivancivic and Ivancivic (2010), Busemeyer and Bruza (2012), E. Conte (2012). The first Quantum Interaction workshop was held at Stanford in 2007 organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007 AAAI Spring Symposium Series. This was followed by workshops at Oxford in 2008, Saarbrücken in 2009, at the 2010 AAAI Fall Symposium Series held in Washington, D.C., 2011 in Aberdeen, 2012 in Paris, and 2013 in Leicester. Tutorials also were presented annually beginning in 2007 until 2013 at the annual meeting of the Cognitive Science Society. A Special Issue on Quantum models of Cognition appeared in 2013 in the journal Topics in Cognitive Science.
**Quillen–Lichtenbaum conjecture**
Quillen–Lichtenbaum conjecture:
In mathematics, the Quillen–Lichtenbaum conjecture is a conjecture relating étale cohomology to algebraic K-theory introduced by Quillen (1975, p. 175), who was inspired by earlier conjectures of Lichtenbaum (1973). Kahn (1997) and Rognes & Weibel (2000) proved the Quillen–Lichtenbaum conjecture at the prime 2 for some number fields. Voevodsky, using some important results of Markus Rost, has proved the Bloch–Kato conjecture, which implies the Quillen–Lichtenbaum conjecture for all primes.
Statement:
The conjecture in Quillen's original form states that if $A$ is a finitely generated algebra over the integers and $\ell$ is prime, then there is a spectral sequence analogous to the Atiyah–Hirzebruch spectral sequence, starting at $E_2^{p,q} = H^p_{\text{ét}}\left(\operatorname{Spec} A[\ell^{-1}],\ \mathbb{Z}_\ell(-q/2)\right)$ (which is understood to be 0 if $q$ is odd) and abutting to $K_{-p-q}(A) \otimes \mathbb{Z}_\ell$ for $-p - q > 1 + \dim A$.
K-theory of the integers:
Assuming the Quillen–Lichtenbaum conjecture and the Vandiver conjecture, the K-groups of the integers, $K_n(\mathbb{Z})$, are given by:
$K_n(\mathbb{Z}) = \mathbb{Z}$ if $n = 0$, and $0$ if $n \equiv 0 \pmod 8$ with $n > 0$
$K_n(\mathbb{Z}) = \mathbb{Z}/2$ if $n = 1$, and $\mathbb{Z} \oplus \mathbb{Z}/2$ if $n \equiv 1 \pmod 8$ with $n > 1$
$K_n(\mathbb{Z}) = \mathbb{Z}/c_k \oplus \mathbb{Z}/2$ if $n \equiv 2 \pmod 8$
$K_n(\mathbb{Z}) = \mathbb{Z}/8d_k$ if $n \equiv 3 \pmod 8$
$K_n(\mathbb{Z}) = 0$ if $n \equiv 4 \pmod 8$
$K_n(\mathbb{Z}) = \mathbb{Z}$ if $n \equiv 5 \pmod 8$
$K_n(\mathbb{Z}) = \mathbb{Z}/c_k$ if $n \equiv 6 \pmod 8$
$K_n(\mathbb{Z}) = \mathbb{Z}/4d_k$ if $n \equiv 7 \pmod 8$
where $c_k/d_k$ is the Bernoulli number $B_{2k}/k$ in lowest terms and $n$ is $4k - 1$ or $4k - 2$ (Weibel 2005).
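The Bernoulli-number recipe can be made concrete in a few lines. The sketch below (the helper function is ours, not standard notation) uses sympy to put $|B_{2k}/k|$ in lowest terms and read off $c_k$ and $d_k$; it recovers, for example, the classical value $K_7(\mathbb{Z}) = \mathbb{Z}/240$:

```python
from sympy import Rational, bernoulli

def k_group_of_Z(n: int) -> str:
    """Structure of K_n(Z) for n >= 2, following the case table above.
    c_k / d_k is |B_{2k} / k| in lowest terms, where n = 4k - 1 or 4k - 2."""
    r = n % 8
    if r in (0, 4):
        return "0"
    if r == 1:
        return "Z + Z/2"
    if r == 5:
        return "Z"
    k = (n + 1) // 4 if r in (3, 7) else (n + 2) // 4
    b = abs(Rational(bernoulli(2 * k), k))   # |B_{2k}/k| in lowest terms
    ck, dk = b.p, b.q
    return {2: f"Z/{ck} + Z/2", 3: f"Z/{8 * dk}",
            6: f"Z/{ck}", 7: f"Z/{4 * dk}"}[r]

print(k_group_of_Z(3))  # Z/48:  B_2/1 = 1/6,  so 8 * d_k = 48
print(k_group_of_Z(7))  # Z/240: B_4/2 = -1/60, so 4 * d_k = 240
```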
**1,3-beta-D-glucan phosphorylase**
1,3-beta-D-glucan phosphorylase:
In enzymology, a 1,3-beta-D-glucan phosphorylase (EC 2.4.1.97) is an enzyme that catalyzes the chemical reaction (1,3-beta-D-glucosyl)n + phosphate ⇌ (1,3-beta-D-glucosyl)n-1 + alpha-D-glucose 1-phosphate. Thus, the two substrates of this enzyme are (1,3-beta-D-glucosyl)n and phosphate, whereas its two products are (1,3-beta-D-glucosyl)n-1 and alpha-D-glucose 1-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is 1,3-beta-D-glucan:phosphate alpha-D-glucosyltransferase. Other names in common use include laminarin phosphoryltransferase and 1,3-beta-D-glucan:orthophosphate glucosyltransferase.
**Circuit breaker analyzer**
Circuit breaker analyzer:
A circuit breaker analyzer is an instrument that measures the parameters of a circuit breaker. In 1984, Megger patented a digital circuit breaker analyzer controlled by a microprocessor. By 2020, a few companies had developed software to control circuit breaker analyzers from different devices such as computers, tablet computers, and smartphones. The following tests can be carried out on the circuit breaker: mechanical, thermal, dielectric, and short-circuit.
The analyzer operates the circuit breaker under fault current conditions. After the test of the breaker finishes, the system measures currents, voltages and other main parameters of the breaker and, through a set algorithm, diagnoses the condition of the device under different conditions. The final result of the analysis gives information about trip times and the essential synchronism of the poles in the different operations of the circuit breaker.
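As a toy illustration of the timing part of such a diagnosis, the sketch below takes hypothetical contact-opening instants for the three poles of a breaker, derives per-pole trip times and the pole-synchronism spread, and compares them against invented acceptance limits; real limits come from the breaker's specification, not from this sketch:

```python
# Hypothetical instants (ms after the trip command) at which each pole's
# contacts were detected open during a timing test.
pole_open_ms = {"A": 42.1, "B": 42.9, "C": 44.8}

MAX_TRIP_MS = 50.0    # invented acceptance limit for trip time
MAX_SPREAD_MS = 2.0   # invented acceptance limit for pole synchronism

spread = max(pole_open_ms.values()) - min(pole_open_ms.values())

for pole, t in pole_open_ms.items():
    verdict = "ok" if t <= MAX_TRIP_MS else "slow"
    print(f"pole {pole}: trip time {t:.1f} ms ({verdict})")

print(f"pole spread {spread:.1f} ms ->",
      "within limit" if spread <= MAX_SPREAD_MS else "poles out of sync")
```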
Measured values:
These include timing measurements, motion measurements, coil currents, dynamic resistance measurement (DRM), vibration analysis, dynamic capacitance measurement, and static and dynamic resistance measurement.
**Estradiol (medication)**
Estradiol (medication):
Estradiol (E2) is a medication and naturally occurring steroid hormone. It is an estrogen and is used mainly in menopausal hormone therapy and to treat low sex hormone levels in women. It is also used in hormonal birth control for women, in hormone therapy for transgender women, and in the treatment of hormone-sensitive cancers like prostate cancer in men and breast cancer in women, among other uses. Estradiol can be taken by mouth, held and dissolved under the tongue, as a gel or patch that is applied to the skin, through the vagina, by injection into muscle or fat, or through the use of an implant that is placed into fat, among other routes.
Side effects of estradiol in women include breast tenderness, breast enlargement, headache, fluid retention, and nausea, among others. Men and children who are exposed to estradiol may develop symptoms of feminization, such as breast development and a feminine pattern of fat distribution, and men may also experience low testosterone levels and infertility. Estradiol may increase the risk of endometrial hyperplasia and endometrial cancer in women with intact uteruses if it is not taken together with a progestogen such as progesterone. The combination of estradiol with a progestin, though not with oral progesterone, may increase the risk of breast cancer. Estradiol should not be used in women who are pregnant or breastfeeding or who have breast cancer, among other contraindications.
Estradiol is a naturally occurring and bioidentical estrogen, or an agonist of the estrogen receptor, the biological target of estrogens like endogenous estradiol. Due to its estrogenic activity, estradiol has antigonadotropic effects and can inhibit fertility and suppress sex hormone production in both women and men. Estradiol differs from non-bioidentical estrogens like conjugated estrogens and ethinylestradiol in various ways, with implications for tolerability and safety.
Estradiol was discovered in 1933. It became available as a medication that same year, in an injectable form known as estradiol benzoate. Forms that were more useful by mouth, estradiol valerate and micronized estradiol, were introduced in the 1960s and 1970s and increased its popularity by this route. Estradiol is also used in the form of other prodrugs, like estradiol cypionate. Related estrogens such as ethinylestradiol, which is the most common estrogen in birth control pills, and conjugated estrogens (brand name Premarin), which is used in menopausal hormone therapy, are used as medications as well. In 2020, it was the 59th most commonly prescribed medication in the United States, with more than 11 million prescriptions. It is available as a generic medication.
Medical uses:
Hormone therapy Menopause Estradiol is used in menopausal hormone therapy to prevent and treat moderate to severe menopausal symptoms such as hot flashes, vaginal dryness and atrophy, and osteoporosis (bone loss). As unopposed estrogen therapy (using estrogen alone without progesterone) increases the risk of endometrial hyperplasia and endometrial cancer in women with intact uteruses, estradiol is usually combined with a progestogen like progesterone or medroxyprogesterone acetate to prevent the effects of estradiol on the endometrium. This is not necessary if the woman has undergone a hysterectomy (surgical removal of the uterus). A 2017 meta-analysis found that estradiol had no effect on depressive symptoms in peri- and postmenopausal women.
Hypogonadism Estrogen is responsible for the mediation of puberty in females, and in girls with delayed puberty due to hypogonadism (low-functioning gonads, which can result in low sex hormone levels) such as in Turner syndrome, estradiol is used to induce the development of and maintain female secondary sexual characteristics such as breasts, wide hips, and a female fat distribution. It is also used to restore estradiol levels in adult premenopausal women with hypogonadism, for instance those with premature ovarian failure or who have undergone oophorectomy. It is used to treat women with hypogonadism due to hypopituitarism as well.
Transgender women Estradiol is used as part of feminizing hormone therapy for transgender women. The drug is used in higher dosages prior to sex reassignment surgery or orchiectomy to help suppress testosterone levels; after this procedure, estradiol continues to be used at lower dosages to maintain estradiol levels in the normal premenopausal female range.
Birth control Although almost all combined oral contraceptives contain the synthetic estrogen ethinylestradiol, natural estradiol itself is also used in some hormonal contraceptives, including in estradiol-containing oral contraceptives and combined injectable contraceptives. It is formulated in combination with a progestin such as dienogest, nomegestrol acetate, or medroxyprogesterone acetate, and is often used in the form of an ester prodrug like estradiol valerate or estradiol cypionate. Hormonal contraceptives contain a progestin and/or estrogen and prevent ovulation and thus the possibility of pregnancy by suppressing the secretion of the gonadotropins follicle-stimulating hormone (FSH) and luteinizing hormone (LH), the peak of which around the middle of the menstrual cycle causes ovulation to occur.
Hormonal cancer Prostate cancer Estradiol is used as a form of high-dose estrogen therapy to treat prostate cancer and is similarly effective to other therapies such as androgen deprivation therapy with castration and antiandrogens. It is used in the form of long-lasting injected estradiol prodrugs like polyestradiol phosphate, estradiol valerate, and estradiol undecylate, and has also more recently been assessed in the form of transdermal estradiol patches. Estrogens are effective in the treatment of prostate cancer by suppressing testosterone levels into the castrate range, increasing levels of sex hormone-binding globulin (SHBG) and thereby decreasing the fraction of free testosterone, and possibly also via direct cytotoxic effects on prostate cancer cells. Parenteral estradiol is largely free of the cardiovascular side effects of the high oral dosages of synthetic estrogens like diethylstilbestrol and ethinylestradiol that were used previously. In addition, estrogens may have advantages relative to castration in terms of hot flashes, sexual interest and function, osteoporosis, cognitive function, and quality of life. However, side effects such as gynecomastia and feminization in general may be difficult to tolerate and unacceptable for many men.
Breast cancer High-dose estrogen therapy is effective in the treatment of about 35% of cases of breast cancer in women who are at least 5 years menopausal and has comparable effectiveness to antiestrogen therapy with medications like the selective estrogen receptor modulator (SERM) tamoxifen. Although estrogens are rarely used in the treatment of breast cancer today and synthetic estrogens like diethylstilbestrol and ethinylestradiol have most commonly been used, estradiol itself has been used in the treatment of breast cancer as well. It has been used orally at very high doses (30 mg/day) in the treatment of therapy-naive breast cancer and orally at low doses (2 to 6 mg/day) in the treatment of breast cancer in women who were previously treated with and benefited from but acquired resistance to aromatase inhibitors. Polyestradiol phosphate is also used to treat breast cancer.
Other uses Infertility Estrogens may be used in the treatment of infertility in women when there is a need to develop sperm-friendly cervical mucus or an appropriate uterine lining. Estrogen is also commonly used during in vitro fertilization (IVF): it helps maintain the endometrial lining of the uterus and helps prepare for pregnancy. Research shows a higher pregnancy rate if the mother takes estrogen in addition to progesterone. Estradiol is the predominant form of estrogen during the reproductive years and is the form most commonly prescribed.
Lactation suppression Estrogens can be used to suppress and cease lactation and breast engorgement in postpartum women who do not wish to breastfeed. They do this by directly decreasing the sensitivity of the alveoli of the mammary glands to the lactogenic hormone prolactin.
Tall stature Estrogens have been used to limit final height in adolescent girls with tall stature. They do this by inducing epiphyseal closure and by suppressing growth hormone-induced hepatic production, and by extension circulating levels, of insulin-like growth factor-1 (IGF-1), a hormone that causes the body to grow and increase in size. Although ethinylestradiol and conjugated estrogens have mainly been used for this purpose, estradiol can also be employed.
Breast enhancement Estrogens are involved in breast development and estradiol may be used as a form of hormonal breast enhancement to increase the size of the breasts. Both polyestradiol phosphate monotherapy and pseudopregnancy with a combination of high-dosage intramuscular estradiol valerate and hydroxyprogesterone caproate have been assessed for this purpose in clinical studies. However, acute or temporary breast enlargement is a well-known side effect of estrogens, and increases in breast size tend to regress following discontinuation of treatment. Aside from those without prior established breast development, evidence is lacking for a sustained increase in breast size with estrogens.
Schizophrenia Estradiol has been found to be effective in the adjunctive treatment of schizophrenia in women. It has been found to significantly reduce positive, negative, and cognitive symptoms, with particular benefits on positive symptoms. Other estrogens, as well as selective estrogen receptor modulators (SERMs) like raloxifene, have been found to be effective in the adjunctive treatment of schizophrenia in women similarly. Estrogens may be useful in the treatment of schizophrenia in men as well, but their use in this population is limited by feminizing side effects. SERMs, which have few or no feminizing side effects, have been found to be effective in the adjunctive treatment of schizophrenia in men similarly to in women and may be more useful than estrogens in this sex.
Sexual deviance Estradiol has been used at high doses to suppress sex drive in men with sexual deviance such as paraphilias and in sex offenders. It has specifically been used for this indication in the forms of intramuscular injections of estradiol valerate and estradiol undecylate and of subcutaneous pellet implants of estradiol.
Available forms Estradiol is available in a variety of different formulations, including oral, intranasal, transdermal/topical, vaginal, injectable, and implantable preparations. An ester may be attached to one or both of the hydroxyl groups of estradiol to improve its oral bioavailability and/or duration of action with injection. Such modifications give rise to forms such as estradiol acetate (oral and vaginal), estradiol valerate (oral and injectable), estradiol cypionate (injectable), estradiol benzoate (injectable), estradiol undecylate (injectable), and polyestradiol phosphate (injectable; a polymerized ester of estradiol), which are all prodrugs of estradiol.
Contraindications:
Estrogens like estradiol have a number of contraindications. Estradiol should be avoided when there is undiagnosed abnormal vaginal bleeding; known, suspected, or a history of breast cancer; current treatment for metastatic disease; known or suspected estrogen-dependent neoplasia; deep vein thrombosis, pulmonary embolism, or a history of these conditions; active or recent arterial thromboembolic disease such as stroke or myocardial infarction; or liver dysfunction or disease. Estradiol should not be taken by people with a hypersensitivity/allergy to it, or by those who are pregnant or suspected to be pregnant.
Side effects:
Common side effects of estradiol in women include headache, breast pain or tenderness, breast enlargement, irregular vaginal bleeding or spotting, abdominal cramps, bloating, fluid retention, and nausea. Other possible side effects of estrogens may include high blood pressure, high blood sugar, enlargement of uterine fibroids, melasma, vaginal yeast infections, and liver problems. In men, estrogens can cause breast pain or tenderness, gynecomastia (male breast development), feminization, demasculinization, sexual dysfunction (decreased libido and erectile dysfunction), hypogonadism, testicular atrophy, and infertility.
Blood clots Oral estradiol and estradiol valerate, for instance in menopausal hormone therapy or birth control pills, are associated with a significantly higher risk of venous thromboembolism (VTE) than non-use. Higher doses of oral estrogens are associated with higher risks of VTE. In contrast to oral estradiol, transdermal and vaginal estradiol at menopausal replacement dosages are not associated with a higher incidence of VTE. Low doses (e.g., 50 μg/day) and high doses (e.g., 100 μg/day) of transdermal estradiol for menopausal replacement do not differ in terms of VTE risk. The higher risk of VTE with oral estradiol can be attributed to the first pass and a disproportionate effect on liver synthesis of coagulation factors. Even high doses of parenteral estradiol, such as high-dose polyestradiol phosphate, have minimal influence on coagulation factors, in contrast to oral estrogen therapy. However, sufficient doses of parenteral estradiol, for instance very high doses of estradiol valerate by intramuscular injection, can nonetheless activate coagulation, presumably increasing VTE risk.
In addition to the route of administration, the type of estrogen influences VTE risk. Oral conjugated estrogens are associated with a higher risk of VTE than oral estradiol. Estradiol- and estradiol valerate-containing birth control pills are associated with a lower risk of VTE than birth control pills containing ethinylestradiol. The relative risk of VTE is thought to be highest with oral ethinylestradiol, intermediate with oral conjugated estrogens, low with oral estradiol and parenteral estradiol valerate, and very low with transdermal estradiol. Conjugated estrogens and ethinylestradiol are thought to have a higher risk of VTE than estradiol because they are resistant to hepatic metabolism and have a disproportionate influence on liver production of coagulation factors.
The combination of oral or transdermal estradiol and a progestin is associated with a higher risk of VTE than estradiol alone. Dydrogesterone is associated with a lower risk than other progestins such as medroxyprogesterone acetate and norethisterone, while oral progesterone is associated with no increase in risk of VTE. Older age, higher body weight, lower physical activity, and smoking are all associated with a higher risk of VTE with oral estrogen therapy. Risk of VTE with estrogen therapy is highest at the start of treatment, particularly during the first year, and decreases over time.
The absolute risk of VTE with estrogen and/or progestin therapy is small. Women who are not on a birth control pill or hormone therapy have a risk of VTE of about 1 to 5 out of 10,000 women per year. In women taking a birth control pill containing ethinylestradiol and a progestin, the risk of VTE is in the range of 3 to 10 out of 10,000 women per year. Birth control pills containing estradiol valerate and a progestin are associated with about half the risk of VTE of ethinylestradiol/progestin-containing birth control pills. Hormone therapy for transgender women likewise is associated with a lower risk of VTE than birth control pills containing ethinylestradiol and a progestin. The risk of VTE during pregnancy, when estrogens and progesterone increase to very high levels, is 5 to 20 in 10,000 women per year, while the risk is 40 to 65 per 10,000 women per year during the postpartum period.
Long-term effects Uncommon but serious possible side effects of estrogens associated with long-term therapy may include breast cancer, uterine cancer, stroke, heart attack, blood clots, dementia, gallbladder disease, and ovarian cancer. Warning signs of these serious side effects include breast lumps, unusual vaginal bleeding, dizziness, faintness, changes in speech, severe headaches, chest pain, shortness of breath, pain in the legs, changes in vision, and vomiting.
Due to health risks observed with the combination of conjugated estrogens and medroxyprogesterone acetate in the Women's Health Initiative (WHI) studies (see below), the United States Food and Drug Administration (FDA) label for Estrace (estradiol) advises that estrogens should be used in menopausal hormone therapy only for the shortest time possible and at the lowest effective dose. While the FDA states that it is unknown whether these risks generalize to estradiol (alone or in combination with progesterone or a progestin), it advises that in the absence of comparable data, the risks should be assumed to be similar. When used to treat menopausal symptoms, the FDA recommends that discontinuation of estradiol should be attempted every three to six months via a gradual dose taper.
The combination of bioidentical transdermal or vaginal estradiol and oral or vaginal progesterone appears to be a safer form of hormone therapy than the combination of oral conjugated estrogens and medroxyprogesterone acetate and may not share the same health risks. Advantages may include reduced or no risk of venous thromboembolism, cardiovascular disease, and breast cancer, among others.
Overdose:
Estrogens are relatively safe in overdose. During pregnancy, levels of estradiol increase to very high concentrations that are as much as 100-fold normal levels. In late pregnancy, the body produces and secretes approximately 100 mg of estrogens, including estradiol, estrone, and estriol, per day. Doses of estradiol as high as 200 mg per day by intramuscular injection for several weeks have been administered to humans in studies. Serious adverse effects have not been described following acute overdose of large doses of estrogen- and progestogen-containing birth control pills by small children. Symptoms of estrogen overdosage may include nausea, vomiting, bloating, increased weight, water retention, breast tenderness, vaginal discharge, vaginal bleeding, heavy legs, and leg cramps. These side effects can be diminished by reducing the estrogen dosage.
Interactions:
Inducers of cytochrome P450 enzymes like CYP3A4 such as St. John's wort, phenobarbital, carbamazepine and rifampicin decrease the circulating levels of estradiol by accelerating its metabolism, whereas inhibitors of cytochrome P450 enzymes like CYP3A4 such as erythromycin, cimetidine, clarithromycin, ketoconazole, itraconazole, ritonavir and grapefruit juice may slow its metabolism resulting in increased levels of estradiol in the circulation. There is an interaction between estradiol and alcohol such that alcohol considerably increases circulating levels of estradiol during oral estradiol therapy and also increases estradiol levels in normal premenopausal women and with parenteral estradiol therapy. This appears to be due to a decrease in hepatic 17β-hydroxysteroid dehydrogenase type 2 (17β-HSD2) activity and hence estradiol inactivation into estrone due to an alcohol-mediated increase in the ratio of NADH to NAD in the liver. Spironolactone may reduce the bioavailability of high doses of oral estradiol.
Pharmacology:
Pharmacodynamics Estradiol is an estrogen, or an agonist of the estrogen receptors (ERs), the ERα and ERβ. It is also an agonist of membrane estrogen receptors (mERs), including the GPER, Gq-mER, ER-X, and ERx. Estradiol is highly selective for these ERs and mERs, and does not interact importantly with other steroid hormone receptors. It is far more potent as an estrogen than are other bioidentical estrogens like estrone and estriol. Given by subcutaneous injection in mice, estradiol is about 10-fold more potent than estrone and about 100-fold more potent than estriol.
The ERs are expressed widely throughout the body, including in the breasts, uterus, vagina, fat, skin, bone, liver, pituitary gland, hypothalamus, and other parts of the brain. In accordance, estradiol has numerous effects throughout the body. Among other effects, estradiol produces breast development, feminization, changes in the female reproductive system, changes in liver protein synthesis, and changes in brain function. The effects of estradiol can influence health in both positive and negative ways. In addition to the aforementioned effects, estradiol has antigonadotropic effects due to its estrogenic activity, and can inhibit ovulation and suppress gonadal sex hormone production. At sufficiently high dosages, estradiol is a powerful antigonadotropin, capable of suppressing testosterone levels into the castrate/female range in men.
There are differences between estradiol and other estrogens, such as non-bioidentical estrogens like natural conjugated estrogens and synthetic estrogens like ethinylestradiol and diethylstilbestrol, with implications for pharmacodynamics and pharmacokinetics as well as efficacy, tolerability, and safety.
Pharmacokinetics Estradiol can be taken by a variety of different routes of administration. These include oral, buccal, sublingual, intranasal, transdermal (gels, creams, patches), vaginal (tablets, creams, rings, suppositories), rectal, by intramuscular or subcutaneous injection (in oil or aqueous), and as a subcutaneous implant. The pharmacokinetics of estradiol, including its bioavailability, metabolism, biological half-life, and other parameters, differ by route of administration. Likewise, the potency of estradiol, and its local effects in certain tissues, most importantly the liver, differ by route of administration as well. In particular, the oral route is subject to a high first-pass effect, which results in high levels of estradiol and consequent estrogenic effects in the liver and low potency due to first-pass hepatic and intestinal metabolism into metabolites like estrone and estrogen conjugates. Conversely, this is not the case for parenteral (non-oral) routes, which bypass the intestines and liver.
Different estradiol routes and dosages can achieve widely varying circulating estradiol levels. For purposes of comparison with normal physiological circumstances, menstrual cycle circulating levels of estradiol in premenopausal women are 40 pg/mL in the early follicular phase, 250 pg/mL at the middle of the cycle, and 100 pg/mL during the mid-luteal phase. Mean integrated levels of circulating estradiol in premenopausal women across the whole menstrual cycle have been reported to be in the range of 80 and 150 pg/mL, according to some sources.
Chemistry:
Estradiol is a naturally occurring estrane steroid. It is also known as 17β-estradiol (to distinguish it from 17α-estradiol) or as estra-1,3,5(10)-triene-3,17β-diol. It has two hydroxyl groups, one at the C3 position and the other at the C17β position, as well as three double bonds in the A ring (the estra-1,3,5(10)-triene core). Due to its two hydroxyl groups, estradiol is often abbreviated as E2. The structurally related estrogens, estrone (E1), estriol (E3), and estetrol (E4) have one, three, and four hydroxyl groups, respectively.
Chemistry:
Hemihydrate A hemihydrate form of estradiol, estradiol hemihydrate, is widely used medically under a large number of brand names similarly to estradiol. In terms of activity and bioequivalence, estradiol and its hemihydrate are identical, with the only disparities being an approximate 3% difference in potency by weight (due to the presence of water molecules in the hemihydrate form of the substance) and a slower rate of release with certain formulations of the hemihydrate. This is because estradiol hemihydrate is more hydrated than anhydrous estradiol, and for this reason is less soluble in water, which results in slower absorption rates with specific formulations of the drug such as vaginal tablets. Estradiol hemihydrate has also been shown to result in less systemic absorption as a vaginal tablet formulation relative to other topical estradiol formulations such as vaginal creams. Estradiol hemihydrate is used in place of estradiol in some estradiol products.
Chemistry:
Derivatives A variety of C17β and/or C3 ester prodrugs of estradiol, such as estradiol acetate, estradiol benzoate, estradiol cypionate, estradiol dipropionate, estradiol enantate, estradiol undecylate, estradiol valerate, and polyestradiol phosphate (an estradiol ester in polymeric form), among many others, have been developed and introduced for medical use as estrogens. Estramustine phosphate is also an estradiol ester, but with a nitrogen mustard moiety attached, and is used as a cytostatic antineoplastic agent in the treatment of prostate cancer. Cloxestradiol acetate and promestriene are ether prodrugs of estradiol that have been introduced for medical use as estrogens as well, although they are little known and rarely used. Synthetic derivatives of estradiol used as estrogens include ethinylestradiol, ethinylestradiol sulfonate, mestranol, methylestradiol, moxestrol, and quinestrol, all of which are 17α-substituted estradiol derivatives. Synthetic derivatives of estradiol used in scientific research include 8β-VE2 and 16α-LE2.
History:
Estradiol was first discovered and synthesized in 1933 via reduction of estrone. Subsequently, estradiol was isolated for the first time in 1935. It was also originally known as dihydroxyestrin, dihydrofolliculin, or alpha-estradiol. Estradiol was first introduced for medical use, in the form of estradiol benzoate, a short-acting ester prodrug of estradiol administered by intramuscular injection in oil solution, under the brand name Progynon B in 1933. Estradiol itself was also marketed in the 1930s and 1940s in the form of oral tablets and solutions, vaginal suppositories, and topical ointments under a variety of brand names including Dimenformon, Gynoestryl, Ovocyclin, Progynon, and Progynon DH. Marketed vaginal estradiol suppositories were also used rectally. Estradiol dipropionate, another short-acting ester of estradiol in oil solution for use by intramuscular injection, was marketed under the brand name Di-Ovocylin by 1939. In contrast to estrone, estradiol was never marketed in oil solution for intramuscular injection. This is attributable to its short duration of action and the availability of longer-acting estradiol esters like estradiol benzoate and estradiol dipropionate. Delivery of estrogens by nasal spray was studied in 1929, and an estradiol nasal spray for local use was marketed by Schering under the brand name Progynon DH Nasal Spray by 1941. Sublingual administration of estradiol was first described in the early 1940s. Buccal estradiol tablets were marketed by Schering under the brand name Progynon Buccal Tablets by 1949. Estradiol tablets for use by the sublingual route were marketed under the brand name Estradiol Membrettes in 1950, as well as under the brand name Diogynets by 1952. Longer-acting esters of estradiol in oil solution like estradiol valerate (Delestrogen, Progynon Depot), estradiol cypionate (Depo-Estradiol), and estradiol undecylate (Delestrec, Progynon Depot 100), as well as the polymeric estradiol ester polyestradiol phosphate in aqueous solution (Estradurin), were developed and introduced for use by intramuscular injection in the 1950s. Due to poor absorption and low potency relative to other estrogens, oral estradiol was not widely used as late as the early 1970s. Instead, synthetic and animal-derived estrogens like conjugated estrogens, ethinylestradiol, and diethylstilbestrol were typically used by the oral route. In 1966, oral estradiol valerate was introduced by Schering for medical use under the brand name Progynova. Esterification of estradiol, as in estradiol valerate, was believed to improve its metabolic stability with oral administration. Studies in the 1960s showed that micronization of steroids such as spironolactone and norethisterone acetate improved their absorption and oral potency by several-fold. In 1972, micronization of estradiol was studied in women and was likewise found to improve the absorption and potency of estradiol by the oral route. Subsequently, oral micronized estradiol was introduced for medical use in the United States under the brand name Estrace in 1975. However, oral micronized estradiol valerate had been introduced by Schering in 1968.
Oral micronized estradiol and oral estradiol valerate have similar bioavailability and are both now widely used throughout the world. After the introduction of oral micronized estradiol, vaginal and intranasal micronized estradiol were evaluated in 1977 and both subsequently introduced. The first transdermal estradiol gel, a hydroalcoholic gel known as EstroGel, was initially described in 1980 and was introduced in Europe around 1981. Transdermal estradiol gel did not become available in the United States until 2004, when EstroGel was introduced in this country as well. A transdermal estradiol emulsion, Estrasorb, was marketed in the United States in 2003 as well. One of the earliest reports of transdermal estradiol patches was published in 1983. Estraderm, a reservoir patch and the first transdermal estradiol patch to be marketed, was introduced in Europe in 1985 and in the United States in 1986. The first transdermal matrix estradiol patches to be introduced were Climara and Vivelle between 1994 and 1996, and were followed by many others. Ethinylestradiol, a synthetic derivative of estradiol, was synthesized from estradiol by Inhoffen and Hohlweg in 1938 and was introduced for oral use by Schering in the United States under the brand name Estinyl in 1943. Starting in the 1950s, ethinylestradiol became widely used in birth control pills. Estradiol-containing birth control pills were initially studied in the 1970s, with the first report published in 1977. Development of birth control pills containing estradiol was motivated by the thrombotic risks of ethinylestradiol that were uncovered in the 1960s and 1970s. More than 15 attempts were made at development of an estradiol-containing birth control pill starting in the 1970s, but were unsuccessful due to unacceptable menstrual bleeding patterns. Estradiol valerate/cyproterone acetate (Femilar) was introduced for use as a birth control pill in Finland in 1993, but was never marketed elsewhere. Subsequently, estradiol valerate/dienogest (Natazia, Qlaira) was marketed as a birth control pill in 2008 and estradiol/nomegestrol acetate (Naemis, Zoely) was introduced in 2012.
Society and culture:
Generic names Estradiol is the generic name of estradiol in American English and its INN, USAN, USP, BAN, DCF, and JAN. Estradiolo is the name of estradiol in Italian and the DCIT and estradiolum is its name in Latin, whereas its name remains unchanged as estradiol in Spanish, Portuguese, French, and German. Oestradiol was the former BAN of estradiol and its name in British English, but the spelling was eventually changed to estradiol. When estradiol is provided in its hemihydrate form, its INN is estradiol hemihydrate.
Society and culture:
Brand names Estradiol is marketed under a large number of brand names throughout the world. Examples of major brand names under which estradiol has been marketed include Climara, Climen, Dermestril, Divigel, Estrace, Natifa, Estraderm, Estraderm TTS, Estradot, Estreva, Estrimax, Estring, Estrofem, EstroGel, Evorel, Fem7 (or FemSeven), Imvexxy, Menorest, Oesclim, OestroGel, Sandrena, Systen, and Vagifem. Estradiol valerate is marketed mainly as Progynova and Progynon-Depot, while it is marketed as Delestrogen in the U.S. Estradiol cypionate is used mainly in the U.S. and is marketed under the brand name Depo-Estradiol. Estradiol acetate is available as Femtrace, Femring, and Menoring. Estradiol is also widely available in combination with progestogens. It is available in combination with norethisterone acetate under the major brand names Activelle, Cliane, Estalis, Eviana, Evorel Conti, Evorel Sequi, Kliogest, Novofem, Sequidot, and Trisequens; with drospirenone as Angeliq; with dydrogesterone as Femoston and Femoston Conti; and with nomegestrol acetate as Zoely. Estradiol valerate is available with cyproterone acetate as Climen; with dienogest as Climodien and Qlaira; with norgestrel as Cyclo-Progynova and Progyluton; with levonorgestrel as Klimonorm; with medroxyprogesterone acetate as Divina and Indivina; and with norethisterone enantate as Mesigyna and Mesygest. Estradiol cypionate is available with medroxyprogesterone acetate as Cyclo-Provera, Cyclofem, Feminena, Lunelle, and Novafem; estradiol enantate with algestone acetophenide as Deladroxate and Topasel; and estradiol benzoate is marketed with progesterone as Mestrolar and Nomestrol. Estradiol valerate is also widely available in combination with prasterone enantate (DHEA enantate) under the brand name Gynodian Depot.
Society and culture:
Slang names Estradiol has a number of humorous nicknames among the transgender community. Among them are titty skittles, breast mints, femme&m’s, antiboyotics, trans-mission fluid, and the Notorious H.R.T.
Society and culture:
Availability Estradiol and/or its esters are widely available in countries throughout the world in a variety of formulations.
Society and culture:
United States As of November 2016, estradiol is available in the United States in the following forms:
Oral tablets (Femtrace (as estradiol acetate), Gynodiol, Innofem, generics)
Transdermal patches (Alora, Climara, Esclim, Estraderm, FemPatch, Menostar, Minivelle, Vivelle, Vivelle-Dot, generics)
Topical gels (Divigel, Elestrin, EstroGel, Sandrena), emulsions (Estrasorb), and sprays (Evamist)
Vaginal tablets (Vagifem, generics), creams (Estrace), inserts (Imvexxy), and rings (Estring, Femring (as estradiol acetate))
Oil solution for intramuscular injection (Delestrogen (as estradiol valerate), Depo-Estradiol (as estradiol cypionate))
Oral estradiol valerate (Progynova) and other esters of estradiol that are used by injection, like estradiol benzoate, estradiol enantate, and estradiol undecylate, are not marketed in the U.S. Polyestradiol phosphate (Estradurin) was previously marketed in the U.S. but is no longer available. Estradiol is also available in the U.S. in combination with progestogens for the treatment of menopausal symptoms and as a combined hormonal contraceptive:
Oral oil-filled capsules with progesterone (Bijuva)
Oral tablets with drospirenone (Angeliq) and norethisterone acetate (Activella, Amabelz) and as estradiol valerate with dienogest (Natazia)
Transdermal patches with levonorgestrel (Climara Pro) and norethisterone acetate (Combipatch)
Estradiol and estradiol esters are also available in custom preparations from compounding pharmacies in the U.S. This includes subcutaneous pellet implants, which are not available in the United States as FDA-approved pharmaceutical drugs. In addition, topical creams that contain estradiol are generally regulated as cosmetics rather than as drugs in the U.S. and hence are also sold over-the-counter and may be purchased without a prescription on the Internet.
Society and culture:
Other countries Pharmaceutical estradiol subcutaneous pellet implants were formerly available in the United Kingdom and Australia under the brand name Estradiol Implants or Oestradiol Implants (Organon; 25, 50, or 100 mg), but have been discontinued. However, an estradiol subcutaneous implant with the brand name Meno-Implant (Organon; 20 mg) continues to be available in the Netherlands. Previously, for instance in the 1970s and 1980s, other subcutaneous estradiol implant products such as Progynon Pellets (Schering; 25 mg) and Estropel Pellets (25 mg; Bartor Pharmacol) were marketed. It has been said that pharmaceutical estradiol implants have been almost exclusively used in the United Kingdom. Subcutaneous estradiol implants are also available as custom compounded products in some countries.
Society and culture:
Cost Generic oral estradiol tablets are much less expensive than other forms of estradiol such as transdermal gel and patches and vaginal rings.
Research:
A variety of estradiol-containing combined birth control pills were studied but never marketed. In addition, a variety of estradiol-containing combined injectable contraceptives were studied but never marketed. Estradiol has been studied in the treatment of postpartum depression and postpartum psychosis. Estrogens such as estradiol appear to improve sexual desire and function in women. However, the available evidence overall does not support the use of estradiol and other estrogens for improving sexual desire and function in women as of 2016. An exception is the use of estrogens to treat vaginal atrophy. Estrogen therapy has been proposed as a potential treatment for autism but clinical studies are needed.
**SKIM**
SKIM:
Sea surface kinematics multiscale monitoring (SKIM) was one of the two candidate missions for the 9th Earth Explorer mission in the Living Planet Programme of the European Space Agency (ESA). SKIM and the other candidate (FORUM) were pre-selected for a detailed study in November 2017. Only one of the two candidates was to be selected in 2019 for immediate implementation and a possible launch by the year 2025, and FORUM was chosen.
Context:
SKIM builds on the technological heritage of the SWIM instrument now flying on the China-France Oceanography Satellite (CFOSAT), with the important addition of Doppler measurement and a change from Ku- to Ka-band. SKIM also inherits experience with Ka-band altimetry from the Indian-French SARAL/AltiKa mission.
Scientific Objectives:
The mission's science goals are to determine how the dynamics of the ocean total surface current velocity influence the integrated Earth system. More specifically, the mission aims to:
Determine the transport by waves and currents of material at the ocean surface, including plankton, nutrients, heat, carbon, oil, and marine plastic debris
Map and apply currents and their components to generate better estimates of atmosphere–ocean exchanges of heat, gas, momentum and energy, accounting for the full interplay between the surface ocean and the lower atmosphere (including upper ocean mixing)
The satellite will overfly Earth from 83°S to 83°N, covering at least 97 percent of the globe.
**Rydberg matter**
Rydberg matter:
Rydberg matter is an exotic phase of matter formed by Rydberg atoms; it was predicted around 1980 by É. A. Manykin, M. I. Ozhovan and P. P. Poluéktov. It has been formed from various elements like caesium, potassium, hydrogen and nitrogen; studies have been conducted on theoretical possibilities like sodium, beryllium, magnesium and calcium. It has been suggested as a material from which diffuse interstellar bands may arise. Circular Rydberg states, where the outermost electron is found in a planar circular orbit, are the longest-lived, with lifetimes of up to several hours, and are the most common.
Physical:
Rydberg matter usually consists of hexagonal planar clusters; these cannot be very big because of the retardation effect caused by the finite speed of light. Hence, they are not gases or plasmas; nor are they solids or liquids; they are most similar to dusty plasmas with small clusters in a gas. Though Rydberg matter can be studied in the laboratory by laser probing, the largest cluster reported consists of only 91 atoms, but it has been shown to be behind extended clouds in space and in the upper atmospheres of planets. Bonding in Rydberg matter is caused by delocalisation of the high-energy electrons to form an overall lower energy state. The way in which the electrons delocalise is to form standing waves on loops surrounding nuclei, creating quantised angular momentum and the defining characteristics of Rydberg matter. It is a generalised metal by way of the quantum numbers influencing loop size but restricted by the bonding requirement for strong electron correlation; it shows exchange-correlation properties similar to covalent bonding. Electronic excitation and vibrational motion of these bonds can be studied by Raman spectroscopy.
Lifetime:
For reasons still debated by the physics community, partly because of the lack of methods to observe clusters, Rydberg matter is highly stable against disintegration by emission of radiation; the characteristic lifetime of a cluster at n = 12 is 25 seconds. Reasons given include the lack of overlap between excited and ground states, the forbidding of transitions between them, and exchange-correlation effects hindering emission through necessitating tunnelling that causes a long delay in excitation decay. Excitation plays a role in determining lifetimes, with a higher excitation giving a longer lifetime; n = 80 gives a lifetime comparable to the age of the Universe.
Excitations:
In ordinary metals, interatomic distances are nearly constant through a wide range of temperatures and pressures; this is not the case with Rydberg matter, whose distances and thus properties vary greatly with excitations. A key variable in determining these properties is the principal quantum number n that can be any integer greater than 1; the highest values reported for it are around 100. The bond distance d in Rydberg matter is given by d = 2.9n²a0, where a0 is the Bohr radius. The approximate factor 2.9 was first experimentally determined, then measured with rotational spectroscopy in different clusters. Examples of d calculated this way, along with selected values of the density D, are given in the adjacent table.
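As a quick illustration of how strongly the spacing grows with excitation, the short Python sketch below evaluates d = 2.9n²a0 for a few values of n. The factor 2.9 is taken from the text above and the Bohr radius is the standard constant; everything else is illustrative.

```python
# Interatomic distance in Rydberg matter: d = 2.9 * n^2 * a0 (illustrative sketch).
A0 = 5.29177210903e-11  # Bohr radius in metres

def bond_distance(n: int) -> float:
    """Approximate bond distance (in metres) for principal quantum number n."""
    return 2.9 * n ** 2 * A0

for n in (1, 12, 40, 80, 100):
    d = bond_distance(n)
    print(f"n = {n:>3}:  d = {d:.3e} m  ({d * 1e9:.1f} nm)")
```

For n = 12, the excitation level mentioned in the lifetime subsection above, this gives a spacing of roughly 22 nm, orders of magnitude larger than ordinary chemical bond lengths.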
Condensation:
Like bosons that can be condensed to form Bose–Einstein condensates, Rydberg matter can be condensed, but not in the same way as bosons. The reason for this is that Rydberg matter behaves similarly to a gas, meaning that it cannot be condensed without removing the condensation energy; ionisation occurs if this is not done. All solutions to this problem so far involve using an adjacent surface in some way, the best being to evaporate the atoms from which the Rydberg matter is to be formed and to leave the condensation energy on the surface. Using caesium atoms, graphite-covered surfaces and thermionic converters as containment, the work function of the surface has been measured to be 0.5 eV, indicating that the cluster is between the ninth and fourteenth excitation levels.
Disputed:
The research claiming to create ultradense hydrogen Rydberg matter (with interatomic spacing of ~2.3pm: many orders of magnitude less than in most solid matter) is disputed:″The paper of Holmlid and Zeiner-Gundersen makes claims that would be truly revolutionary if they were true. We have shown that they violate some fundamental and very well established laws in a rather direct manner. We believe we share this scepticism with most of the scientific community. The response to the theories of Holmlid is perhaps most clearly reflected in the reference list of their article. Out of 114 references, 36 are not coauthored by Holmlid. And of these 36, none address the claims made by him and his co-authors. This is so much more remarkable because the claims, if correct, would revolutionize quantum science, add at least two new forms of hydrogen, of which one is supposedly the ground state of the element, discover an extremely dense form of matter, discover processes that violate baryon number conservation, in addition to solving humanity’s need for energy practically in perpetuity.″
**Isotropic quadratic form**
Isotropic quadratic form:
In mathematics, a quadratic form over a field F is said to be isotropic if there is a non-zero vector on which the form evaluates to zero. Otherwise the quadratic form is anisotropic. More explicitly, if q is a quadratic form on a vector space V over F, then a non-zero vector v in V is said to be isotropic if q(v) = 0. A quadratic form is isotropic if and only if there exists a non-zero isotropic vector (or null vector) for that quadratic form. Suppose that (V, q) is a quadratic space and W is a subspace of V. Then W is called an isotropic subspace of V if some vector in it is isotropic, a totally isotropic subspace if all vectors in it are isotropic, and an anisotropic subspace if it does not contain any (non-zero) isotropic vectors. The isotropy index of a quadratic space is the maximum of the dimensions of the totally isotropic subspaces. A quadratic form q on a finite-dimensional real vector space V is anisotropic if and only if q is a definite form: either q is positive definite, i.e. q(v) > 0 for all non-zero v in V; or q is negative definite, i.e. q(v) < 0 for all non-zero v in V. More generally, if the quadratic form is non-degenerate and has the signature (a, b), then its isotropy index is the minimum of a and b. An important example of an isotropic form over the reals occurs in pseudo-Euclidean space.
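As a small worked example of these definitions (an illustration, using a form that also appears in the hyperbolic-plane section below):

```latex
\[
r(x,y) = x^{2} - y^{2} \quad\text{over } F = \mathbb{R}:
\qquad r(1,1) = 1 - 1 = 0 ,
\]
so the non-zero vector $(1,1)$ is isotropic and $r$ is an isotropic form.
By contrast, $s(x,y) = x^{2} + y^{2}$ is positive definite over $\mathbb{R}$,
so $s(v) = 0$ only for $v = 0$ and $s$ is anisotropic, in line with the
definite-form criterion stated above.
```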
Hyperbolic plane:
Let F be a field of characteristic not 2 and V = F2. If we consider the general element (x, y) of V, then the quadratic forms q = xy and r = x2 − y2 are equivalent since there is a linear transformation on V that makes q look like r, and vice versa. Evidently, (V, q) and (V, r) are isotropic. This example is called the hyperbolic plane in the theory of quadratic forms. A common instance has F = real numbers in which case {x ∈ V : q(x) = nonzero constant} and {x ∈ V : r(x) = nonzero constant} are hyperbolas. In particular, {x ∈ V : r(x) = 1} is the unit hyperbola. The notation ⟨1⟩ ⊕ ⟨−1⟩ has been used by Milnor and Husemoller: 9 for the hyperbolic plane as the signs of the terms of the bivariate polynomial r are exhibited.
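The claimed equivalence of q = xy and r = x² − y² can be checked directly (a short verification, assuming as above that the characteristic of F is not 2, so that dividing by 2 is allowed):

```latex
\[
r\!\left(\frac{x+y}{2},\,\frac{x-y}{2}\right)
  = \frac{(x+y)^{2}-(x-y)^{2}}{4}
  = \frac{4xy}{4}
  = xy
  = q(x,y).
\]
Since $(x,y) \mapsto \left(\tfrac{x+y}{2}, \tfrac{x-y}{2}\right)$ is an invertible linear
transformation of $V = F^{2}$, the two forms are equivalent.
```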
Hyperbolic plane:
The affine hyperbolic plane was described by Emil Artin as a quadratic space with basis {M, N} satisfying M² = N² = 0, NM = 1, where the products represent the quadratic form. Through the polarization identity the quadratic form is related to a symmetric bilinear form B(u, v) = 1/4(q(u + v) − q(u − v)).
Two vectors u and v are orthogonal when B(u, v) = 0. In the case of the hyperbolic plane, such u and v are hyperbolic-orthogonal.
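For the hyperbolic plane with r = x² − y², the polarization identity above yields the expected bilinear form; the following short computation is purely illustrative:

```latex
\[
B\bigl((x_1,y_1),(x_2,y_2)\bigr)
  = \tfrac{1}{4}\bigl[r(x_1+x_2,\,y_1+y_2) - r(x_1-x_2,\,y_1-y_2)\bigr]
  = \tfrac{1}{4}\bigl[4x_1x_2 - 4y_1y_2\bigr]
  = x_1x_2 - y_1y_2 .
\]
In particular $B(v,v) = r(v)$, so an isotropic vector such as $(1,1)$ is orthogonal to itself.
```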
Split quadratic space:
A space with quadratic form is split (or metabolic) if there is a subspace which is equal to its own orthogonal complement; equivalently, the index of isotropy is equal to half the dimension.: 57 The hyperbolic plane is an example, and over a field of characteristic not equal to 2, every split space is a direct sum of hyperbolic planes.: 12, 3
Relation with classification of quadratic forms:
From the point of view of classification of quadratic forms, anisotropic spaces are the basic building blocks for quadratic spaces of arbitrary dimensions. For a general field F, classification of anisotropic quadratic forms is a nontrivial problem. By contrast, the isotropic forms are usually much easier to handle. By Witt's decomposition theorem, every inner product space over a field is an orthogonal direct sum of a split space and an anisotropic space.: 56
Field theory:
If F is an algebraically closed field, for example, the field of complex numbers, and (V, q) is a quadratic space of dimension at least two, then it is isotropic.
If F is a finite field and (V, q) is a quadratic space of dimension at least three, then it is isotropic (this is a consequence of the Chevalley–Warning theorem).
If F is the field Qp of p-adic numbers and (V, q) is a quadratic space of dimension at least five, then it is isotropic.
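A minimal illustration of the first of these statements: a form that is anisotropic over the reals becomes isotropic once the scalars are extended to the algebraically closed field of complex numbers.

```latex
\[
q(x,y) = x^{2} + y^{2}, \qquad q(1, i) = 1 + i^{2} = 1 - 1 = 0 ,
\]
so the non-zero vector $(1, i) \in \mathbb{C}^{2}$ is isotropic over $\mathbb{C}$,
even though the same form is anisotropic over $\mathbb{R}$.
```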
**Integrated Encryption Scheme**
Integrated Encryption Scheme:
Integrated Encryption Scheme (IES) is a hybrid encryption scheme which provides semantic security against an adversary who is able to use chosen-plaintext or chosen-ciphertext attacks. The security of the scheme is based on the computational Diffie–Hellman problem.
Two variants of IES are specified: Discrete Logarithm Integrated Encryption Scheme (DLIES) and Elliptic Curve Integrated Encryption Scheme (ECIES), which is also known as the Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme. These two variants are identical up to the change of an underlying group.
Informal description of DLIES:
As a brief and informal overview of how IES works, the Discrete Logarithm Integrated Encryption Scheme (DLIES) is described below, with the focus on intuition rather than precise technical details.
Informal description of DLIES:
Alice learns Bob's public key g^x through a public key infrastructure or some other distribution method. Bob knows his own private key x.
Alice generates a fresh, ephemeral value y, and its associated public value g^y.
Alice then computes the symmetric key k using this information and a key derivation function (KDF): k = KDF(g^(xy)).
Alice computes her ciphertext c from her actual message m by symmetric encryption of m with the key k, using an authenticated encryption scheme: c = E(k; m).
Alice transmits (in a single message) both the public ephemeral value g^y and the ciphertext c.
Bob, knowing x and g^y, can now compute k = KDF(g^(xy)) and decrypt m from c.
Note that the scheme does not provide Bob with any assurance as to who really sent the message: this scheme does nothing to stop anyone from pretending to be Alice.
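The exchange above can be sketched in a few lines of Python. This is a toy illustration only, not a secure or standards-compliant implementation: the group parameters are far too small for real use, a bare SHA-256 hash stands in for the KDF, an XOR keystream stands in for the symmetric cipher, and an HMAC tag stands in for authenticated encryption; all parameter choices and helper names here are assumptions made for the demonstration.

```python
# Toy DLIES sketch (illustration only -- NOT secure: toy parameters, ad-hoc KDF and cipher).
import hashlib
import hmac
import secrets

p = 2**127 - 1   # a Mersenne prime used here only as a toy modulus
g = 3            # toy generator

def kdf(shared: int) -> bytes:
    """Bare SHA-256 of the shared group element, standing in for a proper KDF."""
    return hashlib.sha256(str(shared).encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric scheme: XOR with a SHA-256-expanded keystream."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Bob's long-term key pair: private x, public g^x mod p
x = secrets.randbelow(p - 2) + 1
gx = pow(g, x, p)

# --- Alice encrypts m for Bob ---
m = b"attack at dawn"
y = secrets.randbelow(p - 2) + 1                 # fresh ephemeral value
gy = pow(g, y, p)                                # its public value, sent with the ciphertext
k = kdf(pow(gx, y, p))                           # k = KDF(g^(xy))
c = xor_cipher(k, m)                             # c = E(k; m)
tag = hmac.new(k, c, hashlib.sha256).digest()    # integrity tag (stands in for authenticated encryption)

# --- Bob decrypts (g^y, c, tag) ---
k2 = kdf(pow(gy, x, p))                          # same key, derived from g^y and his private x
assert hmac.compare_digest(tag, hmac.new(k2, c, hashlib.sha256).digest())
print(xor_cipher(k2, c))                         # b'attack at dawn'
```

The structural point is that only g^y travels alongside the ciphertext, and both parties arrive at the same key k from the shared value g^(xy).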
Formal description of ECIES:
Required information To send an encrypted message to Bob using ECIES, Alice needs the following information:
The cryptography suite to be used, including a key derivation function (e.g., ANSI-X9.63-KDF with SHA-1 option), a message authentication code (e.g., HMAC-SHA-1-160 with 160-bit keys or HMAC-SHA-1-80 with 80-bit keys), and a symmetric encryption scheme (e.g., TDEA in CBC mode or the XOR encryption scheme), denoted E
The elliptic curve domain parameters: (p, a, b, G, n, h) for a curve over a prime field, or (m, f(x), a, b, G, n, h) for a curve over a binary field
Formal description of ECIES:
Bob's public key K_B, which Bob generates as follows: K_B = k_B G, where k_B ∈ [1, n−1] is the private key he chooses at random
Some optional shared information: S_1 and S_2
O, which denotes the point at infinity
Formal description of ECIES:
Encryption To encrypt a message m, Alice does the following:
generates a random number r ∈ [1, n−1] and calculates R = rG
derives a shared secret: S = P_x, where P = (P_x, P_y) = rK_B (and P ≠ O)
uses a KDF to derive symmetric encryption and MAC keys: k_E ‖ k_M = KDF(S ‖ S_1)
encrypts the message: c = E(k_E; m)
computes the tag of the encrypted message and S_2: d = MAC(k_M; c ‖ S_2)
outputs R ‖ c ‖ d
Decryption To decrypt the ciphertext R ‖ c ‖ d, Bob does the following:
derives the shared secret: S = P_x, where P = (P_x, P_y) = k_B R (it is the same as the one Alice derived because P = k_B R = k_B rG = r k_B G = rK_B), or outputs failed if P = O
derives the keys the same way as Alice did: k_E ‖ k_M = KDF(S ‖ S_1)
uses the MAC to check the tag and outputs failed if d ≠ MAC(k_M; c ‖ S_2)
uses the symmetric encryption scheme to decrypt the message: m = E⁻¹(k_E; c)
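The elliptic-curve variant can be sketched the same way using the third-party `cryptography` package (assumed to be installed; it is not part of the Python standard library). ECDH over the curve plays the role of computing S from P = rK_B, HKDF stands in for the KDF, and, as in the DLIES sketch above, an XOR keystream and HMAC stand in for the cipher suite. This is an illustration of the message flow, not a standards-compliant ECIES implementation.

```python
# Toy ECIES-style sketch (illustration only -- not a standards-compliant implementation).
import hashlib
import hmac
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric scheme E: XOR with a SHA-256-expanded keystream."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def derive_keys(shared: bytes) -> tuple:
    """KDF step: derive an encryption key kE and a MAC key kM from the shared secret."""
    okm = HKDF(algorithm=hashes.SHA256(), length=64, salt=None, info=b"ecies-demo").derive(shared)
    return okm[:32], okm[32:]

curve = ec.SECP256R1()
bob_private = ec.generate_private_key(curve)      # kB, chosen at random
KB = bob_private.public_key()                     # KB = kB * G

# --- Alice encrypts m for Bob ---
m = b"attack at dawn"
ephemeral = ec.generate_private_key(curve)        # r
R = ephemeral.public_key()                        # R = r * G, sent along with the ciphertext
S = ephemeral.exchange(ec.ECDH(), KB)             # shared secret (x-coordinate of r * KB)
kE, kM = derive_keys(S)
c = xor_cipher(kE, m)                             # c = E(kE; m)
d = hmac.new(kM, c, hashlib.sha256).digest()      # d = MAC(kM; c)

# --- Bob decrypts (R, c, d) ---
S2 = bob_private.exchange(ec.ECDH(), R)           # same secret from kB * R
kE2, kM2 = derive_keys(S2)
assert hmac.compare_digest(d, hmac.new(kM2, c, hashlib.sha256).digest())
print(xor_cipher(kE2, c))                         # b'attack at dawn'
```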
**Internal combustion engine cooling**
Internal combustion engine cooling:
Internal combustion engine cooling uses either air or liquid to remove the waste heat from an internal combustion engine. For small or special purpose engines, cooling using air from the atmosphere makes for a lightweight and relatively simple system. Watercraft can use water directly from the surrounding environment to cool their engines. For water-cooled engines on aircraft and surface vehicles, waste heat is transferred from a closed loop of water pumped through the engine to the surrounding atmosphere by a radiator.
Internal combustion engine cooling:
Water has a higher heat capacity than air, and can thus move heat more quickly away from the engine, but a radiator and pumping system add weight, complexity, and cost. Higher-power engines generate more waste heat, but can move more weight, meaning they are generally water-cooled. Radial engines allow air to flow around each cylinder directly, giving them an advantage for air cooling over straight engines, flat engines, and V engines. Rotary engines have a similar configuration, but the cylinders also continually rotate, creating an air flow even when the vehicle is stationary.
Internal combustion engine cooling:
Aircraft design more strongly favors lower weight and air-cooled designs. Rotary engines were popular on aircraft until the end of World War I, but had serious stability and efficiency problems. Radial engines were popular until the end of World War II, until gas turbine engines largely replaced them. Modern propeller-driven aircraft with internal-combustion engines are still largely air-cooled. Modern cars generally favor power over weight, and typically have water-cooled engines. Modern motorcycles are lighter than cars, and both cooling methods are common. Some sport motorcycles were cooled with both air and oil (sprayed underneath the piston heads).
Overview:
Heat engines generate mechanical power by extracting energy from heat flows, much as a water wheel extracts mechanical power from a flow of mass falling through a distance. Engines are inefficient, so more heat energy enters the engine than comes out as mechanical power; the difference is waste heat which must be removed. Internal combustion engines remove waste heat through cool intake air, hot exhaust gasses, and explicit engine cooling.
Overview:
Engines with higher efficiency have more energy leave as mechanical motion and less as waste heat. Some waste heat is essential: it guides heat through the engine, much as a water wheel works only if there is some exit velocity (energy) in the waste water to carry it away and make room for more water. Thus, all heat engines need cooling to operate.
Overview:
Cooling is also needed because high temperatures damage engine materials and lubricants, and it becomes even more important in hot climates. Internal-combustion engines burn fuel hotter than the melting temperature of engine materials, and hot enough to set fire to lubricants. Engine cooling removes energy fast enough to keep temperatures low so the engine can survive. Some high-efficiency engines run without explicit cooling and with only incidental heat loss, a design called adiabatic. Such engines can achieve high efficiency but compromise power output, duty cycle, engine weight, durability, and emissions.
Basic principles:
Most internal combustion engines are fluid cooled using either air (a gaseous fluid) or a liquid coolant run through a heat exchanger (radiator) cooled by air. Marine engines and some stationary engines have ready access to a large volume of water at a suitable temperature. The water may be used directly to cool the engine, but often has sediment, which can clog coolant passages, or chemicals, such as salt, that can chemically damage the engine. Thus, engine coolant may be run through a heat exchanger that is cooled by the body of water.
Basic principles:
Most liquid-cooled engines use a mixture of water and chemicals such as antifreeze and rust inhibitors. The industry term for the antifreeze mixture is 'engine coolant'. Some antifreezes use no water at all, instead using a liquid with different properties, such as propylene glycol or a combination of propylene glycol and ethylene glycol. Most air-cooled engines use some liquid oil cooling, to maintain acceptable temperatures for both critical engine parts and the oil itself. Most liquid-cooled engines use some air cooling, with the intake stroke of air cooling the combustion chamber. An exception is Wankel engines, where some parts of the combustion chamber are never cooled by intake, requiring extra effort for successful operation.
Basic principles:
There are many demands on a cooling system. One key requirement is to adequately serve the entire engine, as the whole engine fails if just one part overheats. Therefore, it is vital that the cooling system keep all parts at suitably low temperatures. Liquid-cooled engines are able to vary the size of their passageways through the engine block so that coolant flow may be tailored to the needs of each area. Locations with either high peak temperatures (narrow islands around the combustion chamber) or high heat flow (around exhaust ports) may require generous cooling. This reduces the occurrence of hot spots, which are more difficult to avoid with air cooling. Air-cooled engines may also vary their cooling capacity by using more closely spaced cooling fins in that area, but this can make their manufacture difficult and expensive.
Basic principles:
Only the fixed parts of the engine, such as the block and head, are cooled directly by the main coolant system. Moving parts such as the pistons, and to a lesser extent the crankshaft and connecting rods, must rely on the lubrication oil as a coolant, or to a very limited amount of conduction into the block and thence the main coolant. High performance engines frequently have additional oil, beyond the amount needed for lubrication, sprayed upwards onto the bottom of the piston just for extra cooling. Air-cooled motorcycles often rely heavily on oil-cooling in addition to air-cooling of the cylinder barrels.
Basic principles:
Liquid-cooled engines usually have a circulation pump. The first engines relied on thermosiphon cooling alone, where hot coolant left the top of the engine block and passed to the radiator, where it was cooled before returning to the bottom of the engine. Circulation was powered by convection alone.
Other demands include cost, weight, reliability, and durability of the cooling system itself.
Basic principles:
Conductive heat transfer is proportional to the temperature difference between materials. If engine metal is at 250 °C and the air is at 20 °C, then there is a 230 °C temperature difference for cooling. An air-cooled engine uses all of this difference. In contrast, a liquid-cooled engine might dump heat from the engine to a liquid, heating the liquid to 135 °C (Water's standard boiling point of 100 °C can be exceeded as the cooling system is both pressurised, and uses a mixture with antifreeze) which is then cooled with 20 °C air. In each step, the liquid-cooled engine has half the temperature difference and so at first appears to need twice the cooling area.
Basic principles:
However, properties of the coolant (water, oil, or air) also affect cooling. As an example, comparing water and oil as coolants: for the same rise in temperature, one gram of oil can absorb only about 55% of the heat that one gram of water can (a property called the specific heat capacity). Oil has about 90% the density of water, so a given volume of oil can absorb only about 50% of the energy of the same volume of water. The thermal conductivity of water is about four times that of oil, which can aid heat transfer. The viscosity of oil can be ten times greater than that of water, increasing the energy required to pump oil for cooling, and reducing the net power output of the engine.
Basic principles:
Comparing air and water, air has vastly lower heat capacity per gram and especially per volume (a given volume of air absorbs roughly 4,000 times less heat than the same volume of water) and less than a tenth the thermal conductivity, but also much lower viscosity (about 50 times lower: 17.4 × 10⁻⁶ Pa·s for air versus 8.94 × 10⁻⁴ Pa·s for water).
Continuing the calculation from two paragraphs above, air cooling needs about ten times the surface area (hence the cooling fins), and air needs roughly 2,000 times the flow velocity; thus a recirculating air fan needs about ten times the power of a recirculating water pump.
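The rough numbers in the preceding paragraphs can be reproduced with representative property values. The figures below are assumed, order-of-magnitude values chosen to be consistent with the text (they are not measured data for any particular coolant), so this is a sketch of the comparison rather than a design calculation.

```python
# Rough coolant comparison with assumed, order-of-magnitude property values.
coolants = {
    #        specific heat (J/(g*K)), density (kg/m^3), dynamic viscosity (Pa*s)
    "water": {"cp": 4.18, "rho": 1000.0, "mu": 8.94e-4},
    "oil":   {"cp": 2.30, "rho": 900.0,  "mu": 9.0e-3},
    "air":   {"cp": 1.00, "rho": 1.2,    "mu": 1.74e-5},
}

w = coolants["water"]
for name, c in coolants.items():
    per_gram = c["cp"] / w["cp"]                              # heat per gram, relative to water
    per_volume = (c["cp"] * c["rho"]) / (w["cp"] * w["rho"])  # heat per volume, relative to water
    viscosity = c["mu"] / w["mu"]                             # dynamic viscosity, relative to water
    print(f"{name:>5}: {per_gram:5.1%} per gram, {per_volume:8.3%} per volume, "
          f"viscosity x{viscosity:.2g}")
```

With these assumed values, oil takes up about 55% as much heat per gram and about 50% as much per volume as water, while a given volume of air holds a few thousand times less heat than the same volume of water and air's viscosity is roughly fifty times lower, matching the ratios quoted above.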
Basic principles:
Moving heat from the cylinder to a large surface area for air cooling can present problems such as difficulties manufacturing the shapes needed for good heat transfer and the space needed for free flow of a large volume of air. Water boils at about the same temperature desired for engine cooling. This has the advantage that it absorbs a great deal of energy with very little rise in temperature (called heat of vaporization), which is good for keeping things cool, especially for passing one stream of coolant over several hot objects and achieving uniform temperature. In contrast, passing air over several hot objects in series warms the air at each step, so the first may be over-cooled and the last under-cooled. However, once water boils, it is an insulator, leading to a sudden loss of cooling where steam bubbles form. Steam may return to water as it mixes with other coolant, so an engine temperature gauge can indicate an acceptable temperature even though local temperatures are high enough that damage is being done.
Basic principles:
Different parts of an engine need to run at different temperatures. The inlet, including the compressor of a turbocharger, the inlet trumpets, and the inlet valves, needs to be as cold as possible. The cylinder walls should not heat up the air before compression, but also should not cool down the gas during combustion. A compromise is a wall temperature of 90 °C. The viscosity of the oil is optimized for just this temperature. Any cooling of the exhaust and the turbine of the turbocharger reduces the amount of power available to the turbine, so the exhaust system is often insulated between engine and turbocharger to keep the exhaust gases as hot as possible.
Basic principles:
The temperature of the cooling air may range from well below freezing to 50 °C. Further, while engines in long-haul boat or rail service may operate at a steady load, road vehicles often see widely varying and quickly varying load. Thus, the cooling system is designed to vary cooling so the engine is neither too hot nor too cold. Cooling system regulation includes adjustable baffles in the air flow (sometimes called 'shutters' and commonly run by a pneumatic 'shutterstat'); a fan which operates either independently of the engine, such as an electric fan, or which has an adjustable clutch; a thermostatic valve or a thermostat that can block the coolant flow when too cool. In addition, the motor, coolant, and heat exchanger have some heat capacity which smooths out temperature increase in short sprints. Some engine controls shut down an engine or limit it to half throttle if it overheats. Modern electronic engine controls adjust cooling based on throttle to anticipate a temperature rise, and limit engine power output to compensate for finite cooling.
Basic principles:
Finally, other concerns may dominate cooling system design. As an example, air is a relatively poor coolant, but air cooling systems are simple, and failure rates typically rise as the square of the number of failure points. Also, cooling capacity is reduced only slightly by small air coolant leaks. Where reliability is of utmost importance, as in aircraft, it may be a good trade-off to give up efficiency, longevity (interval between engine rebuilds), and quietness in order to achieve slightly higher reliability; the consequences of a broken airplane engine are so severe that even a slight increase in reliability is worth giving up other good properties to achieve it.
Basic principles:
Air-cooled and liquid-cooled engines are both used commonly. Each principle has advantages and disadvantages, and particular applications may favor one over the other. For example, most cars and trucks use liquid-cooled engines, while many small airplane and low-cost engines are air-cooled.
Generalization difficulties:
It is difficult to make generalizations about air-cooled and liquid-cooled engines. Air-cooled diesel engines are chosen for reliability even in extreme heat, because air cooling is simpler and more effective than water-cooling systems at coping with temperature extremes, from the depths of winter to the height of summer; they are often used in situations where the engine runs unattended for months at a time. Similarly, it is usually desirable to minimize the number of heat transfer stages in order to maximize the temperature difference at each stage. However, Detroit Diesel two-stroke cycle engines commonly use oil cooled by water, with the water in turn cooled by air. The coolant used in many liquid-cooled engines must be renewed periodically, and can freeze at ordinary temperatures, thus causing permanent engine damage when it expands. Air-cooled engines do not require coolant service, and do not suffer damage from freezing, two commonly cited advantages for air-cooled engines. However, coolant based on propylene glycol is liquid to −55 °C, colder than is encountered by many engines; shrinks slightly when it crystallizes, thus avoiding damage; and has a service life over 10,000 hours, essentially the lifetime of many engines.
Generalization difficulties:
It is usually more difficult to achieve either low emissions or low noise from an air-cooled engine, two more reasons most road vehicles use liquid-cooled engines. It is also often difficult to build large air-cooled engines, so nearly all air-cooled engines are under 500 kW (670 hp), whereas large liquid-cooled engines exceed 80 MW (107,000 hp) (Wärtsilä-Sulzer RTA96-C 14-cylinder diesel).
Air-cooling:
Cars and trucks using direct air cooling (without an intermediate liquid) were built over a long period, from the very beginning of motoring until the practice ended following a small and generally unrecognized technical change. Before World War II, water-cooled cars and trucks routinely overheated while climbing mountain roads, creating geysers of boiling cooling water. This was considered normal, and at the time, most noted mountain roads had auto repair shops to minister to overheating engines.
Air-cooling:
ACS (Auto Club Suisse) maintains historical monuments to that era on the Susten Pass, where two radiator refill stations remain. These have instructions on a cast metal plaque and a spherical-bottom watering can hanging next to a water spigot. The spherical bottom was intended to keep the can from being set down and thus make it useless around the house; in spite of this, it was stolen.
Air-cooling:
During that period, European firms such as Magirus-Deutz built air-cooled diesel trucks, Porsche built air-cooled farm tractors, and Volkswagen became famous with air-cooled passenger cars. In the United States, Franklin built air-cooled engines.
For many years air cooling was favored for military applications as liquid cooling systems are more vulnerable to damage by shrapnel.
Air-cooling:
The Czech Republic–based company Tatra is known for its large-displacement air-cooled V8 car engines; Tatra engineer Julius Mackerle published a book on them. Air-cooled engines are better adapted to extremely cold and hot ambient temperatures: air-cooled engines can be seen starting and running in freezing conditions that seize water-cooled engines, and they continue working when water-cooled ones start producing steam jets. Air-cooled engines may also have an advantage from a thermodynamic point of view due to their higher operating temperature. The worst problem met in air-cooled aircraft engines was so-called "shock cooling": when the airplane enters a dive after climbing or level flight with the throttle open, the engine is under no load and generates less heat while the flow of air that cools the engine increases, and a catastrophic engine failure may result because different parts of the engine have different temperatures and thus different thermal expansions. In such conditions the engine may seize, and any sudden change or imbalance between the heat produced by the engine and the heat dissipated by cooling may increase engine wear, also as a consequence of thermal expansion differences between parts of the engine; liquid-cooled engines have more stable and uniform working temperatures.
Liquid cooling:
Today, most automotive and larger IC engines are liquid-cooled.
Liquid cooling is also employed in maritime vehicles (vessels, ...). For vessels, the seawater itself is mostly used for cooling. In some cases, chemical coolants are also employed (in closed systems) or they are mixed with seawater cooling.
Transition from air cooling:
The transition from air cooling to liquid cooling occurred at the start of World War II, when the US military needed reliable vehicles. The subject of boiling engines was addressed, researched, and a solution found. Previous radiators and engine blocks were properly designed and survived durability tests, but used water pumps with a leaky graphite-lubricated "rope" seal (gland) on the pump shaft. The seal was inherited from steam engines, where water loss is accepted, since steam engines already expend large volumes of water. Because the pump seal leaked mainly when the pump was running and the engine was hot, the water loss evaporated inconspicuously, leaving at best a small rusty trace when the engine stopped and cooled, and thereby not revealing significant water loss. Automobile radiators (or heat exchangers) have an outlet that feeds cooled water to the engine, and the engine has an outlet that feeds heated water to the top of the radiator. Water circulation is aided by a rotary pump that has only a slight effect, as it has to work over such a wide range of speeds that its impeller acts only weakly as a pump. While running, the leaking pump seal drained cooling water to a level where the pump could no longer return water to the top of the radiator, so water circulation ceased and water in the engine boiled. However, since water loss led to overheating and further water loss from boil-over, the original water loss was hidden.
Transition from air cooling:
After isolating the pump problem, cars and trucks built for the war effort (no civilian cars were built during that time) were equipped with carbon-seal water pumps that did not leak and caused no more geysers. Meanwhile, air cooling advanced on the memory of boiling engines, even though boil-over was no longer a common problem. Air-cooled engines became popular throughout Europe. After the war, Volkswagen advertised its cars in the USA as not boiling over, even though new water-cooled cars no longer boiled over, and these cars sold well. But as air-quality awareness rose in the 1960s, and laws governing exhaust emissions were passed, unleaded gas replaced leaded gas and leaner fuel mixtures became the norm. Subaru chose liquid cooling for its EA series (flat) engine when it was introduced in 1966.
Low heat rejection engines:
A special class of experimental prototype internal combustion piston engines have been developed over several decades with the goal of improving efficiency by reducing heat loss. These engines are variously called adiabatic engines, due to better approximation of adiabatic expansion, low heat rejection engines, or high-temperature engines. They are generally diesel engines with combustion chamber parts lined with ceramic thermal barrier coatings. Some make use of titanium pistons and other titanium parts due to its low thermal conductivity and mass. Some designs are able to eliminate the use of a cooling system and associated parasitic losses altogether. Developing lubricants able to withstand the higher temperatures involved has been a major barrier to commercialization.
Sources:
Biermann, Arnold E.; Ellerbrock, Herman H., Jr. (1939). The Design of Fins for Air-Cooled Cylinders. NACA Report No. 726.
Lamarque, P. V.: "The Design of Cooling Fins for Motor-Cycle Engines". Report of the Automobile Research Committee, Institution of Automobile Engineers Magazine, March 1943 issue; also in The Institution of Automobile Engineers Proceedings, XXXVII, Session 1942–43, pp. 99–134 and 309–312.
Sources:
"Air-cooled Automotive Engines", Julius Mackerle, M. E.; Charles Griffin & Company Ltd., London, 1972.
engineeringtoolbox.com for physical properties of air, oil and water.
https://automotivedroid.com/can-low-coolant-cause-rough-idle/ for low coolant causing rough idle.
**Molecular Materials Research Group**
Molecular Materials Research Group:
The Molecular Materials Research Group (MMRG) is a multidisciplinary research group composed of several Ph.D. members and drawing on the expertise of other researchers in the fields of Computational, Organic and Analytical Chemistry. Located at Madeira University in Madeira, its main scientific activity is devoted to the preparation and characterization of potentially useful molecular materials with enhanced electronic and biomedical properties. The development of new materials based on dendrimers for gene delivery and for non-linear optical applications is one of its primary goals.
**Hypoestrogenism**
Hypoestrogenism:
Hypoestrogenism, or estrogen deficiency, refers to a lower than normal level of estrogen. It is an umbrella term used to describe estrogen deficiency in various conditions. Estrogen deficiency is also associated with an increased risk of cardiovascular disease, and has been linked to diseases like urinary tract infections and osteoporosis.
Hypoestrogenism:
In women, low levels of estrogen may cause symptoms such as hot flashes, sleeping disturbances, decreased bone health, and changes in the genitourinary system. Hypoestrogenism is most commonly found in women who are postmenopausal, have primary ovarian insufficiency (POI), or are presenting with amenorrhea (absence of menstrual periods). The effects of hypoestrogenism are primarily genitourinary, including thinning of the vaginal tissue layers and an increase in vaginal pH. With normal levels of estrogen, the environment of the vagina is protected against inflammation, infections, and sexually transmitted infections. Hypoestrogenism can also occur in men, for instance due to hypogonadism.
Hypoestrogenism:
There are both hormonal and non-hormonal treatments to prevent the negative effects of low estrogen levels and improve quality of life.
Signs and symptoms:
Vasomotor Presentations of low estrogen levels include hot flashes, which are sudden, intense feelings of heat predominantly in the upper body, causing the skin to redden as if blushing. They are believed to occur due to the narrowing of the thermoneutral zone in the hypothalamus, making the body more sensitive to body temperature changes. Night disturbances are also common symptoms associated with hypoestrogenism. People may experience difficulty falling asleep, waking up several times a night, and early awakening, with variability between racial and ethnic groups.
Signs and symptoms:
Genitourinary Other classic symptoms include both physical and chemical changes of the vulva, vagina, and lower urinary tract. Genitals go through atrophic changes such as loss of elasticity, loss of vaginal rugae, and an increase in vaginal pH, which can lead to changes in the vaginal flora and increase the risk of tissue fragility and fissure. Other genital signs include dryness or lack of lubrication, burning, irritation, discomfort or pain, as well as impaired function. Low levels of estrogen can lead to limited genital arousal and cause dyspareunia, or painful sexual intercourse, because of changes in the four layers of the vaginal wall. People with low estrogen will also experience higher urgency to urinate and dysuria, or painful urination. Hypoestrogenism is also considered one of the major risk factors for developing uncomplicated urinary tract infection in postmenopausal women who do not take hormone replacement therapy.
Signs and symptoms:
Bone health Estrogen contributes to bone health in several ways; low estrogen levels increase bone resorption via osteoclasts and osteocytes, cells that help with bone remodeling, making bones more likely to deteriorate and increasing the risk of fracture. The decline in estrogen levels can ultimately lead to more serious illnesses, such as scoliosis or type I osteoporosis, a disease that thins and weakens bones, resulting in low bone density and fractures. Estrogen deficiency plays an important role in the development of osteoporosis in both sexes, but it is more pronounced in women, in whom it occurs five to ten years earlier (at menopausal ages) than in men. Females are also at higher risk for osteopenia and osteoporosis.
Causes:
A variety of conditions can lead to hypoestrogenism: menopause is the most common. Primary ovarian insufficiency (premature menopause) due to varying causes, such as radiation therapy, chemotherapy, or a spontaneous manifestation, can also lead to low estrogen and infertility. Hypogonadism (a condition where the gonads – testes for men and ovaries for women – have diminished activity) can decrease estrogen. In primary hypogonadism, elevated serum gonadotropins are detected on at least two occasions several weeks apart, indicating gonadal failure. In secondary hypogonadism (where the cause is hypothalamic or pituitary dysfunction) serum levels of gonadotropins may be low. Other causes include certain medications, gonadotropin insensitivity, inborn errors of steroid metabolism (for example, aromatase deficiency, 17α-hydroxylase deficiency, 17,20-lyase deficiency, 3β-hydroxysteroid dehydrogenase deficiency, and cholesterol side-chain cleavage enzyme or steroidogenic acute regulatory protein deficiency) and functional amenorrhea.
Causes:
Risks Low endogenous estrogen levels can elevate the risk of cardiovascular disease in women who reach early menopause. Estrogen is needed to relax arteries via endothelium-derived nitric oxide, resulting in better heart health by decreasing adverse atherogenic effects. Women with POI may have an increased risk of cardiovascular disease due to low estrogen production.
Pathophysiology:
Estrogen deficiency has both vaginal and urologic effects; the female genitalia and lower urinary tract share common estrogen receptor function due to their embryological development. Estrogen is a vasoactive hormone (one that affects blood pressure) which stimulates blood flow and increases vaginal secretions and lubrication. Activated estrogen receptors also stimulate tissue proliferation in the vaginal walls, which contributes to the formation of rugae. These rugae aid in sexual stimulation by becoming lubricated, distended, and expanded. Genitourinary effects of low estrogen include thinning of the vaginal epithelium, loss of vaginal barrier function, decrease of vaginal folding, decrease of the elasticity of the tissues, and decrease of the secretory activity of the Bartholin glands, which leads to traumatization of the vaginal mucosa and painful sensations. This thinning of the vaginal epithelium layers can increase the risk of developing inflammation and infection, such as urinary tract infection. The vagina is largely dominated by bacteria from the genus Lactobacillus, which typically comprise more than 70% of the vaginal bacteria in women. These lactobacilli process glycogen and its breakdown products, which results in a maintained low vaginal pH. Estrogen levels are closely linked to lactobacilli abundance and vaginal pH, as higher levels of estrogen promote thickening of the vaginal epithelium and intracellular production of glycogen. This large presence of lactobacilli and subsequent low pH levels are hypothesized to benefit women by protecting against sexually transmitted pathogens and opportunistic infections, and therefore reducing disease risk.
Diagnosis:
Hypoestrogenism is typically found in menopause and aids in diagnosis of other conditions such as POI and functional amenorrhea. Estrogen levels can be tested through several laboratory tests: vaginal maturation index, progestogen challenge test, and vaginal swabs for small parabasal cells.
Diagnosis:
Menopause Menopause is usually diagnosed through symptoms of vaginal atrophy, pelvic exams, and a comprehensive medical history including the date of the last menstrual cycle. There is no definitive test for determining menopause, as the symptom complex is the primary indicator and because lower levels of estradiol are harder to detect accurately after menopause. However, laboratory tests can be done to differentiate between menopause and other diagnoses.
Diagnosis:
Functional hypothalamic amenorrhea Functional hypothalamic amenorrhea (FHA) is diagnosed based on findings of amenorrhea lasting three months or more and low serum levels of gonadotropins and estradiol. Since common causes of FHA include exercising too much, eating too little, or being under too much stress, diagnosis of FHA includes assessing for any changes in exercise, weight, and stress. In addition, evaluation of amenorrhea includes a history and physical examination, biochemical testing, imaging, and measuring estrogen level. Examination of menstrual problems and clinical tests to measure hormones such as serum prolactin, thyroid-stimulating hormone, and follicle-stimulating hormone (FSH) can help rule out other potential causes of amenorrhea. These potential conditions include hyperprolactinemia, POI, and polycystic ovary syndrome.
Diagnosis:
Primary ovarian insufficiency Primary ovarian insufficiency, also known as premature ovarian failure, can develop in women before the age of forty as a consequence of hypergonadotropic hypogonadism. POI can present as amenorrhea and has similar symptoms to menopause, but measuring FSH levels is used for diagnosis.
Treatment:
Hormone replacement therapy (HRT) can be used to treat hypoestrogenism and menopause-related symptoms in both premenopausal and postmenopausal women. Low-dose estrogen medications are approved by the U.S. Food and Drug Administration for treatment of menopause-related symptoms. HRT can be used with or without a progestogen to improve symptoms such as hot flashes, sweating, trouble sleeping, and vaginal dryness and discomfort. The FDA recommends that HRT be avoided in women with a history or risk of breast cancer, undiagnosed genital bleeding, untreated high blood pressure, unexplained blood clots, or liver disease. HRT for the vasomotor symptoms of hypoestrogenism includes different forms of estrogen, such as conjugated equine estrogens, 17β-estradiol, transdermal estradiol, ethinyl estradiol, and the estradiol ring. In addition to HRT, common progestogens are used to protect the inner layer of the uterus, the endometrium. These medications include medroxyprogesterone acetate, progesterone, norethisterone acetate, and drospirenone. Non-pharmacological treatment of hot flashes includes using portable fans to lower the room temperature, wearing layered clothing, and avoiding tobacco, spicy food, alcohol and caffeine. There is a lack of evidence to support other treatments such as acupuncture, yoga, and exercise to reduce symptoms.
In men:
Estrogens are also important in male physiology. Hypoestrogenism can occur in men due to hypogonadism. Very rare causes include aromatase deficiency and estrogen insensitivity syndrome. Medications can also be a cause of hypoestrogenism in men. Hypoestrogenism in men can lead to osteoporosis, among other symptoms. Estrogens may also be positively involved in sexual desire in men.
**Environment of Russia**
Climate:
The climate of Russia is shaped by the enormous size of the country and the remoteness of many areas from the sea, which result in the dominance of the continental climate, prevalent in European and Asian Russia except for the tundra and the extreme southeast. Mountains in the south, obstructing the flow of cold air masses from the Arctic Ocean, and the plains of the south and north make the country open to Pacific and Atlantic influences.
Waste management:
141,019,100 tonnes of hazardous waste were generated in Russia in 2009.
Environmental policy and law:
Treaties and international agreements Russia is a signatory to a number of treaties and international agreements. Party to: Air Pollution, Air Pollution-Nitrogen Oxides, Air Pollution-Sulphur 85, Antarctic-Environmental Protocol, Antarctic Treaty, Biodiversity, Climate Change, Endangered Species, Environmental Modification, Hazardous Wastes, Law of the Sea, Marine Dumping, Nuclear Test Ban, Ozone Layer Protection, Ship Pollution, Tropical Timber 83, Wetlands, Whaling, Climate Change-Kyoto Protocol. Signed, but not ratified: Air Pollution-Sulphur 94.
Environmental issues:
Air pollution from heavy industry, emissions of coal-fired electric plants, and transportation in major cities; industrial, municipal, and agricultural pollution of inland waterways and sea coasts; deforestation; soil erosion; soil contamination from improper application of agricultural chemicals; scattered areas of sometimes intense radioactive contamination; ground water contamination from toxic waste; considerable biodiversity addressed by the country's Biodiversity Action Plan.
Environmental issues:
While Russia possesses vast mineral and energy wealth, this does not come without a price, both to Russia and to the wider world. In particular, oil and gas extraction exacts a heavy cost on the health of the land and people. Drilling waste water, mud, and sludges accumulate; annual volumes have been estimated at 1.7 million tons of chemical reagents contaminating 25 million cubic meters of topsoil. Considerable geomechanical disturbance, contamination of soils and water, and multiple increases in contaminated waste water ejected into surface streams are serious problems offsetting Russia's profits from the industry. It has been estimated that between 1991 and 1999 the volume of contaminated waste water from the Russian oil industry amounted to 200 million cubic meters. Utilization of gas co-extracted with oil does not exceed 80% in Russia; it has been variously estimated that 5-17 billion cubic meters of this un-utilized gas is burnt annually in "gas torches" (flares), releasing 400,000 tons or more of hazardous substances into the atmosphere each year and creating the double impact of a wasted resource and a negative environmental effect. An estimated 560 million tons of methane leaks annually into the atmosphere from oil and gas extraction, not counting accidental outbursts and pipe breakage. Other valuable industries also have their costs, such as the coal industry's release of vast quantities of hazardous, toxic, and radioactive materials. The Russian gold industry, with Russia being for at least a century the only nation with high extraction of gold from placer deposits, and having more than 4,000 large deposits, inevitably creates problems for river systems, and the associated pollution from the use of mass explosions in mining can also be a problem. Overall, this extensive mineral wealth brings great benefit to the Russian economy and people, and to the wider world, yet it also brings several difficult problems that must be dealt with.
**Helpmate**
Helpmate:
A helpmate is a type of chess problem in which both sides cooperate in order to achieve the goal of checkmating Black. In a helpmate in n moves, Black moves first, then White, each side moving n times, to culminate in White's nth move checkmating Black. (In a helpmate in 2 for example, sometimes abbreviated h#2, the solution consists of a Black move, a White move, a second Black move, then a second White move, giving checkmate.) Although the two sides cooperate, all moves must be legal according to the rules of chess.
Helpmate:
The example problem illustrated is a helpmate in 8 (or h#8) by Z. Maslar, published in Die Schwalbe in 1981. The solution is (recall that in helpmate solutions, Black's move is given first): 1. Kf3 Kd3 2. Bb3 Kc3 3. Ke4+ Kd2 4. Kd4 Ke2 5. Kc3 Nb4 6. Kb2 Kd2 7. Ka1 Kc1 8. Ba2 Nc2#
History:
The first helpmate problem was by the German chess master Max Lange, published in Deutsche Schachzeitung, December 1854. The problem had White to move and White could play in a number of different ways to achieve the same mate (duals), considered a serious flaw today.
History:
In The Chess Monthly, November 1860, American puzzle inventor Sam Loyd published the first helpmate with Black to move as is now standard, one intended main line, and an attractive but false solution (a try) to mislead solvers. However, this problem too had a minor dual, and also had the major flaw (or cook) of having a second, completely separate solution, not noted by the author. Even so, it was a much better problem than Lange's and its presentation, incorporating a story written by D. W. Fiske, established the genre. The first completely sound helpmate was by A. Barbe of Leipzig, published in 105 Leipziger Ill. Familien-Journal, 1861. The term "help-mate" originated in The Problem Art by T. B. and F. F. Rowland (Kingstown, 1897). The helpmate problem has since increased in popularity to be second only to the directmate and is no longer considered to be part of fairy chess.
Varieties of helpmate problems:
Multiple solutions Because the nature of helpmates sees Black and White cooperating, the play in helpmates may seem a great deal simpler than in directmates (the most common type of problem, where White tries to checkmate Black, and Black tries to avoid being mated). In directmates, a great variety of play can be found in the solution because although White has only one move at each juncture which will solve the problem, Black can choose between several to try to thwart White's efforts. In helpmates, however, both White's and Black's moves are limited to just one at each juncture; this may seem simple, but a well-constructed helpmate also shows thematic play, and the cooperating moves should not always be easy to find. It has been noted by Jean Oudot that "helpmates are the purest form of all the chess arts". In order to introduce more lines of play into a problem, various devices can be employed. Most straightforwardly, a problem can have more than one solution. The solutions will usually complement each other in some thematic and aesthetically pleasing way. Each solution can be considered a different phase of play. If there is more than one solution, the composer will state this; if there is no such statement, the problem has only one solution. The example to the right is a helpmate in 2 (h#2) with two solutions. It was published in the June 1975 issue of Schach and is by the helpmate specialist Chris J. Feather.
Varieties of helpmate problems:
The two solutions are 1. Bxb8 Bd5 2. Nc7 Bxg5# and 1. Rdxd8 Bc6 2. Nd7 Rxb3#. These lines are very closely linked, with both exhibiting the same basic pattern: first, Black takes the white piece that gives mate in the other solution (this is known as a Zilahi), at the same time opening the line on which mate is eventually given, then White moves a bishop to close a line so that Black's next move will not give check. Black's second move closes another line so that after White's last move, giving check, Black will not be able to interpose one of his pieces.
Varieties of helpmate problems:
Twinning Another way of giving variety to the play of a helpmate is twinning. Here, more than one problem is wrought from a single diagram by making small changes to it, such as moving a piece from one square to another, adding or removing a piece, turning the board round or some other device. Twinning is occasionally found in other types of problems, but is particularly common in helpmates. The example shown is a helpmate in 2 by Henry Forsberg (published in 1935 in Revista Romana de Şah). The twins are created by substituting the black queen on a6 with a different piece. The solutions are:

a) diagram position: 1. Qf6 Nc5 2. Qb2 Ra4#
b) with black rook at a6: 1. Rb6 Rb1 2. Rb3 Ra1#
c) with black bishop at a6: 1. Bc4 Ne1 2. Ba2 Nc2#
d) with black knight at a6: 1. Nc5 Nc1 2. Na4 Rb3#
e) with black pawn at a6: 1. a5 Rb3+ 2. Ka4 Nc5#

Duplex A further variation is the duplex, another way of getting two problems for the price of one. The first problem is a normal helpmate; the second starts from the same position but has White moving first and helping Black to checkmate him. Again, duplex problems have been composed with other types of problems, but the vast majority are helpmates. To the right is an example by Milan Vukcevich (from CHM avec 6 pieces Bad Pyrmont, 1996).
Varieties of helpmate problems:
The solution with Black moving first is 1. Ng6 f8=Q 2. Ne5 d8=N#. With White moving first, it is 1. f8=R Nf7 2. d8=B Nd6#. These two lines are closely linked, with two white pawn promotions covering the black king's flight squares in the first part and promoted pieces blocking White's flight squares in the second. This problem is an Allumwandlung, a problem in which pawns are promoted to each of knight, bishop, rook and queen.
Varieties of helpmate problems:
Unorthodox helpmate problems Helpmates in which White moves first are also very popular today; the stipulation then contains a "½", for example a helpmate in 2½ moves. Helpmates, like other problems, can be composed with fairy chess pieces or with fairy conditions (chess variant rules), such as Circe chess, Grid chess, or Patrol chess. All of these variations can be, and have been, combined. (So it is possible to have, for instance, a series-helpmate in 7, twinned with two solutions in each phase, using nightriders and Madrasi chess.) Problems related to helpmates can have other kinds of stipulations involving cooperation between White and Black, in particular seriesmover problems, like seriesmates, serieshelpmates, serieshelpstalemates, etc.
**Convex Polyhedra (book)**
Convex Polyhedra (book):
Convex Polyhedra is a book on the mathematics of convex polyhedra, written by Soviet mathematician Aleksandr Danilovich Aleksandrov, and originally published in Russian in 1950, under the title Выпуклые многогранники. It was translated into German by Wilhelm Süss as Konvexe Polyeder in 1958. An updated edition, translated into English by Nurlan S. Dairbekov, Semën Samsonovich Kutateladze and Alexei B. Sossinsky, with added material by Victor Zalgaller, L. A. Shor, and Yu. A. Volkov, was published as Convex Polyhedra by Springer-Verlag in 2005.
Topics:
The main focus of the book is on the specification of geometric data that will uniquely determine the shape of a three-dimensional convex polyhedron, up to some class of geometric transformations such as congruence or similarity. It considers both bounded polyhedra (convex hulls of finite sets of points) and unbounded polyhedra (intersections of finitely many half-spaces). The 1950 Russian edition of the book included 11 chapters. The first chapter covers the basic topological properties of polyhedra, including their topological equivalence to spheres (in the bounded case) and Euler's polyhedral formula. After a lemma of Augustin Cauchy on the impossibility of labeling the edges of a polyhedron by positive and negative signs so that each vertex has at least four sign changes, the remainder of chapter 2 outlines the content of the rest of the book. Chapters 3 and 4 prove Alexandrov's uniqueness theorem, characterizing the surface geometry of polyhedra as exactly the metric spaces that are topologically spherical and locally like the Euclidean plane except at a finite set of points of positive angular defect, obeying Descartes' theorem on total angular defect, according to which the total defect should be 4π. Chapter 5 considers the metric spaces defined in the same way that are topologically a disk rather than a sphere, and studies the flexible polyhedral surfaces that result. Chapters 6 through 8 of the book are related to a theorem of Hermann Minkowski that a convex polyhedron is uniquely determined by the areas and directions of its faces, with a new proof based on invariance of domain. A generalization of this theorem implies that the same is true for the perimeters and directions of the faces. Chapter 9 concerns the reconstruction of three-dimensional polyhedra from a two-dimensional perspective view, by constraining the vertices of the polyhedron to lie on rays through the point of view. The original Russian edition of the book concludes with two chapters, 10 and 11, related to Cauchy's theorem that polyhedra with flat faces form rigid structures, and describing the differences between the rigidity and infinitesimal rigidity of polyhedra, as developed analogously to Cauchy's rigidity theorem by Max Dehn. The 2005 English edition adds comments and bibliographic information regarding many problems that were posed as open in the 1950 edition but subsequently solved. It also includes, in a chapter of supplementary material, the translations of three related articles by Volkov and Shor, including a simplified proof of Pogorelov's theorems generalizing Alexandrov's uniqueness theorem to non-polyhedral convex surfaces.
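The total angular defect mentioned above is easy to check on a concrete example. The following sketch (illustrative only, not from the book) computes the defect of a cube, where each vertex meets three right angles:

```python
from math import pi

# Angular defect at a vertex: 2*pi minus the sum of the face angles meeting there.
def total_defect(vertex_face_angles):
    return sum(2 * pi - sum(angles) for angles in vertex_face_angles)

# A cube has 8 vertices, each meeting three right angles,
# so each vertex has defect 2*pi - 3*(pi/2) = pi/2.
cube = [[pi / 2] * 3] * 8
print(total_defect(cube), 4 * pi)  # both ~12.566, as Descartes' theorem requires
```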
Audience and reception:
Robert Connelly writes that, for a work describing significant developments in the theory of convex polyhedra that was however hard to access in the west, the English translation of Convex Polyhedra was long overdue. He calls the material on Alexandrov's uniqueness theorem "the star result in the book", and he writes that the book "had a great influence on countless Russian mathematicians". Nevertheless, he complains about the book's small number of exercises, and about an inconsistent level of presentation that fails to distinguish important and basic results from specialized technicalities. Although intended for a broad mathematical audience, Convex Polyhedra assumes a significant level of background knowledge in material including topology, differential geometry, and linear algebra.
Audience and reception:
Reviewer Vasyl Gorkaviy recommends Convex Polyhedra to students and professional mathematicians as an introduction to the mathematics of convex polyhedra. He also writes that, over 50 years after its original publication, "it still remains of great interest for specialists", after being updated to include many new developments and to list new open problems in the area.
**Planetary phase**
Planetary phase:
A planetary phase is a certain portion of a planet's area that reflects sunlight as viewed from a given vantage point, as well as the period of time during which it occurs.
Inferior planets:
The two inferior planets, Mercury and Venus, which have orbits that are smaller than the Earth's, exhibit the full range of phases as does the Moon, when seen through a telescope. Their phases are "full" when they are at superior conjunction, on the far side of the Sun as seen from the Earth. It is possible to see them at these times, since their orbits are not exactly in the plane of Earth's orbit, so they usually appear to pass slightly above or below the Sun in the sky. Seeing them from the Earth's surface is difficult, because of sunlight scattered in Earth's atmosphere, but observers in space can see them easily if direct sunlight is blocked from reaching the observer's eyes. The planets' phases are "new" when they are at inferior conjunction, passing more or less between the Sun and the Earth. Sometimes they appear to cross the solar disk, which is called a transit of the planet. At intermediate points on their orbits, these planets exhibit the full range of crescent and gibbous phases.
Superior planets:
The superior planets, orbiting outside the Earth's orbit, do not exhibit the full range of phases, appearing almost always gibbous or full. However, Mars often appears significantly gibbous when it is illuminated by the Sun at a very different angle than that at which it is seen by an observer on Earth, so that an observer on Mars would see the Sun and the Earth widely separated in the sky. This effect is not easily noticeable for the giant planets, from Jupiter outward, since they are so far away that the Sun and the Earth, as seen from these outer planets, would appear to be in almost the same direction.
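This geometry can be quantified with the standard illuminated-fraction formula k = (1 + cos α)/2, where α is the phase angle (the Sun–planet–observer angle); for a superior planet on a circular orbit of radius d (in astronomical units), α is largest when sin α = 1/d. A rough Python sketch, with circular orbits assumed and illustrative distances:

```python
from math import asin, cos

def min_illuminated_fraction(d_au: float) -> float:
    # Smallest illuminated fraction of a superior planet at d_au from the Sun:
    # k = (1 + cos(alpha)) / 2, with maximum phase angle sin(alpha) = 1 / d_au.
    alpha = asin(1 / d_au)
    return (1 + cos(alpha)) / 2

print(min_illuminated_fraction(1.52))  # Mars: ~0.88, noticeably gibbous
print(min_illuminated_fraction(5.20))  # Jupiter: ~0.99, nearly always full
```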
See also:
Earth phase Lunar phase Phases of Venus
**Method of support**
Method of support:
In statistics, the method of support is a technique that is used to make inferences from datasets.
Method of support:
According to A. W. F. Edwards, the method of support aims to make inferences about unknown parameters in terms of the relative support, or log likelihood, induced by a set of data for a particular parameter value. The technique may be used whether or not prior information is available. The method of maximum likelihood is part of the method of support, but note that the method of support also provides confidence regions that are defined in terms of their support.
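As a hedged illustration of the idea (a sketch, not taken from Edwards' text): for a binomial observation the support function is the log likelihood, and the set of parameter values within some number of support units of the maximum plays the role of a confidence region. The two-unit cutoff below is chosen purely for illustration:

```python
import numpy as np

# Support (log likelihood) for a binomial success probability p,
# after observing k successes in n trials.
def support(p, k, n):
    return k * np.log(p) + (n - k) * np.log(1 - p)

k, n = 7, 10
p = np.linspace(0.001, 0.999, 9981)
s = support(p, k, n) - support(k / n, k, n)  # support relative to its maximum

region = p[s >= -2]  # all p within 2 support units of the best-supported value
print(k / n, region.min(), region.max())
```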
Method of support:
Notable proponents of the method of support include A. W. F. Edwards.
**Staggered tuning**
Staggered tuning:
Staggered tuning is a technique used in the design of multi-stage tuned amplifiers whereby each stage is tuned to a slightly different frequency. In comparison to synchronous tuning (where each stage is tuned identically) it produces a wider bandwidth at the expense of reduced gain. It also produces a sharper transition from the passband to the stopband. Both staggered tuning and synchronous tuning circuits are easier to tune and manufacture than many other filter types.
Staggered tuning:
The function of stagger-tuned circuits can be expressed as a rational function and hence they can be designed to any of the major filter responses such as Butterworth and Chebyshev. The poles of the circuit are easy to manipulate to achieve the desired response because of the amplifier buffering between stages.
Applications include television IF amplifiers (mostly 20th century receivers) and wireless LAN.
Rationale:
Staggered tuning improves the bandwidth of a multi-stage tuned amplifier at the expense of the overall gain. Staggered tuning also increases the steepness of passband skirts and hence improves selectivity.
Rationale:
The value of staggered tuning is best explained by first looking at the shortcomings of tuning every stage identically. This method is called synchronous tuning. Each stage of the amplifier will reduce the bandwidth. In an amplifier with multiple identical stages, the 3 dB points of the response after the first stage will become the 6 dB points of the second stage. Each successive stage will add a further 3 dB to what was the band edge of the first stage. Thus the 3 dB bandwidth becomes progressively narrower with each additional stage. As an example, a four-stage amplifier will have its 3 dB points at the 0.75 dB points of an individual stage. The fractional bandwidth of an LC circuit is given by

$$B = \frac{\sqrt{m-1}}{Q}$$

where m is the ratio of the power at resonance to that at the band edge frequency (equal to 2 for the 3 dB point and 1.19 for the 0.75 dB point) and Q is the quality factor. The bandwidth is thus reduced by a factor of $\sqrt{m-1}$, where, in terms of the number of stages n, $m = 2^{1/n}$. Thus the four-stage synchronously tuned amplifier will have a bandwidth of only about 43% of a single stage ($\sqrt{2^{1/4}-1} \approx 0.43$), and even in a two-stage amplifier the bandwidth is reduced to about 64% of the original. Staggered tuning allows the bandwidth to be widened at the expense of overall gain. The overall gain is reduced because when any one stage is at resonance (and thus maximum gain) the others are not, unlike synchronous tuning where all stages are at maximum gain at the same frequency. A two-stage stagger-tuned amplifier will have a gain 3 dB less than a synchronously tuned amplifier. Even in a design that is intended to be synchronously tuned, some staggered tuning effect is inevitable because of the practical impossibility of keeping all tuned circuits perfectly in step and because of feedback effects. This can be a problem in very narrow band applications where essentially only one spot frequency is of interest, such as a local oscillator feed or a wave trap. The overall gain of a synchronously tuned amplifier will always be less than the theoretical maximum because of this. Both synchronously tuned and stagger-tuned schemes have a number of advantages over schemes that place all the tuning components in a single aggregated filter circuit separate from the amplifier, such as ladder networks or coupled resonators. One advantage is that they are easy to tune. Each resonator is buffered from the others by the amplifier stages, so the resonators have little effect on each other. The resonators in aggregated circuits, on the other hand, will all interact with each other, particularly their nearest neighbours. Another advantage is that the components need not be close to ideal. Every LC resonator works directly into a resistor, which lowers the Q anyway, so any losses in the L and C components can be absorbed into this resistor in the design. Aggregated designs usually require high-Q resonators. Also, stagger-tuned circuits have resonator components with values that are quite close to each other, and in synchronously tuned circuits they can be identical. The spread of component values is thus less in stagger-tuned circuits than in aggregated circuits.
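A quick numerical check of this shrinkage factor, as a minimal Python sketch:

```python
import math

def synchronous_bandwidth_factor(n: int) -> float:
    # Fractional 3 dB bandwidth of n identical cascaded tuned stages,
    # relative to a single stage: sqrt(2**(1/n) - 1), from B = sqrt(m-1)/Q
    # with m = 2**(1/n).
    return math.sqrt(2 ** (1 / n) - 1)

for n in (1, 2, 4):
    print(n, round(synchronous_bandwidth_factor(n), 3))
# prints: 1 1.0, 2 0.644, 4 0.435 -- the 64% and 43% figures quoted above
```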
Design:
Tuned amplifiers such as the one illustrated at the beginning of this article can be more generically depicted as a chain of transconductance amplifiers each loaded with a tuned circuit.
where for each stage (omitting the suffixes): gm is the amplifier transconductance, C is the tuned circuit capacitance, L is the tuned circuit inductance, and G is the sum of the amplifier output conductance and the input conductance of the next amplifier.
Design:
Stage gain The gain A(s) of one stage of this amplifier is given by

$$A(s) = \frac{g_m sL}{s^2 LC + sLG + 1}$$

where s is the complex frequency operator. This can be written in a more generic form, that is, not assuming that the resonators are the LC type, with the following substitutions:

$$\omega_0 = \frac{1}{\sqrt{LC}} \quad \text{(the resonant frequency)}$$

$$A_0 := A(\omega_0) = \frac{g_m}{G} \quad \text{(the gain at resonance)}$$

$$Q = \frac{1}{\omega_0 LG} \quad \text{(the stage quality factor)}$$

resulting in

$$A(s) = A_0 \frac{s\omega_0}{s^2 Q + s\omega_0 + \omega_0^2 Q}.$$

Stage bandwidth The gain expression can be given as a function of (angular) frequency by making the substitution s = iω, where i is the imaginary unit and ω is the angular frequency:

$$A(\omega) = A_0 \frac{i\omega\omega_0}{i\omega\omega_0 + \omega_0^2 Q - \omega^2 Q}$$

The frequency at the band edges, ωc, can be found from this expression by equating the value of the gain at the band edge to the magnitude of the expression,

$$|A(\omega_c)| = \frac{A_0}{\sqrt{m}}$$

where m is defined as above and equal to two if the 3 dB points are desired. Solving this for ωc and taking the difference between the two positive solutions finds the bandwidth Δω,

$$\Delta\omega_c = \omega_{c1} - \omega_{c2} = \frac{\omega_0 \sqrt{m-1}}{Q}$$

and the fractional bandwidth B,

$$B := \frac{\Delta\omega_c}{\omega_0} = \frac{\sqrt{m-1}}{Q}$$

Overall response The overall response of the amplifier is given by the product of the individual stages,

$$A_T = A_1 A_2 A_3 \cdots$$

It is desirable to be able to design the filter from a standard low-pass prototype filter of the required specification. Frequently, a smooth Butterworth response will be chosen, but other polynomial functions can be used that allow ripple in the response. A popular choice for a polynomial with ripple is the Chebyshev response, for its steep skirt. For the purpose of transformation, the stage gain expression can be rewritten in the more suggestive form,

$$A(s) = \frac{A_0}{1 + Q\left(\frac{s}{\omega_0} + \frac{\omega_0}{s}\right)}$$

This can be transformed into a low-pass prototype filter with the transform

$$Q\left(\frac{s}{\omega_0} + \frac{\omega_0}{s}\right) \to \frac{s}{\omega_c'}$$

where ω′c is the cutoff frequency of the low-pass prototype. This can be done straightforwardly for the complete filter in the case of synchronously tuned amplifiers, where every stage has the same ω0, but for a stagger-tuned amplifier there is no simple analytical solution to the transform. Stagger-tuned designs can be approached instead by calculating the poles of a low-pass prototype of the desired form (e.g. Butterworth) and then transforming those poles to a band-pass response. The poles so calculated can then be used to define the tuned circuits of the individual stages.
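The band-edge formula can be verified numerically. The following Python sketch (with arbitrary illustrative values for A0, Q and ω0) evaluates |A(ω)| on a frequency grid and compares the measured 3 dB fractional bandwidth with √(m−1)/Q:

```python
import numpy as np

A0, Q, w0 = 10.0, 20.0, 2 * np.pi * 1e6  # illustrative stage values

def gain(w):
    # |A(w)| for one tuned stage, from A(s) with s = i*w
    s = 1j * w
    return np.abs(A0 * s * w0 / (s**2 * Q + s * w0 + w0**2 * Q))

w = np.linspace(0.8 * w0, 1.2 * w0, 200001)
band = w[gain(w) >= A0 / np.sqrt(2)]  # 3 dB points, i.e. m = 2
print((band[-1] - band[0]) / w0)      # measured fractional bandwidth
print(np.sqrt(2 - 1) / Q)             # predicted sqrt(m-1)/Q = 0.05
```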
Design:
Poles The stage gain can be rewritten in terms of the poles by factorising the denominator,

$$A(s) = \frac{A_0}{Q} \frac{s\omega_0}{(s-p)(s-p^*)}$$

where p, p* are a complex conjugate pair of poles, and the overall response is

$$A_T = \frac{sa_1}{(s-p_1)(s-p_1^*)} \cdot \frac{sa_2}{(s-p_2)(s-p_2^*)} \cdot \frac{sa_3}{(s-p_3)(s-p_3^*)} \cdots$$

where $a_k = A_{0k}\,\omega_{0k}/Q_{0k}$. From the band-pass to low-pass transform given above, an expression can be found for the poles in terms of the poles of the low-pass prototype, qk,

$$p_k, p_k^* = \frac{1}{2}\left(\frac{q_k \omega_{0\mathrm{B}}}{\omega_c' Q_{\mathrm{eff}}} \pm \sqrt{\left(\frac{q_k \omega_{0\mathrm{B}}}{\omega_c' Q_{\mathrm{eff}}}\right)^2 - 4\omega_{0\mathrm{B}}^2}\right)$$

where ω0B is the desired band-pass centre frequency and Qeff is the effective Q of the overall circuit. Each pole in the prototype transforms to a complex conjugate pair of poles in the band-pass and corresponds to one stage of the amplifier. This expression is greatly simplified if the cutoff frequency of the prototype, ω′c, is set to the final filter bandwidth ω0B/Qeff.
Design:
$$p_k, p_k^* = \frac{1}{2}\left(q_k \pm \sqrt{q_k^2 - 4\omega_{0\mathrm{B}}^2}\right)$$

In the case of a narrowband design, $\omega_{0\mathrm{B}} \gg |q_k|$, which can be used to make a further simplification with the approximation

$$p_k, p_k^* \approx \frac{q_k}{2} \pm i\omega_{0\mathrm{B}}$$

These poles can be inserted into the stage gain expression in terms of poles. By comparing with the stage gain expression in terms of component values, those component values can then be calculated.
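A sketch of this design procedure in Python, assuming a Butterworth prototype and using hypothetical numbers for the centre frequency, effective Q and stage count:

```python
import numpy as np

w0B = 2 * np.pi * 10e6  # desired band-pass centre frequency, rad/s (assumed)
Qeff = 20.0             # effective Q of the overall amplifier (assumed)
n = 3                   # number of stages (assumed)

# Butterworth low-pass prototype poles, scaled to the cutoff w0B/Qeff.
wc = w0B / Qeff
k = np.arange(1, n + 1)
q = wc * np.exp(1j * np.pi * (2 * k + n - 1) / (2 * n))  # left-half-plane poles

# Narrowband approximation: p ~ q/2 + i*w0B (one pole of each conjugate pair).
p = q / 2 + 1j * w0B

# Each pole pair defines one stage's resonant frequency and quality factor.
for pk in p:
    w0k = abs(pk)
    Qk = -w0k / (2 * pk.real)
    print(f"f0 = {w0k / (2 * np.pi) / 1e6:.4f} MHz, Q = {Qk:.1f}")
```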
Applications:
Staggered tuning is of most benefit in wideband applications. It was formerly commonly used in television receiver IF amplifiers. However, SAW filters are more likely to be used in that role nowadays. Staggered tuning has advantages in VLSI for radio applications such as wireless LAN. The low spread of component values makes it much easier to implement in integrated circuits than traditional ladder networks.
**The Appropriate Technology Library**
The Appropriate Technology Library:
The Appropriate Technology Library consists of 1,050 books on 29 subject areas of small-scale, do-it-yourself technology. Originally developed by Volunteers in Asia (VIA), it was transferred to Village Earth: The Consortium for Sustainable Village-Based Development in 1993.
The Library was developed to be a low-cost and portable source of appropriate technology information for aid and relief workers around the world. Since its inception, it has been used in dozens of countries around the world.
**Shot noise**
Shot noise:
Shot noise or Poisson noise is a type of noise which can be modeled by a Poisson process. In electronics shot noise originates from the discrete nature of electric charge. Shot noise also occurs in photon counting in optical devices, where shot noise is associated with the particle nature of light.
Origin:
In a statistical experiment such as tossing a fair coin and counting the occurrences of heads and tails, the numbers of heads and tails after many throws will differ by only a tiny percentage, while after only a few throws outcomes with a significant excess of heads over tails or vice versa are common; if an experiment with a few throws is repeated over and over, the outcomes will fluctuate a lot. From the law of large numbers, one can show that the relative fluctuations reduce as the reciprocal square root of the number of throws, a result valid for all statistical fluctuations, including shot noise.
Origin:
Shot noise exists because phenomena such as light and electric current consist of the movement of discrete (also called "quantized") 'packets'. Consider light—a stream of discrete photons—coming out of a laser pointer and hitting a wall to create a visible spot. The fundamental physical processes that govern light emission are such that these photons are emitted from the laser at random times; but the many billions of photons needed to create a spot are so many that the brightness, the number of photons per unit of time, varies only infinitesimally with time. However, if the laser brightness is reduced until only a handful of photons hit the wall every second, the relative fluctuations in number of photons, i.e., brightness, will be significant, just as when tossing a coin a few times. These fluctuations are shot noise.
Origin:
The concept of shot noise was first introduced in 1918 by Walter Schottky, who studied fluctuations of current in vacuum tubes. Shot noise may be dominant when the finite number of particles that carry energy (such as electrons in an electronic circuit or photons in an optical device) is sufficiently small so that uncertainties due to the Poisson distribution, which describes the occurrence of independent random events, are significant. It is important in electronics, telecommunications, optical detection, and fundamental physics.
Origin:
The term can also be used to describe any noise source, even if solely mathematical, of similar origin. For instance, particle simulations may produce a certain amount of "noise", where because of the small number of particles simulated, the simulation exhibits undue statistical fluctuations which don't reflect the real-world system. The magnitude of shot noise increases according to the square root of the expected number of events, such as the electric current or intensity of light. But since the strength of the signal itself increases more rapidly, the relative proportion of shot noise decreases and the signal-to-noise ratio (considering only shot noise) increases anyway. Thus shot noise is most frequently observed with small currents or low light intensities that have been amplified.
Origin:
For large numbers, the Poisson distribution approaches a normal distribution about its mean, and the elementary events (photons, electrons, etc.) are no longer individually observed, typically making shot noise in actual observations indistinguishable from true Gaussian noise. Since the standard deviation of shot noise is equal to the square root of the average number of events N, the signal-to-noise ratio (SNR) is given by

$$\mathrm{SNR} = \frac{N}{\sqrt{N}} = \sqrt{N}.$$
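A minimal simulation of this √N scaling in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
for N in (10, 100, 10000):                 # mean number of events per observation
    counts = rng.poisson(N, size=100_000)  # many repeated observations
    print(N, counts.mean() / counts.std(), np.sqrt(N))  # SNR ~ sqrt(N)
```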
Origin:
Thus when N is very large, the signal-to-noise ratio is very large as well, and any relative fluctuations in N due to other sources are more likely to dominate over shot noise. However, when the other noise source is at a fixed level, such as thermal noise, or grows more slowly than $\sqrt{N}$, increasing N (the DC current or light level, etc.) can lead to dominance of shot noise.
Properties:
Electronic devices Shot noise in electronic circuits consists of random fluctuations of DC current, which is due to electric current being the flow of discrete charges (electrons). Because the electron has such a tiny charge, however, shot noise is of relative insignificance in many (but not all) cases of electrical conduction. For instance 1 ampere of current consists of about 6.24×1018 electrons per second; even though this number will randomly vary by several billion in any given second, such a fluctuation is minuscule compared to the current itself. In addition, shot noise is often less significant as compared with two other noise sources in electronic circuits, flicker noise and Johnson–Nyquist noise. However, shot noise is temperature and frequency independent, in contrast to Johnson–Nyquist noise, which is proportional to temperature, and flicker noise, with the spectral density decreasing with increasing frequency. Therefore, at high frequencies and low temperatures shot noise may become the dominant source of noise.
Properties:
With very small currents and considering shorter time scales (thus wider bandwidths) shot noise can be significant. For instance, a microwave circuit operates on time scales of less than a nanosecond and if we were to have a current of 16 nanoamperes that would amount to only 100 electrons passing every nanosecond. According to Poisson statistics the actual number of electrons in any nanosecond would vary by 10 electrons rms, so that one sixth of the time less than 90 electrons would pass a point and one sixth of the time more than 110 electrons would be counted in a nanosecond. Now with this small current viewed on this time scale, the shot noise amounts to 1/10 of the DC current itself.
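The quoted fractions follow directly from Poisson statistics with a mean of 100 electrons per nanosecond, as this small Python check shows:

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    # P(N <= k) for a Poisson distribution with mean lam
    return sum(lam**i * exp(-lam) / factorial(i) for i in range(k + 1))

lam = 100                          # mean electrons per nanosecond
print(poisson_cdf(89, lam))        # P(N < 90)  ~ 0.15, roughly one sixth
print(1 - poisson_cdf(110, lam))   # P(N > 110) ~ 0.15, roughly one sixth
```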
Properties:
The result by Schottky, based on the assumption that the statistics of the electrons' passage is Poissonian, reads for the spectral noise density at the frequency f:

$$S(f) = 2e|I|$$

where e is the electron charge and I is the average current of the electron stream. The noise spectral power is frequency independent, which means the noise is white. This can be combined with the Landauer formula, which relates the average current with the transmission eigenvalues Tn of the contact through which the current is measured (n labels transport channels). In the simplest case, these transmission eigenvalues can be taken to be energy independent, and so the Landauer formula is

$$I = \frac{e^2}{\pi\hbar} V \sum_n T_n$$

where V is the applied voltage. This provides for

$$S = \frac{2e^3}{\pi\hbar} |V| \sum_n T_n,$$

commonly referred to as the Poisson value of shot noise, $S_P$. This is a classical result in the sense that it does not take into account that electrons obey Fermi–Dirac statistics. The correct result takes into account the quantum statistics of electrons and reads (at zero temperature)

$$S = \frac{2e^3}{\pi\hbar} |V| \sum_n T_n (1 - T_n).$$
Properties:
It was obtained in the 1990s by Khlus, Lesovik (independently the single-channel case), and Büttiker (multi-channel case). This noise is white and is always suppressed with respect to the Poisson value. The degree of suppression, F=S/SP , is known as the Fano factor. Noises produced by different transport channels are independent. Fully open ( Tn=1 ) and fully closed ( Tn=0 ) channels produce no noise, since there are no irregularities in the electron stream.
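As a small illustration of the Fano factor implied by the formulas above, F = Σ Tn(1−Tn) / Σ Tn (the transmission values below are made up for the example):

```python
def fano_factor(transmissions):
    # F = S / S_P for energy-independent transmission eigenvalues T_n
    return sum(t * (1 - t) for t in transmissions) / sum(transmissions)

print(fano_factor([0.01, 0.02]))     # tunnel-junction-like channels: F ~ 1
print(fano_factor([1.0, 1.0, 0.5]))  # QPC between plateaus: only the
                                     # partially open channel contributes noise
```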
Properties:
At finite temperature, a closed expression for noise can be written as well. It interpolates between shot noise (zero temperature) and Nyquist-Johnson noise (high temperature).
Examples Tunnel junction is characterized by low transmission in all transport channels, therefore the electron flow is Poissonian, and the Fano factor equals one.
Quantum point contact is characterized by an ideal transmission in all open channels, therefore it does not produce any noise, and the Fano factor equals zero. The exception is the step between plateaus, when one of the channels is partially open and produces noise.
A metallic diffusive wire has a Fano factor of 1/3 regardless of the geometry and the details of the material.
In 2DEG exhibiting fractional quantum Hall effect electric current is carried by quasiparticles moving at the sample edge whose charge is a rational fraction of the electron charge. The first direct measurement of their charge was through the shot noise in the current.
Properties:
Effects of interactions While this is the result when the electrons contributing to the current occur completely randomly, unaffected by each other, there are important cases in which these natural fluctuations are largely suppressed due to a charge build up. Take the previous example in which an average of 100 electrons go from point A to point B every nanosecond. During the first half of a nanosecond we would expect 50 electrons to arrive at point B on the average, but in a particular half nanosecond there might well be 60 electrons which arrive there. This will create a more negative electric charge at point B than average, and that extra charge will tend to repel the further flow of electrons from leaving point A during the remaining half nanosecond. Thus the net current integrated over a nanosecond will tend more to stay near its average value of 100 electrons rather than exhibiting the expected fluctuations (10 electrons rms) we calculated. This is the case in ordinary metallic wires and in metal film resistors, where shot noise is almost completely cancelled due to this anti-correlation between the motion of individual electrons, acting on each other through the coulomb force.
Properties:
However this reduction in shot noise does not apply when the current results from random events at a potential barrier which all the electrons must overcome due to a random excitation, such as by thermal activation. This is the situation in p-n junctions, for instance. A semiconductor diode is thus commonly used as a noise source by passing a particular DC current through it.
Properties:
In other situations interactions can lead to an enhancement of shot noise, which is the result of super-Poissonian statistics. For example, in a resonant tunneling diode the interplay of electrostatic interaction and of the density of states in the quantum well leads to a strong enhancement of shot noise when the device is biased in the negative differential resistance region of the current-voltage characteristics. Shot noise is distinct from voltage and current fluctuations expected in thermal equilibrium; these occur without any applied DC voltage or current flowing. These fluctuations are known as Johnson–Nyquist noise or thermal noise and increase in proportion to the Kelvin temperature of any resistive component. However, both are instances of white noise and thus cannot be distinguished simply by observing them, even though their origins are quite dissimilar.
Properties:
Since shot noise is a Poisson process due to the finite charge of an electron, one can compute the root mean square current fluctuations as being of a magnitude

$$\sigma_i = \sqrt{2qI\Delta f}$$

where q is the elementary charge of an electron, Δf is the single-sided bandwidth in hertz over which the noise is considered, and I is the DC current flowing.
For a current of 100 mA, measuring the current noise over a bandwidth of 1 Hz, we obtain 0.18 nA.
If this noise current is fed through a resistor, a noise voltage of $\sigma_v = \sigma_i R$ would be generated. Coupling this noise through a capacitor, one could supply a noise power of

$$P = \tfrac{1}{2} q I \Delta f \, R$$
to a matched load.
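A quick numeric check of these formulas in Python (the 50 Ω load is an assumed example value):

```python
from math import sqrt

q = 1.602e-19               # elementary charge, C
I, df, R = 0.1, 1.0, 50.0   # 100 mA, 1 Hz bandwidth, 50-ohm load (assumed)

sigma_i = sqrt(2 * q * I * df)
print(sigma_i)               # ~1.8e-10 A, i.e. the 0.18 nA quoted above
print(0.5 * q * I * df * R)  # noise power delivered to a matched load, W
```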
Properties:
Detectors The flux signal that is incident on a detector can be expressed in units of photons by dividing the incident power by the energy per photon, $hc/\lambda$, where c is the speed of light, h is the Planck constant, and λ is the wavelength. Following Poisson statistics, the photon noise is calculated as the square root of the signal. The SNR for a CCD camera can be calculated from the following equation:

$$\mathrm{SNR} = \frac{I \cdot QE \cdot t}{\sqrt{I \cdot QE \cdot t + N_d \cdot t + N_r^2}}$$

where: I = photon flux (photons/pixel/second), QE = quantum efficiency, t = integration time (seconds), Nd = dark current (electrons/pixel/sec), Nr = read noise (electrons).
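In code, with illustrative values only:

```python
from math import sqrt

def ccd_snr(I, QE, t, Nd, Nr):
    # Shot-noise-limited CCD signal-to-noise ratio:
    # signal electrons over the root sum of shot, dark and read noise.
    signal = I * QE * t
    return signal / sqrt(signal + Nd * t + Nr**2)

print(ccd_snr(I=1000, QE=0.6, t=1.0, Nd=5, Nr=10))  # ~22.6 for these values
```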
Properties:
Optics In optics, shot noise describes the fluctuations of the number of photons detected (or simply counted in the abstract) due to their occurrence independent of each other. This is therefore another consequence of discretization, in this case of the energy in the electromagnetic field in terms of photons. In the case of photon detection, the relevant process is the random conversion of photons into photo-electrons for instance, thus leading to a larger effective shot noise level when using a detector with a quantum efficiency below unity. Only in an exotic squeezed coherent state can the number of photons measured per unit time have fluctuations smaller than the square root of the expected number of photons counted in that period of time. Of course there are other mechanisms of noise in optical signals which often dwarf the contribution of shot noise. When these are absent, however, optical detection is said to be "photon noise limited" as only the shot noise (also known as "quantum noise" or "photon noise" in this context) remains.
Properties:
Shot noise is easily observable in the case of photomultipliers and avalanche photodiodes used in the Geiger mode, where individual photon detections are observed. However, the same noise source is present with higher light intensities measured by any photodetector, and is directly measurable when it dominates the noise of the subsequent electronic amplifier. Just as with other forms of shot noise, the fluctuations in a photocurrent due to shot noise scale as the square root of the average intensity:

$$(\Delta I)^2 \stackrel{\mathrm{def}}{=} \langle (I - \langle I \rangle)^2 \rangle \propto I.$$
Properties:
The shot noise of a coherent optical beam (having no other noise sources) is a fundamental physical phenomenon, reflecting quantum fluctuations in the electromagnetic field. In optical homodyne detection, the shot noise in the photodetector can be attributed to either the zero point fluctuations of the quantised electromagnetic field, or to the discrete nature of the photon absorption process. However, shot noise itself is not a distinctive feature of quantised field and can also be explained through semiclassical theory. What the semiclassical theory does not predict, however, is the squeezing of shot noise. Shot noise also sets a lower bound on the noise introduced by quantum amplifiers which preserve the phase of an optical signal.
**Piece goods**
Piece goods:
Piece goods were textile materials sold in cut pieces as per the buyer's specification. The piece goods were either cut from a fabric roll or produced in a certain length, and were also called yard goods. Various textiles such as cotton, wool, and silk were traded in terms of piece goods, with prices determined by the fabric quality. John Forbes Watson classified Indian textiles into two types: piece goods and loom goods. Piece goods are materials that must be cut and sewn before they can be used, whereas loom goods, such as scarves and saris, are ready to use after leaving the loom.
Production:
Many Indian clothes were ready to wear after leaving the loom. These were simple pieces of cloth of dimensions suited to their purposes; lungi, dhoti, and sari are a few specific examples of draped clothes. Other cloths were produced according to specified dimensions: Longcloth made on the Coromandel Coast was 37 to 40 yards in length.
Qutni at Damascus was woven to market-specified dimensions; for example, a length of 6.13 meters and a width of 0.7 meters was for Syria, Baghdad and Constantinople, Smyrna, and Persia, but for Egypt the length was slightly more, i.e., 6.83 meters, with the same width.
Chautar, an old muslin, has been recorded with specific dimensions, i.e., a length of 12.44 meters and a width of 77.75 centimeters. Chautar was compared with sansuo, a three-shuttle cloth of a fine cotton variety produced at Songjiang.
Tasar, a silk and cotton cloth from Bengal used for lining in quilts, was produced with a length of 14 yards and a width of 1.5 yards.
Alachas were 5 yards long.
A type of Gulbadan (silk cloth), Sohren Gulbadan, was 36 feet long and 1 foot 4 inches wide.
Salampore was 16X1 yards.
Sussi (cloth), a striped fabric, was 10 to 20 yards long and one yard wide.
Khasas had dimensions of 20 x 1 or 1.5 yards. The number of threads in the warp direction was 1,400–2,800, with a weight of 595 grams per piece (with 2,800 threads).
Mulboos khas, special muslins reserved for the royal aristocracy, measured 10 yards by 1 yard when produced at half-length. They had 1,800–1,900 threads in the warp.
Man-cheti was a "ginger yellow" cotton cloth made in India in the 14th century, produced in lengths of fifty feet and a width of four feet.
Punjum, a kind of longcloth from the Northern Circars, was produced in a variety of thread counts. As per John Forbes Watson, a common piece of Punjum weighs 14 pounds and is 18 yards (36 cubits) long. Its width ranges from 38 to 44 inches.
Ghalta had a standard length of 9 yards and a width of 26 inches. Kente cloth from Ghana, which dates back to the ninth century, consists of narrowly woven strips that are sewn together.
Trading practices:
Textile piece goods have been sold globally in many varieties, including grey, bleached, dyed, and printed, and the practice is still followed by many buyers. Knitted fabric is also traded by weight.
History Historically, drapers and cloth merchants traded in piece goods. India was famous for its handloom cotton piece goods; many fabrics of coarse to fine cotton qualities, such as baftas, calicos, and muslins, were exported during the Mughal era.
Trading practices:
There are records stating that in 1664 the East India Company imported 273,746 pieces of cotton cloth from India (approximately 4.2 million sq. meters). This increasing trend finally peaked in 1684 at 1,760,315 pieces (or 26.9 million sq. meters). Woollen and silk piece goods were also traded; woollen piece goods, for example shawls, were exported from Kashmir. These exports continued until British cloth emerged in the 19th century. Substantial quantities of various piece goods were exported from Madras in the 18th and 19th centuries. Punjum cloths accounted for a sizable portion of Madras' exports in the 18th century; Punjum, Salampores, Palampores, Chintz, Book muslin, Longcloth, and varieties of Gingham were among the piece goods exported to America from Madras. During the 1920s, the Philippines was the largest market for cotton piece goods exported by the United States of America.
Trading practices:
Currently, several textile piece goods are still traded with different HS codes to differentiate the weave, structure, and composition. For example, HS code 51123030 stands for 100 percent wool, and 58109100 is for woven dyed cotton with embroidery. The Harmonized System, or 'HS,' is an identification code developed by the World Customs Organization (WCO).
**Steinmetz solid**
Steinmetz solid:
In geometry, a Steinmetz solid is the solid body obtained as the intersection of two or three cylinders of equal radius at right angles. Each of the curves of the intersection of two cylinders is an ellipse.
The intersection of two cylinders is called a bicylinder. Topologically, it is equivalent to a square hosohedron. The intersection of three cylinders is called a tricylinder. A bisected bicylinder is called a vault, and a cloister vault in architecture has this shape.
Steinmetz solids are named after mathematician Charles Proteus Steinmetz, who solved the problem of determining the volume of the intersection. However, the same problem had been solved earlier, by Archimedes in the ancient Greek world, Zu Chongzhi in ancient China, and Piero della Francesca in the early Italian Renaissance.
Bicylinder:
A bicylinder generated by two cylinders with radius r has the volume $\tfrac{16}{3}r^3$ and the surface area $16r^2$. The upper half of a bicylinder is the square case of a domical vault, a dome-shaped solid based on any convex polygon whose cross-sections are similar copies of the polygon, and analogous formulas calculating the volume and surface area of a domical vault as a rational multiple of the volume and surface area of its enclosing prism hold more generally. In China, the bicylinder is known as Mou he fang gai, literally "two square umbrella"; it was described by the third-century mathematician Liu Hui.
Bicylinder:
Proof of the volume formula For deriving the volume formula it is convenient to use the common idea for calculating the volume of a sphere: collecting thin cylindric slices. In this case the thin slices are square cuboids (see diagram). This leads to $V = \tfrac{16}{3}r^3$. It is well known that the relations of the volumes of a right circular cone, one half of a sphere and a right circular cylinder with the same radii and heights are 1 : 2 : 3. For one half of a bicylinder a similar statement is true: the relations of the volumes of the inscribed square pyramid ($a = 2r$, $h = r$, $V = \tfrac{4}{3}r^3$), the half bicylinder ($V = \tfrac{8}{3}r^3$) and the surrounding squared cuboid ($a = 2r$, $h = r$, $V = 4r^3$) are 1 : 2 : 3.
Bicylinder:
Using multivariable calculus Consider the equations of the cylinders:

$$x^2 + z^2 = r^2$$
$$x^2 + y^2 = r^2$$

The volume is given by

$$V = \iiint_V \mathrm{d}z\,\mathrm{d}y\,\mathrm{d}x$$

with the limits of integration

$$-\sqrt{r^2 - x^2} \leqslant z \leqslant \sqrt{r^2 - x^2}$$
$$-\sqrt{r^2 - x^2} \leqslant y \leqslant \sqrt{r^2 - x^2}$$
$$-r \leqslant x \leqslant r$$

Substituting, each cross-section at fixed x is a square of side $2\sqrt{r^2 - x^2}$, so

$$V = \int_{-r}^{r} \left(2\sqrt{r^2 - x^2}\right)^2 \mathrm{d}x = \frac{16r^3}{3}.$$

Proof of the area formula The surface area consists of two red and two blue cylindrical biangles. One red biangle is cut into halves by the y-z-plane and developed into the plane such that the half circle (the intersection with the y-z-plane) is developed onto the positive ξ-axis and the development of the biangle is bounded upwards by the sine arc $r\sin(\xi/r)$, $0 \le \xi \le \pi r$. Hence the area of this development is

$$\int_0^{\pi r} r\sin\!\left(\frac{\xi}{r}\right) \mathrm{d}\xi = 2r^2$$

and the total surface area is $8 \times 2r^2 = 16r^2$.

Alternate proof of the volume formula Deriving the volume of a bicylinder (white) can be done by packing it in a cube (red). A plane (parallel with the cylinders' axes) intersecting the bicylinder forms a square and its intersection with the cube is a larger square. The difference between the areas of the two squares is the same as 4 small squares (blue). As the plane moves through the solids, these blue squares describe square pyramids with isosceles faces in the corners of the cube; the pyramids have their apexes at the midpoints of the four cube edges. Moving the plane through the whole bicylinder describes a total of 8 pyramids.
Bicylinder:
The volume of the cube (red) minus the volume of the eight pyramids (blue) is the volume of the bicylinder (white). The volume of the 8 pyramids is $8 \times \tfrac{1}{3} r^2 \times r = \tfrac{8}{3}r^3$, and then we can calculate that the bicylinder volume is $(2r)^3 - \tfrac{8}{3}r^3 = \tfrac{16}{3}r^3$.
Tricylinder:
The intersection of three cylinders with perpendicularly intersecting axes generates a surface of a solid with vertices where 3 edges meet and vertices where 4 edges meet. The set of vertices can be considered as the edges of a rhombic dodecahedron. The key to the determination of volume and surface area is the observation that the tricylinder can be resampled by the cube with the vertices where 3 edges meet (see diagram) and 6 curved pyramids (the triangles are parts of cylinder surfaces). The volume and the surface area of the curved triangles can be determined by similar considerations as for the bicylinder above. The volume of a tricylinder is

$$V = 8(2 - \sqrt{2})r^3$$

and the surface area is

$$A = 24(2 - \sqrt{2})r^2.$$
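Both closed forms are easy to sanity-check numerically. A Monte Carlo sketch in Python (with r = 1 and an arbitrarily chosen sample count):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
x, y, z = rng.uniform(-1, 1, size=(3, n))  # uniform points in the cube [-1,1]^3

in_bi = (x**2 + z**2 <= 1) & (x**2 + y**2 <= 1)  # two perpendicular cylinders
in_tri = in_bi & (y**2 + z**2 <= 1)              # third cylinder added

cube_volume = 8.0
print(cube_volume * in_bi.mean(), 16 / 3)                 # ~5.333
print(cube_volume * in_tri.mean(), 8 * (2 - np.sqrt(2)))  # ~4.686
```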
More cylinders:
With four cylinders, with axes connecting the vertices of a tetrahedron to the corresponding points on the other side of the solid, the volume is

$$V_4 = 12\left(2\sqrt{2} - \sqrt{6}\right)r^3.$$

With six cylinders, with axes parallel to the diagonals of the faces of a cube, the volume is

$$V_6 = \frac{16}{3}\left(3 + 2\sqrt{3} - 4\sqrt{2}\right)r^3.$$
**Drug of last resort**
Drug of last resort:
A drug of last resort (DoLR), also known as a heroic dose, is a pharmaceutical drug which is tried after all other drug options have failed to produce an adequate response in the patient. Drug resistance, such as antimicrobial resistance or antineoplastic resistance, may make the first-line drug ineffective, especially in case of multidrug-resistant pathogens and tumors. Such an alternative may be outside of extant regulatory requirements or medical best practices, in which case it may be viewed as salvage therapy.
Purposes:
The use of a drug of last resort may be based on agreement among members of a patient's care network, including physicians and healthcare professionals across multiple specialties, or on a patient's desire to pursue a particular course of treatment and a practitioner's willingness to administer that course. Certain situations such as severe bacterial related sepsis or septic shock can more commonly lead to last resorts.
Purposes:
Therapies considered to be drugs of last resort may at times be used earlier, in the event that an agent is likely to show the most immediate dose-response related efficacy in time-critical, high-mortality situations. Many of the drugs considered last resorts fall into one or more of the categories of antibiotics, antivirals, and chemotherapy agents. These agents often exhibit what are considered to be among the most efficient dose-response related effects, or are drugs to which few or no resistant strains are known.
Purposes:
With regard to antibiotics, antivirals, and other agents indicated for treatment of infectious pathological disease, drugs of last resort are commonly withheld from administration until after the trial and failure of more commonly used treatment options, to prevent the development of drug resistance. One of the most commonly known examples of both antimicrobial resistance and the relationship to the classification of a drug of last resort is the emergence of methicillin-resistant Staphylococcus aureus (MRSA) (sometimes also referred to as multiple-drug resistant S. aureus due to resistance to non-penicillin antibiotics that some strains of S. aureus have shown to exhibit). In cases presenting with suspected S. aureus, many public health institutions (including the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) in the United States) suggest treating first with empirical therapies for S. aureus, with an emphasis on evaluating the response to initial treatment and on laboratory diagnostic techniques to isolate cases of drug resistance.
Purposes:
Due to the possibility of severe or fatal consequences of resistant strains, initial treatment often includes concomitant administration of multiple antimicrobial agents that are not known to show cross-resistance, so as to reduce the possibility of a resistant strain remaining inadequately treated by a single agent during the evaluation of drug response. Once a specific resistance profile has been isolated via clinical laboratory findings, treatment is often modified as indicated.
Purposes:
Vancomycin has long been considered a drug of last resort, due to its efficacy in treating multiple drug-resistant infectious agents and the requirement for intravenous administration. Recently, resistance to even vancomycin has been shown in some strains of S. aureus (sometimes referred to as vancomycin-resistant S. aureus (VRSA) or vancomycin-intermediate S. aureus (VISA)), often coinciding with methicillin/penicillin resistance, prompting the inclusion of newer antibiotics (such as linezolid) that have shown efficacy in highly drug-resistant strains. There are also strains of enterococci that have developed resistance to vancomycin, referred to as vancomycin-resistant enterococcus (VRE).
Purposes:
Agents classified as fourth-line (or greater) treatments or experimental therapies could be considered by default to be drugs of last resort due to their low placement in the treatment hierarchy. Such placement may result from a multitude of considerations, including greater efficacy of other agents, socioeconomic considerations, availability issues, unpleasant side effects or similar issues relating to patient tolerance. Some experimental therapies might also be called drugs of last resort when administered following the failure of all other currently accepted treatments.
Purposes:
Although most of the notable drugs of last resort are antibiotics or antivirals, other drugs are sometimes considered drugs of last resort, such as cisapride.
Examples:
Antimicrobials Aminoglycosides — their use is extremely restricted due to risk of hearing loss and kidney damage; Amphotericin B — used for life-threatening fungal infections and primary amoebic meningoencephalitis; its side effects are often severe or potentially fatal; Carbapenems (such as imipenem/cilastatin) — used as a drug of last resort for a variety of different bacterial infections; Ceftobiprole and ceftaroline — fifth-generation cephalosporins active against methicillin-resistant Staphylococcus aureus (MRSA); use is limited to prevent development of drug resistance; Cefiderocol — a cephalosporin used to treat complicated urinary tract infections (cUTI) caused by multi-drug resistant Gram-negative bacteria in patients with limited or no alternative options; Chloramphenicol — formerly first-line therapy for Rocky Mountain spotted fever (until doxycycline became available). Also first-line therapy (used topically) for bacterial conjunctivitis, and systemically for meningitis when allergies to penicillin or cephalosporin exist. Unacceptably high risk of irreversible, fatal aplastic anemia and gray baby syndrome causes intravenous chloramphenicol to be a drug of last resort; Colistin — used against certain life-threatening infections, such as those caused by Pseudomonas; carries risk of kidney and nerve damage; Linezolid — use is limited due to high cost and risk of vision loss or myopathy (due to mitochondrial damage); Tigecycline — used to kill Acinetobacter and Legionella species; this drug is limited by high cost and risk of liver injury.
Examples:
Other drugs Alosetron — used in the management of severe chronic diarrhea-predominant irritable bowel syndrome (IBS-D) in females not responsive to conventional therapy. Its use is restricted due to serious gastrointestinal adverse reactions, e.g. ischemic colitis and complications of constipation; Cisapride — used for severe gastroesophageal reflux disease (GERD); carries risk of heart arrhythmias; Clomethiazole — a sedative/hypnotic agent used in the treatment of alcohol withdrawal when benzodiazepines are not effective; Clozapine — used in treatment-resistant schizophrenia not responsive to at least two different antipsychotics; the main reason for such restriction is agranulocytosis and other severe side effects including seizures and myocarditis; Felbamate — an anticonvulsant used in refractory epilepsy; use is associated with an increased risk of aplastic anemia and liver failure; Isotretinoin — when all topical treatments or antibiotics against acne have failed, many dermatologists resort to isotretinoin, an oral treatment that permanently reduces the sebum production of the skin and is often a permanent solution against acne. It can cause severe nosebleeds, causes birth defects when taken during pregnancy, is said to cause depression and hair loss, and can permanently dry out the skin all over the body; it is therefore not the first treatment; Levosimendan — used in acutely decompensated severe chronic heart failure in situations where conventional therapy is not sufficient; Minoxidil (oral) — used for hypertension; topical minoxidil, however, is the first-line drug for hair loss; Monoamine oxidase inhibitors — due to potentially lethal dietary and drug interactions which may trigger hypertensive crisis and/or serotonin syndrome, they are generally used only when other classes of antidepressants (e.g., SSRIs or SNRIs) do not work; Thalidomide — withdrawn in 1961 owing to the widespread incidence of severe birth defects (phocomelia or tetraamelia) after prenatal use by pregnant women; the US Food and Drug Administration approved thalidomide for erythema nodosum leprosum (ENL) in 1998, and in 2008 for new cases of multiple myeloma (administered with dexamethasone). A large "off-label" business in thalidomide began for rare cancers even while it was only FDA-approved for erythema nodosum leprosum; Tolcapone — used in patients with Parkinson's disease who are not appropriate candidates for other adjunctive therapies. Use is restricted due to hepatotoxicity; Vigabatrin — used only in extreme treatment-resistant epilepsy due to the risk of permanent vision loss.
**Agda (programming language)**
Agda (programming language):
Agda is a dependently typed functional programming language originally developed by Ulf Norell at Chalmers University of Technology with implementation described in his PhD thesis. The original Agda system was developed at Chalmers by Catarina Coquand in 1999. The current version, originally known as Agda 2, is a full rewrite, which should be considered a new language that shares a name and tradition.
Agda (programming language):
Agda is also a proof assistant based on the propositions-as-types paradigm, but unlike Coq, has no separate tactics language, and proofs are written in a functional programming style. The language has ordinary programming constructs such as data types, pattern matching, records, let expressions and modules, and a Haskell-like syntax. The system has Emacs, Atom, and VS Code interfaces but can also be run in batch mode from the command line.
Agda (programming language):
Agda is based on Zhaohui Luo's unified theory of dependent types (UTT), a type theory similar to Martin-Löf type theory.
Agda is named after the Swedish song "Hönan Agda", written by Cornelis Vreeswijk, which is about a hen named Agda. This alludes to the name of the theorem prover Coq, which was named after Thierry Coquand, Catarina Coquand's husband.
Features:
Inductive types The main way of defining data types in Agda is via inductive data types, which are similar to algebraic data types in non-dependently typed programming languages.
Here is a definition of Peano numbers in Agda (see the listing below): basically, it means that there are two ways to construct a value of type ℕ, representing a natural number. To begin, zero is a natural number, and if n is a natural number, then suc n, standing for the successor of n, is a natural number too.
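The standard definition matching this description (shown here as a reconstruction, since the article's own listing is not reproduced) is:

    data ℕ : Set where
      zero : ℕ
      suc  : ℕ → ℕ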
Features:
Here is a definition of the "less than or equal" relation between two natural numbers (see the listing below): the first constructor, z≤n, corresponds to the axiom that zero is less than or equal to any natural number. The second constructor, s≤s, corresponds to an inference rule, allowing one to turn a proof of n ≤ m into a proof of suc n ≤ suc m. So the value s≤s {zero} {suc zero} (z≤n {suc zero}) is a proof that one (the successor of zero) is less than or equal to two (the successor of one). The parameters provided in curly brackets may be omitted if they can be inferred.
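A definition matching this description, assuming the ℕ defined in the previous listing (again a reconstruction rather than the article's exact code):

    data _≤_ : ℕ → ℕ → Set where
      z≤n : {n : ℕ} → zero ≤ n
      s≤s : {n m : ℕ} → n ≤ m → suc n ≤ suc m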
Features:
Dependently typed pattern matching In core type theory, induction and recursion principles are used to prove theorems about inductive types. In Agda, dependently typed pattern matching is used instead. For example, natural number addition can be defined as in the listing below. This way of writing recursive functions/inductive proofs is more natural than applying raw induction principles. In Agda, dependently typed pattern matching is a primitive of the language; the core language lacks the induction/recursion principles that pattern matching translates to.
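The standard definition matching this description, reusing ℕ from above (a reconstruction):

    _+_ : ℕ → ℕ → ℕ
    zero  + m = m
    suc n + m = suc (n + m)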
Features:
Metavariables One of the distinctive features of Agda, when compared with other similar systems such as Coq, is heavy reliance on metavariables for program construction. For example, one can write functions as in the sketch below, where ? is a metavariable. When interacting with the system in emacs mode, it will show the user the expected type and allow them to refine the metavariable, i.e., to replace it with more detailed code. This feature allows incremental program construction in a way similar to tactics-based proof assistants such as Coq.
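A minimal sketch of such a function, using the ℕ defined earlier (the name add is illustrative, not taken from the source):

    add : ℕ → ℕ → ℕ
    add x y = ?

Loading this in the emacs mode turns the ? into an interactive goal displaying the expected type ℕ, which can then be refined step by step.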
Features:
Proof automation Programming in pure type theory involves a lot of tedious and repetitive proofs. Although Agda has no separate tactics language, it is possible to program useful tactics within Agda itself. Typically, this works by writing an Agda function that optionally returns a proof of some property of interest. A tactic is then constructed by running this function at type-checking time, using auxiliary definitions like those sketched below. Given a function check-even : (n : ℕ) → Maybe (Even n) that takes a number and optionally returns a proof of its evenness, a tactic can then be constructed as shown in the sketch. The actual proof of each lemma will be automatically constructed at type-checking time. If the tactic fails, type-checking will fail.
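The following is a self-contained sketch of this pattern; the auxiliary names (Even, Maybe, IsJust, from-just) are illustrative choices, not necessarily the article's exact listing:

    data ℕ : Set where
      zero : ℕ
      suc  : ℕ → ℕ
    {-# BUILTIN NATURAL ℕ #-}

    data Even : ℕ → Set where
      even-zero : Even zero
      even-suc  : {n : ℕ} → Even n → Even (suc (suc n))

    data Maybe (A : Set) : Set where
      nothing : Maybe A
      just    : A → Maybe A

    -- Decide evenness, optionally returning a proof.
    check-even : (n : ℕ) → Maybe (Even n)
    check-even zero          = just even-zero
    check-even (suc zero)    = nothing
    check-even (suc (suc n)) with check-even n
    ... | just p  = just (even-suc p)
    ... | nothing = nothing

    -- Evidence that a Maybe value is a 'just'; uninhabited for 'nothing'.
    data IsJust {A : Set} : Maybe A → Set where
      auto : {x : A} → IsJust (just x)

    -- Extract the wrapped value; the 'nothing' case is impossible.
    from-just : {A : Set} (x : Maybe A) → IsJust x → A
    from-just (just x) auto = x

    -- check-even runs during type checking, so the proof is found
    -- automatically; if the number were odd, the IsJust argument
    -- would fail to type-check.
    lemma : Even 10
    lemma = from-just (check-even 10) auto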
Features:
Additionally, to write more complex tactics, Agda has support for automation via reflection. The reflection mechanism allows one to quote program fragments into – or unquote them from – the abstract syntax tree. The way reflection is used is similar to the way Template Haskell works. Another mechanism for proof automation is the proof search action in emacs mode. It enumerates possible proof terms (limited to 5 seconds), and if one of the terms fits the specification, it will be put in the metavariable where the action is invoked. This action accepts hints, e.g., which theorems and from which modules can be used, whether the action can use pattern matching, etc.
Features:
Termination checking Agda is a total language, i.e., each program in it must terminate and all possible patterns must be matched. Without this feature, the logic behind the language becomes inconsistent, and it becomes possible to prove arbitrary statements. For termination checking, Agda uses the approach of the Foetus termination checker.
Standard library Agda has an extensive de facto standard library, which includes many useful definitions and theorems about basic data structures, such as natural numbers, lists, and vectors. The library is in beta, and is under active development.
Unicode One of the more notable features of Agda is a heavy reliance on Unicode in program source code. The standard emacs mode uses shortcuts for input, such as \Sigma for Σ.
Backends There are two compiler backends, MAlonzo for Haskell and one for JavaScript.
**Thermization**
Thermization:
Thermization, also spelled thermisation, is a method of sanitizing raw milk with low heat. "Thermization is a generic description of a range of subpasteurization heat treatments (57 to 68°C × 10 to 20 s) that markedly reduce the number of spoilage bacteria in milk with minimal heat damage." The process is not used on other food products, and is similar to pasteurization but uses lower temperatures, allowing the milk product to retain more of its original taste. In Europe, there is a distinction between cheeses made of thermized milk and raw-milk cheeses. However, the United States' Food and Drug Administration (FDA) places the same regulations on all unpasteurized cheeses. As a result, cheeses from thermized milk must be aged for 60 days or more before being sold in the United States, the same restriction placed on raw-milk cheeses by the FDA. Thermization involves heating milk at temperatures of around 145–149 °F (63–65 °C) for 15 seconds, while pasteurization involves heating milk at 160 °F (71 °C) for 15 seconds or at 145 °F (63 °C) for 30 minutes. Thermization is used to extend the keeping quality of raw milk (the length of time that milk is suitable for consumption) when it cannot be immediately used in other products, such as cheese. Thermization can also be used to extend the storage life of fermented milk products by inactivating microorganisms in the product. Thermization inactivates psychrotrophic bacteria in milk and allows the milk to be stored below 8 °C (46 °F) for three days, or at 0–1 °C (32–34 °F) for seven days. Later, the milk may be given a stronger heat treatment to be preserved longer. Cooling thermized milk before reheating is necessary to delay or prevent the outgrowth of bacterial spores. When the milk is first heated, spores can begin to germinate, but their growth can be halted or delayed when the milk is refrigerated, depending on the microorganisms' growth requirements. Germinated spores are sensitive to subsequent heating; however, since germination is not a homogeneous process, not all spores will germinate or be inactivated by subsequent heating.
**Sony FE 16-35mm F2.8 GM**
Sony FE 16-35mm F2.8 GM:
The Sony FE 16-35mm F2.8 GM is a premium constant maximum aperture wide-angle full-frame (FE) zoom lens for the Sony E-mount, announced by Sony on May 17, 2017. The lens was scheduled for release on August 31, 2017. This lens is part of Sony's professional GM zoom lens line, comprising the FE 12-24mm F2.8, FE 16-35mm F2.8, FE 24-70mm F2.8, and FE 70-200mm F2.8. Though designed for Sony's full frame E-mount cameras, the lens can be used on Sony's APS-C E-mount camera bodies, with an equivalent full-frame field-of-view of 24–52.5mm.
Build quality:
The lens has a weather-resistant matte-black plastic exterior with a pair of rubber focus and zoom rings. The barrel of the lens telescopes outward from the main lens body as it is zoomed in from 16mm to 35mm. The lens does not feature image stabilization.
**Supplementary weaving**
Supplementary weaving:
Supplementary weaving is a decorative technique in which additional threads are woven into a textile to create an ornamental pattern in addition to the ground pattern. The supplementary weave can be of the warp or of the weft. Supplementary weave is commonly used in many of the textiles of Southeast Asia, such as in Balinese textiles, the textiles of Sumba and the songket of Sumatra, Malaysia and Brunei.
Supplementary warp weaving:
An additional set of threads is incorporated into the warp to create the design.
Supplementary weft weaving:
An extra set of threads is woven into the weft between two regular weft threads to create an ornamental pattern in addition to the ground weave. Songket textiles are an example of supplementary weft weaving, in which metallic threads are used to form the pattern.
History:
Evidence from certain important textiles that display ancient iconography and are significant in ritual suggests that supplementary weft patterning techniques existed before the period of Indian influence in Southeast Asia. Nevertheless, there is no doubt that the earliest weaving decoration in the region was predominantly warp oriented. However, a fundamental shift from warp to weft decoration seems to have occurred throughout many parts of Southeast Asia during the period of Indian influence. The development of weft ornamentation is evident in woven patterns found throughout Indianized areas. In Cambodia during the Angkor period and in Thailand from the 11th to the 14th century, carved statues and sculptures record figures wearing textiles with stripes running down the torso.
**Ancestry-informative marker**
Ancestry-informative marker:
In population genetics, an ancestry-informative marker (AIM) is a single-nucleotide polymorphism that exhibits substantially different frequencies between different populations. A set of many AIMs can be used to estimate the proportion of ancestry of an individual derived from each population.
Ancestry-informative marker:
A single-nucleotide polymorphism is a modification of a single nucleotide base within a DNA sequence. There are an estimated 15 million SNP (single-nucleotide polymorphism) sites (out of roughly 3 billion base pairs, or about 0.4%) from among which AIMs may potentially be selected. The SNPs that relate to ancestry are often traced to the Y chromosome and mitochondrial DNA because both of these areas are inherited from one parent, avoiding the complexities that come with parental gene recombination. SNP mutations are rare, so sequences with SNPs tend to be passed down through generations rather than altered each generation. However, because any given SNP is relatively common in a population, analysts must examine groups of SNPs (otherwise known as AIMs) to determine someone's ancestry. Using statistical methods such as apparent error rate and Improved Bayesian Estimate, the set of SNPs with the highest accuracy for predicting a specific ancestry can be found. Examining a suite of these markers more or less evenly spaced across the genome is also a cost-effective way to discover novel genes underlying complex diseases, in a technique called admixture mapping or mapping by admixture linkage disequilibrium.
Ancestry-informative marker:
As one example, the Duffy Null allele (FY*0) has a frequency of almost 100% in Sub-Saharan Africans, but occurs very infrequently in populations outside of this region. A person having this allele is thus more likely to have Sub-Saharan African ancestors. North and South Han Chinese ancestry can be distinguished unambiguously using a set of 140 AIMs. Collections of AIMs have been developed that can estimate the geographical origins of ancestors from within Europe. Following the development of ancient DNA databases, ancient ancestry-informative markers (aAIMs) were similarly defined as single-nucleotide polymorphisms that exhibit substantially different frequencies between different ancient populations. A set of aAIMs can be used to identify the ancestry of ancient populations and eventually quantify the genetic similarity to modern-day individuals.
Discovery and development:
The discovery of ancestry-informative markers was made possible by the development of next generation sequencing, or NGS. NGS enables the study of genetic markers by isolating specific gene sequences. One such method for sequence extraction is the use of restriction enzymes, specifically endonucleases, which modify the DNA sequence. Such an enzyme can be used with DNA ligase (which connects two DNA fragments), modifying DNA by inserting DNA from another organism. Another method, cDNA sequencing, or RNA-seq, can also help to acquire information on the transcriptomes of a broad range of organisms and find SNPs (single-nucleotide polymorphisms) within a DNA sequence.
Applications:
Ancestry informative markers have a number of applications in genetic research, forensics, and private industry. AIMs that indicate a predisposition for diseases such as type 2 diabetes mellitus and renal disease have been shown to reduce the effects of genetic admixture in ancestral mapping when using admixture mapping software. The differential ability of ancestry-informative markers allows scientists and researchers to narrow geographical populations of concern; for example, illegal organ trafficking can be traced to certain areas by comparing the samples taken from organ recipients and deciphering the foreign marker in their body. An array of private companies, such as 23andMe and AncestryDNA, provide cost-effective direct-to-consumer (DTC) genetic testing by analyzing ancestry informative markers to determine geographic origins. These private companies collect massive quantities of data such as biological samples and self-reported information from consumers, a practice known as biobanking, enabling their researchers to discover more insights on AIMs. Though AIM panels can be useful for disease screening, the Genetic Information Nondiscrimination Act (GINA) prevents the use of genetic information for insurance and workplace discrimination.
Medical research:
Different ancestral traits and their affiliation to diseases can help scientists determine appropriate approaches of treatment for a specific population. Medical researchers have revealed the link between ancestry traits and some common diseases; for example, individuals of African descent have been found to be at higher risk of asthma than those of European ancestry. AIM panels can be used for detecting disease risk factors. One such panel was created for African American ancestry based on subsets of commercially available SNP arrays. These types of arrays can help reduce the cost of identifying risk factors, since they allow researchers to screen for ancestry markers instead of the entire genome. This is because these SNP arrays narrow the scope of the necessary screening from hundreds of thousands of SNP markers to a panel of a few thousand AIMs. While some believe that structured populations should be used in studies to better ascertain genetic associations to diseases, the social implications of the potential racial stigma that may result from such studies are a major concern. However, the study done by Yang et al. (2005) suggests that the technology to conduct deeper research into and identify ancestry-associated variations in human disease does already exist.
**Meteorological instrumentation**
Meteorological instrumentation:
Meteorological instruments (or weather instruments), including meteorological sensors (weather sensors), are the equipment used to find the state of the atmosphere at a given time. Each science has its own unique sets of laboratory equipment. Meteorology, however, is a science which does not use much laboratory equipment but relies more on on-site observation and remote sensing equipment. In science, an observation, or observable, is an abstract idea that can be measured and for which data can be taken. Rain was one of the first quantities to be measured historically. Two other accurately measured weather-related variables are wind and humidity. Many attempts had been made prior to the 15th century to construct adequate equipment to measure atmospheric variables.
History:
Devices used to measure weather phenomena in the mid-15th century were the rain gauge, the anemometer, and the hygrometer. The 17th century saw the development of the barometer and the Galileo thermometer, while the 18th century saw the development of the thermometer with the Fahrenheit and Celsius scales. The 20th century brought new remote sensing tools, such as weather radars, weather satellites and wind profilers, which provide better sampling both regionally and globally. Remote sensing instruments collect data from weather events some distance from the instrument, typically store the data where the instrument is located, and often transmit the data at defined intervals to central data centers.
History:
In 1441, King Sejong's son, Prince Munjong, invented the first standardized rain gauge. These were sent throughout the Joseon Dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, which is known as the first anemometer. In 1607, Galileo Galilei constructed a thermoscope. In 1643, Evangelista Torricelli invented the mercury barometer. In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping-bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer. In 1742, Anders Celsius, a Swedish astronomer, proposed the 'centigrade' temperature scale, the predecessor of the current Celsius scale. In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure. In 1806, Francis Beaufort introduced his system for classifying wind speeds. The April 1960 launch of the first successful weather satellite, TIROS-1, marked the beginning of the age when weather information became available globally.
Types:
A thermometer measures air temperature, or the kinetic energy of the molecules within air. A barometer measures atmospheric pressure, or the pressure exerted by the weight of the Earth's atmosphere above a particular location. An anemometer measures the wind speed and the direction the wind is blowing from at the site where it is mounted. A hygrometer measures the relative humidity at a location, which can then be used to compute the dew point. Radiosondes directly measure most of these quantities, except for wind, which is determined by tracking the radiosonde signal with an antenna or theodolite. Supplementing the radiosondes, a network of data collection by aircraft is organized by the World Meteorological Organization (WMO); these aircraft also use such instruments to report weather conditions at their respective locations. A sounding rocket or rocketsonde, sometimes called a research rocket, is an instrument-carrying rocket designed to take measurements and perform scientific experiments during its suborbital flight.
Types:
A pyranometer is a type of actinometer used to measure broadband solar irradiance on a planar surface; it is a sensor designed to measure the solar radiation flux density (in watts per square metre) from a field of view of 180 degrees. A ceilometer is a device that uses a laser or other light source to determine the height of a cloud base. Ceilometers can also be used to measure the aerosol concentration within the atmosphere. A ceiling balloon is used by meteorologists to determine the height of the base of clouds above ground level during daylight hours. The principle behind the ceiling balloon is a balloon with a known ascent rate (how fast it climbs): one measures how long the balloon rises until it disappears into the cloud. Ascent rate times ascent time yields the ceiling height. A disdrometer is an instrument used to measure the drop size distribution and velocity of falling hydrometeors. Rain gauges are used to measure the precipitation which falls at any point on the Earth's landmass.
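As a worked example of the ceiling-balloon calculation with illustrative numbers (the figures here are hypothetical, not from the source): a balloon with a known ascent rate of 120 m per minute that disappears into the cloud after 3.5 minutes of observation gives, in LaTeX form,

    h = v \cdot t = 120\ \mathrm{m/min} \times 3.5\ \mathrm{min} = 420\ \mathrm{m},

that is, a cloud base roughly 420 m above ground level.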
Types:
Remote sensing, as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. Each remote sensing instrument collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. The most common types of remote sensing are radar, lidar, and satellites (also photogrammetry). The main uses of radar are to collect information concerning the coverage and characteristics of precipitation and wind. Satellites are chiefly used to determine cloud cover, as well as wind. SODAR (SOnic Detection And Ranging) is a meteorological instrument, a form of wind profiler, which measures the scattering of sound waves by atmospheric turbulence. Sodar systems are used to measure wind speed at various heights above the ground, and the thermodynamic structure of the lower layer of the atmosphere. Radar and lidar are not passive because both use electromagnetic radiation to illuminate a specific portion of the atmosphere. Weather satellites along with more general-purpose Earth-observing satellites circling the earth at various altitudes have become an indispensable tool for studying a wide range of phenomena from forest fires to El Niño.
Weather stations:
A weather station is a facility with instruments and equipment to make observations of atmospheric conditions in order to provide information to make weather forecasts and to study the weather and climate. The measurements taken include temperature, barometric pressure, humidity, wind speed, wind direction, and precipitation amounts. Wind measurements are taken as free of other obstructions as possible, while temperature and humidity measurements are kept free from direct solar radiation, or insolation. Manual observations are taken at least once daily, while automated observations are taken at least once an hour.
Surface weather observations:
Surface weather observations are the fundamental data used for safety as well as climatological reasons to forecast weather and issue warnings worldwide. They can be taken manually, by a weather observer, by computer through the use of automated weather stations, or in a hybrid scheme using weather observers to augment the otherwise automated weather station. The ICAO defines the International Standard Atmosphere, which is the model of the standard variation of pressure, temperature, density, and viscosity with altitude in the Earth's atmosphere, and is used to reduce a station pressure to sea level pressure. Airport observations can be transmitted worldwide through the use of the METAR observing code. Personal weather stations taking automated observations can transmit their data to the United States mesonet through the use of the Citizen Weather Observer Program (CWOP), or internationally through the Weather Underground Internet site. A thirty-year average of a location's weather observations is traditionally used to determine the station's climate.
**Bhimpalasi**
Bhimpalasi:
Bhimpalasi or Bheempalasi (also known as Bhimpalas or Bheempalas) is a joint raga, or "jor raga": a mixture of raga Bhim (Bheem, also known as raga Gavti) and raga Palasi (or Palas). Raga Bhim is a creation of Baba Allauddin Khan (founder of the Rampur Maihar Seniya gharana), while raga Palasi is a traditional raga. Raga Bhimpalasi belongs to the Kafi Thaat.
Theory:
Aarohana: Ṉ̣ S G̱ M P Ṉ Ṡ Avaroha: Ṡ Ṉ D P M G̱ R SThe raga has komal Ni and Ga. Rishabh (second) and dhaivat (sixth) are skipped in āroha (ascending) passages, but are given due importance when descending (avroha). Since the scale has 5 notes ascending and all 7 descending, the resulting jāti is Audav–Sampūrṇa. It is performed in the early afternoon, from 12:00 P.M. to 3:00 P.M. (the third prahar of the day).Use of dhaivat and rishabh is symmetrical in that both are approached via the succeeding notes (D from Ṉ, and R from G̱).G̱ is sung with a kaṇ-svara (grace note) of M. Similarly, Ṉ is sung with a kaṇ-svara from S.
Theory:
Vadi Swar: M; Samavadi Swar: S; Thaat: Kafi; Pakad or Chalan: Ṉ̣ S M ❟ M G̱ P M ❟ G̱ M G̱ R S
Bandish Examples:
A bandish is a composition in Hindustani classical music. Both of the following bandishes are examples of Bhimpalasi.
Bandish by Naimat Khan "Sadarang" This bandish is set in Teental. Pandit Jasraj is known for having sung this particular bandish; it is also in the repertoire of Sanjeev Abhyankar.
Prominent bandish (composition) by Acharya Dr. Pandit Gokulotsavji Maharaj "MadhurPiya": the opening line (bandish name) is "Gāo Bajāo Sab Mil Ātā Umaṅg So", and the bandish is set in tāla Ektal.
Organisation and relationships: Related/similar ragas: Bageshree, Dhanashree, Dhani, Patdeep, Hamsakinkini, Patdeepaki. In Carnatic music, Karnataka Devagandhari is the most similar raga, falling within Melakarta 22 (Karaharapriya).
Behaviour:
The madhyam (fourth) is the most important note. It is also a nyāsa-svara (resting note) with emphasized elaboration around this note - S G̱ M ❟ M G̱ M ❟ G̱ M P ❟ M P G̱ M P ( M ) G̱ ( M ) G̱ M.
Film songs:
This raga has been used in film songs in both Hindi and Tamil; a number of these are composed in Abheri, the equivalent of raga Bhimpalasi in Carnatic music.
**Iff card**
Iff card:
Iff card is a contactless smart card introduced in Cardiff in 2010. It allows customers to travel on Cardiff Bus services after having pre-paid. The name "Iff card" is a play on the word "Cardiff".
Launch:
Having been an aspiration of Cardiff Bus for many years beforehand, the card was launched in October 2010 during a publicity event outside Cardiff Central Library. The first 30,000 cards were issued free of charge and pre-loaded with £3 of credit, after which the cards were charged at £5. The cards are now issued free. The company spent £300,000 on developing the card, whose ITSO technology can be shared with other transport providers and public bodies in the future. More than 25,000 applications for the card were received within the first few weeks of its launch.
Use:
An amount of money is electronically loaded onto the card, either upon boarding a bus or at the Cardiff Bus customer service centre. A passenger then chooses a ticket type. The card can also be used as a season ticket. The card should be topped up when the balance is low, but it allows the customer to acquire a negative balance of up to £3; Cardiff Bus was the first operator in the UK to allow this.
Use:
Those who have registered for an Iff card can be sent service updates via text or e-mail during spells of bad weather.
Restrictions:
The card can be topped up in units of £5, £10 and £20, up to a maximum amount of £50. The card may be used by persons aged between 6 and 60. The Iff card cannot be used to pay a partial amount. The card would be cancelled if not used for a continuous period of one year.
Future:
The Managing Director of Cardiff Bus hopes the online topping-up service will be available by 2011. The Executive Member for Cardiff Council hopes the card will become integrated with rail companies so it can be used across all transport systems. The Deputy First Minister for Wales, Ieuan Wyn Jones, hopes to see the smartcard technology rolled out throughout Wales by 2014.
**Paul Babitzke**
Paul Babitzke:
Paul Babitzke is a professor of biochemistry and molecular biology and director of the Center for RNA Molecular Biology at Pennsylvania State University.
Education:
Paul Babitzke obtained his B.A. in biomedical science from St. Cloud State University in Minnesota in 1994. He earned his Ph.D. in genetics from the University of Georgia in 1991.
Career:
Before he started at Penn State University in 1994, Babitzke worked as a postdoctoral scientist in Stanford University's department of biological sciences for 3 years. Currently, he is professor of biochemistry and molecular biology at Penn State. He became an assistant professor of biochemistry and molecular biology in 1994 and associate professor in 2000. In 2006, Babitzke was promoted to full professor. Since 2009, he has been serving as director of the Center for RNA Molecular Biology in the Penn State Huck Institutes of the Life Sciences. His research focuses on the regulation of gene expression mediated by RNA polymerase pausing, transcription termination, RNA structure, and RNA-binding proteins.
Career:
In 2016, he was elected as Fellow of the American Association for the Advancement of Science. In 2017, he was elected as Fellow of the American Academy of Microbiology.
Honors and awards:
Chair, Division H, American Society for Microbiology (ASM) (2006) Chair, NIGMS Microbial Physiology and Genetics-subcommittee 2 (MBC-2) (2004) Daniel R. Tershak Teaching Award (2009) Divisional Group IV Representative, American Society for Microbiology (ASM) (2011-2015) Charles E. Kaufman New Initiative Research Award (2016) Fellow, American Association for the Advancement of Science (AAAS) (2016) Fellow, American Academy of Microbiology (AAM) (2017) St. Cloud State University Biological Sciences Distinguished Alumni Award (2018)
Selected publications:
Baker, C.S., Morozov, I., Suzuki, K., Romeo, T., and Babitzke, P. (2002) CsrA regulates glycogen biosynthesis by preventing translation of glgC in Escherichia coli. Mol. Microbiol. 44:1599-1610.
Yakhnin, A.V., and Babitzke, P. (2002) NusA-stimulated RNA polymerase pausing and termination participates in the Bacillus subtilis trp operon attenuation mechanism in vitro. Proc. Natl. Acad. Sci. USA. 99:11067-11072.
Yakhnin, A.V., Yakhnin, H., and Babitzke, P. (2006) RNA polymerase pausing participates in the Bacillus subtilis trpE translation control mechanism by providing additional time for TRAP to bind to the nascent trp leader transcript. Mol. Cell 24:547-557.
Yakhnin, A.V., Yakhnin, H., and Babitzke, P. (2008) Function of the Bacillus subtilis transcription elongation factor NusG in hairpin-dependent RNA polymerase pausing in the trp leader. Proc. Natl. Acad. Sci. USA 105:16131-16136.
Yakhnin, H., Yakhnin, A.V., Baker, C.S., Sineva, E., Berezin, I., Romeo, T., and Babitzke, P. (2011) Complex regulation of the global regulatory gene csrA: CsrA-mediated translational repression, transcription from five promoters by Eσ70 and EσS, and indirect transcriptional activation by CsrA. Mol. Microbiol. 81:689-704.
Yakhnin, A.V., Baker, C.S., Vakulskas, C.A., Yakhnin, H., Berezin, I., Romeo, T., and Babitzke, P. (2013) CsrA activates flhDC expression by protecting flhDC mRNA from RNase E-mediated cleavage. Mol. Microbiol. 87:851-866.
Mondal, S., Yakhnin, A.V., Sebastian, A., Albert, I., and Babitzke, P. (2016) NusA-dependent transcription termination prevents misregulation of global gene expression. Nat. Microbiol. 1:15007.
Potts, A.H., Vakulskas, C.A., Pannuri, A., Yakhnin, H., Babitzke, P., and Romeo, T. (2017) Global role of the bacterial post-transcriptional regulator CsrA revealed by integrated transcriptomics. Nat. Commun. 8:1596.
Yakhnin, A.V., FitzGerald, P.C., McIntosh, C., Yakhnin, H., Kireeva, M., Turek-Herman, J., Mandell, Z.F., Kashlev, M., and Babitzke, P. (2020) NusG controls transcription pausing and RNA polymerase translocation throughout the Bacillus subtilis genome. Proc. Natl. Acad. Sci. USA 117:21628-21636.
Mandell, Z.F., Oshiro, R.T., Yakhnin, A.V., Kashlev, M., Kearns, D.B., and Babitzke, P. (2021) NusG is an intrinsic transcription termination factor that stimulates motility and coordinates gene expression with NusA. eLife 10:e61880.
**Beckmann thermometer**
Beckmann thermometer:
A Beckmann thermometer is a device used to measure small differences of temperature, but not absolute temperature values. It was invented by Ernst Otto Beckmann (1853–1923), a German chemist, for his measurements of colligative properties in 1905. Today its use has largely been superseded by platinum Pt100 resistance thermometers and thermocouples.
Beckmann thermometer:
A Beckmann thermometer's length is usually 40–50 cm. The temperature scale typically covers about 5 °C and is divided into hundredths of a degree. With a magnifier it is possible to estimate temperature changes to 0.001 °C. The peculiarity of Beckmann's thermometer design is a reservoir (R on the diagram) at the upper end of the tube, by means of which the quantity of mercury in the bulb can be increased or diminished so that the instrument can be set to measure temperature differences at either high or low temperature values. In contrast, the range of a typical mercury-in-glass thermometer is fixed, being set by the calibration marks etched on the glass or the marks on the printed scale.
Calibration:
In setting the thermometer, a sufficient amount of mercury must be left in the bulb and stem to give readings between the required temperatures. First, the thermometer is inverted and gently tapped so that the mercury in the reservoir lodges in the bend (B) at the end of the stem. Next, the bulb is heated until the mercury in the stem joins the mercury in the reservoir. The thermometer is then placed in a bath one or two degrees above the upper limit of temperatures to be measured.
Calibration:
The upper end of the tube is gently tapped with the finger, and the mercury suspended in the upper part of the reservoir will be jarred down, thus separating it from the thread at the bend (B). The thermometer will then be set for readings between the required temperatures.
**Dog Barbos and Unusual Cross**
Dog Barbos and Unusual Cross:
Dog Barbos and Unusual Cross (Russian: Пёс Барбос и необычный кросс, romanized: Pyos Barbos i neobychnij kross) is a 1961 Soviet short comedy film directed by Leonid Gaidai.
Plot:
A trio of petty criminals – The Coward, The Fool and The Pro – go "fishing". They not only want to eat and drink well, but they also wish to catch a fish. But the conmen do not want to sit on the beach with a fishing rod and wait patiently for a fish to bite; instead they decide to go poaching: their plan is to stun fish using dynamite! Dropping a stick with a dynamite block tied to it into the river, the crooks rub their hands in anticipation of a magnificent "catch", but ... the unruly dog Barbos interferes. The dog manages to fish the stick of dynamite, which is about to explode, out of the river and rushes towards the poachers! In a panic, the scoundrels run away, but Barbos chases after them, and the three men climb a tall tree. But the cunning dog throws the dynamite with its burning safety fuse under the tree, runs away ... and then there is a loud blast! The poachers who were going to blow up the fish have instead knocked themselves senseless, and their clothes are tattered to shreds.
Cast:
Yuri Nikulin – The Fool Georgy Vitsin – The Coward Yevgeny Morgunov – The Pro Georgy Millyar – Water-bailiff (uncredited) Leonid Gaidai – Bear in a tent (deleted scene) Dog Bryokh – Dog Barbos
Filming:
Filming took place in the vicinity of the village of Snegiri in the Istrinsky District of Moscow region, on the banks of the Istra River, and the scene with the explosion of dynamite was shot near the summer residence of Ivan Kozlovsky.
In total, the filmed material was enough for half an hour, but the director Leonid Gaidai reduced it to a ten-minute running time and removed a lot of stunt scenes that were later used in Bootleggers.
During the filming, Yuri Nikulin had huge false eyelashes applied, and the actor diligently blinked. Thus, according to the director, The Fool's face was supposed to look even sillier.
Awards:
Nominated for Short Film Palme d'Or at the 1961 Cannes Film Festival.
**Autosomal dominant hypophosphatemic rickets**
Autosomal dominant hypophosphatemic rickets:
Autosomal dominant hypophosphatemic rickets (ADHR) is a rare hereditary disease in which excessive loss of phosphate in the urine leads to poorly formed bones (rickets), bone pain, and tooth abscesses. ADHR is caused by a mutation in fibroblast growth factor 23 (FGF23). ADHR affects men and women equally; symptoms may become apparent at any point from childhood through early adulthood. Blood tests reveal low levels of phosphate (hypophosphatemia) and inappropriately normal levels of vitamin D. Occasionally, hypophosphatemia may improve over time as urine losses of phosphate partially correct. ADHR may be lumped in with X-linked hypophosphatemia under general terms such as hypophosphatemic rickets. Hypophosphatemic rickets is associated with at least nine other genetic mutations. Clinical management of hypophosphatemic rickets may differ depending on the specific mutations associated with an individual case, but treatments are aimed at raising phosphate levels to promote normal bone formation. In a 2019 randomised clinical trial, the rickets of children with X-linked hypophosphataemia treated with burosumab, a human monoclonal antibody against FGF23, improved significantly compared to conventional therapy.
**Shadow marks**
Shadow marks:
Shadow marks (shadow relief) are a form of archaeological feature visible from the air. Unlike cropmarks, frost marks and soil marks, they require upstanding features to work and are therefore more commonly seen in the context of extant sites rather than previously undiscovered buried ones.
Shadow marks:
They are caused by the differences in height on the ground produced by archaeological remains. In the case of ancient, eroded earthworks, these differences are often small, and they are most apparent when viewed from the air when the sun is low in the sky. This causes long shadows to be cast by the higher features, which are illuminated from one side by the sun, with dark shadows marking hollows and depressions.
Shadow marks:
Artificial shadow marks can be created easily by constructing a virtual model of a site by merging aerial images (Photogrammetry) and then vectoring in a virtual light source from any direction and at any angle.
**Potential flow around a circular cylinder**
Potential flow around a circular cylinder:
In mathematics, potential flow around a circular cylinder is a classical solution for the flow of an inviscid, incompressible fluid around a cylinder that is transverse to the flow. Far from the cylinder, the flow is unidirectional and uniform. The flow has no vorticity and thus the velocity field is irrotational and can be modeled as a potential flow. Unlike a real fluid, this solution indicates a net zero drag on the body, a result known as d'Alembert's paradox.
Mathematical solution:
A cylinder (or disk) of radius R is placed in a two-dimensional, incompressible, inviscid flow. The goal is to find the steady velocity vector V and pressure p in a plane, subject to the condition that far from the cylinder the velocity vector (relative to unit vectors i and j) is: V=Ui+0j, where U is a constant, and at the boundary of the cylinder V⋅n^=0, where n̂ is the vector normal to the cylinder surface. The upstream flow is uniform and has no vorticity. The flow is inviscid, incompressible and has constant mass density ρ. The flow therefore remains without vorticity, or is said to be irrotational, with ∇ × V = 0 everywhere. Being irrotational, there must exist a velocity potential φ: V=∇ϕ.
Mathematical solution:
Being incompressible, ∇ · V = 0, so φ must satisfy Laplace's equation: ∇²φ = 0.
The solution for φ is obtained most easily in polar coordinates r and θ, related to conventional Cartesian coordinates by x = r cos θ and y = r sin θ. In polar coordinates, Laplace's equation is (see Del in cylindrical and spherical coordinates): (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ² = 0.
The solution that satisfies the boundary conditions is φ = U(r + R²/r) cos θ.
The velocity components in polar coordinates are obtained from the components of ∇φ in polar coordinates: Vᵣ = ∂φ/∂r = U(1 − R²/r²) cos θ and V_θ = (1/r) ∂φ/∂θ = −U(1 + R²/r²) sin θ.
Being inviscid and irrotational, Bernoulli's equation allows the solution for the pressure field to be obtained directly from the velocity field: p = ½ρ(U² − V²) + p∞, where the constants U and p∞ appear so that p → p∞ far from the cylinder, where V = U. Using V² = Vᵣ² + V_θ², we obtain p = ½ρU²(2(R²/r²) cos 2θ − R⁴/r⁴) + p∞.
In the figures, the colorized field referred to as "pressure" is a plot of 2(R²/r²) cos 2θ − R⁴/r⁴.
Mathematical solution:
On the surface of the cylinder, where r = R, the nondimensional pressure (p − p∞)/(½ρU²) varies from a maximum of 1 (shown in the diagram in red) at the stagnation points at θ = 0 and θ = π to a minimum of −3 (shown in blue) on the sides of the cylinder, at θ = π/2 and θ = 3π/2. Likewise, V varies from V = 0 at the stagnation points to V = 2U on the sides, in the low pressure.
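These two extreme values can be checked directly from the velocity field above (a short derivation, not part of the source text): on the surface r = R the radial velocity vanishes, so in LaTeX form

    V^2 = V_\theta^2 = 4U^2 \sin^2\theta,
    \qquad
    \frac{p - p_\infty}{\tfrac{1}{2}\rho U^2} = 1 - \frac{V^2}{U^2} = 1 - 4\sin^2\theta = 2\cos 2\theta - 1,

which equals 1 at the stagnation points θ = 0 and θ = π, and −3 on the sides at θ = π/2 and θ = 3π/2.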
Mathematical solution:
Stream function The flow being incompressible, a stream function can be found such that V=∇ψ×k.
It follows from this definition, using vector identities, that V · ∇ψ = 0.
Therefore, a contour of a constant value of ψ will also be a streamline, a line tangent to V. For the flow past a cylinder, we find ψ = U(r − R²/r) sin θ.
Physical interpretation:
Laplace's equation is linear, and is one of the most elementary partial differential equations. This simple equation yields the entire solution for both V and p because of the constraint of irrotationality and incompressibility. Having obtained the solution for V and p, the consistency of the pressure gradient with the accelerations can be noted.
The dynamic pressure at the upstream stagnation point has a value of ½ρU², a value needed to decelerate the free stream flow of speed U. This same value appears at the downstream stagnation point; this high pressure is again needed to decelerate the flow to zero speed. This symmetry arises only because the flow is completely frictionless.
The low pressure on the sides of the cylinder is needed to provide the centripetal acceleration of the flow: ∂p/∂r = ρV²/L, where L is the radius of curvature of the flow. But L ≈ R, and V ≈ U. The integral of the equation for centripetal acceleration over a distance Δr ≈ R will thus yield p − p∞ ≈ −ρU².
The exact solution has, for the lowest pressure, p − p∞ = −(3/2)ρU².
The low pressure, which must be present to provide the centripetal acceleration, will also increase the flow speed as the fluid travels from higher to lower values of pressure. Thus we find the maximum speed in the flow, V = 2U, in the low pressure on the sides of the cylinder.
A value of V > U is consistent with conservation of the volume of fluid. With the cylinder blocking some of the flow, V must be greater than U somewhere in the plane through the center of the cylinder and transverse to the flow.
Comparison with flow of a real fluid past a cylinder:
The symmetry of this ideal solution has a stagnation point on the rear side of the cylinder, as well as on the front side. The pressure distribution over the front and rear sides are identical, leading to the peculiar property of having zero drag on the cylinder, a property known as d'Alembert's paradox. Unlike an ideal inviscid fluid, a viscous flow past a cylinder, no matter how small the viscosity, will acquire a thin boundary layer adjacent to the surface of the cylinder. Boundary layer separation will occur, and a trailing wake will exist in the flow behind the cylinder. The pressure at each point on the wake side of the cylinder will be lower than on the upstream side, resulting in a drag force in the downstream direction.
Janzen–Rayleigh expansion:
The problem of potential compressible flow over a circular cylinder was first studied by O. Janzen in 1913 and by Lord Rayleigh in 1916, with small compressible effects. Here, the small parameter is the square of the Mach number, M² = U²/c² ≪ 1, where c is the speed of sound. Then the solution to first-order approximation in terms of the velocity potential is the incompressible potential φ = U(r + a²/r) cos θ plus a correction of order M² containing cos θ and cos 3θ harmonics (with coefficients such as 13/12), with an error of order M⁴; here a is the radius of the cylinder.
Potential flow over a circular cylinder with slight variations:
Regular perturbation analysis for a flow around a cylinder with slight perturbation in the configurations can be found in Milton Van Dyke (1975). In the following, ε will represent a small positive parameter and a is the radius of the cylinder. For more detailed analyses and discussions, readers are referred to Milton Van Dyke's 1975 book Perturbation Methods in Fluid Mechanics.
Potential flow over a circular cylinder with slight variations:
Slightly distorted cylinder Here the radius of the cylinder is not r = a, but the slightly distorted form r = a(1 − ε sin²θ). Then the solution to first-order approximation is the undisturbed stream function ψ = U(r − a²/r) sin θ plus a correction of order ε containing sin θ and sin 3θ harmonics, with an error of order ε². Slightly pulsating circle Here the radius of the cylinder varies slightly with time, so r = a(1 + ε f(t)). Then the solution to first-order approximation is the steady stream function ψ = U(r − a²/r) sin θ plus a time-dependent correction of order ε, with an error of order ε². Flow with slight vorticity In general, the free-stream velocity U is uniform, in other words ψ = Uy, but here a small vorticity is imposed in the outer flow.
Potential flow over a circular cylinder with slight variations:
Linear shear Here a linear shear in the velocity is introduced, so that the free-stream stream function acquires a shear contribution of order ε as x → −∞, where ε is the small parameter. The governing equation is ∇²ψ = −ω(ψ). Then the solution to first-order approximation is the undisturbed stream function ψ = U(r − a²/r) sin θ plus a correction of order ε containing cos θ and cos 2θ harmonics, with an error of order ε².
Parabolic shear Here a parabolic shear in the outer velocity is introduced, so that the free-stream stream function acquires a shear contribution of order ε as x → −∞. Then the solution to the first-order approximation is the undisturbed stream function plus a correction of order ε containing sin θ harmonics, a logarithmic term, and χ, where χ is the homogeneous solution to the Laplace equation which restores the boundary conditions; the error is of order ε².
Potential flow over a circular cylinder with slight variations:
Slightly porous cylinder Let C_ps represent the surface pressure coefficient for an impermeable cylinder: C_ps = (p_s − p∞)/(½ρU²) = 1 − 4 sin²θ = 2 cos 2θ − 1, where p_s is the surface pressure of the impermeable cylinder. Now let C_pi be the internal pressure coefficient inside the cylinder; then a slight normal velocity due to the slight porousness, driven by the difference between the internal and surface pressures, arises at r = a. But the zero net flux condition ∫₀^{2π} (1/r)(∂ψ/∂θ) dθ = 0 requires that C_pi = −1. Therefore, since C_pi − C_ps = −2 cos 2θ, the normal velocity at r = a is proportional to ε cos 2θ.
Potential flow over a circular cylinder with slight variations:
Then the solution to the first-order approximation is ψ = U(r − a²/r) sin θ plus a correction of order ε containing a sin 2θ term, with an error of order ε².
Corrugated quasi-cylinder If the cylinder has a variable radius in the axial direction (the z-axis), r = a(1 + ε sin(z/b)), then the solution to the first-order approximation in terms of the three-dimensional velocity potential is φ = U(r + a²/r) cos θ plus a correction of order ε involving cos θ, sin(z/b) and K₁(r/b), where K₁ is the modified Bessel function of the second kind of order one; the error is of order ε².
**EOTD**
EOTD:
eOTD is the acronym for the ECCMA Open Technical Dictionary. The dictionary is a language-independent database of concepts with associated terms, definitions and images used to unambiguously describe individuals, organizations, locations, goods, services, processes, rules, and regulations. The eOTD is maintained by the Electronic Commerce Code Management Association (ECCMA).
History:
The eOTD was developed with the support of the Defense Logistics Information Service (DLIS), an agency of the US Defense Logistics Agency (DLA). The eOTD is the first dictionary to be compliant with ISO 22745 (open technical dictionaries).
Structure:
The eOTD contains terms, definitions and images linked to concept identifiers. eOTD concept identifiers are used to create unambiguous language independent descriptions of individuals, organizations, locations, goods, services, processes, rules and regulations. The process of using concept identifiers from an external open technical dictionary is a form of semantic encoding compliant with the requirements of ISO 8000-110:2008, the international standard for the exchange of quality master data.
Structure:
The eOTD concept identifiers are in the public domain. Using public domain identifiers as metadata creates portable data that can be legally separated from the software application that was used to create it. The dictionary contains concepts from international, national and industry standards including over 400,000 concepts of class (approved item name), property (attribute), units of measure, currency and common enumerated value (e.g., days of the week). The eOTD does not include a class hierarchy or class-property relationships.
Use:
Companies use the eOTD to create data requirement specifications as Identification Guides (IGs) or cataloging templates. These Identification Guides contain the class-property relationships and are used for cataloging, to measure data quality as well as to create requests for data or requests for data validation.
Industrial products and services categorization standards:
eCl@ss, ETIM (standard), UNSPSC, eOTD, RosettaNet
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Smoke point**
Smoke point:
The smoke point, also referred to as the burning point, is the temperature at which an oil or fat begins to produce a continuous bluish smoke that becomes clearly visible, dependent upon specific and defined conditions. Smoke point values can vary greatly, depending on factors such as the volume of oil utilized, the size of the container, the presence of air currents, the type and source of light as well as the quality of the oil and its acidity content, otherwise known as free fatty acid (FFA) content. The more FFA an oil contains, the quicker it will break down and start smoking. The lower the value of FFA, the higher the smoke point. However, the FFA content typically represents less than 1% of the total oil and consequently renders smoke point a poor indicator of the capacity of a fat or oil to withstand heat.
Temperature:
The smoke point of an oil correlates with its level of refinement. Many cooking oils have smoke points above standard home cooking temperatures:
Pan frying (sauté) on stove-top heat: 120 °C (248 °F)
Deep frying: 160–180 °C (320–356 °F)
Oven baking: average of 180 °C (356 °F)
Smoke point decreases at a different rate in different oils. Considerably above the temperature of the smoke point is the flash point, the point at which the vapours from the oil can ignite in air, given an ignition source.
Temperature:
The following table presents smoke points of various fats and oils.
Oxidative stability:
Hydrolysis and oxidation are the two primary degradation processes that occur in an oil during cooking. Oxidative stability is how resistant an oil is to reacting with oxygen, breaking down and potentially producing harmful compounds while exposed to continuous heat. Oxidative stability is the best predictor of how an oil behaves during cooking.The Rancimat method is one of the most common methods for testing oxidative stability in oils. This determination entails speeding up the oxidation process in the oil (under heat and forced air), which enables its stability to be evaluated by monitoring volatile substances associated with rancidity. It is measured as "induction time" and recorded as total hours before the oil breaks down. Canola oil requires 7.5 hours, for example, whereas extra virgin olive oil (EVOO) and virgin coconut oil will last over a day at 110 °C of continuous heat. The differing stabilities correlate with lower levels of polyunsaturated fatty acids, which are more prone to oxidation. EVOO is high in monounsaturated fatty acids and antioxidants, conferring stability. Some plant cultivars have been bred to produce "high-oleic" oils with more monounsaturated oleic acid and less polyunsaturated linoleic acid for enhanced stability.The oxidative stability does not directly correspond to the smoke point and thus the latter cannot be used as a reference for safe and healthy cooking.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Articular cartilage repair**
Articular cartilage repair:
Articular cartilage repair treatment involves the repair of the surface of an articular joint's hyaline cartilage. Over the last few decades, surgeons and researchers have made progress in elaborating surgical cartilage repair interventions. Though these solutions do not perfectly restore the articular cartilage, some of the latest technologies have started to bring very promising results in repairing cartilages from traumatic injuries or chondropathies. These treatments have been shown to be especially beneficial for patients who have articular cartilage damage. They can provide some measure of pain relief, while at the same time slowing down the accumulation of damage, or considerably delaying the need for joint replacement (knee replacement) surgery.
Different articular cartilage repair procedures:
Though the different articular cartilage repair procedures differ in the technologies and surgical techniques used, they all share the same aim to repair articular cartilage whilst keeping options open for alternative treatments in the future. Broadly taken, there are five major types of articular cartilage repair: Arthroscopic lavage / debridement Arthroscopic lavage is a "cleaning up" procedure of the knee joint. This short-term solution is not considered as an articular cartilage repair procedure but rather a palliative treatment to reduce pain, mechanical restriction and inflammation. Lavage focuses on removing degenerative articular cartilage flaps and fibrous tissue. The main target groups are patients with very small defects of the articular cartilage.
Different articular cartilage repair procedures:
Marrow stimulation techniques (micro-fracture surgery and others) Marrow stimulating techniques attempt to solve articular cartilage damage through an arthroscopic procedure. Firstly, the damaged cartilage is drilled or punched until the underlying bone is exposed. By doing this, the subchondral bone is perforated to generate a blood clot within the defect. Studies, however, have shown that marrow stimulation techniques often have insufficiently filled the chondral defect and the repair material is often fibrocartilage (which is not as good mechanically as hyaline cartilage). The blood clot takes about 8 weeks to become fibrous tissue and it takes 4 months to become fibrocartilage. This has implications for the rehabilitation.Further on, chances are high that after only 1 or 2 years of the surgery symptoms start to return as the fibrocartilage wears away, forcing the patient to reengage in articular cartilage repair. This is not always the case and microfracture surgery is therefore considered to be an intermediate step.An evolvement of the microfracture technique is the implantation of a collagen membrane onto the site of the microfracture to protect and stabilize the blood clot and to enhance the chondrogenic differentiation of the MSCs. This technique is known as AMIC (Autologous Matrix-Induced Chondrogenesis) and was first published in 2003.Microfracture techniques show new potential, as animal studies indicate that microfracture-activated skeletal stem-cells form articular cartilage, instead of fibrous tissue, when co-delivered with a combination of BMP2 and VEGF receptor antagonist.
Different articular cartilage repair procedures:
Marrow stimulation augmented with hydrogel implant A hydrogel implant to help the body regrow cartilage in the knee is currently being studied in U.S. and European clinical trials. Called GelrinC, the implant is made of a synthetic material called polyethylene glycol (PEG) and denatured human fibrinogen protein.During the standard microfracture procedure, the implant is applied to the cartilage defect as a liquid. It is then exposed to UVA light for 90 seconds, turning it into a solid, soft implant that completely occupies the space of the cartilage defect.
Different articular cartilage repair procedures:
The implant is designed to support the formation of hyaline cartilage through a unique guided tissue mechanism. It protects the repair site from infiltration of undesired fibrous tissue while providing the appropriate environment for hyaline cartilage matrix formation. Over six to 12 months, the implant resorbs from its surface inward, enabling it to be gradually replaced with new cartilage.Preliminary clinical studies in Europe have shown the implant improves pain and function.
Different articular cartilage repair procedures:
Marrow stimulation augmented with peripheral blood stem cells A 2011 study reports histologically confirmed hyaline cartilage regrowth in a 5 patient case-series, 2 with grade IV bipolar or kissing lesions in the knee. The successful protocol involves arthroscopic microdrilling/ microfracture surgery followed by postoperative injections of autologous peripheral blood progenitor cells (PBPC's) and hyaluronic acid (HA). PBPC's are a blood product containing mesenchymal stem cells and is obtained by mobilizing the stem cells into the peripheral blood. Khay Yong Saw and his team propose that the microdrilling surgery creates a blood clot scaffold on which injected PBPC's can be recruited and enhance chondrogenesis at the site of the contained lesion. They explain that the significance of this cartilage regeneration protocol is that it is successful in patients with historically difficult-to-treat grade IV bipolar or bone-on-bone osteochondral lesions.Saw and his team are currently conducting a larger randomized trial and working towards beginning a multicenter study. The work of the Malaysian research team is gaining international attention.
Different articular cartilage repair procedures:
Osteochondral autografts and allografts This technique/repair requires transplant sections of bone and cartilage. First, the damaged section of bone and cartilage is removed from the joint. Then a new healthy dowel of bone with its cartilage covering is punched out of the same joint and replanted into the hole left from removing the old damaged bone and cartilage. The healthy bone and cartilage are taken from areas of low stress in the joint so as to prevent weakening the joint. Depending on the severity and overall size of the damage multiple plugs or dowels may be required to adequately repair the joint, which becomes difficult for osteochondral autografts. The clinical results may deteriorate over time.For osteochondral allografts, the plugs are taken from deceased donors. This has the advantage that more osteochondral tissue is available and larger damages can be repaired using either the plug (snowman) technique or by hand carving larger grafts. There are, however, worries on the histocompatibility, though no rejection drugs are required and infection has been shown to be lesser than that of a total knee or hip. Osteochondral allografting using donor cartilage has been used most historically in knees, but is also emerging in hips, ankles, shoulders and elbows. Patients are typically younger than 55, with BMI below 35, and have a desire to maintain a higher activity level that traditional joint replacements would not allow. Advances in tissue preservation and surgical technique are quickly growing this surgery in popularity.
Different articular cartilage repair procedures:
Joint distraction arthroplasty This technique involves physically separating a joint for a period of time (typically 8–12 weeks) to allow for cartilage regeneration.
Different articular cartilage repair procedures:
Cell-based repairs Aiming to obtain the best possible results, scientists have striven to replace damaged articular cartilage with healthy articular cartilage. Previous repair procedures, however, always generated fibrocartilage or, at best, a combination of hyaline and fibrocartilage repair tissue. Autologous chondrocyte implantation (ACI) procedures are cell-based repairs that aim to achieve a repair consisting of healthy articular cartilage.ACI articular cartilage repair procedures take place in three stages. First, cartilage cells are extracted arthroscopically from the patient's healthy articular cartilage that is located in a non load-bearing area of either the intercondylar notch or the superior ridge of the femoral condyles. Then these extracted cells are transferred to an in vitro environment in specialised laboratories where they grow and replicate, for approximately four to six weeks, until their population has increased to a sufficient amount. Finally, the patient undergoes a second surgery where the in vitro chondrocytes are applied to the damaged area. In this procedure, chondrocytes are injected and applied to the damaged area in combination with either a membrane or a matrix structure. These transplanted cells thrive in their new environment, forming new articular cartilage.
Different articular cartilage repair procedures:
Autologous mesenchymal stem cell transplant For years, the concept of harvesting stem cells and re-implanting them into one's own body to regenerate organs and tissues has been embraced and researched in animal models. In particular, mesenchymal stem cells have been shown in animal models to regenerate cartilage. Recently, there has been a published case report of decrease in knee pain in a single individual using autologous mesenchymal stem cells. An advantage to this approach is that a person's own stem cells are used, avoiding transmission of genetic diseases. It is also minimally invasive, minimally painful and has a very short recovery period. This alternative to the current available treatments was shown not to cause cancer in patients who were followed for 3 years after the procedure.See also Stem cell transplantation for articular cartilage repair Drug therapies While there are currently no drugs approved for human use, there are multiple drugs currently in development which are aimed at slowing the progression of cartilage degeneration and even potentially repairing it. These are usually referred to DMOADs.
The importance of rehabilitation in articular cartilage repair:
Rehabilitation following any articular cartilage repair procedure is paramount for the success of any articular cartilage resurfacing technique. The rehabilitation is often long and demanding. The main reason is that it takes a long time for the cartilage cells to adapt and mature into repair tissue. Cartilage is a slow adapting substance. Where a muscle takes approximately 35 weeks to fully adapt itself, cartilage only undergoes 75% adaptation in 2 years. If the rehabilitation period is too short, the cartilage repair might be put under too much stress, causing the repair to fail.
Concerns:
New research by Robert Litchfield, September 2008, of the University of Western Ontario concluded that routinely practised knee surgery is ineffective at reducing joint pain or improving joint function in people with osteoarthritis. The researchers did however find that arthroscopic surgery did help a minority of patients with milder symptoms, large tears or other damage to the meniscus — cartilage pads that improve the congruence between femur and tibia bones. Similarly, a 2013 Finnish study found surgery to be ineffective for knee surgery (arthroscopic partial meniscectomy), by comparing to sham treatment.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cholesterol oxidase**
Cholesterol oxidase:
In enzymology, a cholesterol oxidase (EC 1.1.3.6) is an enzyme that catalyzes the chemical reaction

cholesterol + O2 ⇌ cholest-4-en-3-one + H2O2

Thus, the two substrates of this enzyme are cholesterol and O2, whereas its two products are cholest-4-en-3-one and H2O2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with oxygen as acceptor. The systematic name of this enzyme class is cholesterol:oxygen oxidoreductase. Other names in common use include cholesterol- O2 oxidoreductase, 3beta-hydroxy steroid oxidoreductase, and 3beta-hydroxysteroid:oxygen oxidoreductase. This enzyme participates in bile acid biosynthesis.
The substrate-binding domain found in some bacterial cholesterol oxidases is composed of an eight-stranded mixed beta-pleated sheet and six alpha-helices. This domain is positioned over the isoalloxazine ring system of the FAD cofactor bound by the FAD-binding domain and forms the roof of the active site cavity, allowing for catalysis of oxidation and isomerisation of cholesterol to cholest-4-en-3-one.
Structural studies:
As of late 2007, 14 structures have been solved for this class of enzymes, with PDB accession codes 1B4V, 1B8S, 1CBO, 1CC2, 1COY, 1I19, 1IJH, 1MXT, 1N1P, 1N4U, 1N4V, 1N4W, 2GEW, and 3COX.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Visual snow syndrome**
Visual snow syndrome:
Visual snow syndrome (VSS) is an uncommon neurological condition in which the primary symptom is that affected individuals see persistent flickering white, black, transparent, or coloured dots across the whole visual field. Other common symptoms are palinopsia, enhanced entoptic phenomena, photophobia, and tension headaches. The condition is typically always present and has no known cure, as viable treatments are still under research. Astigmatism, although not presumed connected to these visual disturbances, is a common comorbidity. As well, migraine and tinnitus are common comorbidities which are both associated with a more severe presentation of the syndrome. TMJ may also be a common comorbidity.The cause of the syndrome is unclear. The underlying mechanism is believed to involve excessive excitability of neurons in the right lingual gyrus and left anterior lobe of cerebellum. Another hypothesis proposes that visual snow syndrome could be a type of thalamocortical dysrhythmia and may involve the thalamic reticular nucleus (TRN). A failure of inhibitory action from the TRN to the thalamus may be the underlying cause for inability to suppress excitatory sensory information. Research has been limited due to issues of case identification and diagnosis, the latter now largely addressed, and the limited size of any studied cohort. Initial functional brain imaging research suggests visual snow is a brain disorder.
Visual snow syndrome:
There is no established treatment for visual snow syndrome. Medications that may be used to treat the condition include lamotrigine, acetazolamide, or verapamil. However, in absence of a secondary pharmaceutical indication, these do not necessarily result in benefits, and the evidence for their use is limited.
Signs and symptoms:
In addition to visual snow, many of those affected have other types of visual disturbances such as starbursts, increased afterimages, floaters, trails, and many others.Visual snow likely represents a clinical continuum, with different degrees of severity. The presence of comorbidities such as migraine and tinnitus is associated with a more severe presentation of the visual symptoms.
Diagnosis Visual snow syndrome is usually diagnosed with the following proposed criteria: Visual snow: dynamic, continuous, tiny dots in the entire visual field lasting more than three months.
The dots are usually black/gray on a white background and gray/white on a black background; however, they can also be transparent, white flashing, or colored.
Presence of at least 2 additional visual symptoms of the 4 following categories: i. Palinopsia. At least 1 of the following: afterimages or trailing of moving objects.
ii. Enhanced entoptic phenomena. At least 1 of the following: excessive floaters in both eyes, excessive blue field entoptic phenomenon, self-light of the eye (phosphenes), or spontaneous photopsia.
iii. Photophobia.
iv. Nyctalopia; impaired night vision.
Symptoms are not consistent with typical migraine aura.
Symptoms are not better explained by another disorder (ophthalmological, drug abuse).
Normal ophthalmology tests (best-corrected visual acuity, dilated fundus examination, visual field, and electroretinogram); not caused by previous intake of psychotropic drugs. Additional non-visual symptoms such as tinnitus, ear pressure, or brain fog may also be present. It can also be diagnosed by PET scan.
Signs and symptoms:
Comorbidities Migraine and migraine with aura are common comorbidities. However, comorbid migraine worsens some of the additional visual symptoms and tinnitus seen in "visual snow" syndrome. This might bias research studies by patients with migraine being more likely to offer study participation than those without migraine due to having more severe symptoms. In contrast to migraine, comorbidity of typical migraine aura does not appear to worsen symptoms.Psychological side effects of visual snow can include depersonalization, derealization, depression, photophobia and heliophobia in the individual affected.Patients with visual "snow" have normal equivalent input noise levels and contrast sensitivity. In a 2010 study, Raghaven et al. hypothesize that what the patients see as "snow" is eigengrau. This would also explain why many report more visual snow in low light conditions: "The intrinsic dark noise of primate cones is equivalent to ~4000 absorbed photons per second at mean light levels; below this the cone signals are dominated by intrinsic noise".
Causes:
The causes are unclear. The underlying mechanism is believed to involve excessive excitability of neurons within the cortex of the brain, specifically the right lingual gyrus and left cerebellar anterior lobe of the brain.Persisting visual snow can feature as a leading addition to a migraine complication called persistent aura without infarction, commonly referred to as persistent migraine aura (PMA). In other clinical sub-forms of migraine headache may be absent and the migraine aura may not take the typical form of the zigzagged fortification spectrum (scintillating scotoma), but manifests with a large variety of focal neurological symptoms.Visual snow does not depend on the effect of psychotropic substances on the brain. Hallucinogen persisting perception disorder (HPPD), a condition caused by hallucinogenic drug use, is sometimes linked to visual snow, but both the connection of visual snow to HPPD and the cause and prevalence of HPPD is disputed. Most of the evidence for both is generally anecdotal, and subject to spotlight fallacy.
Timeline:
In May 2015, visual snow was described as a persisting positive visual phenomenon distinct from migraine aura in a study by Schankin and Goadsby.
In December 2020, a study found local increases in regional cerebral perfusion in patients with visual snow syndrome.
In September 2021, two studies found white matter alterations in parts of the visual cortex and outside the visual cortex in patients with visual snow syndrome.
Treatments:
It is difficult to resolve visual snow with treatment, but it is possible to reduce symptoms and improve quality of life through treatment, both of the syndrome and its comorbidities. Medications that may be used include lamotrigine, acetazolamide, or verapamil, but these do not always result in benefits. As of 2021, there were two ongoing clinical trials using transcranial magnetic stimulation and neurofeedback for visual snow.A recent study in the British Journal of Ophthalmology has confirmed that common drug treatments are generally ineffective in visual snow syndrome (VSS). Vitamins and benzodiazepines, however, were shown to be beneficial in some patients and can be considered safe for this condition.Victoria Pelak, a Professor of Neurology and Ophthalmology in the Department of Neurology at the University of Colorado Anschutz Medical Campus has recently directed, published, and completed enrollment for a TMS study protocol. The study protocol aims to investigate the use of rTMS intervention to improve symptoms and visual dysfunction associated with visual snow (VS); the study protocol also describes the challenges during the COVID-19 pandemic.In addition, Pelak described during her practice that she lets patients know that current treatment options are only limited to alleviating symptoms. She recommends that her patients focus on pharmaceutical and non-pharmaceutical treatments to control migraine, headaches, anxiety, and depression. As for light sensitivity complications, Pelak advises patients to use FL-41 tinted lenses indoors. She also recommends visual occupational therapists who assist patients with color-tinted lenses to alleviate VSS symptoms. Furthermore, Pelak states that exercising, meditation, and a healthy balanced diet can improve overall daily functioning.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cyclohexene oxide**
Cyclohexene oxide:
Cyclohexene oxide is a cycloaliphatic epoxide. It can react in cationic polymerization to give poly(cyclohexene oxide). As cyclohexene oxide carries only a single epoxide group per molecule, the resulting chains are linear and poly(cyclohexene oxide) is a thermoplastic.
Production:
Cyclohexene oxide is produced in epoxidation reaction from cyclohexene. The epoxidation can take place either in a homogeneous reaction by peracids or heterogeneous catalysis (e.g. silver and molecular oxygen).
Production:
In industrial production the heterogeneously catalyzed synthesis is preferred because of better atom economy, simpler separation of the product and easier recycling of the catalyst. A short overview and an investigation of the oxidation of cyclohexene by hydrogen peroxide are given in the literature. In recent times the catalytic oxidation of cyclohexene by (immobilized) metalloporphyrin complexes has been found to be an efficient route. In the laboratory, cyclohexene oxide can also be prepared by reacting cyclohexene with magnesium monoperoxyphthalate (MMPP) in a mixture of isopropanol and water as solvent at room temperature.
Production:
With this method, good yields up to 85 % can be reached.
Properties and reactions:
Cyclohexene oxide has been studied extensively by analytical methods. Cyclohexene oxide can be polymerized in solution, catalyzed by a solid acid catalyst.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Ferrocerium**
Ferrocerium:
Ferrocerium (also known in Europe as Auermetall) is a synthetic pyrophoric alloy of mischmetal (cerium, lanthanum, neodymium, other trace lanthanides and some iron – about 95% lanthanides and 5% iron) hardened by blending in oxides of iron and/or magnesium. When struck with a harder material, the mixture produces hot sparks that can reach temperatures of 3,315 °C (6,000 °F) when rapidly oxidized by the process of striking the rod. Striking both scrapes fragments off, exposing them to the oxygen in the air, and easily ignites them by friction heat due to cerium's remarkably low ignition temperature of between 150–180 °C (302–356 °F).
Ferrocerium:
Its easy flammability gives ferrocerium many commercial applications, such as the ignition source for lighters, strikers for gas welding and cutting torches, deoxidization in metallurgy, and ferrocerium rods. Because of ferrocerium's ability to ignite in adverse conditions, rods of ferrocerium (also called ferro rods, spark rods, and flint-spark-lighters) are commonly used as an emergency fire lighting device in survival kits. The ferrocerium is referred to as a "flint" in this case despite being dissimilar to natural flint as both are used in conjunction for fire lighting, albeit with opposite mechanical operation.
Discovery:
Ferrocerium alloy was invented in 1903 by the Austrian chemist Carl Auer von Welsbach. It takes its name from its two primary components: iron (from Latin: ferrum), and the rare-earth element cerium, which is the most prevalent of the lanthanides in the mixture. Except for the extra iron and magnesium oxides added to harden it, the mixture is approximately the combination found naturally in tailings from thorium mining, which Auer von Welsbach was investigating. The pyrophoric effect is dependent on the brittleness of the alloy and its low autoignition temperature.
Composition:
In Auer von Welsbach's first alloy, 30% iron (ferrum) was added to purified cerium, hence the name "ferro-cerium". Two subsequent Auermetalls were developed: the second also included lanthanum to produce brighter sparks, and the third added other heavy metals. A modern ferrocerium firesteel product is composed of an alloy of rare-earth metals called mischmetal, containing approximately 20.8% iron, 41.8% cerium, about 4.4% each of praseodymium, neodymium, and magnesium, plus 24.2% lanthanum. A variety of other components are added to modify the spark and processing characteristics. Most contemporary flints are hardened with iron oxide and magnesium oxide.
Uses:
Ferrocerium is used in fire lighting in conjunction with steel, similarly to natural flint-and-steel, though ferrocerium takes on the opposite role to the traditional system; instead of a natural flint rock striking tiny iron particles from a firesteel, a steel striker (which may be in the form of hardened steel wheel) strikes particles of ferrocerium off of the "flint". This manual rubbing action, done by squeezing the handle, creates a spark due to cerium's low ignition temperature between 150–180 °C (302–356 °F). Carbon steel works better than most other materials, in much the same way natural flint and firesteel are used.It is most commonly used for Bunsen burners and oxyacetylene welding torches.About 700 tons were produced in 2000.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cinder track**
Cinder track:
A cinder track is a type of race track, generally purposed for track and field or horse racing, whose surface is composed of cinders. For running tracks, many cinder surfaces have been replaced by all-weather synthetic surfaces, which provide greater durability and more consistent results, and are less stressful on runners. The impact on performance as a result of differing track surfaces is a topic often raised when comparing athletes from different eras.Synthetic tracks emerged in the late 1960s; the 1964 Olympics were the last to use a cinder track.The Little 500 bicycle race at Indiana University is still run annually on a cinder track.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**P-Dioxanone**
P-Dioxanone:
p-Dioxanone (1,4-dioxan-2-one) is the lactone of 2-(2-hydroxyethoxy)acetic acid. It is a monomer that can undergo ring-opening polymerization to give polydioxanone, a biodegradable implant material. It is isomeric to trimethylene carbonate (1,3-dioxan-2-one).
Preparation:
The common synthetic process for p-dioxanone is continuous gas-phase dehydrogenation of diethylene glycol on a copper or copper chromite catalyst at 280 °C.
This gives yields of up to 86%. Removal of excess diethylene glycol is crucial to the stability of the product as a monomer. Further purification with recrystallization, vacuum distillation, or melt crystallization allows purities of >99.5% to be achieved.
Properties:
Pure p-dioxanone is a white crystalline solid with a melting point of 28 °C.
Uses:
The oxidation of p-dioxanone with nitric acid or dinitrogen tetroxide gives diglycolic acid at 75% yield.p-Dioxanone can undergo ring-opening polymerization catalyzed by organic compounds of tin, such as tin(II) octoate or dibutyltin dilaurate, or by basic alkoxides such as aluminium isopropoxide. This affords polydioxanone, a biodegradable, semicrystalline and thermally labile polymer with uses in industry and medicine. Depolymerization back to the monomer is triggered at 100 °C.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Mexrenoic acid**
Mexrenoic acid:
Mexrenoic acid, or mexrenoate, is a synthetic steroidal antimineralocorticoid which was never marketed.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Sulfur water**
Sulfur water:
Sulfur water (or sulphur water) is a condition where water is exposed to hydrogen sulfide gas, giving it a distinct "rotten egg" smell. The condition has cultural and health-related significance as well as implications for plumbing.
Chemical composition:
Sulfur water derives from dissolved minerals that contain sulfate. These include baryte (BaSO4), epsomite (MgSO4·7H2O) and gypsum (CaSO4·2H2O). A notable change in the taste of the water is reported at levels that depend on the type of sulfate: 250 to 500 mg/litre for sodium sulfate, 250 to 1000 mg/litre for calcium sulfate and 400 to 600 mg/litre for magnesium sulfate. A study by Zoeteman found that 270 mg of calcium sulfate and 90 mg of magnesium sulfate actually improved the taste of the water.
Health:
Bathing in water high in sulfur or other minerals for its presumed health benefits is known as balneotherapy. Such waters are said to give a person bathing in them "ageless beauty" and relief from aches and pains. While humans have been able to adapt to higher concentrations over time, ingestion of sulfur water has been found to have cathartic effects on people consuming water with sulfate concentrations of 600 mg/litre, according to a study from the US Department of Health in 1962. Adverse effects that have been found include dehydration from excess amounts of sodium or magnesium sulfate in a person's diet, according to a study in 1980, with some populations, such as children and elderly people, being seen as at higher risk.
Health:
A survey was done in North Dakota US to better derive whether there was direct causation of a laxative effect from having sulfur in drinking water.
From this data, it was concluded that water containing more than 750 mg of sulfate per litre produced a laxative effect, while water with less than 600 mg per litre did not.
Concerns According to The Environmental Protection Agency (EPA) and the Centers for Disease Control and Prevention (CDC), drinking water with high levels of sulfate can cause diarrhea, especially in infants.
Cultural implications:
Farming At the University of Wyoming in America, sulfur water was studied to see the effects it can have upon the performance of steers that are on a forage-based diet. Due to sulfur being a requirement to living things, as it contains essential amino acids that are used to create proteins, sulfur water, which is commonly found in Western States of America, is a major contributor to sulfur in the herds diet. However, with a herd drinking high concentrate of sulfur water, ruminants may contract sulfur induced polioencephalomalacia (sPEM), which is a neurological disorder. Because of this finding, the study tries to reach the goal of finding a dietary supplement which can be used to counteract the negative health effects on the steers. To reduce the extra sulfur in the ruminant's diet, ruminal bacteria break the excess down, resulting in Hydrogen Sulfide, which is soluble in water, but as temperature increases, the solubility decreases, which leads to the hydrogen sulfide gas being reinhaled by the animal, causing sulfur induced polioencephalomalacia. The study attempted to resolve this issue by introducing clinoptilolite to the diet of the herd, but has found inconclusive evidence which requires more study of clinoptilolite effects on methanogenesis and biohydrogenation.
Cultural implications:
Sulfur Springs There is also believed to be great health benefits within sulfur water, with sulfur water springs being a common thing within many cultures. Such springs can be found in many countries such as New Zealand, Japan and Greece. These sulfur springs are often created due to the local volcanic activity which contributes to heating up nearby water systems. This is due to volcanoes exhaling water vapour heavily encased in metals, with sulfur dioxide being one of them.
Cultural implications:
In New Zealand, the North Island was brought to fame in the 1800s, with its baths heated naturally from a volcano near the town of Rotorua. There are 28 spa hot pools which visitors can soak themselves, along with sulfur mud baths.
Another famous spring is the spring in Greece, Thermopylae, which means "hot springs" derives its name from its springs, as it was believed to be the entrance to Hades.
Cause and treatment:
The condition indicates a high level of sulfate-reducing bacteria in the water supply. This may be due to the use of well water, poorly treated city water, or water heater contamination.
Various methods exist to treat sulfur in water. These methods include Filtration of the water using a carbon filter (useful for very small amounts of hydrogen sulfide) Filtration of the water through a canister of manganese oxide coated greensand Aeration of the water Chlorination of water (can be used to treat large amounts of hydrogen sulfide)
Levels of sulfur in water around the world:
The Global Environment Monitoring System for Freshwater (GEMS/Water) has said that typical fresh water holds about 20 mg/litre of sulfur, and can range from 0 to 630 mg/litre in rivers, 2 to 250 mg/litre in lakes and 0 to 230 mg/litre in groundwater.Canada's rain has been found to have sulfate concentrations of 1.0 and 3.8 mg/L in 1980, found in a study by Franklin published in 1985. Western Canada in rivers ranged from 1 to 3040 mg/litre, with most concentrations below 580 mg/litre according to results from Environment Canada in 1984. Central Canada had levels that were also high in Saskatchewan, there were median levels of 368 mg/litre in drinking water from ground water supplies, and 97 mg/litre in surface water supplies, with a range of 32170 mg/litre.
Levels of sulfur in water around the world:
A study conducted in Canada found that a treatment to reduce sulfur in drinking water had actually increased it. This was conducted in Ontario, which had a mean sulfur level of 12.5 mg/litre when untreated, and 22.5 mg/litre after the treatment.
Levels of sulfur in water around the world:
The Netherlands has had below 150 mg/litre concentrations of sulfur water in their underground water supplies. 65% of water treatment plants reported that the sulfur level of drinking water was below 25 mg/litre, as found in a study by Dijk-Looijaard & Fonds in 1985.The US had the Public Health Service in 1970 to measure levels of sulfate in drinking water sources in nine different geographic areas. The results concluded that all of the 106 surface water supplies that were sampled had sulfate present, as well as 645 of 658 ground water deposits that were tested. The levels of sulfur that was found ranged from less than 1 mg/litre to 770.
Environment:
Because sulfates are used in industrial products, they are often discharged into water supplies in the environment, for example from mines, textile mills and other industrial processes that use sulfates. Sulfates of magnesium, potassium and sodium are all highly soluble in water, which is what creates sulfur water, while other, metal-based sulfates such as those of calcium and barium are less soluble. Atmospheric sulfur dioxide can also affect surface water, and sulfur trioxide can combine with water vapour in the air to create sulfurous rain, colloquially known as acid rain.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**NPR1**
NPR1:
Natriuretic peptide receptor A/guanylate cyclase A (atrionatriuretic peptide receptor A), also known as NPR1, is an atrial natriuretic peptide receptor. In humans it is encoded by the NPR1 gene.
Function:
NPR1 is a membrane-bound guanylate cyclase that serves as the receptor for both atrial and brain natriuretic peptides (ANP and BNP, respectively).It is localized in the kidney where it results in natriuresis upon binding to natriuretic peptides. However, it is found in even greater quantity in the lungs and adipocytes.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Hashtag**
Hashtag:
A hashtag is a metadata tag that is prefaced by the hash symbol, #. On social media, hashtags are used on microblogging and photo-sharing services such as Twitter or Tumblr as a form of user-generated tagging that enables cross-referencing of content by topic or theme. For example, a search within Instagram for the hashtag #bluesky returns all posts that have been tagged with that term. After the initial hash symbol, a hashtag may include letters, numerals, or underscores.The use of hashtags was first proposed by American blogger and product consultant Chris Messina in a 2007 tweet. Messina made no attempt to patent the use because he felt that "they were born of the internet, and owned by no one". Hashtags became entrenched in the culture of Twitter and soon emerged across Instagram, Facebook, and YouTube. In June 2014, hashtag was added to the Oxford English Dictionary as "a word or phrase with the symbol # in front of it, used on social media websites and apps so that you can search for all messages with the same subject".
Origin and acceptance:
The number sign or hash symbol, #, has long been used in information technology to highlight specific pieces of text. In 1970, the number sign was used to denote immediate address mode in the assembly language of the PDP-11 when placed next to a symbol or a number, and around 1973, '#' was introduced in the C programming language to indicate special keywords that the C preprocessor had to process first. The pound sign was adopted for use within IRC (Internet Relay Chat) networks around 1988 to label groups and topics. Channels or topics that are available across an entire IRC network are prefixed with a hash symbol # (as opposed to those local to a server, which uses an ampersand '&').The use of the pound sign in IRC inspired Chris Messina to propose a similar system on Twitter to tag topics of interest on the microblogging network. He posted the first hashtag on Twitter: How do you feel about using # (pound) for groups. As in #barcamp [msg]? According to Messina, he suggested use of the hashtag to make it easy for lay users without specialized knowledge of search protocols to find specific relevant content. Therefore, the hashtag "was created organically by Twitter users as a way to categorize messages".The first published use of the term "hash tag" was in a blog post "Hash Tags = Twitter Groupings" by Stowe Boyd, on August 26, 2007, according to lexicographer Ben Zimmer, chair of the American Dialect Society's New Words Committee.
Origin and acceptance:
Messina's suggestion to use the hashtag was not immediately adopted by Twitter, but the convention gained popular acceptance when hashtags were used in tweets relating to the 2007 San Diego forest fires in Southern California. The hashtag gained international acceptance during the 2009–2010 Iranian election protests; Twitter users used both English- and Persian-language hashtags in communications during the events.Hashtags have since played critical roles in recent social movements such as #jesuischarlie, #BLM, and #MeToo.Beginning July 2, 2009, Twitter began to hyperlink all hashtags in tweets to Twitter search results for the hashtagged word (and for the standard spelling of commonly misspelled words). In 2010, Twitter introduced "Trending Topics" on the Twitter front page, displaying hashtags that are rapidly becoming popular, and the significance of trending hashtags has become so great that the company makes significant efforts to foil attempts to spam the trending list. During the 2010 World Cup, Twitter explicitly encouraged the use of hashtags with the temporary deployment of "hashflags", which replaced hashtags of three-letter country codes with their respective national flags.Other platforms such as YouTube and Gawker Media followed in officially supporting hashtags, and real-time search aggregators such as Google Real-Time Search began supporting hashtags.
Format:
A hashtag must begin with a hash (#) character followed by other characters, and is terminated by a space or the end of the line. Some platforms may require the # to be preceded with a space. Most or all platforms that support hashtags permit the inclusion of letters (without diacritics), numerals, and underscores. Other characters may be supported on a platform-by-platform basis. Some characters, such as & are generally not supported as they may already serve other search functions. Hashtags are not case sensitive (a search for "#hashtag" will match "#HashTag" as well), but the use of embedded capitals (i.e., CamelCase) increases legibility and improves accessibility.
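As a rough illustration of these format rules, the sketch below extracts hashtags with a regular expression that accepts letters, numerals, and underscores after the # and lower-cases the result, since hashtag search is case-insensitive. The pattern is an assumption chosen for illustration only; individual platforms define their own allowed character sets.

```python
# Minimal sketch of hashtag extraction: '#' followed by word characters
# (letters, numerals, underscores), preceded by whitespace or start of text.
# Illustrative only; not any platform's actual matching rule.
import re

HASHTAG = re.compile(r"(?:(?<=\s)|^)#(\w+)")

def extract_hashtags(text: str) -> list[str]:
    """Return hashtags in lowercase, since hashtag search is case-insensitive."""
    return [tag.lower() for tag in HASHTAG.findall(text)]

print(extract_hashtags("Lovely evening #BlueSky #sunset_2024"))
# -> ['bluesky', 'sunset_2024']
```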
Format:
Languages that do not use word dividers handle hashtags differently. In China, microblogs Sina Weibo and Tencent Weibo use a double-hashtag-delimited #HashName# format, since the lack of spacing between Chinese characters necessitates a closing tag. Twitter uses a different syntax for Chinese characters and orthographies with similar spacing conventions: the hashtag contains unspaced characters, separated from preceding and following text by spaces (e.g., '我 #爱 你' instead of '我#爱你') or by zero-width non-joiner characters before and after the hashtagged element, to retain a linguistically natural appearance (displaying as unspaced '我#爱你', but with invisible non-joiners delimiting the hashtag).
Format:
Etiquette and regulation Some communities may limit, officially or unofficially, the number of hashtags permitted on a single post.Misuse of hashtags can lead to account suspensions. Twitter warns that adding hashtags to unrelated tweets, or repeated use of the same hashtag without adding to a conversation can filter an account from search results, or suspend the account.Individual platforms may deactivate certain hashtags either for being too generic to be useful, such as #photography on Instagram, or due to their use to facilitate illegal activities.
Format:
Alternate formats In 2009, StockTwits began using ticker symbols preceded by the dollar sign (e.g., $XRX). In July 2012, Twitter began supporting the tag convention and dubbed it the "cashtag". The convention has extended to national currencies, and Cash App has implemented the cashtag to mark usernames.
Function:
Hashtags are particularly useful in unmoderated forums that lack a formal ontological organization. Hashtags help users find content of similar interest. Hashtags are neither registered nor controlled by any one user or group of users. They do not contain any set definitions, meaning that a single hashtag can be used for any number of purposes, and that the accepted meaning of a hashtag can change with time.
Function:
Hashtags intended for discussion of a particular event tend to use an obscure wording to avoid being caught up with generic conversations on similar subjects, such as a cake festival using #cakefestival rather than simply #cake. However, this can also make it difficult for topics to become "trending topics" because people often use different spelling or words to refer to the same topic. For topics to trend, there must be a consensus, whether silent or stated, that the hashtag refers to that specific topic.
Function:
Hashtags may be used informally to express context around a given message, with no intent to categorize the message for later searching, sharing, or other reasons. Hashtags may thus serve as a reflexive meta-commentary. This can help express contextual cues or offer more depth to the information or message that appears with the hashtag, as in "My arms are getting darker by the minute. #toomuchfaketan". Another function of the hashtag is to express personal feelings and emotions, for example "It's Monday!! #excited #sarcasm", in which the adjectives directly indicate the emotions of the speaker. The word hashtag is sometimes used verbally in informal conversations; such use may be humorous, as in "I'm hashtag confused!" By August 2012, use of a hand gesture, sometimes called the "finger hashtag", in which the index and middle fingers of both hands are extended and arranged perpendicularly to form the hash, had been documented.
Function:
Co-optation by other industries Companies, businesses, and advocacy organizations have taken advantage of hashtag-based discussions for promotion of their products, services or campaigns.
Function:
In the early 2010s, some television broadcasters began to employ hashtags related to programs in digital on-screen graphics, to encourage viewers to participate in a backchannel of discussion via social media prior to, during, or after the program. Television commercials have sometimes contained hashtags for similar purposes.The increased usage of hashtags as brand promotion devices has been compared to the promotion of branded "keywords" by AOL in the late 1990s and early 2000s, as such keywords were also promoted at the end of television commercials and series episodes.Organized real-world events have used hashtags and ad hoc lists for discussion and promotion among participants. Hashtags are used as beacons by event participants to find each other, both on Twitter and, in many cases, during actual physical events.
Function:
Since the 2012–13 season, the NBA has allowed fans to vote players in as All-Star Game starters on Twitter and Facebook using #NBAVOTE.Hashtag-centered biomedical Twitter campaigns have shown to increase the reach, promotion, and visibility of healthcare-related open innovation platforms.
Function:
Non-commercial use Political protests and campaigns in the early 2010s, such as #OccupyWallStreet and #LibyaFeb17, have been organized around hashtags or have made extensive usage of hashtags for the promotion of discussion. Hashtags have also been used to promote official events; the Finnish Ministry of Foreign Affairs officially titled the 2018 Russia–United States summit as the "#HELSINKI2018 Meeting".Hashtags have been used to gather customer criticism of large companies. In January 2012, McDonald's created the #McDStories hashtag so that customers could share positive experiences about the restaurant chain, but the marketing effort was cancelled after two hours when critical tweets outnumbered praising ones.
Function:
In 2017, the #MeToo hashtag became viral in response to the sexual harassment accusations against Harvey Weinstein. The use of this hashtag can be considered part of hashtag activism, spreading awareness across eighty-five different countries with more than seventeen million Tweets using the hashtag #MeToo. This hashtag was not only used to spread awareness of accusations regarding Harvey Weinstein but allowed different women to share their experiences of sexual violence. Using this hashtag birthed multiple different hashtags in connection to #MeToo to encourage more women to share their stories, resulting in further spread of the phenomenon of hashtag activism. The use of hashtags, especially, in this case, allowed for better and easier access to search for content related to this social media movement.
Function:
Sentiment analysis The use of hashtags also reveals what feelings or sentiment an author attaches to a statement. This can range from the obvious, where a hashtag directly describes the state of mind, to the less obvious. For example, words in hashtags are the strongest predictor of whether or not a statement is sarcastic—a difficult AI problem.
Function:
Professional development and education Hashtags play an important role for employees and students in professional fields and education. In industry, individuals' engagement with hashtags can provide opportunities for them to develop and gain professional knowledge in their fields. In education, research on language teachers who engaged with the #MFLtwitterati hashtag demonstrates the uses of hashtags for creating community and sharing teaching resources. The majority of participants reported a positive impact on their teaching strategies, inspired by the many ideas shared by different individuals through the hashtag. Emerging research in communication and learning demonstrates how hashtag practices influence the teaching and development of students. An analysis of eight studies examined the use of hashtags in K–12 classrooms and found significant results. These results indicated that hashtags assisted students in voicing their opinions. In addition, hashtags also helped students understand self-organisation and the concept of space beyond place. Related research demonstrated how high school students' engagement with hashtag communication practices allowed them to develop storytelling skills and cultural awareness. For young people at risk of poverty and social exclusion during the COVID-19 pandemic, Instagram hashtags were shown in a 2022 article to foster scientific education and promote remote learning.
In popular culture:
During the April 2011 Canadian party leader debate, Jack Layton, then-leader of the New Democratic Party, referred to Conservative Prime Minister Stephen Harper's crime policies as "a [sic] hashtag fail" (presumably #fail).In 2010 Kanye West used the term "hashtag rap" to describe a style of rapping that, according to Rizoh of the Houston Press, uses "a metaphor, a pause, and a one-word punch line, often placed at the end of a rhyme". Rappers Nicki Minaj, Big Sean, Drake, and Lil Wayne are credited with the popularization of hashtag rap, while the style has been criticized by Ludacris, The Lonely Island, and various music writers.On September 13, 2013, a hashtag, #TwitterIPO, appeared in the headline of a New York Times front-page article regarding Twitter's initial public offering.In 2014 Bird's Eye foods released "Mashtags", a mashed potato product with pieces shaped either like @ or #.In 2019, the British Ornithological Union included as hash character in the design of its new Janet Kear Union Medal, to represent "science communication and social media".
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Duhem–Margules equation**
Duhem–Margules equation:
The Duhem–Margules equation, named for Pierre Duhem and Max Margules, is a thermodynamic statement of the relationship between the two components of a single liquid where the vapour mixture is regarded as an ideal gas:

(∂ ln PA/∂ ln xA)T,P = (∂ ln PB/∂ ln xB)T,P

where PA and PB are the partial vapour pressures of the two constituents and xA and xB are the mole fractions of the liquid. The equation gives the relation between changes in the mole fractions and the partial pressures of the components.
Derivation:
Let us consider a binary liquid mixture of two components in equilibrium with their vapour at constant temperature and pressure. Then, from the Gibbs–Duhem equation, we have

nA dμA + nB dμB = 0    (1)

where nA and nB are the numbers of moles of the components A and B, while μA and μB are their chemical potentials. Dividing equation (1) by nA + nB gives

(nA/(nA + nB)) dμA + (nB/(nA + nB)) dμB = 0

or

xA dμA + xB dμB = 0    (2)

Now the chemical potential of any component in the mixture depends upon temperature, pressure and the composition of the mixture. Hence, if temperature and pressure are taken to be constant, the chemical potentials must satisfy

dμA = (∂μA/∂xA)T,P dxA    (3)

dμB = (∂μB/∂xB)T,P dxB    (4)

Putting these values in equation (2) gives

xA (∂μA/∂xA)T,P dxA + xB (∂μB/∂xB)T,P dxB = 0    (5)

Because the sum of the mole fractions of all components in the mixture is unity, i.e. xA + xB = 1, we have dxA + dxB = 0, so equation (5) can be re-written

xA (∂μA/∂xA)T,P = xB (∂μB/∂xB)T,P    (6)

Now the chemical potential of any component in the mixture is such that μ = μ° + RT ln P, where P is the partial pressure of that component. Differentiating this equation with respect to the mole fraction of the component gives

∂μ/∂x = RT ∂(ln P)/∂x

so that for components A and B

∂μA/∂xA = RT ∂(ln PA)/∂xA,  ∂μB/∂xB = RT ∂(ln PB)/∂xB.

Substituting these values in equation (6) gives

xA ∂(ln PA)/∂xA = xB ∂(ln PB)/∂xB

or

(∂ ln PA/∂ ln xA)T,P = (∂ ln PB/∂ ln xB)T,P.

This final equation is the Duhem–Margules equation.
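To illustrate the result, the short SymPy sketch below checks the Duhem–Margules relation for a simple two-suffix Margules activity model, chosen here purely as an example (the symbols W, PA0 and PB0 are illustrative): both sides of the relation reduce to the same expression, so their difference simplifies to zero.

```python
# Check the Duhem-Margules relation x_A d(ln P_A)/dx_A = x_B d(ln P_B)/dx_B
# for an illustrative two-suffix Margules model:
#   P_A = x_A * PA0 * exp(W * x_B**2),  P_B = x_B * PB0 * exp(W * x_A**2).
import sympy as sp

xA, W, PA0, PB0 = sp.symbols("x_A W P_A0 P_B0", positive=True)
xB = 1 - xA                                   # binary mixture: x_A + x_B = 1

PA = xA * PA0 * sp.exp(W * xB**2)             # partial pressure of component A
PB = xB * PB0 * sp.exp(W * xA**2)             # partial pressure of component B

lhs = xA * sp.diff(sp.log(PA), xA)            # x_A * d(ln P_A)/dx_A
rhs = xB * (-sp.diff(sp.log(PB), xA))         # d/dx_B = -d/dx_A since dx_B = -dx_A

print(sp.simplify(lhs - rhs))                 # -> 0, the two sides coincide
```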
Sources:
Atkins, Peter and Julio de Paula. 2002. Physical Chemistry, 7th ed. New York: W. H. Freeman and Co.
Carter, Ashley H. 2001. Classical and Statistical Thermodynamics. Upper Saddle River: Prentice Hall.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Curium**
Curium:
Curium is a transuranic, radioactive chemical element with the symbol Cm and atomic number 96. This actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope 239Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium.
Curium:
Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer.
Curium:
All known isotopes of curium are radioactive and have small critical mass for a nuclear chain reaction. They mostly emit α-particles; radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface.
History:
Though curium had likely been produced in previous nuclear experiments as well as in the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a 60-inch (150 cm) cyclotron.

Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown.

The sample was prepared as follows: first, plutonium nitrate solution was coated on a platinum foil of ~0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).

Curium-242 was made in July–August 1944 by bombarding 239Pu with α-particles, producing curium with the release of a neutron:

$$^{239}_{94}\mathrm{Pu} + {}^{4}_{2}\mathrm{He} \longrightarrow {}^{242}_{96}\mathrm{Cm} + {}^{1}_{0}\mathrm{n}$$

Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay:

$$^{242}_{96}\mathrm{Cm} \longrightarrow {}^{238}_{94}\mathrm{Pu} + {}^{4}_{2}\mathrm{He}$$

The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days.

Another isotope, 240Cm, was produced in a similar reaction in March 1945:

$$^{239}_{94}\mathrm{Pu} + {}^{4}_{2}\mathrm{He} \longrightarrow {}^{240}_{96}\mathrm{Cm} + 3\,{}^{1}_{0}\mathrm{n}$$

The α-decay half-life of 240Cm was correctly determined as 26.7 days.

The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when a listener asked whether any new transuranic element besides plutonium and neptunium had been discovered during the war. The discovery of curium (242Cm and 240Cm), its production, and its compounds was later patented listing only Seaborg as the inventor.
History:
The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin: "As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium, with its seven 4f electrons in the regular rare earth series. On this basis element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored."

The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample, 30 µg of curium-242 hydroxide, at the University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3, providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium.
Characteristics:
Physical A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling those of gadolinium. Its melting point of 1344 °C is significantly higher than those of the preceding elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm3, curium is lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of the two crystalline forms of curium, α-Cm is the more stable at ambient conditions. It has hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressures above 23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fm3m and lattice constant a = 493 pm. On further compression to 43 GPa, curium adopts an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III.

Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism over the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, while GdP, GdAs and GdSb show antiferromagnetic ordering.

In accordance with the magnetic data, the electrical resistivity of curium increases with temperature – roughly doubling between 4 K and 60 K – and then remains nearly constant up to room temperature. The resistivity increases significantly over time (~10 µΩ·cm/h) due to self-damage of the crystal lattice by alpha decay, which makes the true resistivity of curium uncertain (~125 µΩ·cm). Curium's resistivity is similar to that of gadolinium and of the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium.

Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm depending on their environment. The fluorescence originates from transitions from the first excited state 6D7/2 to the ground state 8S7/2. Analysis of this fluorescence allows monitoring of interactions between Cm(III) ions in organic and inorganic complexes.
Characteristics:
Chemical Curium ions in solution almost always have the +3 oxidation state, the most stable oxidation state for curium. The +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. The chemical behavior of curium differs from that of the actinides thorium and uranium, and is similar to that of americium and many lanthanides. In aqueous solution, the Cm3+ ion is colorless to pale green, and the Cm4+ ion is pale yellow. The optical absorption spectrum of the Cm3+ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm, and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution, in 1978, as the curyl ion ($\mathrm{CmO}_2^{2+}$): this was prepared from beta decay of americium-242 in the americium(V) ion $^{242}\mathrm{AmO}_2^{+}$. Failure to obtain Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V).

Curium ions are hard Lewis acids and thus form their most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. Curium in its complexes commonly exhibits a 9-fold coordination environment with a tricapped trigonal prismatic molecular geometry.
Characteristics:
Isotopes About 19 radioisotopes and 7 nuclear isomers, 233Cm to 251Cm, are known; none are stable. The longest half-lives are 15.6 million years (247Cm) and 348,000 years (248Cm). Other long-lived ones are 245Cm (8500 years), 250Cm (8300 years) and 246Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are 242Cm and 244Cm with the half-lives 162.8 days and 18.1 years, respectively.
Characteristics:
All isotopes 242Cm-248Cm, and 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present.
Characteristics:
The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 grams for 245Cm, 155 grams for 243Cm and 1550 grams for 247Cm. There is significant uncertainty in these critical mass values. While the uncertainty is usually on the order of 20%, the values for 242Cm and 246Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups.

Curium is not currently used as a nuclear fuel due to its low availability and high price. 245Cm and 247Cm have very small critical masses and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for such use, due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times longer than that of plutonium-239 (used in many existing nuclear weapons).
Characteristics:
Occurrence The longest-lived isotope, 247Cm, has a half-life of 15.6 million years; so any primordial curium, that is, curium present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter 235U. Traces of curium may occur naturally in uranium minerals due to neutron capture and beta decay, though this has not been confirmed. Traces of 247Cm are also probably brought to Earth in cosmic rays, but again this has not been confirmed.

Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm.

Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed an about 4,000 times higher concentration of curium in sandy soil particles than in the water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils.

The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Curium, and other non-primordial actinides, have also been suspected to exist in the spectrum of Przybylski's Star.
Synthesis:
Isotope preparation Curium is made in small amounts in nuclear reactors, and by now only kilograms of 242Cm and 244Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for 242Cm and US$170/g for 244Cm. In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and 239Pu.
Synthesis:
Further neutron capture followed by β−-decay gives americium (241Am), which further becomes 242Cm:

$$^{239}\mathrm{Pu} \xrightarrow{2(n,\gamma)} {}^{241}\mathrm{Pu} \xrightarrow{\beta^-} {}^{241}\mathrm{Am} \xrightarrow{(n,\gamma)} {}^{242}\mathrm{Am} \xrightarrow{\beta^-} {}^{242}\mathrm{Cm}$$

For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation, resulting in a different reaction chain and the formation of 244Cm:

$$^{239}\mathrm{Pu} \xrightarrow{4(n,\gamma)} {}^{243}\mathrm{Pu} \xrightarrow{\beta^-} {}^{243}\mathrm{Am} \xrightarrow{(n,\gamma)} {}^{244}\mathrm{Am} \xrightarrow{\beta^-} {}^{244}\mathrm{Cm}$$

Curium-244 alpha decays to 240Pu, but it also absorbs neutrons, hence a small amount of heavier curium isotopes is formed. Of those, 247Cm and 248Cm are popular in scientific research because of their long half-lives. But the production rate of 247Cm in thermal-neutron reactors is low because it is prone to fission induced by thermal neutrons. Synthesis of 250Cm by neutron capture is unlikely because of the short half-life of the intermediate 249Cm (64 min), which β−-decays to the berkelium isotope 249Bk.
Synthesis:
The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desired. Curium-248 is favored for research purposes because of its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope 252Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced in this way per year. The associated reaction produces 248Cm with an isotopic purity of 97%.
Synthesis:
Another isotope, 245Cm, can be obtained for research, from α-decay of 249Cf; the latter isotope is produced in small amounts from β−-decay of 249Bk.
Synthesis:
Metal preparation Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium–URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A bis-triazinyl bipyridine complex has recently been proposed as such a reagent, one that is highly selective for curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation.

Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents.
Synthesis:
$$\mathrm{CmF}_3 + 3\,\mathrm{Li} \longrightarrow \mathrm{Cm} + 3\,\mathrm{LiF}$$

Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride.
Compounds and reactions:
Oxides Curium readily reacts with oxygen, forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:

$$4\,\mathrm{CmO}_2 \longrightarrow 2\,\mathrm{Cm}_2\mathrm{O}_3 + \mathrm{O}_2$$

Alternatively, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:

$$2\,\mathrm{CmO}_2 + \mathrm{H}_2 \longrightarrow \mathrm{Cm}_2\mathrm{O}_3 + \mathrm{H}_2\mathrm{O}$$

Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium.

Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well.
Compounds and reactions:
Halides The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions to curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4), on the other hand, is only obtained by reacting curium(III) fluoride with molecular fluorine:

$$2\,\mathrm{CmF}_3 + \mathrm{F}_2 \longrightarrow 2\,\mathrm{CmF}_4$$

A series of ternary fluorides of the form A7Cm6F31 (A = alkali metal) are known. The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further turned into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at temperatures of ~400–450 °C:

$$\mathrm{CmCl}_3 + 3\,\mathrm{NH}_4\mathrm{I} \longrightarrow \mathrm{CmI}_3 + 3\,\mathrm{NH}_4\mathrm{Cl}$$

Alternatively, one can heat curium oxide to ~600 °C with the corresponding acid (such as hydrobromic acid for curium bromide). Vapor-phase hydrolysis of curium(III) chloride gives curium oxychloride:

$$\mathrm{CmCl}_3 + \mathrm{H}_2\mathrm{O} \longrightarrow \mathrm{CmOCl} + 2\,\mathrm{HCl}$$

Chalcogenides and pnictides Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature.
Compounds and reactions:
Organocurium compounds and biological aspects Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet.

Formation of complexes of the type Cm(n-C3H7-BTP)3 (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine) in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and other actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying the interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order of ~0.1 ms) and spectrum of the fluorescence.

Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, but no evidence for incorporation of curium into them.
Applications:
Radionuclides Curium is one of the most radioactive isolable elements. Its two most common isotopes 242Cm and 244Cm are strong alpha emitters (energy 6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price, around 2000 USD/g. 243Cm with a ~30-year half-life and good energy yield of ~1.6 W/g could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from radioactive decay products. As an α-emitter, 244Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus a lot of neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits 500 times more neutrons, and its higher gamma emission requires a shield that is 20 times thicker: 2 inches (51 mm) of lead for a 1 kW source, compared to 0.1 inches (2.5 mm) for 238Pu. Therefore, this use of curium is currently considered impractical.

A more promising use of 242Cm is for making 238Pu, a better radioisotope for thermoelectric generators such as in heart pacemakers. The alternate routes to 238Pu use the (n,γ) reaction of 237Np, or deuteron bombardment of uranium, though both reactions always produce 236Pu as an undesired by-product since the latter decays to 232U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yields isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the 60-inch (150 cm) cyclotron at Berkeley:

$$^{242}_{96}\mathrm{Cm} + {}^{4}_{2}\mathrm{He} \longrightarrow {}^{245}_{98}\mathrm{Cf} + {}^{1}_{0}\mathrm{n}$$

Only about 5,000 atoms of californium were produced in this experiment.

The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent, nuclear fuel.
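The quoted specific powers follow directly from the isotopes' half-lives and decay energies; below is a rough back-of-the-envelope sketch in Python. The ~6.1 MeV and ~5.8 MeV per-decay energies are approximate values assumed here, not taken from this article.

```python
import math

N_A = 6.02214e23         # Avogadro constant, atoms/mol
MEV_TO_J = 1.602177e-13  # joules per MeV

def specific_power_w_per_g(half_life_s, molar_mass_g, decay_energy_mev):
    """Decay heat per gram: P = lambda * N * E, with lambda = ln(2) / t_half."""
    decay_constant = math.log(2) / half_life_s   # decays per second per atom
    atoms_per_gram = N_A / molar_mass_g
    return decay_constant * atoms_per_gram * decay_energy_mev * MEV_TO_J

# 242Cm: 162.8-day half-life, ~6.1 MeV released per alpha decay (assumed value)
print(specific_power_w_per_g(162.8 * 86400, 242, 6.1))          # ≈ 120 W/g
# 244Cm: 18.1-year half-life, ~5.8 MeV released per alpha decay (assumed value)
print(specific_power_w_per_g(18.1 * 365.25 * 86400, 244, 5.8))  # ≈ 2.8 W/g
```

The short half-life of 242Cm is what drives its specific power roughly 40 times higher than that of 244Cm despite nearly identical decay energies.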
Applications:
X-ray spectrometer The most practical application of 244Cm (though rather limited in total volume) is as an α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory, to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes, but with a 242Cm source.

An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium.
Safety:
Due to its radioactivity, curium and its compounds must be handled in appropriate laboratories under special arrangements. While curium itself mostly emits α-particles, which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed into the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of 244Cm in soluble form is 0.3 μCi. Intravenous injection of 242Cm- and 244Cm-containing solutions into rats increased the incidence of bone tumors, and inhalation promoted lung and liver cancer.

Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes 245Cm–248Cm have decay times of thousands of years and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.
**Primordial cyst**
Primordial cyst:
A primordial cyst is a developmental odontogenic cyst. It is found in an area where a tooth should have formed but is missing. Primordial cysts most commonly arise in the area of the mandibular third molars. Under the microscope, the cyst resembles an odontogenic keratocyst (also called a keratocystic odontogenic tumor): the lesion displays a parakeratinized epithelium with palisading basal epithelial cells.
Primordial cyst:
The term "Primordial cyst" is considered an outdated term and should be avoided. Most "primordial cysts" are actually Keratocyst odontogenic tumors (KOT's).
**Cornhole**
Cornhole:
Bags (also known regionally as sack toss or cornhole) is a lawn game popular in North America in which players or teams take turns throwing fabric bean bags at a raised, angled board with a hole in its far end. The goal of the game is to score points by either landing a bag on the board (one point) or putting a bag through the hole (three points).
History:
The game was first described in Heyliger de Windt's 1883 patent for "Parlor Quoits", which displays most of the features of modern cornhole, but uses a square hole. Quoits is a game similar to horseshoes, played by throwing steel discs at a metal spike. Several earlier "parlor quoits" patents had sought to recreate quoit gameplay in an indoor environment, but De Windt's was the first to use bean bags and a slanted board with a hole as the target.
History:
He sold the rights to the game to a Massachusetts toy manufacturer which marketed a version of it under the name "Faba Baga". Unlike modern cornhole, which has one hole and one size of bags, a Faba Baga board had two different-sized holes, worth different point values, and provided each player with one extra-large bag per round, which could score double points.
History:
In September 1974, Popular Mechanics magazine published an article written by Carolyn Farrell about a similar game called "bean-bag bull's-eye." Bean-bag bull's-eye was played on a board the same width as modern cornhole boards (24"), but only 36" long as opposed to the 48" length used in cornhole. The hole was the same diameter (6") but was centered 8" (rather than 9") from the back of the board. Each player threw two bags, weighing eight ounces each, "in succession". The boards in bean-bag bull's-eye were placed "about 30 ft. apart for adults, 10 ft. for kids." Scoring was essentially the same as that used in cornhole (three points for a bag in the hole, one point for a bag remaining on the board) and also used cancellation scoring.
History:
In the Chicago area, a similar game is referred to as "bags", but uses rectangular bags. The game spread in Chicago, Illinois, and the Northwest region of Indiana in the late 1970s and early 1980s, perhaps due to the Popular Mechanics article mentioned above. Cornhole as it is now known originated and gained popularity on Cincinnati's west side (near Ferguson Avenue) in the 1980s and spread to surrounding areas in Kentucky and Southeast Indiana.
History:
Tournaments The American Cornhole League (ACL) was founded in 2015 by Stacey Moore. According to the ACL's website, it promotes and develops cornhole as a sport on every level, and created software and apps to manage cornhole leagues, tournaments, special events, and player development.

The American Cornhole Organization (ACO) was established in 2005 by Frank Geers and is headquartered in Milford, Ohio. As of August 1, 2019, the ACO claimed on its website to be the "governing body for the sport of cornhole".

The American Cornhole Association (ACA) is an organization whose sole mission is to help cornhole players enjoy the game of cornhole. According to its website, "[o]ne of the most important ways to achieve this goal is for people to have high-quality equipment to play on." Accordingly, the ACA is largely focused on selling cornhole-related products and equipment rather than acting as a sanctioning body of the sport; however, it does have its own rules and does sponsor events.
Rules and format:
Equipment and court layout Cornhole matches are played with two sets of four bags (eight total), two boards and two, four, or eight players.

There are four bags to a set. Each set of bags should be distinguishable from the other, usually by using different colors. The American Cornhole League's rules call for double-seamed fabric bags measuring 6 by 6 inches (150 by 150 mm) and weighing 15.5 to 16.5 ounces (440 to 470 g). Although bags used to be filled with preserved corn kernels (hence "cornhole") or dried beans, bags are now usually filled with plastic resin or other materials that will maintain a consistent weight and shape over many throws without deforming. Bags are usually dual-sided, with each side of the bag being a different material that can affect grip and react faster or slower on the board's surface. Faster bags are often preferred in humid conditions when bags will not slide as readily. Additionally, professional players may opt for different materials depending on their personal throwing styles. Players with a lower, faster throw may use more rotation and prefer a slower bag material, whereas players with higher, slower throws may use less rotation and prefer a more reactive bag.
Rules and format:
Each board is 2 by 4 feet (0.61 by 1.22 m), with a 6-inch (150 mm) diameter hole. The hole's center is positioned 9 inches (230 mm) down from the center of the top edge of the board. Each board is angled with the top edge of the playing surface 12 inches (300 mm) above the ground, and the bottom edge 3–4 inches (76–102 mm) above the ground. A standard court places the two boards 33 feet (10 m) or 27 feet (8.2 m) apart, measuring from the bottom edge of the boards. Different (usually shorter) distances may be used if space is limited or if younger players are participating. Some smaller versions of the game, with scaled-down boards, bags, and holes are available specifically for children.
Rules and format:
The areas immediately to the left and right of the boards are the pitcher's boxes. The line (either drawn or imaginary) extending from the bottom edge of the board in both the left and right direction is the foul line. When throwing the bags, players cannot step past the foul line or else the throw does not count.
Gameplay A cornhole match is separated into innings (or frames). During each inning, each player or team will throw their designated four bags. The manner in which the bags are thrown depends on which format of cornhole is being played: singles (1 vs. 1), doubles (2 vs. 2), or crew (4 vs. 4).
Rules and format:
In singles (1 vs. 1), both players throw their four bags while standing on opposite sides of the same board (left vs. right pitcher's box), alternating throws between the two players. After all eight bags are thrown, both players walk to the opposite board, while remaining in their lane, to tally the score. To begin the next inning, both players turn around to throw at the other board in the same manner. The effect of this is that by always staying in their respective lane, the two players will alternate each inning throwing from the left vs. right pitcher's box.

In doubles (2 vs. 2), one partner from each team stands in the left pitcher's box of one board while the other partner stands in the right pitcher's box of the opposite board. Thus, each team's partners are on opposite ends, facing each other, both in the same lane. From here, gameplay is similar to singles: the two opponents at one board alternate throwing their four bags at the other board, after which a mid-inning score is tallied; then their partners at that board alternate throwing their team's four bags back at the other board, after which the final inning score can be tallied. In doubles, players may not change sides, i.e. one partner will throw from the left pitcher's box of one board and the other from the right pitcher's box of the other board for the entire game.
Rules and format:
In the crew format (4 vs. 4), play is identical to doubles, but with two teammates at each of the two boards, one pair in the left pitcher's box of one board and the other pair in the right pitcher's box of the opposite board, each pair facing each other, in the same lane. Instead of each partner pitching four bags per inning (as in doubles), in crew each teammate pitches two bags per inning, again alternating throws both with the opposing team (as in singles and doubles) and with the player's teammate who is standing with them at the same board.
Rules and format:
Note that in doubles and crew, the score for any inning is based on eight throws per team, as opposed to four throws per player in singles.
Rules and format:
In all formats, the pitcher must throw the bag within 20 seconds. The time begins when the pitcher is inside the pitcher's box with an intent to throw. The first pitch of an inning goes to whichever player or team scored in the previous inning. If neither player or team scored in the previous inning, then whichever pitched first in the previous inning will again pitch first in the next inning. The first pitch of the first inning can be decided by a coin toss.
Rules and format:
A legal pitch must be tossed while the pitcher's feet are within the pitcher's box. If the pitcher begins the throw with a foot beyond the foul line or otherwise steps beyond the foul line before releasing the bag, the pitch is a foul and does not count. A foul throw cannot be re-taken and the bag is removed from play before continuing. If a foul bag moves other bags in the field of play, those bags are returned to their prior position before continuing, including if a bag was moved into the hole. If a bag lands only partially on the board and is also touching the ground, it does not count and is removed before continuing.
Rules and format:
Scoring To score points, bags must be on the surface of the board or fall through the hole. To score three points, a bag may fall directly into the hole, slide into the hole after hitting the board, or be knocked into the hole by another bag. A bag remaining on the board scores one point. A bag partially on the board and partially on the ground ("dirt bag") does not count and should be removed before the next throw.
Rules and format:
In cornhole, cancellation scoring is used. When the scores are tallied at the end of an inning, whichever player or team scores higher is awarded points equal to the difference between both sides. For example, if Team A scores 12 points in an inning and Team B scores 10 points, then Team A is awarded two points (12 minus 10); whereas if Team A and Team B both score 12 points, the difference is zero, and no one scores. Play continues until one player or team reaches or exceeds 21 points at the end of an inning. By using cancellation scoring, it is only possible for one side (or neither side) to score in any inning, so match ties are impossible.
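A minimal Python sketch of this inning arithmetic (the function name and return convention are illustrative):

```python
def cancellation_score(team_a_points: int, team_b_points: int) -> tuple[int, int]:
    """Points awarded to (Team A, Team B) for one inning under cancellation scoring."""
    diff = team_a_points - team_b_points
    return (diff, 0) if diff > 0 else (0, -diff)

print(cancellation_score(12, 10))  # (2, 0): Team A is awarded the 12 - 10 = 2 difference
print(cancellation_score(12, 12))  # (0, 0): equal tallies cancel and no one scores
```

Because one side's award is always zero, running totals can only ever move for one team per inning, which is why match ties are impossible.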
Rules and format:
Different variations in scoring or house rules are sometimes used. Sometimes, a bag hanging over the hole, but which has not fallen through, is scored as two points. Other variations include requiring a team to reach exactly 21 points, without going over, to win. If a team exceeds 21 points after an inning (called "busting"), different punishments may be used, such as automatically returning to 15 points, returning to the team's prior score, returning to the prior score minus one, etc. In some versions, if a team "busts" three times, their opponents automatically win the match. A sketch of one such house rule follows below.
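The fragment below assumes the "return to 15 on bust" variant named above; the reset value and the exact bust condition vary from group to group.

```python
def apply_inning(total: int, inning_points: int, bust_reset: int = 15) -> int:
    """Add an inning's cancellation-scored points; on exceeding 21, reset to bust_reset."""
    total += inning_points
    if total > 21:
        return bust_reset  # "busting": the score drops back to the agreed value
    return total

score = 0
for pts in [7, 9, 6]:   # points won inning by inning under cancellation scoring
    score = apply_inning(score, pts)
print(score)            # 7 -> 16 -> busts at 22 -> back to 15
```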
Rules and format:
Strategy Gameplay strategy varies by player and skill level. At the professional level, players can easily slide all four bags into the hole if no bag blocks the path. Defensive strategies are often employed to slow down gameplay or force opponents to make difficult decisions. Defensive plays might include throwing a blocker bag that rests in front of the hole, thereby forcing an opponent to either slide through the blocker bag to reach the hole, throw another blocker behind the bag, or attempt a risky airmail shot over the bag aiming directly for the hole without touching the board.
Terminology:
The following is a list of terms commonly used in cornhole: Airmail: a bag that does not slide or bounce on the board but goes directly into the hole, usually over an opponent's blocker bag.
Back door, jumper, dirty rollup: a bag that goes over the top of a blocker and into the hole.
Backstop: a bag that lands past the hole but remains on the board creating a backboard for a slider to knock into without going off the board.
Blocker: a bag that lands in front of the hole, blocking the hole from an opponent's slide shot.
Busting: an unofficial rule that sends a player's score back down to a predetermined number if their score at the end of an inning exceeds 21.
Cornfusion: when players or teams cannot agree on the scoring of an inning.
Cornhole or Drano: a bag that falls in the hole and is worth three points; the alternative name is a reference to a trademark, that of a clog-clearing product.
Cornholio: same as "cornhole", depending on the region; named for the alter-ego of the character Beavis from the MTV animated series Beavis and Butt-Head.
Dirt bag: a bag that is on the ground or is hanging off the board and touching the ground.
Terminology:
Frame: an inning, a single round during which a player or team and their opponent(s) all throw their bags.
Four bagger, grand bag: a sequence wherein a player makes all four bags in the hole during an inning; more specifically, all bags have to go into the hole one after another by the player in a single inning, i.e. the bags cannot later be knocked from the board's surface into the hole during the inning, either by the player or their opponent; there is a tradition in some areas where any social player who puts all four bags in the hole in a single inning gets to sign the board, often with some type of ceremony and recognition.
Terminology:
Flop bag, floppy bag: type of toss that does not spin the bag horizontally or vertically, a bag without rotation or spin.
Hammer: when one or more hangers (see below) are around the hole, a hammer can be used; a hammer is a bag thrown as an airmail bag with a high arc in an attempt to move hanger bags into the hole along with it.
Hanger: a bag on the lip of the hole close to falling in.
Honors: the player or team who tosses first, resulting from the team scoring in the previous inning or winning the coin toss before the first inning.
Hooker: a bag that hits the board and while hooking or curving around a blocker goes into the hole.
Jumper: a bag that strikes another bag on the board causing it to jump up and into the hole.
Push, wash: when each player or team obtains an identical score in an inning, resulting in no overall score change.
Short bag: when a bag lands on the ground just before the board.
Skunk, whitewash, shutout: a game that ends in a score of 21 (or more) to zero; by some unofficial rules a game may be called once a shutout score of at least 11–0 is reached.
Slide, slider: a bag that lands in front of the hole and slides in.
Swish: a bag that goes directly in the hole without touching the board (see also: "airmail").
Woody: any bag that has been pitched and remains on the board's surface at the end of the inning (scoring one point).
**Capacity factor**
Capacity factor:
The net capacity factor is the unitless ratio of actual electrical energy output over a given period of time to the theoretical maximum electrical energy output over that period. The theoretical maximum energy output of a given installation is defined as that due to its continuous operation at full nameplate capacity over the relevant period. The capacity factor can be calculated for any electricity producing installation, such as a fuel consuming power plant or one using renewable energy, such as wind or the sun. The average capacity factor can also be defined for any class of such installations, and can be used to compare different types of electricity production.
Capacity factor:
The actual energy output during that period and the capacity factor vary greatly depending on a range of factors. The capacity factor can never exceed the availability factor, or uptime during the period. Uptime can be reduced due to, for example, reliability issues and maintenance, scheduled or unscheduled. Other factors include the design of the installation, its location, the type of electricity production and with it either the fuel being used or, for renewable energy, the local weather conditions. Additionally, the capacity factor can be subject to regulatory constraints and market forces, potentially affecting both its fuel purchase and its electricity sale.
Capacity factor:
The capacity factor is often computed over a timescale of a year, averaging out most temporal fluctuations. However, it can also be computed for a month to gain insight into seasonal fluctuations. Alternatively, it can be computed over the lifetime of the power source, both while operational and after decommissioning. A capacity factor can also be expressed and converted to full load hours.
Sample calculations:
Nuclear power plant Nuclear power plants are at the high end of the range of capacity factors, ideally reduced only by the availability factor, i.e. maintenance and refueling. The largest nuclear plant in the US, Palo Verde Nuclear Generating Station, has between its three reactors a nameplate capacity of 3,942 MW. In 2010 its annual generation was 31,200,000 MWh, leading to a capacity factor of:

31,200,000 MW·h / (365 days × 24 hours/day × 3,942 MW) = 0.904 = 90.4%

Each of Palo Verde's three reactors is refueled every 18 months, with one refueling every spring and fall. In 2014, a refueling was completed in a record 28 days, compared to the 35 days of downtime that the 2010 capacity factor corresponds to.
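Since the definition is a single ratio, the computation is a one-liner; a minimal Python sketch reproducing the Palo Verde figure above (the function name is illustrative):

```python
def capacity_factor(energy_mwh: float, nameplate_mw: float, hours: float = 365 * 24) -> float:
    """Actual energy output divided by the maximum possible output over the period."""
    return energy_mwh / (nameplate_mw * hours)

# Palo Verde, 2010: 31,200,000 MWh from a 3,942 MW nameplate capacity
print(f"{capacity_factor(31_200_000, 3942):.1%}")  # 90.4%
```

The same function reproduces the wind, hydroelectric, and photovoltaic examples below, and multiplying the resulting factor by 8,760 hours per year gives the equivalent full load hours mentioned earlier.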
Sample calculations:
In 2019, Prairie Island 1 was the US unit with the highest factor and actually reached 104.4%.
Wind farm The Danish offshore wind farm Horns Rev 2 has a nameplate capacity of 209.3 MW.
Sample calculations:
As of January 2017 it has produced 6,416 GWh since its commissioning 7 years ago, i.e. an average annual production of 875 GWh/year and a capacity factor of:

875,000 MW·h / (365 days × 24 hours/day × 209.3 MW) = 0.477 = 47.7%

Sites with lower capacity factors may be deemed feasible for wind farms; for example, the onshore 1 GW Fosen Vind, which as of 2017 is under construction in Norway, has a projected capacity factor of 39%. Feasibility calculations may be affected by seasonality. For example, in Finland the capacity factor during the cold winter months is more than double that of July. While the annual average in Finland is 29.5%, the high demand for heating energy correlates with the higher capacity factor during the winter.
Sample calculations:
Certain onshore wind farms can reach capacity factors of over 60%; for example, the 44 MW Eolo plant in Nicaragua had a net generation of 232.132 GWh in 2015, equivalent to a capacity factor of 60.2%, while United States annual capacity factors from 2013 through 2016 ranged from 32.2% to 34.7%.

Since the capacity factor of a wind turbine measures actual production relative to possible production, it is unrelated to Betz's coefficient of 16/27 ≈ 59.3%, which limits production vs. energy available in the wind.
Sample calculations:
Hydroelectric dam As of 2017 the Three Gorges Dam in China is, with its nameplate capacity of 22,500 MW, the largest power generating station in the world by installed capacity. In 2015 it generated 87 TWh, for a capacity factor of:

87,000,000 MW·h / (365 days × 24 hours/day × 22,500 MW) ≈ 0.45 = 45%

Hoover Dam has a nameplate capacity of 2080 MW and an annual generation averaging 4.2 TW·h. (The annual generation has varied between a high of 10.348 TW·h in 1984 and a low of 2.648 TW·h in 1956.)
Sample calculations:
Taking the average figure for annual generation gives a capacity factor of:

4,200,000 MW·h / (365 days × 24 hours/day × 2,080 MW) = 0.23 = 23%

Photovoltaic power station At the low range of capacity factors is the photovoltaic power station, which supplies power to the electricity grid from a large-scale photovoltaic system (PV system). An inherent limit to its capacity factor comes from its requirement of daylight, preferably with a sun unobstructed by clouds, smoke or smog, shade from trees and building structures. Since the amount of sunlight varies both with the time of the day and the seasons of the year, the capacity factor is typically computed on an annual basis. The amount of available sunlight is mostly determined by the latitude of the installation and the local cloud cover.
Sample calculations:
The actual production is also influenced by local factors such as dust and ambient temperature, which ideally should be low. As for any power station, the maximum possible power production is the nameplate capacity times the number of hours in a year, while the actual production is the amount of electricity delivered annually to the grid.
For example, Agua Caliente Solar Project, located in Arizona near the 33rd parallel and awarded for its excellence in renewable energy, has a nameplate capacity of 290 MW and an actual average annual production of 740 GWh/year.
Its capacity factor is thus:

740,000 MW·h / (365 days × 24 hours/day × 290 MW) = 0.291 = 29.1%

A significantly lower capacity factor is achieved by Lauingen Energy Park, located in Bavaria near the 49th parallel. With a nameplate capacity of 25.7 MW and an actual average annual production of 26.98 GWh/year, it has a capacity factor of 12.0%.
Determinants of a plant capacity factor:
There are several reasons why a plant would have a capacity factor lower than 100%. These include technical constraints, such as availability of the plant, economic reasons, and availability of the energy resource.
Determinants of a plant capacity factor:
A plant can be out of service or operating at reduced output for part of the time due to equipment failures or routine maintenance. This accounts for most of the unused capacity of base load power plants. Base load plants usually have low costs per unit of electricity because they are designed for maximum efficiency and are operated continuously at high output.
Determinants of a plant capacity factor:
Geothermal power plants, nuclear power plants, coal-fired plants and bioenergy plants that burn solid material are almost always operated as base load plants, as they can be difficult to adjust to suit demand.
A plant can also have its output curtailed or intentionally left idle because the electricity is not needed or because the price of electricity is too low to make production economical.
This accounts for most of the unused capacity of peaking power plants and load following power plants.
Peaking plants may operate for only a few hours per year or up to several hours per day.
Many other power plants operate only at certain times of the day or year because of variation in loads and electricity prices.
If a plant is only needed during the day, for example, even if it operates at full power output from 8 am to 8 pm every day (12 hours) all year long, it would only have a 50% capacity factor.
Due to low capacity factors, electricity from peaking power plants is relatively expensive because the limited generation has to cover the plant fixed costs.
A third reason is that a plant may not have the fuel available to operate all of the time. This can apply to fossil generating stations with restricted fuel supplies, but most notably applies to intermittent renewable resources.
Solar PV and wind turbines have a capacity factor limited by the availability of their "fuel", sunshine and wind respectively.
A hydroelectricity plant may have a capacity factor lower than 100% due to restriction or scarcity of water, or its output may be regulated to match the current power need, conserving its stored water for later usage.
Other reasons that a power plant may not have a capacity factor of 100% include restrictions or limitations on air permits and limitations on transmission that force the plant to curtail output.
Capacity factor of renewable energy:
For renewable energy sources such as solar power, wind power and hydroelectricity, the main reason for reduced capacity factor is generally the availability of the energy source. The plant may be capable of producing electricity, but its "fuel" (wind, sunlight or water) may not be available. A hydroelectric plant's production may also be affected by requirements to keep the water level from getting too high or low and to provide water for fish downstream. However, solar, wind and hydroelectric plants do have high availability factors, so when they have fuel available, they are almost always able to produce electricity.

When hydroelectric plants have water available, they are also useful for load following, because of their high dispatchability. A typical hydroelectric plant's operators can bring it from a stopped condition to full power in just a few minutes.
Capacity factor of renewable energy:
Wind farms are variable, due to the natural variability of the wind. For a wind farm, the capacity factor is determined by the availability of wind, the swept area of the turbine and the size of the generator. Transmission line capacity and electricity demand also affect the capacity factor. Typical capacity factors of current wind farms are between 25 and 45%. In the United Kingdom, during the five-year period from 2011 to 2019, the annual capacity factor for wind was over 30%.

Solar energy is variable because of the daily rotation of the earth, seasonal changes, and cloud cover. For example, the Sacramento Municipal Utility District observed a 15% capacity factor in 2005.
Capacity factor of renewable energy:
However, according to the SolarPACES programme of the International Energy Agency (IEA), solar power plants designed for solar-only generation are well matched to summer noon peak loads in areas with significant cooling demands, such as Spain or the south-western United States, although in some locations solar PV does not reduce the need for generation or network upgrades, given that air conditioner peak demand often occurs in the late afternoon or early evening when solar output is reduced. SolarPACES states that by using thermal energy storage systems, the operating periods of solar thermal power (CSP) stations can be extended to become dispatchable (load following).

Geothermal has a higher capacity factor than many other power sources, and geothermal resources are generally available all the time.
Capacity factors by energy source:
Worldwide Nuclear power 88.7% (2006–2012 average of US plants).
Hydroelectricity, worldwide average 44%, range of 10–99% depending on water availability (with or without regulation via storage dam).
Wind farms 20–40%.
CSP solar with storage and natural gas backup in Spain 63%, California 33%.
Photovoltaic solar in Germany 10%, Arizona 19%, Massachusetts 13.35% (8-year average as of July 2018).
United States According to the US Energy Information Administration (EIA), from 2013 to 2017 the capacity factors of utility-scale generators were as follows: However, these values often vary significantly by month.
United Kingdom The following figures were collected by the Department of Energy and Climate Change on the capacity factors for various types of plants in the UK grid:
**Acoustic theory**
Acoustic theory:
Acoustic theory is a scientific field that relates to the description of sound waves. It derives from fluid dynamics. See acoustics for the engineering approach.
Acoustic theory:
For sound waves of any magnitude of a disturbance in velocity, pressure, and density we have

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0 \quad\text{(Conservation of Mass)}$$

$$\rho\frac{\partial\mathbf{v}}{\partial t} + \rho(\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla p = 0 \quad\text{(Equation of Motion)}$$

In the case that the fluctuations in velocity, density, and pressure are small, we can approximate these as

$$\frac{\partial\rho'}{\partial t} + \rho_0\,\nabla\cdot\mathbf{v} = 0, \qquad \frac{\partial\mathbf{v}}{\partial t} + \frac{1}{\rho_0}\nabla p' = 0$$

where $\mathbf{v}(\mathbf{x},t)$ is the perturbed velocity of the fluid, $p_0$ is the pressure of the fluid at rest, $p'(\mathbf{x},t)$ is the perturbed pressure of the system as a function of space and time, $\rho_0$ is the density of the fluid at rest, and $\rho'(\mathbf{x},t)$ is the variance in the density of the fluid over space and time.
Acoustic theory:
In the case that the velocity is irrotational ($\nabla\times\mathbf{v} = 0$), we then have the acoustic wave equation that describes the system:

$$\frac{1}{c^2}\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi = 0$$

where we have

$$\mathbf{v} = -\nabla\phi, \qquad c^2 = \left(\frac{\partial p}{\partial\rho}\right)_s, \qquad p' = \rho_0\frac{\partial\phi}{\partial t}, \qquad \rho' = \frac{\rho_0}{c^2}\frac{\partial\phi}{\partial t}$$
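As an illustration of the solution family, in one spatial dimension any fixed profile travelling at speed c satisfies this wave equation. Below is a short symbolic check with Python and sympy; the profile f is an arbitrary function, an assumption of this sketch.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)
f = sp.Function('f')      # arbitrary waveform (assumption of this sketch)

phi = f(x - c * t)        # a profile translating to the right at speed c

# (1/c^2) * d^2(phi)/dt^2 - d^2(phi)/dx^2 should vanish identically
wave_residual = sp.diff(phi, t, 2) / c**2 - sp.diff(phi, x, 2)
print(sp.simplify(wave_residual))  # 0
```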
Derivation for a medium at rest:
Starting with the continuity equation and the Euler equation:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0$$

$$\rho\frac{\partial\mathbf{v}}{\partial t} + \rho(\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla p = 0$$

If we take small perturbations of a constant pressure and density:

$$\rho = \rho_0 + \rho', \qquad p = p_0 + p'$$

then the equations of the system are

$$\frac{\partial}{\partial t}(\rho_0 + \rho') + \nabla\cdot\left[(\rho_0 + \rho')\mathbf{v}\right] = 0$$

$$(\rho_0 + \rho')\frac{\partial\mathbf{v}}{\partial t} + (\rho_0 + \rho')(\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla(p_0 + p') = 0$$

Noting that the equilibrium pressure and density are constant, this simplifies to

$$\frac{\partial\rho'}{\partial t} + \rho_0\,\nabla\cdot\mathbf{v} + \nabla\cdot(\rho'\mathbf{v}) = 0$$

$$(\rho_0 + \rho')\frac{\partial\mathbf{v}}{\partial t} + (\rho_0 + \rho')(\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla p' = 0$$

A Moving Medium Starting with

$$\frac{\partial\rho'}{\partial t} + \rho_0\,\nabla\cdot\mathbf{w} + \nabla\cdot(\rho'\mathbf{w}) = 0$$

$$(\rho_0 + \rho')\frac{\partial\mathbf{w}}{\partial t} + (\rho_0 + \rho')(\mathbf{w}\cdot\nabla)\mathbf{w} + \nabla p' = 0$$

we can make these equations work for a moving medium by setting $\mathbf{w} = \mathbf{u} + \mathbf{v}$, where $\mathbf{u}$ is the constant velocity at which the whole fluid is moving before being disturbed (equivalent to a moving observer) and $\mathbf{v}$ is the fluid velocity.
Derivation for a medium at rest:
In this case the equations look very similar:

$$\frac{\partial\rho'}{\partial t} + \rho_0\,\nabla\cdot\mathbf{v} + \mathbf{u}\cdot\nabla\rho' + \nabla\cdot(\rho'\mathbf{v}) = 0$$

$$(\rho_0 + \rho')\frac{\partial\mathbf{v}}{\partial t} + (\rho_0 + \rho')(\mathbf{u}\cdot\nabla)\mathbf{v} + (\rho_0 + \rho')(\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla p' = 0$$

Note that setting $\mathbf{u} = 0$ recovers the equations at rest.
Linearized Waves:
Starting with the above given equations of motion for a medium at rest:

$$\frac{\partial\rho'}{\partial t} + \rho_0\,\nabla\cdot\mathbf{v} + \nabla\cdot(\rho'\mathbf{v}) = 0$$

$$(\rho_0 + \rho')\frac{\partial\mathbf{v}}{\partial t} + (\rho_0 + \rho')(\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla p' = 0$$

let us now take $\mathbf{v}, \rho', p'$ to all be small quantities.
Linearized Waves:
In the case that we keep terms to first order, for the continuity equation, we have the $\nabla\cdot(\rho'\mathbf{v})$ term going to 0. This similarly applies for the density perturbation times the time derivative of the velocity. Moreover, the spatial components of the material derivative go to 0. We thus have, upon rearranging the equilibrium density:

$$\frac{\partial\rho'}{\partial t} + \rho_0\,\nabla\cdot\mathbf{v} = 0, \qquad \frac{\partial\mathbf{v}}{\partial t} + \frac{1}{\rho_0}\nabla p' = 0$$

Next, given that our sound wave occurs in an ideal fluid, the motion is adiabatic, and then we can relate the small change in the pressure to the small change in the density by

$$p' = \left(\frac{\partial p}{\partial\rho_0}\right)_s \rho'$$

Under this condition, we see that we now have

$$\frac{\partial p'}{\partial t} + \rho_0\left(\frac{\partial p}{\partial\rho_0}\right)_s \nabla\cdot\mathbf{v} = 0, \qquad \frac{\partial\mathbf{v}}{\partial t} + \frac{1}{\rho_0}\nabla p' = 0$$

Defining the speed of sound of the system,

$$c \equiv \sqrt{\left(\frac{\partial p}{\partial\rho_0}\right)_s}$$

everything becomes

$$\frac{\partial p'}{\partial t} + \rho_0 c^2\,\nabla\cdot\mathbf{v} = 0, \qquad \frac{\partial\mathbf{v}}{\partial t} + \frac{1}{\rho_0}\nabla p' = 0$$

For Irrotational Fluids In the case that the fluid is irrotational, that is $\nabla\times\mathbf{v} = 0$, we can then write $\mathbf{v} = -\nabla\phi$ and thus write our equations of motion as

$$\frac{\partial p'}{\partial t} - \rho_0 c^2\,\nabla^2\phi = 0, \qquad -\nabla\frac{\partial\phi}{\partial t} + \frac{1}{\rho_0}\nabla p' = 0$$

The second equation tells us that

$$p' = \rho_0\frac{\partial\phi}{\partial t}$$

and the use of this equation in the continuity equation tells us that

$$\rho_0\frac{\partial^2\phi}{\partial t^2} - \rho_0 c^2\,\nabla^2\phi = 0$$

This simplifies to

$$\frac{1}{c^2}\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi = 0$$

Thus the velocity potential $\phi$ obeys the wave equation in the limit of small disturbances. The boundary conditions required to solve for the potential come from the fact that the velocity of the fluid must be 0 normal to the fixed surfaces of the system.
Linearized Waves:
Taking the time derivative of this wave equation, multiplying both sides by the unperturbed density, and then using the fact that $p' = \rho_0\,\partial\phi/\partial t$ tells us that

$$\frac{1}{c^2}\frac{\partial^2 p'}{\partial t^2} - \nabla^2 p' = 0$$

Similarly, we saw that $p' = \left(\frac{\partial p}{\partial\rho_0}\right)_s\rho' = c^2\rho'$. Thus we can multiply the above equation appropriately and see that

$$\frac{1}{c^2}\frac{\partial^2\rho'}{\partial t^2} - \nabla^2\rho' = 0$$

Thus, the velocity potential, the pressure, and the density all obey the wave equation, and we only need to solve one such equation to determine the other two. In particular, we have

$$\mathbf{v} = -\nabla\phi, \qquad p' = \rho_0\frac{\partial\phi}{\partial t}, \qquad \rho' = \frac{\rho_0}{c^2}\frac{\partial\phi}{\partial t}$$

For a moving medium

Again, we can derive the small-disturbance limit for sound waves in a moving medium. Starting with

$$\frac{\partial\rho'}{\partial t} + \rho_0\nabla\cdot\mathbf{v} + \mathbf{u}\cdot\nabla\rho' + \nabla\cdot(\rho'\mathbf{v}) = 0$$

$$(\rho_0 + \rho')\frac{\partial\mathbf{v}}{\partial t} + (\rho_0 + \rho')(\mathbf{u}\cdot\nabla)\mathbf{v} + (\rho_0 + \rho')(\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla p' = 0$$

we can linearize these into

$$\frac{\partial\rho'}{\partial t} + \rho_0\nabla\cdot\mathbf{v} + \mathbf{u}\cdot\nabla\rho' = 0, \qquad \frac{\partial\mathbf{v}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{v} + \frac{1}{\rho_0}\nabla p' = 0$$

For irrotational fluids in a moving medium

Given the linearized equations above, if we make the previous assumptions of the fluid being ideal and the velocity being irrotational, then we have

$$p' = \left(\frac{\partial p}{\partial\rho_0}\right)_s\rho' = c^2\rho', \qquad \mathbf{v} = -\nabla\phi$$

Under these assumptions, our linearized sound equations become

$$\frac{1}{c^2}\frac{\partial p'}{\partial t} - \rho_0\nabla^2\phi + \frac{1}{c^2}\mathbf{u}\cdot\nabla p' = 0, \qquad -\frac{\partial}{\partial t}(\nabla\phi) - (\mathbf{u}\cdot\nabla)(\nabla\phi) + \frac{1}{\rho_0}\nabla p' = 0$$

Importantly, since $\mathbf{u}$ is a constant, we have $(\mathbf{u}\cdot\nabla)(\nabla\phi) = \nabla\left[(\mathbf{u}\cdot\nabla)\phi\right]$, and then the second equation tells us that

$$\frac{1}{\rho_0}\nabla p' = \nabla\left[\frac{\partial\phi}{\partial t} + (\mathbf{u}\cdot\nabla)\phi\right]$$

or just that

$$p' = \rho_0\left[\frac{\partial\phi}{\partial t} + (\mathbf{u}\cdot\nabla)\phi\right]$$

Now, when we use this relation together with the fact that $\frac{1}{c^2}\frac{\partial p'}{\partial t} - \rho_0\nabla^2\phi + \frac{1}{c^2}\mathbf{u}\cdot\nabla p' = 0$, and cancel and rearrange terms, we arrive at

$$\frac{1}{c^2}\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi + \frac{1}{c^2}\frac{\partial}{\partial t}\left[(\mathbf{u}\cdot\nabla)\phi\right] + \frac{1}{c^2}\mathbf{u}\cdot\nabla\left[\frac{\partial\phi}{\partial t}\right] + \frac{1}{c^2}\mathbf{u}\cdot\nabla\left[(\mathbf{u}\cdot\nabla)\phi\right] = 0$$

We can write this in a familiar form as

$$\left[\frac{1}{c^2}\left(\frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla\right)^2 - \nabla^2\right]\phi = 0$$

This differential equation must be solved with the appropriate boundary conditions. Note that setting $\mathbf{u} = 0$ recovers the wave equation for a medium at rest. Regardless, upon solving this equation for a moving medium, we then have

$$\mathbf{v} = -\nabla\phi, \qquad p' = \rho_0\left(\frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla\right)\phi, \qquad \rho' = \frac{\rho_0}{c^2}\left(\frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla\right)\phi$$
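As an illustrative consequence (not in the original text), substituting a plane wave $\phi = A\exp[i(\mathbf{k}\cdot\mathbf{x} - \omega t)]$ into the convected wave equation above yields the Doppler-shifted dispersion relation for sound in a uniformly moving medium:

$$\left[\frac{1}{c^2}\left(\frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla\right)^2 - \nabla^2\right]\phi = \left[-\frac{(\omega - \mathbf{u}\cdot\mathbf{k})^2}{c^2} + |\mathbf{k}|^2\right]\phi = 0 \quad\Longrightarrow\quad \omega = \mathbf{u}\cdot\mathbf{k} \pm c\,|\mathbf{k}|$$

Setting $\mathbf{u} = 0$ recovers the usual relation $\omega = c\,|\mathbf{k}|$; the extra term $\mathbf{u}\cdot\mathbf{k}$ is the frequency shift seen by an observer at rest relative to whom the medium moves.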
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Folch solution**
Folch solution:
A Folch solution is a solution containing chloroform and methanol, usually in a 2:1 (vol/vol) ratio. One of its uses is in separating polar from nonpolar compounds, for example separating nonpolar lipids from polar proteins and carbohydrates in blood serum.
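As a trivial illustration (not part of the original text), the 2:1 (vol/vol) ratio translates into component volumes as follows; the 30 mL total and the function name are arbitrary choices for the example.

```python
def folch_volumes(total_ml: float, ratio=(2.0, 1.0)):
    """Return (chloroform_ml, methanol_ml) for a Folch solution of the given total volume."""
    parts = sum(ratio)
    return total_ml * ratio[0] / parts, total_ml * ratio[1] / parts

chloroform_ml, methanol_ml = folch_volumes(30.0)
print(f"chloroform: {chloroform_ml:.1f} mL, methanol: {methanol_ml:.1f} mL")
# prints: chloroform: 20.0 mL, methanol: 10.0 mL
```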
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Fire adaptations**
Fire adaptations:
Fire adaptations are traits of plants and animals that help them survive wildfire or use resources created by wildfire. These traits can help plants and animals increase their survival rates during a fire and/or reproduce offspring after a fire. Both plants and animals have multiple strategies for surviving and reproducing after fire. Plants in wildfire-prone ecosystems often survive through adaptations to their local fire regime. Such adaptations include physical protection against heat, increased growth after a fire event, and flammable materials that encourage fire and may eliminate competition.
Fire adaptations:
For example, plants of the genus Eucalyptus contain flammable oils that encourage fire and hard sclerophyll leaves that resist heat and drought, ensuring their dominance over less fire-tolerant species. Dense bark, shedding of lower branches, and high water content in external structures may also protect trees from rising temperatures. Fire-resistant seeds and reserve shoots that sprout after a fire encourage species preservation, as embodied by pioneer species. Heat can trigger the release of seeds stored in protective woody structures, a process called serotiny, while exposure to smoke from burning plants promotes germination in other types of plants by inducing the production of an orange butenolide.
Plant adaptations to fire:
Unlike animals, plants cannot move to escape a fire. However, plants have their own ways to survive a fire event or to recover after one. These strategies can be classified into three types: resist (above-ground parts survive the fire), recover (evade mortality by resprouting), and recruit (seeds germinate after the fire). Fire acts as a filter that selects for different fire-response traits.
Plant adaptations to fire:
Resist: thick bark

Fire impacts plants most directly via heat damage, although newer studies indicate that hydraulic failure, in addition to scorching, kills trees during a fire: high temperatures cut the water supply to the canopy and cause the death of the tree. Thick bark can protect plants by keeping the living tissue of the stem away from high temperatures; under its protection, living tissue does not come into direct contact with fire, and the plant's chance of survival increases. Heat resistance depends on the bark's thermal diffusivity (a property of the species) and on its thickness, increasing exponentially with thickness. Thick bark is common in species adapted to surface or low-severity fire regimes. In contrast, plants in crown or high-severity fire regimes usually have thinner bark, because investing in thick bark confers little survival advantage when the entire crown burns.
Plant adaptations to fire:
Self-pruning branches

Self-pruning is another trait that helps plants resist fire. Self-pruned branches reduce the chance that a surface fire will reach the canopy, because ladder fuels are removed. Self-pruning branches are common in surface or low-severity fire regimes.
Plant adaptations to fire:
Recover: epicormic buds

Epicormic buds are dormant buds located under the bark or even deeper. These buds can become active and grow in response to environmental stress such as fire or drought, helping plants recover their canopies rapidly after a fire. Eucalypts, for example, are known for this trait: the bark may be removed or burnt by severe fires, but the buds are still able to sprout and regrow the canopy. This trait is common in surface or low-severity fire regimes.
Plant adaptations to fire:
Lignotubers

Not all plants have thick bark or epicormic buds, but some shrubs and trees have buds located below ground that are able to resprout even when the stems are killed by fire. Lignotubers, woody structures around the roots that contain many dormant buds and nutrient reserves such as starch, are very helpful for recovery after a fire: if the stem is damaged, the buds sprout and form basal shoots. Species with lignotubers are often seen in crown or high-severity fire regimes (e.g., chamise in chaparral).
Plant adaptations to fire:
Clonal spread

Clonal spread is usually triggered by fire or by other disturbances that remove above-ground stems. Buds from the mother plant can develop into basal shoots, or into suckers arising from roots some distance from the plant. Aspen and Californian redwoods are two examples of plants that spread clonally. In clonal communities, all individuals have developed vegetatively from a single ancestor rather than being produced sexually. For example, Pando is a large clonal aspen colony in Utah that developed from a single quaking aspen tree; it currently comprises more than 40,000 trunks, and its root system is about 80,000 years old.
Plant adaptations to fire:
Recruit: serotiny

Serotiny is a seed dispersal strategy in which the release of seeds is stimulated by an external trigger (such as fire) rather than by natural maturation. In serotinous plants, seeds are protected by woody structures during the fire and germinate afterwards. This trait is found in conifer genera in both the northern and southern hemispheres as well as in flowering plant families (e.g., Banksia). Serotiny is a typical trait of crown or high-severity fire regimes.
Plant adaptations to fire:
Fire-stimulated germination

Many species persist in a long-lived soil seed bank and are stimulated to germinate by thermal scarification or smoke exposure.
Fire-stimulated flowering

A less common strategy is fire-stimulated flowering.
Dispersal

Species with very high wind-dispersal capacity and seed production are often the first arrivals after a fire or other soil disturbance. For example, fireweed is common in burned areas in the western United States.
Plants and fire regimes:
The fire regime exerts a strong filter on which plant species may occur in a given locality. For example, trees in high-severity regimes usually have thin bark, while trees in low-severity regimes typically have thick bark. Another example is that trees in surface fire regimes tend to have epicormic buds rather than basal buds. On the other hand, plants can also alter fire regimes: oaks, for example, produce a litter layer that slows the spread of fire, while pines create a flammable duff layer that increases it. More profoundly, the composition of species can influence fire regimes even when the climate remains unchanged. For example, mixed stands of conifers and chaparral can be found in the Cascade Mountains; the conifers burn with low-severity surface fires while the chaparral burns with high-severity crown fires. Ironically, some trees can "use" fire to help them survive competition with other trees. Pines, for example, can produce flammable litter layers, which give them an advantage in competition with other, less fire-adapted, species. Grasslands in Western Sabah, Malaysian pine forests, and Indonesian Casuarina forests are believed to have resulted from previous periods of fire. Chamise deadwood litter is low in water content and flammable, and the shrub quickly sprouts after a fire. Cape lilies lie dormant until flames brush away the covering and then blossom almost overnight. Sequoias rely on periodic fires to reduce competition, release seeds from their cones, and clear the soil and canopy for new growth. Caribbean pine in Bahamian pineyards has adapted to and relies on low-intensity surface fires for survival and growth. An optimum fire frequency for growth is every 3 to 10 years; too-frequent fires favor herbaceous plants, while infrequent fires favor species typical of Bahamian dry forests.
Evolution of fire survival traits:
Phylogenetic studies indicate that fire-adaptive traits have been evolving for a long time (tens of millions of years) and that these traits are associated with the environment. In habitats with regular surface fires, similar species developed traits such as thick bark and self-pruning branches. In crown fire regimes, pines have evolved traits such as retaining dead branches, which encourages fire; these traits evolved from the fire-sensitive ancestors of modern pines. Other traits such as serotiny and fire-stimulated flowering have likewise been evolving for millions of years. Some species are capable of using flammability to establish their habitats: trees with fire-embracing traits may "sacrifice" themselves during fires, but in doing so they cause the fire to spread and kill their less flammable neighbors. With the help of other fire-adaptive traits such as serotiny, the flammable trees then occupy the gaps created by the fire and colonize the habitat.
Animals' adaptations to fires:
Direct effects of fires on animals

Most animals have sufficient mobility to successfully evade fires. Vertebrates such as large mammals and adult birds are usually capable of escaping, but young animals that lack mobility may suffer high mortality. Ground-dwelling invertebrates are less affected by fires (owing to the low thermal diffusivity of soil), while tree-living invertebrates may be killed by crown fires yet survive surface fires. Overall, animals are seldom killed by fire directly; of the animals killed during the Yellowstone fires of 1988, asphyxiation is believed to have been the primary cause of death.
Animals' adaptations to fires:
Long-term effects of fires on animals

More importantly, fires have long-term effects on the post-burn environment. Fires in seldom-burned rainforests can be disastrous; for example, El Niño-induced surface fires in central Brazilian Amazonia have seriously affected the habitats of birds and primates. Fires also expose animals to dangers such as humans or predators. Generally, in a habitat that previously supported more understory species and fewer open-site species, a fire may shift the faunal composition toward open-site species and away from understory species, although the habitat will normally recover its original structure.
Animals and fire regimes:
Just as plants may alter fire regimes, animals also have impacts on them. For example, grazing animals consume fuel and so reduce the likelihood of future fires. Many animals play a role in shaping fire regimes: prairie dogs, for example, rodents common in North America, can limit fires by grazing grasses too short to burn.
Animal use of fire:
Fires are not always detrimental. Burnt areas usually offer food of better quality and accessibility, which attracts animals from nearby habitats to forage. For example, fires can kill trees, and dead trees attract insects; birds are drawn by the abundance of food and can spread the seeds of herbaceous plants, and eventually large herbivores also flourish. Large mammals also prefer newly burnt areas because they need to be less vigilant for predators. An example of animals making use of fire is the black kite, a carnivorous bird found worldwide. In monsoonal areas of northern Australia, surface fires can spread, including across intended firebreaks, via burning or smoldering pieces of wood or burning tufts of grass carried intentionally by large flying birds accustomed to catching prey flushed out by wildfires. Species involved in this activity are the black kite (Milvus migrans), whistling kite (Haliastur sphenurus), and brown falcon (Falco berigora). Local Aborigines have known of this behavior for a long time, including in their mythology.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Merochlorophaeic acid**
Merochlorophaeic acid:
Merochlorophaeic acid is a depside with the molecular formula C24H30O8, which has been isolated from the lichen Cladonia merochlorophaea.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Drum stick**
Drum stick:
A drum stick (or drumstick) is a type of percussion mallet used particularly for playing snare drum, drum kit, and some other percussion instruments, especially unpitched percussion.
Specialized beaters used on some other percussion instruments, such as the metal beater used with a triangle or the mallets used with tuned percussion (such as xylophone and timpani), are not normally referred to as drumsticks. Drumsticks generally have all of the following characteristics: they are normally supplied and used in pairs; they may be used to play at least some sort of drum (as well as other instruments); and they are normally used only for unpitched percussion.
Construction:
The archetypical drumstick is turned from a single piece of wood, most commonly of hickory, less commonly of maple, and least commonly but still in significant numbers, of oak. Drumsticks of the traditional form are also made from metal, carbon fibre, and other modern materials.
The tip or bead is the part most often used to strike the instrument. It was originally, and still commonly is, made from the same piece of wood as the rest of the stick; sticks with nylon tips have also been available since 1958. In the 1970s, an acetal tip was introduced.
Tips of whatever material are of various shapes, including acorn, barrel, oval, teardrop, pointed and round.
Construction:
The shoulder of the stick is the part that tapers towards the tip, and is normally slightly convex. It is often used for playing the bell of a cymbal. It can also be used to produce a cymbal crash when applied with a glancing motion to the bow or edge of a cymbal, and for playing ride patterns on china, swish, and pang cymbals.
Construction:
The shaft is the body of the stick, and is cylindrical for most applications including drum kit and orchestral work. It is used for playing cross stick and applied in a glancing motion to the rim of a cymbal for the loudest cymbal crashes.
The butt is the opposite end of the stick to the tip. Some rock and metal musicians use it rather than the tip.
Conventional numbering:
Plain wooden drumsticks are most commonly described using a number, which indicates the weight and diameter of the stick, followed by one or more letters describing the tip. For example, a 7A is a common jazz stick with a wooden tip, a 7AN is the same weight of stick with a nylon tip, and a 7B also has a wooden tip but with a different profile, shorter and rounder than a 7A's. A 5A is a common wood-tipped rock stick, heavier than a 7A but with a similar profile. The numbers are most commonly odd, but even numbers are used occasionally, in the range 2 (heaviest) to 9 (lightest).
Conventional numbering:
The exact meanings of both numbers and letters differ from manufacturer to manufacturer, and some sticks are not described using this system at all, being known simply as jazz (typically a 7A, 8A, or 8D) or heavy rock (typically a 5B) sticks, for example. The most general-purpose stick is a 5A. However, no single stick is prescribed for any particular style of music.
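As a rough illustration (not an official standard, and the exact meanings vary by manufacturer as noted above), the numbering convention can be modelled as a number for weight and diameter plus letters for the tip; the dictionary entries below only restate the examples given in this section.

```python
import re

# Informal descriptions taken from the examples in the text above.
COMMON_STICKS = {
    "7A": "light jazz stick, wood tip",
    "7AN": "7A profile with a nylon tip",
    "7B": "wood tip, shorter and rounder profile than a 7A",
    "5A": "general-purpose rock stick, wood tip, heavier than a 7A",
    "5B": "heavy rock stick",
}

def parse_designation(code: str):
    """Split a designation like '5A' or '7AN' into its number and letter parts."""
    match = re.fullmatch(r"(\d+)([A-Z]+)", code.upper())
    if not match:
        raise ValueError(f"unrecognised designation: {code!r}")
    number, letters = int(match.group(1)), match.group(2)
    # Lower numbers (down to 2) denote heavier sticks; higher numbers (up to 9) lighter ones.
    return number, letters

print(parse_designation("7AN"))   # -> (7, 'AN')
print(COMMON_STICKS["5A"])
```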
Grip:
There are two main ways of holding drumsticks: traditional grip, in which the right and left hands use different grips, and matched grip, in which the two hands' grips are mirror images of each other.
Traditional grip was developed to allow a snare drum to be played conveniently while riding a horse, and was documented by Sanford A. Moeller in The Art of Snare Drumming (1925). It was the standard grip for kit drummers in the first half of the twentieth century and remains popular.
Matched grip became popular towards the middle of the twentieth century, threatening to displace traditional grip for kit drumming. However, traditional grip has since made a comeback, and both types of grip are still used and promoted by leading drummers and teachers.
Popular brands:
Pro-Mark, Vic Firth, Vater Percussion, Regal Tip, Tama Drums, Collision Drumsticks
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|