**Polypharmacy**
Polypharmacy:
Polypharmacy (polypragmasia) is an umbrella term for the simultaneous use of multiple medicines by a patient for their conditions. It is most commonly defined as regularly taking five or more medicines, although definitions vary in where they draw the line for the minimum number of drugs. Polypharmacy is often the consequence of having multiple long-term conditions, also known as multimorbidity. An excessive number of medications is worrisome, especially for older patients with many chronic health conditions, because it increases the risk of an adverse event in those patients.

The prevalence of polypharmacy is estimated to be between 10% and 90%, depending on the definition used, the age group studied, and the geographic location. Polypharmacy continues to grow in importance because of aging populations. Many countries are experiencing rapid growth of the older population (65 years and older), a result of the baby-boomer generation aging and of increased life expectancy driven by ongoing improvement in health care services worldwide. About 21% of adults with intellectual disability are also exposed to polypharmacy. The level of polypharmacy has been increasing in recent decades. Research in the USA shows that the percentage of patients older than 65 years using more than five medications increased from 24% to 39% between 1999 and 2012. Similarly, research in the UK found that the proportion of older people taking five or more medications quadrupled from 12% to nearly 50% between 1994 and 2011.

Polypharmacy is not necessarily ill-advised, but in many instances it leads to negative outcomes or poor treatment effectiveness, often being more harmful than helpful or presenting too much risk for too little benefit. Health professionals therefore consider it a situation that requires monitoring and review to validate whether all of the medications are still necessary. Concerns about polypharmacy include increased adverse drug reactions, drug interactions, prescribing cascades, and higher costs. A prescribing cascade occurs when a patient is prescribed a drug and experiences an adverse drug effect that is misinterpreted as a new medical condition, so the patient is prescribed another drug. Polypharmacy also increases the burden of medication-taking, particularly in older people, and is associated with medication non-adherence.

Polypharmacy is often associated with a decreased quality of life, including decreased mobility and cognition. Patient factors that influence the number of medications a patient is prescribed include a high number of chronic conditions requiring a complex drug regimen. Systemic factors include a patient having multiple prescribers and multiple pharmacies that may not communicate with one another.
Polypharmacy:
Whether or not the advantages of polypharmacy (over taking single medications or monotherapy) outweigh the disadvantages or risks depends upon the particular combination and diagnosis involved in any given case. The use of multiple drugs, even in fairly straightforward illnesses, is not an indicator of poor treatment and is not necessarily overmedication. Moreover, it is well accepted in pharmacology that it is impossible to accurately predict the side effects or clinical effects of a combination of drugs without studying that particular combination of drugs in test subjects. Knowledge of the pharmacologic profiles of the individual drugs in question does not assure accurate prediction of the side effects of combinations of those drugs; and effects also vary among individuals because of genome-specific pharmacokinetics. Therefore, deciding whether and how to reduce a list of medications (deprescribe) is often not simple and requires the experience and judgment of a practicing clinician, as the clinician must weigh the pros and cons of keeping the patient on the medication. However, such thoughtful and wise review is an ideal that too often does not happen, owing to problems such as poorly handled care transitions (poor continuity of care, usually because of siloed information), overworked physicians and other clinical staff, and interventionism.
Appropriate medical uses:
While polypharmacy is typically regarded as undesirable, prescription of multiple medications can be appropriate and therapeutically beneficial in some circumstances. “Appropriate polypharmacy” is described as prescribing for complex or multiple conditions in such a way that necessary medicines are used, based on the best available evidence at the time, to preserve safety and well-being. Polypharmacy is clinically indicated in some chronic conditions, for example in diabetes mellitus, but should be discontinued when evidence of benefit from the prescribed drugs no longer outweighs the potential for harm (described below in Contraindications).

Often certain medications can interact with others in a positive way specifically intended when prescribed together, to achieve a greater effect than any of the single agents alone. This is particularly prominent in the field of anesthesia and pain management, where atypical agents such as antiepileptics, antidepressants, muscle relaxants, NMDA antagonists, and other medications are combined with more typical analgesics such as opioids, prostaglandin inhibitors, and NSAIDs. This practice of pain-management drug synergy is known as an analgesia-sparing effect.
Appropriate medical uses:
Examples: A legitimate treatment regimen in the first year after a myocardial infarction may include a statin, an ACE inhibitor, a beta-blocker, aspirin, paracetamol, and an antidepressant.
In anesthesia (particularly IV anesthesia and general anesthesia) multiple agents are almost always required, including a hypnotic induction/maintenance agent such as midazolam or propofol, usually an opioid analgesic such as morphine or fentanyl, a paralytic such as vecuronium, and, in inhaled general anesthesia, generally a halogenated ether anesthetic such as sevoflurane or desflurane.
Special populations:
People who are at greatest risk for negative polypharmacy consequences include elderly people, people with psychiatric conditions, patients with intellectual or developmental disabilities, people taking five or more drugs at the same time, those with multiple physicians and pharmacies, people who have been recently hospitalized, people with concurrent comorbidities, people who live in rural communities, people with inadequate access to education, and those with impaired vision or dexterity. Marginalized populations may have a greater degree of polypharmacy, which can occur more frequently in younger age groups.

It is not uncommon for people who are dependent on or addicted to substances to enter or remain in a state of polypharmacy misuse. About 84% of prescription drug misusers reported using multiple drugs. Note, however, that the term polypharmacy and its variants generally refer to legal drug use as prescribed, even when used in a negative or critical context.
Special populations:
Measures can be taken to limit polypharmacy to its truly legitimate and appropriate needs. This is an emerging area of research, frequently called deprescribing. Reducing the number of medications, as part of a clinical review, can be an effective healthcare intervention. Clinical pharmacists can perform drug therapy reviews and teach physicians and their patients about drug safety and polypharmacy, as well as collaborating with physicians and patients to correct polypharmacy problems. Similar programs are likely to reduce the potentially deleterious consequences of polypharmacy such as adverse drug events, non-adherence, hospital admissions, drug-drug interactions, geriatric syndromes, and mortality. Such programs hinge upon patients and doctors informing pharmacists of other medications being prescribed, as well as herbal, over-the-counter substances and supplements that occasionally interfere with prescription-only medication. Staff at residential aged care facilities have a range of views and attitudes towards polypharmacy that, in some cases, may contribute to an increase in medication use.
Risks of polypharmacy:
The risk of polypharmacy increases with age, although there is some evidence that it may decrease slightly after age 90. Poorer health is a strong predictor of polypharmacy at any age, although it is unclear whether polypharmacy causes the poorer health or is used because of it. It appears possible that the risk factors for polypharmacy differ for younger and middle-aged people compared to older people.

The use of polypharmacy is correlated with the use of potentially inappropriate medications. Potentially inappropriate medications are generally taken to mean those agreed upon by expert consensus, such as the Beers Criteria. These medications are generally inappropriate for older adults because the risks outweigh the benefits. Examples include urinary anticholinergics used to treat incontinence, whose associated risks include constipation, blurred vision, dry mouth, impaired cognition, and falls. Many older people living in long-term care facilities experience polypharmacy, and under-prescribing of potentially indicated medicines and use of high-risk medicines can also occur.

Polypharmacy is associated with an increased risk of falls in elderly people. Certain medications are well known to be associated with the risk of falls, including cardiovascular and psychoactive medications. There is some evidence that the risk of falls increases cumulatively with the number of medications. Although often not practical to achieve, withdrawing all medicines associated with falls risk can halve an individual's risk of future falls.
Risks of polypharmacy:
Every medication has potential adverse side effects, and with every drug added there is an additive risk of side effects. Some medications also interact with other substances, including foods, other medications, and herbal supplements. About 15% of older adults are potentially at risk for a major drug-drug interaction. Older adults are at higher risk for drug-drug interactions because of the increased number of medications prescribed and the metabolic changes that occur with aging. When a new drug is prescribed, the risk of interactions increases exponentially. Doctors and pharmacists aim to avoid prescribing medications that interact; often, the doses of medications need to be adjusted to avoid interactions. For example, warfarin interacts with many medications and supplements, which can cause it to lose its effect.
Risks of polypharmacy:
Pill burden: Pill burden is the number of pills (tablets or capsules, the most common dosage forms) that a person takes on a regular basis, along with all the associated effort that increases with that number, such as storing, organizing, consuming, and understanding the various medications in one's regimen. The use of individual medications is growing faster than pill burden. A recent study found that older adults in long-term care take an average of 14 to 15 tablets every day.

Poor medication adherence is a common challenge among individuals who have increased pill burden and are subject to polypharmacy. Pill burden also increases the possibility of adverse medication reactions (side effects) and drug-drug interactions. High pill burden has been associated with an increased risk of hospitalization, medication errors, and increased costs, both for the pharmaceuticals themselves and for the treatment of adverse events. Finally, pill burden is a source of dissatisfaction for many patients and family carers.

High pill burden was commonly associated with antiretroviral drug regimens to control HIV, and is also seen in other patient populations. For instance, adults with multiple common chronic conditions such as diabetes, hypertension, lymphedema, hypercholesterolemia, osteoporosis, constipation, inflammatory bowel disease, and clinical depression may be prescribed more than a dozen different medications daily. The combination of multiple drugs has been associated with an increased risk of adverse drug events.

Reducing pill burden is recognized as a way to improve medication compliance, also referred to as adherence. This is done through "deprescribing", in which the risks and benefits are weighed when considering whether to continue a medication. This includes drugs such as bisphosphonates (for osteoporosis), which are often taken indefinitely although there is only evidence to support their use for five to ten years. Patient educational programs, reminder messages, medication packaging, and the use of memory tricks have also been seen to improve adherence and reduce pill burden in several countries. These include associating medications with mealtimes, recording the dosage on the box, storing the medication in a special place, leaving it in plain sight in the living room, or putting the prescription sheet on the refrigerator. The development of applications has also shown some benefit in this regard. The use of a polypill regimen, such as a combination pill for HIV treatment, as opposed to a multi-pill regimen, also alleviates pill burden and increases adherence.

The selection of long-acting active ingredients over short-acting ones may also reduce pill burden. For instance, ACE inhibitors are used in the management of hypertension. Both captopril and lisinopril are ACE inhibitors, but lisinopril is dosed once a day, whereas captopril may be dosed two to three times a day. Assuming that there are no contraindications or potential for drug interactions, using lisinopril instead of captopril may be an appropriate way to limit pill burden.
Interventions:
The most common intervention to help people who are struggling with polypharmacy is deprescribing. Deprescribing can be confused with medication simplification, which does not attempt to reduce the number of medicines but rather the number of dose forms and administration times. Deprescribing refers to reducing the number of medications that a person is prescribed and includes the identification and discontinuation of medications when the benefit no longer outweighs the harm. In elderly patients, this can commonly be done as a patient becomes more frail and the treatment focus needs to shift from preventative to palliative. Deprescribing is feasible and effective in many settings, including residential care, communities, and hospitals. It should be considered when any of the following occurs: (1) a new symptom or adverse event arises, (2) the person develops an end-stage disease, (3) the combination of drugs is risky, or (4) stopping the drug does not alter the disease trajectory.

Several tools exist to help physicians decide when to deprescribe and what medications can be added to a pharmaceutical regimen. The Beers Criteria and the STOPP/START criteria help identify medications that have the highest risk of adverse drug events (ADE) and drug-drug interactions. The Medication appropriateness tool for comorbid health conditions during dementia (MATCH-D) is the only tool available specifically for people with dementia; it also cautions against polypharmacy and complex medication regimens.

Barriers faced by both physicians and people taking the medications have made it challenging to apply deprescribing strategies in practice. For physicians, these include fear of the consequences of deprescribing, the prescriber's own confidence in their skills and knowledge to deprescribe, reluctance to alter medications that are prescribed by specialists, the feasibility of deprescribing, lack of access to all of a patient's clinical notes, and the complexity of having multiple providers. For patients who are prescribed or require the medication, barriers include attitudes or beliefs about the medications, inability to communicate with physicians, fears and uncertainties surrounding deprescribing, and the influence of physicians, family, and the media. Barriers can also include other health professionals or carers, such as in residential care, believing that the medicines are required.

In people with multiple long-term conditions (multimorbidity) and polypharmacy, deprescribing represents a complex challenge because clinical guidelines are usually developed for single conditions. In these cases, tools and guidelines like the Beers Criteria and STOPP/START can be used safely by clinicians, but not all patients will necessarily benefit from stopping their medication. Clarity about how far clinicians can go beyond the guidelines, and about the responsibility they need to take when doing so, could help them prescribe and deprescribe for complex cases. Further factors that can help clinicians tailor their decisions to the individual are: access to detailed data on the people in their care (including their backgrounds and personal medical goals), discussing plans to stop a medicine as early as when it is first prescribed, and a good relationship that involves mutual trust and regular discussions on progress.
Furthermore, longer appointments for prescribing and deprescribing would allow time to explain the process of deprescribing, explore related concerns, and support making the right decisions.

The effectiveness of specific interventions to improve the appropriate use of polypharmacy, such as pharmaceutical care and computerised decision support, is unclear, owing to the low quality of current evidence surrounding these interventions. High-quality evidence is needed before conclusions can be drawn about the effects of such interventions in any environment, including care homes. Deprescribing is not influenced by whether medicines are prescribed through a paper-based or an electronic system. Deprescribing rounds have been proposed as a potentially successful methodology for reducing polypharmacy. Sharing of positive outcomes from physicians who have implemented deprescribing, increased communication between all practitioners involved in patient care, higher compensation for time spent deprescribing, and clear deprescribing guidelines can help enable the practice. Despite the difficulties, a recent blinded study of deprescribing reported that participants used an average of two fewer medicines each after 12 months, showing again that deprescribing is feasible.
**CP-67**
CP-67:
CP-67 is a hypervisor, or Virtual Machine Monitor, from IBM for its System/360 Model 67 computer.
CP-67:
CP-67 is the control program portion of CP/CMS, a virtual machine operating system developed by IBM's Cambridge Scientific Center in Cambridge, Massachusetts. It was a reimplementation of their earlier research system CP-40, which ran on a one-off customized S/360-40. CP-67 was later reimplemented (again) as CP-370, which IBM released as VM/370 in 1972, when virtual memory was added to the System/370 series. CP and CMS are usually grouped together as a unit, but the "components are independent of each other. CP-67 can be used on an appropriate configuration without CMS, and CMS can be run on a properly configured System/360 as a single-user system without CP-67."
Minimum hardware configuration:
The minimum configuration for CP-67 is (p.1):

- 2067 CPU, model 1 or 2
- 2365 Processor Storage, model 1: 262,144 bytes of magnetic core memory with an access time of 750 ns (nanoseconds) per eight bytes
- IBM 1052 printer/keyboard
- IBM 1403 printer
- IBM 2540 card read/punch
- Three IBM 2311 disk storage units, 7.5 MB each (22.5 MB total)
- IBM 2400 magnetic tape data storage unit
- IBM 270x Transmission Control unit
Installation:
Disks to be used by CP have to be formatted by a standalone utility called FORMAT, loaded from tape or punched cards. CP disks are formatted with fixed-length 829-byte records. Following formatting, a second stand-alone utility, DIRECT, partitions the disk space between permanent (system and user files) and temporary (paging and spooling) space. DIRECT also creates the user directory identifying the virtual machines (users) available on the system. For each user the directory contains identifying information, id and password, and lists the resources (core, devices, etc.) that the user can access. Although a user may be allowed access to physical devices, it is more common to specify virtual devices, such as a spooled card reader, card punch, and printer. A user can be allocated one or more virtual disk units, "mini disks" [sic], which resemble a real disk of the same device type, except that they occupy a subset of the space on the real device. (p.37)
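To make the bookkeeping concrete, here is a toy Python model of the kind of per-user information the text says DIRECT records. This is purely illustrative; the class and field names are invented for this sketch and do not reflect the real CP-67 directory format.

```python
# Hypothetical toy model of a CP-67-style user directory entry.
# All names and fields are illustrative, not the actual DIRECT format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MiniDisk:
    device_type: str   # e.g. "2311"; appears to the user as a real disk
    start_record: int  # offset into the real device's permanent space
    num_records: int   # size, in fixed-length 829-byte records

@dataclass
class DirectoryEntry:
    userid: str
    password: str
    core_bytes: int    # virtual machine storage allocation
    virtual_devices: List[str] = field(default_factory=list)
    minidisks: List[MiniDisk] = field(default_factory=list)

# A user with spooled unit-record devices and one minidisk:
entry = DirectoryEntry(
    userid="CMSUSER", password="SECRET", core_bytes=256 * 1024,
    virtual_devices=["spooled card reader", "card punch", "printer"],
    minidisks=[MiniDisk("2311", start_record=0, num_records=1000)],
)
print(entry.userid, len(entry.minidisks))
```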
**ReNu**
ReNu:
ReNu is a brand of soft contact lens care products produced by Bausch & Lomb. By far the most popular brand of lens solutions until 2006, ReNu has since rebranded its formulations as renu sensitive (formerly ReNu Multi-Purpose) and renu fresh (formerly ReNu MultiPlus), the latter containing a patented ingredient called hydranate (known to chemists as hydroxyalkylphosphonate) that removes protein deposits and can eliminate the need for a separate enzymatic cleaner.
ReNu with MoistureLoc:
The brand made headlines in 2006 when a report from the United States Centers for Disease Control and Prevention suggested an increased incidence of fungal keratitis in people using Bausch & Lomb products. Bausch & Lomb subsequently suspended, then recalled shipments of one particular product, ReNu with MoistureLoc. Other ReNu formulations were unaffected, as they do not include the special ingredient in MoistureLoc, which was supposed to keep lenses moist but unfortunately also allowed microbes to survive the disinfection process.
ReNu with MoistureLoc:
Timeline:

- In November 2005, Hong Kong health officials told Bausch & Lomb about a significant increase in hospital admissions due to contact lens-related keratitis from June to September 2005.
- On March 3, 2006, New Jersey ophthalmologist Dr. David S. Chu contacted Bausch & Lomb to report that three of his patients had contracted a fungal infection called Fusarium keratitis; all three were contact lens wearers who used Bausch & Lomb's ReNu with MoistureLoc. He subsequently reported his findings to the Centers for Disease Control and Prevention on March 8, 2006.
- On April 10, 2006, the CDC announced that it was investigating 109 patients in the United States suspected to have Fusarium keratitis. It reported complete data for 30 patients in this group, the earliest onset of infection being June 15, 2005.
- On April 11, 2006, Bausch & Lomb stopped shipments of its ReNu with MoistureLoc contact lens solution from its Greenville, South Carolina, plant after the U.S. Centers for Disease Control and Prevention found what appeared to be a high correlation between use of the product and cases of suspected fungal keratitis. Similar claims had already been made against the product in Hong Kong and Singapore. The news about the U.S. suspension led to a 14.6% drop in the company's stock price (a drop of $8.41, to $49.03), the largest the company had experienced in 5½ years.
- On April 13, 2006, Bausch & Lomb announced that it was withdrawing the ReNu with MoistureLoc product worldwide and recommended that consumers stop using ReNu with MoistureLoc immediately. The FDA supported this decision. The FDA, as well as the American Optometric Association, now also advise that contact lenses be rubbed and rinsed even when a no-rub contact lens solution is used.
- According to a preliminary report released by the FDA on May 16, 2006, Bausch & Lomb failed to notify the FDA that Singapore's Ministry of Health had reported 35 serious cases of Fusarium keratitis to Bausch & Lomb in February 2006.

The U.S. suspension did not include ReNu products without MoistureLoc.
ReNu with MoistureLoc:
Only supplies of ReNu with MoistureLoc manufactured in the company's US plant are affected. European supplies are considered to be safe.
On March 7, 2007, Bausch & Lomb issued a voluntary recall of 1.5 million bottles of ReNu MultiPlus solution due to higher-than-normal amounts of iron in the batch.
**Chemical reactor materials selection**
Chemical reactor materials selection:
Chemical reactor materials selection is an important aspect of the design of a chemical reactor. There are four main groups of chemical reactors - continuous stirred-tank (CSTR), plug flow (PFR), semi-batch, and catalytic - with variations on each. Depending on the nature of the chemicals involved in the reaction, as well as the operating conditions (e.g. temperature and pressure), certain materials will perform better than others.
Material Options:
There are several broad classes of materials available for use in creating a chemical reactor, including metals, glasses, ceramics, polymers, carbon, and composites. Metals are the most common class of materials for chemical engineering equipment, as they are comparatively easy to manufacture, have high strength, and are resistant to fracture. Glass is common in chemical laboratory equipment but is highly prone to fracture, so it is not useful in large-scale industrial use. Ceramics are not a common material for chemical reactors, as they are brittle and difficult to manufacture. Polymers have begun to gain popularity in piping and valves, as they aid in temperature stability. There are several forms of carbon, but the most useful form for reactors is carbon or graphite fibers in composites.
Criteria for Selection:
An important criterion for a particular material is safety. Engineers have a responsibility to ensure the safety of those who handle equipment or use a building or road, for example, by minimizing the risks of injuries or casualties. Other considerations include strength, resistance to sudden failure from mechanical or thermal shock, corrosion resistance, and cost, to name a few. To compare different materials, it may prove useful to consult an Ashby diagram and the ASME Pressure Vessel Codes. The material choice should ideally be drawn from known data as well as experience. A deeper understanding of the component requirements and of corrosion and degradation behavior will aid in materials selection. Additionally, knowing the performance of past systems, whether good or bad, will help the user decide between alternative alloys or a coated system; if previous information is not available, performing tests is recommended.
High Temperature Operation:
High-temperature reactor operation introduces a host of problems, such as distortion and cracking due to thermal expansion and contraction, and high-temperature corrosion. Indications that the latter is occurring include burnt or charred surfaces, molten phases, distortion, thick scales, and grossly thinned metal. Typical high-temperature alloys are based on iron, nickel, or cobalt and contain more than 20% chromium, which forms a protective oxide against further oxidation. Various other elements aid corrosion resistance, such as aluminum, silicon, and rare earth elements like yttrium, cerium, and lanthanum. Other additions, such as reactive or refractory metals, can improve the mechanical properties of the reactor. Refractory metals can experience catastrophic oxidation, which turns metals into a powdery oxide with little use; this damage is worse in stagnant conditions, although silicide coatings have been shown to offer some resistance.
**Melk Formation**
Melk Formation:
The Melk Formation is a geologic formation in Austria. It preserves fossils dating back to the Paleogene period.
**Glycan**
Glycan:
The terms glycans and polysaccharides are defined by IUPAC as synonyms meaning "compounds consisting of a large number of monosaccharides linked glycosidically". However, in practice the term glycan may also be used to refer to the carbohydrate portion of a glycoconjugate, such as a glycoprotein, glycolipid, or a proteoglycan, even if the carbohydrate is only an oligosaccharide. Glycans usually consist solely of O-glycosidic linkages of monosaccharides. For example, cellulose is a glycan (or, to be more specific, a glucan) composed of β-1,4-linked D-glucose, and chitin is a glycan composed of β-1,4-linked N-acetyl-D-glucosamine. Glycans can be homo- or heteropolymers of monosaccharide residues, and can be linear or branched.
Glycans and proteins:
Glycans can be found attached to proteins as in glycoproteins and proteoglycans. In general, they are found on the exterior surface of cells. O- and N-linked glycans are very common in eukaryotes but may also be found, although less commonly, in prokaryotes.
N-Linked glycans

Introduction: N-Linked glycans are attached in the endoplasmic reticulum to the nitrogen (N) in the side chain of asparagine (Asn) in the sequon. The sequon is an Asn-X-Ser or Asn-X-Thr sequence, where X is any amino acid except proline, and the glycan may be composed of N-acetylgalactosamine, galactose, neuraminic acid, N-acetylglucosamine, fucose, mannose, and other monosaccharides.
Glycans and proteins:
Assembly: In eukaryotes, N-linked glycans are derived from a core 14-sugar unit assembled in the cytoplasm and endoplasmic reticulum. First, two N-acetylglucosamine residues are attached to dolichol monophosphate, a lipid, on the external side of the endoplasmic reticulum membrane. Five mannose residues are then added to this structure. At this point, the partially finished core glycan is flipped across the endoplasmic reticulum membrane, so that it is now located within the reticular lumen. Assembly then continues within the endoplasmic reticulum, with the addition of four more mannose residues. Finally, three glucose residues are added to this structure. Following full assembly, the glycan is transferred en bloc by the glycosyltransferase oligosaccharyltransferase to a nascent peptide chain, within the reticular lumen. This core structure of N-linked glycans, thus, consists of 14 residues (3 glucose, 9 mannose, and 2 N-acetylglucosamine).
Glycans and proteins:
Image: https://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=glyco.figgrp.469 Dark squares are N-acetylglucosamine; light circles are mannose; dark triangles are glucose.
Glycans and proteins:
Processing, modification, and diversity: Once transferred to the nascent peptide chain, N-linked glycans, in general, undergo extensive processing reactions, whereby the three glucose residues are removed, as well as several mannose residues, depending on the N-linked glycan in question. The removal of the glucose residues is dependent on proper protein folding. These processing reactions occur in the Golgi apparatus. Modification reactions may involve the addition of a phosphate or acetyl group onto the sugars, or the addition of new sugars, such as neuraminic acid. Processing and modification of N-linked glycans within the Golgi does not follow a linear pathway. As a result, many different variations of N-linked glycan structure are possible, depending on enzyme activity in the Golgi.
Glycans and proteins:
Functions and importance: N-linked glycans are extremely important in proper protein folding in eukaryotic cells. Chaperone proteins in the endoplasmic reticulum, such as calnexin and calreticulin, bind to the three glucose residues present on the core N-linked glycan. These chaperone proteins then serve to aid in the folding of the protein that the glycan is attached to. Following proper folding, the three glucose residues are removed, and the glycan moves on to further processing reactions. If the protein fails to fold properly, the three glucose residues are reattached, allowing the protein to re-associate with the chaperones. This cycle may repeat several times until a protein reaches its proper conformation. If a protein repeatedly fails to properly fold, it is excreted from the endoplasmic reticulum and degraded by cytoplasmic proteases.
Glycans and proteins:
N-linked glycans also contribute to protein folding by steric effects. For example, cysteine residues in the peptide may be temporarily blocked from forming disulfide bonds with other cysteine residues, due to the size of a nearby glycan. Therefore, the presence of an N-linked glycan allows the cell to control which cysteine residues will form disulfide bonds.
N-linked glycans also play an important role in cell-cell interactions. For example, tumour cells make N-linked glycans that are abnormal. These are recognized by the CD337 receptor on Natural Killer cells as a sign that the cell in question is cancerous.
Glycans and proteins:
Within the immune system, the N-linked glycans on an immune cell's surface help dictate the migration pattern of the cell; for example, immune cells that migrate to the skin have specific glycosylations that favor homing to that site. The glycosylation patterns on the various immunoglobulins, including IgE, IgM, IgD, IgA, and IgG, bestow them with unique effector functions by altering their affinities for Fc and other immune receptors. Glycans may also be involved in "self" and "non-self" discrimination, which may be relevant to the pathophysiology of various autoimmune diseases, including rheumatoid arthritis and type 1 diabetes.

The targeting of degradative lysosomal enzymes is also accomplished by N-linked glycans. The modification of an N-linked glycan with a mannose-6-phosphate residue serves as a signal that the protein to which this glycan is attached should be moved to the lysosome. This recognition and trafficking of lysosomal enzymes by the presence of mannose-6-phosphate is accomplished by two proteins: CI-MPR (cation-independent mannose-6-phosphate receptor) and CD-MPR (cation-dependent mannose-6-phosphate receptor).
Glycans and proteins:
O-Linked glycans

Introduction: In eukaryotes, O-linked glycans are assembled one sugar at a time on a serine or threonine residue of a peptide chain in the Golgi apparatus. Unlike N-linked glycans, there is no known consensus sequence yet. However, the placement of a proline residue at either -1 or +3 relative to the serine or threonine is favourable for O-linked glycosylation.
Glycans and proteins:
Assembly: The first monosaccharide attached in the synthesis of O-linked glycans is N-acetyl-galactosamine. After this, several different pathways are possible. A Core 1 structure is generated by the addition of galactose. A Core 2 structure is generated by the addition of N-acetyl-glucosamine to the N-acetyl-galactosamine of the Core 1 structure. Core 3 structures are generated by the addition of a single N-acetyl-glucosamine to the original N-acetyl-galactosamine. Core 4 structures are generated by the addition of a second N-acetyl-glucosamine to the Core 3 structure. Other core structures are possible, though less common.
Glycans and proteins:
Images: https://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=glyco.figgrp.561 : Core 1 and Core 2 generation. White square = N-acetyl-galactosamine; black circle = galactose; Black square = N-acetyl-glucosamine. Note: There is a mistake in this diagram. The bottom square should always be white in each image, not black.
https://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=glyco.figgrp.562 : Core 3 and Core 4 generation.
Glycans and proteins:
A common structural theme in O-linked glycans is the addition of polylactosamine units to the various core structures. These are formed by the repetitive addition of galactose and N-acetyl-glucosamine units. Polylactosamine chains on O-linked glycans are often capped by the addition of a sialic acid residue (similar to neuraminic acid). If a fucose residue is also added to the next-to-penultimate residue, a Sialyl-Lewis X (SLex) structure is formed.
Glycans and proteins:
Functions and importance: Sialyl Lewis X is important in ABO blood antigen determination.
Glycans and proteins:
SLex is also important to proper immune response. P-selectin release from Weibel-Palade bodies, on blood vessel endothelial cells, can be induced by a number of factors. One such factor is the response of the endothelial cell to certain bacterial molecules, such as peptidoglycan. P-selectin binds to the SLex structure that is present on neutrophils in the bloodstream and helps to mediate the extravasation of these cells into the surrounding tissue during infection.
Glycans and proteins:
O-linked glycans, in particular mucin, have been found to be important in developing normal intestinal microflora. Certain strains of intestinal bacteria bind specifically to mucin, allowing them to colonize the intestine.
Glycans and proteins:
Examples of O-linked glycoproteins are:

- Glycophorin, a protein in erythrocyte cell membranes
- Mucin, a protein in saliva involved in formation of dental plaque
- Notch, a transmembrane receptor involved in development and cell fate decisions
- Thrombospondin
- Factor VII
- Factor IX
- Urinary-type plasminogen activator

Glycosaminoglycans: Another type of cellular glycan is the glycosaminoglycans (GAGs). These comprise 2-aminosugars linked in an alternating fashion with uronic acids, and include polymers such as heparin, heparan sulfate, chondroitin, keratan, and dermatan. Some glycosaminoglycans, such as heparan sulfate, are found attached to the cell surface, where they are linked through a tetrasaccharide linker via a xylosyl residue to a protein (forming a glycoprotein or proteoglycan).
Glycoscience:
A 2012 report from the U.S. National Research Council calls for a new focus on glycoscience, a field that explores the structures and functions of glycans and promises great advances in areas as diverse as medicine, energy generation, and materials science. Until now, glycans have received little attention from the research community due to a lack of tools to probe their often complex structures and properties. The report presents a roadmap for transforming glycoscience from a field dominated by specialists to a widely studied and integrated discipline.
Tools used for glycan research:
The following are examples of commonly used techniques in glycan analysis:

High-resolution mass spectrometry (MS) and high-performance liquid chromatography (HPLC): The most commonly applied methods are MS and HPLC, in which the glycan part is cleaved either enzymatically or chemically from the target and subjected to analysis. In the case of glycolipids, they can be analyzed directly without separation of the lipid component.
Tools used for glycan research:
N-glycans from glycoproteins are analyzed routinely by high-performance liquid chromatography (reversed phase, normal phase, and ion exchange HPLC) after tagging the reducing end of the sugars with a fluorescent compound (reductive labeling).
Tools used for glycan research:
A large variety of different labels have been introduced in recent years, of which 2-aminobenzamide (AB), anthranilic acid (AA), 2-aminopyridine (PA), 2-aminoacridone (AMAC), and 3-(acetylamino)-6-aminoacridine (AA-Ac) are just a few. Different labels have to be used for the different ESI modes and MS systems employed. O-glycans are usually analysed without any tags, as the chemical release conditions prevent them from being labeled.
Tools used for glycan research:
Fractionated glycans from high-performance liquid chromatography (HPLC) instruments can be further analyzed by MALDI-TOF-MS(MS) to obtain further information about structure and purity. Sometimes glycan pools are analyzed directly by mass spectrometry without prefractionation, although discrimination between isobaric glycan structures is then more challenging and not always possible. Nevertheless, direct MALDI-TOF-MS analysis can provide a fast and straightforward picture of the glycan pool.

In recent years, high-performance liquid chromatography coupled online to mass spectrometry has become very popular. By choosing porous graphitic carbon as a stationary phase for liquid chromatography, even underivatized glycans can be analyzed. Detection is done by mass spectrometry, but electrospray ionisation (ESI) is more frequently used here than MALDI-MS.
Tools used for glycan research:
Multiple reaction monitoring (MRM): Although MRM has been used extensively in metabolomics and proteomics, its high sensitivity and linear response over a wide dynamic range make it especially suited for glycan biomarker research and discovery. MRM is performed on a triple quadrupole (QqQ) instrument, which is set to detect a predetermined precursor ion in the first quadrupole, a fragment ion generated in the collision quadrupole, and a predetermined fragment ion in the third quadrupole. It is a non-scanning technique, wherein each transition is detected individually and the detection of multiple transitions occurs concurrently in duty cycles. This technique is being used to characterize the immune glycome.

(Table 1, "Advantages and disadvantages of mass spectrometry in glycan analysis", is not reproduced here.)

Arrays: Lectin and antibody arrays provide high-throughput screening of many samples containing glycans. This method uses either naturally occurring lectins or artificial monoclonal antibodies, where both are immobilized on a certain chip and incubated with a fluorescent glycoprotein sample.
Tools used for glycan research:
Glycan arrays, like that offered by the Consortium for Functional Glycomics and Z Biotech LLC, contain carbohydrate compounds that can be screened with lectins or antibodies to define carbohydrate specificity and identify ligands.
Metabolic and covalent labeling of glycans: Metabolic labeling of glycans can be used as a way to detect glycan structures. A well-known strategy involves the use of azide-labeled sugars, which can be reacted using the Staudinger ligation. This method has been used for in vitro and in vivo imaging of glycans.
Tools used for glycan research:
Tools for glycoproteins: X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy for complete structural analysis of complex glycans is a difficult and complex field. However, the structures of the binding sites of numerous lectins, enzymes, and other carbohydrate-binding proteins have revealed a wide variety of structural bases for glycome function. The purity of test samples has been obtained through chromatography (affinity chromatography, etc.) and analytical electrophoresis (PAGE (polyacrylamide gel electrophoresis), capillary electrophoresis, affinity electrophoresis, etc.).
Resources:
- National Center for Functional Glycomics (NCFG): The focus of the NCFG is development in the glycosciences, with an emphasis on exploring the molecular mechanisms of glycan recognition by proteins important in human biology and disease. It offers a number of resources for glycan analysis, as well as training in glycomics and protocols for glycan analysis.
- GlyTouCan, glycan structure repository
- Glycosciences.DE, German glycan database
- Carbohydrate Structure Database, Russian glycan database
- UniCarbKB, Australian glycan database
- GlycoSuiteDB, glycan database by the Swiss Institute of Bioinformatics
- GlyGen, NIH-funded glycoinformatics resource
- The Consortium for Functional Glycomics (CFG), a non-profit research initiative comprising eight core facilities and 500+ participating investigators that work together to develop resources and services and make them available to the scientific community free of charge. The data generated by these resources are captured in databases accessible through the Functional Glycomics Gateway, a web resource maintained through a partnership between the CFG and Nature Publishing Group.
Resources:
Transforming Glycoscience: A Roadmap for the Future, by the U.S. National Research Council. This site provides information about the U.S. National Research Council's reports and workshops on glycoscience.
**Meta refresh**
Meta refresh:
Meta refresh is a method of instructing a web browser to automatically refresh the current web page or frame after a given time interval, using an HTML meta element with the http-equiv parameter set to "refresh" and a content parameter giving the time interval in seconds. It is also possible to instruct the browser to fetch a different URL when the page is refreshed, by including the alternative URL in the content parameter. By setting the refresh time interval to zero (or a very low value), meta refresh can be used as a method of URL redirection.
History:
This feature was originally introduced by Netscape Navigator 1.1 (circa 1995), in the form of an HTTP header and a corresponding HTML meta HTTP-equivalent element, which allow a document author to signal the client to automatically reload the document, or change to a specified URL, after a specified timeout. It is the earliest polling mechanism available for the web, allowing a user to see the latest update of a frequently changing webpage, such as one displaying stock prices or a weather forecast.
Usability:
Use of meta refresh is discouraged by the World Wide Web Consortium (W3C), since unexpected refresh can disorient users. Meta refresh also impairs the web browser's "back" button in some browsers (including Internet Explorer 6 and before), although most modern browsers compensate for this (Internet Explorer 7 and higher, Mozilla Firefox, Opera, Google Chrome).
There are legitimate uses of meta-refresh, such as providing updates to dynamic web pages or implementing site controlled navigation of a website without JavaScript. Many large websites use it to refresh news or status updates, especially when dependencies on JavaScript and redirect headers are unwanted.
Examples:
Place inside the <head> element to refresh the page after 5 seconds:
<meta http-equiv="refresh" content="5">

Redirect to https://example.com/ after 5 seconds:
<meta http-equiv="refresh" content="5; url=https://example.com/">

Redirect to https://example.com/ immediately:
<meta http-equiv="refresh" content="0; url=https://example.com/">
Drawbacks:
Meta refresh tags have some drawbacks: If a page redirects too quickly (in less than 2–3 seconds), using the "Back" button on the next page may cause some browsers to move back to the redirecting page, whereupon the redirect will occur again. This is bad for usability, as it may leave a reader "stuck" on the last website.
A reader may or may not want to be redirected to a different page, which can lead to user dissatisfaction or raise concerns about security.
Alternatives:
Meta refresh uses the http-equiv meta tag to emulate the Refresh HTTP header, and as such can also be sent as a header by an HTTP web server. Although Refresh is not part of the HTTP standard, it is supported by all common browsers.
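As a server-side sketch (my own example, not from the original article, using Python's standard http.server module), a handler could emit the non-standard Refresh header like this:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RefreshHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Non-standard but widely supported: redirect after 5 seconds.
        self.send_header("Refresh", "5; url=https://example.com/")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Redirecting in 5 seconds...</body></html>")

if __name__ == "__main__":
    # Serve on localhost:8000 until interrupted.
    HTTPServer(("localhost", 8000), RefreshHandler).serve_forever()
```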
HTTP header example of a redirect to https://example.com/ after 5 seconds:
Refresh: 5; url=https://example.com/

Alternatives exist for both uses of meta refresh.
For redirection: An alternative is to send an HTTP redirection status code, such as HTTP 301 or 302 (a response line such as "HTTP/1.1 301 Moved Permanently" together with a "Location: https://example.com/" header). This is the preferred way to redirect a user agent to a different page, and can be achieved by a special rule in the web server or by means of a simple script on the web server.
JavaScript is another alternative, but not recommended, because users might have disabled JavaScript in their browsers.
Alternatives:
The simplest JavaScript redirect uses the onload property of the body tag:
<body onload="window.location = 'https://example.com/'">

For refresh: An alternative method is to provide an interaction device, such as a button, to let the user choose when to refresh the content. Another option is to use a technique such as Ajax to update (parts of) the web site without the need for a complete page refresh, although this also requires that the user enable JavaScript in their browser.
Alternatives:
A web page can also be refreshed using the JavaScript location.reload() method. This code can be called automatically upon an event or simply when the user clicks on a link. To refresh a web page from a mouse click, the following can be used:
<a href="javascript:location.reload()">Refresh this page</a>
**17β-Hydroxysteroid dehydrogenase**
17β-Hydroxysteroid dehydrogenase:
17β-Hydroxysteroid dehydrogenases (17β-HSD, HSD17B) (EC 1.1.1.51), also 17-ketosteroid reductases (17-KSR), are a group of alcohol oxidoreductases which catalyze the reduction of 17-ketosteroids and the dehydrogenation of 17β-hydroxysteroids in steroidogenesis and steroid metabolism. This includes interconversion of DHEA and androstenediol, androstenedione and testosterone, and estrone and estradiol. The major reactions catalyzed by 17β-HSD (e.g., the conversion of androstenedione to testosterone) are in fact hydrogenation (reduction) rather than dehydrogenation (oxidation) reactions.
Reactions:
17β-HSDs are known to catalyze the following redox reactions of sex steroids:

- 20α-Hydroxyprogesterone ↔ Progesterone
- DHEA ↔ Androstenediol
- Androstenedione ↔ Testosterone
- Dihydrotestosterone ↔ 5α-Androstanedione / 3α-Androstanediol / 3β-Androstanediol
- Estrone ↔ Estradiol
- 16α-Hydroxyestrone ↔ Estriol
Genes:
Genes coding for 17β-HSD include:

HSD17B1: Referred to as "estrogenic". Major subtype for activation of estrogens from weaker forms (estrone to estradiol and 16α-hydroxyestrone to estriol). Catalyzes the final step in the biosynthesis of estrogens. Highly selective for estrogens, with 100-fold higher affinity for estranes over androstanes; however, it also catalyzes the conversion of DHEA into androstenediol. Recently, it has been found to inactivate DHT into 3α- and 3β-androstanediol. Expressed primarily in the ovaries and placenta but also at lower levels in the breast epithelium. Major isoform of 17β-HSD in the granulosa cells of the ovaries. Mutations and associated deficiency have not been reported in humans. Knockout mice show altered ovarian sex steroid production, normal puberty, and severe subfertility due to defective luteinization and ovarian progesterone production.
Genes:
HSD17B2: Describable as "antiestrogenic" and "antiandrogenic". Major subtype for inactivation of estrogens and androgens into weaker forms (estradiol to estrone, testosterone to androstenedione, and androstenediol to DHEA). Also converts inactive 20α-hydroxyprogesterone into active progesterone. Preferential activity on androgens. Expressed widely in the body including in the liver, intestines, lungs, pancreas, kidneys, endometrium, prostate, breast epithelium, placenta, and bone. Said to be responsible for 17β-HSD activity in the endometrium and placenta. Mutations and associated congenital deficiency have not been reported in humans. However, local deficiency in expression of HSD17B2 has been associated with endometriosis.
Genes:
HSD17B3: Referred to as "androgenic". Major subtype in males for activation of androgens from weaker forms (androstenedione to testosterone and DHEA to androstenediol). Also activates estrogens from weaker forms to a lesser extent (estrone to estradiol). This is essential for testicular but not ovarian production of testosterone. Not expressed in the ovaries, where another 17β-HSD subtype, likely HSD17B5, is expressed instead. Mutations are associated with 17β-HSD type III deficiency. Males with this condition have pseudohermaphroditism, while females are normal with normal androgen and estrogen levels.
Genes:
HSD17B4: Also known as D-bifunctional protein (DBP). Involved in fatty acid β-oxidation and steroid metabolism (specifically estrone to estradiol, for instance in the uterus). Mutations are associated with DBP deficiency and Perrault syndrome (ovarian dysgenesis and deafness).
HSD17B5: Also known as aldo-keto reductase 1C3 (AKR1C3). Has 3α-HSD and 20α-HSD activity in addition to 17β-HSD activity. Expressed in the adrenal cortex and may act as the "androgenic" 17β-HSD in ovarian thecal cells. Also expressed in the prostate gland, mammary gland, and Leydig cells.
HSD17B6: Has 3α-HSD activity and catalyzes conversion of the weak androgen androstanediol into the powerful androgen dihydrotestosterone in the prostate gland. Also involved in a backdoor pathway from 17α-hydroxyprogesterone to dihydrotestosterone by 3α-reduction of a metabolic intermediate, 17α-hydroxydihydroprogesterone, into another intermediate, 17α-hydroxyallopregnanolone. May be involved in the pathophysiology of PCOS.
HSD17B7: Is involved in cholesterol metabolism but is also thought to activate estrogens (estrone to estradiol) and inactivate androgens (dihydrotestosterone to androstanediol). Expressed in the ovaries, breasts, placenta, testes, prostate gland, and liver.
HSD17B8: Inactivates estradiol, testosterone, and dihydrotestosterone, though can also convert estrone into estradiol. Expressed in the ovaries, testes, liver, pancreas, kidneys, and other tissues.
HSD17B9: Also known as retinol dehydrogenase 5 (RDH5). Involved in retinoid metabolism. Mutations are associated with fundus albipunctatus.
HSD17B10: Also known as 2-methyl-3-hydroxybutyryl-CoA dehydrogenase (MHBD). Substrates include steroids, neurosteroids, fatty acids, bile acids, isoleucine, and xenobiotics. Mutations are associated with 17β-HSD type X deficiency (also known as HSD10 disease or MHBD deficiency) and mental retardation, X-linked, syndromic 10 (MRXS10), which are characterized by neurodegeneration and mental retardation, respectively.
HSD17B11
HSD17B12
HSD17B13
HSD17B14

At least 7 of the 14 isoforms of 17β-HSD are involved in the interconversion of 17-ketosteroids and 17β-hydroxysteroids.
Clinical significance:
Mutations in HSD17B3 are responsible for 17β-HSD type III deficiency.
Inhibitors of 17β-HSD type II are of interest for the potential treatment of osteoporosis. Some inhibitors of 17β-HSD type I have been identified, for example esters of cinnamic acid and various flavones (e.g. fisetin).
**Simplicial map**
Simplicial map:
A simplicial map (also called simplicial mapping) is a function between two simplicial complexes, with the property that the images of the vertices of a simplex always span a simplex. Simplicial maps can be used to approximate continuous functions between topological spaces that can be triangulated; this is formalized by the simplicial approximation theorem. A simplicial isomorphism is a bijective simplicial map such that both it and its inverse are simplicial.
Definitions:
A simplicial map is defined in slightly different ways in different contexts.
Definitions:
Abstract simplicial complexes: Let K and L be two abstract simplicial complexes (ASC). A simplicial map of K into L is a function from the vertices of K to the vertices of L, f: V(K) → V(L), that maps every simplex in K to a simplex in L. That is, for any σ ∈ K, f(σ) ∈ L.: 14, Def.1.5.2  As an example, let K be the ASC containing the sets {1,2}, {2,3}, {3,1} and their subsets, and let L be the ASC containing the set {4,5,6} and its subsets. Define a mapping f by f(1)=f(2)=4, f(3)=5. Then f is a simplicial mapping, since f({1,2})={4}, which is a simplex in L, f({2,3})=f({3,1})={4,5}, which is also a simplex in L, etc.
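For readers who want to experiment, here is a small Python sketch (my own representation, not from the source) that checks the simplicial-map condition for abstract simplicial complexes encoded as sets of frozensets, using the example above.

```python
from itertools import combinations

def closure(simplices):
    """All nonempty subsets of the given simplices: an abstract simplicial complex."""
    out = set()
    for s in simplices:
        for r in range(1, len(s) + 1):
            out.update(frozenset(c) for c in combinations(s, r))
    return out

def is_simplicial_map(K, L, f):
    """True if the vertex map f (a dict) sends every simplex of K to a simplex of L."""
    return all(frozenset(f[v] for v in sigma) in L for sigma in K)

K = closure([{1, 2}, {2, 3}, {3, 1}])
L = closure([{4, 5, 6}])
f = {1: 4, 2: 4, 3: 5}
print(is_simplicial_map(K, L, f))  # True: f({1,2})={4}, f({2,3})={4,5}, both in L
```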
Definitions:
If f is not bijective, it may map k-dimensional simplices in K to l-dimensional simplices in L, for any l ≤ k. In the above example, f maps the one-dimensional simplex {1,2} to the zero-dimensional simplex {4}.
Definitions:
If f is bijective, and its inverse f−1 is a simplicial map of L into K, then f is called a simplicial isomorphism. Isomorphic simplicial complexes are essentially "the same", up to a renaming of the vertices. The existence of an isomorphism between L and K is usually denoted by K≅L.: 14  The function f defined above is not an isomorphism since it is not bijective. If we modify the definition to f(1)=4, f(2)=5, f(3)=6, then f is bijective, but it is still not an isomorphism, since f−1 is not simplicial: f−1({4,5,6})={1,2,3}, which is not a simplex in K. If we modify L by removing {4,5,6}, that is, L is the ASC containing only the sets {4,5},{5,6},{6,4} and their subsets, then f is an isomorphism.
Definitions:
Geometric simplicial complexes: Let K and L be two geometric simplicial complexes (GSC). A simplicial map of K into L is a function f: K → L such that the images of the vertices of a simplex in K span a simplex in L. That is, for any simplex σ ∈ K, conv(f(V(σ))) ∈ L. Note that this implies that vertices of K are mapped to vertices of L. Equivalently, one can define a simplicial map as a function from the underlying space of K (the union of simplices in K) to the underlying space of L, f: |K| → |L|, that maps every simplex in K linearly to a simplex in L. That is, for any simplex σ ∈ K, f(σ) ∈ L, and in addition, f|σ (the restriction of f to σ) is a linear function.: 16 : 3  Every simplicial map is continuous.
Definitions:
Simplicial maps are determined by their effects on vertices. In particular, there are a finite number of simplicial maps between two given finite simplicial complexes. A simplicial map between two ASCs induces a simplicial map between their geometric realizations (their underlying polyhedra) using barycentric coordinates. This can be defined precisely.: 15, Def.1.5.3  Let K, L be two ASCs, and let f: V(K)→V(L) be a simplicial map. The affine extension of f is a mapping |f|: |K|→|L| defined as follows. For any point x∈|K|, let σ be its support (the unique simplex containing x in its interior), and denote the vertices of σ by v_0, …, v_k. The point x has a unique representation as a convex combination of the vertices, x = ∑_{i=0}^{k} a_i v_i with a_i ≥ 0 and ∑_{i=0}^{k} a_i = 1 (the a_i are the barycentric coordinates of x). We define |f|(x) := ∑_{i=0}^{k} a_i f(v_i). This |f| is a simplicial map of |K| into |L|; it is a continuous function. If f is injective, then |f| is injective; if f is an isomorphism between K and L, then |f| is a homeomorphism between |K| and |L|.: 15, Prop.1.5.4
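A small numeric sketch of the affine extension, assuming illustrative coordinates for the geometric realizations: the barycentric coordinates a_i are obtained by solving the linear system in the definition, and |f|(x) is then the corresponding combination of the image vertices.

```python
import numpy as np

# Geometric realizations of the vertices (coordinates are illustrative).
V = {1: np.array([0.0, 0.0]), 2: np.array([1.0, 0.0]), 3: np.array([0.0, 1.0])}
W = {4: np.array([0.0, 0.0]), 5: np.array([2.0, 0.0]), 6: np.array([0.0, 2.0])}
f = {1: 4, 2: 5, 3: 6}  # vertex map

def barycentric(x, verts):
    """Solve sum_i a_i v_i = x with sum_i a_i = 1 for the coefficients a_i."""
    names = list(verts)
    A = np.vstack([np.column_stack([verts[v] for v in names]),
                   np.ones(len(names))])
    a = np.linalg.solve(A, np.append(x, 1.0))
    return dict(zip(names, a))

def affine_extension(x):
    """Compute |f|(x) = sum_i a_i f(v_i) from the barycentric coordinates of x."""
    a = barycentric(x, V)
    return sum(a[v] * W[f[v]] for v in a)

x = np.array([0.25, 0.25])   # barycentric coordinates (0.5, 0.25, 0.25)
print(affine_extension(x))   # [0.5 0.5]
```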
Simplicial approximation:
Let f: |K|→|L| be a continuous map between the underlying polyhedra of simplicial complexes, and write st(v) for the star of a vertex v. A simplicial map f△: K→L such that f(st(v)) ⊆ st(f△(v)) for every vertex v of K is called a simplicial approximation to f. A simplicial approximation is homotopic to the map it approximates. See the simplicial approximation theorem for more details.
Piecewise-linear maps:
Let K and L be two GSCs. A function f: |K|→|L| is called piecewise-linear (PL) if there exist a subdivision K′ of K and a subdivision L′ of L such that f: |K′|→|L′| is a simplicial map of K′ into L′. Every simplicial map is PL, but the converse is not true. For example, suppose |K| and |L| are two triangles, and let f: |K|→|L| be a non-linear function that maps the leftmost half of |K| linearly onto the leftmost half of |L|, and maps the rightmost half of |K| linearly onto the rightmost half of |L|. Then f is PL, since it is a simplicial map between a subdivision of |K| into two triangles and a subdivision of |L| into two triangles. This notion is an adaptation of the general notion of a piecewise-linear function to simplicial complexes.
Piecewise-linear maps:
A PL homeomorphism between two polyhedra |K| and |L| is a PL mapping such that the simplicial mapping between the subdivisions, f: |K′|→|L′|, is a homeomorphism. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Backward chaining (applied behavior analysis)**
Backward chaining (applied behavior analysis):
Chaining is a technique used in applied behavior analysis to teach complex tasks by breaking them down into discrete responses, or individual behaviors, that are part of a task analysis. With a backward chaining procedure, the learning can happen in two ways. In one approach, the adult completes all the steps for the learner and gives the learner the opportunity to attempt the last one, prompting as needed. In the other approach, the adult prompts the learner through the steps of the chain and gives the learner an opportunity to complete the last one independently; if the learner is unable to do so, the adult prompts the learner through the last step as well, and reinforcement is given once the last step is completed. Because independence is the goal, prompts are removed as soon as the learner can complete the steps without help.
Task Analysis:
A task analysis involves breaking a complex skill into smaller teachable units, creating a series of steps or tasks. In other words, it is the identification of all the stimuli and responses in a behavior chain. In a backward-chain task analysis, the final step of the routine is taught first, so that completing that step brings the learner into direct contact with the naturally occurring reinforcement.
Implementation:
In order to teach a task using a backward chaining procedure, begin by breaking the entire task down into individual steps, known as a task analysis. For example, a tooth brushing routine may be broken down as follows: 1. Grab toothbrush, 2. Apply toothpaste to toothbrush, 3. Turn on water, 4. Wet toothbrush, 5. Brush top teeth, 6. Brush bottom teeth, 7. Rinse toothbrush, 8. Turn off water, 9. Put toothbrush away. The trainer would begin by completing each step for the learner, beginning with step one (grab toothbrush). Once the trainer has completed all steps, the trainer allows the learner to complete the last step (put toothbrush away) independently. Once this step is independently mastered, the trainer can move on to training the last two steps (steps 8 and 9). Training continues in this way until the learner is completely independent and can complete the entire tooth brushing routine without assistance. It is important to note that the pace of transitioning from one step to the next varies from learner to learner, and the transition should not be made until the learner is proficient in the targeted step.
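The ordering logic of backward chaining can be sketched in a few lines of Python; the step names and the function here are hypothetical illustrations, not a clinical protocol:

```python
# Illustrative sketch: in backward chaining, the learner's independent "target"
# grows backwards from the end of the task analysis, while the trainer
# completes (or prompts) everything before it.
task_analysis = [
    "Grab toothbrush", "Apply toothpaste to toothbrush", "Turn on water",
    "Wet toothbrush", "Brush top teeth", "Brush bottom teeth",
    "Rinse toothbrush", "Turn off water", "Put toothbrush away",
]

def backward_chaining_phases(steps):
    """Yield (trainer_steps, learner_steps) for each successive training phase."""
    for n in range(1, len(steps) + 1):
        yield steps[:-n], steps[-n:]

for trainer, learner in backward_chaining_phases(task_analysis):
    print(f"Trainer does {len(trainer)} step(s); learner targets: {learner}")
```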
Implementation:
The trainer can either complete the steps for the learner or physically prompt the learner through all the steps before allowing the learner to complete the last step independently.
For example, physical prompting during tooth brushing can take the form of hand-over-hand assistance, helping the learner complete all the steps correctly before letting the learner complete the last one independently.
Prompting:
The two types of prompting in a behavior chain are most-to-least (MTL) and least-to-most (LTM).
Prompting:
In MTL prompting, the most intrusive prompt is introduced initially and then systematically faded to less intrusive prompts. This method is mainly used when the task analysis is first being taught. In LTM prompting, there is no prompt initially, and the intrusiveness of the prompt is increased as necessary for each step of the task analysis. This method is mainly used for error correction on a specific step of a task analysis with which the learner already has experience.
Steps of Implementation:
1. When considering whether backward chaining is appropriate for the learner, one must determine whether the learner is acquiring a new behavior or whether the issue is one of compliance. If the learner cannot perform the task, then the chain is appropriate. If the learner can do the steps but chooses not to, then a procedure that addresses compliance should be used instead.
Steps of Implementation:
2. Develop a task analysis of the S–R (stimulus–response) chain. When developing the steps of the task analysis, the steps should match the learner's skill level.
3. Collect baseline data. 4. Implement. 5. Continue to collect data. 6. Shift to intermittent reinforcement for maintenance.
Fading & Mastery:
In order to fade prompts on the steps being targeted, the learner must show increased independence. The fading technique used is most-to-least, because the skills being worked on are new. The prompts are decreased to less intrusive ones as the learner shows increased ability to complete the task with less assistance. To determine mastery, assessments are done before the chaining procedure is implemented to establish the learner's mastery level. There are two methods that can be used to assess mastery: single opportunity and multiple opportunity.
Fading & Mastery:
Single Opportunity: the learner is stopped if any step is skipped or they are unable to complete it.
Fading & Mastery:
Multiple Opportunity: the learner is allowed to attempt each step in the chain. Once the mastery level has been established, a mastery criterion is also determined before the chain can be implemented. The mastery criterion is set for each of the steps, and the learner is said to have mastered the skill once they can perform all steps in the chain at the predetermined mastery criterion. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Roles of chemical elements**
Roles of chemical elements:
This table is designed to show the role(s) performed by each chemical element, in nature and in technology.
Z = Atomic number; Sym. = Symbol; Per. = Period; Gr. = Group | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FrogWatch**
FrogWatch:
A FrogWatch is any of several citizen science programs in which laypeople monitor amphibians. In a FrogWatch, people make recordings of frogs and other animals that live near them and send the recordings to databases for scientists and other people to hear and study.
FrogWatch:
Not all FrogWatch programs are run by the same organization. The Association of Zoos and Aquariums runs FrogWatch USA, Nature Canada runs FrogWatch Canada, the India Biodiversity Portal runs the FrogWatch in India, and other organizations run FrogWatches in other countries. The National Geographic Society developed the program that FrogWatch USA volunteers use to submit information and that FrogWatch uses to study it. Volunteers record temperature with thermometers and listen for sounds made by specific types of frogs and toads. FrogWatch USA volunteers record frog habitats for three and a half minutes, starting 30 minutes after sunset. Scientists have used FrogWatch data to study how frogs and toads change the places they live, which species are becoming more numerous and which less numerous, species diversity, how species react to changes in temperature, and how they behave during different parts of the year.
History:
The United States Geological Survey started FrogWatch USA in 1998, but the National Wildlife Federation took over in 2002. Between 1998 and 2005, 1,395 people working with FrogWatch USA visited 1,942 places where frogs live and contributed information to FrogWatch. They found 79 different kinds of frogs and toads. This does not count visits, places, or species for FrogWatch Canada or FrogWatches in other countries.
History:
FrogWatch NT operates in northern Australia. It began in 1991 after cane toads came to Australia and became a problematic invasive species. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Peak oil**
Peak oil:
Peak oil is the point in time when the maximum rate of global oil production is reached, after which production will begin an irreversible decline. It is related to the distinct concept of oil depletion; while global petroleum reserves are finite, the limiting factor is not whether the oil exists but whether it can be extracted economically at a given price. A secular decline in oil extraction could be caused both by depletion of accessible reserves and by reductions in demand that lower the price relative to the cost of extraction, as might be induced by efforts to reduce carbon emissions. Numerous predictions of the timing of peak oil have been made over the past century, before being falsified by subsequent growth in the rate of petroleum extraction. M. King Hubbert is often credited with introducing the notion in a 1956 paper which presented a formal theory and predicted U.S. extraction to peak between 1965 and 1971. Hubbert's original predictions for world peak oil production proved premature and, as of 2021, forecasts of the year of peak oil range from 2019 to 2040. These predictions depend on future economic trends, technological developments, and efforts by societies and governments to mitigate climate change. Predictions of future oil production made in 2007 and 2009 stated either that the peak had already occurred, that oil production was on the cusp of the peak, or that it would occur soon. A decade later, world oil production rose to hit a new high in 2018, as developments in extraction technology enabled an expansion of U.S. tight oil production. Following a collapse in oil demand at the outset of the COVID-19 pandemic and a price war between Saudi Arabia and Russia, a number of organizations have put forward predictions of a peak in the next 10 to 15 years.
Modeling global oil production:
The idea that the rate of oil production would peak and irreversibly decline is an old one. In 1919, David White, chief geologist of the United States Geological Survey, wrote of US petroleum: "... the peak of production will soon be passed, possibly within 3 years." In 1953, Eugene Ayers, a researcher for Gulf Oil, projected that if US ultimately recoverable oil reserves were 100 billion barrels, then production in the US would peak no later than 1960. If ultimate recovery were to be as high as 200 billion barrels, which he warned was wishful thinking, US peak production would come no later than 1970. Likewise for the world, he projected a peak somewhere between 1985 (one trillion barrels ultimately recoverable) and 2000 (two trillion barrels recoverable). Ayers made his projections without a mathematical model. He wrote: "But if the curve is made to look reasonable, it is quite possible to adapt mathematical expressions to it and to determine, in this way, the peak dates corresponding to various ultimate recoverable reserve numbers." By observing past discoveries and production levels, and predicting future discovery trends, the geoscientist M. King Hubbert used statistical modelling in 1956 to predict that United States oil production would peak between 1965 and 1971. This prediction appeared accurate for a time; however, by 2018 daily oil production in the United States exceeded daily production in 1970, the year previously regarded as the peak. Hubbert used a semi-logistic curve model (sometimes incorrectly compared to a normal distribution). He assumed the production rate of a limited resource would follow a roughly symmetrical distribution. Depending on the limits of exploitability and market pressures, the rise or decline of resource production over time might be sharper or more stable, and appear more linear or more curved. That model and its variants are now called Hubbert peak theory; they have been used to describe and predict the peak and decline of production from regions, countries, and multinational areas. The same theory has also been applied to other limited-resource production.
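Hubbert's model can be made concrete with a short sketch. In the logistic form, cumulative production is Q(t) = URR / (1 + e^(−k(t − t_peak))), where URR is the ultimately recoverable resource, and the production rate is its derivative, which peaks at URR·k/4 when t = t_peak. The parameters below are illustrative only, not a fitted forecast:

```python
import numpy as np

def hubbert_rate(t, urr, k, t_peak):
    """Production rate dQ/dt for logistic cumulative production
    Q(t) = urr / (1 + exp(-k * (t - t_peak)))."""
    e = np.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# Illustrative parameters only (urr in Gb, k per year):
years = np.arange(1900, 2101)
rate = hubbert_rate(years, urr=2000.0, k=0.06, t_peak=2005)

print(years[np.argmax(rate)])   # 2005: the curve peaks at t_peak
print(rate.max())               # ~30.0 Gb/yr: the peak rate equals urr * k / 4
```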
Modeling global oil production:
More recently, the term "peak oil" was popularized by Colin Campbell and Kjell Aleklett in 2002, when they helped form the Association for the Study of Peak Oil and Gas (ASPO). In his publications, Hubbert used the terms "peak production rate" and "peak in the rate of discoveries". A 2006 analysis of Hubbert theory noted that uncertainty in real-world oil production amounts and confusion in definitions increase the overall uncertainty of production predictions. By comparing the fit of various other models, it was found that Hubbert's methods yielded the closest fit overall, but none of the models were very accurate. In 1956 Hubbert himself recommended using "a family of possible production curves" when predicting a production peak and decline curve. A comprehensive 2009 study of oil depletion by the UK Energy Research Centre noted: "Few analysts now adhere to a symmetrical bell-shaped production curve. This is correct, as there is no natural physical reason why the production of a resource should follow such a curve and little empirical evidence that it does." The report noted that Hubbert had used the logistic curve because it was mathematically convenient, not because he believed it to be literally correct. The study observed that in most cases the asymmetric exponential model provided a better fit (as in the case of the Seneca cliff model), and that peaks tended to occur well before half the oil had been produced, with the result that, in nearly all cases, the post-peak decline was more gradual than the increase leading up to the peak.
Demand:
The demand side of peak oil over time is concerned with the total quantity of oil that the global market would choose to consume at any given market price. The hypothesis that peak oil would be driven by a reduction in the availability of easily extractable oil implies that prices will increase over time to match demand with a declining supply. By contrast, developments since 2010 have given rise to the idea of demand-driven peak oil. The central idea is that, in response to technological developments and pressure to reduce carbon dioxide emissions, demand for oil at any given price will decline. In this context, the development of electric vehicles creates the possibility that the primary use of oil, transportation, will diminish in importance over time.
Demand:
After growing steadily until around 2006, oil demand has fluctuated, falling during recessions and then recovering, but at slower growth rates than in the past. Oil demand fell sharply during the early stages of the COVID-19 pandemic, with global demand dropping from 100 million barrels a day in 2019 to 90 million in 2020. The drop in demand is not expected to recover until at least 2022, and British Petroleum predicts that oil demand will never recover to pre-pandemic levels because of the increased proliferation of electric vehicles and stronger action on climate change. Developments in 2021 at Exxon, Chevron and Shell also lent further credence to the idea that peak oil had happened in 2019. Energy demand is distributed amongst four broad sectors: transportation, residential, commercial, and industrial. In terms of oil use, transportation is the largest sector and the one that has seen the largest growth in demand in recent decades. This growth has largely come from new demand for personal-use vehicles powered by internal combustion engines. This sector also has the highest consumption rates, accounting for approximately 71% of the oil used in the United States in 2013 and 55% of oil use worldwide, as documented in the Hirsch report. Transportation is therefore of particular interest to those seeking to mitigate the effects of peak oil.
Demand:
Although demand growth is highest in the developing world, the United States is the world's largest consumer of petroleum. Between 1995 and 2005, US consumption grew from 17.7 to 20.7 million barrels (2.81 to 3.29 million cubic metres) per day. China, by comparison, increased consumption from 3.4 to 7.0 million bbl (0.54 to 1.11 million m3) per day in the same time frame. The Energy Information Administration (EIA) stated that gasoline usage in the United States may have peaked in 2007, in part because of increasing interest in and mandates for the use of biofuels and energy efficiency. As countries develop, industry and higher living standards drive up energy use, oil usage being a major component. Thriving economies, such as China and India, are quickly becoming large oil consumers. For example, China surpassed the United States as the world's largest crude oil importer in 2015. Oil consumption growth is expected to continue, however not at previous rates, as China's economic growth is predicted to decrease from the high rates of the early part of the 21st century. India's oil imports are expected to more than triple from 2005 levels by 2020, rising to 5 million barrels (790,000 m3) per day.
Demand:
Population: Another significant factor affecting petroleum demand has been human population growth. The United States Census Bureau predicts that world population in 2030 will be almost double that of 1980. Oil production per capita peaked in 1979 at 5.5 barrels per year and has since declined to fluctuate around 4.5 barrels per year. In this regard, the decreasing population growth rate since the 1970s has somewhat ameliorated the per capita decline.
Demand:
Economic growth: Some analysts argue that the cost of oil has a profound effect on economic growth due to its pivotal role in the extraction of resources and the processing, manufacturing, and transportation of goods. As the industrial effort to extract new unconventional oil sources increases, this has a compounding negative effect on all sectors of the economy, leading to economic stagnation or even eventual contraction. Such a scenario would result in an inability for national economies to pay high oil prices, leading to declining demand and a price collapse.
Supply:
Our analysis suggests there are ample physical oil and liquid fuel resources for the foreseeable future. However, the rate at which new supplies can be developed and the break-even prices for those new supplies are changing.
Supply:
Defining sources of oil: Oil may come from conventional or unconventional sources. The terms are not strictly defined, and vary within the literature, as definitions based on new technologies tend to change over time. As a result, different oil forecasting studies have included different classes of liquid fuels. Some use the term "conventional" oil for what is included in the model, and "unconventional" oil for the classes excluded.
Supply:
In 1956, Hubbert confined his peak oil prediction to that crude oil "producible by methods now in use." By 1962, however, his analyses included future improvements in exploration and production. All of Hubbert's analyses of peak oil specifically excluded oil manufactured from oil shale or mined from oil sands. A 2013 study predicting an early peak excluded deepwater oil, tight oil, oil with API gravity less than 17.5, and oil close to the poles, such as that on the North Slope of Alaska, all of which it defined as non-conventional. Some commonly used definitions for conventional and unconventional oil are detailed below.
Supply:
Conventional sources: Conventional oil is extracted on land and offshore using "standard" (i.e., in common use before 2000) techniques, and can be categorized as light, medium, heavy, or extra heavy in grade. The exact definitions of these grades vary depending on the region from which the oil came.
Supply:
Light oil flows naturally to the surface or can be extracted by simply pumping it out of the ground. Heavy refers to oil that has higher density and therefore lower API gravity. It does not flow easily, and its consistency is similar to that of molasses. While some of it can be produced using conventional techniques, recovery rates are better using unconventional methods. According to the International Energy Agency, production of conventional crude oil (as then defined) peaked in 2006, with an all-time maximum of 70 million barrels per day.
Supply:
Tight oil was typically classified as "unconventional" prior to about 2006, but more recent analyses began to consider it "conventional" as its extraction became more common. It is extracted from deposits of low-permeability rock, sometimes shale deposits but often other rock types, using hydraulic fracturing, or "fracking". It is often confused with shale oil, which is oil manufactured from the kerogen contained in oil shale (see below). Production of tight oil has led to a resurgence of US production in recent years. U.S. tight oil production peaked in March 2015 and fell a total of 12 per cent over the next 18 months, but then rose again, exceeding the old peak by September 2017; as of October 2017 it was still rising. Tight oil as a whole peaked in late 2019, although it is unclear whether this was due to the COVID-19 pandemic; the Eagle Ford formation, however, appeared to have peaked in 2015, and the Bakken has reached well saturation in its top-producing regions.
Supply:
Unconventional sources: As of 2019, oil considered unconventional is derived from multiple sources.
Supply:
Oil shale is a common term for sedimentary rock such as shale or marl containing kerogen, a waxy oil precursor that has not yet been transformed into crude oil by the high pressures and temperatures of deep burial. The term "oil shale" is somewhat confusing, because what is referred to in the U.S. as "oil shale" is not really oil, and the rock it is found in is generally not shale. Since it is close to the surface rather than buried deep in the earth, the shale or marl is typically mined, crushed, and retorted, producing synthetic oil from the kerogen. Its net energy yield is much lower than that of conventional oil, so much so that estimates of the net energy yield of shale discoveries are considered extremely unreliable.
Supply:
Oil sands are unconsolidated sandstone deposits containing large amounts of very viscous crude bitumen or extra-heavy crude oil, which can be recovered by surface mining or by in-situ oil wells using steam injection or other techniques. The oil can be liquefied by upgrading, blending with diluent, or heating, and then processed by a conventional oil refinery. The recovery process requires advanced technology but is more efficient than that of oil shale, because, unlike U.S. "oil shale", Canadian oil sands actually contain oil, and the sandstones they are found in are much easier to produce oil from than shale or marl. In the U.S. dialect of English, these formations are often called "tar sands", but the material found in them is not tar but an extra-heavy and viscous form of oil technically known as bitumen. Venezuela has oil sands deposits similar in size to those of Canada, and approximately equal to the world's reserves of conventional oil. Venezuela's Orinoco Belt tar sands are less viscous than Canada's Athabasca oil sands, meaning they can be produced by more conventional means, but they are buried too deep to be extracted by surface mining. Estimates of the recoverable reserves of the Orinoco Belt range from 100 billion barrels (16×10^9 m3) to 270 billion barrels (43×10^9 m3); in 2009, the USGS updated this value to 513 billion barrels (8.16×10^10 m3). Coal liquefaction and gas-to-liquids products are liquid hydrocarbons synthesised from coal or natural gas by the Fischer–Tropsch process, Bergius process, or Karrick process. Currently, two companies, Sasol and Shell, have synthetic oil technology proven to work on a commercial scale. Sasol's primary business is based on CTL (coal-to-liquid) and GTL (natural gas-to-liquid) technology, producing US$4.40 billion in revenues (FY2009). Shell has used these processes to recycle waste flare gas (usually burnt off at oil wells and refineries) into usable synthetic oil. However, for CTL there may be insufficient coal reserves to supply global needs for both liquid fuels and electric power generation.
Supply:
Minor sources include thermal depolymerization, as discussed in a 2003 article in Discover magazine, that could be used to manufacture oil indefinitely, out of garbage, sewage, and agricultural waste. The article claimed that the cost of the process was $15 per barrel. A follow-up article in 2006 stated that the cost was actually $80 per barrel, because the feedstock that had previously been considered as hazardous waste now had market value. A 2008 news bulletin published by Los Alamos Laboratory proposed that hydrogen (possibly produced using hot fluid from nuclear reactors to split water into hydrogen and oxygen) in combination with sequestered CO2 could be used to produce methanol (CH3OH), which could then be converted into gasoline.
Supply:
Discoveries: All the easy oil and gas in the world has pretty much been found; now comes the harder work of finding and producing oil from more challenging environments and work areas. It is pretty clear that there is not much chance of finding any significant quantity of new cheap oil; any new or unconventional oil is going to be expensive. The peak of world oilfield discoveries occurred in the 1960s, at around 55 billion barrels (8.7×10^9 m3) (Gb) per year. According to the Association for the Study of Peak Oil and Gas (ASPO), the rate of discovery has been falling steadily since. Less than 10 Gb/yr of oil was discovered each year between 2002 and 2007. According to a 2010 Reuters article, the annual rate of discovery of new fields has remained remarkably constant at 15–20 Gb/yr.
Supply:
Despite the fall-off in new field discoveries and record-high production rates, the reported proved reserves of crude oil remaining in the ground in 2014, which totaled 1,490 billion barrels (not counting Canadian heavy oil sands), were more than quadruple the 1965 proved reserves of 354 billion barrels. A researcher for the U.S. Energy Information Administration has pointed out that, after the first wave of discoveries in an area, most oil and natural gas reserve growth comes not from discoveries of new fields but from extensions and additional gas found within existing fields. A report by the UK Energy Research Centre noted that "discovery" is often used ambiguously, and explained the seeming contradiction between falling discovery rates since the 1960s and increasing reserves by the phenomenon of reserve growth. The report noted that increased reserves within a field may be discovered or developed by new technology years or decades after the original discovery. But because of the practice of "backdating", any new reserves within a field, even those discovered decades after the field's discovery, are attributed to the year of initial field discovery, creating an illusion that discovery is not keeping pace with production.
Supply:
Reserves: Total possible conventional crude oil reserves include crude oil with 90% certainty of being technically producible from reservoirs (through a wellbore using primary, secondary, improved, enhanced, or tertiary methods); all crude with a 50% probability of being produced in the future (probable); and discovered reserves that have a 10% possibility of being produced in the future (possible). Reserve estimates based on these are referred to as 1P, proven (at least 90% probability); 2P, proven and probable (at least 50% probability); and 3P, proven, probable and possible (at least 10% probability), respectively. This does not include liquids extracted from mined solids or gases (oil sands, oil shale, gas-to-liquid processes, or coal-to-liquid processes). Hubbert's 1956 peak projection for the United States depended on geological estimates of ultimately recoverable oil resources, but starting with his 1962 publication, he concluded that ultimate oil recovery was an output of his mathematical analysis rather than an assumption. He regarded his peak oil calculation as independent of reserve estimates. Many current 2P calculations predict reserves to be between 1150 and 1350 Gb, but some authors have written that, because of misinformation, withheld information, and misleading reserve calculations, 2P reserves are likely nearer to 850–900 Gb. The Energy Watch Group wrote that actual reserves peaked in 1980, when production first surpassed new discoveries, that apparent increases in reserves since then are illusory, and concluded (in 2007): "Probably the world oil production has peaked already, but we cannot be sure yet." Concerns over stated reserves: "[World] reserves are confused and in fact inflated. Many of the so-called reserves are in fact resources. They're not delineated, they're not accessible, they're not available for production."
Supply:
Sadad Al Husseini estimated that 300 billion barrels (48×10^9 m3) of the world's 1,200 billion barrels (190×10^9 m3) of proven reserves should be recategorized as speculative resources.
Supply:
One difficulty in forecasting the date of peak oil is the opacity surrounding the oil reserves classified as "proven". In many major producing countries, the majority of reserves claims have not been subject to outside audit or examination. Several worrying signs concerning the depletion of proven reserves emerged around 2004, best exemplified by the 2004 scandal surrounding the "evaporation" of 20% of Shell's reserves. For the most part, proven reserves are stated by the oil companies, the producer states and the consumer states. All three have reasons to overstate their proven reserves: oil companies may look to increase their potential worth; producer countries gain a stronger international stature; and governments of consumer countries may seek a means to foster sentiments of security and stability within their economies and among consumers.
Supply:
Major discrepancies arise from accuracy issues with the self-reported numbers from the Organization of the Petroleum Exporting Countries (OPEC). Besides the possibility that these nations have overstated their reserves for political reasons (during periods of no substantial discoveries), over 70 nations also follow the practice of not reducing their reserves to account for yearly production. Analysts have suggested that OPEC member nations have economic incentives to exaggerate their reserves, as the OPEC quota system allows greater output for countries with greater reserves. Kuwait, for example, was reported in the January 2006 issue of Petroleum Intelligence Weekly to have only 48 billion barrels (7.6×10^9 m3) in reserve, of which only 24 were fully proven. This report was based on the leak of a confidential document from Kuwait and has not been formally denied by the Kuwaiti authorities. The leaked document is from 2001, and excludes revisions or discoveries made since then. Additionally, the reported 1.5 billion barrels (240×10^6 m3) of oil burned off by Iraqi soldiers in the First Persian Gulf War are conspicuously missing from Kuwait's figures.
Supply:
On the other hand, investigative journalist Greg Palast argues that oil companies have an interest in making oil look rarer than it is, to justify higher prices. This view is contested by ecological journalist Richard Heinberg. Other analysts argue that oil-producing countries understate the extent of their reserves to drive up the price. The EUR (estimated ultimate recovery) of 2,300 billion barrels (370×10^9 m3) reported by the 2000 USGS survey has been criticized for assuming a discovery trend over the next twenty years that would reverse the observed trend of the past 40 years. Its 95%-confidence EUR of 2,300 billion barrels (370×10^9 m3) assumed that discovery levels would stay steady, despite the fact that new-field discovery rates had declined since the 1960s. That trend of falling discoveries has continued in the ten years since the USGS made its assumption. The 2000 USGS survey has also been criticized for other assumptions, as well as for assuming 2030 production rates inconsistent with projected reserves.
Supply:
Reserves of unconventional oil: As conventional oil becomes less available, it can be replaced with production of liquids from unconventional sources such as tight oil, oil sands, ultra-heavy oils, gas-to-liquid technologies, coal-to-liquid technologies, biofuel technologies, and shale oil. In the 2007 and subsequent International Energy Outlook editions, the word "Oil" was replaced with "Liquids" in the chart of world energy consumption. In 2009 biofuels were included in "Liquids" instead of in "Renewables". The inclusion of natural gas liquids, a by-product of natural gas extraction, in "Liquids" has been criticized, as they are mostly a chemical feedstock generally not used as transport fuel.
Supply:
Reserve estimates are based on profitability, which depends on both oil price and cost of production. Hence, unconventional sources such as heavy crude oil, oil sands, and oil shale may be included as new techniques reduce the cost of extraction. With rule changes by the SEC, oil companies can now book them as proven reserves after opening a strip mine or thermal facility for extraction. These unconventional sources are more labor and resource intensive to produce, however, requiring extra energy to refine, resulting in higher production costs and up to three times more greenhouse gas emissions per barrel (or barrel equivalent) on a "well to tank" basis, or 10 to 45% more on a "well to wheels" basis, which includes the carbon emitted from combustion of the final product. While the energy used, resources needed, and environmental effects of extracting unconventional sources have traditionally been prohibitively high, the major unconventional oil sources being considered for large-scale production are the extra-heavy oil in the Orinoco Belt of Venezuela, the Athabasca Oil Sands in the Western Canadian Sedimentary Basin, and the oil shale of the Green River Formation in Colorado, Utah, and Wyoming in the United States. Energy companies such as Syncrude and Suncor have been extracting bitumen for decades, but production has increased greatly in recent years with the development of steam-assisted gravity drainage and other extraction technologies. Chuck Masters of the USGS estimates that, "Taken together, these resource occurrences, in the Western Hemisphere, are approximately equal to the Identified Reserves of conventional crude oil accredited to the Middle East." Authorities familiar with the resources believe that the world's ultimate reserves of unconventional oil are several times as large as those of conventional oil and will be highly profitable for companies as a result of higher prices in the 21st century. In October 2009, the USGS updated the Orinoco tar sands (Venezuela) recoverable "mean value" to 513 billion barrels (8.16×10^10 m3), with a 90% chance of being within the range of 380–652 billion barrels (103.7×10^9 m3), making this area "one of the world's largest recoverable oil accumulations".
Supply:
Despite the large quantities of oil available in non-conventional sources, Matthew Simmons argued in 2005 that limitations on production prevent them from becoming an effective substitute for conventional crude oil. Simmons stated "these are high energy intensity projects that can never reach high volumes" to offset significant losses from other sources. Another study claims that even under highly optimistic assumptions, "Canada's oil sands will not prevent peak oil", although production could reach 5,000,000 bbl/d (790,000 m3/d) by 2030 in a "crash program" development effort.Moreover, oil extracted from these sources typically contains contaminants such as sulfur and heavy metals that are energy-intensive to extract and can leave tailings, ponds containing hydrocarbon sludge, in some cases. The same applies to much of the Middle East's undeveloped conventional oil reserves, much of which is heavy, viscous, and contaminated with sulfur and metals to the point of being unusable. However, high oil prices make these sources more financially appealing. A study by Wood Mackenzie suggests that by the early 2020s all the world's extra oil supply is likely to come from unconventional sources.
Supply:
Production: The point in time when peak global oil production occurs defines peak oil. Some believe that the increasing industrial effort to extract oil will have a negative effect on global economic growth, leading to demand contraction and a price collapse, thereby causing production decline as some unconventional sources become uneconomical. Some believe that the peak may be to some extent led by declining demand as new technologies and improving efficiency shift energy usage away from oil.
Supply:
Worldwide oil discoveries have been less than annual production since 1980. World population has grown faster than oil production. Because of this, oil production per capita peaked in 1979 (preceded by a plateau during the period of 1973–1979).
Supply:
The increasing investment in harder-to-reach oil as of 2005 was said to signal oil companies' belief in the end of easy oil. While it is widely believed that increased oil prices spur an increase in production, an increasing number of oil industry insiders believed in 2008 that even with higher prices, oil production was unlikely to increase significantly. Among the reasons cited were both geological factors and "above ground" factors likely to see oil production plateau. A 2008 Journal of Energy Security analysis of the energy return on drilling effort (energy returned on energy invested, also referred to as EROEI) in the United States concluded that there was extremely limited potential to increase production of both gas and (especially) oil. By looking at the historical response of production to variation in drilling effort, the analysis showed very little increase of production attributable to increased drilling. This was because of diminishing returns with increasing drilling effort: as drilling effort increased, the energy obtained per active drill rig had fallen according to a severely diminishing power law. The study concluded that even an enormous increase of drilling effort was unlikely to significantly increase oil and gas production in a mature petroleum region such as the United States. However, contrary to the study's conclusion, since the analysis was published in 2008, US production of crude oil has more than doubled, increasing 119%, and production of dry natural gas has increased 51% (2018 compared to 2008). The previous assumption of inevitably declining volumes of oil and gas produced per unit of effort is contrary to recent experience in the US. In the United States, as of 2017, there had been an ongoing decade-long increase in the productivity of oil and gas drilling in all the major tight oil and gas plays. The US Energy Information Administration reports, for instance, that in the Bakken Shale production area of North Dakota, the volume of oil produced per day of drilling rig time in January 2017 was 4 times the oil volume per day of drilling five years previous, in January 2012, and nearly 10 times the oil volume per day of ten years previous, in January 2007. In the Marcellus gas region of the northeast, the volume of gas produced per day of drilling time in January 2017 was 3 times the gas volume per day of drilling five years previous, in January 2012, and 28 times the gas volume per day of drilling ten years previous, in January 2007. New research estimates that the energy required to produce all petroleum liquids (not including transportation, refining and distribution) today represents the equivalent of 16% of that production, and that by 2050 an amount equivalent to half of the gross energy production will be required. For gases, the energy needed for production is estimated to be equivalent to 7% of the gross energy produced today, rising to 24% by 2050.
Supply:
Anticipated production by major agencies: Average yearly gains in global supply from 1987 to 2005 were 1.2 million barrels per day (190×10^3 m3/d) (1.7%). In 2005, the IEA predicted that 2030 production rates would reach 120,000,000 barrels per day (19,000,000 m3/d), but this number was gradually reduced to 105,000,000 barrels per day (16,700,000 m3/d). A 2008 analysis of IEA predictions questioned several underlying assumptions and claimed that a 2030 production level of 75,000,000 barrels per day (11,900,000 m3/d) (comprising 55,000,000 barrels (8,700,000 m3) of crude oil and 20,000,000 barrels (3,200,000 m3) of both non-conventional oil and natural gas liquids) was more realistic than the IEA numbers. More recently, the EIA's Annual Energy Outlook 2015 indicated no production peak out to 2040. However, this required a future Brent crude oil price of US$144/bbl (2013 dollars) "as growing demand leads to the development of more costly resources".
Supply:
Oil field decline: In a 2013 study of 733 giant oil fields, only 32% of the ultimately recoverable oil, condensate and gas remained. Ghawar, the largest oil field in the world and responsible for approximately half of Saudi Arabia's oil production over the last 50 years, was in decline before 2009. The world's second largest oil field, the Burgan Field in Kuwait, entered decline in November 2005. Mexico announced that production from its giant Cantarell Field began to decline in March 2006, reportedly at a rate of 13% per year. Also in 2006, Saudi Aramco Senior Vice President Abdullah Saif estimated that its existing fields were declining at a rate of 5% to 12% per year. According to a study of the largest 811 oilfields conducted in early 2008 by Cambridge Energy Research Associates, the average rate of field decline is 4.5% per year. The Association for the Study of Peak Oil and Gas agreed with their decline rates, but considered the rate of new fields coming online overly optimistic. The IEA stated in November 2008 that an analysis of 800 oilfields showed the decline in oil production to be 6.7% a year for fields past their peak, and that this would grow to 8.6% in 2030. A more rapid annual rate of decline, of 5.1% in 800 of the world's largest oil fields weighted for production over their whole lives, was reported by the International Energy Agency in its World Energy Outlook 2008. The 2013 study of 733 giant fields mentioned previously found an average decline rate of 3.83%, which was described as "conservative". United States oil production peaked in February 2020, at about 18,826,000 barrels per day.
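Because a constant percentage decline compounds like exponential decay, these rates translate directly into halving times for a post-peak field's output. A rough sketch (real fields rarely decline at an exactly constant rate):

```python
import math

# Constant-percentage (exponential) decline: production after n years is
# p0 * (1 - d) ** n, so the time for output to halve solves (1 - d) ** n = 0.5.
def years_to_halve(d):
    return math.log(0.5) / math.log(1.0 - d)

for d in (0.045, 0.067, 0.086):   # decline rates cited above
    print(f"{d:.1%}/yr decline -> output halves in about {years_to_halve(d):.0f} years")
# 4.5%/yr -> ~15 years; 6.7%/yr -> ~10 years; 8.6%/yr -> ~8 years
```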
Supply:
Control over supply Entities such as governments or cartels can reduce supply to the world market by limiting access to the supply through nationalizing oil, cutting back on production, limiting drilling rights, imposing taxes, etc. International sanctions, corruption, and military conflicts can also reduce supply.
Supply:
Nationalization of oil supplies: Another factor affecting global oil supply is the nationalization of oil reserves by producing nations. The nationalization of oil occurs as countries begin to deprivatize oil production and withhold exports. Kate Dourian, Platts' Middle East editor, points out that while estimates of oil reserves may vary, politics have now entered the equation of oil supply. "Some countries are becoming off limits. Major oil companies operating in Venezuela find themselves in a difficult position because of the growing nationalization of that resource. These countries are now reluctant to share their reserves." According to consulting firm PFC Energy, only 7% of the world's estimated oil and gas reserves are in countries that allow companies like ExxonMobil free rein. Fully 65% are in the hands of state-owned companies such as Saudi Aramco, with the rest in countries such as Russia and Venezuela, where access by Western European and North American companies is difficult. The PFC study implies political factors are limiting capacity increases in Mexico, Venezuela, Iran, Iraq, Kuwait, and Russia. Saudi Arabia is also limiting capacity expansion, but because of a self-imposed cap, unlike the other countries. As a result of not having access to countries amenable to oil exploration, ExxonMobil is not making nearly the investment in finding new oil that it did in 1981.
Supply:
OPEC influence on supply: OPEC is an alliance of 14 diverse oil-producing countries (as of January 2019: Algeria, Angola, Ecuador, Equatorial Guinea, Gabon, Iran, Iraq, Kuwait, Libya, Nigeria, Republic of the Congo, Saudi Arabia, United Arab Emirates, Venezuela) that manages the supply of oil. OPEC's power was consolidated in the 1960s and 1970s as various countries nationalized their oil holdings, wrested decision-making away from the "Seven Sisters" (Anglo-Iranian, Socony, Royal Dutch Shell, Gulf, Esso, Texaco, Socal), and created their own oil companies to control the oil. OPEC often tries to influence prices by restricting production. It does this by allocating each member country a quota for production. Members agree to keep prices high by producing at lower levels than they otherwise would. There is no way to enforce adherence to the quota, so each member has an individual incentive to "cheat" the cartel. Commodities trader Raymond Learsy, author of Over a Barrel: Breaking the Middle East Oil Cartel, contends that OPEC has trained consumers to believe that oil is a much more finite resource than it is. To back his argument, he points to past false alarms and apparent collaboration. He also believes that peak oil analysts have conspired with OPEC and the oil companies to create a "fabricated drama of peak oil" to drive up oil prices and profits; oil had risen to a little over $30/barrel at that time. A counter-argument was given in the Huffington Post after he and Steve Andrews, co-founder of ASPO, debated on CNBC in June 2007.
Supply:
Production figures post-2000: After the millennium, global oil extraction was characterized by sequences of years with relatively stable extraction figures (75 million barrels/d in 2000–2002, 82–83 million barrels/d in 2005–2010, 92–95 million barrels/d in 2015–2019) and two episodes of 9–10% growth in between. 2019's production of 94.961 million barrels/d (34,700 million barrels/a, roughly 5,500 million m3/a) surpassed 2018 by just 0.1% and was ranked as a candidate for peak oil, as production fell by 7% in 2020, only partially rebounding in 2021 and, according to expectations, in 2022.
Predictions:
In 1962, Hubbert predicted that world oil production would peak at a rate of 12.5 billion barrels per year, around the year 2000. In 1974, Hubbert predicted that peak oil would occur in 1995 "if current trends continue". Those predictions proved incorrect. In 2009, a number of industry leaders and analysts believed that world oil production would peak between 2015 and 2030, with a significant chance that the peak would occur before 2020; they considered dates after 2030 implausible. By comparison, a 2014 analysis of production and reserve data predicted a peak in oil production about 2035. Determining a more specific range is difficult due to the lack of certainty over the actual size of world oil reserves. Unconventional oil is not currently predicted to meet the expected shortfall even in a best-case scenario. For unconventional oil to fill the gap without "potentially serious impacts on the global economy", oil production would have to remain stable after its peak, until 2035 at the earliest. Papers published since 2010 have been relatively pessimistic. A 2010 Kuwait University study predicted production would peak in 2014. A 2010 Oxford University study predicted that production would peak before 2015, but its projection of a change soon "... from a demand-led market to a supply constrained market ..." was incorrect. A 2014 validation of a significant 2004 study in the journal Energy proposed that it is likely that conventional oil production peaked, according to various definitions, between 2005 and 2011. A set of models published in a 2014 Ph.D. thesis predicted that a 2012 peak would be followed by a drop in oil prices, which in some scenarios could turn into a rapid rise in prices thereafter. According to energy blogger Ron Patterson, the peak of world oil production was probably around 2010. Major oil companies hit peak production in 2005. Several sources in 2006 and 2007 predicted that worldwide production was at or past its maximum. However, in 2013 OPEC's figures showed that world crude oil production and remaining proven reserves were at record highs. According to Matthew Simmons, former Chairman of Simmons & Company International and author of Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy, "peaking is one of these fuzzy events that you only know clearly when you see it through a rear view mirror, and by then an alternate resolution is generally too late."
Possible consequences:
The wide use of fossil fuels has been one of the most important stimuli of economic growth and prosperity since the industrial revolution, allowing humans to participate in takedown, or the consumption of energy at a greater rate than it is being replaced. Some believe that when oil production decreases, human culture and modern technological society will be forced to change drastically. The impact of peak oil will depend heavily on the rate of decline and the development and adoption of effective alternatives.
Possible consequences:
In 2005, the United States Department of Energy published a report titled Peaking of World Oil Production: Impacts, Mitigation, & Risk Management. Known as the Hirsch report, it stated, "The peaking of world oil production presents the U.S. and the world with an unprecedented risk management problem. As peaking is approached, liquid fuel prices and price volatility will increase dramatically, and, without timely mitigation, the economic, social, and political costs will be unprecedented. Viable mitigation options exist on both the supply and demand sides, but to have substantial impact, they must be initiated more than a decade in advance of peaking." Some of the information was updated in 2007.
Possible consequences:
Oil prices: Historically, the price of oil was comparatively low until the 1973 oil crisis and the 1979 oil crisis, when it increased more than tenfold over that six-year timeframe. Even though the oil price dropped significantly in the following years, it has never returned to the previous levels. The oil price began to increase again during the 2000s until it hit a historical high of $143 per barrel (in inflation-adjusted 2007 dollars) on 30 June 2008. As these prices were well above those that caused the 1973 and 1979 energy crises, they contributed to fears of an economic recession similar to that of the early 1980s. It is generally agreed that the main reason for the price spike in 2005–2008 was strong demand pressure. For example, global consumption of oil rose from 30 billion barrels (4.8×10^9 m3) in 2004 to 31 billion in 2005. The consumption rates were far above new discoveries in the period, which had fallen to only eight billion barrels of new oil reserves in new accumulations in 2004.
Possible consequences:
Oil price increases were partially fueled by reports that petroleum production is at or near full capacity. In June 2005, OPEC stated that they would 'struggle' to pump enough oil to meet pricing pressures for the fourth quarter of that year. From 2007 to 2008, the decline of the U.S. dollar against other significant currencies was also considered a significant reason for the oil price increases, as the dollar lost approximately 14% of its value against the euro from May 2007 to May 2008.
Possible consequences:
Besides supply and demand pressures, at times security related factors may have contributed to increases in prices, including the War on Terror, missile launches in North Korea, the Crisis between Israel and Lebanon, nuclear brinkmanship between the U.S. and Iran, and reports from the U.S. Department of Energy and others showing a decline in petroleum reserves.
Possible consequences:
More recently, between 2011 and 2014 the price of crude oil was relatively stable, fluctuating around US$100 per barrel. It dropped sharply in late 2014 to below US$70, where it remained for most of 2015. In early 2016 it traded at a low of US$27. The price drop has been attributed to both oversupply and reduced demand as a result of the slowing global economy, OPEC's reluctance to concede market share, and a stronger US dollar. These factors may be exacerbated by a combination of monetary policy and the increased debt of oil producers, who may increase production to maintain liquidity. The onset of the COVID-19 pandemic caused oil prices to decline from approximately 60 dollars a barrel to 20 between January and April 2020, with market prices briefly becoming negative. On 22 April 2020, North Dakota crude oil spot prices were −$46.75 for Williston Sweet and −$51.31 for Williston Sour (oilprice charts).
Possible consequences:
At the same time, WTI itself traded at $6.46; the lowest WTI futures price, reached on 20 April 2020, was about −$37 per barrel. In 2021, record-high energy prices were driven by a global surge in demand as the world emerged from the economic recession caused by COVID-19, particularly due to strong energy demand in Asia. The price of oil reached about $80 by October 2021, the highest since 2014.
Possible consequences:
Effects of historical oil price rises: In the past, sudden increases in the price of oil have led to economic recessions, such as the 1973 and 1979 energy crises. The effect that an increased price of oil has on an economy is known as a price shock. In many European countries, which have high taxes on fuels, such price shocks could potentially be mitigated somewhat by temporarily or permanently suspending the taxes as fuel costs rise. This method of softening price shocks is less useful in countries with much lower gas taxes, such as the United States. A baseline scenario for a recent IMF paper found that oil production growing at 0.8% (as opposed to a historical average of 1.8%) would result in a small reduction in economic growth of 0.2–0.4%. Researchers at the Stanford Energy Modeling Forum found that the economy can adjust to steady, gradual increases in the price of crude better than to wild lurches. Some economists predict that a substitution effect will spur demand for alternate energy sources, such as coal or liquefied natural gas. This substitution can be only temporary, as coal and natural gas are finite resources as well. Prior to the run-up in fuel prices, many motorists opted for larger, less fuel-efficient sport utility vehicles and full-sized pickups in the United States, Canada, and other countries. This trend has been reversing because of sustained high prices of fuel. The September 2005 sales data for all vehicle vendors indicated SUV sales dropped while small car sales increased. Hybrid and diesel vehicles also gained in popularity. The EIA published Household Vehicles Energy Use: Latest Data and Trends in November 2005, illustrating the steady increase in disposable income and the $20–30 per barrel price of oil in 2004. The report notes: "The average household spent $1,520 on fuel purchases for transport." According to CNBC, that expense climbed to $4,155 in 2011. In 2008, a report by Cambridge Energy Research Associates stated that 2007 had been the year of peak gasoline usage in the United States, and that record energy prices would cause an "enduring shift" in energy consumption practices. The total miles driven in the U.S. peaked in 2006. The Export Land Model states that after peak oil, petroleum-exporting countries will be forced to reduce their exports more quickly than their production declines, because of internal demand growth. Countries that rely on imported petroleum will therefore be affected earlier and more dramatically than exporting countries. Mexico is already in this situation. Internal consumption grew by 5.9% in 2006 in the five biggest exporting countries, and their exports declined by over 3%. It was estimated that by 2010 internal demand would decrease worldwide exports by 2,500,000 barrels per day (400,000 m3/d). Canadian economist Jeff Rubin has stated that high oil prices are likely to result in increased consumption in developed countries through partial manufacturing de-globalisation of trade. Manufacturing production would move closer to the end consumer to minimise transportation network costs, and therefore a demand decoupling from gross domestic product would occur. Higher oil prices would lead to increased freighting costs and, consequently, the manufacturing industry would move back to the developed countries, since freight costs would outweigh the current economic wage advantage of developing countries. Economic research carried out by the International Monetary Fund puts the overall price elasticity of demand for oil at −0.025 in the short term and −0.093 in the long term.
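To see what elasticities of that size imply, a constant-elasticity demand curve Q2 = Q1·(P2/P1)^ε gives a quick back-of-the-envelope translation from price changes to demand changes; the price-doubling scenario below is hypothetical:

```python
# Constant-elasticity demand: Q2 = Q1 * (P2 / P1) ** elasticity.
# Applying the IMF elasticities quoted above to a hypothetical doubling
# of the oil price (P2 / P1 = 2):
for label, eps in (("short term", -0.025), ("long term", -0.093)):
    change = 2.0 ** eps - 1.0
    print(f"{label} (elasticity {eps}): demand changes by {change:+.1%}")
# short term: about -1.7%; long term: about -6.2%
```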
Possible consequences:
Agricultural effects and population limits Since supplies of oil and gas are essential to modern agricultural techniques, a fall in global oil supplies could cause spiking food prices and unprecedented famine in the coming decades. The largest consumer of fossil fuels in modern agriculture is ammonia production (for fertilizer) via the Haber process, which is essential to high-yielding intensive agriculture. The specific fossil-fuel input to fertilizer production is primarily natural gas, used to provide hydrogen via steam reforming. Given sufficient supplies of renewable electricity, hydrogen can be generated without fossil fuels using methods such as electrolysis. For example, the Vemork hydroelectric plant in Norway used its surplus electricity output to generate renewable ammonia from 1911 to 1971. Iceland currently generates ammonia using the electrical output of its hydroelectric and geothermal power plants, because Iceland has those resources in abundance while having no domestic hydrocarbon resources and a high cost for importing natural gas.
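To see why the hydrogen input dominates this substitution question, a back-of-the-envelope stoichiometric sketch (standard atomic masses; figures are approximate and illustrative only):

```python
# Hydrogen required per tonne of ammonia (NH3), from stoichiometry alone.
# Whether that hydrogen comes from steam-reformed natural gas or from
# electrolysis is exactly the substitution discussed above.
M_N, M_H = 14.007, 1.008            # atomic masses, g/mol
m_nh3 = M_N + 3 * M_H               # molar mass of ammonia
h_fraction = (3 * M_H) / m_nh3      # mass fraction of hydrogen in NH3

print(f"NH3 molar mass: {m_nh3:.3f} g/mol")          # ~17.031 g/mol
print(f"H2 needed per tonne NH3: {1000 * h_fraction:.0f} kg")  # ~178 kg
```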
Possible consequences:
Long-term effects on lifestyle A majority of Americans live in suburbs, a type of low-density settlement designed around universal personal automobile use. Commentators such as James Howard Kunstler argue that the suburbs' reliance on the automobile is an unsustainable living arrangement. Peak oil would leave many Americans unable to afford petroleum-based fuel for their cars, forcing them to use other forms of transportation such as bicycles or electric vehicles. Additional options include remote work, moving to rural areas, or moving to higher-density areas, where walking and public transportation are more viable. In the latter two cases, suburbs may become the "slums of the future." The issue of petroleum supply and demand is also a concern for growing cities in developing countries, where urban areas are expected to absorb most of the world's projected 2.3 billion population increase by 2050; stressing the energy component of future development plans is seen as an important goal.

Rising oil prices, if they occur, would also affect the cost of food, heating, and electricity. Considerable stress would then be put on middle- and low-income families as economies contract from the decline in discretionary funds and employment rates fall. The Hirsch/US DoE report concludes that "without timely mitigation, world supply/demand balance will be achieved through massive demand destruction (shortages), accompanied by huge oil price increases, both of which would create a long period of significant economic hardship worldwide."

Methods that have been suggested for mitigating these urban and suburban issues include the use of non-petroleum vehicles such as electric cars, battery electric vehicles, transit-oriented development, carfree cities, bicycles, new trains, new pedestrianism, smart growth, shared space, urban consolidation, urban villages, and New Urbanism.
Possible consequences:
An extensive 2009 report on the effects of compact development by the United States National Research Council of the Academy of Sciences, commissioned by the United States Congress, stated six main findings. First, that compact development is likely to reduce "vehicle miles traveled" (VMT) throughout the country. Second, that doubling residential density in a given area could reduce VMT by as much as 25% if coupled with measures such as increased employment density and improved public transportation. Third, that higher-density, mixed-use developments would produce both direct reductions in CO2 emissions (from less driving) and indirect reductions (for example, from lower amounts of materials used per housing unit, higher-efficiency climate control, longer vehicle lifespans, and higher-efficiency delivery of goods and services). Fourth, that although short-term reductions in energy use and CO2 emissions would be modest, these reductions would become more significant over time. Fifth, that a major obstacle to more compact development in the United States is political resistance from local zoning regulators, which would hamper efforts by state and regional governments to participate in land-use planning. Sixth, that changes in development altering driving patterns and building efficiency would have various secondary costs and benefits that are difficult to quantify. The report recommends that policies supporting compact development (and especially its ability to reduce driving, energy use, and CO2 emissions) should be encouraged.
Possible consequences:
An economic theory that has been proposed as a remedy is the introduction of a steady-state economy. Such a system could include a tax shift from income to depleting natural resources (and pollution), as well as limits on advertising that stimulates demand and population growth. It could also include policies that move away from globalization and toward localization, to conserve energy resources, provide local jobs, and maintain local decision-making authority. Zoning policies could be adjusted to promote resource conservation and eliminate sprawl. Since aviation relies mainly on jet fuels derived from crude oil, commercial aviation has been predicted to go into decline alongside a global decline in oil production.
Possible consequences:
Mitigation To avoid the serious social and economic implications a global decline in oil production could entail, the Hirsch report emphasized the need to find alternatives at least ten to twenty years before the peak, and to phase out the use of petroleum over that time. This was similar to a plan proposed for Sweden that same year. Such mitigation could include energy conservation, fuel substitution, and the use of unconventional oil. The timing of mitigation responses is critical: premature initiation would be undesirable, but initiating too late could be more costly and have worse economic consequences.

Global annual crude oil production (including shale oil, oil sands, lease condensate and gas plant condensate but excluding liquid fuels from other sources such as natural gas liquids, biomass, and derivatives of coal and natural gas) increased from 75.86 million barrels (12.1 million cubic metres) per day in 2008 to 83.16 million barrels (13.2 million m3) per day in 2018, a modest annual growth rate of about 1%. Many developed countries have already reduced their consumption of petroleum products derived from crude oil, while crude oil consumption in oil-exporting countries (OPEC and non-OPEC), China, and India has increased over the last decade. The two major consumers, China (second globally) and India (third globally), are taking steps to hold down their crude oil consumption by encouraging renewable energy options. These are signs that a peak in oil production driven by declining consumption (rather than declining availability), brought on by cheaper alternative energy sources, may arrive within a few years. In 2020, crude oil consumption decreased from the previous year because of the COVID-19 pandemic.
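The quoted growth rate can be checked from the two endpoints as a compound annual growth rate; a minimal sketch using the production figures above:

```python
# Compound annual growth rate (CAGR) of crude oil production, 2008-2018,
# from the per-day figures quoted above.
start, end, years = 75.86, 83.16, 10  # million barrels per day

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.2%}")  # ~0.92% per year, i.e. roughly 1%
```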
Possible consequences:
Positive aspects Permaculture sees peak oil as holding tremendous potential for positive change, assuming countries act with foresight. The rebuilding of local food networks, energy production, and the general implementation of "energy descent culture" are argued to be ethical responses to the acknowledgment of finite fossil resources. Majorca is an island currently diversifying its energy supply from fossil fuels to alternative sources and looking back at traditional construction and permaculture methods. The Transition Towns movement, started in Totnes, Devon and spread internationally by "The Transition Handbook" (Rob Hopkins) and the Transition Network, sees the restructuring of society for more local resilience and ecological stewardship as a natural response to the combination of peak oil and climate change.
Criticisms:
General arguments The theory of peak oil is controversial and became an issue of political debate in the US and Europe in the mid-2000s. Critics argued that newly found oil reserves forestalled a peak oil event. Some argued that oil production from new oil reserves and existing fields would continue to increase at a rate that outpaces demand, until alternative energy sources for current fossil fuel dependence are found. In 2015, analysts in the petroleum and financial industries claimed that the "age of oil" had already reached a new stage, in which the excess supply that appeared in late 2014 might continue.
Criticisms:
A consensus was emerging that parties to an international agreement would introduce measures to constrain the combustion of hydrocarbons in an effort to limit global temperature rise to the nominal 2 °C that scientists predicted would limit environmental harm to tolerable levels. Another argument against the peak oil theory is reduced demand from various options and technologies substituting for oil. US federal funding to develop algae fuels has increased since 2000 due to rising fuel prices. Many other projects are being funded in Australia, New Zealand, Europe, the Middle East, and elsewhere, and private companies are entering the field.
Criticisms:
Oil industry representatives John Hofmeister, president of Royal Dutch Shell's US operations, while agreeing that conventional oil production would soon start to decline, criticized the analysis of peak oil theory by Matthew Simmons for being "overly focused on a single country: Saudi Arabia, the world's largest exporter and OPEC swing producer." Hofmeister pointed to the large reserves at the US outer continental shelf, which held an estimated 100 billion barrels (16×10^9 m3) of oil and natural gas. However, only 15% of those reserves were currently exploitable, a good part of that off the coasts of Texas, Louisiana, Mississippi, and Alabama.

Hofmeister also pointed to unconventional sources of oil such as the oil sands of Canada, where Shell was active. The Canadian oil sands, a natural combination of sand, water, and oil found largely in Alberta and Saskatchewan, are believed to contain one trillion barrels of oil. Another trillion barrels are also said to be trapped in rocks in Colorado, Utah, and Wyoming, in the form of oil shale. Environmentalists argue that major environmental, social, and economic obstacles would make extracting oil from these areas excessively difficult. Hofmeister argued that if oil companies were allowed to drill more in the United States, enough to produce another 2 million barrels per day (320×10^3 m3/d), oil and gas prices would not be as high as they were in the late 2000s. He thought in 2008 that high energy prices would cause social unrest similar to the 1992 Rodney King riots.

In 2009, Dr. Christof Rühl, chief economist of BP, argued against the peak oil hypothesis: "Physical peak oil, which I have no reason to accept as a valid statement either on theoretical, scientific or ideological grounds, would be insensitive to prices. ... In fact the whole hypothesis of peak oil – which is that there is a certain amount of oil in the ground, consumed at a certain rate, and then it's finished – does not react to anything ... Therefore there will never be a moment when the world runs out of oil because there will always be a price at which the last drop of oil can clear the market. And you can turn anything into oil if you are willing to pay the financial and environmental price ... (Global warming) is likely to be more of a natural limit than all these peak oil theories combined. ... Peak oil has been predicted for 150 years. It has never happened, and it will stay this way." Rühl argued that the main limitations on oil availability are "above ground" factors such as the availability of staff, expertise, technology, investment security, funds, and global warming, and that the oil question is about price, not physical availability.
Criticisms:
In 2008, Daniel Yergin of CERA suggested that a recent high-price phase might add to a future demise of the oil industry, not through complete exhaustion of resources or an apocalyptic shock, but through the timely and smooth setup of alternatives. Yergin went on to say: "This is the fifth time that the world is said to be running out of oil. Each time, whether it was the 'gasoline famine' at the end of WWI or the 'permanent shortage' of the 1970s, technology and the opening of new frontier areas have banished the spectre of decline. There's no reason to think that technology is finished this time." In 2006, Clive Mather, CEO of Shell Canada, said the Earth's supply of bitumen hydrocarbons was "almost infinite", referring to hydrocarbons in oil sands.
Criticisms:
Others In 2006, attorney and mechanical engineer Peter W. Huber asserted that the world was merely running out of "cheap oil", explaining that as oil prices rise, unconventional sources become economically viable. He predicted that "[t]he tar sands of Alberta alone contain enough hydrocarbon to fuel the entire planet for over 100 years." Environmental journalist George Monbiot responded to a 2012 report by Leonardo Maugeri by suggesting that there is more than enough oil (from unconventional sources) to "deep-fry" the world with climate change. Stephen Sorrell, senior lecturer in Science and Technology Policy Research at the Sussex Energy Group and lead author of the UKERC Global Oil Depletion report, and Christophe McGlade, doctoral researcher at the UCL Energy Institute, have criticized Maugeri's assumptions about decline rates.
Peakists:
In the first decade of the twenty-first century, primarily in the United States, widespread belief in the imminence of peak oil led to the formation of a large subculture of "peakists" who transformed their lives in response to their belief in, and expectation of, supply-driven (i.e., resource-constrained) peak oil. They met at national and regional conferences, and they discussed and planned for life after oil long before this became a regular topic of discussion with regard to climate change.
Peakists:
Researchers estimate that at the peak of this subculture there were over 100,000 hard-core "peakists" in the United States. The popularity of the subculture started to diminish around 2013, as the predicted dramatic peak did not arrive and "unconventional" fossil fuels (such as tar sands and natural gas via hydrofracking) seemed to pick up the slack amid declines in "conventional" petroleum. The decline of interest in peak oil predated empirical evidence that peak oil had not happened at the time forecast by Campbell.
Further information:
Books Aleklett, Kjell (2012). Peeking at Peak Oil. Springer Science. ISBN 978-1-4614-3423-8.
Campbell, Colin J. (2004). The Essence of Oil & Gas Depletion. Multi-Science Publishing. ISBN 978-0-906522-19-6.
Campbell, Colin J. (2005). Oil Crisis. Multi-Science Publishing.
Campbell, Colin J. (2013). Campbell's Atlas of Oil and Gas Depletion. ISBN 978-1-4614-3576-1.
Deffeyes, Kenneth S. (2002). Hubbert's Peak: The Impending World Oil Shortage. Princeton University Press. ISBN 978-0-691-09086-3.
Deffeyes, Kenneth S. (2005). Beyond Oil: The View from Hubbert's Peak. Hill and Wang. ISBN 978-0-8090-2956-3.
Goodstein, David (2005). Out of Gas: The End of the Age of Oil. W. W. Norton. ISBN 978-0-393-05857-4.
Greer, John M. (2008). The Long Descent: A User's Guide to the End of the Industrial Age. New Society Publishers. ISBN 978-0-865-71609-4.
Greer, John M. (2013). Not the Future We Ordered: The Psychology of Peak Oil and the Myth of Eternal Progress. Karnac Books. ISBN 978-1-78049-088-5.
Herold, D. M. (2012). Peak Oil. Herstellung und Verlag. ISBN 978-3-8448-0097-5.
Heinberg, Richard (2003). The Party's Over: Oil, War, and the Fate of Industrial Societies. New Society Publishers. ISBN 978-0-86571-482-3.
Heinberg, Richard (2004). Power Down: Options and Actions for a Post-Carbon World. New Society Publishers. ISBN 978-0-86571-510-3.
Heinberg, Richard (2006). The Oil Depletion Protocol: A Plan to Avert Oil Wars, Terrorism and Economic Collapse. New Society Publishers. ISBN 978-0-86571-563-9.
Heinberg, Richard & Lerch, Daniel (2010). The Post Carbon Reader: Managing the 21st Century's Sustainability Crises. Watershed Media. ISBN 978-0-9709500-6-2.
Herberg, Mikkal (2014). Energy Security and the Asia-Pacific: Course Reader. United States: The National Bureau of Asian Research.
Huber, Peter (2005). The Bottomless Well. Basic Books. ISBN 978-0-465-03116-0.
Kunstler, James H. (2005). The Long Emergency: Surviving the End of the Oil Age, Climate Change, and Other Converging Catastrophes. Atlantic Monthly Press. ISBN 978-0-87113-888-0.
Leggett, Jeremy K. (2005). The Empty Tank: Oil, Gas, Hot Air, and the Coming Financial Catastrophe. Random House. ISBN 978-1-4000-6527-1.
Leggett, Jeremy K. (2005). Half Gone: Oil, Gas, Hot Air and the Global Energy Crisis. Portobello Books. ISBN 978-1-84627-004-8.
Lovins, Amory; et al. (2005). Winning the Oil Endgame: Innovation for Profit, Jobs and Security. Rocky Mountain Institute. ISBN 978-1-881071-10-5.
Pfeiffer, Dale Allen (2004). The End of the Oil Age. Lulu Press. ISBN 978-1-4116-0629-6.
Newman, Sheila (2008). The Final Energy Crisis (2nd ed.). Pluto Press. ISBN 978-0-7453-2717-4. OCLC 228370383.
Roberts, Paul (2004). The End of Oil: On the Edge of a Perilous New World. Boston: Houghton Mifflin. ISBN 978-0-618-23977-1.
Ruppert, Michael C. (2005). Crossing the Rubicon: The Decline of the American Empire at the End of the Age of Oil. New Society. ISBN 978-0-86571-540-0.
Simmons, Matthew R. (2005). Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy. Hoboken, N.J.: Wiley & Sons. ISBN 978-0-471-73876-3.
Simon, Julian L. (1998). The Ultimate Resource. Princeton University Press. ISBN 978-0-691-00381-8.
Schneider-Mayerson, Matthew (2015). Peak Oil: Apocalyptic Environmentalism and Libertarian Political Culture. University of Chicago Press. ISBN 978-0-226-28543-6.
Stansberry, Mark A.; Reimbold, Jason (2008). The Braking Point. Hawk Publishing. ISBN 978-1-930709-67-6.
Tertzakian, Peter (2006). A Thousand Barrels a Second. McGraw-Hill. ISBN 978-0-07-146874-9.
Vassiliou, Marius (2009). Historical Dictionary of the Petroleum Industry. Scarecrow Press (Rowman & Littlefield). ISBN 978-0-8108-5993-7.
Articles Benner, Katie (7 December 2005). "Lawmakers: Will we run out of oil?". CNN.
Benner, Katie (3 November 2004). "Oil: Is the end at hand?". CNN.
Campbell, Colin J.; Laherrère, Jean (1998). "The end of cheap oil". Scientific American. 278 (3): 78–83. Bibcode:1998SciAm.278c..78C. doi:10.1038/scientificamerican0398-78. Archived from the original on 27 September 2007. Retrieved 2 May 2007.
De Young, R. (2014). "Some behavioral aspects of energy descent". Frontiers in Psychology. 5 (1255).
Porter, Adam (10 June 2005). "'Peak oil' enters mainstream debate". BBC News. Retrieved 26 March 2010.
Schwartz, Ariel (9 February 2011). "WikiLeaks May Have Just Confirmed That Peak Oil Is Imminent". Fast Company.
Schneider-Mayerson, Matthew (2013). "From politics to prophecy: environmental quiescence and the peak-oil movement" (PDF). Environmental Politics.
Further information:
Documentary films The End of Suburbia: Oil Depletion and the Collapse of the American Dream (2004)
A Crude Awakening: The Oil Crash (2006)
The Power of Community: How Cuba Survived Peak Oil (2006)
Crude Impact (2006)
What a Way to Go: Life at the End of Empire (2007)
Crude (2007), an Australian Broadcasting Corporation documentary [3 × 30 minutes] about the formation of oil and humanity's use of it
PetroApocalypse Now? (2008)
Blind Spot (2008)
GasHole (2008)
Collapse (2009)
Peak Oil: A Staggering Challenge to "Business As Usual"
Podcasts Saudi America? – The U.S. Oil Boom in Perspective
KunstlerCast 275 — Art Berman Clarifies Whatever Happened to Peak Oil
**Experimental models of Alzheimer's disease**
Experimental models of Alzheimer's disease:
Experimental models of Alzheimer's disease are organism or cellular models used in research to investigate biological questions about Alzheimer's disease as well as to develop and test novel therapeutic treatments. Alzheimer's disease is a progressive neurodegenerative disorder associated with aging, which occurs both sporadically (the most common form at diagnosis) and as a result of familially inherited mutations in genes associated with Alzheimer's pathology. Common symptoms associated with Alzheimer's disease include memory loss, confusion, and mood changes. As Alzheimer's disease affects around 55 million patients globally and accounts for approximately 60-70% of all dementia cases, billions of dollars are spent yearly on research to better understand the biological mechanisms of the disease and to develop effective therapeutic treatments. Researchers commonly use post-mortem human tissue or experimental models to conduct experiments relating to Alzheimer's disease. Experimental models are particularly useful because they allow complex manipulation of biological systems to elucidate questions about Alzheimer's disease without the risk of harming humans. These models often carry genetic modifications that make them more representative of human Alzheimer's disease and its associated pathology: extracellular amyloid-beta (Aβ) plaques and intracellular neurofibrillary tangles (NFTs). Current methods used by researchers are traditional 2D cell culture, 3D cell culture, microphysiological systems, and animal models.
Cell culture models:
2D cell culture Traditional two-dimensional cell culture is a useful experimental model of Alzheimer's disease for conducting experiments in a high-throughput manner. These cultures grow on a dish or flask in a monolayer and can be made up of a single cell type or multiple cell types. 2D cultures often have difficulty producing insoluble amyloid-β plaques even when they are able to secrete the amyloid-β peptide. Common types of 2D cell culture used to model Alzheimer's disease are immortalized cell lines, primary neuron cultures, and patient-derived induced pluripotent stem cells (iPSCs).
Cell culture models:
Immortalized cell lines Immortalized cell lines are cells from an organism that have been genetically altered to proliferate in vitro, making them a useful tool for researchers because they divide quickly, allowing for high-throughput experimentation. The immortalizing mutations can arise naturally, as in cancer cells, or be introduced by researchers. Common immortalized cell lines used to study Alzheimer's disease include human embryonic kidney 293 (HEK293), human neuroblastoma (SH-SY5Y), human neuroglioma (H4), human embryonic mesencephalic (LUHMES), human neural progenitor (ReN), and pheochromocytoma (PC12) cells. These cells are commercially available, relatively inexpensive, and easy to culture and maintain. Pro-death compounds can be applied to these models to induce Alzheimer's-related cell death; such compounds include amyloid-β 42, tau protein, glutamic acid, and oxidative/pro-inflammatory compounds.
Cell culture models:
Primary neuron culture Primary neuron cultures are generated from embryonic or postnatal rodent brain tissue and cultured on plates. Common brain regions used to study Alzheimer's disease include the hippocampus, cortex, and amygdala; however, any brain region can be cultured. The method requires dissection of the desired brain region from rodent tissue followed by digestion, dissociation, and plating steps. Because these cultures are derived directly from rodent brain tissue, they morphologically and physiologically resemble human brain cells, contain multiple neuronal cell types, and do not proliferate. When initially cultured, the cells are spherical; over time they begin to form axons and dendrites and eventually develop synaptic connections.
Cell culture models:
Induced pluripotent stem cells Patient-derived induced pluripotent stem cell (iPSC) lines are unique in that differentiated somatic cells are taken from Alzheimer's disease patients and reverted into pluripotent stem cells via an ectopic transcriptional "Yamanaka" factor cocktail. These stem cells can then be directed to differentiate into many cell types, including neurons, astrocytes, microglia, oligodendrocytes, pericytes, and endothelial cells. This allows models to be generated both from early-onset familial Alzheimer's disease (FAD) patients with mutations in the APP, PSEN1, or PSEN2 genes and from late-onset/sporadic Alzheimer's disease (SAD) patients, a population that is not fully replicated in animal models. As SAD is the most commonly diagnosed form of AD, this makes iPSCs key tools for understanding this form of the disease. These cells can also be purchased commercially. CRISPR-Cas9 technology can be used alongside iPSCs to generate neurons carrying multiple FAD mutations. One major drawback of these models is that they can inadequately resemble mature neurons, and they are more expensive and difficult to maintain. iPSCs have also been shown to exhibit genomic instability and to develop additional mutations when passaged (harvested and reseeded into daughter cultures) numerous times, posing both safety concerns for patient use and potential reproducibility problems in experimental studies. Because of the nature of reprogramming procedures, iPSCs lose the cellular and epigenetic signatures acquired through aging and environmental factors, limiting their ability to recapitulate diseases associated with aging, like Alzheimer's disease. While these cultures have some limitations, many fundamental discoveries about Alzheimer's disease biology have been made using this model system.
Cell culture models:
3D organoid culture Three-dimensional organoid culture methods have become a popular way of recapitulating AD pathology in a more "brain-like" environment than traditional 2D culture, as they create an organized structure similar to that of the human cortex. This has proven effective specifically for modeling Alzheimer's disease, since 2D cultures tend to fail at producing insoluble amyloid-β while 3D culture models succeed. These models consist of multiple neuronal cell types co-cultured together in artificial matrices, allowing researchers to study how non-neuronal cells and neuroinflammation influence Alzheimer's disease pathogenesis. The cell types expressed in these models often include neurons, astrocytes, microglia, oligodendrocytes, epithelial cells, and endothelial cells. The organoids develop over many months before displaying Alzheimer's pathology and can be maintained for long periods. They can be derived from iPSCs or from immortalized undifferentiated cells and typically reach a diameter of several millimeters. 3D cultures can either be allowed to self-organize or be placed under guided formation, in which exogenous factors influence the differentiation pattern of the organoid. 3D culture methods have shown more robust amyloid-β aggregation, phosphorylated-tau accumulation, and endosome abnormalities than 2D cultures of the same cell lines, indicating accelerated pathology.
Cell culture models:
A common issue arising from the use of 3D cultures is the lack of vasculature within the organoid, leading to cell death and dysfunction in the inner layers. Current efforts focus on introducing endothelial cells into guided-formation cultures to create vascular systems and provide nutrient distribution to the deep layers. Self-organizing organoids also vary in the proportion and location of expressed cells, causing challenges for the reproducibility of experiments. More effort has been placed on guided-formation organoids to account for this problem; however, this method is more time-consuming and difficult to optimize. The ability of 3D organoid cultures to resemble aging phenotypes is also limited, as many organoid methods rely on iPSCs, which are more similar to prenatal brain cells because of reprogramming protocols. Researchers are currently investigating the transcriptional profiles shared by Alzheimer's disease and aging in order to reintroduce these landscapes into iPSCs for future biomedical research and therapeutic development.
Cell culture models:
Microphysiological systems Neuronal microphysiological systems, also referred to as a "brain-on-a-chip," combine 3D cultures with a microfluidics platform that circulates the media provided to the cultured cells. These devices are beneficial because they improve cell viability and better model physiological conditions by improving oxygen availability and nutrient delivery to the inner layers of 3D cultures. The systems additionally introduce physiological cues such as fluid shear stress, tension, and compression, which allow these in vitro conditions to better resemble the in vivo environment. Microphysiological systems have been shown to replicate amyloid-β aggregation, hyperphosphorylated tau, and neuroinflammation, as well as to display microglial recruitment, release of cytokines and chemokines, and neurotoxic microglial activation in response to more physiologically relevant cell-cell interactions. These systems can also be developed to incorporate brain endothelial cells that mimic the blood–brain barrier, making them extremely useful models for studying BBB dysfunction in Alzheimer's disease, screening novel therapeutics' potential to pass from the blood into the brain, and characterizing therapeutic pharmacokinetics and drug absorption, distribution, metabolism, elimination, and toxicity (ADMET) tendencies.
Animal models:
Rodents Rodent models of Alzheimer's disease are commonly used in research because rodents and humans share many of the same major brain regions and neurotransmitter systems. These models are small, easy to house, and breed well. Mice and rats on average live for about 2 years, a much shorter lifespan than humans, which presents both limitations and benefits, such as more rapid experiment completion. In order to recapitulate and accelerate human Alzheimer's disease pathology, scientists commonly introduce FAD-associated mutations. Genes commonly targeted for genetic engineering in animal models are APP, MAPT, PSEN1, PSEN2, and APOE. This gives the animal models a higher tendency to form amyloid-β plaques and/or neurofibrillary tangles, the two pathological hallmarks of Alzheimer's disease. The mutated genes can either be over-expressed (first-generation models) or expressed at endogenous levels (second-generation models) as a way of further replicating disease pathology. Scientists also over-express non-mutated human genes in the hope of seeing similar Alzheimer's disease pathology. These introduced mutations, or the over-expression of human Alzheimer's-associated genes, can additionally lead the animals to display cognitive impairment, deficits in long-term potentiation (LTP), synaptic loss, gliosis, and neuronal loss. As current models rely heavily on FAD mutations to induce Alzheimer's-like pathology, there is still no ideal model that fully replicates SAD (sporadic Alzheimer's disease), the most common type of diagnosis in patients. Common methods used to generate these lines are transgenes controlled by a specific promoter, Cre-Lox recombination, and the CRISPR-Cas9 system. Scientists can also use injection methods, such as intracerebroventricular, intravenous, or intrahippocampal injection, to make wild-type rodents display Alzheimer's disease pathology. These rodent models are often used to test and develop drugs treating Alzheimer's disease before progressing to clinical trials in humans.
Animal models:
Mouse models
Rat models
Non-human primates Non-human primates can be used by researchers to study mechanisms of Alzheimer's disease as well as to develop therapeutics. Non-human primates are useful because their aging pattern is more similar to that of humans than the aging pattern of rodent models. During aging, non-human primates can display neuropathy, cognitive changes, and amyloid-β deposits similar to those of Alzheimer's disease. While these models are useful for studying the process of aging, they are not always exact models of Alzheimer's disease. Common non-human primates used in AD research include rhesus monkeys (Macaca mulatta), stump-tailed macaques (Macaca arctoides), mouse lemurs (Microcebus murinus), the common marmoset (Callithrix jacchus), and crab-eating macaques (Macaca fascicularis). These models can be studied either spontaneously or through artificial induction of Alzheimer's disease responses. Common techniques used to induce these models include cholinergic nervous system injury, amyloid-β injection, intrinsic formaldehyde, and streptozotocin (a methylnitrosourea sugar compound that induces diabetes).
Animal models:
Alternative organisms Alternative model organisms used to study Alzheimer's disease include zebrafish, Drosophila, and Caenorhabditis elegans.
**Peptidoglycan beta-N-acetylmuramidase**
Peptidoglycan beta-N-acetylmuramidase:
Peptidoglycan β-N-acetylmuramidase (EC 3.2.1.92, exo-β-N-acetylmuramidase, exo-β-acetylmuramidase, β-2-acetamido-3-O-(D-1-carboxyethyl)-2-deoxy-D-glucoside acetamidodeoxyglucohydrolase) is an enzyme with systematic name peptidoglycan β-N-acetylmuramoylexohydrolase. It catalyses the hydrolysis of terminal, non-reducing N-acetylmuramic residues.
**Diastole**
Diastole:
Diastole ( dy-AST-ə-lee) is the relaxed phase of the cardiac cycle when the chambers of the heart are re-filling with blood. The contrasting phase is systole when the heart chambers are contracting. Atrial diastole is the relaxing of the atria, and ventricular diastole the relaxing of the ventricles.
The term originates from the Greek word διαστολή (diastolē), meaning "dilation", from διά (diá, "apart") + στέλλειν (stéllein, "to send").
Role in cardiac cycle:
A typical heart rate is 75 beats per minute (bpm), which means that the cardiac cycle producing one heartbeat lasts less than one second. The cycle requires 0.3 s of ventricular systole (contraction), pumping blood to all body systems from the two ventricles, and 0.5 s of diastole (dilation), re-filling the four chambers of the heart, for a total of 0.8 s to complete the cycle.
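The arithmetic behind these figures is simply that one cycle lasts 60 seconds divided by the heart rate; a minimal check of the numbers above:

```python
# Duration of one cardiac cycle at a given heart rate, and the
# systole/diastole split quoted above (0.3 s / 0.5 s at 75 bpm).
heart_rate_bpm = 75
cycle_s = 60 / heart_rate_bpm          # 0.8 s per beat

systole_s, diastole_s = 0.3, 0.5
assert abs(systole_s + diastole_s - cycle_s) < 1e-9
print(f"cycle: {cycle_s} s = {systole_s} s systole + {diastole_s} s diastole")
```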
Role in cardiac cycle:
Early ventricular diastole During early ventricular diastole, pressure in the two ventricles begins to drop from the peak reached during systole. When the pressure in the left ventricle falls below that in the left atrium, the mitral valve opens due to the negative pressure differential (suction) between the two chambers. The open mitral valve allows blood in the atrium (accumulated during atrial diastole) to flow into the ventricle. Likewise, the same phenomenon occurs simultaneously in the right ventricle and right atrium through the tricuspid valve.
Role in cardiac cycle:
The ventricular filling flow (flow from the atria into the ventricles) has an early (E) diastolic component caused by ventricular suction and a late component created by atrial systole (A). The E/A ratio is used as a diagnostic measure: a diminished ratio indicates probable diastolic dysfunction, though it should be interpreted together with other clinical characteristics rather than by itself.
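Purely as an illustration of how the ratio is formed, the sketch below divides the early peak filling velocity by the atrial peak velocity; the velocities and the cutoff are invented for demonstration and are not diagnostic criteria.

```python
# Hypothetical Doppler peak filling velocities (cm/s); illustrative only,
# not clinical advice or real reference values.
e_wave, a_wave = 60.0, 75.0

ratio = e_wave / a_wave
print(f"E/A = {ratio:.2f}")
if ratio < 1.0:
    print("Reduced E/A: consistent with possible diastolic dysfunction, "
          "to be weighed alongside other clinical findings")
```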
Role in cardiac cycle:
Late ventricular diastole Early diastole is a suction mechanism between the atrial and ventricular chambers. Then, in late ventricular diastole, the two atrial chambers contract (atrial systole), causing blood pressure in both atria to increase and forcing additional blood flow into the ventricles. This beginning of atrial systole is known as the atrial kick; see the Wiggers diagram. The atrial kick supplies the smaller share of filling during the cardiac cycle, as about 80 percent of the collected blood volume flows into the ventricles during the earlier, active suction period.
Role in cardiac cycle:
Atrial diastole At the beginning of the cardiac cycle the atria and the ventricles are synchronously approaching and retreating from relaxation and dilation, or diastole. The atria fill with separate blood volumes returning to the right atrium (from the venae cavae) and to the left atrium (from the lungs). After chamber and back pressures equalize, the mitral and tricuspid valves open, and the returning blood flows through the atria into the ventricles. When the ventricles have completed most of their filling, the atria begin to contract (atrial systole), forcing blood under pressure into the ventricles. The ventricles then start to contract, and as pressures within the ventricles rise, the mitral and tricuspid valves close, producing the first heart sound (S1) as heard with a stethoscope.
Role in cardiac cycle:
As pressures within the ventricles continue to rise, they exceed the "back pressures" in the aorta and the pulmonary trunk; the aortic and pulmonary valves (the semilunar valves) open, and a defined fraction of the blood within the heart is ejected into the aorta and pulmonary trunk. Ejection of blood from the heart is known as systole. Ejection causes pressure within the ventricles to fall, and, simultaneously, the atria begin to refill (atrial diastole). Finally, pressures within the ventricles fall below the back pressures in the aorta and the pulmonary arteries, and the semilunar valves close. Closure of these valves gives the second heart sound (S2). The ventricles then start to relax, the mitral and tricuspid valves begin to open, and the cycle begins again.

In summary, when the ventricles are in systole and contracting, the atria are relaxed and collecting returning blood. When, in late diastole, the ventricles become fully dilated (measured in imaging as LVEDV and RVEDV), the atria begin to contract, pumping blood to the ventricles. The atria feed a steady supply of blood to the ventricles, serving as a reservoir and ensuring that these pumps never run dry. This coordination ensures that blood is pumped and circulated efficiently throughout the body.
Clinical notation:
Blood pressure is usually written with the systolic pressure expressed over the diastolic pressure or separated by a slash, for example, 120/80 mmHg. This clinical notation is not a mathematical fraction or ratio, nor a display of a numerator over a denominator; rather, it is a medical notation showing the two clinically significant pressures involved. It is often followed by a third value, the heart rate in beats per minute. Mean blood pressure is also an important determinant in people who have had certain medical interventions, such as left ventricular assist devices (LVADs) and hemodialysis, that replace pulsatile flow with continuous blood flow.
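Mean arterial pressure is commonly estimated at the bedside with a weighted formula that counts diastole more heavily than systole, reflecting the longer time the cycle spends in diastole. A minimal sketch using the 120/80 mmHg example above; the formula is the standard textbook approximation, not taken from this article:

```python
# Mean arterial pressure (MAP) from the usual bedside approximation
# MAP ~ diastolic + (systolic - diastolic) / 3, which weights diastole
# more because the heart spends more of each cycle in it.
systolic, diastolic = 120, 80  # mmHg, the example reading above

map_mmhg = diastolic + (systolic - diastolic) / 3
print(f"MAP ~ {map_mmhg:.0f} mmHg")  # ~93 mmHg
```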
Diagnostic value:
Examining diastolic function during a cardiac stress test is a good way to test for heart failure with preserved ejection fraction.
Effects of impaired diastolic function:
Brain natriuretic peptide (BNP) is a cardiac neurohormone secreted from ventricular myocytes (ventricular muscle cells) at the end of diastole, in response to the normal, or sub-normal, stretching of cardiomyocytes (heart muscle cells). Elevated levels of BNP indicate excessive natriuresis (excretion of sodium into the urine) and a decline of ventricular function, especially during diastole. Increased BNP concentrations have been found in patients with diastolic heart failure.

Impaired diastolic function can result from decreased compliance of the ventricular myocytes, and thus of the ventricles, meaning the heart muscle does not stretch as much as needed during filling. This results in a reduced end-diastolic volume (EDV), and, according to the Frank-Starling mechanism, a reduced EDV leads to a reduced stroke volume and thus a reduced cardiac output. Over time, decreased cardiac output diminishes the ability of the heart to circulate blood efficiently throughout the body. Degradation of compliance in the myocardium is a natural consequence of aging.
**Theory-theory**
Theory-theory:
The theory-theory (or 'theory theory') is a scientific theory relating to the human development of understanding about the outside world. It asserts that individuals hold a basic or 'naïve' theory of psychology ("folk psychology") to infer the mental states of others, such as their beliefs, desires, or emotions. This information is used to understand the intentions behind a person's actions or to predict future behavior. The term 'perspective taking' is sometimes used to describe how one makes inferences about another person's inner state using theoretical knowledge about the other's situation. This approach has become popular with psychologists as it gives a basis from which to explore human social understanding. Beginning in the mid-1980s, several influential developmental psychologists began advocating the theory theory: the view that humans learn through a process of theory revision closely resembling the way scientists propose and revise theories. Children observe the world and, in doing so, gather data about the world's true structure. As more data accumulate, children can revise their naive theories accordingly. Children can also use these theories about the world's causal structure to make predictions, and possibly even test them. This concept is described as the 'child scientist' theory, proposing that a series of personal scientific revolutions is required for the development of theories about the outside world, including the social world.
Theory-theory:
In recent years, proponents of Bayesian learning have begun describing the theory theory in a precise, mathematical way. The concept of Bayesian learning is rooted in the assumption that children and adults learn through a process of theory revision; that is, they hold prior beliefs about the world but, when receiving conflicting data, may revise these beliefs depending upon their strength.
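A minimal sketch of this Bayesian reading of theory revision, with invented numbers: a learner holds a strong prior for one "theory" of a coin, then revises that belief via Bayes' rule as conflicting observations accumulate.

```python
# Bayesian belief revision over two candidate theories of a coin:
# "fair" (P(heads) = 0.5) vs "biased" (P(heads) = 0.8). A strong prior
# favouring "fair" is revised as conflicting data (mostly heads) arrive.
prior = {"fair": 0.9, "biased": 0.1}            # strength of prior belief
likelihood_heads = {"fair": 0.5, "biased": 0.8}

observations = ["H"] * 9 + ["T"]                 # invented data

posterior = dict(prior)
for obs in observations:
    for theory in posterior:
        p = likelihood_heads[theory]
        posterior[theory] *= p if obs == "H" else (1 - p)

total = sum(posterior.values())                  # normalize to sum to 1
posterior = {t: v / total for t, v in posterior.items()}
print(posterior)  # the posterior now favours "biased" despite the prior
```

The prior's strength matters: with fewer conflicting observations, the learner's original theory survives, which mirrors the claim that beliefs are revised "depending upon their strength".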
Child development:
Theory-theory states that children naturally attempt to construct theories to explain their observations. As all humans do, children seek explanations that help them understand their surroundings. They learn through their own experiences as well as through observing others' actions and behaviors. Through their growth and development, children continue to form intuitive theories, revising and altering them as they come across new results and observations. Several developmentalists have researched the progression of these theories, mapping out when children start to form theories about certain subjects, such as the biological and physical world, social behaviors, and others' thoughts and minds ("theory of mind"), although controversy remains over when these shifts in theory formation occur. As part of their investigative process, children often ask questions, frequently posing "Why?" to adults, seeking not a technical and scientific explanation but rather to investigate how the concept in question relates to themselves, as part of their egocentric view. In a study in which Mexican-American mothers were interviewed over a two-week period about the types of questions their preschool children ask, researchers discovered that the children asked more about biology and social behaviors than about nonliving objects and artifacts. Their questions were mostly ambiguous, leaving it unclear whether they desired an explanation of purpose or of cause. Although parents usually answer with a causal explanation, some children find the answers inadequate for their understanding and, as a result, begin to create their own theories, which is particularly evident in children's understanding of religion. This theory also plays a part in Vygotsky's social learning theory, also called modeling. Vygotsky claims that humans, as social beings, learn and develop by observing others' behavior and imitating them. In this process of social learning, prior to imitation, children first pose inquiries and investigate why adults act and behave in a particular way. Afterwards, if the adult succeeds at the task, the child will likely copy the adult, but if the adult fails, the child will choose not to follow the example.
Comparison with other theories:
Theory of mind (ToM) Theory-theory is closely related to theory of mind (ToM), which concerns mental states of people, but differs from ToM in that the full scope of theory-theory also concerns mechanical devices or other objects, beyond just thinking about people and their viewpoints.
Simulation theory In the scientific debate on mind reading, theory-theory is often contrasted with simulation theory, an alternative theory which suggests that simulation, or cognitive empathy, is integral to our understanding of others.
**Cholesterol absorption inhibitor**
Cholesterol absorption inhibitor:
Cholesterol absorption inhibitors are a class of compounds that prevent the uptake of cholesterol from the small intestine into the circulatory system.
Most of these molecules are azetidinones and show no antibiotic activity. An example is ezetimibe (SCH 58235); another is SCH 48461. The "SCH" prefix stands for Schering-Plough, where these compounds were developed.
Phytosterols are also cholesterol absorption inhibitors.
Physiology:
There are two sources of cholesterol in the upper intestine: dietary (from food) and biliary (from bile). Dietary cholesterol, in the form of lipid emulsions, combines with bile salts to form bile salt micelles, from which cholesterol can then be absorbed by the intestinal enterocyte.
Once absorbed by the enterocyte, cholesterol is reassembled into intestinal lipoproteins called chylomicrons. These chylomicrons are then secreted into the lymphatics and circulated to the liver. These cholesterol particles are then secreted by the liver into the blood as VLDL particles, precursors to LDL.
As a class, cholesterol absorption inhibitors block the uptake of micellar cholesterol, thereby reducing the incorporation of cholesterol esters into chylomicron particles. By reducing the cholesterol content in chylomicrons and chylomicron remnants, cholesterol absorption inhibitors effectively reduce the amount of cholesterol that is delivered back to the liver.
The reduced delivery of cholesterol to the liver increases hepatic LDL receptor activity and thereby increases clearance of circulating LDL. The net result is a reduction in circulating LDL particles.
Importance:
Managing cholesterol at the site of absorption is an increasingly popular strategy in the treatment of hypercholesterolemia. Cholesterol absorption inhibitors are known to have a synergistic effect when combined with a class of antihyperlipidemics called statins to achieve an overall serum cholesterol target. For statin-resistant or statin-sensitive populations, which are characterized by low one-year compliance rates, such combination therapy is proving especially effective.
**Cheiralgia paresthetica**
Cheiralgia paresthetica:
Cheiralgia paraesthetica (Wartenberg's syndrome) is a neuropathy of the hand generally caused by compression of or trauma to the superficial branch of the radial nerve. The area affected is typically on the back or side of the hand at the base of the thumb, near the anatomical snuffbox, but may extend up the back of the thumb and index finger and across the back of the hand. Symptoms include numbness, tingling, burning, or pain. Since the nerve branch is sensory, there is no motor impairment. It may be distinguished from de Quervain syndrome because it is not dependent on motion of the hand or fingers.
Cause:
The most common cause is thought to be constriction of the wrist, as with a bracelet or watchband (hence reference to "wristwatch neuropathy"). It is especially associated with the use of handcuffs and is therefore commonly referred to as handcuff neuropathy. Other injuries or surgery in the wrist area can also lead to symptoms, including surgery for other syndromes such as de Quervain's. The exact etiology is unknown, as it is unclear whether direct pressure by the constricting item is alone responsible, or whether edema associated with the constriction also contributes.
Treatment and prognosis:
Symptoms commonly resolve on their own within several months when the constriction is removed; NSAIDs are commonly prescribed. In some cases surgical decompression is required. The efficacy of cortisone and laser treatment is disputed. Permanent damage is possible.
History:
This neuropathy was first identified by Robert Wartenberg in a 1932 paper. Recent studies have focused on handcuff injuries because of the legal liability implications, but these have been hampered by difficulties in follow-up, particularly as large percentages of the study participants were inebriated when they were injured. Diagnostically it is often subsumed into compression neuropathy of the radial nerve as a whole (e.g., ICD-9 354.3), but studies and papers continue to use the older term to distinguish it from more extensive neuropathies originating in the forearm.
**Power Plus Pro**
Power Plus Pro:
Power Plus Pro is a piece of financial software produced by Reuters Group in the form of an add-in for Microsoft Office Excel. A real-time data engine, it pushes new data into Excel when it receives notification of updates from a Reuters TIBCO bus or from Thomson Reuters' RMDS. This commonly involves live market data, such as stock prices, from a financial exchange. Using the add-in, Excel can also contribute information to the TIBCO bus or to RMDS; such information then becomes available to other permissioned users using the add-in on another computer or using Reuters 3000 Xtra stand-alone software. Power Plus Pro also has features that allow retrieval of historical market data.
Power Plus Pro:
A typical Excel formula call might read: `=RtGet("IDN_SELECTFEED","AAPL","LAST")`.
Power Plus Pro:
Competitors include Arcontech's Excelerator, MDX Technology's Connect, Vistasource's RTW, and the add-in associated with the Bloomberg Terminal (Bloomberg L.P.'s equivalent of Reuters 3000 Xtra). Arcontech Excelerator, MDXT Connect, and Vistasource RTW provide real-time data engines and Excel add-ins and will generally source data from Bloomberg, Reuters, and/or other market data sources. Because of the widespread use of Reuters Power Plus Pro, some competitors provide conversion utilities that convert Power Plus Pro functions to their own equivalents (e.g., Arcontech Excelerator) and/or emulation functionality so that Power Plus Pro functions can be used directly without modifications to spreadsheets (e.g., Arcontech Excelerator, MDXT Connect).
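As an illustration of what such a conversion utility does, the sketch below rewrites `RtGet()` calls inside spreadsheet formulas into a hypothetical target function. The name `VendorGet` and its argument order are assumptions made for demonstration, not any vendor's actual API; real converters are proprietary.

```python
import re

# Matches RtGet("source", "instrument", "field") with optional whitespace.
RTGET = re.compile(
    r'RtGet\(\s*"([^"]+)"\s*,\s*"([^"]+)"\s*,\s*"([^"]+)"\s*\)'
)

def convert_formula(formula: str) -> str:
    """Rewrite RtGet(...) calls into the hypothetical VendorGet(...)."""
    return RTGET.sub(r'VendorGet("\1", "\2", "\3")', formula)

print(convert_formula('=RtGet("IDN_SELECTFEED","AAPL","LAST")'))
# -> =VendorGet("IDN_SELECTFEED", "AAPL", "LAST")
```

An emulation approach, by contrast, leaves the spreadsheet untouched and registers its own implementation under the original function name.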
Power Plus Pro:
Power Plus Pro has been succeeded by Thomson Reuters' Eikon Excel which can convert Power Plus Pro spreadsheets to use new functions.
**Ethyl levulinate**
Ethyl levulinate:
Ethyl levulinate is an organic compound with the formula CH3C(O)CH2CH2C(O)OC2H5. It is an ester derived from the keto acid levulinic acid. Ethyl levulinate can also be obtained by reaction between ethanol and furfuryl alcohol. These two synthesis routes make ethyl levulinate a viable biofuel candidate, since both precursors can be obtained from biomass: levulinic acid from 6-carbon polymerized sugars such as cellulose, and furfural from 5-carbon polymerized sugars such as xylan and arabinan.
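As a quick arithmetic aside, the molar mass implied by that condensed formula (which totals C7H12O3) can be checked directly; the atomic masses are standard values and the calculation is illustrative:

```python
# Back-of-the-envelope check of ethyl levulinate's molar mass from its
# condensed formula CH3C(O)CH2CH2C(O)OC2H5, i.e. C7H12O3.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol
composition = {"C": 7, "H": 12, "O": 3}

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")  # ~144.17 g/mol
```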
**Face-off**
Face-off:
A face-off is the method used to begin and restart play after goals in some sports using sticks, primarily ice hockey, bandy, floorball, broomball, rinkball, and lacrosse.
During a face-off, two teams line up in opposition to each other, and the opposing players attempt to gain control of the puck or ball after it is dropped or otherwise placed between their sticks by an official.
Ice hockey:
Hockey face-offs (also called 'bully', and originally called 'puck-offs') are generally handled by centres, but are sometimes handled by wingers, and, rarely, by defensemen. One of the referees drops the puck at centre ice to start each period and following the scoring of a goal. The linesmen are responsible for all other face-offs.
Ice hockey:
One player from each team stands at the face-off spot (see below) to await the drop of the puck. All teammates must be lateral to or behind the player taking the face-off. Generally, the goal of the player taking the face-off is to draw the puck backward, toward teammates; however, they will occasionally attempt to shoot the puck forward, past the other team, usually to kill time when shorthanded, although shooting directly at the net is also possible: scoring a goal directly from a face-off, while rare, is not unheard of. Where the face-off occurs at one of the five face-off spots that have circles marked around them, only the two opposing players responsible for taking the face-off may be in the circle. A common formation, especially at centre ice, is for a centre to take the face-off, with the wings lateral to the centre on either side and the defensemen behind the player taking the face-off, one toward each side. This is not mandatory, however, and other formations are seen, especially where the face-off is at one of the four corner face-off spots.
Ice hockey:
Face-offs are typically conducted at designated places marked on the ice called face-off spots or dots. There are nine such spots: two in each attacking zone, two at each end of the neutral zone, and one in the centre of the rink. Face-offs did not always take place at the marked face-off spots. If a puck left the playing surface, for example, the face-off would take place wherever the puck was last played. On June 20, 2007, the NHL Board of Governors approved a change to NHL Rule 76.2, which governs face-off locations. The rule now requires that all face-offs take place at one of the nine face-off spots on the ice, regardless of what caused the stoppage of play. Rule 76.2 also dictates that, with some exceptions, a face-off following a penalty must occur at one of the two face-off dots in the offending team's end.

An official may remove the player taking the face-off if that player or any teammate attempts to gain an unfair advantage during the face-off (called a face-off violation). When a player is removed, one of the teammates not originally taking the face-off is required to take it. Common face-off violations include moving the stick before the puck is dropped, not placing the stick properly when requested to do so, not placing the body square to the face-off spot, and encroachment into the face-off circle by a teammate. In the NHL, the player from the visiting team is required to place his stick on the ice first when the face-off takes place at the centre-line dot; for all other face-offs, the player from the defending team must place his stick first. Before the league's 2015–16 season, the visiting player was required to place his stick first on all face-offs.
Ice hockey:
A player who takes face-offs as a specialty is sometimes called a face-off specialist.
Ice hockey:
History In the first organized ice hockey rules (see Amateur Hockey Association of Canada, AHAC), both centres faced the centre line of the ice rink, as wingers do today. At that time another forward position existed, the rover, who faced forward as centres do today, but a few feet away. The opposing forwards would whack the ice on their own side of the puck three times, then strike each other's stick above the puck, and then scramble for the puck. This manoeuvre was known as a 'bully'. The Winnipeg players invented what is today known as a 'face-off'. In Germany and other countries the term 'bully' is still commonly used.
Bandy:
In bandy, play begins with a "stroke off", with each team confined to its own half of the bandy pitch. Play is restarted with a face-off, however, after a temporary interruption. The face-off is executed at the place where the ball was situated when play was interrupted; if the ball was inside the penalty area at that moment, the face-off is moved to the nearest free-stroke point on the penalty line.
Bandy:
In a face-off, one player from each team places themselves opposite the other, with their backs turned to their own end-lines. The sticks are held parallel to each other, one on each side of the ball. The ball must not be touched until the referee has blown the whistle. At a face-off the ball may be played in any direction.
Bandy:
In bandy, face-offs are regulated in section 4.6 of the Bandy Playing Rules set up by the Federation of International Bandy (FIB).
Floorball:
Floorball is a type of floor hockey with five players and a goalkeeper on each team. It is played indoors with a ball the size of a tennis ball. Matches are played in three twenty-minute periods and, just as in ice hockey, begin with a face-off.
Broomball:
Like in ice hockey, a game of broomball begins with a face-off.
Rinkball:
Rinkball, a sport combining bandy and ice hockey elements, also begins with a face-off.
Lacrosse:
Field lacrosse:
Face-offs are used in men's field lacrosse after each goal and to start every quarter and overtime period, unless a team playing man-up controls the ball at the end of the previous quarter.

In the field lacrosse face-off, two players face each other at the X in the middle of the field, in a crouching position, with the ball placed on the ground on the center line between the heads of their sticks, which are set four inches (10 cm) apart, parallel to the midline but with the ends pointing in opposite directions. Two other players from each team must wait behind wing lines, 20 yards from the face-off spot on opposite sides of the field, until the whistle.

Any player except the goalkeeper, due to the much larger head on his stick, can face off; in practice, face-offs are usually taken by midfielders. When a team is down a player due to a penalty, there will only be one other midfielder on the wing, or none if two or more players are serving time. When a third player, the maximum allowed by the rules before penalties are stacked, is serving time, the team thus penalized is allowed to have one of its defensemen come out and play on the wing during a face-off.

Players facing off must rest their stick in their gloved hands on the ground and position themselves entirely to the left of their sticks' heads. They may kneel or keep both feet on the ground. Between the time they go down into position and the referee's whistle, the players facing off must remain still. A premature movement by any player will be called as a technical foul, and the other team will be awarded the ball. To ensure that they remain still, referees are instructed to time their whistle differently on every face-off.

At the whistle, each face-off player makes a move to clamp the ball under their stick head, or tries to direct the ball to their teammates on the wing. Only those six players can attempt to pick up the ball at first. The three attackmen and defensemen from either team must remain in their respective zones behind the restraining lines 20 yards (18 m) from the center line. Once possession is established, or the loose ball crosses either restraining line, the face-off is considered to have ended and all players are allowed to leave their zones. If the loose ball goes out of bounds on a face-off before either team can pick it up, it is awarded to the team that last touched it, and all other players are released when play is restarted.

The players facing off may not step on or hold each other's sticks to prevent the other from getting the ball. Nor may they trap the ball beneath their sticks without attempting a "tennis pickup" to prevent anyone from establishing possession, an action normally penalized as withholding the ball from play, another technical foul. If they pick the ball up on the back of their stick but do not immediately flip it into the pocket, it is also considered withholding. In all these cases the face-off is ended, with the ball awarded to the opposing team at the spot of the infraction.
Players facing off who deliberately handle or touch the ball in an attempt to gain possession, or who use their open hand to hold the opposing face-off player's stick, receive a three-minute unreleasable penalty for unsportsmanlike conduct in addition to possession being awarded to the other team.

Under NCAA rules in college lacrosse, if a team violates rules specific to face-offs more than twice in a half, whether by false starts by any player at midfield or by illegal actions by the players facing off, each additional violation results in a 30-second penalty assessed against the team, to be served by the designated "in-home" player.

A player who takes face-offs as a speciality is called a face-off specialist, also nicknamed a "FOGO", which stands for "face off, get off".
Lacrosse:
Women's lacrosse In women's lacrosse, a procedure similar to a face-off is also used, although it is called a draw. The two players taking the draw stand at the center of the field, and hold their sticks together at waist level while the referee places the ball between the heads, which face each other. Four other players from each team stand on the outside of a 30-foot (9.1 m) center circle. At the whistle, the two center players both lift their sticks, tossing the ball in the air, while the players on the outside attempt to gain possession when it comes down.
Field hockey:
A similar technique, known as a bully-off, is used in field hockey. The two opposing players alternately touch their sticks on the ground and against each other before attempting to strike the ball. Its use as the method of starting play was discontinued in 1981.
Similar rules in other sports:
A face-off is also similar to other methods used to start or resume play in a variety of other sports. All of these involve two opposing players attempting to gain control of the ball after it is released by an official.
A jump ball in basketball, a ball-up in Australian rules football, and a throw-up in shinty, all involve an official throwing the ball upwards into the air after which players must play for the ball.
A dropped-ball (if contested) is a method used in association football whereby an official will drop the ball rather than releasing it into the air.
Shinty:
A technique known as a throw-up is used in the stick-and-ball sport of shinty. A game of shinty begins with the referee throwing the ball into the air between two opposing players, whose sticks, called "camans", are raised in the air. The players must play for the ball in the air.
Similar rules in other sports:
American football:
An event similar to a face-off has been attempted in at least two leagues of American football: the original XFL in 2001 instituted an "opening scramble", replacing the coin toss, in which one player from each team attempted to recover a loose football after a twenty-yard dash. The team whose player recovered the ball got first choice of kicking, receiving, or defending one side of the field.
Similar rules in other sports:
Because of an extremely high rate of injury in these events (in the league's first game, one XFL player was lost for the season after separating his shoulder in a scramble), the event has not gained mainstream popularity in most other football leagues. X-League Indoor Football nonetheless adopted a modified version of the opening scramble (under the name "X-Dash") when it began play in 2014, tweaked to avoid injuries by having each player chase after their own ball.
Similar rules in other sports:
Coin toss:
The coin toss remains the method of choice for determining possession at the beginning of an American football game.
**Strategies for engineered negligible senescence**
Strategies for engineered negligible senescence:
Strategies for engineered negligible senescence (SENS) is a range of proposed regenerative medical therapies, either planned or currently in development, for the periodic repair of all age-related damage to human tissue. These therapies have the ultimate aim of maintaining a state of negligible senescence in patients and postponing age-associated disease. SENS was first defined by British biogerontologist Aubrey de Grey. While some biogerontologists support the SENS program, others contend that the ultimate goals of de Grey's programme are too speculative given the current state of technology. The 31-member Research Advisory Board of de Grey's SENS Research Foundation have signed an endorsement of the plausibility of the SENS approach.
Framework:
The term "negligible senescence" was first used in the early 1990s by professor Caleb Finch to describe organisms such as lobsters and hydras, which do not show symptoms of aging. The term "engineered negligible senescence" first appeared in print in Aubrey de Grey's 1999 book The Mitochondrial Free Radical Theory of Aging. De Grey defined SENS as a "goal-directed rather than curiosity-driven" approach to the science of aging, and "an effort to expand regenerative medicine into the territory of aging".The ultimate objective of SENS is the eventual elimination of age-related diseases and infirmity by repeatedly reducing the state of senescence in the organism. The SENS project consists in implementing a series of periodic medical interventions designed to repair, prevent or render irrelevant all the types of molecular and cellular damage that cause age-related pathology and degeneration, in order to avoid debilitation and death from age-related causes.
Framework:
Strategies:
As described by SENS, the following table details major ailments and the program's proposed preventative strategies:
Scientific reception:
While some fields mentioned as branches of SENS are supported by the medical research community, e.g., stem cell research, anti-Alzheimer's research and oncogenomics, the SENS programme as a whole has been a highly controversial proposal. Many of its critics argue that the SENS agenda is fanciful and that the complicated biomedical phenomena involved in aging contain too many unknowns for SENS to be fully implementable in the foreseeable future. Cancer may deserve special attention as an aging-associated disease, but the SENS claim that nuclear DNA damage only matters for aging because of cancer has been challenged in other literature, as well as by material studying the DNA damage theory of aging. More recently, biogerontologist Marios Kyriazis has criticised the clinical applicability of SENS by claiming that such therapies, even if developed in the laboratory, would be practically unusable by the general public. De Grey responded to one such criticism.
Scientific reception:
2005 EMBO Reports statement:
In November 2005, 28 biogerontologists published a statement of criticism in EMBO Reports, "Science fact and the SENS agenda: what can we reasonably expect from ageing research?", arguing that "each one of the specific proposals that comprise the SENS agenda is, at our present stage of ignorance, exceptionally optimistic," and that some of the specific proposals "will take decades of hard work [to be medically integrated], if [they] ever prove to be useful." The researchers argue that while there is "a rationale for thinking that we might eventually learn how to postpone human illnesses to an important degree," increased basic research, rather than the goal-directed approach of SENS, is currently the scientifically appropriate goal.
Technology Review contest:
In February 2005, the MIT Technology Review published an article by Sherwin Nuland, a Clinical Professor of Surgery at Yale University and the author of How We Die, that drew a skeptical portrait of SENS; at the time, de Grey was a computer associate in the Flybase Facility of the Department of Genetics at the University of Cambridge. While Nuland praised de Grey's intellect and rhetoric, he criticized the SENS framework both for oversimplifying "enormously complex biological problems" and for promising relatively near-at-hand solutions to those unsolved problems.

During June 2005, David Gobel, CEO and co-founder of the Methuselah Foundation with de Grey, offered Technology Review $20,000 to fund a prize competition to publicly clarify the viability of the SENS approach. In July 2005, Jason Pontin announced a $20,000 prize, funded 50/50 by the Methuselah Foundation and MIT Technology Review. The contest was open to any molecular biologist with a record of publication in biogerontology who could prove that the alleged benefits of SENS were "so wrong that it is unworthy of learned debate." Technology Review received five submissions to its challenge. In March 2006, Technology Review announced that it had chosen a panel of judges for the challenge: Rodney Brooks, Anita Goel, Nathan Myhrvold, Vikram Sheel Kumar, and Craig Venter. Three of the five submissions met the terms of the prize competition and were published by Technology Review on June 9, 2006. On July 11, 2006, Technology Review published the results of the SENS Challenge.

In the end, no one won the $20,000 prize. The judges felt that no submission met the criterion of the challenge and discredited SENS, although they unanimously agreed that one submission, by Preston Estep and his colleagues, was the most eloquent. Craig Venter succinctly expressed the prevailing opinion: "Estep et al. ... have not demonstrated that SENS is unworthy of discussion, but the proponents of SENS have not made a compelling case for it." Summarizing the judges' deliberations, Pontin wrote in 2006 that SENS is "highly speculative" and that many of its proposals could not be reproduced with current scientific technology. Myhrvold described SENS as belonging to a kind of "antechamber of science", where its proposals wait until technology and scientific knowledge advance to the point where they can be tested. Estep and his coauthors challenged the result of the contest, saying both that the judges had ruled "outside their area of expertise" and that they had failed to consider de Grey's frequent misrepresentations of the scientific literature.
SENS Research Foundation:
The SENS Research Foundation is a non-profit organization co-founded by Michael Kope, Aubrey de Grey, Jeff Hall, Sarah Marr and Kevin Perrott, which is based in California, United States. Its activities include SENS-based research programs and public relations work for the acceptance of and interest in related research.
**Autopass**
Autopass:
Autopass (stylized autoPASS) is an electronic toll collection system used in Norway. It allows road tolls to be collected automatically from cars. It uses electronic radio transmitters and receivers operating at 5.8 GHz (MD5885), originally supplied by the Norwegian companies Q-Free and Fenrits. Since 2013, Kapsch and Norbit have supplied the transponders. In 2016 the Norwegian Public Roads Administration revealed that it had chosen Norbit and Q-Free as suppliers of Autopass transponders for the next four years.

Since 2022, contracts with vehicle owners are made with competing private companies; Autopass as a national company only handles the technology. A contract in general gives a 20% discount for light vehicles. Contracts and tags are compulsory for heavy vehicles. Foreign-registered vehicles without a contract are handled by the EPASS24 company, which will track the owner and bill them. Owners are advised to register their vehicle with EPASS24 and pay, in order to avoid extra cost. This includes foreign borrowed or rented vehicles. Customers with Norwegian rental vehicles cannot make their own contract with an AutoPASS provider, but have to wait for the rental company to receive the toll bill and charge the customer afterwards. For rental cars, tolls will include VAT, while tolls are normally VAT-free: because legally only the owner is responsible for tolls, charging the rental customer is legally seen as an extra rental fee.
Autopass:
From 2022, the toll rings in general use the "hour rule", meaning that only one passage per hour is charged for, provided the owner has a contract. Especially in Oslo and Tromsø, with multiple or oddly shaped ring borders, driving without a contract can cost several times as much as driving with one. Electric vehicles have a large discount, usually half price on top of the general 20% contract discount, but only with a contract.
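As an illustration of how an hour rule affects billing, here is a minimal sketch; the 60-minute window logic and all names are assumptions for illustration, not AutoPASS's actual billing code:

```python
# Illustrative sketch of the "hour rule": with a contract, passages within
# one hour of the last charged passage are free. Times are in minutes.
def charged_passages(passage_times_min, has_contract):
    if not has_contract:
        return list(passage_times_min)       # every passage is charged
    charged, last = [], None
    for t in sorted(passage_times_min):
        if last is None or t - last >= 60:   # outside the free hour
            charged.append(t)
            last = t
    return charged

# Four ring crossings in 90 minutes: two charged with a contract, four without.
print(len(charged_passages([0, 20, 45, 90], has_contract=True)))   # -> 2
print(len(charged_passages([0, 20, 45, 90], has_contract=False)))  # -> 4
```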
Autopass:
In 2022 AutoPASS left the EasyGo partnership, so AutoPASS tags are no longer valid in Denmark and Sweden, unless the contract provider has arranged such validity. Since 2019, more and more ferry crossings also use Autopass as a payment option through the "AutoPass for ferry" concept. A few crossings are automatic, but most are still manual. With a tag, you pay only for the vehicle at fully automatic crossings, with a 10% discount. With an Autopass ferry account, which is prepaid, you get a 50% (40% corporate) discount for the vehicle and 17% for passengers at manual payment crossings. See https://www.autopassferje.no for more information.
Technology:
The system involves the installation of a DSRC-based radio transponder on the windscreen of a vehicle, and the signing of an agreement with one of the toll collection companies in Norway. Tolls are charged at toll plazas, which cars can drive past at over 100 kilometres per hour (62 mph). The system is administered by the Norwegian Public Roads Administration. All public toll roads now use the electronic toll collection system.
Technology:
Each Autopass unit contains a microcontroller which processes requests from the roadside equipment and responds with the proper information.
Technology:
There are 5 generations of cryptographic key pairs inside each Autopass unit, unique to each unit. The cryptographic keys are used for authenticating the unit when passing a toll plaza, making it difficult to create fraudulent copies of an Autopass unit. Unlike similar DSRC-based tolling systems used in many countries, there is no access control in the Norwegian system, the unique ID within the unit being available to anyone with the proper DSRC equipment.
Technology:
There is internal storage space for 100 log entries, which is normally updated each time the vehicle owner is charged when passing a toll plaza. This is a collection of receipt entries which include the time, date, and station identity of the toll plaza that performed the tolling transaction.
Each Autopass unit features a move-detect mechanism. When the unit is removed from the windscreen, an electrical switch is activated, causing a flag to be set in a processor within the Autopass unit. This flag is registered during the tolling transaction the next time the unit passes a toll plaza.
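The receipt log and move-detect flag described above can be modelled roughly as follows. This is an illustrative sketch, not the actual transponder firmware; all class and field names are hypothetical:

```python
# Minimal model (not real firmware) of the 100-entry receipt log and the
# move-detect flag described above.
from collections import deque
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TollReceipt:
    timestamp: datetime   # time and date of the tolling transaction
    station_id: str       # identity of the toll plaza that charged the unit

class AutopassUnit:
    LOG_CAPACITY = 100    # internal storage space for 100 log entries

    def __init__(self):
        # the oldest receipts are discarded once the log is full
        self.log = deque(maxlen=self.LOG_CAPACITY)
        self.moved_flag = False  # set when the unit leaves the windscreen

    def on_removed_from_windscreen(self):
        self.moved_flag = True

    def on_toll_transaction(self, station_id: str) -> bool:
        """Record a receipt and report (then clear) the move-detect flag."""
        self.log.append(TollReceipt(datetime.now(), station_id))
        flag_was_set, self.moved_flag = self.moved_flag, False
        return flag_was_set
```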
Obligatory tag for heavy vehicles:
As of 1 January 2015 it is compulsory for all vehicles over 3.5 tonnes (3.4 long tons; 3.9 short tons) which are registered to an enterprise, state, county, or municipal administration, or which are otherwise primarily used for business purposes, to have an electronic toll payment tag when driving in Norway. The provision has its legal basis in regulations that were adopted on 10 October 2014. It applies to all above-mentioned Norwegian and foreign vehicles on the entire public road network. Failure to carry a toll payment tag will result in a fine of 8,000 NOK. Failure to pay within three weeks means that the penalty charge will be increased to 12,000 NOK. If you are stopped twice without a tag within a period of two years, you will be fined 16,000 NOK.
**Ornithine decarboxylase**
Ornithine decarboxylase:
The enzyme ornithine decarboxylase (EC 4.1.1.17, ODC) catalyzes the decarboxylation of ornithine (a product of the urea cycle) to form putrescine. This reaction is the committed step in polyamine synthesis. In humans, this protein has 461 amino acids and forms a homodimer. In humans, ornithine decarboxylase (ODC) is expressed by the gene ODC1. The protein ODC is sometimes referred to as "ODC1" in research pertaining to humans and mice, but certain species such as Drosophila (dODC2), species of the Solanaceae plant family (ODC2), and the lactic acid bacterium Paucilactobacillus wasatchensis (odc2) have been shown to have a second ODC gene.
Reaction mechanism:
Lysine 69 on ornithine decarboxylase (ODC) binds the cofactor pyridoxal phosphate (PLP) to form a Schiff base. Ornithine displaces the lysine to form a Schiff base attached to ornithine, which decarboxylates to form a quinoid intermediate. This intermediate rearranges to form a Schiff base attached to putrescine, which is attacked by the lysine to release the putrescine product and re-form PLP-bound ODC. This is the first and, in humans, the rate-limiting step in the production of polyamines, compounds required for cell division.
Reaction mechanism:
Spermidine synthase can then catalyze the conversion of putrescine to spermidine by the attachment of an aminopropyl moiety. Spermidine is a precursor to other polyamines, such as spermine and its structural isomer thermospermine.
Structure:
The active form of ornithine decarboxylase is a homodimer. Each monomer contains a barrel domain, consisting of an alpha-beta barrel, and a sheet domain, composed of two beta-sheets. The domains are connected by loops. The monomers connect to each other via interactions between the barrel of one monomer and the sheet of the other. Binding between monomers is relatively weak, and ODC interconverts rapidly between monomeric and dimeric forms in the cell. The pyridoxal phosphate cofactor binds lysine 69 at the C-terminal end of the barrel domain. The active site is at the interface of the two domains, in a cavity formed by loops from both monomers.
Function:
The ornithine decarboxylation reaction catalyzed by ornithine decarboxylase is the first and committed step in the synthesis of polyamines, particularly putrescine, spermidine and spermine. Polyamines are important for stabilizing DNA structure, the DNA double strand-break repair pathway and as antioxidants. Therefore, ornithine decarboxylase is an essential enzyme for cell growth, producing the polyamines necessary to stabilize newly synthesized DNA. Lack of ODC causes cell apoptosis in embryonic mice, induced by DNA damage.
Proteasomal degradation:
ODC is the most well-characterized cellular protein subject to ubiquitin-independent proteasomal degradation. Although most proteins must first be tagged with multiple ubiquitin molecules before they are bound and degraded by the proteasome, ODC degradation is instead mediated by several recognition sites on the protein and its accessory factor antizyme. The ODC degradation process is regulated in a negative feedback loop by its reaction products. Until a report by Sheaff et al. (2000), which demonstrated that the cyclin-dependent kinase (Cdk) inhibitor p21Cip1 is also degraded by the proteasome in a ubiquitin-independent manner, ODC was the only clear example of ubiquitin-independent proteasomal degradation.
Clinical significance:
ODC is a transcriptional target of the oncogene Myc and is upregulated in a wide variety of cancers. The polyamine products of the pathway initiated by ODC are associated with increased cell growth and reduced apoptosis. Ultraviolet light, asbestos and androgens released by the prostate gland are all known to induce increased ODC activity associated with cancer. Inhibitors of ODC such as eflornithine have been shown to effectively reduce cancers in animal models, and drugs targeting ODC are being tested for potential clinical use.
Clinical significance:
The mechanism by which ODC promotes carcinogenesis is complex and not entirely known. Along with their direct effect on DNA stability, polyamines also upregulate gap junction genes and downregulate tight junction genes. Gap junction genes are involved in communication between carcinogenic cells, and tight junction genes act as tumor suppressors. Mutations of the ODC1 gene have been shown to cause Bachmann-Bupp syndrome (BABS), a rare neurometabolic disorder characterized by global developmental delay, alopecia, macrocephaly, dysmorphic features, and behavioral abnormalities. BABS is typically caused by an autosomal dominant de novo ODC1 variant. ODC gene expression is induced by a large number of biological stimuli, including seizure activity in the brain. Inactivation of ODC by difluoromethylornithine (DFMO, eflornithine) is used to treat cancer and facial hair growth in postmenopausal females.
Clinical significance:
ODC is also an enzyme indispensable to parasites like Trypanosoma, Giardia, and Plasmodium, a fact exploited by the drug eflornithine.
Immunological significance:
In antigen-activated T cells, ODC enzymatic activity increases after activation, corresponding with an increase in polyamine synthesis in T cells after activation. As with ODC and cancer, MYC, also referred to as c-Myc for cellular Myc, is the master regulator of polyamine biosynthesis in T cells. A 2020 study by Wu et al. using T-cell-specific ODC cKO mice showed that T cells can function and proliferate normally in vivo and that other polyamine synthesis pathways can compensate for the lack of ODC. However, blocking polyamine synthesis via ODC with DFMO and polyamine uptake with AMXT 1501 depleted the polyamine pool, inhibited T-cell proliferation and suppressed T-cell inflammation. Recent studies have shown the importance of ODC and polyamine synthesis in T helper cell fate commitment. A 2021 study by Puleston et al. showed that TH1 and TH2 cells express higher levels of ODC than regulatory T (Treg) cells and TH17 cells, which corresponded to higher levels of polyamine biosynthesis in TH1 and TH2 cells. A 2021 study by Wagner et al. showed promotion of a Treg program in Odc1-/- mice. They concluded that polyamine-related enzyme expression is enhanced in pathogenic TH17 cells and suppressed in Treg cells.
**Split Cycle Offset Optimisation Technique**
Split Cycle Offset Optimisation Technique:
Split Cycle Offset Optimisation Technique (SCOOT) is a real-time adaptive traffic control system for the coordination and control of traffic signals across an urban road network. Originally developed by the Transport Research Laboratory for the Department of Transport in 1979, research and development of SCOOT has continued to the present day. SCOOT is used extensively throughout the United Kingdom as well as in other countries. SCOOT automatically adjusts the traffic signal timings to adapt to current traffic conditions, using flow data from traffic sensors. Sensor data is usually derived from inductive loops in the carriageway, but other forms of detection are increasingly being used.
Split Cycle Offset Optimisation Technique:
Adjacent signal-controlled junctions and pedestrian/cycle crossings are collected together into groups called "regions". SCOOT then calculates the most appropriate signal timings for the region: it changes the stage lengths (the splits) so that delays are balanced as much as possible, changes the cycle time so that delays are minimised, and changes the offsets between the signal installations so that their timings are co-ordinated as well as possible.
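As a rough illustration of these three optimisations, consider the toy sketch below. It is not the real SCOOT optimiser (which works on much richer cyclic flow profiles); every threshold, step size and function name is an invented assumption:

```python
# Toy sketch of SCOOT-style split/cycle/offset adjustment. Not the actual
# SCOOT algorithm; all numbers and names here are illustrative assumptions.

def balance_splits(cycle_time, stage_flows, min_green=7):
    """Share the cycle's green time across stages in proportion to flow."""
    total = sum(stage_flows)
    splits = [max(min_green, cycle_time * f / total) for f in stage_flows]
    scale = cycle_time / sum(splits)      # renormalise to the cycle length
    return [s * scale for s in splits]

def adjust_cycle(cycle_time, saturation, target=0.9, step=4, lo=32, hi=120):
    """Nudge the region's common cycle time with demand, within bounds."""
    if saturation > target:
        cycle_time += step                # congested: lengthen the cycle
    elif saturation < target - 0.1:
        cycle_time -= step                # quiet: shorten the cycle
    return max(lo, min(hi, cycle_time))

def offset_for(travel_time_s, cycle_time):
    """Offset a downstream junction by the platoon's travel time."""
    return travel_time_s % cycle_time

cycle = adjust_cycle(cycle_time=60, saturation=0.95)       # -> 64
print(balance_splits(cycle, stage_flows=[600, 300, 100]))  # green per stage
print(offset_for(travel_time_s=75, cycle_time=cycle))      # -> 11
```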
Split Cycle Offset Optimisation Technique:
SCOOT has been demonstrated to yield improvements in traffic performance of the order of 15% compared to fixed-timing systems. In early 2021, TRL released SCOOT 7, having updated the algorithm to work with future mobility needs.
**Rice bran oil**
Rice bran oil:
Rice bran oil is the oil extracted from the hard outer brown layer of rice called bran. It is known for its high smoke point of 232 °C (450 °F) and mild flavor, making it suitable for high-temperature cooking methods such as stir frying and deep frying. It is popular as a cooking oil in East Asia, the Indian subcontinent, and Southeast Asia including India, Nepal, Bangladesh, Indonesia, Japan, Southern China and Malaysia.
Composition and properties:
Rice bran oil has a composition similar to that of peanut oil, with 38% monounsaturated, 37% polyunsaturated, and 25% saturated fatty acids. A component of rice bran oil is γ-oryzanol, at around 2% of crude oil content. Thought to be a single compound when initially isolated, γ-oryzanol is now known to be a mixture of steryl and other triterpenyl esters of ferulic acids. Also present are tocopherols and tocotrienols (two types of vitamin E) and phytosterols.
Composition and properties:
Tables (not reproduced in this extract): fatty acid composition; physical properties of crude and refined rice bran oil.
Research:
Rice bran oil consumption has been found to significantly decrease total cholesterol (TC), LDL-C and triglyceride (TG) levels.
Uses:
Rice bran oil is an edible oil which is used in various forms of food preparation. It is also the basis of some vegetable ghee. Rice bran wax, obtained from rice bran oil, is used as a substitute for carnauba wax in cosmetics, confectionery, shoe creams, and polishing compounds.
Isolated γ-oryzanol from rice bran oil is available in China as an over-the-counter drug, and in other countries as a dietary supplement. There is no meaningful evidence supporting its efficacy for treating any medical condition.
**Differential TTL**
Differential TTL:
Differential TTL is a type of binary electrical signaling based on the transistor-transistor logic (TTL) concept. It enables electronic systems to be relatively immune to noise. RS-422 and RS-485 outputs can be implemented as differential TTL. Normal TTL signals are single-ended, which means that each signal consists of a voltage on one wire, referenced to a system ground. The "low" voltage level is zero to 0.8 volts, and the "high" voltage level is 2 volts to 5 volts. A differential TTL signal consists of two such wires, also referenced to a system ground. The logic level on one wire is always the complement of the other. The principle is similar to that of low-voltage differential signaling (LVDS), but with different voltage levels.
Differential TTL:
Differential TTL is used in preference to single-ended TTL for long-distance signaling. In a long cable, stray electromagnetic fields in the environment, or stray currents in the system ground, can induce unwanted voltages that cause errors at the receiver. With a differential pair of wires, roughly the same unwanted voltage is induced in each wire. The receiver subtracts the voltages on the two wires, so that the unwanted voltage disappears, and only the voltage created by the driver remains.
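A small numeric model makes the cancellation concrete. This is an illustrative, idealised simulation only (perfectly equal induced noise on both wires), not a description of any particular driver or receiver chip:

```python
# Why the receiver's subtraction removes induced noise: the same stray
# voltage appears on both wires of the pair, so it cancels in the
# difference while the driven signal survives.
import random

HIGH, LOW = 5.0, 0.0   # illustrative TTL-style levels, in volts

def drive(bit):
    """The driver places complementary levels on the two wires."""
    return (HIGH, LOW) if bit else (LOW, HIGH)

def long_cable(pair, noise_amplitude=2.0):
    """Roughly the same unwanted voltage is induced on each wire."""
    noise = random.uniform(-noise_amplitude, noise_amplitude)
    a, b = pair
    return (a + noise, b + noise)

def receive(pair):
    """The receiver looks only at the sign of the difference."""
    a, b = pair
    return 1 if (a - b) > 0 else 0

bits = [1, 0, 1, 1, 0]
received = [receive(long_cable(drive(bit))) for bit in bits]
assert received == bits   # common-mode noise does not flip any bits
```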
Differential TTL:
A second advantage of differential TTL is that the differential pair of wires can form a current loop. The driver sources a current from the power supply into one wire. This current passes along the wire to the receiver, through the termination resistor and back up the other wire, then back through the driver and down to ground. No net current is exchanged between the driver and receiver, which means that none of the signal current has to return through the ground connection (if there is one) between the two ends. This arrangement prevents the signal from injecting currents into the ground connection, which might upset other circuits attached to it.
Differential TTL:
Differential TTL is the most common type of high-voltage differential signaling (HVDS).
Applications:
Differential TTL signaling was used in the Serial Storage Architecture (SSA) standard devised by IBM, but this is mostly obsolete. More efficient signaling techniques such as LVDS are now used instead.
**Imide**
Imide:
In organic chemistry, an imide is a functional group consisting of two acyl groups bound to nitrogen. The compounds are structurally related to acid anhydrides, although imides are more resistant to hydrolysis. In terms of commercial applications, imides are best known as components of high-strength polymers, called polyimides. Inorganic imides are also known as solid state or gaseous compounds, and the imido group (=NH) can also act as a ligand.
Nomenclature:
Most imides are cyclic compounds derived from dicarboxylic acids, and their names reflect the parent acid. Examples are succinimide, derived from succinic acid, and phthalimide, derived from phthalic acid. For imides derived from amines (as opposed to ammonia), the N-substituent is indicated by a prefix. For example, N-ethylsuccinimide is derived from succinic acid and ethylamine. Isoimides are isomeric with normal imides and have the formula RC(O)OC(NR′)R″. They are often intermediates that convert to the more symmetrical imides. Organic compounds called carbodiimides have the formula RN=C=NR. They are unrelated to imides.
Nomenclature:
Imides from dicarboxylic acids:
The PubChem links give access to more information on the compounds, including other names, IDs, toxicity and safety.
Properties:
Being highly polar, imides exhibit good solubility in polar media. The N–H center for imides derived from ammonia is acidic and can participate in hydrogen bonding. Unlike the structurally related acid anhydrides, they resist hydrolysis and some can even be recrystallized from boiling water.
Occurrence and applications:
Many high-strength or electrically conductive polymers contain imide subunits, i.e., the polyimides. One example is Kapton, where the repeat unit consists of two imide groups derived from aromatic tetracarboxylic acids. Another example of a polyimide is the polyglutarimide typically made from polymethylmethacrylate (PMMA) and ammonia or a primary amine by aminolysis and cyclization of the PMMA at high temperature and pressure, typically in an extruder. This technique is called reactive extrusion. A commercial polyglutarimide product based on the methylamine derivative of PMMA, called Kamax, was produced by the Rohm and Haas company. The toughness of these materials reflects the rigidity of the imide functional group.
Occurrence and applications:
Interest in the bioactivity of imide-containing compounds was sparked by the early discovery of the high bioactivity of cycloheximide as an inhibitor of protein biosynthesis in certain organisms. Thalidomide, famous for its adverse effects, is one result of this research. A number of fungicides and herbicides contain the imide functionality. Examples include captan, which is considered carcinogenic under some conditions, and procymidone.
Occurrence and applications:
In the 21st century new interest arose in thalidomide's immunomodulatory effects, leading to the class of immunomodulators known as immunomodulatory imide drugs (IMiDs).
Preparation:
Most common imides are prepared by heating dicarboxylic acids or their anhydrides with ammonia or primary amines. The result is a condensation reaction:

(RCO)2O + R′NH2 → (RCO)2NR′ + H2O

These reactions proceed via the intermediacy of amides. The intramolecular reaction of a carboxylic acid with an amide is far faster than the intermolecular reaction, which is rarely observed.
They may also be produced via the oxidation of amides, particularly when starting from lactams.
R(CO)NHCH2R′ + 2 [O] → R(CO)N(CO)R′ + H2O

Certain imides can also be prepared via the isoimide-to-imide Mumm rearrangement.
Reactions:
For imides derived from ammonia, the N–H center is weakly acidic. Thus, alkali metal salts of imides can be prepared with conventional bases such as potassium hydroxide. The conjugate base of phthalimide is potassium phthalimide. These anions can be alkylated to give N-alkylimides, which in turn can be degraded to release the primary amine. Strong nucleophiles, such as potassium hydroxide or hydrazine, are used in the release step.
Reactions:
Treatment of imides with halogens and base gives the N-halo derivatives. Examples that are useful in organic synthesis are N-chlorosuccinimide and N-bromosuccinimide, which respectively serve as sources of "Cl+" and "Br+" in organic synthesis.
Imides in coordination chemistry:
In coordination chemistry, transition metal imido complexes feature the NR2− ligand. They are similar to oxo ligands in some respects. In some complexes the M–N–C angle is 180°, but often the angle is decidedly bent. The parent imide (NH2−) is an intermediate in nitrogen fixation by synthetic catalysts.
**Flameless candle**
Flameless candle:
Flameless candles are an electronic alternative to traditional wick candles. They are typically utilized as aesthetic lighting devices and come in a variety of shapes, colors and sizes. A flame-effect lightbulb contains multiple small light-emitting diodes and a control circuit to flash them in a semi-regular, flickering pattern. The bulb may be sold separately with a standard Edison screw for use in ordinary fixtures, or in a self-contained housing with battery.
Flameless candle:
Flameless candles are designed to eliminate the need for an open flame, thus reducing their potential as fire hazards.
Appearance:
As a decorative element, the design of a flameless candle is relatively versatile. The body or "housing" of the device is commonly cylindrical, containing a battery pack and an often flame-shaped LED light that rests at the top of the candle. Many manufacturers use LED lights with an irregular twinkling or flicker effect to simulate the calming glow of an open flame. The body of a flameless candle can likewise be made of wax to enhance its resemblance to traditional candles. Because LED lights do not put out as much heat as a live flame, wax-based flameless candles do not melt but rather maintain their original shape and size for future use.
Functionality:
Some flameless candles are scented, serving as air fresheners as well as lighting devices. Others, designed specifically for outdoor use, incorporate features including integrated insect repellent. As the sun sets, an ambient light sensor, housed in the body of the candle, triggers a small fan near a fragrance compartment. Geranial or other repellents are then released. Additional features may include remote control light switches, integrated timers and air treatment apparatus.
Safety:
Because flameless candles are illuminated by a small light bulb rather than an open flame, they pose less threat as fire hazards and do not melt or lose their form over time. Nonetheless, the bulbs inside some flameless candles may heat up significantly. A pediatric study conducted in 2013 suggested that flameless candles are a minor cause of battery-related injuries in children: close to 8 percent of batteries ingested by children were identified as having come from flameless candles.
**Superior epigastric vein**
Superior epigastric vein:
In human anatomy, the superior epigastric veins are two or more venae comitantes which accompany either superior epigastric artery before emptying into the internal thoracic vein. They participate in the drainage of the superior surface of the diaphragm.
Structure:
Course:
The superior epigastric vein originates from the internal thoracic vein. The superior epigastric veins first run between the sternal margin and the costal margin of the diaphragm, then enter the rectus sheath. They run inferiorly, coursing superficially to the fibrous layer forming the posterior leaflet of the rectus sheath, and deep to the rectus abdominis muscle. The superior epigastric veins are venae comitantes of the superior epigastric artery, and mirror its course.
Structure:
Distribution:
The superior epigastric veins participate in the drainage of the superior surface of the diaphragm.
Fate:
The superior epigastric veins drain into the internal thoracic vein.
**High-sticking**
High-sticking:
High-sticking is the name of two infractions in the sport of ice hockey.
High-sticking:
High-sticking may occur when a player intentionally or inadvertently plays the puck with his stick above the height of the shoulders or above the crossbar of a hockey goal. This can result in a penalty or a stoppage of play. In the rules of the National Hockey League, high-sticking is defined as a penalty in Rule 60 and as a non-penalty foul in Rule 80.
High-sticking:
A penalty is assessed if a player strikes another player with a high stick. The player is given a minor penalty unless his high stick caused an injury, in which case the referee has the option to assess a double-minor, major, game misconduct or match penalty. It is at the referee's discretion which penalty to assess: the rule calls for a double minor for an accidental injury, or a match penalty for a deliberate attempt to injure (whether or not the opposition player was actually injured). Injury is usually decided by the high stick causing bleeding, but the presence of blood does not automatically mean an extra penalty is awarded. Some referees have been known to award an extra penalty without the presence of blood if the referee determines that the injury sustained was sufficient to warrant a double-minor penalty.
High-sticking:
A stoppage in play results if a high stick comes in contact with the puck and the team who touched it regains control of the puck. However, play usually continues if a player touches the puck with a high stick and the opposing team gains control of the puck. If the puck goes into the opposing net after coming into contact with a high stick, the goal is disallowed. The level at which a stick is considered too high for a goal is the crossbar of the net. However, if a player knocks the puck into his own net with a high stick, the goal is allowed.
**Sequential decoding**
Sequential decoding:
Devised by John Wozencraft, sequential decoding is a limited-memory technique for decoding tree codes. Sequential decoding is mainly used as an approximate decoding algorithm for long constraint-length convolutional codes. This approach may not be as accurate as the Viterbi algorithm but can save a substantial amount of computer memory. It was used to decode a convolutional code in the 1968 Pioneer 9 mission.
Sequential decoding:
Sequential decoding explores the tree code in such a way as to try to minimise the computational cost and the memory required to store the tree.

There is a range of sequential decoding approaches based on the choice of metric and algorithm. Metrics include the Fano metric, the Zigangirov metric and the Gallager metric. Algorithms include the stack algorithm, the Fano algorithm and the Creeper algorithm.

Fano metric:
Given a partially explored tree (represented by a set of nodes which are the limit of exploration), we would like to know the best node from which to explore further. The Fano metric (named after Robert Fano) allows one to calculate which node is the best one from which to explore further. This metric is optimal given no other constraints (e.g. memory).
Fano metric:
For a binary symmetric channel (with error probability $p$) the Fano metric can be derived via Bayes' theorem. We are interested in following the most likely path $P_i$ given an explored state of the tree $X$ and a received sequence $r$. Using the language of probability and Bayes' theorem, we want to choose the maximum over $i$ of

$$\Pr(P_i \mid X, r) \propto \Pr(r \mid P_i, X)\,\Pr(P_i \mid X).$$

We now introduce the following notation: $N$, the maximum length of transmission in branches; $b$, the number of bits on a branch of the code (the denominator of the code rate $R$);
Fano metric:
$d_i$, the number of bit errors on path $P_i$ (the Hamming distance between the branch labels and the received sequence); and $n_i$, the length of $P_i$ in branches. We express the likelihood as

$$\Pr(r \mid P_i, X) = p^{d_i} (1-p)^{n_i b - d_i}\, 2^{-(N - n_i) b}$$

(using the binary symmetric channel likelihood for the first $n_i b$ bits, followed by a uniform prior over the remaining bits).
We express the prior $\Pr(P_i \mid X)$ in terms of the number of branch choices one has made, $n_i$, and the number of branches from each node, $2^{Rb}$. Therefore:

$$\Pr(P_i \mid X, r) \propto p^{d_i} (1-p)^{n_i b - d_i}\, 2^{-(N - n_i) b}\, 2^{-n_i R b} \propto p^{d_i} (1-p)^{n_i b - d_i}\, 2^{n_i b}\, 2^{-n_i R b}$$

We can equivalently maximise the log of this probability, i.e.
Fano metric:
$$\log_2 \Pr(P_i \mid X, r) \propto d_i \left(\log_2 p + 1 - R\right) + (n_i b - d_i)\left(\log_2 (1-p) + 1 - R\right)$$

This last expression is the Fano metric. The important point to see is that we have two terms here: one based on the number of wrong bits and one based on the number of right bits. We can therefore update the Fano metric simply by adding $\log_2 p + 1 - R$ for each non-matching bit and $\log_2 (1-p) + 1 - R$ for each matching bit.
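As a sketch, the per-bit update just derived translates directly into code (the values of $p$ and $R$ in the example are arbitrary choices):

```python
# Sketch of the per-bit Fano metric update derived above. p is the channel
# error probability and R the code rate; the numbers below are examples.
from math import log2

def fano_bit_metric(matches: bool, p: float, R: float) -> float:
    """Metric increment for one received bit compared with the path label."""
    return (log2(1 - p) if matches else log2(p)) + 1 - R

def path_metric(errors: int, length_bits: int, p: float, R: float) -> float:
    """Fano metric of a path with `errors` mismatches out of `length_bits`."""
    return (errors * fano_bit_metric(False, p, R)
            + (length_bits - errors) * fano_bit_metric(True, p, R))

# With p = 0.05 and R = 1/2, each matching bit adds about +0.43 while each
# non-matching bit adds about -3.82, so the metric of the correct path
# tends to drift upward while wrong paths sink quickly.
print(fano_bit_metric(True, 0.05, 0.5), fano_bit_metric(False, 0.05, 0.5))
```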
Computational cutoff rate:
For sequential decoding to be a good choice of decoding algorithm, the number of states explored needs to remain small (otherwise an algorithm which deliberately explores all states, e.g. the Viterbi algorithm, may be more suitable). For a particular noise level there is a maximum coding rate $R_0$, called the computational cutoff rate, where there is a finite backtracking limit. For the binary symmetric channel:

$$R_0 = 1 - \log_2\left(1 + 2\sqrt{p(1-p)}\right)$$
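A quick numeric check of this expression, as a sketch (the formula is the standard binary-symmetric-channel cutoff rate, reconstructed above):

```python
# Computational cutoff rate R0 for the binary symmetric channel.
from math import log2, sqrt

def cutoff_rate(p: float) -> float:
    return 1 - log2(1 + 2 * sqrt(p * (1 - p)))

print(cutoff_rate(0.05))  # ~0.478: sequential decoding suits rates below this
```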
Algorithms:
Stack algorithm:
The simplest algorithm to describe is the "stack algorithm", in which the best $N$ paths found so far are stored. Sequential decoding may introduce an additional error above Viterbi decoding when the correct path has $N$ or more higher-scoring paths above it; at this point the best path will drop off the stack and no longer be considered.
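Below is a minimal sketch of a stack decoder for a toy rate-1/2 convolutional code with generators 7 and 5 (octal); the code choice, stack size and helper names are illustrative assumptions, not part of any standard:

```python
# Minimal stack-algorithm sketch for a toy rate-1/2 convolutional code.
# The "stack" keeps the best paths, ordered by the Fano metric above.
import heapq
from math import log2

P, R = 0.05, 0.5                 # channel error probability, code rate
GOOD = log2(1 - P) + 1 - R       # metric increment for a matching bit
BAD = log2(P) + 1 - R            # metric increment for a non-matching bit

def branch_bits(state, bit):
    """One branch of the code tree: 2 output bits and the next state."""
    s = ((bit << 2) | state) & 0b111
    g7 = bin(s & 0b111).count("1") & 1   # generator 111 (7 octal)
    g5 = bin(s & 0b101).count("1") & 1   # generator 101 (5 octal)
    return (g7, g5), s >> 1

def stack_decode(received, stack_size=64):
    """received: list of 2-bit tuples; returns the decoded input bits."""
    stack = [(0.0, (), 0)]       # (-metric, input bits so far, state)
    while True:
        neg_metric, bits, state = heapq.heappop(stack)
        if len(bits) == len(received):
            return list(bits)    # best path reached the end of the tree
        for b in (0, 1):         # extend the current best path both ways
            out, nxt = branch_bits(state, b)
            step = sum(GOOD if o == r else BAD
                       for o, r in zip(out, received[len(bits)]))
            heapq.heappush(stack, (neg_metric - step, bits + (b,), nxt))
        if len(stack) > stack_size:          # bounded memory: drop the
            stack = heapq.nsmallest(stack_size, stack)  # worst paths

# Encode 1,0,1,1, flip one channel bit, and decode it back:
msg, state, coded = [1, 0, 1, 1], 0, []
for b in msg:
    out, state = branch_bits(state, b)
    coded.append(out)
coded[1] = (1 - coded[1][0], coded[1][1])    # one channel error
assert stack_decode(coded) == msg
```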
Algorithms:
Fano algorithm:
The famous Fano algorithm (named after Robert Fano) has a very low memory requirement and hence is suited to hardware implementations. This algorithm explores backwards and forwards from a single point on the tree. The Fano algorithm is a sequential decoding algorithm that does not require a stack. It can only operate over a code tree, because it cannot examine path merging.

At each decoding stage, the Fano algorithm retains information about three paths: the current path, its immediate predecessor path, and one of its successor paths. Based on this information, it can move from the current path to either its immediate predecessor path or the selected successor path; hence, no stack is required for queuing the examined paths.

The movement of the Fano algorithm is guided by a dynamic threshold $T$ that is an integer multiple of a fixed step size $\Delta$. Only a path whose path metric is no less than $T$ may be visited next. According to the algorithm, the codeword search continues to move forward along a code path as long as the Fano metric along that path remains non-decreasing. Once all the successor path metrics are smaller than $T$, the algorithm moves backward to the predecessor path if the predecessor path metric beats $T$; thereafter, threshold examination is performed on another successor path of this revisited predecessor. If the predecessor path metric is also less than $T$, the threshold $T$ is lowered by one step so that the algorithm is not trapped on the current path.

If a path is revisited, the dynamic threshold examined at that visit is always lower than the threshold at the previous visit. This guarantees that looping does not occur, and that the algorithm ultimately reaches a terminal node of the code tree and stops.
**SV2A**
SV2A:
Synaptic vesicle glycoprotein 2A is a ubiquitous synaptic vesicle protein that in humans is encoded by the SV2A gene. The protein is targeted by the anti-epileptic drugs (anticonvulsants) levetiracetam and brivaracetam.
**Quantum natural language processing**
Quantum natural language processing:
Quantum natural language processing (QNLP) is the application of quantum computing to natural language processing (NLP). It computes word embeddings as parameterised quantum circuits that can solve NLP tasks faster than any classical computer. It is inspired by categorical quantum mechanics and the DisCoCat framework, making use of string diagrams to translate from grammatical structure to quantum processes.
Theory:
The first quantum algorithm for natural language processing used the DisCoCat framework and Grover's algorithm to show a quadratic quantum speedup for a text classification task. It was later shown that quantum language processing is BQP-Complete, i.e. quantum language models are more expressive than their classical counterparts, unless quantum mechanics can be efficiently simulated by classical computers.
These two theoretical results assume fault-tolerant quantum computation and a QRAM, i.e. an efficient way to load classical data on a quantum computer. Thus, they are not applicable to the noisy intermediate-scale quantum (NISQ) computers available today.
Experiments:
The algorithm of Zeng and Coecke was adapted to the constraints of NISQ computers and implemented on IBM quantum computers to solve binary classification tasks. Instead of loading classical word vectors onto a quantum memory, the word vectors are computed directly as the parameters of quantum circuits. These parameters are optimised using methods from quantum machine learning to solve data-driven tasks such as question answering, machine translation and even algorithmic music composition.
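The sketch below illustrates this idea in simulation: each word's "embedding" is the angle of a single-qubit rotation, and a sentence is classified by a measurement probability. It uses plain numpy rather than quantum hardware or the DisCoCat pipeline, and the vocabulary, data set and training loop are invented for illustration:

```python
# Toy simulation: word embeddings as parameters of a quantum circuit,
# trained for binary classification. No real quantum device is involved.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"good": 0, "bad": 1, "film": 2, "plot": 3}
theta = rng.normal(size=len(vocab))          # one circuit parameter per word

def ry(t):
    """Single-qubit rotation about the Y axis."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def predict(sentence, params):
    """Probability of measuring |1> after applying each word's gate."""
    state = np.array([1.0, 0.0])             # start in |0>
    for word in sentence.split():
        state = ry(params[vocab[word]]) @ state
    return state[1] ** 2

data = [("good film", 1), ("bad plot", 0), ("good plot", 1), ("bad film", 0)]

def loss(params):
    return sum((predict(s, params) - y) ** 2 for s, y in data)

for _ in range(300):                          # finite-difference gradient step
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        bump = np.zeros_like(theta)
        bump[i] = 1e-4
        grad[i] = (loss(theta + bump) - loss(theta - bump)) / 2e-4
    theta -= 0.5 * grad

for s, y in data:
    print(s, "->", round(float(predict(s, theta)), 2), "(label", y, ")")
```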
**Disc spanning**
Disc spanning:
Disc spanning is a feature of CD and DVD burning software that automatically spreads a large amount of data across multiple data discs if the data set's size exceeds the storage capacity of an individual blank disc. The advantage is that the user does not need to split up files and directories into two or more (blank-disc-sized) pieces by hand. The software may or may not support slicing a single large file in order to span it, but all disc spanners can divide numerous files that are each smaller than one blank disc's capacity across many discs.
Disc spanning:
Disc spanning works well on CD media in many applications, but spanning on DVD media often fails. This lack of reliable DVD data disc spanning is odd, as disc spanning was used extensively on older 3.5" and 5.25" floppy discs. Most users assume every operating system can perform disc spanning on any media as a built-in function; this is incorrect.
Disc spanning:
Some disc spanning schemes include a small program to reassemble the data set into the same structure it had on the source machine. This program could be written to the first disc only, or to every disc in the set.
The use of disc spanning will in most cases make the files unreadable to the file system. You are therefore bound to use the same program later on to restore the data. Many users do not want to be tied to such solutions and use "simple disc spanning" instead.
Disc spanning:
Simple disc spanning is a solution that groups files onto media based on size. There is one drawback to this system: files that are bigger than the target media will not be burnt at all. It is simple but powerful; the basic calculation is "How many CDs/DVDs/BDs/HD DVDs does this bunch of files need?".
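A minimal sketch of such grouping, using a first-fit-decreasing heuristic (the capacity figure and all names are illustrative assumptions):

```python
# Simple disc spanning: whole files are grouped onto discs by size; files
# larger than a blank disc are reported rather than split.
CAP = 4482  # MiB, roughly a single-layer DVD; an illustrative figure

def span_discs(files, capacity_mib=CAP):
    """files: {name: size_mib}. Returns (list of disc groups, rejects)."""
    discs, rejects = [], []
    for name, size in sorted(files.items(), key=lambda kv: -kv[1]):
        if size > capacity_mib:
            rejects.append(name)         # too big: would need file slicing
            continue
        for disc in discs:               # first disc with enough room
            if disc["free"] >= size:
                disc["files"].append(name)
                disc["free"] -= size
                break
        else:
            discs.append({"files": [name], "free": capacity_mib - size})
    return discs, rejects

files = {"video.iso": 4100, "photos": 2100, "music": 1900, "huge.img": 9000}
discs, rejects = span_discs(files)
for i, d in enumerate(discs, 1):
    used = CAP - d["free"]
    print(f"Disc{i} -- ({used / CAP:.0%}) [{used}MiB] {d['files']}")
print("needs slicing:", rejects)         # -> ['huge.img']
```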
Disc spanning:
The simple grouping can be displayed like this:

Disc1 -- (99%) [4,479MiB]
Directory
|
+--- Dir1
+--- File1

Disc2 -- (98%) [4,468MiB]
Directory
|
+--- Dir2
+--- File2

Disc3 -- (45%) [2,130MiB]
|
+--- Dir3

etc...
**MSRA (gene)**
MSRA (gene):
Peptide methionine sulfoxide reductase (Msr) is a family of enzymes that in humans is encoded by the MSRA gene.
Function:
Msr is ubiquitous and highly conserved. Human and animal studies have shown the highest levels of expression in kidney and liver. It carries out the enzymatic reduction of methionine sulfoxide (MetO), the oxidized form of the amino acid methionine (Met), back to methionine, using thioredoxin to catalyze the reduction and thereby repair oxidized methionine residues. Its proposed function is thus the repair of oxidative damage to proteins to restore biological activity. Oxidation of methionine residues in tissue proteins can cause them to misfold or otherwise render them dysfunctional.
Clinical significance:
MetO increases with age in body tissues, which is believed by some to contribute to biological ageing. Moreover, levels of methionine sulfoxide reductase A (MsrA) decline in aging tissues in mice and in association with age-related disease in humans. There is thus a rationale for thinking that increased levels or activity of MsrA might retard the rate of aging.
Clinical significance:
Indeed, transgenic Drosophila (fruit flies) that overexpress methionine sulfoxide reductase show extended lifespan. However, the effects of MsrA overexpression in mice were ambiguous. MsrA is found in both the cytosol and the energy-producing mitochondria, where most of the body's endogenous free radicals are produced. Transgenically increasing the levels of MsrA in either the cytosol or the mitochondria had no significant effect on lifespan assessed by most standard statistical tests, and may possibly have led to early deaths in the cytosol-specific mice, although the survival curves appeared to suggest a slight increase in maximum (90%) survivorship, as did analysis using Boschloo's test, a binomial test designed to detect greater extreme variation. Deletion of this gene has been associated with insulin resistance in mice, while overexpression reduces insulin resistance in old mice.
**XDNA**
XDNA:
xDNA (also known as expanded DNA or benzo-homologated DNA) is a size-expanded nucleotide system synthesized from the fusion of a benzene ring and one of the four natural bases: adenine, guanine, cytosine, and thymine. This size expansion produces an 8-letter alphabet with a larger information density than the 4-letter alphabet of natural DNA (often referred to as B-DNA in the literature): a sequence of length n admits 8^n rather than 4^n possibilities, a factor of 2^n. As with normal base-pairing, A pairs with xT, C pairs with xG, G pairs with xC, and T pairs with xA. The double helix is thus 2.4 Å wider than a natural double helix. While similar in structure to B-DNA, xDNA has unique absorption, fluorescence, and stacking properties. Initially synthesized as an enzyme probe by Nelson J. Leonard's group, benzo-homologated adenine was the first base synthesized. Later, Eric T. Kool's group finished synthesizing the remaining three expanded bases, eventually followed by yDNA ("wide" DNA), another benzo-homologated nucleotide system, and naphtho-homologated xxDNA and yyDNA. xDNA is more stable than regular DNA when subjected to higher temperatures, and while entire strands of xDNA, yDNA, xxDNA and yyDNA exist, they are currently difficult to synthesize and maintain. Experiments with xDNA provide new insight into the behavior of natural B-DNA. The extended bases xA, xC, xG, and xT are naturally fluorescent, and single strands composed of only extended bases can recognize and bind to single strands of natural DNA, making them useful tools for studying biological systems. xDNA is most commonly formed with base pairs between a natural and an expanded nucleobase; however, x-nucleobases can also be paired together. Current research supports xDNA as a viable genetic encoding system in the near future.
Origins:
The first nucleotide to be expanded was the purine adenine. Nelson J. Leonard and colleagues synthesized this original x-nucleotide, which was referred to as "expanded adenine". xA was used as a probe in the investigation of active sites of ATP-dependent enzymes, more specifically what modifications the substrate could take while still being functional. Almost two decades later, the other three bases were successfully expanded and later integrated into a double helix by Eric T. Kool and colleagues. Their goal was to create a synthetic genetic system which mimics and surpasses the functions of the natural genetic system, and to broaden the applications of DNA both in living cells and in experimental biochemistry. Once the expanded base set was created, the goal shifted to identifying or developing faithful replication enzymes and further optimizing the expanded DNA alphabet.
Origins:
Synthesis:
In benzo-homologated purines (xA and xG), the benzene ring is bound to the nitrogenous base through nitrogen-carbon (N-C) bonds. Benzo-homologated pyrimidines are formed through carbon-carbon (C-C) bonds between the base and the benzene. Thus far, x-nucleobases have been added to strands of DNA using phosphoramidite derivatives, as traditional polymerases have been unsuccessful in synthesizing strands of xDNA. X-nucleotides are poor candidates as substrates for B-DNA polymerases as their size interferes with binding at the catalytic domain. Attempts at using template-independent enzymes have been successful as they have a reduced geometric constraint for substrates. Terminal deoxynucleotidyl transferase (TdT) has been used previously to synthesize strands of bases which have been bound to fluorophores. Using TdT, up to 30 monomers can be combined to form a double-helix of xDNA, however this oligomeric xDNA appears to inhibit its own extension beyond this length due to the overwhelming hydrogen bonding. In order to minimize inhibition, xDNA can be hybridized into a regular helix.
Replication:
For xDNA to be used as a substitute structure for information storage, it requires a reliable replication mechanism. Research into xDNA replication using a Klenow fragment from DNA polymerase I shows that a natural base partner is selectively added in instances of single-nucleotide insertion. However, DNA polymerase IV (Dpo4) has been able to successfully use xDNA for these types of insertions with high fidelity, making it a promising candidate for future research in extending replicates of xDNA. xDNA's mismatch sensitivity is similar to that of B-DNA.
Structure:
Similar to natural bases, x-nucleotides selectively assemble into a duplex structure resembling B-DNA. xDNA was originally synthesized by incorporating a benzene ring into the nitrogenous base. However, other expanded bases have been able to incorporate thiophene and benzo[b]thiophene as well. xDNA and yDNA use benzene rings to widen the bases and are thus termed "benzo-homologated". Other expanded nucleobases, known as xxDNA and yyDNA, incorporate naphthalene into the base and are "naphtho-homologated". xDNA has a rise of 3.2Å and a twist of 32°, significantly smaller than B-DNA, which has a rise of 3.3Å and a twist of 34.2°. xDNA nucleotides can occur on both strands—either alone (known as "doubly expanded DNA") or mixed with natural bases—or exclusively on one strand or the other. Similar to B-DNA, xDNA can recognize and bind complementary single-stranded DNA or RNA sequences. Duplexes formed from xDNA are similar to natural duplexes aside from the distance between the two sugar-phosphate backbones. xDNA helices have a greater number of base pairs per turn of the helix as a result of a reduced distance between neighbouring nucleotides. NMR spectra report that xDNA helices are anti-parallel and right-handed, and take an anti conformation around the glycosidic bond, with a C2'-endo sugar pucker. Helices created from xDNA are more likely to take a B-helix over an A-helix conformation, and have a major groove width increased by 6.5Å (where the backbones are farthest apart) and a minor groove width decreased by 5.5Å (where the backbones are closest together) compared to B-DNA. Altering groove width affects xDNA's ability to associate with DNA-binding proteins, but as long as the expanded nucleotides are exclusive to one strand, recognition sites are sufficiently similar to B-DNA to allow binding of transcription factors and small polyamide molecules. Mixed helices present the possibility of recognizing the four expanded bases using other DNA-binding molecules.
Properties:
Expanded nucleotides and their oligomeric helices share many properties with their natural B-DNA counterparts, including their pairing preference: A with T, C with G. The various differences in chemical properties between xDNA and B-DNA support the hypothesis that the benzene ring which expands x-nucleobases is not, in fact, chemically inert. xDNA is more hydrophobic than B-DNA, and also has a smaller HOMO-LUMO gap (distance between the highest occupied molecular orbital and lowest unoccupied molecular orbital) as a result of modified saturation. xDNA has higher melting temperatures than B-DNA (a mixed decamer of xA and T has a melting temperature of 55.6 °C, 34.3 °C higher than the same decamer of A and T), and exhibits an "all-or-nothing" melting behaviour.
Conformation:
Under lab conditions, xDNA orients itself in the syn conformation. This does not orient the binding face of the xDNA nucleotides toward the neighbouring strand, meaning that extra measures must be taken to alter the conformation of xDNA before attempting to form helices. However, the anti and syn orientations are practically identical energetically in expanded bases. This conformational preference is seen primarily in pyrimidines; purines display minimal preference for orientation.
Enhanced stacking:
Stacking of the nucleotides in a double helix is a major determinant of the helix's stability. With the added surface area and hydrogen available for bonding, stacking potential for the nucleobases increases with the addition of a benzene spacer. By increasing the separation between the nitrogenous bases and either sugar-phosphate backbone, the helix's stacking energy is less variable and therefore more stable. The energies for natural nucleobase pairs vary from 18 to 52 kJ/mol; this variance is only 14–40 kJ/mol for xDNA. Due to an increased overlap between an expanded strand of DNA and its neighbouring strand, there are greater interstrand interactions in expanded and mixed helices, resulting in a significant increase in the helix's stability. xDNA has enhanced stacking abilities resulting from changes in inter- and intrastrand hydrogen bonding that arise from the addition of a benzene spacer, but expanding the bases does not alter hydrogen's contribution to the stability of the duplex. These stacking abilities are exploited by helices consisting of both xDNA and B-DNA in order to optimize the strength of the helix. Increased stacking is seen most prominently in strands consisting only of A, xA, T and xT, as T-xA has stronger stacking interactions than T-A. The energy resulting from pyrimidines ranges from 30 to 49 kJ/mol; the range for purines is 40–58 kJ/mol. By replacing one nucleotide in a double helix with an expanded nucleotide, the strength of the stacking interactions increases by 50%. Expanding both nucleotides results in a 90% increase in stacking strength. While xG has an overall negative effect on the binding strength of the helix, the other three expanded bases outweigh this with their positive effects. The change in energy caused by expanding the bases is mostly dependent on the rotation of the bond about the nucleobases' centers of mass, and center-of-mass stacking interactions improve the stacking potential of the helix. Because the size-expanded bases widen the helix, it is more thermally stable, with a higher melting temperature.
Absorption:
The addition of a benzene spacer in x-nucleobases affects the bases' optical absorption spectra. Time-dependent density functional theory (TDDFT) applied to xDNA revealed that the benzene component of the highest occupied molecular orbitals (HOMO) in the x-bases pins the absorption onset at an earlier point than in natural bases. Another unusual feature of xDNA absorption spectra is the red-shifted excimers of xA in the low range. In terms of stacking fingerprints, there is a more pronounced hypochromicity seen in consecutive xA-T base pairs.
Absorption:
Implications of xDNA's altered absorption include applications in nanoelectronic technology and nanobiotechnology. The reduced spacing between x-nucleotides makes the helix stiffer, thus it is not as easily affected by substrate, electrode, and functional nanoparticle forces. Other alterations to natural nucleotides resulting in different absorption spectra will broaden these applications in the future.
Fluorescence:
One unique property of xDNA is its inherent fluorescence. Natural bases can be bound directly to fluorophores for use in microarrays, in situ hybridization, and polymorphism analysis. However, these fluorescent natural bases often fail as a result of self-quenching, which diminishes their fluorescent intensity and reduces their applicability as visual DNA tags. The pi interactions between the rings in x-nucleobases result in an inherent fluorescence in the violet-blue range, with a Stokes shift between 50 and 80 nm. They also have a quantum yield in the range of 0.3–0.6. xC has the greatest fluorescent emission.
Other expanded bases:
After the creation of and successful research surrounding xDNA, more forms of expanded nucleotides were investigated. yDNA is a second, similar system of nucleotides which uses a benzene ring to expand the four natural bases. xxDNA and yyDNA use naphthalene, a polycyclic molecule consisting of two hydrocarbon rings. The two rings expand the base even wider, further altering its chemical properties.
yDNA:
The success and implications of xDNA prompted research to examine other factors which could alter B-DNA's chemical properties and create a new system for information storage with broader applications. yDNA also uses a benzene ring, similar to xDNA, with the only difference being the site of addition of the aromatic ring. The location of the benzene ring changes the preferred structure of the expanded helix. The altered conformation makes yDNA more similar to B-DNA in its orientation by changing the interstrand hydrogen bonds. Stability is highly dependent on the bases' rotation about the link between the base and the sugar of the backbone. yDNA's altered preference for this orientation makes it more stable overall than xDNA. The location of the benzene spacer also affects the bases' groove geometry, altering neighbour interactions. The base pairs between y-nucleotides and natural nucleotides are planar, rather than slightly twisted as with xDNA. This decreases the rise of the helix even further than achieved by xDNA. While xDNA and yDNA are quite similar in most properties, including their increased stacking interactions, yDNA shows superior mismatch recognition. y-pyrimidines display slightly stronger stacking interactions than x-pyrimidines as a result of the distance between the two anomeric carbons, which is slightly larger in yDNA. xDNA still has stronger stacking interactions in model helices, but adding either x- or y-pyrimidines to a natural double helix strengthens the intra- and interstrand interactions, increasing overall helix stability. In the end, which of the two has the strongest overall stacking interactions depends on the sequence; xT and yT bind A with similar strength, but the stacking energy of yC bound to G is stronger than that of xC by 4 kJ/mol. yDNA and other expanded bases are part of a very young field which remains understudied. Research suggests that the ideal conformation is still to be discovered, but knowing that the benzene location affects the orientation and structure of expanded nucleobases informs their future design.
yyDNA and xxDNA:
Doubly-expanded (or naphtho-homologated) nucleobases incorporate a naphthalene spacer instead of a benzene ring, widening the base twice as much with its two-ringed structure. These structures (known as xxDNA and yyDNA) are 4.8Å wider than natural bases and were once again created as a result of Leonard's research on expanded adenine in ATP-dependent enzymes in 1984. No literature was published on these doubly-expanded bases for nearly three decades, until 2013, when the first xxG was produced by Sharma, Lait, and Wetmore and incorporated along with xxA into a natural helix. Although very little research has been performed on xxDNA, xx-purine neighbours have already been shown to increase intrastrand stacking energy by up to 119% (as opposed to 62% in x-purines). xx-purine and pyrimidine interactions show an overall decrease in stacking energies, but the overall stability of duplexes including pyrimidines and xx-purines increases by 22%, more than twice that of pyrimidines and x-purines.
Uses:
xDNA has many applications in chemical and biological research, including expanding upon applications of natural DNA, such as scaffolding. In order to create self-assembling nanostructures, a scaffold is needed as a sort of trellis to support the growth. DNA has been used as a means to this end in the past, but expanded bases make larger scaffolds for more complex self-assembly an option. xDNA's electrical conduction properties also make it a prime candidate as a molecular wire, as its π-π interactions help it efficiently conduct electricity. Its 8-letter alphabet (A, T, C, G, xA, xT, xC, xG) gives it the potential for a 2^n-fold increase in storage density, where n represents the number of letters in a sequence. For example, a sequence of 6 nucleotides built with B-DNA yields 4096 possible sequences, whereas the same number of nucleotides built with xDNA yields 262,144 possible sequences. Additionally, xDNA can be used as a fluorescent probe at enzyme active sites, as was its original application by Leonard et al. xDNA has also been applied to the study of protein-DNA interactions. Due to xDNA's natural fluorescing properties, it can easily be visualized in both lab and living conditions. xDNA is becoming easier to create and oligomerize, and its high-affinity binding to complementary DNA and RNA sequences means that it can help locate these sequences in the cell, both free-floating and already interacting with other structures within the cell. xDNA also has potential applications in assays that employ TdT, as it may improve reporters, and can be used as an affinity tag for interstrand bonding.
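The storage-density arithmetic above can be checked directly. The following minimal Python sketch is illustrative only and is not taken from the research described:

```python
# Number of distinct sequences of length n over a 4-letter (B-DNA)
# alphabet versus an 8-letter (xDNA) alphabet.
n = 6
b_dna_sequences = 4 ** n  # 4096
x_dna_sequences = 8 ** n  # 262144
ratio = x_dna_sequences // b_dna_sequences
print(b_dna_sequences, x_dna_sequences, ratio)  # 4096 262144 64 (= 2**6)
```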
**Flight watch**
Flight watch:
Flight Watch is the common name in the United States for an En route Flight Advisory Service (EFAS) dedicated to providing weather to and collecting reports from pilots while in flight.
Flight watch:
While U.S. Flight Service Stations (FSS) operated Flight Watch, Flight Watch did not provide the full range of FSS services such as filing flight plans, acquiring preflight weather briefings, providing NOTAMs, or picking up IFR clearances; instead, it was limited to en route weather updates and the collection of pilot weather reports (PIREPs). The service was available on a single common frequency, 122.0 MHz, to flights operating below Flight Level 180 (18,000 feet MSL) across the conterminous United States. Discrete frequencies were available for high altitude aircraft (at and above 18,000 feet MSL, or FL180), based on location. Flight Watch could be unavailable below 5,000 feet AGL, depending on terrain and the distance from the nearest station.
Flight watch:
On October 1, 2015, Flight Watch services were consolidated with existing Flight Service Station (FSS) services and the services were terminated on the 122.0 frequency. The frequency was monitored for a "few months" to steer pilots to the correct frequency.
**Recursive acronym**
Recursive acronym:
A recursive acronym is an acronym that refers to itself, and appears most frequently in computer programming. The term was first used in print in 1979 in Douglas Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid, in which Hofstadter invents the acronym GOD, meaning "GOD Over Djinn", to help explain infinite series, and describes it as a recursive acronym. Other references followed; however, the concept was used as early as 1968 in John Brunner's science fiction novel Stand on Zanzibar. In the story, the acronym EPT (Education for Particular Task) later morphed into "Eptification for Particular Task".
Recursive acronym:
Recursive acronyms are typically formed backward: either an existing ordinary acronym is given a new explanation of what the letters stand for, or a name is turned into an acronym by giving the letters an explanation of what they stand for, in each case with the first letter standing recursively for the whole acronym.
Use in computing:
In computing, an early tradition in the hacker community, especially at MIT, was to choose acronyms and abbreviations that referred humorously to themselves or to other abbreviations. Perhaps the earliest example in this context is the backronym "Mash Until No Good", which was created in 1960 to describe Mung, and revised to "Mung Until No Good". It lived on as a recursive command in the editing language TECO.[3] In 1977 programmer Ted Anderson coined TINT ("TINT Is Not TECO"), an editor for MagicSix. This inspired the two MIT Lisp Machine editors called EINE ("EINE Is Not Emacs", German for one) and ZWEI ("ZWEI Was EINE Initially", German for two), in turn inspiring Anderson's retort SINE ("SINE is not EINE"). Richard Stallman followed with GNU ("GNU's Not Unix"). Recursive acronym examples often include negatives, such as denials that the thing defined is or resembles something else (which the thing defined does in fact resemble or is even derived from), to indicate that, despite the similarities, it is distinct from the program on which it is based. An earlier example appears in a 1976 textbook on data structures, in which the pseudo-language SPARKS is used to define the algorithms discussed in the text. "SPARKS" is claimed to be a non-acronymic name, but "several cute ideas have been suggested" as expansions of the name. One of the suggestions is "Smart Programmers Are Required to Know SPARKS". (This example is tail-recursive.) Other examples are the YAML language, which stands for "YAML Ain't Markup Language", and the PHP language, meaning "PHP: Hypertext Preprocessor".
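The self-reference these names rely on is easy to demonstrate in code. The following short Python sketch is purely illustrative and is not taken from any of the projects mentioned:

```python
def expand(acronym: str, expansions: dict[str, str], depth: int) -> str:
    """Recursively expand a self-referential acronym `depth` levels deep."""
    if depth == 0:
        return acronym
    # Substitute one level of expansion for the acronym's occurrence of itself.
    return expansions[acronym].replace(
        acronym, expand(acronym, expansions, depth - 1), 1)

print(expand("GNU", {"GNU": "GNU's Not Unix"}, 3))
# GNU's Not Unix's Not Unix's Not Unix
```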
Other examples:
In media:
The initials for the Commodore CDTV stand for "Commodore Commodore Dynamic Total Vision".
TTP: a technology project in the Dilbert comic strip. The initials stand for "The TTP Project".
GRUNGE: defined by Homer Simpson in The Simpsons episode That '90s Show as "Guitar Rock Utilizing Nihilist Grunge Energy", another uncommon example of a recursive acronym whose recursive letter is neither the first nor the last letter.
BOB: the primary antagonist from the series Twin Peaks. His name itself is an acronym standing for "Beware of BOB".
KOS-MOS: a character from the Xenosaga series of video games. "KOS-MOS" is a recursive acronym meaning "Kosmos Obey Strategical Multiple Operating Systems".
Hiroshi Yoshimura's "A・I・R" stands for "AIR IN RESORT".
Special:
The GNU Hurd project is named with a mutually recursive acronym: "Hurd" stands for "Hird of Unix-Replacing Daemons", and "Hird" stands for "Hurd of Interfaces Representing Depth". RPM, PHP, XBMC and YAML were originally conventional acronyms which were later redefined recursively. They are examples of, or may be referred to as, backronymization, where the official meaning of an acronym is changed.
Other examples:
Jini claims the distinction of being the first recursive anti-acronym: 'Jini Is Not Initials'. It might, however, be more properly termed an anti-backronym because the term "Jini" never stood for anything in the first place. The more recent "XNA", on the other hand, was deliberately designed that way.
Other examples:
Most recursive acronyms are recursive on the first letter, which is therefore an arbitrary choice, often selected for reasons of humour, ease of pronunciation, or consistency with an earlier acronym that used the same letters for different words, such as PHP, which now stands for "PHP: Hypertext Preprocessor" but originally stood for "Personal Home Page". However, YOPY ("Your Own Personal YOPY") is recursive on the last letter.
Other examples:
A joke implying that the middle initial "B." in the name of Benoit B. Mandelbrot stands for "Benoit B. Mandelbrot" plays on the idea that fractals, which Mandelbrot studied, repeat themselves at smaller and smaller scales when examined closely.
Other:
According to Hayyim Vital, a 16th–17th century kabbalist, the Hebrew word adam (אדם, meaning "man") is an acronym for adam, dibbur, maaseh (man, speech, deed).
According to Isaac Luria, a 16th century kabbalist, the Hebrew word tzitzit (ציצת in its Biblical spelling, meaning "ritual fringes") is an acronym for tzaddik yafrid tzitziyotav tamid ("a righteous person should separate [the strings of] his tzitzit constantly").
**Osteoma**
Osteoma:
An osteoma (plural osteomas or less commonly osteomata) is a new piece of bone usually growing on another piece of bone, typically the skull. It is a benign tumor.
Osteoma:
When the bone tumor grows on other bone it is known as "homoplastic osteoma"; when it grows on other tissue it is called "heteroplastic osteoma". Osteoma represents the most common benign neoplasm of the nose and paranasal sinuses. The cause of osteomas is uncertain, but commonly accepted theories propose embryologic, traumatic, or infectious causes. Osteomas are also found in Gardner's syndrome. Larger craniofacial osteomas may cause facial pain, headache, and infection due to obstructed nasofrontal ducts. Often, craniofacial osteoma presents itself through ocular signs and symptoms (such as proptosis).
Variants:
Osteoma cutis (also known as "Albright's hereditary osteodystrophy")
Osteoid osteoma
Fibro-osteoma
Chondro-osteoma
**Synectics**
Synectics:
Synectics is a problem solving methodology that stimulates thought processes of which the subject may be unaware. This method was developed by George M. Prince (April 5, 1918 – June 9, 2009) and William J.J. Gordon, originating in the Arthur D. Little Invention Design Unit in the 1950s.
History:
The process was derived from tape-recording (initially audio, later video) meetings, analysis of the results, and experiments with alternative ways of dealing with the obstacles to success in the meeting. "Success" was defined as getting a creative solution that the group was committed to implement.
History:
The name Synectics comes from Greek and means "the joining together of different and apparently irrelevant elements". Gordon and Prince named both their practice and their new company Synectics, which can cause confusion, as people outside the company are trained in and use the practice. While the name was trademarked, it has become a standard word for describing creative problem solving in groups.
Theory:
Synectics is a way to approach creativity and problem-solving in a rational way. "Traditionally, the creative process has been considered after the fact... The Synectics study has attempted to research creative process in vivo, while it is going on." According to Gordon, Synectics research has three main assumptions: the creative process can be described and taught; invention processes in arts and sciences are analogous and are driven by the same "psychic" processes; individual and group creativity are analogous. With these assumptions in mind, Synectics believes that people can be better at being creative if they understand how creativity works.
Theory:
One important element in creativity is embracing the seemingly irrelevant. Emotion is emphasized over intellect and the irrational over the rational. Through understanding the emotional and irrational elements of a problem or idea, a group can be more successful at solving a problem. Prince emphasized the importance of creative behaviour in reducing inhibitions and releasing the inherent creativity of everyone. He and his colleagues developed specific practices and meeting structures which help people to ensure that their constructive intentions are experienced positively by one another. The use of the creative behaviour tools extends the application of Synectics to many situations beyond invention sessions (particularly constructive resolution of conflict).
Theory:
Gordon emphasized the importance of "'metaphorical process' to make the familiar strange and the strange familiar". He expressed his central principle as: "Trust things that are alien, and alienate things that are trusted." This encourages, on the one hand, fundamental problem-analysis and, on the other hand, the alienation of the original problem through the creation of analogies. It is thus possible for new and surprising solutions to emerge.
Theory:
As an invention tool, Synectics invented a technique called "springboarding" for getting creative beginning ideas. For the development of beginning ideas, the method incorporates brainstorming and deepens and widens it with metaphor; it also adds an important evaluation process for Idea Development, which takes embryonic new ideas that are attractive but not yet feasible and builds them into new courses of action which have the commitment of the people who will implement them.
Theory:
Synectics is more demanding of the subject than brainstorming, as the steps involved imply that the process is more complicated and requires more time and effort. The success of the Synectics methodology depends highly on the skill of a trained facilitator.
Books:
The Practice of Creativity: A Manual for Dynamic Group Problem-Solving by George M. Prince, Echo Point Books & Media, Vermont, 2012, ISBN 0-9638-7848-4
The Practice of Creativity by George Prince, 1970
Synectics: The Development of Creative Capacity by W. J. J. Gordon, Collier-MacMillan, London, 1961
Design Synectics: Stimulating Creativity in Design by Nicholas Roukes, Davis Publications, 1988
The Innovators Handbook by Vincent Nolan, 1989
Creativity Inc.: Building an Inventive Organization by Jeff Mauzy and Richard Harriman, 2003
Imagine That! by Vincent Nolan and Connie Williams, Publishers Graphics, LLC, 2010
**Unergative verb**
Unergative verb:
An unergative verb is an intransitive verb that is characterized semantically by having a subject argument which is an agent that actively initiates the action expressed by the verb.
Unergative verb:
For example, in English, talk and resign in the sentence "You talk and you resign" are unergative verbs, since they are intransitive (one does not say "you talk someone") and "you" is the initiator of, and responsible for, the talking and resigning. But fall and die in the sentence "They fall and die" are unaccusative verbs, since the subjects are usually not responsible for the falling or dying, yet the verbs are still intransitive, used without a direct object (they cannot "fall something" or "die someone"). Some languages treat unergative verbs differently from other intransitives in morphosyntactic terms. For example, in some Romance languages, such verbs use different auxiliaries when in compound tenses.
Unergative verb:
Besides the above, unergative verbs differ from unaccusative verbs in that in some languages, they can occasionally use the passive voice.
In Dutch, for example, unergatives take hebben (to have) in the perfect tenses: Ik telefoneer – ik heb getelefoneerd.
"I call (by phone). – I have called."In such cases, a transition to an impersonal passive construction is possible by using the adverb er, which functions as a dummy subject and the passive auxiliary worden: Er wordt door Jan getelefoneerd.
Unergative verb:
literally, "*There is by Jan telephoned." (meaning "A telephone call by Jan is going on.")By contrast, Dutch ergative verbs take zijn ("to be") in the perfect tenses: Het vet stolt – het vet is gestold "The grease solidifies – The grease has solidified."In that case, no passive construction with worden is possible. In other words, unergatives are truly intransitive, but ergatives are not. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Apache Ant**
Apache Ant:
Apache Ant is a software tool for automating software build processes which originated from the Apache Tomcat project in early 2000 as a replacement for the Make build tool of Unix. It is similar to Make, but is implemented in the Java language and requires the Java platform. Unlike Make, which uses the Makefile format, Ant uses XML to describe the code build process and its dependencies. Released under an Apache License by the Apache Software Foundation, Ant is an open-source project.
History:
Ant ("Another Neat Tool") was conceived by James Duncan Davidson while preparing Sun Microsystems's reference JSP and Servlet engine, later Apache Tomcat, for release as open-source. A proprietary version of Make was used to build it on the Solaris platform, but in the open-source world, there was no way of controlling which platform was used to build Tomcat; so Ant was created as a simple platform-independent tool to build Tomcat from directives in an XML "build file". Ant (version 1.1) was officially released as a stand-alone product on July 19, 2000.
History:
Several proposals for an Ant version 2 have been made, such as AntEater by James Duncan Davidson, Myrmidon by Peter Donald and Mutant by Conor MacNeill, none of which were able to find wide acceptance with the developer community. At one time (2002), Ant was the build tool used by most Java development projects. For example, most open source Java developers included build.xml files with their distribution. Because Ant made it trivial to integrate JUnit tests with the build process, Ant allowed developers to adopt test-driven development and extreme programming.
History:
In 2004 Apache created a new tool with a similar purpose called Maven. Gradle, a similar tool created in 2008, uses Groovy (and a few other languages) code instead of XML.
Extensions:
WOProject-Ant is just one of many examples of a task extension written for Ant. These extensions are installed by copying their .jar files into Ant's lib directory. Once this is done, these task extensions can be invoked directly in the typical build.xml file. The WOProject extensions allow WebObjects developers to use Ant in building their frameworks and apps, instead of using Apple's Xcode suite.
Extensions:
Antcontrib provides a collection of tasks such as conditional statements and operations on properties, as well as other useful tasks. Ant-contrib.unkrig.de implements tasks and types for networking, Swing user interfaces, JSON processing and more.
Other task extensions exist for Perforce, .NET Framework, EJB, and filesystem manipulations.
Example:
Below is a sample build.xml file for a simple Java "Hello, world" application. It defines four targets - clean, clobber, compile and jar - each of which has an associated description. The jar target lists the compile target as a dependency. This tells Ant that before it can start the jar target it must first complete the compile target.
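The listing itself does not survive in this copy of the text, so the following is a minimal reconstruction consistent with the description: the four target names, their descriptions, the jar-on-compile dependency, and the mkdir/javac/jar tasks come from the text, while the project name, source directory, JAR file name, and clobber's dependency on clean are illustrative assumptions.

```xml
<project name="HelloWorld" default="jar">

    <!-- Delete the compiled classes. -->
    <target name="clean" description="Remove the classes directory">
        <delete dir="classes"/>
    </target>

    <!-- Delete everything the build produced (assumed to include clean). -->
    <target name="clobber" depends="clean" description="Remove all build artifacts">
        <delete file="HelloWorld.jar"/>
    </target>

    <!-- Create the classes directory (a no-op if it exists) and compile. -->
    <target name="compile" description="Compile the Java source code">
        <mkdir dir="classes"/>
        <javac srcdir="src" destdir="classes" includeantruntime="false"/>
    </target>

    <!-- jar depends on compile, so Ant runs compile first. -->
    <target name="jar" depends="compile" description="Create the JAR file">
        <jar destfile="HelloWorld.jar" basedir="classes"/>
    </target>

</project>
```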
Example:
Within each target are the actions that Ant must take to build that target; these are performed using built-in tasks. For example, to build the compile target Ant must first create a directory called classes (which Ant will do only if it does not already exist) and then invoke the Java compiler. Therefore, the tasks used are mkdir and javac. These perform a similar task to the command-line utilities of the same name.
Example:
Another task used in this example is named jar: This Ant task has the same name as the common Java command-line utility, JAR, but is really a call to the Ant program's built-in JAR/ZIP file support. This detail is not relevant to most end users, who just get the JAR they wanted, with the files they asked for.
Example:
Many Ant tasks delegate their work to external programs, either native or Java. They use Ant's own <exec> and <java> tasks to set up the command lines, and handle all the details of mapping from information in the build file to the program's arguments and interpreting the return value. Users can see which tasks do this (e.g. <cvs>, <signjar>, <chmod>, <rpm>) by trying to execute the task on a system without the underlying program on the path, or without a full Java Development Kit (JDK) installed.
Portability:
Ant is intended to work with all systems for which Java runtimes are available. It is most commonly used with Windows, Linux, macOS and other Unix operating systems, but has also been used on other platforms such as OS/2, OpenVMS, Solaris and HP-UX. Ant was designed to be more portable than Make. Compared to Make, Ant uses fewer platform-specific shell commands. Ant provides built-in functionality that is designed to behave the same on all platforms. For example, in the sample build.xml file above, the clean target deletes the classes directory and everything in it. In a Makefile this would typically be done with the command:

rm -rf classes/

rm is a Unix-specific command unavailable in some other environments. Microsoft Windows, for example, would use:

rmdir /S /Q classes

In an Ant build file the same goal would be accomplished using a built-in task:

<delete dir="classes"/>

Additionally, Ant does not differentiate between forward slash and backslash for directories, or semicolon and colon for path separators. It converts each to the symbol appropriate to the platform on which it executes.
Limitations:
Ant build files, which are written in XML, can be complex and verbose, as they are hierarchical, partly ordered, and pervasively cross-linked. This complexity can be a barrier to learning. The build files of large or complex projects can become unmanageably large. Good design and modularization of build files can improve readability but not necessarily reduce size.
Many of the older tasks, such as <javac>, <exec> and <java>, use default values for options that are not consistent with more recent versions of the tasks. Changing those defaults would break existing Ant scripts.
When expanding properties in a string or text element, undefined properties are not raised as an error, but left as an unexpanded reference (e.g. ${unassigned.property}).
Ant has limited fault handling rules.
Limitations:
Lazy property evaluation is not supported. For instance, when working within an Antcontrib <for> loop, a property cannot be re-evaluated for a sub-value which may be part of the iteration. (Some third-party extensions facilitate a workaround; AntXtras flow-control tasksets do provide for cursor redefinition for loops.) In makefiles, any rule to create one file type from another can be written inline within the makefile. For example, one may transform a document into some other format by using rules to execute another tool. Creating a similar task in Ant is more complex: a separate task must be written in Java and included with the Ant build file in order to handle the same type of functionality. However, this separation can enhance the readability of the Ant script by hiding some of the details of how a task is executed on different platforms. There exist third-party Ant extensions (called antlibs) that provide much of the missing functionality. Also, the Eclipse integrated development environment (IDE) can build and execute Ant scripts, while the NetBeans IDE uses Ant for its internal build system. As both these IDEs are very popular development platforms, they can simplify Ant use significantly. (As a bonus, Ant scripts generated by NetBeans can be used outside that IDE as standalone scripts.)
**Continuous operation**
Continuous operation:
In telecommunication, continuous operation is an operation in which certain components, such as nodes, facilities, circuits, or equipment, are in an operational state at all times. Continuous operation usually requires a fully redundant configuration, or at least a sufficient X-out-of-Y degree of redundancy for compatible equipment, where X is the number of spare components and Y is the number of operational components. This article incorporates public domain material from Federal Standard 1037C (General Services Administration), in support of MIL-STD-188.
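The "X out of Y" arithmetic lends itself to a quick availability estimate. The sketch below is a hypothetical model assuming independent, identical components; it is not part of the standard:

```python
from math import comb

def system_availability(p: float, spares: int, needed: int) -> float:
    """Probability that at least `needed` of `needed + spares` identical,
    independent components are up, given per-component availability p."""
    total = needed + spares
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(needed, total + 1))

# Example: 99%-available components, 2 spares backing 4 operational units.
print(f"{system_availability(0.99, spares=2, needed=4):.6f}")  # ~0.999980
```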
**Park and ride bus services in the United Kingdom**
Park and ride bus services in the United Kingdom:
Park and ride bus services in the United Kingdom are bus services designed to provide intermodal passenger journeys between a private mode of transport and a shared mode bus. The common model of bus-based park and ride is transfer from a private car to a public transport bus, although schemes may also be used by pedestrians and cyclists. "Park and ride" commonly refers to permanent schemes operated as part of the public transport system, for onward transport from a permanent car park to an urban centre. ‘Park and ride bus’ can also be used to describe temporary and seasonal schemes, services operated for private or specialised users, and services that do not necessarily serve an urban centre. Bus services can be permanent, seasonal, or only operate on specific days of the week, or for specific events.
Park and ride bus services in the United Kingdom:
Permanent public transport based park and ride sites are predominantly constructed, administered and financially supported by one or more of the local public authorities, although partial private funding also occurs, usually in partnership. Since bus deregulation in 1986, the actual bus service for particular schemes is currently operated by one or more private bus operators, or stand-alone companies, with the contract to operate the bus service being put out to commercial tender. An exception is Northern Ireland, where the state concern Translink promotes and operates all public transport park and ride schemes.
Park and ride bus services in the United Kingdom:
Schemes are often specially marketed with a specific brand, separately from other standard local bus services, and not necessarily under the name "park and ride". Public transport schemes mostly operate at a net loss, with the budgetary cost justified by the reduction in traffic congestion and reduced need for central parking spaces. Generally, the car parking is free, with revenue for the scheme being achieved through fares or travel passes taken by the bus operator. Initially heavily lobbied for by environmentalists, park and ride schemes have increasingly had their net environmental benefits questioned in studies examining their effect on overall vehicle mileage and passenger travelling behaviour.
Park and ride bus services in the United Kingdom:
Implementation of public transport park and ride bus services in the UK accelerated through the 1980s and 1990s, although some schemes have failed or been scaled back due to lack of use. Permanent schemes range in size from an allocated area with provision of less than 10 cars, to multiple dedicated sites catering in total for nearly 5,000 cars. Schemes predominantly serve a single town or smaller city, while rail based mode, where it exists, is the predominant implementation for the larger metropolitan areas. Larger regional bus schemes exist, such as at Ferrytoll in Fife, Scotland and in Northern Ireland.
History:
Permanent bus based park and ride schemes are most often found in the UK in historical towns and cities where the narrow streets mean traffic congestion hits hardest and streets cannot easily be widened. An example is Oxford, which operated the first scheme in the UK, initially with an experimental service operating part-time from a motel on the A34 in the 1960s, and then on a full-time basis from 1973. Large scale adoption in other towns then continued from the 1980s with increased car ownership. As of 2005 there were 92 park and ride sites across 40 locations in England.
Implementation:
Permanent park and ride services are predominantly intended for use by car-driving commuters and their passengers, with shoppers being the next largest user group, although they are also often targeted at day-trippers and tourists visiting by car. As well as car drivers, park and ride bus services may also be used by pedestrians and cyclists. Several schemes offer bicycle lockers to allow use of the bus by cyclists. For foot passengers, although the journey may be quicker than regular bus services, the fares may also be higher.
Permanent schemes:
For ease of access by car, a common arrangement for a permanent park and ride is a site or sites located on the outskirts or outer suburbs of a town or city, with the aim of providing a short onward trip by bus into the centre. Sites are usually located near to the major approach routes to the centre, usually near to motorway junctions or beside the main arterial routes. Some sites, such as the village of Ellon, Aberdeenshire, are located some distance from the central destination, but on a main arterial approach route. Larger regional sites exist, with longer journey times, such as Ferrytoll in Fife, Scotland. In larger cities, space permitting, sites may also be located at transport hubs or interchange stations further inside the urban area.
Permanent schemes:
As well as stand-alone sites, permanent daily public transport park and ride car parks may also be operated adjacent to or as part of the car park of another facility, such as Basingstoke (a Leisure Park), Doncaster (a cinema) and Derby (a retail and leisure park). Some sites utilise football stadium car parks, as they are not usually in use on working daytimes, as happens in Brighton, Reading, Dorchester and Derby; and also horse racecourses, as in Leicester and Cheltenham, although these services may not be available on match/race days.
Permanent schemes:
Most schemes do not allow for overnight parking and cater for daytime and early evening usage. Users who miss the last bus may often find their cars locked in, requiring them to call an emergency number. Some schemes have addressed the issue of the "last bus" by using other services, such as in High Wycombe, where tickets are valid on standard bus services passing the car park after the dedicated service has stopped running, although the site gates have to be left open. Other schemes are open late on designated late shopping nights (often a Thursday).
Permanent schemes:
Many sites are operational five or more days a week. Some schemes are often supplemented using additional sites with car parks normally used for other purposes during the week that only operate as park and ride on a Saturday or Sunday. These sites include the local County Hall or Town Hall car parks, or University car parks. Examples include Southport and Leicester.
Permanent schemes:
As of 2008, permanent bus based park and ride schemes are generally implemented in small to medium-sized towns and cities, with larger conurbations such as London, Birmingham and Manchester operating rail-based schemes. Edinburgh was the largest city with a comprehensive bus-based park and ride scheme, until it was replaced in part by the Edinburgh Trams network in 2014. In Manchester, the local transport authority, Transport for Greater Manchester, does not believe that park and ride systems achieve the main aim of reducing car-based mileage, stating that when implemented, most passengers are drawn from people who would otherwise have used existing bus services, or cycled or walked, with only 1 in 5 spaces on average being filled by people who did not previously use public transport.
Site facilities:
Purpose-built park and ride sites generally consist of a car park and adjacent bus-boarding facility within walking distance. Large sites may feature covered multi-storey parking, and covered waiting areas or passenger facilities akin to a small bus station. Bigger car parks may feature more than one bus stop to limit the distance users have to walk. Smaller sites may feature just a bus stop and cabin for an attendant. For sites used as bus termini, the site may also feature bus stands.
Permanent schemes:
The location of park and ride sites is usually prominently signposted to assist car drivers, and sometimes electronic signs give current information about parking availability.
Many sites feature a controlled perimeter, entry/exit barriers, CCTV and supervision by an attendant.
Planning issues:
The location of sites is often restricted due to planning issues and land availability. The location of potential sites may conflict with the desires of other development aims. Some local authorities introduce new schemes as part of wider developments by stipulating as part of the planning permission approval that a private developer may only proceed if they include a suitable scheme in their development proposals.
Permanent schemes:
The advent of Local Transport Plans, enacted in England by the Transport Act 2000, has allowed park and ride usage to be a material consideration in planning matters.
Planning issues:
In 2005 the Campaign to Protect Rural England called for a review of park and ride development, expressing concern that too many sites were being built on green belt land. In 2008, a scheme proposed by North Yorkshire County Council for a site for Whitby on green land within the North York Moors National Park was rejected by the park planning committee on grounds of its proposed location within the park, despite claims by the borough council that no suitable alternatives existed and that traffic was an increasing issue in the town which threatened the prospects of economic growth. In 2008 in Truro, Cornwall, a scheme was launched aiming to be complementary to and integrated with the rural environment, marketed as a "park for cars" rather than a car park, with features such as natural building materials, solar power and waste water management.

Bus operators and vehicles:
To comply with UK competition legislation, contracts for the bus operation aspect of schemes supported by local authority finance must be put out to commercial tender, although minimum quality conditions are often stipulated as part of the contract. Some bus operating contracts are awarded on the basis of a formal Quality Contract between authority and operator. For reasons of practicality and logistics, the winning bus operator is usually an operator already based in the local area.
Permanent schemes:
Dedicated park and ride bus services are usually provided using public transport buses. Depending on passenger numbers, services may be provided with a combination of midibuses, single-decker buses and double-decker buses. In some schemes, such as in Bristol (Bath Road), articulated buses are used. As of 2008, the Optare Solo is a common type of midibus found on smaller schemes.
Permanent schemes:
As with standard public transport bus services in the UK, as of 2008 the responsibility for operation of the buses for park and ride schemes is dominated by the major transport groups, either in part or full, with FirstGroup involved with 12 locations, Arriva and Stagecoach Group involved in 7, Go-Ahead Group in 6. Despite this, operation by small groups or independent operators forms a significant aspect of UK park and ride operations, such as Johnsons Excelbus (Stratford upon Avon) and Bennets Coaches (Cheltenham). Some municipal bus companies operate their town's service, such as Edinburgh (Lothian Buses), Nottingham (Nottingham City Transport), Swindon (Thamesdown Transport), Reading (Reading Buses), although the November 2008 transfer of the Ipswich operation from Ipswich Buses to First Eastern Counties demonstrated that council owned bus companies are not necessarily given favourable status in the awarding of council park and ride contracts.
Permanent schemes:
For large schemes, park and ride bus fleets used are usually of a higher and/or different specification to the predominant public transport bus fleet. Fleets are often purchased new in whole or in part for the award of a new contract, meaning low floor buses are increasingly common.
Permanent schemes:
In the Plymouth scheme, the buses are of an extremely high specification compared to regular public transport buses, being equipped with high-backed leather seating, television screens, and individual radio/CD sockets for each seat. While dedicated park and ride fleets generally contain some of the newest vehicles in a bus company's fleet, in some areas this does not occur; for example, the award-winning operator in Derby, Trent Barton, defers operation of the park and ride to its Wellglade group sister company Notts & Derby, using vehicles older than those of the Trent Barton main fleet.
Success and failure:
While most schemes are hailed as a success, and see additions to sites/spaces or increases in vehicle size over time, some encounter lower than anticipated passenger numbers and need to be withdrawn or modified. In Gloucester, a two-route scheme existing in 2003 was scaled back and rationalised into one through service after passenger numbers fell, putting one service in doubt. In February 2008 a scheme in Kidderminster was closed down despite objections, as it was costing £1,000 a week to operate, although it was later said the scheme had been introduced as a temporary measure while building works occurred and had been allowed to continue permanently. In 2008 Sefton Council considered rerouting the bus service of Southport's third scheme after initial passenger projections were not met, although comparison was made to the slow start but eventual success of the first site. In January 2009 the service was re-routed to attract users from the town's main hospital and college. In November 2009, Sefton Council announced the third site would be 'mothballed' for an indefinite period of time due to poor usage. In 2007, the long-standing Maidstone scheme was reduced from four sites to three due to a reduction in usage of a centrally located site causing a budget shortfall for the council. A service operated by Go North East from the MetroCentre shopping centre coach park non-stop to Newcastle upon Tyne, which operated on a model of pre-booked parking, was abandoned after a year in September 2008 due to lack of use, replaced by a conventional service with more intermediate stops. Both Park & Ride sites in Worcester were closed in September 2014 as part of wider local authority budget cuts at the time. The Perdiswell site opened in 2001 and at its peak in 2008, 450,000 people used the site; however, by 2013–14 usage had fallen to 274,000. The service was operated by Worcestershire County Council. The service in Maidstone closed in February 2022 after the operator Arriva Southern Counties said it was uneconomical to operate, as it was only carrying 500 passengers per day when 1,100 were needed to break even.
Bus service:
Onward bus services from a park and ride car park are provided with dedicated bus routes, the regular local bus service, or a combination of both. In busy or frequent schemes, the central bus stops may be sited separately from those used by other regular bus services. Sites already well served by or located on the existing bus network may feature no dedicated service at all. These services may employ no specific branding, but reference to 'park and ride' may exist on rollsigns and timetables. Park and ride sites may also be used as stops on longer distance coach services, although they are generally not available for use by private coach operators.
Permanent schemes:
Dedicated routes are often operated point to point, running from the site to centre, and back, using the site as a bus terminus. Occasionally, through routes will run from one park and ride site to another, through the town or city centre, or to another suitable terminus such as a leisure centre. Routes may also call at multiple park and ride sites before commencing the onward journey to the destination.
Permanent schemes:
Dedicated bus services can be operated as express bus services, running non-stop and calling at only the car park and the central area. These services may also operate as limited-stop express services, stopping also at any important intermediate locations such as hospitals, railway stations, transport interchanges, out of town shopping centres, suburban retail parks and other places that are likely to see a high number of prospective passengers. For example, Park and Ride in Truro stops at Truro College (which also includes Treliske Retail Park), the Royal Cornwall Hospital, Royal Cornwall Museum and Victoria Square in Truro's high street. In areas where there is less overlap between regular bus services, park and ride designated services may stop at every stop like a regular service, as in York, where the park and ride services are also part of the local bus network. Some express or dedicated services may extend beyond the parking site as regular services to outlying areas.
Funding and fares:
The majority of permanent park and ride schemes are supported by funding from the local authority, whether as investment in the construction of the sites, support for the operation of the bus services, or both. Park and ride schemes rarely become financially self-sufficient even for just the operating costs; however, most authorities cite the fact that profit is not the ultimate aim of schemes, rather the environmental benefits are what is being paid for. Some schemes where investment in a car park is not required can be funded fully commercially; the Stagecoach West Cheltenham Racecourse service is an example of a fully privately funded permanent service.
Permanent schemes:
Authority involvement can be singular, or joint between a borough and county council, such as in Bedford, or even a cooperation between multiple public institutions as part of a wider regional transport initiative. Public budget provision for schemes is often combined with infrastructure and vehicle investment toward high quality bus priority schemes, such as the A638 Quality Bus Corridor for Doncaster, or guided busway and bus rapid transit scheme investment.
Permanent schemes:
Private companies often contribute additional funding where the company ultimately benefits from the scheme, either through increased custom or a reduction in employee parking needs. In High Wycombe, the scheme is part funded by a shopping centre and a local development partner, as well as the local council, as it is integrated into the newly developed Cressex business park. A portion of the ongoing funding of the operating costs of a park and ride scheme comes from the collection of fares from the users, although some schemes operate completely free to the user. While most fare-paying schemes are operated on a free-parking, pay-on-the-bus basis, some schemes charge for the parking instead, which offers a financial incentive for carpooling, since cars with more passengers pay less per person. Examples of car-based payment schemes are Norwich and Canterbury. In the Chester scheme, a 2008 proposal to move from a bus-based to a car park-based charging system was dropped due to public opposition.
Specialist schemes:
As well as serving as general public transport to central areas, specialist permanent park and ride services also exist, catering for a more specific user travel need. These exist both as supplementary routes from permanent public sites, or operate from private car parks. These services may be still available to the general public, or be restricted to a specific user/customer, and may be publicly or privately funded or both.
Specialist schemes:
Private user schemes marketed as park and ride include airport buses and other shuttle bus links where they transport passengers from a car park to a destination such as a hotel or conference centre.
NHS Trust supported routes link public park and ride sites to hospitals, such as in Nottingham (Medlink), Reading and Cheltenham, for the benefit of passengers and staff. In Bath, a demand responsive transport service has been combined with a park and ride to hospital shuttle.
Another specialist service is the transport of football spectators to football matches from public park and ride sites, such as in Southampton. These services may only be available to those with a match ticket. A football service is privately funded by Sunderland A.F.C. on match days from the Stadium of Light to Sunderland Enterprise Park.
Temporary schemes:
Some sites only become operational during a specific season, or for specific events, and as such may receive public or private financial support, or in the case of high usage, be self-funding. These ad-hoc services may not feature a dedicated bus fleet, but rather are provided by drawing on buses from other duties. Temporary or seasonal services often use free buses, such as Weymouth, which uses existing public pay and display car parks.
Temporary schemes:
Seasonal services occur in the summer to cater for tourists, such as Weymouth, or in the Christmas period to cater for shoppers, such as Peterborough and Kingston. Examples of services for specific events are Southampton (for the Boat Show) and Whitby (for the Whitby Regatta).
Marketing and liveries:
Some high-profile public authority backed schemes employ a common "park and ride" brand identity for their park and ride scheme, and project this brand commonly across a website, printed material, and even extending to the colour of the bus in an all-over livery. In a small number of cases, the branding concept does not use the "park and ride" moniker as the primary identity, opting for a different name, such as Centre Shuttle (Basingstoke), Quicksilver Shuttle (Leicester), Taunton Flyer and Park for Truro.
Marketing and liveries:
Smaller schemes may not necessarily employ specific marketing or dedicated all-over liveries where the passenger revenue does not justify this, such as in Stoke and Scarborough, although the term "park and ride" is a near-universally accepted term that is still applied to these smaller schemes on timetables and/or non-overall livery route branding. This also occurs in busier schemes where other high-profile branding of local bus services exist, or the park and ride bus service is of the type that only consists of just another regular stop on the local services, rather than a dedicated shuttle type service, such as in Leeds and Nottingham.
Marketing and liveries:
Schemes will often be promoted in terms of being high quality, with bus drivers undergoing customer service training, and schemes attaining the government Charter Mark for excellence in public service quality, or the Park Mark award for implementing enhanced car park security features. Maidstone was the first scheme to obtain a Charter Mark.
Marketing and liveries:
All-over liveries are employed in single or multiple site schemes. Liveries often emphasise the green credentials of the scheme, such as Plymouth's cloud livery, or by using green as a base colour (Oxford, Winchester). Other schemes use a bold overall colour scheme to reinforce the brand with publicity material, such as Chelmsford (jet black), Maidstone (yellow), Canterbury (silver base with green piping and decals), York (all-over silver with large red city crests), Basingstoke (purple) and Ipswich (pink), coordinated with the colours used in the website and publications. In some multiple site schemes, the all-over livery aspect is often extended to a distinct livery for each route or group of routes, such as the multi-colour coded schemes of Swansea, Norwich and Cambridge.
Marketing and liveries:
At peak times, standard liveried buses from the operator's main fleet may also supplement the service, or as replacement cover in the event of a dedicated vehicle's failure.
Locations:
Ferrytoll: As well as the Edinburgh council schemes, a regional scheme exists in Scotland in the form of a large site in south Fife, designated the Ferrytoll park and ride. It does not have a dedicated service; rather, a large number of regional services are coordinated through the site, serving as a park and ride service south to Edinburgh and as an important intermediate stop for inter-urban and long-distance services north into Fife and Dundee. The southern park and ride role aims to relieve the congested Forth Road Bridge crossing.
Locations:
Northern Ireland: In Northern Ireland, bus-based park and ride is organised on a regional basis by Translink. Belfast and other towns and cities are served by various sites, with bus services operated by Ulsterbus as part of their local bus services and their Goldlink-branded interurban services. Unlike in the rest of the UK, sites in Northern Ireland range from as few as 10 spaces up to around 300, with (as of 2008) 20 sites across Northern Ireland.
Locations:
Wales: Cardiff, the capital, currently operates four sites to the north, south, east and west of the city. A scheme in Swansea, in the south-west of the country, currently operates from three sites. The Mid Wales town of Aberystwyth also has its own scheme. (Site counts exclude sites proposed or under construction and include only sites in use five days or more a week, not seasonal, Saturday-only or event-only operations.)
**Conductive hearing loss**
Conductive hearing loss:
Conductive hearing loss (CHL) occurs when there is a problem transferring sound waves anywhere along the pathway through the outer ear, tympanic membrane (eardrum), or middle ear (ossicles). If a conductive hearing loss occurs in conjunction with a sensorineural hearing loss, it is referred to as a mixed hearing loss. Depending upon the severity and nature of the conductive loss, this type of hearing impairment can often be treated with surgical intervention or pharmaceuticals to partially or, in some cases, fully restore hearing acuity to within normal range. However, cases of permanent or chronic conductive hearing loss may require other treatment modalities such as hearing aid devices to improve detection of sound and speech perception.
Causes:
Common causes of conductive hearing loss include the following. External ear: cerumen (earwax) or a foreign body in the external auditory canal; otitis externa, an infection or irritation of the outer ear; exostoses, abnormal growths of bone within the ear canal; tumor of the ear canal; and congenital stenosis or atresia of the external auditory canal (a narrow or blocked ear canal).
Ear canal stenosis and atresia can exist independently or may result from congenital malformations of the auricle such as microtia or anotia.
Causes:
Acquired stenosis (narrowing) of the external auditory canal can also follow surgery or radiotherapy. Middle ear: fluid accumulation is the most common cause of conductive hearing loss in the middle ear, especially in children. Major causes are ear infections or conditions that block the eustachian tube, such as allergies or tumors. Blocking of the eustachian tube leads to decreased pressure in the middle ear relative to the external ear, and this causes decreased motion of both the ossicles and the tympanic membrane.
Causes:
Other middle ear causes include acute or serous otitis media; chronic suppurative otitis media (CSOM); perforated eardrum; tympanosclerosis, or scarring of the eardrum; cholesteatoma; eustachian tube dysfunction, with inflammation or a mass within the nasal cavity, middle ear, or eustachian tube itself; otosclerosis, an abnormal growth of bone in or near the middle ear; middle ear tumour; ossicular discontinuity as a consequence of infection or temporal bone trauma; and congenital malformation of the ossicles. The latter can be an isolated phenomenon or can occur as part of a syndrome in which abnormal development of the 1st and 2nd branchial arches is seen, such as Goldenhar syndrome, Treacher Collins syndrome, or branchio-oto-renal syndrome.
Causes:
Barotrauma refers to unequal air pressures in the external and middle ear. This can occur temporarily, for example, with environmental pressure changes such as shifting altitude or travelling in a train going into a tunnel. It is managed by any of various ear clearing manoeuvres to equalize the pressures, like swallowing, yawning, or the Valsalva manoeuvre. More severe barotrauma can lead to middle ear fluid or even permanent sensorineural hearing loss.
Causes:
Inner ear: a third window effect caused by superior canal dehiscence (which may require surgical correction), an enlarged vestibular aqueduct, or a labyrinthine fistula.
Presentation:
Conductive hearing loss makes all sounds seem faint or muffled. The hearing loss is usually worse in lower frequencies.
Presentation:
Congenital conductive hearing loss is identified through newborn hearing screening or may be identified because the baby has microtia or other facial abnormalities. Conductive hearing loss developing during childhood is usually due to otitis media with effusion and may present with speech and language delay or difficulty hearing. Later onset of conductive hearing loss may have an obvious cause such as an ear infection, trauma or upper respiratory tract infection or may have an insidious onset related to chronic middle ear disease, otosclerosis or a tumour of the naso-pharynx. Earwax is a very common cause of a conductive hearing loss which may present suddenly when the wax blocks sound from getting through the external ear canal to the middle and inner ear.
Diagnosis:
Diagnosis requires a detailed history, local examination of the ear, nose, throat and neck, and detailed hearing tests. In children a more detailed examination may be required if the hearing loss is congenital.
Otoscopy: examination of the external ear canal and ear drum is important and may help identify problems located in the outer ear up to the tympanic membrane.
Diagnosis:
Differential testing: for basic screening, a conductive hearing loss can be identified using the Rinne test with a 256 Hz tuning fork. In the Rinne test the patient is asked to say whether a vibrating tuning fork is heard more loudly adjacent to the ear canal (air conduction) or touching the bone behind the ear (bone conduction). A negative result, indicating that bone conduction is more effective than air conduction, points to a conductive loss; a normal, or positive, result is when air conduction is more effective than bone conduction.
Diagnosis:
With a one-sided conductive component the combined use of both the Weber and Rinne tests is useful. If the Weber test is used, in which a vibrating tuning fork is touched to the midline of the forehead, the person will hear the sound more loudly in the affected ear because background noise does not mask the hearing on this side.
Diagnosis:
Tympanometry: tympanometry, or acoustic immittance testing, is a simple objective test of the ability of the middle ear to transmit sound waves from the outer ear to the middle ear and to the inner ear. This test is usually abnormal with conductive hearing loss. A type B tympanogram reveals a flat response, due to fluid in the middle ear (otitis media) or an eardrum perforation. A type C tympanogram indicates negative middle ear pressure, which is commonly seen in eustachian tube dysfunction. A type As tympanogram indicates shallow compliance of the middle ear, which is commonly seen in otosclerosis.
Diagnosis:
Audiometry: pure tone audiometry, a standardized hearing test over a set of frequencies from 250 Hz to 8000 Hz, may be conducted by a medical doctor, audiologist or audiometrist, with the result plotted separately for each ear on an audiogram. The shape of the plot reveals the degree and nature of hearing loss, distinguishing conductive hearing loss from other kinds of hearing loss. A conductive hearing loss is characterized by a difference of at least 15 decibels between the air conduction threshold and the bone conduction threshold at the same frequency. On an audiogram, the "x" represents responses in the left ear at each frequency, while the "o" represents responses in the right ear at each frequency.
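The air-bone gap criterion above lends itself to a direct check. The following is a minimal illustrative sketch in Python, not a clinical tool; the threshold values and the per-frequency 15 dB cut-off are assumptions made only for the example.

```python
def flag_conductive(air_thresholds, bone_thresholds, gap_db=15):
    """Return the frequencies (Hz) whose air-bone gap is at least gap_db."""
    return [freq for freq, air in air_thresholds.items()
            if air - bone_thresholds[freq] >= gap_db]

# Hypothetical audiogram for one ear: thresholds in dB HL at each frequency.
air = {250: 45, 500: 40, 1000: 35, 2000: 25, 4000: 20, 8000: 20}
bone = {250: 10, 500: 10, 1000: 10, 2000: 10, 4000: 15, 8000: 15}

print(flag_conductive(air, bone))  # [250, 500, 1000, 2000]
```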
Diagnosis:
CT scan: most causes of conductive hearing loss can be identified by examination, but if it is important to image the bones of the middle ear or inner ear then a CT scan is required. CT scanning is useful in cases of congenital conductive hearing loss, chronic suppurative otitis media or cholesteatoma, ossicular damage or discontinuity, otosclerosis and third window dehiscence. Specific MRI scans can be used to identify cholesteatoma.
Management:
Management falls into three modalities: surgical treatment, pharmaceutical treatment, and supportive care, depending on the nature and location of the specific cause. In cases of infection, antibiotics or antifungal medications are an option. Some conditions are amenable to surgical intervention, such as middle ear fluid, cholesteatoma, and otosclerosis. If conductive hearing loss is due to head trauma, surgical repair is an option. If absence or deformation of ear structures cannot be corrected, or if the patient declines surgery, hearing aids which amplify sounds are a possible treatment option. Bone conduction hearing aids are useful as these deliver sound directly, through bone, to the cochlea (the organ of hearing), bypassing the pathology. These can be worn on a soft or hard headband or can be inserted surgically as a bone-anchored hearing aid, of which there are several types. Conventional air conduction hearing aids can also be used.
**Circle graph**
Circle graph:
In graph theory, a circle graph is the intersection graph of a chord diagram. That is, it is an undirected graph whose vertices can be associated with a finite system of chords of a circle such that two vertices are adjacent if and only if the corresponding chords cross each other.
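Concretely, two chords of a circle cross exactly when their endpoints alternate around the circle, so a chord diagram can be turned into its circle graph by a pairwise test. Below is a minimal sketch, assuming each chord is given as a pair of distinct endpoint positions (indices) around the circle.

```python
from itertools import combinations

def chords_cross(c1, c2):
    """Chords cross iff exactly one endpoint of c2 lies between the endpoints of c1."""
    a, b = sorted(c1)
    return (a < c2[0] < b) != (a < c2[1] < b)

def circle_graph(chords):
    """Adjacency sets of the circle graph of a chord diagram."""
    adj = {i: set() for i in range(len(chords))}
    for i, j in combinations(range(len(chords)), 2):
        if chords_cross(chords[i], chords[j]):
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Endpoint positions 0..5 on the circle: chord 1 crosses both others,
# while chords 0 and 2 do not cross each other.
print(circle_graph([(0, 2), (1, 4), (3, 5)]))  # {0: {1}, 1: {0, 2}, 2: {1}}
```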
Algorithmic complexity:
Spinrad (1994) gives an O(n²)-time algorithm that tests whether a given n-vertex undirected graph is a circle graph and, if it is, constructs a set of chords that represents it.
Algorithmic complexity:
A number of other problems that are NP-complete on general graphs have polynomial time algorithms when restricted to circle graphs. For instance, Kloks (1996) showed that the treewidth of a circle graph can be determined, and an optimal tree decomposition constructed, in O(n³) time. Additionally, a minimum fill-in (that is, a chordal graph with as few edges as possible that contains the given circle graph as a subgraph) may be found in O(n³) time. Tiskin (2010) has shown that a maximum clique of a circle graph can be found in O(n log² n) time, while Nash & Gregg (2010) have shown that a maximum independent set of an unweighted circle graph can be found in O(n min{d, α}) time, where d is a parameter of the graph known as its density, and α is the independence number of the circle graph.
Algorithmic complexity:
However, there are also problems that remain NP-complete when restricted to circle graphs. These include the minimum dominating set, minimum connected dominating set, and minimum total dominating set problems.
Chromatic number:
The chromatic number of a circle graph is the minimum number of colors that can be used to color its chords so that no two crossing chords have the same color. Since it is possible to form circle graphs in which arbitrarily large sets of chords all cross each other, the chromatic number of a circle graph may be arbitrarily large, and determining the chromatic number of a circle graph is NP-complete. It remains NP-complete to test whether a circle graph can be colored by four colors. Unger (1992) claimed that finding a coloring with three colors may be done in polynomial time, but his writeup of this result omits many details. Several authors have investigated problems of coloring restricted subclasses of circle graphs with few colors. In particular, for circle graphs in which no sets of k or more chords all cross each other, it is possible to color the graph with as few as 7k² colors. One way of stating this is that the circle graphs are χ-bounded. In the particular case when k = 3 (that is, for triangle-free circle graphs) the chromatic number is at most five, and this is tight: all triangle-free circle graphs may be colored with five colors, and there exist triangle-free circle graphs that require five colors. If a circle graph has girth at least five (that is, it is triangle-free and has no four-vertex cycles) it can be colored with at most three colors. The problem of coloring triangle-free squaregraphs is equivalent to the problem of representing squaregraphs as isometric subgraphs of Cartesian products of trees; in this correspondence, the number of colors in the coloring corresponds to the number of trees in the product representation.
Applications:
Circle graphs arise in VLSI physical design as an abstract representation for a special case of wire routing, known as "two-terminal switchbox routing". In this case the routing area is a rectangle, all nets are two-terminal, and the terminals are placed on the perimeter of the rectangle. It is easily seen that the intersection graph of these nets is a circle graph. Among the goals of the wire routing step is to ensure that different nets stay electrically disconnected, and their potential intersecting parts must be laid out in different conducting layers. Therefore, circle graphs capture various aspects of this routing problem.
Applications:
Colorings of circle graphs may also be used to find book embeddings of arbitrary graphs: if the vertices of a given graph G are arranged on a circle, with the edges of G forming chords of the circle, then the intersection graph of these chords is a circle graph and colorings of this circle graph are equivalent to book embeddings that respect the given circular layout. In this equivalence, the number of colors in the coloring corresponds to the number of pages in the book embedding.
Related graph classes:
A graph is a circle graph if and only if it is the overlap graph of a set of intervals on a line. This is a graph in which the vertices correspond to the intervals, and two vertices are connected by an edge if the two intervals overlap, with neither containing the other.
The intersection graph of a set of intervals on a line is called the interval graph.
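A small sketch may help contrast the two edge rules; it assumes intervals are given as (left, right) pairs with distinct endpoints.

```python
def overlap_edge(p, q):
    """Overlap-graph rule: the intervals intersect but neither contains the other."""
    (a, b), (c, d) = sorted([p, q])
    return a < c < b < d

def intersection_edge(p, q):
    """Interval-graph rule: the intervals share at least one point."""
    (a, b), (c, d) = p, q
    return a <= d and c <= b

print(overlap_edge((0, 3), (1, 5)), intersection_edge((0, 3), (1, 5)))  # True True
print(overlap_edge((1, 5), (0, 6)), intersection_edge((1, 5), (0, 6)))  # False True (containment)
```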
String graphs, the intersection graphs of curves in the plane, include circle graphs as a special case.
Every distance-hereditary graph is a circle graph, as is every permutation graph and every indifference graph. Every outerplanar graph is also a circle graph. The circle graphs are generalized by the polygon-circle graphs, intersection graphs of polygons all inscribed in the same circle.
**Armstrong's acid**
Armstrong's acid:
Armstrong's acid (naphthalene-1,5-disulfonic acid) is a fluorescent organic compound with the formula C10H6(SO3H)2. It is one of several isomers of naphthalenedisulfonic acid. It is a colorless solid, typically obtained as the tetrahydrate. Like other sulfonic acids, it is a strong acid. It is named for British chemist Henry Edward Armstrong.
Production and use:
It is prepared by disulfonation of naphthalene with oleum: C10H8 + 2 SO3 → C10H6(SO3H)2. Further sulfonation gives the 1,3,5-trisulfonic acid derivative.
Reactions and uses:
Fusion of Armstrong's acid in NaOH gives the disodium salt of 1,5-dihydroxynaphthalene, which can be acidified to give the diol. The intermediate in this hydrolysis, 1-hydroxynaphthalene-5-sulfonic acid, is also useful. Nitration gives nitrodisulfonic acids, which are precursors to amino derivatives.
Reactions and uses:
The disodium salt is sometimes used as a divalent counterion for forming salts of basic drug compounds, as an alternative to the related mesylate or tosylate salts. When used in this way such a salt is called a naphthalenedisulfonate salt, as seen with the most common salt form of the stimulant drug CFT. The disodium salt is also used as an electrolyte in certain kinds of chromatography.
**Scorpion (processor)**
Scorpion (processor):
Scorpion is a central processing unit (CPU) core designed by Qualcomm for use in their Snapdragon mobile systems on chips (SoCs). It was released in 2008. It was designed in-house, but has many architectural similarities with the ARM Cortex-A8 and Cortex-A9 CPU cores.
Overview:
10/12-stage integer pipeline with 2-way decode and 3-way out-of-order speculative-issue superscalar execution
Pipelined VFPv3 and 128-bit wide NEON (SIMD)
3 execution ports
32 KB + 32 KB L1 cache
256 KB (single-core) or 512 KB (dual-core) L2 cache
Single or dual-core configuration
2.1 DMIPS/MHz
65/45/28 nm process
**PCGamerBike**
PCGamerBike:
The PCGamerBike is an exercise bike that can interact with computer games. It uses magnets to produce resistance, which makes the bike relatively quiet in operation, and it comes with software that automatically logs calories burned, distance and speed to a daily graph.
Types:
There are two versions of the PCGamerBike; the PCGamerBike Mini and the PCGamerBike Recumbent. The PCGamerBike Mini is a compact exercise bike, and the PCGamerBike Recumbent is a full-sized recumbent exercise bike.
Use:
The PCGamerBike is configurable and as a result can interact with a broad range of PC games. It is typically used to control a character in a game, or a character's vehicle such as a car, bike or boat, by pedaling forward or backward to move the character in those directions. Side-to-side controls require the use of a keyboard or mouse, which can be used alongside the bike. When used with driving and racing games, character speed is proportional to pedal speed. The PCGamerBike Mini can be used with any game that supports a keyboard, as it is connected via a USB port as a game controller. The resistance of the pedals on the PCGamerBike Recumbent can be adjusted to the player's preference and will also vary depending on certain in-game situations, for example when the character is going uphill or downhill.
Awards:
The PCGamerBike received the 2007 International CES Innovations Design and Engineering Award.
**Tangential angle**
Tangential angle:
In geometry, the tangential angle of a curve in the Cartesian plane, at a specific point, is the angle between the tangent line to the curve at the given point and the x-axis. (Some authors define the angle as the deviation from the direction of the curve at some fixed starting point. This is equivalent to the definition given here by the addition of a constant to the angle or by rotating the curve.)
Equations:
If a curve is given parametrically by (x(t), y(t)), then the tangential angle φ at t is defined (up to a multiple of 2π) by (x′(t), y′(t)) = |x′(t), y′(t)| (cos φ, sin φ).
Equations:
Here, the prime symbol denotes the derivative with respect to t. Thus, the tangential angle specifies the direction of the velocity vector (x′(t), y′(t)), while the speed specifies its magnitude. The vector (x′(t), y′(t)) / |x′(t), y′(t)| is called the unit tangent vector, so an equivalent definition is that the tangential angle at t is the angle φ such that (cos φ, sin φ) is the unit tangent vector at t.
Equations:
If the curve is parametrized by arc length s, so that |x′(s), y′(s)| = 1, then the definition simplifies to (x′(s), y′(s)) = (cos φ, sin φ).
In this case, the curvature κ is given by φ′(s), where κ is taken to be positive if the curve bends to the left and negative if the curve bends to the right.
Equations:
Conversely, the tangential angle at a given point equals the definite integral of curvature up to that point: φ(s) = ∫₀^s κ(s) ds + φ₀ when the curve is parametrized by arc length, or φ(t) = ∫₀^t κ(t) s′(t) dt + φ₀ for a general parameter t. If the curve is given by the graph of a function y = f(x), then we may take (x, f(x)) as the parametrization, and we may assume φ is between −π/2 and π/2. This produces the explicit expression φ = arctan f′(x).
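These relations are easy to verify numerically; using atan2 rather than arctan covers the full range of directions. Below is a minimal sketch for a hypothetical example, a circle of radius 2, where the curvature is 1/R and the tangential angle therefore grows linearly with arc length.

```python
import math

R = 2.0  # parametric circle: x(t) = R*cos(t), y(t) = R*sin(t)

def phi(t):
    """Tangential angle: direction of the velocity vector (x'(t), y'(t))."""
    dx, dy = -R * math.sin(t), R * math.cos(t)
    return math.atan2(dy, dx)

# Curvature of the circle is 1/R, so between parameters t0 and t1 the
# tangential angle should increase by (arc length)/R = t1 - t0.
t0, t1 = 0.3, 0.8
print(phi(t1) - phi(t0), t1 - t0)  # both 0.5 (up to rounding)
```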
Polar tangential angle:
In polar coordinates, the polar tangential angle is defined as the angle between the tangent line to the curve at the given point and the ray from the origin to the point. If ψ denotes the polar tangential angle, then ψ = φ − θ, where φ is as above and θ is, as usual, the polar angle.
Polar tangential angle:
If the curve is defined in polar coordinates by r = f(θ), then the polar tangential angle ψ at θ is defined (up to a multiple of 2π) by (f′(θ), f(θ)) = |f′(θ), f(θ)| (cos ψ, sin ψ). If the curve is parametrized by arc length s as r = r(s), θ = θ(s), so that |r′(s), rθ′(s)| = 1, then the definition becomes (r′(s), rθ′(s)) = (cos ψ, sin ψ). The logarithmic spiral can be defined as a curve whose polar tangential angle is constant.
**Rui wa Tomo o Yobu**
Rui wa Tomo o Yobu:
Rui wa Tomo o Yobu (るいは智を呼ぶ, lit. Rui Calls Tomo) is a Japanese adult visual novel developed by Akatsuki Works and first released for Windows as a DVD on June 26, 2008 as a limited edition; the regular edition followed on July 31, 2008. The game is described by the development team as a "new beautiful girl entertainment ADV" (新美少女エンタメADV, shin bishōjo entame ADV). The gameplay in Rui wa Tomo o Yobu follows a plot line which offers pre-determined scenarios with courses of interaction, and focuses on the appeal of the five female main characters. The title Rui wa Tomo o Yobu is also a Japanese proverb equivalent to the English proverb "birds of a feather flock together" when written with the kanji and kana 類は友を呼ぶ.
Gameplay:
Rui wa Tomo o Yobu's gameplay requires little interaction from the player as most of the duration of the game is spent simply reading the text that appears on the screen which represents either dialogue between the various characters or the inner thoughts of the protagonist. Every so often, the player will come to a point where he or she is given the chance to choose from multiple options. The time between these points is variable and can occur anywhere from a minute to much longer. Gameplay pauses at these points and depending on which choice the player makes, the plot will progress in a specific direction. There are five main plot lines that the player will have the chance to experience, one for each of the heroines in the story. To view all five plot lines, the player will have to replay the game multiple times and make different decisions to progress the plot in an alternate direction. One of the goals of the gameplay is for the player to enable the viewing of hentai scenes depicting the protagonist, Tomo, and one of the five heroines having sexual intercourse.
Plot:
Rui wa Tomo o Yobu revolves around the feminine Tomo Wakutsu, who was brought up by his mother as a girl due to a small mark he has on his body. After his mother's death, he discovers via her will that she insists Tomo continue to live as a female, and following this, Tomo starts to go through more troubles in his life. Tomo soon discovers that he is linked with five girls who are around his age as a second-year high school student. These girls happen to have the same mark he has, and have also been going through hardships in their lives. Tomo and these five girls decide to form a pact to stay together and support each other to solve each of their problems and to bring peace to their lives. Though Tomo initially hides the fact that he is male from the others, the five girls eventually discover his secret.
Development:
Rui wa Tomo o Yobu is Akatsuki Works' second game in less than one year. The project is notable for having very few people credited with taking part in the creation of the game, and none of them had previously worked on Akatsuki Works' first title Boku ga Sadame-kun ni wa Tsubasa o.. The scenario was divided between two writers, Wataru Hino and Jō Shūdō. Art direction and character design were done by Hokuto Saeki, who was also one of five main artists for BaseSon's 2007 visual novel Koihime Musō.
Development:
Release history: Before Rui wa Tomo o Yobu's initial release, a free game demo became available for download at the visual novel's official website. The demo introduces the player to the main characters through gameplay typical of a visual novel, including points at which the player is given several choices in order to further the plot in a specific direction. The full game was first released on June 26, 2008 as a limited edition playable as a DVD on a Microsoft Windows PC; the regular edition followed on July 31, 2008. The limited edition contained an art collection from the game which includes storyboards and rough illustrations.
Music:
The visual novel has two main theme songs, one opening theme and one ending theme. The opening theme, "Kizuna" (絆, lit. "Bonds"), is sung, written, and composed by Marika of the Japanese musical group Angel Note. The ending theme, "Takaramono" (宝物, lit. "Treasure"), is sung and written by Riryka, and composed by Shunsuke Shiina, both of whom are also from Angel Note. A vocal CD maxi single containing both theme songs was released as a promotional gift to those who pre-ordered the limited edition version of the visual novel, and was released with that version on June 26, 2008.
Reception:
The limited edition version of Rui wa Tomo o Yobu ranked tenth in terms of national sales of PC games in Japan in June 2008.
**Motorsport industry**
Motorsport industry:
The motorsport industry is the range of engineering and service businesses that support the sporting discipline of motorsports.
In motorsports, a competitor's success is intimately linked with the performance of his or her equipment - in this case a vehicle. The role of engineering in delivering on-track success has led to the formation of a considerable global industry which supplies motorsport competitors with the equipment necessary to participate in the sport.
The industry:
The motorsport industry designs, develops and manufactures prototypes including chassis, materials, electronics, engines, transmissions, brakes, telemetry and suspension components. The industry relies upon the skills of competitive engineers who, season after season, incrementally improve components to deliver identifiable advantage and ongoing success on the race track. Competitive Engineering is the centerpiece of these internationally-trading small businesses. Motorsport businesses have developed a unique ability to use sporting endeavour and entertainment as a catalyst for engineering and manufacturing advances - advances subsequently of real value to other High Performance Engineering (HPE) customer groups – Defence, Marine, Aerospace and Automotive.
The Motorsport Industry Association (MIA):
The Motorsport Industry Association (MIA) is the world's leading trade association for the motorsport, performance engineering, services and tuning sectors. The MIA represents the specialised needs of this highly successful global industry as it undergoes continuing rapid development throughout the world.
The Motorsport Industry Association (MIA):
In April 1994, leading personalities in British motorsport joined forces to form their own trade association - the MIA - with the aim of promoting one of the UK's most successful industries - motorsport. The original concept was proposed by founder and original CEO Brian Sims, with the first Executive Committee comprising Rob Baldock (Accenture); Dick Scammel (Cosworth); Tony Schulp (Haymarket); John Kirkpatrick (Jim Russell Racing Drivers School); Tony Panaro (Euro Northern Travel) and Tony Fletcher (Premier Fuels). The MIA represents its members from motorsport, high performance engineering and tuning companies; race and rally teams; governing bodies; motorsport services; research organisations; race circuits; universities and colleges - amongst many others. The MIA enjoys membership of the Confederation of British Industry (CBI), in turn providing members access to the UK's "Voice of Industry".
**Transport puzzle**
Transport puzzle:
Transport puzzles are logistical puzzles, which often represent real-life transportation problems. The classic transport puzzle is the river crossing puzzle, in which three objects are transported across a river one at a time while avoiding leaving certain pairs of objects together. The term should not be confused with the usage of transport puzzle as a shortened form of transportation puzzle, representing children's puzzles with different transportation vehicles used as puzzle pieces.
Description:
A transport problem is one in which objects are moved from a starting position to a destination position following the logical rules of the puzzle. Transport puzzles do not necessarily involve any physical movement of objects, although they often do. Rather, they are those puzzles that consist of finding a path through the state space of the puzzle to reach the goal state. State changes can include rotations and distortions of the object being transported as well as its translation in space. As in rearrangement puzzles, no piece is ever lost or added to the board. In contrast to rearrangement puzzles, however, transport puzzles have all persons and objects follow certain routes given on the board; they cannot be lifted off the board and placed on faraway positions that have no visible connection to the starting position. Hence transport puzzles often mean that the player has to move (physical) objects in a very restricted space. The player may or may not be part of the game (either directly, or as a player character on the board).
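Because solving such a puzzle amounts to finding a path through its state space, a breadth-first search yields a shortest solution. The following is a minimal sketch for the classic wolf, goat and cabbage river crossing, an assumed example in which a state is taken to be the set of items on the left bank.

```python
from collections import deque

ITEMS = frozenset({"farmer", "wolf", "goat", "cabbage"})
FORBIDDEN = [{"wolf", "goat"}, {"goat", "cabbage"}]

def safe(left):
    """No forbidden pair may share a bank without the farmer."""
    for bank in (left, ITEMS - left):
        if "farmer" not in bank and any(bank >= pair for pair in FORBIDDEN):
            return False
    return True

def solve():
    """Breadth-first search; a state is the set of items on the left bank."""
    start, goal = ITEMS, frozenset()
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        left, path = queue.popleft()
        if left == goal:
            return path
        bank = left if "farmer" in left else ITEMS - left
        for item in bank:  # cross alone (item == "farmer") or with one passenger
            crossing = {"farmer", item}
            nxt = left - crossing if "farmer" in left else left | crossing
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))

for state in solve():
    print(sorted(state))  # initial state, then the left bank after each of 7 crossings
```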
Types of transport puzzles:
Tour puzzles are first-person transport puzzles: the player does the tour him/herself or is represented by a player character on the board.
labyrinths: player runs one convoluted pathway, no dead ends.
mazes: player runs fixed set of pathways, many dead ends.
Sokoban-type puzzles: player pushes objects into place.
sliding puzzles with a single player, for example Rush Hour; other first-person transport puzzles, some of which are elimination puzzles: these are similar to Sokoban-type puzzles, but one eliminates pieces on the way rather than pushing them around.
Other transport games: The player is not represented in the game.
sliding puzzles: slide pieces (on a board) into place. The fifteen puzzle is the best known example of these.
train shunting puzzles: move trains and carriages along tracks.
river crossing puzzles: move a set of pieces across a river using a bridge or boat. Certain conditions apply.
Math:
The Seven Bridges of Königsberg is a historically notable problem in mathematics. Its negative resolution by Leonhard Euler in 1736 laid the foundations of graph theory and prefigured the idea of topology.
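Euler's argument comes down to counting vertices of odd degree: a connected multigraph admits a walk using every edge exactly once only if at most two vertices have odd degree. Below is a minimal sketch applying that check to the Königsberg bridges, with the four land masses labelled A-D (the labelling is an assumption made for the example).

```python
from collections import Counter
from itertools import chain

# The seven bridges of Königsberg as edges between land masses A, B, C and D.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter(chain.from_iterable(bridges))
odd = [v for v, d in degree.items() if d % 2 == 1]

print(dict(degree))   # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd) <= 2)  # False: no walk can cross every bridge exactly once
```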
Literature:
The famous British puzzler Henry Dudeney added several puzzles to this category. Transportation puzzles can be used to study intelligence and educational issues. They are good for this purpose because, as logic puzzles, they require no outside information: everything needed is contained within the puzzle. Also, the state-space representation makes them amenable to computer analysis, but at the same time they are appealing to human subjects of cognitive psychology experiments.
**High-alert nuclear weapon**
High-alert nuclear weapon:
A high-alert nuclear weapon commonly refers to a launch-ready ballistic missile that is armed with a nuclear warhead whose launch can be ordered (through the National Command Authority) and executed (via a nuclear command and control system) within 15 minutes. It can include any weapon system capable of delivering a nuclear warhead in this time frame.
High-alert nuclear weapon:
Virtually all high-alert nuclear weapons are possessed by the United States and Russia. Both nations use automated command-and-control systems, in conjunction with their early warning radar and/or satellites, to facilitate the rapid launch of their land-based intercontinental ballistic missiles (ICBMs) and some submarine-launched ballistic missiles (SLBMs). Fear of a "disarming" nuclear first strike, which would destroy their command and control systems and nuclear forces, led both nations to develop "launch-on-warning" capability, which requires high-alert nuclear weapons that can launch within 30 minutes of a tactical warning, the nominal flight time of ICBMs traveling between both countries.
High-alert nuclear weapon:
A definition of "high-alert" requires no specific explosive power of the weapon carried by the missile or weapon system, but in general, most high-alert missiles are armed with strategic nuclear weapons with yields equal to or greater than 100 kilotons. The United States and Russia have for decades possessed ICBMs and SLBMs that can be launched in only a few minutes.
High-alert nuclear weapon:
As of 2008, the U.S. and Russia had a total of 900 missiles and 2,581 strategic nuclear warheads on high-alert, launch-ready status. The total explosive power of these weapons is about 1,185 megatons, or the equivalent explosive power of 1.185 billion tons of TNT.
**Promethean gap**
Promethean gap:
The Promethean gap (German: prometheisches Gefälle) is a concept concerning the relations of humans and technology and a growing "asynchronization" between them. In popular formulations, the gap refers to an inability or incapacity of human faculties to imagine the effects of the technologies that humans produce, specifically the negative effects. The concept originated with philosopher Günther Anders in the 1950s and for him, an extreme test case was the atomic bomb and its use at Hiroshima and Nagasaki in 1945, a symbol of the larger technology revolution that the 20th century was witnessing. The gap has been extended to and understood within multiple variations – a gap between production and ideology; production and imagination; production and need; production and use; technology and the body; doing and imagining; and doing and feeling. The gap can also be seen in areas such as law and in the actions of legislatures and policymakers. Various authors use different words to explain Gefälle, accordingly resulting in Promethean divide, Promethean disjunction, Promethean discrepancy, Promethean gradient, Promethean slope, Promethean decline, Promethean incline, Promethean disparity, Promethean lag, and Promethean differential.
Origin:
Günther Anders (1902–1992), born in Germany and of Jewish descent, attempted to conceptualize the discrepancy between humans and technology based on his observations and hands-on experience as an émigré in the United States, and his general theoretical background in Marxist concepts such as substructure and superstructure. In the United States, he did various jobs. He was a tutor, a factory worker, and even a Hollywood costume designer. By the 1950s conceptualizing this discrepancy had become an important and pervasive part of his writings and would remain a feature of his work until his death. In the 1980s he would go on to call his philosophy a philosophy of discrepancy (Diskrepanzphilosophie). The first published usage of the phrase was in the first volume of Anders's book The Outdatedness of Human Beings (German: Die Antiquiertheit des Menschen), published in German in 1956. Anders uses exaggeration when explaining the concept of the Promethean gap and the associated concepts of Promethean shame (and pride), and states that there is a necessity and urgency for the exaggeration: human "blindness" amidst the increasing gradient demanded it. The aim then became to expand humans' capacity and ability to imagine. In Burning Conscience (1961), the published correspondence between US airman Claude Eatherly and Anders, Anders writes: your task consists in bridging the gap that exists between your two faculties... to level off the incline... you have to violently widen the narrow capacity of your imagination (and the even narrower one of your feelings) until imagination and feeling become capable to grasp and to realize the enormity of your doings; until you are capable to seize and conceive, to accept or reject it—in short: your task is: to widen your moral fantasy.
Origin:
Anders considered the service members of the US Army Air Forces unit the 509th Composite Group, which conducted the atomic bombings of Hiroshima and Nagasaki and of which Eatherly was a part, as an example of people affected by the Promethean gap. Along with the atomic bombings, Auschwitz (representing the Holocaust) was an example from the same time period; both represented technology-enabled conditions of large-scale mechanized death, a new era which required conceptualizing as a basis of future prevention. Anders took these two examples of advances in civilization under the same umbrella of mechanization, noting that the atomic bombings and Auschwitz differed in a key point, the distance between the individuals involved, which accordingly influenced his engagement with the atomic bombings. An increasingly networked technologization is growing in sophistication in all its forms, and our human faculties are unable to keep up: we are "unable to imagine the things we make", an inversion of the earlier condition in which humans could imagine more than they could make.
Origin:
Prometheus: The word "Promethean" has been taken from the Greek myth of Prometheus. There are a number of stories attached to him, along with variations of the stories. Prometheus, a Titan and a trickster, created primitive versions of humanity. He created them in the image of the Greek gods; however, Zeus limited the powers of humanity. Following this, Prometheus tricked Zeus, at least twice. The first deception by Prometheus resulted in Zeus confiscating fire from humanity. Prometheus, in retaliation, stole fire from Mount Olympus and gave it back to humanity. When humanity flourished once again and Zeus saw that they had been given fire, he eternally punished Prometheus. Anders uses this story as symbolism, where the fire is modern technology and the eternal punishment given to Prometheus represents the negative consequences. The point on which the variations of the story converge is the gift of fire. Through this gift, humanity can now play its own tricks, for good or bad. In variations of the story, Heracles unchains Prometheus, and the story of Pandora and her jar follows.
**Operations research**
Operations research:
Operations research (British English: operational research) (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a discipline that deals with the development and application of analytical methods to improve decision-making. The term management science is occasionally used as a synonym. Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.
Overview:
Operational research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem).
Overview:
The major sub-disciplines in modern operational research, as identified by the journal Operations Research, are: computing and information technologies; financial engineering; manufacturing, service sciences, and supply chain management; policy modeling and public sector work; revenue management; simulation; stochastic models; and transportation theory (mathematics). Closely associated mathematical techniques include game theory (for strategies), linear programming, nonlinear programming, integer programming (notably 0-1 integer linear programming, which is NP-complete), dynamic programming (used in aerospace engineering and economics), information theory (used in cryptography and quantum computing), and quadratic programming (for optimizing quadratic objectives).
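As a concrete illustration of the linear programming entry above, the following minimal sketch solves a tiny invented product-mix problem with SciPy's linprog (assuming SciPy is available). The coefficients are hypothetical, and since linprog minimizes, the profit objective is negated.

```python
from scipy.optimize import linprog

# Hypothetical product-mix problem: maximize profit 3x + 5y subject to
#    x + 2y <= 14   (machine hours)
#   -3x +  y <= 0   (y may not exceed 3x)
#    x -  y <= 2    (storage balance)
result = linprog(c=[-3, -5],                       # negated profit coefficients
                 A_ub=[[1, 2], [-3, 1], [1, -1]],
                 b_ub=[14, 0, 2],
                 bounds=[(0, None), (0, None)])

print(result.x, -result.fun)  # expected: x = 6, y = 4 with profit 38
```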
History:
In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize sometimes complex systems, and has become an area of active academic and industrial research.
History:
Historical origins: In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving sometimes complex decisions (problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead. Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. Beginning in the 20th century, study of inventory management could be considered the origin of modern operations research with economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these to the social sciences. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe, and Robert Watson-Watt. Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operation of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken. Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig), looked for ways to make better decisions in such areas as logistics and training schedules.
History:
Second World War: The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included operational analysis (UK Ministry of Defence from 1962) and quantitative management. During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. Early in the war, while working for the Royal Aircraft Establishment (RAE), he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941.
History:
In 1941, Blackett moved from the RAE to the Navy, working first with RAF Coastal Command and then, early in 1942, with the Admiralty. Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of crucial analyses that aided the war effort. Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones.
History:
While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command they were painted black for night-time operations. At the suggestion of CC-ORS a test was run to see if that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. This change indicated that 30% more submarines would be attacked and sunk for the same number of sightings. As a result of these findings Coastal Command changed their aircraft to using white undersurfaces.
History:
Other work by the CC-ORS indicated that on average if the trigger depth of aerial-delivered depth charges were changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target then at 100 feet the charges would do no damage (because the U-boat wouldn't have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target it had time to alter course under water so the chances of it being within the 20-foot kill zone of the charges was small. It was more efficient to attack those submarines close to the surface when the targets' locations were better known than to attempt their destruction at greater depths when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics".
History:
Bomber Command's Operational Research Section (BC-ORS) analyzed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted because the fact that the aircraft were able to return with these areas damaged indicated the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. Their suggestion to remove some of the crew so that an aircraft loss would result in fewer personnel losses was also rejected by RAF command. Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers which returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain. The areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft. This story has been disputed, with a similar damage assessment study completed in the US by the Statistical Research Group at Columbia University, the result of work done by Abraham Wald. When Germany organized its air defences into the Kammhuber Line, it was realized by the British that if the RAF bombers were to fly in a bomber stream they could overwhelm the night fighters who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses. The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60 mines laid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines in Japanese routes. Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Marianas Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective number to enable all members of the pack to engage targets discovered on their individual patrol stations; and revealed that glossy enamel paint was more effective camouflage for night fighters than conventional dull camouflage paint finish, and that a smooth paint finish increased airspeed by reducing skin friction. On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) were landed in Normandy in 1944, and they followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing and anti-tank shooting.
History:
After World War II: In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR: "To examine quantitatively whether the user organization is getting from the operation of its equipment the best attainable contribution to its overall objective." With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to operational matters, but was extended to encompass equipment procurement, training, logistics and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The development of the simplex algorithm for linear programming came in 1947. In the 1950s, the term Operations Research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare parts theory, queue theory, simulation and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and The Institute of Management Sciences (TIMS) in 1953. Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8000 members. Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on linear programming. In the 1950s and 1960s, chairs of operations research were established in the U.S. and the United Kingdom (from 1964 in Lancaster) in the management faculties of universities. Further influences from the U.S. on the development of operations research in Western Europe can be traced here. The authoritative OR textbooks from the U.S. were published in Germany in German and in France in French (but not in Italian), such as the book by George Dantzig "Linear Programming" (1963) and the book by C. West Churchman et al. "Introduction to Operations Research" (1957). The latter was also published in Spanish in 1973, at the same time opening Operations Research to Latin American readers. NATO gave important impetus to the spread of Operations Research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s – the one in 1956 with 120 participants – bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group of Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics at the Catholic University of Leuven, in 1966. With the development of computers over the next three decades, operations research became able to solve problems with hundreds of thousands of variables and constraints. Moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently.
Much of operations research (in modern usage often called 'analytics') relies upon stochastic variables and therefore upon access to truly random numbers. Fortunately, the cybernetics field also required the same level of randomness. The development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategies, emergency planning, optimizing all facets of industry and the economy, and, in all likelihood, both terrorist attack planning and counterterrorist attack planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized for producing collections of mathematical models that lack an empirical basis of data collection for applications. How to collect data is not presented in the textbooks, and because of the lack of data, there are also no computer applications in the textbooks.
Problems addressed:
- Critical path analysis or project planning: identifying those processes in a multiple-dependency project which affect the overall duration of the project
- Floorplanning: designing the layout of equipment in a factory or components on a computer chip to reduce manufacturing time (thereby reducing cost)
- Network optimization: for instance, setup of telecommunications or power system networks to maintain quality of service during outages
- Resource allocation problems
- Facility location
- Assignment problems: assignment problem, generalized assignment problem, quadratic assignment problem, weapon target assignment problem
- Bayesian search theory: looking for a target
- Optimal search
- Routing, such as determining the routes of buses so that as few buses are needed as possible
- Supply chain management: managing the flow of raw materials and products based on uncertain demand for the finished products
- Project production activities: managing the flow of work activities in a capital project in response to system variability through operations research tools for variability reduction and buffer allocation using a combination of allocation of capacity, inventory and time
- Efficient messaging and customer response tactics
- Automation: automating or integrating robotic systems in human-driven operations processes
- Globalization: globalizing operations processes in order to take advantage of cheaper materials, labor, land or other productivity inputs
- Transportation: managing freight transportation and delivery systems (examples: LTL shipping, intermodal freight transport, travelling salesman problem, driver scheduling problem)
- Scheduling: personnel staffing, manufacturing steps, project tasks, and network data traffic (the latter are known as queueing models or queueing systems)
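Many of these problems have standard formulations with off-the-shelf solvers. As a minimal sketch (the cost matrix is invented for illustration and SciPy is assumed as the solver; this is not drawn from the article itself), the assignment problem can be solved with the Hungarian-method routine in SciPy:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i][j] = cost of assigning worker i to task j.
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

# linear_sum_assignment finds a minimum-cost one-to-one assignment
# and returns the matched row and column indices.
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"worker {r} -> task {c} (cost {cost[r, c]})")
print("total cost:", cost[rows, cols].sum())  # 5 for this matrix
```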
Problems addressed:
- Sports events and their television coverage
- Blending of raw materials in oil refineries
- Determining optimal prices, in many retail and B2B settings, within the disciplines of pricing science
- Cutting stock problem: cutting small items out of bigger ones

Operational research is also used extensively in government, where evidence-based policy is used.
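Blending problems of the kind mentioned above are classically posed as linear programs. A minimal sketch with invented numbers (two crude streams blended to meet a quality specification at minimum cost), assuming SciPy's linprog as the solver:

```python
from scipy.optimize import linprog

# Hypothetical blend: fractions x1, x2 of two streams costing 60 and 45
# per barrel; the blend must reach a quality index of at least 90
# (stream qualities are 95 and 85), and the fractions must sum to 1.
c = [60, 45]          # objective: minimize blend cost per barrel
A_ub = [[-95, -85]]   # quality: 95*x1 + 85*x2 >= 90, negated for A_ub @ x <= b_ub
b_ub = [-90]
A_eq = [[1, 1]]       # fractions sum to 1
b_eq = [1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1), (0, 1)], method="highs")
print(res.x, res.fun)  # optimal fractions [0.5, 0.5], cost 52.5 per barrel
```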
Management science:
In 1967 Stafford Beer characterized the field of management science as "the business use of operations research". Like operational research itself, management science (MS) is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links with economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to sometimes complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research.
Management science:
The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical and public-administration settings, as well as to charitable, political or community groups.
Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as with designing and developing new and better models of organizational excellence. The application of these models within the corporate sector became known as management science.
Management science:
Related fields: some fields have considerable overlap with operations research and management science.

Applications: applications are abundant, such as in airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which operations research has contributed insights and solutions is vast. It includes:
- Scheduling (of airlines, trains, buses etc.)
- Assignment (assigning crew to flights, trains or buses; employees to projects; commitment and dispatch of power generation facilities)
- Facility location (deciding the most appropriate location for new facilities such as a warehouse, factory or fire station)
- Hydraulics and piping engineering (managing flow of water from reservoirs)
- Health services (information and supply chain management)
- Game theory (identifying, understanding and developing strategies adopted by companies)
- Urban design
- Computer network engineering (packet routing, timing, analysis)
- Telecom and data communication engineering (packet routing, timing, analysis)

Management is also concerned with so-called soft operational analysis, which concerns methods for strategic planning, strategic decision support, and problem structuring. In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, during the past 30 years, a number of non-quantified modeling methods have been developed, including stakeholder-based approaches such as metagame analysis and drama theory, morphological analysis and various forms of influence diagrams, cognitive mapping, strategic choice, and robustness analysis.
Societies and journals:
Societies: The International Federation of Operational Research Societies (IFORS) is an umbrella organization for operational research societies worldwide, representing approximately 50 national societies including those in the US, UK, France, Germany, Italy, Canada, Australia, New Zealand, the Philippines, India, Japan and South Africa. The foundation of IFORS in 1960 was of decisive importance for the institutionalization of Operations Research, stimulating the foundation of national OR societies in Austria, Switzerland and Germany. IFORS has held important international conferences every three years since 1957. The constituent members of IFORS form regional groups, such as that in Europe, the Association of European Operational Research Societies (EURO). Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO) and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC). In 2004 the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better, which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR.
Societies and journals:
Journals of INFORMS: The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class, according to 2005 Journal Citation Reports. They are:
- Decision Analysis
- Information Systems Research
- INFORMS Journal on Computing
- INFORMS Transactions on Education (an open access journal)
- Interfaces
- Management Science
- Manufacturing & Service Operations Management
- Marketing Science
- Mathematics of Operations Research
- Operations Research
- Organization Science
- Service Science
- Transportation Science

Other journals: these are listed in alphabetical order of their titles.
Societies and journals:
- 4OR-A Quarterly Journal of Operations Research: jointly published by the Belgian, French and Italian operations research societies (Springer)
- Decision Sciences: published by Wiley-Blackwell on behalf of the Decision Sciences Institute
- European Journal of Operational Research (EJOR): founded in 1975, it is presently by far the largest operational research journal in the world, with around 9,000 pages of published papers per year; in 2004, its total number of citations was the second largest amongst operational research and management science journals
- INFOR Journal: published and sponsored by the Canadian Operational Research Society
- Journal of Defense Modeling and Simulation (JDMS): Applications, Methodology, Technology: a quarterly journal devoted to advancing the science of modeling and simulation as it relates to the military and defense
Societies and journals:
- Journal of the Operational Research Society (JORS): an official journal of The OR Society and the oldest continuously published journal of OR in the world, published by Taylor & Francis
- Military Operations Research (MOR): published by the Military Operations Research Society
- Omega - The International Journal of Management Science
- Operations Research Letters
- Opsearch: official journal of the Operational Research Society of India
- OR Insight: a quarterly journal of The OR Society, published by Palgrave
- Pesquisa Operacional: the official journal of the Brazilian Operations Research Society
- Production and Operations Management: the official journal of the Production and Operations Management Society
- TOP: the official journal of the Spanish Statistics and Operations Research Society.
**Epidemiology of autism**
Epidemiology of autism:
The epidemiology of autism is the study of the incidence and distribution of autism spectrum disorders (ASD). A 2022 systematic review of the global prevalence of autism spectrum disorders found a median prevalence of 1% in children in studies published from 2012 to 2021, with a trend of increasing prevalence over time. However, the study's 1% figure may reflect an underestimate of prevalence in low- and middle-income countries. ASD averages a 4.3:1 male-to-female ratio in diagnosis, not accounting for ASD in gender diverse populations, which overlap disproportionately with ASD populations. The number of children known to have autism has increased dramatically since the 1980s, at least partly due to changes in diagnostic practice; it is unclear whether prevalence has actually increased, and as-yet-unidentified environmental risk factors cannot be ruled out. In 2020, the Centers for Disease Control's Autism and Developmental Disabilities Monitoring (ADDM) Network reported that approximately 1 in 54 children in the United States (1 in 34 boys, and 1 in 144 girls) is diagnosed with an autism spectrum disorder (ASD), based on data collected in 2016. This estimate is a 10% increase from the 1 in 59 rate in 2014, a 105% increase from the 1 in 110 rate in 2006, and a 176% increase from the 1 in 150 rate in 2000. Diagnostic criteria for ASD have changed significantly since the 1980s; for example, the U.S. special-education autism classification was introduced in 1994. ASD is a complex neurodevelopmental disorder, and although what causes it is still not entirely known, efforts have been made to outline causative mechanisms and how they give rise to the disorder. The risk of developing autism is increased in the presence of various prenatal factors, including advanced paternal age and diabetes in the mother during pregnancy. In rare cases, autism is strongly associated with agents that cause birth defects. It has been shown to be related to genetic disorders and to epilepsy. ASD is believed to be largely inherited, although the genetics of ASD are complex and it is unclear which genes are responsible. ASD is also associated with several intellectual or emotional gifts, which has led to a variety of hypotheses from within evolutionary psychiatry that autistic traits have played a beneficial role over human evolutionary history. Other proposed causes, such as childhood vaccines, are controversial. The vaccine hypothesis has been extensively investigated and shown to be false, lacking any scientific evidence. Andrew Wakefield published a small study in 1998 in the United Kingdom suggesting a causal link between autism and the trivalent MMR vaccine. After data included in the report was shown to be deliberately falsified, the paper was retracted, and Wakefield was struck off the medical register in the United Kingdom. It is problematic to compare autism rates over the last three decades, as the diagnostic criteria for autism have changed with each revision of the Diagnostic and Statistical Manual (DSM), which outlines which symptoms meet the criteria for an ASD diagnosis. In 1983, the DSM did not recognize PDD-NOS or Asperger's syndrome, and the criteria for autistic disorder (AD) were more restrictive. The previous edition of the DSM, DSM-IV, included autistic disorder, childhood disintegrative disorder, PDD-NOS, and Asperger's syndrome.
Due to inconsistencies in diagnosis and how much is still being learned about autism, the most recent DSM (DSM-5) has only one diagnosis, autism spectrum disorder (ASD), which encompasses each of the previous four disorders. According to the new diagnostic criteria for ASD, one must show both deficits in social communication and interaction and restricted, repetitive behaviors, interests and activities (RRBs).
Epidemiology of autism:
ASD diagnoses continue to be over four times more common among boys (1 in 34) than among girls (1 in 154), and they are reported in all racial, ethnic and socioeconomic groups. Studies have been conducted in several continents (Asia, Europe and North America) that report a prevalence rate of approximately 1 to 2 percent. A 2011 study reported a 2.6 percent prevalence of autism in South Korea.
Frequency:
Although incidence rates measure the occurrence of new cases directly, most epidemiological studies of autism report other frequency measures, typically point or period prevalence, or sometimes cumulative incidence. Attention is focused mostly on whether prevalence is increasing with time.
Incidence and prevalence: Epidemiology defines several measures of the frequency of occurrence of a disease or condition: The incidence rate of a condition is the rate at which new cases occurred per person-year, for example, "2 new cases per 1,000 person-years".
The cumulative incidence is the proportion of a population that became new cases within a specified time period, for example, "1.5 per 1,000 people became new cases during 2006".
The point prevalence of a condition is the proportion of a population that had the condition at a single point in time, for example, "10 cases per 1,000 people at the start of 2006".
Frequency:
The period prevalence is the proportion that had the condition at any time within a stated period, for example, "15 per 1,000 people had cases during 2006". When studying how conditions are caused, incidence rates are the most appropriate measure of condition frequency as they assess probability directly. However, incidence can be difficult to measure with rarer conditions such as autism. In autism epidemiology, point or period prevalence is more useful than incidence, as the condition starts long before it is diagnosed (given its genetic component, it is present from conception), and the gap between initiation and diagnosis is influenced by many factors unrelated to chance. Research focuses mostly on whether point or period prevalence is increasing with time; cumulative incidence is sometimes used in studies of birth cohorts.
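All four measures are simple ratios, and a small worked example makes the distinctions concrete. The numbers below are invented to match the illustrative point- and period-prevalence figures quoted above (10 existing cases and 5 new cases during the year in a population of 1,000):

```python
# Invented illustrative cohort.
population = 1000        # people under observation
cases_at_start = 10      # existing cases on 1 January
new_cases = 5            # cases with onset during the year
person_years = 990       # approximate follow-up time contributed

incidence_rate = new_cases / person_years               # new cases per person-year
cumulative_incidence = new_cases / population           # proportion becoming cases that year
point_prevalence = cases_at_start / population          # cases at a single point in time
period_prevalence = (cases_at_start + new_cases) / population  # cases at any time in the year

print(f"incidence rate:       {incidence_rate:.4f} per person-year")
print(f"cumulative incidence: {cumulative_incidence * 1000:.1f} per 1,000")
print(f"point prevalence:     {point_prevalence * 1000:.1f} per 1,000")   # 10.0
print(f"period prevalence:    {period_prevalence * 1000:.1f} per 1,000")  # 15.0
```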
Frequency:
Estimation methods: The three basic approaches used to estimate prevalence differ in cost and in quality of results. The simplest and cheapest method is to count known autism cases from sources such as schools and clinics, and divide by the population. This approach is likely to underestimate prevalence because it does not count children who have not been diagnosed yet, and it is likely to generate skewed statistics because some children have better access to treatment. The second method improves on the first by having investigators examine student or patient records looking for probable cases, to catch cases that have not been identified yet. The third method, which is arguably the best, screens a large sample of an entire community to identify possible cases, and then evaluates each possible case in more detail with standard diagnostic procedures. This last method typically produces the most reliable, and the highest, prevalence estimates.
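As a schematic illustration of why the three approaches disagree (all numbers invented), suppose a community of 10,000 children contains 100 true cases:

```python
# Invented illustration: how the three estimation methods differ.
true_cases, population = 100, 10_000

method_counts = {
    "administrative count (schools/clinics)": 55,  # misses undiagnosed children
    "records review for probable cases":      75,  # catches some unidentified cases
    "population screening + evaluation":      95,  # most complete, most expensive
}

for method, n in method_counts.items():
    print(f"{method}: {n / population * 1000:.1f} per 1,000")
# True prevalence is 10.0 per 1,000; the cheaper methods underestimate it,
# which is the pattern described in the text.
```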
Frequency:
Frequency estimates: Estimates of the prevalence of autism vary widely depending on diagnostic criteria, age of children screened, and geographical location. Most recent reviews tend to estimate a prevalence of 1–2 per 1,000 for autism and close to 6 per 1,000 for ASD; PDD-NOS makes up the vast majority of ASD, Asperger syndrome is about 0.3 per 1,000, and the atypical forms childhood disintegrative disorder and Rett syndrome are much rarer. A 2006 study of nearly 57,000 British nine- and ten-year-olds reported a prevalence of 3.89 per 1,000 for autism and 11.61 per 1,000 for ASD; these higher figures could be associated with broadening diagnostic criteria. Studies based on more detailed information, such as direct observation rather than examination of medical records, identify higher prevalence; this suggests that published figures may underestimate ASD's true prevalence. A 2009 study of the children in Cambridgeshire, England used different methods to measure prevalence, and estimated that 40% of ASD cases go undiagnosed, with the two least-biased estimates of true prevalence being 11.3 and 15.7 per 1,000. A 2009 U.S. study based on 2006 data estimated the prevalence of ASD in eight-year-old children to be 9.0 per 1,000 (approximate range 8.6–9.3). A 2009 report based on the 2007 Adult Psychiatric Morbidity Survey by the National Health Service determined that the prevalence of ASD in adults was approximately 1% of the population, with a higher prevalence in males and no significant variation between age groups; these results suggest that prevalence of ASD among adults is similar to that in children and that rates of autism are not increasing.
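Ranges like the "approximate range 8.6–9.3" quoted for the 9.0 per 1,000 estimate are confidence intervals for an estimated proportion. A minimal sketch of a Wilson score interval (the sample size below is invented, since the study's denominator is not given here):

```python
import math

def wilson_ci(cases, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = cases / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 2,700 identified cases among 300,000 eight-year-olds
# gives 9.0 per 1,000; the interval tightens as n grows.
lo, hi = wilson_ci(2700, 300_000)
print(f"{lo * 1000:.1f} to {hi * 1000:.1f} per 1,000")  # roughly 8.7 to 9.3
```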
Frequency:
Changes with time: Attention has been focused on whether the prevalence of autism is increasing with time. Earlier prevalence estimates were lower, centering at about 0.5 per 1,000 for autism during the 1960s and 1970s and about 1 per 1,000 in the 1980s, as opposed to today's 23 per 1,000.
The number of reported cases of autism increased dramatically in the 1990s and 2000s, prompting ongoing investigations into several potential reasons: More children may have autism; that is, the true frequency of autism may have increased.
There may be more complete pickup of autism (case finding), as a result of increased awareness and funding. For example, attempts to sue vaccine companies may have increased case-reporting.
The diagnosis may be applied more broadly than before, as a result of the changing definition of the disorder, particularly changes in DSM-III-R and DSM-IV.
An editorial error in the description of the PDD-NOS category of Autism Spectrum Disorders in the DSM-IV, in 1994, inappropriately broadened the PDD-NOS construct. The error was corrected in the DSM-IV-TR, in 2000, reversing the PDD-NOS construct back to the more restrictive diagnostic criteria requirements from the DSM-III-R.
Successively earlier diagnosis in each succeeding cohort of children, including recognition in nursery (preschool), may have affected apparent prevalence but not incidence.
Frequency:
A review of the "rising autism" figures compared to other disabilities in schools shows a corresponding drop in findings of intellectual disability.The reported increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness. A widely cited 2002 pilot study concluded that the observed increase in autism in California cannot be explained by changes in diagnostic criteria, but a 2006 analysis found that special education data poorly measured prevalence because so many cases were undiagnosed, and that the 1994–2003 U.S. increase was associated with declines in other diagnostic categories, indicating that diagnostic substitution had occurred.A 2007 study that modeled autism incidence found that broadened diagnostic criteria, diagnosis at a younger age, and improved efficiency of case ascertainment, can produce an increase in the frequency of autism ranging up to 29-fold depending on the frequency measure, suggesting that methodological factors may explain the observed increases in autism over time. A small 2008 study found that a significant number (40%) of people diagnosed with pragmatic language impairment as children in previous decades would now be given a diagnosis as autism. A study of all Danish children born in 1994–99 found that children born later were more likely to be diagnosed at a younger age, supporting the argument that apparent increases in autism prevalence were at least partly due to decreases in the age of diagnosis.A 2009 study of California data found that the reported incidence of autism rose 7- to 8-fold from the early 1990s to 2007, and that changes in diagnostic criteria, inclusion of milder cases, and earlier age of diagnosis probably explain only a 4.25-fold increase; the study did not quantify the effects of wider awareness of autism, increased funding, and expanding support options resulting in parents' greater motivation to seek services. Another 2009 California study found that the reported increases are unlikely to be explained by changes in how qualifying condition codes for autism were recorded.Several environmental factors have been proposed to support the hypothesis that the actual frequency of autism has increased. These include certain foods, infectious disease, pesticides. There is overwhelming scientific evidence against the MMR hypothesis and no convincing evidence for the thiomersal (or Thimerosal) hypothesis, so these types of risk factors have to be ruled out. Although it is unknown whether autism's frequency has increased, any such increase would suggest directing more attention and funding toward addressing environmental factors instead of continuing to focus on genetics.
Frequency:
Geographical frequency: Africa: The prevalence of autism in Africa is unknown.
The Americas: The prevalence of autism in the Americas overall is unknown.
Frequency:
Canada: The rate of autism diagnoses in Canada was 1 in 450 in 2003. However, preliminary results of an epidemiological study conducted at Montreal Children's Hospital in the 2003–2004 school year found a prevalence rate of 0.68% (or 1 per 147). A 2001 review of the medical research conducted by the Public Health Agency of Canada concluded that there was no link between MMR vaccine and either inflammatory bowel disease or autism. The review noted, "An increase in cases of autism was noted by year of birth from 1979 to 1992; however, no incremental increase in cases was observed after the introduction of MMR vaccination." After the introduction of MMR, "A time trend analysis found no correlation between prevalence of MMR vaccination and the incidence of autism in each birth cohort from 1988 to 1993."

United States: CDC's most recent estimate is that 1 out of every 44 children, or 23 per 1,000, have some form of ASD as of 2018.
Frequency:
The number of diagnosed cases of autism grew dramatically in the U.S. in the 1990s and has continued growing in the 2000s. For the 2006 surveillance year, identified ASD cases were an estimated 9.0 per 1,000 children aged 8 years (95% confidence interval [CI] = 8.6–9.3). These numbers measure what is sometimes called "administrative prevalence", that is, the number of known cases per unit of population, as opposed to the true number of cases. This prevalence estimate rose 57% (95% CI 27%–95%) from 2002 to 2006. The National Health Interview Survey (NHIS) for 2014–2016 studied 30,502 US children and adolescents and found the weighted prevalence of ASD was 2.47% (24.7 per 1,000): 3.63% in boys and 1.25% in girls. Across the 3-year reporting period, the prevalence was 2.24% in 2014, 2.41% in 2015, and 2.76% in 2016. The number of new cases of autism spectrum disorder (ASD) in Caucasian boys is roughly 50% higher than found in Hispanic children, and approximately 30% more likely to occur than in non-Hispanic white children in the United States. A further study in 2006 concluded that the apparent rise in administrative prevalence was the result of diagnostic substitution, mostly for findings of intellectual disability and learning disabilities. "Many of the children now being counted in the autism category would probably have been counted in the mental retardation or learning disabilities categories if they were being labeled 10 years ago instead of today," said researcher Paul Shattuck of the Waisman Center at the University of Wisconsin–Madison, in a statement. A population-based study in Olmsted County, Minnesota found that the cumulative incidence of autism grew eightfold from the 1980–83 period to the 1995–97 period. The increase occurred after the introduction of broader, more-precise diagnostic criteria, increased service availability, and increased awareness of autism. During the same period, the reported number of autism cases grew 22-fold in the same location, suggesting that counts reported by clinics or schools provide misleading estimates of the true incidence of autism.
Frequency:
Venezuela: A 2008 study in Venezuela reported a prevalence of 1.1 per 1,000 for autism and 1.7 per 1,000 for ASD.
Asia: A journal reports that the median prevalence of ASD among 2–6-year-old children reported in China from 2000 onwards was 10.3 per 10,000.
Hong Kong: A 2008 Hong Kong study reported an ASD incidence rate similar to those reported in Australia and North America, and lower than that reported in Europe. It also reported a prevalence of 1.68 per 1,000 for children under 15 years.
Frequency:
Japan: A 2005 study of a part of Yokohama with a stable population of about 300,000 reported a cumulative incidence to age 7 years of 48 cases of ASD per 10,000 children in 1989, and 86 in 1990. After the vaccination rate of the combined MMR vaccine dropped to near zero and it was replaced with separate MR and M vaccines, the incidence rate grew to 97 and 161 cases per 10,000 for children born in 1993 and 1994, respectively, indicating that the combined MMR vaccine did not cause autism. In 2004, a Japanese autism association reported that about 360,000 people have typical Kanner-type autism.
Frequency:
Middle East: Israel: A 2009 study reported that the annual incidence rate of Israeli children with a diagnosis of ASD receiving disability benefits rose from zero in 1982–1984 to 190 per million in 2004. It was not known whether these figures reflected true increases or other factors such as changes in diagnostic measures.
Frequency:
Saudi Arabia: Studies of autism frequency have been particularly rare in the Middle East. One rough estimate is that the prevalence of autism in Saudi Arabia is 18 per 10,000, slightly higher than the 13 per 10,000 reported in developed countries (compared with 168 per 10,000 in the USA).

Europe: Denmark: In 1992, thiomersal-containing vaccines were removed in Denmark. A study at Aarhus University indicated that during the chemical's usage period (up through 1990), there was no trend toward an increase in the incidence of autism. Between 1991 and 2000 the incidence increased, including among children born after the discontinuation of thimerosal.
Frequency:
France France made autism the national focus for the year 2012 and the Health Ministry estimated the rate of autism in 2012 to have been 0.67%, i.e. 1 in 150.Eric Fombonne made some studies in the years 1992 and 1997. He found a prevalence of 16 per 10,000 for the global pervasive developmental disorder (PDD).
Frequency:
INSERM found a prevalence of 27 per 10,000 for ASD and a prevalence of 9 per 10,000 for early infantile autism in 2003. Those figures are considered underestimates, as the WHO gives figures between 30 and 60 per 10,000. The French Ministry of Health gives a prevalence of 4.9 per 10,000 on its website, but it counts only early infantile autism.
Frequency:
Germany: A 2008 study in Germany found that inpatient admission rates for children with ASD increased 30% from 2000 to 2005, with the largest rise between 2000 and 2001 and a decline between 2001 and 2003. Inpatient rates for all mental disorders also rose for ages up to 15 years, so that the ratio of ASD to all admissions rose from 1.3% to 1.4%.
Frequency:
Norway: A 2009 study in Norway reported prevalence rates for ASD ranging from 0.21% to 0.87%, depending on assessment method and assumptions about non-response, suggesting that methodological factors explain large variances in prevalence rates in different studies.
Frequency:
United Kingdom: The incidence and changes in incidence with time are unclear in the United Kingdom. The reported autism incidence in the UK rose starting before the first introduction of the MMR vaccine in 1989. However, a perceived link between the two, arising from the results of a fraudulent scientific study, has caused considerable controversy, despite being subsequently disproved. A 2004 study found that the reported incidence of pervasive developmental disorders in a general practice research database in England and Wales grew steadily during 1988–2001 from 0.11 to 2.98 per 10,000 person-years, and concluded that much of this increase may be due to changes in diagnostic practice.
Genetics:
As late as the mid-1970s there was little evidence of a genetic role in autism; evidence from genetic epidemiology studies now suggests that it is one of the most heritable of all psychiatric conditions. The first studies of twins estimated heritability to be more than 90%; in other words, that genetics explains more than 90% of autism cases. When only one identical twin is autistic, the other often has learning or social disabilities. For adult siblings, the risk of having one or more features of the broader autism phenotype might be as high as 30%, much higher than the risk in controls. About 10–15% of autism cases have an identifiable Mendelian (single-gene) condition, chromosome abnormality, or other genetic syndrome, and ASD is associated with several genetic disorders. Since heritability is less than 100% and symptoms vary markedly among identical twins with autism, environmental factors are most likely a significant cause as well. If some of the risk is due to gene-environment interaction, the 90% heritability estimate may be too high; however, in 2017 the largest study to date, including over three million participants, estimated the heritability at 83%. Genetic linkage analysis has been inconclusive; many association analyses have had inadequate power. Studies have examined more than 100 candidate genes; many genes must be examined because more than a third of genes are expressed in the brain and there are few clues on which are relevant to autism.
Causative factors:
A few studies have found an association between autism and frequent maternal use of acetaminophen (e.g. Tylenol, Paracetamol) during pregnancy. Autism is also associated with several other prenatal factors, including advanced age in either parent, and diabetes, bleeding, or use of psychiatric drugs in the mother during pregnancy. Autism has also been indirectly linked to maternal prepregnancy obesity and underweight. It is not known whether mutations that arise spontaneously in autism and other neuropsychiatric disorders come mainly from the mother or the father, or whether the mutations are associated with parental age. However, recent studies have identified advancing paternal age as a significant indicator for ASD. An increased chance of autism has also been linked to rapid "catch-up" growth in children born to mothers who had an unhealthy weight at conception. A large 2008 population study of Swedish parents of children with autism found that the parents were more likely to have been hospitalized for a mental disorder, that schizophrenia was more common among the mothers and fathers, and that depression and personality disorders were more common among the mothers. It is not known how many siblings of autistic individuals are themselves autistic. Several studies based on clinical samples have given quite different estimates, and these clinical samples differ in important ways from samples taken from the general community. Autism has also been shown to cluster in urban neighborhoods of high socioeconomic status. One study from California found a three- to fourfold increased risk of autism in a small 30 by 40 km region centered on West Hollywood, Los Angeles.
Causative factors:
Sex and gender differences: Boys have a higher chance of being diagnosed with autism than girls. The ASD sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without. Recent studies have found no association with socioeconomic status, and have reported inconsistent results about associations with race or ethnicity. RORA deficiency may explain some of the difference in frequency between males and females. RORA protein levels are higher in the brains of typically developing females compared to typically developing males, providing females with a buffer against RORA deficiency; this is known as the female protective effect. RORA deficiency has previously been proposed as one factor that may make males more vulnerable to autism. There is a statistically notable overlap between ASD populations and gender diverse populations.
Comorbid conditions:
Autism is associated with several other conditions: Genetic disorders. About 10–15% of autism cases have an identifiable Mendelian (single-gene) condition, chromosome abnormality, or other genetic syndrome, and ASD is associated with several genetic disorders.
Intellectual disability. The fraction of autistic individuals who also meet criteria for intellectual disability has been reported as anywhere from 25% to 70%, a wide variation illustrating the difficulty of assessing autistic intelligence.
Comorbid conditions:
Anxiety disorders are common among children with ASD, although there are no firm data. Symptoms include generalized anxiety and separation anxiety, and are likely affected by age, level of cognitive functioning, degree of social impairment, and ASD-specific difficulties. Many anxiety disorders, such as social phobia, are not commonly diagnosed in people with ASD because such symptoms are better explained by ASD itself, and it is often difficult to tell whether symptoms such as compulsive checking are part of ASD or a co-occurring anxiety problem. The prevalence of anxiety disorders in children with ASD has been reported to be anywhere between 11% and 84%.
Comorbid conditions:
Epilepsy, with variations in risk of epilepsy due to age, cognitive level, and type of language disorder; 5–38% of children with autism have comorbid epilepsy, and only 16% of these have remission in adulthood.
Several metabolic defects, such as phenylketonuria, are associated with autistic symptoms.
Minor physical anomalies are significantly increased in the autistic population.
Comorbid conditions:
Preempted diagnoses. Although the DSM-IV rules out concurrent diagnosis of many other conditions along with autism, the full criteria for ADHD, Tourette syndrome, and other such conditions are often present, and these comorbid diagnoses are increasingly accepted. A 2008 study found that nearly 70% of children with ASD had at least one psychiatric disorder, including nearly 30% with social anxiety disorder and similar proportions with ADHD and oppositional defiant disorder. Childhood-onset schizophrenia, a rare and severe form, is another preempted diagnosis whose symptoms are often present along with the symptoms of autism.
**Conyca Geodrone**
Conyca Geodrone:
The Conyca Geodrone is a 1.5 m fixed-wing drone that specializes in topographic and photogrammetric applications. It meets all the legal requirements to operate in Spain. The system performs fully autonomous take-off and landing, can produce 3D terrain models with 4 cm resolution accuracy from a height of 100 m, and can take orthophotos. Although this system has been oriented to topography, there are surveillance and precision-agriculture versions that can use infrared and hyperspectral cameras.
Specifications:
MTOW: 2,000 g
Maximum payload: 600 g
Cruise speed: 20 m/s
Max speed: 30 m/s
Autonomy: 1 h
Max area: 130 ha
Wingspan: 1,500 mm
Power: 625 W
Processor: 32-bit ARM Cortex-M4
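The cruise speed, autonomy, and maximum-area figures can be cross-checked with back-of-the-envelope arithmetic; the survey-line spacing below is an assumed value chosen for illustration, not a published specification:

```python
# Rough consistency check of the spec sheet (line spacing is an assumption).
cruise_speed_mps = 20     # m/s, from the specifications
autonomy_s = 3600         # 1 h of flight, from the specifications
line_spacing_m = 20       # assumed effective spacing between survey lines

flight_line_m = cruise_speed_mps * autonomy_s   # 72,000 m of survey line
area_m2 = flight_line_m * line_spacing_m        # swept area, ignoring turns
print(f"{area_m2 / 10_000:.0f} ha")  # ~144 ha, the same order as the quoted 130 ha
```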
**Succinylornithine transaminase**
Succinylornithine transaminase:
In enzymology, a succinylornithine transaminase (EC 2.6.1.81) is an enzyme that catalyzes the chemical reaction

N2-succinyl-L-ornithine + 2-oxoglutarate ⇌ N-succinyl-L-glutamate 5-semialdehyde + L-glutamate

Thus, the two substrates of this enzyme are N2-succinyl-L-ornithine and 2-oxoglutarate, whereas its two products are N-succinyl-L-glutamate 5-semialdehyde and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is N2-succinyl-L-ornithine:2-oxoglutarate 5-aminotransferase. Other names in common use include succinylornithine aminotransferase, N2-succinylornithine 5-aminotransferase, AstC, SOAT, and 2-N-succinyl-L-ornithine:2-oxoglutarate 5-aminotransferase. This enzyme participates in arginine and proline metabolism.
**British School of Motoring**
British School of Motoring:
The British School of Motoring (BSM) is a driving school in the United Kingdom, providing training in vehicle operation and road safety.
British School of Motoring:
BSM has around 1,000 driving instructors. RAC's parent company, Aviva, sold BSM to Arques Industries AG in February 2009. In November 2009 the business was sold on to its managing directors, Abu-Haris Shafi and Nikolai Kesting, and in January 2011 it was acquired by Acromas Holdings, the holding company for The AA and Saga. The AA (including its ownership of BSM) then announced a stock market flotation in June 2014.
Founder:
The British School of Motoring, founded in 1910, was an independent, private educational organisation, like the Chelsea College of Aeronautical and Automobile Engineering, founded in 1924. Both were founded by their principal, S. C. H. Roberts. Born in 1889, Stanley Coryton Hugh Roberts (known as "C H") died in September 1957 and was succeeded as managing director of both BSM and the College of Aeronautical and Automobile Engineering by Miss Denise McCann.
**Agrellite**
Agrellite:
Agrellite (NaCa2Si4O10F) is a rare triclinic inosilicate mineral with four-periodic single chains of silica tetrahedra.
Agrellite:
It is a white to grey translucent mineral, with a pearly luster and white streak. It has a Mohs hardness of 5.5 and a specific gravity of 2.8. Its type locality is the Kipawa Alkaline Complex, Quebec, Canada, where it occurs as tabular laths in pegmatite lenses. Other localities include Murmansk Oblast, Russia; the Dara-i-Pioz Glacier, Tajikistan; and the Saima Complex, Liaoning, China. Common associates at the type locality include zircon, eudialyte, vlasovite, miserite, mosandrite-(Ce), and calcite. Agrellite fluoresces pink, strongly under shortwave and weakly under longwave ultraviolet light; the fluorescence activator is dominantly Mn2+, with minor Eu2+, Sm3+, and Dy3+. It is named in honor of Stuart Olof Agrell (1913–1996), a British mineralogist at Cambridge University.
**Equivalence principle (geometric)**
Equivalence principle (geometric):
The equivalence principle is one of the cornerstones of gravitation theory. Different formulations of the equivalence principle are labeled weakest, weak, middle-strong and strong. All of these formulations are based on the empirical equality of inertial mass and active and passive gravitational charges.
Equivalence principle (geometric):
The weakest equivalence principle is restricted to the motion law of a probe point mass in a uniform gravitational field. Its localization is the weak equivalence principle, which states the existence of a desired local inertial frame at a given world point. This is the case for equations depending on a gravitational field and its first-order derivatives, e.g., the equations of mechanics of probe point masses, and the equations of electromagnetic and Dirac fermion fields. The middle-strong equivalence principle is concerned with any matter except a gravitational field, while the strong one applies to all physical laws.
Equivalence principle (geometric):
The above-mentioned variants of the equivalence principle aim to guarantee the transition of General Relativity to Special Relativity in a certain reference frame. However, only the particular weakest and weak equivalence principles are true. To overcome this difficulty, the equivalence principle can be formulated in geometric terms as follows.
Equivalence principle (geometric):
In the spirit of Felix Klein's Erlanger program, Special Relativity can be characterized as the Klein geometry of Lorentz group invariants. The geometric equivalence principle is then formulated as the requirement that Lorentz invariants exist on a world manifold X. This requirement holds if the tangent bundle TX of X admits an atlas with Lorentz transition functions, i.e., if the structure group of the associated frame bundle FX of linear tangent frames in TX is reducible to the Lorentz group SO(1,3). By virtue of the well-known theorem on structure group reduction, this reduction takes place if and only if the quotient bundle FX/SO(1,3) → X possesses a global section, which is a pseudo-Riemannian metric on X. Thus the geometric equivalence principle provides the necessary and sufficient conditions for the existence of a pseudo-Riemannian metric, i.e., a gravitational field, on a world manifold.
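As a compact restatement of the criterion above (a sketch in standard bundle notation; the identification of GL(4,R) as the structure group of the frame bundle of a 4-dimensional world manifold is a standard fact assumed here, not stated in the text):

```latex
% Reduction of the structure group of the frame bundle FX from
% GL(4,\mathbb{R}) to the Lorentz group SO(1,3) is possible if and only
% if the quotient bundle admits a global section, and such sections are
% exactly the pseudo-Riemannian metrics on X:
\[
  \{\text{pseudo-Riemannian metrics } g \text{ on } X\}
  \;\longleftrightarrow\;
  \{\text{global sections } h \colon X \to FX/SO(1,3)\}.
\]
```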
Equivalence principle (geometric):
Based on the geometric equivalence principle, gravitation theory is formulated as gauge theory where a gravitational field is described as a classical Higgs field responsible for spontaneous breakdown of space-time symmetries.
**Today sponge**
Today sponge:
The Today sponge is a brand of plastic contraceptive sponge saturated with a spermicide nonoxynol-9 to prevent conception. Within two years of its launch, Today had become the largest selling over-the-counter female contraceptive in the United States, and was soon rolled out into other markets.
History:
The Today sponge dates back to 1976 when it was created by Bruce Ward Vorhauer. Vorhauer struggled for seven years to get the device approved and on the market. Following U.S. Food and Drug Administration (FDA) approval, the brand was rolled out in June 1983. The product, manufactured by VLI Corp. of Irvine, California, was classified as "relatively safe" by the FDA in 1984. A 1984 study in the American Journal of Obstetrics and Gynecology compared it with the diaphragm and found that the Today sponge was a "safe and acceptable method of contraception with an effectiveness rate in the range of other vaginal contraceptives." The Today sponge also broke the barrier in several markets for advertising contraceptive devices.
History:
The Today sponge "was manufactured until 1995, when FDA imposed new manufacturing standards." The product had several setbacks while marketed, including a link to toxic shock syndrome. Personal financial problems forced Vorhauer to sell the entire manufacturing operation to American Home Products, now Wyeth. Almost the entire content of the facility was moved to the Whitehall-Robbins facility in Hammonton, New Jersey, from its original California home. The sponge was removed from the U.S. market in 1994 after problems were found at the facility related to the deionized water system. The water system, which was originally sized for much larger production, could not produce the small amounts of deionized water required for this one product and became repeatedly contaminated. Based on slumping sales and to avoid any further FDA issues, Wyeth stopped selling the sponge rather than move production or modify its plant.
History:
In 1998, Allendale Pharmaceuticals acquired the rights to the Today sponge and it was once again available. New FDA standards for manufacturing and record-keeping forced repeated delays, but the Today sponge was finally re-introduced in Canada in March 2003, and in the U.S. in September 2005. In January 2007, Allendale Pharmaceuticals was acquired by Synova Healthcare, Inc. In December 2007 Synova filed for bankruptcy reorganization; in 2008 the manufacturing rights to the Today sponge were purchased by Alvogen. In mid-May 2009, Mayer Laboratories, Inc., the distributor of the Today sponge for the US, Canada and the EU, announced the Today sponge had been re-launched in the United States. At the end of 2019, Mayer Labs had issues with the equipment used to manufacture the Today sponge. As of the onset of the COVID-19 pandemic, it is out of production with no information as to "when or if this situation will change."
In popular culture:
A 1995 Seinfeld episode, "The Sponge", revolved around Elaine's attempts to procure her favorite form of birth control, the discontinued Today sponge, and her rationing them based on whether a potential partner was "sponge-worthy". This was later revisited in the series finale when the pharmacist testifies against Elaine for buying a case of sponges.
**Developing tank**
Developing tank:
A developing tank is a light-tight container used for developing film. A developing tank allows photographic film to be developed in a daylight environment. This is necessary because most film is panchromatic and therefore cannot be exposed to any light during processing. Depending upon its size and type, a developing tank can hold from one to many roll or sheet films.
Developing tank:
Famous brands include Paterson, Yankee, Jobo and Nikor.
A film reel holds roll films in a spiral shape. The film is held evenly spaced so that the chemicals in the developing tank reach all of the film.
Types:
General: General tanks support 110, 126, 135, 120 and 620 format films. Developing tanks and film reels for roll films come in two varieties: plastic and stainless steel. With stainless steel reels, the film is clipped to the center and then gently pinched while the reel is turned so that the film falls into the reel's grooves. With a plastic reel, the film is loaded from the outside and then wound onto the reel by rotating the reel with a back-and-forth motion.
Types:
Special purpose: Minox daylight development tank. Minox format film can be loaded in broad daylight without the use of a changing bag or darkroom.
Meopta 16mm development tank. Used for 16mm film only. Film must be loaded in darkroom.
Agfa Rondinax 35 and 60. Respectively used for 135 and 120 film formats. Can be loaded in broad daylight without the use of a changing bag or darkroom.
Use:
The user begins by opening the film canister (in the case of 35 mm film) or separating the film from a paper backing (in the case of medium format film, e.g. 120/220 format). The film is then loaded onto a film reel in a completely dark environment; this can be a light-tight room or a changing bag. Care must be taken during this step, as improperly loading the film may result in parts of the film not getting developed. Once the film is on the reel it is put into the developing tank, and then a lid is placed on the developing tank. Because the lid prevents light from reaching the film, the rest of the film developing process may be carried out in daylight. In addition to protecting the film from light, the lid contains an opening for rapidly pouring liquids into and out of the developing tank. Finally, a separate cap seals the canister, which prevents the contents of the tank from spilling when agitated by inverting the tank. The tank should be rinsed with water after each development, as any residual fixer may prevent proper development of the next film.
**Silanization of silicon and mica**
Silanization of silicon and mica:
Silanization of silicon and mica is the coating of these materials with a thin layer of self-assembling units.
Biological applications of silanization:
Nanoscale analysis of proteins using atomic force microscopy (AFM) requires surfaces with well-defined topologies and chemistries for many experimental techniques. Biomolecules, particularly proteins, can be immobilized simply on an unmodified substrate surface through hydrophobic or electrostatic interactions. However, several problems are associated with physical adsorption of proteins on surfaces. With metal surfaces, protein denaturation, unstable and reversible binding, and nonspecific and random immobilization of protein have been reported. One alternative involves the interaction of chemically modified surfaces with proteins under non-denaturing circumstances. Chemical modification of surfaces provides the potential to precisely control the chemistry of the surface, and with the correct chemical modifications, there are several advantages to this approach. First, the proteins adsorbed on the surface are more stable over a wide range of conditions. The proteins also adopt a more uniform orientation on the surface. Additionally, a higher density of protein deposition with greater reproducibility is possible.
Biological applications of silanization:
Chemical modification of surfaces has been successfully applied in several instances to immobilize proteins in order to obtain valuable information. For instance, atomic force microscopy imaging of DNA has been performed using mica coated with 3-aminopropyltriethoxysilane (APTES). The negatively charged DNA backbone bound strongly to the positive charges on the amine functionality, leading to stable structures that could be imaged both in air and in buffer. In a recent study by Behrens et al., amine-terminated silicon surfaces were successfully used to immobilize bone morphogenetic protein 2 (BMP2) for medical purposes (cf. hydrogen-terminated silicon surface). Molecules with amine groups (especially APTES) are important for biological applications, because they allow for simple electrostatic interactions with biomolecules.
Functionalization of surfaces using self-assembled monolayers:
Self-assembled monolayers (SAMs) are an extremely versatile approach that allows for precise control of surface characteristics. The concept was introduced in 1946 by Bigelow et al., but it was not until 1983 that it attracted widespread interest, when the formation of SAMs of alkanethiolates on gold was reported by Allara et al. Self-assembly of monolayers can be achieved using several systems. The basis for self-assembly is the formation of a covalent bond between the surface and the molecule forming the layer; this requirement can be fulfilled using a variety of chemical groups, such as organosilanes on hydroxylated materials (glass, silicon, aluminium oxide, mica) and organosulfur-based compounds on noble metals. While the latter system has been well characterized, much less is known about the behavior of organosilane layers on surfaces and the underlying mechanisms that control monolayer organization and structure.
Functionalization of surfaces using self-assembled monolayers:
Although silanization of silicate surfaces was introduced more than 40 years ago, the process of formation of smooth layers on surfaces is still poorly understood. Probably the most important reason for this situation is that a number of studies that have involved silanization as part of the procedure have not been concerned with thoroughly characterizing the silane layer formed. The one result that unifies recent studies on the characterization of silane layers is centered on the extreme sensitivity of the reactions that lead to the formation of silane layers. Indeed, self-assembled layers of silanes on silicate surfaces have been reported to be dependent on various parameters such as humidity, temperature, impurities in the silane reagent and the type of silicate surface. In order to consistently and reproducibly make diverse functionalized surfaces with layers that are molecularly smooth, it is critical to understand the chemistry of the silicate surfaces and the ways in which various parameters affect the nature of the self-assembled layers.
Surface structure of silicon and mica:
Silicon Oxidized silicon has been extensively studied as a substrate for the deposition of biomolecules. Piranha solution can be used to increase the surface density of reactive hydroxyl groups on the surface of silicon. The –OH groups can hydrolyze and subsequently form siloxane linkages (Si-O-Si) with organic silane molecules. Preparation of silicon surfaces for silanization involves the removal of surface contaminants. This can be achieved by using UV-ozone and piranha solution. Piranha solution in particular constitutes quite a harsh treatment that can potentially damage the integrity of the silicon surface. Finlayson-Pitts et al. investigated the effect of certain treatments on silicon and concluded that both the roughness (3-5 Å) and the presence of scattered large particles were preserved after 1 cycle of plasma-treatment. However, the silicon surface was significantly damaged after 30 cycles of treatment with piranha solution or plasma. In both cases, treatment introduced irregularities and large aggregates on the surface (aggregate size > 80 nm), with the effect being more pronounced when piranha was used. In either case, multiple treatments rendered the surface inadequate for deposition of small biomolecules.
Surface structure of silicon and mica:
Mica Mica is another silicate that is widely used as substrate for the deposition of biomolecules. Mica bears a noticeable advantage over silicon because it is molecularly smooth and hence better suited for studies of small, flat molecules. It has a crystalline structure with generic formula K[Si3Al]O10Al2(OH)2 and contains sheets of octahedral hydroxyl-aluminum sandwiched between two silicon tetrahedral layers. In the silicon layer, one in four silicon atoms is replaced by an aluminum atom, generating a difference in charge that is offset by unbound K+ present in the region between neighboring silicon layers. Muscovite mica is most susceptible to cleavage along the plane located in the potassium layer. When a freshly cleaved mica surface is placed in contact with water, hydrated potassium ions can desorb from the mica surface, leading to a negative charge at the surface.
Surface structure of silicon and mica:
Similar to silicon, the surface of mica does not contain an appreciable density of silanol groups for covalent attachment by silanes. A recent study reported that freshly cleaved mica carries 11% silanol groups (i.e., approximately 1 in 10 silicon atoms bears a hydroxyl group). Although it is possible that silanization may be carried out using untreated mica, the increased density of surface silanol groups on activated mica can significantly improve covalent attachment of silane molecules to the surface. Mica can be activated by treatment with argon/water plasma, leading to a silanol surface density of 30%. Working with activated surfaces introduces another consideration about the stability of the silanol groups on the activated surfaces. Giasson et al. reported that the silanol groups on freshly cleaved mica that was not subjected to any treatment were more stable under high vacuum than those on plasma-activated mica: after 64 hrs, surface coverage of the silanol groups for freshly cleaved mica was roughly the same, while surface coverage for activated mica decreased 3-fold to 10%.
Adsorption of molecules onto silicate surfaces:
Adsorption describes the process by which molecules or particles bind to surfaces and is distinguishable from absorption, whereby the particles spread in the bulk of the absorbing material. The adsorbed material is called the adsorbate, while the surface is called the adsorbent. It is common to distinguish between two types of adsorption, namely physical adsorption (which consists of intermolecular forces holding the adsorbed material to the surface) and chemical adsorption (which consists of covalent bonds tethering the adsorbed material to the surface). The nature of the layer of adsorbate formed depends on the interactions between the adsorbed material and the adsorbent. More specifically, the mechanisms involved in adsorption include ion exchange (replacement of counter ions adsorbed from the solution by similarly charged ions), ion pairing (adsorption of ions from the solution phase onto sites on the substrate that carry the opposite charge), hydrophobic bonding (non-polar attraction between groups on the substrate surface and molecules in solution), polarization of π-electrons (polar interactions between partially charged sites on the substrate surface and molecules carrying opposite partial charges in solution), and covalent bonding. The variety of ways for adsorption to occur provides an indication of the complexities associated with controlling the type of layer that is adsorbed.
Adsorption of molecules onto silicate surfaces:
The type of silane used can further compound the problem, as in the case of APTES ((3-aminopropyl)triethoxysilane). APTES is the classical molecule used for the immobilization of biomolecules and has historically been by far the most widely studied molecule in the field. Since APTES contains three ethoxy groups per molecule, it can polymerize in the presence of water, both laterally between neighboring APTES molecules and vertically away from the surface, forming oligomers and polymers that can attach to the surface.
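A schematic of the competing reactions, assuming the usual hydrolysis–condensation pathway (R = 3-aminopropyl; the stoichiometry shown is the idealized one):

```latex
% Hydrolysis of the three ethoxy groups (requires trace water)
\mathrm{R{-}Si(OEt)_3 + 3\,H_2O \;\longrightarrow\; R{-}Si(OH)_3 + 3\,EtOH}

% Intermolecular condensation, the first step toward oligomers/polymers
\mathrm{2\,R{-}Si(OH)_3 \;\longrightarrow\; R{-}Si(OH)_2{-}O{-}Si(OH)_2{-}R + H_2O}
```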
Adsorption of molecules onto silicate surfaces:
Self-assembly can be approached using solution-phase reactions or vapor-phase reactions. In solution-phase experiments, the silane is dissolved in an anhydrous solvent and placed in contact with the surface; in vapor-phase experiments, only the vapor of the silane reaches the substrate surface.
Solution-phase reactions:
Solution-phase reaction has historically been the most studied method, and a general consensus has evolved with regard to the conditions required for the formation of smooth aminosilane films: (1) an anhydrous solvent such as toluene is required, with a rigidly controlled trace amount of water to regulate the degree of polymerization of aminosilanes at the surface and in solution; (2) formation of oligomers and polymers is favored at higher silane concentrations (>10%); (3) moderate temperatures (60–90 °C) can disrupt non-covalent interactions such as hydrogen bonds, leaving fewer silane molecules weakly tethered to the surface, and also favor desorption of water from the substrate into the toluene phase; (4) rinsing with solvents such as toluene, ethanol and water following the silanization reaction favors the removal of weakly bonded silane molecules and the hydrolysis of residual alkoxy linkages in the layer; (5) drying and curing at high temperature (110 °C) favors the formation of siloxane linkages and also converts ammonium ions to the neutral amine, which is more reactive.
Vapor-phase reactions:
Vapor-phase silanization has been pursued as a way to circumvent the complexities of trace water in solution and of silane purity. Since oligomers and polymers of silanes have negligible vapor pressure at the reaction temperatures commonly used, they do not reach the surface of the silicate during deposition. And since there is no solvent in the system, it is easier to control the amount of water in the reaction. Smooth monolayers have been reported for vapor-phase silanizations of several types of silanes, including aminosilanes, octadecyltrimethoxysilane and fluoroalkyl silanes. However, the nature of the attachment of the silane molecules to the substrate is uncertain, although siloxane bond formation can be favored by soaking the substrate in water following deposition.
Vapor-phase reactions:
In a recent study by Chen et al., APTES monolayers were obtained consistently across different temperatures and deposition times. The thicknesses of the layers obtained were 5 Å and 6 Å at 70 °C and 90 °C respectively, values that correspond to the approximate length of an APTES molecule and indicate that monolayers formed on the substrates in each case.
**Gynaecologic cytology**
Gynaecologic cytology:
Gynaecologic cytology, also spelled gynecologic cytology, is a field of pathology concerned with the investigation of disorders of the female genital tract. The most common investigation in this field is the Pap test, which is used to screen for potentially precancerous lesions of the cervix. Cytology can also be used to investigate disorders of the ovaries, uterus, vagina and vulva.
**Clean and Environmentally Safe Advanced Reactor**
Clean and Environmentally Safe Advanced Reactor:
The Clean and Environmentally Safe Advanced Reactor (CAESAR) is a nuclear reactor concept created by Claudio Filippone, the Director of the Center for Advanced Energy Concepts at the University of Maryland, College Park and head of the ongoing CAESAR Project. The concept's key element is the use of steam as a moderator, making it a type of reduced moderation water reactor. Because the density of steam may be controlled very precisely, Filippone claims it can be used to fine-tune neutron fluxes to ensure that neutrons are moving with an optimal energy profile to split 238U nuclei – in other words, cause fission.
Clean and Environmentally Safe Advanced Reactor:
The CAESAR reactor design exploits the fact that the fission products and daughter isotopes produced via nuclear reactions also decay to produce additional delayed neutrons. Filippone claims that unlike light water-cooled fission reactors, where fission occurring in enriched 235U fuel rods moderated by liquid-water coolant ultimately creates a Maxwellian thermal neutron flux profile, the neutron energy profile from delayed neutrons varies widely. In a conventional reactor, he theorizes, the moderator slows these neutrons down so that they cannot contribute to the 238U reaction; 238U has a comparatively large cross-section for neutrons at high energies.
Clean and Environmentally Safe Advanced Reactor:
Filippone maintains that when steam is used as the moderator, the average neutron energy is increased from that of a liquid water-moderated reactor such that the delayed neutrons persist until they hit another nucleus. The resulting extremely high neutron economy, he claims, will make it possible to maintain a self-sustaining reaction in fuel rods of pure 238U, once the reactor has been started by enriched fuel.
Clean and Environmentally Safe Advanced Reactor:
Skeptics, however, point out that it is generally believed that a controlled, sustained chain reaction is not possible with 238U. Starting in the 1930s, physicists have used the six factor formula and its derivative, the four factor formula, to calculate the behavior of nuclear chain reactions inside a mass of fissile material. Based on these calculations, even an infinitely large mass of pure 238U is incapable of sustaining a chain reaction on its own neutron production alone; either the gas-cooled fast-spectrum core must be coupled to a moderated outer slow-neutron section, or some level of fissile enrichment is required. 238U can undergo fission when struck by an energetic neutron with over 1 MeV of kinetic energy. But the high-energy neutrons produced by 238U fission (after quickly losing energy through inelastic scattering) are not, by themselves, sufficient to induce enough successive fissions in 238U to create a critical system (one in which the number of neutrons created by fission equals the number absorbed). Instead, bombarding 238U with neutrons below the 1 MeV fission threshold causes it to absorb them without fissioning (becoming 239U) and decay by beta emission to 239Pu (which is itself fissile). The energy of delayed neutrons is so low that their contribution to 238U fission is essentially zero, so some fissile material is required to keep the reactor safely under prompt criticality (e.g. 235U, as in natural uranium, and preferably also some moderator, possibly outside the extra-fast core).
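For reference, the formulas the skeptics invoke take the standard form below; a chain reaction is self-sustaining only when the effective multiplication factor reaches 1:

```latex
% Four factor formula: infinite-medium multiplication factor
k_\infty = \eta\,\varepsilon\,p\,f

% Six factor formula: adds fast and thermal non-leakage probabilities
k_{\mathrm{eff}} = \eta\,\varepsilon\,p\,f\,P_{\mathrm{FNL}}\,P_{\mathrm{TNL}}
```

Here η is the number of neutrons produced per neutron absorbed in fuel, ε the fast fission factor, p the resonance escape probability, f the thermal utilization factor, and P_FNL and P_TNL the fast and thermal non-leakage probabilities; criticality requires k_eff = 1 (steady operation) or greater.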
Clean and Environmentally Safe Advanced Reactor:
The maximum share of fissions occurring in 238U is limited by the neutron physics to less than 100%, but it can exceed 40%, which allows even a relatively low conversion ratio of 0.6 to breed the reactor's own fuel (without uranium enrichment or plutonium produced elsewhere). A conversion ratio of 0.6 is achievable in practice; it has been reached even with light-water reactor designs, which waste many neutrons in boron absorbers that have better alternatives.
**Zinc–zinc oxide cycle**
Zinc–zinc oxide cycle:
The zinc–zinc oxide cycle or Zn–ZnO cycle is a two-step thermochemical cycle based on zinc and zinc oxide for hydrogen production, with a typical efficiency around 40%.
Process description:
The thermochemical two-step water-splitting process uses a redox system:

Dissociation: ZnO → Zn + 1/2 O2
Hydrolysis: Zn + H2O → ZnO + H2

The net effect of the two steps is thus the splitting of water into hydrogen and oxygen. For the first, endothermic step, concentrating solar power is used: zinc oxide is thermally dissociated at 1,900 °C (3,450 °F) into zinc and oxygen. In the second, non-solar, exothermic step, zinc reacts at 427 °C (801 °F) with water to produce hydrogen and zinc oxide. The required temperature level is reached by using a solar power tower and a set of heliostats to collect the solar thermal energy.
**Diversification (marketing strategy)**
Diversification (marketing strategy):
Diversification is a corporate strategy of entering into new products or product lines, new services, or new markets, involving substantially different skills, technology and knowledge.
Diversification (marketing strategy):
Diversification is one of the four main growth strategies defined by Igor Ansoff in the Ansoff Matrix. Ansoff pointed out that a diversification strategy stands apart from the other three strategies: whereas the first three are usually pursued with the same technical, financial, and merchandising resources used for the original product line, diversification usually requires a company to simultaneously acquire new skills and knowledge in product development as well as new insights into market behavior. Beyond new skills and knowledge, it also requires new resources, including new technologies and new facilities, which exposes the organisation to higher levels of risk. Note: the notion of diversification depends on the subjective interpretation of a “new” market and a “new” product, which should reflect the perceptions of customers rather than managers. Indeed, products tend to create or stimulate new markets, and new markets promote product innovation.
Diversification (marketing strategy):
Product diversification involves the addition of new products to existing products either being manufactured or being marketed. Expansion of the existing product line with related products is one such method adopted by many businesses. Adding toothbrushes to toothpaste, tooth powders or mouthwash under the same brand, or under different brands aimed at different segments, is one way of diversifying. These are either brand extensions or product extensions intended to increase the volume of sales and the number of customers.
A typology of diversification strategies:
The strategies of diversification can include internal development of new products or markets, acquisition of a firm, alliance with a complementary company, licensing of new technologies, and distributing or importing a product line manufactured by another firm. Generally, the final strategy involves a combination of these options, determined according to the available opportunities and their consistency with the objectives and resources of the company.
A typology of diversification strategies:
There are three types of diversification: concentric, horizontal, and conglomerate.
A typology of diversification strategies:
Concentric diversification: This means that there is a technological similarity between the industries, so that the firm is able to leverage its technical know-how to gain some advantage. For example, a company that manufactures industrial adhesives might decide to diversify into adhesives to be sold via retailers. The technology would be the same but the marketing effort would need to change.
A typology of diversification strategies:
Launching a new product in this way can also help the company increase its market share and earn profit. For instance, the addition of tomato ketchup and sauce to the existing "Maggi" brand processed items of Food Specialities Ltd. is an example of technologically related concentric diversification.
The company could seek new products that have technological or marketing synergies with existing product lines and appeal to a new group of customers. This also helps the company tap a part of the market which remains untapped and which presents an opportunity to earn profits.
Horizontal diversification: The company adds new products or services that are often technologically or commercially unrelated to current products but that may appeal to current customers. This strategy tends to increase the firm's dependence on certain market segments. For example, a company that previously made notebooks may also enter the pen market with a new product.
When is horizontal diversification desirable? Horizontal diversification is desirable if the present customers are loyal to the current products and if the new products are of good quality and are well promoted and priced. A drawback is that the new products are marketed in the same economic environment as the existing products, which may lead to rigidity or instability.
A typology of diversification strategies:
Another interpretation: Horizontal integration occurs when a firm enters a new business (either related or unrelated) at the same stage of production as its current operations. For example, Avon's move to market jewellery through its door-to-door sales force involved marketing new products through existing channels of distribution. An alternative form of horizontal integration that Avon has also undertaken is selling its products by mail order (e.g., clothing, plastic products) and through retail stores (e.g., Tiffany's). In both cases, Avon is still at the retail stage of the production process.
A typology of diversification strategies:
Conglomerate diversification (or lateral diversification): This involves adding new products or services that are significantly unrelated to the current business and have no technological or commercial similarities with it. For example, if a computer company decides to produce stationery items, it is pursuing a conglomerate diversification strategy.
Goal of diversification:
According to Calori and Harvatopoulos (1988), there are two dimensions of rationale for diversification. The first one relates to the nature of the strategic objective: Diversification may be defensive or offensive.
Defensive reasons may be spreading the risk of market contraction, or being forced to diversify when current product or current market orientation seems to provide no further opportunities for growth. Offensive reasons may be conquering new positions, taking opportunities that promise greater profitability than expansion opportunities, or using retained cash that exceeds total expansion needs.
The second dimension involves the expected outcomes of diversification: Management may expect great economic value (growth, profitability) or first and foremost great coherence with their current activities (exploitation of know-how, more efficient use of available resources and capacities).
In addition, companies may also explore diversification just to get a valuable comparison between this strategy and expansion.
Risks:
Of the four strategies presented in the Ansoff matrix, diversification carries the highest level of risk and requires the most careful investigation. Going into an unknown market with an unfamiliar product offering means a lack of experience in the new skills and techniques required, so the company exposes itself to great uncertainty. Moreover, diversification might necessitate significant expansion of human and financial resources, which may detract from the focus, commitment, and sustained investment in the core industries. Therefore, a firm should choose this option only when the current product or market orientation does not offer further opportunities for growth.
Risks:
In order to measure the chances of success, different tests can be done: The attractiveness test: the industry that has been chosen has to be either attractive or capable of being made attractive.
The cost-of-entry test: the cost of entry must not capitalize all future profits.
Risks:
The better-off test: the new unit must either gain competitive advantage from its link with the corporation, or vice versa. Because of the high risks explained above, many attempts at diversification have ended in failure. However, there are a few good examples of successful diversification: Apple moved from PCs to mobile devices; Virgin Group moved from music production to travel and mobile phones; Walt Disney moved from producing animated movies to theme parks and vacation properties; and Canon diversified from a camera maker into a producer of an entirely new range of office equipment.
**Phoenix abscess**
Phoenix abscess:
A phoenix abscess is an acute exacerbation of a chronic periapical lesion. It is a dental abscess that can occur immediately following root canal treatment. Other causes are an untreated necrotic pulp (chronic apical periodontitis) and inadequate debridement during the endodontic procedure. The risk of a phoenix abscess is minimised by correct identification and instrumentation of the entire root canal, ensuring no missed anatomy.
Phoenix abscess:
Treatment involves repeating the endodontic treatment with improved debridement, or tooth extraction. Antibiotics might be indicated to control a spreading or systemic infection.
Causes:
Phoenix abscesses are believed to be due to a changing internal environment of the root canal system during the instrumentation stage of root canal treatment, causing a sudden worsening of the symptoms of chronic periradicular periodontitis. This instrumentation is thought to stimulate the residual microbes in the root canal space to cause an inflammatory reaction. These microbes are predominantly facultative anaerobic gram-positive bacteria, such as Streptococcus, Enterococcus and Actinomyces species. Another cause of a phoenix abscess is a decrease in a patient's resistance to these bacteria and their products.
Signs & Symptoms:
Clinical features. Pain: A common clinical feature is exacerbated and exaggerated pain. There may or may not be associated pus and suppuration. The signs and symptoms are similar to those of an acute periradicular abscess, but with a periradicular radiolucency present as well.
Loss of vitality: The problematic tooth will have a non-vital pulp with no previous symptoms. Vitality of teeth can be assessed through various means; common tests include the ethyl chloride test and the electric pulp test. Other examples are laser Doppler flowmetry (LDF) and pulse oximetry.
Tender to touch: The tooth is extremely tender to touch, and it may be high in occlusion as it may be extruded from the socket.
Mobility: Increased mobility of the tooth may be observed.
Radiographic features: Radiographically, there will be a periapical lesion associated with the tooth, normally existing prior to the acute episode. A widened periodontal ligament (PDL) space is also visible.
Treatment:
For most situations urgent treatment is required to eliminate the pain and swelling.
1) Further endodontic treatment: Further root canal treatment is often the best option. Firstly, the tooth should be accessed and thoroughly irrigated using sodium hypochlorite, after which the canals should be dried using paper points. The tooth should then be debrided, and drainage established.
2) Medications. i) Antibiotics: In certain circumstances it may be necessary to provide an antibiotic, namely in the presence of a diffuse swelling or cellulitis, when immediate drainage cannot be achieved, or when the patient has systemic involvement.
ii) Analgesics: Analgesics may also be advised for pain control.
3) Extraction: If the tooth is unrestorable, extraction may also be an option.
4) Bite adjustment: Adjusting the bite may provide some relief, but this will not be a permanent solution to the problem.
**CyClones**
CyClones:
CyClones is a first-person shooter video game for MS-DOS developed by Raven Software and published by Strategic Simulations in 1994.
Plot:
CyClones is set in the closing years of the 20th century, when wars and pollution devastated many countries in the world and led to a policy of isolation for many governments. During this period, episodes of mass hysteria became widespread, and reports of increased UFO sightings and abductions abounded. A number of "E.T. Phobics" joined to create the Advanced Ideas Corporation (A.I.). Partially funded by the U.S. military, the corporation began operating in secret laboratories as the millennium came to a close. The corporation was eventually able to discover and examine a downed alien ship, confirming suspicions of alien activity on the planet. Jubilation over the discovery was short-lived, however, as three days later the aliens attacked. The attack began with surgical strikes against Earth's satellite and missile-control centers. A remarkable discovery about the alien invasion was that most of the invaders were cloned from human tissue samples, genetically engineered, and then cybernetically enhanced; these humanoids were dubbed "CyClones". At the time the aliens attacked, A.I. had begun work on a prototype of a weapon it dubbed the "HAVOC Unit". Built by combining human technology with alien technology recovered from the alien ship, the HAVOC project resulted in the production of a cybernetically enhanced fighter with superior combat capabilities, which the U.S. government intended to use to sabotage the main alien operations and locate the aliens' main base of operations, in order to destroy their leader and cause enough disarray in the enemy forces to allow for their defeat by conventional armies.
Gameplay:
The player controls the "HAVOC unit" in a series of missions given by the Earth government to confront and push back the alien invaders and their clone servants. The gameplay follows the standard first-person shooter formula set by Doom the year before, requiring the player to navigate several levels while fighting enemies and activating switches or seeking keys to gain access to different areas. Several missions have specific objectives that must be met before the player is allowed to continue and instructions on what to do next may be displayed on-screen when the player enters a certain area. The enemies encountered include the CyClones humanoids, as well as several robotic and alien creatures. Some enemies are stationary, such as floor or ceiling turrets, and hazards such as explosive barrels are also included. The player can find medikits and "mech-kits" (which serve as armor) in order to ensure their survivability, as well as acquire several weapons which are either human or alien in origin. Later in the game it is possible to acquire an alien suit which grants access to even more powerful weapons, as well as a jetpack that allows the player to fly around the stage.
Gameplay:
A distinguishing aspect of CyClones' gameplay is the aiming system. Unlike Doom and most other first-person shooters of the era, CyClones implemented a mouse aiming system featuring a movable aiming reticle, and allowed players to look up and down and to jump. The aiming system lets the player fire their weapon at whatever position they point at on the screen, allowing for significantly more accuracy than the vertical autoaim employed by most other shooters. Also unlike Doom, items are picked up not by walking over them but by clicking on them with the mouse. In addition, the mouse can be used to operate the HUD on the screen, for example to use an inventory item, switch to another weapon or access the map screen. This control system is very similar to the one later used by the game System Shock.
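The core of such a free-aim scheme is mapping the reticle's screen position to a firing direction. The sketch below is a hypothetical illustration only (the function, its parameters and the pinhole-projection assumption are not taken from the game's code):

```python
import math

def reticle_to_direction(sx, sy, width, height, h_fov_deg=90.0):
    """Map a reticle position in screen pixels (sx, sy) to a firing
    direction as (yaw, pitch) offsets from the view axis, in radians.

    Hypothetical illustration of free-aim with a movable reticle;
    assumes a symmetric pinhole projection with the given horizontal
    field of view and square pixels.
    """
    # Half-extents of the view plane at unit distance from the eye
    half_w = math.tan(math.radians(h_fov_deg) / 2.0)
    half_h = half_w * (height / width)

    # Normalize screen coords to [-1, 1], with (0, 0) at screen center
    nx = (2.0 * sx / width) - 1.0
    ny = 1.0 - (2.0 * sy / height)   # screen y grows downward

    yaw = math.atan(nx * half_w)     # left/right offset
    pitch = math.atan(ny * half_h)   # up/down offset
    return yaw, pitch

# Example: reticle slightly right of and above center on a 320x200 screen
print(reticle_to_direction(200, 80, 320, 200))
```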
Development:
Development of CyClones started in early 1994, when Raven Software's developers split into two teams: one worked with id Software's Doom engine to create a fantasy game which later evolved into Heretic, while the other team started on a project that was to use the engine from ShadowCaster to create a futuristic first-person shooter for Strategic Simulations. However, the ShadowCaster engine had by then become dated, especially compared to Doom, and so Raven produced a new in-house engine, developed mainly by Carl Stika. The engine was nicknamed "Steam" and offered significant improvements over the Wolfenstein 3D-derived ShadowCaster engine, such as moving platforms, catwalks, sloped floors, and transparent textures. However, the engine was still limited to 90-degree wall geometry, akin to Rise of the Triad and some other shooters of the period.
Development:
The game was originally released on floppy disks with a MIDI soundtrack, but was later re-released on CD in a "multimedia enhanced" edition featuring a Red Book audio soundtrack and a full motion video (FMV) cutscene that introduces the game and its story.
**Haloferax**
Haloferax:
In taxonomy, Haloferax (common abbreviation: Hfx.) is a genus of the Haloferacaceae.
Genetic exchange:
Cells of H. mediterranei and cells of the related species H. volcanii can undergo a process of genetic exchange between two cells which involves cell fusion, resulting in a heterodiploid cell (containing two different chromosomes in one cell). Although this genetic exchange ordinarily occurs between two cells of the same species, it can also occur at a lower frequency between an H. mediterranei and an H. volcanii cell. These two species have an average nucleotide sequence identity of 86.6%. During this exchange process, a diploid cell is formed that contains the full genetic repertoire of both parental cells, and genetic recombination is facilitated. Subsequently, the cells separate, giving rise to recombinant cells.
Taxonomy:
As of 2022, 13 species are validly published under the genus Haloferax.
Proposed species: Several species and novel binomial names have been proposed, but not validly published.
Haloferax antrum, Haloferax opilio, Haloferax rutilus and Haloferax viridis were isolated from Romanian salt lakes and first proposed as new species in 2006. Only H. prahovense, which was proposed alongside them, has since been validly published.
Haloferax berberensis was isolated in Algeria and proposed as a new species in 2005.
Haloferax litoreum, Haloferax marinisediminis and Haloferax marinum were first published in 2021, but are not accepted as of 2022.
Haloferax marisrubri and Haloferax profundi were first published in 2020, but are not accepted as of 2022.
Haloferax massilisiensis (or Haloferax massiliense) was first published in 2016 and again in 2018 as a human-associated halophilic archaeon. As of 2022, this species is not accepted.
**FOXP2**
FOXP2:
Forkhead box protein P2 (FOXP2) is a protein that, in humans, is encoded by the FOXP2 gene. FOXP2 is a member of the forkhead box family of transcription factors, proteins that regulate gene expression by binding to DNA. It is expressed in the brain, heart, lungs and digestive system. FOXP2 is found in many vertebrates, where it plays an important role in mimicry in birds (such as birdsong) and echolocation in bats. FOXP2 is also required for the proper development of speech and language in humans. In humans, mutations in FOXP2 cause the severe speech and language disorder developmental verbal dyspraxia. Studies of the gene in mice and songbirds indicate that it is necessary for vocal imitation and the related motor learning. Outside the brain, FOXP2 has also been implicated in the development of other tissues such as the lung and digestive system. Initially identified in 1998 as the genetic cause of a speech disorder in a British family designated the KE family, FOXP2 was the first gene discovered to be associated with speech and language and was subsequently dubbed "the language gene". However, other genes are necessary for human language development, and a 2018 analysis confirmed that there was no evidence of recent positive evolutionary selection of FOXP2 in humans.
Structure and function:
As a FOX protein, FOXP2 contains a forkhead-box domain. In addition, it contains a polyglutamine tract, a zinc finger and a leucine zipper. Through the forkhead-box domain, the protein binds to the DNA of target genes and controls their activity. Only a few target genes have been identified; however, researchers believe that there could be up to hundreds of other genes targeted by FOXP2. The forkhead box P2 protein is active in the brain and other tissues before and after birth, and many studies show that it is paramount for the growth of nerve cells and transmission between them. The FOXP2 gene is also involved in synaptic plasticity, making it important for learning and memory. FOXP2 is required for proper brain and lung development. Knockout mice with only one functional copy of the FOXP2 gene have significantly reduced vocalizations as pups. Knockout mice with no functional copies of FOXP2 are runted, display abnormalities in brain regions such as the Purkinje layer, and die an average of 21 days after birth from inadequate lung development. FOXP2 is expressed in many areas of the brain, including the basal ganglia and inferior frontal cortex, where it is essential for brain maturation and speech and language development. In mice, the gene was found to be twice as highly expressed in male pups as in female pups, which correlated with an almost twofold increase in the number of vocalisations the male pups made when separated from their mothers. Conversely, in human children aged 4–5, the gene was found to be 30% more expressed in the Broca's area of female children. The researchers suggested that the gene is more active in "the more communicative sex". The expression of FOXP2 is subject to post-transcriptional regulation, particularly by microRNA (miRNA) acting on the FOXP2 3' untranslated region, causing repression. Three amino acid substitutions distinguish the human FOXP2 protein from that found in mice, while two amino acid substitutions distinguish the human FOXP2 protein from that found in chimpanzees, but only one of these changes is unique to humans. Evidence from genetically manipulated mice and human neuronal cell models suggests that these changes affect the neural functions of FOXP2.
Clinical significance:
The FOXP2 gene has been implicated in several cognitive functions, including general brain development, language, and synaptic plasticity. The FOXP2 gene encodes the forkhead box P2 protein, which acts as a transcription factor. Transcription factors affect other genomic regions, and the forkhead box P2 protein has been suggested to act as a transcription factor for hundreds of genes. This prolific involvement opens the possibility that the role of FOXP2 is much more extensive than originally thought. Other targets of transcription have been researched without correlation to FOXP2. Specifically, FOXP2 has been investigated in correlation with autism and dyslexia; however, no mutation was discovered as the cause. One well-identified target is language. Although some research disagrees with this correlation, the majority of research shows that a mutated FOXP2 causes the observed production deficiency. There is some evidence that the linguistic impairments associated with a mutation of the FOXP2 gene are not simply the result of a fundamental deficit in motor control. Brain imaging of affected individuals indicates functional abnormalities in language-related cortical and basal ganglia regions, demonstrating that the problems extend beyond the motor system. Mutations in FOXP2 are among several (26 genes plus 2 intergenic) loci which correlate with ADHD diagnosis in adults – clinical ADHD is an umbrella label for a heterogeneous group of genetic and neurological phenomena which may result from FOXP2 mutations or other causes. A 2020 genome-wide association study (GWAS) implicates single-nucleotide polymorphisms (SNPs) of FOXP2 in susceptibility to cannabis use disorder.
Clinical significance:
Language disorder: It is theorized that translocation of the 7q31.2 region of the FOXP2 gene causes a severe language impairment called developmental verbal dyspraxia (DVD) or childhood apraxia of speech (CAS). So far this type of mutation has only been discovered in three families across the world, including the original KE family. A missense mutation causing an arginine-to-histidine substitution (R553H) in the DNA-binding domain is thought to be the abnormality in the KE family. This would cause a normally basic residue to be fairly acidic and highly reactive at the body's pH. A heterozygous nonsense mutation, the R328X variant, produces a truncated protein and is involved in the speech and language difficulties of one KE individual and two of their close family members. The R553H and R328X mutations also affected nuclear localization, DNA binding, and the transactivation (increased gene expression) properties of FOXP2. Affected individuals present with deletions, translocations, and missense mutations. When tasked with repetition and verb generation, individuals with DVD/CAS had decreased activation in the putamen and Broca's area in fMRI studies. These areas are commonly known as areas of language function, which is one of the primary reasons that FOXP2 is known as a language gene. Affected individuals have delayed onset of speech and difficulty with articulation, including slurred speech, stuttering, and poor pronunciation, as well as dyspraxia. It is believed that a major part of this speech deficit comes from an inability to coordinate the movements necessary to produce normal speech, including mouth and tongue shaping. Additionally, there are more general impairments in the processing of the grammatical and linguistic aspects of speech. These findings suggest that the effects of FOXP2 are not limited to motor control, as they include comprehension among other cognitive language functions. General mild motor and cognitive deficits are noted across the board. Clinically, these patients can also have difficulty coughing, sneezing, and/or clearing their throats. While FOXP2 has been proposed to play a critical role in the development of speech and language, this view has been challenged by the fact that the gene is also expressed in other mammals as well as birds and fish that do not speak. It has also been proposed that the FOXP2 transcription factor is not so much a hypothetical 'language gene' as part of a regulatory machinery related to the externalization of speech.
Evolution:
The FOXP2 gene is highly conserved in mammals. The human gene differs from that of non-human primates by two amino acid substitutions: a threonine-to-asparagine substitution at position 303 (T303N) and an asparagine-to-serine substitution at position 325 (N325S). In mice it differs from that of humans by three substitutions, and in the zebra finch by seven amino acids. One of the two amino acid differences between humans and chimps also arose independently in carnivores and bats. Similar FOXP2 proteins can be found in songbirds, fish, and reptiles such as alligators. DNA sampling from Homo neanderthalensis bones indicates that their FOXP2 gene was slightly different from, though largely similar to, that of Homo sapiens (i.e. humans). Previous genetic analysis had suggested that the H. sapiens FOXP2 gene became fixed in the population around 125,000 years ago. Some researchers consider the Neanderthal findings to indicate that the gene instead swept through the population over 260,000 years ago, before our most recent common ancestor with the Neanderthals. Other researchers offer alternative explanations for how the H. sapiens version would have appeared in Neanderthals living 43,000 years ago. According to a 2002 study, the FOXP2 gene showed indications of recent positive selection. Some researchers have speculated that positive selection is crucial for the evolution of language in humans. Others, however, were unable to find a clear association between species with learned vocalizations and similar mutations in FOXP2. A 2018 analysis of a large sample of globally distributed genomes confirmed there was no evidence of positive selection, suggesting that the original signal of positive selection may have been driven by sample composition. Insertion of both human mutations into mice, whose version of FOXP2 otherwise differs from the human and chimpanzee versions by only one additional base pair, causes changes in vocalizations as well as other behavioral changes, such as a reduction in exploratory tendencies and a decrease in maze learning time. A reduction in dopamine levels and changes in the morphology of certain nerve cells are also observed.
Interactions:
FOXP2 is known to regulate CNTNAP2, CTBP1, SRPX2 and SCN3A. FOXP2 downregulates CNTNAP2, a member of the neurexin family found in neurons. CNTNAP2 is associated with common forms of language impairment. FOXP2 also downregulates SRPX2, the 'Sushi Repeat-containing Protein X-linked 2'. It directly reduces its expression by binding to its gene's promoter. SRPX2 is involved in glutamatergic synapse formation in the cerebral cortex and is more highly expressed in childhood. SRPX2 appears to specifically increase the number of glutamatergic synapses in the brain, while leaving inhibitory GABAergic synapses unchanged and not affecting dendritic spine length or shape. On the other hand, FOXP2's activity does reduce dendritic spine length and shape, in addition to number, indicating that it has other regulatory roles in dendritic morphology.
In other animals:
Chimpanzees: In chimpanzees, FOXP2 differs from the human version by two amino acids. A study in Germany sequenced the complementary DNA of FOXP2 in chimps and other species and compared it with human complementary DNA in order to find the specific changes in the sequence. FOXP2 was found to be functionally different in humans compared to chimps. Since FOXP2 also affects other genes, its downstream effects are being studied as well. Researchers deduced that there could also be further clinical applications from these studies with regard to illnesses that affect human language ability.
In other animals:
Mice: In mouse FOXP2 gene knockouts, loss of both copies of the gene causes severe motor impairment related to cerebellar abnormalities, and a lack of the ultrasonic vocalisations normally elicited when pups are removed from their mothers. These vocalizations have important communicative roles in mother-offspring interactions. Loss of one copy was associated with impairment of ultrasonic vocalisations and a modest developmental delay. Male mice, on encountering female mice, produce complex ultrasonic vocalisations that have characteristics of song. Mice that carry the R552H point mutation found in the KE family show cerebellar reduction and abnormal synaptic plasticity in striatal and cerebellar circuits. Humanized FOXP2 mice display altered cortico-basal ganglia circuits. The human allele of the FOXP2 gene was transferred into mouse embryos through homologous recombination to create humanized FOXP2 mice. The human variant of FOXP2 also had an effect on the exploratory behavior of the mice. In comparison to knockout mice with one non-functional copy of FOXP2, the humanized mouse model showed opposite effects in tests of its effect on dopamine levels, synaptic plasticity, patterns of expression in the striatum and exploratory behavior. When FOXP2 expression was altered in mice, it affected many different processes, including motor-skill learning and synaptic plasticity. Additionally, FOXP2 is found more in the sixth layer of the cortex than in the fifth, consistent with it having greater roles in sensory integration. FOXP2 is also found in the medial geniculate nucleus of the mouse brain, the processing area that auditory inputs pass through in the thalamus. Its mutations were found to play a role in delaying the development of language learning. It was also found to be highly expressed in the Purkinje cells and cerebellar nuclei of the cortico-cerebellar circuits. High FOXP2 expression has also been shown in the spiny neurons that express type 1 dopamine receptors in the striatum, substantia nigra, subthalamic nucleus and ventral tegmental area. The negative effects of FOXP2 mutations in these brain regions on motor abilities were shown in mice through laboratory tasks. When analyzing the brain circuitry in these cases, scientists found greater levels of dopamine and decreased lengths of dendrites, which caused defects in long-term depression, a process implicated in motor-function learning and maintenance. EEG studies also found that these mice had increased levels of activity in their striatum, which contributed to these results. There is further evidence that mutations of targets of the FOXP2 gene play roles in schizophrenia, epilepsy, autism, bipolar disorder and intellectual disabilities.
In other animals:
Bats: FOXP2 has implications for the development of bat echolocation. In contrast to apes and mice, FOXP2 is extremely diverse in echolocating bats. Twenty-two sequences of non-bat eutherian mammals revealed a total of 20 nonsynonymous mutations, whereas half that number of bat sequences showed 44 nonsynonymous mutations. All cetaceans share three amino acid substitutions, but no differences were found between echolocating toothed whales and non-echolocating baleen whales. Within bats, however, amino acid variation correlated with different echolocating types.
In other animals:
Birds: In songbirds, FOXP2 most likely regulates genes involved in neuroplasticity. Gene knockdown of FOXP2 in Area X of the basal ganglia in songbirds results in incomplete and inaccurate song imitation. Overexpression of FOXP2 was accomplished through injection of adeno-associated virus serotype 1 (AAV1) into Area X of the brain. This overexpression produced effects similar to those of knockdown: juvenile zebra finches were unable to accurately imitate their tutors. Similarly, in adult canaries, higher FOXP2 levels also correlate with song changes. Levels of FOXP2 in adult zebra finches are significantly higher when males direct their song to females than when they sing in other contexts. "Directed" singing refers to a male singing to a female, usually as a courtship display; "undirected" singing occurs when, for example, a male sings while other males are present or while alone. Studies have found that FOXP2 levels vary depending on the social context: when the birds were singing undirected song, there was a decrease of FOXP2 expression in Area X. This downregulation was not observed, and FOXP2 levels remained stable, in birds singing directed song. Differences between song-learning and non-song-learning birds have been shown to be caused by differences in FOXP2 gene expression, rather than by differences in the amino acid sequence of the FOXP2 protein.
In other animals:
Zebrafish: In zebrafish, FOXP2 is expressed in the ventral and dorsal thalamus, the telencephalon and the diencephalon, where it likely plays a role in nervous system development. The zebrafish FOXP2 gene has 85% similarity to the human FOXP2 ortholog.
History:
FOXP2 and its gene were discovered as a result of investigations of an English family known as the KE family, half of whom (15 individuals across three generations) had a speech and language disorder called developmental verbal dyspraxia. Their case was studied at the Institute of Child Health of University College London. In 1990, Myrna Gopnik, Professor of Linguistics at McGill University, reported that the affected members of the KE family had a severe speech impediment with largely incomprehensible talk, characterized chiefly by grammatical deficits. She hypothesized that the basis was not a learning or cognitive disability, but genetic factors affecting mainly grammatical ability. (Her hypothesis led to the popularised notion of a "grammar gene" and to a controversial notion of a grammar-specific disorder.) In 1995, researchers at the University of Oxford and the Institute of Child Health found that the disorder was purely genetic. Remarkably, the inheritance of the disorder from one generation to the next was consistent with autosomal dominant inheritance, i.e., mutation of only a single gene on an autosome (non-sex chromosome) acting in a dominant fashion. This is one of the few known examples of Mendelian (monogenic) inheritance for a disorder affecting speech and language skills, which typically have a complex basis involving multiple genetic risk factors.
History:
In 1998, Oxford University geneticists Simon Fisher, Anthony Monaco, Cecilia S. L. Lai, Jane A. Hurst, and Faraneh Vargha-Khadem identified an autosomal dominant monogenic inheritance localized to a small region of chromosome 7 from DNA samples taken from the affected and unaffected members. The chromosomal region (locus) contained 70 genes. The locus was given the official name "SPCH1" (for speech-and-language-disorder-1) by the Human Genome Nomenclature committee. Mapping and sequencing of the chromosomal region was performed with the aid of bacterial artificial chromosome clones. Around this time, the researchers identified an individual who was unrelated to the KE family but had a similar type of speech and language disorder. In this case, the child, known as CS, carried a chromosomal rearrangement (a translocation) in which part of chromosome 7 had become exchanged with part of chromosome 5. The site of breakage of chromosome 7 was located within the SPCH1 region. In 2001, the team identified in CS a mutation in the middle of a protein-coding gene. Using a combination of bioinformatics and RNA analyses, they discovered that the gene codes for a novel protein belonging to the forkhead-box (FOX) group of transcription factors. As such, it was assigned the official name FOXP2. When the researchers sequenced the FOXP2 gene in the KE family, they found a heterozygous point mutation shared by all the affected individuals, but absent in unaffected members of the family and other people. This mutation is an amino-acid substitution that inhibits the DNA-binding domain of the FOXP2 protein. Further screening of the gene identified multiple additional cases of FOXP2 disruption, including different point mutations and chromosomal rearrangements, providing evidence that damage to one copy of this gene is sufficient to derail speech and language development.
**Higher gauge theory**
Higher gauge theory:
In mathematical physics higher gauge theory is the general study of counterparts of gauge theory that involve higher-degree differential forms instead of the traditional connection forms of gauge theories.
Frameworks for higher gauge theory:
There are several distinct frameworks within which higher gauge theories have been developed. Alvarez et al. extend the notion of integrability to higher dimensions in the context of geometric field theories. Several works of John Baez, Urs Schreiber and coauthors have developed higher gauge theories heavily based on category theory. Arthur Parzygnat has a detailed development of this framework. An alternative approach, motivated by the goal of constructing geometry over spaces of paths and higher-dimensional objects, has been developed by Saikat Chatterjee, Amitabha Lahiri, and Ambar N. Sengupta.
Frameworks for higher gauge theory:
The mathematical framework for traditional gauge theory represents the gauge potential as a connection 1-form on a principal bundle over spacetime. Higher gauge theories provide geometric and category-theoretic, especially higher category-theoretic, frameworks for field theories that involve multiple higher differential forms.
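As a minimal illustration of the idea (the abelian case only, independent of the particular frameworks above), the step from ordinary to higher gauge theory replaces the 1-form potential with a 2-form:

```latex
% Ordinary abelian gauge theory: connection 1-form A, curvature 2-form F
F = \mathrm{d}A, \qquad A \mapsto A + \mathrm{d}\lambda \quad (\lambda \text{ a 0-form})

% Simplest higher analogue: 2-form potential B, 3-form field strength H
H = \mathrm{d}B, \qquad B \mapsto B + \mathrm{d}\Lambda \quad (\Lambda \text{ a 1-form})
```

In both cases the field strength is invariant under the gauge transformation, since d² = 0.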
**Sotrovimab**
Sotrovimab:
Sotrovimab, sold under the brand name Xevudy, is a human neutralizing monoclonal antibody with activity against severe acute respiratory syndrome coronavirus 2, known as SARS-CoV-2. It was developed by GlaxoSmithKline and Vir Biotechnology, Inc. Sotrovimab is designed to attach to the spike protein of SARS-CoV-2. The most common side effects include hypersensitivity (allergic) reactions and infusion-related reactions. Although sotrovimab was used worldwide against SARS-CoV-2, including in the United States under an FDA emergency use authorization (EUA), the FDA canceled the EUA in April 2022 due to lack of efficacy against the Omicron variant.
Medical uses:
In the European Union, sotrovimab is indicated for the treatment of COVID-19 in people aged twelve years and older, weighing at least 40 kilograms (88 lb), who do not require supplemental oxygen and who are at increased risk of the disease becoming severe. Sotrovimab is given by intravenous infusion, preferably within 5 days of onset of COVID-19 symptoms.
Development and mechanism of action:
Sotrovimab's development began in December 2019 at Vir Biotechnology, when Vir scientists first learned of the initial COVID-19 outbreak in China. Vir subsidiary Humabs BioMed had already compiled a library of frozen blood samples from patients infected with viral diseases, including two samples from patients infected with SARS-CoV-1. Vir scientists obtained samples of the novel SARS-CoV-2 virus and mixed them with various antibodies recovered from the old SARS-CoV-1 blood samples. The objective was to identify antibodies effective against both SARS-CoV-1 and SARS-CoV-2; this would imply that the antibodies were targeting highly conserved sequences and in turn would be more likely to remain effective against future variants of SARS-CoV-2. In April 2020, Lawrence Berkeley National Laboratory conducted an X-ray crystallography study at Vir's request to investigate how such antibodies bind to SARS-CoV-2 at the molecular level. The Berkeley Lab data helped Vir identify candidates for further study, and Vir eventually settled on a single candidate antibody, S309. Vir collaborated with GlaxoSmithKline to make various refinements to S309, resulting in sotrovimab. Sotrovimab has been engineered to possess an Fc LS mutation (M428L/N434S) that confers enhanced binding to the neonatal Fc receptor, resulting in an extended half-life and potentially enhanced drug distribution to the lungs. Sotrovimab has demonstrated activity via two antiviral mechanisms in vitro: antibody-dependent cellular cytotoxicity (ADCC) and antibody-dependent cellular phagocytosis (ADCP).
Development and mechanism of action:
Clinical efficacy: The pivotal COMET-ICE study is an ongoing, randomized, double-blind, placebo-controlled study to assess the safety and efficacy of sotrovimab in adults with confirmed COVID-19 (mild, early disease with less than five days of symptoms) at risk of disease progression. An interim analysis of this study reported that sotrovimab reduced the risk of hospitalization for more than 24 hours, or death, by 85% compared with placebo. Overall, 1% of people receiving sotrovimab died or required hospitalization for more than 24 hours, compared with 7% of people treated with placebo. The study is ongoing, and preliminary results have been published in the New England Journal of Medicine. The full analysis of the COMET-ICE trial was published in JAMA and showed that sotrovimab reduced the risk of hospitalization for more than 24 hours, or death, by 79% compared with placebo (1% for the sotrovimab group and 6% for the placebo group). The trial involved 1057 participants and took place before the Omicron variant was prevalent.
Manufacturing:
Sotrovimab is a biologic product which takes six months to manufacture in living cells. It is produced in Chinese hamster ovary cells. At product launch in May 2021, sotrovimab's active pharmaceutical ingredient was produced by WuXi Biologics in China and sent to a GlaxoSmithKline plant in Parma, Italy for further processing into the finished product. In January 2022, the spread of the SARS-CoV-2 Omicron variant began to render other monoclonal antibodies obsolete and caused global demand for sotrovimab to skyrocket. In response, Vir and GlaxoSmithKline announced they were working with Samsung Biologics on manufacturing sotrovimab at an additional site in South Korea.
Society and culture:
Economics: In 2021, the United States government agreed to purchase 1.5 million doses of the drug at $2,100 per dose.
Society and culture:
Legal status: In May 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) completed its review on the use of sotrovimab for the treatment of COVID-19. It concluded that sotrovimab can be used to treat confirmed COVID-19 in adults and adolescents (aged twelve years and above and weighing at least 40 kilograms (88 lb)) who do not require supplemental oxygen and who are at risk of progressing to severe COVID-19. On 16 December 2021, the CHMP recommended authorizing sotrovimab for use in the EU, and authorization was granted the next day. In May 2021, the U.S. Food and Drug Administration (FDA) issued an emergency use authorization (EUA) for sotrovimab for the treatment of mild-to-moderate COVID-19 in people aged twelve years and above weighing at least 40 kilograms (88 lb) with positive results of direct SARS-CoV-2 viral testing and who are at high risk for progression to severe COVID-19, including hospitalization or death. In August 2021, sotrovimab was granted provisional approval for the treatment of COVID-19 in Australia. In September 2021, sotrovimab was granted special exception authorization in Japan. In December 2021, the Medicines and Healthcare products Regulatory Agency (MHRA) in the United Kingdom approved sotrovimab for use in people aged twelve years and over who weigh more than 40 kilograms (88 lb). In March 2022, the FDA withdrew the EUA for sotrovimab in Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont, New Jersey, New York, Puerto Rico, the Virgin Islands, Illinois, Indiana, Michigan, Minnesota, Ohio, Wisconsin, Arizona, California, Hawaii, Nevada, American Samoa, the Commonwealth of the Northern Mariana Islands, the Federated States of Micronesia, Guam, the Marshall Islands, the Republic of Palau, Alaska, Idaho, Oregon, and Washington, due to the high frequency of the Omicron BA.2 sub-variant and data showing that the authorized dose of sotrovimab is unlikely to be effective against that sub-variant. In April 2022, the FDA withdrew the EUA for sotrovimab entirely.
Research:
Sotrovimab is being evaluated in the following clinical trials, all registered at ClinicalTrials.gov: NCT04545060 ("COMET-ICE"), NCT04779879 ("COMET-PEAK"), NCT04501978 ("ACTIV-3-TICO") and NCT04634409 ("BLAZE-4"). In March 2022, Australian virologists observed that sotrovimab may cause a drug-resistant mutation.
**Find My Phone**
Find My Phone:
Find My Phone or similar is the name given by various manufacturers to software and a service for smartphones whereby a registered user can find the approximate location of the phone, if it is switched on, over the Internet, or by the phone sending e-mail or SMS text messages. This helps to locate lost or stolen phones. Apple offers a free service called Find My for iPhones running iOS. Microsoft's My Windows Phone once offered a similar service for phones running Windows Phone. Similarly, Google offers Find My Device for phones running Android. Some of these applications may have limitations worth checking before installation, such as only working in some countries or dependencies on the phone's implementation of GPS. Similar paid or free apps are also available for all device platforms.
Find My Phone:
Similar applications are available for computers. Computers rarely have built-in GPS receivers or mobile telephone network connectivity, so these methods of location and signalling are not available. A computer connected to the Internet by a cabled connection reports its location as the location of the Internet Service Provider (ISP) it is connected to, usually a long distance away and not very useful, although the IP address may help. However, a WiFi-connected computer (typically a laptop) can find its approximate location by checking the WiFi networks in range against a database, allowing an approximate position to be determined and signalled over the Internet.
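A minimal sketch of this database-lookup approach, assuming the documented request shape of Google's Geolocation API; the MAC addresses, signal strengths and API key below are illustrative placeholders, not real values:

```python
import json
import urllib.request

# Hypothetical placeholder key; a real request needs a valid API key.
API_URL = "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY"

payload = {
    "considerIp": False,  # locate by WiFi only, not the ISP's IP address
    "wifiAccessPoints": [
        {"macAddress": "00:25:9c:cf:1c:ac", "signalStrength": -43},
        {"macAddress": "00:25:9c:cf:1c:ad", "signalStrength": -55},
    ],
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The response contains an estimated lat/lng and an accuracy radius in meters
print(result["location"], result["accuracy"])
```

In practice the laptop's WiFi driver supplies the list of visible access points and their signal strengths; the service matches those MAC addresses against its survey database and triangulates an approximate position.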
**Toyota Electronic Modulated Suspension**
Toyota Electronic Modulated Suspension:
TEMS (Toyota Electronic Modulated Suspension) is an electronically controlled shock-absorber system (a form of continuous damping control) that adjusts damping based on multiple factors. It was built and used exclusively by Toyota on selected products during the 1980s and 1990s, first introduced on the Toyota Soarer in 1983. The semi-active suspension system was widely used on luxury and top sport trim packages on most of Toyota's products sold internationally. Its popularity fell after the "bubble economy", as it was seen as an unnecessary expense to purchase and maintain, and it remained in use only on luxury or high-performance sports cars.
Summary:
TEMS consisted of four electronically controlled shock absorbers, one at each wheel, and could operate in either an automatic or a driver-selected mode, depending on the installation of the system used. The technology was installed on top-level Toyota products with four-wheel independent suspension, labeled PEGASUS (Precision Engineered Geometrically Advanced SUSpension). Because of the nature of the technology, TEMS was normally installed on vehicles with front and rear independent suspensions, although there were also TEMS-equipped vehicles with a dependent rear suspension, such as the top packages of minibuses or minivans like the Toyota TownAce/MasterAce and Toyota HiAce.
Summary:
Based on road conditions, the system would increase or decrease damping force for particular situations. TEMS could be tuned to balance ride comfort and road-handling stability on comparatively small suspensions, adding a level of ride adjustment otherwise found on larger, more expensive luxury vehicles. The technology was originally developed and calibrated for Japanese driving conditions and speed limits, but was adapted for international driving conditions in later revisions.
Summary:
As the Japanese recession of the early 1990s began to take effect, the system came to be seen as an unnecessary expense, as buyers were less inclined to purchase products and services regarded as "luxury" and more focused on basic needs. TEMS was still offered on vehicles that were considered luxurious, like the Toyota Crown, Toyota Century, and Toyota Windom, and on the Toyota Supra and Toyota Soarer sports cars.
Summary:
Recently the technology has been installed on luxury minivans like the Toyota Alphard, Toyota Noah and the Toyota Voxy.
The TEMS system has more recently been named "Piezo TEMS" (with piezoelectric ceramics), "Skyhook TEMS", "Infinity TEMS", and most recently "AVS" (Adaptive Variable Suspension).
Configuration settings:
The system was deployed with an earlier two-stage switch labeled “Auto-Sport”, with a later modification of “Auto-Soft-Mid-Hard”. Some variations used a dial to specifically select the level of hardness to the driver's desires. For most driving situations, the “Auto” selection was recommended. When the system was activated, an indicator light reflected the suspension setting selected.
The system components consisted of a control switch, indicator light, four shock absorbers, shock absorber control actuator, shock absorber control computer, vehicle speed sensor, stop lamp switch, with a throttle position sensor and a steering angle sensor on TEMS three stage systems only. All the absorbers are controlled with the same level of hardness.
Operation parameters of TEMS:
The following describes how the system would activate on the earlier two-stage version installed during the 1980s (a simplified sketch of this logic follows the list).
During normal running (up to about 100 km/h (62 mph)): the system chooses the "SOFT" setting, to provide a softer ride.
At high speeds (around 85–100 km/h (53–62 mph) and above): the system selects the "HARD" setting, assuming a more rigid configuration for better ride stability and to reduce roll tendencies.
Braking (for example, reducing speed to 50 km/h (31 mph)): to prevent "nose dive", the system automatically switches the damping force to "HARD" while it senses the brakes being applied. It returns to the "SOFT" state once the brake light is off and the pedal has been released for 2 seconds or more.
Hard acceleration (3-stage systems only): to suppress suspension "squat", the system switches to "HARD" based on accelerator pedal position and throttle position.
Hard cornering (3-stage systems only): to suppress suspension "roll", the system switches to "HARD" based on the steering angle sensor.
SPORT mode: the system remains in the "HARD" position regardless of driving conditions. (On 3-stage systems, the system automatically chooses between the "MID" and "HARD" configurations; in other words, the "SOFT" stage is excluded.)
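As referenced above, the following is a minimal sketch of the two-stage selection logic just described. The thresholds, the hysteresis reading of the 85–100 km/h band, the 2-second brake-release delay handling, and all names are assumptions made for illustration, not Toyota's actual calibration.

```python
# Illustrative two-stage TEMS selection logic based on the description above.
# Thresholds and the hysteresis interpretation are assumptions, not Toyota's calibration.

SOFT, HARD = "SOFT", "HARD"

class TwoStageTems:
    def __init__(self):
        self.setting = SOFT

    def update(self, speed_kmh, brake_light_on, secs_since_brake_release, sport_mode):
        if sport_mode:
            self.setting = HARD                  # SPORT: always HARD
        elif brake_light_on or secs_since_brake_release < 2.0:
            self.setting = HARD                  # firm damping to limit nose dive under braking
        elif speed_kmh >= 100:
            self.setting = HARD                  # high speed: better stability, less roll
        elif speed_kmh <= 85:
            self.setting = SOFT                  # normal running: softer ride
        # between 85 and 100 km/h the previous setting is kept (assumed hysteresis)
        return self.setting

tems = TwoStageTems()
print(tems.update(speed_kmh=60, brake_light_on=False, secs_since_brake_release=60, sport_mode=False))   # SOFT
print(tems.update(speed_kmh=110, brake_light_on=False, secs_since_brake_release=60, sport_mode=False))  # HARD
print(tems.update(speed_kmh=60, brake_light_on=True, secs_since_brake_release=0, sport_mode=False))     # HARD
```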
Vehicles installed:
The following is a list of vehicles in Japan that were installed with the technology. There may have been vehicles exported internationally that were also equipped.
Vehicles installed:
Starlet (EP71-based Turbo S, EP82-based GT) Tercel / Corsa / Corolla II (EL31-based GP turbo) Cynos Sera Corolla / Sprinter (AE92 series GT) Corolla Levin / Sprinter Trueno (AE92 • AE101GT-APEX) Corolla FX (AE92-GT) Corona (ST171-based GT-R) Celica / Carina ED / Corona EXiV (ST183 system) Century Crown Majesta Camry / Vista (SV20-based GT and Prominent G, SV30-based GT) Pronard Aristo (S140) Town Ace / Master Ace Lite Ace Mark II / Chaser / Cresta (GX71-based Twin Cam Grande, GX81-based Twin Cam Grande system, JZX91 Grande G, JZX100 Grande G, JZX101 Grande G, JZX110 Grande G) Windom (MCV10 system G, MCV20 system G, MCV30 system G) Hiace Hilux Surf (KZN130) Hilux Surf (KZN185) Crown Soarer (GZ20 system 2.0GT Twin turbo L, JZZ30 system 2.5GT twin turbo L) Soarer (1UZ-FE V8 UZZ31).
Vehicles installed:
Supra (Select Models) Celsior: Piezo TEMS Noah / Voxy Alphard Land Cruiser (100 series) Ipsum (ACM20 system)
**Apache cTAKES**
Apache cTAKES:
Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) is an open-source natural language processing (NLP) system that extracts clinical information from unstructured text in electronic health records. It processes clinical notes, identifying types of clinical named entities — drugs, diseases/disorders, signs/symptoms, anatomical sites and procedures. Each named entity has attributes for the text span, the ontology mapping code, context (family history of, current, unrelated to patient), and negated/not negated. cTAKES was built using the Apache UIMA (Unstructured Information Management Architecture) framework and the OpenNLP natural language processing toolkit.
Components:
Components of cTAKES are specifically trained for the clinical domain, and create rich linguistic and semantic annotations that can be utilized by clinical decision support systems and clinical research. These components include: a named section identifier, sentence boundary detector, rule-based tokenizer, formatted list identifier, normalizer, context-dependent tokenizer, part-of-speech tagger, phrasal chunker, dictionary lookup annotator, context annotator, negation detector, uncertainty detector, subject detector, dependency parser, patient smoking status identifier, and drug mention annotator.
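To make the role of a few of these components concrete, the following is a purely conceptual sketch of a dictionary-lookup annotator combined with a simple negation detector. It does not use the actual cTAKES (Java/UIMA) API; the mini-dictionary, placeholder codes, trigger words, and attribute names are all invented for illustration.

```python
# Conceptual illustration only -- this is NOT the cTAKES (Java/UIMA) API.
# It sketches the idea behind a dictionary-lookup annotator plus a negation detector.

DICTIONARY = {                      # surface form -> (semantic type, placeholder code)
    "chest pain": ("SignSymptom", "CODE-0001"),
    "hypertension": ("DiseaseDisorder", "CODE-0002"),
    "metformin": ("Drug", "CODE-0003"),
}
NEGATION_TRIGGERS = ("no ", "denies ", "without ")

def annotate(note: str):
    text = note.lower()
    annotations = []
    for term, (sem_type, code) in DICTIONARY.items():
        start = text.find(term)
        if start == -1:
            continue
        window = text[max(0, start - 20):start]      # look a short distance back for a trigger
        negated = any(trigger in window for trigger in NEGATION_TRIGGERS)
        annotations.append({
            "span": (start, start + len(term)),      # text span attribute
            "type": sem_type,                        # semantic type
            "code": code,                            # ontology mapping (placeholder)
            "negated": negated,                      # negation attribute
        })
    return annotations

print(annotate("Patient denies chest pain; continue metformin for diabetes."))
```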
History:
Development of cTAKES began at the Mayo Clinic in 2006. The development team, led by Dr. Guergana Savova and Dr. Christopher Chute, included physicians, computer scientists and software engineers. After its deployment, cTAKES became an integral part of Mayo's clinical data management infrastructure, processing more than 80 million clinical notes. When Dr. Savova moved to Boston Children's Hospital in early 2010, the core development team grew to include members there. Further external collaborations include the University of Colorado, Brandeis University, the University of Pittsburgh, and the University of California at San Diego. Such collaborations have extended cTAKES' capabilities into other areas such as temporal reasoning, clinical question answering, and coreference resolution for the clinical domain. In 2010, cTAKES was adopted by the i2b2 program, and it is a central component of SHARP Area 4. In 2013, cTAKES published its first release as an Apache incubator project: cTAKES 3.0.
History:
In March 2013, cTAKES became an Apache Top Level Project (TLP).
**Odilorhabdin**
Odilorhabdin:
Odilorhabdins are a class of natural antibacterial agents produced by the bacterium Xenorhabdus nematophila. Odilorhabdins act against both Gram-positive and Gram-negative pathogens, and were shown to eliminate infections in mouse models.
Mechanism of action:
Odilorhabdins interfere with the pathogen's protein synthesis and are ribosome-targeting. They bind to the small ribosomal subunit at a site not exploited by previous antibiotics and induce miscoding and premature stop codon bypass. Odilorhabdins were shown to act particularly against carbapenem-resistant members of bacteria family Enterobacteriaceae, having potential to kill pathogens with antimicrobial resistance.
Discovery:
The discovery of odilorhabdins was announced in 2013 and formally described in 2018 by the researchers of the University of Illinois at Chicago and Nosopharm. To identify the antibiotic, the Nosopharm researchers tested 80 cultured bacterial strains for antimicrobial properties and then isolated the active compounds.
**Lumateperone**
Lumateperone:
Lumateperone, sold under the brand name Caplyta, is an atypical antipsychotic medication of the butyrophenone class. It is approved for the treatment of schizophrenia as well as bipolar depression, as either monotherapy or adjunctive therapy (with lithium or valproate). It was developed by Intra-Cellular Therapies, under license from Bristol-Myers Squibb. Lumateperone was approved for medical use in the United States in December 2019, with an initial indication for schizophrenia, and became available in February 2020. It has since demonstrated efficacy in bipolar depression and received FDA approval in December 2021 for depressive episodes associated with both bipolar I and II disorders.
Medical uses:
Schizophrenia On December 20, 2019, the United States Food and Drug Administration (FDA) approved lumateperone for the treatment of schizophrenia in adults.
Bipolar depression In December 2021, the FDA approved lumateperone for the treatment of bipolar depression in adults, as monotherapy and as adjunctive therapy with lithium or valproate. The number needed to treat (NNT) for bipolar depression at a dose of 42 mg daily is 7 patients.
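For context, the number needed to treat is the reciprocal of the absolute risk reduction (ARR); the arithmetic below is a generic illustration of what an NNT of 7 implies and is not taken from the trial publications.

```latex
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}, \qquad
\mathrm{NNT} = 7 \;\Rightarrow\; \mathrm{ARR} = \tfrac{1}{7} \approx 14\%
```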
Pharmacology:
Mechanism of action Lumateperone acts as an antagonist of the 5-HT2A receptor and antagonizes several dopamine receptors (D1, D2, and D4) with lower affinity. It moderately inhibits reuptake at the serotonin transporter. It has additional off-target antagonism at alpha-1 adrenergic receptors, without appreciable antimuscarinic or antihistaminergic properties, limiting side effects associated with other atypical antipsychotics.
Pharmacology:
Pharmacokinetics After taking the medication by mouth, lumateperone reaches maximum plasma concentrations within 1–2 hours and has a terminal elimination half-life of 18 hours. Lumateperone is a substrate for numerous metabolic enzymes, including various glucuronosyltransferase (UGT) isoforms (UGT1A1, 1A4, and 2B15), aldo-keto reductase (AKR) isoforms (AKR1C1, 1B10, and 1C4), and cytochrome P450 (CYP) enzymes (CYP3A4, 2C8, and 1A2). Lumateperone does not cause appreciable inhibition of any common CYP450 enzymes. It is not a substrate for P-glycoprotein.
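As a rough illustration of what an 18-hour terminal half-life means, using a textbook first-order elimination model (not data from the label):

```latex
C(t) = C_{\max}\left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad
t_{1/2} = 18\ \mathrm{h} \;\Rightarrow\; C(36\ \mathrm{h}) = C_{\max}\left(\tfrac{1}{2}\right)^{2} = 25\%\ \text{of } C_{\max}
```

That is, roughly a quarter of the peak concentration remains two half-lives (36 hours) after the peak, under this simplified model.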
History:
The FDA approved lumateperone based on evidence from three clinical trials (Trial 1/NCT01499563, Trial 2/NCT02282761 and Trial 3/NCT02469155) that enrolled 818 adult participants with schizophrenia. The trials were conducted at 33 sites in the United States. Trials 1 and 2 provided data on the benefits and side effects of lumateperone, and Trial 3 provided data on side effects only. In each trial, hospitalized participants with schizophrenia were randomly assigned to receive either lumateperone or a comparison treatment (placebo or active comparator) once daily for four weeks (Trials 1 and 2) or six weeks (Trial 3). Neither the participants nor the health care providers knew which treatment was being given until after the trials were completed. Trials 1 and 2 provided data for the assessment of benefits and side effects through four weeks of therapy; benefit was assessed by measuring the overall improvement in the symptoms of schizophrenia. Trial 3 provided data for the assessment of side effects only, during six weeks of therapy. Two Phase III lumateperone monotherapy studies were conducted and completed for the treatment of bipolar depression: Study 401 and Study 404. A third trial, Study 402, aims to test lumateperone in addition to lithium or valproate; data from this trial were due out in 2020. Study 401 was conducted solely in the United States, while Study 404 was a global study that included patients from the US. Of the entire Study 404 population (381 patients), two-thirds were from Russia and Colombia. At the completion of the two monotherapy Phase III trials, only Study 404 met its primary endpoint and one of its secondary endpoints. In Study 404, patients received 42 mg lumateperone once daily or placebo for six weeks. Study 404 patients saw an improvement in depressive symptoms compared to placebo, as documented by a change in MADRS total score of 4.6.
**Oppenheimer–Phillips process**
Oppenheimer–Phillips process:
The Oppenheimer–Phillips process or strip reaction is a type of deuteron-induced nuclear reaction. In this process the neutron half of an energetic deuteron (a stable isotope of hydrogen with one proton and one neutron) fuses with a target nucleus, transmuting the target to a heavier isotope while ejecting a proton. An example is the nuclear transmutation of carbon-12 to carbon-13.
Oppenheimer–Phillips process:
The process allows a nuclear interaction to take place at lower energies than would be expected from a simple calculation of the Coulomb barrier between a deuteron and a target nucleus. This is because, as the deuteron approaches the positively charged target nucleus, it experiences a charge polarization where the "proton-end" faces away from the target and the "neutron-end" faces towards the target. The fusion proceeds when the binding energy of the neutron and the target nucleus exceeds the binding energy of the deuteron itself; the proton formerly in the deuteron is then repelled from the new, heavier, nucleus.
History:
An explanation of this effect was published by J. Robert Oppenheimer and Melba Phillips in 1935, considering experiments with the Berkeley cyclotron showing that some elements became radioactive under deuteron bombardment.
Mechanism:
During the O-P process, the deuteron's positive charge is spatially polarized, and collects preferentially at one end of the deuteron's density distribution, nominally the "proton end". As the deuteron approaches the target nucleus, the positive charge is repelled by the electrostatic field until, assuming the incident energy is not sufficient for it to surmount the barrier, the "proton end" approaches to a minimum distance, having climbed the Coulomb barrier as far as it can. If the "neutron end" is close enough for the strong nuclear force, which only operates over very short distances, to exceed the repulsive electrostatic force on the "proton end", fusion of a neutron with the target nucleus may begin. The reaction proceeds as follows (see the example equation below). In the O-P process, as the neutron fuses to the target nucleus, the deuteron binding force pulls the "proton end" closer than a naked proton could otherwise have approached on its own, increasing the potential energy of the positive charge. As a neutron is captured, a proton is stripped from the complex and is ejected. The proton at this point is able to carry away more than the incident kinetic energy of the deuteron, since it has approached the target nucleus more closely than is possible for an isolated proton with the same incident energy. In such instances, the transmuted nucleus is left in an energy state as if it had fused with a neutron of negative kinetic energy. There is an upper bound on how much energy the proton can be ejected with, set by the ground state of the daughter nucleus.
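Written out in standard notation, the stripping reaction has the general form below; the carbon case restates the carbon-12 example mentioned earlier in the article.

```latex
{}^{A}_{Z}\mathrm{X} + {}^{2}_{1}\mathrm{H} \;\rightarrow\; {}^{A+1}_{Z}\mathrm{X} + {}^{1}_{1}\mathrm{H},
\qquad\text{e.g.}\qquad
{}^{12}_{6}\mathrm{C} + {}^{2}_{1}\mathrm{H} \;\rightarrow\; {}^{13}_{6}\mathrm{C} + {}^{1}_{1}\mathrm{H}
```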
**Karen Chin**
Karen Chin:
Karen Chin is an American paleontologist and taphonomist who is considered one of the world's leading experts in coprolites.
Biography:
Chin loved studying living things as a child, and enjoyed memorizing the names of species that she read about. As a college student, she worked as a nature interpreter for the National Park Service. When Chin was in graduate school at Montana State University, studying modern grasslands, she took a job at the Museum of the Rockies. There Chin worked with Jack Horner, preparing fossils from the Two Medicine Formation for study. She began by slicing newly unearthed Maiasaura bones for Horner to study with a microscope. Among the fossils were eggs and nests and unusual "blobs" that had not yet been identified. Chin asked to be the one to study these fossils, and her research would confirm her hypothesis that they were coprolites. This experience was so positive that Chin says it gave her "fossil fever", and she turned her attention to studying fossils. She notes that due to her gender and racial identity, she is unusual in her field, saying: I was an atypical student when I began my academic career in paleontology because I was female, a person of color (Black, Chinese, plus...), and older than most students entering graduate school. Yet ironically, the people that have been important mentors to me are three white men who had confidence in my abilities and offered critical guidance on my academic journey. The generous counsel of these scientists helped me succeed. In turn, I am happy to demonstrate that paleontologists can come in all colors and flavors.
Biography:
Chin is a professor at the University of Colorado, Boulder, and Curator of Paleontology at the University of Colorado Museum of Natural History.
Selected publications:
Chin, Karen; Feldmann, Rodney M.; Tashman, Jessica N. "Consumption of crustaceans by megaherbivorous dinosaurs: dietary flexibility and dinosaur life history strategies". Scientific Reports. 7.
Chin, K., Hartman, J.H., and Roth, B. 2009. Opportunistic exploitation of dinosaur dung: fossil snails in coprolites from the Upper Cretaceous Two Medicine Formation of Montana. Lethaia 42: 185–198.
Chin, K., Bloch, J.D., Sweet, A.R., Tweet, J.S., Eberle, J.J., Cumbaa, S.L., Witkowski, J., and Harwood, D.M. 2008. Life in a temperate polar sea: a unique taphonomic window on the structure of a Late Cretaceous Arctic marine ecosystem. Proceedings of the Royal Society B 275: 2675–2685.
Tweet, J.S., Chin, K., Braman, D.R., and Murphy, N.L. 2008. Probable gut contents within a specimen of Brachylophosaurus canadensis (Dinosauria: Hadrosauridae) from the Upper Cretaceous Judith River Formation of Montana. PALAIOS 23: 625–636.
Chin, K. 2007. The paleobiological implications of herbivorous dinosaur coprolites from the Upper Cretaceous Two Medicine Formation of Montana: why eat wood? Palaios 22: 554–566.
Chin, K., and Bishop, J. 2007. Exploited twice: bored bone in a theropod coprolite from the Jurassic Morrison Formation of Utah, U.S. In: Bromley, R.G., Buatois, L.A., Mángano, M.G., Genise, J.F., and Melchor, R.N. [eds.], Sediment-Organism Interactions: A Multifaceted Ichnology. SEPM Special Publications, v. 88, pp. 377–385.
Chin, K., Tokaryk, T.T., Erickson, G.M., Calk, L.C., 1998, A king-sized theropod coprolite, Nature v. 393, pp. 680–682.
**Sham surgery**
Sham surgery:
Sham surgery (placebo surgery) is a faked surgical intervention that omits the step thought to be therapeutically necessary.
Sham surgery:
In clinical trials of surgical interventions, sham surgery is an important scientific control. This is because it isolates the specific effects of the treatment as opposed to the incidental effects caused by anesthesia, the incisional trauma, pre- and postoperative care, and the patient's perception of having had a regular operation. Thus sham surgery serves an analogous purpose to placebo drugs, neutralizing biases such as the placebo effect.
Human research:
A number of studies done under Institutional Review Board-approved settings have delivered important and surprising results. With the progress in minimally invasive surgery, sham procedures can be more easily performed as the sham incision can be kept small similarly to the incision in the studied procedure.
Human research:
A review of studies with sham surgery found 53 such studies: in 39 there was improvement with the sham operation and in 27 the sham procedure was as good as the real operation. Sham-controlled interventions have therefore identified interventions that are useless but had been believed by the medical community to be helpful based on studies without the use of sham surgery.
Human research:
Examples:
Cardiovascular diseases: In 1939, Fieschi introduced internal mammary artery ligation as a procedure to improve blood flow to the heart. Not until a controlled study was done two decades later could it be demonstrated that the procedure was only as effective as the sham surgery.
Human research:
Central nervous system disease: In neurosurgery, cell-transplant surgical interventions were offered in many centers around the world for patients with Parkinson's disease, until sham-controlled experiments involving the drilling of burr holes into the skull demonstrated such interventions to be ineffective and possibly harmful. Subsequently, over 90% of surveyed investigators believed that future neurosurgical interventions (e.g. gene transfer therapies) should be evaluated by sham-controlled studies, as these are superior to open-control designs, and found it unethical to conduct an open-control study because the design is not strong enough to protect against the placebo effect and bias. Kim et al. point out that sham procedures can differ significantly in invasiveness; for instance, in neurosurgical experiments the investigator may drill a burr hole to the dura mater only, or enter the brain. In March 2013, a sham surgical study of a popular but biologically implausible venous balloon angioplasty procedure for multiple sclerosis showed the surgery was no better than placebo.
Human research:
Orthopedic diseases: Moseley and coworkers studied the effect of arthroscopic surgery for osteoarthritis of the knee, establishing two treatment groups and a sham-operated control group. They found that patients in the treatment groups did no better than those in the control group. The fact that all three groups improved equally points to the placebo effect in surgical interventions.
Human research:
In a 2016 study it was found that arthroscopic partial meniscectomy does not offer any benefit over sham surgery in relieving symptoms of knee locking or catching in patients with degenerative meniscal tears. A randomised controlled trial was carried out to investigate the effectiveness of shoulder surgery to remove an acromial spur (bony protuberance on x-ray) in patients with shoulder pain. This found that improvement after sham surgery was as great as with real surgery. A systematic review has identified a number of studies comparing orthopedic surgery to sham surgery. This demonstrates that it is possible to undertake such studies and that the findings are important.
Animal research:
Sham surgery has been widely used in surgical animal models. Historically, studies in animals also allowed the removal or alteration of an organ; using sham-operated animals as control, deductions could be made about the function of the organ. Sham interventions can also be performed as controls when new surgical procedures are developed.
Animal research:
For instance, a study documenting the effect of optic nerve section (ONS) on guinea pigs detailed its sham surgery as: "In the case of optic nerve section, a small incision was then made in the dural sheath of the optic nerve to access the nerve fibers, which were teased free and cut. The same procedure was followed for animals undergoing sham surgery, except that the optic nerve was left intact after visualization."
**Desmodromic valve**
Desmodromic valve:
In general mechanical terms, the word desmodromic is used to refer to mechanisms that have different controls for their actuation in different directions. A desmodromic valve is a reciprocating engine poppet valve that is positively closed by a cam and leverage system, rather than by a more conventional spring.
Desmodromic valve:
The valves in a typical four-stroke engine allow the air/fuel mixture into the cylinder at the beginning of the cycle and exhaust spent gases at the end of the cycle. In a conventional four-stroke engine, valves are opened by a cam and closed by return spring. A desmodromic valve has two cams and two actuators, for positive opening and closing without a return spring.
Etymology:
The word comes from the Greek words desmos (δεσμός, translated as "bond" or "knot") and dromos (δρόμος, "track" or "way"). This denotes the major characteristic of the valves being continuously "bound" to the camshaft.
Idea:
The common valve spring system is satisfactory for traditional mass-produced engines that do not rev highly and are of a design that requires low maintenance. At the time of initial desmodromic development, valve springs were a major limitation on engine performance because they would break from metal fatigue. In the 1950s, new vacuum-melt processes helped to remove impurities from the metal in valve springs, greatly increasing their life and efficiency. However, many springs would still fail at sustained operation above 8000 RPM. The desmodromic system was devised to remedy this problem by completely removing the need for a spring. Furthermore, as maximum RPM increases, higher spring force is required to prevent valve float, leading to larger springs (with increased spring mass, and thus greater inertia), cam drag (as the valve springs require energy to compress, robbing the engine of power), and higher wear on the parts at all speeds, problems addressed by the desmodromic mechanism.
Design and history:
Fully controlled valve movement was conceived during the earliest days of engine development, but devising a system that worked reliably and was not overly complex took a long time. Desmodromic valve systems are first mentioned in patents in 1896 by Gustav Mees. Austin's marine engine of 1910 produced 300 bhp and was installed in a speedboat called "Irene I"; its all-aluminium, twin-overhead-valve engine had twin magnetos, twin carburettors and desmodromic valves. The 1914 Grand Prix Delage and Nagant (see Pomeroy "Grand Prix Car") used a desmodromic valve system (quite unlike the present day Ducati system). Azzariti, a short-lived Italian manufacturer from 1933 to 1934, produced 173 cc and 348 cc twin-cylinder engines, some of which had desmodromic valve gear, with the valve being closed by a separate camshaft. The Mercedes-Benz W196 Formula One racing car of 1954–1955, and the Mercedes-Benz 300SLR sports racing car of 1955, both had desmodromic valve actuation.
Design and history:
In 1956, Fabio Taglioni, a Ducati engineer, developed a desmodromic valve system for the Ducati 125 Grand Prix, creating the Ducati 125 Desmo.
He was quoted: The specific purpose of the desmodromic system is to force the valves to comply with the timing diagram as consistently as possible. In this way, any lost energy is negligible, the performance curves are more uniform and dependability is better.
The engineers that came after him continued that development, and Ducati held a number of patents relating to desmodromics. Desmodromic valve actuation has been applied to top-of-the-range production Ducati motorcycles since 1968, with the introduction of the "widecase" Mark 3 single cylinders.
In 1959 the Maserati brothers introduced one of their final designs: a desmodromic four-cylinder, 2000cc engine for their last O.S.C.A. Barchetta.
Comparison with conventional valvetrains:
In modern engines, valve spring failure at high RPM has been mostly remedied. The main benefit of the desmodromic system is the prevention of valve float at high rpm.
Comparison with conventional valvetrains:
In traditional spring-valve actuation, as engine speed increases, the inertia of the valve will eventually overcome the spring's ability to close it completely before the piston reaches top dead centre (TDC). This can lead to several problems. First, the valve does not completely return to its seat before combustion begins. This allows combustion gases to escape prematurely, leading to a reduction in cylinder pressure which causes a major decrease in engine performance. This can also overheat the valve, possibly warping it and leading to catastrophic failure. Second, and most damaging, the piston can collide with the valve, destroying both. In spring-valve engines the traditional remedy for valve float is to stiffen the springs. This increases the seat pressure of the valve (the static pressure that holds the valve closed). This is beneficial at higher engine speeds because of a reduction in the aforementioned valve float. The drawback is that the engine has to work harder to open the valve at all engine speeds. The higher spring pressure causes greater friction (hence temperature and wear) in the valvetrain.
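A simplified lumped-mass way to see why float appears at high speed (my own sketch, not from the source): the valve follows the cam on the closing side only while the spring can supply the required deceleration, and that requirement grows with the square of camshaft speed.

```latex
F_{\text{spring}}(x) \;\ge\; m_{\text{eff}}\,\lvert \ddot{x} \rvert ,
\qquad
\ddot{x} = \omega^{2}\,\frac{d^{2}L}{d\theta^{2}}
```

Here L(θ) is the cam lift profile, ω the camshaft angular speed, and m_eff the effective moving mass; since the needed force scales with ω², a spring adequate at low rpm eventually falls short, which is the valve float described above.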
Comparison with conventional valvetrains:
The desmodromic system avoids this problem, because it does not have to overcome the force of the spring. It must still overcome the inertia of the valve opening and closing, and that depends on the mass distribution of the moving parts. The effective mass of a traditional valve with spring includes one-half of the valve spring mass for symmetric springs and all of the valve spring retainer mass. However, a desmodromic system must deal with the inertia of the two rocker arms per valve, so this advantage depends greatly on the skill of the designer. Another disadvantage is the contact point between the cams and rocker arms. It is relatively easy to use roller tappets in conventional valvetrains, although it does add considerable moving mass. In a desmodromic system the roller would be needed at one end of the rocker arm, which would greatly increase its moment-of-inertia and negate its "effective mass" advantage. Thus, desmo systems have generally needed to deal with sliding friction between the cam and rocker arm and therefore may have greater wear. The contact points on most Ducati rocker arms are hard-chromed to reduce this wear. Another possible disadvantage is that it would be very difficult to incorporate hydraulic valve lash adjusters in a desmodromic system, so the valves must be periodically adjusted, but this is true of typical performance oriented motorcycles as valve lash is typically set using a shim under a cam follower.
Disadvantages:
Before the days when valve drive dynamics could be analyzed by computer, desmodromic drive seemed to offer solutions for problems that were worsening with increasing engine speed. Since those days, lift, velocity, acceleration, and jerk curves for cams have been modelled by computer to reveal that cam dynamics are not what they seemed. With proper analysis, problems relating to valve adjustment, hydraulic tappets, push rods, rocker arms, and above all, valve float, became things of the past without desmodromic drive.
Disadvantages:
Today most automotive engines use overhead cams, driving a flat tappet to achieve the shortest, lightest weight, and most inelastic path from cam to valve, thereby avoiding elastic elements such as pushrod and rocker arm. Computers have allowed for fairly accurate acceleration modelling of valve-train systems.
Disadvantages:
Before numerical computing methods were readily available, acceleration was only attainable by differentiating cam lift profiles twice, once for velocity and again for acceleration. This generates so much hash (noise) that the second derivative (acceleration) was uselessly inaccurate. Computers permitted integration from the jerk curve, the third derivative of lift, which is conveniently a series of contiguous straight lines whose vertices can be adjusted to give any desired lift profile.
Disadvantages:
Integration of the jerk curve produces a smooth acceleration curve while the third integral gives an essentially ideal lift curve (cam profile).
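The following is a minimal numerical sketch of that design approach: start from a piecewise-linear jerk profile and integrate three times to obtain acceleration, velocity, and lift. The vertex values are arbitrary placeholders; in a real cam design they would be adjusted so velocity and lift return to zero at the end of the valve event.

```python
# Sketch: integrate a piecewise-linear jerk curve three times to get
# acceleration, velocity, and lift, as described in the text.
import numpy as np

theta = np.linspace(0.0, 1.0, 1001)                   # normalized cam angle over the event
vertices_x = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]            # jerk curve as contiguous straight lines
vertices_y = [0.0, 400.0, -400.0, -400.0, 400.0, 0.0]  # placeholder vertex values
jerk = np.interp(theta, vertices_x, vertices_y)

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, starting from zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

acceleration = cumtrapz(jerk, theta)   # first integral of jerk
velocity = cumtrapz(acceleration, theta)
lift = cumtrapz(velocity, theta)       # third integral gives the lift curve (cam profile)

print(f"peak acceleration: {acceleration.max():.1f}, peak lift: {lift.max():.4f}")
```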
With such cams, which mostly do not look like the ones "artists" formerly designed, valve noise (lift-off) went away and valve train elasticity came under scrutiny.
Disadvantages:
Today, most cams have mirror image (symmetric) profiles with identical positive and negative acceleration while opening and closing valves. However, some high speed (in terms of engine RPM) motors now employ asymmetrical cam profiles in order to quickly open valves and set them back in their seats more gently to reduce wear. As well, production vehicles have employed asymmetrical cam lobe profiles since the late 1940s, as seen in the 1948 Ford V8. In this motor both the intake and exhaust profiles had an asymmetric design. More modern applications of asymmetrical camshafts include Cosworth's 2.3 liter crate motors, which use aggressive profiles to reach upwards of 280 brake horsepower. An asymmetric cam either opens or closes the valves more slowly than it could, with the speed being limited by Hertzian contact stress between curved cam and flat tappet, thereby ensuring a more controlled acceleration of the combined mass of the reciprocating componentry (specifically the valve, tappet and spring).
Disadvantages:
In contrast, desmodromic drive uses two cams per valve, each with separate rocker arm (lever tappets). Maximum valve acceleration is limited by the cam-to-tappet galling stress, and therefore is governed by both the moving mass and the cam contact area. Maximum rigidity and minimum contact stress are best achieved with conventional flat tappets and springs whose lift and closure stress is unaffected by spring force; both occur at the base circle, where spring load is minimum and contact radius is largest. Curved (lever) tappets of desmodromic cams cause higher contact stress than flat tappets for the same lift profile, thereby limiting rate of lift and closure.
Disadvantages:
With conventional cams, stress is highest at full lift, when turning at zero speed (initiation of engine cranking), and diminishes with increasing speed as the inertial force of the valve counters spring pressure, while a desmodromic cam has essentially no load at zero speed (in the absence of springs), its load being entirely inertial, and therefore increasing with speed. Its greatest inertial stress bears on its smallest radius. Acceleration forces for either method increase with the square of speed, as kinetic energy does. Valve float was analyzed and found to be caused largely by resonance in valve springs that generated oscillating compression waves among the coils, much like a Slinky. High-speed photography showed that at specific resonant speeds, valve springs were no longer making contact at one or both ends, leaving the valve floating before crashing into the cam on closure.
Disadvantages:
For this reason, today as many as three concentric valve springs are sometimes nested inside one another; not for more force (the inner ones having no significant spring constant), but to act as snubbers to reduce oscillations in the outer spring. An early solution to oscillating spring mass was the mousetrap or hairpin spring used on Norton Manx engines. These avoided resonance but were ungainly to locate inside cylinder heads.
Disadvantages:
Valve springs that do not resonate are progressive, wound with varying pitch or varying diameter (the latter called beehive springs, from their shape). The number of active coils in these springs varies during the stroke, the more closely wound coils being on the static end and becoming inactive as the spring compresses, or, as in the beehive spring, the small-diameter coils at the top being stiffer. Both mechanisms reduce resonance because spring force and the spring's moving mass vary with stroke. This advance in spring design removed valve float, the initial impetus for desmodromic valve drive.
Examples:
Famous examples include the successful Mercedes-Benz W196 and Mercedes-Benz 300 SLR race cars and, most commonly, modern Ducati motorcycles.
Examples:
Ducati motorcycles with desmodromic valves have won numerous races and championships, including Superbike World Championships from 1990 to 1992, 1994–96, 1998–99, 2001, 2003–04, 2006, 2008 and 2011. Ducati's return to Grand Prix motorcycle racing was powered by a desmodromic V4 990 cc engine in the GP3 (Desmosedici) bike, which went on to claim several victories, including a one-two finish at the final 990 cc MotoGP race at Valencia, Spain in 2006. With the onset of the 800 cc era in 2007, they are generally still considered to be the most powerful engines in the sport, and have powered Casey Stoner to the 2007 MotoGP Championship and Ducati to the constructors championship with the GP7 (Desmosedici) bike.
**Cotton Candy (single-board computer)**
Cotton Candy (single-board computer):
The Cotton Candy is a very small, fanless single-board computer on a stick, putting the full functions of a personal computer on a device the size of a USB memory stick, manufactured by the Norwegian-based hardware and software for-profit startup company FXI Technologies (also referred to as just "FXI Tech").
Overview:
Cotton Candy is a low-power ARM architecture CPU based computer which uses dual-core processors such as the dual-core 1.2 GHz Exynos 4210 (45 nm ARM Cortex-A9 with 1 MB L2 cache) SoC (system on a chip) by Samsung, featuring a quad-core 200 MHz ARM Mali-400 MP OpenGL ES 2.0-capable 2D/3D graphics processing unit, an audio and video decoder hardware engine, and a TrustZone Cryptographic Engine and Security Accelerator (CESA) co-processor. The platform is said to be able to stream and decode H.264 1080p content, and to be able to use desktop-class interfaces such as KDE or GNOME under Linux. FXI Technologies claimed it would run both Android 4.0 (Ice Cream Sandwich) and the latest Ubuntu Desktop Linux operating systems, leveraging Linaro builds and Linux kernel optimizations. As of 13 September 2012, FXI started to ship to those that pre-ordered devices, and as of November 2013 the Cotton Candy was generally available. FXI also made a beta Android ICS image and a beta Linux image available for download. On 16 July 2014, FXI declared bankruptcy.
Reception:
In January 2012, the Cotton Candy was a top-10 finalist at the "Last Gadget Standing" new technology competition at CES 2012. Also at CES 2012, LaptopMag.com made Cotton Candy a top-10 finalist for its "Readers' Choice for Best of CES 2012" award. EFYTimes News Network likewise named FXI Technologies' Cotton Candy a "Top 10 Gadgets Launched @ CES 2012".
**Fenpropimorph**
Fenpropimorph:
Fenpropimorph is a morpholine-derived fungicide used in agriculture, primarily on cereal crops such as wheat. It has been reported to disrupt eukaryotic sterol biosynthesis pathways, notably by inhibiting fungal Δ14 reductases. It has also been reported to inhibit mammalian sterol biosynthesis by affecting lanosterol demethylation. Although used in agriculture for pest management purposes, it has been reported to have a strong adverse effect on sterol biosynthesis in higher plants by inhibiting the cycloeucalenol-obtusifoliol isomerase. This inhibition was shown not only to alter the lipid composition of the plasma membrane, but also to impact cell division and growth in plants. In addition to its effects on fungi, fenpropimorph is also a very high affinity ligand of the mammalian sigma receptor.
**Convenient number**
Convenient number:
A convenient number is a number which, in several situations, proves convenient for humans to use in counting and measuring. The concept is related to preferred numbers (standard recommendations used for choosing product dimensions).
The convenient numbers in this article were developed in the USA in connection with the attempted introduction of the metric system in the United States in the 1970s. Hence they may be viewed as a recommendation for choosing product dimensions when switching to the metric system, but can also have other uses.
History:
In the 1970s, the American National Bureau of Standards (NBS), which was later renamed to the National Institute of Standards and Technology (NIST), defined a set of convenient numbers when it was developing procedures for metrication in the United States.
History:
An NBS technical note describes that system of convenient metric values as the 1-2-5 series in reverse, with assigned preferences for those numbers which are multiples of 5, 2, and 1 (plus their powers of 10). Linear dimensions above 100 mm were excluded (because such measurements are defined by another set of rules). A table of this 5, 2, 1 series can be seen below in the section "Schedule of convenient numbers between 10 and 100". The NBS technical note also states that "Basically, integers are more convenient than expressions which include decimal parts [decimal fractions]. Furthermore, where measuring devices are used, values which represent numbered subdivisions on such instruments are more useful than values which have to be interpolated. For example, where a tape or a scale is graduated in intervals of 5, any value that represents a multiple of 5 is more "convenient" to measure or verify than one which is not. In addition, where operations involve the subdivision of quantities into two or more equal parts, any number that is highly divisible has an explicit advantage."
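The following sketch ranks the integers between 10 and 100 with a simplified divisibility rule (multiples of 10 first, then of 5, then of 2, then the rest). It only approximates the spirit of the NBS schedule; it is not a reproduction of the actual preference columns, which distinguish more levels than this.

```python
# Simplified illustration of "convenience" by divisibility -- not the actual NBS schedule.
def convenience_rank(n: int) -> int:
    """Lower rank = more convenient, using a simplified 10/5/2 rule."""
    if n % 10 == 0:
        return 1
    if n % 5 == 0:
        return 2
    if n % 2 == 0:
        return 3
    return 4

by_rank = {}
for n in range(10, 101):
    by_rank.setdefault(convenience_rank(n), []).append(n)

for rank in sorted(by_rank):
    print(rank, by_rank[rank][:10], "...")
```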
Schedule of convenient numbers between 10 and 100:
Notes: Numbers are shown once only, in the highest applicable preference column. (For example, the number 20 would occur as 3rd, 4th, 5th, and 6th preference as well as 2nd preference).
Schedule of convenient numbers between 10 and 100:
In some contexts, 25 and 75 may become 2nd preferences rather than 4th preferences. The Technical Note also states, "In the practical application of a "convenient numbers approach" to the selection of suitable metric values, it is desirable to start with the highest possible preference and then to gradually refine the difference until an acceptable and convenient metric value has been found."
**End mill**
End mill:
An end mill is a type of milling cutter, a cutting tool used in industrial milling applications. It is distinguished from the drill bit in its application, geometry, and manufacture. While a drill bit can only cut in the axial direction, most milling bits can cut in the radial direction. Not all mills can cut axially; those designed to cut axially are known as end mills.
End mill:
End mills are used in milling applications such as profile milling, tracer milling, face milling, and plunging.
Types:
Several broad categories of end- and face-milling tools exist, such as center-cutting versus non-center-cutting (whether the mill can take plunging cuts); and categorization by number of flutes; by helix angle; by material; and by coating material. Each category may be further divided by specific application and special geometry.
Types:
A very popular helix angle, especially for general cutting of metal materials, is 30°. For finishing end mills, a tighter spiral is common, with helix angles of 45° or 60°. Straight-flute end mills (helix angle 0°) are used in special applications, such as milling plastics or composites of epoxy and glass. Straight-flute end mills were also used historically for metal cutting, before the invention of the helical-flute end mill by Carl A. Bergstrom of the Weldon Tool Company in 1918.
Types:
There exist end mills with variable flute helix or pseudo-random helix angle, and discontinuous flute geometries, to help break material into smaller pieces while cutting (improving chip evacuation and reducing risk of jamming) and reduce tool engagement on big cuts. Some modern designs also include small features like the corner chamfer and chipbreaker. While more expensive, due to more complex design and manufacturing process, such end mills can last longer due to less wear and improve productivity in high speed machining (HSM) applications.
Types:
It is becoming increasingly common for traditional solid end mills to be replaced by more cost-effective inserted cutting tools (which, though more expensive initially, reduce tool-change times and allow for the easy replacement of worn or broken cutting edges rather than the entire tool). Another advantage of indexable end mills (another term for tools with inserts) is their ability to work on a range of materials, rather than being specialized for a certain material type like more traditional end mills. For the time being, however, this generally only applies to larger-diameter end mills, at or above 3/4 of an inch. These end mills are generally used for roughing operations, whereas traditional end mills are still used for finishing and for work where a smaller diameter or a tighter tolerance is required; modular tooling introduces additional margins of error that can compound with each new component, whereas a solid tool can provide a smaller tolerance range for the same price level.
Types:
End mills are sold in both imperial and metric shank and cutting diameters. In the USA, metric is readily available, but it is only used in some machine shops and not others; in Canada, due to the country's proximity to the US, much the same is true. In Asia and Europe, metric diameters are standard.
Geometry:
A variety of grooves, slots, and pockets in the work-piece may be produced from a variety of tool bits. Common tool bit types are: square end cutters, ball end cutters, t-slot cutters, and shell mills. Square end cutters can mill square slots, pockets, and edges. Ball end cutters mill radiused slots or fillets. T-slot cutters mill exactly that: T-shaped slots. Shell end cutters are used for large flat surfaces and for angle cuts. There are variations of these tool types as well.
Geometry:
There are four critical angles of each cutting tool: end cutting edge angle, axial relief angle, radial relief angle, and radial rake angle.
Geometry:
Depending on the material being milled, and what task should be performed, different tool types and geometry may be used. For instance, when milling a material like aluminum, it may be advantageous to use a tool with very deep, polished flutes, a very sharp cutting edge and high rake angles. When machining a tough material such as stainless steel, however, shallow flutes and a squared-off cutting edge will optimize material removal and tool life.
Geometry:
A wide variety of materials are used to produce cutting tools. Carbide inserts are the most common because they are good for high-production milling. High-speed steel is commonly used when a special tool shape is needed, but it is not usually used for high-production processes. Ceramic inserts are typically used in high-speed machining with high production. Diamond inserts are typically used on products that require tight tolerances and high surface quality, typically in nonferrous or non-metallic materials.
Geometry:
In the early 1990s, use of coatings became more common. Coatings can provide various benefits, including wear resistance, reduction of friction to assist with chip evacuation, and increased heat resistance. Most of these coatings are referred to by their chemical composition.
Geometry:
Though a PCD vein is not a coating, some end mills are manufactured with a 'vein' of polycrystalline diamond. The vein is formed in a high-temperature, high-pressure environment. The vein is formed in a blank, and then the material is ground out along the vein to form the cutting edge. Although these tools can be very costly, they can last many times longer than other tooling.
Geometry:
Advances in end mill coatings are being made, however, with coatings such as Amorphous Diamond and nanocomposite PVD coatings beginning to be seen at high-end shops (as of 2004).
Although coatings have a typical color, manufacturers may modify the coating process or add additives to change the appearance without affecting the performance as part of their branding. Bright blues, reds and turquoise are among the "unnatural" colors.
Geometry:
End mills are typically made on CNC (computer numeric controlled) tool and cutter grinder machines under high-pressure lubricants such as water, water-soluble oil, and high-flashpoint oil. Grinding inside the machine is accomplished with abrasive wheels mounted on a spindle (and in some cases, multiple spindles). Depending on what material is being ground, these wheels are made with industrial diamond (when grinding tungsten carbide), cubic boron nitride (when grinding cobalt steel), and other materials (when grinding, for instance, ceramics), set in a bond (sometimes copper).
Flute types:
Single flute: used to remove a large amount of material at a very fast rate; traditionally used in roughing operations.
2 flute: allows more chips to be cleared from the part; primarily used in slotting and pocketing operations in non-ferrous materials.
3 flute: similar to the 2-flute end mill, but can be used to cut both ferrous and non-ferrous materials.
4+ flute: designed to run at faster feed rates (the feed-rate relation is sketched below), but having more flutes makes chip removal more difficult.
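The flute-count/feed-rate relationship referenced above follows from the standard chip-load relation; the numbers in the example are illustrative only, not a recommendation for any particular cutter or material.

```latex
v_f = n \cdot z \cdot f_z
\qquad\text{e.g.}\qquad
n = 10{,}000\ \mathrm{rpm},\; z = 4,\; f_z = 0.05\ \mathrm{mm} \;\Rightarrow\; v_f = 2{,}000\ \mathrm{mm/min}
```

Here v_f is the table feed, n the spindle speed, z the number of flutes, and f_z the feed (chip load) per tooth, which is why more flutes allow a faster feed at the same chip load, at the cost of less room for chip evacuation.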
Operations:
Roughing: the purpose is to remove a large amount of material from the workpiece, sometimes to get rid of excess material in order to get closer to the final shape. It attempts to get very close to the finalized shape and is traditionally the first major operation in the machining process. Contouring/Profiling: a process used to mill different surfaces, such as flat or irregular ones. This type of process can be done during the roughing or finishing phase of the overall operation. Facing: an operation used to bring the face of the part down to a specified dimension. Facing can be done using end mills or a special face mill. Pocketing/Slotting: a process to make a pocket on the inside of the part. A pocket can be shallow or deep, depending on the specifications.
**Crackme**
Crackme:
A crackme (often abbreviated cm) is a small program designed to test a programmer's reverse engineering skills. They are written by other reversers as a legal way to practice cracking software, since no intellectual property is being infringed upon.
Crackmes, reversemes and keygenmes generally have similar protection schemes and algorithms to those found in proprietary software. However, due to the wide use of packers/protectors in commercial software, many crackmes are actually more difficult as the algorithm is harder to find and track than in commercial software.
Keygenme:
A keygenme is specifically designed for the reverser to not only find the protection algorithm used in the application, but also write a small keygen for it in the programming language of their choice.
Most keygenmes, when properly manipulated, can be self-keygenning. For example, when checking, they might generate the corresponding key and simply compare the expected and entered keys. This makes it easy to copy the key generation algorithm.
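The following toy example illustrates why such a check is "self-keygenning": the validation routine itself computes the expected key from the name, so lifting that routine out of the binary already yields a working keygen. The algorithm and constants below are invented purely for illustration.

```python
# Toy keygenme check -- the scheme and constants are invented for illustration only.
def expected_key(username: str) -> str:
    """The 'protection' computes the key from the name..."""
    value = sum(ord(c) * (i + 1) for i, c in enumerate(username))
    return f"{value * 31337 % 99999989:08d}"

def check(username: str, entered_key: str) -> bool:
    # ...and simply compares it with what the user typed.
    return entered_key == expected_key(username)

# Because expected_key() is embedded in the check, copying it is already a keygen:
print(expected_key("alice"))                   # generates a valid key
print(check("alice", expected_key("alice")))   # True
```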
Often anti-debugging and anti-disassembly routines are used to confuse debuggers or make the disassembly useless. Code obfuscation is also used to make the reversing even harder.
**Ternary commutator**
Ternary commutator:
In mathematical physics, the ternary commutator is an additional ternary operation on a triple system defined by [a,b,c]=abc−acb−bac+bca+cab−cba.
Also called the ternutator or alternating ternary sum, it is a special case of the n-commutator for n = 3, whereas the 2-commutator is the ordinary commutator.
Properties:
When one or more of a, b, c is equal to 0, [a, b, c] is also 0. This statement makes 0 the absorbing element of the ternary commutator.
The same happens when a = b = c.
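A quick numerical check of the definition and of the properties stated above, using random matrices (my own verification sketch, not from the source):

```python
# Verify the ternary commutator definition and the stated properties with
# random matrices (matrix multiplication is associative but not commutative).
import numpy as np

def ternutator(a, b, c):
    return a @ b @ c - a @ c @ b - b @ a @ c + b @ c @ a + c @ a @ b - c @ b @ a

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))
zero = np.zeros((3, 3))

print(np.allclose(ternutator(a, b, zero), 0))  # True: a zero argument gives 0
print(np.allclose(ternutator(a, a, a), 0))     # True: a = b = c gives 0
print(np.allclose(ternutator(a, b, c), 0))     # generally False for distinct a, b, c
```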
**Project management information system**
Project management information system:
A project management information system (PMIS) is the logical organization of the information required for an organization to execute projects successfully. A PMIS is typically one or more software applications and a methodical process for collecting and using project information. These electronic systems "help [to] plan, execute, and close project management goals." PMIS systems differ in scope, design and features depending upon an organisation's operational requirements.
PMIS PMBOK 5th edition definition:
The project management information system, which is part of the environmental factors, provides access to tools, such as a scheduling tool, a work authorization system, a configuration management system, an information collection and distribution system, or interfaces to other online automated systems. Automated gathering and reporting on key performance indicators (KPI) can be part of this system.
Project management information system software:
At the center of any modern PMIS is software. A project management information system can vary from something as simple as a file system containing Microsoft Excel documents to full-blown enterprise PMIS software.
Characteristics of a PMIS The methodological process used to collect and organize project information can match normalized methodologies such as PRINCE2.
A PMIS Software supports all Project management knowledge areas such as Integration Management, Project Scope Management, Project Time Management, Project Cost Management, Project Quality Management, Project Human Resource Management, Project Communications Management, Project Risk Management, Project Procurement Management, and Project Stakeholder Management.
A PMIS Software is a multi-user application, and can be cloud based or hosted on-premises.
Relationship between a PMS and PMIS:
A project management system (PMS) can be part of a PMIS, or it can be an external tool used alongside the project management information system. A PMS is basically an aggregation of the processes, tools, techniques, methodologies, resources, and procedures used to manage a project. A PMIS, in turn, manages the information flowing to and from all stakeholders in a project, such as the project owner, client, contractors, sub-contractors, in-house staff, workers, and managers.
**Rope pump**
Rope pump:
A rope pump is a kind of pump where a loose hanging rope is lowered into a well and drawn up through a long pipe with the bottom immersed in water. On the rope, round disks or knots matching the diameter of the pipe are attached which pull the water to the surface. It is commonly used in developing countries for both community supply and self-supply of water and can be installed on boreholes or hand-dug wells.
Description:
A rope pump is a type of pump whose main, or most visible, component is a continuous piece of rope; the rope is integral to raising water from a well. Rope pumps are often used in developing areas, and the most common design uses PVC pipe and a rope with flexible or rigid valves. Rope pumps are cheap to build and easy to maintain. One solar-powered design can pump 3,000 litres per day to a lift of 15 metres using an 80-watt solar panel. Rope pumps can be powered by low-speed gasoline/diesel engines, electricity, human energy, wind, and solar energy.
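As a rough plausibility check of that solar figure (my own back-of-the-envelope estimate; the 5 peak-sun-hours assumption is illustrative):

```latex
E_{\text{hydraulic}} = \rho\, g\, h\, V
= 1000 \cdot 9.81 \cdot 15 \cdot 3\ \mathrm{J} \approx 0.44\ \mathrm{MJ} \approx 0.12\ \mathrm{kWh}
```

An 80 W panel delivering roughly 5 peak sun hours supplies about 0.4 kWh per day, so lifting 3,000 litres through 15 metres implies an overall pump-and-drive efficiency on the order of 30%, which is plausible for a simple pump of this kind.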
History:
Washer or chain pumps were used by the Chinese over 1000 years ago. In the 1980s, Reinder van Tijen, an inventor and grass-roots activist, with the support of the Royal Tropical Institute of Amsterdam, developed a rope pump made from simple, available parts using PVC pipes and plastic mouldings, and began instructing various communities around the world in how to make it. He began in Burkina Faso in Africa and continued in Tunisia, Thailand, and Gambia, among others. In Nicaragua, the technology was introduced around 1985, and by 2010 there were an estimated 70,000 pumps installed. An estimated 20,000 were installed on wells for rural communal water supply, and over 25% of the rural water supply was provided by rope pumps. The other 50,000 pumps were installed on private wells of rural families and farmers, partly or completely paid for by the families themselves (so-called self-supply). Many rope pumps are now being replaced by electric pumps as families climb "the water ladder". The rope pump is also used in other parts of Central America, with over 25,000 pumps installed to date. In Africa, the improved model of the rope pump was introduced around 1995, but in many countries it failed due to the use of outdated designs and a lack of long-term follow-up on quality in production and installation. By 2020, an estimated 5 million people in 20 countries worldwide were using rope pumps for domestic uses and small-scale irrigation.
Construction:
The original rope pumps used knots along the rope's length, but pumps can instead be made with flexible or rigid valves on the rope. Alternatively, they may use only the rope itself, relying on the water clinging to the rope as it is quickly pulled to the surface.
Flexible valve rope pumps:
Flexible valves can be made from cut pieces of bicycle wheel tubing. The valves are positioned approximately 20 cm apart on the rope. One disadvantage of flexible valve rope pumps is that the valves must be appropriately sized, in both diameter and thickness, for different types, sizes and lengths of pipe.
Construction:
Rigid valve rope pumps:
Rigid valves using plastic or metal washers that fit tightly into the PVC pipe as the rope is dragged through are also used. If the fit is tight, the washers can be spaced up to half a meter apart. The deeper the well, the smaller the pipe's inner diameter must be, given the available power constraints. These rope pumps are often worked with a hand crank. Valves can also be made from knots in the rope itself.
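The depth-versus-diameter trade-off mentioned above follows from a simple power balance: the flow is roughly the pipe's cross-sectional area times the rope speed, and lifting that flow from depth h takes power ρ·g·h·A·v. A minimal sketch, assuming a sustained cranking power of about 60 W and a rope speed of 1 m/s (both illustrative figures, not from the source):

```python
import math

RHO, G = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)

def max_pipe_diameter_m(depth_m, power_w=60.0, rope_speed_ms=1.0):
    """Largest pipe inner diameter whose full water column can be
    lifted from depth_m with the given power: P = rho*g*h * A*v."""
    area = power_w / (RHO * G * depth_m * rope_speed_ms)
    return 2.0 * math.sqrt(area / math.pi)

for depth in (5, 10, 20, 40):
    d_mm = max_pipe_diameter_m(depth) * 1000.0
    print(f"depth {depth:>2} m -> max inner diameter ~{d_mm:.0f} mm")
```

Under these assumptions a 5 m well can use a pipe of roughly 40 mm bore, while a 40 m well is limited to about 14 mm, matching the rule of thumb in the text.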
Construction:
Valveless rope pumps:
Valveless pumps rely on friction, with water clinging to the rope, which is moved at high speed, often using a bicycle to produce the required speed. This is a less efficient design but is simpler to construct than the other rope pumps.
Intellectual property:
Rope pump technology is in the public domain and there are no patents pending on it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DNA nanotechnology**
DNA nanotechnology:
DNA nanotechnology is the design and manufacture of artificial nucleic acid structures for technological uses. In this field, nucleic acids are used as non-biological engineering materials for nanotechnology rather than as the carriers of genetic information in living cells. Researchers in the field have created static structures such as two- and three-dimensional crystal lattices, nanotubes, polyhedra, and arbitrary shapes, and functional devices such as molecular machines and DNA computers. The field is beginning to be used as a tool to solve basic science problems in structural biology and biophysics, including applications in X-ray crystallography and nuclear magnetic resonance spectroscopy of proteins to determine structures. Potential applications in molecular scale electronics and nanomedicine are also being investigated.
DNA nanotechnology:
The conceptual foundation for DNA nanotechnology was first laid out by Nadrian Seeman in the early 1980s, and the field began to attract widespread interest in the mid-2000s. This use of nucleic acids is enabled by their strict base pairing rules, which cause only portions of strands with complementary base sequences to bind together to form strong, rigid double helix structures. This allows for the rational design of base sequences that will selectively assemble to form complex target structures with precisely controlled nanoscale features. Several assembly methods are used to make these structures, including tile-based structures that assemble from smaller structures, folding structures using the DNA origami method, and dynamically reconfigurable structures using strand displacement methods. The field's name specifically references DNA, but the same principles have been used with other types of nucleic acids as well, leading to the occasional use of the alternative name nucleic acid nanotechnology.
Fundamental concepts:
Properties of nucleic acids:
Nanotechnology is often defined as the study of materials and devices with features on a scale below 100 nanometers. DNA nanotechnology, specifically, is an example of bottom-up molecular self-assembly, in which molecular components spontaneously organize into stable structures; the particular form of these structures is induced by the physical and chemical properties of the components selected by the designers. In DNA nanotechnology, the component materials are strands of nucleic acids such as DNA; these strands are often synthetic and are almost always used outside the context of a living cell. DNA is well-suited to nanoscale construction because the binding between two nucleic acid strands depends on simple base pairing rules which are well understood, and form the specific nanoscale structure of the nucleic acid double helix. These qualities make the assembly of nucleic acid structures easy to control through nucleic acid design. This property is absent in other materials used in nanotechnology, including proteins, for which protein design is very difficult, and nanoparticles, which lack the capability for specific assembly on their own.

The structure of a nucleic acid molecule consists of a sequence of nucleotides distinguished by which nucleobase they contain. In DNA, the four bases present are adenine (A), cytosine (C), guanine (G), and thymine (T). Nucleic acids have the property that two molecules will only bind to each other to form a double helix if the two sequences are complementary, meaning that they form matching sequences of base pairs, with A only binding to T, and C only to G. Because the formation of correctly matched base pairs is energetically favorable, nucleic acid strands are expected in most cases to bind to each other in the conformation that maximizes the number of correctly paired bases. The sequences of bases in a system of strands thus determine the pattern of binding and the overall structure in an easily controllable way. In DNA nanotechnology, the base sequences of strands are rationally designed by researchers so that the base pairing interactions cause the strands to assemble in the desired conformation. While DNA is the dominant material used, structures incorporating other nucleic acids such as RNA and peptide nucleic acid (PNA) have also been constructed.
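The base-pairing rules described above are simple enough to capture in a few lines of code. A minimal sketch, purely illustrative rather than any published design tool, that computes a strand's reverse complement and checks full complementarity:

```python
# Watson-Crick pairing: A<->T, C<->G. Strands are written 5'->3',
# so a strand binds the *reverse* complement of its partner.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    return "".join(PAIR[b] for b in reversed(strand))

def fully_complementary(s1: str, s2: str) -> bool:
    """True if s1 and s2 (both 5'->3') can zip into a perfect duplex."""
    return s2 == reverse_complement(s1)

s = "ATGCGT"
print(reverse_complement(s))            # ACGCAT
print(fully_complementary(s, "ACGCAT")) # True
```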
Fundamental concepts:
Subfields:
DNA nanotechnology is sometimes divided into two overlapping subfields: structural DNA nanotechnology and dynamic DNA nanotechnology. Structural DNA nanotechnology, sometimes abbreviated as SDN, focuses on synthesizing and characterizing nucleic acid complexes and materials that assemble into a static, equilibrium end state. On the other hand, dynamic DNA nanotechnology focuses on complexes with useful non-equilibrium behavior, such as the ability to reconfigure based on a chemical or physical stimulus. Some complexes, such as nucleic acid nanomechanical devices, combine features of both the structural and dynamic subfields.

The complexes constructed in structural DNA nanotechnology use topologically branched nucleic acid structures containing junctions. (In contrast, most biological DNA exists as an unbranched double helix.) One of the simplest branched structures is a four-arm junction that consists of four individual DNA strands, portions of which are complementary in a specific pattern. Unlike in natural Holliday junctions, each arm in the artificial immobile four-arm junction has a different base sequence, causing the junction point to be fixed at a certain position. Multiple junctions can be combined in the same complex, such as in the widely used double-crossover (DX) structural motif, which contains two parallel double helical domains with individual strands crossing between the domains at two crossover points. Each crossover point is, topologically, a four-arm junction, but is constrained to one orientation, in contrast to the flexible single four-arm junction, providing a rigidity that makes the DX motif suitable as a structural building block for larger DNA complexes.

Dynamic DNA nanotechnology uses a mechanism called toehold-mediated strand displacement to allow the nucleic acid complexes to reconfigure in response to the addition of a new nucleic acid strand. In this reaction, the incoming strand binds to a single-stranded toehold region of a double-stranded complex, and then displaces one of the strands bound in the original complex through a branch migration process. The overall effect is that one of the strands in the complex is replaced with another one. In addition, reconfigurable structures and devices can be made using functional nucleic acids such as deoxyribozymes and ribozymes, which can perform chemical reactions, and aptamers, which can bind to specific proteins or small molecules.
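Toehold-mediated strand displacement can be mimicked abstractly with named sequence domains: an invader strand that matches the exposed toehold plus all of the bound domains displaces the incumbent strand. A toy model, with all domain names invented for the example and no attempt to capture real branch-migration kinetics:

```python
# Toy model of toehold-mediated strand displacement using named domains.
# A complex exposes a single-stranded "toehold" next to its bound domains.
def displace(complex_domains, invader):
    """If the invader covers the toehold plus all bound domains,
    it displaces the incumbent strand and releases it."""
    toehold = complex_domains["toehold"]
    bound = complex_domains["bound"]
    if invader[:1] == [toehold] and invader[1:] == bound:
        return {"released": bound, "new_complex": invader}
    return None  # no toehold match -> no reaction

complex_ab = {"toehold": "t*", "bound": ["a", "b"]}
print(displace(complex_ab, ["t*", "a", "b"]))  # incumbent displaced
print(displace(complex_ab, ["a", "b"]))        # no toehold -> None
```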
Structural DNA nanotechnology:
Structural DNA nanotechnology, sometimes abbreviated as SDN, focuses on synthesizing and characterizing nucleic acid complexes and materials where the assembly has a static, equilibrium endpoint. The nucleic acid double helix has a robust, defined three-dimensional geometry that makes it possible to simulate, predict and design the structures of more complicated nucleic acid complexes. Many such structures have been created, including two- and three-dimensional structures, and periodic, aperiodic, and discrete structures.
Structural DNA nanotechnology:
Extended lattices:
Small nucleic acid complexes can be equipped with sticky ends and combined into larger two-dimensional periodic lattices containing a specific tessellated pattern of the individual molecular tiles. The earliest example of this used double-crossover (DX) complexes as the basic tiles, each containing four sticky ends designed with sequences that caused the DX units to combine into periodic two-dimensional flat sheets that are essentially rigid two-dimensional crystals of DNA. Two-dimensional arrays have been made from other motifs as well, including the Holliday junction rhombus lattice, and various DX-based arrays making use of a double-cohesion scheme.
Structural DNA nanotechnology:
Two-dimensional arrays can be made to exhibit aperiodic structures whose assembly implements a specific algorithm, exhibiting one form of DNA computing. The DX tiles can have their sticky end sequences chosen so that they act as Wang tiles, allowing them to perform computation. A DX array whose assembly encodes an XOR operation has been demonstrated; this allows the DNA array to implement a cellular automaton that generates a fractal known as the Sierpinski gasket. Another system has the function of a binary counter, displaying a representation of increasing binary numbers as it grows. These results show that computation can be incorporated into the assembly of DNA arrays.

DX arrays have been made to form hollow nanotubes 4–20 nm in diameter, essentially two-dimensional lattices which curve back upon themselves. These DNA nanotubes are somewhat similar in size and shape to carbon nanotubes, and while they lack the electrical conductance of carbon nanotubes, DNA nanotubes are more easily modified and connected to other structures. One of many schemes for constructing DNA nanotubes uses a lattice of curved DX tiles that curls around itself and closes into a tube. In an alternative method that allows the circumference to be specified in a simple, modular fashion using single-stranded tiles, the rigidity of the tube is an emergent property.

Forming three-dimensional lattices of DNA was the earliest goal of DNA nanotechnology, but this proved to be one of the most difficult to realize. Success using a motif based on the concept of tensegrity, a balance between tension and compression forces, was finally reported in 2009.
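The XOR assembly mentioned above is straightforward to emulate in software: each tile in a new row is the XOR of the two tiles diagonally above it, which is exactly the rule that generates the Sierpinski gasket. A small sketch of that cellular automaton, as an abstraction of the tile assembly rather than a chemical simulation:

```python
# Each row of the growing DX-tile array computes the XOR of adjacent
# tiles in the previous row; seeding with a single 1 yields the
# Sierpinski gasket (the rule-90 cellular automaton).
def sierpinski(rows: int, width: int = 33):
    row = [0] * width
    row[width // 2] = 1  # single seed tile
    for _ in range(rows):
        print("".join("#" if c else "." for c in row))
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]

sierpinski(16)
```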
Structural DNA nanotechnology:
Discrete structures:
Researchers have synthesized many three-dimensional DNA complexes that each have the connectivity of a polyhedron, such as a cube or octahedron, meaning that the DNA duplexes trace the edges of a polyhedron with a DNA junction at each vertex. The earliest demonstrations of DNA polyhedra were very work-intensive, requiring multiple ligations and solid-phase synthesis steps to create catenated polyhedra. Subsequent work yielded polyhedra whose synthesis was much easier. These include a DNA octahedron made from a long single strand designed to fold into the correct conformation, and a tetrahedron that can be produced from four DNA strands in one step.

Nanostructures of arbitrary, non-regular shapes are usually made using the DNA origami method. These structures consist of a long, natural virus strand as a "scaffold", which is made to fold into the desired shape by computationally designed short "staple" strands. This method has the advantages of being easy to design, as the base sequence is predetermined by the scaffold strand sequence, and of not requiring high strand purity and accurate stoichiometry, as most other DNA nanotechnology methods do. DNA origami was first demonstrated for two-dimensional shapes, such as a smiley face, a coarse map of the Western Hemisphere, and the Mona Lisa painting. Solid three-dimensional structures can be made by using parallel DNA helices arranged in a honeycomb pattern, and structures with two-dimensional faces can be made to fold into a hollow overall three-dimensional shape, akin to a cardboard box. These can be programmed to open and reveal or release a molecular cargo in response to a stimulus, making them potentially useful as programmable molecular cages.
Structural DNA nanotechnology:
Templated assembly:
Nucleic acid structures can be made to incorporate molecules other than nucleic acids, sometimes called heteroelements, including proteins, metallic nanoparticles, quantum dots, amines, and fullerenes. This allows the construction of materials and devices with a range of functionalities much greater than is possible with nucleic acids alone. The goal is to use the self-assembly of the nucleic acid structures to template the assembly of the nanoparticles hosted on them, controlling their position and in some cases orientation.
Structural DNA nanotechnology:
Many of these schemes use covalent attachment, with oligonucleotides bearing amide or thiol functional groups as a chemical handle to bind the heteroelements. This covalent binding scheme has been used to arrange gold nanoparticles on a DX-based array, and to arrange streptavidin protein molecules into specific patterns on a DX array.
Structural DNA nanotechnology:
A non-covalent hosting scheme using Dervan polyamides has also been used to arrange streptavidin proteins in a specific pattern on a DX array. Carbon nanotubes have been hosted on DNA arrays in a pattern allowing the assembly to act as a molecular electronic device, a carbon nanotube field-effect transistor. In addition, there are nucleic acid metallization methods, in which the nucleic acid is replaced by a metal that assumes the general shape of the original nucleic acid structure, and schemes for using nucleic acid nanostructures as lithography masks, transferring their pattern into a solid surface.
Dynamic DNA nanotechnology:
Dynamic DNA nanotechnology focuses on forming nucleic acid systems with designed dynamic functionalities related to their overall structures, such as computation and mechanical motion. There is some overlap between structural and dynamic DNA nanotechnology, as structures can be formed through annealing and then reconfigured dynamically, or can be made to form dynamically in the first place.
Dynamic DNA nanotechnology:
Nanomechanical devices:
DNA complexes have been made that change their conformation upon some stimulus, making them one form of nanorobotics. These structures are initially formed in the same way as the static structures made in structural DNA nanotechnology, but are designed so that dynamic reconfiguration is possible after the initial assembly. The earliest such device made use of the transition between the B-DNA and Z-DNA forms to respond to a change in buffer conditions by undergoing a twisting motion.
Dynamic DNA nanotechnology:
This reliance on buffer conditions caused all devices to change state at the same time. Subsequent systems could change states based upon the presence of control strands, allowing multiple devices to be independently operated in solution. Some examples of such systems are a "molecular tweezers" design that has an open and a closed state, a device that could switch from a paranemic-crossover (PX) conformation to a (JX2) conformation with two non-junction juxtapositions of the DNA backbone, undergoing rotational motion in the process, and a two-dimensional array that could dynamically expand and contract in response to control strands. Structures have also been made that dynamically open or close, potentially acting as a molecular cage to release or reveal a functional cargo upon opening.

DNA walkers are a class of nucleic acid nanomachines that exhibit directional motion along a linear track. A large number of schemes have been demonstrated. One strategy is to control the motion of the walker along the track using control strands that need to be manually added in sequence. It is also possible to control individual steps of a DNA walker by irradiation with light of different wavelengths. Another approach is to make use of restriction enzymes or deoxyribozymes to cleave the strands and cause the walker to move forward, which has the advantage of running autonomously. A later system could walk upon a two-dimensional surface rather than a linear track, and demonstrated the ability to selectively pick up and move molecular cargo. In 2018, a catenated DNA that uses rolling circle transcription by an attached T7 RNA polymerase was shown to walk along a DNA path, guided by the generated RNA strand. Additionally, a linear walker has been demonstrated that performs DNA-templated synthesis as the walker advances along the track, allowing autonomous multistep chemical synthesis directed by the walker. The synthetic DNA walkers' function is similar to that of the proteins dynein and kinesin.
Dynamic DNA nanotechnology:
Strand displacement cascades:
Cascades of strand displacement reactions can be used for either computational or structural purposes. An individual strand displacement reaction involves revealing a new sequence in response to the presence of some initiator strand. Many such reactions can be linked into a cascade where the newly revealed output sequence of one reaction can initiate another strand displacement reaction elsewhere. This in turn allows for the construction of chemical reaction networks with many components, exhibiting complex computational and information processing abilities. These cascades are made energetically favorable through the formation of new base pairs, and the entropy gain from disassembly reactions. Strand displacement cascades allow isothermal operation of the assembly or computational process, in contrast to traditional nucleic acid assembly's requirement for a thermal annealing step, where the temperature is raised and then slowly lowered to ensure proper formation of the desired structure. They can also support catalytic function of the initiator species, where less than one equivalent of the initiator can cause the reaction to go to completion.

Strand displacement complexes can be used to make molecular logic gates capable of complex computation. Unlike traditional electronic computers, which use electric current as inputs and outputs, molecular computers use the concentrations of specific chemical species as signals. In the case of nucleic acid strand displacement circuits, the signal is the presence of nucleic acid strands that are released or consumed by binding and unbinding events to other strands in displacement complexes. This approach has been used to make logic gates such as AND, OR, and NOT gates. More recently, a four-bit circuit was demonstrated that can compute the square root of the integers 0–15, using a system of gates containing 130 DNA strands.

Another use of strand displacement cascades is to make dynamically assembled structures. These use a hairpin structure for the reactants, so that when the input strand binds, the newly revealed sequence is on the same molecule rather than disassembling. This allows new opened hairpins to be added to a growing complex. This approach has been used to make simple structures such as three- and four-arm junctions and dendrimers.
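As a toy illustration of how such circuits compute with concentrations rather than voltages, the sketch below models an AND gate abstractly: output strand is released only when both input strands are present above a threshold. This is a deliberately simplified stand-in for the real gate chemistry, with the threshold and all values invented for the example:

```python
# Abstract strand-displacement AND gate: the gate complex releases its
# output strand only if both inputs exceed a threshold concentration.
# Concentrations are in arbitrary units; this ignores real kinetics.
THRESHOLD = 0.5

def and_gate(conc_in1: float, conc_in2: float) -> float:
    # The scarcer (limiting) input bounds how much output is displaced.
    released = min(conc_in1, conc_in2)
    return released if released >= THRESHOLD else 0.0

for a, b in [(1.0, 1.0), (1.0, 0.0), (0.2, 0.9), (0.6, 0.8)]:
    print(f"inputs ({a}, {b}) -> output {and_gate(a, b)}")
```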
Applications:
DNA nanotechnology provides one of the few ways to form designed, complex structures with precise control over nanoscale features. The field is beginning to see applications in solving basic science problems in structural biology and biophysics. The earliest such application envisaged for the field, and one still in development, is in crystallography, where molecules that are difficult to crystallize in isolation could be arranged within a three-dimensional nucleic acid lattice, allowing determination of their structure. Another application is the use of DNA origami rods to replace liquid crystals in residual dipolar coupling experiments in protein NMR spectroscopy; using DNA origami is advantageous because, unlike liquid crystals, they are tolerant of the detergents needed to suspend membrane proteins in solution. DNA walkers have been used as nanoscale assembly lines to move nanoparticles and direct chemical synthesis. Further, DNA origami structures have aided in the biophysical studies of enzyme function and protein folding.

DNA nanotechnology is moving toward potential real-world applications. The ability of nucleic acid arrays to arrange other molecules indicates its potential applications in molecular scale electronics. The assembly of a nucleic acid structure could be used to template the assembly of molecular electronic elements such as molecular wires, providing a method for nanometer-scale control of the placement and overall architecture of the device analogous to a molecular breadboard. DNA nanotechnology has been compared to the concept of programmable matter because of the coupling of computation to its material properties.

In a study conducted by a group of scientists from the iNANO and CDNA centers at Aarhus University, researchers were able to construct a small multi-switchable 3D DNA Box Origami. The proposed nanoparticle was characterized by atomic force microscopy (AFM), transmission electron microscopy (TEM) and Förster resonance energy transfer (FRET). The constructed box was shown to have a unique reclosing mechanism, which enabled it to repeatedly open and close in response to a unique set of DNA or RNA keys. The authors proposed that this "DNA device can potentially be used for a broad range of applications such as controlling the function of single molecules, controlled drug delivery, and molecular computing."

There are potential applications for DNA nanotechnology in nanomedicine, making use of its ability to perform computation in a biocompatible format to make "smart drugs" for targeted drug delivery, as well as for diagnostic applications. One such system being investigated uses a hollow DNA box containing proteins that induce apoptosis, or cell death, that will only open when in proximity to a cancer cell. There has additionally been interest in expressing these artificial structures in engineered living bacterial cells, most likely using the transcribed RNA for the assembly, although it is unknown whether these complex structures are able to efficiently fold or assemble in the cell's cytoplasm. If successful, this could enable directed evolution of nucleic acid nanostructures. Scientists at Oxford University reported the self-assembly of four short strands of synthetic DNA into a cage which can enter cells and survive for at least 48 hours. The fluorescently labeled DNA tetrahedra were found to remain intact in laboratory-cultured human kidney cells despite attack by cellular enzymes after two days.
This experiment showed the potential of drug delivery inside living cells using the DNA 'cage'. A team of researchers at MIT reported using a DNA tetrahedron to deliver RNA interference (RNAi) in a mouse model. Delivery of interfering RNA for treatment had shown some success using polymers or lipids, but these carriers had limits in safety and imprecise targeting, in addition to a short shelf life in the bloodstream. The DNA nanostructure created by the team consists of six strands of DNA that form a tetrahedron, with one strand of RNA affixed to each of the six edges. The tetrahedron is further equipped with targeting ligands, three folate molecules, which lead the DNA nanoparticles to the abundant folate receptors found on some tumors. The result showed that the gene expression targeted by the RNAi, luciferase, dropped by more than half. This study shows promise in using DNA nanotechnology as an effective tool to deliver treatment using the emerging RNA interference technology. The DNA tetrahedron was also used in an effort to overcome the phenomenon of multidrug resistance. Doxorubicin (DOX) was conjugated with the tetrahedron and loaded into MCF-7 breast cancer cells that contained the P-glycoprotein drug efflux pump. The results of the experiment showed that the DOX was not being pumped out and apoptosis of the cancer cells was achieved. The tetrahedron without DOX was loaded into cells to test its biocompatibility, and the structure showed no cytotoxicity itself. The DNA tetrahedron was also used as a barcode for profiling the subcellular expression and distribution of proteins in cells for diagnostic purposes. The tetrahedral nanostructure showed an enhanced signal due to higher labeling efficiency and stability.

Applications for DNA nanotechnology in nanomedicine also focus on mimicking the structure and function of naturally occurring membrane proteins with designed DNA nanostructures. In 2012, Langecker et al. introduced a pore-shaped DNA origami structure that can self-insert into lipid membranes via hydrophobic cholesterol modifications and induce ionic currents across the membrane. This first demonstration of a synthetic DNA ion channel was followed by a variety of pore-inducing designs ranging from a single DNA duplex, to small tile-based structures, and large DNA origami transmembrane porins. Similar to naturally occurring protein ion channels, this ensemble of synthetic DNA-made counterparts spans multiple orders of magnitude in conductance. The study of the membrane-inserting single DNA duplex showed that current must also flow on the DNA-lipid interface, as no central channel lumen is present in the design that lets ions pass across the lipid bilayer. This indicated that the DNA-induced lipid pore has a toroidal shape, rather than cylindrical, as lipid headgroups reorient to face towards the membrane-inserted part of the DNA. Researchers from the University of Cambridge and the University of Illinois at Urbana-Champaign then demonstrated that such a DNA-induced toroidal pore can facilitate rapid lipid flip-flop between the lipid bilayer leaflets. Utilizing this effect, they designed a synthetic DNA-built enzyme that flips lipids in biological membranes orders of magnitude faster than naturally occurring proteins called scramblases. This development highlights the potential of synthetic DNA nanostructures for personalized drugs and therapeutics.
Design:
DNA nanostructures must be rationally designed so that individual nucleic acid strands will assemble into the desired structures. This process usually begins with specification of a desired target structure or function. Then, the overall secondary structure of the target complex is determined, specifying the arrangement of nucleic acid strands within the structure, and which portions of those strands should be bound to each other. The last step is the primary structure design, which is the specification of the actual base sequences of each nucleic acid strand.
Design:
Structural design:
The first step in designing a nucleic acid nanostructure is to decide how a given structure should be represented by a specific arrangement of nucleic acid strands. This design step determines the secondary structure, or the positions of the base pairs that hold the individual strands together in the desired shape. Several approaches have been demonstrated:
Tile-based structures. This approach breaks the target structure into smaller units with strong binding between the strands contained in each unit, and weaker interactions between the units. It is often used to make periodic lattices, but can also be used to implement algorithmic self-assembly, making such tiles a platform for DNA computing. This was the dominant design strategy used from the mid-1990s until the mid-2000s, when the DNA origami methodology was developed.
Design:
Folding structures. As an alternative to the tile-based approach, folding approaches make the nanostructure from one long strand, which can either have a designed sequence that folds due to its interactions with itself, or can be folded into the desired shape by using shorter "staple" strands. This latter method is called DNA origami, which allows forming nanoscale two- and three-dimensional shapes (see Discrete structures above).
Design:
Dynamic assembly. This approach directly controls the kinetics of DNA self-assembly, specifying all of the intermediate steps in the reaction mechanism in addition to the final product. This is done using starting materials which adopt a hairpin structure; these then assemble into the final conformation in a cascade reaction, in a specific order (see Strand displacement cascades below). This approach has the advantage of proceeding isothermally, at a constant temperature. This is in contrast to the thermodynamic approaches, which require a thermal annealing step where a temperature change is required to trigger the assembly and favor proper formation of the desired structure.
Design:
Sequence design:
After any of the above approaches are used to design the secondary structure of a target complex, an actual sequence of nucleotides that will form into the desired structure must be devised. Nucleic acid design is the process of assigning a specific nucleic acid base sequence to each of a structure's constituent strands so that they will associate into a desired conformation. Most methods have the goal of designing sequences so that the target structure has the lowest energy, and is thus the most thermodynamically favorable, while incorrectly assembled structures have higher energies and are thus disfavored. This is done either through simple, faster heuristic methods such as sequence symmetry minimization, or by using a full nearest-neighbor thermodynamic model, which is more accurate but slower and more computationally intensive. Geometric models are used to examine the tertiary structure of the nanostructures and to ensure that the complexes are not overly strained.

Nucleic acid design has similar goals to protein design. In both, the sequence of monomers is designed to favor the desired target structure and to disfavor other structures. Nucleic acid design has the advantage of being computationally much easier than protein design, because the simple base pairing rules are sufficient to predict a structure's energetic favorability, and detailed information about the overall three-dimensional folding of the structure is not required. This allows the use of simple heuristic methods that yield experimentally robust designs. Nucleic acid structures are less versatile than proteins in their function because of proteins' increased ability to fold into complex structures, and the limited chemical diversity of the four nucleotides as compared to the twenty proteinogenic amino acids.
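Sequence symmetry minimization can be approximated with a simple k-mer uniqueness check: if every length-k subsequence, counted together with its reverse complement, appears only once across the design, the strands have few opportunities to mispair. A minimal sketch of that heuristic, illustrative only and with hypothetical input strands:

```python
from collections import Counter

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def revcomp(s: str) -> str:
    return "".join(PAIR[b] for b in reversed(s))

def repeated_kmers(strands, k=6):
    """Return k-mers that occur more than once across the design,
    counting a k-mer and its reverse complement as the same site."""
    counts = Counter()
    for s in strands:
        for i in range(len(s) - k + 1):
            kmer = s[i:i + k]
            counts[min(kmer, revcomp(kmer))] += 1
    return {km: n for km, n in counts.items() if n > 1}

design = ["ATGGCTACGT", "ACGTAGCCAT"]  # hypothetical strands
print(repeated_kmers(design, k=4))     # flags repeated binding sites
```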
Materials and methods:
The sequences of the DNA strands making up a target structure are designed computationally, using molecular modeling and thermodynamic modeling software. The nucleic acids themselves are then synthesized using standard oligonucleotide synthesis methods, usually automated in an oligonucleotide synthesizer, and strands of custom sequences are commercially available. Strands can be purified by denaturing gel electrophoresis if needed, and precise concentrations determined via any of several nucleic acid quantitation methods using ultraviolet absorbance spectroscopy.

The fully formed target structures can be verified using native gel electrophoresis, which gives size and shape information for the nucleic acid complexes. An electrophoretic mobility shift assay can assess whether a structure incorporates all desired strands. Fluorescent labeling and Förster resonance energy transfer (FRET) are sometimes used to characterize the structure of the complexes.

Nucleic acid structures can be directly imaged by atomic force microscopy, which is well suited to extended two-dimensional structures, but less useful for discrete three-dimensional structures because of the microscope tip's interaction with the fragile nucleic acid structure; transmission electron microscopy and cryo-electron microscopy are often used in this case. Extended three-dimensional lattices are analyzed by X-ray crystallography.
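Quantitation by ultraviolet absorbance rests on the Beer–Lambert law, A = ε·c·l, so concentration follows as c = A/(ε·l). A worked sketch; the extinction coefficient shown is a hypothetical value for a particular oligonucleotide, since real values are computed per sequence:

```python
# Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)
def concentration_uM(a260: float, epsilon_per_M_cm: float,
                     path_cm: float = 1.0) -> float:
    molar = a260 / (epsilon_per_M_cm * path_cm)  # mol/L
    return molar * 1e6                           # convert to micromolar

# Hypothetical 20-mer with extinction coefficient 2.0e5 L/(mol*cm):
print(f"{concentration_uM(0.50, 2.0e5):.2f} uM")  # -> 2.50 uM
```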
History:
The conceptual foundation for DNA nanotechnology was first laid out by Nadrian Seeman in the early 1980s. Seeman's original motivation was to create a three-dimensional DNA lattice for orienting other large molecules, which would simplify their crystallographic study by eliminating the difficult process of obtaining pure crystals. This idea had reportedly come to him in late 1980, after realizing the similarity between the woodcut Depth by M. C. Escher and an array of DNA six-arm junctions. Several natural branched DNA structures were known at the time, including the DNA replication fork and the mobile Holliday junction, but Seeman's insight was that immobile nucleic acid junctions could be created by properly designing the strand sequences to remove symmetry in the assembled molecule, and that these immobile junctions could in principle be combined into rigid crystalline lattices. The first theoretical paper proposing this scheme was published in 1982, and the first experimental demonstration of an immobile DNA junction was published the following year.

In 1991, Seeman's laboratory published a report on the synthesis of a cube made of DNA, the first synthetic three-dimensional nucleic acid nanostructure, for which he received the 1995 Feynman Prize in Nanotechnology. This was followed by a DNA truncated octahedron. It soon became clear that these structures, polygonal shapes with flexible junctions as their vertices, were not rigid enough to form extended three-dimensional lattices. Seeman developed the more rigid double-crossover (DX) structural motif, and in 1998, in collaboration with Erik Winfree, published the creation of two-dimensional lattices of DX tiles. These tile-based structures had the advantage that they provided the ability to implement DNA computing, which was demonstrated by Winfree and Paul Rothemund in their 2004 paper on the algorithmic self-assembly of a Sierpinski gasket structure, and for which they shared the 2006 Feynman Prize in Nanotechnology. Winfree's key insight was that the DX tiles could be used as Wang tiles, meaning that their assembly could perform computation. The synthesis of a three-dimensional lattice was finally published by Seeman in 2009, nearly thirty years after he had set out to achieve it.

New abilities continued to be discovered for designed DNA structures throughout the 2000s. The first DNA nanomachine, a motif that changes its structure in response to an input, was demonstrated in 1999 by Seeman. An improved system, which was the first nucleic acid device to make use of toehold-mediated strand displacement, was demonstrated by Bernard Yurke the following year. The next advance was to translate this into mechanical motion, and in 2004 and 2005, several DNA walker systems were demonstrated by the groups of Seeman, Niles Pierce, Andrew Turberfield, and Chengde Mao. The idea of using DNA arrays to template the assembly of other molecules such as nanoparticles and proteins, first suggested by Bruche Robinson and Seeman in 1987, was demonstrated in 2002 by Seeman, Kiehl et al. and subsequently by many other groups.
History:
In 2006, Rothemund first demonstrated the DNA origami method for easily and robustly forming folded DNA structures of arbitrary shape. Rothemund had conceived of this method as being conceptually intermediate between Seeman's DX lattices, which used many short strands, and William Shih's DNA octahedron, which consisted mostly of one very long strand. Rothemund's DNA origami contains a long strand whose folding is assisted by several short strands. This method allowed forming much larger structures than formerly possible, which are also less technically demanding to design and synthesize. DNA origami was the cover story of Nature on March 15, 2006. Rothemund's research demonstrating two-dimensional DNA origami structures was followed by the demonstration of solid three-dimensional DNA origami by Douglas et al. in 2009, while the labs of Jørgen Kjems and Yan demonstrated hollow three-dimensional structures made out of two-dimensional faces.

DNA nanotechnology was initially met with some skepticism due to the unusual non-biological use of nucleic acids as materials for building structures and doing computation, and the preponderance of proof-of-principle experiments that extended the abilities of the field but were far from actual applications. Seeman's 1991 paper on the synthesis of the DNA cube was rejected by the journal Science after one reviewer praised its originality while another criticized it for its lack of biological relevance. By the early 2010s the field was considered to have increased its abilities to the point that applications for basic science research were beginning to be realized, and practical applications in medicine and other fields were beginning to be considered feasible. The field had grown from very few active laboratories in 2001 to at least 60 in 2010, which increased the talent pool and thus the number of scientific advances in the field during that decade. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alcohol dehydrogenase (azurin)**
Alcohol dehydrogenase (azurin):
Alcohol dehydrogenase (azurin) (EC 1.1.9.1, type II quinoprotein alcohol dehydrogenase, quinohaemoprotein ethanol dehydrogenase, QHEDH, ADHIIB) is an enzyme with systematic name alcohol:azurin oxidoreductase. This enzyme catalyses the following chemical reaction:
primary alcohol + azurin ⇌ aldehyde + reduced azurin
This enzyme is a periplasmic PQQ-containing quinohemoprotein. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Inositol-hexakisphosphate kinase**
Inositol-hexakisphosphate kinase:
Inositol-hexakisphosphate kinase (EC 2.7.4.21, ATP:1D-myo-inositol-hexakisphosphate phosphotransferase) is an enzyme with systematic name ATP:1D-myo-inositol-hexakisphosphate 5-phosphotransferase. This enzyme catalyses the following chemical reactions:
(1) ATP + 1D-myo-inositol hexakisphosphate (phytic acid) ⇌ ADP + 1D-myo-inositol 5-diphosphate 1,2,3,4,6-pentakisphosphate
(2) ATP + 1D-myo-inositol 1,3,4,5,6-pentakisphosphate ⇌ ADP + 1D-myo-inositol diphosphate tetrakisphosphate (isomeric configuration unknown)
Three mammalian isoforms are known to exist. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Half dollar (United States coin)**
Half dollar (United States coin):
The half dollar, sometimes referred to as the half for short or the 50-cent piece, is a United States coin worth 50 cents, or one half of a dollar. In both size and weight, it is the largest United States circulating coin currently produced, being 1.205 inches (30.61 millimeters) in diameter and 0.085 in (2.16 mm) in thickness, and is twice the weight of the quarter. The coin's design has undergone a number of changes throughout its history. Since 1964, the half dollar depicts the profile of President John F. Kennedy on the obverse and the seal of the president of the United States on the reverse.

Though seldom used today, half-dollar coins have a long history of heavy use alongside other denominations of US coinage, but they have become uncommon in general circulation for several reasons. Half dollars were produced in fairly large quantities until 2002, when the U.S. Mint reduced production of the coin and ceased minting it for general circulation. As a result of its decreasing usage, many pre-2002 half dollars remain in Federal Reserve vaults, prompting the change in production. Presently, collector half dollars can be ordered directly from the U.S. Mint, and pre-2002 circulation half dollars may be available at most American banks and credit unions. Beginning in 2021, half dollars were again produced for general circulation.
Circulation:
Half-dollar coins saw heavy circulation until the mid 1960s. For many years, they were (and in many areas still are) commonly used by gamblers at casinos and other venues with slot machines. Rolls of half dollars may still be kept on hand in cardrooms for games requiring 50-cent antes or bring-in bets, for dealers to pay winning naturals in blackjack, or where the house collects a rake in increments. Additionally, some concession vendors at sporting events distribute half-dollar coins as change for convenience.
Circulation:
By the early 1960s, the rising price of silver neared the point where the bullion value of U.S. silver coins would exceed face value. In 1965, the U.S. introduced layered-composition coins made of a pure copper core sandwiched between two cupronickel outer faces. The silver content of dimes and quarters was eliminated, but the Kennedy half dollar, introduced in 1964, continued to contain silver (reduced from 90% in 1964 to 40% from 1965 to 1970). Even with its reduced silver content, the half dollar attracted widespread interest from speculators and coin collectors, and that interest led to extensive hoarding of half dollars dated 1970 and earlier. In 1971, the half's composition was changed to match that of the clad dimes and quarters, and with an increase in production, the coin saw a moderate increase in usage; by this time, however, many businesses and the public had begun to lose interest in the half dollar, and by the end of the 1970s the coin had gradually become uncommon in circulation. Merchants stopped ordering half dollars from their banks, many banks stopped ordering half dollars from the Federal Reserve, and the U.S. mints sharply reduced production of the coins.
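The hoarding incentive is simple arithmetic: a coin is worth more than face value as bullion once its silver content times the spot price exceeds fifty cents. A quick sketch using the commonly cited silver contents of the two Kennedy compositions (about 0.3617 troy oz for the 90% coins and 0.1479 troy oz for the 40% coins); the spot prices are illustrative inputs:

```python
# Melt value vs. face value for silver Kennedy half dollars.
SILVER_OZT = {"90% (1964)": 0.3617, "40% (1965-1970)": 0.1479}
FACE_VALUE = 0.50  # dollars

def melt_value(spot_usd_per_ozt: float):
    print(f"--- silver at ${spot_usd_per_ozt:.2f}/ozt ---")
    for kind, ozt in SILVER_OZT.items():
        value = ozt * spot_usd_per_ozt
        flag = "above face" if value > FACE_VALUE else "below face"
        print(f"{kind}: ${value:.2f} ({flag})")

melt_value(1.29)   # roughly the official silver price of the early 1960s
melt_value(25.00)  # an illustrative modern spot price
```

At $1.29 per troy ounce a 90% coin held about $0.47 of silver, already close to face value, which is exactly the squeeze the paragraph above describes.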
Circulation:
From 2001 to 2020, half dollars were minted only for collectors, owing to large Federal Reserve and government inventories of pre-2001 pieces; this surplus was mostly due to a lack of demand and to large-quantity returns from casino slot machines that now operate "coin-less". Eventually, when the reserve supply runs low, the Mint will again fill orders for circulation half dollars. It took 18 years (1981–1999) for the large inventory stockpile of a similar low-demand coin, the Susan B. Anthony dollar, to reach reserve levels low enough to again strike pieces for circulation.
Circulation:
Modern-date half dollars can be purchased in proof sets, mint sets, rolls, and bags from the U.S. Mint, and existing inventory circulation pieces can be obtained or ordered through most U.S. banks and credit unions. All collector issues since 2001 have had much lower mintages than in previous years. Although intended only for collectors, 2001-2020 half dollars can often be found in circulation.
Aspects of early history:
On December 1, 1794, the first half dollars, approximately 5,300 pieces, were delivered. Another 18,000 were produced in January 1795 using dies of 1794, to save the expense of making new ones. Another 30,000 pieces were struck by the end of 1801. The coin had the Heraldic Eagle, based on the Great Seal of the United States, on the reverse. 150,000 were minted in 1804 but struck with dies from 1803, so no 1804 specimens exist, though there were some pieces dated 1805 that carried a "5 over 4" overdate.

In 1838, half-dollar dies were produced in the Philadelphia Mint for the newly established New Orleans Mint, and ten test samples of the 1838 half dollars were made at the main Philadelphia mint. These samples were put into the mint safe along with other rarities like the 1804 silver dollar. The dies were then shipped to New Orleans for the regular production of 1838 half dollars. However, New Orleans production of the half dollars was delayed due to the priority of producing half dimes and dimes. The large press for half-dollar production was not used in New Orleans until January 1839 to produce 1838 half dollars, but the reverse die could not be properly secured, and only ten samples were produced before the dies failed. Rufus Tyler, chief coiner of the New Orleans mint, wrote to Mint Director Patterson of the problem on February 25, 1839. The New Orleans mint samples all had a double-stamped reverse as a result of this production problem, and they also showed dramatic signs of die rust, neither of which are present on the Philadelphia-produced test samples. While eight Philadelphia-minted samples survive to this day, there is only one known New Orleans-minted specimen with the tell-tale double-stamped reverse and die rust. This is the famous coin that Rufus Tyler presented to Alexander Dallas Bache (great-grandson of Benjamin Franklin) in the summer of 1839, which was later purchased in June 1894 by A. G. Heaton, the father of mint mark coin collecting. The 1838 Philadelphia-produced half dollars are extremely rare, with two separate specimens having sold for $632,500 in Heritage Auctions sales in 2005 and 2008 respectively. The sole surviving New Orleans-minted 1838 is one of the rarest of all American coins. In 1840, this mint produced nearly 180,000 half dollars.

In 1861, the New Orleans mint produced coins for three different governments. A total of 330,000 were struck under the United States government, 1,240,000 for the State of Louisiana after it seceded from the Union, and 962,633 after it joined the Confederacy. Since the same die was used for all strikings, the output looks identical. However, the Confederate States of America actually minted four half dollars with a CSA (rather than USA) reverse, and the obverse die they used had a small die crack. Thus "regular" 1861 half dollars with this crack probably were used by the Confederates for some of the mass striking.

There are two varieties of Kennedy half dollars in the proof set issues of 1964. Initially, the die was used with accented hair, showing deeper lines than the president's widow, Jacqueline Kennedy, preferred. New dies were prepared to smooth out some of the details. It is estimated that about 1 to 3% (40,000 to 100,000) of the proof halves are of the earlier type, making them somewhat more expensive for collectors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Looting**
Looting:
Looting is the act of stealing, or the taking of goods by force, typically in the midst of a military, political, or other social crisis, such as war, natural disasters (where law and civil enforcement are temporarily ineffective), or rioting. The proceeds of all these activities can be described as booty, loot, plunder, spoils, or pillage.

During modern-day armed conflicts, looting is prohibited by international law, and constitutes a war crime.
After disasters:
During a disaster, police and military forces are sometimes unable to prevent looting when they are overwhelmed by humanitarian or combat concerns, or they cannot be summoned because of damaged communications infrastructure. Especially during natural disasters, many civilians may find themselves forced to take what does not belong to them in order to survive. How to respond to that and where the line between unnecessary "looting" and necessary "scavenging" lies are often dilemmas for governments. In other cases, looting may be tolerated or even encouraged by governments for political or other reasons, including religious, social or economic ones.
History:
In armed conflict:
Looting by a victorious army during war has been a common practice throughout recorded history. Foot soldiers viewed plunder as a way to supplement an often-meagre income, and transferred wealth became part of the celebration of victory. In the wake of the Napoleonic Wars and particularly after World War II, norms against wartime plunder became widely accepted.

In the upper ranks, the proud exhibition of the loot plundered formed an integral part of the typical Roman triumph, and Genghis Khan was not unusual in proclaiming that the greatest happiness was "to vanquish your enemies... to rob them of their wealth".

In ancient times, looting was sometimes prohibited due to religious concerns. For example, King Clovis I of the Franks forbade his soldiers to loot when they campaigned near St Martin's shrine in Tours, for fear of offending the saint. In the Biblical narrative, Moses, Joshua and Samuel at various points order the Israelites not to take loot from their enemies due to God's commandment. In warfare in ancient times, the spoils of war included the defeated populations, which were often enslaved. Women and children might become absorbed into the victorious country's population, as concubines, eunuchs and slaves. In other pre-modern societies, objects made of precious metals were the preferred target of war looting, largely because of their ease of portability. In many cases, looting offered an opportunity to obtain treasures and works of art that otherwise would not have been obtainable. Beginning in the early modern period and reaching its peak in the New Imperialism era, European colonial powers frequently looted areas they captured during military campaigns against non-European states. In the 1930s, and even more so during the Second World War, Nazi Germany engaged in large-scale and organized looting of art and property, particularly in Nazi-occupied Poland.

Looting, combined with poor military discipline, has occasionally been an army's downfall, since troops who have dispersed to ransack an area may become vulnerable to counter-attack. In other cases, for example the Wahhabi sack of Karbala in 1801 or 1802, loot has contributed to further victories for an army. Not all looters in wartime are conquerors; the looting of Vistula Land by the retreating Imperial Russian Army in 1915 was among the factors sapping the loyalty of Poles to Russia. Local civilians can also take advantage of a breakdown of order to loot public and private property, as took place at the Iraq Museum in the course of the Iraq War in 2003. Lev Nikolayevich Tolstoy's novel War and Peace describes widespread looting by Moscow's citizens before Napoleon's troops entered the city in 1812, along with looting by French troops elsewhere.
History:
In 1990 and 1991, during the Gulf War, Saddam Hussein's soldiers caused significant damage to both Kuwaiti and Saudi infrastructure. They also stole from private companies and homes. In April 2003, looters broke into the National Museum of Iraq, and thousands of artefacts remain missing.

Syrian conservation sites and museums were looted during the Syrian Civil War, with items being sold on the international black market. Reports from 2012 suggested that the antiquities were being traded for weapons by the various combatants.
History:
Prohibited under international law:
Both customary international law and international treaties prohibit pillage in armed conflict. The Lieber Code, the Brussels Declaration (1874), and the Oxford Manual have recognized the prohibition against pillage. The Hague Conventions of 1899 and 1907 (modified in 1954) oblige military forces not only to avoid the destruction of enemy property but also to provide for its protection. Article 8 of the Statute of the International Criminal Court provides that in international warfare, "pillaging a town or place, even when taken by assault", is a war crime. In the aftermath of World War II, a number of war criminals were prosecuted for pillage. The International Criminal Tribunal for the Former Yugoslavia (1993–2017) brought several prosecutions for pillage.

The Fourth Geneva Convention of 1949 explicitly prohibits the looting of civilian property during wartime. Theoretically, to prevent such looting, unclaimed property is moved to the custody of the Custodian of Enemy Property, to be handled until returned to its owners.
History:
Modern conflicts:
Despite international prohibitions against the practice of looting, the ease with which it can be done means that it remains relatively common, particularly during outbreaks of civil unrest, during which the rules of war may not yet apply. The 2011 Egyptian Revolution, for example, caused a significant increase in the looting of antiquities from archaeological sites in Egypt, as the government lost the ability to protect the sites. Other acts of modern looting, such as the looting and destruction of artifacts from the National Museum of Iraq by Islamic State militants, can be used as an easy way to express contempt for the concept of rules of war altogether.

In the case of a sudden change in a country or region's government, it can be difficult to determine what constitutes looting as opposed to a new government taking custody of the property in question. This can be especially difficult if the new government is only partially recognized at the time the property is moved, as was the case during the 2021 Taliban offensive, during which a number of artifacts and a large amount of property of former government officials who had fled the country fell into the hands of the Taliban before they were recognized as the legitimate government of Afghanistan by other countries. Further looting and burning of civilian homes and villages has been defended by the Taliban as within their right as the legitimate government of Afghanistan.

Looting can also be common in cases where civil unrest is contained largely within the borders of a country, or during peacetime. Riots in the wake of the 2020 George Floyd protests in numerous American cities led to increased amounts of looting, as looters took advantage of the delicate political situation and civil unrest surrounding the riots themselves. During the ongoing Kashmir conflict, looting of Kashmiris trapped between the Indian and Pakistani militarized zones is common and widespread.

In 2022, international observers accused Russia of engaging in large-scale looting during the Russo-Ukrainian War, reporting the widespread looting of everything from food to industrial equipment. Despite the publication of numerous photos and videos by Ukrainian journalists and civilians, numerous Russian commanders, such as Gareo Novalsky, have denied these claims. International observers have theorized that this looting is either the result of direct orders, despite Russia's claims to the contrary, or due to Russian soldiers not being issued adequate food and other resources by their commanders.
Archaeological removals:
The term "looting" is also sometimes used to refer to antiquities being removed from countries by unauthorized people, either domestic people breaking the law seeking monetary gain or foreign nations, which are usually more interested in prestige or previously, "scientific discovery". An example might be the removal of the contents of Egyptian tombs that were transported to museums across the West. Whether that constitutes "looting" is a debated point, with other parties pointing out that the Europeans were usually given permission of some sort, and many of the treasures would not have been discovered at all if the Europeans had not funded and organized the expeditions or digs that located them. Many such antiquities have already been returned to their country of origin voluntarily.
Looting of industry:
As part of World War II reparations, Soviet forces systematically plundered the Soviet occupation zone of Germany, including the Recovered Territories that were later transferred to Poland. The Soviets sent valuable industrial equipment, infrastructure and whole factories to the Soviet Union.

Many factories in the rebel-held zone of Aleppo during the Syrian Civil War were reported as being plundered and their assets transferred abroad. Agricultural production and electric power plants were also seized, to be sold elsewhere.
**Pilot line**
Pilot line:
A pilot line is a pre-commercial production line that produces small volumes of new technology-based products, or employs new production technology, as a step towards the commercialisation of the new technology.

Pilot lines help bridge the gap between research and commercialisation, a gap that exists because new technology that has been proved only in a laboratory is usually not yet ready for application. They help bridge this gap in two ways: Pilot lines promote learning about a technology at a higher level of sophistication than in laboratories, that is, at a level at which the technology is integrated in an early prototype and tested in an environment that is representative of the final operating environment.
Pilot line:
Pilot lines help reduce risk by providing information on the new technology and its application in many respects, including its technical feasibility, manufacturability, costs, and market and social acceptance.

In terms of the Technology Readiness Level framework, pilot plants validate production technology at levels TRL 5–6: validation and demonstration of technology in a relevant environment.
Terminology:
A term similar to 'pilot line' is 'pilot plant'. Essentially, pilot plants and pilot lines perform the same functions, but 'pilot plant' is used in the context of (bio)chemical and advanced materials production systems, whereas 'pilot line' is used for new technology in general, e.g. by the European Union.
Example:
An industrial pilot line for micro-fabricated medical devices is being established as part of the ECSEL JU project InForMed. The pilot line will be hosted by a large industrial end-user, and is specifically targeted and equipped to bridge the gap between concept creation and full-scale production. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded