**SBGrid Consortium**
SBGrid Consortium:
The SBGrid Consortium is a research computing group financially supported by participating research laboratories and operated out of Harvard Medical School. SBGrid provides the global structural biology community with support for research computing. Members of the SBGrid Consortium fund SBGrid’s ongoing operations through an annual membership fee. The resulting organization is a user-supported and user-directed community resource. SBGrid’s primary service is the collection, deployment and maintenance of a comprehensive set of software and computational tools that are useful in structural biology research. As of 2015, SBGrid curates a collection of 300 structural biology applications for installation on computers in SBGrid laboratories around the world. SBGrid also develops a specialized research computing infrastructure for structural biologists in the Boston area.
Background:
SBGrid was first created by Piotr Sliz as an in-house effort to support and maintain a few dozen X-ray crystallography applications in the laboratory of Stephen C. Harrison and the late Don Craig Wiley, then at Harvard University and Boston Children’s Hospital. After adding support for additional labs, SBGrid began charging user fees to recover operational costs in 2002. It also expanded software support to include electron microscopy (EM), nuclear magnetic resonance (NMR) and other structural biology techniques. In response to requests from users for support for Macintosh computers, SBGrid recompiled most of its applications to run on the Mac OS X platform in 2004. By 2006, the SBGrid consortium included 37 laboratories at 14 different institutions.
SBGrid’s user-oriented community began to solidify in 2008 with its first user meeting: Quo Vadis Structural Biology (“Where is structural biology heading?”). The meeting attracted approximately 300 participants and incorporated a structural biology symposium and three workshops: scientific programming with Python; molecular visualization with Maya; and macOS programming. SBGrid held subsequent meetings in Boston (2009, 2013, 2014). In 2011 SBGrid hosted the Open Science Grid All-Hands Meeting at Harvard Medical School after having established a Virtual Organization (SBGrid VO) within the Open Science Grid (OSG) and deployed a grid computing portal in 2010. SBGrid has become one of the top OSG users (outside of high-energy physics users) and utilizes ~5,000,000 CPU hours per year.
In 2012, SBGrid launched a webinar program featuring software tutorials from a different developer each month. Recordings are publicly available on the SBGridTV YouTube channel. SBGrid team members have also published a guide to software licensing, an editorial that advocates for better disclosure of source code, and recommendations for optimizing peer review of software source code. By 2014, SBGrid had 245 member laboratories around the world.
Membership:
During the registration process, an SBGrid associate will advise new labs regarding hardware and computing requirements to deploy SBGrid support onsite. Once a new member laboratory’s hardware is in place, most new members are fully operational with SBGrid within two weeks of joining.
SBGrid software services for members
The SBGrid team installs and maintains its collection of structural biology applications on Linux and OS X computers in member laboratories, including laptops. A few commercial applications are also supported, including Geneious for cloning and bioinformatics, incentive builds for PyMOL, and, for North American labs, the Schrödinger Small-Molecule Drug Discovery Suite. Members access a complete execution environment that includes the suite of structural biology applications preconfigured to run without any additional settings.
SBGrid monitors all software websites for updates and installs major software upgrades on a monthly basis. The SBGrid team also recompiles existing software for newer releases of supported operating systems and responds to user bug reports and new software requests.
Training for SBGrid members
SBGrid hosts monthly live webinars that feature tutorials by contributing developers and offer members the opportunity to ask the developer questions directly.
Resources for SBGrid members
The SBGrid technical team offers guidance to new members in setting up an adequate computing infrastructure. Members also benefit from access to a number of other specialized computing resources.
SBGrid Resources for software developers:
SBGrid provides developers of SBGrid-supported applications with access to the SBGrid build-test computing network at Harvard Medical School for building and testing software on a range of operating systems.
**Hallucinogen persisting perception disorder**
Hallucinogen persisting perception disorder:
Hallucinogen persisting perception disorder (HPPD) is a non-psychotic disorder in which a person experiences apparent lasting or persistent visual hallucinations or perceptual distortions after a previous use of drugs, including but not limited to psychedelics, dissociatives, entactogens, tetrahydrocannabinol (THC), and SSRIs. Despite being designated as a hallucinogen-specific disorder, the specific contributory role of psychedelic drugs is unknown. The hallucinations and perceptual changes consist of, but are not limited to, visual snow, trails and after images (palinopsia), light fractals on flat surfaces, intensified colors, altered motion perception, pareidolia, micropsia, and macropsia. People who have never previously taken drugs have also reported some symptoms associated with HPPD (such as floaters and visual snow). HPPD is a DSM-5 diagnosis with diagnostic code 292.89 (F16.983). For the diagnosis to be made, other psychological, psychiatric, or neurological conditions must be ruled out and it must cause distress in everyday life. In the ICD-10, the diagnosis code F16.7 corresponds most closely to the clinical picture. HPPD is rarely recognized amongst both hallucinogen users and psychiatrists, and is often misdiagnosed as a substance-induced psychosis.
Newer research makes a distinction between HPPD I and HPPD II. The more drastic cases, as seen in HPPD II, are believed to be caused by the use of psychedelics as well as comorbid mental disorders. Some people who have this disorder report that they developed symptoms of HPPD after their first use of such drugs (most notably LSD). Because research regarding HPPD is currently lacking, there is little information on effective treatments, on its aetiology and relationship to other disorders, and on its precise mechanism.
History:
In 1898, the English physician and intellectual Havelock Ellis reported a heightened sensitivity to what he described as "the more delicate phenomena of light and shade and color" for a prolonged period of time after he was exposed to the hallucinogenic drug mescaline. This may have been one of the first recorded symptoms of what would later be called HPPD. Hallucinogen-persisting perception disorder was first described in 1954, with other observations made in early psychedelic research. Horowitz first introduced the term flashbacks, referring to recurrent and spontaneous perceptual distortions and unbidden images. When these "flashbacks" present as recurrent, but without current acute or chronic hallucinogen intake, the disturbance is referred to as HPPD. Horowitz also classified three types of visual flashbacks: (a) perceptual distortions (e.g., seeing haloes around objects); (b) heightened imagery (e.g., visual experiences as much more vivid and dominant in one’s thoughts); and (c) recurrent unbidden images (e.g., subjects see objects that are not there). LSD therapist Stanislav Grof noted an HPPD phenomenon in his 1978 book LSD Psychotherapy, writing that “[l]ong after the pharmacological effect of the drug has subsided, the patient may still report anomalies in color-perception, blurred vision, after-images, spontaneous imagery, alterations in body image, intensification of hearing, ringing in the ears, or various strange physical feelings.” HPPD was introduced under the diagnosis of Post-hallucinogen Perception Disorder in 1987 within the DSM-III-R. Subsequently, the DSM-IV-TR recognized the syndrome as Hallucinogen-Persisting Perception Disorder (Flashbacks) (code 292.89). The Neurosensory Research Foundation was founded by HPPD sufferers to promote research and awareness around the condition. Subsequently, in 2021, the Perception Restoration Foundation was launched to bolster efforts for research, awareness and harm reduction. In 2022, journalists at Psymposia and New York Magazine revealed that a participant in MAPS' landmark MDMA trials for PTSD developed post-psychedelic visual effects similar to HPPD. Subclinical HPPD phenomena have occurred in other trial settings.
Symptoms:
Typical symptoms of the disorder include: halos or auras surrounding objects, trails following objects in motion, difficulty distinguishing between colors, apparent shifts in the hue of a given item, the illusion of movement in a static setting, visual snow, distortions in the dimensions of a perceived object, intensified hypnagogic and hypnopompic hallucinations, monocular double vision, seeing an excessive number of floaters, and the blue field entoptic phenomenon. The visual alterations experienced by those with HPPD are not homogeneous, and there appear to be individual differences in both the number and intensity of symptoms. Visual aberrations can occur periodically in healthy individuals, e.g. afterimages after staring at a light, noticing floaters inside the eye, the blue field entoptic phenomenon, or seeing specks of light in a darkened room. In people with HPPD, however, symptoms seem typically to be worse, and complication comes from the additional roles played by anxiety and fixation. Indeed, anxiety has been implicated in visual perceptual effects similar to HPPD, and authors have recognized the crucial role of attending to underlying anxiety and panic in recovering from the disorder. There is some uncertainty about the degree to which visual snow constitutes a true HPPD symptom. Some individuals who have never used a drug that could have caused the onset nonetheless experience the same grainy vision reported by those with HPPD, such as people with the closely linked neurological disorder known as visual snow syndrome. There are a few potential explanations for this, the most obvious being that drug use may exaggerate the intensity of visual snow. At the same time, beyond the characteristic visual snow symptom, there is considerable overlap between the conditions, including after-images, palinopsia, tinnitus, dissociation and free-floating anxiety, leading some to suggest that HPPD shares a strong relationship with visual snow syndrome. Visual snow syndrome is defined as lacking any known cause and is specifically distinguished from HPPD in its nosology, yet further research may clarify the relationship.
HPPD usually has a visual manifestation, but some hallucinogenic and psychiatric drugs affect the auditory sense and can produce tinnitus-like symptoms as a side effect, and there are many anecdotal reports of people developing tinnitus alongside their HPPD.
A significant number of those reporting HPPD also describe comorbid depersonalization-derealization and anxiety disorders. Anxiety, PTSD and panic can promote depersonalization-derealization and visual disturbances, and vice versa, so these features may reinforce one another in multiple directions. Abraham suggested that all three can arise from a broader mechanism of disinhibition in sensory perception, affect and sense-of-self occasioned by psychedelic experience. It is not uncommon for depersonalization-derealization to be the most distressing symptom of the condition. According to a 2016 review, there are two theorized subtypes of the condition. Type 1 HPPD is where people experience random, brief flashbacks. Type 2 HPPD entails experiencing persistent changes to vision, which may vary in intensity. This model has faced scrutiny, however, due to "flashbacks" often being considered a separate condition and not always a perceptual one.
Causes:
HPPD is not related to psychosis, as people affected by the disorder can easily distinguish their visual disturbances from reality. A vast list of psychoactive substances has been identified and linked with the development of this condition, including lysergamides like LSD and LSA, tryptamines like psilocybin and DMT, and phenethylamines like 2C-B, MDMA, MDA and mescaline. Dissociatives such as ketamine and dextromethorphan, as well as cannabis and synthetic cannabinoids, Salvia divinorum, datura and iboga, are also known to trigger HPPD. HPPD is therefore not strictly associated with psychedelic consumption, as a number of hallucination-inducing substances may be correlated with its arising. For some, the dosage and frequency of use do not seem to matter in the development of this condition, since there are several reports in the literature where patients were diagnosed after a single use. This suggests that there may be a genetic predisposition to the condition. It also seems that combining recreational or medical drugs that act on the 5-HT2A receptors, like SSRIs, drastically increases the chances of developing HPPD due to drug-drug interactions.
The exact pathophysiologic mechanism underlying HPPD is poorly understood. The primary neurobiological hypothesis is that persistent hallucinations result from chronic disinhibition of visual processors and subsequent dysfunction in the central nervous system following consumption of hallucinogens. Chronic disinhibition may occur from destruction and/or dysfunction of cortical serotonergic inhibitory interneurons involving the inhibitory neurotransmitter gamma-aminobutyric acid (GABA). This can ultimately disrupt the normal neurological mechanisms responsible for filtering unnecessary stimuli in the brain. On a macroscopic level, the lateral geniculate nucleus (LGN) of the thalamus, which is important in visual processing, has also been implicated in the pathophysiology of HPPD.
Other researchers have suggested HPPD may be related to drug-induced elevations in neuroplasticity, an effect also noted to occur for SSRIs. Reverse neuroplasticity effects may account for anecdotal reports of individuals treating their HPPD symptoms with further psychedelic drug use, while others report significant deteriorations in their symptoms. Which drugs are most prone to causing HPPD is not entirely known. While LSD has been described as the leading cause of HPPD, this may be a function of LSD's historically higher relative popularity as a recreational psychedelic drug. Popularity effects may likewise explain the high proportion of cases precipitated by cannabis. A recent clinical review found no significant difference between MDMA, LSD and psilocybin in the induction of subclinical visual phenomena. Curiously, lasting visual effects have also occurred as complications of benzodiazepine withdrawal syndrome.
Being characterized by clinical distress and impairment, however, HPPD is also shaped by psychosocial factors. There is tentative evidence that those who develop distressing HPPD have higher trait anxiety, or experienced elevations in baseline anxiety from possibly negative psychedelic experiences. Elevations in anxiety, and anxious responses to visual and related symptoms, may directly intensify symptoms and fuel the distress that defines HPPD as a clinical entity.
Certain core beliefs and automatic thoughts are observed to occur among those reporting HPPD: fears of brain damage, a 'never-ending trip', the development of schizophrenia or a related psychosis spectrum disorder, a more generalized concern surrounding insanity, and destructive thoughts concerning the loss of one's previous self or a new identity centred on brokenness and alienation from others. Being a drug-related disorder, HPPD is therefore vulnerable to internalized anti-drug stigma, specifically around 'flashbacks' and 'brain frying', which were heavily propagandized in prohibitionist campaigns of the 20th century.
Treatment:
As of January 2022 there is no officially recognized cure or therapy for HPPD, but those affected are strongly advised to discontinue all recreational drug use. Improving sleep quality, reducing anxiety, lowering screen use, improving diet quality and pursuing regular exercise are encouraged as general lifestyle changes. To decrease fixation on and monitoring of visual symptoms, increased focus on external tasks may also be encouraged. Antipsychotics such as aripiprazole or risperidone, intended to treat mental disorders like schizophrenia, should only be taken in careful consultation with a psychiatrist experienced in HPPD. The success rate of antipsychotics as a treatment for HPPD is still debated. Two young men with HPPD and comorbid schizophrenia experienced a remission of visual perceptual disturbances during a 6-month follow-up under treatment with risperidone, and a 2013 case study also reported oral risperidone successfully treating HPPD. In other cases, however, risperidone has shown no effect on HPPD or has had a paradoxical effect, leading to permanent symptom exacerbation.
Lamotrigine, an anticonvulsant, is the most popular medication for HPPD treatment. In the case of a 36-year-old man who had had HPPD for 18 years, the complex visual perception disorders largely resolved within 12 months of initiating treatment with lamotrigine. In another case, a 33-year-old woman had developed HPPD after a year of heavy LSD use at the age of 18. She reported afterimages, perception of movement in her peripheral visual fields, blurring of small patterns, halo effects, and macro- and micropsia. Previous treatment with antidepressants and risperidone had failed to ameliorate these symptoms; upon commencing drug therapy with lamotrigine, the complex visual disturbances receded almost completely. There are also many anecdotal reports of lamotrigine alleviating some of the symptoms. Lamotrigine is considered a possible treatment option for HPPD and is generally well tolerated, with a relative lack of adverse effects.
Clonidine, an antihypertensive, was reported in a pilot study of eight patients to significantly alleviate "LSD-related flashbacks." In a case study of two subjects with synthetic cannabis-induced HPPD, symptoms improved significantly with clonazepam treatment. In a 2003 study, 16 people with LSD-induced HPPD reported significant relief and only mild symptomatology during clonazepam administration. As with lamotrigine, there are many anecdotal reports of clonazepam greatly decreasing symptoms at doses above 1.5 mg. Other drugs with which people have reported some symptom reduction are the anticonvulsants gabapentin, levetiracetam and valproic acid. A 2022 case report indicated promise for brain stimulation therapy in a longstanding HPPD patient.
Outside of pharmacotherapy, recovery from HPPD as a clinical entity (that is, involving distress and impairment) can come about through psychological and social means. Case reports of psychotherapy for HPPD suggest that anxiety reduction, muscle relaxation, and re-framing one's visual phenomena through personal destigmatization and normalization may be helpful. Some authors have suggested that HPPD would be better designated as a particular somatic symptom disorder rather than a disorder defined centrally by hallucinogen use. Cognitive behavioral therapy has shown promise for somatic symptom disorders, as well as for related distress from tinnitus. CBT has likewise shown promise for depersonalization-derealization disorder, which occurs as a common comorbidity of HPPD and seems to share many of the same catastrophic thoughts. The Perception Restoration Foundation hosts a Specialists Directory that lists professionals with prior experience or relevant expertise in helping those with HPPD.
Prevalence:
In a 2010 study of psychedelic users, 23.9% reported constant HPPD-like effects, though only 4.2% considered their symptoms severe enough to seek treatment. In a 2022 double-blind, placebo-controlled study in which 142 subjects received LSD, psilocybin, or both, no cases of HPPD were reported; up to 9.2% of the subjects had flashbacks, which were "transient, mostly experienced as benign and did not impair daily life".
Society and culture:
In the second episode of the first season of the 2014 series True Detective ("Seeing Things"), primary character Rustin Cohle (Matthew McConaughey) is depicted as having symptoms similar to HPPD, such as light tracers, as a result of "neurological damage" from substance use. American journalist Andrew Callaghan, former host of the internet series All Gas No Brakes and current host of Channel 5, revealed during a 2021 interview with Vice News that he experiences the symptoms of HPPD as a result of psilocybin use at a young age; describing his symptoms, he noted that he experiences persistent visual snow and palinopsia. American YouTuber and musician Matt Watson, known for co-hosting SuperMega, a YouTube channel with over 1 million subscribers, revealed in a podcast interview with bbno$ that he developed HPPD as a result of LSD use at the age of 22. He stated that he experiences several persistent floaters in his vision, constant visual "static," and various other symptoms associated with HPPD.
**The Lost Golem**
The Lost Golem:
The Lost Golem (ゴーレムのまいご, Golem no Maigo) is a puzzle game for the Dreamcast developed and published by Caramelpot and released in Japan only in 2000.
Gameplay:
The player controls a golem and must guide a king in a castle to safety. The king walks by himself and turns when he bumps into a wall, so the player must move walls around to guide him to safety.
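The king's movement amounts to a simple deterministic rule, which is what makes the puzzles tractable. The following toy sketch in Java illustrates that rule; the grid layout, the clockwise turn direction, and the goal marker are illustrative assumptions, not documented details of the actual game:

```java
// Toy sketch of the described mechanic: the king walks forward on a grid
// and turns whenever he bumps into a wall. The grid, the turn direction,
// and the goal cell are assumptions for illustration only.
public class KingWalk {
    // '#' = wall (the player would move these between steps), '.' = floor, 'G' = goal
    static char[][] grid = {
        "#####".toCharArray(),
        "#...#".toCharArray(),
        "#.#G#".toCharArray(),
        "#####".toCharArray(),
    };

    public static void main(String[] args) {
        int r = 1, c = 1;   // king's starting position
        int dr = 0, dc = 1; // initially walking east
        for (int step = 0; step < 20 && grid[r][c] != 'G'; step++) {
            if (grid[r + dr][c + dc] == '#') {
                int t = dr; dr = dc; dc = -t; // bumped a wall: turn clockwise
            } else {
                r += dr; c += dc;             // otherwise keep walking forward
            }
            System.out.println("king at (" + r + "," + c + ")");
        }
    }
}
```

Because the walk is deterministic, each puzzle reduces to arranging the movable walls so that the induced path eventually reaches safety.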
Reception:
The Lost Golem sold under 500 copies, making it obscure and relatively hard to find. Despite poor sales, it has become a cult classic among Dreamcast owners.
**Rachis**
Rachis:
In biology, a rachis (from the Ancient Greek: ῥάχις [rhákhis], "backbone, spine") is a main axis or "shaft".
In zoology and microbiology:
In vertebrates, rachis can refer to the series of articulated vertebrae, which encase the spinal cord. In this case the rachis usually forms the supporting axis of the body and is then called the spine or vertebral column. Rachis can also mean the central shaft of pennaceous feathers.
In the gonad of the invertebrate nematode Caenorhabditis elegans, a rachis is the central cell-free core or axis of the gonadal arm of both adult males and hermaphrodites where the germ cells have achieved pachytene and are attached to the walls of the gonadal tube. The rachis is filled with cytoplasm.
In botany:
In plants, a rachis is the main axis of a compound structure. It can be the main stem of a compound leaf, such as in Acacia or ferns, or the main, flower-bearing portion of an inflorescence above a supporting peduncle. Where it subdivides into further branches, these are known as rachillae (singular rachilla).
A ripe head of wild-type wheat is easily shattered into dispersal units when touched or blown by the wind. A series of abscission layers forms that divides the rachis into dispersal units consisting of a small group of flowers (a single spikelet) attached to a short segment of the rachis. This is significant in the history of agriculture, and referred to by archaeologists as a "brittle rachis", one type of shattering in crop plants.
**Soil carbon feedback**
Soil carbon feedback:
The soil carbon feedback concerns the releases of carbon from soils in response to global warming. This response under climate change is a positive climate feedback. There is approximately two to three times more carbon in global soils than the Earth's atmosphere, which makes understanding this feedback crucial to understand future climate change. An increased rate of soil respiration is the main cause of this feedback, where measurements imply that 4 °C of warming increases annual soil respiration by up to 37%.
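The temperature sensitivity of soil respiration is commonly summarized by a Q10 coefficient, the factor by which respiration multiplies per 10 °C of warming. As a back-of-envelope illustration (this calculation assumes the standard Q10 model and is not a result from the cited measurements):

$$ R(T) = R_0 \, Q_{10}^{(T-T_0)/10}, \qquad Q_{10} \approx 1.37^{10/4} \approx 2.2, $$

i.e. a 37% rise over 4 °C corresponds to a Q10 of roughly 2.2, close to the classic value of about 2 often assumed for soil respiration.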
Impact on climate change:
An observation-based study of the soil carbon feedback, conducted since 1991 at Harvard Forest, suggests the release of about 190 petagrams of soil carbon, the equivalent of the past two decades of greenhouse gas emissions from fossil fuel burning, from the top 1 meter of Earth's soils by 2100, due to changes in microbial communities under elevated temperatures. A 2018 study concludes, "Climate-driven losses of soil carbon are currently occurring across many ecosystems, with a detectable and sustained trend emerging at the global scale."
Permafrost
Observational evidence suggests that the thawing of permafrost (frozen ground), located at higher latitudes in the Arctic and sub-Arctic regions, produces a linear and chronic release of greenhouse gas emissions from these carbon dynamics as climate change proceeds.
Tipping point
A study published in 2011 identified a so-called compost-bomb instability, related to a tipping point with explosive soil carbon releases from peatlands. The authors noted that there is a unique stable soil carbon equilibrium for any fixed atmospheric temperature. Despite the prediction that the carbon balance of peatlands is going to shift from a sink to a source this century, peatland ecosystems are still omitted from the main Earth system models and integrated assessment models.
Uncertainties
Climate models do not account for effects of biochemical heat release associated with microbial decomposition. A limitation in our understanding of carbon cycling comes from the insufficient incorporation of soil animals, including insects and worms, and their interactions with microbial communities into global decomposition models.
**Geothermal energy**
Geothermal energy:
Geothermal energy is thermal energy in the Earth's crust. It combines energy from the formation of the planet and from radioactive decay. Geothermal energy has been exploited as a source of heat and/or electric power for millennia.
Geothermal heating, using water from hot springs, for example, has been used for bathing since Paleolithic times and for space heating since Roman times. Geothermal power (the generation of electricity from geothermal energy) has been used since the 20th century. Unlike wind and solar energy, geothermal plants produce power at a constant rate, without regard to weather conditions. Geothermal resources are theoretically more than adequate to supply humanity's energy needs. Most extraction occurs in areas near tectonic plate boundaries.
The cost of generating geothermal power decreased by 25% during the 1980s and 1990s. Technological advances continued to reduce costs and thereby expand the amount of viable resources. In 2021, the U.S. Department of Energy estimated that power from a plant "built today" costs about $0.05/kWh. In 2019, 13,900 megawatts (MW) of geothermal power was available worldwide. An additional 28 gigawatts provided heat for district heating, space heating, spas, industrial processes, desalination, and agricultural applications as of 2010. As of 2019 the industry employed about 100 thousand people. Pilot programs like EWEB's customer opt-in Green Power Program suggest that customers would be willing to pay a little more for renewable energy. The adjective geothermal originates from the Greek roots γῆ (gê), meaning Earth, and θερμός (thermós), meaning hot.
History:
Hot springs have been used for bathing since at least Paleolithic times. The oldest known spa is at the site of the Huaqing Chi palace. In the first century CE, Romans conquered Aquae Sulis, now Bath, Somerset, England, and used the hot springs there to supply public baths and underfloor heating. The admission fees for these baths probably represent the first commercial use of geothermal energy. The world's oldest geothermal district heating system, in Chaudes-Aigues, France, has been operating since the 15th century. The earliest industrial exploitation began in 1827 with the use of geyser steam to extract boric acid from volcanic mud in Larderello, Italy.
In 1892, the US's first district heating system in Boise, Idaho was powered by geothermal energy. It was copied in Klamath Falls, Oregon in 1900. The world's first known building to utilize geothermal energy as its primary heat source was the Hot Lake Hotel in Union County, Oregon, beginning in 1907. A geothermal well was used to heat greenhouses in Boise in 1926, and geysers were used to heat greenhouses in Iceland and Tuscany at about the same time. Charles Lieb developed the first downhole heat exchanger in 1930 to heat his house. Geyser steam and water began heating homes in Iceland in 1943.
In the 20th century, geothermal energy came into use as a generating source. Prince Piero Ginori Conti tested the first geothermal power generator on 4 July 1904, at the Larderello steam field. It successfully lit four light bulbs. In 1911, the world's first commercial geothermal power plant was built there. It was the only industrial producer of geothermal power until New Zealand built a plant in 1958. In 2012, it produced some 594 megawatts. In 1960, Pacific Gas and Electric began operation of the first US geothermal power plant at The Geysers in California. The original turbine lasted for more than 30 years and produced 11 MW net power. A binary cycle power plant was first demonstrated in 1967 in the USSR and introduced to the US in 1981. This technology allows the generation of electricity from much lower temperature resources than previously. In 2006, a binary cycle plant in Chena Hot Springs, Alaska, came on-line, producing electricity from a record low temperature of 57 °C (135 °F). In 2021 Quaise Energy announced the idea of using a gyrotron as a boring machine to drill a hole 20 kilometers in depth. The technique uses frequencies of 30-300 GHz to transfer energy to rock 10¹² (1 trillion) times more efficiently than a laser. Lasers would be disrupted by the vaporized rock, which would affect the gyrotron's longer-wavelength beam much less. Drilling rates of 70 meters/hour were claimed to be possible with a 1 MW gyrotron.
Resources:
The Earth has an internal heat content of 10³¹ joules (3·10¹⁵ TWh). About 20% of this is residual heat from planetary accretion; the remainder is attributed to past and current radioactive decay of naturally occurring isotopes. For example, a 5275 m deep borehole in the United Downs Deep Geothermal Power Project in Cornwall, England, found granite with very high thorium content, whose radioactive decay is believed to power the high temperature of the rock. Earth's interior temperature and pressure are high enough to cause some rock to melt and the solid mantle to behave plastically. Parts of the mantle convect upward since it is lighter than the surrounding rock. Temperatures at the core–mantle boundary can reach over 4000 °C (7200 °F). The Earth's internal thermal energy flows to the surface by conduction at a rate of 44.2 terawatts (TW), and is replenished by radioactive decay of minerals at a rate of 30 TW. These power rates are more than double humanity's current energy consumption from all primary sources, but most of this energy flux is not recoverable. In addition to the internal heat flows, the top layer of the surface to a depth of 10 m (33 ft) is heated by solar energy during the summer, and cools during the winter.
Outside of the seasonal variations, the geothermal gradient of temperatures through the crust is 25–30 °C (45–54 °F) per km of depth in most of the world. The conductive heat flux averages 0.1 MW/km². These values are much higher near tectonic plate boundaries, where the crust is thinner. They may be further augmented by fluid circulation, whether through magma conduits, hot springs, or hydrothermal circulation.
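For a rough sense of scale, treating the gradient as linear with depth (an idealization that ignores local anomalies) gives

$$ T(z) \approx T_{\text{surface}} + (25\text{–}30\ ^{\circ}\mathrm{C/km}) \times z, $$

so a 3 km well in average continental crust reaches only about 75–90 °C above the surface temperature, which is why economically attractive resources outside plate boundaries generally require deeper drilling or anomalously high gradients.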
The thermal efficiency and profitability of electricity generation is particularly sensitive to temperature. Applications receive the greatest benefit from a high natural heat flux, most easily from a hot spring. The next best option is to drill a well into a hot aquifer; if no adequate aquifer is available, an artificial one may be built by injecting water to hydraulically fracture bedrock. This last approach is called hot dry rock geothermal energy in Europe, or enhanced geothermal systems in North America. 2010 estimates of the potential for electricity generation from geothermal energy vary sixfold, from 0.035 to 2 TW, depending on the scale of investments. Upper estimates of geothermal resources assume wells as deep as 10 kilometres (6 mi), although 20th century wells rarely reached more than 3 kilometres (2 mi) deep. Wells of this depth are common in the petroleum industry.
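The sensitivity to temperature follows from basic thermodynamics: the Carnot limit caps the fraction of extracted heat that can become electricity. As an illustrative calculation (the 150 °C resource and 20 °C ambient temperatures are assumed values, not figures from the text):

$$ \eta_{\max} = 1 - \frac{T_c}{T_h} = 1 - \frac{293\ \mathrm{K}}{423\ \mathrm{K}} \approx 31\%. $$

Real plants recover well below this ideal bound, so each additional degree of resource temperature improves the economics.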
Geothermal power:
Geothermal power is electrical power generated from geothermal energy. Dry steam, flash steam, and binary cycle power stations have been used for this purpose. As of 2010 geothermal electricity was generated in 26 countries. As of 2019, worldwide geothermal power capacity amounted to 15.4 gigawatts (GW), of which 23.86 percent, or 3.68 GW, were in the United States. Geothermal energy supplies a significant share of the electrical power in Iceland, El Salvador, Kenya, the Philippines and New Zealand. Geothermal power is considered to be renewable energy because the heat extraction is insignificant compared with the Earth's heat content. The greenhouse gas emissions of geothermal electric stations are on average 45 grams of carbon dioxide per kilowatt-hour of electricity, or less than 5 percent of that of coal-fired plants.
Geothermal electric plants were traditionally built on the edges of tectonic plates where high-temperature geothermal resources approach the surface. The development of binary cycle power plants and improvements in drilling and extraction technology enable enhanced geothermal systems over a greater geographical range. Demonstration projects are operational in Landau-Pfalz, Germany, and Soultz-sous-Forêts, France, while an earlier effort in Basel, Switzerland, was shut down after it triggered earthquakes. Other demonstration projects are under construction in Australia, the United Kingdom, and the US. In Myanmar over 39 locations are capable of geothermal power production, some of which are near Yangon.
Geothermal heating:
Geothermal heating is the use of geothermal energy to heat buildings and water for human use. Humans have done this since the Paleolithic era. Approximately seventy countries made direct use of a total of 270 PJ of geothermal heating in 2004. As of 2007, 28 GW of geothermal heating satisfied 0.07% of global primary energy consumption. Thermal efficiency is high since no energy conversion is needed, but capacity factors tend to be low (around 20%) since the heat is mostly needed in the winter.
Even cold ground contains heat: below 6 metres (20 ft), the undisturbed ground temperature stays consistently at the mean annual air temperature, and this heat may be extracted with a ground source heat pump.
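The thermodynamic reason a heat pump can exploit such low-grade heat can be sketched with the ideal (Carnot) coefficient of performance; the 12 °C ground and 35 °C delivery temperatures here are illustrative assumptions:

$$ \mathrm{COP}_{\max} = \frac{T_h}{T_h - T_c} = \frac{308\ \mathrm{K}}{(308-285)\ \mathrm{K}} \approx 13, $$

while practical ground source heat pumps typically achieve a COP of roughly 3–5, still delivering several units of heat per unit of electricity consumed.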
Types:
Geothermal energy comes in either vapor-dominated or liquid-dominated forms. Larderello and The Geysers are vapor-dominated. Vapor-dominated sites offer temperatures from 240 to 300 °C that produce superheated steam.
Liquid-dominated plants
Liquid-dominated reservoirs (LDRs) are more common, with temperatures greater than 200 °C (392 °F), and are found near volcanoes in and around the Pacific Ocean and in rift zones and hot spots. Flash plants are the common way to generate electricity from these sources. Steam from the well is sufficient to power the plant. Most wells generate 2–10 MW of electricity. Steam is separated from liquid via cyclone separators and drives electric generators. Condensed liquid returns down the well for reheating and reuse. As of 2013, the largest liquid system was Cerro Prieto in Mexico, which generates 750 MW of electricity from temperatures reaching 350 °C (662 °F). Lower-temperature LDRs (120–200 °C) require pumping. They are common in extensional terrains, where heating takes place via deep circulation along faults, such as in the Western US and Turkey. Water passes through a heat exchanger in a Rankine cycle binary plant, where it vaporizes an organic working fluid that drives a turbine. These binary plants originated in the Soviet Union in the late 1960s and predominate in new plants. Binary plants have no emissions.
Enhanced geothermal systems
Enhanced geothermal systems (EGS) actively inject water into wells to be heated and pumped back out. The water is injected under high pressure to expand existing rock fissures to enable the water to flow freely. The technique was adapted from oil and gas fracking techniques. The geologic formations are deeper and no toxic chemicals are used, reducing the possibility of environmental damage. Drillers can employ directional drilling to expand the reservoir size. Small-scale EGS have been installed in the Rhine Graben at Soultz-sous-Forêts in France and at Landau and Insheim in Germany.
Economics:
As with wind and solar energy, geothermal power has minimal operating costs; capital costs dominate. Drilling accounts for over half the costs, and not all wells produce an exploitable resource. For example, a typical well pair (one for extraction and one for injection) in Nevada can produce 4.5 megawatts (MW) and costs about $10 million to drill, with a 20% failure rate; averaging in the cost of failed attempts brings the expected drilling cost per successful well pair to about $12.5 million.
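In general, if each attempt costs c and fails with probability p, the expected drilling cost per successful well follows from a standard expected-value calculation (here using the Nevada figures quoted above):

$$ \mathbb{E}[\text{cost per success}] = \frac{c}{1-p} = \frac{\$10\ \text{million}}{1 - 0.2} = \$12.5\ \text{million}. $$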
Drilling geothermal wells is more expensive than drilling oil and gas wells of comparable depth, for several reasons:
Geothermal reservoirs are usually in igneous or metamorphic rock, which is harder to penetrate than the sedimentary rock of typical hydrocarbon reservoirs.
The rock is often fractured, which causes vibrations that damage bits and other drilling tools.
The rock is often abrasive, with high quartz content, and sometimes contains highly corrosive fluids.
The rock is hot, which limits use of downhole electronics.
Well casing must be cemented from top to bottom, to resist the casing's tendency to expand and contract with temperature changes. Oil and gas wells are usually cemented only at the bottom.
Well diameters are considerably larger than those of typical oil and gas wells.
As of 2007, plant construction and well drilling cost about €2–5 million per MW of electrical capacity, while the break-even price was €0.04–0.10 per kW·h. Enhanced geothermal systems tend to be on the high side of these ranges, with capital costs above $4 million per MW and break-even above $0.054 per kW·h. Heating systems are much simpler than electric generators and have lower maintenance costs per kW·h, but they consume electricity to run pumps and compressors.
Development:
Geothermal projects have several stages of development, each with associated risks. Many projects are canceled during the early stages of reconnaissance and geophysical surveys, which are unsuitable for traditional lending; later stages can often be equity-financed.
Sustainability:
Geothermal energy is considered to be sustainable because the heat extracted is so small compared to the Earth's heat content, which is approximately 100 billion times the 2010 worldwide annual energy consumption. Earth's heat flows are not in equilibrium; the planet is cooling on geologic timescales. Anthropogenic heat extraction typically does not accelerate the cooling process.
Wells can further be considered renewable because they return the extracted water to the borehole for reheating and re-extraction, albeit at a lower temperature. Replacing material use with energy has reduced the human environmental footprint in many applications. Geothermal has the potential to allow further reductions. For example, Iceland has sufficient geothermal energy to eliminate fossil fuels for electricity production and to heat Reykjavik sidewalks and eliminate the need for gritting.
However, local effects of heat extraction must be considered. Over the course of decades, individual wells draw down local temperatures and water levels. The three oldest sites, at Larderello, Wairakei, and the Geysers, experienced reduced output because of local depletion. Heat and water, in uncertain proportions, were extracted faster than they were replenished. Reducing production and injecting additional water could allow these wells to recover their original capacity. Such strategies have been implemented at some sites. These sites continue to provide significant energy. The Wairakei power station was commissioned in November 1958, and it attained its peak generation of 173 MW in 1965, but already the supply of high-pressure steam was faltering. In 1982 it was down-rated to intermediate pressure and the output to 157 MW. In 2005 two 8 MW isopentane systems were added, boosting output by about 14 MW. Detailed data were lost due to re-organisations.
Environmental effects:
Fluids drawn from underground carry a mixture of gases, notably carbon dioxide (CO2), hydrogen sulfide (H2S), methane (CH4) and ammonia (NH3). These pollutants contribute to global warming, acid rain, and noxious smells if released. Existing geothermal electric plants emit an average of 122 kilograms (269 lb) of CO2 per megawatt-hour (MW·h) of electricity, a small fraction of the emission intensity of fossil fuel plants. A few plants, such as some geothermal plants in Turkey, emit more pollutants than gas-fired power, at least in the first few years. Plants that experience high levels of acids and volatile chemicals are typically equipped with emission-control systems to reduce the exhaust.
Water from geothermal sources may hold in solution trace amounts of toxic elements such as mercury, arsenic, boron, and antimony. These chemicals precipitate as the water cools, and can damage surroundings if released. The modern practice of returning geothermal fluids into the Earth to stimulate production has the side benefit of reducing this environmental impact.
Construction can adversely affect land stability. Subsidence occurred in the Wairakei field. In Staufen im Breisgau, Germany, tectonic uplift occurred instead: a previously isolated anhydrite layer came into contact with water and turned into gypsum, doubling its volume. Enhanced geothermal systems can trigger earthquakes as part of hydraulic fracturing. A project in Basel, Switzerland was suspended because more than 10,000 seismic events measuring up to 3.4 on the Richter scale occurred over the first 6 days of water injection. Geothermal power production has minimal land and freshwater requirements. Geothermal plants use 3.5 square kilometres (1.4 sq mi) per gigawatt of electrical production (not capacity) versus 32 square kilometres (12 sq mi) and 12 square kilometres (4.6 sq mi) for coal facilities and wind farms respectively. They use 20 litres (5.3 US gal) of freshwater per MW·h versus over 1,000 litres (260 US gal) per MW·h for nuclear, coal, or oil.
Production:
Philippines
The Philippines began geothermal research in 1962 when the Philippine Institute of Volcanology and Seismology inspected the geothermal region in Tiwi, Albay. The first geothermal power plant in the Philippines was built in 1977, located in Tongonan, Leyte; the New Zealand government had contracted with the Philippines to build the plant in 1972. The Tongonan Geothermal Field (TGF) added the Upper Mahiao, Matlibog, and South Sambaloran plants, which resulted in a 508 MW capacity. The first geothermal power plant in the Tiwi region opened in 1979, while two other plants followed in 1980 and 1982. The Tiwi geothermal field is located about 450 km from Manila. The three geothermal power plants in the Tiwi region produce 330 MWe, putting the Philippines behind only the United States and Mexico in geothermal growth. The Philippines has 7 geothermal fields and continues to exploit geothermal energy through the Philippine Energy Plan 2012-2030, which aims to produce 70% of the country's energy by 2030.
United States
According to the Geothermal Energy Association (GEA), installed geothermal capacity in the United States grew by 5%, or 147.05 MW, in 2013. This increase came from seven geothermal projects that began production in 2012. GEA also revised its 2011 estimate of installed capacity upward by 128 MW, bringing installed U.S. geothermal capacity to 3,386 MW.
**Line engraving**
Line engraving:
Line engraving is a term for engraved images printed on paper to be used as prints or illustrations. The term is mainly used in connection with 18th- or 19th-century commercial illustrations for magazines and books or reproductions of paintings. It is not a technical term in printmaking, and can cover a variety of techniques, giving similar results.
Steel engraving is an overlapping term for images that are in fact often mainly etched; it was used mostly for banknotes, illustrations for books, magazines and decorative prints, often reproductive, from about 1820 to the early 20th century, when the technique became less used. Copperplate engraving is another somewhat outdated term for engravings. With photography long established, engravings made today are nearly all artistic ones in printmaking, but the technique is not as common as it used to be; more than other printmaking techniques, engraving requires great skill and much practice, even for an experienced artist.
Technique:
Engraving for the purpose of printmaking creates plates for intaglio printing. Intaglio engravings are made by carving into a plate of a hard substance such as copper, zinc, steel, or plastic. Afterward ink is rubbed into the carved areas and away from the flat surface. Moistened paper is placed over the plate and both are run through the rollers of an intaglio press. The pressure exerted by the press on the paper pushes it into the engraved lines and prints the image made by those lines. In an intaglio print, the engraved lines print black.
Wood engraving is a relief printing technique, with the images made by carving into fine-grained hardwood blocks. Ink is rolled onto the surface of the block, dry paper is placed on top, and it is printed either by rolling both through a press or by hand, using a baren to rub the ink from the surface of the block onto the paper. In a relief print, the engraved lines show white.
Early history:
The art of engraving has been practiced from the earliest ages. The prehistoric Aztec hatchet given to Alexander von Humboldt in Mexico was just as truly engraved as a modern copper-plate which may convey a design by John Flaxman; the Aztec engraving may be less sophisticated than the European, but it is the same art form. Jewelry and many types of fine metal works frequently are engraved as well as furniture. Engraving often is used as an embellishment of knives, swords, guns, and rifles.
Niellos
The important discovery which made line engraving one of the multiplying arts was the accidental discovery of how to print an incised line. This method was known for some time before its real utility was realized. The goldsmiths of Florence in the middle of the 15th century ornamented their works by means of engraving, after which they filled up the hollows produced by the burin with a black enamel-like substance made of silver, lead, and sulfur. The resulting design, called a niello, was much higher in contrast and thus, much more visible.
As this enamel was difficult to remove, goldsmiths developed alternate means of viewing their work while still in progress. They would take a sulfur cast of the work on a matrix of fine clay, and fill up the lines in the sulfur with lampblack, producing the desired high-contrast image.
Beginnings of European printmaking
It was discovered later that a proof could be taken on damped paper by filling the engraved lines with ink and wiping it off the surface of the plate. Pressure was then applied to push the paper into the hollowed lines and draw the ink out of them. This was the beginning of plate printing.
This convenient way of proofing a niello saved the effort of producing a cast, but further implications went unexplored. Although goldsmiths continued to engrave nielli to ornament plates and furniture, it was not until the late 15th century that the new method of printing was implemented.
Early style
In early Italian and German prints, the line is used with such perfect simplicity of purpose that the methods of the artists are as obvious as if we saw them actually at work. In all these figures the outline is the primary focus, followed by the lines which mark the leading folds of the drapery. These are always engravers' lines, such as may be made naturally with the burin, and they never imitate the freer line of the pencil or etching needle.
Shading is used in the greatest moderation, with thin straight strokes that never overpower the stronger organic lines of the design. In early metal engraving the shading lines are often cross-hatched; in the earliest woodcuts they are not. The reason is that when lines are incised, they may as easily be crossed as not, whereas when they are reserved, the crossing involves much non-artistic labor.
Italy
The early style of Italian engravers differs greatly from that of a modern chiaroscurist. Mantegna, for example, did not draw and shade at the same time. He got his outlines and the patterns on his dresses all very accurate initially. Then he added a veil of shading with all the lines being straight and all the shading diagonal. This is the primitive method, its peculiarities being due to a combination of natural genius with technical inexperience.
Marcantonio, the engraver trained by Raphael, first practiced by copying German woodcuts into line engravings. Marcantonio became an engraver of remarkable power and through him, the pure art of line-engraving reached its maturity. He retained much of the simple early Italian manner in his backgrounds. His figures are modeled boldly in curved lines, crossing each other in the darker shades, but left single in the passages from dark to light and breaking away in fine dots as they approach the light itself, which is of pure white paper. A new Italian school of engraving was born, which put aside minute details for a broad, harmonious treatment.
Germany
The characteristics of early metal engraving in Germany are demonstrated in the works of Martin Schongauer (d. 1488) and Albrecht Dürer (d. 1528). Schongauer used outline and shade as a unified element, with the shading generally in curved lines; his handling is far more masterly than the straight shading of Mantegna. Dürer continued Schongauer's curved shading with increasing manual delicacy and skill, and over-loaded his plates with quantities of living and inanimate objects. He applied the same intensity of study to every art form he explored.
Peter Paul Rubens and the engravers he employed, made marked technical developments in the field of engraving. Instead of his finished paintings, Rubens provided his engravers with drawings as guides, allowing them to discard the Italian outline method and in its place substitute modeling. They substituted broad masses for the minutely-finished detail of the northern schools, and adopted a system of a dark and light characteristic of engraving, which reportedly Rubens stated, rendered the detail as more harmonious.
17th and 18th centuries:
In the 17th and 18th centuries, line engraving made no new development. Instead, it flourished around the established techniques and principles. English and French artists began to use the technique, with the English learning primarily from the Germans (led by Rubens), and the French from the Italians (Raphael). There was, however, a good deal of cross-influence among all involved traditions.
Sir Robert Strange, as many other English engravers, made it his study to soften and lose the outline, specifically in figure-engraving. Meanwhile, Gerard Audran (d. 1703) led the Renaissance school in perfecting the art of modeling with the burin.
19th century:
In the 19th century, line engraving was both helped and hindered. Help came from the growth of public wealth, increasing interest in art, and the increase in the commerce of art—as exemplified by the career of such art dealers as Ernest Gambart—and the growing demand for illustrated books. Hindrance to line engraving came from the desire for cheaper and more rapid methods – a desire satisfied in various ways, but especially, by etching and various kinds of photography.
The history of the art of line engraving during the last quarter of the 19th century is one of continued decay. By the beginning of the 20th century, pictorial line engraving in England was practically non-existent. The disappearance of the art is due to the fact that the public refused to wait for several years for proofs (some important proofs took as long as 12 years to create) when they could obtain their plates more quickly by other methods. The invention of steel-facing the copper plate enabled the engraver to proceed more quickly; but even in this case he can no more compete with the etcher than the mezzotint engraver can keep pace with the photogravure manufacturer.
Line-engraving flourished in France until the early 20th century, only through official encouragement and intelligent fostering by collectors and connoisseurs. The class of the work changed, however, partly through the reduction of prices paid for it, partly through the change of taste and fashion, and partly, again, through the necessities of the situation. French engravers were driven to simplify their work in order to satisfy public impatience. To compensate for loss of color, the art developed in the direction of elegance and refinement.
In Italy, line engraving decayed just as it had in England, and outside Europe, line engraving seems to have been almost nonexistent. There were still a few who could engrave a head from a photograph or drawing, or a small engraving for book illustration or for book plates; there were more who were highly proficient in mechanical engraving for decorative purposes, but the engraving-machine was quickly superseding this class.
Style
Nineteenth-century line engraving, compared with previous work, had a more thorough and delicate rendering of local color, light and shade, and texture. Older engravers could draw just as correctly, but they either neglected these elements or admitted them sparingly, regarding them as opposed to the spirit of their art; there is, however, a certain sameness in pure line engraving that is more favorable to some forms and textures than to others.
In the well-known prints from Rosa Bonheur, for example, the tone of the skies is achieved by machine-ruling, as is much undertone in the landscape. The fur of the animals is all etched, as are the foreground plants; the real burin work is used sparingly where most favorable to texture. Even in the exquisite engravings after J. M. W. Turner, which reached a degree of delicacy in light and shade far surpassing the work of the old masters, the engravers had recourse to etching, finishing with the burin and dry point. Considered as important an influence upon engraving as Raphael and Rubens, Turner contributed much to the field in the direction of delicacy of tone.
The new French school of engraving had several distinctive characteristics, including the substitution of exquisite greys for the rich blacks of old, and simplicity of method coupled with extremely high elaboration. Their object is, as always, to secure the faithful transcript of the painter they reproduce while readily sacrificing the power of the old method, which, whatever its force and beauty, was easily acquired by mediocre artists of technical ability. The Belgian school of engraving elaborated an effective "mixed method" of graver-work and dry-point. The Stauffer-Bern method of using many fine lines to create tone had a certain advantage in modeling.
Modern and contemporary art:
Although the art has dwindled to a rarity, modern engravers continue to practice it, most prominently Andrew Raftery. His choice of subjects is comparable to Hogarth's, and his style recalls the French school of elegant and geometrical form.
Tools of the trade:
The most important of the tools used in line engraving is the burin, or graver: a bar of steel with one end fixed in a handle that somewhat resembles a mushroom with one side cut away. The burin is sharpened so that its cutting end takes the form of a lozenge and points downward. The burin acts exactly as a plough does in the earth: it makes a furrow and turns out a shaving of metal just as a plough turns the soil of a field. Unlike a plough, however, the burin is pushed through the material rather than drawn, and this characteristic separates it from the other instruments employed in the arts of design, such as pencils, brushes, pens, and etching needles, all of which are drawn toward the user.
Tools of the trade:
The elements of engraving with the burin are evident in the engraving of letters, specifically the capital letter B. This letter consists of two perpendicular straight lines and four distinct curves. The engraver scratches these lines, reversed, very lightly with a sharp point or stylus. Next, the engraver cuts out the blacks (not the whites, as in wood engraving) with two different burins. First, the vertical black line is ploughed with the burin between the two scratched lines; then, similarly, some material is removed from the thickest parts of the two curves. Finally, the gradations from the thick middle of each curve to the thin points touching the vertical are worked out with a finer burin.
Tools of the trade:
The hollows are then filled with printing ink, the surplus ink is wiped from the smooth surface of the metal, damped paper is laid upon the surface and driven into the hollowed letter by the pressure of a revolving cylinder. The paper draws out the ink, and the letter B is printed in intense black.
Tools of the trade:
When the surface of a metal plate is sufficiently polished to be used for engraving, the slightest scratch upon it will print as a black line. An engraved plate from which visiting cards are printed is a good example of some elementary principles of engraving. It contains thin lines and thick ones, as well as a considerable variety of curves. An elaborate line engraving, if it is a pure line engraving and nothing else, will contain only these simple elements in different combinations.
**Direct Web Remoting**
Direct Web Remoting:
Direct Web Remoting, or DWR, is a Java open-source library that helps developers write web sites that include Ajax technology. It allows code in a web browser to use Java functions running on a web server as if those functions were within the browser. The DWR project was started by Joe Walker in 2004, with version 1.0 released on August 29, 2005.
Application:
DWR consists of two main parts. The first is server-side code that allows JavaScript to retrieve data from a servlet-based web server using Ajax principles.
Application:
The second is a JavaScript library that makes it easier for the website developer to dynamically update the web page with the retrieved data. DWR takes a novel approach to Ajax by dynamically generating JavaScript code based on Java classes. Thus the web developer can use Java code from JavaScript as if it were local to the web browser, whereas in reality the Java code runs on the web server and has full access to web server resources. For security reasons, the web developer must configure exactly which Java classes are safe to export, typically in configuration files such as web.xml or dwr.xml.
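As a rough sketch of what that configuration can look like (the class name com.example.Demo and the exported script name "Demo" are hypothetical, and the exact DTD header depends on the DWR version):

```xml
<!-- dwr.xml: whitelist the Java classes that may be called from JavaScript -->
<dwr>
  <allow>
    <!-- Expose com.example.Demo to browsers as a JavaScript object named "Demo" -->
    <create creator="new" javascript="Demo">
      <param name="class" value="com.example.Demo"/>
    </create>
  </allow>
</dwr>
```

Only classes listed this way become callable from the browser; everything else on the server remains inaccessible.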
Application:
This method of remoting functions from Java to JavaScript gives DWR users a feel much like conventional RPC mechanisms like RMI or SOAP, with the benefit that it runs over the web without requiring web browser plug-ins.
DWR does not consider the web browser / web server protocol to be important, and prefers to ensure that the programmer's interface is natural. The greatest challenge to this is to marry the asynchronous nature of Ajax with the synchronous nature of normal Java method calls.
Application:
In the asynchronous model, result data is only available some time after the initial call is made. DWR solves this problem by allowing the web developer to specify a function to be called when the data is returned, using an extra method parameter. This extra parameter is known as the callback method; the value returned from the Java function is passed to the callback method.
Application:
Here is a sample callback (sketched below): the callback is the function inside the JSON-style options object passed as an additional parameter to the remoted function.
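The original sample is not reproduced here; the following is an illustrative sketch, assuming a hypothetical remoted class Demo with a sayHello method that returns a String:

```javascript
// Generated by DWR from the Java class and included via /dwr/interface/Demo.js.
// The extra options-object argument supplies the callback that receives the
// Java method's return value once the server responds.
Demo.sayHello("Joe", {
  callback: function(reply) {
    // Runs asynchronously when the Ajax response arrives.
    document.getElementById("greeting").textContent = reply;
  }
});
```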
With version 2.0, DWR supports Comet (also called "Reverse Ajax"), whereby Java code running on the server can deliberately send dedicated JavaScript to a browser.
**GXL**
GXL:
GXL (Graph eXchange Language) is designed to be a standard exchange format for graphs. GXL is an extensible markup language (XML) sublanguage and the syntax is given by an XML document type definition (DTD). This exchange format offers an adaptable and flexible means to support interoperability between graph-based tools.
Overview:
In particular, GXL was developed to enable interoperability between software reengineering tools and components, such as code extractors (parsers), analyzers and visualizers. GXL allows software reengineers to combine single-purpose tools especially for parsing, source code extraction, architecture recovery, data flow analysis, pointer analysis, program slicing, query techniques, source code visualization, object recovery, restructuring, refactoring, remodularization, etc., into a single powerful reengineering workbench.
Overview:
There are two innovative features in GXL that make it well suited as an exchange format for software data.
The conceptual data model is a typed, attributed, directed graph. This is not to say that all software data ought to be manipulated as graphs, but rather that they can be exchanged as graphs.
Overview:
It can be used to represent instance data as well as schemas for describing the structure of the data. Moreover, the schema can be explicitly stated along with instance data. The structure of graphs exchanged by GXL streams is given by a schema represented as a Unified Modeling Language (UML) class diagram. Since GXL is a general graph exchange format, it can also be used to interchange any graph-based data, including models between computer-aided software engineering (CASE) tools, data between graph transformation systems, or graph visualization tools. GXL includes support for hypergraphs and hierarchical graphs, and can be extended to support other types of graphs.
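A minimal instance document might look like the following sketch (the node names and attribute values are invented for illustration; the element names follow the published GXL 1.0 DTD):

```xml
<?xml version="1.0"?>
<gxl xmlns:xlink="http://www.w3.org/1999/xlink">
  <graph id="callGraph" edgeids="true">
    <!-- Two attributed nodes representing procedures -->
    <node id="p1">
      <attr name="name"><string>main</string></attr>
    </node>
    <node id="p2">
      <attr name="name"><string>parse</string></attr>
    </node>
    <!-- A directed, attributed edge: main calls parse -->
    <edge id="c1" from="p1" to="p2">
      <attr name="callCount"><int>3</int></attr>
    </edge>
  </graph>
</gxl>
```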
Overview:
GXL originated in the merger of GRAph eXchange format (GraX: University of Koblenz, DE) for exchanging typed, attributed, ordered, directed graphs (TGraphs), Tuple Attribute Language (TA: University of Waterloo, CA), and the graph format of the PROGRES graph rewriting system (University Bw München, DE). Furthermore, GXL includes ideas from exchange formats from reverse engineering, including Relation Partition Algebra (RPA: Philips Research Eindhoven, NL) and Rigi Standard Format (RSF: University of Victoria, CA). The development of GXL was also influenced by various formats used in graph drawing (e.g. daVinci, Graph Modelling Language (GML), Graphlet, GraphXML) and current discussions on exchange formats for graph transformation systems.
Presentations of former GXL versions:
At the 2000 International Conference on Software Engineering (ICSE 2000) Workshop on Standard Exchange Formats (WoSEF), GXL was accepted as working draft for an exchange format by numerous research groups working in the domain of software reengineering and graph transformation.
During the APPLIGRAPH Subgroup Meeting on Exchange Formats for Graph Transformation, an overview of GXL was given [Schürr, 2000] and participants decided to use GXL to represent graphs within their exchange format for graph transformation systems (GTXL).
The 2000 IBM Centers for Advanced Studies Conference (CASCON 2000) included two half-day workshops on GXL. In the morning, 'Software Data Interchange with GXL: Introduction and Tutorial' gave a primer on the syntax and concepts in the format, while the afternoon workshop, 'Software Data Interchange with GXL: Implementation Issues' discussed the development of converters and standard schemas.
At the Seventh Working Conference on Reverse Engineering (WCRE 2000), GXL was presented in a tutorial [Holt et al., 2000] and during the workshop on exchange formats [Holt/Winter, 2000]. Central results were a simpler representation of ordering information, the usage of UML class diagrams to present graph schemata and the representation of UML class diagrams by GXL graphs.
The Dagstuhl Seminar on Interoperability of Reengineering Tools ratified GXL 1.0 as a standard interchange format for exchanging reengineering related data. Numerous groups from industry and research committed to using GXL, to import and export GXL documents to their tools, and to write various GXL tools.
GXL Partners:
During various conferences and workshops, the following groups from industry and academia committed to refining GXL to be the standard graph exchange format, writing GXL filters and tools, or using GXL as the exchange format in their tools:
- Bell Canada (Datrix Group)
- Centrum Wiskunde & Informatica (CWI), The Netherlands (Interactive Software Development and Renovation and Information Visualization)
- IBM Centre for Advanced Studies, Canada
- Mahindra British Telecom, India
- Merlin Software-Engineering GmbH, Germany
- Nokia Research Center, Finland (Software Technology Laboratory)
- Philips Research, The Netherlands (Software Architecture Group)
- RWTH Aachen, Germany (Department of Computer Science III)
- TU Berlin, Germany (Theoretical CS/Formal Specification Group)
- University of Berne, Switzerland (Software Composition Group)
- University of Bremen, Germany (Software Engineering Group)
- Bundeswehr University Munich, Germany (Institute for Software Technology)
- University of Edinburgh, UK (Edinburgh Concurrency Workbench)
- University of Koblenz, Germany (GUPRO Group)
- University of Oregon, USA (Department of Computer Science)
- University of Paderborn, Germany (AG Softwaretechnik)
- University of Stuttgart, Germany (BAUHAUS Group)
- University of Szeged, Hungary (Research Group on Artificial Intelligence)
- University of Toronto, Canada (Software Architecture Group)
- University of Victoria, Canada (RIGI Group)
- University of Waterloo, Canada (Software Architecture Group)
**Astereognosis**
Astereognosis:
Astereognosis (or tactile agnosia if only one hand is affected) is the inability to identify an object by active touch of the hands without other sensory input, such as visual information. An individual with astereognosis is unable to identify objects by handling them, despite intact elementary tactile, proprioceptive, and thermal sensation. With the absence of vision (i.e. eyes closed), an individual with astereognosis is unable to identify what is placed in their hand based on cues such as texture, size, spatial properties, and temperature. Unlike in visual agnosia, when the object is observed visually, the individual should be able to successfully identify it.
Astereognosis:
Individuals with tactile agnosia may be able to identify the name, purpose, or origin of an object with their left hand but not their right, or vice versa, or may be impaired in both hands; astereognosis refers specifically to those who lack tactile recognition in both hands. In the affected hand(s) they may be able to identify basic shapes such as pyramids and spheres (with abnormally high difficulty) but still fail to recognize common objects by their easily recognizable and unique features, such as a fork by its prongs (though the individual may report feeling a long metal rod with multiple pointy rods stemming off in a uniform direction). These symptoms suggest that a very specific part of the brain is responsible for making the connections between tactile stimuli and the functions and relationships of those stimuli, which, along with the relatively low impact this disorder has on a person's quality of life, helps explain the rarity of reports and research on individuals with tactile agnosia. In some cases, however, persons with tactile agnosia may face many challenges in daily life and occupation. An example is a task that requires typing quickly, as this type of agnosia prevents the recognition of keys without looking at the keyboard.

Astereognosis is associated with lesions of the parietal lobe, dorsal column, or parieto-temporo-occipital lobe (posterior association areas) of either the right or left hemisphere of the cerebral cortex. Despite cross-talk between the dorsal and ventral cortices, fMRI results suggest that those with ventral cortex damage are less sensitive to object 3D structure than those with dorsal cortex damage. Unlike the ventral cortex, the dorsal cortex can compute object representations; thus, those with object recognition impairments are more likely to have acquired damage to the dorsal cortex. Those suffering from Alzheimer's disease show a reduction in stereognosis, the ability to perceive and recognize the form of an object in the absence of visual and auditory information. This supports the notion that astereognosis is an associative disorder in which the connection between tactile information and memory is disturbed.

While astereognosis is characterized by the lack of tactile recognition in both hands, it is closely related to tactile agnosia (impairment confined to one hand). Tactile agnosia observations are rare and case-specific. Josef Gerstmann recounts his experience with patient JH, a 34-year-old infantryman who suffered a lesion to the posterior parietal lobe due to a gunshot. Following the injury, JH was unable to recognize or identify everyday objects by their meaning, origin, purpose, and use with his left hand using tactile sensation alone. His motility performance, elementary sensitivity, and speech were intact, and he lacked abnormalities in brain nerves.

The majority of objects JH touched with his left hand went unrecognized, but very simple objects (e.g. globes, pyramids, cubes) were regularly recognized based on form alone. For more complex objects, his behavior and recognition varied daily, based on tactile resources that changed over time and depended on his fatigue. That is, JH's ability to recognize depended on his concentration and his ability to recognize simple forms and single qualities like size and shape. With further interrogation and greater effort, he was able to correctly identify more specific features of an object (e.g. softness, rounded or cornered, broad or narrow) and could even draw a copy of it, but he was often left unable to identify the object by name, use, or origin. This behavioral deficit occurred even if JH had handled the object in his fully intact right hand.

Interventions tend to focus on helping these patients and their families and caregivers cope and adapt to the condition, and furthermore, to help patients function independently within their context.
**Novo Nordisk Foundation Center for Protein Research**
Novo Nordisk Foundation Center for Protein Research:
The Novo Nordisk Foundation Center for Protein Research was established at the Faculty of Health and Medical Sciences at the University of Copenhagen to promote basic and applied discovery research on human proteins of medical relevance. The establishment of the center (announced in April 2007) was made possible by a donation of 600 million DKK (~113 million USD) from the Novo Nordisk Foundation and through significant contributions from the University of Copenhagen for the renovation of the Center's laboratories. The Center comprises a wide range of expertise and skills within the research areas of disease systems biology, proteomics, high-throughput protein production and characterisation, chemical biology, disease biology, and protein therapeutics. The Center also contributes to the progress of translational research within medicine and provides fundamental insights which can be used to promote drug discovery and development.
**Bytecode**
Bytecode:
Bytecode (also called portable code or p-code) is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references (normally numeric addresses) that encode the result of a compiler's parsing and semantic analysis of things like the types, scopes, and nesting depths of program objects.
Bytecode:
The name bytecode stems from instruction sets that have one-byte opcodes followed by optional parameters. Intermediate representations such as bytecode may be output by programming language implementations to ease interpretation, or it may be used to reduce hardware and operating system dependence by allowing the same code to run cross-platform, on different devices. Bytecode may often be either directly executed on a virtual machine (a p-code machine, i.e., interpreter), or it may be further compiled into machine code for better performance.
Bytecode:
Since bytecode instructions are processed by software, they may be arbitrarily complex, but are nonetheless often akin to traditional hardware instructions: virtual stack machines are the most common, but virtual register machines have been built also. Different parts may often be stored in separate files, similar to object modules, but dynamically loaded during execution.
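As a concrete illustration (using Python, whose reference interpreter compiles source to a stack-machine bytecode), the standard dis module shows the compact instructions a small function compiles to; the exact opcode names vary between Python versions:

```python
import dis

def add_one(x):
    return x + 1

# Disassemble the function's compiled bytecode. On CPython this prints
# stack-machine instructions along the lines of:
#   LOAD_FAST 0 (x), LOAD_CONST 1 (1), BINARY_ADD / BINARY_OP, RETURN_VALUE
dis.dis(add_one)
```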
Execution:
A bytecode program may be executed by parsing and directly executing the instructions, one at a time. This kind of bytecode interpreter is very portable. Some systems, called dynamic translators, or just-in-time (JIT) compilers, translate bytecode into machine code as necessary at runtime. This makes the virtual machine hardware-specific but does not lose the portability of the bytecode. For example, Java and Smalltalk code is typically stored in bytecode format, which is typically then JIT compiled to translate the bytecode to machine code before execution. This introduces a delay before a program is run, when the bytecode is compiled to native machine code, but improves execution speed considerably compared to interpreting source code directly, normally by around an order of magnitude (10x). Because of its performance advantage, today many language implementations execute a program in two phases, first compiling the source code into bytecode, and then passing the bytecode to the virtual machine. There are bytecode-based virtual machines of this sort for Java, Raku, Python, PHP, Tcl, mawk and Forth (however, Forth is seldom compiled via bytecodes in this way, and its virtual machine is more generic instead). The implementations of Perl and Ruby 1.8 instead work by walking an abstract syntax tree representation derived from the source code.
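A minimal sketch of such a direct bytecode interpreter, written here in Python with an invented three-instruction stack machine (not any real VM's instruction set):

```python
# Invented opcodes for a toy stack machine: the dispatch loop below is the
# "parse and directly execute, one instruction at a time" execution model.
PUSH, ADD, PRINT = 0, 1, 2

def run(program):
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == PUSH:               # opcode followed by one parameter
            stack.append(program[pc])
            pc += 1
        elif op == ADD:              # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == PRINT:            # pop and display the top of the stack
            print(stack.pop())

# Equivalent to: print(2 + 3)
run([PUSH, 2, PUSH, 3, ADD, PRINT])
```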
Execution:
More recently, the authors of V8 and Dart have challenged the notion that intermediate bytecode is needed for fast and efficient VM implementation. Both of these language implementations currently do direct JIT compiling from source code to machine code with no bytecode intermediary.
Examples:
ActionScript executes in the ActionScript Virtual Machine (AVM), which is part of Flash Player and AIR. ActionScript code is typically transformed into bytecode format by a compiler. Examples of compilers include one built into Adobe Flash Professional and one built into Adobe Flash Builder and available in the Adobe Flex SDK.
Examples:
- Adobe Flash objects
- BANCStar, originally bytecode for an interface-building tool but used also as a language
- Berkeley Packet Filter
- Berkeley Pascal
- Byte Code Engineering Library
- C to Java virtual machine compilers
- CLISP implementation of Common Lisp used to compile only to bytecode for many years; now it also supports compiling to native code with the help of GNU lightning
- CMUCL and Scieneer Common Lisp implementations of Common Lisp can compile either to native code or to bytecode, which is far more compact
- Common Intermediate Language, executed by the Common Language Runtime and used by .NET languages such as C#
- Dalvik bytecode, designed for the Android platform, is executed by the Dalvik virtual machine
- Dis bytecode, designed for the Inferno operating system, is executed by the Dis virtual machine
- EiffelStudio for the Eiffel programming language
- EM, the Amsterdam Compiler Kit virtual machine, used as an intermediate compiling language and as a modern bytecode language
- Emacs is a text editor with most of its functions implemented by Emacs Lisp, its built-in dialect of Lisp. These features are compiled into bytecode. This architecture allows users to customize the editor with a high-level language, which after compiling into bytecode yields reasonable performance.
Examples:
- Embeddable Common Lisp implementation of Common Lisp can compile to bytecode or C code
- Common Lisp provides a disassemble function which prints the underlying code of a specified function to the standard output. The result is implementation-dependent and may or may not resolve to bytecode, and its inspection can be used for debugging and optimization; Steel Bank Common Lisp, for instance, produces native-code disassembly.
- Ericsson's implementation of Erlang uses BEAM bytecodes
- Ethereum's Virtual Machine (EVM) is the runtime environment, using its own bytecode, for transaction execution in Ethereum (smart contracts)
Examples:
- Icon and Unicon programming languages
- Infocom used the Z-machine to make its software applications more portable
- Java bytecode, which is executed by the Java virtual machine and manipulated by libraries such as ASM, BCEL, and Javassist
- Keiko bytecode, used by the Oberon-2 programming language to make it and the Oberon operating system more portable
- KEYB, the MS-DOS/PC DOS keyboard driver, with its resource file KEYBOARD.SYS containing layout information and short p-code sequences executed by an interpreter inside the resident driver
- LLVM IR
- LSL, a scripting language used in virtual worlds, compiles into bytecode running on a virtual machine; Second Life has the original Mono version, and InWorldz developed the Phlox version
- Lua language uses a register-based bytecode virtual machine
- m-code of the MATLAB language
- Malbolge is an esoteric machine language for a ternary virtual machine
Examples:
- Microsoft P-code, used in Visual C++ and Visual Basic
- Multiplan
- O-code of the BCPL programming language
- OCaml language optionally compiles to a compact bytecode form
- p-code of the UCSD Pascal implementation of the Pascal language
- Parrot virtual machine
- Pick BASIC, also referred to as Data BASIC or MultiValue BASIC
- The R environment for statistical computing offers a bytecode compiler through the compiler package, standard since R version 2.13.0. It is possible to compile this version of R so that the base and recommended packages exploit this.
Examples:
- Pyramid 2000 adventure game
- Python scripts are compiled on execution to Python's bytecode language, and the compiled files (.pyc) are cached (in Python 3, inside a __pycache__ directory next to the script). Compiled code can be analysed and investigated using a built-in tool for debugging the low-level bytecode. The tool can be invoked from the shell, as in the example after this list.
- Scheme 48 implementation of Scheme using a bytecode interpreter
- Bytecodes of many implementations of the Smalltalk language
- The Spin interpreter built into the Parallax Propeller microcontroller
- The SQLite database engine translates SQL statements into a bespoke byte-code format
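The original shell example is not preserved here; a plausible invocation of that tool (Python's dis module; the script name is hypothetical) looks like this:

```python
# From the shell, disassemble a whole file's compiled bytecode:
#   python -m dis myscript.py
# Or interactively, disassemble a single compiled expression:
import dis
dis.dis(compile("x + 1", "<example>", "eval"))
```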
Examples:
- Apple SWEET16
- Tcl
- TIMI, used by compilers on the IBM i platform
- Tiny BASIC
- Visual FoxPro compiles to bytecode
- WebAssembly
- YARV and Rubinius for Ruby
- ZCODE
**Growing context-sensitive grammar**
Growing context-sensitive grammar:
In formal language theory, a growing context-sensitive grammar is a context-sensitive grammar in which the productions increase the length of the sentences being generated. These grammars are thus noncontracting and context-sensitive. A growing context-sensitive language is a context-sensitive language generated by these grammars.
Growing context-sensitive grammar:
In these grammars the start symbol S does not appear on the right-hand side of any production rule, and the length of the right-hand side of each production exceeds the length of the left side, unless the left side is S. These grammars were introduced by Dahlhaus and Warmuth. They were later shown to be equivalent to the acyclic context-sensitive grammars. Membership in any growing context-sensitive language is polynomial-time computable; however, the uniform problem of deciding whether a given string belongs to the language generated by a given growing or acyclic context-sensitive grammar is NP-complete.
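As a minimal illustration of the definition (a deliberately simple grammar, invented here: it generates only the regular language $\{a^{2n} : n \geq 1\}$, but every production whose left side is not $S$ strictly increases length):

```latex
\begin{align*}
S &\to aa \mid aaA && \text{($S$ appears on no right-hand side)}\\
A &\to aa \mid aaA && \text{(each right-hand side is longer than its left side)}
\end{align*}
```

A sample derivation: $S \Rightarrow aaA \Rightarrow aaaaA \Rightarrow aaaaaa$.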
**Quality circle**
Quality circle:
A quality circle or quality control circle is a group of workers who do the same or similar work and who meet regularly to identify, analyze, and solve work-related problems. It consists of a minimum of three and a maximum of twelve members. Normally small in size, the group is usually led by a supervisor or manager and presents its solutions to management; where possible, workers implement the solutions themselves in order to improve the performance of the organization and motivate employees. Quality circles were at their most popular during the 1980s, but continue to exist in the form of Kaizen groups and similar worker participation schemes.

Typical topics for the attention of quality circles are improving occupational safety and health, improving product design, and improvement in the workplace and manufacturing processes. The term quality circles was most accessibly defined by Professor Kaoru Ishikawa in his 1985 handbook, "What is Total Quality Control? The Japanese Way", and circulated throughout Japanese industry by the Union of Japanese Scientists and Engineers (JUSE) in 1960. The first company in Japan to introduce quality circles was the Nippon Wireless and Telegraph Company in 1962; by the end of that year there were 36 companies registered with JUSE, and by 1978 the movement had grown to an estimated 1 million circles involving some 10 million Japanese workers. The movement built on work by Dr. W. Edwards Deming during the Allied occupation of Japan, for which the Deming Prize was established in 1950, as well as work by Joseph M. Juran in 1954.

Quality circles are typically more formal groups. They meet regularly on company time and are trained by competent persons (usually designated as facilitators) who may be personnel and industrial relations specialists trained in human factors and the basic skills of problem identification, information gathering and analysis, basic statistics, and solution generation. Quality circles are generally free to select any topic they wish (other than those related to salary and terms and conditions of work, as there are other channels through which these issues are usually considered). Quality circles have the advantage of continuity; the circle remains intact from project to project. (For a comparison to Quality Improvement Teams, see Juran's Quality by Design.)
Quality circle:
Handbook of Quality Circle: The quality circle is a people-development concept based on the premise that an employee doing a certain task is the most informed person in that topic and, as a result, is in a better position to identify, analyse, and handle work-related challenges through their innovative and unique ideas. It is, in fact, a practical application of McGregor's Theory Y, which argues that if employees are given the right atmosphere and decision-making authority, they will enjoy and take pride in their work, resulting in a more fulfilling work life. A quality circle is a small group of workers who work in the same area or do similar sorts of work and meet once a week for an hour to identify, analyse, and resolve work-related issues. The objective is to improve the quality, productivity, and overall performance of the company, as well as the workers' quality of life at work. The publication division of the TQM World Institution of Quality Excellence published the book "Handbook of Quality Circle" by Prasanta Kumar Barik, which brings together the theoretical concepts with detailed implementation steps, intended to be useful for quality circle implementation in all types of organizations.
History:
Quality circles were originally described by W. Edwards Deming in the 1950s; Deming praised Toyota as an example of the practice. The idea was later formalized across Japan in 1962 and expanded by others such as Kaoru Ishikawa. The Japanese Union of Scientists and Engineers (JUSE) coordinated the movement in Japan. The first circles started at the Nippon Wireless and Telegraph Company; the idea then spread to more than 35 other companies in the first year. By 1978 it was claimed by JUSE, in their publication Gemba to QC Circles, that there were more than one million quality circles involving some 10 million Japanese workers. As of 2015 they operate in most East Asian countries; it was claimed by the President of the Chinese Quality Circles Society at the ICSQCC Conference in Beijing on 30 August 1997 that there were more than 20 million quality circles in China.

Quality circles have been implemented even in educational sectors in India, and QCFI (Quality Circle Forum of India) is promoting such activities. However, this was not successful in the United States, as the idea was not properly understood and implementation turned into a fault-finding exercise, although some circles do still exist. Don Dewar, founder of Quality Digest, together with Wayne Ryker and Jeff Beardsley, established quality circles in 1972 at the Lockheed Space Missile factory in California.
History:
The TQM World Institution of Quality Excellence (TQM-WIQE), through its e-learning division Quality Excellence Forum (QEF), provides training on quality circles with three different levels of certification for better implementation of quality circles worldwide. The certification levels are Quality Circle Fundamentals (QCF), Quality Circle Professional (QCP), and Quality Circle Master (QCM).
Empirical studies:
In a structures-fabrication and assembly plant in the south-eastern US, some quality circles (QCs) were established by management (management-initiated), whereas others were formed at the request of employees (self-initiated). Based on 47 QCs over a three-year period, research showed that management-initiated QCs have fewer members, solve more work-related QC problems, and solve their problems much faster than self-initiated QCs. However, the effect of QC initiation (management- vs. self-initiated) on problem-solving performance disappears after controlling for QC size. High attendance at QC meetings is related to a lower number of projects completed and slower performance in management-initiated QCs. QCs with high upper-management support (high attendance of QC meetings) solve significantly more problems than those without. Active QCs had a lower rate of problem-solving failure, a higher attendance rate at QC meetings, and higher net savings on QC projects than inactive QCs. QC membership tends to decrease over the three-year period, and larger QCs have a better chance of survival than smaller QCs. A significant drop in QC membership is a precursor of QC failure, and a sudden decline in membership represents the final and irreversible stage of a QC's demise. Attributions of quality circles' problem-solving failure vary across QC participants: management, supporting staff, and QC members.

There are seven basic quality improvement tools that circles use:
- Cause-and-effect diagrams (sometimes called Ishikawa or "fishbone" diagrams)
- Pareto charts
- Process mapping and data-gathering tools such as check sheets
- Graphical tools such as histograms, frequency diagrams, spot charts, and pie charts
- Run charts and control charts
- Scatter plots and correlation analysis
- Flowcharts
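As one concrete illustration of these tools, a Pareto chart ranks problem categories by frequency alongside their cumulative percentage. The sketch below uses invented defect counts and assumes matplotlib is available:

```python
import matplotlib.pyplot as plt

# Hypothetical defect tallies a circle might collect on a check sheet.
counts = {"Scratches": 42, "Misalignment": 27, "Loose bolts": 15,
          "Paint runs": 9, "Other": 7}
labels, values = zip(*sorted(counts.items(), key=lambda kv: -kv[1]))
total = sum(values)
cumulative = [sum(values[:i + 1]) / total * 100 for i in range(len(values))]

fig, ax1 = plt.subplots()
ax1.bar(labels, values)                   # frequency bars, largest first
ax1.set_ylabel("Defect count")
ax2 = ax1.twinx()
ax2.plot(labels, cumulative, marker="o")  # cumulative-percentage line
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 100)
plt.title("Pareto chart of assembly defects (invented data)")
plt.show()
```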
Student quality circles:
Student quality circles work on the original philosophy of total quality management. The idea of SQCs was presented by City Montessori School (CMS), Lucknow, India at a conference in Hong Kong in October 1994. It was developed and mentored by two engineers of Indian Railways, P. C. Bihari and Swami Das, in association with Principal Dr. Kamran of CMS Lucknow, India. They were inspired and facilitated by Jagdish Gandhi, who founded CMS after his visit to Japan, where he learned about Kaizen. CMS has continued to conduct international conventions on student quality circles every two years. After seeing its utility, educators from many countries started such circles. The World Council for Total Quality & Excellence in Education was established in 1999, with its corporate office in Lucknow and head office in Singapore. It monitors and facilitates student quality circle activities in its member countries, which number more than a dozen. SQCs are considered to be a co-curricular activity. They have been established in India, Bangladesh, Pakistan, Nepal, Sri Lanka, Turkey, Mauritius, Iran, the UK (Kingston University, and started at the University of Leicester), and the USA.
Student quality circles:
In Nepal, Prof. Dinesh P. Chapagain has been promoting the approach through QUEST-Nepal since 1999. He has written a book entitled "A Guide Book on Students' Quality Circle: An Approach to Prepare Total Quality People", which is considered a standard guide to promote SQCs in academia for students' personality development. The TQM World Institution of Quality Excellence, through its Academic Outreach Initiative (WIQE-AOI), promotes the student quality circle concept. It provides training and certification for students and mentors at universities, management and engineering institutions, and schools for better implementation of student quality circles in academics and the overall growth of students.
**5-OH-DPAT**
5-OH-DPAT:
5-OH-DPAT is a synthetic compound that acts as a dopamine receptor agonist with selectivity for the D2 and D3 receptor subtypes. Only the (S)-enantiomer is active as an agonist, with the (R)-enantiomer being a weak antagonist at D2 receptors. Radiolabelled [11C]5-OH-DPAT is used as an agonist radioligand for mapping the distribution and function of D2 and D3 receptors in the brain, and the drug is also being studied in the treatment of Parkinson's disease.
**Proton decay**
Proton decay:
In particle physics, proton decay is a hypothetical form of particle decay in which the proton decays into lighter subatomic particles, such as a neutral pion and a positron. The proton decay hypothesis was first formulated by Andrei Sakharov in 1967. Despite significant experimental effort, proton decay has never been observed. If it does decay via a positron, the proton's half-life is constrained to be at least 1.67×10^34 years. According to the Standard Model, the proton, a type of baryon, is stable because baryon number (quark number) is conserved (under normal circumstances; see chiral anomaly for an exception). Therefore, protons will not decay into other particles on their own, because they are the lightest (and therefore least energetic) baryon. Positron emission and electron capture – forms of radioactive decay which see a proton become a neutron – are not proton decay, since the proton interacts with other particles within the atom.
Proton decay:
Some beyond-the-Standard-Model grand unified theories (GUTs) explicitly break the baryon number symmetry, allowing protons to decay via the Higgs particle, magnetic monopoles, or new X bosons, with a half-life of 10^31 to 10^36 years. For comparison, the universe is roughly 1.38×10^10 years old. To date, all attempts to observe new phenomena predicted by GUTs (like proton decay or the existence of magnetic monopoles) have failed.
Proton decay:
Quantum tunnelling may be one of the mechanisms of proton decay. Quantum gravity (via virtual black holes and Hawking radiation) may also provide a venue for proton decay at magnitudes or lifetimes well beyond the GUT-scale decay range above, as may extra dimensions in supersymmetry. There are theoretical methods of baryon violation other than proton decay, including interactions with changes of baryon and/or lepton number other than 1 (as required in proton decay). These include B and/or L violations of 2, 3, or other numbers, or B − L violation. Such examples include neutron oscillations and the electroweak sphaleron anomaly at high energies and temperatures, which can convert protons into antileptons or vice versa (a key factor in leptogenesis and non-GUT baryogenesis).
Baryogenesis:
One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatter. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every 10^10 particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
Baryogenesis:
Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons (X) or massive Higgs bosons (H0). The rate at which these events occur is governed largely by the mass of the intermediate X or H0 particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay.
Experimental evidence:
Proton decay is one of the key predictions of the various grand unified theories (GUTs) proposed in the 1970s, another major one being the existence of magnetic monopoles. Both concepts have been the focus of major experimental physics efforts since the early 1980s. To date, all attempts to observe these events have failed; however, these experiments have been able to establish lower bounds on the half-life of the proton. Currently, the most precise results come from the Super-Kamiokande water Cherenkov radiation detector in Japan: a 2015 analysis placed a lower bound on the proton's half-life of 1.67×10^34 years via positron decay, and similarly, a 2012 analysis gave a lower bound to the proton's half-life of 1.08×10^34 years via antimuon decay, close to a supersymmetry (SUSY) prediction of 10^34–10^36 years. An upgraded version, Hyper-Kamiokande, probably will have sensitivity 5–10 times better than Super-Kamiokande.
Theoretical motivation:
Despite the lack of observational evidence for proton decay, some grand unification theories, such as the SU(5) Georgi–Glashow model and SO(10), along with their supersymmetric variants, require it. According to such theories, the proton has a half-life of about 10^31 to 10^36 years and decays into a positron and a neutral pion that itself immediately decays into two gamma-ray photons:

p+ → e+ + π0, followed by π0 → 2γ.

Since a positron is an antilepton, this decay preserves B − L number, which is conserved in most GUTs.
Theoretical motivation:
Additional decay modes are available (e.g.: p+ → μ+ + π0 ), both directly and when catalyzed via interaction with GUT-predicted magnetic monopoles. Though this process has not been observed experimentally, it is within the realm of experimental testability for future planned very large-scale detectors on the megaton scale. Such detectors include the Hyper-Kamiokande.
Theoretical motivation:
Early grand unification theories (GUTs) such as the Georgi–Glashow model, which were the first consistent theories to suggest proton decay, postulated that the proton's half-life would be at least 10^31 years. As further experiments and calculations were performed in the 1990s, it became clear that the proton half-life could not lie below 10^32 years. Many books from that period refer to this figure for the possible decay time for baryonic matter. More recent findings have pushed the minimum proton half-life to at least 10^34–10^35 years, ruling out the simpler GUTs (including minimal SU(5) / Georgi–Glashow) and most non-SUSY models. The maximum upper limit on proton lifetime (if unstable) is calculated at 6×10^39 years, a bound applicable to SUSY models, with a maximum for (minimal) non-SUSY GUTs at 1.4×10^36 years. Although the phenomenon is referred to as "proton decay", the effect would also be seen in neutrons bound inside atomic nuclei. Free neutrons – those not inside an atomic nucleus – are already known to decay into protons (and an electron and an antineutrino) in a process called beta decay. Free neutrons have a half-life of about 10 minutes (610.2±0.8 s) due to the weak interaction. Neutrons bound inside a nucleus have an immensely longer half-life – apparently as great as that of the proton.
Projected proton lifetimes:
The lifetime of the proton in vanilla SU(5) can be naively estimated as $\tau_p \sim M_X^4/m_p^5$. Supersymmetric GUTs with unification scales around $M_X \sim 2\times10^{16}\ \mathrm{GeV}/c^2$ yield a lifetime of around $10^{34}$ years, roughly the current experimental lower bound.
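A quick order-of-magnitude check of this estimate (a back-of-the-envelope calculation, not from the source), restoring units with $\hbar \approx 6.58\times10^{-25}\ \mathrm{GeV\,s}$:

```latex
\tau_p \sim \frac{M_X^4}{m_p^5}
       \approx \frac{(2\times10^{16}\ \mathrm{GeV})^4}{(0.94\ \mathrm{GeV})^5}
       \approx 2\times10^{65}\ \mathrm{GeV}^{-1}
       \approx 1.4\times10^{41}\ \mathrm{s}
       \approx 5\times10^{33}\ \mathrm{yr},
```

consistent with the $\sim 10^{34}$-year figure quoted above.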
Decay operators:
Dimension-6 proton decay operators: The dimension-6 proton decay operators are $qqq\ell/\Lambda^2$, $d^c u^c u^c e^c/\Lambda^2$, $\overline{e^c}\,\overline{u^c}\,qq/\Lambda^2$, and $\overline{d^c}\,\overline{u^c}\,q\ell/\Lambda^2$, where $\Lambda$ is the cutoff scale for the Standard Model. All of these operators violate both baryon number (B) and lepton number (L) conservation but not the combination B − L.
Decay operators:
In GUT models, the exchange of an X or Y boson with mass $\Lambda_{\text{GUT}}$ can lead to the last two operators suppressed by $1/\Lambda_{\text{GUT}}^2$. The exchange of a triplet Higgs with mass $M$ can lead to all of the operators suppressed by $1/M^2$. See doublet–triplet splitting problem.
[Figure: diagrams of proton decay mediated by X bosons and Higgs bosons.]
Decay operators:
Dimension-5 proton decay operators: In supersymmetric extensions (such as the MSSM), we can also have dimension-5 operators involving two fermions and two sfermions caused by the exchange of a tripletino of mass $M$. The sfermions will then exchange a gaugino, Higgsino, or gravitino, leaving two fermions. The overall Feynman diagram has a loop (and other complications due to strong interaction physics). This decay rate is suppressed by $1/(M M_{\text{SUSY}})$, where $M_{\text{SUSY}}$ is the mass scale of the superpartners.
Decay operators:
Dimension-4 proton decay operators: In the absence of matter parity, supersymmetric extensions of the Standard Model can give rise to the last operator suppressed by the inverse square of the sdown quark mass. This is due to the dimension-4 operators $q\ell\tilde{d}^c$ and $u^c d^c \tilde{d}^c$. The proton decay rate is then suppressed only by $1/M_{\text{SUSY}}^2$, which is far too fast unless the couplings are very small.
**Remote manipulator**
Remote manipulator:
A remote manipulator, also known as a telefactor, telemanipulator, or waldo (after the 1942 short story "Waldo" by Robert A. Heinlein which features a man who invents and uses such devices), is a device which, through electronic, hydraulic, or mechanical linkages, allows a hand-like mechanism to be controlled by a human operator. The purpose of such a device is usually to move or manipulate hazardous materials for reasons of safety, similar to the operation and play of a claw crane game.
History:
In 1945, the company Central Research Laboratories was given the contract to develop a remote manipulator for the Argonne National Laboratory. The intent was to replace devices which manipulated highly radioactive materials from above a sealed chamber or hot cell, with a mechanism which operated through the side wall of the chamber, allowing a researcher to stand normally while working.
History:
The result was the Master-Slave Manipulator Mk. 8, or MSM-8, which became the iconic remote manipulator seen in newsreels and movies such as The Andromeda Strain and THX 1138.
History:
Robert A. Heinlein claimed a much earlier origin for remote manipulators. He wrote that he got the idea for "waldos" after reading a 1918 article in Popular Mechanics about "a poor fellow afflicted with myasthenia gravis ... [who] devised complicated lever arrangements to enable him to use what little strength he had." An article in Science Robotics on robots, science fiction, and nuclear accidents discusses how the science fiction waldos are now a major type of real-world robots used in the nuclear industry.
**Presentation copy**
Presentation copy:
A presentation copy is a copy of a book that has been dedicated, illustrated, or signed (without request) by the author, or a book that was a gift from the author. An inscribed copy, by contrast, is one signed by the author at the book owner's request. Presentation copies are generally more valuable and rarer than inscribed copies.
Examples of presentation copies:
- Plays, Never Before Printed (1668), signed by Margaret Cavendish, at the Folger Shakespeare Library
- An Account of the Abipones (1784), presentation copy from John Carter Brown to John R. Bartlett, at the John Hay Library, Brown University
- A Study in Scarlet (1887), signed "With the Author's Compliments" by Arthur Conan Doyle, at the Beinecke Library, Yale University
- The Nursery "Alice" (1889), dedicated by Lewis Carroll, sold by Sotheby's in 2012 for £36,050
**The Alternate Source Programmer's Journal**
The Alternate Source Programmer's Journal:
The Alternate Source, also known as The Alternate Source Programmer's Journal, was a magazine of technical programming articles, most of which were at the assembly-language level, focused on the TRS-80 Model I and Model III; a few articles related to the TRS-80 Color Computer. It was published by Charlie W. Butler (d. September 11, 2014) and Joni M. Kosloski of The Alternate Source, a major TRS-80 software publisher, from around 1980 to around 1983. TAS was known for the high intensity level of its articles and as such was the "prestige" technical journal of the time. Among its contributors were Jake Commander, Jack Decker, Bruce Hansen, Larry Kingsbury, Dennis Kitsz, Steven Kovitz, Alan Moluf, Troy L. Pierce, and Gordon Williams.
The Alternate Source Programmer's Journal:
The meaning behind the name "The Alternate Source" is that TAS set itself up as being an alternative to the official software and information coming from Radio Shack, the manufacturer of the TRS-80.
**XAGE1D**
XAGE1D:
G antigen family D member 2 is a protein that in humans is encoded by the XAGE1D gene. This gene is a member of the XAGE subfamily, which belongs to the GAGE family. The GAGE genes are expressed in a variety of tumors and in some fetal and reproductive tissues. This gene is strongly expressed in Ewing's sarcoma, alveolar rhabdomyosarcoma, and normal testis. The protein encoded by this gene contains a nuclear localization signal and shares sequence similarity with other GAGE/PAGE proteins. Because of this expression pattern and sequence similarity, the protein also belongs to a family of CT (cancer-testis) antigens. Alternative splicing of this gene generates 3 transcript variants, one of which includes 2 transcripts generated from alternate transcription initiation sites.
**Glycerol**
Glycerol:
Glycerol, also called glycerine or glycerin, is a simple triol compound. It is a colorless, odorless, viscous liquid that is sweet-tasting and non-toxic. The glycerol backbone is found in lipids known as glycerides. Because it has antimicrobial and antiviral properties, it is widely used in wound and burn treatments approved by the U.S. Food and Drug Administration. Conversely, it is also used as a bacterial culture medium. Its presence in blood can be used as an effective marker to measure liver disease. It is also widely used as a sweetener in the food industry and as a humectant in pharmaceutical formulations. Because of its three hydroxyl groups, glycerol is miscible with water and is hygroscopic in nature.
Structure:
Although achiral, glycerol is prochiral with respect to reactions of one of the two primary alcohols. Thus, in substituted derivatives, stereospecific numbering labels the molecule with an sn- prefix before the stem name of the molecule.
Production:
Glycerol is generally obtained from plant and animal sources where it occurs in triglycerides, esters of glycerol with long-chain carboxylic acids. The hydrolysis, saponification, or transesterification of these triglycerides produces glycerol as well as the fatty acid derivative. For example, triglycerides can be saponified with sodium hydroxide to give glycerol and fatty sodium salt, or soap.
Production:
Typical plant sources include soybeans or palm. Animal-derived tallow is another source. Approximately 950,000 tons per year are produced in the United States and Europe; 350,000 tons of glycerol were produced per year in the U.S. alone from 2000 to 2004. The EU directive 2003/30/EC set a requirement that 5.75% of petroleum fuels were to be replaced with biofuel sources across all member states by 2010. It was projected in 2006 that by 2020, production would be six times more than demand, creating an excess of glycerol as a byproduct of biofuel production. Glycerol from triglycerides is produced on a large scale, but the crude product is of variable quality, with a selling price as low as US$0.02–0.05 per kilogram in 2011. It can be purified, but the process is expensive. Some glycerol is burned for energy, but its heat value is low. Crude glycerol from the hydrolysis of triglycerides can be purified by treatment with activated carbon to remove organic impurities, alkali to remove unreacted glycerol esters, and ion exchange to remove salts. High-purity glycerol (greater than 99.5%) is obtained by multi-step distillation; a vacuum chamber is necessary due to its high boiling point (290 °C).
Production:
Synthetic glycerol:
Although usually not cost-effective, glycerol can be produced by various routes from propene. The epichlorohydrin process is the most important: it involves the chlorination of propylene to give allyl chloride, which is oxidized with hypochlorite to dichlorohydrin, which reacts with a strong base to give epichlorohydrin. Epichlorohydrin can be hydrolyzed to glycerol. Chlorine-free processes from propylene include the synthesis of glycerol from acrolein and propylene oxide.
Production:
Because of the large-scale production of biodiesel from fats, where glycerol is a waste product, the market for glycerol is depressed. Thus, synthetic processes are not economical. Owing to oversupply, efforts are being made to convert glycerol to synthetic precursors, such as acrolein and epichlorohydrin.
Applications:
Food industry:
In food and beverages, glycerol serves as a humectant, solvent, and sweetener, and may help preserve foods. It is also used as filler in commercially prepared low-fat foods (e.g., cookies), and as a thickening agent in liqueurs. Glycerol and water are used to preserve certain types of plant leaves. As a sugar substitute, it has approximately 27 kilocalories per teaspoon (sugar has 20) and is 60% as sweet as sucrose. It does not feed the bacteria that form a dental plaque and cause dental cavities. As a food additive, glycerol is labeled as E number E422. It is added to icing (frosting) to prevent it from setting too hard.
Applications:
As used in foods, glycerol is categorized by the U.S. Academy of Nutrition and Dietetics as a carbohydrate. The U.S. Food and Drug Administration (FDA) carbohydrate designation includes all caloric macronutrients excluding protein and fat. Glycerol has a caloric density similar to table sugar, but a lower glycemic index and different metabolic pathway within the body.
Applications:
It is also recommended as an additive when using polyol sweeteners such as erythritol and xylitol, which have a cooling effect, because its heating effect in the mouth offsets the cooling effect when that is not wanted. Excessive consumption by children can lead to glycerol intoxication. Symptoms of intoxication include hypoglycemia, nausea, and a loss of consciousness (syncope). While intoxication as a result of excessive glycerol consumption is rare and its symptoms generally mild, occasional reports of hospitalization have occurred. In the United Kingdom in August 2023, manufacturers of syrup used in slush ice drinks were advised by the Food Standards Agency to reduce the amount of glycerol in their formulations to reduce the risk of intoxication.
Applications:
Medical, pharmaceutical and personal care applications:
Glycerin is mildly antimicrobial and antiviral and is an FDA-approved treatment for wounds. The Red Cross reports that an 85% solution of glycerin shows bactericidal and antiviral effects, and wounds treated with glycerin show reduced inflammation after roughly two hours. Due to this it is used widely in wound care products, including glycerin-based hydrogel sheets for burns and other wound care. It is approved for all types of wound care except third-degree burns, and is used to package donor skin used in skin grafts. Glycerol is used in medical, pharmaceutical and personal care preparations, often as a means of improving smoothness, providing lubrication, and as a humectant.
Applications:
Ichthyosis and xerosis have been relieved by the topical use of glycerin. It is found in allergen immunotherapies, cough syrups, elixirs and expectorants, toothpaste, mouthwashes, skin care products, shaving cream, hair care products, soaps, and water-based personal lubricants. In solid dosage forms like tablets, glycerol is used as a tablet holding agent. For human consumption, glycerol is classified by the FDA among the sugar alcohols as a caloric macronutrient. Glycerol is also used in blood banking to preserve red blood cells prior to freezing.
Applications:
Glycerol is a component of glycerin soap. Essential oils are added for fragrance. This kind of soap is used by people with sensitive, easily irritated skin because it prevents skin dryness with its moisturizing properties. It draws moisture up through skin layers and slows or prevents excessive drying and evaporation.

Taken rectally, glycerol functions as a laxative by irritating the anal mucosa and inducing a hyperosmotic effect, expanding the colon by drawing water into it to induce peristalsis resulting in evacuation. It may be administered undiluted either as a suppository or as a small-volume (2–10 ml) enema. Alternatively, it may be administered in a dilute solution, such as 5%, as a high-volume enema.

Taken orally (often mixed with fruit juice to reduce its sweet taste), glycerol can cause a rapid, temporary decrease in the internal pressure of the eye. This can be useful for the initial emergency treatment of severely elevated eye pressure.

In 2017, researchers showed that the probiotic Limosilactobacillus reuteri bacteria can be supplemented with glycerol to enhance its production of antimicrobial substances in the human gut. This was confirmed to be as effective as the antibiotic vancomycin at inhibiting Clostridioides difficile infection without having a significant effect on the overall microbial composition of the gut.

Glycerol has also been incorporated as a component of bio-ink formulations in the field of bioprinting. The glycerol content acts to add viscosity to the bio-ink without adding large protein, saccharide, or glycoprotein molecules.
Applications:
Botanical extracts
When utilized in "tincture" method extractions, specifically as a 10% solution, glycerol prevents tannins from precipitating in ethanol extracts of plants (tinctures). It is also used as an "alcohol-free" alternative to ethanol as a solvent in preparing herbal extractions, although it is less extractive when utilized in a standard tincture methodology. Alcohol-based tinctures can also have the alcohol removed and replaced with glycerol for its preserving properties. Such products are not "alcohol-free" in a scientific or FDA regulatory sense, as glycerol is itself an alcohol, containing three hydroxyl groups. Fluid extract manufacturers often extract herbs in hot water before adding glycerol to make glycerites. When used as a primary "true" alcohol-free botanical extraction solvent in non-tincture-based methodologies, glycerol has been shown to possess a high degree of extractive versatility for botanicals, including removal of numerous constituents and complex compounds, with an extractive power that can rival that of alcohol and water–alcohol solutions. Such high extractive power assumes glycerol is utilized with dynamic (critical) methodologies, as opposed to standard passive "tincturing" methodologies that are better suited to alcohol. Glycerol does not denature or render a botanical's constituents inert the way alcohols (ethanol, methanol, and so on) do. It is a stable preserving agent for botanical extracts that, when utilized in proper concentrations in an extraction solvent base, prevents inversion and mitigates reduction-oxidation of a finished extract's constituents, even over several years. Both glycerol and ethanol are viable preserving agents; glycerol is bacteriostatic in its action, while ethanol is bactericidal.
Applications:
Electronic cigarette liquid
Glycerin, along with propylene glycol, is a common component of e-liquid, a solution used with electronic vaporizers (electronic cigarettes). This glycerol is heated with an atomizer (a heating coil often made of Kanthal wire), producing the aerosol that delivers nicotine to the user.
Antifreeze
Like ethylene glycol and propylene glycol, glycerol is a non-ionic kosmotrope that forms strong hydrogen bonds with water molecules, competing with water–water hydrogen bonds. This interaction disrupts the formation of ice. The minimum freezing point is about −38 °C (−36 °F), corresponding to 70% glycerol in water.
Applications:
Glycerol was historically used as an antifreeze for automotive applications before being replaced by ethylene glycol, which has a lower freezing point. While the minimum freezing point of a glycerol–water mixture is higher than that of an ethylene glycol–water mixture, glycerol is not toxic and is being re-examined for use in automotive applications. In the laboratory, glycerol is a common component of solvents for enzymatic reagents stored at temperatures below 0 °C (32 °F) due to its depression of the freezing temperature. It is also used as a cryoprotectant, dissolved in water to reduce damage by ice crystals to laboratory organisms stored in frozen solutions, such as fungi, bacteria, nematodes, and mammalian embryos. Some organisms, such as the moor frog, produce glycerol to survive freezing temperatures during hibernation.
Applications:
Chemical intermediate
Glycerol is used to produce nitroglycerin, an essential ingredient of various explosives such as dynamite, gelignite, and propellants like cordite. Reliance on soap-making to supply co-product glycerol made it difficult to increase production to meet wartime demand, so synthetic glycerol processes were national defense priorities in the days leading up to World War II. Nitroglycerin, also known as glyceryl trinitrate (GTN), is commonly used to relieve angina pectoris, taken in the form of sub-lingual tablets, patches, or as an aerosol spray.
Applications:
Trifunctional polyether polyols are produced from glycerol and propylene oxide. Oxidation of glycerol affords mesoxalic acid. Dehydrating glycerol affords hydroxyacetone.
Applications:
Vibration damping
Glycerol is used as fill for pressure gauges to damp vibration. External vibrations, from compressors, engines, pumps, etc., produce harmonic vibrations within Bourdon gauges that can cause the needle to move excessively, giving inaccurate readings. The excessive swinging of the needle can also damage internal gears or other components, causing premature wear. Glycerol, when poured into a gauge to replace the air space, reduces the harmonic vibrations that are transmitted to the needle, increasing the lifetime and reliability of the gauge.
Applications:
Niche uses
Entertainment industry
Glycerol is used by set decorators when filming scenes involving water to prevent an area meant to look wet from drying out too quickly. Glycerine is also used in the generation of theatrical smoke and fog as a component of the fluid used in fog machines, as a replacement for glycol, which has been shown to be an irritant if exposure is prolonged.
Applications:
Ultrasonic couplant
Glycerol can sometimes be used as a replacement for water in ultrasonic testing, as it has a favourably higher acoustic impedance (2.42 MRayl versus 1.483 MRayl for water) while being relatively safe, non-toxic, non-corrosive and relatively low cost.
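The practical effect of the higher impedance can be sketched with the energy reflection coefficient R = ((Z2 − Z1)/(Z2 + Z1))² at the couplant–part interface. Below is a minimal Python illustration; the steel impedance of roughly 45 MRayl is an assumed textbook value, not a figure from this article:

```python
def energy_reflection(z1_mrayl: float, z2_mrayl: float) -> float:
    """Energy reflection coefficient at a planar interface between two media
    with acoustic impedances z1 and z2 (normal incidence)."""
    return ((z2_mrayl - z1_mrayl) / (z2_mrayl + z1_mrayl)) ** 2

Z_STEEL = 45.0  # MRayl, assumed typical value for a steel test piece
for name, z in (("water", 1.483), ("glycerol", 2.42)):
    r = energy_reflection(z, Z_STEEL)
    print(f"{name}: {r:.1%} reflected, {1 - r:.1%} transmitted into the part")
```

Under these assumptions, glycerol transmits noticeably more ultrasonic energy into the part (about 19% versus about 12% for water), which is what the impedance comparison in the paragraph implies.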
Internal combustion fuel
Glycerol is also used to power diesel generators supplying electricity for the FIA Formula E series of electric race cars.
Research on additional uses
Research continues into potential value-added products of glycerol obtained from biodiesel production. Examples (aside from combustion of waste glycerol) include:
- Hydrogen gas production
- Glycerine acetate, a potential fuel additive
- Additive for starch thermoplastics
- Conversion to various other chemicals: propylene glycol, acrolein, ethanol, and epichlorohydrin (a raw material for epoxy resins)
Metabolism:
Glycerol is a precursor for synthesis of triacylglycerols and of phospholipids in the liver and adipose tissue. When the body uses stored fat as a source of energy, glycerol and fatty acids are released into the bloodstream.
Metabolism:
Glycerol is mainly metabolized in the liver. Glycerol injections can be used as a simple test for liver damage, as its rate of absorption by the liver is considered an accurate measure of liver health. Glycerol metabolism is reduced in both cirrhosis and fatty liver disease. Blood glycerol levels are highly elevated in diabetes and are believed to be a cause of reduced fertility in patients with diabetes and metabolic syndrome; blood glycerol levels in diabetic patients average three times those of healthy controls. Direct glycerol treatment of testes has been found to cause significant long-term reductions in sperm count. Further testing on this subject was abandoned because of these unexpected results, as reduced fertility was not the goal of the experiment. Circulating glycerol does not glycate proteins as glucose and fructose do, and does not lead to the formation of advanced glycation endproducts (AGEs). In some organisms, the glycerol component can enter the glycolysis pathway directly and, thus, provide energy for cellular metabolism (or, potentially, be converted to glucose through gluconeogenesis).
Metabolism:
Before glycerol can enter the pathway of glycolysis or gluconeogenesis (depending on physiological conditions), it must be converted to the intermediate glyceraldehyde 3-phosphate. In the liver, glycerol is first phosphorylated by glycerol kinase to glycerol 3-phosphate, which is then oxidized by glycerol-3-phosphate dehydrogenase to dihydroxyacetone phosphate and isomerized by triosephosphate isomerase to glyceraldehyde 3-phosphate. The enzyme glycerol kinase is present mainly in the liver and kidneys, but also in other body tissues, including muscle and brain. In adipose tissue, glycerol 3-phosphate is obtained from dihydroxyacetone phosphate with the enzyme glycerol-3-phosphate dehydrogenase.
Metabolism:
Glycerol has very low toxicity when ingested; its oral LD50 is 12600 mg/kg for rats and 8700 mg/kg for mice. It does not appear to cause toxicity when inhaled, although changes in cell maturity occurred in small sections of lung in animals under the highest dose measured. A sub-chronic 90-day nose-only inhalation study in Sprague–Dawley (SD) rats exposed to 0.03, 0.16 and 0.66 mg/L glycerin (per liter of air) for 6-hour continuous sessions revealed no treatment-related toxicity other than minimal metaplasia of the epithelium lining at the base of the epiglottis in rats exposed to 0.66 mg/L glycerin.
Historical cases of contamination with diethylene glycol:
On 4 May 2007, the FDA advised all U.S. makers of medicines to test all batches of glycerol for diethylene glycol contamination. This followed an occurrence of hundreds of fatal poisonings in Panama resulting from a falsified import customs declaration by Panamanian import/export firm Aduanas Javier de Gracia Express, S. A. The cheaper diethylene glycol was relabeled as the more expensive glycerol. Between 1990 and 1998, incidents of DEG poisoning reportedly occurred in Argentina, Bangladesh, India, and Nigeria, and resulted in hundreds of deaths. In 1937, more than one hundred people died in the United States after ingesting DEG-contaminated elixir sulfanilamide, a drug used to treat infections.
Etymology:
The origin of the gly- and glu- prefixes for glycols and sugars is Ancient Greek γλυκύς (glukus), which means 'sweet'.
Properties:
Table of thermal and physical properties of saturated liquid glycerin: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yamamoto's reciprocity law**
Yamamoto's reciprocity law:
In mathematics, Yamamoto's reciprocity law is a reciprocity law related to class numbers of quadratic number fields, introduced by Yamamoto (1986). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Restorative neurology**
Restorative neurology:
Restorative neurology is a branch of neurology dedicated to improving functions of the impaired nervous system through selective structural or functional modification of abnormal neurocontrol, according to underlying mechanisms and clinically unrecognized residual functions. When impaired, the body naturally reconstructs new neurological pathways and redirects activity. The field of restorative neurology works to accentuate these new pathways and primarily focuses on the theory of the plasticity of an impaired nervous system. Its main goal is to return a disordered nervous system to a state of normal function. Treatment strategies are used to augment, rather than fully replace, the remaining performance of surviving neural structures and to improve the potential of motor neuron function. This rehabilitation of motor neurons gives patients a therapeutic approach to recovery, as opposed to physical structural reconstruction. It is applied in a wide range of disorders of the nervous system, including upper motor neuron dysfunctions like spinal cord injury, cerebral palsy, multiple sclerosis and acquired brain injury including stroke, and neuromuscular diseases, as well as for control of pain and spasticity. Instead of applying a reconstructive neurobiological approach, i.e. structural modifications, restorative neurology relies on improving residual function. While subspecialties like neurosurgery and pharmacology exist and are useful in diagnosing and treating conditions of the nervous system, restorative neurology takes a pathophysiological approach: instead of relying heavily on neurochemistry or a purely anatomical discipline, it encompasses many fields and blends them together.
History:
William James is credited with the idea of neuroplasticity, based on the ideas in his two-volume 1890 book, The Principles of Psychology. Although it was not referred to as neuroplasticity at the time, his concepts were clear. He was the first to recognize the brain as malleable; however, his ideas were not widely accepted until the 1970s. Scientists had previously thought that the adult human brain was fixed, meaning that it was unable to generate new cells and was essentially unchangeable. Children were thought to be the only group able to expand their knowledge and readily absorb new information.
History:
Several discoveries were made throughout the study of neuroplasticity. Eugenio Tanzi was responsible for the discovery of the neural articulations, known as synapses, and Ernesto Lugaro was later responsible for the association of neural plasticity with synaptic plasticity.
History:
Evidence of the brain activity described by William James did not emerge until tests on rhesus monkeys beginning in the 1920s. Karl Lashley worked with adult rhesus monkeys and found neurons to travel in different pathways in response to the same stimuli. This led him to believe that neural plasticity was possible, and that the brain of an adult rhesus monkey was able to incorporate change and remodel itself. Despite these discoveries, the idea remained largely unaccepted. Another study on rhesus monkeys, led by Michael Merzenich in 1970, researched sensory motor neurons in response to severed nerve endings in the monkeys' hands. It found that the brain was able to rewire itself so that the monkeys could process signals from other parts of the hand where they could still feel. The term "plasticity" was popularized by Livingston's work in 1966. He challenged the consensus that the brain only develops during a critical period in early childhood, showing that many areas of the brain continue to display plasticity through adulthood.
Transcranial direct-current stimulation:
Transcranial direct-current stimulation (tDCS) is a form of neurostimulation or neuromodulation. tDCS targets specific areas of the brain using extremely low levels of constant electrical current. The use of electrical currents to modify brain function dates back more than 200 years. Various scientific studies have shown that tDCS can improve memory, coordination, and problem solving. Researchers have also documented that tDCS has the potential to treat various other disorders such as depression, anxiety, and PTSD. Another parameter to take into account is the orientation of the electric field on the patient. The cathode is the negatively charged electrode, while the anode is the positively charged electrode. When the electricity is turned on, the current flows from the cathode to the anode, exciting the brain. tDCS effects depend on the duration and strength of the current; it has been shown that larger current densities result in larger and longer-lasting after-effects.
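Since the effects scale with current density and duration, a small sketch of the dose arithmetic may help. The 2 mA current and 20-minute duration echo the studies described below, while the 35 cm² sponge area is purely an assumed value for illustration:

```python
def tdcs_dose(current_ma: float, area_cm2: float, minutes: float):
    """Return current density (mA/cm^2) and total delivered charge (coulombs)."""
    density = current_ma / area_cm2                   # mA per cm^2 of electrode
    charge = (current_ma / 1000.0) * minutes * 60.0   # amperes * seconds = coulombs
    return density, charge

# 2 mA for 20 minutes through an assumed 5 x 7 cm (35 cm^2) saline-soaked sponge
density, charge = tdcs_dose(current_ma=2.0, area_cm2=35.0, minutes=20.0)
print(f"current density: {density:.3f} mA/cm^2, total charge: {charge:.1f} C")
```

Under these assumptions the montage delivers about 0.057 mA/cm² and 2.4 coulombs per session; comparing such figures across montages is one way the "strength and duration" parameters mentioned above are made concrete.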
Use:
Restorative neurology provides a new approach, combining neural assessments to determine how long natural functional recovery can take and to what extent clinical interventions can help such recovery. Although assessing the anatomy of an injured nervous system is difficult, this approach has made it possible to track changes and improvements occurring after neural injury. Restorative neurology's main goal is to take advantage of this anatomical and physiological approach for enhanced neurological recovery.
Use:
A study was conducted on a 37-year-old male with unilateral spastic cerebral palsy (USCP). USCP, the most common subtype, results in movement impairments on one side of the body, and there are a few therapies for this type of rehabilitation. The study participant was diagnosed with USCP at 18 months following a car accident. Along with robotic therapy, the researchers also used tDCS, applied over the motor map of the affected hand. For each therapy session, the participant received 20 minutes of anodal tDCS: the anodal (excitatory) sponge was placed over the location of the motor map of the affected hand, and the reference sponge was placed on the contralateral forehead. Both sponges were moistened with saline and held in place with a headband. By the end of the study it was confirmed that combined tDCS and robotic upper limb therapy safely improves upper limb function. This protocol was adapted from work on stroke rehabilitation, so it is not known whether the duration and dose of therapy are actually ideal for people with USCP. In this study in particular, the participant reported reaching maximum accuracy with the robots by the midpoint of the study; however, it is not known whether the effects of therapy would have persisted had the training been shorter. More research is needed to identify "stop signals" indicating that a participant has reached their improvement goal. In another study, eight adults with chronic incomplete cervical spinal cord injury (iCSCI), implying minimal finger motor function, participated. tDCS current was delivered through two saline-soaked surface sponge electrodes. To stimulate the primary motor cortex, the anode electrode was placed over C3 or C4, and the cathode electrode was placed over the contralateral supraorbital area. Results showed that a combination protocol of 20 minutes of 2 mA anodal tDCS over M1 with 60 minutes of high-intensity training using a robotic exoskeleton is safe for the treatment of impaired arm and hand function due to chronic incomplete spinal cord injury, and the study's report showed promise in improving arm and hand function with this therapy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Materials Science Citation Index**
Materials Science Citation Index:
The Materials Science Citation Index is a citation index established in 1992 by Thomson ISI (Thomson Reuters). Its overall focus is cited-reference searching of the notable and significant journal literature in materials science. The database covers the properties and behaviors of materials across the materials science discipline, encompassing applied physics, ceramics, composite materials, metals and metallurgy, polymer engineering, semiconductors, thin films, biomaterials, dental technology, and optics. It indexes relevant materials science information from over 6,000 scientific journals in the multidisciplinary ISI database. Author abstracts are searchable, and citation linking connects articles sharing one or more bibliographic references. The database also allows a researcher to use an appropriate (or related) article as a base to search forward in time and discover more recently published articles that cite it. The Materials Science Citation Index lists 625 high-impact journals and is accessible via the Science Citation Index Expanded collection of databases.
Editions:
Coverage of materials science is accomplished with the following editions: Materials Science, Ceramics; Materials Science, Characterization & Testing; Materials Science, Biomaterials; Materials Science, Coatings & Films; Materials Science, Composites; Materials Science, Paper & Wood; Materials Science, Multidisciplinary; Materials Science, Textiles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**BG Geminorum**
BG Geminorum:
BG Geminorum is an eclipsing binary star system in the constellation Gemini. It consists of a K0 supergiant with a more massive but unseen companion. The companion is likely to be either a black hole or class B star. Material from the K0 star is being transferred to an accretion disk surrounding the unidentified object. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flip–flop kinetics**
Flip–flop kinetics:
Flip–flop kinetics, or flip–flop pharmacokinetics, describes an atypical situation in pharmacokinetics in which a drug's rate of absorption (the rate at which it enters the bloodstream) is slower than its rate of elimination; that is, when the absorption rate constant (ka) is smaller than the elimination rate constant (ke).
Flip–flop kinetics:
These circumstances can occur with sustained-release formulations, depot injections, and some subcutaneous or intradermal injections. In the resulting slope of log plasma concentration (log Cp) versus time, the apparent ke is determined by the ka, and the apparent ke is smaller than when the drug is administered intravenously or by immediate-release formulation. Depot injections such as depot antipsychotics and long-acting injectable steroid hormone medications like estradiol valerate, testosterone enanthate, and medroxyprogesterone acetate are examples of drugs with flip–flop kinetics.The term "flip–flop" indicates that the downward slope more closely represents ka rather than ke.
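A minimal Python sketch of the standard one-compartment model with first-order absorption (the Bateman equation) illustrates the effect; the parameter values are illustrative assumptions, not data for any particular drug:

```python
import numpy as np

def bateman(t, dose, f, v, ka, ke):
    """One-compartment model with first-order absorption and elimination:
    Cp(t) = f*dose*ka / (v*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(24, 72, 200)  # sample only the terminal phase (hours)
cp = bateman(t, dose=100.0, f=1.0, v=10.0, ka=0.05, ke=0.5)  # ka << ke: flip-flop

# The terminal slope of ln(Cp) versus t approximates -ka, not -ke.
slope = np.polyfit(t, np.log(cp), 1)[0]
print(f"apparent terminal rate constant: {-slope:.3f} per hour (ka=0.05, ke=0.5)")
```

Running this prints an apparent terminal rate constant of about 0.05 per hour, i.e. the downward slope of the log-concentration curve reflects ka rather than ke, exactly the "flip–flop" described above.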
Flip–flop kinetics:
Flip–flop kinetics can create difficulties in the determination and interpretation of pharmacokinetic parameters if not recognized. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DB-Engines ranking**
DB-Engines ranking:
The DB-Engines Ranking ranks database management systems by popularity, covering over 380 systems. The ranking criteria include the number of search engine results for the system names, Google Trends, Stack Overflow discussions, job offers mentioning the systems, the number of profiles in professional networks such as LinkedIn, and mentions in social networks such as Twitter. The ranking is updated monthly and has been described and cited in various database-related articles. By grouping over specific database features, such as database model or type of license, regularly published statistics reveal historical trends which are used in strategic statements.
History:
The DB-Engines DBMS portal was created in 2012 and is maintained by the Austrian consulting company Solid IT.
Based on its ranking, DB-Engines grants a yearly award for the system that gained most in popularity within a year. The award winners are: 2013 - MongoDB; 2014 - MongoDB; 2015 - Oracle; 2016 - Microsoft SQL Server; 2017 - PostgreSQL; 2018 - PostgreSQL; 2019 - MySQL; 2020 - PostgreSQL; 2021 - Snowflake; 2022 - Snowflake.
Methodology:
The ranking comes from an average of the following parameters after normalization: number of mentions in search engine queries (Google, Bing, Yandex); frequency of searches (Google Trends); number of related questions and the number of interested users (Stack Overflow, DBA Stack Exchange); number of job postings (Indeed, Simply Hired); number of profiles in professional networks (LinkedIn, Upwork); and number of mentions in social networks (Twitter). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
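DB-Engines does not publish its exact formula, but a normalized average of this general kind might be sketched as follows; the parameter names, raw counts and normalization scheme here are assumptions for illustration only:

```python
def popularity_score(raw: dict, maxima: dict) -> float:
    """Average of per-parameter values, each normalized against the
    highest-scoring system for that parameter, scaled to 0-100."""
    normalized = [raw[p] / maxima[p] for p in raw]
    return 100.0 * sum(normalized) / len(normalized)

# Hypothetical raw counts for one system, and the per-parameter maxima
raw = {"search_results": 8.2e6, "google_trends": 55, "stack_overflow": 120_000,
       "job_postings": 9_500, "linkedin_profiles": 410_000, "twitter_mentions": 60_000}
maxima = {"search_results": 2.1e7, "google_trends": 100, "stack_overflow": 310_000,
          "job_postings": 24_000, "linkedin_profiles": 1_200_000, "twitter_mentions": 150_000}
print(f"score: {popularity_score(raw, maxima):.1f}")
```

Normalizing each parameter before averaging keeps any single metric (such as raw search-result counts, which are orders of magnitude larger than the others) from dominating the combined score.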
**UMTS Terrestrial Radio Access Network**
UMTS Terrestrial Radio Access Network:
UMTS Terrestrial Radio Access Network (UTRAN) is a collective term for the network and equipment that connects mobile handsets to the public telephone network or the Internet. It contains the base stations, which are called Node Bs, and the Radio Network Controllers (RNCs), which together make up the Universal Mobile Telecommunications System (UMTS) radio access network. This communications network, commonly referred to as 3G (for 3rd Generation Wireless Mobile Communication Technology), can carry many traffic types, from real-time circuit-switched to IP-based packet-switched. The UTRAN allows connectivity between the UE (user equipment) and the core network.
UMTS Terrestrial Radio Access Network:
The RNC provides control functionalities for one or more Node Bs. A Node B and an RNC can be the same device, although typical implementations have a separate RNC located in a central office serving multiple Node Bs. Despite the fact that they do not have to be physically separated, there is a logical interface between them known as the Iub. The RNC and its corresponding Node Bs are called the Radio Network Subsystem (RNS). There can be more than one RNS present in a UTRAN.
UMTS Terrestrial Radio Access Network:
There are four interfaces connecting the UTRAN internally or externally to other functional entities: Iu, Uu, Iub and Iur. The Iu interface is an external interface that connects the RNC to the Core Network (CN). The Uu is also external, connecting a Node B with the User Equipment (UE). The Iub is an internal interface connecting the RNC with a Node B. Finally, there is the Iur interface, which connects two RNCs with each other; it is usually internal but can, exceptionally, be an external interface in some network architectures. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
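For quick reference, the interface topology described above can be restated as a small lookup table; this is a plain restatement of the paragraph, not a 3GPP-defined data structure:

```python
# Interface topology of the UTRAN, restated from the text.
# Each entry: interface -> (endpoint A, endpoint B, scope)
UTRAN_INTERFACES = {
    "Iu":  ("RNC",    "Core Network",   "external"),
    "Uu":  ("Node B", "User Equipment", "external"),
    "Iub": ("RNC",    "Node B",         "internal"),
    "Iur": ("RNC",    "RNC",            "internal (exceptionally external)"),
}

for name, (a, b, scope) in UTRAN_INTERFACES.items():
    print(f"{name:>3}: {a} <-> {b} [{scope}]")
```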
**Treats!**
Treats!:
treats! (often referred to as Treats, Treats!, Treats Magazine or Treats! Magazine) is an American limited-edition erotica and fine arts magazine that is primarily available by subscription. The magazine, which debuted in 2011, is described as a quarterly although it was initially only published twice a year.
Content:
treats! describes itself as "a limited edition, fine art print & digital publication available only by subscription & sold at news-stands, book stores & worldwide." The literary content of the magazine has been described as "left-of-center" by Daily Beast. The magazine, which is based in Los Angeles, is described by USA Today, The Huffington Post, and The New York Times as an artistic erotica magazine. Adam Tschorn of the Los Angeles Times noted that his "copilot" felt that the magazine's nude photography was "virtually indistinguishable" from Playboy's despite the "fine arts quarterly" billing.
Publication history:
Steve Shaw, the magazine's publisher, has a background in celebrity glamour photography for Maxim, FHM, Playboy, and British Esquire. He states that he had become irritated with shooting restrictions such as "three quarters of one side of a boob... You can only show one inch down from the bum crack..." and with uncooperative subjects. Thus, he says, he created his own magazine with what Daily Beast described as "female full-frontal nudity, luxe-y aesthetic, and [an] underpinning of fashion-world credibility" that has gotten "influential tastemakers and industry icons" to take notice. According to Shaw, treats! was founded to present content that was too risqué for magazines such as Vogue, Elle and InStyle. Shaw's nickname for photos that could not be used because they pushed the borders too far was "treats", and he decided to use the nickname as the title for the magazine. The magazine presents images that have not been airbrushed or photographically retouched. Shaw's initial investment for his magazine was $600,000. He publishes the magazine independently out of editorial offices on La Brea Avenue in Los Angeles, with a staff of three people. Shaw served as editor-in-chief and publisher, Eric Roinestad was art director, Rebecca Black was director of photography, and the editor was Rob Hill, who had previously been editor of Hollywood Life magazine. Shaw and the magazine throw an annual Halloween "Trick or Treats" party.
Publication history:
The launch party for the magazine was held on February 24, 2011 at the James Goldstein residence as an Oscars-week party before the 83rd Academy Awards. Issue 1 of the magazine, which had no advertisements, debuted with a cover photo of models Irene Lambers and Cassy Gerasimova photographed by Tony Duran, described by Business Wire as "edgy, scintillating and elegant". Articles in the premiere issue include features on Jason Statham and Shepard Fairey. Five thousand copies were printed of the debut issue, and 10,000 of issue 2. Its launch was recognised with a "best new launch" award of 2011 by the Media Industry Newsletter (MIN). The magazine debuted at $20 per newsstand copy, $65 for an annual subscription and $15 for a download. With the fourth issue, the newsstand price changed to $30. In 2012, the magazine added an online gallery that sold prints of the magazine's content at prices ranging from $395 to $3,995, depending on size and framing. Emily Ratajkowski posed for several of the early issues. She states that her appearance on the March 2012 issue 3 cover is what brought her two unsolicited high-profile music video modeling roles (Robin Thicke, T.I., and Pharrell Williams' "Blurred Lines" and Maroon 5's "Love Somebody"). Thicke had seen the treats! magazine black-and-white cover and convinced director Diane Martel to cast her in the "Blurred Lines" music video. Within the first year of its launch, photographers including Brett Ratner were volunteering to shoot for the magazine. As of 2014, Duran, Mark Seliger, Ben Watts, Josh Ryan and Bob Carlos Clarke are among the photographers who have been featured in the magazine. The cover of the seventh (2014 Spring/Summer) issue, published in April 2014, featured Dylan Penn, daughter of Sean Penn and Robin Wright, nude, albeit with a Fendi bag strategically placed in front of her groin, with similar placements used in her interior pictorial. Penn was photographed by Duran. According to an E! Online report on March 5, 2014, Penn had declined a $150,000 offer to pose for the cover of Playboy. The eighth issue (featuring Lydia Hearst on the cover) was released in February 2015.
Legal issues:
In 2018, the owner and publisher, Steve Shaw, was sued in the Delaware Court of Chancery by Tyler and Cameron Winklevoss, who alleged that Shaw mismanaged $1.3 million they had invested in the magazine. In turn, Shaw alleged the Winklevoss twins used the magazine to "advance their sleazy agenda," and that no funds were mismanaged. The court's decision was the subject of an article on the defense of laches in The National Law Review on March 27, 2019.
Format:
The print editions are produced in oversized format on 70 lb. matte stock. The magazine is also available digitally in several formats such as on Zinio for iPad and as a mobile phone app as well as via the official website, a blog, and various social media websites. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Declension**
Declension:
In linguistics, declension (verb: to decline) is the changing of the form of a word, generally to express its syntactic function in the sentence, by way of some inflection. Declensions may apply to nouns, pronouns, adjectives, adverbs, and articles to indicate number (e.g. singular, dual, plural), case (e.g. nominative case, accusative case, genitive case, dative case), gender (e.g. masculine, neuter, feminine), and a number of other grammatical categories. Meanwhile, the inflectional change of verbs is called conjugation.
Declension:
Declension occurs in many of the world's languages. It is an important aspect of language families like Quechuan (i.e., languages native to the Andes), Indo-European (e.g. German, Lithuanian, Latvian, Slavic, Sanskrit, Latin, Ancient Greek, Modern Greek, Albanian, Classical Armenian and Modern Armenian and Kurdish), Bantu (e.g. Zulu, Kikuyu), Semitic (e.g. Modern Standard Arabic), Finno-Ugric (e.g. Hungarian, Finnish, Estonian), and Turkic (e.g. Turkish).
Declension:
Old English was an inflectional language, but largely abandoned inflectional changes as it evolved into Modern English. Though traditionally classified as synthetic, Modern English has moved towards an analytic language.
English-speaking perspective:
Unlike English, many languages use suffixes to specify subjects and objects and word cases in general. Inflected languages have a freer word order than modern English, an analytic language in which word order identifies the subject and object. As an example, even though both of the following sentences consist of the same words, the meaning is different:

"The dog chased a cat."
"A cat chased the dog."

Hypothetically speaking, suppose English were a language with a more complex declension system in which cases were formed by adding the suffixes: -no (for nominative singular), -ge (genitive), -da (dative), -ac (accusative), -lo (locative), -in (instrumental), -vo (vocative), -ab (ablative). The first sentence above could then be formed with any of the following word orders and would have the same meaning:

"The dogno chased a catac."
"A catac chased the dogno."
"Chased a catac the dogno."

As a more complex example, the sentence:

Mum, this little boy's dog was chasing a cat down our street!

becomes nonsensical in English if the words are rearranged (because there are no cases):

A cat was down our street chasing dog this little boy's mum!

But if English were a highly inflected language, like Latin or some Slavic languages such as Croatian, both sentences could mean the same thing. They would both contain five nouns in five different cases: mum – vocative (hey!), dog – nominative (who?), boy – genitive (of whom?), cat – accusative (whom?), street – locative (where?); the adjective little would be in the same case as the noun it modifies (boy), and the case of the determiner our would agree with the case of the noun it determines (street). Using the case suffixes invented for this example, the original sentence would read:

Mumvo, thisge littlege boyge dogno was chasing a catac down ourlo streetlo!

And like other inflected languages, the sentence rearranged in the following ways would mean virtually the same thing, but with different expressiveness:

A catac was down ourlo streetlo chasing dogno thisge littlege boyge, mumvo!
Mumvo, down streetlo ourlo a catac was chasing thisge littlege boyge dogno!

Instead of the locative, the instrumental form of "down our street" could also be used:

Mumvo, thisge littlege boyge dogno ourin streetin was chasing a catac!
A catac was, mumvo, ourin streetin chasing dogno thisge littlege boyge!
Ourin streetin a catac was chasing dogno thisge littlege boyge, mumvo!

Different word orders preserving the original meaning are possible in an inflected language, while modern English relies on word order for meaning, with a little flexibility. This is one of the advantages of an inflected language. The English sentences above, when read without the made-up case suffixes, are confusing.
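The invented suffix system lends itself to a toy demonstration: because each word carries its own case marking, every permutation of the tagged words conveys the same grammatical relations. A small sketch, illustrative only, using the made-up suffixes from this section rather than any real language:

```python
from itertools import permutations

# The invented case suffixes from the example above
SUFFIXES = {"nominative": "no", "genitive": "ge", "dative": "da",
            "accusative": "ac", "locative": "lo", "instrumental": "in",
            "vocative": "vo", "ablative": "ab"}

def decline(word: str, case: str) -> str:
    """Attach the invented case suffix to a word."""
    return word + SUFFIXES[case]

tagged = [decline("dog", "nominative"), "chased", decline("cat", "accusative")]
for order in permutations(tagged):
    print(" ".join(order))  # every ordering carries the same case information
```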
English-speaking perspective:
These contrived examples are relatively simple, whereas actual inflected languages have a far more complicated set of declensions, where the suffixes (or prefixes, or infixes) change depending on the gender of the noun, the quantity of the noun, and other possible factors. This complexity and the possible lengthening of words is one of the disadvantages of inflected languages. Notably, many of these languages lack articles. There may also be irregular nouns where the declensions are unique for each word (like irregular verbs with conjugation). In inflected languages, other parts of speech such as numerals, demonstratives, adjectives, and articles are also declined.
History:
It is agreed that Ancient Greeks had a "vague" idea of the forms of a noun in their language. A fragment of Anacreon seems to confirm this idea. Nevertheless, it cannot be concluded that the Ancient Greeks actually knew what the cases were. The Stoics developed many basic notions that today are the rudiments of linguistics. The idea of grammatical cases is also traced back to the Stoics, but it is still not completely clear what the Stoics exactly meant with their notion of cases.
Modern English:
In Modern English, the system of declensions is so simple compared to some other languages that the term declension is rarely used.
Nouns
Most nouns in English have distinct singular and plural forms. Nouns and most noun phrases can form a possessive construction. Plurality is most commonly shown by the ending -s (or -es), whereas possession is always shown by the enclitic -'s or, for plural forms ending in s, by just an apostrophe.
Consider, for example, the forms of the noun girl: singular girl, plural girls, singular possessive girl's, and plural possessive girls'. Most speakers pronounce all forms other than the singular plain form (girl) exactly the same.
By contrast, a few irregular nouns (like man/men, with the forms man, men, man's, men's) are slightly more complex in their forms. In this example, all four forms are pronounced distinctly.
Modern English:
For nouns, in general, gender is not declined in Modern English. There are isolated situations where certain nouns may be modified to reflect gender, though not in a systematic fashion. Loan words from other languages, particularly Latin and the Romance languages, often preserve their gender-specific forms in English, e.g. alumnus (masculine singular) and alumna (feminine singular). Similarly, names borrowed from other languages show comparable distinctions: Andrew and Andrea, Paul and Paula, etc. Additionally, suffixes such as -ess, -ette, and -er are sometimes applied to create overtly gendered versions of nouns, with marking for feminine being much more common than marking for masculine. Many nouns can actually function as members of two genders or even all three, and the gender classes of English nouns are usually determined by their agreement with pronouns, rather than marking on the nouns themselves.
Modern English:
There can be other derivations from nouns that are not considered declensions. For example, the proper noun Britain has the associated descriptive adjective British and the demonym Briton. Though these words are clearly related, and are generally considered cognates, they are not specifically treated as forms of the same word, and thus are not declensions.
Modern English:
Pronouns
Pronouns in English have more complex declensions. For example, the first person singular "I" has the subjective form I, the objective form me, and the possessive forms my and mine. Whereas nouns do not distinguish between the subjective (nominative) and objective (oblique) cases, some pronouns do; that is, they decline to reflect their relationship to a verb or preposition, or case. Consider the difference between he (subjective) and him (objective), as in "He saw it" and "It saw him"; similarly, consider who, which is subjective, and the objective whom (although it is increasingly common to use who for both).
Modern English:
The one situation where gender is still clearly part of the English language is in the pronouns for the third person singular: he (masculine), she (feminine), it (neuter), and singular they. The distinguishing of neuter for persons and non-persons is peculiar to English; this has existed since the 14th century. However, the use of singular they is often restricted to specific contexts, depending on the dialect or the speaker. It is most typically used to refer to a single person of unknown gender (e.g. "someone left their jacket behind") or a hypothetical person where gender is insignificant (e.g. "If someone wants to, then they should"). Its use has expanded in recent years due to increasing social recognition of persons who do not identify themselves as male or female (see gender-nonbinary). The singular they still uses plural verb forms, reflecting its origins.
Modern English:
Adjectives and adverbs
Some English adjectives and adverbs are declined for degree of comparison. The unmarked form is the positive form, such as quick. Comparative forms are formed with the ending -er (quicker), while superlative forms are formed with -est (quickest). Some are uncomparable; the remainder are usually periphrastic constructions with more (more beautiful) and most (most modestly). See degree of comparison for more.
Modern English:
Adjectives are not declined for case in Modern English (though they were in Old English), nor for number or gender.
Determiners
The demonstrative determiners this and that are declined for number, as these and those. The article is never regarded as declined in Modern English, although formally the words that and possibly she correspond to forms of the predecessor of the (sē m., þæt n., sēo f.) as it was declined in Old English.
Latin:
Just as verbs in Latin are conjugated to indicate grammatical information, Latin nouns and adjectives that modify them are declined to signal their roles in sentences. There are five important cases for Latin nouns: nominative, genitive, dative, accusative, and ablative. Since the vocative case usually takes the same form as the nominative, it is seldom spelt out in grammar books. Yet another case, the locative, is limited to a small number of words.
Latin:
The usual basic functions of these cases are as follows: Nominative case indicates the subject.
Genitive case indicates possession and can be translated with 'of'.
Dative case marks the indirect object and can be translated with 'to' or 'for'.
Accusative case marks the direct object.
Ablative case is used to modify verbs and can be translated as 'by', 'with', 'from', etc.
Vocative case is used to address a person or thing. The genitive, dative, accusative, and ablative also have important functions to indicate the object of a preposition.
Given below is the declension paradigm of Latin puer 'boy' and puella 'girl'. In the singular: nominative puer, puella; genitive puerī, puellae; dative puerō, puellae; accusative puerum, puellam; ablative puerō, puellā. In the plural: nominative puerī, puellae; genitive puerōrum, puellārum; dative puerīs, puellīs; accusative puerōs, puellās; ablative puerīs, puellīs. From these examples we can see how the cases work:
liber puerī → the book of the boy (puerī boy = genitive)
puer puellae rosam dat → the boy gives the girl a rose (puer boy = nominative; puellae girl = dative; rosam rose = accusative; dat gives = third person singular present)
Sanskrit:
Sanskrit, another Indo-European language, has eight cases: nominative, vocative, accusative, genitive, dative, ablative, locative and instrumental. Some do not count the vocative as a separate case, despite its distinctive singular ending, but consider it a different use of the nominative. Sanskrit grammatical cases have been analyzed extensively. The grammarian Pāṇini identified six semantic roles or karaka, which correspond closely to the eight cases: agent (kartṛ, related to the nominative); patient (karman, related to the accusative); means (karaṇa, related to the instrumental); recipient (sampradāna, related to the dative); source (apādāna, related to the ablative); relation (sambandha, related to the genitive); locus (adhikaraṇa, related to the locative); address (sambodhana, related to the vocative). For example, consider the sentence vṛkṣāt parṇaṁ bhūmāu patati ('the leaf falls from the tree to the ground'). Here leaf is the agent, tree is the source, and ground is the locus. The endings -aṁ, -at, and -āu mark the cases associated with these meanings.
Declension in specific languages:
Albanian declension; Arabic ʾIʿrab; Basque declension; Hindi declension; Greek and Latin: Ancient Greek and Latin first declension, Ancient Greek and Latin second declension, Ancient Greek and Latin third declension, Greek declension, Latin declension; Celtic languages: Irish declension; Germanic languages: Dutch declension system, German declension, Gothic declension, Icelandic declension, Middle English declension; Baltic languages: Latvian declension, Lithuanian declension; Slavic languages: Bosnian, Croatian, Montenegrin and Serbian declension, Czech declension, Polish declension, Russian declension, Slovak declension, Slovene declension, Ukrainian declension; Uralic languages: Finnish language noun cases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Foundation Fieldbus H1**
Foundation Fieldbus H1:
Foundation Fieldbus H1 is one of the FOUNDATION Fieldbus protocol versions. Foundation H1 (31.25 kbit/s) is a bi-directional communications protocol used for communications among field devices and to the control system. It utilizes either twisted-pair or fiber media to communicate between multiple nodes (devices) and the controller. The controller requires only one communication point to communicate with up to 32 nodes; this is a significant improvement over the standard 4–20 mA communication method, which requires a separate connection point for each communication device on the controller system.
Foundation Fieldbus H1:
The Foundation Fieldbus H1 has support for Intrinsically Safe Wiring. Unlike other protocols, FOUNDATION H1 provides explicit synchronization of control and communication for precisely periodic (isochronous) communication and execution of control functions with minimized dead time and jitter. It synchronizes clocks in fieldbus devices for support of Function Block scheduling and alarm time-stamping at the point of detection.
The original concept was to connect as many field devices as possible to one controller field connection, limited only by signal strength.
Foundation HSE:
Foundation HSE is a control network technology specifically designed for process automation to connect higher-level devices such as controllers and remote-I/O, high-density data generators etc., and for horizontal integration of subsystems.
Foundation HSE:
Foundation HSE is based on unmodified IEEE 802.3 Ethernet and, therefore, is compatible with standard Ethernet equipment. FOUNDATION HSE provides complete "DCS style" redundancy with redundant network switches, redundant devices, and redundant communication ports ensuring unsurpassed availability. Foundation HSE is also based on standard IP, enabling it to coexist with other devices and ensuring compatibility with standard tools. At the highest level, Foundation HSE includes a standard application layer that provides interoperability between devices beyond the mere coexistence provided by Ethernet and TCP/IP. Foundation HSE communication is schedule-driven to minimize dead-time and jitter with support for peer-to-peer communication directly between devices. Again, a rigorous interoperability testing program ensures quality connectivity.
Foundation HSE:
The hub-and-spoke tree topology of Ethernet makes it very easy to add and remove devices without upsetting the operating network. Because Foundation HSE is based on unmodified Ethernet, standard Ethernet tools can be used for installation qualification, testing, and troubleshooting. These tools speed up the resolution of communication problems. Foundation HSE is supported by better troubleshooting tools not available for RS485 and coax. Since Foundation HSE is based on UDP and TCP, standard network management tools employing SNMP, RMON, etc., can be used. Similarly, familiar IP addressing is used including support for DHCP.
Parameters:
The communication line can stretch 1900 meters without repeaters or 9500 meters with up to four allowed repeaters.
Communication Methods:
The communication methods supported are:
Client/Server: The Client/Server Virtual Communications Relationship (VCR) Type is used for queued, unscheduled, user-initiated, one-to-one communication between devices on the fieldbus. Queued means that messages are sent and received in the order submitted for transmission, according to their priority, without overwriting previous messages.
Publisher/Subscriber: The Publisher/Subscriber VCR Type is used for buffered, one-to-many communications. Buffered means that only the latest version of the data is maintained within the network. New data completely overwrites previous data.
Report Distribution: The Report Distribution VCR Type is used for queued, unscheduled, user-initiated, one-to-many communications.
Use:
This protocol is primarily used for analog and discrete process control devices. The primary advantage is configuration via the function block concept.
Power Supply:
A big advantage of Foundation Fieldbus is that it allows power to be transferred over the communication bus to the controlled devices; this requires a Foundation Fieldbus power supply. Power supplies are normally redundant, rated at 32 V DC and 500 mA, and are mostly installed in the marshalling cabinet or system cabinet in the control room.
Termination:
Every field bus segment needs exactly two terminators to operate properly. The terminators are designed to be the equivalent of a 1 µF capacitor and a 100 Ω resistor in series. The terminators serve several purposes including shunting the Fieldbus current (device communication) and protecting against electrical reflections.
Termination:
The primary function of terminators is to act as a current shunt for the control network. Fieldbus communication works by the field device modulating its current draw. When a device needs to transmit data, it acts as a current sink: it draws less current to represent a high signal (1, one) and more current to represent a low signal (0, zero). The modulating current of FF devices is between 15 mA and 20 mA peak-to-peak, and the resulting modulating voltage on the bus is between 0.75 V and 1 V peak-to-peak. These figures follow from Ohm's law: 15 mA × 50 Ω = 0.75 V and 20 mA × 50 Ω = 1 V, where the 50 Ω is the equivalent of the two parallel 100 Ω resistors from the FF terminators. Because all devices use the same cable, only one device can transmit a message at any given time. Without the proper number of terminators, the signal level will be out of specification and can disrupt the network.
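The arithmetic above can be verified in a few lines; the values are exactly those quoted in the paragraph:

```python
def parallel(r1_ohm: float, r2_ohm: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1_ohm * r2_ohm / (r1_ohm + r2_ohm)

r_shunt = parallel(100.0, 100.0)  # the two segment terminators -> 50 ohms

for i_pp in (0.015, 0.020):       # modulating current, 15 to 20 mA peak-to-peak
    v_pp = i_pp * r_shunt         # Ohm's law: V = I * R
    print(f"{i_pp * 1e3:.0f} mA p-p -> {v_pp:.2f} V p-p on the bus")
```

This also makes clear why exactly two terminators matter: with only one terminator the shunt is 100 Ω and the signal doubles, while with three it drops to about 33 Ω and the signal falls below specification.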
Termination:
Another function of the terminator is to reduce the impact of electrical reflections.
Intrinsically Safe Fieldbus:
There are several Intrinsically Safe Fieldbus technologies in the market. The most commonly used are:
Entity Barrier Concept
This design takes the concept of the normal field barriers that have been used successfully in the analog signal world (4–20 mA). These barriers use an infallible resistor (wire-wound), Zener diodes and a fuse, and require a good intrinsically safe ground. While this barrier limits energy sufficiently for Zone 0/1, all Gas Groups (Class I Div 1, all Gas Groups), it only provides 80 mA for the Fieldbus segment.
Intrinsically Safe Fieldbus:
This could optimistically power only four Fieldbus devices, which typically draw 15–26 mA each. The Entity barrier concept is safe, but its low power limitations and engineering requirements effectively eliminate many of the benefits of using bus technology.
FISCO
The Fieldbus Intrinsically Safe Concept (FISCO) was first developed by PTB (Physikalisch-Technische Bundesanstalt, the national metrological institute of Germany) as a method to provide higher power to a Fieldbus segment in hazardous areas. The FISCO concept considers the entire circuit of the Fieldbus segment.
Intrinsically Safe Fieldbus:
The maximum total cable length in a FISCO system is 1 km in Gas Groups A and B (IIC) and 1.9 km in Gas Groups C and D (IIA and IIB), while the maximum allowed spur length (from the segment junction box/protector) is 60 metres for Gas Groups A through D (IIC, IIB and IIA). Additional constraints are also placed on the power conditioners; for instance, load-sharing redundant power conditioners are not allowed in a FISCO power supply.
Intrinsically Safe Fieldbus:
Certifying devices to a standard before implementation allows them to be integrated into systems without the engineering requirements necessitated by the Entity approach. This allows FISCO power supplies to deliver more power (and allow more devices per segment) than the Entity barrier solution. The bottleneck of this solution is that it requires each part of the system, including devices, cables and power conditioners, to be FISCO compliant, and that FISCO design and installation rules be strictly followed.
Intrinsically Safe Fieldbus:
While it does provide more power than the Entity barrier approach, the system can still only support four to five devices per segment, as the trunk current is limited to 115 mA.
Intrinsically Safe Fieldbus:
HPT with Field Barriers
A more recent enhancement for intrinsically safe applications is the High Power Trunk (HPT) with field-based field barriers (FBs), which limits power at the spur rather than the trunk. This method significantly changes the equation for end users of Fieldbus in hazardous settings: it increases the amount of available power, and therefore the number of connected devices on a segment, and lets end users maximize the length of their trunk cables without the restrictions of the FISCO/Entity barrier concepts.
Intrinsically Safe Fieldbus:
While the HPT model does provide some significant improvements (500 mA at the Fieldbus segment), it is not without its downsides. The field barrier is in essence a field-based isolated power conditioner, so even though the segment can be powered by load-sharing redundant power conditioners at the host, the practical MTBF (Mean Time Between Failures) is still that of a single power conditioner, since most field barriers are not redundant.
Intrinsically Safe Fieldbus:
Intrinsically Safe-High Power Trunk/High Power Intrinsically Safe Trunk (HPIST)
The High Power IS Trunk (HPIST) technique provides an enhanced level of safety and simplicity in installation, along with the ability to use it for all devices (FISCO and Entity) and hazardous Zones and Divisions. It delivers approximately 350 mA of IS power to segments located in hazardous areas.
Intrinsically Safe Fieldbus:
This is achieved by utilizing a split-architecture design that puts part of the barrier in an isolator card located in the safe area (power supply rack) and the other part in each of the spurs of the field-mounted device couplers. The barrier in the isolator allows 350 mA to be run through the segment, up to the spur/junction box, through the trunk cable. Since infallible resistors are used, devices from Zone 0/1 or 2 can be connected directly. Having 350 mA available allows users to power up to 16 Fieldbus devices (20 mA each) at 500 m while retaining intrinsic safety.
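As a rough check of the device-count arithmetic across the schemes described in this section (a sketch; the 22 mA FISCO per-device draw is an assumed midpoint of the 15–26 mA range quoted earlier):

```python
def max_devices(available_ma: float, per_device_ma: float) -> int:
    """How many devices the available segment current can power."""
    return int(available_ma // per_device_ma)

print(max_devices(350.0, 20.0))  # HPIST: 17 by this arithmetic; the text cites 16,
                                 # which leaves some current margin on the segment
print(max_devices(115.0, 22.0))  # FISCO trunk limit: about 5 devices
print(max_devices(80.0, 20.0))   # Entity barrier: 4 devices
```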
Development:
This protocol is developed, enhanced and supported by the FieldComm Group. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lipopolysaccharide 3-alpha-galactosyltransferase**
Lipopolysaccharide 3-alpha-galactosyltransferase:
In enzymology, a lipopolysaccharide 3-alpha-galactosyltransferase (EC 2.4.1.44) is an enzyme that catalyzes the chemical reaction

UDP-galactose + lipopolysaccharide ⇌ UDP + 3-alpha-D-galactosyl-[lipopolysaccharide glucose]

Thus, the two substrates of this enzyme are UDP-galactose and lipopolysaccharide, whereas its two products are UDP and 3-alpha-D-galactosyl-[lipopolysaccharide glucose].
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:lipopolysaccharide 3-alpha-D-galactosyltransferase. Other names in common use include UDP-galactose:lipopolysaccharide alpha,3-galactosyltransferase, UDP-galactose:polysaccharide galactosyltransferase, uridine diphosphate galactose:lipopolysaccharide alpha-3-galactosyltransferase, and uridine diphosphogalactose-lipopolysaccharide alpha,3-galactosyltransferase. This enzyme participates in lipopolysaccharide biosynthesis and glycan structures - biosynthesis 2.
Structural studies:
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1GA8 and 1SS9. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stone massage**
Stone massage:
Stone massage is a form of alternative medicine massage therapy and bodywork involving the placement of either heated or cooled stones on the body for the purpose of pain relief, relaxation and therapy. There are many variations and techniques used in the application of stone massage therapy, deriving from a variety of traditional practices. Stone massages are primarily used to alleviate physical pain; however, they are also used in practice to promote emotional and spiritual wellbeing.
Origin and history:
Stone massage and similar practices involving the placement of objects of different temperatures have been dated back to ancient civilisations as a form of healing and therapy. Cultures including Native American, Hawaiian and many South Pacific nations have practiced similar rituals and techniques to provide physical and spiritual ease. The traditional Hawaiian healing massage 'Lomilomi' involves the use of warmed Lomi stones to increase blood flow in areas of the body and provide healing. Similar practices in China, dating back 2,000 years, involved the use of heated stones to stimulate improved internal organ function. Such traditional practices have evolved and influenced the application of modern stone massage practices.
Origin and history:
The re-emergence of such stone massage techniques came in 1993, when Mary Nelson developed a form of massage utilising hot and cold stones referred to as LaStone Therapy. This form of massage quickly rose to popularity, becoming a multi-million-dollar industry, and has a strong focus on spiritual healing centering around chakras and energy channelling. Many massage therapy parlours providing stone massages offer LaStone Therapy due to its success amongst clients and the established, reputable name for the process. These modern forms of stone massage combine techniques utilised in Swedish massage and deep tissue massage.
Technique:
Volcanic stones, typically basalt, are placed in hot water, typically at a temperature ranging between 40 and 60 degrees Celsius (104–140 degrees Fahrenheit), or placed in chilled water to achieve a chilled stone of −5 to 25 degrees Celsius (23–77 degrees Fahrenheit); the use of a calibrated thermometer is common and recommended to reach the ideal temperatures required. To maintain adequate external skin hydration, massage oils or lotions are commonly applied to the client's skin. Sufficient internal hydration is also essential for the treatment, due to the increase in body temperature to be experienced, and can be achieved by the client consuming water before, during and after a session. A sheet or towel is placed on the client's skin to provide a barrier between the hot or cold stones and bare skin, preventing potential burns or discomfort. The stones are then placed on the client according to the areas of concern or needing treatment, including the back, legs, arms or feet. Additionally, the stones may be held by the massage therapist and massaged into the muscle, acting as an extension of the therapist's hands. The temperature of the stones is consistently monitored to ensure they remain at a safe and comfortable temperature that will produce the most effective results and an enjoyable experience for the client. Controlling the heat of the water in which the stones are warmed or cooled is essential to produce stones at a suitable temperature for the treatment. Some therapists may also perform a Swedish massage prior to the application of stones in order to warm and loosen up the muscles. The duration of stone massages typically ranges from 60 to 90 minutes, depending on the technique used and the needs of the client.
Training:
An adequate and successful stone massage must be conducted by a professionally trained massage therapist. Because of the potential dangers and harms associated with the practice, extensive and appropriate training is necessary to provide a safe, enjoyable experience for the client. It is necessary to learn the correct adaptation of the practice to the needs of the client, and the methods for incorporating stone massage into other massage practices. Certification from professionally recognised massage associations is mandatory to conduct a stone massage in a professional setting. Internationally, to maintain professional standards, stone massage therapists can obtain Continuing Professional Education (CPE) points through training programmes and courses. Given the potential dangers and risks associated with the practice, a range of liability insurance options is available for therapists conducting stone massages. Background knowledge of, and training in, the principles of hydrotherapy are highly recommended for safely carrying out stone massage therapy. Training focuses significantly on the adaptive approach necessary to this treatment, emphasising the importance of understanding the needs and current condition of the client. Mary Nelson, creator of the modern stone massage through LaStone Therapy, recognised the need for high-quality, extensive training and instruction in the field of stone massage. Nelson developed a group of trained therapists to teach the practice internationally and produced informational videos highlighting effective stone massage methods.
Effectiveness and benefits:
A primary benefit of the practice is the stimulation of blood flow in the circulatory system through the heat and movement of the stones. Stone massages also ease muscle pain, tension and spasms by reducing inflammation and relaxing muscles, with the combination of heat and movement allowing access to deeper tissues. This method of massage is also commonly recommended for physically sensitive individuals, as it allows a deeper tissue massage without excessive hand treatment from the therapist. Many also engage in the practice for the relaxation and mental benefits of a stone massage. The environment and physical effects of a stone massage help induce a state of deep relaxation for many participants, which often improves mental clarity and reduces stress. Alongside this, studies have shown that stone massages and related therapies can improve sleep quality. A study conducted at the Urmia University of Medical Science found that basalt hot stone massage therapy "…can successfully contribute in reducing sleep disturbances, improving quality of sleep and enhancing comfort level…". The study applied five of the specialised stones to the areas of the first through fifth chakras to stimulate an enhancement of sleep quality. Benefits have also been found for the massage therapist conducting a hot stone massage. As the stones carry out most of the contact work with the client, the stresses and strains experienced by the therapist in the hands, wrists and upper body are reduced, and the occurrence of repetitive stress injuries in the fingers and hand is commonly diminished.
Risks:
There are a number of dangers and risks associated with stone massage therapy, particularly because of the high temperatures applied to the skin. Improper heating of the stones can lead to a higher potential for burns caused by unsafe and uneven stone temperatures. There are also certain risks involved in receiving a stone massage for individuals with specific medical conditions. Conditions including diabetes, epilepsy, skin conditions, heart disease and neuropathy are contraindications for stone massage as a treatment and carry the potential for harm. Clients with recent or shallow skin abrasions such as cuts, burns and bruising, or with varicose veins, are advised to avoid stone massages, as the therapy has the potential to aggravate the injury or cause greater harm such as tissue damage. Such conditions or minor injuries also raise the chance of infection from bacterial exposure from the stones, massage oils or the therapist. Impairment from drug and alcohol use largely impedes the safety and effectiveness of stone massage therapy. Such substances often limit an individual's judgement and impulse control, both of which are necessary to actively engage in a stone massage. Participation and feedback from the client are essential in a stone massage because of the risk and uncertainty of reaction to the temperature of the stones, a reaction that could be largely desensitised by the effects of alcohol or drugs.
Misconceptions of the Practice:
There are a number of common misconceptions surrounding the practice of stone massage, largely derived from graphic and media representations. Mainstream depictions of the massage often show the stones placed directly on the client's skin, whilst real-life application rarely involves this method, instead using a cloth or towel to separate the stones from direct contact with the skin. Many individuals avoid the practice out of fear of burns or pain arising from this widespread misconception. It is also commonly misconceived that the massage is a set, standard routine applied to every client; professional treatments vary depending on the needs of the client and the skills of the therapist. Depictions of the practice commonly show the stones arranged in an orderly pattern down the centre of the client's back, tracing the spine. In practice, the specialist stones are placed all around the body, commonly avoiding the spine itself and focusing on the client's areas of concern.
Supply Companies and Industries:
As the stone massage used equipment and techniques that were new to the massage industry and unfamiliar to practitioners, the rise in demand for the practice created a need for specialised equipment and resources. A number of stone supply companies were founded to meet the need for greater supplies of the stones required for the practice. Firms such as Desert Stone People, RubRocks and Nature's Stones Inc became prominent suppliers and distributors of the stones to spas and parlours, predominantly across the United States. The development of equipment suited to heating and cooling the stones was also essential in establishing adequate and safe applications of the process, with products such as the 'Spa-Pro Massage Stone Heater' being designed and distributed.
Notable Media Representation:
As the stone massage increased in popularity internationally as a form of therapy and mainstream massage, the practice was incorporated into media including films, TV shows, advertisements and books. The 2006 movie Big Momma's House 2 features a notable scene in which the characters receive a hot stone massage. The animated film The Lego Movie 2: The Second Part also features a scene including a hot stone massage.
**Spectral concentration problem**
Spectral concentration problem:
The spectral concentration problem in Fourier analysis refers to finding a time sequence of a given length whose discrete Fourier transform is maximally localized on a given frequency interval, as measured by the spectral concentration.
Spectral concentration:
The discrete-time Fourier transform (DTFT) $U(f)$ of a finite series $w_t$, $t = 1, 2, 3, 4, \ldots, T$, is defined as $$U(f) = \sum_{t=1}^{T} w_t \, e^{-2\pi i f t}.$$
In the following, the sampling interval will be taken as Δt = 1, and hence the frequency interval as f ∈ [-½,½]. U(f) is a periodic function with a period 1.
For a given frequency W such that 0 < W < ½, the spectral concentration λ(T,W) of U(f) on the interval [-W,W] is defined as the ratio of the power of U(f) contained in the frequency band [-W,W] to the power of U(f) contained in the entire frequency band [-½,½]. That is, $$\lambda(T,W) = \frac{\int_{-W}^{W} \|U(f)\|^2 \, df}{\int_{-1/2}^{1/2} \|U(f)\|^2 \, df}.$$
It can be shown that U(f) has only isolated zeros and hence 0<λ(T,W)<1 (see [1]). Thus, the spectral concentration is strictly less than one, and there is no finite sequence wt for which the DTFT can be confined to a band [-W,W] and made to vanish outside this band.
Statement of the problem:
Among all sequences {w_t} for a given T and W, is there a sequence for which the spectral concentration is maximum? In other words, is there a sequence for which the sidelobe energy outside a frequency band [-W,W] is minimum? The answer is yes; such a sequence indeed exists and can be found by optimizing λ(T,W). Thus, maximising the power $\int_{-W}^{W} \|U(f)\|^2 \, df$ subject to the constraint that the total power is fixed, say $\int_{-1/2}^{1/2} \|U(f)\|^2 \, df = 1$, leads to the following equation satisfied by the optimal sequence $w_t$: $$\sum_{t'=1}^{T} \frac{\sin 2\pi W(t-t')}{\pi(t-t')} \, w_{t'} = \lambda \, w_t.$$
Statement of the problem:
This is an eigenvalue equation for the symmetric matrix M with elements $$M_{t,t'} = \frac{\sin 2\pi W(t-t')}{\pi(t-t')},$$ whose diagonal entries take the limiting value $M_{t,t} = 2W$.
Statement of the problem:
It can be shown that this matrix is positive-definite, hence all the eigenvalues of this matrix lie between 0 and 1. The largest eigenvalue of the above equation corresponds to the largest possible spectral concentration; the corresponding eigenvector is the required optimal sequence wt . This sequence is called a 0th–order Slepian sequence (also known as a discrete prolate spheroidal sequence, or DPSS), which is a unique taper with maximally suppressed sidelobes.
Statement of the problem:
It turns out that the number of dominant eigenvalues of the matrix M that are close to 1 is approximately N = 2WT, known as the Shannon number. If the eigenvalues λ are arranged in decreasing order (i.e., λ₁ > λ₂ > λ₃ > ... > λ_N), then the eigenvector corresponding to λ_{n+1} is called the nth-order Slepian sequence (DPSS) (0 ≤ n ≤ N−1). This nth-order taper also offers the best sidelobe suppression among sequences orthogonal to the Slepian sequences of the previous orders (0, 1, 2, ..., n−1). These lower-order Slepian sequences form the basis for spectral estimation by the multitaper method.
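To make the construction concrete, the following minimal Octave sketch (added here as an illustration; the values of T and W are arbitrary examples, not from the original text) builds the sine-kernel matrix M and extracts the Slepian sequences as its leading eigenvectors:

```octave
% Minimal sketch: Slepian sequences (DPSS) via eigendecomposition of the
% sine-kernel matrix M. T and W below are illustrative example values.
T = 64;                          % sequence length
W = 0.05;                        % half-bandwidth, 0 < W < 1/2
t = (1:T)';
D = t - t';                      % T-by-T matrix of index differences t - t'
M = sin(2*pi*W*D) ./ (pi*D);     % sine kernel; 0/0 (NaN) on the diagonal
M(1:T+1:end) = 2*W;              % replace the diagonal by its limiting value 2W
[V, L] = eig(M);
[lambda, idx] = sort(diag(L), 'descend');
slepians = V(:, idx);            % column n+1 holds the nth-order DPSS
printf('Shannon number 2WT = %.1f; leading concentration = %.6f\n', ...
       2*W*T, lambda(1));
```

The leading eigenvalue printed here is the maximal spectral concentration λ(T,W), and roughly the first 2WT eigenvalues are close to 1, in line with the Shannon number discussed above.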
Not limited to time series, the spectral concentration problem can be reformulated to apply on the surface of the sphere by using spherical harmonics, for applications in geophysics and cosmology among others.
**Anterior tibial artery**
Anterior tibial artery:
The anterior tibial artery is an artery of the leg. It carries blood to the anterior compartment of the leg and dorsal surface of the foot, from the popliteal artery.
Structure:
Course The anterior tibial artery is a branch of the popliteal artery. It originates at the distal end of the popliteus muscle, posterior to the tibia. The artery typically passes anterior to the popliteus muscle before passing between the tibia and fibula through an oval opening at the superior aspect of the interosseous membrane. The artery then descends between the tibialis anterior and extensor digitorum longus muscles.
Structure:
It is accompanied by the anterior tibial vein, and the deep peroneal nerve, along its course.
It crosses the anterior aspect of the ankle joint, at which point it becomes the dorsalis pedis artery.
Branches The branches of the anterior tibial artery are: the posterior tibial recurrent artery, the anterior tibial recurrent artery, muscular branches, the anterior medial malleolar artery, the anterior lateral malleolar artery, and the dorsalis pedis artery.
Clinical significance:
As the artery passes medial to the fibular neck, it becomes vulnerable to damage during a tibial osteotomy.
**Stone Love Movement**
Stone Love Movement:
Stone Love Movement, commonly referred to as simply Stone Love, is one of the major Jamaican sound systems.
History:
Based in Kingston, Jamaica, Winston "Wee Pow" Powell built the Stone Love sound system in 1972, using locally-built amplifiers. These were soon upgraded, and the equipment has been kept up to date over Stone Love's four-decade history, with long-time engineer Winston Samuels in charge of technical aspects of the sound system. It became Jamaica's most popular sound system, known for its superior sound quality, and maintained this position into the 21st century. It has also played overseas in the United States, the United Kingdom, and Japan. Stone Love is renowned for its exclusive dubplates, which have included sides by many of the artists it helped to establish, including Buju Banton, Wayne Wonder, Johnny Osbourne, Shabba Ranks, Sanchez, and Beenie Man. In the 1990s, rivalry with the Killamanjaro sound system led to a series of 'sound clashes' being staged. In 2003 Powell started a second sound system, Purple Love, which concentrates on vintage Jamaican music. A live album featuring a recording of the sound system performing, Stone Love Live, was released in 2005 on November Records. The sound system hosts the 'Weddy Weddy Wednesday' party every Wednesday at its base in Burlington Avenue, Kingston.
History:
Apart from Wee Pow, the selectors have included Rory, Geefus, Billy Slaughter, Diamond, Fire Ras, Ice Burg, Scary Gary, and Dwayne Pow. The sound system spawned the Stone Love and Father Pow record labels, which have released hits by the likes of Bounty Killer, Jigsy King and Tony Curtis, Capleton, Tanya Stephens, and Daddy Screw. In August 2014 it was announced that Powell would receive the Order of Distinction in October that year. On 30 October 2014, the movement appeared at the Red Bull Culture Clash, clashing against Boy Better Know, A$AP Mob and eventual winners Rebel Sound (David Rodigan, Shy FX, Chase & Status and MC Rage). To celebrate Stone Love's 42nd anniversary, the sound system toured the US in a series of performances ending on 27 December 2014 at the Red Stripe Oval in Kingston.
**Monoammonium glutamate**
Monoammonium glutamate:
Monoammonium glutamate is a compound with formula NH4C5H8NO4. It is an ammonium acid salt of glutamic acid.
It has the E number E624 and is used as a flavor enhancer.
**October 2023 lunar eclipse**
October 2023 lunar eclipse:
A partial lunar eclipse will take place on Saturday, 28 October 2023.
Visibility:
It will be completely visible over Europe and most of Asia and Africa, will be seen rising over the eastern Americas, and setting over Australia.
Related eclipses:
Eclipses of 2023 A hybrid solar eclipse on 20 April.
A penumbral lunar eclipse on 5 May.
An annular solar eclipse on 14 October.
A partial lunar eclipse on 28 October.
Saros series This eclipse is part of Saros cycle 146.
Related eclipses:
Metonic series This eclipse is the last of four Metonic cycle lunar eclipses on the same date, 28–29 October, each separated by 19 years. The Metonic cycle repeats nearly exactly every 19 years and represents a Saros cycle plus one lunar year. Because the eclipse occurs on the same calendar date, the Earth's shadow will be in nearly the same location relative to the background stars.
Related eclipses:
Half-Saros cycle A lunar eclipse will be preceded and followed by solar eclipses 9 years and 5.5 days apart (a half saros). This lunar eclipse is related to two partial solar eclipses of Solar Saros 153.
**Interlaken (networking)**
Interlaken (networking):
Interlaken is a royalty-free interconnect protocol.
Interlaken (networking):
It was invented by Cisco Systems and Cortina Systems in 2006, optimized for high-bandwidth and reliable packet transfers. It builds on the channelization and per-channel flow control features of SPI-4.2, while reducing the number of integrated circuit (chip) I/O pins by using high-speed SerDes technology. Bundles of serial links create a logical connection between components with multiple channels, backpressure capability, and data-integrity protection to boost the performance of communications equipment. Interlaken manages speeds of up to 6 Gbit/s per pin (lane), and large numbers of lanes can form an Interlaken interface. It was designed to handle high-speed (10 Gigabit Ethernet, 100 Gigabit Ethernet and beyond) computer network connections. An alliance was formed in 2007. Xilinx and Intel have both developed FPGAs that have Interlaken hard IP built in.
**Luteal support**
Luteal support:
Luteal support is the administration of medication, generally progesterone, progestins, hCG or GnRH agonists, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. It can be combined with, for example, in vitro fertilization and ovulation induction.
Progesterone appears to be the best method of providing luteal phase support, with a relatively higher live birth rate than placebo, and a lower risk of ovarian hyperstimulation syndrome (OHSS) than hCG. Addition of other substances such as estrogen or hCG does not seem to improve outcomes.
Progesterone and progestins:
The live birth rate is significantly higher with progesterone for luteal support in IVF cycles with or without intracytoplasmic sperm injection (ICSI). Co-treatment with GnRH agonists further improves outcomes, with a live birth rate risk difference (RD) of +16% (95% confidence interval +10 to +22%).
Progesterone and progestins:
Routes and formulations There is no evidence that any route of administration of progesterone or progestins is more beneficial than another for luteal support. The main ones are: Oral administration of progesterone or progestin pills. Oral administration of progestins provides a live birth rate at least similar to that of vaginal progesterone capsules when used for luteal support in embryo transfer, with no evidence of increased risk of miscarriage.
Progesterone and progestins:
Intravaginal administration of gel, tablets or other inserts, such as endometrin. A weekly vaginal ring is an effective and safe method for intravaginal administration.
Intramuscular administration. Daily intramuscular injections of progesterone-in-oil (PIO) have been the standard route of administration, but are not FDA-approved for use in pregnancy.
Time of initiation The time for beginning luteal support can be put in relation to various events: In IVF, generally somewhere between the evening of oocyte retrieval and day 3 after oocyte retrieval, with weak evidence indicating that 2 days after oocyte retrieval may be optimal.
In artificial insemination, luteal support is generally started on the day of insemination, or 1 to 2 days after.
Duration Luteal support given for a shorter duration than 7 weeks results in an increased risk of miscarriage in women with a dysfunctional corpus luteum (as can be diagnosed by blood tests for endogenous progesterone). In general, however, luteal support can safely be discontinued at the time of a positive pregnancy test (approximately 2 weeks after fertilization).
Other substances tested in luteal phase:
The addition of estrogen or hCG as adjuncts to progesterone does not appear to affect outcomes such as pregnancy rate and live birth rate in IVF. In fact, luteal support with human chorionic gonadotropin (hCG), alone or as a supplement to progesterone, has been associated with a higher risk of ovarian hyperstimulation syndrome (OHSS). Low molecular weight heparin as luteal support may improve the live birth rate, but has substantial side effects and no reliable data on long-term effects. Glucocorticoids such as cortisol have limited evidence of efficacy as luteal support.
**2013 in paleomammalogy**
2013 in paleomammalogy:
This paleomammalogy list records new fossil mammal taxa that were described during the year 2013, as well as notes other significant paleomammalogy discoveries and events which occurred during that year.
Newly named eutherians:
Xenarthrans Odd-toed ungulates Even-toed ungulates Cetaceans Carnivorans Lagomorphs Rodents Primates and plesiadapiforms Others
**Class kappa-ell function**
Class kappa-ell function:
In control theory, it is often required to check whether a nonautonomous system is stable. To cope with this, it is necessary to use certain special comparison functions; class KL functions belong to this family. Definition: A continuous function β : [0, a) × [0, ∞) → [0, ∞) is said to belong to class KL if: for each fixed s, the function β(r, s) belongs to class kappa; and for each fixed r, the function β(r, s) is decreasing with respect to s and is such that β(r, s) → 0 as s → ∞.
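As a brief worked example (added here for illustration; it does not appear in the original text), a standard class KL function is β(r, s) = r·e^(−s):

```latex
% Illustrative example of a class KL function (not from the source text):
\beta(r,s) = r\,e^{-s}.
% For each fixed s: the map r \mapsto r e^{-s} is continuous, zero at r = 0,
% and strictly increasing, hence of class kappa.
% For each fixed r: the map s \mapsto r e^{-s} is decreasing, and
% \beta(r,s) \to 0 as s \to \infty.
```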
**Dual-clutch transmission**
Dual-clutch transmission:
A dual-clutch transmission (DCT) (sometimes referred to as a twin-clutch transmission) is a type of multi-speed vehicle transmission system, that uses two separate clutches for odd and even gear sets. The design is often similar to two separate manual transmissions with their respective clutches contained within one housing, and working as one unit. In car and truck applications, the DCT functions as an automatic transmission, requiring no driver input to change gears.
Dual-clutch transmission:
The first DCT to reach production was the Easidrive automatic transmission introduced on the 1961 Hillman Minx mid-size car. This was followed by various eastern European tractors through the 1970s (using manual operation via a single clutch pedal), then the Porsche 962 C racing car in 1985. The first DCT of the modern era was used in the 2003 Volkswagen Golf R32. Since the late 2000s, DCTs have become increasingly widespread, and have supplanted hydraulic automatic transmissions in various models of cars.
Dual-clutch transmission:
More generally, a transmission with several clutches can be called a multi-clutch transmission. For example, the Koenigsegg Jesko has a transmission with one clutch per gear, for a total of seven clutches.
Design:
The fundamental principle of a DCT is that one clutch drives a gear-set for the even-numbered gears, while the other clutch drives the odd-numbered gears. Since the DCT can pre-select an odd gear while the vehicle is being propelled in an even gear (or vice versa), DCTs can shift several times faster than is possible with a manual transmission. By timing the operation of one clutch to engage at the precise moment that the other is disengaging, a DCT can shift gears without interrupting the torque supply to the wheels. A DCT uses clutch packs (as per a manual transmission), rather than the torque converter used by traditional (hydraulic) automatic transmissions. The DCT clutches are either "wet" or "dry" and are similar to the clutches used in most motorcycles. Wet clutches are bathed in oil to provide cooling for the clutch surface, and are therefore often used in applications with higher torque loads, such as the 1,250 N⋅m (922 lbf⋅ft) engine in the Bugatti Veyron.
Design:
Several arrangements for the two clutches are possible, and are outlined below: Most automotive DCTs use two concentric clutch packs located on the same axis as the flywheel. Therefore, the outer clutch pack has a larger diameter than the inner clutch pack.
Design:
Many DCTs for tractors (such as the Fortschritt ZT 320) use a similar arrangement in which the clutches are located on the same axis as the flywheel. The difference is that the clutches sit at different positions along this axis (i.e. one in front of the other) and are the same size as each other. This design is also used in the Tremec TR-9070 DCT fitted to the 2020 Ford Mustang Shelby GT500.
Design:
Another design (as used by the Volkswagen DQ200 transmission) arranges two identical-size clutches located side-by-side. This design requires two side-by-side input shafts, which are driven from the crankshaft via gears.
History:
The concept of a dual-clutch transmission was invented by French engineer Adolphe Kégresse in 1939. The transmission was intended for use in the Citroën Traction Avant; however, Kégresse ran out of money before a working model could be developed. One of the first production DCTs was the Easidrive unit developed in the late 1950s by the UK's Smiths Industries and Rootes. This DCT – introduced on the 1961 Hillman Minx (Series IIIC) – used two electro-magnetic clutches, along with analogue electronics and a series of solenoids, to implement the gear shifts. The Easidrive was offered as an option on Hillman and Singer models; however, it was not a reliable device and many were replaced by conventional manual transmissions.
History:
Porsche began development of DCTs for racing cars in the late 1970s, because a DCT could prevent a drop in boost during gear shifts on a turbocharged engine. As the electronics required to control the transmission became compact enough to be practical, the Porsche Doppelkupplungsgetriebe ('dual-clutch gearbox') (PDK) transmission was installed as a prototype in a Porsche 956 Le Mans racing car in 1983. The first use of a PDK in competition was the 1985 Porsche 962 C Le Mans racing car, which won the World Sportscar Championship in 1986. The PDK transmission was also used in the 1985 Audi Sport Quattro S1 Group B rally car. The first mass-production passenger car to use a DCT was the 2003 Volkswagen Golf R32.
Manufacturers:
BorgWarnerBorgWarner produced the first mass-production DCT, as used in the R32 model introduced to the Volkswagen Golf range in 2003. The company has produced many of the DCTs used by the Volkswagen Group (marketed as DSG for Volkswagen-branded cars) and produced various components for the 2007 Nissan GT-R sports car, an early application for DCTs involving high torque loads. The company supplies several car manufacturers with complete transmission units, wet clutches and/or mechatronic control modules.
Manufacturers:
GetragGetrag began production of DCTs in 2008 and has supplied manufacturers including BMW, Dacia, Dodge, Ferrari, Mercedes-Benz, Ford, Mitsubishi, Renault, Smart and Volvo. The Getrag 7DCL750 is a 7-speed DCT which is designed for high-performance engines and has a torque rating of 750 N⋅m (553 lbf⋅ft). It is used in mid-engined sports cars such as the 2009 Ferrari 458, the 2014 Mercedes-AMG GT and the 2017 Ford GT.
Manufacturers:
LuKLuK DCTs have been used by Volkswagen Group since 2008, in several smaller cars with relatively low torque outputs.
RicardoRicardo designed and built 7-speed DCT used by the 2005-2015 Bugatti Veyron, which has a turbocharged 16-cylinder engine producing 1,250 N⋅m (922 lbf⋅ft) of torque.
TremecTremec provides the 8-speed DCT used in the 2020 Chevrolet Corvette C8 and the 7-speed DCT used in the 2020 Ford Mustang Shelby GT500.
ZF FriedrichshafenZF Friedrichshafen produce the 7-speed DCT used by Porsche.
Usage in motor vehicles:
Following its 2003 introduction in the Volkswagen Golf R32, a 6-speed DCT (model code DQ250), with two wet clutches arranged concentrically, has been used in several Volkswagen and Audi models. In 2008, Volkswagen Group began production of the DQ200, a 7-speed DCT using two dry clutches arranged side-by-side (instead of concentrically). Volkswagen claims fuel economy improvements of 6% compared with a 6-speed manual and 20% compared with a traditional (hydraulic) automatic transmission. DCT transmissions have been used on vehicles sold by Alfa Romeo, Volkswagen, Audi, SEAT, Skoda, and Bugatti, mostly marketed using the term Direct-Shift Gearbox (except for Alfa Romeo, which has used the term TCT, and Audi, which has also used the term S-Tronic). Usage in high-performance cars began in 2005 with the 7-speed DCT used in the Bugatti Veyron. Other early high-performance applications include the 2007 Nissan GT-R, the 2008 Ferrari California, the 2008 Mitsubishi Lancer Evolution X and the 2009 Porsche 911 (997). The 2009 Honda VFR1200F was the first motorcycle to use a DCT. Honda has since expanded the application of the DCT to the Gold Wing model (model year 2018), the Africa Twin, the Rebel 1100, and the NC750X (model year 2020).
Usage in motor vehicles:
In 2010, the Mitsubishi Fuso 6-speed Duonic transmission became the first DCT to be used in a truck. The 2016 Acura ILX uses a torque converter (a device typically used in hydraulic automatic transmissions) paired with its 8-speed DCT. The purpose of the torque converter is to improve the smoothness of low-speed driving, by eliminating the jolting and shuddering sometimes found in DCTs at low speed.
Usage in tractors:
Several 1970s tractors from eastern European countries (such as the Kirovets K-700 derivatives) used manually-operated DCTs. For example, the Fortschritt ZT 300 has an Unterlastschaltstufe ('shifting under load') function, which needs to be pre-selected by the driver and then activated by pressing the clutch halfway down. This engages the second clutch, which applies a reduction gear to the driven wheels without any interruption in the torque transmission to the wheels.
Usage in railcars:
A different type of dual-clutch transmission has been used in some railcars. The two clutches are placed one on the gearbox input shaft and the other on the gearbox output shaft. To make a gear change, both clutches disengage simultaneously and a brake inside the gearbox engages. The gearchange occurs with all gears stationary, so no synchronizing mechanism is needed. After the gear change, both clutches re-engage. There is a significant break in power transmission, so this system is unsuitable for shunting locomotives.
**Sensitive compartmented information facility**
Sensitive compartmented information facility:
A sensitive compartmented information facility (SCIF ), in United States military, national security/national defense and intelligence parlance, is an enclosed area within a building that is used to process sensitive compartmented information (SCI) types of classified information.
Sensitive compartmented information facility:
SCIFs can be either permanent or temporary and can be set up in official government buildings (such as the Situation Room in the White House), onboard ships, in private residences of officials, or in hotel rooms and other places of necessity for officials when traveling. Portable SCIFs can also be quickly set up when needed during emergency situations. Because of the operational security (OPSEC) risk they pose, personal cell phones, smart watches, computer flash drives (aka "thumb drives"), or any other sort of Personal Electronic Device (PED), cameras (analog or digital) other than those that are U.S. Government property and which are used only under strict guidelines, and/or any other sort of recording or transmitting devices (analog or digital) are expressly prohibited in SCIFs.
Access:
Access to SCIFs is normally limited to those individuals with appropriate security clearances. Non-cleared personnel in SCIFs must be under the constant oversight of cleared personnel and all classified information and material removed from view to prevent unauthorized access. As part of this process, non-cleared personnel are also typically required to surrender all recording, photographic and other electronic media devices. All of the activity and conversation inside is presumed restricted from public disclosure.
Construction:
Some entire buildings are SCIFs, where all but the front foyer is secure. A SCIF can also be located in an air, ground or maritime vehicle, or can be established temporarily at a specific site. The physical construction, access control, and alarming of the facility have been defined by various directives, including Director of Central Intelligence Directives (DCIDs) 1/21 and 6/9, and most recently (2011) by Intelligence Community Directive (ICD) 705, signed by the Director of National Intelligence. ICD 705 is a three-page capstone document that implements Intelligence Community Standard (ICS) 705-1, ICS 705-2 and the Technical Specifications for Construction and Management of Sensitive Compartmented Information Facilities, or "Tech Specs." The latest version of the Tech Specs was published in March 2020 (Version 1.5). Computers operating within such a facility must conform to rules established by ICD 503. Computers and telecommunication equipment within must conform to TEMPEST emanations specifications as directed by a Certified TEMPEST Technical Authority (CTTA).
Construction:
Officials documented to have had a SCIF set up in their private residences include: President George W. Bush at his Prairie Chapel Ranch in Crawford, Texas (which he used as his Western White House) Secretary of State Hillary Clinton at her Washington, D.C., and Chappaqua, New York, homes President Donald Trump at both Trump Tower in New York City, and at his Mar-a-Lago resort in Palm Beach, Florida
**R. David Britt**
R. David Britt:
R. David Britt is the Winston Ko Chair and Distinguished Professor of Chemistry at the University of California, Davis. Britt uses electron paramagnetic resonance (EPR) spectroscopy to study metalloenzymes and enzymes containing organic radicals in their active sites. Britt is the recipient of multiple awards for his research, including the Bioinorganic Chemistry Award in 2019 and the Bruker Prize in 2015 from the Royal Society of Chemistry. He has received a Gold Medal from the International EPR Society (2014), and the Zavoisky Award from the Kazan Scientific Center of the Russian Academy of Sciences (2018). He is a Fellow of the American Association for the Advancement of Science and of the Royal Society of Chemistry.
Early life and education:
Britt studied at North Carolina State University, graduating with his B.S. in Physics in 1978. He completed his graduate studies in Physics at the University of California, Berkeley, graduating with his Ph.D. in 1988. At Berkeley, Britt worked in the laboratory of Prof. Melvin P. Klein as an NSF Graduate Research Fellow on the construction of a pulsed electron paramagnetic resonance (EPR) spectrometer. Britt was able to use the electron spin echo envelope modulation (ESEEM) technique with this spectrometer to study the molecular structure of the manganese-containing oxygen-evolving complex (OEC). A better picture of the OEC could improve understanding of the mechanisms of the light-dependent reactions of photosynthesis, and could lead to the development of artificial photosynthesis.
Independent career:
Britt began his independent career at the University of California, Davis in 1989 as an Assistant Professor. He was promoted to Associate Professor in 1994, and to full Professor in 1997. Since 2005, he has served as Chair of the Department of Chemistry at Davis, and in 2018 he was named to the Winston Ko Professorship in Science Leadership.
Research:
Electron paramagnetic resonance (EPR) spectroscopy is a technique that measures the relaxation of unpaired electron spins in an applied magnetic field. This technique is particularly useful for studying the mechanisms of catalysis of metalloenzymes and enzymes containing organic radicals, as the mechanistic intermediates often contain unpaired electrons and thus give a distinct EPR signal. Enzymatic systems that the Britt group studies include the oxygen-evolving complex of photosystem II, the H2-producing [FeFe] hydrogenases, nitrogenases, and radical SAM enzymes. With then-postdoctoral scholar Stefan Stoll, Britt developed EasySpin, an open-source MATLAB software toolbox for simulating and fitting a wide range of EPR spectra. Britt has collaborated with many synthetic chemists and biologists, including Daniel G. Nocera, Philip P. Power, Michael A. Marletta, Elizabeth M. Nolan, William H. Casey, and Judith P. Klinman.
**Rayleigh quotient iteration**
Rayleigh quotient iteration:
Rayleigh quotient iteration is an eigenvalue algorithm which extends the idea of the inverse iteration by using the Rayleigh quotient to obtain increasingly accurate eigenvalue estimates.
Rayleigh quotient iteration:
Rayleigh quotient iteration is an iterative method, that is, it delivers a sequence of approximate solutions that converges to a true solution in the limit. Very rapid convergence is guaranteed and no more than a few iterations are needed in practice to obtain a reasonable approximation. The Rayleigh quotient iteration algorithm converges cubically for Hermitian or symmetric matrices, given an initial vector that is sufficiently close to an eigenvector of the matrix that is being analyzed.
Algorithm:
The algorithm is very similar to inverse iteration, but replaces the estimated eigenvalue at the end of each iteration with the Rayleigh quotient. Begin by choosing some value μ0 as an initial eigenvalue guess for the Hermitian matrix A. An initial vector b0 must also be supplied as an initial eigenvector guess.
Calculate the next approximation of the eigenvector $b_{i+1}$ by $$b_{i+1} = \frac{(A - \mu_i I)^{-1} b_i}{\left\|(A - \mu_i I)^{-1} b_i\right\|},$$ where $I$ is the identity matrix, and set the next approximation of the eigenvalue to the Rayleigh quotient of the current iterate, $$\mu_{i+1} = \frac{b_{i+1}^{*} A \, b_{i+1}}{b_{i+1}^{*} \, b_{i+1}}.$$ To compute more than one eigenvalue, the algorithm can be combined with a deflation technique.
Algorithm:
Note that for very small problems it is beneficial to replace the matrix inverse with the adjugate, which will yield the same iteration because it is equal to the inverse up to an irrelevant scale (the inverse of the determinant, specifically). The adjugate is easier to compute explicitly than the inverse (though the inverse is easier to apply to a vector for problems that aren't small), and is more numerically sound because it remains well defined as the eigenvalue converges.
Example:
Consider the matrix $$A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 1 \\ 3 & 2 & 1 \end{bmatrix}$$ for which the exact eigenvalues are $\lambda_1 = 3 + \sqrt{5}$, $\lambda_2 = 3 - \sqrt{5}$ and $\lambda_3 = -2$, with corresponding eigenvectors $$v_1 = \begin{bmatrix} 1 \\ \varphi - 1 \\ 1 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1 \\ -\varphi \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}$$ (where $\varphi = \tfrac{1+\sqrt{5}}{2}$ is the golden ratio).
The largest eigenvalue is $\lambda_1 \approx 5.2361$ and corresponds to any eigenvector proportional to $\begin{bmatrix} 1 & 0.6180 & 1 \end{bmatrix}^{\top}$.
We begin with an initial eigenvalue guess of $\mu_0 = 200$. Then the first iteration yields $$b_1 \approx \begin{bmatrix} 0.57927 \\ 0.57348 \\ 0.57927 \end{bmatrix}, \quad \mu_1 \approx 5.3355,$$ the second iteration $$b_2 \approx \begin{bmatrix} 0.64676 \\ 0.40422 \\ 0.64676 \end{bmatrix}, \quad \mu_2 \approx 5.2418,$$ and the third $$b_3 \approx \begin{bmatrix} 0.64793 \\ 0.40045 \\ 0.64793 \end{bmatrix}, \quad \mu_3 \approx 5.2361,$$ from which the cubic convergence is evident.
Octave implementation:
The following is a simple implementation of the algorithm in Octave.
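The code listing itself did not survive extraction; the following is a minimal reconstruction sketch (not the article's verbatim listing), reusing the matrix and the initial guesses from the example above:

```octave
% Rayleigh quotient iteration: a minimal reconstruction sketch, not the
% article's original listing. Matrix and guesses follow the example above.
A  = [1 2 3; 1 2 1; 3 2 1];
b  = [1; 1; 1];                  % initial eigenvector guess b0
mu = 200;                        % initial eigenvalue guess mu0
I  = eye(rows(A));
for k = 1:10
  mu_old = mu;
  b  = (A - mu*I) \ b;           % inverse-iteration step with the current shift
  b  = b / norm(b);              % normalize the eigenvector estimate
  mu = b' * A * b;               % Rayleigh quotient (b has unit norm)
  printf('iteration %d: mu = %.6f\n', k, mu);
  if abs(mu - mu_old) < 1e-12    % stop once the eigenvalue estimate settles
    break;
  end
end
```

Run on the example above, the printed values of mu should reproduce the iterates 5.3355, 5.2418 and 5.2361 quoted earlier before settling.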
**Gustavo Tamayo**
Gustavo Tamayo:
Gustavo E. Tamayo is a Colombian ophthalmologist known for developing a refractive surgery method known as Contoured Ablation Patterns (CAP), which enables surgeons to perform surgeries faster and more easily. Tamayo has also developed and patented a procedure to treat presbyopia, which is currently being tested by AMO (Abbott Medical Optics) with a view to applying the procedure globally once approved by the FDA. He also holds other patents dealing with cataract removal through laser application. Tamayo was designated subdirector of the Subspecialty Refractive Surgery Day at the American Academy of Ophthalmology meeting in 2008, which took place in Atlanta, and was appointed director for the same meeting in its 2009 edition. He is the president and founder of a surgical eye clinic in the north of Bogotá, founded in 2001, called the Bogota Laser Ocular Surgery Center. Tamayo also serves as a member of the Medical Advisory Board at AMO, Presbia and Keramet, and currently serves as the medical director for Latin America of Avedro. Tamayo is a member of various medical associations, including the American Academy of Ophthalmology, The Cornea Society, and both the American and European Societies of Cataract & Refractive Surgery.
**Barrel roof**
Barrel roof:
A barrel roof is a curved roof that, especially from below, looks like a cut-away barrel. Barrel roofs have some advantages over dome roofs, especially in being able to cover rectangular buildings, owing to their uniform cross-section. They are mainly used for bulk-storage industrial buildings because of their better space-to-structure ratio and larger spans. Barrel vaults are a particular form of barrel roof.
Barrel roof:
Two examples of a barrel roof
**6-Methyl-MDA**
6-Methyl-MDA:
6-Methyl-3,4-methylenedioxyamphetamine (6-Methyl-MDA) is an entactogen and psychedelic drug of the amphetamine class. It was first synthesized in the late 1990s by a team including David E. Nichols at Purdue University while investigating derivatives of 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxy-N-methylamphetamine (MDMA). 6-Methyl-MDA has IC50 values of 783 nM, 28,300 nM, and 4,602 nM for inhibiting the reuptake of serotonin, dopamine, and norepinephrine in rat synaptosomes. In animal studies it substitutes for MBDB, MMAI, LSD, and 2,5-dimethoxy-4-iodoamphetamine (DOI), though not amphetamine, but only partially and at high doses. Thus, while several-fold less potent than its analogues 2-methyl-MDA and 5-methyl-MDA, and approximately half as potent as MDA, 6-methyl-MDA is still significantly active, and appropriate doses may be similar to or somewhat higher than those of MDMA.
**Chloropyramine**
Chloropyramine:
Chloropyramine is a classical first-generation antihistamine drug approved in Eastern European countries (like Russia) for the treatment of allergic conjunctivitis, allergic rhinitis, bronchial asthma, and other atopic (allergic) conditions. Related indications for clinical use include angioedema (Quincke's edema), allergic reactions to insect bites, food and drug allergies, and anaphylactic shock.
Chloropyramine:
Chloropyramine is known as a competitive reversible H1 receptor antagonist (also known as an H1 inverse agonist), meaning that it exerts its pharmacological action by competing with histamine for the H1 subtype histamine receptor. By blocking the effects of histamine, the drug inhibits the vasodilation, increased vascular permeability, and tissue edema associated with histamine release in the tissue. The H1 antagonistic properties of chloropyramine can be used by researchers for the purposes of blocking the effects of histamine on cells and tissues. In addition, chloropyramine has some anticholinergic properties. Chloropyramine's anticholinergic properties and the fact that it can pass through the blood–brain barrier are linked to its clinical side effects: drowsiness, weakness, vertigo, fatigue, dryness in the mouth, constipation, and, rarely, visual disturbances and increased intraocular pressure.
Clinical dosage and administration:
In cases of severe allergic reactions, chloropyramine can be injected intramuscularly or intravenously. Oral administration: In adults, 25 mg can be taken 3 to 4 times daily (up to 150 mg); in children over 5 years old, 25 mg can be taken 2 to 3 times daily. For external application, the skin or the eye conjunctiva can be treated up to several times a day by applying a thin layer of cream or ointment containing 1% chloropyramine hydrochloride.
Contraindications:
Contraindications for parenteral or oral administration include benign prostatic hyperplasia, peptic ulcer, pyloric and duodenal stenosis, uncontrolled glaucoma, pregnancy and breast-feeding. It is not intended for the management of acute bronchospasm.
Special warnings and precautions:
Chloropyramine should not be used internally with alcohol, sedative drugs and hypnotics because of the potentiation of the effects. It should be used with caution in patients with hyperthyroidism, cardiovascular diseases and asthma. In children, it can induce agitation, and in many adult patients dizziness may be observed. Because of the pronounced sedative effect the preparation should be prescribed cautiously in drivers and people working with machines.
Special warnings and precautions:
A large study on people 65 years old or older linked the development of Alzheimer's disease and other forms of dementia to the "higher cumulative" use of first-generation antihistamines, due to their anticholinergic properties.
Drug interactions:
Chloropyramine should not be used internally with MAO inhibitors. Because of its anticholinergic activity, concurrent administration with cholinomimetics is not advisable. General anesthetics, analgesic agents and psycholeptics potentiate the sedative effect of chloropyramine.
Trade names:
Allergopress, Chimpharm AD (KZ) Allergosan, Sopharma AD (BG, GE, LV) Suprastin, Egis Pharmaceuticals PLC (GE, HU, KZ, LT, LV, UA, RU) Supralgon, Biopharm JSC (GE) Supranorm-Tsiteli A, Rompharm Co. (GE) Synopen, Pliva d.o.o. (BA, HR, RS,MK)
Synthesis:
The preparation begins with the condensation of 4-chlorobenzaldehyde with N,N-dimethylethylenediamine. The resulting Schiff base is reduced, and the resulting amine then reacts with 2-bromopyridine in the presence of sodamide.
**Data mining**
Data mining:
Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. The term "data mining" is a misnomer because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support systems, including artificial intelligence (e.g., machine learning) and business intelligence. The book Data Mining: Practical Machine Learning Tools and Techniques with Java (which covers mostly machine learning material) was originally to be named Practical Machine Learning, and the term data mining was only added for marketing reasons. Often the more general terms (large-scale) data analysis and analytics, or, when referring to actual methods, artificial intelligence and machine learning, are more appropriate.
Data mining:
The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps.
Data mining:
The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover hidden patterns in a large volume of data. The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.
Etymology:
In the 1960s, statisticians and economists used terms like data fishing or data dredging to refer to what they considered the bad practice of analyzing data without an a priori hypothesis. The term "data mining" was used in a similarly critical way by economist Michael Lovell in an article published in the Review of Economic Studies in 1983. Lovell indicates that the practice "masquerades under a variety of aliases, ranging from "experimentation" (positive) to "fishing" or "snooping" (negative)".
Etymology:
The term data mining appeared around 1990 in the database community, with generally positive connotations. For a short time in the 1980s the phrase "database mining"™ was used, but since it was trademarked by HNC, a San Diego-based company, to pitch their Database Mining Workstation, researchers consequently turned to data mining. Other terms used include data archaeology, information harvesting, information discovery, knowledge extraction, etc. Gregory Piatetsky-Shapiro coined the term "knowledge discovery in databases" for the first workshop on the same topic (KDD-1989), and this term became more popular in the AI and machine learning communities. However, the term data mining became more popular in the business and press communities. Currently, the terms data mining and knowledge discovery are used interchangeably.
Background:
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, especially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets.
Process:
The knowledge discovery in databases (KDD) process is commonly defined with the stages: Selection, Pre-processing, Transformation, Data mining, and Interpretation/evaluation. There are, however, many variations on this theme, such as the Cross-industry standard process for data mining (CRISP-DM), which defines six phases: Business understanding, Data understanding, Data preparation, Modeling, Evaluation, and Deployment; or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation.
Process:
Polls conducted in 2002, 2004, 2007 and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners. The only other data mining standard named in these polls was SEMMA. However, 3–4 times as many people reported using CRISP-DM. Several teams of researchers have published reviews of data mining process models, and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008.
Process:
Pre-processing Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze the multivariate data sets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data.
Process:
Data mining Data mining involves six common classes of tasks: Anomaly detection (outlier/change/deviation detection) – The identification of unusual data records that might be interesting, or of data errors that require further investigation.
Association rule learning (dependency modeling) – Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
Clustering – is the task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data.
Classification – is the task of generalizing known structure to apply to new data. For example, an e-mail program might attempt to classify an e-mail as "legitimate" or as "spam".
Regression – attempts to find a function that models the data with the least error; that is, for estimating the relationships among data or datasets (see the sketch after this list).
Summarization – providing a more compact representation of the data set, including visualization and report generation.
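As an illustration of the regression task (an added sketch using made-up data, not part of the original article), a least-squares fit in Octave:

```octave
% Illustrative least-squares regression on synthetic data (assumed example).
x = (1:10)';                       % example feature values
y = 3*x + 2 + 0.5*randn(10, 1);    % noisy linear relationship to recover
X = [ones(10, 1), x];              % design matrix with an intercept column
beta = X \ y;                      % least-squares fit: minimizes ||X*beta - y||
printf('intercept = %.3f, slope = %.3f\n', beta(1), beta(2));
```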
Process:
Results validation Data mining can unintentionally be misused, producing results that appear to be significant but which do not actually predict future behavior and cannot be reproduced on a new sample of data, therefore bearing little use. This is sometimes caused by investigating too many hypotheses and not performing proper statistical hypothesis testing. A simple version of this problem in machine learning is known as overfitting, but the same problem can arise at different phases of the process, and thus a train/test split, when applicable at all, may not be sufficient to prevent this from happening. The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learned patterns are applied to this test set, and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish "spam" from "legitimate" e-mails would be trained on a training set of sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it had not been trained. The accuracy of the patterns can then be measured from how many e-mails they correctly classify. Several statistical methods may be used to evaluate the algorithm, such as ROC curves.
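To make the train/test evaluation concrete, here is a minimal Octave sketch (an added illustration on synthetic data; the split sizes and the linear scoring model are arbitrary assumptions):

```octave
% Minimal train/test split sketch on synthetic data (assumed example).
n = 200;
X = randn(n, 2);                        % two synthetic features
y = double(X(:,1) + X(:,2) > 0);        % ground-truth binary labels
idx = randperm(n);
tr = idx(1:150);  te = idx(151:end);    % 75/25 train/test split
w = X(tr,:) \ y(tr);                    % fit a linear scoring model on training data only
pred = (X(te,:) * w) > 0.5;             % apply the learned model to held-out data
printf('held-out accuracy = %.2f\n', mean(pred == y(te)));
```

Measuring accuracy only on the held-out rows is what guards against mistaking overfit training-set patterns for real structure.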
Process:
If the learned patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, then the final step is to interpret the learned patterns and turn them into knowledge.
Research:
The premier professional body in the field is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Knowledge Discovery and Data Mining (SIGKDD). Since 1989, this ACM SIG has hosted an annual international conference and published its proceedings, and since 1999 it has published a biannual academic journal titled "SIGKDD Explorations". Computer science conferences on data mining include: CIKM Conference – ACM Conference on Information and Knowledge Management; European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases; KDD Conference – ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Data mining topics are also present in many data management/database conferences such as the ICDE Conference, SIGMOD Conference and International Conference on Very Large Data Bases.
Standards:
There have been some efforts to define standards for the data mining process, for example, the 1999 European Cross Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining standard (JDM 1.0). Development on successors to these processes (CRISP-DM 2.0 and JDM 2.0) was active in 2006 but has stalled since. JDM 2.0 was withdrawn without reaching a final draft.
Standards:
For exchanging the extracted models—in particular for use in predictive analytics—the key standard is the Predictive Model Markup Language (PMML), which is an XML-based language developed by the Data Mining Group (DMG) and supported as exchange format by many data mining applications. As the name suggests, it only covers prediction models, a particular data mining task of high importance to business applications. However, extensions to cover (for example) subspace clustering have been proposed independently of the DMG.
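The PMML document itself is XML, so producing one is usually delegated to an exporter. As an illustration, the third-party Python package sklearn2pmml (an assumed dependency, not part of the PMML standard, and one that needs a Java runtime for the conversion) can export a scikit-learn model roughly like this:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# PMMLPipeline wraps an ordinary scikit-learn estimator with the
# metadata the PMML exporter needs.
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)

# Writes an XML document conforming to PMML; any PMML-aware consumer
# can then score new data against the exported model.
sklearn2pmml(pipeline, "iris_tree.pmml")
```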
Notable uses:
Data mining is used wherever there is digital data available. Notable examples of data mining can be found throughout business, medicine, science, finance and surveillance.
Privacy concerns and ethics:
While the term "data mining" itself may have no ethical implications, it is often associated with the mining of information in relation to user behavior (ethical and otherwise). The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns. Data mining requires data preparation which can uncover information or patterns that may compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent). This is not data mining per se, but a result of the preparation of data before—and for the purposes of—the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous. It is recommended to be aware of the following before data are collected:
The purpose of the data collection and any (known) data mining projects.
Privacy concerns and ethics:
How the data will be used.
Who will be able to mine the data and use the data and their derivatives.
The status of security surrounding access to the data.
Privacy concerns and ethics:
How collected data can be updated.
Data may also be modified so as to become anonymous, so that individuals may not readily be identified. However, even "anonymized" data sets can potentially contain enough information to allow identification of individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL. The inadvertent revelation of personally identifiable information by a provider violates Fair Information Practices. This indiscretion can cause financial, emotional, or bodily harm to the affected individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling prescription information to data mining companies who in turn provided the data to pharmaceutical companies.
Privacy concerns and ethics:
Situation in Europe
Europe has rather strong privacy laws, and efforts are underway to further strengthen the rights of consumers. However, the U.S.–E.U. Safe Harbor Principles, developed between 1998 and 2000, currently effectively expose European users to privacy exploitation by U.S. companies. As a consequence of Edward Snowden's global surveillance disclosure, there has been increased discussion of revoking this agreement, in particular because the data will be fully exposed to the National Security Agency, and attempts to reach an agreement with the United States have failed. In the United Kingdom in particular there have been cases of corporations using data mining as a way to target certain groups of customers, forcing them to pay unfairly high prices. These groups tend to be people of lower socio-economic status who are not savvy to the ways they can be exploited in digital marketplaces.
Privacy concerns and ethics:
Situation in the United States
In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires individuals to give their "informed consent" regarding information they provide and its intended present and future uses. According to an article in Biotech Business Week, "'[i]n practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena,' says the AAHC. More importantly, the rule's goal of protection through informed consent is approaching a level of incomprehensibility to average individuals." This underscores the necessity for data anonymity in data aggregation and mining practices.
Privacy concerns and ethics:
U.S. information privacy legislation such as HIPAA and the Family Educational Rights and Privacy Act (FERPA) applies only to the specific areas that each such law addresses. The use of data mining by the majority of businesses in the U.S. is not controlled by any legislation.
Copyright law:
Situation in Europe
Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is not legal. Where a database is pure data in Europe, it may be that there is no copyright—but database rights may exist, so data mining becomes subject to intellectual property owners' rights that are protected by the Database Directive. On the recommendation of the Hargreaves review, the UK government amended its copyright law in 2014 to allow content mining as a limitation and exception. The UK was the second country in the world to do so after Japan, which introduced an exception in 2009 for data mining. However, due to the restrictions of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law also does not allow this provision to be overridden by contractual terms and conditions.
Copyright law:
Since 2020, Switzerland has also regulated data mining, allowing it in the research field under certain conditions laid down by art. 24d of the Swiss Copyright Act. This new article entered into force on 1 April 2020. The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title of Licences for Europe. The focus on licensing, rather than limitations and exceptions, as the solution to this legal issue led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.
Copyright law:
Situation in the United States
US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America and in other fair use countries such as Israel, Taiwan and South Korea. As content mining is transformative, that is, it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one being text and data mining.
Software:
Free open-source data mining software and applications
The following applications are available under free/open-source licenses. Public access to application source code is also available.
Carrot2: Text and search results clustering framework.
Chemicalize.org: A chemical structure miner and web search engine.
ELKI: A university research project with advanced cluster analysis and outlier detection methods written in the Java language.
GATE: a natural language processing and language engineering tool.
KNIME: The Konstanz Information Miner, a user-friendly and comprehensive data analytics framework.
Massive Online Analysis (MOA): a real-time big data stream mining with concept drift tool in the Java programming language.
MEPX: cross-platform tool for regression and classification problems based on a Genetic Programming variant.
mlpack: a collection of ready-to-use machine learning algorithms written in the C++ language.
NLTK (Natural Language Toolkit): A suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python language.
OpenNN: Open neural networks library.
Orange: A component-based data mining and machine learning software suite written in the Python language.
PSPP: Data mining and statistics software under the GNU Project, similar to SPSS.
R: A programming language and software environment for statistical computing, data mining, and graphics. It is part of the GNU Project.
scikit-learn: An open-source machine learning library for the Python programming language.
Torch: An open-source deep learning library for the Lua programming language and scientific computing framework with wide support for machine learning algorithms.
UIMA: The UIMA (Unstructured Information Management Architecture) is a component framework for analyzing unstructured content such as text, audio and video – originally developed by IBM.
Weka: A suite of machine learning software applications written in the Java programming language.
Proprietary data-mining software and applications
The following applications are available under proprietary licenses.
Angoss KnowledgeSTUDIO: data mining tool.
LIONsolver: an integrated software application for data mining, business intelligence, and modeling that implements the Learning and Intelligent OptimizatioN (LION) approach.
PolyAnalyst: data and text mining software by Megaputer Intelligence.
Microsoft Analysis Services: data mining software provided by Microsoft.
NetOwl: suite of multilingual text and entity analytics products that enable data mining.
Oracle Data Mining: data mining software by Oracle Corporation.
PSeven: platform for automation of engineering simulation and analysis, multidisciplinary optimization and data mining provided by DATADVANCE.
Qlucore Omics Explorer: data mining software.
RapidMiner: An environment for machine learning and data mining experiments.
SAS Enterprise Miner: data mining software provided by the SAS Institute.
SPSS Modeler: data mining software provided by IBM.
STATISTICA Data Miner: data mining software provided by StatSoft.
Tanagra: Visualisation-oriented data mining software, also for teaching.
Vertica: data mining software provided by Hewlett-Packard.
Google Cloud Platform: automated custom ML models managed by Google.
Amazon SageMaker: managed service provided by Amazon for creating & productionising custom ML models. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Introsort**
Introsort:
Introsort or introspective sort is a hybrid sorting algorithm that provides both fast average performance and (asymptotically) optimal worst-case performance. It begins with quicksort, switches to heapsort when the recursion depth exceeds a level based on (the logarithm of) the number of elements being sorted, and switches to insertion sort when the number of elements is below some threshold. This combines the good parts of the three algorithms, with practical performance comparable to quicksort on typical data sets and worst-case O(n log n) runtime due to the heapsort. Since the three algorithms it uses are comparison sorts, it is also a comparison sort.
Introsort:
Introsort was invented by David Musser in Musser (1997), in which he also introduced introselect, a hybrid selection algorithm based on quickselect (a variant of quicksort), which falls back to median of medians and thus provides worst-case linear complexity, which is optimal. Both algorithms were introduced with the purpose of providing generic algorithms for the C++ Standard Library which had both fast average performance and optimal worst-case performance, thus allowing the performance requirements to be tightened. Introsort is in-place and not stable.
Pseudocode:
If a heapsort implementation and partitioning functions of the type discussed in the quicksort article are available, introsort can be described succinctly as:

procedure sort(A : array):
    maxdepth ← ⌊log2(length(A))⌋ × 2
    introsort(A, maxdepth)

procedure introsort(A, maxdepth):
    n ← length(A)
    if n < 16:
        insertionsort(A)
    else if maxdepth = 0:
        heapsort(A)
    else:
        p ← partition(A)  // assume this function does pivot selection, p is the final position of the pivot
        introsort(A[1:p-1], maxdepth - 1)
        introsort(A[p+1:n], maxdepth - 1)

The factor 2 in the maximum depth is arbitrary; it can be tuned for practical performance. A[i:j] denotes the array slice of items i to j including both A[i] and A[j]. The indices are assumed to start with 1 (the first element of the A array is A[1]).
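For concreteness, here is a runnable Python sketch of the same scheme. The helper names and the simple last-element Lomuto partition are illustrative choices, not Musser's formulation, which assumes a pivot-selecting partition such as median-of-3.

```python
import math

def introsort(a):
    """Sort list a in place: quicksort with a depth budget, falling back
    to heapsort, and insertion sort for small ranges."""
    maxdepth = 2 * math.floor(math.log2(len(a))) if a else 0
    _introsort(a, 0, len(a) - 1, maxdepth)

def _introsort(a, lo, hi, maxdepth):
    if hi - lo + 1 < 16:
        _insertion_sort(a, lo, hi)
    elif maxdepth == 0:
        _heapsort(a, lo, hi)
    else:
        p = _partition(a, lo, hi)
        _introsort(a, lo, p - 1, maxdepth - 1)
        _introsort(a, p + 1, hi, maxdepth - 1)

def _insertion_sort(a, lo, hi):
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def _partition(a, lo, hi):
    # Lomuto partition around the last element (illustrative only; a real
    # implementation would pick the pivot by median-of-3 or better).
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def _heapsort(a, lo, hi):
    # Heapsort restricted to the slice a[lo..hi] via an in-place max-heap.
    n = hi - lo + 1

    def sift_down(root, end):
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[lo + child] < a[lo + child + 1]:
                child += 1
            if a[lo + root] < a[lo + child]:
                a[lo + root], a[lo + child] = a[lo + child], a[lo + root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[lo], a[lo + end] = a[lo + end], a[lo]
        sift_down(0, end - 1)
```

Calling introsort(xs) sorts the list in place; the depth budget of 2×⌊log2 n⌋ mirrors the pseudocode above, and the worst case remains O(n log n) because an exhausted budget falls through to heapsort.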
Analysis:
In quicksort, one of the critical operations is choosing the pivot: the element around which the list is partitioned. The simplest pivot selection algorithm is to take the first or the last element of the list as the pivot, causing poor behavior for the case of sorted or nearly sorted input. Niklaus Wirth's variant uses the middle element to prevent these occurrences, degenerating to O(n²) for contrived sequences. The median-of-3 pivot selection algorithm takes the median of the first, middle, and last elements of the list; however, even though this performs well on many real-world inputs, it is still possible to contrive a median-of-3 killer list that will cause dramatic slowdown of a quicksort based on this pivot selection technique.
Analysis:
Musser reported that on a median-of-3 killer sequence of 100,000 elements, introsort's running time was 1/200 that of median-of-3 quicksort. Musser also considered the effect on caches of Sedgewick's delayed small sorting, where small ranges are sorted at the end in a single pass of insertion sort. He reported that it could double the number of cache misses, but that its performance with double-ended queues was significantly better and should be retained for template libraries, in part because the gain in other cases from doing the sorts immediately was not great.
Implementations:
Introsort or some variant is used in a number of standard library sort functions, including some C++ sort implementations.
The June 2000 SGI C++ Standard Template Library stl_algo.h implementation of unstable sort uses the Musser introsort approach with the recursion depth to switch to heapsort passed as a parameter, median-of-3 pivot selection and the Knuth final insertion sort pass for partitions smaller than 16.
Implementations:
The GNU Standard C++ library is similar: it uses introsort with a maximum depth of 2×log2 n, followed by an insertion sort on partitions smaller than 16. LLVM libc++ also uses introsort with a maximum depth of 2×log2 n; however, the size limit for insertion sort is different for different data types (30 if swaps are trivial, 6 otherwise), and arrays with sizes up to 5 are handled separately. Kutenin (2022) provides an overview of some changes made by LLVM, with a focus on the 2022 fix for quadraticness. The Microsoft .NET Framework Class Library, starting from version 4.5 (2012), uses introsort instead of simple quicksort. Go uses a modification of introsort: for slices of 12 or fewer elements it uses insertion sort, and for larger slices it uses pattern-defeating quicksort and the more advanced median of three medians for pivot selection. Prior to version 1.19 it used shell sort for small slices.
Implementations:
Java, starting from version 14 (2020), uses a hybrid sorting algorithm that uses merge sort for highly structured arrays (arrays that are composed of a small number of sorted subarrays) and introsort otherwise to sort arrays of ints, longs, floats and doubles.
Variants:
pdqsort
Pattern-defeating quicksort (pdqsort) is a variant of introsort incorporating the following improvements:
median-of-three pivoting,
the "BlockQuicksort" partitioning technique to mitigate branch misprediction penalties,
linear-time performance for certain input patterns (adaptive sort),
element shuffling on bad cases before trying the slower heapsort.
pdqsort is used by Rust, GAP, and the C++ library Boost. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Molded interconnect device**
Molded interconnect device:
A molded interconnect device (MID) is an injection-molded thermoplastic part with integrated electronic circuit traces. The use of high temperature thermoplastics and their structured metallization opens a new dimension of circuit carrier design to the electronics industry. This technology combines plastic substrate/housing with circuitry into a single part by selective metallization.
Applications:
Key markets for MID technology are consumer electronics, telecommunications, automotive and medical. A very common application for MIDs is integrated antennas in cellphones and other mobile devices, including laptops and netbooks.
Manufacturing methods:
Molded interconnect devices are typically manufactured with the following technologies:
Laser Direct Structuring (LDS)
The LDS process uses a thermoplastic material doped with a (non-conductive) metallic inorganic compound that is activated by means of a laser. The basic component is single-component injection molded, with practically no restrictions in terms of 3D design freedom. A laser then writes the course of the later circuit trace on the plastic. Where the laser beam hits the plastic, the metal additive forms a micro-rough track. The metal particles of this track form the nuclei for the subsequent metallization. In an electroless copper bath, the conductor path layers arise precisely on these tracks. Successive layers of copper, nickel and a gold finish can be built up in this way.
Manufacturing methods:
The LDS process is characterized by:
single-component injection molding
a wide range of available materials
full three-dimensionality
flexibility: for a changed routing of traces, only new control data have to be transmitted to the laser unit; thus different functional components can be produced from one basic unit
precision: the finest conductor paths, with a width of < 80 µm, are possible
prototyping: LDS coating of any part enables test specimens
Laser Direct Structuring was invented at Hochschule Ostwestfalen-Lippe, University of Applied Sciences in Lemgo, Germany, between 1997 and 2001. LDS technology was developed in a research cooperation with the former LPKF Limited, patented by the inventors and first exclusively licensed to LPKF. In 2002 the patents concerning LDS technology were transferred to LPKF Laser & Electronics AG.
Manufacturing methods:
The major drawbacks of LDS are the need for the expensive metallic inorganic compound throughout the entire mold, the necessity of a chemical plating process, and a very rough surface of the plated layer, which makes connectors difficult to achieve. The created circuitry is usually limited to a single layer of wiring without crossings.
Manufacturing methods:
Printed Electronics
Selective metallization can be achieved by printing conductive traces (printed electronics) onto the surface of the thermoplastic part. Aerosol jet, inkjet, or screen printing may be used, with aerosol jet printing delivering the most reliable results on an arbitrarily shaped mold. The main advantages of printed electronics include:
any polymer can be used for injection molding
no metallic inorganic compound is necessary, which reduces cost
a large variety of conductive coating materials, including silver, copper, gold, platinum, graphite, and conductive polymers
thickness can be tightly controlled
direct deposition without plating is possible
more complex circuitry is possible, as isolation layers, dielectrics, and other materials can be deposited in multiple layers
higher line precision, down to 10 µm
higher surface smoothness
Currently, printed electronics is still a research and development area, but an increasing number of companies have started production of smartphone antennas and are substituting it for LDS on other injection-molded parts.
Manufacturing methods:
The major drawback is a low level of standardization because of the versatility of the technique.
Manufacturing methods:
Two-shot molding
Two-shot molding is an injection molding process using two different resins, only one of which is platable. Typically the platable substrate is ABS and the non-platable substrate is polycarbonate. In a two-shot component, these are then submitted to an electroless plating process, where the butadiene is used to chemically roughen the surface and allow adhesion of a copper primary layer. The plating chemistry can be controlled to prevent the roughening of the polycarbonate portions of the component. While not commonly found outside of cellphone antenna production, this technology is public and widely available. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spice bag**
Spice bag:
A spice bag (or spicebag, spicy bag, spice box or spicy box; Irish: mála spíosrach) is a fast food dish, popular in most of Ireland and "inspired" by Chinese cuisine. The dish is most commonly sold in Chinese takeaways in Ireland. Typically, a spice bag consists of deep-fried salt and chilli chips, salt and chilli chicken (usually shredded, occasionally balls/wings), red and green peppers, sliced chili peppers, fried onions, and a variety of spices. A vegetarian or vegan option is often available, in which deep-fried tofu takes the place of the shredded chicken. It is sometimes accompanied by a tub of curry sauce. Available in Chinese takeaways and fish and chip shops since the 2010s, the dish has developed something of a cult following, and a Facebook group created as a tribute to the dish has attracted over 17,000 members. It is often cited as a popular "hangover cure". It was voted 'Ireland's Favourite Takeaway Dish' in the 2020 Just Eat National Takeaway Awards in the Republic of Ireland, while in 2021 Deliveroo Ireland started a petition to create a "National Spice Bag Day". The dish is not as common in Northern Ireland.
History:
According to RTÉ reporter Liam Geraghty, the dish was supposedly created in 2010 by The Sunflower Chinese takeaway in Templeogue, Dublin, with the first spice bag sold on Just Eat in 2012. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sequent Computer Systems**
Sequent Computer Systems:
Sequent Computer Systems was a computer company that designed and manufactured multiprocessing computer systems. They were among the pioneers in high-performance symmetric multiprocessing (SMP) open systems, innovating in both hardware (e.g., cache management and interrupt handling) and software (e.g., read-copy-update).
Sequent Computer Systems:
Through a partnership with Oracle Corporation, Sequent became a dominant high-end UNIX platform in the late 1980s and early 1990s. Later they introduced a next-generation high-end platform for UNIX and Windows NT based on a non-uniform memory access architecture, NUMA-Q. As hardware prices fell in the late 1990s, and Intel shifted their server focus to the Itanium processor family, Sequent joined the Project Monterey effort in October 1998, which aimed to move a standard Unix to several new platforms. In July 1999 Sequent agreed to be acquired by IBM. At the time, Sequent's CEO said its technology would "find its way through IBM's entire product field" and IBM announced it would "both sell Sequent machines, and fold Sequent's technology...into its own servers", but by May 2002 a decline in sales of the models acquired from Sequent, among other reasons, led to the retirement of Sequent-heritage products. Vestiges of Sequent's innovations live on in the form of data clustering software from PolyServe (subsequently acquired by HP), various projects within OSDL, IBM contributions to the Linux kernel, and claims in the SCO v. IBM lawsuit.
History:
Originally named Sequel, Sequent was formed in 1983 when a group of seventeen engineers and executives left Intel after the failed iAPX 432 "mainframe on a chip" project was cancelled; they were joined by one non-Intel employee. They started Sequent to develop a line of SMP computers, then considered one of the up-and-coming fields in computer design.
History:
Balance
Sequent's first computer systems were the Balance 8000 (released in 1984) and Balance 21000 (released in 1986). Both models were based on 10 MHz National Semiconductor NS32032 processors, each with a small write-through cache connected to a common memory to form a shared memory system. The Balance 8000 supported up to 6 dual-processor boards for a total maximum of 12 processors. The Balance 21000 supported up to 15 dual-processor boards for a total maximum of 30 processors. The systems ran a modified version of 4.2BSD Unix the company called DYNIX, for DYNamic unIX. The machines were designed to compete with the DEC VAX-11/780, with all of their inexpensive processors available to run any process. In addition, the system included a series of libraries that could be used by programmers to develop applications that could use more than one processor at a time.
History:
Symmetry
Their next series was the Intel 80386-based Symmetry, released in 1987. Various models supported between 2 and 30 processors, using a new copy-back cache and a wider 64-bit memory bus. 1991's Symmetry 2000 models added multiple SCSI boards, and were offered in versions with from one to six Intel 80486 processors. The next year they added the VMEbus-based Symmetry 2000/x50 with faster CPUs.
History:
The late 1980s and early 1990s saw big changes on the software side for Sequent. DYNIX was replaced by DYNIX/ptx, which was based on a merger of AT&T Corporation's UNIX System V and 4.2BSD. This was also the period in which Sequent's high-end systems became particularly successful, due to a close working relationship with Oracle, specifically around their high-end database servers. In 1993 they added the Symmetry 2000/x90 along with their ptx/Cluster software, which added various high availability features and introduced custom support for Oracle Parallel Server.
History:
In 1994 Sequent introduced the Symmetry 5000 series models SE20, SE60 and SE90, which used 66 MHz Pentium CPUs in systems from 2 to 30 processors. The next year they expanded that with the SE30/70/100 lineup using 100 MHz Pentiums, and then in 1996 with the SE40/80/120 with 166 MHz Pentiums. A variant of the Symmetry 5000, the WinServer 5000 series, ran Windows NT instead of DYNIX/ptx.
History:
NUMA
Recognizing the increase in competition for SMP systems after having been early adopters of the architecture, and the increasing integration of SMP technology into microprocessors, Sequent sought its next source of differentiation. They began investing in the development of a system based on a cache-coherent non-uniform memory architecture (ccNUMA) and leveraging the Scalable Coherent Interconnect. NUMA distributes memory among the processors, avoiding the bottleneck that occurs with a single monolithic memory. Using NUMA would allow their multiprocessor machines to generally outperform SMP systems, at least when the tasks can be executed close to their memory, as is the case for servers, where tasks typically do not share large amounts of data.
History:
In 1996 they released the first of a new series of machines based on this new architecture. Known internally as STiNG, an abbreviation for Sequent: The Next Generation (with Intel inside), it was productized as NUMA-Q and was the last of the systems released before the company was purchased by IBM for over $800 million.
IBM then started Project Monterey with Santa Cruz Operation, intending to produce a NUMA-capable standardized Unix running on IA-32, IA-64 and POWER and PowerPC platforms. This project later fell through as both IBM and SCO turned to the Linux market, but is the basis for "the new SCO"'s SCO v. IBM Linux lawsuit.
History:
IBM purchase and disappearance
With their future product strategy in tatters, it appeared Sequent had little future standing alone, and it was purchased by IBM in 1999 for $810 million. IBM released several x86 servers with a NUMA architecture. The first was the x440 in August 2002, with a follow-on x445 in 2003. In 2004, an Itanium-based x455 was added to the NUMA family. During this period, NUMA technology became the basis for IBM's extended X-Architecture (eXA, which could also stand for enterprise X-Architecture). As of 2011, this chipset is in its fifth generation, known as eX5 technology. It now falls under the brand IBM System x.
History:
According to a May 30, 2002 article in the Wall Street Journal (WSJ) entitled "Sequent Deal Serves Hard Lesson for IBM": When IBM bought Sequent, ...it [Sequent] lacked the size and resources to compete with Sun and Hewlett-Packard Co. in the Unix server market....
In 1999, IBM had problems of its own with an aged and high-priced line of servers, particularly for its version of Unix known as AIX. It also faced huge losses in personal computers and declining sales in its cash-cow mainframe line.
Detailed model descriptions:
The following is a more detailed description of the first two generations of Symmetry products, released between 1987 and 1990.
Detailed model descriptions:
Symmetry S-series
Symmetry S3
The S3 was the low-end platform based on commodity PC components running a fully compatible version of DYNIX 3. It featured a single 33 MHz Intel 80386 processor, up to 40 megabytes of RAM, up to 1.8 gigabytes of SCSI-based disk storage, and up to 32 direct-connected serial ports.
Symmetry S16
The S16 was the entry-level multiprocessing model, which ran DYNIX/ptx. It featured up to six 20 MHz Intel 80386 processors, each with a 128 kilobyte cache. It also supported up to 80 MB of RAM, up to 2.5 GB of SCSI-based disk storage, and up to 80 direct-connected serial ports.
Symmetry S27
The S27 ran either DYNIX/ptx or DYNIX 3. It featured up to ten 20 MHz Intel 80386 processors, each with a 128 KB cache. It also supported up to 128 MB of RAM, up to 12.5 GB of disk storage, and up to 144 direct-connected serial ports.
Symmetry S81
The S81 ran either DYNIX/ptx or DYNIX 3. It featured up to 30 20 MHz Intel 80386 processors, each with a 128 KB cache. It also supported up to 384 MB of RAM, up to 84.8 GB of disk storage, and up to 256 direct-connected serial ports.
Detailed model descriptions:
Symmetry 2000 series
Symmetry 2000/40
The S2000/40 was the low-end platform based on commodity PC components running a fully compatible version of DYNIX/ptx. It featured a single 33 MHz Intel 80486 processor, up to 64 megabytes of RAM, up to 2.4 gigabytes of SCSI-based disk storage, and up to 32 direct-connected serial ports.
Symmetry 2000/200
The S2000/200 was the entry-level multiprocessing model, which ran DYNIX/ptx. It featured up to six 25 MHz Intel 80486 processors, each with a 512 kilobyte cache. It also supported up to 128 MB of RAM, up to 2.5 GB of SCSI-based disk storage, and up to 80 direct-connected serial ports.
Symmetry 2000/400
The S2000/400 ran either DYNIX/ptx or DYNIX 3. It featured up to ten 25 MHz Intel 80486 processors, each with a 512 KB cache. It also supported up to 128 MB of RAM, up to 14.0 GB of disk storage, and up to 144 direct-connected serial ports.
Symmetry 2000/700
The S2000/700 ran either DYNIX/ptx or DYNIX 3. It featured up to 30 25 MHz Intel 80486 processors, each with a 512 KB cache. It also supported up to 384 MB of RAM, up to 85.4 GB of disk storage, and up to 256 direct-connected serial ports. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CoB—CoM heterodisulfide reductase**
CoB—CoM heterodisulfide reductase:
In enzymology, a CoB—CoM heterodisulfide reductase (EC 1.8.98.1) is an enzyme that catalyzes the chemical reaction
coenzyme B + coenzyme M + methanophenazine ⇌ N-{7-[(2-sulfoethyl)dithio]heptanoyl}-O3-phospho-L-threonine + dihydromethanophenazine
The three substrates of this enzyme are coenzyme B, coenzyme M, and methanophenazine; its two products are N-{7-[(2-sulfoethyl)dithio]heptanoyl}-O3-phospho-L-threonine and dihydromethanophenazine.
This enzyme belongs to the family of oxidoreductases, specifically those acting on a sulfur group of donors with other, known, acceptors. The systematic name of this enzyme class is coenzyme B:coenzyme M:methanophenazine oxidoreductase. Other names in common use include heterodisulfide reductase, and soluble heterodisulfide reductase. This enzyme participates in folate biosynthesis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Heat content (fuel)**
Heat content (fuel):
In the U.S. energy industry, heat content is the amount of heat energy that will be released by combustion of a unit quantity of a fuel or by transformation of another energy form. For example, fossil fuels are rated by heat content, with a distinction made between gross heat content (which includes the heat energy used to vaporize moisture in the fuel) and net heat content (which excludes the heat energy used to vaporize moisture in the fuel). The term is also sometimes applied to other energy forms, such as the heat content of a kilowatt-hour of electricity or a pound of steam.
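As a simplified sketch of that distinction (the exact accounting varies by standard), the two ratings differ by the latent heat of the water vapor formed, where m_w is the mass of water vaporized per unit of fuel and h_fg ≈ 2.26 MJ/kg is the latent heat of vaporization of water:

```latex
Q_{\text{gross}} \approx Q_{\text{net}} + m_w \, h_{fg}
```

For natural gas, for instance, the gross rating typically runs roughly 10% above the net rating because of this term. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |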
**Depolymerization**
Depolymerization:
Depolymerization (or depolymerisation) is the process of converting a polymer into a monomer or a mixture of monomers. This process is driven by an increase in entropy.
Ceiling temperature:
The tendency of polymers to depolymerize is indicated by their ceiling temperature. At this temperature, the enthalpy of polymerization exactly balances the entropy gained by converting a large molecule into monomers, so the free energy change of polymerization is zero. Above the ceiling temperature, the rate of depolymerization is greater than the rate of polymerization, which inhibits the formation of the given polymer.
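In symbols, with ΔH_p and ΔS_p the enthalpy and entropy of polymerization (both typically negative), the ceiling temperature follows from setting the Gibbs free energy of polymerization to zero:

```latex
\Delta G_p = \Delta H_p - T\,\Delta S_p = 0
\quad\Longrightarrow\quad
T_c = \frac{\Delta H_p}{\Delta S_p}
```

Above T_c the −TΔS_p term outweighs ΔH_p, ΔG_p becomes positive, and depolymerization is thermodynamically favored.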
Applications:
Depolymerization is a very common process. Digestion of food involves depolymerization of macromolecules, such as proteins. It is relevant to polymer recycling. Sometimes the depolymerization is well behaved, and clean monomers can be reclaimed and reused for making new plastic. In other cases, such as polyethylene, depolymerization gives a mixture of products: ethylene, propylene, isobutylene, 1-hexene and heptane. Of these, only ethylene can be used for polyethylene production, so the other gases must be turned into ethylene, sold, or otherwise destroyed or disposed of by turning them into other products. Depolymerization is also related to the production of chemicals and fuels from biomass. In this case, reagents are typically required. A simple case is the hydrolysis of celluloses to glucose by the action of water. Generally this process requires an acid catalyst:
H(C6H10O5)nOH + (n - 1) H2O → n C6H12O6 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OKFOL**
OKFOL:
OKFOL is an explosive, used in a variety of applications. It is particularly suitable for use in shaped charges. It normally consists of 95% HMX phlegmatized with 5% wax. It has a density of 1.761 to 1.813 grams per cubic centimetre, explosive velocity of 8,670 metres per second and a TNT equivalent of 1.70. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HMGXB4**
HMGXB4:
HMG-box containing 4 is a protein that in humans is encoded by the HMGXB4 gene.
Function:
High mobility group (HMG) proteins are nonhistone chromosomal proteins. See HMG2 (MIM 163906) for additional information on HMG proteins. [Supplied by OMIM, Nov 2010] | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scanning Flow Cell**
Scanning Flow Cell:
Scanning Flow Cell (SFC) is an electrochemical technique based on the principle of the channel electrode. The electrolyte flows continuously over a substrate that is introduced externally on a translation stage, in contrast to the reference and counter electrodes, which are integrated in the main channel or placed in side compartments connected with a salt bridge. The SFC utilizes a V-formed geometry with a small opening on the bottom (in the range of 0.2-1 mm diameter) used to establish contact with the sample. The convective flow is sustained also in the non-contact mode of operation, which allows easy exchange of the working electrode.
Application:
The SFC is employed for combinatorial and high-throughput electrochemical studies. Due to its non-homogeneous flow profile distribution, it is currently used for comparative kinetic studies. The SFC is predominantly used for coupling electrochemical measurements with post-analytical techniques like UV-Vis, ICP-MS, ICP-OES, etc. This makes possible a direct correlation of the electrochemical and spectrometric signals. This methodology has been successfully applied to corrosion studies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DVBViewer**
DVBViewer:
DVBViewer is proprietary, commercial software for viewing and recording DVB TV and radio using a TV tuner card or box, and a media center for viewing music, video and pictures. Among its other features are an Electronic Program Guide (EPG), remote control support, on-screen display, teletext, time shifting and picture-in-picture. Besides the support for BDA adapters, there is also the ability to use the Hauppauge MediaMVP with DVBViewer. The software also allows Unicable, DiSEqC and the usage of CI modules with most adapters. The worldwide charge for the application is 15 euros or 22 US dollars. Additional functions such as video on demand, TV series and movie management, home network distribution of TV to network devices including iPod Touch, iPhone, iPad and Android devices (with additional remote control features), and a recording service with web interface are provided by free plugins. A plug-in offering MHEG-5 and HbbTV support is available for a license fee of 12 euros.
DVBViewer:
DVBViewer was the first Windows application with HDTV support, right after Euro 1080 started its transmission via satellite. It was also one of the first alternative applications supporting the DVB-S2 broadcast standard and compatible with most tuner cards supporting this standard via BDA, a standardized driver interface for digital video capture developed by Microsoft. It therefore presented itself as a valuable replacement for the manufacturers' bundled software, which often provides only fundamental functionality.
DVBViewer:
The current version allows the use of Sat>IP on both the client and the server side.
Editions:
There are three editions of DVBViewer available: an OEM edition; a commercial one, DVBViewer Pro; and an alternative edition, DVBViewer GE, mainly aimed at German-speaking customers of the commercial edition. An OEM edition is included with products by various manufacturers:
TechniSat DVBViewer TE & TE2
TerraTec DVBViewer Terratec Edition
TechnoTrend TT Viewer
Digital Everywhere
DVBSky
Turbosight
DVBShop
RF Central
Reviews:
A comprehensive review (in German) of the software can be found at DVBMagic, or in English in the UK's best-selling computer magazine, Computeractive. DVBViewer is also mentioned in 16 hardware reviews on TechRadar, Future plc's UK tech magazine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Insulin lispro**
Insulin lispro:
Insulin lispro, sold under the brand name Humalog among others, is a modified type of medical insulin used to treat type 1 and type 2 diabetes. It is used by injection under the skin or within an insulin pump. Onset of effects typically occurs within 30 minutes and lasts about 5 hours. Often a longer-acting insulin like insulin NPH is also needed. Common side effects include low blood sugar. Other serious side effects may include low blood potassium. Use in pregnancy and breastfeeding is generally safe. It works the same as human insulin by increasing the amount of glucose that tissues take in and decreasing the amount of glucose made by the liver. Insulin lispro was first approved for use in the United States in 1996. It is a manufactured form of human insulin in which the order of two amino acids has been switched. In 2020, it was the 71st most commonly prescribed medication in the United States, with more than 10 million prescriptions.
Medical uses:
Insulin lispro is used to treat people with type 1 diabetes or type 2 diabetes. People doing well on short-acting insulin should not routinely be changed to insulin lispro, but may benefit from some advantages like flexibility and responsiveness.
Side effects:
Common side effects include skin irritation at the site of injection, hypoglycemia, hypokalemia, and lipodystrophy. Other serious side effects include anaphylaxis, and hypersensitivity reactions.
Mechanism of action:
Through recombinant DNA technology, the final lysine and proline residues on the C-terminal end of the B-chain are reversed. This modification does not alter receptor binding, but blocks the formation of insulin dimers and hexamers. This allows larger amounts of active monomeric insulin to be immediately available for postprandial injections.
Chemistry:
It is a manufactured form of human insulin where the amino acids lysine and proline have been switched at the end of the B chain of the insulin molecule. This switch of amino acids mimics Insulin-like growth factor 1 which also has lysine (K) and proline (P) in that order at positions 27 and 28.
History:
Insulin lispro (brand name Humalog) was granted marketing authorization in the European Union in April 1996, and it was approved for use in the United States in June 1996. Insulin lispro (brand name Liprolog) was granted marketing authorization in the European Union in May 1997, and again in August 2001. Combination drugs combining insulin lispro and other forms of insulin were approved for use in the United States in December 1999. Insulin lispro Sanofi was granted marketing authorization as a biosimilar in the European Union in July 2017. Insulin lispro injection (brand name Admelog) was approved for use in the United States in December 2017. In January 2020, the Committee for Medicinal Products for Human Use (CHMP) in the European Union recommended granting of a marketing authorization for insulin lispro acid (brand name Lyumjev) for the treatment of diabetes mellitus in adults. Insulin lispro (Lyumjev) was approved for use in the European Union in March 2020, and in the United States on 18 June 2020, as reported by Medscape.
Society and culture:
Economics
In the United States, the price of a vial of Humalog increased from US$35 in 2001 to $234 in 2015, or $10.06 and $29.36 per 100 units respectively. In April 2019, Eli Lilly and Company announced they would produce a version selling for $137.35 per vial. The chief executive said that this was a contribution "to fix the problem of high out-of-pocket costs for Americans living with chronic conditions", but Patients for Affordable Drugs Now said it was just a public relations move, as "other countries pay $20 for a vial of insulin." In March 2023, Lilly announced a program capping their insulin prices at $35 per month. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Attunement**
Attunement:
Attunement was the early term adopted by practitioners of the pseudoscientific discipline of energy medicine originally developed by Lloyd Arthur Meeker (1907–1954) and his colleagues. Meeker taught and practiced Attunement as a central feature of his spiritual teaching and ministry, Emissaries of Divine Light. Attunement is taught as a personal spiritual practice and as a healing modality offered through the hands. Emissaries of Divine Light believe that Attunement is a pivotal factor in the conscious evolution of humanity. Like qigong, reiki, and therapeutic touch, attunement is a putative practice as defined by the United States National Center for Complementary and Integrative Health (NCCIH), lacking published scientific study of its effectiveness. Attunement practitioners and clients rely on personal and anecdotal experience to promote it.
History:
Beginnings
Lloyd Arthur Meeker shared the first Attunement with Rudolph Plagge in Wichita, Kansas, in 1929, and developed the teaching and practice of Attunement with colleagues until his death in 1954. From September 14 to 16, 1932, Meeker had a spiritual awakening experience that he described as a “heavenly ordination.” He marked that experience as the initiation of Emissaries of Divine Light. That same year he instituted a series of energy medicine experiments. Meeker reported that he could stand across the room from the client and the client could feel the intensification of life force. He also reported excellent results when his hands were one to six inches from the client. Lloyd Arthur Meeker wrote and lectured using the name Uranda, which was how he was known to his followers. From 1935 to 1945, Meeker traveled across the United States and Canada, establishing centers for healing and spiritual teaching for varying periods of time in Atascadero, Oakland, Burbank and Long Beach, California; Buffalo, New York; Grand Forks, Iowa; and Loveland, Colorado. In December 1945 Meeker established his headquarters at Sunrise Ranch in Loveland, where Attunement continues to be taught and practiced.
History:
The Role of G-P-C Chiropractors
The G-P-C movement played a significant role in the development of Attunement. G-P-C stood for God – Patient – Chiropractor. It was a no-fee system of service that George Shears created in the late 1930s after he himself had a severely debilitating ruptured disk and vowed to offer his services on a donation basis. Shears had been a Major League Baseball pitcher in 1912, and then a graduate of the Palmer School of Chiropractic in 1917. He experimented with "no-force" chiropractic adjustments in which he believed it was the healing energy through his hands that brought positive results, shown through x-rays. The G-P-C movement saw the relationship between the chiropractor and the patient as the base of a triangle with God at the apex. Meeker eventually embraced this model for the healing relationship. In 1949, Albert Ackerley, a G-P-C chiropractor in Toronto, Ontario, Canada, was introduced to Lloyd Arthur Meeker's writings. In June 1949, when Ackerley was preparing to offer an adjustment to his patient, he saw that the patient's spine had aligned before he had given the adjustment. He believed that this result was a consequence of the flow of subtle energy between himself and the patient, rather than any physical intervention. Ackerley met Meeker in July 1949 and began to practice Attunement under his tutelage. Up to this point, Meeker had referred to Attunements as “treatments.” It was Albert Ackerley who named those treatments “Attunements.” With Lloyd Meeker's urging, Ackerley began to experiment with long-distance Attunements in which the person receiving the Attunement was not in the physical presence of the practitioner. Albert Ackerley and G-P-C President, Virgil Givens, were both prosecuted due to their practice of energy medicine, but continued to practice nonetheless. In May 1950, Lloyd Arthur Meeker met George Shears. Meeker's meeting with Shears was followed by G-P-C meetings at a chiropractic convention in August 1950 in Davenport, Iowa, and then a G-P-C conference in Huntingburg, Indiana, which was attended by Meeker. Following these events, about twenty-five chiropractors attended a G-P-C convention from September 2 through 8 of that year at Sunrise Ranch. The prospect of joining with Meeker and Emissaries of Divine Light raised suspicion and concerns among the G-P-C chiropractors. Nonetheless, at the G-P-C convention in the home of George Shears in Huntingburg, Indiana, on February 24 and 25, 1951, the G-P-C board of directors voted to cooperate with the Emissaries to establish a G-P-C Servers Training School at Sunrise Ranch. Lloyd Arthur Meeker led three six-month G-P-C Servers Training School sessions at Sunrise Ranch from 1952 to 1954. His classes from the 1952 session were transcribed and published as The Divine Design of Man, #1 and #2. The audio recordings and the transcripts of Meeker's classes from the 1953 and 1954 sessions are still extant. The sessions included Attunement technique, nutrition, psychology and a broad spectrum of spiritual teachings. In August 1954, Lloyd Arthur Meeker, his wife, Kathy Meeker, Albert Ackerley and two children died in the crash of Meeker's small plane in San Francisco Bay. A close associate of Meeker's, Martin Cecil, assumed responsibility for the leadership of Emissaries of Divine Light and for carrying forward Meeker's Attunement work.
With assistance from G-P-C chiropractors, James Wellemeyer and Bill Bahan, and from Roger de Winton, Alan Hammond and others, Martin Cecil continued the Servers Training School at Sunrise Ranch and the teaching of Attunement. George Shears eventually moved to Sunrise Ranch in 1968 where he practiced Attunement until he died in 1978.
History:
Development
As Emissaries of Divine Light grew in the 1960s, '70s and '80s, so did the teaching and practice of Attunement. Martin Cecil emphasized in his teaching that the basis of Attunement was a spiritual practice. While many of the early Attunement practitioners were chiropractors, lay people became increasingly active in the practice. Building on the early work of Lloyd Arthur Meeker, Attunement evolved to include groups of people practicing it together. In 1993, a World Blessing time was established for practitioners to share a time of collective Attunement and healing prayer. In the '80s and '90s, the teaching and practice of long-distance Attunement was developed further. Following Martin Cecil's death in 1988, his son, Michael Cecil, became the Spiritual Leader of the Emissaries. In 1996, Emissaries of Divine Light formed an Attunement Guild, which established standards for the teaching and certification of Attunement practitioners. A group of Attunement practitioners, including Chris Jorgensen and Andrew Shier, formed the International Association of Attunement Practitioners (IAAP) in 1999. IAAP developed and taught the practice of Attunement separately from the organization of Emissaries of Divine Light. Roger de Winton continued his Attunement trainings through Attunement Intensives offered at Sunrise Ranch. He also continued his work of long-distance Attunement until his death in 2001. In 1996, Michael Cecil left Emissaries of Divine Light to continue his own work, which includes Attunement through The Ashland Institute. A group of trustees assumed the leadership of Emissaries of Divine Light upon Michael Cecil's departure. In August 2004, the trustees of Emissaries of Divine Light named David Karchere as the leader of the global network. Since becoming the leader of the Emissaries, Karchere has developed programs, including Life Destiny Immersion and Journey into the Fire, that are designed to assist people to transform the spiritual and emotional factors that block the experience of Attunement. In 2010, with other Attunement practitioners, David Karchere founded the Attunement School at Sunrise Ranch.
Philosophy:
Attunement is based on Lloyd Arthur Meeker's vision that the human body is designed to be the temple of God. The foundational principle underlying Attunement is what Meeker named The One Law, or the Law of Cause and Effect. Emissaries of Divine Light teach that the causative factor in spiritual regeneration is the universal power and intelligence within all people, and that through response and opening to that power and intelligence, people experience healing. Attunement practitioners believe that positive shifts in consciousness release a healing energy through a person's body, mind and emotions. Traditionally, the Attunement practitioner is referred to as a server and the recipient is referred to as a servee. Attunement servers believe they transmit universal life energy through their hands to the servee. The primary connecting points on the servee are the endocrine glands. Attunement servers teach that the endocrine glands are portals for universal life energy that operates through the physical body, and through the mental and emotional function of the individual, and that the servee has the opportunity to open more fully to the life energy within them through receiving an Attunement. Emissaries of Divine Light hold that the origin of universal life energy is divine in nature and that the core reality of all people is divine. The goal of Attunement is to increase the energetic flow while removing blockages to that flow so that a person's core reality can emerge. Lloyd Meeker taught that the human connection to universal life energy relies on pneumaplasm, which was his name for the aura of subtle energy, or etheric body, surrounding the physical body. Attunement practitioners believe that pneumaplasm is generated when universal life energy flows through a person, and that the clarity of the pneumaplasmic body depends on the clarity of that energy flow. Attunement practitioners focus on clarifying and enriching the pneumaplasm associated with the endocrine glands and the anatomical systems of the body. Practitioners believe that the endocrine glands translate seven aspects of the universal life energy into the human experience. They name these as the Seven Spirits.
Philosophy:
Attunement practitioners relate these Seven Spirits to the Seven Spirits of God referenced in the book of Revelation in the Bible. Some Attunement practitioners correlate the seven endocrine glands with seven chakras.
Technique:
At the core of the teaching of the technique is the establishment of an energetic circuit between the practitioner (server) and the client (servee). Practitioners seek to establish that circuit by the radiant extension of life energy through the dominant hand of the practitioner to the gland or organ of the client, and the receiving of life energy through the opposite hand from a corresponding contact point in the body. Meeker taught that the first step in the Attunement process was the alignment of the cervical vertebrae by the radiation of healing energy through the hands on either side of the neck. Contemporary Attunement practitioners continue to teach Attunement technique that begins and ends with an Attunement of the cervical vertebrae. Often, the cervical Attunement is followed by Attunement of the endocrine glands and some of the major organs of the body.
Spiritual practice:
As a spiritual practice, Attunement is intended to connect a person more closely to their spiritual source and to open the flow of life current. The practice includes conscious attention to the quality of spirit expressed through the practitioner in the daily living of life, and specific periods of meditation at the beginning and ending of each day, taught as Sanctification in the Evening and the Morning. A central aspect of Attunement as a spiritual practice is referred to by Emissaries of Divine Light as spiritual centering, which they define as a daily practice of opening thoughts and emotions to the spiritual. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Altiplano-Puna Magma Body**
Altiplano-Puna Magma Body:
The Altiplano-Puna Magma Body (APMB) is a magma body located within the Altiplano-Puna plateau approximately 10-20 km beneath the Altiplano-Puna Volcanic Complex (APVC) in the Central Andes. High-resolution tomography shows that this magma body has a diameter of ~200 km, a depth of 14-20 km, and a total volume of ~500,000 km3, making it the largest known active magma body on Earth. Thickness estimates for the APMB vary, with some as low as 1 km, others around 10-20 km, and some extending as far down as the Moho. The APMB is primarily composed of andesitic melts with 7-10 wt% water, and the upper portion may contain more dacitic melts, with partial melt percentages ranging from 10-40%. Measurements indicate that the region around the Uturuncu volcano in Bolivia is uplifting at a rate of ~10 mm/year, surrounded by a large region of subsidence. This movement is likely a result of the APMB interacting with the surrounding rock and causing deformation. Recent research demonstrates that this uplift rate may fluctuate over months or years and that it has decreased over the past decade. Various techniques, such as seismic, gravity, and electromagnetic measurements, have been used to image the low-velocity zone in the mid to upper crust known as the APMB.
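A back-of-the-envelope check on these figures, treating the body as an idealized disc (an assumption made here only for the arithmetic), with radius 100 km and a mid-range thickness of ~16 km:

```latex
V \approx \pi r^{2} t
  = \pi \,(100\ \mathrm{km})^{2}\,(16\ \mathrm{km})
  \approx 5.0 \times 10^{5}\ \mathrm{km^{3}}
```

which is consistent with the quoted total volume of ~500,000 km3.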
Composition:
The APMB is likely compositionally zoned, with the lower 18-30 km containing andesitic melts and the upper 9-18 km containing dacitic melts. Estimates for the percentage of andesitic melt vary from 8 vol% on the low end to 30 vol% on the high end. These andesitic melts also have a high water content (~7-10 wt% water), indicated by the high electrical conductivity measured in the APMB. Measurements of the partial melt percentage in the APMB also vary, with seismic imaging indicating anywhere from 10-40% partial melt. For a magma body with ~20% partial melt, the viscosity is estimated to be less than 10¹⁶ Pa·s.
Deformation:
The Altiplano-Puna region around the Uturuncu volcano is experiencing a type of deformation termed 'sombrero uplift', meaning that there is a central zone of uplift surrounded by a region of subsidence. One potential explanation for this sombrero uplift pattern is the formation and growth of a large diapir arising from the APMB. Magma of lower density than the surrounding rocks is produced during partial melting in the APMB, causing a plume of buoyant magma to rise from the center of the magma body. This removes material from the APMB to feed the growing diapir, resulting in a region of subsidence surrounding the uplift zone.
Data collected between 1992 and 2010 demonstrate that the region is uplifting at a rate of ~10 mm/year and subsiding at a slower rate (only a few mm/year). More recent InSAR data, collected between September 2014 and December 2017, show that the uplift rate over this time period has decreased to 3-5 mm/year and may experience short-term velocity reversals. Additionally, there is evidence that the uplift and subsidence rates have balanced out over the past 16,000 years to create no net deformation. These aspects of the uplift and subsidence cannot be easily explained by the diapir model, so other possible mechanisms for driving the deformation are being investigated. One such mechanism is the movement of volatiles in a column connected to the APMB. Movement like this may explain a surface deformation rate that varies on monthly or yearly scales and appears to have resulted in no net deformation over longer time periods.
Imaging Techniques:
Seismic Between 1996 and 1997, several broadband seismic stations were deployed over the Altiplano-Puna Volcanic Complex (APVC) in order to characterize the magmatic structures beneath the surface. These stations found a low-velocity region approximately 10-20 km beneath the surface that was interpreted to be a sill-like magma body associated with the APVC. Seismic studies and modeling continue to take place in this area, further constraining the extent and characteristics of this magma body.
Imaging Techniques:
Gravity A 3D density model of the Central Andes was developed based on modeling of Bouguer anomalies and it provided a more detailed view of the region's lithospheric structure and an estimation of the amount of partial melt present in the APMB (~9%). Continued investigation of Bouguer anomaly data led to the discovery of a column-like, low density structure extending from the top of the APMB with a diameter of approximately 15 km.
Imaging Techniques:
Electromagnetic Electromagnetic methods have also been used to investigate structures in the Andes as well as determine characteristics of the APMB. Magnetotelluric stations were deployed across the Central Andes and resolved a highly conductive region beneath the Altiplano-Puna plateau, which appeared to coincide with the low velocity zone associated with the APMB. Further magnetotelluric studies showed that the region has low electrical resistivities of <3 Ωm. Resistivity values in this range are interpreted to only occur with magma that contains a minimum of 15% andesitic melt. Additionally, these resistivity values indicate that the melt has a water content up to 10 wt.% H2O, which makes up approximately 10% of the APMB.
**RtT RNA**
RtT RNA:
The RtT RNA (repeat structure of the tyrT operon) is an RNA element that is released from the tyrT operon of Escherichia coli. The exact function of RtT is unknown, although it is thought that it may be involved in changing the cellular response to amino acid starvation. The functional prediction is strengthened when the tyrT locus of E. coli K12 is compared with that of the B strain, which lacks RtT RNA and has an alternate starvation response.
**IEC 61850**
IEC 61850:
IEC 61850 is an international standard defining communication protocols for intelligent electronic devices at electrical substations. It is a part of the International Electrotechnical Commission's (IEC) Technical Committee 57 reference architecture for electric power systems. The abstract data models defined in IEC 61850 can be mapped to a number of protocols. Current mappings in the standard are to Manufacturing Message Specification (MMS), GOOSE (Generic Object Oriented System Event) [see section 3, Terms and definitions, term 3.65 on page 14], SV (Sampled Values) or SMV (Sampled Measure Values), and soon to web services. In the previous version of the standard, GOOSE stood for "Generic Object Oriented Substation Event", but this old definition is still very common in IEC 61850 documentation. These protocols can run over TCP/IP networks or substation LANs using high speed switched Ethernet to obtain the necessary response times below four milliseconds for protective relaying.
Standard documents:
IEC 61850 consists of the following parts:
IEC TR 61850-1:2013 – Introduction and overview
IEC TS 61850-2:2003 – Glossary
IEC 61850-3:2013 – General requirements
IEC 61850-4:2011 – System and project management
IEC 61850-5:2013 – Communication requirements for functions and device models
IEC 61850-6:2009 – Configuration language for communication in electrical substations related to IEDs
IEC 61850-7-1:2011 – Basic communication structure – Principles and models
IEC 61850-7-2:2010 – Basic communication structure – Abstract communication service interface (ACSI)
IEC 61850-7-3:2010 – Basic communication structure – Common Data Classes
IEC 61850-7-4:2010 – Basic communication structure – Compatible logical node classes and data classes
IEC 61850-7-410:2012 – Basic communication structure – Hydroelectric power plants – Communication for monitoring and control
IEC 61850-7-420:2009 – Basic communication structure – Distributed energy resources logical nodes
IEC TR 61850-7-510:2012 – Basic communication structure – Hydroelectric power plants – Modelling concepts and guidelines
IEC 61850-8-1:2011 – Specific communication service mapping (SCSM) – Mappings to MMS (ISO 9506-1 and ISO 9506-2) and to ISO/IEC 8802-3
IEC 61850-9-2:2011 – Specific communication service mapping (SCSM) – Sampled values over ISO/IEC 8802-3
IEC/IEEE 61850-9-3:2016 – Precision Time Protocol profile for power utility automation
IEC 61850-10:2012 – Conformance testing
IEC TS 61850-80-1:2016 – Guideline to exchanging information from a CDC-based data model using IEC 60870-5-101 or IEC 60870-5-104
IEC TR 61850-80-3:2015 – Mapping to web protocols – Requirements and technical choices
IEC TS 61850-80-4:2016 – Translation from the COSEM object model (IEC 62056) to the IEC 61850 data model
IEC TR 61850-90-1:2010 – Use of IEC 61850 for the communication between substations
IEC TR 61850-90-2:2016 – Using IEC 61850 for communication between substations and control centres
IEC TR 61850-90-3:2016 – Using IEC 61850 for condition monitoring diagnosis and analysis
IEC TR 61850-90-4:2013 – Network engineering guidelines
IEC TR 61850-90-5:2012 – Use of IEC 61850 to transmit synchrophasor information according to IEEE C37.118
IEC TR 61850-90-7:2013 – Object models for power converters in distributed energy resources (DER) systems
IEC TR 61850-90-8:2016 – Object model for E-mobility
IEC TR 61850-90-12:2015 – Wide area network engineering guidelines
Features:
IEC 61850 features include: Data modelling – Primary process objects as well as protection and control functionality in the substation are modelled into different standard logical nodes, which can be grouped under different logical devices. There are logical nodes for data/functions related to the logical device (LLN0) and the physical device (LPHD). (A schematic sketch of this hierarchy appears after this list.)
Reporting schemes – There are various reporting schemes (BRCB & URCB) for reporting data from server through a server-client relationship which can be triggered based on pre-defined trigger conditions.
Fast transfer of events – Generic Substation Events (GSE) are defined for fast transfer of event data for a peer-to-peer communication mode. This is again subdivided into GOOSE & GSSE.
Setting groups – The setting group control blocks (SGCB) are defined to handle the setting groups so that the user can switch to any active group according to requirements.
Sampled data transfer – Schemes are also defined to handle the transfer of sampled values using Sampled Value Control Blocks (SVCB).
Commands – Various command types are also supported by IEC 61850, including direct and select-before-operate (SBO) commands with normal and enhanced security.
Data storage – Substation Configuration Language (SCL) is defined for complete storage of configured data of the substation in a specific format.
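The logical-device/logical-node hierarchy from the data-modelling item can be made concrete with a few lines of Python. This is purely illustrative and not taken from the standard: the IED name "Relay", the device instance "Feeder1", and the helper names are invented, while LLN0, LPHD, MMXU (measurements) and XCBR (circuit breaker) are genuine logical node classes.
```python
# Illustrative sketch (not from the standard): a minimal model of the
# IEC 61850 naming hierarchy IED -> Logical Device -> Logical Node.
from dataclasses import dataclass, field

@dataclass
class LogicalNode:
    lnClass: str          # e.g. "MMXU"
    inst: str = "1"       # instance number within the logical device

    @property
    def name(self) -> str:
        return f"{self.lnClass}{self.inst}"

@dataclass
class LogicalDevice:
    inst: str                                   # logical device instance name
    nodes: list = field(default_factory=list)   # LogicalNode objects

def object_references(ied_name: str, ld: LogicalDevice):
    """Yield object references of the form IEDName+LDInst/LNName."""
    for ln in ld.nodes:
        yield f"{ied_name}{ld.inst}/{ln.name}"

feeder = LogicalDevice("Feeder1", [LogicalNode("LLN0", ""),   # LLN0 has no instance number
                                   LogicalNode("LPHD"),
                                   LogicalNode("MMXU"),
                                   LogicalNode("XCBR")])
print(list(object_references("Relay", feeder)))
# ['RelayFeeder1/LLN0', 'RelayFeeder1/LPHD1', 'RelayFeeder1/MMXU1', 'RelayFeeder1/XCBR1']
```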
**Climbing fiber**
Climbing fiber:
Climbing fibers are the name given to a series of neuronal projections from the inferior olivary nucleus located in the medulla oblongata. These axons pass through the pons and enter the cerebellum via the inferior cerebellar peduncle, where they form synapses with the deep cerebellar nuclei and Purkinje cells. Each climbing fiber will form synapses with 1-10 Purkinje cells.
Early in development, Purkinje cells are innervated by multiple climbing fibers, but as the cerebellum matures, these inputs gradually become eliminated resulting in a single climbing fiber input per Purkinje cell.
Climbing fiber:
These fibers provide very powerful, excitatory input to the cerebellum, which results in the generation of complex spike excitatory postsynaptic potentials (EPSPs) in Purkinje cells. In this way climbing fibers (CFs) perform a central role in motor behaviors. The climbing fibers carry information from various sources such as the spinal cord, vestibular system, red nucleus, superior colliculus, reticular formation and sensory and motor cortices.
Climbing fiber:
Climbing fiber activation is thought to serve as a motor error signal sent to the cerebellum, and is an important signal for motor timing. In addition to the control and coordination of movements, the climbing fiber afferent system contributes to sensory processing and cognitive tasks, likely by encoding the timing of sensory input independently of attention or awareness. In the central nervous system, these fibers are able to undergo remarkable regenerative modifications in response to injury, generating new branches by sprouting to innervate surrounding Purkinje cells if those cells lose their CF innervation. This kind of injury-induced sprouting has been shown to require the growth-associated protein GAP-43.
**Area density**
Area density:
The area density (also known as areal density, surface density, superficial density, areic density, mass thickness, column density, or density thickness) of a two-dimensional object is calculated as the mass per unit area. The SI derived unit is the kilogram per square metre (kg·m⁻²). A related area number density can be defined by replacing mass by number of particles or another countable quantity.
Area density:
In the paper and fabric industries, it is called grammage and is expressed in grams per square meter (g/m2); for paper in particular, it may be expressed as pounds per ream of standard sizes ("basis ream").
Formulation:
Area density can be calculated as ρA = m/A or ρA = ρ·l, where ρA is the average area density, m is the total mass of the object, A is the total area of the object, ρ is the average density, and l is the average thickness of the object.
Column density:
A special type of area density is called column (mass) density (also columnar mass density), denoted ρA or σ. It is the mass of substance per unit area integrated along a path; it is obtained by integrating the volumetric density ρ over a column: σ = ∫ ρ ds. In general the integration path can be slant or at oblique incidence (as in, for example, line-of-sight propagation in atmospheric physics). A common special case is a vertical path, from the bottom to the top of the medium: σ = ∫ ρ dz, where z denotes the vertical coordinate (e.g., height or depth).
Column density:
Columnar density ρA is closely related to the vertically averaged volumetric density ρ̄ as ρA = ρ̄·Δz, where Δz = ∫ dz is the thickness of the column; ρ̄, ρA, and Δz have units of, for example, grams per cubic metre, grams per square metre, and metres, respectively.
Column number density Column number density refers instead to a number density type of quantity: the number or count of a substance—rather than the mass—per unit area integrated along a path: N = ∫ n ds, where n is the (volumetric) number density.
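As a worked example (not from the source; the density and scale-height values below are rough textbook numbers for Earth's atmosphere), the vertical column mass density of an exponential density profile ρ(z) = ρ0·e^(−z/H) can be integrated numerically and checked against the analytic value ρ0·H:
```python
# Numerically integrate rho(z) = rho0 * exp(-z/H) over a vertical column
# and compare with the analytic column density rho0 * H.
import numpy as np

rho0 = 1.225        # sea-level air density, kg/m^3 (illustrative)
H = 8500.0          # scale height, m (rough value for Earth's atmosphere)

z = np.linspace(0.0, 100_000.0, 100_001)   # 0-100 km column, 1 m steps
rho = rho0 * np.exp(-z / H)

# Trapezoidal rule, written out so it works on any NumPy version.
column_numeric = np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(z))   # kg/m^2
column_analytic = rho0 * H                 # exact integral over [0, inf)

print(f"numeric:  {column_numeric:.1f} kg/m^2")
print(f"analytic: {column_analytic:.1f} kg/m^2")
# The two agree closely; the tiny difference is the mass above 100 km.
```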
Usage:
Atmospheric physics It is a quantity commonly retrieved by remote sensing instruments, for instance the Total Ozone Mapping Spectrometer (TOMS), which retrieves ozone columns around the globe. Columns are also returned by the differential optical absorption spectroscopy (DOAS) method and are a common retrieval product from nadir-looking microwave radiometers. A closely related concept is that of ice or liquid water path, which specifies the volume per unit area or depth instead of mass per unit area; the two are related through the density of water (depth = ρA/ρwater). Another closely related concept is optical depth.
Usage:
Astronomy In astronomy, the column density is generally used to indicate the number of atoms or molecules per square centimetre (cm²) along the line of sight in a particular direction, as derived from observations of e.g. the 21-cm hydrogen line or from observations of a certain molecular species. The interstellar extinction can also be related to the column density of H or H2. The concept of area density can be useful when analysing accretion disks. In the case of a disk seen face-on, the area density of a given region of the disk is defined as a column density: that is, either as the mass of substance per unit area integrated along the vertical path through the disk (the line of sight), from the bottom to the top of the medium, σ = ∫ ρ dz, where z denotes the vertical coordinate (e.g., height or depth), or as the number or count of a substance—rather than the mass—per unit area integrated along that path (column number density), N = ∫ n dz.
Usage:
Data storage media Areal density is used to quantify and compare different types of media used in data storage devices such as hard disk drives, optical disc drives and tape drives. The current unit of measure is typically gigabits per square inch.
Usage:
Paper The area density is often used to describe the thickness of paper; e.g., 80 g/m2 is very common.
Usage:
Fabric Fabric "weight" is often specified as mass per unit area, grams per square meter (gsm) or ounces per square yard. It is also sometimes specified in ounces per yard in a standard width for the particular cloth. One gram per square meter equals 0.0295 ounces per square yard; one ounce per square yard equals 33.9 grams per square meter.
Usage:
Other It is also an important quantity for the absorption of radiation. When studying bodies falling through air, area density is important because resistance depends on area, and gravitational force is dependent on mass.
Bone density is often expressed in grams per square centimeter (g·cm−2) as measured by x-ray absorptiometry, as a proxy for the actual density.
The body mass index is expressed in units of kilograms per square meter, though the area figure is nominal, being the square of the height.
The total electron content in the ionosphere is a quantity of type columnar number density.
Snow water equivalent is a quantity of type columnar mass density.
**Artery of round ligament of uterus**
Artery of round ligament of uterus:
The artery of the round ligament of the uterus, also known as Sampson's artery, is a branch of the inferior epigastric artery.
It runs under, and supplies, the round ligament of the uterus.
It constitutes an anastomosis of the uterine artery and ovarian artery.
It was originally named after John A. Sampson (1873–1946), an American gynecologist who studied endometriosis.
Clinical significance:
It is considered an insignificant artery that is dissected during hysterectomies.
It can be the source of hemoperitoneum, but only rarely does it pose a hemodynamic risk to the patient if severed, and it is easily cauterized or sutured to prevent bleeding.
**Dilution of precision (computer graphics)**
Dilution of precision (computer graphics):
Dilution of precision is an algorithmic trick used to handle difficult problems in hidden line removal, caused when horizontal and vertical edges lie on top of each other due to numerical instability. Numerically, the severity escalates when a CAD model is viewed along the principal axes or when a geometric form is viewed end-on. The trick is to alter the view vector by a small amount, thereby hiding the flaws. Unfortunately, this mathematical modification introduces new issues of its own, namely that the exact nature of the original problem has been destroyed, and visible artifacts of this kludge will continue to haunt the algorithm. One such issue is that edges that were well defined and hidden will now become problematic. Another common issue is that the bottom edges of circles viewed end-on will often become visible and propagate their visibility throughout the problem.
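A minimal sketch of the trick, assuming the view direction is represented as a unit 3-vector (an illustrative reconstruction, not from any particular CAD code; the epsilon value and the helper name perturb_view are invented, and real systems tune the perturbation empirically):
```python
# Illustrative sketch: rotate the view vector by a tiny angle so that
# axis-aligned edges are no longer degenerate in hidden-line tests.
import numpy as np

def perturb_view(view: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Rotate the unit view vector by eps radians about the x and y axes."""
    cx, sx = np.cos(eps), np.sin(eps)
    rot_x = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    rot_y = np.array([[cx, 0, sx], [0, 1, 0], [-sx, 0, cx]])
    v = rot_y @ rot_x @ view
    return v / np.linalg.norm(v)

view = np.array([0.0, 0.0, -1.0])   # looking straight down the z-axis
print(perturb_view(view))           # slightly off-axis, no longer degenerate
```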
**Psychiatric medication**
Psychiatric medication:
A psychiatric or psychotropic medication is a psychoactive drug taken to exert an effect on the chemical makeup of the brain and nervous system. Thus, these medications are used to treat mental illnesses. They are typically made of synthetic chemical compounds and are usually prescribed in psychiatric settings, potentially involuntarily during commitment. Since the mid-20th century, such medications have been leading treatments for a broad range of mental disorders and have decreased the need for long-term hospitalization, thereby lowering the cost of mental health care. Rates of recidivism or rehospitalization of the mentally ill remain high in many countries, and the reasons for relapse are under research.
History:
Several significant psychiatric drugs were developed in the mid-20th century. In 1948, lithium was first used as a psychiatric medicine. One of the most important discoveries was chlorpromazine, an antipsychotic that was first given to a patient in 1952. In the same decade, Julius Axelrod carried out research into the interaction of neurotransmitters, which provided a foundation for the development of further drugs. The popularity of these drugs has increased significantly since then, with millions prescribed annually. The introduction of these drugs brought profound changes to the treatment of mental illness. It meant that more patients could be treated without the need for confinement in a psychiatric hospital. It was one of the key reasons why many countries moved towards deinstitutionalization, closing many of these hospitals so that patients could be treated at home, in general hospitals and smaller facilities. Use of physical restraints such as straitjackets also declined.
History:
As of 2013, the 10 most prescribed psychiatric drugs by number of prescriptions were alprazolam, sertraline, citalopram, fluoxetine, lorazepam, trazodone, escitalopram, duloxetine, bupropion XL, and venlafaxine XR.
Administration:
Psychiatric medications are prescription medications, requiring a prescription from a physician, such as a psychiatrist, or a psychiatric nurse practitioner (PMHNP), before they can be obtained. Some U.S. states and territories, following the creation of the prescriptive authority for psychologists movement, have granted prescriptive privileges to clinical psychologists who have undergone additional specialised education and training in medical psychology. In addition to the familiar dosage in pill form, psychiatric medications are evolving into more novel methods of drug delivery. New technologies include transdermal, transmucosal, inhalation, and suppository formulations, as well as depot injections.
Research:
Psychopharmacology studies a wide range of substances with various types of psychoactive properties. The professional and commercial fields of pharmacology and psychopharmacology do not typically focus on psychedelic or recreational drugs, and so the majority of studies are conducted on psychiatric medication. While studies are conducted on all psychoactive drugs by both fields, psychopharmacology focuses on psychoactive and chemical interactions within the brain. Physicians who research psychiatric medications are psychopharmacologists, specialists in the field of psychopharmacology.
Adverse and withdrawal effects:
Psychiatric disorders, including depression, psychosis, and bipolar disorder, are common and gaining more acceptance in the United States. The most commonly used classes of medications for these disorders are antidepressants, antipsychotics, and lithium. Unfortunately, these medications are associated with significant neurotoxicities.
Adverse and withdrawal effects:
Psychiatric medications carry risk for neurotoxic adverse effects. The occurrence of neurotoxic effects can potentially reduce drug compliance. Some adverse effects can be treated symptomatically by using adjunct medications such as anticholinergics (antimuscarinics). Some rebound or withdrawal adverse effects, such as the possibility of a sudden or severe emergence or re-emergence of psychosis in antipsychotic withdrawal, may appear when the drugs are discontinued, or discontinued too rapidly.
Adverse and withdrawal effects:
Medicine combinations with clinically untried risks While clinical trials of psychiatric medications, like other medications, typically test medicines separately, there is a practice in psychiatry (more so than in somatic medicine) of using polypharmacy: combinations of medicines that have never been tested together in clinical trials (though all medicines involved have passed clinical trials separately). It is argued that this presents a risk of adverse effects, especially brain damage, in real-life mixed-medication psychiatry that is not visible in the clinical trials of one medicine at a time (similar to mixed drug abuse causing significantly more damage than the additive effects of the brain damage caused by using only one illegal drug). Outside clinical trials, there is evidence for an increase in mortality when psychiatric patients are transferred to polypharmacy with an increased number of medications being mixed.
Types:
There are five main groups of psychiatric medications.
Antidepressants, which treat disparate disorders such as clinical depression, dysthymia, anxiety disorders, eating disorders and borderline personality disorder.
Antipsychotics, which treat psychotic disorders such as schizophrenia and psychotic symptoms occurring in the context of other disorders such as mood disorders. They are also used for the treatment of bipolar disorder.
Anxiolytics, which treat anxiety disorders and include hypnotics and sedatives.
Mood stabilizers, which treat bipolar disorder and schizoaffective disorder.
Stimulants, which treat disorders such as attention deficit hyperactivity disorder and narcolepsy.
Types:
Antidepressants Antidepressants are drugs used to treat clinical depression, and they are also often used for anxiety and other disorders. Most antidepressants hinder the breakdown of serotonin, norepinephrine, and/or dopamine. A commonly used class of antidepressants are the selective serotonin reuptake inhibitors (SSRIs), which act on serotonin transporters in the brain to increase levels of serotonin in the synaptic cleft. Another is the serotonin-norepinephrine reuptake inhibitors (SNRIs), which increase both serotonin and norepinephrine. Antidepressants often take 3–5 weeks to have a noticeable effect as the regulation of receptors in the brain adapts. There are multiple classes of antidepressants which have different mechanisms of action. Another type of antidepressant is the monoamine oxidase inhibitor (MAOI), which is thought to block the action of monoamine oxidase, an enzyme that breaks down serotonin and norepinephrine. MAOIs are not used as first-line treatment due to the risk of hypertensive crisis related to the consumption of foods containing the amino acid tyramine.
Common antidepressants:
Fluoxetine (Prozac), SSRI
Paroxetine (Paxil, Seroxat), SSRI
Citalopram (Celexa), SSRI
Escitalopram (Lexapro), SSRI
Sertraline (Zoloft), SSRI
Duloxetine (Cymbalta), SNRI
Venlafaxine (Effexor), SNRI
Bupropion (Wellbutrin), NDRI
Mirtazapine (Remeron), NaSSA
Isocarboxazid (Marplan), MAOI
Phenelzine (Nardil), MAOI
Tranylcypromine (Parnate), MAOI
Amitriptyline (Elavil), TCA
Antipsychotics Antipsychotics are drugs used to treat various symptoms of psychosis, such as those caused by psychotic disorders or schizophrenia. Atypical antipsychotics are also used as mood stabilizers in the treatment of bipolar disorder, and they can augment the action of antidepressants in major depressive disorder.
Types:
Antipsychotics are sometimes referred to as neuroleptic drugs and some antipsychotics are branded "major tranquilizers".
There are two categories of antipsychotics: typical antipsychotics and atypical antipsychotics. Most antipsychotics are available only by prescription.
Common antipsychotics:
Anxiolytics and Hypnotics Benzodiazepines are effective as hypnotics, anxiolytics, anticonvulsants, myorelaxants and amnesics. Having less proclivity for overdose and toxicity, they have widely supplanted barbiturates.
Developed in the 1950s onward, benzodiazepines were originally thought to be non-addictive at therapeutic doses, but are now known to cause withdrawal symptoms similar to barbiturates and alcohol. Benzodiazepines are generally recommended for short-term use. Z-drugs are a group of drugs with effects generally similar to benzodiazepines, which are used in the treatment of insomnia.
Types:
Common benzodiazepines and z-drugs include:
Mood stabilizers In 1949, the Australian John Cade discovered that lithium salts could control mania, reducing the frequency and severity of manic episodes. This introduced the now popular drug lithium carbonate to the mainstream public, as well as being the first mood stabilizer to be approved by the U.S. Food & Drug Administration. Besides lithium, several anticonvulsants and atypical antipsychotics have mood stabilizing activity. The mechanism of action of mood stabilizers is not well understood.
Types:
Common non-antipsychotic mood stabilizers include:
Lithium (Lithobid, Eskalith), the oldest mood stabilizer
Anticonvulsants: Carbamazepine (Tegretol) and the related compound oxcarbazepine (Trileptal); Valproic acid and its salts (Depakene, Depakote); Lamotrigine (Lamictal)
Stimulants A stimulant is a drug that stimulates the central nervous system, increasing arousal, attention and endurance. Stimulants are used in psychiatry to treat attention deficit-hyperactivity disorder. Because the medications can be addictive, patients with a history of drug abuse are typically monitored closely or treated with a non-stimulant.
Types:
Common stimulants:
Methylphenidate (Ritalin, Concerta), a norepinephrine-dopamine reuptake inhibitor
Dexmethylphenidate (Focalin), the active dextro-enantiomer of methylphenidate
Serdexmethylphenidate/dexmethylphenidate (Azstarys)
Mixed amphetamine salts (Adderall), a 3:1 mix of dextro/levo-enantiomers of amphetamine
Dextroamphetamine (Dexedrine), the dextro-enantiomer of amphetamine
Lisdexamfetamine (Vyvanse), a prodrug containing the dextro-enantiomer of amphetamine
Methamphetamine (Desoxyn), a potent but infrequently prescribed amphetamine
Controversies:
Professionals such as David Rosenhan, Peter Breggin, Paula Caplan, Thomas Szasz and Stuart A. Kirk maintain that psychiatry engages "in the systematic medicalization of normality". More recently these concerns have come from insiders who have worked for and promoted the APA (e.g., Robert Spitzer, Allen Frances) (p. 185). Intellectuals such as Goffman, Deleuze, and Rosen consider pharmacological "treatment" a lay religion: a "medication" is a "eucharist", or just a concoction. Antipsychotics have been associated with decreases in brain volume over time, which may indicate a neurotoxic effect. However, untreated psychosis has also been associated with decreases in brain volume. Scholars, and even professionals such as Cooper, Foucault, and Szasz, believe that pharmacological "treatment" is only a placebo effect, and that the administration of drugs is just a religion in disguise and ritualistic chemistry.
**Cebranopadol**
Cebranopadol:
Cebranopadol (developmental code GRT-6005) is an opioid analgesic of the benzenoid class which is currently under development internationally by Grünenthal, a German pharmaceutical company, and its partner Depomed, a pharmaceutical company in the United States, for the treatment of a variety of different acute and chronic pain states. As of November 2014, it is in phase III clinical trials.
Cebranopadol:
Cebranopadol is unique in its mechanism of action as an opioid, binding to and activating all four of the opioid receptors; it acts as a full agonist of the μ-opioid receptor (Ki = 0.7 nM; EC50 = 1.2 nM; IA = 104%) and δ-opioid receptor (Ki = 18 nM; EC50 = 110 nM; IA = 105%), and as a partial agonist of the nociceptin receptor (Ki = 0.9 nM; EC50 = 13.0 nM; IA = 89%) and κ-opioid receptor (Ki = 2.6 nM; EC50 = 17 nM; IA = 67%). Its EC50 values are 0.5–5.6 µg/kg when introduced intravenously and 25.1 µg/kg after oral administration. Cebranopadol shows highly potent and effective antinociceptive and antihypersensitive effects in a variety of different animal models of pain. Notably, it has also been found to be more potent in models of chronic neuropathic pain than in acute nociceptive pain, compared to selective μ-opioid receptor agonists. Relative to morphine, tolerance to the analgesic effects of cebranopadol has been found to develop more slowly (26 days versus 11 days for complete tolerance). In addition, unlike morphine, cebranopadol has not been found to affect motor coordination or reduce respiration in animals at doses in or above the dosage range for analgesia. As such, it may have improved and prolonged efficacy and greater tolerability in comparison to currently available opioid analgesics. As an agonist of the κ-opioid receptor, cebranopadol may have the capacity to produce psychotomimetic effects, dysphoria, and other adverse reactions at sufficiently high doses, a property which could potentially limit its practical clinical dosage range, but would likely reduce the occurrence of patients taking more than their prescribed dose.
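The Ki values above can be turned into rough receptor-occupancy estimates with the standard Hill–Langmuir relation, occupancy = [L]/([L] + Ki). The snippet below is an illustrative back-of-the-envelope calculation, not a pharmacological model: the 5 nM ligand concentration is an arbitrary assumption, and efficacy, metabolites, and in vivo binding conditions are ignored.
```python
# Fractional receptor occupancy from the Hill-Langmuir equation,
# occupancy = L / (L + Ki), using the Ki values quoted in the text.
KI_NM = {"mu": 0.7, "delta": 18.0, "nociceptin": 0.9, "kappa": 2.6}

def occupancy(ligand_nm: float, ki_nm: float) -> float:
    return ligand_nm / (ligand_nm + ki_nm)

ligand_nm = 5.0   # hypothetical free ligand concentration, nM
for receptor, ki in KI_NM.items():
    print(f"{receptor:>10}: {occupancy(ligand_nm, ki):.0%}")
# At 5 nM the mu and nociceptin receptors are ~88% and ~85% occupied,
# while the lower-affinity delta receptor is only ~22% occupied.
```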
**Ships husbandry**
Ships husbandry:
Ships husbandry or ship husbandry is all aspects of maintenance, cleaning, and general upkeep of the hull, rigging, and equipment of a ship. It may also be used to refer to aspects of maintenance which are not specifically covered by the technical departments. The term is used in both naval and merchant shipping, but naval vessel husbandry may also be used for specific reference to naval vessels.
Ships husbandry diving:
Underwater ships husbandry can be financially advantageous when it eliminates the need for dry-dock repairs or extends the interval between dry-dockings, and it reduces the time a ship is required to stay in dry-dock. Underwater ship husbandry includes the following operations, usually done by commercial divers, though some can be done by ROVs or robotic machinery:
Underwater hull cleaning to remove fouling organisms which increase drag, and therefore reduce top speed and increase fuel consumption. Such cleaning may be of the entire hull or parts thereof, particularly propellers, shafts and thrusters. The underwater hull may be inspected prior to cleaning, and the amount of cleaning done may depend on the inspection results. Hull cleaning may be done by divers using hand-held or self-propelled mechanical brushing equipment, water jets or scrapers. (pp. 2–3, 5)
Non-destructive testing and inspection, including fouling surveys, inspection of known or suspected damage to structure, equipment or coatings, and inspection of repairs. Several methods may be used, including visual inspection, video recording and magnetic particle testing. (pp. 2, 4)
Underwater painting is done to repair paintwork after repairs, or where small areas of paint have been damaged or have worn out. Suitable paints are applied by the diver using brush or roller. (pp. 2, 5)
Fiberglass repair can be hull repair or propeller shaft protective coating repair. Repair of fibreglass shaft coating is generally done in a dry habitat mounted over the shaft, allowing the divers access through the open bottom. The shaft is first cleaned before wrapping with a new layer of sheathing. (pp. 2, 4)
Underwater welding is done either in a submerged dry habitat or wet. Better quality welds can be achieved in dry conditions, as the cooling rate is reduced and there is less problem with hydrogen embrittlement. Weld surfaces are prepared by cleaning with scrapers, chipping hammers or hand-held brushes, and pneumatic or hydraulic grinding tools. (pp. 2, 4)
Minor repairs to the rubber coating of sonar domes can be done by divers. This entails removal of damaged rubber, preparation of the surface and application of a rubber patch using a suitable adhesive. (p. 2)
Environmental impact Several of the operations classified as ship husbandry will release some quantity of harmful material into the water, particularly hull cleaning operations, which will release antifouling toxins. Underwater ship husbandry can cause an adverse environmental effect, as significant amounts of copper and zinc are released by underwater hull scrubbing. Alien biofouling organisms may also be released during this process. (p. 15)
Diving under the hull The underside of the hull is an overhead environment with no direct vertical access to the surface. As such it constitutes an entrapment hazard, particularly under large vessels, where it may be too dark due to low natural light or turbid water to see the way to the side of the hull. The bottom of the largest ships is mostly flat and featureless, exacerbating the problem. Only surface-supplied diving is authorised for this work in most jurisdictions, as this not only secures the diver's breathing gas supply, but also provides a guideline to the exit point. The use of mechanised bottom-scrubbing devices which are steered along the hull surface by a diver and scrub it with rotary brushes has been linked with high release of environmental toxins. There is also a hazard of crushing if the clearance is small and the tide range is large.
**Strontium bromide**
Strontium bromide:
Strontium bromide is a chemical compound with the formula SrBr2. At room temperature it is a white, odourless, crystalline powder. Strontium bromide imparts a bright red colour in a flame test, showing the presence of strontium ions. It is used in flares and also has some pharmaceutical uses.
Preparation:
SrBr2 can be prepared from strontium hydroxide and hydrobromic acid:
Sr(OH)2 + 2 HBr ⟶ SrBr2 + 2 H2O
Alternatively, strontium carbonate can also be used as the strontium source:
SrCO3 + 2 HBr ⟶ SrBr2 + H2O + CO2↑
These reactions give the hexahydrate of SrBr2, which decomposes to the dihydrate at 89 °C. At 180 °C, anhydrous SrBr2 is obtained.
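To make the carbonate route concrete, here is a small illustrative stoichiometry calculation (the molar masses are standard values; the 10 g starting amount is an arbitrary assumption):
```python
# Stoichiometry for SrCO3 + 2 HBr -> SrBr2 + H2O + CO2:
# one mole of SrCO3 yields one mole of SrBr2.
M = {"Sr": 87.62, "C": 12.011, "O": 15.999, "Br": 79.904}   # g/mol

m_srco3 = M["Sr"] + M["C"] + 3 * M["O"]    # ~147.63 g/mol
m_srbr2 = M["Sr"] + 2 * M["Br"]            # ~247.43 g/mol

grams_srco3 = 10.0                          # arbitrary starting mass
moles = grams_srco3 / m_srco3
print(f"{moles * m_srbr2:.2f} g of anhydrous SrBr2")   # ~16.76 g
```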
Structure:
At room temperature, strontium bromide adopts a crystal structure with a tetragonal unit cell and space group P4/n. This structure is referred to as α-SrBr2 and is isostructural with EuBr2 and USe2. The compound's structure was initially erroneously interpreted as being of the PbCl2 type, but this was later corrected. Around 920 K (650 °C), α-SrBr2 undergoes a first-order solid-solid phase transition to a much less ordered phase, β-SrBr2, which adopts the cubic fluorite structure. The beta phase of strontium bromide has a much higher ionic conductivity of about 1 S·cm⁻¹, comparable to that of molten SrBr2, due to extensive disorder in the bromide sublattice. Strontium bromide melts at 930 K (657 °C).
**Cytochrome b5**
Cytochrome b5:
Cytochromes b5 are ubiquitous electron transport hemoproteins found in animals, plants, fungi and purple phototrophic bacteria. The microsomal and mitochondrial variants are membrane-bound, while bacterial and those from erythrocytes and other animal tissues are water-soluble. The family of cytochrome b5-like proteins includes (besides cytochrome b5 itself) hemoprotein domains covalently associated with other redox domains in flavocytochrome cytochrome b2 (L-lactate dehydrogenase; EC 1.1.2.3), sulfite oxidase (EC 1.8.3.1), plant and fungal nitrate reductases (EC 1.7.1.1, EC 1.7.1.2, EC 1.7.1.3), and plant and fungal cytochrome b5/acyl lipid desaturase fusion proteins.
Structure:
3-D structures of a number of cytochrome b5 and yeast flavocytochrome b2 are known. The fold belongs to the α+β class, with two hydrophobic cores on each side of a β-sheet. The larger hydrophobic core constitutes the heme-binding pocket, closed off on each side by a pair of helices connected by a turn. The smaller hydrophobic core may have only a structural role and is formed by spatially close N-terminal and C-terminal segments. The two histidine residues provide the fifth and sixth heme ligands, and the propionate edge of the heme group lies at the opening of the heme crevice. Two isomers of cytochrome b5, referred to as the A (major) and B (minor) forms, differ by a 180° rotation of the heme about an axis defined by the α- and γ-meso carbons.
Cytochrome b5 in some biochemical reactions:
EC 1.6.2.2 cytochrome-b5 reductase: NADH + H+ + 2 ferricytochrome b5 → NAD+ + 2 ferrocytochrome b5
EC 1.10.2.1 L-ascorbate—cytochrome-b5 reductase: L-ascorbate + ferricytochrome b5 → monodehydroascorbate + ferrocytochrome b5
EC 1.14.18.2 CMP-N-acetylneuraminate monooxygenase: CMP-N-acetylneuraminate + 2 ferrocytochrome b5 + O2 + 2 H+ → CMP-N-glycoloylneuraminate + 2 ferricytochrome b5 + H2O
EC 1.14.19.1 stearoyl-CoA 9-desaturase: stearoyl-CoA + 2 ferrocytochrome b5 + O2 + 2 H+ → oleoyl-CoA + 2 ferricytochrome b5 + H2O
EC 1.14.19.3 linoleoyl-CoA 9-desaturase: linoleoyl-CoA + 2 ferrocytochrome b5 + O2 + 2 H+ → γ-linolenoyl-CoA + 2 ferricytochrome b5 + H2O
**Pisohamate ligament**
Pisohamate ligament:
The pisohamate ligament is a ligament in the hand. It connects the pisiform to the hook of the hamate. It is a prolongation of the tendon of the flexor carpi ulnaris.
It serves as part of the origin for the abductor digiti minimi. It also forms the floor of the ulnar canal, a canal that allows the ulnar nerve and ulnar artery into the hand.
**PotC RNA motif**
PotC RNA motif:
The potC RNA motif is a conserved RNA structure discovered using bioinformatics. The RNA is detected only in genome sequences derived from DNA that was extracted from uncultivated marine bacteria. Thus, this RNA is present in environmental samples, but not yet found in any cultivated organism.
potC RNAs are located in the presumed 5' untranslated regions of genes predicted to encode either membrane transport proteins or peroxiredoxins. Therefore, it was hypothesized that potC RNAs are cis-regulatory elements, but their detailed function is unknown.
A number of other RNAs were identified in the same study, including the Lacto-usp, mraW, Ocean-V, psaA, Pseudomon-Rho, rne-II, STAXI, TwoAYGGAY, Whalefall-1 and wcaG RNA motifs, and the ykkC-yxkD leader.
**Burt's solar compass**
Burt's solar compass:
Burt's solar compass or astronomical compass/sun compass is a surveying instrument that makes use of the Sun's direction instead of magnetism. William Austin Burt invented his solar compass in 1835. The solar compass works on the principle that the direction to the Sun at a specified time can be calculated if the position of the observer on the surface of the Earth is known, to a similar precision. The direction can be described in terms of the angle of the Sun relative to the axis of rotation of the planet.
Burt's solar compass:
This angle is made up of the angle due to latitude, combined with the angle due to the season, and the angle due to the time of day. These angles are set on the compass for a chosen time of day, the compass base is set up level using the spirit levels provided, and then the sights are aligned with the Sun at the specified time, so the image of the Sun is projected onto the cross grating target. At this point the compass base will be aligned true north–south. It is then locked in place in this alignment, after which the sighting arms can be rotated to align with any landmark or beacon, and the direction can be read off the verniers as an angle relative to true north.
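The alignment computation that the instrument performs mechanically can be written out explicitly. The sketch below is an illustrative reconstruction using the standard altitude/azimuth formulas, not Burt's own tables; the Marquette latitude, equinox declination, and 3 pm hour angle are example values.
```python
# Compute the Sun's altitude and azimuth from latitude, solar declination,
# and hour angle -- the same three angles set on Burt's solar compass.
import math

def sun_direction(lat_deg: float, decl_deg: float, hour_angle_deg: float):
    """Return (altitude, azimuth) in degrees; azimuth measured from true north."""
    lat, decl, ha = (math.radians(x) for x in (lat_deg, decl_deg, hour_angle_deg))
    sin_alt = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(ha)
    alt = math.asin(sin_alt)
    # Azimuth via the standard spherical-triangle relation, clamped for safety.
    cos_az = (math.sin(decl) - math.sin(alt) * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle_deg > 0:   # afternoon: Sun is west of the meridian
        az = 360.0 - az
    return math.degrees(alt), az

# Marquette, Michigan (~46.5 N) at an equinox (declination 0), 3 pm (HA = +45):
print(sun_direction(46.5, 0.0, 45.0))   # altitude ~29 deg, azimuth ~234 deg (SW)
```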
Burt's solar compass:
This device avoided the problems of the normal magnetic compass used by surveyors, which displayed erratic readings when in a locality of high iron ore content and inconsistent and unknown local magnetic variation. The instrument was found to be so accurate that it was the choice of the United States government when surveying public lands, state boundaries, and railroad routes. It won awards from various organizations and was used by surveyors from the nineteenth into the twentieth century.
History:
Burt became a United States deputy surveyor in 1833 and began surveying government land for a territory northwest of the Ohio River. By 1834, he and his surveying crew were surveying territory in the lower peninsula of Michigan. He was surveying land in the upper peninsula of Michigan by 1835 to be used by new settlers. Here he found that his sensitive compass, which worked by magnetic field attraction, was fluctuating erratically because of the iron ore deposits in the area that interfered with the field. Burt devised an instrument attachment that relied on sunlight, not magnetism, to find true north. He called the resulting product a True Meridian Finding instrument. It overcame the vagaries of the surveyor's compass caused by interference from iron ore deposits in a local land mass district.
Burt first used the solar instrument in his Michigan surveys. He found large outcropping deposits of iron ore at Negaunee in Marquette County in his later 1844 survey of the upper peninsula of the state of Michigan. This would become known as the Marquette Iron Range. His crew found small deposits of iron ore in the state's lower peninsula at about the same time. His accidental discovery of these iron deposits in Michigan contributed much to America's Industrial Revolution. The Calumet and Hecla Mine of Michigan's Copper Country was discovered with Burt's instrument, and it became the leading copper producer in the world.
Burt's solar compass uses the location of the Sun with astronomical tables and enables surveyors to run more accurate lines, saving them much time. Burt had a model of his instrument built in 1835 by William James Young, a professional instrument maker. He then submitted this solar compass to a committee at the Franklin Institute in Philadelphia. They examined its characteristics and then awarded Burt twenty dollars in gold and the John Scott Medal for its technology. Burt patented his solar compass innovation on February 25, 1836. It has since been referred to as Burt's solar compass or astronomical compass. He used it in the 1836–1837 season to survey the fifth principal meridian in Iowa.
Burt improved on the instrument over the years with certain mechanics to make it simpler to operate with better accuracy. In 1840, he received another patent on his improved solar compass. He resubmitted the updated version of the instrument to the Franklin Institute, where they found it to be more accurate and easier to use than the first version. The Federal Land Office general surveyor E. S. Haines examined Burt's surveying instrument in December 1840 and reported in an 1841 letter that, after four years' experience with it in surveying, it was found to be superior in technology to the normal compass then used by most surveyors. The Commissioner of the Federal Land Office sent letters to surveyors general throughout the United States saying Burt's compass was being manufactured by the surveyor Henry Ware and available for purchase.
In 1849, Burt went to Washington with his son to apply for a renewal of his original solar compass patent of 1835, which was about to expire. The land commissioner committee, which consisted of senators from Michigan and other states, recognizing the value of Burt's solar compass in public land surveys, persuaded him to forego renewal and petition Congress for suitable advance compensation. Burt did as was suggested, believing that he would be compensated appropriately. However, the compensation indicated did not materialize in Burt's lifetime or at any time thereafter.
Since there was no patent on Burt's solar compass after 1850, instrument makers manufactured and sold "Burt's solar compass" to surveyors as a commercial product. The inventor spent thousands of dollars to perfect his instrument, but received only eighty dollars in sales of his tool for his labors.
In the preface to his Key to Solar Compass and Surveyor's Companion (1858), by his associate William S. Young, Burt refers to the many requests for such a book on how to use his solar compass. He explains that the common surveyor's compass had problems with the true meridian at different localities. It also gave readings that varied from day to day, differing from the expected constant or from previous readings. It was determined that a magnetic compass was prone to interference from the local attraction of iron ore. A more accurate guide for the surveyor was desired, so the solar compass was created by Burt.
Surveyor Bela Hubbard noted in 1845 that with Burt's solar compass they could survey a straight line through iron-rich country, which would have been an impossible task using the normal compass instrument. The original impetus for Burt's solar compass was for use where the old-fashioned compass was vulnerable to large iron ore deposits that made its readings unusable. It was then found to be superior in general to the common compass, even when local iron ore deposits were not a problem. A solar compass attachment to the surveyor's transit was still the recommended method for obtaining the true north direction as instructed in the 1973 surveyor's manual of the US Bureau of Land Management. The instrument was widely adopted for surveying land in the United States and mandatory for government surveying from the mid-nineteenth century until the year 2000, when satellite Global Positioning System technology became the preferred method of surveying.
Description:
Principle of operation Surveyors can locate true north by viewing the Sun or other astronomical objects such as stars or the Moon, which have a direction from any given point on the surface of the Earth. That direction can be calculated precisely for a given date and time, and is not influenced by local variations in the magnetic field due to deposits of minerals such as iron ore. Burt's instrument allows surveyors to determine the true north direction in reference to the Sun rather than being influenced by the Earth's magnetic field. It is made of brass and therefore has no magnetic influence on a compass needle, as it was originally a small attachment to a standard surveyor's common compass.
Application of the solar compass requires knowledge of the apparent motion of the Sun around the Earth, relative to the Earth as the center of the frame of reference, and more specifically, relative to the position of the instrument when set up for use in a survey. An understanding of the latitudinal and seasonal declination and the longitudinal variation with time of day is necessary, as the compass has specific sub-assemblies to take each of these variables into account.
At the Earth's equator at the equinoxes, the Sun has no seasonal declination, rises due east, and sets due west. At noon the Sun is at its highest point, directly overhead. It is at its lowest point at midnight, and appears to move in the plane of the equator. At locations away from the equator, the noon altitude of the Sun will be reduced by the angle of the Earth's local horizontal to the polar axis – the latitude – so at a latitude of 10° south or north, the noon altitude will be 90° − 10° = 80° at the equinoxes. This angle is known to the surveyor and is set on the latitude arc of the instrument so that, with the base leveled and the compass aligned with true north, the axis of the hour arc will be parallel to the polar axis.
The rotational axis of the Earth is tilted from the perpendicular to the plane of its orbit around the Sun. This tilt causes the altitude angle of the Sun to vary with the seasons by an amount which depends on the direction of the misalignment. It varies predictably throughout the year, increasing and decreasing smoothly at a calculable rate, and is the same everywhere on Earth at the same time. This value is also known to the surveyor, as it is published in a set of tables in an almanac. A correction for this declination is made on the declination arc of the compass, which is mounted to rotate on the polar axis on top of the latitude arc, as the latitude and declination angles are additive. The angle of the Sun due to time of day is set on the hour angle arc, which is perpendicular to the polar axis. This angle is calculated from longitude and GMT, also tabulated for the convenience of surveyors and navigators, as the calculations are tedious to perform in the field and any error could have extensive effects.
Assuming a spherical Earth, if a straight line were drawn from the rising to the setting sun, and from the sun at noon and at midnight on the equinoxes, both of these lines would pass through the Earth's center and the equator would intersect these lines.
This is not so when the sun has north or south declination, because its apparent motion will be at an angle to the equatorial plane equal to the amount of the sun's declination north or south, so that when the sun has north or south declination, and the Earth is regarded as the center of its revolutions, the line from the Sun to the center of the Earth describes a cone.
This conical motion of the Sun can also be illustrated by the dished spokes of the wheel of a covered wagon, with the rim representing the Sun's apparent path, the hub representing the Earth, and the spokes being lines drawn from the Sun's path. It may be seen that a line drawn from the Sun to the Earth's center would pass north or south of the equator, equal in degree to its declination north or south. The instrument has an equatorial movement, with a mechanical attachment for sighting a star as a reference.
Description:
Construction and operation Burt's solar compass consists of a main plate mounted to the tripod stand and leveled using the two orthogonal attached spirit levels. It carries a common compass needle box, having divisions for the north end of the needle of about 36 degrees, with a vernier to read the needle's variation, and the three adjustable arcs of the solar instrument: one is set for the latitude of the location; another for the seasonal declination of the Sun; and the third for the hour of the day, adjusted for the longitude of the location. The sights for setting alignment by the Sun are mounted on the movable arm of the declination arc and have a small lens for focusing an image of the Sun's disc on the target grating. The upper plate is aligned with the Sun and remains stationary after polar alignment, while bearings are taken with the sights on the lower plate. The lower plate carries the surveying sights and can be rotated relative to the upper plate, and may be clamped in any position to the upper plate. There is a graduated ring on the lower plate which displays the relative rotation between the north-aligned top plate and the surveying sight-line on the bottom plate, and has verniers to allow precise reading of the angle.
The latitude arc is attached perpendicular to the upper plate. The hour arc is fixed perpendicular to the movable upper part of the latitude arc, and the declination arc swivels on a polar axis over and perpendicular to the hour arc. The positions of the arcs can be finely adjusted by screws and the angles read with a vernier. Clamp screws are provided where necessary to lock the components in place. At one end of the adjustable limb of the declination arc, small lenses are set up to focus an image of the Sun's disc onto a target plate inscribed with parallel pairs of perpendicular lines to frame the image when correctly aligned.
The operation is as follows: Set the Sun's declination for that day, obtained by means of tables, on a scale attached perpendicular to the time arc.
Description:
Set the latitude of the location on a scale in the alidade.
Set the approximate local time on the arc that rotates on a polar axis.
Description:
Orient the instrument, while it remains level, so the image of the Sun appears between four scribed lines on the screen opposite to the lens. The time dial is finely adjusted to bring the image between a second pair of scribed lines perpendicular to the first pair. The main axis of the upper plate will then point to the pole.
Description:
The pinnula (sighting vanes) may then be aligned with a terrestrial object and its bearing read from the angle scale.
The magnetic declination may be read from a compass attached to the base plate.
Reception:
Burt improved the instrument over the years and it won awards from various organizations for its technology as simple, rugged, inexpensive, reliable and accurate; it was used by surveyors from the nineteenth into the twentieth century. In 1851, he exhibited his latest version at the Great Exhibition World's Fair in London. There it was examined and endorsed by the scholar John Herschel. Burt received a prize from the fair for his compass instrument design. He then received another medal for his simpler, more accurate version from the jurors of Astronomical Instruments, with a personal compliment by Albert Edward, Prince of Wales, on October 15, 1851, at Hyde Park, London, England.
This instrument was invented to get away from the highly variable and unreliable readings given by the common compass of the day in localities with magnetic field anomalies caused by large iron ore deposits. The instrument was found to be so accurate that it was specified by the United States government for surveying public lands, state boundaries, and railroad routes. Burt's instrument was used to survey 75 per cent of the public lands of the United States, consisting of nearly a billion acres. It saved the government millions of dollars because of its generally inexpensive price and the accuracy of the survey. It surveyed mineral lands in many states, including Michigan, Wisconsin, Minnesota, Arkansas, and Colorado. The expenditure to survey a section of land was only a fraction of what it had cost before his invention: for example, the boundary line between Iowa and Minnesota had been surveyed at $120 per mile ($75/km) with the old-fashioned instruments, while with Burt's solar compass it cost only $15 per mile ($9.3/km).
**Quantum Computing: A Gentle Introduction**
Quantum Computing: A Gentle Introduction:
Quantum Computing: A Gentle Introduction is a textbook on quantum computing. It was written by Eleanor Rieffel and Wolfgang Polak, and published in 2011 by the MIT Press.
Topics:
Although the book approaches quantum computing through the model of quantum circuits, it is focused more on quantum algorithms than on the construction of quantum computers. It has 13 chapters, divided into three parts: "Quantum building blocks" (chapters 1–6), "Quantum algorithms" (chapters 7–9), and "Entangled subsystems and robust quantum computation" (chapters 10–13). After an introductory chapter overviewing related topics including quantum cryptography, quantum information theory, and quantum game theory, chapter 2 introduces quantum mechanics and quantum superposition using polarized light as an example, also discussing qubits, the Bloch sphere representation of the state of a qubit, and quantum key distribution. Chapter 3 introduces direct sums, tensor products, and quantum entanglement, and chapter 4 includes the EPR paradox and Bell's theorem on the impossibility of local hidden variable theories, as quantified by Bell's inequality. Chapter 5 discusses unitary operators, quantum logic gates, quantum circuits, and functional completeness for systems of quantum gates. Chapter 6, the final chapter of the building-block section, discusses (classical) reversible computing, and the conversion of arbitrary computations to reversible computations, a necessary step to performing them on quantum devices. In the section of the book on quantum algorithms, chapter 7 includes material on quantum complexity theory and the Deutsch algorithm, Deutsch–Jozsa algorithm, Bernstein–Vazirani algorithm, and Simon's algorithm, algorithms devised to prove separations in quantum complexity by solving certain artificial problems faster than could be done classically. It also covers the quantum Fourier transform. Chapter 8 covers Shor's algorithm for integer factorization, and introduces the hidden subgroup problem. Chapter 9 covers Grover's algorithm and the quantum counting algorithm for speeding up certain kinds of brute-force search. The remaining chapters return to the topic of quantum entanglement and discuss quantum decoherence, quantum error correction, and its use in designing robust quantum computing devices, with the final chapter providing an overview of the subject and connections to additional topics. Appendices provide a graphical approach to tensor products of probability spaces, and extend Shor's algorithm to the abelian hidden subgroup problem.
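As a taste of the material described in chapter 7, the following numpy sketch implements the Deutsch algorithm (an illustrative reconstruction, not code from the book): a single oracle query decides whether a one-bit function f is constant or balanced.
```python
# Deutsch's algorithm for a one-bit function f: one oracle query decides
# whether f is constant or balanced. Basis ordering: index = 2*x + y.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def oracle(f):
    """Permutation matrix for U_f |x, y> = |x, y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1]).astype(float)   # |0>|1>
    state = np.kron(H, H) @ state                    # superposition + phase kickback
    state = oracle(f) @ state                        # the single oracle query
    state = np.kron(H, I2) @ state                   # interfere on the first qubit
    p_one = np.sum(np.abs(state[2:]) ** 2)           # P(first qubit measures 1)
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```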
Audience and reception:
The book is suitable as an introduction to quantum computing for computer scientists, mathematicians, and physicists, requiring only a background in linear algebra and the theory of complex numbers, although reviewer Donald L. Vestal suggests that additional background in the theory of computation, abstract algebra, and information theory would also be helpful. Prior knowledge of quantum mechanics is not required. Reviewer Kyriakos N. Sgarbas has some minor notational quibbles with the book's presentation, and complains that the level of difficulty is uneven and that it lacks example solutions. However, reviewer Valerio Scarani calls the book "a masterpiece", particularly praising its orderly arrangement, its well-thought-out exercises, the self-contained nature of its chapters, and its inclusion of material warning readers against falling into common pitfalls.
Related works:
There are many other textbooks on quantum computing; for instance, Scarani lists Quantum Computer Science: An Introduction by N. David Mermin (2007), An Introduction to Quantum Computing by Kaye, Laflamme, and Mosca (2007), and A Short Introduction to Quantum Information and Quantum Computation by Michel Le Bellac (2006). Sgarbas additionally lists Quantum Computing Explained by D. McMahon (2008) and Quantum Computation and Quantum Information by M. A. Nielsen and I. L. Chuang (2000).
**N-ary group**
N-ary group:
In mathematics, and in particular universal algebra, the concept of an n-ary group (also called n-group or multiary group) is a generalization of the concept of a group to a set G with an n-ary operation instead of a binary operation. By an n-ary operation is meant any map f: G^n → G from the n-th Cartesian power of G to G. The axioms for an n-ary group are defined in such a way that they reduce to those of a group in the case n = 2. The earliest work on these structures was done in 1904 by Kasner and in 1928 by Dörnte; the first systematic account of (what were then called) polyadic groups was given in 1940 by Emil Leon Post in a famous 143-page paper in the Transactions of the American Mathematical Society.
Axioms:
Associativity The easiest axiom to generalize is the associative law. Ternary associativity is the polynomial identity (abc)de = a(bcd)e = ab(cde), i.e. the equality of the three possible bracketings of the string abcde in which any three consecutive symbols are bracketed. (Here it is understood that the equations hold for all choices of elements a, b, c, d, e in G.) In general, n-ary associativity is the equality of the n possible bracketings of a string consisting of n + (n − 1) = 2n − 1 distinct symbols with any n consecutive symbols bracketed. A set G which is closed under an associative n-ary operation is called an n-ary semigroup. A set G which is closed under any (not necessarily associative) n-ary operation is called an n-ary groupoid.
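As a compact formula, the bracketing condition just described can be restated as follows (a LaTeX rendering of the definition above, with f the n-ary operation):

```latex
% n-ary associativity: all n bracketings of a (2n-1)-letter word agree.
f\bigl(x_1,\dots,x_{i-1},\, f(x_i,\dots,x_{i+n-1}),\, x_{i+n},\dots,x_{2n-1}\bigr)
  = f\bigl(x_1,\dots,x_{j-1},\, f(x_j,\dots,x_{j+n-1}),\, x_{j+n},\dots,x_{2n-1}\bigr)
\qquad \text{for all } 1 \le i < j \le n \text{ and all } x_k \in G.
```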
Axioms:
Inverses / unique solutions The inverse axiom is generalized as follows: in the case of binary operations, the existence of an inverse means that ax = b has a unique solution for x, and likewise xa = b has a unique solution. In the ternary case we generalize this to abx = c, axb = c and xab = c each having unique solutions, and the n-ary case follows the same pattern; a set with an n-ary operation admitting such unique solutions is an n-ary quasigroup.
Axioms:
Definition of n-ary group An n-ary group is an n-ary semigroup which is also an n-ary quasigroup.
Identity / neutral elements In the 2-ary case, there can be zero or one identity elements: the empty set is a 2-ary group, since the empty set is both a semigroup and a quasigroup, and every inhabited 2-ary group is a group. In n-ary groups for n ≥ 3 there can be zero, one, or many identity elements.
Axioms:
An n-ary groupoid (G, f) with f(x_1, x_2, …, x_n) = x_1 ◦ x_2 ◦ ⋯ ◦ x_n, where (G, ◦) is a group, is called reducible or derived from the group (G, ◦). In 1928 Dörnte published the first main results: an n-ary groupoid which is reducible is an n-ary group; however, for all n > 2 there exist inhabited n-ary groups which are not reducible. In some n-ary groups there exists an element e (called an n-ary identity or neutral element) such that any string of n elements consisting entirely of e's, apart from one place, is mapped to the element at that place. E.g., in a quaternary group with identity e, eeae = a for every a.
Axioms:
An n-ary group containing a neutral element is reducible. Thus, an n-ary group that is not reducible does not contain such elements. There exist n-ary groups with more than one neutral element. If the set of all neutral elements of an n-ary group is non-empty, it forms an n-ary subgroup. Some authors include an identity in the definition of an n-ary group, but as mentioned above such n-ary operations are just repeated binary operations. Groups with intrinsically n-ary operations do not have an identity element.
Axioms:
Weaker axioms The axioms of associativity and unique solutions in the definition of an n-ary group are stronger than they need to be. Under the assumption of n-ary associativity it suffices to postulate the existence of the solution of equations with the unknown at the start or end of the string, or at one place other than the ends; e.g., in the 6-ary case, xabcde = f and abcdex = f, or an expression like abxcde = f. Then it can be proved that the equation has a unique solution for x in any place in the string.
Axioms:
The associativity axiom can also be given in a weaker form.
Example:
The following is an example of a three-element ternary group, one of four such groups, given by listing all 27 products (here xyz = w abbreviates f(x, y, z) = w): aaa = a, aab = b, aac = c, aba = c, abb = a, abc = b, aca = b, acb = c, acc = a, baa = b, bab = c, bac = a, bba = a, bbb = b, bbc = c, bca = c, bcb = a, bcc = b, caa = c, cab = a, cac = b, cba = b, cbb = c, cbc = a, cca = a, ccb = b, ccc = c.
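A quick machine check makes it easy to believe that this table really satisfies the two n-ary group axioms. The following sketch is my own illustration, not part of the original text; it verifies ternary associativity and the unique-solution property by brute force. Identifying a, b, c with 0, 1, 2, the table coincides with f(x, y, z) = x − y + z mod 3.

```python
from itertools import product

G = "abc"
# The 27 products from the table above, keyed by the concatenated arguments.
table = {
    "aaa": "a", "aab": "b", "aac": "c", "aba": "c", "abb": "a", "abc": "b",
    "aca": "b", "acb": "c", "acc": "a", "baa": "b", "bab": "c", "bac": "a",
    "bba": "a", "bbb": "b", "bbc": "c", "bca": "c", "bcb": "a", "bcc": "b",
    "caa": "c", "cab": "a", "cac": "b", "cba": "b", "cbb": "c", "cbc": "a",
    "cca": "a", "ccb": "b", "ccc": "c",
}
f = lambda x, y, z: table[x + y + z]

# Ternary associativity: the three bracketings of every 5-letter word agree.
assert all(
    f(f(a, b, c), d, e) == f(a, f(b, c, d), e) == f(a, b, f(c, d, e))
    for a, b, c, d, e in product(G, repeat=5)
)

# Quasigroup axiom: each equation with one unknown has exactly one solution.
assert all(
    sum(f(x, a, b) == c for x in G) == 1
    and sum(f(a, x, b) == c for x in G) == 1
    and sum(f(a, b, x) == c for x in G) == 1
    for a, b, c in product(G, repeat=3)
)
print("ternary group axioms verified")
```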
(n,m)-group:
The concept of an n-ary group can be further generalized to that of an (n,m)-group, also known as a vector valued group, which is a set G with a map f: G^n → G^m where n > m, subject to axioms similar to those for an n-ary group except that the result of the map is a word consisting of m letters instead of a single letter. So an (n,1)-group is an n-ary group. (n,m)-groups were introduced by G. Čupona in 1983.
**Slim jim (lock pick)**
Slim jim (lock pick):
A slim jim (more technically known as a lockout tool) is a thin strip of metal (usually spring steel) roughly 60 centimetres (24 in) long and about 2–4 centimetres (0.79–1.57 in) wide, originally marketed under that name by HPC Inc., a manufacturer and supplier of specialty locksmithing tools. Slim jims are used to unlock automobile doors without use of a key or lock pick. The tool acts directly on the levers and interconnecting rods that operate the door, completely avoiding the complexity of dealing with the lock mechanism itself. The hooked end of the tool is slipped between a car's window and the rubber seal, catching the rods that connect to the lock mechanism. With careful manipulation, the door can be opened. Unskilled use of the tool will often detach the lock rods, leaving the lock inoperable even with the key; this is often a clue that someone has attempted to break into a car. Newer cars have incorporated internal defenses against this tool, such as barrier blocks at the bottom of the window opening that prevent entry, and shrouds around the operating rods and the lock cylinder that prevent manipulation of the internal linkages.
Slim jim (lock pick):
There have been unsubstantiated claims that in modern vehicles a slim jim could set off the side airbag deployment system, possibly injuring the person using it. However, according to research by the National Highway Traffic Safety Administration, this has not been verified, and manufacturers state it is impossible. An episode of MythBusters showed experimenters who were likewise unable to deploy an airbag with a slim jim.
**Smear layer**
Smear layer:
In dentistry, the smear layer is a layer found on root canal walls after root canal instrumentation. It consists of microcrystalline and organic particle debris. It was first described in 1975 and research has been performed since then to evaluate its importance in bacteria penetration into the dentinal tubules and its effects on endodontic treatment. More broadly, it is the organic layer found over all hard tooth surfaces.
Description:
Early studies of dentinal walls after cavity preparation, performed by Brännström and Johnson (1974), showed the presence of a thin layer of debris that was 2 to 5 micrometres thick. In 1975 McComb and Smith first described the smear layer. They observed an amorphous layer of debris, with an irregular and granular surface, on instrumented dentinal walls using a scanning electron microscope (SEM). The thin, granular microcrystalline layer of debris was 2–5 micrometres thick and was found packed onto the canal wall. The authors stated that “most standard instrumentation techniques produced a canal wall that was smeared and packed with debris.” In the same year Mader et al. studied the morphological characteristics of the smear layer in teeth that were endodontically instrumented with K-type files and irrigated with 5.25% NaOCl. The smear layer was examined from two aspects: the first looked “down onto” the smear layer, and the second viewed it from the side. Photomicrographs obtained by SEM showed that the smear layer consists of two confluent components: a thin superficial layer, 1–2 micrometres thick, overlying a densely packed layer that penetrated into the dentinal tubules to depths of up to 40 micrometres. The packed material showed finger-like structures projecting into the tubules from the canal wall.
Contents:
Composition In 1984 Pashley described the smear layer as being composed of two phases: an organic phase, composed of collagen residues and glycosaminoglycans from the extracellular matrix of pulp cells, which acts as a matrix for an inorganic phase. This organo-mineral content is composed of two distinct superimposed layers. The first layer covers the canal wall, is loosely adherent and is easy to remove. The second layer, however, occludes the dentinal tubules and strongly adheres to the canal walls.
Contents:
Contents of the smear layer The smear layer contains dentine particles, residual vital pulp tissue, residual necrotic pulp tissue, erythrocytes, remnants of odontoblast processes, saliva, and bacterial components.

Thickness of the smear layer The smear layer is a physical barrier that decreases the penetration of disinfecting agents into dentinal tubules and, consequently, their efficacy. The most important cause of endodontic failure is the residual microorganisms that are harboured within the root canal system and hard-to-reach areas. Studies were conducted into the thickness of the smear layer created by different instruments, to enhance understanding and aid the removal of the smear layer, and therefore the removal of any bacteria that might otherwise be entombed by it. The results showed that the ProTaper series of rotary instruments created the thickest smear layer, followed by the ProFile series of rotary instruments; hand instruments created the thinnest. Increasing the roughness of instruments has also been found to increase the thickness of the smear layer.
Contents:
Bacterial Penetration Olgart et al. (1974) examined the penetration of bacteria into the dentinal tubules of ground, fractured and acid-treated dentin surfaces. In vitro, the penetration of bacteria into tubules of intact dentin exposed by fracture was compared in pairs of teeth, one of which in each pair was mounted with intrapulpal hydrostatic pressure (30 mmHg). In vivo, intra-pair comparisons of bacterial invasion into dentinal tubules beneath ground, fractured and acid-treated surfaces were made. They observed that an outward flow of fluid into the tubules due to intrapulpal pressure mechanically hindered bacterial growth, and that the debris and smear layer produced by grinding obstructed bacterial invasion into the tubules. However, this barrier seemed to be removed after a few days, which allowed bacterial growth into intact dentin. Olgart concluded that acid produced by microorganisms may dissolve the smear layer, allowing bacteria to pass into the dentinal tubules. However, when Pashley et al. (1981) studied the scanning electron microscope (SEM) appearance of dentin before and after removing successive layers of the smear layer, they came to a different conclusion. Twenty dentin disks were cut from extracted human third molars. The dentin surface of the disks was etched with 6% citric acid for 5, 15, 30, 45 and 60 seconds. SEM examination showed that citric acid was able to remove the smear layer in successive layers according to etching time, finally exposing the dentinal tubules. Pashley concluded that the maintenance of the smear layer established a protective diffusion barrier. Gettleman et al. (1991) assessed the influence of the smear layer on the adhesion of sealer cements to dentin. A total of 120 teeth were tested, 40 per sealer (AH26, Sultan, and Sealapex), 20 each with and without the smear layer. The teeth were split longitudinally, and the internal surfaces were ground flat. In the smear layer-free specimens the smear layer was removed by washing for 3 minutes with 17% EDTA followed by 5.25% NaOCl. Using a specially designed jig, the sealer was placed into a 4-mm wide × 4-mm deep well which was then set onto the tooth at a 90-degree angle and allowed to set for 7 days. This set-up was then placed into a mounting jig designed for the Instron Universal Testing Machine so that only a tensile load was applied, without shearing. The set-up was subjected to a tensile load at a crosshead speed of 1 mm per minute. The only significant difference with regard to the presence or absence of the smear layer was found with AH26, which had a stronger bond when the smear layer was removed.
Removal of the smear layer:
Why is the smear layer removed? The smear layer can affect bonding, disinfection and obturation, which is why its removal is considered important. As discussed earlier, this is largely because bacteria can be left entombed within the smear layer if it is not removed.
Removal of the smear layer:
Impaired bonding Removing the smear layer exposes the underlying bulk dentine and the orifices of the dentinal tubules, which are occluded following mechanical tooth preparation. This allows interaction between the bulk dentine and the restorative resin, ensuring an effective bond and seal. Failure to remove the smear layer may compromise bond strength and sealing ability, as the smear layer itself may not be strongly bound to the underlying dentine.
Removal of the smear layer:
Impaired disinfection If the smear layer is not removed during dental procedures, the disinfection process will be compromised because the disinfectant is unable to penetrate the infected dentinal tubules. The presence of bacteria in dentinal tubules can reduce the outward flow of dentinal fluid, promoting disease and an increased diffusion rate of substrates into the tubules. Disease promotion can lead to inflammatory changes in the pulpo-dentinal complex, leading to pulpitis, pulpal necrosis, infection of the root canal system and periapical disease. This can cause pain and discomfort, and further complications if left unchecked.
Removal of the smear layer:
Impaired obturation Following mechanical tooth preparation during endodontic procedures, the smear layer can act as a barrier between filling materials and the root canal wall. This compromises the formation of a satisfactory seal, leading to possible coronal leakage. Bacteria left underneath a filling material can multiply and allow the invasion of more bacteria into the dentinal tubules, leading to infection and failure of treatment. Teeth prepared using glass ionomer (GI) sealer and lateral condensation of gutta-percha (GP) showed higher rates of leakage when the smear layer was not removed.
Removal of the smear layer:
How is the smear layer removed? Endodontic Irrigation Because the smear layer produced during endodontic instrumentation contains both inorganic and organic material, it cannot be removed by any of the presently available root canal irrigants alone. Therefore, the recommended protocol for smear layer removal is NaOCl followed by EDTA (ethylenediaminetetraacetic acid) or citric acid. Water, saline, chlorhexidine (CHX), or iodine compounds have no dissolving effect on the smear layer.
Removal of the smear layer:
Sodium hypochlorite (NaOCl) This has been the most widely used root canal irrigation solution for several decades because it is inexpensive, can dissolve infected necrotic tissue, and is bactericidal. Its antimicrobial effectiveness is due to its high pH, which interferes with cytoplasmic membrane integrity through irreversible enzymatic inhibition, biosynthetic alterations in cellular metabolism and phospholipid degradation. When hypochlorite contacts proteins it causes amino acid degradation and hydrolysis through the action of chloramine molecules; thus necrotic tissue and pus are dissolved. Both citric acid and EDTA immediately reduce the available chlorine in solution, rendering the sodium hypochlorite irrigant ineffective against bacteria and necrotic tissue. Hence, citric acid or EDTA should never be mixed with sodium hypochlorite.
Removal of the smear layer:
Ethylenediaminetetraacetic acid (EDTA) This is the most widely used chelating agent. Its prominence arises from its ability to sequester di- and tricationic metal ions such as Ca2+ and Fe3+. With direct exposure for an extended time, EDTA extracts bacterial surface proteins by combining with metal ions from the cell envelope, which can eventually lead to bacterial death. Testing and clinical evidence have shown that 17% EDTA needs to be placed inside the root canal for at least one minute to effectively dissolve the inorganic components of the smear layer; if it is placed within the root canal for less than one minute, the smear layer will not be optimally removed. The recommended time for smear layer removal is two minutes. EDTA alone cannot remove the smear layer completely: the inorganic portion is removed, but the organic matter is left partially blocking the dentinal canal openings. EDTA effectively abolishes the tissue-dissolving effect of NaOCl and should therefore be used only at the end of the treatment, as the final rinse.
Removal of the smear layer:
Citric acid 10% citric acid can be used as an alternative to EDTA for the final rinse to remove the smear layer after the use of NaOCl. Citric acid is more potent than EDTA. It is used as a component in MTAD and Tetraclean, combination products for smear layer removal. In the MTAD preparation, citric acid helps remove the smear layer, allowing the doxycycline to enter the dentinal tubules and exert an antibacterial effect.

Paste lubricants Gel-based lubricants can be placed on the instrument before insertion into the root canal to reduce friction. Examples include “Glyde” and “Fileze”, both of which contain the chelating agent EDTA, which can help enlarge narrow root canals by softening the canal walls.

Dentine conditioners These are generally acidic solutions which dissolve, or at least solubilize, the smear layer in an attempt to expose the underlying dentine to the bonding agent. Examples include phosphoric acid, nitric acid, maleic acid, citric acid and EDTA. Most manufacturers now supply a single agent to simultaneously etch enamel and condition the dentine.
Removal of the smear layer:
Three step dentine bonding agents Three separate solutions (etch, prime and bond) are applied to the tooth surface. Examples include Optibond and Adper Scotchbond MP. It is important to rinse the tooth after application of the etch to ensure removal of the smear layer; if there is no rinsing stage after conditioning, the smear layer can be re-deposited on the dentine surface. The dentine is then ready to be treated with the primer and the dentine bonding agent. Manufacturers have attempted to simplify this process by producing a variety of products that combine these stages.
Removal of the smear layer:
Two step dentine bonding agents The dentine conditioner and primer can be applied in one stage, often referred to as a “self-etching primer”; an example is Clearfil SE Bond. The primer used is acidic, dissolving the smear layer while also performing the functions of a primer. Self-etching primers should not be washed off, as this would remove the primer and interfere with the bonding process; the smear layer is instead incorporated within the primer, which has direct contact with the bulk dentine. Two-stage systems in which the prime and bond steps are combined and the etch is applied separately also exist; examples include Prime & Bond NT and Optibond Solo.
Removal of the smear layer:
One step dentine bonding agents Some manufacturers have produced products able to condition, prime and bond in one application. Such systems include Fuji Bond, Scotchbond Universal and Xeno III. It has been argued that self-etching systems may not etch enamel as effectively as phosphoric acid alone.
Further research:
Clark-Holke et al. (2003) focused on determining the effect of the smear layer on the magnitude of bacterial penetration through the apical foramen around obturating materials. Thirty extracted teeth were divided into two test groups; the first group had the smear layer removed by rinsing with 17% EDTA, while in the second group the smear layer was left intact. Canal preparation and obturation using lateral condensation, gutta-percha, and AH 26 sealer were performed on all of the teeth. The model systems consisted of an upper chamber attached at the cemento-enamel junction and a lower chamber at the apices of the teeth. Standardized bacterial suspensions containing Fusobacterium nucleatum, Campylobacter rectus and Peptostreptococcus micros were inoculated into the upper chambers, and the models were incubated anaerobically at 37 °C. In the first group, 6 teeth showed bacterial leakage, while the second group showed no bacterial leakage. This study indicated that removal of the smear layer reduced the leakage of bacteria through the root canal system. Kokkas et al. (2004) examined the effect of the smear layer on the penetration depth of three different sealers (AH Plus, Apexit, and a Grossman type, Roth 811) into the dentinal tubules. Sixty-four extracted human single-rooted teeth were divided into two groups. The smear layer remained intact in all the roots of group A, while complete removal of the smear layer in group B was achieved by irrigation with 3 ml of 17% EDTA for 3 min, followed by 3 ml of 1% NaOCl solution. Ten roots from each group were obturated with AH Plus and laterally condensed gutta-percha points, and the same process was repeated for the remaining roots using the sealers Apexit and Roth 811 correspondingly. After complete setting, the maximum penetration depth of the sealers into the dentinal tubules was examined at the upper, middle, and lower levels. The smear layer prevented all the sealers from penetrating the dentinal tubules. In contrast, in smear layer–free root canals, all the sealers penetrated the dentinal tubules, although the depth of penetration varied between the sealers. Furthermore, the smear layer adversely affected the coronal and apical sealing ability of the sealers.
Further research:
Çobankara et al. (2004) determined the effect of the smear layer on apical and coronal leakage in root canals obturated with AH26 or RoekoSeal sealers. A total of 160 maxillary anterior teeth were used. Eight groups were created from all possible combinations of three factors: smear layer (present/absent), leakage assessment (apical/coronal), and sealer used (AH26/RoekoSeal). All teeth were obturated using the lateral condensation technique with gutta-percha, and a fluid filtration method was used to test apical or coronal leakage. According to the results of this study, the smear (+) groups displayed higher apical and coronal leakage than the smear (−) groups for both root canal sealers, and apical leakage was significantly higher than coronal leakage for both sealers. It was determined that removal of the smear layer has a positive effect in reducing apical and coronal leakage for both AH26 and RoekoSeal root canal sealers. However, Bertacci et al. (2007) evaluated the ability of a warm gutta-percha obturation system, Thermafil, to fill lateral channels in the presence or absence of the smear layer. Forty single-rooted extracted human teeth were randomly divided into two groups, one of which had the smear layer removed by 5 ml of 5% NaOCl followed by 2.5 ml of 17% EDTA. Obturation was performed using AH Plus sealer and Thermafil. Specimens were cleared in methyl salicylate and analyzed under a stereomicroscope to evaluate the number, length, and diameter of lateral channels. All lateral channels were found to be filled in both groups, and no statistically significant differences regarding number, length, or diameter were observed between the two groups. It was concluded that the smear layer did not prevent the sealing of lateral channels. Yildirim et al. (2008) investigated the effect of the smear layer on apical microleakage in teeth obturated with MTA. Fifty single-rooted central maxillary teeth were instrumented and randomly divided into 2 groups. In the first group (smear [+]), the teeth were irrigated with only 5.25% NaOCl. In the second group (smear [−]), the teeth were irrigated with EDTA (17%) and NaOCl (5.25%) to remove the smear layer. The teeth were then filled with MTA. The computerized fluid filtration method was used for evaluation of apical microleakage, and the quantitative apical leakage of each tooth was measured after 2, 30, and 180 days. There was no difference between the groups after 2 days, but removal of the smear layer caused significantly more apical microleakage after 30 and 180 days. It was concluded that the apical microleakage of MTA is less when the smear layer is present than when it is absent. Saleh et al. (2008) studied the effect of the smear layer on the penetration of bacteria along different root canal filling materials. A total of 110 human root segments were instrumented to size 80 under irrigation with 1% sodium hypochlorite. Half of the roots were irrigated with a 5-ml rinse of 17% EDTA to remove the smear layer. Roots were filled with gutta-percha (GP) and AH Plus sealer (AH), GP and Apexit sealer (AP), or RealSeal cones and sealer (RS). Following storage in humid conditions at 37 °C for 7 days, the specimens were mounted into a bacterial leakage test model for 135 days. Survival analyses were performed to calculate the median time of leakage, and the log-rank test was used for pairwise comparisons of groups.
Selected specimens were longitudinally sectioned and inspected by scanning electron microscopy for the presence of bacteria at the interfaces. In the presence of the smear layer, RS and AP leaked significantly more slowly than in its absence. In the absence of the smear layer, AH leaked significantly more slowly than RS. It was concluded that removal of the smear layer did not impair bacterial penetration along root canal fillings; a comparison of the sealers revealed no difference except that AH performed better than RS in the absence of the smear layer. Fachin et al. (2009) evaluated whether smear layer removal has any influence on the filling of the root canal system, by examining the obturation of lateral canals, secondary canals and apical deltas. Eighty canines were randomly divided into two groups according to their irrigation regimen. Both groups were irrigated with 1% NaOCl during canal shaping, but only the teeth in Group II received a final irrigation with 17% EDTA for smear layer removal. The root canals were obturated with lateral condensation of gutta-percha and the specimens were cleared, allowing observation under the microscope. The results showed that in Groups I and II, 42.5% and 37.5% of the teeth, respectively, presented at least one filled canal ramification. In conclusion, smear layer removal under the conditions tested in this study did not affect the obturation of root canal ramifications when lateral condensation of gutta-percha was the technique used for root canal filling.
**Colour Index International**
Colour Index International:
Colour Index International (CII) is a reference database jointly maintained by the Society of Dyers and Colourists and the American Association of Textile Chemists and Colorists. It currently contains over 27,000 individual products listed under 13,000 Colour Index Generic Names. It was first printed in 1925 but is now published solely on the World Wide Web. The index serves as a common reference database of manufactured colour products and is used by manufacturers and consumers, such as artists and decorators.
Colour Index International:
Colorants (both dyes and pigments) are listed using a dual classification which use the Colour Index Generic Name (the prime identifier) and Colour Index Constitution Numbers. These numbers are prefixed with C.I. or CI, for example, C.I. 15510. (This abbreviation is sometimes mistakenly thought to be CL, due to the font used to display it.) A detailed record of products available on the market is presented under each Colour Index reference. For each product name, Colour Index International lists the manufacturer, physical form, and principal uses, with comments supplied by the manufacturer to guide prospective customers.
Colour Index International:
For manufacturers and consumers, the availability of a standard classification system for pigments is helpful because it resolves conflicting historic, proprietary, and generic names that have been applied to colours.
List of Colour Index Constitution Numbers:
The colour index numbers are 5-digit numbers grouped into numerical ranges according to the chemical structure.
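As a sketch of how the constitution-number scheme can be used in practice, the snippet below maps a C.I. constitution number to its chemical class. The ranges shown are an illustrative subset of the published groupings, not the full official table, and the function name is my own.

```python
# Illustrative subset of C.I. constitution-number ranges; the official
# Colour Index table is far more detailed than this.
CI_RANGES = [
    (10000, 10299, "Nitroso"),
    (10300, 10999, "Nitro"),
    (11000, 19999, "Monoazo"),
    (20000, 29999, "Disazo"),
    (74000, 74999, "Phthalocyanine"),
    (77000, 77999, "Inorganic"),
]

def chemical_class(ci_number: int) -> str:
    """Return the chemical class for a 5-digit C.I. constitution number."""
    for low, high, name in CI_RANGES:
        if low <= ci_number <= high:
            return name
    return "outside this illustrative subset"

print(chemical_class(15510))  # the C.I. 15510 example above -> Monoazo
print(chemical_class(77891))  # titanium dioxide white      -> Inorganic
```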
Print editions:
1st (-), 2nd (1956), 3rd (1971)
**Arlberg technique**
Arlberg technique:
The Arlberg technique is a progressive system that takes the skier from the snowplough turn to the parallel christie through measured stages of improvement. The system, or slightly modified versions of it, remains in widespread use to this day. Modern ski equipment is also capable of a more efficient turning style known as carving, which uses entirely different techniques and movements. Some ski schools have started moving students directly from the snowplough to carving as early as possible, avoiding stemming habits that may be difficult to un-learn. The system was developed by Hannes Schneider while working as an instructor in the Arlberg mountains in Austria. His methods were popularized in Europe in a series of films in the 1920s and 30s, and the technique became popular in the United States after Schneider moved there in 1939, having been jailed during the Anschluss.
History:
Hannes Schneider took a job as a ski instructor at the Hotel Post in Sankt Anton am Arlberg in Austria in 1907. He began developing various modifications to the ski techniques current at the time, and the Arlberg technique grew out of this process. During World War I he used the technique to train Austria's alpine troops, and fought with the Austrian army in Russia and on the Italian front. After the war, he returned to the Hotel Post and continued to develop the Arlberg technique.
History:
In 1920 the German filmmaker Arnold Fanck visited Arlberg and produced an early instructional ski film, Das Wunder des Schneeschuhs. This introduced the Arlberg technique to the world, and it was quickly taken up by ski schools. A follow-up film in 1931, The White Ecstasy, followed the tribulations of two friends who travel to Arlberg to learn how to ski. This film was produced along with an instructional book, which was featured in the film; stills from the film were also used to illustrate the book. By 1925 Schneider's technique had become known as the "Arlberg Technique". He trained Otto Schneibs and Hannes Schroll to become emissaries to the United States for the now-certified technique, as described in Schneibs's book, Modern Ski Technique. The book and technique helped underpin the success of the Dartmouth College ski team, where Schneibs was a ski coach. Schneider travelled to the United States in 1936 to demonstrate his techniques at a winter sports show in the Boston Garden. The demonstrations were held on a wooden slide covered with shaved ice, and he repeated them at Madison Square Garden two weeks later. The techniques were soon taken up by US instructors.
History:
Schneider was jailed during the Anschluss, but his US contacts worked to secure his freedom. These efforts were led by Harvey Dow Gibson, president of the Manufacturer's Trust. Gibson had started the Cranmore Mountain Resort, a ski resort in his home town of North Conway, New Hampshire. Carol Reed ran a ski school in the town (at the time, schools and rentals were often third-party services, as opposed to being owned by the resort itself) and had hired one of Schneider's students, Benno Rybizka, to run it. Gibson bought the school from Reed, moving Reed to a newly formed Saks Fifth Avenue Ski Shop. He then wrote to the German Minister of Finance, Hjalmar Schacht, requesting that Schneider be freed to take the now-vacant lead instructor position. Schacht agreed, and Schneider arrived in the US in 1939. He continued to teach the Arlberg technique personally, while also introducing it at schools across the country.
Basic concepts:
Downhill skiing focusses much of its attention on the development of techniques for smoothly turning the skis. Turning is used both for directional control and as the primary method of controlling speed. When the skier points down the hill, or "along the fall line", they accelerate; pointing the skis across the fall line, or more radically uphill, reduces speed. Using turns, the skier can control how long the skis point down the fall line, and thereby control their speed.
Basic concepts:
Early downhill techniques were based on two approaches: the telemark style and stemming. Over time, the latter became much more popular, and the more athletic telemark has remained a niche technique since the 1900s. Stemming creates turning forces by skidding the edge of the ski over the snow at an angle to the skier's forward movement. The angle between the ski and the motion over the snow creates sideways forces that cause the skier to turn. In general, the skier angles the ski by keeping the tip roughly in line with their shoulders while pushing the tail of the ski out and to the side. The various styles of stemming turns differ primarily in form; in the snowplough, both skis are held at roughly the same angle throughout a run, and the ski on the inside of the desired turn is moved toward the body.
Basic concepts:
The stem, or stem Christie, turn is similar, but the skis are kept parallel when they are not being turned, and the ski on the outside of the turn is pushed away from the body to initiate the turn (stemming). This is sometimes known as the "wedge Christie".
Basic concepts:
Further refinement of the basic Christie turn comes through "weighting": moving the skis into the turn by shifting weight from one ski to the other, as opposed to pushing the skis directly. The Arlberg technique is based on the similarity of these concepts, introducing each stage as a series of modifications of the previous one. The snowplough is typically introduced to beginners by having them move their legs to produce a "pizza slice" shape, tips together and tails apart. Speed along the fall line can be controlled by adjusting the angle of the slice; with the tails far apart, more drag is created, slowing the skier. Turns are accomplished through brute force: the skier rotates the ski on the inside of the turn so it moves inward, through sideways pressure from the leg and rotation of the foot.
Basic concepts:
As the skier gains confidence and speed, the angle of the snowplough is reduced until the skis lie parallel to each other. At this point turning is initiated not by moving the inside ski toward the body, but by moving the outside ski outward. This is the classic "stemming" motion, developing directly from the snowplough. Christie turning is essentially a technique for stemming easily, an active method that involves motion of the upper body, hips and knees.
Later developments:
The Arlberg technique remained essentially unchanged into the 1960s. This was due largely to the limitations of the equipment of the era. Ski boots were stiff only in the sole and offered little or no lateral support above the ski; moving the legs to the side would simply bend the upper portion of the boot rather than transmit the force to the ski. The only forces that could be transmitted were those parallel to the top of the ski (or more specifically, the bottom of the boot), namely rotating the toe in or out, or pushing the entire foot to one side or the other. As the ski can only be pushed inward until it meets the other ski, most control movements were accomplished by pushing the skis outward to the sides: the stemming motion.
Later developments:
During the 1950s and 60s, several developments in downhill ski equipment dramatically changed the sport. These changes were first introduced by the Head Standard ski, the Look Nevada ski binding, and the Lange and Rosemount plastic ski boots. Each of these further, and dramatically, improved the ability to transmit rotational forces to the ski, and from the ski to the snow. This allowed the ski to be turned by directly rotating it onto its edge, exposing the curved sidecut to the snow, bending the ski into an arc, and causing it to naturally turn along that arced path. The parallel turn developed from what was essentially a weight-balanced version of the stem Christie into a much less athletic version, today known as carving.
Later developments:
Early versions of the parallel turn can be taught as modifications of the stem Christie, and this became a popular addition to the Arlberg technique through the 1960s and especially the 1970s. By the late 1970s, the upper echelon of ski technique was based on a series of short, rapid parallel turns with the upper body remaining aligned down the fall line as long as possible, similar to modern mogul skiing technique. Turn initiation was based on weighting and rotating the ski, as in carving, but the power of the turn remained in the skidding. However, as equipment continued to improve, especially with the introduction of "parabolic" skis in the 1990s (today known as "shaped" skis), the ratio of skidding to carving continued to change, and techniques along with it. Modern technique is based largely on carving, adding skidding only as needed to tighten the turn.
Later developments:
Modern skis make carving turns so simple that the Arlberg technique of gradual progression is no longer universal. Many ski schools graduate advancing students directly from the snowplough to the carving turn. These are taught as two entirely separate techniques, one using stemming and the other using the movement of the knees, so the progressive connection emphasized in the Arlberg technique is no longer maintained. This is by no means universal, and many schools continue to follow the classic Arlberg progression.
**Sencha**
Sencha:
Sencha (煎茶, lit. 'infused tea') is a type of Japanese ryokucha (緑茶, green tea) which is prepared by infusing the processed whole tea leaves in hot water. This is as opposed to matcha (抹茶), powdered Japanese green tea, where the green tea powder is mixed with hot water and therefore the leaf itself is included in the beverage. Sencha is the most popular tea in Japan.
Overview:
Among the types of Japanese green tea prepared by infusion, sencha is distinguished from such specific types as gyokuro, which is shaded for a longer time or sencha's shorter-to-no shading, and from bancha, which is the same tea but harvested later in the season. It is the most popular tea in Japan, representing about 80 percent of the tea produced in the country. The flavour depends upon the season and place where it is produced, but shincha, or 'new tea' from the first flush of the year, is considered the most delicious. Tea-picking in Japan begins in the south, gradually moving north with the spring warmth. During the winter, tea plants store nutrients, and the tender new leaves which sprout in the spring contain these concentrated nutrients; shincha is made from these tender new leaves. The shincha season, depending upon the region of the plantation, runs from early April to late May, centred on the 88th day after Setsubun, which usually falls around February 4, a cross-quarter day traditionally considered the start of spring in Japan. Setsubun or Risshun is the beginning of the sexagenary cycle; by tradition, drinking sencha at this time brings a year of good health. The ideal colour of the sencha beverage is a greenish golden colour. Depending upon the temperature of the water in which it is steeped, the flavour will differ, adding to the appeal of sencha: with cooler water it is relatively mellow, while with hot water it is more astringent. Some varieties expand when steeped to resemble leaf vegetable greens in smell, appearance, and taste.
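As a quick illustration of the traditional date arithmetic (my own example, not from the source): taking the start of spring as February 4 and counting that day as day 1, the 88th day, the famous hachijūhachiya, lands in early May, squarely within the shincha season.

```python
from datetime import date, timedelta

def eighty_eighth_day(year: int) -> date:
    """88th day counted from the start of spring (taken here as February 4)."""
    start_of_spring = date(year, 2, 4)  # day 1 of the count
    return start_of_spring + timedelta(days=87)

print(eighty_eighth_day(2023))  # 2023-05-02, in the shincha season
```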
Overview:
The tea production process by which sencha and other Japanese ryokucha are created differs from that of Chinese green teas, which are initially pan-fired. Japanese green tea is first steamed for between 15 and 20 seconds to prevent oxidization of the leaves. Then the leaves are rolled, shaped, and dried. This step creates the customary thin cylindrical shape of the tea. Finally, the leaves are sorted and divided into different quality groups. The initial steaming step imparts a difference in flavour between Chinese and Japanese green tea, with Japanese green tea having a more vegetal, almost grassy flavour (some taste seaweed-like). Infusions from sencha and other steamed green teas (like most common Japanese green teas) are also greener in colour and slightly more bitter than Chinese-style green teas.
Types:
Jō Sencha (上煎茶), superior sencha
Tokujō Sencha (特上煎茶), extra superior sencha
Hachijūhachiya Sencha (八十八夜煎茶), sencha harvested on the 88th day (literally 'night') after the beginning of spring (Risshun)
Kabuse Sencha or kabusecha (かぶせ茶), covered sencha
Asamushi (浅蒸し), lightly steamed sencha
Chūmushi (中蒸し), middle steamed sencha (30–90 seconds)
Fukamushi (深蒸し) or fukamushicha, deeply steamed sencha (1–2 minutes)
Shincha (新茶) or ichibancha (一番茶), first-picked sencha of the year
Shincha:
Shincha (新茶), 'new tea', represents the first month's harvest of sencha. Basically, it is the same as ichibancha (一番茶), 'the first-picked tea', and is characterized by its fresh aroma and sweetness. Ichibancha distinguishes shincha from both nibancha ('the second-picked tea') and sanbancha ('the third-picked tea'). Use of the term shincha makes emphatically clear that this tea is the year's earliest, the first tea of the season.
Kabusecha:
Kabusecha (冠茶) is sencha grown in the shade to increase amino acids, such as theanine, which contribute to its distinctive flavor. About a week before the tea leaf buds are picked in the spring, the plantation is covered with a screen to cut out direct sunlight. This shading produces a milder tea than standard sencha. The shaded tea known as gyokuro differs from kabusecha in that it is shaded for a longer period: about 20 days. Special nets (kabuse) are hung over the plants to obtain a natural shade without completely blocking out sunlight. Kabusecha has a mellower flavour and more subtle colour than sencha grown in direct sunlight.
Senchadō:
Senchadō (煎茶道 'Way of Sencha') is the formal art of enjoying sencha. Generally it involves the high-grade gyokuro class.
**ICTP Ramanujan Prize**
ICTP Ramanujan Prize:
The DST-ICTP-IMU Ramanujan Prize for Young Mathematicians from Developing Countries is a mathematics prize awarded annually by the International Centre for Theoretical Physics in Italy. The prize is named after the Indian mathematician Srinivasa Ramanujan. It was founded in 2004, and was first awarded in 2005.
The prize is awarded to a researcher from a developing country less than 45 years of age who has conducted outstanding research in a developing country. The prize is supported by the Ministry of Science and Technology (India) and the Norwegian Academy of Science and Letters through the Abel Fund, with the cooperation of the International Mathematical Union.
**Computer-aided architectural design**
Computer-aided architectural design:
Computer-aided architectural design (CAAD) software programs serve as repositories of accurate and comprehensive records of buildings and are used by architects and architectural companies for architectural design and architectural engineering. Because these tasks often involve floor plan design, CAAD software greatly simplifies them. The first such program was created in the 1960s to increase architects' productivity, which at the time was held back by the manual drawing of blueprints. Computer-aided design (CAD) software was what architects originally used, but since CAD could not offer all the tools architects needed to complete a project, CAAD developed as a distinct class of software.
Overview:
All CAD and CAAD systems employ a database with geometric and other properties of objects; they all have some kind of graphical user interface to manipulate a visual representation rather than the database directly; and they are all more or less concerned with assembling designs from standard and non-standard pieces. Currently, the main distinction that causes one to speak of CAAD rather than CAD lies in the domain knowledge (architecture-specific objects, techniques, data, and process support) embedded in the system. A CAAD system differs from other CAD systems in two respects: It has an explicit object database of building parts and construction knowledge.
Overview:
It explicitly supports the creation of architectural objects. In a more general sense, CAAD also refers to the use of any computational technique in the field of architectural design other than architecture-specific software. For example, software specifically developed for the computer animation industry (e.g. Maya and 3DStudio Max) is also used in architectural design. These programs can produce photorealistic 3D renders and animations; nowadays real-time rendering is popular thanks to developments in graphics cards. The exact boundary of what properly belongs to CAAD is not always clear. Specialized software, for example for calculating structures by means of the finite element method, is used in architectural design and in that sense may fall under CAAD. On the other hand, such software is seldom used to create new designs.
Overview:
By 1974, CAAD had become a current term and a common topic in discussions of commercial modernization.
Three-dimensional objects:
CAAD programs involve two types of structure. The first is surface structure, which provides a graphics medium to represent three-dimensional objects using two-dimensional representations, along with algorithms that allow the generation of patterns and their analysis using programmed criteria, and data banks that store information about the problem at hand and the standards and regulations that apply to it. The second is deep structure, which concerns the operations performed by the computer and their natural limitations. Computer hardware and the machine languages it supports make it easy to perform arithmetical operations quickly and accurately, and many layers of symbolic processing can be built on top of these, enabling the functionalities found at the surface.
Advantages:
Another advantage of CAAD is the two-way mapping of activities and functionalities. The two instances of mapping are between the surface structures and the deep structures. These mappings are abstractions introduced in order to discuss the process of design and deployment of CAAD systems. In designing such systems, developers usually consider surface structures; the typical goal is a one-to-one mapping, that is, to develop a computer-based functionality that maps as closely as possible onto a corresponding manual design activity, for example, drafting of stairs, checking spatial conflict between building systems, or generating perspectives from orthogonal views.
Advantages:
Architectural design processes tend to integrate models that were previously isolated. Many different kinds of expert knowledge, tools, visualization techniques, and media have to be combined. The design process covers the complete life cycle of the building: construction, operations, reorganization, and, eventually, destruction. Considering the shared use of digital design tools and the exchange of information and knowledge between designers and across different projects, one can speak of a design continuum.
Advantages:
An architect's work involves mostly visually represented data. Problems are often outlined and dealt with graphically, and only this form of expression serves as a basis for work and discussion. Therefore, the designer should have maximum visual control over the processes taking place within the design continuum. Further questions arise about navigation, associative information access, programming, and communication within very large data sets.
**Striation (geology)**
Striation (geology):
In geology, a striation is a groove, created by a geological process, on the surface of a rock or a mineral.
In structural geology, striations are linear furrows, or linear marks, generated from fault movement. The striation's direction reveals the movement direction in the fault plane.
Similar striations, called glacial striations, can occur in areas subjected to glaciation. Striations can also be caused by underwater landslides.
Striations can also be a growth pattern or mineral habit that looks like a set of hairline grooves, seen on crystal faces of certain minerals. Examples of minerals that can show growth striations include pyrite, feldspar, quartz, tourmaline, chalcocite and sphalerite.
**Slow dance**
Slow dance:
A slow dance is a type of partner dance in which a couple dance slowly, swaying to the music. It is usually done to songs with a very slow beat, typically sentimental ballads. Slow dancing can refer to any slow couple dance (such as certain ballroom dances), but is often associated with a particular, simple style of dance performed by middle school, high school, and college students.
Technique:
When two partners dance together, the male partner typically holds his hands against the sides of the female partner's hips, buttocks, or waist while the female drapes her hands on the male's shoulders. The couple then sways back and forth with the music. Foot movement is minimal, but the pair may use their feet to slowly turn on the spot. Because the dance requires little physical concentration, participants often talk to each other while dancing. Some couples who have a close relationship may dance very closely together, in a "hug-and-sway" fashion.
**Bemiparin sodium**
Bemiparin sodium:
Bemiparin (trade names Ivor and Zibor, among others) is an antithrombotic and belongs to the group of low molecular weight heparins (LMWH).
Medical uses:
Bemiparin is used for the prevention of thromboembolism after surgery, and to prevent blood clotting in the extracorporeal circuit in haemodialysis.
Contraindications:
The medication is contraindicated in patients with a history of heparin-induced thrombocytopenia with or without disseminated intravascular coagulation; acute bleeding or risk of bleeding; injury or surgery of the central nervous system, eyes or ears; severe liver or pancreas impairment; and acute or subacute bacterial endocarditis.
Interactions:
No interaction studies have been conducted. Drugs that are expected to increase the risk of bleeding in combination with bemiparin include other anticoagulants, aspirin and other NSAIDs, antiplatelet drugs, and corticosteroids.
Chemistry:
Like semuloparin, bemiparin is classified as an ultra-LMWH because of its low molecular mass of 3600 g/mol on average. (Enoxaparin has 4500 g/mol.) These heparins have lower anti-thrombin activity than classical LMWHs and act mainly on factor Xa, reducing the risk of bleeding.
**Nicorandil**
Nicorandil:
Nicorandil is a vasodilatory drug used to treat angina.
Nicorandil:
Angina is chest pain that results from episodes of transient myocardial ischemia. This can be caused by diseases such as atherosclerosis, coronary artery disease and aortic stenosis. Angina commonly arises from vasospasm of the coronary arteries. There are multiple mechanisms behind the increased smooth muscle contraction involved in coronary vasospasm, including increased Rho-kinase activity. Increased levels of Rho-kinase inhibit myosin phosphatase activity, leading to increased calcium sensitivity and hypercontraction. Rho-kinase also decreases nitric oxide synthase activity, which reduces nitric oxide concentrations; lower levels of nitric oxide are present in spastic coronary arteries. L-type calcium channel expression increases in spastic vascular smooth muscle cells, which could result in excessive calcium influx and hypercontraction. Nicorandil was patented in 1976 and approved for medical use in 1983.
Side effects:
Side effects listed in the British National Formulary include flushing, palpitations, weakness and vomiting. More recently, perianal, ileal and peristomal ulceration has been reported as a side effect. Anal ulceration is now included in the British National Formulary as a reported side effect. Other side effects include severe migraine, toothache, and nasal congestion.
Mechanism of action:
Nicorandil is an anti-angina medication that has the dual properties of a nitrate and an ATP-sensitive K+ channel agonist. In humans, the nitrate action of nicorandil dilates the large coronary arteries at low plasma concentrations. At high plasma concentrations nicorandil reduces coronary vascular resistance, which is associated with increased opening of ATP-sensitive K+ channels (KATP). Nicorandil stimulates guanylate cyclase to increase formation of cyclic GMP (cGMP). cGMP activates protein kinase G (PKG), which phosphorylates and inhibits the GTPase RhoA and decreases Rho-kinase activity. Reduced Rho-kinase activity permits an increase in myosin phosphatase activity, decreasing the calcium sensitivity of the smooth muscle. PKG also activates the sarcolemmal calcium pump to remove activating calcium, and acts on K+ channels to promote K+ efflux; the ensuing hyperpolarization inhibits voltage-gated calcium channels. Overall, this leads to relaxation of the smooth muscle and coronary vasodilation.
Mechanism of action:
The effect of nicorandil as a vasodilator is mainly attributed to its nitrate property, yet nicorandil is effective in cases where nitrates such as nitroglycerine are not. Studies show that this is due to its KATP channel agonist action, which causes pharmacological preconditioning and provides cardioprotective effects against ischemia. Nicorandil activates KATP channels in the mitochondria of the myocardium, which appear to relay the cardioprotective effects, although the mechanism is still unclear. In experimental animal models of long QT syndrome, nicorandil normalizes the prolonged cardiac action potential duration and the QT interval.
Brand names:
Nicorandil is marketed under the brand names Ikorel (in the United Kingdom, Australia and most of Europe), Angedil (in Romania, Poland), Dancor (in Switzerland), Nikoran, PCA (in India), Aprior (in the Philippines), Nitorubin (in Japan), and Sigmart (in Japan, South Korea, Taiwan and China). Nicorandil is not available in the United States.
Synthesis:
Nicorandil can be prepared by amide formation between nicotinoyl chloride [10400-19-8] and 2-aminoethyl nitrate [646-02-6].
Alternatively, nitration of N-(2-hydroxyethyl)nicotinamide [6265-73-2] with nitric acid gives nicorandil. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
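Both routes converge on the same amide product. As a rough illustration, the sketch below (assuming RDKit is installed) encodes the named reactants and the product as SMILES strings; the SMILES are this editor's renderings of the structures named above, not drawn from the synthesis literature.

```python
# Sketch: represent the reactants of the two routes and the product,
# nicorandil, as SMILES strings and print each molecular formula.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

structures = {
    "nicotinoyl chloride [10400-19-8]": "O=C(Cl)c1cccnc1",
    "2-aminoethyl nitrate [646-02-6]": "NCCO[N+](=O)[O-]",
    "N-(2-hydroxyethyl)nicotinamide [6265-73-2]": "O=C(NCCO)c1cccnc1",
    "nicorandil": "O=C(NCCO[N+](=O)[O-])c1cccnc1",
}

for name, smiles in structures.items():
    mol = Chem.MolFromSmiles(smiles)  # returns None if a SMILES were invalid
    print(name, "->", rdMolDescriptors.CalcMolFormula(mol))
```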
**Joint Level Interface Protocol**
Joint Level Interface Protocol:
The Joint Level Interface Protocol (JLIP) is a video equipment control data standard.
JLIP was JVC's answer to Sony's Control-L (LANC) two-way serial bus. It allows devices to communicate with one another, carrying control signals and exchanging data. JLIP jacks are now fitted to all new JVC camcorders, some older models, some VCRs, and peripheral devices such as JVC's video printer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
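Because JLIP commands travel over an ordinary serial link, a host computer with a suitable adapter can in principle drive a JLIP device. The sketch below uses pyserial; the port name, line settings and frame bytes are placeholders for illustration only, not the documented JLIP command set.

```python
# Hypothetical sketch of sending one command frame to a JLIP device and
# reading the status reply. All byte values and serial settings below are
# placeholders -- consult JVC documentation for the real protocol.
import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1.0) as port:  # settings assumed
    frame = bytes([0xFF, 0xFF, 0x01, 0x00, 0x00, 0x00, 0x00])   # placeholder frame
    port.write(frame)
    reply = port.read(len(frame))  # JLIP is two-way, so a reply is expected
    print("reply:", reply.hex())
```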
**Gait abnormality**
Gait abnormality:
Gait abnormality is a deviation from normal walking (gait). Watching a patient walk is an important part of the neurological examination. Normal gait requires that many systems, including strength, sensation and coordination, function in an integrated fashion. Many common problems in the nervous system and musculoskeletal system will show up in the way a person walks.
Presentation and causes:
Patients with musculoskeletal pain, weakness or limited range of motion often present with conditions such as Trendelenburg's sign, limping, myopathic gait and antalgic gait. Patients who have peripheral neuropathy also experience numbness and tingling in their hands and feet, which can cause ambulation impairment such as trouble climbing stairs or maintaining balance. Gait abnormality is also common in persons with nervous system problems such as cauda equina syndrome, multiple sclerosis, Parkinson's disease, Alzheimer's disease, vitamin B12 deficiency, myasthenia gravis, normal pressure hydrocephalus, and Charcot–Marie–Tooth disease. Research has shown that neurological gait abnormalities are associated with an increased risk of falls in older adults. Orthopedic corrective treatments, such as lower extremity amputation, healed fractures, and arthroplasty (joint replacement), may also result in gait abnormality. Difficulty in ambulation that results from chemotherapy is generally temporary, though recovery times of six months to a year are common. Likewise, difficulty in walking due to arthritis or joint pains (antalgic gait) sometimes resolves spontaneously once the pain is gone. Hemiplegic persons have circumduction gait, in which the affected limb moves through an arc away from the body, and those with cerebral palsy often have scissoring gait. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MacEnhancer**
MacEnhancer:
The MacEnhancer is an expansion box originally developed in 1985 by Microsoft for Apple Computer's original Macintosh. Plugged into either of the Macintosh's serial ports (printer or modem), the MacEnhancer provides IBM-standard printer and serial ports as well as a passthrough for the Mac-standard serial port, for a net gain of three peripheral ports. Along with a provided disk of drivers, this expansion box allows the Macintosh to run a host of printers and other business peripherals not originally supported by Apple.
Background:
Microsoft began producing hardware for Apple with the Z-80 SoftCard, an Apple II processor card, in 1980. The SoftCard was Microsoft's first hardware product. When Apple introduced the first Macintosh in 1984, the only printer it supported was Apple's own ImageWriter, which connects to the Macintosh through a serial interface—the only type of connection this Macintosh offers. This dearth of printer choices led the Macintosh to flounder in the business world, where the IBM PC, and the Apple II before it, achieved widespread adoption owing to their parallel ports, which support a wide variety of printers and other peripherals. To rectify this, in January 1985, Microsoft announced the MacEnhancer, an expansion box for the original Macintosh (retronymically dubbed the Macintosh 128K) and the recently released Macintosh 512K. Microsoft's announcement came on the heels of Apple announcing its Macintosh Office initiative to develop more hardware to make the Macintosh attractive to corporate buyers, which bore the LaserWriter printer.
Specifications:
The MacEnhancer is an expansion box less than 12 inches (30 cm) wide, 4 inches (10 cm) deep, and 1 inch (2.5 cm) high. It connects to either of the Macintosh's RS-422 serial connectors (printer or modem) via a cable with an 8-pin mini-DIN connector on the MacEnhancer side and a DE-9 connector on the Macintosh side. The MacEnhancer has four ports: one Macintosh-standard DE-9 connector (as a passthrough for the occupied modem or printer connector), two IBM-standard DB-25 RS-232 serial ports, and one IBM-standard DB-25 parallel port. Floppy disks accompanying the MacEnhancer provide the user with a utility to control the MacEnhancer, device drivers for numerous contemporary printers, and MacTerminal, a terminal emulator. While the MacEnhancer allows multiple devices to be connected to it, it does not support output to more than one port at a time. The included software utility allows the user to switch the active port.
Release and reception:
The MacEnhancer retailed for US$245 (equivalent to $667 in 2022). Microsoft sold out of its initial production run of 4,000 units in April 1985, contracting the manufacture of another 2,000 units that month. On the release of the Macintosh Plus in 1986, the company had to revise the MacEnhancer slightly to account for a missing power rail on one of its rear serial connectors. David Ushijima of Macworld gave the MacEnhancer a positive review, calling the included software easy to use and the hardware reliable and as broadly compatible as advertised. While he recognized the benefit of having support for different types of printers for different applications (e.g. lower-fidelity dot-matrix printers for graphical work and letter-quality printers for business correspondence), he ultimately dubbed the MacEnhancer an "expensive alternative to plugging and unplugging cables" and only saw real value in the added IBM-standard parallel printer port. Microsoft left the Macintosh hardware market in 1986, selling the hardware and software rights for the MacEnhancer to SoftStyle, a software development company based in Hawaii Kai, Hawaii, that specialized in device drivers. SoftStyle issued another version of the MacEnhancer in late 1986. The box largely remained the same but changed the DE-9 passthrough connector to an 8-pin mini-DIN connector, a style of connector that had become standard for Macintosh peripherals with the release of the Plus. The software also added support for controlling two MacEnhancers plugged into the same Macintosh, effectively giving the Macintosh eight peripheral ports. SoftStyle's MacEnhancer dropped support for the Macintosh 128K because it required versions of the Finder that support HFS (version 5.3 onward). SoftStyle was acquired by Phoenix Technologies in 1988; the latter terminated all of SoftStyle's Macintosh hardware products after the acquisition. Several ex-programmers for SoftStyle formed Momentum, Inc., in Honolulu, Hawaii, which marketed the Momentum Port Juggler, a device that, like the MacEnhancer, offered several serial ports for Macintosh computers. The company fizzled in the late 1990s, after Apple dropped mini-DIN serial ports with the Power Macintosh G3 in 1997. Looking back, Benj Edwards of PC Magazine called the MacEnhancer a "very useful expansion peripheral" and a "lost" hardware product of Microsoft. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Epomediol**
Epomediol:
Epomediol (trade name Clesidren) is a synthetic terpenoid with choleretic effects. It has been used in the symptomatic treatment of itching due to intrahepatic cholestasis of pregnancy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Invasion gene associated RNA (InvR)**
Invasion gene associated RNA (InvR):
Invasion gene associated RNA (also known as InvR) is a small non-coding RNA involved in regulating one of the major outer cell membrane porin proteins in Salmonella species.
InvR was originally predicted by a computational screen of the Salmonella typhimurium genome for novel sRNA genes. This screen identified 46 candidate sRNA genes not conserved in Escherichia coli.
Invasion gene associated RNA (InvR):
The Salmonella virulence factors that facilitate invasion of the host intestinal epithelium are encoded in a ~40 kb region of the Salmonella genome referred to as Salmonella pathogenicity island 1 (SPI-1). The gene encoding InvR is located in this SPI-1 region between two genes called invH and STM2901. InvR appears to be unique to Salmonella species, with no predicted homologues in other Enterobacteriaceae.
Invasion gene associated RNA (InvR):
InvR is ~80 nt long and appears to be independently expressed from its own promoter. Its expression is activated by the transcription factor HilD, and InvR is abundantly expressed during exponential growth. InvR binds the RNA chaperone Hfq in vitro, and Hfq is required for its stability in vivo. In S. typhimurium, InvR has been shown to repress the synthesis of the abundant outer membrane porin protein OmpD. Despite its location in the SPI-1 region, no link has been identified between the function of InvR and the SPI-1-dependent secretion pathway or invasion. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spent mushroom compost**
Spent mushroom compost:
Spent mushroom compost is the residual compost waste generated by the mushroom production industry.
Background:
It is readily available (bagged, at nursery suppliers), and its formulation generally consists of a combination of wheat straw, dried blood, horse manure and ground chalk, composted together. It is an excellent source of humus, although much of its nitrogen content will have been used up by the composting and growing mushrooms. It remains, however, a good source of general nutrients (1-2% N, 0.2% P, 1.3% K, plus a full range of trace elements), as well as a useful soil conditioner. However, due to its chalk content, it may be alkaline; it should not be used on acid-loving plants, nor applied too frequently, as it will raise the soil's pH excessively. Mushroom compost may also contain pesticide residues, particularly organochlorides used against the fungus gnat. If the compost pile was stored outside, it may contain grubs or other insects attracted to decaying matter. Chemicals may also have been used to treat the straw and to sterilize the compost. The organic gardener must therefore be careful regarding the sourcing of mushroom compost; if in doubt, samples can be analyzed for contamination – in the UK, the Department for Environment, Food and Rural Affairs is able to advise regarding this issue.
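To make the quoted nutrient figures concrete, here is a small worked example (a sketch, not an agronomic recommendation) converting those percentages into kilograms of nutrient per tonne of compost; the 1.5% nitrogen value is simply the midpoint of the quoted 1-2% range.

```python
# Rough nutrient mass supplied per tonne of spent mushroom compost, using
# the percentages quoted above (N taken as 1.5%, the midpoint of 1-2%).
content_by_mass = {"N": 0.015, "P": 0.002, "K": 0.013}

for nutrient, fraction in content_by_mass.items():
    kg_per_tonne = 1000 * fraction  # 1 tonne = 1000 kg
    print(f"{nutrient}: ~{kg_per_tonne:.0f} kg per tonne")
# -> N: ~15 kg, P: ~2 kg, K: ~13 kg per tonne
```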
Background:
Commercially available 'spent' mushroom compost is not always truly spent. It is sold by mushroom farms when it is no longer producing commercially viable yields of mushrooms. It can be used to grow further smaller crops of mushrooms before final use on the garden. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ideas on the Nature of Science**
Ideas on the Nature of Science:
Ideas on the Nature of Science is a book by Canadian author and radio producer David Cayley. It is a compilation of his conversations that took place during the CBC Radio series "How to Think About Science" for the program Ideas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Click (acoustics)**
Click (acoustics):
A click is a sonic artifact in sound and music production.
Analog recording artifact:
On magnetic tape recordings, clicks can occur when switching from playback to record in order to correct recording errors, and when recording a track in sections. On phonograph records, clicks are perceived in various ways by the listener, ranging from tiny 'tick' noises, which may occur in any recording medium, through the 'scratch' and 'crackle' noise commonly associated with analog disc recording. Analog clicks can occur due to dirt and dust in the grooves of the vinyl record, granularity in the material used for its manufacture, or damage to the disc from scratches on its surface.
Digital recording artifact:
In digital recording, clicks (not to be confused with the click track) can occur for multiple reasons. When recording through an audio interface, insufficient computer performance or audio driver issues can cause clicks, pops and dropouts; they can also result from an improper clock source or buffer size. Clicks can further be caused by electric devices near the computer or by faulty audio or mains cables. In sample recording, digital clicks occur when the signal levels of two adjacent audio sections do not match: the abrupt change in gain is perceived as a click. In electronic music, clicks are used as a musical element, particularly in glitch and noise music, for example in the Clicks & Cuts Series (2000–2010).
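The level-mismatch click is easy to reproduce. The sketch below (assuming NumPy) butt-splices two sine segments of different amplitudes, creating an audible gain jump at the join, and then removes it with a short crossfade; the 5 ms fade length is an arbitrary illustrative choice.

```python
import numpy as np

fs = 44100                                   # sample rate in Hz
t = np.arange(fs // 10) / fs                 # 100 ms per section
a = 0.8 * np.sin(2 * np.pi * 440 * t)        # first section, louder
b = 0.3 * np.sin(2 * np.pi * 440 * t)        # second section, quieter

# Butt splice: the instantaneous jump in level at the join is heard as a click.
spliced = np.concatenate([a, b])
print("sample-to-sample jump at splice:", abs(spliced[len(a)] - spliced[len(a) - 1]))

# Crossfade: overlap the sections and ramp their gains so the level
# changes smoothly instead of abruptly.
n = int(0.005 * fs)                          # 5 ms crossfade (assumed length)
fade = np.linspace(0.0, 1.0, n)
overlap = a[-n:] * (1.0 - fade) + b[:n] * fade
smooth = np.concatenate([a[:-n], overlap, b[n:]])
```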
Speech noise:
In speech recording, click noises (not to be confused with click consonants) result from tongue movements, swallowing, and mouth and saliva noises. While click noises are undesirable in voice-over recordings, they can be used as a sound effect of close miking in ASMR and pop music, e.g. in Bad Guy (2019) by Billie Eilish.
Click removal:
In audio restoration and audio editing, hardware and software de-clickers provide click-removal (de-clicking) features. A spectrogram can be used to visually detect clicks and crackle for corrective spectral editing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
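Because a click is a broadband transient, it shows up in a spectrogram as a thin vertical stripe of energy across all frequencies. A rough detection sketch follows, assuming SciPy; the 5 kHz cutoff and the threshold rule are illustrative assumptions, not a standard de-clicker algorithm.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 44100
x = 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s test tone
x[fs // 2] += 0.9                                       # inject a single-sample click

f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)

# A click spreads energy across the band, so high-frequency energy per
# frame is a simple tell; the tone itself lives far below 5 kHz.
hf_energy = Sxx[f > 5000].sum(axis=0)
threshold = hf_energy.mean() + 5 * hf_energy.std()      # assumed rule of thumb
print("suspect frames at t =", t[hf_energy > threshold])
```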